Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'timers/urgent' into WIP.timers

Pick up urgent fixes to avoid conflicts.

+5700 -2813
+12 -4
Documentation/acpi/acpi-lid.txt
··· 59 59 If the userspace hasn't been prepared to ignore the unreliable "opened" 60 60 events and the unreliable initial state notification, Linux users can use 61 61 the following kernel parameters to handle the possible issues: 62 - A. button.lid_init_state=open: 62 + A. button.lid_init_state=method: 63 + When this option is specified, the ACPI button driver reports the 64 + initial lid state using the return value of the _LID control method 65 + and whether the "opened"/"closed" events are paired fully relies on the 66 + firmware implementation. 67 + This option can be used to fix some platforms where the return value 68 + of the _LID control method is reliable but the initial lid state 69 + notification is missing. 70 + This option is the default behavior during the period the userspace 71 + isn't ready to handle the buggy AML tables. 72 + B. button.lid_init_state=open: 63 73 When this option is specified, the ACPI button driver always reports the 64 74 initial lid state as "opened" and whether the "opened"/"closed" events 65 75 are paired fully relies on the firmware implementation. 66 76 This may fix some platforms where the return value of the _LID 67 77 control method is not reliable and the initial lid state notification is 68 78 missing. 69 - This option is the default behavior during the period the userspace 70 - isn't ready to handle the buggy AML tables. 71 79 72 80 If the userspace has been prepared to ignore the unreliable "opened" events 73 81 and the unreliable initial state notification, Linux users should always 74 82 use the following kernel parameter: 75 - B. button.lid_init_state=ignore: 83 + C. button.lid_init_state=ignore: 76 84 When this option is specified, the ACPI button driver never reports the 77 85 initial lid state and there is a compensation mechanism implemented to 78 86 ensure that the reliable "closed" notifications can always be delivered
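A minimal usage sketch for the options above, assuming a GRUB-based setup and the standard module-parameter path in sysfs (both are illustrative choices, not mandated by the document):

```shell
# Pick the desired behavior on the kernel command line, e.g. in the
# bootloader configuration (GRUB shown as an illustrative example):
#   GRUB_CMDLINE_LINUX="... button.lid_init_state=ignore"
# then regenerate the GRUB configuration and reboot.

# On a running system, the active setting of the ACPI button driver's
# parameter can be inspected through sysfs:
cat /sys/module/button/parameters/lid_init_state
```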
+10 -9
Documentation/admin-guide/pm/cpufreq.rst
··· 1 1 .. |struct cpufreq_policy| replace:: :c:type:`struct cpufreq_policy <cpufreq_policy>` 2 + .. |intel_pstate| replace:: :doc:`intel_pstate <intel_pstate>` 2 3 3 4 ======================= 4 5 CPU Performance Scaling ··· 76 75 interface it comes from and may not be easily represented in an abstract, 77 76 platform-independent way. For this reason, ``CPUFreq`` allows scaling drivers 78 77 to bypass the governor layer and implement their own performance scaling 79 - algorithms. That is done by the ``intel_pstate`` scaling driver. 78 + algorithms. That is done by the |intel_pstate| scaling driver. 80 79 81 80 82 81 ``CPUFreq`` Policy Objects ··· 175 174 into account. That is achieved by invoking the governor's ``->stop`` and 176 175 ``->start()`` callbacks, in this order, for the entire policy. 177 176 178 - As mentioned before, the ``intel_pstate`` scaling driver bypasses the scaling 177 + As mentioned before, the |intel_pstate| scaling driver bypasses the scaling 179 178 governor layer of ``CPUFreq`` and provides its own P-state selection algorithms. 180 - Consequently, if ``intel_pstate`` is used, scaling governors are not attached to 179 + Consequently, if |intel_pstate| is used, scaling governors are not attached to 181 180 new policy objects. Instead, the driver's ``->setpolicy()`` callback is invoked 182 181 to register per-CPU utilization update callbacks for each policy. These 183 182 callbacks are invoked by the CPU scheduler in the same way as for scaling 184 - governors, but in the ``intel_pstate`` case they both determine the P-state to 183 + governors, but in the |intel_pstate| case they both determine the P-state to 185 184 use and change the hardware configuration accordingly in one go from scheduler 186 185 context. 
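As a usage sketch of the policy attributes discussed here (standard per-CPU ``CPUFreq`` sysfs paths; root privileges assumed for the write):

```shell
# Per-policy CPUFreq attributes live under each CPU's cpufreq directory.
cd /sys/devices/system/cpu/cpu0/cpufreq

cat scaling_driver                 # scaling driver in use
cat scaling_available_governors    # governors (or, with intel_pstate in
                                   # the active mode, driver algorithms)
echo powersave > scaling_governor  # attach one of the listed names
cat scaling_cur_freq               # current frequency estimate, in kHz
```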
187 186 ··· 258 257 259 258 ``scaling_available_governors`` 260 259 List of ``CPUFreq`` scaling governors present in the kernel that can 261 - be attached to this policy or (if the ``intel_pstate`` scaling driver is 260 + be attached to this policy or (if the |intel_pstate| scaling driver is 262 261 in use) list of scaling algorithms provided by the driver that can be 263 262 applied to this policy. 264 263 ··· 275 274 the CPU is actually running at (due to hardware design and other 276 275 limitations). 277 276 278 - Some scaling drivers (e.g. ``intel_pstate``) attempt to provide 277 + Some scaling drivers (e.g. |intel_pstate|) attempt to provide 279 278 information more precisely reflecting the current CPU frequency through 280 279 this attribute, but that still may not be the exact current CPU 281 280 frequency as seen by the hardware at the moment. ··· 285 284 286 285 ``scaling_governor`` 287 286 The scaling governor currently attached to this policy or (if the 288 - ``intel_pstate`` scaling driver is in use) the scaling algorithm 287 + |intel_pstate| scaling driver is in use) the scaling algorithm 289 288 provided by the driver that is currently applied to this policy. 290 289 291 290 This attribute is read-write and writing to it will cause a new scaling 292 291 governor to be attached to this policy or a new scaling algorithm 293 292 provided by the scaling driver to be applied to it (in the 294 - ``intel_pstate`` case), as indicated by the string written to this 293 + |intel_pstate| case), as indicated by the string written to this 295 294 attribute (which must be one of the names listed by the 296 295 ``scaling_available_governors`` attribute described above). 297 296 ··· 620 619 the "boost" setting for the whole system. It is not present if the underlying 621 620 scaling driver does not support the frequency boost mechanism (or supports it, 622 621 but provides a driver-specific interface for controlling it, like 623 - ``intel_pstate``). 
622 + |intel_pstate|). 624 623 625 624 If the value in this file is 1, the frequency boost mechanism is enabled. This 626 625 means that either the hardware can be put into states in which it is able to
+1
Documentation/admin-guide/pm/index.rst
··· 6 6 :maxdepth: 2 7 7 8 8 cpufreq 9 + intel_pstate 9 10 10 11 .. only:: subproject and html 11 12
+755
Documentation/admin-guide/pm/intel_pstate.rst
··· 1 + =============================================== 2 + ``intel_pstate`` CPU Performance Scaling Driver 3 + =============================================== 4 + 5 + :: 6 + 7 + Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> 8 + 9 + 10 + General Information 11 + =================== 12 + 13 + ``intel_pstate`` is a part of the 14 + :doc:`CPU performance scaling subsystem <cpufreq>` in the Linux kernel 15 + (``CPUFreq``). It is a scaling driver for the Sandy Bridge and later 16 + generations of Intel processors. Note, however, that some of those processors 17 + may not be supported. [To understand ``intel_pstate`` it is necessary to know 18 + how ``CPUFreq`` works in general, so this is the time to read :doc:`cpufreq` if 19 + you have not done that yet.] 20 + 21 + For the processors supported by ``intel_pstate``, the P-state concept is broader 22 + than just an operating frequency or an operating performance point (see the 23 + `LinuxCon Europe 2015 presentation by Kristen Accardi <LCEU2015_>`_ for more 24 + information about that). For this reason, the representation of P-states used 25 + by ``intel_pstate`` internally follows the hardware specification (for details 26 + refer to `Intel® 64 and IA-32 Architectures Software Developer’s Manual 27 + Volume 3: System Programming Guide <SDM_>`_). However, the ``CPUFreq`` core 28 + uses frequencies for identifying operating performance points of CPUs and 29 + frequencies are involved in the user space interface exposed by it, so 30 + ``intel_pstate`` maps its internal representation of P-states to frequencies too 31 + (fortunately, that mapping is unambiguous). At the same time, it would not be 32 + practical for ``intel_pstate`` to supply the ``CPUFreq`` core with a table of 33 + available frequencies due to the possible size of it, so the driver does not do 34 + that. Some functionality of the core is limited by that. 
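The unambiguous mapping between internal P-states and frequencies mentioned above can be illustrated with a small calculation. On many supported processors the internal P-state value is a ratio of a 100 MHz base clock, so the driver applies a fixed per-model scaling factor; the 100000 kHz factor below is an assumption for illustration only:

```shell
# Hypothetical conversion between the driver's internal P-state
# representation and the frequencies reported to the CPUFreq core.
pstate=24        # internal P-state (frequency ratio), illustrative value
scaling=100000   # kHz per ratio step, processor-specific assumption

freq_khz=$(( pstate * scaling ))
echo "${freq_khz} kHz"    # 2400000 kHz, i.e. 2.4 GHz
```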
35 + 36 + Since the hardware P-state selection interface used by ``intel_pstate`` is 37 + available at the logical CPU level, the driver always works with individual 38 + CPUs. Consequently, if ``intel_pstate`` is in use, every ``CPUFreq`` policy 39 + object corresponds to one logical CPU and ``CPUFreq`` policies are effectively 40 + equivalent to CPUs. In particular, this means that they become "inactive" every 41 + time the corresponding CPU is taken offline and need to be re-initialized when 42 + it goes back online. 43 + 44 + ``intel_pstate`` is not modular, so it cannot be unloaded, which means that the 45 + only way to pass early-configuration-time parameters to it is via the kernel 46 + command line. However, its configuration can be adjusted via ``sysfs`` to a 47 + great extent. In some configurations it even is possible to unregister it via 48 + ``sysfs`` which allows another ``CPUFreq`` scaling driver to be loaded and 49 + registered (see `below <status_attr_>`_). 50 + 51 + 52 + Operation Modes 53 + =============== 54 + 55 + ``intel_pstate`` can operate in three different modes: in the active mode with 56 + or without hardware-managed P-states support and in the passive mode. Which of 57 + them will be in effect depends on what kernel command line options are used and 58 + on the capabilities of the processor. 59 + 60 + Active Mode 61 + ----------- 62 + 63 + This is the default operation mode of ``intel_pstate``. If it works in this 64 + mode, the ``scaling_driver`` policy attribute in ``sysfs`` for all ``CPUFreq`` 65 + policies contains the string "intel_pstate". 66 + 67 + In this mode the driver bypasses the scaling governors layer of ``CPUFreq`` and 68 + provides its own scaling algorithms for P-state selection. Those algorithms 69 + can be applied to ``CPUFreq`` policies in the same way as generic scaling 70 + governors (that is, through the ``scaling_governor`` policy attribute in 71 + ``sysfs``). 
[Note that different P-state selection algorithms may be chosen for 72 + different policies, but that is not recommended.] 73 + 74 + They are not generic scaling governors, but their names are the same as the 75 + names of some of those governors. Moreover, confusingly enough, they generally 76 + do not work in the same way as the generic governors they share the names with. 77 + For example, the ``powersave`` P-state selection algorithm provided by 78 + ``intel_pstate`` is not a counterpart of the generic ``powersave`` governor 79 + (roughly, it corresponds to the ``schedutil`` and ``ondemand`` governors). 80 + 81 + There are two P-state selection algorithms provided by ``intel_pstate`` in the 82 + active mode: ``powersave`` and ``performance``. The way they both operate 83 + depends on whether or not the hardware-managed P-states (HWP) feature has been 84 + enabled in the processor and possibly on the processor model. 85 + 86 + Which of the P-state selection algorithms is used by default depends on the 87 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option. 88 + Namely, if that option is set, the ``performance`` algorithm will be used by 89 + default, and the other one will be used by default if it is not set. 90 + 91 + Active Mode With HWP 92 + ~~~~~~~~~~~~~~~~~~~~ 93 + 94 + If the processor supports the HWP feature, it will be enabled during the 95 + processor initialization and cannot be disabled after that. It is possible 96 + to avoid enabling it by passing the ``intel_pstate=no_hwp`` argument to the 97 + kernel in the command line. 98 + 99 + If the HWP feature has been enabled, ``intel_pstate`` relies on the processor to 100 + select P-states by itself, but still it can give hints to the processor's 101 + internal P-state selection logic. What those hints are depends on which P-state 102 + selection algorithm has been applied to the given policy (or to the CPU it 103 + corresponds to). 
104 + 105 + Even though the P-state selection is carried out by the processor automatically, 106 + ``intel_pstate`` registers utilization update callbacks with the CPU scheduler 107 + in this mode. However, they are not used for running a P-state selection 108 + algorithm, but for periodic updates of the current CPU frequency information to 109 + be made available from the ``scaling_cur_freq`` policy attribute in ``sysfs``. 110 + 111 + HWP + ``performance`` 112 + ..................... 113 + 114 + In this configuration ``intel_pstate`` will write 0 to the processor's 115 + Energy-Performance Preference (EPP) knob (if supported) or its 116 + Energy-Performance Bias (EPB) knob (otherwise), which means that the processor's 117 + internal P-state selection logic is expected to focus entirely on performance. 118 + 119 + This will override the EPP/EPB setting coming from the ``sysfs`` interface 120 + (see `Energy vs Performance Hints`_ below). 121 + 122 + Also, in this configuration the range of P-states available to the processor's 123 + internal P-state selection logic is always restricted to the upper boundary 124 + (that is, the maximum P-state that the driver is allowed to use). 125 + 126 + HWP + ``powersave`` 127 + ................... 128 + 129 + In this configuration ``intel_pstate`` will set the processor's 130 + Energy-Performance Preference (EPP) knob (if supported) or its 131 + Energy-Performance Bias (EPB) knob (otherwise) to whatever value it was 132 + previously set to via ``sysfs`` (or whatever default value it was 133 + set to by the platform firmware). This usually causes the processor's 134 + internal P-state selection logic to be less performance-focused. 135 + 136 + Active Mode Without HWP 137 + ~~~~~~~~~~~~~~~~~~~~~~~ 138 + 139 + This is the default operation mode for processors that do not support the HWP 140 + feature. It also is used by default with the ``intel_pstate=no_hwp`` argument 141 + in the kernel command line. 
However, in this mode ``intel_pstate`` may refuse 142 + to work with the given processor if it does not recognize it. [Note that 143 + ``intel_pstate`` will never refuse to work with any processor with the HWP 144 + feature enabled.] 145 + 146 + In this mode ``intel_pstate`` registers utilization update callbacks with the 147 + CPU scheduler in order to run a P-state selection algorithm, either 148 + ``powersave`` or ``performance``, depending on the ``scaling_governor`` policy 149 + setting in ``sysfs``. The current CPU frequency information to be made 150 + available from the ``scaling_cur_freq`` policy attribute in ``sysfs`` is 151 + periodically updated by those utilization update callbacks too. 152 + 153 + ``performance`` 154 + ............... 155 + 156 + Without HWP, this P-state selection algorithm is always the same regardless of 157 + the processor model and platform configuration. 158 + 159 + It selects the maximum P-state it is allowed to use, subject to limits set via 160 + ``sysfs``, every time the P-state selection computations are carried out by the 161 + driver's utilization update callback for the given CPU (that does not happen 162 + more often than every 10 ms), but the hardware configuration will not be changed 163 + if the new P-state is the same as the current one. 164 + 165 + This is the default P-state selection algorithm if the 166 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option 167 + is set. 168 + 169 + ``powersave`` 170 + ............. 171 + 172 + Without HWP, this P-state selection algorithm generally depends on the 173 + processor model and/or the system profile setting in the ACPI tables and there 174 + are two variants of it. 175 + 176 + One of them is used with processors from the Atom line and (regardless of the 177 + processor model) on platforms with the system profile in the ACPI tables set to 178 + "mobile" (laptops mostly), "tablet", "appliance PC", "desktop", or 179 + "workstation". 
It is also used with processors supporting the HWP feature if 180 + that feature has not been enabled (that is, with the ``intel_pstate=no_hwp`` 181 + argument in the kernel command line). It is similar to the algorithm 182 + implemented by the generic ``schedutil`` scaling governor except that the 183 + utilization metric used by it is based on numbers coming from feedback 184 + registers of the CPU. It generally selects P-states proportional to the 185 + current CPU utilization, so it is referred to as the "proportional" algorithm. 186 + 187 + The second variant of the ``powersave`` P-state selection algorithm, used in all 188 + of the other cases (generally, on processors from the Core line, so it is 189 + referred to as the "Core" algorithm), is based on the values read from the APERF 190 + and MPERF feedback registers and the previously requested target P-state. 191 + It does not really take CPU utilization into account explicitly, but as a rule 192 + it causes the CPU P-state to ramp up very quickly in response to increased 193 + utilization which is generally desirable in server environments. 194 + 195 + Regardless of the variant, this algorithm is run by the driver's utilization 196 + update callback for the given CPU when it is invoked by the CPU scheduler, but 197 + not more often than every 10 ms (that can be tweaked via ``debugfs`` in `this 198 + particular case <Tuning Interface in debugfs_>`_). Like in the ``performance`` 199 + case, the hardware configuration is not touched if the new P-state turns out to 200 + be the same as the current one. 201 + 202 + This is the default P-state selection algorithm if the 203 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option 204 + is not set. 205 + 206 + Passive Mode 207 + ------------ 208 + 209 + This mode is used if the ``intel_pstate=passive`` argument is passed to the 210 + kernel in the command line (it implies the ``intel_pstate=no_hwp`` setting too). 
211 + Like in the active mode without HWP support, in this mode ``intel_pstate`` may 212 + refuse to work with the given processor if it does not recognize it. 213 + 214 + If the driver works in this mode, the ``scaling_driver`` policy attribute in 215 + ``sysfs`` for all ``CPUFreq`` policies contains the string "intel_cpufreq". 216 + Then, the driver behaves like a regular ``CPUFreq`` scaling driver. That is, 217 + it is invoked by generic scaling governors when necessary to talk to the 218 + hardware in order to change the P-state of a CPU (in particular, the 219 + ``schedutil`` governor can invoke it directly from scheduler context). 220 + 221 + While in this mode, ``intel_pstate`` can be used with all of the (generic) 222 + scaling governors listed by the ``scaling_available_governors`` policy attribute 223 + in ``sysfs`` (and the P-state selection algorithms described above are not 224 + used). Then, it is responsible for the configuration of policy objects 225 + corresponding to CPUs and provides the ``CPUFreq`` core (and the scaling 226 + governors attached to the policy objects) with accurate information on the 227 + maximum and minimum operating frequencies supported by the hardware (including 228 + the so-called "turbo" frequency ranges). In other words, in the passive mode 229 + the entire range of available P-states is exposed by ``intel_pstate`` to the 230 + ``CPUFreq`` core. However, in this mode the driver does not register 231 + utilization update callbacks with the CPU scheduler and the ``scaling_cur_freq`` 232 + information comes from the ``CPUFreq`` core (and is the last frequency selected 233 + by the current scaling governor for the given policy). 234 + 235 + 236 + .. 
_turbo: 237 + 238 + Turbo P-states Support 239 + ====================== 240 + 241 + In the majority of cases, the entire range of P-states available to 242 + ``intel_pstate`` can be divided into two sub-ranges that correspond to 243 + different types of processor behavior, above and below a boundary that 244 + will be referred to as the "turbo threshold" in what follows. 245 + 246 + The P-states above the turbo threshold are referred to as "turbo P-states" and 247 + the whole sub-range of P-states they belong to is referred to as the "turbo 248 + range". These names are related to the Turbo Boost technology allowing a 249 + multicore processor to opportunistically increase the P-state of one or more 250 + cores if there is enough power to do that and if that is not going to cause the 251 + thermal envelope of the processor package to be exceeded. 252 + 253 + Specifically, if software sets the P-state of a CPU core within the turbo range 254 + (that is, above the turbo threshold), the processor is permitted to take over 255 + performance scaling control for that core and put it into turbo P-states of its 256 + choice going forward. However, that permission is interpreted differently by 257 + different processor generations. Namely, the Sandy Bridge generation of 258 + processors will never use any P-states above the last one set by software for 259 + the given core, even if it is within the turbo range, whereas all of the later 260 + processor generations will take it as a license to use any P-states from the 261 + turbo range, even above the one set by software. In other words, on those 262 + processors setting any P-state from the turbo range will enable the processor 263 + to put the given core into all turbo P-states up to and including the maximum 264 + supported one as it sees fit. 265 + 266 + One important property of turbo P-states is that they are not sustainable. 
More 267 + precisely, there is no guarantee that any CPUs will be able to stay in any of 268 + those states indefinitely, because the power distribution within the processor 269 + package may change over time or the thermal envelope it was designed for might 270 + be exceeded if a turbo P-state was used for too long. 271 + 272 + In turn, the P-states below the turbo threshold generally are sustainable. In 273 + fact, if one of them is set by software, the processor is not expected to change 274 + it to a lower one unless in a thermal stress or a power limit violation 275 + situation (a higher P-state may still be used if it is set for another CPU in 276 + the same package at the same time, for example). 277 + 278 + Some processors allow multiple cores to be in turbo P-states at the same time, 279 + but the maximum P-state that can be set for them generally depends on the number 280 + of cores running concurrently. The maximum turbo P-state that can be set for 3 281 + cores at the same time usually is lower than the analogous maximum P-state for 282 + 2 cores, which in turn usually is lower than the maximum turbo P-state that can 283 + be set for 1 core. The one-core maximum turbo P-state is thus the maximum 284 + supported one overall. 285 + 286 + The maximum supported turbo P-state, the turbo threshold (the maximum supported 287 + non-turbo P-state) and the minimum supported P-state are specific to the 288 + processor model and can be determined by reading the processor's model-specific 289 + registers (MSRs). Moreover, some processors support the Configurable TDP 290 + (Thermal Design Power) feature and, when that feature is enabled, the turbo 291 + threshold effectively becomes a configurable value that can be set by the 292 + platform firmware. 
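As a worked example of the ranges described above, with hypothetical per-model values standing in for what would really be read from the processor's MSRs:

```shell
# Illustrative P-state range for an imaginary processor model.
min_pstate=8         # minimum supported P-state
max_nonturbo=24      # turbo threshold (maximum non-turbo P-state)
max_turbo=37         # maximum supported one-core turbo P-state

turbo_states=$(( max_turbo - max_nonturbo ))    # size of the turbo range
total_states=$(( max_turbo - min_pstate + 1 ))  # entire supported range

# Rough share of the whole range taken up by turbo P-states, in percent:
turbo_share=$(( 100 * turbo_states / total_states ))
echo "${turbo_share}%"
```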
293 + 294 + Unlike ``_PSS`` objects in the ACPI tables, ``intel_pstate`` always exposes 295 + the entire range of available P-states, including the whole turbo range, to the 296 + ``CPUFreq`` core and (in the passive mode) to generic scaling governors. This 297 + generally causes turbo P-states to be set more often when ``intel_pstate`` is 298 + used relative to ACPI-based CPU performance scaling (see `below <acpi-cpufreq_>`_ 299 + for more information). 300 + 301 + Moreover, since ``intel_pstate`` always knows what the real turbo threshold is 302 + (even if the Configurable TDP feature is enabled in the processor), its 303 + ``no_turbo`` attribute in ``sysfs`` (described `below <no_turbo_attr_>`_) should 304 + work as expected in all cases (that is, if set to disable turbo P-states, it 305 + always should prevent ``intel_pstate`` from using them). 306 + 307 + 308 + Processor Support 309 + ================= 310 + 311 + To handle a given processor ``intel_pstate`` requires a number of different 312 + pieces of information on it to be known, including: 313 + 314 + * The minimum supported P-state. 315 + 316 + * The maximum supported `non-turbo P-state <turbo_>`_. 317 + 318 + * Whether or not turbo P-states are supported at all. 319 + 320 + * The maximum supported `one-core turbo P-state <turbo_>`_ (if turbo P-states 321 + are supported). 322 + 323 + * The scaling formula to translate the driver's internal representation 324 + of P-states into frequencies and the other way around. 325 + 326 + Generally, ways to obtain that information are specific to the processor model 327 + or family. Although it often is possible to obtain all of it from the processor 328 + itself (using model-specific registers), there are cases in which hardware 329 + manuals need to be consulted to get to it too. 
330 + 331 + For this reason, there is a list of supported processors in ``intel_pstate`` and 332 + the driver initialization will fail if the detected processor is not in that 333 + list, unless it supports the `HWP feature <Active Mode_>`_. [The interface to 334 + obtain all of the information listed above is the same for all of the processors 335 + supporting the HWP feature, which is why they all are supported by 336 + ``intel_pstate``.] 337 + 338 + 339 + User Space Interface in ``sysfs`` 340 + ================================= 341 + 342 + Global Attributes 343 + ----------------- 344 + 345 + ``intel_pstate`` exposes several global attributes (files) in ``sysfs`` to 346 + control its functionality at the system level. They are located in the 347 + ``/sys/devices/system/cpu/cpufreq/intel_pstate/`` directory and affect all 348 + CPUs. 349 + 350 + Some of them are not present if the ``intel_pstate=per_cpu_perf_limits`` 351 + argument is passed to the kernel in the command line. 352 + 353 + ``max_perf_pct`` 354 + Maximum P-state the driver is allowed to set in percent of the 355 + maximum supported performance level (the highest supported `turbo 356 + P-state <turbo_>`_). 357 + 358 + This attribute will not be exposed if the 359 + ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel 360 + command line. 361 + 362 + ``min_perf_pct`` 363 + Minimum P-state the driver is allowed to set in percent of the 364 + maximum supported performance level (the highest supported `turbo 365 + P-state <turbo_>`_). 366 + 367 + This attribute will not be exposed if the 368 + ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel 369 + command line. 370 + 371 + ``num_pstates`` 372 + Number of P-states supported by the processor (between 0 and 255 373 + inclusive) including both turbo and non-turbo P-states (see 374 + `Turbo P-states Support`_). 
375 + 376 + The value of this attribute is not affected by the ``no_turbo`` 377 + setting described `below <no_turbo_attr_>`_. 378 + 379 + This attribute is read-only. 380 + 381 + ``turbo_pct`` 382 + Ratio of the `turbo range <turbo_>`_ size to the size of the entire 383 + range of supported P-states, in percent. 384 + 385 + This attribute is read-only. 386 + 387 + .. _no_turbo_attr: 388 + 389 + ``no_turbo`` 390 + If set (equal to 1), the driver is not allowed to set any turbo P-states 391 + (see `Turbo P-states Support`_). If unset (equal to 0, which is the 392 + default), turbo P-states can be set by the driver. 393 + [Note that ``intel_pstate`` does not support the general ``boost`` 394 + attribute (supported by some other scaling drivers) which is replaced 395 + by this one.] 396 + 397 + This attribute does not affect the maximum supported frequency value 398 + supplied to the ``CPUFreq`` core and exposed via the policy interface, 399 + but it affects the maximum possible value of per-policy P-state limits 400 + (see `Interpretation of Policy Attributes`_ below for details). 401 + 402 + .. _status_attr: 403 + 404 + ``status`` 405 + Operation mode of the driver: "active", "passive" or "off". 406 + 407 + "active" 408 + The driver is functional and in the `active mode 409 + <Active Mode_>`_. 410 + 411 + "passive" 412 + The driver is functional and in the `passive mode 413 + <Passive Mode_>`_. 414 + 415 + "off" 416 + The driver is not functional (it is not registered as a scaling 417 + driver with the ``CPUFreq`` core). 418 + 419 + This attribute can be written to in order to change the driver's 420 + operation mode or to unregister it. The string written to it must be 421 + one of the possible values of it and, if successful, the write will 422 + cause the driver to switch over to the operation mode represented by 423 + that string - or to be unregistered in the "off" case. 
[Actually, 424 + switching over from the active mode to the passive mode or the other 425 + way around causes the driver to be unregistered and registered again 426 + with a different set of callbacks, so all of its settings (the global 427 + as well as the per-policy ones) are then reset to their default 428 + values, possibly depending on the target operation mode.] 429 + 430 + That only is supported in some configurations, though (for example, if 431 + the `HWP feature is enabled in the processor <Active Mode With HWP_>`_, 432 + the operation mode of the driver cannot be changed), and if it is not 433 + supported in the current configuration, writes to this attribute will 434 + fail with an appropriate error. 435 + 436 + Interpretation of Policy Attributes 437 + ----------------------------------- 438 + 439 + The interpretation of some ``CPUFreq`` policy attributes described in 440 + :doc:`cpufreq` is special with ``intel_pstate`` as the current scaling driver 441 + and it generally depends on the driver's `operation mode <Operation Modes_>`_. 442 + 443 + First of all, the values of the ``cpuinfo_max_freq``, ``cpuinfo_min_freq`` and 444 + ``scaling_cur_freq`` attributes are produced by applying a processor-specific 445 + multiplier to the internal P-state representation used by ``intel_pstate``. 446 + Also, the values of the ``scaling_max_freq`` and ``scaling_min_freq`` 447 + attributes are capped by the frequency corresponding to the maximum P-state that 448 + the driver is allowed to set. 449 + 450 + If the ``no_turbo`` `global attribute <no_turbo_attr_>`_ is set, the driver is 451 + not allowed to use turbo P-states, so the maximum value of ``scaling_max_freq`` 452 + and ``scaling_min_freq`` is limited to the maximum non-turbo P-state frequency. 453 + Accordingly, setting ``no_turbo`` causes ``scaling_max_freq`` and 454 + ``scaling_min_freq`` to go down to that value if they were above it before. 
455 + However, the old values of ``scaling_max_freq`` and ``scaling_min_freq`` will be 456 + restored after unsetting ``no_turbo``, unless these attributes have been written 457 + to after ``no_turbo`` was set. 458 + 459 + If ``no_turbo`` is not set, the maximum possible value of ``scaling_max_freq`` 460 + and ``scaling_min_freq`` corresponds to the maximum supported turbo P-state, 461 + which also is the value of ``cpuinfo_max_freq`` in either case. 462 + 463 + Next, the following policy attributes have special meaning if 464 + ``intel_pstate`` works in the `active mode <Active Mode_>`_: 465 + 466 + ``scaling_available_governors`` 467 + List of P-state selection algorithms provided by ``intel_pstate``. 468 + 469 + ``scaling_governor`` 470 + P-state selection algorithm provided by ``intel_pstate`` currently in 471 + use with the given policy. 472 + 473 + ``scaling_cur_freq`` 474 + Frequency of the average P-state of the CPU represented by the given 475 + policy for the time interval between the last two invocations of the 476 + driver's utilization update callback by the CPU scheduler for that CPU. 477 + 478 + The meaning of these attributes in the `passive mode <Passive Mode_>`_ is the 479 + same as for other scaling drivers. 480 + 481 + Additionally, the value of the ``scaling_driver`` attribute for ``intel_pstate`` 482 + depends on the operation mode of the driver. Namely, it is either 483 + "intel_pstate" (in the `active mode <Active Mode_>`_) or "intel_cpufreq" (in the 484 + `passive mode <Passive Mode_>`_). 485 + 486 + Coordination of P-State Limits 487 + ------------------------------ 488 + 489 + ``intel_pstate`` allows P-state limits to be set in two ways: with the help of 490 + the ``max_perf_pct`` and ``min_perf_pct`` `global attributes 491 + <Global Attributes_>`_ or via the ``scaling_max_freq`` and ``scaling_min_freq`` 492 + ``CPUFreq`` policy attributes. 
The coordination between those limits is based 493 + on the following rules, regardless of the current operation mode of the driver: 494 + 495 + 1. All CPUs are affected by the global limits (that is, none of them can be 496 + requested to run faster than the global maximum and none of them can be 497 + requested to run slower than the global minimum). 498 + 499 + 2. Each individual CPU is affected by its own per-policy limits (that is, it 500 + cannot be requested to run faster than its own per-policy maximum and it 501 + cannot be requested to run slower than its own per-policy minimum). 502 + 503 + 3. The global and per-policy limits can be set independently. 504 + 505 + If the `HWP feature is enabled in the processor <Active Mode With HWP_>`_, the 506 + resulting effective values are written into its registers whenever the limits 507 + change in order to request its internal P-state selection logic to always set 508 + P-states within these limits. Otherwise, the limits are taken into account by 509 + scaling governors (in the `passive mode <Passive Mode_>`_) and by the driver 510 + every time before setting a new P-state for a CPU. 511 + 512 + Additionally, if the ``intel_pstate=per_cpu_perf_limits`` command line argument 513 + is passed to the kernel, ``max_perf_pct`` and ``min_perf_pct`` are not exposed 514 + at all and the only way to set the limits is by using the policy attributes. 515 + 516 + 517 + Energy vs Performance Hints 518 + --------------------------- 519 + 520 + If ``intel_pstate`` works in the `active mode with the HWP feature enabled 521 + <Active Mode With HWP_>`_ in the processor, additional attributes are present 522 + in every ``CPUFreq`` policy directory in ``sysfs``. 
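The three rules above amount to taking the intersection of the global and per-policy ranges; a minimal sketch with hypothetical frequency values:

```shell
#!/bin/sh
# Sketch of rules 1-3 with made-up numbers: the effective limits for a CPU
# are the intersection of the global and per-policy frequency ranges.
global_max_khz=3100000   # derived from max_perf_pct
global_min_khz=800000    # derived from min_perf_pct
policy_max_khz=2900000   # scaling_max_freq for this CPU
policy_min_khz=1200000   # scaling_min_freq for this CPU

# Effective ceiling: the lower of the two maxima.
eff_max=$(( global_max_khz < policy_max_khz ? global_max_khz : policy_max_khz ))
# Effective floor: the higher of the two minima.
eff_min=$(( global_min_khz > policy_min_khz ? global_min_khz : policy_min_khz ))

echo "effective range: $eff_min..$eff_max kHz"
```

With these values the CPU can be requested to run anywhere between 1200000 kHz and 2900000 kHz.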
They are intended to allow 523 + user space to help ``intel_pstate`` to adjust the processor's internal P-state 524 + selection logic by focusing it on performance or on energy-efficiency, or 525 + somewhere between the two extremes: 526 + 527 + ``energy_performance_preference`` 528 + Current value of the energy vs performance hint for the given policy 529 + (or the CPU represented by it). 530 + 531 + The hint can be changed by writing to this attribute. 532 + 533 + ``energy_performance_available_preferences`` 534 + List of strings that can be written to the 535 + ``energy_performance_preference`` attribute. 536 + 537 + They represent different energy vs performance hints and should be 538 + self-explanatory, except that ``default`` represents whatever hint 539 + value was set by the platform firmware. 540 + 541 + Strings written to the ``energy_performance_preference`` attribute are 542 + internally translated to integer values written to the processor's 543 + Energy-Performance Preference (EPP) knob (if supported) or its 544 + Energy-Performance Bias (EPB) knob. 545 + 546 + [Note that tasks may be migrated from one CPU to another by the scheduler's 547 + load-balancing algorithm and if different energy vs performance hints are 548 + set for those CPUs, that may lead to undesirable outcomes. To avoid such 549 + issues it is better to set the same energy vs performance hint for all CPUs 550 + or to pin every task potentially sensitive to them to a specific CPU.] 551 + 552 + .. _acpi-cpufreq: 553 + 554 + ``intel_pstate`` vs ``acpi-cpufreq`` 555 + ==================================== 556 + 557 + On the majority of systems supported by ``intel_pstate``, the ACPI tables 558 + provided by the platform firmware contain ``_PSS`` objects returning information 559 + that can be used for CPU performance scaling (refer to the `ACPI specification`_ 560 + for details on the ``_PSS`` objects and the format of the information returned 561 + by them). 
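A short usage sketch for these attributes (hypothetical cpu0 policy, root required; the preference strings in the comment are the commonly exposed ones, but the authoritative list is whatever ``energy_performance_available_preferences`` reports on the given system):

```shell
#!/bin/sh
# Sketch: inspecting and setting the energy vs performance hint for cpu0
# (only meaningful in the active mode with HWP; a no-op elsewhere).
epp=/sys/devices/system/cpu/cpu0/cpufreq

if [ -f "$epp/energy_performance_available_preferences" ]; then
    cat "$epp/energy_performance_available_preferences"
    # typically: default performance balance_performance balance_power power
    echo balance_power > "$epp/energy_performance_preference"
    cat "$epp/energy_performance_preference"
fi
```

Writing ``default`` restores whatever hint value was set by the platform firmware, as noted above.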
562 + 563 + The information returned by the ACPI ``_PSS`` objects is used by the 564 + ``acpi-cpufreq`` scaling driver. On systems supported by ``intel_pstate`` 565 + the ``acpi-cpufreq`` driver uses the same hardware CPU performance scaling 566 + interface, but the set of P-states it can use is limited by the ``_PSS`` 567 + output. 568 + 569 + On those systems each ``_PSS`` object returns a list of P-states supported by 570 + the corresponding CPU which basically is a subset of the P-states range that can 571 + be used by ``intel_pstate`` on the same system, with one exception: the whole 572 + `turbo range <turbo_>`_ is represented by one item in it (the topmost one). By 573 + convention, the frequency returned by ``_PSS`` for that item is greater by 1 MHz 574 + than the frequency of the highest non-turbo P-state listed by it, but the 575 + corresponding P-state representation (following the hardware specification) 576 + returned for it matches the maximum supported turbo P-state (or is the 577 + special value 255 meaning essentially "go as high as you can get"). 578 + 579 + The list of P-states returned by ``_PSS`` is reflected by the table of 580 + available frequencies supplied by ``acpi-cpufreq`` to the ``CPUFreq`` core and 581 + scaling governors and the minimum and maximum supported frequencies reported by 582 + it come from that list as well. In particular, given the special representation 583 + of the turbo range described above, this means that the maximum supported 584 + frequency reported by ``acpi-cpufreq`` is higher by 1 MHz than the frequency 585 + of the highest supported non-turbo P-state listed by ``_PSS`` which, of course, 586 + affects decisions made by the scaling governors, except for ``powersave`` and 587 + ``performance``. 
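The 1 MHz convention described above can be illustrated with hypothetical numbers; note how little of the real turbo span survives in the ``_PSS``-derived maximum:

```shell
#!/bin/sh
# Sketch with made-up frequencies: the table exposed by acpi-cpufreq tops
# out 1 MHz above the highest non-turbo P-state, regardless of how far the
# actual turbo range extends.
nonturbo_max_khz=2900000
turbo_max_khz=3900000                        # real hardware turbo ceiling
acpi_max_khz=$(( nonturbo_max_khz + 1000 ))  # the _PSS convention: +1 MHz

echo "reported maximum: $acpi_max_khz kHz"
echo "turbo span: real $(( turbo_max_khz - nonturbo_max_khz )) kHz, reported 1000 kHz"
```

With these values a governor sees only a 1000 kHz band above the non-turbo maximum, while the hardware actually offers 1000000 kHz of turbo headroom.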
588 + 589 + For example, if a given governor attempts to select a frequency proportional to 590 + estimated CPU load and maps the load of 100% to the maximum supported frequency 591 + (possibly multiplied by a constant), then it will tend to choose P-states below 592 + the turbo threshold if ``acpi-cpufreq`` is used as the scaling driver, because 593 + in that case the turbo range corresponds to a small fraction of the frequency 594 + band it can use (1 MHz vs 1 GHz or more). In consequence, it will only go to 595 + the turbo range for the highest loads and the other loads above 50% that might 596 + benefit from running at turbo frequencies will be given non-turbo P-states 597 + instead. 598 + 599 + One more issue related to that may appear on systems supporting the 600 + `Configurable TDP feature <turbo_>`_ allowing the platform firmware to set the 601 + turbo threshold. Namely, if that is not coordinated with the lists of P-states 602 + returned by ``_PSS`` properly, there may be more than one item corresponding to 603 + a turbo P-state in those lists and there may be a problem with avoiding the 604 + turbo range (if desirable or necessary). Usually, to avoid using turbo 605 + P-states overall, ``acpi-cpufreq`` simply avoids using the topmost state listed 606 + by ``_PSS``, but that is not sufficient when there are other turbo P-states in 607 + the list returned by it. 608 + 609 + Apart from the above, ``acpi-cpufreq`` works like ``intel_pstate`` in the 610 + `passive mode <Passive Mode_>`_, except that the number of P-states it can set 611 + is limited to the ones listed by the ACPI ``_PSS`` objects. 612 + 613 + 614 + Kernel Command Line Options for ``intel_pstate`` 615 + ================================================ 616 + 617 + Several kernel command line options can be used to pass early-configuration-time 618 + parameters to ``intel_pstate`` in order to enforce specific behavior of it. 
All 619 + of them have to be prepended with the ``intel_pstate=`` prefix. 620 + 621 + ``disable`` 622 + Do not register ``intel_pstate`` as the scaling driver even if the 623 + processor is supported by it. 624 + 625 + ``passive`` 626 + Register ``intel_pstate`` in the `passive mode <Passive Mode_>`_ to 627 + start with. 628 + 629 + This option implies the ``no_hwp`` one described below. 630 + 631 + ``force`` 632 + Register ``intel_pstate`` as the scaling driver instead of 633 + ``acpi-cpufreq`` even if the latter is preferred on the given system. 634 + 635 + This may prevent some platform features (such as thermal controls and 636 + power capping) that rely on the availability of ACPI P-states 637 + information from functioning as expected, so it should be used with 638 + caution. 639 + 640 + This option does not work with processors that are not supported by 641 + ``intel_pstate`` and on platforms where the ``pcc-cpufreq`` scaling 642 + driver is used instead of ``acpi-cpufreq``. 643 + 644 + ``no_hwp`` 645 + Do not enable the `hardware-managed P-states (HWP) feature 646 + <Active Mode With HWP_>`_ even if it is supported by the processor. 647 + 648 + ``hwp_only`` 649 + Register ``intel_pstate`` as the scaling driver only if the 650 + `hardware-managed P-states (HWP) feature <Active Mode With HWP_>`_ is 651 + supported by the processor. 652 + 653 + ``support_acpi_ppc`` 654 + Take ACPI ``_PPC`` performance limits into account. 655 + 656 + If the preferred power management profile in the FADT (Fixed ACPI 657 + Description Table) is set to "Enterprise Server" or "Performance 658 + Server", the ACPI ``_PPC`` limits are taken into account by default 659 + and this option has no effect. 660 + 661 + ``per_cpu_perf_limits`` 662 + Use per-logical-CPU P-State limits (see `Coordination of P-state 663 + Limits`_ for details). 
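As an illustration, one of these options might be added from the boot loader configuration, for example with GRUB (the file name and regeneration step are distribution-specific assumptions, not part of the driver documentation):

```shell
# /etc/default/grub (distribution-specific): append the option to the kernel
# command line, then regenerate the GRUB configuration and reboot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=passive"
```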
664 + 665 + 666 + Diagnostics and Tuning 667 + ====================== 668 + 669 + Trace Events 670 + ------------ 671 + 672 + There are two static trace events that can be used for ``intel_pstate`` 673 + diagnostics. One of them is the ``cpu_frequency`` trace event generally used 674 + by ``CPUFreq``, and the other one is the ``pstate_sample`` trace event specific 675 + to ``intel_pstate``. Both of them are triggered by ``intel_pstate`` only if 676 + it works in the `active mode <Active Mode_>`_. 677 + 678 + The following sequence of shell commands can be used to enable them and see 679 + their output (if the kernel is generally configured to support event tracing):: 680 + 681 + # cd /sys/kernel/debug/tracing/ 682 + # echo 1 > events/power/pstate_sample/enable 683 + # echo 1 > events/power/cpu_frequency/enable 684 + # cat trace 685 + gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107 scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618 freq=2474476 686 + cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2 687 + 688 + If ``intel_pstate`` works in the `passive mode <Passive Mode_>`_, the 689 + ``cpu_frequency`` trace event will be triggered either by the ``schedutil`` 690 + scaling governor (for the policies it is attached to), or by the ``CPUFreq`` 691 + core (for the policies with other scaling governors). 692 + 693 + ``ftrace`` 694 + ---------- 695 + 696 + The ``ftrace`` interface can be used for low-level diagnostics of 697 + ``intel_pstate``. For example, to check how often the function to set a 698 + P-state is called, the ``ftrace`` filter can be set to 699 + :c:func:`intel_pstate_set_pstate`:: 700 + 701 + # cd /sys/kernel/debug/tracing/ 702 + # cat available_filter_functions | grep -i pstate 703 + intel_pstate_set_pstate 704 + intel_pstate_cpu_init 705 + ... 
706 + # echo intel_pstate_set_pstate > set_ftrace_filter 707 + # echo function > current_tracer 708 + # cat trace | head -15 709 + # tracer: function 710 + # 711 + # entries-in-buffer/entries-written: 80/80 #P:4 712 + # 713 + # _-----=> irqs-off 714 + # / _----=> need-resched 715 + # | / _---=> hardirq/softirq 716 + # || / _--=> preempt-depth 717 + # ||| / delay 718 + # TASK-PID CPU# |||| TIMESTAMP FUNCTION 719 + # | | | |||| | | 720 + Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func 721 + gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func 722 + gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func 723 + <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func 724 + 725 + Tuning Interface in ``debugfs`` 726 + ------------------------------- 727 + 728 + The ``powersave`` algorithm provided by ``intel_pstate`` for `the Core line of 729 + processors in the active mode <powersave_>`_ is based on a `PID controller`_ 730 + whose parameters were chosen to address a number of different use cases at the 731 + same time. However, it still is possible to fine-tune it to a specific workload 732 + and the ``debugfs`` interface under ``/sys/kernel/debug/pstate_snb/`` is 733 + provided for this purpose. [Note that the ``pstate_snb`` directory will be 734 + present only if the specific P-state selection algorithm matching the interface 735 + in it actually is in use.] 
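A hedged sketch of such fine-tuning (the guard reflects the fact that ``pstate_snb`` is only present when the matching algorithm is in use; the default values in the comments are those of the Core PID setup):

```shell
#!/bin/sh
# Sketch: lowering the setpoint biases the PID controller toward higher
# P-states at moderate loads; a smaller sample rate makes it react faster.
# Requires root and the PID-based powersave algorithm; a no-op elsewhere.
dbg=/sys/kernel/debug/pstate_snb

if [ -d "$dbg" ]; then
    cat "$dbg/setpoint"             # 97 by default on Core processors
    echo 60 > "$dbg/setpoint"       # favor higher P-states for the same load
    echo 5 > "$dbg/sample_rate_ms"  # default is 10 ms
fi
```

As the note below the parameter list says, such changes should only be made with a clear understanding of the power vs performance tradeoff involved.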
736 + 737 + The following files present in that directory can be used to modify the PID 738 + controller parameters at run time: 739 + 740 + | ``deadband`` 741 + | ``d_gain_pct`` 742 + | ``i_gain_pct`` 743 + | ``p_gain_pct`` 744 + | ``sample_rate_ms`` 745 + | ``setpoint`` 746 + 747 + Note, however, that achieving desirable results this way generally requires 748 + expert-level understanding of the power vs performance tradeoff, so extra care 749 + is recommended when attempting to do that. 750 + 751 + 752 + .. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf 753 + .. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html 754 + .. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf 755 + .. _PID controller: https://en.wikipedia.org/wiki/PID_controller
-281
Documentation/cpu-freq/intel-pstate.txt
··· 1 - Intel P-State driver 2 - -------------------- 3 - 4 - This driver provides an interface to control the P-State selection for the 5 - SandyBridge+ Intel processors. 6 - 7 - The following document explains P-States: 8 - http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf 9 - As stated in the document, P-State doesn’t exactly mean a frequency. However, for 10 - the sake of the relationship with cpufreq, P-State and frequency are used 11 - interchangeably. 12 - 13 - Understanding the cpufreq core governors and policies are important before 14 - discussing more details about the Intel P-State driver. Based on what callbacks 15 - a cpufreq driver provides to the cpufreq core, it can support two types of 16 - drivers: 17 - - with target_index() callback: In this mode, the drivers using cpufreq core 18 - simply provide the minimum and maximum frequency limits and an additional 19 - interface target_index() to set the current frequency. The cpufreq subsystem 20 - has a number of scaling governors ("performance", "powersave", "ondemand", 21 - etc.). Depending on which governor is in use, cpufreq core will call for 22 - transitions to a specific frequency using target_index() callback. 23 - - setpolicy() callback: In this mode, drivers do not provide target_index() 24 - callback, so cpufreq core can't request a transition to a specific frequency. 25 - The driver provides minimum and maximum frequency limits and callbacks to set a 26 - policy. The policy in cpufreq sysfs is referred to as the "scaling governor". 27 - The cpufreq core can request the driver to operate in any of the two policies: 28 - "performance" and "powersave". The driver decides which frequency to use based 29 - on the above policy selection considering minimum and maximum frequency limits. 30 - 31 - The Intel P-State driver falls under the latter category, which implements the 32 - setpolicy() callback. 
This driver decides what P-State to use based on the 33 - requested policy from the cpufreq core. If the processor is capable of 34 - selecting its next P-State internally, then the driver will offload this 35 - responsibility to the processor (aka HWP: Hardware P-States). If not, the 36 - driver implements algorithms to select the next P-State. 37 - 38 - Since these policies are implemented in the driver, they are not same as the 39 - cpufreq scaling governors implementation, even if they have the same name in 40 - the cpufreq sysfs (scaling_governors). For example the "performance" policy is 41 - similar to cpufreq’s "performance" governor, but "powersave" is completely 42 - different than the cpufreq "powersave" governor. The strategy here is similar 43 - to cpufreq "ondemand", where the requested P-State is related to the system load. 44 - 45 - Sysfs Interface 46 - 47 - In addition to the frequency-controlling interfaces provided by the cpufreq 48 - core, the driver provides its own sysfs files to control the P-State selection. 49 - These files have been added to /sys/devices/system/cpu/intel_pstate/. 50 - Any changes made to these files are applicable to all CPUs (even in a 51 - multi-package system, Refer to later section on placing "Per-CPU limits"). 52 - 53 - max_perf_pct: Limits the maximum P-State that will be requested by 54 - the driver. It states it as a percentage of the available performance. The 55 - available (P-State) performance may be reduced by the no_turbo 56 - setting described below. 57 - 58 - min_perf_pct: Limits the minimum P-State that will be requested by 59 - the driver. It states it as a percentage of the max (non-turbo) 60 - performance level. 61 - 62 - no_turbo: Limits the driver to selecting P-State below the turbo 63 - frequency range. 64 - 65 - turbo_pct: Displays the percentage of the total performance that 66 - is supported by hardware that is in the turbo range. 
This number 67 - is independent of whether turbo has been disabled or not. 68 - 69 - num_pstates: Displays the number of P-States that are supported 70 - by hardware. This number is independent of whether turbo has 71 - been disabled or not. 72 - 73 - For example, if a system has these parameters: 74 - Max 1 core turbo ratio: 0x21 (Max 1 core ratio is the maximum P-State) 75 - Max non turbo ratio: 0x17 76 - Minimum ratio : 0x08 (Here the ratio is called max efficiency ratio) 77 - 78 - Sysfs will show : 79 - max_perf_pct:100, which corresponds to 1 core ratio 80 - min_perf_pct:24, max_efficiency_ratio / max 1 Core ratio 81 - no_turbo:0, turbo is not disabled 82 - num_pstates:26 = (max 1 Core ratio - Max Efficiency Ratio + 1) 83 - turbo_pct:39 = (max 1 core ratio - max non turbo ratio) / num_pstates 84 - 85 - Refer to "Intel® 64 and IA-32 Architectures Software Developer’s Manual 86 - Volume 3: System Programming Guide" to understand ratios. 87 - 88 - There is one more sysfs attribute in /sys/devices/system/cpu/intel_pstate/ 89 - that can be used for controlling the operation mode of the driver: 90 - 91 - status: Three settings are possible: 92 - "off" - The driver is not in use at this time. 93 - "active" - The driver works as a P-state governor (default). 94 - "passive" - The driver works as a regular cpufreq one and collaborates 95 - with the generic cpufreq governors (it sets P-states as 96 - requested by those governors). 97 - The current setting is returned by reads from this attribute. Writing one 98 - of the above strings to it changes the operation mode as indicated by that 99 - string, if possible. If HW-managed P-states (HWP) are enabled, it is not 100 - possible to change the driver's operation mode and attempts to write to 101 - this attribute will fail. 102 - 103 - cpufreq sysfs for Intel P-State 104 - 105 - Since this driver registers with cpufreq, cpufreq sysfs is also presented. 106 - There are some important differences, which need to be considered. 
107 - 108 - scaling_cur_freq: This displays the real frequency which was used during 109 - the last sample period instead of what is requested. Some other cpufreq driver, 110 - like acpi-cpufreq, displays what is requested (Some changes are on the 111 - way to fix this for acpi-cpufreq driver). The same is true for frequencies 112 - displayed at /proc/cpuinfo. 113 - 114 - scaling_governor: This displays current active policy. Since each CPU has a 115 - cpufreq sysfs, it is possible to set a scaling governor to each CPU. But this 116 - is not possible with Intel P-States, as there is one common policy for all 117 - CPUs. Here, the last requested policy will be applicable to all CPUs. It is 118 - suggested that one use the cpupower utility to change policy to all CPUs at the 119 - same time. 120 - 121 - scaling_setspeed: This attribute can never be used with Intel P-State. 122 - 123 - scaling_max_freq/scaling_min_freq: This interface can be used similarly to 124 - the max_perf_pct/min_perf_pct of Intel P-State sysfs. However since frequencies 125 - are converted to nearest possible P-State, this is prone to rounding errors. 126 - This method is not preferred to limit performance. 127 - 128 - affected_cpus: Not used 129 - related_cpus: Not used 130 - 131 - For contemporary Intel processors, the frequency is controlled by the 132 - processor itself and the P-State exposed to software is related to 133 - performance levels. The idea that frequency can be set to a single 134 - frequency is fictional for Intel Core processors. Even if the scaling 135 - driver selects a single P-State, the actual frequency the processor 136 - will run at is selected by the processor itself. 137 - 138 - Per-CPU limits 139 - 140 - The kernel command line option "intel_pstate=per_cpu_perf_limits" forces 141 - the intel_pstate driver to use per-CPU performance limits. When it is set, 142 - the sysfs control interface described above is subject to limitations. 
143 - - The following controls are not available for both read and write 144 - /sys/devices/system/cpu/intel_pstate/max_perf_pct 145 - /sys/devices/system/cpu/intel_pstate/min_perf_pct 146 - - The following controls can be used to set performance limits, as far as the 147 - architecture of the processor permits: 148 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq 149 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq 150 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 151 - - User can still observe turbo percent and number of P-States from 152 - /sys/devices/system/cpu/intel_pstate/turbo_pct 153 - /sys/devices/system/cpu/intel_pstate/num_pstates 154 - - User can read write system wide turbo status 155 - /sys/devices/system/cpu/no_turbo 156 - 157 - Support of energy performance hints 158 - It is possible to provide hints to the HWP algorithms in the processor 159 - to be more performance centric to more energy centric. When the driver 160 - is using HWP, two additional cpufreq sysfs attributes are presented for 161 - each logical CPU. 162 - These attributes are: 163 - - energy_performance_available_preferences 164 - - energy_performance_preference 165 - 166 - To get list of supported hints: 167 - $ cat energy_performance_available_preferences 168 - default performance balance_performance balance_power power 169 - 170 - The current preference can be read or changed via cpufreq sysfs 171 - attribute "energy_performance_preference". Reading from this attribute 172 - will display current effective setting. User can write any of the valid 173 - preference string to this attribute. User can always restore to power-on 174 - default by writing "default". 175 - 176 - Since threads can migrate to different CPUs, this is possible that the 177 - new CPU may have different energy performance preference than the previous 178 - one. 
To avoid such issues, either threads can be pinned to specific CPUs 179 - or set the same energy performance preference value to all CPUs. 180 - 181 - Tuning Intel P-State driver 182 - 183 - When the performance can be tuned using PID (Proportional Integral 184 - Derivative) controller, debugfs files are provided for adjusting performance. 185 - They are presented under: 186 - /sys/kernel/debug/pstate_snb/ 187 - 188 - The PID tunable parameters are: 189 - deadband 190 - d_gain_pct 191 - i_gain_pct 192 - p_gain_pct 193 - sample_rate_ms 194 - setpoint 195 - 196 - To adjust these parameters, some understanding of driver implementation is 197 - necessary. There are some tweeks described here, but be very careful. Adjusting 198 - them requires expert level understanding of power and performance relationship. 199 - These limits are only useful when the "powersave" policy is active. 200 - 201 - -To make the system more responsive to load changes, sample_rate_ms can 202 - be adjusted (current default is 10ms). 203 - -To make the system use higher performance, even if the load is lower, setpoint 204 - can be adjusted to a lower number. This will also lead to faster ramp up time 205 - to reach the maximum P-State. 206 - If there are no derivative and integral coefficients, The next P-State will be 207 - equal to: 208 - current P-State - ((setpoint - current cpu load) * p_gain_pct) 209 - 210 - For example, if the current PID parameters are (Which are defaults for the core 211 - processors like SandyBridge): 212 - deadband = 0 213 - d_gain_pct = 0 214 - i_gain_pct = 0 215 - p_gain_pct = 20 216 - sample_rate_ms = 10 217 - setpoint = 97 218 - 219 - If the current P-State = 0x08 and current load = 100, this will result in the 220 - next P-State = 0x08 - ((97 - 100) * 0.2) = 8.6 (rounded to 9). Here the P-State 221 - goes up by only 1. If during next sample interval the current load doesn't 222 - change and still 100, then P-State goes up by one again. 
This process will 223 - continue as long as the load is more than the setpoint until the maximum P-State 224 - is reached. 225 - 226 - For the same load at setpoint = 60, this will result in the next P-State 227 - = 0x08 - ((60 - 100) * 0.2) = 16 228 - So by changing the setpoint from 97 to 60, there is an increase of the 229 - next P-State from 9 to 16. So this will make processor execute at higher 230 - P-State for the same CPU load. If the load continues to be more than the 231 - setpoint during next sample intervals, then P-State will go up again till the 232 - maximum P-State is reached. But the ramp up time to reach the maximum P-State 233 - will be much faster when the setpoint is 60 compared to 97. 234 - 235 - Debugging Intel P-State driver 236 - 237 - Event tracing 238 - To debug P-State transition, the Linux event tracing interface can be used. 239 - There are two specific events, which can be enabled (Provided the kernel 240 - configs related to event tracing are enabled). 241 - 242 - # cd /sys/kernel/debug/tracing/ 243 - # echo 1 > events/power/pstate_sample/enable 244 - # echo 1 > events/power/cpu_frequency/enable 245 - # cat trace 246 - gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107 247 - scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618 248 - freq=2474476 249 - cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2 250 - 251 - 252 - Using ftrace 253 - 254 - If function level tracing is required, the Linux ftrace interface can be used. 255 - For example if we want to check how often a function to set a P-State is 256 - called, we can set ftrace filter to intel_pstate_set_pstate. 257 - 258 - # cd /sys/kernel/debug/tracing/ 259 - # cat available_filter_functions | grep -i pstate 260 - intel_pstate_set_pstate 261 - intel_pstate_cpu_init 262 - ... 
263 - 264 - # echo intel_pstate_set_pstate > set_ftrace_filter 265 - # echo function > current_tracer 266 - # cat trace | head -15 267 - # tracer: function 268 - # 269 - # entries-in-buffer/entries-written: 80/80 #P:4 270 - # 271 - # _-----=> irqs-off 272 - # / _----=> need-resched 273 - # | / _---=> hardirq/softirq 274 - # || / _--=> preempt-depth 275 - # ||| / delay 276 - # TASK-PID CPU# |||| TIMESTAMP FUNCTION 277 - # | | | |||| | | 278 - Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func 279 - gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func 280 - gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func 281 - <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func
+1 -1
Documentation/devicetree/bindings/input/touchscreen/edt-ft5x06.txt
··· 36 36 control gpios 37 37 38 38 - threshold: allows setting the "click"-threshold in the range 39 - from 20 to 80. 39 + from 0 to 80. 40 40 41 41 - gain: allows setting the sensitivity in the range from 0 to 42 42 31. Note that lower values indicate higher
+6
Documentation/devicetree/bindings/mfd/hisilicon,hi655x.txt
··· 16 16 - reg: Base address of PMIC on Hi6220 SoC. 17 17 - interrupt-controller: Hi655x has internal IRQs (has own IRQ domain). 18 18 - pmic-gpios: The GPIO used by PMIC IRQ. 19 + - #clock-cells: From common clock binding; shall be set to 0 20 + 21 + Optional properties: 22 + - clock-output-names: From common clock binding to override the 23 + default output clock name 19 24 20 25 Example: 21 26 pmic: pmic@f8000000 { ··· 29 24 interrupt-controller; 30 25 #interrupt-cells = <2>; 31 26 pmic-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>; 27 + #clock-cells = <0>; 32 28 }
+2
Documentation/devicetree/bindings/mmc/mmc-pwrseq-simple.txt
··· 18 18 "ext_clock" (External clock provided to the card). 19 19 - post-power-on-delay-ms : Delay in ms after powering the card and 20 20 de-asserting the reset-gpios (if any) 21 + - power-off-delay-us : Delay in us after asserting the reset-gpios (if any) 22 + during power off of the card. 21 23 22 24 Example: 23 25
+4
Documentation/devicetree/bindings/net/fsl-fec.txt
··· 15 15 - phy-reset-active-high : If present then the reset sequence using the GPIO 16 16 specified in the "phy-reset-gpios" property is reversed (H=reset state, 17 17 L=operation state). 18 + - phy-reset-post-delay : Post reset delay in milliseconds. If present then 19 + a delay of phy-reset-post-delay milliseconds will be observed after the 20 + phy-reset-gpios has been toggled. Can be omitted thus no delay is 21 + observed. Delay is in range of 1ms to 1000ms. Other delays are invalid. 18 22 - phy-supply : regulator that powers the Ethernet PHY. 19 23 - phy-handle : phandle to the PHY device connected to this device. 20 24 - fixed-link : Assume a fixed link. See fixed-link.txt in the same directory.
-2
Documentation/devicetree/bindings/pinctrl/pinctrl-bindings.txt
··· 247 247 bias-pull-up - pull up the pin 248 248 bias-pull-down - pull down the pin 249 249 bias-pull-pin-default - use pin-default pull state 250 - bi-directional - pin supports simultaneous input/output operations 251 250 drive-push-pull - drive actively high and low 252 251 drive-open-drain - drive with open drain 253 252 drive-open-source - drive with open source ··· 259 260 power-source - select between different power supplies 260 261 low-power-enable - enable low power mode 261 262 low-power-disable - disable low power mode 262 - output-enable - enable output on pin regardless of output value 263 263 output-low - set the pin to output mode with low level 264 264 output-high - set the pin to output mode with high level 265 265 slew-rate - set the slew rate
+1 -1
Documentation/input/devices/edt-ft5x06.rst
··· 15 15 The driver allows configuration of the touch screen via a set of sysfs files: 16 16 17 17 /sys/class/input/eventX/device/device/threshold: 18 - allows setting the "click"-threshold in the range from 20 to 80. 18 + allows setting the "click"-threshold in the range from 0 to 80. 19 19 20 20 /sys/class/input/eventX/device/device/gain: 21 21 allows setting the sensitivity in the range from 0 to 31. Note that
+65 -49
Documentation/sound/hd-audio/models.rst
··· 16 16 6-jack in back, 2-jack in front 17 17 6stack-digout 18 18 6-jack with a SPDIF out 19 + 6stack-automute 20 + 6-jack with headphone jack detection 19 21 20 22 ALC260 21 23 ====== ··· 64 62 Enables docking station I/O for some Lenovos 65 63 hp-gpio-led 66 64 GPIO LED support on HP laptops 65 + hp-dock-gpio-mic1-led 66 + HP dock with mic LED support 67 67 dell-headset-multi 68 68 Headset jack, which can also be used as mic-in 69 69 dell-headset-dock ··· 76 72 Combo jack sensing on ALC283 77 73 tpt440-dock 78 74 Pin configs for Lenovo Thinkpad Dock support 75 + tpt440 76 + Lenovo Thinkpad T440s setup 77 + tpt460 78 + Lenovo Thinkpad T460/560 setup 79 + dual-codecs 80 + Lenovo laptops with dual codecs 79 81 80 82 ALC66x/67x/892 81 83 ============== ··· 107 97 Inverted internal mic workaround 108 98 dell-headset-multi 109 99 Headset jack, which can also be used as mic-in 100 + dual-codecs 101 + Lenovo laptops with dual codecs 110 102 111 103 ALC680 112 104 ====== ··· 126 114 Inverted internal mic workaround 127 115 no-primary-hp 128 116 VAIO Z/VGC-LN51JGB workaround (for fixed speaker DAC) 117 + dual-codecs 118 + ALC1220 dual codecs for Gaming mobos 129 119 130 120 ALC861/660 131 121 ========== ··· 220 206 221 207 Conexant 5045 222 208 ============= 223 - laptop-hpsense 224 - Laptop with HP sense (old model laptop) 225 - laptop-micsense 226 - Laptop with Mic sense (old model fujitsu) 227 - laptop-hpmicsense 228 - Laptop with HP and Mic senses 229 - benq 230 - Benq R55E 231 - laptop-hp530 232 - HP 530 laptop 233 - test 234 - for testing/debugging purpose, almost all controls can be 235 - adjusted. 
Appearing only when compiled with $CONFIG_SND_DEBUG=y 209 + cap-mix-amp 210 + Fix max input level on mixer widget 211 + toshiba-p105 212 + Toshiba P105 quirk 213 + hp-530 214 + HP 530 quirk 236 215 237 216 Conexant 5047 238 217 ============= 239 - laptop 240 - Basic Laptop config 241 - laptop-hp 242 - Laptop config for some HP models (subdevice 30A5) 243 - laptop-eapd 244 - Laptop config with EAPD support 245 - test 246 - for testing/debugging purpose, almost all controls can be 247 - adjusted. Appearing only when compiled with $CONFIG_SND_DEBUG=y 218 + cap-mix-amp 219 + Fix max input level on mixer widget 248 220 249 221 Conexant 5051 250 222 ============= 251 - laptop 252 - Basic Laptop config (default) 253 - hp 254 - HP Spartan laptop 255 - hp-dv6736 256 - HP dv6736 257 - hp-f700 258 - HP Compaq Presario F700 259 - ideapad 260 - Lenovo IdeaPad laptop 261 - toshiba 262 - Toshiba Satellite M300 223 + lenovo-x200 224 + Lenovo X200 quirk 263 225 264 226 Conexant 5066 265 227 ============= 266 - laptop 267 - Basic Laptop config (default) 268 - hp-laptop 269 - HP laptops, e g G60 270 - asus 271 - Asus K52JU, Lenovo G560 272 - dell-laptop 273 - Dell laptops 274 - dell-vostro 275 - Dell Vostro 276 - olpc-xo-1_5 277 - OLPC XO 1.5 278 - ideapad 279 - Lenovo IdeaPad U150 228 + stereo-dmic 229 + Workaround for inverted stereo digital mic 230 + gpio1 231 + Enable GPIO1 pin 232 + headphone-mic-pin 233 + Enable headphone mic NID 0x18 without detection 234 + tp410 235 + Thinkpad T400 & co quirks 280 236 thinkpad 281 - Lenovo Thinkpad 237 + Thinkpad mute/mic LED quirk 238 + lemote-a1004 239 + Lemote A1004 quirk 240 + lemote-a1205 241 + Lemote A1205 quirk 242 + olpc-xo 243 + OLPC XO quirk 244 + mute-led-eapd 245 + Mute LED control via EAPD 246 + hp-dock 247 + HP dock support 248 + mute-led-gpio 249 + Mute LED control via GPIO 282 250 283 251 STAC9200 284 252 ======== ··· 440 444 Dell desktops/laptops 441 445 alienware 442 446 Alienware M17x 447 + asus-mobo 448 + Pin configs for 
ASUS mobo with 5.1/SPDIF out 443 449 auto 444 450 BIOS setup (default) 445 451 ··· 475 477 Pin fixup for HP Envy TS bass speaker (NID 0x10) 476 478 hp-bnb13-eq 477 479 Hardware equalizer setup for HP laptops 480 + hp-envy-ts-bass 481 + HP Envy TS bass support 478 482 auto 479 483 BIOS setup (default) 480 484 ··· 496 496 497 497 Cirrus Logic CS4206/4207 498 498 ======================== 499 + mbp53 500 + MacBook Pro 5,3 499 501 mbp55 500 502 MacBook Pro 5,5 501 503 imac27 502 504 IMac 27 Inch 505 + imac27_122 506 + iMac 12,2 507 + apple 508 + Generic Apple quirk 509 + mbp101 510 + MacBookPro 10,1 511 + mbp81 512 + MacBookPro 8,1 513 + mba42 514 + MacBookAir 4,2 503 515 auto 504 516 BIOS setup (default) 505 517 ··· 521 509 MacBook Air 6,1 and 6,2 522 510 gpio0 523 511 Enable GPIO 0 amp 512 + mbp11 513 + MacBookPro 11,2 514 + macmini 515 + MacMini 7,1 524 516 auto 525 517 BIOS setup (default) 526 518
+1 -1
MAINTAINERS
··· 7143 7143 F: drivers/media/platform/rcar_jpu.c 7144 7144 7145 7145 JSM Neo PCI based serial card 7146 - M: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com> 7146 + M: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com> 7147 7147 L: linux-serial@vger.kernel.org 7148 7148 S: Maintained 7149 7149 F: drivers/tty/serial/jsm/
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 12 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc3 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION*
+64 -16
arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
··· 81 81 }; 82 82 }; 83 83 84 + reg_sys_5v: regulator@0 { 85 + compatible = "regulator-fixed"; 86 + regulator-name = "SYS_5V"; 87 + regulator-min-microvolt = <5000000>; 88 + regulator-max-microvolt = <5000000>; 89 + regulator-boot-on; 90 + regulator-always-on; 91 + }; 92 + 93 + reg_vdd_3v3: regulator@1 { 94 + compatible = "regulator-fixed"; 95 + regulator-name = "VDD_3V3"; 96 + regulator-min-microvolt = <3300000>; 97 + regulator-max-microvolt = <3300000>; 98 + regulator-boot-on; 99 + regulator-always-on; 100 + vin-supply = <&reg_sys_5v>; 101 + }; 102 + 103 + reg_5v_hub: regulator@2 { 104 + compatible = "regulator-fixed"; 105 + regulator-name = "5V_HUB"; 106 + regulator-min-microvolt = <5000000>; 107 + regulator-max-microvolt = <5000000>; 108 + regulator-boot-on; 109 + gpio = <&gpio0 7 0>; 110 + regulator-always-on; 111 + vin-supply = <&reg_sys_5v>; 112 + }; 113 + 114 + wl1835_pwrseq: wl1835-pwrseq { 115 + compatible = "mmc-pwrseq-simple"; 116 + /* WLAN_EN GPIO */ 117 + reset-gpios = <&gpio0 5 GPIO_ACTIVE_LOW>; 118 + clocks = <&pmic>; 119 + clock-names = "ext_clock"; 120 + power-off-delay-us = <10>; 121 + }; 122 + 84 123 soc { 85 124 spi0: spi@f7106000 { 86 125 status = "ok"; ··· 295 256 296 257 /* GPIO blocks 16 thru 19 do not appear to be routed to pins */ 297 258 298 - dwmmc_2: dwmmc2@f723f000 { 299 - ti,non-removable; 259 + dwmmc_0: dwmmc0@f723d000 { 260 + cap-mmc-highspeed; 300 261 non-removable; 301 - /* WL_EN */ 302 - vmmc-supply = <&wlan_en_reg>; 262 + bus-width = <0x8>; 263 + vmmc-supply = <&ldo19>; 264 + }; 265 + 266 + dwmmc_1: dwmmc1@f723e000 { 267 + card-detect-delay = <200>; 268 + cap-sd-highspeed; 269 + sd-uhs-sdr12; 270 + sd-uhs-sdr25; 271 + sd-uhs-sdr50; 272 + vqmmc-supply = <&ldo7>; 273 + vmmc-supply = <&ldo10>; 274 + bus-width = <0x4>; 275 + disable-wp; 276 + cd-gpios = <&gpio1 0 1>; 277 + }; 278 + 279 + dwmmc_2: dwmmc2@f723f000 { 280 + bus-width = <0x4>; 281 + non-removable; 282 + vmmc-supply = <&reg_vdd_3v3>; 283 + mmc-pwrseq = <&wl1835_pwrseq>; 
303 284 304 285 #address-cells = <0x1>; 305 286 #size-cells = <0x0>; ··· 330 271 interrupt-parent = <&gpio1>; 331 272 interrupts = <3 IRQ_TYPE_EDGE_RISING>; 332 273 }; 333 - }; 334 - 335 - wlan_en_reg: regulator@1 { 336 - compatible = "regulator-fixed"; 337 - regulator-name = "wlan-en-regulator"; 338 - regulator-min-microvolt = <1800000>; 339 - regulator-max-microvolt = <1800000>; 340 - /* WLAN_EN GPIO */ 341 - gpio = <&gpio0 5 0>; 342 - /* WLAN card specific delay */ 343 - startup-delay-us = <70000>; 344 - enable-active-high; 345 274 }; 346 275 }; 347 276 ··· 377 330 pmic: pmic@f8000000 { 378 331 compatible = "hisilicon,hi655x-pmic"; 379 332 reg = <0x0 0xf8000000 0x0 0x1000>; 333 + #clock-cells = <0>; 380 334 interrupt-controller; 381 335 #interrupt-cells = <2>; 382 336 pmic-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+1 -30
arch/arm64/boot/dts/hisilicon/hi6220.dtsi
··· 725 725 status = "disabled"; 726 726 }; 727 727 728 - fixed_5v_hub: regulator@0 { 729 - compatible = "regulator-fixed"; 730 - regulator-name = "fixed_5v_hub"; 731 - regulator-min-microvolt = <5000000>; 732 - regulator-max-microvolt = <5000000>; 733 - regulator-boot-on; 734 - gpio = <&gpio0 7 0>; 735 - regulator-always-on; 736 - }; 737 - 738 728 usb_phy: usbphy { 739 729 compatible = "hisilicon,hi6220-usb-phy"; 740 730 #phy-cells = <0>; 741 - phy-supply = <&fixed_5v_hub>; 731 + phy-supply = <&reg_5v_hub>; 742 732 hisilicon,peripheral-syscon = <&sys_ctrl>; 743 733 }; 744 734 ··· 756 766 757 767 dwmmc_0: dwmmc0@f723d000 { 758 768 compatible = "hisilicon,hi6220-dw-mshc"; 759 - num-slots = <0x1>; 760 - cap-mmc-highspeed; 761 - non-removable; 762 769 reg = <0x0 0xf723d000 0x0 0x1000>; 763 770 interrupts = <0x0 0x48 0x4>; 764 771 clocks = <&sys_ctrl 2>, <&sys_ctrl 1>; 765 772 clock-names = "ciu", "biu"; 766 773 resets = <&sys_ctrl PERIPH_RSTDIS0_MMC0>; 767 774 reset-names = "reset"; 768 - bus-width = <0x8>; 769 - vmmc-supply = <&ldo19>; 770 775 pinctrl-names = "default"; 771 776 pinctrl-0 = <&emmc_pmx_func &emmc_clk_cfg_func 772 777 &emmc_cfg_func &emmc_rst_cfg_func>; ··· 769 784 770 785 dwmmc_1: dwmmc1@f723e000 { 771 786 compatible = "hisilicon,hi6220-dw-mshc"; 772 - num-slots = <0x1>; 773 - card-detect-delay = <200>; 774 787 hisilicon,peripheral-syscon = <&ao_ctrl>; 775 - cap-sd-highspeed; 776 - sd-uhs-sdr12; 777 - sd-uhs-sdr25; 778 - sd-uhs-sdr50; 779 788 reg = <0x0 0xf723e000 0x0 0x1000>; 780 789 interrupts = <0x0 0x49 0x4>; 781 790 #address-cells = <0x1>; ··· 778 799 clock-names = "ciu", "biu"; 779 800 resets = <&sys_ctrl PERIPH_RSTDIS0_MMC1>; 780 801 reset-names = "reset"; 781 - vqmmc-supply = <&ldo7>; 782 - vmmc-supply = <&ldo10>; 783 - bus-width = <0x4>; 784 - disable-wp; 785 - cd-gpios = <&gpio1 0 1>; 786 802 pinctrl-names = "default", "idle"; 787 803 pinctrl-0 = <&sd_pmx_func &sd_clk_cfg_func &sd_cfg_func>; 788 804 pinctrl-1 = <&sd_pmx_idle &sd_clk_cfg_idle 
&sd_cfg_idle>; ··· 785 811 786 812 dwmmc_2: dwmmc2@f723f000 { 787 813 compatible = "hisilicon,hi6220-dw-mshc"; 788 - num-slots = <0x1>; 789 814 reg = <0x0 0xf723f000 0x0 0x1000>; 790 815 interrupts = <0x0 0x4a 0x4>; 791 816 clocks = <&sys_ctrl HI6220_MMC2_CIUCLK>, <&sys_ctrl HI6220_MMC2_CLK>; 792 817 clock-names = "ciu", "biu"; 793 818 resets = <&sys_ctrl PERIPH_RSTDIS0_MMC2>; 794 819 reset-names = "reset"; 795 - bus-width = <0x4>; 796 - broken-cd; 797 820 pinctrl-names = "default", "idle"; 798 821 pinctrl-0 = <&sdio_pmx_func &sdio_clk_cfg_func &sdio_cfg_func>; 799 822 pinctrl-1 = <&sdio_pmx_idle &sdio_clk_cfg_idle &sdio_cfg_idle>;
+3 -3
arch/arm64/include/asm/acpi.h
··· 23 23 #define ACPI_MADT_GICC_LENGTH \ 24 24 (acpi_gbl_FADT.header.revision < 6 ? 76 : 80) 25 25 26 - #define BAD_MADT_GICC_ENTRY(entry, end) \ 27 - (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) || \ 28 - (entry)->header.length != ACPI_MADT_GICC_LENGTH) 26 + #define BAD_MADT_GICC_ENTRY(entry, end) \ 27 + (!(entry) || (entry)->header.length != ACPI_MADT_GICC_LENGTH || \ 28 + (unsigned long)(entry) + ACPI_MADT_GICC_LENGTH > (end)) 29 29 30 30 /* Basic configuration for ACPI */ 31 31 #ifdef CONFIG_ACPI
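The reordered macro above validates the firmware-reported entry length before using it in the bounds computation. A minimal userspace sketch of the same validate-then-bound pattern (the struct and function names here are local stand-ins, not the kernel's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the MADT GICC subtable header. */
struct gicc_entry {
	uint8_t type;
	uint8_t length;	/* firmware-reported size of this entry */
};

#define EXPECTED_GICC_LENGTH 80

/*
 * Mirrors the fixed BAD_MADT_GICC_ENTRY() ordering: reject a bogus
 * length first, then check that a full-sized entry fits before 'end'.
 */
static bool bad_gicc_entry(const struct gicc_entry *entry, uintptr_t end)
{
	return !entry ||
	       entry->length != EXPECTED_GICC_LENGTH ||
	       (uintptr_t)entry + EXPECTED_GICC_LENGTH > end;
}
```

With this ordering a truncated or wrong-revision entry is rejected on its declared length alone, before the end-of-table arithmetic is trusted.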
+3 -1
arch/arm64/kernel/pci.c
··· 191 191 return NULL; 192 192 193 193 root_ops = kzalloc_node(sizeof(*root_ops), GFP_KERNEL, node); 194 - if (!root_ops) 194 + if (!root_ops) { 195 + kfree(ri); 195 196 return NULL; 197 + } 196 198 197 199 ri->cfg = pci_acpi_setup_ecam_mapping(root); 198 200 if (!ri->cfg) {
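The hunk above plugs a leak on the error path: `ri` was allocated first and previously dropped when the `root_ops` allocation failed. A sketch of that unwind-on-failure shape, with hypothetical names and a `fail_ops` flag standing in for an allocation failure:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the two allocations in the hunk. */
struct root_ops { int dummy; };
struct resource_info { struct root_ops *ops; };

/*
 * Mirrors the fixed error path: when the second allocation fails,
 * the first one ('ri') is freed before returning NULL instead of
 * being leaked.
 */
static struct resource_info *setup_root(int fail_ops)
{
	struct resource_info *ri = calloc(1, sizeof(*ri));
	struct root_ops *ops;

	if (!ri)
		return NULL;

	ops = fail_ops ? NULL : calloc(1, sizeof(*ops));
	if (!ops) {
		free(ri);	/* the fix: release the earlier allocation */
		return NULL;
	}

	ri->ops = ops;
	return ri;
}
```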
+6
arch/frv/include/asm/timex.h
··· 16 16 #define vxtime_lock() do {} while (0) 17 17 #define vxtime_unlock() do {} while (0) 18 18 19 + /* This attribute is used in include/linux/jiffies.h alongside with 20 + * __cacheline_aligned_in_smp. It is assumed that __cacheline_aligned_in_smp 21 + * for frv does not contain another section specification. 22 + */ 23 + #define __jiffy_arch_data __attribute__((__section__(".data"))) 24 + 19 25 #endif 20 26
-1
arch/mips/kernel/process.c
··· 120 120 struct thread_info *ti = task_thread_info(p); 121 121 struct pt_regs *childregs, *regs = current_pt_regs(); 122 122 unsigned long childksp; 123 - p->set_child_tid = p->clear_child_tid = NULL; 124 123 125 124 childksp = (unsigned long)task_stack_page(p) + THREAD_SIZE - 32; 126 125
-2
arch/openrisc/kernel/process.c
··· 167 167 168 168 top_of_kernel_stack = sp; 169 169 170 - p->set_child_tid = p->clear_child_tid = NULL; 171 - 172 170 /* Locate userspace context on stack... */ 173 171 sp -= STACK_FRAME_OVERHEAD; /* redzone */ 174 172 sp -= sizeof(struct pt_regs);
+2
arch/powerpc/include/uapi/asm/cputable.h
··· 46 46 #define PPC_FEATURE2_HTM_NOSC 0x01000000 47 47 #define PPC_FEATURE2_ARCH_3_00 0x00800000 /* ISA 3.00 */ 48 48 #define PPC_FEATURE2_HAS_IEEE128 0x00400000 /* VSX IEEE Binary Float 128-bit */ 49 + #define PPC_FEATURE2_DARN 0x00200000 /* darn random number insn */ 50 + #define PPC_FEATURE2_SCV 0x00100000 /* scv syscall */ 49 51 50 52 /* 51 53 * IMPORTANT!
+2 -1
arch/powerpc/kernel/cputable.c
··· 124 124 #define COMMON_USER_POWER9 COMMON_USER_POWER8 125 125 #define COMMON_USER2_POWER9 (COMMON_USER2_POWER8 | \ 126 126 PPC_FEATURE2_ARCH_3_00 | \ 127 - PPC_FEATURE2_HAS_IEEE128) 127 + PPC_FEATURE2_HAS_IEEE128 | \ 128 + PPC_FEATURE2_DARN ) 128 129 129 130 #ifdef CONFIG_PPC_BOOK3E_64 130 131 #define COMMON_USER_BOOKE (COMMON_USER_PPC64 | PPC_FEATURE_BOOKE)
+2
arch/powerpc/kernel/prom.c
··· 161 161 { .pabyte = 0, .pabit = 3, .cpu_features = CPU_FTR_CTRL }, 162 162 { .pabyte = 0, .pabit = 6, .cpu_features = CPU_FTR_NOEXECUTE }, 163 163 { .pabyte = 1, .pabit = 2, .mmu_features = MMU_FTR_CI_LARGE_PAGE }, 164 + #ifdef CONFIG_PPC_RADIX_MMU 164 165 { .pabyte = 40, .pabit = 0, .mmu_features = MMU_FTR_TYPE_RADIX }, 166 + #endif 165 167 { .pabyte = 1, .pabit = 1, .invert = 1, .cpu_features = CPU_FTR_NODSISRALIGN }, 166 168 { .pabyte = 5, .pabit = 0, .cpu_features = CPU_FTR_REAL_LE, 167 169 .cpu_user_ftrs = PPC_FEATURE_TRUE_LE },
+3 -1
arch/powerpc/platforms/cell/spu_base.c
··· 197 197 (REGION_ID(ea) != USER_REGION_ID)) { 198 198 199 199 spin_unlock(&spu->register_lock); 200 - ret = hash_page(ea, _PAGE_PRESENT | _PAGE_READ, 0x300, dsisr); 200 + ret = hash_page(ea, 201 + _PAGE_PRESENT | _PAGE_READ | _PAGE_PRIVILEGED, 202 + 0x300, dsisr); 201 203 spin_lock(&spu->register_lock); 202 204 203 205 if (!ret) {
+2 -3
arch/powerpc/platforms/powernv/npu-dma.c
··· 714 714 void pnv_npu2_destroy_context(struct npu_context *npu_context, 715 715 struct pci_dev *gpdev) 716 716 { 717 - struct pnv_phb *nphb, *phb; 717 + struct pnv_phb *nphb; 718 718 struct npu *npu; 719 719 struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0); 720 720 struct device_node *nvlink_dn; ··· 728 728 729 729 nphb = pci_bus_to_host(npdev->bus)->private_data; 730 730 npu = &nphb->npu; 731 - phb = pci_bus_to_host(gpdev->bus)->private_data; 732 731 nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0); 733 732 if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", 734 733 &nvlink_index))) 735 734 return; 736 735 npu_context->npdev[npu->index][nvlink_index] = NULL; 737 - opal_npu_destroy_context(phb->opal_id, npu_context->mm->context.id, 736 + opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id, 738 737 PCI_DEVID(gpdev->bus->number, gpdev->devfn)); 739 738 kref_put(&npu_context->kref, pnv_npu2_release_context); 740 739 }
+1 -1
arch/x86/Kconfig
··· 360 360 Management" code will be disabled if you say Y here. 361 361 362 362 See also <file:Documentation/x86/i386/IO-APIC.txt>, 363 - <file:Documentation/nmi_watchdog.txt> and the SMP-HOWTO available at 363 + <file:Documentation/lockup-watchdogs.txt> and the SMP-HOWTO available at 364 364 <http://www.tldp.org/docs.html#howto>. 365 365 366 366 If you don't know what to do here, say N.
+1 -1
arch/x86/Makefile
··· 159 159 # If '-Os' is enabled, disable it and print a warning. 160 160 ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE 161 161 undefine CONFIG_CC_OPTIMIZE_FOR_SIZE 162 - $(warning Disabling CONFIG_CC_OPTIMIZE_FOR_SIZE. Your compiler does not have -mfentry so you cannot optimize for size with CONFIG_FUNCTION_GRAPH_TRACER.) 162 + $(warning Disabling CONFIG_CC_OPTIMIZE_FOR_SIZE. Your compiler does not have -mfentry so you cannot optimize for size with CONFIG_FUNCTION_GRAPH_TRACER.) 163 163 endif 164 164 165 165 endif
+1 -1
arch/x86/boot/compressed/Makefile
··· 94 94 quiet_cmd_check_data_rel = DATAREL $@ 95 95 define cmd_check_data_rel 96 96 for obj in $(filter %.o,$^); do \ 97 - readelf -S $$obj | grep -qF .rel.local && { \ 97 + ${CROSS_COMPILE}readelf -S $$obj | grep -qF .rel.local && { \ 98 98 echo "error: $$obj has data relocations!" >&2; \ 99 99 exit 1; \ 100 100 } || true; \
+19 -11
arch/x86/entry/entry_32.S
··· 252 252 END(__switch_to_asm) 253 253 254 254 /* 255 + * The unwinder expects the last frame on the stack to always be at the same 256 + * offset from the end of the page, which allows it to validate the stack. 257 + * Calling schedule_tail() directly would break that convention because its an 258 + * asmlinkage function so its argument has to be pushed on the stack. This 259 + * wrapper creates a proper "end of stack" frame header before the call. 260 + */ 261 + ENTRY(schedule_tail_wrapper) 262 + FRAME_BEGIN 263 + 264 + pushl %eax 265 + call schedule_tail 266 + popl %eax 267 + 268 + FRAME_END 269 + ret 270 + ENDPROC(schedule_tail_wrapper) 271 + /* 255 272 * A newly forked process directly context switches into this address. 256 273 * 257 274 * eax: prev task we switched from ··· 276 259 * edi: kernel thread arg 277 260 */ 278 261 ENTRY(ret_from_fork) 279 - FRAME_BEGIN /* help unwinder find end of stack */ 280 - 281 - /* 282 - * schedule_tail() is asmlinkage so we have to put its 'prev' argument 283 - * on the stack. 284 - */ 285 - pushl %eax 286 - call schedule_tail 287 - popl %eax 262 + call schedule_tail_wrapper 288 263 289 264 testl %ebx, %ebx 290 265 jnz 1f /* kernel threads are uncommon */ 291 266 292 267 2: 293 268 /* When we fork, we trace the syscall return in the child, too. */ 294 - leal FRAME_OFFSET(%esp), %eax 269 + movl %esp, %eax 295 270 call syscall_return_slowpath 296 - FRAME_END 297 271 jmp restore_all 298 272 299 273 /* kernel thread */
+4 -7
arch/x86/entry/entry_64.S
··· 36 36 #include <asm/smap.h> 37 37 #include <asm/pgtable_types.h> 38 38 #include <asm/export.h> 39 - #include <asm/frame.h> 40 39 #include <linux/err.h> 41 40 42 41 .code64 ··· 405 406 * r12: kernel thread arg 406 407 */ 407 408 ENTRY(ret_from_fork) 408 - FRAME_BEGIN /* help unwinder find end of stack */ 409 409 movq %rax, %rdi 410 - call schedule_tail /* rdi: 'prev' task parameter */ 410 + call schedule_tail /* rdi: 'prev' task parameter */ 411 411 412 - testq %rbx, %rbx /* from kernel_thread? */ 413 - jnz 1f /* kernel threads are uncommon */ 412 + testq %rbx, %rbx /* from kernel_thread? */ 413 + jnz 1f /* kernel threads are uncommon */ 414 414 415 415 2: 416 - leaq FRAME_OFFSET(%rsp),%rdi /* pt_regs pointer */ 416 + movq %rsp, %rdi 417 417 call syscall_return_slowpath /* returns with IRQs disabled */ 418 418 TRACE_IRQS_ON /* user mode is traced as IRQS on */ 419 419 SWAPGS 420 - FRAME_END 421 420 jmp restore_regs_and_iret 422 421 423 422 1:
+1
arch/x86/include/asm/mce.h
··· 266 266 #endif 267 267 268 268 int mce_available(struct cpuinfo_x86 *c); 269 + bool mce_is_memory_error(struct mce *m); 269 270 270 271 DECLARE_PER_CPU(unsigned, mce_exception_count); 271 272 DECLARE_PER_CPU(unsigned, mce_poll_count);
+7 -2
arch/x86/kernel/alternative.c
··· 409 409 memcpy(insnbuf, replacement, a->replacementlen); 410 410 insnbuf_sz = a->replacementlen; 411 411 412 - /* 0xe8 is a relative jump; fix the offset. */ 413 - if (*insnbuf == 0xe8 && a->replacementlen == 5) { 412 + /* 413 + * 0xe8 is a relative jump; fix the offset. 414 + * 415 + * Instruction length is checked before the opcode to avoid 416 + * accessing uninitialized bytes for zero-length replacements. 417 + */ 418 + if (a->replacementlen == 5 && *insnbuf == 0xe8) { 414 419 *(s32 *)(insnbuf + 1) += replacement - instr; 415 420 DPRINTK("Fix CALL offset: 0x%x, CALL 0x%lx", 416 421 *(s32 *)(insnbuf + 1),
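The swapped condition above relies on short-circuit evaluation: with the length tested first, a zero-length replacement never dereferences the buffer. A small sketch of that check plus the rel32 re-basing it guards (local helper names, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Mirrors the reordered test in the hunk: the length is checked
 * before the opcode, so buf[0] is only read when len == 5 and a
 * zero-length replacement never touches uninitialized bytes.
 */
static bool is_rel32_call(const uint8_t *buf, size_t len)
{
	return len == 5 && buf[0] == 0xe8;	/* 0xe8 = CALL rel32 */
}

/* Re-base the 32-bit displacement when the instruction is moved. */
static void fixup_call_offset(uint8_t *buf, int32_t delta)
{
	int32_t off;

	memcpy(&off, buf + 1, sizeof(off));
	off += delta;
	memcpy(buf + 1, &off, sizeof(off));
}
```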
+6 -7
arch/x86/kernel/cpu/mcheck/mce.c
··· 499 499 return 1; 500 500 } 501 501 502 - static bool memory_error(struct mce *m) 502 + bool mce_is_memory_error(struct mce *m) 503 503 { 504 - struct cpuinfo_x86 *c = &boot_cpu_data; 505 - 506 - if (c->x86_vendor == X86_VENDOR_AMD) { 504 + if (m->cpuvendor == X86_VENDOR_AMD) { 507 505 /* ErrCodeExt[20:16] */ 508 506 u8 xec = (m->status >> 16) & 0x1f; 509 507 510 508 return (xec == 0x0 || xec == 0x8); 511 - } else if (c->x86_vendor == X86_VENDOR_INTEL) { 509 + } else if (m->cpuvendor == X86_VENDOR_INTEL) { 512 510 /* 513 511 * Intel SDM Volume 3B - 15.9.2 Compound Error Codes 514 512 * ··· 527 529 528 530 return false; 529 531 } 532 + EXPORT_SYMBOL_GPL(mce_is_memory_error); 530 533 531 534 static bool cec_add_mce(struct mce *m) 532 535 { ··· 535 536 return false; 536 537 537 538 /* We eat only correctable DRAM errors with usable addresses. */ 538 - if (memory_error(m) && 539 + if (mce_is_memory_error(m) && 539 540 !(m->status & MCI_STATUS_UC) && 540 541 mce_usable_address(m)) 541 542 if (!cec_add_elem(m->addr >> PAGE_SHIFT)) ··· 712 713 713 714 severity = mce_severity(&m, mca_cfg.tolerant, NULL, false); 714 715 715 - if (severity == MCE_DEFERRED_SEVERITY && memory_error(&m)) 716 + if (severity == MCE_DEFERRED_SEVERITY && mce_is_memory_error(&m)) 716 717 if (m.status & MCI_STATUS_ADDRV) 717 718 m.severity = severity; 718 719
+8 -8
arch/x86/kernel/cpu/microcode/amd.c
··· 320 320 } 321 321 322 322 static enum ucode_state 323 - load_microcode_amd(int cpu, u8 family, const u8 *data, size_t size); 323 + load_microcode_amd(bool save, u8 family, const u8 *data, size_t size); 324 324 325 325 int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax) 326 326 { ··· 338 338 if (!desc.mc) 339 339 return -EINVAL; 340 340 341 - ret = load_microcode_amd(smp_processor_id(), x86_family(cpuid_1_eax), 342 - desc.data, desc.size); 341 + ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size); 343 342 if (ret != UCODE_OK) 344 343 return -EINVAL; 345 344 ··· 674 675 } 675 676 676 677 static enum ucode_state 677 - load_microcode_amd(int cpu, u8 family, const u8 *data, size_t size) 678 + load_microcode_amd(bool save, u8 family, const u8 *data, size_t size) 678 679 { 679 680 enum ucode_state ret; 680 681 ··· 688 689 689 690 #ifdef CONFIG_X86_32 690 691 /* save BSP's matching patch for early load */ 691 - if (cpu_data(cpu).cpu_index == boot_cpu_data.cpu_index) { 692 - struct ucode_patch *p = find_patch(cpu); 692 + if (save) { 693 + struct ucode_patch *p = find_patch(0); 693 694 if (p) { 694 695 memset(amd_ucode_patch, 0, PATCH_MAX_SIZE); 695 696 memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), ··· 721 722 { 722 723 char fw_name[36] = "amd-ucode/microcode_amd.bin"; 723 724 struct cpuinfo_x86 *c = &cpu_data(cpu); 725 + bool bsp = c->cpu_index == boot_cpu_data.cpu_index; 724 726 enum ucode_state ret = UCODE_NFOUND; 725 727 const struct firmware *fw; 726 728 727 729 /* reload ucode container only on the boot cpu */ 728 - if (!refresh_fw || c->cpu_index != boot_cpu_data.cpu_index) 730 + if (!refresh_fw || !bsp) 729 731 return UCODE_OK; 730 732 731 733 if (c->x86 >= 0x15) ··· 743 743 goto fw_release; 744 744 } 745 745 746 - ret = load_microcode_amd(cpu, c->x86, fw->data, fw->size); 746 + ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size); 747 747 748 748 fw_release: 749 749 release_firmware(fw);
+14 -6
arch/x86/kernel/ftrace.c
··· 689 689 { 690 690 return module_alloc(size); 691 691 } 692 - static inline void tramp_free(void *tramp) 692 + static inline void tramp_free(void *tramp, int size) 693 693 { 694 + int npages = PAGE_ALIGN(size) >> PAGE_SHIFT; 695 + 696 + set_memory_nx((unsigned long)tramp, npages); 697 + set_memory_rw((unsigned long)tramp, npages); 694 698 module_memfree(tramp); 695 699 } 696 700 #else ··· 703 699 { 704 700 return NULL; 705 701 } 706 - static inline void tramp_free(void *tramp) { } 702 + static inline void tramp_free(void *tramp, int size) { } 707 703 #endif 708 704 709 705 /* Defined as markers to the end of the ftrace default trampolines */ ··· 775 771 /* Copy ftrace_caller onto the trampoline memory */ 776 772 ret = probe_kernel_read(trampoline, (void *)start_offset, size); 777 773 if (WARN_ON(ret < 0)) { 778 - tramp_free(trampoline); 774 + tramp_free(trampoline, *tramp_size); 779 775 return 0; 780 776 } 781 777 ··· 801 797 802 798 /* Are we pointing to the reference? */ 803 799 if (WARN_ON(memcmp(op_ptr.op, op_ref, 3) != 0)) { 804 - tramp_free(trampoline); 800 + tramp_free(trampoline, *tramp_size); 805 801 return 0; 806 802 } 807 803 ··· 843 839 unsigned long offset; 844 840 unsigned long ip; 845 841 unsigned int size; 846 - int ret; 842 + int ret, npages; 847 843 848 844 if (ops->trampoline) { 849 845 /* ··· 852 848 */ 853 849 if (!(ops->flags & FTRACE_OPS_FL_ALLOC_TRAMP)) 854 850 return; 851 + npages = PAGE_ALIGN(ops->trampoline_size) >> PAGE_SHIFT; 852 + set_memory_rw(ops->trampoline, npages); 855 853 } else { 856 854 ops->trampoline = create_trampoline(ops, &size); 857 855 if (!ops->trampoline) 858 856 return; 859 857 ops->trampoline_size = size; 858 + npages = PAGE_ALIGN(size) >> PAGE_SHIFT; 860 859 } 861 860 862 861 offset = calc_trampoline_call_offset(ops->flags & FTRACE_OPS_FL_SAVE_REGS); ··· 870 863 /* Do a safe modify in case the trampoline is executing */ 871 864 new = ftrace_call_replace(ip, (unsigned long)func); 872 865 ret = 
update_ftrace_func(ip, new); 866 + set_memory_ro(ops->trampoline, npages); 873 867 874 868 /* The update should never fail */ 875 869 WARN_ON(ret); ··· 947 939 if (!ops || !(ops->flags & FTRACE_OPS_FL_ALLOC_TRAMP)) 948 940 return; 949 941 950 - tramp_free((void *)ops->trampoline); 942 + tramp_free((void *)ops->trampoline, ops->trampoline_size); 951 943 ops->trampoline = 0; 952 944 } 953 945
+9
arch/x86/kernel/kprobes/core.c
··· 52 52 #include <linux/ftrace.h> 53 53 #include <linux/frame.h> 54 54 #include <linux/kasan.h> 55 + #include <linux/moduleloader.h> 55 56 56 57 #include <asm/text-patching.h> 57 58 #include <asm/cacheflush.h> ··· 416 415 } else { 417 416 p->ainsn.boostable = false; 418 417 } 418 + } 419 + 420 + /* Recover page to RW mode before releasing it */ 421 + void free_insn_page(void *page) 422 + { 423 + set_memory_nx((unsigned long)page & PAGE_MASK, 1); 424 + set_memory_rw((unsigned long)page & PAGE_MASK, 1); 425 + module_memfree(page); 419 426 } 420 427 421 428 static int arch_copy_kprobe(struct kprobe *p)
+1 -1
arch/x86/kernel/process_32.c
··· 78 78 79 79 printk(KERN_DEFAULT "EIP: %pS\n", (void *)regs->ip); 80 80 printk(KERN_DEFAULT "EFLAGS: %08lx CPU: %d\n", regs->flags, 81 - smp_processor_id()); 81 + raw_smp_processor_id()); 82 82 83 83 printk(KERN_DEFAULT "EAX: %08lx EBX: %08lx ECX: %08lx EDX: %08lx\n", 84 84 regs->ax, regs->bx, regs->cx, regs->dx);
+2 -2
arch/x86/kernel/setup.c
··· 980 980 */ 981 981 x86_configure_nx(); 982 982 983 - simple_udelay_calibration(); 984 - 985 983 parse_early_param(); 986 984 987 985 #ifdef CONFIG_MEMORY_HOTPLUG ··· 1038 1040 * needs to be done after dmi_scan_machine, for the BP. 1039 1041 */ 1040 1042 init_hypervisor_platform(); 1043 + 1044 + simple_udelay_calibration(); 1041 1045 1042 1046 x86_init.resources.probe_roms(); 1043 1047
+40 -9
arch/x86/kernel/unwind_frame.c
··· 104 104 return (unsigned long *)task_pt_regs(state->task) - 2; 105 105 } 106 106 107 + static bool is_last_frame(struct unwind_state *state) 108 + { 109 + return state->bp == last_frame(state); 110 + } 111 + 107 112 #ifdef CONFIG_X86_32 108 113 #define GCC_REALIGN_WORDS 3 109 114 #else ··· 120 115 return last_frame(state) - GCC_REALIGN_WORDS; 121 116 } 122 117 123 - static bool is_last_task_frame(struct unwind_state *state) 118 + static bool is_last_aligned_frame(struct unwind_state *state) 124 119 { 125 120 unsigned long *last_bp = last_frame(state); 126 121 unsigned long *aligned_bp = last_aligned_frame(state); 127 122 128 123 /* 129 - * We have to check for the last task frame at two different locations 130 - * because gcc can occasionally decide to realign the stack pointer and 131 - * change the offset of the stack frame in the prologue of a function 132 - * called by head/entry code. Examples: 124 + * GCC can occasionally decide to realign the stack pointer and change 125 + * the offset of the stack frame in the prologue of a function called 126 + * by head/entry code. Examples: 133 127 * 134 128 * <start_secondary>: 135 129 * push %edi ··· 145 141 * push %rbp 146 142 * mov %rsp,%rbp 147 143 * 148 - * Note that after aligning the stack, it pushes a duplicate copy of 149 - * the return address before pushing the frame pointer. 144 + * After aligning the stack, it pushes a duplicate copy of the return 145 + * address before pushing the frame pointer. 
150 146 */ 151 - return (state->bp == last_bp || 152 - (state->bp == aligned_bp && *(aligned_bp+1) == *(last_bp+1))); 147 + return (state->bp == aligned_bp && *(aligned_bp + 1) == *(last_bp + 1)); 148 + } 149 + 150 + static bool is_last_ftrace_frame(struct unwind_state *state) 151 + { 152 + unsigned long *last_bp = last_frame(state); 153 + unsigned long *last_ftrace_bp = last_bp - 3; 154 + 155 + /* 156 + * When unwinding from an ftrace handler of a function called by entry 157 + * code, the stack layout of the last frame is: 158 + * 159 + * bp 160 + * parent ret addr 161 + * bp 162 + * function ret addr 163 + * parent ret addr 164 + * pt_regs 165 + * ----------------- 166 + */ 167 + return (state->bp == last_ftrace_bp && 168 + *state->bp == *(state->bp + 2) && 169 + *(state->bp + 1) == *(state->bp + 4)); 170 + } 171 + 172 + static bool is_last_task_frame(struct unwind_state *state) 173 + { 174 + return is_last_frame(state) || is_last_aligned_frame(state) || 175 + is_last_ftrace_frame(state); 153 176 } 154 177 155 178 /*
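The new `is_last_ftrace_frame()` check recognizes the duplicated words in the stack layout sketched in its comment. A toy model of that pairwise comparison, using an array of `unsigned long` slots in place of a real stack:

```c
#include <stdbool.h>

/*
 * With the layout from the hunk's comment (bp, parent ret addr, bp,
 * function ret addr, parent ret addr), the duplicated frame pointer
 * and the duplicated parent return address must match pairwise.
 */
static bool last_ftrace_frame(const unsigned long *bp,
			      const unsigned long *last_bp)
{
	return bp == last_bp - 3 &&
	       bp[0] == bp[2] &&	/* duplicated frame pointer */
	       bp[1] == bp[4];		/* duplicated parent ret addr */
}
```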
+4 -1
arch/x86/kvm/lapic.c
··· 1495 1495 1496 1496 static void cancel_hv_timer(struct kvm_lapic *apic) 1497 1497 { 1498 + preempt_disable(); 1498 1499 kvm_x86_ops->cancel_hv_timer(apic->vcpu); 1499 1500 apic->lapic_timer.hv_timer_in_use = false; 1501 + preempt_enable(); 1500 1502 } 1501 1503 1502 1504 static bool start_hv_timer(struct kvm_lapic *apic) ··· 1936 1934 for (i = 0; i < KVM_APIC_LVT_NUM; i++) 1937 1935 kvm_lapic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED); 1938 1936 apic_update_lvtt(apic); 1939 - if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_LINT0_REENABLED)) 1937 + if (kvm_vcpu_is_reset_bsp(vcpu) && 1938 + kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_LINT0_REENABLED)) 1940 1939 kvm_lapic_set_reg(apic, APIC_LVT0, 1941 1940 SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT)); 1942 1941 apic_manage_nmi_watchdog(apic, kvm_lapic_get_reg(apic, APIC_LVT0));
+12 -14
arch/x86/kvm/svm.c
··· 1807 1807 * AMD's VMCB does not have an explicit unusable field, so emulate it 1808 1808 * for cross vendor migration purposes by "not present" 1809 1809 */ 1810 - var->unusable = !var->present || (var->type == 0); 1810 + var->unusable = !var->present; 1811 1811 1812 1812 switch (seg) { 1813 1813 case VCPU_SREG_TR: ··· 1840 1840 */ 1841 1841 if (var->unusable) 1842 1842 var->db = 0; 1843 + /* This is symmetric with svm_set_segment() */ 1843 1844 var->dpl = to_svm(vcpu)->vmcb->save.cpl; 1844 1845 break; 1845 1846 } ··· 1981 1980 s->base = var->base; 1982 1981 s->limit = var->limit; 1983 1982 s->selector = var->selector; 1984 - if (var->unusable) 1985 - s->attrib = 0; 1986 - else { 1987 - s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK); 1988 - s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT; 1989 - s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT; 1990 - s->attrib |= (var->present & 1) << SVM_SELECTOR_P_SHIFT; 1991 - s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT; 1992 - s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT; 1993 - s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT; 1994 - s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT; 1995 - } 1983 + s->attrib = (var->type & SVM_SELECTOR_TYPE_MASK); 1984 + s->attrib |= (var->s & 1) << SVM_SELECTOR_S_SHIFT; 1985 + s->attrib |= (var->dpl & 3) << SVM_SELECTOR_DPL_SHIFT; 1986 + s->attrib |= ((var->present & 1) && !var->unusable) << SVM_SELECTOR_P_SHIFT; 1987 + s->attrib |= (var->avl & 1) << SVM_SELECTOR_AVL_SHIFT; 1988 + s->attrib |= (var->l & 1) << SVM_SELECTOR_L_SHIFT; 1989 + s->attrib |= (var->db & 1) << SVM_SELECTOR_DB_SHIFT; 1990 + s->attrib |= (var->g & 1) << SVM_SELECTOR_G_SHIFT; 1996 1991 1997 1992 /* 1998 1993 * This is always accurate, except if SYSRET returned to a segment ··· 1997 2000 * would entail passing the CPL to userspace and back. 
1998 2001 */ 1999 2002 if (seg == VCPU_SREG_SS) 2000 - svm->vmcb->save.cpl = (s->attrib >> SVM_SELECTOR_DPL_SHIFT) & 3; 2003 + /* This is symmetric with svm_get_segment() */ 2004 + svm->vmcb->save.cpl = (var->dpl & 3); 2001 2005 2002 2006 mark_dirty(svm->vmcb, VMCB_SEG); 2003 2007 }
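The rewritten `svm_set_segment()` no longer zeroes the whole attribute word for "unusable" segments; it clears only the present bit, so type and DPL survive a save/restore round trip. A sketch of that packing (bit positions per the SVM VMCB selector-attribute layout; struct and macro names are local stand-ins):

```c
#include <stdbool.h>
#include <stdint.h>

#define SEL_TYPE_MASK 0x0f
#define SEL_S_SHIFT   4
#define SEL_DPL_SHIFT 5
#define SEL_P_SHIFT   7

struct seg_var { uint8_t type, s, dpl, present; bool unusable; };

/*
 * As in the fixed hunk: only the present bit reflects 'unusable',
 * leaving the remaining attribute fields intact.
 */
static uint16_t pack_attrib(const struct seg_var *v)
{
	uint16_t a = v->type & SEL_TYPE_MASK;

	a |= (v->s & 1) << SEL_S_SHIFT;
	a |= (v->dpl & 3) << SEL_DPL_SHIFT;
	a |= ((v->present & 1) && !v->unusable) << SEL_P_SHIFT;
	return a;
}
```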
+62 -85
arch/x86/kvm/vmx.c
··· 6914 6914 return 0; 6915 6915 } 6916 6916 6917 - /* 6918 - * This function performs the various checks including 6919 - * - if it's 4KB aligned 6920 - * - No bits beyond the physical address width are set 6921 - * - Returns 0 on success or else 1 6922 - * (Intel SDM Section 30.3) 6923 - */ 6924 - static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason, 6925 - gpa_t *vmpointer) 6917 + static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer) 6926 6918 { 6927 6919 gva_t gva; 6928 - gpa_t vmptr; 6929 6920 struct x86_exception e; 6930 - struct page *page; 6931 - struct vcpu_vmx *vmx = to_vmx(vcpu); 6932 - int maxphyaddr = cpuid_maxphyaddr(vcpu); 6933 6921 6934 6922 if (get_vmx_mem_address(vcpu, vmcs_readl(EXIT_QUALIFICATION), 6935 6923 vmcs_read32(VMX_INSTRUCTION_INFO), false, &gva)) 6936 6924 return 1; 6937 6925 6938 - if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, &vmptr, 6939 - sizeof(vmptr), &e)) { 6926 + if (kvm_read_guest_virt(&vcpu->arch.emulate_ctxt, gva, vmpointer, 6927 + sizeof(*vmpointer), &e)) { 6940 6928 kvm_inject_page_fault(vcpu, &e); 6941 6929 return 1; 6942 6930 } 6943 6931 6944 - switch (exit_reason) { 6945 - case EXIT_REASON_VMON: 6946 - /* 6947 - * SDM 3: 24.11.5 6948 - * The first 4 bytes of VMXON region contain the supported 6949 - * VMCS revision identifier 6950 - * 6951 - * Note - IA32_VMX_BASIC[48] will never be 1 6952 - * for the nested case; 6953 - * which replaces physical address width with 32 6954 - * 6955 - */ 6956 - if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) { 6957 - nested_vmx_failInvalid(vcpu); 6958 - return kvm_skip_emulated_instruction(vcpu); 6959 - } 6960 - 6961 - page = nested_get_page(vcpu, vmptr); 6962 - if (page == NULL) { 6963 - nested_vmx_failInvalid(vcpu); 6964 - return kvm_skip_emulated_instruction(vcpu); 6965 - } 6966 - if (*(u32 *)kmap(page) != VMCS12_REVISION) { 6967 - kunmap(page); 6968 - nested_release_page_clean(page); 6969 - nested_vmx_failInvalid(vcpu); 6970 - 
return kvm_skip_emulated_instruction(vcpu); 6971 - } 6972 - kunmap(page); 6973 - nested_release_page_clean(page); 6974 - vmx->nested.vmxon_ptr = vmptr; 6975 - break; 6976 - case EXIT_REASON_VMCLEAR: 6977 - if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) { 6978 - nested_vmx_failValid(vcpu, 6979 - VMXERR_VMCLEAR_INVALID_ADDRESS); 6980 - return kvm_skip_emulated_instruction(vcpu); 6981 - } 6982 - 6983 - if (vmptr == vmx->nested.vmxon_ptr) { 6984 - nested_vmx_failValid(vcpu, 6985 - VMXERR_VMCLEAR_VMXON_POINTER); 6986 - return kvm_skip_emulated_instruction(vcpu); 6987 - } 6988 - break; 6989 - case EXIT_REASON_VMPTRLD: 6990 - if (!PAGE_ALIGNED(vmptr) || (vmptr >> maxphyaddr)) { 6991 - nested_vmx_failValid(vcpu, 6992 - VMXERR_VMPTRLD_INVALID_ADDRESS); 6993 - return kvm_skip_emulated_instruction(vcpu); 6994 - } 6995 - 6996 - if (vmptr == vmx->nested.vmxon_ptr) { 6997 - nested_vmx_failValid(vcpu, 6998 - VMXERR_VMPTRLD_VMXON_POINTER); 6999 - return kvm_skip_emulated_instruction(vcpu); 7000 - } 7001 - break; 7002 - default: 7003 - return 1; /* shouldn't happen */ 7004 - } 7005 - 7006 - if (vmpointer) 7007 - *vmpointer = vmptr; 7008 6932 return 0; 7009 6933 } 7010 6934 ··· 6990 7066 static int handle_vmon(struct kvm_vcpu *vcpu) 6991 7067 { 6992 7068 int ret; 7069 + gpa_t vmptr; 7070 + struct page *page; 6993 7071 struct vcpu_vmx *vmx = to_vmx(vcpu); 6994 7072 const u64 VMXON_NEEDED_FEATURES = FEATURE_CONTROL_LOCKED 6995 7073 | FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX; ··· 7021 7095 return 1; 7022 7096 } 7023 7097 7024 - if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMON, NULL)) 7098 + if (nested_vmx_get_vmptr(vcpu, &vmptr)) 7025 7099 return 1; 7026 - 7100 + 7101 + /* 7102 + * SDM 3: 24.11.5 7103 + * The first 4 bytes of VMXON region contain the supported 7104 + * VMCS revision identifier 7105 + * 7106 + * Note - IA32_VMX_BASIC[48] will never be 1 for the nested case; 7107 + * which replaces physical address width with 32 7108 + */ 7109 + if (!PAGE_ALIGNED(vmptr) || (vmptr 
>> cpuid_maxphyaddr(vcpu))) { 7110 + nested_vmx_failInvalid(vcpu); 7111 + return kvm_skip_emulated_instruction(vcpu); 7112 + } 7113 + 7114 + page = nested_get_page(vcpu, vmptr); 7115 + if (page == NULL) { 7116 + nested_vmx_failInvalid(vcpu); 7117 + return kvm_skip_emulated_instruction(vcpu); 7118 + } 7119 + if (*(u32 *)kmap(page) != VMCS12_REVISION) { 7120 + kunmap(page); 7121 + nested_release_page_clean(page); 7122 + nested_vmx_failInvalid(vcpu); 7123 + return kvm_skip_emulated_instruction(vcpu); 7124 + } 7125 + kunmap(page); 7126 + nested_release_page_clean(page); 7127 + 7128 + vmx->nested.vmxon_ptr = vmptr; 7027 7129 ret = enter_vmx_operation(vcpu); 7028 7130 if (ret) 7029 7131 return ret; ··· 7167 7213 if (!nested_vmx_check_permission(vcpu)) 7168 7214 return 1; 7169 7215 7170 - if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMCLEAR, &vmptr)) 7216 + if (nested_vmx_get_vmptr(vcpu, &vmptr)) 7171 7217 return 1; 7218 + 7219 + if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) { 7220 + nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_INVALID_ADDRESS); 7221 + return kvm_skip_emulated_instruction(vcpu); 7222 + } 7223 + 7224 + if (vmptr == vmx->nested.vmxon_ptr) { 7225 + nested_vmx_failValid(vcpu, VMXERR_VMCLEAR_VMXON_POINTER); 7226 + return kvm_skip_emulated_instruction(vcpu); 7227 + } 7172 7228 7173 7229 if (vmptr == vmx->nested.current_vmptr) 7174 7230 nested_release_vmcs12(vmx); ··· 7509 7545 if (!nested_vmx_check_permission(vcpu)) 7510 7546 return 1; 7511 7547 7512 - if (nested_vmx_check_vmptr(vcpu, EXIT_REASON_VMPTRLD, &vmptr)) 7548 + if (nested_vmx_get_vmptr(vcpu, &vmptr)) 7513 7549 return 1; 7550 + 7551 + if (!PAGE_ALIGNED(vmptr) || (vmptr >> cpuid_maxphyaddr(vcpu))) { 7552 + nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_INVALID_ADDRESS); 7553 + return kvm_skip_emulated_instruction(vcpu); 7554 + } 7555 + 7556 + if (vmptr == vmx->nested.vmxon_ptr) { 7557 + nested_vmx_failValid(vcpu, VMXERR_VMPTRLD_VMXON_POINTER); 7558 + return kvm_skip_emulated_instruction(vcpu); 
7559 + } 7514 7560 7515 7561 if (vmx->nested.current_vmptr != vmptr) { 7516 7562 struct vmcs12 *new_vmcs12; ··· 7887 7913 { 7888 7914 unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION); 7889 7915 int cr = exit_qualification & 15; 7890 - int reg = (exit_qualification >> 8) & 15; 7891 - unsigned long val = kvm_register_readl(vcpu, reg); 7916 + int reg; 7917 + unsigned long val; 7892 7918 7893 7919 switch ((exit_qualification >> 4) & 3) { 7894 7920 case 0: /* mov to cr */ 7921 + reg = (exit_qualification >> 8) & 15; 7922 + val = kvm_register_readl(vcpu, reg); 7895 7923 switch (cr) { 7896 7924 case 0: 7897 7925 if (vmcs12->cr0_guest_host_mask & ··· 7948 7972 * lmsw can change bits 1..3 of cr0, and only set bit 0 of 7949 7973 * cr0. Other attempted changes are ignored, with no exit. 7950 7974 */ 7975 + val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f; 7951 7976 if (vmcs12->cr0_guest_host_mask & 0xe & 7952 7977 (val ^ vmcs12->cr0_read_shadow)) 7953 7978 return true;
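The vmx.c refactor above moves the pointer sanity check out of nested_vmx_check_vmptr and repeats it inline at each VMXON/VMCLEAR/VMPTRLD site. The check itself is small: the guest-supplied pointer must be 4KB aligned and must not set bits at or above the guest's physical address width. A standalone sketch of that predicate (hypothetical helper name, not part of the patch):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* A guest VM pointer is usable only if it is 4KB aligned and fits
 * within the guest's physical address width (maxphyaddr bits). */
static bool vmptr_ok(uint64_t vmptr, int maxphyaddr)
{
	if (vmptr & (PAGE_SIZE - 1))	/* not page aligned */
		return false;
	if (vmptr >> maxphyaddr)	/* bits beyond physical address width */
		return false;
	return true;
}
```

The duplication is deliberate: each instruction reports a different VMX error code on failure (fail-invalid for VMXON, VMXERR_*_INVALID_ADDRESS for the others), so centralizing the check forced the removed switch on exit_reason.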
+5 -2
arch/x86/kvm/x86.c
··· 8394 8394 if (vcpu->arch.pv.pv_unhalted) 8395 8395 return true; 8396 8396 8397 - if (atomic_read(&vcpu->arch.nmi_queued)) 8397 + if (kvm_test_request(KVM_REQ_NMI, vcpu) || 8398 + (vcpu->arch.nmi_pending && 8399 + kvm_x86_ops->nmi_allowed(vcpu))) 8398 8400 return true; 8399 8401 8400 - if (kvm_test_request(KVM_REQ_SMI, vcpu)) 8402 + if (kvm_test_request(KVM_REQ_SMI, vcpu) || 8403 + (vcpu->arch.smi_pending && !is_smm(vcpu))) 8401 8404 return true; 8402 8405 8403 8406 if (kvm_arch_interrupt_allowed(vcpu) &&
+1 -1
arch/x86/mm/pageattr.c
··· 186 186 unsigned int i, level; 187 187 unsigned long addr; 188 188 189 - BUG_ON(irqs_disabled()); 189 + BUG_ON(irqs_disabled() && !early_boot_irqs_disabled); 190 190 WARN_ON(PAGE_ALIGN(start) != start); 191 191 192 192 on_each_cpu(__cpa_flush_range, NULL, 1);
+4 -2
arch/x86/platform/efi/efi.c
··· 828 828 829 829 /* 830 830 * We don't do virtual mode, since we don't do runtime services, on 831 - * non-native EFI 831 + * non-native EFI. With efi=old_map, we don't do runtime services in 832 + * kexec kernel because in the initial boot something else might 833 + * have been mapped at these virtual addresses. 832 834 */ 833 - if (!efi_is_native()) { 835 + if (!efi_is_native() || efi_enabled(EFI_OLD_MEMMAP)) { 834 836 efi_memmap_unmap(); 835 837 clear_bit(EFI_RUNTIME_SERVICES, &efi.flags); 836 838 return;
+71 -8
arch/x86/platform/efi/efi_64.c
··· 71 71 72 72 pgd_t * __init efi_call_phys_prolog(void) 73 73 { 74 - unsigned long vaddress; 75 - pgd_t *save_pgd; 74 + unsigned long vaddr, addr_pgd, addr_p4d, addr_pud; 75 + pgd_t *save_pgd, *pgd_k, *pgd_efi; 76 + p4d_t *p4d, *p4d_k, *p4d_efi; 77 + pud_t *pud; 76 78 77 79 int pgd; 78 - int n_pgds; 80 + int n_pgds, i, j; 79 81 80 82 if (!efi_enabled(EFI_OLD_MEMMAP)) { 81 83 save_pgd = (pgd_t *)read_cr3(); ··· 90 88 n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE); 91 89 save_pgd = kmalloc_array(n_pgds, sizeof(*save_pgd), GFP_KERNEL); 92 90 91 + /* 92 + * Build 1:1 identity mapping for efi=old_map usage. Note that 93 + * PAGE_OFFSET is PGDIR_SIZE aligned when KASLR is disabled, while 94 + * it is PUD_SIZE ALIGNED with KASLR enabled. So for a given physical 95 + * address X, the pud_index(X) != pud_index(__va(X)), we can only copy 96 + * PUD entry of __va(X) to fill in pud entry of X to build 1:1 mapping. 97 + * This means here we can only reuse the PMD tables of the direct mapping. 
98 + */ 93 99 for (pgd = 0; pgd < n_pgds; pgd++) { 94 - save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE); 95 - vaddress = (unsigned long)__va(pgd * PGDIR_SIZE); 96 - set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress)); 100 + addr_pgd = (unsigned long)(pgd * PGDIR_SIZE); 101 + vaddr = (unsigned long)__va(pgd * PGDIR_SIZE); 102 + pgd_efi = pgd_offset_k(addr_pgd); 103 + save_pgd[pgd] = *pgd_efi; 104 + 105 + p4d = p4d_alloc(&init_mm, pgd_efi, addr_pgd); 106 + if (!p4d) { 107 + pr_err("Failed to allocate p4d table!\n"); 108 + goto out; 109 + } 110 + 111 + for (i = 0; i < PTRS_PER_P4D; i++) { 112 + addr_p4d = addr_pgd + i * P4D_SIZE; 113 + p4d_efi = p4d + p4d_index(addr_p4d); 114 + 115 + pud = pud_alloc(&init_mm, p4d_efi, addr_p4d); 116 + if (!pud) { 117 + pr_err("Failed to allocate pud table!\n"); 118 + goto out; 119 + } 120 + 121 + for (j = 0; j < PTRS_PER_PUD; j++) { 122 + addr_pud = addr_p4d + j * PUD_SIZE; 123 + 124 + if (addr_pud > (max_pfn << PAGE_SHIFT)) 125 + break; 126 + 127 + vaddr = (unsigned long)__va(addr_pud); 128 + 129 + pgd_k = pgd_offset_k(vaddr); 130 + p4d_k = p4d_offset(pgd_k, vaddr); 131 + pud[j] = *pud_offset(p4d_k, vaddr); 132 + } 133 + } 97 134 } 98 135 out: 99 136 __flush_tlb_all(); ··· 145 104 /* 146 105 * After the lock is released, the original page table is restored. 
147 106 */ 148 - int pgd_idx; 107 + int pgd_idx, i; 149 108 int nr_pgds; 109 + pgd_t *pgd; 110 + p4d_t *p4d; 111 + pud_t *pud; 150 112 151 113 if (!efi_enabled(EFI_OLD_MEMMAP)) { 152 114 write_cr3((unsigned long)save_pgd); ··· 159 115 160 116 nr_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE); 161 117 162 - for (pgd_idx = 0; pgd_idx < nr_pgds; pgd_idx++) 118 + for (pgd_idx = 0; pgd_idx < nr_pgds; pgd_idx++) { 119 + pgd = pgd_offset_k(pgd_idx * PGDIR_SIZE); 163 120 set_pgd(pgd_offset_k(pgd_idx * PGDIR_SIZE), save_pgd[pgd_idx]); 121 + 122 + if (!(pgd_val(*pgd) & _PAGE_PRESENT)) 123 + continue; 124 + 125 + for (i = 0; i < PTRS_PER_P4D; i++) { 126 + p4d = p4d_offset(pgd, 127 + pgd_idx * PGDIR_SIZE + i * P4D_SIZE); 128 + 129 + if (!(p4d_val(*p4d) & _PAGE_PRESENT)) 130 + continue; 131 + 132 + pud = (pud_t *)p4d_page_vaddr(*p4d); 133 + pud_free(&init_mm, pud); 134 + } 135 + 136 + p4d = (p4d_t *)pgd_page_vaddr(*pgd); 137 + p4d_free(&init_mm, p4d); 138 + } 164 139 165 140 kfree(save_pgd); 166 141
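The comment added to efi_call_phys_prolog hinges on an index observation: when PAGE_OFFSET is only PUD-aligned (KASLR enabled) rather than PGDIR-aligned, pud_index(X) and pud_index(__va(X)) can differ, so whole PGD entries cannot simply be copied and the code must fill individual PUD entries instead. The index arithmetic can be checked in isolation with 4-level x86-64 paging constants (the offsets below are illustrative only):

```c
#include <stdint.h>

/* x86-64 4-level paging constants. */
#define PUD_SHIFT	30
#define PGDIR_SHIFT	39
#define PTRS_PER_PUD	512

/* Index of the PUD entry that maps a given address. */
static unsigned int pud_index(uint64_t addr)
{
	return (addr >> PUD_SHIFT) & (PTRS_PER_PUD - 1);
}
```

Adding a PGDIR_SIZE multiple (1 << 39) leaves the PUD index unchanged, while a bare PUD_SIZE offset (1 << 30) shifts it, which is exactly why the patch copies at PUD granularity.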
+3
arch/x86/platform/efi/quirks.c
··· 360 360 free_bootmem_late(start, size); 361 361 } 362 362 363 + if (!num_entries) 364 + return; 365 + 363 366 new_size = efi.memmap.desc_size * num_entries; 364 367 new_phys = efi_memmap_alloc(num_entries); 365 368 if (!new_phys) {
+1 -1
block/blk-cgroup.c
··· 74 74 blkcg_policy[i]->pd_free_fn(blkg->pd[i]); 75 75 76 76 if (blkg->blkcg != &blkcg_root) 77 - blk_exit_rl(&blkg->rl); 77 + blk_exit_rl(blkg->q, &blkg->rl); 78 78 79 79 blkg_rwstat_exit(&blkg->stat_ios); 80 80 blkg_rwstat_exit(&blkg->stat_bytes);
+8 -2
block/blk-core.c
··· 648 648 if (!rl->rq_pool) 649 649 return -ENOMEM; 650 650 651 + if (rl != &q->root_rl) 652 + WARN_ON_ONCE(!blk_get_queue(q)); 653 + 651 654 return 0; 652 655 } 653 656 654 - void blk_exit_rl(struct request_list *rl) 657 + void blk_exit_rl(struct request_queue *q, struct request_list *rl) 655 658 { 656 - if (rl->rq_pool) 659 + if (rl->rq_pool) { 657 660 mempool_destroy(rl->rq_pool); 661 + if (rl != &q->root_rl) 662 + blk_put_queue(q); 663 + } 658 664 } 659 665 660 666 struct request_queue *blk_alloc_queue(gfp_t gfp_mask)
+9 -20
block/blk-mq.c
··· 628 628 } 629 629 EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list); 630 630 631 - void blk_mq_abort_requeue_list(struct request_queue *q) 632 - { 633 - unsigned long flags; 634 - LIST_HEAD(rq_list); 635 - 636 - spin_lock_irqsave(&q->requeue_lock, flags); 637 - list_splice_init(&q->requeue_list, &rq_list); 638 - spin_unlock_irqrestore(&q->requeue_lock, flags); 639 - 640 - while (!list_empty(&rq_list)) { 641 - struct request *rq; 642 - 643 - rq = list_first_entry(&rq_list, struct request, queuelist); 644 - list_del_init(&rq->queuelist); 645 - blk_mq_end_request(rq, -EIO); 646 - } 647 - } 648 - EXPORT_SYMBOL(blk_mq_abort_requeue_list); 649 - 650 631 struct request *blk_mq_tag_to_rq(struct blk_mq_tags *tags, unsigned int tag) 651 632 { 652 633 if (tag < tags->nr_tags) { ··· 2641 2660 return ret; 2642 2661 } 2643 2662 2644 - void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues) 2663 + static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, 2664 + int nr_hw_queues) 2645 2665 { 2646 2666 struct request_queue *q; 2647 2667 ··· 2665 2683 2666 2684 list_for_each_entry(q, &set->tag_list, tag_set_list) 2667 2685 blk_mq_unfreeze_queue(q); 2686 + } 2687 + 2688 + void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues) 2689 + { 2690 + mutex_lock(&set->tag_list_lock); 2691 + __blk_mq_update_nr_hw_queues(set, nr_hw_queues); 2692 + mutex_unlock(&set->tag_list_lock); 2668 2693 } 2669 2694 EXPORT_SYMBOL_GPL(blk_mq_update_nr_hw_queues); 2670 2695
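The blk-mq change splits blk_mq_update_nr_hw_queues into a locked public wrapper plus an unlocked double-underscore worker, a common kernel pattern that lets internal callers who already hold tag_list_lock reach the worker directly. A userspace sketch of the same shape, using a pthread mutex in place of the kernel mutex (all names here are illustrative):

```c
#include <pthread.h>

static pthread_mutex_t set_lock = PTHREAD_MUTEX_INITIALIZER;
static int nr_hw_queues;

/* Worker: caller must already hold set_lock. */
static void __update_nr_hw_queues(int nr)
{
	nr_hw_queues = nr;	/* stand-in for the real freeze/remap work */
}

/* Public entry point: takes the lock, then calls the worker. */
static void update_nr_hw_queues(int nr)
{
	pthread_mutex_lock(&set_lock);
	__update_nr_hw_queues(nr);
	pthread_mutex_unlock(&set_lock);
}

static int get_nr_hw_queues(void)
{
	int nr;

	pthread_mutex_lock(&set_lock);
	nr = nr_hw_queues;
	pthread_mutex_unlock(&set_lock);
	return nr;
}
```

The split keeps the locking discipline at the API boundary while avoiding recursive locking from paths that already serialize on the tag list.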
+4 -4
block/blk-sysfs.c
··· 809 809 810 810 blk_free_queue_stats(q->stats); 811 811 812 - blk_exit_rl(&q->root_rl); 812 + blk_exit_rl(q, &q->root_rl); 813 813 814 814 if (q->queue_tags) 815 815 __blk_queue_free_tags(q); ··· 887 887 goto unlock; 888 888 } 889 889 890 - if (q->mq_ops) 890 + if (q->mq_ops) { 891 891 __blk_mq_register_dev(dev, q); 892 - 893 - blk_mq_debugfs_register(q); 892 + blk_mq_debugfs_register(q); 893 + } 894 894 895 895 kobject_uevent(&q->kobj, KOBJ_ADD); 896 896
+108 -62
block/blk-throttle.c
··· 22 22 #define DFL_THROTL_SLICE_HD (HZ / 10) 23 23 #define DFL_THROTL_SLICE_SSD (HZ / 50) 24 24 #define MAX_THROTL_SLICE (HZ) 25 - #define DFL_IDLE_THRESHOLD_SSD (1000L) /* 1 ms */ 26 - #define DFL_IDLE_THRESHOLD_HD (100L * 1000) /* 100 ms */ 27 25 #define MAX_IDLE_TIME (5L * 1000 * 1000) /* 5 s */ 28 - /* default latency target is 0, eg, guarantee IO latency by default */ 29 - #define DFL_LATENCY_TARGET (0) 26 + #define MIN_THROTL_BPS (320 * 1024) 27 + #define MIN_THROTL_IOPS (10) 28 + #define DFL_LATENCY_TARGET (-1L) 29 + #define DFL_IDLE_THRESHOLD (0) 30 30 31 31 #define SKIP_LATENCY (((u64)1) << BLK_STAT_RES_SHIFT) 32 32 ··· 157 157 unsigned long last_check_time; 158 158 159 159 unsigned long latency_target; /* us */ 160 + unsigned long latency_target_conf; /* us */ 160 161 /* When did we start a new slice */ 161 162 unsigned long slice_start[2]; 162 163 unsigned long slice_end[2]; ··· 166 165 unsigned long checked_last_finish_time; /* ns / 1024 */ 167 166 unsigned long avg_idletime; /* ns / 1024 */ 168 167 unsigned long idletime_threshold; /* us */ 168 + unsigned long idletime_threshold_conf; /* us */ 169 169 170 170 unsigned int bio_cnt; /* total bios */ 171 171 unsigned int bad_bio_cnt; /* bios exceeding latency threshold */ ··· 202 200 struct work_struct dispatch_work; 203 201 unsigned int limit_index; 204 202 bool limit_valid[LIMIT_CNT]; 205 - 206 - unsigned long dft_idletime_threshold; /* us */ 207 203 208 204 unsigned long low_upgrade_time; 209 205 unsigned long low_downgrade_time; ··· 294 294 295 295 td = tg->td; 296 296 ret = tg->bps[rw][td->limit_index]; 297 - if (ret == 0 && td->limit_index == LIMIT_LOW) 298 - return tg->bps[rw][LIMIT_MAX]; 297 + if (ret == 0 && td->limit_index == LIMIT_LOW) { 298 + /* intermediate node or iops isn't 0 */ 299 + if (!list_empty(&blkg->blkcg->css.children) || 300 + tg->iops[rw][td->limit_index]) 301 + return U64_MAX; 302 + else 303 + return MIN_THROTL_BPS; 304 + } 299 305 300 306 if (td->limit_index == LIMIT_MAX && 
tg->bps[rw][LIMIT_LOW] && 301 307 tg->bps[rw][LIMIT_LOW] != tg->bps[rw][LIMIT_MAX]) { ··· 321 315 322 316 if (cgroup_subsys_on_dfl(io_cgrp_subsys) && !blkg->parent) 323 317 return UINT_MAX; 318 + 324 319 td = tg->td; 325 320 ret = tg->iops[rw][td->limit_index]; 326 - if (ret == 0 && tg->td->limit_index == LIMIT_LOW) 327 - return tg->iops[rw][LIMIT_MAX]; 321 + if (ret == 0 && tg->td->limit_index == LIMIT_LOW) { 322 + /* intermediate node or bps isn't 0 */ 323 + if (!list_empty(&blkg->blkcg->css.children) || 324 + tg->bps[rw][td->limit_index]) 325 + return UINT_MAX; 326 + else 327 + return MIN_THROTL_IOPS; 328 + } 328 329 329 330 if (td->limit_index == LIMIT_MAX && tg->iops[rw][LIMIT_LOW] && 330 331 tg->iops[rw][LIMIT_LOW] != tg->iops[rw][LIMIT_MAX]) { ··· 495 482 /* LIMIT_LOW will have default value 0 */ 496 483 497 484 tg->latency_target = DFL_LATENCY_TARGET; 485 + tg->latency_target_conf = DFL_LATENCY_TARGET; 486 + tg->idletime_threshold = DFL_IDLE_THRESHOLD; 487 + tg->idletime_threshold_conf = DFL_IDLE_THRESHOLD; 498 488 499 489 return &tg->pd; 500 490 } ··· 526 510 if (cgroup_subsys_on_dfl(io_cgrp_subsys) && blkg->parent) 527 511 sq->parent_sq = &blkg_to_tg(blkg->parent)->service_queue; 528 512 tg->td = td; 529 - 530 - tg->idletime_threshold = td->dft_idletime_threshold; 531 513 } 532 514 533 515 /* ··· 1363 1349 return 0; 1364 1350 } 1365 1351 1366 - static void tg_conf_updated(struct throtl_grp *tg) 1352 + static void tg_conf_updated(struct throtl_grp *tg, bool global) 1367 1353 { 1368 1354 struct throtl_service_queue *sq = &tg->service_queue; 1369 1355 struct cgroup_subsys_state *pos_css; ··· 1381 1367 * restrictions in the whole hierarchy and allows them to bypass 1382 1368 * blk-throttle. 1383 1369 */ 1384 - blkg_for_each_descendant_pre(blkg, pos_css, tg_to_blkg(tg)) 1385 - tg_update_has_rules(blkg_to_tg(blkg)); 1370 + blkg_for_each_descendant_pre(blkg, pos_css, 1371 + global ? 
tg->td->queue->root_blkg : tg_to_blkg(tg)) { 1372 + struct throtl_grp *this_tg = blkg_to_tg(blkg); 1373 + struct throtl_grp *parent_tg; 1374 + 1375 + tg_update_has_rules(this_tg); 1376 + /* ignore root/second level */ 1377 + if (!cgroup_subsys_on_dfl(io_cgrp_subsys) || !blkg->parent || 1378 + !blkg->parent->parent) 1379 + continue; 1380 + parent_tg = blkg_to_tg(blkg->parent); 1381 + /* 1382 + * make sure all children has lower idle time threshold and 1383 + * higher latency target 1384 + */ 1385 + this_tg->idletime_threshold = min(this_tg->idletime_threshold, 1386 + parent_tg->idletime_threshold); 1387 + this_tg->latency_target = max(this_tg->latency_target, 1388 + parent_tg->latency_target); 1389 + } 1386 1390 1387 1391 /* 1388 1392 * We're already holding queue_lock and know @tg is valid. Let's ··· 1445 1413 else 1446 1414 *(unsigned int *)((void *)tg + of_cft(of)->private) = v; 1447 1415 1448 - tg_conf_updated(tg); 1416 + tg_conf_updated(tg, false); 1449 1417 ret = 0; 1450 1418 out_finish: 1451 1419 blkg_conf_finish(&ctx); ··· 1529 1497 tg->iops_conf[READ][off] == iops_dft && 1530 1498 tg->iops_conf[WRITE][off] == iops_dft && 1531 1499 (off != LIMIT_LOW || 1532 - (tg->idletime_threshold == tg->td->dft_idletime_threshold && 1533 - tg->latency_target == DFL_LATENCY_TARGET))) 1500 + (tg->idletime_threshold_conf == DFL_IDLE_THRESHOLD && 1501 + tg->latency_target_conf == DFL_LATENCY_TARGET))) 1534 1502 return 0; 1535 1503 1536 - if (tg->bps_conf[READ][off] != bps_dft) 1504 + if (tg->bps_conf[READ][off] != U64_MAX) 1537 1505 snprintf(bufs[0], sizeof(bufs[0]), "%llu", 1538 1506 tg->bps_conf[READ][off]); 1539 - if (tg->bps_conf[WRITE][off] != bps_dft) 1507 + if (tg->bps_conf[WRITE][off] != U64_MAX) 1540 1508 snprintf(bufs[1], sizeof(bufs[1]), "%llu", 1541 1509 tg->bps_conf[WRITE][off]); 1542 - if (tg->iops_conf[READ][off] != iops_dft) 1510 + if (tg->iops_conf[READ][off] != UINT_MAX) 1543 1511 snprintf(bufs[2], sizeof(bufs[2]), "%u", 1544 1512 tg->iops_conf[READ][off]); 
1545 - if (tg->iops_conf[WRITE][off] != iops_dft) 1513 + if (tg->iops_conf[WRITE][off] != UINT_MAX) 1546 1514 snprintf(bufs[3], sizeof(bufs[3]), "%u", 1547 1515 tg->iops_conf[WRITE][off]); 1548 1516 if (off == LIMIT_LOW) { 1549 - if (tg->idletime_threshold == ULONG_MAX) 1517 + if (tg->idletime_threshold_conf == ULONG_MAX) 1550 1518 strcpy(idle_time, " idle=max"); 1551 1519 else 1552 1520 snprintf(idle_time, sizeof(idle_time), " idle=%lu", 1553 - tg->idletime_threshold); 1521 + tg->idletime_threshold_conf); 1554 1522 1555 - if (tg->latency_target == ULONG_MAX) 1523 + if (tg->latency_target_conf == ULONG_MAX) 1556 1524 strcpy(latency_time, " latency=max"); 1557 1525 else 1558 1526 snprintf(latency_time, sizeof(latency_time), 1559 - " latency=%lu", tg->latency_target); 1527 + " latency=%lu", tg->latency_target_conf); 1560 1528 } 1561 1529 1562 1530 seq_printf(sf, "%s rbps=%s wbps=%s riops=%s wiops=%s%s%s\n", ··· 1595 1563 v[2] = tg->iops_conf[READ][index]; 1596 1564 v[3] = tg->iops_conf[WRITE][index]; 1597 1565 1598 - idle_time = tg->idletime_threshold; 1599 - latency_time = tg->latency_target; 1566 + idle_time = tg->idletime_threshold_conf; 1567 + latency_time = tg->latency_target_conf; 1600 1568 while (true) { 1601 1569 char tok[27]; /* wiops=18446744073709551616 */ 1602 1570 char *p; ··· 1655 1623 tg->iops_conf[READ][LIMIT_MAX]); 1656 1624 tg->iops[WRITE][LIMIT_LOW] = min(tg->iops_conf[WRITE][LIMIT_LOW], 1657 1625 tg->iops_conf[WRITE][LIMIT_MAX]); 1626 + tg->idletime_threshold_conf = idle_time; 1627 + tg->latency_target_conf = latency_time; 1658 1628 1659 - if (index == LIMIT_LOW) { 1660 - blk_throtl_update_limit_valid(tg->td); 1661 - if (tg->td->limit_valid[LIMIT_LOW]) 1662 - tg->td->limit_index = LIMIT_LOW; 1663 - tg->idletime_threshold = (idle_time == ULONG_MAX) ? 1664 - ULONG_MAX : idle_time; 1665 - tg->latency_target = (latency_time == ULONG_MAX) ? 
1666 - ULONG_MAX : latency_time; 1629 + /* force user to configure all settings for low limit */ 1630 + if (!(tg->bps[READ][LIMIT_LOW] || tg->iops[READ][LIMIT_LOW] || 1631 + tg->bps[WRITE][LIMIT_LOW] || tg->iops[WRITE][LIMIT_LOW]) || 1632 + tg->idletime_threshold_conf == DFL_IDLE_THRESHOLD || 1633 + tg->latency_target_conf == DFL_LATENCY_TARGET) { 1634 + tg->bps[READ][LIMIT_LOW] = 0; 1635 + tg->bps[WRITE][LIMIT_LOW] = 0; 1636 + tg->iops[READ][LIMIT_LOW] = 0; 1637 + tg->iops[WRITE][LIMIT_LOW] = 0; 1638 + tg->idletime_threshold = DFL_IDLE_THRESHOLD; 1639 + tg->latency_target = DFL_LATENCY_TARGET; 1640 + } else if (index == LIMIT_LOW) { 1641 + tg->idletime_threshold = tg->idletime_threshold_conf; 1642 + tg->latency_target = tg->latency_target_conf; 1667 1643 } 1668 - tg_conf_updated(tg); 1644 + 1645 + blk_throtl_update_limit_valid(tg->td); 1646 + if (tg->td->limit_valid[LIMIT_LOW]) { 1647 + if (index == LIMIT_LOW) 1648 + tg->td->limit_index = LIMIT_LOW; 1649 + } else 1650 + tg->td->limit_index = LIMIT_MAX; 1651 + tg_conf_updated(tg, index == LIMIT_LOW && 1652 + tg->td->limit_valid[LIMIT_LOW]); 1669 1653 ret = 0; 1670 1654 out_finish: 1671 1655 blkg_conf_finish(&ctx); ··· 1770 1722 /* 1771 1723 * cgroup is idle if: 1772 1724 * - single idle is too long, longer than a fixed value (in case user 1773 - * configure a too big threshold) or 4 times of slice 1725 + * configure a too big threshold) or 4 times of idletime threshold 1774 1726 * - average think time is more than threshold 1775 1727 * - IO latency is largely below threshold 1776 1728 */ 1777 - unsigned long time = jiffies_to_usecs(4 * tg->td->throtl_slice); 1729 + unsigned long time; 1730 + bool ret; 1778 1731 1779 - time = min_t(unsigned long, MAX_IDLE_TIME, time); 1780 - return (ktime_get_ns() >> 10) - tg->last_finish_time > time || 1781 - tg->avg_idletime > tg->idletime_threshold || 1782 - (tg->latency_target && tg->bio_cnt && 1732 + time = min_t(unsigned long, MAX_IDLE_TIME, 4 * tg->idletime_threshold); 1733 + 
ret = tg->latency_target == DFL_LATENCY_TARGET || 1734 + tg->idletime_threshold == DFL_IDLE_THRESHOLD || 1735 + (ktime_get_ns() >> 10) - tg->last_finish_time > time || 1736 + tg->avg_idletime > tg->idletime_threshold || 1737 + (tg->latency_target && tg->bio_cnt && 1783 1738 tg->bad_bio_cnt * 5 < tg->bio_cnt); 1739 + throtl_log(&tg->service_queue, 1740 + "avg_idle=%ld, idle_threshold=%ld, bad_bio=%d, total_bio=%d, is_idle=%d, scale=%d", 1741 + tg->avg_idletime, tg->idletime_threshold, tg->bad_bio_cnt, 1742 + tg->bio_cnt, ret, tg->td->scale); 1743 + return ret; 1784 1744 } 1785 1745 1786 1746 static bool throtl_tg_can_upgrade(struct throtl_grp *tg) ··· 1884 1828 struct cgroup_subsys_state *pos_css; 1885 1829 struct blkcg_gq *blkg; 1886 1830 1831 + throtl_log(&td->service_queue, "upgrade to max"); 1887 1832 td->limit_index = LIMIT_MAX; 1888 1833 td->low_upgrade_time = jiffies; 1889 1834 td->scale = 0; ··· 1907 1850 { 1908 1851 td->scale /= 2; 1909 1852 1853 + throtl_log(&td->service_queue, "downgrade, scale %d", td->scale); 1910 1854 if (td->scale) { 1911 1855 td->low_upgrade_time = jiffies - td->scale * td->throtl_slice; 1912 1856 return; ··· 2081 2023 td->avg_buckets[i].valid = true; 2082 2024 last_latency = td->avg_buckets[i].latency; 2083 2025 } 2026 + 2027 + for (i = 0; i < LATENCY_BUCKET_SIZE; i++) 2028 + throtl_log(&td->service_queue, 2029 + "Latency bucket %d: latency=%ld, valid=%d", i, 2030 + td->avg_buckets[i].latency, td->avg_buckets[i].valid); 2084 2031 } 2085 2032 #else 2086 2033 static inline void throtl_update_latency_buckets(struct throtl_data *td) ··· 2417 2354 void blk_throtl_register_queue(struct request_queue *q) 2418 2355 { 2419 2356 struct throtl_data *td; 2420 - struct cgroup_subsys_state *pos_css; 2421 - struct blkcg_gq *blkg; 2422 2357 2423 2358 td = q->td; 2424 2359 BUG_ON(!td); 2425 2360 2426 - if (blk_queue_nonrot(q)) { 2361 + if (blk_queue_nonrot(q)) 2427 2362 td->throtl_slice = DFL_THROTL_SLICE_SSD; 2428 - td->dft_idletime_threshold = 
DFL_IDLE_THRESHOLD_SSD; 2429 - } else { 2363 + else 2430 2364 td->throtl_slice = DFL_THROTL_SLICE_HD; 2431 - td->dft_idletime_threshold = DFL_IDLE_THRESHOLD_HD; 2432 - } 2433 2365 #ifndef CONFIG_BLK_DEV_THROTTLING_LOW 2434 2366 /* if no low limit, use previous default */ 2435 2367 td->throtl_slice = DFL_THROTL_SLICE_HD; ··· 2433 2375 td->track_bio_latency = !q->mq_ops && !q->request_fn; 2434 2376 if (!td->track_bio_latency) 2435 2377 blk_stat_enable_accounting(q); 2436 - 2437 - /* 2438 - * some tg are created before queue is fully initialized, eg, nonrot 2439 - * isn't initialized yet 2440 - */ 2441 - rcu_read_lock(); 2442 - blkg_for_each_descendant_post(blkg, pos_css, q->root_blkg) { 2443 - struct throtl_grp *tg = blkg_to_tg(blkg); 2444 - 2445 - tg->idletime_threshold = td->dft_idletime_threshold; 2446 - } 2447 - rcu_read_unlock(); 2448 2378 } 2449 2379 2450 2380 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
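Among the blk-throttle changes, tg_conf_updated now propagates constraints down the cgroup hierarchy: a child's idle-time threshold is clamped to at most its parent's, and its latency target to at least its parent's, so a child can never be more permissive than its ancestors. The clamp extracted as a minimal model (hypothetical struct, not kernel code):

```c
/* Minimal model of the parent/child clamping in tg_conf_updated():
 * a child may not idle longer than its parent allows, and may not
 * demand a tighter latency target than its parent's. */
struct tg_limits {
	unsigned long idletime_threshold;	/* us */
	unsigned long latency_target;		/* us */
};

static void clamp_to_parent(struct tg_limits *child,
			    const struct tg_limits *parent)
{
	if (child->idletime_threshold > parent->idletime_threshold)
		child->idletime_threshold = parent->idletime_threshold;
	if (child->latency_target < parent->latency_target)
		child->latency_target = parent->latency_target;
}
```

Keeping the user-visible `*_conf` values separate from the effective clamped values, as the patch does, lets a later parent change re-derive the effective settings without losing what the user originally wrote.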
+1 -1
block/blk.h
··· 59 59 60 60 int blk_init_rl(struct request_list *rl, struct request_queue *q, 61 61 gfp_t gfp_mask); 62 - void blk_exit_rl(struct request_list *rl); 62 + void blk_exit_rl(struct request_queue *q, struct request_list *rl); 63 63 void blk_rq_bio_prep(struct request_queue *q, struct request *rq, 64 64 struct bio *bio); 65 65 void blk_queue_bypass_start(struct request_queue *q);
+15 -2
block/cfq-iosched.c
··· 38 38 static const int cfq_hist_divisor = 4; 39 39 40 40 /* 41 - * offset from end of service tree 41 + * offset from end of queue service tree for idle class 42 42 */ 43 43 #define CFQ_IDLE_DELAY (NSEC_PER_SEC / 5) 44 + /* offset from end of group service tree under time slice mode */ 45 + #define CFQ_SLICE_MODE_GROUP_DELAY (NSEC_PER_SEC / 5) 46 + /* offset from end of group service under IOPS mode */ 47 + #define CFQ_IOPS_MODE_GROUP_DELAY (HZ / 5) 44 48 45 49 /* 46 50 * below this threshold, we consider thinktime immediate ··· 1366 1362 cfqg->vfraction = max_t(unsigned, vfr, 1); 1367 1363 } 1368 1364 1365 + static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd) 1366 + { 1367 + if (!iops_mode(cfqd)) 1368 + return CFQ_SLICE_MODE_GROUP_DELAY; 1369 + else 1370 + return CFQ_IOPS_MODE_GROUP_DELAY; 1371 + } 1372 + 1369 1373 static void 1370 1374 cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg) 1371 1375 { ··· 1393 1381 n = rb_last(&st->rb); 1394 1382 if (n) { 1395 1383 __cfqg = rb_entry_cfqg(n); 1396 - cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY; 1384 + cfqg->vdisktime = __cfqg->vdisktime + 1385 + cfq_get_cfqg_vdisktime_delay(cfqd); 1397 1386 } else 1398 1387 cfqg->vdisktime = st->min_vdisktime; 1399 1388 cfq_group_service_tree_add(st, cfqg);
+3 -1
block/partition-generic.c
··· 320 320 321 321 if (info) { 322 322 struct partition_meta_info *pinfo = alloc_part_info(disk); 323 - if (!pinfo) 323 + if (!pinfo) { 324 + err = -ENOMEM; 324 325 goto out_free_stats; 326 + } 325 327 memcpy(pinfo, info, sizeof(*info)); 326 328 p->info = pinfo; 327 329 }
+2
block/partitions/msdos.c
··· 300 300 continue; 301 301 bsd_start = le32_to_cpu(p->p_offset); 302 302 bsd_size = le32_to_cpu(p->p_size); 303 + if (memcmp(flavour, "bsd\0", 4) == 0) 304 + bsd_start += offset; 303 305 if (offset == bsd_start && size == bsd_size) 304 306 /* full parent partition, we have it already */ 305 307 continue;
+39 -1
crypto/skcipher.c
··· 764 764 return 0; 765 765 } 766 766 767 + static int skcipher_setkey_unaligned(struct crypto_skcipher *tfm, 768 + const u8 *key, unsigned int keylen) 769 + { 770 + unsigned long alignmask = crypto_skcipher_alignmask(tfm); 771 + struct skcipher_alg *cipher = crypto_skcipher_alg(tfm); 772 + u8 *buffer, *alignbuffer; 773 + unsigned long absize; 774 + int ret; 775 + 776 + absize = keylen + alignmask; 777 + buffer = kmalloc(absize, GFP_ATOMIC); 778 + if (!buffer) 779 + return -ENOMEM; 780 + 781 + alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1); 782 + memcpy(alignbuffer, key, keylen); 783 + ret = cipher->setkey(tfm, alignbuffer, keylen); 784 + kzfree(buffer); 785 + return ret; 786 + } 787 + 788 + static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, 789 + unsigned int keylen) 790 + { 791 + struct skcipher_alg *cipher = crypto_skcipher_alg(tfm); 792 + unsigned long alignmask = crypto_skcipher_alignmask(tfm); 793 + 794 + if (keylen < cipher->min_keysize || keylen > cipher->max_keysize) { 795 + crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); 796 + return -EINVAL; 797 + } 798 + 799 + if ((unsigned long)key & alignmask) 800 + return skcipher_setkey_unaligned(tfm, key, keylen); 801 + 802 + return cipher->setkey(tfm, key, keylen); 803 + } 804 + 767 805 static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm) 768 806 { 769 807 struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm); ··· 822 784 tfm->__crt_alg->cra_type == &crypto_givcipher_type) 823 785 return crypto_init_skcipher_ops_ablkcipher(tfm); 824 786 825 - skcipher->setkey = alg->setkey; 787 + skcipher->setkey = skcipher_setkey; 826 788 skcipher->encrypt = alg->encrypt; 827 789 skcipher->decrypt = alg->decrypt; 828 790 skcipher->ivsize = alg->ivsize;
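skcipher_setkey_unaligned above uses a standard trick for alignment-sensitive ciphers: over-allocate by alignmask bytes, round the pointer up to the next (alignmask + 1)-byte boundary, copy the key into the aligned region, and later wipe and free the raw allocation. The pointer rounding in isolation, as a userspace sketch (helper names are illustrative; the kernel's kzfree is modeled by the caller zeroing before free):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round p up to the next (alignmask + 1)-byte boundary, as the
 * kernel's ALIGN() macro does; alignmask + 1 must be a power of two. */
static uint8_t *align_up(uint8_t *p, unsigned long alignmask)
{
	return (uint8_t *)(((uintptr_t)p + alignmask) & ~(uintptr_t)alignmask);
}

/* Copy key into a freshly allocated, aligned buffer. Returns the
 * aligned pointer (valid for keylen bytes) and stores the raw
 * allocation in *raw so the caller can wipe and free it. */
static uint8_t *copy_key_aligned(const uint8_t *key, size_t keylen,
				 unsigned long alignmask, uint8_t **raw)
{
	uint8_t *buffer = malloc(keylen + alignmask);
	uint8_t *aligned;

	if (!buffer)
		return NULL;
	aligned = align_up(buffer, alignmask);
	memcpy(aligned, key, keylen);
	*raw = buffer;
	return aligned;
}
```

Over-allocating by exactly alignmask bytes is sufficient because rounding up can skip at most alignmask bytes of the allocation.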
-4
drivers/acpi/acpica/tbutils.c
··· 418 418 419 419 table_desc->validation_count++; 420 420 if (table_desc->validation_count == 0) { 421 - ACPI_ERROR((AE_INFO, 422 - "Table %p, Validation count is zero after increment\n", 423 - table_desc)); 424 421 table_desc->validation_count--; 425 - return_ACPI_STATUS(AE_LIMIT); 426 422 } 427 423 428 424 *out_table = table_desc->pointer;
+10 -1
drivers/acpi/button.c
··· 57 57 58 58 #define ACPI_BUTTON_LID_INIT_IGNORE 0x00 59 59 #define ACPI_BUTTON_LID_INIT_OPEN 0x01 60 + #define ACPI_BUTTON_LID_INIT_METHOD 0x02 60 61 61 62 #define _COMPONENT ACPI_BUTTON_COMPONENT 62 63 ACPI_MODULE_NAME("button"); ··· 113 112 114 113 static BLOCKING_NOTIFIER_HEAD(acpi_lid_notifier); 115 114 static struct acpi_device *lid_device; 116 - static u8 lid_init_state = ACPI_BUTTON_LID_INIT_OPEN; 115 + static u8 lid_init_state = ACPI_BUTTON_LID_INIT_METHOD; 117 116 118 117 static unsigned long lid_report_interval __read_mostly = 500; 119 118 module_param(lid_report_interval, ulong, 0644); ··· 377 376 case ACPI_BUTTON_LID_INIT_OPEN: 378 377 (void)acpi_lid_notify_state(device, 1); 379 378 break; 379 + case ACPI_BUTTON_LID_INIT_METHOD: 380 + (void)acpi_lid_update_state(device); 381 + break; 380 382 case ACPI_BUTTON_LID_INIT_IGNORE: 381 383 default: 382 384 break; ··· 564 560 if (!strncmp(val, "open", sizeof("open") - 1)) { 565 561 lid_init_state = ACPI_BUTTON_LID_INIT_OPEN; 566 562 pr_info("Notify initial lid state as open\n"); 563 + } else if (!strncmp(val, "method", sizeof("method") - 1)) { 564 + lid_init_state = ACPI_BUTTON_LID_INIT_METHOD; 565 + pr_info("Notify initial lid state with _LID return value\n"); 567 566 } else if (!strncmp(val, "ignore", sizeof("ignore") - 1)) { 568 567 lid_init_state = ACPI_BUTTON_LID_INIT_IGNORE; 569 568 pr_info("Do not notify initial lid state\n"); ··· 580 573 switch (lid_init_state) { 581 574 case ACPI_BUTTON_LID_INIT_OPEN: 582 575 return sprintf(buffer, "open"); 576 + case ACPI_BUTTON_LID_INIT_METHOD: 577 + return sprintf(buffer, "method"); 583 578 case ACPI_BUTTON_LID_INIT_IGNORE: 584 579 return sprintf(buffer, "ignore"); 585 580 default:
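The button.c change adds "method" as a third lid_init_state value alongside "open" and "ignore", matching the documentation update in acpi-lid.txt, and parses it with the same strncmp prefix test used for the existing values. A sketch of that parsing in isolation (enum names are illustrative; only the three option strings come from the patch):

```c
#include <string.h>

enum lid_init { LID_IGNORE, LID_OPEN, LID_METHOD, LID_UNKNOWN };

/* Mirror of the module-parameter parsing: prefix match against each
 * known option string, first hit wins. */
static enum lid_init parse_lid_init_state(const char *val)
{
	if (!strncmp(val, "open", sizeof("open") - 1))
		return LID_OPEN;
	if (!strncmp(val, "method", sizeof("method") - 1))
		return LID_METHOD;
	if (!strncmp(val, "ignore", sizeof("ignore") - 1))
		return LID_IGNORE;
	return LID_UNKNOWN;
}
```

The prefix match tolerates a trailing newline from sysfs writes; since none of the option strings is a prefix of another, the check order does not matter here.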
+1 -1
drivers/acpi/nfit/mce.c
··· 26 26 struct nfit_spa *nfit_spa; 27 27 28 28 /* We only care about memory errors */ 29 - if (!(mce->status & MCACOD)) 29 + if (!mce_is_memory_error(mce)) 30 30 return NOTIFY_DONE; 31 31 32 32 /*
+5 -2
drivers/acpi/sysfs.c
··· 333 333 container_of(bin_attr, struct acpi_table_attr, attr); 334 334 struct acpi_table_header *table_header = NULL; 335 335 acpi_status status; 336 + ssize_t rc; 336 337 337 338 status = acpi_get_table(table_attr->name, table_attr->instance, 338 339 &table_header); 339 340 if (ACPI_FAILURE(status)) 340 341 return -ENODEV; 341 342 342 - return memory_read_from_buffer(buf, count, &offset, 343 - table_header, table_header->length); 343 + rc = memory_read_from_buffer(buf, count, &offset, table_header, 344 + table_header->length); 345 + acpi_put_table(table_header); 346 + return rc; 344 347 } 345 348 346 349 static int acpi_table_attr_init(struct kobject *tables_obj,
+5 -6
drivers/base/power/wakeup.c
··· 512 512 /** 513 513 * wakup_source_activate - Mark given wakeup source as active. 514 514 * @ws: Wakeup source to handle. 515 - * @hard: If set, abort suspends in progress and wake up from suspend-to-idle. 516 515 * 517 516 * Update the @ws' statistics and, if @ws has just been activated, notify the PM 518 517 * core of the event by incrementing the counter of of wakeup events being 519 518 * processed. 520 519 */ 521 - static void wakeup_source_activate(struct wakeup_source *ws, bool hard) 520 + static void wakeup_source_activate(struct wakeup_source *ws) 522 521 { 523 522 unsigned int cec; 524 523 525 524 if (WARN_ONCE(wakeup_source_not_registered(ws), 526 525 "unregistered wakeup source\n")) 527 526 return; 528 - 529 - if (hard) 530 - pm_system_wakeup(); 531 527 532 528 ws->active = true; 533 529 ws->active_count++; ··· 550 554 ws->wakeup_count++; 551 555 552 556 if (!ws->active) 553 - wakeup_source_activate(ws, hard); 557 + wakeup_source_activate(ws); 558 + 559 + if (hard) 560 + pm_system_wakeup(); 554 561 } 555 562 556 563 /**
+5 -10
drivers/block/nbd.c
··· 937 937 return -ENOSPC; 938 938 } 939 939 940 - /* Reset all properties of an NBD device */ 941 - static void nbd_reset(struct nbd_device *nbd) 942 - { 943 - nbd->config = NULL; 944 - nbd->tag_set.timeout = 0; 945 - queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, nbd->disk->queue); 946 - } 947 - 948 940 static void nbd_bdev_reset(struct block_device *bdev) 949 941 { 950 942 if (bdev->bd_openers > 1) ··· 1021 1029 } 1022 1030 kfree(config->socks); 1023 1031 } 1024 - nbd_reset(nbd); 1032 + kfree(nbd->config); 1033 + nbd->config = NULL; 1034 + 1035 + nbd->tag_set.timeout = 0; 1036 + queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD, nbd->disk->queue); 1025 1037 1026 1038 mutex_unlock(&nbd->config_lock); 1027 1039 nbd_put(nbd); ··· 1479 1483 disk->fops = &nbd_fops; 1480 1484 disk->private_data = nbd; 1481 1485 sprintf(disk->disk_name, "nbd%d", index); 1482 - nbd_reset(nbd); 1483 1486 add_disk(disk); 1484 1487 nbd_total_devices++; 1485 1488 return index;
+2
drivers/block/rbd.c
··· 4023 4023 4024 4024 switch (req_op(rq)) { 4025 4025 case REQ_OP_DISCARD: 4026 + case REQ_OP_WRITE_ZEROES: 4026 4027 op_type = OBJ_OP_DISCARD; 4027 4028 break; 4028 4029 case REQ_OP_WRITE: ··· 4421 4420 q->limits.discard_granularity = segment_size; 4422 4421 q->limits.discard_alignment = segment_size; 4423 4422 blk_queue_max_discard_sectors(q, segment_size / SECTOR_SIZE); 4423 + blk_queue_max_write_zeroes_sectors(q, segment_size / SECTOR_SIZE); 4424 4424 4425 4425 if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC)) 4426 4426 q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
+3 -3
drivers/char/pcmcia/cm4040_cs.c
··· 374 374 375 375 rc = write_sync_reg(SCR_HOST_TO_READER_START, dev); 376 376 if (rc <= 0) { 377 - DEBUGP(5, dev, "write_sync_reg c=%.2Zx\n", rc); 377 + DEBUGP(5, dev, "write_sync_reg c=%.2zx\n", rc); 378 378 DEBUGP(2, dev, "<- cm4040_write (failed)\n"); 379 379 if (rc == -ERESTARTSYS) 380 380 return rc; ··· 387 387 for (i = 0; i < bytes_to_write; i++) { 388 388 rc = wait_for_bulk_out_ready(dev); 389 389 if (rc <= 0) { 390 - DEBUGP(5, dev, "wait_for_bulk_out_ready rc=%.2Zx\n", 390 + DEBUGP(5, dev, "wait_for_bulk_out_ready rc=%.2zx\n", 391 391 rc); 392 392 DEBUGP(2, dev, "<- cm4040_write (failed)\n"); 393 393 if (rc == -ERESTARTSYS) ··· 403 403 rc = write_sync_reg(SCR_HOST_TO_READER_DONE, dev); 404 404 405 405 if (rc <= 0) { 406 - DEBUGP(5, dev, "write_sync_reg c=%.2Zx\n", rc); 406 + DEBUGP(5, dev, "write_sync_reg c=%.2zx\n", rc); 407 407 DEBUGP(2, dev, "<- cm4040_write (failed)\n"); 408 408 if (rc == -ERESTARTSYS) 409 409 return rc;
+5 -1
drivers/char/random.c
··· 1097 1097 static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs) 1098 1098 { 1099 1099 __u32 *ptr = (__u32 *) regs; 1100 + unsigned long flags; 1100 1101 1101 1102 if (regs == NULL) 1102 1103 return 0; 1104 + local_irq_save(flags); 1103 1105 if (f->reg_idx >= sizeof(struct pt_regs) / sizeof(__u32)) 1104 1106 f->reg_idx = 0; 1105 - return *(ptr + f->reg_idx++); 1107 + ptr += f->reg_idx++; 1108 + local_irq_restore(flags); 1109 + return *ptr; 1106 1110 } 1107 1111 1108 1112 void add_interrupt_randomness(int irq, int irq_flags)
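The get_reg() change above disables local interrupts so that reg_idx cannot be advanced concurrently between the bounds check and the read. The cycling-index logic itself, sketched standalone (names and the word count are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Cycle through the words of a register snapshot, wrapping the
 * index at the end of the buffer. In the kernel this runs with
 * local irqs disabled, so the index cannot move underneath us. */
static unsigned int get_next_word(const unsigned int *buf, size_t nwords,
				  size_t *idx)
{
	if (*idx >= nwords)
		*idx = 0;
	return buf[(*idx)++];
}
```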
+9
drivers/cpufreq/Kconfig.arm
··· 71 71 72 72 If in doubt, say N. 73 73 74 + config ARM_DB8500_CPUFREQ 75 + tristate "ST-Ericsson DB8500 cpufreq" if COMPILE_TEST && !ARCH_U8500 76 + default ARCH_U8500 77 + depends on HAS_IOMEM 78 + depends on !CPU_THERMAL || THERMAL 79 + help 80 + This adds the CPUFreq driver for ST-Ericsson Ux500 (DB8500) SoC 81 + series. 82 + 74 83 config ARM_IMX6Q_CPUFREQ 75 84 tristate "Freescale i.MX6 cpufreq support" 76 85 depends on ARCH_MXC
+1 -1
drivers/cpufreq/Makefile
··· 53 53 54 54 obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o 55 55 obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 56 - obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o 56 + obj-$(CONFIG_ARM_DB8500_CPUFREQ) += dbx500-cpufreq.o 57 57 obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o 58 58 obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o 59 59 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
+1
drivers/cpufreq/cpufreq.c
··· 2468 2468 if (!(cpufreq_driver->flags & CPUFREQ_STICKY) && 2469 2469 list_empty(&cpufreq_policy_list)) { 2470 2470 /* if all ->init() calls failed, unregister */ 2471 + ret = -ENODEV; 2471 2472 pr_debug("%s: No CPU initialized for driver %s\n", __func__, 2472 2473 driver_data->name); 2473 2474 goto err_if_unreg;
+16 -3
drivers/cpufreq/kirkwood-cpufreq.c
··· 127 127 return PTR_ERR(priv.cpu_clk); 128 128 } 129 129 130 - clk_prepare_enable(priv.cpu_clk); 130 + err = clk_prepare_enable(priv.cpu_clk); 131 + if (err) { 132 + dev_err(priv.dev, "Unable to prepare cpuclk\n"); 133 + return err; 134 + } 135 + 131 136 kirkwood_freq_table[0].frequency = clk_get_rate(priv.cpu_clk) / 1000; 132 137 133 138 priv.ddr_clk = of_clk_get_by_name(np, "ddrclk"); ··· 142 137 goto out_cpu; 143 138 } 144 139 145 - clk_prepare_enable(priv.ddr_clk); 140 + err = clk_prepare_enable(priv.ddr_clk); 141 + if (err) { 142 + dev_err(priv.dev, "Unable to prepare ddrclk\n"); 143 + goto out_cpu; 144 + } 146 145 kirkwood_freq_table[1].frequency = clk_get_rate(priv.ddr_clk) / 1000; 147 146 148 147 priv.powersave_clk = of_clk_get_by_name(np, "powersave"); ··· 155 146 err = PTR_ERR(priv.powersave_clk); 156 147 goto out_ddr; 157 148 } 158 - clk_prepare_enable(priv.powersave_clk); 149 + err = clk_prepare_enable(priv.powersave_clk); 150 + if (err) { 151 + dev_err(priv.dev, "Unable to prepare powersave clk\n"); 152 + goto out_ddr; 153 + } 159 154 160 155 of_node_put(np); 161 156 np = NULL;
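clk_prepare_enable() can fail, so each call above now checks its return value and unwinds the clocks already enabled before bailing out. The classic goto-unwind shape, sketched with stub clock handles (all names and the failure injection are hypothetical):

```c
#include <assert.h>

static int enabled[3];

static int clk_on(int i, int fail)
{
	if (fail)
		return -1;
	enabled[i] = 1;
	return 0;
}

static void clk_off(int i)
{
	enabled[i] = 0;
}

/* Enable three clocks in order; on failure, disable the ones
 * already enabled, in reverse order, before returning the error. */
static int setup_clocks(int fail_at)
{
	int err;

	err = clk_on(0, fail_at == 0);
	if (err)
		return err;
	err = clk_on(1, fail_at == 1);
	if (err)
		goto out_cpu;
	err = clk_on(2, fail_at == 2);
	if (err)
		goto out_ddr;
	return 0;

out_ddr:
	clk_off(1);
out_cpu:
	clk_off(0);
	return err;
}
```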
+35 -4
drivers/dma/ep93xx_dma.c
··· 201 201 struct dma_device dma_dev; 202 202 bool m2m; 203 203 int (*hw_setup)(struct ep93xx_dma_chan *); 204 + void (*hw_synchronize)(struct ep93xx_dma_chan *); 204 205 void (*hw_shutdown)(struct ep93xx_dma_chan *); 205 206 void (*hw_submit)(struct ep93xx_dma_chan *); 206 207 int (*hw_interrupt)(struct ep93xx_dma_chan *); ··· 324 323 | M2P_CONTROL_ENABLE; 325 324 m2p_set_control(edmac, control); 326 325 326 + edmac->buffer = 0; 327 + 327 328 return 0; 328 329 } 329 330 ··· 334 331 return (readl(edmac->regs + M2P_STATUS) >> 4) & 0x3; 335 332 } 336 333 337 - static void m2p_hw_shutdown(struct ep93xx_dma_chan *edmac) 334 + static void m2p_hw_synchronize(struct ep93xx_dma_chan *edmac) 338 335 { 336 + unsigned long flags; 339 337 u32 control; 340 338 339 + spin_lock_irqsave(&edmac->lock, flags); 341 340 control = readl(edmac->regs + M2P_CONTROL); 342 341 control &= ~(M2P_CONTROL_STALLINT | M2P_CONTROL_NFBINT); 343 342 m2p_set_control(edmac, control); 343 + spin_unlock_irqrestore(&edmac->lock, flags); 344 344 345 345 while (m2p_channel_state(edmac) >= M2P_STATE_ON) 346 - cpu_relax(); 346 + schedule(); 347 + } 347 348 349 + static void m2p_hw_shutdown(struct ep93xx_dma_chan *edmac) 350 + { 348 351 m2p_set_control(edmac, 0); 349 352 350 - while (m2p_channel_state(edmac) == M2P_STATE_STALL) 351 - cpu_relax(); 353 + while (m2p_channel_state(edmac) != M2P_STATE_IDLE) 354 + dev_warn(chan2dev(edmac), "M2P: Not yet IDLE\n"); 352 355 } 353 356 354 357 static void m2p_fill_desc(struct ep93xx_dma_chan *edmac) ··· 1170 1161 } 1171 1162 1172 1163 /** 1164 + * ep93xx_dma_synchronize - Synchronizes the termination of transfers to the 1165 + * current context. 1166 + * @chan: channel 1167 + * 1168 + * Synchronizes the DMA channel termination to the current context. When this 1169 + * function returns it is guaranteed that all transfers for previously issued 1170 + * descriptors have stopped and and it is safe to free the memory associated 1171 + * with them. 
Furthermore it is guaranteed that all complete callback functions 1172 + * for a previously submitted descriptor have finished running and it is safe to 1173 + * free resources accessed from within the complete callbacks. 1174 + */ 1175 + static void ep93xx_dma_synchronize(struct dma_chan *chan) 1176 + { 1177 + struct ep93xx_dma_chan *edmac = to_ep93xx_dma_chan(chan); 1178 + 1179 + if (edmac->edma->hw_synchronize) 1180 + edmac->edma->hw_synchronize(edmac); 1181 + } 1182 + 1183 + /** 1173 1184 * ep93xx_dma_terminate_all - terminate all transactions 1174 1185 * @chan: channel 1175 1186 * ··· 1352 1323 dma_dev->device_prep_slave_sg = ep93xx_dma_prep_slave_sg; 1353 1324 dma_dev->device_prep_dma_cyclic = ep93xx_dma_prep_dma_cyclic; 1354 1325 dma_dev->device_config = ep93xx_dma_slave_config; 1326 + dma_dev->device_synchronize = ep93xx_dma_synchronize; 1355 1327 dma_dev->device_terminate_all = ep93xx_dma_terminate_all; 1356 1328 dma_dev->device_issue_pending = ep93xx_dma_issue_pending; 1357 1329 dma_dev->device_tx_status = ep93xx_dma_tx_status; ··· 1370 1340 } else { 1371 1341 dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask); 1372 1342 1343 + edma->hw_synchronize = m2p_hw_synchronize; 1373 1344 edma->hw_setup = m2p_hw_setup; 1374 1345 edma->hw_shutdown = m2p_hw_shutdown; 1375 1346 edma->hw_submit = m2p_hw_submit;
+44 -65
drivers/dma/mv_xor_v2.c
··· 161 161 struct mv_xor_v2_sw_desc *sw_desq; 162 162 int desc_size; 163 163 unsigned int npendings; 164 + unsigned int hw_queue_idx; 164 165 }; 165 166 166 167 /** ··· 215 214 } 216 215 217 216 /* 218 - * Return the next available index in the DESQ. 219 - */ 220 - static int mv_xor_v2_get_desq_write_ptr(struct mv_xor_v2_device *xor_dev) 221 - { 222 - /* read the index for the next available descriptor in the DESQ */ 223 - u32 reg = readl(xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_ALLOC_OFF); 224 - 225 - return ((reg >> MV_XOR_V2_DMA_DESQ_ALLOC_WRPTR_SHIFT) 226 - & MV_XOR_V2_DMA_DESQ_ALLOC_WRPTR_MASK); 227 - } 228 - 229 - /* 230 217 * notify the engine of new descriptors, and update the available index. 231 218 */ 232 219 static void mv_xor_v2_add_desc_to_desq(struct mv_xor_v2_device *xor_dev, ··· 246 257 return MV_XOR_V2_EXT_DESC_SIZE; 247 258 } 248 259 249 - /* 250 - * Set the IMSG threshold 251 - */ 252 - static inline 253 - void mv_xor_v2_set_imsg_thrd(struct mv_xor_v2_device *xor_dev, int thrd_val) 254 - { 255 - u32 reg; 256 - 257 - reg = readl(xor_dev->dma_base + MV_XOR_V2_DMA_IMSG_THRD_OFF); 258 - 259 - reg &= (~MV_XOR_V2_DMA_IMSG_THRD_MASK << MV_XOR_V2_DMA_IMSG_THRD_SHIFT); 260 - reg |= (thrd_val << MV_XOR_V2_DMA_IMSG_THRD_SHIFT); 261 - 262 - writel(reg, xor_dev->dma_base + MV_XOR_V2_DMA_IMSG_THRD_OFF); 263 - } 264 - 265 260 static irqreturn_t mv_xor_v2_interrupt_handler(int irq, void *data) 266 261 { 267 262 struct mv_xor_v2_device *xor_dev = data; ··· 261 288 if (!ndescs) 262 289 return IRQ_NONE; 263 290 264 - /* 265 - * Update IMSG threshold, to disable new IMSG interrupts until 266 - * end of the tasklet 267 - */ 268 - mv_xor_v2_set_imsg_thrd(xor_dev, MV_XOR_V2_DESC_NUM); 269 - 270 291 /* schedule a tasklet to handle descriptors callbacks */ 271 292 tasklet_schedule(&xor_dev->irq_tasklet); 272 293 ··· 273 306 static dma_cookie_t 274 307 mv_xor_v2_tx_submit(struct dma_async_tx_descriptor *tx) 275 308 { 276 - int desq_ptr; 277 309 void *dest_hw_desc; 278 310 
dma_cookie_t cookie; 279 311 struct mv_xor_v2_sw_desc *sw_desc = ··· 288 322 spin_lock_bh(&xor_dev->lock); 289 323 cookie = dma_cookie_assign(tx); 290 324 291 - /* get the next available slot in the DESQ */ 292 - desq_ptr = mv_xor_v2_get_desq_write_ptr(xor_dev); 293 - 294 325 /* copy the HW descriptor from the SW descriptor to the DESQ */ 295 - dest_hw_desc = xor_dev->hw_desq_virt + desq_ptr; 326 + dest_hw_desc = xor_dev->hw_desq_virt + xor_dev->hw_queue_idx; 296 327 297 328 memcpy(dest_hw_desc, &sw_desc->hw_desc, xor_dev->desc_size); 298 329 299 330 xor_dev->npendings++; 331 + xor_dev->hw_queue_idx++; 332 + if (xor_dev->hw_queue_idx >= MV_XOR_V2_DESC_NUM) 333 + xor_dev->hw_queue_idx = 0; 300 334 301 335 spin_unlock_bh(&xor_dev->lock); 302 336 ··· 310 344 mv_xor_v2_prep_sw_desc(struct mv_xor_v2_device *xor_dev) 311 345 { 312 346 struct mv_xor_v2_sw_desc *sw_desc; 347 + bool found = false; 313 348 314 349 /* Lock the channel */ 315 350 spin_lock_bh(&xor_dev->lock); ··· 322 355 return NULL; 323 356 } 324 357 325 - /* get a free SW descriptor from the SW DESQ */ 326 - sw_desc = list_first_entry(&xor_dev->free_sw_desc, 327 - struct mv_xor_v2_sw_desc, free_list); 358 + list_for_each_entry(sw_desc, &xor_dev->free_sw_desc, free_list) { 359 + if (async_tx_test_ack(&sw_desc->async_tx)) { 360 + found = true; 361 + break; 362 + } 363 + } 364 + 365 + if (!found) { 366 + spin_unlock_bh(&xor_dev->lock); 367 + return NULL; 368 + } 369 + 328 370 list_del(&sw_desc->free_list); 329 371 330 372 /* Release the channel */ 331 373 spin_unlock_bh(&xor_dev->lock); 332 - 333 - /* set the async tx descriptor */ 334 - dma_async_tx_descriptor_init(&sw_desc->async_tx, &xor_dev->dmachan); 335 - sw_desc->async_tx.tx_submit = mv_xor_v2_tx_submit; 336 - async_tx_ack(&sw_desc->async_tx); 337 374 338 375 return sw_desc; 339 376 } ··· 360 389 __func__, len, &src, &dest, flags); 361 390 362 391 sw_desc = mv_xor_v2_prep_sw_desc(xor_dev); 392 + if (!sw_desc) 393 + return NULL; 363 394 364 395 
sw_desc->async_tx.flags = flags; 365 396 ··· 416 443 __func__, src_cnt, len, &dest, flags); 417 444 418 445 sw_desc = mv_xor_v2_prep_sw_desc(xor_dev); 446 + if (!sw_desc) 447 + return NULL; 419 448 420 449 sw_desc->async_tx.flags = flags; 421 450 ··· 466 491 container_of(chan, struct mv_xor_v2_device, dmachan); 467 492 468 493 sw_desc = mv_xor_v2_prep_sw_desc(xor_dev); 494 + if (!sw_desc) 495 + return NULL; 469 496 470 497 /* set the HW descriptor */ 471 498 hw_descriptor = &sw_desc->hw_desc; ··· 531 554 { 532 555 struct mv_xor_v2_device *xor_dev = (struct mv_xor_v2_device *) data; 533 556 int pending_ptr, num_of_pending, i; 534 - struct mv_xor_v2_descriptor *next_pending_hw_desc = NULL; 535 557 struct mv_xor_v2_sw_desc *next_pending_sw_desc = NULL; 536 558 537 559 dev_dbg(xor_dev->dmadev.dev, "%s %d\n", __func__, __LINE__); ··· 538 562 /* get the pending descriptors parameters */ 539 563 num_of_pending = mv_xor_v2_get_pending_params(xor_dev, &pending_ptr); 540 564 541 - /* next HW descriptor */ 542 - next_pending_hw_desc = xor_dev->hw_desq_virt + pending_ptr; 543 - 544 565 /* loop over free descriptors */ 545 566 for (i = 0; i < num_of_pending; i++) { 546 - 547 - if (pending_ptr > MV_XOR_V2_DESC_NUM) 548 - pending_ptr = 0; 549 - 550 - if (next_pending_sw_desc != NULL) 551 - next_pending_hw_desc++; 567 + struct mv_xor_v2_descriptor *next_pending_hw_desc = 568 + xor_dev->hw_desq_virt + pending_ptr; 552 569 553 570 /* get the SW descriptor related to the HW descriptor */ 554 571 next_pending_sw_desc = ··· 577 608 578 609 /* increment the next descriptor */ 579 610 pending_ptr++; 611 + if (pending_ptr >= MV_XOR_V2_DESC_NUM) 612 + pending_ptr = 0; 580 613 } 581 614 582 615 if (num_of_pending != 0) { 583 616 /* free the descriptores */ 584 617 mv_xor_v2_free_desc_from_desq(xor_dev, num_of_pending); 585 618 } 586 - 587 - /* Update IMSG threshold, to enable new IMSG interrupts */ 588 - mv_xor_v2_set_imsg_thrd(xor_dev, 0); 589 619 } 590 620 591 621 /* ··· 615 647 
xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_BALR_OFF); 616 648 writel((xor_dev->hw_desq & 0xFFFF00000000) >> 32, 617 649 xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_BAHR_OFF); 618 - 619 - /* enable the DMA engine */ 620 - writel(0, xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_STOP_OFF); 621 650 622 651 /* 623 652 * This is a temporary solution, until we activate the ··· 659 694 reg |= MV_XOR_V2_GLOB_PAUSE_AXI_TIME_DIS_VAL; 660 695 writel(reg, xor_dev->glob_base + MV_XOR_V2_GLOB_PAUSE); 661 696 697 + /* enable the DMA engine */ 698 + writel(0, xor_dev->dma_base + MV_XOR_V2_DMA_DESQ_STOP_OFF); 699 + 662 700 return 0; 663 701 } 664 702 ··· 692 724 return PTR_ERR(xor_dev->glob_base); 693 725 694 726 platform_set_drvdata(pdev, xor_dev); 727 + 728 + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(40)); 729 + if (ret) 730 + return ret; 695 731 696 732 xor_dev->clk = devm_clk_get(&pdev->dev, NULL); 697 733 if (IS_ERR(xor_dev->clk) && PTR_ERR(xor_dev->clk) == -EPROBE_DEFER) ··· 757 785 758 786 /* add all SW descriptors to the free list */ 759 787 for (i = 0; i < MV_XOR_V2_DESC_NUM; i++) { 760 - xor_dev->sw_desq[i].idx = i; 761 - list_add(&xor_dev->sw_desq[i].free_list, 788 + struct mv_xor_v2_sw_desc *sw_desc = 789 + xor_dev->sw_desq + i; 790 + sw_desc->idx = i; 791 + dma_async_tx_descriptor_init(&sw_desc->async_tx, 792 + &xor_dev->dmachan); 793 + sw_desc->async_tx.tx_submit = mv_xor_v2_tx_submit; 794 + async_tx_ack(&sw_desc->async_tx); 795 + 796 + list_add(&sw_desc->free_list, 762 797 &xor_dev->free_sw_desc); 763 798 } 764 799
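With the rework above the driver tracks its own DESQ write index (hw_queue_idx) instead of reading it back from hardware on every submit; the index simply advances modulo the queue depth under the channel lock. The wrap logic, standalone (the depth constant is a stand-in for MV_XOR_V2_DESC_NUM):

```c
#include <assert.h>

#define DESC_NUM 8	/* stand-in for MV_XOR_V2_DESC_NUM */

/* Advance the software-owned hardware-queue write index,
 * wrapping at the descriptor-queue depth. */
static unsigned int advance_queue_idx(unsigned int idx)
{
	idx++;
	if (idx >= DESC_NUM)
		idx = 0;
	return idx;
}
```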
+2 -1
drivers/dma/pl330.c
··· 3008 3008 3009 3009 for (i = 0; i < AMBA_NR_IRQS; i++) { 3010 3010 irq = adev->irq[i]; 3011 - devm_free_irq(&adev->dev, irq, pl330); 3011 + if (irq) 3012 + devm_free_irq(&adev->dev, irq, pl330); 3012 3013 } 3013 3014 3014 3015 dma_async_device_unregister(&pl330->ddma);
+3
drivers/dma/sh/rcar-dmac.c
··· 1287 1287 if (desc->hwdescs.use) { 1288 1288 dptr = (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) & 1289 1289 RCAR_DMACHCRB_DPTR_MASK) >> RCAR_DMACHCRB_DPTR_SHIFT; 1290 + if (dptr == 0) 1291 + dptr = desc->nchunks; 1292 + dptr--; 1290 1293 WARN_ON(dptr >= desc->nchunks); 1291 1294 } else { 1292 1295 running = desc->running;
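The fix above accounts for DPTR naming the chunk the hardware will execute next, so the residue calculation has to step back to the chunk currently in flight, with index 0 wrapping around to the last chunk. As a standalone helper (name is illustrative):

```c
#include <assert.h>

/* Map the hardware's "next chunk" pointer to the chunk currently
 * in flight: step back one, wrapping 0 around to nchunks - 1. */
static unsigned int current_chunk(unsigned int dptr, unsigned int nchunks)
{
	if (dptr == 0)
		dptr = nchunks;
	return dptr - 1;
}
```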
+1 -1
drivers/dma/sh/usb-dmac.c
··· 117 117 #define USB_DMASWR 0x0008 118 118 #define USB_DMASWR_SWR (1 << 0) 119 119 #define USB_DMAOR 0x0060 120 - #define USB_DMAOR_AE (1 << 2) 120 + #define USB_DMAOR_AE (1 << 1) 121 121 #define USB_DMAOR_DME (1 << 0) 122 122 123 123 #define USB_DMASAR 0x0000
+2
drivers/firmware/dmi-id.c
··· 47 47 DEFINE_DMI_ATTR_WITH_SHOW(product_version, 0444, DMI_PRODUCT_VERSION); 48 48 DEFINE_DMI_ATTR_WITH_SHOW(product_serial, 0400, DMI_PRODUCT_SERIAL); 49 49 DEFINE_DMI_ATTR_WITH_SHOW(product_uuid, 0400, DMI_PRODUCT_UUID); 50 + DEFINE_DMI_ATTR_WITH_SHOW(product_family, 0400, DMI_PRODUCT_FAMILY); 50 51 DEFINE_DMI_ATTR_WITH_SHOW(board_vendor, 0444, DMI_BOARD_VENDOR); 51 52 DEFINE_DMI_ATTR_WITH_SHOW(board_name, 0444, DMI_BOARD_NAME); 52 53 DEFINE_DMI_ATTR_WITH_SHOW(board_version, 0444, DMI_BOARD_VERSION); ··· 192 191 ADD_DMI_ATTR(product_version, DMI_PRODUCT_VERSION); 193 192 ADD_DMI_ATTR(product_serial, DMI_PRODUCT_SERIAL); 194 193 ADD_DMI_ATTR(product_uuid, DMI_PRODUCT_UUID); 194 + ADD_DMI_ATTR(product_family, DMI_PRODUCT_FAMILY); 195 195 ADD_DMI_ATTR(board_vendor, DMI_BOARD_VENDOR); 196 196 ADD_DMI_ATTR(board_name, DMI_BOARD_NAME); 197 197 ADD_DMI_ATTR(board_version, DMI_BOARD_VERSION);
+1
drivers/firmware/dmi_scan.c
··· 430 430 dmi_save_ident(dm, DMI_PRODUCT_VERSION, 6); 431 431 dmi_save_ident(dm, DMI_PRODUCT_SERIAL, 7); 432 432 dmi_save_uuid(dm, DMI_PRODUCT_UUID, 8); 433 + dmi_save_ident(dm, DMI_PRODUCT_FAMILY, 26); 433 434 break; 434 435 case 2: /* Base Board Information */ 435 436 dmi_save_ident(dm, DMI_BOARD_VENDOR, 4);
+3
drivers/firmware/efi/efi-bgrt.c
··· 36 36 if (acpi_disabled) 37 37 return; 38 38 39 + if (!efi_enabled(EFI_BOOT)) 40 + return; 41 + 39 42 if (table->length < sizeof(bgrt_tab)) { 40 43 pr_notice("Ignoring BGRT: invalid length %u (expected %zu)\n", 41 44 table->length, sizeof(bgrt_tab));
+11 -6
drivers/firmware/efi/efi-pstore.c
··· 53 53 if (sscanf(name, "dump-type%u-%u-%d-%lu-%c", 54 54 &record->type, &part, &cnt, &time, &data_type) == 5) { 55 55 record->id = generic_id(time, part, cnt); 56 + record->part = part; 56 57 record->count = cnt; 57 58 record->time.tv_sec = time; 58 59 record->time.tv_nsec = 0; ··· 65 64 } else if (sscanf(name, "dump-type%u-%u-%d-%lu", 66 65 &record->type, &part, &cnt, &time) == 4) { 67 66 record->id = generic_id(time, part, cnt); 67 + record->part = part; 68 68 record->count = cnt; 69 69 record->time.tv_sec = time; 70 70 record->time.tv_nsec = 0; ··· 79 77 * multiple logs, remains. 80 78 */ 81 79 record->id = generic_id(time, part, 0); 80 + record->part = part; 82 81 record->count = 0; 83 82 record->time.tv_sec = time; 84 83 record->time.tv_nsec = 0; ··· 244 241 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 245 242 int i, ret = 0; 246 243 244 + record->time.tv_sec = get_seconds(); 245 + record->time.tv_nsec = 0; 246 + 247 + record->id = generic_id(record->time.tv_sec, record->part, 248 + record->count); 249 + 247 250 snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu-%c", 248 251 record->type, record->part, record->count, 249 - get_seconds(), record->compressed ? 'C' : 'D'); 252 + record->time.tv_sec, record->compressed ? 'C' : 'D'); 250 253 251 254 for (i = 0; i < DUMP_NAME_LEN; i++) 252 255 efi_name[i] = name[i]; ··· 264 255 if (record->reason == KMSG_DUMP_OOPS) 265 256 efivar_run_worker(); 266 257 267 - record->id = record->part; 268 258 return ret; 269 259 }; 270 260 ··· 295 287 * holding multiple logs, remains. 
296 288 */ 297 289 snprintf(name_old, sizeof(name_old), "dump-type%u-%u-%lu", 298 - ed->record->type, (unsigned int)ed->record->id, 290 + ed->record->type, ed->record->part, 299 291 ed->record->time.tv_sec); 300 292 301 293 for (i = 0; i < DUMP_NAME_LEN; i++) ··· 328 320 char name[DUMP_NAME_LEN]; 329 321 efi_char16_t efi_name[DUMP_NAME_LEN]; 330 322 int found, i; 331 - unsigned int part; 332 323 333 - do_div(record->id, 1000); 334 - part = do_div(record->id, 100); 335 324 snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu", 336 325 record->type, record->part, record->count, 337 326 record->time.tv_sec);
+2 -2
drivers/firmware/efi/libstub/secureboot.c
··· 16 16 17 17 /* BIOS variables */ 18 18 static const efi_guid_t efi_variable_guid = EFI_GLOBAL_VARIABLE_GUID; 19 - static const efi_char16_t const efi_SecureBoot_name[] = { 19 + static const efi_char16_t efi_SecureBoot_name[] = { 20 20 'S', 'e', 'c', 'u', 'r', 'e', 'B', 'o', 'o', 't', 0 21 21 }; 22 - static const efi_char16_t const efi_SetupMode_name[] = { 22 + static const efi_char16_t efi_SetupMode_name[] = { 23 23 'S', 'e', 't', 'u', 'p', 'M', 'o', 'd', 'e', 0 24 24 }; 25 25
+6 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
··· 425 425 426 426 void amdgpu_fbdev_restore_mode(struct amdgpu_device *adev) 427 427 { 428 - struct amdgpu_fbdev *afbdev = adev->mode_info.rfbdev; 428 + struct amdgpu_fbdev *afbdev; 429 429 struct drm_fb_helper *fb_helper; 430 430 int ret; 431 + 432 + if (!adev) 433 + return; 434 + 435 + afbdev = adev->mode_info.rfbdev; 431 436 432 437 if (!afbdev) 433 438 return;
+22 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 634 634 mutex_unlock(&id_mgr->lock); 635 635 } 636 636 637 - if (gds_switch_needed) { 637 + if (ring->funcs->emit_gds_switch && gds_switch_needed) { 638 638 id->gds_base = job->gds_base; 639 639 id->gds_size = job->gds_size; 640 640 id->gws_base = job->gws_base; ··· 672 672 struct amdgpu_vm_id_manager *id_mgr = &adev->vm_manager.id_mgr[vmhub]; 673 673 struct amdgpu_vm_id *id = &id_mgr->ids[vmid]; 674 674 675 + atomic64_set(&id->owner, 0); 675 676 id->gds_base = 0; 676 677 id->gds_size = 0; 677 678 id->gws_base = 0; 678 679 id->gws_size = 0; 679 680 id->oa_base = 0; 680 681 id->oa_size = 0; 682 + } 683 + 684 + /** 685 + * amdgpu_vm_reset_all_id - reset VMID to zero 686 + * 687 + * @adev: amdgpu device structure 688 + * 689 + * Reset VMID to force flush on next use 690 + */ 691 + void amdgpu_vm_reset_all_ids(struct amdgpu_device *adev) 692 + { 693 + unsigned i, j; 694 + 695 + for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) { 696 + struct amdgpu_vm_id_manager *id_mgr = 697 + &adev->vm_manager.id_mgr[i]; 698 + 699 + for (j = 1; j < id_mgr->num_ids; ++j) 700 + amdgpu_vm_reset_id(adev, i, j); 701 + } 681 702 } 682 703 683 704 /** ··· 2290 2269 dma_fence_context_alloc(AMDGPU_MAX_RINGS); 2291 2270 for (i = 0; i < AMDGPU_MAX_RINGS; ++i) 2292 2271 adev->vm_manager.seqno[i] = 0; 2293 - 2294 2272 2295 2273 atomic_set(&adev->vm_manager.vm_pte_next_ring, 0); 2296 2274 atomic64_set(&adev->vm_manager.client_counter, 0);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
··· 204 204 int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job); 205 205 void amdgpu_vm_reset_id(struct amdgpu_device *adev, unsigned vmhub, 206 206 unsigned vmid); 207 + void amdgpu_vm_reset_all_ids(struct amdgpu_device *adev); 207 208 int amdgpu_vm_update_directories(struct amdgpu_device *adev, 208 209 struct amdgpu_vm *vm); 209 210 int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
+5 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 220 220 } 221 221 222 222 const struct ttm_mem_type_manager_func amdgpu_vram_mgr_func = { 223 - amdgpu_vram_mgr_init, 224 - amdgpu_vram_mgr_fini, 225 - amdgpu_vram_mgr_new, 226 - amdgpu_vram_mgr_del, 227 - amdgpu_vram_mgr_debug 223 + .init = amdgpu_vram_mgr_init, 224 + .takedown = amdgpu_vram_mgr_fini, 225 + .get_node = amdgpu_vram_mgr_new, 226 + .put_node = amdgpu_vram_mgr_del, 227 + .debug = amdgpu_vram_mgr_debug 228 228 };
+6
drivers/gpu/drm/amd/amdgpu/ci_dpm.c
··· 906 906 u32 vblank_time = amdgpu_dpm_get_vblank_time(adev); 907 907 u32 switch_limit = adev->mc.vram_type == AMDGPU_VRAM_TYPE_GDDR5 ? 450 : 300; 908 908 909 + /* disable mclk switching if the refresh is >120Hz, even if the 910 + * blanking period would allow it 911 + */ 912 + if (amdgpu_dpm_get_vrefresh(adev) > 120) 913 + return true; 914 + 909 915 if (vblank_time < switch_limit) 910 916 return true; 911 917 else
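After the hunk above, memory-clock switching is vetoed either when the refresh rate exceeds 120 Hz or when the vblank period is shorter than the memory-type-dependent switch time. The combined decision as a boolean sketch (function and parameter names are illustrative; the 450/300 us limits are the ones visible in the hunk):

```c
#include <assert.h>
#include <stdbool.h>

/* Decide whether mclk switching must stay disabled: a >120 Hz
 * refresh always vetoes it; otherwise compare the vblank period
 * (in us) against the switch latency for the memory type. */
static bool mclk_switch_too_risky(unsigned int vrefresh_hz,
				  unsigned int vblank_time_us,
				  bool is_gddr5)
{
	unsigned int switch_limit_us = is_gddr5 ? 450 : 300;

	if (vrefresh_hz > 120)
		return true;
	return vblank_time_us < switch_limit_us;
}
```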
+2 -13
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
··· 950 950 { 951 951 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 952 952 953 - if (adev->vm_manager.enabled) { 954 - gmc_v6_0_vm_fini(adev); 955 - adev->vm_manager.enabled = false; 956 - } 957 953 gmc_v6_0_hw_fini(adev); 958 954 959 955 return 0; ··· 964 968 if (r) 965 969 return r; 966 970 967 - if (!adev->vm_manager.enabled) { 968 - r = gmc_v6_0_vm_init(adev); 969 - if (r) { 970 - dev_err(adev->dev, "vm manager initialization failed (%d).\n", r); 971 - return r; 972 - } 973 - adev->vm_manager.enabled = true; 974 - } 971 + amdgpu_vm_reset_all_ids(adev); 975 972 976 - return r; 973 + return 0; 977 974 } 978 975 979 976 static bool gmc_v6_0_is_idle(void *handle)
+2 -13
drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
··· 1117 1117 { 1118 1118 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1119 1119 1120 - if (adev->vm_manager.enabled) { 1121 - gmc_v7_0_vm_fini(adev); 1122 - adev->vm_manager.enabled = false; 1123 - } 1124 1120 gmc_v7_0_hw_fini(adev); 1125 1121 1126 1122 return 0; ··· 1131 1135 if (r) 1132 1136 return r; 1133 1137 1134 - if (!adev->vm_manager.enabled) { 1135 - r = gmc_v7_0_vm_init(adev); 1136 - if (r) { 1137 - dev_err(adev->dev, "vm manager initialization failed (%d).\n", r); 1138 - return r; 1139 - } 1140 - adev->vm_manager.enabled = true; 1141 - } 1138 + amdgpu_vm_reset_all_ids(adev); 1142 1139 1143 - return r; 1140 + return 0; 1144 1141 } 1145 1142 1146 1143 static bool gmc_v7_0_is_idle(void *handle)
+2 -13
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
··· 1209 1209 { 1210 1210 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1211 1211 1212 - if (adev->vm_manager.enabled) { 1213 - gmc_v8_0_vm_fini(adev); 1214 - adev->vm_manager.enabled = false; 1215 - } 1216 1212 gmc_v8_0_hw_fini(adev); 1217 1213 1218 1214 return 0; ··· 1223 1227 if (r) 1224 1228 return r; 1225 1229 1226 - if (!adev->vm_manager.enabled) { 1227 - r = gmc_v8_0_vm_init(adev); 1228 - if (r) { 1229 - dev_err(adev->dev, "vm manager initialization failed (%d).\n", r); 1230 - return r; 1231 - } 1232 - adev->vm_manager.enabled = true; 1233 - } 1230 + amdgpu_vm_reset_all_ids(adev); 1234 1231 1235 - return r; 1232 + return 0; 1236 1233 } 1237 1234 1238 1235 static bool gmc_v8_0_is_idle(void *handle)
+2 -14
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 791 791 { 792 792 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 793 793 794 - if (adev->vm_manager.enabled) { 795 - gmc_v9_0_vm_fini(adev); 796 - adev->vm_manager.enabled = false; 797 - } 798 794 gmc_v9_0_hw_fini(adev); 799 795 800 796 return 0; ··· 805 809 if (r) 806 810 return r; 807 811 808 - if (!adev->vm_manager.enabled) { 809 - r = gmc_v9_0_vm_init(adev); 810 - if (r) { 811 - dev_err(adev->dev, 812 - "vm manager initialization failed (%d).\n", r); 813 - return r; 814 - } 815 - adev->vm_manager.enabled = true; 816 - } 812 + amdgpu_vm_reset_all_ids(adev); 817 813 818 - return r; 814 + return 0; 819 815 } 820 816 821 817 static bool gmc_v9_0_is_idle(void *handle)
+68 -27
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
··· 77 77 static uint64_t vce_v3_0_ring_get_rptr(struct amdgpu_ring *ring) 78 78 { 79 79 struct amdgpu_device *adev = ring->adev; 80 + u32 v; 81 + 82 + mutex_lock(&adev->grbm_idx_mutex); 83 + if (adev->vce.harvest_config == 0 || 84 + adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE1) 85 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0)); 86 + else if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) 87 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1)); 80 88 81 89 if (ring == &adev->vce.ring[0]) 82 - return RREG32(mmVCE_RB_RPTR); 90 + v = RREG32(mmVCE_RB_RPTR); 83 91 else if (ring == &adev->vce.ring[1]) 84 - return RREG32(mmVCE_RB_RPTR2); 92 + v = RREG32(mmVCE_RB_RPTR2); 85 93 else 86 - return RREG32(mmVCE_RB_RPTR3); 94 + v = RREG32(mmVCE_RB_RPTR3); 95 + 96 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 97 + mutex_unlock(&adev->grbm_idx_mutex); 98 + 99 + return v; 87 100 } 88 101 89 102 /** ··· 109 96 static uint64_t vce_v3_0_ring_get_wptr(struct amdgpu_ring *ring) 110 97 { 111 98 struct amdgpu_device *adev = ring->adev; 99 + u32 v; 100 + 101 + mutex_lock(&adev->grbm_idx_mutex); 102 + if (adev->vce.harvest_config == 0 || 103 + adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE1) 104 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0)); 105 + else if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) 106 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1)); 112 107 113 108 if (ring == &adev->vce.ring[0]) 114 - return RREG32(mmVCE_RB_WPTR); 109 + v = RREG32(mmVCE_RB_WPTR); 115 110 else if (ring == &adev->vce.ring[1]) 116 - return RREG32(mmVCE_RB_WPTR2); 111 + v = RREG32(mmVCE_RB_WPTR2); 117 112 else 118 - return RREG32(mmVCE_RB_WPTR3); 113 + v = RREG32(mmVCE_RB_WPTR3); 114 + 115 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 116 + mutex_unlock(&adev->grbm_idx_mutex); 117 + 118 + return v; 119 119 } 120 120 121 121 /** ··· 142 116 { 143 117 struct amdgpu_device *adev = ring->adev; 144 118 119 + mutex_lock(&adev->grbm_idx_mutex); 120 + if 
(adev->vce.harvest_config == 0 || 121 + adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE1) 122 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(0)); 123 + else if (adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) 124 + WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(1)); 125 + 145 126 if (ring == &adev->vce.ring[0]) 146 127 WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr)); 147 128 else if (ring == &adev->vce.ring[1]) 148 129 WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr)); 149 130 else 150 131 WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr)); 132 + 133 + WREG32(mmGRBM_GFX_INDEX, mmGRBM_GFX_INDEX_DEFAULT); 134 + mutex_unlock(&adev->grbm_idx_mutex); 151 135 } 152 136 153 137 static void vce_v3_0_override_vce_clock_gating(struct amdgpu_device *adev, bool override) ··· 267 231 struct amdgpu_ring *ring; 268 232 int idx, r; 269 233 270 - ring = &adev->vce.ring[0]; 271 - WREG32(mmVCE_RB_RPTR, lower_32_bits(ring->wptr)); 272 - WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr)); 273 - WREG32(mmVCE_RB_BASE_LO, ring->gpu_addr); 274 - WREG32(mmVCE_RB_BASE_HI, upper_32_bits(ring->gpu_addr)); 275 - WREG32(mmVCE_RB_SIZE, ring->ring_size / 4); 276 - 277 - ring = &adev->vce.ring[1]; 278 - WREG32(mmVCE_RB_RPTR2, lower_32_bits(ring->wptr)); 279 - WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr)); 280 - WREG32(mmVCE_RB_BASE_LO2, ring->gpu_addr); 281 - WREG32(mmVCE_RB_BASE_HI2, upper_32_bits(ring->gpu_addr)); 282 - WREG32(mmVCE_RB_SIZE2, ring->ring_size / 4); 283 - 284 - ring = &adev->vce.ring[2]; 285 - WREG32(mmVCE_RB_RPTR3, lower_32_bits(ring->wptr)); 286 - WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr)); 287 - WREG32(mmVCE_RB_BASE_LO3, ring->gpu_addr); 288 - WREG32(mmVCE_RB_BASE_HI3, upper_32_bits(ring->gpu_addr)); 289 - WREG32(mmVCE_RB_SIZE3, ring->ring_size / 4); 290 - 291 234 mutex_lock(&adev->grbm_idx_mutex); 292 235 for (idx = 0; idx < 2; ++idx) { 293 236 if (adev->vce.harvest_config & (1 << idx)) 294 237 continue; 295 238 296 239 WREG32(mmGRBM_GFX_INDEX, GET_VCE_INSTANCE(idx)); 
240 + 241 + /* Program the instance 0 reg space when both instances or only instance 0 is available; 242 + program the instance 1 reg space when only instance 1 is available */ 243 + if (idx != 1 || adev->vce.harvest_config == AMDGPU_VCE_HARVEST_VCE0) { 244 + ring = &adev->vce.ring[0]; 245 + WREG32(mmVCE_RB_RPTR, lower_32_bits(ring->wptr)); 246 + WREG32(mmVCE_RB_WPTR, lower_32_bits(ring->wptr)); 247 + WREG32(mmVCE_RB_BASE_LO, ring->gpu_addr); 248 + WREG32(mmVCE_RB_BASE_HI, upper_32_bits(ring->gpu_addr)); 249 + WREG32(mmVCE_RB_SIZE, ring->ring_size / 4); 250 + 251 + ring = &adev->vce.ring[1]; 252 + WREG32(mmVCE_RB_RPTR2, lower_32_bits(ring->wptr)); 253 + WREG32(mmVCE_RB_WPTR2, lower_32_bits(ring->wptr)); 254 + WREG32(mmVCE_RB_BASE_LO2, ring->gpu_addr); 255 + WREG32(mmVCE_RB_BASE_HI2, upper_32_bits(ring->gpu_addr)); 256 + WREG32(mmVCE_RB_SIZE2, ring->ring_size / 4); 257 + 258 + ring = &adev->vce.ring[2]; 259 + WREG32(mmVCE_RB_RPTR3, lower_32_bits(ring->wptr)); 260 + WREG32(mmVCE_RB_WPTR3, lower_32_bits(ring->wptr)); 261 + WREG32(mmVCE_RB_BASE_LO3, ring->gpu_addr); 262 + WREG32(mmVCE_RB_BASE_HI3, upper_32_bits(ring->gpu_addr)); 263 + WREG32(mmVCE_RB_SIZE3, ring->ring_size / 4); 264 + } 265 + 297 266 vce_v3_0_mc_resume(adev, idx); 298 267 WREG32_FIELD(VCE_STATUS, JOB_BUSY, 1); 299 268
+28 -4
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 2655 2655 return sizeof(struct smu7_power_state); 2656 2656 } 2657 2657 2658 + static int smu7_vblank_too_short(struct pp_hwmgr *hwmgr, 2659 + uint32_t vblank_time_us) 2660 + { 2661 + struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); 2662 + uint32_t switch_limit_us; 2663 + 2664 + switch (hwmgr->chip_id) { 2665 + case CHIP_POLARIS10: 2666 + case CHIP_POLARIS11: 2667 + case CHIP_POLARIS12: 2668 + switch_limit_us = data->is_memory_gddr5 ? 190 : 150; 2669 + break; 2670 + default: 2671 + switch_limit_us = data->is_memory_gddr5 ? 450 : 150; 2672 + break; 2673 + } 2674 + 2675 + if (vblank_time_us < switch_limit_us) 2676 + return true; 2677 + else 2678 + return false; 2679 + } 2658 2680 2659 2681 static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr, 2660 2682 struct pp_power_state *request_ps, ··· 2691 2669 bool disable_mclk_switching; 2692 2670 bool disable_mclk_switching_for_frame_lock; 2693 2671 struct cgs_display_info info = {0}; 2672 + struct cgs_mode_info mode_info = {0}; 2694 2673 const struct phm_clock_and_voltage_limits *max_limits; 2695 2674 uint32_t i; 2696 2675 struct smu7_hwmgr *data = (struct smu7_hwmgr *)(hwmgr->backend); ··· 2700 2677 int32_t count; 2701 2678 int32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0; 2702 2679 2680 + info.mode_info = &mode_info; 2703 2681 data->battery_state = (PP_StateUILabel_Battery == 2704 2682 request_ps->classification.ui_label); 2705 2683 ··· 2726 2702 smu7_ps->vce_clks.ecclk = hwmgr->vce_arbiter.ecclk; 2727 2703 2728 2704 cgs_get_active_displays_info(hwmgr->device, &info); 2729 - 2730 - /*TO DO result = PHM_CheckVBlankTime(hwmgr, &vblankTooShort);*/ 2731 2705 2732 2706 minimum_clocks.engineClock = hwmgr->display_config.min_core_set_clock; 2733 2707 minimum_clocks.memoryClock = hwmgr->display_config.min_mem_set_clock; ··· 2791 2769 PHM_PlatformCaps_DisableMclkSwitchingForFrameLock); 2792 2770 2793 2771 2794 - disable_mclk_switching = (1 < info.display_count) || 2795 - 
disable_mclk_switching_for_frame_lock; 2772 + disable_mclk_switching = ((1 < info.display_count) || 2773 + disable_mclk_switching_for_frame_lock || 2774 + smu7_vblank_too_short(hwmgr, mode_info.vblank_time_us) || 2775 + (mode_info.refresh_rate > 120)); 2796 2776 2797 2777 sclk = smu7_ps->performance_levels[0].engine_clock; 2798 2778 mclk = smu7_ps->performance_levels[0].memory_clock;
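The smu7_vblank_too_short() hunk above encodes one rule: an MCLK switch must complete within the vblank interval, and the required headroom depends on the chip family and memory type. A minimal standalone sketch of that decision (the enum and function names here are illustrative stand-ins, not the driver's types):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical chip ids standing in for CHIP_POLARIS10/11/12 vs. others. */
enum sketch_chip { SKETCH_POLARIS, SKETCH_OTHER };

/* Mirror of the patch's decision: GDDR5 needs a longer switch window than
 * other memory, and Polaris parts have a tighter limit than the default. */
static bool sketch_vblank_too_short(enum sketch_chip chip, bool is_gddr5,
				    uint32_t vblank_time_us)
{
	uint32_t switch_limit_us;

	switch (chip) {
	case SKETCH_POLARIS:
		switch_limit_us = is_gddr5 ? 190 : 150;
		break;
	default:
		switch_limit_us = is_gddr5 ? 450 : 150;
		break;
	}

	return vblank_time_us < switch_limit_us;
}
```

When the predicate is true, the caller keeps MCLK switching disabled, exactly as the new `disable_mclk_switching` expression above does.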
+1 -1
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
··· 4186 4186 enum pp_clock_type type, uint32_t mask) 4187 4187 { 4188 4188 struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend); 4189 - uint32_t i; 4189 + int i; 4190 4190 4191 4191 if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL) 4192 4192 return -EINVAL;
+10 -10
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
··· 709 709 710 710 static struct phm_master_table_item 711 711 vega10_thermal_start_thermal_controller_master_list[] = { 712 - {NULL, tf_vega10_thermal_initialize}, 713 - {NULL, tf_vega10_thermal_set_temperature_range}, 714 - {NULL, tf_vega10_thermal_enable_alert}, 712 + { .tableFunction = tf_vega10_thermal_initialize }, 713 + { .tableFunction = tf_vega10_thermal_set_temperature_range }, 714 + { .tableFunction = tf_vega10_thermal_enable_alert }, 715 715 /* We should restrict performance levels to low before we halt the SMC. 716 716 * On the other hand we are still in boot state when we do this 717 717 * so it would be pointless. 718 718 * If this assumption changes we have to revisit this table. 719 719 */ 720 - {NULL, tf_vega10_thermal_setup_fan_table}, 721 - {NULL, tf_vega10_thermal_start_smc_fan_control}, 722 - {NULL, NULL} 720 + { .tableFunction = tf_vega10_thermal_setup_fan_table }, 721 + { .tableFunction = tf_vega10_thermal_start_smc_fan_control }, 722 + { } 723 723 }; 724 724 725 725 static struct phm_master_table_header ··· 731 731 732 732 static struct phm_master_table_item 733 733 vega10_thermal_set_temperature_range_master_list[] = { 734 - {NULL, tf_vega10_thermal_disable_alert}, 735 - {NULL, tf_vega10_thermal_set_temperature_range}, 736 - {NULL, tf_vega10_thermal_enable_alert}, 737 - {NULL, NULL} 734 + { .tableFunction = tf_vega10_thermal_disable_alert }, 735 + { .tableFunction = tf_vega10_thermal_set_temperature_range }, 736 + { .tableFunction = tf_vega10_thermal_enable_alert }, 737 + { } 738 738 }; 739 739 740 740 struct phm_master_table_header
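The vega10_thermal change swaps positional `{NULL, fn}` initializers for designated ones and replaces the `{NULL, NULL}` terminator with an empty sentinel. A sketch of why this works (the struct layout here is an assumption for illustration, not the real phm_master_table_item; the kernel's `{ }` sentinel is written `{ 0 }` here to stay standard C):

```c
#include <stddef.h>

typedef int (*table_fn)(void *input, void *output);

/* Simplified stand-in for the table item; only the field name
 * tableFunction is taken from the patch. */
struct sketch_table_item {
	void *data;
	table_fn tableFunction;
};

static int step_a(void *in, void *out) { (void)in; (void)out; return 1; }
static int step_b(void *in, void *out) { (void)in; (void)out; return 2; }

/* A designated initializer sets only the named member; every other member,
 * and the whole trailing sentinel entry, is zero-initialized. */
static struct sketch_table_item sketch_list[] = {
	{ .tableFunction = step_a },
	{ .tableFunction = step_b },
	{ 0 }
};

/* Walk the table until the zeroed sentinel, as a table framework would. */
static int sketch_run_table(struct sketch_table_item *items)
{
	int sum = 0;

	for (; items->tableFunction != NULL; items++)
		sum += items->tableFunction(NULL, NULL);
	return sum;
}
```

The designated form is robust against reordering or growth of the struct, which is presumably why the table entries were converted.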
+83
drivers/gpu/drm/drm_dp_helper.c
··· 1208 1208 return 0; 1209 1209 } 1210 1210 EXPORT_SYMBOL(drm_dp_stop_crc); 1211 + 1212 + struct dpcd_quirk { 1213 + u8 oui[3]; 1214 + bool is_branch; 1215 + u32 quirks; 1216 + }; 1217 + 1218 + #define OUI(first, second, third) { (first), (second), (third) } 1219 + 1220 + static const struct dpcd_quirk dpcd_quirk_list[] = { 1221 + /* Analogix 7737 needs reduced M and N at HBR2 link rates */ 1222 + { OUI(0x00, 0x22, 0xb9), true, BIT(DP_DPCD_QUIRK_LIMITED_M_N) }, 1223 + }; 1224 + 1225 + #undef OUI 1226 + 1227 + /* 1228 + * Get a bit mask of DPCD quirks for the sink/branch device identified by 1229 + * ident. The quirk data is shared but it's up to the drivers to act on the 1230 + * data. 1231 + * 1232 + * For now, only the OUI (first three bytes) is used, but this may be extended 1233 + * to device identification string and hardware/firmware revisions later. 1234 + */ 1235 + static u32 1236 + drm_dp_get_quirks(const struct drm_dp_dpcd_ident *ident, bool is_branch) 1237 + { 1238 + const struct dpcd_quirk *quirk; 1239 + u32 quirks = 0; 1240 + int i; 1241 + 1242 + for (i = 0; i < ARRAY_SIZE(dpcd_quirk_list); i++) { 1243 + quirk = &dpcd_quirk_list[i]; 1244 + 1245 + if (quirk->is_branch != is_branch) 1246 + continue; 1247 + 1248 + if (memcmp(quirk->oui, ident->oui, sizeof(ident->oui)) != 0) 1249 + continue; 1250 + 1251 + quirks |= quirk->quirks; 1252 + } 1253 + 1254 + return quirks; 1255 + } 1256 + 1257 + /** 1258 + * drm_dp_read_desc - read sink/branch descriptor from DPCD 1259 + * @aux: DisplayPort AUX channel 1260 + * @desc: Device descriptor to fill from DPCD 1261 + * @is_branch: true for branch devices, false for sink devices 1262 + * 1263 + * Read DPCD 0x400 (sink) or 0x500 (branch) into @desc. Also debug log the 1264 + * identification. 1265 + * 1266 + * Returns 0 on success or a negative error code on failure.
1267 + */ 1268 + int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc, 1269 + bool is_branch) 1270 + { 1271 + struct drm_dp_dpcd_ident *ident = &desc->ident; 1272 + unsigned int offset = is_branch ? DP_BRANCH_OUI : DP_SINK_OUI; 1273 + int ret, dev_id_len; 1274 + 1275 + ret = drm_dp_dpcd_read(aux, offset, ident, sizeof(*ident)); 1276 + if (ret < 0) 1277 + return ret; 1278 + 1279 + desc->quirks = drm_dp_get_quirks(ident, is_branch); 1280 + 1281 + dev_id_len = strnlen(ident->device_id, sizeof(ident->device_id)); 1282 + 1283 + DRM_DEBUG_KMS("DP %s: OUI %*phD dev-ID %*pE HW-rev %d.%d SW-rev %d.%d quirks 0x%04x\n", 1284 + is_branch ? "branch" : "sink", 1285 + (int)sizeof(ident->oui), ident->oui, 1286 + dev_id_len, ident->device_id, 1287 + ident->hw_rev >> 4, ident->hw_rev & 0xf, 1288 + ident->sw_major_rev, ident->sw_minor_rev, 1289 + desc->quirks); 1290 + 1291 + return 0; 1292 + } 1293 + EXPORT_SYMBOL(drm_dp_read_desc);
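The quirk lookup added above matches on the three-byte IEEE OUI and the sink/branch flag, then ORs together the matching quirk masks. A self-contained sketch of the same matching logic (types and names here are hypothetical stand-ins for the drm_dp_helper structures):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical quirk bit standing in for DP_DPCD_QUIRK_LIMITED_M_N. */
#define SKETCH_QUIRK_LIMITED_M_N (1u << 0)

struct sketch_quirk {
	uint8_t oui[3];   /* IEEE OUI: first three DPCD ident bytes */
	bool is_branch;   /* quirk applies to branch vs. sink devices */
	uint32_t quirks;  /* bit mask of quirk flags */
};

static const struct sketch_quirk sketch_quirk_list[] = {
	/* Analogix OUI 00-22-b9, branch device, from the table above */
	{ { 0x00, 0x22, 0xb9 }, true, SKETCH_QUIRK_LIMITED_M_N },
};

/* OR together the quirk masks of every entry whose OUI and device type
 * match, as drm_dp_get_quirks() in the patch does. */
static uint32_t sketch_get_quirks(const uint8_t oui[3], bool is_branch)
{
	uint32_t quirks = 0;
	size_t i;

	for (i = 0; i < sizeof(sketch_quirk_list) / sizeof(sketch_quirk_list[0]); i++) {
		const struct sketch_quirk *q = &sketch_quirk_list[i];

		if (q->is_branch != is_branch)
			continue;
		if (memcmp(q->oui, oui, 3) != 0)
			continue;
		quirks |= q->quirks;
	}
	return quirks;
}
```

As the kernel-doc notes, the table is keyed only on the OUI for now; a sink from the same vendor with a different device ID would still match until the key is extended.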
+3 -2
drivers/gpu/drm/drm_plane.c
··· 948 948 } 949 949 950 950 out: 951 - if (ret && crtc->funcs->page_flip_target) 952 - drm_crtc_vblank_put(crtc); 953 951 if (fb) 954 952 drm_framebuffer_put(fb); 955 953 if (crtc->primary->old_fb) ··· 961 963 962 964 drm_modeset_drop_locks(&ctx); 963 965 drm_modeset_acquire_fini(&ctx); 966 + 967 + if (ret && crtc->funcs->page_flip_target) 968 + drm_crtc_vblank_put(crtc); 964 969 965 970 return ret; 966 971 }
+1 -7
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 82 82 return ret; 83 83 } 84 84 85 - static void exynos_drm_preclose(struct drm_device *dev, 86 - struct drm_file *file) 87 - { 88 - exynos_drm_subdrv_close(dev, file); 89 - } 90 - 91 85 static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file) 92 86 { 87 + exynos_drm_subdrv_close(dev, file); 93 88 kfree(file->driver_priv); 94 89 file->driver_priv = NULL; 95 90 } ··· 140 145 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME 141 146 | DRIVER_ATOMIC | DRIVER_RENDER, 142 147 .open = exynos_drm_open, 143 - .preclose = exynos_drm_preclose, 144 148 .lastclose = exynos_drm_lastclose, 145 149 .postclose = exynos_drm_postclose, 146 150 .gem_free_object_unlocked = exynos_drm_gem_free_object,
+1 -4
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 160 160 * drm framework doesn't support multiple irq yet. 161 161 * we can refer to the crtc to current hardware interrupt occurred through 162 162 * this pipe value. 163 - * @enabled: if the crtc is enabled or not 164 - * @event: vblank event that is currently queued for flip 165 - * @wait_update: wait all pending planes updates to finish 166 - * @pending_update: number of pending plane updates in this crtc 167 163 * @ops: pointer to callbacks for exynos drm specific functionality 168 164 * @ctx: A pointer to the crtc's implementation specific context 165 + * @pipe_clk: A pointer to the crtc's pipeline clock. 169 166 */ 170 167 struct exynos_drm_crtc { 171 168 struct drm_crtc base;
+9 -17
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 1633 1633 { 1634 1634 struct device *dev = dsi->dev; 1635 1635 struct device_node *node = dev->of_node; 1636 - struct device_node *ep; 1637 1636 int ret; 1638 1637 1639 1638 ret = exynos_dsi_of_read_u32(node, "samsung,pll-clock-frequency", ··· 1640 1641 if (ret < 0) 1641 1642 return ret; 1642 1643 1643 - ep = of_graph_get_endpoint_by_regs(node, DSI_PORT_OUT, 0); 1644 - if (!ep) { 1645 - dev_err(dev, "no output port with endpoint specified\n"); 1646 - return -EINVAL; 1647 - } 1648 - 1649 - ret = exynos_dsi_of_read_u32(ep, "samsung,burst-clock-frequency", 1644 + ret = exynos_dsi_of_read_u32(node, "samsung,burst-clock-frequency", 1650 1645 &dsi->burst_clk_rate); 1651 1646 if (ret < 0) 1652 - goto end; 1647 + return ret; 1653 1648 1654 - ret = exynos_dsi_of_read_u32(ep, "samsung,esc-clock-frequency", 1649 + ret = exynos_dsi_of_read_u32(node, "samsung,esc-clock-frequency", 1655 1650 &dsi->esc_clk_rate); 1656 1651 if (ret < 0) 1657 - goto end; 1658 - 1659 - of_node_put(ep); 1652 + return ret; 1660 1653 1661 1654 dsi->bridge_node = of_graph_get_remote_node(node, DSI_PORT_OUT, 0); 1662 1655 if (!dsi->bridge_node) 1663 1656 return -EINVAL; 1664 1657 1665 - end: 1666 - of_node_put(ep); 1667 - 1668 - return ret; 1658 + return 0; 1669 1659 } 1670 1660 1671 1661 static int exynos_dsi_bind(struct device *dev, struct device *master, ··· 1805 1817 1806 1818 static int exynos_dsi_remove(struct platform_device *pdev) 1807 1819 { 1820 + struct exynos_dsi *dsi = platform_get_drvdata(pdev); 1821 + 1822 + of_node_put(dsi->bridge_node); 1823 + 1808 1824 pm_runtime_disable(&pdev->dev); 1809 1825 1810 1826 component_del(&pdev->dev, &exynos_dsi_component_ops);
+11 -7
drivers/gpu/drm/gma500/psb_intel_lvds.c
··· 759 759 if (scan->type & DRM_MODE_TYPE_PREFERRED) { 760 760 mode_dev->panel_fixed_mode = 761 761 drm_mode_duplicate(dev, scan); 762 + DRM_DEBUG_KMS("Using mode from DDC\n"); 762 763 goto out; /* FIXME: check for quirks */ 763 764 } 764 765 } 765 766 766 767 /* Failed to get EDID, what about VBT? do we need this? */ 767 - if (mode_dev->vbt_mode) 768 + if (dev_priv->lfp_lvds_vbt_mode) { 768 769 mode_dev->panel_fixed_mode = 769 - drm_mode_duplicate(dev, mode_dev->vbt_mode); 770 + drm_mode_duplicate(dev, dev_priv->lfp_lvds_vbt_mode); 770 771 771 - if (!mode_dev->panel_fixed_mode) 772 - if (dev_priv->lfp_lvds_vbt_mode) 773 - mode_dev->panel_fixed_mode = 774 - drm_mode_duplicate(dev, 775 - dev_priv->lfp_lvds_vbt_mode); 772 + if (mode_dev->panel_fixed_mode) { 773 + mode_dev->panel_fixed_mode->type |= 774 + DRM_MODE_TYPE_PREFERRED; 775 + DRM_DEBUG_KMS("Using mode from VBT\n"); 776 + goto out; 777 + } 778 + } 776 779 777 780 /* 778 781 * If we didn't get EDID, try checking if the panel is already turned ··· 792 789 if (mode_dev->panel_fixed_mode) { 793 790 mode_dev->panel_fixed_mode->type |= 794 791 DRM_MODE_TYPE_PREFERRED; 792 + DRM_DEBUG_KMS("Using pre-programmed mode\n"); 795 793 goto out; /* FIXME: check for quirks */ 796 794 } 797 795 }
+20 -10
drivers/gpu/drm/i915/gvt/execlist.c
··· 779 779 vgpu_vreg(vgpu, ctx_status_ptr_reg) = ctx_status_ptr.dw; 780 780 } 781 781 782 + static void clean_workloads(struct intel_vgpu *vgpu, unsigned long engine_mask) 783 + { 784 + struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 785 + struct intel_engine_cs *engine; 786 + struct intel_vgpu_workload *pos, *n; 787 + unsigned int tmp; 788 + 789 + /* free the unsubmitted workloads in the queues. */ 790 + for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { 791 + list_for_each_entry_safe(pos, n, 792 + &vgpu->workload_q_head[engine->id], list) { 793 + list_del_init(&pos->list); 794 + free_workload(pos); 795 + } 796 + } 797 + } 798 + 782 799 void intel_vgpu_clean_execlist(struct intel_vgpu *vgpu) 783 800 { 801 + clean_workloads(vgpu, ALL_ENGINES); 784 802 kmem_cache_destroy(vgpu->workloads); 785 803 } 786 804 ··· 829 811 { 830 812 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 831 813 struct intel_engine_cs *engine; 832 - struct intel_vgpu_workload *pos, *n; 833 814 unsigned int tmp; 834 815 835 - for_each_engine_masked(engine, dev_priv, engine_mask, tmp) { 836 - /* free the unsubmited workload in the queue */ 837 - list_for_each_entry_safe(pos, n, 838 - &vgpu->workload_q_head[engine->id], list) { 839 - list_del_init(&pos->list); 840 - free_workload(pos); 841 - } 842 - 816 + clean_workloads(vgpu, engine_mask); 817 + for_each_engine_masked(engine, dev_priv, engine_mask, tmp) 843 818 init_vgpu_execlist(vgpu, engine->id); 844 - } 845 819 }
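The clean_workloads() helper above relies on list_for_each_entry_safe(), which caches the next entry before the current one is unlinked and freed. A minimal sketch of that pattern on a hand-rolled doubly linked list (shaped like the kernel's list_head; names are illustrative):

```c
#include <stddef.h>
#include <stdlib.h>

struct sketch_node {
	struct sketch_node *prev, *next;
	int payload;
};

static void sketch_list_init(struct sketch_node *head)
{
	head->prev = head->next = head;
}

static void sketch_list_add(struct sketch_node *head, struct sketch_node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Like list_for_each_entry_safe(): `tmp` saves the next pointer before the
 * current entry is unlinked and freed, so the walk survives the free. */
static int sketch_free_all(struct sketch_node *head)
{
	struct sketch_node *pos = head->next, *tmp;
	int freed = 0;

	while (pos != head) {
		tmp = pos->next; /* saved before pos is freed */
		pos->prev->next = pos->next;
		pos->next->prev = pos->prev;
		free(pos);
		pos = tmp;
		freed++;
	}
	return freed;
}
```

Using the plain iteration macro here would dereference freed memory on the step to the next entry, which is exactly why the workload queues are drained with the _safe variant.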
+21 -9
drivers/gpu/drm/i915/gvt/handlers.c
··· 1366 1366 void *p_data, unsigned int bytes) 1367 1367 { 1368 1368 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; 1369 - i915_reg_t reg = {.reg = offset}; 1369 + u32 v = *(u32 *)p_data; 1370 + 1371 + if (!IS_SKYLAKE(dev_priv) && !IS_KABYLAKE(dev_priv)) 1372 + return intel_vgpu_default_mmio_write(vgpu, 1373 + offset, p_data, bytes); 1370 1374 1371 1375 switch (offset) { 1372 1376 case 0x4ddc: 1373 - vgpu_vreg(vgpu, offset) = 0x8000003c; 1374 - /* WaCompressedResourceSamplerPbeMediaNewHashMode:skl */ 1375 - I915_WRITE(reg, vgpu_vreg(vgpu, offset)); 1377 + /* bypass WaCompressedResourceSamplerPbeMediaNewHashMode */ 1378 + vgpu_vreg(vgpu, offset) = v & ~(1 << 31); 1376 1379 break; 1377 1380 case 0x42080: 1378 - vgpu_vreg(vgpu, offset) = 0x8000; 1379 - /* WaCompressedResourceDisplayNewHashMode:skl */ 1380 - I915_WRITE(reg, vgpu_vreg(vgpu, offset)); 1381 + /* bypass WaCompressedResourceDisplayNewHashMode */ 1382 + vgpu_vreg(vgpu, offset) = v & ~(1 << 15); 1383 + break; 1384 + case 0xe194: 1385 + /* bypass WaCompressedResourceSamplerPbeMediaNewHashMode */ 1386 + vgpu_vreg(vgpu, offset) = v & ~(1 << 8); 1387 + break; 1388 + case 0x7014: 1389 + /* bypass WaCompressedResourceSamplerPbeMediaNewHashMode */ 1390 + vgpu_vreg(vgpu, offset) = v & ~(1 << 13); 1381 1391 break; 1382 1392 default: 1383 1393 return -EINVAL; ··· 1644 1634 MMIO_DFH(GAM_ECOCHK, D_ALL, F_CMD_ACCESS, NULL, NULL); 1645 1635 MMIO_DFH(GEN7_COMMON_SLICE_CHICKEN1, D_ALL, F_MODE_MASK | F_CMD_ACCESS, 1646 1636 NULL, NULL); 1647 - MMIO_DFH(COMMON_SLICE_CHICKEN2, D_ALL, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 1637 + MMIO_DFH(COMMON_SLICE_CHICKEN2, D_ALL, F_MODE_MASK | F_CMD_ACCESS, NULL, 1638 + skl_misc_ctl_write); 1648 1639 MMIO_DFH(0x9030, D_ALL, F_CMD_ACCESS, NULL, NULL); 1649 1640 MMIO_DFH(0x20a0, D_ALL, F_CMD_ACCESS, NULL, NULL); 1650 1641 MMIO_DFH(0x2420, D_ALL, F_CMD_ACCESS, NULL, NULL); ··· 2579 2568 MMIO_D(0x6e570, D_BDW_PLUS); 2580 2569 MMIO_D(0x65f10, D_BDW_PLUS); 2581 2570 2582 - 
MMIO_DFH(0xe194, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 2571 + MMIO_DFH(0xe194, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, 2572 + skl_misc_ctl_write); 2583 2573 MMIO_DFH(0xe188, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 2584 2574 MMIO_DFH(HALF_SLICE_CHICKEN2, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL); 2585 2575 MMIO_DFH(0x2580, D_BDW_PLUS, F_MODE_MASK | F_CMD_ACCESS, NULL, NULL);
-4
drivers/gpu/drm/i915/i915_drv.c
··· 1272 1272 1273 1273 dev_priv->ipc_enabled = false; 1274 1274 1275 - /* Everything is in place, we can now relax! */ 1276 - DRM_INFO("Initialized %s %d.%d.%d %s for %s on minor %d\n", 1277 - driver.name, driver.major, driver.minor, driver.patchlevel, 1278 - driver.date, pci_name(pdev), dev_priv->drm.primary->index); 1279 1275 if (IS_ENABLED(CONFIG_DRM_I915_DEBUG)) 1280 1276 DRM_INFO("DRM_I915_DEBUG enabled\n"); 1281 1277 if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
+2 -1
drivers/gpu/drm/i915/i915_drv.h
··· 562 562 563 563 void intel_link_compute_m_n(int bpp, int nlanes, 564 564 int pixel_clock, int link_clock, 565 - struct intel_link_m_n *m_n); 565 + struct intel_link_m_n *m_n, 566 + bool reduce_m_n); 566 567 567 568 /* Interface history: 568 569 *
+1 -1
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 2313 2313 appgtt->base.allocate_va_range) { 2314 2314 ret = appgtt->base.allocate_va_range(&appgtt->base, 2315 2315 vma->node.start, 2316 - vma->node.size); 2316 + vma->size); 2317 2317 if (ret) 2318 2318 goto err_pages; 2319 2319 }
-5
drivers/gpu/drm/i915/i915_gem_shrinker.c
··· 59 59 return; 60 60 61 61 mutex_unlock(&dev->struct_mutex); 62 - 63 - /* expedite the RCU grace period to free some request slabs */ 64 - synchronize_rcu_expedited(); 65 62 } 66 63 67 64 static bool any_vma_pinned(struct drm_i915_gem_object *obj) ··· 270 273 I915_SHRINK_UNBOUND | 271 274 I915_SHRINK_ACTIVE); 272 275 intel_runtime_pm_put(dev_priv); 273 - 274 - synchronize_rcu(); /* wait for our earlier RCU delayed slab frees */ 275 276 276 277 return freed; 277 278 }
+6 -9
drivers/gpu/drm/i915/i915_irq.c
··· 2953 2953 u32 pipestat_mask; 2954 2954 u32 enable_mask; 2955 2955 enum pipe pipe; 2956 - u32 val; 2957 2956 2958 2957 pipestat_mask = PLANE_FLIP_DONE_INT_STATUS_VLV | 2959 2958 PIPE_CRC_DONE_INTERRUPT_STATUS; ··· 2963 2964 2964 2965 enable_mask = I915_DISPLAY_PORT_INTERRUPT | 2965 2966 I915_DISPLAY_PIPE_A_EVENT_INTERRUPT | 2966 - I915_DISPLAY_PIPE_B_EVENT_INTERRUPT; 2967 + I915_DISPLAY_PIPE_B_EVENT_INTERRUPT | 2968 + I915_LPE_PIPE_A_INTERRUPT | 2969 + I915_LPE_PIPE_B_INTERRUPT; 2970 + 2967 2971 if (IS_CHERRYVIEW(dev_priv)) 2968 - enable_mask |= I915_DISPLAY_PIPE_C_EVENT_INTERRUPT; 2972 + enable_mask |= I915_DISPLAY_PIPE_C_EVENT_INTERRUPT | 2973 + I915_LPE_PIPE_C_INTERRUPT; 2969 2974 2970 2975 WARN_ON(dev_priv->irq_mask != ~0); 2971 - 2972 - val = (I915_LPE_PIPE_A_INTERRUPT | 2973 - I915_LPE_PIPE_B_INTERRUPT | 2974 - I915_LPE_PIPE_C_INTERRUPT); 2975 - 2976 - enable_mask |= val; 2977 2976 2978 2977 dev_priv->irq_mask = ~enable_mask; 2979 2978
+1 -1
drivers/gpu/drm/i915/i915_reg.h
··· 8280 8280 8281 8281 /* MIPI DSI registers */ 8282 8282 8283 - #define _MIPI_PORT(port, a, c) ((port) ? c : a) /* ports A and C only */ 8283 + #define _MIPI_PORT(port, a, c) (((port) == PORT_A) ? a : c) /* ports A and C only */ 8284 8284 #define _MMIO_MIPI(port, a, c) _MMIO(_MIPI_PORT(port, a, c)) 8285 8285 8286 8286 #define MIPIO_TXESC_CLK_DIV1 _MMIO(0x160004)
+14 -8
drivers/gpu/drm/i915/intel_display.c
··· 6101 6101 pipe_config->fdi_lanes = lane; 6102 6102 6103 6103 intel_link_compute_m_n(pipe_config->pipe_bpp, lane, fdi_dotclock, 6104 - link_bw, &pipe_config->fdi_m_n); 6104 + link_bw, &pipe_config->fdi_m_n, false); 6105 6105 6106 6106 ret = ironlake_check_fdi_lanes(dev, intel_crtc->pipe, pipe_config); 6107 6107 if (ret == -EINVAL && pipe_config->pipe_bpp > 6*3) { ··· 6277 6277 } 6278 6278 6279 6279 static void compute_m_n(unsigned int m, unsigned int n, 6280 - uint32_t *ret_m, uint32_t *ret_n) 6280 + uint32_t *ret_m, uint32_t *ret_n, 6281 + bool reduce_m_n) 6281 6282 { 6282 6283 /* 6283 6284 * Reduce M/N as much as possible without loss in precision. Several DP ··· 6286 6285 * values. The passed in values are more likely to have the least 6287 6286 * significant bits zero than M after rounding below, so do this first. 6288 6287 */ 6289 - while ((m & 1) == 0 && (n & 1) == 0) { 6290 - m >>= 1; 6291 - n >>= 1; 6288 + if (reduce_m_n) { 6289 + while ((m & 1) == 0 && (n & 1) == 0) { 6290 + m >>= 1; 6291 + n >>= 1; 6292 + } 6292 6293 } 6293 6294 6294 6295 *ret_n = min_t(unsigned int, roundup_pow_of_two(n), DATA_LINK_N_MAX); ··· 6301 6298 void 6302 6299 intel_link_compute_m_n(int bits_per_pixel, int nlanes, 6303 6300 int pixel_clock, int link_clock, 6304 - struct intel_link_m_n *m_n) 6301 + struct intel_link_m_n *m_n, 6302 + bool reduce_m_n) 6305 6303 { 6306 6304 m_n->tu = 64; 6307 6305 6308 6306 compute_m_n(bits_per_pixel * pixel_clock, 6309 6307 link_clock * nlanes * 8, 6310 - &m_n->gmch_m, &m_n->gmch_n); 6308 + &m_n->gmch_m, &m_n->gmch_n, 6309 + reduce_m_n); 6311 6310 6312 6311 compute_m_n(pixel_clock, link_clock, 6313 - &m_n->link_m, &m_n->link_n); 6312 + &m_n->link_m, &m_n->link_n, 6313 + reduce_m_n); 6314 6314 } 6315 6315 6316 6316 static inline bool intel_panel_use_ssc(struct drm_i915_private *dev_priv)
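The `reduce_m_n` branch added to compute_m_n() above strips common powers of two from M and N before the rounding below, leaving the M/N ratio unchanged. The reduction step in isolation:

```c
#include <stdint.h>

/* Halve m and n while both are even, as the reduce_m_n path of
 * compute_m_n() does; the ratio m/n is preserved exactly. */
static void sketch_reduce_m_n(uint32_t *m, uint32_t *n)
{
	while ((*m & 1) == 0 && (*n & 1) == 0) {
		*m >>= 1;
		*n >>= 1;
	}
}
```

Making the reduction conditional lets sinks flagged with DP_DPCD_QUIRK_LIMITED_M_N skip it, since (per the quirk table) some branch devices need the smaller, unreduced M and N values at HBR2 rates.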
+10 -35
drivers/gpu/drm/i915/intel_dp.c
··· 1507 1507 DRM_DEBUG_KMS("common rates: %s\n", str); 1508 1508 } 1509 1509 1510 - bool 1511 - __intel_dp_read_desc(struct intel_dp *intel_dp, struct intel_dp_desc *desc) 1512 - { 1513 - u32 base = drm_dp_is_branch(intel_dp->dpcd) ? DP_BRANCH_OUI : 1514 - DP_SINK_OUI; 1515 - 1516 - return drm_dp_dpcd_read(&intel_dp->aux, base, desc, sizeof(*desc)) == 1517 - sizeof(*desc); 1518 - } 1519 - 1520 - bool intel_dp_read_desc(struct intel_dp *intel_dp) 1521 - { 1522 - struct intel_dp_desc *desc = &intel_dp->desc; 1523 - bool oui_sup = intel_dp->dpcd[DP_DOWN_STREAM_PORT_COUNT] & 1524 - DP_OUI_SUPPORT; 1525 - int dev_id_len; 1526 - 1527 - if (!__intel_dp_read_desc(intel_dp, desc)) 1528 - return false; 1529 - 1530 - dev_id_len = strnlen(desc->device_id, sizeof(desc->device_id)); 1531 - DRM_DEBUG_KMS("DP %s: OUI %*phD%s dev-ID %*pE HW-rev %d.%d SW-rev %d.%d\n", 1532 - drm_dp_is_branch(intel_dp->dpcd) ? "branch" : "sink", 1533 - (int)sizeof(desc->oui), desc->oui, oui_sup ? "" : "(NS)", 1534 - dev_id_len, desc->device_id, 1535 - desc->hw_rev >> 4, desc->hw_rev & 0xf, 1536 - desc->sw_major_rev, desc->sw_minor_rev); 1537 - 1538 - return true; 1539 - } 1540 - 1541 1510 static int rate_to_index(int find, const int *rates) 1542 1511 { 1543 1512 int i = 0; ··· 1593 1624 int common_rates[DP_MAX_SUPPORTED_RATES] = {}; 1594 1625 int common_len; 1595 1626 uint8_t link_bw, rate_select; 1627 + bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc, 1628 + DP_DPCD_QUIRK_LIMITED_M_N); 1596 1629 1597 1630 common_len = intel_dp_common_rates(intel_dp, common_rates); 1598 1631 ··· 1724 1753 intel_link_compute_m_n(bpp, lane_count, 1725 1754 adjusted_mode->crtc_clock, 1726 1755 pipe_config->port_clock, 1727 - &pipe_config->dp_m_n); 1756 + &pipe_config->dp_m_n, 1757 + reduce_m_n); 1728 1758 1729 1759 if (intel_connector->panel.downclock_mode != NULL && 1730 1760 dev_priv->drrs.type == SEAMLESS_DRRS_SUPPORT) { ··· 1733 1761 intel_link_compute_m_n(bpp, lane_count, 1734 1762 
intel_connector->panel.downclock_mode->clock, 1735 1763 pipe_config->port_clock, 1736 - &pipe_config->dp_m2_n2); 1764 + &pipe_config->dp_m2_n2, 1765 + reduce_m_n); 1737 1766 } 1738 1767 1739 1768 /* ··· 3595 3622 if (!intel_dp_read_dpcd(intel_dp)) 3596 3623 return false; 3597 3624 3598 - intel_dp_read_desc(intel_dp); 3625 + drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc, 3626 + drm_dp_is_branch(intel_dp->dpcd)); 3599 3627 3600 3628 if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11) 3601 3629 dev_priv->no_aux_handshake = intel_dp->dpcd[DP_MAX_DOWNSPREAD] & ··· 4598 4624 4599 4625 intel_dp_print_rates(intel_dp); 4600 4626 4601 - intel_dp_read_desc(intel_dp); 4627 + drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc, 4628 + drm_dp_is_branch(intel_dp->dpcd)); 4602 4629 4603 4630 intel_dp_configure_mst(intel_dp); 4604 4631
+4 -1
drivers/gpu/drm/i915/intel_dp_mst.c
··· 44 44 int lane_count, slots; 45 45 const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode; 46 46 int mst_pbn; 47 + bool reduce_m_n = drm_dp_has_quirk(&intel_dp->desc, 48 + DP_DPCD_QUIRK_LIMITED_M_N); 47 49 48 50 pipe_config->has_pch_encoder = false; 49 51 bpp = 24; ··· 77 75 intel_link_compute_m_n(bpp, lane_count, 78 76 adjusted_mode->crtc_clock, 79 77 pipe_config->port_clock, 80 - &pipe_config->dp_m_n); 78 + &pipe_config->dp_m_n, 79 + reduce_m_n); 81 80 82 81 pipe_config->dp_m_n.tu = slots; 83 82
+1 -12
drivers/gpu/drm/i915/intel_drv.h
··· 906 906 M2_N2 907 907 }; 908 908 909 - struct intel_dp_desc { 910 - u8 oui[3]; 911 - u8 device_id[6]; 912 - u8 hw_rev; 913 - u8 sw_major_rev; 914 - u8 sw_minor_rev; 915 - } __packed; 916 - 917 909 struct intel_dp_compliance_data { 918 910 unsigned long edid; 919 911 uint8_t video_pattern; ··· 949 957 /* Max link BW for the sink as per DPCD registers */ 950 958 int max_sink_link_bw; 951 959 /* sink or branch descriptor */ 952 - struct intel_dp_desc desc; 960 + struct drm_dp_desc desc; 953 961 struct drm_dp_aux aux; 954 962 enum intel_display_power_domain aux_power_domain; 955 963 uint8_t train_set[4]; ··· 1524 1532 } 1525 1533 1526 1534 bool intel_dp_read_dpcd(struct intel_dp *intel_dp); 1527 - bool __intel_dp_read_desc(struct intel_dp *intel_dp, 1528 - struct intel_dp_desc *desc); 1529 - bool intel_dp_read_desc(struct intel_dp *intel_dp); 1530 1535 int intel_dp_link_required(int pixel_clock, int bpp); 1531 1536 int intel_dp_max_data_rate(int max_link_clock, int max_lanes); 1532 1537 bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
-36
drivers/gpu/drm/i915/intel_lpe_audio.c
··· 149 149 150 150 static void lpe_audio_irq_unmask(struct irq_data *d) 151 151 { 152 - struct drm_i915_private *dev_priv = d->chip_data; 153 - unsigned long irqflags; 154 - u32 val = (I915_LPE_PIPE_A_INTERRUPT | 155 - I915_LPE_PIPE_B_INTERRUPT); 156 - 157 - if (IS_CHERRYVIEW(dev_priv)) 158 - val |= I915_LPE_PIPE_C_INTERRUPT; 159 - 160 - spin_lock_irqsave(&dev_priv->irq_lock, irqflags); 161 - 162 - dev_priv->irq_mask &= ~val; 163 - I915_WRITE(VLV_IIR, val); 164 - I915_WRITE(VLV_IIR, val); 165 - I915_WRITE(VLV_IMR, dev_priv->irq_mask); 166 - POSTING_READ(VLV_IMR); 167 - 168 - spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); 169 152 } 170 153 171 154 static void lpe_audio_irq_mask(struct irq_data *d) 172 155 { 173 - struct drm_i915_private *dev_priv = d->chip_data; 174 - unsigned long irqflags; 175 - u32 val = (I915_LPE_PIPE_A_INTERRUPT | 176 - I915_LPE_PIPE_B_INTERRUPT); 177 - 178 - if (IS_CHERRYVIEW(dev_priv)) 179 - val |= I915_LPE_PIPE_C_INTERRUPT; 180 - 181 - spin_lock_irqsave(&dev_priv->irq_lock, irqflags); 182 - 183 - dev_priv->irq_mask |= val; 184 - I915_WRITE(VLV_IMR, dev_priv->irq_mask); 185 - I915_WRITE(VLV_IIR, val); 186 - I915_WRITE(VLV_IIR, val); 187 - POSTING_READ(VLV_IIR); 188 - 189 - spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags); 190 156 } 191 157 192 158 static struct irq_chip lpe_audio_irqchip = { ··· 295 329 return; 296 330 297 331 desc = irq_to_desc(dev_priv->lpe_audio.irq); 298 - 299 - lpe_audio_irq_mask(&desc->irq_data); 300 332 301 333 lpe_audio_platdev_destroy(dev_priv); 302 334
+1 -1
drivers/gpu/drm/i915/intel_lrc.c
··· 1989 1989 1990 1990 ce->ring = ring; 1991 1991 ce->state = vma; 1992 - ce->initialised = engine->init_context == NULL; 1992 + ce->initialised |= engine->init_context == NULL; 1993 1993 1994 1994 return 0; 1995 1995
+1 -1
drivers/gpu/drm/i915/intel_lspcon.c
··· 240 240 return false; 241 241 } 242 242 243 - intel_dp_read_desc(dp); 243 + drm_dp_read_desc(&dp->aux, &dp->desc, drm_dp_is_branch(dp->dpcd)); 244 244 245 245 DRM_DEBUG_KMS("Success: LSPCON init\n"); 246 246 return true;
+5 -3
drivers/gpu/drm/i915/selftests/i915_gem_context.c
··· 320 320 static int igt_ctx_exec(void *arg) 321 321 { 322 322 struct drm_i915_private *i915 = arg; 323 - struct drm_i915_gem_object *obj; 323 + struct drm_i915_gem_object *obj = NULL; 324 324 struct drm_file *file; 325 325 IGT_TIMEOUT(end_time); 326 326 LIST_HEAD(objects); ··· 359 359 } 360 360 361 361 for_each_engine(engine, i915, id) { 362 - if (dw == 0) { 362 + if (!obj) { 363 363 obj = create_test_object(ctx, file, &objects); 364 364 if (IS_ERR(obj)) { 365 365 err = PTR_ERR(obj); ··· 376 376 goto out_unlock; 377 377 } 378 378 379 - if (++dw == max_dwords(obj)) 379 + if (++dw == max_dwords(obj)) { 380 + obj = NULL; 380 381 dw = 0; 382 + } 381 383 ndwords++; 382 384 } 383 385 ncontexts++;
+1
drivers/gpu/drm/msm/Kconfig
··· 13 13 select QCOM_SCM 14 14 select SND_SOC_HDMI_CODEC if SND_SOC 15 15 select SYNC_FILE 16 + select PM_OPP 16 17 default y 17 18 help 18 19 DRM/KMS driver for MSM/snapdragon.
+1 -1
drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c
··· 116 116 return 0; 117 117 } 118 118 119 - static struct irq_domain_ops mdss_hw_irqdomain_ops = { 119 + static const struct irq_domain_ops mdss_hw_irqdomain_ops = { 120 120 .map = mdss_hw_irqdomain_map, 121 121 .xlate = irq_domain_xlate_onecell, 122 122 };
+7 -2
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c
··· 225 225 226 226 mdp5_state = kmemdup(to_mdp5_plane_state(plane->state), 227 227 sizeof(*mdp5_state), GFP_KERNEL); 228 + if (!mdp5_state) 229 + return NULL; 228 230 229 - if (mdp5_state && mdp5_state->base.fb) 230 - drm_framebuffer_reference(mdp5_state->base.fb); 231 + __drm_atomic_helper_plane_duplicate_state(plane, &mdp5_state->base); 231 232 232 233 return &mdp5_state->base; 233 234 } ··· 445 444 mdp5_pipe_release(state->state, old_hwpipe); 446 445 mdp5_pipe_release(state->state, old_right_hwpipe); 447 446 } 447 + } else { 448 + mdp5_pipe_release(state->state, mdp5_state->hwpipe); 449 + mdp5_pipe_release(state->state, mdp5_state->r_hwpipe); 450 + mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL; 448 451 } 449 452 450 453 return 0;
+1
drivers/gpu/drm/msm/msm_drv.c
··· 830 830 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 831 831 .gem_prime_export = drm_gem_prime_export, 832 832 .gem_prime_import = drm_gem_prime_import, 833 + .gem_prime_res_obj = msm_gem_prime_res_obj, 833 834 .gem_prime_pin = msm_gem_prime_pin, 834 835 .gem_prime_unpin = msm_gem_prime_unpin, 835 836 .gem_prime_get_sg_table = msm_gem_prime_get_sg_table,
+1
drivers/gpu/drm/msm/msm_drv.h
··· 224 224 void *msm_gem_prime_vmap(struct drm_gem_object *obj); 225 225 void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); 226 226 int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 227 + struct reservation_object *msm_gem_prime_res_obj(struct drm_gem_object *obj); 227 228 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, 228 229 struct dma_buf_attachment *attach, struct sg_table *sg); 229 230 int msm_gem_prime_pin(struct drm_gem_object *obj);
+2 -8
drivers/gpu/drm/msm/msm_fence.c
··· 99 99 } 100 100 101 101 struct msm_fence { 102 - struct msm_fence_context *fctx; 103 102 struct dma_fence base; 103 + struct msm_fence_context *fctx; 104 104 }; 105 105 106 106 static inline struct msm_fence *to_msm_fence(struct dma_fence *fence) ··· 130 130 return fence_completed(f->fctx, f->base.seqno); 131 131 } 132 132 133 - static void msm_fence_release(struct dma_fence *fence) 134 - { 135 - struct msm_fence *f = to_msm_fence(fence); 136 - kfree_rcu(f, base.rcu); 137 - } 138 - 139 133 static const struct dma_fence_ops msm_fence_ops = { 140 134 .get_driver_name = msm_fence_get_driver_name, 141 135 .get_timeline_name = msm_fence_get_timeline_name, 142 136 .enable_signaling = msm_fence_enable_signaling, 143 137 .signaled = msm_fence_signaled, 144 138 .wait = dma_fence_default_wait, 145 - .release = msm_fence_release, 139 + .release = dma_fence_free, 146 140 }; 147 141 148 142 struct dma_fence *
+6
drivers/gpu/drm/msm/msm_gem.c
··· 758 758 struct msm_gem_object *msm_obj; 759 759 bool use_vram = false; 760 760 761 + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 762 + 761 763 switch (flags & MSM_BO_CACHE_MASK) { 762 764 case MSM_BO_UNCACHED: 763 765 case MSM_BO_CACHED: ··· 855 853 856 854 size = PAGE_ALIGN(dmabuf->size); 857 855 856 + /* Take mutex so we can modify the inactive list in msm_gem_new_impl */ 857 + mutex_lock(&dev->struct_mutex); 858 858 ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj); 859 + mutex_unlock(&dev->struct_mutex); 860 + 859 861 if (ret) 860 862 goto fail; 861 863
+7
drivers/gpu/drm/msm/msm_gem_prime.c
··· 70 70 if (!obj->import_attach) 71 71 msm_gem_put_pages(obj); 72 72 } 73 + 74 + struct reservation_object *msm_gem_prime_res_obj(struct drm_gem_object *obj) 75 + { 76 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 77 + 78 + return msm_obj->resv; 79 + }
+7 -7
drivers/gpu/drm/msm/msm_gem_submit.c
··· 410 410 if (!in_fence) 411 411 return -EINVAL; 412 412 413 - /* TODO if we get an array-fence due to userspace merging multiple 414 - * fences, we need a way to determine if all the backing fences 415 - * are from our own context.. 413 + /* 414 + * Wait if the fence is from a foreign context, or if the fence 415 + * array contains any fence from a foreign context. 416 416 */ 417 - 418 - if (in_fence->context != gpu->fctx->context) { 417 + if (!dma_fence_match_context(in_fence, gpu->fctx->context)) { 419 418 ret = dma_fence_wait(in_fence, true); 420 419 if (ret) 421 420 return ret; ··· 495 496 goto out; 496 497 } 497 498 498 - if ((submit_cmd.size + submit_cmd.submit_offset) >= 499 - msm_obj->base.size) { 499 + if (!submit_cmd.size || 500 + ((submit_cmd.size + submit_cmd.submit_offset) > 501 + msm_obj->base.size)) { 500 502 DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); 501 503 ret = -EINVAL; 502 504 goto out;
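The msm_gem_submit hunk above swaps a plain `in_fence->context` comparison for `dma_fence_match_context()`, which also handles fence arrays produced when userspace merges several fences: the submit only needs to wait if *any* backing fence comes from a foreign context. A minimal userspace sketch of that predicate, using toy types as illustrative stand-ins (not the kernel dma_fence API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a fence: either a single fence with a context id, or an
 * "array" fence wrapping child fences (as after userspace merging). */
struct toy_fence {
	unsigned long context;            /* valid when nchildren == 0 */
	const struct toy_fence *children; /* valid when nchildren > 0 */
	size_t nchildren;
};

/* True only if every backing fence belongs to `context`; an array fence
 * with even one foreign child does not match, so the caller must wait. */
static bool toy_fence_match_context(const struct toy_fence *f,
				    unsigned long context)
{
	size_t i;

	if (f->nchildren == 0)
		return f->context == context;
	for (i = 0; i < f->nchildren; i++)
		if (!toy_fence_match_context(&f->children[i], context))
			return false;
	return true;
}
```

In the driver, the GPU's own scheduling already orders same-context fences, which is why only foreign-context fences force an explicit `dma_fence_wait()`.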
+2 -2
drivers/gpu/drm/msm/msm_gpu.c
··· 549 549 gpu->grp_clks[i] = get_clock(dev, name); 550 550 551 551 /* Remember the key clocks that we need to control later */ 552 - if (!strcmp(name, "core")) 552 + if (!strcmp(name, "core") || !strcmp(name, "core_clk")) 553 553 gpu->core_clk = gpu->grp_clks[i]; 554 - else if (!strcmp(name, "rbbmtimer")) 554 + else if (!strcmp(name, "rbbmtimer") || !strcmp(name, "rbbmtimer_clk")) 555 555 gpu->rbbmtimer_clk = gpu->grp_clks[i]; 556 556 557 557 ++i;
+2 -2
drivers/gpu/drm/qxl/qxl_display.c
··· 575 575 if (ret) 576 576 return; 577 577 578 - cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); 579 - 580 578 if (fb != old_state->fb) { 581 579 obj = to_qxl_framebuffer(fb)->obj; 582 580 user_bo = gem_to_qxl_bo(obj); ··· 612 614 qxl_bo_kunmap(cursor_bo); 613 615 qxl_bo_kunmap(user_bo); 614 616 617 + cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); 615 618 cmd->u.set.visible = 1; 616 619 cmd->u.set.shape = qxl_bo_physical_address(qdev, 617 620 cursor_bo, 0); ··· 623 624 if (ret) 624 625 goto out_free_release; 625 626 627 + cmd = (struct qxl_cursor_cmd *) qxl_release_map(qdev, release); 626 628 cmd->type = QXL_CURSOR_MOVE; 627 629 } 628 630
+6
drivers/gpu/drm/radeon/ci_dpm.c
··· 776 776 u32 vblank_time = r600_dpm_get_vblank_time(rdev); 777 777 u32 switch_limit = pi->mem_gddr5 ? 450 : 300; 778 778 779 + /* disable mclk switching if the refresh is >120Hz, even if the 780 + * blanking period would allow it 781 + */ 782 + if (r600_dpm_get_vrefresh(rdev) > 120) 783 + return true; 784 + 779 785 if (vblank_time < switch_limit) 780 786 return true; 781 787 else
+2 -2
drivers/gpu/drm/radeon/cik.c
··· 7401 7401 WREG32(DC_HPD5_INT_CONTROL, tmp); 7402 7402 } 7403 7403 if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT) { 7404 - tmp = RREG32(DC_HPD5_INT_CONTROL); 7404 + tmp = RREG32(DC_HPD6_INT_CONTROL); 7405 7405 tmp |= DC_HPDx_INT_ACK; 7406 7406 WREG32(DC_HPD6_INT_CONTROL, tmp); 7407 7407 } ··· 7431 7431 WREG32(DC_HPD5_INT_CONTROL, tmp); 7432 7432 } 7433 7433 if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 7434 - tmp = RREG32(DC_HPD5_INT_CONTROL); 7434 + tmp = RREG32(DC_HPD6_INT_CONTROL); 7435 7435 tmp |= DC_HPDx_RX_INT_ACK; 7436 7436 WREG32(DC_HPD6_INT_CONTROL, tmp); 7437 7437 }
+2 -2
drivers/gpu/drm/radeon/evergreen.c
··· 4927 4927 WREG32(DC_HPD5_INT_CONTROL, tmp); 4928 4928 } 4929 4929 if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) { 4930 - tmp = RREG32(DC_HPD5_INT_CONTROL); 4930 + tmp = RREG32(DC_HPD6_INT_CONTROL); 4931 4931 tmp |= DC_HPDx_INT_ACK; 4932 4932 WREG32(DC_HPD6_INT_CONTROL, tmp); 4933 4933 } ··· 4958 4958 WREG32(DC_HPD5_INT_CONTROL, tmp); 4959 4959 } 4960 4960 if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 4961 - tmp = RREG32(DC_HPD5_INT_CONTROL); 4961 + tmp = RREG32(DC_HPD6_INT_CONTROL); 4962 4962 tmp |= DC_HPDx_RX_INT_ACK; 4963 4963 WREG32(DC_HPD6_INT_CONTROL, tmp); 4964 4964 }
+1 -1
drivers/gpu/drm/radeon/r600.c
··· 3988 3988 WREG32(DC_HPD5_INT_CONTROL, tmp); 3989 3989 } 3990 3990 if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) { 3991 - tmp = RREG32(DC_HPD5_INT_CONTROL); 3991 + tmp = RREG32(DC_HPD6_INT_CONTROL); 3992 3992 tmp |= DC_HPDx_INT_ACK; 3993 3993 WREG32(DC_HPD6_INT_CONTROL, tmp); 3994 3994 }
+1 -1
drivers/gpu/drm/radeon/radeon_kms.c
··· 116 116 if ((radeon_runtime_pm != 0) && 117 117 radeon_has_atpx() && 118 118 ((flags & RADEON_IS_IGP) == 0) && 119 - !pci_is_thunderbolt_attached(rdev->pdev)) 119 + !pci_is_thunderbolt_attached(dev->pdev)) 120 120 flags |= RADEON_IS_PX; 121 121 122 122 /* radeon_device_init should report only fatal error
+2 -2
drivers/gpu/drm/radeon/si.c
··· 6317 6317 WREG32(DC_HPD5_INT_CONTROL, tmp); 6318 6318 } 6319 6319 if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) { 6320 - tmp = RREG32(DC_HPD5_INT_CONTROL); 6320 + tmp = RREG32(DC_HPD6_INT_CONTROL); 6321 6321 tmp |= DC_HPDx_INT_ACK; 6322 6322 WREG32(DC_HPD6_INT_CONTROL, tmp); 6323 6323 } ··· 6348 6348 WREG32(DC_HPD5_INT_CONTROL, tmp); 6349 6349 } 6350 6350 if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) { 6351 - tmp = RREG32(DC_HPD5_INT_CONTROL); 6351 + tmp = RREG32(DC_HPD6_INT_CONTROL); 6352 6352 tmp |= DC_HPDx_RX_INT_ACK; 6353 6353 WREG32(DC_HPD6_INT_CONTROL, tmp); 6354 6354 }
+4 -2
drivers/hid/Kconfig
··· 275 275 - Trio Linker Plus II 276 276 277 277 config HID_ELECOM 278 - tristate "ELECOM BM084 bluetooth mouse" 278 + tristate "ELECOM HID devices" 279 279 depends on HID 280 280 ---help--- 281 - Support for the ELECOM BM084 (bluetooth mouse). 281 + Support for ELECOM devices: 282 + - BM084 Bluetooth Mouse 283 + - DEFT Trackball (Wired and wireless) 282 284 283 285 config HID_ELO 284 286 tristate "ELO USB 4000/4500 touchscreen"
+12
drivers/hid/hid-asus.c
··· 69 69 #define QUIRK_IS_MULTITOUCH BIT(3) 70 70 #define QUIRK_NO_CONSUMER_USAGES BIT(4) 71 71 #define QUIRK_USE_KBD_BACKLIGHT BIT(5) 72 + #define QUIRK_T100_KEYBOARD BIT(6) 72 73 73 74 #define I2C_KEYBOARD_QUIRKS (QUIRK_FIX_NOTEBOOK_REPORT | \ 74 75 QUIRK_NO_INIT_REPORTS | \ ··· 537 536 drvdata->kbd_backlight->removed = true; 538 537 cancel_work_sync(&drvdata->kbd_backlight->work); 539 538 } 539 + 540 + hid_hw_stop(hdev); 540 541 } 541 542 542 543 static __u8 *asus_report_fixup(struct hid_device *hdev, __u8 *rdesc, ··· 551 548 hid_info(hdev, "Fixing up Asus notebook report descriptor\n"); 552 549 rdesc[55] = 0xdd; 553 550 } 551 + if (drvdata->quirks & QUIRK_T100_KEYBOARD && 552 + *rsize == 76 && rdesc[73] == 0x81 && rdesc[74] == 0x01) { 553 + hid_info(hdev, "Fixing up Asus T100 keyb report descriptor\n"); 554 + rdesc[74] &= ~HID_MAIN_ITEM_CONSTANT; 555 + } 556 + 554 557 return rdesc; 555 558 } 556 559 ··· 569 560 USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD1) }, 570 561 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 571 562 USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD2), QUIRK_USE_KBD_BACKLIGHT }, 563 + { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, 564 + USB_DEVICE_ID_ASUSTEK_T100_KEYBOARD), 565 + QUIRK_T100_KEYBOARD | QUIRK_NO_CONSUMER_USAGES }, 572 566 { } 573 567 }; 574 568 MODULE_DEVICE_TABLE(hid, asus_devices);
+3
drivers/hid/hid-core.c
··· 1855 1855 { HID_I2C_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_I2C_TOUCHPAD) }, 1856 1856 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD1) }, 1857 1857 { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD2) }, 1858 + { HID_USB_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_T100_KEYBOARD) }, 1858 1859 { HID_USB_DEVICE(USB_VENDOR_ID_AUREAL, USB_DEVICE_ID_AUREAL_W01RN) }, 1859 1860 { HID_USB_DEVICE(USB_VENDOR_ID_BELKIN, USB_DEVICE_ID_FLIP_KVM) }, 1860 1861 { HID_USB_DEVICE(USB_VENDOR_ID_BETOP_2185BFM, 0x2208) }, ··· 1892 1891 { HID_USB_DEVICE(USB_VENDOR_ID_DREAM_CHEEKY, USB_DEVICE_ID_DREAM_CHEEKY_WN) }, 1893 1892 { HID_USB_DEVICE(USB_VENDOR_ID_DREAM_CHEEKY, USB_DEVICE_ID_DREAM_CHEEKY_FA) }, 1894 1893 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) }, 1894 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_DEFT_WIRED) }, 1895 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_DEFT_WIRELESS) }, 1895 1896 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0009) }, 1896 1897 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0030) }, 1897 1898 { HID_USB_DEVICE(USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_ACCUTOUCH_2216) },
+53 -9
drivers/hid/hid-elecom.c
··· 1 1 /* 2 - * HID driver for Elecom BM084 (bluetooth mouse). 3 - * Removes a non-existing horizontal wheel from 4 - * the HID descriptor. 5 - * (This module is based on "hid-ortek".) 6 - * 2 + * HID driver for ELECOM devices. 7 3 * Copyright (c) 2010 Richard Nauber <Richard.Nauber@gmail.com> 4 + * Copyright (c) 2016 Yuxuan Shui <yshuiv7@gmail.com> 5 + * Copyright (c) 2017 Diego Elio Pettenò <flameeyes@flameeyes.eu> 8 6 */ 9 7 10 8 /* ··· 21 23 static __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc, 22 24 unsigned int *rsize) 23 25 { 24 - if (*rsize >= 48 && rdesc[46] == 0x05 && rdesc[47] == 0x0c) { 25 - hid_info(hdev, "Fixing up Elecom BM084 report descriptor\n"); 26 - rdesc[47] = 0x00; 26 + switch (hdev->product) { 27 + case USB_DEVICE_ID_ELECOM_BM084: 28 + /* The BM084 Bluetooth mouse includes a non-existing horizontal 29 + * wheel in the HID descriptor. */ 30 + if (*rsize >= 48 && rdesc[46] == 0x05 && rdesc[47] == 0x0c) { 31 + hid_info(hdev, "Fixing up Elecom BM084 report descriptor\n"); 32 + rdesc[47] = 0x00; 33 + } 34 + break; 35 + case USB_DEVICE_ID_ELECOM_DEFT_WIRED: 36 + case USB_DEVICE_ID_ELECOM_DEFT_WIRELESS: 37 + /* The DEFT trackball has eight buttons, but its descriptor only 38 + * reports five, disabling the three Fn buttons on the top of 39 + * the mouse. 
40 + * 41 + * Apply the following diff to the descriptor: 42 + * 43 + * Collection (Physical), Collection (Physical), 44 + * Report ID (1), Report ID (1), 45 + * Report Count (5), -> Report Count (8), 46 + * Report Size (1), Report Size (1), 47 + * Usage Page (Button), Usage Page (Button), 48 + * Usage Minimum (01h), Usage Minimum (01h), 49 + * Usage Maximum (05h), -> Usage Maximum (08h), 50 + * Logical Minimum (0), Logical Minimum (0), 51 + * Logical Maximum (1), Logical Maximum (1), 52 + * Input (Variable), Input (Variable), 53 + * Report Count (1), -> Report Count (0), 54 + * Report Size (3), Report Size (3), 55 + * Input (Constant), Input (Constant), 56 + * Report Size (16), Report Size (16), 57 + * Report Count (2), Report Count (2), 58 + * Usage Page (Desktop), Usage Page (Desktop), 59 + * Usage (X), Usage (X), 60 + * Usage (Y), Usage (Y), 61 + * Logical Minimum (-32768), Logical Minimum (-32768), 62 + * Logical Maximum (32767), Logical Maximum (32767), 63 + * Input (Variable, Relative), Input (Variable, Relative), 64 + * End Collection, End Collection, 65 + */ 66 + if (*rsize == 213 && rdesc[13] == 5 && rdesc[21] == 5) { 67 + hid_info(hdev, "Fixing up Elecom DEFT Fn buttons\n"); 68 + rdesc[13] = 8; /* Button/Variable Report Count */ 69 + rdesc[21] = 8; /* Button/Variable Usage Maximum */ 70 + rdesc[29] = 0; /* Button/Constant Report Count */ 71 + } 72 + break; 27 73 } 28 74 return rdesc; 29 75 } 30 76 31 77 static const struct hid_device_id elecom_devices[] = { 32 - { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084)}, 78 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) }, 79 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_DEFT_WIRED) }, 80 + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_DEFT_WIRELESS) }, 33 81 { } 34 82 }; 35 83 MODULE_DEVICE_TABLE(hid, elecom_devices);
+3
drivers/hid/hid-ids.h
··· 173 173 #define USB_VENDOR_ID_ASUSTEK 0x0b05 174 174 #define USB_DEVICE_ID_ASUSTEK_LCM 0x1726 175 175 #define USB_DEVICE_ID_ASUSTEK_LCM2 0x175b 176 + #define USB_DEVICE_ID_ASUSTEK_T100_KEYBOARD 0x17e0 176 177 #define USB_DEVICE_ID_ASUSTEK_I2C_KEYBOARD 0x8585 177 178 #define USB_DEVICE_ID_ASUSTEK_I2C_TOUCHPAD 0x0101 178 179 #define USB_DEVICE_ID_ASUSTEK_ROG_KEYBOARD1 0x1854 ··· 359 358 360 359 #define USB_VENDOR_ID_ELECOM 0x056e 361 360 #define USB_DEVICE_ID_ELECOM_BM084 0x0061 361 + #define USB_DEVICE_ID_ELECOM_DEFT_WIRED 0x00fe 362 + #define USB_DEVICE_ID_ELECOM_DEFT_WIRELESS 0x00ff 362 363 363 364 #define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34 364 365 #define USB_DEVICE_ID_DREAM_CHEEKY_WN 0x0004
+8 -7
drivers/hid/hid-magicmouse.c
··· 349 349 350 350 if (input->id.product == USB_DEVICE_ID_APPLE_MAGICMOUSE) { 351 351 magicmouse_emit_buttons(msc, clicks & 3); 352 + input_mt_report_pointer_emulation(input, true); 352 353 input_report_rel(input, REL_X, x); 353 354 input_report_rel(input, REL_Y, y); 354 355 } else { /* USB_DEVICE_ID_APPLE_MAGICTRACKPAD */ ··· 389 388 __clear_bit(BTN_RIGHT, input->keybit); 390 389 __clear_bit(BTN_MIDDLE, input->keybit); 391 390 __set_bit(BTN_MOUSE, input->keybit); 392 - __set_bit(BTN_TOOL_FINGER, input->keybit); 393 - __set_bit(BTN_TOOL_DOUBLETAP, input->keybit); 394 - __set_bit(BTN_TOOL_TRIPLETAP, input->keybit); 395 - __set_bit(BTN_TOOL_QUADTAP, input->keybit); 396 - __set_bit(BTN_TOOL_QUINTTAP, input->keybit); 397 - __set_bit(BTN_TOUCH, input->keybit); 398 - __set_bit(INPUT_PROP_POINTER, input->propbit); 399 391 __set_bit(INPUT_PROP_BUTTONPAD, input->propbit); 400 392 } 401 393 394 + __set_bit(BTN_TOOL_FINGER, input->keybit); 395 + __set_bit(BTN_TOOL_DOUBLETAP, input->keybit); 396 + __set_bit(BTN_TOOL_TRIPLETAP, input->keybit); 397 + __set_bit(BTN_TOOL_QUADTAP, input->keybit); 398 + __set_bit(BTN_TOOL_QUINTTAP, input->keybit); 399 + __set_bit(BTN_TOUCH, input->keybit); 400 + __set_bit(INPUT_PROP_POINTER, input->propbit); 402 401 403 402 __set_bit(EV_ABS, input->evbit); 404 403
+13
drivers/hid/i2c-hid/i2c-hid.c
··· 897 897 return 0; 898 898 } 899 899 900 + static void i2c_hid_acpi_fix_up_power(struct device *dev) 901 + { 902 + acpi_handle handle = ACPI_HANDLE(dev); 903 + struct acpi_device *adev; 904 + 905 + if (handle && acpi_bus_get_device(handle, &adev) == 0) 906 + acpi_device_fix_up_power(adev); 907 + } 908 + 900 909 static const struct acpi_device_id i2c_hid_acpi_match[] = { 901 910 {"ACPI0C50", 0 }, 902 911 {"PNP0C50", 0 }, ··· 918 909 { 919 910 return -ENODEV; 920 911 } 912 + 913 + static inline void i2c_hid_acpi_fix_up_power(struct device *dev) {} 921 914 #endif 922 915 923 916 #ifdef CONFIG_OF ··· 1040 1029 ret = i2c_hid_alloc_buffers(ihid, HID_MIN_BUFFER_SIZE); 1041 1030 if (ret < 0) 1042 1031 goto err_regulator; 1032 + 1033 + i2c_hid_acpi_fix_up_power(&client->dev); 1043 1034 1044 1035 pm_runtime_get_noresume(&client->dev); 1045 1036 pm_runtime_set_active(&client->dev);
+24 -23
drivers/hid/wacom_wac.c
··· 1571 1571 { 1572 1572 unsigned char *data = wacom->data; 1573 1573 1574 - if (wacom->pen_input) 1574 + if (wacom->pen_input) { 1575 1575 dev_dbg(wacom->pen_input->dev.parent, 1576 1576 "%s: received report #%d\n", __func__, data[0]); 1577 - else if (wacom->touch_input) 1577 + 1578 + if (len == WACOM_PKGLEN_PENABLED || 1579 + data[0] == WACOM_REPORT_PENABLED) 1580 + return wacom_tpc_pen(wacom); 1581 + } 1582 + else if (wacom->touch_input) { 1578 1583 dev_dbg(wacom->touch_input->dev.parent, 1579 1584 "%s: received report #%d\n", __func__, data[0]); 1580 1585 1581 - switch (len) { 1582 - case WACOM_PKGLEN_TPC1FG: 1583 - return wacom_tpc_single_touch(wacom, len); 1584 - 1585 - case WACOM_PKGLEN_TPC2FG: 1586 - return wacom_tpc_mt_touch(wacom); 1587 - 1588 - case WACOM_PKGLEN_PENABLED: 1589 - return wacom_tpc_pen(wacom); 1590 - 1591 - default: 1592 - switch (data[0]) { 1593 - case WACOM_REPORT_TPC1FG: 1594 - case WACOM_REPORT_TPCHID: 1595 - case WACOM_REPORT_TPCST: 1596 - case WACOM_REPORT_TPC1FGE: 1586 + switch (len) { 1587 + case WACOM_PKGLEN_TPC1FG: 1597 1588 return wacom_tpc_single_touch(wacom, len); 1598 1589 1599 - case WACOM_REPORT_TPCMT: 1600 - case WACOM_REPORT_TPCMT2: 1601 - return wacom_mt_touch(wacom); 1590 + case WACOM_PKGLEN_TPC2FG: 1591 + return wacom_tpc_mt_touch(wacom); 1602 1592 1603 - case WACOM_REPORT_PENABLED: 1604 - return wacom_tpc_pen(wacom); 1593 + default: 1594 + switch (data[0]) { 1595 + case WACOM_REPORT_TPC1FG: 1596 + case WACOM_REPORT_TPCHID: 1597 + case WACOM_REPORT_TPCST: 1598 + case WACOM_REPORT_TPC1FGE: 1599 + return wacom_tpc_single_touch(wacom, len); 1600 + 1601 + case WACOM_REPORT_TPCMT: 1602 + case WACOM_REPORT_TPCMT2: 1603 + return wacom_mt_touch(wacom); 1604 + 1605 + } 1605 1606 } 1606 1607 } 1607 1608
+1 -1
drivers/i2c/busses/i2c-designware-platdrv.c
··· 94 94 static int dw_i2c_acpi_configure(struct platform_device *pdev) 95 95 { 96 96 struct dw_i2c_dev *dev = platform_get_drvdata(pdev); 97 + u32 ss_ht = 0, fp_ht = 0, hs_ht = 0, fs_ht = 0; 97 98 acpi_handle handle = ACPI_HANDLE(&pdev->dev); 98 99 const struct acpi_device_id *id; 99 - u32 ss_ht, fp_ht, hs_ht, fs_ht; 100 100 struct acpi_device *adev; 101 101 const char *uid; 102 102
+21 -4
drivers/i2c/busses/i2c-tiny-usb.c
··· 178 178 int value, int index, void *data, int len) 179 179 { 180 180 struct i2c_tiny_usb *dev = (struct i2c_tiny_usb *)adapter->algo_data; 181 + void *dmadata = kmalloc(len, GFP_KERNEL); 182 + int ret; 183 + 184 + if (!dmadata) 185 + return -ENOMEM; 181 186 182 187 /* do control transfer */ 183 - return usb_control_msg(dev->usb_dev, usb_rcvctrlpipe(dev->usb_dev, 0), 188 + ret = usb_control_msg(dev->usb_dev, usb_rcvctrlpipe(dev->usb_dev, 0), 184 189 cmd, USB_TYPE_VENDOR | USB_RECIP_INTERFACE | 185 - USB_DIR_IN, value, index, data, len, 2000); 190 + USB_DIR_IN, value, index, dmadata, len, 2000); 191 + 192 + memcpy(data, dmadata, len); 193 + kfree(dmadata); 194 + return ret; 186 195 } 187 196 188 197 static int usb_write(struct i2c_adapter *adapter, int cmd, 189 198 int value, int index, void *data, int len) 190 199 { 191 200 struct i2c_tiny_usb *dev = (struct i2c_tiny_usb *)adapter->algo_data; 201 + void *dmadata = kmemdup(data, len, GFP_KERNEL); 202 + int ret; 203 + 204 + if (!dmadata) 205 + return -ENOMEM; 192 206 193 207 /* do control transfer */ 194 - return usb_control_msg(dev->usb_dev, usb_sndctrlpipe(dev->usb_dev, 0), 208 + ret = usb_control_msg(dev->usb_dev, usb_sndctrlpipe(dev->usb_dev, 0), 195 209 cmd, USB_TYPE_VENDOR | USB_RECIP_INTERFACE, 196 - value, index, data, len, 2000); 210 + value, index, dmadata, len, 2000); 211 + 212 + kfree(dmadata); 213 + return ret; 197 214 } 198 215 199 216 static void i2c_tiny_usb_free(struct i2c_tiny_usb *dev)
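The i2c-tiny-usb fix above exists because `usb_control_msg()` requires a DMA-capable buffer, while callers may hand the driver a stack pointer or an offset into some larger structure; the patch therefore bounces every transfer through a freshly allocated buffer. A userspace sketch of the same bounce-buffer shape, where the transfer function is a stand-in for the real USB call:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real transfer (usb_control_msg() in the driver):
 * fills the buffer and returns the number of bytes "transferred". */
static int fake_transfer(unsigned char *buf, int len)
{
	memset(buf, 0xab, len);
	return len;
}

/* Read through a private bounce buffer, mirroring the usb_read() fix:
 * allocate, transfer into the private copy, copy out, free. */
int bounce_read(unsigned char *data, int len)
{
	unsigned char *dmadata = malloc(len);
	int ret;

	if (!dmadata)
		return -ENOMEM;

	ret = fake_transfer(dmadata, len);

	memcpy(data, dmadata, len);
	free(dmadata);
	return ret;
}
```

The write path in the patch is symmetric but copies in the other direction: it duplicates the caller's data up front (`kmemdup()`) and needs no copy-out after the transfer.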
+16 -14
drivers/input/mouse/elan_i2c_i2c.c
··· 554 554 struct completion *completion) 555 555 { 556 556 struct device *dev = &client->dev; 557 - long ret; 558 557 int error; 559 558 int len; 560 - u8 buffer[ETP_I2C_INF_LENGTH]; 559 + u8 buffer[ETP_I2C_REPORT_LEN]; 560 + 561 + len = i2c_master_recv(client, buffer, ETP_I2C_REPORT_LEN); 562 + if (len != ETP_I2C_REPORT_LEN) { 563 + error = len < 0 ? len : -EIO; 564 + dev_warn(dev, "failed to read I2C data after FW WDT reset: %d (%d)\n", 565 + error, len); 566 + } 561 567 562 568 reinit_completion(completion); 563 569 enable_irq(client->irq); 564 570 565 571 error = elan_i2c_write_cmd(client, ETP_I2C_STAND_CMD, ETP_I2C_RESET); 566 - if (!error) 567 - ret = wait_for_completion_interruptible_timeout(completion, 568 - msecs_to_jiffies(300)); 569 - disable_irq(client->irq); 570 - 571 572 if (error) { 572 573 dev_err(dev, "device reset failed: %d\n", error); 573 - return error; 574 - } else if (ret == 0) { 574 + } else if (!wait_for_completion_timeout(completion, 575 + msecs_to_jiffies(300))) { 575 576 dev_err(dev, "timeout waiting for device reset\n"); 576 - return -ETIMEDOUT; 577 - } else if (ret < 0) { 578 - error = ret; 579 - dev_err(dev, "error waiting for device reset: %d\n", error); 580 - return error; 577 + error = -ETIMEDOUT; 581 578 } 579 + 580 + disable_irq(client->irq); 581 + 582 + if (error) 583 + return error; 582 584 583 585 len = i2c_master_recv(client, buffer, ETP_I2C_INF_LENGTH); 584 586 if (len != ETP_I2C_INF_LENGTH) {
+1
drivers/input/touchscreen/atmel_mxt_ts.c
··· 350 350 case MXT_TOUCH_KEYARRAY_T15: 351 351 case MXT_TOUCH_PROXIMITY_T23: 352 352 case MXT_TOUCH_PROXKEY_T52: 353 + case MXT_TOUCH_MULTITOUCHSCREEN_T100: 353 354 case MXT_PROCI_GRIPFACE_T20: 354 355 case MXT_PROCG_NOISE_T22: 355 356 case MXT_PROCI_ONETOUCH_T24:
+1 -1
drivers/input/touchscreen/edt-ft5x06.c
··· 471 471 static EDT_ATTR(offset, S_IWUSR | S_IRUGO, WORK_REGISTER_OFFSET, 472 472 M09_REGISTER_OFFSET, 0, 31); 473 473 static EDT_ATTR(threshold, S_IWUSR | S_IRUGO, WORK_REGISTER_THRESHOLD, 474 - M09_REGISTER_THRESHOLD, 20, 80); 474 + M09_REGISTER_THRESHOLD, 0, 80); 475 475 static EDT_ATTR(report_rate, S_IWUSR | S_IRUGO, WORK_REGISTER_REPORT_RATE, 476 476 NO_REGISTER, 3, 14); 477 477
+1 -1
drivers/leds/leds-pca955x.c
··· 285 285 "slave address 0x%02x\n", 286 286 client->name, chip->bits, client->addr); 287 287 288 - if (!i2c_check_functionality(adapter, I2C_FUNC_I2C)) 288 + if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA)) 289 289 return -EIO; 290 290 291 291 if (pdata) {
+4 -4
drivers/md/bitmap.c
··· 485 485 pr_debug(" magic: %08x\n", le32_to_cpu(sb->magic)); 486 486 pr_debug(" version: %d\n", le32_to_cpu(sb->version)); 487 487 pr_debug(" uuid: %08x.%08x.%08x.%08x\n", 488 - *(__u32 *)(sb->uuid+0), 489 - *(__u32 *)(sb->uuid+4), 490 - *(__u32 *)(sb->uuid+8), 491 - *(__u32 *)(sb->uuid+12)); 488 + le32_to_cpu(*(__u32 *)(sb->uuid+0)), 489 + le32_to_cpu(*(__u32 *)(sb->uuid+4)), 490 + le32_to_cpu(*(__u32 *)(sb->uuid+8)), 491 + le32_to_cpu(*(__u32 *)(sb->uuid+12))); 492 492 pr_debug(" events: %llu\n", 493 493 (unsigned long long) le64_to_cpu(sb->events)); 494 494 pr_debug("events cleared: %llu\n",
+1 -1
drivers/md/dm-bufio.c
··· 1334 1334 { 1335 1335 struct dm_io_request io_req = { 1336 1336 .bi_op = REQ_OP_WRITE, 1337 - .bi_op_flags = REQ_PREFLUSH, 1337 + .bi_op_flags = REQ_PREFLUSH | REQ_SYNC, 1338 1338 .mem.type = DM_IO_KMEM, 1339 1339 .mem.ptr.addr = NULL, 1340 1340 .client = c->dm_io,
+8 -22
drivers/md/dm-integrity.c
··· 783 783 for (i = 0; i < commit_sections; i++) 784 784 rw_section_mac(ic, commit_start + i, true); 785 785 } 786 - rw_journal(ic, REQ_OP_WRITE, REQ_FUA, commit_start, commit_sections, &io_comp); 786 + rw_journal(ic, REQ_OP_WRITE, REQ_FUA | REQ_SYNC, commit_start, 787 + commit_sections, &io_comp); 787 788 } else { 788 789 unsigned to_end; 789 790 io_comp.in_flight = (atomic_t)ATOMIC_INIT(2); ··· 2375 2374 blk_queue_max_integrity_segments(disk->queue, UINT_MAX); 2376 2375 } 2377 2376 2378 - /* FIXME: use new kvmalloc */ 2379 - static void *dm_integrity_kvmalloc(size_t size, gfp_t gfp) 2380 - { 2381 - void *ptr = NULL; 2382 - 2383 - if (size <= PAGE_SIZE) 2384 - ptr = kmalloc(size, GFP_KERNEL | gfp); 2385 - if (!ptr && size <= KMALLOC_MAX_SIZE) 2386 - ptr = kmalloc(size, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY | gfp); 2387 - if (!ptr) 2388 - ptr = __vmalloc(size, GFP_KERNEL | gfp, PAGE_KERNEL); 2389 - 2390 - return ptr; 2391 - } 2392 - 2393 2377 static void dm_integrity_free_page_list(struct dm_integrity_c *ic, struct page_list *pl) 2394 2378 { 2395 2379 unsigned i; ··· 2393 2407 struct page_list *pl; 2394 2408 unsigned i; 2395 2409 2396 - pl = dm_integrity_kvmalloc(page_list_desc_size, __GFP_ZERO); 2410 + pl = kvmalloc(page_list_desc_size, GFP_KERNEL | __GFP_ZERO); 2397 2411 if (!pl) 2398 2412 return NULL; 2399 2413 ··· 2423 2437 struct scatterlist **sl; 2424 2438 unsigned i; 2425 2439 2426 - sl = dm_integrity_kvmalloc(ic->journal_sections * sizeof(struct scatterlist *), __GFP_ZERO); 2440 + sl = kvmalloc(ic->journal_sections * sizeof(struct scatterlist *), GFP_KERNEL | __GFP_ZERO); 2427 2441 if (!sl) 2428 2442 return NULL; 2429 2443 ··· 2439 2453 2440 2454 n_pages = (end_index - start_index + 1); 2441 2455 2442 - s = dm_integrity_kvmalloc(n_pages * sizeof(struct scatterlist), 0); 2456 + s = kvmalloc(n_pages * sizeof(struct scatterlist), GFP_KERNEL); 2443 2457 if (!s) { 2444 2458 dm_integrity_free_journal_scatterlist(ic, sl); 2445 2459 return NULL; ··· 2603 2617 goto bad; 2604 2618 } 2605 2619 2606 - sg = dm_integrity_kvmalloc((ic->journal_pages + 1) * sizeof(struct scatterlist), 0); 2620 + sg = kvmalloc((ic->journal_pages + 1) * sizeof(struct scatterlist), GFP_KERNEL); 2607 2621 if (!sg) { 2608 2622 *error = "Unable to allocate sg list"; 2609 2623 r = -ENOMEM; ··· 2659 2673 r = -ENOMEM; 2660 2674 goto bad; 2661 2675 } 2662 - ic->sk_requests = dm_integrity_kvmalloc(ic->journal_sections * sizeof(struct skcipher_request *), __GFP_ZERO); 2676 + ic->sk_requests = kvmalloc(ic->journal_sections * sizeof(struct skcipher_request *), GFP_KERNEL | __GFP_ZERO); 2663 2677 if (!ic->sk_requests) { 2664 2678 *error = "Unable to allocate sk requests"; 2665 2679 r = -ENOMEM; ··· 2726 2740 r = -ENOMEM; 2727 2741 goto bad; 2728 2742 } 2729 - ic->journal_tree = dm_integrity_kvmalloc(journal_tree_size, 0); 2743 + ic->journal_tree = kvmalloc(journal_tree_size, GFP_KERNEL); 2730 2744 if (!ic->journal_tree) { 2731 2745 *error = "Could not allocate memory for journal tree"; 2732 2746 r = -ENOMEM;
+3 -2
drivers/md/dm-ioctl.c
··· 1710 1710 } 1711 1711 1712 1712 /* 1713 - * Try to avoid low memory issues when a device is suspended. 1713 + * Use __GFP_HIGH to avoid low memory issues when a device is 1714 + * suspended and the ioctl is needed to resume it. 1714 1715 * Use kmalloc() rather than vmalloc() when we can. 1715 1716 */ 1716 1717 dmi = NULL; 1717 1718 noio_flag = memalloc_noio_save(); 1718 - dmi = kvmalloc(param_kernel->data_size, GFP_KERNEL); 1719 + dmi = kvmalloc(param_kernel->data_size, GFP_KERNEL | __GFP_HIGH); 1719 1720 memalloc_noio_restore(noio_flag); 1720 1721 1721 1722 if (!dmi) {
+1 -1
drivers/md/dm-raid1.c
··· 260 260 struct mirror *m; 261 261 struct dm_io_request io_req = { 262 262 .bi_op = REQ_OP_WRITE, 263 - .bi_op_flags = REQ_PREFLUSH, 263 + .bi_op_flags = REQ_PREFLUSH | REQ_SYNC, 264 264 .mem.type = DM_IO_KMEM, 265 265 .mem.ptr.addr = NULL, 266 266 .client = ms->io_client,
+2 -1
drivers/md/dm-snap-persistent.c
··· 741 741 /* 742 742 * Commit exceptions to disk. 743 743 */ 744 - if (ps->valid && area_io(ps, REQ_OP_WRITE, REQ_PREFLUSH | REQ_FUA)) 744 + if (ps->valid && area_io(ps, REQ_OP_WRITE, 745 + REQ_PREFLUSH | REQ_FUA | REQ_SYNC)) 745 746 ps->valid = 0; 746 747 747 748 /*
+2 -2
drivers/md/dm-verity-target.c
··· 166 166 return r; 167 167 } 168 168 169 - if (likely(v->version >= 1)) 169 + if (likely(v->salt_size && (v->version >= 1))) 170 170 r = verity_hash_update(v, req, v->salt, v->salt_size, res); 171 171 172 172 return r; ··· 177 177 { 178 178 int r; 179 179 180 - if (unlikely(!v->version)) { 180 + if (unlikely(v->salt_size && (!v->version))) { 181 181 r = verity_hash_update(v, req, v->salt, v->salt_size, res); 182 182 183 183 if (r < 0) {
+1 -1
drivers/md/dm.c
··· 1657 1657 1658 1658 bio_init(&md->flush_bio, NULL, 0); 1659 1659 md->flush_bio.bi_bdev = md->bdev; 1660 - md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH; 1660 + md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC; 1661 1661 1662 1662 dm_stats_init(&md->stats); 1663 1663
+3 -1
drivers/md/md-cluster.c
··· 1311 1311 cmsg.raid_slot = cpu_to_le32(rdev->desc_nr); 1312 1312 lock_comm(cinfo, 1); 1313 1313 ret = __sendmsg(cinfo, &cmsg); 1314 - if (ret) 1314 + if (ret) { 1315 + unlock_comm(cinfo); 1315 1316 return ret; 1317 + } 1316 1318 cinfo->no_new_dev_lockres->flags |= DLM_LKF_NOQUEUE; 1317 1319 ret = dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_EX); 1318 1320 cinfo->no_new_dev_lockres->flags &= ~DLM_LKF_NOQUEUE;
+1 -1
drivers/md/md.c
··· 765 765 test_bit(FailFast, &rdev->flags) && 766 766 !test_bit(LastDev, &rdev->flags)) 767 767 ff = MD_FAILFAST; 768 - bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_FUA | ff; 768 + bio->bi_opf = REQ_OP_WRITE | REQ_SYNC | REQ_PREFLUSH | REQ_FUA | ff; 769 769 770 770 atomic_inc(&mddev->pending_writes); 771 771 submit_bio(bio);
+2 -2
drivers/md/raid5-cache.c
··· 1782 1782 mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum, 1783 1783 mb, PAGE_SIZE)); 1784 1784 if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE, 1785 - REQ_FUA, false)) { 1785 + REQ_SYNC | REQ_FUA, false)) { 1786 1786 __free_page(page); 1787 1787 return -EIO; 1788 1788 } ··· 2388 2388 mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum, 2389 2389 mb, PAGE_SIZE)); 2390 2390 sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page, 2391 - REQ_OP_WRITE, REQ_FUA, false); 2391 + REQ_OP_WRITE, REQ_SYNC | REQ_FUA, false); 2392 2392 sh->log_start = ctx->pos; 2393 2393 list_add_tail(&sh->r5c, &log->stripe_in_journal_list); 2394 2394 atomic_inc(&log->stripe_in_journal_count);
+2 -2
drivers/md/raid5-ppl.c
··· 907 907 pplhdr->checksum = cpu_to_le32(~crc32c_le(~0, pplhdr, PAGE_SIZE)); 908 908 909 909 if (!sync_page_io(rdev, rdev->ppl.sector - rdev->data_offset, 910 - PPL_HEADER_SIZE, page, REQ_OP_WRITE | REQ_FUA, 0, 911 - false)) { 910 + PPL_HEADER_SIZE, page, REQ_OP_WRITE | REQ_SYNC | 911 + REQ_FUA, 0, false)) { 912 912 md_error(rdev->mddev, rdev); 913 913 ret = -EIO; 914 914 }
+14 -4
drivers/md/raid5.c
··· 4085 4085 set_bit(STRIPE_INSYNC, &sh->state); 4086 4086 else { 4087 4087 atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches); 4088 - if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) 4088 + if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) { 4089 4089 /* don't try to repair!! */ 4090 4090 set_bit(STRIPE_INSYNC, &sh->state); 4091 - else { 4091 + pr_warn_ratelimited("%s: mismatch sector in range " 4092 + "%llu-%llu\n", mdname(conf->mddev), 4093 + (unsigned long long) sh->sector, 4094 + (unsigned long long) sh->sector + 4095 + STRIPE_SECTORS); 4096 + } else { 4092 4097 sh->check_state = check_state_compute_run; 4093 4098 set_bit(STRIPE_COMPUTE_RUN, &sh->state); 4094 4099 set_bit(STRIPE_OP_COMPUTE_BLK, &s->ops_request); ··· 4242 4237 } 4243 4238 } else { 4244 4239 atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches); 4245 - if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) 4240 + if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) { 4246 4241 /* don't try to repair!! */ 4247 4242 set_bit(STRIPE_INSYNC, &sh->state); 4248 - else { 4243 + pr_warn_ratelimited("%s: mismatch sector in range " 4244 + "%llu-%llu\n", mdname(conf->mddev), 4245 + (unsigned long long) sh->sector, 4246 + (unsigned long long) sh->sector + 4247 + STRIPE_SECTORS); 4248 + } else { 4249 4249 int *target = &sh->ops.target; 4250 4250 4251 4251 sh->ops.target = -1;
+4 -4
drivers/media/platform/mtk-vcodec/vdec/vdec_h264_if.c
···
493 493 }
494 494
495 495 static struct vdec_common_if vdec_h264_if = {
496     -	vdec_h264_init,
497     -	vdec_h264_decode,
498     -	vdec_h264_get_param,
499     -	vdec_h264_deinit,
    496 +	.init = vdec_h264_init,
    497 +	.decode = vdec_h264_decode,
    498 +	.get_param = vdec_h264_get_param,
    499 +	.deinit = vdec_h264_deinit,
500 500 };
501 501
502 502 struct vdec_common_if *get_h264_dec_comm_if(void);
+4 -4
drivers/media/platform/mtk-vcodec/vdec/vdec_vp8_if.c
···
620 620 }
621 621
622 622 static struct vdec_common_if vdec_vp8_if = {
623     -	vdec_vp8_init,
624     -	vdec_vp8_decode,
625     -	vdec_vp8_get_param,
626     -	vdec_vp8_deinit,
    623 +	.init = vdec_vp8_init,
    624 +	.decode = vdec_vp8_decode,
    625 +	.get_param = vdec_vp8_get_param,
    626 +	.deinit = vdec_vp8_deinit,
627 627 };
628 628
629 629 struct vdec_common_if *get_vp8_dec_comm_if(void);
+4 -4
drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c
···
979 979 }
980 980
981 981 static struct vdec_common_if vdec_vp9_if = {
982     -	vdec_vp9_init,
983     -	vdec_vp9_decode,
984     -	vdec_vp9_get_param,
985     -	vdec_vp9_deinit,
    982 +	.init = vdec_vp9_init,
    983 +	.decode = vdec_vp9_decode,
    984 +	.get_param = vdec_vp9_get_param,
    985 +	.deinit = vdec_vp9_deinit,
986 986 };
987 987
988 988 struct vdec_common_if *get_vp9_dec_comm_if(void);
+11 -1
drivers/misc/sgi-xp/xp.h
···
309 309 xpc_send(short partid, int ch_number, u32 flags, void *payload,
310 310 	 u16 payload_size)
311 311 {
    312 +	if (!xpc_interface.send)
    313 +		return xpNotLoaded;
    314 +
312 315 	return xpc_interface.send(partid, ch_number, flags, payload,
313 316 				  payload_size);
314 317 }
···
320 317 xpc_send_notify(short partid, int ch_number, u32 flags, void *payload,
321 318 		u16 payload_size, xpc_notify_func func, void *key)
322 319 {
    320 +	if (!xpc_interface.send_notify)
    321 +		return xpNotLoaded;
    322 +
323 323 	return xpc_interface.send_notify(partid, ch_number, flags, payload,
324 324 					 payload_size, func, key);
325 325 }
···
330 324 static inline void
331 325 xpc_received(short partid, int ch_number, void *payload)
332 326 {
333     -	return xpc_interface.received(partid, ch_number, payload);
    327 +	if (xpc_interface.received)
    328 +		xpc_interface.received(partid, ch_number, payload);
334 329 }
335 330
336 331 static inline enum xp_retval
337 332 xpc_partid_to_nasids(short partid, void *nasids)
338 333 {
    334 +	if (!xpc_interface.partid_to_nasids)
    335 +		return xpNotLoaded;
    336 +
339 337 	return xpc_interface.partid_to_nasids(partid, nasids);
340 338 }
341 339
+7 -29
drivers/misc/sgi-xp/xp_main.c
···
69 69 EXPORT_SYMBOL_GPL(xpc_registrations);
70 70
71 71 /*
72    - * Initialize the XPC interface to indicate that XPC isn't loaded.
   72 + * Initialize the XPC interface to NULL to indicate that XPC isn't loaded.
73 73  */
74    -static enum xp_retval
75    -xpc_notloaded(void)
76    -{
77    -	return xpNotLoaded;
78    -}
79    -
80    -struct xpc_interface xpc_interface = {
81    -	(void (*)(int))xpc_notloaded,
82    -	(void (*)(int))xpc_notloaded,
83    -	(enum xp_retval(*)(short, int, u32, void *, u16))xpc_notloaded,
84    -	(enum xp_retval(*)(short, int, u32, void *, u16, xpc_notify_func,
85    -			   void *))xpc_notloaded,
86    -	(void (*)(short, int, void *))xpc_notloaded,
87    -	(enum xp_retval(*)(short, void *))xpc_notloaded
88    -};
   74 +struct xpc_interface xpc_interface = { };
89 75 EXPORT_SYMBOL_GPL(xpc_interface);
90 76
91 77 /*
···
101 115 void
102 116 xpc_clear_interface(void)
103 117 {
104     -	xpc_interface.connect = (void (*)(int))xpc_notloaded;
105     -	xpc_interface.disconnect = (void (*)(int))xpc_notloaded;
106     -	xpc_interface.send = (enum xp_retval(*)(short, int, u32, void *, u16))
107     -	    xpc_notloaded;
108     -	xpc_interface.send_notify = (enum xp_retval(*)(short, int, u32, void *,
109     -						       u16, xpc_notify_func,
110     -						       void *))xpc_notloaded;
111     -	xpc_interface.received = (void (*)(short, int, void *))
112     -	    xpc_notloaded;
113     -	xpc_interface.partid_to_nasids = (enum xp_retval(*)(short, void *))
114     -	    xpc_notloaded;
    118 +	memset(&xpc_interface, 0, sizeof(xpc_interface));
115 119 }
116 120 EXPORT_SYMBOL_GPL(xpc_clear_interface);
117 121
···
164 188
165 189 	mutex_unlock(&registration->mutex);
166 190
167     -	xpc_interface.connect(ch_number);
    191 +	if (xpc_interface.connect)
    192 +		xpc_interface.connect(ch_number);
168 193
169 194 	return xpSuccess;
170 195 }
···
214 237 	registration->assigned_limit = 0;
215 238 	registration->idle_limit = 0;
216 239
217     -	xpc_interface.disconnect(ch_number);
    240 +	if (xpc_interface.disconnect)
    241 +		xpc_interface.disconnect(ch_number);
218 242
219 243 	mutex_unlock(&registration->mutex);
220 244
+7
drivers/mmc/core/pwrseq_simple.c
···
27 27 	struct mmc_pwrseq pwrseq;
28 28 	bool clk_enabled;
29 29 	u32 post_power_on_delay_ms;
   30 +	u32 power_off_delay_us;
30 31 	struct clk *ext_clk;
31 32 	struct gpio_descs *reset_gpios;
32 33 };
···
79 78
80 79 	mmc_pwrseq_simple_set_gpios_value(pwrseq, 1);
81 80
   81 +	if (pwrseq->power_off_delay_us)
   82 +		usleep_range(pwrseq->power_off_delay_us,
   83 +			     2 * pwrseq->power_off_delay_us);
   84 +
82 85 	if (!IS_ERR(pwrseq->ext_clk) && pwrseq->clk_enabled) {
83 86 		clk_disable_unprepare(pwrseq->ext_clk);
84 87 		pwrseq->clk_enabled = false;
···
124 119
125 120 	device_property_read_u32(dev, "post-power-on-delay-ms",
126 121 				 &pwrseq->post_power_on_delay_ms);
    122 +	device_property_read_u32(dev, "power-off-delay-us",
    123 +				 &pwrseq->power_off_delay_us);
127 124
128 125 	pwrseq->pwrseq.dev = dev;
129 126 	pwrseq->pwrseq.ops = &mmc_pwrseq_simple_ops;
+12 -3
drivers/mmc/host/cavium-octeon.c
···
108 108 static void octeon_mmc_int_enable(struct cvm_mmc_host *host, u64 val)
109 109 {
110 110 	writeq(val, host->base + MIO_EMM_INT(host));
111     -	if (!host->dma_active || (host->dma_active && !host->has_ciu3))
    111 +	if (!host->has_ciu3)
112 112 		writeq(val, host->base + MIO_EMM_INT_EN(host));
113 113 }
···
267 267 	}
268 268
269 269 	host->global_pwr_gpiod = devm_gpiod_get_optional(&pdev->dev,
270     -							 "power-gpios",
    270 +							 "power",
271 271 							 GPIOD_OUT_HIGH);
272 272 	if (IS_ERR(host->global_pwr_gpiod)) {
273 273 		dev_err(&pdev->dev, "Invalid power GPIO\n");
···
288 288 		if (ret) {
289 289 			dev_err(&pdev->dev, "Error populating slots\n");
290 290 			octeon_mmc_set_shared_power(host, 0);
291     -			return ret;
    291 +			goto error;
292 292 		}
293 293 		i++;
294 294 	}
295 295 	return 0;
    296 +
    297 +error:
    298 +	for (i = 0; i < CAVIUM_MAX_MMC; i++) {
    299 +		if (host->slot[i])
    300 +			cvm_mmc_of_slot_remove(host->slot[i]);
    301 +		if (host->slot_pdev[i])
    302 +			of_platform_device_destroy(&host->slot_pdev[i]->dev, NULL);
    303 +	}
    304 +	return ret;
296 305 }
297 306
298 307 static int octeon_mmc_remove(struct platform_device *pdev)
+6
drivers/mmc/host/cavium-thunderx.c
···
146 146 		return 0;
147 147
148 148 error:
    149 +	for (i = 0; i < CAVIUM_MAX_MMC; i++) {
    150 +		if (host->slot[i])
    151 +			cvm_mmc_of_slot_remove(host->slot[i]);
    152 +		if (host->slot_pdev[i])
    153 +			of_platform_device_destroy(&host->slot_pdev[i]->dev, NULL);
    154 +	}
149 155 	clk_disable_unprepare(host->clk);
150 156 	return ret;
151 157 }
+10 -15
drivers/mmc/host/cavium.c
···
839 839 		cvm_mmc_reset_bus(slot);
840 840 		if (host->global_pwr_gpiod)
841 841 			host->set_shared_power(host, 0);
842     -		else
    842 +		else if (!IS_ERR(mmc->supply.vmmc))
843 843 			mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
844 844 		break;
845 845
846 846 	case MMC_POWER_UP:
847 847 		if (host->global_pwr_gpiod)
848 848 			host->set_shared_power(host, 1);
849     -		else
    849 +		else if (!IS_ERR(mmc->supply.vmmc))
850 850 			mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
851 851 		break;
852 852 	}
···
968 968 		return -EINVAL;
969 969 	}
970 970
971     -	mmc->supply.vmmc = devm_regulator_get_optional(dev, "vmmc");
972     -	if (IS_ERR(mmc->supply.vmmc)) {
973     -		if (PTR_ERR(mmc->supply.vmmc) == -EPROBE_DEFER)
974     -			return -EPROBE_DEFER;
975     -		/*
976     -		 * Legacy Octeon firmware has no regulator entry, fall-back to
977     -		 * a hard-coded voltage to get a sane OCR.
978     -		 */
    971 +	ret = mmc_regulator_get_supply(mmc);
    972 +	if (ret == -EPROBE_DEFER)
    973 +		return ret;
    974 +	/*
    975 +	 * Legacy Octeon firmware has no regulator entry, fall-back to
    976 +	 * a hard-coded voltage to get a sane OCR.
    977 +	 */
    978 +	if (IS_ERR(mmc->supply.vmmc))
979 979 		mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
980     -	} else {
981     -		ret = mmc_regulator_get_ocrmask(mmc->supply.vmmc);
982     -		if (ret > 0)
983     -			mmc->ocr_avail = ret;
984     -	}
985 980
986 981 	/* Common MMC bindings */
987 982 	ret = mmc_of_parse(mmc);
+2 -1
drivers/mmc/host/sdhci-iproc.c
···
187 187 };
188 188
189 189 static const struct sdhci_pltfm_data sdhci_iproc_pltfm_data = {
190     -	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK,
    190 +	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
    191 +		  SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
191 192 	.quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN,
192 193 	.ops = &sdhci_iproc_ops,
193 194 };
+1 -13
drivers/mmc/host/sdhci-xenon-phy.c
···
787 787 	return ret;
788 788 }
789 789
790     -void xenon_clean_phy(struct sdhci_host *host)
791     -{
792     -	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
793     -	struct xenon_priv *priv = sdhci_pltfm_priv(pltfm_host);
794     -
795     -	kfree(priv->phy_params);
796     -}
797     -
798 790 static int xenon_add_phy(struct device_node *np, struct sdhci_host *host,
799 791 			 const char *phy_name)
800 792 {
···
811 819 	if (ret)
812 820 		return ret;
813 821
814     -	ret = xenon_emmc_phy_parse_param_dt(host, np, priv->phy_params);
815     -	if (ret)
816     -		xenon_clean_phy(host);
817     -
818     -	return ret;
    822 +	return xenon_emmc_phy_parse_param_dt(host, np, priv->phy_params);
819 823 }
820 824
821 825 int xenon_phy_parse_dt(struct device_node *np, struct sdhci_host *host)
+1 -5
drivers/mmc/host/sdhci-xenon.c
···
486 486
487 487 	err = xenon_sdhc_prepare(host);
488 488 	if (err)
489     -		goto clean_phy_param;
    489 +		goto err_clk;
490 490
491 491 	err = sdhci_add_host(host);
492 492 	if (err)
···
496 496
497 497 remove_sdhc:
498 498 	xenon_sdhc_unprepare(host);
499     -clean_phy_param:
500     -	xenon_clean_phy(host);
501 499 err_clk:
502 500 	clk_disable_unprepare(pltfm_host->clk);
503 501 free_pltfm:
···
507 509 {
508 510 	struct sdhci_host *host = platform_get_drvdata(pdev);
509 511 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
510     -
511     -	xenon_clean_phy(host);
512 512
513 513 	sdhci_remove_host(host, 0);
514 514
-1
drivers/mmc/host/sdhci-xenon.h
···
93 93 };
94 94
95 95 int xenon_phy_adj(struct sdhci_host *host, struct mmc_ios *ios);
96    -void xenon_clean_phy(struct sdhci_host *host);
97 96 int xenon_phy_parse_dt(struct device_node *np,
98 97 		       struct sdhci_host *host);
99 98 void xenon_soc_pad_ctrl(struct sdhci_host *host,
+1 -1
drivers/net/bonding/bond_3ad.c
···
2577 2577 		return -1;
2578 2578
2579 2579 	ad_info->aggregator_id = aggregator->aggregator_identifier;
2580      -	ad_info->ports = aggregator->num_of_ports;
     2580 +	ad_info->ports = __agg_active_ports(aggregator);
2581 2581 	ad_info->actor_key = aggregator->actor_oper_aggregator_key;
2582 2582 	ad_info->partner_key = aggregator->partner_oper_aggregator_key;
2583 2583 	ether_addr_copy(ad_info->partner_system,
+11 -5
drivers/net/bonding/bond_main.c
···
2612 2612 	bond_for_each_slave_rcu(bond, slave, iter) {
2613 2613 		unsigned long trans_start = dev_trans_start(slave->dev);
2614 2614
     2615 +		slave->new_link = BOND_LINK_NOCHANGE;
     2616 +
2615 2617 		if (slave->link != BOND_LINK_UP) {
2616 2618 			if (bond_time_in_interval(bond, trans_start, 1) &&
2617 2619 			    bond_time_in_interval(bond, slave->last_rx, 1)) {
2618 2620
2619      -				slave->link = BOND_LINK_UP;
     2621 +				slave->new_link = BOND_LINK_UP;
2620 2622 				slave_state_changed = 1;
2621 2623
2622 2624 				/* primary_slave has no meaning in round-robin
···
2645 2643 			if (!bond_time_in_interval(bond, trans_start, 2) ||
2646 2644 			    !bond_time_in_interval(bond, slave->last_rx, 2)) {
2647 2645
2648      -				slave->link = BOND_LINK_DOWN;
     2646 +				slave->new_link = BOND_LINK_DOWN;
2649 2647 				slave_state_changed = 1;
2650 2648
2651 2649 				if (slave->link_failure_count < UINT_MAX)
···
2675 2673 	if (do_failover || slave_state_changed) {
2676 2674 		if (!rtnl_trylock())
2677 2675 			goto re_arm;
     2676 +
     2677 +		bond_for_each_slave(bond, slave, iter) {
     2678 +			if (slave->new_link != BOND_LINK_NOCHANGE)
     2679 +				slave->link = slave->new_link;
     2680 +		}
2678 2681
2679 2682 		if (slave_state_changed) {
2680 2683 			bond_slave_state_change(bond);
···
4278 4271 	int arp_validate_value, fail_over_mac_value, primary_reselect_value, i;
4279 4272 	struct bond_opt_value newval;
4280 4273 	const struct bond_opt_value *valptr;
4281      -	int arp_all_targets_value;
     4274 +	int arp_all_targets_value = 0;
4282 4275 	u16 ad_actor_sys_prio = 0;
4283 4276 	u16 ad_user_port_key = 0;
4284      -	__be32 arp_target[BOND_MAX_ARP_TARGETS];
     4277 +	__be32 arp_target[BOND_MAX_ARP_TARGETS] = { 0 };
4285 4278 	int arp_ip_count;
4286 4279 	int bond_mode = BOND_MODE_ROUNDROBIN;
4287 4280 	int xmit_hashtype = BOND_XMIT_POLICY_LAYER2;
···
4508 4501 		arp_validate_value = 0;
4509 4502 	}
4510 4503
4511      -	arp_all_targets_value = 0;
4512 4504 	if (arp_all_targets) {
4513 4505 		bond_opt_initstr(&newval, arp_all_targets);
4514 4506 		valptr = bond_opt_parse(bond_opt_get(BOND_OPT_ARP_ALL_TARGETS),
+2 -5
drivers/net/ethernet/8390/ax88796.c
···
748 748
749 749 	ret = ax_mii_init(dev);
750 750 	if (ret)
751     -		goto out_irq;
    751 +		goto err_out;
752 752
753 753 	ax_NS8390_init(dev, 0);
754 754
755 755 	ret = register_netdev(dev);
756 756 	if (ret)
757     -		goto out_irq;
    757 +		goto err_out;
758 758
759 759 	netdev_info(dev, "%dbit, irq %d, %lx, MAC: %pM\n",
760 760 		    ei_local->word16 ? 16 : 8, dev->irq, dev->base_addr,
···
762 762
763 763 	return 0;
764 764
765     - out_irq:
766     -	/* cleanup irq */
767     -	free_irq(dev->irq, dev);
768 765  err_out:
769 766 	return ret;
770 767 }
+4 -4
drivers/net/ethernet/atheros/atlx/atl2.c
···
1353 1353 	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) &&
1354 1354 	    pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) {
1355 1355 		printk(KERN_ERR "atl2: No usable DMA configuration, aborting\n");
     1356 +		err = -EIO;
1356 1357 		goto err_dma;
1357 1358 	}
1358 1359
···
1367 1366 	 * pcibios_set_master to do the needed arch specific settings */
1368 1367 	pci_set_master(pdev);
1369 1368
1370      -	err = -ENOMEM;
1371 1369 	netdev = alloc_etherdev(sizeof(struct atl2_adapter));
1372      -	if (!netdev)
     1370 +	if (!netdev) {
     1371 +		err = -ENOMEM;
1373 1372 		goto err_alloc_etherdev;
     1373 +	}
1374 1374
1375 1375 	SET_NETDEV_DEV(netdev, &pdev->dev);
1376 1376
···
1409 1407 	err = atl2_sw_init(adapter);
1410 1408 	if (err)
1411 1409 		goto err_sw_init;
1412      -
1413      -	err = -EIO;
1414 1410
1415 1411 	netdev->hw_features = NETIF_F_HW_VLAN_CTAG_RX;
1416 1412 	netdev->features |= (NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX);
+3 -1
drivers/net/ethernet/emulex/benet/be_main.c
···
5078 5078 	struct be_adapter *adapter = netdev_priv(dev);
5079 5079 	u8 l4_hdr = 0;
5080 5080
5081      -	/* The code below restricts offload features for some tunneled packets.
     5081 +	/* The code below restricts offload features for some tunneled and
     5082 +	 * Q-in-Q packets.
5082 5083 	 * Offload features for normal (non tunnel) packets are unchanged.
5083 5084 	 */
     5085 +	features = vlan_features_check(skb, features);
5084 5086 	if (!skb->encapsulation ||
5085 5087 	    !(adapter->flags & BE_FLAGS_VXLAN_OFFLOADS))
5086 5088 		return features;
+15 -1
drivers/net/ethernet/freescale/fec_main.c
···
3192 3192 {
3193 3193 	int err, phy_reset;
3194 3194 	bool active_high = false;
3195      -	int msec = 1;
     3195 +	int msec = 1, phy_post_delay = 0;
3196 3196 	struct device_node *np = pdev->dev.of_node;
3197 3197
3198 3198 	if (!np)
···
3208 3208 		return phy_reset;
3209 3209 	else if (!gpio_is_valid(phy_reset))
3210 3210 		return 0;
     3211 +
     3212 +	err = of_property_read_u32(np, "phy-reset-post-delay", &phy_post_delay);
     3213 +	/* valid reset duration should be less than 1s */
     3214 +	if (!err && phy_post_delay > 1000)
     3215 +		return -EINVAL;
3211 3216
3212 3217 	active_high = of_property_read_bool(np, "phy-reset-active-high");
3213 3218
···
3230 3225 		usleep_range(msec * 1000, msec * 1000 + 1000);
3231 3226
3232 3227 	gpio_set_value_cansleep(phy_reset, !active_high);
     3228 +
     3229 +	if (!phy_post_delay)
     3230 +		return 0;
     3231 +
     3232 +	if (phy_post_delay > 20)
     3233 +		msleep(phy_post_delay);
     3234 +	else
     3235 +		usleep_range(phy_post_delay * 1000,
     3236 +			     phy_post_delay * 1000 + 1000);
3233 3237
3234 3238 	return 0;
3235 3239 }
+36 -5
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···
774 774 	mlx5_core_warn(dev, "%s(0x%x) timeout. Will cause a leak of a command resource\n",
775 775 		       mlx5_command_str(msg_to_opcode(ent->in)),
776 776 		       msg_to_opcode(ent->in));
777     -	mlx5_cmd_comp_handler(dev, 1UL << ent->idx);
    777 +	mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
778 778 }
779 779
780 780 static void cmd_work_handler(struct work_struct *work)
···
804 804 	}
805 805
806 806 	cmd->ent_arr[ent->idx] = ent;
    807 +	set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
807 808 	lay = get_inst(cmd, ent->idx);
808 809 	ent->lay = lay;
809 810 	memset(lay, 0, sizeof(*lay));
···
826 825 	if (ent->callback)
827 826 		schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);
828 827
    828 +	/* Skip sending command to fw if internal error */
    829 +	if (pci_channel_offline(dev->pdev) ||
    830 +	    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
    831 +		u8 status = 0;
    832 +		u32 drv_synd;
    833 +
    834 +		ent->ret = mlx5_internal_err_ret_value(dev, msg_to_opcode(ent->in), &drv_synd, &status);
    835 +		MLX5_SET(mbox_out, ent->out, status, status);
    836 +		MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
    837 +
    838 +		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
    839 +		return;
    840 +	}
    841 +
829 842 	/* ring doorbell after the descriptor is valid */
830 843 	mlx5_core_dbg(dev, "writing 0x%x to command doorbell\n", 1 << ent->idx);
831 844 	wmb();
···
850 835 		poll_timeout(ent);
851 836 		/* make sure we read the descriptor after ownership is SW */
852 837 		rmb();
853     -		mlx5_cmd_comp_handler(dev, 1UL << ent->idx);
    838 +		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, (ent->ret == -ETIMEDOUT));
854 839 	}
855 840 }
···
894 879 		wait_for_completion(&ent->done);
895 880 	} else if (!wait_for_completion_timeout(&ent->done, timeout)) {
896 881 		ent->ret = -ETIMEDOUT;
897     -		mlx5_cmd_comp_handler(dev, 1UL << ent->idx);
    882 +		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
898 883 	}
899 884
900 885 	err = ent->ret;
···
1390 1375 		}
1391 1376 	}
1392 1377
1393      -void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec)
     1378 +void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced)
1394 1379 {
1395 1380 	struct mlx5_cmd *cmd = &dev->cmd;
1396 1381 	struct mlx5_cmd_work_ent *ent;
···
1410 1395 		struct semaphore *sem;
1411 1396
1412 1397 		ent = cmd->ent_arr[i];
     1398 +
     1399 +		/* if we already completed the command, ignore it */
     1400 +		if (!test_and_clear_bit(MLX5_CMD_ENT_STATE_PENDING_COMP,
     1401 +					&ent->state)) {
     1402 +			/* only real completion can free the cmd slot */
     1403 +			if (!forced) {
     1404 +				mlx5_core_err(dev, "Command completion arrived after timeout (entry idx = %d).\n",
     1405 +					      ent->idx);
     1406 +				free_ent(cmd, ent->idx);
     1407 +			}
     1408 +			continue;
     1409 +		}
     1410 +
1413 1411 		if (ent->callback)
1414 1412 			cancel_delayed_work(&ent->cb_timeout_work);
1415 1413 		if (ent->page_queue)
···
1445 1417 			mlx5_core_dbg(dev, "command completed. ret 0x%x, delivery status %s(0x%x)\n",
1446 1418 				      ent->ret, deliv_status_to_str(ent->status), ent->status);
1447 1419 		}
1448      -		free_ent(cmd, ent->idx);
     1420 +
     1421 +		/* only real completion will free the entry slot */
     1422 +		if (!forced)
     1423 +			free_ent(cmd, ent->idx);
1449 1424
1450 1425 		if (ent->callback) {
1451 1426 			ds = ent->ts2 - ent->ts1;
+7 -1
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
···
1041 1041 #define MLX5_IB_GRH_BYTES	40
1042 1042 #define MLX5_IPOIB_ENCAP_LEN	4
1043 1043 #define MLX5_GID_SIZE		16
     1044 +#define MLX5_IPOIB_PSEUDO_LEN	20
     1045 +#define MLX5_IPOIB_HARD_LEN	(MLX5_IPOIB_PSEUDO_LEN + MLX5_IPOIB_ENCAP_LEN)
1044 1046
1045 1047 static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
1046 1048 					 struct mlx5_cqe64 *cqe,
···
1050 1048 					 struct sk_buff *skb)
1051 1049 {
1052 1050 	struct net_device *netdev = rq->netdev;
     1051 +	char *pseudo_header;
1053 1052 	u8 *dgid;
1054 1053 	u8 g;
1055 1054
···
1079 1076 	if (likely(netdev->features & NETIF_F_RXHASH))
1080 1077 		mlx5e_skb_set_hash(cqe, skb);
1081 1078
     1079 +	/* 20 bytes of ipoib header and 4 for encap existing */
     1080 +	pseudo_header = skb_push(skb, MLX5_IPOIB_PSEUDO_LEN);
     1081 +	memset(pseudo_header, 0, MLX5_IPOIB_PSEUDO_LEN);
1082 1082 	skb_reset_mac_header(skb);
1083      -	skb_pull(skb, MLX5_IPOIB_ENCAP_LEN);
     1083 +	skb_pull(skb, MLX5_IPOIB_HARD_LEN);
1084 1084
1085 1085 	skb->dev = netdev;
1086 1086
+50 -10
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···
43 43 #include <net/tc_act/tc_vlan.h>
44 44 #include <net/tc_act/tc_tunnel_key.h>
45 45 #include <net/tc_act/tc_pedit.h>
   46 +#include <net/tc_act/tc_csum.h>
46 47 #include <net/vxlan.h>
47 48 #include <net/arp.h>
48 49 #include "en.h"
···
385 384 	if (e->flags & MLX5_ENCAP_ENTRY_VALID)
386 385 		mlx5_encap_dealloc(priv->mdev, e->encap_id);
387 386
388     -	hlist_del_rcu(&e->encap_hlist);
    387 +	hash_del_rcu(&e->encap_hlist);
389 388 	kfree(e->encap_header);
390 389 	kfree(e);
391 390 }
···
926 925 				struct mlx5e_tc_flow_parse_attr *parse_attr)
927 926 {
928 927 	struct pedit_headers *set_masks, *add_masks, *set_vals, *add_vals;
929     -	int i, action_size, nactions, max_actions, first, last;
    928 +	int i, action_size, nactions, max_actions, first, last, first_z;
930 929 	void *s_masks_p, *a_masks_p, *vals_p;
931     -	u32 s_mask, a_mask, val;
932 930 	struct mlx5_fields *f;
933 931 	u8 cmd, field_bsize;
    932 +	u32 s_mask, a_mask;
934 933 	unsigned long mask;
935 934 	void *action;
936 935
···
947 946 	for (i = 0; i < ARRAY_SIZE(fields); i++) {
948 947 		f = &fields[i];
949 948 		/* avoid seeing bits set from previous iterations */
950     -		s_mask = a_mask = mask = val = 0;
    949 +		s_mask = 0;
    950 +		a_mask = 0;
951 951
952 952 		s_masks_p = (void *)set_masks + f->offset;
953 953 		a_masks_p = (void *)add_masks + f->offset;
···
983 981 			memset(a_masks_p, 0, f->size);
984 982 		}
985 983
986     -		memcpy(&val, vals_p, f->size);
987     -
988 984 		field_bsize = f->size * BITS_PER_BYTE;
    985 +
    986 +		first_z = find_first_zero_bit(&mask, field_bsize);
989 987 		first = find_first_bit(&mask, field_bsize);
990 988 		last  = find_last_bit(&mask, field_bsize);
991     -		if (first > 0 || last != (field_bsize - 1)) {
    989 +		if (first > 0 || last != (field_bsize - 1) || first_z < last) {
992 990 			printk(KERN_WARNING "mlx5: partial rewrite (mask %lx) is currently not offloaded\n",
993 991 			       mask);
994 992 			return -EOPNOTSUPP;
···
1004 1002 		}
1005 1003
1006 1004 		if (field_bsize == 32)
1007      -			MLX5_SET(set_action_in, action, data, ntohl(val));
     1005 +			MLX5_SET(set_action_in, action, data, ntohl(*(__be32 *)vals_p));
1008 1006 		else if (field_bsize == 16)
1009      -			MLX5_SET(set_action_in, action, data, ntohs(val));
     1007 +			MLX5_SET(set_action_in, action, data, ntohs(*(__be16 *)vals_p));
1010 1008 		else if (field_bsize == 8)
1011      -			MLX5_SET(set_action_in, action, data, val);
     1009 +			MLX5_SET(set_action_in, action, data, *(u8 *)vals_p);
1012 1010
1013 1011 		action += action_size;
1014 1012 		nactions++;
···
1111 1109 	return err;
1112 1110 }
1113 1111
     1112 +static bool csum_offload_supported(struct mlx5e_priv *priv, u32 action, u32 update_flags)
     1113 +{
     1114 +	u32 prot_flags = TCA_CSUM_UPDATE_FLAG_IPV4HDR | TCA_CSUM_UPDATE_FLAG_TCP |
     1115 +			 TCA_CSUM_UPDATE_FLAG_UDP;
     1116 +
     1117 +	/* The HW recalcs checksums only if re-writing headers */
     1118 +	if (!(action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)) {
     1119 +		netdev_warn(priv->netdev,
     1120 +			    "TC csum action is only offloaded with pedit\n");
     1121 +		return false;
     1122 +	}
     1123 +
     1124 +	if (update_flags & ~prot_flags) {
     1125 +		netdev_warn(priv->netdev,
     1126 +			    "can't offload TC csum action for some header/s - flags %#x\n",
     1127 +			    update_flags);
     1128 +		return false;
     1129 +	}
     1130 +
     1131 +	return true;
     1132 +}
     1133 +
1114 1134 static int parse_tc_nic_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
1115 1135 				struct mlx5e_tc_flow_parse_attr *parse_attr,
1116 1136 				struct mlx5e_tc_flow *flow)
···
1171 1147 			attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR |
1172 1148 					MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
1173 1149 			continue;
     1150 +		}
     1151 +
     1152 +		if (is_tcf_csum(a)) {
     1153 +			if (csum_offload_supported(priv, attr->action,
     1154 +						   tcf_csum_update_flags(a)))
     1155 +				continue;
     1156 +
     1157 +			return -EOPNOTSUPP;
1174 1158 		}
1175 1159
1176 1160 		if (is_tcf_skbedit_mark(a)) {
···
1681 1649
1682 1650 			attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
1683 1651 			continue;
     1652 +		}
     1653 +
     1654 +		if (is_tcf_csum(a)) {
     1655 +			if (csum_offload_supported(priv, attr->action,
     1656 +						   tcf_csum_update_flags(a)))
     1657 +				continue;
     1658 +
     1659 +			return -EOPNOTSUPP;
1684 1660 		}
1685 1661
1686 1662 		if (is_tcf_mirred_egress_redirect(a)) {
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/eq.c
···
422 422 		break;
423 423
424 424 	case MLX5_EVENT_TYPE_CMD:
425     -		mlx5_cmd_comp_handler(dev, be32_to_cpu(eqe->data.cmd.vector));
    425 +		mlx5_cmd_comp_handler(dev, be32_to_cpu(eqe->data.cmd.vector), false);
426 426 		break;
427 427
428 428 	case MLX5_EVENT_TYPE_PORT_CHANGE:
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/health.c
···
90 90 	spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags);
91 91
92 92 	mlx5_core_dbg(dev, "vector 0x%llx\n", vector);
93    -	mlx5_cmd_comp_handler(dev, vector);
   93 +	mlx5_cmd_comp_handler(dev, vector, true);
94 94 	return;
95 95
96 96 no_trig:
+4 -11
drivers/net/ethernet/mellanox/mlx5/core/main.c
···
612 612 	struct mlx5_priv *priv  = &mdev->priv;
613 613 	struct msix_entry *msix = priv->msix_arr;
614 614 	int irq                 = msix[i + MLX5_EQ_VEC_COMP_BASE].vector;
615     -	int err;
616 615
617 616 	if (!zalloc_cpumask_var(&priv->irq_info[i].mask, GFP_KERNEL)) {
618 617 		mlx5_core_warn(mdev, "zalloc_cpumask_var failed");
···
621 622 	cpumask_set_cpu(cpumask_local_spread(i, priv->numa_node),
622 623 			priv->irq_info[i].mask);
623 624
624     -	err = irq_set_affinity_hint(irq, priv->irq_info[i].mask);
625     -	if (err) {
626     -		mlx5_core_warn(mdev, "irq_set_affinity_hint failed,irq 0x%.4x",
627     -			       irq);
628     -		goto err_clear_mask;
629     -	}
    625 +#ifdef CONFIG_SMP
    626 +	if (irq_set_affinity_hint(irq, priv->irq_info[i].mask))
    627 +		mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x", irq);
    628 +#endif
630 629
631 630 	return 0;
632     -
633     -err_clear_mask:
634     -	free_cpumask_var(priv->irq_info[i].mask);
635     -	return err;
636 631 }
637 632
638 633 static void mlx5_irq_clear_affinity_hint(struct mlx5_core_dev *mdev, int i)
+5 -3
drivers/net/geneve.c
···
1293 1293 	if (nla_put_u32(skb, IFLA_GENEVE_ID, vni))
1294 1294 		goto nla_put_failure;
1295 1295
1296      -	if (ip_tunnel_info_af(info) == AF_INET) {
     1296 +	if (rtnl_dereference(geneve->sock4)) {
1297 1297 		if (nla_put_in_addr(skb, IFLA_GENEVE_REMOTE,
1298 1298 				    info->key.u.ipv4.dst))
1299 1299 			goto nla_put_failure;
···
1302 1302 			       !!(info->key.tun_flags & TUNNEL_CSUM)))
1303 1303 			goto nla_put_failure;
1304 1304
     1305 +	}
     1306 +
1305 1307 #if IS_ENABLED(CONFIG_IPV6)
1306      -	} else {
     1308 +	if (rtnl_dereference(geneve->sock6)) {
1307 1309 		if (nla_put_in6_addr(skb, IFLA_GENEVE_REMOTE6,
1308 1310 				     &info->key.u.ipv6.dst))
1309 1311 			goto nla_put_failure;
···
1317 1315 		if (nla_put_u8(skb, IFLA_GENEVE_UDP_ZERO_CSUM6_RX,
1318 1316 			       !geneve->use_udp6_rx_checksums))
1319 1317 			goto nla_put_failure;
1320      -#endif
1321 1319 	}
     1319 +#endif
1322 1320
1323 1321 	if (nla_put_u8(skb, IFLA_GENEVE_TTL, info->key.ttl) ||
1324 1322 	    nla_put_u8(skb, IFLA_GENEVE_TOS, info->key.tos) ||
+1 -1
drivers/net/gtp.c
···
873 873
874 874 	/* Check if there's an existing gtpX device to configure */
875 875 	dev = dev_get_by_index_rcu(net, nla_get_u32(nla[GTPA_LINK]));
876     -	if (dev->netdev_ops == &gtp_netdev_ops)
    876 +	if (dev && dev->netdev_ops == &gtp_netdev_ops)
877 877 		gtp = netdev_priv(dev);
878 878
879 879 	put_net(net);
+1 -1
drivers/net/phy/Kconfig
···
108 108 config MDIO_OCTEON
109 109 	tristate "Octeon and some ThunderX SOCs MDIO buses"
110 110 	depends on 64BIT
111     -	depends on HAS_IOMEM
    111 +	depends on HAS_IOMEM && OF_MDIO
112 112 	select MDIO_CAVIUM
113 113 	help
114 114 	  This module provides a driver for the Octeon and ThunderX MDIO
+37 -29
drivers/net/phy/marvell.c
···
255 255 {
256 256 	int err;
257 257
258     -	/* The Marvell PHY has an errata which requires
259     -	 * that certain registers get written in order
260     -	 * to restart autonegotiation */
261     -	err = phy_write(phydev, MII_BMCR, BMCR_RESET);
262     -
263     -	if (err < 0)
264     -		return err;
265     -
266     -	err = phy_write(phydev, 0x1d, 0x1f);
267     -	if (err < 0)
268     -		return err;
269     -
270     -	err = phy_write(phydev, 0x1e, 0x200c);
271     -	if (err < 0)
272     -		return err;
273     -
274     -	err = phy_write(phydev, 0x1d, 0x5);
275     -	if (err < 0)
276     -		return err;
277     -
278     -	err = phy_write(phydev, 0x1e, 0);
279     -	if (err < 0)
280     -		return err;
281     -
282     -	err = phy_write(phydev, 0x1e, 0x100);
283     -	if (err < 0)
284     -		return err;
285     -
286 258 	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
287 259 	if (err < 0)
288 260 		return err;
···
286 314 	}
287 315
288 316 	return 0;
    317 +}
    318 +
    319 +static int m88e1101_config_aneg(struct phy_device *phydev)
    320 +{
    321 +	int err;
    322 +
    323 +	/* This Marvell PHY has an errata which requires
    324 +	 * that certain registers get written in order
    325 +	 * to restart autonegotiation
    326 +	 */
    327 +	err = phy_write(phydev, MII_BMCR, BMCR_RESET);
    328 +
    329 +	if (err < 0)
    330 +		return err;
    331 +
    332 +	err = phy_write(phydev, 0x1d, 0x1f);
    333 +	if (err < 0)
    334 +		return err;
    335 +
    336 +	err = phy_write(phydev, 0x1e, 0x200c);
    337 +	if (err < 0)
    338 +		return err;
    339 +
    340 +	err = phy_write(phydev, 0x1d, 0x5);
    341 +	if (err < 0)
    342 +		return err;
    343 +
    344 +	err = phy_write(phydev, 0x1e, 0);
    345 +	if (err < 0)
    346 +		return err;
    347 +
    348 +	err = phy_write(phydev, 0x1e, 0x100);
    349 +	if (err < 0)
    350 +		return err;
    351 +
    352 +	return marvell_config_aneg(phydev);
289 353 }
290 354
291 355 static int m88e1111_config_aneg(struct phy_device *phydev)
···
1900 1892 	.flags = PHY_HAS_INTERRUPT,
1901 1893 	.probe = marvell_probe,
1902 1894 	.config_init = &marvell_config_init,
1903      -	.config_aneg = &marvell_config_aneg,
     1895 +	.config_aneg = &m88e1101_config_aneg,
1904 1896 	.read_status = &genphy_read_status,
1905 1897 	.ack_interrupt = &marvell_ack_interrupt,
1906 1898 	.config_intr = &marvell_config_intr,
+25 -8
drivers/net/usb/cdc_ether.c
···
310 310 		return -ENODEV;
311 311 	}
312 312
313     -	/* Some devices don't initialise properly. In particular
314     -	 * the packet filter is not reset. There are devices that
315     -	 * don't do reset all the way. So the packet filter should
316     -	 * be set to a sane initial value.
317     -	 */
318     -	usbnet_cdc_update_filter(dev);
319     -
320 313 	return 0;
321 314
322 315 bad_desc:
···
317 324 	return -ENODEV;
318 325 }
319 326 EXPORT_SYMBOL_GPL(usbnet_generic_cdc_bind);
    327 +
    328 +
    329 +/* like usbnet_generic_cdc_bind() but handles filter initialization
    330 + * correctly
    331 + */
    332 +int usbnet_ether_cdc_bind(struct usbnet *dev, struct usb_interface *intf)
    333 +{
    334 +	int rv;
    335 +
    336 +	rv = usbnet_generic_cdc_bind(dev, intf);
    337 +	if (rv < 0)
    338 +		goto bail_out;
    339 +
    340 +	/* Some devices don't initialise properly. In particular
    341 +	 * the packet filter is not reset. There are devices that
    342 +	 * don't do reset all the way. So the packet filter should
    343 +	 * be set to a sane initial value.
    344 +	 */
    345 +	usbnet_cdc_update_filter(dev);
    346 +
    347 +bail_out:
    348 +	return rv;
    349 +}
    350 +EXPORT_SYMBOL_GPL(usbnet_ether_cdc_bind);
320 351
321 352 void usbnet_cdc_unbind(struct usbnet *dev, struct usb_interface *intf)
322 353 {
···
434 417 	BUILD_BUG_ON((sizeof(((struct usbnet *)0)->data)
435 418 			< sizeof(struct cdc_state)));
436 419
437     -	status = usbnet_generic_cdc_bind(dev, intf);
    420 +	status = usbnet_ether_cdc_bind(dev, intf);
438 421 	if (status < 0)
439 422 		return status;
440 423
+10 -3
drivers/net/usb/smsc95xx.c
···
681 681 	if (ret < 0)
682 682 		return ret;
683 683
684     -	if (features & NETIF_F_HW_CSUM)
    684 +	if (features & NETIF_F_IP_CSUM)
685 685 		read_buf |= Tx_COE_EN_;
686 686 	else
687 687 		read_buf &= ~Tx_COE_EN_;
···
1279 1279
1280 1280 	spin_lock_init(&pdata->mac_cr_lock);
1281 1281
     1282 +	/* LAN95xx devices do not alter the computed checksum of 0 to 0xffff.
     1283 +	 * RFC 2460, ipv6 UDP calculated checksum yields a result of zero must
     1284 +	 * be changed to 0xffff. RFC 768, ipv4 UDP computed checksum is zero,
     1285 +	 * it is transmitted as all ones. The zero transmitted checksum means
     1286 +	 * transmitter generated no checksum. Hence, enable csum offload only
     1287 +	 * for ipv4 packets.
     1288 +	 */
1282 1289 	if (DEFAULT_TX_CSUM_ENABLE)
1283      -		dev->net->features |= NETIF_F_HW_CSUM;
     1290 +		dev->net->features |= NETIF_F_IP_CSUM;
1284 1291 	if (DEFAULT_RX_CSUM_ENABLE)
1285 1292 		dev->net->features |= NETIF_F_RXCSUM;
1286 1293
1287      -	dev->net->hw_features = NETIF_F_HW_CSUM | NETIF_F_RXCSUM;
     1294 +	dev->net->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM;
1288 1295
1289 1296 	smsc95xx_init_mac_address(dev);
1290 1297
+1
drivers/net/virtio_net.c
··· 1989 1989 .ndo_poll_controller = virtnet_netpoll, 1990 1990 #endif 1991 1991 .ndo_xdp = virtnet_xdp, 1992 + .ndo_features_check = passthru_features_check, 1992 1993 }; 1993 1994 1994 1995 static void virtnet_config_changed_work(struct work_struct *work)
+42 -23
drivers/nvme/host/core.c
··· 925 925 } 926 926 927 927 #ifdef CONFIG_BLK_DEV_INTEGRITY 928 + static void nvme_prep_integrity(struct gendisk *disk, struct nvme_id_ns *id, 929 + u16 bs) 930 + { 931 + struct nvme_ns *ns = disk->private_data; 932 + u16 old_ms = ns->ms; 933 + u8 pi_type = 0; 934 + 935 + ns->ms = le16_to_cpu(id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ms); 936 + ns->ext = ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT); 937 + 938 + /* PI implementation requires metadata equal t10 pi tuple size */ 939 + if (ns->ms == sizeof(struct t10_pi_tuple)) 940 + pi_type = id->dps & NVME_NS_DPS_PI_MASK; 941 + 942 + if (blk_get_integrity(disk) && 943 + (ns->pi_type != pi_type || ns->ms != old_ms || 944 + bs != queue_logical_block_size(disk->queue) || 945 + (ns->ms && ns->ext))) 946 + blk_integrity_unregister(disk); 947 + 948 + ns->pi_type = pi_type; 949 + } 950 + 928 951 static void nvme_init_integrity(struct nvme_ns *ns) 929 952 { 930 953 struct blk_integrity integrity; ··· 974 951 blk_queue_max_integrity_segments(ns->queue, 1); 975 952 } 976 953 #else 954 + static void nvme_prep_integrity(struct gendisk *disk, struct nvme_id_ns *id, 955 + u16 bs) 956 + { 957 + } 977 958 static void nvme_init_integrity(struct nvme_ns *ns) 978 959 { 979 960 } ··· 1024 997 static void __nvme_revalidate_disk(struct gendisk *disk, struct nvme_id_ns *id) 1025 998 { 1026 999 struct nvme_ns *ns = disk->private_data; 1027 - u8 lbaf, pi_type; 1028 - u16 old_ms; 1029 - unsigned short bs; 1030 - 1031 - old_ms = ns->ms; 1032 - lbaf = id->flbas & NVME_NS_FLBAS_LBA_MASK; 1033 - ns->lba_shift = id->lbaf[lbaf].ds; 1034 - ns->ms = le16_to_cpu(id->lbaf[lbaf].ms); 1035 - ns->ext = ns->ms && (id->flbas & NVME_NS_FLBAS_META_EXT); 1000 + u16 bs; 1036 1001 1037 1002 /* 1038 1003 * If identify namespace failed, use default 512 byte block size so 1039 1004 * block layer can use before failing read/write for 0 capacity. 
1040 1005 */ 1006 + ns->lba_shift = id->lbaf[id->flbas & NVME_NS_FLBAS_LBA_MASK].ds; 1041 1007 if (ns->lba_shift == 0) 1042 1008 ns->lba_shift = 9; 1043 1009 bs = 1 << ns->lba_shift; 1044 - /* XXX: PI implementation requires metadata equal t10 pi tuple size */ 1045 - pi_type = ns->ms == sizeof(struct t10_pi_tuple) ? 1046 - id->dps & NVME_NS_DPS_PI_MASK : 0; 1047 1010 1048 1011 blk_mq_freeze_queue(disk->queue); 1049 - if (blk_get_integrity(disk) && (ns->pi_type != pi_type || 1050 - ns->ms != old_ms || 1051 - bs != queue_logical_block_size(disk->queue) || 1052 - (ns->ms && ns->ext))) 1053 - blk_integrity_unregister(disk); 1054 1012 1055 - ns->pi_type = pi_type; 1013 + if (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED) 1014 + nvme_prep_integrity(disk, id, bs); 1056 1015 blk_queue_logical_block_size(ns->queue, bs); 1057 - 1058 1016 if (ns->ms && !blk_get_integrity(disk) && !ns->ext) 1059 1017 nvme_init_integrity(ns); 1060 1018 if (ns->ms && !(ns->ms == 8 && ns->pi_type) && !blk_get_integrity(disk)) ··· 1617 1605 } 1618 1606 memcpy(ctrl->psd, id->psd, sizeof(ctrl->psd)); 1619 1607 1620 - if (ctrl->ops->is_fabrics) { 1608 + if (ctrl->ops->flags & NVME_F_FABRICS) { 1621 1609 ctrl->icdoff = le16_to_cpu(id->icdoff); 1622 1610 ctrl->ioccsz = le32_to_cpu(id->ioccsz); 1623 1611 ctrl->iorcsz = le32_to_cpu(id->iorcsz); ··· 2110 2098 if (ns->ndev) 2111 2099 nvme_nvm_unregister_sysfs(ns); 2112 2100 del_gendisk(ns->disk); 2113 - blk_mq_abort_requeue_list(ns->queue); 2114 2101 blk_cleanup_queue(ns->queue); 2115 2102 } 2116 2103 ··· 2447 2436 continue; 2448 2437 revalidate_disk(ns->disk); 2449 2438 blk_set_queue_dying(ns->queue); 2450 - blk_mq_abort_requeue_list(ns->queue); 2451 - blk_mq_start_stopped_hw_queues(ns->queue, true); 2439 + 2440 + /* 2441 + * Forcibly start all queues to avoid having stuck requests. 2442 + * Note that we must ensure the queues are not stopped 2443 + * when the final removal happens. 
2444 + */ 2445 + blk_mq_start_hw_queues(ns->queue); 2446 + 2447 + /* draining requests in requeue list */ 2448 + blk_mq_kick_requeue_list(ns->queue); 2452 2449 } 2453 2450 mutex_unlock(&ctrl->namespaces_mutex); 2454 2451 }
+62 -89
drivers/nvme/host/fc.c
··· 45 45 46 46 #define NVMEFC_QUEUE_DELAY 3 /* ms units */ 47 47 48 - #define NVME_FC_MAX_CONNECT_ATTEMPTS 1 49 - 50 48 struct nvme_fc_queue { 51 49 struct nvme_fc_ctrl *ctrl; 52 50 struct device *dev; ··· 163 165 struct work_struct delete_work; 164 166 struct work_struct reset_work; 165 167 struct delayed_work connect_work; 166 - int reconnect_delay; 167 - int connect_attempts; 168 168 169 169 struct kref ref; 170 170 u32 flags; ··· 1372 1376 complete_rq = __nvme_fc_fcpop_chk_teardowns(ctrl, op); 1373 1377 if (!complete_rq) { 1374 1378 if (unlikely(op->flags & FCOP_FLAGS_TERMIO)) { 1375 - status = cpu_to_le16(NVME_SC_ABORT_REQ); 1379 + status = cpu_to_le16(NVME_SC_ABORT_REQ << 1); 1376 1380 if (blk_queue_dying(rq->q)) 1377 - status |= cpu_to_le16(NVME_SC_DNR); 1381 + status |= cpu_to_le16(NVME_SC_DNR << 1); 1378 1382 } 1379 1383 nvme_end_request(rq, status, result); 1380 1384 } else ··· 1747 1751 dev_warn(ctrl->ctrl.device, 1748 1752 "NVME-FC{%d}: transport association error detected: %s\n", 1749 1753 ctrl->cnum, errmsg); 1750 - dev_info(ctrl->ctrl.device, 1754 + dev_warn(ctrl->ctrl.device, 1751 1755 "NVME-FC{%d}: resetting controller\n", ctrl->cnum); 1752 1756 1753 1757 /* stop the queues on error, cleanup is in reset thread */ ··· 2191 2195 if (!opts->nr_io_queues) 2192 2196 return 0; 2193 2197 2194 - dev_info(ctrl->ctrl.device, "creating %d I/O queues.\n", 2195 - opts->nr_io_queues); 2196 - 2197 2198 nvme_fc_init_io_queues(ctrl); 2198 2199 2199 2200 memset(&ctrl->tag_set, 0, sizeof(ctrl->tag_set)); ··· 2261 2268 if (ctrl->queue_count == 1) 2262 2269 return 0; 2263 2270 2264 - dev_info(ctrl->ctrl.device, "Recreating %d I/O queues.\n", 2265 - opts->nr_io_queues); 2266 - 2267 2271 nvme_fc_init_io_queues(ctrl); 2268 2272 2269 2273 ret = blk_mq_reinit_tagset(&ctrl->tag_set); ··· 2296 2306 int ret; 2297 2307 bool changed; 2298 2308 2299 - ctrl->connect_attempts++; 2309 + ++ctrl->ctrl.opts->nr_reconnects; 2300 2310 2301 2311 /* 2302 2312 * Create the admin queue ··· 
2393 2403 changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE); 2394 2404 WARN_ON_ONCE(!changed); 2395 2405 2396 - ctrl->connect_attempts = 0; 2397 - 2398 - kref_get(&ctrl->ctrl.kref); 2406 + ctrl->ctrl.opts->nr_reconnects = 0; 2399 2407 2400 2408 if (ctrl->queue_count > 1) { 2401 2409 nvme_start_queues(&ctrl->ctrl); ··· 2524 2536 2525 2537 /* 2526 2538 * tear down the controller 2527 - * This will result in the last reference on the nvme ctrl to 2528 - * expire, calling the transport nvme_fc_nvme_ctrl_freed() callback. 2529 - * From there, the transport will tear down it's logical queues and 2530 - * association. 2539 + * After the last reference on the nvme ctrl is removed, 2540 + * the transport nvme_fc_nvme_ctrl_freed() callback will be 2541 + * invoked. From there, the transport will tear down it's 2542 + * logical queues and association. 2531 2543 */ 2532 2544 nvme_uninit_ctrl(&ctrl->ctrl); 2533 2545 2534 2546 nvme_put_ctrl(&ctrl->ctrl); 2535 2547 } 2536 2548 2549 + static bool 2550 + __nvme_fc_schedule_delete_work(struct nvme_fc_ctrl *ctrl) 2551 + { 2552 + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING)) 2553 + return true; 2554 + 2555 + if (!queue_work(nvme_fc_wq, &ctrl->delete_work)) 2556 + return true; 2557 + 2558 + return false; 2559 + } 2560 + 2537 2561 static int 2538 2562 __nvme_fc_del_ctrl(struct nvme_fc_ctrl *ctrl) 2539 2563 { 2540 - if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING)) 2541 - return -EBUSY; 2542 - 2543 - if (!queue_work(nvme_fc_wq, &ctrl->delete_work)) 2544 - return -EBUSY; 2545 - 2546 - return 0; 2564 + return __nvme_fc_schedule_delete_work(ctrl) ? 
-EBUSY : 0; 2547 2565 } 2548 2566 2549 2567 /* ··· 2575 2581 } 2576 2582 2577 2583 static void 2584 + nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status) 2585 + { 2586 + /* If we are resetting/deleting then do nothing */ 2587 + if (ctrl->ctrl.state != NVME_CTRL_RECONNECTING) { 2588 + WARN_ON_ONCE(ctrl->ctrl.state == NVME_CTRL_NEW || 2589 + ctrl->ctrl.state == NVME_CTRL_LIVE); 2590 + return; 2591 + } 2592 + 2593 + dev_info(ctrl->ctrl.device, 2594 + "NVME-FC{%d}: reset: Reconnect attempt failed (%d)\n", 2595 + ctrl->cnum, status); 2596 + 2597 + if (nvmf_should_reconnect(&ctrl->ctrl)) { 2598 + dev_info(ctrl->ctrl.device, 2599 + "NVME-FC{%d}: Reconnect attempt in %d seconds.\n", 2600 + ctrl->cnum, ctrl->ctrl.opts->reconnect_delay); 2601 + queue_delayed_work(nvme_fc_wq, &ctrl->connect_work, 2602 + ctrl->ctrl.opts->reconnect_delay * HZ); 2603 + } else { 2604 + dev_warn(ctrl->ctrl.device, 2605 + "NVME-FC{%d}: Max reconnect attempts (%d) " 2606 + "reached. Removing controller\n", 2607 + ctrl->cnum, ctrl->ctrl.opts->nr_reconnects); 2608 + WARN_ON(__nvme_fc_schedule_delete_work(ctrl)); 2609 + } 2610 + } 2611 + 2612 + static void 2578 2613 nvme_fc_reset_ctrl_work(struct work_struct *work) 2579 2614 { 2580 2615 struct nvme_fc_ctrl *ctrl = ··· 2614 2591 nvme_fc_delete_association(ctrl); 2615 2592 2616 2593 ret = nvme_fc_create_association(ctrl); 2617 - if (ret) { 2618 - dev_warn(ctrl->ctrl.device, 2619 - "NVME-FC{%d}: reset: Reconnect attempt failed (%d)\n", 2620 - ctrl->cnum, ret); 2621 - if (ctrl->connect_attempts >= NVME_FC_MAX_CONNECT_ATTEMPTS) { 2622 - dev_warn(ctrl->ctrl.device, 2623 - "NVME-FC{%d}: Max reconnect attempts (%d) " 2624 - "reached. 
Removing controller\n", 2625 - ctrl->cnum, ctrl->connect_attempts); 2626 - 2627 - if (!nvme_change_ctrl_state(&ctrl->ctrl, 2628 - NVME_CTRL_DELETING)) { 2629 - dev_err(ctrl->ctrl.device, 2630 - "NVME-FC{%d}: failed to change state " 2631 - "to DELETING\n", ctrl->cnum); 2632 - return; 2633 - } 2634 - 2635 - WARN_ON(!queue_work(nvme_fc_wq, &ctrl->delete_work)); 2636 - return; 2637 - } 2638 - 2639 - dev_warn(ctrl->ctrl.device, 2640 - "NVME-FC{%d}: Reconnect attempt in %d seconds.\n", 2641 - ctrl->cnum, ctrl->reconnect_delay); 2642 - queue_delayed_work(nvme_fc_wq, &ctrl->connect_work, 2643 - ctrl->reconnect_delay * HZ); 2644 - } else 2594 + if (ret) 2595 + nvme_fc_reconnect_or_delete(ctrl, ret); 2596 + else 2645 2597 dev_info(ctrl->ctrl.device, 2646 2598 "NVME-FC{%d}: controller reset complete\n", ctrl->cnum); 2647 2599 } ··· 2630 2632 { 2631 2633 struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl); 2632 2634 2633 - dev_warn(ctrl->ctrl.device, 2635 + dev_info(ctrl->ctrl.device, 2634 2636 "NVME-FC{%d}: admin requested controller reset\n", ctrl->cnum); 2635 2637 2636 2638 if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING)) ··· 2647 2649 static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = { 2648 2650 .name = "fc", 2649 2651 .module = THIS_MODULE, 2650 - .is_fabrics = true, 2652 + .flags = NVME_F_FABRICS, 2651 2653 .reg_read32 = nvmf_reg_read32, 2652 2654 .reg_read64 = nvmf_reg_read64, 2653 2655 .reg_write32 = nvmf_reg_write32, ··· 2669 2671 struct nvme_fc_ctrl, connect_work); 2670 2672 2671 2673 ret = nvme_fc_create_association(ctrl); 2672 - if (ret) { 2673 - dev_warn(ctrl->ctrl.device, 2674 - "NVME-FC{%d}: Reconnect attempt failed (%d)\n", 2675 - ctrl->cnum, ret); 2676 - if (ctrl->connect_attempts >= NVME_FC_MAX_CONNECT_ATTEMPTS) { 2677 - dev_warn(ctrl->ctrl.device, 2678 - "NVME-FC{%d}: Max reconnect attempts (%d) " 2679 - "reached. 
Removing controller\n", 2680 - ctrl->cnum, ctrl->connect_attempts); 2681 - 2682 - if (!nvme_change_ctrl_state(&ctrl->ctrl, 2683 - NVME_CTRL_DELETING)) { 2684 - dev_err(ctrl->ctrl.device, 2685 - "NVME-FC{%d}: failed to change state " 2686 - "to DELETING\n", ctrl->cnum); 2687 - return; 2688 - } 2689 - 2690 - WARN_ON(!queue_work(nvme_fc_wq, &ctrl->delete_work)); 2691 - return; 2692 - } 2693 - 2694 - dev_warn(ctrl->ctrl.device, 2695 - "NVME-FC{%d}: Reconnect attempt in %d seconds.\n", 2696 - ctrl->cnum, ctrl->reconnect_delay); 2697 - queue_delayed_work(nvme_fc_wq, &ctrl->connect_work, 2698 - ctrl->reconnect_delay * HZ); 2699 - } else 2674 + if (ret) 2675 + nvme_fc_reconnect_or_delete(ctrl, ret); 2676 + else 2700 2677 dev_info(ctrl->ctrl.device, 2701 2678 "NVME-FC{%d}: controller reconnect complete\n", 2702 2679 ctrl->cnum); ··· 2728 2755 INIT_WORK(&ctrl->delete_work, nvme_fc_delete_ctrl_work); 2729 2756 INIT_WORK(&ctrl->reset_work, nvme_fc_reset_ctrl_work); 2730 2757 INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); 2731 - ctrl->reconnect_delay = opts->reconnect_delay; 2732 2758 spin_lock_init(&ctrl->lock); 2733 2759 2734 2760 /* io queue count */ ··· 2791 2819 ctrl->ctrl.opts = NULL; 2792 2820 /* initiate nvme ctrl ref counting teardown */ 2793 2821 nvme_uninit_ctrl(&ctrl->ctrl); 2794 - nvme_put_ctrl(&ctrl->ctrl); 2795 2822 2796 2823 /* as we're past the point where we transition to the ref 2797 2824 * counting teardown path, if we return a bad pointer here, ··· 2805 2834 ret = -EIO; 2806 2835 return ERR_PTR(ret); 2807 2836 } 2837 + 2838 + kref_get(&ctrl->ctrl.kref); 2808 2839 2809 2840 dev_info(ctrl->ctrl.device, 2810 2841 "NVME-FC{%d}: new ctrl: NQN \"%s\"\n", ··· 2944 2971 static struct nvmf_transport_ops nvme_fc_transport = { 2945 2972 .name = "fc", 2946 2973 .required_opts = NVMF_OPT_TRADDR | NVMF_OPT_HOST_TRADDR, 2947 - .allowed_opts = NVMF_OPT_RECONNECT_DELAY, 2974 + .allowed_opts = NVMF_OPT_RECONNECT_DELAY | NVMF_OPT_CTRL_LOSS_TMO, 2948 2975 
.create_ctrl = nvme_fc_create_ctrl, 2949 2976 }; 2950 2977
+3 -1
drivers/nvme/host/nvme.h
··· 208 208 struct nvme_ctrl_ops { 209 209 const char *name; 210 210 struct module *module; 211 - bool is_fabrics; 211 + unsigned int flags; 212 + #define NVME_F_FABRICS (1 << 0) 213 + #define NVME_F_METADATA_SUPPORTED (1 << 1) 212 214 int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val); 213 215 int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val); 214 216 int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
+9 -6
drivers/nvme/host/pci.c
··· 263 263 c.dbbuf.prp2 = cpu_to_le64(dev->dbbuf_eis_dma_addr); 264 264 265 265 if (nvme_submit_sync_cmd(dev->ctrl.admin_q, &c, NULL, 0)) { 266 - dev_warn(dev->dev, "unable to set dbbuf\n"); 266 + dev_warn(dev->ctrl.device, "unable to set dbbuf\n"); 267 267 /* Free memory and continue on */ 268 268 nvme_dbbuf_dma_free(dev); 269 269 } ··· 1394 1394 result = pci_read_config_word(to_pci_dev(dev->dev), PCI_STATUS, 1395 1395 &pci_status); 1396 1396 if (result == PCIBIOS_SUCCESSFUL) 1397 - dev_warn(dev->dev, 1397 + dev_warn(dev->ctrl.device, 1398 1398 "controller is down; will reset: CSTS=0x%x, PCI_STATUS=0x%hx\n", 1399 1399 csts, pci_status); 1400 1400 else 1401 - dev_warn(dev->dev, 1401 + dev_warn(dev->ctrl.device, 1402 1402 "controller is down; will reset: CSTS=0x%x, PCI_STATUS read failed (%d)\n", 1403 1403 csts, result); 1404 1404 } ··· 1740 1740 */ 1741 1741 if (pdev->vendor == PCI_VENDOR_ID_APPLE && pdev->device == 0x2001) { 1742 1742 dev->q_depth = 2; 1743 - dev_warn(dev->dev, "detected Apple NVMe controller, set " 1744 - "queue depth=%u to work around controller resets\n", 1743 + dev_warn(dev->ctrl.device, "detected Apple NVMe controller, " 1744 + "set queue depth=%u to work around controller resets\n", 1745 1745 dev->q_depth); 1746 1746 } 1747 1747 ··· 1759 1759 if (dev->cmbsz) { 1760 1760 if (sysfs_add_file_to_group(&dev->ctrl.device->kobj, 1761 1761 &dev_attr_cmb.attr, NULL)) 1762 - dev_warn(dev->dev, 1762 + dev_warn(dev->ctrl.device, 1763 1763 "failed to add sysfs attribute for CMB\n"); 1764 1764 } 1765 1765 } ··· 2047 2047 static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = { 2048 2048 .name = "pcie", 2049 2049 .module = THIS_MODULE, 2050 + .flags = NVME_F_METADATA_SUPPORTED, 2050 2051 .reg_read32 = nvme_pci_reg_read32, 2051 2052 .reg_write32 = nvme_pci_reg_write32, 2052 2053 .reg_read64 = nvme_pci_reg_read64, ··· 2294 2293 { PCI_VDEVICE(INTEL, 0x0a54), 2295 2294 .driver_data = NVME_QUIRK_STRIPE_SIZE | 2296 2295 NVME_QUIRK_DEALLOCATE_ZEROES, }, 2296 + { 
PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */ 2297 + .driver_data = NVME_QUIRK_NO_DEEPEST_PS }, 2297 2298 { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */ 2298 2299 .driver_data = NVME_QUIRK_IDENTIFY_CNS, }, 2299 2300 { PCI_DEVICE(0x1c58, 0x0003), /* HGST adapter */
+15 -5
drivers/nvme/host/rdma.c
··· 1038 1038 nvme_rdma_wr_error(cq, wc, "SEND"); 1039 1039 } 1040 1040 1041 + static inline int nvme_rdma_queue_sig_limit(struct nvme_rdma_queue *queue) 1042 + { 1043 + int sig_limit; 1044 + 1045 + /* 1046 + * We signal completion every queue depth/2 and also handle the 1047 + * degenerated case of a device with queue_depth=1, where we 1048 + * would need to signal every message. 1049 + */ 1050 + sig_limit = max(queue->queue_size / 2, 1); 1051 + return (++queue->sig_count % sig_limit) == 0; 1052 + } 1053 + 1041 1054 static int nvme_rdma_post_send(struct nvme_rdma_queue *queue, 1042 1055 struct nvme_rdma_qe *qe, struct ib_sge *sge, u32 num_sge, 1043 1056 struct ib_send_wr *first, bool flush) ··· 1078 1065 * Would have been way to obvious to handle this in hardware or 1079 1066 * at least the RDMA stack.. 1080 1067 * 1081 - * This messy and racy code sniplet is copy and pasted from the iSER 1082 - * initiator, and the magic '32' comes from there as well. 1083 - * 1084 1068 * Always signal the flushes. The magic request used for the flush 1085 1069 * sequencer is not allocated in our driver's tagset and it's 1086 1070 * triggered to be freed by blk_cleanup_queue(). So we need to ··· 1085 1075 * embedded in request's payload, is not freed when __ib_process_cq() 1086 1076 * calls wr_cqe->done(). 1087 1077 */ 1088 - if ((++queue->sig_count % 32) == 0 || flush) 1078 + if (nvme_rdma_queue_sig_limit(queue) || flush) 1089 1079 wr.send_flags |= IB_SEND_SIGNALED; 1090 1080 1091 1081 if (first) ··· 1792 1782 static const struct nvme_ctrl_ops nvme_rdma_ctrl_ops = { 1793 1783 .name = "rdma", 1794 1784 .module = THIS_MODULE, 1795 - .is_fabrics = true, 1785 + .flags = NVME_F_FABRICS, 1796 1786 .reg_read32 = nvmf_reg_read32, 1797 1787 .reg_read64 = nvmf_reg_read64, 1798 1788 .reg_write32 = nvmf_reg_write32,
+1 -1
drivers/nvme/target/loop.c
··· 558 558 static const struct nvme_ctrl_ops nvme_loop_ctrl_ops = { 559 559 .name = "loop", 560 560 .module = THIS_MODULE, 561 - .is_fabrics = true, 561 + .flags = NVME_F_FABRICS, 562 562 .reg_read32 = nvmf_reg_read32, 563 563 .reg_read64 = nvmf_reg_read64, 564 564 .reg_write32 = nvmf_reg_write32,
+2 -1
drivers/of/platform.c
··· 523 523 arch_initcall_sync(of_platform_default_populate_init); 524 524 #endif 525 525 526 - static int of_platform_device_destroy(struct device *dev, void *data) 526 + int of_platform_device_destroy(struct device *dev, void *data) 527 527 { 528 528 /* Do not touch devices not populated from the device tree */ 529 529 if (!dev->of_node || !of_node_check_flag(dev->of_node, OF_POPULATED)) ··· 544 544 of_node_clear_flag(dev->of_node, OF_POPULATED_BUS); 545 545 return 0; 546 546 } 547 + EXPORT_SYMBOL_GPL(of_platform_device_destroy); 547 548 548 549 /** 549 550 * of_platform_depopulate() - Remove devices populated from device tree
+30 -3
drivers/pci/dwc/pci-imx6.c
··· 252 252 static int imx6q_pcie_abort_handler(unsigned long addr, 253 253 unsigned int fsr, struct pt_regs *regs) 254 254 { 255 - return 0; 255 + unsigned long pc = instruction_pointer(regs); 256 + unsigned long instr = *(unsigned long *)pc; 257 + int reg = (instr >> 12) & 15; 258 + 259 + /* 260 + * If the instruction being executed was a read, 261 + * make it look like it read all-ones. 262 + */ 263 + if ((instr & 0x0c100000) == 0x04100000) { 264 + unsigned long val; 265 + 266 + if (instr & 0x00400000) 267 + val = 255; 268 + else 269 + val = -1; 270 + 271 + regs->uregs[reg] = val; 272 + regs->ARM_pc += 4; 273 + return 0; 274 + } 275 + 276 + if ((instr & 0x0e100090) == 0x00100090) { 277 + regs->uregs[reg] = -1; 278 + regs->ARM_pc += 4; 279 + return 0; 280 + } 281 + 282 + return 1; 256 283 } 257 284 258 285 static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) ··· 846 819 * we can install the handler here without risking it 847 820 * accessing some uninitialized driver state. 848 821 */ 849 - hook_fault_code(16 + 6, imx6q_pcie_abort_handler, SIGBUS, 0, 850 - "imprecise external abort"); 822 + hook_fault_code(8, imx6q_pcie_abort_handler, SIGBUS, 0, 823 + "external abort on non-linefetch"); 851 824 852 825 return platform_driver_register(&imx6_pcie_driver); 853 826 }
+1
drivers/pci/endpoint/Kconfig
··· 6 6 7 7 config PCI_ENDPOINT 8 8 bool "PCI Endpoint Support" 9 + depends on HAS_DMA 9 10 help 10 11 Enable this configuration option to support configurable PCI 11 12 endpoint. This should be enabled if the platform has a PCI
+2 -1
drivers/pci/pci.c
··· 2144 2144 2145 2145 if (!pm_runtime_suspended(dev) 2146 2146 || pci_target_state(pci_dev) != pci_dev->current_state 2147 - || platform_pci_need_resume(pci_dev)) 2147 + || platform_pci_need_resume(pci_dev) 2148 + || (pci_dev->dev_flags & PCI_DEV_FLAGS_NEEDS_RESUME)) 2148 2149 return false; 2149 2150 2150 2151 /*
+6 -10
drivers/pci/switch/switchtec.c
··· 1291 1291 cdev = &stdev->cdev; 1292 1292 cdev_init(cdev, &switchtec_fops); 1293 1293 cdev->owner = THIS_MODULE; 1294 - cdev->kobj.parent = &dev->kobj; 1295 1294 1296 1295 return stdev; 1297 1296 ··· 1441 1442 stdev->mmio_sys_info = stdev->mmio + SWITCHTEC_GAS_SYS_INFO_OFFSET; 1442 1443 stdev->mmio_flash_info = stdev->mmio + SWITCHTEC_GAS_FLASH_INFO_OFFSET; 1443 1444 stdev->mmio_ntb = stdev->mmio + SWITCHTEC_GAS_NTB_OFFSET; 1444 - stdev->partition = ioread8(&stdev->mmio_ntb->partition_id); 1445 + stdev->partition = ioread8(&stdev->mmio_sys_info->partition_id); 1445 1446 stdev->partition_count = ioread8(&stdev->mmio_ntb->partition_count); 1446 1447 stdev->mmio_part_cfg_all = stdev->mmio + SWITCHTEC_GAS_PART_CFG_OFFSET; 1447 1448 stdev->mmio_part_cfg = &stdev->mmio_part_cfg_all[stdev->partition]; 1448 1449 stdev->mmio_pff_csr = stdev->mmio + SWITCHTEC_GAS_PFF_CSR_OFFSET; 1450 + 1451 + if (stdev->partition_count < 1) 1452 + stdev->partition_count = 1; 1449 1453 1450 1454 init_pff(stdev); 1451 1455 ··· 1481 1479 SWITCHTEC_EVENT_EN_IRQ, 1482 1480 &stdev->mmio_part_cfg->mrpc_comp_hdr); 1483 1481 1484 - rc = cdev_add(&stdev->cdev, stdev->dev.devt, 1); 1485 - if (rc) 1486 - goto err_put; 1487 - 1488 - rc = device_add(&stdev->dev); 1482 + rc = cdev_device_add(&stdev->cdev, &stdev->dev); 1489 1483 if (rc) 1490 1484 goto err_devadd; 1491 1485 ··· 1490 1492 return 0; 1491 1493 1492 1494 err_devadd: 1493 - cdev_del(&stdev->cdev); 1494 1495 stdev_kill(stdev); 1495 1496 err_put: 1496 1497 ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt)); ··· 1503 1506 1504 1507 pci_set_drvdata(pdev, NULL); 1505 1508 1506 - device_del(&stdev->dev); 1507 - cdev_del(&stdev->cdev); 1509 + cdev_device_del(&stdev->cdev, &stdev->dev); 1508 1510 ida_simple_remove(&switchtec_minor_ida, MINOR(stdev->dev.devt)); 1509 1511 dev_info(&stdev->dev, "unregistered.\n"); 1510 1512
+11
drivers/perf/arm_pmu_acpi.c
··· 29 29 return -EINVAL; 30 30 31 31 gsi = gicc->performance_interrupt; 32 + 33 + /* 34 + * Per the ACPI spec, the MADT cannot describe a PMU that doesn't 35 + * have an interrupt. QEMU advertises this by using a GSI of zero, 36 + * which is not known to be valid on any hardware despite being 37 + * valid per the spec. Take the pragmatic approach and reject a 38 + * GSI of zero for now. 39 + */ 40 + if (!gsi) 41 + return 0; 42 + 32 43 if (gicc->flags & ACPI_MADT_PERFORMANCE_IRQ_MODE) 33 44 trigger = ACPI_EDGE_SENSITIVE; 34 45 else
+3 -17
drivers/pinctrl/core.c
··· 680 680 * pinctrl_generic_free_groups() - removes all pin groups 681 681 * @pctldev: pin controller device 682 682 * 683 - * Note that the caller must take care of locking. 683 + * Note that the caller must take care of locking. The pinctrl groups 684 + * are allocated with devm_kzalloc() so no need to free them here. 684 685 */ 685 686 static void pinctrl_generic_free_groups(struct pinctrl_dev *pctldev) 686 687 { 687 688 struct radix_tree_iter iter; 688 - struct group_desc *group; 689 - unsigned long *indices; 690 689 void **slot; 691 - int i = 0; 692 - 693 - indices = devm_kzalloc(pctldev->dev, sizeof(*indices) * 694 - pctldev->num_groups, GFP_KERNEL); 695 - if (!indices) 696 - return; 697 690 698 691 radix_tree_for_each_slot(slot, &pctldev->pin_group_tree, &iter, 0) 699 - indices[i++] = iter.index; 700 - 701 - for (i = 0; i < pctldev->num_groups; i++) { 702 - group = radix_tree_lookup(&pctldev->pin_group_tree, 703 - indices[i]); 704 - radix_tree_delete(&pctldev->pin_group_tree, indices[i]); 705 - devm_kfree(pctldev->dev, group); 706 - } 692 + radix_tree_delete(&pctldev->pin_group_tree, iter.index); 707 693 708 694 pctldev->num_groups = 0; 709 695 }
+12 -4
drivers/pinctrl/freescale/pinctrl-mxs.c
··· 194 194 return 0; 195 195 } 196 196 197 + static void mxs_pinctrl_rmwl(u32 value, u32 mask, u8 shift, void __iomem *reg) 198 + { 199 + u32 tmp; 200 + 201 + tmp = readl(reg); 202 + tmp &= ~(mask << shift); 203 + tmp |= value << shift; 204 + writel(tmp, reg); 205 + } 206 + 197 207 static int mxs_pinctrl_set_mux(struct pinctrl_dev *pctldev, unsigned selector, 198 208 unsigned group) 199 209 { ··· 221 211 reg += bank * 0x20 + pin / 16 * 0x10; 222 212 shift = pin % 16 * 2; 223 213 224 - writel(0x3 << shift, reg + CLR); 225 - writel(g->muxsel[i] << shift, reg + SET); 214 + mxs_pinctrl_rmwl(g->muxsel[i], 0x3, shift, reg); 226 215 } 227 216 228 217 return 0; ··· 288 279 /* mA */ 289 280 if (config & MA_PRESENT) { 290 281 shift = pin % 8 * 4; 291 - writel(0x3 << shift, reg + CLR); 292 - writel(ma << shift, reg + SET); 282 + mxs_pinctrl_rmwl(ma, 0x3, shift, reg); 293 283 } 294 284 295 285 /* vol */
+19 -5
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1539 1539 * is not listed below. 1540 1540 */ 1541 1541 static const struct dmi_system_id chv_no_valid_mask[] = { 1542 + /* See https://bugzilla.kernel.org/show_bug.cgi?id=194945 */ 1542 1543 { 1543 - /* See https://bugzilla.kernel.org/show_bug.cgi?id=194945 */ 1544 - .ident = "Acer Chromebook (CYAN)", 1544 + .ident = "Intel_Strago based Chromebooks (All models)", 1545 1545 .matches = { 1546 1546 DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"), 1547 - DMI_MATCH(DMI_PRODUCT_NAME, "Edgar"), 1548 - DMI_MATCH(DMI_BIOS_DATE, "05/21/2016"), 1547 + DMI_MATCH(DMI_PRODUCT_FAMILY, "Intel_Strago"), 1549 1548 }, 1550 - } 1549 + }, 1550 + { 1551 + .ident = "Acer Chromebook R11 (Cyan)", 1552 + .matches = { 1553 + DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"), 1554 + DMI_MATCH(DMI_PRODUCT_NAME, "Cyan"), 1555 + }, 1556 + }, 1557 + { 1558 + .ident = "Samsung Chromebook 3 (Celes)", 1559 + .matches = { 1560 + DMI_MATCH(DMI_SYS_VENDOR, "GOOGLE"), 1561 + DMI_MATCH(DMI_PRODUCT_NAME, "Celes"), 1562 + }, 1563 + }, 1564 + {} 1551 1565 }; 1552 1566 1553 1567 static int chv_gpio_probe(struct chv_pinctrl *pctrl, int irq)
-3
drivers/pinctrl/pinconf-generic.c
··· 35 35 PCONFDUMP(PIN_CONFIG_BIAS_PULL_PIN_DEFAULT, 36 36 "input bias pull to pin specific state", NULL, false), 37 37 PCONFDUMP(PIN_CONFIG_BIAS_PULL_UP, "input bias pull up", NULL, false), 38 - PCONFDUMP(PIN_CONFIG_BIDIRECTIONAL, "bi-directional pin operations", NULL, false), 39 38 PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_DRAIN, "output drive open drain", NULL, false), 40 39 PCONFDUMP(PIN_CONFIG_DRIVE_OPEN_SOURCE, "output drive open source", NULL, false), 41 40 PCONFDUMP(PIN_CONFIG_DRIVE_PUSH_PULL, "output drive push pull", NULL, false), ··· 160 161 { "bias-pull-up", PIN_CONFIG_BIAS_PULL_UP, 1 }, 161 162 { "bias-pull-pin-default", PIN_CONFIG_BIAS_PULL_PIN_DEFAULT, 1 }, 162 163 { "bias-pull-down", PIN_CONFIG_BIAS_PULL_DOWN, 1 }, 163 - { "bi-directional", PIN_CONFIG_BIDIRECTIONAL, 1 }, 164 164 { "drive-open-drain", PIN_CONFIG_DRIVE_OPEN_DRAIN, 0 }, 165 165 { "drive-open-source", PIN_CONFIG_DRIVE_OPEN_SOURCE, 0 }, 166 166 { "drive-push-pull", PIN_CONFIG_DRIVE_PUSH_PULL, 0 }, ··· 172 174 { "input-schmitt-enable", PIN_CONFIG_INPUT_SCHMITT_ENABLE, 1 }, 173 175 { "low-power-disable", PIN_CONFIG_LOW_POWER_MODE, 0 }, 174 176 { "low-power-enable", PIN_CONFIG_LOW_POWER_MODE, 1 }, 175 - { "output-enable", PIN_CONFIG_OUTPUT, 1, }, 176 177 { "output-high", PIN_CONFIG_OUTPUT, 1, }, 177 178 { "output-low", PIN_CONFIG_OUTPUT, 0, }, 178 179 { "power-source", PIN_CONFIG_POWER_SOURCE, 0 },
+4 -17
drivers/pinctrl/pinmux.c
··· 826 826 * pinmux_generic_free_functions() - removes all functions 827 827 * @pctldev: pin controller device 828 828 * 829 - * Note that the caller must take care of locking. 829 + * Note that the caller must take care of locking. The pinctrl 830 + * functions are allocated with devm_kzalloc() so no need to free 831 + * them here. 830 832 */ 831 833 void pinmux_generic_free_functions(struct pinctrl_dev *pctldev) 832 834 { 833 835 struct radix_tree_iter iter; 834 - struct function_desc *function; 835 - unsigned long *indices; 836 836 void **slot; 837 - int i = 0; 838 - 839 - indices = devm_kzalloc(pctldev->dev, sizeof(*indices) * 840 - pctldev->num_functions, GFP_KERNEL); 841 - if (!indices) 842 - return; 843 837 844 838 radix_tree_for_each_slot(slot, &pctldev->pin_function_tree, &iter, 0) 845 - indices[i++] = iter.index; 846 - 847 - for (i = 0; i < pctldev->num_functions; i++) { 848 - function = radix_tree_lookup(&pctldev->pin_function_tree, 849 - indices[i]); 850 - radix_tree_delete(&pctldev->pin_function_tree, indices[i]); 851 - devm_kfree(pctldev->dev, function); 852 - } 839 + radix_tree_delete(&pctldev->pin_function_tree, iter.index); 853 840 854 841 pctldev->num_functions = 0; 855 842 }
+1 -1
drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
··· 394 394 SUNXI_PIN(SUNXI_PINCTRL_PIN(E, 18), 395 395 SUNXI_FUNCTION(0x0, "gpio_in"), 396 396 SUNXI_FUNCTION(0x1, "gpio_out"), 397 - SUNXI_FUNCTION(0x3, "owa")), /* DOUT */ 397 + SUNXI_FUNCTION(0x3, "spdif")), /* DOUT */ 398 398 SUNXI_PIN(SUNXI_PINCTRL_PIN(E, 19), 399 399 SUNXI_FUNCTION(0x0, "gpio_in"), 400 400 SUNXI_FUNCTION(0x1, "gpio_out")),
+1
drivers/powercap/powercap_sys.c
··· 538 538 539 539 power_zone->id = result; 540 540 idr_init(&power_zone->idr); 541 + result = -ENOMEM; 541 542 power_zone->name = kstrdup(name, GFP_KERNEL); 542 543 if (!power_zone->name) 543 544 goto err_name_alloc;
+1 -1
drivers/rtc/rtc-cmos.c
··· 1088 1088 } 1089 1089 spin_unlock_irqrestore(&rtc_lock, flags); 1090 1090 1091 - pm_wakeup_event(dev, 0); 1091 + pm_wakeup_hard_event(dev); 1092 1092 acpi_clear_event(ACPI_EVENT_RTC); 1093 1093 acpi_disable_event(ACPI_EVENT_RTC, 0); 1094 1094 return ACPI_INTERRUPT_HANDLED;
+4 -1
drivers/scsi/csiostor/csio_hw.c
··· 1769 1769 goto bye; 1770 1770 } 1771 1771 1772 - mempool_free(mbp, hw->mb_mempool); 1773 1772 if (finicsum != cfcsum) { 1774 1773 csio_warn(hw, 1775 1774 "Config File checksum mismatch: csum=%#x, computed=%#x\n", ··· 1779 1780 rv = csio_hw_validate_caps(hw, mbp); 1780 1781 if (rv != 0) 1781 1782 goto bye; 1783 + 1784 + mempool_free(mbp, hw->mb_mempool); 1785 + mbp = NULL; 1786 + 1782 1787 /* 1783 1788 * Note that we're operating with parameters 1784 1789 * not supplied by the driver, rather than from hard-wired
+23 -4
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
··· 1170 1170 cmd = list_first_entry_or_null(&vscsi->free_cmd, 1171 1171 struct ibmvscsis_cmd, list); 1172 1172 if (cmd) { 1173 + if (cmd->abort_cmd) 1174 + cmd->abort_cmd = NULL; 1173 1175 cmd->flags &= ~(DELAY_SEND); 1174 1176 list_del(&cmd->list); 1175 1177 cmd->iue = iue; ··· 1776 1774 if (cmd->abort_cmd) { 1777 1775 retry = true; 1778 1776 cmd->abort_cmd->flags &= ~(DELAY_SEND); 1777 + cmd->abort_cmd = NULL; 1779 1778 } 1780 1779 1781 1780 /* ··· 1791 1788 list_del(&cmd->list); 1792 1789 ibmvscsis_free_cmd_resources(vscsi, 1793 1790 cmd); 1791 + /* 1792 + * With a successfully aborted op 1793 + * through LIO we want to increment the 1794 + * the vscsi credit so that when we dont 1795 + * send a rsp to the original scsi abort 1796 + * op (h_send_crq), but the tm rsp to 1797 + * the abort is sent, the credit is 1798 + * correctly sent with the abort tm rsp. 1799 + * We would need 1 for the abort tm rsp 1800 + * and 1 credit for the aborted scsi op. 1801 + * Thus we need to increment here. 1802 + * Also we want to increment the credit 1803 + * here because we want to make sure 1804 + * cmd is actually released first 1805 + * otherwise the client will think it 1806 + * it can send a new cmd, and we could 1807 + * find ourselves short of cmd elements. 1808 + */ 1809 + vscsi->credit += 1; 1794 1810 } else { 1795 1811 iue = cmd->iue; 1796 1812 ··· 2984 2962 2985 2963 rsp->opcode = SRP_RSP; 2986 2964 2987 - if (vscsi->credit > 0 && vscsi->state == SRP_PROCESSING) 2988 - rsp->req_lim_delta = cpu_to_be32(vscsi->credit); 2989 - else 2990 - rsp->req_lim_delta = cpu_to_be32(1 + vscsi->credit); 2965 + rsp->req_lim_delta = cpu_to_be32(1 + vscsi->credit); 2991 2966 rsp->tag = cmd->rsp.tag; 2992 2967 rsp->flags = 0; 2993 2968
+1 -1
drivers/scsi/libfc/fc_rport.c
··· 1422 1422 fp = fc_frame_alloc(lport, sizeof(*rtv)); 1423 1423 if (!fp) { 1424 1424 rjt_data.reason = ELS_RJT_UNAB; 1425 - rjt_data.reason = ELS_EXPL_INSUF_RES; 1425 + rjt_data.explan = ELS_EXPL_INSUF_RES; 1426 1426 fc_seq_els_rsp_send(in_fp, ELS_LS_RJT, &rjt_data); 1427 1427 goto drop; 1428 1428 }
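The one-character libfc fix above is a classic copy-paste bug: `rjt_data.reason` was assigned twice, so the reject's reason code was clobbered with an explanation value and `.explan` stayed zero. A toy reproduction (the enum values are stand-ins, not libfc's definitions):

```c
#include <assert.h>

/* Toy model (hypothetical values) of the LS_RJT payload: it carries a
 * reason code and an explanation code. The bug assigned .reason twice,
 * leaving .explan zero and overwriting the reason. */
enum { RJT_UNAB = 0x09, EXPL_INSUF_RES = 0x29 };
struct rjt_data { int reason; int explan; };

static struct rjt_data build_rjt(void)
{
    struct rjt_data d = { 0, 0 };
    d.reason = RJT_UNAB;
    d.explan = EXPL_INSUF_RES;  /* was: d.reason = EXPL_INSUF_RES; */
    return d;
}
```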
+17 -6
drivers/scsi/lpfc/lpfc.h
··· 141 141 uint32_t buffer_tag; /* used for tagged queue ring */ 142 142 }; 143 143 144 + struct lpfc_nvmet_ctxbuf { 145 + struct list_head list; 146 + struct lpfc_nvmet_rcv_ctx *context; 147 + struct lpfc_iocbq *iocbq; 148 + struct lpfc_sglq *sglq; 149 + }; 150 + 144 151 struct lpfc_dma_pool { 145 152 struct lpfc_dmabuf *elements; 146 153 uint32_t max_count; ··· 170 163 struct lpfc_dmabuf dbuf; 171 164 uint16_t total_size; 172 165 uint16_t bytes_recv; 173 - void *context; 174 - struct lpfc_iocbq *iocbq; 175 - struct lpfc_sglq *sglq; 166 + uint16_t idx; 176 167 struct lpfc_queue *hrq; /* ptr to associated Header RQ */ 177 168 struct lpfc_queue *drq; /* ptr to associated Data RQ */ 178 169 }; ··· 675 670 /* INIT_LINK mailbox command */ 676 671 #define LS_NPIV_FAB_SUPPORTED 0x2 /* Fabric supports NPIV */ 677 672 #define LS_IGNORE_ERATT 0x4 /* intr handler should ignore ERATT */ 673 + #define LS_MDS_LINK_DOWN 0x8 /* MDS Diagnostics Link Down */ 674 + #define LS_MDS_LOOPBACK 0x16 /* MDS Diagnostics Link Up (Loopback) */ 678 675 679 676 uint32_t hba_flag; /* hba generic flags */ 680 677 #define HBA_ERATT_HANDLED 0x1 /* This flag is set when eratt handled */ ··· 784 777 uint32_t cfg_nvme_oas; 785 778 uint32_t cfg_nvme_io_channel; 786 779 uint32_t cfg_nvmet_mrq; 787 - uint32_t cfg_nvmet_mrq_post; 788 780 uint32_t cfg_enable_nvmet; 789 781 uint32_t cfg_nvme_enable_fb; 790 782 uint32_t cfg_nvmet_fb_size; ··· 949 943 struct pci_pool *lpfc_mbuf_pool; 950 944 struct pci_pool *lpfc_hrb_pool; /* header receive buffer pool */ 951 945 struct pci_pool *lpfc_drb_pool; /* data receive buffer pool */ 946 + struct pci_pool *lpfc_nvmet_drb_pool; /* data receive buffer pool */ 952 947 struct pci_pool *lpfc_hbq_pool; /* SLI3 hbq buffer pool */ 953 948 struct pci_pool *txrdy_payload_pool; 954 949 struct lpfc_dma_pool lpfc_mbuf_safety_pool; ··· 1235 1228 static inline struct lpfc_sli_ring * 1236 1229 lpfc_phba_elsring(struct lpfc_hba *phba) 1237 1230 { 1238 - if (phba->sli_rev == LPFC_SLI_REV4) 1239 - return phba->sli4_hba.els_wq->pring; 1231 + if (phba->sli_rev == LPFC_SLI_REV4) { 1232 + if (phba->sli4_hba.els_wq) 1233 + return phba->sli4_hba.els_wq->pring; 1234 + else 1235 + return NULL; 1236 + } 1240 1237 return &phba->sli.sli3_ring[LPFC_ELS_RING]; 1241 1238 }
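The `lpfc_phba_elsring()` change in the lpfc.h hunk above turns an unconditional `els_wq->pring` dereference into a guarded one: on SLI4 the ELS work queue may not exist yet (for example during PCI error recovery), so the accessor now returns NULL and callers must check the result. A toy model of that guarded accessor (the types are stand-ins, not lpfc's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model (hypothetical types) of the guarded ring lookup: on the
 * newer hardware revision the work queue pointer may be NULL, so the
 * accessor returns NULL instead of dereferencing it. */
struct ring { int id; };
struct wq   { struct ring *pring; };
struct hba  { int sli_rev; struct wq *els_wq; struct ring sli3_ring; };

static struct ring *elsring(struct hba *p)
{
    if (p->sli_rev == 4)
        return p->els_wq ? p->els_wq->pring : NULL;  /* guarded deref */
    return &p->sli3_ring;                            /* always valid */
}
```

The later lpfc_els.c and lpfc_hbadisc.c hunks in this same series add the matching NULL checks at the call sites.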
+23 -24
drivers/scsi/lpfc/lpfc_attr.c
··· 60 60 #define LPFC_MIN_DEVLOSS_TMO 1 61 61 #define LPFC_MAX_DEVLOSS_TMO 255 62 62 63 - #define LPFC_DEF_MRQ_POST 256 64 - #define LPFC_MIN_MRQ_POST 32 65 - #define LPFC_MAX_MRQ_POST 512 63 + #define LPFC_DEF_MRQ_POST 512 64 + #define LPFC_MIN_MRQ_POST 512 65 + #define LPFC_MAX_MRQ_POST 2048 66 66 67 67 /* 68 68 * Write key size should be multiple of 4. If write key is changed ··· 205 205 atomic_read(&tgtp->xmt_ls_rsp_error)); 206 206 207 207 len += snprintf(buf+len, PAGE_SIZE-len, 208 - "FCP: Rcv %08x Drop %08x\n", 208 + "FCP: Rcv %08x Release %08x Drop %08x\n", 209 209 atomic_read(&tgtp->rcv_fcp_cmd_in), 210 + atomic_read(&tgtp->xmt_fcp_release), 210 211 atomic_read(&tgtp->rcv_fcp_cmd_drop)); 211 212 212 213 if (atomic_read(&tgtp->rcv_fcp_cmd_in) != ··· 219 218 } 220 219 221 220 len += snprintf(buf+len, PAGE_SIZE-len, 222 - "FCP Rsp: RD %08x rsp %08x WR %08x rsp %08x\n", 221 + "FCP Rsp: RD %08x rsp %08x WR %08x rsp %08x " 222 + "drop %08x\n", 223 223 atomic_read(&tgtp->xmt_fcp_read), 224 224 atomic_read(&tgtp->xmt_fcp_read_rsp), 225 225 atomic_read(&tgtp->xmt_fcp_write), 226 - atomic_read(&tgtp->xmt_fcp_rsp)); 227 - 228 - len += snprintf(buf+len, PAGE_SIZE-len, 229 - "FCP Rsp: abort %08x drop %08x\n", 230 - atomic_read(&tgtp->xmt_fcp_abort), 226 + atomic_read(&tgtp->xmt_fcp_rsp), 231 227 atomic_read(&tgtp->xmt_fcp_drop)); 232 228 233 229 len += snprintf(buf+len, PAGE_SIZE-len, ··· 234 236 atomic_read(&tgtp->xmt_fcp_rsp_drop)); 235 237 236 238 len += snprintf(buf+len, PAGE_SIZE-len, 237 - "ABORT: Xmt %08x Err %08x Cmpl %08x", 239 + "ABORT: Xmt %08x Cmpl %08x\n", 240 + atomic_read(&tgtp->xmt_fcp_abort), 241 + atomic_read(&tgtp->xmt_fcp_abort_cmpl)); 242 + 243 + len += snprintf(buf + len, PAGE_SIZE - len, 244 + "ABORT: Sol %08x Usol %08x Err %08x Cmpl %08x", 245 + atomic_read(&tgtp->xmt_abort_sol), 246 + atomic_read(&tgtp->xmt_abort_unsol), 238 247 atomic_read(&tgtp->xmt_abort_rsp), 239 - atomic_read(&tgtp->xmt_abort_rsp_error), 240 - atomic_read(&tgtp->xmt_abort_cmpl)); 248 + atomic_read(&tgtp->xmt_abort_rsp_error)); 249 + 250 + len += snprintf(buf + len, PAGE_SIZE - len, 251 + "IO_CTX: %08x outstanding %08x total %x", 252 + phba->sli4_hba.nvmet_ctx_cnt, 253 + phba->sli4_hba.nvmet_io_wait_cnt, 254 + phba->sli4_hba.nvmet_io_wait_total); 241 255 242 256 len += snprintf(buf+len, PAGE_SIZE-len, "\n"); 243 257 return len; ··· 3322 3312 "Specify number of RQ pairs for processing NVMET cmds"); 3323 3313 3324 3314 /* 3325 - * lpfc_nvmet_mrq_post: Specify number buffers to post on every MRQ 3326 - * 3327 - */ 3328 - LPFC_ATTR_R(nvmet_mrq_post, LPFC_DEF_MRQ_POST, 3329 - LPFC_MIN_MRQ_POST, LPFC_MAX_MRQ_POST, 3330 - "Specify number of buffers to post on every MRQ"); 3331 - 3332 - /* 3333 3315 * lpfc_enable_fc4_type: Defines what FC4 types are supported. 3334 3316 * Supported Values: 1 - register just FCP 3335 3317 * 3 - register both FCP and NVME ··· 5156 5154 &dev_attr_lpfc_suppress_rsp, 5157 5155 &dev_attr_lpfc_nvme_io_channel, 5158 5156 &dev_attr_lpfc_nvmet_mrq, 5159 - &dev_attr_lpfc_nvmet_mrq_post, 5160 5157 &dev_attr_lpfc_nvme_enable_fb, 5161 5158 &dev_attr_lpfc_nvmet_fb_size, 5162 5159 &dev_attr_lpfc_enable_bg, ··· 6195 6194 6196 6195 lpfc_enable_fc4_type_init(phba, lpfc_enable_fc4_type); 6197 6196 lpfc_nvmet_mrq_init(phba, lpfc_nvmet_mrq); 6198 - lpfc_nvmet_mrq_post_init(phba, lpfc_nvmet_mrq_post); 6199 6197 6200 6198 /* Initialize first burst. Target vs Initiator are different. */ 6201 6199 lpfc_nvme_enable_fb_init(phba, lpfc_nvme_enable_fb); ··· 6291 6291 /* Not NVME Target mode. Turn off Target parameters. */ 6292 6292 phba->nvmet_support = 0; 6293 6293 phba->cfg_nvmet_mrq = 0; 6294 - phba->cfg_nvmet_mrq_post = 0; 6295 6294 phba->cfg_nvmet_fb_size = 0; 6296 6295
+7 -4
drivers/scsi/lpfc/lpfc_crtn.h
··· 75 75 void lpfc_cancel_all_vport_retry_delay_timer(struct lpfc_hba *); 76 76 void lpfc_retry_pport_discovery(struct lpfc_hba *); 77 77 void lpfc_release_rpi(struct lpfc_hba *, struct lpfc_vport *, uint16_t); 78 + int lpfc_init_iocb_list(struct lpfc_hba *phba, int cnt); 79 + void lpfc_free_iocb_list(struct lpfc_hba *phba); 80 + int lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq, 81 + struct lpfc_queue *drq, int count, int idx); 78 82 79 83 void lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *, LPFC_MBOXQ_t *); 80 84 void lpfc_mbx_cmpl_reg_login(struct lpfc_hba *, LPFC_MBOXQ_t *); ··· 250 246 void lpfc_sli4_rb_free(struct lpfc_hba *, struct hbq_dmabuf *); 251 247 struct rqb_dmabuf *lpfc_sli4_nvmet_alloc(struct lpfc_hba *phba); 252 248 void lpfc_sli4_nvmet_free(struct lpfc_hba *phba, struct rqb_dmabuf *dmab); 253 - void lpfc_nvmet_rq_post(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp, 254 - struct lpfc_dmabuf *mp); 249 + void lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, 250 + struct lpfc_nvmet_ctxbuf *ctxp); 255 251 int lpfc_nvmet_rcv_unsol_abort(struct lpfc_vport *vport, 256 252 struct fc_frame_header *fc_hdr); 257 253 void lpfc_sli4_build_dflt_fcf_record(struct lpfc_hba *, struct fcf_record *, 258 254 uint16_t); 259 255 int lpfc_sli4_rq_put(struct lpfc_queue *hq, struct lpfc_queue *dq, 260 256 struct lpfc_rqe *hrqe, struct lpfc_rqe *drqe); 261 - int lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hq, 262 - struct lpfc_queue *dq, int count); 263 257 int lpfc_free_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hq); 264 258 void lpfc_unregister_fcf(struct lpfc_hba *); 265 259 void lpfc_unregister_fcf_rescan(struct lpfc_hba *); ··· 273 271 void lpfc_sli4_clear_fcf_rr_bmask(struct lpfc_hba *); 274 272 275 273 int lpfc_mem_alloc(struct lpfc_hba *, int align); 274 + int lpfc_nvmet_mem_alloc(struct lpfc_hba *phba); 276 275 int lpfc_mem_alloc_active_rrq_pool_s4(struct lpfc_hba *); 277 276 void lpfc_mem_free(struct lpfc_hba *); 278 277 void lpfc_mem_free_all(struct lpfc_hba *);
+1
drivers/scsi/lpfc/lpfc_ct.c
··· 2092 2092 2093 2093 ae->un.AttrTypes[3] = 0x02; /* Type 1 - ELS */ 2094 2094 ae->un.AttrTypes[2] = 0x01; /* Type 8 - FCP */ 2095 + ae->un.AttrTypes[6] = 0x01; /* Type 40 - NVME */ 2095 2096 ae->un.AttrTypes[7] = 0x01; /* Type 32 - CT */ 2096 2097 size = FOURBYTES + 32; 2097 2098 ad->AttrLen = cpu_to_be16(size);
+41 -28
drivers/scsi/lpfc/lpfc_debugfs.c
··· 798 798 atomic_read(&tgtp->xmt_fcp_rsp)); 799 799 800 800 len += snprintf(buf + len, size - len, 801 - "FCP Rsp: abort %08x drop %08x\n", 802 - atomic_read(&tgtp->xmt_fcp_abort), 803 - atomic_read(&tgtp->xmt_fcp_drop)); 804 - 805 - len += snprintf(buf + len, size - len, 806 801 "FCP Rsp Cmpl: %08x err %08x drop %08x\n", 807 802 atomic_read(&tgtp->xmt_fcp_rsp_cmpl), 808 803 atomic_read(&tgtp->xmt_fcp_rsp_error), 809 804 atomic_read(&tgtp->xmt_fcp_rsp_drop)); 810 805 811 806 len += snprintf(buf + len, size - len, 812 - "ABORT: Xmt %08x Err %08x Cmpl %08x", 807 + "ABORT: Xmt %08x Cmpl %08x\n", 808 + atomic_read(&tgtp->xmt_fcp_abort), 809 + atomic_read(&tgtp->xmt_fcp_abort_cmpl)); 810 + 811 + len += snprintf(buf + len, size - len, 812 + "ABORT: Sol %08x Usol %08x Err %08x Cmpl %08x", 813 + atomic_read(&tgtp->xmt_abort_sol), 814 + atomic_read(&tgtp->xmt_abort_unsol), 813 815 atomic_read(&tgtp->xmt_abort_rsp), 814 - atomic_read(&tgtp->xmt_abort_rsp_error), 815 - atomic_read(&tgtp->xmt_abort_cmpl)); 816 + atomic_read(&tgtp->xmt_abort_rsp_error)); 816 817 817 818 len += snprintf(buf + len, size - len, "\n"); 818 819 ··· 842 841 } 843 842 spin_unlock(&phba->sli4_hba.abts_nvme_buf_list_lock); 844 843 } 844 + 845 + len += snprintf(buf + len, size - len, 846 + "IO_CTX: %08x outstanding %08x total %08x\n", 847 + phba->sli4_hba.nvmet_ctx_cnt, 848 + phba->sli4_hba.nvmet_io_wait_cnt, 849 + phba->sli4_hba.nvmet_io_wait_total); 845 850 } else { 846 851 if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)) 847 852 return len; ··· 1966 1959 atomic_set(&tgtp->rcv_ls_req_out, 0); 1967 1960 atomic_set(&tgtp->rcv_ls_req_drop, 0); 1968 1961 atomic_set(&tgtp->xmt_ls_abort, 0); 1962 + atomic_set(&tgtp->xmt_ls_abort_cmpl, 0); 1969 1963 atomic_set(&tgtp->xmt_ls_rsp, 0); 1970 1964 atomic_set(&tgtp->xmt_ls_drop, 0); 1971 1965 atomic_set(&tgtp->xmt_ls_rsp_error, 0); ··· 1975 1967 atomic_set(&tgtp->rcv_fcp_cmd_in, 0); 1976 1968 atomic_set(&tgtp->rcv_fcp_cmd_out, 0); 1977 1968 atomic_set(&tgtp->rcv_fcp_cmd_drop, 0); 1978 - atomic_set(&tgtp->xmt_fcp_abort, 0); 1979 1970 atomic_set(&tgtp->xmt_fcp_drop, 0); 1980 1971 atomic_set(&tgtp->xmt_fcp_read_rsp, 0); 1981 1972 atomic_set(&tgtp->xmt_fcp_read, 0); 1982 1973 atomic_set(&tgtp->xmt_fcp_write, 0); 1983 1974 atomic_set(&tgtp->xmt_fcp_rsp, 0); 1975 + atomic_set(&tgtp->xmt_fcp_release, 0); 1984 1976 atomic_set(&tgtp->xmt_fcp_rsp_cmpl, 0); 1985 1977 atomic_set(&tgtp->xmt_fcp_rsp_error, 0); 1986 1978 atomic_set(&tgtp->xmt_fcp_rsp_drop, 0); 1987 1979 1980 + atomic_set(&tgtp->xmt_fcp_abort, 0); 1981 + atomic_set(&tgtp->xmt_fcp_abort_cmpl, 0); 1982 + atomic_set(&tgtp->xmt_abort_sol, 0); 1983 + atomic_set(&tgtp->xmt_abort_unsol, 0); 1988 1984 atomic_set(&tgtp->xmt_abort_rsp, 0); 1989 1985 atomic_set(&tgtp->xmt_abort_rsp_error, 0); 1990 - atomic_set(&tgtp->xmt_abort_cmpl, 0); 1991 1986 } 1992 1987 return nbytes; 1993 1988 } ··· 3081 3070 qp->assoc_qid, qp->q_cnt_1, 3082 3071 (unsigned long long)qp->q_cnt_4); 3083 3072 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3084 - "\t\tWQID[%02d], QE-CNT[%04d], QE-SIZE[%04d], " 3085 - "HOST-IDX[%04d], PORT-IDX[%04d]", 3073 + "\t\tWQID[%02d], QE-CNT[%04d], QE-SZ[%04d], " 3074 + "HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]", 3086 3075 qp->queue_id, qp->entry_count, 3087 3076 qp->entry_size, qp->host_index, 3088 - qp->hba_index); 3077 + qp->hba_index, qp->entry_repost); 3089 3078 len += snprintf(pbuffer + len, 3090 3079 LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n"); 3091 3080 return len; ··· 3132 3121 qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2, 3133 3122 qp->q_cnt_3, (unsigned long long)qp->q_cnt_4); 3134 3123 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3135 - "\tCQID[%02d], QE-CNT[%04d], QE-SIZE[%04d], " 3136 - "HOST-IDX[%04d], PORT-IDX[%04d]", 3124 + "\tCQID[%02d], QE-CNT[%04d], QE-SZ[%04d], " 3125 + "HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]", 3137 3126 qp->queue_id, qp->entry_count, 3138 3127 qp->entry_size, qp->host_index, 3139 - qp->hba_index); 3128 + qp->hba_index, qp->entry_repost); 3140 3129 3141 3130 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n"); 3142 3131 ··· 3154 3143 "\t\t%s RQ info: ", rqtype); 3155 3144 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3156 3145 "AssocCQID[%02d]: RQ-STAT[nopost:x%x nobuf:x%x " 3157 - "trunc:x%x rcv:x%llx]\n", 3146 + "posted:x%x rcv:x%llx]\n", 3158 3147 qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2, 3159 3148 qp->q_cnt_3, (unsigned long long)qp->q_cnt_4); 3160 3149 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3161 - "\t\tHQID[%02d], QE-CNT[%04d], QE-SIZE[%04d], " 3162 - "HOST-IDX[%04d], PORT-IDX[%04d]\n", 3150 + "\t\tHQID[%02d], QE-CNT[%04d], QE-SZ[%04d], " 3151 + "HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]\n", 3163 3152 qp->queue_id, qp->entry_count, qp->entry_size, 3164 3153 qp->host_index, qp->hba_index, qp->entry_repost); 3165 3154 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3166 - "\t\tDQID[%02d], QE-CNT[%04d], QE-SIZE[%04d], " 3167 - "HOST-IDX[%04d], PORT-IDX[%04d]\n", 3155 + "\t\tDQID[%02d], QE-CNT[%04d], QE-SZ[%04d], " 3156 + "HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]\n", 3168 3157 datqp->queue_id, datqp->entry_count, 3169 3158 datqp->entry_size, datqp->host_index, 3170 3159 datqp->hba_index, datqp->entry_repost); 3171 3160 return len; 3172 3161 } ··· 3253 3242 eqtype, qp->q_cnt_1, qp->q_cnt_2, qp->q_cnt_3, 3254 3243 (unsigned long long)qp->q_cnt_4); 3255 3244 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, 3256 - "EQID[%02d], QE-CNT[%04d], QE-SIZE[%04d], " 3257 - "HOST-IDX[%04d], PORT-IDX[%04d]", 3245 + "EQID[%02d], QE-CNT[%04d], QE-SZ[%04d], " 3246 + "HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]", 3258 3247 qp->queue_id, qp->entry_count, qp->entry_size, 3259 3248 qp->host_index, qp->hba_index, qp->entry_repost); 3260 3249 len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n"); 3261 3250 3262 3251 return len; ··· 5866 5855 atomic_dec(&lpfc_debugfs_hba_count); 5867 5856 } 5868 5857 5869 - debugfs_remove(lpfc_debugfs_root); /* lpfc */ 5870 - lpfc_debugfs_root = NULL; 5858 + if (atomic_read(&lpfc_debugfs_hba_count) == 0) { 5859 + debugfs_remove(lpfc_debugfs_root); /* lpfc */ 5860 + lpfc_debugfs_root = NULL; 5861 + } 5871 5862 } 5872 5863 #endif 5873 5864 return;
+1
drivers/scsi/lpfc/lpfc_disc.h
··· 90 90 #define NLP_FCP_INITIATOR 0x10 /* entry is an FCP Initiator */ 91 91 #define NLP_NVME_TARGET 0x20 /* entry is a NVME Target */ 92 92 #define NLP_NVME_INITIATOR 0x40 /* entry is a NVME Initiator */ 93 + #define NLP_NVME_DISCOVERY 0x80 /* entry has NVME disc srvc */ 93 94 94 95 uint16_t nlp_fc4_type; /* FC types node supports. */ 95 96 /* Assigned from GID_FF, only
+22 -4
drivers/scsi/lpfc/lpfc_els.c
··· 1047 1047 irsp->ulpStatus, irsp->un.ulpWord[4], 1048 1048 irsp->ulpTimeout); 1049 1049 1050 + 1051 + /* If this is not a loop open failure, bail out */ 1052 + if (!(irsp->ulpStatus == IOSTAT_LOCAL_REJECT && 1053 + ((irsp->un.ulpWord[4] & IOERR_PARAM_MASK) == 1054 + IOERR_LOOP_OPEN_FAILURE))) 1055 + goto flogifail; 1056 + 1050 1057 /* FLOGI failed, so there is no fabric */ 1051 1058 spin_lock_irq(shost->host_lock); 1052 1059 vport->fc_flag &= ~(FC_FABRIC | FC_PUBLIC_LOOP); ··· 2084 2077 2085 2078 if (irsp->ulpStatus) { 2086 2079 /* Check for retry */ 2080 + ndlp->fc4_prli_sent--; 2087 2081 if (lpfc_els_retry(phba, cmdiocb, rspiocb)) { 2088 2082 /* ELS command is being retried */ 2089 - ndlp->fc4_prli_sent--; 2090 2083 goto out; 2091 2084 } 2085 + 2092 2086 /* PRLI failed */ 2093 2087 lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, 2094 - "2754 PRLI failure DID:%06X Status:x%x/x%x\n", 2088 + "2754 PRLI failure DID:%06X Status:x%x/x%x, " 2089 + "data: x%x\n", 2095 2090 ndlp->nlp_DID, irsp->ulpStatus, 2096 - irsp->un.ulpWord[4]); 2091 + irsp->un.ulpWord[4], ndlp->fc4_prli_sent); 2092 + 2097 2093 /* Do not call DSM for lpfc_els_abort'ed ELS cmds */ 2098 2094 if (lpfc_error_lost_link(irsp)) 2099 2095 goto out; ··· 7451 7441 */ 7452 7442 spin_lock_irq(&phba->hbalock); 7453 7443 pring = lpfc_phba_elsring(phba); 7444 + 7445 + /* Bail out if we've no ELS wq, like in PCI error recovery case. */ 7446 + if (unlikely(!pring)) { 7447 + spin_unlock_irq(&phba->hbalock); 7448 + return; 7449 + } 7450 + 7454 7451 if (phba->sli_rev == LPFC_SLI_REV4) 7455 7452 spin_lock(&pring->ring_lock); 7456 7453 ··· 8684 8667 lpfc_do_scr_ns_plogi(phba, vport); 8685 8668 goto out; 8686 8669 fdisc_failed: 8687 - if (vport->fc_vport->vport_state != FC_VPORT_NO_FABRIC_RSCS) 8670 + if (vport->fc_vport && 8671 + (vport->fc_vport->vport_state != FC_VPORT_NO_FABRIC_RSCS)) 8688 8672 lpfc_vport_set_state(vport, FC_VPORT_FAILED); 8689 8673 /* Cancel discovery timer */ 8690 8674 lpfc_can_disctmo(vport);
+5 -4
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 693 693 pring = lpfc_phba_elsring(phba); 694 694 status = (ha_copy & (HA_RXMASK << (4*LPFC_ELS_RING))); 695 695 status >>= (4*LPFC_ELS_RING); 696 - if ((status & HA_RXMASK) || 697 - (pring->flag & LPFC_DEFERRED_RING_EVENT) || 698 - (phba->hba_flag & HBA_SP_QUEUE_EVT)) { 696 + if (pring && (status & HA_RXMASK || 697 + pring->flag & LPFC_DEFERRED_RING_EVENT || 698 + phba->hba_flag & HBA_SP_QUEUE_EVT)) { 699 699 if (pring->flag & LPFC_STOP_IOCB_EVENT) { 700 700 pring->flag |= LPFC_DEFERRED_RING_EVENT; 701 701 /* Set the lpfc data pending flag */ 702 702 set_bit(LPFC_DATA_READY, &phba->data_flags); 703 703 } else { 704 - if (phba->link_state >= LPFC_LINK_UP) { 704 + if (phba->link_state >= LPFC_LINK_UP || 705 + phba->link_flag & LS_MDS_LOOPBACK) { 705 706 pring->flag &= ~LPFC_DEFERRED_RING_EVENT; 706 707 lpfc_sli_handle_slow_ring_event(phba, pring, 707 708 (status &
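The lpfc_hbadisc.c hunk above reorders the condition so `pring` is tested before `pring->flag` is read: since `lpfc_phba_elsring()` can now return NULL, C's short-circuit evaluation of `&&` must skip the flag dereference when the ring is absent. A toy model of that ordering (names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the reordering: pring can be NULL while the ELS ring is
 * torn down, so it is tested first; && short-circuiting guarantees the
 * pring->flag read on the right only happens when pring is non-NULL. */
struct ring { unsigned flag; };
#define DEFERRED_EVENT 0x1u

static int els_work_pending(struct ring *pring, unsigned status)
{
    return pring && ((status & 0xfu) ||
                     (pring->flag & DEFERRED_EVENT));
}
```

With the old ordering, `status` bits alone could drive the function into dereferencing a NULL `pring` on the very next line.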
+15 -1
drivers/scsi/lpfc/lpfc_hw4.h
··· 1356 1356 1357 1357 #define LPFC_HDR_BUF_SIZE 128 1358 1358 #define LPFC_DATA_BUF_SIZE 2048 1359 + #define LPFC_NVMET_DATA_BUF_SIZE 128 1359 1360 struct rq_context { 1360 1361 uint32_t word0; 1361 1362 #define lpfc_rq_context_rqe_count_SHIFT 16 /* Version 0 Only */ ··· 4421 4420 }; 4422 4421 #define TXRDY_PAYLOAD_LEN 12 4423 4422 4423 + #define CMD_SEND_FRAME 0xE1 4424 + 4425 + struct send_frame_wqe { 4426 + struct ulp_bde64 bde; /* words 0-2 */ 4427 + uint32_t frame_len; /* word 3 */ 4428 + uint32_t fc_hdr_wd0; /* word 4 */ 4429 + uint32_t fc_hdr_wd1; /* word 5 */ 4430 + struct wqe_common wqe_com; /* words 6-11 */ 4431 + uint32_t fc_hdr_wd2; /* word 12 */ 4432 + uint32_t fc_hdr_wd3; /* word 13 */ 4433 + uint32_t fc_hdr_wd4; /* word 14 */ 4434 + uint32_t fc_hdr_wd5; /* word 15 */ 4435 + }; 4424 4436 4425 4437 union lpfc_wqe { 4426 4438 uint32_t words[16]; ··· 4452 4438 struct fcp_trsp64_wqe fcp_trsp; 4453 4439 struct fcp_tsend64_wqe fcp_tsend; 4454 4440 struct fcp_treceive64_wqe fcp_treceive; 4455 - 4441 + struct send_frame_wqe send_frame; 4456 4442 }; 4457 4443 4458 4444 union lpfc_wqe128 {
+39 -98
drivers/scsi/lpfc/lpfc_init.c
··· 1099 1099 1100 1100 list_for_each_entry_safe(ctxp, ctxp_next, &nvmet_aborts, list) { 1101 1101 ctxp->flag &= ~(LPFC_NVMET_XBUSY | LPFC_NVMET_ABORT_OP); 1102 - lpfc_nvmet_rq_post(phba, ctxp, &ctxp->rqb_buffer->hbuf); 1102 + lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 1103 1103 } 1104 1104 } ··· 3381 3381 { 3382 3382 struct lpfc_sglq *sglq_entry = NULL, *sglq_entry_next = NULL; 3383 3383 uint16_t i, lxri, xri_cnt, els_xri_cnt; 3384 - uint16_t nvmet_xri_cnt, tot_cnt; 3384 + uint16_t nvmet_xri_cnt; 3385 3385 LIST_HEAD(nvmet_sgl_list); 3386 3386 int rc; 3387 3387 ··· 3389 3389 * update on pci function's nvmet xri-sgl list 3390 3390 */ 3391 3391 els_xri_cnt = lpfc_sli4_get_els_iocb_cnt(phba); 3392 - nvmet_xri_cnt = phba->cfg_nvmet_mrq * phba->cfg_nvmet_mrq_post; 3393 - tot_cnt = phba->sli4_hba.max_cfg_param.max_xri - els_xri_cnt; 3394 - if (nvmet_xri_cnt > tot_cnt) { 3395 - phba->cfg_nvmet_mrq_post = tot_cnt / phba->cfg_nvmet_mrq; 3396 - nvmet_xri_cnt = phba->cfg_nvmet_mrq * phba->cfg_nvmet_mrq_post; 3397 - lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 3398 - "6301 NVMET post-sgl count changed to %d\n", 3399 - phba->cfg_nvmet_mrq_post); 3400 - } 3392 + 3393 + /* For NVMET, ALL remaining XRIs are dedicated for IO processing */ 3394 + nvmet_xri_cnt = phba->sli4_hba.max_cfg_param.max_xri - els_xri_cnt; 3401 3395 3402 3396 if (nvmet_xri_cnt > phba->sli4_hba.nvmet_xri_cnt) { 3403 3397 /* els xri-sgl expanded */ ··· 4540 4546 pmb->vport = phba->pport; 4541 4547 4542 4548 if (phba->sli4_hba.link_state.status != LPFC_FC_LA_TYPE_LINK_UP) { 4549 + phba->link_flag &= ~(LS_MDS_LINK_DOWN | LS_MDS_LOOPBACK); 4550 + 4551 + switch (phba->sli4_hba.link_state.status) { 4552 + case LPFC_FC_LA_TYPE_MDS_LINK_DOWN: 4553 + phba->link_flag |= LS_MDS_LINK_DOWN; 4554 + break; 4555 + case LPFC_FC_LA_TYPE_MDS_LOOPBACK: 4556 + phba->link_flag |= LS_MDS_LOOPBACK; 4557 + break; 4558 + default: 4559 + break; 4560 + } 4561 + 4543 4562 /* Parse and translate status field */ 4544 4563 mb = &pmb->u.mb; 4545 4564 mb->mbxStatus = lpfc_sli4_parse_latt_fault(phba, ··· 5837 5830 spin_lock_init(&phba->sli4_hba.abts_nvme_buf_list_lock); 5838 5831 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvme_buf_list); 5839 5832 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list); 5833 + INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_ctx_list); 5834 + INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_io_wait_list); 5835 + 5840 5836 /* Fast-path XRI aborted CQ Event work queue list */ 5841 5837 INIT_LIST_HEAD(&phba->sli4_hba.sp_nvme_xri_aborted_work_queue); 5842 5838 } ··· 5847 5837 /* This abort list used by worker thread */ 5848 5838 spin_lock_init(&phba->sli4_hba.sgl_list_lock); 5849 5839 spin_lock_init(&phba->sli4_hba.nvmet_io_lock); 5840 + spin_lock_init(&phba->sli4_hba.nvmet_io_wait_lock); 5850 5841 5851 5842 /* 5852 5843 * Initialize driver internal slow-path work queues ··· 5962 5951 for (i = 0; i < lpfc_enable_nvmet_cnt; i++) { 5963 5952 if (wwn == lpfc_enable_nvmet[i]) { 5964 5953 #if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) 5954 + if (lpfc_nvmet_mem_alloc(phba)) 5955 + break; 5956 + 5957 + phba->nvmet_support = 1; /* a match */ 5958 + 5965 5959 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 5966 5960 "6017 NVME Target %016llx\n", 5967 5961 wwn); 5968 - phba->nvmet_support = 1; /* a match */ 5969 5962 #else 5970 5963 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 5971 5964 "6021 Can't enable NVME Target." 5972 5965 " NVME_TARGET_FC infrastructure" 5973 5966 " is not in kernel\n"); 5974 5967 #endif 5968 + break; 5975 5969 } 5976 5970 } 5977 5971 } ··· 6285 6269 * 6286 6270 * This routine is invoked to free the driver's IOCB list and memory. **/ 6288 - static void 6272 + void 6289 6273 lpfc_free_iocb_list(struct lpfc_hba *phba) 6290 6274 { 6291 6275 struct lpfc_iocbq *iocbq_entry = NULL, *iocbq_next = NULL; ··· 6313 6297 * 0 - successful 6314 6298 * other values - error 6315 6299 **/ 6316 - static int 6300 + int 6317 6301 lpfc_init_iocb_list(struct lpfc_hba *phba, int iocb_count) 6318 6302 { 6319 6303 struct lpfc_iocbq *iocbq_entry = NULL; ··· 6541 6525 uint16_t rpi_limit, curr_rpi_range; 6542 6526 struct lpfc_dmabuf *dmabuf; 6543 6527 struct lpfc_rpi_hdr *rpi_hdr; 6544 - uint32_t rpi_count; 6545 6528 6546 6529 /* 6547 6530 * If the SLI4 port supports extents, posting the rpi header isn't ··· 6553 6538 return NULL; 6554 6539 6555 6540 /* The limit on the logical index is just the max_rpi count. */ 6556 - rpi_limit = phba->sli4_hba.max_cfg_param.rpi_base + 6557 - phba->sli4_hba.max_cfg_param.max_rpi - 1; 6541 + rpi_limit = phba->sli4_hba.max_cfg_param.max_rpi; 6558 6542 6559 6543 spin_lock_irq(&phba->hbalock); 6560 6544 /* ··· 6564 6550 curr_rpi_range = phba->sli4_hba.next_rpi; 6565 6551 spin_unlock_irq(&phba->hbalock); 6566 6552 6567 - /* 6568 - * The port has a limited number of rpis. The increment here 6569 - * is LPFC_RPI_HDR_COUNT - 1 to account for the starting value 6570 - * and to allow the full max_rpi range per port. 6571 - */ 6572 - if ((curr_rpi_range + (LPFC_RPI_HDR_COUNT - 1)) > rpi_limit) 6573 - rpi_count = rpi_limit - curr_rpi_range; 6574 - else 6575 - rpi_count = LPFC_RPI_HDR_COUNT; 6576 - 6577 - if (!rpi_count) 6553 + /* Reached full RPI range */ 6554 + if (curr_rpi_range == rpi_limit) 6578 6555 return NULL; 6556 + 6579 6557 /* 6580 6558 * First allocate the protocol header region for the port. The 6581 6559 * port expects a 4KB DMA-mapped memory region that is 4K aligned. ··· 6601 6595 6602 6596 /* The rpi_hdr stores the logical index only. */ 6603 6597 rpi_hdr->start_rpi = curr_rpi_range; 6598 + rpi_hdr->next_rpi = phba->sli4_hba.next_rpi + LPFC_RPI_HDR_COUNT; 6604 6599 list_add_tail(&rpi_hdr->list, &phba->sli4_hba.lpfc_rpi_hdr_list); 6605 6600 6606 - /* 6607 - * The next_rpi stores the next logical module-64 rpi value used 6608 - * to post physical rpis in subsequent rpi postings. 6609 - */ 6610 - phba->sli4_hba.next_rpi += rpi_count; 6611 6601 spin_unlock_irq(&phba->hbalock); 6612 6602 return rpi_hdr; 6613 6603 ··· 8174 8172 /* Create NVMET Receive Queue for header */ 8175 8173 qdesc = lpfc_sli4_queue_alloc(phba, 8176 8174 phba->sli4_hba.rq_esize, 8177 - phba->sli4_hba.rq_ecount); 8175 + LPFC_NVMET_RQE_DEF_COUNT); 8178 8176 if (!qdesc) { 8179 8177 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 8180 8178 "3146 Failed allocate " ··· 8196 8194 /* Create NVMET Receive Queue for data */ 8197 8195 qdesc = lpfc_sli4_queue_alloc(phba, 8198 8196 phba->sli4_hba.rq_esize, 8199 - phba->sli4_hba.rq_ecount); 8197 + LPFC_NVMET_RQE_DEF_COUNT); 8200 8198 if (!qdesc) { 8201 8199 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 8202 8200 "3156 Failed allocate " ··· 8325 8323 8326 8324 /* Everything on this list has been freed */ 8327 8325 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_wq_list); 8328 - } 8329 - 8330 - int 8331 - lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq, 8332 - struct lpfc_queue *drq, int count) 8333 - { 8334 - int rc, i; 8335 - struct lpfc_rqe hrqe; 8336 - struct lpfc_rqe drqe; 8337 - struct lpfc_rqb *rqbp; 8338 - struct rqb_dmabuf *rqb_buffer; 8339 - LIST_HEAD(rqb_buf_list); 8340 - 8341 - rqbp = hrq->rqbp; 8342 - for (i = 0; i < count; i++) { 8343 - rqb_buffer = (rqbp->rqb_alloc_buffer)(phba); 8344 - if (!rqb_buffer) 8345 - break; 8346 - rqb_buffer->hrq = hrq; 8347 - rqb_buffer->drq = drq; 8348 - list_add_tail(&rqb_buffer->hbuf.list, &rqb_buf_list); 8349 - } 8350 - while (!list_empty(&rqb_buf_list)) { 8351 - list_remove_head(&rqb_buf_list, rqb_buffer, struct rqb_dmabuf, 8352 - hbuf.list); 8353 - 8354 - hrqe.address_lo = putPaddrLow(rqb_buffer->hbuf.phys); 8355 - hrqe.address_hi = putPaddrHigh(rqb_buffer->hbuf.phys); 8356 - drqe.address_lo = putPaddrLow(rqb_buffer->dbuf.phys); 8357 - drqe.address_hi = putPaddrHigh(rqb_buffer->dbuf.phys); 8358 - rc = lpfc_sli4_rq_put(hrq, drq, &hrqe, &drqe); 8359 - if (rc < 0) { 8360 - (rqbp->rqb_free_buffer)(phba, rqb_buffer); 8361 - } else { 8362 - list_add_tail(&rqb_buffer->hbuf.list, 8363 - &rqbp->rqb_buffer_list); 8364 - rqbp->buffer_count++; 8365 - } 8366 - } 8367 - return 1; 8368 8326 } 8369 8327 8370 8328 int ··· 8745 8783 rc = -ENOMEM; 8746 8784 goto out_destroy; 8747 8785 } 8748 - 8749 - lpfc_rq_adjust_repost(phba, phba->sli4_hba.hdr_rq, LPFC_ELS_HBQ); 8750 - lpfc_rq_adjust_repost(phba, phba->sli4_hba.dat_rq, LPFC_ELS_HBQ); 8751 8786 8752 8787 rc = lpfc_rq_create(phba, phba->sli4_hba.hdr_rq, phba->sli4_hba.dat_rq, 8753 8788 phba->sli4_hba.els_cq, LPFC_USOL); ··· 11069 11110 struct lpfc_hba *phba; 11070 11111 struct lpfc_vport *vport = NULL; 11071 11112 struct Scsi_Host *shost = NULL; 11072 - int error, cnt; 11113 + int error; 11073 11114 uint32_t cfg_mode, intr_mode; 11074 11115 11075 11116 /* Allocate memory for HBA structure */ ··· 11103 11144 goto out_unset_pci_mem_s4; 11104 11145 } 11105 11146 11106 - cnt = phba->cfg_iocb_cnt * 1024; 11107 - if (phba->nvmet_support) 11108 - cnt += phba->cfg_nvmet_mrq_post * phba->cfg_nvmet_mrq; 11109 - 11110 - /* Initialize and populate the iocb list per host */ 11111 - lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 11112 - "2821 initialize iocb list %d total %d\n", 11113 - phba->cfg_iocb_cnt, cnt); 11114 - error = lpfc_init_iocb_list(phba, cnt); 11115 - 11116 - if (error) { 11117 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 11118 - "1413 Failed to initialize iocb list.\n"); 11119 - goto out_unset_driver_resource_s4; 11120 - } 11121 - 11122 11147 INIT_LIST_HEAD(&phba->active_rrq_list); 11123 11148 INIT_LIST_HEAD(&phba->fcf.fcf_pri_list); 11124 11149 ··· 11111 11168 if (error) { 11112 11169 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 11113 11170 "1414 Failed to set up driver resource.\n"); 11114 - goto out_free_iocb_list; 11171 + goto out_unset_driver_resource_s4; 11115 11172 } 11116 11173 11117 11174 /* Get the default values for Model Name and Description */ ··· 11211 11268 lpfc_destroy_shost(phba); 11212 11269 out_unset_driver_resource: 11213 11270 lpfc_unset_driver_resource_phase2(phba); 11214 - out_free_iocb_list: 11215 - lpfc_free_iocb_list(phba); 11216 11271 out_unset_driver_resource_s4: 11217 11272 lpfc_sli4_driver_resource_unset(phba); 11218 11273 out_unset_pci_mem_s4:
+28 -72
drivers/scsi/lpfc/lpfc_mem.c
··· 214 214 return -ENOMEM; 215 215 } 216 216 217 + int 218 + lpfc_nvmet_mem_alloc(struct lpfc_hba *phba) 219 + { 220 + phba->lpfc_nvmet_drb_pool = 221 + pci_pool_create("lpfc_nvmet_drb_pool", 222 + phba->pcidev, LPFC_NVMET_DATA_BUF_SIZE, 223 + SGL_ALIGN_SZ, 0); 224 + if (!phba->lpfc_nvmet_drb_pool) { 225 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 226 + "6024 Can't enable NVME Target - no memory\n"); 227 + return -ENOMEM; 228 + } 229 + return 0; 230 + } 231 + 217 232 /** 218 233 * lpfc_mem_free - Frees memory allocated by lpfc_mem_alloc 219 234 * @phba: HBA to free memory for ··· 247 232 248 233 /* Free HBQ pools */ 249 234 lpfc_sli_hbqbuf_free_all(phba); 235 + if (phba->lpfc_nvmet_drb_pool) 236 + pci_pool_destroy(phba->lpfc_nvmet_drb_pool); 237 + phba->lpfc_nvmet_drb_pool = NULL; 250 238 if (phba->lpfc_drb_pool) 251 239 pci_pool_destroy(phba->lpfc_drb_pool); 252 240 phba->lpfc_drb_pool = NULL; ··· 629 611 lpfc_sli4_nvmet_alloc(struct lpfc_hba *phba) 630 612 { 631 613 struct rqb_dmabuf *dma_buf; 632 - struct lpfc_iocbq *nvmewqe; 633 - union lpfc_wqe128 *wqe; 634 614 635 615 dma_buf = kzalloc(sizeof(struct rqb_dmabuf), GFP_KERNEL); 636 616 if (!dma_buf) ··· 640 624 kfree(dma_buf); 641 625 return NULL; 642 626 } 643 - dma_buf->dbuf.virt = pci_pool_alloc(phba->lpfc_drb_pool, GFP_KERNEL, 644 - &dma_buf->dbuf.phys); 627 + dma_buf->dbuf.virt = pci_pool_alloc(phba->lpfc_nvmet_drb_pool, 628 + GFP_KERNEL, &dma_buf->dbuf.phys); 645 629 if (!dma_buf->dbuf.virt) { 646 630 pci_pool_free(phba->lpfc_hrb_pool, dma_buf->hbuf.virt, 647 631 dma_buf->hbuf.phys); 648 632 kfree(dma_buf); 649 633 return NULL; 650 634 } 651 - dma_buf->total_size = LPFC_DATA_BUF_SIZE; 652 - 653 - dma_buf->context = kzalloc(sizeof(struct lpfc_nvmet_rcv_ctx), 654 - GFP_KERNEL); 655 - if (!dma_buf->context) { 656 - pci_pool_free(phba->lpfc_drb_pool, dma_buf->dbuf.virt, 657 - dma_buf->dbuf.phys); 658 - pci_pool_free(phba->lpfc_hrb_pool, dma_buf->hbuf.virt, 659 - dma_buf->hbuf.phys); 660 - kfree(dma_buf); 661 
- return NULL; 662 - } 663 - 664 - dma_buf->iocbq = lpfc_sli_get_iocbq(phba); 665 - if (!dma_buf->iocbq) { 666 - kfree(dma_buf->context); 667 - pci_pool_free(phba->lpfc_drb_pool, dma_buf->dbuf.virt, 668 - dma_buf->dbuf.phys); 669 - pci_pool_free(phba->lpfc_hrb_pool, dma_buf->hbuf.virt, 670 - dma_buf->hbuf.phys); 671 - kfree(dma_buf); 672 - lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 673 - "2621 Ran out of nvmet iocb/WQEs\n"); 674 - return NULL; 675 - } 676 - dma_buf->iocbq->iocb_flag = LPFC_IO_NVMET; 677 - nvmewqe = dma_buf->iocbq; 678 - wqe = (union lpfc_wqe128 *)&nvmewqe->wqe; 679 - /* Initialize WQE */ 680 - memset(wqe, 0, sizeof(union lpfc_wqe)); 681 - /* Word 7 */ 682 - bf_set(wqe_ct, &wqe->generic.wqe_com, SLI4_CT_RPI); 683 - bf_set(wqe_class, &wqe->generic.wqe_com, CLASS3); 684 - bf_set(wqe_pu, &wqe->generic.wqe_com, 1); 685 - /* Word 10 */ 686 - bf_set(wqe_nvme, &wqe->fcp_tsend.wqe_com, 1); 687 - bf_set(wqe_ebde_cnt, &wqe->generic.wqe_com, 0); 688 - bf_set(wqe_qosd, &wqe->generic.wqe_com, 0); 689 - 690 - dma_buf->iocbq->context1 = NULL; 691 - spin_lock(&phba->sli4_hba.sgl_list_lock); 692 - dma_buf->sglq = __lpfc_sli_get_nvmet_sglq(phba, dma_buf->iocbq); 693 - spin_unlock(&phba->sli4_hba.sgl_list_lock); 694 - if (!dma_buf->sglq) { 695 - lpfc_sli_release_iocbq(phba, dma_buf->iocbq); 696 - kfree(dma_buf->context); 697 - pci_pool_free(phba->lpfc_drb_pool, dma_buf->dbuf.virt, 698 - dma_buf->dbuf.phys); 699 - pci_pool_free(phba->lpfc_hrb_pool, dma_buf->hbuf.virt, 700 - dma_buf->hbuf.phys); 701 - kfree(dma_buf); 702 - lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 703 - "6132 Ran out of nvmet XRIs\n"); 704 - return NULL; 705 - } 635 + dma_buf->total_size = LPFC_NVMET_DATA_BUF_SIZE; 706 636 return dma_buf; 707 637 } 708 638 ··· 667 705 void 668 706 lpfc_sli4_nvmet_free(struct lpfc_hba *phba, struct rqb_dmabuf *dmab) 669 707 { 670 - unsigned long flags; 671 - 672 - __lpfc_clear_active_sglq(phba, dmab->sglq->sli4_lxritag); 673 - dmab->sglq->state = SGL_FREED; 674 - 
dmab->sglq->ndlp = NULL; 675 - 676 - spin_lock_irqsave(&phba->sli4_hba.sgl_list_lock, flags); 677 - list_add_tail(&dmab->sglq->list, &phba->sli4_hba.lpfc_nvmet_sgl_list); 678 - spin_unlock_irqrestore(&phba->sli4_hba.sgl_list_lock, flags); 679 - 680 - lpfc_sli_release_iocbq(phba, dmab->iocbq); 681 - kfree(dmab->context); 682 708 pci_pool_free(phba->lpfc_hrb_pool, dmab->hbuf.virt, dmab->hbuf.phys); 683 - pci_pool_free(phba->lpfc_drb_pool, dmab->dbuf.virt, dmab->dbuf.phys); 709 + pci_pool_free(phba->lpfc_nvmet_drb_pool, 710 + dmab->dbuf.virt, dmab->dbuf.phys); 684 711 kfree(dmab); 685 712 } 686 713 ··· 754 803 rc = lpfc_sli4_rq_put(rqb_entry->hrq, rqb_entry->drq, &hrqe, &drqe); 755 804 if (rc < 0) { 756 805 (rqbp->rqb_free_buffer)(phba, rqb_entry); 806 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 807 + "6409 Cannot post to RQ %d: %x %x\n", 808 + rqb_entry->hrq->queue_id, 809 + rqb_entry->hrq->host_index, 810 + rqb_entry->hrq->hba_index); 757 811 } else { 758 812 list_add_tail(&rqb_entry->hbuf.list, &rqbp->rqb_buffer_list); 759 813 rqbp->buffer_count++;
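The simplified lpfc_sli4_nvmet_alloc() above allocates a header buffer and then a data buffer, releasing whatever was already acquired if a later step fails. The same staged-unwind pattern can be sketched with plain malloc/free under hypothetical names (the real driver uses pci_pool_alloc from dedicated DMA pools and nested if blocks rather than gotos):

```c
#include <stdlib.h>

/* Illustrative stand-in for the header+data buffer pair that
 * lpfc_sli4_nvmet_alloc() builds; names here are not lpfc's. */
struct dma_pair {
	void *hbuf;	/* header buffer (pci_pool allocation in the driver) */
	void *dbuf;	/* data buffer */
};

static struct dma_pair *dma_pair_alloc(size_t hsz, size_t dsz)
{
	struct dma_pair *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	p->hbuf = malloc(hsz);
	if (!p->hbuf)
		goto free_pair;
	p->dbuf = malloc(dsz);
	if (!p->dbuf)
		goto free_hbuf;	/* second stage failed: unwind the first */
	return p;

free_hbuf:
	free(p->hbuf);
free_pair:
	free(p);
	return NULL;
}

static void dma_pair_free(struct dma_pair *p)
{
	/* Free in reverse order of allocation, as lpfc_sli4_nvmet_free does. */
	free(p->dbuf);
	free(p->hbuf);
	free(p);
}
```

The key property, which the patch preserves when it switches the data buffer to the new lpfc_nvmet_drb_pool, is that every failure path frees exactly the resources acquired before it.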
+6
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 1944 1944 1945 1945 /* Target driver cannot solicit NVME FB. */ 1946 1946 if (bf_get_be32(prli_tgt, nvpr)) { 1947 + /* Complete the nvme target roles. The transport 1948 + * needs to know if the rport is capable of 1949 + * discovery in addition to its role. 1950 + */ 1947 1951 ndlp->nlp_type |= NLP_NVME_TARGET; 1952 + if (bf_get_be32(prli_disc, nvpr)) 1953 + ndlp->nlp_type |= NLP_NVME_DISCOVERY; 1948 1954 if ((bf_get_be32(prli_fba, nvpr) == 1) && 1949 1955 (bf_get_be32(prli_fb_sz, nvpr) > 0) && 1950 1956 (phba->cfg_nvme_enable_fb) &&
+335 -81
drivers/scsi/lpfc/lpfc_nvmet.c
··· 142 142 } 143 143 144 144 /** 145 - * lpfc_nvmet_rq_post - Repost a NVMET RQ DMA buffer and clean up context 145 + * lpfc_nvmet_ctxbuf_post - Repost a NVMET RQ DMA buffer and clean up context 146 146 * @phba: HBA buffer is associated with 147 147 * @ctxp: context to clean up 148 148 * @mp: Buffer to free ··· 155 155 * Returns: None 156 156 **/ 157 157 void 158 - lpfc_nvmet_rq_post(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp, 159 - struct lpfc_dmabuf *mp) 158 + lpfc_nvmet_ctxbuf_post(struct lpfc_hba *phba, struct lpfc_nvmet_ctxbuf *ctx_buf) 160 159 { 161 - if (ctxp) { 162 - if (ctxp->flag) 163 - lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 164 - "6314 rq_post ctx xri x%x flag x%x\n", 165 - ctxp->oxid, ctxp->flag); 160 + #if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) 161 + struct lpfc_nvmet_rcv_ctx *ctxp = ctx_buf->context; 162 + struct lpfc_nvmet_tgtport *tgtp; 163 + struct fc_frame_header *fc_hdr; 164 + struct rqb_dmabuf *nvmebuf; 165 + struct lpfc_dmabuf *hbufp; 166 + uint32_t *payload; 167 + uint32_t size, oxid, sid, rc; 168 + unsigned long iflag; 166 169 167 - if (ctxp->txrdy) { 168 - pci_pool_free(phba->txrdy_payload_pool, ctxp->txrdy, 169 - ctxp->txrdy_phys); 170 - ctxp->txrdy = NULL; 171 - ctxp->txrdy_phys = 0; 172 - } 173 - ctxp->state = LPFC_NVMET_STE_FREE; 170 + if (ctxp->txrdy) { 171 + pci_pool_free(phba->txrdy_payload_pool, ctxp->txrdy, 172 + ctxp->txrdy_phys); 173 + ctxp->txrdy = NULL; 174 + ctxp->txrdy_phys = 0; 174 175 } 175 - lpfc_rq_buf_free(phba, mp); 176 + ctxp->state = LPFC_NVMET_STE_FREE; 177 + 178 + spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag); 179 + if (phba->sli4_hba.nvmet_io_wait_cnt) { 180 + list_remove_head(&phba->sli4_hba.lpfc_nvmet_io_wait_list, 181 + nvmebuf, struct rqb_dmabuf, 182 + hbuf.list); 183 + hbufp = &nvmebuf->hbuf; 184 + phba->sli4_hba.nvmet_io_wait_cnt--; 185 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock, 186 + iflag); 187 + 188 + fc_hdr = (struct fc_frame_header 
*)(nvmebuf->hbuf.virt); 189 + oxid = be16_to_cpu(fc_hdr->fh_ox_id); 190 + tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 191 + payload = (uint32_t *)(nvmebuf->dbuf.virt); 192 + size = nvmebuf->bytes_recv; 193 + sid = sli4_sid_from_fc_hdr(fc_hdr); 194 + 195 + ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context; 196 + memset(ctxp, 0, sizeof(ctxp->ctx)); 197 + ctxp->wqeq = NULL; 198 + ctxp->txrdy = NULL; 199 + ctxp->offset = 0; 200 + ctxp->phba = phba; 201 + ctxp->size = size; 202 + ctxp->oxid = oxid; 203 + ctxp->sid = sid; 204 + ctxp->state = LPFC_NVMET_STE_RCV; 205 + ctxp->entry_cnt = 1; 206 + ctxp->flag = 0; 207 + ctxp->ctxbuf = ctx_buf; 208 + spin_lock_init(&ctxp->ctxlock); 209 + 210 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 211 + if (phba->ktime_on) { 212 + ctxp->ts_cmd_nvme = ktime_get_ns(); 213 + ctxp->ts_isr_cmd = ctxp->ts_cmd_nvme; 214 + ctxp->ts_nvme_data = 0; 215 + ctxp->ts_data_wqput = 0; 216 + ctxp->ts_isr_data = 0; 217 + ctxp->ts_data_nvme = 0; 218 + ctxp->ts_nvme_status = 0; 219 + ctxp->ts_status_wqput = 0; 220 + ctxp->ts_isr_status = 0; 221 + ctxp->ts_status_nvme = 0; 222 + } 223 + #endif 224 + atomic_inc(&tgtp->rcv_fcp_cmd_in); 225 + /* 226 + * The calling sequence should be: 227 + * nvmet_fc_rcv_fcp_req -> lpfc_nvmet_xmt_fcp_op/cmp -> req->done 228 + * lpfc_nvmet_xmt_fcp_op_cmp should free the allocated ctxp. 229 + * When we return from nvmet_fc_rcv_fcp_req, all relevant info in 230 + * the NVME command / FC header is stored. 231 + * A buffer has already been reposted for this IO, so just free 232 + * the nvmebuf. 
233 + */ 234 + rc = nvmet_fc_rcv_fcp_req(phba->targetport, &ctxp->ctx.fcp_req, 235 + payload, size); 236 + 237 + /* Process FCP command */ 238 + if (rc == 0) { 239 + atomic_inc(&tgtp->rcv_fcp_cmd_out); 240 + nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 241 + return; 242 + } 243 + 244 + atomic_inc(&tgtp->rcv_fcp_cmd_drop); 245 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 246 + "2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n", 247 + ctxp->oxid, rc, 248 + atomic_read(&tgtp->rcv_fcp_cmd_in), 249 + atomic_read(&tgtp->rcv_fcp_cmd_out), 250 + atomic_read(&tgtp->xmt_fcp_release)); 251 + 252 + lpfc_nvmet_defer_release(phba, ctxp); 253 + lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, sid, oxid); 254 + nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 255 + return; 256 + } 257 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock, iflag); 258 + 259 + spin_lock_irqsave(&phba->sli4_hba.nvmet_io_lock, iflag); 260 + list_add_tail(&ctx_buf->list, 261 + &phba->sli4_hba.lpfc_nvmet_ctx_list); 262 + phba->sli4_hba.nvmet_ctx_cnt++; 263 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_lock, iflag); 264 + #endif 176 265 } 177 266 178 267 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS ··· 591 502 "6150 LS Drop IO x%x: Prep\n", 592 503 ctxp->oxid); 593 504 lpfc_in_buf_free(phba, &nvmebuf->dbuf); 505 + atomic_inc(&nvmep->xmt_ls_abort); 594 506 lpfc_nvmet_unsol_ls_issue_abort(phba, ctxp, 595 507 ctxp->sid, ctxp->oxid); 596 508 return -ENOMEM; ··· 635 545 lpfc_nlp_put(nvmewqeq->context1); 636 546 637 547 lpfc_in_buf_free(phba, &nvmebuf->dbuf); 548 + atomic_inc(&nvmep->xmt_ls_abort); 638 549 lpfc_nvmet_unsol_ls_issue_abort(phba, ctxp, ctxp->sid, ctxp->oxid); 639 550 return -ENXIO; 640 551 } ··· 703 612 lpfc_nvmeio_data(phba, "NVMET FCP CMND: xri x%x op x%x len x%x\n", 704 613 ctxp->oxid, rsp->op, rsp->rsplen); 705 614 615 + ctxp->flag |= LPFC_NVMET_IO_INP; 706 616 rc = lpfc_sli4_issue_wqe(phba, LPFC_FCP_RING, nvmewqeq); 707 617 if (rc == WQE_SUCCESS) { 708 - ctxp->flag |= LPFC_NVMET_IO_INP; 
709 618 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 710 619 if (!phba->ktime_on) 711 620 return 0; ··· 783 692 lpfc_nvmet_xmt_fcp_release(struct nvmet_fc_target_port *tgtport, 784 693 struct nvmefc_tgt_fcp_req *rsp) 785 694 { 695 + struct lpfc_nvmet_tgtport *lpfc_nvmep = tgtport->private; 786 696 struct lpfc_nvmet_rcv_ctx *ctxp = 787 697 container_of(rsp, struct lpfc_nvmet_rcv_ctx, ctx.fcp_req); 788 698 struct lpfc_hba *phba = ctxp->phba; ··· 802 710 lpfc_nvmeio_data(phba, "NVMET FCP FREE: xri x%x ste %d\n", ctxp->oxid, 803 711 ctxp->state, 0); 804 712 713 + atomic_inc(&lpfc_nvmep->xmt_fcp_release); 714 + 805 715 if (aborting) 806 716 return; 807 717 808 - lpfc_nvmet_rq_post(phba, ctxp, &ctxp->rqb_buffer->hbuf); 718 + lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 809 719 } 810 720 811 721 static struct nvmet_fc_target_template lpfc_tgttemplate = { ··· 828 734 .target_priv_sz = sizeof(struct lpfc_nvmet_tgtport), 829 735 }; 830 736 737 + void 738 + lpfc_nvmet_cleanup_io_context(struct lpfc_hba *phba) 739 + { 740 + struct lpfc_nvmet_ctxbuf *ctx_buf, *next_ctx_buf; 741 + unsigned long flags; 742 + 743 + list_for_each_entry_safe( 744 + ctx_buf, next_ctx_buf, 745 + &phba->sli4_hba.lpfc_nvmet_ctx_list, list) { 746 + spin_lock_irqsave( 747 + &phba->sli4_hba.abts_nvme_buf_list_lock, flags); 748 + list_del_init(&ctx_buf->list); 749 + spin_unlock_irqrestore( 750 + &phba->sli4_hba.abts_nvme_buf_list_lock, flags); 751 + __lpfc_clear_active_sglq(phba, 752 + ctx_buf->sglq->sli4_lxritag); 753 + ctx_buf->sglq->state = SGL_FREED; 754 + ctx_buf->sglq->ndlp = NULL; 755 + 756 + spin_lock_irqsave(&phba->sli4_hba.sgl_list_lock, flags); 757 + list_add_tail(&ctx_buf->sglq->list, 758 + &phba->sli4_hba.lpfc_nvmet_sgl_list); 759 + spin_unlock_irqrestore(&phba->sli4_hba.sgl_list_lock, 760 + flags); 761 + 762 + lpfc_sli_release_iocbq(phba, ctx_buf->iocbq); 763 + kfree(ctx_buf->context); 764 + } 765 + } 766 + 767 + int 768 + lpfc_nvmet_setup_io_context(struct lpfc_hba *phba) 769 + { 770 + struct 
lpfc_nvmet_ctxbuf *ctx_buf; 771 + struct lpfc_iocbq *nvmewqe; 772 + union lpfc_wqe128 *wqe; 773 + int i; 774 + 775 + lpfc_printf_log(phba, KERN_INFO, LOG_NVME, 776 + "6403 Allocate NVMET resources for %d XRIs\n", 777 + phba->sli4_hba.nvmet_xri_cnt); 778 + 779 + /* For all nvmet xris, allocate resources needed to process a 780 + * received command on a per xri basis. 781 + */ 782 + for (i = 0; i < phba->sli4_hba.nvmet_xri_cnt; i++) { 783 + ctx_buf = kzalloc(sizeof(*ctx_buf), GFP_KERNEL); 784 + if (!ctx_buf) { 785 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 786 + "6404 Ran out of memory for NVMET\n"); 787 + return -ENOMEM; 788 + } 789 + 790 + ctx_buf->context = kzalloc(sizeof(*ctx_buf->context), 791 + GFP_KERNEL); 792 + if (!ctx_buf->context) { 793 + kfree(ctx_buf); 794 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 795 + "6405 Ran out of NVMET " 796 + "context memory\n"); 797 + return -ENOMEM; 798 + } 799 + ctx_buf->context->ctxbuf = ctx_buf; 800 + 801 + ctx_buf->iocbq = lpfc_sli_get_iocbq(phba); 802 + if (!ctx_buf->iocbq) { 803 + kfree(ctx_buf->context); 804 + kfree(ctx_buf); 805 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 806 + "6406 Ran out of NVMET iocb/WQEs\n"); 807 + return -ENOMEM; 808 + } 809 + ctx_buf->iocbq->iocb_flag = LPFC_IO_NVMET; 810 + nvmewqe = ctx_buf->iocbq; 811 + wqe = (union lpfc_wqe128 *)&nvmewqe->wqe; 812 + /* Initialize WQE */ 813 + memset(wqe, 0, sizeof(union lpfc_wqe)); 814 + /* Word 7 */ 815 + bf_set(wqe_ct, &wqe->generic.wqe_com, SLI4_CT_RPI); 816 + bf_set(wqe_class, &wqe->generic.wqe_com, CLASS3); 817 + bf_set(wqe_pu, &wqe->generic.wqe_com, 1); 818 + /* Word 10 */ 819 + bf_set(wqe_nvme, &wqe->fcp_tsend.wqe_com, 1); 820 + bf_set(wqe_ebde_cnt, &wqe->generic.wqe_com, 0); 821 + bf_set(wqe_qosd, &wqe->generic.wqe_com, 0); 822 + 823 + ctx_buf->iocbq->context1 = NULL; 824 + spin_lock(&phba->sli4_hba.sgl_list_lock); 825 + ctx_buf->sglq = __lpfc_sli_get_nvmet_sglq(phba, ctx_buf->iocbq); 826 + spin_unlock(&phba->sli4_hba.sgl_list_lock); 827 + if 
(!ctx_buf->sglq) { 828 + lpfc_sli_release_iocbq(phba, ctx_buf->iocbq); 829 + kfree(ctx_buf->context); 830 + kfree(ctx_buf); 831 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 832 + "6407 Ran out of NVMET XRIs\n"); 833 + return -ENOMEM; 834 + } 835 + spin_lock(&phba->sli4_hba.nvmet_io_lock); 836 + list_add_tail(&ctx_buf->list, 837 + &phba->sli4_hba.lpfc_nvmet_ctx_list); 838 + spin_unlock(&phba->sli4_hba.nvmet_io_lock); 839 + } 840 + phba->sli4_hba.nvmet_ctx_cnt = phba->sli4_hba.nvmet_xri_cnt; 841 + return 0; 842 + } 843 + 831 844 int 832 845 lpfc_nvmet_create_targetport(struct lpfc_hba *phba) 833 846 { 834 847 struct lpfc_vport *vport = phba->pport; 835 848 struct lpfc_nvmet_tgtport *tgtp; 836 849 struct nvmet_fc_port_info pinfo; 837 - int error = 0; 850 + int error; 838 851 839 852 if (phba->targetport) 840 853 return 0; 854 + 855 + error = lpfc_nvmet_setup_io_context(phba); 856 + if (error) 857 + return error; 841 858 842 859 memset(&pinfo, 0, sizeof(struct nvmet_fc_port_info)); 843 860 pinfo.node_name = wwn_to_u64(vport->fc_nodename.u.wwn); ··· 977 772 &phba->pcidev->dev, 978 773 &phba->targetport); 979 774 #else 980 - error = -ENOMEM; 775 + error = -ENOENT; 981 776 #endif 982 777 if (error) { 983 778 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_DISC, 984 779 "6025 Cannot register NVME targetport " 985 780 "x%x\n", error); 986 781 phba->targetport = NULL; 782 + 783 + lpfc_nvmet_cleanup_io_context(phba); 784 + 987 785 } else { 988 786 tgtp = (struct lpfc_nvmet_tgtport *) 989 787 phba->targetport->private; ··· 1003 795 atomic_set(&tgtp->rcv_ls_req_out, 0); 1004 796 atomic_set(&tgtp->rcv_ls_req_drop, 0); 1005 797 atomic_set(&tgtp->xmt_ls_abort, 0); 798 + atomic_set(&tgtp->xmt_ls_abort_cmpl, 0); 1006 799 atomic_set(&tgtp->xmt_ls_rsp, 0); 1007 800 atomic_set(&tgtp->xmt_ls_drop, 0); 1008 801 atomic_set(&tgtp->xmt_ls_rsp_error, 0); ··· 1011 802 atomic_set(&tgtp->rcv_fcp_cmd_in, 0); 1012 803 atomic_set(&tgtp->rcv_fcp_cmd_out, 0); 1013 804 atomic_set(&tgtp->rcv_fcp_cmd_drop, 0); 
1014 - atomic_set(&tgtp->xmt_fcp_abort, 0); 1015 805 atomic_set(&tgtp->xmt_fcp_drop, 0); 1016 806 atomic_set(&tgtp->xmt_fcp_read_rsp, 0); 1017 807 atomic_set(&tgtp->xmt_fcp_read, 0); 1018 808 atomic_set(&tgtp->xmt_fcp_write, 0); 1019 809 atomic_set(&tgtp->xmt_fcp_rsp, 0); 810 + atomic_set(&tgtp->xmt_fcp_release, 0); 1020 811 atomic_set(&tgtp->xmt_fcp_rsp_cmpl, 0); 1021 812 atomic_set(&tgtp->xmt_fcp_rsp_error, 0); 1022 813 atomic_set(&tgtp->xmt_fcp_rsp_drop, 0); 814 + atomic_set(&tgtp->xmt_fcp_abort, 0); 815 + atomic_set(&tgtp->xmt_fcp_abort_cmpl, 0); 816 + atomic_set(&tgtp->xmt_abort_unsol, 0); 817 + atomic_set(&tgtp->xmt_abort_sol, 0); 1023 818 atomic_set(&tgtp->xmt_abort_rsp, 0); 1024 819 atomic_set(&tgtp->xmt_abort_rsp_error, 0); 1025 - atomic_set(&tgtp->xmt_abort_cmpl, 0); 1026 820 } 1027 821 return error; 1028 822 } ··· 1076 864 list_for_each_entry_safe(ctxp, next_ctxp, 1077 865 &phba->sli4_hba.lpfc_abts_nvmet_ctx_list, 1078 866 list) { 1079 - if (ctxp->rqb_buffer->sglq->sli4_xritag != xri) 867 + if (ctxp->ctxbuf->sglq->sli4_xritag != xri) 1080 868 continue; 1081 869 1082 870 /* Check if we already received a free context call ··· 1097 885 (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE || 1098 886 ndlp->nlp_state == NLP_STE_MAPPED_NODE)) { 1099 887 lpfc_set_rrq_active(phba, ndlp, 1100 - ctxp->rqb_buffer->sglq->sli4_lxritag, 888 + ctxp->ctxbuf->sglq->sli4_lxritag, 1101 889 rxid, 1); 1102 890 lpfc_sli4_abts_err_handler(phba, ndlp, axri); 1103 891 } ··· 1106 894 "6318 XB aborted %x flg x%x (%x)\n", 1107 895 ctxp->oxid, ctxp->flag, released); 1108 896 if (released) 1109 - lpfc_nvmet_rq_post(phba, ctxp, 1110 - &ctxp->rqb_buffer->hbuf); 897 + lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 898 + 1111 899 if (rrq_empty) 1112 900 lpfc_worker_wake_up(phba); 1113 901 return; ··· 1135 923 list_for_each_entry_safe(ctxp, next_ctxp, 1136 924 &phba->sli4_hba.lpfc_abts_nvmet_ctx_list, 1137 925 list) { 1138 - if (ctxp->rqb_buffer->sglq->sli4_xritag != xri) 926 + if 
(ctxp->ctxbuf->sglq->sli4_xritag != xri) 1139 927 continue; 1140 928 1141 929 spin_unlock(&phba->sli4_hba.abts_nvme_buf_list_lock); ··· 1187 975 init_completion(&tgtp->tport_unreg_done); 1188 976 nvmet_fc_unregister_targetport(phba->targetport); 1189 977 wait_for_completion_timeout(&tgtp->tport_unreg_done, 5); 978 + lpfc_nvmet_cleanup_io_context(phba); 1190 979 } 1191 980 phba->targetport = NULL; 1192 981 #endif ··· 1223 1010 oxid = 0; 1224 1011 size = 0; 1225 1012 sid = 0; 1013 + ctxp = NULL; 1226 1014 goto dropit; 1227 1015 } 1228 1016 ··· 1318 1104 struct lpfc_nvmet_rcv_ctx *ctxp; 1319 1105 struct lpfc_nvmet_tgtport *tgtp; 1320 1106 struct fc_frame_header *fc_hdr; 1107 + struct lpfc_nvmet_ctxbuf *ctx_buf; 1321 1108 uint32_t *payload; 1322 - uint32_t size, oxid, sid, rc; 1109 + uint32_t size, oxid, sid, rc, qno; 1110 + unsigned long iflag; 1323 1111 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1324 1112 uint32_t id; 1325 1113 #endif 1326 1114 1115 + ctx_buf = NULL; 1327 1116 if (!nvmebuf || !phba->targetport) { 1328 1117 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1329 - "6157 FCP Drop IO\n"); 1118 + "6157 NVMET FCP Drop IO\n"); 1330 1119 oxid = 0; 1331 1120 size = 0; 1332 1121 sid = 0; 1122 + ctxp = NULL; 1333 1123 goto dropit; 1334 1124 } 1335 1125 1126 + spin_lock_irqsave(&phba->sli4_hba.nvmet_io_lock, iflag); 1127 + if (phba->sli4_hba.nvmet_ctx_cnt) { 1128 + list_remove_head(&phba->sli4_hba.lpfc_nvmet_ctx_list, 1129 + ctx_buf, struct lpfc_nvmet_ctxbuf, list); 1130 + phba->sli4_hba.nvmet_ctx_cnt--; 1131 + } 1132 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_lock, iflag); 1133 + 1134 + fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt); 1135 + oxid = be16_to_cpu(fc_hdr->fh_ox_id); 1136 + size = nvmebuf->bytes_recv; 1137 + 1138 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1139 + if (phba->cpucheck_on & LPFC_CHECK_NVMET_RCV) { 1140 + id = smp_processor_id(); 1141 + if (id < LPFC_CHECK_CPU_CNT) 1142 + phba->cpucheck_rcv_io[id]++; 1143 + } 1144 + #endif 1145 + 1146 + 
lpfc_nvmeio_data(phba, "NVMET FCP RCV: xri x%x sz %d CPU %02x\n", 1147 + oxid, size, smp_processor_id()); 1148 + 1149 + if (!ctx_buf) { 1150 + /* Queue this NVME IO to process later */ 1151 + spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag); 1152 + list_add_tail(&nvmebuf->hbuf.list, 1153 + &phba->sli4_hba.lpfc_nvmet_io_wait_list); 1154 + phba->sli4_hba.nvmet_io_wait_cnt++; 1155 + phba->sli4_hba.nvmet_io_wait_total++; 1156 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock, 1157 + iflag); 1158 + 1159 + /* Post a brand new DMA buffer to RQ */ 1160 + qno = nvmebuf->idx; 1161 + lpfc_post_rq_buffer( 1162 + phba, phba->sli4_hba.nvmet_mrq_hdr[qno], 1163 + phba->sli4_hba.nvmet_mrq_data[qno], 1, qno); 1164 + return; 1165 + } 1336 1166 1337 1167 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 1338 1168 payload = (uint32_t *)(nvmebuf->dbuf.virt); 1339 - fc_hdr = (struct fc_frame_header *)(nvmebuf->hbuf.virt); 1340 - size = nvmebuf->bytes_recv; 1341 - oxid = be16_to_cpu(fc_hdr->fh_ox_id); 1342 1169 sid = sli4_sid_from_fc_hdr(fc_hdr); 1343 1170 1344 - ctxp = (struct lpfc_nvmet_rcv_ctx *)nvmebuf->context; 1345 - if (ctxp == NULL) { 1346 - atomic_inc(&tgtp->rcv_fcp_cmd_drop); 1347 - lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1348 - "6158 FCP Drop IO x%x: Alloc\n", 1349 - oxid); 1350 - lpfc_nvmet_rq_post(phba, NULL, &nvmebuf->hbuf); 1351 - /* Cannot send ABTS without context */ 1352 - return; 1353 - } 1171 + ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context; 1354 1172 memset(ctxp, 0, sizeof(ctxp->ctx)); 1355 1173 ctxp->wqeq = NULL; 1356 1174 ctxp->txrdy = NULL; ··· 1392 1146 ctxp->oxid = oxid; 1393 1147 ctxp->sid = sid; 1394 1148 ctxp->state = LPFC_NVMET_STE_RCV; 1395 - ctxp->rqb_buffer = nvmebuf; 1396 1149 ctxp->entry_cnt = 1; 1397 1150 ctxp->flag = 0; 1151 + ctxp->ctxbuf = ctx_buf; 1398 1152 spin_lock_init(&ctxp->ctxlock); 1399 1153 1400 1154 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS ··· 1410 1164 ctxp->ts_isr_status = 0; 1411 1165 
ctxp->ts_status_nvme = 0; 1412 1166 } 1413 - 1414 - if (phba->cpucheck_on & LPFC_CHECK_NVMET_RCV) { 1415 - id = smp_processor_id(); 1416 - if (id < LPFC_CHECK_CPU_CNT) 1417 - phba->cpucheck_rcv_io[id]++; 1418 - } 1419 1167 #endif 1420 - 1421 - lpfc_nvmeio_data(phba, "NVMET FCP RCV: xri x%x sz %d CPU %02x\n", 1422 - oxid, size, smp_processor_id()); 1423 1168 1424 1169 atomic_inc(&tgtp->rcv_fcp_cmd_in); 1425 1170 /* 1426 1171 * The calling sequence should be: 1427 1172 * nvmet_fc_rcv_fcp_req -> lpfc_nvmet_xmt_fcp_op/cmp -> req->done 1428 1173 * lpfc_nvmet_xmt_fcp_op_cmp should free the allocated ctxp. 1174 + * When we return from nvmet_fc_rcv_fcp_req, all relevant info in 1175 + * the NVME command / FC header is stored, so we are free to repost 1176 + * the buffer. 1429 1177 */ 1430 1178 rc = nvmet_fc_rcv_fcp_req(phba->targetport, &ctxp->ctx.fcp_req, 1431 1179 payload, size); ··· 1427 1187 /* Process FCP command */ 1428 1188 if (rc == 0) { 1429 1189 atomic_inc(&tgtp->rcv_fcp_cmd_out); 1190 + lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */ 1430 1191 return; 1431 1192 } 1432 1193 1433 1194 atomic_inc(&tgtp->rcv_fcp_cmd_drop); 1434 1195 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1435 - "6159 FCP Drop IO x%x: err x%x\n", 1436 - ctxp->oxid, rc); 1196 + "6159 FCP Drop IO x%x: err x%x: x%x x%x x%x\n", 1197 + ctxp->oxid, rc, 1198 + atomic_read(&tgtp->rcv_fcp_cmd_in), 1199 + atomic_read(&tgtp->rcv_fcp_cmd_out), 1200 + atomic_read(&tgtp->xmt_fcp_release)); 1437 1201 dropit: 1438 1202 lpfc_nvmeio_data(phba, "NVMET FCP DROP: xri x%x sz %d from %06x\n", 1439 1203 oxid, size, sid); 1440 1204 if (oxid) { 1205 + lpfc_nvmet_defer_release(phba, ctxp); 1441 1206 lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, sid, oxid); 1207 + lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */ 1442 1208 return; 1443 1209 } 1444 1210 1445 - if (nvmebuf) { 1446 - nvmebuf->iocbq->hba_wqidx = 0; 1447 - /* We assume a rcv'ed cmd ALWAYs fits into 1 buffer */ 1448 - lpfc_nvmet_rq_post(phba, NULL, 
&nvmebuf->hbuf); 1449 - } 1211 + if (ctx_buf) 1212 + lpfc_nvmet_ctxbuf_post(phba, ctx_buf); 1213 + 1214 + if (nvmebuf) 1215 + lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */ 1450 1216 #endif 1451 1217 } 1452 1218 ··· 1504 1258 uint64_t isr_timestamp) 1505 1259 { 1506 1260 if (phba->nvmet_support == 0) { 1507 - lpfc_nvmet_rq_post(phba, NULL, &nvmebuf->hbuf); 1261 + lpfc_rq_buf_free(phba, &nvmebuf->hbuf); 1508 1262 return; 1509 1263 } 1510 1264 lpfc_nvmet_unsol_fcp_buffer(phba, pring, nvmebuf, ··· 1705 1459 nvmewqe = ctxp->wqeq; 1706 1460 if (nvmewqe == NULL) { 1707 1461 /* Allocate buffer for command wqe */ 1708 - nvmewqe = ctxp->rqb_buffer->iocbq; 1462 + nvmewqe = ctxp->ctxbuf->iocbq; 1709 1463 if (nvmewqe == NULL) { 1710 1464 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1711 1465 "6110 lpfc_nvmet_prep_fcp_wqe: No " ··· 1732 1486 return NULL; 1733 1487 } 1734 1488 1735 - sgl = (struct sli4_sge *)ctxp->rqb_buffer->sglq->sgl; 1489 + sgl = (struct sli4_sge *)ctxp->ctxbuf->sglq->sgl; 1736 1490 switch (rsp->op) { 1737 1491 case NVMET_FCOP_READDATA: 1738 1492 case NVMET_FCOP_READDATA_RSP: ··· 2057 1811 result = wcqe->parameter; 2058 1812 2059 1813 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2060 - atomic_inc(&tgtp->xmt_abort_cmpl); 1814 + if (ctxp->flag & LPFC_NVMET_ABORT_OP) 1815 + atomic_inc(&tgtp->xmt_fcp_abort_cmpl); 2061 1816 2062 1817 ctxp->state = LPFC_NVMET_STE_DONE; 2063 1818 ··· 2073 1826 } 2074 1827 ctxp->flag &= ~LPFC_NVMET_ABORT_OP; 2075 1828 spin_unlock_irqrestore(&ctxp->ctxlock, flags); 1829 + atomic_inc(&tgtp->xmt_abort_rsp); 2076 1830 2077 1831 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS, 2078 1832 "6165 ABORT cmpl: xri x%x flg x%x (%d) " ··· 2082 1834 wcqe->word0, wcqe->total_data_placed, 2083 1835 result, wcqe->word3); 2084 1836 1837 + cmdwqe->context2 = NULL; 1838 + cmdwqe->context3 = NULL; 2085 1839 /* 2086 1840 * if transport has released ctx, then can reuse it. 
Otherwise, 2087 1841 * will be recycled by transport release call. 2088 1842 */ 2089 1843 if (released) 2090 - lpfc_nvmet_rq_post(phba, ctxp, &ctxp->rqb_buffer->hbuf); 1844 + lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 2091 1845 2092 - cmdwqe->context2 = NULL; 2093 - cmdwqe->context3 = NULL; 1846 + /* This is the iocbq for the abort, not the command */ 2094 1847 lpfc_sli_release_iocbq(phba, cmdwqe); 2095 1848 2096 1849 /* Since iaab/iaar are NOT set, there is no work left. ··· 2125 1876 result = wcqe->parameter; 2126 1877 2127 1878 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2128 - atomic_inc(&tgtp->xmt_abort_cmpl); 1879 + if (ctxp->flag & LPFC_NVMET_ABORT_OP) 1880 + atomic_inc(&tgtp->xmt_fcp_abort_cmpl); 2129 1881 2130 1882 if (!ctxp) { 2131 1883 /* if context is clear, related io already complete */ ··· 2156 1906 } 2157 1907 ctxp->flag &= ~LPFC_NVMET_ABORT_OP; 2158 1908 spin_unlock_irqrestore(&ctxp->ctxlock, flags); 1909 + atomic_inc(&tgtp->xmt_abort_rsp); 2159 1910 2160 1911 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 2161 1912 "6316 ABTS cmpl xri x%x flg x%x (%x) " ··· 2164 1913 ctxp->oxid, ctxp->flag, released, 2165 1914 wcqe->word0, wcqe->total_data_placed, 2166 1915 result, wcqe->word3); 1916 + 1917 + cmdwqe->context2 = NULL; 1918 + cmdwqe->context3 = NULL; 2167 1919 /* 2168 1920 * if transport has released ctx, then can reuse it. Otherwise, 2169 1921 * will be recycled by transport release call. 2170 1922 */ 2171 1923 if (released) 2172 - lpfc_nvmet_rq_post(phba, ctxp, &ctxp->rqb_buffer->hbuf); 2173 - 2174 - cmdwqe->context2 = NULL; 2175 - cmdwqe->context3 = NULL; 1924 + lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 2176 1925 2177 1926 /* Since iaab/iaar are NOT set, there is no work left. 
2178 1927 * For LPFC_NVMET_XBUSY, lpfc_sli4_nvmet_xri_aborted ··· 2203 1952 result = wcqe->parameter; 2204 1953 2205 1954 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2206 - atomic_inc(&tgtp->xmt_abort_cmpl); 1955 + atomic_inc(&tgtp->xmt_ls_abort_cmpl); 2207 1956 2208 1957 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 2209 1958 "6083 Abort cmpl: ctx %p WCQE: %08x %08x %08x %08x\n", ··· 2234 1983 sid, xri, ctxp->wqeq->sli4_xritag); 2235 1984 2236 1985 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2237 - if (!ctxp->wqeq) { 2238 - ctxp->wqeq = ctxp->rqb_buffer->iocbq; 2239 - ctxp->wqeq->hba_wqidx = 0; 2240 - } 2241 1986 2242 1987 ndlp = lpfc_findnode_did(phba->pport, sid); 2243 1988 if (!ndlp || !NLP_CHK_NODE_ACT(ndlp) || ··· 2329 2082 2330 2083 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2331 2084 if (!ctxp->wqeq) { 2332 - ctxp->wqeq = ctxp->rqb_buffer->iocbq; 2085 + ctxp->wqeq = ctxp->ctxbuf->iocbq; 2333 2086 ctxp->wqeq->hba_wqidx = 0; 2334 2087 } 2335 2088 ··· 2350 2103 /* Issue ABTS for this WQE based on iotag */ 2351 2104 ctxp->abort_wqeq = lpfc_sli_get_iocbq(phba); 2352 2105 if (!ctxp->abort_wqeq) { 2106 + atomic_inc(&tgtp->xmt_abort_rsp_error); 2353 2107 lpfc_printf_log(phba, KERN_WARNING, LOG_NVME_ABTS, 2354 2108 "6161 ABORT failed: No wqeqs: " 2355 2109 "xri: x%x\n", ctxp->oxid); ··· 2375 2127 /* driver queued commands are in process of being flushed */ 2376 2128 if (phba->hba_flag & HBA_NVME_IOQ_FLUSH) { 2377 2129 spin_unlock_irqrestore(&phba->hbalock, flags); 2130 + atomic_inc(&tgtp->xmt_abort_rsp_error); 2378 2131 lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 2379 2132 "6163 Driver in reset cleanup - flushing " 2380 2133 "NVME Req now. 
hba_flag x%x oxid x%x\n", ··· 2388 2139 /* Outstanding abort is in progress */ 2389 2140 if (abts_wqeq->iocb_flag & LPFC_DRIVER_ABORTED) { 2390 2141 spin_unlock_irqrestore(&phba->hbalock, flags); 2142 + atomic_inc(&tgtp->xmt_abort_rsp_error); 2391 2143 lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 2392 2144 "6164 Outstanding NVME I/O Abort Request " 2393 2145 "still pending on oxid x%x\n", ··· 2439 2189 abts_wqeq->context2 = ctxp; 2440 2190 rc = lpfc_sli4_issue_wqe(phba, LPFC_FCP_RING, abts_wqeq); 2441 2191 spin_unlock_irqrestore(&phba->hbalock, flags); 2442 - if (rc == WQE_SUCCESS) 2192 + if (rc == WQE_SUCCESS) { 2193 + atomic_inc(&tgtp->xmt_abort_sol); 2443 2194 return 0; 2195 + } 2444 2196 2197 + atomic_inc(&tgtp->xmt_abort_rsp_error); 2445 2198 ctxp->flag &= ~LPFC_NVMET_ABORT_OP; 2446 2199 lpfc_sli_release_iocbq(phba, abts_wqeq); 2447 2200 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS, ··· 2467 2214 2468 2215 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 2469 2216 if (!ctxp->wqeq) { 2470 - ctxp->wqeq = ctxp->rqb_buffer->iocbq; 2217 + ctxp->wqeq = ctxp->ctxbuf->iocbq; 2471 2218 ctxp->wqeq->hba_wqidx = 0; 2472 2219 } 2473 2220 ··· 2483 2230 rc = lpfc_sli4_issue_wqe(phba, LPFC_FCP_RING, abts_wqeq); 2484 2231 spin_unlock_irqrestore(&phba->hbalock, flags); 2485 2232 if (rc == WQE_SUCCESS) { 2486 - atomic_inc(&tgtp->xmt_abort_rsp); 2487 2233 return 0; 2488 2234 } 2489 2235 2490 2236 aerr: 2237 + atomic_inc(&tgtp->xmt_abort_rsp_error); 2491 2238 ctxp->flag &= ~LPFC_NVMET_ABORT_OP; 2492 2239 atomic_inc(&tgtp->xmt_abort_rsp_error); 2493 2240 lpfc_printf_log(phba, KERN_WARNING, LOG_NVME_ABTS, ··· 2522 2269 } 2523 2270 abts_wqeq = ctxp->wqeq; 2524 2271 wqe_abts = &abts_wqeq->wqe; 2272 + 2525 2273 lpfc_nvmet_unsol_issue_abort(phba, ctxp, sid, xri); 2526 2274 2527 2275 spin_lock_irqsave(&phba->hbalock, flags); ··· 2532 2278 rc = lpfc_sli4_issue_wqe(phba, LPFC_ELS_RING, abts_wqeq); 2533 2279 spin_unlock_irqrestore(&phba->hbalock, flags); 2534 2280 if (rc == 
WQE_SUCCESS) { 2535 - atomic_inc(&tgtp->xmt_abort_rsp); 2281 + atomic_inc(&tgtp->xmt_abort_unsol); 2536 2282 return 0; 2537 2283 } 2538 2284
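The lpfc_nvmet.c changes above move receive contexts out of the RQ buffers and into a driver-wide pool: lpfc_nvmet_setup_io_context() pre-allocates one context per NVMET XRI, lpfc_nvmet_unsol_fcp_buffer() takes one from the free list (parking the command on lpfc_nvmet_io_wait_list when none is available), and lpfc_nvmet_ctxbuf_post() returns it, servicing a parked command first if one is waiting. The core bookkeeping can be sketched as a free list plus a wait counter (an illustrative sketch only; the real driver uses spinlock-protected list_head lists and counts both sides):

```c
#include <stddef.h>

/* Hypothetical pooled context; stands in for lpfc_nvmet_ctxbuf. */
struct ctx {
	struct ctx *next;
};

struct pool {
	struct ctx *free_list;	/* pre-allocated, currently unused contexts */
	int wait_cnt;		/* commands parked for lack of a context */
};

/* Take a context; on an empty pool, record that a command must wait
 * (the driver queues the DMA buffer on lpfc_nvmet_io_wait_list). */
static struct ctx *pool_get(struct pool *p)
{
	struct ctx *c = p->free_list;

	if (!c) {
		p->wait_cnt++;
		return NULL;
	}
	p->free_list = c->next;
	return c;
}

/* Return a context; if a command is waiting, reuse it immediately
 * instead of putting it back, mirroring lpfc_nvmet_ctxbuf_post().
 * Returns 1 when the context was handed to a waiter, 0 otherwise. */
static int pool_put(struct pool *p, struct ctx *c)
{
	if (p->wait_cnt) {
		p->wait_cnt--;
		return 1;
	}
	c->next = p->free_list;
	p->free_list = c;
	return 0;
}
```

The design point is that a burst of commands larger than the XRI count no longer drops I/O: excess commands are deferred and picked up as contexts are released.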
+9 -5
drivers/scsi/lpfc/lpfc_nvmet.h
··· 22 22 ********************************************************************/ 23 23 24 24 #define LPFC_NVMET_DEFAULT_SEGS (64 + 1) /* 256K IOs */ 25 + #define LPFC_NVMET_RQE_DEF_COUNT 512 25 26 #define LPFC_NVMET_SUCCESS_LEN 12 26 27 27 28 /* Used for NVME Target */ ··· 35 34 atomic_t rcv_ls_req_out; 36 35 atomic_t rcv_ls_req_drop; 37 36 atomic_t xmt_ls_abort; 37 + atomic_t xmt_ls_abort_cmpl; 38 38 39 39 /* Stats counters - lpfc_nvmet_xmt_ls_rsp */ 40 40 atomic_t xmt_ls_rsp; ··· 49 47 atomic_t rcv_fcp_cmd_in; 50 48 atomic_t rcv_fcp_cmd_out; 51 49 atomic_t rcv_fcp_cmd_drop; 50 + atomic_t xmt_fcp_release; 52 51 53 52 /* Stats counters - lpfc_nvmet_xmt_fcp_op */ 54 - atomic_t xmt_fcp_abort; 55 53 atomic_t xmt_fcp_drop; 56 54 atomic_t xmt_fcp_read_rsp; 57 55 atomic_t xmt_fcp_read; ··· 64 62 atomic_t xmt_fcp_rsp_drop; 65 63 66 64 67 - /* Stats counters - lpfc_nvmet_unsol_issue_abort */ 65 + /* Stats counters - lpfc_nvmet_xmt_fcp_abort */ 66 + atomic_t xmt_fcp_abort; 67 + atomic_t xmt_fcp_abort_cmpl; 68 + atomic_t xmt_abort_sol; 69 + atomic_t xmt_abort_unsol; 68 70 atomic_t xmt_abort_rsp; 69 71 atomic_t xmt_abort_rsp_error; 70 - 71 - /* Stats counters - lpfc_nvmet_xmt_abort_cmp */ 72 - atomic_t xmt_abort_cmpl; 73 72 }; 74 73 75 74 struct lpfc_nvmet_rcv_ctx { ··· 106 103 #define LPFC_NVMET_CTX_RLS 0x8 /* ctx free requested */ 107 104 #define LPFC_NVMET_ABTS_RCV 0x10 /* ABTS received on exchange */ 108 105 struct rqb_dmabuf *rqb_buffer; 106 + struct lpfc_nvmet_ctxbuf *ctxbuf; 109 107 110 108 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 111 109 uint64_t ts_isr_cmd;
+277 -80
drivers/scsi/lpfc/lpfc_sli.c
··· 74 74 struct lpfc_iocbq *); 75 75 static void lpfc_sli4_send_seq_to_ulp(struct lpfc_vport *, 76 76 struct hbq_dmabuf *); 77 + static void lpfc_sli4_handle_mds_loopback(struct lpfc_vport *vport, 78 + struct hbq_dmabuf *dmabuf); 77 79 static int lpfc_sli4_fp_handle_cqe(struct lpfc_hba *, struct lpfc_queue *, 78 80 struct lpfc_cqe *); 79 81 static int lpfc_sli4_post_sgl_list(struct lpfc_hba *, struct list_head *, ··· 481 479 if (unlikely(!hq) || unlikely(!dq)) 482 480 return -ENOMEM; 483 481 put_index = hq->host_index; 484 - temp_hrqe = hq->qe[hq->host_index].rqe; 482 + temp_hrqe = hq->qe[put_index].rqe; 485 483 temp_drqe = dq->qe[dq->host_index].rqe; 486 484 487 485 if (hq->type != LPFC_HRQ || dq->type != LPFC_DRQ) 488 486 return -EINVAL; 489 - if (hq->host_index != dq->host_index) 487 + if (put_index != dq->host_index) 490 488 return -EINVAL; 491 489 /* If the host has not yet processed the next entry then we are done */ 492 - if (((hq->host_index + 1) % hq->entry_count) == hq->hba_index) 490 + if (((put_index + 1) % hq->entry_count) == hq->hba_index) 493 491 return -EBUSY; 494 492 lpfc_sli_pcimem_bcopy(hrqe, temp_hrqe, hq->entry_size); 495 493 lpfc_sli_pcimem_bcopy(drqe, temp_drqe, dq->entry_size); 496 494 497 495 /* Update the host index to point to the next slot */ 498 - hq->host_index = ((hq->host_index + 1) % hq->entry_count); 496 + hq->host_index = ((put_index + 1) % hq->entry_count); 499 497 dq->host_index = ((dq->host_index + 1) % dq->entry_count); 498 + hq->RQ_buf_posted++; 500 499 501 500 /* Ring The Header Receive Queue Doorbell */ 502 501 if (!(hq->host_index % hq->entry_repost)) { ··· 5909 5906 bf_set(lpfc_mbx_set_feature_mds, 5910 5907 &mbox->u.mqe.un.set_feature, 1); 5911 5908 bf_set(lpfc_mbx_set_feature_mds_deep_loopbk, 5912 - &mbox->u.mqe.un.set_feature, 0); 5909 + &mbox->u.mqe.un.set_feature, 1); 5913 5910 mbox->u.mqe.un.set_feature.feature = LPFC_SET_MDS_DIAGS; 5914 5911 mbox->u.mqe.un.set_feature.param_len = 8; 5915 5912 break; ··· 6515 6512 
(phba->hba_flag & HBA_FCOE_MODE) ? "FCoE" : "FC"); 6516 6513 } 6517 6514 6515 + int 6516 + lpfc_post_rq_buffer(struct lpfc_hba *phba, struct lpfc_queue *hrq, 6517 + struct lpfc_queue *drq, int count, int idx) 6518 + { 6519 + int rc, i; 6520 + struct lpfc_rqe hrqe; 6521 + struct lpfc_rqe drqe; 6522 + struct lpfc_rqb *rqbp; 6523 + struct rqb_dmabuf *rqb_buffer; 6524 + LIST_HEAD(rqb_buf_list); 6525 + 6526 + rqbp = hrq->rqbp; 6527 + for (i = 0; i < count; i++) { 6528 + /* IF RQ is already full, don't bother */ 6529 + if (rqbp->buffer_count + i >= rqbp->entry_count - 1) 6530 + break; 6531 + rqb_buffer = rqbp->rqb_alloc_buffer(phba); 6532 + if (!rqb_buffer) 6533 + break; 6534 + rqb_buffer->hrq = hrq; 6535 + rqb_buffer->drq = drq; 6536 + rqb_buffer->idx = idx; 6537 + list_add_tail(&rqb_buffer->hbuf.list, &rqb_buf_list); 6538 + } 6539 + while (!list_empty(&rqb_buf_list)) { 6540 + list_remove_head(&rqb_buf_list, rqb_buffer, struct rqb_dmabuf, 6541 + hbuf.list); 6542 + 6543 + hrqe.address_lo = putPaddrLow(rqb_buffer->hbuf.phys); 6544 + hrqe.address_hi = putPaddrHigh(rqb_buffer->hbuf.phys); 6545 + drqe.address_lo = putPaddrLow(rqb_buffer->dbuf.phys); 6546 + drqe.address_hi = putPaddrHigh(rqb_buffer->dbuf.phys); 6547 + rc = lpfc_sli4_rq_put(hrq, drq, &hrqe, &drqe); 6548 + if (rc < 0) { 6549 + rqbp->rqb_free_buffer(phba, rqb_buffer); 6550 + } else { 6551 + list_add_tail(&rqb_buffer->hbuf.list, 6552 + &rqbp->rqb_buffer_list); 6553 + rqbp->buffer_count++; 6554 + } 6555 + } 6556 + return 1; 6557 + } 6558 + 6518 6559 /** 6519 6560 * lpfc_sli4_hba_setup - SLI4 device initialization PCI function 6520 6561 * @phba: Pointer to HBA context object. 
··· 6571 6524 int 6572 6525 lpfc_sli4_hba_setup(struct lpfc_hba *phba) 6573 6526 { 6574 - int rc, i; 6527 + int rc, i, cnt; 6575 6528 LPFC_MBOXQ_t *mboxq; 6576 6529 struct lpfc_mqe *mqe; 6577 6530 uint8_t *vpd; ··· 6922 6875 goto out_destroy_queue; 6923 6876 } 6924 6877 phba->sli4_hba.nvmet_xri_cnt = rc; 6878 + 6879 + cnt = phba->cfg_iocb_cnt * 1024; 6880 + /* We need 1 iocbq for every SGL, for IO processing */ 6881 + cnt += phba->sli4_hba.nvmet_xri_cnt; 6882 + /* Initialize and populate the iocb list per host */ 6883 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 6884 + "2821 initialize iocb list %d total %d\n", 6885 + phba->cfg_iocb_cnt, cnt); 6886 + rc = lpfc_init_iocb_list(phba, cnt); 6887 + if (rc) { 6888 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 6889 + "1413 Failed to init iocb list.\n"); 6890 + goto out_destroy_queue; 6891 + } 6892 + 6925 6893 lpfc_nvmet_create_targetport(phba); 6926 6894 } else { 6927 6895 /* update host scsi xri-sgl sizes and mappings */ ··· 6956 6894 "and mapping: %d\n", rc); 6957 6895 goto out_destroy_queue; 6958 6896 } 6897 + 6898 + cnt = phba->cfg_iocb_cnt * 1024; 6899 + /* Initialize and populate the iocb list per host */ 6900 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 6901 + "2820 initialize iocb list %d total %d\n", 6902 + phba->cfg_iocb_cnt, cnt); 6903 + rc = lpfc_init_iocb_list(phba, cnt); 6904 + if (rc) { 6905 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 6906 + "6301 Failed to init iocb list.\n"); 6907 + goto out_destroy_queue; 6908 + } 6959 6909 } 6960 6910 6961 6911 if (phba->nvmet_support && phba->cfg_nvmet_mrq) { 6962 - 6963 6912 /* Post initial buffers to all RQs created */ 6964 6913 for (i = 0; i < phba->cfg_nvmet_mrq; i++) { 6965 6914 rqbp = phba->sli4_hba.nvmet_mrq_hdr[i]->rqbp; 6966 6915 INIT_LIST_HEAD(&rqbp->rqb_buffer_list); 6967 6916 rqbp->rqb_alloc_buffer = lpfc_sli4_nvmet_alloc; 6968 6917 rqbp->rqb_free_buffer = lpfc_sli4_nvmet_free; 6969 - rqbp->entry_count = 256; 6918 + rqbp->entry_count = 
LPFC_NVMET_RQE_DEF_COUNT; 6970 6919 rqbp->buffer_count = 0; 6971 - 6972 - /* Divide by 4 and round down to multiple of 16 */ 6973 - rc = (phba->cfg_nvmet_mrq_post >> 2) & 0xfff8; 6974 - phba->sli4_hba.nvmet_mrq_hdr[i]->entry_repost = rc; 6975 - phba->sli4_hba.nvmet_mrq_data[i]->entry_repost = rc; 6976 6920 6977 6921 lpfc_post_rq_buffer( 6978 6922 phba, phba->sli4_hba.nvmet_mrq_hdr[i], 6979 6923 phba->sli4_hba.nvmet_mrq_data[i], 6980 - phba->cfg_nvmet_mrq_post); 6924 + LPFC_NVMET_RQE_DEF_COUNT, i); 6981 6925 } 6982 6926 } 6983 6927 ··· 7150 7082 /* Unset all the queues set up in this routine when error out */ 7151 7083 lpfc_sli4_queue_unset(phba); 7152 7084 out_destroy_queue: 7085 + lpfc_free_iocb_list(phba); 7153 7086 lpfc_sli4_queue_destroy(phba); 7154 7087 out_stop_timers: 7155 7088 lpfc_stop_hba_timers(phba); ··· 8690 8621 memset(wqe, 0, sizeof(union lpfc_wqe128)); 8691 8622 /* Some of the fields are in the right position already */ 8692 8623 memcpy(wqe, &iocbq->iocb, sizeof(union lpfc_wqe)); 8693 - wqe->generic.wqe_com.word7 = 0; /* The ct field has moved so reset */ 8694 - wqe->generic.wqe_com.word10 = 0; 8624 + if (iocbq->iocb.ulpCommand != CMD_SEND_FRAME) { 8625 + /* The ct field has moved so reset */ 8626 + wqe->generic.wqe_com.word7 = 0; 8627 + wqe->generic.wqe_com.word10 = 0; 8628 + } 8695 8629 8696 8630 abort_tag = (uint32_t) iocbq->iotag; 8697 8631 xritag = iocbq->sli4_xritag; ··· 9188 9116 } 9189 9117 9190 9118 break; 9119 + case CMD_SEND_FRAME: 9120 + bf_set(wqe_xri_tag, &wqe->generic.wqe_com, xritag); 9121 + bf_set(wqe_reqtag, &wqe->generic.wqe_com, iocbq->iotag); 9122 + return 0; 9191 9123 case CMD_XRI_ABORTED_CX: 9192 9124 case CMD_CREATE_XRI_CR: /* Do we expect to use this? 
*/ 9193 9125 case CMD_IOCB_FCP_IBIDIR64_CR: /* bidirectional xfer */ ··· 12864 12788 struct fc_frame_header *fc_hdr; 12865 12789 struct lpfc_queue *hrq = phba->sli4_hba.hdr_rq; 12866 12790 struct lpfc_queue *drq = phba->sli4_hba.dat_rq; 12791 + struct lpfc_nvmet_tgtport *tgtp; 12867 12792 struct hbq_dmabuf *dma_buf; 12868 12793 uint32_t status, rq_id; 12869 12794 unsigned long iflags; ··· 12885 12808 case FC_STATUS_RQ_BUF_LEN_EXCEEDED: 12886 12809 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 12887 12810 "2537 Receive Frame Truncated!!\n"); 12888 - hrq->RQ_buf_trunc++; 12889 12811 case FC_STATUS_RQ_SUCCESS: 12890 12812 lpfc_sli4_rq_release(hrq, drq); 12891 12813 spin_lock_irqsave(&phba->hbalock, iflags); ··· 12895 12819 goto out; 12896 12820 } 12897 12821 hrq->RQ_rcv_buf++; 12822 + hrq->RQ_buf_posted--; 12898 12823 memcpy(&dma_buf->cq_event.cqe.rcqe_cmpl, rcqe, sizeof(*rcqe)); 12899 12824 12900 12825 /* If a NVME LS event (type 0x28), treat it as Fast path */ ··· 12909 12832 spin_unlock_irqrestore(&phba->hbalock, iflags); 12910 12833 workposted = true; 12911 12834 break; 12912 - case FC_STATUS_INSUFF_BUF_NEED_BUF: 12913 12835 case FC_STATUS_INSUFF_BUF_FRM_DISC: 12836 + if (phba->nvmet_support) { 12837 + tgtp = phba->targetport->private; 12838 + lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_NVME, 12839 + "6402 RQE Error x%x, posted %d err_cnt " 12840 + "%d: %x %x %x\n", 12841 + status, hrq->RQ_buf_posted, 12842 + hrq->RQ_no_posted_buf, 12843 + atomic_read(&tgtp->rcv_fcp_cmd_in), 12844 + atomic_read(&tgtp->rcv_fcp_cmd_out), 12845 + atomic_read(&tgtp->xmt_fcp_release)); 12846 + } 12847 + /* fallthrough */ 12848 + 12849 + case FC_STATUS_INSUFF_BUF_NEED_BUF: 12914 12850 hrq->RQ_no_posted_buf++; 12915 12851 /* Post more buffers if possible */ 12916 12852 spin_lock_irqsave(&phba->hbalock, iflags); ··· 13041 12951 while ((cqe = lpfc_sli4_cq_get(cq))) { 13042 12952 workposted |= lpfc_sli4_sp_handle_mcqe(phba, cqe); 13043 12953 if (!(++ecount % cq->entry_repost)) 13044 - 
lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM); 12954 + break; 13045 12955 cq->CQ_mbox++; 13046 12956 } 13047 12957 break; ··· 13055 12965 workposted |= lpfc_sli4_sp_handle_cqe(phba, cq, 13056 12966 cqe); 13057 12967 if (!(++ecount % cq->entry_repost)) 13058 - lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM); 12968 + break; 13059 12969 } 13060 12970 13061 12971 /* Track the max number of CQEs processed in 1 EQ */ ··· 13225 13135 struct lpfc_queue *drq; 13226 13136 struct rqb_dmabuf *dma_buf; 13227 13137 struct fc_frame_header *fc_hdr; 13138 + struct lpfc_nvmet_tgtport *tgtp; 13228 13139 uint32_t status, rq_id; 13229 13140 unsigned long iflags; 13230 13141 uint32_t fctl, idx; ··· 13256 13165 case FC_STATUS_RQ_BUF_LEN_EXCEEDED: 13257 13166 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 13258 13167 "6126 Receive Frame Truncated!!\n"); 13259 - hrq->RQ_buf_trunc++; 13260 - break; 13261 13168 case FC_STATUS_RQ_SUCCESS: 13262 13169 lpfc_sli4_rq_release(hrq, drq); 13263 13170 spin_lock_irqsave(&phba->hbalock, iflags); ··· 13267 13178 } 13268 13179 spin_unlock_irqrestore(&phba->hbalock, iflags); 13269 13180 hrq->RQ_rcv_buf++; 13181 + hrq->RQ_buf_posted--; 13270 13182 fc_hdr = (struct fc_frame_header *)dma_buf->hbuf.virt; 13271 13183 13272 13184 /* Just some basic sanity checks on FCP Command frame */ ··· 13290 13200 drop: 13291 13201 lpfc_in_buf_free(phba, &dma_buf->dbuf); 13292 13202 break; 13293 - case FC_STATUS_INSUFF_BUF_NEED_BUF: 13294 13203 case FC_STATUS_INSUFF_BUF_FRM_DISC: 13204 + if (phba->nvmet_support) { 13205 + tgtp = phba->targetport->private; 13206 + lpfc_printf_log(phba, KERN_ERR, LOG_SLI | LOG_NVME, 13207 + "6401 RQE Error x%x, posted %d err_cnt " 13208 + "%d: %x %x %x\n", 13209 + status, hrq->RQ_buf_posted, 13210 + hrq->RQ_no_posted_buf, 13211 + atomic_read(&tgtp->rcv_fcp_cmd_in), 13212 + atomic_read(&tgtp->rcv_fcp_cmd_out), 13213 + atomic_read(&tgtp->xmt_fcp_release)); 13214 + } 13215 + /* fallthrough */ 13216 + 13217 + case FC_STATUS_INSUFF_BUF_NEED_BUF: 13295 13218 
hrq->RQ_no_posted_buf++; 13296 13219 /* Post more buffers if possible */ 13297 - spin_lock_irqsave(&phba->hbalock, iflags); 13298 - phba->hba_flag |= HBA_POST_RECEIVE_BUFFER; 13299 - spin_unlock_irqrestore(&phba->hbalock, iflags); 13300 - workposted = true; 13301 13220 break; 13302 13221 } 13303 13222 out: ··· 13460 13361 while ((cqe = lpfc_sli4_cq_get(cq))) { 13461 13362 workposted |= lpfc_sli4_fp_handle_cqe(phba, cq, cqe); 13462 13363 if (!(++ecount % cq->entry_repost)) 13463 - lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM); 13364 + break; 13464 13365 } 13465 13366 13466 13367 /* Track the max number of CQEs processed in 1 EQ */ ··· 13551 13452 while ((cqe = lpfc_sli4_cq_get(cq))) { 13552 13453 workposted |= lpfc_sli4_fp_handle_cqe(phba, cq, cqe); 13553 13454 if (!(++ecount % cq->entry_repost)) 13554 - lpfc_sli4_cq_release(cq, LPFC_QUEUE_NOARM); 13455 + break; 13555 13456 } 13556 13457 13557 13458 /* Track the max number of CQEs processed in 1 EQ */ ··· 13633 13534 while ((eqe = lpfc_sli4_eq_get(eq))) { 13634 13535 lpfc_sli4_fof_handle_eqe(phba, eqe); 13635 13536 if (!(++ecount % eq->entry_repost)) 13636 - lpfc_sli4_eq_release(eq, LPFC_QUEUE_NOARM); 13537 + break; 13637 13538 eq->EQ_processed++; 13638 13539 } 13639 13540 ··· 13750 13651 13751 13652 lpfc_sli4_hba_handle_eqe(phba, eqe, hba_eqidx); 13752 13653 if (!(++ecount % fpeq->entry_repost)) 13753 - lpfc_sli4_eq_release(fpeq, LPFC_QUEUE_NOARM); 13654 + break; 13754 13655 fpeq->EQ_processed++; 13755 13656 } 13756 13657 ··· 13931 13832 } 13932 13833 queue->entry_size = entry_size; 13933 13834 queue->entry_count = entry_count; 13934 - 13935 - /* 13936 - * entry_repost is calculated based on the number of entries in the 13937 - * queue. This works out except for RQs. If buffers are NOT initially 13938 - * posted for every RQE, entry_repost should be adjusted accordingly. 
13939 - */ 13940 - queue->entry_repost = (entry_count >> 3); 13941 - if (queue->entry_repost < LPFC_QUEUE_MIN_REPOST) 13942 - queue->entry_repost = LPFC_QUEUE_MIN_REPOST; 13943 13835 queue->phba = phba; 13836 + 13837 + /* entry_repost will be set during q creation */ 13944 13838 13945 13839 return queue; 13946 13840 out_fail: ··· 14165 14073 status = -ENXIO; 14166 14074 eq->host_index = 0; 14167 14075 eq->hba_index = 0; 14076 + eq->entry_repost = LPFC_EQ_REPOST; 14168 14077 14169 14078 mempool_free(mbox, phba->mbox_mem_pool); 14170 14079 return status; ··· 14239 14146 default: 14240 14147 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 14241 14148 "0361 Unsupported CQ count: " 14242 - "entry cnt %d sz %d pg cnt %d repost %d\n", 14149 + "entry cnt %d sz %d pg cnt %d\n", 14243 14150 cq->entry_count, cq->entry_size, 14244 - cq->page_count, cq->entry_repost); 14151 + cq->page_count); 14245 14152 if (cq->entry_count < 256) { 14246 14153 status = -EINVAL; 14247 14154 goto out; ··· 14294 14201 cq->assoc_qid = eq->queue_id; 14295 14202 cq->host_index = 0; 14296 14203 cq->hba_index = 0; 14204 + cq->entry_repost = LPFC_CQ_REPOST; 14297 14205 14298 14206 out: 14299 14207 mempool_free(mbox, phba->mbox_mem_pool); ··· 14486 14392 cq->assoc_qid = eq->queue_id; 14487 14393 cq->host_index = 0; 14488 14394 cq->hba_index = 0; 14395 + cq->entry_repost = LPFC_CQ_REPOST; 14489 14396 14490 14397 rc = 0; 14491 14398 list_for_each_entry(dmabuf, &cq->page_list, list) { ··· 14735 14640 mq->subtype = subtype; 14736 14641 mq->host_index = 0; 14737 14642 mq->hba_index = 0; 14643 + mq->entry_repost = LPFC_MQ_REPOST; 14738 14644 14739 14645 /* link the mq onto the parent cq child list */ 14740 14646 list_add_tail(&mq->list, &cq->child_list); ··· 14961 14865 } 14962 14866 14963 14867 /** 14964 - * lpfc_rq_adjust_repost - Adjust entry_repost for an RQ 14965 - * @phba: HBA structure that indicates port to create a queue on. 14966 - * @rq: The queue structure to use for the receive queue. 
14967 - * @qno: The associated HBQ number 14968 - * 14969 - * 14970 - * For SLI4 we need to adjust the RQ repost value based on 14971 - * the number of buffers that are initially posted to the RQ. 14972 - */ 14973 - void 14974 - lpfc_rq_adjust_repost(struct lpfc_hba *phba, struct lpfc_queue *rq, int qno) 14975 - { 14976 - uint32_t cnt; 14977 - 14978 - /* sanity check on queue memory */ 14979 - if (!rq) 14980 - return; 14981 - cnt = lpfc_hbq_defs[qno]->entry_count; 14982 - 14983 - /* Recalc repost for RQs based on buffers initially posted */ 14984 - cnt = (cnt >> 3); 14985 - if (cnt < LPFC_QUEUE_MIN_REPOST) 14986 - cnt = LPFC_QUEUE_MIN_REPOST; 14987 - 14988 - rq->entry_repost = cnt; 14989 - } 14990 - 14991 - /** 14992 14868 * lpfc_rq_create - Create a Receive Queue on the HBA 14993 14869 * @phba: HBA structure that indicates port to create a queue on. 14994 14870 * @hrq: The queue structure to use to create the header receive queue. ··· 15145 15077 hrq->subtype = subtype; 15146 15078 hrq->host_index = 0; 15147 15079 hrq->hba_index = 0; 15080 + hrq->entry_repost = LPFC_RQ_REPOST; 15148 15081 15149 15082 /* now create the data queue */ 15150 15083 lpfc_sli4_config(phba, mbox, LPFC_MBOX_SUBSYSTEM_FCOE, ··· 15156 15087 if (phba->sli4_hba.pc_sli4_params.rqv == LPFC_Q_CREATE_VERSION_1) { 15157 15088 bf_set(lpfc_rq_context_rqe_count_1, 15158 15089 &rq_create->u.request.context, hrq->entry_count); 15159 - rq_create->u.request.context.buffer_size = LPFC_DATA_BUF_SIZE; 15090 + if (subtype == LPFC_NVMET) 15091 + rq_create->u.request.context.buffer_size = 15092 + LPFC_NVMET_DATA_BUF_SIZE; 15093 + else 15094 + rq_create->u.request.context.buffer_size = 15095 + LPFC_DATA_BUF_SIZE; 15160 15096 bf_set(lpfc_rq_context_rqe_size, &rq_create->u.request.context, 15161 15097 LPFC_RQE_SIZE_8); 15162 15098 bf_set(lpfc_rq_context_page_size, &rq_create->u.request.context, ··· 15198 15124 LPFC_RQ_RING_SIZE_4096); 15199 15125 break; 15200 15126 } 15201 - bf_set(lpfc_rq_context_buf_size, 
&rq_create->u.request.context, 15202 - LPFC_DATA_BUF_SIZE); 15127 + if (subtype == LPFC_NVMET) 15128 + bf_set(lpfc_rq_context_buf_size, 15129 + &rq_create->u.request.context, 15130 + LPFC_NVMET_DATA_BUF_SIZE); 15131 + else 15132 + bf_set(lpfc_rq_context_buf_size, 15133 + &rq_create->u.request.context, 15134 + LPFC_DATA_BUF_SIZE); 15203 15135 } 15204 15136 bf_set(lpfc_rq_context_cq_id, &rq_create->u.request.context, 15205 15137 cq->queue_id); ··· 15238 15158 drq->subtype = subtype; 15239 15159 drq->host_index = 0; 15240 15160 drq->hba_index = 0; 15161 + drq->entry_repost = LPFC_RQ_REPOST; 15241 15162 15242 15163 /* link the header and data RQs onto the parent cq child list */ 15243 15164 list_add_tail(&hrq->list, &cq->child_list); ··· 15351 15270 cq->queue_id); 15352 15271 bf_set(lpfc_rq_context_data_size, 15353 15272 &rq_create->u.request.context, 15354 - LPFC_DATA_BUF_SIZE); 15273 + LPFC_NVMET_DATA_BUF_SIZE); 15355 15274 bf_set(lpfc_rq_context_hdr_size, 15356 15275 &rq_create->u.request.context, 15357 15276 LPFC_HDR_BUF_SIZE); ··· 15396 15315 hrq->subtype = subtype; 15397 15316 hrq->host_index = 0; 15398 15317 hrq->hba_index = 0; 15318 + hrq->entry_repost = LPFC_RQ_REPOST; 15399 15319 15400 15320 drq->db_format = LPFC_DB_RING_FORMAT; 15401 15321 drq->db_regaddr = phba->sli4_hba.RQDBregaddr; ··· 15405 15323 drq->subtype = subtype; 15406 15324 drq->host_index = 0; 15407 15325 drq->hba_index = 0; 15326 + drq->entry_repost = LPFC_RQ_REPOST; 15408 15327 15409 15328 list_add_tail(&hrq->list, &cq->child_list); 15410 15329 list_add_tail(&drq->list, &cq->child_list); ··· 16146 16063 struct fc_vft_header *fc_vft_hdr; 16147 16064 uint32_t *header = (uint32_t *) fc_hdr; 16148 16065 16066 + #define FC_RCTL_MDS_DIAGS 0xF4 16067 + 16149 16068 switch (fc_hdr->fh_r_ctl) { 16150 16069 case FC_RCTL_DD_UNCAT: /* uncategorized information */ 16151 16070 case FC_RCTL_DD_SOL_DATA: /* solicited data */ ··· 16175 16090 case FC_RCTL_F_BSY: /* fabric busy to data frame */ 16176 16091 case 
FC_RCTL_F_BSYL: /* fabric busy to link control frame */ 16177 16092 case FC_RCTL_LCR: /* link credit reset */ 16093 + case FC_RCTL_MDS_DIAGS: /* MDS Diagnostics */ 16178 16094 case FC_RCTL_END: /* end */ 16179 16095 break; 16180 16096 case FC_RCTL_VFTH: /* Virtual Fabric tagging Header */ ··· 16185 16099 default: 16186 16100 goto drop; 16187 16101 } 16102 + 16103 + #define FC_TYPE_VENDOR_UNIQUE 0xFF 16104 + 16188 16105 switch (fc_hdr->fh_type) { 16189 16106 case FC_TYPE_BLS: 16190 16107 case FC_TYPE_ELS: 16191 16108 case FC_TYPE_FCP: 16192 16109 case FC_TYPE_CT: 16193 16110 case FC_TYPE_NVME: 16111 + case FC_TYPE_VENDOR_UNIQUE: 16194 16112 break; 16195 16113 case FC_TYPE_IP: 16196 16114 case FC_TYPE_ILS: ··· 16205 16115 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 16206 16116 "2538 Received frame rctl:%s (x%x), type:%s (x%x), " 16207 16117 "frame Data:%08x %08x %08x %08x %08x %08x %08x\n", 16118 + (fc_hdr->fh_r_ctl == FC_RCTL_MDS_DIAGS) ? "MDS Diags" : 16208 16119 lpfc_rctl_names[fc_hdr->fh_r_ctl], fc_hdr->fh_r_ctl, 16209 - lpfc_type_names[fc_hdr->fh_type], fc_hdr->fh_type, 16210 - be32_to_cpu(header[0]), be32_to_cpu(header[1]), 16211 - be32_to_cpu(header[2]), be32_to_cpu(header[3]), 16212 - be32_to_cpu(header[4]), be32_to_cpu(header[5]), 16213 - be32_to_cpu(header[6])); 16120 + (fc_hdr->fh_type == FC_TYPE_VENDOR_UNIQUE) ? 
16121 + "Vendor Unique" : lpfc_type_names[fc_hdr->fh_type], 16122 + fc_hdr->fh_type, be32_to_cpu(header[0]), 16123 + be32_to_cpu(header[1]), be32_to_cpu(header[2]), 16124 + be32_to_cpu(header[3]), be32_to_cpu(header[4]), 16125 + be32_to_cpu(header[5]), be32_to_cpu(header[6])); 16214 16126 return 0; 16215 16127 drop: 16216 16128 lpfc_printf_log(phba, KERN_WARNING, LOG_ELS, ··· 17018 16926 lpfc_sli_release_iocbq(phba, iocbq); 17019 16927 } 17020 16928 16929 + static void 16930 + lpfc_sli4_mds_loopback_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, 16931 + struct lpfc_iocbq *rspiocb) 16932 + { 16933 + struct lpfc_dmabuf *pcmd = cmdiocb->context2; 16934 + 16935 + if (pcmd && pcmd->virt) 16936 + pci_pool_free(phba->lpfc_drb_pool, pcmd->virt, pcmd->phys); 16937 + kfree(pcmd); 16938 + lpfc_sli_release_iocbq(phba, cmdiocb); 16939 + } 16940 + 16941 + static void 16942 + lpfc_sli4_handle_mds_loopback(struct lpfc_vport *vport, 16943 + struct hbq_dmabuf *dmabuf) 16944 + { 16945 + struct fc_frame_header *fc_hdr; 16946 + struct lpfc_hba *phba = vport->phba; 16947 + struct lpfc_iocbq *iocbq = NULL; 16948 + union lpfc_wqe *wqe; 16949 + struct lpfc_dmabuf *pcmd = NULL; 16950 + uint32_t frame_len; 16951 + int rc; 16952 + 16953 + fc_hdr = (struct fc_frame_header *)dmabuf->hbuf.virt; 16954 + frame_len = bf_get(lpfc_rcqe_length, &dmabuf->cq_event.cqe.rcqe_cmpl); 16955 + 16956 + /* Send the received frame back */ 16957 + iocbq = lpfc_sli_get_iocbq(phba); 16958 + if (!iocbq) 16959 + goto exit; 16960 + 16961 + /* Allocate buffer for command payload */ 16962 + pcmd = kmalloc(sizeof(struct lpfc_dmabuf), GFP_KERNEL); 16963 + if (pcmd) 16964 + pcmd->virt = pci_pool_alloc(phba->lpfc_drb_pool, GFP_KERNEL, 16965 + &pcmd->phys); 16966 + if (!pcmd || !pcmd->virt) 16967 + goto exit; 16968 + 16969 + INIT_LIST_HEAD(&pcmd->list); 16970 + 16971 + /* copyin the payload */ 16972 + memcpy(pcmd->virt, dmabuf->dbuf.virt, frame_len); 16973 + 16974 + /* fill in BDE's for command */ 16975 + 
iocbq->iocb.un.xseq64.bdl.addrHigh = putPaddrHigh(pcmd->phys); 16976 + iocbq->iocb.un.xseq64.bdl.addrLow = putPaddrLow(pcmd->phys); 16977 + iocbq->iocb.un.xseq64.bdl.bdeFlags = BUFF_TYPE_BDE_64; 16978 + iocbq->iocb.un.xseq64.bdl.bdeSize = frame_len; 16979 + 16980 + iocbq->context2 = pcmd; 16981 + iocbq->vport = vport; 16982 + iocbq->iocb_flag &= ~LPFC_FIP_ELS_ID_MASK; 16983 + iocbq->iocb_flag |= LPFC_USE_FCPWQIDX; 16984 + 16985 + /* 16986 + * Setup rest of the iocb as though it were a WQE 16987 + * Build the SEND_FRAME WQE 16988 + */ 16989 + wqe = (union lpfc_wqe *)&iocbq->iocb; 16990 + 16991 + wqe->send_frame.frame_len = frame_len; 16992 + wqe->send_frame.fc_hdr_wd0 = be32_to_cpu(*((uint32_t *)fc_hdr)); 16993 + wqe->send_frame.fc_hdr_wd1 = be32_to_cpu(*((uint32_t *)fc_hdr + 1)); 16994 + wqe->send_frame.fc_hdr_wd2 = be32_to_cpu(*((uint32_t *)fc_hdr + 2)); 16995 + wqe->send_frame.fc_hdr_wd3 = be32_to_cpu(*((uint32_t *)fc_hdr + 3)); 16996 + wqe->send_frame.fc_hdr_wd4 = be32_to_cpu(*((uint32_t *)fc_hdr + 4)); 16997 + wqe->send_frame.fc_hdr_wd5 = be32_to_cpu(*((uint32_t *)fc_hdr + 5)); 16998 + 16999 + iocbq->iocb.ulpCommand = CMD_SEND_FRAME; 17000 + iocbq->iocb.ulpLe = 1; 17001 + iocbq->iocb_cmpl = lpfc_sli4_mds_loopback_cmpl; 17002 + rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, iocbq, 0); 17003 + if (rc == IOCB_ERROR) 17004 + goto exit; 17005 + 17006 + lpfc_in_buf_free(phba, &dmabuf->dbuf); 17007 + return; 17008 + 17009 + exit: 17010 + lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 17011 + "2023 Unable to process MDS loopback frame\n"); 17012 + if (pcmd && pcmd->virt) 17013 + pci_pool_free(phba->lpfc_drb_pool, pcmd->virt, pcmd->phys); 17014 + kfree(pcmd); 17015 + lpfc_sli_release_iocbq(phba, iocbq); 17016 + lpfc_in_buf_free(phba, &dmabuf->dbuf); 17017 + } 17018 + 17021 17019 /** 17022 17020 * lpfc_sli4_handle_received_buffer - Handle received buffers from firmware 17023 17021 * @phba: Pointer to HBA context object. 
··· 17145 16963 else 17146 16964 fcfi = bf_get(lpfc_rcqe_fcf_id, 17147 16965 &dmabuf->cq_event.cqe.rcqe_cmpl); 16966 + 16967 + if (fc_hdr->fh_r_ctl == 0xF4 && fc_hdr->fh_type == 0xFF) { 16968 + vport = phba->pport; 16969 + /* Handle MDS Loopback frames */ 16970 + lpfc_sli4_handle_mds_loopback(vport, dmabuf); 16971 + return; 16972 + } 17148 16973 17149 16974 /* d_id this frame is directed to */ 17150 16975 did = sli4_did_from_fc_hdr(fc_hdr); ··· 17326 17137 "status x%x add_status x%x, mbx status x%x\n", 17327 17138 shdr_status, shdr_add_status, rc); 17328 17139 rc = -ENXIO; 17140 + } else { 17141 + /* 17142 + * The next_rpi stores the next logical module-64 rpi value used 17143 + * to post physical rpis in subsequent rpi postings. 17144 + */ 17145 + spin_lock_irq(&phba->hbalock); 17146 + phba->sli4_hba.next_rpi = rpi_page->next_rpi; 17147 + spin_unlock_irq(&phba->hbalock); 17329 17148 } 17330 17149 return rc; 17331 17150 } ··· 18914 18717 18915 18718 spin_lock_irqsave(&pring->ring_lock, iflags); 18916 18719 ctxp = pwqe->context2; 18917 - sglq = ctxp->rqb_buffer->sglq; 18720 + sglq = ctxp->ctxbuf->sglq; 18918 18721 if (pwqe->sli4_xritag == NO_XRI) { 18919 18722 pwqe->sli4_lxritag = sglq->sli4_lxritag; 18920 18723 pwqe->sli4_xritag = sglq->sli4_xritag;
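Several hunks above rework `lpfc_sli4_rq_put()` so that `put_index` is snapshotted once from `hq->host_index` and then used consistently for the cross-check against the data queue, the full-ring test, and the index advance. The ring arithmetic can be sketched in isolation; `struct ring` and `ring_put` here are toy stand-ins with hypothetical names, not driver code:

```c
#include <assert.h>

/* Toy model of the SLI4 receive-queue put path: host_index chases
 * hba_index around a ring of entry_count slots.  The queue is full
 * when advancing host_index would land on hba_index, i.e. the
 * hardware has not yet consumed the next slot. */
struct ring {
	unsigned int host_index; /* next slot the host will fill */
	unsigned int hba_index;  /* next slot the HBA will consume */
	unsigned int entry_count;
};

/* Returns the slot index used, or -1 when the ring is full
 * (the driver returns -EBUSY). */
static int ring_put(struct ring *q)
{
	unsigned int put_index = q->host_index; /* snapshot once */

	if (((put_index + 1) % q->entry_count) == q->hba_index)
		return -1;
	q->host_index = (put_index + 1) % q->entry_count;
	return (int)put_index;
}
```

Snapshotting once matters in the real routine because `host_index` is read several times; reusing one local keeps the full-check and the advance agreeing even if the field is touched in between.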
+13 -6
drivers/scsi/lpfc/lpfc_sli4.h
··· 24 24 #define LPFC_XRI_EXCH_BUSY_WAIT_TMO 10000 25 25 #define LPFC_XRI_EXCH_BUSY_WAIT_T1 10 26 26 #define LPFC_XRI_EXCH_BUSY_WAIT_T2 30000 27 - #define LPFC_RELEASE_NOTIFICATION_INTERVAL 32 28 27 #define LPFC_RPI_LOW_WATER_MARK 10 29 28 30 29 #define LPFC_UNREG_FCF 1 ··· 154 155 uint32_t entry_count; /* Number of entries to support on the queue */ 155 156 uint32_t entry_size; /* Size of each queue entry. */ 156 157 uint32_t entry_repost; /* Count of entries before doorbell is rung */ 157 - #define LPFC_QUEUE_MIN_REPOST 8 158 + #define LPFC_EQ_REPOST 8 159 + #define LPFC_MQ_REPOST 8 160 + #define LPFC_CQ_REPOST 64 161 + #define LPFC_RQ_REPOST 64 162 + #define LPFC_RELEASE_NOTIFICATION_INTERVAL 32 /* For WQs */ 158 163 uint32_t queue_id; /* Queue ID assigned by the hardware */ 159 164 uint32_t assoc_qid; /* Queue ID associated with, for CQ/WQ/MQ */ 160 165 uint32_t page_count; /* Number of pages allocated for this queue */ ··· 198 195 /* defines for RQ stats */ 199 196 #define RQ_no_posted_buf q_cnt_1 200 197 #define RQ_no_buf_found q_cnt_2 201 - #define RQ_buf_trunc q_cnt_3 198 + #define RQ_buf_posted q_cnt_3 202 199 #define RQ_rcv_buf q_cnt_4 203 200 204 201 uint64_t isr_timestamp; ··· 620 617 uint16_t scsi_xri_start; 621 618 uint16_t els_xri_cnt; 622 619 uint16_t nvmet_xri_cnt; 620 + uint16_t nvmet_ctx_cnt; 621 + uint16_t nvmet_io_wait_cnt; 622 + uint16_t nvmet_io_wait_total; 623 623 struct list_head lpfc_els_sgl_list; 624 624 struct list_head lpfc_abts_els_sgl_list; 625 625 struct list_head lpfc_nvmet_sgl_list; 626 626 struct list_head lpfc_abts_nvmet_ctx_list; 627 627 struct list_head lpfc_abts_scsi_buf_list; 628 628 struct list_head lpfc_abts_nvme_buf_list; 629 + struct list_head lpfc_nvmet_ctx_list; 630 + struct list_head lpfc_nvmet_io_wait_list; 629 631 struct lpfc_sglq **lpfc_sglq_active_list; 630 632 struct list_head lpfc_rpi_hdr_list; 631 633 unsigned long *rpi_bmask; ··· 662 654 spinlock_t abts_scsi_buf_list_lock; /* list of aborted SCSI IOs */ 663 
655 spinlock_t sgl_list_lock; /* list of aborted els IOs */ 664 656 spinlock_t nvmet_io_lock; 657 + spinlock_t nvmet_io_wait_lock; /* IOs waiting for ctx resources */ 665 658 uint32_t physical_port; 666 659 667 660 /* CPU to vector mapping information */ ··· 670 661 uint16_t num_online_cpu; 671 662 uint16_t num_present_cpu; 672 663 uint16_t curr_disp_cpu; 673 - 674 - uint16_t nvmet_mrq_post_idx; 675 664 }; 676 665 677 666 enum lpfc_sge_type { ··· 705 698 struct lpfc_dmabuf *dmabuf; 706 699 uint32_t page_count; 707 700 uint32_t start_rpi; 701 + uint16_t next_rpi; 708 702 }; 709 703 710 704 struct lpfc_rsrc_blks { ··· 770 762 int lpfc_mrq_create(struct lpfc_hba *phba, struct lpfc_queue **hrqp, 771 763 struct lpfc_queue **drqp, struct lpfc_queue **cqp, 772 764 uint32_t subtype); 773 - void lpfc_rq_adjust_repost(struct lpfc_hba *, struct lpfc_queue *, int); 774 765 int lpfc_eq_destroy(struct lpfc_hba *, struct lpfc_queue *); 775 766 int lpfc_cq_destroy(struct lpfc_hba *, struct lpfc_queue *); 776 767 int lpfc_mq_destroy(struct lpfc_hba *, struct lpfc_queue *);
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "11.2.0.12" 23 + #define LPFC_DRIVER_VERSION "11.2.0.14" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+1 -1
drivers/scsi/scsi_lib.c
··· 1851 1851 1852 1852 /* zero out the cmd, except for the embedded scsi_request */ 1853 1853 memset((char *)cmd + sizeof(cmd->req), 0, 1854 - sizeof(*cmd) - sizeof(cmd->req)); 1854 + sizeof(*cmd) - sizeof(cmd->req) + shost->hostt->cmd_size); 1855 1855 1856 1856 req->special = cmd; 1857 1857
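The one-line scsi_lib.c change widens the re-initialization `memset` so it also clears the `cmd_size` bytes of per-driver payload allocated past `struct scsi_cmnd`, while still preserving the embedded `scsi_request` at the front. A self-contained sketch of that "zero everything after the embedded header" arithmetic, with hypothetical struct names standing in for the SCSI types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical layout mirroring struct scsi_cmnd: an embedded request
 * header that must survive reuse, driver-owned state that must be
 * zeroed, plus cmd_size extra bytes allocated past the struct for the
 * LLD's private per-command data. */
struct req_hdr { int tag; };
struct cmd {
	struct req_hdr req; /* preserved across reuse */
	int state;          /* must be zeroed */
};

/* Zero everything after the embedded header, including the cmd_size
 * trailing bytes -- the fix in the hunk above. */
static void cmd_reinit(struct cmd *c, size_t cmd_size)
{
	memset((char *)c + sizeof(c->req), 0,
	       sizeof(*c) - sizeof(c->req) + cmd_size);
}
```

Without the `+ cmd_size` term, a recycled command would carry stale LLD private data, which is exactly the bug the hunk addresses.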
+47 -16
drivers/scsi/sd.c
··· 827 827 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk); 828 828 u64 sector = blk_rq_pos(rq) >> (ilog2(sdp->sector_size) - 9); 829 829 u32 nr_sectors = blk_rq_sectors(rq) >> (ilog2(sdp->sector_size) - 9); 830 + int ret; 830 831 831 832 if (!(rq->cmd_flags & REQ_NOUNMAP)) { 832 833 switch (sdkp->zeroing_mode) { 833 834 case SD_ZERO_WS16_UNMAP: 834 - return sd_setup_write_same16_cmnd(cmd, true); 835 + ret = sd_setup_write_same16_cmnd(cmd, true); 836 + goto out; 835 837 case SD_ZERO_WS10_UNMAP: 836 - return sd_setup_write_same10_cmnd(cmd, true); 838 + ret = sd_setup_write_same10_cmnd(cmd, true); 839 + goto out; 837 840 } 838 841 } 839 842 840 843 if (sdp->no_write_same) 841 844 return BLKPREP_INVALID; 845 + 842 846 if (sdkp->ws16 || sector > 0xffffffff || nr_sectors > 0xffff) 843 - return sd_setup_write_same16_cmnd(cmd, false); 844 - return sd_setup_write_same10_cmnd(cmd, false); 847 + ret = sd_setup_write_same16_cmnd(cmd, false); 848 + else 849 + ret = sd_setup_write_same10_cmnd(cmd, false); 850 + 851 + out: 852 + if (sd_is_zoned(sdkp) && ret == BLKPREP_OK) 853 + return sd_zbc_write_lock_zone(cmd); 854 + 855 + return ret; 845 856 } 846 857 847 858 static void sd_config_write_same(struct scsi_disk *sdkp) ··· 959 948 rq->__data_len = sdp->sector_size; 960 949 ret = scsi_init_io(cmd); 961 950 rq->__data_len = nr_bytes; 951 + 952 + if (sd_is_zoned(sdkp) && ret != BLKPREP_OK) 953 + sd_zbc_write_unlock_zone(cmd); 954 + 962 955 return ret; 963 956 } 964 957 ··· 1582 1567 return retval; 1583 1568 } 1584 1569 1585 - static int sd_sync_cache(struct scsi_disk *sdkp) 1570 + static int sd_sync_cache(struct scsi_disk *sdkp, struct scsi_sense_hdr *sshdr) 1586 1571 { 1587 1572 int retries, res; 1588 1573 struct scsi_device *sdp = sdkp->device; 1589 1574 const int timeout = sdp->request_queue->rq_timeout 1590 1575 * SD_FLUSH_TIMEOUT_MULTIPLIER; 1591 - struct scsi_sense_hdr sshdr; 1576 + struct scsi_sense_hdr my_sshdr; 1592 1577 1593 1578 if (!scsi_device_online(sdp)) 1594 1579 
return -ENODEV; 1580 + 1581 + /* caller might not be interested in sense, but we need it */ 1582 + if (!sshdr) 1583 + sshdr = &my_sshdr; 1595 1584 1596 1585 for (retries = 3; retries > 0; --retries) { 1597 1586 unsigned char cmd[10] = { 0 }; ··· 1605 1586 * Leave the rest of the command zero to indicate 1606 1587 * flush everything. 1607 1588 */ 1608 - res = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, &sshdr, 1589 + res = scsi_execute(sdp, cmd, DMA_NONE, NULL, 0, NULL, sshdr, 1609 1590 timeout, SD_MAX_RETRIES, 0, RQF_PM, NULL); 1610 1591 if (res == 0) 1611 1592 break; ··· 1615 1596 sd_print_result(sdkp, "Synchronize Cache(10) failed", res); 1616 1597 1617 1598 if (driver_byte(res) & DRIVER_SENSE) 1618 - sd_print_sense_hdr(sdkp, &sshdr); 1599 + sd_print_sense_hdr(sdkp, sshdr); 1600 + 1619 1601 /* we need to evaluate the error return */ 1620 - if (scsi_sense_valid(&sshdr) && 1621 - (sshdr.asc == 0x3a || /* medium not present */ 1622 - sshdr.asc == 0x20)) /* invalid command */ 1602 + if (scsi_sense_valid(sshdr) && 1603 + (sshdr->asc == 0x3a || /* medium not present */ 1604 + sshdr->asc == 0x20)) /* invalid command */ 1623 1605 /* this is no error here */ 1624 1606 return 0; 1625 1607 ··· 3464 3444 3465 3445 if (sdkp->WCE && sdkp->media_present) { 3466 3446 sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n"); 3467 - sd_sync_cache(sdkp); 3447 + sd_sync_cache(sdkp, NULL); 3468 3448 } 3469 3449 3470 3450 if (system_state != SYSTEM_RESTART && sdkp->device->manage_start_stop) { ··· 3476 3456 static int sd_suspend_common(struct device *dev, bool ignore_stop_errors) 3477 3457 { 3478 3458 struct scsi_disk *sdkp = dev_get_drvdata(dev); 3459 + struct scsi_sense_hdr sshdr; 3479 3460 int ret = 0; 3480 3461 3481 3462 if (!sdkp) /* E.g.: runtime suspend following sd_remove() */ ··· 3484 3463 3485 3464 if (sdkp->WCE && sdkp->media_present) { 3486 3465 sd_printk(KERN_NOTICE, sdkp, "Synchronizing SCSI cache\n"); 3487 - ret = sd_sync_cache(sdkp); 3466 + ret = 
sd_sync_cache(sdkp, &sshdr); 3467 + 3488 3468 if (ret) { 3489 3469 /* ignore OFFLINE device */ 3490 3470 if (ret == -ENODEV) 3491 - ret = 0; 3492 - goto done; 3471 + return 0; 3472 + 3473 + if (!scsi_sense_valid(&sshdr) || 3474 + sshdr.sense_key != ILLEGAL_REQUEST) 3475 + return ret; 3476 + 3477 + /* 3478 + * sshdr.sense_key == ILLEGAL_REQUEST means this drive 3479 + * doesn't support sync. There's not much to do and 3480 + * suspend shouldn't fail. 3481 + */ 3482 + ret = 0; 3493 3483 } 3494 3484 } 3495 3485 ··· 3512 3480 ret = 0; 3513 3481 } 3514 3482 3515 - done: 3516 3483 return ret; 3517 3484 } 3518 3485
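The sd_sync_cache() rework above uses a common C idiom: an optional out-parameter that falls back to a stack local when the caller passes NULL, so the function body can dereference it unconditionally. A minimal userspace sketch of the same idiom (names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stddef.h>

/* Double a non-negative input and optionally report a status code.
 * If the caller passes NULL for status, point it at a local instead,
 * mirroring the my_sshdr fallback in sd_sync_cache(). */
static int double_checked(int input, int *status)
{
	int my_status;

	/* caller might not be interested in status, but we need it */
	if (!status)
		status = &my_status;

	*status = (input < 0) ? -1 : 0;
	return (*status == 0) ? input * 2 : 0;
}
```

In the diff, sd_suspend_common() passes a real scsi_sense_hdr so it can recognize ILLEGAL_REQUEST (drive does not support sync), while sd_shutdown() passes NULL because it ignores the sense data.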
+3 -2
drivers/scsi/sg.c
··· 2074 2074 if ((1 == resp->done) && (!resp->sg_io_owned) && 2075 2075 ((-1 == pack_id) || (resp->header.pack_id == pack_id))) { 2076 2076 resp->done = 2; /* guard against other readers */ 2077 - break; 2077 + write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2078 + return resp; 2078 2079 } 2079 2080 } 2080 2081 write_unlock_irqrestore(&sfp->rq_list_lock, iflags); 2081 - return resp; 2082 + return NULL; 2082 2083 } 2083 2084 2084 2085 /* always adds to end of list */
+7
drivers/scsi/ufs/ufshcd.c
··· 7698 7698 ufshcd_add_spm_lvl_sysfs_nodes(hba); 7699 7699 } 7700 7700 7701 + static inline void ufshcd_remove_sysfs_nodes(struct ufs_hba *hba) 7702 + { 7703 + device_remove_file(hba->dev, &hba->rpm_lvl_attr); 7704 + device_remove_file(hba->dev, &hba->spm_lvl_attr); 7705 + } 7706 + 7701 7707 /** 7702 7708 * ufshcd_shutdown - shutdown routine 7703 7709 * @hba: per adapter instance ··· 7741 7735 */ 7742 7736 void ufshcd_remove(struct ufs_hba *hba) 7743 7737 { 7738 + ufshcd_remove_sysfs_nodes(hba); 7744 7739 scsi_remove_host(hba->host); 7745 7740 /* disable interrupts */ 7746 7741 ufshcd_disable_intr(hba, hba->intr_mask);
+24 -6
drivers/target/iscsi/iscsi_target.c
··· 3790 3790 { 3791 3791 int ret = 0; 3792 3792 struct iscsi_conn *conn = arg; 3793 + bool conn_freed = false; 3794 + 3793 3795 /* 3794 3796 * Allow ourselves to be interrupted by SIGINT so that a 3795 3797 * connection recovery / failure event can be triggered externally. ··· 3817 3815 goto transport_err; 3818 3816 3819 3817 ret = iscsit_handle_response_queue(conn); 3820 - if (ret == 1) 3818 + if (ret == 1) { 3821 3819 goto get_immediate; 3822 - else if (ret == -ECONNRESET) 3820 + } else if (ret == -ECONNRESET) { 3821 + conn_freed = true; 3823 3822 goto out; 3824 - else if (ret < 0) 3823 + } else if (ret < 0) { 3825 3824 goto transport_err; 3825 + } 3826 3826 } 3827 3827 3828 3828 transport_err: ··· 3834 3830 * responsible for cleaning up the early connection failure. 3835 3831 */ 3836 3832 if (conn->conn_state != TARG_CONN_STATE_IN_LOGIN) 3837 - iscsit_take_action_for_connection_exit(conn); 3833 + iscsit_take_action_for_connection_exit(conn, &conn_freed); 3838 3834 out: 3835 + if (!conn_freed) { 3836 + while (!kthread_should_stop()) { 3837 + msleep(100); 3838 + } 3839 + } 3839 3840 return 0; 3840 3841 } 3841 3842 ··· 4013 4004 { 4014 4005 int rc; 4015 4006 struct iscsi_conn *conn = arg; 4007 + bool conn_freed = false; 4016 4008 4017 4009 /* 4018 4010 * Allow ourselves to be interrupted by SIGINT so that a ··· 4026 4016 */ 4027 4017 rc = wait_for_completion_interruptible(&conn->rx_login_comp); 4028 4018 if (rc < 0 || iscsi_target_check_conn_state(conn)) 4029 - return 0; 4019 + goto out; 4030 4020 4031 4021 if (!conn->conn_transport->iscsit_get_rx_pdu) 4032 4022 return 0; ··· 4035 4025 4036 4026 if (!signal_pending(current)) 4037 4027 atomic_set(&conn->transport_failed, 1); 4038 - iscsit_take_action_for_connection_exit(conn); 4028 + iscsit_take_action_for_connection_exit(conn, &conn_freed); 4029 + 4030 + out: 4031 + if (!conn_freed) { 4032 + while (!kthread_should_stop()) { 4033 + msleep(100); 4034 + } 4035 + } 4036 + 4039 4037 return 0; 4040 4038 } 4041 4039
+5 -1
drivers/target/iscsi/iscsi_target_erl0.c
··· 930 930 } 931 931 } 932 932 933 - void iscsit_take_action_for_connection_exit(struct iscsi_conn *conn) 933 + void iscsit_take_action_for_connection_exit(struct iscsi_conn *conn, bool *conn_freed) 934 934 { 935 + *conn_freed = false; 936 + 935 937 spin_lock_bh(&conn->state_lock); 936 938 if (atomic_read(&conn->connection_exit)) { 937 939 spin_unlock_bh(&conn->state_lock); ··· 944 942 if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) { 945 943 spin_unlock_bh(&conn->state_lock); 946 944 iscsit_close_connection(conn); 945 + *conn_freed = true; 947 946 return; 948 947 } 949 948 ··· 958 955 spin_unlock_bh(&conn->state_lock); 959 956 960 957 iscsit_handle_connection_cleanup(conn); 958 + *conn_freed = true; 961 959 }
+1 -1
drivers/target/iscsi/iscsi_target_erl0.h
··· 15 15 extern void iscsit_connection_reinstatement_rcfr(struct iscsi_conn *); 16 16 extern void iscsit_cause_connection_reinstatement(struct iscsi_conn *, int); 17 17 extern void iscsit_fall_back_to_erl0(struct iscsi_session *); 18 - extern void iscsit_take_action_for_connection_exit(struct iscsi_conn *); 18 + extern void iscsit_take_action_for_connection_exit(struct iscsi_conn *, bool *); 19 19 20 20 #endif /*** ISCSI_TARGET_ERL0_H ***/
+4
drivers/target/iscsi/iscsi_target_login.c
··· 1464 1464 break; 1465 1465 } 1466 1466 1467 + while (!kthread_should_stop()) { 1468 + msleep(100); 1469 + } 1470 + 1467 1471 return 0; 1468 1472 }
+133 -63
drivers/target/iscsi/iscsi_target_nego.c
··· 493 493 494 494 static int iscsi_target_do_login(struct iscsi_conn *, struct iscsi_login *); 495 495 496 - static bool iscsi_target_sk_state_check(struct sock *sk) 496 + static bool __iscsi_target_sk_check_close(struct sock *sk) 497 497 { 498 498 if (sk->sk_state == TCP_CLOSE_WAIT || sk->sk_state == TCP_CLOSE) { 499 - pr_debug("iscsi_target_sk_state_check: TCP_CLOSE_WAIT|TCP_CLOSE," 499 + pr_debug("__iscsi_target_sk_check_close: TCP_CLOSE_WAIT|TCP_CLOSE," 500 500 "returning FALSE\n"); 501 - return false; 501 + return true; 502 502 } 503 - return true; 503 + return false; 504 + } 505 + 506 + static bool iscsi_target_sk_check_close(struct iscsi_conn *conn) 507 + { 508 + bool state = false; 509 + 510 + if (conn->sock) { 511 + struct sock *sk = conn->sock->sk; 512 + 513 + read_lock_bh(&sk->sk_callback_lock); 514 + state = (__iscsi_target_sk_check_close(sk) || 515 + test_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags)); 516 + read_unlock_bh(&sk->sk_callback_lock); 517 + } 518 + return state; 519 + } 520 + 521 + static bool iscsi_target_sk_check_flag(struct iscsi_conn *conn, unsigned int flag) 522 + { 523 + bool state = false; 524 + 525 + if (conn->sock) { 526 + struct sock *sk = conn->sock->sk; 527 + 528 + read_lock_bh(&sk->sk_callback_lock); 529 + state = test_bit(flag, &conn->login_flags); 530 + read_unlock_bh(&sk->sk_callback_lock); 531 + } 532 + return state; 533 + } 534 + 535 + static bool iscsi_target_sk_check_and_clear(struct iscsi_conn *conn, unsigned int flag) 536 + { 537 + bool state = false; 538 + 539 + if (conn->sock) { 540 + struct sock *sk = conn->sock->sk; 541 + 542 + write_lock_bh(&sk->sk_callback_lock); 543 + state = (__iscsi_target_sk_check_close(sk) || 544 + test_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags)); 545 + if (!state) 546 + clear_bit(flag, &conn->login_flags); 547 + write_unlock_bh(&sk->sk_callback_lock); 548 + } 549 + return state; 504 550 } 505 551 506 552 static void iscsi_target_login_drop(struct iscsi_conn *conn, struct iscsi_login 
*login) ··· 586 540 587 541 pr_debug("entering iscsi_target_do_login_rx, conn: %p, %s:%d\n", 588 542 conn, current->comm, current->pid); 543 + /* 544 + * If iscsi_target_do_login_rx() has been invoked by ->sk_data_ready() 545 + * before initial PDU processing in iscsi_target_start_negotiation() 546 + * has completed, go ahead and retry until it's cleared. 547 + * 548 + * Otherwise if the TCP connection drops while this is occurring, 549 + * iscsi_target_start_negotiation() will detect the failure, call 550 + * cancel_delayed_work_sync(&conn->login_work), and clean up the 551 + * remaining iscsi connection resources from iscsi_np process context. 552 + */ 553 + if (iscsi_target_sk_check_flag(conn, LOGIN_FLAGS_INITIAL_PDU)) { 554 + schedule_delayed_work(&conn->login_work, msecs_to_jiffies(10)); 555 + return; 556 + } 589 557 590 558 spin_lock(&tpg->tpg_state_lock); 591 559 state = (tpg->tpg_state == TPG_STATE_ACTIVE); ··· 607 547 608 548 if (!state) { 609 549 pr_debug("iscsi_target_do_login_rx: tpg_state != TPG_STATE_ACTIVE\n"); 610 - iscsi_target_restore_sock_callbacks(conn); 611 - iscsi_target_login_drop(conn, login); 612 - iscsit_deaccess_np(np, tpg, tpg_np); 613 - return; 550 + goto err; 614 551 } 615 552 616 - if (conn->sock) { 617 - struct sock *sk = conn->sock->sk; 618 - 619 - read_lock_bh(&sk->sk_callback_lock); 620 - state = iscsi_target_sk_state_check(sk); 621 - read_unlock_bh(&sk->sk_callback_lock); 622 - 623 - if (!state) { 624 - pr_debug("iscsi_target_do_login_rx, TCP state CLOSE\n"); 625 - iscsi_target_restore_sock_callbacks(conn); 626 - iscsi_target_login_drop(conn, login); 627 - iscsit_deaccess_np(np, tpg, tpg_np); 628 - return; 629 - } 553 + if (iscsi_target_sk_check_close(conn)) { 554 + pr_debug("iscsi_target_do_login_rx, TCP state CLOSE\n"); 555 + goto err; 630 556 } 631 557 632 558 conn->login_kworker = current; ··· 630 584 flush_signals(current); 631 585 conn->login_kworker = NULL; 632 586 633 - if (rc < 0) { 634 -
iscsi_target_restore_sock_callbacks(conn); 635 - iscsi_target_login_drop(conn, login); 636 - iscsit_deaccess_np(np, tpg, tpg_np); 637 - return; 638 - } 587 + if (rc < 0) 588 + goto err; 639 589 640 590 pr_debug("iscsi_target_do_login_rx after rx_login_io, %p, %s:%d\n", 641 591 conn, current->comm, current->pid); 642 592 643 593 rc = iscsi_target_do_login(conn, login); 644 594 if (rc < 0) { 645 - iscsi_target_restore_sock_callbacks(conn); 646 - iscsi_target_login_drop(conn, login); 647 - iscsit_deaccess_np(np, tpg, tpg_np); 595 + goto err; 648 596 } else if (!rc) { 649 - if (conn->sock) { 650 - struct sock *sk = conn->sock->sk; 651 - 652 - write_lock_bh(&sk->sk_callback_lock); 653 - clear_bit(LOGIN_FLAGS_READ_ACTIVE, &conn->login_flags); 654 - write_unlock_bh(&sk->sk_callback_lock); 655 - } 597 + if (iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_READ_ACTIVE)) 598 + goto err; 656 599 } else if (rc == 1) { 657 600 iscsi_target_nego_release(conn); 658 601 iscsi_post_login_handler(np, conn, zero_tsih); 659 602 iscsit_deaccess_np(np, tpg, tpg_np); 660 603 } 604 + return; 605 + 606 + err: 607 + iscsi_target_restore_sock_callbacks(conn); 608 + iscsi_target_login_drop(conn, login); 609 + iscsit_deaccess_np(np, tpg, tpg_np); 661 610 } 662 611 663 612 static void iscsi_target_do_cleanup(struct work_struct *work) ··· 700 659 orig_state_change(sk); 701 660 return; 702 661 } 662 + state = __iscsi_target_sk_check_close(sk); 663 + pr_debug("__iscsi_target_sk_state_change: state: %d\n", state); 664 + 703 665 if (test_bit(LOGIN_FLAGS_READ_ACTIVE, &conn->login_flags)) { 704 666 pr_debug("Got LOGIN_FLAGS_READ_ACTIVE=1 sk_state_change" 705 667 " conn: %p\n", conn); 668 + if (state) 669 + set_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags); 706 670 write_unlock_bh(&sk->sk_callback_lock); 707 671 orig_state_change(sk); 708 672 return; 709 673 } 710 - if (test_and_set_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags)) { 674 + if (test_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags)) { 711 675
pr_debug("Got LOGIN_FLAGS_CLOSED=1 sk_state_change conn: %p\n", 712 676 conn); 713 677 write_unlock_bh(&sk->sk_callback_lock); 714 678 orig_state_change(sk); 715 679 return; 716 680 } 717 - 718 - state = iscsi_target_sk_state_check(sk); 719 - write_unlock_bh(&sk->sk_callback_lock); 720 - 721 - pr_debug("iscsi_target_sk_state_change: state: %d\n", state); 722 - 723 - if (!state) { 681 + /* 682 + * If the TCP connection has dropped, go ahead and set LOGIN_FLAGS_CLOSED, 683 + * but only queue conn->login_work -> iscsi_target_do_login_rx() 684 + * processing if LOGIN_FLAGS_INITIAL_PDU has already been cleared. 685 + * 686 + * When iscsi_target_do_login_rx() runs, iscsi_target_sk_check_close() 687 + * will detect the dropped TCP connection from delayed workqueue context. 688 + * 689 + * If LOGIN_FLAGS_INITIAL_PDU is still set, which means the initial 690 + * iscsi_target_start_negotiation() is running, iscsi_target_do_login() 691 + * via iscsi_target_sk_check_close() or iscsi_target_start_negotiation() 692 + * via iscsi_target_sk_check_and_clear() is responsible for detecting the 693 + * dropped TCP connection in iscsi_np process context, and cleaning up 694 + * the remaining iscsi connection resources. 
695 + */ 696 + if (state) { 724 697 pr_debug("iscsi_target_sk_state_change got failed state\n"); 725 - schedule_delayed_work(&conn->login_cleanup_work, 0); 698 + set_bit(LOGIN_FLAGS_CLOSED, &conn->login_flags); 699 + state = test_bit(LOGIN_FLAGS_INITIAL_PDU, &conn->login_flags); 700 + write_unlock_bh(&sk->sk_callback_lock); 701 + 702 + orig_state_change(sk); 703 + 704 + if (!state) 705 + schedule_delayed_work(&conn->login_work, 0); 726 706 return; 727 707 } 708 + write_unlock_bh(&sk->sk_callback_lock); 709 + 728 710 orig_state_change(sk); 729 711 } 730 712 ··· 1010 946 if (iscsi_target_handle_csg_one(conn, login) < 0) 1011 947 return -1; 1012 948 if (login_rsp->flags & ISCSI_FLAG_LOGIN_TRANSIT) { 949 + /* 950 + * Check to make sure the TCP connection has not 951 + * dropped asynchronously while session reinstatement 952 + * was occurring in this kthread context, before 953 + * transitioning to full feature phase operation. 954 + */ 955 + if (iscsi_target_sk_check_close(conn)) 956 + return -1; 957 + 1013 958 login->tsih = conn->sess->tsih; 1014 959 login->login_complete = 1; 1015 960 iscsi_target_restore_sock_callbacks(conn); ··· 1043 970 login_rsp->flags &= ~ISCSI_FLAG_LOGIN_NEXT_STAGE_MASK; 1044 971 } 1045 972 break; 1046 - } 1047 - 1048 - if (conn->sock) { 1049 - struct sock *sk = conn->sock->sk; 1050 - bool state; 1051 - 1052 - read_lock_bh(&sk->sk_callback_lock); 1053 - state = iscsi_target_sk_state_check(sk); 1054 - read_unlock_bh(&sk->sk_callback_lock); 1055 - 1056 - if (!state) { 1057 - pr_debug("iscsi_target_do_login() failed state for" 1058 - " conn: %p\n", conn); 1059 - return -1; 1060 - } 1061 973 } 1062 974 1063 975 return 0; ··· 1313 1255 1314 1256 write_lock_bh(&sk->sk_callback_lock); 1315 1257 set_bit(LOGIN_FLAGS_READY, &conn->login_flags); 1258 + set_bit(LOGIN_FLAGS_INITIAL_PDU, &conn->login_flags); 1316 1259 write_unlock_bh(&sk->sk_callback_lock); 1317 1260 } 1318 - 1261 + /* 1262 + * If iscsi_target_do_login returns zero to signal more PDU 1263 + *
exchanges are required to complete the login, go ahead and 1264 + * clear LOGIN_FLAGS_INITIAL_PDU but only if the TCP connection 1265 + * is still active. 1266 + * 1267 + * Otherwise if TCP connection dropped asynchronously, go ahead 1268 + * and perform connection cleanup now. 1269 + */ 1319 1270 ret = iscsi_target_do_login(conn, login); 1271 + if (!ret && iscsi_target_sk_check_and_clear(conn, LOGIN_FLAGS_INITIAL_PDU)) 1272 + ret = -1; 1273 + 1320 1274 if (ret < 0) { 1321 1275 cancel_delayed_work_sync(&conn->login_work); 1322 1276 cancel_delayed_work_sync(&conn->login_cleanup_work);
+18 -5
drivers/target/target_core_transport.c
··· 1160 1160 if (cmd->unknown_data_length) { 1161 1161 cmd->data_length = size; 1162 1162 } else if (size != cmd->data_length) { 1163 - pr_warn("TARGET_CORE[%s]: Expected Transfer Length:" 1163 + pr_warn_ratelimited("TARGET_CORE[%s]: Expected Transfer Length:" 1164 1164 " %u does not match SCSI CDB Length: %u for SAM Opcode:" 1165 1165 " 0x%02x\n", cmd->se_tfo->get_fabric_name(), 1166 1166 cmd->data_length, size, cmd->t_task_cdb[0]); 1167 1167 1168 - if (cmd->data_direction == DMA_TO_DEVICE && 1169 - cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) { 1170 - pr_err("Rejecting underflow/overflow WRITE data\n"); 1171 - return TCM_INVALID_CDB_FIELD; 1168 + if (cmd->data_direction == DMA_TO_DEVICE) { 1169 + if (cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) { 1170 + pr_err_ratelimited("Rejecting underflow/overflow" 1171 + " for WRITE data CDB\n"); 1172 + return TCM_INVALID_CDB_FIELD; 1173 + } 1174 + /* 1175 + * Some fabric drivers like iscsi-target still expect to 1176 + * always reject overflow writes. Reject this case until 1177 + * full fabric driver level support for overflow writes 1178 + * is introduced tree-wide. 1179 + */ 1180 + if (size > cmd->data_length) { 1181 + pr_err_ratelimited("Rejecting overflow for" 1182 + " WRITE control CDB\n"); 1183 + return TCM_INVALID_CDB_FIELD; 1184 + } 1172 1185 } 1173 1186 /* 1174 1187 * Reject READ_* or WRITE_* with overflow/underflow for
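The expanded length check above now distinguishes three write cases: data-phase CDBs reject any mismatch, control CDBs reject overflow, and control underflow is still tolerated. A small standalone sketch of that decision table (hypothetical helper, not the driver's API):

```c
#include <assert.h>

/* 0 = accept, -1 = reject (TCM_INVALID_CDB_FIELD in the driver). */
static int check_write_length(int is_data_cdb, unsigned int size,
			      unsigned int data_length)
{
	if (size == data_length)
		return 0;	/* expected transfer length matches */
	if (is_data_cdb)
		return -1;	/* any mismatch on a WRITE data CDB */
	if (size > data_length)
		return -1;	/* overflow on a WRITE control CDB */
	return 0;		/* control underflow still tolerated */
}
```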
+33 -13
drivers/target/target_core_user.c
··· 97 97 98 98 struct tcmu_dev { 99 99 struct list_head node; 100 - 100 + struct kref kref; 101 101 struct se_device se_dev; 102 102 103 103 char *name; ··· 969 969 udev = kzalloc(sizeof(struct tcmu_dev), GFP_KERNEL); 970 970 if (!udev) 971 971 return NULL; 972 + kref_init(&udev->kref); 972 973 973 974 udev->name = kstrdup(name, GFP_KERNEL); 974 975 if (!udev->name) { ··· 1146 1145 return 0; 1147 1146 } 1148 1147 1148 + static void tcmu_dev_call_rcu(struct rcu_head *p) 1149 + { 1150 + struct se_device *dev = container_of(p, struct se_device, rcu_head); 1151 + struct tcmu_dev *udev = TCMU_DEV(dev); 1152 + 1153 + kfree(udev->uio_info.name); 1154 + kfree(udev->name); 1155 + kfree(udev); 1156 + } 1157 + 1158 + static void tcmu_dev_kref_release(struct kref *kref) 1159 + { 1160 + struct tcmu_dev *udev = container_of(kref, struct tcmu_dev, kref); 1161 + struct se_device *dev = &udev->se_dev; 1162 + 1163 + call_rcu(&dev->rcu_head, tcmu_dev_call_rcu); 1164 + } 1165 + 1149 1166 static int tcmu_release(struct uio_info *info, struct inode *inode) 1150 1167 { 1151 1168 struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info); ··· 1171 1152 clear_bit(TCMU_DEV_BIT_OPEN, &udev->flags); 1172 1153 1173 1154 pr_debug("close\n"); 1174 - 1155 + /* release ref from configure */ 1156 + kref_put(&udev->kref, tcmu_dev_kref_release); 1175 1157 return 0; 1176 1158 } ··· 1292 1272 dev->dev_attrib.hw_max_sectors = 128; 1293 1273 dev->dev_attrib.hw_queue_depth = 128; 1294 1274 1275 + /* 1276 + * Get a ref in case userspace does a close on the uio device before 1277 + * LIO has initiated tcmu_free_device.
1278 + */ 1279 + kref_get(&udev->kref); 1280 + 1295 1281 ret = tcmu_netlink_event(TCMU_CMD_ADDED_DEVICE, udev->uio_info.name, 1296 1282 udev->uio_info.uio_dev->minor); 1297 1283 if (ret) ··· 1310 1284 return 0; 1311 1285 1312 1286 err_netlink: 1287 + kref_put(&udev->kref, tcmu_dev_kref_release); 1313 1288 uio_unregister_device(&udev->uio_info); 1314 1289 err_register: 1315 1290 vfree(udev->mb_addr); 1316 1291 err_vzalloc: 1317 1292 kfree(info->name); 1293 + info->name = NULL; 1318 1294 1319 1295 return ret; 1320 1296 } ··· 1328 1300 return 0; 1329 1301 } 1330 1302 return -EINVAL; 1331 - } 1332 - 1333 - static void tcmu_dev_call_rcu(struct rcu_head *p) 1334 - { 1335 - struct se_device *dev = container_of(p, struct se_device, rcu_head); 1336 - struct tcmu_dev *udev = TCMU_DEV(dev); 1337 - 1338 - kfree(udev); 1339 1303 } 1340 1304 1341 1305 static bool tcmu_dev_configured(struct tcmu_dev *udev) ··· 1384 1364 udev->uio_info.uio_dev->minor); 1385 1365 1386 1366 uio_unregister_device(&udev->uio_info); 1387 - kfree(udev->uio_info.name); 1388 - kfree(udev->name); 1389 1367 } 1390 - call_rcu(&dev->rcu_head, tcmu_dev_call_rcu); 1368 + 1369 + /* release ref from init */ 1370 + kref_put(&udev->kref, tcmu_dev_kref_release); 1391 1371 } 1392 1372 1393 1373 enum {
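The tcmu changes above hang the device's lifetime off a kref with two holders: the initial reference from allocation and an extra one taken at configure time for the open uio fd. A userspace sketch of that two-holder pattern, with a plain counter in place of struct kref and no RCU (names are illustrative):

```c
#include <assert.h>

struct obj {
	int refcount;
	int released;	/* set once the release callback has run */
};

static void obj_get(struct obj *o)
{
	o->refcount++;
}

static void obj_put(struct obj *o)
{
	if (--o->refcount == 0)
		o->released = 1;	/* tcmu_dev_kref_release() analog */
}

/* Demo of the lifecycle in the diff: returns 0 when the object
 * survives the uio close and is freed only by the final put. */
static int demo_lifetime(void)
{
	struct obj o = { 1, 0 };	/* kref_init(): ref from allocation */

	obj_get(&o);	/* configure: extra ref for the open uio fd */
	obj_put(&o);	/* tcmu_release(): close drops the configure ref */
	if (o.released)
		return -1;	/* must still be alive here */
	obj_put(&o);	/* tcmu_free_device(): drop the init ref */
	return o.released ? 0 : -1;
}
```

This is why the diff moves the kfree() calls out of tcmu_free_device() and into the release path: whichever holder drops last triggers the free.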
+5 -4
drivers/thermal/broadcom/Kconfig
··· 9 9 config BCM_NS_THERMAL 10 10 tristate "Northstar thermal driver" 11 11 depends on ARCH_BCM_IPROC || COMPILE_TEST 12 + default y if ARCH_BCM_IPROC 12 13 help 13 - Northstar is a family of SoCs that includes e.g. BCM4708, BCM47081, 14 - BCM4709 and BCM47094. It contains DMU (Device Management Unit) block 15 - with a thermal sensor that allows checking CPU temperature. This 16 - driver provides support for it. 14 + Support for the Northstar and Northstar Plus family of SoCs (e.g. 15 + BCM4708, BCM4709, BCM5301x, BCM95852X, etc). It contains DMU (Device 16 + Management Unit) block with a thermal sensor that allows checking CPU 17 + temperature.
-3
drivers/thermal/qoriq_thermal.c
··· 195 195 static int qoriq_tmu_probe(struct platform_device *pdev) 196 196 { 197 197 int ret; 198 - const struct thermal_trip *trip; 199 198 struct qoriq_tmu_data *data; 200 199 struct device_node *np = pdev->dev.of_node; 201 200 u32 site = 0; ··· 241 242 "Failed to register thermal zone device %d\n", ret); 242 243 goto err_tmu; 243 244 } 244 - 245 - trip = of_thermal_get_trip_points(data->tz); 246 245 247 246 /* Enable monitoring */ 248 247 site |= 0x1 << (15 - data->sensor_id);
+1 -1
drivers/thermal/thermal_core.c
··· 359 359 * This may be called from any critical situation to trigger a system shutdown 360 360 * after a known period of time. By default this is not scheduled. 361 361 */ 362 - void thermal_emergency_poweroff(void) 362 + static void thermal_emergency_poweroff(void) 363 363 { 364 364 int poweroff_delay_ms = CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS; 365 365 /*
+5 -9
drivers/thermal/ti-soc-thermal/ti-bandgap.c
··· 1010 1010 } 1011 1011 1012 1012 /** 1013 - * ti_bandgap_set_continous_mode() - One time enabling of continuous mode 1013 + * ti_bandgap_set_continuous_mode() - One time enabling of continuous mode 1014 1014 * @bgp: pointer to struct ti_bandgap 1015 1015 * 1016 1016 * Call this function only if HAS(MODE_CONFIG) is set. As this driver may ··· 1214 1214 } 1215 1215 1216 1216 bgp = devm_kzalloc(&pdev->dev, sizeof(*bgp), GFP_KERNEL); 1217 - if (!bgp) { 1218 - dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); 1217 + if (!bgp) 1219 1218 return ERR_PTR(-ENOMEM); 1220 - } 1221 1219 1222 1220 of_id = of_match_device(of_ti_bandgap_match, &pdev->dev); 1223 1221 if (of_id) 1224 1222 bgp->conf = of_id->data; 1225 1223 1226 1224 /* register shadow for context save and restore */ 1227 - bgp->regval = devm_kzalloc(&pdev->dev, sizeof(*bgp->regval) * 1228 - bgp->conf->sensor_count, GFP_KERNEL); 1229 - if (!bgp->regval) { 1230 - dev_err(&pdev->dev, "Unable to allocate mem for driver ref\n"); 1225 + bgp->regval = devm_kcalloc(&pdev->dev, bgp->conf->sensor_count, 1226 + sizeof(*bgp->regval), GFP_KERNEL); 1227 + if (!bgp->regval) 1231 1228 return ERR_PTR(-ENOMEM); 1232 - } 1233 1229 1234 1230 i = 0; 1235 1231 do {
+8 -9
drivers/tty/ehv_bytechan.c
··· 764 764 ehv_bc_driver = alloc_tty_driver(count); 765 765 if (!ehv_bc_driver) { 766 766 ret = -ENOMEM; 767 - goto error; 767 + goto err_free_bcs; 768 768 } 769 769 770 770 ehv_bc_driver->driver_name = "ehv-bc"; ··· 778 778 ret = tty_register_driver(ehv_bc_driver); 779 779 if (ret) { 780 780 pr_err("ehv-bc: could not register tty driver (ret=%i)\n", ret); 781 - goto error; 781 + goto err_put_tty_driver; 782 782 } 783 783 784 784 ret = platform_driver_register(&ehv_bc_tty_driver); 785 785 if (ret) { 786 786 pr_err("ehv-bc: could not register platform driver (ret=%i)\n", 787 787 ret); 788 - goto error; 788 + goto err_deregister_tty_driver; 789 789 } 790 790 791 791 return 0; 792 792 793 - error: 794 - if (ehv_bc_driver) { 795 - tty_unregister_driver(ehv_bc_driver); 796 - put_tty_driver(ehv_bc_driver); 797 - } 798 - 793 + err_deregister_tty_driver: 794 + tty_unregister_driver(ehv_bc_driver); 795 + err_put_tty_driver: 796 + put_tty_driver(ehv_bc_driver); 797 + err_free_bcs: 799 798 kfree(bcs); 800 799 801 800 return ret;
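The ehv_bytechan rework above replaces a single catch-all `error:` label with the kernel's idiomatic unwind ladder: each failure jumps to a label that undoes only the steps that already succeeded, in reverse order. A userspace sketch with hypothetical setup steps (malloc stands in for the register calls; free(NULL) is a no-op, like the NULL checks the old catch-all needed):

```c
#include <assert.h>
#include <stdlib.h>

/* fail_at selects which hypothetical step fails: 0 = none. */
static int bring_up(int fail_at)
{
	void *driver = NULL, *dev = NULL;
	int ret = 0;

	driver = malloc(8);	/* step 1: allocate the driver */
	if (!driver || fail_at == 1) {
		ret = -1;
		goto err_free_driver;
	}

	dev = malloc(8);	/* step 2: register the device */
	if (!dev || fail_at == 2) {
		ret = -1;
		goto err_free_dev;
	}

	free(dev);		/* success path: tear down for the demo */
	free(driver);
	return 0;

err_free_dev:			/* undo step 2, then fall through to step 1 */
	free(dev);
err_free_driver:
	free(driver);
	return ret;
}
```

The fall-through between labels is the point of the pattern: later labels run every earlier cleanup without duplicating code.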
+12
drivers/tty/serdev/core.c
··· 122 122 } 123 123 EXPORT_SYMBOL_GPL(serdev_device_write_wakeup); 124 124 125 + int serdev_device_write_buf(struct serdev_device *serdev, 126 + const unsigned char *buf, size_t count) 127 + { 128 + struct serdev_controller *ctrl = serdev->ctrl; 129 + 130 + if (!ctrl || !ctrl->ops->write_buf) 131 + return -EINVAL; 132 + 133 + return ctrl->ops->write_buf(ctrl, buf, count); 134 + } 135 + EXPORT_SYMBOL_GPL(serdev_device_write_buf); 136 + 125 137 int serdev_device_write(struct serdev_device *serdev, 126 138 const unsigned char *buf, size_t count, 127 139 unsigned long timeout)
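The new serdev_device_write_buf() above is a thin delegation through a controller ops table, validating that the controller and the optional operation exist before calling through. A self-contained sketch of that shape (all types and names here are illustrative, not the serdev API):

```c
#include <assert.h>
#include <stddef.h>

struct ctrl;

struct ctrl_ops {
	int (*write_buf)(struct ctrl *c, const unsigned char *buf, size_t n);
};

struct ctrl {
	const struct ctrl_ops *ops;
};

struct dev {
	struct ctrl *ctrl;
};

/* Delegate to the controller op; -22 stands in for -EINVAL when the
 * controller is missing or does not implement the operation. */
static int dev_write_buf(struct dev *d, const unsigned char *buf, size_t n)
{
	struct ctrl *c = d->ctrl;

	if (!c || !c->ops->write_buf)
		return -22;

	return c->ops->write_buf(c, buf, n);
}

static int demo_write(struct ctrl *c, const unsigned char *buf, size_t n)
{
	(void)c; (void)buf;
	return (int)n;	/* pretend everything was written */
}

static int demo(int with_op)
{
	static const struct ctrl_ops ops = { demo_write };
	static const struct ctrl_ops no_ops = { NULL };
	struct ctrl c = { with_op ? &ops : &no_ops };
	struct dev d = { &c };
	unsigned char msg[3] = { 1, 2, 3 };

	return dev_write_buf(&d, msg, sizeof(msg));
}
```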
+14 -7
drivers/tty/serdev/serdev-ttyport.c
··· 102 102 return PTR_ERR(tty); 103 103 serport->tty = tty; 104 104 105 - serport->port->client_ops = &client_ops; 106 - serport->port->client_data = ctrl; 107 - 108 105 if (tty->ops->open) 109 106 tty->ops->open(serport->tty, NULL); 110 107 else ··· 212 215 struct device *parent, 213 216 struct tty_driver *drv, int idx) 214 217 { 218 + const struct tty_port_client_operations *old_ops; 215 219 struct serdev_controller *ctrl; 216 220 struct serport *serport; 217 221 int ret; ··· 231 233 232 234 ctrl->ops = &ctrl_ops; 233 235 236 + old_ops = port->client_ops; 237 + port->client_ops = &client_ops; 238 + port->client_data = ctrl; 239 + 234 240 ret = serdev_controller_add(ctrl); 235 241 if (ret) 236 - goto err_controller_put; 242 + goto err_reset_data; 237 243 238 244 dev_info(&ctrl->dev, "tty port %s%d registered\n", drv->name, idx); 239 245 return &ctrl->dev; 240 246 241 - err_controller_put: 247 + err_reset_data: 248 + port->client_data = NULL; 249 + port->client_ops = old_ops; 242 250 serdev_controller_put(ctrl); 251 + 243 252 return ERR_PTR(ret); 244 253 } 245 254 246 - void serdev_tty_port_unregister(struct tty_port *port) 255 + int serdev_tty_port_unregister(struct tty_port *port) 247 256 { 248 257 struct serdev_controller *ctrl = port->client_data; 249 258 struct serport *serport = serdev_controller_get_drvdata(ctrl); 250 259 251 260 if (!serport) 252 - return; 261 + return -ENODEV; 253 262 254 263 serdev_controller_remove(ctrl); 255 264 port->client_ops = NULL; 256 265 port->client_data = NULL; 257 266 serdev_controller_put(ctrl); 267 + 268 + return 0; 258 269 }
+11 -10
drivers/tty/serial/8250/8250_port.c
··· 47 47 /* 48 48 * These are definitions for the Exar XR17V35X and XR17(C|D)15X 49 49 */ 50 + #define UART_EXAR_INT0 0x80 50 51 #define UART_EXAR_SLEEP 0x8b /* Sleep mode */ 51 52 #define UART_EXAR_DVID 0x8d /* Device identification */ 52 53 ··· 1338 1337 /* 1339 1338 * Check if the device is a Fintek F81216A 1340 1339 */ 1341 - if (port->type == PORT_16550A) 1340 + if (port->type == PORT_16550A && port->iotype == UPIO_PORT) 1342 1341 fintek_8250_probe(up); 1343 1342 1344 1343 if (up->capabilities != old_capabilities) { ··· 1870 1869 static int exar_handle_irq(struct uart_port *port) 1871 1870 { 1872 1871 unsigned int iir = serial_port_in(port, UART_IIR); 1873 - int ret; 1872 + int ret = 0; 1874 1873 1875 - ret = serial8250_handle_irq(port, iir); 1874 + if (((port->type == PORT_XR17V35X) || (port->type == PORT_XR17D15X)) && 1875 + serial_port_in(port, UART_EXAR_INT0) != 0) 1876 + ret = 1; 1876 1877 1877 - if ((port->type == PORT_XR17V35X) || 1878 - (port->type == PORT_XR17D15X)) { 1879 - serial_port_in(port, 0x80); 1880 - serial_port_in(port, 0x81); 1881 - serial_port_in(port, 0x82); 1882 - serial_port_in(port, 0x83); 1883 - } 1878 + ret |= serial8250_handle_irq(port, iir); 1884 1879 1885 1880 return ret; 1886 1881 } ··· 2174 2177 serial_port_in(port, UART_RX); 2175 2178 serial_port_in(port, UART_IIR); 2176 2179 serial_port_in(port, UART_MSR); 2180 + if ((port->type == PORT_XR17V35X) || (port->type == PORT_XR17D15X)) 2181 + serial_port_in(port, UART_EXAR_INT0); 2177 2182 2178 2183 /* 2179 2184 * At this point, there's no way the LSR could still be 0xff; ··· 2334 2335 serial_port_in(port, UART_RX); 2335 2336 serial_port_in(port, UART_IIR); 2336 2337 serial_port_in(port, UART_MSR); 2338 + if ((port->type == PORT_XR17V35X) || (port->type == PORT_XR17D15X)) 2339 + serial_port_in(port, UART_EXAR_INT0); 2337 2340 up->lsr_saved_flags = 0; 2338 2341 up->msr_saved_flags = 0; 2339 2342
+1
drivers/tty/serial/altera_jtaguart.c
··· 478 478 479 479 port = &altera_jtaguart_ports[i].port; 480 480 uart_remove_one_port(&altera_jtaguart_driver, port); 481 + iounmap(port->membase); 481 482 482 483 return 0; 483 484 }
+1
drivers/tty/serial/altera_uart.c
··· 615 615 if (port) { 616 616 uart_remove_one_port(&altera_uart_driver, port); 617 617 port->mapbase = 0; 618 + iounmap(port->membase); 618 619 } 619 620 620 621 return 0;
+8 -3
drivers/tty/serial/efm32-uart.c
··· 27 27 #define UARTn_FRAME 0x04 28 28 #define UARTn_FRAME_DATABITS__MASK 0x000f 29 29 #define UARTn_FRAME_DATABITS(n) ((n) - 3) 30 + #define UARTn_FRAME_PARITY__MASK 0x0300 30 31 #define UARTn_FRAME_PARITY_NONE 0x0000 31 32 #define UARTn_FRAME_PARITY_EVEN 0x0200 32 33 #define UARTn_FRAME_PARITY_ODD 0x0300 ··· 573 572 16 * (4 + (clkdiv >> 6))); 574 573 575 574 frame = efm32_uart_read32(efm_port, UARTn_FRAME); 576 - if (frame & UARTn_FRAME_PARITY_ODD) 575 + switch (frame & UARTn_FRAME_PARITY__MASK) { 576 + case UARTn_FRAME_PARITY_ODD: 577 577 *parity = 'o'; 578 - else if (frame & UARTn_FRAME_PARITY_EVEN) 578 + break; 579 + case UARTn_FRAME_PARITY_EVEN: 579 580 *parity = 'e'; 580 - else 581 + break; 582 + default: 581 583 *parity = 'n'; 584 + } 582 585 583 586 *bits = (frame & UARTn_FRAME_DATABITS__MASK) - 584 587 UARTn_FRAME_DATABITS(4) + 4;
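The efm32 fix above matters because the odd-parity value (0x0300) contains the even-parity bit (0x0200): the old `frame & UARTn_FRAME_PARITY_ODD` test was also true for even parity, so even was misreported as odd. Masking the full two-bit field and switching on the exact value makes the decode unambiguous. The same decode in standalone form (constants copied from the diff):

```c
#include <assert.h>

#define PARITY_MASK 0x0300	/* UARTn_FRAME_PARITY__MASK */
#define PARITY_EVEN 0x0200
#define PARITY_ODD  0x0300

/* Decode the two-bit parity field of a frame register value. */
static char decode_parity(unsigned int frame)
{
	switch (frame & PARITY_MASK) {
	case PARITY_ODD:
		return 'o';
	case PARITY_EVEN:
		return 'e';
	default:
		return 'n';	/* PARITY_NONE and reserved encodings */
	}
}
```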
+1 -1
drivers/tty/serial/ifx6x60.c
··· 1382 1382 static void __exit ifx_spi_exit(void) 1383 1383 { 1384 1384 /* unregister */ 1385 + spi_unregister_driver(&ifx_spi_driver); 1385 1386 tty_unregister_driver(tty_drv); 1386 1387 put_tty_driver(tty_drv); 1387 - spi_unregister_driver(&ifx_spi_driver); 1388 1388 unregister_reboot_notifier(&ifx_modem_reboot_notifier_block); 1389 1389 } 1390 1390
+12 -2
drivers/tty/serial/imx.c
··· 2184 2184 * and DCD (when they are outputs) or enables the respective 2185 2185 * irqs. So set this bit early, i.e. before requesting irqs. 2186 2186 */ 2187 - writel(UFCR_DCEDTE, sport->port.membase + UFCR); 2187 + reg = readl(sport->port.membase + UFCR); 2188 + if (!(reg & UFCR_DCEDTE)) 2189 + writel(reg | UFCR_DCEDTE, sport->port.membase + UFCR); 2188 2190 2189 2191 /* 2190 2192 * Disable UCR3_RI and UCR3_DCD irqs. They are also not ··· 2197 2195 sport->port.membase + UCR3); 2198 2196 2199 2197 } else { 2200 - writel(0, sport->port.membase + UFCR); 2198 + unsigned long ucr3 = UCR3_DSR; 2199 + 2200 + reg = readl(sport->port.membase + UFCR); 2201 + if (reg & UFCR_DCEDTE) 2202 + writel(reg & ~UFCR_DCEDTE, sport->port.membase + UFCR); 2203 + 2204 + if (!is_imx1_uart(sport)) 2205 + ucr3 |= IMX21_UCR3_RXDMUXSEL | UCR3_ADNIMP; 2206 + writel(ucr3, sport->port.membase + UCR3); 2201 2207 } 2202 2208 2203 2209 clk_disable_unprepare(sport->clk_ipg);
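The imx change above replaces a blind `writel(UFCR_DCEDTE, ...)` with a read-modify-write that touches only the DCEDTE bit, preserving the other UFCR fields. The bit manipulation in isolation (the 0x0040 bit position is an assumption for this sketch, not taken from the diff):

```c
#include <assert.h>

#define UFCR_DCEDTE 0x0040	/* assumed bit position for the sketch */

/* Enable DTE mode without clobbering the other register fields. */
static unsigned int ufcr_enable_dcedte(unsigned int ufcr)
{
	if (!(ufcr & UFCR_DCEDTE))
		ufcr |= UFCR_DCEDTE;
	return ufcr;
}

/* Disable it again, likewise preserving unrelated bits. */
static unsigned int ufcr_disable_dcedte(unsigned int ufcr)
{
	if (ufcr & UFCR_DCEDTE)
		ufcr &= ~UFCR_DCEDTE;
	return ufcr;
}
```

The conditional test before writing mirrors the diff's `if (!(reg & UFCR_DCEDTE))` guard, which also avoids a redundant register write when the bit is already in the desired state.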
+3 -3
drivers/tty/serial/serial_core.c
··· 2083 2083 mutex_lock(&port->mutex); 2084 2084 2085 2085 tty_dev = device_find_child(uport->dev, &match, serial_match_port); 2086 - if (device_may_wakeup(tty_dev)) { 2086 + if (tty_dev && device_may_wakeup(tty_dev)) { 2087 2087 if (!enable_irq_wake(uport->irq)) 2088 2088 uport->irq_wake = 1; 2089 2089 put_device(tty_dev); ··· 2782 2782 * Register the port whether it's detected or not. This allows 2783 2783 * setserial to be used to alter this port's parameters. 2784 2784 */ 2785 - tty_dev = tty_port_register_device_attr(port, drv->tty_driver, 2785 + tty_dev = tty_port_register_device_attr_serdev(port, drv->tty_driver, 2786 2786 uport->line, uport->dev, port, uport->tty_groups); 2787 2787 if (likely(!IS_ERR(tty_dev))) { 2788 2788 device_set_wakeup_capable(tty_dev, 1); ··· 2845 2845 /* 2846 2846 * Remove the devices from the tty layer 2847 2847 */ 2848 - tty_unregister_device(drv->tty_driver, uport->line); 2848 + tty_port_unregister_device(port, drv->tty_driver, uport->line); 2849 2849 2850 2850 tty = tty_port_tty_get(port); 2851 2851 if (tty) {
+70 -5
drivers/tty/tty_port.c
··· 34 34 if (!disc) 35 35 return 0; 36 36 37 + mutex_lock(&tty->atomic_write_lock); 37 38 ret = tty_ldisc_receive_buf(disc, p, (char *)f, count); 39 + mutex_unlock(&tty->atomic_write_lock); 38 40 39 41 tty_ldisc_deref(disc); 40 42 ··· 131 129 struct device *device, void *drvdata, 132 130 const struct attribute_group **attr_grp) 133 131 { 132 + tty_port_link_device(port, driver, index); 133 + return tty_register_device_attr(driver, index, device, drvdata, 134 + attr_grp); 135 + } 136 + EXPORT_SYMBOL_GPL(tty_port_register_device_attr); 137 + 138 + /** 139 + * tty_port_register_device_attr_serdev - register tty or serdev device 140 + * @port: tty_port of the device 141 + * @driver: tty_driver for this device 142 + * @index: index of the tty 143 + * @device: parent if exists, otherwise NULL 144 + * @drvdata: driver data for the device 145 + * @attr_grp: attribute group for the device 146 + * 147 + * Register a serdev or tty device depending on if the parent device has any 148 + * defined serdev clients or not. 
149 + */ 150 + struct device *tty_port_register_device_attr_serdev(struct tty_port *port, 151 + struct tty_driver *driver, unsigned index, 152 + struct device *device, void *drvdata, 153 + const struct attribute_group **attr_grp) 154 + { 134 155 struct device *dev; 135 156 136 157 tty_port_link_device(port, driver, index); 137 158 138 159 dev = serdev_tty_port_register(port, device, driver, index); 139 - if (PTR_ERR(dev) != -ENODEV) 160 + if (PTR_ERR(dev) != -ENODEV) { 140 161 /* Skip creating cdev if we registered a serdev device */ 141 162 return dev; 163 + } 142 164 143 165 return tty_register_device_attr(driver, index, device, drvdata, 144 166 attr_grp); 145 167 } 146 - EXPORT_SYMBOL_GPL(tty_port_register_device_attr); 168 + EXPORT_SYMBOL_GPL(tty_port_register_device_attr_serdev); 169 + 170 + /** 171 + * tty_port_register_device_serdev - register tty or serdev device 172 + * @port: tty_port of the device 173 + * @driver: tty_driver for this device 174 + * @index: index of the tty 175 + * @device: parent if exists, otherwise NULL 176 + * 177 + * Register a serdev or tty device depending on if the parent device has any 178 + * defined serdev clients or not. 179 + */ 180 + struct device *tty_port_register_device_serdev(struct tty_port *port, 181 + struct tty_driver *driver, unsigned index, 182 + struct device *device) 183 + { 184 + return tty_port_register_device_attr_serdev(port, driver, index, 185 + device, NULL, NULL); 186 + } 187 + EXPORT_SYMBOL_GPL(tty_port_register_device_serdev); 188 + 189 + /** 190 + * tty_port_unregister_device - deregister a tty or serdev device 191 + * @port: tty_port of the device 192 + * @driver: tty_driver for this device 193 + * @index: index of the tty 194 + * 195 + * If a tty or serdev device is registered with a call to 196 + * tty_port_register_device_serdev() then this function must be called when 197 + * the device is gone. 
198 + */ 199 + void tty_port_unregister_device(struct tty_port *port, 200 + struct tty_driver *driver, unsigned index) 201 + { 202 + int ret; 203 + 204 + ret = serdev_tty_port_unregister(port); 205 + if (ret == 0) 206 + return; 207 + 208 + tty_unregister_device(driver, index); 209 + } 210 + EXPORT_SYMBOL_GPL(tty_port_unregister_device); 147 211 148 212 int tty_port_alloc_xmit_buf(struct tty_port *port) 149 213 { ··· 257 189 /* check if last port ref was dropped before tty release */ 258 190 if (WARN_ON(port->itty)) 259 191 return; 260 - 261 - serdev_tty_port_unregister(port); 262 - 263 192 if (port->xmit_buf) 264 193 free_page((unsigned long)port->xmit_buf); 265 194 tty_port_destroy(port);
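The new tty_port_register_device_attr_serdev() helper tries serdev registration first and only creates the plain tty character device when serdev reports -ENODEV (no serdev clients described under the parent); any other error, or success, is returned as-is. A userspace sketch of that ERR_PTR-based "try the specialized path, fall back only on -ENODEV" pattern (all names and stubs hypothetical, not the kernel API):

```c
#include <errno.h>
#include <stdint.h>

/* Minimal ERR_PTR/PTR_ERR encoding in the style of include/linux/err.h. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)(intptr_t)p; }

struct device { const char *name; };

static struct device serdev_dev = { "serdev0" };
static struct device tty_dev = { "ttyS0" };

/* Stub: pretend the parent has serdev clients only when asked to. */
static void *serdev_register(int have_clients)
{
    return have_clients ? (void *)&serdev_dev : ERR_PTR(-ENODEV);
}

static void *tty_register(void) { return (void *)&tty_dev; }

/* Try serdev first; fall back to a tty cdev only when serdev says -ENODEV. */
static void *register_port(int have_clients)
{
    void *dev = serdev_register(have_clients);

    if (PTR_ERR(dev) != -ENODEV) {
        /* Success or a real error: skip creating the tty cdev. */
        return dev;
    }
    return tty_register();
}
```

The matching tty_port_unregister_device() mirrors this: it asks the serdev side to unregister first and only unregisters the tty device if no serdev device was bound.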
+5 -1
fs/ceph/file.c
··· 1671 1671 } 1672 1672 1673 1673 size = i_size_read(inode); 1674 - if (!(mode & FALLOC_FL_KEEP_SIZE)) 1674 + if (!(mode & FALLOC_FL_KEEP_SIZE)) { 1675 1675 endoff = offset + length; 1676 + ret = inode_newsize_ok(inode, endoff); 1677 + if (ret) 1678 + goto unlock; 1679 + } 1676 1680 1677 1681 if (fi->fmode & CEPH_FILE_MODE_LAZY) 1678 1682 want = CEPH_CAP_FILE_BUFFER | CEPH_CAP_FILE_LAZYIO;
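The ceph change validates the would-be file size with inode_newsize_ok() before the allocation proceeds, instead of extending i_size unconditionally. A sketch of that check-before-extend ordering, with a hypothetical fixed limit standing in for the filesystem's size checks:

```c
#include <errno.h>

#define FALLOC_FL_KEEP_SIZE 0x01
#define MAX_BYTES (1L << 40)   /* hypothetical s_maxbytes-style limit */

struct inode { long size; };

/* Stand-in for inode_newsize_ok(): reject sizes past the fs limit. */
static int newsize_ok(const struct inode *inode, long newsize)
{
    (void)inode;
    return newsize > MAX_BYTES ? -EFBIG : 0;
}

static int fallocate_sketch(struct inode *inode, int mode,
                            long offset, long length)
{
    long endoff = 0;

    if (!(mode & FALLOC_FL_KEEP_SIZE)) {
        endoff = offset + length;
        int ret = newsize_ok(inode, endoff);
        if (ret)
            return ret;          /* bail out before touching i_size */
    }
    if (endoff > inode->size)
        inode->size = endoff;
    return 0;
}
```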
+23
fs/dax.c
··· 1155 1155 } 1156 1156 1157 1157 /* 1158 + * It is possible, particularly with mixed reads & writes to private 1159 + * mappings, that we have raced with a PMD fault that overlaps with 1160 + * the PTE we need to set up. If so just return and the fault will be 1161 + * retried. 1162 + */ 1163 + if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) { 1164 + vmf_ret = VM_FAULT_NOPAGE; 1165 + goto unlock_entry; 1166 + } 1167 + 1168 + /* 1158 1169 * Note that we don't bother to use iomap_apply here: DAX required 1159 1170 * the file system block size to be equal the page size, which means 1160 1171 * that we never have to deal with more than a single extent here. ··· 1407 1396 entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD); 1408 1397 if (IS_ERR(entry)) 1409 1398 goto fallback; 1399 + 1400 + /* 1401 + * It is possible, particularly with mixed reads & writes to private 1402 + * mappings, that we have raced with a PTE fault that overlaps with 1403 + * the PMD we need to set up. If so just return and the fault will be 1404 + * retried. 1405 + */ 1406 + if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) && 1407 + !pmd_devmap(*vmf->pmd)) { 1408 + result = 0; 1409 + goto unlock_entry; 1410 + } 1410 1411 1411 1412 /* 1412 1413 * Note that we don't use iomap_apply here. We aren't doing I/O, only
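Both dax fault paths now re-check the PMD after grabbing the mapping entry: the PTE path backs off with VM_FAULT_NOPAGE when a huge entry raced in, and the PMD path returns 0 so the fault is retried when a PTE-level mapping got there first. A toy model of that "detect a racing mapping at the other granularity and retry" check (all names hypothetical, no real MM state involved):

```c
enum map_state { MAP_NONE, MAP_PTE, MAP_PMD };

#define FAULT_DONE   0
#define FAULT_RETRY  1

/* PTE-sized fault: if a huge (PMD) mapping raced in, just retry. */
static int pte_fault(enum map_state *slot)
{
    if (*slot == MAP_PMD)
        return FAULT_RETRY;      /* like returning VM_FAULT_NOPAGE */
    *slot = MAP_PTE;
    return FAULT_DONE;
}

/* PMD-sized fault: if a PTE mapping raced in, fall back and retry. */
static int pmd_fault(enum map_state *slot)
{
    if (*slot == MAP_PTE)
        return FAULT_RETRY;      /* like returning 0 to force a retry */
    *slot = MAP_PMD;
    return FAULT_DONE;
}
```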
+1 -1
fs/gfs2/log.c
··· 659 659 struct gfs2_log_header *lh; 660 660 unsigned int tail; 661 661 u32 hash; 662 - int op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META; 662 + int op_flags = REQ_PREFLUSH | REQ_FUA | REQ_META | REQ_SYNC; 663 663 struct page *page = mempool_alloc(gfs2_page_pool, GFP_NOIO); 664 664 enum gfs2_freeze_state state = atomic_read(&sdp->sd_freeze_state); 665 665 lh = page_address(page);
+1 -1
fs/nfs/namespace.c
··· 246 246 247 247 devname = nfs_devname(dentry, page, PAGE_SIZE); 248 248 if (IS_ERR(devname)) 249 - mnt = (struct vfsmount *)devname; 249 + mnt = ERR_CAST(devname); 250 250 else 251 251 mnt = nfs_do_clone_mount(NFS_SB(dentry->d_sb), devname, &mountdata); 252 252
+6 -17
fs/nfsd/nfs3xdr.c
··· 334 334 if (!p) 335 335 return 0; 336 336 p = xdr_decode_hyper(p, &args->offset); 337 + 337 338 args->count = ntohl(*p++); 338 - 339 - if (!xdr_argsize_check(rqstp, p)) 340 - return 0; 341 - 342 339 len = min(args->count, max_blocksize); 343 340 344 341 /* set up the kvec */ ··· 349 352 v++; 350 353 } 351 354 args->vlen = v; 352 - return 1; 355 + return xdr_argsize_check(rqstp, p); 353 356 } 354 357 355 358 int ··· 541 544 p = decode_fh(p, &args->fh); 542 545 if (!p) 543 546 return 0; 544 - if (!xdr_argsize_check(rqstp, p)) 545 - return 0; 546 547 args->buffer = page_address(*(rqstp->rq_next_page++)); 547 548 548 - return 1; 549 + return xdr_argsize_check(rqstp, p); 549 550 } 550 551 551 552 int ··· 569 574 args->verf = p; p += 2; 570 575 args->dircount = ~0; 571 576 args->count = ntohl(*p++); 572 - 573 - if (!xdr_argsize_check(rqstp, p)) 574 - return 0; 575 - 576 577 args->count = min_t(u32, args->count, PAGE_SIZE); 577 578 args->buffer = page_address(*(rqstp->rq_next_page++)); 578 579 579 - return 1; 580 + return xdr_argsize_check(rqstp, p); 580 581 } 581 582 582 583 int ··· 590 599 args->dircount = ntohl(*p++); 591 600 args->count = ntohl(*p++); 592 601 593 - if (!xdr_argsize_check(rqstp, p)) 594 - return 0; 595 - 596 602 len = args->count = min(args->count, max_blocksize); 597 603 while (len > 0) { 598 604 struct page *p = *(rqstp->rq_next_page++); ··· 597 609 args->buffer = page_address(p); 598 610 len -= PAGE_SIZE; 599 611 } 600 - return 1; 612 + 613 + return xdr_argsize_check(rqstp, p); 601 614 } 602 615 603 616 int
+6 -7
fs/nfsd/nfs4proc.c
··· 1769 1769 opdesc->op_get_currentstateid(cstate, &op->u); 1770 1770 op->status = opdesc->op_func(rqstp, cstate, &op->u); 1771 1771 1772 + /* Only from SEQUENCE */ 1773 + if (cstate->status == nfserr_replay_cache) { 1774 + dprintk("%s NFS4.1 replay from cache\n", __func__); 1775 + status = op->status; 1776 + goto out; 1777 + } 1772 1778 if (!op->status) { 1773 1779 if (opdesc->op_set_currentstateid) 1774 1780 opdesc->op_set_currentstateid(cstate, &op->u); ··· 1785 1779 if (need_wrongsec_check(rqstp)) 1786 1780 op->status = check_nfsd_access(current_fh->fh_export, rqstp); 1787 1781 } 1788 - 1789 1782 encode_op: 1790 - /* Only from SEQUENCE */ 1791 - if (cstate->status == nfserr_replay_cache) { 1792 - dprintk("%s NFS4.1 replay from cache\n", __func__); 1793 - status = op->status; 1794 - goto out; 1795 - } 1796 1783 if (op->status == nfserr_replay_me) { 1797 1784 op->replay = &cstate->replay_owner->so_replay; 1798 1785 nfsd4_encode_replay(&resp->xdr, op);
+3 -10
fs/nfsd/nfsxdr.c
··· 257 257 len = args->count = ntohl(*p++); 258 258 p++; /* totalcount - unused */ 259 259 260 - if (!xdr_argsize_check(rqstp, p)) 261 - return 0; 262 - 263 260 len = min_t(unsigned int, len, NFSSVC_MAXBLKSIZE_V2); 264 261 265 262 /* set up somewhere to store response. ··· 272 275 v++; 273 276 } 274 277 args->vlen = v; 275 - return 1; 278 + return xdr_argsize_check(rqstp, p); 276 279 } 277 280 278 281 int ··· 362 365 p = decode_fh(p, &args->fh); 363 366 if (!p) 364 367 return 0; 365 - if (!xdr_argsize_check(rqstp, p)) 366 - return 0; 367 368 args->buffer = page_address(*(rqstp->rq_next_page++)); 368 369 369 - return 1; 370 + return xdr_argsize_check(rqstp, p); 370 371 } 371 372 372 373 int ··· 402 407 args->cookie = ntohl(*p++); 403 408 args->count = ntohl(*p++); 404 409 args->count = min_t(u32, args->count, PAGE_SIZE); 405 - if (!xdr_argsize_check(rqstp, p)) 406 - return 0; 407 410 args->buffer = page_address(*(rqstp->rq_next_page++)); 408 411 409 - return 1; 412 + return xdr_argsize_check(rqstp, p); 410 413 } 411 414 412 415 /*
+1 -1
fs/ntfs/namei.c
··· 159 159 PTR_ERR(dent_inode)); 160 160 kfree(name); 161 161 /* Return the error code. */ 162 - return (struct dentry *)dent_inode; 162 + return ERR_CAST(dent_inode); 163 163 } 164 164 /* It is guaranteed that @name is no longer allocated at this point. */ 165 165 if (MREF_ERR(mref) == -ENOENT) {
+1 -1
fs/ocfs2/export.c
··· 119 119 120 120 if (IS_ERR(inode)) { 121 121 mlog_errno(PTR_ERR(inode)); 122 - result = (void *)inode; 122 + result = ERR_CAST(inode); 123 123 goto bail; 124 124 } 125 125
+1
fs/overlayfs/Kconfig
··· 1 1 config OVERLAY_FS 2 2 tristate "Overlay filesystem support" 3 + select EXPORTFS 3 4 help 4 5 An overlay filesystem combines two filesystems - an 'upper' filesystem 5 6 and a 'lower' filesystem. When a name exists in both filesystems, the
+17 -7
fs/overlayfs/copy_up.c
··· 300 300 return PTR_ERR(fh); 301 301 } 302 302 303 - err = ovl_do_setxattr(upper, OVL_XATTR_ORIGIN, fh, fh ? fh->len : 0, 0); 303 + /* 304 + * Do not fail when upper doesn't support xattrs. 305 + */ 306 + err = ovl_check_setxattr(dentry, upper, OVL_XATTR_ORIGIN, fh, 307 + fh ? fh->len : 0, 0); 304 308 kfree(fh); 305 309 306 310 return err; ··· 346 342 if (tmpfile) 347 343 temp = ovl_do_tmpfile(upperdir, stat->mode); 348 344 else 349 - temp = ovl_lookup_temp(workdir, dentry); 350 - err = PTR_ERR(temp); 351 - if (IS_ERR(temp)) 352 - goto out1; 353 - 345 + temp = ovl_lookup_temp(workdir); 354 346 err = 0; 355 - if (!tmpfile) 347 + if (IS_ERR(temp)) { 348 + err = PTR_ERR(temp); 349 + temp = NULL; 350 + } 351 + 352 + if (!err && !tmpfile) 356 353 err = ovl_create_real(wdir, temp, &cattr, NULL, true); 357 354 358 355 if (new_creds) { ··· 458 453 459 454 ovl_path_upper(parent, &parentpath); 460 455 upperdir = parentpath.dentry; 456 + 457 + /* Mark parent "impure" because it may now contain non-pure upper */ 458 + err = ovl_set_impure(parent, upperdir); 459 + if (err) 460 + return err; 461 461 462 462 err = vfs_getattr(&parentpath, &pstat, 463 463 STATX_ATIME | STATX_MTIME, AT_STATX_SYNC_AS_STAT);
+47 -14
fs/overlayfs/dir.c
··· 41 41 } 42 42 } 43 43 44 - struct dentry *ovl_lookup_temp(struct dentry *workdir, struct dentry *dentry) 44 + struct dentry *ovl_lookup_temp(struct dentry *workdir) 45 45 { 46 46 struct dentry *temp; 47 47 char name[20]; ··· 68 68 struct dentry *whiteout; 69 69 struct inode *wdir = workdir->d_inode; 70 70 71 - whiteout = ovl_lookup_temp(workdir, dentry); 71 + whiteout = ovl_lookup_temp(workdir); 72 72 if (IS_ERR(whiteout)) 73 73 return whiteout; 74 74 ··· 127 127 return err; 128 128 } 129 129 130 - static int ovl_set_opaque(struct dentry *dentry, struct dentry *upperdentry) 130 + static int ovl_set_opaque_xerr(struct dentry *dentry, struct dentry *upper, 131 + int xerr) 131 132 { 132 133 int err; 133 134 134 - err = ovl_do_setxattr(upperdentry, OVL_XATTR_OPAQUE, "y", 1, 0); 135 + err = ovl_check_setxattr(dentry, upper, OVL_XATTR_OPAQUE, "y", 1, xerr); 135 136 if (!err) 136 137 ovl_dentry_set_opaque(dentry); 137 138 138 139 return err; 140 + } 141 + 142 + static int ovl_set_opaque(struct dentry *dentry, struct dentry *upperdentry) 143 + { 144 + /* 145 + * Fail with -EIO when trying to create opaque dir and upper doesn't 146 + * support xattrs. ovl_rename() calls ovl_set_opaque_xerr(-EXDEV) to 147 + * return a specific error for noxattr case. 
148 + */ 149 + return ovl_set_opaque_xerr(dentry, upperdentry, -EIO); 139 150 } 140 151 141 152 /* Common operations required to be done after creation of file on upper */ ··· 171 160 static bool ovl_type_merge(struct dentry *dentry) 172 161 { 173 162 return OVL_TYPE_MERGE(ovl_path_type(dentry)); 163 + } 164 + 165 + static bool ovl_type_origin(struct dentry *dentry) 166 + { 167 + return OVL_TYPE_ORIGIN(ovl_path_type(dentry)); 174 168 } 175 169 176 170 static int ovl_create_upper(struct dentry *dentry, struct inode *inode, ··· 266 250 if (upper->d_parent->d_inode != udir) 267 251 goto out_unlock; 268 252 269 - opaquedir = ovl_lookup_temp(workdir, dentry); 253 + opaquedir = ovl_lookup_temp(workdir); 270 254 err = PTR_ERR(opaquedir); 271 255 if (IS_ERR(opaquedir)) 272 256 goto out_unlock; ··· 398 382 if (err) 399 383 goto out; 400 384 401 - newdentry = ovl_lookup_temp(workdir, dentry); 385 + newdentry = ovl_lookup_temp(workdir); 402 386 err = PTR_ERR(newdentry); 403 387 if (IS_ERR(newdentry)) 404 388 goto out_unlock; ··· 862 846 if (IS_ERR(redirect)) 863 847 return PTR_ERR(redirect); 864 848 865 - err = ovl_do_setxattr(ovl_dentry_upper(dentry), OVL_XATTR_REDIRECT, 866 - redirect, strlen(redirect), 0); 849 + err = ovl_check_setxattr(dentry, ovl_dentry_upper(dentry), 850 + OVL_XATTR_REDIRECT, 851 + redirect, strlen(redirect), -EXDEV); 867 852 if (!err) { 868 853 spin_lock(&dentry->d_lock); 869 854 ovl_dentry_set_redirect(dentry, redirect); 870 855 spin_unlock(&dentry->d_lock); 871 856 } else { 872 857 kfree(redirect); 873 - if (err == -EOPNOTSUPP) 874 - ovl_clear_redirect_dir(dentry->d_sb); 875 - else 876 - pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err); 858 + pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err); 877 859 /* Fall back to userspace copy-up */ 878 860 err = -EXDEV; 879 861 } ··· 957 943 old_upperdir = ovl_dentry_upper(old->d_parent); 958 944 new_upperdir = ovl_dentry_upper(new->d_parent); 959 945 946 + if (!samedir) { 947 
+ /* 948 + * When moving a merge dir or non-dir with copy up origin into 949 + * a new parent, we are marking the new parent dir "impure". 950 + * When ovl_iterate() iterates an "impure" upper dir, it will 951 + * lookup the origin inodes of the entries to fill d_ino. 952 + */ 953 + if (ovl_type_origin(old)) { 954 + err = ovl_set_impure(new->d_parent, new_upperdir); 955 + if (err) 956 + goto out_revert_creds; 957 + } 958 + if (!overwrite && ovl_type_origin(new)) { 959 + err = ovl_set_impure(old->d_parent, old_upperdir); 960 + if (err) 961 + goto out_revert_creds; 962 + } 963 + } 964 + 960 965 trap = lock_rename(new_upperdir, old_upperdir); 961 966 962 967 olddentry = lookup_one_len(old->d_name.name, old_upperdir, ··· 1025 992 if (ovl_type_merge_or_lower(old)) 1026 993 err = ovl_set_redirect(old, samedir); 1027 994 else if (!old_opaque && ovl_type_merge(new->d_parent)) 1028 - err = ovl_set_opaque(old, olddentry); 995 + err = ovl_set_opaque_xerr(old, olddentry, -EXDEV); 1029 996 if (err) 1030 997 goto out_dput; 1031 998 } ··· 1033 1000 if (ovl_type_merge_or_lower(new)) 1034 1001 err = ovl_set_redirect(new, samedir); 1035 1002 else if (!new_opaque && ovl_type_merge(old->d_parent)) 1036 - err = ovl_set_opaque(new, newdentry); 1003 + err = ovl_set_opaque_xerr(new, newdentry, -EXDEV); 1037 1004 if (err) 1038 1005 goto out_dput; 1039 1006 }
+11 -1
fs/overlayfs/inode.c
··· 240 240 return res; 241 241 } 242 242 243 + static bool ovl_can_list(const char *s) 244 + { 245 + /* List all non-trusted xatts */ 246 + if (strncmp(s, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN) != 0) 247 + return true; 248 + 249 + /* Never list trusted.overlay, list other trusted for superuser only */ 250 + return !ovl_is_private_xattr(s) && capable(CAP_SYS_ADMIN); 251 + } 252 + 243 253 ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size) 244 254 { 245 255 struct dentry *realdentry = ovl_dentry_real(dentry); ··· 273 263 return -EIO; 274 264 275 265 len -= slen; 276 - if (ovl_is_private_xattr(s)) { 266 + if (!ovl_can_list(s)) { 277 267 res -= slen; 278 268 memmove(s, s + slen, len); 279 269 } else {
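The new ovl_can_list() centralizes the listxattr policy: names outside the "trusted." namespace are always listed, "trusted.overlay.*" is never listed, and other "trusted." names are shown to privileged callers only. A userspace sketch of the same filter, with the capable(CAP_SYS_ADMIN) check reduced to a boolean parameter:

```c
#include <string.h>

#define TRUSTED_PREFIX      "trusted."
#define TRUSTED_PREFIX_LEN  (sizeof(TRUSTED_PREFIX) - 1)
#define OVL_PREFIX          "trusted.overlay."
#define OVL_PREFIX_LEN      (sizeof(OVL_PREFIX) - 1)

static int is_private_xattr(const char *s)
{
    return strncmp(s, OVL_PREFIX, OVL_PREFIX_LEN) == 0;
}

/* Mirror of ovl_can_list(); 'admin' stands in for capable(CAP_SYS_ADMIN). */
static int can_list(const char *s, int admin)
{
    /* List all non-trusted xattrs */
    if (strncmp(s, TRUSTED_PREFIX, TRUSTED_PREFIX_LEN) != 0)
        return 1;

    /* Never list trusted.overlay.*, other trusted for superuser only */
    return !is_private_xattr(s) && admin;
}
```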
+5 -11
fs/overlayfs/namei.c
··· 169 169 170 170 static bool ovl_is_opaquedir(struct dentry *dentry) 171 171 { 172 - int res; 173 - char val; 174 - 175 - if (!d_is_dir(dentry)) 176 - return false; 177 - 178 - res = vfs_getxattr(dentry, OVL_XATTR_OPAQUE, &val, 1); 179 - if (res == 1 && val == 'y') 180 - return true; 181 - 182 - return false; 172 + return ovl_check_dir_xattr(dentry, OVL_XATTR_OPAQUE); 183 173 } 184 174 185 175 static int ovl_lookup_single(struct dentry *base, struct ovl_lookup_data *d, ··· 341 351 unsigned int ctr = 0; 342 352 struct inode *inode = NULL; 343 353 bool upperopaque = false; 354 + bool upperimpure = false; 344 355 char *upperredirect = NULL; 345 356 struct dentry *this; 346 357 unsigned int i; ··· 386 395 poe = roe; 387 396 } 388 397 upperopaque = d.opaque; 398 + if (upperdentry && d.is_dir) 399 + upperimpure = ovl_is_impuredir(upperdentry); 389 400 } 390 401 391 402 if (!d.stop && poe->numlower) { ··· 456 463 457 464 revert_creds(old_cred); 458 465 oe->opaque = upperopaque; 466 + oe->impure = upperimpure; 459 467 oe->redirect = upperredirect; 460 468 oe->__upperdentry = upperdentry; 461 469 memcpy(oe->lowerstack, stack, sizeof(struct path) * ctr);
+14 -2
fs/overlayfs/overlayfs.h
··· 24 24 #define OVL_XATTR_OPAQUE OVL_XATTR_PREFIX "opaque" 25 25 #define OVL_XATTR_REDIRECT OVL_XATTR_PREFIX "redirect" 26 26 #define OVL_XATTR_ORIGIN OVL_XATTR_PREFIX "origin" 27 + #define OVL_XATTR_IMPURE OVL_XATTR_PREFIX "impure" 27 28 28 29 /* 29 30 * The tuple (fh,uuid) is a universal unique identifier for a copy up origin, ··· 204 203 struct ovl_dir_cache *ovl_dir_cache(struct dentry *dentry); 205 204 void ovl_set_dir_cache(struct dentry *dentry, struct ovl_dir_cache *cache); 206 205 bool ovl_dentry_is_opaque(struct dentry *dentry); 206 + bool ovl_dentry_is_impure(struct dentry *dentry); 207 207 bool ovl_dentry_is_whiteout(struct dentry *dentry); 208 208 void ovl_dentry_set_opaque(struct dentry *dentry); 209 209 bool ovl_redirect_dir(struct super_block *sb); 210 - void ovl_clear_redirect_dir(struct super_block *sb); 211 210 const char *ovl_dentry_get_redirect(struct dentry *dentry); 212 211 void ovl_dentry_set_redirect(struct dentry *dentry, const char *redirect); 213 212 void ovl_dentry_update(struct dentry *dentry, struct dentry *upperdentry); ··· 220 219 struct file *ovl_path_open(struct path *path, int flags); 221 220 int ovl_copy_up_start(struct dentry *dentry); 222 221 void ovl_copy_up_end(struct dentry *dentry); 222 + bool ovl_check_dir_xattr(struct dentry *dentry, const char *name); 223 + int ovl_check_setxattr(struct dentry *dentry, struct dentry *upperdentry, 224 + const char *name, const void *value, size_t size, 225 + int xerr); 226 + int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry); 227 + 228 + static inline bool ovl_is_impuredir(struct dentry *dentry) 229 + { 230 + return ovl_check_dir_xattr(dentry, OVL_XATTR_IMPURE); 231 + } 232 + 223 233 224 234 /* namei.c */ 225 235 int ovl_path_next(int idx, struct dentry *dentry, struct path *path); ··· 275 263 276 264 /* dir.c */ 277 265 extern const struct inode_operations ovl_dir_inode_operations; 278 - struct dentry *ovl_lookup_temp(struct dentry *workdir, struct dentry *dentry); 
266 + struct dentry *ovl_lookup_temp(struct dentry *workdir); 279 267 struct cattr { 280 268 dev_t rdev; 281 269 umode_t mode;
+2
fs/overlayfs/ovl_entry.h
··· 28 28 /* creds of process who forced instantiation of super block */ 29 29 const struct cred *creator_cred; 30 30 bool tmpfile; 31 + bool noxattr; 31 32 wait_queue_head_t copyup_wq; 32 33 /* sb common to all layers */ 33 34 struct super_block *same_sb; ··· 43 42 u64 version; 44 43 const char *redirect; 45 44 bool opaque; 45 + bool impure; 46 46 bool copying; 47 47 }; 48 48 struct rcu_head rcu;
+17 -1
fs/overlayfs/super.c
··· 891 891 dput(temp); 892 892 else 893 893 pr_warn("overlayfs: upper fs does not support tmpfile.\n"); 894 + 895 + /* 896 + * Check if upper/work fs supports trusted.overlay.* 897 + * xattr 898 + */ 899 + err = ovl_do_setxattr(ufs->workdir, OVL_XATTR_OPAQUE, 900 + "0", 1, 0); 901 + if (err) { 902 + ufs->noxattr = true; 903 + pr_warn("overlayfs: upper fs does not support xattr.\n"); 904 + } else { 905 + vfs_removexattr(ufs->workdir, OVL_XATTR_OPAQUE); 906 + } 894 907 } 895 908 } 896 909 ··· 974 961 path_put(&workpath); 975 962 kfree(lowertmp); 976 963 977 - oe->__upperdentry = upperpath.dentry; 964 + if (upperpath.dentry) { 965 + oe->__upperdentry = upperpath.dentry; 966 + oe->impure = ovl_is_impuredir(upperpath.dentry); 967 + } 978 968 for (i = 0; i < numlower; i++) { 979 969 oe->lowerstack[i].dentry = stack[i].dentry; 980 970 oe->lowerstack[i].mnt = ufs->lower_mnt[i];
+64 -8
fs/overlayfs/util.c
··· 175 175 return oe->opaque; 176 176 } 177 177 178 + bool ovl_dentry_is_impure(struct dentry *dentry) 179 + { 180 + struct ovl_entry *oe = dentry->d_fsdata; 181 + 182 + return oe->impure; 183 + } 184 + 178 185 bool ovl_dentry_is_whiteout(struct dentry *dentry) 179 186 { 180 187 return !dentry->d_inode && ovl_dentry_is_opaque(dentry); ··· 198 191 { 199 192 struct ovl_fs *ofs = sb->s_fs_info; 200 193 201 - return ofs->config.redirect_dir; 202 - } 203 - 204 - void ovl_clear_redirect_dir(struct super_block *sb) 205 - { 206 - struct ovl_fs *ofs = sb->s_fs_info; 207 - 208 - ofs->config.redirect_dir = false; 194 + return ofs->config.redirect_dir && !ofs->noxattr; 209 195 } 210 196 211 197 const char *ovl_dentry_get_redirect(struct dentry *dentry) ··· 302 302 oe->copying = false; 303 303 wake_up_locked(&ofs->copyup_wq); 304 304 spin_unlock(&ofs->copyup_wq.lock); 305 + } 306 + 307 + bool ovl_check_dir_xattr(struct dentry *dentry, const char *name) 308 + { 309 + int res; 310 + char val; 311 + 312 + if (!d_is_dir(dentry)) 313 + return false; 314 + 315 + res = vfs_getxattr(dentry, name, &val, 1); 316 + if (res == 1 && val == 'y') 317 + return true; 318 + 319 + return false; 320 + } 321 + 322 + int ovl_check_setxattr(struct dentry *dentry, struct dentry *upperdentry, 323 + const char *name, const void *value, size_t size, 324 + int xerr) 325 + { 326 + int err; 327 + struct ovl_fs *ofs = dentry->d_sb->s_fs_info; 328 + 329 + if (ofs->noxattr) 330 + return xerr; 331 + 332 + err = ovl_do_setxattr(upperdentry, name, value, size, 0); 333 + 334 + if (err == -EOPNOTSUPP) { 335 + pr_warn("overlayfs: cannot set %s xattr on upper\n", name); 336 + ofs->noxattr = true; 337 + return xerr; 338 + } 339 + 340 + return err; 341 + } 342 + 343 + int ovl_set_impure(struct dentry *dentry, struct dentry *upperdentry) 344 + { 345 + int err; 346 + struct ovl_entry *oe = dentry->d_fsdata; 347 + 348 + if (oe->impure) 349 + return 0; 350 + 351 + /* 352 + * Do not fail when upper doesn't support xattrs. 
353 + * Upper inodes won't have origin nor redirect xattr anyway. 354 + */ 355 + err = ovl_check_setxattr(dentry, upperdentry, OVL_XATTR_IMPURE, 356 + "y", 1, 0); 357 + if (!err) 358 + oe->impure = true; 359 + 360 + return err; 305 361 }
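ovl_check_setxattr() turns the first EOPNOTSUPP from the upper filesystem into a sticky per-mount noxattr flag, and from then on short-circuits to a caller-chosen error xerr, which may be 0 to ignore the failure (as copy-up and ovl_set_impure do) or -EIO/-EXDEV where the xattr is mandatory. A sketch of that degrade-once pattern with a stubbed setxattr (names hypothetical):

```c
#include <errno.h>

struct ovl_fs { int noxattr; };

/* Stub: pretend the upper fs rejects xattrs when 'supported' is 0. */
static int do_setxattr(int supported)
{
    return supported ? 0 : -EOPNOTSUPP;
}

/*
 * Like ovl_check_setxattr(): remember the first EOPNOTSUPP and map it,
 * and every later attempt, to the caller-chosen error 'xerr'.
 */
static int check_setxattr(struct ovl_fs *ofs, int supported, int xerr)
{
    int err;

    if (ofs->noxattr)
        return xerr;             /* upper fs already known to lack xattrs */

    err = do_setxattr(supported);
    if (err == -EOPNOTSUPP) {
        ofs->noxattr = 1;        /* sticky: never try again */
        return xerr;
    }
    return err;
}
```

The mount-time probe in super.c primes the same flag early by attempting a setxattr on the workdir, so most mounts learn about missing xattr support before the first copy-up.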
+1 -1
fs/proc/base.c
··· 821 821 if (!mmget_not_zero(mm)) 822 822 goto free; 823 823 824 - flags = write ? FOLL_WRITE : 0; 824 + flags = FOLL_FORCE | (write ? FOLL_WRITE : 0); 825 825 826 826 while (count > 0) { 827 827 int this_len = min_t(int, count, PAGE_SIZE);
+2 -2
fs/reiserfs/journal.c
··· 1112 1112 depth = reiserfs_write_unlock_nested(s); 1113 1113 if (reiserfs_barrier_flush(s)) 1114 1114 __sync_dirty_buffer(jl->j_commit_bh, 1115 - REQ_PREFLUSH | REQ_FUA); 1115 + REQ_SYNC | REQ_PREFLUSH | REQ_FUA); 1116 1116 else 1117 1117 sync_dirty_buffer(jl->j_commit_bh); 1118 1118 reiserfs_write_lock_nested(s, depth); ··· 1271 1271 1272 1272 if (reiserfs_barrier_flush(sb)) 1273 1273 __sync_dirty_buffer(journal->j_header_bh, 1274 - REQ_PREFLUSH | REQ_FUA); 1274 + REQ_SYNC | REQ_PREFLUSH | REQ_FUA); 1275 1275 else 1276 1276 sync_dirty_buffer(journal->j_header_bh); 1277 1277
+4 -5
fs/xfs/libxfs/xfs_bmap.c
··· 1280 1280 xfs_bmbt_rec_t *frp; 1281 1281 xfs_fsblock_t nextbno; 1282 1282 xfs_extnum_t num_recs; 1283 - xfs_extnum_t start; 1284 1283 1285 1284 num_recs = xfs_btree_get_numrecs(block); 1286 1285 if (unlikely(i + num_recs > room)) { ··· 1302 1303 * Copy records into the extent records. 1303 1304 */ 1304 1305 frp = XFS_BMBT_REC_ADDR(mp, block, 1); 1305 - start = i; 1306 1306 for (j = 0; j < num_recs; j++, i++, frp++) { 1307 1307 xfs_bmbt_rec_host_t *trp = xfs_iext_get_ext(ifp, i); 1308 1308 trp->l0 = be64_to_cpu(frp->l0); ··· 2063 2065 } 2064 2066 temp = xfs_bmap_worst_indlen(bma->ip, temp); 2065 2067 temp2 = xfs_bmap_worst_indlen(bma->ip, temp2); 2066 - diff = (int)(temp + temp2 - startblockval(PREV.br_startblock) - 2067 - (bma->cur ? bma->cur->bc_private.b.allocated : 0)); 2068 + diff = (int)(temp + temp2 - 2069 + (startblockval(PREV.br_startblock) - 2070 + (bma->cur ? 2071 + bma->cur->bc_private.b.allocated : 0))); 2068 2072 if (diff > 0) { 2069 2073 error = xfs_mod_fdblocks(bma->ip->i_mount, 2070 2074 -((int64_t)diff), false); ··· 2123 2123 temp = da_new; 2124 2124 if (bma->cur) 2125 2125 temp += bma->cur->bc_private.b.allocated; 2126 - ASSERT(temp <= da_old); 2127 2126 if (temp < da_old) 2128 2127 xfs_mod_fdblocks(bma->ip->i_mount, 2129 2128 (int64_t)(da_old - temp), false);
+1 -1
fs/xfs/libxfs/xfs_btree.c
··· 4395 4395 xfs_btree_readahead_ptr(cur, ptr, 1); 4396 4396 4397 4397 /* save for the next iteration of the loop */ 4398 - lptr = *ptr; 4398 + xfs_btree_copy_ptrs(cur, &lptr, ptr, 1); 4399 4399 } 4400 4400 4401 4401 /* for each buffer in the level */
+31 -12
fs/xfs/libxfs/xfs_refcount.c
··· 1629 1629 if (mp->m_sb.sb_agblocks >= XFS_REFC_COW_START) 1630 1630 return -EOPNOTSUPP; 1631 1631 1632 - error = xfs_alloc_read_agf(mp, NULL, agno, 0, &agbp); 1632 + INIT_LIST_HEAD(&debris); 1633 + 1634 + /* 1635 + * In this first part, we use an empty transaction to gather up 1636 + * all the leftover CoW extents so that we can subsequently 1637 + * delete them. The empty transaction is used to avoid 1638 + * a buffer lock deadlock if there happens to be a loop in the 1639 + * refcountbt because we're allowed to re-grab a buffer that is 1640 + * already attached to our transaction. When we're done 1641 + * recording the CoW debris we cancel the (empty) transaction 1642 + * and everything goes away cleanly. 1643 + */ 1644 + error = xfs_trans_alloc_empty(mp, &tp); 1633 1645 if (error) 1634 1646 return error; 1635 - cur = xfs_refcountbt_init_cursor(mp, NULL, agbp, agno, NULL); 1647 + 1648 + error = xfs_alloc_read_agf(mp, tp, agno, 0, &agbp); 1649 + if (error) 1650 + goto out_trans; 1651 + cur = xfs_refcountbt_init_cursor(mp, tp, agbp, agno, NULL); 1636 1652 1637 1653 /* Find all the leftover CoW staging extents. */ 1638 - INIT_LIST_HEAD(&debris); 1639 1654 memset(&low, 0, sizeof(low)); 1640 1655 memset(&high, 0, sizeof(high)); 1641 1656 low.rc.rc_startblock = XFS_REFC_COW_START; ··· 1660 1645 if (error) 1661 1646 goto out_cursor; 1662 1647 xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR); 1663 - xfs_buf_relse(agbp); 1648 + xfs_trans_brelse(tp, agbp); 1649 + xfs_trans_cancel(tp); 1664 1650 1665 1651 /* Now iterate the list to free the leftovers */ 1666 - list_for_each_entry(rr, &debris, rr_list) { 1652 + list_for_each_entry_safe(rr, n, &debris, rr_list) { 1667 1653 /* Set up transaction. 
*/ 1668 1654 error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, 0, 0, 0, &tp); 1669 1655 if (error) ··· 1692 1676 error = xfs_trans_commit(tp); 1693 1677 if (error) 1694 1678 goto out_free; 1679 + 1680 + list_del(&rr->rr_list); 1681 + kmem_free(rr); 1695 1682 } 1696 1683 1684 + return error; 1685 + out_defer: 1686 + xfs_defer_cancel(&dfops); 1687 + out_trans: 1688 + xfs_trans_cancel(tp); 1697 1689 out_free: 1698 1690 /* Free the leftover list */ 1699 1691 list_for_each_entry_safe(rr, n, &debris, rr_list) { ··· 1712 1688 1713 1689 out_cursor: 1714 1690 xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); 1715 - xfs_buf_relse(agbp); 1716 - goto out_free; 1717 - 1718 - out_defer: 1719 - xfs_defer_cancel(&dfops); 1720 - xfs_trans_cancel(tp); 1721 - goto out_free; 1691 + xfs_trans_brelse(tp, agbp); 1692 + goto out_trans; 1722 1693 }
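Besides the empty-transaction trick described in the comment above, the recovery loop switches from list_for_each_entry() to list_for_each_entry_safe() because each record is now deleted and freed (list_del/kmem_free) inside the loop; the _safe variant caches the next pointer before the current node disappears. A minimal userspace rendition of that idiom (hypothetical node type, plain singly linked list):

```c
#include <stdlib.h>

struct node {
    int val;
    struct node *next;
};

static struct node *push(struct node *head, int val)
{
    struct node *n = malloc(sizeof(*n));
    n->val = val;
    n->next = head;
    return n;
}

/*
 * Free every node. 'next' must be saved before free(), exactly what
 * list_for_each_entry_safe() does with its extra cursor argument.
 */
static int drain(struct node **head)
{
    struct node *n, *next;
    int freed = 0;

    for (n = *head; n != NULL; n = next) {
        next = n->next;          /* cache before the node goes away */
        free(n);
        freed++;
    }
    *head = NULL;
    return freed;
}
```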
+7 -3
fs/xfs/xfs_bmap_util.c
··· 582 582 } 583 583 break; 584 584 default: 585 + /* Local format data forks report no extents. */ 586 + if (ip->i_d.di_format == XFS_DINODE_FMT_LOCAL) { 587 + bmv->bmv_entries = 0; 588 + return 0; 589 + } 585 590 if (ip->i_d.di_format != XFS_DINODE_FMT_EXTENTS && 586 - ip->i_d.di_format != XFS_DINODE_FMT_BTREE && 587 - ip->i_d.di_format != XFS_DINODE_FMT_LOCAL) 591 + ip->i_d.di_format != XFS_DINODE_FMT_BTREE) 588 592 return -EINVAL; 589 593 590 594 if (xfs_get_extsz_hint(ip) || ··· 716 712 * extents. 717 713 */ 718 714 if (map[i].br_startblock == DELAYSTARTBLOCK && 719 - map[i].br_startoff <= XFS_B_TO_FSB(mp, XFS_ISIZE(ip))) 715 + map[i].br_startoff < XFS_B_TO_FSB(mp, XFS_ISIZE(ip))) 720 716 ASSERT((iflags & BMV_IF_DELALLOC) != 0); 721 717 722 718 if (map[i].br_startblock == HOLESTARTBLOCK &&
+26 -12
fs/xfs/xfs_buf.c
··· 97 97 xfs_buf_ioacct_inc( 98 98 struct xfs_buf *bp) 99 99 { 100 - if (bp->b_flags & (XBF_NO_IOACCT|_XBF_IN_FLIGHT)) 100 + if (bp->b_flags & XBF_NO_IOACCT) 101 101 return; 102 102 103 103 ASSERT(bp->b_flags & XBF_ASYNC); 104 - bp->b_flags |= _XBF_IN_FLIGHT; 105 - percpu_counter_inc(&bp->b_target->bt_io_count); 104 + spin_lock(&bp->b_lock); 105 + if (!(bp->b_state & XFS_BSTATE_IN_FLIGHT)) { 106 + bp->b_state |= XFS_BSTATE_IN_FLIGHT; 107 + percpu_counter_inc(&bp->b_target->bt_io_count); 108 + } 109 + spin_unlock(&bp->b_lock); 106 110 } 107 111 108 112 /* ··· 114 110 * freed and unaccount from the buftarg. 115 111 */ 116 112 static inline void 113 + __xfs_buf_ioacct_dec( 114 + struct xfs_buf *bp) 115 + { 116 + ASSERT(spin_is_locked(&bp->b_lock)); 117 + 118 + if (bp->b_state & XFS_BSTATE_IN_FLIGHT) { 119 + bp->b_state &= ~XFS_BSTATE_IN_FLIGHT; 120 + percpu_counter_dec(&bp->b_target->bt_io_count); 121 + } 122 + } 123 + 124 + static inline void 117 125 xfs_buf_ioacct_dec( 118 126 struct xfs_buf *bp) 119 127 { 120 - if (!(bp->b_flags & _XBF_IN_FLIGHT)) 121 - return; 122 - 123 - bp->b_flags &= ~_XBF_IN_FLIGHT; 124 - percpu_counter_dec(&bp->b_target->bt_io_count); 128 + spin_lock(&bp->b_lock); 129 + __xfs_buf_ioacct_dec(bp); 130 + spin_unlock(&bp->b_lock); 125 131 } 126 132 127 133 /* ··· 163 149 * unaccounted (released to LRU) before that occurs. Drop in-flight 164 150 * status now to preserve accounting consistency. 165 151 */ 166 - xfs_buf_ioacct_dec(bp); 167 - 168 152 spin_lock(&bp->b_lock); 153 + __xfs_buf_ioacct_dec(bp); 154 + 169 155 atomic_set(&bp->b_lru_ref, 0); 170 156 if (!(bp->b_state & XFS_BSTATE_DISPOSE) && 171 157 (list_lru_del(&bp->b_target->bt_lru, &bp->b_lru))) ··· 993 979 * ensures the decrement occurs only once per-buf. 
994 980 */ 995 981 if ((atomic_read(&bp->b_hold) == 1) && !list_empty(&bp->b_lru)) 996 - xfs_buf_ioacct_dec(bp); 982 + __xfs_buf_ioacct_dec(bp); 997 983 goto out_unlock; 998 984 } 999 985 1000 986 /* the last reference has been dropped ... */ 1001 - xfs_buf_ioacct_dec(bp); 987 + __xfs_buf_ioacct_dec(bp); 1002 988 if (!(bp->b_flags & XBF_STALE) && atomic_read(&bp->b_lru_ref)) { 1003 989 /* 1004 990 * If the buffer is added to the LRU take a new reference to the
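The xfs_buf change replaces the racy _XBF_IN_FLIGHT buffer flag with an XFS_BSTATE_IN_FLIGHT bit manipulated only under b_lock, so the per-target counter is incremented and decremented exactly once per buffer regardless of how inc/dec calls interleave. A single-threaded sketch of that idempotent-accounting logic; in the kernel the test-and-set/test-and-clear runs under the spinlock, which this userspace toy omits:

```c
#define BSTATE_IN_FLIGHT 0x1

struct buf { unsigned state; };

static long io_count;   /* stands in for the per-target percpu counter */

/* Account the buffer in flight at most once. */
static void ioacct_inc(struct buf *bp)
{
    if (!(bp->state & BSTATE_IN_FLIGHT)) {
        bp->state |= BSTATE_IN_FLIGHT;
        io_count++;
    }
}

/* Unaccount at most once; safe to call even if never accounted. */
static void ioacct_dec(struct buf *bp)
{
    if (bp->state & BSTATE_IN_FLIGHT) {
        bp->state &= ~BSTATE_IN_FLIGHT;
        io_count--;
    }
}
```

Keeping the state bit and the counter update inside one critical section is what the patch buys: with the old flag in b_flags, a release racing with I/O completion could decrement twice or not at all.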
+2 -3
fs/xfs/xfs_buf.h
··· 63 63 #define _XBF_KMEM (1 << 21)/* backed by heap memory */ 64 64 #define _XBF_DELWRI_Q (1 << 22)/* buffer on a delwri queue */ 65 65 #define _XBF_COMPOUND (1 << 23)/* compound buffer */ 66 - #define _XBF_IN_FLIGHT (1 << 25) /* I/O in flight, for accounting purposes */ 67 66 68 67 typedef unsigned int xfs_buf_flags_t; 69 68 ··· 83 84 { _XBF_PAGES, "PAGES" }, \ 84 85 { _XBF_KMEM, "KMEM" }, \ 85 86 { _XBF_DELWRI_Q, "DELWRI_Q" }, \ 86 - { _XBF_COMPOUND, "COMPOUND" }, \ 87 - { _XBF_IN_FLIGHT, "IN_FLIGHT" } 87 + { _XBF_COMPOUND, "COMPOUND" } 88 88 89 89 90 90 /* 91 91 * Internal state flags. 92 92 */ 93 93 #define XFS_BSTATE_DISPOSE (1 << 0) /* buffer being discarded */ 94 + #define XFS_BSTATE_IN_FLIGHT (1 << 1) /* I/O in flight */ 94 95 95 96 /* 96 97 * The xfs_buftarg contains 2 notions of "sector size" -
+19 -52
fs/xfs/xfs_file.c
··· 1043 1043 1044 1044 index = startoff >> PAGE_SHIFT; 1045 1045 endoff = XFS_FSB_TO_B(mp, map->br_startoff + map->br_blockcount); 1046 - end = endoff >> PAGE_SHIFT; 1046 + end = (endoff - 1) >> PAGE_SHIFT; 1047 1047 do { 1048 1048 int want; 1049 1049 unsigned nr_pages; 1050 1050 unsigned int i; 1051 1051 1052 - want = min_t(pgoff_t, end - index, PAGEVEC_SIZE); 1052 + want = min_t(pgoff_t, end - index, PAGEVEC_SIZE - 1) + 1; 1053 1053 nr_pages = pagevec_lookup(&pvec, inode->i_mapping, index, 1054 1054 want); 1055 - /* 1056 - * No page mapped into given range. If we are searching holes 1057 - * and if this is the first time we got into the loop, it means 1058 - * that the given offset is landed in a hole, return it. 1059 - * 1060 - * If we have already stepped through some block buffers to find 1061 - * holes but they all contains data. In this case, the last 1062 - * offset is already updated and pointed to the end of the last 1063 - * mapped page, if it does not reach the endpoint to search, 1064 - * that means there should be a hole between them. 1065 - */ 1066 - if (nr_pages == 0) { 1067 - /* Data search found nothing */ 1068 - if (type == DATA_OFF) 1069 - break; 1070 - 1071 - ASSERT(type == HOLE_OFF); 1072 - if (lastoff == startoff || lastoff < endoff) { 1073 - found = true; 1074 - *offset = lastoff; 1075 - } 1055 + if (nr_pages == 0) 1076 1056 break; 1077 - } 1078 - 1079 - /* 1080 - * At lease we found one page. If this is the first time we 1081 - * step into the loop, and if the first page index offset is 1082 - * greater than the given search offset, a hole was found. 1083 - */ 1084 - if (type == HOLE_OFF && lastoff == startoff && 1085 - lastoff < page_offset(pvec.pages[0])) { 1086 - found = true; 1087 - break; 1088 - } 1089 1057 1090 1058 for (i = 0; i < nr_pages; i++) { 1091 1059 struct page *page = pvec.pages[i]; ··· 1066 1098 * file mapping. However, page->index will not change 1067 1099 * because we have a reference on the page. 
1068 1100 * 1069 - * Searching done if the page index is out of range. 1070 - * If the current offset is not reaches the end of 1071 - * the specified search range, there should be a hole 1072 - * between them. 1101 + * If current page offset is beyond where we've ended, 1102 + * we've found a hole. 1073 1103 */ 1074 - if (page->index > end) { 1075 - if (type == HOLE_OFF && lastoff < endoff) { 1076 - *offset = lastoff; 1077 - found = true; 1078 - } 1104 + if (type == HOLE_OFF && lastoff < endoff && 1105 + lastoff < page_offset(pvec.pages[i])) { 1106 + found = true; 1107 + *offset = lastoff; 1079 1108 goto out; 1080 1109 } 1110 + /* Searching done if the page index is out of range. */ 1111 + if (page->index > end) 1112 + goto out; 1081 1113 1082 1114 lock_page(page); 1083 1115 /* ··· 1119 1151 1120 1152 /* 1121 1153 * The number of returned pages less than our desired, search 1122 - * done. In this case, nothing was found for searching data, 1123 - * but we found a hole behind the last offset. 1154 + * done. 1124 1155 */ 1125 - if (nr_pages < want) { 1126 - if (type == HOLE_OFF) { 1127 - *offset = lastoff; 1128 - found = true; 1129 - } 1156 + if (nr_pages < want) 1130 1157 break; 1131 - } 1132 1158 1133 1159 index = pvec.pages[i - 1]->index + 1; 1134 1160 pagevec_release(&pvec); 1135 1161 } while (index <= end); 1136 1162 1163 + /* No page at lastoff and we are not done - we found a hole. */ 1164 + if (type == HOLE_OFF && lastoff < endoff) { 1165 + *offset = lastoff; 1166 + found = true; 1167 + } 1137 1168 out: 1138 1169 pagevec_release(&pvec); 1139 1170 return found;
+4 -1
fs/xfs/xfs_fsmap.c
··· 828 828 struct xfs_fsmap dkeys[2]; /* per-dev keys */ 829 829 struct xfs_getfsmap_dev handlers[XFS_GETFSMAP_DEVS]; 830 830 struct xfs_getfsmap_info info = { NULL }; 831 + bool use_rmap; 831 832 int i; 832 833 int error = 0; 833 834 ··· 838 837 !xfs_getfsmap_is_valid_device(mp, &head->fmh_keys[1])) 839 838 return -EINVAL; 840 839 840 + use_rmap = capable(CAP_SYS_ADMIN) && 841 + xfs_sb_version_hasrmapbt(&mp->m_sb); 841 842 head->fmh_entries = 0; 842 843 843 844 /* Set up our device handlers. */ 844 845 memset(handlers, 0, sizeof(handlers)); 845 846 handlers[0].dev = new_encode_dev(mp->m_ddev_targp->bt_dev); 846 - if (xfs_sb_version_hasrmapbt(&mp->m_sb)) 847 + if (use_rmap) 847 848 handlers[0].fn = xfs_getfsmap_datadev_rmapbt; 848 849 else 849 850 handlers[0].fn = xfs_getfsmap_datadev_bnobt;
+51
include/drm/drm_dp_helper.h
··· 913 913 int drm_dp_start_crc(struct drm_dp_aux *aux, struct drm_crtc *crtc); 914 914 int drm_dp_stop_crc(struct drm_dp_aux *aux); 915 915 916 + struct drm_dp_dpcd_ident { 917 + u8 oui[3]; 918 + u8 device_id[6]; 919 + u8 hw_rev; 920 + u8 sw_major_rev; 921 + u8 sw_minor_rev; 922 + } __packed; 923 + 924 + /** 925 + * struct drm_dp_desc - DP branch/sink device descriptor 926 + * @ident: DP device identification from DPCD 0x400 (sink) or 0x500 (branch). 927 + * @quirks: Quirks; use drm_dp_has_quirk() to query for the quirks. 928 + */ 929 + struct drm_dp_desc { 930 + struct drm_dp_dpcd_ident ident; 931 + u32 quirks; 932 + }; 933 + 934 + int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc, 935 + bool is_branch); 936 + 937 + /** 938 + * enum drm_dp_quirk - Display Port sink/branch device specific quirks 939 + * 940 + * Display Port sink and branch devices in the wild have a variety of bugs, try 941 + * to collect them here. The quirks are shared, but it's up to the drivers to 942 + * implement workarounds for them. 943 + */ 944 + enum drm_dp_quirk { 945 + /** 946 + * @DP_DPCD_QUIRK_LIMITED_M_N: 947 + * 948 + * The device requires main link attributes Mvid and Nvid to be limited 949 + * to 16 bits. 950 + */ 951 + DP_DPCD_QUIRK_LIMITED_M_N, 952 + }; 953 + 954 + /** 955 + * drm_dp_has_quirk() - does the DP device have a specific quirk 956 + * @desc: Device descriptor filled by drm_dp_read_desc() 957 + * @quirk: Quirk to query for 958 + * 959 + * Return true if DP device identified by @desc has @quirk. 960 + */ 961 + static inline bool 962 + drm_dp_has_quirk(const struct drm_dp_desc *desc, enum drm_dp_quirk quirk) 963 + { 964 + return desc->quirks & BIT(quirk); 965 + } 966 + 916 967 #endif /* _DRM_DP_HELPER_H_ */
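The new drm_dp_has_quirk() helper is nothing more than a bit test against a quirk mask that drm_dp_read_desc() fills in from the DPCD identification block. A stand-alone sketch of the same pattern, with a local BIT() macro and a pared-down descriptor (the real DPCD parsing is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified copy of the quirk-query shape added by this hunk. Names
 * mirror the patch; BIT() is the usual kernel helper, reproduced here. */
#define BIT(n) (1UL << (n))

enum drm_dp_quirk {
    DP_DPCD_QUIRK_LIMITED_M_N,   /* Mvid/Nvid limited to 16 bits */
};

struct drm_dp_desc {
    uint32_t quirks;             /* bitmask indexed by enum drm_dp_quirk */
};

static int drm_dp_has_quirk(const struct drm_dp_desc *desc,
                            enum drm_dp_quirk quirk)
{
    return (desc->quirks & BIT(quirk)) != 0;
}
```

Keeping the quirk list as an enum plus a shared bitmask means drivers query a common table while implementing their own workarounds, which is exactly the split the kernel-doc above describes.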
-1
include/linux/blk-mq.h
··· 238 238 bool kick_requeue_list); 239 239 void blk_mq_kick_requeue_list(struct request_queue *q); 240 240 void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs); 241 - void blk_mq_abort_requeue_list(struct request_queue *q); 242 241 void blk_mq_complete_request(struct request *rq); 243 242 244 243 bool blk_mq_queue_stopped(struct request_queue *q);
+3 -3
include/linux/ceph/ceph_debug.h
··· 3 3 4 4 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 5 5 6 + #include <linux/string.h> 7 + 6 8 #ifdef CONFIG_CEPH_LIB_PRETTYDEBUG 7 9 8 10 /* ··· 14 12 */ 15 13 16 14 # if defined(DEBUG) || defined(CONFIG_DYNAMIC_DEBUG) 17 - extern const char *ceph_file_part(const char *s, int len); 18 15 # define dout(fmt, ...) \ 19 16 pr_debug("%.*s %12.12s:%-4d : " fmt, \ 20 17 8 - (int)sizeof(KBUILD_MODNAME), " ", \ 21 - ceph_file_part(__FILE__, sizeof(__FILE__)), \ 22 - __LINE__, ##__VA_ARGS__) 18 + kbasename(__FILE__), __LINE__, ##__VA_ARGS__) 23 19 # else 24 20 /* faux printk call just to see any compiler warnings. */ 25 21 # define dout(fmt, ...) do { \
+10
include/linux/filter.h
··· 272 272 .off = OFF, \ 273 273 .imm = IMM }) 274 274 275 + /* Unconditional jumps, goto pc + off16 */ 276 + 277 + #define BPF_JMP_A(OFF) \ 278 + ((struct bpf_insn) { \ 279 + .code = BPF_JMP | BPF_JA, \ 280 + .dst_reg = 0, \ 281 + .src_reg = 0, \ 282 + .off = OFF, \ 283 + .imm = 0 }) 284 + 275 285 /* Function call */ 276 286 277 287 #define BPF_EMIT_CALL(FUNC) \
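BPF_JMP_A() builds an unconditional-jump instruction as a compound literal, in the same style as the surrounding macros in filter.h. The sketch below reproduces the macro against a local copy of struct bpf_insn; the BPF_JMP/BPF_JA opcode values match the UAPI encoding, but the struct layout here is a simplified stand-in:

```c
#include <assert.h>
#include <stdint.h>

/* Opcode class and jump code per the eBPF instruction encoding. */
#define BPF_JMP 0x05
#define BPF_JA  0x00

/* Local stand-in for the UAPI struct bpf_insn. */
struct bpf_insn {
    uint8_t code;        /* opcode */
    uint8_t dst_reg:4;   /* dest register */
    uint8_t src_reg:4;   /* source register */
    int16_t off;         /* signed offset */
    int32_t imm;         /* signed immediate */
};

/* Unconditional jump, goto pc + off16 -- as added by the patch. */
#define BPF_JMP_A(OFF)                  \
    ((struct bpf_insn) {                \
        .code = BPF_JMP | BPF_JA,       \
        .dst_reg = 0,                   \
        .src_reg = 0,                   \
        .off = OFF,                     \
        .imm = 0 })
```

The offset is relative to the next instruction, so `BPF_JMP_A(0)` is a no-op and negative offsets jump backwards.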
+1 -1
include/linux/gfp.h
··· 41 41 #define ___GFP_WRITE 0x800000u 42 42 #define ___GFP_KSWAPD_RECLAIM 0x1000000u 43 43 #ifdef CONFIG_LOCKDEP 44 - #define ___GFP_NOLOCKDEP 0x4000000u 44 + #define ___GFP_NOLOCKDEP 0x2000000u 45 45 #else 46 46 #define ___GFP_NOLOCKDEP 0 47 47 #endif
+7
include/linux/gpio/machine.h
··· 56 56 .flags = _flags, \ 57 57 } 58 58 59 + #ifdef CONFIG_GPIOLIB 59 60 void gpiod_add_lookup_table(struct gpiod_lookup_table *table); 60 61 void gpiod_remove_lookup_table(struct gpiod_lookup_table *table); 62 + #else 63 + static inline 64 + void gpiod_add_lookup_table(struct gpiod_lookup_table *table) {} 65 + static inline 66 + void gpiod_remove_lookup_table(struct gpiod_lookup_table *table) {} 67 + #endif 61 68 62 69 #endif /* __LINUX_GPIO_MACHINE_H */
+10 -8
include/linux/if_vlan.h
··· 614 614 static inline netdev_features_t vlan_features_check(const struct sk_buff *skb, 615 615 netdev_features_t features) 616 616 { 617 - if (skb_vlan_tagged_multi(skb)) 618 - features = netdev_intersect_features(features, 619 - NETIF_F_SG | 620 - NETIF_F_HIGHDMA | 621 - NETIF_F_FRAGLIST | 622 - NETIF_F_HW_CSUM | 623 - NETIF_F_HW_VLAN_CTAG_TX | 624 - NETIF_F_HW_VLAN_STAG_TX); 617 + if (skb_vlan_tagged_multi(skb)) { 618 + /* In the case of multi-tagged packets, use a direct mask 619 + * instead of using netdev_intersect_features(), to make 620 + * sure that only devices supporting NETIF_F_HW_CSUM will 621 + * have checksum offloading support. 622 + */ 623 + features &= NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_HW_CSUM | 624 + NETIF_F_FRAGLIST | NETIF_F_HW_VLAN_CTAG_TX | 625 + NETIF_F_HW_VLAN_STAG_TX; 626 + } 625 627 626 628 return features; 627 629 }
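The if_vlan.h change replaces netdev_intersect_features() with a plain AND so that checksum-offload bits other than NETIF_F_HW_CSUM cannot survive the intersection for multi-tagged packets. A toy model with made-up feature bit values (the real NETIF_F_* constants differ) shows the effect:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative feature bits -- NOT the real NETIF_F_* values. The point
 * is that a protocol-specific checksum bit is dropped by the direct mask
 * rather than being translated into a generic checksum capability. */
#define F_SG       (1u << 0)
#define F_HIGHDMA  (1u << 1)
#define F_FRAGLIST (1u << 2)
#define F_HW_CSUM  (1u << 3)
#define F_IP_CSUM  (1u << 4)   /* deliberately NOT in the allowed mask */

static uint32_t vlan_features_check(int multi_tagged, uint32_t features)
{
    if (multi_tagged)
        features &= F_SG | F_HIGHDMA | F_FRAGLIST | F_HW_CSUM;
    return features;
}
```

A device advertising only a protocol-specific checksum bit therefore falls back to software checksumming for multi-tagged frames, which is the behaviour the patch comment asks for.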
+5 -1
include/linux/jiffies.h
··· 64 64 /* TICK_USEC is the time between ticks in usec assuming fake USER_HZ */ 65 65 #define TICK_USEC ((1000000UL + USER_HZ/2) / USER_HZ) 66 66 67 + #ifndef __jiffy_arch_data 68 + #define __jiffy_arch_data 69 + #endif 70 + 67 71 /* 68 72 * The 64-bit value is not atomic - you MUST NOT read it 69 73 * without sampling the sequence number in jiffies_lock. 70 74 * get_jiffies_64() will do this for you as appropriate. 71 75 */ 72 76 extern u64 __cacheline_aligned_in_smp jiffies_64; 73 - extern unsigned long volatile __cacheline_aligned_in_smp jiffies; 77 + extern unsigned long volatile __cacheline_aligned_in_smp __jiffy_arch_data jiffies; 74 78 75 79 #if (BITS_PER_LONG < 64) 76 80 u64 get_jiffies_64(void);
+8
include/linux/memblock.h
··· 425 425 } 426 426 #endif 427 427 428 + extern unsigned long memblock_reserved_memory_within(phys_addr_t start_addr, 429 + phys_addr_t end_addr); 428 430 #else 429 431 static inline phys_addr_t memblock_alloc(phys_addr_t size, phys_addr_t align) 432 + { 433 + return 0; 434 + } 435 + 436 + static inline unsigned long memblock_reserved_memory_within(phys_addr_t start_addr, 437 + phys_addr_t end_addr) 430 438 { 431 439 return 0; 432 440 }
+8 -2
include/linux/mlx5/device.h
··· 787 787 }; 788 788 789 789 enum { 790 - CQE_RSS_HTYPE_IP = 0x3 << 6, 791 - CQE_RSS_HTYPE_L4 = 0x3 << 2, 790 + CQE_RSS_HTYPE_IP = 0x3 << 2, 791 + /* cqe->rss_hash_type[3:2] - IP destination selected for hash 792 + * (00 = none, 01 = IPv4, 10 = IPv6, 11 = Reserved) 793 + */ 794 + CQE_RSS_HTYPE_L4 = 0x3 << 6, 795 + /* cqe->rss_hash_type[7:6] - L4 destination selected for hash 796 + * (00 = none, 01 = TCP, 10 = UDP, 11 = IPSEC.SPI) 797 + */ 792 798 }; 793 799 794 800 enum {
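The corrected masks place the IP selection in rss_hash_type bits [3:2] and the L4 selection in bits [7:6]. A small sketch of decoding a CQE hash-type byte with those masks (field encodings taken from the comment in the hunk; the helper names are invented):

```c
#include <assert.h>
#include <stdint.h>

/* Masks as fixed by the patch. */
#define CQE_RSS_HTYPE_IP (0x3 << 2)  /* bits [3:2]: 00 none, 01 IPv4, 10 IPv6 */
#define CQE_RSS_HTYPE_L4 (0x3 << 6)  /* bits [7:6]: 00 none, 01 TCP, 10 UDP */

/* Hypothetical decoders for the two fields. */
static unsigned int ip_htype(uint8_t rss_hash_type)
{
    return (rss_hash_type & CQE_RSS_HTYPE_IP) >> 2;
}

static unsigned int l4_htype(uint8_t rss_hash_type)
{
    return (rss_hash_type & CQE_RSS_HTYPE_L4) >> 6;
}
```

With the old (swapped) shifts, an IPv4+TCP completion byte would have decoded the L4 field where the IP field belongs and vice versa, which is exactly the bug the one-line swap fixes.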
+6 -1
include/linux/mlx5/driver.h
··· 787 787 788 788 typedef void (*mlx5_cmd_cbk_t)(int status, void *context); 789 789 790 + enum { 791 + MLX5_CMD_ENT_STATE_PENDING_COMP, 792 + }; 793 + 790 794 struct mlx5_cmd_work_ent { 795 + unsigned long state; 791 796 struct mlx5_cmd_msg *in; 792 797 struct mlx5_cmd_msg *out; 793 798 void *uout; ··· 981 976 void mlx5_rsc_event(struct mlx5_core_dev *dev, u32 rsn, int event_type); 982 977 void mlx5_srq_event(struct mlx5_core_dev *dev, u32 srqn, int event_type); 983 978 struct mlx5_core_srq *mlx5_core_get_srq(struct mlx5_core_dev *dev, u32 srqn); 984 - void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec); 979 + void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool forced); 985 980 void mlx5_cq_event(struct mlx5_core_dev *dev, u32 cqn, int event_type); 986 981 int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, 987 982 int nent, u64 mask, const char *name,
+11
include/linux/mm.h
··· 2327 2327 #define FOLL_REMOTE 0x2000 /* we are working on non-current tsk/mm */ 2328 2328 #define FOLL_COW 0x4000 /* internal GUP flag */ 2329 2329 2330 + static inline int vm_fault_to_errno(int vm_fault, int foll_flags) 2331 + { 2332 + if (vm_fault & VM_FAULT_OOM) 2333 + return -ENOMEM; 2334 + if (vm_fault & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE)) 2335 + return (foll_flags & FOLL_HWPOISON) ? -EHWPOISON : -EFAULT; 2336 + if (vm_fault & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV)) 2337 + return -EFAULT; 2338 + return 0; 2339 + } 2340 + 2330 2341 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr, 2331 2342 void *data); 2332 2343 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
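vm_fault_to_errno() collapses the VM_FAULT_* result bits into a single errno, with the hwpoison case depending on whether the caller passed FOLL_HWPOISON. A user-space copy with locally defined flag values (they mirror the kernel's, but treat them as illustrative; EHWPOISON is also defined locally since not every libc exposes it):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stand-ins for the kernel's VM_FAULT_* / FOLL_* bits. */
#define VM_FAULT_OOM            0x0001
#define VM_FAULT_SIGBUS         0x0002
#define VM_FAULT_HWPOISON       0x0010
#define VM_FAULT_HWPOISON_LARGE 0x0020
#define VM_FAULT_SIGSEGV        0x0040
#define FOLL_HWPOISON           0x100
#define EHWPOISON_LOCAL         133   /* glibc's EHWPOISON value */

/* Mapping logic copied from the new helper. */
static int vm_fault_to_errno(int vm_fault, int foll_flags)
{
    if (vm_fault & VM_FAULT_OOM)
        return -ENOMEM;
    if (vm_fault & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
        return (foll_flags & FOLL_HWPOISON) ? -EHWPOISON_LOCAL : -EFAULT;
    if (vm_fault & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV))
        return -EFAULT;
    return 0;
}
```

Centralizing this translation lets every get_user_pages-style caller report poisoned pages consistently instead of each open-coding the same chain of tests.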
+1
include/linux/mmzone.h
··· 678 678 * is the first PFN that needs to be initialised. 679 679 */ 680 680 unsigned long first_deferred_pfn; 681 + unsigned long static_init_size; 681 682 #endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */ 682 683 683 684 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+1
include/linux/mod_devicetable.h
··· 467 467 DMI_PRODUCT_VERSION, 468 468 DMI_PRODUCT_SERIAL, 469 469 DMI_PRODUCT_UUID, 470 + DMI_PRODUCT_FAMILY, 470 471 DMI_BOARD_VENDOR, 471 472 DMI_BOARD_NAME, 472 473 DMI_BOARD_VERSION,
+1 -1
include/linux/netfilter/x_tables.h
··· 294 294 int xt_target_to_user(const struct xt_entry_target *t, 295 295 struct xt_entry_target __user *u); 296 296 int xt_data_to_user(void __user *dst, const void *src, 297 - int usersize, int size); 297 + int usersize, int size, int aligned_size); 298 298 299 299 void *xt_copy_counters_from_user(const void __user *user, unsigned int len, 300 300 struct xt_counters_info *info, bool compat);
+5
include/linux/netfilter_bridge/ebtables.h
··· 125 125 /* True if the target is not a standard target */ 126 126 #define INVALID_TARGET (info->target < -NUM_STANDARD_TARGETS || info->target >= 0) 127 127 128 + static inline bool ebt_invalid_target(int target) 129 + { 130 + return (target < -NUM_STANDARD_TARGETS || target >= 0); 131 + } 132 + 128 133 #endif
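ebt_invalid_target() lifts the INVALID_TARGET macro's range check into a typed inline: standard verdicts are encoded as the negative values -NUM_STANDARD_TARGETS..-1, so anything outside that window is invalid. A sketch, assuming ebtables' four standard verdicts (ACCEPT/DROP/CONTINUE/RETURN):

```c
#include <assert.h>
#include <stdbool.h>

/* ebtables has 4 standard verdicts, encoded as -1..-4. */
#define NUM_STANDARD_TARGETS 4

static bool ebt_invalid_target(int target)
{
    return (target < -NUM_STANDARD_TARGETS || target >= 0);
}
```

Making this an inline function instead of a macro gives type checking on the argument and lets match/target modules use it without pulling in the macro's `info` local.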
+1
include/linux/of_platform.h
··· 64 64 const char *bus_id, 65 65 struct device *parent); 66 66 67 + extern int of_platform_device_destroy(struct device *dev, void *data); 67 68 extern int of_platform_bus_probe(struct device_node *root, 68 69 const struct of_device_id *matches, 69 70 struct device *parent);
+8 -3
include/linux/pci.h
··· 183 183 PCI_DEV_FLAGS_BRIDGE_XLATE_ROOT = (__force pci_dev_flags_t) (1 << 9), 184 184 /* Do not use FLR even if device advertises PCI_AF_CAP */ 185 185 PCI_DEV_FLAGS_NO_FLR_RESET = (__force pci_dev_flags_t) (1 << 10), 186 + /* 187 + * Resume before calling the driver's system suspend hooks, disabling 188 + * the direct_complete optimization. 189 + */ 190 + PCI_DEV_FLAGS_NEEDS_RESUME = (__force pci_dev_flags_t) (1 << 11), 186 191 }; 187 192 188 193 enum pci_irq_reroute_variant { ··· 1347 1342 unsigned int max_vecs, unsigned int flags, 1348 1343 const struct irq_affinity *aff_desc) 1349 1344 { 1350 - if (min_vecs > 1) 1351 - return -EINVAL; 1352 - return 1; 1345 + if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1 && dev->irq) 1346 + return 1; 1347 + return -ENOSPC; 1353 1348 } 1354 1349 1355 1350 static inline void pci_free_irq_vectors(struct pci_dev *dev)
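The fixed !CONFIG_PCI_MSI stub for pci_alloc_irq_vectors_affinity() now succeeds only when the caller accepts a legacy interrupt, needs just one vector, and the device actually has an IRQ line; everything else returns -ENOSPC rather than the old unconditional -EINVAL/1 pair. A user-space model (the struct and flag value are stand-ins):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative flag; stands in for the kernel's PCI_IRQ_LEGACY. */
#define PCI_IRQ_LEGACY (1u << 0)

/* Minimal stand-in for struct pci_dev: only the legacy IRQ line. */
struct pci_dev_model {
    unsigned int irq;   /* 0 means no legacy interrupt routed */
};

/* Logic of the corrected stub. */
static int alloc_irq_vectors(const struct pci_dev_model *dev,
                             unsigned int min_vecs, unsigned int flags)
{
    if ((flags & PCI_IRQ_LEGACY) && min_vecs == 1 && dev->irq)
        return 1;
    return -ENOSPC;
}
```

The practical difference: on kernels without MSI support, a driver that forgot to pass PCI_IRQ_LEGACY, or a device with no IRQ line, now fails cleanly instead of being handed a vector it cannot use.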
-3
include/linux/pinctrl/pinconf-generic.h
··· 42 42 * @PIN_CONFIG_BIAS_PULL_UP: the pin will be pulled up (usually with high 43 43 * impedance to VDD). If the argument is != 0 pull-up is enabled, 44 44 * if it is 0, pull-up is total, i.e. the pin is connected to VDD. 45 - * @PIN_CONFIG_BIDIRECTIONAL: the pin will be configured to allow simultaneous 46 - * input and output operations. 47 45 * @PIN_CONFIG_DRIVE_OPEN_DRAIN: the pin will be driven with open drain (open 48 46 * collector) which means it is usually wired with other output ports 49 47 * which are then pulled up with an external resistor. Setting this ··· 96 98 PIN_CONFIG_BIAS_PULL_DOWN, 97 99 PIN_CONFIG_BIAS_PULL_PIN_DEFAULT, 98 100 PIN_CONFIG_BIAS_PULL_UP, 99 - PIN_CONFIG_BIDIRECTIONAL, 100 101 PIN_CONFIG_DRIVE_OPEN_DRAIN, 101 102 PIN_CONFIG_DRIVE_OPEN_SOURCE, 102 103 PIN_CONFIG_DRIVE_PUSH_PULL,
+5 -2
include/linux/ptrace.h
··· 54 54 unsigned long addr, unsigned long data); 55 55 extern void ptrace_notify(int exit_code); 56 56 extern void __ptrace_link(struct task_struct *child, 57 - struct task_struct *new_parent); 57 + struct task_struct *new_parent, 58 + const struct cred *ptracer_cred); 58 59 extern void __ptrace_unlink(struct task_struct *child); 59 60 extern void exit_ptrace(struct task_struct *tracer, struct list_head *dead); 60 61 #define PTRACE_MODE_READ 0x01 ··· 207 206 208 207 if (unlikely(ptrace) && current->ptrace) { 209 208 child->ptrace = current->ptrace; 210 - __ptrace_link(child, current->parent); 209 + __ptrace_link(child, current->parent, current->ptracer_cred); 211 210 212 211 if (child->ptrace & PT_SEIZED) 213 212 task_set_jobctl_pending(child, JOBCTL_TRAP_STOP); ··· 216 215 217 216 set_tsk_thread_flag(child, TIF_SIGPENDING); 218 217 } 218 + else 219 + child->ptracer_cred = NULL; 219 220 } 220 221 221 222 /**
+11 -8
include/linux/serdev.h
··· 195 195 void serdev_device_close(struct serdev_device *); 196 196 unsigned int serdev_device_set_baudrate(struct serdev_device *, unsigned int); 197 197 void serdev_device_set_flow_control(struct serdev_device *, bool); 198 + int serdev_device_write_buf(struct serdev_device *, const unsigned char *, size_t); 198 199 void serdev_device_wait_until_sent(struct serdev_device *, long); 199 200 int serdev_device_get_tiocm(struct serdev_device *); 200 201 int serdev_device_set_tiocm(struct serdev_device *, int, int); ··· 237 236 return 0; 238 237 } 239 238 static inline void serdev_device_set_flow_control(struct serdev_device *sdev, bool enable) {} 239 + static inline int serdev_device_write_buf(struct serdev_device *serdev, 240 + const unsigned char *buf, 241 + size_t count) 242 + { 243 + return -ENODEV; 244 + } 240 245 static inline void serdev_device_wait_until_sent(struct serdev_device *sdev, long timeout) {} 241 246 static inline int serdev_device_get_tiocm(struct serdev_device *serdev) 242 247 { ··· 308 301 struct device *serdev_tty_port_register(struct tty_port *port, 309 302 struct device *parent, 310 303 struct tty_driver *drv, int idx); 311 - void serdev_tty_port_unregister(struct tty_port *port); 304 + int serdev_tty_port_unregister(struct tty_port *port); 312 305 #else 313 306 static inline struct device *serdev_tty_port_register(struct tty_port *port, 314 307 struct device *parent, ··· 316 309 { 317 310 return ERR_PTR(-ENODEV); 318 311 } 319 - static inline void serdev_tty_port_unregister(struct tty_port *port) {} 320 - #endif /* CONFIG_SERIAL_DEV_CTRL_TTYPORT */ 321 - 322 - static inline int serdev_device_write_buf(struct serdev_device *serdev, 323 - const unsigned char *data, 324 - size_t count) 312 + static inline int serdev_tty_port_unregister(struct tty_port *port) 325 313 { 326 - return serdev_device_write(serdev, data, count, 0); 314 + return -ENODEV; 327 315 } 316 + #endif /* CONFIG_SERIAL_DEV_CTRL_TTYPORT */ 328 317 329 318 #endif 
/*_LINUX_SERDEV_H */
+2 -1
include/linux/sunrpc/svc.h
··· 336 336 { 337 337 char *cp = (char *)p; 338 338 struct kvec *vec = &rqstp->rq_arg.head[0]; 339 - return cp == (char *)vec->iov_base + vec->iov_len; 339 + return cp >= (char*)vec->iov_base 340 + && cp <= (char*)vec->iov_base + vec->iov_len; 340 341 } 341 342 342 343 static inline int
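The sunrpc hunk relaxes xdr_argsize_check() from an exact-end comparison to a containment check: the decode pointer no longer has to land precisely at the end of the head kvec, it only has to stay within it. Modeled here with a minimal kvec:

```c
#include <assert.h>
#include <stddef.h>

/* struct kvec reduced to the two fields the check uses. */
struct kvec {
    void  *iov_base;
    size_t iov_len;
};

/* Relaxed check from the patch: pointer must lie inside the head
 * buffer (one-past-the-end included), not exactly at its end. */
static int xdr_argsize_check(const struct kvec *vec, const char *cp)
{
    return cp >= (const char *)vec->iov_base &&
           cp <= (const char *)vec->iov_base + vec->iov_len;
}
```

The old `==` test rejected well-formed requests whose arguments simply didn't consume the whole head buffer; the bounds form accepts those while still catching a decoder that ran past the received data.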
+9
include/linux/tty.h
··· 558 558 struct tty_driver *driver, unsigned index, 559 559 struct device *device, void *drvdata, 560 560 const struct attribute_group **attr_grp); 561 + extern struct device *tty_port_register_device_serdev(struct tty_port *port, 562 + struct tty_driver *driver, unsigned index, 563 + struct device *device); 564 + extern struct device *tty_port_register_device_attr_serdev(struct tty_port *port, 565 + struct tty_driver *driver, unsigned index, 566 + struct device *device, void *drvdata, 567 + const struct attribute_group **attr_grp); 568 + extern void tty_port_unregister_device(struct tty_port *port, 569 + struct tty_driver *driver, unsigned index); 561 570 extern int tty_port_alloc_xmit_buf(struct tty_port *port); 562 571 extern void tty_port_free_xmit_buf(struct tty_port *port); 563 572 extern void tty_port_destroy(struct tty_port *port);
+1
include/linux/usb/usbnet.h
··· 206 206 }; 207 207 208 208 extern int usbnet_generic_cdc_bind(struct usbnet *, struct usb_interface *); 209 + extern int usbnet_ether_cdc_bind(struct usbnet *dev, struct usb_interface *intf); 209 210 extern int usbnet_cdc_bind(struct usbnet *, struct usb_interface *); 210 211 extern void usbnet_cdc_unbind(struct usbnet *, struct usb_interface *); 211 212 extern void usbnet_cdc_status(struct usbnet *, struct urb *);
+7 -1
include/net/dst.h
··· 107 107 }; 108 108 }; 109 109 110 + struct dst_metrics { 111 + u32 metrics[RTAX_MAX]; 112 + atomic_t refcnt; 113 + }; 114 + extern const struct dst_metrics dst_default_metrics; 115 + 110 116 u32 *dst_cow_metrics_generic(struct dst_entry *dst, unsigned long old); 111 - extern const u32 dst_default_metrics[]; 112 117 113 118 #define DST_METRICS_READ_ONLY 0x1UL 119 + #define DST_METRICS_REFCOUNTED 0x2UL 114 120 #define DST_METRICS_FLAGS 0x3UL 115 121 #define __DST_METRICS_PTR(Y) \ 116 122 ((u32 *)((Y) & ~DST_METRICS_FLAGS))
+5 -5
include/net/ip_fib.h
··· 114 114 __be32 fib_prefsrc; 115 115 u32 fib_tb_id; 116 116 u32 fib_priority; 117 - u32 *fib_metrics; 118 - #define fib_mtu fib_metrics[RTAX_MTU-1] 119 - #define fib_window fib_metrics[RTAX_WINDOW-1] 120 - #define fib_rtt fib_metrics[RTAX_RTT-1] 121 - #define fib_advmss fib_metrics[RTAX_ADVMSS-1] 117 + struct dst_metrics *fib_metrics; 118 + #define fib_mtu fib_metrics->metrics[RTAX_MTU-1] 119 + #define fib_window fib_metrics->metrics[RTAX_WINDOW-1] 120 + #define fib_rtt fib_metrics->metrics[RTAX_RTT-1] 121 + #define fib_advmss fib_metrics->metrics[RTAX_ADVMSS-1] 122 122 int fib_nhs; 123 123 #ifdef CONFIG_IP_ROUTE_MULTIPATH 124 124 int fib_weight;
+4
include/net/netfilter/nf_conntrack_helper.h
··· 9 9 10 10 #ifndef _NF_CONNTRACK_HELPER_H 11 11 #define _NF_CONNTRACK_HELPER_H 12 + #include <linux/refcount.h> 12 13 #include <net/netfilter/nf_conntrack.h> 13 14 #include <net/netfilter/nf_conntrack_extend.h> 14 15 #include <net/netfilter/nf_conntrack_expect.h> ··· 27 26 struct hlist_node hnode; /* Internal use. */ 28 27 29 28 char name[NF_CT_HELPER_NAME_LEN]; /* name of the module */ 29 + refcount_t refcnt; 30 30 struct module *me; /* pointer to self */ 31 31 const struct nf_conntrack_expect_policy *expect_policy; 32 32 ··· 81 79 struct nf_conntrack_helper *nf_conntrack_helper_try_module_get(const char *name, 82 80 u16 l3num, 83 81 u8 protonum); 82 + void nf_conntrack_helper_put(struct nf_conntrack_helper *helper); 83 + 84 84 void nf_ct_helper_init(struct nf_conntrack_helper *helper, 85 85 u16 l3num, u16 protonum, const char *name, 86 86 u16 default_port, u16 spec_port, u32 id,
+1 -1
include/net/netfilter/nf_tables.h
··· 176 176 int nft_data_init(const struct nft_ctx *ctx, 177 177 struct nft_data *data, unsigned int size, 178 178 struct nft_data_desc *desc, const struct nlattr *nla); 179 - void nft_data_uninit(const struct nft_data *data, enum nft_data_types type); 179 + void nft_data_release(const struct nft_data *data, enum nft_data_types type); 180 180 int nft_data_dump(struct sk_buff *skb, int attr, const struct nft_data *data, 181 181 enum nft_data_types type, unsigned int len); 182 182
+15
include/net/tc_act/tc_csum.h
··· 3 3 4 4 #include <linux/types.h> 5 5 #include <net/act_api.h> 6 + #include <linux/tc_act/tc_csum.h> 6 7 7 8 struct tcf_csum { 8 9 struct tc_action common; ··· 11 10 u32 update_flags; 12 11 }; 13 12 #define to_tcf_csum(a) ((struct tcf_csum *)a) 13 + 14 + static inline bool is_tcf_csum(const struct tc_action *a) 15 + { 16 + #ifdef CONFIG_NET_CLS_ACT 17 + if (a->ops && a->ops->type == TCA_ACT_CSUM) 18 + return true; 19 + #endif 20 + return false; 21 + } 22 + 23 + static inline u32 tcf_csum_update_flags(const struct tc_action *a) 24 + { 25 + return to_tcf_csum(a)->update_flags; 26 + } 14 27 15 28 #endif /* __NET_TC_CSUM_H */
-10
include/net/xfrm.h
··· 979 979 struct flow_cache_object flo; 980 980 struct xfrm_policy *pols[XFRM_POLICY_TYPE_MAX]; 981 981 int num_pols, num_xfrms; 982 - #ifdef CONFIG_XFRM_SUB_POLICY 983 - struct flowi *origin; 984 - struct xfrm_selector *partner; 985 - #endif 986 982 u32 xfrm_genid; 987 983 u32 policy_genid; 988 984 u32 route_mtu_cached; ··· 994 998 dst_release(xdst->route); 995 999 if (likely(xdst->u.dst.xfrm)) 996 1000 xfrm_state_put(xdst->u.dst.xfrm); 997 - #ifdef CONFIG_XFRM_SUB_POLICY 998 - kfree(xdst->origin); 999 - xdst->origin = NULL; 1000 - kfree(xdst->partner); 1001 - xdst->partner = NULL; 1002 - #endif 1003 1001 } 1004 1002 #endif 1005 1003
+1
include/target/iscsi/iscsi_target_core.h
··· 557 557 #define LOGIN_FLAGS_READ_ACTIVE 1 558 558 #define LOGIN_FLAGS_CLOSED 2 559 559 #define LOGIN_FLAGS_READY 4 560 + #define LOGIN_FLAGS_INITIAL_PDU 8 560 561 unsigned long login_flags; 561 562 struct delayed_work login_work; 562 563 struct delayed_work login_cleanup_work;
+1
kernel/bpf/arraymap.c
··· 86 86 array->map.key_size = attr->key_size; 87 87 array->map.value_size = attr->value_size; 88 88 array->map.max_entries = attr->max_entries; 89 + array->map.map_flags = attr->map_flags; 89 90 array->elem_size = elem_size; 90 91 91 92 if (!percpu)
+1
kernel/bpf/lpm_trie.c
··· 432 432 trie->map.key_size = attr->key_size; 433 433 trie->map.value_size = attr->value_size; 434 434 trie->map.max_entries = attr->max_entries; 435 + trie->map.map_flags = attr->map_flags; 435 436 trie->data_size = attr->key_size - 436 437 offsetof(struct bpf_lpm_trie_key, data); 437 438 trie->max_prefixlen = trie->data_size * 8;
+1
kernel/bpf/stackmap.c
··· 88 88 smap->map.key_size = attr->key_size; 89 89 smap->map.value_size = value_size; 90 90 smap->map.max_entries = attr->max_entries; 91 + smap->map.map_flags = attr->map_flags; 91 92 smap->n_buckets = n_buckets; 92 93 smap->map.pages = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT; 93 94
+34 -34
kernel/bpf/verifier.c
··· 463 463 BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5 464 464 }; 465 465 466 + static void mark_reg_not_init(struct bpf_reg_state *regs, u32 regno) 467 + { 468 + BUG_ON(regno >= MAX_BPF_REG); 469 + 470 + memset(&regs[regno], 0, sizeof(regs[regno])); 471 + regs[regno].type = NOT_INIT; 472 + regs[regno].min_value = BPF_REGISTER_MIN_RANGE; 473 + regs[regno].max_value = BPF_REGISTER_MAX_RANGE; 474 + } 475 + 466 476 static void init_reg_state(struct bpf_reg_state *regs) 467 477 { 468 478 int i; 469 479 470 - for (i = 0; i < MAX_BPF_REG; i++) { 471 - regs[i].type = NOT_INIT; 472 - regs[i].imm = 0; 473 - regs[i].min_value = BPF_REGISTER_MIN_RANGE; 474 - regs[i].max_value = BPF_REGISTER_MAX_RANGE; 475 - regs[i].min_align = 0; 476 - regs[i].aux_off = 0; 477 - regs[i].aux_off_align = 0; 478 - } 480 + for (i = 0; i < MAX_BPF_REG; i++) 481 + mark_reg_not_init(regs, i); 479 482 480 483 /* frame pointer */ 481 484 regs[BPF_REG_FP].type = FRAME_PTR; ··· 811 808 reg_off += reg->aux_off; 812 809 } 813 810 814 - /* skb->data is NET_IP_ALIGN-ed, but for strict alignment checking 815 - * we force this to 2 which is universally what architectures use 816 - * when they don't set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS. 811 + /* For platforms that do not have a Kconfig enabling 812 + * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS the value of 813 + * NET_IP_ALIGN is universally set to '2'. And on platforms 814 + * that do set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, we get 815 + * to this code only in strict mode where we want to emulate 816 + * the NET_IP_ALIGN==2 checking. Therefore use an 817 + * unconditional IP align value of '2'. 817 818 */ 818 - ip_align = strict ? 
2 : NET_IP_ALIGN; 819 + ip_align = 2; 819 820 if ((ip_align + reg_off + off) % size != 0) { 820 821 verbose("misaligned packet access off %d+%d+%d size %d\n", 821 822 ip_align, reg_off, off, size); ··· 845 838 int off, int size) 846 839 { 847 840 bool strict = env->strict_alignment; 848 - 849 - if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) 850 - strict = true; 851 841 852 842 switch (reg->type) { 853 843 case PTR_TO_PACKET: ··· 1349 1345 struct bpf_verifier_state *state = &env->cur_state; 1350 1346 const struct bpf_func_proto *fn = NULL; 1351 1347 struct bpf_reg_state *regs = state->regs; 1352 - struct bpf_reg_state *reg; 1353 1348 struct bpf_call_arg_meta meta; 1354 1349 bool changes_data; 1355 1350 int i, err; ··· 1415 1412 } 1416 1413 1417 1414 /* reset caller saved regs */ 1418 - for (i = 0; i < CALLER_SAVED_REGS; i++) { 1419 - reg = regs + caller_saved[i]; 1420 - reg->type = NOT_INIT; 1421 - reg->imm = 0; 1422 - } 1415 + for (i = 0; i < CALLER_SAVED_REGS; i++) 1416 + mark_reg_not_init(regs, caller_saved[i]); 1423 1417 1424 1418 /* update return register */ 1425 1419 if (fn->ret_type == RET_INTEGER) { ··· 2444 2444 { 2445 2445 struct bpf_reg_state *regs = env->cur_state.regs; 2446 2446 u8 mode = BPF_MODE(insn->code); 2447 - struct bpf_reg_state *reg; 2448 2447 int i, err; 2449 2448 2450 2449 if (!may_access_skb(env->prog->type)) { ··· 2476 2477 } 2477 2478 2478 2479 /* reset caller saved regs to unreadable */ 2479 - for (i = 0; i < CALLER_SAVED_REGS; i++) { 2480 - reg = regs + caller_saved[i]; 2481 - reg->type = NOT_INIT; 2482 - reg->imm = 0; 2483 - } 2480 + for (i = 0; i < CALLER_SAVED_REGS; i++) 2481 + mark_reg_not_init(regs, caller_saved[i]); 2484 2482 2485 2483 /* mark destination R0 register as readable, since it contains 2486 2484 * the value fetched from the packet ··· 2688 2692 /* the following conditions reduce the number of explored insns 2689 2693 * from ~140k to ~80k for ultra large programs that use a lot of ptr_to_packet 2690 2694 */ 
2691 - static bool compare_ptrs_to_packet(struct bpf_reg_state *old, 2695 + static bool compare_ptrs_to_packet(struct bpf_verifier_env *env, 2696 + struct bpf_reg_state *old, 2692 2697 struct bpf_reg_state *cur) 2693 2698 { 2694 2699 if (old->id != cur->id) ··· 2732 2735 * 'if (R4 > data_end)' and all further insn were already good with r=20, 2733 2736 * so they will be good with r=30 and we can prune the search. 2734 2737 */ 2735 - if (old->off <= cur->off && 2738 + if (!env->strict_alignment && old->off <= cur->off && 2736 2739 old->off >= old->range && cur->off >= cur->range) 2737 2740 return true; 2738 2741 ··· 2803 2806 continue; 2804 2807 2805 2808 if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET && 2806 - compare_ptrs_to_packet(rold, rcur)) 2809 + compare_ptrs_to_packet(env, rold, rcur)) 2807 2810 continue; 2808 2811 2809 2812 return false; ··· 3581 3584 } else { 3582 3585 log_level = 0; 3583 3586 } 3584 - if (attr->prog_flags & BPF_F_STRICT_ALIGNMENT) 3587 + 3588 + env->strict_alignment = !!(attr->prog_flags & BPF_F_STRICT_ALIGNMENT); 3589 + if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) 3585 3590 env->strict_alignment = true; 3586 - else 3587 - env->strict_alignment = false; 3588 3591 3589 3592 ret = replace_map_fd_with_map_ptr(env); 3590 3593 if (ret < 0) ··· 3690 3693 mutex_lock(&bpf_verifier_lock); 3691 3694 3692 3695 log_level = 0; 3696 + 3693 3697 env->strict_alignment = false; 3698 + if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) 3699 + env->strict_alignment = true; 3694 3700 3695 3701 env->explored_states = kcalloc(env->prog->len, 3696 3702 sizeof(struct bpf_verifier_state_list *),
+12 -5
kernel/fork.c
··· 1577 1577 if (!p) 1578 1578 goto fork_out; 1579 1579 1580 + /* 1581 + * This _must_ happen before we call free_task(), i.e. before we jump 1582 + * to any of the bad_fork_* labels. This is to avoid freeing 1583 + * p->set_child_tid which is (ab)used as a kthread's data pointer for 1584 + * kernel threads (PF_KTHREAD). 1585 + */ 1586 + p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL; 1587 + /* 1588 + * Clear TID on mm_release()? 1589 + */ 1590 + p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr : NULL; 1591 + 1580 1592 ftrace_graph_init_task(p); 1581 1593 1582 1594 rt_mutex_init_task(p); ··· 1755 1743 } 1756 1744 } 1757 1745 1758 - p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL; 1759 - /* 1760 - * Clear TID on mm_release()? 1761 - */ 1762 - p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr : NULL; 1763 1746 #ifdef CONFIG_BLOCK 1764 1747 p->plug = NULL; 1765 1748 #endif
+1 -1
kernel/kprobes.c
··· 122 122     return module_alloc(PAGE_SIZE);
123 123 }
124 124 
125 - static void free_insn_page(void *page)
125 + void __weak free_insn_page(void *page)
126 126 {
127 127     module_memfree(page);
128 128 }
+1
kernel/livepatch/Kconfig
··· 10 10     depends on SYSFS
11 11     depends on KALLSYMS_ALL
12 12     depends on HAVE_LIVEPATCH
13 +     depends on !TRIM_UNUSED_KSYMS
13 14     help
14 15       Say Y here if you want to support kernel live patching.
15 16       This option has no runtime impact until a kernel "patch"
+18 -6
kernel/locking/rtmutex.c
··· 1785 1785     int ret;
1786 1786 
1787 1787     raw_spin_lock_irq(&lock->wait_lock);
1788 - 
1789 -     set_current_state(TASK_INTERRUPTIBLE);
1790 - 
1791 1788     /* sleep on the mutex */
1789 +     set_current_state(TASK_INTERRUPTIBLE);
1792 1790     ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
1793 - 
1791 +     /*
1792 +      * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
1793 +      * have to fix that up.
1794 +      */
1795 +     fixup_rt_mutex_waiters(lock);
1794 1796     raw_spin_unlock_irq(&lock->wait_lock);
1795 1797 
1796 1798     return ret;
··· 1824 1822 
1825 1823     raw_spin_lock_irq(&lock->wait_lock);
1826 1824     /*
1825 +      * Do an unconditional try-lock, this deals with the lock stealing
1826 +      * state where __rt_mutex_futex_unlock() -> mark_wakeup_next_waiter()
1827 +      * sets a NULL owner.
1828 +      *
1829 +      * We're not interested in the return value, because the subsequent
1830 +      * test on rt_mutex_owner() will infer that. If the trylock succeeded,
1831 +      * we will own the lock and it will have removed the waiter. If we
1832 +      * failed the trylock, we're still not owner and we need to remove
1833 +      * ourselves.
1834 +      */
1835 +     try_to_take_rt_mutex(lock, current, waiter);
1836 +     /*
1827 1837      * Unless we're the owner; we're still enqueued on the wait_list.
1828 1838      * So check if we became owner, if not, take us off the wait_list.
1829 1839      */
1830 1840     if (rt_mutex_owner(lock) != current) {
1831 1841         remove_waiter(lock, waiter);
1832 -         fixup_rt_mutex_waiters(lock);
1833 1842         cleanup = true;
1834 1843     }
1835 - 
1836 1844     /*
1837 1845      * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
1838 1846      * have to fix that up.
+1 -1
kernel/power/snapshot.c
··· 1425 1425  * Numbers of normal and highmem page frames allocated for hibernation image
1426 1426  * before suspending devices.
1427 1427  */
1428 - unsigned int alloc_normal, alloc_highmem;
1428 + static unsigned int alloc_normal, alloc_highmem;
1429 1429 /*
1430 1430  * Memory bitmap used for marking saveable pages (during hibernation) or
1431 1431  * hibernation image pages (during restore)
+13 -7
kernel/ptrace.c
··· 60 60 }
61 61 
62 62 
63 + void __ptrace_link(struct task_struct *child, struct task_struct *new_parent,
64 +                    const struct cred *ptracer_cred)
65 + {
66 +     BUG_ON(!list_empty(&child->ptrace_entry));
67 +     list_add(&child->ptrace_entry, &new_parent->ptraced);
68 +     child->parent = new_parent;
69 +     child->ptracer_cred = get_cred(ptracer_cred);
70 + }
71 + 
63 72 /*
64 73  * ptrace a task: make the debugger its new parent and
65 74  * move it to the ptrace list.
66 75  *
67 76  * Must be called with the tasklist lock write-held.
68 77  */
69 - void __ptrace_link(struct task_struct *child, struct task_struct *new_parent)
78 + static void ptrace_link(struct task_struct *child, struct task_struct *new_parent)
70 79 {
71 -     BUG_ON(!list_empty(&child->ptrace_entry));
72 -     list_add(&child->ptrace_entry, &new_parent->ptraced);
73 -     child->parent = new_parent;
74 80     rcu_read_lock();
75 -     child->ptracer_cred = get_cred(__task_cred(new_parent));
81 +     __ptrace_link(child, new_parent, __task_cred(new_parent));
76 82     rcu_read_unlock();
77 83 }
78 84 
··· 392 386     flags |= PT_SEIZED;
393 387     task->ptrace = flags;
394 388 
395 -     __ptrace_link(task, current);
389 +     ptrace_link(task, current);
396 390 
397 391     /* SEIZE doesn't trap tracee on attach */
398 392     if (!seize)
··· 465 459      */
466 460     if (!ret && !(current->real_parent->flags & PF_EXITING)) {
467 461         current->ptrace = PT_PTRACED;
468 -         __ptrace_link(current, current->real_parent);
462 +         ptrace_link(current, current->real_parent);
469 463     }
470 464 }
471 465 write_unlock_irq(&tasklist_lock);
+3 -4
kernel/sched/cpufreq_schedutil.c
··· 245 245     sugov_update_commit(sg_policy, time, next_f);
246 246 }
247 247 
248 - static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
248 + static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
249 249 {
250 250     struct sugov_policy *sg_policy = sg_cpu->sg_policy;
251 251     struct cpufreq_policy *policy = sg_policy->policy;
252 -     u64 last_freq_update_time = sg_policy->last_freq_update_time;
253 252     unsigned long util = 0, max = 1;
254 253     unsigned int j;
255 254 
··· 264 265          * enough, don't take the CPU into account as it probably is
265 266          * idle now (and clear iowait_boost for it).
266 267          */
267 -         delta_ns = last_freq_update_time - j_sg_cpu->last_update;
268 +         delta_ns = time - j_sg_cpu->last_update;
268 269         if (delta_ns > TICK_NSEC) {
269 270             j_sg_cpu->iowait_boost = 0;
270 271             continue;
··· 308 309     if (flags & SCHED_CPUFREQ_RT_DL)
309 310         next_f = sg_policy->policy->cpuinfo.max_freq;
310 311     else
311 -         next_f = sugov_next_freq_shared(sg_cpu);
312 +         next_f = sugov_next_freq_shared(sg_cpu, time);
312 313 
313 314     sugov_update_commit(sg_policy, time, next_f);
314 315 }
+11 -3
kernel/time/alarmtimer.c
··· 357 357 {
358 358     struct alarm_base *base = &alarm_bases[alarm->type];
359 359 
360 -     start = ktime_add(start, base->gettime());
360 +     start = ktime_add_safe(start, base->gettime());
361 361     alarm_start(alarm, start);
362 362 }
363 363 EXPORT_SYMBOL_GPL(alarm_start_relative);
··· 445 445         overrun++;
446 446     }
447 447 
448 -     alarm->node.expires = ktime_add(alarm->node.expires, interval);
448 +     alarm->node.expires = ktime_add_safe(alarm->node.expires, interval);
449 449     return overrun;
450 450 }
451 451 EXPORT_SYMBOL_GPL(alarm_forward);
··· 662 662 
663 663     /* start the timer */
664 664     timr->it.alarm.interval = timespec64_to_ktime(new_setting->it_interval);
665 + 
666 +     /*
667 +      * Rate limit to the tick as a hot fix to prevent DOS. Will be
668 +      * mopped up later.
669 +      */
670 +     if (timr->it.alarm.interval < TICK_NSEC)
671 +         timr->it.alarm.interval = TICK_NSEC;
672 + 
665 673     exp = timespec64_to_ktime(new_setting->it_value);
666 674     /* Convert (if necessary) to absolute time */
667 675     if (flags != TIMER_ABSTIME) {
668 676         ktime_t now;
669 677 
670 678         now = alarm_bases[timr->it.alarm.alarmtimer.type].gettime();
671 -         exp = ktime_add(now, exp);
679 +         exp = ktime_add_safe(now, exp);
672 680     }
673 681 
674 682     alarm_start(&timr->it.alarm.alarmtimer, exp);
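The ktime_add() to ktime_add_safe() swaps above are overflow fixes: a huge user-supplied expiry or interval must saturate at the maximum representable time rather than wrap into the past. A minimal userspace model of the saturating add (the name and the non-negative-operand assumption are ours, not the kernel's exact implementation):

```c
#include <stdint.h>

#define KTIME_MAX INT64_MAX

/* Simplified model of ktime_add_safe(): clamp on overflow instead of
 * wrapping, so a huge user-supplied expiry saturates at KTIME_MAX
 * rather than becoming a time in the past.  Assumes non-negative
 * operands, which holds for expiries and intervals. */
static int64_t ktime_add_safe_sketch(int64_t a, int64_t b)
{
	/* Overflow check without relying on signed-overflow UB. */
	if (b > 0 && a > KTIME_MAX - b)
		return KTIME_MAX;
	return a + b;
}
```

With a wrapping add, `KTIME_MAX - 5 + 10` would go negative and the alarm would appear already expired; the clamped version keeps it in the far future.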
+16 -8
kernel/time/posix-cpu-timers.c
··· 825 825          * At the hard limit, we just die.
826 826          * No need to calculate anything else now.
827 827          */
828 -         pr_info("CPU Watchdog Timeout (hard): %s[%d]\n",
829 -             tsk->comm, task_pid_nr(tsk));
828 +         if (print_fatal_signals) {
829 +             pr_info("CPU Watchdog Timeout (hard): %s[%d]\n",
830 +                 tsk->comm, task_pid_nr(tsk));
831 +         }
830 832         __group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
831 833         return;
832 834     }
··· 840 838             soft += USEC_PER_SEC;
841 839             sig->rlim[RLIMIT_RTTIME].rlim_cur = soft;
842 840         }
843 -         pr_info("RT Watchdog Timeout (soft): %s[%d]\n",
844 -             tsk->comm, task_pid_nr(tsk));
841 +         if (print_fatal_signals) {
842 +             pr_info("RT Watchdog Timeout (soft): %s[%d]\n",
843 +                 tsk->comm, task_pid_nr(tsk));
844 +         }
845 845         __group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
846 846     }
847 847 }
··· 940 936          * At the hard limit, we just die.
941 937          * No need to calculate anything else now.
942 938          */
943 -         pr_info("RT Watchdog Timeout (hard): %s[%d]\n",
944 -             tsk->comm, task_pid_nr(tsk));
939 +         if (print_fatal_signals) {
940 +             pr_info("RT Watchdog Timeout (hard): %s[%d]\n",
941 +                 tsk->comm, task_pid_nr(tsk));
942 +         }
945 943         __group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk);
946 944         return;
947 945     }
··· 951 945     /*
952 946      * At the soft limit, send a SIGXCPU every second.
953 947      */
954 -     pr_info("CPU Watchdog Timeout (soft): %s[%d]\n",
955 -         tsk->comm, task_pid_nr(tsk));
948 +     if (print_fatal_signals) {
949 +         pr_info("CPU Watchdog Timeout (soft): %s[%d]\n",
950 +             tsk->comm, task_pid_nr(tsk));
951 +     }
956 952     __group_send_sig_info(SIGXCPU, SEND_SIG_PRIV, tsk);
957 953     if (soft < hard) {
958 954         soft++;
+1 -1
kernel/trace/ftrace.c
··· 5063 5063 }
5064 5064 
5065 5065  out:
5066 -     kfree(fgd->new_hash);
5066 +     free_ftrace_hash(fgd->new_hash);
5067 5067     kfree(fgd);
5068 5068 
5069 5069     return ret;
+38
lib/test_bpf.c
··· 4504 4504         { },
4505 4505         { { 0, 1 } },
4506 4506     },
4507 +     {
4508 +         "JMP_JSGE_K: Signed jump: value walk 1",
4509 +         .u.insns_int = {
4510 +             BPF_ALU32_IMM(BPF_MOV, R0, 0),
4511 +             BPF_LD_IMM64(R1, -3),
4512 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 6),
4513 +             BPF_ALU64_IMM(BPF_ADD, R1, 1),
4514 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 4),
4515 +             BPF_ALU64_IMM(BPF_ADD, R1, 1),
4516 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 2),
4517 +             BPF_ALU64_IMM(BPF_ADD, R1, 1),
4518 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 1),
4519 +             BPF_EXIT_INSN(),        /* bad exit */
4520 +             BPF_ALU32_IMM(BPF_MOV, R0, 1),  /* good exit */
4521 +             BPF_EXIT_INSN(),
4522 +         },
4523 +         INTERNAL,
4524 +         { },
4525 +         { { 0, 1 } },
4526 +     },
4527 +     {
4528 +         "JMP_JSGE_K: Signed jump: value walk 2",
4529 +         .u.insns_int = {
4530 +             BPF_ALU32_IMM(BPF_MOV, R0, 0),
4531 +             BPF_LD_IMM64(R1, -3),
4532 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 4),
4533 +             BPF_ALU64_IMM(BPF_ADD, R1, 2),
4534 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 2),
4535 +             BPF_ALU64_IMM(BPF_ADD, R1, 2),
4536 +             BPF_JMP_IMM(BPF_JSGE, R1, 0, 1),
4537 +             BPF_EXIT_INSN(),        /* bad exit */
4538 +             BPF_ALU32_IMM(BPF_MOV, R0, 1),  /* good exit */
4539 +             BPF_EXIT_INSN(),
4540 +         },
4541 +         INTERNAL,
4542 +         { },
4543 +         { { 0, 1 } },
4544 +     },
4507 4545     /* BPF_JMP | BPF_JGT | BPF_K */
4508 4546     {
4509 4547         "JMP_JGT_K: if (3 > 2) return 1",
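The two new cases exercise BPF_JSGE (signed >=) while R1 walks up from -3 in steps of 1 and 2; each program returns 1 only when the signed branch fires at the right step. In plain C the programs behave like this sketch (our rendering, not the BPF interpreter):

```c
#include <stdint.h>

/* C rendering of the "JMP_JSGE_K: Signed jump: value walk" tests above:
 * start at a negative value, take the signed >= 0 branch as soon as it
 * becomes true within the allowed number of checks, else fall through
 * to the bad exit.  walk 1 is (start=-3, step=1, 4 checks); walk 2 is
 * (start=-3, step=2, 3 checks). */
static int jsge_value_walk(int64_t start, int64_t step, int checks)
{
	int64_t r1 = start;
	int i;

	for (i = 0; i < checks; i++) {
		if (r1 >= 0)		/* BPF_JSGE: signed compare vs 0 */
			return 1;	/* good exit */
		r1 += step;
	}
	return 0;			/* bad exit */
}
```

Walk 1 hits 0 exactly on the fourth check (-3, -2, -1, 0); walk 2 crosses zero on the third (-3, -1, 1). An unsigned compare would treat -3 as a huge value and branch immediately, which is what these cases guard against.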
+8 -12
mm/gup.c
··· 407 407 
408 408     ret = handle_mm_fault(vma, address, fault_flags);
409 409     if (ret & VM_FAULT_ERROR) {
410 -         if (ret & VM_FAULT_OOM)
411 -             return -ENOMEM;
412 -         if (ret & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
413 -             return *flags & FOLL_HWPOISON ? -EHWPOISON : -EFAULT;
414 -         if (ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV))
415 -             return -EFAULT;
410 +         int err = vm_fault_to_errno(ret, *flags);
411 + 
412 +         if (err)
413 +             return err;
416 414         BUG();
417 415     }
418 416 
··· 721 723     ret = handle_mm_fault(vma, address, fault_flags);
722 724     major |= ret & VM_FAULT_MAJOR;
723 725     if (ret & VM_FAULT_ERROR) {
724 -         if (ret & VM_FAULT_OOM)
725 -             return -ENOMEM;
726 -         if (ret & (VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE))
727 -             return -EHWPOISON;
728 -         if (ret & (VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV))
729 -             return -EFAULT;
726 +         int err = vm_fault_to_errno(ret, 0);
727 + 
728 +         if (err)
729 +             return err;
730 730         BUG();
731 731     }
732 732 
+5
mm/hugetlb.c
··· 4170 4170         }
4171 4171         ret = hugetlb_fault(mm, vma, vaddr, fault_flags);
4172 4172         if (ret & VM_FAULT_ERROR) {
4173 +             int err = vm_fault_to_errno(ret, flags);
4174 + 
4175 +             if (err)
4176 +                 return err;
4177 + 
4173 4178             remainder = 0;
4174 4179             break;
4175 4180         }
+1 -2
mm/ksm.c
··· 1028 1028         goto out;
1029 1029 
1030 1030     if (PageTransCompound(page)) {
1031 -         err = split_huge_page(page);
1032 -         if (err)
1031 +         if (split_huge_page(page))
1033 1032             goto out_unlock;
1034 1033     }
1035 1034 
+23
mm/memblock.c
··· 1739 1739     }
1740 1740 }
1741 1741 
1742 + extern unsigned long __init_memblock
1743 + memblock_reserved_memory_within(phys_addr_t start_addr, phys_addr_t end_addr)
1744 + {
1745 +     struct memblock_region *rgn;
1746 +     unsigned long size = 0;
1747 +     int idx;
1748 + 
1749 +     for_each_memblock_type((&memblock.reserved), rgn) {
1750 +         phys_addr_t start, end;
1751 + 
1752 +         if (rgn->base + rgn->size < start_addr)
1753 +             continue;
1754 +         if (rgn->base > end_addr)
1755 +             continue;
1756 + 
1757 +         start = rgn->base;
1758 +         end = start + rgn->size;
1759 +         size += end - start;
1760 +     }
1761 + 
1762 +     return size;
1763 + }
1764 + 
1742 1765 void __init_memblock __memblock_dump_all(void)
1743 1766 {
1744 1767     pr_info("MEMBLOCK configuration:\n");
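The new memblock_reserved_memory_within() totals the reserved regions that overlap a physical window; the page allocator below uses it to grow its deferred-init estimate. A plain-C model (struct and names simplified by us) makes the behaviour easy to check; note that, matching the code above, a region that merely touches the window is counted in full, so the helper over-estimates rather than under-estimates the reservation:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

struct region { phys_addr_t base, size; };

/* Model of memblock_reserved_memory_within(): walk a reserved-region
 * table and total every region overlapping [start_addr, end_addr].
 * An overlapping region contributes its whole size, as in the patch. */
static unsigned long reserved_within(const struct region *rgns, size_t n,
				     phys_addr_t start_addr, phys_addr_t end_addr)
{
	unsigned long size = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (rgns[i].base + rgns[i].size < start_addr)
			continue;	/* region ends before the window */
		if (rgns[i].base > end_addr)
			continue;	/* region starts after the window */
		size += rgns[i].size;
	}
	return size;
}
```

Over-estimating is the safe direction here: initializing a few extra pages at boot is cheap, while under-counting a crash-kernel reservation could leave too little initialized memory to boot.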
+2 -6
mm/memory-failure.c
··· 1595 1595     if (ret) {
1596 1596         pr_info("soft offline: %#lx: migration failed %d, type %lx (%pGp)\n",
1597 1597             pfn, ret, page->flags, &page->flags);
1598 -         /*
1599 -          * We know that soft_offline_huge_page() tries to migrate
1600 -          * only one hugepage pointed to by hpage, so we need not
1601 -          * run through the pagelist here.
1602 -          */
1603 -         putback_active_hugepage(hpage);
1598 +         if (!list_empty(&pagelist))
1599 +             putback_movable_pages(&pagelist);
1604 1600         if (ret > 0)
1605 1601             ret = -EIO;
1606 1602     } else {
+30 -10
mm/memory.c
··· 3029 3029     return ret;
3030 3030 }
3031 3031 
3032 + /*
3033 +  * The ordering of these checks is important for pmds with _PAGE_DEVMAP set.
3034 +  * If we check pmd_trans_unstable() first we will trip the bad_pmd() check
3035 +  * inside of pmd_none_or_trans_huge_or_clear_bad(). This will end up correctly
3036 +  * returning 1 but not before it spams dmesg with the pmd_clear_bad() output.
3037 +  */
3038 + static int pmd_devmap_trans_unstable(pmd_t *pmd)
3039 + {
3040 +     return pmd_devmap(*pmd) || pmd_trans_unstable(pmd);
3041 + }
3042 + 
3032 3043 static int pte_alloc_one_map(struct vm_fault *vmf)
3033 3044 {
3034 3045     struct vm_area_struct *vma = vmf->vma;
··· 3063 3052  map_pte:
3064 3053     /*
3065 3054      * If a huge pmd materialized under us just retry later. Use
3066 -      * pmd_trans_unstable() instead of pmd_trans_huge() to ensure the pmd
3067 -      * didn't become pmd_trans_huge under us and then back to pmd_none, as
3068 -      * a result of MADV_DONTNEED running immediately after a huge pmd fault
3069 -      * in a different thread of this mm, in turn leading to a misleading
3070 -      * pmd_trans_huge() retval. All we have to ensure is that it is a
3071 -      * regular pmd that we can walk with pte_offset_map() and we can do that
3072 -      * through an atomic read in C, which is what pmd_trans_unstable()
3073 -      * provides.
3055 +      * pmd_trans_unstable() via pmd_devmap_trans_unstable() instead of
3056 +      * pmd_trans_huge() to ensure the pmd didn't become pmd_trans_huge
3057 +      * under us and then back to pmd_none, as a result of MADV_DONTNEED
3058 +      * running immediately after a huge pmd fault in a different thread of
3059 +      * this mm, in turn leading to a misleading pmd_trans_huge() retval.
3060 +      * All we have to ensure is that it is a regular pmd that we can walk
3061 +      * with pte_offset_map() and we can do that through an atomic read in
3062 +      * C, which is what pmd_trans_unstable() provides.
3074 3063      */
3075 -     if (pmd_trans_unstable(vmf->pmd) || pmd_devmap(*vmf->pmd))
3064 +     if (pmd_devmap_trans_unstable(vmf->pmd))
3076 3065         return VM_FAULT_NOPAGE;
3077 3066 
3067 +     /*
3068 +      * At this point we know that our vmf->pmd points to a page of ptes
3069 +      * and it cannot become pmd_none(), pmd_devmap() or pmd_trans_huge()
3070 +      * for the duration of the fault. If a racing MADV_DONTNEED runs and
3071 +      * we zap the ptes pointed to by our vmf->pmd, the vmf->ptl will still
3072 +      * be valid and we will re-check to make sure the vmf->pte isn't
3073 +      * pte_none() under vmf->ptl protection when we return to
3074 +      * alloc_set_pte().
3075 +      */
3078 3076     vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
3079 3077                                    &vmf->ptl);
3080 3078     return 0;
··· 3710 3690         vmf->pte = NULL;
3711 3691     } else {
3712 3692         /* See comment in pte_alloc_one_map() */
3713 -         if (pmd_trans_unstable(vmf->pmd) || pmd_devmap(*vmf->pmd))
3693 +         if (pmd_devmap_trans_unstable(vmf->pmd))
3714 3694             return 0;
3715 3695         /*
3716 3696          * A regular pmd is established and it can't morph into a huge
+3 -2
mm/mlock.c
··· 284 284 {
285 285     int i;
286 286     int nr = pagevec_count(pvec);
287 -     int delta_munlocked;
287 +     int delta_munlocked = -nr;
288 288     struct pagevec pvec_putback;
289 289     int pgrescued = 0;
290 290 
··· 304 304                 continue;
305 305             else
306 306                 __munlock_isolation_failed(page);
307 +         } else {
308 +             delta_munlocked++;
307 309         }
308 310 
309 311         /*
··· 317 315             pagevec_add(&pvec_putback, pvec->pages[i]);
318 316             pvec->pages[i] = NULL;
319 317         }
320 -     delta_munlocked = -nr + pagevec_count(&pvec_putback);
321 318     __mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
322 319     spin_unlock_irq(zone_lru_lock(zone));
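The mlock change moves the NR_MLOCK accounting inline: instead of deriving the delta from the putback vector afterwards, it assumes all nr pages will be munlocked (delta = -nr) and adds one back for each page that turns out not to carry the mlocked bit. A small model of that counting pattern (names are ours):

```c
#include <stddef.h>

/* Model of the delta_munlocked accounting above: start from -nr
 * (assume every page gets munlocked) and restore one for each page
 * that was not actually mlocked, so the zone counter delta is built
 * while the per-page decision is made rather than recomputed later. */
static int munlock_delta(const int *page_mlocked, size_t nr)
{
	int delta = -(int)nr;
	size_t i;

	for (i = 0; i < nr; i++) {
		if (!page_mlocked[i])
			delta++;	/* not mlocked: no counter change */
	}
	return delta;
}
```

Building the delta as the decisions are made keeps the counter update consistent with exactly the pages processed under the zone lock.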
+25 -12
mm/page_alloc.c
··· 292 292 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
293 293 static inline void reset_deferred_meminit(pg_data_t *pgdat)
294 294 {
295 +     unsigned long max_initialise;
296 +     unsigned long reserved_lowmem;
297 + 
298 +     /*
299 +      * Initialise at least 2G of a node but also take into account that
300 +      * two large system hashes that can take up 1GB for 0.25TB/node.
301 +      */
302 +     max_initialise = max(2UL << (30 - PAGE_SHIFT),
303 +         (pgdat->node_spanned_pages >> 8));
304 + 
305 +     /*
306 +      * Compensate the all the memblock reservations (e.g. crash kernel)
307 +      * from the initial estimation to make sure we will initialize enough
308 +      * memory to boot.
309 +      */
310 +     reserved_lowmem = memblock_reserved_memory_within(pgdat->node_start_pfn,
311 +         pgdat->node_start_pfn + max_initialise);
312 +     max_initialise += reserved_lowmem;
313 + 
314 +     pgdat->static_init_size = min(max_initialise, pgdat->node_spanned_pages);
295 315     pgdat->first_deferred_pfn = ULONG_MAX;
296 316 }
297 317 
··· 334 314                 unsigned long pfn, unsigned long zone_end,
335 315                 unsigned long *nr_initialised)
336 316 {
337 -     unsigned long max_initialise;
338 - 
339 317     /* Always populate low zones for address-contrained allocations */
340 318     if (zone_end < pgdat_end_pfn(pgdat))
341 319         return true;
342 -     /*
343 -      * Initialise at least 2G of a node but also take into account that
344 -      * two large system hashes that can take up 1GB for 0.25TB/node.
345 -      */
346 -     max_initialise = max(2UL << (30 - PAGE_SHIFT),
347 -         (pgdat->node_spanned_pages >> 8));
348 - 
349 320     (*nr_initialised)++;
350 -     if ((*nr_initialised > max_initialise) &&
321 +     if ((*nr_initialised > pgdat->static_init_size) &&
351 322         (pfn & (PAGES_PER_SECTION - 1)) == 0) {
352 323         pgdat->first_deferred_pfn = pfn;
353 324         return false;
··· 3881 3870         goto got_pg;
3882 3871 
3883 3872     /* Avoid allocations with no watermarks from looping endlessly */
3884 -     if (test_thread_flag(TIF_MEMDIE))
3873 +     if (test_thread_flag(TIF_MEMDIE) &&
3874 +         (alloc_flags == ALLOC_NO_WATERMARKS ||
3875 +          (gfp_mask & __GFP_NOMEMALLOC)))
3885 3876         goto nopage;
3886 3877 
3887 3878     /* Retry as long as the OOM killer is making progress */
··· 6149 6136     /* pg_data_t should be reset to zero when it's allocated */
6150 6137     WARN_ON(pgdat->nr_zones || pgdat->kswapd_classzone_idx);
6151 6138 
6152 -     reset_deferred_meminit(pgdat);
6153 6139     pgdat->node_id = nid;
6154 6140     pgdat->node_start_pfn = node_start_pfn;
6155 6141     pgdat->per_cpu_nodestats = NULL;
··· 6170 6158         (unsigned long)pgdat->node_mem_map);
6171 6159 #endif
6172 6160 
6161 +     reset_deferred_meminit(pgdat);
6173 6162     free_area_init_core(pgdat);
6174 6163 }
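The reworked reset_deferred_meminit() computes the per-node eager-init budget once at boot: at least 2G of pages (or spanned/256 if larger), grown by the memblock reservations in that window, capped at what the node actually spans. A userspace model of that arithmetic (the reserved_pages argument stands in for memblock_reserved_memory_within()):

```c
#include <stdint.h>

#define PAGE_SHIFT 12	/* assume 4K pages for the sketch */

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

/* Model of the static_init_size computation above: initialise at least
 * 2G of a node (or spanned/256 if that is larger), add the memblock
 * reservations inside that window, and never exceed the node span. */
static uint64_t static_init_size(uint64_t spanned_pages, uint64_t reserved_pages)
{
	uint64_t max_initialise = max_u64(2ULL << (30 - PAGE_SHIFT),
					  spanned_pages >> 8);

	max_initialise += reserved_pages;
	return min_u64(max_initialise, spanned_pages);
}
```

With 4K pages the 2G floor is 524288 pages; a small node is simply initialized in full, while a 256GB node gets the floor plus whatever the crash kernel and friends reserved.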
+4 -2
mm/slub.c
··· 5512 5512         char mbuf[64];
5513 5513         char *buf;
5514 5514         struct slab_attribute *attr = to_slab_attr(slab_attrs[i]);
5515 +         ssize_t len;
5515 5516 
5516 5517         if (!attr || !attr->store || !attr->show)
5517 5518             continue;
··· 5537 5536             buf = buffer;
5538 5537         }
5539 5538 
5540 -         attr->show(root_cache, buf);
5541 -         attr->store(s, buf, strlen(buf));
5539 +         len = attr->show(root_cache, buf);
5540 +         if (len > 0)
5541 +             attr->store(s, buf, len);
5542 5542     }
5543 5543 
5544 5544     if (buffer)
+5 -2
mm/util.c
··· 357 357     WARN_ON_ONCE((flags & GFP_KERNEL) != GFP_KERNEL);
358 358 
359 359     /*
360 -      * Make sure that larger requests are not too disruptive - no OOM
361 -      * killer and no allocation failure warnings as we have a fallback
360 +      * We want to attempt a large physically contiguous block first because
361 +      * it is less likely to fragment multiple larger blocks and therefore
362 +      * contribute to a long term fragmentation less than vmalloc fallback.
363 +      * However make sure that larger requests are not too disruptive - no
364 +      * OOM killer and no allocation failure warnings as we have a fallback.
362 365      */
363 366     if (size > PAGE_SIZE) {
364 367         kmalloc_flags |= __GFP_NOWARN;
+1
net/bridge/br_stp_if.c
··· 179 179         br_debug(br, "using kernel STP\n");
180 180 
181 181         /* To start timers on any ports left in blocking */
182 +         mod_timer(&br->hello_timer, jiffies + br->hello_time);
182 183         br_port_state_selection(br);
183 184     }
184 185 
+1 -1
net/bridge/br_stp_timer.c
··· 40 40     if (br->dev->flags & IFF_UP) {
41 41         br_config_bpdu_generation(br);
42 42 
43 -         if (br->stp_enabled != BR_USER_STP)
43 +         if (br->stp_enabled == BR_KERNEL_STP)
44 44             mod_timer(&br->hello_timer,
45 45                 round_jiffies(jiffies + br->hello_time));
46 46     }
+3
net/bridge/netfilter/ebt_arpreply.c
··· 68 68     if (e->ethproto != htons(ETH_P_ARP) ||
69 69         e->invflags & EBT_IPROTO)
70 70         return -EINVAL;
71 +     if (ebt_invalid_target(info->target))
72 +         return -EINVAL;
73 + 
71 74     return 0;
72 75 }
73 76 
+6 -3
net/bridge/netfilter/ebtables.c
··· 1373 1373     strlcpy(name, _name, sizeof(name));
1374 1374     if (copy_to_user(um, name, EBT_FUNCTION_MAXNAMELEN) ||
1375 1375         put_user(datasize, (int __user *)(um + EBT_FUNCTION_MAXNAMELEN)) ||
1376 -         xt_data_to_user(um + entrysize, data, usersize, datasize))
1376 +         xt_data_to_user(um + entrysize, data, usersize, datasize,
1377 +                         XT_ALIGN(datasize)))
1377 1378         return -EFAULT;
1378 1379 
1379 1380     return 0;
··· 1659 1658         if (match->compat_to_user(cm->data, m->data))
1660 1659             return -EFAULT;
1661 1660     } else {
1662 -         if (xt_data_to_user(cm->data, m->data, match->usersize, msize))
1661 +         if (xt_data_to_user(cm->data, m->data, match->usersize, msize,
1662 +                             COMPAT_XT_ALIGN(msize)))
1663 1663             return -EFAULT;
1664 1664     }
1665 1665 
··· 1689 1687         if (target->compat_to_user(cm->data, t->data))
1690 1688             return -EFAULT;
1691 1689     } else {
1692 -         if (xt_data_to_user(cm->data, t->data, target->usersize, tsize))
1690 +         if (xt_data_to_user(cm->data, t->data, target->usersize, tsize,
1691 +                             COMPAT_XT_ALIGN(tsize)))
1693 1692             return -EFAULT;
1694 1693     }
1695 1694 
+8 -5
net/ceph/auth_x.c
··· 151 151     struct timespec validity;
152 152     void *tp, *tpend;
153 153     void **ptp;
154 -     struct ceph_crypto_key new_session_key;
154 +     struct ceph_crypto_key new_session_key = { 0 };
155 155     struct ceph_buffer *new_ticket_blob;
156 156     unsigned long new_expires, new_renew_after;
157 157     u64 new_secret_id;
··· 215 215     dout(" ticket blob is %d bytes\n", dlen);
216 216     ceph_decode_need(ptp, tpend, 1 + sizeof(u64), bad);
217 217     blob_struct_v = ceph_decode_8(ptp);
218 +     if (blob_struct_v != 1)
219 +         goto bad;
220 + 
218 221     new_secret_id = ceph_decode_64(ptp);
219 222     ret = ceph_decode_buffer(&new_ticket_blob, ptp, tpend);
220 223     if (ret)
··· 237 234         type, ceph_entity_type_name(type), th->secret_id,
238 235         (int)th->ticket_blob->vec.iov_len);
239 236     xi->have_keys |= th->service;
240 - 
241 -  out:
242 -     return ret;
237 +     return 0;
243 238 
244 239  bad:
245 240     ret = -EINVAL;
246 -     goto out;
241 +  out:
242 +     ceph_crypto_key_destroy(&new_session_key);
243 +     return ret;
247 244 }
248 245 
249 246 static int ceph_x_proc_ticket_reply(struct ceph_auth_client *ac,
-13
net/ceph/ceph_common.c
··· 56 56 module_param_cb(supported_features, &param_ops_supported_features, NULL,
57 57     S_IRUGO);
58 58 
59 - /*
60 -  * find filename portion of a path (/foo/bar/baz -> baz)
61 -  */
62 - const char *ceph_file_part(const char *s, int len)
63 - {
64 -     const char *e = s + len;
65 - 
66 -     while (e != s && *(e-1) != '/')
67 -         e--;
68 -     return e;
69 - }
70 - EXPORT_SYMBOL(ceph_file_part);
71 - 
72 59 const char *ceph_msg_type_name(int type)
73 60 {
74 61     switch (type) {
+16 -10
net/ceph/messenger.c
··· 1174 1174  * Returns true if the result moves the cursor on to the next piece
1175 1175  * of the data item.
1176 1176  */
1177 - static bool ceph_msg_data_advance(struct ceph_msg_data_cursor *cursor,
1178 -                   size_t bytes)
1177 + static void ceph_msg_data_advance(struct ceph_msg_data_cursor *cursor,
1178 +                   size_t bytes)
1179 1179 {
1180 1180     bool new_piece;
1181 1181 
··· 1207 1207         new_piece = true;
1208 1208     }
1209 1209     cursor->need_crc = new_piece;
1210 - 
1211 -     return new_piece;
1212 1210 }
1213 1211 
1214 1212 static size_t sizeof_footer(struct ceph_connection *con)
··· 1575 1577     size_t page_offset;
1576 1578     size_t length;
1577 1579     bool last_piece;
1578 -     bool need_crc;
1579 1580     int ret;
1580 1581 
1581 1582     page = ceph_msg_data_next(cursor, &page_offset, &length,
··· 1589 1592     }
1590 1593     if (do_datacrc && cursor->need_crc)
1591 1594         crc = ceph_crc32c_page(crc, page, page_offset, length);
1592 -     need_crc = ceph_msg_data_advance(cursor, (size_t)ret);
1595 +     ceph_msg_data_advance(cursor, (size_t)ret);
1593 1596 }
1594 1597 
1595 1598 dout("%s %p msg %p done\n", __func__, con, msg);
··· 2228 2231     struct ceph_msg *m;
2229 2232     u64 ack = le64_to_cpu(con->in_temp_ack);
2230 2233     u64 seq;
2234 +     bool reconnect = (con->in_tag == CEPH_MSGR_TAG_SEQ);
2235 +     struct list_head *list = reconnect ? &con->out_queue : &con->out_sent;
2231 2236 
2232 -     while (!list_empty(&con->out_sent)) {
2233 -         m = list_first_entry(&con->out_sent, struct ceph_msg,
2234 -                      list_head);
2237 +     /*
2238 +      * In the reconnect case, con_fault() has requeued messages
2239 +      * in out_sent. We should cleanup old messages according to
2240 +      * the reconnect seq.
2241 +      */
2242 +     while (!list_empty(list)) {
2243 +         m = list_first_entry(list, struct ceph_msg, list_head);
2244 +         if (reconnect && m->needs_out_seq)
2245 +             break;
2235 2246         seq = le64_to_cpu(m->hdr.seq);
2236 2247         if (seq > ack)
2237 2248             break;
··· 2248 2243         m->ack_stamp = jiffies;
2249 2244         ceph_msg_remove(m);
2250 2245     }
2246 + 
2251 2247     prepare_read_tag(con);
2252 2248 }
··· 2305 2299 
2306 2300     if (do_datacrc)
2307 2301         crc = ceph_crc32c_page(crc, page, page_offset, ret);
2308 -     (void) ceph_msg_data_advance(cursor, (size_t)ret);
2302 +     ceph_msg_data_advance(cursor, (size_t)ret);
2309 2303 }
2310 2304 if (do_datacrc)
2311 2305     con->in_data_crc = crc;
+1 -3
net/ceph/mon_client.c
··· 43 43     int i, err = -EINVAL;
44 44     struct ceph_fsid fsid;
45 45     u32 epoch, num_mon;
46 -     u16 version;
47 46     u32 len;
48 47 
49 48     ceph_decode_32_safe(&p, end, len, bad);
50 49     ceph_decode_need(&p, end, len, bad);
51 50 
52 51     dout("monmap_decode %p %p len %d\n", p, end, (int)(end-p));
53 - 
54 -     ceph_decode_16_safe(&p, end, version, bad);
52 +     p += sizeof(u16);  /* skip version */
55 53 
56 54     ceph_decode_need(&p, end, sizeof(fsid) + 2*sizeof(u32), bad);
57 55     ceph_decode_copy(&p, &fsid, sizeof(fsid));
+1
net/ceph/osdmap.c
··· 317 317         u32 yes;
318 318         struct crush_rule *r;
319 319 
320 +         err = -EINVAL;
320 321         ceph_decode_32_safe(p, end, yes, bad);
321 322         if (!yes) {
322 323             dout("crush_decode NO rule %d off %x %p to %p\n",
+14 -9
net/core/dst.c
··· 151 151 }
152 152 EXPORT_SYMBOL(dst_discard_out);
153 153 
154 - const u32 dst_default_metrics[RTAX_MAX + 1] = {
154 + const struct dst_metrics dst_default_metrics = {
155 155     /* This initializer is needed to force linker to place this variable
156 156      * into const section. Otherwise it might end into bss section.
157 157      * We really want to avoid false sharing on this variable, and catch
158 158      * any writes on it.
159 159      */
160 -     [RTAX_MAX] = 0xdeadbeef,
160 +     .refcnt = ATOMIC_INIT(1),
161 161 };
162 162 
163 163 void dst_init(struct dst_entry *dst, struct dst_ops *ops,
··· 169 169     if (dev)
170 170         dev_hold(dev);
171 171     dst->ops = ops;
172 -     dst_init_metrics(dst, dst_default_metrics, true);
172 +     dst_init_metrics(dst, dst_default_metrics.metrics, true);
173 173     dst->expires = 0UL;
174 174     dst->path = dst;
175 175     dst->from = NULL;
··· 314 314 
315 315 u32 *dst_cow_metrics_generic(struct dst_entry *dst, unsigned long old)
316 316 {
317 -     u32 *p = kmalloc(sizeof(u32) * RTAX_MAX, GFP_ATOMIC);
317 +     struct dst_metrics *p = kmalloc(sizeof(*p), GFP_ATOMIC);
318 318 
319 319     if (p) {
320 -         u32 *old_p = __DST_METRICS_PTR(old);
320 +         struct dst_metrics *old_p = (struct dst_metrics *)__DST_METRICS_PTR(old);
321 321         unsigned long prev, new;
322 322 
323 -         memcpy(p, old_p, sizeof(u32) * RTAX_MAX);
323 +         atomic_set(&p->refcnt, 1);
324 +         memcpy(p->metrics, old_p->metrics, sizeof(p->metrics));
324 325 
325 326         new = (unsigned long) p;
326 327         prev = cmpxchg(&dst->_metrics, old, new);
327 328 
328 329         if (prev != old) {
329 330             kfree(p);
330 -             p = __DST_METRICS_PTR(prev);
331 +             p = (struct dst_metrics *)__DST_METRICS_PTR(prev);
331 332             if (prev & DST_METRICS_READ_ONLY)
332 333                 p = NULL;
334 +         } else if (prev & DST_METRICS_REFCOUNTED) {
335 +             if (atomic_dec_and_test(&old_p->refcnt))
336 +                 kfree(old_p);
333 337         }
334 338     }
335 -     return p;
339 +     BUILD_BUG_ON(offsetof(struct dst_metrics, metrics) != 0);
340 +     return (u32 *)p;
336 341 }
337 342 EXPORT_SYMBOL(dst_cow_metrics_generic);
338 343 
··· 346 341 {
347 342     unsigned long prev, new;
348 343 
349 -     new = ((unsigned long) dst_default_metrics) | DST_METRICS_READ_ONLY;
344 +     new = ((unsigned long) &dst_default_metrics) | DST_METRICS_READ_ONLY;
350 345     prev = cmpxchg(&dst->_metrics, old, new);
351 346     if (prev == old)
352 347         kfree(__DST_METRICS_PTR(old));
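The dst.c change wraps the metrics array in a struct with a refcount so that a copy-on-write replacement can drop its reference to the old block and free it at zero, instead of leaking it. A simplified userspace model of that CoW step using C11 atomics (it omits the kernel's cmpxchg race handling and the READ_ONLY/REFCOUNTED tag bits; names are ours):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define RTAX_MAX 4	/* shrunk for the sketch */

/* Model of the refcounted dst metrics: the array and its refcount
 * travel together, so replacing a writable copy can release the old
 * block safely. */
struct dst_metrics_sketch {
	uint32_t metrics[RTAX_MAX];
	atomic_int refcnt;
};

/* Copy-on-write: allocate a fresh writable block seeded from the old
 * one, then drop our reference on the old block (freeing it when the
 * count hits zero) if it was refcounted rather than the shared
 * read-only default. */
static struct dst_metrics_sketch *cow_metrics(struct dst_metrics_sketch *old,
					      int old_refcounted)
{
	struct dst_metrics_sketch *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	atomic_init(&p->refcnt, 1);
	memcpy(p->metrics, old->metrics, sizeof(p->metrics));
	/* atomic_fetch_sub returns the previous value: 1 means we held
	 * the last reference. */
	if (old_refcounted && atomic_fetch_sub(&old->refcnt, 1) == 1)
		free(old);
	return p;
}
```

The fib_semantics.c hunk below is the consumer side of the same scheme: free_fib_info_rcu() now drops a reference instead of unconditionally kfree()ing the metrics.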
+1
net/core/filter.c
··· 2281 2281         func == bpf_skb_change_head ||
2282 2282         func == bpf_skb_change_tail ||
2283 2283         func == bpf_skb_pull_data ||
2284 +         func == bpf_clone_redirect ||
2284 2285         func == bpf_l3_csum_replace ||
2285 2286         func == bpf_l4_csum_replace ||
2286 2287         func == bpf_xdp_adjust_head)
+19
net/core/net_namespace.c
··· 315 315     goto out;
316 316 }
317 317 
318 + static int __net_init net_defaults_init_net(struct net *net)
319 + {
320 +     net->core.sysctl_somaxconn = SOMAXCONN;
321 +     return 0;
322 + }
323 + 
324 + static struct pernet_operations net_defaults_ops = {
325 +     .init = net_defaults_init_net,
326 + };
327 + 
328 + static __init int net_defaults_init(void)
329 + {
330 +     if (register_pernet_subsys(&net_defaults_ops))
331 +         panic("Cannot initialize net default settings");
332 + 
333 +     return 0;
334 + }
335 + 
336 + core_initcall(net_defaults_init);
318 337 
319 338 #ifdef CONFIG_NET_NS
320 339 static struct ucounts *inc_net_namespaces(struct user_namespace *ns)
+5 -2
net/core/rtnetlink.c
··· 3231 3231     int err = 0;
3232 3232     int fidx = 0;
3233 3233 
3234 -     if (nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
3235 -             IFLA_MAX, ifla_policy, NULL) == 0) {
3234 +     err = nlmsg_parse(cb->nlh, sizeof(struct ifinfomsg), tb,
3235 +               IFLA_MAX, ifla_policy, NULL);
3236 +     if (err < 0) {
3237 +         return -EINVAL;
3238 +     } else if (err == 0) {
3236 3239         if (tb[IFLA_MASTER])
3237 3240             br_idx = nla_get_u32(tb[IFLA_MASTER]);
3238 3241     }
-2
net/core/sysctl_net_core.c
··· 479 479 {
480 480     struct ctl_table *tbl;
481 481 
482 -     net->core.sysctl_somaxconn = SOMAXCONN;
483 - 
484 482     tbl = netns_core_table;
485 483     if (!net_eq(net, &init_net)) {
486 484         tbl = kmemdup(tbl, sizeof(netns_core_table), GFP_KERNEL);
+39 -17
net/ipv4/arp.c
··· 641 641 } 642 642 EXPORT_SYMBOL(arp_xmit); 643 643 644 + static bool arp_is_garp(struct net *net, struct net_device *dev, 645 + int *addr_type, __be16 ar_op, 646 + __be32 sip, __be32 tip, 647 + unsigned char *sha, unsigned char *tha) 648 + { 649 + bool is_garp = tip == sip; 650 + 651 + /* Gratuitous ARP _replies_ also require target hwaddr to be 652 + * the same as source. 653 + */ 654 + if (is_garp && ar_op == htons(ARPOP_REPLY)) 655 + is_garp = 656 + /* IPv4 over IEEE 1394 doesn't provide target 657 + * hardware address field in its ARP payload. 658 + */ 659 + tha && 660 + !memcmp(tha, sha, dev->addr_len); 661 + 662 + if (is_garp) { 663 + *addr_type = inet_addr_type_dev_table(net, dev, sip); 664 + if (*addr_type != RTN_UNICAST) 665 + is_garp = false; 666 + } 667 + return is_garp; 668 + } 669 + 644 670 /* 645 671 * Process an arp request. 646 672 */ ··· 863 837 864 838 n = __neigh_lookup(&arp_tbl, &sip, dev, 0); 865 839 866 - if (IN_DEV_ARP_ACCEPT(in_dev)) { 867 - unsigned int addr_type = inet_addr_type_dev_table(net, dev, sip); 840 + addr_type = -1; 841 + if (n || IN_DEV_ARP_ACCEPT(in_dev)) { 842 + is_garp = arp_is_garp(net, dev, &addr_type, arp->ar_op, 843 + sip, tip, sha, tha); 844 + } 868 845 846 + if (IN_DEV_ARP_ACCEPT(in_dev)) { 869 847 /* Unsolicited ARP is not accepted by default. 870 848 It is possible, that this option should be enabled for some 871 849 devices (strip is candidate) 872 850 */ 873 - is_garp = tip == sip && addr_type == RTN_UNICAST; 874 - 875 - /* Unsolicited ARP _replies_ also require target hwaddr to be 876 - * the same as source. 877 - */ 878 - if (is_garp && arp->ar_op == htons(ARPOP_REPLY)) 879 - is_garp = 880 - /* IPv4 over IEEE 1394 doesn't provide target 881 - * hardware address field in its ARP payload. 882 - */ 883 - tha && 884 - !memcmp(tha, sha, dev->addr_len); 885 - 886 851 if (!n && 887 - ((arp->ar_op == htons(ARPOP_REPLY) && 888 - addr_type == RTN_UNICAST) || is_garp)) 852 + (is_garp || 853 + (arp->ar_op == htons(ARPOP_REPLY) && 854 + (addr_type == RTN_UNICAST || 855 + (addr_type < 0 && 856 + /* postpone calculation to as late as possible */ 857 + inet_addr_type_dev_table(net, dev, sip) == 858 + RTN_UNICAST))))) 889 859 n = __neigh_lookup(&arp_tbl, &sip, dev, 1); 890 860 }
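The `arp_is_garp()` check factored out above is pure predicate logic, so it can be exercised in isolation. A minimal userspace sketch of the same rule, in host byte order and without the trailing `RTN_UNICAST` routing-table lookup the kernel performs (`is_gratuitous_arp` is a hypothetical name, not the kernel function):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ARPOP_REQUEST 1
#define ARPOP_REPLY   2

/* A packet is gratuitous ARP when sender IP == target IP; a gratuitous
 * ARP *reply* additionally needs target hwaddr == sender hwaddr.
 * IPv4 over IEEE 1394 carries no target hwaddr field, so tha may be
 * NULL, in which case a reply cannot be validated as gratuitous. */
static bool is_gratuitous_arp(uint32_t sip, uint32_t tip, uint16_t ar_op,
                              const uint8_t *sha, const uint8_t *tha,
                              size_t addr_len)
{
    bool is_garp = (tip == sip);

    if (is_garp && ar_op == ARPOP_REPLY)
        is_garp = tha && !memcmp(tha, sha, addr_len);

    return is_garp;
}
```

In the kernel, a true result is still subject to `inet_addr_type_dev_table()` returning `RTN_UNICAST` for the sender address; that part is omitted here because it needs routing state.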
+4 -1
net/ipv4/esp4.c
··· 248 248 u8 *tail; 249 249 u8 *vaddr; 250 250 int nfrags; 251 + int esph_offset; 251 252 struct page *page; 252 253 struct sk_buff *trailer; 253 254 int tailen = esp->tailen; ··· 314 313 } 315 314 316 315 cow: 316 + esph_offset = (unsigned char *)esp->esph - skb_transport_header(skb); 317 + 317 318 nfrags = skb_cow_data(skb, tailen, &trailer); 318 319 if (nfrags < 0) 319 320 goto out; 320 321 tail = skb_tail_pointer(trailer); 321 - esp->esph = ip_esp_hdr(skb); 322 + esp->esph = (struct ip_esp_hdr *)(skb_transport_header(skb) + esph_offset); 322 323 323 324 skip_cow: 324 325 esp_output_fill_trailer(tail, esp->tfclen, esp->plen, esp->proto);
+10 -7
net/ipv4/fib_semantics.c
··· 203 203 static void free_fib_info_rcu(struct rcu_head *head) 204 204 { 205 205 struct fib_info *fi = container_of(head, struct fib_info, rcu); 206 + struct dst_metrics *m; 206 207 207 208 change_nexthops(fi) { 208 209 if (nexthop_nh->nh_dev) ··· 214 213 rt_fibinfo_free(&nexthop_nh->nh_rth_input); 215 214 } endfor_nexthops(fi); 216 215 217 - if (fi->fib_metrics != (u32 *) dst_default_metrics) 218 - kfree(fi->fib_metrics); 216 + m = fi->fib_metrics; 217 + if (m != &dst_default_metrics && atomic_dec_and_test(&m->refcnt)) 218 + kfree(m); 219 219 kfree(fi); 220 220 } 221 221 ··· 973 971 val = 255; 974 972 if (type == RTAX_FEATURES && (val & ~RTAX_FEATURE_MASK)) 975 973 return -EINVAL; 976 - fi->fib_metrics[type - 1] = val; 974 + fi->fib_metrics->metrics[type - 1] = val; 977 975 } 978 976 979 977 if (ecn_ca) 980 - fi->fib_metrics[RTAX_FEATURES - 1] |= DST_FEATURE_ECN_CA; 978 + fi->fib_metrics->metrics[RTAX_FEATURES - 1] |= DST_FEATURE_ECN_CA; 981 979 982 980 return 0; 983 981 } ··· 1035 1033 goto failure; 1036 1034 fib_info_cnt++; 1037 1035 if (cfg->fc_mx) { 1038 - fi->fib_metrics = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL); 1036 + fi->fib_metrics = kzalloc(sizeof(*fi->fib_metrics), GFP_KERNEL); 1039 1037 if (!fi->fib_metrics) 1040 1038 goto failure; 1039 + atomic_set(&fi->fib_metrics->refcnt, 1); 1041 1040 } else 1042 - fi->fib_metrics = (u32 *) dst_default_metrics; 1041 + fi->fib_metrics = (struct dst_metrics *)&dst_default_metrics; 1043 1042 1044 1043 fi->fib_net = net; 1045 1044 fi->fib_protocol = cfg->fc_protocol; ··· 1241 1238 if (fi->fib_priority && 1242 1239 nla_put_u32(skb, RTA_PRIORITY, fi->fib_priority)) 1243 1240 goto nla_put_failure; 1244 - if (rtnetlink_put_metrics(skb, fi->fib_metrics) < 0) 1241 + if (rtnetlink_put_metrics(skb, fi->fib_metrics->metrics) < 0) 1245 1242 goto nla_put_failure; 1246 1243 1247 1244 if (fi->fib_prefsrc &&
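The hunks above convert `fib_metrics` from a bare `u32` array into a refcounted `struct dst_metrics`, so a cached route can keep the metrics alive after its `fib_info` is freed, while the shared read-only `dst_default_metrics` block is never freed at all. A userspace sketch of that lifetime rule, using C11 atomics in place of the kernel's `atomic_t` (types and helper names here are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

#define RTAX_MAX 16

struct dst_metrics {
    unsigned int metrics[RTAX_MAX];
    atomic_int refcnt;
};

/* Static default block: shared by all routes without explicit metrics,
 * never refcounted and never freed. */
static struct dst_metrics dst_default_metrics;

static struct dst_metrics *metrics_alloc(void)
{
    struct dst_metrics *m = calloc(1, sizeof(*m));
    if (m)
        atomic_store(&m->refcnt, 1);  /* owned by the fib_info */
    return m;
}

static void metrics_hold(struct dst_metrics *m)
{
    if (m != &dst_default_metrics)
        atomic_fetch_add(&m->refcnt, 1);  /* e.g. a route taking a ref */
}

/* Drop one reference; returns true when this was the last one and the
 * block was freed (mirrors atomic_dec_and_test + kfree above). */
static bool metrics_put(struct dst_metrics *m)
{
    if (m != &dst_default_metrics &&
        atomic_fetch_sub(&m->refcnt, 1) == 1) {
        free(m);
        return true;
    }
    return false;
}
```

This is why both `free_fib_info_rcu()` and `ipv4_dst_destroy()` in the diff perform the same `!= &dst_default_metrics && atomic_dec_and_test()` dance: either side may be the last holder.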
+9 -1
net/ipv4/route.c
··· 1385 1385 1386 1386 static void ipv4_dst_destroy(struct dst_entry *dst) 1387 1387 { 1388 + struct dst_metrics *p = (struct dst_metrics *)DST_METRICS_PTR(dst); 1388 1389 struct rtable *rt = (struct rtable *) dst; 1390 + 1391 + if (p != &dst_default_metrics && atomic_dec_and_test(&p->refcnt)) 1392 + kfree(p); 1389 1393 1390 1394 if (!list_empty(&rt->rt_uncached)) { 1391 1395 struct uncached_list *ul = rt->rt_uncached_list; ··· 1442 1438 rt->rt_gateway = nh->nh_gw; 1443 1439 rt->rt_uses_gateway = 1; 1444 1440 } 1445 - dst_init_metrics(&rt->dst, fi->fib_metrics, true); 1441 + dst_init_metrics(&rt->dst, fi->fib_metrics->metrics, true); 1442 + if (fi->fib_metrics != &dst_default_metrics) { 1443 + rt->dst._metrics |= DST_METRICS_REFCOUNTED; 1444 + atomic_inc(&fi->fib_metrics->refcnt); 1445 + } 1446 1446 #ifdef CONFIG_IP_ROUTE_CLASSID 1447 1447 rt->dst.tclassid = nh->nh_tclassid; 1448 1448 #endif
+9 -2
net/ipv4/tcp.c
··· 1084 1084 { 1085 1085 struct tcp_sock *tp = tcp_sk(sk); 1086 1086 struct inet_sock *inet = inet_sk(sk); 1087 + struct sockaddr *uaddr = msg->msg_name; 1087 1088 int err, flags; 1088 1089 1089 - if (!(sysctl_tcp_fastopen & TFO_CLIENT_ENABLE)) 1090 + if (!(sysctl_tcp_fastopen & TFO_CLIENT_ENABLE) || 1091 + (uaddr && msg->msg_namelen >= sizeof(uaddr->sa_family) && 1092 + uaddr->sa_family == AF_UNSPEC)) 1090 1093 return -EOPNOTSUPP; 1091 1094 if (tp->fastopen_req) 1092 1095 return -EALREADY; /* Another Fast Open is in progress */ ··· 1111 1108 } 1112 1109 } 1113 1110 flags = (msg->msg_flags & MSG_DONTWAIT) ? O_NONBLOCK : 0; 1114 - err = __inet_stream_connect(sk->sk_socket, msg->msg_name, 1111 + err = __inet_stream_connect(sk->sk_socket, uaddr, 1115 1112 msg->msg_namelen, flags, 1); 1116 1113 /* fastopen_req could already be freed in __inet_stream_connect 1117 1114 * if the connection times out or gets rst ··· 2323 2320 tcp_set_ca_state(sk, TCP_CA_Open); 2324 2321 tcp_clear_retrans(tp); 2325 2322 inet_csk_delack_init(sk); 2323 + /* Initialize rcv_mss to TCP_MIN_MSS to avoid division by 0 2324 + * issue in __tcp_select_window() 2325 + */ 2326 + icsk->icsk_ack.rcv_mss = TCP_MIN_MSS; 2326 2327 tcp_init_send_head(sk); 2327 2328 memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); 2328 2329 __sk_dst_reset(sk);
+7 -6
net/ipv6/ip6_gre.c
··· 537 537 538 538 memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6)); 539 539 540 - dsfield = ipv4_get_dsfield(iph); 541 - 542 540 if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS) 543 - fl6.flowlabel |= htonl((__u32)iph->tos << IPV6_TCLASS_SHIFT) 544 - & IPV6_TCLASS_MASK; 541 + dsfield = ipv4_get_dsfield(iph); 542 + else 543 + dsfield = ip6_tclass(t->parms.flowinfo); 545 544 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK) 546 545 fl6.flowi6_mark = skb->mark; 547 546 else ··· 597 598 598 599 memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6)); 599 600 600 - dsfield = ipv6_get_dsfield(ipv6h); 601 601 if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS) 602 - fl6.flowlabel |= (*(__be32 *) ipv6h & IPV6_TCLASS_MASK); 602 + dsfield = ipv6_get_dsfield(ipv6h); 603 + else 604 + dsfield = ip6_tclass(t->parms.flowinfo); 605 + 603 606 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FLOWLABEL) 604 607 fl6.flowlabel |= ip6_flowlabel(ipv6h); 605 608 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK)
+8 -7
net/ipv6/ip6_output.c
··· 1466 1466 */ 1467 1467 alloclen += sizeof(struct frag_hdr); 1468 1468 1469 + copy = datalen - transhdrlen - fraggap; 1470 + if (copy < 0) { 1471 + err = -EINVAL; 1472 + goto error; 1473 + } 1469 1474 if (transhdrlen) { 1470 1475 skb = sock_alloc_send_skb(sk, 1471 1476 alloclen + hh_len, ··· 1520 1515 data += fraggap; 1521 1516 pskb_trim_unique(skb_prev, maxfraglen); 1522 1517 } 1523 - copy = datalen - transhdrlen - fraggap; 1524 - 1525 - if (copy < 0) { 1526 - err = -EINVAL; 1527 - kfree_skb(skb); 1528 - goto error; 1529 - } else if (copy > 0 && getfrag(from, data + transhdrlen, offset, copy, fraggap, skb) < 0) { 1518 + if (copy > 0 && 1519 + getfrag(from, data + transhdrlen, offset, 1520 + copy, fraggap, skb) < 0) { 1530 1521 err = -EFAULT; 1531 1522 kfree_skb(skb); 1532 1523 goto error;
+13 -8
net/ipv6/ip6_tunnel.c
··· 1196 1196 skb_push(skb, sizeof(struct ipv6hdr)); 1197 1197 skb_reset_network_header(skb); 1198 1198 ipv6h = ipv6_hdr(skb); 1199 - ip6_flow_hdr(ipv6h, INET_ECN_encapsulate(0, dsfield), 1199 + ip6_flow_hdr(ipv6h, dsfield, 1200 1200 ip6_make_flowlabel(net, skb, fl6->flowlabel, true, fl6)); 1201 1201 ipv6h->hop_limit = hop_limit; 1202 1202 ipv6h->nexthdr = proto; ··· 1231 1231 if (tproto != IPPROTO_IPIP && tproto != 0) 1232 1232 return -1; 1233 1233 1234 - dsfield = ipv4_get_dsfield(iph); 1235 - 1236 1234 if (t->parms.collect_md) { 1237 1235 struct ip_tunnel_info *tun_info; 1238 1236 const struct ip_tunnel_key *key; ··· 1244 1246 fl6.flowi6_proto = IPPROTO_IPIP; 1245 1247 fl6.daddr = key->u.ipv6.dst; 1246 1248 fl6.flowlabel = key->label; 1249 + dsfield = ip6_tclass(key->label); 1247 1250 } else { 1248 1251 if (!(t->parms.flags & IP6_TNL_F_IGN_ENCAP_LIMIT)) 1249 1252 encap_limit = t->parms.encap_limit; ··· 1253 1254 fl6.flowi6_proto = IPPROTO_IPIP; 1254 1255 1255 1256 if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS) 1256 - fl6.flowlabel |= htonl((__u32)iph->tos << IPV6_TCLASS_SHIFT) 1257 - & IPV6_TCLASS_MASK; 1257 + dsfield = ipv4_get_dsfield(iph); 1258 + else 1259 + dsfield = ip6_tclass(t->parms.flowinfo); 1258 1260 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK) 1259 1261 fl6.flowi6_mark = skb->mark; 1260 1262 else ··· 1266 1266 1267 1267 if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6)) 1268 1268 return -1; 1269 + 1270 + dsfield = INET_ECN_encapsulate(dsfield, ipv4_get_dsfield(iph)); 1269 1271 1270 1272 skb_set_inner_ipproto(skb, IPPROTO_IPIP); 1271 1273 ··· 1302 1300 ip6_tnl_addr_conflict(t, ipv6h)) 1303 1301 return -1; 1304 1302 1305 - dsfield = ipv6_get_dsfield(ipv6h); 1306 - 1307 1303 if (t->parms.collect_md) { 1308 1304 struct ip_tunnel_info *tun_info; 1309 1305 const struct ip_tunnel_key *key; ··· 1315 1315 fl6.flowi6_proto = IPPROTO_IPV6; 1316 1316 fl6.daddr = key->u.ipv6.dst; 1317 1317 fl6.flowlabel = key->label; 1318 + dsfield = ip6_tclass(key->label); 1318 1319 } else { 1319 1320 offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb)); 1320 1321 /* ip6_tnl_parse_tlv_enc_lim() might have reallocated skb->head */ ··· 1338 1337 fl6.flowi6_proto = IPPROTO_IPV6; 1339 1338 1340 1339 if (t->parms.flags & IP6_TNL_F_USE_ORIG_TCLASS) 1341 - fl6.flowlabel |= (*(__be32 *)ipv6h & IPV6_TCLASS_MASK); 1340 + dsfield = ipv6_get_dsfield(ipv6h); 1341 + else 1342 + dsfield = ip6_tclass(t->parms.flowinfo); 1342 1343 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FLOWLABEL) 1343 1344 fl6.flowlabel |= ip6_flowlabel(ipv6h); 1344 1345 if (t->parms.flags & IP6_TNL_F_USE_ORIG_FWMARK) ··· 1353 1350 1354 1351 if (iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6)) 1355 1352 return -1; 1353 + 1354 + dsfield = INET_ECN_encapsulate(dsfield, ipv6_get_dsfield(ipv6h)); 1356 1355 1357 1356 skb_set_inner_ipproto(skb, IPPROTO_IPV6); 1358 1357
+1 -1
net/key/af_key.c
··· 3285 3285 p += pol->sadb_x_policy_len*8; 3286 3286 sec_ctx = (struct sadb_x_sec_ctx *)p; 3287 3287 if (len < pol->sadb_x_policy_len*8 + 3288 - sec_ctx->sadb_x_sec_len) { 3288 + sec_ctx->sadb_x_sec_len*8) { 3289 3289 *dir = -EINVAL; 3290 3290 goto out; 3291 3291 }
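The one-character af_key fix multiplies `sadb_x_sec_len` by 8 because PF_KEY (RFC 2367) length fields count 64-bit words, not bytes; comparing the raw unit count against a byte length let over-long security contexts pass the bounds check. A hedged sketch of the corrected comparison (`sadb_ext_fits` is a hypothetical helper, not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* PF_KEY sadb_*_len fields are in 8-byte (64-bit word) units.  An
 * extension of sec_len_units fits only if its byte size is within the
 * bytes remaining in the message buffer. */
static bool sadb_ext_fits(size_t remaining_bytes, uint16_t sec_len_units)
{
    return (size_t)sec_len_units * 8 <= remaining_bytes;
}
```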
+3
net/llc/af_llc.c
··· 311 311 int rc = -EINVAL; 312 312 313 313 dprintk("%s: binding %02X\n", __func__, addr->sllc_sap); 314 + 315 + lock_sock(sk); 314 316 if (unlikely(!sock_flag(sk, SOCK_ZAPPED) || addrlen != sizeof(*addr))) 315 317 goto out; 316 318 rc = -EAFNOSUPPORT; ··· 384 382 out_put: 385 383 llc_sap_put(sap); 386 384 out: 385 + release_sock(sk); 387 386 return rc; 388 387 } 389 388
+2 -1
net/mac80211/rx.c
··· 2492 2492 if (is_multicast_ether_addr(hdr->addr1)) { 2493 2493 mpp_addr = hdr->addr3; 2494 2494 proxied_addr = mesh_hdr->eaddr1; 2495 - } else if (mesh_hdr->flags & MESH_FLAGS_AE_A5_A6) { 2495 + } else if ((mesh_hdr->flags & MESH_FLAGS_AE) == 2496 + MESH_FLAGS_AE_A5_A6) { 2496 2497 /* has_a4 already checked in ieee80211_rx_mesh_check */ 2497 2498 mpp_addr = hdr->addr4; 2498 2499 proxied_addr = mesh_hdr->eaddr2;
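The mac80211 fix is the classic multi-bit flag pitfall: the mesh addressing-extension mode is a two-bit field, so testing one of its values needs mask-then-compare; a plain AND also fires when only a subset of the bits is set. A standalone sketch (flag values mirror the mainline `ieee80211.h` definitions):

```c
#include <assert.h>
#include <stdbool.h>

#define MESH_FLAGS_AE_A4    0x1
#define MESH_FLAGS_AE_A5_A6 0x2
#define MESH_FLAGS_AE       0x3  /* the whole two-bit field */

/* Buggy form: also true for flags == 0x3, which is not the A5/A6 mode. */
static bool has_a5_a6_buggy(unsigned char flags)
{
    return flags & MESH_FLAGS_AE_A5_A6;
}

/* Fixed form: extract the full field, then compare against the value. */
static bool has_a5_a6_fixed(unsigned char flags)
{
    return (flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6;
}
```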
+14 -5
net/netfilter/ipvs/ip_vs_core.c
··· 849 849 { 850 850 unsigned int verdict = NF_DROP; 851 851 852 - if (IP_VS_FWD_METHOD(cp) != 0) { 853 - pr_err("shouldn't reach here, because the box is on the " 854 - "half connection in the tun/dr module.\n"); 855 - } 852 + if (IP_VS_FWD_METHOD(cp) != IP_VS_CONN_F_MASQ) 853 + goto ignore_cp; 856 854 857 855 /* Ensure the checksum is correct */ 858 856 if (!skb_csum_unnecessary(skb) && ip_vs_checksum_complete(skb, ihl)) { ··· 884 886 ip_vs_notrack(skb); 885 887 else 886 888 ip_vs_update_conntrack(skb, cp, 0); 889 + 890 + ignore_cp: 887 891 verdict = NF_ACCEPT; 888 892 889 893 out: ··· 1385 1385 */ 1386 1386 cp = pp->conn_out_get(ipvs, af, skb, &iph); 1387 1387 1388 - if (likely(cp)) 1388 + if (likely(cp)) { 1389 + if (IP_VS_FWD_METHOD(cp) != IP_VS_CONN_F_MASQ) 1390 + goto ignore_cp; 1389 1391 return handle_response(af, skb, pd, cp, &iph, hooknum); 1392 + } 1390 1393 1391 1394 /* Check for real-server-started requests */ 1392 1395 if (atomic_read(&ipvs->conn_out_counter)) { ··· 1447 1444 } 1448 1445 } 1449 1446 } 1447 + 1448 + out: 1450 1449 IP_VS_DBG_PKT(12, af, pp, skb, iph.off, 1451 1450 "ip_vs_out: packet continues traversal as normal"); 1452 1451 return NF_ACCEPT; 1452 + 1453 + ignore_cp: 1454 + __ip_vs_conn_put(cp); 1455 + goto out; 1453 1456 } 1454 1457 1455 1458 /*
+12
net/netfilter/nf_conntrack_helper.c
··· 174 174 #endif 175 175 if (h != NULL && !try_module_get(h->me)) 176 176 h = NULL; 177 + if (h != NULL && !refcount_inc_not_zero(&h->refcnt)) { 178 + module_put(h->me); 179 + h = NULL; 180 + } 177 181 178 182 rcu_read_unlock(); 179 183 180 184 return h; 181 185 } 182 186 EXPORT_SYMBOL_GPL(nf_conntrack_helper_try_module_get); 187 + 188 + void nf_conntrack_helper_put(struct nf_conntrack_helper *helper) 189 + { 190 + refcount_dec(&helper->refcnt); 191 + module_put(helper->me); 192 + } 193 + EXPORT_SYMBOL_GPL(nf_conntrack_helper_put); 183 194 184 195 struct nf_conn_help * 185 196 nf_ct_helper_ext_add(struct nf_conn *ct, ··· 428 417 } 429 418 } 430 419 } 420 + refcount_set(&me->refcnt, 1); 431 421 hlist_add_head_rcu(&me->hnode, &nf_ct_helper_hash[h]); 432 422 nf_ct_helper_count++; 433 423 out:
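The new `refcount_inc_not_zero()` call closes a window where a helper could be looked up while it was being unregistered: a reader may only take a reference while the count is still nonzero, and an object whose count has reached zero must not be revived. The primitive's semantics can be sketched in userspace with C11 atomics (a stand-in only; the kernel's `refcount_t` additionally saturates rather than overflowing):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Try to take a reference; fail if the count has already hit zero.
 * The CAS loop retries on contention or spurious failure. */
static bool refcount_inc_not_zero(atomic_int *r)
{
    int old = atomic_load(r);

    while (old != 0) {
        if (atomic_compare_exchange_weak(r, &old, old + 1))
            return true;
        /* on failure, `old` was reloaded with the current value */
    }
    return false;
}
```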
+7 -4
net/netfilter/nf_conntrack_netlink.c
··· 45 45 #include <net/netfilter/nf_conntrack_zones.h> 46 46 #include <net/netfilter/nf_conntrack_timestamp.h> 47 47 #include <net/netfilter/nf_conntrack_labels.h> 48 + #include <net/netfilter/nf_conntrack_seqadj.h> 49 + #include <net/netfilter/nf_conntrack_synproxy.h> 48 50 #ifdef CONFIG_NF_NAT_NEEDED 49 51 #include <net/netfilter/nf_nat_core.h> 50 52 #include <net/netfilter/nf_nat_l4proto.h> ··· 1009 1007 1010 1008 static int 1011 1009 ctnetlink_parse_tuple(const struct nlattr * const cda[], 1012 - struct nf_conntrack_tuple *tuple, 1013 - enum ctattr_type type, u_int8_t l3num, 1014 - struct nf_conntrack_zone *zone) 1010 + struct nf_conntrack_tuple *tuple, u32 type, 1011 + u_int8_t l3num, struct nf_conntrack_zone *zone) 1015 1012 { 1016 1013 struct nlattr *tb[CTA_TUPLE_MAX+1]; 1017 1014 int err; ··· 1829 1828 nf_ct_tstamp_ext_add(ct, GFP_ATOMIC); 1830 1829 nf_ct_ecache_ext_add(ct, 0, 0, GFP_ATOMIC); 1831 1830 nf_ct_labels_ext_add(ct); 1831 + nfct_seqadj_ext_add(ct); 1832 + nfct_synproxy_ext_add(ct); 1832 1833 1833 1834 /* we must add conntrack extensions before confirmation. */ 1834 1835 ct->status |= IPS_CONFIRMED; ··· 2450 2447 2451 2448 static int ctnetlink_exp_dump_tuple(struct sk_buff *skb, 2452 2449 const struct nf_conntrack_tuple *tuple, 2453 - enum ctattr_expect type) 2450 + u32 type) 2454 2451 { 2455 2452 struct nlattr *nest_parms; 2456 2453
+4
net/netfilter/nf_nat_core.c
··· 409 409 { 410 410 struct nf_conntrack_tuple curr_tuple, new_tuple; 411 411 412 + /* Can't setup nat info for confirmed ct. */ 413 + if (nf_ct_is_confirmed(ct)) 414 + return NF_ACCEPT; 415 + 412 416 NF_CT_ASSERT(maniptype == NF_NAT_MANIP_SRC || 413 417 maniptype == NF_NAT_MANIP_DST); 414 418 BUG_ON(nf_nat_initialized(ct, maniptype));
+128 -32
net/netfilter/nf_tables_api.c
··· 3367 3367 return nf_tables_fill_setelem(args->skb, set, elem); 3368 3368 } 3369 3369 3370 + struct nft_set_dump_ctx { 3371 + const struct nft_set *set; 3372 + struct nft_ctx ctx; 3373 + }; 3374 + 3370 3375 static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb) 3371 3376 { 3377 + struct nft_set_dump_ctx *dump_ctx = cb->data; 3372 3378 struct net *net = sock_net(skb->sk); 3373 - u8 genmask = nft_genmask_cur(net); 3379 + struct nft_af_info *afi; 3380 + struct nft_table *table; 3374 3381 struct nft_set *set; 3375 3382 struct nft_set_dump_args args; 3376 - struct nft_ctx ctx; 3377 - struct nlattr *nla[NFTA_SET_ELEM_LIST_MAX + 1]; 3383 + bool set_found = false; 3378 3384 struct nfgenmsg *nfmsg; 3379 3385 struct nlmsghdr *nlh; 3380 3386 struct nlattr *nest; 3381 3387 u32 portid, seq; 3382 3388 int event, err; 3383 3389 3384 - err = nlmsg_parse(cb->nlh, sizeof(struct nfgenmsg), nla, 3385 - NFTA_SET_ELEM_LIST_MAX, nft_set_elem_list_policy, 3386 - NULL); 3387 - if (err < 0) 3388 - return err; 3390 + rcu_read_lock(); 3391 + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { 3392 + if (afi != dump_ctx->ctx.afi) 3393 + continue; 3389 3394 3390 - err = nft_ctx_init_from_elemattr(&ctx, net, cb->skb, cb->nlh, 3391 - (void *)nla, genmask); 3392 - if (err < 0) 3393 - return err; 3395 + list_for_each_entry_rcu(table, &afi->tables, list) { 3396 + if (table != dump_ctx->ctx.table) 3397 + continue; 3394 3398 3395 - set = nf_tables_set_lookup(ctx.table, nla[NFTA_SET_ELEM_LIST_SET], 3396 - genmask); 3397 - if (IS_ERR(set)) 3398 - return PTR_ERR(set); 3399 + list_for_each_entry_rcu(set, &table->sets, list) { 3400 + if (set == dump_ctx->set) { 3401 + set_found = true; 3402 + break; 3403 + } 3404 + } 3405 + break; 3406 + } 3407 + break; 3408 + } 3409 + 3410 + if (!set_found) { 3411 + rcu_read_unlock(); 3412 + return -ENOENT; 3413 + } 3399 3414 3400 3415 event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWSETELEM); 3401 3416 portid = NETLINK_CB(cb->skb).portid; ··· 3422 3407 goto nla_put_failure; 3423 3408 3424 3409 nfmsg = nlmsg_data(nlh); 3425 - nfmsg->nfgen_family = ctx.afi->family; 3410 + nfmsg->nfgen_family = afi->family; 3426 3411 nfmsg->version = NFNETLINK_V0; 3427 - nfmsg->res_id = htons(ctx.net->nft.base_seq & 0xffff); 3412 + nfmsg->res_id = htons(net->nft.base_seq & 0xffff); 3428 3413 3429 - if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, ctx.table->name)) 3414 + if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, table->name)) 3430 3415 goto nla_put_failure; 3431 3416 if (nla_put_string(skb, NFTA_SET_ELEM_LIST_SET, set->name)) 3432 3417 goto nla_put_failure; ··· 3437 3422 3438 3423 args.cb = cb; 3439 3424 args.skb = skb; 3440 - args.iter.genmask = nft_genmask_cur(ctx.net); 3425 + args.iter.genmask = nft_genmask_cur(net); 3441 3426 args.iter.skip = cb->args[0]; 3442 3427 args.iter.count = 0; 3443 3428 args.iter.err = 0; 3444 3429 args.iter.fn = nf_tables_dump_setelem; 3445 - set->ops->walk(&ctx, set, &args.iter); 3430 + set->ops->walk(&dump_ctx->ctx, set, &args.iter); 3431 + rcu_read_unlock(); 3446 3432 3447 3433 nla_nest_end(skb, nest); 3448 3434 nlmsg_end(skb, nlh); ··· 3457 3441 return skb->len; 3458 3442 3459 3443 nla_put_failure: 3444 + rcu_read_unlock(); 3460 3445 return -ENOSPC; 3446 + } 3447 + 3448 + static int nf_tables_dump_set_done(struct netlink_callback *cb) 3449 + { 3450 + kfree(cb->data); 3451 + return 0; 3461 3452 } 3462 3453 3463 3454 static int nf_tables_getsetelem(struct net *net, struct sock *nlsk, ··· 3488 3465 if (nlh->nlmsg_flags & NLM_F_DUMP) { 3489 3466 struct netlink_dump_control c = { 3490 3467 .dump = nf_tables_dump_set, 3468 + .done = nf_tables_dump_set_done, 3491 3469 }; 3470 + struct nft_set_dump_ctx *dump_ctx; 3471 + 3472 + dump_ctx = kmalloc(sizeof(*dump_ctx), GFP_KERNEL); 3473 + if (!dump_ctx) 3474 + return -ENOMEM; 3475 + 3476 + dump_ctx->set = set; 3477 + dump_ctx->ctx = ctx; 3478 + 3479 + c.data = dump_ctx; 3492 3480 return netlink_dump_start(nlsk, skb, nlh, &c); 3493 3481 } 3494 3482 return -EOPNOTSUPP; ··· 3627 3593 { 3628 3594 struct nft_set_ext *ext = nft_set_elem_ext(set, elem); 3629 3595 3630 - nft_data_uninit(nft_set_ext_key(ext), NFT_DATA_VALUE); 3596 + nft_data_release(nft_set_ext_key(ext), NFT_DATA_VALUE); 3631 3597 if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3632 - nft_data_uninit(nft_set_ext_data(ext), set->dtype); 3598 + nft_data_release(nft_set_ext_data(ext), set->dtype); 3633 3599 if (destroy_expr && nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) 3634 3600 nf_tables_expr_destroy(NULL, nft_set_ext_expr(ext)); 3635 3601 if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) ··· 3637 3603 kfree(elem); 3638 3604 } 3639 3605 EXPORT_SYMBOL_GPL(nft_set_elem_destroy); 3606 + 3607 + /* Only called from commit path, nft_set_elem_deactivate() already deals with 3608 + * the refcounting from the preparation phase. 3609 + */ 3610 + static void nf_tables_set_elem_destroy(const struct nft_set *set, void *elem) 3611 + { 3612 + struct nft_set_ext *ext = nft_set_elem_ext(set, elem); 3613 + 3614 + if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) 3615 + nf_tables_expr_destroy(NULL, nft_set_ext_expr(ext)); 3616 + kfree(elem); 3617 + } 3640 3618 3641 3619 static int nft_setelem_parse_flags(const struct nft_set *set, 3642 3620 const struct nlattr *attr, u32 *flags) ··· 3861 3815 kfree(elem.priv); 3862 3816 err3: 3863 3817 if (nla[NFTA_SET_ELEM_DATA] != NULL) 3864 - nft_data_uninit(&data, d2.type); 3818 + nft_data_release(&data, d2.type); 3865 3819 err2: 3866 - nft_data_uninit(&elem.key.val, d1.type); 3820 + nft_data_release(&elem.key.val, d1.type); 3867 3821 err1: 3868 3822 return err; 3869 3823 } ··· 3906 3860 break; 3907 3861 } 3908 3862 return err; 3863 + } 3864 + 3865 + /** 3866 + * nft_data_hold - hold a nft_data item 3867 + * 3868 + * @data: struct nft_data to hold 3869 + * @type: type of data 3870 + * 3871 + * Hold a nft_data item. NFT_DATA_VALUE types can be silently discarded, 3872 + * NFT_DATA_VERDICT bumps the reference to chains in case of NFT_JUMP and 3873 + * NFT_GOTO verdicts. This function must be called on active data objects 3874 + * from the second phase of the commit protocol. 3875 + */ 3876 + static void nft_data_hold(const struct nft_data *data, enum nft_data_types type) 3877 + { 3878 + if (type == NFT_DATA_VERDICT) { 3879 + switch (data->verdict.code) { 3880 + case NFT_JUMP: 3881 + case NFT_GOTO: 3882 + data->verdict.chain->use++; 3883 + break; 3884 + } 3885 + } 3886 + } 3887 + 3888 + static void nft_set_elem_activate(const struct net *net, 3889 + const struct nft_set *set, 3890 + struct nft_set_elem *elem) 3891 + { 3892 + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3893 + 3894 + if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3895 + nft_data_hold(nft_set_ext_data(ext), set->dtype); 3896 + if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 3897 + (*nft_set_ext_obj(ext))->use++; 3898 + } 3899 + 3900 + static void nft_set_elem_deactivate(const struct net *net, 3901 + const struct nft_set *set, 3902 + struct nft_set_elem *elem) 3903 + { 3904 + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3905 + 3906 + if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3907 + nft_data_release(nft_set_ext_data(ext), set->dtype); 3908 + if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 3909 + (*nft_set_ext_obj(ext))->use--; 3909 3910 } 3910 3911 3911 3912 static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set, ··· 4020 3927 kfree(elem.priv); 4021 3928 elem.priv = priv; 4022 3929 3930 + nft_set_elem_deactivate(ctx->net, set, &elem); 3931 + 4023 3932 nft_trans_elem(trans) = elem; 4024 3933 list_add_tail(&trans->list, &ctx->net->nft.commit_list); 4025 3934 return 0; ··· 4031 3936 err3: 4032 3937 kfree(elem.priv); 4033 3938 err2: 4034 - nft_data_uninit(&elem.key.val, desc.type); 3939 + nft_data_release(&elem.key.val, desc.type); 4035 3940 err1: 4036 3941 return err; 4037 3942 } ··· 4838 4743 nft_set_destroy(nft_trans_set(trans)); 4839 4744 break; 4840 4745 case NFT_MSG_DELSETELEM: 4841 - nft_set_elem_destroy(nft_trans_elem_set(trans), 4842 - nft_trans_elem(trans).priv, true); 4746 + nf_tables_set_elem_destroy(nft_trans_elem_set(trans), 4747 + nft_trans_elem(trans).priv); 4843 4748 break; 4844 4749 case NFT_MSG_DELOBJ: 4845 4750 nft_obj_destroy(nft_trans_obj(trans)); ··· 5074 4979 case NFT_MSG_DELSETELEM: 5075 4980 te = (struct nft_trans_elem *)trans->data; 5076 4981 4982 + nft_set_elem_activate(net, te->set, &te->elem); 5077 4983 te->set->ops->activate(net, te->set, &te->elem); 5078 4984 te->set->ndeact--; 5079 4985 ··· 5560 5464 EXPORT_SYMBOL_GPL(nft_data_init); 5561 5465 5562 5466 /** 5563 - * nft_data_uninit - release a nft_data item 5467 + * nft_data_release - release a nft_data item 5564 5468 * 5565 5469 * @data: struct nft_data to release 5566 5470 * @type: type of data ··· 5568 5472 * Release a nft_data item. NFT_DATA_VALUE types can be silently discarded, 5569 5473 * all others need to be released by calling this function. 5570 5474 */ 5571 - void nft_data_uninit(const struct nft_data *data, enum nft_data_types type) 5475 + void nft_data_release(const struct nft_data *data, enum nft_data_types type) 5572 5476 { 5573 5477 if (type < NFT_DATA_VERDICT) 5574 5478 return; ··· 5579 5483 WARN_ON(1); 5580 5484 } 5581 5485 } 5582 - EXPORT_SYMBOL_GPL(nft_data_uninit); 5486 + EXPORT_SYMBOL_GPL(nft_data_release); 5583 5487 5584 5488 int nft_data_dump(struct sk_buff *skb, int attr, const struct nft_data *data, 5585 5489 enum nft_data_types type, unsigned int len)
+14 -5
net/netfilter/nft_bitwise.c
··· 83 83 tb[NFTA_BITWISE_MASK]); 84 84 if (err < 0) 85 85 return err; 86 - if (d1.len != priv->len) 87 - return -EINVAL; 86 + if (d1.len != priv->len) { 87 + err = -EINVAL; 88 + goto err1; 89 + } 88 90 89 91 err = nft_data_init(NULL, &priv->xor, sizeof(priv->xor), &d2, 90 92 tb[NFTA_BITWISE_XOR]); 91 93 if (err < 0) 92 - return err; 93 - if (d2.len != priv->len) 94 - return -EINVAL; 94 + goto err1; 95 + if (d2.len != priv->len) { 96 + err = -EINVAL; 97 + goto err2; 98 + } 95 99 96 100 return 0; 101 + err2: 102 + nft_data_release(&priv->xor, d2.type); 103 + err1: 104 + nft_data_release(&priv->mask, d1.type); 105 + return err; 97 106 } 98 107 99 108 static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr)
+10 -2
net/netfilter/nft_cmp.c
··· 201 201 if (err < 0) 202 202 return ERR_PTR(err); 203 203 204 + if (desc.type != NFT_DATA_VALUE) { 205 + err = -EINVAL; 206 + goto err1; 207 + } 208 + 204 209 if (desc.len <= sizeof(u32) && op == NFT_CMP_EQ) 205 210 return &nft_cmp_fast_ops; 206 - else 207 - return &nft_cmp_ops; 211 + 212 + return &nft_cmp_ops; 213 + err1: 214 + nft_data_release(&data, desc.type); 215 + return ERR_PTR(-EINVAL); 208 216 } 209 217 210 218 struct nft_expr_type nft_cmp_type __read_mostly = {
+2 -2
net/netfilter/nft_ct.c
··· 826 826 struct nft_ct_helper_obj *priv = nft_obj_data(obj); 827 827 828 828 if (priv->helper4) 829 - module_put(priv->helper4->me); 829 + nf_conntrack_helper_put(priv->helper4); 830 830 if (priv->helper6) 831 - module_put(priv->helper6->me); 831 + nf_conntrack_helper_put(priv->helper6); 832 832 } 833 833 834 834 static void nft_ct_helper_obj_eval(struct nft_object *obj,
+3 -2
net/netfilter/nft_immediate.c
··· 65 65 return 0; 66 66 67 67 err1: 68 - nft_data_uninit(&priv->data, desc.type); 68 + nft_data_release(&priv->data, desc.type); 69 69 return err; 70 70 } 71 71 ··· 73 73 const struct nft_expr *expr) 74 74 { 75 75 const struct nft_immediate_expr *priv = nft_expr_priv(expr); 76 - return nft_data_uninit(&priv->data, nft_dreg_to_type(priv->dreg)); 76 + 77 + return nft_data_release(&priv->data, nft_dreg_to_type(priv->dreg)); 77 78 } 78 79 79 80 static int nft_immediate_dump(struct sk_buff *skb, const struct nft_expr *expr)
+2 -2
net/netfilter/nft_range.c
··· 102 102 priv->len = desc_from.len; 103 103 return 0; 104 104 err2: 105 - nft_data_uninit(&priv->data_to, desc_to.type); 105 + nft_data_release(&priv->data_to, desc_to.type); 106 106 err1: 107 - nft_data_uninit(&priv->data_from, desc_from.type); 107 + nft_data_release(&priv->data_from, desc_from.type); 108 108 return err; 109 109 } 110 110
+1 -1
net/netfilter/nft_set_hash.c
··· 222 222 struct nft_set_elem elem; 223 223 int err; 224 224 225 - err = rhashtable_walk_init(&priv->ht, &hti, GFP_KERNEL); 225 + err = rhashtable_walk_init(&priv->ht, &hti, GFP_ATOMIC); 226 226 iter->err = err; 227 227 if (err) 228 228 return;
+16 -8
net/netfilter/x_tables.c
··· 283 283 &U->u.user.revision, K->u.kernel.TYPE->revision) 284 284 285 285 int xt_data_to_user(void __user *dst, const void *src, 286 - int usersize, int size) 286 + int usersize, int size, int aligned_size) 287 287 { 288 288 usersize = usersize ? : size; 289 289 if (copy_to_user(dst, src, usersize)) 290 290 return -EFAULT; 291 - if (usersize != size && clear_user(dst + usersize, size - usersize)) 291 + if (usersize != aligned_size && 292 + clear_user(dst + usersize, aligned_size - usersize)) 292 293 return -EFAULT; 293 294 294 295 return 0; 295 296 } 296 297 EXPORT_SYMBOL_GPL(xt_data_to_user); 297 298 298 - #define XT_DATA_TO_USER(U, K, TYPE, C_SIZE) \ 299 + #define XT_DATA_TO_USER(U, K, TYPE) \ 299 300 xt_data_to_user(U->data, K->data, \ 300 301 K->u.kernel.TYPE->usersize, \ 301 - C_SIZE ? : K->u.kernel.TYPE->TYPE##size) 302 + K->u.kernel.TYPE->TYPE##size, \ 303 + XT_ALIGN(K->u.kernel.TYPE->TYPE##size)) 302 304 303 305 int xt_match_to_user(const struct xt_entry_match *m, 304 306 struct xt_entry_match __user *u) 305 307 { 306 308 return XT_OBJ_TO_USER(u, m, match, 0) || 307 - XT_DATA_TO_USER(u, m, match, 0); 309 + XT_DATA_TO_USER(u, m, match); 308 310 } 309 311 EXPORT_SYMBOL_GPL(xt_match_to_user); 310 312 ··· 314 312 struct xt_entry_target __user *u) 315 313 { 316 314 return XT_OBJ_TO_USER(u, t, target, 0) || 317 - XT_DATA_TO_USER(u, t, target, 0); 315 + XT_DATA_TO_USER(u, t, target); 318 316 } 319 317 EXPORT_SYMBOL_GPL(xt_target_to_user); 320 318 ··· 613 611 } 614 612 EXPORT_SYMBOL_GPL(xt_compat_match_from_user); 615 613 614 + #define COMPAT_XT_DATA_TO_USER(U, K, TYPE, C_SIZE) \ 615 + xt_data_to_user(U->data, K->data, \ 616 + K->u.kernel.TYPE->usersize, \ 617 + C_SIZE, \ 618 + COMPAT_XT_ALIGN(C_SIZE)) 619 + 616 620 int xt_compat_match_to_user(const struct xt_entry_match *m, 617 621 void __user **dstptr, unsigned int *size) 618 622 { ··· 634 626 if (match->compat_to_user((void __user *)cm->data, m->data)) 635 627 return -EFAULT; 636 628 } else { 637 - if (XT_DATA_TO_USER(cm, m, match, msize - sizeof(*cm))) 629 + if (COMPAT_XT_DATA_TO_USER(cm, m, match, msize - sizeof(*cm))) 638 630 return -EFAULT; 639 631 } 640 632 ··· 980 972 if (target->compat_to_user((void __user *)ct->data, t->data)) 981 973 return -EFAULT; 982 974 } else { 983 - if (XT_DATA_TO_USER(ct, t, target, tsize - sizeof(*ct))) 975 + if (COMPAT_XT_DATA_TO_USER(ct, t, target, tsize - sizeof(*ct))) 984 976 return -EFAULT; 985 977 } 986 978
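The x_tables change makes `xt_data_to_user()` clear up to the *aligned* size rather than the raw object size: match/target payloads are padded out to `XT_ALIGN()`/`COMPAT_XT_ALIGN()` in the kernel copy, and without the extra clearing those padding bytes would carry uninitialized kernel heap to userspace. A userspace sketch of the pattern, with `memcpy`/`memset` standing in for `copy_to_user()`/`clear_user()`:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy `usersize` bytes of real data (falling back to `size` when the
 * extension declares no usersize), then zero everything from there up
 * to the aligned allocation size so no padding bytes leak. */
static int data_to_user(void *dst, const void *src,
                        size_t usersize, size_t size, size_t aligned_size)
{
    if (usersize == 0)
        usersize = size;
    memcpy(dst, src, usersize);
    if (usersize != aligned_size)
        memset((char *)dst + usersize, 0, aligned_size - usersize);
    return 0;
}
```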
+3 -3
net/netfilter/xt_CT.c
··· 96 96 97 97 help = nf_ct_helper_ext_add(ct, helper, GFP_KERNEL); 98 98 if (help == NULL) { 99 - module_put(helper->me); 99 + nf_conntrack_helper_put(helper); 100 100 return -ENOMEM; 101 101 } 102 102 ··· 263 263 err4: 264 264 help = nfct_help(ct); 265 265 if (help) 266 - module_put(help->helper->me); 266 + nf_conntrack_helper_put(help->helper); 267 267 err3: 268 268 nf_ct_tmpl_free(ct); 269 269 err2: ··· 346 346 if (ct) { 347 347 help = nfct_help(ct); 348 348 if (help) 349 - module_put(help->helper->me); 349 + nf_conntrack_helper_put(help->helper); 350 350 351 351 nf_ct_netns_put(par->net, par->family); 352 352
+2 -2
net/openvswitch/conntrack.c
··· 1123 1123 1124 1124 help = nf_ct_helper_ext_add(info->ct, helper, GFP_KERNEL); 1125 1125 if (!help) { 1126 - module_put(helper->me); 1126 + nf_conntrack_helper_put(helper); 1127 1127 return -ENOMEM; 1128 1128 } 1129 1129 ··· 1584 1584 static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info) 1585 1585 { 1586 1586 if (ct_info->helper) 1587 - module_put(ct_info->helper->me); 1587 + nf_conntrack_helper_put(ct_info->helper); 1588 1588 if (ct_info->ct) 1589 1589 nf_ct_tmpl_free(ct_info->ct); 1590 1590 }
-1
net/sched/cls_matchall.c
··· 203 203 204 204 *arg = (unsigned long) head; 205 205 rcu_assign_pointer(tp->root, new); 206 - call_rcu(&head->rcu, mall_destroy_rcu); 207 206 return 0; 208 207 209 208 err_replace_hw_filter:
+3 -1
net/sctp/associola.c
··· 1176 1176 1177 1177 asoc->ctsn_ack_point = asoc->next_tsn - 1; 1178 1178 asoc->adv_peer_ack_point = asoc->ctsn_ack_point; 1179 - if (!asoc->stream) { 1179 + 1180 + if (sctp_state(asoc, COOKIE_WAIT)) { 1181 + sctp_stream_free(asoc->stream); 1180 1182 asoc->stream = new->stream; 1181 1183 new->stream = NULL; 1182 1184 }
+9 -7
net/sctp/input.c
··· 473 473 struct sctp_association **app, 474 474 struct sctp_transport **tpp) 475 475 { 476 + struct sctp_init_chunk *chunkhdr, _chunkhdr; 476 477 union sctp_addr saddr; 477 478 union sctp_addr daddr; 478 479 struct sctp_af *af; 479 480 struct sock *sk = NULL; 480 481 struct sctp_association *asoc; 481 482 struct sctp_transport *transport = NULL; 482 - struct sctp_init_chunk *chunkhdr; 483 483 __u32 vtag = ntohl(sctphdr->vtag); 484 - int len = skb->len - ((void *)sctphdr - (void *)skb->data); 485 484 486 485 *app = NULL; *tpp = NULL; 487 486 ··· 515 516 * discard the packet. 516 517 */ 517 518 if (vtag == 0) { 518 - chunkhdr = (void *)sctphdr + sizeof(struct sctphdr); 519 - if (len < sizeof(struct sctphdr) + sizeof(sctp_chunkhdr_t) 520 - + sizeof(__be32) || 519 + /* chunk header + first 4 octets of init header */ 520 + chunkhdr = skb_header_pointer(skb, skb_transport_offset(skb) + 521 + sizeof(struct sctphdr), 522 + sizeof(struct sctp_chunkhdr) + 523 + sizeof(__be32), &_chunkhdr); 524 + if (!chunkhdr || 521 525 chunkhdr->chunk_hdr.type != SCTP_CID_INIT || 522 - ntohl(chunkhdr->init_hdr.init_tag) != asoc->c.my_vtag) { 526 + ntohl(chunkhdr->init_hdr.init_tag) != asoc->c.my_vtag) 523 527 goto out; 524 - } 528 + 525 529 } else if (vtag != asoc->c.peer_vtag) { 526 530 goto out; 527 531 }
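The sctp fix swaps open-coded length arithmetic for `skb_header_pointer()`: the header is copied into a caller-provided stack buffer only when it lies entirely inside the packet, and NULL is returned otherwise, so the caller can never read past the buffer. A simplified userspace analogue (the real helper returns an in-place pointer when the skb is linear; this stand-in always copies, and `header_pointer` is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy hdr_len bytes at `offset` into `buf` if they fit inside the
 * packet; return NULL on out-of-bounds (including size_t overflow). */
static void *header_pointer(const void *pkt, size_t pkt_len,
                            size_t offset, size_t hdr_len, void *buf)
{
    if (offset + hdr_len > pkt_len || offset + hdr_len < offset)
        return NULL;
    memcpy(buf, (const char *)pkt + offset, hdr_len);
    return buf;
}
```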
+4 -9
net/sctp/sm_make_chunk.c
··· 2454 2454 * stream sequence number shall be set to 0. 2455 2455 */ 2456 2456 2457 - /* Allocate storage for the negotiated streams if it is not a temporary 2458 - * association. 2459 - */ 2460 - if (!asoc->temp) { 2461 - if (sctp_stream_init(asoc, gfp)) 2462 - goto clean_up; 2457 + if (sctp_stream_init(asoc, gfp)) 2458 + goto clean_up; 2463 2459 2464 - if (sctp_assoc_set_id(asoc, gfp)) 2465 - goto clean_up; 2466 - } 2460 + if (!asoc->temp && sctp_assoc_set_id(asoc, gfp)) 2461 + goto clean_up; 2467 2462 2468 2463 /* ADDIP Section 4.1 ASCONF Chunk Procedures 2469 2464 *
+3
net/sctp/sm_statefuns.c
··· 2088 2088 } 2089 2089 } 2090 2090 2091 + /* Set temp so that it won't be added into hashtable */ 2092 + new_asoc->temp = 1; 2093 + 2091 2094 /* Compare the tie_tag in cookie with the verification tag of 2092 2095 * current association. 2093 2096 */
+8 -13
net/vmw_vsock/af_vsock.c
··· 1540 1540 long timeout; 1541 1541 int err; 1542 1542 struct vsock_transport_send_notify_data send_data; 1543 - 1544 - DEFINE_WAIT(wait); 1543 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 1545 1544 1546 1545 sk = sock->sk; 1547 1546 vsk = vsock_sk(sk); ··· 1583 1584 if (err < 0) 1584 1585 goto out; 1585 1586 1586 - 1587 1587 while (total_written < len) { 1588 1588 ssize_t written; 1589 1589 1590 - prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1590 + add_wait_queue(sk_sleep(sk), &wait); 1591 1591 while (vsock_stream_has_space(vsk) == 0 && 1592 1592 sk->sk_err == 0 && 1593 1593 !(sk->sk_shutdown & SEND_SHUTDOWN) && ··· 1595 1597 /* Don't wait for non-blocking sockets. */ 1596 1598 if (timeout == 0) { 1597 1599 err = -EAGAIN; 1598 - finish_wait(sk_sleep(sk), &wait); 1600 + remove_wait_queue(sk_sleep(sk), &wait); 1599 1601 goto out_err; 1600 1602 } 1601 1603 1602 1604 err = transport->notify_send_pre_block(vsk, &send_data); 1603 1605 if (err < 0) { 1604 - finish_wait(sk_sleep(sk), &wait); 1606 + remove_wait_queue(sk_sleep(sk), &wait); 1605 1607 goto out_err; 1606 1608 } 1607 1609 1608 1610 release_sock(sk); 1609 - timeout = schedule_timeout(timeout); 1611 + timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout); 1610 1612 lock_sock(sk); 1611 1613 if (signal_pending(current)) { 1612 1614 err = sock_intr_errno(timeout); 1613 - finish_wait(sk_sleep(sk), &wait); 1615 + remove_wait_queue(sk_sleep(sk), &wait); 1614 1616 goto out_err; 1615 1617 } else if (timeout == 0) { 1616 1618 err = -EAGAIN; 1617 - finish_wait(sk_sleep(sk), &wait); 1619 + remove_wait_queue(sk_sleep(sk), &wait); 1618 1620 goto out_err; 1619 1621 } 1620 - 1621 - prepare_to_wait(sk_sleep(sk), &wait, 1622 - TASK_INTERRUPTIBLE); 1623 1622 } 1624 - finish_wait(sk_sleep(sk), &wait); 1623 + remove_wait_queue(sk_sleep(sk), &wait); 1625 1624 1626 1625 /* These checks occur both as part of and after the loop 1627 1626 * conditional since we need to check before and after
+4 -4
net/wireless/scan.c
··· 322 322 { 323 323 struct cfg80211_sched_scan_request *pos; 324 324 325 - ASSERT_RTNL(); 325 + WARN_ON_ONCE(!rcu_read_lock_held() && !lockdep_rtnl_is_held()); 326 326 327 - list_for_each_entry(pos, &rdev->sched_scan_req_list, list) { 327 + list_for_each_entry_rcu(pos, &rdev->sched_scan_req_list, list) { 328 328 if (pos->reqid == reqid) 329 329 return pos; 330 330 } ··· 398 398 trace_cfg80211_sched_scan_results(wiphy, reqid); 399 399 /* ignore if we're not scanning */ 400 400 401 - rtnl_lock(); 401 + rcu_read_lock(); 402 402 request = cfg80211_find_sched_scan_req(rdev, reqid); 403 403 if (request) { 404 404 request->report_results = true; 405 405 queue_work(cfg80211_wq, &rdev->sched_scan_res_wk); 406 406 } 407 - rtnl_unlock(); 407 + rcu_read_unlock(); 408 408 } 409 409 EXPORT_SYMBOL(cfg80211_sched_scan_results); 410 410
+6 -4
net/wireless/util.c
··· 454 454 if (iftype == NL80211_IFTYPE_MESH_POINT) 455 455 skb_copy_bits(skb, hdrlen, &mesh_flags, 1); 456 456 457 + mesh_flags &= MESH_FLAGS_AE; 458 + 457 459 switch (hdr->frame_control & 458 460 cpu_to_le16(IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) { 459 461 case cpu_to_le16(IEEE80211_FCTL_TODS): ··· 471 469 iftype != NL80211_IFTYPE_STATION)) 472 470 return -1; 473 471 if (iftype == NL80211_IFTYPE_MESH_POINT) { 474 - if (mesh_flags & MESH_FLAGS_AE_A4) 472 + if (mesh_flags == MESH_FLAGS_AE_A4) 475 473 return -1; 476 - if (mesh_flags & MESH_FLAGS_AE_A5_A6) { 474 + if (mesh_flags == MESH_FLAGS_AE_A5_A6) { 477 475 skb_copy_bits(skb, hdrlen + 478 476 offsetof(struct ieee80211s_hdr, eaddr1), 479 477 tmp.h_dest, 2 * ETH_ALEN); ··· 489 487 ether_addr_equal(tmp.h_source, addr))) 490 488 return -1; 491 489 if (iftype == NL80211_IFTYPE_MESH_POINT) { 492 - if (mesh_flags & MESH_FLAGS_AE_A5_A6) 490 + if (mesh_flags == MESH_FLAGS_AE_A5_A6) 493 491 return -1; 494 - if (mesh_flags & MESH_FLAGS_AE_A4) 492 + if (mesh_flags == MESH_FLAGS_AE_A4) 495 493 skb_copy_bits(skb, hdrlen + 496 494 offsetof(struct ieee80211s_hdr, eaddr1), 497 495 tmp.h_source, ETH_ALEN);
+1 -1
net/xfrm/xfrm_device.c
··· 170 170 171 171 static int xfrm_dev_down(struct net_device *dev) 172 172 { 173 - if (dev->hw_features & NETIF_F_HW_ESP) 173 + if (dev->features & NETIF_F_HW_ESP) 174 174 xfrm_dev_state_flush(dev_net(dev), dev, true); 175 175 176 176 xfrm_garbage_collect(dev_net(dev));
-47
net/xfrm/xfrm_policy.c
··· 1797 1797 goto out; 1798 1798 } 1799 1799 1800 - #ifdef CONFIG_XFRM_SUB_POLICY 1801 - static int xfrm_dst_alloc_copy(void **target, const void *src, int size) 1802 - { 1803 - if (!*target) { 1804 - *target = kmalloc(size, GFP_ATOMIC); 1805 - if (!*target) 1806 - return -ENOMEM; 1807 - } 1808 - 1809 - memcpy(*target, src, size); 1810 - return 0; 1811 - } 1812 - #endif 1813 - 1814 - static int xfrm_dst_update_parent(struct dst_entry *dst, 1815 - const struct xfrm_selector *sel) 1816 - { 1817 - #ifdef CONFIG_XFRM_SUB_POLICY 1818 - struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 1819 - return xfrm_dst_alloc_copy((void **)&(xdst->partner), 1820 - sel, sizeof(*sel)); 1821 - #else 1822 - return 0; 1823 - #endif 1824 - } 1825 - 1826 - static int xfrm_dst_update_origin(struct dst_entry *dst, 1827 - const struct flowi *fl) 1828 - { 1829 - #ifdef CONFIG_XFRM_SUB_POLICY 1830 - struct xfrm_dst *xdst = (struct xfrm_dst *)dst; 1831 - return xfrm_dst_alloc_copy((void **)&(xdst->origin), fl, sizeof(*fl)); 1832 - #else 1833 - return 0; 1834 - #endif 1835 - } 1836 - 1837 1800 static int xfrm_expand_policies(const struct flowi *fl, u16 family, 1838 1801 struct xfrm_policy **pols, 1839 1802 int *num_pols, int *num_xfrms) ··· 1868 1905 1869 1906 xdst = (struct xfrm_dst *)dst; 1870 1907 xdst->num_xfrms = err; 1871 - if (num_pols > 1) 1872 - err = xfrm_dst_update_parent(dst, &pols[1]->selector); 1873 - else 1874 - err = xfrm_dst_update_origin(dst, fl); 1875 - if (unlikely(err)) { 1876 - dst_free(dst); 1877 - XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTBUNDLECHECKERROR); 1878 - return ERR_PTR(err); 1879 - } 1880 - 1881 1908 xdst->num_pols = num_pols; 1882 1909 memcpy(xdst->pols, pols, sizeof(struct xfrm_policy *) * num_pols); 1883 1910 xdst->policy_genid = atomic_read(&pols[0]->genid);
+2
net/xfrm/xfrm_state.c
··· 1383 1383 x->curlft.add_time = orig->curlft.add_time; 1384 1384 x->km.state = orig->km.state; 1385 1385 x->km.seq = orig->km.seq; 1386 + x->replay = orig->replay; 1387 + x->preplay = orig->preplay; 1386 1388 1387 1389 return x; 1388 1390
+5 -4
scripts/gdb/linux/dmesg.py
··· 23 23 super(LxDmesg, self).__init__("lx-dmesg", gdb.COMMAND_DATA) 24 24 25 25 def invoke(self, arg, from_tty): 26 - log_buf_addr = int(str(gdb.parse_and_eval("log_buf")).split()[0], 16) 27 - log_first_idx = int(gdb.parse_and_eval("log_first_idx")) 28 - log_next_idx = int(gdb.parse_and_eval("log_next_idx")) 29 - log_buf_len = int(gdb.parse_and_eval("log_buf_len")) 26 + log_buf_addr = int(str(gdb.parse_and_eval( 27 + "'printk.c'::log_buf")).split()[0], 16) 28 + log_first_idx = int(gdb.parse_and_eval("'printk.c'::log_first_idx")) 29 + log_next_idx = int(gdb.parse_and_eval("'printk.c'::log_next_idx")) 30 + log_buf_len = int(gdb.parse_and_eval("'printk.c'::log_buf_len")) 30 31 31 32 inf = gdb.inferiors()[0] 32 33 start = log_buf_addr + log_first_idx
+9 -2
sound/pci/hda/patch_realtek.c
··· 2324 2324 SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_MBA11_VREF), 2325 2325 
2326 2326 SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD), 2327 - SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD), 2328 - SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 2329 2327 SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE), 2330 2328 SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), 2329 + SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD), 2330 + SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), 2331 + SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 2331 2332 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 2332 2333 SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), 2333 2334 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), ··· 2343 2342 {.id = ALC883_FIXUP_ACER_EAPD, .name = "acer-aspire"}, 2344 2343 {.id = ALC882_FIXUP_INV_DMIC, .name = "inv-dmic"}, 2345 2344 {.id = ALC882_FIXUP_NO_PRIMARY_HP, .name = "no-primary-hp"}, 2345 + {.id = ALC1220_FIXUP_GB_DUAL_CODECS, .name = "dual-codecs"}, 2346 2346 {} 2347 2347 }; 2348 2348 ··· 6016 6014 {.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"}, 6017 6015 {.id = ALC292_FIXUP_TPT440, .name = "tpt440"}, 6018 6016 {.id = ALC292_FIXUP_TPT460, .name = "tpt460"}, 6017 + {.id = ALC233_FIXUP_LENOVO_MULTI_CODECS, .name = "dual-codecs"}, 6019 6018 {} 6020 6019 }; 6021 6020 #define ALC225_STANDARD_PINS \ ··· 6468 6465 break; 6469 6466 case 0x10ec0225: 6470 6467 case 0x10ec0295: 6468 + spec->codec_variant = ALC269_TYPE_ALC225; 6469 + break; 6471 6470 case 0x10ec0299: 6472 6471 spec->codec_variant = ALC269_TYPE_ALC225; 6472 + spec->gen.mixer_nid = 0; /* no loopback on ALC299 */ 6473 6473 break; 6474 6474 case 0x10ec0234: 6475 6475 case 0x10ec0274: ··· 7344 7338 {.id = ALC662_FIXUP_ASUS_MODE8, .name = "asus-mode8"}, 7345 7339 {.id = ALC662_FIXUP_INV_DMIC, .name = "inv-dmic"}, 7346 7340 {.id = ALC668_FIXUP_DELL_MIC_NO_PRESENCE, .name = "dell-headset-multi"}, 7341 + {.id = ALC662_FIXUP_LENOVO_MULTI_CODECS, .name = "dual-codecs"}, 7347 7342 {} 7348 7343 }; 7349 7344 
+2
sound/pci/hda/patch_sigmatel.c
··· 1559 1559 "Dell Inspiron 1501", STAC_9200_DELL_M26), 1560 1560 SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x01f6, 1561 1561 "unknown Dell", STAC_9200_DELL_M26), 1562 + SND_PCI_QUIRK(PCI_VENDOR_ID_DELL, 0x0201, 1563 + "Dell Latitude D430", STAC_9200_DELL_M22), 1562 1564 /* Panasonic */ 1563 1565 SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-74", STAC_9200_PANASONIC), 1564 1566 /* Gateway machines needs EAPD to be set on resume */
+12 -7
sound/usb/mixer_us16x08.c
··· 698 698 struct snd_usb_audio *chip = elem->head.mixer->chip; 699 699 struct snd_us16x08_meter_store *store = elem->private_data; 700 700 u8 meter_urb[64]; 701 - char tmp[sizeof(mix_init_msg2)] = {0}; 702 701 703 702 switch (kcontrol->private_value) { 704 - case 0: 705 - snd_us16x08_send_urb(chip, (char *)mix_init_msg1, 706 - sizeof(mix_init_msg1)); 703 + case 0: { 704 + char tmp[sizeof(mix_init_msg1)]; 705 + 706 + memcpy(tmp, mix_init_msg1, sizeof(mix_init_msg1)); 707 + snd_us16x08_send_urb(chip, tmp, 4); 707 708 snd_us16x08_recv_urb(chip, meter_urb, 708 709 sizeof(meter_urb)); 709 710 kcontrol->private_value++; 710 711 break; 712 + } 711 713 case 1: 712 714 snd_us16x08_recv_urb(chip, meter_urb, 713 715 sizeof(meter_urb)); ··· 720 718 sizeof(meter_urb)); 721 719 kcontrol->private_value++; 722 720 break; 723 - case 3: 721 + case 3: { 722 + char tmp[sizeof(mix_init_msg2)]; 723 + 724 724 memcpy(tmp, mix_init_msg2, sizeof(mix_init_msg2)); 725 725 tmp[2] = snd_get_meter_comp_index(store); 726 - snd_us16x08_send_urb(chip, tmp, sizeof(mix_init_msg2)); 726 + snd_us16x08_send_urb(chip, tmp, 10); 727 727 snd_us16x08_recv_urb(chip, meter_urb, 728 728 sizeof(meter_urb)); 729 729 kcontrol->private_value = 0; 730 730 break; 731 + } 731 732 } 732 733 733 734 for (set = 0; set < 6; set++) ··· 1140 1135 .control_id = SND_US16X08_ID_EQLOWMIDWIDTH, 1141 1136 .type = USB_MIXER_U8, 1142 1137 .num_channels = 16, 1143 - .name = "EQ MidQLow Q", 1138 + .name = "EQ MidLow Q", 1144 1139 }, 1145 1140 { /* EQ mid high gain */ 1146 1141 .kcontrol_new = &snd_us16x08_eq_gain_ctl,
+1 -1
sound/usb/quirks.c
··· 1364 1364 /* Amanero Combo384 USB interface with native DSD support */ 1365 1365 case USB_ID(0x16d0, 0x071a): 1366 1366 if (fp->altsetting == 2) { 1367 - switch (chip->dev->descriptor.bcdDevice) { 1367 + switch (le16_to_cpu(chip->dev->descriptor.bcdDevice)) { 1368 1368 case 0x199: 1369 1369 return SNDRV_PCM_FMTBIT_DSD_U32_LE; 1370 1370 case 0x19b:
+9 -1
tools/arch/arm/include/uapi/asm/kvm.h
··· 27 27 #define __KVM_HAVE_IRQ_LINE 28 28 #define __KVM_HAVE_READONLY_MEM 29 29 30 + #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 31 + 30 32 #define KVM_REG_SIZE(id) \ 31 33 (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT)) 32 34 ··· 116 114 }; 117 115 118 116 struct kvm_sync_regs { 117 + /* Used with KVM_CAP_ARM_USER_IRQ */ 118 + __u64 device_irq_level; 119 119 }; 120 120 121 121 struct kvm_arch_memory_slot { ··· 196 192 #define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5 197 193 #define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6 198 194 #define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7 195 + #define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8 199 196 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10 200 197 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \ 201 198 (0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT) 202 199 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK 0x3ff 203 200 #define VGIC_LEVEL_INFO_LINE_LEVEL 0 204 201 205 - #define KVM_DEV_ARM_VGIC_CTRL_INIT 0 202 + #define KVM_DEV_ARM_VGIC_CTRL_INIT 0 203 + #define KVM_DEV_ARM_ITS_SAVE_TABLES 1 204 + #define KVM_DEV_ARM_ITS_RESTORE_TABLES 2 205 + #define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3 206 206 207 207 /* KVM_IRQ_LINE irq field index values */ 208 208 #define KVM_ARM_IRQ_TYPE_SHIFT 24
+9 -1
tools/arch/arm64/include/uapi/asm/kvm.h
··· 39 39 #define __KVM_HAVE_IRQ_LINE 40 40 #define __KVM_HAVE_READONLY_MEM 41 41 42 + #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 43 + 42 44 #define KVM_REG_SIZE(id) \ 43 45 (1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT)) 44 46 ··· 145 143 #define KVM_GUESTDBG_USE_HW (1 << 17) 146 144 147 145 struct kvm_sync_regs { 146 + /* Used with KVM_CAP_ARM_USER_IRQ */ 147 + __u64 device_irq_level; 148 148 }; 149 149 150 150 struct kvm_arch_memory_slot { ··· 216 212 #define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5 217 213 #define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6 218 214 #define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7 215 + #define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8 219 216 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10 220 217 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \ 221 218 (0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT) 222 219 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK 0x3ff 223 220 #define VGIC_LEVEL_INFO_LINE_LEVEL 0 224 221 225 - #define KVM_DEV_ARM_VGIC_CTRL_INIT 0 222 + #define KVM_DEV_ARM_VGIC_CTRL_INIT 0 223 + #define KVM_DEV_ARM_ITS_SAVE_TABLES 1 224 + #define KVM_DEV_ARM_ITS_RESTORE_TABLES 2 225 + #define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3 226 226 227 227 /* Device Control API on vcpu fd */ 228 228 #define KVM_ARM_VCPU_PMU_V3_CTRL 0
+3
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 29 29 #define __KVM_HAVE_IRQ_LINE 30 30 #define __KVM_HAVE_GUEST_DEBUG 31 31 32 + /* Not always available, but if it is, this is the correct offset. */ 33 + #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 34 + 32 35 struct kvm_regs { 33 36 __u64 pc; 34 37 __u64 cr;
+24 -2
tools/arch/s390/include/uapi/asm/kvm.h
··· 26 26 #define KVM_DEV_FLIC_ADAPTER_REGISTER 6 27 27 #define KVM_DEV_FLIC_ADAPTER_MODIFY 7 28 28 #define KVM_DEV_FLIC_CLEAR_IO_IRQ 8 29 + #define KVM_DEV_FLIC_AISM 9 30 + #define KVM_DEV_FLIC_AIRQ_INJECT 10 29 31 /* 30 32 * We can have up to 4*64k pending subchannels + 8 adapter interrupts, 31 33 * as well as up to ASYNC_PF_PER_VCPU*KVM_MAX_VCPUS pfault done interrupts. ··· 43 41 __u8 isc; 44 42 __u8 maskable; 45 43 __u8 swap; 46 - __u8 pad; 44 + __u8 flags; 45 + }; 46 + 47 + #define KVM_S390_ADAPTER_SUPPRESSIBLE 0x01 48 + 49 + struct kvm_s390_ais_req { 50 + __u8 isc; 51 + __u16 mode; 47 52 }; 48 53 49 54 #define KVM_S390_IO_ADAPTER_MASK 1 ··· 119 110 #define KVM_S390_VM_CPU_FEAT_CMMA 10 120 111 #define KVM_S390_VM_CPU_FEAT_PFMFI 11 121 112 #define KVM_S390_VM_CPU_FEAT_SIGPIF 12 113 + #define KVM_S390_VM_CPU_FEAT_KSS 13 122 114 struct kvm_s390_vm_cpu_feat { 123 115 __u64 feat[16]; 124 116 }; ··· 208 198 #define KVM_SYNC_VRS (1UL << 6) 209 199 #define KVM_SYNC_RICCB (1UL << 7) 210 200 #define KVM_SYNC_FPRS (1UL << 8) 201 + #define KVM_SYNC_GSCB (1UL << 9) 202 + /* length and alignment of the sdnx as a power of two */ 203 + #define SDNXC 8 204 + #define SDNXL (1UL << SDNXC) 211 205 /* definition of registers in kvm_run */ 212 206 struct kvm_sync_regs { 213 207 __u64 prefix; /* prefix register */ ··· 232 218 }; 233 219 __u8 reserved[512]; /* for future vector expansion */ 234 220 __u32 fpc; /* valid on KVM_SYNC_VRS or KVM_SYNC_FPRS */ 235 - __u8 padding[52]; /* riccb needs to be 64byte aligned */ 221 + __u8 padding1[52]; /* riccb needs to be 64byte aligned */ 236 222 __u8 riccb[64]; /* runtime instrumentation controls block */ 223 + __u8 padding2[192]; /* sdnx needs to be 256byte aligned */ 224 + union { 225 + __u8 sdnx[SDNXL]; /* state description annex */ 226 + struct { 227 + __u64 reserved1[2]; 228 + __u64 gscb[4]; 229 + }; 230 + }; 237 231 }; 238 232 239 233 #define KVM_REG_S390_TODPR (KVM_REG_S390 | KVM_REG_SIZE_U32 | 0x1)
+2
tools/arch/x86/include/asm/cpufeatures.h
··· 202 202 #define X86_FEATURE_AVX512_4VNNIW (7*32+16) /* AVX-512 Neural Network Instructions */ 203 203 #define X86_FEATURE_AVX512_4FMAPS (7*32+17) /* AVX-512 Multiply Accumulation Single precision */ 204 204 205 + #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */ 206 + 205 207 /* Virtualization flags: Linux defined, word 8 */ 206 208 #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */ 207 209 #define X86_FEATURE_VNMI ( 8*32+ 1) /* Intel Virtual NMI */
+7 -1
tools/arch/x86/include/asm/disabled-features.h
··· 36 36 # define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31)) 37 37 #endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */ 38 38 39 + #ifdef CONFIG_X86_5LEVEL 40 + # define DISABLE_LA57 0 41 + #else 42 + # define DISABLE_LA57 (1<<(X86_FEATURE_LA57 & 31)) 43 + #endif 44 + 39 45 /* 40 46 * Make sure to add features to the correct mask 41 47 */ ··· 61 55 #define DISABLED_MASK13 0 62 56 #define DISABLED_MASK14 0 63 57 #define DISABLED_MASK15 0 64 - #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE) 58 + #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57) 65 59 #define DISABLED_MASK17 0 66 60 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18) 67 61
+7 -1
tools/arch/x86/include/asm/required-features.h
··· 53 53 # define NEED_MOVBE 0 54 54 #endif 55 55 56 + #ifdef CONFIG_X86_5LEVEL 57 + # define NEED_LA57 (1<<(X86_FEATURE_LA57 & 31)) 58 + #else 59 + # define NEED_LA57 0 60 + #endif 61 + 56 62 #ifdef CONFIG_X86_64 57 63 #ifdef CONFIG_PARAVIRT 58 64 /* Paravirtualized systems may not have PSE or PGE available */ ··· 104 98 #define REQUIRED_MASK13 0 105 99 #define REQUIRED_MASK14 0 106 100 #define REQUIRED_MASK15 0 107 - #define REQUIRED_MASK16 0 101 + #define REQUIRED_MASK16 (NEED_LA57) 108 102 #define REQUIRED_MASK17 0 109 103 #define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18) 110 104
+3
tools/arch/x86/include/uapi/asm/kvm.h
··· 9 9 #include <linux/types.h> 10 10 #include <linux/ioctl.h> 11 11 12 + #define KVM_PIO_PAGE_OFFSET 1 13 + #define KVM_COALESCED_MMIO_PAGE_OFFSET 2 14 + 12 15 #define DE_VECTOR 0 13 16 #define DB_VECTOR 1 14 17 #define BP_VECTOR 3
+19 -8
tools/arch/x86/include/uapi/asm/vmx.h
··· 76 76 #define EXIT_REASON_WBINVD 54 77 77 #define EXIT_REASON_XSETBV 55 78 78 #define EXIT_REASON_APIC_WRITE 56 79 + #define EXIT_REASON_RDRAND 57 79 80 #define EXIT_REASON_INVPCID 58 81 + #define EXIT_REASON_VMFUNC 59 82 + #define EXIT_REASON_ENCLS 60 83 + #define EXIT_REASON_RDSEED 61 80 84 #define EXIT_REASON_PML_FULL 62 81 85 #define EXIT_REASON_XSAVES 63 82 86 #define EXIT_REASON_XRSTORS 64 ··· 94 90 { EXIT_REASON_TASK_SWITCH, "TASK_SWITCH" }, \ 95 91 { EXIT_REASON_CPUID, "CPUID" }, \ 96 92 { EXIT_REASON_HLT, "HLT" }, \ 93 + { EXIT_REASON_INVD, "INVD" }, \ 97 94 { EXIT_REASON_INVLPG, "INVLPG" }, \ 98 95 { EXIT_REASON_RDPMC, "RDPMC" }, \ 99 96 { EXIT_REASON_RDTSC, "RDTSC" }, \ ··· 113 108 { EXIT_REASON_IO_INSTRUCTION, "IO_INSTRUCTION" }, \ 114 109 { EXIT_REASON_MSR_READ, "MSR_READ" }, \ 115 110 { EXIT_REASON_MSR_WRITE, "MSR_WRITE" }, \ 111 + { EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \ 112 + { EXIT_REASON_MSR_LOAD_FAIL, "MSR_LOAD_FAIL" }, \ 116 113 { EXIT_REASON_MWAIT_INSTRUCTION, "MWAIT_INSTRUCTION" }, \ 117 114 { EXIT_REASON_MONITOR_TRAP_FLAG, "MONITOR_TRAP_FLAG" }, \ 118 115 { EXIT_REASON_MONITOR_INSTRUCTION, "MONITOR_INSTRUCTION" }, \ ··· 122 115 { EXIT_REASON_MCE_DURING_VMENTRY, "MCE_DURING_VMENTRY" }, \ 123 116 { EXIT_REASON_TPR_BELOW_THRESHOLD, "TPR_BELOW_THRESHOLD" }, \ 124 117 { EXIT_REASON_APIC_ACCESS, "APIC_ACCESS" }, \ 125 - { EXIT_REASON_GDTR_IDTR, "GDTR_IDTR" }, \ 126 - { EXIT_REASON_LDTR_TR, "LDTR_TR" }, \ 118 + { EXIT_REASON_EOI_INDUCED, "EOI_INDUCED" }, \ 119 + { EXIT_REASON_GDTR_IDTR, "GDTR_IDTR" }, \ 120 + { EXIT_REASON_LDTR_TR, "LDTR_TR" }, \ 127 121 { EXIT_REASON_EPT_VIOLATION, "EPT_VIOLATION" }, \ 128 122 { EXIT_REASON_EPT_MISCONFIG, "EPT_MISCONFIG" }, \ 129 123 { EXIT_REASON_INVEPT, "INVEPT" }, \ 124 + { EXIT_REASON_RDTSCP, "RDTSCP" }, \ 130 125 { EXIT_REASON_PREEMPTION_TIMER, "PREEMPTION_TIMER" }, \ 131 - { EXIT_REASON_WBINVD, "WBINVD" }, \ 132 - { EXIT_REASON_APIC_WRITE, "APIC_WRITE" }, \ 133 - { EXIT_REASON_EOI_INDUCED, "EOI_INDUCED" }, \ 134 - { EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \ 135 - { EXIT_REASON_MSR_LOAD_FAIL, "MSR_LOAD_FAIL" }, \ 136 - { EXIT_REASON_INVD, "INVD" }, \ 137 126 { EXIT_REASON_INVVPID, "INVVPID" }, \ 127 + { EXIT_REASON_WBINVD, "WBINVD" }, \ 128 + { EXIT_REASON_XSETBV, "XSETBV" }, \ 129 + { EXIT_REASON_APIC_WRITE, "APIC_WRITE" }, \ 130 + { EXIT_REASON_RDRAND, "RDRAND" }, \ 138 131 { EXIT_REASON_INVPCID, "INVPCID" }, \ 132 + { EXIT_REASON_VMFUNC, "VMFUNC" }, \ 133 + { EXIT_REASON_ENCLS, "ENCLS" }, \ 134 + { EXIT_REASON_RDSEED, "RDSEED" }, \ 135 + { EXIT_REASON_PML_FULL, "PML_FULL" }, \ 139 136 { EXIT_REASON_XSAVES, "XSAVES" }, \ 140 137 { EXIT_REASON_XRSTORS, "XRSTORS" } 141 138 
+10
tools/include/linux/filter.h
··· 208 208 .off = OFF, \ 209 209 .imm = IMM }) 210 210 211 + /* Unconditional jumps, goto pc + off16 */ 212 + 213 + #define BPF_JMP_A(OFF) \ 214 + ((struct bpf_insn) { \ 215 + .code = BPF_JMP | BPF_JA, \ 216 + .dst_reg = 0, \ 217 + .src_reg = 0, \ 218 + .off = OFF, \ 219 + .imm = 0 }) 220 + 211 221 /* Function call */ 212 222 213 223 #define BPF_EMIT_CALL(FUNC) \
+2 -6
tools/include/uapi/linux/stat.h
··· 48 48 * tv_sec holds the number of seconds before (negative) or after (positive) 49 49 * 00:00:00 1st January 1970 UTC. 50 50 * 51 - * tv_nsec holds a number of nanoseconds before (0..-999,999,999 if tv_sec is 52 - * negative) or after (0..999,999,999 if tv_sec is positive) the tv_sec time. 53 - * 54 - * Note that if both tv_sec and tv_nsec are non-zero, then the two values must 55 - * either be both positive or both negative. 51 + * tv_nsec holds a number of nanoseconds (0..999,999,999) after the tv_sec time. 56 52 * 57 53 * __reserved is held in case we need a yet finer resolution. 58 54 */ 59 55 struct statx_timestamp { 60 56 __s64 tv_sec; 61 - __s32 tv_nsec; 57 + __u32 tv_nsec; 62 58 __s32 __reserved; 63 59 }; 64 60
+4
tools/perf/Documentation/perf-script.txt
··· 311 311 Set the maximum number of program blocks to print with brstackasm for 312 312 each sample. 313 313 314 + --inline:: 315 + If a callgraph address belongs to an inlined function, the inline stack 316 + will be printed. Each entry has function name and file/line. 317 + 314 318 SEE ALSO 315 319 -------- 316 320 linkperf:perf-record[1], linkperf:perf-script-perl[1],
+2
tools/perf/builtin-script.c
··· 2494 2494 "Enable kernel symbol demangling"), 2495 2495 OPT_STRING(0, "time", &script.time_str, "str", 2496 2496 "Time span of interest (start,stop)"), 2497 + OPT_BOOLEAN(0, "inline", &symbol_conf.inline_name, 2498 + "Show inline function"), 2497 2499 OPT_END() 2498 2500 }; 2499 2501 const char * const script_subcommands[] = { "record", "report", NULL };
+2
tools/perf/ui/hist.c
··· 210 210 return 0; 211 211 212 212 ret = b->callchain->max_depth - a->callchain->max_depth; 213 + if (callchain_param.order == ORDER_CALLER) 214 + ret = -ret; 213 215 } 214 216 return ret; 215 217 }
+11 -6
tools/perf/util/callchain.c
··· 621 621 static enum match_result match_chain_srcline(struct callchain_cursor_node *node, 622 622 struct callchain_list *cnode) 623 623 { 624 - char *left = get_srcline(cnode->ms.map->dso, 625 - map__rip_2objdump(cnode->ms.map, cnode->ip), 626 - cnode->ms.sym, true, false); 627 - char *right = get_srcline(node->map->dso, 628 - map__rip_2objdump(node->map, node->ip), 629 - node->sym, true, false); 624 + char *left = NULL; 625 + char *right = NULL; 630 626 enum match_result ret = MATCH_EQ; 631 627 int cmp; 628 + 629 + if (cnode->ms.map) 630 + left = get_srcline(cnode->ms.map->dso, 631 + map__rip_2objdump(cnode->ms.map, cnode->ip), 632 + cnode->ms.sym, true, false); 633 + if (node->map) 634 + right = get_srcline(node->map->dso, 635 + map__rip_2objdump(node->map, node->ip), 636 + node->sym, true, false); 632 637 633 638 if (left && right) 634 639 cmp = strcmp(left, right);
+33
tools/perf/util/evsel_fprintf.c
··· 7 7 #include "map.h" 8 8 #include "strlist.h" 9 9 #include "symbol.h" 10 + #include "srcline.h" 10 11 11 12 static int comma_fprintf(FILE *fp, bool *first, const char *fmt, ...) 12 13 { ··· 168 167 169 168 if (!print_oneline) 170 169 printed += fprintf(fp, "\n"); 170 + 171 + if (symbol_conf.inline_name && node->map) { 172 + struct inline_node *inode; 173 + 174 + addr = map__rip_2objdump(node->map, node->ip), 175 + inode = dso__parse_addr_inlines(node->map->dso, addr); 176 + 177 + if (inode) { 178 + struct inline_list *ilist; 179 + 180 + list_for_each_entry(ilist, &inode->val, list) { 181 + if (print_arrow) 182 + printed += fprintf(fp, " <-"); 183 + 184 + /* IP is same, just skip it */ 185 + if (print_ip) 186 + printed += fprintf(fp, "%c%16s", 187 + s, ""); 188 + if (print_sym) 189 + printed += fprintf(fp, " %s", 190 + ilist->funcname); 191 + if (print_srcline) 192 + printed += fprintf(fp, "\n %s:%d", 193 + ilist->filename, 194 + ilist->line_nr); 195 + if (!print_oneline) 196 + printed += fprintf(fp, "\n"); 197 + } 198 + 199 + inline_node__delete(inode); 200 + } 201 + } 171 202 172 203 if (symbol_conf.bt_stop_list && 173 204 node->sym &&
+27 -24
tools/perf/util/srcline.c
··· 56 56 } 57 57 } 58 58 59 - list_add_tail(&ilist->list, &node->val); 59 + if (callchain_param.order == ORDER_CALLEE) 60 + list_add_tail(&ilist->list, &node->val); 61 + else 62 + list_add(&ilist->list, &node->val); 60 63 61 64 return 0; 62 65 } ··· 203 200 204 201 #define MAX_INLINE_NEST 1024 205 202 206 - static void inline_list__reverse(struct inline_node *node) 203 + static int inline_list__append_dso_a2l(struct dso *dso, 204 + struct inline_node *node) 207 205 { 208 - struct inline_list *ilist, *n; 206 + struct a2l_data *a2l = dso->a2l; 207 + char *funcname = a2l->funcname ? strdup(a2l->funcname) : NULL; 208 + char *filename = a2l->filename ? strdup(a2l->filename) : NULL; 209 209 210 - list_for_each_entry_safe_reverse(ilist, n, &node->val, list) 211 - list_move_tail(&ilist->list, &node->val); 210 + return inline_list__append(filename, funcname, a2l->line, node, dso); 212 211 } 213 212 214 213 static int addr2line(const char *dso_name, u64 addr, ··· 235 230 236 231 bfd_map_over_sections(a2l->abfd, find_address_in_section, a2l); 237 232 238 - if (a2l->found && unwind_inlines) { 233 + if (!a2l->found) 234 + return 0; 235 + 236 + if (unwind_inlines) { 239 237 int cnt = 0; 238 + 239 + if (node && inline_list__append_dso_a2l(dso, node)) 240 + return 0; 240 241 241 242 while (bfd_find_inliner_info(a2l->abfd, &a2l->filename, 242 243 &a2l->funcname, &a2l->line) && 243 244 cnt++ < MAX_INLINE_NEST) { 244 245 245 246 if (node != NULL) { 246 - if (inline_list__append(strdup(a2l->filename), 247 - strdup(a2l->funcname), 248 - a2l->line, node, 249 - dso) != 0) 247 + if (inline_list__append_dso_a2l(dso, node)) 250 248 return 0; 249 + // found at least one inline frame 250 + ret = 1; 251 251 } 252 252 } 253 - 254 - if ((node != NULL) && 255 - (callchain_param.order != ORDER_CALLEE)) { 256 - inline_list__reverse(node); 257 - } 258 253 } 259 254 260 - if (a2l->found && a2l->filename) { 261 - *file = strdup(a2l->filename); 255 + if (file) { 256 + *file = a2l->filename ? strdup(a2l->filename) : NULL; 257 + ret = *file ? 1 : 0; 258 + } 259 + 260 + if (line) 262 261 *line = a2l->line; 263 - 264 - if (*file) 265 - ret = 1; 266 - } 267 262 268 263 return ret; 269 264 } ··· 283 278 static struct inline_node *addr2inlines(const char *dso_name, u64 addr, 284 279 struct dso *dso) 285 280 { 286 - char *file = NULL; 287 - unsigned int line = 0; 288 281 struct inline_node *node; 289 282 290 283 node = zalloc(sizeof(*node)); ··· 294 291 INIT_LIST_HEAD(&node->val); 295 292 node->addr = addr; 296 293 297 - if (!addr2line(dso_name, addr, &file, &line, dso, TRUE, node)) 294 + if (!addr2line(dso_name, addr, NULL, NULL, dso, TRUE, node)) 298 295 goto out_free_inline_node; 299 296 300 297 if (list_empty(&node->val))
+5 -1
tools/perf/util/unwind-libdw.c
··· 168 168 { 169 169 struct unwind_info *ui = arg; 170 170 Dwarf_Addr pc; 171 + bool isactivation; 171 172 172 - if (!dwfl_frame_pc(state, &pc, NULL)) { 173 + if (!dwfl_frame_pc(state, &pc, &isactivation)) { 173 174 pr_err("%s", dwfl_errmsg(-1)); 174 175 return DWARF_CB_ABORT; 175 176 } 177 + 178 + if (!isactivation) 179 + --pc; 176 180 177 181 return entry(pc, ui) || !(--ui->max_stack) ? 178 182 DWARF_CB_ABORT : DWARF_CB_OK;
+11
tools/perf/util/unwind-libunwind-local.c
··· 692 692 693 693 while (!ret && (unw_step(&c) > 0) && i < max_stack) { 694 694 unw_get_reg(&c, UNW_REG_IP, &ips[i]); 695 + 696 + /* 697 + * Decrement the IP for any non-activation frames. 698 + * this is required to properly find the srcline 699 + * for caller frames. 700 + * See also the documentation for dwfl_frame_pc(), 701 + * which this code tries to replicate. 702 + */ 703 + if (unw_is_signal_frame(&c) <= 0) 704 + --ips[i]; 705 + 695 706 ++i; 696 707 } 697 708
+4
tools/power/acpi/.gitignore
··· 1 + acpidbg 2 + acpidump 3 + ec 4 + include
+235 -4
tools/testing/selftests/bpf/test_verifier.c
··· 49 49 #define MAX_NR_MAPS 4 50 50 51 51 #define F_NEEDS_EFFICIENT_UNALIGNED_ACCESS (1 << 0) 52 + #define F_LOAD_WITH_STRICT_ALIGNMENT (1 << 1) 52 53 53 54 struct bpf_test { 54 55 const char *descr; ··· 2616 2615 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 2617 2616 }, 2618 2617 { 2618 + "direct packet access: test17 (pruning, alignment)", 2619 + .insns = { 2620 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 2621 + offsetof(struct __sk_buff, data)), 2622 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 2623 + offsetof(struct __sk_buff, data_end)), 2624 + BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1, 2625 + offsetof(struct __sk_buff, mark)), 2626 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 2627 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 14), 2628 + BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 1, 4), 2629 + BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1), 2630 + BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, -4), 2631 + BPF_MOV64_IMM(BPF_REG_0, 0), 2632 + BPF_EXIT_INSN(), 2633 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1), 2634 + BPF_JMP_A(-6), 2635 + }, 2636 + .errstr = "misaligned packet access off 2+15+-4 size 4", 2637 + .result = REJECT, 2638 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 2639 + .flags = F_LOAD_WITH_STRICT_ALIGNMENT, 2640 + }, 2641 + { 2619 2642 "helper access to packet: test1, valid packet_ptr range", 2620 2643 .insns = { 2621 2644 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, ··· 3363 3338 }, 3364 3339 .fixup_map1 = { 4 }, 3365 3340 .result = ACCEPT, 3341 + .prog_type = BPF_PROG_TYPE_SCHED_CLS 3342 + }, 3343 + { 3344 + "alu ops on ptr_to_map_value_or_null, 1", 3345 + .insns = { 3346 + BPF_MOV64_IMM(BPF_REG_1, 10), 3347 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), 3348 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 3349 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 3350 + BPF_LD_MAP_FD(BPF_REG_1, 0), 3351 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 3352 + BPF_FUNC_map_lookup_elem), 3353 + BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), 3354 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -2), 3355 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 2), 3356 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 3357 + BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), 3358 + BPF_EXIT_INSN(), 3359 + }, 3360 + .fixup_map1 = { 4 }, 3361 + .errstr = "R4 invalid mem access", 3362 + .result = REJECT, 3363 + .prog_type = BPF_PROG_TYPE_SCHED_CLS 3364 + }, 3365 + { 3366 + "alu ops on ptr_to_map_value_or_null, 2", 3367 + .insns = { 3368 + BPF_MOV64_IMM(BPF_REG_1, 10), 3369 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), 3370 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 3371 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 3372 + BPF_LD_MAP_FD(BPF_REG_1, 0), 3373 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 3374 + BPF_FUNC_map_lookup_elem), 3375 + BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), 3376 + BPF_ALU64_IMM(BPF_AND, BPF_REG_4, -1), 3377 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 3378 + BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), 3379 + BPF_EXIT_INSN(), 3380 + }, 3381 + .fixup_map1 = { 4 }, 3382 + .errstr = "R4 invalid mem access", 3383 + .result = REJECT, 3384 + .prog_type = BPF_PROG_TYPE_SCHED_CLS 3385 + }, 3386 + { 3387 + "alu ops on ptr_to_map_value_or_null, 3", 3388 + .insns = { 3389 + BPF_MOV64_IMM(BPF_REG_1, 10), 3390 + BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), 3391 + BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 3392 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 3393 + BPF_LD_MAP_FD(BPF_REG_1, 0), 3394 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 3395 + BPF_FUNC_map_lookup_elem), 3396 + BPF_MOV64_REG(BPF_REG_4, BPF_REG_0), 3397 + BPF_ALU64_IMM(BPF_LSH, BPF_REG_4, 1), 3398 + BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 3399 + BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0), 3400 + BPF_EXIT_INSN(), 3401 + }, 3402 + .fixup_map1 = { 4 }, 3403 + .errstr = "R4 invalid mem access", 3404 + .result = REJECT, 3366 3405 .prog_type = BPF_PROG_TYPE_SCHED_CLS 3367 3406 }, 3368 3407 { ··· 5026 4937 .fixup_map_in_map = { 3 }, 5027 4938 .errstr = "R1 type=map_value_or_null expected=map_ptr", 5028 4939 .result = REJECT, 5029 - } 4940 + }, 4941 + { 4942 + "ld_abs: check calling conv, r1", 4943 + .insns = { 4944 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 4945 + BPF_MOV64_IMM(BPF_REG_1, 0), 4946 + BPF_LD_ABS(BPF_W, -0x200000), 4947 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), 4948 + BPF_EXIT_INSN(), 4949 + }, 4950 + .errstr = "R1 !read_ok", 4951 + .result = REJECT, 4952 + }, 4953 + { 4954 + "ld_abs: check calling conv, r2", 4955 + .insns = { 4956 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 4957 + BPF_MOV64_IMM(BPF_REG_2, 0), 4958 + BPF_LD_ABS(BPF_W, -0x200000), 4959 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 4960 + BPF_EXIT_INSN(), 4961 + }, 4962 + .errstr = "R2 !read_ok", 4963 + .result = REJECT, 4964 + }, 4965 + { 4966 + "ld_abs: check calling conv, r3", 4967 + .insns = { 4968 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 4969 + BPF_MOV64_IMM(BPF_REG_3, 0), 4970 + BPF_LD_ABS(BPF_W, -0x200000), 4971 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), 4972 + BPF_EXIT_INSN(), 4973 + }, 4974 + .errstr = "R3 !read_ok", 4975 + .result = REJECT, 4976 + }, 4977 + { 4978 + "ld_abs: check calling conv, r4", 4979 + .insns = { 4980 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 4981 + BPF_MOV64_IMM(BPF_REG_4, 0), 4982 + BPF_LD_ABS(BPF_W, -0x200000), 4983 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 4984 + BPF_EXIT_INSN(), 4985 + }, 4986 + .errstr = "R4 !read_ok", 4987 + .result = REJECT, 4988 + }, 4989 + { 4990 + "ld_abs: check calling conv, r5", 4991 + .insns = { 4992 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 4993 + BPF_MOV64_IMM(BPF_REG_5, 0), 4994 + BPF_LD_ABS(BPF_W, -0x200000), 4995 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_5), 4996 + BPF_EXIT_INSN(), 4997 + }, 4998 + .errstr = "R5 !read_ok", 4999 + .result = REJECT, 5000 + }, 5001 + { 5002 + "ld_abs: check calling conv, r7", 5003 + .insns = { 5004 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5005 + BPF_MOV64_IMM(BPF_REG_7, 0), 5006 + BPF_LD_ABS(BPF_W, -0x200000), 5007 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_7), 5008 + BPF_EXIT_INSN(), 5009 + }, 5010 + .result = ACCEPT, 5011 + }, 5012 + { 5013 + "ld_ind: check calling conv, r1", 5014 + .insns = { 5015 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5016 + 
BPF_MOV64_IMM(BPF_REG_1, 1), 5017 + BPF_LD_IND(BPF_W, BPF_REG_1, -0x200000), 5018 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), 5019 + BPF_EXIT_INSN(), 5020 + }, 5021 + .errstr = "R1 !read_ok", 5022 + .result = REJECT, 5023 + }, 5024 + { 5025 + "ld_ind: check calling conv, r2", 5026 + .insns = { 5027 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5028 + BPF_MOV64_IMM(BPF_REG_2, 1), 5029 + BPF_LD_IND(BPF_W, BPF_REG_2, -0x200000), 5030 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 5031 + BPF_EXIT_INSN(), 5032 + }, 5033 + .errstr = "R2 !read_ok", 5034 + .result = REJECT, 5035 + }, 5036 + { 5037 + "ld_ind: check calling conv, r3", 5038 + .insns = { 5039 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5040 + BPF_MOV64_IMM(BPF_REG_3, 1), 5041 + BPF_LD_IND(BPF_W, BPF_REG_3, -0x200000), 5042 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_3), 5043 + BPF_EXIT_INSN(), 5044 + }, 5045 + .errstr = "R3 !read_ok", 5046 + .result = REJECT, 5047 + }, 5048 + { 5049 + "ld_ind: check calling conv, r4", 5050 + .insns = { 5051 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5052 + BPF_MOV64_IMM(BPF_REG_4, 1), 5053 + BPF_LD_IND(BPF_W, BPF_REG_4, -0x200000), 5054 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 5055 + BPF_EXIT_INSN(), 5056 + }, 5057 + .errstr = "R4 !read_ok", 5058 + .result = REJECT, 5059 + }, 5060 + { 5061 + "ld_ind: check calling conv, r5", 5062 + .insns = { 5063 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5064 + BPF_MOV64_IMM(BPF_REG_5, 1), 5065 + BPF_LD_IND(BPF_W, BPF_REG_5, -0x200000), 5066 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_5), 5067 + BPF_EXIT_INSN(), 5068 + }, 5069 + .errstr = "R5 !read_ok", 5070 + .result = REJECT, 5071 + }, 5072 + { 5073 + "ld_ind: check calling conv, r7", 5074 + .insns = { 5075 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 5076 + BPF_MOV64_IMM(BPF_REG_7, 1), 5077 + BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000), 5078 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_7), 5079 + BPF_EXIT_INSN(), 5080 + }, 5081 + .result = ACCEPT, 5082 + }, 5030 5083 }; 5031 5084 5032 5085 static int probe_filter_length(const struct bpf_insn *fp) ··· 5290 5059 
5291 5060 do_test_fixup(test, prog, map_fds); 5292 5061 5293 - fd_prog = bpf_load_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER, 5294 - prog, prog_len, "GPL", 0, bpf_vlog, 5295 - sizeof(bpf_vlog)); 5062 + fd_prog = bpf_verify_program(prog_type ? : BPF_PROG_TYPE_SOCKET_FILTER, 5063 + prog, prog_len, test->flags & F_LOAD_WITH_STRICT_ALIGNMENT, 5064 + "GPL", 0, bpf_vlog, sizeof(bpf_vlog)); 5296 5065 5297 5066 expected_ret = unpriv && test->result_unpriv != UNDEF ? 5298 5067 test->result_unpriv : test->result;
+21
tools/testing/selftests/ftrace/test.d/kprobe/multiple_kprobes.tc
··· 1 + #!/bin/sh 2 + # description: Register/unregister many kprobe events 3 + 4 + # ftrace fentry skip size depends on the machine architecture. 5 + # Currently HAVE_KPROBES_ON_FTRACE is defined on x86 and powerpc 6 + case `uname -m` in 7 + x86_64|i[3456]86) OFFS=5;; 8 + ppc*) OFFS=4;; 9 + *) OFFS=0;; 10 + esac 11 + 12 + echo "Setup up to 256 kprobes" 13 + grep t /proc/kallsyms | cut -f3 -d" " | grep -v .*\\..* | \ 14 + head -n 256 | while read i; do echo p ${i}+${OFFS} ; done > kprobe_events ||: 15 + 16 + echo 1 > events/kprobes/enable 17 + echo 0 > events/kprobes/enable 18 + echo > kprobe_events 19 + echo "Waiting for unoptimizing & freeing" 20 + sleep 5 21 + echo "Done"
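The arch-dependent offset selection in the script above can be factored into a small helper; a sketch (arch_offs is a hypothetical name, not part of the test — the offsets themselves come from the script: probes placed via HAVE_KPROBES_ON_FTRACE must skip the fentry sequence, 5 bytes on x86, 4 on powerpc):

```shell
# Hypothetical helper mirroring the test's fentry-offset selection.
arch_offs() {
	case "$1" in
	x86_64|i[3456]86) echo 5;;
	ppc*) echo 4;;
	*) echo 0;;
	esac
}

arch_offs x86_64    # prints 5
arch_offs ppc64le   # prints 4
arch_offs aarch64   # prints 0
```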
+1 -1
tools/testing/selftests/powerpc/tm/tm-resched-dscr.c
··· 42 42 printf("Check DSCR TM context switch: "); 43 43 fflush(stdout); 44 44 for (;;) { 45 - rv = 1; 46 45 asm __volatile__ ( 47 46 /* set a known value into the DSCR */ 48 47 "ld 3, %[dscr1];" 49 48 "mtspr %[sprn_dscr], 3;" 50 49 50 + "li %[rv], 1;" 51 51 /* start and suspend a transaction */ 52 52 "tbegin.;" 53 53 "beq 1f;"
+1
usr/Kconfig
··· 220 220 endchoice 221 221 222 222 config INITRAMFS_COMPRESSION 223 + depends on INITRAMFS_SOURCE!="" 223 224 string 224 225 default "" if INITRAMFS_COMPRESSION_NONE 225 226 default ".gz" if INITRAMFS_COMPRESSION_GZIP
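With the added dependency, the INITRAMFS_COMPRESSION suffix is only computed when a built-in initramfs source is actually configured. A hypothetical .config fragment (the source path is illustrative, not from this patch):

```
CONFIG_INITRAMFS_SOURCE="/usr/src/initramfs.list"
CONFIG_INITRAMFS_COMPRESSION_GZIP=y
# INITRAMFS_COMPRESSION then resolves to ".gz"
```

Without INITRAMFS_SOURCE set, the compression choice is hidden entirely instead of silently defaulting to "".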