Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

+2894 -1331
+12 -4
Documentation/acpi/acpi-lid.txt
··· 59 59 If the userspace hasn't been prepared to ignore the unreliable "opened" 60 60 events and the unreliable initial state notification, Linux users can use 61 61 the following kernel parameters to handle the possible issues: 62 - A. button.lid_init_state=open: 62 + A. button.lid_init_state=method: 63 + When this option is specified, the ACPI button driver reports the 64 + initial lid state using the returning value of the _LID control method 65 + and whether the "opened"/"closed" events are paired fully relies on the 66 + firmware implementation. 67 + This option can be used to fix some platforms where the returning value 68 + of the _LID control method is reliable but the initial lid state 69 + notification is missing. 70 + This option is the default behavior during the period the userspace 71 + isn't ready to handle the buggy AML tables. 72 + B. button.lid_init_state=open: 63 73 When this option is specified, the ACPI button driver always reports the 64 74 initial lid state as "opened" and whether the "opened"/"closed" events 65 75 are paired fully relies on the firmware implementation. 66 76 This may fix some platforms where the returning value of the _LID 67 77 control method is not reliable and the initial lid state notification is 68 78 missing. 69 - This option is the default behavior during the period the userspace 70 - isn't ready to handle the buggy AML tables. 71 79 72 80 If the userspace has been prepared to ignore the unreliable "opened" events 73 81 and the unreliable initial state notification, Linux users should always 74 82 use the following kernel parameter: 75 - B. button.lid_init_state=ignore: 83 + C. button.lid_init_state=ignore: 76 84 When this option is specified, the ACPI button driver never reports the 77 85 initial lid state and there is a compensation mechanism implemented to 78 86 ensure that the reliable "closed" notifications can always be delivered
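The `button.lid_init_state` modes above are selected on the kernel command line. A hedged sketch of checking which mode is in effect; a real system would read `/proc/cmdline` after boot, and the command-line string below is a made-up stand-in for that file's contents:

```shell
# Made-up stand-in for the contents of /proc/cmdline after booting with the
# "ignore" mode selected; the grep extracts the lid_init_state setting.
cmdline="ro quiet splash button.lid_init_state=ignore"
echo "$cmdline" | grep -o 'button\.lid_init_state=[a-z]*'
```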
+10 -9
Documentation/admin-guide/pm/cpufreq.rst
··· 1 1 .. |struct cpufreq_policy| replace:: :c:type:`struct cpufreq_policy <cpufreq_policy>` 2 + .. |intel_pstate| replace:: :doc:`intel_pstate <intel_pstate>` 2 3 3 4 ======================= 4 5 CPU Performance Scaling ··· 76 75 interface it comes from and may not be easily represented in an abstract, 77 76 platform-independent way. For this reason, ``CPUFreq`` allows scaling drivers 78 77 to bypass the governor layer and implement their own performance scaling 79 - algorithms. That is done by the ``intel_pstate`` scaling driver. 78 + algorithms. That is done by the |intel_pstate| scaling driver. 80 79 81 80 82 81 ``CPUFreq`` Policy Objects ··· 175 174 into account. That is achieved by invoking the governor's ``->stop`` and 176 175 ``->start()`` callbacks, in this order, for the entire policy. 177 176 178 - As mentioned before, the ``intel_pstate`` scaling driver bypasses the scaling 177 + As mentioned before, the |intel_pstate| scaling driver bypasses the scaling 179 178 governor layer of ``CPUFreq`` and provides its own P-state selection algorithms. 180 - Consequently, if ``intel_pstate`` is used, scaling governors are not attached to 179 + Consequently, if |intel_pstate| is used, scaling governors are not attached to 181 180 new policy objects. Instead, the driver's ``->setpolicy()`` callback is invoked 182 181 to register per-CPU utilization update callbacks for each policy. These 183 182 callbacks are invoked by the CPU scheduler in the same way as for scaling 184 - governors, but in the ``intel_pstate`` case they both determine the P-state to 183 + governors, but in the |intel_pstate| case they both determine the P-state to 185 184 use and change the hardware configuration accordingly in one go from scheduler 186 185 context. 
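Scaling governors are attached per policy through ``sysfs``. As a minimal sketch of the attach-time validation this implies (the attribute names come from this document; the list of available governors below is a made-up example):

```shell
# Sketch: a value written to the per-policy scaling_governor attribute is
# accepted only if it appears in scaling_available_governors. The governor
# list here is a made-up example, not read from a real system.
available="performance powersave"   # stand-in for scaling_available_governors
request="powersave"                 # value to be written to scaling_governor
case " $available " in
  *" $request "*) echo "accepted: $request" ;;
  *)              echo "rejected: $request" ;;
esac
```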
187 186 ··· 258 257 259 258 ``scaling_available_governors`` 260 259 List of ``CPUFreq`` scaling governors present in the kernel that can 261 - be attached to this policy or (if the ``intel_pstate`` scaling driver is 260 + be attached to this policy or (if the |intel_pstate| scaling driver is 262 261 in use) list of scaling algorithms provided by the driver that can be 263 262 applied to this policy. 264 263 ··· 275 274 the CPU is actually running at (due to hardware design and other 276 275 limitations). 277 276 278 - Some scaling drivers (e.g. ``intel_pstate``) attempt to provide 277 + Some scaling drivers (e.g. |intel_pstate|) attempt to provide 279 278 information more precisely reflecting the current CPU frequency through 280 279 this attribute, but that still may not be the exact current CPU 281 280 frequency as seen by the hardware at the moment. ··· 285 284 286 285 ``scaling_governor`` 287 286 The scaling governor currently attached to this policy or (if the 288 - ``intel_pstate`` scaling driver is in use) the scaling algorithm 287 + |intel_pstate| scaling driver is in use) the scaling algorithm 289 288 provided by the driver that is currently applied to this policy. 290 289 291 290 This attribute is read-write and writing to it will cause a new scaling 292 291 governor to be attached to this policy or a new scaling algorithm 293 292 provided by the scaling driver to be applied to it (in the 294 - ``intel_pstate`` case), as indicated by the string written to this 293 + |intel_pstate| case), as indicated by the string written to this 295 294 attribute (which must be one of the names listed by the 296 295 ``scaling_available_governors`` attribute described above). 297 296 ··· 620 619 the "boost" setting for the whole system. It is not present if the underlying 621 620 scaling driver does not support the frequency boost mechanism (or supports it, 622 621 but provides a driver-specific interface for controlling it, like 623 - ``intel_pstate``). 
622 + |intel_pstate|). 624 623 625 624 If the value in this file is 1, the frequency boost mechanism is enabled. This 626 625 means that either the hardware can be put into states in which it is able to
+1
Documentation/admin-guide/pm/index.rst
··· 6 6 :maxdepth: 2 7 7 8 8 cpufreq 9 + intel_pstate 9 10 10 11 .. only:: subproject and html 11 12
+755
Documentation/admin-guide/pm/intel_pstate.rst
··· 1 + =============================================== 2 + ``intel_pstate`` CPU Performance Scaling Driver 3 + =============================================== 4 + 5 + :: 6 + 7 + Copyright (c) 2017 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com> 8 + 9 + 10 + General Information 11 + =================== 12 + 13 + ``intel_pstate`` is a part of the 14 + :doc:`CPU performance scaling subsystem <cpufreq>` in the Linux kernel 15 + (``CPUFreq``). It is a scaling driver for the Sandy Bridge and later 16 + generations of Intel processors. Note, however, that some of those processors 17 + may not be supported. [To understand ``intel_pstate`` it is necessary to know 18 + how ``CPUFreq`` works in general, so this is the time to read :doc:`cpufreq` if 19 + you have not done that yet.] 20 + 21 + For the processors supported by ``intel_pstate``, the P-state concept is broader 22 + than just an operating frequency or an operating performance point (see the 23 + `LinuxCon Europe 2015 presentation by Kristen Accardi <LCEU2015_>`_ for more 24 + information about that). For this reason, the representation of P-states used 25 + by ``intel_pstate`` internally follows the hardware specification (for details 26 + refer to `Intel® 64 and IA-32 Architectures Software Developer’s Manual 27 + Volume 3: System Programming Guide <SDM_>`_). However, the ``CPUFreq`` core 28 + uses frequencies for identifying operating performance points of CPUs and 29 + frequencies are involved in the user space interface exposed by it, so 30 + ``intel_pstate`` maps its internal representation of P-states to frequencies too 31 + (fortunately, that mapping is unambiguous). At the same time, it would not be 32 + practical for ``intel_pstate`` to supply the ``CPUFreq`` core with a table of 33 + available frequencies due to the possible size of it, so the driver does not do 34 + that. Some functionality of the core is limited by that. 
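The unambiguous P-state-to-frequency mapping mentioned above can be sketched as follows; the 100 MHz-per-P-state-unit multiplier used here is an assumption for illustration (it holds on many Core processors, but is not stated in the text):

```shell
# Sketch, assuming one P-state unit corresponds to 100 MHz (100000 kHz);
# the multiplier is an assumption, not taken from the document.
pstate_to_khz() { echo $(( $1 * 100000 )); }
khz_to_pstate() { echo $(( $1 / 100000 )); }
pstate_to_khz 28        # P-state 28 maps to 2800000 kHz (2.8 GHz)
khz_to_pstate 2800000   # and the mapping back is unambiguous
```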
35 + 36 + Since the hardware P-state selection interface used by ``intel_pstate`` is 37 + available at the logical CPU level, the driver always works with individual 38 + CPUs. Consequently, if ``intel_pstate`` is in use, every ``CPUFreq`` policy 39 + object corresponds to one logical CPU and ``CPUFreq`` policies are effectively 40 + equivalent to CPUs. In particular, this means that they become "inactive" every 41 + time the corresponding CPU is taken offline and need to be re-initialized when 42 + it goes back online. 43 + 44 + ``intel_pstate`` is not modular, so it cannot be unloaded, which means that the 45 + only way to pass early-configuration-time parameters to it is via the kernel 46 + command line. However, its configuration can be adjusted via ``sysfs`` to a 47 + great extent. In some configurations it even is possible to unregister it via 48 + ``sysfs`` which allows another ``CPUFreq`` scaling driver to be loaded and 49 + registered (see `below <status_attr_>`_). 50 + 51 + 52 + Operation Modes 53 + =============== 54 + 55 + ``intel_pstate`` can operate in three different modes: in the active mode with 56 + or without hardware-managed P-states support and in the passive mode. Which of 57 + them will be in effect depends on what kernel command line options are used and 58 + on the capabilities of the processor. 59 + 60 + Active Mode 61 + ----------- 62 + 63 + This is the default operation mode of ``intel_pstate``. If it works in this 64 + mode, the ``scaling_driver`` policy attribute in ``sysfs`` for all ``CPUFreq`` 65 + policies contains the string "intel_pstate". 66 + 67 + In this mode the driver bypasses the scaling governors layer of ``CPUFreq`` and 68 + provides its own scaling algorithms for P-state selection. Those algorithms 69 + can be applied to ``CPUFreq`` policies in the same way as generic scaling 70 + governors (that is, through the ``scaling_governor`` policy attribute in 71 + ``sysfs``). 
[Note that different P-state selection algorithms may be chosen for 72 + different policies, but that is not recommended.] 73 + 74 + They are not generic scaling governors, but their names are the same as the 75 + names of some of those governors. Moreover, confusingly enough, they generally 76 + do not work in the same way as the generic governors they share the names with. 77 + For example, the ``powersave`` P-state selection algorithm provided by 78 + ``intel_pstate`` is not a counterpart of the generic ``powersave`` governor 79 + (roughly, it corresponds to the ``schedutil`` and ``ondemand`` governors). 80 + 81 + There are two P-state selection algorithms provided by ``intel_pstate`` in the 82 + active mode: ``powersave`` and ``performance``. The way they both operate 83 + depends on whether or not the hardware-managed P-states (HWP) feature has been 84 + enabled in the processor and possibly on the processor model. 85 + 86 + Which of the P-state selection algorithms is used by default depends on the 87 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option. 88 + Namely, if that option is set, the ``performance`` algorithm will be used by 89 + default, and the other one will be used by default if it is not set. 90 + 91 + Active Mode With HWP 92 + ~~~~~~~~~~~~~~~~~~~~ 93 + 94 + If the processor supports the HWP feature, it will be enabled during the 95 + processor initialization and cannot be disabled after that. It is possible 96 + to avoid enabling it by passing the ``intel_pstate=no_hwp`` argument to the 97 + kernel in the command line. 98 + 99 + If the HWP feature has been enabled, ``intel_pstate`` relies on the processor to 100 + select P-states by itself, but still it can give hints to the processor's 101 + internal P-state selection logic. What those hints are depends on which P-state 102 + selection algorithm has been applied to the given policy (or to the CPU it 103 + corresponds to). 
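As described later in this document, such hints are ultimately translated to integer values written to the processor's Energy-Performance Preference (EPP) or Energy-Performance Bias (EPB) knob. A hedged sketch of that translation; the specific integer values below are assumptions used for illustration (lower means more performance-focused):

```shell
# Hypothetical hint-string-to-EPP translation table; only the existence of
# the EPP/EPB knobs comes from the text, the numeric values are assumed
# (0 = full performance focus, 255 = full energy focus).
epp_value() {
  case "$1" in
    performance)         echo 0   ;;
    balance_performance) echo 128 ;;
    balance_power)       echo 192 ;;
    power)               echo 255 ;;
  esac
}
epp_value performance
```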
104 + 105 + Even though the P-state selection is carried out by the processor automatically, 106 + ``intel_pstate`` registers utilization update callbacks with the CPU scheduler 107 + in this mode. However, they are not used for running a P-state selection 108 + algorithm, but for periodic updates of the current CPU frequency information to 109 + be made available from the ``scaling_cur_freq`` policy attribute in ``sysfs``. 110 + 111 + HWP + ``performance`` 112 + ..................... 113 + 114 + In this configuration ``intel_pstate`` will write 0 to the processor's 115 + Energy-Performance Preference (EPP) knob (if supported) or its 116 + Energy-Performance Bias (EPB) knob (otherwise), which means that the processor's 117 + internal P-state selection logic is expected to focus entirely on performance. 118 + 119 + This will override the EPP/EPB setting coming from the ``sysfs`` interface 120 + (see `Energy vs Performance Hints`_ below). 121 + 122 + Also, in this configuration the range of P-states available to the processor's 123 + internal P-state selection logic is always restricted to the upper boundary 124 + (that is, the maximum P-state that the driver is allowed to use). 125 + 126 + HWP + ``powersave`` 127 + ................... 128 + 129 + In this configuration ``intel_pstate`` will set the processor's 130 + Energy-Performance Preference (EPP) knob (if supported) or its 131 + Energy-Performance Bias (EPB) knob (otherwise) to whatever value it was 132 + previously set to via ``sysfs`` (or whatever default value it was 133 + set to by the platform firmware). This usually causes the processor's 134 + internal P-state selection logic to be less performance-focused. 135 + 136 + Active Mode Without HWP 137 + ~~~~~~~~~~~~~~~~~~~~~~~ 138 + 139 + This is the default operation mode for processors that do not support the HWP 140 + feature. It also is used by default with the ``intel_pstate=no_hwp`` argument 141 + in the kernel command line. 
However, in this mode ``intel_pstate`` may refuse 142 + to work with the given processor if it does not recognize it. [Note that 143 + ``intel_pstate`` will never refuse to work with any processor with the HWP 144 + feature enabled.] 145 + 146 + In this mode ``intel_pstate`` registers utilization update callbacks with the 147 + CPU scheduler in order to run a P-state selection algorithm, either 148 + ``powersave`` or ``performance``, depending on the ``scaling_governor`` policy 149 + setting in ``sysfs``. The current CPU frequency information to be made 150 + available from the ``scaling_cur_freq`` policy attribute in ``sysfs`` is 151 + periodically updated by those utilization update callbacks too. 152 + 153 + ``performance`` 154 + ............... 155 + 156 + Without HWP, this P-state selection algorithm is always the same regardless of 157 + the processor model and platform configuration. 158 + 159 + It selects the maximum P-state it is allowed to use, subject to limits set via 160 + ``sysfs``, every time the P-state selection computations are carried out by the 161 + driver's utilization update callback for the given CPU (that does not happen 162 + more often than every 10 ms), but the hardware configuration will not be changed 163 + if the new P-state is the same as the current one. 164 + 165 + This is the default P-state selection algorithm if the 166 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option 167 + is set. 168 + 169 + ``powersave`` 170 + ............. 171 + 172 + Without HWP, this P-state selection algorithm generally depends on the 173 + processor model and/or the system profile setting in the ACPI tables and there 174 + are two variants of it. 175 + 176 + One of them is used with processors from the Atom line and (regardless of the 177 + processor model) on platforms with the system profile in the ACPI tables set to 178 + "mobile" (laptops mostly), "tablet", "appliance PC", "desktop", or 179 + "workstation". 
It is also used with processors supporting the HWP feature if 180 + that feature has not been enabled (that is, with the ``intel_pstate=no_hwp`` 181 + argument in the kernel command line). It is similar to the algorithm 182 + implemented by the generic ``schedutil`` scaling governor except that the 183 + utilization metric used by it is based on numbers coming from feedback 184 + registers of the CPU. It generally selects P-states proportional to the 185 + current CPU utilization, so it is referred to as the "proportional" algorithm. 186 + 187 + The second variant of the ``powersave`` P-state selection algorithm, used in all 188 + of the other cases (generally, on processors from the Core line, so it is 189 + referred to as the "Core" algorithm), is based on the values read from the APERF 190 + and MPERF feedback registers and the previously requested target P-state. 191 + It does not really take CPU utilization into account explicitly, but as a rule 192 + it causes the CPU P-state to ramp up very quickly in response to increased 193 + utilization which is generally desirable in server environments. 194 + 195 + Regardless of the variant, this algorithm is run by the driver's utilization 196 + update callback for the given CPU when it is invoked by the CPU scheduler, but 197 + not more often than every 10 ms (that can be tweaked via ``debugfs`` in `this 198 + particular case <Tuning Interface in debugfs_>`_). Like in the ``performance`` 199 + case, the hardware configuration is not touched if the new P-state turns out to 200 + be the same as the current one. 201 + 202 + This is the default P-state selection algorithm if the 203 + :c:macro:`CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE` kernel configuration option 204 + is not set. 205 + 206 + Passive Mode 207 + ------------ 208 + 209 + This mode is used if the ``intel_pstate=passive`` argument is passed to the 210 + kernel in the command line (it implies the ``intel_pstate=no_hwp`` setting too). 
211 + Like in the active mode without HWP support, in this mode ``intel_pstate`` may 212 + refuse to work with the given processor if it does not recognize it. 213 + 214 + If the driver works in this mode, the ``scaling_driver`` policy attribute in 215 + ``sysfs`` for all ``CPUFreq`` policies contains the string "intel_cpufreq". 216 + Then, the driver behaves like a regular ``CPUFreq`` scaling driver. That is, 217 + it is invoked by generic scaling governors when necessary to talk to the 218 + hardware in order to change the P-state of a CPU (in particular, the 219 + ``schedutil`` governor can invoke it directly from scheduler context). 220 + 221 + While in this mode, ``intel_pstate`` can be used with all of the (generic) 222 + scaling governors listed by the ``scaling_available_governors`` policy attribute 223 + in ``sysfs`` (and the P-state selection algorithms described above are not 224 + used). Then, it is responsible for the configuration of policy objects 225 + corresponding to CPUs and provides the ``CPUFreq`` core (and the scaling 226 + governors attached to the policy objects) with accurate information on the 227 + maximum and minimum operating frequencies supported by the hardware (including 228 + the so-called "turbo" frequency ranges). In other words, in the passive mode 229 + the entire range of available P-states is exposed by ``intel_pstate`` to the 230 + ``CPUFreq`` core. However, in this mode the driver does not register 231 + utilization update callbacks with the CPU scheduler and the ``scaling_cur_freq`` 232 + information comes from the ``CPUFreq`` core (and is the last frequency selected 233 + by the current scaling governor for the given policy). 234 + 235 + 236 + .. 
_turbo: 237 + 238 + Turbo P-states Support 239 + ====================== 240 + 241 + In the majority of cases, the entire range of P-states available to 242 + ``intel_pstate`` can be divided into two sub-ranges that correspond to 243 + different types of processor behavior, above and below a boundary that 244 + will be referred to as the "turbo threshold" in what follows. 245 + 246 + The P-states above the turbo threshold are referred to as "turbo P-states" and 247 + the whole sub-range of P-states they belong to is referred to as the "turbo 248 + range". These names are related to the Turbo Boost technology allowing a 249 + multicore processor to opportunistically increase the P-state of one or more 250 + cores if there is enough power to do that and if that is not going to cause the 251 + thermal envelope of the processor package to be exceeded. 252 + 253 + Specifically, if software sets the P-state of a CPU core within the turbo range 254 + (that is, above the turbo threshold), the processor is permitted to take over 255 + performance scaling control for that core and put it into turbo P-states of its 256 + choice going forward. However, that permission is interpreted differently by 257 + different processor generations. Namely, the Sandy Bridge generation of 258 + processors will never use any P-states above the last one set by software for 259 + the given core, even if it is within the turbo range, whereas all of the later 260 + processor generations will take it as a license to use any P-states from the 261 + turbo range, even above the one set by software. In other words, on those 262 + processors setting any P-state from the turbo range will enable the processor 263 + to put the given core into all turbo P-states up to and including the maximum 264 + supported one as it sees fit. 265 + 266 + One important property of turbo P-states is that they are not sustainable. 
More 267 + precisely, there is no guarantee that any CPUs will be able to stay in any of 268 + those states indefinitely, because the power distribution within the processor 269 + package may change over time or the thermal envelope it was designed for might 270 + be exceeded if a turbo P-state was used for too long. 271 + 272 + In turn, the P-states below the turbo threshold generally are sustainable. In 273 + fact, if one of them is set by software, the processor is not expected to change 274 + it to a lower one unless in a thermal stress or a power limit violation 275 + situation (a higher P-state may still be used if it is set for another CPU in 276 + the same package at the same time, for example). 277 + 278 + Some processors allow multiple cores to be in turbo P-states at the same time, 279 + but the maximum P-state that can be set for them generally depends on the number 280 + of cores running concurrently. The maximum turbo P-state that can be set for 3 281 + cores at the same time usually is lower than the analogous maximum P-state for 282 + 2 cores, which in turn usually is lower than the maximum turbo P-state that can 283 + be set for 1 core. The one-core maximum turbo P-state is thus the maximum 284 + supported one overall. 285 + 286 + The maximum supported turbo P-state, the turbo threshold (the maximum supported 287 + non-turbo P-state) and the minimum supported P-state are specific to the 288 + processor model and can be determined by reading the processor's model-specific 289 + registers (MSRs). Moreover, some processors support the Configurable TDP 290 + (Thermal Design Power) feature and, when that feature is enabled, the turbo 291 + threshold effectively becomes a configurable value that can be set by the 292 + platform firmware. 
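As a concrete, hedged illustration of reading such limits from a model-specific register: on many recent Intel processors MSR_PLATFORM_INFO (0xCE) carries the maximum non-turbo ratio in bits 15:8 and the maximum-efficiency (minimum) ratio in bits 47:40. That bit layout is taken from the SDM referenced earlier, not from this document, and the raw value below is made up:

```shell
# Field extraction from a hypothetical raw MSR_PLATFORM_INFO (0xCE) value.
# Bit layout (15:8 = max non-turbo ratio, 47:40 = max efficiency ratio) is
# an assumption based on the Intel SDM; the value itself is made up.
msr_platform_info=0x0000080000002400
max_nonturbo_ratio=$(( (msr_platform_info >> 8) & 0xff ))    # 0x24 = 36
max_efficiency_ratio=$(( (msr_platform_info >> 40) & 0xff )) # 0x08 = 8
echo "max non-turbo ratio: $max_nonturbo_ratio, min ratio: $max_efficiency_ratio"
```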
293 + 294 + Unlike ``_PSS`` objects in the ACPI tables, ``intel_pstate`` always exposes 295 + the entire range of available P-states, including the whole turbo range, to the 296 + ``CPUFreq`` core and (in the passive mode) to generic scaling governors. This 297 + generally causes turbo P-states to be set more often when ``intel_pstate`` is 298 + used relative to ACPI-based CPU performance scaling (see `below <acpi-cpufreq_>`_ 299 + for more information). 300 + 301 + Moreover, since ``intel_pstate`` always knows what the real turbo threshold is 302 + (even if the Configurable TDP feature is enabled in the processor), its 303 + ``no_turbo`` attribute in ``sysfs`` (described `below <no_turbo_attr_>`_) should 304 + work as expected in all cases (that is, if set to disable turbo P-states, it 305 + always should prevent ``intel_pstate`` from using them). 306 + 307 + 308 + Processor Support 309 + ================= 310 + 311 + To handle a given processor ``intel_pstate`` requires a number of different 312 + pieces of information on it to be known, including: 313 + 314 + * The minimum supported P-state. 315 + 316 + * The maximum supported `non-turbo P-state <turbo_>`_. 317 + 318 + * Whether or not turbo P-states are supported at all. 319 + 320 + * The maximum supported `one-core turbo P-state <turbo_>`_ (if turbo P-states 321 + are supported). 322 + 323 + * The scaling formula to translate the driver's internal representation 324 + of P-states into frequencies and the other way around. 325 + 326 + Generally, ways to obtain that information are specific to the processor model 327 + or family. Although it often is possible to obtain all of it from the processor 328 + itself (using model-specific registers), there are cases in which hardware 329 + manuals need to be consulted to get to it too. 
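A made-up instance of the per-model information listed above, together with two values the driver could derive from it (the derivations are illustrative, not the driver's actual code):

```shell
# Hypothetical per-model data: min P-state, max non-turbo P-state (the turbo
# threshold), max one-core turbo P-state, and whether turbo is supported.
min_pstate=8; max_nonturbo_pstate=28; max_turbo_pstate=39; turbo_supported=1
num_pstates=$(( max_turbo_pstate - min_pstate + 1 ))       # size of the range
turbo_range=$(( max_turbo_pstate - max_nonturbo_pstate ))  # turbo P-states
echo "$num_pstates P-states in total, $turbo_range of them in the turbo range"
```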
330 + 331 + For this reason, there is a list of supported processors in ``intel_pstate`` and 332 + the driver initialization will fail if the detected processor is not in that 333 + list, unless it supports the `HWP feature <Active Mode_>`_. [The interface to 334 + obtain all of the information listed above is the same for all of the processors 335 + supporting the HWP feature, which is why they all are supported by 336 + ``intel_pstate``.] 337 + 338 + 339 + User Space Interface in ``sysfs`` 340 + ================================= 341 + 342 + Global Attributes 343 + ----------------- 344 + 345 + ``intel_pstate`` exposes several global attributes (files) in ``sysfs`` to 346 + control its functionality at the system level. They are located in the 347 + ``/sys/devices/system/cpu/cpufreq/intel_pstate/`` directory and affect all 348 + CPUs. 349 + 350 + Some of them are not present if the ``intel_pstate=per_cpu_perf_limits`` 351 + argument is passed to the kernel in the command line. 352 + 353 + ``max_perf_pct`` 354 + Maximum P-state the driver is allowed to set in percent of the 355 + maximum supported performance level (the highest supported `turbo 356 + P-state <turbo_>`_). 357 + 358 + This attribute will not be exposed if the 359 + ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel 360 + command line. 361 + 362 + ``min_perf_pct`` 363 + Minimum P-state the driver is allowed to set in percent of the 364 + maximum supported performance level (the highest supported `turbo 365 + P-state <turbo_>`_). 366 + 367 + This attribute will not be exposed if the 368 + ``intel_pstate=per_cpu_perf_limits`` argument is present in the kernel 369 + command line. 370 + 371 + ``num_pstates`` 372 + Number of P-states supported by the processor (between 0 and 255 373 + inclusive) including both turbo and non-turbo P-states (see 374 + `Turbo P-states Support`_). 
375 + 376 + The value of this attribute is not affected by the ``no_turbo`` 377 + setting described `below <no_turbo_attr_>`_. 378 + 379 + This attribute is read-only. 380 + 381 + ``turbo_pct`` 382 + Ratio of the `turbo range <turbo_>`_ size to the size of the entire 383 + range of supported P-states, in percent. 384 + 385 + This attribute is read-only. 386 + 387 + .. _no_turbo_attr: 388 + 389 + ``no_turbo`` 390 + If set (equal to 1), the driver is not allowed to set any turbo P-states 391 + (see `Turbo P-states Support`_). If unset (equal to 0, which is the 392 + default), turbo P-states can be set by the driver. 393 + [Note that ``intel_pstate`` does not support the general ``boost`` 394 + attribute (supported by some other scaling drivers) which is replaced 395 + by this one.] 396 + 397 + This attribute does not affect the maximum supported frequency value 398 + supplied to the ``CPUFreq`` core and exposed via the policy interface, 399 + but it affects the maximum possible value of per-policy P-state limits 400 + (see `Interpretation of Policy Attributes`_ below for details). 401 + 402 + .. _status_attr: 403 + 404 + ``status`` 405 + Operation mode of the driver: "active", "passive" or "off". 406 + 407 + "active" 408 + The driver is functional and in the `active mode 409 + <Active Mode_>`_. 410 + 411 + "passive" 412 + The driver is functional and in the `passive mode 413 + <Passive Mode_>`_. 414 + 415 + "off" 416 + The driver is not functional (it is not registered as a scaling 417 + driver with the ``CPUFreq`` core). 418 + 419 + This attribute can be written to in order to change the driver's 420 + operation mode or to unregister it. The string written to it must be 421 + one of the possible values of it and, if successful, the write will 422 + cause the driver to switch over to the operation mode represented by 423 + that string - or to be unregistered in the "off" case. 
[Actually, 424 + switching over from the active mode to the passive mode or the other 425 + way around causes the driver to be unregistered and registered again 426 + with a different set of callbacks, so all of its settings (the global 427 + as well as the per-policy ones) are then reset to their default 428 + values, possibly depending on the target operation mode.] 429 + 430 + That only is supported in some configurations, though (for example, if 431 + the `HWP feature is enabled in the processor <Active Mode With HWP_>`_, 432 + the operation mode of the driver cannot be changed), and if it is not 433 + supported in the current configuration, writes to this attribute will 434 + fail with an appropriate error. 435 + 436 + Interpretation of Policy Attributes 437 + ----------------------------------- 438 + 439 + The interpretation of some ``CPUFreq`` policy attributes described in 440 + :doc:`cpufreq` is special with ``intel_pstate`` as the current scaling driver 441 + and it generally depends on the driver's `operation mode <Operation Modes_>`_. 442 + 443 + First of all, the values of the ``cpuinfo_max_freq``, ``cpuinfo_min_freq`` and 444 + ``scaling_cur_freq`` attributes are produced by applying a processor-specific 445 + multiplier to the internal P-state representation used by ``intel_pstate``. 446 + Also, the values of the ``scaling_max_freq`` and ``scaling_min_freq`` 447 + attributes are capped by the frequency corresponding to the maximum P-state that 448 + the driver is allowed to set. 449 + 450 + If the ``no_turbo`` `global attribute <no_turbo_attr_>`_ is set, the driver is 451 + not allowed to use turbo P-states, so the maximum value of ``scaling_max_freq`` 452 + and ``scaling_min_freq`` is limited to the maximum non-turbo P-state frequency. 453 + Accordingly, setting ``no_turbo`` causes ``scaling_max_freq`` and 454 + ``scaling_min_freq`` to go down to that value if they were above it before. 
455 + However, the old values of ``scaling_max_freq`` and ``scaling_min_freq`` will be 456 + restored after unsetting ``no_turbo``, unless these attributes have been written 457 + to after ``no_turbo`` was set. 458 + 459 + If ``no_turbo`` is not set, the maximum possible value of ``scaling_max_freq`` 460 + and ``scaling_min_freq`` corresponds to the maximum supported turbo P-state, 461 + which also is the value of ``cpuinfo_max_freq`` in either case. 462 + 463 + Next, the following policy attributes have special meaning if 464 + ``intel_pstate`` works in the `active mode <Active Mode_>`_: 465 + 466 + ``scaling_available_governors`` 467 + List of P-state selection algorithms provided by ``intel_pstate``. 468 + 469 + ``scaling_governor`` 470 + P-state selection algorithm provided by ``intel_pstate`` currently in 471 + use with the given policy. 472 + 473 + ``scaling_cur_freq`` 474 + Frequency of the average P-state of the CPU represented by the given 475 + policy for the time interval between the last two invocations of the 476 + driver's utilization update callback by the CPU scheduler for that CPU. 477 + 478 + The meaning of these attributes in the `passive mode <Passive Mode_>`_ is the 479 + same as for other scaling drivers. 480 + 481 + Additionally, the value of the ``scaling_driver`` attribute for ``intel_pstate`` 482 + depends on the operation mode of the driver. Namely, it is either 483 + "intel_pstate" (in the `active mode <Active Mode_>`_) or "intel_cpufreq" (in the 484 + `passive mode <Passive Mode_>`_). 485 + 486 + Coordination of P-State Limits 487 + ------------------------------ 488 + 489 + ``intel_pstate`` allows P-state limits to be set in two ways: with the help of 490 + the ``max_perf_pct`` and ``min_perf_pct`` `global attributes 491 + <Global Attributes_>`_ or via the ``scaling_max_freq`` and ``scaling_min_freq`` 492 + ``CPUFreq`` policy attributes. 
The coordination between those limits is based 493 + on the following rules, regardless of the current operation mode of the driver: 494 + 495 + 1. All CPUs are affected by the global limits (that is, none of them can be 496 + requested to run faster than the global maximum and none of them can be 497 + requested to run slower than the global minimum). 498 + 499 + 2. Each individual CPU is affected by its own per-policy limits (that is, it 500 + cannot be requested to run faster than its own per-policy maximum and it 501 + cannot be requested to run slower than its own per-policy minimum). 502 + 503 + 3. The global and per-policy limits can be set independently. 504 + 505 + If the `HWP feature is enabled in the processor <Active Mode With HWP_>`_, the 506 + resulting effective values are written into its registers whenever the limits 507 + change in order to request its internal P-state selection logic to always set 508 + P-states within these limits. Otherwise, the limits are taken into account by 509 + scaling governors (in the `passive mode <Passive Mode_>`_) and by the driver 510 + every time before setting a new P-state for a CPU. 511 + 512 + Additionally, if the ``intel_pstate=per_cpu_perf_limits`` command line argument 513 + is passed to the kernel, ``max_perf_pct`` and ``min_perf_pct`` are not exposed 514 + at all and the only way to set the limits is by using the policy attributes. 515 + 516 + 517 + Energy vs Performance Hints 518 + --------------------------- 519 + 520 + If ``intel_pstate`` works in the `active mode with the HWP feature enabled 521 + <Active Mode With HWP_>`_ in the processor, additional attributes are present 522 + in every ``CPUFreq`` policy directory in ``sysfs``. 
They are intended to allow 523 + user space to help ``intel_pstate`` to adjust the processor's internal P-state 524 + selection logic by focusing it on performance or on energy-efficiency, or 525 + somewhere between the two extremes: 526 + 527 + ``energy_performance_preference`` 528 + Current value of the energy vs performance hint for the given policy 529 + (or the CPU represented by it). 530 + 531 + The hint can be changed by writing to this attribute. 532 + 533 + ``energy_performance_available_preferences`` 534 + List of strings that can be written to the 535 + ``energy_performance_preference`` attribute. 536 + 537 + They represent different energy vs performance hints and should be 538 + self-explanatory, except that ``default`` represents whatever hint 539 + value was set by the platform firmware. 540 + 541 + Strings written to the ``energy_performance_preference`` attribute are 542 + internally translated to integer values written to the processor's 543 + Energy-Performance Preference (EPP) knob (if supported) or its 544 + Energy-Performance Bias (EPB) knob. 545 + 546 + [Note that tasks may be migrated from one CPU to another by the scheduler's 547 + load-balancing algorithm and if different energy vs performance hints are 548 + set for those CPUs, that may lead to undesirable outcomes. To avoid such 549 + issues it is better to set the same energy vs performance hint for all CPUs 550 + or to pin every task potentially sensitive to them to a specific CPU.] 551 + 552 + .. _acpi-cpufreq: 553 + 554 + ``intel_pstate`` vs ``acpi-cpufreq`` 555 + ==================================== 556 + 557 + On the majority of systems supported by ``intel_pstate``, the ACPI tables 558 + provided by the platform firmware contain ``_PSS`` objects returning information 559 + that can be used for CPU performance scaling (refer to the `ACPI specification`_ 560 + for details on the ``_PSS`` objects and the format of the information returned 561 + by them). 
562 + 563 + The information returned by the ACPI ``_PSS`` objects is used by the 564 + ``acpi-cpufreq`` scaling driver. On systems supported by ``intel_pstate`` 565 + the ``acpi-cpufreq`` driver uses the same hardware CPU performance scaling 566 + interface as ``intel_pstate``, but the set of P-states it can use is limited by 567 + the ``_PSS`` output. 568 + 569 + On those systems each ``_PSS`` object returns a list of P-states supported by 570 + the corresponding CPU which basically is a subset of the P-states range that can 571 + be used by ``intel_pstate`` on the same system, with one exception: the whole 572 + `turbo range <turbo_>`_ is represented by one item in it (the topmost one). By 573 + convention, the frequency returned by ``_PSS`` for that item is greater by 1 MHz 574 + than the frequency of the highest non-turbo P-state listed by it, but the 575 + corresponding P-state representation (following the hardware specification) 576 + returned for it matches the maximum supported turbo P-state (or is the 577 + special value 255 meaning essentially "go as high as you can get"). 578 + 579 + The list of P-states returned by ``_PSS`` is reflected by the table of 580 + available frequencies supplied by ``acpi-cpufreq`` to the ``CPUFreq`` core and 581 + scaling governors and the minimum and maximum supported frequencies reported by 582 + it come from that list as well. In particular, given the special representation 583 + of the turbo range described above, this means that the maximum supported 584 + frequency reported by ``acpi-cpufreq`` is higher by 1 MHz than the frequency 585 + of the highest supported non-turbo P-state listed by ``_PSS`` which, of course, 586 + affects decisions made by the scaling governors, except for ``powersave`` and 587 + ``performance``. 
588 + 589 + For example, if a given governor attempts to select a frequency proportional to 590 + estimated CPU load and maps the load of 100% to the maximum supported frequency 591 + (possibly multiplied by a constant), then it will tend to choose P-states below 592 + the turbo threshold if ``acpi-cpufreq`` is used as the scaling driver, because 593 + in that case the turbo range corresponds to a small fraction of the frequency 594 + band it can use (1 MHz vs 1 GHz or more). In consequence, it will only go to 595 + the turbo range for the highest loads and the other loads above 50% that might 596 + benefit from running at turbo frequencies will be given non-turbo P-states 597 + instead. 598 + 599 + One more issue related to that may appear on systems supporting the 600 + `Configurable TDP feature <turbo_>`_ allowing the platform firmware to set the 601 + turbo threshold. Namely, if that is not coordinated with the lists of P-states 602 + returned by ``_PSS`` properly, there may be more than one item corresponding to 603 + a turbo P-state in those lists and there may be a problem with avoiding the 604 + turbo range (if desirable or necessary). Usually, to avoid using turbo 605 + P-states overall, ``acpi-cpufreq`` simply avoids using the topmost state listed 606 + by ``_PSS``, but that is not sufficient when there are other turbo P-states in 607 + the list returned by it. 608 + 609 + Apart from the above, ``acpi-cpufreq`` works like ``intel_pstate`` in the 610 + `passive mode <Passive Mode_>`_, except that the number of P-states it can set 611 + is limited to the ones listed by the ACPI ``_PSS`` objects. 612 + 613 + 614 + Kernel Command Line Options for ``intel_pstate`` 615 + ================================================ 616 + 617 + Several kernel command line options can be used to pass early-configuration-time 618 + parameters to ``intel_pstate`` in order to enforce specific behavior of it. 
All 619 + of them have to be prepended with the ``intel_pstate=`` prefix. 620 + 621 + ``disable`` 622 + Do not register ``intel_pstate`` as the scaling driver even if the 623 + processor is supported by it. 624 + 625 + ``passive`` 626 + Register ``intel_pstate`` in the `passive mode <Passive Mode_>`_ to 627 + start with. 628 + 629 + This option implies the ``no_hwp`` one described below. 630 + 631 + ``force`` 632 + Register ``intel_pstate`` as the scaling driver instead of 633 + ``acpi-cpufreq`` even if the latter is preferred on the given system. 634 + 635 + This may prevent some platform features (such as thermal controls and 636 + power capping) that rely on the availability of ACPI P-states 637 + information from functioning as expected, so it should be used with 638 + caution. 639 + 640 + This option does not work with processors that are not supported by 641 + ``intel_pstate`` and on platforms where the ``pcc-cpufreq`` scaling 642 + driver is used instead of ``acpi-cpufreq``. 643 + 644 + ``no_hwp`` 645 + Do not enable the `hardware-managed P-states (HWP) feature 646 + <Active Mode With HWP_>`_ even if it is supported by the processor. 647 + 648 + ``hwp_only`` 649 + Register ``intel_pstate`` as the scaling driver only if the 650 + `hardware-managed P-states (HWP) feature <Active Mode With HWP_>`_ is 651 + supported by the processor. 652 + 653 + ``support_acpi_ppc`` 654 + Take ACPI ``_PPC`` performance limits into account. 655 + 656 + If the preferred power management profile in the FADT (Fixed ACPI 657 + Description Table) is set to "Enterprise Server" or "Performance 658 + Server", the ACPI ``_PPC`` limits are taken into account by default 659 + and this option has no effect. 660 + 661 + ``per_cpu_perf_limits`` 662 + Use per-logical-CPU P-State limits (see `Coordination of P-state 663 + Limits`_ for details). 
664 + 665 + 666 + Diagnostics and Tuning 667 + ====================== 668 + 669 + Trace Events 670 + ------------ 671 + 672 + There are two static trace events that can be used for ``intel_pstate`` 673 + diagnostics. One of them is the ``cpu_frequency`` trace event generally used 674 + by ``CPUFreq``, and the other one is the ``pstate_sample`` trace event specific 675 + to ``intel_pstate``. Both of them are triggered by ``intel_pstate`` only if 676 + it works in the `active mode <Active Mode_>`_. 677 + 678 + The following sequence of shell commands can be used to enable them and see 679 + their output (if the kernel is generally configured to support event tracing):: 680 + 681 + # cd /sys/kernel/debug/tracing/ 682 + # echo 1 > events/power/pstate_sample/enable 683 + # echo 1 > events/power/cpu_frequency/enable 684 + # cat trace 685 + gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107 scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618 freq=2474476 686 + cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2 687 + 688 + If ``intel_pstate`` works in the `passive mode <Passive Mode_>`_, the 689 + ``cpu_frequency`` trace event will be triggered either by the ``schedutil`` 690 + scaling governor (for the policies it is attached to), or by the ``CPUFreq`` 691 + core (for the policies with other scaling governors). 692 + 693 + ``ftrace`` 694 + ---------- 695 + 696 + The ``ftrace`` interface can be used for low-level diagnostics of 697 + ``intel_pstate``. For example, to check how often the function to set a 698 + P-state is called, the ``ftrace`` filter can be set to 699 + :c:func:`intel_pstate_set_pstate`:: 700 + 701 + # cd /sys/kernel/debug/tracing/ 702 + # cat available_filter_functions | grep -i pstate 703 + intel_pstate_set_pstate 704 + intel_pstate_cpu_init 705 + ... 
706 + # echo intel_pstate_set_pstate > set_ftrace_filter 707 + # echo function > current_tracer 708 + # cat trace | head -15 709 + # tracer: function 710 + # 711 + # entries-in-buffer/entries-written: 80/80 #P:4 712 + # 713 + # _-----=> irqs-off 714 + # / _----=> need-resched 715 + # | / _---=> hardirq/softirq 716 + # || / _--=> preempt-depth 717 + # ||| / delay 718 + # TASK-PID CPU# |||| TIMESTAMP FUNCTION 719 + # | | | |||| | | 720 + Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func 721 + gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func 722 + gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func 723 + <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func 724 + 725 + Tuning Interface in ``debugfs`` 726 + ------------------------------- 727 + 728 + The ``powersave`` algorithm provided by ``intel_pstate`` for `the Core line of 729 + processors in the active mode <powersave_>`_ is based on a `PID controller`_ 730 + whose parameters were chosen to address a number of different use cases at the 731 + same time. However, it still is possible to fine-tune it to a specific workload 732 + and the ``debugfs`` interface under ``/sys/kernel/debug/pstate_snb/`` is 733 + provided for this purpose. [Note that the ``pstate_snb`` directory will be 734 + present only if the specific P-state selection algorithm matching the interface 735 + in it actually is in use.] 
736 + 737 + The following files present in that directory can be used to modify the PID 738 + controller parameters at run time: 739 + 740 + | ``deadband`` 741 + | ``d_gain_pct`` 742 + | ``i_gain_pct`` 743 + | ``p_gain_pct`` 744 + | ``sample_rate_ms`` 745 + | ``setpoint`` 746 + 747 + Note, however, that achieving desirable results this way generally requires 748 + expert-level understanding of the power vs performance tradeoff, so extra care 749 + is recommended when attempting to do that. 750 + 751 + 752 + .. _LCEU2015: http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf 753 + .. _SDM: http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-system-programming-manual-325384.html 754 + .. _ACPI specification: http://www.uefi.org/sites/default/files/resources/ACPI_6_1.pdf 755 + .. _PID controller: https://en.wikipedia.org/wiki/PID_controller
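[Editor's note on the PID tuning above: the effect of the proportional gain can be sketched numerically. The removed intel-pstate.txt later in this patch gives the proportional-only relation, next P-state = current - ((setpoint - load) * p_gain_pct / 100), with the Core-processor defaults setpoint=97 and p_gain_pct=20. The shell arithmetic below is only an illustration of that formula, not driver code; the scaling-by-ten and the rounding are approximations introduced here.]

```shell
# Proportional-only step (d_gain_pct = i_gain_pct = 0), using the defaults
# from the removed intel-pstate.txt: setpoint=97, p_gain_pct=20.
setpoint=97; p_gain_pct=20; load=100; pstate=8

# Work in tenths so integer arithmetic keeps one decimal place:
# 8.0 - ((97 - 100) * 0.2) = 8.6, which rounds to 9.
next10=$(( pstate * 10 - (setpoint - load) * p_gain_pct / 10 ))
echo $(( (next10 + 5) / 10 ))
```

With a sustained load above the setpoint, each sample interval nudges the requested P-state up by one in this way, which is why lowering the setpoint shortens the ramp-up time to the maximum P-state.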
-281
Documentation/cpu-freq/intel-pstate.txt
··· 1 - Intel P-State driver 2 - -------------------- 3 - 4 - This driver provides an interface to control the P-State selection for the 5 - SandyBridge+ Intel processors. 6 - 7 - The following document explains P-States: 8 - http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf 9 - As stated in the document, P-State doesn’t exactly mean a frequency. However, for 10 - the sake of the relationship with cpufreq, P-State and frequency are used 11 - interchangeably. 12 - 13 - Understanding the cpufreq core governors and policies are important before 14 - discussing more details about the Intel P-State driver. Based on what callbacks 15 - a cpufreq driver provides to the cpufreq core, it can support two types of 16 - drivers: 17 - - with target_index() callback: In this mode, the drivers using cpufreq core 18 - simply provide the minimum and maximum frequency limits and an additional 19 - interface target_index() to set the current frequency. The cpufreq subsystem 20 - has a number of scaling governors ("performance", "powersave", "ondemand", 21 - etc.). Depending on which governor is in use, cpufreq core will call for 22 - transitions to a specific frequency using target_index() callback. 23 - - setpolicy() callback: In this mode, drivers do not provide target_index() 24 - callback, so cpufreq core can't request a transition to a specific frequency. 25 - The driver provides minimum and maximum frequency limits and callbacks to set a 26 - policy. The policy in cpufreq sysfs is referred to as the "scaling governor". 27 - The cpufreq core can request the driver to operate in any of the two policies: 28 - "performance" and "powersave". The driver decides which frequency to use based 29 - on the above policy selection considering minimum and maximum frequency limits. 30 - 31 - The Intel P-State driver falls under the latter category, which implements the 32 - setpolicy() callback. 
This driver decides what P-State to use based on the 33 - requested policy from the cpufreq core. If the processor is capable of 34 - selecting its next P-State internally, then the driver will offload this 35 - responsibility to the processor (aka HWP: Hardware P-States). If not, the 36 - driver implements algorithms to select the next P-State. 37 - 38 - Since these policies are implemented in the driver, they are not same as the 39 - cpufreq scaling governors implementation, even if they have the same name in 40 - the cpufreq sysfs (scaling_governors). For example the "performance" policy is 41 - similar to cpufreq’s "performance" governor, but "powersave" is completely 42 - different than the cpufreq "powersave" governor. The strategy here is similar 43 - to cpufreq "ondemand", where the requested P-State is related to the system load. 44 - 45 - Sysfs Interface 46 - 47 - In addition to the frequency-controlling interfaces provided by the cpufreq 48 - core, the driver provides its own sysfs files to control the P-State selection. 49 - These files have been added to /sys/devices/system/cpu/intel_pstate/. 50 - Any changes made to these files are applicable to all CPUs (even in a 51 - multi-package system, Refer to later section on placing "Per-CPU limits"). 52 - 53 - max_perf_pct: Limits the maximum P-State that will be requested by 54 - the driver. It states it as a percentage of the available performance. The 55 - available (P-State) performance may be reduced by the no_turbo 56 - setting described below. 57 - 58 - min_perf_pct: Limits the minimum P-State that will be requested by 59 - the driver. It states it as a percentage of the max (non-turbo) 60 - performance level. 61 - 62 - no_turbo: Limits the driver to selecting P-State below the turbo 63 - frequency range. 64 - 65 - turbo_pct: Displays the percentage of the total performance that 66 - is supported by hardware that is in the turbo range. 
This number 67 - is independent of whether turbo has been disabled or not. 68 - 69 - num_pstates: Displays the number of P-States that are supported 70 - by hardware. This number is independent of whether turbo has 71 - been disabled or not. 72 - 73 - For example, if a system has these parameters: 74 - Max 1 core turbo ratio: 0x21 (Max 1 core ratio is the maximum P-State) 75 - Max non turbo ratio: 0x17 76 - Minimum ratio : 0x08 (Here the ratio is called max efficiency ratio) 77 - 78 - Sysfs will show : 79 - max_perf_pct:100, which corresponds to 1 core ratio 80 - min_perf_pct:24, max_efficiency_ratio / max 1 Core ratio 81 - no_turbo:0, turbo is not disabled 82 - num_pstates:26 = (max 1 Core ratio - Max Efficiency Ratio + 1) 83 - turbo_pct:39 = (max 1 core ratio - max non turbo ratio) / num_pstates 84 - 85 - Refer to "Intel® 64 and IA-32 Architectures Software Developer’s Manual 86 - Volume 3: System Programming Guide" to understand ratios. 87 - 88 - There is one more sysfs attribute in /sys/devices/system/cpu/intel_pstate/ 89 - that can be used for controlling the operation mode of the driver: 90 - 91 - status: Three settings are possible: 92 - "off" - The driver is not in use at this time. 93 - "active" - The driver works as a P-state governor (default). 94 - "passive" - The driver works as a regular cpufreq one and collaborates 95 - with the generic cpufreq governors (it sets P-states as 96 - requested by those governors). 97 - The current setting is returned by reads from this attribute. Writing one 98 - of the above strings to it changes the operation mode as indicated by that 99 - string, if possible. If HW-managed P-states (HWP) are enabled, it is not 100 - possible to change the driver's operation mode and attempts to write to 101 - this attribute will fail. 102 - 103 - cpufreq sysfs for Intel P-State 104 - 105 - Since this driver registers with cpufreq, cpufreq sysfs is also presented. 106 - There are some important differences, which need to be considered. 
107 - 108 - scaling_cur_freq: This displays the real frequency which was used during 109 - the last sample period instead of what is requested. Some other cpufreq driver, 110 - like acpi-cpufreq, displays what is requested (Some changes are on the 111 - way to fix this for acpi-cpufreq driver). The same is true for frequencies 112 - displayed at /proc/cpuinfo. 113 - 114 - scaling_governor: This displays current active policy. Since each CPU has a 115 - cpufreq sysfs, it is possible to set a scaling governor to each CPU. But this 116 - is not possible with Intel P-States, as there is one common policy for all 117 - CPUs. Here, the last requested policy will be applicable to all CPUs. It is 118 - suggested that one use the cpupower utility to change policy to all CPUs at the 119 - same time. 120 - 121 - scaling_setspeed: This attribute can never be used with Intel P-State. 122 - 123 - scaling_max_freq/scaling_min_freq: This interface can be used similarly to 124 - the max_perf_pct/min_perf_pct of Intel P-State sysfs. However since frequencies 125 - are converted to nearest possible P-State, this is prone to rounding errors. 126 - This method is not preferred to limit performance. 127 - 128 - affected_cpus: Not used 129 - related_cpus: Not used 130 - 131 - For contemporary Intel processors, the frequency is controlled by the 132 - processor itself and the P-State exposed to software is related to 133 - performance levels. The idea that frequency can be set to a single 134 - frequency is fictional for Intel Core processors. Even if the scaling 135 - driver selects a single P-State, the actual frequency the processor 136 - will run at is selected by the processor itself. 137 - 138 - Per-CPU limits 139 - 140 - The kernel command line option "intel_pstate=per_cpu_perf_limits" forces 141 - the intel_pstate driver to use per-CPU performance limits. When it is set, 142 - the sysfs control interface described above is subject to limitations. 
143 - - The following controls are not available for both read and write 144 - /sys/devices/system/cpu/intel_pstate/max_perf_pct 145 - /sys/devices/system/cpu/intel_pstate/min_perf_pct 146 - - The following controls can be used to set performance limits, as far as the 147 - architecture of the processor permits: 148 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq 149 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq 150 - /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 151 - - User can still observe turbo percent and number of P-States from 152 - /sys/devices/system/cpu/intel_pstate/turbo_pct 153 - /sys/devices/system/cpu/intel_pstate/num_pstates 154 - - User can read write system wide turbo status 155 - /sys/devices/system/cpu/no_turbo 156 - 157 - Support of energy performance hints 158 - It is possible to provide hints to the HWP algorithms in the processor 159 - to be more performance centric to more energy centric. When the driver 160 - is using HWP, two additional cpufreq sysfs attributes are presented for 161 - each logical CPU. 162 - These attributes are: 163 - - energy_performance_available_preferences 164 - - energy_performance_preference 165 - 166 - To get list of supported hints: 167 - $ cat energy_performance_available_preferences 168 - default performance balance_performance balance_power power 169 - 170 - The current preference can be read or changed via cpufreq sysfs 171 - attribute "energy_performance_preference". Reading from this attribute 172 - will display current effective setting. User can write any of the valid 173 - preference string to this attribute. User can always restore to power-on 174 - default by writing "default". 175 - 176 - Since threads can migrate to different CPUs, this is possible that the 177 - new CPU may have different energy performance preference than the previous 178 - one. 
To avoid such issues, either threads can be pinned to specific CPUs 179 - or set the same energy performance preference value to all CPUs. 180 - 181 - Tuning Intel P-State driver 182 - 183 - When the performance can be tuned using PID (Proportional Integral 184 - Derivative) controller, debugfs files are provided for adjusting performance. 185 - They are presented under: 186 - /sys/kernel/debug/pstate_snb/ 187 - 188 - The PID tunable parameters are: 189 - deadband 190 - d_gain_pct 191 - i_gain_pct 192 - p_gain_pct 193 - sample_rate_ms 194 - setpoint 195 - 196 - To adjust these parameters, some understanding of driver implementation is 197 - necessary. There are some tweeks described here, but be very careful. Adjusting 198 - them requires expert level understanding of power and performance relationship. 199 - These limits are only useful when the "powersave" policy is active. 200 - 201 - -To make the system more responsive to load changes, sample_rate_ms can 202 - be adjusted (current default is 10ms). 203 - -To make the system use higher performance, even if the load is lower, setpoint 204 - can be adjusted to a lower number. This will also lead to faster ramp up time 205 - to reach the maximum P-State. 206 - If there are no derivative and integral coefficients, The next P-State will be 207 - equal to: 208 - current P-State - ((setpoint - current cpu load) * p_gain_pct) 209 - 210 - For example, if the current PID parameters are (Which are defaults for the core 211 - processors like SandyBridge): 212 - deadband = 0 213 - d_gain_pct = 0 214 - i_gain_pct = 0 215 - p_gain_pct = 20 216 - sample_rate_ms = 10 217 - setpoint = 97 218 - 219 - If the current P-State = 0x08 and current load = 100, this will result in the 220 - next P-State = 0x08 - ((97 - 100) * 0.2) = 8.6 (rounded to 9). Here the P-State 221 - goes up by only 1. If during next sample interval the current load doesn't 222 - change and still 100, then P-State goes up by one again. 
This process will 223 - continue as long as the load is more than the setpoint until the maximum P-State 224 - is reached. 225 - 226 - For the same load at setpoint = 60, this will result in the next P-State 227 - = 0x08 - ((60 - 100) * 0.2) = 16 228 - So by changing the setpoint from 97 to 60, there is an increase of the 229 - next P-State from 9 to 16. So this will make processor execute at higher 230 - P-State for the same CPU load. If the load continues to be more than the 231 - setpoint during next sample intervals, then P-State will go up again till the 232 - maximum P-State is reached. But the ramp up time to reach the maximum P-State 233 - will be much faster when the setpoint is 60 compared to 97. 234 - 235 - Debugging Intel P-State driver 236 - 237 - Event tracing 238 - To debug P-State transition, the Linux event tracing interface can be used. 239 - There are two specific events, which can be enabled (Provided the kernel 240 - configs related to event tracing are enabled). 241 - 242 - # cd /sys/kernel/debug/tracing/ 243 - # echo 1 > events/power/pstate_sample/enable 244 - # echo 1 > events/power/cpu_frequency/enable 245 - # cat trace 246 - gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107 247 - scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618 248 - freq=2474476 249 - cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2 250 - 251 - 252 - Using ftrace 253 - 254 - If function level tracing is required, the Linux ftrace interface can be used. 255 - For example if we want to check how often a function to set a P-State is 256 - called, we can set ftrace filter to intel_pstate_set_pstate. 257 - 258 - # cd /sys/kernel/debug/tracing/ 259 - # cat available_filter_functions | grep -i pstate 260 - intel_pstate_set_pstate 261 - intel_pstate_cpu_init 262 - ... 
263 - 264 - # echo intel_pstate_set_pstate > set_ftrace_filter 265 - # echo function > current_tracer 266 - # cat trace | head -15 267 - # tracer: function 268 - # 269 - # entries-in-buffer/entries-written: 80/80 #P:4 270 - # 271 - # _-----=> irqs-off 272 - # / _----=> need-resched 273 - # | / _---=> hardirq/softirq 274 - # || / _--=> preempt-depth 275 - # ||| / delay 276 - # TASK-PID CPU# |||| TIMESTAMP FUNCTION 277 - # | | | |||| | | 278 - Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func 279 - gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func 280 - gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func 281 - <idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func
-31
Documentation/devicetree/bindings/staging/ion/hi6220-ion.txt
··· 1 - Hi6220 SoC ION 2 - =================================================================== 3 - Required properties: 4 - - compatible : "hisilicon,hi6220-ion" 5 - - list of the ION heaps 6 - - heap name : maybe heap_sys_user@0 7 - - heap id : id should be unique in the system. 8 - - heap base : base ddr address of the heap,0 means that 9 - it is dynamic. 10 - - heap size : memory size and 0 means it is dynamic. 11 - - heap type : the heap type of the heap, please also 12 - see the define in ion.h(drivers/staging/android/uapi/ion.h) 13 - ------------------------------------------------------------------- 14 - Example: 15 - hi6220-ion { 16 - compatible = "hisilicon,hi6220-ion"; 17 - heap_sys_user@0 { 18 - heap-name = "sys_user"; 19 - heap-id = <0x0>; 20 - heap-base = <0x0>; 21 - heap-size = <0x0>; 22 - heap-type = "ion_system"; 23 - }; 24 - heap_sys_contig@0 { 25 - heap-name = "sys_contig"; 26 - heap-id = <0x1>; 27 - heap-base = <0x0>; 28 - heap-size = <0x0>; 29 - heap-type = "ion_system_contig"; 30 - }; 31 - };
+2 -4
Documentation/usb/typec.rst
··· 114 114 registering/unregistering cables and their plugs: 115 115 116 116 .. kernel-doc:: drivers/usb/typec/typec.c 117 - :functions: typec_register_cable typec_unregister_cable typec_register_plug 118 - typec_unregister_plug 117 + :functions: typec_register_cable typec_unregister_cable typec_register_plug typec_unregister_plug 119 118 120 119 The class will provide a handle to struct typec_cable and struct typec_plug if 121 120 the registration is successful, or NULL if it isn't. ··· 136 137 APIs to report it to the class: 137 138 138 139 .. kernel-doc:: drivers/usb/typec/typec.c 139 - :functions: typec_set_data_role typec_set_pwr_role typec_set_vconn_role 140 - typec_set_pwr_opmode 140 + :functions: typec_set_data_role typec_set_pwr_role typec_set_vconn_role typec_set_pwr_opmode 141 141 142 142 Alternate Modes 143 143 ~~~~~~~~~~~~~~~
+1 -1
Documentation/watchdog/watchdog-parameters.txt
··· 117 117 ------------------------------------------------- 118 118 iTCO_wdt: 119 119 heartbeat: Watchdog heartbeat in seconds. 120 - (2<heartbeat<39 (TCO v1) or 613 (TCO v2), default=30) 120 + (5<=heartbeat<=74 (TCO v1) or 1226 (TCO v2), default=30) 121 121 nowayout: Watchdog cannot be stopped once started 122 122 (default=kernel config parameter) 123 123 -------------------------------------------------
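[Editor's note on the corrected iTCO_wdt bounds above: the documented ranges can be sketched as a small validation helper. The function below is hypothetical, invented purely to illustrate the documented limits; it is not the driver's actual C bounds check.]

```shell
# Hypothetical helper mirroring the documented heartbeat ranges:
# TCO v1 accepts 5..74 seconds, TCO v2 accepts 5..1226 seconds.
valid_heartbeat() {
    secs=$1
    max=74
    if [ "$2" -eq 2 ]; then max=1226; fi   # TCO v2 allows a wider range
    [ "$secs" -ge 5 ] && [ "$secs" -le "$max" ]
}

valid_heartbeat 30 1 && echo "default of 30 s is valid on TCO v1"
valid_heartbeat 613 2 && echo "613 s is valid on TCO v2"
```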
+9 -6
MAINTAINERS
··· 846 846 M: Sumit Semwal <sumit.semwal@linaro.org> 847 847 L: devel@driverdev.osuosl.org 848 848 S: Supported 849 - F: Documentation/devicetree/bindings/staging/ion/ 850 849 F: drivers/staging/android/ion 851 850 F: drivers/staging/android/uapi/ion.h 852 851 F: drivers/staging/android/uapi/ion_test.h ··· 3114 3115 F: drivers/net/ieee802154/cc2520.c 3115 3116 F: include/linux/spi/cc2520.h 3116 3117 F: Documentation/devicetree/bindings/net/ieee802154/cc2520.txt 3118 + 3119 + CCREE ARM TRUSTZONE CRYPTOCELL 700 REE DRIVER 3120 + M: Gilad Ben-Yossef <gilad@benyossef.com> 3121 + L: linux-crypto@vger.kernel.org 3122 + L: driverdev-devel@linuxdriverproject.org 3123 + S: Supported 3124 + F: drivers/staging/ccree/ 3125 + W: https://developer.arm.com/products/system-ip/trustzone-cryptocell/cryptocell-700-family 3117 3126 3118 3127 CEC FRAMEWORK 3119 3128 M: Hans Verkuil <hans.verkuil@cisco.com> ··· 5702 5695 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 5703 5696 S: Maintained 5704 5697 F: drivers/staging/greybus/ 5705 - L: greybus-dev@lists.linaro.org 5698 + L: greybus-dev@lists.linaro.org (moderated for non-subscribers) 5706 5699 5707 5700 GREYBUS AUDIO PROTOCOLS DRIVERS 5708 5701 M: Vaibhav Agarwal <vaibhav.sr@gmail.com> ··· 9560 9553 9561 9554 OSD LIBRARY and FILESYSTEM 9562 9555 M: Boaz Harrosh <ooo@electrozaur.com> 9563 - M: Benny Halevy <bhalevy@primarydata.com> 9564 - L: osd-dev@open-osd.org 9565 - W: http://open-osd.org 9566 - T: git git://git.open-osd.org/open-osd.git 9567 9556 S: Maintained 9568 9557 F: drivers/scsi/osd/ 9569 9558 F: include/scsi/osd_*
+1 -1
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 12 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc1 4 + EXTRAVERSION = -rc2 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION*
+4 -2
arch/alpha/kernel/osf_sys.c
··· 1201 1201 if (!access_ok(VERIFY_WRITE, ur, sizeof(*ur))) 1202 1202 return -EFAULT; 1203 1203 1204 - err = 0; 1205 - err |= put_user(status, ustatus); 1204 + err = put_user(status, ustatus); 1205 + if (ret < 0) 1206 + return err ? err : ret; 1207 + 1206 1208 err |= __put_user(r.ru_utime.tv_sec, &ur->ru_utime.tv_sec); 1207 1209 err |= __put_user(r.ru_utime.tv_usec, &ur->ru_utime.tv_usec); 1208 1210 err |= __put_user(r.ru_stime.tv_sec, &ur->ru_stime.tv_sec);
+1 -1
arch/arm/boot/dts/bcm283x-rpi-smsc9512.dtsi
··· 1 1 / { 2 2 aliases { 3 - ethernet = &ethernet; 3 + ethernet0 = &ethernet; 4 4 }; 5 5 }; 6 6
+1 -1
arch/arm/boot/dts/bcm283x-rpi-smsc9514.dtsi
··· 1 1 / { 2 2 aliases { 3 - ethernet = &ethernet; 3 + ethernet0 = &ethernet; 4 4 }; 5 5 }; 6 6
+13 -9
arch/arm/boot/dts/bcm283x.dtsi
··· 198 198 brcm,pins = <0 1>; 199 199 brcm,function = <BCM2835_FSEL_ALT0>; 200 200 }; 201 - i2c0_gpio32: i2c0_gpio32 { 202 - brcm,pins = <32 34>; 201 + i2c0_gpio28: i2c0_gpio28 { 202 + brcm,pins = <28 29>; 203 203 brcm,function = <BCM2835_FSEL_ALT0>; 204 204 }; 205 205 i2c0_gpio44: i2c0_gpio44 { ··· 295 295 /* Separate from the uart0_gpio14 group 296 296 * because it conflicts with spi1_gpio16, and 297 297 * people often run uart0 on the two pins 298 - * without flow contrl. 298 + * without flow control. 299 299 */ 300 300 uart0_ctsrts_gpio16: uart0_ctsrts_gpio16 { 301 301 brcm,pins = <16 17>; 302 302 brcm,function = <BCM2835_FSEL_ALT3>; 303 303 }; 304 - uart0_gpio30: uart0_gpio30 { 304 + uart0_ctsrts_gpio30: uart0_ctsrts_gpio30 { 305 305 brcm,pins = <30 31>; 306 306 brcm,function = <BCM2835_FSEL_ALT3>; 307 307 }; 308 - uart0_ctsrts_gpio32: uart0_ctsrts_gpio32 { 308 + uart0_gpio32: uart0_gpio32 { 309 309 brcm,pins = <32 33>; 310 310 brcm,function = <BCM2835_FSEL_ALT3>; 311 + }; 312 + uart0_gpio36: uart0_gpio36 { 313 + brcm,pins = <36 37>; 314 + brcm,function = <BCM2835_FSEL_ALT2>; 315 + }; 316 + uart0_ctsrts_gpio38: uart0_ctsrts_gpio38 { 317 + brcm,pins = <38 39>; 318 + brcm,function = <BCM2835_FSEL_ALT2>; 311 319 }; 312 320 313 321 uart1_gpio14: uart1_gpio14 { ··· 333 325 uart1_ctsrts_gpio30: uart1_ctsrts_gpio30 { 334 326 brcm,pins = <30 31>; 335 327 brcm,function = <BCM2835_FSEL_ALT5>; 336 - }; 337 - uart1_gpio36: uart1_gpio36 { 338 - brcm,pins = <36 37 38 39>; 339 - brcm,function = <BCM2835_FSEL_ALT2>; 340 328 }; 341 329 uart1_gpio40: uart1_gpio40 { 342 330 brcm,pins = <40 41>;
+2
arch/arm/boot/dts/dra7-evm.dts
··· 204 204 tps659038: tps659038@58 { 205 205 compatible = "ti,tps659038"; 206 206 reg = <0x58>; 207 + ti,palmas-override-powerhold; 208 + ti,system-power-controller; 207 209 208 210 tps659038_pmic { 209 211 compatible = "ti,tps659038-pmic";
+4
arch/arm/boot/dts/dra7.dtsi
··· 2017 2017 coefficients = <0 2000>; 2018 2018 }; 2019 2019 2020 + &cpu_crit { 2021 + temperature = <120000>; /* milli Celsius */ 2022 + }; 2023 + 2020 2024 /include/ "dra7xx-clocks.dtsi"
+1 -1
arch/arm/boot/dts/imx53-qsrb.dts
··· 23 23 imx53-qsrb { 24 24 pinctrl_pmic: pmicgrp { 25 25 fsl,pins = < 26 - MX53_PAD_CSI0_DAT5__GPIO5_23 0x1e4 /* IRQ */ 26 + MX53_PAD_CSI0_DAT5__GPIO5_23 0x1c4 /* IRQ */ 27 27 >; 28 28 }; 29 29 };
-17
arch/arm/boot/dts/imx6sx-sdb.dts
··· 12 12 model = "Freescale i.MX6 SoloX SDB RevB Board"; 13 13 }; 14 14 15 - &cpu0 { 16 - operating-points = < 17 - /* kHz uV */ 18 - 996000 1250000 19 - 792000 1175000 20 - 396000 1175000 21 - 198000 1175000 22 - >; 23 - fsl,soc-operating-points = < 24 - /* ARM kHz SOC uV */ 25 - 996000 1250000 26 - 792000 1175000 27 - 396000 1175000 28 - 198000 1175000 29 - >; 30 - }; 31 - 32 15 &i2c1 { 33 16 clock-frequency = <100000>; 34 17 pinctrl-names = "default";
+3 -3
arch/arm/boot/dts/logicpd-torpedo-37xx-devkit.dts
··· 249 249 OMAP3_CORE1_IOPAD(0x2110, PIN_INPUT | MUX_MODE0) /* cam_xclka.cam_xclka */ 250 250 OMAP3_CORE1_IOPAD(0x2112, PIN_INPUT | MUX_MODE0) /* cam_pclk.cam_pclk */ 251 251 252 - OMAP3_CORE1_IOPAD(0x2114, PIN_INPUT | MUX_MODE0) /* cam_d0.cam_d0 */ 253 - OMAP3_CORE1_IOPAD(0x2116, PIN_INPUT | MUX_MODE0) /* cam_d1.cam_d1 */ 254 - OMAP3_CORE1_IOPAD(0x2118, PIN_INPUT | MUX_MODE0) /* cam_d2.cam_d2 */ 252 + OMAP3_CORE1_IOPAD(0x2116, PIN_INPUT | MUX_MODE0) /* cam_d0.cam_d0 */ 253 + OMAP3_CORE1_IOPAD(0x2118, PIN_INPUT | MUX_MODE0) /* cam_d1.cam_d1 */ 254 + OMAP3_CORE1_IOPAD(0x211a, PIN_INPUT | MUX_MODE0) /* cam_d2.cam_d2 */ 255 255 OMAP3_CORE1_IOPAD(0x211c, PIN_INPUT | MUX_MODE0) /* cam_d3.cam_d3 */ 256 256 OMAP3_CORE1_IOPAD(0x211e, PIN_INPUT | MUX_MODE0) /* cam_d4.cam_d4 */ 257 257 OMAP3_CORE1_IOPAD(0x2120, PIN_INPUT | MUX_MODE0) /* cam_d5.cam_d5 */
+2
arch/arm/boot/dts/mt7623.dtsi
··· 72 72 <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>, 73 73 <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>, 74 74 <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>; 75 + clock-frequency = <13000000>; 76 + arm,cpu-registers-not-fw-configured; 75 77 }; 76 78 77 79 watchdog: watchdog@10007000 {
+2 -1
arch/arm/boot/dts/omap3-gta04.dtsi
··· 55 55 simple-audio-card,bitclock-master = <&telephony_link_master>; 56 56 simple-audio-card,frame-master = <&telephony_link_master>; 57 57 simple-audio-card,format = "i2s"; 58 - 58 + simple-audio-card,bitclock-inversion; 59 + simple-audio-card,frame-inversion; 59 60 simple-audio-card,cpu { 60 61 sound-dai = <&mcbsp4>; 61 62 };
+1 -1
arch/arm/boot/dts/omap4-panda-a4.dts
··· 13 13 /* Pandaboard Rev A4+ have external pullups on SCL & SDA */ 14 14 &dss_hdmi_pins { 15 15 pinctrl-single,pins = < 16 - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0) /* hdmi_cec.hdmi_cec */ 16 + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ 17 17 OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0) /* hdmi_scl.hdmi_scl */ 18 18 OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0) /* hdmi_sda.hdmi_sda */ 19 19 >;
+1 -1
arch/arm/boot/dts/omap4-panda-es.dts
··· 34 34 /* PandaboardES has external pullups on SCL & SDA */ 35 35 &dss_hdmi_pins { 36 36 pinctrl-single,pins = < 37 - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0) /* hdmi_cec.hdmi_cec */ 37 + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ 38 38 OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0) /* hdmi_scl.hdmi_scl */ 39 39 OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0) /* hdmi_sda.hdmi_sda */ 40 40 >;
+68
arch/arm/configs/gemini_defconfig
··· 1 + # CONFIG_LOCALVERSION_AUTO is not set 2 + CONFIG_SYSVIPC=y 3 + CONFIG_NO_HZ_IDLE=y 4 + CONFIG_BSD_PROCESS_ACCT=y 5 + CONFIG_USER_NS=y 6 + CONFIG_RELAY=y 7 + CONFIG_BLK_DEV_INITRD=y 8 + CONFIG_PARTITION_ADVANCED=y 9 + CONFIG_ARCH_MULTI_V4=y 10 + # CONFIG_ARCH_MULTI_V7 is not set 11 + CONFIG_ARCH_GEMINI=y 12 + CONFIG_PCI=y 13 + CONFIG_PREEMPT=y 14 + CONFIG_AEABI=y 15 + CONFIG_CMDLINE="console=ttyS0,115200n8" 16 + CONFIG_KEXEC=y 17 + CONFIG_BINFMT_MISC=y 18 + CONFIG_PM=y 19 + CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 20 + CONFIG_DEVTMPFS=y 21 + CONFIG_MTD=y 22 + CONFIG_MTD_BLOCK=y 23 + CONFIG_MTD_CFI=y 24 + CONFIG_MTD_CFI_INTELEXT=y 25 + CONFIG_MTD_CFI_AMDSTD=y 26 + CONFIG_MTD_CFI_STAA=y 27 + CONFIG_MTD_PHYSMAP=y 28 + CONFIG_MTD_PHYSMAP_OF=y 29 + CONFIG_BLK_DEV_RAM=y 30 + CONFIG_BLK_DEV_RAM_SIZE=16384 31 + # CONFIG_SCSI_PROC_FS is not set 32 + CONFIG_BLK_DEV_SD=y 33 + # CONFIG_SCSI_LOWLEVEL is not set 34 + CONFIG_ATA=y 35 + CONFIG_INPUT_EVDEV=y 36 + CONFIG_KEYBOARD_GPIO=y 37 + # CONFIG_INPUT_MOUSE is not set 38 + # CONFIG_LEGACY_PTYS is not set 39 + CONFIG_SERIAL_8250=y 40 + CONFIG_SERIAL_8250_CONSOLE=y 41 + CONFIG_SERIAL_8250_NR_UARTS=1 42 + CONFIG_SERIAL_8250_RUNTIME_UARTS=1 43 + CONFIG_SERIAL_OF_PLATFORM=y 44 + # CONFIG_HW_RANDOM is not set 45 + # CONFIG_HWMON is not set 46 + CONFIG_WATCHDOG=y 47 + CONFIG_GEMINI_WATCHDOG=y 48 + CONFIG_USB=y 49 + CONFIG_USB_MON=y 50 + CONFIG_USB_FOTG210_HCD=y 51 + CONFIG_USB_STORAGE=y 52 + CONFIG_NEW_LEDS=y 53 + CONFIG_LEDS_CLASS=y 54 + CONFIG_LEDS_GPIO=y 55 + CONFIG_LEDS_TRIGGERS=y 56 + CONFIG_LEDS_TRIGGER_HEARTBEAT=y 57 + CONFIG_RTC_CLASS=y 58 + CONFIG_RTC_DRV_GEMINI=y 59 + CONFIG_DMADEVICES=y 60 + # CONFIG_DNOTIFY is not set 61 + CONFIG_TMPFS=y 62 + CONFIG_TMPFS_POSIX_ACL=y 63 + CONFIG_ROMFS_FS=y 64 + CONFIG_NLS_CODEPAGE_437=y 65 + CONFIG_NLS_ISO8859_1=y 66 + # CONFIG_ENABLE_WARN_DEPRECATED is not set 67 + # CONFIG_ENABLE_MUST_CHECK is not set 68 + CONFIG_DEBUG_FS=y
+2 -1
arch/arm/include/asm/kvm_coproc.h
··· 31 31 int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run); 32 32 int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run); 33 33 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run); 34 - int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run); 34 + int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run); 35 + int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run); 35 36 int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run); 36 37 int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run); 37 38
+74 -32
arch/arm/kvm/coproc.c
··· 32 32 #include <asm/vfp.h>
33 33 #include "../vfp/vfpinstr.h"
34 34 
35 + #define CREATE_TRACE_POINTS
35 36 #include "trace.h"
36 37 #include "coproc.h"
37 38 
··· 107 106 }
108 107 
109 108 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
110 - {
111 - kvm_inject_undefined(vcpu);
112 - return 1;
113 - }
114 - 
115 - int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
116 109 {
117 110 kvm_inject_undefined(vcpu);
118 111 return 1;
··· 279 284 * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
280 285 * all PM registers, which doesn't crash the guest kernel at least.
281 286 */
282 - static bool pm_fake(struct kvm_vcpu *vcpu,
287 + static bool trap_raz_wi(struct kvm_vcpu *vcpu,
283 288 const struct coproc_params *p,
284 289 const struct coproc_reg *r)
285 290 {
··· 289 294 return read_zero(vcpu, p);
290 295 }
291 296 
292 - #define access_pmcr pm_fake
293 - #define access_pmcntenset pm_fake
294 - #define access_pmcntenclr pm_fake
295 - #define access_pmovsr pm_fake
296 - #define access_pmselr pm_fake
297 - #define access_pmceid0 pm_fake
298 - #define access_pmceid1 pm_fake
299 - #define access_pmccntr pm_fake
300 - #define access_pmxevtyper pm_fake
301 - #define access_pmxevcntr pm_fake
302 - #define access_pmuserenr pm_fake
303 - #define access_pmintenset pm_fake
304 - #define access_pmintenclr pm_fake
297 + #define access_pmcr trap_raz_wi
298 + #define access_pmcntenset trap_raz_wi
299 + #define access_pmcntenclr trap_raz_wi
300 + #define access_pmovsr trap_raz_wi
301 + #define access_pmselr trap_raz_wi
302 + #define access_pmceid0 trap_raz_wi
303 + #define access_pmceid1 trap_raz_wi
304 + #define access_pmccntr trap_raz_wi
305 + #define access_pmxevtyper trap_raz_wi
306 + #define access_pmxevcntr trap_raz_wi
307 + #define access_pmuserenr trap_raz_wi
308 + #define access_pmintenset trap_raz_wi
309 + #define access_pmintenclr trap_raz_wi
305 310 
306 311 /* Architected CP15 registers.
307 312 * CRn denotes the primary register number, but is copied to the CRm in the
··· 527 532 return 1;
528 533 }
529 534 
530 - /**
531 - * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
532 - * @vcpu: The VCPU pointer
533 - * @run: The kvm_run struct
534 - */
535 - int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
535 + static struct coproc_params decode_64bit_hsr(struct kvm_vcpu *vcpu)
536 536 {
537 537 struct coproc_params params;
538 538 
··· 541 551 params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
542 552 params.CRm = 0;
543 553 
554 + return params;
555 + }
556 + 
557 + /**
558 + * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
559 + * @vcpu: The VCPU pointer
560 + * @run: The kvm_run struct
561 + */
562 + int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
563 + {
564 + struct coproc_params params = decode_64bit_hsr(vcpu);
565 + 
544 566 return emulate_cp15(vcpu, &params);
567 + }
568 + 
569 + /**
570 + * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
571 + * @vcpu: The VCPU pointer
572 + * @run: The kvm_run struct
573 + */
574 + int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
575 + {
576 + struct coproc_params params = decode_64bit_hsr(vcpu);
577 + 
578 + /* raz_wi cp14 */
579 + trap_raz_wi(vcpu, &params, NULL);
580 + 
581 + /* handled */
582 + kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
583 + return 1;
545 584 }
546 585 
547 586 static void reset_coproc_regs(struct kvm_vcpu *vcpu,
··· 583 564 table[i].reset(vcpu, &table[i]);
584 565 }
585 566 
586 - /**
587 - * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
588 - * @vcpu: The VCPU pointer
589 - * @run: The kvm_run struct
590 - */
591 - int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
567 + static struct coproc_params decode_32bit_hsr(struct kvm_vcpu *vcpu)
592 568 {
593 569 struct coproc_params params;
594 570 
··· 597 583 params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
598 584 params.Rt2 = 0;
599 585 
586 + return params;
587 + }
588 + 
589 + /**
590 + * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
591 + * @vcpu: The VCPU pointer
592 + * @run: The kvm_run struct
593 + */
594 + int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
595 + {
596 + struct coproc_params params = decode_32bit_hsr(vcpu);
600 597 return emulate_cp15(vcpu, &params);
598 + }
599 + 
600 + /**
601 + * kvm_handle_cp14_32 -- handles a mrc/mcr trap on a guest CP14 access
602 + * @vcpu: The VCPU pointer
603 + * @run: The kvm_run struct
604 + */
605 + int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
606 + {
607 + struct coproc_params params = decode_32bit_hsr(vcpu);
608 + 
609 + /* raz_wi cp14 */
610 + trap_raz_wi(vcpu, &params, NULL);
611 + 
612 + /* handled */
613 + kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
614 + return 1;
601 615 }
602 616 
603 617 /******************************************************************************
+2 -2
arch/arm/kvm/handle_exit.c
··· 95 95 [HSR_EC_WFI] = kvm_handle_wfx, 96 96 [HSR_EC_CP15_32] = kvm_handle_cp15_32, 97 97 [HSR_EC_CP15_64] = kvm_handle_cp15_64, 98 - [HSR_EC_CP14_MR] = kvm_handle_cp14_access, 98 + [HSR_EC_CP14_MR] = kvm_handle_cp14_32, 99 99 [HSR_EC_CP14_LS] = kvm_handle_cp14_load_store, 100 - [HSR_EC_CP14_64] = kvm_handle_cp14_access, 100 + [HSR_EC_CP14_64] = kvm_handle_cp14_64, 101 101 [HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access, 102 102 [HSR_EC_CP10_ID] = kvm_handle_cp10_id, 103 103 [HSR_EC_HVC] = handle_hvc,
+2
arch/arm/kvm/hyp/Makefile
··· 2 2 # Makefile for Kernel-based Virtual Machine module, HYP part 3 3 # 4 4 5 + ccflags-y += -fno-stack-protector 6 + 5 7 KVM=../../../../virt/kvm 6 8 7 9 obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o
+3 -1
arch/arm/kvm/hyp/switch.c
··· 48 48 write_sysreg(HSTR_T(15), HSTR); 49 49 write_sysreg(HCPTR_TTA | HCPTR_TCP(10) | HCPTR_TCP(11), HCPTR); 50 50 val = read_sysreg(HDCR); 51 - write_sysreg(val | HDCR_TPM | HDCR_TPMCR, HDCR); 51 + val |= HDCR_TPM | HDCR_TPMCR; /* trap performance monitors */ 52 + val |= HDCR_TDRA | HDCR_TDOSA | HDCR_TDA; /* trap debug regs */ 53 + write_sysreg(val, HDCR); 52 54 } 53 55 54 56 static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
+4 -4
arch/arm/kvm/trace.h
··· 1 - #if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ) 2 - #define _TRACE_KVM_H 1 + #if !defined(_TRACE_ARM_KVM_H) || defined(TRACE_HEADER_MULTI_READ) 2 + #define _TRACE_ARM_KVM_H 3 3 4 4 #include <linux/tracepoint.h> 5 5 ··· 74 74 __entry->vcpu_pc, __entry->r0, __entry->imm) 75 75 ); 76 76 77 - #endif /* _TRACE_KVM_H */ 77 + #endif /* _TRACE_ARM_KVM_H */ 78 78 79 79 #undef TRACE_INCLUDE_PATH 80 - #define TRACE_INCLUDE_PATH arch/arm/kvm 80 + #define TRACE_INCLUDE_PATH . 81 81 #undef TRACE_INCLUDE_FILE 82 82 #define TRACE_INCLUDE_FILE trace 83 83
+1 -1
arch/arm/mach-at91/pm.c
··· 335 335 { .idle = sama5d3_ddr_standby, .memctrl = AT91_MEMCTRL_DDRSDR}, 336 336 }; 337 337 338 - static const struct of_device_id const ramc_ids[] __initconst = { 338 + static const struct of_device_id ramc_ids[] __initconst = { 339 339 { .compatible = "atmel,at91rm9200-sdramc", .data = &ramc_infos[0] }, 340 340 { .compatible = "atmel,at91sam9260-sdramc", .data = &ramc_infos[1] }, 341 341 { .compatible = "atmel,at91sam9g45-ddramc", .data = &ramc_infos[2] },
+1 -1
arch/arm/mach-bcm/bcm_kona_smc.c
··· 33 33 unsigned result; 34 34 }; 35 35 36 - static const struct of_device_id const bcm_kona_smc_ids[] __initconst = { 36 + static const struct of_device_id bcm_kona_smc_ids[] __initconst = { 37 37 {.compatible = "brcm,kona-smc"}, 38 38 {.compatible = "bcm,kona-smc"}, /* deprecated name */ 39 39 {},
+1 -1
arch/arm/mach-cns3xxx/core.c
··· 346 346 .power_off = csn3xxx_usb_power_off, 347 347 }; 348 348 349 - static const struct of_dev_auxdata const cns3xxx_auxdata[] __initconst = { 349 + static const struct of_dev_auxdata cns3xxx_auxdata[] __initconst = { 350 350 { "intel,usb-ehci", CNS3XXX_USB_BASE, "ehci-platform", &cns3xxx_usb_ehci_pdata }, 351 351 { "intel,usb-ohci", CNS3XXX_USB_OHCI_BASE, "ohci-platform", &cns3xxx_usb_ohci_pdata }, 352 352 { "cavium,cns3420-ahci", CNS3XXX_SATA2_BASE, "ahci", NULL },
+2 -1
arch/arm/mach-omap2/common.h
··· 266 266 extern const struct smp_operations omap4_smp_ops; 267 267 #endif 268 268 269 + extern u32 omap4_get_cpu1_ns_pa_addr(void); 270 + 269 271 #if defined(CONFIG_SMP) && defined(CONFIG_PM) 270 272 extern int omap4_mpuss_init(void); 271 273 extern int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state); 272 274 extern int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state); 273 - extern u32 omap4_get_cpu1_ns_pa_addr(void); 274 275 #else 275 276 static inline int omap4_enter_lowpower(unsigned int cpu, 276 277 unsigned int power_state)
+5 -5
arch/arm/mach-omap2/omap-mpuss-lowpower.c
··· 213 213 {} 214 214 #endif 215 215 216 - u32 omap4_get_cpu1_ns_pa_addr(void) 217 - { 218 - return old_cpu1_ns_pa_addr; 219 - } 220 - 221 216 /** 222 217 * omap4_enter_lowpower: OMAP4 MPUSS Low Power Entry Function 223 218 * The purpose of this function is to manage low power programming ··· 451 456 } 452 457 453 458 #endif 459 + 460 + u32 omap4_get_cpu1_ns_pa_addr(void) 461 + { 462 + return old_cpu1_ns_pa_addr; 463 + } 454 464 455 465 /* 456 466 * For kexec, we must set CPU1_WAKEUP_NS_PA_ADDR to point to
+7 -4
arch/arm/mach-omap2/omap-smp.c
··· 306 306 307 307 cpu1_startup_pa = readl_relaxed(cfg.wakeupgen_base + 308 308 OMAP_AUX_CORE_BOOT_1); 309 - cpu1_ns_pa_addr = omap4_get_cpu1_ns_pa_addr(); 310 309 311 310 /* Did the configured secondary_startup() get overwritten? */ 312 311 if (!omap4_smp_cpu1_startup_valid(cpu1_startup_pa)) ··· 315 316 * If omap4 or 5 has NS_PA_ADDR configured, CPU1 may be in a 316 317 * deeper idle state in WFI and will wake to an invalid address. 317 318 */ 318 - if ((soc_is_omap44xx() || soc_is_omap54xx()) && 319 - !omap4_smp_cpu1_startup_valid(cpu1_ns_pa_addr)) 320 - needs_reset = true; 319 + if ((soc_is_omap44xx() || soc_is_omap54xx())) { 320 + cpu1_ns_pa_addr = omap4_get_cpu1_ns_pa_addr(); 321 + if (!omap4_smp_cpu1_startup_valid(cpu1_ns_pa_addr)) 322 + needs_reset = true; 323 + } else { 324 + cpu1_ns_pa_addr = 0; 325 + } 321 326 322 327 if (!needs_reset || !c->cpu1_rstctrl_va) 323 328 return;
+1 -1
arch/arm/mach-omap2/prm_common.c
··· 711 711 }; 712 712 #endif 713 713 714 - static const struct of_device_id const omap_prcm_dt_match_table[] __initconst = { 714 + static const struct of_device_id omap_prcm_dt_match_table[] __initconst = { 715 715 #ifdef CONFIG_SOC_AM33XX 716 716 { .compatible = "ti,am3-prcm", .data = &am3_prm_data }, 717 717 #endif
+1 -1
arch/arm/mach-omap2/vc.c
··· 559 559 u8 hsscll_12; 560 560 }; 561 561 562 - static const struct i2c_init_data const omap4_i2c_timing_data[] __initconst = { 562 + static const struct i2c_init_data omap4_i2c_timing_data[] __initconst = { 563 563 { 564 564 .load = 50, 565 565 .loadbits = 0x3,
+1 -1
arch/arm/mach-spear/time.c
··· 204 204 setup_irq(irq, &spear_timer_irq); 205 205 } 206 206 207 - static const struct of_device_id const timer_of_match[] __initconst = { 207 + static const struct of_device_id timer_of_match[] __initconst = { 208 208 { .compatible = "st,spear-timer", }, 209 209 { }, 210 210 };
+5
arch/arm64/Kconfig.platforms
··· 106 106 select ARMADA_AP806_SYSCON 107 107 select ARMADA_CP110_SYSCON 108 108 select ARMADA_37XX_CLK 109 + select GPIOLIB 110 + select GPIOLIB_IRQCHIP 109 111 select MVEBU_ODMI 110 112 select MVEBU_PIC 113 + select OF_GPIO 114 + select PINCTRL 115 + select PINCTRL_ARMADA_37XX 111 116 help 112 117 This enables support for Marvell EBU familly, including: 113 118 - Armada 3700 SoC Family
+8
arch/arm64/boot/dts/marvell/armada-3720-db.dts
··· 79 79 }; 80 80 81 81 &i2c0 { 82 + pinctrl-names = "default"; 83 + pinctrl-0 = <&i2c1_pins>; 82 84 status = "okay"; 83 85 84 86 gpio_exp: pca9555@22 { ··· 115 113 116 114 &spi0 { 117 115 status = "okay"; 116 + pinctrl-names = "default"; 117 + pinctrl-0 = <&spi_quad_pins>; 118 118 119 119 m25p80@0 { 120 120 compatible = "jedec,spi-nor"; ··· 147 143 148 144 /* Exported on the micro USB connector CON32 through an FTDI */ 149 145 &uart0 { 146 + pinctrl-names = "default"; 147 + pinctrl-0 = <&uart1_pins>; 150 148 status = "okay"; 151 149 }; 152 150 ··· 190 184 }; 191 185 192 186 &eth0 { 187 + pinctrl-names = "default"; 188 + pinctrl-0 = <&rgmii_pins>; 193 189 phy-mode = "rgmii-id"; 194 190 phy = <&phy0>; 195 191 status = "okay";
+70 -3
arch/arm64/boot/dts/marvell/armada-37xx.dtsi
··· 161 161 #clock-cells = <1>;
162 162 };
163 163 
164 - gpio1: gpio@13800 {
165 - compatible = "marvell,mvebu-gpio-3700",
164 + pinctrl_nb: pinctrl@13800 {
165 + compatible = "marvell,armada3710-nb-pinctrl",
166 166 "syscon", "simple-mfd";
167 - reg = <0x13800 0x500>;
167 + reg = <0x13800 0x100>, <0x13C00 0x20>;
168 + gpionb: gpio {
169 + #gpio-cells = <2>;
170 + gpio-ranges = <&pinctrl_nb 0 0 36>;
171 + gpio-controller;
172 + interrupts =
173 + <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>,
174 + <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>,
175 + <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>,
176 + <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>,
177 + <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>,
178 + <GIC_SPI 56 IRQ_TYPE_LEVEL_HIGH>,
179 + <GIC_SPI 57 IRQ_TYPE_LEVEL_HIGH>,
180 + <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>,
181 + <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>,
182 + <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>,
183 + <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>,
184 + <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>;
185 + 
186 + };
168 187 
169 188 xtalclk: xtal-clk {
170 189 compatible = "marvell,armada-3700-xtal-clock";
171 190 clock-output-names = "xtal";
172 191 #clock-cells = <0>;
173 192 };
193 + 
194 + spi_quad_pins: spi-quad-pins {
195 + groups = "spi_quad";
196 + function = "spi";
197 + };
198 + 
199 + i2c1_pins: i2c1-pins {
200 + groups = "i2c1";
201 + function = "i2c";
202 + };
203 + 
204 + i2c2_pins: i2c2-pins {
205 + groups = "i2c2";
206 + function = "i2c";
207 + };
208 + 
209 + uart1_pins: uart1-pins {
210 + groups = "uart1";
211 + function = "uart";
212 + };
213 + 
214 + uart2_pins: uart2-pins {
215 + groups = "uart2";
216 + function = "uart";
217 + };
218 + };
219 + 
220 + pinctrl_sb: pinctrl@18800 {
221 + compatible = "marvell,armada3710-sb-pinctrl",
222 + "syscon", "simple-mfd";
223 + reg = <0x18800 0x100>, <0x18C00 0x20>;
224 + gpiosb: gpio {
225 + #gpio-cells = <2>;
226 + gpio-ranges = <&pinctrl_sb 0 0 29>;
227 + gpio-controller;
228 + interrupts =
229 + <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>,
230 + <GIC_SPI 159 IRQ_TYPE_LEVEL_HIGH>,
231 + <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>,
232 + <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>,
233 + <GIC_SPI 156 IRQ_TYPE_LEVEL_HIGH>;
234 + };
235 + 
236 + rgmii_pins: mii-pins {
237 + groups = "rgmii";
238 + function = "mii";
239 + };
240 + 
174 241 };
175 242 
176 243 eth0: ethernet@30000 {
+3
arch/arm64/boot/dts/mediatek/mt8173-evb.dts
··· 134 134 bus-width = <8>; 135 135 max-frequency = <50000000>; 136 136 cap-mmc-highspeed; 137 + mediatek,hs200-cmd-int-delay=<26>; 138 + mediatek,hs400-cmd-int-delay=<14>; 139 + mediatek,hs400-cmd-resp-sel-rising; 137 140 vmmc-supply = <&mt6397_vemc_3v3_reg>; 138 141 vqmmc-supply = <&mt6397_vio18_reg>; 139 142 non-removable;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-gru-kevin.dts
··· 44 44 45 45 /dts-v1/; 46 46 #include "rk3399-gru.dtsi" 47 - #include <include/dt-bindings/input/linux-event-codes.h> 47 + #include <dt-bindings/input/linux-event-codes.h> 48 48 49 49 /* 50 50 * Kevin-specific things
+50 -62
arch/arm64/configs/defconfig
··· 30 30 CONFIG_JUMP_LABEL=y
31 31 CONFIG_MODULES=y
32 32 CONFIG_MODULE_UNLOAD=y
33 - # CONFIG_BLK_DEV_BSG is not set
34 33 # CONFIG_IOSCHED_DEADLINE is not set
35 34 CONFIG_ARCH_SUNXI=y
36 35 CONFIG_ARCH_ALPINE=y
··· 61 62 CONFIG_ARCH_ZX=y
62 63 CONFIG_ARCH_ZYNQMP=y
63 64 CONFIG_PCI=y
64 - CONFIG_PCI_MSI=y
65 65 CONFIG_PCI_IOV=y
66 - CONFIG_PCI_AARDVARK=y
67 - CONFIG_PCIE_RCAR=y
68 - CONFIG_PCI_HOST_GENERIC=y
69 - CONFIG_PCI_XGENE=y
70 66 CONFIG_PCI_LAYERSCAPE=y
71 67 CONFIG_PCI_HISI=y
72 68 CONFIG_PCIE_QCOM=y
73 69 CONFIG_PCIE_ARMADA_8K=y
70 + CONFIG_PCI_AARDVARK=y
71 + CONFIG_PCIE_RCAR=y
72 + CONFIG_PCI_HOST_GENERIC=y
73 + CONFIG_PCI_XGENE=y
74 74 CONFIG_ARM64_VA_BITS_48=y
75 75 CONFIG_SCHED_MC=y
76 76 CONFIG_NUMA=y
··· 78 80 CONFIG_TRANSPARENT_HUGEPAGE=y
79 81 CONFIG_CMA=y
80 82 CONFIG_SECCOMP=y
81 - CONFIG_XEN=y
82 83 CONFIG_KEXEC=y
83 84 CONFIG_CRASH_DUMP=y
85 + CONFIG_XEN=y
84 86 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
85 87 CONFIG_COMPAT=y
86 - CONFIG_CPU_IDLE=y
87 88 CONFIG_HIBERNATION=y
88 89 CONFIG_ARM_CPUIDLE=y
89 90 CONFIG_CPU_FREQ=y
··· 152 155 CONFIG_BLK_DEV_LOOP=y
153 156 CONFIG_BLK_DEV_NBD=m
154 157 CONFIG_VIRTIO_BLK=y
155 - CONFIG_EEPROM_AT25=m
156 158 CONFIG_SRAM=y
159 + CONFIG_EEPROM_AT25=m
157 160 # CONFIG_SCSI_PROC_FS is not set
158 161 CONFIG_BLK_DEV_SD=y
159 162 CONFIG_SCSI_SAS_ATA=y
··· 165 168 CONFIG_AHCI_MVEBU=y
166 169 CONFIG_AHCI_XGENE=y
167 170 CONFIG_AHCI_QORIQ=y
168 - CONFIG_SATA_RCAR=y
169 171 CONFIG_SATA_SIL24=y
172 + CONFIG_SATA_RCAR=y
170 173 CONFIG_PATA_PLATFORM=y
171 174 CONFIG_PATA_OF_PLATFORM=y
172 175 CONFIG_NETDEVICES=y
··· 183 186 CONFIG_E1000E=y
184 187 CONFIG_IGB=y
185 188 CONFIG_IGBVF=y
186 - CONFIG_MVPP2=y
187 189 CONFIG_MVNETA=y
190 + CONFIG_MVPP2=y
188 191 CONFIG_SKY2=y
189 192 CONFIG_RAVB=y
190 193 CONFIG_SMC91X=y
191 194 CONFIG_SMSC911X=y
192 195 CONFIG_STMMAC_ETH=m
193 - CONFIG_REALTEK_PHY=m
196 + CONFIG_MDIO_BUS_MUX_MMIOREG=y
194 197 CONFIG_MESON_GXL_PHY=m
195 198 CONFIG_MICREL_PHY=y
196 - CONFIG_MDIO_BUS_MUX=y
197 - CONFIG_MDIO_BUS_MUX_MMIOREG=y
199 + CONFIG_REALTEK_PHY=m
198 200 CONFIG_USB_PEGASUS=m
199 201 CONFIG_USB_RTL8150=m
200 202 CONFIG_USB_RTL8152=m
··· 226 230 CONFIG_SERIAL_OF_PLATFORM=y
227 231 CONFIG_SERIAL_AMBA_PL011=y
228 232 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
233 + CONFIG_SERIAL_MESON=y
234 + CONFIG_SERIAL_MESON_CONSOLE=y
229 235 CONFIG_SERIAL_SAMSUNG=y
230 236 CONFIG_SERIAL_SAMSUNG_CONSOLE=y
231 237 CONFIG_SERIAL_TEGRA=y
232 238 CONFIG_SERIAL_SH_SCI=y
233 239 CONFIG_SERIAL_SH_SCI_NR_UARTS=11
234 240 CONFIG_SERIAL_SH_SCI_CONSOLE=y
235 - CONFIG_SERIAL_MESON=y
236 - CONFIG_SERIAL_MESON_CONSOLE=y
237 241 CONFIG_SERIAL_MSM=y
238 242 CONFIG_SERIAL_MSM_CONSOLE=y
239 243 CONFIG_SERIAL_XILINX_PS_UART=y
··· 257 261 CONFIG_I2C_RCAR=y
258 262 CONFIG_I2C_CROS_EC_TUNNEL=y
259 263 CONFIG_SPI=y
260 - CONFIG_SPI_MESON_SPIFC=m
261 264 CONFIG_SPI_BCM2835=m
262 265 CONFIG_SPI_BCM2835AUX=m
266 + CONFIG_SPI_MESON_SPIFC=m
263 267 CONFIG_SPI_ORION=y
264 268 CONFIG_SPI_PL022=y
265 269 CONFIG_SPI_QUP=y
266 - CONFIG_SPI_SPIDEV=m
267 270 CONFIG_SPI_S3C64XX=y
271 + CONFIG_SPI_SPIDEV=m
268 272 CONFIG_SPMI=y
269 273 CONFIG_PINCTRL_SINGLE=y
270 274 CONFIG_PINCTRL_MAX77620=y
··· 282 286 CONFIG_GPIO_PCA953X_IRQ=y
283 287 CONFIG_GPIO_MAX77620=y
284 288 CONFIG_POWER_RESET_MSM=y
285 - CONFIG_BATTERY_BQ27XXX=y
286 289 CONFIG_POWER_RESET_XGENE=y
287 290 CONFIG_POWER_RESET_SYSCON=y
291 + CONFIG_BATTERY_BQ27XXX=y
292 + CONFIG_SENSORS_ARM_SCPI=y
288 293 CONFIG_SENSORS_LM90=m
289 294 CONFIG_SENSORS_INA2XX=m
290 - CONFIG_SENSORS_ARM_SCPI=y
291 - CONFIG_THERMAL=y
292 - CONFIG_THERMAL_EMULATION=y
293 295 CONFIG_THERMAL_GOV_POWER_ALLOCATOR=y
294 296 CONFIG_CPU_THERMAL=y
295 - CONFIG_BCM2835_THERMAL=y
297 + CONFIG_THERMAL_EMULATION=y
296 298 CONFIG_EXYNOS_THERMAL=y
297 299 CONFIG_WATCHDOG=y
298 - CONFIG_BCM2835_WDT=y
299 - CONFIG_RENESAS_WDT=y
300 300 CONFIG_S3C2410_WATCHDOG=y
301 301 CONFIG_MESON_GXBB_WATCHDOG=m
302 302 CONFIG_MESON_WATCHDOG=m
303 - CONFIG_MFD_EXYNOS_LPASS=m
304 - CONFIG_MFD_MAX77620=y
305 - CONFIG_MFD_RK808=y
306 - CONFIG_MFD_SPMI_PMIC=y
307 - CONFIG_MFD_SEC_CORE=y
308 - CONFIG_MFD_HI655X_PMIC=y
309 - CONFIG_REGULATOR=y
303 + CONFIG_RENESAS_WDT=y
304 + CONFIG_BCM2835_WDT=y
310 305 CONFIG_MFD_CROS_EC=y
311 306 CONFIG_MFD_CROS_EC_I2C=y
307 + CONFIG_MFD_EXYNOS_LPASS=m
308 + CONFIG_MFD_HI655X_PMIC=y
309 + CONFIG_MFD_MAX77620=y
310 + CONFIG_MFD_SPMI_PMIC=y
311 + CONFIG_MFD_RK808=y
312 + CONFIG_MFD_SEC_CORE=y
312 313 CONFIG_REGULATOR_FIXED_VOLTAGE=y
313 314 CONFIG_REGULATOR_GPIO=y
314 315 CONFIG_REGULATOR_HI655X=y
··· 338 345 CONFIG_DRM_EXYNOS_HDMI=y
339 346 CONFIG_DRM_EXYNOS_MIC=y
340 347 CONFIG_DRM_RCAR_DU=m
341 - CONFIG_DRM_RCAR_HDMI=y
342 348 CONFIG_DRM_RCAR_LVDS=y
343 349 CONFIG_DRM_RCAR_VSP=y
344 350 CONFIG_DRM_TEGRA=m
345 - CONFIG_DRM_VC4=m
346 351 CONFIG_DRM_PANEL_SIMPLE=m
347 352 CONFIG_DRM_I2C_ADV7511=m
353 + CONFIG_DRM_VC4=m
348 354 CONFIG_DRM_HISI_KIRIN=m
349 355 CONFIG_DRM_MESON=m
350 356 CONFIG_FB=y
··· 358 366 CONFIG_SND=y
359 367 CONFIG_SND_SOC=y
360 368 CONFIG_SND_BCM2835_SOC_I2S=m
361 - CONFIG_SND_SOC_RCAR=y
362 369 CONFIG_SND_SOC_SAMSUNG=y
370 + CONFIG_SND_SOC_RCAR=y
363 371 CONFIG_SND_SOC_AK4613=y
364 372 CONFIG_USB=y
365 373 CONFIG_USB_OTG=y
366 374 CONFIG_USB_XHCI_HCD=y
367 - CONFIG_USB_XHCI_PLATFORM=y
368 - CONFIG_USB_XHCI_RCAR=y
369 - CONFIG_USB_EHCI_EXYNOS=y
370 375 CONFIG_USB_XHCI_TEGRA=y
371 376 CONFIG_USB_EHCI_HCD=y
372 377 CONFIG_USB_EHCI_MSM=y
378 + CONFIG_USB_EHCI_EXYNOS=y
373 379 CONFIG_USB_EHCI_HCD_PLATFORM=y
374 - CONFIG_USB_OHCI_EXYNOS=y
375 380 CONFIG_USB_OHCI_HCD=y
381 + CONFIG_USB_OHCI_EXYNOS=y
376 382 CONFIG_USB_OHCI_HCD_PLATFORM=y
377 383 CONFIG_USB_RENESAS_USBHS=m
378 384 CONFIG_USB_STORAGE=y
379 - CONFIG_USB_DWC2=y
380 385 CONFIG_USB_DWC3=y
386 + CONFIG_USB_DWC2=y
381 387 CONFIG_USB_CHIPIDEA=y
382 388 CONFIG_USB_CHIPIDEA_UDC=y
383 389 CONFIG_USB_CHIPIDEA_HOST=y
384 390 CONFIG_USB_ISP1760=y
385 391 CONFIG_USB_HSIC_USB3503=y
386 392 CONFIG_USB_MSM_OTG=y
393 + CONFIG_USB_QCOM_8X16_PHY=y
387 394 CONFIG_USB_ULPI=y
388 395 CONFIG_USB_GADGET=y
389 396 CONFIG_USB_RENESAS_USBHS_UDC=m
390 397 CONFIG_MMC=y
391 398 CONFIG_MMC_BLOCK_MINORS=32
392 399 CONFIG_MMC_ARMMMCI=y
393 - CONFIG_MMC_MESON_GX=y
394 400 CONFIG_MMC_SDHCI=y
395 401 CONFIG_MMC_SDHCI_ACPI=y
396 402 CONFIG_MMC_SDHCI_PLTFM=y
··· 396 406 CONFIG_MMC_SDHCI_OF_ESDHC=y
397 407 CONFIG_MMC_SDHCI_CADENCE=y
398 408 CONFIG_MMC_SDHCI_TEGRA=y
409 + CONFIG_MMC_MESON_GX=y
399 410 CONFIG_MMC_SDHCI_MSM=y
400 411 CONFIG_MMC_SPI=y
401 412 CONFIG_MMC_SDHI=y
··· 405 414 CONFIG_MMC_DW_K3=y
406 415 CONFIG_MMC_DW_ROCKCHIP=y
407 416 CONFIG_MMC_SUNXI=y
408 - CONFIG_MMC_SDHCI_XENON=y
409 417 CONFIG_MMC_BCM2835=y
418 + CONFIG_MMC_SDHCI_XENON=y
410 419 CONFIG_NEW_LEDS=y
411 420 CONFIG_LEDS_CLASS=y
412 421 CONFIG_LEDS_GPIO=y
413 422 CONFIG_LEDS_PWM=y
414 423 CONFIG_LEDS_SYSCON=y
415 - CONFIG_LEDS_TRIGGERS=y
416 - CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
417 424 CONFIG_LEDS_TRIGGER_HEARTBEAT=y
418 425 CONFIG_LEDS_TRIGGER_CPU=y
426 + CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
419 427 CONFIG_RTC_CLASS=y
420 428 CONFIG_RTC_DRV_MAX77686=y
429 + CONFIG_RTC_DRV_RK808=m
421 430 CONFIG_RTC_DRV_S5M=y
422 431 CONFIG_RTC_DRV_DS3232=y
423 432 CONFIG_RTC_DRV_EFI=y
433 + CONFIG_RTC_DRV_S3C=y
424 434 CONFIG_RTC_DRV_PL031=y
425 435 CONFIG_RTC_DRV_SUN6I=y
426 - CONFIG_RTC_DRV_RK808=m
427 436 CONFIG_RTC_DRV_TEGRA=y
428 437 CONFIG_RTC_DRV_XGENE=y
429 - CONFIG_RTC_DRV_S3C=y
430 438 CONFIG_DMADEVICES=y
439 + CONFIG_DMA_BCM2835=m
431 440 CONFIG_MV_XOR_V2=y
432 441 CONFIG_PL330_DMA=y
433 - CONFIG_DMA_BCM2835=m
434 442 CONFIG_TEGRA20_APB_DMA=y
435 443 CONFIG_QCOM_BAM_DMA=y
436 444 CONFIG_QCOM_HIDMA_MGMT=y
··· 442 452 CONFIG_VIRTIO_MMIO=y
443 453 CONFIG_XEN_GNTDEV=y
444 454 CONFIG_XEN_GRANT_DEV_ALLOC=y
455 + CONFIG_COMMON_CLK_RK808=y
445 456 CONFIG_COMMON_CLK_SCPI=y
446 457 CONFIG_COMMON_CLK_CS2000_CP=y
447 458 CONFIG_COMMON_CLK_S2MPS11=y
448 - CONFIG_COMMON_CLK_PWM=y
449 - CONFIG_COMMON_CLK_RK808=y
450 459 CONFIG_CLK_QORIQ=y
460 + CONFIG_COMMON_CLK_PWM=y
451 461 CONFIG_COMMON_CLK_QCOM=y
462 + CONFIG_QCOM_CLK_SMD_RPM=y
452 463 CONFIG_MSM_GCC_8916=y
453 464 CONFIG_MSM_GCC_8994=y
454 465 CONFIG_MSM_MMCC_8996=y
455 466 CONFIG_HWSPINLOCK_QCOM=y
456 - CONFIG_MAILBOX=y
457 467 CONFIG_ARM_MHU=y
458 468 CONFIG_PLATFORM_MHU=y
459 469 CONFIG_BCM2835_MBOX=y
460 470 CONFIG_HI6220_MBOX=y
461 471 CONFIG_ARM_SMMU=y
462 472 CONFIG_ARM_SMMU_V3=y
473 + CONFIG_RPMSG_QCOM_SMD=y
463 474 CONFIG_RASPBERRYPI_POWER=y
464 475 CONFIG_QCOM_SMEM=y
465 - CONFIG_QCOM_SMD=y
466 476 CONFIG_QCOM_SMD_RPM=y
477 + CONFIG_QCOM_SMP2P=y
478 + CONFIG_QCOM_SMSM=y
467 479 CONFIG_ROCKCHIP_PM_DOMAINS=y
468 480 CONFIG_ARCH_TEGRA_132_SOC=y
469 481 CONFIG_ARCH_TEGRA_210_SOC=y
470 482 CONFIG_ARCH_TEGRA_186_SOC=y
471 483 CONFIG_EXTCON_USB_GPIO=y
484 + CONFIG_IIO=y
485 + CONFIG_EXYNOS_ADC=y
472 486 CONFIG_PWM=y
473 487 CONFIG_PWM_BCM2835=m
474 - CONFIG_PWM_ROCKCHIP=y
475 - CONFIG_PWM_TEGRA=m
476 488 CONFIG_PWM_MESON=m
477 - CONFIG_COMMON_RESET_HI6220=y
489 + CONFIG_PWM_ROCKCHIP=y
490 + CONFIG_PWM_SAMSUNG=y
491 + CONFIG_PWM_TEGRA=m
478 492 CONFIG_PHY_RCAR_GEN3_USB2=y
479 493 CONFIG_PHY_HI6220_USB=y
494 + CONFIG_PHY_SUN4I_USB=y
480 495 CONFIG_PHY_ROCKCHIP_INNO_USB2=y
481 496 CONFIG_PHY_ROCKCHIP_EMMC=y
482 - CONFIG_PHY_SUN4I_USB=y
483 497 CONFIG_PHY_XGENE=y
484 498 CONFIG_PHY_TEGRA_XUSB=y
485 499 CONFIG_ARM_SCPI_PROTOCOL=y
486 - CONFIG_ACPI=y
487 - CONFIG_IIO=y
488 - CONFIG_EXYNOS_ADC=y
489 - CONFIG_PWM_SAMSUNG=y
490 500 CONFIG_RASPBERRYPI_FIRMWARE=y
501 + CONFIG_ACPI=y
491 502 CONFIG_EXT2_FS=y
492 503 CONFIG_EXT3_FS=y
493 504 CONFIG_EXT4_FS_POSIX_ACL=y
··· 502 511 CONFIG_CUSE=m
503 512 CONFIG_OVERLAY_FS=m
504 513 CONFIG_VFAT_FS=y
505 - CONFIG_TMPFS=y
506 514 CONFIG_HUGETLBFS=y
507 515 CONFIG_CONFIGFS_FS=y
508 516 CONFIG_EFIVAR_FS=y
··· 529 539 CONFIG_SECURITY=y
530 540 CONFIG_CRYPTO_ECHAINIV=y
531 541 CONFIG_CRYPTO_ANSI_CPRNG=y
532 - CONFIG_CRYPTO_DEV_SAFEXCEL=m
533 542 CONFIG_ARM64_CRYPTO=y
534 543 CONFIG_CRYPTO_SHA1_ARM64_CE=y
535 544 CONFIG_CRYPTO_SHA2_ARM64_CE=y
536 545 CONFIG_CRYPTO_GHASH_ARM64_CE=y
537 546 CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
538 547 CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
539 - # CONFIG_CRYPTO_AES_ARM64_NEON_BLK is not set
-1
arch/arm64/include/asm/atomic_ll_sc.h
··· 264 264 " st" #rel "xr" #sz "\t%w[tmp], %" #w "[new], %[v]\n" \
265 265 " cbnz %w[tmp], 1b\n" \
266 266 " " #mb "\n" \
267 - " mov %" #w "[oldval], %" #w "[old]\n" \
268 267 "2:" \
269 268 : [tmp] "=&r" (tmp), [oldval] "=&r" (oldval), \
270 269 [v] "+Q" (*(unsigned long *)ptr) \
+10 -2
arch/arm64/include/asm/cpufeature.h
··· 115 115
116 116 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
117 117 extern struct static_key_false cpu_hwcap_keys[ARM64_NCAPS];
118 + extern struct static_key_false arm64_const_caps_ready;
118 119
119 120 bool this_cpu_has_cap(unsigned int cap);
120 121
··· 125 124 }
126 125
127 126 /* System capability check for constant caps */
128 - static inline bool cpus_have_const_cap(int num)
127 + static inline bool __cpus_have_const_cap(int num)
129 128 {
130 129 if (num >= ARM64_NCAPS)
131 130 return false;
··· 139 138 return test_bit(num, cpu_hwcaps);
140 139 }
141 140
141 + static inline bool cpus_have_const_cap(int num)
142 + {
143 + if (static_branch_likely(&arm64_const_caps_ready))
144 + return __cpus_have_const_cap(num);
145 + else
146 + return cpus_have_cap(num);
147 + }
148 +
142 149 static inline void cpus_set_cap(unsigned int num)
143 150 {
144 151 if (num >= ARM64_NCAPS) {
··· 154 145 num, ARM64_NCAPS);
155 146 } else {
156 147 __set_bit(num, cpu_hwcaps);
157 - static_branch_enable(&cpu_hwcap_keys[num]);
158 148 }
159 149 }
160 150
+6 -2
arch/arm64/include/asm/kvm_host.h
··· 24 24
25 25 #include <linux/types.h>
26 26 #include <linux/kvm_types.h>
27 + #include <asm/cpufeature.h>
27 28 #include <asm/kvm.h>
28 29 #include <asm/kvm_asm.h>
29 30 #include <asm/kvm_mmio.h>
··· 356 355 unsigned long vector_ptr)
357 356 {
358 357 /*
359 - * Call initialization code, and switch to the full blown
360 - * HYP code.
358 + * Call initialization code, and switch to the full blown HYP code.
359 + * If the cpucaps haven't been finalized yet, something has gone very
360 + * wrong, and hyp will crash and burn when it uses any
361 + * cpus_have_const_cap() wrapper.
361 362 */
363 + BUG_ON(!static_branch_likely(&arm64_const_caps_ready));
362 364 __kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr);
363 365 }
364 366
+21 -2
arch/arm64/kernel/cpufeature.c
··· 985 985 */
986 986 void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
987 987 {
988 - for (; caps->matches; caps++)
989 - if (caps->enable && cpus_have_cap(caps->capability))
988 + for (; caps->matches; caps++) {
989 + unsigned int num = caps->capability;
990 +
991 + if (!cpus_have_cap(num))
992 + continue;
993 +
994 + /* Ensure cpus_have_const_cap(num) works */
995 + static_branch_enable(&cpu_hwcap_keys[num]);
996 +
997 + if (caps->enable) {
990 998 /*
991 999 * Use stop_machine() as it schedules the work allowing
992 1000 * us to modify PSTATE, instead of on_each_cpu() which
··· 1002 994 * we return.
1003 995 */
1004 996 stop_machine(caps->enable, NULL, cpu_online_mask);
997 + }
998 + }
1005 999 }
1006 1000
1007 1001 /*
··· 1106 1096 enable_cpu_capabilities(arm64_features);
1107 1097 }
1108 1098
1099 + DEFINE_STATIC_KEY_FALSE(arm64_const_caps_ready);
1100 + EXPORT_SYMBOL(arm64_const_caps_ready);
1101 +
1102 + static void __init mark_const_caps_ready(void)
1103 + {
1104 + static_branch_enable(&arm64_const_caps_ready);
1105 + }
1106 +
1109 1107 /*
1110 1108 * Check if the current CPU has a given feature capability.
1111 1109 * Should be called from non-preemptible context.
··· 1149 1131 /* Set the CPU feature capabilies */
1150 1132 setup_feature_capabilities();
1151 1133 enable_errata_workarounds();
1134 + mark_const_caps_ready();
1152 1135 setup_elf_hwcaps(arm64_elf_hwcaps);
1153 1136
1154 1137 if (system_supports_32bit_el0())
+16 -7
arch/arm64/kernel/perf_event.c
··· 877 877
878 878 if (attr->exclude_idle)
879 879 return -EPERM;
880 +
881 + /*
882 + * If we're running in hyp mode, then we *are* the hypervisor.
883 + * Therefore we ignore exclude_hv in this configuration, since
884 + * there's no hypervisor to sample anyway. This is consistent
885 + * with other architectures (x86 and Power).
886 + */
887 + if (is_kernel_in_hyp_mode()) {
888 + if (!attr->exclude_kernel)
889 + config_base |= ARMV8_PMU_INCLUDE_EL2;
890 + } else {
891 + if (attr->exclude_kernel)
892 + config_base |= ARMV8_PMU_EXCLUDE_EL1;
893 + if (!attr->exclude_hv)
894 + config_base |= ARMV8_PMU_INCLUDE_EL2;
895 + }
880 - if (is_kernel_in_hyp_mode() &&
881 - attr->exclude_kernel != attr->exclude_hv)
882 - return -EINVAL;
883 896 if (attr->exclude_user)
884 897 config_base |= ARMV8_PMU_EXCLUDE_EL0;
885 - if (!is_kernel_in_hyp_mode() && attr->exclude_kernel)
886 - config_base |= ARMV8_PMU_EXCLUDE_EL1;
887 - if (!attr->exclude_hv)
888 - config_base |= ARMV8_PMU_INCLUDE_EL2;
889 898
890 899 /*
891 900 * Install the filter into config_base as this is used to
+2
arch/arm64/kvm/hyp/Makefile
··· 2 2 # Makefile for Kernel-based Virtual Machine module, HYP part
3 3 #
4 4
5 + ccflags-y += -fno-stack-protector
6 +
5 7 KVM=../../../../virt/kvm
6 8
7 9 obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o
+4
arch/powerpc/include/asm/module.h
··· 14 14 #include <asm-generic/module.h>
15 15
16 16
17 + #ifdef CC_USING_MPROFILE_KERNEL
18 + #define MODULE_ARCH_VERMAGIC "mprofile-kernel"
19 + #endif
20 +
17 21 #ifndef __powerpc64__
18 22 /*
19 23 * Thanks to Paul M for explaining this.
+12
arch/powerpc/include/asm/page.h
··· 132 132 #define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
133 133 #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
134 134 #define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
135 +
136 + #ifdef CONFIG_PPC_BOOK3S_64
137 + /*
138 + * On hash the vmalloc and other regions alias to the kernel region when passed
139 + * through __pa(), which virt_to_pfn() uses. That means virt_addr_valid() can
140 + * return true for some vmalloc addresses, which is incorrect. So explicitly
141 + * check that the address is in the kernel region.
142 + */
143 + #define virt_addr_valid(kaddr) (REGION_ID(kaddr) == KERNEL_REGION_ID && \
144 + pfn_valid(virt_to_pfn(kaddr)))
145 + #else
135 146 #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
147 + #endif
136 148
137 149 /*
138 150 * On Book-E parts we need __va to parse the device tree and we can't
+1 -1
arch/powerpc/kernel/idle_book3s.S
··· 416 416 * which needs to be restored from the stack.
417 417 */
418 418 li r3, 1
419 - stb r0,PACA_NAPSTATELOST(r13)
419 + stb r3,PACA_NAPSTATELOST(r13)
420 420 blr
421 421
422 422 /*
+2 -1
arch/powerpc/kernel/kprobes.c
··· 305 305 save_previous_kprobe(kcb);
306 306 set_current_kprobe(p, regs, kcb);
307 307 kprobes_inc_nmissed_count(p);
308 - prepare_singlestep(p, regs);
309 308 kcb->kprobe_status = KPROBE_REENTER;
310 309 if (p->ainsn.boostable >= 0) {
311 310 ret = try_to_emulate(p, regs);
312 311
313 312 if (ret > 0) {
314 313 restore_previous_kprobe(kcb);
314 + preempt_enable_no_resched();
315 315 return 1;
316 316 }
317 317 }
318 + prepare_singlestep(p, regs);
318 319 return 1;
319 320 } else {
320 321 if (*addr != BREAKPOINT_INSTRUCTION) {
+19
arch/powerpc/kernel/process.c
··· 864 864 if (!MSR_TM_SUSPENDED(mfmsr()))
865 865 return;
866 866
867 + /*
868 + * If we are in a transaction and FP is off then we can't have
869 + * used FP inside that transaction. Hence the checkpointed
870 + * state is the same as the live state. We need to copy the
871 + * live state to the checkpointed state so that when the
872 + * transaction is restored, the checkpointed state is correct
873 + * and the aborted transaction sees the correct state. We use
874 + * ckpt_regs.msr here as that's what tm_reclaim will use to
875 + * determine if it's going to write the checkpointed state or
876 + * not. So either this will write the checkpointed registers,
877 + * or reclaim will. Similarly for VMX.
878 + */
879 + if ((thr->ckpt_regs.msr & MSR_FP) == 0)
880 + memcpy(&thr->ckfp_state, &thr->fp_state,
881 + sizeof(struct thread_fp_state));
882 + if ((thr->ckpt_regs.msr & MSR_VEC) == 0)
883 + memcpy(&thr->ckvr_state, &thr->vr_state,
884 + sizeof(struct thread_vr_state));
885 +
867 886 giveup_all(container_of(thr, struct task_struct, thread));
868 887
869 888 tm_reclaim(thr, thr->ckpt_regs.msr, cause);
+1 -1
arch/powerpc/kvm/Kconfig
··· 67 67 select KVM_BOOK3S_64_HANDLER
68 68 select KVM
69 69 select KVM_BOOK3S_PR_POSSIBLE if !KVM_BOOK3S_HV_POSSIBLE
70 - select SPAPR_TCE_IOMMU if IOMMU_SUPPORT
70 + select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_SERIES || PPC_POWERNV)
71 71 ---help---
72 72 Support running unmodified book3s_64 and book3s_32 guest kernels
73 73 in virtual machines on book3s_64 host processors.
+2 -2
arch/powerpc/kvm/Makefile
··· 46 46 e500_emulate.o
47 47 kvm-objs-$(CONFIG_KVM_E500MC) := $(kvm-e500mc-objs)
48 48
49 - kvm-book3s_64-builtin-objs-$(CONFIG_KVM_BOOK3S_64_HANDLER) := \
49 + kvm-book3s_64-builtin-objs-$(CONFIG_SPAPR_TCE_IOMMU) := \
50 50 book3s_64_vio_hv.o
51 51
52 52 kvm-pr-y := \
··· 90 90 book3s_xics.o
91 91
92 92 kvm-book3s_64-objs-$(CONFIG_KVM_XIVE) += book3s_xive.o
93 + kvm-book3s_64-objs-$(CONFIG_SPAPR_TCE_IOMMU) += book3s_64_vio.o
93 94
94 95 kvm-book3s_64-module-objs := \
95 96 $(common-objs-y) \
96 97 book3s.o \
97 - book3s_64_vio.o \
98 98 book3s_rtas.o \
99 99 $(kvm-book3s_64-objs-y)
100 100
+13
arch/powerpc/kvm/book3s_64_vio_hv.c
··· 301 301 /* udbg_printf("H_PUT_TCE(): liobn=0x%lx ioba=0x%lx, tce=0x%lx\n", */
302 302 /* liobn, ioba, tce); */
303 303
304 + /* For radix, we might be in virtual mode, so punt */
305 + if (kvm_is_radix(vcpu->kvm))
306 + return H_TOO_HARD;
307 +
304 308 stt = kvmppc_find_table(vcpu->kvm, liobn);
305 309 if (!stt)
306 310 return H_TOO_HARD;
··· 384 380 unsigned long *rmap = NULL;
385 381 bool prereg = false;
386 382 struct kvmppc_spapr_tce_iommu_table *stit;
383 +
384 + /* For radix, we might be in virtual mode, so punt */
385 + if (kvm_is_radix(vcpu->kvm))
386 + return H_TOO_HARD;
387 387
388 388 stt = kvmppc_find_table(vcpu->kvm, liobn);
389 389 if (!stt)
··· 499 491 long i, ret;
500 492 struct kvmppc_spapr_tce_iommu_table *stit;
501 493
494 + /* For radix, we might be in virtual mode, so punt */
495 + if (kvm_is_radix(vcpu->kvm))
496 + return H_TOO_HARD;
497 +
502 498 stt = kvmppc_find_table(vcpu->kvm, liobn);
503 499 if (!stt)
504 500 return H_TOO_HARD;
··· 539 527 return H_SUCCESS;
540 528 }
541 529
530 + /* This can be called in either virtual mode or real mode */
542 531 long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
543 532 unsigned long ioba)
544 533 {
+8 -1
arch/powerpc/kvm/book3s_hv_builtin.c
··· 207 207
208 208 long kvmppc_h_random(struct kvm_vcpu *vcpu)
209 209 {
210 - if (powernv_get_random_real_mode(&vcpu->arch.gpr[4]))
210 + int r;
211 +
212 + /* Only need to do the expensive mfmsr() on radix */
213 + if (kvm_is_radix(vcpu->kvm) && (mfmsr() & MSR_IR))
214 + r = powernv_get_random_long(&vcpu->arch.gpr[4]);
215 + else
216 + r = powernv_get_random_real_mode(&vcpu->arch.gpr[4]);
217 + if (r)
211 218 return H_SUCCESS;
212 219
213 220 return H_HARDWARE;
+58 -22
arch/powerpc/kvm/book3s_pr_papr.c
··· 50 50 pteg_addr = get_pteg_addr(vcpu, pte_index);
51 51
52 52 mutex_lock(&vcpu->kvm->arch.hpt_mutex);
53 - copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg));
53 + ret = H_FUNCTION;
54 + if (copy_from_user(pteg, (void __user *)pteg_addr, sizeof(pteg)))
55 + goto done;
54 56 hpte = pteg;
55 57
56 58 ret = H_PTEG_FULL;
··· 73 71 hpte[0] = cpu_to_be64(kvmppc_get_gpr(vcpu, 6));
74 72 hpte[1] = cpu_to_be64(kvmppc_get_gpr(vcpu, 7));
75 73 pteg_addr += i * HPTE_SIZE;
76 - copy_to_user((void __user *)pteg_addr, hpte, HPTE_SIZE);
74 + ret = H_FUNCTION;
75 + if (copy_to_user((void __user *)pteg_addr, hpte, HPTE_SIZE))
76 + goto done;
77 77 kvmppc_set_gpr(vcpu, 4, pte_index | i);
78 78 ret = H_SUCCESS;
79 79
··· 97 93
98 94 pteg = get_pteg_addr(vcpu, pte_index);
99 95 mutex_lock(&vcpu->kvm->arch.hpt_mutex);
100 - copy_from_user(pte, (void __user *)pteg, sizeof(pte));
96 + ret = H_FUNCTION;
97 + if (copy_from_user(pte, (void __user *)pteg, sizeof(pte)))
98 + goto done;
101 99 pte[0] = be64_to_cpu((__force __be64)pte[0]);
102 100 pte[1] = be64_to_cpu((__force __be64)pte[1]);
··· 109 103 ((flags & H_ANDCOND) && (pte[0] & avpn) != 0))
110 104 goto done;
111 105
112 - copy_to_user((void __user *)pteg, &v, sizeof(v));
106 + ret = H_FUNCTION;
107 + if (copy_to_user((void __user *)pteg, &v, sizeof(v)))
108 + goto done;
113 109
114 110 rb = compute_tlbie_rb(pte[0], pte[1], pte_index);
115 111 vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
··· 179 171 }
180 172
181 173 pteg = get_pteg_addr(vcpu, tsh & H_BULK_REMOVE_PTEX);
182 - copy_from_user(pte, (void __user *)pteg, sizeof(pte));
174 + if (copy_from_user(pte, (void __user *)pteg, sizeof(pte))) {
175 + ret = H_FUNCTION;
176 + break;
177 + }
183 178 pte[0] = be64_to_cpu((__force __be64)pte[0]);
184 179 pte[1] = be64_to_cpu((__force __be64)pte[1]);
··· 195 184 tsh |= H_BULK_REMOVE_NOT_FOUND;
196 185 } else {
197 186 /* Splat the pteg in (userland) hpt */
198 - copy_to_user((void __user *)pteg, &v, sizeof(v));
187 + if (copy_to_user((void __user *)pteg, &v, sizeof(v))) {
188 + ret = H_FUNCTION;
189 + break;
190 + }
199 191
200 192 rb = compute_tlbie_rb(pte[0], pte[1],
201 193 tsh & H_BULK_REMOVE_PTEX);
··· 225 211
226 212 pteg = get_pteg_addr(vcpu, pte_index);
227 213 mutex_lock(&vcpu->kvm->arch.hpt_mutex);
228 - copy_from_user(pte, (void __user *)pteg, sizeof(pte));
214 + ret = H_FUNCTION;
215 + if (copy_from_user(pte, (void __user *)pteg, sizeof(pte)))
216 + goto done;
229 217 pte[0] = be64_to_cpu((__force __be64)pte[0]);
230 218 pte[1] = be64_to_cpu((__force __be64)pte[1]);
··· 250 234 vcpu->arch.mmu.tlbie(vcpu, rb, rb & 1 ? true : false);
251 235 pte[0] = (__force u64)cpu_to_be64(pte[0]);
252 236 pte[1] = (__force u64)cpu_to_be64(pte[1]);
253 - copy_to_user((void __user *)pteg, pte, sizeof(pte));
237 + ret = H_FUNCTION;
238 + if (copy_to_user((void __user *)pteg, pte, sizeof(pte)))
239 + goto done;
254 240 ret = H_SUCCESS;
255 241
256 242 done:
257 243 mutex_unlock(&vcpu->kvm->arch.hpt_mutex);
258 244 kvmppc_set_gpr(vcpu, 3, ret);
259 245
260 - return EMULATE_DONE;
261 - }
262 -
263 - static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
264 - {
265 - unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
266 - unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
267 - unsigned long tce = kvmppc_get_gpr(vcpu, 6);
268 - long rc;
269 -
270 - rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
271 - if (rc == H_TOO_HARD)
272 - return EMULATE_FAIL;
273 - kvmppc_set_gpr(vcpu, 3, rc);
274 246 return EMULATE_DONE;
275 247 }
276 248
··· 278 274 long rc;
279 275
280 276 rc = kvmppc_h_logical_ci_store(vcpu);
277 + if (rc == H_TOO_HARD)
278 + return EMULATE_FAIL;
279 + kvmppc_set_gpr(vcpu, 3, rc);
280 + return EMULATE_DONE;
281 + }
282 +
283 + #ifdef CONFIG_SPAPR_TCE_IOMMU
284 + static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
285 + {
286 + unsigned long liobn = kvmppc_get_gpr(vcpu, 4);
287 + unsigned long ioba = kvmppc_get_gpr(vcpu, 5);
288 + unsigned long tce = kvmppc_get_gpr(vcpu, 6);
289 + long rc;
290 +
291 + rc = kvmppc_h_put_tce(vcpu, liobn, ioba, tce);
281 292 if (rc == H_TOO_HARD)
282 293 return EMULATE_FAIL;
283 294 kvmppc_set_gpr(vcpu, 3, rc);
··· 329 310 kvmppc_set_gpr(vcpu, 3, rc);
330 311 return EMULATE_DONE;
331 312 }
313 +
314 + #else /* CONFIG_SPAPR_TCE_IOMMU */
315 + static int kvmppc_h_pr_put_tce(struct kvm_vcpu *vcpu)
316 + {
317 + return EMULATE_FAIL;
318 + }
319 +
320 + static int kvmppc_h_pr_put_tce_indirect(struct kvm_vcpu *vcpu)
321 + {
322 + return EMULATE_FAIL;
323 + }
324 +
325 + static int kvmppc_h_pr_stuff_tce(struct kvm_vcpu *vcpu)
326 + {
327 + return EMULATE_FAIL;
328 + }
329 + #endif /* CONFIG_SPAPR_TCE_IOMMU */
332 330
333 331 static int kvmppc_h_pr_xics_hcall(struct kvm_vcpu *vcpu, u32 cmd)
334 332 {
+3 -1
arch/powerpc/kvm/powerpc.c
··· 1749 1749 r = kvm_vm_ioctl_enable_cap(kvm, &cap);
1750 1750 break;
1751 1751 }
1752 - #ifdef CONFIG_PPC_BOOK3S_64
1752 + #ifdef CONFIG_SPAPR_TCE_IOMMU
1753 1753 case KVM_CREATE_SPAPR_TCE_64: {
1754 1754 struct kvm_create_spapr_tce_64 create_tce_64;
1755 1755
··· 1780 1780 r = kvm_vm_ioctl_create_spapr_tce(kvm, &create_tce_64);
1781 1781 goto out;
1782 1782 }
1783 + #endif
1784 + #ifdef CONFIG_PPC_BOOK3S_64
1783 1785 case KVM_PPC_GET_SMMU_INFO: {
1784 1786 struct kvm_ppc_smmu_info info;
1785 1787 struct kvm *kvm = filp->private_data;
+4 -3
arch/powerpc/mm/dump_linuxpagetables.c
··· 16 16 */
17 17 #include <linux/debugfs.h>
18 18 #include <linux/fs.h>
19 + #include <linux/hugetlb.h>
19 20 #include <linux/io.h>
20 21 #include <linux/mm.h>
21 22 #include <linux/sched.h>
··· 392 391
393 392 for (i = 0; i < PTRS_PER_PMD; i++, pmd++) {
394 393 addr = start + i * PMD_SIZE;
395 - if (!pmd_none(*pmd))
394 + if (!pmd_none(*pmd) && !pmd_huge(*pmd))
396 395 /* pmd exists */
397 396 walk_pte(st, pmd, addr);
398 397 else
··· 408 407
409 408 for (i = 0; i < PTRS_PER_PUD; i++, pud++) {
410 409 addr = start + i * PUD_SIZE;
411 - if (!pud_none(*pud))
410 + if (!pud_none(*pud) && !pud_huge(*pud))
412 411 /* pud exists */
413 412 walk_pmd(st, pud, addr);
414 413 else
··· 428 427 */
429 428 for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
430 429 addr = KERN_VIRT_START + i * PGDIR_SIZE;
431 - if (!pgd_none(*pgd))
430 + if (!pgd_none(*pgd) && !pgd_huge(*pgd))
432 431 /* pgd exists */
433 432 walk_pud(st, pgd, addr);
434 433 else
+1 -1
arch/x86/include/asm/kvm_host.h
··· 43 43 #define KVM_PRIVATE_MEM_SLOTS 3
44 44 #define KVM_MEM_SLOTS_NUM (KVM_USER_MEM_SLOTS + KVM_PRIVATE_MEM_SLOTS)
45 45
46 - #define KVM_HALT_POLL_NS_DEFAULT 400000
46 + #define KVM_HALT_POLL_NS_DEFAULT 200000
47 47
48 48 #define KVM_IRQCHIP_NUM_PINS KVM_IOAPIC_NUM_PINS
49 49
+6 -5
arch/x86/include/asm/uaccess.h
··· 319 319 #define __get_user_asm_u64(x, ptr, retval, errret) \
320 320 ({ \
321 321 __typeof__(ptr) __ptr = (ptr); \
322 - asm volatile(ASM_STAC "\n" \
322 + asm volatile("\n" \
323 323 "1: movl %2,%%eax\n" \
324 324 "2: movl %3,%%edx\n" \
325 - "3: " ASM_CLAC "\n" \
325 + "3:\n" \
326 326 ".section .fixup,\"ax\"\n" \
327 327 "4: mov %4,%0\n" \
328 328 " xorl %%eax,%%eax\n" \
··· 331 331 ".previous\n" \
332 332 _ASM_EXTABLE(1b, 4b) \
333 333 _ASM_EXTABLE(2b, 4b) \
334 - : "=r" (retval), "=A"(x) \
334 + : "=r" (retval), "=&A"(x) \
335 335 : "m" (__m(__ptr)), "m" __m(((u32 *)(__ptr)) + 1), \
336 336 "i" (errret), "0" (retval)); \
337 337 })
··· 703 703 #define unsafe_put_user(x, ptr, err_label) \
704 704 do { \
705 705 int __pu_err; \
706 - __put_user_size((x), (ptr), sizeof(*(ptr)), __pu_err, -EFAULT); \
706 + __typeof__(*(ptr)) __pu_val = (x); \
707 + __put_user_size(__pu_val, (ptr), sizeof(*(ptr)), __pu_err, -EFAULT); \
707 708 if (unlikely(__pu_err)) goto err_label; \
708 709 } while (0)
709 710
710 711 #define unsafe_get_user(x, ptr, err_label) \
711 712 do { \
712 713 int __gu_err; \
713 - unsigned long __gu_val; \
714 + __inttype(*(ptr)) __gu_val; \
714 715 __get_user_size(__gu_val, (ptr), sizeof(*(ptr)), __gu_err, -EFAULT); \
715 716 (x) = (__force __typeof__(*(ptr)))__gu_val; \
716 717 if (unlikely(__gu_err)) goto err_label; \
+1
arch/x86/kernel/fpu/init.c
··· 90 90 * Boot time FPU feature detection code:
91 91 */
92 92 unsigned int mxcsr_feature_mask __read_mostly = 0xffffffffu;
93 + EXPORT_SYMBOL_GPL(mxcsr_feature_mask);
93 94
94 95 static void __init fpu__init_system_mxcsr(void)
95 96 {
+1 -1
arch/x86/kvm/emulate.c
··· 4173 4173
4174 4174 static int check_svme(struct x86_emulate_ctxt *ctxt)
4175 4175 {
4176 - u64 efer;
4176 + u64 efer = 0;
4177 4177
4178 4178 ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
4179 4179
+21 -14
arch/x86/kvm/paging_tmpl.h
··· 283 283 pt_element_t pte;
284 284 pt_element_t __user *uninitialized_var(ptep_user);
285 285 gfn_t table_gfn;
286 - unsigned index, pt_access, pte_access, accessed_dirty, pte_pkey;
286 + u64 pt_access, pte_access;
287 + unsigned index, accessed_dirty, pte_pkey;
287 288 unsigned nested_access;
288 289 gpa_t pte_gpa;
289 290 bool have_ad;
290 291 int offset;
292 + u64 walk_nx_mask = 0;
291 293 const int write_fault = access & PFERR_WRITE_MASK;
292 294 const int user_fault = access & PFERR_USER_MASK;
293 295 const int fetch_fault = access & PFERR_FETCH_MASK;
··· 304 302 have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
305 303
306 304 #if PTTYPE == 64
305 + walk_nx_mask = 1ULL << PT64_NX_SHIFT;
307 306 if (walker->level == PT32E_ROOT_LEVEL) {
308 307 pte = mmu->get_pdptr(vcpu, (addr >> 30) & 3);
309 308 trace_kvm_mmu_paging_element(pte, walker->level);
··· 316 313 walker->max_level = walker->level;
317 314 ASSERT(!(is_long_mode(vcpu) && !is_pae(vcpu)));
318 315
319 - accessed_dirty = have_ad ? PT_GUEST_ACCESSED_MASK : 0;
320 -
321 316 /*
322 317 * FIXME: on Intel processors, loads of the PDPTE registers for PAE paging
323 318 * by the MOV to CR instruction are treated as reads and do not cause the
··· 323 322 */
324 323 nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK;
325 324
326 - pt_access = pte_access = ACC_ALL;
325 + pte_access = ~0;
327 326 ++walker->level;
328 327
329 328 do {
330 329 gfn_t real_gfn;
331 330 unsigned long host_addr;
332 331
333 - pt_access &= pte_access;
332 + pt_access = pte_access;
334 333 --walker->level;
335 334
336 335 index = PT_INDEX(addr, walker->level);
··· 372 371
373 372 trace_kvm_mmu_paging_element(pte, walker->level);
374 373
374 + /*
375 + * Inverting the NX it lets us AND it like other
376 + * permission bits.
377 + */
378 + pte_access = pt_access & (pte ^ walk_nx_mask);
379 +
375 380 if (unlikely(!FNAME(is_present_gpte)(pte)))
376 381 goto error;
··· 386 379 goto error;
387 380 }
388 381
389 - accessed_dirty &= pte;
390 - pte_access = pt_access & FNAME(gpte_access)(vcpu, pte);
391 -
392 382 walker->ptes[walker->level - 1] = pte;
393 383 } while (!is_last_gpte(mmu, walker->level, pte));
394 384
395 385 pte_pkey = FNAME(gpte_pkeys)(vcpu, pte);
396 - errcode = permission_fault(vcpu, mmu, pte_access, pte_pkey, access);
386 + accessed_dirty = have_ad ? pte_access & PT_GUEST_ACCESSED_MASK : 0;
387 +
388 + /* Convert to ACC_*_MASK flags for struct guest_walker. */
389 + walker->pt_access = FNAME(gpte_access)(vcpu, pt_access ^ walk_nx_mask);
390 + walker->pte_access = FNAME(gpte_access)(vcpu, pte_access ^ walk_nx_mask);
391 + errcode = permission_fault(vcpu, mmu, walker->pte_access, pte_pkey, access);
397 392 if (unlikely(errcode))
398 393 goto error;
··· 412 403 walker->gfn = real_gpa >> PAGE_SHIFT;
413 404
414 405 if (!write_fault)
415 - FNAME(protect_clean_gpte)(mmu, &pte_access, pte);
406 + FNAME(protect_clean_gpte)(mmu, &walker->pte_access, pte);
416 407 else
417 408 /*
418 409 * On a write fault, fold the dirty bit into accessed_dirty.
··· 430 421 goto retry_walk;
431 422 }
432 423
433 - walker->pt_access = pt_access;
434 - walker->pte_access = pte_access;
435 424 pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
436 425 __func__, (u64)pte, walker->pte_access, walker->pt_access);
437 426 return 1;
438 427
439 428 error:
··· 459 452 */
460 453 if (!(errcode & PFERR_RSVD_MASK)) {
461 454 vcpu->arch.exit_qualification &= 0x187;
462 - vcpu->arch.exit_qualification |= ((pt_access & pte) & 0x7) << 3;
455 + vcpu->arch.exit_qualification |= (pte_access & 0x7) << 3;
463 456 }
464 457 #endif
465 458 walker->fault.address = addr;
+1 -1
arch/x86/kvm/pmu_intel.c
··· 294 294 ((u64)1 << edx.split.bit_width_fixed) - 1;
295 295 }
296 296
297 - pmu->global_ctrl = ((1 << pmu->nr_arch_gp_counters) - 1) |
297 + pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) |
298 298 (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED);
299 299 pmu->global_ctrl_mask = ~pmu->global_ctrl;
300 300
+2 -1
arch/x86/kvm/svm.c
··· 1272 1272
1273 1273 }
1274 1274
1275 - static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu, int index)
1275 + static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
1276 + unsigned int index)
1276 1277 {
1277 1278 u64 *avic_physical_id_table;
1278 1279 struct kvm_arch *vm_data = &vcpu->kvm->arch;
+2 -2
arch/x86/kvm/vmx.c
··· 6504 6504 enable_ept_ad_bits = 0;
6505 6505 }
6506 6506
6507 - if (!cpu_has_vmx_ept_ad_bits())
6507 + if (!cpu_has_vmx_ept_ad_bits() || !enable_ept)
6508 6508 enable_ept_ad_bits = 0;
6509 6509
6510 6510 if (!cpu_has_vmx_unrestricted_guest())
··· 11213 11213 if (!nested_cpu_has_pml(vmcs12))
11214 11214 return 0;
11215 11215
11216 - if (vmcs12->guest_pml_index > PML_ENTITY_NUM) {
11216 + if (vmcs12->guest_pml_index >= PML_ENTITY_NUM) {
11217 11217 vmx->nested.pml_full = true;
11218 11218 return 1;
11219 11219 }
+33 -12
arch/x86/kvm/x86.c
··· 1763 1763 {
1764 1764 struct kvm_arch *ka = &kvm->arch;
1765 1765 struct pvclock_vcpu_time_info hv_clock;
1766 + u64 ret;
1766 1767
1767 1768 spin_lock(&ka->pvclock_gtod_sync_lock);
1768 1769 if (!ka->use_master_clock) {
··· 1775 1774 hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
1776 1775 spin_unlock(&ka->pvclock_gtod_sync_lock);
1777 1776
1777 + /* both __this_cpu_read() and rdtsc() should be on the same cpu */
1778 + get_cpu();
1779 +
1778 1780 kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
1779 1781 &hv_clock.tsc_shift,
1780 1782 &hv_clock.tsc_to_system_mul);
1781 - return __pvclock_read_cycles(&hv_clock, rdtsc());
1783 + ret = __pvclock_read_cycles(&hv_clock, rdtsc());
1784 +
1785 + put_cpu();
1786 +
1787 + return ret;
1782 1788 }
1783 1789
1784 1790 static void kvm_setup_pvclock_page(struct kvm_vcpu *v)
··· 3296 3288 }
3297 3289 }
3298 3290
3291 + #define XSAVE_MXCSR_OFFSET 24
3292 +
3299 3293 static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
3300 3294 struct kvm_xsave *guest_xsave)
3301 3295 {
3302 3296 u64 xstate_bv =
3303 3297 *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
3298 + u32 mxcsr = *(u32 *)&guest_xsave->region[XSAVE_MXCSR_OFFSET / sizeof(u32)];
3304 3299
3305 3300 if (boot_cpu_has(X86_FEATURE_XSAVE)) {
3306 3301 /*
··· 3311 3300 * CPUID leaf 0xD, index 0, EDX:EAX. This is for compatibility
3312 3301 * with old userspace.
3313 3302 */
3314 - if (xstate_bv & ~kvm_supported_xcr0())
3303 + if (xstate_bv & ~kvm_supported_xcr0() ||
3304 + mxcsr & ~mxcsr_feature_mask)
3315 3305 return -EINVAL;
3316 3306 load_xsave(vcpu, (u8 *)guest_xsave->region);
3317 3307 } else {
3318 - if (xstate_bv & ~XFEATURE_MASK_FPSSE)
3308 + if (xstate_bv & ~XFEATURE_MASK_FPSSE ||
3309 + mxcsr & ~mxcsr_feature_mask)
3319 3310 return -EINVAL;
3320 3311 memcpy(&vcpu->arch.guest_fpu.state.fxsave,
3321 3312 guest_xsave->region, sizeof(struct fxregs_state));
··· 4831 4818
4832 4819 static int kernel_pio(struct kvm_vcpu *vcpu, void *pd)
4833 4820 {
4834 - /* TODO: String I/O for in kernel device */
4835 - int r;
4821 + int r = 0, i;
4836 4822
4837 - if (vcpu->arch.pio.in)
4838 - r = kvm_io_bus_read(vcpu, KVM_PIO_BUS, vcpu->arch.pio.port,
4839 - vcpu->arch.pio.size, pd);
4840 - else
4841 - r = kvm_io_bus_write(vcpu, KVM_PIO_BUS,
4842 - vcpu->arch.pio.port, vcpu->arch.pio.size,
4843 - pd);
4823 + for (i = 0; i < vcpu->arch.pio.count; i++) {
4824 + if (vcpu->arch.pio.in)
4825 + r = kvm_io_bus_read(vcpu, KVM_PIO_BUS, vcpu->arch.pio.port,
4826 + vcpu->arch.pio.size, pd);
4827 + else
4828 + r = kvm_io_bus_write(vcpu, KVM_PIO_BUS,
4829 + vcpu->arch.pio.port, vcpu->arch.pio.size,
4830 + pd);
4831 + if (r)
4832 + break;
4833 + pd += vcpu->arch.pio.size;
4834 + }
4844 4835 return r;
4845 4836 }
4846 4837
··· 4881 4864
4882 4865 if (vcpu->arch.pio.count)
4883 4866 goto data_avail;
4867 +
4868 + memset(vcpu->arch.pio_data, 0, size * count);
4884 4869
4885 4870 ret = emulator_pio_in_out(vcpu, size, port, val, count, true);
4886 4871 if (ret) {
··· 5067 5048
5068 5049 if (var.unusable) {
5069 5050 memset(desc, 0, sizeof(*desc));
5051 + if (base3)
5052 + *base3 = 0;
5070 5053 return false;
5071 5054 }
5072 5055
+4 -11
arch/x86/xen/enlighten_pv.c
··· 142 142 struct xen_extraversion extra;
143 143 HYPERVISOR_xen_version(XENVER_extraversion, &extra);
144 144
145 - pr_info("Booting paravirtualized kernel %son %s\n",
146 - xen_feature(XENFEAT_auto_translated_physmap) ?
147 - "with PVH extensions " : "", pv_info.name);
145 + pr_info("Booting paravirtualized kernel on %s\n", pv_info.name);
148 146 printk(KERN_INFO "Xen version: %d.%d%s%s\n",
149 147 version >> 16, version & 0xffff, extra.extraversion,
150 148 xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
··· 955 957
956 958 void xen_setup_shared_info(void)
957 959 {
958 - if (!xen_feature(XENFEAT_auto_translated_physmap)) {
959 - set_fixmap(FIX_PARAVIRT_BOOTMAP,
960 - xen_start_info->shared_info);
960 + set_fixmap(FIX_PARAVIRT_BOOTMAP, xen_start_info->shared_info);
961 961
962 - HYPERVISOR_shared_info =
963 - (struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
964 - } else
965 - HYPERVISOR_shared_info =
966 - (struct shared_info *)__va(xen_start_info->shared_info);
962 + HYPERVISOR_shared_info =
963 + (struct shared_info *)fix_to_virt(FIX_PARAVIRT_BOOTMAP);
967 964
968 965 #ifndef CONFIG_SMP
969 966 /* In UP this is as good a place as any to set up shared info */
+1 -1
arch/x86/xen/mmu.c
··· 42 42 }
43 43 EXPORT_SYMBOL_GPL(arbitrary_virt_to_machine);
44 44
45 - void xen_flush_tlb_all(void)
45 + static void xen_flush_tlb_all(void)
46 46 {
47 47 struct mmuext_op *op;
48 48 struct multicall_space mcs;
+36 -62
arch/x86/xen/mmu_pv.c
··· 355 355 pteval_t flags = val & PTE_FLAGS_MASK; 356 356 unsigned long mfn; 357 357 358 - if (!xen_feature(XENFEAT_auto_translated_physmap)) 359 - mfn = __pfn_to_mfn(pfn); 360 - else 361 - mfn = pfn; 358 + mfn = __pfn_to_mfn(pfn); 359 + 362 360 /* 363 361 * If there's no mfn for the pfn, then just create an 364 362 * empty non-present pte. Unfortunately this loses ··· 644 646 /* The limit is the last byte to be touched */ 645 647 limit--; 646 648 BUG_ON(limit >= FIXADDR_TOP); 647 - 648 - if (xen_feature(XENFEAT_auto_translated_physmap)) 649 - return 0; 650 649 651 650 /* 652 651 * 64-bit has a great big hole in the middle of the address ··· 1284 1289 1285 1290 static void __init xen_pagetable_p2m_setup(void) 1286 1291 { 1287 - if (xen_feature(XENFEAT_auto_translated_physmap)) 1288 - return; 1289 - 1290 1292 xen_vmalloc_p2m_tree(); 1291 1293 1292 1294 #ifdef CONFIG_X86_64 ··· 1306 1314 xen_build_mfn_list_list(); 1307 1315 1308 1316 /* Remap memory freed due to conflicts with E820 map */ 1309 - if (!xen_feature(XENFEAT_auto_translated_physmap)) 1310 - xen_remap_memory(); 1317 + xen_remap_memory(); 1311 1318 1312 1319 xen_setup_shared_info(); 1313 1320 } ··· 1916 1925 /* Zap identity mapping */ 1917 1926 init_level4_pgt[0] = __pgd(0); 1918 1927 1919 - if (!xen_feature(XENFEAT_auto_translated_physmap)) { 1920 - /* Pre-constructed entries are in pfn, so convert to mfn */ 1921 - /* L4[272] -> level3_ident_pgt 1922 - * L4[511] -> level3_kernel_pgt */ 1923 - convert_pfn_mfn(init_level4_pgt); 1928 + /* Pre-constructed entries are in pfn, so convert to mfn */ 1929 + /* L4[272] -> level3_ident_pgt */ 1930 + /* L4[511] -> level3_kernel_pgt */ 1931 + convert_pfn_mfn(init_level4_pgt); 1924 1932 1925 - /* L3_i[0] -> level2_ident_pgt */ 1926 - convert_pfn_mfn(level3_ident_pgt); 1927 - /* L3_k[510] -> level2_kernel_pgt 1928 - * L3_k[511] -> level2_fixmap_pgt */ 1929 - convert_pfn_mfn(level3_kernel_pgt); 1933 + /* L3_i[0] -> level2_ident_pgt */ 1934 + convert_pfn_mfn(level3_ident_pgt); 1935 + /* L3_k[510] -> level2_kernel_pgt */ 1936 + /* L3_k[511] -> level2_fixmap_pgt */ 1937 + convert_pfn_mfn(level3_kernel_pgt); 1930 1938 1931 - /* L3_k[511][506] -> level1_fixmap_pgt */ 1932 - convert_pfn_mfn(level2_fixmap_pgt); 1933 - } 1939 + /* L3_k[511][506] -> level1_fixmap_pgt */ 1940 + convert_pfn_mfn(level2_fixmap_pgt); 1941 + 1934 1942 /* We get [511][511] and have Xen's version of level2_kernel_pgt */ 1935 1943 l3 = m2v(pgd[pgd_index(__START_KERNEL_map)].pgd); 1936 1944 l2 = m2v(l3[pud_index(__START_KERNEL_map)].pud); ··· 1952 1962 if (i && i < pgd_index(__START_KERNEL_map)) 1953 1963 init_level4_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i]; 1954 1964 1955 - if (!xen_feature(XENFEAT_auto_translated_physmap)) { 1956 - /* Make pagetable pieces RO */ 1957 - set_page_prot(init_level4_pgt, PAGE_KERNEL_RO); 1958 - set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO); 1959 - set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO); 1960 - set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO); 1961 - set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO); 1962 - set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO); 1963 - set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO); 1964 - set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO); 1965 + /* Make pagetable pieces RO */ 1966 + set_page_prot(init_level4_pgt, PAGE_KERNEL_RO); 1967 + set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO); 1968 + set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO); 1969 + set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO); 1970 + set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO); 1971 + set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO); 1972 + set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO); 1973 + set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO); 1965 1974 1966 - /* Pin down new L4 */ 1967 - pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, 1968 - PFN_DOWN(__pa_symbol(init_level4_pgt))); 1975 + /* Pin down new L4 */ 1976 + pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, 1977 + PFN_DOWN(__pa_symbol(init_level4_pgt))); 1969 1978 1970 - /* Unpin Xen-provided one */ 1971 - pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd))); 1979 + /* Unpin Xen-provided one */ 1980 + pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd))); 1972 1981 1973 - /* 1974 - * At this stage there can be no user pgd, and no page 1975 - * structure to attach it to, so make sure we just set kernel 1976 - * pgd. 1977 - */ 1978 - xen_mc_batch(); 1979 - __xen_write_cr3(true, __pa(init_level4_pgt)); 1980 - xen_mc_issue(PARAVIRT_LAZY_CPU); 1981 - } else 1982 - native_write_cr3(__pa(init_level4_pgt)); 1982 + /* 1983 + * At this stage there can be no user pgd, and no page structure to 1984 + * attach it to, so make sure we just set kernel pgd. 1985 + */ 1986 + xen_mc_batch(); 1987 + __xen_write_cr3(true, __pa(init_level4_pgt)); 1988 + xen_mc_issue(PARAVIRT_LAZY_CPU); 1983 1989 1984 1990 /* We can't that easily rip out L3 and L2, as the Xen pagetables are 1985 1991 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ... for ··· 2389 2403 2390 2404 static void __init xen_post_allocator_init(void) 2391 2405 { 2392 - if (xen_feature(XENFEAT_auto_translated_physmap)) 2393 - return; 2394 - 2395 2406 pv_mmu_ops.set_pte = xen_set_pte; 2396 2407 pv_mmu_ops.set_pmd = xen_set_pmd; 2397 2408 pv_mmu_ops.set_pud = xen_set_pud; ··· 2493 2510 void __init xen_init_mmu_ops(void) 2494 2511 { 2495 2512 x86_init.paging.pagetable_init = xen_pagetable_init; 2496 - 2497 - if (xen_feature(XENFEAT_auto_translated_physmap)) 2498 - return; 2499 2513 2500 2514 pv_mmu_ops = xen_mmu_ops; 2501 2515 ··· 2630 2650 * this function are redundant and can be ignored. 2631 2651 */ 2632 2652 2633 - if (xen_feature(XENFEAT_auto_translated_physmap)) 2634 - return 0; 2635 - 2636 2653 if (unlikely(order > MAX_CONTIG_ORDER)) 2637 2654 return -ENOMEM; 2638 2655 ··· 2665 2688 unsigned long flags; 2666 2689 int success; 2667 2690 unsigned long vstart; 2668 - 2669 - if (xen_feature(XENFEAT_auto_translated_physmap)) 2670 - return; 2671 2691 2672 2692 if (unlikely(order > MAX_CONTIG_ORDER)) 2673 2693 return;
+9
drivers/acpi/button.c
··· 57 57 58 58 #define ACPI_BUTTON_LID_INIT_IGNORE 0x00 59 59 #define ACPI_BUTTON_LID_INIT_OPEN 0x01 60 + #define ACPI_BUTTON_LID_INIT_METHOD 0x02 60 61 61 62 #define _COMPONENT ACPI_BUTTON_COMPONENT 62 63 ACPI_MODULE_NAME("button"); ··· 377 376 case ACPI_BUTTON_LID_INIT_OPEN: 378 377 (void)acpi_lid_notify_state(device, 1); 379 378 break; 379 + case ACPI_BUTTON_LID_INIT_METHOD: 380 + (void)acpi_lid_update_state(device); 381 + break; 380 382 case ACPI_BUTTON_LID_INIT_IGNORE: 381 383 default: 382 384 break; ··· 564 560 if (!strncmp(val, "open", sizeof("open") - 1)) { 565 561 lid_init_state = ACPI_BUTTON_LID_INIT_OPEN; 566 562 pr_info("Notify initial lid state as open\n"); 563 + } else if (!strncmp(val, "method", sizeof("method") - 1)) { 564 + lid_init_state = ACPI_BUTTON_LID_INIT_METHOD; 565 + pr_info("Notify initial lid state with _LID return value\n"); 567 566 } else if (!strncmp(val, "ignore", sizeof("ignore") - 1)) { 568 567 lid_init_state = ACPI_BUTTON_LID_INIT_IGNORE; 569 568 pr_info("Do not notify initial lid state\n"); ··· 580 573 switch (lid_init_state) { 581 574 case ACPI_BUTTON_LID_INIT_OPEN: 582 575 return sprintf(buffer, "open"); 576 + case ACPI_BUTTON_LID_INIT_METHOD: 577 + return sprintf(buffer, "method"); 583 578 case ACPI_BUTTON_LID_INIT_IGNORE: 584 579 return sprintf(buffer, "ignore"); 585 580 default:
+5 -6
drivers/base/power/wakeup.c
··· 512 512 /** 513 513 * wakup_source_activate - Mark given wakeup source as active. 514 514 * @ws: Wakeup source to handle. 515 - * @hard: If set, abort suspends in progress and wake up from suspend-to-idle. 516 515 * 517 516 * Update the @ws' statistics and, if @ws has just been activated, notify the PM 518 517 * core of the event by incrementing the counter of of wakeup events being 519 518 * processed. 520 519 */ 521 - static void wakeup_source_activate(struct wakeup_source *ws, bool hard) 520 + static void wakeup_source_activate(struct wakeup_source *ws) 522 521 { 523 522 unsigned int cec; 524 523 525 524 if (WARN_ONCE(wakeup_source_not_registered(ws), 526 525 "unregistered wakeup source\n")) 527 526 return; 528 - 529 - if (hard) 530 - pm_system_wakeup(); 531 527 532 528 ws->active = true; 533 529 ws->active_count++; ··· 550 554 ws->wakeup_count++; 551 555 552 556 if (!ws->active) 553 - wakeup_source_activate(ws, hard); 557 + wakeup_source_activate(ws); 558 + 559 + if (hard) 560 + pm_system_wakeup(); 554 561 } 555 562 556 563 /**
+15 -12
drivers/block/drbd/drbd_req.c
··· 315 315 } 316 316 317 317 /* still holds resource->req_lock */ 318 - static int drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and_error *m, int put) 318 + static void drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and_error *m, int put) 319 319 { 320 320 struct drbd_device *device = req->device; 321 321 D_ASSERT(device, m || (req->rq_state & RQ_POSTPONED)); 322 322 323 + if (!put) 324 + return; 325 + 323 326 if (!atomic_sub_and_test(put, &req->completion_ref)) 324 - return 0; 327 + return; 325 328 326 329 drbd_req_complete(req, m); 330 + 331 + /* local completion may still come in later, 332 + * we need to keep the req object around. */ 333 + if (req->rq_state & RQ_LOCAL_ABORTED) 334 + return; 327 335 328 336 if (req->rq_state & RQ_POSTPONED) { 329 337 /* don't destroy the req object just yet, 330 338 * but queue it for retry */ 331 339 drbd_restart_request(req); 332 - return 0; 340 + return; 333 341 } 334 342 335 - return 1; 343 + kref_put(&req->kref, drbd_req_destroy); 336 344 } 337 345 338 346 static void set_if_null_req_next(struct drbd_peer_device *peer_device, struct drbd_request *req) ··· 527 519 if (req->i.waiting) 528 520 wake_up(&device->misc_wait); 529 521 530 - if (c_put) { 531 - if (drbd_req_put_completion_ref(req, m, c_put)) 532 - kref_put(&req->kref, drbd_req_destroy); 533 - } else { 534 - kref_put(&req->kref, drbd_req_destroy); 535 - } 522 + drbd_req_put_completion_ref(req, m, c_put); 523 + kref_put(&req->kref, drbd_req_destroy); 536 524 } 537 525 538 526 static void drbd_report_io_error(struct drbd_device *device, struct drbd_request *req) ··· 1370 1366 } 1371 1367 1372 1368 out: 1373 - if (drbd_req_put_completion_ref(req, &m, 1)) 1374 - kref_put(&req->kref, drbd_req_destroy); 1369 + drbd_req_put_completion_ref(req, &m, 1); 1375 1370 spin_unlock_irq(&resource->req_lock); 1376 1371 1377 1372 /* Even though above is a kref_put(), this is safe.
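The drbd_req.c change above makes drbd_req_put_completion_ref() return void and folds the final kref_put() into it, so the request is completed only when the last completion reference is dropped and a put of 0 is a no-op. A minimal single-threaded sketch of that pattern (struct and function names are stand-ins; plain integer arithmetic replaces the kernel's atomic_sub_and_test()/kref machinery):

```c
#include <assert.h>

/* Hypothetical stand-in for struct drbd_request's completion tracking. */
struct req {
	int completion_ref;	/* outstanding completion references */
	int completed;		/* set once completion would run */
};

/* Drop "put" references at once; complete the request only when the
 * count reaches zero. A put of 0 returns early, mirroring the patched
 * function's new guard. */
static void put_completion_ref(struct req *r, int put)
{
	if (!put)
		return;
	r->completion_ref -= put;
	if (r->completion_ref == 0)
		r->completed = 1;	/* last reference gone */
}
```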
+5 -3
drivers/block/xen-blkback/xenbus.c
··· 504 504 505 505 dev_set_drvdata(&dev->dev, NULL); 506 506 507 - if (be->blkif) 507 + if (be->blkif) { 508 508 xen_blkif_disconnect(be->blkif); 509 509 510 - /* Put the reference we set in xen_blkif_alloc(). */ 511 - xen_blkif_put(be->blkif); 510 + /* Put the reference we set in xen_blkif_alloc(). */ 511 + xen_blkif_put(be->blkif); 512 + } 513 + 512 514 kfree(be->mode); 513 515 kfree(be); 514 516 return 0;
+5 -1
drivers/char/lp.c
··· 859 859 } else if (!strcmp(str, "auto")) { 860 860 parport_nr[0] = LP_PARPORT_AUTO; 861 861 } else if (!strcmp(str, "none")) { 862 - parport_nr[parport_ptr++] = LP_PARPORT_NONE; 862 + if (parport_ptr < LP_NO) 863 + parport_nr[parport_ptr++] = LP_PARPORT_NONE; 864 + else 865 + printk(KERN_INFO "lp: too many ports, %s ignored.\n", 866 + str); 863 867 } else if (!strcmp(str, "reset")) { 864 868 reset = 1; 865 869 }
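The lp.c hunk above bounds-checks parport_ptr before storing another "none" entry, so extra entries are logged and dropped instead of overflowing parport_nr[]. A sketch of the same guard (the LP_NO value and helper name here are assumptions for illustration):

```c
#include <assert.h>

#define LP_NO 3	/* assumed size of the parport_nr[] array */

/* Refuse the write once the array is full instead of corrupting the
 * adjacent stack/data. Returns 0 on success, -1 when the entry is
 * dropped. */
static int add_port(int *ports, int *ptr, int val)
{
	if (*ptr >= LP_NO)
		return -1;	/* too many ports: ignore the entry */
	ports[(*ptr)++] = val;
	return 0;
}
```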
+5
drivers/char/mem.c
··· 340 340 static int mmap_mem(struct file *file, struct vm_area_struct *vma) 341 341 { 342 342 size_t size = vma->vm_end - vma->vm_start; 343 + phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT; 344 + 345 + /* It's illegal to wrap around the end of the physical address space. */ 346 + if (offset + (phys_addr_t)size < offset) 347 + return -EINVAL; 343 348 344 349 if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size)) 345 350 return -EINVAL;
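The mem.c fix rejects mappings whose offset plus size wraps past the top of the physical address space. With unsigned arithmetic a wrapped sum comes out smaller than either operand, so `offset + size < offset` detects the overflow; a small sketch (the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Unsigned addition wraps modulo 2^64, so a sum that overflows ends up
 * smaller than the first operand -- the same test mmap_mem() now uses
 * before validating the range. */
static int range_wraps(uint64_t offset, uint64_t size)
{
	return offset + size < offset;
}
```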
+9
drivers/cpufreq/Kconfig.arm
··· 71 71 72 72 If in doubt, say N. 73 73 74 + config ARM_DB8500_CPUFREQ 75 + tristate "ST-Ericsson DB8500 cpufreq" if COMPILE_TEST && !ARCH_U8500 76 + default ARCH_U8500 77 + depends on HAS_IOMEM 78 + depends on !CPU_THERMAL || THERMAL 79 + help 80 + This adds the CPUFreq driver for ST-Ericsson Ux500 (DB8500) SoC 81 + series. 82 + 74 83 config ARM_IMX6Q_CPUFREQ 75 84 tristate "Freescale i.MX6 cpufreq support" 76 85 depends on ARCH_MXC
+1 -1
drivers/cpufreq/Makefile
··· 53 53 54 54 obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o 55 55 obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o 56 - obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o 56 + obj-$(CONFIG_ARM_DB8500_CPUFREQ) += dbx500-cpufreq.o 57 57 obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o 58 58 obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o 59 59 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
+2
drivers/dax/super.c
··· 44 44 } 45 45 EXPORT_SYMBOL_GPL(dax_read_unlock); 46 46 47 + #ifdef CONFIG_BLOCK 47 48 int bdev_dax_pgoff(struct block_device *bdev, sector_t sector, size_t size, 48 49 pgoff_t *pgoff) 49 50 { ··· 113 112 return 0; 114 113 } 115 114 EXPORT_SYMBOL_GPL(__bdev_dax_supported); 115 + #endif 116 116 117 117 /** 118 118 * struct dax_device - anchor object for dax services
+11 -6
drivers/firmware/efi/efi-pstore.c
··· 53 53 if (sscanf(name, "dump-type%u-%u-%d-%lu-%c", 54 54 &record->type, &part, &cnt, &time, &data_type) == 5) { 55 55 record->id = generic_id(time, part, cnt); 56 + record->part = part; 56 57 record->count = cnt; 57 58 record->time.tv_sec = time; 58 59 record->time.tv_nsec = 0; ··· 65 64 } else if (sscanf(name, "dump-type%u-%u-%d-%lu", 66 65 &record->type, &part, &cnt, &time) == 4) { 67 66 record->id = generic_id(time, part, cnt); 67 + record->part = part; 68 68 record->count = cnt; 69 69 record->time.tv_sec = time; 70 70 record->time.tv_nsec = 0; ··· 79 77 * multiple logs, remains. 80 78 */ 81 79 record->id = generic_id(time, part, 0); 80 + record->part = part; 82 81 record->count = 0; 83 82 record->time.tv_sec = time; 84 83 record->time.tv_nsec = 0; ··· 244 241 efi_guid_t vendor = LINUX_EFI_CRASH_GUID; 245 242 int i, ret = 0; 246 243 244 + record->time.tv_sec = get_seconds(); 245 + record->time.tv_nsec = 0; 246 + 247 + record->id = generic_id(record->time.tv_sec, record->part, 248 + record->count); 249 + 247 250 snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu-%c", 248 251 record->type, record->part, record->count, 249 - get_seconds(), record->compressed ? 'C' : 'D'); 252 + record->time.tv_sec, record->compressed ? 'C' : 'D'); 250 253 251 254 for (i = 0; i < DUMP_NAME_LEN; i++) 252 255 efi_name[i] = name[i]; ··· 264 255 if (record->reason == KMSG_DUMP_OOPS) 265 256 efivar_run_worker(); 266 257 267 - record->id = record->part; 268 258 return ret; 269 259 }; ··· 295 287 * holding multiple logs, remains. 296 288 */ 297 289 snprintf(name_old, sizeof(name_old), "dump-type%u-%u-%lu", 298 - ed->record->type, (unsigned int)ed->record->id, 290 + ed->record->type, ed->record->part, 299 291 ed->record->time.tv_sec); 300 292 301 293 for (i = 0; i < DUMP_NAME_LEN; i++) ··· 328 320 char name[DUMP_NAME_LEN]; 329 321 efi_char16_t efi_name[DUMP_NAME_LEN]; 330 322 int found, i; 331 - unsigned int part; 332 323 333 - do_div(record->id, 1000); 334 - part = do_div(record->id, 100); 335 324 snprintf(name, sizeof(name), "dump-type%u-%u-%d-%lu", 336 325 record->type, record->part, record->count, 337 326 record->time.tv_sec);
+15 -6
drivers/firmware/google/vpd.c
··· 116 116 return VPD_OK; 117 117 118 118 info = kzalloc(sizeof(*info), GFP_KERNEL); 119 - info->key = kzalloc(key_len + 1, GFP_KERNEL); 120 - if (!info->key) 119 + if (!info) 121 120 return -ENOMEM; 121 + info->key = kzalloc(key_len + 1, GFP_KERNEL); 122 + if (!info->key) { 123 + ret = -ENOMEM; 124 + goto free_info; 125 + } 122 126 123 127 memcpy(info->key, key, key_len); 124 128 ··· 139 135 list_add_tail(&info->list, &sec->attribs); 140 136 141 137 ret = sysfs_create_bin_file(sec->kobj, &info->bin_attr); 142 - if (ret) { 143 - kfree(info->key); 144 - return ret; 145 - } 138 + if (ret) 139 + goto free_info_key; 146 140 147 141 return 0; 142 + 143 + free_info_key: 144 + kfree(info->key); 145 + free_info: 146 + kfree(info); 147 + 148 + return ret; 148 149 } 149 150 150 151 static void vpd_section_attrib_destroy(struct vpd_section *sec)
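The vpd.c hunk above adds the missing NULL check after kzalloc() and converts the error paths to the kernel's goto-based unwind pattern: each failure jumps to a label that frees exactly what was allocated so far. A hedged userspace sketch of that pattern, with calloc()/malloc() standing in for kzalloc() and a hypothetical struct:

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical two-allocation object, mirroring info and info->key. */
struct attrib {
	char *key;
};

static struct attrib *attrib_create(const char *key)
{
	struct attrib *info = calloc(1, sizeof(*info));
	if (!info)
		return NULL;

	info->key = malloc(strlen(key) + 1);
	if (!info->key)
		goto free_info;	/* undo only the first allocation */

	strcpy(info->key, key);
	return info;

free_info:
	free(info);
	return NULL;
}
```

The labels unwind in reverse allocation order, so a failure at any step leaks nothing.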
+2 -1
drivers/firmware/ti_sci.c
··· 202 202 info->debug_buffer[info->debug_region_size] = 0; 203 203 204 204 info->d = debugfs_create_file(strncat(debug_name, dev_name(dev), 205 - sizeof(debug_name)), 205 + sizeof(debug_name) - 206 + sizeof("ti_sci_debug@")), 206 207 0444, NULL, info, &ti_sci_debug_fops); 207 208 if (IS_ERR(info->d)) 208 209 return PTR_ERR(info->d);
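The ti_sci.c fix corrects the strncat() bound: the third argument limits how many characters may be appended, not the destination's total size, so the space already consumed by the "ti_sci_debug@" prefix must be subtracted. A sketch under an assumed 32-byte buffer (the device-name argument is illustrative):

```c
#include <string.h>
#include <assert.h>

#define NAME_LEN 32	/* assumed debug-name buffer size */

/* strncat(dst, src, n) appends at most n chars plus a NUL, so n must
 * leave room for what dst already holds -- here, the fixed prefix. */
static const char *build_debug_name(char *buf, const char *dev)
{
	strcpy(buf, "ti_sci_debug@");
	return strncat(buf, dev, NAME_LEN - sizeof("ti_sci_debug@"));
}
```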
+32 -17
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 10 10 */ 11 11 12 12 #include <drm/drmP.h> 13 + #include <drm/drm_atomic.h> 13 14 #include <drm/drm_atomic_helper.h> 14 15 #include <drm/drm_crtc.h> 15 16 #include <drm/drm_crtc_helper.h> ··· 227 226 static int hdlcd_plane_atomic_check(struct drm_plane *plane, 228 227 struct drm_plane_state *state) 229 228 { 230 - u32 src_w, src_h; 229 + struct drm_rect clip = { 0 }; 230 + struct drm_crtc_state *crtc_state; 231 + u32 src_h = state->src_h >> 16; 231 232 232 - src_w = state->src_w >> 16; 233 - src_h = state->src_h >> 16; 234 - 235 - /* we can't do any scaling of the plane source */ 236 - if ((src_w != state->crtc_w) || (src_h != state->crtc_h)) 233 + /* only the HDLCD_REG_FB_LINE_COUNT register has a limit */ 234 + if (src_h >= HDLCD_MAX_YRES) { 235 + DRM_DEBUG_KMS("Invalid source width: %d\n", src_h); 237 236 return -EINVAL; 237 + } 238 238 239 - return 0; 239 + if (!state->fb || !state->crtc) 240 + return 0; 241 + 242 + crtc_state = drm_atomic_get_existing_crtc_state(state->state, 243 + state->crtc); 244 + if (!crtc_state) { 245 + DRM_DEBUG_KMS("Invalid crtc state\n"); 246 + return -EINVAL; 247 + } 248 + 249 + clip.x2 = crtc_state->adjusted_mode.hdisplay; 250 + clip.y2 = crtc_state->adjusted_mode.vdisplay; 251 + 252 + return drm_plane_helper_check_state(state, &clip, 253 + DRM_PLANE_HELPER_NO_SCALING, 254 + DRM_PLANE_HELPER_NO_SCALING, 255 + false, true); 240 256 } 241 257 242 258 static void hdlcd_plane_atomic_update(struct drm_plane *plane, ··· 262 244 struct drm_framebuffer *fb = plane->state->fb; 263 245 struct hdlcd_drm_private *hdlcd; 264 246 struct drm_gem_cma_object *gem; 265 - u32 src_w, src_h, dest_w, dest_h; 247 + u32 src_x, src_y, dest_h; 266 248 dma_addr_t scanout_start; 267 249 268 250 if (!fb) 269 251 return; 270 252 271 - src_w = plane->state->src_w >> 16; 272 - src_h = plane->state->src_h >> 16; 273 - dest_w = plane->state->crtc_w; 274 - dest_h = plane->state->crtc_h; 253 + src_x = plane->state->src.x1 >> 16; 254 + src_y = plane->state->src.y1 >> 16; 255 + dest_h = drm_rect_height(&plane->state->dst); 275 256 gem = drm_fb_cma_get_gem_obj(fb, 0); 257 + 276 258 scanout_start = gem->paddr + fb->offsets[0] + 277 - plane->state->crtc_y * fb->pitches[0] + 278 - plane->state->crtc_x * 279 - fb->format->cpp[0]; 259 + src_y * fb->pitches[0] + 260 + src_x * fb->format->cpp[0]; 280 261 281 262 hdlcd = plane->dev->dev_private; 282 263 hdlcd_write(hdlcd, HDLCD_REG_FB_LINE_LENGTH, fb->pitches[0]); ··· 322 305 formats, ARRAY_SIZE(formats), 323 306 DRM_PLANE_TYPE_PRIMARY, NULL); 324 307 if (ret) { 325 - devm_kfree(drm->dev, plane); 326 308 return ERR_PTR(ret); 327 309 } 328 310 ··· 345 329 &hdlcd_crtc_funcs, NULL); 346 330 if (ret) { 347 331 hdlcd_plane_destroy(primary); 348 - devm_kfree(drm->dev, primary); 349 332 return ret; 350 333 } 351 334
+12 -20
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_output.c
··· 152 152 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state, 153 153 }; 154 154 155 - static int atmel_hlcdc_attach_endpoint(struct drm_device *dev, 156 - const struct device_node *np) 155 + static int atmel_hlcdc_attach_endpoint(struct drm_device *dev, int endpoint) 157 156 { 158 157 struct atmel_hlcdc_dc *dc = dev->dev_private; 159 158 struct atmel_hlcdc_rgb_output *output; 160 159 struct drm_panel *panel; 161 160 struct drm_bridge *bridge; 162 161 int ret; 162 + 163 + ret = drm_of_find_panel_or_bridge(dev->dev->of_node, 0, endpoint, 164 + &panel, &bridge); 165 + if (ret) 166 + return ret; 163 167 164 168 output = devm_kzalloc(dev->dev, sizeof(*output), GFP_KERNEL); 165 169 if (!output) ··· 180 176 return ret; 181 177 182 178 output->encoder.possible_crtcs = 0x1; 183 - 184 - ret = drm_of_find_panel_or_bridge(np, 0, 0, &panel, &bridge); 185 - if (ret) 186 - return ret; 187 179 188 180 if (panel) { 189 181 output->connector.dpms = DRM_MODE_DPMS_OFF; ··· 220 220 221 221 int atmel_hlcdc_create_outputs(struct drm_device *dev) 222 222 { 223 - struct device_node *remote; 224 - int ret = -ENODEV; 225 - int endpoint = 0; 223 + int endpoint, ret = 0; 226 224 227 - while (true) { 228 - /* Loop thru possible multiple connections to the output */ 229 - remote = of_graph_get_remote_node(dev->dev->of_node, 0, 230 - endpoint++); 231 - if (!remote) 232 - break; 225 + for (endpoint = 0; !ret; endpoint++) 226 + ret = atmel_hlcdc_attach_endpoint(dev, endpoint); 233 227 234 - ret = atmel_hlcdc_attach_endpoint(dev, remote); 235 - of_node_put(remote); 236 - if (ret) 237 - return ret; 238 - } 228 + /* At least one device was successfully attached.*/ 229 + if (ret == -ENODEV && endpoint) 230 + return 0; 239 231 240 232 return ret; 241 233 }
+3 -1
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
··· 44 44 45 45 /* initially, until copy_from_user() and bo lookup succeeds: */ 46 46 submit->nr_bos = 0; 47 + submit->fence = NULL; 47 48 48 49 ww_acquire_init(&submit->ticket, &reservation_ww_class); 49 50 } ··· 295 294 } 296 295 297 296 ww_acquire_fini(&submit->ticket); 298 - dma_fence_put(submit->fence); 297 + if (submit->fence) 298 + dma_fence_put(submit->fence); 299 299 kfree(submit); 300 300 } 301 301
+1 -1
drivers/gpu/drm/i915/gvt/handlers.c
··· 1244 1244 mode = vgpu_vreg(vgpu, offset); 1245 1245 1246 1246 if (GFX_MODE_BIT_SET_IN_MASK(mode, START_DMA)) { 1247 - WARN_ONCE(1, "VM(%d): iGVT-g doesn't supporte GuC\n", 1247 + WARN_ONCE(1, "VM(%d): iGVT-g doesn't support GuC\n", 1248 1248 vgpu->id); 1249 1249 return 0; 1250 1250 }
+3
drivers/gpu/drm/i915/gvt/render.c
··· 340 340 } else 341 341 v = mmio->value; 342 342 343 + if (mmio->in_context) 344 + continue; 345 + 343 346 I915_WRITE(mmio->reg, v); 344 347 POSTING_READ(mmio->reg); 345 348
+6 -2
drivers/gpu/drm/i915/gvt/sched_policy.c
··· 129 129 struct vgpu_sched_data *vgpu_data; 130 130 ktime_t cur_time; 131 131 132 - /* no target to schedule */ 133 - if (!scheduler->next_vgpu) 132 + /* no need to schedule if next_vgpu is the same with current_vgpu, 133 + * let scheduler chose next_vgpu again by setting it to NULL. 134 + */ 135 + if (scheduler->next_vgpu == scheduler->current_vgpu) { 136 + scheduler->next_vgpu = NULL; 134 137 return; 138 + } 135 139 136 140 /* 137 141 * after the flag is set, workload dispatch thread will
+8 -4
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 195 195 u32 pte_flags; 196 196 int ret; 197 197 198 - ret = vma->vm->allocate_va_range(vma->vm, vma->node.start, vma->size); 199 - if (ret) 200 - return ret; 198 + if (!(vma->flags & I915_VMA_LOCAL_BIND)) { 199 + ret = vma->vm->allocate_va_range(vma->vm, vma->node.start, 200 + vma->size); 201 + if (ret) 202 + return ret; 203 + } 201 204 202 205 vma->pages = vma->obj->mm.pages; 203 206 ··· 2309 2306 if (flags & I915_VMA_LOCAL_BIND) { 2310 2307 struct i915_hw_ppgtt *appgtt = i915->mm.aliasing_ppgtt; 2311 2308 2312 - if (appgtt->base.allocate_va_range) { 2309 + if (!(vma->flags & I915_VMA_LOCAL_BIND) && 2310 + appgtt->base.allocate_va_range) { 2313 2311 ret = appgtt->base.allocate_va_range(&appgtt->base, 2314 2312 vma->node.start, 2315 2313 vma->node.size);
+7 -3
drivers/gpu/drm/i915/i915_reg.h
··· 3051 3051 #define CLKCFG_FSB_667 (3 << 0) /* hrawclk 166 */ 3052 3052 #define CLKCFG_FSB_800 (2 << 0) /* hrawclk 200 */ 3053 3053 #define CLKCFG_FSB_1067 (6 << 0) /* hrawclk 266 */ 3054 + #define CLKCFG_FSB_1067_ALT (0 << 0) /* hrawclk 266 */ 3054 3055 #define CLKCFG_FSB_1333 (7 << 0) /* hrawclk 333 */ 3055 - /* Note, below two are guess */ 3056 - #define CLKCFG_FSB_1600 (4 << 0) /* hrawclk 400 */ 3057 - #define CLKCFG_FSB_1600_ALT (0 << 0) /* hrawclk 400 */ 3056 + /* 3057 + * Note that on at least on ELK the below value is reported for both 3058 + * 333 and 400 MHz BIOS FSB setting, but given that the gmch datasheet 3059 + * lists only 200/266/333 MHz FSB as supported let's decode it as 333 MHz. 3060 + */ 3061 + #define CLKCFG_FSB_1333_ALT (4 << 0) /* hrawclk 333 */ 3058 3062 #define CLKCFG_FSB_MASK (7 << 0) 3059 3063 #define CLKCFG_MEM_533 (1 << 4) 3060 3064 #define CLKCFG_MEM_667 (2 << 4)
+2 -4
drivers/gpu/drm/i915/intel_cdclk.c
··· 1798 1798 case CLKCFG_FSB_800: 1799 1799 return 200000; 1800 1800 case CLKCFG_FSB_1067: 1801 + case CLKCFG_FSB_1067_ALT: 1801 1802 return 266667; 1802 1803 case CLKCFG_FSB_1333: 1804 + case CLKCFG_FSB_1333_ALT: 1803 1805 return 333333; 1804 - /* these two are just a guess; one of them might be right */ 1805 - case CLKCFG_FSB_1600: 1806 - case CLKCFG_FSB_1600_ALT: 1807 - return 400000; 1808 1806 default: 1809 1807 return 133333; 1810 1808 }
+3 -4
drivers/gpu/drm/i915/intel_dsi.c
··· 410 410 val |= (ULPS_STATE_ENTER | DEVICE_READY); 411 411 I915_WRITE(MIPI_DEVICE_READY(port), val); 412 412 413 - /* Wait for ULPS Not active */ 413 + /* Wait for ULPS active */ 414 414 if (intel_wait_for_register(dev_priv, 415 - MIPI_CTRL(port), GLK_ULPS_NOT_ACTIVE, 416 - GLK_ULPS_NOT_ACTIVE, 20)) 417 - DRM_ERROR("ULPS is still active\n"); 415 + MIPI_CTRL(port), GLK_ULPS_NOT_ACTIVE, 0, 20)) 416 + DRM_ERROR("ULPS not active\n"); 418 417 419 418 /* Exit ULPS */ 420 419 val = I915_READ(MIPI_DEVICE_READY(port));
+5
drivers/gpu/drm/i915/intel_lpe_audio.c
··· 63 63 #include <linux/acpi.h> 64 64 #include <linux/device.h> 65 65 #include <linux/pci.h> 66 + #include <linux/pm_runtime.h> 66 67 67 68 #include "i915_drv.h" 68 69 #include <linux/delay.h> ··· 121 120 } 122 121 123 122 kfree(rsc); 123 + 124 + pm_runtime_forbid(&platdev->dev); 125 + pm_runtime_set_active(&platdev->dev); 126 + pm_runtime_enable(&platdev->dev); 124 127 125 128 return platdev; 126 129
+2 -4
drivers/gpu/drm/nouveau/nouveau_display.c
··· 360 360 pm_runtime_get_sync(drm->dev->dev); 361 361 362 362 drm_helper_hpd_irq_event(drm->dev); 363 + /* enable polling for external displays */ 364 + drm_kms_helper_poll_enable(drm->dev); 363 365 364 366 pm_runtime_mark_last_busy(drm->dev->dev); 365 367 pm_runtime_put_sync(drm->dev->dev); ··· 414 412 ret = disp->init(dev); 415 413 if (ret) 416 414 return ret; 417 - 418 - /* enable polling for external displays */ 419 - if (!dev->mode_config.poll_enabled) 420 - drm_kms_helper_poll_enable(dev); 421 415 422 416 /* enable hotplug interrupts */ 423 417 list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+3 -3
drivers/gpu/drm/nouveau/nouveau_drm.c
··· 502 502 pm_runtime_allow(dev->dev); 503 503 pm_runtime_mark_last_busy(dev->dev); 504 504 pm_runtime_put(dev->dev); 505 + } else { 506 + /* enable polling for external displays */ 507 + drm_kms_helper_poll_enable(dev); 505 508 } 506 509 return 0; 507 510 ··· 776 773 pci_set_master(pdev); 777 774 778 775 ret = nouveau_do_resume(drm_dev, true); 779 - 780 - if (!drm_dev->mode_config.poll_enabled) 781 - drm_kms_helper_poll_enable(drm_dev); 782 776 783 777 /* do magic */ 784 778 nvif_mask(&device->object, 0x088488, (1 << 25), (1 << 25));
+2 -1
drivers/gpu/drm/nouveau/nvkm/engine/fifo/gk104.c
··· 148 148 case NVKM_MEM_TARGET_NCOH: target = 3; break; 149 149 default: 150 150 WARN_ON(1); 151 - return; 151 + goto unlock; 152 152 } 153 153 154 154 nvkm_wr32(device, 0x002270, (nvkm_memory_addr(mem) >> 12) | ··· 160 160 & 0x00100000), 161 161 msecs_to_jiffies(2000)) == 0) 162 162 nvkm_error(subdev, "runlist %d update timeout\n", runl); 163 + unlock: 163 164 mutex_unlock(&subdev->mutex); 164 165 } 165 166
+3 -1
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/ls_ucode_gr.c
··· 116 116 ret = nvkm_firmware_get(subdev->device, f, &sig); 117 117 if (ret) 118 118 goto free_data; 119 + 119 120 img->sig = kmemdup(sig->data, sig->size, GFP_KERNEL); 120 121 if (!img->sig) { 121 122 ret = -ENOMEM; ··· 127 126 img->ucode_data = ls_ucode_img_build(bl, code, data, 128 127 &img->ucode_desc); 129 128 if (IS_ERR(img->ucode_data)) { 129 + kfree(img->sig); 130 130 ret = PTR_ERR(img->ucode_data); 131 - goto free_data; 131 + goto free_sig; 132 132 } 133 133 img->ucode_size = img->ucode_desc.image_size; 134 134
+1
drivers/gpu/host1x/Kconfig
··· 1 1 config TEGRA_HOST1X 2 2 tristate "NVIDIA Tegra host1x driver" 3 3 depends on ARCH_TEGRA || (ARM && COMPILE_TEST) 4 + select IOMMU_IOVA if IOMMU_SUPPORT 4 5 help 5 6 Driver for the NVIDIA Tegra host1x hardware. 6 7
+10 -8
drivers/i2c/busses/i2c-designware-platdrv.c
··· 94 94 static int dw_i2c_acpi_configure(struct platform_device *pdev) 95 95 { 96 96 struct dw_i2c_dev *dev = platform_get_drvdata(pdev); 97 + u32 ss_ht = 0, fp_ht = 0, hs_ht = 0, fs_ht = 0; 97 98 acpi_handle handle = ACPI_HANDLE(&pdev->dev); 98 99 const struct acpi_device_id *id; 99 100 struct acpi_device *adev; ··· 108 107 * Try to get SDA hold time and *CNT values from an ACPI method for 109 108 * selected speed modes. 110 109 */ 110 + dw_i2c_acpi_params(pdev, "SSCN", &dev->ss_hcnt, &dev->ss_lcnt, &ss_ht); 111 + dw_i2c_acpi_params(pdev, "FPCN", &dev->fp_hcnt, &dev->fp_lcnt, &fp_ht); 112 + dw_i2c_acpi_params(pdev, "HSCN", &dev->hs_hcnt, &dev->hs_lcnt, &hs_ht); 113 + dw_i2c_acpi_params(pdev, "FMCN", &dev->fs_hcnt, &dev->fs_lcnt, &fs_ht); 114 + 111 115 switch (dev->clk_freq) { 112 116 case 100000: 113 - dw_i2c_acpi_params(pdev, "SSCN", &dev->ss_hcnt, &dev->ss_lcnt, 114 - &dev->sda_hold_time); 117 + dev->sda_hold_time = ss_ht; 115 118 break; 116 119 case 1000000: 117 - dw_i2c_acpi_params(pdev, "FPCN", &dev->fp_hcnt, &dev->fp_lcnt, 118 - &dev->sda_hold_time); 120 + dev->sda_hold_time = fp_ht; 119 121 break; 120 122 case 3400000: 121 - dw_i2c_acpi_params(pdev, "HSCN", &dev->hs_hcnt, &dev->hs_lcnt, 122 - &dev->sda_hold_time); 123 + dev->sda_hold_time = hs_ht; 123 124 break; 124 125 case 400000: 125 126 default: 126 - dw_i2c_acpi_params(pdev, "FMCN", &dev->fs_hcnt, &dev->fs_lcnt, 127 - &dev->sda_hold_time); 127 + dev->sda_hold_time = fs_ht; 128 128 break; 129 129 } 130 130
+8 -5
drivers/iommu/dma-iommu.c
··· 396 396 dma_addr_t iova, size_t size) 397 397 { 398 398 struct iova_domain *iovad = &cookie->iovad; 399 - unsigned long shift = iova_shift(iovad); 400 399 401 400 /* The MSI case is only ever cleaning up its most recent allocation */ 402 401 if (cookie->type == IOMMU_DMA_MSI_COOKIE) 403 402 cookie->msi_iova -= size; 404 403 else 405 - free_iova_fast(iovad, iova >> shift, size >> shift); 404 + free_iova_fast(iovad, iova_pfn(iovad, iova), 405 + size >> iova_shift(iovad)); 406 406 } 407 407 408 408 static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr, ··· 617 617 { 618 618 struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 619 619 struct iommu_dma_cookie *cookie = domain->iova_cookie; 620 - struct iova_domain *iovad = &cookie->iovad; 621 - size_t iova_off = iova_offset(iovad, phys); 620 + size_t iova_off = 0; 622 621 dma_addr_t iova; 623 622 624 - size = iova_align(iovad, size + iova_off); 623 + if (cookie->type == IOMMU_DMA_IOVA_COOKIE) { 624 + iova_off = iova_offset(&cookie->iovad, phys); 625 + size = iova_align(&cookie->iovad, size + iova_off); 626 + } 627 + 625 628 iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev); 626 629 if (!iova) 627 630 return DMA_ERROR_CODE;
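The dma-iommu.c change applies iova_offset()/iova_align() only when a real IOVA cookie is present. For a power-of-two granule those helpers reduce to simple mask arithmetic; a hedged sketch with an assumed 4 KiB granule (the helper names below are stand-ins, not the kernel API):

```c
#include <assert.h>

#define GRANULE 4096UL	/* assumed IOVA granule (one page) */

/* Low bits of the physical address inside its granule. */
static unsigned long iova_off(unsigned long phys)
{
	return phys & (GRANULE - 1);
}

/* Offset-adjusted length, rounded up to whole granules -- the size that
 * actually gets mapped. */
static unsigned long iova_aligned_size(unsigned long phys, unsigned long size)
{
	return (size + iova_off(phys) + GRANULE - 1) & ~(GRANULE - 1);
}
```

An unaligned start address can push a one-page request onto a second granule, which is why the offset is added before rounding.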
+4 -1
drivers/iommu/intel-iommu.c
··· 2055 2055 if (context_copied(context)) { 2056 2056 u16 did_old = context_domain_id(context); 2057 2057 2058 - if (did_old >= 0 && did_old < cap_ndoms(iommu->cap)) 2058 + if (did_old >= 0 && did_old < cap_ndoms(iommu->cap)) { 2059 2059 iommu->flush.flush_context(iommu, did_old, 2060 2060 (((u16)bus) << 8) | devfn, 2061 2061 DMA_CCMD_MASK_NOBIT, 2062 2062 DMA_CCMD_DEVICE_INVL); 2063 + iommu->flush.flush_iotlb(iommu, did_old, 0, 0, 2064 + DMA_TLB_DSI_FLUSH); 2065 + } 2063 2066 } 2064 2067 2065 2068 pgd = domain->pgd;
+1
drivers/iommu/mtk_iommu_v1.c
··· 18 18 #include <linux/clk.h> 19 19 #include <linux/component.h> 20 20 #include <linux/device.h> 21 + #include <linux/dma-mapping.h> 21 22 #include <linux/dma-iommu.h> 22 23 #include <linux/err.h> 23 24 #include <linux/interrupt.h>
+10 -7
drivers/irqchip/irq-mbigen.c
··· 106 106 static inline void get_mbigen_clear_reg(irq_hw_number_t hwirq, 107 107 u32 *mask, u32 *addr) 108 108 { 109 - unsigned int ofst; 110 - 111 - hwirq -= RESERVED_IRQ_PER_MBIGEN_CHIP; 112 - ofst = hwirq / 32 * 4; 109 + unsigned int ofst = (hwirq / 32) * 4; 113 110 114 111 *mask = 1 << (hwirq % 32); 115 112 *addr = ofst + REG_MBIGEN_CLEAR_OFFSET; ··· 334 337 mgn_chip->pdev = pdev; 335 338 336 339 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 337 - mgn_chip->base = devm_ioremap_resource(&pdev->dev, res); 338 - if (IS_ERR(mgn_chip->base)) 339 - return PTR_ERR(mgn_chip->base); 340 + if (!res) 341 + return -EINVAL; 342 + 343 + mgn_chip->base = devm_ioremap(&pdev->dev, res->start, 344 + resource_size(res)); 345 + if (!mgn_chip->base) { 346 + dev_err(&pdev->dev, "failed to ioremap %pR\n", res); 347 + return -ENOMEM; 348 + } 340 349 341 350 if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) 342 351 err = mbigen_of_create_domain(pdev, mgn_chip);
+1 -1
drivers/memory/omap-gpmc.c
··· 512 512 pr_info("gpmc cs%i access configuration:\n", cs); 513 513 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 4, 4, "time-para-granularity"); 514 514 GPMC_GET_RAW(GPMC_CS_CONFIG1, 8, 9, "mux-add-data"); 515 - GPMC_GET_RAW_MAX(GPMC_CS_CONFIG1, 12, 13, 515 + GPMC_GET_RAW_SHIFT_MAX(GPMC_CS_CONFIG1, 12, 13, 1, 516 516 GPMC_CONFIG1_DEVICESIZE_MAX, "device-width"); 517 517 GPMC_GET_RAW(GPMC_CS_CONFIG1, 16, 17, "wait-pin"); 518 518 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 21, 21, "wait-on-write");
+1
drivers/misc/Kconfig
··· 492 492 493 493 config PCI_ENDPOINT_TEST 494 494 depends on PCI 495 + select CRC32 495 496 tristate "PCI Endpoint Test driver" 496 497 ---help--- 497 498 Enable this configuration option to enable the host side test driver
+1 -1
drivers/net/bonding/bond_3ad.c
··· 2577 2577 return -1; 2578 2578 2579 2579 ad_info->aggregator_id = aggregator->aggregator_identifier; 2580 - ad_info->ports = aggregator->num_of_ports; 2580 + ad_info->ports = __agg_active_ports(aggregator); 2581 2581 ad_info->actor_key = aggregator->actor_oper_aggregator_key; 2582 2582 ad_info->partner_key = aggregator->partner_oper_aggregator_key; 2583 2583 ether_addr_copy(ad_info->partner_system,
+2 -3
drivers/net/bonding/bond_main.c
··· 4271 4271 int arp_validate_value, fail_over_mac_value, primary_reselect_value, i; 4272 4272 struct bond_opt_value newval; 4273 4273 const struct bond_opt_value *valptr; 4274 - int arp_all_targets_value; 4274 + int arp_all_targets_value = 0; 4275 4275 u16 ad_actor_sys_prio = 0; 4276 4276 u16 ad_user_port_key = 0; 4277 - __be32 arp_target[BOND_MAX_ARP_TARGETS]; 4277 + __be32 arp_target[BOND_MAX_ARP_TARGETS] = { 0 }; 4278 4278 int arp_ip_count; 4279 4279 int bond_mode = BOND_MODE_ROUNDROBIN; 4280 4280 int xmit_hashtype = BOND_XMIT_POLICY_LAYER2; ··· 4501 4501 arp_validate_value = 0; 4502 4502 } 4503 4503 4504 - arp_all_targets_value = 0; 4505 4504 if (arp_all_targets) { 4506 4505 bond_opt_initstr(&newval, arp_all_targets); 4507 4506 valptr = bond_opt_parse(bond_opt_get(BOND_OPT_ARP_ALL_TARGETS),
+4 -4
drivers/net/ethernet/atheros/atlx/atl2.c
··· 1353 1353 if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32)) && 1354 1354 pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32))) { 1355 1355 printk(KERN_ERR "atl2: No usable DMA configuration, aborting\n"); 1356 + err = -EIO; 1356 1357 goto err_dma; 1357 1358 } 1358 1359 ··· 1367 1366 * pcibios_set_master to do the needed arch specific settings */ 1368 1367 pci_set_master(pdev); 1369 1368 1370 - err = -ENOMEM; 1371 1369 netdev = alloc_etherdev(sizeof(struct atl2_adapter)); 1372 - if (!netdev) 1370 + if (!netdev) { 1371 + err = -ENOMEM; 1373 1372 goto err_alloc_etherdev; 1373 + } 1374 1374 1375 1375 SET_NETDEV_DEV(netdev, &pdev->dev); 1376 1376 ··· 1409 1407 err = atl2_sw_init(adapter); 1410 1408 if (err) 1411 1409 goto err_sw_init; 1412 - 1413 - err = -EIO; 1414 1410 1415 1411 netdev->hw_features = NETIF_F_HW_VLAN_CTAG_RX; 1416 1412 netdev->features |= (NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX);
+10 -3
drivers/net/usb/smsc95xx.c
··· 681 681 if (ret < 0) 682 682 return ret; 683 683 684 - if (features & NETIF_F_HW_CSUM) 684 + if (features & NETIF_F_IP_CSUM) 685 685 read_buf |= Tx_COE_EN_; 686 686 else 687 687 read_buf &= ~Tx_COE_EN_; ··· 1279 1279 1280 1280 spin_lock_init(&pdata->mac_cr_lock); 1281 1281 1282 + /* LAN95xx devices do not alter the computed checksum of 0 to 0xffff. 1283 + * RFC 2460, ipv6 UDP calculated checksum yields a result of zero must 1284 + * be changed to 0xffff. RFC 768, ipv4 UDP computed checksum is zero, 1285 + * it is transmitted as all ones. The zero transmitted checksum means 1286 + * transmitter generated no checksum. Hence, enable csum offload only 1287 + * for ipv4 packets. 1288 + */ 1282 1289 if (DEFAULT_TX_CSUM_ENABLE) 1283 - dev->net->features |= NETIF_F_HW_CSUM; 1290 + dev->net->features |= NETIF_F_IP_CSUM; 1284 1291 if (DEFAULT_RX_CSUM_ENABLE) 1285 1292 dev->net->features |= NETIF_F_RXCSUM; 1286 1293 1287 - dev->net->hw_features = NETIF_F_HW_CSUM | NETIF_F_RXCSUM; 1294 + dev->net->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM; 1288 1295 1289 1296 smsc95xx_init_mac_address(dev); 1290 1297
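The comment added in the smsc95xx hunk above explains why the driver drops down from NETIF_F_HW_CSUM to NETIF_F_IP_CSUM: RFC 768 requires a computed UDP checksum of zero to be transmitted as 0xffff (an all-zero field means "no checksum generated"), and the LAN95xx hardware does not perform that substitution. A minimal host-side sketch of the fold and substitution, not driver code (the packet words are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fold a 32-bit one's-complement accumulator down to 16 bits. */
static uint16_t csum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/*
 * RFC 768: the transmitted UDP checksum is the one's complement of the
 * sum over pseudo-header and payload; if that result is zero it must be
 * sent as 0xffff, because an all-zero field means "no checksum".
 */
static uint16_t udp_tx_checksum(const uint16_t *words, size_t n)
{
	uint32_t sum = 0;
	uint16_t csum;
	size_t i;

	for (i = 0; i < n; i++)
		sum += words[i];

	csum = (uint16_t)~csum_fold(sum);
	return csum ? csum : 0xffff;
}
```

A device that skips the final substitution emits a literal zero, which receivers interpret as "checksum disabled"; hence offload stays enabled only for IPv4.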
+10
drivers/nvme/host/fc.c
··· 1754 1754 dev_info(ctrl->ctrl.device, 1755 1755 "NVME-FC{%d}: resetting controller\n", ctrl->cnum); 1756 1756 1757 + /* stop the queues on error, cleanup is in reset thread */ 1758 + if (ctrl->queue_count > 1) 1759 + nvme_stop_queues(&ctrl->ctrl); 1760 + 1757 1761 if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RECONNECTING)) { 1758 1762 dev_err(ctrl->ctrl.device, 1759 1763 "NVME-FC{%d}: error_recovery: Couldn't change state " ··· 2723 2719 struct nvme_fc_ctrl *ctrl; 2724 2720 unsigned long flags; 2725 2721 int ret, idx; 2722 + 2723 + if (!(rport->remoteport.port_role & 2724 + (FC_PORT_ROLE_NVME_DISCOVERY | FC_PORT_ROLE_NVME_TARGET))) { 2725 + ret = -EBADR; 2726 + goto out_fail; 2727 + } 2726 2728 2727 2729 ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL); 2728 2730 if (!ctrl) {
+6 -1
drivers/nvme/host/pci.c
··· 1506 1506 if (dev->cmb) { 1507 1507 iounmap(dev->cmb); 1508 1508 dev->cmb = NULL; 1509 + if (dev->cmbsz) { 1510 + sysfs_remove_file_from_group(&dev->ctrl.device->kobj, 1511 + &dev_attr_cmb.attr, NULL); 1512 + dev->cmbsz = 0; 1513 + } 1509 1514 } 1510 1515 } 1511 1516 ··· 1784 1779 { 1785 1780 struct pci_dev *pdev = to_pci_dev(dev->dev); 1786 1781 1782 + nvme_release_cmb(dev); 1787 1783 pci_free_irq_vectors(pdev); 1788 1784 1789 1785 if (pci_is_enabled(pdev)) { ··· 2190 2184 nvme_dev_disable(dev, true); 2191 2185 nvme_dev_remove_admin(dev); 2192 2186 nvme_free_queues(dev, 0); 2193 - nvme_release_cmb(dev); 2194 2187 nvme_release_prp_pools(dev); 2195 2188 nvme_dev_unmap(dev); 2196 2189 nvme_put_ctrl(&dev->ctrl);
+6
drivers/nvme/target/core.c
··· 529 529 } 530 530 EXPORT_SYMBOL_GPL(nvmet_req_init); 531 531 532 + void nvmet_req_uninit(struct nvmet_req *req) 533 + { 534 + percpu_ref_put(&req->sq->ref); 535 + } 536 + EXPORT_SYMBOL_GPL(nvmet_req_uninit); 537 + 532 538 static inline bool nvmet_cc_en(u32 cc) 533 539 { 534 540 return cc & 0x1;
+1 -3
drivers/nvme/target/fc.c
··· 517 517 { 518 518 int cpu, idx, cnt; 519 519 520 - if (!(tgtport->ops->target_features & 521 - NVMET_FCTGTFEAT_NEEDS_CMD_CPUSCHED) || 522 - tgtport->ops->max_hw_queues == 1) 520 + if (tgtport->ops->max_hw_queues == 1) 523 521 return WORK_CPU_UNBOUND; 524 522 525 523 /* Simple cpu selection based on qid modulo active cpu count */
-1
drivers/nvme/target/fcloop.c
··· 698 698 .dma_boundary = FCLOOP_DMABOUND_4G, 699 699 /* optional features */ 700 700 .target_features = NVMET_FCTGTFEAT_CMD_IN_ISR | 701 - NVMET_FCTGTFEAT_NEEDS_CMD_CPUSCHED | 702 701 NVMET_FCTGTFEAT_OPDONE_IN_ISR, 703 702 /* sizes of additional private data for data structures */ 704 703 .target_priv_sz = sizeof(struct fcloop_tport),
+1
drivers/nvme/target/nvmet.h
··· 261 261 262 262 bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq, 263 263 struct nvmet_sq *sq, struct nvmet_fabrics_ops *ops); 264 + void nvmet_req_uninit(struct nvmet_req *req); 264 265 void nvmet_req_complete(struct nvmet_req *req, u16 status); 265 266 266 267 void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq, u16 qid,
+1
drivers/nvme/target/rdma.c
··· 567 567 rsp->n_rdma = 0; 568 568 569 569 if (unlikely(wc->status != IB_WC_SUCCESS)) { 570 + nvmet_req_uninit(&rsp->req); 570 571 nvmet_rdma_release_rsp(rsp); 571 572 if (wc->status != IB_WC_WR_FLUSH_ERR) { 572 573 pr_info("RDMA READ for CQE 0x%p failed with status %s (%d).\n",
+3
drivers/of/fdt.c
··· 507 507 508 508 /* Allocate memory for the expanded device tree */ 509 509 mem = dt_alloc(size + 4, __alignof__(struct device_node)); 510 + if (!mem) 511 + return NULL; 512 + 510 513 memset(mem, 0, size); 511 514 512 515 *(__be32 *)(mem + size) = cpu_to_be32(0xdeadbeef);
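The of/fdt hunk adds the NULL check that was missing between the allocation and the memset(). The rule it enforces, as a tiny userspace sketch in which malloc() stands in for the firmware dt_alloc callback:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Zero an allocation only after confirming the allocator succeeded;
 * memset(NULL, 0, size) is undefined behavior, and in the kernel an
 * immediate crash on allocation failure.
 */
static void *zalloc_checked(size_t size)
{
	void *mem = malloc(size);

	if (!mem)
		return NULL;
	memset(mem, 0, size);
	return mem;
}
```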
+1 -1
drivers/of/of_reserved_mem.c
··· 197 197 const struct of_device_id *i; 198 198 199 199 for (i = __reservedmem_of_table; i < &__rmem_of_table_sentinel; i++) { 200 - int const (*initfn)(struct reserved_mem *rmem) = i->data; 200 + reservedmem_of_init_fn initfn = i->data; 201 201 const char *compat = i->compatible; 202 202 203 203 if (!of_flat_dt_is_compatible(rmem->fdt_node, compat))
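The of_reserved_mem change replaces a malformed `int const (*initfn)(...)` declaration (a const-qualified return type, which compilers warn about and which qualifies nothing useful) with the existing `reservedmem_of_init_fn` typedef. The shape of that typedef, sketched standalone with a hypothetical init callback:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct reserved_mem. */
struct reserved_mem { const char *name; };

/*
 * A typedef keeps function-pointer declarations readable and avoids
 * the meaningless const qualifier on the returned int.
 */
typedef int (*reservedmem_of_init_fn)(struct reserved_mem *rmem);

/* Hypothetical init callback, for illustration only. */
static int demo_rmem_init(struct reserved_mem *rmem)
{
	return rmem ? 0 : -1;
}
```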
+1
drivers/powercap/powercap_sys.c
··· 538 538 539 539 power_zone->id = result; 540 540 idr_init(&power_zone->idr); 541 + result = -ENOMEM; 541 542 power_zone->name = kstrdup(name, GFP_KERNEL); 542 543 if (!power_zone->name) 543 544 goto err_name_alloc;
+1 -1
drivers/rtc/rtc-cmos.c
··· 1088 1088 } 1089 1089 spin_unlock_irqrestore(&rtc_lock, flags); 1090 1090 1091 - pm_wakeup_event(dev, 0); 1091 + pm_wakeup_hard_event(dev); 1092 1092 acpi_clear_event(ACPI_EVENT_RTC); 1093 1093 acpi_disable_event(ACPI_EVENT_RTC, 0); 1094 1094 return ACPI_INTERRUPT_HANDLED;
+1
drivers/scsi/cxlflash/Kconfig
··· 5 5 config CXLFLASH 6 6 tristate "Support for IBM CAPI Flash" 7 7 depends on PCI && SCSI && CXL && EEH 8 + select IRQ_POLL 8 9 default m 9 10 help 10 11 Allows CAPI Accelerated IO to Flash
+9 -6
drivers/scsi/libfc/fc_fcp.c
··· 407 407 * can_queue. Eventually we will hit the point where we run 408 408 * on all reserved structs. 409 409 */ 410 - static void fc_fcp_can_queue_ramp_down(struct fc_lport *lport) 410 + static bool fc_fcp_can_queue_ramp_down(struct fc_lport *lport) 411 411 { 412 412 struct fc_fcp_internal *si = fc_get_scsi_internal(lport); 413 413 unsigned long flags; 414 414 int can_queue; 415 + bool changed = false; 415 416 416 417 spin_lock_irqsave(lport->host->host_lock, flags); 417 418 ··· 428 427 if (!can_queue) 429 428 can_queue = 1; 430 429 lport->host->can_queue = can_queue; 430 + changed = true; 431 431 432 432 unlock: 433 433 spin_unlock_irqrestore(lport->host->host_lock, flags); 434 + return changed; 434 435 } 435 436 436 437 /* ··· 1899 1896 1900 1897 if (!fc_fcp_lport_queue_ready(lport)) { 1901 1898 if (lport->qfull) { 1902 - fc_fcp_can_queue_ramp_down(lport); 1903 - shost_printk(KERN_ERR, lport->host, 1904 - "libfc: queue full, " 1905 - "reducing can_queue to %d.\n", 1906 - lport->host->can_queue); 1899 + if (fc_fcp_can_queue_ramp_down(lport)) 1900 + shost_printk(KERN_ERR, lport->host, 1901 + "libfc: queue full, " 1902 + "reducing can_queue to %d.\n", 1903 + lport->host->can_queue); 1907 1904 } 1908 1905 rc = SCSI_MLQUEUE_HOST_BUSY; 1909 1906 goto out;
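The libfc change makes fc_fcp_can_queue_ramp_down() report whether it actually lowered can_queue, so the caller logs only on a real change instead of on every qfull event. The same pattern in isolation; the 1/8 reduction step below is made up for illustration, not libfc's formula:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Reduce the queue-depth limit and tell the caller whether anything
 * changed, so log messages are emitted only for real transitions.
 */
static bool ramp_down(int *can_queue)
{
	int next = *can_queue - *can_queue / 8;	/* hypothetical step */

	if (next >= *can_queue)			/* step rounded to zero */
		next = *can_queue - 1;
	if (next < 1)
		return false;			/* keep at least one slot */
	*can_queue = next;
	return true;
}
```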
+1
drivers/scsi/lpfc/lpfc_crtn.h
··· 294 294 void lpfc_reset_barrier(struct lpfc_hba *); 295 295 int lpfc_sli_brdready(struct lpfc_hba *, uint32_t); 296 296 int lpfc_sli_brdkill(struct lpfc_hba *); 297 + int lpfc_sli_chipset_init(struct lpfc_hba *phba); 297 298 int lpfc_sli_brdreset(struct lpfc_hba *); 298 299 int lpfc_sli_brdrestart(struct lpfc_hba *); 299 300 int lpfc_sli_hba_setup(struct lpfc_hba *);
+1 -1
drivers/scsi/lpfc/lpfc_ct.c
··· 630 630 NLP_EVT_DEVICE_RECOVERY); 631 631 spin_lock_irq(shost->host_lock); 632 632 ndlp->nlp_flag &= ~NLP_NVMET_RECOV; 633 - spin_lock_irq(shost->host_lock); 633 + spin_unlock_irq(shost->host_lock); 634 634 } 635 635 } 636 636
+8 -1
drivers/scsi/lpfc/lpfc_init.c
··· 3602 3602 LPFC_MBOXQ_t *mboxq; 3603 3603 MAILBOX_t *mb; 3604 3604 3605 + if (phba->sli_rev < LPFC_SLI_REV4) { 3606 + /* Reset the port first */ 3607 + lpfc_sli_brdrestart(phba); 3608 + rc = lpfc_sli_chipset_init(phba); 3609 + if (rc) 3610 + return (uint64_t)-1; 3611 + } 3605 3612 3606 3613 mboxq = (LPFC_MBOXQ_t *) mempool_alloc(phba->mbox_mem_pool, 3607 3614 GFP_KERNEL); ··· 8854 8847 lpfc_wq_destroy(phba, phba->sli4_hba.nvmels_wq); 8855 8848 8856 8849 /* Unset ELS work queue */ 8857 - if (phba->sli4_hba.els_cq) 8850 + if (phba->sli4_hba.els_wq) 8858 8851 lpfc_wq_destroy(phba, phba->sli4_hba.els_wq); 8859 8852 8860 8853 /* Unset unsolicited receive queue */
-1
drivers/scsi/lpfc/lpfc_nvmet.c
··· 764 764 lpfc_tgttemplate.max_sgl_segments = phba->cfg_nvme_seg_cnt + 1; 765 765 lpfc_tgttemplate.max_hw_queues = phba->cfg_nvme_io_channel; 766 766 lpfc_tgttemplate.target_features = NVMET_FCTGTFEAT_READDATA_RSP | 767 - NVMET_FCTGTFEAT_NEEDS_CMD_CPUSCHED | 768 767 NVMET_FCTGTFEAT_CMD_IN_ISR | 769 768 NVMET_FCTGTFEAT_OPDONE_IN_ISR; 770 769
+12 -7
drivers/scsi/lpfc/lpfc_sli.c
··· 4204 4204 /* Reset HBA */ 4205 4205 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 4206 4206 "0325 Reset HBA Data: x%x x%x\n", 4207 - phba->pport->port_state, psli->sli_flag); 4207 + (phba->pport) ? phba->pport->port_state : 0, 4208 + psli->sli_flag); 4208 4209 4209 4210 /* perform board reset */ 4210 4211 phba->fc_eventTag = 0; 4211 4212 phba->link_events = 0; 4212 - phba->pport->fc_myDID = 0; 4213 - phba->pport->fc_prevDID = 0; 4213 + if (phba->pport) { 4214 + phba->pport->fc_myDID = 0; 4215 + phba->pport->fc_prevDID = 0; 4216 + } 4214 4217 4215 4218 /* Turn off parity checking and serr during the physical reset */ 4216 4219 pci_read_config_word(phba->pcidev, PCI_COMMAND, &cfg_value); ··· 4339 4336 /* Restart HBA */ 4340 4337 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 4341 4338 "0337 Restart HBA Data: x%x x%x\n", 4342 - phba->pport->port_state, psli->sli_flag); 4339 + (phba->pport) ? phba->pport->port_state : 0, 4340 + psli->sli_flag); 4343 4341 4344 4342 word0 = 0; 4345 4343 mb = (MAILBOX_t *) &word0; ··· 4354 4350 readl(to_slim); /* flush */ 4355 4351 4356 4352 /* Only skip post after fc_ffinit is completed */ 4357 - if (phba->pport->port_state) 4353 + if (phba->pport && phba->pport->port_state) 4358 4354 word0 = 1; /* This is really setting up word1 */ 4359 4355 else 4360 4356 word0 = 0; /* This is really setting up word1 */ ··· 4363 4359 readl(to_slim); /* flush */ 4364 4360 4365 4361 lpfc_sli_brdreset(phba); 4366 - phba->pport->stopped = 0; 4362 + if (phba->pport) 4363 + phba->pport->stopped = 0; 4367 4364 phba->link_state = LPFC_INIT_START; 4368 4365 phba->hba_flag = 0; 4369 4366 spin_unlock_irq(&phba->hbalock); ··· 4451 4446 * iteration, the function will restart the HBA again. The function returns 4452 4447 * zero if HBA successfully restarted else returns negative error code. 4453 4448 **/ 4454 - static int 4449 + int 4455 4450 lpfc_sli_chipset_init(struct lpfc_hba *phba) 4456 4451 { 4457 4452 uint32_t status, i = 0;
-3
drivers/scsi/pmcraid.c
··· 3770 3770 pmcraid_err("couldn't build passthrough ioadls\n"); 3771 3771 goto out_free_cmd; 3772 3772 } 3773 - } else if (request_size < 0) { 3774 - rc = -EINVAL; 3775 - goto out_free_cmd; 3776 3773 } 3777 3774 3778 3775 /* If data is being written into the device, copy the data from user
+1 -1
drivers/scsi/qedf/qedf.h
··· 259 259 uint16_t task_id; 260 260 uint32_t port_id; /* Remote port fabric ID */ 261 261 int lun; 262 - char op; /* SCSI CDB */ 262 + unsigned char op; /* SCSI CDB */ 263 263 uint8_t lba[4]; 264 264 unsigned int bufflen; /* SCSI buffer length */ 265 265 unsigned int sg_count; /* Number of SG elements */
+1 -1
drivers/scsi/qedf/qedf_els.c
··· 109 109 did = fcport->rdata->ids.port_id; 110 110 sid = fcport->sid; 111 111 112 - __fc_fill_fc_hdr(fc_hdr, FC_RCTL_ELS_REQ, sid, did, 112 + __fc_fill_fc_hdr(fc_hdr, FC_RCTL_ELS_REQ, did, sid, 113 113 FC_TYPE_ELS, FC_FC_FIRST_SEQ | FC_FC_END_SEQ | 114 114 FC_FC_SEQ_INIT, 0); 115 115
+1 -1
drivers/scsi/qedf/qedf_main.c
··· 2895 2895 slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER; 2896 2896 slowpath_params.drv_rev = QEDF_DRIVER_REV_VER; 2897 2897 slowpath_params.drv_eng = QEDF_DRIVER_ENG_VER; 2898 - memcpy(slowpath_params.name, "qedf", QED_DRV_VER_STR_SIZE); 2898 + strncpy(slowpath_params.name, "qedf", QED_DRV_VER_STR_SIZE); 2899 2899 rc = qed_ops->common->slowpath_start(qedf->cdev, &slowpath_params); 2900 2900 if (rc) { 2901 2901 QEDF_ERR(&(qedf->dbg_ctx), "Cannot start slowpath.\n");
+2
drivers/scsi/scsi.c
··· 763 763 struct scsi_device *sdev; 764 764 765 765 list_for_each_entry(sdev, &shost->__devices, siblings) { 766 + if (sdev->sdev_state == SDEV_DEL) 767 + continue; 766 768 if (sdev->channel == channel && sdev->id == id && 767 769 sdev->lun ==lun) 768 770 return sdev;
+1
drivers/scsi/scsi_lib.c
··· 30 30 #include <scsi/scsi_driver.h> 31 31 #include <scsi/scsi_eh.h> 32 32 #include <scsi/scsi_host.h> 33 + #include <scsi/scsi_transport.h> /* __scsi_init_queue() */ 33 34 #include <scsi/scsi_dh.h> 34 35 35 36 #include <trace/events/scsi.h>
+1 -1
drivers/soc/bcm/brcmstb/common.c
··· 49 49 { .compatible = "brcm,bcm7420-sun-top-ctrl", }, 50 50 { .compatible = "brcm,bcm7425-sun-top-ctrl", }, 51 51 { .compatible = "brcm,bcm7429-sun-top-ctrl", }, 52 - { .compatible = "brcm,bcm7425-sun-top-ctrl", }, 52 + { .compatible = "brcm,bcm7435-sun-top-ctrl", }, 53 53 { .compatible = "brcm,brcmstb-sun-top-ctrl", }, 54 54 { } 55 55 };
+2 -1
drivers/soc/imx/Kconfig
··· 2 2 3 3 config IMX7_PM_DOMAINS 4 4 bool "i.MX7 PM domains" 5 - select PM_GENERIC_DOMAINS 6 5 depends on SOC_IMX7D || (COMPILE_TEST && OF) 6 + depends on PM 7 + select PM_GENERIC_DOMAINS 7 8 default y if SOC_IMX7D 8 9 9 10 endmenu
-51
drivers/staging/android/ion/devicetree.txt
··· 1 - Ion Memory Manager 2 - 3 - Ion is a memory manager that allows for sharing of buffers via dma-buf. 4 - Ion allows for different types of allocation via an abstraction called 5 - a 'heap'. A heap represents a specific type of memory. Each heap has 6 - a different type. There can be multiple instances of the same heap 7 - type. 8 - 9 - Specific heap instances are tied to heap IDs. Heap IDs are not to be specified 10 - in the devicetree. 11 - 12 - Required properties for Ion 13 - 14 - - compatible: "linux,ion" PLUS a compatible property for the device 15 - 16 - All child nodes of a linux,ion node are interpreted as heaps 17 - 18 - required properties for heaps 19 - 20 - - compatible: compatible string for a heap type PLUS a compatible property 21 - for the specific instance of the heap. Current heap types 22 - -- linux,ion-heap-system 23 - -- linux,ion-heap-system-contig 24 - -- linux,ion-heap-carveout 25 - -- linux,ion-heap-chunk 26 - -- linux,ion-heap-dma 27 - -- linux,ion-heap-custom 28 - 29 - Optional properties 30 - - memory-region: A phandle to a memory region. Required for DMA heap type 31 - (see reserved-memory.txt for details on the reservation) 32 - 33 - Example: 34 - 35 - ion { 36 - compatbile = "hisilicon,ion", "linux,ion"; 37 - 38 - ion-system-heap { 39 - compatbile = "hisilicon,system-heap", "linux,ion-heap-system" 40 - }; 41 - 42 - ion-camera-region { 43 - compatible = "hisilicon,camera-heap", "linux,ion-heap-dma" 44 - memory-region = <&camera_region>; 45 - }; 46 - 47 - ion-fb-region { 48 - compatbile = "hisilicon,fb-heap", "linux,ion-heap-dma" 49 - memory-region = <&fb_region>; 50 - }; 51 - }
-1
drivers/staging/ccree/ssi_request_mgr.c
··· 376 376 rc = ssi_power_mgr_runtime_get(&drvdata->plat_dev->dev); 377 377 if (rc != 0) { 378 378 SSI_LOG_ERR("ssi_power_mgr_runtime_get returned %x\n",rc); 379 - spin_unlock_bh(&req_mgr_h->hw_lock); 380 379 return rc; 381 380 } 382 381 #endif
+1
drivers/staging/fsl-dpaa2/Kconfig
··· 12 12 config FSL_DPAA2_ETH 13 13 tristate "Freescale DPAA2 Ethernet" 14 14 depends on FSL_DPAA2 && FSL_MC_DPIO 15 + depends on NETDEVICES && ETHERNET 15 16 ---help--- 16 17 Ethernet driver for Freescale DPAA2 SoCs, using the 17 18 Freescale MC bus driver
+15 -9
drivers/staging/rtl8192e/rtl8192e/r8192E_dev.c
··· 97 97 98 98 switch (variable) { 99 99 case HW_VAR_BSSID: 100 - rtl92e_writel(dev, BSSIDR, ((u32 *)(val))[0]); 101 - rtl92e_writew(dev, BSSIDR+2, ((u16 *)(val+2))[0]); 100 + /* BSSIDR 2 byte alignment */ 101 + rtl92e_writew(dev, BSSIDR, *(u16 *)val); 102 + rtl92e_writel(dev, BSSIDR + 2, *(u32 *)(val + 2)); 102 103 break; 103 104 104 105 case HW_VAR_MEDIA_STATUS: ··· 625 624 struct r8192_priv *priv = rtllib_priv(dev); 626 625 627 626 RT_TRACE(COMP_INIT, "===========>%s()\n", __func__); 628 - curCR = rtl92e_readl(dev, EPROM_CMD); 627 + curCR = rtl92e_readw(dev, EPROM_CMD); 629 628 RT_TRACE(COMP_INIT, "read from Reg Cmd9346CR(%x):%x\n", EPROM_CMD, 630 629 curCR); 631 630 priv->epromtype = (curCR & EPROM_CMD_9356SEL) ? EEPROM_93C56 : ··· 962 961 rtl92e_config_rate(dev, &rate_config); 963 962 priv->dot11CurrentPreambleMode = PREAMBLE_AUTO; 964 963 priv->basic_rate = rate_config &= 0x15f; 965 - rtl92e_writel(dev, BSSIDR, ((u32 *)net->bssid)[0]); 966 - rtl92e_writew(dev, BSSIDR+4, ((u16 *)net->bssid)[2]); 964 + rtl92e_writew(dev, BSSIDR, *(u16 *)net->bssid); 965 + rtl92e_writel(dev, BSSIDR + 2, *(u32 *)(net->bssid + 2)); 967 966 968 967 if (priv->rtllib->iw_mode == IW_MODE_ADHOC) { 969 968 rtl92e_writew(dev, ATIMWND, 2); ··· 1183 1182 struct cb_desc *cb_desc, struct sk_buff *skb) 1184 1183 { 1185 1184 struct r8192_priv *priv = rtllib_priv(dev); 1186 - dma_addr_t mapping = pci_map_single(priv->pdev, skb->data, skb->len, 1187 - PCI_DMA_TODEVICE); 1185 + dma_addr_t mapping; 1188 1186 struct tx_fwinfo_8190pci *pTxFwInfo; 1189 1187 1190 1188 pTxFwInfo = (struct tx_fwinfo_8190pci *)skb->data; ··· 1194 1194 pTxFwInfo->Short = _rtl92e_query_is_short(pTxFwInfo->TxHT, 1195 1195 pTxFwInfo->TxRate, cb_desc); 1196 1196 1197 - if (pci_dma_mapping_error(priv->pdev, mapping)) 1198 - netdev_err(dev, "%s(): DMA Mapping error\n", __func__); 1199 1197 if (cb_desc->bAMPDUEnable) { 1200 1198 pTxFwInfo->AllowAggregation = 1; 1201 1199 pTxFwInfo->RxMF = cb_desc->ampdu_factor; ··· 1228 1230 } 
1229 1231 1230 1232 memset((u8 *)pdesc, 0, 12); 1233 + 1234 + mapping = pci_map_single(priv->pdev, skb->data, skb->len, 1235 + PCI_DMA_TODEVICE); 1236 + if (pci_dma_mapping_error(priv->pdev, mapping)) { 1237 + netdev_err(dev, "%s(): DMA Mapping error\n", __func__); 1238 + return; 1239 + } 1240 + 1231 1241 pdesc->LINIP = 0; 1232 1242 pdesc->CmdInit = 1; 1233 1243 pdesc->Offset = sizeof(struct tx_fwinfo_8190pci) + 8;
+4 -11
drivers/staging/rtl8192e/rtl819x_TSProc.c
··· 306 306 pTsCommonInfo->TClasNum = TCLAS_Num; 307 307 } 308 308 309 - static bool IsACValid(unsigned int tid) 310 - { 311 - return tid < 7; 312 - } 313 - 314 309 bool GetTs(struct rtllib_device *ieee, struct ts_common_info **ppTS, 315 310 u8 *Addr, u8 TID, enum tr_select TxRxSelect, bool bAddNewTs) 316 311 { ··· 323 328 if (ieee->current_network.qos_data.supported == 0) { 324 329 UP = 0; 325 330 } else { 326 - if (!IsACValid(TID)) { 327 - netdev_warn(ieee->dev, "%s(): TID(%d) is not valid\n", 328 - __func__, TID); 329 - return false; 330 - } 331 - 332 331 switch (TID) { 333 332 case 0: 334 333 case 3: ··· 340 351 case 7: 341 352 UP = 7; 342 353 break; 354 + default: 355 + netdev_warn(ieee->dev, "%s(): TID(%d) is not valid\n", 356 + __func__, TID); 357 + return false; 343 358 } 344 359 } 345 360
-1
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
··· 3531 3531 pwdev_priv->power_mgmt = true; 3532 3532 else 3533 3533 pwdev_priv->power_mgmt = false; 3534 - kfree((u8 *)wdev); 3535 3534 3536 3535 return ret; 3537 3536
+47 -43
drivers/staging/typec/fusb302/fusb302.c
··· 264 264 265 265 #define FUSB302_RESUME_RETRY 10 266 266 #define FUSB302_RESUME_RETRY_SLEEP 50 267 + 268 + static bool fusb302_is_suspended(struct fusb302_chip *chip) 269 + { 270 + int retry_cnt; 271 + 272 + for (retry_cnt = 0; retry_cnt < FUSB302_RESUME_RETRY; retry_cnt++) { 273 + if (atomic_read(&chip->pm_suspend)) { 274 + dev_err(chip->dev, "i2c: pm suspend, retry %d/%d\n", 275 + retry_cnt + 1, FUSB302_RESUME_RETRY); 276 + msleep(FUSB302_RESUME_RETRY_SLEEP); 277 + } else { 278 + return false; 279 + } 280 + } 281 + 282 + return true; 283 + } 284 + 267 285 static int fusb302_i2c_write(struct fusb302_chip *chip, 268 286 u8 address, u8 data) 269 287 { 270 - int retry_cnt; 271 288 int ret = 0; 272 289 273 290 atomic_set(&chip->i2c_busy, 1); 274 - for (retry_cnt = 0; retry_cnt < FUSB302_RESUME_RETRY; retry_cnt++) { 275 - if (atomic_read(&chip->pm_suspend)) { 276 - pr_err("fusb302_i2c: pm suspend, retry %d/%d\n", 277 - retry_cnt + 1, FUSB302_RESUME_RETRY); 278 - msleep(FUSB302_RESUME_RETRY_SLEEP); 279 - } else { 280 - break; 281 - } 291 + 292 + if (fusb302_is_suspended(chip)) { 293 + atomic_set(&chip->i2c_busy, 0); 294 + return -ETIMEDOUT; 282 295 } 296 + 283 297 ret = i2c_smbus_write_byte_data(chip->i2c_client, address, data); 284 298 if (ret < 0) 285 299 fusb302_log(chip, "cannot write 0x%02x to 0x%02x, ret=%d", ··· 306 292 static int fusb302_i2c_block_write(struct fusb302_chip *chip, u8 address, 307 293 u8 length, const u8 *data) 308 294 { 309 - int retry_cnt; 310 295 int ret = 0; 311 296 312 297 if (length <= 0) 313 298 return ret; 314 299 atomic_set(&chip->i2c_busy, 1); 315 - for (retry_cnt = 0; retry_cnt < FUSB302_RESUME_RETRY; retry_cnt++) { 316 - if (atomic_read(&chip->pm_suspend)) { 317 - pr_err("fusb302_i2c: pm suspend, retry %d/%d\n", 318 - retry_cnt + 1, FUSB302_RESUME_RETRY); 319 - msleep(FUSB302_RESUME_RETRY_SLEEP); 320 - } else { 321 - break; 322 - } 300 + 301 + if (fusb302_is_suspended(chip)) { 302 + atomic_set(&chip->i2c_busy, 0); 303 + return 
-ETIMEDOUT; 323 304 } 305 + 324 306 ret = i2c_smbus_write_i2c_block_data(chip->i2c_client, address, 325 307 length, data); 326 308 if (ret < 0) ··· 330 320 static int fusb302_i2c_read(struct fusb302_chip *chip, 331 321 u8 address, u8 *data) 332 322 { 333 - int retry_cnt; 334 323 int ret = 0; 335 324 336 325 atomic_set(&chip->i2c_busy, 1); 337 - for (retry_cnt = 0; retry_cnt < FUSB302_RESUME_RETRY; retry_cnt++) { 338 - if (atomic_read(&chip->pm_suspend)) { 339 - pr_err("fusb302_i2c: pm suspend, retry %d/%d\n", 340 - retry_cnt + 1, FUSB302_RESUME_RETRY); 341 - msleep(FUSB302_RESUME_RETRY_SLEEP); 342 - } else { 343 - break; 344 - } 326 + 327 + if (fusb302_is_suspended(chip)) { 328 + atomic_set(&chip->i2c_busy, 0); 329 + return -ETIMEDOUT; 345 330 } 331 + 346 332 ret = i2c_smbus_read_byte_data(chip->i2c_client, address); 347 333 *data = (u8)ret; 348 334 if (ret < 0) ··· 351 345 static int fusb302_i2c_block_read(struct fusb302_chip *chip, u8 address, 352 346 u8 length, u8 *data) 353 347 { 354 - int retry_cnt; 355 348 int ret = 0; 356 349 357 350 if (length <= 0) 358 351 return ret; 359 352 atomic_set(&chip->i2c_busy, 1); 360 - for (retry_cnt = 0; retry_cnt < FUSB302_RESUME_RETRY; retry_cnt++) { 361 - if (atomic_read(&chip->pm_suspend)) { 362 - pr_err("fusb302_i2c: pm suspend, retry %d/%d\n", 363 - retry_cnt + 1, FUSB302_RESUME_RETRY); 364 - msleep(FUSB302_RESUME_RETRY_SLEEP); 365 - } else { 366 - break; 367 - } 353 + 354 + if (fusb302_is_suspended(chip)) { 355 + atomic_set(&chip->i2c_busy, 0); 356 + return -ETIMEDOUT; 368 357 } 358 + 369 359 ret = i2c_smbus_read_i2c_block_data(chip->i2c_client, address, 370 360 length, data); 371 361 if (ret < 0) { 372 362 fusb302_log(chip, "cannot block read 0x%02x, len=%d, ret=%d", 373 363 address, length, ret); 374 - return ret; 364 + goto done; 375 365 } 376 366 if (ret != length) { 377 367 fusb302_log(chip, "only read %d/%d bytes from 0x%02x", 378 368 ret, length, address); 379 - return -EIO; 369 + ret = -EIO; 380 370 } 371 + 372 + 
done: 381 373 atomic_set(&chip->i2c_busy, 0); 382 374 383 375 return ret; ··· 493 489 ret = fusb302_i2c_read(chip, FUSB_REG_STATUS0, &data); 494 490 if (ret < 0) 495 491 return ret; 496 - chip->vbus_present = !!(FUSB_REG_STATUS0 & FUSB_REG_STATUS0_VBUSOK); 492 + chip->vbus_present = !!(data & FUSB_REG_STATUS0_VBUSOK); 497 493 ret = fusb302_i2c_read(chip, FUSB_REG_DEVICE_ID, &data); 498 494 if (ret < 0) 499 495 return ret; ··· 1029 1025 buf[pos++] = FUSB302_TKN_SYNC1; 1030 1026 buf[pos++] = FUSB302_TKN_SYNC2; 1031 1027 1032 - len = pd_header_cnt(msg->header) * 4; 1028 + len = pd_header_cnt_le(msg->header) * 4; 1033 1029 /* plug 2 for header */ 1034 1030 len += 2; 1035 1031 if (len > 0x1F) { ··· 1485 1481 (u8 *)&msg->header); 1486 1482 if (ret < 0) 1487 1483 return ret; 1488 - len = pd_header_cnt(msg->header) * 4; 1484 + len = pd_header_cnt_le(msg->header) * 4; 1489 1485 /* add 4 to length to include the CRC */ 1490 1486 if (len > PD_MAX_PAYLOAD * 4) { 1491 1487 fusb302_log(chip, "PD message too long %d", len); ··· 1667 1663 if (ret < 0) { 1668 1664 fusb302_log(chip, 1669 1665 "cannot set GPIO Int_N to input, ret=%d", ret); 1670 - gpio_free(chip->gpio_int_n); 1671 1666 return ret; 1672 1667 } 1673 1668 ret = gpio_to_irq(chip->gpio_int_n); 1674 1669 if (ret < 0) { 1675 1670 fusb302_log(chip, 1676 1671 "cannot request IRQ for GPIO Int_N, ret=%d", ret); 1677 - gpio_free(chip->gpio_int_n); 1678 1672 return ret; 1679 1673 } 1680 1674 chip->gpio_int_n_irq = ret; ··· 1789 1787 {.compatible = "fcs,fusb302"}, 1790 1788 {}, 1791 1789 }; 1790 + MODULE_DEVICE_TABLE(of, fusb302_dt_match); 1792 1791 1793 1792 static const struct i2c_device_id fusb302_i2c_device_id[] = { 1794 1793 {"typec_fusb302", 0}, 1795 1794 {}, 1796 1795 }; 1796 + MODULE_DEVICE_TABLE(i2c, fusb302_i2c_device_id); 1797 1797 1798 1798 static const struct dev_pm_ops fusb302_pm_ops = { 1799 1799 .suspend = fusb302_pm_suspend,
+10
drivers/staging/typec/pd.h
··· 92 92 return pd_header_type(le16_to_cpu(header)); 93 93 } 94 94 95 + static inline unsigned int pd_header_msgid(u16 header) 96 + { 97 + return (header >> PD_HEADER_ID_SHIFT) & PD_HEADER_ID_MASK; 98 + } 99 + 100 + static inline unsigned int pd_header_msgid_le(__le16 header) 101 + { 102 + return pd_header_msgid(le16_to_cpu(header)); 103 + } 104 + 95 105 #define PD_MAX_PAYLOAD 7 96 106 97 107 struct pd_message {
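The new pd.h accessors extract the 3-bit MessageID from a USB PD message header, which the PD specification places in bits 11..9. A standalone version of the same mask-and-shift (constants per the spec layout, mirroring the helper added above):

```c
#include <assert.h>
#include <stdint.h>

/* USB PD message header: MessageID occupies bits 11..9. */
#define PD_HEADER_ID_SHIFT	9
#define PD_HEADER_ID_MASK	0x7

static unsigned int pd_header_msgid(uint16_t header)
{
	return (header >> PD_HEADER_ID_SHIFT) & PD_HEADER_ID_MASK;
}
```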
+3 -1
drivers/staging/typec/pd_vdo.h
··· 22 22 * VDM object is minimum of VDM header + 6 additional data objects. 23 23 */ 24 24 25 + #define VDO_MAX_OBJECTS 6 26 + #define VDO_MAX_SIZE (VDO_MAX_OBJECTS + 1) 27 + 25 28 /* 26 29 * VDM header 27 30 * ---------- ··· 37 34 * <5> :: reserved (SVDM), command type (UVDM) 38 35 * <4:0> :: command 39 36 */ 40 - #define VDO_MAX_SIZE 7 41 37 #define VDO(vid, type, custom) \ 42 38 (((vid) << 16) | \ 43 39 ((type) << 15) | \
+1 -1
drivers/staging/typec/tcpci.c
··· 425 425 .max_register = 0x7F, /* 0x80 .. 0xFF are vendor defined */ 426 426 }; 427 427 428 - const struct tcpc_config tcpci_tcpc_config = { 428 + static const struct tcpc_config tcpci_tcpc_config = { 429 429 .type = TYPEC_PORT_DFP, 430 430 .default_role = TYPEC_SINK, 431 431 };
+73 -4
drivers/staging/typec/tcpm.c
··· 238 238 unsigned int hard_reset_count; 239 239 bool pd_capable; 240 240 bool explicit_contract; 241 + unsigned int rx_msgid; 241 242 242 243 /* Partner capabilities/requests */ 243 244 u32 sink_request; ··· 252 251 unsigned int nr_src_pdo; 253 252 u32 snk_pdo[PDO_MAX_OBJECTS]; 254 253 unsigned int nr_snk_pdo; 254 + u32 snk_vdo[VDO_MAX_OBJECTS]; 255 + unsigned int nr_snk_vdo; 255 256 256 257 unsigned int max_snk_mv; 257 258 unsigned int max_snk_ma; ··· 1000 997 struct pd_mode_data *modep; 1001 998 int rlen = 0; 1002 999 u16 svid; 1000 + int i; 1003 1001 1004 1002 tcpm_log(port, "Rx VDM cmd 0x%x type %d cmd %d len %d", 1005 1003 p0, cmd_type, cmd, cnt); ··· 1011 1007 case CMDT_INIT: 1012 1008 switch (cmd) { 1013 1009 case CMD_DISCOVER_IDENT: 1010 + /* 6.4.4.3.1: Only respond as UFP (device) */ 1011 + if (port->data_role == TYPEC_DEVICE && 1012 + port->nr_snk_vdo) { 1013 + for (i = 0; i < port->nr_snk_vdo; i++) 1014 + response[i + 1] 1015 + = cpu_to_le32(port->snk_vdo[i]); 1016 + rlen = port->nr_snk_vdo + 1; 1017 + } 1014 1018 break; 1015 1019 case CMD_DISCOVER_SVID: 1016 1020 break; ··· 1427 1415 break; 1428 1416 case SOFT_RESET_SEND: 1429 1417 port->message_id = 0; 1418 + port->rx_msgid = -1; 1430 1419 if (port->pwr_role == TYPEC_SOURCE) 1431 1420 next_state = SRC_SEND_CAPABILITIES; 1432 1421 else ··· 1516 1503 port->attached); 1517 1504 1518 1505 if (port->attached) { 1506 + enum pd_ctrl_msg_type type = pd_header_type_le(msg->header); 1507 + unsigned int msgid = pd_header_msgid_le(msg->header); 1508 + 1509 + /* 1510 + * USB PD standard, 6.6.1.2: 1511 + * "... if MessageID value in a received Message is the 1512 + * same as the stored value, the receiver shall return a 1513 + * GoodCRC Message with that MessageID value and drop 1514 + * the Message (this is a retry of an already received 1515 + * Message). Note: this shall not apply to the Soft_Reset 1516 + * Message which always has a MessageID value of zero." 
1517 + */ 1518 + if (msgid == port->rx_msgid && type != PD_CTRL_SOFT_RESET) 1519 + goto done; 1520 + port->rx_msgid = msgid; 1521 + 1519 1522 /* 1520 1523 * If both ends believe to be DFP/host, we have a data role 1521 1524 * mismatch. ··· 1549 1520 } 1550 1521 } 1551 1522 1523 + done: 1552 1524 mutex_unlock(&port->lock); 1553 1525 kfree(event); 1554 1526 } ··· 1749 1719 } 1750 1720 ma = min(ma, port->max_snk_ma); 1751 1721 1752 - /* XXX: Any other flags need to be set? */ 1753 - flags = 0; 1722 + flags = RDO_USB_COMM | RDO_NO_SUSPEND; 1754 1723 1755 1724 /* Set mismatch bit if offered power is less than operating power */ 1756 1725 mw = ma * mv / 1000; ··· 1986 1957 port->attached = false; 1987 1958 port->pd_capable = false; 1988 1959 1960 + /* 1961 + * First Rx ID should be 0; set this to a sentinel of -1 so that 1962 + * we can check tcpm_pd_rx_handler() if we had seen it before. 1963 + */ 1964 + port->rx_msgid = -1; 1965 + 1989 1966 port->tcpc->set_pd_rx(port->tcpc, false); 1990 1967 tcpm_init_vbus(port); /* also disables charging */ 1991 1968 tcpm_init_vconn(port); ··· 2205 2170 port->pwr_opmode = TYPEC_PWR_MODE_USB; 2206 2171 port->caps_count = 0; 2207 2172 port->message_id = 0; 2173 + port->rx_msgid = -1; 2208 2174 port->explicit_contract = false; 2209 2175 tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0); 2210 2176 break; ··· 2365 2329 typec_set_pwr_opmode(port->typec_port, TYPEC_PWR_MODE_USB); 2366 2330 port->pwr_opmode = TYPEC_PWR_MODE_USB; 2367 2331 port->message_id = 0; 2332 + port->rx_msgid = -1; 2368 2333 port->explicit_contract = false; 2369 2334 tcpm_set_state(port, SNK_DISCOVERY, 0); 2370 2335 break; ··· 2533 2496 /* Soft_Reset states */ 2534 2497 case SOFT_RESET: 2535 2498 port->message_id = 0; 2499 + port->rx_msgid = -1; 2536 2500 tcpm_pd_send_control(port, PD_CTRL_ACCEPT); 2537 2501 if (port->pwr_role == TYPEC_SOURCE) 2538 2502 tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0); ··· 2542 2504 break; 2543 2505 case SOFT_RESET_SEND: 2544 2506 
port->message_id = 0; 2507 + port->rx_msgid = -1; 2545 2508 if (tcpm_pd_send_control(port, PD_CTRL_SOFT_RESET)) 2546 2509 tcpm_set_state_cond(port, hard_reset_state(port), 0); 2547 2510 else ··· 2607 2568 break; 2608 2569 case PR_SWAP_SRC_SNK_SOURCE_OFF: 2609 2570 tcpm_set_cc(port, TYPEC_CC_RD); 2571 + /* 2572 + * USB-PD standard, 6.2.1.4, Port Power Role: 2573 + * "During the Power Role Swap Sequence, for the initial Source 2574 + * Port, the Port Power Role field shall be set to Sink in the 2575 + * PS_RDY Message indicating that the initial Source’s power 2576 + * supply is turned off" 2577 + */ 2578 + tcpm_set_pwr_role(port, TYPEC_SINK); 2610 2579 if (tcpm_pd_send_control(port, PD_CTRL_PS_RDY)) { 2611 2580 tcpm_set_state(port, ERROR_RECOVERY, 0); 2612 2581 break; ··· 2622 2575 tcpm_set_state_cond(port, SNK_UNATTACHED, PD_T_PS_SOURCE_ON); 2623 2576 break; 2624 2577 case PR_SWAP_SRC_SNK_SINK_ON: 2625 - tcpm_set_pwr_role(port, TYPEC_SINK); 2626 2578 tcpm_swap_complete(port, 0); 2627 2579 tcpm_set_state(port, SNK_STARTUP, 0); 2628 2580 break; ··· 2633 2587 case PR_SWAP_SNK_SRC_SOURCE_ON: 2634 2588 tcpm_set_cc(port, tcpm_rp_cc(port)); 2635 2589 tcpm_set_vbus(port, true); 2636 - tcpm_pd_send_control(port, PD_CTRL_PS_RDY); 2590 + /* 2591 + * USB PD standard, 6.2.1.4: 2592 + * "Subsequent Messages initiated by the Policy Engine, 2593 + * such as the PS_RDY Message sent to indicate that Vbus 2594 + * is ready, will have the Port Power Role field set to 2595 + * Source." 
2596 + */ 2637 2597 tcpm_set_pwr_role(port, TYPEC_SOURCE); 2598 + tcpm_pd_send_control(port, PD_CTRL_PS_RDY); 2638 2599 tcpm_swap_complete(port, 0); 2639 2600 tcpm_set_state(port, SRC_STARTUP, 0); 2640 2601 break; ··· 3345 3292 return nr_pdo; 3346 3293 } 3347 3294 3295 + static int tcpm_copy_vdos(u32 *dest_vdo, const u32 *src_vdo, 3296 + unsigned int nr_vdo) 3297 + { 3298 + unsigned int i; 3299 + 3300 + if (nr_vdo > VDO_MAX_OBJECTS) 3301 + nr_vdo = VDO_MAX_OBJECTS; 3302 + 3303 + for (i = 0; i < nr_vdo; i++) 3304 + dest_vdo[i] = src_vdo[i]; 3305 + 3306 + return nr_vdo; 3307 + } 3308 + 3348 3309 void tcpm_update_source_capabilities(struct tcpm_port *port, const u32 *pdo, 3349 3310 unsigned int nr_pdo) 3350 3311 { ··· 3449 3382 tcpc->config->nr_src_pdo); 3450 3383 port->nr_snk_pdo = tcpm_copy_pdos(port->snk_pdo, tcpc->config->snk_pdo, 3451 3384 tcpc->config->nr_snk_pdo); 3385 + port->nr_snk_vdo = tcpm_copy_vdos(port->snk_vdo, tcpc->config->snk_vdo, 3386 + tcpc->config->nr_snk_vdo); 3452 3387 3453 3388 port->max_snk_mv = tcpc->config->max_snk_mv; 3454 3389 port->max_snk_ma = tcpc->config->max_snk_ma;
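The tcpm.c hunk above implements the retry-drop rule quoted from USB PD 6.6.1.2: a received message whose MessageID equals the stored one is a retransmission and is dropped, except Soft_Reset, which always carries MessageID 0. A minimal standalone sketch of that check (the helper name and the stand-alone `PD_CTRL_SOFT_RESET` value are illustrative, not taken verbatim from the patch; -1 is the same "nothing received yet" sentinel the patch uses):

```c
#include <assert.h>
#include <stdbool.h>

#define PD_CTRL_SOFT_RESET 0x0D	/* control-message type per the PD spec */

/* Return true if the message is a retry that should be dropped after
 * GoodCRC; otherwise remember its MessageID and accept it.
 */
static bool pd_rx_is_retry(int *stored_msgid, unsigned int msgid,
			   unsigned int type)
{
	if ((int)msgid == *stored_msgid && type != PD_CTRL_SOFT_RESET)
		return true;		/* duplicate of the last message */
	*stored_msgid = (int)msgid;	/* store the new MessageID */
	return false;
}
```

A port starts with the sentinel -1 (set in the patch at attach, reset and soft-reset), so the first real MessageID of 0 is never misread as a duplicate.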
+3
drivers/staging/typec/tcpm.h
··· 60 60 const u32 *snk_pdo; 61 61 unsigned int nr_snk_pdo; 62 62 63 + const u32 *snk_vdo; 64 + unsigned int nr_snk_vdo; 65 + 63 66 unsigned int max_snk_mv; 64 67 unsigned int max_snk_ma; 65 68 unsigned int max_snk_mw;
+19 -12
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
··· 502 502 */ 503 503 sg_init_table(scatterlist, num_pages); 504 504 /* Now set the pages for each scatterlist */ 505 - for (i = 0; i < num_pages; i++) 506 - sg_set_page(scatterlist + i, pages[i], PAGE_SIZE, 0); 505 + for (i = 0; i < num_pages; i++) { 506 + unsigned int len = PAGE_SIZE - offset; 507 + 508 + if (len > count) 509 + len = count; 510 + sg_set_page(scatterlist + i, pages[i], len, offset); 511 + offset = 0; 512 + count -= len; 513 + } 507 514 508 515 dma_buffers = dma_map_sg(g_dev, 509 516 scatterlist, ··· 531 524 u32 addr = sg_dma_address(sg); 532 525 533 526 /* Note: addrs is the address + page_count - 1 534 - * The firmware expects the block to be page 527 + * The firmware expects blocks after the first to be page- 535 528 * aligned and a multiple of the page size 536 529 */ 537 530 WARN_ON(len == 0); 538 - WARN_ON(len & ~PAGE_MASK); 539 - WARN_ON(addr & ~PAGE_MASK); 531 + WARN_ON(i && (i != (dma_buffers - 1)) && (len & ~PAGE_MASK)); 532 + WARN_ON(i && (addr & ~PAGE_MASK)); 540 533 if (k > 0 && 541 - ((addrs[k - 1] & PAGE_MASK) | 542 - ((addrs[k - 1] & ~PAGE_MASK) + 1) << PAGE_SHIFT) 543 - == addr) { 544 - addrs[k - 1] += (len >> PAGE_SHIFT); 545 - } else { 546 - addrs[k++] = addr | ((len >> PAGE_SHIFT) - 1); 547 - } 534 + ((addrs[k - 1] & PAGE_MASK) + 535 + (((addrs[k - 1] & ~PAGE_MASK) + 1) << PAGE_SHIFT)) 536 + == (addr & PAGE_MASK)) 537 + addrs[k - 1] += ((len + PAGE_SIZE - 1) >> PAGE_SHIFT); 538 + else 539 + addrs[k++] = (addr & PAGE_MASK) | 540 + (((len + PAGE_SIZE - 1) >> PAGE_SHIFT) - 1); 548 541 } 549 542 550 543 /* Partial cache lines (fragments) require special measures */
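The vchiq_2835_arm.c fix above stops assuming each scatterlist entry covers a full page: a buffer starting at `offset` within its first page contributes only `PAGE_SIZE - offset` bytes there, and every entry is clamped to the bytes still remaining. A small sketch of just that length calculation (the helper name and `MY_PAGE_SIZE` constant are made up for illustration):

```c
#include <assert.h>

#define MY_PAGE_SIZE 4096u

/* Bytes contributed by one page of a buffer, given the starting offset
 * within that page and the total bytes still left to map.
 */
static unsigned int sg_chunk_len(unsigned int offset, unsigned int count)
{
	unsigned int len = MY_PAGE_SIZE - offset;

	return len > count ? count : len;
}
```

In the patch the offset is forced to 0 after the first iteration, so only the first and last entries can be shorter than a page.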
+1
drivers/tee/Kconfig
··· 1 1 # Generic Trusted Execution Environment Configuration 2 2 config TEE 3 3 tristate "Trusted Execution Environment support" 4 + depends on HAVE_ARM_SMCCC || COMPILE_TEST 4 5 select DMA_SHARED_BUFFER 5 6 select GENERIC_ALLOCATOR 6 7 help
+4 -4
drivers/uio/uio.c
··· 279 279 map = kzalloc(sizeof(*map), GFP_KERNEL); 280 280 if (!map) { 281 281 ret = -ENOMEM; 282 - goto err_map_kobj; 282 + goto err_map; 283 283 } 284 284 kobject_init(&map->kobj, &map_attr_type); 285 285 map->mem = mem; ··· 289 289 goto err_map_kobj; 290 290 ret = kobject_uevent(&map->kobj, KOBJ_ADD); 291 291 if (ret) 292 - goto err_map; 292 + goto err_map_kobj; 293 293 } 294 294 295 295 for (pi = 0; pi < MAX_UIO_PORT_REGIONS; pi++) { ··· 308 308 portio = kzalloc(sizeof(*portio), GFP_KERNEL); 309 309 if (!portio) { 310 310 ret = -ENOMEM; 311 - goto err_portio_kobj; 311 + goto err_portio; 312 312 } 313 313 kobject_init(&portio->kobj, &portio_attr_type); 314 314 portio->port = port; ··· 319 319 goto err_portio_kobj; 320 320 ret = kobject_uevent(&portio->kobj, KOBJ_ADD); 321 321 if (ret) 322 - goto err_portio; 322 + goto err_portio_kobj; 323 323 } 324 324 325 325 return 0;
+7 -7
drivers/usb/core/devio.c
··· 475 475 476 476 if (userurb) { /* Async */ 477 477 if (when == SUBMIT) 478 - dev_info(&udev->dev, "userurb %p, ep%d %s-%s, " 478 + dev_info(&udev->dev, "userurb %pK, ep%d %s-%s, " 479 479 "length %u\n", 480 480 userurb, ep, t, d, length); 481 481 else 482 - dev_info(&udev->dev, "userurb %p, ep%d %s-%s, " 482 + dev_info(&udev->dev, "userurb %pK, ep%d %s-%s, " 483 483 "actual_length %u status %d\n", 484 484 userurb, ep, t, d, length, 485 485 timeout_or_status); ··· 1895 1895 if (as) { 1896 1896 int retval; 1897 1897 1898 - snoop(&ps->dev->dev, "reap %p\n", as->userurb); 1898 + snoop(&ps->dev->dev, "reap %pK\n", as->userurb); 1899 1899 retval = processcompl(as, (void __user * __user *)arg); 1900 1900 free_async(as); 1901 1901 return retval; ··· 1912 1912 1913 1913 as = async_getcompleted(ps); 1914 1914 if (as) { 1915 - snoop(&ps->dev->dev, "reap %p\n", as->userurb); 1915 + snoop(&ps->dev->dev, "reap %pK\n", as->userurb); 1916 1916 retval = processcompl(as, (void __user * __user *)arg); 1917 1917 free_async(as); 1918 1918 } else { ··· 2043 2043 if (as) { 2044 2044 int retval; 2045 2045 2046 - snoop(&ps->dev->dev, "reap %p\n", as->userurb); 2046 + snoop(&ps->dev->dev, "reap %pK\n", as->userurb); 2047 2047 retval = processcompl_compat(as, (void __user * __user *)arg); 2048 2048 free_async(as); 2049 2049 return retval; ··· 2060 2060 2061 2061 as = async_getcompleted(ps); 2062 2062 if (as) { 2063 - snoop(&ps->dev->dev, "reap %p\n", as->userurb); 2063 + snoop(&ps->dev->dev, "reap %pK\n", as->userurb); 2064 2064 retval = processcompl_compat(as, (void __user * __user *)arg); 2065 2065 free_async(as); 2066 2066 } else { ··· 2489 2489 #endif 2490 2490 2491 2491 case USBDEVFS_DISCARDURB: 2492 - snoop(&dev->dev, "%s: DISCARDURB %p\n", __func__, p); 2492 + snoop(&dev->dev, "%s: DISCARDURB %pK\n", __func__, p); 2493 2493 ret = proc_unlinkurb(ps, p); 2494 2494 break; 2495 2495
+3 -2
drivers/usb/core/hcd.c
··· 1723 1723 if (retval == 0) 1724 1724 retval = -EINPROGRESS; 1725 1725 else if (retval != -EIDRM && retval != -EBUSY) 1726 - dev_dbg(&udev->dev, "hcd_unlink_urb %p fail %d\n", 1726 + dev_dbg(&udev->dev, "hcd_unlink_urb %pK fail %d\n", 1727 1727 urb, retval); 1728 1728 usb_put_dev(udev); 1729 1729 } ··· 1890 1890 /* kick hcd */ 1891 1891 unlink1(hcd, urb, -ESHUTDOWN); 1892 1892 dev_dbg (hcd->self.controller, 1893 - "shutdown urb %p ep%d%s%s\n", 1893 + "shutdown urb %pK ep%d%s%s\n", 1894 1894 urb, usb_endpoint_num(&ep->desc), 1895 1895 is_in ? "in" : "out", 1896 1896 ({ char *s; ··· 2520 2520 hcd->bandwidth_mutex = kmalloc(sizeof(*hcd->bandwidth_mutex), 2521 2521 GFP_KERNEL); 2522 2522 if (!hcd->bandwidth_mutex) { 2523 + kfree(hcd->address0_mutex); 2523 2524 kfree(hcd); 2524 2525 dev_dbg(dev, "hcd bandwidth mutex alloc failed\n"); 2525 2526 return NULL;
+21 -6
drivers/usb/core/hub.c
··· 362 362 } 363 363 364 364 /* USB 2.0 spec Section 11.24.4.5 */ 365 - static int get_hub_descriptor(struct usb_device *hdev, void *data) 365 + static int get_hub_descriptor(struct usb_device *hdev, 366 + struct usb_hub_descriptor *desc) 366 367 { 367 368 int i, ret, size; 368 369 unsigned dtype; ··· 379 378 for (i = 0; i < 3; i++) { 380 379 ret = usb_control_msg(hdev, usb_rcvctrlpipe(hdev, 0), 381 380 USB_REQ_GET_DESCRIPTOR, USB_DIR_IN | USB_RT_HUB, 382 - dtype << 8, 0, data, size, 381 + dtype << 8, 0, desc, size, 383 382 USB_CTRL_GET_TIMEOUT); 384 - if (ret >= (USB_DT_HUB_NONVAR_SIZE + 2)) 383 + if (hub_is_superspeed(hdev)) { 384 + if (ret == size) 385 + return ret; 386 + } else if (ret >= USB_DT_HUB_NONVAR_SIZE + 2) { 387 + /* Make sure we have the DeviceRemovable field. */ 388 + size = USB_DT_HUB_NONVAR_SIZE + desc->bNbrPorts / 8 + 1; 389 + if (ret < size) 390 + return -EMSGSIZE; 385 391 return ret; 392 + } 386 393 } 387 394 return -EINVAL; 388 395 } ··· 1322 1313 } 1323 1314 mutex_init(&hub->status_mutex); 1324 1315 1325 - hub->descriptor = kmalloc(sizeof(*hub->descriptor), GFP_KERNEL); 1316 + hub->descriptor = kzalloc(sizeof(*hub->descriptor), GFP_KERNEL); 1326 1317 if (!hub->descriptor) { 1327 1318 ret = -ENOMEM; 1328 1319 goto fail; ··· 1330 1321 1331 1322 /* Request the entire hub descriptor. 1332 1323 * hub->descriptor can handle USB_MAXCHILDREN ports, 1333 - * but the hub can/will return fewer bytes here. 1324 + * but a (non-SS) hub can/will return fewer bytes here. 
1334 1325 */ 1335 1326 ret = get_hub_descriptor(hdev, hub->descriptor); 1336 1327 if (ret < 0) { 1337 1328 message = "can't read hub descriptor"; 1338 1329 goto fail; 1339 - } else if (hub->descriptor->bNbrPorts > USB_MAXCHILDREN) { 1330 + } 1331 + 1332 + maxchild = USB_MAXCHILDREN; 1333 + if (hub_is_superspeed(hdev)) 1334 + maxchild = min_t(unsigned, maxchild, USB_SS_MAXPORTS); 1335 + 1336 + if (hub->descriptor->bNbrPorts > maxchild) { 1340 1337 message = "hub has too many ports!"; 1341 1338 ret = -ENODEV; 1342 1339 goto fail;
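The hub.c change above rejects short descriptors by computing the minimum length a USB 2.0 hub descriptor must have once `bNbrPorts` is known: the DeviceRemovable bitmap needs one bit per port plus bit 0, i.e. `bNbrPorts / 8 + 1` bytes after the 7 fixed bytes (`USB_DT_HUB_NONVAR_SIZE`). A sketch of that size check, with the helper name invented here:

```c
#include <assert.h>

#define USB_DT_HUB_NONVAR_SIZE 7

/* Minimum byte length of a USB 2.0 hub descriptor that still contains
 * the full DeviceRemovable bitmap for nbr_ports downstream ports.
 */
static unsigned int hub_desc_min_len(unsigned int nbr_ports)
{
	return USB_DT_HUB_NONVAR_SIZE + nbr_ports / 8 + 1;
}
```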
+3
drivers/usb/core/of.c
··· 53 53 * 54 54 * Find the companion device from platform bus. 55 55 * 56 + * Takes a reference to the returned struct device which needs to be dropped 57 + * after use. 58 + * 56 59 * Return: On success, a pointer to the companion device, %NULL on failure. 57 60 */ 58 61 struct device *usb_of_get_companion_dev(struct device *dev)
+1 -1
drivers/usb/core/urb.c
··· 338 338 if (!urb || !urb->complete) 339 339 return -EINVAL; 340 340 if (urb->hcpriv) { 341 - WARN_ONCE(1, "URB %p submitted while active\n", urb); 341 + WARN_ONCE(1, "URB %pK submitted while active\n", urb); 342 342 return -EBUSY; 343 343 } 344 344
+4
drivers/usb/dwc3/dwc3-keystone.c
··· 107 107 return PTR_ERR(kdwc->usbss); 108 108 109 109 kdwc->clk = devm_clk_get(kdwc->dev, "usb"); 110 + if (IS_ERR(kdwc->clk)) { 111 + dev_err(kdwc->dev, "unable to get usb clock\n"); 112 + return PTR_ERR(kdwc->clk); 113 + } 110 114 111 115 error = clk_prepare_enable(kdwc->clk); 112 116 if (error < 0) {
+4
drivers/usb/dwc3/dwc3-pci.c
··· 39 39 #define PCI_DEVICE_ID_INTEL_APL 0x5aaa 40 40 #define PCI_DEVICE_ID_INTEL_KBP 0xa2b0 41 41 #define PCI_DEVICE_ID_INTEL_GLK 0x31aa 42 + #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee 43 + #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e 42 44 43 45 #define PCI_INTEL_BXT_DSM_UUID "732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511" 44 46 #define PCI_INTEL_BXT_FUNC_PMU_PWR 4 ··· 272 270 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_APL), }, 273 271 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KBP), }, 274 272 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_GLK), }, 273 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPLP), }, 274 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CNPH), }, 275 275 { PCI_DEVICE(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_NL_USB), }, 276 276 { } /* Terminating Entry */ 277 277 };
+20 -1
drivers/usb/dwc3/gadget.c
··· 1261 1261 __dwc3_gadget_start_isoc(dwc, dep, cur_uf); 1262 1262 dep->flags &= ~DWC3_EP_PENDING_REQUEST; 1263 1263 } 1264 + return 0; 1264 1265 } 1265 - return 0; 1266 + 1267 + if ((dep->flags & DWC3_EP_BUSY) && 1268 + !(dep->flags & DWC3_EP_MISSED_ISOC)) { 1269 + WARN_ON_ONCE(!dep->resource_index); 1270 + ret = __dwc3_gadget_kick_transfer(dep, 1271 + dep->resource_index); 1272 + } 1273 + 1274 + goto out; 1266 1275 } 1267 1276 1268 1277 if (!dwc3_calc_trbs_left(dep)) 1269 1278 return 0; 1270 1279 1271 1280 ret = __dwc3_gadget_kick_transfer(dep, 0); 1281 + out: 1272 1282 if (ret == -EBUSY) 1273 1283 ret = 0; 1274 1284 ··· 3035 3025 dwc->pending_events = true; 3036 3026 return IRQ_HANDLED; 3037 3027 } 3028 + 3029 + /* 3030 + * With PCIe legacy interrupt, test shows that top-half irq handler can 3031 + * be called again after HW interrupt deassertion. Check if bottom-half 3032 + * irq event handler completes before caching new event to prevent 3033 + * losing events. 3034 + */ 3035 + if (evt->flags & DWC3_EVENT_PENDING) 3036 + return IRQ_HANDLED; 3038 3037 3039 3038 count = dwc3_readl(dwc->regs, DWC3_GEVNTCOUNT(0)); 3040 3039 count &= DWC3_GEVNTCOUNT_MASK;
+5 -5
drivers/usb/gadget/function/f_fs.c
··· 1858 1858 ep->ep->driver_data = ep; 1859 1859 ep->ep->desc = ds; 1860 1860 1861 - comp_desc = (struct usb_ss_ep_comp_descriptor *)(ds + 1862 - USB_DT_ENDPOINT_SIZE); 1863 - ep->ep->maxburst = comp_desc->bMaxBurst + 1; 1864 - 1865 - if (needs_comp_desc) 1861 + if (needs_comp_desc) { 1862 + comp_desc = (struct usb_ss_ep_comp_descriptor *)(ds + 1863 + USB_DT_ENDPOINT_SIZE); 1864 + ep->ep->maxburst = comp_desc->bMaxBurst + 1; 1866 1865 ep->ep->comp_desc = comp_desc; 1866 + } 1867 1867 1868 1868 ret = usb_ep_enable(ep->ep); 1869 1869 if (likely(!ret)) {
+1 -1
drivers/usb/gadget/function/u_serial.c
··· 1256 1256 struct gscons_info *info = &gscons_info; 1257 1257 1258 1258 unregister_console(&gserial_cons); 1259 - if (info->console_thread != NULL) 1259 + if (!IS_ERR_OR_NULL(info->console_thread)) 1260 1260 kthread_stop(info->console_thread); 1261 1261 gs_buf_free(&info->con_buf); 1262 1262 }
+3 -3
drivers/usb/gadget/udc/dummy_hcd.c
··· 2008 2008 HUB_CHAR_COMMON_OCPM); 2009 2009 desc->bNbrPorts = 1; 2010 2010 desc->u.ss.bHubHdrDecLat = 0x04; /* Worst case: 0.4 micro sec*/ 2011 - desc->u.ss.DeviceRemovable = 0xffff; 2011 + desc->u.ss.DeviceRemovable = 0; 2012 2012 } 2013 2013 2014 2014 static inline void hub_descriptor(struct usb_hub_descriptor *desc) ··· 2020 2020 HUB_CHAR_INDV_PORT_LPSM | 2021 2021 HUB_CHAR_COMMON_OCPM); 2022 2022 desc->bNbrPorts = 1; 2023 - desc->u.hs.DeviceRemovable[0] = 0xff; 2024 - desc->u.hs.DeviceRemovable[1] = 0xff; 2023 + desc->u.hs.DeviceRemovable[0] = 0; 2024 + desc->u.hs.DeviceRemovable[1] = 0xff; /* PortPwrCtrlMask */ 2025 2025 } 2026 2026 2027 2027 static int dummy_hub_control(
+3 -1
drivers/usb/host/ehci-platform.c
··· 384 384 } 385 385 386 386 companion_dev = usb_of_get_companion_dev(hcd->self.controller); 387 - if (companion_dev) 387 + if (companion_dev) { 388 388 device_pm_wait_for_dev(hcd->self.controller, companion_dev); 389 + put_device(companion_dev); 390 + } 389 391 390 392 ehci_resume(hcd, priv->reset_on_resume); 391 393 return 0;
+4 -2
drivers/usb/host/r8a66597-hcd.c
··· 1269 1269 time = 30; 1270 1270 break; 1271 1271 default: 1272 - time = 300; 1272 + time = 50; 1273 1273 break; 1274 1274 } 1275 1275 ··· 1785 1785 pipe = td->pipe; 1786 1786 pipe_stop(r8a66597, pipe); 1787 1787 1788 + /* Select a different address or endpoint */ 1788 1789 new_td = td; 1789 1790 do { 1790 1791 list_move_tail(&new_td->queue, ··· 1795 1794 new_td = td; 1796 1795 break; 1797 1796 } 1798 - } while (td != new_td && td->address == new_td->address); 1797 + } while (td != new_td && td->address == new_td->address && 1798 + td->pipe->info.epnum == new_td->pipe->info.epnum); 1799 1799 1800 1800 start_transfer(r8a66597, new_td); 1801 1801
+1 -1
drivers/usb/host/xhci-hub.c
··· 419 419 wait_for_completion(cmd->completion); 420 420 421 421 if (cmd->status == COMP_COMMAND_ABORTED || 422 - cmd->status == COMP_STOPPED) { 422 + cmd->status == COMP_COMMAND_RING_STOPPED) { 423 423 xhci_warn(xhci, "Timeout while waiting for stop endpoint command\n"); 424 424 ret = -ETIME; 425 425 }
+6 -5
drivers/usb/host/xhci-mem.c
··· 56 56 } 57 57 58 58 if (max_packet) { 59 - seg->bounce_buf = kzalloc(max_packet, flags | GFP_DMA); 59 + seg->bounce_buf = kzalloc(max_packet, flags); 60 60 if (!seg->bounce_buf) { 61 61 dma_pool_free(xhci->segment_pool, seg->trbs, dma); 62 62 kfree(seg); ··· 1724 1724 xhci->dcbaa->dev_context_ptrs[0] = cpu_to_le64(xhci->scratchpad->sp_dma); 1725 1725 for (i = 0; i < num_sp; i++) { 1726 1726 dma_addr_t dma; 1727 - void *buf = dma_alloc_coherent(dev, xhci->page_size, &dma, 1727 + void *buf = dma_zalloc_coherent(dev, xhci->page_size, &dma, 1728 1728 flags); 1729 1729 if (!buf) 1730 1730 goto fail_sp4; ··· 2307 2307 /* Place limits on the number of roothub ports so that the hub 2308 2308 * descriptors aren't longer than the USB core will allocate. 2309 2309 */ 2310 - if (xhci->num_usb3_ports > 15) { 2310 + if (xhci->num_usb3_ports > USB_SS_MAXPORTS) { 2311 2311 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2312 - "Limiting USB 3.0 roothub ports to 15."); 2313 - xhci->num_usb3_ports = 15; 2312 + "Limiting USB 3.0 roothub ports to %u.", 2313 + USB_SS_MAXPORTS); 2314 + xhci->num_usb3_ports = USB_SS_MAXPORTS; 2314 2315 } 2315 2316 if (xhci->num_usb2_ports > USB_MAXCHILDREN) { 2316 2317 xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+5 -2
drivers/usb/host/xhci-pci.c
··· 52 52 #define PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI 0x0aa8 53 53 #define PCI_DEVICE_ID_INTEL_BROXTON_B_XHCI 0x1aa8 54 54 #define PCI_DEVICE_ID_INTEL_APL_XHCI 0x5aa8 55 + #define PCI_DEVICE_ID_INTEL_DNV_XHCI 0x19d0 55 56 56 57 static const char hcd_name[] = "xhci_hcd"; 57 58 ··· 167 166 pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI || 168 167 pdev->device == PCI_DEVICE_ID_INTEL_BROXTON_M_XHCI || 169 168 pdev->device == PCI_DEVICE_ID_INTEL_BROXTON_B_XHCI || 170 - pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI)) { 169 + pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI || 170 + pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI)) { 171 171 xhci->quirks |= XHCI_PME_STUCK_QUIRK; 172 172 } 173 173 if (pdev->vendor == PCI_VENDOR_ID_INTEL && ··· 177 175 } 178 176 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 179 177 (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI || 180 - pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI)) 178 + pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI || 179 + pdev->device == PCI_DEVICE_ID_INTEL_DNV_XHCI)) 181 180 xhci->quirks |= XHCI_MISSING_CAS; 182 181 183 182 if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+1 -1
drivers/usb/host/xhci-plat.c
··· 177 177 178 178 irq = platform_get_irq(pdev, 0); 179 179 if (irq < 0) 180 - return -ENODEV; 180 + return irq; 181 181 182 182 /* 183 183 * sysdev must point to a device that is known to the system firmware
+9 -11
drivers/usb/host/xhci-ring.c
··· 323 323 if (i_cmd->status != COMP_COMMAND_ABORTED) 324 324 continue; 325 325 326 - i_cmd->status = COMP_STOPPED; 326 + i_cmd->status = COMP_COMMAND_RING_STOPPED; 327 327 328 328 xhci_dbg(xhci, "Turn aborted command %p to no-op\n", 329 329 i_cmd->command_trb); ··· 641 641 xhci_urb_free_priv(urb_priv); 642 642 usb_hcd_unlink_urb_from_ep(hcd, urb); 643 643 spin_unlock(&xhci->lock); 644 - usb_hcd_giveback_urb(hcd, urb, status); 645 644 trace_xhci_urb_giveback(urb); 645 + usb_hcd_giveback_urb(hcd, urb, status); 646 646 spin_lock(&xhci->lock); 647 647 } 648 648 ··· 1380 1380 cmd_comp_code = GET_COMP_CODE(le32_to_cpu(event->status)); 1381 1381 1382 1382 /* If CMD ring stopped we own the trbs between enqueue and dequeue */ 1383 - if (cmd_comp_code == COMP_STOPPED) { 1383 + if (cmd_comp_code == COMP_COMMAND_RING_STOPPED) { 1384 1384 complete_all(&xhci->cmd_ring_stop_completion); 1385 1385 return; 1386 1386 } ··· 1436 1436 break; 1437 1437 case TRB_CMD_NOOP: 1438 1438 /* Is this an aborted command turned to NO-OP? 
*/ 1439 - if (cmd->status == COMP_STOPPED) 1440 - cmd_comp_code = COMP_STOPPED; 1439 + if (cmd->status == COMP_COMMAND_RING_STOPPED) 1440 + cmd_comp_code = COMP_COMMAND_RING_STOPPED; 1441 1441 break; 1442 1442 case TRB_RESET_EP: 1443 1443 WARN_ON(slot_id != TRB_TO_SLOT_ID( ··· 2677 2677 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 2678 2678 union xhci_trb *event_ring_deq; 2679 2679 irqreturn_t ret = IRQ_NONE; 2680 + unsigned long flags; 2680 2681 dma_addr_t deq; 2681 2682 u64 temp_64; 2682 2683 u32 status; 2683 2684 2684 - spin_lock(&xhci->lock); 2685 + spin_lock_irqsave(&xhci->lock, flags); 2685 2686 /* Check if the xHC generated the interrupt, or the irq is shared */ 2686 2687 status = readl(&xhci->op_regs->status); 2687 2688 if (status == ~(u32)0) { ··· 2708 2707 */ 2709 2708 status |= STS_EINT; 2710 2709 writel(status, &xhci->op_regs->status); 2711 - /* FIXME when MSI-X is supported and there are multiple vectors */ 2712 - /* Clear the MSI-X event interrupt status */ 2713 2710 2714 - if (hcd->irq) { 2711 + if (!hcd->msi_enabled) { 2715 2712 u32 irq_pending; 2716 - /* Acknowledge the PCI interrupt */ 2717 2713 irq_pending = readl(&xhci->ir_set->irq_pending); 2718 2714 irq_pending |= IMAN_IP; 2719 2715 writel(irq_pending, &xhci->ir_set->irq_pending); ··· 2755 2757 ret = IRQ_HANDLED; 2756 2758 2757 2759 out: 2758 - spin_unlock(&xhci->lock); 2760 + spin_unlock_irqrestore(&xhci->lock, flags); 2759 2761 2760 2762 return ret; 2761 2763 }
+7 -6
drivers/usb/host/xhci.c
··· 359 359 /* fall back to msi*/ 360 360 ret = xhci_setup_msi(xhci); 361 361 362 - if (!ret) 363 - /* hcd->irq is 0, we have MSI */ 362 + if (!ret) { 363 + hcd->msi_enabled = 1; 364 364 return 0; 365 + } 365 366 366 367 if (!pdev->irq) { 367 368 xhci_err(xhci, "No msi-x/msi found and no IRQ in BIOS\n"); ··· 1764 1763 1765 1764 switch (*cmd_status) { 1766 1765 case COMP_COMMAND_ABORTED: 1767 - case COMP_STOPPED: 1766 + case COMP_COMMAND_RING_STOPPED: 1768 1767 xhci_warn(xhci, "Timeout while waiting for configure endpoint command\n"); 1769 1768 ret = -ETIME; 1770 1769 break; ··· 1814 1813 1815 1814 switch (*cmd_status) { 1816 1815 case COMP_COMMAND_ABORTED: 1817 - case COMP_STOPPED: 1816 + case COMP_COMMAND_RING_STOPPED: 1818 1817 xhci_warn(xhci, "Timeout while waiting for evaluate context command\n"); 1819 1818 ret = -ETIME; 1820 1819 break; ··· 3433 3432 ret = reset_device_cmd->status; 3434 3433 switch (ret) { 3435 3434 case COMP_COMMAND_ABORTED: 3436 - case COMP_STOPPED: 3435 + case COMP_COMMAND_RING_STOPPED: 3437 3436 xhci_warn(xhci, "Timeout waiting for reset device command\n"); 3438 3437 ret = -ETIME; 3439 3438 goto command_cleanup; ··· 3818 3817 */ 3819 3818 switch (command->status) { 3820 3819 case COMP_COMMAND_ABORTED: 3821 - case COMP_STOPPED: 3820 + case COMP_COMMAND_RING_STOPPED: 3822 3821 xhci_warn(xhci, "Timeout while waiting for setup device command\n"); 3823 3822 ret = -ETIME; 3824 3823 break;
+1 -1
drivers/usb/misc/chaoskey.c
··· 192 192 193 193 dev->in_ep = in_ep; 194 194 195 - if (udev->descriptor.idVendor != ALEA_VENDOR_ID) 195 + if (le16_to_cpu(udev->descriptor.idVendor) != ALEA_VENDOR_ID) 196 196 dev->reads_started = 1; 197 197 198 198 dev->size = size;
+1 -1
drivers/usb/misc/iowarrior.c
··· 554 554 info.revision = le16_to_cpu(dev->udev->descriptor.bcdDevice); 555 555 556 556 /* 0==UNKNOWN, 1==LOW(usb1.1) ,2=FULL(usb1.1), 3=HIGH(usb2.0) */ 557 - info.speed = le16_to_cpu(dev->udev->speed); 557 + info.speed = dev->udev->speed; 558 558 info.if_num = dev->interface->cur_altsetting->desc.bInterfaceNumber; 559 559 info.report_size = dev->report_size; 560 560
+1
drivers/usb/misc/legousbtower.c
··· 926 926 USB_MAJOR, dev->minor); 927 927 928 928 exit: 929 + kfree(get_version_reply); 929 930 return retval; 930 931 931 932 error:
+1 -1
drivers/usb/misc/sisusbvga/sisusb_con.c
··· 973 973 974 974 mutex_unlock(&sisusb->lock); 975 975 976 - return 1; 976 + return true; 977 977 } 978 978 979 979 /* Interface routine */
+5 -4
drivers/usb/musb/musb_host.c
··· 2780 2780 int ret; 2781 2781 struct usb_hcd *hcd = musb->hcd; 2782 2782 2783 - MUSB_HST_MODE(musb); 2784 - musb->xceiv->otg->default_a = 1; 2785 - musb->xceiv->otg->state = OTG_STATE_A_IDLE; 2786 - 2783 + if (musb->port_mode == MUSB_PORT_MODE_HOST) { 2784 + MUSB_HST_MODE(musb); 2785 + musb->xceiv->otg->default_a = 1; 2786 + musb->xceiv->otg->state = OTG_STATE_A_IDLE; 2787 + } 2787 2788 otg_set_host(musb->xceiv->otg, &hcd->self); 2788 2789 hcd->self.otg_port = 1; 2789 2790 musb->xceiv->otg->host = &hcd->self;
+9 -4
drivers/usb/musb/tusb6010_omap.c
··· 219 219 u32 dma_remaining; 220 220 int src_burst, dst_burst; 221 221 u16 csr; 222 + u32 psize; 222 223 int ch; 223 224 s8 dmareq; 224 225 s8 sync_dev; ··· 391 390 392 391 if (chdat->tx) { 393 392 /* Send transfer_packet_sz packets at a time */ 394 - musb_writel(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET, 395 - chdat->transfer_packet_sz); 393 + psize = musb_readl(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET); 394 + psize &= ~0x7ff; 395 + psize |= chdat->transfer_packet_sz; 396 + musb_writel(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET, psize); 396 397 397 398 musb_writel(ep_conf, TUSB_EP_TX_OFFSET, 398 399 TUSB_EP_CONFIG_XFR_SIZE(chdat->transfer_len)); 399 400 } else { 400 401 /* Receive transfer_packet_sz packets at a time */ 401 - musb_writel(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET, 402 - chdat->transfer_packet_sz << 16); 402 + psize = musb_readl(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET); 403 + psize &= ~(0x7ff << 16); 404 + psize |= (chdat->transfer_packet_sz << 16); 405 + musb_writel(ep_conf, TUSB_EP_MAX_PACKET_SIZE_OFFSET, psize); 403 406 404 407 musb_writel(ep_conf, TUSB_EP_RX_OFFSET, 405 408 TUSB_EP_CONFIG_XFR_SIZE(chdat->transfer_len));
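The tusb6010_omap.c hunk above converts two whole-register writes into read-modify-write updates, because the TX packet size (low bits, mask 0x7ff) and the RX packet size (same mask shifted left 16) share one register, and writing the whole word clobbered the other direction's field. A sketch of that masking, with the function name and field layout taken only from what the patch itself implies:

```c
#include <assert.h>

/* Update one direction's packet-size field in the shared register
 * without disturbing the other direction's field.
 */
static unsigned int psize_update(unsigned int reg, unsigned int pkt_sz,
				 int is_rx)
{
	if (is_rx) {
		reg &= ~(0x7ffu << 16);	/* clear RX field only */
		reg |= pkt_sz << 16;
	} else {
		reg &= ~0x7ffu;		/* clear TX field only */
		reg |= pkt_sz;
	}
	return reg;
}
```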
+5 -5
drivers/usb/serial/ftdi_sio.c
··· 809 809 { USB_DEVICE(FTDI_VID, FTDI_PROPOX_ISPCABLEIII_PID) }, 810 810 { USB_DEVICE(FTDI_VID, CYBER_CORTEX_AV_PID), 811 811 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 812 - { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_PID), 813 - .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 814 - { USB_DEVICE(OLIMEX_VID, OLIMEX_ARM_USB_OCD_H_PID), 815 - .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 812 + { USB_DEVICE_INTERFACE_NUMBER(OLIMEX_VID, OLIMEX_ARM_USB_OCD_PID, 1) }, 813 + { USB_DEVICE_INTERFACE_NUMBER(OLIMEX_VID, OLIMEX_ARM_USB_OCD_H_PID, 1) }, 814 + { USB_DEVICE_INTERFACE_NUMBER(OLIMEX_VID, OLIMEX_ARM_USB_TINY_PID, 1) }, 815 + { USB_DEVICE_INTERFACE_NUMBER(OLIMEX_VID, OLIMEX_ARM_USB_TINY_H_PID, 1) }, 816 816 { USB_DEVICE(FIC_VID, FIC_NEO1973_DEBUG_PID), 817 817 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 818 818 { USB_DEVICE(FTDI_VID, FTDI_OOCDLINK_PID), ··· 1527 1527 (new_serial.flags & ASYNC_FLAGS)); 1528 1528 priv->custom_divisor = new_serial.custom_divisor; 1529 1529 1530 + check_and_exit: 1530 1531 write_latency_timer(port); 1531 1532 1532 - check_and_exit: 1533 1533 if ((old_priv.flags & ASYNC_SPD_MASK) != 1534 1534 (priv->flags & ASYNC_SPD_MASK)) { 1535 1535 if ((priv->flags & ASYNC_SPD_MASK) == ASYNC_SPD_HI)
+2
drivers/usb/serial/ftdi_sio_ids.h
··· 882 882 /* Olimex */ 883 883 #define OLIMEX_VID 0x15BA 884 884 #define OLIMEX_ARM_USB_OCD_PID 0x0003 885 + #define OLIMEX_ARM_USB_TINY_PID 0x0004 886 + #define OLIMEX_ARM_USB_TINY_H_PID 0x002a 885 887 #define OLIMEX_ARM_USB_OCD_H_PID 0x002b 886 888 887 889 /*
+4 -1
drivers/usb/serial/io_ti.c
··· 2336 2336 if (!baud) { 2337 2337 /* pick a default, any default... */ 2338 2338 baud = 9600; 2339 - } else 2339 + } else { 2340 + /* Avoid a zero divisor. */ 2341 + baud = min(baud, 461550); 2340 2342 tty_encode_baud_rate(tty, baud, baud); 2343 + } 2341 2344 2342 2345 edge_port->baud_rate = baud; 2343 2346 config->wBaudRate = (__u16)((461550L + baud/2) / baud);
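The io_ti.c change above clamps the requested baud rate so that the rounded divisor `(461550 + baud/2) / baud` can never truncate to zero. A sketch of the computation with the clamp applied (helper name and constant name are illustrative; 461550 is the base value used in the driver's formula):

```c
#include <assert.h>

#define EDGE_BASE_CLOCK 461550u

/* Rounded baud divisor; the clamp mirrors the io_ti.c fix and keeps
 * the result >= 1, avoiding a zero divisor downstream.
 */
static unsigned int edge_baud_divisor(unsigned int baud)
{
	if (baud > EDGE_BASE_CLOCK)
		baud = EDGE_BASE_CLOCK;
	return (EDGE_BASE_CLOCK + baud / 2) / baud;
}
```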
+12 -9
drivers/usb/serial/ir-usb.c
··· 197 197 static int ir_startup(struct usb_serial *serial) 198 198 { 199 199 struct usb_irda_cs_descriptor *irda_desc; 200 + int rates; 200 201 201 202 irda_desc = irda_usb_find_class_desc(serial, 0); 202 203 if (!irda_desc) { ··· 206 205 return -ENODEV; 207 206 } 208 207 208 + rates = le16_to_cpu(irda_desc->wBaudRate); 209 + 209 210 dev_dbg(&serial->dev->dev, 210 211 "%s - Baud rates supported:%s%s%s%s%s%s%s%s%s\n", 211 212 __func__, 212 - (irda_desc->wBaudRate & USB_IRDA_BR_2400) ? " 2400" : "", 213 - (irda_desc->wBaudRate & USB_IRDA_BR_9600) ? " 9600" : "", 214 - (irda_desc->wBaudRate & USB_IRDA_BR_19200) ? " 19200" : "", 215 - (irda_desc->wBaudRate & USB_IRDA_BR_38400) ? " 38400" : "", 216 - (irda_desc->wBaudRate & USB_IRDA_BR_57600) ? " 57600" : "", 217 - (irda_desc->wBaudRate & USB_IRDA_BR_115200) ? " 115200" : "", 218 - (irda_desc->wBaudRate & USB_IRDA_BR_576000) ? " 576000" : "", 219 - (irda_desc->wBaudRate & USB_IRDA_BR_1152000) ? " 1152000" : "", 220 - (irda_desc->wBaudRate & USB_IRDA_BR_4000000) ? " 4000000" : ""); 213 + (rates & USB_IRDA_BR_2400) ? " 2400" : "", 214 + (rates & USB_IRDA_BR_9600) ? " 9600" : "", 215 + (rates & USB_IRDA_BR_19200) ? " 19200" : "", 216 + (rates & USB_IRDA_BR_38400) ? " 38400" : "", 217 + (rates & USB_IRDA_BR_57600) ? " 57600" : "", 218 + (rates & USB_IRDA_BR_115200) ? " 115200" : "", 219 + (rates & USB_IRDA_BR_576000) ? " 576000" : "", 220 + (rates & USB_IRDA_BR_1152000) ? " 1152000" : "", 221 + (rates & USB_IRDA_BR_4000000) ? " 4000000" : ""); 221 222 222 223 switch (irda_desc->bmAdditionalBOFs) { 223 224 case USB_IRDA_AB_48:
+1 -1
drivers/usb/serial/mct_u232.c
··· 189 189 return -ENOMEM; 190 190 191 191 divisor = mct_u232_calculate_baud_rate(serial, value, &speed); 192 - put_unaligned_le32(cpu_to_le32(divisor), buf); 192 + put_unaligned_le32(divisor, buf); 193 193 rc = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0), 194 194 MCT_U232_SET_BAUD_RATE_REQUEST, 195 195 MCT_U232_SET_REQUEST_TYPE,
+8
drivers/usb/serial/option.c
··· 281 281 #define TELIT_PRODUCT_LE922_USBCFG0 0x1042 282 282 #define TELIT_PRODUCT_LE922_USBCFG3 0x1043 283 283 #define TELIT_PRODUCT_LE922_USBCFG5 0x1045 284 + #define TELIT_PRODUCT_ME910 0x1100 284 285 #define TELIT_PRODUCT_LE920 0x1200 285 286 #define TELIT_PRODUCT_LE910 0x1201 286 287 #define TELIT_PRODUCT_LE910_USBCFG4 0x1206 ··· 639 638 640 639 static const struct option_blacklist_info simcom_sim7100e_blacklist = { 641 640 .reserved = BIT(5) | BIT(6), 641 + }; 642 + 643 + static const struct option_blacklist_info telit_me910_blacklist = { 644 + .sendsetup = BIT(0), 645 + .reserved = BIT(1) | BIT(3), 642 646 }; 643 647 644 648 static const struct option_blacklist_info telit_le910_blacklist = { ··· 1241 1235 .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg3 }, 1242 1236 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff), 1243 1237 .driver_info = (kernel_ulong_t)&telit_le922_blacklist_usbcfg0 }, 1238 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1239 + .driver_info = (kernel_ulong_t)&telit_me910_blacklist }, 1244 1240 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1245 1241 .driver_info = (kernel_ulong_t)&telit_le910_blacklist }, 1246 1242 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+2
drivers/usb/serial/qcserial.c
··· 162 162 {DEVICE_SWI(0x1199, 0x9071)}, /* Sierra Wireless MC74xx */ 163 163 {DEVICE_SWI(0x1199, 0x9078)}, /* Sierra Wireless EM74xx */ 164 164 {DEVICE_SWI(0x1199, 0x9079)}, /* Sierra Wireless EM74xx */ 165 + {DEVICE_SWI(0x1199, 0x907a)}, /* Sierra Wireless EM74xx QDL */ 166 + {DEVICE_SWI(0x1199, 0x907b)}, /* Sierra Wireless EM74xx */ 165 167 {DEVICE_SWI(0x413c, 0x81a2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card */ 166 168 {DEVICE_SWI(0x413c, 0x81a3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card */ 167 169 {DEVICE_SWI(0x413c, 0x81a4)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card */
+55 -35
drivers/usb/storage/ene_ub6250.c
··· 446 446 #define SD_BLOCK_LEN 9 447 447 448 448 struct ene_ub6250_info { 449 + 450 + /* I/O bounce buffer */ 451 + u8 *bbuf; 452 + 449 453 /* for 6250 code */ 450 454 struct SD_STATUS SD_Status; 451 455 struct MS_STATUS MS_Status; ··· 497 493 498 494 static void ene_ub6250_info_destructor(void *extra) 499 495 { 496 + struct ene_ub6250_info *info = (struct ene_ub6250_info *) extra; 497 + 500 498 if (!extra) 501 499 return; 500 + kfree(info->bbuf); 502 501 } 503 502 504 503 static int ene_send_scsi_cmd(struct us_data *us, u8 fDir, void *buf, int use_sg) ··· 867 860 u8 PageNum, u32 *PageBuf, struct ms_lib_type_extdat *ExtraDat) 868 861 { 869 862 struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf; 863 + struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra; 864 + u8 *bbuf = info->bbuf; 870 865 int result; 871 - u8 ExtBuf[4]; 872 866 u32 bn = PhyBlockAddr * 0x20 + PageNum; 873 867 874 868 result = ene_load_bincode(us, MS_RW_PATTERN); ··· 909 901 bcb->CDB[2] = (unsigned char)(PhyBlockAddr>>16); 910 902 bcb->CDB[6] = 0x01; 911 903 912 - result = ene_send_scsi_cmd(us, FDIR_READ, &ExtBuf, 0); 904 + result = ene_send_scsi_cmd(us, FDIR_READ, bbuf, 0); 913 905 if (result != USB_STOR_XFER_GOOD) 914 906 return USB_STOR_TRANSPORT_ERROR; 915 907 ··· 918 910 ExtraDat->status0 = 0x10; /* Not yet,fireware support */ 919 911 920 912 ExtraDat->status1 = 0x00; /* Not yet,fireware support */ 921 - ExtraDat->ovrflg = ExtBuf[0]; 922 - ExtraDat->mngflg = ExtBuf[1]; 923 - ExtraDat->logadr = memstick_logaddr(ExtBuf[2], ExtBuf[3]); 913 + ExtraDat->ovrflg = bbuf[0]; 914 + ExtraDat->mngflg = bbuf[1]; 915 + ExtraDat->logadr = memstick_logaddr(bbuf[2], bbuf[3]); 924 916 925 917 return USB_STOR_TRANSPORT_GOOD; 926 918 } ··· 1340 1332 u8 PageNum, struct ms_lib_type_extdat *ExtraDat) 1341 1333 { 1342 1334 struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf; 1335 + struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra; 1336 + u8 *bbuf = info->bbuf; 
1343 1337 int result; 1344 - u8 ExtBuf[4]; 1345 1338 1346 1339 memset(bcb, 0, sizeof(struct bulk_cb_wrap)); 1347 1340 bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN); ··· 1356 1347 bcb->CDB[2] = (unsigned char)(PhyBlock>>16); 1357 1348 bcb->CDB[6] = 0x01; 1358 1349 1359 - result = ene_send_scsi_cmd(us, FDIR_READ, &ExtBuf, 0); 1350 + result = ene_send_scsi_cmd(us, FDIR_READ, bbuf, 0); 1360 1351 if (result != USB_STOR_XFER_GOOD) 1361 1352 return USB_STOR_TRANSPORT_ERROR; 1362 1353 ··· 1364 1355 ExtraDat->intr = 0x80; /* Not yet, waiting for fireware support */ 1365 1356 ExtraDat->status0 = 0x10; /* Not yet, waiting for fireware support */ 1366 1357 ExtraDat->status1 = 0x00; /* Not yet, waiting for fireware support */ 1367 - ExtraDat->ovrflg = ExtBuf[0]; 1368 - ExtraDat->mngflg = ExtBuf[1]; 1369 - ExtraDat->logadr = memstick_logaddr(ExtBuf[2], ExtBuf[3]); 1358 + ExtraDat->ovrflg = bbuf[0]; 1359 + ExtraDat->mngflg = bbuf[1]; 1360 + ExtraDat->logadr = memstick_logaddr(bbuf[2], bbuf[3]); 1370 1361 1371 1362 return USB_STOR_TRANSPORT_GOOD; 1372 1363 } ··· 1565 1556 u16 PhyBlock, newblk, i; 1566 1557 u16 LogStart, LogEnde; 1567 1558 struct ms_lib_type_extdat extdat; 1568 - u8 buf[0x200]; 1569 1559 u32 count = 0, index = 0; 1570 1560 struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra; 1561 + u8 *bbuf = info->bbuf; 1571 1562 1572 1563 for (PhyBlock = 0; PhyBlock < info->MS_Lib.NumberOfPhyBlock;) { 1573 1564 ms_lib_phy_to_log_range(PhyBlock, &LogStart, &LogEnde); ··· 1581 1572 } 1582 1573 1583 1574 if (count == PhyBlock) { 1584 - ms_lib_read_extrablock(us, PhyBlock, 0, 0x80, &buf); 1575 + ms_lib_read_extrablock(us, PhyBlock, 0, 0x80, 1576 + bbuf); 1585 1577 count += 0x80; 1586 1578 } 1587 1579 index = (PhyBlock % 0x80) * 4; 1588 1580 1589 - extdat.ovrflg = buf[index]; 1590 - extdat.mngflg = buf[index+1]; 1591 - extdat.logadr = memstick_logaddr(buf[index+2], buf[index+3]); 1581 + extdat.ovrflg = bbuf[index]; 1582 + extdat.mngflg = bbuf[index+1]; 1583 + 
extdat.logadr = memstick_logaddr(bbuf[index+2], 1584 + bbuf[index+3]); 1592 1585 1593 1586 if ((extdat.ovrflg & MS_REG_OVR_BKST) != MS_REG_OVR_BKST_OK) { 1594 1587 ms_lib_setacquired_errorblock(us, PhyBlock); ··· 2073 2062 { 2074 2063 struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf; 2075 2064 int result; 2076 - u8 buf[0x200]; 2077 2065 u16 MSP_BlockSize, MSP_UserAreaBlocks; 2078 2066 struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra; 2067 + u8 *bbuf = info->bbuf; 2079 2068 2080 2069 printk(KERN_INFO "transport --- ENE_MSInit\n"); 2081 2070 ··· 2094 2083 bcb->CDB[0] = 0xF1; 2095 2084 bcb->CDB[1] = 0x01; 2096 2085 2097 - result = ene_send_scsi_cmd(us, FDIR_READ, &buf, 0); 2086 + result = ene_send_scsi_cmd(us, FDIR_READ, bbuf, 0); 2098 2087 if (result != USB_STOR_XFER_GOOD) { 2099 2088 printk(KERN_ERR "Execution MS Init Code Fail !!\n"); 2100 2089 return USB_STOR_TRANSPORT_ERROR; 2101 2090 } 2102 2091 /* the same part to test ENE */ 2103 - info->MS_Status = *(struct MS_STATUS *)&buf[0]; 2092 + info->MS_Status = *(struct MS_STATUS *) bbuf; 2104 2093 2105 2094 if (info->MS_Status.Insert && info->MS_Status.Ready) { 2106 2095 printk(KERN_INFO "Insert = %x\n", info->MS_Status.Insert); ··· 2109 2098 printk(KERN_INFO "IsMSPHG = %x\n", info->MS_Status.IsMSPHG); 2110 2099 printk(KERN_INFO "WtP= %x\n", info->MS_Status.WtP); 2111 2100 if (info->MS_Status.IsMSPro) { 2112 - MSP_BlockSize = (buf[6] << 8) | buf[7]; 2113 - MSP_UserAreaBlocks = (buf[10] << 8) | buf[11]; 2101 + MSP_BlockSize = (bbuf[6] << 8) | bbuf[7]; 2102 + MSP_UserAreaBlocks = (bbuf[10] << 8) | bbuf[11]; 2114 2103 info->MSP_TotalBlock = MSP_BlockSize * MSP_UserAreaBlocks; 2115 2104 } else { 2116 2105 ms_card_init(us); /* Card is MS (to ms.c)*/ 2117 2106 } 2118 2107 usb_stor_dbg(us, "MS Init Code OK !!\n"); 2119 2108 } else { 2120 - usb_stor_dbg(us, "MS Card Not Ready --- %x\n", buf[0]); 2109 + usb_stor_dbg(us, "MS Card Not Ready --- %x\n", bbuf[0]); 2121 2110 return 
USB_STOR_TRANSPORT_ERROR; 2122 2111 } 2123 2112 ··· 2127 2116 static int ene_sd_init(struct us_data *us) 2128 2117 { 2129 2118 int result; 2130 - u8 buf[0x200]; 2131 2119 struct bulk_cb_wrap *bcb = (struct bulk_cb_wrap *) us->iobuf; 2132 2120 struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra; 2121 + u8 *bbuf = info->bbuf; 2133 2122 2134 2123 usb_stor_dbg(us, "transport --- ENE_SDInit\n"); 2135 2124 /* SD Init Part-1 */ ··· 2163 2152 bcb->Flags = US_BULK_FLAG_IN; 2164 2153 bcb->CDB[0] = 0xF1; 2165 2154 2166 - result = ene_send_scsi_cmd(us, FDIR_READ, &buf, 0); 2155 + result = ene_send_scsi_cmd(us, FDIR_READ, bbuf, 0); 2167 2156 if (result != USB_STOR_XFER_GOOD) { 2168 2157 usb_stor_dbg(us, "Execution SD Init Code Fail !!\n"); 2169 2158 return USB_STOR_TRANSPORT_ERROR; 2170 2159 } 2171 2160 2172 - info->SD_Status = *(struct SD_STATUS *)&buf[0]; 2161 + info->SD_Status = *(struct SD_STATUS *) bbuf; 2173 2162 if (info->SD_Status.Insert && info->SD_Status.Ready) { 2174 2163 struct SD_STATUS *s = &info->SD_Status; 2175 2164 2176 - ene_get_card_status(us, (unsigned char *)&buf); 2165 + ene_get_card_status(us, bbuf); 2177 2166 usb_stor_dbg(us, "Insert = %x\n", s->Insert); 2178 2167 usb_stor_dbg(us, "Ready = %x\n", s->Ready); 2179 2168 usb_stor_dbg(us, "IsMMC = %x\n", s->IsMMC); ··· 2181 2170 usb_stor_dbg(us, "HiSpeed = %x\n", s->HiSpeed); 2182 2171 usb_stor_dbg(us, "WtP = %x\n", s->WtP); 2183 2172 } else { 2184 - usb_stor_dbg(us, "SD Card Not Ready --- %x\n", buf[0]); 2173 + usb_stor_dbg(us, "SD Card Not Ready --- %x\n", bbuf[0]); 2185 2174 return USB_STOR_TRANSPORT_ERROR; 2186 2175 } 2187 2176 return USB_STOR_TRANSPORT_GOOD; ··· 2191 2180 static int ene_init(struct us_data *us) 2192 2181 { 2193 2182 int result; 2194 - u8 misc_reg03 = 0; 2183 + u8 misc_reg03; 2195 2184 struct ene_ub6250_info *info = (struct ene_ub6250_info *)(us->extra); 2185 + u8 *bbuf = info->bbuf; 2196 2186 2197 - result = ene_get_card_type(us, REG_CARD_STATUS, &misc_reg03); 2187 + 
result = ene_get_card_type(us, REG_CARD_STATUS, bbuf); 2198 2188 if (result != USB_STOR_XFER_GOOD) 2199 2189 return USB_STOR_TRANSPORT_ERROR; 2200 2190 2191 + misc_reg03 = bbuf[0]; 2201 2192 if (misc_reg03 & 0x01) { 2202 2193 if (!info->SD_Status.Ready) { 2203 2194 result = ene_sd_init(us); ··· 2316 2303 const struct usb_device_id *id) 2317 2304 { 2318 2305 int result; 2319 - u8 misc_reg03 = 0; 2306 + u8 misc_reg03; 2320 2307 struct us_data *us; 2308 + struct ene_ub6250_info *info; 2321 2309 2322 2310 result = usb_stor_probe1(&us, intf, id, 2323 2311 (id - ene_ub6250_usb_ids) + ene_ub6250_unusual_dev_list, ··· 2327 2313 return result; 2328 2314 2329 2315 /* FIXME: where should the code alloc extra buf ? */ 2330 - if (!us->extra) { 2331 - us->extra = kzalloc(sizeof(struct ene_ub6250_info), GFP_KERNEL); 2332 - if (!us->extra) 2333 - return -ENOMEM; 2334 - us->extra_destructor = ene_ub6250_info_destructor; 2316 + us->extra = kzalloc(sizeof(struct ene_ub6250_info), GFP_KERNEL); 2317 + if (!us->extra) 2318 + return -ENOMEM; 2319 + us->extra_destructor = ene_ub6250_info_destructor; 2320 + 2321 + info = (struct ene_ub6250_info *)(us->extra); 2322 + info->bbuf = kmalloc(512, GFP_KERNEL); 2323 + if (!info->bbuf) { 2324 + kfree(us->extra); 2325 + return -ENOMEM; 2335 2326 } 2336 2327 2337 2328 us->transport_name = "ene_ub6250"; ··· 2348 2329 return result; 2349 2330 2350 2331 /* probe card type */ 2351 - result = ene_get_card_type(us, REG_CARD_STATUS, &misc_reg03); 2332 + result = ene_get_card_type(us, REG_CARD_STATUS, info->bbuf); 2352 2333 if (result != USB_STOR_XFER_GOOD) { 2353 2334 usb_stor_disconnect(intf); 2354 2335 return USB_STOR_TRANSPORT_ERROR; 2355 2336 } 2356 2337 2338 + misc_reg03 = info->bbuf[0]; 2357 2339 if (!(misc_reg03 & 0x01)) { 2358 2340 pr_info("ums_eneub6250: This driver only supports SD/MS cards. " 2359 2341 "It does not support SM cards.\n");
+8 -3
drivers/usb/usbip/vhci_hcd.c
··· 235 235 236 236 static inline void hub_descriptor(struct usb_hub_descriptor *desc) 237 237 { 238 + int width; 239 + 238 240 memset(desc, 0, sizeof(*desc)); 239 241 desc->bDescriptorType = USB_DT_HUB; 240 - desc->bDescLength = 9; 241 242 desc->wHubCharacteristics = cpu_to_le16( 242 243 HUB_CHAR_INDV_PORT_LPSM | HUB_CHAR_COMMON_OCPM); 244 + 243 245 desc->bNbrPorts = VHCI_HC_PORTS; 244 - desc->u.hs.DeviceRemovable[0] = 0xff; 245 - desc->u.hs.DeviceRemovable[1] = 0xff; 246 + BUILD_BUG_ON(VHCI_HC_PORTS > USB_MAXCHILDREN); 247 + width = desc->bNbrPorts / 8 + 1; 248 + desc->bDescLength = USB_DT_HUB_NONVAR_SIZE + 2 * width; 249 + memset(&desc->u.hs.DeviceRemovable[0], 0, width); 250 + memset(&desc->u.hs.DeviceRemovable[width], 0xff, width); 246 251 } 247 252 248 253 static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
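The DeviceRemovable sizing introduced in the vhci_hcd hunk above can be checked in isolation. A minimal user-space sketch (the helper names `hub_bitmap_width` and `hub_desc_length` are illustrative, not kernel symbols; `USB_DT_HUB_NONVAR_SIZE` is the real 7-byte fixed-header size from ch11.h):

```c
#include <assert.h>

/* USB 2.0 hub descriptor: 7 fixed bytes (USB_DT_HUB_NONVAR_SIZE) followed
 * by two variable-length bitmaps (DeviceRemovable, PortPwrCtrlMask), each
 * carrying one bit per port plus bit 0, rounded up to whole bytes. */
#define USB_DT_HUB_NONVAR_SIZE 7

/* Bytes needed for one per-port bitmap: nports/8 + 1, as in the patch. */
static int hub_bitmap_width(int nports)
{
	return nports / 8 + 1;
}

static int hub_desc_length(int nports)
{
	return USB_DT_HUB_NONVAR_SIZE + 2 * hub_bitmap_width(nports);
}
```

With up to 7 ports the width is 1 byte and bDescLength comes out to the old hard-coded 9; for 8 ports and above the bitmaps grow, which is why the patch computes `width` from `bNbrPorts` instead of assuming it.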
+3 -2
drivers/uwb/i1480/dfu/usb.c
··· 341 341 static 342 342 int i1480_usb_probe(struct usb_interface *iface, const struct usb_device_id *id) 343 343 { 344 + struct usb_device *udev = interface_to_usbdev(iface); 344 345 struct i1480_usb *i1480_usb; 345 346 struct i1480 *i1480; 346 347 struct device *dev = &iface->dev; ··· 353 352 iface->cur_altsetting->desc.bInterfaceNumber); 354 353 goto error; 355 354 } 356 - if (iface->num_altsetting > 1 357 - && interface_to_usbdev(iface)->descriptor.idProduct == 0xbabe) { 355 + if (iface->num_altsetting > 1 && 356 + le16_to_cpu(udev->descriptor.idProduct) == 0xbabe) { 358 357 /* Need altsetting #1 [HW QUIRK] or EP1 won't work */ 359 358 result = usb_set_interface(interface_to_usbdev(iface), 0, 1); 360 359 if (result < 0)
+1 -1
drivers/watchdog/Kconfig
··· 452 452 453 453 config ORION_WATCHDOG 454 454 tristate "Orion watchdog" 455 - depends on ARCH_ORION5X || ARCH_DOVE || MACH_DOVE || ARCH_MVEBU || COMPILE_TEST 455 + depends on ARCH_ORION5X || ARCH_DOVE || MACH_DOVE || ARCH_MVEBU || (COMPILE_TEST && !ARCH_EBSA110) 456 456 depends on ARM 457 457 select WATCHDOG_CORE 458 458 help
+2 -1
drivers/watchdog/bcm_kona_wdt.c
··· 304 304 if (!wdt) 305 305 return -ENOMEM; 306 306 307 + spin_lock_init(&wdt->lock); 308 + 307 309 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 308 310 wdt->base = devm_ioremap_resource(dev, res); 309 311 if (IS_ERR(wdt->base)) ··· 318 316 return ret; 319 317 } 320 318 321 - spin_lock_init(&wdt->lock); 322 319 platform_set_drvdata(pdev, wdt); 323 320 watchdog_set_drvdata(&bcm_kona_wdt_wdd, wdt); 324 321 bcm_kona_wdt_wdd.parent = &pdev->dev;
+1 -1
drivers/watchdog/cadence_wdt.c
··· 49 49 /* Counter maximum value */ 50 50 #define CDNS_WDT_COUNTER_MAX 0xFFF 51 51 52 - static int wdt_timeout = CDNS_WDT_DEFAULT_TIMEOUT; 52 + static int wdt_timeout; 53 53 static int nowayout = WATCHDOG_NOWAYOUT; 54 54 55 55 module_param(wdt_timeout, int, 0);
+11 -13
drivers/watchdog/iTCO_wdt.c
··· 306 306 307 307 iTCO_vendor_pre_keepalive(p->smi_res, wd_dev->timeout); 308 308 309 - /* Reload the timer by writing to the TCO Timer Counter register */ 310 - if (p->iTCO_version >= 2) { 311 - outw(0x01, TCO_RLD(p)); 312 - } else if (p->iTCO_version == 1) { 313 - /* Reset the timeout status bit so that the timer 314 - * needs to count down twice again before rebooting */ 315 - outw(0x0008, TCO1_STS(p)); /* write 1 to clear bit */ 309 + /* Reset the timeout status bit so that the timer 310 + * needs to count down twice again before rebooting */ 311 + outw(0x0008, TCO1_STS(p)); /* write 1 to clear bit */ 316 312 313 + /* Reload the timer by writing to the TCO Timer Counter register */ 314 + if (p->iTCO_version >= 2) 315 + outw(0x01, TCO_RLD(p)); 316 + else if (p->iTCO_version == 1) 317 317 outb(0x01, TCO_RLD(p)); 318 - } 319 318 320 319 spin_unlock(&p->io_lock); 321 320 return 0; ··· 327 328 unsigned char val8; 328 329 unsigned int tmrval; 329 330 330 - tmrval = seconds_to_ticks(p, t); 331 - 332 - /* For TCO v1 the timer counts down twice before rebooting */ 333 - if (p->iTCO_version == 1) 334 - tmrval /= 2; 331 + /* The timer counts down twice before rebooting */ 332 + tmrval = seconds_to_ticks(p, t) / 2; 335 333 336 334 /* from the specs: */ 337 335 /* "Values of 0h-3h are ignored and should not be attempted" */ ··· 381 385 spin_lock(&p->io_lock); 382 386 val16 = inw(TCO_RLD(p)); 383 387 val16 &= 0x3ff; 388 + if (!(inw(TCO1_STS(p)) & 0x0008)) 389 + val16 += (inw(TCOv2_TMR(p)) & 0x3ff); 384 390 spin_unlock(&p->io_lock); 385 391 386 392 time_left = ticks_to_seconds(p, val16);
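The iTCO change above halves the programmed tick count because the TCO timer must count down twice before the reboot fires, and the get-time-left path adds a whole extra period when the "expired once" status bit is still clear. A rough user-space sketch of that arithmetic (tick period and helper names are illustrative; the real driver derives ticks from the TCO version):

```c
#include <assert.h>

/* The TCO timer expires twice before rebooting, so a requested timeout in
 * seconds is loaded as half that many ticks (seconds_per_tick is an
 * assumed parameter here, not the driver's seconds_to_ticks()). */
static int tco_load_value(int seconds, int seconds_per_tick)
{
	return (seconds / seconds_per_tick) / 2;
}

/* Remaining time: the current countdown, plus one full timer period if
 * the "expired once" bit (TCO1_STS bit 3) has not been set yet. */
static int tco_time_left(int cur_ticks, int period_ticks, int expired_once)
{
	return cur_ticks + (expired_once ? 0 : period_ticks);
}
```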
+3
drivers/watchdog/pcwd_usb.c
··· 630 630 return -ENODEV; 631 631 } 632 632 633 + if (iface_desc->desc.bNumEndpoints < 1) 634 + return -ENODEV; 635 + 633 636 /* check out the endpoint: it has to be Interrupt & IN */ 634 637 endpoint = &iface_desc->endpoint[0].desc; 635 638
+57 -20
drivers/watchdog/sama5d4_wdt.c
··· 6 6 * Licensed under GPLv2. 7 7 */ 8 8 9 + #include <linux/delay.h> 9 10 #include <linux/interrupt.h> 10 11 #include <linux/io.h> 11 12 #include <linux/kernel.h> ··· 30 29 struct watchdog_device wdd; 31 30 void __iomem *reg_base; 32 31 u32 mr; 32 + unsigned long last_ping; 33 33 }; 34 34 35 35 static int wdt_timeout = WDT_DEFAULT_TIMEOUT; ··· 46 44 "Watchdog cannot be stopped once started (default=" 47 45 __MODULE_STRING(WATCHDOG_NOWAYOUT) ")"); 48 46 47 + #define wdt_enabled (!(wdt->mr & AT91_WDT_WDDIS)) 48 + 49 49 #define wdt_read(wdt, field) \ 50 50 readl_relaxed((wdt)->reg_base + (field)) 51 51 52 - #define wdt_write(wtd, field, val) \ 53 - writel_relaxed((val), (wdt)->reg_base + (field)) 52 + /* 4 slow clock periods is 4/32768 = 122.07µs*/ 53 + #define WDT_DELAY usecs_to_jiffies(123) 54 + 55 + static void wdt_write(struct sama5d4_wdt *wdt, u32 field, u32 val) 56 + { 57 + /* 58 + * WDT_CR and WDT_MR must not be modified within three slow clock 59 + * periods following a restart of the watchdog performed by a write 60 + * access in WDT_CR. 61 + */ 62 + while (time_before(jiffies, wdt->last_ping + WDT_DELAY)) 63 + usleep_range(30, 125); 64 + writel_relaxed(val, wdt->reg_base + field); 65 + wdt->last_ping = jiffies; 66 + } 67 + 68 + static void wdt_write_nosleep(struct sama5d4_wdt *wdt, u32 field, u32 val) 69 + { 70 + if (time_before(jiffies, wdt->last_ping + WDT_DELAY)) 71 + udelay(123); 72 + writel_relaxed(val, wdt->reg_base + field); 73 + wdt->last_ping = jiffies; 74 + } 54 75 55 76 static int sama5d4_wdt_start(struct watchdog_device *wdd) 56 77 { ··· 114 89 wdt->mr &= ~AT91_WDT_WDD; 115 90 wdt->mr |= AT91_WDT_SET_WDV(value); 116 91 wdt->mr |= AT91_WDT_SET_WDD(value); 117 - wdt_write(wdt, AT91_WDT_MR, wdt->mr); 92 + 93 + /* 94 + * WDDIS has to be 0 when updating WDD/WDV. The datasheet states: When 95 + * setting the WDDIS bit, and while it is set, the fields WDV and WDD 96 + * must not be modified. 
97 + * If the watchdog is enabled, then the timeout can be updated. Else, 98 + * wait that the user enables it. 99 + */ 100 + if (wdt_enabled) 101 + wdt_write(wdt, AT91_WDT_MR, wdt->mr & ~AT91_WDT_WDDIS); 118 102 119 103 wdd->timeout = timeout; 120 104 ··· 179 145 180 146 static int sama5d4_wdt_init(struct sama5d4_wdt *wdt) 181 147 { 182 - struct watchdog_device *wdd = &wdt->wdd; 183 - u32 value = WDT_SEC2TICKS(wdd->timeout); 184 148 u32 reg; 185 - 186 149 /* 187 - * Because the fields WDV and WDD must not be modified when the WDDIS 188 - * bit is set, so clear the WDDIS bit before writing the WDT_MR. 150 + * When booting and resuming, the bootloader may have changed the 151 + * watchdog configuration. 152 + * If the watchdog is already running, we can safely update it. 153 + * Else, we have to disable it properly. 189 154 */ 190 - reg = wdt_read(wdt, AT91_WDT_MR); 191 - reg &= ~AT91_WDT_WDDIS; 192 - wdt_write(wdt, AT91_WDT_MR, reg); 193 - 194 - wdt->mr |= AT91_WDT_SET_WDD(value); 195 - wdt->mr |= AT91_WDT_SET_WDV(value); 196 - 197 - wdt_write(wdt, AT91_WDT_MR, wdt->mr); 198 - 155 + if (wdt_enabled) { 156 + wdt_write_nosleep(wdt, AT91_WDT_MR, wdt->mr); 157 + } else { 158 + reg = wdt_read(wdt, AT91_WDT_MR); 159 + if (!(reg & AT91_WDT_WDDIS)) 160 + wdt_write_nosleep(wdt, AT91_WDT_MR, 161 + reg | AT91_WDT_WDDIS); 162 + } 199 163 return 0; 200 164 } 201 165 ··· 204 172 struct resource *res; 205 173 void __iomem *regs; 206 174 u32 irq = 0; 175 + u32 timeout; 207 176 int ret; 208 177 209 178 wdt = devm_kzalloc(&pdev->dev, sizeof(*wdt), GFP_KERNEL); ··· 217 184 wdd->ops = &sama5d4_wdt_ops; 218 185 wdd->min_timeout = MIN_WDT_TIMEOUT; 219 186 wdd->max_timeout = MAX_WDT_TIMEOUT; 187 + wdt->last_ping = jiffies; 220 188 221 189 watchdog_set_drvdata(wdd, wdt); 222 190 ··· 254 220 dev_err(&pdev->dev, "unable to set timeout value\n"); 255 221 return ret; 256 222 } 223 + 224 + timeout = WDT_SEC2TICKS(wdd->timeout); 225 + 226 + wdt->mr |= AT91_WDT_SET_WDD(timeout); 227 + wdt->mr |= 
AT91_WDT_SET_WDV(timeout); 257 228 258 229 ret = sama5d4_wdt_init(wdt); 259 230 if (ret) ··· 302 263 { 303 264 struct sama5d4_wdt *wdt = dev_get_drvdata(dev); 304 265 305 - wdt_write(wdt, AT91_WDT_MR, wdt->mr & ~AT91_WDT_WDDIS); 306 - if (wdt->mr & AT91_WDT_WDDIS) 307 - wdt_write(wdt, AT91_WDT_MR, wdt->mr); 266 + sama5d4_wdt_init(wdt); 308 267 309 268 return 0; 310 269 }
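The WDT_DELAY value in the sama5d4_wdt hunk above comes from slow-clock arithmetic: the constraint is three slow-clock periods, the comment budgets four, and 4/32768 s = 122.07 µs rounds up to the 123 µs the driver passes to usecs_to_jiffies()/udelay(). A small sketch of that ceiling computation (the helper name is illustrative):

```c
#include <assert.h>

/* Minimum register-access gap in microseconds after a watchdog restart:
 * `periods` slow-clock cycles at `slow_clock_hz`, rounded up. The AT91
 * slow clock runs at 32768 Hz. */
static int wdt_min_gap_us(int periods, int slow_clock_hz)
{
	/* ceiling division, working in microseconds */
	return (periods * 1000000 + slow_clock_hz - 1) / slow_clock_hz;
}
```

For the mandated 3 periods this gives 92 µs; the driver's 4-period budget gives the 123 µs seen in WDT_DELAY.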
+1 -1
drivers/watchdog/wdt_pci.c
··· 332 332 pr_crit("Would Reboot\n"); 333 333 #else 334 334 pr_crit("Initiating system reboot\n"); 335 - emergency_restart(NULL); 335 + emergency_restart(); 336 336 #endif 337 337 #else 338 338 pr_crit("Reset in 5ms\n");
+1 -3
drivers/watchdog/zx2967_wdt.c
··· 211 211 212 212 base = platform_get_resource(pdev, IORESOURCE_MEM, 0); 213 213 wdt->reg_base = devm_ioremap_resource(dev, base); 214 - if (IS_ERR(wdt->reg_base)) { 215 - dev_err(dev, "ioremap failed\n"); 214 + if (IS_ERR(wdt->reg_base)) 216 215 return PTR_ERR(wdt->reg_base); 217 - } 218 216 219 217 zx2967_wdt_reset_sysctrl(dev); 220 218
+2 -2
fs/ext2/inode.c
··· 817 817 iomap->bdev = bdev; 818 818 iomap->offset = (u64)first_block << blkbits; 819 819 if (blk_queue_dax(bdev->bd_queue)) 820 - iomap->dax_dev = dax_get_by_host(bdev->bd_disk->disk_name); 820 + iomap->dax_dev = fs_dax_get_by_host(bdev->bd_disk->disk_name); 821 821 else 822 822 iomap->dax_dev = NULL; 823 823 ··· 841 841 ext2_iomap_end(struct inode *inode, loff_t offset, loff_t length, 842 842 ssize_t written, unsigned flags, struct iomap *iomap) 843 843 { 844 - put_dax(iomap->dax_dev); 844 + fs_put_dax(iomap->dax_dev); 845 845 if (iomap->type == IOMAP_MAPPED && 846 846 written < length && 847 847 (flags & IOMAP_WRITE))
+2 -2
fs/ext4/inode.c
··· 3412 3412 bdev = inode->i_sb->s_bdev; 3413 3413 iomap->bdev = bdev; 3414 3414 if (blk_queue_dax(bdev->bd_queue)) 3415 - iomap->dax_dev = dax_get_by_host(bdev->bd_disk->disk_name); 3415 + iomap->dax_dev = fs_dax_get_by_host(bdev->bd_disk->disk_name); 3416 3416 else 3417 3417 iomap->dax_dev = NULL; 3418 3418 iomap->offset = first_block << blkbits; ··· 3447 3447 int blkbits = inode->i_blkbits; 3448 3448 bool truncate = false; 3449 3449 3450 - put_dax(iomap->dax_dev); 3450 + fs_put_dax(iomap->dax_dev); 3451 3451 if (!(flags & IOMAP_WRITE) || (flags & IOMAP_FAULT)) 3452 3452 return 0; 3453 3453
+8 -1
fs/fuse/inode.c
··· 975 975 int err; 976 976 char *suffix = ""; 977 977 978 - if (sb->s_bdev) 978 + if (sb->s_bdev) { 979 979 suffix = "-fuseblk"; 980 + /* 981 + * sb->s_bdi points to blkdev's bdi however we want to redirect 982 + * it to our private bdi... 983 + */ 984 + bdi_put(sb->s_bdi); 985 + sb->s_bdi = &noop_backing_dev_info; 986 + } 980 987 err = super_setup_bdi_name(sb, "%u:%u%s", MAJOR(fc->dev), 981 988 MINOR(fc->dev), suffix); 982 989 if (err)
+2 -2
fs/xfs/xfs_iomap.c
··· 1068 1068 /* optionally associate a dax device with the iomap bdev */ 1069 1069 bdev = iomap->bdev; 1070 1070 if (blk_queue_dax(bdev->bd_queue)) 1071 - iomap->dax_dev = dax_get_by_host(bdev->bd_disk->disk_name); 1071 + iomap->dax_dev = fs_dax_get_by_host(bdev->bd_disk->disk_name); 1072 1072 else 1073 1073 iomap->dax_dev = NULL; 1074 1074 ··· 1149 1149 unsigned flags, 1150 1150 struct iomap *iomap) 1151 1151 { 1152 - put_dax(iomap->dax_dev); 1152 + fs_put_dax(iomap->dax_dev); 1153 1153 if ((flags & IOMAP_WRITE) && iomap->type == IOMAP_DELALLOC) 1154 1154 return xfs_file_iomap_end_delalloc(XFS_I(inode), offset, 1155 1155 length, written, iomap);
+4 -1
include/kvm/arm_vgic.h
··· 195 195 /* either a GICv2 CPU interface */ 196 196 gpa_t vgic_cpu_base; 197 197 /* or a number of GICv3 redistributor regions */ 198 - gpa_t vgic_redist_base; 198 + struct { 199 + gpa_t vgic_redist_base; 200 + gpa_t vgic_redist_free_offset; 201 + }; 199 202 }; 200 203 201 204 /* distributor enabled */
+34 -14
include/linux/dax.h
··· 18 18 void **, pfn_t *); 19 19 }; 20 20 21 - int bdev_dax_pgoff(struct block_device *, sector_t, size_t, pgoff_t *pgoff); 22 - #if IS_ENABLED(CONFIG_FS_DAX) 23 - int __bdev_dax_supported(struct super_block *sb, int blocksize); 24 - static inline int bdev_dax_supported(struct super_block *sb, int blocksize) 25 - { 26 - return __bdev_dax_supported(sb, blocksize); 27 - } 28 - #else 29 - static inline int bdev_dax_supported(struct super_block *sb, int blocksize) 30 - { 31 - return -EOPNOTSUPP; 32 - } 33 - #endif 34 - 35 21 #if IS_ENABLED(CONFIG_DAX) 36 22 struct dax_device *dax_get_by_host(const char *host); 37 23 void put_dax(struct dax_device *dax_dev); ··· 28 42 } 29 43 30 44 static inline void put_dax(struct dax_device *dax_dev) 45 + { 46 + } 47 + #endif 48 + 49 + int bdev_dax_pgoff(struct block_device *, sector_t, size_t, pgoff_t *pgoff); 50 + #if IS_ENABLED(CONFIG_FS_DAX) 51 + int __bdev_dax_supported(struct super_block *sb, int blocksize); 52 + static inline int bdev_dax_supported(struct super_block *sb, int blocksize) 53 + { 54 + return __bdev_dax_supported(sb, blocksize); 55 + } 56 + 57 + static inline struct dax_device *fs_dax_get_by_host(const char *host) 58 + { 59 + return dax_get_by_host(host); 60 + } 61 + 62 + static inline void fs_put_dax(struct dax_device *dax_dev) 63 + { 64 + put_dax(dax_dev); 65 + } 66 + 67 + #else 68 + static inline int bdev_dax_supported(struct super_block *sb, int blocksize) 69 + { 70 + return -EOPNOTSUPP; 71 + } 72 + 73 + static inline struct dax_device *fs_dax_get_by_host(const char *host) 74 + { 75 + return NULL; 76 + } 77 + 78 + static inline void fs_put_dax(struct dax_device *dax_dev) 31 79 { 32 80 } 33 81 #endif
+3
include/linux/kprobes.h
··· 349 349 int write, void __user *buffer, 350 350 size_t *length, loff_t *ppos); 351 351 #endif 352 + extern void wait_for_kprobe_optimizer(void); 353 + #else 354 + static inline void wait_for_kprobe_optimizer(void) { } 352 355 #endif /* CONFIG_OPTPROBES */ 353 356 #ifdef CONFIG_KPROBES_ON_FTRACE 354 357 extern void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+1 -1
include/linux/netfilter/x_tables.h
··· 294 294 int xt_target_to_user(const struct xt_entry_target *t, 295 295 struct xt_entry_target __user *u); 296 296 int xt_data_to_user(void __user *dst, const void *src, 297 - int usersize, int size); 297 + int usersize, int size, int aligned_size); 298 298 299 299 void *xt_copy_counters_from_user(const void __user *user, unsigned int len, 300 300 struct xt_counters_info *info, bool compat);
+5
include/linux/netfilter_bridge/ebtables.h
··· 125 125 /* True if the target is not a standard target */ 126 126 #define INVALID_TARGET (info->target < -NUM_STANDARD_TARGETS || info->target >= 0) 127 127 128 + static inline bool ebt_invalid_target(int target) 129 + { 130 + return (target < -NUM_STANDARD_TARGETS || target >= 0); 131 + } 132 + 128 133 #endif
+4 -12
include/linux/nvme-fc-driver.h
··· 27 27 28 28 /* FC Port role bitmask - can merge with FC Port Roles in fc transport */ 29 29 #define FC_PORT_ROLE_NVME_INITIATOR 0x10 30 - #define FC_PORT_ROLE_NVME_TARGET 0x11 31 - #define FC_PORT_ROLE_NVME_DISCOVERY 0x12 30 + #define FC_PORT_ROLE_NVME_TARGET 0x20 31 + #define FC_PORT_ROLE_NVME_DISCOVERY 0x40 32 32 33 33 34 34 /** ··· 642 642 * sequence in one LLDD operation. Errors during Data 643 643 * sequence transmit must not allow RSP sequence to be sent. 644 644 */ 645 - NVMET_FCTGTFEAT_NEEDS_CMD_CPUSCHED = (1 << 1), 646 - /* Bit 1: When 0, the LLDD will deliver FCP CMD 647 - * on the CPU it should be affinitized to. Thus work will 648 - * be scheduled on the cpu received on. When 1, the LLDD 649 - * may not deliver the CMD on the CPU it should be worked 650 - * on. The transport should pick a cpu to schedule the work 651 - * on. 652 - */ 653 - NVMET_FCTGTFEAT_CMD_IN_ISR = (1 << 2), 645 + NVMET_FCTGTFEAT_CMD_IN_ISR = (1 << 1), 654 646 /* Bit 2: When 0, the LLDD is calling the cmd rcv handler 655 647 * in a non-isr context, allowing the transport to finish 656 648 * op completion in the calling context. When 1, the LLDD ··· 650 658 * requiring the transport to transition to a workqueue 651 659 * for op completion. 652 660 */ 653 - NVMET_FCTGTFEAT_OPDONE_IN_ISR = (1 << 3), 661 + NVMET_FCTGTFEAT_OPDONE_IN_ISR = (1 << 2), 654 662 /* Bit 3: When 0, the LLDD is calling the op done handler 655 663 * in a non-isr context, allowing the transport to finish 656 664 * op completion in the calling context. When 1, the LLDD
+1 -1
include/linux/of_irq.h
··· 8 8 #include <linux/ioport.h> 9 9 #include <linux/of.h> 10 10 11 - typedef int const (*of_irq_init_cb_t)(struct device_node *, struct device_node *); 11 + typedef int (*of_irq_init_cb_t)(struct device_node *, struct device_node *); 12 12 13 13 /* 14 14 * Workarounds only applied to 32bit powermac machines
+5
include/linux/soc/renesas/rcar-rst.h
··· 1 1 #ifndef __LINUX_SOC_RENESAS_RCAR_RST_H__ 2 2 #define __LINUX_SOC_RENESAS_RCAR_RST_H__ 3 3 4 + #if defined(CONFIG_ARCH_RCAR_GEN1) || defined(CONFIG_ARCH_RCAR_GEN2) || \ 5 + defined(CONFIG_ARCH_R8A7795) || defined(CONFIG_ARCH_R8A7796) 4 6 int rcar_rst_read_mode_pins(u32 *mode); 7 + #else 8 + static inline int rcar_rst_read_mode_pins(u32 *mode) { return -ENODEV; } 9 + #endif 5 10 6 11 #endif /* __LINUX_SOC_RENESAS_RCAR_RST_H__ */
+1
include/linux/usb/hcd.h
··· 148 148 unsigned rh_registered:1;/* is root hub registered? */ 149 149 unsigned rh_pollable:1; /* may we poll the root hub? */ 150 150 unsigned msix_enabled:1; /* driver has MSI-X enabled? */ 151 + unsigned msi_enabled:1; /* driver has MSI enabled? */ 151 152 unsigned remove_phy:1; /* auto-remove USB phy */ 152 153 153 154 /* The next flag is a stopgap, to be removed when all the HCDs
+4
include/net/netfilter/nf_conntrack_helper.h
··· 9 9 10 10 #ifndef _NF_CONNTRACK_HELPER_H 11 11 #define _NF_CONNTRACK_HELPER_H 12 + #include <linux/refcount.h> 12 13 #include <net/netfilter/nf_conntrack.h> 13 14 #include <net/netfilter/nf_conntrack_extend.h> 14 15 #include <net/netfilter/nf_conntrack_expect.h> ··· 27 26 struct hlist_node hnode; /* Internal use. */ 28 27 29 28 char name[NF_CT_HELPER_NAME_LEN]; /* name of the module */ 29 + refcount_t refcnt; 30 30 struct module *me; /* pointer to self */ 31 31 const struct nf_conntrack_expect_policy *expect_policy; 32 32 ··· 81 79 struct nf_conntrack_helper *nf_conntrack_helper_try_module_get(const char *name, 82 80 u16 l3num, 83 81 u8 protonum); 82 + void nf_conntrack_helper_put(struct nf_conntrack_helper *helper); 83 + 84 84 void nf_ct_helper_init(struct nf_conntrack_helper *helper, 85 85 u16 l3num, u16 protonum, const char *name, 86 86 u16 default_port, u16 spec_port, u32 id,
+1 -1
include/net/netfilter/nf_tables.h
··· 176 176 int nft_data_init(const struct nft_ctx *ctx, 177 177 struct nft_data *data, unsigned int size, 178 178 struct nft_data_desc *desc, const struct nlattr *nla); 179 - void nft_data_uninit(const struct nft_data *data, enum nft_data_types type); 179 + void nft_data_release(const struct nft_data *data, enum nft_data_types type); 180 180 int nft_data_dump(struct sk_buff *skb, int attr, const struct nft_data *data, 181 181 enum nft_data_types type, unsigned int len); 182 182
+3
include/uapi/linux/usb/ch11.h
··· 22 22 */ 23 23 #define USB_MAXCHILDREN 31 24 24 25 + /* See USB 3.1 spec Table 10-5 */ 26 + #define USB_SS_MAXPORTS 15 27 + 25 28 /* 26 29 * Hub request types 27 30 */
+8 -4
kernel/bpf/verifier.c
··· 808 808 reg_off += reg->aux_off; 809 809 } 810 810 811 - /* skb->data is NET_IP_ALIGN-ed, but for strict alignment checking 812 - * we force this to 2 which is universally what architectures use 813 - * when they don't set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS. 811 + /* For platforms that do not have a Kconfig enabling 812 + * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS the value of 813 + * NET_IP_ALIGN is universally set to '2'. And on platforms 814 + * that do set CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, we get 815 + * to this code only in strict mode where we want to emulate 816 + * the NET_IP_ALIGN==2 checking. Therefore use an 817 + * unconditional IP align value of '2'. 814 818 */ 815 - ip_align = strict ? 2 : NET_IP_ALIGN; 819 + ip_align = 2; 816 820 if ((ip_align + reg_off + off) % size != 0) { 817 821 verbose("misaligned packet access off %d+%d+%d size %d\n", 818 822 ip_align, reg_off, off, size);
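The verifier hunk above rejects a packet access unless the total offset, biased by an unconditional NET_IP_ALIGN of 2, is a multiple of the access size. A minimal sketch of just that predicate (name and parameters are illustrative, not the verifier's API):

```c
#include <assert.h>

/* Emulates the strict-alignment check from check_pkt_ptr_alignment():
 * a `size`-byte load at skb->data + reg_off + off is allowed only if
 * the sum, plus the universal ip_align of 2, divides evenly by size. */
static int pkt_access_aligned(int reg_off, int off, int size)
{
	const int ip_align = 2;	/* unconditional, per the patch */

	return (ip_align + reg_off + off) % size == 0;
}
```

So a 4-byte load at offset 2 is accepted (2 + 2 = 4), while the same load at offset 0 is flagged as misaligned.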
+1 -1
kernel/irq/chip.c
··· 880 880 if (!desc) 881 881 return; 882 882 883 - __irq_do_set_handler(desc, handle, 1, NULL); 884 883 desc->irq_common_data.handler_data = data; 884 + __irq_do_set_handler(desc, handle, 1, NULL); 885 885 886 886 irq_put_desc_busunlock(desc, flags); 887 887 }
+7 -1
kernel/kprobes.c
··· 595 595 } 596 596 597 597 /* Wait for completing optimization and unoptimization */ 598 - static void wait_for_kprobe_optimizer(void) 598 + void wait_for_kprobe_optimizer(void) 599 599 { 600 600 mutex_lock(&kprobe_mutex); 601 601 ··· 2183 2183 * The vaddr this probe is installed will soon 2184 2184 * be vfreed buy not synced to disk. Hence, 2185 2185 * disarming the breakpoint isn't needed. 2186 + * 2187 + * Note, this will also move any optimized probes 2188 + * that are pending to be removed from their 2189 + * corresponding lists to the freeing_list and 2190 + * will not be touched by the delayed 2191 + * kprobe_optimizer work handler. 2186 2192 */ 2187 2193 kill_kprobe(p); 2188 2194 }
+1 -1
kernel/power/snapshot.c
··· 1425 1425 * Numbers of normal and highmem page frames allocated for hibernation image 1426 1426 * before suspending devices. 1427 1427 */ 1428 - unsigned int alloc_normal, alloc_highmem; 1428 + static unsigned int alloc_normal, alloc_highmem; 1429 1429 /* 1430 1430 * Memory bitmap used for marking saveable pages (during hibernation) or 1431 1431 * hibernation image pages (during restore)
+25
kernel/sched/core.c
··· 3502 3502 } 3503 3503 EXPORT_SYMBOL(schedule); 3504 3504 3505 + /* 3506 + * synchronize_rcu_tasks() makes sure that no task is stuck in preempted 3507 + * state (have scheduled out non-voluntarily) by making sure that all 3508 + * tasks have either left the run queue or have gone into user space. 3509 + * As idle tasks do not do either, they must not ever be preempted 3510 + * (schedule out non-voluntarily). 3511 + * 3512 + * schedule_idle() is similar to schedule_preempt_disabled() except that it 3513 + * never enables preemption because it does not call sched_submit_work(). 3514 + */ 3515 + void __sched schedule_idle(void) 3516 + { 3517 + /* 3518 + * As this skips calling sched_submit_work(), which the idle task does 3519 + * regardless because that function is a nop when the task is in a 3520 + * TASK_RUNNING state, make sure this isn't used someplace that the 3521 + * current task can be in any other state. Note, idle is always in the 3522 + * TASK_RUNNING state. 3523 + */ 3524 + WARN_ON_ONCE(current->state); 3525 + do { 3526 + __schedule(false); 3527 + } while (need_resched()); 3528 + } 3529 + 3505 3530 #ifdef CONFIG_CONTEXT_TRACKING 3506 3531 asmlinkage __visible void __sched schedule_user(void) 3507 3532 {
+3 -4
kernel/sched/cpufreq_schedutil.c
··· 245 245 sugov_update_commit(sg_policy, time, next_f); 246 246 } 247 247 248 - static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu) 248 + static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time) 249 249 { 250 250 struct sugov_policy *sg_policy = sg_cpu->sg_policy; 251 251 struct cpufreq_policy *policy = sg_policy->policy; 252 - u64 last_freq_update_time = sg_policy->last_freq_update_time; 253 252 unsigned long util = 0, max = 1; 254 253 unsigned int j; 255 254 ··· 264 265 * enough, don't take the CPU into account as it probably is 265 266 * idle now (and clear iowait_boost for it). 266 267 */ 267 - delta_ns = last_freq_update_time - j_sg_cpu->last_update; 268 + delta_ns = time - j_sg_cpu->last_update; 268 269 if (delta_ns > TICK_NSEC) { 269 270 j_sg_cpu->iowait_boost = 0; 270 271 continue; ··· 308 309 if (flags & SCHED_CPUFREQ_RT_DL) 309 310 next_f = sg_policy->policy->cpuinfo.max_freq; 310 311 else 311 - next_f = sugov_next_freq_shared(sg_cpu); 312 + next_f = sugov_next_freq_shared(sg_cpu, time); 312 313 313 314 sugov_update_commit(sg_policy, time, next_f); 314 315 }
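The behavioural change in this hunk is which timestamp anchors the staleness window: the current callback's `time`, not the last frequency update. A toy model under an assumed HZ=1000 tick:

```c
#include <assert.h>

/* Toy model of the staleness test in sugov_next_freq_shared(): a
 * sibling CPU's utilization sample is discarded (and its iowait boost
 * cleared) when it is more than one tick older than 'time', the
 * timestamp passed into the current callback. TICK_NSEC here assumes
 * HZ=1000; the real value depends on the kernel configuration. */
#define TICK_NSEC 1000000ULL

static int sample_is_stale(unsigned long long time,
			   unsigned long long last_update)
{
	return time - last_update > TICK_NSEC;
}
```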
+1 -1
kernel/sched/idle.c
··· 265 265 smp_mb__after_atomic(); 266 266 267 267 sched_ttwu_pending(); 268 - schedule_preempt_disabled(); 268 + schedule_idle(); 269 269 270 270 if (unlikely(klp_patch_pending(current))) 271 271 klp_update_patch_state(current);
+2
kernel/sched/sched.h
··· 1467 1467 } 1468 1468 #endif 1469 1469 1470 + extern void schedule_idle(void); 1471 + 1470 1472 extern void sysrq_sched_debug_show(void); 1471 1473 extern void sched_init_granularity(void); 1472 1474 extern void update_max_interval(void);
+2 -2
kernel/trace/blktrace.c
··· 1662 1662 goto out; 1663 1663 1664 1664 if (attr == &dev_attr_act_mask) { 1665 - if (sscanf(buf, "%llx", &value) != 1) { 1665 + if (kstrtoull(buf, 0, &value)) { 1666 1666 /* Assume it is a list of trace category names */ 1667 1667 ret = blk_trace_str2mask(buf); 1668 1668 if (ret < 0) 1669 1669 goto out; 1670 1670 value = ret; 1671 1671 } 1672 - } else if (sscanf(buf, "%llu", &value) != 1) 1672 + } else if (kstrtoull(buf, 0, &value)) 1673 1673 goto out; 1674 1674 1675 1675 ret = -ENXIO;
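The reason for the switch is strictness: `sscanf("%llu", ...)` succeeds on input with trailing garbage, while `kstrtoull()` rejects the whole string. A userspace stand-in built on `strtoull()` (the helper names are this note's; the real `kstrtoull()` additionally tolerates a single trailing newline):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Strict parser: base 0 auto-detects a 0x prefix, matching the
 * kstrtoull(buf, 0, ...) usage in the hunk, and any trailing
 * non-numeric byte makes the whole parse fail. */
static int parse_u64_strict(const char *s, unsigned long long *out)
{
	char *end;

	errno = 0;
	*out = strtoull(s, &end, 0);
	if (errno || end == s || *end != '\0')
		return -EINVAL;
	return 0;
}

/* small helpers so the behaviour is easy to assert on */
static int parses_ok(const char *s)
{
	unsigned long long v;

	return parse_u64_strict(s, &v) == 0;
}

static unsigned long long parsed_value(const char *s)
{
	unsigned long long v = 0;

	(void)parse_u64_strict(s, &v);
	return v;
}
```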
+10 -2
kernel/trace/ftrace.c
··· 4144 4144 int i, ret = -ENODEV; 4145 4145 int size; 4146 4146 4147 - if (glob && (strcmp(glob, "*") == 0 || !strlen(glob))) 4147 + if (!glob || !strlen(glob) || !strcmp(glob, "*")) 4148 4148 func_g.search = NULL; 4149 - else if (glob) { 4149 + else { 4150 4150 int not; 4151 4151 4152 4152 func_g.type = filter_parse_regex(glob, strlen(glob), ··· 4254 4254 err_unlock_ftrace: 4255 4255 mutex_unlock(&ftrace_lock); 4256 4256 return ret; 4257 + } 4258 + 4259 + void clear_ftrace_function_probes(struct trace_array *tr) 4260 + { 4261 + struct ftrace_func_probe *probe, *n; 4262 + 4263 + list_for_each_entry_safe(probe, n, &tr->func_probes, list) 4264 + unregister_ftrace_function_probe_func(NULL, tr, probe->probe_ops); 4257 4265 } 4258 4266 4259 4267 static LIST_HEAD(ftrace_commands);
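The reordered condition matters because the old code only entered either branch for a non-NULL glob, leaving the search state uninitialized otherwise. A sketch of the new predicate:

```c
#include <assert.h>
#include <string.h>

/* The new condition treats NULL, "" and "*" uniformly as
 * "match every probe"; anything else goes through the regex
 * parser. (Function name is this note's, not ftrace's.) */
static int matches_everything(const char *glob)
{
	return !glob || !strlen(glob) || !strcmp(glob, "*");
}
```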
+32 -2
kernel/trace/trace.c
··· 1558 1558 1559 1559 return 0; 1560 1560 } 1561 - early_initcall(init_trace_selftests); 1561 + core_initcall(init_trace_selftests); 1562 1562 #else 1563 1563 static inline int run_tracer_selftest(struct tracer *type) 1564 1564 { ··· 2568 2568 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip, 2569 2569 int pc) 2570 2570 { 2571 - __ftrace_trace_stack(tr->trace_buffer.buffer, flags, skip, pc, NULL); 2571 + struct ring_buffer *buffer = tr->trace_buffer.buffer; 2572 + 2573 + if (rcu_is_watching()) { 2574 + __ftrace_trace_stack(buffer, flags, skip, pc, NULL); 2575 + return; 2576 + } 2577 + 2578 + /* 2579 + * When an NMI triggers, RCU is enabled via rcu_nmi_enter(), 2580 + * but if the above rcu_is_watching() failed, then the NMI 2581 + * triggered someplace critical, and rcu_irq_enter() should 2582 + * not be called from NMI. 2583 + */ 2584 + if (unlikely(in_nmi())) 2585 + return; 2586 + 2587 + /* 2588 + * It is possible that a function is being traced in a 2589 + * location that RCU is not watching. A call to 2590 + * rcu_irq_enter() will make sure that it is, but there are 2591 + * a few internal rcu functions that could be traced 2592 + * where that won't work either. In those cases, we just 2593 + * do nothing. 2594 + */ 2595 + if (unlikely(rcu_irq_enter_disabled())) 2596 + return; 2597 + 2598 + rcu_irq_enter_irqson(); 2599 + __ftrace_trace_stack(buffer, flags, skip, pc, NULL); 2600 + rcu_irq_exit_irqson(); 2572 2601 } 2573 2602 ··· 7579 7550 } 7580 7551 7581 7552 tracing_set_nop(tr); 7553 + clear_ftrace_function_probes(tr); 7582 7554 event_trace_del_tracer(tr); 7583 7555 ftrace_clear_pids(tr); 7584 7556 ftrace_destroy_function_files(tr);
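The new guards in __trace_stack() form a small decision table; restated as a pure function (the return codes are this sketch's own, not the kernel's):

```c
#include <assert.h>

/* 0 = trace directly, 1 = silently skip, 2 = bracket the trace with
 * rcu_irq_enter_irqson()/rcu_irq_exit_irqson(). */
static int stack_trace_action(int rcu_watching, int in_nmi,
			      int rcu_irq_enter_disabled)
{
	if (rcu_watching)
		return 0;
	/* rcu_irq_enter() must not run from NMI when RCU isn't watching */
	if (in_nmi)
		return 1;
	/* a few internal RCU functions can be traced where it won't work */
	if (rcu_irq_enter_disabled)
		return 1;
	return 2;
}
```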
+5
kernel/trace/trace.h
··· 980 980 extern int 981 981 unregister_ftrace_function_probe_func(char *glob, struct trace_array *tr, 982 982 struct ftrace_probe_ops *ops); 983 + extern void clear_ftrace_function_probes(struct trace_array *tr); 983 984 984 985 int register_ftrace_command(struct ftrace_func_command *cmd); 985 986 int unregister_ftrace_command(struct ftrace_func_command *cmd); ··· 999 998 { 1000 999 return -EINVAL; 1001 1000 } 1001 + static inline void clear_ftrace_function_probes(struct trace_array *tr) 1002 + { 1003 + } 1004 + 1002 1005 /* 1003 1006 * The ops parameter passed in is usually undefined. 1004 1007 * This must be a macro.
+5
kernel/trace/trace_kprobe.c
··· 1535 1535 1536 1536 end: 1537 1537 release_all_trace_kprobes(); 1538 + /* 1539 + * Wait for the optimizer work to finish. Otherwise it might fiddle 1540 + * with probes in already freed __init text. 1541 + */ 1542 + wait_for_kprobe_optimizer(); 1538 1543 if (warn) 1539 1544 pr_cont("NG: Some tests are failed. Please check them.\n"); 1540 1545 else
+4 -4
net/9p/trans_xen.c
··· 454 454 goto error_xenbus; 455 455 } 456 456 priv->tag = xenbus_read(xbt, dev->nodename, "tag", NULL); 457 - if (!priv->tag) { 458 - ret = -EINVAL; 457 + if (IS_ERR(priv->tag)) { 458 + ret = PTR_ERR(priv->tag); 459 459 goto error_xenbus; 460 460 } 461 461 ret = xenbus_transaction_end(xbt, 0); ··· 525 525 .otherend_changed = xen_9pfs_front_changed, 526 526 }; 527 527 528 - int p9_trans_xen_init(void) 528 + static int p9_trans_xen_init(void) 529 529 { 530 530 if (!xen_domain()) 531 531 return -ENODEV; ··· 537 537 } 538 538 module_init(p9_trans_xen_init); 539 539 540 - void p9_trans_xen_exit(void) 540 + static void p9_trans_xen_exit(void) 541 541 { 542 542 v9fs_unregister_trans(&p9_xen_trans); 543 543 return xenbus_unregister_driver(&xen_9pfs_front_driver);
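The trans_xen fix hinges on the kernel's ERR_PTR convention: xenbus_read() reports failure by encoding a negative errno in the returned pointer, which is never NULL, so the old NULL test could not fire. A userspace copy of the machinery:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal re-implementation of include/linux/err.h: errno values
 * live in the top MAX_ERRNO bytes of the address space, a range no
 * valid pointer occupies. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```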
+1
net/bridge/br_stp_if.c
··· 173 173 br_debug(br, "using kernel STP\n"); 174 174 175 175 /* To start timers on any ports left in blocking */ 176 + mod_timer(&br->hello_timer, jiffies + br->hello_time); 176 177 br_port_state_selection(br); 177 178 } 178 179
+1 -1
net/bridge/br_stp_timer.c
··· 40 40 if (br->dev->flags & IFF_UP) { 41 41 br_config_bpdu_generation(br); 42 42 43 - if (br->stp_enabled != BR_USER_STP) 43 + if (br->stp_enabled == BR_KERNEL_STP) 44 44 mod_timer(&br->hello_timer, 45 45 round_jiffies(jiffies + br->hello_time)); 46 46 }
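stp_enabled is a three-way state, which is why the inverted test was wrong: `!= BR_USER_STP` also matched the STP-off case and re-armed the hello timer with STP disabled. A sketch (enum ordering mirrors br_private.h; the predicate names are this note's):

```c
#include <assert.h>

enum br_stp_state { BR_NO_STP, BR_KERNEL_STP, BR_USER_STP };

/* only kernel STP owns the hello timer */
static int should_rearm_hello_timer(enum br_stp_state stp_enabled)
{
	return stp_enabled == BR_KERNEL_STP;
}

/* the old, buggy predicate for comparison */
static int old_predicate(enum br_stp_state stp_enabled)
{
	return stp_enabled != BR_USER_STP;
}
```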
+3
net/bridge/netfilter/ebt_arpreply.c
··· 68 68 if (e->ethproto != htons(ETH_P_ARP) || 69 69 e->invflags & EBT_IPROTO) 70 70 return -EINVAL; 71 + if (ebt_invalid_target(info->target)) 72 + return -EINVAL; 73 + 71 74 return 0; 72 75 } 73 76
+6 -3
net/bridge/netfilter/ebtables.c
··· 1373 1373 strlcpy(name, _name, sizeof(name)); 1374 1374 if (copy_to_user(um, name, EBT_FUNCTION_MAXNAMELEN) || 1375 1375 put_user(datasize, (int __user *)(um + EBT_FUNCTION_MAXNAMELEN)) || 1376 - xt_data_to_user(um + entrysize, data, usersize, datasize)) 1376 + xt_data_to_user(um + entrysize, data, usersize, datasize, 1377 + XT_ALIGN(datasize))) 1377 1378 return -EFAULT; 1378 1379 1379 1380 return 0; ··· 1659 1658 if (match->compat_to_user(cm->data, m->data)) 1660 1659 return -EFAULT; 1661 1660 } else { 1662 - if (xt_data_to_user(cm->data, m->data, match->usersize, msize)) 1661 + if (xt_data_to_user(cm->data, m->data, match->usersize, msize, 1662 + COMPAT_XT_ALIGN(msize))) 1663 1663 return -EFAULT; 1664 1664 } 1665 1665 ··· 1689 1687 if (target->compat_to_user(cm->data, t->data)) 1690 1688 return -EFAULT; 1691 1689 } else { 1692 - if (xt_data_to_user(cm->data, t->data, target->usersize, tsize)) 1690 + if (xt_data_to_user(cm->data, t->data, target->usersize, tsize, 1691 + COMPAT_XT_ALIGN(tsize))) 1693 1692 return -EFAULT; 1694 1693 } 1695 1694
+39 -17
net/ipv4/arp.c
··· 641 641 } 642 642 EXPORT_SYMBOL(arp_xmit); 643 643 644 + static bool arp_is_garp(struct net *net, struct net_device *dev, 645 + int *addr_type, __be16 ar_op, 646 + __be32 sip, __be32 tip, 647 + unsigned char *sha, unsigned char *tha) 648 + { 649 + bool is_garp = tip == sip; 650 + 651 + /* Gratuitous ARP _replies_ also require target hwaddr to be 652 + * the same as source. 653 + */ 654 + if (is_garp && ar_op == htons(ARPOP_REPLY)) 655 + is_garp = 656 + /* IPv4 over IEEE 1394 doesn't provide target 657 + * hardware address field in its ARP payload. 658 + */ 659 + tha && 660 + !memcmp(tha, sha, dev->addr_len); 661 + 662 + if (is_garp) { 663 + *addr_type = inet_addr_type_dev_table(net, dev, sip); 664 + if (*addr_type != RTN_UNICAST) 665 + is_garp = false; 666 + } 667 + return is_garp; 668 + } 669 + 644 670 /* 645 671 * Process an arp request. 646 672 */ ··· 863 837 864 838 n = __neigh_lookup(&arp_tbl, &sip, dev, 0); 865 839 866 - if (IN_DEV_ARP_ACCEPT(in_dev)) { 867 - unsigned int addr_type = inet_addr_type_dev_table(net, dev, sip); 840 + if (n || IN_DEV_ARP_ACCEPT(in_dev)) { 841 + addr_type = -1; 842 + is_garp = arp_is_garp(net, dev, &addr_type, arp->ar_op, 843 + sip, tip, sha, tha); 844 + } 868 845 846 + if (IN_DEV_ARP_ACCEPT(in_dev)) { 869 847 /* Unsolicited ARP is not accepted by default. 870 848 It is possible, that this option should be enabled for some 871 849 devices (strip is candidate) 872 850 */ 873 - is_garp = tip == sip && addr_type == RTN_UNICAST; 874 - 875 - /* Unsolicited ARP _replies_ also require target hwaddr to be 876 - * the same as source. 877 - */ 878 - if (is_garp && arp->ar_op == htons(ARPOP_REPLY)) 879 - is_garp = 880 - /* IPv4 over IEEE 1394 doesn't provide target 881 - * hardware address field in its ARP payload. 
882 - */ 883 - tha && 884 - !memcmp(tha, sha, dev->addr_len); 885 - 886 851 if (!n && 887 - ((arp->ar_op == htons(ARPOP_REPLY) && 888 - addr_type == RTN_UNICAST) || is_garp)) 852 + (is_garp || 853 + (arp->ar_op == htons(ARPOP_REPLY) && 854 + (addr_type == RTN_UNICAST || 855 + (addr_type < 0 && 856 + /* postpone calculation to as late as possible */ 857 + inet_addr_type_dev_table(net, dev, sip) == 858 + RTN_UNICAST))))) 889 859 n = __neigh_lookup(&arp_tbl, &sip, dev, 1); 890 860 } 891 861
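The factored-out gratuitous-ARP test, minus the routing-table lookup, can be sketched as follows (the helper name and the `be32` typedef are this note's; the real arp_is_garp() additionally requires the sender IP to resolve as RTN_UNICAST on the receiving device):

```c
#include <assert.h>
#include <string.h>

typedef unsigned int be32;

static int is_gratuitous_arp(be32 sip, be32 tip, int is_reply,
			     const unsigned char *sha,
			     const unsigned char *tha,
			     int addr_len)
{
	int is_garp = (tip == sip);

	/* Replies must also carry a target hw address equal to the
	 * sender's; IPv4 over IEEE 1394 has no target hw field in its
	 * ARP payload (tha is NULL there), so such replies do not
	 * qualify. */
	if (is_garp && is_reply)
		is_garp = tha && !memcmp(tha, sha, addr_len);
	return is_garp;
}
```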
+4
net/ipv4/tcp.c
··· 2320 2320 tcp_set_ca_state(sk, TCP_CA_Open); 2321 2321 tcp_clear_retrans(tp); 2322 2322 inet_csk_delack_init(sk); 2323 + /* Initialize rcv_mss to TCP_MIN_MSS to avoid division by 0 2324 + * issue in __tcp_select_window() 2325 + */ 2326 + icsk->icsk_ack.rcv_mss = TCP_MIN_MSS; 2323 2327 tcp_init_send_head(sk); 2324 2328 memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); 2325 2329 __sk_dst_reset(sk);
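A toy model of the division-by-zero path this hunk guards against (the struct and helper names are hypothetical; TCP_MIN_MSS is 88 in the kernel headers):

```c
#include <assert.h>

/* __tcp_select_window() divides by icsk->icsk_ack.rcv_mss; after a
 * disconnect() the surrounding memset left that field at 0. Seeding
 * it at reset time keeps the divisor nonzero. */
#define TCP_MIN_MSS 88U

struct toy_sock {
	unsigned int rcv_mss;
};

static void toy_disconnect(struct toy_sock *sk)
{
	sk->rcv_mss = TCP_MIN_MSS;	/* the fix: never leave this at 0 */
}

static unsigned int toy_select_window(const struct toy_sock *sk,
				      unsigned int free_space)
{
	return free_space / sk->rcv_mss; /* would fault if rcv_mss == 0 */
}

static unsigned int window_after_disconnect(unsigned int free_space)
{
	struct toy_sock sk = { 0 };

	toy_disconnect(&sk);
	return toy_select_window(&sk, free_space);
}
```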
+8 -7
net/ipv6/ip6_output.c
··· 1466 1466 */ 1467 1467 alloclen += sizeof(struct frag_hdr); 1468 1468 1469 + copy = datalen - transhdrlen - fraggap; 1470 + if (copy < 0) { 1471 + err = -EINVAL; 1472 + goto error; 1473 + } 1469 1474 if (transhdrlen) { 1470 1475 skb = sock_alloc_send_skb(sk, 1471 1476 alloclen + hh_len, ··· 1520 1515 data += fraggap; 1521 1516 pskb_trim_unique(skb_prev, maxfraglen); 1522 1517 } 1523 - copy = datalen - transhdrlen - fraggap; 1524 - 1525 - if (copy < 0) { 1526 - err = -EINVAL; 1527 - kfree_skb(skb); 1528 - goto error; 1529 - } else if (copy > 0 && getfrag(from, data + transhdrlen, offset, copy, fraggap, skb) < 0) { 1518 + if (copy > 0 && 1519 + getfrag(from, data + transhdrlen, offset, 1520 + copy, fraggap, skb) < 0) { 1530 1521 err = -EFAULT; 1531 1522 kfree_skb(skb); 1532 1523 goto error;
+14 -5
net/netfilter/ipvs/ip_vs_core.c
··· 849 849 { 850 850 unsigned int verdict = NF_DROP; 851 851 852 - if (IP_VS_FWD_METHOD(cp) != 0) { 853 - pr_err("shouldn't reach here, because the box is on the " 854 - "half connection in the tun/dr module.\n"); 855 - } 852 + if (IP_VS_FWD_METHOD(cp) != IP_VS_CONN_F_MASQ) 853 + goto ignore_cp; 856 854 857 855 /* Ensure the checksum is correct */ 858 856 if (!skb_csum_unnecessary(skb) && ip_vs_checksum_complete(skb, ihl)) { ··· 884 886 ip_vs_notrack(skb); 885 887 else 886 888 ip_vs_update_conntrack(skb, cp, 0); 889 + 890 + ignore_cp: 887 891 verdict = NF_ACCEPT; 888 892 889 893 out: ··· 1385 1385 */ 1386 1386 cp = pp->conn_out_get(ipvs, af, skb, &iph); 1387 1387 1388 - if (likely(cp)) 1388 + if (likely(cp)) { 1389 + if (IP_VS_FWD_METHOD(cp) != IP_VS_CONN_F_MASQ) 1390 + goto ignore_cp; 1389 1391 return handle_response(af, skb, pd, cp, &iph, hooknum); 1392 + } 1390 1393 1391 1394 /* Check for real-server-started requests */ 1392 1395 if (atomic_read(&ipvs->conn_out_counter)) { ··· 1447 1444 } 1448 1445 } 1449 1446 } 1447 + 1448 + out: 1450 1449 IP_VS_DBG_PKT(12, af, pp, skb, iph.off, 1451 1450 "ip_vs_out: packet continues traversal as normal"); 1452 1451 return NF_ACCEPT; 1452 + 1453 + ignore_cp: 1454 + __ip_vs_conn_put(cp); 1455 + goto out; 1453 1456 } 1454 1457 1455 1458 /*
+12
net/netfilter/nf_conntrack_helper.c
··· 174 174 #endif 175 175 if (h != NULL && !try_module_get(h->me)) 176 176 h = NULL; 177 + if (h != NULL && !refcount_inc_not_zero(&h->refcnt)) { 178 + module_put(h->me); 179 + h = NULL; 180 + } 177 181 178 182 rcu_read_unlock(); 179 183 180 184 return h; 181 185 } 182 186 EXPORT_SYMBOL_GPL(nf_conntrack_helper_try_module_get); 187 + 188 + void nf_conntrack_helper_put(struct nf_conntrack_helper *helper) 189 + { 190 + refcount_dec(&helper->refcnt); 191 + module_put(helper->me); 192 + } 193 + EXPORT_SYMBOL_GPL(nf_conntrack_helper_put); 183 194 184 195 struct nf_conn_help * 185 196 nf_ct_helper_ext_add(struct nf_conn *ct, ··· 428 417 } 429 418 } 430 419 } 420 + refcount_set(&me->refcnt, 1); 431 421 hlist_add_head_rcu(&me->hnode, &nf_ct_helper_hash[h]); 432 422 nf_ct_helper_count++; 433 423 out:
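refcount_inc_not_zero() is the "take a reference only if the object is still live" primitive the diff adds on top of try_module_get(); a zero count means the helper is being unregistered and must not be revived. A userspace sketch with C11 atomics (`demo_refcnt` stands in for helper->refcnt):

```c
#include <assert.h>
#include <stdatomic.h>

static int refcount_inc_not_zero(atomic_uint *r)
{
	unsigned int old = atomic_load(r);

	while (old != 0) {
		/* on failure, 'old' is reloaded with the current value */
		if (atomic_compare_exchange_weak(r, &old, old + 1))
			return 1;	/* reference taken */
	}
	return 0;			/* object already dead */
}

static atomic_uint demo_refcnt = 1;	/* set at registration time */
```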
+7 -4
net/netfilter/nf_conntrack_netlink.c
··· 45 45 #include <net/netfilter/nf_conntrack_zones.h> 46 46 #include <net/netfilter/nf_conntrack_timestamp.h> 47 47 #include <net/netfilter/nf_conntrack_labels.h> 48 + #include <net/netfilter/nf_conntrack_seqadj.h> 49 + #include <net/netfilter/nf_conntrack_synproxy.h> 48 50 #ifdef CONFIG_NF_NAT_NEEDED 49 51 #include <net/netfilter/nf_nat_core.h> 50 52 #include <net/netfilter/nf_nat_l4proto.h> ··· 1009 1007 1010 1008 static int 1011 1009 ctnetlink_parse_tuple(const struct nlattr * const cda[], 1012 - struct nf_conntrack_tuple *tuple, 1013 - enum ctattr_type type, u_int8_t l3num, 1014 - struct nf_conntrack_zone *zone) 1010 + struct nf_conntrack_tuple *tuple, u32 type, 1011 + u_int8_t l3num, struct nf_conntrack_zone *zone) 1015 1012 { 1016 1013 struct nlattr *tb[CTA_TUPLE_MAX+1]; 1017 1014 int err; ··· 1829 1828 nf_ct_tstamp_ext_add(ct, GFP_ATOMIC); 1830 1829 nf_ct_ecache_ext_add(ct, 0, 0, GFP_ATOMIC); 1831 1830 nf_ct_labels_ext_add(ct); 1831 + nfct_seqadj_ext_add(ct); 1832 + nfct_synproxy_ext_add(ct); 1832 1833 1833 1834 /* we must add conntrack extensions before confirmation. */ 1834 1835 ct->status |= IPS_CONFIRMED; ··· 2450 2447 2451 2448 static int ctnetlink_exp_dump_tuple(struct sk_buff *skb, 2452 2449 const struct nf_conntrack_tuple *tuple, 2453 - enum ctattr_expect type) 2450 + u32 type) 2454 2451 { 2455 2452 struct nlattr *nest_parms; 2456 2453
+4
net/netfilter/nf_nat_core.c
··· 409 409 { 410 410 struct nf_conntrack_tuple curr_tuple, new_tuple; 411 411 412 + /* Can't setup nat info for confirmed ct. */ 413 + if (nf_ct_is_confirmed(ct)) 414 + return NF_ACCEPT; 415 + 412 416 NF_CT_ASSERT(maniptype == NF_NAT_MANIP_SRC || 413 417 maniptype == NF_NAT_MANIP_DST); 414 418 BUG_ON(nf_nat_initialized(ct, maniptype));
+128 -32
net/netfilter/nf_tables_api.c
··· 3367 3367 return nf_tables_fill_setelem(args->skb, set, elem); 3368 3368 } 3369 3369 3370 + struct nft_set_dump_ctx { 3371 + const struct nft_set *set; 3372 + struct nft_ctx ctx; 3373 + }; 3374 + 3370 3375 static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb) 3371 3376 { 3377 + struct nft_set_dump_ctx *dump_ctx = cb->data; 3372 3378 struct net *net = sock_net(skb->sk); 3373 - u8 genmask = nft_genmask_cur(net); 3379 + struct nft_af_info *afi; 3380 + struct nft_table *table; 3374 3381 struct nft_set *set; 3375 3382 struct nft_set_dump_args args; 3376 - struct nft_ctx ctx; 3377 - struct nlattr *nla[NFTA_SET_ELEM_LIST_MAX + 1]; 3383 + bool set_found = false; 3378 3384 struct nfgenmsg *nfmsg; 3379 3385 struct nlmsghdr *nlh; 3380 3386 struct nlattr *nest; 3381 3387 u32 portid, seq; 3382 - int event, err; 3388 + int event; 3383 3389 3384 - err = nlmsg_parse(cb->nlh, sizeof(struct nfgenmsg), nla, 3385 - NFTA_SET_ELEM_LIST_MAX, nft_set_elem_list_policy, 3386 - NULL); 3387 - if (err < 0) 3388 - return err; 3390 + rcu_read_lock(); 3391 + list_for_each_entry_rcu(afi, &net->nft.af_info, list) { 3392 + if (afi != dump_ctx->ctx.afi) 3393 + continue; 3389 3394 3390 - err = nft_ctx_init_from_elemattr(&ctx, net, cb->skb, cb->nlh, 3391 - (void *)nla, genmask); 3392 - if (err < 0) 3393 - return err; 3395 + list_for_each_entry_rcu(table, &afi->tables, list) { 3396 + if (table != dump_ctx->ctx.table) 3397 + continue; 3394 3398 3395 - set = nf_tables_set_lookup(ctx.table, nla[NFTA_SET_ELEM_LIST_SET], 3396 - genmask); 3397 - if (IS_ERR(set)) 3398 - return PTR_ERR(set); 3399 + list_for_each_entry_rcu(set, &table->sets, list) { 3400 + if (set == dump_ctx->set) { 3401 + set_found = true; 3402 + break; 3403 + } 3404 + } 3405 + break; 3406 + } 3407 + break; 3408 + } 3409 + 3410 + if (!set_found) { 3411 + rcu_read_unlock(); 3412 + return -ENOENT; 3413 + } 3399 3414 3400 3415 event = nfnl_msg_type(NFNL_SUBSYS_NFTABLES, NFT_MSG_NEWSETELEM); 3401 3416 portid = 
NETLINK_CB(cb->skb).portid; ··· 3422 3407 goto nla_put_failure; 3423 3408 3424 3409 nfmsg = nlmsg_data(nlh); 3425 - nfmsg->nfgen_family = ctx.afi->family; 3410 + nfmsg->nfgen_family = afi->family; 3426 3411 nfmsg->version = NFNETLINK_V0; 3427 - nfmsg->res_id = htons(ctx.net->nft.base_seq & 0xffff); 3412 + nfmsg->res_id = htons(net->nft.base_seq & 0xffff); 3428 3413 3429 - if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, ctx.table->name)) 3414 + if (nla_put_string(skb, NFTA_SET_ELEM_LIST_TABLE, table->name)) 3430 3415 goto nla_put_failure; 3431 3416 if (nla_put_string(skb, NFTA_SET_ELEM_LIST_SET, set->name)) 3432 3417 goto nla_put_failure; ··· 3437 3422 3438 3423 args.cb = cb; 3439 3424 args.skb = skb; 3440 - args.iter.genmask = nft_genmask_cur(ctx.net); 3425 + args.iter.genmask = nft_genmask_cur(net); 3441 3426 args.iter.skip = cb->args[0]; 3442 3427 args.iter.count = 0; 3443 3428 args.iter.err = 0; 3444 3429 args.iter.fn = nf_tables_dump_setelem; 3445 - set->ops->walk(&ctx, set, &args.iter); 3430 + set->ops->walk(&dump_ctx->ctx, set, &args.iter); 3431 + rcu_read_unlock(); 3446 3432 3447 3433 nla_nest_end(skb, nest); 3448 3434 nlmsg_end(skb, nlh); ··· 3457 3441 return skb->len; 3458 3442 3459 3443 nla_put_failure: 3444 + rcu_read_unlock(); 3460 3445 return -ENOSPC; 3446 + } 3447 + 3448 + static int nf_tables_dump_set_done(struct netlink_callback *cb) 3449 + { 3450 + kfree(cb->data); 3451 + return 0; 3461 3452 } 3462 3453 3463 3454 static int nf_tables_getsetelem(struct net *net, struct sock *nlsk, ··· 3488 3465 if (nlh->nlmsg_flags & NLM_F_DUMP) { 3489 3466 struct netlink_dump_control c = { 3490 3467 .dump = nf_tables_dump_set, 3468 + .done = nf_tables_dump_set_done, 3491 3469 }; 3470 + struct nft_set_dump_ctx *dump_ctx; 3471 + 3472 + dump_ctx = kmalloc(sizeof(*dump_ctx), GFP_KERNEL); 3473 + if (!dump_ctx) 3474 + return -ENOMEM; 3475 + 3476 + dump_ctx->set = set; 3477 + dump_ctx->ctx = ctx; 3478 + 3479 + c.data = dump_ctx; 3492 3480 return netlink_dump_start(nlsk, 
skb, nlh, &c); 3493 3481 } 3494 3482 return -EOPNOTSUPP; ··· 3627 3593 { 3628 3594 struct nft_set_ext *ext = nft_set_elem_ext(set, elem); 3629 3595 3630 - nft_data_uninit(nft_set_ext_key(ext), NFT_DATA_VALUE); 3596 + nft_data_release(nft_set_ext_key(ext), NFT_DATA_VALUE); 3631 3597 if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3632 - nft_data_uninit(nft_set_ext_data(ext), set->dtype); 3598 + nft_data_release(nft_set_ext_data(ext), set->dtype); 3633 3599 if (destroy_expr && nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) 3634 3600 nf_tables_expr_destroy(NULL, nft_set_ext_expr(ext)); 3635 3601 if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) ··· 3637 3603 kfree(elem); 3638 3604 } 3639 3605 EXPORT_SYMBOL_GPL(nft_set_elem_destroy); 3606 + 3607 + /* Only called from commit path, nft_set_elem_deactivate() already deals with 3608 + * the refcounting from the preparation phase. 3609 + */ 3610 + static void nf_tables_set_elem_destroy(const struct nft_set *set, void *elem) 3611 + { 3612 + struct nft_set_ext *ext = nft_set_elem_ext(set, elem); 3613 + 3614 + if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPR)) 3615 + nf_tables_expr_destroy(NULL, nft_set_ext_expr(ext)); 3616 + kfree(elem); 3617 + } 3640 3618 3641 3619 static int nft_setelem_parse_flags(const struct nft_set *set, 3642 3620 const struct nlattr *attr, u32 *flags) ··· 3861 3815 kfree(elem.priv); 3862 3816 err3: 3863 3817 if (nla[NFTA_SET_ELEM_DATA] != NULL) 3864 - nft_data_uninit(&data, d2.type); 3818 + nft_data_release(&data, d2.type); 3865 3819 err2: 3866 - nft_data_uninit(&elem.key.val, d1.type); 3820 + nft_data_release(&elem.key.val, d1.type); 3867 3821 err1: 3868 3822 return err; 3869 3823 } ··· 3906 3860 break; 3907 3861 } 3908 3862 return err; 3863 + } 3864 + 3865 + /** 3866 + * nft_data_hold - hold a nft_data item 3867 + * 3868 + * @data: struct nft_data to release 3869 + * @type: type of data 3870 + * 3871 + * Hold a nft_data item. 
NFT_DATA_VALUE types can be silently discarded, 3872 + * NFT_DATA_VERDICT bumps the reference to chains in case of NFT_JUMP and 3873 + * NFT_GOTO verdicts. This function must be called on active data objects 3874 + * from the second phase of the commit protocol. 3875 + */ 3876 + static void nft_data_hold(const struct nft_data *data, enum nft_data_types type) 3877 + { 3878 + if (type == NFT_DATA_VERDICT) { 3879 + switch (data->verdict.code) { 3880 + case NFT_JUMP: 3881 + case NFT_GOTO: 3882 + data->verdict.chain->use++; 3883 + break; 3884 + } 3885 + } 3886 + } 3887 + 3888 + static void nft_set_elem_activate(const struct net *net, 3889 + const struct nft_set *set, 3890 + struct nft_set_elem *elem) 3891 + { 3892 + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3893 + 3894 + if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3895 + nft_data_hold(nft_set_ext_data(ext), set->dtype); 3896 + if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 3897 + (*nft_set_ext_obj(ext))->use++; 3898 + } 3899 + 3900 + static void nft_set_elem_deactivate(const struct net *net, 3901 + const struct nft_set *set, 3902 + struct nft_set_elem *elem) 3903 + { 3904 + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); 3905 + 3906 + if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA)) 3907 + nft_data_release(nft_set_ext_data(ext), set->dtype); 3908 + if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF)) 3909 + (*nft_set_ext_obj(ext))->use--; 3909 3910 } 3910 3911 3911 3912 static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set, ··· 4020 3927 kfree(elem.priv); 4021 3928 elem.priv = priv; 4022 3929 3930 + nft_set_elem_deactivate(ctx->net, set, &elem); 3931 + 4023 3932 nft_trans_elem(trans) = elem; 4024 3933 list_add_tail(&trans->list, &ctx->net->nft.commit_list); 4025 3934 return 0; ··· 4031 3936 err3: 4032 3937 kfree(elem.priv); 4033 3938 err2: 4034 - nft_data_uninit(&elem.key.val, desc.type); 3939 + nft_data_release(&elem.key.val, desc.type); 4035 3940 err1: 4036 3941 
return err; 4037 3942 } ··· 4838 4743 nft_set_destroy(nft_trans_set(trans)); 4839 4744 break; 4840 4745 case NFT_MSG_DELSETELEM: 4841 - nft_set_elem_destroy(nft_trans_elem_set(trans), 4842 - nft_trans_elem(trans).priv, true); 4746 + nf_tables_set_elem_destroy(nft_trans_elem_set(trans), 4747 + nft_trans_elem(trans).priv); 4843 4748 break; 4844 4749 case NFT_MSG_DELOBJ: 4845 4750 nft_obj_destroy(nft_trans_obj(trans)); ··· 5074 4979 case NFT_MSG_DELSETELEM: 5075 4980 te = (struct nft_trans_elem *)trans->data; 5076 4981 4982 + nft_set_elem_activate(net, te->set, &te->elem); 5077 4983 te->set->ops->activate(net, te->set, &te->elem); 5078 4984 te->set->ndeact--; 5079 4985 ··· 5560 5464 EXPORT_SYMBOL_GPL(nft_data_init); 5561 5465 5562 5466 /** 5563 - * nft_data_uninit - release a nft_data item 5467 + * nft_data_release - release a nft_data item 5564 5468 * 5565 5469 * @data: struct nft_data to release 5566 5470 * @type: type of data ··· 5568 5472 * Release a nft_data item. NFT_DATA_VALUE types can be silently discarded, 5569 5473 * all others need to be released by calling this function. 5570 5474 */ 5571 - void nft_data_uninit(const struct nft_data *data, enum nft_data_types type) 5475 + void nft_data_release(const struct nft_data *data, enum nft_data_types type) 5572 5476 { 5573 5477 if (type < NFT_DATA_VERDICT) 5574 5478 return; ··· 5579 5483 WARN_ON(1); 5580 5484 } 5581 5485 } 5582 - EXPORT_SYMBOL_GPL(nft_data_uninit); 5486 + EXPORT_SYMBOL_GPL(nft_data_release); 5583 5487 5584 5488 int nft_data_dump(struct sk_buff *skb, int attr, const struct nft_data *data, 5585 5489 enum nft_data_types type, unsigned int len)
+14 -5
net/netfilter/nft_bitwise.c
··· 83 83 tb[NFTA_BITWISE_MASK]); 84 84 if (err < 0) 85 85 return err; 86 - if (d1.len != priv->len) 87 - return -EINVAL; 86 + if (d1.len != priv->len) { 87 + err = -EINVAL; 88 + goto err1; 89 + } 88 90 89 91 err = nft_data_init(NULL, &priv->xor, sizeof(priv->xor), &d2, 90 92 tb[NFTA_BITWISE_XOR]); 91 93 if (err < 0) 92 - return err; 93 - if (d2.len != priv->len) 94 - return -EINVAL; 94 + goto err1; 95 + if (d2.len != priv->len) { 96 + err = -EINVAL; 97 + goto err2; 98 + } 95 99 96 100 return 0; 101 + err2: 102 + nft_data_release(&priv->xor, d2.type); 103 + err1: 104 + nft_data_release(&priv->mask, d1.type); 105 + return err; 97 106 } 98 107 99 108 static int nft_bitwise_dump(struct sk_buff *skb, const struct nft_expr *expr)
+10 -2
net/netfilter/nft_cmp.c
··· 201 201 if (err < 0) 202 202 return ERR_PTR(err); 203 203 204 + if (desc.type != NFT_DATA_VALUE) { 205 + err = -EINVAL; 206 + goto err1; 207 + } 208 + 204 209 if (desc.len <= sizeof(u32) && op == NFT_CMP_EQ) 205 210 return &nft_cmp_fast_ops; 206 - else 207 - return &nft_cmp_ops; 211 + 212 + return &nft_cmp_ops; 213 + err1: 214 + nft_data_release(&data, desc.type); 215 + return ERR_PTR(-EINVAL); 208 216 } 209 217 210 218 struct nft_expr_type nft_cmp_type __read_mostly = {
+2 -2
net/netfilter/nft_ct.c
··· 826 826 struct nft_ct_helper_obj *priv = nft_obj_data(obj); 827 827 828 828 if (priv->helper4) 829 - module_put(priv->helper4->me); 829 + nf_conntrack_helper_put(priv->helper4); 830 830 if (priv->helper6) 831 - module_put(priv->helper6->me); 831 + nf_conntrack_helper_put(priv->helper6); 832 832 } 833 833 834 834 static void nft_ct_helper_obj_eval(struct nft_object *obj,
+3 -2
net/netfilter/nft_immediate.c
··· 65 65 return 0; 66 66 67 67 err1: 68 - nft_data_uninit(&priv->data, desc.type); 68 + nft_data_release(&priv->data, desc.type); 69 69 return err; 70 70 } 71 71 ··· 73 73 const struct nft_expr *expr) 74 74 { 75 75 const struct nft_immediate_expr *priv = nft_expr_priv(expr); 76 - return nft_data_uninit(&priv->data, nft_dreg_to_type(priv->dreg)); 76 + 77 + return nft_data_release(&priv->data, nft_dreg_to_type(priv->dreg)); 77 78 } 78 79 79 80 static int nft_immediate_dump(struct sk_buff *skb, const struct nft_expr *expr)
+2 -2
net/netfilter/nft_range.c
··· 102 102 priv->len = desc_from.len; 103 103 return 0; 104 104 err2: 105 - nft_data_uninit(&priv->data_to, desc_to.type); 105 + nft_data_release(&priv->data_to, desc_to.type); 106 106 err1: 107 - nft_data_uninit(&priv->data_from, desc_from.type); 107 + nft_data_release(&priv->data_from, desc_from.type); 108 108 return err; 109 109 } 110 110
+1 -1
net/netfilter/nft_set_hash.c
··· 222 222 struct nft_set_elem elem; 223 223 int err; 224 224 225 - err = rhashtable_walk_init(&priv->ht, &hti, GFP_KERNEL); 225 + err = rhashtable_walk_init(&priv->ht, &hti, GFP_ATOMIC); 226 226 iter->err = err; 227 227 if (err) 228 228 return;
+16 -8
net/netfilter/x_tables.c
··· 283 283 &U->u.user.revision, K->u.kernel.TYPE->revision) 284 284 285 285 int xt_data_to_user(void __user *dst, const void *src, 286 - int usersize, int size) 286 + int usersize, int size, int aligned_size) 287 287 { 288 288 usersize = usersize ? : size; 289 289 if (copy_to_user(dst, src, usersize)) 290 290 return -EFAULT; 291 - if (usersize != size && clear_user(dst + usersize, size - usersize)) 291 + if (usersize != aligned_size && 292 + clear_user(dst + usersize, aligned_size - usersize)) 292 293 return -EFAULT; 293 294 294 295 return 0; 295 296 } 296 297 EXPORT_SYMBOL_GPL(xt_data_to_user); 297 298 298 - #define XT_DATA_TO_USER(U, K, TYPE, C_SIZE) \ 299 + #define XT_DATA_TO_USER(U, K, TYPE) \ 299 300 xt_data_to_user(U->data, K->data, \ 300 301 K->u.kernel.TYPE->usersize, \ 301 - C_SIZE ? : K->u.kernel.TYPE->TYPE##size) 302 + K->u.kernel.TYPE->TYPE##size, \ 303 + XT_ALIGN(K->u.kernel.TYPE->TYPE##size)) 302 304 303 305 int xt_match_to_user(const struct xt_entry_match *m, 304 306 struct xt_entry_match __user *u) 305 307 { 306 308 return XT_OBJ_TO_USER(u, m, match, 0) || 307 - XT_DATA_TO_USER(u, m, match, 0); 309 + XT_DATA_TO_USER(u, m, match); 308 310 } 309 311 EXPORT_SYMBOL_GPL(xt_match_to_user); 310 312 ··· 314 312 struct xt_entry_target __user *u) 315 313 { 316 314 return XT_OBJ_TO_USER(u, t, target, 0) || 317 - XT_DATA_TO_USER(u, t, target, 0); 315 + XT_DATA_TO_USER(u, t, target); 318 316 } 319 317 EXPORT_SYMBOL_GPL(xt_target_to_user); 320 318 ··· 613 611 } 614 612 EXPORT_SYMBOL_GPL(xt_compat_match_from_user); 615 613 614 + #define COMPAT_XT_DATA_TO_USER(U, K, TYPE, C_SIZE) \ 615 + xt_data_to_user(U->data, K->data, \ 616 + K->u.kernel.TYPE->usersize, \ 617 + C_SIZE, \ 618 + COMPAT_XT_ALIGN(C_SIZE)) 619 + 616 620 int xt_compat_match_to_user(const struct xt_entry_match *m, 617 621 void __user **dstptr, unsigned int *size) 618 622 { ··· 634 626 if (match->compat_to_user((void __user *)cm->data, m->data)) 635 627 return -EFAULT; 636 628 } else { 637 - if 
(XT_DATA_TO_USER(cm, m, match, msize - sizeof(*cm))) 629 + if (COMPAT_XT_DATA_TO_USER(cm, m, match, msize - sizeof(*cm))) 638 630 return -EFAULT; 639 631 } 640 632 ··· 980 972 if (target->compat_to_user((void __user *)ct->data, t->data)) 981 973 return -EFAULT; 982 974 } else { 983 - if (XT_DATA_TO_USER(ct, t, target, tsize - sizeof(*ct))) 975 + if (COMPAT_XT_DATA_TO_USER(ct, t, target, tsize - sizeof(*ct))) 984 976 return -EFAULT; 985 977 } 986 978
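XT_ALIGN()/COMPAT_XT_ALIGN() round the payload size up to the xtables blob alignment, and xt_data_to_user() now zeroes everything between usersize and that aligned end so no stale kernel bytes reach userspace. A generic power-of-two round-up sketch (ALIGN_UP and the helper are this note's; the kernel's alignment constant is arch-dependent):

```c
#include <assert.h>
#include <stddef.h>

#define ALIGN_UP(x, a) (((size_t)(x) + (size_t)(a) - 1) & ~((size_t)(a) - 1))

/* the byte count xt_data_to_user() must clear after the payload */
static size_t padding_to_clear(size_t usersize, size_t aligned_size)
{
	return aligned_size - usersize;
}
```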
+3 -3
net/netfilter/xt_CT.c
··· 96 96 97 97 help = nf_ct_helper_ext_add(ct, helper, GFP_KERNEL); 98 98 if (help == NULL) { 99 - module_put(helper->me); 99 + nf_conntrack_helper_put(helper); 100 100 return -ENOMEM; 101 101 } 102 102 ··· 263 263 err4: 264 264 help = nfct_help(ct); 265 265 if (help) 266 - module_put(help->helper->me); 266 + nf_conntrack_helper_put(help->helper); 267 267 err3: 268 268 nf_ct_tmpl_free(ct); 269 269 err2: ··· 346 346 if (ct) { 347 347 help = nfct_help(ct); 348 348 if (help) 349 - module_put(help->helper->me); 349 + nf_conntrack_helper_put(help->helper); 350 350 351 351 nf_ct_netns_put(par->net, par->family); 352 352
+2 -2
net/openvswitch/conntrack.c
··· 1123 1123 1124 1124 help = nf_ct_helper_ext_add(info->ct, helper, GFP_KERNEL); 1125 1125 if (!help) { 1126 - module_put(helper->me); 1126 + nf_conntrack_helper_put(helper); 1127 1127 return -ENOMEM; 1128 1128 } 1129 1129 ··· 1584 1584 static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info) 1585 1585 { 1586 1586 if (ct_info->helper) 1587 - module_put(ct_info->helper->me); 1587 + nf_conntrack_helper_put(ct_info->helper); 1588 1588 if (ct_info->ct) 1589 1589 nf_ct_tmpl_free(ct_info->ct); 1590 1590 }
-1
net/sched/cls_matchall.c
··· 203 203 204 204 *arg = (unsigned long) head; 205 205 rcu_assign_pointer(tp->root, new); 206 - call_rcu(&head->rcu, mall_destroy_rcu); 207 206 return 0; 208 207 209 208 err_replace_hw_filter:
+8 -13
net/vmw_vsock/af_vsock.c
··· 1540 1540 long timeout; 1541 1541 int err; 1542 1542 struct vsock_transport_send_notify_data send_data; 1543 - 1544 - DEFINE_WAIT(wait); 1543 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 1545 1544 1546 1545 sk = sock->sk; 1547 1546 vsk = vsock_sk(sk); ··· 1583 1584 if (err < 0) 1584 1585 goto out; 1585 1586 1586 - 1587 1587 while (total_written < len) { 1588 1588 ssize_t written; 1589 1589 1590 - prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1590 + add_wait_queue(sk_sleep(sk), &wait); 1591 1591 while (vsock_stream_has_space(vsk) == 0 && 1592 1592 sk->sk_err == 0 && 1593 1593 !(sk->sk_shutdown & SEND_SHUTDOWN) && ··· 1595 1597 /* Don't wait for non-blocking sockets. */ 1596 1598 if (timeout == 0) { 1597 1599 err = -EAGAIN; 1598 - finish_wait(sk_sleep(sk), &wait); 1600 + remove_wait_queue(sk_sleep(sk), &wait); 1599 1601 goto out_err; 1600 1602 } 1601 1603 1602 1604 err = transport->notify_send_pre_block(vsk, &send_data); 1603 1605 if (err < 0) { 1604 - finish_wait(sk_sleep(sk), &wait); 1606 + remove_wait_queue(sk_sleep(sk), &wait); 1605 1607 goto out_err; 1606 1608 } 1607 1609 1608 1610 release_sock(sk); 1609 - timeout = schedule_timeout(timeout); 1611 + timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout); 1610 1612 lock_sock(sk); 1611 1613 if (signal_pending(current)) { 1612 1614 err = sock_intr_errno(timeout); 1613 - finish_wait(sk_sleep(sk), &wait); 1615 + remove_wait_queue(sk_sleep(sk), &wait); 1614 1616 goto out_err; 1615 1617 } else if (timeout == 0) { 1616 1618 err = -EAGAIN; 1617 - finish_wait(sk_sleep(sk), &wait); 1619 + remove_wait_queue(sk_sleep(sk), &wait); 1618 1620 goto out_err; 1619 1621 } 1620 - 1621 - prepare_to_wait(sk_sleep(sk), &wait, 1622 - TASK_INTERRUPTIBLE); 1623 1622 } 1624 - finish_wait(sk_sleep(sk), &wait); 1623 + remove_wait_queue(sk_sleep(sk), &wait); 1625 1624 1626 1625 /* These checks occur both as part of and after the loop 1627 1626 * conditional since we need to check before and after
+1 -1
scripts/Makefile.lib
··· 175 175 176 176 dtc_cpp_flags = -Wp,-MD,$(depfile).pre.tmp -nostdinc \ 177 177 -I$(srctree)/arch/$(SRCARCH)/boot/dts \ 178 - -I$(srctree)/arch/$(SRCARCH)/boot/dts/include \ 178 + -I$(srctree)/scripts/dtc/include-prefixes \ 179 179 -I$(srctree)/drivers/of/testcase-data \ 180 180 -undef -D__DTS__ 181 181
+1 -1
scripts/dtc/checks.c
··· 873 873 while (size--) 874 874 reg = (reg << 32) | fdt32_to_cpu(*(cells++)); 875 875 876 - snprintf(unit_addr, sizeof(unit_addr), "%lx", reg); 876 + snprintf(unit_addr, sizeof(unit_addr), "%zx", reg); 877 877 if (!streq(unitname, unit_addr)) 878 878 FAIL(c, dti, "Node %s simple-bus unit address format error, expected \"%s\"", 879 879 node->fullpath, unit_addr);
-4
sound/x86/intel_hdmi_audio.c
··· 1809 1809 pdata->notify_pending = false; 1810 1810 spin_unlock_irq(&pdata->lpe_audio_slock); 1811 1811 1812 - /* runtime PM isn't enabled as default, since it won't save much on 1813 - * BYT/CHT devices; user who want the runtime PM should adjust the 1814 - * power/ontrol and power/autosuspend_delay_ms sysfs entries instead 1815 - */ 1816 1812 pm_runtime_use_autosuspend(&pdev->dev); 1817 1813 pm_runtime_mark_last_busy(&pdev->dev); 1818 1814 pm_runtime_set_active(&pdev->dev);
+4
tools/power/acpi/.gitignore
··· 1 + acpidbg 2 + acpidump 3 + ec 4 + include
+1 -1
tools/testing/selftests/ftrace/ftracetest
··· 58 58 ;; 59 59 --verbose|-v|-vv) 60 60 VERBOSE=$((VERBOSE + 1)) 61 - [ $1 == '-vv' ] && VERBOSE=$((VERBOSE + 1)) 61 + [ $1 = '-vv' ] && VERBOSE=$((VERBOSE + 1)) 62 62 shift 1 63 63 ;; 64 64 --debug|-d)
+1 -1
tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
··· 48 48 e=`cat $EVENT_ENABLE` 49 49 if [ "$e" != $val ]; then 50 50 echo "Expected $val but found $e" 51 - exit -1 51 + exit 1 52 52 fi 53 53 } 54 54
+2 -2
tools/testing/selftests/ftrace/test.d/functions
··· 34 34 echo > set_ftrace_filter 35 35 grep -v '^#' set_ftrace_filter | while read t; do 36 36 tr=`echo $t | cut -d: -f2` 37 - if [ "$tr" == "" ]; then 37 + if [ "$tr" = "" ]; then 38 38 continue 39 39 fi 40 - if [ $tr == "enable_event" -o $tr == "disable_event" ]; then 40 + if [ $tr = "enable_event" -o $tr = "disable_event" ]; then 41 41 tr=`echo $t | cut -d: -f1-4` 42 42 limit=`echo $t | cut -d: -f5` 43 43 else
+6 -2
tools/testing/selftests/ftrace/test.d/instances/instance-event.tc
··· 75 75 if [ -d foo ]; then 76 76 fail "foo still exists" 77 77 fi 78 - exit 0 79 78 80 - 79 + mkdir foo 80 + echo "schedule:enable_event:sched:sched_switch" > foo/set_ftrace_filter 81 + rmdir foo 82 + if [ -d foo ]; then 83 + fail "foo still exists" 84 + fi 81 85 82 86 83 87 instance_slam() {
+1
tools/testing/selftests/powerpc/tm/.gitignore
··· 11 11 tm-signal-context-chk-gpr 12 12 tm-signal-context-chk-vmx 13 13 tm-signal-context-chk-vsx 14 + tm-vmx-unavail
+3 -1
tools/testing/selftests/powerpc/tm/Makefile
··· 2 2 tm-signal-context-chk-vmx tm-signal-context-chk-vsx 3 3 4 4 TEST_GEN_PROGS := tm-resched-dscr tm-syscall tm-signal-msr-resv tm-signal-stack \ 5 - tm-vmxcopy tm-fork tm-tar tm-tmspr $(SIGNAL_CONTEXT_CHK_TESTS) 5 + tm-vmxcopy tm-fork tm-tar tm-tmspr tm-vmx-unavail \ 6 + $(SIGNAL_CONTEXT_CHK_TESTS) 6 7 7 8 include ../../lib.mk 8 9 ··· 14 13 $(OUTPUT)/tm-syscall: tm-syscall-asm.S 15 14 $(OUTPUT)/tm-syscall: CFLAGS += -I../../../../../usr/include 16 15 $(OUTPUT)/tm-tmspr: CFLAGS += -pthread 16 + $(OUTPUT)/tm-vmx-unavail: CFLAGS += -pthread -m64 17 17 18 18 SIGNAL_CONTEXT_CHK_TESTS := $(patsubst %,$(OUTPUT)/%,$(SIGNAL_CONTEXT_CHK_TESTS)) 19 19 $(SIGNAL_CONTEXT_CHK_TESTS): tm-signal.S
+118
tools/testing/selftests/powerpc/tm/tm-vmx-unavail.c
··· 1 + /*
2 + * Copyright 2017, Michael Neuling, IBM Corp.
3 + * Licensed under GPLv2.
4 + * Original: Breno Leitao <brenohl@br.ibm.com> &
5 + * Gustavo Bueno Romero <gromero@br.ibm.com>
6 + * Edited: Michael Neuling
7 + *
8 + * Force VMX unavailable during a transaction and see if it corrupts
9 + * the checkpointed VMX register state after the abort.
10 + */
11 +
12 + #include <inttypes.h>
13 + #include <htmintrin.h>
14 + #include <string.h>
15 + #include <stdlib.h>
16 + #include <stdio.h>
17 + #include <pthread.h>
18 + #include <sys/mman.h>
19 + #include <unistd.h>
20 + #include <pthread.h>
21 +
22 + #include "tm.h"
23 + #include "utils.h"
24 +
25 + int passed;
26 +
27 + void *worker(void *unused)
28 + {
29 + __int128 vmx0;
30 + uint64_t texasr;
31 +
32 + asm goto (
33 + "li 3, 1;" /* Stick non-zero value in VMX0 */
34 + "std 3, 0(%[vmx0_ptr]);"
35 + "lvx 0, 0, %[vmx0_ptr];"
36 +
37 + /* Wait here a bit so we get scheduled out 255 times */
38 + "lis 3, 0x3fff;"
39 + "1: ;"
40 + "addi 3, 3, -1;"
41 + "cmpdi 3, 0;"
42 + "bne 1b;"
43 +
44 + /* Kernel will hopefully turn VMX off now */
45 +
46 + "tbegin. ;"
47 + "beq failure;"
48 +
49 + /* Cause VMX unavail. Any VMX instruction */
50 + "vaddcuw 0,0,0;"
51 +
52 + "tend. ;"
53 + "b %l[success];"
54 +
55 + /* Check VMX0 sanity after abort */
56 + "failure: ;"
57 + "lvx 1, 0, %[vmx0_ptr];"
58 + "vcmpequb. 2, 0, 1;"
59 + "bc 4, 24, %l[value_mismatch];"
60 + "b %l[value_match];"
61 + :
62 + : [vmx0_ptr] "r"(&vmx0)
63 + : "r3"
64 + : success, value_match, value_mismatch
65 + );
66 +
67 + /* HTM aborted and VMX0 is corrupted */
68 + value_mismatch:
69 + texasr = __builtin_get_texasr();
70 +
71 + printf("\n\n==============\n\n");
72 + printf("Failure with error: %lx\n", _TEXASR_FAILURE_CODE(texasr));
73 + printf("Summary error : %lx\n", _TEXASR_FAILURE_SUMMARY(texasr));
74 + printf("TFIAR exact : %lx\n\n", _TEXASR_TFIAR_EXACT(texasr));
75 +
76 + passed = 0;
77 + return NULL;
78 +
79 + /* HTM aborted but VMX0 is correct */
80 + value_match:
81 + // printf("!");
82 + return NULL;
83 +
84 + success:
85 + // printf(".");
86 + return NULL;
87 + }
88 +
89 + int tm_vmx_unavail_test()
90 + {
91 + int threads;
92 + pthread_t *thread;
93 +
94 + SKIP_IF(!have_htm());
95 +
96 + passed = 1;
97 +
98 + threads = sysconf(_SC_NPROCESSORS_ONLN) * 4;
99 + thread = malloc(sizeof(pthread_t)*threads);
100 + if (!thread)
101 + return EXIT_FAILURE;
102 +
103 + for (uint64_t i = 0; i < threads; i++)
104 + pthread_create(&thread[i], NULL, &worker, NULL);
105 +
106 + for (uint64_t i = 0; i < threads; i++)
107 + pthread_join(thread[i], NULL);
108 +
109 + free(thread);
110 +
111 + return passed ? EXIT_SUCCESS : EXIT_FAILURE;
112 + }
113 +
114 +
115 + int main(int argc, char **argv)
116 + {
117 + return test_harness(tm_vmx_unavail_test, "tm_vmx_unavail_test");
118 + }

+9 -9
virt/kvm/arm/hyp/vgic-v3-sr.c
··· 22 22 #include <asm/kvm_hyp.h> 23 23 24 24 #define vtr_to_max_lr_idx(v) ((v) & 0xf) 25 - #define vtr_to_nr_pri_bits(v) (((u32)(v) >> 29) + 1) 25 + #define vtr_to_nr_pre_bits(v) (((u32)(v) >> 26) + 1) 26 26 27 27 static u64 __hyp_text __gic_v3_get_lr(unsigned int lr) 28 28 { ··· 135 135 136 136 if (used_lrs) { 137 137 int i; 138 - u32 nr_pri_bits; 138 + u32 nr_pre_bits; 139 139 140 140 cpu_if->vgic_elrsr = read_gicreg(ICH_ELSR_EL2); 141 141 142 142 write_gicreg(0, ICH_HCR_EL2); 143 143 val = read_gicreg(ICH_VTR_EL2); 144 - nr_pri_bits = vtr_to_nr_pri_bits(val); 144 + nr_pre_bits = vtr_to_nr_pre_bits(val); 145 145 146 146 for (i = 0; i < used_lrs; i++) { 147 147 if (cpu_if->vgic_elrsr & (1 << i)) ··· 152 152 __gic_v3_set_lr(0, i); 153 153 } 154 154 155 - switch (nr_pri_bits) { 155 + switch (nr_pre_bits) { 156 156 case 7: 157 157 cpu_if->vgic_ap0r[3] = read_gicreg(ICH_AP0R3_EL2); 158 158 cpu_if->vgic_ap0r[2] = read_gicreg(ICH_AP0R2_EL2); ··· 162 162 cpu_if->vgic_ap0r[0] = read_gicreg(ICH_AP0R0_EL2); 163 163 } 164 164 165 - switch (nr_pri_bits) { 165 + switch (nr_pre_bits) { 166 166 case 7: 167 167 cpu_if->vgic_ap1r[3] = read_gicreg(ICH_AP1R3_EL2); 168 168 cpu_if->vgic_ap1r[2] = read_gicreg(ICH_AP1R2_EL2); ··· 198 198 struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3; 199 199 u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs; 200 200 u64 val; 201 - u32 nr_pri_bits; 201 + u32 nr_pre_bits; 202 202 int i; 203 203 204 204 /* ··· 217 217 } 218 218 219 219 val = read_gicreg(ICH_VTR_EL2); 220 - nr_pri_bits = vtr_to_nr_pri_bits(val); 220 + nr_pre_bits = vtr_to_nr_pre_bits(val); 221 221 222 222 if (used_lrs) { 223 223 write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2); 224 224 225 - switch (nr_pri_bits) { 225 + switch (nr_pre_bits) { 226 226 case 7: 227 227 write_gicreg(cpu_if->vgic_ap0r[3], ICH_AP0R3_EL2); 228 228 write_gicreg(cpu_if->vgic_ap0r[2], ICH_AP0R2_EL2); ··· 232 232 write_gicreg(cpu_if->vgic_ap0r[0], ICH_AP0R0_EL2); 233 233 } 234 234 235 - switch (nr_pri_bits) { 
235 + switch (nr_pre_bits) { 236 236 case 7: 237 237 write_gicreg(cpu_if->vgic_ap1r[3], ICH_AP1R3_EL2); 238 238 write_gicreg(cpu_if->vgic_ap1r[2], ICH_AP1R2_EL2);
+21 -12
virt/kvm/arm/mmu.c
··· 295 295 assert_spin_locked(&kvm->mmu_lock); 296 296 pgd = kvm->arch.pgd + stage2_pgd_index(addr); 297 297 do { 298 + /* 299 + * Make sure the page table is still active, as another thread 300 + * could have possibly freed the page table, while we released 301 + * the lock. 302 + */ 303 + if (!READ_ONCE(kvm->arch.pgd)) 304 + break; 298 305 next = stage2_pgd_addr_end(addr, end); 299 306 if (!stage2_pgd_none(*pgd)) 300 307 unmap_stage2_puds(kvm, pgd, addr, next); ··· 836 829 * Walks the level-1 page table pointed to by kvm->arch.pgd and frees all 837 830 * underlying level-2 and level-3 tables before freeing the actual level-1 table 838 831 * and setting the struct pointer to NULL. 839 - * 840 - * Note we don't need locking here as this is only called when the VM is 841 - * destroyed, which can only be done once. 842 832 */ 843 833 void kvm_free_stage2_pgd(struct kvm *kvm) 844 834 { 845 - if (kvm->arch.pgd == NULL) 846 - return; 835 + void *pgd = NULL; 847 836 848 837 spin_lock(&kvm->mmu_lock); 849 - unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE); 838 + if (kvm->arch.pgd) { 839 + unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE); 840 + pgd = READ_ONCE(kvm->arch.pgd); 841 + kvm->arch.pgd = NULL; 842 + } 850 843 spin_unlock(&kvm->mmu_lock); 851 844 852 845 /* Free the HW pgd, one page at a time */ 853 - free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE); 854 - kvm->arch.pgd = NULL; 846 + if (pgd) 847 + free_pages_exact(pgd, S2_PGD_SIZE); 855 848 } 856 849 857 850 static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, ··· 1177 1170 * large. Otherwise, we may see kernel panics with 1178 1171 * CONFIG_DETECT_HUNG_TASK, CONFIG_LOCKUP_DETECTOR, 1179 1172 * CONFIG_LOCKDEP. Additionally, holding the lock too long 1180 - * will also starve other vCPUs. 1173 + * will also starve other vCPUs. We have to also make sure 1174 + * that the page tables are not freed while we released 1175 + * the lock. 
1181 1176 */ 1182 - if (need_resched() || spin_needbreak(&kvm->mmu_lock)) 1183 - cond_resched_lock(&kvm->mmu_lock); 1184 - 1177 + cond_resched_lock(&kvm->mmu_lock); 1178 + if (!READ_ONCE(kvm->arch.pgd)) 1179 + break; 1185 1180 next = stage2_pgd_addr_end(addr, end); 1186 1181 if (stage2_pgd_present(*pgd)) 1187 1182 stage2_wp_puds(pgd, addr, next);
+4 -1
virt/kvm/arm/vgic/vgic-init.c
··· 242 242 * If we are creating a VCPU with a GICv3 we must also register the 243 243 * KVM io device for the redistributor that belongs to this VCPU. 244 244 */ 245 - if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) 245 + if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) { 246 + mutex_lock(&vcpu->kvm->lock); 246 247 ret = vgic_register_redist_iodev(vcpu); 248 + mutex_unlock(&vcpu->kvm->lock); 249 + } 247 250 return ret; 248 251 } 249 252
+9 -3
virt/kvm/arm/vgic/vgic-mmio-v3.c
··· 586 586 if (!vgic_v3_check_base(kvm)) 587 587 return -EINVAL; 588 588 589 - rd_base = vgic->vgic_redist_base + kvm_vcpu_get_idx(vcpu) * SZ_64K * 2; 589 + rd_base = vgic->vgic_redist_base + vgic->vgic_redist_free_offset; 590 590 sgi_base = rd_base + SZ_64K; 591 591 592 592 kvm_iodevice_init(&rd_dev->dev, &kvm_io_gic_ops); ··· 614 614 mutex_lock(&kvm->slots_lock); 615 615 ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, sgi_base, 616 616 SZ_64K, &sgi_dev->dev); 617 - mutex_unlock(&kvm->slots_lock); 618 - if (ret) 617 + if (ret) { 619 618 kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, 620 619 &rd_dev->dev); 620 + goto out; 621 + } 621 622 623 + vgic->vgic_redist_free_offset += 2 * SZ_64K; 624 + out: 625 + mutex_unlock(&kvm->slots_lock); 622 626 return ret; 623 627 } 624 628 ··· 648 644 649 645 if (ret) { 650 646 /* The current c failed, so we start with the previous one. */ 647 + mutex_lock(&kvm->slots_lock); 651 648 for (c--; c >= 0; c--) { 652 649 vcpu = kvm_get_vcpu(kvm, c); 653 650 vgic_unregister_redist_iodev(vcpu); 654 651 } 652 + mutex_unlock(&kvm->slots_lock); 655 653 } 656 654 657 655 return ret;
+7
virt/kvm/arm/vgic/vgic-v2.c
··· 149 149 if (irq->hw) { 150 150 val |= GICH_LR_HW; 151 151 val |= irq->hwintid << GICH_LR_PHYSID_CPUID_SHIFT; 152 + /* 153 + * Never set pending+active on a HW interrupt, as the 154 + * pending state is kept at the physical distributor 155 + * level. 156 + */ 157 + if (irq->active && irq_is_pending(irq)) 158 + val &= ~GICH_LR_PENDING_BIT; 152 159 } else { 153 160 if (irq->config == VGIC_CONFIG_LEVEL) 154 161 val |= GICH_LR_EOI;
+7
virt/kvm/arm/vgic/vgic-v3.c
··· 127 127 if (irq->hw) { 128 128 val |= ICH_LR_HW; 129 129 val |= ((u64)irq->hwintid) << ICH_LR_PHYS_ID_SHIFT; 130 + /* 131 + * Never set pending+active on a HW interrupt, as the 132 + * pending state is kept at the physical distributor 133 + * level. 134 + */ 135 + if (irq->active && irq_is_pending(irq)) 136 + val &= ~ICH_LR_PENDING_BIT; 130 137 } else { 131 138 if (irq->config == VGIC_CONFIG_LEVEL) 132 139 val |= ICH_LR_EOI;