Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

drivers/net/ethernet/ti/icssg/icssg_prueth.c

net/mac80211/chan.c
89884459a0b9 ("wifi: mac80211: fix idle calculation with multi-link")
87f5500285fb ("wifi: mac80211: simplify ieee80211_assign_link_chanctx()")
https://lore.kernel.org/all/20240422105623.7b1fbda2@canb.auug.org.au/

net/unix/garbage.c
1971d13ffa84 ("af_unix: Suppress false-positive lockdep splat for spin_lock() in __unix_gc().")
4090fa373f0e ("af_unix: Replace garbage collection algorithm.")

drivers/net/ethernet/ti/icssg/icssg_prueth.c
drivers/net/ethernet/ti/icssg/icssg_common.c
4dcd0e83ea1d ("net: ti: icssg-prueth: Fix signedness bug in prueth_init_rx_chns()")
e2dc7bfd677f ("net: ti: icssg-prueth: Move common functions into a separate file")

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
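For readers unfamiliar with how a conflicting cross-merge like the one above is resolved, here is a minimal sketch in a throwaway repository: two branches change the same file, the merge stops with a conflict, and the resolution combines both sides before concluding the merge. All paths, branch names, and identities below are invented for the demo.

```shell
# Toy illustration of a conflicting cross-merge and its manual resolution.
# Branch 'net' stands in for the fixes tree, 'main' for net-next.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > icssg_prueth.c            # stand-in for a conflicting file
git add icssg_prueth.c
git commit -qm "base"

git switch -qc net                      # the fixes side
echo "net fix" > icssg_prueth.c
git commit -qam "net: fix"

git switch -q main                      # the development side
echo "net-next rework" > icssg_prueth.c
git commit -qam "net-next: rework"

# The cross-merge conflicts, like the files listed in the commit message:
git merge net || echo "conflict, resolving by hand"

# Keep both sides, then conclude the merge with a merge commit:
printf 'net fix\nnet-next rework\n' > icssg_prueth.c
git add icssg_prueth.c
git commit -qm "Merge branch 'net'"
git log --oneline --graph
```

In the real merge above the resolution is of course semantic, not a mechanical concatenation, which is why the commit message documents which commits were in conflict.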

+4235 -2482
+15 -1
.mailmap
··· 38 38 Alexei Starovoitov <ast@kernel.org> <ast@fb.com> 39 39 Alexei Starovoitov <ast@kernel.org> <ast@plumgrid.com> 40 40 Alexey Makhalov <alexey.amakhalov@broadcom.com> <amakhalov@vmware.com> 41 + Alex Elder <elder@kernel.org> 42 + Alex Elder <elder@kernel.org> <aelder@sgi.com> 43 + Alex Elder <elder@kernel.org> <alex.elder@linaro.org> 44 + Alex Elder <elder@kernel.org> <alex.elder@linary.org> 45 + Alex Elder <elder@kernel.org> <elder@dreamhost.com> 46 + Alex Elder <elder@kernel.org> <elder@dreawmhost.com> 47 + Alex Elder <elder@kernel.org> <elder@ieee.org> 48 + Alex Elder <elder@kernel.org> <elder@inktank.com> 49 + Alex Elder <elder@kernel.org> <elder@linaro.org> 50 + Alex Elder <elder@kernel.org> <elder@newdream.net> 41 51 Alex Hung <alexhung@gmail.com> <alex.hung@canonical.com> 42 52 Alex Shi <alexs@kernel.org> <alex.shi@intel.com> 43 53 Alex Shi <alexs@kernel.org> <alex.shi@linaro.org> ··· 108 98 Ben Widawsky <bwidawsk@kernel.org> <ben.widawsky@intel.com> 109 99 Ben Widawsky <bwidawsk@kernel.org> <benjamin.widawsky@intel.com> 110 100 Benjamin Poirier <benjamin.poirier@gmail.com> <bpoirier@suse.de> 101 + Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@gmail.com> 102 + Benjamin Tissoires <bentiss@kernel.org> <benjamin.tissoires@redhat.com> 111 103 Bjorn Andersson <andersson@kernel.org> <bjorn@kryo.se> 112 104 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@linaro.org> 113 105 Bjorn Andersson <andersson@kernel.org> <bjorn.andersson@sonymobile.com> ··· 458 446 Nadav Amit <nadav.amit@gmail.com> <namit@vmware.com> 459 447 Nadav Amit <nadav.amit@gmail.com> <namit@cs.technion.ac.il> 460 448 Nadia Yvette Chambers <nyc@holomorphy.com> William Lee Irwin III <wli@holomorphy.com> 461 - Naoya Horiguchi <naoya.horiguchi@nec.com> <n-horiguchi@ah.jp.nec.com> 449 + Naoya Horiguchi <nao.horiguchi@gmail.com> <n-horiguchi@ah.jp.nec.com> 450 + Naoya Horiguchi <nao.horiguchi@gmail.com> <naoya.horiguchi@nec.com> 462 451 Nathan Chancellor <nathan@kernel.org> 
<natechancellor@gmail.com> 463 452 Neeraj Upadhyay <quic_neeraju@quicinc.com> <neeraju@codeaurora.org> 464 453 Neil Armstrong <neil.armstrong@linaro.org> <narmstrong@baylibre.com> ··· 537 524 Ricardo Ribalda <ribalda@kernel.org> <ricardo@ribalda.com> 538 525 Ricardo Ribalda <ribalda@kernel.org> Ricardo Ribalda Delgado <ribalda@kernel.org> 539 526 Ricardo Ribalda <ribalda@kernel.org> <ricardo.ribalda@gmail.com> 527 + Richard Genoud <richard.genoud@bootlin.com> <richard.genoud@gmail.com> 540 528 Richard Leitner <richard.leitner@linux.dev> <dev@g0hl1n.net> 541 529 Richard Leitner <richard.leitner@linux.dev> <me@g0hl1n.net> 542 530 Richard Leitner <richard.leitner@linux.dev> <richard.leitner@skidata.com>
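The `.mailmap` hunks above consolidate several historical addresses onto one canonical identity per developer. As a quick sketch of what such a line does (the name and addresses below are placeholders, not real kernel identities): git consults `.mailmap` when attributing commits, so tools like `git shortlog` report the canonical address even for commits made with an old one.

```shell
# Tiny demo of the .mailmap mechanism; identities are invented.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q -b main
git config user.name "A. Developer"
git config user.email "old-address@oldjob.example"

echo "change" > file.txt
git add file.txt
git commit -qm "some change"

# One mapping line, same shape as the entries added in the diff:
#   Canonical Name <canonical@addr> <historical@addr>
echo 'A. Developer <adev@kernel.example> <old-address@oldjob.example>' > .mailmap

# shortlog (and 'git log --use-mailmap') now reports the canonical address:
git shortlog -se HEAD
```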
+417 -180
Documentation/admin-guide/verify-bugs-and-bisect-regressions.rst
··· 29 29 ======================================== 30 30 31 31 *[If you are new to building or bisecting Linux, ignore this section and head 32 - over to the* ":ref:`step-by-step guide<introguide_bissbs>`" *below. It utilizes 32 + over to the* ':ref:`step-by-step guide <introguide_bissbs>`' *below. It utilizes 33 33 the same commands as this section while describing them in brief fashion. The 34 34 steps are nevertheless easy to follow and together with accompanying entries 35 35 in a reference section mention many alternatives, pitfalls, and additional ··· 38 38 **In case you want to check if a bug is present in code currently supported by 39 39 developers**, execute just the *preparations* and *segment 1*; while doing so, 40 40 consider the newest Linux kernel you regularly use to be the 'working' kernel. 41 - In the following example that's assumed to be 6.0.13, which is why the sources 42 - of 6.0 will be used to prepare the .config file. 41 + In the following example that's assumed to be 6.0, which is why its sources 42 + will be used to prepare the .config file. 43 43 44 44 **In case you face a regression**, follow the steps at least till the end of 45 45 *segment 2*. Then you can submit a preliminary report -- or continue with ··· 61 61 cd ~/linux/ 62 62 git remote add -t master stable \ 63 63 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git 64 - git checkout --detach v6.0 64 + git switch --detach v6.0 65 65 # * Hint: if you used an existing clone, ensure no stale .config is around. 66 66 make olddefconfig 67 67 # * Ensure the former command picked the .config of the 'working' kernel. 
··· 87 87 a) Checking out latest mainline code:: 88 88 89 89 cd ~/linux/ 90 - git checkout --force --detach mainline/master 90 + git switch --discard-changes --detach mainline/master 91 91 92 92 b) Build, install, and boot a kernel:: 93 93 ··· 125 125 a) Start by checking out the sources of the 'good' version:: 126 126 127 127 cd ~/linux/ 128 - git checkout --force --detach v6.0 128 + git switch --discard-changes --detach v6.0 129 129 130 130 b) Build, install, and boot a kernel as described earlier in *segment 1, 131 131 section b* -- just feel free to skip the 'du' commands, as you have a rough ··· 136 136 137 137 * **Segment 3**: perform and validate the bisection. 138 138 139 - a) In case your 'broken' version is a stable/longterm release, add the Git 140 - branch holding it:: 139 + a) Retrieve the sources for your 'bad' version:: 141 140 142 141 git remote set-branches --add stable linux-6.1.y 143 142 git fetch stable ··· 156 157 works with the newly built kernel. If it does, tell Git by executing 157 158 ``git bisect good``; if it does not, run ``git bisect bad`` instead. 158 159 159 - All three commands will make Git checkout another commit; then re-execute 160 + All three commands will make Git check out another commit; then re-execute 160 161 this step (e.g. build, install, boot, and test a kernel to then tell Git 161 162 the outcome). Do so again and again until Git shows which commit broke 162 163 things. If you run short of disk space during this process, check the 163 - "Supplementary tasks" section below. 164 + section 'Complementary tasks: cleanup during and after the process' 165 + below. 
164 166 165 167 d) Once you finished the bisection, put a few things away:: 166 168 ··· 172 172 173 173 e) Try to verify the bisection result:: 174 174 175 - git checkout --force --detach mainline/master 175 + git switch --discard-changes --detach mainline/master 176 176 git revert --no-edit cafec0cacaca0 177 + cp ~/kernel-config-working .config 178 + ./scripts/config --set-str CONFIG_LOCALVERSION '-local-cafec0cacaca0-reverted' 177 179 178 180 This is optional, as some commits are impossible to revert. But if the 179 181 second command worked flawlessly, build, install, and boot one more 180 - kernel, which should not show the regression. 182 + kernel; just this time skip the first command copying the base .config file 183 + over, as that already has been taken care of. 181 184 182 - * **Supplementary tasks**: cleanup during and after the process. 185 + * **Complementary tasks**: cleanup during and after the process. 183 186 184 187 a) To avoid running out of disk space during a bisection, you might need to 185 188 remove some kernels you built earlier. You most likely want to keep those ··· 205 202 the kernels you built earlier and later you might want to keep around for 206 203 a week or two. 207 204 205 + * **Optional task**: test a debug patch or a proposed fix later:: 206 + 207 + git fetch mainline 208 + git switch --discard-changes --detach mainline/master 209 + git apply /tmp/foobars-proposed-fix-v1.patch 210 + cp ~/kernel-config-working .config 211 + ./scripts/config --set-str CONFIG_LOCALVERSION '-local-foobars-fix-v1' 212 + 213 + Build, install, and boot a kernel as described in *segment 1, section b* -- 214 + but this time omit the first command copying the build configuration over, 215 + as that has been taken care of already. 216 + 208 217 ..
_introguide_bissbs: 209 218 210 219 Step-by-step guide on how to verify bugs and bisect regressions 211 220 =============================================================== 212 221 213 222 This guide describes how to set up your own Linux kernels for investigating bugs 214 - or regressions you intent to report. How far you want to follow the instructions 223 + or regressions you intend to report. How far you want to follow the instructions 215 224 depends on your issue: 216 225 217 226 Execute all steps till the end of *segment 1* to **verify if your kernel problem ··· 236 221 *segment 3* to **perform a bisection** for a full-fledged regression report 237 222 developers are obliged to act upon. 238 223 239 - :ref:`Preparations: set up everything to build your own kernels.<introprep_bissbs>` 224 + :ref:`Preparations: set up everything to build your own kernels <introprep_bissbs>`. 240 225 241 - :ref:`Segment 1: try to reproduce the problem with the latest codebase.<introlatestcheck_bissbs>` 226 + :ref:`Segment 1: try to reproduce the problem with the latest codebase <introlatestcheck_bissbs>`. 242 227 243 - :ref:`Segment 2: check if the kernels you build work fine.<introworkingcheck_bissbs>` 228 + :ref:`Segment 2: check if the kernels you build work fine <introworkingcheck_bissbs>`. 244 229 245 - :ref:`Segment 3: perform a bisection and validate the result.<introbisect_bissbs>` 230 + :ref:`Segment 3: perform a bisection and validate the result <introbisect_bissbs>`. 246 231 247 - :ref:`Supplementary tasks: cleanup during and after following this guide.<introclosure_bissbs>` 232 + :ref:`Complementary tasks: cleanup during and after following this guide <introclosure_bissbs>`. 233 + 234 + :ref:`Optional tasks: test reverts, patches, or later versions <introoptional_bissbs>`. 
248 235 249 236 The steps in each segment illustrate the important aspects of the process, while 250 237 a comprehensive reference section holds additional details for almost all of the ··· 257 240 For further details on how to report Linux kernel issues or regressions check 258 241 out Documentation/admin-guide/reporting-issues.rst, which works in conjunction 259 242 with this document. It among others explains why you need to verify bugs with 260 - the latest 'mainline' kernel, even if you face a problem with a kernel from a 261 - 'stable/longterm' series; for users facing a regression it also explains that 262 - sending a preliminary report after finishing segment 2 might be wise, as the 263 - regression and its culprit might be known already. For further details on 264 - what actually qualifies as a regression check out 265 - Documentation/admin-guide/reporting-regressions.rst. 243 + the latest 'mainline' kernel (e.g. versions like 6.0, 6.1-rc1, or 6.1-rc6), 244 + even if you face a problem with a kernel from a 'stable/longterm' series 245 + (say 6.0.13). 246 + 247 + For users facing a regression that document also explains why sending a 248 + preliminary report after segment 2 might be wise, as the regression and its 249 + culprit might be known already. For further details on what actually qualifies 250 + as a regression check out Documentation/admin-guide/reporting-regressions.rst. 251 + 252 + If you run into any problems while following this guide or have ideas how to 253 + improve it, :ref:`please let the kernel developers know <submit_improvements>`. 266 254 267 255 .. _introprep_bissbs: 268 256 269 257 Preparations: set up everything to build your own kernels 270 258 --------------------------------------------------------- 271 259 260 + The following steps lay the groundwork for all further tasks. 
261 + 262 + Note: the instructions assume you are building and testing on the same 263 + machine; if you want to compile the kernel on another system, check 264 + :ref:`Build kernels on a different machine <buildhost_bis>` below. 265 + 272 266 .. _backup_bissbs: 273 267 274 268 * Create a fresh backup and put system repair and restore tools at hand, just 275 269 to be prepared for the unlikely case of something going sideways. 276 270 277 - [:ref:`details<backup_bisref>`] 271 + [:ref:`details <backup_bisref>`] 278 272 279 273 .. _vanilla_bissbs: 280 274 ··· 293 265 builds them automatically. That includes but is not limited to DKMS, openZFS, 294 266 VirtualBox, and Nvidia's graphics drivers (including the GPLed kernel module). 295 267 296 - [:ref:`details<vanilla_bisref>`] 268 + [:ref:`details <vanilla_bisref>`] 297 269 298 270 .. _secureboot_bissbs: 299 271 ··· 304 276 their restrictions through a process initiated by 305 277 ``mokutil --disable-validation``. 306 278 307 - [:ref:`details<secureboot_bisref>`] 279 + [:ref:`details <secureboot_bisref>`] 308 280 309 281 .. _rangecheck_bissbs: 310 282 311 283 * Determine the kernel versions considered 'good' and 'bad' throughout this 312 - guide. 284 + guide: 313 285 314 - Do you follow this guide to verify if a bug is present in the code developers 315 - care for? Then consider the mainline release your 'working' kernel (the newest 316 - one you regularly use) is based on to be the 'good' version; if your 'working' 317 - kernel for example is 6.0.11, then your 'good' kernel is 6.0. 286 + * Do you follow this guide to verify if a bug is present in the code the 287 + primary developers care for? Then consider the version of the newest kernel 288 + you regularly use currently as 'good' (e.g. 6.0, 6.0.13, or 6.1-rc2). 318 289 319 - In case you face a regression, it depends on the version range where the 320 - regression was introduced: 290 + * Do you face a regression, e.g. 
something broke or works worse after 291 + switching to a newer kernel version? In that case it depends on the version 292 + range during which the problem appeared: 321 293 322 - * Something which used to work in Linux 6.0 broke when switching to Linux 323 - 6.1-rc1? Then henceforth regard 6.0 as the last known 'good' version 324 - and 6.1-rc1 as the first 'bad' one. 294 + * Something regressed when updating from a stable/longterm release 295 + (say 6.0.13) to a newer mainline series (like 6.1-rc7 or 6.1) or a 296 + stable/longterm version based on one (say 6.1.5)? Then consider the 297 + mainline release your working kernel is based on to be the 'good' 298 + version (e.g. 6.0) and the first version to be broken as the 'bad' one 299 + (e.g. 6.1-rc7, 6.1, or 6.1.5). Note, at this point it is merely assumed 300 + that 6.0 is fine; this hypothesis will be checked in segment 2. 325 301 326 - * Some function stopped working when updating from 6.0.11 to 6.1.4? Then for 327 - the time being consider 6.0 as the last 'good' version and 6.1.4 as 328 - the 'bad' one. Note, at this point it is merely assumed that 6.0 is fine; 329 - this assumption will be checked in segment 2. 302 + * Something regressed when switching from one mainline version (say 6.0) to 303 + a later one (like 6.1-rc1) or a stable/longterm release based on it 304 + (say 6.1.5)? Then regard the last working version (e.g. 6.0) as 'good' and 305 + the first broken (e.g. 6.1-rc1 or 6.1.5) as 'bad'. 330 306 331 - * A feature you used in 6.0.11 does not work at all or worse in 6.1.13? In 332 - that case you want to bisect within a stable/longterm series: consider 333 - 6.0.11 as the last known 'good' version and 6.0.13 as the first 'bad' 334 - one. Note, in this case you still want to compile and test a mainline kernel 335 - as explained in segment 1: the outcome will determine if you need to report 336 - your issue to the regular developers or the stable team. 
307 + * Something regressed when updating within a stable/longterm series (say 308 + from 6.0.13 to 6.0.15)? Then consider those versions as 'good' and 'bad' 309 + (e.g. 6.0.13 and 6.0.15), as you need to bisect within that series. 337 310 338 311 *Note, do not confuse 'good' version with 'working' kernel; the latter term 339 312 throughout this guide will refer to the last kernel that has been working 340 313 fine.* 341 314 342 - [:ref:`details<rangecheck_bisref>`] 315 + [:ref:`details <rangecheck_bisref>`] 343 316 344 317 .. _bootworking_bissbs: 345 318 346 319 * Boot into the 'working' kernel and briefly use the apparently broken feature. 347 320 348 - [:ref:`details<bootworking_bisref>`] 321 + [:ref:`details <bootworking_bisref>`] 349 322 350 323 .. _diskspace_bissbs: 351 324 ··· 356 327 debug symbols: both explain approaches reducing the amount of space, which 357 328 should allow you to master these tasks with about 4 Gigabytes free space. 358 329 359 - [:ref:`details<diskspace_bisref>`] 330 + [:ref:`details <diskspace_bisref>`] 360 331 361 332 .. _buildrequires_bissbs: 362 333 ··· 366 337 reference section shows how to quickly install those on various popular Linux 367 338 distributions. 368 339 369 - [:ref:`details<buildrequires_bisref>`] 340 + [:ref:`details <buildrequires_bisref>`] 370 341 371 342 .. _sources_bissbs: 372 343 ··· 389 360 git remote add -t master stable \ 390 361 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git 391 362 392 - [:ref:`details<sources_bisref>`] 363 + [:ref:`details <sources_bisref>`] 364 + 365 + .. _stablesources_bissbs: 366 + 367 + * Is one of the versions you earlier established as 'good' or 'bad' a stable or 368 + longterm release (say 6.1.5)? Then download the code for the series it belongs 369 + to ('linux-6.1.y' in this example):: 370 + 371 + git remote set-branches --add stable linux-6.1.y 372 + git fetch stable 393 373 394 374 .. 
_oldconfig_bissbs: 395 375 396 376 * Start preparing a kernel build configuration (the '.config' file). 397 377 398 378 Before doing so, ensure you are still running the 'working' kernel an earlier 399 - step told you to boot; if you are unsure, check the current kernel release 379 + step told you to boot; if you are unsure, check the current kernelrelease 400 380 identifier using ``uname -r``. 401 381 402 382 Afterwards check out the source code for the version earlier established as ··· 413 375 the version number in this and all later Git commands needs to be prefixed 414 376 with a 'v':: 415 377 416 - git checkout --detach v6.0 378 + git switch --discard-changes --detach v6.0 417 379 418 380 Now create a build configuration file:: 419 381 ··· 436 398 'make olddefconfig' again and check if it now picked up the right config file 437 399 as base. 438 400 439 - [:ref:`details<oldconfig_bisref>`] 401 + [:ref:`details <oldconfig_bisref>`] 440 402 441 403 .. _localmodconfig_bissbs: 442 404 ··· 470 432 spending much effort on, as long as it boots and allows to properly test the 471 433 feature that causes trouble. 472 434 473 - [:ref:`details<localmodconfig_bisref>`] 435 + [:ref:`details <localmodconfig_bisref>`] 474 436 475 437 .. _tagging_bissbs: 476 438 ··· 480 442 ./scripts/config --set-str CONFIG_LOCALVERSION '-local' 481 443 ./scripts/config -e CONFIG_LOCALVERSION_AUTO 482 444 483 - [:ref:`details<tagging_bisref>`] 445 + [:ref:`details <tagging_bisref>`] 484 446 485 447 .. _debugsymbols_bissbs: 486 448 ··· 499 461 ./scripts/config -d DEBUG_INFO -d DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT \ 500 462 -d DEBUG_INFO_DWARF4 -d DEBUG_INFO_DWARF5 -e CONFIG_DEBUG_INFO_NONE 501 463 502 - [:ref:`details<debugsymbols_bisref>`] 464 + [:ref:`details <debugsymbols_bisref>`] 503 465 504 466 .. _configmods_bissbs: 505 467 ··· 509 471 * Are you running Debian? Then you want to avoid known problems by performing 510 472 additional adjustments explained in the reference section. 
511 473 512 - [:ref:`details<configmods_distros_bisref>`]. 474 + [:ref:`details <configmods_distros_bisref>`]. 513 475 514 476 * If you want to influence other aspects of the configuration, do so now using 515 477 your preferred tool. Note, to use make targets like 'menuconfig' or 516 478 'nconfig', you will need to install the development files of ncurses; for 517 479 'xconfig' you likewise need the Qt5 or Qt6 headers. 518 480 519 - [:ref:`details<configmods_individual_bisref>`]. 481 + [:ref:`details <configmods_individual_bisref>`]. 520 482 521 483 .. _saveconfig_bissbs: 522 484 ··· 526 488 make olddefconfig 527 489 cp .config ~/kernel-config-working 528 490 529 - [:ref:`details<saveconfig_bisref>`] 491 + [:ref:`details <saveconfig_bisref>`] 530 492 531 493 .. _introlatestcheck_bissbs: 532 494 ··· 536 498 The following steps verify if the problem occurs with the code currently 537 499 supported by developers. In case you face a regression, it also checks that the 538 500 problem is not caused by some .config change, as reporting the issue then would 539 - be a waste of time. [:ref:`details<introlatestcheck_bisref>`] 501 + be a waste of time. [:ref:`details <introlatestcheck_bisref>`] 540 502 541 503 .. _checkoutmaster_bissbs: 542 504 543 - * Check out the latest Linux codebase:: 505 + * Check out the latest Linux codebase. 544 506 545 - cd ~/linux/ 546 - git checkout --force --detach mainline/master 507 + * Are your 'good' and 'bad' versions from the same stable or longterm series? 508 + Then check the `front page of kernel.org <https://kernel.org/>`_: if it 509 + lists a release from that series without an '[EOL]' tag, check out the 510 + series' latest version ('linux-6.1.y' in the following example):: 547 511 548 - [:ref:`details<checkoutmaster_bisref>`] 512 + cd ~/linux/ 513 + git switch --discard-changes --detach stable/linux-6.1.y 514 + 515 + Your series is unsupported if it is not listed or carries an 'end of life' 516 + tag.
In that case you might want to check if a successor series (say 517 + linux-6.2.y) or mainline (see next point) fix the bug. 518 + 519 + * In all other cases, run:: 520 + 521 + cd ~/linux/ 522 + git switch --discard-changes --detach mainline/master 523 + 524 + [:ref:`details <checkoutmaster_bisref>`] 549 525 550 526 .. _build_bissbs: 551 527 ··· 574 522 reference section for alternatives, which obviously will require other 575 523 steps to install as well. 576 524 577 - [:ref:`details<build_bisref>`] 525 + [:ref:`details <build_bisref>`] 578 526 579 527 .. _install_bissbs: 580 528 ··· 607 555 down: if you will build more kernels as described in segment 2 and 3, you will 608 556 have to perform those again after executing ``command -v installkernel [...]``. 609 557 610 - [:ref:`details<install_bisref>`] 558 + [:ref:`details <install_bisref>`] 611 559 612 560 .. _storagespace_bissbs: 613 561 ··· 620 568 Write down or remember those two values for later: they enable you to prevent 621 569 running out of disk space accidentally during a bisection. 622 570 623 - [:ref:`details<storagespace_bisref>`] 571 + [:ref:`details <storagespace_bisref>`] 624 572 625 573 .. _kernelrelease_bissbs: 626 574 ··· 647 595 If that command does not return '0', check the reference section, as the cause 648 596 for this might interfere with your testing. 649 597 650 - [:ref:`details<tainted_bisref>`] 598 + [:ref:`details <tainted_bisref>`] 651 599 652 600 .. _recheckbroken_bissbs: 653 601 ··· 655 603 out the instructions in the reference section to ensure nothing went sideways 656 604 during your tests. 657 605 658 - [:ref:`details<recheckbroken_bisref>`] 606 + [:ref:`details <recheckbroken_bisref>`] 659 607 660 608 .. _recheckstablebroken_bissbs: 661 609 662 - * Are you facing a problem within a stable/longterm series, but failed to 663 - reproduce it with the mainline kernel you just built? 
One that according to 664 - the `front page of kernel.org <https://kernel.org/>`_ is still supported? Then 665 - check if the latest codebase for the particular series might already fix the 666 - problem. To do so, add the stable series Git branch for your 'good' kernel 667 - (again, this here is assumed to be 6.0) and check out the latest version:: 610 + * Did you just build a stable or longterm kernel? And were you able to reproduce 611 + the regression with it? Then you should test the latest mainline codebase as 612 + well, because the result determines which developers the bug must be submitted 613 + to. 614 + 615 + To prepare that test, check out current mainline:: 668 616 669 617 cd ~/linux/ 670 - git remote set-branches --add stable linux-6.0.y 671 - git fetch stable 672 - git checkout --force --detach linux-6.0.y 618 + git switch --discard-changes --detach mainline/master 673 619 674 620 Now use the checked out code to build and install another kernel using the 675 621 commands the earlier steps already described in more detail:: ··· 689 639 uname -r 690 640 cat /proc/sys/kernel/tainted 691 641 692 - Now verify if this kernel is showing the problem. 642 + Now verify if this kernel is showing the problem. If it does, then you need 643 + to report the bug to the primary developers; if it does not, report it to the 644 + stable team. See Documentation/admin-guide/reporting-issues.rst for details. 693 645 694 - [:ref:`details<recheckstablebroken_bisref>`] 646 + [:ref:`details <recheckstablebroken_bisref>`] 695 647 696 648 Do you follow this guide to verify if a problem is present in the code 697 649 currently supported by Linux kernel developers? Then you are done at this 698 650 point. If you later want to remove the kernel you just built, check out 699 651 :ref:`Complementary tasks: cleanup during and after following this guide <introclosure_bissbs>`.
700 652 701 653 In case you face a regression, move on and execute at least the next segment 702 654 as well. ··· 710 658 711 659 In case of a regression, you now want to ensure the trimmed configuration file 712 660 you created earlier works as expected; a bisection with the .config file 713 - otherwise would be a waste of time. [:ref:`details<introworkingcheck_bisref>`] 661 + otherwise would be a waste of time. [:ref:`details <introworkingcheck_bisref>`] 714 662 715 663 .. _recheckworking_bissbs: 716 664 ··· 721 669 'good' (once again assumed to be 6.0 here):: 722 670 723 671 cd ~/linux/ 724 - git checkout --detach v6.0 672 + git switch --discard-changes --detach v6.0 725 673 726 674 Now use the checked out code to configure, build, and install another kernel 727 675 using the commands the previous subsection explained in more detail:: ··· 745 693 Now check if this kernel works as expected; if not, consult the reference 746 694 section for further instructions. 747 695 748 - [:ref:`details<recheckworking_bisref>`] 696 + [:ref:`details <recheckworking_bisref>`] 749 697 750 698 .. _introbisect_bissbs: 751 699 ··· 755 703 With all the preparations and precaution builds taken care of, you are now ready 756 704 to begin the bisection. This will make you build quite a few kernels -- usually 757 705 about 15 in case you encountered a regression when updating to a newer series 758 - (say from 6.0.11 to 6.1.3). But do not worry, due to the trimmed build 706 + (say from 6.0.13 to 6.1.5). But do not worry, due to the trimmed build 759 707 configuration created earlier this works a lot faster than many people assume: 760 708 overall on average it will often just take about 10 to 15 minutes to compile 761 709 each kernel on commodity x86 machines. 
762 - 763 - * In case your 'bad' version is a stable/longterm release (say 6.1.5), add its 764 - stable branch, unless you already did so earlier:: 765 - 766 - cd ~/linux/ 767 - git remote set-branches --add stable linux-6.1.y 768 - git fetch stable 769 710 770 711 .. _bisectstart_bissbs: 771 712 ··· 770 725 git bisect good v6.0 771 726 git bisect bad v6.1.5 772 727 773 - [:ref:`details<bisectstart_bisref>`] 728 + [:ref:`details <bisectstart_bisref>`] 774 729 775 730 .. _bisectbuild_bissbs: 776 731 ··· 790 745 If compilation fails for some reason, run ``git bisect skip`` and restart 791 746 executing the stack of commands from the beginning. 792 747 793 - In case you skipped the "test latest codebase" step in the guide, check its 748 + In case you skipped the 'test latest codebase' step in the guide, check its 794 749 description as for why the 'df [...]' and 'make -s kernelrelease [...]' 795 750 commands are here. 796 751 ··· 799 754 totally normal to see release identifiers like '6.0-rc1-local-gcafec0cacaca0' 800 755 if you bisect between versions 6.1 and 6.2 for example. 801 756 802 - [:ref:`details<bisectbuild_bisref>`] 757 + [:ref:`details <bisectbuild_bisref>`] 803 758 804 759 .. _bisecttest_bissbs: 805 760 ··· 839 794 might need to scroll up to see the message mentioning the culprit; 840 795 alternatively, run ``git bisect log > ~/bisection-log``. 841 796 842 - [:ref:`details<bisecttest_bisref>`] 797 + [:ref:`details <bisecttest_bisref>`] 843 798 844 799 .. _bisectlog_bissbs: 845 800 ··· 851 806 cp .config ~/bisection-config-culprit 852 807 git bisect reset 853 808 854 - [:ref:`details<bisectlog_bisref>`] 809 + [:ref:`details <bisectlog_bisref>`] 855 810 856 811 .. _revert_bissbs: 857 812 ··· 868 823 Begin by checking out the latest codebase depending on the range you bisected: 869 824 870 825 * Did you face a regression within a stable/longterm series (say between 871 - 6.0.11 and 6.0.13) that does not happen in mainline? 
Then check out the 826 + 6.0.13 and 6.0.15) that does not happen in mainline? Then check out the 872 827 latest codebase for the affected series like this:: 873 828 874 829 git fetch stable 875 - git checkout --force --detach linux-6.0.y 830 + git switch --discard-changes --detach linux-6.0.y 876 831 877 832 * In all other cases check out latest mainline:: 878 833 879 834 git fetch mainline 880 - git checkout --force --detach mainline/master 835 + git switch --discard-changes --detach mainline/master 881 836 882 837 If you bisected a regression within a stable/longterm series that also 883 838 happens in mainline, there is one more thing to do: look up the mainline ··· 891 846 892 847 git revert --no-edit cafec0cacaca0 893 848 894 - If that fails, give up trying and move on to the next step. But if it works, 895 - build a kernel again using the familiar command sequence:: 849 + If that fails, give up trying and move on to the next step; if it works, 850 + adjust the tag to facilitate the identification and prevent accidentally 851 + overwriting another kernel:: 896 852 897 853 cp ~/kernel-config-working .config 854 + ./scripts/config --set-str CONFIG_LOCALVERSION '-local-cafec0cacaca0-reverted' 855 + 856 + Build a kernel using the familiar command sequence, just without copying the 857 + the base .config over:: 858 + 898 859 make olddefconfig && 899 - make -j $(nproc --all) && 860 + make -j $(nproc --all) 900 861 # * Check if the free space suffices holding another kernel: 901 862 df -h /boot/ /lib/modules/ 902 863 sudo make modules_install 903 864 command -v installkernel && sudo make install 904 - Make -s kernelrelease | tee -a ~/kernels-built 865 + make -s kernelrelease | tee -a ~/kernels-built 905 866 reboot 906 867 907 - Now check one last time if the feature that made you perform a bisection work 908 - with that kernel. 
868 + Now check one last time if the feature that made you perform a bisection works 869 + with that kernel: if everything went well, it should not show the regression. 909 870 910 - [:ref:`details<revert_bisref>`] 871 + [:ref:`details <revert_bisref>`] 911 872 912 873 .. _introclosure_bissbs: 913 874 914 - Supplementary tasks: cleanup during and after the bisection 875 + Complementary tasks: cleanup during and after the bisection 915 876 ----------------------------------------------------------- 916 877 917 878 During and after following this guide you might want or need to remove some of ··· 954 903 kernel image and related files behind; in that case remove them as described 955 904 in the reference section. 956 905 957 - [:ref:`details<makeroom_bisref>`] 906 + [:ref:`details <makeroom_bisref>`] 958 907 959 908 .. _finishingtouch_bissbs: 960 909 ··· 977 926 the version considered 'good', and the last three or four you compiled 978 927 during the actual bisection process. 979 928 980 - [:ref:`details<finishingtouch_bisref>`] 929 + [:ref:`details <finishingtouch_bisref>`] 930 + 931 + .. _introoptional_bissbs: 932 + 933 + Optional: test reverts, patches, or later versions 934 + -------------------------------------------------- 935 + 936 + While or after reporting a bug, you might want or potentially will be asked to 937 + test reverts, debug patches, proposed fixes, or other versions. In that case 938 + follow these instructions. 939 + 940 + * Update your Git clone and check out the latest code. 
941 +
 942 + * In case you want to test mainline, fetch its latest changes before checking
 943 + its code out::
 944 +
 945 + git fetch mainline
 946 + git switch --discard-changes --detach mainline/master
 947 +
 948 + * In case you want to test a stable or longterm kernel, first add the branch
 949 + holding the series you are interested in (6.2 in the example), unless you
 950 + already did so earlier::
 951 +
 952 + git remote set-branches --add stable linux-6.2.y
 953 +
 954 + Then fetch the latest changes and check out the latest version from the
 955 + series::
 956 +
 957 + git fetch stable
 958 + git switch --discard-changes --detach stable/linux-6.2.y
 959 +
 960 + * Copy your kernel build configuration over::
 961 +
 962 + cp ~/kernel-config-working .config
 963 +
 964 + * Your next step depends on what you want to do:
 965 +
 966 + * In case you just want to test the latest codebase, head to the next step;
 967 + you are already all set.
 968 +
 969 + * In case you want to test if a revert fixes an issue, revert one or multiple
 970 + changes by specifying their commit ids::
 971 +
 972 + git revert --no-edit cafec0cacaca0
 973 +
 974 + Now give that kernel a special tag to facilitate its identification and
 975 + prevent accidentally overwriting another kernel::
 976 +
 977 + ./scripts/config --set-str CONFIG_LOCALVERSION '-local-cafec0cacaca0-reverted'
 978 +
 979 + * In case you want to test a patch, store the patch in a file like
 980 + '/tmp/foobars-proposed-fix-v1.patch' and apply it like this::
 981 +
 982 + git apply /tmp/foobars-proposed-fix-v1.patch
 983 +
 984 + In case of multiple patches, repeat this step with the others.
985 +
 986 + Now give that kernel a special tag to facilitate its identification and
 987 + prevent accidentally overwriting another kernel::
 988 +
 989 + ./scripts/config --set-str CONFIG_LOCALVERSION '-local-foobars-fix-v1'
 990 +
 991 + * Build a kernel using the familiar commands, just without copying the kernel
 992 + build configuration over, as that has been taken care of already::
 993 +
 994 + make olddefconfig &&
 995 + make -j $(nproc --all)
 996 + # * Check if the free space suffices holding another kernel:
 997 + df -h /boot/ /lib/modules/
 998 + sudo make modules_install
 999 + command -v installkernel && sudo make install
 1000 + make -s kernelrelease | tee -a ~/kernels-built
 1001 + reboot
 1002 +
 1003 + * Now verify you booted the newly built kernel and check it.
 1004 +
 1005 + [:ref:`details <introoptional_bisref>`]
 981 1006
 982 1007 .. _submit_improvements:
 983 1008
 984 - This concludes the step-by-step guide.
 1009 + Conclusion
 1010 + ----------
 1011 +
 1012 + You have reached the end of the step-by-step guide.
 985 1013
 986 1014 Did you run into trouble following any of the above steps not cleared up by the
 987 1015 reference section below? Did you spot errors? Or do you have ideas how to
 988 - improve the guide? Then please take a moment and let the maintainer of this
 1016 + improve the guide?
 1017 +
 1018 + If any of that applies, please take a moment and let the maintainer of this
 989 1019 document know by email (Thorsten Leemhuis <linux@leemhuis.info>), ideally while
 990 1020 CCing the Linux docs mailing list (linux-doc@vger.kernel.org). Such feedback is
 991 - vital to improve this document further, which is in everybody's interest, as it
 1021 + vital to improve this text further, which is in everybody's interest, as it
 992 1022 will enable more people to master the task described here -- and hopefully also
 993 1023 improve similar guides inspired by this one.
994 1024 ··· 1080 948 This section holds additional information for almost all the items in the above 1081 949 step-by-step guide. 1082 950 951 + Preparations for building your own kernels 952 + ------------------------------------------ 953 + 954 + *The steps in this section lay the groundwork for all further tests.* 955 + [:ref:`... <introprep_bissbs>`] 956 + 957 + The steps in all later sections of this guide depend on those described here. 958 + 959 + [:ref:`back to step-by-step guide <introprep_bissbs>`]. 960 + 1083 961 .. _backup_bisref: 1084 962 1085 963 Prepare for emergencies 1086 - ----------------------- 964 + ~~~~~~~~~~~~~~~~~~~~~~~ 1087 965 1088 966 *Create a fresh backup and put system repair and restore tools at hand.* 1089 967 [:ref:`... <backup_bissbs>`] ··· 1108 966 .. _vanilla_bisref: 1109 967 1110 968 Remove anything related to externally maintained kernel modules 1111 - --------------------------------------------------------------- 969 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1112 970 1113 971 *Remove all software that depends on externally developed kernel drivers or 1114 972 builds them automatically.* [:ref:`...<vanilla_bissbs>`] ··· 1126 984 .. _secureboot_bisref: 1127 985 1128 986 Deal with techniques like Secure Boot 1129 - ------------------------------------- 987 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1130 988 1131 989 *On platforms with 'Secure Boot' or similar techniques, prepare everything to 1132 990 ensure the system will permit your self-compiled kernel to boot later.* ··· 1163 1021 .. _bootworking_bisref: 1164 1022 1165 1023 Boot the last kernel that was working 1166 - ------------------------------------- 1024 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1167 1025 1168 1026 *Boot into the last working kernel and briefly recheck if the feature that 1169 1027 regressed really works.* [:ref:`...<bootworking_bissbs>`] ··· 1176 1034 .. 
_diskspace_bisref: 1177 1035 1178 1036 Space requirements 1179 - ------------------ 1037 + ~~~~~~~~~~~~~~~~~~ 1180 1038 1181 1039 *Ensure to have enough free space for building Linux.* 1182 1040 [:ref:`... <diskspace_bissbs>`] ··· 1194 1052 .. _rangecheck_bisref: 1195 1053 1196 1054 Bisection range 1197 - --------------- 1055 + ~~~~~~~~~~~~~~~ 1198 1056 1199 1057 *Determine the kernel versions considered 'good' and 'bad' throughout this 1200 1058 guide.* [:ref:`...<rangecheck_bissbs>`] 1201 1059 1202 1060 Establishing the range of commits to be checked is mostly straightforward, 1203 1061 except when a regression occurred when switching from a release of one stable 1204 - series to a release of a later series (e.g. from 6.0.11 to 6.1.4). In that case 1062 + series to a release of a later series (e.g. from 6.0.13 to 6.1.5). In that case 1205 1063 Git will need some hand holding, as there is no straight line of descent. 1206 1064 1207 1065 That's because with the release of 6.0 mainline carried on to 6.1 while the 1208 1066 stable series 6.0.y branched to the side. It's therefore theoretically possible 1209 - that the issue you face with 6.1.4 only worked in 6.0.11, as it was fixed by a 1067 + that the issue you face with 6.1.5 only worked in 6.0.13, as it was fixed by a 1210 1068 commit that went into one of the 6.0.y releases, but never hit mainline or the 1211 1069 6.1.y series. Thankfully that normally should not happen due to the way the 1212 1070 stable/longterm maintainers maintain the code. It's thus pretty safe to assume 1213 1071 6.0 as a 'good' kernel. That assumption will be tested anyway, as that kernel 1214 1072 will be built and tested in the segment '2' of this guide; Git would force you 1215 - to do this as well, if you tried bisecting between 6.0.11 and 6.1.13. 1073 + to do this as well, if you tried bisecting between 6.0.13 and 6.1.15. 1216 1074 1217 1075 [:ref:`back to step-by-step guide <rangecheck_bissbs>`] 1218 1076 1219 1077 .. 
_buildrequires_bisref:
 1220 1078
 1221 1079 Install build requirements
 1222 - --------------------------
 1080 + ~~~~~~~~~~~~~~~~~~~~~~~~~~
 1223 1081
 1224 1082 *Install all software required to build a Linux kernel.*
 1225 1083 [:ref:`...<buildrequires_bissbs>`]
 ··· 1259 1117 for example might want to skip installing the development headers for ncurses,
 1260 1118 which you will only need in case you later might want to adjust the kernel build
 1261 1119 configuration using the make targets 'menuconfig' or 'nconfig'; likewise omit
 1262 - the headers of Qt6 is you do not plan to adjust the .config using 'xconfig'.
 1120 + the headers of Qt6 if you do not plan to adjust the .config using 'xconfig'.
 1263 1121
 1264 1122 You furthermore might need additional libraries and their development headers
 1265 1123 for tasks not covered in this guide -- for example when building utilities from
 ··· 1270 1128
 1271 1129 .. _sources_bisref:
 1272 1130
 1273 1131 Download the sources using Git
 1274 - ------------------------------
 1132 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1275 1133
 1276 1134 *Retrieve the Linux mainline sources.*
 1277 1135 [:ref:`...<sources_bissbs>`]
 ··· 1290 1148
 1291 1149 .. _sources_bundle_bisref:
 1292 1150
 1293 1151 Downloading Linux mainline sources using a bundle
 1294 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1152 + """""""""""""""""""""""""""""""""""""""""""""""""
 1295 1153
 1296 1154 Use the following commands to retrieve the Linux mainline sources using a
 1297 1155 bundle::
 ··· 1326 1184 https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
 1327 1185
 1328 1186 Now deepen your clone's history to the second predecessor of the mainline
 1329 - release of your 'good' version. In case the latter are 6.0 or 6.0.11, 5.19 would
 1187 + release of your 'good' version. In case the latter is 6.0 or 6.0.13, 5.19 would
 1330 1188 be the first predecessor and 5.18 the second -- hence deepen the history up to
 1331 1189 that version::
 1332 1190
 ··· 1361 1219
 ..
_oldconfig_bisref:
 1362 1220
 1363 1221 Start defining the build configuration for your kernel
 1364 - ------------------------------------------------------
 1222 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1365 1223
 1366 1224 *Start preparing a kernel build configuration (the '.config' file).*
 1367 1225 [:ref:`... <oldconfig_bissbs>`]
 ··· 1421 1279 .. _localmodconfig_bisref:
 1422 1280
 1423 1281 Trim the build configuration for your kernel
 1424 - --------------------------------------------
 1282 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1425 1283
 1426 1284 *Disable any kernel modules apparently superfluous for your setup.*
 1427 1285 [:ref:`... <localmodconfig_bissbs>`]
 ··· 1470 1328 .. _tagging_bisref:
 1471 1329
 1472 1330 Tag the kernels about to be built
 1473 - ---------------------------------
 1331 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1474 1332
 1475 1333 *Ensure all the kernels you will build are clearly identifiable using a
 1476 1334 special tag and a unique version identifier.* [:ref:`... <tagging_bissbs>`]
 ··· 1486 1344 .. _debugsymbols_bisref:
 1487 1345
 1488 1346 Decide to enable or disable debug symbols
 1489 - -----------------------------------------
 1347 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1490 1348
 1491 1349 *Decide how to handle debug symbols.* [:ref:`... <debugsymbols_bissbs>`]
 1492 1350
 ··· 1515 1373 .. _configmods_bisref:
 1516 1374
 1517 1375 Adjust build configuration
 1518 - --------------------------
 1376 + ~~~~~~~~~~~~~~~~~~~~~~~~~~
 1519 1377
 1520 1378 *Check if you may want or need to adjust some other kernel configuration
 1521 1379 options:*
 ··· 1526 1384 .. _configmods_distros_bisref:
 1527 1385
 1528 1386 Distro-specific adjustments
 1529 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 1387 + """""""""""""""""""""""""""
 1530 1388
 1531 1389 *Are you running* [:ref:`... <configmods_bissbs>`]
 1532 1390
 ··· 1551 1409
 ..
_configmods_individual_bisref: 1552 1410 1553 1411 Individual adjustments 1554 - ~~~~~~~~~~~~~~~~~~~~~~ 1412 + """""""""""""""""""""" 1555 1413 1556 1414 *If you want to influence the other aspects of the configuration, do so 1557 1415 now.* [:ref:`... <configmods_bissbs>`] ··· 1568 1426 .. _saveconfig_bisref: 1569 1427 1570 1428 Put the .config file aside 1571 - -------------------------- 1429 + ~~~~~~~~~~~~~~~~~~~~~~~~~~ 1572 1430 1573 1431 *Reprocess the .config after the latest changes and store it in a safe place.* 1574 1432 [:ref:`... <saveconfig_bissbs>`] 1575 1433 1576 1434 Put the .config you prepared aside, as you want to copy it back to the build 1577 - directory every time during this guide before you start building another 1435 + directory every time during this guide before you start building another 1578 1436 kernel. That's because going back and forth between different versions can alter 1579 1437 .config files in odd ways; those occasionally cause side effects that could 1580 1438 confuse testing or in some cases render the result of your bisection ··· 1584 1442 1585 1443 .. _introlatestcheck_bisref: 1586 1444 1587 - Try to reproduce the regression 1588 - ----------------------------------------- 1445 + Try to reproduce the problem with the latest codebase 1446 + ----------------------------------------------------- 1589 1447 1590 1448 *Verify the regression is not caused by some .config change and check if it 1591 1449 still occurs with the latest codebase.* [:ref:`... <introlatestcheck_bissbs>`] ··· 1632 1490 1633 1491 Your report might be ignored if you send it to the wrong party -- and even 1634 1492 when you get a reply there is a decent chance that developers tell you to 1635 - evaluate which of the two cases it is before they take a closer look. 1493 + evaluate which of the two cases it is before they take a closer look. 1636 1494 1637 1495 [:ref:`back to step-by-step guide <introlatestcheck_bissbs>`] 1638 1496 1639 1497 .. 
_checkoutmaster_bisref: 1640 1498 1641 1499 Check out the latest Linux codebase 1642 - ----------------------------------- 1500 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1643 1501 1644 1502 *Check out the latest Linux codebase.* 1645 - [:ref:`... <introlatestcheck_bissbs>`] 1503 + [:ref:`... <checkoutmaster_bissbs>`] 1646 1504 1647 1505 In case you later want to recheck if an ever newer codebase might fix the 1648 1506 problem, remember to run that ``git fetch --shallow-exclude [...]`` command 1649 1507 again mentioned earlier to update your local Git repository. 1650 1508 1651 - [:ref:`back to step-by-step guide <introlatestcheck_bissbs>`] 1509 + [:ref:`back to step-by-step guide <checkoutmaster_bissbs>`] 1652 1510 1653 1511 .. _build_bisref: 1654 1512 1655 1513 Build your kernel 1656 - ----------------- 1514 + ~~~~~~~~~~~~~~~~~ 1657 1515 1658 1516 *Build the image and the modules of your first kernel using the config file 1659 1517 you prepared.* [:ref:`... <build_bissbs>`] ··· 1663 1521 deb, rpm or tar file. 1664 1522 1665 1523 Dealing with build errors 1666 - ~~~~~~~~~~~~~~~~~~~~~~~~~ 1524 + """"""""""""""""""""""""" 1667 1525 1668 1526 When a build error occurs, it might be caused by some aspect of your machine's 1669 1527 setup that often can be fixed quickly; other times though the problem lies in ··· 1694 1552 1695 1553 In the end, most issues you run into have likely been encountered and 1696 1554 reported by others already. That includes issues where the cause is not your 1697 - system, but lies in the code. If you run into one of those, you might thus find a 1698 - solution (e.g. a patch) or workaround for your issue, too. 1555 + system, but lies in the code. If you run into one of those, you might thus find 1556 + a solution (e.g. a patch) or workaround for your issue, too. 1699 1557 1700 1558 Package your kernel up 1701 - ~~~~~~~~~~~~~~~~~~~~~~ 1559 + """""""""""""""""""""" 1702 1560 1703 1561 The step-by-step guide uses the default make targets (e.g. 
'bzImage' and 1704 1562 'modules' on x86) to build the image and the modules of your kernel, which later ··· 1729 1587 .. _install_bisref: 1730 1588 1731 1589 Put the kernel in place 1732 - ----------------------- 1590 + ~~~~~~~~~~~~~~~~~~~~~~~ 1733 1591 1734 1592 *Install the kernel you just built.* [:ref:`... <install_bissbs>`] 1735 1593 ··· 1772 1630 .. _storagespace_bisref: 1773 1631 1774 1632 Storage requirements per kernel 1775 - ------------------------------- 1633 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1776 1634 1777 1635 *Check how much storage space the kernel, its modules, and other related files 1778 1636 like the initramfs consume.* [:ref:`... <storagespace_bissbs>`] ··· 1793 1651 .. _tainted_bisref: 1794 1652 1795 1653 Check if your newly built kernel considers itself 'tainted' 1796 - ----------------------------------------------------------- 1654 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1797 1655 1798 1656 *Check if the kernel marked itself as 'tainted'.* 1799 1657 [:ref:`... <tainted_bissbs>`] ··· 1812 1670 .. _recheckbroken_bisref: 1813 1671 1814 1672 Check the kernel built from a recent mainline codebase 1815 - ------------------------------------------------------ 1673 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1816 1674 1817 1675 *Verify if your bug occurs with the newly built kernel.* 1818 1676 [:ref:`... <recheckbroken_bissbs>`] ··· 1838 1696 .. _recheckstablebroken_bisref: 1839 1697 1840 1698 Check the kernel built from the latest stable/longterm codebase 1841 - --------------------------------------------------------------- 1699 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1842 1700 1843 1701 *Are you facing a regression within a stable/longterm release, but failed to 1844 1702 reproduce it with the kernel you just built using the latest mainline sources? ··· 1883 1741 .. 
_recheckworking_bisref: 1884 1742 1885 1743 Build your own version of the 'good' kernel 1886 - ------------------------------------------- 1744 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1887 1745 1888 1746 *Build your own variant of the working kernel and check if the feature that 1889 1747 regressed works as expected with it.* [:ref:`... <recheckworking_bissbs>`] ··· 1909 1767 1910 1768 Note, if you found and fixed problems with the .config file, you want to use it 1911 1769 to build another kernel from the latest codebase, as your earlier tests with 1912 - mainline and the latest version from an affected stable/longterm series were most 1913 - likely flawed. 1770 + mainline and the latest version from an affected stable/longterm series were 1771 + most likely flawed. 1914 1772 1915 1773 [:ref:`back to step-by-step guide <recheckworking_bissbs>`] 1774 + 1775 + Perform a bisection and validate the result 1776 + ------------------------------------------- 1777 + 1778 + *With all the preparations and precaution builds taken care of, you are now 1779 + ready to begin the bisection.* [:ref:`... <introbisect_bissbs>`] 1780 + 1781 + The steps in this segment perform and validate the bisection. 1782 + 1783 + [:ref:`back to step-by-step guide <introbisect_bissbs>`]. 1916 1784 1917 1785 .. _bisectstart_bisref: 1918 1786 1919 1787 Start the bisection 1920 - ------------------- 1788 + ~~~~~~~~~~~~~~~~~~~ 1921 1789 1922 1790 *Start the bisection and tell Git about the versions earlier established as 1923 1791 'good' and 'bad'.* [:ref:`... <bisectstart_bissbs>`] ··· 1941 1789 .. _bisectbuild_bisref: 1942 1790 1943 1791 Build a kernel from the bisection point 1944 - --------------------------------------- 1792 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 1945 1793 1946 1794 *Build, install, and boot a kernel from the code Git checked out using the 1947 1795 same commands you used earlier.* [:ref:`... <bisectbuild_bissbs>`] ··· 1969 1817 .. 
_bisecttest_bisref: 1970 1818 1971 1819 Bisection checkpoint 1972 - -------------------- 1820 + ~~~~~~~~~~~~~~~~~~~~ 1973 1821 1974 1822 *Check if the feature that regressed works in the kernel you just built.* 1975 1823 [:ref:`... <bisecttest_bissbs>`] ··· 1983 1831 .. _bisectlog_bisref: 1984 1832 1985 1833 Put the bisection log away 1986 - -------------------------- 1834 + ~~~~~~~~~~~~~~~~~~~~~~~~~~ 1987 1835 1988 1836 *Store Git's bisection log and the current .config file in a safe place.* 1989 1837 [:ref:`... <bisectlog_bissbs>`] ··· 2003 1851 .. _revert_bisref: 2004 1852 2005 1853 Try reverting the culprit 2006 - ------------------------- 1854 + ~~~~~~~~~~~~~~~~~~~~~~~~~ 2007 1855 2008 1856 *Try reverting the culprit on top of the latest codebase to see if this fixes 2009 1857 your regression.* [:ref:`... <revert_bissbs>`] ··· 2021 1869 2022 1870 [:ref:`back to step-by-step guide <revert_bissbs>`] 2023 1871 1872 + Cleanup steps during and after following this guide 1873 + --------------------------------------------------- 2024 1874 2025 - Supplementary tasks: cleanup during and after the bisection 2026 - ----------------------------------------------------------- 1875 + *During and after following this guide you might want or need to remove some 1876 + of the kernels you installed.* [:ref:`... <introclosure_bissbs>`] 1877 + 1878 + The steps in this section describe clean-up procedures. 1879 + 1880 + [:ref:`back to step-by-step guide <introclosure_bissbs>`]. 2027 1881 2028 1882 .. _makeroom_bisref: 2029 1883 2030 1884 Cleaning up during the bisection 2031 - -------------------------------- 1885 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2032 1886 2033 1887 *To remove one of the kernels you installed, look up its 'kernelrelease' 2034 1888 identifier.* [:ref:`... <makeroom_bissbs>`] ··· 2069 1911 the steps to do that vary quite a bit between Linux distributions. 
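When removing such a kernel by hand, a rough sketch like the following can help double-check what would be deleted before running any ``rm``; note that '6.0.13-local-gdeadbeef1234' is a made-up kernelrelease identifier and the paths are only the usual candidates, as your distribution may place files elsewhere::

```shell
# List candidate files of a self-built kernel before deleting anything;
# the kernelrelease identifier below is a made-up example.
krel='6.0.13-local-gdeadbeef1234'
for f in "/boot/vmlinuz-$krel" "/boot/System.map-$krel" \
         "/boot/config-$krel" "/boot/initramfs-$krel.img" \
         "/lib/modules/$krel"; do
  if [ -e "$f" ]; then
    printf 'would remove: %s\n' "$f"
  fi
done
# After double-checking the list, delete manually, for example:
# sudo rm -r "/lib/modules/$krel"
```

Printing the list first keeps a stray wildcard from taking out files of a similarly named kernel, which is exactly the pitfall the note below warns about.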
2070 1912
 2071 1913 Note, be careful with wildcards like '*' when deleting files or directories
 2072 - for kernels manually: you might accidentally remove files of a 6.0.11 kernel
 1914 + for kernels manually: you might accidentally remove files of a 6.0.13 kernel
 2073 1915 when all you want is to remove 6.0 or 6.0.1.
 2074 1916
 2075 1917 [:ref:`back to step-by-step guide <makeroom_bissbs>`]
 2076 1918
 2077 1919 Cleaning up after the bisection
 2078 - -------------------------------
 1920 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 2079 1921
 2080 1922 .. _finishingtouch_bisref:
 2081 1923
 ··· 2090 1932 (~/linux/.git/) behind -- a simple ``git reset --hard`` thus will bring the
 2091 1933 sources back.
 2092 1934
 2093 - Removing the repository as well would likely be unwise at this point: there is a
 2094 - decent chance developers will ask you to build another kernel to perform
 2095 - additional tests. This is often required to debug an issue or check proposed
 2096 - fixes. Before doing so you want to run the ``git fetch mainline`` command again
 2097 - followed by ``git checkout mainline/master`` to bring your clone up to date and
 2098 - checkout the latest codebase. Then apply the patch using ``git apply
 2099 - <filename>`` or ``git am <filename>`` and build yet another kernel using the
 2100 - familiar commands.
 1935 + Removing the repository as well would likely be unwise at this point: there
 1936 + is a decent chance developers will ask you to build another kernel to
 1937 + perform additional tests -- like testing a debug patch or a proposed fix.
 1938 + Details on how to perform those can be found in the section ':ref:`Optional:
 1939 + test reverts, patches, or later versions <introoptional_bissbs>`'.
 2101 1940
 2102 1941 Additional tests are also the reason why you want to keep the
 2103 1942 ~/kernel-config-working file around for a few weeks.
 2104 1943
 2105 1944 [:ref:`back to step-by-step guide <finishingtouch_bissbs>`]
 2106 1945
 1946 + ..
_introoptional_bisref:
 1947 +
 1948 + Test reverts, patches, or later versions
 1949 + ----------------------------------------
 1950 +
 1951 + *While or after reporting a bug, you might want or potentially will be asked
 1952 + to test reverts, patches, proposed fixes, or other versions.*
 1953 + [:ref:`... <introoptional_bissbs>`]
 1954 +
 1955 + All the commands used in this section should be pretty straightforward, so
 1956 + there is not much to add except one thing: when setting a kernel tag as
 1957 + instructed, ensure it is not much longer than the one used in the example, as
 1958 + problems will arise if the kernelrelease identifier exceeds 63 characters.
 1959 +
 1960 + [:ref:`back to step-by-step guide <introoptional_bissbs>`].
 1961 +
 1962 +
 1963 + Additional information
 1964 + ======================
 1965 +
 1966 + .. _buildhost_bis:
 1967 +
 1968 + Build kernels on a different machine
 1969 + ------------------------------------
 1970 +
 1971 + To compile kernels on another system, slightly alter the step-by-step guide's
 1972 + instructions:
 1973 +
 1974 + * Start following the guide on the machine where you want to install and test
 1975 + the kernels later.
 1976 +
 1977 + * After executing ':ref:`Boot into the working kernel and briefly use the
 1978 + apparently broken feature <bootworking_bissbs>`', save the list of loaded
 1979 + modules to a file using ``lsmod > ~/test-machine-lsmod``. Then locate the
 1980 + build configuration for the running kernel (see ':ref:`Start defining the
 1981 + build configuration for your kernel <oldconfig_bisref>`' for hints on where
 1982 + to find it) and store it as '~/test-machine-config-working'. Transfer both
 1983 + files to the home directory of your build host.
 1984 +
 1985 + * Continue the guide on the build host (e.g. with ':ref:`Ensure to have enough
 1986 + free space for building [...] <diskspace_bissbs>`').
 1987 +
 1988 + * When you reach ':ref:`Start preparing a kernel build configuration[...]
1989 + <oldconfig_bissbs>`': before running ``make olddefconfig`` for the first time,
 1990 + execute the following command to base your configuration on the one from the
 1991 + test machine's 'working' kernel::
 1992 +
 1993 + cp ~/test-machine-config-working ~/linux/.config
 1994 +
 1995 + * During the next step to ':ref:`disable any apparently superfluous kernel
 1996 + modules <localmodconfig_bissbs>`' use the following command instead::
 1997 +
 1998 + yes '' | make LSMOD=~/test-machine-lsmod localmodconfig
 1999 +
 2000 + * Continue the guide, but ignore the instructions outlining how to compile,
 2001 + install, and reboot into a kernel every time they come up. Instead build
 2002 + like this::
 2003 +
 2004 + cp ~/kernel-config-working .config
 2005 + make olddefconfig &&
 2006 + make -j $(nproc --all) targz-pkg
 2007 +
 2008 + This will generate a gzipped tar file whose name is printed in the last
 2009 + line shown; for example, a kernel with the kernelrelease identifier
 2010 + '6.0.0-rc1-local-g928a87efa423' built for x86 machines usually will
 2011 + be stored as '~/linux/linux-6.0.0-rc1-local-g928a87efa423-x86.tar.gz'.
 2012 +
 2013 + Copy that file to your test machine's home directory.
 2014 +
 2015 + * Switch to the test machine to check if you have enough space to hold another
 2016 + kernel. Then extract the file you transferred::
 2017 +
 2018 + sudo tar -xvzf ~/linux-6.0.0-rc1-local-g928a87efa423-x86.tar.gz -C /
 2019 +
 2020 + Afterwards :ref:`generate the initramfs and add the kernel to your boot
 2021 + loader's configuration <install_bisref>`; on some distributions the following
 2022 + command will take care of both these tasks::
 2023 +
 2024 + sudo /sbin/installkernel 6.0.0-rc1-local-g928a87efa423 /boot/vmlinuz-6.0.0-rc1-local-g928a87efa423
 2025 +
 2026 + Now reboot and ensure you started the intended kernel.
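One quick way to check after the reboot is comparing ``uname -r`` against the kernelrelease identifier you noted; the pattern below assumes a '-local-' tag was set via CONFIG_LOCALVERSION as in this guide's examples::

```shell
# Print the running kernel's release string and look for the special tag;
# '-local-' matches the CONFIG_LOCALVERSION examples used in this guide.
uname -r
case "$(uname -r)" in
  *-local-*) echo 'running one of the self-built, tagged kernels' ;;
  *)         echo 'this does not look like one of the tagged kernels' ;;
esac
```

If the second line reports an untagged kernel, the boot loader most likely started a different entry than intended.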
2027 + 2028 + This approach even works when building for another architecture: just install 2029 + cross-compilers and add the appropriate parameters to every invocation of make 2030 + (e.g. ``make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- [...]``). 2107 2031 2108 2032 Additional reading material 2109 - =========================== 2110 - 2111 - Further sources 2112 - --------------- 2033 + --------------------------- 2113 2034 2114 2035 * The `man page for 'git bisect' <https://git-scm.com/docs/git-bisect>`_ and 2115 2036 `fighting regressions with 'git bisect' <https://git-scm.com/docs/git-bisect-lk2009.html>`_
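For readers who want to see the bisection mechanics in isolation before touching linux.git, the flow those sources describe can be rehearsed end to end in a throwaway repository; everything below (the repository, commits, and tags) is a toy stand-in, not part of the guide::

```shell
# Rehearse the 'git bisect' flow in a disposable toy repository where a
# deliberate "regression" in a text file stands in for a kernel bug.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo works > state
git add state
git commit -qm 'working release'
git tag v-good
git commit -q --allow-empty -m 'unrelated change'
echo broken > state
git commit -qam 'change introducing the regression'
git tag v-bad
git bisect start
git bisect good v-good
git bisect bad v-bad
# At each bisection point, "build and test" (here: read the state file),
# then report the verdict until Git names the first bad commit:
until git bisect log | grep -q 'first bad commit'; do
  if grep -q works state; then git bisect good; else git bisect bad; fi
done > /dev/null
git bisect log | grep 'first bad commit'
git bisect reset > /dev/null
```

The loop plays the role the guide's build-install-test cycle plays for real kernels; `git bisect log` and `git bisect reset` behave exactly as in the kernel workflow.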
+1 -1
Documentation/devicetree/bindings/serial/atmel,at91-usart.yaml
··· 8 8 title: Atmel Universal Synchronous Asynchronous Receiver/Transmitter (USART) 9 9 10 10 maintainers: 11 - - Richard Genoud <richard.genoud@gmail.com> 11 + - Richard Genoud <richard.genoud@bootlin.com> 12 12 13 13 properties: 14 14 compatible:
+38 -35
Documentation/mm/page_owner.rst
··· 24 24 each page. It is already implemented and activated if page owner is 25 25 enabled. Other usages are more than welcome. 26 26 27 - It can also be used to show all the stacks and their outstanding 28 - allocations, which gives us a quick overview of where the memory is going 29 - without the need to screen through all the pages and match the allocation 30 - and free operation. 27 + It can also be used to show all the stacks and their current number of 28 + allocated base pages, which gives us a quick overview of where the memory 29 + is going without the need to screen through all the pages and match the 30 + allocation and free operation. 31 31 32 32 page owner is disabled by default. So, if you'd like to use it, you need 33 33 to add "page_owner=on" to your boot cmdline. If the kernel is built ··· 75 75 76 76 cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks.txt 77 77 cat stacks.txt 78 - prep_new_page+0xa9/0x120 79 - get_page_from_freelist+0x7e6/0x2140 80 - __alloc_pages+0x18a/0x370 81 - new_slab+0xc8/0x580 82 - ___slab_alloc+0x1f2/0xaf0 83 - __slab_alloc.isra.86+0x22/0x40 84 - kmem_cache_alloc+0x31b/0x350 85 - __khugepaged_enter+0x39/0x100 86 - dup_mmap+0x1c7/0x5ce 87 - copy_process+0x1afe/0x1c90 88 - kernel_clone+0x9a/0x3c0 89 - __do_sys_clone+0x66/0x90 90 - do_syscall_64+0x7f/0x160 91 - entry_SYSCALL_64_after_hwframe+0x6c/0x74 92 - stack_count: 234 78 + post_alloc_hook+0x177/0x1a0 79 + get_page_from_freelist+0xd01/0xd80 80 + __alloc_pages+0x39e/0x7e0 81 + allocate_slab+0xbc/0x3f0 82 + ___slab_alloc+0x528/0x8a0 83 + kmem_cache_alloc+0x224/0x3b0 84 + sk_prot_alloc+0x58/0x1a0 85 + sk_alloc+0x32/0x4f0 86 + inet_create+0x427/0xb50 87 + __sock_create+0x2e4/0x650 88 + inet_ctl_sock_create+0x30/0x180 89 + igmp_net_init+0xc1/0x130 90 + ops_init+0x167/0x410 91 + setup_net+0x304/0xa60 92 + copy_net_ns+0x29b/0x4a0 93 + create_new_namespaces+0x4a1/0x820 94 + nr_base_pages: 16 93 95 ... 94 96 ... 
95 97 echo 7000 > /sys/kernel/debug/page_owner_stacks/count_threshold 96 98 cat /sys/kernel/debug/page_owner_stacks/show_stacks> stacks_7000.txt 97 99 cat stacks_7000.txt 98 - prep_new_page+0xa9/0x120 99 - get_page_from_freelist+0x7e6/0x2140 100 - __alloc_pages+0x18a/0x370 101 - alloc_pages_mpol+0xdf/0x1e0 102 - folio_alloc+0x14/0x50 103 - filemap_alloc_folio+0xb0/0x100 104 - page_cache_ra_unbounded+0x97/0x180 105 - filemap_fault+0x4b4/0x1200 106 - __do_fault+0x2d/0x110 107 - do_pte_missing+0x4b0/0xa30 108 - __handle_mm_fault+0x7fa/0xb70 109 - handle_mm_fault+0x125/0x300 110 - do_user_addr_fault+0x3c9/0x840 111 - exc_page_fault+0x68/0x150 112 - asm_exc_page_fault+0x22/0x30 113 - stack_count: 8248 100 + post_alloc_hook+0x177/0x1a0 101 + get_page_from_freelist+0xd01/0xd80 102 + __alloc_pages+0x39e/0x7e0 103 + alloc_pages_mpol+0x22e/0x490 104 + folio_alloc+0xd5/0x110 105 + filemap_alloc_folio+0x78/0x230 106 + page_cache_ra_order+0x287/0x6f0 107 + filemap_get_pages+0x517/0x1160 108 + filemap_read+0x304/0x9f0 109 + xfs_file_buffered_read+0xe6/0x1d0 [xfs] 110 + xfs_file_read_iter+0x1f0/0x380 [xfs] 111 + __kernel_read+0x3b9/0x730 112 + kernel_read_file+0x309/0x4d0 113 + __do_sys_finit_module+0x381/0x730 114 + do_syscall_64+0x8d/0x150 115 + entry_SYSCALL_64_after_hwframe+0x62/0x6a 116 + nr_base_pages: 20824 114 117 ... 115 118 116 119 cat /sys/kernel/debug/page_owner > page_owner_full.txt
+1 -1
Documentation/process/embargoed-hardware-issues.rst
··· 252 252 AMD Tom Lendacky <thomas.lendacky@amd.com> 253 253 Ampere Darren Hart <darren@os.amperecomputing.com> 254 254 ARM Catalin Marinas <catalin.marinas@arm.com> 255 - IBM Power Anton Blanchard <anton@linux.ibm.com> 255 + IBM Power Michael Ellerman <ellerman@au.ibm.com> 256 256 IBM Z Christian Borntraeger <borntraeger@de.ibm.com> 257 257 Intel Tony Luck <tony.luck@intel.com> 258 258 Qualcomm Trilok Soni <quic_tsoni@quicinc.com>
+8 -10
MAINTAINERS
··· 7829 7829 F: fs/efs/ 7830 7830 7831 7831 EHEA (IBM pSeries eHEA 10Gb ethernet adapter) DRIVER 7832 - M: Douglas Miller <dougmill@linux.ibm.com> 7833 7832 L: netdev@vger.kernel.org 7834 - S: Maintained 7833 + S: Orphan 7835 7834 F: drivers/net/ethernet/ibm/ehea/ 7836 7835 7837 7836 ELM327 CAN NETWORK DRIVER ··· 8747 8748 F: drivers/usb/gadget/udc/fsl* 8748 8749 8749 8750 FREESCALE USB PHY DRIVER 8750 - M: Ran Wang <ran.wang_1@nxp.com> 8751 8751 L: linux-usb@vger.kernel.org 8752 8752 L: linuxppc-dev@lists.ozlabs.org 8753 - S: Maintained 8753 + S: Orphan 8754 8754 F: drivers/usb/phy/phy-fsl-usb* 8755 8755 8756 8756 FREEVXFS FILESYSTEM ··· 9577 9579 9578 9580 HID CORE LAYER 9579 9581 M: Jiri Kosina <jikos@kernel.org> 9580 - M: Benjamin Tissoires <benjamin.tissoires@redhat.com> 9582 + M: Benjamin Tissoires <bentiss@kernel.org> 9581 9583 L: linux-input@vger.kernel.org 9582 9584 S: Maintained 9583 9585 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git ··· 10024 10026 10025 10027 HWPOISON MEMORY FAILURE HANDLING 10026 10028 M: Miaohe Lin <linmiaohe@huawei.com> 10027 - R: Naoya Horiguchi <naoya.horiguchi@nec.com> 10029 + R: Naoya Horiguchi <nao.horiguchi@gmail.com> 10028 10030 L: linux-mm@kvack.org 10029 10031 S: Maintained 10030 10032 F: mm/hwpoison-inject.c ··· 11995 11997 F: security/keys/encrypted-keys/ 11996 11998 11997 11999 KEYS-TRUSTED 11998 - M: James Bottomley <jejb@linux.ibm.com> 12000 + M: James Bottomley <James.Bottomley@HansenPartnership.com> 11999 12001 M: Jarkko Sakkinen <jarkko@kernel.org> 12000 12002 M: Mimi Zohar <zohar@linux.ibm.com> 12001 12003 L: linux-integrity@vger.kernel.org ··· 14357 14359 F: include/dt-bindings/dma/at91.h 14358 14360 14359 14361 MICROCHIP AT91 SERIAL DRIVER 14360 - M: Richard Genoud <richard.genoud@gmail.com> 14362 + M: Richard Genoud <richard.genoud@bootlin.com> 14361 14363 S: Maintained 14362 14364 F: Documentation/devicetree/bindings/serial/atmel,at91-usart.yaml 14363 14365 F: drivers/tty/serial/atmel_serial.c ··· 19678 19680 F: include/scsi/sg.h 19679 19681 19680 19682 SCSI SUBSYSTEM 19681 - M: "James E.J. Bottomley" <jejb@linux.ibm.com> 19683 + M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> 19682 19684 M: "Martin K. Petersen" <martin.petersen@oracle.com> 19683 19685 L: linux-scsi@vger.kernel.org 19684 19686 S: Maintained ··· 22850 22852 22851 22853 USB HID/HIDBP DRIVERS (USB KEYBOARDS, MICE, REMOTE CONTROLS, ...) 22852 22854 M: Jiri Kosina <jikos@kernel.org> 22853 - M: Benjamin Tissoires <benjamin.tissoires@redhat.com> 22855 + M: Benjamin Tissoires <bentiss@kernel.org> 22854 22856 L: linux-usb@vger.kernel.org 22855 22857 S: Maintained 22856 22858 T: git git://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 9 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+5 -2
arch/arm64/kernel/head.S
··· 289 289 adr_l x1, __hyp_text_end 290 290 adr_l x2, dcache_clean_poc 291 291 blr x2 292 + 293 + mov_q x0, INIT_SCTLR_EL2_MMU_OFF 294 + pre_disable_mmu_workaround 295 + msr sctlr_el2, x0 296 + isb 292 297 0: 293 298 mov_q x0, HCR_HOST_NVHE_FLAGS 294 299 ··· 328 323 cbz x0, 2f 329 324 330 325 /* Set a sane SCTLR_EL1, the VHE way */ 331 - pre_disable_mmu_workaround 332 326 msr_s SYS_SCTLR_EL12, x1 333 327 mov x2, #BOOT_CPU_FLAG_E2H 334 328 b 3f 335 329 336 330 2: 337 - pre_disable_mmu_workaround 338 331 msr sctlr_el1, x1 339 332 mov x2, xzr 340 333 3:
+4 -1
arch/arm64/mm/hugetlbpage.c
··· 276 276 pte_t *ptep = NULL; 277 277 278 278 pgdp = pgd_offset(mm, addr); 279 - p4dp = p4d_offset(pgdp, addr); 279 + p4dp = p4d_alloc(mm, pgdp, addr); 280 + if (!p4dp) 281 + return NULL; 282 + 280 283 pudp = pud_alloc(mm, p4dp, addr); 281 284 if (!pudp) 282 285 return NULL;
-3
arch/arm64/mm/pageattr.c
··· 219 219 pte_t *ptep; 220 220 unsigned long addr = (unsigned long)page_address(page); 221 221 222 - if (!can_set_direct_map()) 223 - return true; 224 - 225 222 pgdp = pgd_offset_k(addr); 226 223 if (pgd_none(READ_ONCE(*pgdp))) 227 224 return false;
+7 -1
arch/powerpc/crypto/chacha-p10-glue.c
··· 197 197 198 198 static int __init chacha_p10_init(void) 199 199 { 200 + if (!cpu_has_feature(CPU_FTR_ARCH_31)) 201 + return 0; 202 + 200 203 static_branch_enable(&have_p10); 201 204 202 205 return crypto_register_skciphers(algs, ARRAY_SIZE(algs)); ··· 207 204 208 205 static void __exit chacha_p10_exit(void) 209 206 { 207 + if (!static_branch_likely(&have_p10)) 208 + return; 209 + 210 210 crypto_unregister_skciphers(algs, ARRAY_SIZE(algs)); 211 211 } 212 212 213 - module_cpu_feature_match(PPC_MODULE_FEATURE_P10, chacha_p10_init); 213 + module_init(chacha_p10_init); 214 214 module_exit(chacha_p10_exit); 215 215 216 216 MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (P10 accelerated)");
+3 -4
arch/powerpc/kernel/iommu.c
··· 1285 1285 struct device *dev) 1286 1286 { 1287 1287 struct iommu_domain *domain = iommu_get_domain_for_dev(dev); 1288 - struct iommu_group *grp = iommu_group_get(dev); 1289 1288 struct iommu_table_group *table_group; 1289 + struct iommu_group *grp; 1290 1290 1291 1291 /* At first attach the ownership is already set */ 1292 - if (!domain) { 1293 - iommu_group_put(grp); 1292 + if (!domain) 1294 1293 return 0; 1295 - } 1296 1294 1295 + grp = iommu_group_get(dev); 1297 1296 table_group = iommu_group_get_iommudata(grp); 1298 1297 /* 1299 1298 * The domain being set to PLATFORM from earlier
+2 -1
arch/s390/kernel/entry.S
··· 340 340 mvc __PT_LAST_BREAK(8,%r11),__LC_PGM_LAST_BREAK 341 341 stctg %c1,%c1,__PT_CR1(%r11) 342 342 #if IS_ENABLED(CONFIG_KVM) 343 - lg %r12,__LC_GMAP 343 + ltg %r12,__LC_GMAP 344 + jz 5f 344 345 clc __GMAP_ASCE(8,%r12), __PT_CR1(%r11) 345 346 jne 5f 346 347 BPENTER __SF_SIE_FLAGS(%r10),_TIF_ISOLATE_BP_GUEST
+65
arch/x86/entry/common.c
··· 255 255 instrumentation_end(); 256 256 syscall_exit_to_user_mode(regs); 257 257 } 258 + 259 + #ifdef CONFIG_X86_FRED 260 + /* 261 + * A FRED-specific INT80 handler is warranted for the following reasons: 262 + * 263 + * 1) As INT instructions and hardware interrupts are separate event 264 + * types, FRED does not preclude the use of vector 0x80 for external 265 + * interrupts. As a result, the FRED setup code does not reserve 266 + * vector 0x80 and calling int80_is_external() is not merely 267 + * suboptimal but actively incorrect: it could cause a system call 268 + * to be incorrectly ignored. 269 + * 270 + * 2) It is called only for handling vector 0x80 of event type 271 + * EVENT_TYPE_SWINT and will never be called to handle any external 272 + * interrupt (event type EVENT_TYPE_EXTINT). 273 + * 274 + * 3) FRED has separate entry flows depending on whether the event came 275 + * from user space or kernel space, and because the kernel does not 276 + * use INT insns, the FRED kernel entry handler fred_entry_from_kernel() 277 + * falls through to fred_bad_type() if the event type is 278 + * EVENT_TYPE_SWINT, i.e., INT insns. So if the kernel is handling 279 + * an INT insn, it can only be from a user level. 280 + * 281 + * 4) int80_emulation() does a CLEAR_BRANCH_HISTORY. FRED will 282 + * likely take a different approach if it is ever needed: it 283 + * probably belongs in either fred_intx()/fred_other() or 284 + * asm_fred_entrypoint_user(), depending on whether this ought to be 285 + * done for all entries from userspace or only for system 286 + * calls. 287 + * 288 + * 5) INT $0x80 is the fast path for 32-bit system calls under FRED. 289 + */ 290 + DEFINE_FREDENTRY_RAW(int80_emulation) 291 + { 292 + int nr; 293 + 294 + enter_from_user_mode(regs); 295 + 296 + instrumentation_begin(); 297 + add_random_kstack_offset(); 298 + 299 + /* 300 + * FRED pushed 0 into regs::orig_ax and regs::ax contains the 301 + * syscall number. 302 + * 303 + * User tracing code (ptrace or signal handlers) might assume 304 + * that the regs::orig_ax contains a 32-bit number on invoking 305 + * a 32-bit syscall. 306 + * 307 + * Establish the syscall convention by saving the 32-bit truncated 308 + * syscall number in regs::orig_ax and by invalidating regs::ax. 309 + */ 310 + regs->orig_ax = regs->ax & GENMASK(31, 0); 311 + regs->ax = -ENOSYS; 312 + 313 + nr = syscall_32_enter(regs); 314 + 315 + local_irq_enable(); 316 + nr = syscall_enter_from_user_mode_work(regs, nr); 317 + do_syscall_32_irqs_on(regs, nr); 318 + 319 + instrumentation_end(); 320 + syscall_exit_to_user_mode(regs); 321 + } 322 + #endif 258 323 #else /* CONFIG_IA32_EMULATION */ 259 324 260 325 /* Handles int $0x80 on a 32bit kernel */
+5 -5
arch/x86/entry/entry_fred.c
··· 28 28 if (regs->fred_cs.sl > 0) { 29 29 pr_emerg("PANIC: invalid or fatal FRED event; event type %u " 30 30 "vector %u error 0x%lx aux 0x%lx at %04x:%016lx\n", 31 - regs->fred_ss.type, regs->fred_ss.vector, regs->orig_ax, 31 + regs->fred_ss.type, regs->fred_ss.vector, error_code, 32 32 fred_event_data(regs), regs->cs, regs->ip); 33 - die("invalid or fatal FRED event", regs, regs->orig_ax); 33 + die("invalid or fatal FRED event", regs, error_code); 34 34 panic("invalid or fatal FRED event"); 35 35 } else { 36 36 unsigned long flags = oops_begin(); ··· 38 38 39 39 pr_alert("BUG: invalid or fatal FRED event; event type %u " 40 40 "vector %u error 0x%lx aux 0x%lx at %04x:%016lx\n", 41 - regs->fred_ss.type, regs->fred_ss.vector, regs->orig_ax, 41 + regs->fred_ss.type, regs->fred_ss.vector, error_code, 42 42 fred_event_data(regs), regs->cs, regs->ip); 43 43 44 - if (__die("Invalid or fatal FRED event", regs, regs->orig_ax)) 44 + if (__die("Invalid or fatal FRED event", regs, error_code)) 45 45 sig = 0; 46 46 47 47 oops_end(flags, regs, sig); ··· 66 66 /* INT80 */ 67 67 case IA32_SYSCALL_VECTOR: 68 68 if (ia32_enabled()) 69 - return int80_emulation(regs); 69 + return fred_int80_emulation(regs); 70 70 fallthrough; 71 71 #endif 72 72
+1
arch/x86/events/intel/lbr.c
··· 1693 1693 lbr->from = x86_pmu.lbr_from; 1694 1694 lbr->to = x86_pmu.lbr_to; 1695 1695 lbr->info = x86_pmu.lbr_info; 1696 + lbr->has_callstack = x86_pmu_has_lbr_callstack(); 1696 1697 } 1697 1698 EXPORT_SYMBOL_GPL(x86_perf_get_lbr); 1698 1699
+3
arch/x86/include/asm/barrier.h
··· 79 79 #define __smp_mb__before_atomic() do { } while (0) 80 80 #define __smp_mb__after_atomic() do { } while (0) 81 81 82 + /* Writing to CR3 provides a full memory barrier in switch_mm(). */ 83 + #define smp_mb__after_switch_mm() do { } while (0) 84 + 82 85 #include <asm-generic/barrier.h> 83 86 84 87 #endif /* _ASM_X86_BARRIER_H */
+1
arch/x86/include/asm/kvm_host.h
··· 855 855 int cpuid_nent; 856 856 struct kvm_cpuid_entry2 *cpuid_entries; 857 857 struct kvm_hypervisor_cpuid kvm_cpuid; 858 + bool is_amd_compatible; 858 859 859 860 /* 860 861 * FIXME: Drop this macro and use KVM_NR_GOVERNED_FEATURES directly
+1
arch/x86/include/asm/perf_event.h
··· 555 555 unsigned int from; 556 556 unsigned int to; 557 557 unsigned int info; 558 + bool has_callstack; 558 559 }; 559 560 560 561 extern void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap);
+7 -4
arch/x86/kernel/cpu/bugs.c
··· 1652 1652 return; 1653 1653 1654 1654 /* Retpoline mitigates against BHI unless the CPU has RRSBA behavior */ 1655 - if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) { 1655 + if (boot_cpu_has(X86_FEATURE_RETPOLINE) && 1656 + !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) { 1656 1657 spec_ctrl_disable_kernel_rrsba(); 1657 1658 if (rrsba_disabled) 1658 1659 return; ··· 2805 2804 { 2806 2805 if (!boot_cpu_has_bug(X86_BUG_BHI)) 2807 2806 return "; BHI: Not affected"; 2808 - else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW)) 2807 + else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_HW)) 2809 2808 return "; BHI: BHI_DIS_S"; 2810 - else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP)) 2809 + else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP)) 2811 2810 return "; BHI: SW loop, KVM: SW loop"; 2812 - else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && rrsba_disabled) 2811 + else if (boot_cpu_has(X86_FEATURE_RETPOLINE) && 2812 + !boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE) && 2813 + rrsba_disabled) 2813 2814 return "; BHI: Retpoline"; 2814 2815 else if (boot_cpu_has(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT)) 2815 2816 return "; BHI: Vulnerable, KVM: SW loop";
+3 -3
arch/x86/kernel/cpu/cpuid-deps.c
··· 44 44 { X86_FEATURE_F16C, X86_FEATURE_XMM2, }, 45 45 { X86_FEATURE_AES, X86_FEATURE_XMM2 }, 46 46 { X86_FEATURE_SHA_NI, X86_FEATURE_XMM2 }, 47 + { X86_FEATURE_GFNI, X86_FEATURE_XMM2 }, 47 48 { X86_FEATURE_FMA, X86_FEATURE_AVX }, 49 + { X86_FEATURE_VAES, X86_FEATURE_AVX }, 50 + { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX }, 48 51 { X86_FEATURE_AVX2, X86_FEATURE_AVX, }, 49 52 { X86_FEATURE_AVX512F, X86_FEATURE_AVX, }, 50 53 { X86_FEATURE_AVX512IFMA, X86_FEATURE_AVX512F }, ··· 59 56 { X86_FEATURE_AVX512VL, X86_FEATURE_AVX512F }, 60 57 { X86_FEATURE_AVX512VBMI, X86_FEATURE_AVX512F }, 61 58 { X86_FEATURE_AVX512_VBMI2, X86_FEATURE_AVX512VL }, 62 - { X86_FEATURE_GFNI, X86_FEATURE_AVX512VL }, 63 - { X86_FEATURE_VAES, X86_FEATURE_AVX512VL }, 64 - { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX512VL }, 65 59 { X86_FEATURE_AVX512_VNNI, X86_FEATURE_AVX512VL }, 66 60 { X86_FEATURE_AVX512_BITALG, X86_FEATURE_AVX512VL }, 67 61 { X86_FEATURE_AVX512_4VNNIW, X86_FEATURE_AVX512F },
-5
arch/x86/kvm/Makefile
··· 3 3 ccflags-y += -I $(srctree)/arch/x86/kvm 4 4 ccflags-$(CONFIG_KVM_WERROR) += -Werror 5 5 6 - ifeq ($(CONFIG_FRAME_POINTER),y) 7 - OBJECT_FILES_NON_STANDARD_vmx/vmenter.o := y 8 - OBJECT_FILES_NON_STANDARD_svm/vmenter.o := y 9 - endif 10 - 11 6 include $(srctree)/virt/kvm/Makefile.kvm 12 7 13 8 kvm-y += x86.o emulate.o i8259.o irq.o lapic.o \
+1
arch/x86/kvm/cpuid.c
··· 376 376 377 377 kvm_update_pv_runtime(vcpu); 378 378 379 + vcpu->arch.is_amd_compatible = guest_cpuid_is_amd_or_hygon(vcpu); 379 380 vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu); 380 381 vcpu->arch.reserved_gpa_bits = kvm_vcpu_reserved_gpa_bits_raw(vcpu); 381 382
+10
arch/x86/kvm/cpuid.h
··· 120 120 return best && is_guest_vendor_intel(best->ebx, best->ecx, best->edx); 121 121 } 122 122 123 + static inline bool guest_cpuid_is_amd_compatible(struct kvm_vcpu *vcpu) 124 + { 125 + return vcpu->arch.is_amd_compatible; 126 + } 127 + 128 + static inline bool guest_cpuid_is_intel_compatible(struct kvm_vcpu *vcpu) 129 + { 130 + return !guest_cpuid_is_amd_compatible(vcpu); 131 + } 132 + 123 133 static inline int guest_cpuid_family(struct kvm_vcpu *vcpu) 124 134 { 125 135 struct kvm_cpuid_entry2 *best;
+2 -1
arch/x86/kvm/lapic.c
··· 2776 2776 trig_mode = reg & APIC_LVT_LEVEL_TRIGGER; 2777 2777 2778 2778 r = __apic_accept_irq(apic, mode, vector, 1, trig_mode, NULL); 2779 - if (r && lvt_type == APIC_LVTPC) 2779 + if (r && lvt_type == APIC_LVTPC && 2780 + guest_cpuid_is_intel_compatible(apic->vcpu)) 2780 2781 kvm_lapic_set_reg(apic, APIC_LVTPC, reg | APIC_LVT_MASKED); 2781 2782 return r; 2782 2783 }
+6 -5
arch/x86/kvm/mmu/mmu.c
··· 4935 4935 context->cpu_role.base.level, is_efer_nx(context), 4936 4936 guest_can_use(vcpu, X86_FEATURE_GBPAGES), 4937 4937 is_cr4_pse(context), 4938 - guest_cpuid_is_amd_or_hygon(vcpu)); 4938 + guest_cpuid_is_amd_compatible(vcpu)); 4939 4939 } 4940 4940 4941 4941 static void __reset_rsvds_bits_mask_ept(struct rsvd_bits_validate *rsvd_check, ··· 5576 5576 * that problem is swept under the rug; KVM's CPUID API is horrific and 5577 5577 * it's all but impossible to solve it without introducing a new API. 5578 5578 */ 5579 - vcpu->arch.root_mmu.root_role.word = 0; 5580 - vcpu->arch.guest_mmu.root_role.word = 0; 5581 - vcpu->arch.nested_mmu.root_role.word = 0; 5579 + vcpu->arch.root_mmu.root_role.invalid = 1; 5580 + vcpu->arch.guest_mmu.root_role.invalid = 1; 5581 + vcpu->arch.nested_mmu.root_role.invalid = 1; 5582 5582 vcpu->arch.root_mmu.cpu_role.ext.valid = 0; 5583 5583 vcpu->arch.guest_mmu.cpu_role.ext.valid = 0; 5584 5584 vcpu->arch.nested_mmu.cpu_role.ext.valid = 0; ··· 7399 7399 * by the memslot, KVM can't use a hugepage due to the 7400 7400 * misaligned address regardless of memory attributes. 7401 7401 */ 7402 - if (gfn >= slot->base_gfn) { 7402 + if (gfn >= slot->base_gfn && 7403 + gfn + nr_pages <= slot->base_gfn + slot->npages) { 7403 7404 if (hugepage_has_attrs(kvm, slot, gfn, level, attrs)) 7404 7405 hugepage_clear_mixed(slot, gfn, level); 7405 7406 else
+22 -29
arch/x86/kvm/mmu/tdp_mmu.c
··· 1548 1548 } 1549 1549 } 1550 1550 1551 - /* 1552 - * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If 1553 - * AD bits are enabled, this will involve clearing the dirty bit on each SPTE. 1554 - * If AD bits are not enabled, this will require clearing the writable bit on 1555 - * each SPTE. Returns true if an SPTE has been changed and the TLBs need to 1556 - * be flushed. 1557 - */ 1551 + static bool tdp_mmu_need_write_protect(struct kvm_mmu_page *sp) 1552 + { 1553 + /* 1554 + * All TDP MMU shadow pages share the same role as their root, aside 1555 + * from level, so it is valid to key off any shadow page to determine if 1556 + * write protection is needed for an entire tree. 1557 + */ 1558 + return kvm_mmu_page_ad_need_write_protect(sp) || !kvm_ad_enabled(); 1559 + } 1560 + 1558 1561 static bool clear_dirty_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root, 1559 1562 gfn_t start, gfn_t end) 1560 1563 { 1561 - u64 dbit = kvm_ad_enabled() ? shadow_dirty_mask : PT_WRITABLE_MASK; 1564 + const u64 dbit = tdp_mmu_need_write_protect(root) ? PT_WRITABLE_MASK : 1565 + shadow_dirty_mask; 1562 1566 struct tdp_iter iter; 1563 1567 bool spte_set = false; 1564 1568 ··· 1577 1573 if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) 1578 1574 continue; 1579 1575 1580 - KVM_MMU_WARN_ON(kvm_ad_enabled() && 1576 + KVM_MMU_WARN_ON(dbit == shadow_dirty_mask && 1581 1577 spte_ad_need_write_protect(iter.old_spte)); 1582 1578 1583 1579 if (!(iter.old_spte & dbit)) ··· 1594 1590 } 1595 1591 1596 1592 /* 1597 - * Clear the dirty status of all the SPTEs mapping GFNs in the memslot. If 1598 - * AD bits are enabled, this will involve clearing the dirty bit on each SPTE. 1599 - * If AD bits are not enabled, this will require clearing the writable bit on 1600 - * each SPTE. Returns true if an SPTE has been changed and the TLBs need to 1601 - * be flushed. 1593 + * Clear the dirty status (D-bit or W-bit) of all the SPTEs mapping GFNs in the 1594 + * memslot. Returns true if an SPTE has been changed and the TLBs need to be 1595 + * flushed. 1602 1596 */ 1603 1597 bool kvm_tdp_mmu_clear_dirty_slot(struct kvm *kvm, 1604 1598 const struct kvm_memory_slot *slot) ··· 1612 1610 return spte_set; 1613 1611 } 1614 1612 1615 - /* 1616 - * Clears the dirty status of all the 4k SPTEs mapping GFNs for which a bit is 1617 - * set in mask, starting at gfn. The given memslot is expected to contain all 1618 - * the GFNs represented by set bits in the mask. If AD bits are enabled, 1619 - * clearing the dirty status will involve clearing the dirty bit on each SPTE 1620 - * or, if AD bits are not enabled, clearing the writable bit on each SPTE. 1621 - */ 1622 1613 static void clear_dirty_pt_masked(struct kvm *kvm, struct kvm_mmu_page *root, 1623 1614 gfn_t gfn, unsigned long mask, bool wrprot) 1624 1615 { 1625 - u64 dbit = (wrprot || !kvm_ad_enabled()) ? PT_WRITABLE_MASK : 1626 - shadow_dirty_mask; 1616 + const u64 dbit = (wrprot || tdp_mmu_need_write_protect(root)) ? PT_WRITABLE_MASK : 1617 + shadow_dirty_mask; 1627 1618 struct tdp_iter iter; 1628 1619 1629 1620 lockdep_assert_held_write(&kvm->mmu_lock); ··· 1628 1633 if (!mask) 1629 1634 break; 1630 1635 1631 - KVM_MMU_WARN_ON(kvm_ad_enabled() && 1636 + KVM_MMU_WARN_ON(dbit == shadow_dirty_mask && 1632 1637 spte_ad_need_write_protect(iter.old_spte)); 1633 1638 1634 1639 if (iter.level > PG_LEVEL_4K || ··· 1654 1659 } 1655 1660 1656 1661 /* 1657 - * Clears the dirty status of all the 4k SPTEs mapping GFNs for which a bit is 1658 - * set in mask, starting at gfn. The given memslot is expected to contain all 1659 - * the GFNs represented by set bits in the mask. If AD bits are enabled, 1660 - * clearing the dirty status will involve clearing the dirty bit on each SPTE 1661 - * or, if AD bits are not enabled, clearing the writable bit on each SPTE. 1662 + * Clear the dirty status (D-bit or W-bit) of all the 4k SPTEs mapping GFNs for 1663 + * which a bit is set in mask, starting at gfn. The given memslot is expected to 1664 + * contain all the GFNs represented by set bits in the mask. 1662 1665 */ 1663 1666 void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm, 1664 1667 struct kvm_memory_slot *slot,
+14 -2
arch/x86/kvm/pmu.c
··· 775 775 pmu->pebs_data_cfg_mask = ~0ull; 776 776 bitmap_zero(pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX); 777 777 778 - if (vcpu->kvm->arch.enable_pmu) 779 - static_call(kvm_x86_pmu_refresh)(vcpu); 778 + if (!vcpu->kvm->arch.enable_pmu) 779 + return; 780 + 781 + static_call(kvm_x86_pmu_refresh)(vcpu); 782 + 783 + /* 784 + * At RESET, both Intel and AMD CPUs set all enable bits for general 785 + * purpose counters in IA32_PERF_GLOBAL_CTRL (so that software that 786 + * was written for v1 PMUs doesn't unknowingly leave GP counters disabled 787 + * in the global controls). Emulate that behavior when refreshing the 788 + * PMU so that userspace doesn't need to manually set PERF_GLOBAL_CTRL. 789 + */ 790 + if (kvm_pmu_has_perf_global_ctrl(pmu) && pmu->nr_arch_gp_counters) 791 + pmu->global_ctrl = GENMASK_ULL(pmu->nr_arch_gp_counters - 1, 0); 780 792 } 781 793 782 794 void kvm_pmu_init(struct kvm_vcpu *vcpu)
+1 -1
arch/x86/kvm/svm/sev.c
··· 434 434 /* Avoid using vmalloc for smaller buffers. */ 435 435 size = npages * sizeof(struct page *); 436 436 if (size > PAGE_SIZE) 437 - pages = __vmalloc(size, GFP_KERNEL_ACCOUNT | __GFP_ZERO); 437 + pages = __vmalloc(size, GFP_KERNEL_ACCOUNT); 438 438 else 439 439 pages = kmalloc(size, GFP_KERNEL_ACCOUNT); 440 440
+10 -7
arch/x86/kvm/svm/svm.c
··· 1503 1503 __free_pages(virt_to_page(svm->msrpm), get_order(MSRPM_SIZE)); 1504 1504 } 1505 1505 1506 + static struct sev_es_save_area *sev_es_host_save_area(struct svm_cpu_data *sd) 1507 + { 1508 + return page_address(sd->save_area) + 0x400; 1509 + } 1510 + 1506 1511 static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu) 1507 1512 { 1508 1513 struct vcpu_svm *svm = to_svm(vcpu); ··· 1524 1519 * or subsequent vmload of host save area. 1525 1520 */ 1526 1521 vmsave(sd->save_area_pa); 1527 - if (sev_es_guest(vcpu->kvm)) { 1528 - struct sev_es_save_area *hostsa; 1529 - hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400); 1530 - 1531 - sev_es_prepare_switch_to_guest(hostsa); 1532 - } 1522 + if (sev_es_guest(vcpu->kvm)) 1523 + sev_es_prepare_switch_to_guest(sev_es_host_save_area(sd)); 1533 1524 1534 1525 if (tsc_scaling) 1535 1526 __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio); ··· 4102 4101 4103 4102 static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_intercepted) 4104 4103 { 4104 + struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, vcpu->cpu); 4105 4105 struct vcpu_svm *svm = to_svm(vcpu); 4106 4106 4107 4107 guest_state_enter_irqoff(); ··· 4110 4108 amd_clear_divider(); 4111 4109 4112 4110 if (sev_es_guest(vcpu->kvm)) 4113 - __svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted); 4111 + __svm_sev_es_vcpu_run(svm, spec_ctrl_intercepted, 4112 + sev_es_host_save_area(sd)); 4114 4113 else 4115 4114 __svm_vcpu_run(svm, spec_ctrl_intercepted); 4116 4115
+2 -1
arch/x86/kvm/svm/svm.h
··· 698 698 699 699 /* vmenter.S */ 700 700 701 - void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted); 701 + void __svm_sev_es_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted, 702 + struct sev_es_save_area *hostsa); 702 703 void __svm_vcpu_run(struct vcpu_svm *svm, bool spec_ctrl_intercepted); 703 704 704 705 #define DEFINE_KVM_GHCB_ACCESSORS(field) \
+44 -53
arch/x86/kvm/svm/vmenter.S
··· 3 3 #include <asm/asm.h> 4 4 #include <asm/asm-offsets.h> 5 5 #include <asm/bitsperlong.h> 6 + #include <asm/frame.h> 6 7 #include <asm/kvm_vcpu_regs.h> 7 8 #include <asm/nospec-branch.h> 8 9 #include "kvm-asm-offsets.h" ··· 68 67 "", X86_FEATURE_V_SPEC_CTRL 69 68 901: 70 69 .endm 71 - .macro RESTORE_HOST_SPEC_CTRL_BODY 70 + .macro RESTORE_HOST_SPEC_CTRL_BODY spec_ctrl_intercepted:req 72 71 900: 73 72 /* Same for after vmexit. */ 74 73 mov $MSR_IA32_SPEC_CTRL, %ecx ··· 77 76 * Load the value that the guest had written into MSR_IA32_SPEC_CTRL, 78 77 * if it was not intercepted during guest execution. 79 78 */ 80 - cmpb $0, (%_ASM_SP) 79 + cmpb $0, \spec_ctrl_intercepted 81 80 jnz 998f 82 81 rdmsr 83 82 movl %eax, SVM_spec_ctrl(%_ASM_DI) ··· 100 99 */ 101 100 SYM_FUNC_START(__svm_vcpu_run) 102 101 push %_ASM_BP 102 + mov %_ASM_SP, %_ASM_BP 103 103 #ifdef CONFIG_X86_64 104 104 push %r15 105 105 push %r14 ··· 270 268 RET 271 269 272 270 RESTORE_GUEST_SPEC_CTRL_BODY 273 - RESTORE_HOST_SPEC_CTRL_BODY 271 + RESTORE_HOST_SPEC_CTRL_BODY (%_ASM_SP) 274 272 275 273 10: cmpb $0, _ASM_RIP(kvm_rebooting) 276 274 jne 2b ··· 292 290 293 291 SYM_FUNC_END(__svm_vcpu_run) 294 292 293 + #ifdef CONFIG_KVM_AMD_SEV 294 + 295 + 296 + #ifdef CONFIG_X86_64 297 + #define SEV_ES_GPRS_BASE 0x300 298 + #define SEV_ES_RBX (SEV_ES_GPRS_BASE + __VCPU_REGS_RBX * WORD_SIZE) 299 + #define SEV_ES_RBP (SEV_ES_GPRS_BASE + __VCPU_REGS_RBP * WORD_SIZE) 300 + #define SEV_ES_RSI (SEV_ES_GPRS_BASE + __VCPU_REGS_RSI * WORD_SIZE) 301 + #define SEV_ES_RDI (SEV_ES_GPRS_BASE + __VCPU_REGS_RDI * WORD_SIZE) 302 + #define SEV_ES_R12 (SEV_ES_GPRS_BASE + __VCPU_REGS_R12 * WORD_SIZE) 303 + #define SEV_ES_R13 (SEV_ES_GPRS_BASE + __VCPU_REGS_R13 * WORD_SIZE) 304 + #define SEV_ES_R14 (SEV_ES_GPRS_BASE + __VCPU_REGS_R14 * WORD_SIZE) 305 + #define SEV_ES_R15 (SEV_ES_GPRS_BASE + __VCPU_REGS_R15 * WORD_SIZE) 306 + #endif 307 + 295 308 /** 296 309 * __svm_sev_es_vcpu_run - Run a SEV-ES vCPU via a transition to SVM guest mode 297 310 * @svm: struct vcpu_svm * 298 311 * @spec_ctrl_intercepted: bool 299 312 */ 300 313 SYM_FUNC_START(__svm_sev_es_vcpu_run) 301 - push %_ASM_BP 302 - #ifdef CONFIG_X86_64 303 - push %r15 304 - push %r14 305 - push %r13 306 - push %r12 307 - #else 308 - push %edi 309 - push %esi 310 - #endif 311 - push %_ASM_BX 314 + FRAME_BEGIN 312 315 313 316 /* 314 - * Save variables needed after vmexit on the stack, in inverse 315 - * order compared to when they are needed. 317 + * Save non-volatile (callee-saved) registers to the host save area. 318 + * Except for RAX and RSP, all GPRs are restored on #VMEXIT, but not 319 + * saved on VMRUN. 316 320 */ 321 + mov %rbp, SEV_ES_RBP (%rdx) 322 + mov %r15, SEV_ES_R15 (%rdx) 323 + mov %r14, SEV_ES_R14 (%rdx) 324 + mov %r13, SEV_ES_R13 (%rdx) 325 + mov %r12, SEV_ES_R12 (%rdx) 326 + mov %rbx, SEV_ES_RBX (%rdx) 317 327 318 - /* Accessed directly from the stack in RESTORE_HOST_SPEC_CTRL. */ 319 - push %_ASM_ARG2 320 - 321 - /* Save @svm. */ 322 - push %_ASM_ARG1 323 - 324 - .ifnc _ASM_ARG1, _ASM_DI 325 328 /* 326 - * Stash @svm in RDI early. On 32-bit, arguments are in RAX, RCX 327 - * and RDX which are clobbered by RESTORE_GUEST_SPEC_CTRL. 329 + * Save volatile registers that hold arguments that are needed after 330 + * #VMEXIT (RDI=@svm and RSI=@spec_ctrl_intercepted). 328 331 */ 329 - mov %_ASM_ARG1, %_ASM_DI 330 - .endif 332 + mov %rdi, SEV_ES_RDI (%rdx) 333 + mov %rsi, SEV_ES_RSI (%rdx) 331 334 332 - /* Clobbers RAX, RCX, RDX. */ 335 + /* Clobbers RAX, RCX, RDX (@hostsa). */ 333 336 RESTORE_GUEST_SPEC_CTRL 334 337 335 338 /* Get svm->current_vmcb->pa into RAX. */ 336 - mov SVM_current_vmcb(%_ASM_DI), %_ASM_AX 337 - mov KVM_VMCB_pa(%_ASM_AX), %_ASM_AX 339 + mov SVM_current_vmcb(%rdi), %rax 340 + mov KVM_VMCB_pa(%rax), %rax 338 341 339 342 /* Enter guest mode */ 340 343 sti 341 344 342 - 1: vmrun %_ASM_AX 345 + 1: vmrun %rax 343 346 344 347 2: cli 345 348 346 - /* Pop @svm to RDI, guest registers have been saved already. */ 347 - pop %_ASM_DI 348 - 349 349 #ifdef CONFIG_MITIGATION_RETPOLINE 350 350 /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ 351 - FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE 351 + FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE 352 352 #endif 353 353 354 - /* Clobbers RAX, RCX, RDX. */ 354 + /* Clobbers RAX, RCX, RDX, consumes RDI (@svm) and RSI (@spec_ctrl_intercepted). */ 355 355 RESTORE_HOST_SPEC_CTRL 356 356 357 357 /* ··· 365 361 */ 366 362 UNTRAIN_RET_VM 367 363 368 - /* "Pop" @spec_ctrl_intercepted. */ 369 - pop %_ASM_BX 370 - 371 - pop %_ASM_BX 372 - 373 - #ifdef CONFIG_X86_64 374 - pop %r12 375 - pop %r13 376 - pop %r14 377 - pop %r15 378 - #else 379 - pop %esi 380 - pop %edi 381 - #endif 382 - pop %_ASM_BP 364 + FRAME_END 383 365 RET 384 366 385 367 RESTORE_GUEST_SPEC_CTRL_BODY 386 - RESTORE_HOST_SPEC_CTRL_BODY 368 RESTORE_HOST_SPEC_CTRL_BODY %sil 387 369 388 - 3: cmpb $0, _ASM_RIP(kvm_rebooting) 370 3: cmpb $0, kvm_rebooting(%rip) 389 371 jne 2b 390 372 ud2 391 373 392 374 _ASM_EXTABLE(1b, 3b) 393 375 394 376 SYM_FUNC_END(__svm_sev_es_vcpu_run) 377 + #endif /* CONFIG_KVM_AMD_SEV */
+1 -1
arch/x86/kvm/vmx/pmu_intel.c
··· 535 535 perf_capabilities = vcpu_get_perf_capabilities(vcpu); 536 536 if (cpuid_model_is_consistent(vcpu) && 537 537 (perf_capabilities & PMU_CAP_LBR_FMT)) 538 - x86_perf_get_lbr(&lbr_desc->records); 538 + memcpy(&lbr_desc->records, &vmx_lbr_caps, sizeof(vmx_lbr_caps)); 539 539 else 540 540 lbr_desc->records.nr = 0; 541 541
+35 -6
arch/x86/kvm/vmx/vmx.c
··· 218 218 int __read_mostly pt_mode = PT_MODE_SYSTEM; 219 219 module_param(pt_mode, int, S_IRUGO); 220 220 221 + struct x86_pmu_lbr __ro_after_init vmx_lbr_caps; 222 + 221 223 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_should_flush); 222 224 static DEFINE_STATIC_KEY_FALSE(vmx_l1d_flush_cond); 223 225 static DEFINE_MUTEX(vmx_l1d_flush_mutex); ··· 7864 7862 vmx_update_exception_bitmap(vcpu); 7865 7863 } 7866 7864 7867 - static u64 vmx_get_perf_capabilities(void) 7865 + static __init u64 vmx_get_perf_capabilities(void) 7868 7866 { 7869 7867 u64 perf_cap = PMU_CAP_FW_WRITES; 7870 - struct x86_pmu_lbr lbr; 7871 7868 u64 host_perf_cap = 0; 7872 7869 7873 7870 if (!enable_pmu) ··· 7876 7875 rdmsrl(MSR_IA32_PERF_CAPABILITIES, host_perf_cap); 7877 7876 7878 7877 if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) { 7879 - x86_perf_get_lbr(&lbr); 7880 - if (lbr.nr) 7878 + x86_perf_get_lbr(&vmx_lbr_caps); 7879 + 7880 + /* 7881 + * KVM requires LBR callstack support, as the overhead due to 7882 + * context switching LBRs without said support is too high. 7883 + * See intel_pmu_create_guest_lbr_event() for more info. 7884 + */ 7885 + if (!vmx_lbr_caps.has_callstack) 7886 + memset(&vmx_lbr_caps, 0, sizeof(vmx_lbr_caps)); 7887 + else if (vmx_lbr_caps.nr) 7881 7888 perf_cap |= host_perf_cap & PMU_CAP_LBR_FMT; 7882 7889 } 7883 7890 7884 7891 if (vmx_pebs_supported()) { 7885 7892 perf_cap |= host_perf_cap & PERF_CAP_PEBS_MASK; 7886 - if ((perf_cap & PERF_CAP_PEBS_FORMAT) < 4) 7887 - perf_cap &= ~PERF_CAP_PEBS_BASELINE; 7893 + 7894 + /* 7895 + * Disallow adaptive PEBS as it is functionally broken, can be 7896 + * used by the guest to read *host* LBRs, and can be used to 7897 + * bypass userspace event filters. To correctly and safely 7898 + * support adaptive PEBS, KVM needs to: 7899 + * 7900 + * 1. Account for the ADAPTIVE flag when (re)programming fixed 7901 + * counters. 7902 + * 7903 + * 2. Gain support from perf (or take direct control of counter 7904 + * programming) to support events without adaptive PEBS 7905 + * enabled for the hardware counter. 7906 + * 7907 + * 3. Ensure LBR MSRs cannot hold host data on VM-Entry with 7908 + * adaptive PEBS enabled and MSR_PEBS_DATA_CFG.LBRS=1. 7909 + * 7910 + * 4. Document which PMU events are effectively exposed to the 7911 + * guest via adaptive PEBS, and make adaptive PEBS mutually 7912 + * exclusive with KVM_SET_PMU_EVENT_FILTER if necessary. 7913 + */ 7914 + perf_cap &= ~PERF_CAP_PEBS_BASELINE; 7888 7915 } 7889 7916 7890 7917 return perf_cap;
+5 -1
arch/x86/kvm/vmx/vmx.h
··· 15 15 #include "vmx_ops.h" 16 16 #include "../cpuid.h" 17 17 #include "run_flags.h" 18 + #include "../mmu.h" 18 19 19 20 #define MSR_TYPE_R 1 20 21 #define MSR_TYPE_W 2 ··· 109 108 /* True if LBRs are marked as not intercepted in the MSR bitmap */ 110 109 bool msr_passthrough; 111 110 }; 111 + 112 + extern struct x86_pmu_lbr vmx_lbr_caps; 112 113 113 114 /* 114 115 * The nested_vmx structure is part of vcpu_vmx, and holds information we need ··· 722 719 if (!enable_ept) 723 720 return true; 724 721 725 - return allow_smaller_maxphyaddr && cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits; 722 + return allow_smaller_maxphyaddr && 723 + cpuid_maxphyaddr(vcpu) < kvm_get_shadow_phys_bits(); 726 724 } 727 725 728 726 static inline bool is_unrestricted_guest(struct kvm_vcpu *vcpu)
+1 -1
arch/x86/kvm/x86.c
··· 3470 3470 static bool can_set_mci_status(struct kvm_vcpu *vcpu) 3471 3471 { 3472 3472 /* McStatusWrEn enabled? */ 3473 - if (guest_cpuid_is_amd_or_hygon(vcpu)) 3473 + if (guest_cpuid_is_amd_compatible(vcpu)) 3474 3474 return !!(vcpu->arch.msr_hwcr & BIT_ULL(18)); 3475 3475 3476 3476 return false;
+7
arch/x86/lib/retpoline.S
··· 382 382 SYM_CODE_START(__x86_return_thunk) 383 383 UNWIND_HINT_FUNC 384 384 ANNOTATE_NOENDBR 385 + #if defined(CONFIG_MITIGATION_UNRET_ENTRY) || \ 386 + defined(CONFIG_MITIGATION_SRSO) || \ 387 + defined(CONFIG_MITIGATION_CALL_DEPTH_TRACKING) 385 388 ALTERNATIVE __stringify(ANNOTATE_UNRET_SAFE; ret), \ 386 389 "jmp warn_thunk_thunk", X86_FEATURE_ALWAYS 390 + #else 391 + ANNOTATE_UNRET_SAFE 392 + ret 393 + #endif 387 394 int3 388 395 SYM_CODE_END(__x86_return_thunk) 389 396 EXPORT_SYMBOL(__x86_return_thunk)
+19 -10
block/bdev.c
··· 645 645 bdev_write_inode(bdev); 646 646 } 647 647 648 + static void blkdev_put_whole(struct block_device *bdev) 649 + { 650 + if (atomic_dec_and_test(&bdev->bd_openers)) 651 + blkdev_flush_mapping(bdev); 652 + if (bdev->bd_disk->fops->release) 653 + bdev->bd_disk->fops->release(bdev->bd_disk); 654 + } 655 + 648 656 static int blkdev_get_whole(struct block_device *bdev, blk_mode_t mode) 649 657 { 650 658 struct gendisk *disk = bdev->bd_disk; ··· 671 663 672 664 if (!atomic_read(&bdev->bd_openers)) 673 665 set_init_blocksize(bdev); 674 - if (test_bit(GD_NEED_PART_SCAN, &disk->state)) 675 - bdev_disk_changed(disk, false); 676 666 atomic_inc(&bdev->bd_openers); 667 + if (test_bit(GD_NEED_PART_SCAN, &disk->state)) { 668 + /* 669 + * Only return scanning errors if we are called from contexts 670 + * that explicitly want them, e.g. the BLKRRPART ioctl. 671 + */ 672 + ret = bdev_disk_changed(disk, false); 673 + if (ret && (mode & BLK_OPEN_STRICT_SCAN)) { 674 + blkdev_put_whole(bdev); 675 + return ret; 676 + } 677 + } 677 678 return 0; 678 - } 679 - 680 - static void blkdev_put_whole(struct block_device *bdev) 681 - { 682 - if (atomic_dec_and_test(&bdev->bd_openers)) 683 - blkdev_flush_mapping(bdev); 684 - if (bdev->bd_disk->fops->release) 685 - bdev->bd_disk->fops->release(bdev->bd_disk); 686 679 } 687 680 688 681 static int blkdev_get_part(struct block_device *part, blk_mode_t mode)
+5 -2
block/blk-iocost.c
··· 1439 1439 lockdep_assert_held(&iocg->ioc->lock); 1440 1440 lockdep_assert_held(&iocg->waitq.lock); 1441 1441 1442 - /* make sure that nobody messed with @iocg */ 1443 - WARN_ON_ONCE(list_empty(&iocg->active_list)); 1442 + /* 1443 + * make sure that nobody messed with @iocg. Check iocg->pd.online 1444 + * to avoid warn when removing blkcg or disk. 1445 + */ 1446 + WARN_ON_ONCE(list_empty(&iocg->active_list) && iocg->pd.online); 1444 1447 WARN_ON_ONCE(iocg->inuse > 1); 1445 1448 1446 1449 iocg->abs_vdebt -= min(abs_vpay, iocg->abs_vdebt);
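The blk-iocost hunk narrows the warning condition: an empty active_list is only suspicious while the iocg is still online; during blkcg or disk removal it is expected. A tiny sketch of the gated predicate (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the WARN_ON_ONCE condition change: warn about an empty
 * active list only while the policy data is still online, so teardown
 * paths no longer trip the warning.
 */
static bool should_warn(bool active_list_empty, bool pd_online)
{
	return active_list_empty && pd_online;
}
```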
+2 -1
block/ioctl.c
··· 563 563 return -EACCES; 564 564 if (bdev_is_partition(bdev)) 565 565 return -EINVAL; 566 - return disk_scan_partitions(bdev->bd_disk, mode); 566 + return disk_scan_partitions(bdev->bd_disk, 567 + mode | BLK_OPEN_STRICT_SCAN); 567 568 case BLKTRACESTART: 568 569 case BLKTRACESTOP: 569 570 case BLKTRACETEARDOWN:
+1 -1
drivers/accessibility/speakup/main.c
··· 574 574 } 575 575 attr_ch = get_char(vc, (u_short *)tmp_pos, &spk_attr); 576 576 buf[cnt++] = attr_ch; 577 - while (tmpx < vc->vc_cols - 1) { 577 + while (tmpx < vc->vc_cols - 1 && cnt < sizeof(buf) - 1) { 578 578 tmp_pos += 2; 579 579 tmpx++; 580 580 ch = get_char(vc, (u_short *)tmp_pos, &temp);
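The speakup fix above adds `cnt < sizeof(buf) - 1` so the copy loop stops before overrunning the destination buffer, not just at the end of the console row. A userspace sketch of the bounded-copy pattern, with hypothetical names (`copy_row` is not a kernel function):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Stop copying either at the end of the source row or when the
 * destination is one byte short of full, leaving room for a
 * terminator — the same double bound the diff introduces.
 */
static size_t copy_row(char *buf, size_t buf_size,
		       const char *row, size_t cols)
{
	size_t cnt = 0, x = 0;

	while (x < cols && cnt < buf_size - 1)
		buf[cnt++] = row[x++];
	buf[cnt] = '\0';
	return cnt;
}

/* Helper for exercising the bound with a fixed source row. */
static size_t copied_len(size_t buf_size, size_t cols)
{
	char buf[32];
	const char row[] = "abcdefghijklmnop";

	return copy_row(buf, buf_size, row, cols);
}
```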
+3 -1
drivers/android/binder.c
··· 1708 1708 size_t object_size = 0; 1709 1709 1710 1710 read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset); 1711 - if (offset > buffer->data_size || read_size < sizeof(*hdr)) 1711 + if (offset > buffer->data_size || read_size < sizeof(*hdr) || 1712 + !IS_ALIGNED(offset, sizeof(u32))) 1712 1713 return 0; 1714 + 1713 1715 if (u) { 1714 1716 if (copy_from_user(object, u + offset, read_size)) 1715 1717 return 0;
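The binder hunk rejects object offsets that are not u32-aligned, in addition to the existing size checks. A userspace sketch of the validation (sizes and names are illustrative, not binder's real layout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Portable stand-in for the kernel's IS_ALIGNED() macro. */
#define IS_ALIGNED(x, a) (((x) % (a)) == 0)

/*
 * Mirrors the amended check: an object offset must lie inside the
 * buffer, leave room for at least a header, and be u32-aligned.
 */
static bool offset_valid(size_t offset, size_t data_size, size_t hdr_size)
{
	if (offset > data_size)
		return false;
	if (data_size - offset < hdr_size)
		return false;
	return IS_ALIGNED(offset, sizeof(uint32_t));
}
```

Checking the bounds before the alignment keeps the subtraction well-defined, which is why the diff adds the alignment test to the same early-return condition.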
+3 -4
drivers/bluetooth/btmtk.c
··· 380 380 switch (data->cd_info.state) { 381 381 case HCI_DEVCOREDUMP_IDLE: 382 382 err = hci_devcd_init(hdev, MTK_COREDUMP_SIZE); 383 - if (err < 0) 383 + if (err < 0) { 384 + kfree_skb(skb); 384 385 break; 386 + } 385 387 data->cd_info.cnt = 0; 386 388 387 389 /* It is supposed coredump can be done within 5 seconds */ ··· 408 406 409 407 break; 410 408 } 411 - 412 - if (err < 0) 413 - kfree_skb(skb); 414 409 415 410 return err; 416 411 }
+38
drivers/bluetooth/btqca.c
··· 15 15 16 16 #define VERSION "0.1" 17 17 18 + #define QCA_BDADDR_DEFAULT (&(bdaddr_t) {{ 0xad, 0x5a, 0x00, 0x00, 0x00, 0x00 }}) 19 + 18 20 int qca_read_soc_version(struct hci_dev *hdev, struct qca_btsoc_version *ver, 19 21 enum qca_btsoc_type soc_type) 20 22 { ··· 614 612 } 615 613 EXPORT_SYMBOL_GPL(qca_set_bdaddr_rome); 616 614 615 + static int qca_check_bdaddr(struct hci_dev *hdev) 616 + { 617 + struct hci_rp_read_bd_addr *bda; 618 + struct sk_buff *skb; 619 + int err; 620 + 621 + if (bacmp(&hdev->public_addr, BDADDR_ANY)) 622 + return 0; 623 + 624 + skb = __hci_cmd_sync(hdev, HCI_OP_READ_BD_ADDR, 0, NULL, 625 + HCI_INIT_TIMEOUT); 626 + if (IS_ERR(skb)) { 627 + err = PTR_ERR(skb); 628 + bt_dev_err(hdev, "Failed to read device address (%d)", err); 629 + return err; 630 + } 631 + 632 + if (skb->len != sizeof(*bda)) { 633 + bt_dev_err(hdev, "Device address length mismatch"); 634 + kfree_skb(skb); 635 + return -EIO; 636 + } 637 + 638 + bda = (struct hci_rp_read_bd_addr *)skb->data; 639 + if (!bacmp(&bda->bdaddr, QCA_BDADDR_DEFAULT)) 640 + set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks); 641 + 642 + kfree_skb(skb); 643 + 644 + return 0; 645 + } 646 + 617 647 static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size, 618 648 struct qca_btsoc_version ver, u8 rom_ver, u16 bid) 619 649 { ··· 851 817 default: 852 818 break; 853 819 } 820 + 821 + err = qca_check_bdaddr(hdev); 822 + if (err) 823 + return err; 854 824 855 825 bt_dev_info(hdev, "QCA setup on UART is completed"); 856 826
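The new qca_check_bdaddr() above reads the controller's address and, if it still reports the factory default, sets a quirk so a real address is taken from firmware properties instead. A minimal sketch of the default-address detection (the bdaddr_t here is a 6-byte stand-in, not the kernel's type):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Illustrative 6-byte device address, byte order as in the diff. */
typedef struct { uint8_t b[6]; } bdaddr_t;

/* The default address bytes the diff compares against. */
static const bdaddr_t qca_default = {{ 0xad, 0x5a, 0x00, 0x00, 0x00, 0x00 }};

/*
 * Sketch of the check: a controller still carrying the default
 * address needs one supplied externally (the kernel sets
 * HCI_QUIRK_USE_BDADDR_PROPERTY in that case).
 */
static bool needs_bdaddr_property(const bdaddr_t *addr)
{
	return memcmp(addr->b, qca_default.b, sizeof(addr->b)) == 0;
}

/* Helper: perturb the first byte of the default and re-check. */
static bool default_detected(uint8_t first_byte)
{
	bdaddr_t a = qca_default;

	a.b[0] = first_byte;
	return needs_bdaddr_property(&a);
}
```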
+6 -5
drivers/bluetooth/btusb.c
··· 542 542 /* Realtek 8852BE Bluetooth devices */ 543 543 { USB_DEVICE(0x0cb8, 0xc559), .driver_info = BTUSB_REALTEK | 544 544 BTUSB_WIDEBAND_SPEECH }, 545 + { USB_DEVICE(0x0bda, 0x4853), .driver_info = BTUSB_REALTEK | 546 + BTUSB_WIDEBAND_SPEECH }, 545 547 { USB_DEVICE(0x0bda, 0x887b), .driver_info = BTUSB_REALTEK | 546 548 BTUSB_WIDEBAND_SPEECH }, 547 549 { USB_DEVICE(0x0bda, 0xb85b), .driver_info = BTUSB_REALTEK | ··· 3482 3480 3483 3481 static void btusb_coredump_qca(struct hci_dev *hdev) 3484 3482 { 3483 + int err; 3485 3484 static const u8 param[] = { 0x26 }; 3486 - struct sk_buff *skb; 3487 3485 3488 - skb = __hci_cmd_sync(hdev, 0xfc0c, 1, param, HCI_CMD_TIMEOUT); 3489 - if (IS_ERR(skb)) 3490 - bt_dev_err(hdev, "%s: triggle crash failed (%ld)", __func__, PTR_ERR(skb)); 3491 - kfree_skb(skb); 3486 + err = __hci_cmd_send(hdev, 0xfc0c, 1, param); 3487 + if (err < 0) 3488 + bt_dev_err(hdev, "%s: triggle crash failed (%d)", __func__, err); 3492 3489 } 3493 3490 3494 3491 /*
+20 -9
drivers/bluetooth/hci_qca.c
··· 1672 1672 struct hci_uart *hu = hci_get_drvdata(hdev); 1673 1673 bool wakeup; 1674 1674 1675 + if (!hu->serdev) 1676 + return true; 1677 + 1675 1678 /* BT SoC attached through the serial bus is handled by the serdev driver. 1676 1679 * So we need to use the device handle of the serdev driver to get the 1677 1680 * status of device may wakeup. ··· 1908 1905 case QCA_WCN6750: 1909 1906 case QCA_WCN6855: 1910 1907 case QCA_WCN7850: 1911 - set_bit(HCI_QUIRK_USE_BDADDR_PROPERTY, &hdev->quirks); 1912 - 1913 1908 qcadev = serdev_device_get_drvdata(hu->serdev); 1914 1909 if (qcadev->bdaddr_property_broken) 1915 1910 set_bit(HCI_QUIRK_BDADDR_PROPERTY_BROKEN, &hdev->quirks); ··· 1958 1957 qca_debugfs_init(hdev); 1959 1958 hu->hdev->hw_error = qca_hw_error; 1960 1959 hu->hdev->cmd_timeout = qca_cmd_timeout; 1961 - if (device_can_wakeup(hu->serdev->ctrl->dev.parent)) 1962 - hu->hdev->wakeup = qca_wakeup; 1960 + if (hu->serdev) { 1961 + if (device_can_wakeup(hu->serdev->ctrl->dev.parent)) 1962 + hu->hdev->wakeup = qca_wakeup; 1963 + } 1963 1964 } else if (ret == -ENOENT) { 1964 1965 /* No patch/nvm-config found, run with original fw/config */ 1965 1966 set_bit(QCA_ROM_FW, &qca->flags); ··· 2332 2329 (data->soc_type == QCA_WCN6750 || 2333 2330 data->soc_type == QCA_WCN6855)) { 2334 2331 dev_err(&serdev->dev, "failed to acquire BT_EN gpio\n"); 2335 - power_ctrl_enabled = false; 2332 + return PTR_ERR(qcadev->bt_en); 2336 2333 } 2334 + 2335 + if (!qcadev->bt_en) 2336 + power_ctrl_enabled = false; 2337 2337 2338 2338 qcadev->sw_ctrl = devm_gpiod_get_optional(&serdev->dev, "swctrl", 2339 2339 GPIOD_IN); 2340 2340 if (IS_ERR(qcadev->sw_ctrl) && 2341 2341 (data->soc_type == QCA_WCN6750 || 2342 2342 data->soc_type == QCA_WCN6855 || 2343 - data->soc_type == QCA_WCN7850)) 2344 - dev_warn(&serdev->dev, "failed to acquire SW_CTRL gpio\n"); 2343 + data->soc_type == QCA_WCN7850)) { 2344 + dev_err(&serdev->dev, "failed to acquire SW_CTRL gpio\n"); 2345 + return PTR_ERR(qcadev->sw_ctrl); 
2346 + } 2345 2347 2346 2348 qcadev->susclk = devm_clk_get_optional(&serdev->dev, NULL); 2347 2349 if (IS_ERR(qcadev->susclk)) { ··· 2365 2357 qcadev->bt_en = devm_gpiod_get_optional(&serdev->dev, "enable", 2366 2358 GPIOD_OUT_LOW); 2367 2359 if (IS_ERR(qcadev->bt_en)) { 2368 - dev_warn(&serdev->dev, "failed to acquire enable gpio\n"); 2369 - power_ctrl_enabled = false; 2360 + dev_err(&serdev->dev, "failed to acquire enable gpio\n"); 2361 + return PTR_ERR(qcadev->bt_en); 2370 2362 } 2363 + 2364 + if (!qcadev->bt_en) 2365 + power_ctrl_enabled = false; 2371 2366 2372 2367 qcadev->susclk = devm_clk_get_optional(&serdev->dev, NULL); 2373 2368 if (IS_ERR(qcadev->susclk)) {
+135 -38
drivers/clk/clk.c
··· 37 37 static HLIST_HEAD(clk_orphan_list); 38 38 static LIST_HEAD(clk_notifier_list); 39 39 40 + /* List of registered clks that use runtime PM */ 41 + static HLIST_HEAD(clk_rpm_list); 42 + static DEFINE_MUTEX(clk_rpm_list_lock); 43 + 40 44 static const struct hlist_head *all_lists[] = { 41 45 &clk_root_list, 42 46 &clk_orphan_list, ··· 63 59 struct clk_hw *hw; 64 60 struct module *owner; 65 61 struct device *dev; 62 + struct hlist_node rpm_node; 66 63 struct device_node *of_node; 67 64 struct clk_core *parent; 68 65 struct clk_parent_map *parents; ··· 125 120 return; 126 121 127 122 pm_runtime_put_sync(core->dev); 123 + } 124 + 125 + /** 126 + * clk_pm_runtime_get_all() - Runtime "get" all clk provider devices 127 + * 128 + * Call clk_pm_runtime_get() on all runtime PM enabled clks in the clk tree so 129 + * that disabling unused clks avoids a deadlock where a device is runtime PM 130 + * resuming/suspending and the runtime PM callback is trying to grab the 131 + * prepare_lock for something like clk_prepare_enable() while 132 + * clk_disable_unused_subtree() holds the prepare_lock and is trying to runtime 133 + * PM resume/suspend the device as well. 134 + * 135 + * Context: Acquires the 'clk_rpm_list_lock' and returns with the lock held on 136 + * success. Otherwise the lock is released on failure. 137 + * 138 + * Return: 0 on success, negative errno otherwise. 139 + */ 140 + static int clk_pm_runtime_get_all(void) 141 + { 142 + int ret; 143 + struct clk_core *core, *failed; 144 + 145 + /* 146 + * Grab the list lock to prevent any new clks from being registered 147 + * or unregistered until clk_pm_runtime_put_all(). 148 + */ 149 + mutex_lock(&clk_rpm_list_lock); 150 + 151 + /* 152 + * Runtime PM "get" all the devices that are needed for the clks 153 + * currently registered. Do this without holding the prepare_lock, to 154 + * avoid the deadlock. 
155 + */ 156 + hlist_for_each_entry(core, &clk_rpm_list, rpm_node) { 157 + ret = clk_pm_runtime_get(core); 158 + if (ret) { 159 + failed = core; 160 + pr_err("clk: Failed to runtime PM get '%s' for clk '%s'\n", 161 + dev_name(failed->dev), failed->name); 162 + goto err; 163 + } 164 + } 165 + 166 + return 0; 167 + 168 + err: 169 + hlist_for_each_entry(core, &clk_rpm_list, rpm_node) { 170 + if (core == failed) 171 + break; 172 + 173 + clk_pm_runtime_put(core); 174 + } 175 + mutex_unlock(&clk_rpm_list_lock); 176 + 177 + return ret; 178 + } 179 + 180 + /** 181 + * clk_pm_runtime_put_all() - Runtime "put" all clk provider devices 182 + * 183 + * Put the runtime PM references taken in clk_pm_runtime_get_all() and release 184 + * the 'clk_rpm_list_lock'. 185 + */ 186 + static void clk_pm_runtime_put_all(void) 187 + { 188 + struct clk_core *core; 189 + 190 + hlist_for_each_entry(core, &clk_rpm_list, rpm_node) 191 + clk_pm_runtime_put(core); 192 + mutex_unlock(&clk_rpm_list_lock); 193 + } 194 + 195 + static void clk_pm_runtime_init(struct clk_core *core) 196 + { 197 + struct device *dev = core->dev; 198 + 199 + if (dev && pm_runtime_enabled(dev)) { 200 + core->rpm_enabled = true; 201 + 202 + mutex_lock(&clk_rpm_list_lock); 203 + hlist_add_head(&core->rpm_node, &clk_rpm_list); 204 + mutex_unlock(&clk_rpm_list_lock); 205 + } 128 206 } 129 207 130 208 /*** locking ***/ ··· 1469 1381 if (core->flags & CLK_IGNORE_UNUSED) 1470 1382 return; 1471 1383 1472 - if (clk_pm_runtime_get(core)) 1473 - return; 1474 - 1475 1384 if (clk_core_is_prepared(core)) { 1476 1385 trace_clk_unprepare(core); 1477 1386 if (core->ops->unprepare_unused) ··· 1477 1392 core->ops->unprepare(core->hw); 1478 1393 trace_clk_unprepare_complete(core); 1479 1394 } 1480 - 1481 - clk_pm_runtime_put(core); 1482 1395 } 1483 1396 1484 1397 static void __init clk_disable_unused_subtree(struct clk_core *core) ··· 1491 1408 1492 1409 if (core->flags & CLK_OPS_PARENT_ENABLE) 1493 1410 
clk_core_prepare_enable(core->parent); 1494 - 1495 - if (clk_pm_runtime_get(core)) 1496 - goto unprepare_out; 1497 1411 1498 1412 flags = clk_enable_lock(); 1499 1413 ··· 1516 1436 1517 1437 unlock_out: 1518 1438 clk_enable_unlock(flags); 1519 - clk_pm_runtime_put(core); 1520 - unprepare_out: 1521 1439 if (core->flags & CLK_OPS_PARENT_ENABLE) 1522 1440 clk_core_disable_unprepare(core->parent); 1523 1441 } ··· 1531 1453 static int __init clk_disable_unused(void) 1532 1454 { 1533 1455 struct clk_core *core; 1456 + int ret; 1534 1457 1535 1458 if (clk_ignore_unused) { 1536 1459 pr_warn("clk: Not disabling unused clocks\n"); ··· 1540 1461 1541 1462 pr_info("clk: Disabling unused clocks\n"); 1542 1463 1464 + ret = clk_pm_runtime_get_all(); 1465 + if (ret) 1466 + return ret; 1467 + /* 1468 + * Grab the prepare lock to keep the clk topology stable while iterating 1469 + * over clks. 1470 + */ 1543 1471 clk_prepare_lock(); 1544 1472 1545 1473 hlist_for_each_entry(core, &clk_root_list, child_node) ··· 1562 1476 clk_unprepare_unused_subtree(core); 1563 1477 1564 1478 clk_prepare_unlock(); 1479 + 1480 + clk_pm_runtime_put_all(); 1565 1481 1566 1482 return 0; 1567 1483 } ··· 3340 3252 { 3341 3253 struct clk_core *child; 3342 3254 3343 - clk_pm_runtime_get(c); 3344 3255 clk_summary_show_one(s, c, level); 3345 - clk_pm_runtime_put(c); 3346 3256 3347 3257 hlist_for_each_entry(child, &c->children, child_node) 3348 3258 clk_summary_show_subtree(s, child, level + 1); ··· 3350 3264 { 3351 3265 struct clk_core *c; 3352 3266 struct hlist_head **lists = s->private; 3267 + int ret; 3353 3268 3354 3269 seq_puts(s, " enable prepare protect duty hardware connection\n"); 3355 3270 seq_puts(s, " clock count count count rate accuracy phase cycle enable consumer id\n"); 3356 3271 seq_puts(s, "---------------------------------------------------------------------------------------------------------------------------------------------\n"); 3357 3272 3273 + ret = clk_pm_runtime_get_all(); 3274 + if 
(ret) 3275 + return ret; 3358 3276 3359 3277 clk_prepare_lock(); 3360 3278 ··· 3367 3277 clk_summary_show_subtree(s, c, 0); 3368 3278 3369 3279 clk_prepare_unlock(); 3280 + clk_pm_runtime_put_all(); 3370 3281 3371 3282 return 0; 3372 3283 } ··· 3415 3324 struct clk_core *c; 3416 3325 bool first_node = true; 3417 3326 struct hlist_head **lists = s->private; 3327 + int ret; 3328 + 3329 + ret = clk_pm_runtime_get_all(); 3330 + if (ret) 3331 + return ret; 3418 3332 3419 3333 seq_putc(s, '{'); 3334 + 3420 3335 clk_prepare_lock(); 3421 3336 3422 3337 for (; *lists; lists++) { ··· 3435 3338 } 3436 3339 3437 3340 clk_prepare_unlock(); 3341 + clk_pm_runtime_put_all(); 3438 3342 3439 3343 seq_puts(s, "}\n"); 3440 3344 return 0; ··· 4079 3981 } 4080 3982 4081 3983 clk_core_reparent_orphans_nolock(); 4082 - 4083 - kref_init(&core->ref); 4084 3984 out: 4085 3985 clk_pm_runtime_put(core); 4086 3986 unlock: ··· 4307 4211 kfree(core->parents); 4308 4212 } 4309 4213 4214 + /* Free memory allocated for a struct clk_core */ 4215 + static void __clk_release(struct kref *ref) 4216 + { 4217 + struct clk_core *core = container_of(ref, struct clk_core, ref); 4218 + 4219 + if (core->rpm_enabled) { 4220 + mutex_lock(&clk_rpm_list_lock); 4221 + hlist_del(&core->rpm_node); 4222 + mutex_unlock(&clk_rpm_list_lock); 4223 + } 4224 + 4225 + clk_core_free_parent_map(core); 4226 + kfree_const(core->name); 4227 + kfree(core); 4228 + } 4229 + 4310 4230 static struct clk * 4311 4231 __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw) 4312 4232 { ··· 4343 4231 goto fail_out; 4344 4232 } 4345 4233 4234 + kref_init(&core->ref); 4235 + 4346 4236 core->name = kstrdup_const(init->name, GFP_KERNEL); 4347 4237 if (!core->name) { 4348 4238 ret = -ENOMEM; ··· 4357 4243 } 4358 4244 core->ops = init->ops; 4359 4245 4360 - if (dev && pm_runtime_enabled(dev)) 4361 - core->rpm_enabled = true; 4362 4246 core->dev = dev; 4247 + clk_pm_runtime_init(core); 4363 4248 core->of_node = np; 4364 4249 
if (dev && dev->driver) 4365 4250 core->owner = dev->driver->owner; ··· 4398 4285 hw->clk = NULL; 4399 4286 4400 4287 fail_create_clk: 4401 - clk_core_free_parent_map(core); 4402 4288 fail_parents: 4403 4289 fail_ops: 4404 - kfree_const(core->name); 4405 4290 fail_name: 4406 - kfree(core); 4291 + kref_put(&core->ref, __clk_release); 4407 4292 fail_out: 4408 4293 return ERR_PTR(ret); 4409 4294 } ··· 4480 4369 return PTR_ERR_OR_ZERO(__clk_register(NULL, node, hw)); 4481 4370 } 4482 4371 EXPORT_SYMBOL_GPL(of_clk_hw_register); 4483 - 4484 - /* Free memory allocated for a clock. */ 4485 - static void __clk_release(struct kref *ref) 4486 - { 4487 - struct clk_core *core = container_of(ref, struct clk_core, ref); 4488 - 4489 - lockdep_assert_held(&prepare_lock); 4490 - 4491 - clk_core_free_parent_map(core); 4492 - kfree_const(core->name); 4493 - kfree(core); 4494 - } 4495 4372 4496 4373 /* 4497 4374 * Empty clk_ops for unregistered clocks. These are used temporarily ··· 4571 4472 if (ops == &clk_nodrv_ops) { 4572 4473 pr_err("%s: unregistered clock: %s\n", __func__, 4573 4474 clk->core->name); 4574 - goto unlock; 4475 + clk_prepare_unlock(); 4476 + return; 4575 4477 } 4576 4478 /* 4577 4479 * Assign empty clock ops for consumers that might still hold ··· 4606 4506 if (clk->core->protect_count) 4607 4507 pr_warn("%s: unregistering protected clock: %s\n", 4608 4508 __func__, clk->core->name); 4509 + clk_prepare_unlock(); 4609 4510 4610 4511 kref_put(&clk->core->ref, __clk_release); 4611 4512 free_clk(clk); 4612 - unlock: 4613 - clk_prepare_unlock(); 4614 4513 } 4615 4514 EXPORT_SYMBOL_GPL(clk_unregister); 4616 4515 ··· 4768 4669 if (clk->min_rate > 0 || clk->max_rate < ULONG_MAX) 4769 4670 clk_set_rate_range_nolock(clk, 0, ULONG_MAX); 4770 4671 4771 - owner = clk->core->owner; 4772 - kref_put(&clk->core->ref, __clk_release); 4773 - 4774 4672 clk_prepare_unlock(); 4775 4673 4674 + owner = clk->core->owner; 4675 + kref_put(&clk->core->ref, __clk_release); 4776 4676 
module_put(owner); 4777 - 4778 4677 free_clk(clk); 4779 4678 } 4780 4679
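The clk.c rework above centers on clk_pm_runtime_get_all(): take a runtime-PM reference on every registered clk before grabbing the prepare_lock, and on a partial failure release exactly the references already taken. A self-contained sketch of that acquire-all-with-rollback pattern, using counters as stand-ins for runtime-PM refcounts (all names here are illustrative):

```c
#include <assert.h>
#include <string.h>

#define NCLKS 4

static int refcnt[NCLKS];
static int fail_at;	/* index whose "get" should fail; -1 for none */

static int get_one(int i)
{
	if (i == fail_at)
		return -1;
	refcnt[i]++;
	return 0;
}

static void put_one(int i)
{
	refcnt[i]--;
}

/*
 * Mirrors clk_pm_runtime_get_all(): on failure, roll back only the
 * elements acquired before the failing one, then report the error.
 */
static int get_all(void)
{
	int i, failed = -1;

	for (i = 0; i < NCLKS; i++)
		if (get_one(i)) {
			failed = i;
			break;
		}
	if (failed < 0)
		return 0;
	for (i = 0; i < failed; i++)
		put_one(i);
	return -1;
}

/* Helper: run get_all() from a clean state and sum the refcounts. */
static int refs_after(int failing_index)
{
	int i, sum = 0;

	memset(refcnt, 0, sizeof(refcnt));
	fail_at = failing_index;
	get_all();
	for (i = 0; i < NCLKS; i++)
		sum += refcnt[i];
	return sum;
}
```

The invariant the rollback preserves — zero net references after a failed pass, one per element after a successful one — is what lets the kernel code pair this with a later clk_pm_runtime_put_all().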
+1 -1
drivers/clk/mediatek/clk-mt7988-infracfg.c
··· 156 156 GATE_INFRA0(CLK_INFRA_PCIE_PERI_26M_CK_P1, "infra_pcie_peri_ck_26m_ck_p1", 157 157 "csw_infra_f26m_sel", 8), 158 158 GATE_INFRA0(CLK_INFRA_PCIE_PERI_26M_CK_P2, "infra_pcie_peri_ck_26m_ck_p2", 159 - "csw_infra_f26m_sel", 9), 159 + "infra_pcie_peri_ck_26m_ck_p3", 9), 160 160 GATE_INFRA0(CLK_INFRA_PCIE_PERI_26M_CK_P3, "infra_pcie_peri_ck_26m_ck_p3", 161 161 "csw_infra_f26m_sel", 10), 162 162 /* INFRA1 */
+15
drivers/clk/mediatek/clk-mtk.c
··· 13 13 #include <linux/of.h> 14 14 #include <linux/of_address.h> 15 15 #include <linux/platform_device.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/slab.h> 17 18 18 19 #include "clk-mtk.h" ··· 495 494 return IS_ERR(base) ? PTR_ERR(base) : -ENOMEM; 496 495 } 497 496 497 + 498 + devm_pm_runtime_enable(&pdev->dev); 499 + /* 500 + * Do a pm_runtime_resume_and_get() to workaround a possible 501 + * deadlock between clk_register() and the genpd framework. 502 + */ 503 + r = pm_runtime_resume_and_get(&pdev->dev); 504 + if (r) 505 + return r; 506 + 498 507 /* Calculate how many clk_hw_onecell_data entries to allocate */ 499 508 num_clks = mcd->num_clks + mcd->num_composite_clks; 500 509 num_clks += mcd->num_fixed_clks + mcd->num_factor_clks; ··· 585 574 goto unregister_clks; 586 575 } 587 576 577 + pm_runtime_put(&pdev->dev); 578 + 588 579 return r; 589 580 590 581 unregister_clks: ··· 617 604 free_base: 618 605 if (mcd->shared_io && base) 619 606 iounmap(base); 607 + 608 + pm_runtime_put(&pdev->dev); 620 609 return r; 621 610 } 622 611
+12 -23
drivers/comedi/drivers/vmk80xx.c
··· 641 641 struct vmk80xx_private *devpriv = dev->private; 642 642 struct usb_interface *intf = comedi_to_usb_interface(dev); 643 643 struct usb_host_interface *iface_desc = intf->cur_altsetting; 644 - struct usb_endpoint_descriptor *ep_desc; 645 - int i; 644 + struct usb_endpoint_descriptor *ep_rx_desc, *ep_tx_desc; 645 + int ret; 646 646 647 - if (iface_desc->desc.bNumEndpoints != 2) 647 + if (devpriv->model == VMK8061_MODEL) 648 + ret = usb_find_common_endpoints(iface_desc, &ep_rx_desc, 649 + &ep_tx_desc, NULL, NULL); 650 + else 651 + ret = usb_find_common_endpoints(iface_desc, NULL, NULL, 652 + &ep_rx_desc, &ep_tx_desc); 653 + 654 + if (ret) 648 655 return -ENODEV; 649 656 650 - for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) { 651 - ep_desc = &iface_desc->endpoint[i].desc; 652 - 653 - if (usb_endpoint_is_int_in(ep_desc) || 654 - usb_endpoint_is_bulk_in(ep_desc)) { 655 - if (!devpriv->ep_rx) 656 - devpriv->ep_rx = ep_desc; 657 - continue; 658 - } 659 - 660 - if (usb_endpoint_is_int_out(ep_desc) || 661 - usb_endpoint_is_bulk_out(ep_desc)) { 662 - if (!devpriv->ep_tx) 663 - devpriv->ep_tx = ep_desc; 664 - continue; 665 - } 666 - } 667 - 668 - if (!devpriv->ep_rx || !devpriv->ep_tx) 669 - return -ENODEV; 657 + devpriv->ep_rx = ep_rx_desc; 658 + devpriv->ep_tx = ep_tx_desc; 670 659 671 660 if (!usb_endpoint_maxp(devpriv->ep_rx) || !usb_endpoint_maxp(devpriv->ep_tx)) 672 661 return -EINVAL;
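The vmk80xx hunk replaces an open-coded endpoint loop with usb_find_common_endpoints(). A rough userspace sketch of what that helper does for this driver — pick the first IN and first OUT endpoint, failing if either is missing (the real kernel helper distinguishes bulk and interrupt endpoints; this reduces them to direction only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for an endpoint descriptor: direction only. */
struct ep { bool is_in; };

static int find_common_endpoints(const struct ep *eps, size_t n,
				 const struct ep **rx, const struct ep **tx)
{
	size_t i;

	*rx = *tx = NULL;
	for (i = 0; i < n; i++) {
		if (eps[i].is_in && !*rx)
			*rx = &eps[i];
		else if (!eps[i].is_in && !*tx)
			*tx = &eps[i];
	}
	return (*rx && *tx) ? 0 : -19 /* -ENODEV */;
}

/* Helper: probe a two-endpoint interface with given directions. */
static int demo_find(bool ep0_in, bool ep1_in)
{
	struct ep eps[2] = { { ep0_in }, { ep1_in } };
	const struct ep *rx, *tx;

	return demo_find == NULL ? 0 :
	       find_common_endpoints(eps, 2, &rx, &tx);
}
```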
+33 -25
drivers/dpll/dpll_core.c
··· 42 42 struct list_head list; 43 43 const struct dpll_pin_ops *ops; 44 44 void *priv; 45 + void *cookie; 45 46 }; 46 47 47 48 struct dpll_device *dpll_device_get_by_id(int id) ··· 55 54 56 55 static struct dpll_pin_registration * 57 56 dpll_pin_registration_find(struct dpll_pin_ref *ref, 58 - const struct dpll_pin_ops *ops, void *priv) 57 + const struct dpll_pin_ops *ops, void *priv, 58 + void *cookie) 59 59 { 60 60 struct dpll_pin_registration *reg; 61 61 62 62 list_for_each_entry(reg, &ref->registration_list, list) { 63 - if (reg->ops == ops && reg->priv == priv) 63 + if (reg->ops == ops && reg->priv == priv && 64 + reg->cookie == cookie) 64 65 return reg; 65 66 } 66 67 return NULL; ··· 70 67 71 68 static int 72 69 dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin, 73 - const struct dpll_pin_ops *ops, void *priv) 70 + const struct dpll_pin_ops *ops, void *priv, 71 + void *cookie) 74 72 { 75 73 struct dpll_pin_registration *reg; 76 74 struct dpll_pin_ref *ref; ··· 82 78 xa_for_each(xa_pins, i, ref) { 83 79 if (ref->pin != pin) 84 80 continue; 85 - reg = dpll_pin_registration_find(ref, ops, priv); 81 + reg = dpll_pin_registration_find(ref, ops, priv, cookie); 86 82 if (reg) { 87 83 refcount_inc(&ref->refcount); 88 84 return 0; ··· 115 111 } 116 112 reg->ops = ops; 117 113 reg->priv = priv; 114 + reg->cookie = cookie; 118 115 if (ref_exists) 119 116 refcount_inc(&ref->refcount); 120 117 list_add_tail(&reg->list, &ref->registration_list); ··· 124 119 } 125 120 126 121 static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin, 127 - const struct dpll_pin_ops *ops, void *priv) 122 + const struct dpll_pin_ops *ops, void *priv, 123 + void *cookie) 128 124 { 129 125 struct dpll_pin_registration *reg; 130 126 struct dpll_pin_ref *ref; ··· 134 128 xa_for_each(xa_pins, i, ref) { 135 129 if (ref->pin != pin) 136 130 continue; 137 - reg = dpll_pin_registration_find(ref, ops, priv); 131 + reg = dpll_pin_registration_find(ref, ops, priv, 
cookie); 138 132 if (WARN_ON(!reg)) 139 133 return -EINVAL; 140 134 list_del(&reg->list); ··· 152 146 153 147 static int 154 148 dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll, 155 - const struct dpll_pin_ops *ops, void *priv) 149 + const struct dpll_pin_ops *ops, void *priv, void *cookie) 156 150 { 157 151 struct dpll_pin_registration *reg; 158 152 struct dpll_pin_ref *ref; ··· 163 157 xa_for_each(xa_dplls, i, ref) { 164 158 if (ref->dpll != dpll) 165 159 continue; 166 - reg = dpll_pin_registration_find(ref, ops, priv); 160 + reg = dpll_pin_registration_find(ref, ops, priv, cookie); 167 161 if (reg) { 168 162 refcount_inc(&ref->refcount); 169 163 return 0; ··· 196 190 } 197 191 reg->ops = ops; 198 192 reg->priv = priv; 193 + reg->cookie = cookie; 199 194 if (ref_exists) 200 195 refcount_inc(&ref->refcount); 201 196 list_add_tail(&reg->list, &ref->registration_list); ··· 206 199 207 200 static void 208 201 dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll, 209 - const struct dpll_pin_ops *ops, void *priv) 202 + const struct dpll_pin_ops *ops, void *priv, void *cookie) 210 203 { 211 204 struct dpll_pin_registration *reg; 212 205 struct dpll_pin_ref *ref; ··· 215 208 xa_for_each(xa_dplls, i, ref) { 216 209 if (ref->dpll != dpll) 217 210 continue; 218 - reg = dpll_pin_registration_find(ref, ops, priv); 211 + reg = dpll_pin_registration_find(ref, ops, priv, cookie); 219 212 if (WARN_ON(!reg)) 220 213 return; 221 214 list_del(&reg->list); ··· 601 594 602 595 static int 603 596 __dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin, 604 - const struct dpll_pin_ops *ops, void *priv) 597 + const struct dpll_pin_ops *ops, void *priv, void *cookie) 605 598 { 606 599 int ret; 607 600 608 - ret = dpll_xa_ref_pin_add(&dpll->pin_refs, pin, ops, priv); 601 + ret = dpll_xa_ref_pin_add(&dpll->pin_refs, pin, ops, priv, cookie); 609 602 if (ret) 610 603 return ret; 611 - ret = dpll_xa_ref_dpll_add(&pin->dpll_refs, dpll, ops, 
priv); 604 + ret = dpll_xa_ref_dpll_add(&pin->dpll_refs, dpll, ops, priv, cookie); 612 605 if (ret) 613 606 goto ref_pin_del; 614 607 xa_set_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED); ··· 617 610 return ret; 618 611 619 612 ref_pin_del: 620 - dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv); 613 + dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv, cookie); 621 614 return ret; 622 615 } 623 616 ··· 649 642 dpll->clock_id == pin->clock_id))) 650 643 ret = -EINVAL; 651 644 else 652 - ret = __dpll_pin_register(dpll, pin, ops, priv); 645 + ret = __dpll_pin_register(dpll, pin, ops, priv, NULL); 653 646 mutex_unlock(&dpll_lock); 654 647 655 648 return ret; ··· 658 651 659 652 static void 660 653 __dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin, 661 - const struct dpll_pin_ops *ops, void *priv) 654 + const struct dpll_pin_ops *ops, void *priv, void *cookie) 662 655 { 663 656 ASSERT_DPLL_PIN_REGISTERED(pin); 664 - dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv); 665 - dpll_xa_ref_dpll_del(&pin->dpll_refs, dpll, ops, priv); 657 + dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv, cookie); 658 + dpll_xa_ref_dpll_del(&pin->dpll_refs, dpll, ops, priv, cookie); 666 659 if (xa_empty(&pin->dpll_refs)) 667 660 xa_clear_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED); 668 661 } ··· 687 680 688 681 mutex_lock(&dpll_lock); 689 682 dpll_pin_delete_ntf(pin); 690 - __dpll_pin_unregister(dpll, pin, ops, priv); 683 + __dpll_pin_unregister(dpll, pin, ops, priv, NULL); 691 684 mutex_unlock(&dpll_lock); 692 685 } 693 686 EXPORT_SYMBOL_GPL(dpll_pin_unregister); ··· 723 716 return -EINVAL; 724 717 725 718 mutex_lock(&dpll_lock); 726 - ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv); 719 + ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv, pin); 727 720 if (ret) 728 721 goto unlock; 729 722 refcount_inc(&pin->refcount); 730 723 xa_for_each(&parent->dpll_refs, i, ref) { 731 - ret = __dpll_pin_register(ref->dpll, pin, ops, priv); 724 + 
ret = __dpll_pin_register(ref->dpll, pin, ops, priv, parent); 732 725 if (ret) { 733 726 stop = i; 734 727 goto dpll_unregister; ··· 742 735 dpll_unregister: 743 736 xa_for_each(&parent->dpll_refs, i, ref) 744 737 if (i < stop) { 745 - __dpll_pin_unregister(ref->dpll, pin, ops, priv); 738 + __dpll_pin_unregister(ref->dpll, pin, ops, priv, 739 + parent); 746 740 dpll_pin_delete_ntf(pin); 747 741 } 748 742 refcount_dec(&pin->refcount); 749 - dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv); 743 + dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); 750 744 unlock: 751 745 mutex_unlock(&dpll_lock); 752 746 return ret; ··· 772 764 773 765 mutex_lock(&dpll_lock); 774 766 dpll_pin_delete_ntf(pin); 775 - dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv); 767 + dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv, pin); 776 768 refcount_dec(&pin->refcount); 777 769 xa_for_each(&pin->dpll_refs, i, ref) 778 - __dpll_pin_unregister(ref->dpll, pin, ops, priv); 770 + __dpll_pin_unregister(ref->dpll, pin, ops, priv, parent); 779 771 mutex_unlock(&dpll_lock); 780 772 } 781 773 EXPORT_SYMBOL_GPL(dpll_pin_on_pin_unregister);
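The dpll_core.c change extends registration matching from the pair (ops, priv) to the triple (ops, priv, cookie), so pin-on-pin registrations that share ops and priv but belong to different parents no longer collide. A sketch of triple-keyed lookup, with a fixed array standing in for the kernel's registration list (all names illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct registration {
	const void *ops;
	void *priv;
	void *cookie;
	bool used;
};

#define MAX_REGS 8
static struct registration regs[MAX_REGS];

/* Match on the full (ops, priv, cookie) triple, as the diff does. */
static struct registration *
find_reg(const void *ops, void *priv, void *cookie)
{
	int i;

	for (i = 0; i < MAX_REGS; i++)
		if (regs[i].used && regs[i].ops == ops &&
		    regs[i].priv == priv && regs[i].cookie == cookie)
			return &regs[i];
	return NULL;
}

/* Add a registration unless the identical triple already exists. */
static bool add_reg(const void *ops, void *priv, void *cookie)
{
	int i;

	if (find_reg(ops, priv, cookie))
		return false;
	for (i = 0; i < MAX_REGS; i++)
		if (!regs[i].used) {
			regs[i] = (struct registration){ ops, priv,
							 cookie, true };
			return true;
		}
	return false;
}

/* Distinct objects to use as ops/priv/cookie values in the demo. */
static int demo_ops, demo_priv, parent_a, parent_b;
```

With only (ops, priv) as the key, the second registration below would be wrongly treated as a duplicate; the cookie keeps the two parents apart.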
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
··· 819 819 820 820 p->bytes_moved += ctx.bytes_moved; 821 821 if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && 822 - amdgpu_bo_in_cpu_visible_vram(bo)) 822 + amdgpu_res_cpu_visible(adev, bo->tbo.resource)) 823 823 p->bytes_moved_vis += ctx.bytes_moved; 824 824 825 825 if (unlikely(r == -ENOMEM) && domain != bo->allowed_domains) {
+11 -11
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 617 617 return r; 618 618 619 619 if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && 620 - bo->tbo.resource->mem_type == TTM_PL_VRAM && 621 - amdgpu_bo_in_cpu_visible_vram(bo)) 620 + amdgpu_res_cpu_visible(adev, bo->tbo.resource)) 622 621 amdgpu_cs_report_moved_bytes(adev, ctx.bytes_moved, 623 622 ctx.bytes_moved); 624 623 else ··· 1271 1272 void amdgpu_bo_get_memory(struct amdgpu_bo *bo, 1272 1273 struct amdgpu_mem_stats *stats) 1273 1274 { 1275 + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 1276 + struct ttm_resource *res = bo->tbo.resource; 1274 1277 uint64_t size = amdgpu_bo_size(bo); 1275 1278 struct drm_gem_object *obj; 1276 1279 unsigned int domain; 1277 1280 bool shared; 1278 1281 1279 1282 /* Abort if the BO doesn't currently have a backing store */ 1280 - if (!bo->tbo.resource) 1283 + if (!res) 1281 1284 return; 1282 1285 1283 1286 obj = &bo->tbo.base; 1284 1287 shared = drm_gem_object_is_shared_for_memory_stats(obj); 1285 1288 1286 - domain = amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type); 1289 + domain = amdgpu_mem_type_to_domain(res->mem_type); 1287 1290 switch (domain) { 1288 1291 case AMDGPU_GEM_DOMAIN_VRAM: 1289 1292 stats->vram += size; 1290 - if (amdgpu_bo_in_cpu_visible_vram(bo)) 1293 + if (amdgpu_res_cpu_visible(adev, bo->tbo.resource)) 1291 1294 stats->visible_vram += size; 1292 1295 if (shared) 1293 1296 stats->vram_shared += size; ··· 1390 1389 /* Remember that this BO was accessed by the CPU */ 1391 1390 abo->flags |= AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED; 1392 1391 1393 - if (bo->resource->mem_type != TTM_PL_VRAM) 1394 - return 0; 1395 - 1396 - if (amdgpu_bo_in_cpu_visible_vram(abo)) 1392 + if (amdgpu_res_cpu_visible(adev, bo->resource)) 1397 1393 return 0; 1398 1394 1399 1395 /* Can't move a pinned BO to visible VRAM */ ··· 1413 1415 1414 1416 /* this should never happen */ 1415 1417 if (bo->resource->mem_type == TTM_PL_VRAM && 1416 - !amdgpu_bo_in_cpu_visible_vram(abo)) 1418 + !amdgpu_res_cpu_visible(adev, 
bo->resource)) 1417 1419 return VM_FAULT_SIGBUS; 1418 1420 1419 1421 ttm_bo_move_to_lru_tail_unlocked(bo); ··· 1577 1579 */ 1578 1580 u64 amdgpu_bo_print_info(int id, struct amdgpu_bo *bo, struct seq_file *m) 1579 1581 { 1582 + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 1580 1583 struct dma_buf_attachment *attachment; 1581 1584 struct dma_buf *dma_buf; 1582 1585 const char *placement; ··· 1586 1587 1587 1588 if (dma_resv_trylock(bo->tbo.base.resv)) { 1588 1589 unsigned int domain; 1590 + 1589 1591 domain = amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type); 1590 1592 switch (domain) { 1591 1593 case AMDGPU_GEM_DOMAIN_VRAM: 1592 - if (amdgpu_bo_in_cpu_visible_vram(bo)) 1594 + if (amdgpu_res_cpu_visible(adev, bo->tbo.resource)) 1593 1595 placement = "VRAM VISIBLE"; 1594 1596 else 1595 1597 placement = "VRAM";
-22
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
··· 251 251 } 252 252 253 253 /** 254 - * amdgpu_bo_in_cpu_visible_vram - check if BO is (partly) in visible VRAM 255 - */ 256 - static inline bool amdgpu_bo_in_cpu_visible_vram(struct amdgpu_bo *bo) 257 - { 258 - struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 259 - struct amdgpu_res_cursor cursor; 260 - 261 - if (!bo->tbo.resource || bo->tbo.resource->mem_type != TTM_PL_VRAM) 262 - return false; 263 - 264 - amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor); 265 - while (cursor.remaining) { 266 - if (cursor.start < adev->gmc.visible_vram_size) 267 - return true; 268 - 269 - amdgpu_res_next(&cursor, cursor.size); 270 - } 271 - 272 - return false; 273 - } 274 - 275 - /** 276 254 * amdgpu_bo_explicit_sync - return whether the bo is explicitly synced 277 255 */ 278 256 static inline bool amdgpu_bo_explicit_sync(struct amdgpu_bo *bo)
+44 -33
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 133 133 134 134 } else if (!amdgpu_gmc_vram_full_visible(&adev->gmc) && 135 135 !(abo->flags & AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED) && 136 - amdgpu_bo_in_cpu_visible_vram(abo)) { 136 + amdgpu_res_cpu_visible(adev, bo->resource)) { 137 137 138 138 /* Try evicting to the CPU inaccessible part of VRAM 139 139 * first, but only set GTT as busy placement, so this ··· 403 403 return r; 404 404 } 405 405 406 + /** 407 + * amdgpu_res_cpu_visible - Check that resource can be accessed by CPU 408 + * @adev: amdgpu device 409 + * @res: the resource to check 410 + * 411 + * Returns: true if the full resource is CPU visible, false otherwise. 412 + */ 413 + bool amdgpu_res_cpu_visible(struct amdgpu_device *adev, 414 + struct ttm_resource *res) 415 + { 416 + struct amdgpu_res_cursor cursor; 417 + 418 + if (!res) 419 + return false; 420 + 421 + if (res->mem_type == TTM_PL_SYSTEM || res->mem_type == TTM_PL_TT || 422 + res->mem_type == AMDGPU_PL_PREEMPT) 423 + return true; 424 + 425 + if (res->mem_type != TTM_PL_VRAM) 426 + return false; 427 + 428 + amdgpu_res_first(res, 0, res->size, &cursor); 429 + while (cursor.remaining) { 430 + if ((cursor.start + cursor.size) >= adev->gmc.visible_vram_size) 431 + return false; 432 + amdgpu_res_next(&cursor, cursor.size); 433 + } 434 + 435 + return true; 436 + } 437 + 406 438 /* 407 - * amdgpu_mem_visible - Check that memory can be accessed by ttm_bo_move_memcpy 439 + * amdgpu_res_copyable - Check that memory can be accessed by ttm_bo_move_memcpy 408 440 * 409 441 * Called by amdgpu_bo_move() 410 442 */ 411 - static bool amdgpu_mem_visible(struct amdgpu_device *adev, 412 - struct ttm_resource *mem) 443 + static bool amdgpu_res_copyable(struct amdgpu_device *adev, 444 + struct ttm_resource *mem) 413 445 { 414 - u64 mem_size = (u64)mem->size; 415 - struct amdgpu_res_cursor cursor; 416 - u64 end; 417 - 418 - if (mem->mem_type == TTM_PL_SYSTEM || 419 - mem->mem_type == TTM_PL_TT) 420 - return true; 421 - if (mem->mem_type != TTM_PL_VRAM) 446 + 
if (!amdgpu_res_cpu_visible(adev, mem)) 422 447 return false; 423 448 424 - amdgpu_res_first(mem, 0, mem_size, &cursor); 425 - end = cursor.start + cursor.size; 426 - while (cursor.remaining) { 427 - amdgpu_res_next(&cursor, cursor.size); 449 + /* ttm_resource_ioremap only supports contiguous memory */ 450 + if (mem->mem_type == TTM_PL_VRAM && 451 + !(mem->placement & TTM_PL_FLAG_CONTIGUOUS)) 452 + return false; 428 453 429 - if (!cursor.remaining) 430 - break; 431 - 432 - /* ttm_resource_ioremap only supports contiguous memory */ 433 - if (end != cursor.start) 434 - return false; 435 - 436 - end = cursor.start + cursor.size; 437 - } 438 - 439 - return end <= adev->gmc.visible_vram_size; 454 + return true; 440 455 } 441 456 442 457 /* ··· 544 529 545 530 if (r) { 546 531 /* Check that all memory is CPU accessible */ 547 - if (!amdgpu_mem_visible(adev, old_mem) || 548 - !amdgpu_mem_visible(adev, new_mem)) { 532 + if (!amdgpu_res_copyable(adev, old_mem) || 533 + !amdgpu_res_copyable(adev, new_mem)) { 549 534 pr_err("Move buffer fallback to memcpy unavailable\n"); 550 535 return r; 551 536 } ··· 572 557 struct ttm_resource *mem) 573 558 { 574 559 struct amdgpu_device *adev = amdgpu_ttm_adev(bdev); 575 - size_t bus_size = (size_t)mem->size; 576 560 577 561 switch (mem->mem_type) { 578 562 case TTM_PL_SYSTEM: ··· 582 568 break; 583 569 case TTM_PL_VRAM: 584 570 mem->bus.offset = mem->start << PAGE_SHIFT; 585 - /* check if it's visible */ 586 - if ((mem->bus.offset + bus_size) > adev->gmc.visible_vram_size) 587 - return -EINVAL; 588 571 589 572 if (adev->mman.aper_base_kaddr && 590 573 mem->placement & TTM_PL_FLAG_CONTIGUOUS)
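The new `amdgpu_res_cpu_visible()` walks every backing extent of a VRAM resource and rejects it if any piece reaches past the CPU-visible window. A user-space sketch of that loop, with a hypothetical `struct extent` standing in for the `amdgpu_res_cursor` walk (illustrative names, not the driver's API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the amdgpu_res_first()/amdgpu_res_next()
 * cursor: the resource is modeled as a list of (start, size) extents. */
struct extent {
    uint64_t start;
    uint64_t size;
};

/* Same shape as the loop in amdgpu_res_cpu_visible(): every extent must
 * stay below the CPU-visible VRAM limit.  The kernel uses a conservative
 * ">=", so an extent ending exactly at the limit also fails. */
static bool all_cpu_visible(const struct extent *ext, size_t n,
                            uint64_t visible_vram_size)
{
    for (size_t i = 0; i < n; i++)
        if (ext[i].start + ext[i].size >= visible_vram_size)
            return false;
    return true;
}
```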
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
··· 139 139 int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr, 140 140 uint64_t start); 141 141 142 + bool amdgpu_res_cpu_visible(struct amdgpu_device *adev, 143 + struct ttm_resource *res); 144 + 142 145 int amdgpu_ttm_init(struct amdgpu_device *adev); 143 146 void amdgpu_ttm_fini(struct amdgpu_device *adev); 144 147 void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
+46 -26
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 1613 1613 trace_amdgpu_vm_bo_map(bo_va, mapping); 1614 1614 } 1615 1615 1616 + /* Validate operation parameters to prevent potential abuse */ 1617 + static int amdgpu_vm_verify_parameters(struct amdgpu_device *adev, 1618 + struct amdgpu_bo *bo, 1619 + uint64_t saddr, 1620 + uint64_t offset, 1621 + uint64_t size) 1622 + { 1623 + uint64_t tmp, lpfn; 1624 + 1625 + if (saddr & AMDGPU_GPU_PAGE_MASK 1626 + || offset & AMDGPU_GPU_PAGE_MASK 1627 + || size & AMDGPU_GPU_PAGE_MASK) 1628 + return -EINVAL; 1629 + 1630 + if (check_add_overflow(saddr, size, &tmp) 1631 + || check_add_overflow(offset, size, &tmp) 1632 + || size == 0 /* which also leads to end < begin */) 1633 + return -EINVAL; 1634 + 1635 + /* make sure object fit at this offset */ 1636 + if (bo && offset + size > amdgpu_bo_size(bo)) 1637 + return -EINVAL; 1638 + 1639 + /* Ensure last pfn not exceed max_pfn */ 1640 + lpfn = (saddr + size - 1) >> AMDGPU_GPU_PAGE_SHIFT; 1641 + if (lpfn >= adev->vm_manager.max_pfn) 1642 + return -EINVAL; 1643 + 1644 + return 0; 1645 + } 1646 + 1616 1647 /** 1617 1648 * amdgpu_vm_bo_map - map bo inside a vm 1618 1649 * ··· 1670 1639 struct amdgpu_bo *bo = bo_va->base.bo; 1671 1640 struct amdgpu_vm *vm = bo_va->base.vm; 1672 1641 uint64_t eaddr; 1642 + int r; 1673 1643 1674 - /* validate the parameters */ 1675 - if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK) 1676 - return -EINVAL; 1677 - if (saddr + size <= saddr || offset + size <= offset) 1678 - return -EINVAL; 1679 - 1680 - /* make sure object fit at this offset */ 1681 - eaddr = saddr + size - 1; 1682 - if ((bo && offset + size > amdgpu_bo_size(bo)) || 1683 - (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT)) 1684 - return -EINVAL; 1644 + r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size); 1645 + if (r) 1646 + return r; 1685 1647 1686 1648 saddr /= AMDGPU_GPU_PAGE_SIZE; 1687 - eaddr /= AMDGPU_GPU_PAGE_SIZE; 1649 + eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE; 1688 1650 1689 1651 
tmp = amdgpu_vm_it_iter_first(&vm->va, saddr, eaddr); 1690 1652 if (tmp) { ··· 1730 1706 uint64_t eaddr; 1731 1707 int r; 1732 1708 1733 - /* validate the parameters */ 1734 - if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK || size & ~PAGE_MASK) 1735 - return -EINVAL; 1736 - if (saddr + size <= saddr || offset + size <= offset) 1737 - return -EINVAL; 1738 - 1739 - /* make sure object fit at this offset */ 1740 - eaddr = saddr + size - 1; 1741 - if ((bo && offset + size > amdgpu_bo_size(bo)) || 1742 - (eaddr >= adev->vm_manager.max_pfn << AMDGPU_GPU_PAGE_SHIFT)) 1743 - return -EINVAL; 1709 + r = amdgpu_vm_verify_parameters(adev, bo, saddr, offset, size); 1710 + if (r) 1711 + return r; 1744 1712 1745 1713 /* Allocate all the needed memory */ 1746 1714 mapping = kmalloc(sizeof(*mapping), GFP_KERNEL); ··· 1746 1730 } 1747 1731 1748 1732 saddr /= AMDGPU_GPU_PAGE_SIZE; 1749 - eaddr /= AMDGPU_GPU_PAGE_SIZE; 1733 + eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE; 1750 1734 1751 1735 mapping->start = saddr; 1752 1736 mapping->last = eaddr; ··· 1833 1817 struct amdgpu_bo_va_mapping *before, *after, *tmp, *next; 1834 1818 LIST_HEAD(removed); 1835 1819 uint64_t eaddr; 1820 + int r; 1836 1821 1837 - eaddr = saddr + size - 1; 1822 + r = amdgpu_vm_verify_parameters(adev, NULL, saddr, 0, size); 1823 + if (r) 1824 + return r; 1825 + 1838 1826 saddr /= AMDGPU_GPU_PAGE_SIZE; 1839 - eaddr /= AMDGPU_GPU_PAGE_SIZE; 1827 + eaddr = saddr + (size - 1) / AMDGPU_GPU_PAGE_SIZE; 1840 1828 1841 1829 /* Allocate all the needed memory */ 1842 1830 before = kzalloc(sizeof(*before), GFP_KERNEL);
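The new `amdgpu_vm_verify_parameters()` centralizes the checks the three call sites used to duplicate: page alignment, no address wraparound, non-zero size, and a last pfn inside `max_pfn`. A plain-C sketch of the same checks using `__builtin_add_overflow()` (which the kernel's `check_add_overflow()` wraps); the 4 KiB page constants here are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GPU_PAGE_SHIFT 12  /* illustrative: 4 KiB GPU pages */
#define GPU_PAGE_MASK  ((uint64_t)((1u << GPU_PAGE_SHIFT) - 1))

/* Validate a mapping request the way the new helper does.  OR-ing the
 * three values before masking is equivalent to checking each one. */
static bool verify_mapping(uint64_t saddr, uint64_t offset, uint64_t size,
                           uint64_t max_pfn)
{
    uint64_t tmp;

    if ((saddr | offset | size) & GPU_PAGE_MASK)
        return false;                      /* must be page aligned */

    /* check_add_overflow() expands to this compiler builtin */
    if (__builtin_add_overflow(saddr, size, &tmp) ||
        __builtin_add_overflow(offset, size, &tmp) ||
        size == 0)
        return false;

    /* last page frame number must stay below max_pfn */
    return ((saddr + size - 1) >> GPU_PAGE_SHIFT) < max_pfn;
}
```

Note how the overflow builtin replaces the old `saddr + size <= saddr` idiom, which only caught wraparound after the fact.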
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 819 819 mutex_lock(&kfd_processes_mutex); 820 820 821 821 if (kfd_is_locked()) { 822 - mutex_unlock(&kfd_processes_mutex); 823 822 pr_debug("KFD is locked! Cannot create process"); 824 - return ERR_PTR(-EINVAL); 823 + process = ERR_PTR(-EINVAL); 824 + goto out; 825 825 } 826 826 827 827 /* A prior open of /dev/kfd could have already created the process. */
+7 -6
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 23 23 */ 24 24 25 25 #include "nouveau_drv.h" 26 + #include "nouveau_bios.h" 26 27 #include "nouveau_reg.h" 27 28 #include "dispnv04/hw.h" 28 29 #include "nouveau_encoder.h" ··· 1678 1677 */ 1679 1678 if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) { 1680 1679 if (*conn == 0xf2005014 && *conf == 0xffffffff) { 1681 - fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, 1); 1680 + fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, DCB_OUTPUT_B); 1682 1681 return false; 1683 1682 } 1684 1683 } ··· 1764 1763 #ifdef __powerpc__ 1765 1764 /* Apple iMac G4 NV17 */ 1766 1765 if (of_machine_is_compatible("PowerMac4,5")) { 1767 - fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, 1); 1768 - fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, 2); 1766 + fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, DCB_OUTPUT_B); 1767 + fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, DCB_OUTPUT_C); 1769 1768 return; 1770 1769 } 1771 1770 #endif 1772 1771 1773 1772 /* Make up some sane defaults */ 1774 1773 fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1775 - bios->legacy.i2c_indices.crt, 1, 1); 1774 + bios->legacy.i2c_indices.crt, 1, DCB_OUTPUT_B); 1776 1775 1777 1776 if (nv04_tv_identify(dev, bios->legacy.i2c_indices.tv) >= 0) 1778 1777 fabricate_dcb_output(dcb, DCB_OUTPUT_TV, 1779 1778 bios->legacy.i2c_indices.tv, 1780 - all_heads, 0); 1779 + all_heads, DCB_OUTPUT_A); 1781 1780 1782 1781 else if (bios->tmds.output0_script_ptr || 1783 1782 bios->tmds.output1_script_ptr) 1784 1783 fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1785 1784 bios->legacy.i2c_indices.panel, 1786 - all_heads, 1); 1785 + all_heads, DCB_OUTPUT_B); 1787 1786 } 1788 1787 1789 1788 static int
+18 -5
drivers/gpu/drm/nouveau/nouveau_dp.c
··· 225 225 u8 *dpcd = nv_encoder->dp.dpcd; 226 226 int ret = NOUVEAU_DP_NONE, hpd; 227 227 228 - /* If we've already read the DPCD on an eDP device, we don't need to 229 - * reread it as it won't change 228 + /* eDP ports don't support hotplugging - so there's no point in probing eDP ports unless we 229 + * haven't probed them once before. 230 230 */ 231 - if (connector->connector_type == DRM_MODE_CONNECTOR_eDP && 232 - dpcd[DP_DPCD_REV] != 0) 233 - return NOUVEAU_DP_SST; 231 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 232 + if (connector->status == connector_status_connected) 233 + return NOUVEAU_DP_SST; 234 + else if (connector->status == connector_status_disconnected) 235 + return NOUVEAU_DP_NONE; 236 + } 237 + 238 + // Ensure that the aux bus is enabled for probing 239 + drm_dp_dpcd_set_powered(&nv_connector->aux, true); 234 240 235 241 mutex_lock(&nv_encoder->dp.hpd_irq_lock); 236 242 if (mstm) { ··· 298 292 out: 299 293 if (mstm && !mstm->suspended && ret != NOUVEAU_DP_MST) 300 294 nv50_mstm_remove(mstm); 295 + 296 + /* GSP doesn't like when we try to do aux transactions on a port it considers disconnected, 297 + * and since we don't really have a usecase for that anyway - just disable the aux bus here 298 + * if we've decided the connector is disconnected 299 + */ 300 + if (ret == NOUVEAU_DP_NONE) 301 + drm_dp_dpcd_set_powered(&nv_connector->aux, false); 301 302 302 303 mutex_unlock(&nv_encoder->dp.hpd_irq_lock); 303 304 return ret;
+6 -1
drivers/gpu/drm/nouveau/nvkm/subdev/instmem/nv50.c
··· 222 222 void __iomem *map = NULL; 223 223 224 224 /* Already mapped? */ 225 - if (refcount_inc_not_zero(&iobj->maps)) 225 + if (refcount_inc_not_zero(&iobj->maps)) { 226 + /* read barrier match the wmb on refcount set */ 227 + smp_rmb(); 226 228 return iobj->map; 229 + } 227 230 228 231 /* Take the lock, and re-check that another thread hasn't 229 232 * already mapped the object in the meantime. ··· 253 250 iobj->base.memory.ptrs = &nv50_instobj_fast; 254 251 else 255 252 iobj->base.memory.ptrs = &nv50_instobj_slow; 253 + /* barrier to ensure the ptrs are written before refcount is set */ 254 + smp_wmb(); 256 255 refcount_set(&iobj->maps, 1); 257 256 } 258 257
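The `smp_wmb()`/`smp_rmb()` pair added above is the classic publish pattern: write the payload (`ptrs`) first, fence, then set the refcount, so a reader that observes the refcount also sees the payload. In portable C11 the same guarantee is a release store paired with an acquire load (a sketch of the pattern only, not the kernel primitives):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct instobj {
    const char *ptrs;   /* payload, stands in for iobj->base.memory.ptrs */
    atomic_int maps;    /* stands in for the 'maps' refcount */
};

/* Publisher: payload first, then a release store on the flag.  The
 * release ordering plays the role of the added smp_wmb(). */
static void publish(struct instobj *o, const char *p)
{
    o->ptrs = p;
    atomic_store_explicit(&o->maps, 1, memory_order_release);
}

/* Reader: acquire-load the flag; if set, the payload write is
 * guaranteed visible (the role of the added smp_rmb()). */
static const char *try_map(struct instobj *o)
{
    if (atomic_load_explicit(&o->maps, memory_order_acquire))
        return o->ptrs;
    return NULL;
}

/* Single-threaded smoke test of the interface. */
static int demo(void)
{
    struct instobj o = { .ptrs = NULL };
    int before = (try_map(&o) == NULL);
    publish(&o, "nv50_instobj_fast");
    int after = (try_map(&o) != NULL);
    return before && after;
}
```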
-2
drivers/gpu/drm/panel/panel-novatek-nt36672e.c
··· 614 614 struct nt36672e_panel *ctx = mipi_dsi_get_drvdata(dsi); 615 615 616 616 mipi_dsi_detach(ctx->dsi); 617 - mipi_dsi_device_unregister(ctx->dsi); 618 - 619 617 drm_panel_remove(&ctx->panel); 620 618 } 621 619
-2
drivers/gpu/drm/panel/panel-visionox-rm69299.c
··· 253 253 struct visionox_rm69299 *ctx = mipi_dsi_get_drvdata(dsi); 254 254 255 255 mipi_dsi_detach(ctx->dsi); 256 - mipi_dsi_device_unregister(ctx->dsi); 257 - 258 256 drm_panel_remove(&ctx->panel); 259 257 } 260 258
+5 -5
drivers/gpu/drm/radeon/pptable.h
··· 424 424 typedef struct _ATOM_PPLIB_STATE_V2 425 425 { 426 426 //number of valid dpm levels in this state; Driver uses it to calculate the whole 427 - //size of the state: sizeof(ATOM_PPLIB_STATE_V2) + (ucNumDPMLevels - 1) * sizeof(UCHAR) 427 + //size of the state: struct_size(ATOM_PPLIB_STATE_V2, clockInfoIndex, ucNumDPMLevels) 428 428 UCHAR ucNumDPMLevels; 429 429 430 430 //a index to the array of nonClockInfos ··· 432 432 /** 433 433 * Driver will read the first ucNumDPMLevels in this array 434 434 */ 435 - UCHAR clockInfoIndex[1]; 435 + UCHAR clockInfoIndex[] __counted_by(ucNumDPMLevels); 436 436 } ATOM_PPLIB_STATE_V2; 437 437 438 438 typedef struct _StateArray{ 439 439 //how many states we have 440 440 UCHAR ucNumEntries; 441 441 442 - ATOM_PPLIB_STATE_V2 states[1]; 442 + ATOM_PPLIB_STATE_V2 states[] __counted_by(ucNumEntries); 443 443 }StateArray; 444 444 445 445 ··· 450 450 //sizeof(ATOM_PPLIB_CLOCK_INFO) 451 451 UCHAR ucEntrySize; 452 452 453 - UCHAR clockInfo[1]; 453 + UCHAR clockInfo[] __counted_by(ucNumEntries); 454 454 }ClockInfoArray; 455 455 456 456 typedef struct _NonClockInfoArray{ ··· 460 460 //sizeof(ATOM_PPLIB_NONCLOCK_INFO) 461 461 UCHAR ucEntrySize; 462 462 463 - ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[1]; 463 + ATOM_PPLIB_NONCLOCK_INFO nonClockInfo[] __counted_by(ucNumEntries); 464 464 }NonClockInfoArray; 465 465 466 466 typedef struct _ATOM_PPLIB_Clock_Voltage_Dependency_Record
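The `__counted_by()` annotations above go with the switch from one-element arrays to C99 flexible array members, and the updated comment points at `struct_size()` for the allocation math. That arithmetic can be sketched as follows (layout illustrative, field names abbreviated):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Sketch of ATOM_PPLIB_STATE_V2 after the conversion: a fixed header
 * followed by a flexible array counted by num_levels. */
struct state_v2 {
    unsigned char num_levels;          /* ucNumDPMLevels */
    unsigned char nonclock_index;      /* nonClockInfoIndex */
    unsigned char clock_info_index[];  /* was clockInfoIndex[1] */
};

/* struct_size(s, clock_info_index, n) boils down to this: offset of
 * the flexible array plus n elements.  No "(n - 1)" fixup, which the
 * old fake one-element array forced the sizeof comment to carry. */
static size_t state_size(size_t n)
{
    return offsetof(struct state_v2, clock_info_index) +
           n * sizeof(unsigned char);
}

static struct state_v2 *state_alloc(unsigned char n)
{
    struct state_v2 *s = calloc(1, state_size(n));
    if (s)
        s->num_levels = n;
    return s;
}

/* Allocate, fill, and read back through the flexible array. */
static int demo(void)
{
    struct state_v2 *s = state_alloc(3);
    if (!s)
        return -1;
    s->clock_info_index[2] = 7;
    int ok = (s->num_levels == 3 && s->clock_info_index[2] == 7);
    free(s);
    return ok;
}
```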
+6 -2
drivers/gpu/drm/radeon/radeon_atombios.c
··· 923 923 max_device = ATOM_MAX_SUPPORTED_DEVICE_INFO; 924 924 925 925 for (i = 0; i < max_device; i++) { 926 - ATOM_CONNECTOR_INFO_I2C ci = 927 - supported_devices->info.asConnInfo[i]; 926 + ATOM_CONNECTOR_INFO_I2C ci; 927 + 928 + if (frev > 1) 929 + ci = supported_devices->info_2d1.asConnInfo[i]; 930 + else 931 + ci = supported_devices->info.asConnInfo[i]; 928 932 929 933 bios_connectors[i].valid = false; 930 934
+28 -10
drivers/gpu/drm/ttm/ttm_pool.c
··· 288 288 enum ttm_caching caching, 289 289 unsigned int order) 290 290 { 291 - if (pool->use_dma_alloc || pool->nid != NUMA_NO_NODE) 291 + if (pool->use_dma_alloc) 292 292 return &pool->caching[caching].orders[order]; 293 293 294 294 #ifdef CONFIG_X86 295 295 switch (caching) { 296 296 case ttm_write_combined: 297 + if (pool->nid != NUMA_NO_NODE) 298 + return &pool->caching[caching].orders[order]; 299 + 297 300 if (pool->use_dma32) 298 301 return &global_dma32_write_combined[order]; 299 302 300 303 return &global_write_combined[order]; 301 304 case ttm_uncached: 305 + if (pool->nid != NUMA_NO_NODE) 306 + return &pool->caching[caching].orders[order]; 307 + 302 308 if (pool->use_dma32) 303 309 return &global_dma32_uncached[order]; 304 310 ··· 572 566 pool->use_dma_alloc = use_dma_alloc; 573 567 pool->use_dma32 = use_dma32; 574 568 575 - if (use_dma_alloc || nid != NUMA_NO_NODE) { 576 - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 577 - for (j = 0; j < NR_PAGE_ORDERS; ++j) 578 - ttm_pool_type_init(&pool->caching[i].orders[j], 579 - pool, i, j); 569 + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 570 + for (j = 0; j < NR_PAGE_ORDERS; ++j) { 571 + struct ttm_pool_type *pt; 572 + 573 + /* Initialize only pool types which are actually used */ 574 + pt = ttm_pool_select_type(pool, i, j); 575 + if (pt != &pool->caching[i].orders[j]) 576 + continue; 577 + 578 + ttm_pool_type_init(pt, pool, i, j); 579 + } 580 580 } 581 581 } 582 582 EXPORT_SYMBOL(ttm_pool_init); ··· 611 599 { 612 600 unsigned int i, j; 613 601 614 - if (pool->use_dma_alloc || pool->nid != NUMA_NO_NODE) { 615 - for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) 616 - for (j = 0; j < NR_PAGE_ORDERS; ++j) 617 - ttm_pool_type_fini(&pool->caching[i].orders[j]); 602 + for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 603 + for (j = 0; j < NR_PAGE_ORDERS; ++j) { 604 + struct ttm_pool_type *pt; 605 + 606 + pt = ttm_pool_select_type(pool, i, j); 607 + if (pt != &pool->caching[i].orders[j]) 608 + continue; 609 + 610 + 
ttm_pool_type_fini(pt); 611 + } 618 612 } 619 613 620 614 /* We removed the pool types from the LRU, but we need to also make sure
-4
drivers/gpu/drm/v3d/v3d_irq.c
··· 105 105 struct v3d_file_priv *file = v3d->bin_job->base.file->driver_priv; 106 106 u64 runtime = local_clock() - file->start_ns[V3D_BIN]; 107 107 108 - file->enabled_ns[V3D_BIN] += local_clock() - file->start_ns[V3D_BIN]; 109 108 file->jobs_sent[V3D_BIN]++; 110 109 v3d->queue[V3D_BIN].jobs_sent++; 111 110 ··· 125 126 struct v3d_file_priv *file = v3d->render_job->base.file->driver_priv; 126 127 u64 runtime = local_clock() - file->start_ns[V3D_RENDER]; 127 128 128 - file->enabled_ns[V3D_RENDER] += local_clock() - file->start_ns[V3D_RENDER]; 129 129 file->jobs_sent[V3D_RENDER]++; 130 130 v3d->queue[V3D_RENDER].jobs_sent++; 131 131 ··· 145 147 struct v3d_file_priv *file = v3d->csd_job->base.file->driver_priv; 146 148 u64 runtime = local_clock() - file->start_ns[V3D_CSD]; 147 149 148 - file->enabled_ns[V3D_CSD] += local_clock() - file->start_ns[V3D_CSD]; 149 150 file->jobs_sent[V3D_CSD]++; 150 151 v3d->queue[V3D_CSD].jobs_sent++; 151 152 ··· 192 195 struct v3d_file_priv *file = v3d->tfu_job->base.file->driver_priv; 193 196 u64 runtime = local_clock() - file->start_ns[V3D_TFU]; 194 197 195 - file->enabled_ns[V3D_TFU] += local_clock() - file->start_ns[V3D_TFU]; 196 198 file->jobs_sent[V3D_TFU]++; 197 199 v3d->queue[V3D_TFU].jobs_sent++; 198 200
+32 -3
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c
··· 456 456 .no_wait_gpu = false 457 457 }; 458 458 u32 j, initial_line = dst_offset / dst_stride; 459 - struct vmw_bo_blit_line_data d; 459 + struct vmw_bo_blit_line_data d = {0}; 460 460 int ret = 0; 461 + struct page **dst_pages = NULL; 462 + struct page **src_pages = NULL; 461 463 462 464 /* Buffer objects need to be either pinned or reserved: */ 463 465 if (!(dst->pin_count)) ··· 479 477 return ret; 480 478 } 481 479 480 + if (!src->ttm->pages && src->ttm->sg) { 481 + src_pages = kvmalloc_array(src->ttm->num_pages, 482 + sizeof(struct page *), GFP_KERNEL); 483 + if (!src_pages) 484 + return -ENOMEM; 485 + ret = drm_prime_sg_to_page_array(src->ttm->sg, src_pages, 486 + src->ttm->num_pages); 487 + if (ret) 488 + goto out; 489 + } 490 + if (!dst->ttm->pages && dst->ttm->sg) { 491 + dst_pages = kvmalloc_array(dst->ttm->num_pages, 492 + sizeof(struct page *), GFP_KERNEL); 493 + if (!dst_pages) { 494 + ret = -ENOMEM; 495 + goto out; 496 + } 497 + ret = drm_prime_sg_to_page_array(dst->ttm->sg, dst_pages, 498 + dst->ttm->num_pages); 499 + if (ret) 500 + goto out; 501 + } 502 + 482 503 d.mapped_dst = 0; 483 504 d.mapped_src = 0; 484 505 d.dst_addr = NULL; 485 506 d.src_addr = NULL; 486 - d.dst_pages = dst->ttm->pages; 487 - d.src_pages = src->ttm->pages; 507 + d.dst_pages = dst->ttm->pages ? dst->ttm->pages : dst_pages; 508 + d.src_pages = src->ttm->pages ? src->ttm->pages : src_pages; 488 509 d.dst_num_pages = PFN_UP(dst->resource->size); 489 510 d.src_num_pages = PFN_UP(src->resource->size); 490 511 d.dst_prot = ttm_io_prot(dst, dst->resource, PAGE_KERNEL); ··· 529 504 kunmap_atomic(d.src_addr); 530 505 if (d.dst_addr) 531 506 kunmap_atomic(d.dst_addr); 507 + if (src_pages) 508 + kvfree(src_pages); 509 + if (dst_pages) 510 + kvfree(dst_pages); 532 511 533 512 return ret; 534 513 }
+4 -3
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
··· 377 377 { 378 378 struct ttm_operation_ctx ctx = { 379 379 .interruptible = params->bo_type != ttm_bo_type_kernel, 380 - .no_wait_gpu = false 380 + .no_wait_gpu = false, 381 + .resv = params->resv, 381 382 }; 382 383 struct ttm_device *bdev = &dev_priv->bdev; 383 384 struct drm_device *vdev = &dev_priv->drm; ··· 395 394 396 395 vmw_bo_placement_set(vmw_bo, params->domain, params->busy_domain); 397 396 ret = ttm_bo_init_reserved(bdev, &vmw_bo->tbo, params->bo_type, 398 - &vmw_bo->placement, 0, &ctx, NULL, 399 - NULL, destroy); 397 + &vmw_bo->placement, 0, &ctx, 398 + params->sg, params->resv, destroy); 400 399 if (unlikely(ret)) 401 400 return ret; 402 401
+2
drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
··· 55 55 enum ttm_bo_type bo_type; 56 56 size_t size; 57 57 bool pin; 58 + struct dma_resv *resv; 59 + struct sg_table *sg; 58 60 }; 59 61 60 62 /**
+1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 1628 1628 1629 1629 .prime_fd_to_handle = vmw_prime_fd_to_handle, 1630 1630 .prime_handle_to_fd = vmw_prime_handle_to_fd, 1631 + .gem_prime_import_sg_table = vmw_prime_import_sg_table, 1631 1632 1632 1633 .fops = &vmwgfx_driver_fops, 1633 1634 .name = VMWGFX_DRIVER_NAME,
+3
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 1130 1130 struct drm_file *file_priv, 1131 1131 uint32_t handle, uint32_t flags, 1132 1132 int *prime_fd); 1133 + struct drm_gem_object *vmw_prime_import_sg_table(struct drm_device *dev, 1134 + struct dma_buf_attachment *attach, 1135 + struct sg_table *table); 1133 1136 1134 1137 /* 1135 1138 * MemoryOBject management - vmwgfx_mob.c
+32
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
··· 149 149 return ret; 150 150 } 151 151 152 + struct drm_gem_object *vmw_prime_import_sg_table(struct drm_device *dev, 153 + struct dma_buf_attachment *attach, 154 + struct sg_table *table) 155 + { 156 + int ret; 157 + struct vmw_private *dev_priv = vmw_priv(dev); 158 + struct drm_gem_object *gem = NULL; 159 + struct vmw_bo *vbo; 160 + struct vmw_bo_params params = { 161 + .domain = (dev_priv->has_mob) ? VMW_BO_DOMAIN_SYS : VMW_BO_DOMAIN_VRAM, 162 + .busy_domain = VMW_BO_DOMAIN_SYS, 163 + .bo_type = ttm_bo_type_sg, 164 + .size = attach->dmabuf->size, 165 + .pin = false, 166 + .resv = attach->dmabuf->resv, 167 + .sg = table, 168 + 169 + }; 170 + 171 + dma_resv_lock(params.resv, NULL); 172 + 173 + ret = vmw_bo_create(dev_priv, &params, &vbo); 174 + if (ret != 0) 175 + goto out_no_bo; 176 + 177 + vbo->tbo.base.funcs = &vmw_gem_object_funcs; 178 + 179 + gem = &vbo->tbo.base; 180 + out_no_bo: 181 + dma_resv_unlock(params.resv); 182 + return gem; 183 + } 152 184 153 185 int vmw_gem_object_create_ioctl(struct drm_device *dev, void *data, 154 186 struct drm_file *filp)
+8 -3
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 933 933 int vmw_du_crtc_atomic_check(struct drm_crtc *crtc, 934 934 struct drm_atomic_state *state) 935 935 { 936 + struct vmw_private *vmw = vmw_priv(crtc->dev); 936 937 struct drm_crtc_state *new_state = drm_atomic_get_new_crtc_state(state, 937 938 crtc); 938 939 struct vmw_display_unit *du = vmw_crtc_to_du(new_state->crtc); ··· 941 940 bool has_primary = new_state->plane_mask & 942 941 drm_plane_mask(crtc->primary); 943 942 944 - /* We always want to have an active plane with an active CRTC */ 945 - if (has_primary != new_state->enable) 946 - return -EINVAL; 943 + /* 944 + * This is fine in general, but broken userspace might expect 945 + * some actual rendering so give a clue as why it's blank. 946 + */ 947 + if (new_state->enable && !has_primary) 948 + drm_dbg_driver(&vmw->drm, 949 + "CRTC without a primary plane will be blank.\n"); 947 950 948 951 949 952 if (new_state->connector_mask != connector_mask &&
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
··· 243 243 244 244 245 245 static const uint32_t __maybe_unused vmw_primary_plane_formats[] = { 246 - DRM_FORMAT_XRGB1555, 247 - DRM_FORMAT_RGB565, 248 246 DRM_FORMAT_XRGB8888, 249 247 DRM_FORMAT_ARGB8888, 248 + DRM_FORMAT_RGB565, 249 + DRM_FORMAT_XRGB1555, 250 250 }; 251 251 252 252 static const uint32_t __maybe_unused vmw_cursor_plane_formats[] = {
+13 -2
drivers/gpu/drm/vmwgfx/vmwgfx_prime.c
··· 75 75 int fd, u32 *handle) 76 76 { 77 77 struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile; 78 + int ret = ttm_prime_fd_to_handle(tfile, fd, handle); 78 79 79 - return ttm_prime_fd_to_handle(tfile, fd, handle); 80 + if (ret) 81 + ret = drm_gem_prime_fd_to_handle(dev, file_priv, fd, handle); 82 + 83 + return ret; 80 84 } 81 85 82 86 int vmw_prime_handle_to_fd(struct drm_device *dev, ··· 89 85 int *prime_fd) 90 86 { 91 87 struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile; 92 - return ttm_prime_handle_to_fd(tfile, handle, flags, prime_fd); 88 + int ret; 89 + 90 + if (handle > VMWGFX_NUM_MOB) 91 + ret = ttm_prime_handle_to_fd(tfile, handle, flags, prime_fd); 92 + else 93 + ret = drm_gem_prime_handle_to_fd(dev, file_priv, handle, flags, prime_fd); 94 + 95 + return ret; 93 96 }
+30 -14
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
··· 188 188 switch (dev_priv->map_mode) { 189 189 case vmw_dma_map_bind: 190 190 case vmw_dma_map_populate: 191 - vsgt->sgt = &vmw_tt->sgt; 192 - ret = sg_alloc_table_from_pages_segment( 193 - &vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0, 194 - (unsigned long)vsgt->num_pages << PAGE_SHIFT, 195 - dma_get_max_seg_size(dev_priv->drm.dev), GFP_KERNEL); 196 - if (ret) 197 - goto out_sg_alloc_fail; 191 + if (vmw_tt->dma_ttm.page_flags & TTM_TT_FLAG_EXTERNAL) { 192 + vsgt->sgt = vmw_tt->dma_ttm.sg; 193 + } else { 194 + vsgt->sgt = &vmw_tt->sgt; 195 + ret = sg_alloc_table_from_pages_segment(&vmw_tt->sgt, 196 + vsgt->pages, vsgt->num_pages, 0, 197 + (unsigned long)vsgt->num_pages << PAGE_SHIFT, 198 + dma_get_max_seg_size(dev_priv->drm.dev), 199 + GFP_KERNEL); 200 + if (ret) 201 + goto out_sg_alloc_fail; 202 + } 198 203 199 204 ret = vmw_ttm_map_for_dma(vmw_tt); 200 205 if (unlikely(ret != 0)) ··· 214 209 return 0; 215 210 216 211 out_map_fail: 217 - sg_free_table(vmw_tt->vsgt.sgt); 218 - vmw_tt->vsgt.sgt = NULL; 212 + drm_warn(&dev_priv->drm, "VSG table map failed!"); 213 + sg_free_table(vsgt->sgt); 214 + vsgt->sgt = NULL; 219 215 out_sg_alloc_fail: 220 216 return ret; 221 217 } ··· 362 356 static int vmw_ttm_populate(struct ttm_device *bdev, 363 357 struct ttm_tt *ttm, struct ttm_operation_ctx *ctx) 364 358 { 365 - int ret; 359 + bool external = (ttm->page_flags & TTM_TT_FLAG_EXTERNAL) != 0; 366 360 367 - /* TODO: maybe completely drop this ? 
*/ 368 361 if (ttm_tt_is_populated(ttm)) 369 362 return 0; 370 363 371 - ret = ttm_pool_alloc(&bdev->pool, ttm, ctx); 364 + if (external && ttm->sg) 365 + return drm_prime_sg_to_dma_addr_array(ttm->sg, 366 + ttm->dma_address, 367 + ttm->num_pages); 372 368 373 - return ret; 369 + return ttm_pool_alloc(&bdev->pool, ttm, ctx); 374 370 } 375 371 376 372 static void vmw_ttm_unpopulate(struct ttm_device *bdev, ··· 380 372 { 381 373 struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt, 382 374 dma_ttm); 375 + bool external = (ttm->page_flags & TTM_TT_FLAG_EXTERNAL) != 0; 376 + 377 + if (external) 378 + return; 383 379 384 380 vmw_ttm_unbind(bdev, ttm); 385 381 ··· 402 390 { 403 391 struct vmw_ttm_tt *vmw_be; 404 392 int ret; 393 + bool external = bo->type == ttm_bo_type_sg; 405 394 406 395 vmw_be = kzalloc(sizeof(*vmw_be), GFP_KERNEL); 407 396 if (!vmw_be) ··· 411 398 vmw_be->dev_priv = vmw_priv_from_ttm(bo->bdev); 412 399 vmw_be->mob = NULL; 413 400 414 - if (vmw_be->dev_priv->map_mode == vmw_dma_alloc_coherent) 401 + if (external) 402 + page_flags |= TTM_TT_FLAG_EXTERNAL | TTM_TT_FLAG_EXTERNAL_MAPPABLE; 403 + 404 + if (vmw_be->dev_priv->map_mode == vmw_dma_alloc_coherent || external) 415 405 ret = ttm_sg_tt_init(&vmw_be->dma_ttm, bo, page_flags, 416 406 ttm_cached); 417 407 else
+6 -2
drivers/gpu/drm/xe/display/intel_fb_bo.c
··· 31 31 32 32 ret = ttm_bo_reserve(&bo->ttm, true, false, NULL); 33 33 if (ret) 34 - return ret; 34 + goto err; 35 35 36 36 if (!(bo->flags & XE_BO_SCANOUT_BIT)) { 37 37 /* ··· 42 42 */ 43 43 if (XE_IOCTL_DBG(i915, !list_empty(&bo->ttm.base.gpuva.list))) { 44 44 ttm_bo_unreserve(&bo->ttm); 45 - return -EINVAL; 45 + ret = -EINVAL; 46 + goto err; 46 47 } 47 48 bo->flags |= XE_BO_SCANOUT_BIT; 48 49 } 49 50 ttm_bo_unreserve(&bo->ttm); 51 + return 0; 50 52 53 + err: 54 + xe_bo_put(bo); 51 55 return ret; 52 56 } 53 57
+11 -10
drivers/gpu/drm/xe/xe_vm.c
··· 1577 1577 xe->usm.num_vm_in_fault_mode--; 1578 1578 else if (!(vm->flags & XE_VM_FLAG_MIGRATION)) 1579 1579 xe->usm.num_vm_in_non_fault_mode--; 1580 + 1581 + if (vm->usm.asid) { 1582 + void *lookup; 1583 + 1584 + xe_assert(xe, xe->info.has_asid); 1585 + xe_assert(xe, !(vm->flags & XE_VM_FLAG_MIGRATION)); 1586 + 1587 + lookup = xa_erase(&xe->usm.asid_to_vm, vm->usm.asid); 1588 + xe_assert(xe, lookup == vm); 1589 + } 1580 1590 mutex_unlock(&xe->usm.lock); 1581 1591 1582 1592 for_each_tile(tile, xe, id) ··· 1602 1592 struct xe_device *xe = vm->xe; 1603 1593 struct xe_tile *tile; 1604 1594 u8 id; 1605 - void *lookup; 1606 1595 1607 1596 /* xe_vm_close_and_put was not called? */ 1608 1597 xe_assert(xe, !vm->size); 1609 1598 1610 1599 mutex_destroy(&vm->snap_mutex); 1611 1600 1612 - if (!(vm->flags & XE_VM_FLAG_MIGRATION)) { 1601 + if (!(vm->flags & XE_VM_FLAG_MIGRATION)) 1613 1602 xe_device_mem_access_put(xe); 1614 - 1615 - if (xe->info.has_asid && vm->usm.asid) { 1616 - mutex_lock(&xe->usm.lock); 1617 - lookup = xa_erase(&xe->usm.asid_to_vm, vm->usm.asid); 1618 - xe_assert(xe, lookup == vm); 1619 - mutex_unlock(&xe->usm.lock); 1620 - } 1621 - } 1622 1603 1623 1604 for_each_tile(tile, xe, id) 1624 1605 XE_WARN_ON(vm->pt_root[id]);
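Moving the `xa_erase()` into the locked teardown path relies on erase returning the previously stored entry, which is what lets `xe_assert(xe, lookup == vm)` verify the ASID map stayed consistent. That contract, sketched over a trivial fixed-size map (hypothetical helpers, not the XArray API):

```c
#include <assert.h>
#include <stddef.h>

#define MAP_SLOTS 8
static void *slots[MAP_SLOTS];

/* Store an entry at idx, returning whatever was there before. */
static void *map_store(unsigned int idx, void *entry)
{
    void *old = slots[idx % MAP_SLOTS];
    slots[idx % MAP_SLOTS] = entry;
    return old;
}

/* Erase idx, returning the old entry -- the xa_erase()-style contract
 * the patch's consistency assertion depends on. */
static void *map_erase(unsigned int idx)
{
    return map_store(idx, NULL);
}

static int vm_object;   /* stands in for a struct xe_vm */
```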
+1 -3
drivers/hid/hid-logitech-dj.c
··· 965 965 } 966 966 break; 967 967 case REPORT_TYPE_MOUSE: 968 - workitem->reports_supported |= STD_MOUSE | HIDPP; 969 - if (djrcv_dev->type == recvr_type_mouse_only) 970 - workitem->reports_supported |= MULTIMEDIA; 968 + workitem->reports_supported |= STD_MOUSE | HIDPP | MULTIMEDIA; 971 969 break; 972 970 } 973 971 }
+2
drivers/hid/hid-mcp2221.c
··· 944 944 /* This is needed to be sure hid_hw_stop() isn't called twice by the subsystem */ 945 945 static void mcp2221_remove(struct hid_device *hdev) 946 946 { 947 + #if IS_REACHABLE(CONFIG_IIO) 947 948 struct mcp2221 *mcp = hid_get_drvdata(hdev); 948 949 949 950 cancel_delayed_work_sync(&mcp->init_work); 951 + #endif 950 952 } 951 953 952 954 #if IS_REACHABLE(CONFIG_IIO)
+4 -4
drivers/hid/hid-nintendo.c
··· 481 481 { BTN_TR, JC_BTN_R, }, 482 482 { BTN_TR2, JC_BTN_LSTICK, }, /* ZR */ 483 483 { BTN_START, JC_BTN_PLUS, }, 484 - { BTN_FORWARD, JC_BTN_Y, }, /* C UP */ 485 - { BTN_BACK, JC_BTN_ZR, }, /* C DOWN */ 486 - { BTN_LEFT, JC_BTN_X, }, /* C LEFT */ 487 - { BTN_RIGHT, JC_BTN_MINUS, }, /* C RIGHT */ 484 + { BTN_SELECT, JC_BTN_Y, }, /* C UP */ 485 + { BTN_X, JC_BTN_ZR, }, /* C DOWN */ 486 + { BTN_Y, JC_BTN_X, }, /* C LEFT */ 487 + { BTN_C, JC_BTN_MINUS, }, /* C RIGHT */ 488 488 { BTN_MODE, JC_BTN_HOME, }, 489 489 { BTN_Z, JC_BTN_CAP, }, 490 490 { /* sentinel */ },
+8 -30
drivers/hid/i2c-hid/i2c-hid-core.c
··· 64 64 /* flags */ 65 65 #define I2C_HID_STARTED 0 66 66 #define I2C_HID_RESET_PENDING 1 67 - #define I2C_HID_READ_PENDING 2 68 67 69 68 #define I2C_HID_PWR_ON 0x00 70 69 #define I2C_HID_PWR_SLEEP 0x01 ··· 189 190 msgs[n].len = recv_len; 190 191 msgs[n].buf = recv_buf; 191 192 n++; 192 - 193 - set_bit(I2C_HID_READ_PENDING, &ihid->flags); 194 193 } 195 194 196 195 ret = i2c_transfer(client->adapter, msgs, n); 197 - 198 - if (recv_len) 199 - clear_bit(I2C_HID_READ_PENDING, &ihid->flags); 200 196 201 197 if (ret != n) 202 198 return ret < 0 ? ret : -EIO; ··· 550 556 { 551 557 struct i2c_hid *ihid = dev_id; 552 558 553 - if (test_bit(I2C_HID_READ_PENDING, &ihid->flags)) 554 - return IRQ_HANDLED; 555 - 556 559 i2c_hid_get_input(ihid); 557 560 558 561 return IRQ_HANDLED; ··· 726 735 mutex_lock(&ihid->reset_lock); 727 736 do { 728 737 ret = i2c_hid_start_hwreset(ihid); 729 - if (ret) 738 + if (ret == 0) 739 + ret = i2c_hid_finish_hwreset(ihid); 740 + else 730 741 msleep(1000); 731 742 } while (tries-- > 0 && ret); 743 + mutex_unlock(&ihid->reset_lock); 732 744 733 745 if (ret) 734 - goto abort_reset; 746 + return ret; 735 747 736 748 use_override = i2c_hid_get_dmi_hid_report_desc_override(client->name, 737 749 &rsize); ··· 744 750 i2c_hid_dbg(ihid, "Using a HID report descriptor override\n"); 745 751 } else { 746 752 rdesc = kzalloc(rsize, GFP_KERNEL); 747 - 748 - if (!rdesc) { 749 - ret = -ENOMEM; 750 - goto abort_reset; 751 - } 753 + if (!rdesc) 754 + return -ENOMEM; 752 755 753 756 i2c_hid_dbg(ihid, "asking HID report descriptor\n"); 754 757 ··· 754 763 rdesc, rsize); 755 764 if (ret) { 756 765 hid_err(hid, "reading report descriptor failed\n"); 757 - goto abort_reset; 766 + goto out; 758 767 } 759 768 } 760 - 761 - /* 762 - * Windows directly reads the report-descriptor after sending reset 763 - * and then waits for resets completion afterwards. Some touchpads 764 - * actually wait for the report-descriptor to be read before signalling 765 - * reset completion. 
766 - */ 767 - ret = i2c_hid_finish_hwreset(ihid); 768 - abort_reset: 769 - clear_bit(I2C_HID_RESET_PENDING, &ihid->flags); 770 - mutex_unlock(&ihid->reset_lock); 771 - if (ret) 772 - goto out; 773 769 774 770 i2c_hid_dbg(ihid, "Report Descriptor: %*ph\n", rsize, rdesc); 775 771
+1 -1
drivers/hid/intel-ish-hid/ipc/ipc.c
··· 948 948 if (!dev) 949 949 return NULL; 950 950 951 + dev->devc = &pdev->dev; 951 952 ishtp_device_init(dev); 952 953 953 954 init_waitqueue_head(&dev->wait_hw_ready); ··· 984 983 } 985 984 986 985 dev->ops = &ish_hw_ops; 987 - dev->devc = &pdev->dev; 988 986 dev->mtu = IPC_PAYLOAD_SIZE - sizeof(struct ishtp_msg_hdr); 989 987 return dev; 990 988 }
+7 -4
drivers/infiniband/core/cm.c
··· 1026 1026 } 1027 1027 } 1028 1028 1029 - static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id) 1029 + static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id, 1030 + enum ib_cm_state old_state) 1030 1031 { 1031 1032 struct cm_id_private *cm_id_priv; 1032 1033 1033 1034 cm_id_priv = container_of(cm_id, struct cm_id_private, id); 1034 - pr_err("%s: cm_id=%p timed out. state=%d refcnt=%d\n", __func__, 1035 - cm_id, cm_id->state, refcount_read(&cm_id_priv->refcount)); 1035 + pr_err("%s: cm_id=%p timed out. state %d -> %d, refcnt=%d\n", __func__, 1036 + cm_id, old_state, cm_id->state, refcount_read(&cm_id_priv->refcount)); 1036 1037 } 1037 1038 1038 1039 static void cm_destroy_id(struct ib_cm_id *cm_id, int err) 1039 1040 { 1040 1041 struct cm_id_private *cm_id_priv; 1042 + enum ib_cm_state old_state; 1041 1043 struct cm_work *work; 1042 1044 int ret; 1043 1045 1044 1046 cm_id_priv = container_of(cm_id, struct cm_id_private, id); 1045 1047 spin_lock_irq(&cm_id_priv->lock); 1048 + old_state = cm_id->state; 1046 1049 retest: 1047 1050 switch (cm_id->state) { 1048 1051 case IB_CM_LISTEN: ··· 1154 1151 msecs_to_jiffies( 1155 1152 CM_DESTROY_ID_WAIT_TIMEOUT)); 1156 1153 if (!ret) /* timeout happened */ 1157 - cm_destroy_id_wait_timeout(cm_id); 1154 + cm_destroy_id_wait_timeout(cm_id, old_state); 1158 1155 } while (!ret); 1159 1156 1160 1157 while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
+2 -1
drivers/infiniband/hw/mlx5/mad.c
··· 188 188 mdev = dev->mdev; 189 189 mdev_port_num = 1; 190 190 } 191 - if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) { 191 + if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1 && 192 + !mlx5_core_mp_enabled(mdev)) { 192 193 /* set local port to one for Function-Per-Port HCA. */ 193 194 mdev = dev->mdev; 194 195 mdev_port_num = 1;
+2
drivers/infiniband/sw/rxe/rxe.c
··· 33 33 34 34 if (rxe->tfm) 35 35 crypto_free_shash(rxe->tfm); 36 + 37 + mutex_destroy(&rxe->usdev_lock); 36 38 } 37 39 38 40 /* initialize rxe device parameters */
+8
drivers/interconnect/core.c
··· 176 176 177 177 path->num_nodes = num_nodes; 178 178 179 + mutex_lock(&icc_bw_lock); 180 + 179 181 for (i = num_nodes - 1; i >= 0; i--) { 180 182 node->provider->users++; 181 183 hlist_add_head(&path->reqs[i].req_node, &node->req_list); ··· 187 185 /* reference to previous node was saved during path traversal */ 188 186 node = node->reverse; 189 187 } 188 + 189 + mutex_unlock(&icc_bw_lock); 190 190 191 191 return path; 192 192 } ··· 796 792 pr_err("%s: error (%d)\n", __func__, ret); 797 793 798 794 mutex_lock(&icc_lock); 795 + mutex_lock(&icc_bw_lock); 796 + 799 797 for (i = 0; i < path->num_nodes; i++) { 800 798 node = path->reqs[i].node; 801 799 hlist_del(&path->reqs[i].req_node); 802 800 if (!WARN_ON(!node->provider->users)) 803 801 node->provider->users--; 804 802 } 803 + 804 + mutex_unlock(&icc_bw_lock); 805 805 mutex_unlock(&icc_lock); 806 806 807 807 kfree_const(path->name);
-26
drivers/interconnect/qcom/x1e80100.c
··· 116 116 .links = { X1E80100_SLAVE_A2NOC_SNOC }, 117 117 }; 118 118 119 - static struct qcom_icc_node ddr_perf_mode_master = { 120 - .name = "ddr_perf_mode_master", 121 - .id = X1E80100_MASTER_DDR_PERF_MODE, 122 - .channels = 1, 123 - .buswidth = 4, 124 - .num_links = 1, 125 - .links = { X1E80100_SLAVE_DDR_PERF_MODE }, 126 - }; 127 - 128 119 static struct qcom_icc_node qup0_core_master = { 129 120 .name = "qup0_core_master", 130 121 .id = X1E80100_MASTER_QUP_CORE_0, ··· 677 686 .buswidth = 16, 678 687 .num_links = 1, 679 688 .links = { X1E80100_MASTER_A2NOC_SNOC }, 680 - }; 681 - 682 - static struct qcom_icc_node ddr_perf_mode_slave = { 683 - .name = "ddr_perf_mode_slave", 684 - .id = X1E80100_SLAVE_DDR_PERF_MODE, 685 - .channels = 1, 686 - .buswidth = 4, 687 - .num_links = 0, 688 689 }; 689 690 690 691 static struct qcom_icc_node qup0_core_slave = { ··· 1360 1377 .nodes = { &ebi }, 1361 1378 }; 1362 1379 1363 - static struct qcom_icc_bcm bcm_acv_perf = { 1364 - .name = "ACV_PERF", 1365 - .num_nodes = 1, 1366 - .nodes = { &ddr_perf_mode_slave }, 1367 - }; 1368 - 1369 1380 static struct qcom_icc_bcm bcm_ce0 = { 1370 1381 .name = "CE0", 1371 1382 .num_nodes = 1, ··· 1560 1583 }; 1561 1584 1562 1585 static struct qcom_icc_bcm * const clk_virt_bcms[] = { 1563 - &bcm_acv_perf, 1564 1586 &bcm_qup0, 1565 1587 &bcm_qup1, 1566 1588 &bcm_qup2, 1567 1589 }; 1568 1590 1569 1591 static struct qcom_icc_node * const clk_virt_nodes[] = { 1570 - [MASTER_DDR_PERF_MODE] = &ddr_perf_mode_master, 1571 1592 [MASTER_QUP_CORE_0] = &qup0_core_master, 1572 1593 [MASTER_QUP_CORE_1] = &qup1_core_master, 1573 1594 [MASTER_QUP_CORE_2] = &qup2_core_master, 1574 - [SLAVE_DDR_PERF_MODE] = &ddr_perf_mode_slave, 1575 1595 [SLAVE_QUP_CORE_0] = &qup0_core_slave, 1576 1596 [SLAVE_QUP_CORE_1] = &qup1_core_slave, 1577 1597 [SLAVE_QUP_CORE_2] = &qup2_core_slave,
+1
drivers/iommu/iommufd/Kconfig
··· 37 37 depends on DEBUG_KERNEL 38 38 depends on FAULT_INJECTION 39 39 depends on RUNTIME_TESTING_MENU 40 + select IOMMUFD_DRIVER 40 41 default n 41 42 help 42 43 This is dangerous, do not enable unless running
+1 -1
drivers/misc/cardreader/rtsx_pcr.c
··· 1002 1002 } else { 1003 1003 pcr->card_removed |= SD_EXIST; 1004 1004 pcr->card_inserted &= ~SD_EXIST; 1005 - if (PCI_PID(pcr) == PID_5261) { 1005 + if ((PCI_PID(pcr) == PID_5261) || (PCI_PID(pcr) == PID_5264)) { 1006 1006 rtsx_pci_write_register(pcr, RTS5261_FW_STATUS, 1007 1007 RTS5261_EXPRESS_LINK_FAIL_MASK, 0); 1008 1008 pcr->extra_caps |= EXTRA_CAPS_SD_EXPRESS;
+1 -1
drivers/misc/mei/pci-me.c
··· 116 116 {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)}, 117 117 {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)}, 118 118 119 - {MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)}, 119 + {MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_SPS_CFG)}, 120 120 121 121 {MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)}, 122 122 {MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
+16 -1
drivers/misc/mei/platform-vsc.c
··· 400 400 static int mei_vsc_suspend(struct device *dev) 401 401 { 402 402 struct mei_device *mei_dev = dev_get_drvdata(dev); 403 + struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev); 403 404 404 405 mei_stop(mei_dev); 406 + 407 + mei_disable_interrupts(mei_dev); 408 + 409 + vsc_tp_free_irq(hw->tp); 405 410 406 411 return 0; 407 412 } ··· 414 409 static int mei_vsc_resume(struct device *dev) 415 410 { 416 411 struct mei_device *mei_dev = dev_get_drvdata(dev); 412 + struct mei_vsc_hw *hw = mei_dev_to_vsc_hw(mei_dev); 417 413 int ret; 414 + 415 + ret = vsc_tp_request_irq(hw->tp); 416 + if (ret) 417 + return ret; 418 418 419 419 ret = mei_restart(mei_dev); 420 420 if (ret) 421 - return ret; 421 + goto err_free; 422 422 423 423 /* start timer if stopped in suspend */ 424 424 schedule_delayed_work(&mei_dev->timer_work, HZ); 425 425 426 426 return 0; 427 + 428 + err_free: 429 + vsc_tp_free_irq(hw->tp); 430 + 431 + return ret; 427 432 } 428 433 429 434 static DEFINE_SIMPLE_DEV_PM_OPS(mei_vsc_pm_ops, mei_vsc_suspend, mei_vsc_resume);
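The platform-vsc.c change above frees the IRQ on suspend and re-requests it on resume, which adds a new unwind obligation: if `mei_restart()` fails after the IRQ was re-requested, the IRQ must be freed again before returning. A minimal userspace sketch of that resume-path unwind, with `stub_request_irq()`/`stub_restart()`/`stub_free_irq()` as hypothetical stand-ins for `vsc_tp_request_irq()`, `mei_restart()` and `vsc_tp_free_irq()`:

```c
#include <assert.h>

/* Stubbed state standing in for the driver's IRQ registration and a
 * restart that can be forced to fail. */
static int irq_requested;
static int restart_ok;

static int stub_request_irq(void) { irq_requested = 1; return 0; }
static void stub_free_irq(void)   { irq_requested = 0; }
static int stub_restart(void)     { return restart_ok ? 0 : -5; }

/* Resume path: re-request the IRQ first; if the later restart step
 * fails, unwind by freeing the IRQ so suspend/resume stays balanced. */
static int resume(void)
{
    int ret;

    ret = stub_request_irq();
    if (ret)
        return ret;

    ret = stub_restart();
    if (ret)
        goto err_free;

    return 0;

err_free:
    stub_free_irq();
    return ret;
}
```

The point of the `goto err_free` shape is that every exit after the IRQ request either succeeds or releases exactly what was acquired, mirroring the hunk above.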
+59 -25
drivers/misc/mei/vsc-tp.c
··· 94 94 {} 95 95 }; 96 96 97 + static irqreturn_t vsc_tp_isr(int irq, void *data) 98 + { 99 + struct vsc_tp *tp = data; 100 + 101 + atomic_inc(&tp->assert_cnt); 102 + 103 + wake_up(&tp->xfer_wait); 104 + 105 + return IRQ_WAKE_THREAD; 106 + } 107 + 108 + static irqreturn_t vsc_tp_thread_isr(int irq, void *data) 109 + { 110 + struct vsc_tp *tp = data; 111 + 112 + if (tp->event_notify) 113 + tp->event_notify(tp->event_notify_context); 114 + 115 + return IRQ_HANDLED; 116 + } 117 + 97 118 /* wakeup firmware and wait for response */ 98 119 static int vsc_tp_wakeup_request(struct vsc_tp *tp) 99 120 { ··· 405 384 EXPORT_SYMBOL_NS_GPL(vsc_tp_register_event_cb, VSC_TP); 406 385 407 386 /** 387 + * vsc_tp_request_irq - request irq for vsc_tp device 388 + * @tp: vsc_tp device handle 389 + */ 390 + int vsc_tp_request_irq(struct vsc_tp *tp) 391 + { 392 + struct spi_device *spi = tp->spi; 393 + struct device *dev = &spi->dev; 394 + int ret; 395 + 396 + irq_set_status_flags(spi->irq, IRQ_DISABLE_UNLAZY); 397 + ret = request_threaded_irq(spi->irq, vsc_tp_isr, vsc_tp_thread_isr, 398 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 399 + dev_name(dev), tp); 400 + if (ret) 401 + return ret; 402 + 403 + return 0; 404 + } 405 + EXPORT_SYMBOL_NS_GPL(vsc_tp_request_irq, VSC_TP); 406 + 407 + /** 408 + * vsc_tp_free_irq - free irq for vsc_tp device 409 + * @tp: vsc_tp device handle 410 + */ 411 + void vsc_tp_free_irq(struct vsc_tp *tp) 412 + { 413 + free_irq(tp->spi->irq, tp); 414 + } 415 + EXPORT_SYMBOL_NS_GPL(vsc_tp_free_irq, VSC_TP); 416 + 417 + /** 408 418 * vsc_tp_intr_synchronize - synchronize vsc_tp interrupt 409 419 * @tp: vsc_tp device handle 410 420 */ ··· 464 412 disable_irq(tp->spi->irq); 465 413 } 466 414 EXPORT_SYMBOL_NS_GPL(vsc_tp_intr_disable, VSC_TP); 467 - 468 - static irqreturn_t vsc_tp_isr(int irq, void *data) 469 - { 470 - struct vsc_tp *tp = data; 471 - 472 - atomic_inc(&tp->assert_cnt); 473 - 474 - return IRQ_WAKE_THREAD; 475 - } 476 - 477 - static irqreturn_t 
vsc_tp_thread_isr(int irq, void *data) 478 - { 479 - struct vsc_tp *tp = data; 480 - 481 - wake_up(&tp->xfer_wait); 482 - 483 - if (tp->event_notify) 484 - tp->event_notify(tp->event_notify_context); 485 - 486 - return IRQ_HANDLED; 487 - } 488 415 489 416 static int vsc_tp_match_any(struct acpi_device *adev, void *data) 490 417 { ··· 521 490 tp->spi = spi; 522 491 523 492 irq_set_status_flags(spi->irq, IRQ_DISABLE_UNLAZY); 524 - ret = devm_request_threaded_irq(dev, spi->irq, vsc_tp_isr, 525 - vsc_tp_thread_isr, 526 - IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 527 - dev_name(dev), tp); 493 + ret = request_threaded_irq(spi->irq, vsc_tp_isr, vsc_tp_thread_isr, 494 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 495 + dev_name(dev), tp); 528 496 if (ret) 529 497 return ret; 530 498 ··· 552 522 err_destroy_lock: 553 523 mutex_destroy(&tp->mutex); 554 524 525 + free_irq(spi->irq, tp); 526 + 555 527 return ret; 556 528 } 557 529 ··· 564 532 platform_device_unregister(tp->pdev); 565 533 566 534 mutex_destroy(&tp->mutex); 535 + 536 + free_irq(spi->irq, tp); 567 537 } 568 538 569 539 static const struct acpi_device_id vsc_tp_acpi_ids[] = {
+3
drivers/misc/mei/vsc-tp.h
··· 37 37 int vsc_tp_register_event_cb(struct vsc_tp *tp, vsc_tp_event_cb_t event_cb, 38 38 void *context); 39 39 40 + int vsc_tp_request_irq(struct vsc_tp *tp); 41 + void vsc_tp_free_irq(struct vsc_tp *tp); 42 + 40 43 void vsc_tp_intr_enable(struct vsc_tp *tp); 41 44 void vsc_tp_intr_disable(struct vsc_tp *tp); 42 45 void vsc_tp_intr_synchronize(struct vsc_tp *tp);
+52 -4
drivers/net/dsa/mv88e6xxx/chip.c
··· 566 566 phy_interface_set_rgmii(supported); 567 567 } 568 568 569 + static void 570 + mv88e6250_setup_supported_interfaces(struct mv88e6xxx_chip *chip, int port, 571 + struct phylink_config *config) 572 + { 573 + unsigned long *supported = config->supported_interfaces; 574 + int err; 575 + u16 reg; 576 + 577 + err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg); 578 + if (err) { 579 + dev_err(chip->dev, "p%d: failed to read port status\n", port); 580 + return; 581 + } 582 + 583 + switch (reg & MV88E6250_PORT_STS_PORTMODE_MASK) { 584 + case MV88E6250_PORT_STS_PORTMODE_MII_10_HALF_PHY: 585 + case MV88E6250_PORT_STS_PORTMODE_MII_100_HALF_PHY: 586 + case MV88E6250_PORT_STS_PORTMODE_MII_10_FULL_PHY: 587 + case MV88E6250_PORT_STS_PORTMODE_MII_100_FULL_PHY: 588 + __set_bit(PHY_INTERFACE_MODE_REVMII, supported); 589 + break; 590 + 591 + case MV88E6250_PORT_STS_PORTMODE_MII_HALF: 592 + case MV88E6250_PORT_STS_PORTMODE_MII_FULL: 593 + __set_bit(PHY_INTERFACE_MODE_MII, supported); 594 + break; 595 + 596 + case MV88E6250_PORT_STS_PORTMODE_MII_DUAL_100_RMII_FULL_PHY: 597 + case MV88E6250_PORT_STS_PORTMODE_MII_200_RMII_FULL_PHY: 598 + case MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_HALF_PHY: 599 + case MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_FULL_PHY: 600 + __set_bit(PHY_INTERFACE_MODE_REVRMII, supported); 601 + break; 602 + 603 + case MV88E6250_PORT_STS_PORTMODE_MII_DUAL_100_RMII_FULL: 604 + case MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_FULL: 605 + __set_bit(PHY_INTERFACE_MODE_RMII, supported); 606 + break; 607 + 608 + case MV88E6250_PORT_STS_PORTMODE_MII_100_RGMII: 609 + __set_bit(PHY_INTERFACE_MODE_RGMII, supported); 610 + break; 611 + 612 + default: 613 + dev_err(chip->dev, 614 + "p%d: invalid port mode in status register: %04x\n", 615 + port, reg); 616 + } 617 + } 618 + 569 619 static void mv88e6250_phylink_get_caps(struct mv88e6xxx_chip *chip, int port, 570 620 struct phylink_config *config) 571 621 { 572 - unsigned long *supported = 
config->supported_interfaces; 573 - 574 - /* Translate the default cmode */ 575 - mv88e6xxx_translate_cmode(chip->ports[port].cmode, supported); 622 + if (!mv88e6xxx_phy_is_internal(chip, port)) 623 + mv88e6250_setup_supported_interfaces(chip, port, config); 576 624 577 625 config->mac_capabilities = MAC_SYM_PAUSE | MAC_10 | MAC_100; 578 626 }
+19 -4
drivers/net/dsa/mv88e6xxx/port.h
··· 25 25 #define MV88E6250_PORT_STS_PORTMODE_PHY_100_HALF 0x0900 26 26 #define MV88E6250_PORT_STS_PORTMODE_PHY_10_FULL 0x0a00 27 27 #define MV88E6250_PORT_STS_PORTMODE_PHY_100_FULL 0x0b00 28 - #define MV88E6250_PORT_STS_PORTMODE_MII_10_HALF 0x0c00 29 - #define MV88E6250_PORT_STS_PORTMODE_MII_100_HALF 0x0d00 30 - #define MV88E6250_PORT_STS_PORTMODE_MII_10_FULL 0x0e00 31 - #define MV88E6250_PORT_STS_PORTMODE_MII_100_FULL 0x0f00 28 + /* - Modes with PHY suffix use output instead of input clock 29 + * - Modes without RMII or RGMII use MII 30 + * - Modes without speed do not have a fixed speed specified in the manual 31 + * ("DC to x MHz" - variable clock support?) 32 + */ 33 + #define MV88E6250_PORT_STS_PORTMODE_MII_DISABLED 0x0000 34 + #define MV88E6250_PORT_STS_PORTMODE_MII_100_RGMII 0x0100 35 + #define MV88E6250_PORT_STS_PORTMODE_MII_DUAL_100_RMII_FULL_PHY 0x0200 36 + #define MV88E6250_PORT_STS_PORTMODE_MII_200_RMII_FULL_PHY 0x0400 37 + #define MV88E6250_PORT_STS_PORTMODE_MII_DUAL_100_RMII_FULL 0x0600 38 + #define MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_FULL 0x0700 39 + #define MV88E6250_PORT_STS_PORTMODE_MII_HALF 0x0800 40 + #define MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_HALF_PHY 0x0900 41 + #define MV88E6250_PORT_STS_PORTMODE_MII_FULL 0x0a00 42 + #define MV88E6250_PORT_STS_PORTMODE_MII_10_100_RMII_FULL_PHY 0x0b00 43 + #define MV88E6250_PORT_STS_PORTMODE_MII_10_HALF_PHY 0x0c00 44 + #define MV88E6250_PORT_STS_PORTMODE_MII_100_HALF_PHY 0x0d00 45 + #define MV88E6250_PORT_STS_PORTMODE_MII_10_FULL_PHY 0x0e00 46 + #define MV88E6250_PORT_STS_PORTMODE_MII_100_FULL_PHY 0x0f00 32 47 #define MV88E6XXX_PORT_STS_LINK 0x0800 33 48 #define MV88E6XXX_PORT_STS_DUPLEX 0x0400 34 49 #define MV88E6XXX_PORT_STS_SPEED_MASK 0x0300
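The PORTMODE values above feed the new `mv88e6250_setup_supported_interfaces()` switch in chip.c: modes with the `PHY` suffix (output clock) map to the reverse interface modes, modes without it to the normal ones. A compact userspace sketch of that decode, with register values copied from the defines above, the RMII variants omitted for brevity, and a small local enum standing in for the kernel's `phy_interface_t`:

```c
/* PORTMODE field values from the MV88E6250 port status register
 * (subset of the defines in port.h above). */
#define PORTMODE_MASK                 0x0f00
#define PORTMODE_MII_100_RGMII        0x0100
#define PORTMODE_MII_HALF             0x0800
#define PORTMODE_MII_FULL             0x0a00
#define PORTMODE_MII_10_HALF_PHY      0x0c00
#define PORTMODE_MII_100_HALF_PHY     0x0d00
#define PORTMODE_MII_10_FULL_PHY      0x0e00
#define PORTMODE_MII_100_FULL_PHY     0x0f00

/* Stand-ins for phy_interface_t; the driver uses PHY_INTERFACE_MODE_*. */
enum { MODE_INVALID, MODE_MII, MODE_REVMII, MODE_RGMII };

/* Decode the PORTMODE field the same way the driver's switch does:
 * "PHY"-suffixed modes mean the switch drives the clock, so the port
 * presents reverse MII toward the attached device. */
static int decode_portmode(unsigned int status)
{
    switch (status & PORTMODE_MASK) {
    case PORTMODE_MII_10_HALF_PHY:
    case PORTMODE_MII_100_HALF_PHY:
    case PORTMODE_MII_10_FULL_PHY:
    case PORTMODE_MII_100_FULL_PHY:
        return MODE_REVMII;
    case PORTMODE_MII_HALF:
    case PORTMODE_MII_FULL:
        return MODE_MII;
    case PORTMODE_MII_100_RGMII:
        return MODE_RGMII;
    default:
        return MODE_INVALID;   /* driver logs an error here */
    }
}
```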
+14 -7
drivers/net/ethernet/broadcom/asp2/bcmasp_intf.c
··· 436 436 umac_wl(intf, 0x800, UMC_RX_MAX_PKT_SZ); 437 437 } 438 438 439 - static int bcmasp_tx_poll(struct napi_struct *napi, int budget) 439 + static int bcmasp_tx_reclaim(struct bcmasp_intf *intf) 440 440 { 441 - struct bcmasp_intf *intf = 442 - container_of(napi, struct bcmasp_intf, tx_napi); 443 441 struct bcmasp_intf_stats64 *stats = &intf->stats64; 444 442 struct device *kdev = &intf->parent->pdev->dev; 445 443 unsigned long read, released = 0; ··· 480 482 DESC_RING_COUNT); 481 483 } 482 484 483 - /* Ensure all descriptors have been written to DRAM for the hardware 484 - * to see updated contents. 485 - */ 486 - wmb(); 485 + return released; 486 + } 487 + 488 + static int bcmasp_tx_poll(struct napi_struct *napi, int budget) 489 + { 490 + struct bcmasp_intf *intf = 491 + container_of(napi, struct bcmasp_intf, tx_napi); 492 + int released = 0; 493 + 494 + released = bcmasp_tx_reclaim(intf); 487 495 488 496 napi_complete(&intf->tx_napi); 489 497 ··· 801 797 intf->tx_spb_dma_read = intf->tx_spb_dma_addr; 802 798 intf->tx_spb_index = 0; 803 799 intf->tx_spb_clean_index = 0; 800 + memset(intf->tx_cbs, 0, sizeof(struct bcmasp_tx_cb) * DESC_RING_COUNT); 804 801 805 802 /* Make sure channels are disabled */ 806 803 tx_spb_ctrl_wl(intf, 0x0, TX_SPB_CTRL_ENABLE); ··· 889 884 usleep_range(1000, 2000); 890 885 } while (timeout-- > 0); 891 886 tx_spb_dma_wl(intf, 0x0, TX_SPB_DMA_FIFO_CTRL); 887 + 888 + bcmasp_tx_reclaim(intf); 892 889 893 890 umac_enable_set(intf, UMC_CMD_TX_EN, 0); 894 891
+8 -6
drivers/net/ethernet/broadcom/b44.c
··· 2009 2009 bp->flags |= B44_FLAG_TX_PAUSE; 2010 2010 else 2011 2011 bp->flags &= ~B44_FLAG_TX_PAUSE; 2012 - if (bp->flags & B44_FLAG_PAUSE_AUTO) { 2013 - b44_halt(bp); 2014 - b44_init_rings(bp); 2015 - b44_init_hw(bp, B44_FULL_RESET); 2016 - } else { 2017 - __b44_set_flow_ctrl(bp, bp->flags); 2012 + if (netif_running(dev)) { 2013 + if (bp->flags & B44_FLAG_PAUSE_AUTO) { 2014 + b44_halt(bp); 2015 + b44_init_rings(bp); 2016 + b44_init_hw(bp, B44_FULL_RESET); 2017 + } else { 2018 + __b44_set_flow_ctrl(bp, bp->flags); 2019 + } 2018 2020 } 2019 2021 spin_unlock_irq(&bp->lock); 2020 2022
+46 -36
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 1811 1811 skb = bnxt_copy_skb(bnapi, data_ptr, len, mapping); 1812 1812 if (!skb) { 1813 1813 bnxt_abort_tpa(cpr, idx, agg_bufs); 1814 - cpr->sw_stats.rx.rx_oom_discards += 1; 1814 + cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1; 1815 1815 return NULL; 1816 1816 } 1817 1817 } else { ··· 1821 1821 new_data = __bnxt_alloc_rx_frag(bp, &new_mapping, GFP_ATOMIC); 1822 1822 if (!new_data) { 1823 1823 bnxt_abort_tpa(cpr, idx, agg_bufs); 1824 - cpr->sw_stats.rx.rx_oom_discards += 1; 1824 + cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1; 1825 1825 return NULL; 1826 1826 } 1827 1827 ··· 1837 1837 if (!skb) { 1838 1838 skb_free_frag(data); 1839 1839 bnxt_abort_tpa(cpr, idx, agg_bufs); 1840 - cpr->sw_stats.rx.rx_oom_discards += 1; 1840 + cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1; 1841 1841 return NULL; 1842 1842 } 1843 1843 skb_reserve(skb, bp->rx_offset); ··· 1848 1848 skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, idx, agg_bufs, true); 1849 1849 if (!skb) { 1850 1850 /* Page reuse already handled by bnxt_rx_pages(). 
*/ 1851 - cpr->sw_stats.rx.rx_oom_discards += 1; 1851 + cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1; 1852 1852 return NULL; 1853 1853 } 1854 1854 } ··· 2127 2127 u32 frag_len = bnxt_rx_agg_pages_xdp(bp, cpr, &xdp, 2128 2128 cp_cons, agg_bufs, 2129 2129 false); 2130 - if (!frag_len) { 2131 - cpr->sw_stats.rx.rx_oom_discards += 1; 2132 - rc = -ENOMEM; 2133 - goto next_rx; 2134 - } 2130 + if (!frag_len) 2131 + goto oom_next_rx; 2135 2132 } 2136 2133 xdp_active = true; 2137 2134 } ··· 2154 2157 else 2155 2158 bnxt_xdp_buff_frags_free(rxr, &xdp); 2156 2159 } 2157 - cpr->sw_stats.rx.rx_oom_discards += 1; 2158 - rc = -ENOMEM; 2159 - goto next_rx; 2160 + goto oom_next_rx; 2160 2161 } 2161 2162 } else { 2162 2163 u32 payload; ··· 2165 2170 payload = 0; 2166 2171 skb = bp->rx_skb_func(bp, rxr, cons, data, data_ptr, dma_addr, 2167 2172 payload | len); 2168 - if (!skb) { 2169 - cpr->sw_stats.rx.rx_oom_discards += 1; 2170 - rc = -ENOMEM; 2171 - goto next_rx; 2172 - } 2173 + if (!skb) 2174 + goto oom_next_rx; 2173 2175 } 2174 2176 2175 2177 if (agg_bufs) { 2176 2178 if (!xdp_active) { 2177 2179 skb = bnxt_rx_agg_pages_skb(bp, cpr, skb, cp_cons, agg_bufs, false); 2178 - if (!skb) { 2179 - cpr->sw_stats.rx.rx_oom_discards += 1; 2180 - rc = -ENOMEM; 2181 - goto next_rx; 2182 - } 2180 + if (!skb) 2181 + goto oom_next_rx; 2183 2182 } else { 2184 2183 skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr->page_pool, &xdp, rxcmp1); 2185 2184 if (!skb) { 2186 2185 /* we should be able to free the old skb here */ 2187 2186 bnxt_xdp_buff_frags_free(rxr, &xdp); 2188 - cpr->sw_stats.rx.rx_oom_discards += 1; 2189 - rc = -ENOMEM; 2190 - goto next_rx; 2187 + goto oom_next_rx; 2191 2188 } 2192 2189 } 2193 2190 } ··· 2257 2270 *raw_cons = tmp_raw_cons; 2258 2271 2259 2272 return rc; 2273 + 2274 + oom_next_rx: 2275 + cpr->bnapi->cp_ring.sw_stats.rx.rx_oom_discards += 1; 2276 + rc = -ENOMEM; 2277 + goto next_rx; 2260 2278 } 2261 2279 2262 2280 /* In netpoll mode, if we are using a combined 
completion ring, we need to ··· 2308 2316 } 2309 2317 rc = bnxt_rx_pkt(bp, cpr, raw_cons, event); 2310 2318 if (rc && rc != -EBUSY) 2311 - cpr->sw_stats.rx.rx_netpoll_discards += 1; 2319 + cpr->bnapi->cp_ring.sw_stats.rx.rx_netpoll_discards += 1; 2312 2320 return rc; 2313 2321 } 2314 2322 ··· 9157 9165 BNXT_FW_HEALTH_WIN_BASE + 9158 9166 BNXT_GRC_REG_CHIP_NUM); 9159 9167 } 9160 - if (!BNXT_CHIP_P5(bp)) 9168 + if (!BNXT_CHIP_P5_PLUS(bp)) 9161 9169 return; 9162 9170 9163 9171 status_loc = BNXT_GRC_REG_STATUS_P5 | ··· 13259 13267 bnxt_rtnl_unlock_sp(bp); 13260 13268 } 13261 13269 13270 + static void bnxt_fw_fatal_close(struct bnxt *bp) 13271 + { 13272 + bnxt_tx_disable(bp); 13273 + bnxt_disable_napi(bp); 13274 + bnxt_disable_int_sync(bp); 13275 + bnxt_free_irq(bp); 13276 + bnxt_clear_int_mode(bp); 13277 + pci_disable_device(bp->pdev); 13278 + } 13279 + 13262 13280 static void bnxt_fw_reset_close(struct bnxt *bp) 13263 13281 { 13264 13282 bnxt_ulp_stop(bp); ··· 13282 13280 pci_read_config_word(bp->pdev, PCI_SUBSYSTEM_ID, &val); 13283 13281 if (val == 0xffff) 13284 13282 bp->fw_reset_min_dsecs = 0; 13285 - bnxt_tx_disable(bp); 13286 - bnxt_disable_napi(bp); 13287 - bnxt_disable_int_sync(bp); 13288 - bnxt_free_irq(bp); 13289 - bnxt_clear_int_mode(bp); 13290 - pci_disable_device(bp->pdev); 13283 + bnxt_fw_fatal_close(bp); 13291 13284 } 13292 13285 __bnxt_close_nic(bp, true, false); 13293 13286 bnxt_vf_reps_free(bp); ··· 15620 15623 { 15621 15624 struct net_device *netdev = pci_get_drvdata(pdev); 15622 15625 struct bnxt *bp = netdev_priv(netdev); 15626 + bool abort = false; 15623 15627 15624 15628 netdev_info(netdev, "PCI I/O error detected\n"); 15625 15629 ··· 15629 15631 15630 15632 bnxt_ulp_stop(bp); 15631 15633 15632 - if (state == pci_channel_io_perm_failure) { 15634 + if (test_and_set_bit(BNXT_STATE_IN_FW_RESET, &bp->state)) { 15635 + netdev_err(bp->dev, "Firmware reset already in progress\n"); 15636 + abort = true; 15637 + } 15638 + 15639 + if (abort || state == 
pci_channel_io_perm_failure) { 15633 15640 rtnl_unlock(); 15634 15641 return PCI_ERS_RESULT_DISCONNECT; 15635 15642 } 15636 15643 15637 - if (state == pci_channel_io_frozen) 15644 + /* Link is not reliable anymore if state is pci_channel_io_frozen 15645 + * so we disable bus master to prevent any potential bad DMAs before 15646 + * freeing kernel memory. 15647 + */ 15648 + if (state == pci_channel_io_frozen) { 15638 15649 set_bit(BNXT_STATE_PCI_CHANNEL_IO_FROZEN, &bp->state); 15650 + bnxt_fw_fatal_close(bp); 15651 + } 15639 15652 15640 15653 if (netif_running(netdev)) 15641 - bnxt_close(netdev); 15654 + __bnxt_close_nic(bp, true, true); 15642 15655 15643 15656 if (pci_is_enabled(pdev)) 15644 15657 pci_disable_device(pdev); ··· 15737 15728 } 15738 15729 15739 15730 reset_exit: 15731 + clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state); 15740 15732 bnxt_clear_reservations(bp, true); 15741 15733 rtnl_unlock(); 15742 15734
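Part of the bnxt.c change above collapses four duplicated out-of-memory exits in `bnxt_rx_pkt()` into one shared `oom_next_rx` label that bumps the discard counter and sets the return code once. A minimal sketch of that goto-consolidation pattern, with `rx_pkt()`, `struct rx_stats` and the two `alloc_*_ok` flags as hypothetical stand-ins for the driver's allocation sites:

```c
/* Hypothetical stand-ins for the driver's per-ring stats and errno. */
struct rx_stats { int oom_discards; };

enum { RX_OK = 0, RX_ENOMEM = -12 };

static struct rx_stats g_stats;

/* Every allocation failure jumps to one shared label that bumps the
 * OOM counter and sets the return code, instead of repeating those
 * statements at each failure site. */
static int rx_pkt(struct rx_stats *stats, int alloc_a_ok, int alloc_b_ok)
{
    int rc = RX_OK;

    if (!alloc_a_ok)
        goto oom;
    if (!alloc_b_ok)
        goto oom;

    return rc;

oom:
    stats->oom_discards += 1;
    rc = RX_ENOMEM;
    return rc;
}
```

In the real function the label then continues via `goto next_rx` to advance the consumer index; the sketch simply returns. The benefit is the same: a later change to the OOM accounting (as in this very patch, which redirects the counter to `bnapi->cp_ring`) touches one place instead of four.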
+3 -3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 16103 16103 val = FIELD_GET(I40E_PRTGL_SAH_MFS_MASK, 16104 16104 rd32(&pf->hw, I40E_PRTGL_SAH)); 16105 16105 if (val < MAX_FRAME_SIZE_DEFAULT) 16106 - dev_warn(&pdev->dev, "MFS for port %x has been set below the default: %x\n", 16107 - pf->hw.port, val); 16106 + dev_warn(&pdev->dev, "MFS for port %x (%d) has been set below the default (%d)\n", 16107 + pf->hw.port, val, MAX_FRAME_SIZE_DEFAULT); 16108 16108 16109 16109 /* Add a filter to drop all Flow control frames from any VSI from being 16110 16110 * transmitted. By doing so we stop a malicious VF from sending out ··· 16644 16644 * since we need to be able to guarantee forward progress even under 16645 16645 * memory pressure. 16646 16646 */ 16647 - i40e_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, i40e_driver_name); 16647 + i40e_wq = alloc_workqueue("%s", 0, 0, i40e_driver_name); 16648 16648 if (!i40e_wq) { 16649 16649 pr_err("%s: Failed to create workqueue\n", i40e_driver_name); 16650 16650 return -ENOMEM;
+29 -1
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 3503 3503 } 3504 3504 3505 3505 /** 3506 + * iavf_is_tc_config_same - Compare the mqprio TC config with the 3507 + * TC config already configured on this adapter. 3508 + * @adapter: board private structure 3509 + * @mqprio_qopt: TC config received from kernel. 3510 + * 3511 + * This function compares the TC config received from the kernel 3512 + * with the config already configured on the adapter. 3513 + * 3514 + * Return: True if configuration is same, false otherwise. 3515 + **/ 3516 + static bool iavf_is_tc_config_same(struct iavf_adapter *adapter, 3517 + struct tc_mqprio_qopt *mqprio_qopt) 3518 + { 3519 + struct virtchnl_channel_info *ch = &adapter->ch_config.ch_info[0]; 3520 + int i; 3521 + 3522 + if (adapter->num_tc != mqprio_qopt->num_tc) 3523 + return false; 3524 + 3525 + for (i = 0; i < adapter->num_tc; i++) { 3526 + if (ch[i].count != mqprio_qopt->count[i] || 3527 + ch[i].offset != mqprio_qopt->offset[i]) 3528 + return false; 3529 + } 3530 + return true; 3531 + } 3532 + 3533 + /** 3506 3534 * __iavf_setup_tc - configure multiple traffic classes 3507 3535 * @netdev: network interface device structure 3508 3536 * @type_data: tc offload data ··· 3587 3559 if (ret) 3588 3560 return ret; 3589 3561 /* Return if same TC config is requested */ 3590 - if (adapter->num_tc == num_tc) 3562 + if (iavf_is_tc_config_same(adapter, &mqprio_qopt->qopt)) 3591 3563 return 0; 3592 3564 adapter->num_tc = num_tc; 3593 3565
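The iavf change above replaces a check of `num_tc` alone with a field-by-field comparison, because a new mqprio request can keep the same number of TCs while moving queue counts or offsets. A self-contained sketch of that comparison, using simplified stand-ins (`struct ch_info`, `struct tc_request`) for `virtchnl_channel_info` and `tc_mqprio_qopt`:

```c
#include <stdbool.h>

#define MAX_TC 4

/* Simplified channel info stored on the adapter. */
struct ch_info { int count; int offset; };

/* Simplified mqprio request from the kernel. */
struct tc_request {
    int num_tc;
    int count[MAX_TC];
    int offset[MAX_TC];
};

/* Sample data for the test below: two TCs of 4 queues each. */
static const struct ch_info cur[MAX_TC] = { {4, 0}, {4, 4} };
static struct tc_request req = { 2, {4, 4}, {0, 4} };

/* Comparing only num_tc misses requests that reshuffle queues within
 * the same TC count; compare every TC's count and offset instead. */
static bool tc_config_same(const struct ch_info *ch, int num_tc,
                           const struct tc_request *r)
{
    int i;

    if (num_tc != r->num_tc)
        return false;

    for (i = 0; i < num_tc; i++) {
        if (ch[i].count != r->count[i] ||
            ch[i].offset != r->offset[i])
            return false;
    }
    return true;
}
```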
+8 -8
drivers/net/ethernet/intel/ice/ice_vf_lib.c
··· 856 856 return 0; 857 857 } 858 858 859 + if (flags & ICE_VF_RESET_LOCK) 860 + mutex_lock(&vf->cfg_lock); 861 + else 862 + lockdep_assert_held(&vf->cfg_lock); 863 + 859 864 lag = pf->lag; 860 865 mutex_lock(&pf->lag_mutex); 861 866 if (lag && lag->bonded && lag->primary) { ··· 871 866 else 872 867 act_prt = ICE_LAG_INVALID_PORT; 873 868 } 874 - 875 - if (flags & ICE_VF_RESET_LOCK) 876 - mutex_lock(&vf->cfg_lock); 877 - else 878 - lockdep_assert_held(&vf->cfg_lock); 879 869 880 870 if (ice_is_vf_disabled(vf)) { 881 871 vsi = ice_get_vf_vsi(vf); ··· 956 956 ice_mbx_clear_malvf(&vf->mbx_info); 957 957 958 958 out_unlock: 959 - if (flags & ICE_VF_RESET_LOCK) 960 - mutex_unlock(&vf->cfg_lock); 961 - 962 959 if (lag && lag->bonded && lag->primary && 963 960 act_prt != ICE_LAG_INVALID_PORT) 964 961 ice_lag_move_vf_nodes_cfg(lag, pri_prt, act_prt); 965 962 mutex_unlock(&pf->lag_mutex); 963 + 964 + if (flags & ICE_VF_RESET_LOCK) 965 + mutex_unlock(&vf->cfg_lock); 966 966 967 967 return err; 968 968 }
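The ice_vf_lib.c hunks above reorder lock acquisition in `ice_reset_vf()` so `cfg_lock` is always taken before `lag_mutex`, and released in the reverse order, giving the two locks a single consistent nesting. A toy trace of that ordering, with stub "locks" that only record events (names mirror the driver, bodies are illustrative):

```c
#include <string.h>

/* Event trace standing in for lockdep: records acquire/release order. */
static char trace[64];

static void lock(const char *name)   { strcat(trace, "+"); strcat(trace, name); }
static void unlock(const char *name) { strcat(trace, "-"); strcat(trace, name); }

/* Fixed ordering: cfg_lock outside, lag_mutex inside, released
 * innermost-first, so every path nests the two locks the same way. */
static void reset_vf(void)
{
    lock("cfg");            /* vf->cfg_lock */
    lock("lag");            /* pf->lag_mutex */

    /* ... disable VF, rebuild VSI, clear malicious-VF state ... */

    unlock("lag");
    unlock("cfg");
}
```

With real mutexes, any function that ever took `lag_mutex` before `cfg_lock` while another path did the opposite would be an AB/BA deadlock candidate, which is what the reordering removes.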
+2
drivers/net/ethernet/intel/igc/igc.h
··· 298 298 299 299 /* LEDs */ 300 300 struct mutex led_mutex; 301 + struct igc_led_classdev *leds; 301 302 }; 302 303 303 304 void igc_up(struct igc_adapter *adapter); ··· 724 723 void igc_ptp_tx_tstamp_event(struct igc_adapter *adapter); 725 724 726 725 int igc_led_setup(struct igc_adapter *adapter); 726 + void igc_led_free(struct igc_adapter *adapter); 727 727 728 728 #define igc_rx_pg_size(_ring) (PAGE_SIZE << igc_rx_pg_order(_ring)) 729 729
+30 -8
drivers/net/ethernet/intel/igc/igc_leds.c
··· 236 236 pci_dev_id(adapter->pdev), index); 237 237 } 238 238 239 - static void igc_setup_ldev(struct igc_led_classdev *ldev, 240 - struct net_device *netdev, int index) 239 + static int igc_setup_ldev(struct igc_led_classdev *ldev, 240 + struct net_device *netdev, int index) 241 241 { 242 242 struct igc_adapter *adapter = netdev_priv(netdev); 243 243 struct led_classdev *led_cdev = &ldev->led; ··· 257 257 led_cdev->hw_control_get = igc_led_hw_control_get; 258 258 led_cdev->hw_control_get_device = igc_led_hw_control_get_device; 259 259 260 - devm_led_classdev_register(&netdev->dev, led_cdev); 260 + return led_classdev_register(&netdev->dev, led_cdev); 261 261 } 262 262 263 263 int igc_led_setup(struct igc_adapter *adapter) 264 264 { 265 265 struct net_device *netdev = adapter->netdev; 266 - struct device *dev = &netdev->dev; 267 266 struct igc_led_classdev *leds; 268 - int i; 267 + int i, err; 269 268 270 269 mutex_init(&adapter->led_mutex); 271 270 272 - leds = devm_kcalloc(dev, IGC_NUM_LEDS, sizeof(*leds), GFP_KERNEL); 271 + leds = kcalloc(IGC_NUM_LEDS, sizeof(*leds), GFP_KERNEL); 273 272 if (!leds) 274 273 return -ENOMEM; 275 274 276 - for (i = 0; i < IGC_NUM_LEDS; i++) 277 - igc_setup_ldev(leds + i, netdev, i); 275 + for (i = 0; i < IGC_NUM_LEDS; i++) { 276 + err = igc_setup_ldev(leds + i, netdev, i); 277 + if (err) 278 + goto err; 279 + } 280 + 281 + adapter->leds = leds; 278 282 279 283 return 0; 284 + 285 + err: 286 + for (i--; i >= 0; i--) 287 + led_classdev_unregister(&((leds + i)->led)); 288 + 289 + kfree(leds); 290 + return err; 291 + } 292 + 293 + void igc_led_free(struct igc_adapter *adapter) 294 + { 295 + struct igc_led_classdev *leds = adapter->leds; 296 + int i; 297 + 298 + for (i = 0; i < IGC_NUM_LEDS; i++) 299 + led_classdev_unregister(&((leds + i)->led)); 300 + 301 + kfree(leds); 280 302 }
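Moving igc from `devm_led_classdev_register()` to plain `led_classdev_register()` above means the driver now owns cleanup: a failure partway through the setup loop must unregister only the LEDs that did register, via the backwards `for (i--; i >= 0; i--)` walk. A minimal sketch of that unwind, with `stub_register()`/`stub_unregister()` as stand-ins for the LED classdev calls and a `fail_at` knob to force a mid-loop failure:

```c
#define NUM_LEDS 3

/* Stubbed registration that can be forced to fail at a given index. */
static int registered[NUM_LEDS];
static int fail_at = -1;

static int stub_register(int i)
{
    if (i == fail_at)
        return -1;
    registered[i] = 1;
    return 0;
}

static void stub_unregister(int i)
{
    registered[i] = 0;
}

/* On a mid-loop failure, walk back over only the LEDs that were
 * successfully registered; index i already points at the failed one. */
static int leds_setup(void)
{
    int i, err;

    for (i = 0; i < NUM_LEDS; i++) {
        err = stub_register(i);
        if (err)
            goto err;
    }
    return 0;

err:
    for (i--; i >= 0; i--)
        stub_unregister(i);
    return err;
}
```

The same backwards walk is reused in the new `igc_led_free()` at remove time, where all `IGC_NUM_LEDS` entries are known to be registered.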
+3
drivers/net/ethernet/intel/igc/igc_main.c
··· 7020 7020 cancel_work_sync(&adapter->watchdog_task); 7021 7021 hrtimer_cancel(&adapter->hrtimer); 7022 7022 7023 + if (IS_ENABLED(CONFIG_IGC_LEDS)) 7024 + igc_led_free(adapter); 7025 + 7023 7026 /* Release control of h/w to f/w. If f/w is AMT enabled, this 7024 7027 * would have already happened in close and is redundant. 7025 7028 */
-1
drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
··· 2181 2181 2182 2182 kfree(pkind->rsrc.bmap); 2183 2183 npc_mcam_rsrcs_deinit(rvu); 2184 - kfree(mcam->counters.bmap); 2185 2184 if (rvu->kpu_prfl_addr) 2186 2185 iounmap(rvu->kpu_prfl_addr); 2187 2186 else
+1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
··· 1640 1640 .mdo_add_secy = mlx5e_macsec_add_secy, 1641 1641 .mdo_upd_secy = mlx5e_macsec_upd_secy, 1642 1642 .mdo_del_secy = mlx5e_macsec_del_secy, 1643 + .rx_uses_md_dst = true, 1643 1644 }; 1644 1645 1645 1646 bool mlx5e_macsec_handle_tx_skb(struct mlx5e_macsec *macsec, struct sk_buff *skb)
+1 -1
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 849 849 850 850 static const struct mlxsw_listener mlxsw_emad_rx_listener = 851 851 MLXSW_RXL(mlxsw_emad_rx_listener_func, ETHEMAD, TRAP_TO_CPU, false, 852 - EMAD, DISCARD); 852 + EMAD, FORWARD); 853 853 854 854 static int mlxsw_emad_tlv_enable(struct mlxsw_core *mlxsw_core) 855 855 {
+6 -14
drivers/net/ethernet/mellanox/mlxsw/core_env.c
··· 1357 1357 .got_inactive = mlxsw_env_got_inactive, 1358 1358 }; 1359 1359 1360 - static int mlxsw_env_max_module_eeprom_len_query(struct mlxsw_env *mlxsw_env) 1360 + static void mlxsw_env_max_module_eeprom_len_query(struct mlxsw_env *mlxsw_env) 1361 1361 { 1362 1362 char mcam_pl[MLXSW_REG_MCAM_LEN]; 1363 - bool mcia_128b_supported; 1363 + bool mcia_128b_supported = false; 1364 1364 int err; 1365 1365 1366 1366 mlxsw_reg_mcam_pack(mcam_pl, 1367 1367 MLXSW_REG_MCAM_FEATURE_GROUP_ENHANCED_FEATURES); 1368 1368 err = mlxsw_reg_query(mlxsw_env->core, MLXSW_REG(mcam), mcam_pl); 1369 - if (err) 1370 - return err; 1371 - 1372 - mlxsw_reg_mcam_unpack(mcam_pl, MLXSW_REG_MCAM_MCIA_128B, 1373 - &mcia_128b_supported); 1369 + if (!err) 1370 + mlxsw_reg_mcam_unpack(mcam_pl, MLXSW_REG_MCAM_MCIA_128B, 1371 + &mcia_128b_supported); 1374 1372 1375 1373 mlxsw_env->max_eeprom_len = mcia_128b_supported ? 128 : 48; 1376 - 1377 - return 0; 1378 1374 } 1379 1375 1380 1376 int mlxsw_env_init(struct mlxsw_core *mlxsw_core, ··· 1441 1445 if (err) 1442 1446 goto err_type_set; 1443 1447 1444 - err = mlxsw_env_max_module_eeprom_len_query(env); 1445 - if (err) 1446 - goto err_eeprom_len_query; 1447 - 1448 + mlxsw_env_max_module_eeprom_len_query(env); 1448 1449 env->line_cards[0]->active = true; 1449 1450 1450 1451 return 0; 1451 1452 1452 - err_eeprom_len_query: 1453 1453 err_type_set: 1454 1454 mlxsw_env_module_event_disable(env, 0); 1455 1455 err_mlxsw_env_module_event_enable:
+4 -6
drivers/net/ethernet/mellanox/mlxsw/pci.c
··· 1545 1545 { 1546 1546 struct pci_dev *pdev = mlxsw_pci->pdev; 1547 1547 char mcam_pl[MLXSW_REG_MCAM_LEN]; 1548 - bool pci_reset_supported; 1548 + bool pci_reset_supported = false; 1549 1549 u32 sys_status; 1550 1550 int err; 1551 1551 ··· 1563 1563 mlxsw_reg_mcam_pack(mcam_pl, 1564 1564 MLXSW_REG_MCAM_FEATURE_GROUP_ENHANCED_FEATURES); 1565 1565 err = mlxsw_reg_query(mlxsw_pci->core, MLXSW_REG(mcam), mcam_pl); 1566 - if (err) 1567 - return err; 1568 - 1569 - mlxsw_reg_mcam_unpack(mcam_pl, MLXSW_REG_MCAM_PCI_RESET, 1570 - &pci_reset_supported); 1566 + if (!err) 1567 + mlxsw_reg_mcam_unpack(mcam_pl, MLXSW_REG_MCAM_PCI_RESET, 1568 + &pci_reset_supported); 1571 1569 1572 1570 if (pci_reset_supported) { 1573 1571 pci_dbg(pdev, "Starting PCI reset flow\n");
+72 -43
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
··· 10 10 #include <linux/netdevice.h> 11 11 #include <linux/mutex.h> 12 12 #include <linux/refcount.h> 13 + #include <linux/idr.h> 13 14 #include <net/devlink.h> 14 15 #include <trace/events/mlxsw.h> 15 16 ··· 59 58 static int mlxsw_sp_acl_tcam_region_id_get(struct mlxsw_sp_acl_tcam *tcam, 60 59 u16 *p_id) 61 60 { 62 - u16 id; 61 + int id; 63 62 64 - id = find_first_zero_bit(tcam->used_regions, tcam->max_regions); 65 - if (id < tcam->max_regions) { 66 - __set_bit(id, tcam->used_regions); 67 - *p_id = id; 68 - return 0; 69 - } 70 - return -ENOBUFS; 63 + id = ida_alloc_max(&tcam->used_regions, tcam->max_regions - 1, 64 + GFP_KERNEL); 65 + if (id < 0) 66 + return id; 67 + 68 + *p_id = id; 69 + 70 + return 0; 71 71 } 72 72 73 73 static void mlxsw_sp_acl_tcam_region_id_put(struct mlxsw_sp_acl_tcam *tcam, 74 74 u16 id) 75 75 { 76 - __clear_bit(id, tcam->used_regions); 76 + ida_free(&tcam->used_regions, id); 77 77 } 78 78 79 79 static int mlxsw_sp_acl_tcam_group_id_get(struct mlxsw_sp_acl_tcam *tcam, 80 80 u16 *p_id) 81 81 { 82 - u16 id; 82 + int id; 83 83 84 - id = find_first_zero_bit(tcam->used_groups, tcam->max_groups); 85 - if (id < tcam->max_groups) { 86 - __set_bit(id, tcam->used_groups); 87 - *p_id = id; 88 - return 0; 89 - } 90 - return -ENOBUFS; 84 + id = ida_alloc_max(&tcam->used_groups, tcam->max_groups - 1, 85 + GFP_KERNEL); 86 + if (id < 0) 87 + return id; 88 + 89 + *p_id = id; 90 + 91 + return 0; 91 92 } 92 93 93 94 static void mlxsw_sp_acl_tcam_group_id_put(struct mlxsw_sp_acl_tcam *tcam, 94 95 u16 id) 95 96 { 96 - __clear_bit(id, tcam->used_groups); 97 + ida_free(&tcam->used_groups, id); 97 98 } 98 99 99 100 struct mlxsw_sp_acl_tcam_pattern { ··· 718 715 rehash.dw.work); 719 716 int credits = MLXSW_SP_ACL_TCAM_VREGION_REHASH_CREDITS; 720 717 718 + mutex_lock(&vregion->lock); 721 719 mlxsw_sp_acl_tcam_vregion_rehash(vregion->mlxsw_sp, vregion, &credits); 720 + mutex_unlock(&vregion->lock); 722 721 if (credits < 0) 723 722 /* Rehash gone out of credits so 
it was interrupted. 724 723 * Schedule the work as soon as possible to continue. ··· 728 723 mlxsw_core_schedule_dw(&vregion->rehash.dw, 0); 729 724 else 730 725 mlxsw_sp_acl_tcam_vregion_rehash_work_schedule(vregion); 726 + } 727 + 728 + static void 729 + mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(struct mlxsw_sp_acl_tcam_rehash_ctx *ctx) 730 + { 731 + /* The entry markers are relative to the current chunk and therefore 732 + * needs to be reset together with the chunk marker. 733 + */ 734 + ctx->current_vchunk = NULL; 735 + ctx->start_ventry = NULL; 736 + ctx->stop_ventry = NULL; 731 737 } 732 738 733 739 static void ··· 763 747 * the current chunk pointer to make sure all chunks 764 748 * are properly migrated. 765 749 */ 766 - vregion->rehash.ctx.current_vchunk = NULL; 750 + mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(&vregion->rehash.ctx); 767 751 } 768 752 769 753 static struct mlxsw_sp_acl_tcam_vregion * ··· 836 820 struct mlxsw_sp_acl_tcam *tcam = vregion->tcam; 837 821 838 822 if (vgroup->vregion_rehash_enabled && ops->region_rehash_hints_get) { 823 + struct mlxsw_sp_acl_tcam_rehash_ctx *ctx = &vregion->rehash.ctx; 824 + 839 825 mutex_lock(&tcam->lock); 840 826 list_del(&vregion->tlist); 841 827 mutex_unlock(&tcam->lock); 842 - cancel_delayed_work_sync(&vregion->rehash.dw); 828 + if (cancel_delayed_work_sync(&vregion->rehash.dw) && 829 + ctx->hints_priv) 830 + ops->region_rehash_hints_put(ctx->hints_priv); 843 831 } 844 832 mlxsw_sp_acl_tcam_vgroup_vregion_detach(mlxsw_sp, vregion); 845 833 if (vregion->region2) ··· 1174 1154 struct mlxsw_sp_acl_tcam_ventry *ventry, 1175 1155 bool *activity) 1176 1156 { 1177 - return mlxsw_sp_acl_tcam_entry_activity_get(mlxsw_sp, 1178 - ventry->entry, activity); 1157 + struct mlxsw_sp_acl_tcam_vregion *vregion = ventry->vchunk->vregion; 1158 + int err; 1159 + 1160 + mutex_lock(&vregion->lock); 1161 + err = mlxsw_sp_acl_tcam_entry_activity_get(mlxsw_sp, ventry->entry, 1162 + activity); 1163 + mutex_unlock(&vregion->lock); 1164 
+ return err; 1179 1165 } 1180 1166 1181 1167 static int ··· 1215 1189 { 1216 1190 struct mlxsw_sp_acl_tcam_chunk *new_chunk; 1217 1191 1192 + WARN_ON(vchunk->chunk2); 1193 + 1218 1194 new_chunk = mlxsw_sp_acl_tcam_chunk_create(mlxsw_sp, vchunk, region); 1219 1195 if (IS_ERR(new_chunk)) 1220 1196 return PTR_ERR(new_chunk); ··· 1235 1207 { 1236 1208 mlxsw_sp_acl_tcam_chunk_destroy(mlxsw_sp, vchunk->chunk2); 1237 1209 vchunk->chunk2 = NULL; 1238 - ctx->current_vchunk = NULL; 1210 + mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx); 1239 1211 } 1240 1212 1241 1213 static int ··· 1258 1230 return 0; 1259 1231 } 1260 1232 1233 + if (list_empty(&vchunk->ventry_list)) 1234 + goto out; 1235 + 1261 1236 /* If the migration got interrupted, we have the ventry to start from 1262 1237 * stored in context. 1263 1238 */ ··· 1269 1238 else 1270 1239 ventry = list_first_entry(&vchunk->ventry_list, 1271 1240 typeof(*ventry), list); 1241 + 1242 + WARN_ON(ventry->vchunk != vchunk); 1272 1243 1273 1244 list_for_each_entry_from(ventry, &vchunk->ventry_list, list) { 1274 1245 /* During rollback, once we reach the ventry that failed ··· 1312 1279 } 1313 1280 } 1314 1281 1282 + out: 1315 1283 mlxsw_sp_acl_tcam_vchunk_migrate_end(mlxsw_sp, vchunk, ctx); 1316 1284 return 0; 1317 1285 } ··· 1325 1291 { 1326 1292 struct mlxsw_sp_acl_tcam_vchunk *vchunk; 1327 1293 int err; 1294 + 1295 + if (list_empty(&vregion->vchunk_list)) 1296 + return 0; 1328 1297 1329 1298 /* If the migration got interrupted, we have the vchunk 1330 1299 * we are working on stored in context. 
··· 1357 1320 int err, err2; 1358 1321 1359 1322 trace_mlxsw_sp_acl_tcam_vregion_migrate(mlxsw_sp, vregion); 1360 - mutex_lock(&vregion->lock); 1361 1323 err = mlxsw_sp_acl_tcam_vchunk_migrate_all(mlxsw_sp, vregion, 1362 1324 ctx, credits); 1363 1325 if (err) { 1326 + if (ctx->this_is_rollback) 1327 + return err; 1364 1328 /* In case migration was not successful, we need to swap 1365 1329 * so the original region pointer is assigned again 1366 1330 * to vregion->region. 1367 1331 */ 1368 1332 swap(vregion->region, vregion->region2); 1369 - ctx->current_vchunk = NULL; 1333 + mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx); 1370 1334 ctx->this_is_rollback = true; 1371 1335 err2 = mlxsw_sp_acl_tcam_vchunk_migrate_all(mlxsw_sp, vregion, 1372 1336 ctx, credits); ··· 1378 1340 /* Let the rollback to be continued later on. */ 1379 1341 } 1380 1342 } 1381 - mutex_unlock(&vregion->lock); 1382 1343 trace_mlxsw_sp_acl_tcam_vregion_migrate_end(mlxsw_sp, vregion); 1383 1344 return err; 1384 1345 } ··· 1426 1389 1427 1390 ctx->hints_priv = hints_priv; 1428 1391 ctx->this_is_rollback = false; 1392 + mlxsw_sp_acl_tcam_rehash_ctx_vchunk_reset(ctx); 1429 1393 1430 1394 return 0; 1431 1395 ··· 1479 1441 err = mlxsw_sp_acl_tcam_vregion_migrate(mlxsw_sp, vregion, 1480 1442 ctx, credits); 1481 1443 if (err) { 1482 - dev_err(mlxsw_sp->bus_info->dev, "Failed to migrate vregion\n"); 1444 + dev_err_ratelimited(mlxsw_sp->bus_info->dev, "Failed to migrate vregion\n"); 1445 + return; 1483 1446 } 1484 1447 1485 1448 if (*credits >= 0) ··· 1589 1550 if (max_tcam_regions < max_regions) 1590 1551 max_regions = max_tcam_regions; 1591 1552 1592 - tcam->used_regions = bitmap_zalloc(max_regions, GFP_KERNEL); 1593 - if (!tcam->used_regions) { 1594 - err = -ENOMEM; 1595 - goto err_alloc_used_regions; 1596 - } 1553 + ida_init(&tcam->used_regions); 1597 1554 tcam->max_regions = max_regions; 1598 1555 1599 1556 max_groups = MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_GROUPS); 1600 - tcam->used_groups = 
bitmap_zalloc(max_groups, GFP_KERNEL); 1601 - if (!tcam->used_groups) { 1602 - err = -ENOMEM; 1603 - goto err_alloc_used_groups; 1604 - } 1557 + ida_init(&tcam->used_groups); 1605 1558 tcam->max_groups = max_groups; 1606 1559 tcam->max_group_size = MLXSW_CORE_RES_GET(mlxsw_sp->core, 1607 1560 ACL_MAX_GROUP_SIZE); ··· 1607 1576 return 0; 1608 1577 1609 1578 err_tcam_init: 1610 - bitmap_free(tcam->used_groups); 1611 - err_alloc_used_groups: 1612 - bitmap_free(tcam->used_regions); 1613 - err_alloc_used_regions: 1579 + ida_destroy(&tcam->used_groups); 1580 + ida_destroy(&tcam->used_regions); 1614 1581 mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp); 1615 1582 err_rehash_params_register: 1616 1583 mutex_destroy(&tcam->lock); ··· 1621 1592 const struct mlxsw_sp_acl_tcam_ops *ops = mlxsw_sp->acl_tcam_ops; 1622 1593 1623 1594 ops->fini(mlxsw_sp, tcam->priv); 1624 - bitmap_free(tcam->used_groups); 1625 - bitmap_free(tcam->used_regions); 1595 + ida_destroy(&tcam->used_groups); 1596 + ida_destroy(&tcam->used_regions); 1626 1597 mlxsw_sp_acl_tcam_rehash_params_unregister(mlxsw_sp); 1627 1598 mutex_destroy(&tcam->lock); 1628 1599 }
+3 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.h
··· 6 6 7 7 #include <linux/list.h> 8 8 #include <linux/parman.h> 9 + #include <linux/idr.h> 9 10 10 11 #include "reg.h" 11 12 #include "spectrum.h" 12 13 #include "core_acl_flex_keys.h" 13 14 14 15 struct mlxsw_sp_acl_tcam { 15 - unsigned long *used_regions; /* bit array */ 16 + struct ida used_regions; 16 17 unsigned int max_regions; 17 - unsigned long *used_groups; /* bit array */ 18 + struct ida used_groups; 18 19 unsigned int max_groups; 19 20 unsigned int max_group_size; 20 21 struct mutex lock; /* guards vregion list */
+5 -6
drivers/net/ethernet/renesas/ravb_main.c
··· 2729 2729 struct platform_device *pdev = priv->pdev; 2730 2730 struct net_device *ndev = priv->ndev; 2731 2731 struct device *dev = &pdev->dev; 2732 - const char *dev_name; 2732 + const char *devname = dev_name(dev); 2733 2733 unsigned long flags; 2734 2734 int error, irq_num; 2735 2735 2736 2736 if (irq_name) { 2737 - dev_name = devm_kasprintf(dev, GFP_KERNEL, "%s:%s", ndev->name, ch); 2738 - if (!dev_name) 2737 + devname = devm_kasprintf(dev, GFP_KERNEL, "%s:%s", devname, ch); 2738 + if (!devname) 2739 2739 return -ENOMEM; 2740 2740 2741 2741 irq_num = platform_get_irq_byname(pdev, irq_name); 2742 2742 flags = 0; 2743 2743 } else { 2744 - dev_name = ndev->name; 2745 2744 irq_num = platform_get_irq(pdev, 0); 2746 2745 flags = IRQF_SHARED; 2747 2746 } ··· 2750 2751 if (irq) 2751 2752 *irq = irq_num; 2752 2753 2753 - error = devm_request_irq(dev, irq_num, handler, flags, dev_name, ndev); 2754 + error = devm_request_irq(dev, irq_num, handler, flags, devname, ndev); 2754 2755 if (error) 2755 - netdev_err(ndev, "cannot request IRQ %s\n", dev_name); 2756 + netdev_err(ndev, "cannot request IRQ %s\n", devname); 2756 2757 2757 2758 return error; 2758 2759 }
+5
drivers/net/ethernet/ti/am65-cpts.c
··· 791 791 struct am65_cpts_skb_cb_data *skb_cb = 792 792 (struct am65_cpts_skb_cb_data *)skb->cb; 793 793 794 + if ((ptp_classify_raw(skb) & PTP_CLASS_V1) && 795 + ((mtype_seqid & AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK) == 796 + (skb_cb->skb_mtype_seqid & AM65_CPTS_EVENT_1_SEQUENCE_ID_MASK))) 797 + mtype_seqid = skb_cb->skb_mtype_seqid; 798 + 794 799 if (mtype_seqid == skb_cb->skb_mtype_seqid) { 795 800 u64 ns = event->timestamp; 796 801
+5 -3
drivers/net/ethernet/ti/icssg/icssg_common.c
··· 417 417 if (!i) 418 418 fdqring_id = k3_udma_glue_rx_flow_get_fdq_id(rx_chn->rx_chn, 419 419 i); 420 - rx_chn->irq[i] = k3_udma_glue_rx_get_irq(rx_chn->rx_chn, i); 421 - if (rx_chn->irq[i] <= 0) { 422 - ret = rx_chn->irq[i]; 420 + ret = k3_udma_glue_rx_get_irq(rx_chn->rx_chn, i); 421 + if (ret <= 0) { 422 + if (!ret) 423 + ret = -ENXIO; 423 424 netdev_err(ndev, "Failed to get rx dma irq"); 424 425 goto fail; 425 426 } 427 + rx_chn->irq[i] = ret; 426 428 } 427 429 428 430 return 0;
+1 -1
drivers/net/ethernet/wangxun/libwx/wx_lib.c
··· 1598 1598 */ 1599 1599 static int wx_acquire_msix_vectors(struct wx *wx) 1600 1600 { 1601 - struct irq_affinity affd = {0, }; 1601 + struct irq_affinity affd = { .pre_vectors = 1 }; 1602 1602 int nvecs, i; 1603 1603 1604 1604 /* We start by asking for one vector per queue pair */
+3 -5
drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
··· 20 20 #include "txgbe_phy.h" 21 21 #include "txgbe_hw.h" 22 22 23 - #define TXGBE_I2C_CLK_DEV_NAME "i2c_dw" 24 - 25 23 static int txgbe_swnodes_register(struct txgbe *txgbe) 26 24 { 27 25 struct txgbe_nodes *nodes = &txgbe->nodes; ··· 571 573 char clk_name[32]; 572 574 struct clk *clk; 573 575 574 - snprintf(clk_name, sizeof(clk_name), "%s.%d", 575 - TXGBE_I2C_CLK_DEV_NAME, pci_dev_id(pdev)); 576 + snprintf(clk_name, sizeof(clk_name), "i2c_designware.%d", 577 + pci_dev_id(pdev)); 576 578 577 579 clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, 156250000); 578 580 if (IS_ERR(clk)) ··· 634 636 635 637 info.parent = &pdev->dev; 636 638 info.fwnode = software_node_fwnode(txgbe->nodes.group[SWNODE_I2C]); 637 - info.name = TXGBE_I2C_CLK_DEV_NAME; 639 + info.name = "i2c_designware"; 638 640 info.id = pci_dev_id(pdev); 639 641 640 642 info.res = &DEFINE_RES_IRQ(pdev->irq);
+2 -1
drivers/net/gtp.c
··· 1098 1098 static void gtp_dellink(struct net_device *dev, struct list_head *head) 1099 1099 { 1100 1100 struct gtp_dev *gtp = netdev_priv(dev); 1101 + struct hlist_node *next; 1101 1102 struct pdp_ctx *pctx; 1102 1103 int i; 1103 1104 1104 1105 for (i = 0; i < gtp->hash_size; i++) 1105 - hlist_for_each_entry_rcu(pctx, &gtp->tid_hash[i], hlist_tid) 1106 + hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid) 1106 1107 pdp_context_delete(pctx); 1107 1108 1108 1109 list_del_rcu(&gtp->list);
+36 -10
drivers/net/macsec.c
··· 999 999 struct metadata_dst *md_dst; 1000 1000 struct macsec_rxh_data *rxd; 1001 1001 struct macsec_dev *macsec; 1002 + bool is_macsec_md_dst; 1002 1003 1003 1004 rcu_read_lock(); 1004 1005 rxd = macsec_data_rcu(skb->dev); 1005 1006 md_dst = skb_metadata_dst(skb); 1007 + is_macsec_md_dst = md_dst && md_dst->type == METADATA_MACSEC; 1006 1008 1007 1009 list_for_each_entry_rcu(macsec, &rxd->secys, secys) { 1008 1010 struct sk_buff *nskb; ··· 1015 1013 * the SecTAG, so we have to deduce which port to deliver to. 1016 1014 */ 1017 1015 if (macsec_is_offloaded(macsec) && netif_running(ndev)) { 1018 - struct macsec_rx_sc *rx_sc = NULL; 1016 + const struct macsec_ops *ops; 1019 1017 1020 - if (md_dst && md_dst->type == METADATA_MACSEC) 1021 - rx_sc = find_rx_sc(&macsec->secy, md_dst->u.macsec_info.sci); 1018 + ops = macsec_get_ops(macsec, NULL); 1022 1019 1023 - if (md_dst && md_dst->type == METADATA_MACSEC && !rx_sc) 1020 + if (ops->rx_uses_md_dst && !is_macsec_md_dst) 1024 1021 continue; 1025 1022 1023 + if (is_macsec_md_dst) { 1024 + struct macsec_rx_sc *rx_sc; 1025 + 1026 + /* All drivers that implement MACsec offload 1027 + * support using skb metadata destinations must 1028 + * indicate that they do so. 1029 + */ 1030 + DEBUG_NET_WARN_ON_ONCE(!ops->rx_uses_md_dst); 1031 + rx_sc = find_rx_sc(&macsec->secy, 1032 + md_dst->u.macsec_info.sci); 1033 + if (!rx_sc) 1034 + continue; 1035 + /* device indicated macsec offload occurred */ 1036 + skb->dev = ndev; 1037 + skb->pkt_type = PACKET_HOST; 1038 + eth_skb_pkt_type(skb, ndev); 1039 + ret = RX_HANDLER_ANOTHER; 1040 + goto out; 1041 + } 1042 + 1043 + /* This datapath is insecure because it is unable to 1044 + * enforce isolation of broadcast/multicast traffic and 1045 + * unicast traffic with promiscuous mode on the macsec 1046 + * netdev. 
Since the core stack has no mechanism to 1047 + * check that the hardware did indeed receive MACsec 1048 + * traffic, it is possible that the response handling 1049 + * done by the MACsec port was to a plaintext packet. 1050 + * This violates the MACsec protocol standard. 1051 + */ 1026 1052 if (ether_addr_equal_64bits(hdr->h_dest, 1027 1053 ndev->dev_addr)) { 1028 1054 /* exact match, divert skb to this port */ ··· 1066 1036 break; 1067 1037 1068 1038 nskb->dev = ndev; 1069 - if (ether_addr_equal_64bits(hdr->h_dest, 1070 - ndev->broadcast)) 1071 - nskb->pkt_type = PACKET_BROADCAST; 1072 - else 1073 - nskb->pkt_type = PACKET_MULTICAST; 1039 + eth_skb_pkt_type(nskb, ndev); 1074 1040 1075 1041 __netif_rx(nskb); 1076 - } else if (rx_sc || ndev->flags & IFF_PROMISC) { 1042 + } else if (ndev->flags & IFF_PROMISC) { 1077 1043 skb->dev = ndev; 1078 1044 skb->pkt_type = PACKET_HOST; 1079 1045 ret = RX_HANDLER_ANOTHER;
+2 -1
drivers/net/phy/dp83869.c
··· 695 695 phy_ctrl_val = dp83869->mode; 696 696 if (phydev->interface == PHY_INTERFACE_MODE_MII) { 697 697 if (dp83869->mode == DP83869_100M_MEDIA_CONVERT || 698 - dp83869->mode == DP83869_RGMII_100_BASE) { 698 + dp83869->mode == DP83869_RGMII_100_BASE || 699 + dp83869->mode == DP83869_RGMII_COPPER_ETHERNET) { 699 700 phy_ctrl_val |= DP83869_OP_MODE_MII; 700 701 } else { 701 702 phydev_err(phydev, "selected op-mode is not valid with MII mode\n");
+26 -17
drivers/net/phy/mediatek-ge-soc.c
··· 216 216 #define MTK_PHY_LED_ON_LINK1000 BIT(0) 217 217 #define MTK_PHY_LED_ON_LINK100 BIT(1) 218 218 #define MTK_PHY_LED_ON_LINK10 BIT(2) 219 + #define MTK_PHY_LED_ON_LINK (MTK_PHY_LED_ON_LINK10 |\ 220 + MTK_PHY_LED_ON_LINK100 |\ 221 + MTK_PHY_LED_ON_LINK1000) 219 222 #define MTK_PHY_LED_ON_LINKDOWN BIT(3) 220 223 #define MTK_PHY_LED_ON_FDX BIT(4) /* Full duplex */ 221 224 #define MTK_PHY_LED_ON_HDX BIT(5) /* Half duplex */ ··· 234 231 #define MTK_PHY_LED_BLINK_100RX BIT(3) 235 232 #define MTK_PHY_LED_BLINK_10TX BIT(4) 236 233 #define MTK_PHY_LED_BLINK_10RX BIT(5) 234 + #define MTK_PHY_LED_BLINK_RX (MTK_PHY_LED_BLINK_10RX |\ 235 + MTK_PHY_LED_BLINK_100RX |\ 236 + MTK_PHY_LED_BLINK_1000RX) 237 + #define MTK_PHY_LED_BLINK_TX (MTK_PHY_LED_BLINK_10TX |\ 238 + MTK_PHY_LED_BLINK_100TX |\ 239 + MTK_PHY_LED_BLINK_1000TX) 237 240 #define MTK_PHY_LED_BLINK_COLLISION BIT(6) 238 241 #define MTK_PHY_LED_BLINK_RX_CRC_ERR BIT(7) 239 242 #define MTK_PHY_LED_BLINK_RX_IDLE_ERR BIT(8) ··· 1256 1247 if (blink < 0) 1257 1248 return -EIO; 1258 1249 1259 - if ((on & (MTK_PHY_LED_ON_LINK1000 | MTK_PHY_LED_ON_LINK100 | 1260 - MTK_PHY_LED_ON_LINK10)) || 1261 - (blink & (MTK_PHY_LED_BLINK_1000RX | MTK_PHY_LED_BLINK_100RX | 1262 - MTK_PHY_LED_BLINK_10RX | MTK_PHY_LED_BLINK_1000TX | 1263 - MTK_PHY_LED_BLINK_100TX | MTK_PHY_LED_BLINK_10TX))) 1250 + if ((on & (MTK_PHY_LED_ON_LINK | MTK_PHY_LED_ON_FDX | MTK_PHY_LED_ON_HDX | 1251 + MTK_PHY_LED_ON_LINKDOWN)) || 1252 + (blink & (MTK_PHY_LED_BLINK_RX | MTK_PHY_LED_BLINK_TX))) 1264 1253 set_bit(bit_netdev, &priv->led_state); 1265 1254 else 1266 1255 clear_bit(bit_netdev, &priv->led_state); ··· 1276 1269 if (!rules) 1277 1270 return 0; 1278 1271 1279 - if (on & (MTK_PHY_LED_ON_LINK1000 | MTK_PHY_LED_ON_LINK100 | MTK_PHY_LED_ON_LINK10)) 1272 + if (on & MTK_PHY_LED_ON_LINK) 1280 1273 *rules |= BIT(TRIGGER_NETDEV_LINK); 1281 1274 1282 1275 if (on & MTK_PHY_LED_ON_LINK10) ··· 1294 1287 if (on & MTK_PHY_LED_ON_HDX) 1295 1288 *rules |= 
BIT(TRIGGER_NETDEV_HALF_DUPLEX); 1296 1289 1297 - if (blink & (MTK_PHY_LED_BLINK_1000RX | MTK_PHY_LED_BLINK_100RX | MTK_PHY_LED_BLINK_10RX)) 1290 + if (blink & MTK_PHY_LED_BLINK_RX) 1298 1291 *rules |= BIT(TRIGGER_NETDEV_RX); 1299 1292 1300 - if (blink & (MTK_PHY_LED_BLINK_1000TX | MTK_PHY_LED_BLINK_100TX | MTK_PHY_LED_BLINK_10TX)) 1293 + if (blink & MTK_PHY_LED_BLINK_TX) 1301 1294 *rules |= BIT(TRIGGER_NETDEV_TX); 1302 1295 1303 1296 return 0; ··· 1330 1323 on |= MTK_PHY_LED_ON_LINK1000; 1331 1324 1332 1325 if (rules & BIT(TRIGGER_NETDEV_RX)) { 1333 - blink |= MTK_PHY_LED_BLINK_10RX | 1334 - MTK_PHY_LED_BLINK_100RX | 1335 - MTK_PHY_LED_BLINK_1000RX; 1326 + blink |= (on & MTK_PHY_LED_ON_LINK) ? 1327 + (((on & MTK_PHY_LED_ON_LINK10) ? MTK_PHY_LED_BLINK_10RX : 0) | 1328 + ((on & MTK_PHY_LED_ON_LINK100) ? MTK_PHY_LED_BLINK_100RX : 0) | 1329 + ((on & MTK_PHY_LED_ON_LINK1000) ? MTK_PHY_LED_BLINK_1000RX : 0)) : 1330 + MTK_PHY_LED_BLINK_RX; 1336 1331 } 1337 1332 1338 1333 if (rules & BIT(TRIGGER_NETDEV_TX)) { 1339 - blink |= MTK_PHY_LED_BLINK_10TX | 1340 - MTK_PHY_LED_BLINK_100TX | 1341 - MTK_PHY_LED_BLINK_1000TX; 1334 + blink |= (on & MTK_PHY_LED_ON_LINK) ? 1335 + (((on & MTK_PHY_LED_ON_LINK10) ? MTK_PHY_LED_BLINK_10TX : 0) | 1336 + ((on & MTK_PHY_LED_ON_LINK100) ? MTK_PHY_LED_BLINK_100TX : 0) | 1337 + ((on & MTK_PHY_LED_ON_LINK1000) ? MTK_PHY_LED_BLINK_1000TX : 0)) : 1338 + MTK_PHY_LED_BLINK_TX; 1342 1339 } 1343 1340 1344 1341 if (blink || on) ··· 1355 1344 MTK_PHY_LED0_ON_CTRL, 1356 1345 MTK_PHY_LED_ON_FDX | 1357 1346 MTK_PHY_LED_ON_HDX | 1358 - MTK_PHY_LED_ON_LINK10 | 1359 - MTK_PHY_LED_ON_LINK100 | 1360 - MTK_PHY_LED_ON_LINK1000, 1347 + MTK_PHY_LED_ON_LINK, 1361 1348 on); 1362 1349 1363 1350 if (ret)
+3 -8
drivers/net/usb/ax88179_178a.c
··· 1455 1455 /* Skip IP alignment pseudo header */ 1456 1456 skb_pull(skb, 2); 1457 1457 1458 - skb->truesize = SKB_TRUESIZE(pkt_len_plus_padd); 1459 1458 ax88179_rx_checksum(skb, pkt_hdr); 1460 1459 return 1; 1461 1460 } 1462 1461 1463 - ax_skb = skb_clone(skb, GFP_ATOMIC); 1462 + ax_skb = netdev_alloc_skb_ip_align(dev->net, pkt_len); 1464 1463 if (!ax_skb) 1465 1464 return 0; 1466 - skb_trim(ax_skb, pkt_len); 1465 + skb_put(ax_skb, pkt_len); 1466 + memcpy(ax_skb->data, skb->data + 2, pkt_len); 1467 1467 1468 - /* Skip IP alignment pseudo header */ 1469 - skb_pull(ax_skb, 2); 1470 - 1471 - skb->truesize = pkt_len_plus_padd + 1472 - SKB_DATA_ALIGN(sizeof(struct sk_buff)); 1473 1468 ax88179_rx_checksum(ax_skb, pkt_hdr); 1474 1469 usbnet_skb_return(dev, ax_skb); 1475 1470
+3
drivers/net/usb/qmi_wwan.c
··· 1360 1360 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */ 1361 1361 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */ 1362 1362 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */ 1363 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */ 1364 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */ 1365 + {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */ 1363 1366 {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ 1364 1367 {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1365 1368 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+4
drivers/net/vxlan/vxlan_core.c
··· 1616 1616 if (ether_addr_equal(eth_hdr(skb)->h_source, vxlan->dev->dev_addr)) 1617 1617 return false; 1618 1618 1619 + /* Ignore packets from invalid src-address */ 1620 + if (!is_valid_ether_addr(eth_hdr(skb)->h_source)) 1621 + return false; 1622 + 1619 1623 /* Get address from the outer IP header */ 1620 1624 if (vxlan_get_sk_family(vs) == AF_INET) { 1621 1625 saddr.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+3 -1
drivers/net/wireless/ath/ath11k/mac.c
··· 9112 9112 offload = &arvif->arp_ns_offload; 9113 9113 count = 0; 9114 9114 9115 + /* Note: read_lock_bh() calls rcu_read_lock() */ 9115 9116 read_lock_bh(&idev->lock); 9116 9117 9117 9118 memset(offload->ipv6_addr, 0, sizeof(offload->ipv6_addr)); ··· 9143 9142 } 9144 9143 9145 9144 /* get anycast address */ 9146 - for (ifaca6 = idev->ac_list; ifaca6; ifaca6 = ifaca6->aca_next) { 9145 + for (ifaca6 = rcu_dereference(idev->ac_list); ifaca6; 9146 + ifaca6 = rcu_dereference(ifaca6->aca_next)) { 9147 9147 if (count >= ATH11K_IPV6_MAX_COUNT) 9148 9148 goto generate; 9149 9149
+1 -1
drivers/net/wireless/intel/iwlwifi/cfg/bz.c
··· 10 10 #include "fw/api/txq.h" 11 11 12 12 /* Highest firmware API version supported */ 13 - #define IWL_BZ_UCODE_API_MAX 90 13 + #define IWL_BZ_UCODE_API_MAX 89 14 14 15 15 /* Lowest firmware API version supported */ 16 16 #define IWL_BZ_UCODE_API_MIN 80
+1 -1
drivers/net/wireless/intel/iwlwifi/cfg/sc.c
··· 10 10 #include "fw/api/txq.h" 11 11 12 12 /* Highest firmware API version supported */ 13 - #define IWL_SC_UCODE_API_MAX 90 13 + #define IWL_SC_UCODE_API_MAX 89 14 14 15 15 /* Lowest firmware API version supported */ 16 16 #define IWL_SC_UCODE_API_MIN 82
+2
drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
··· 53 53 if (!pasn) 54 54 return -ENOBUFS; 55 55 56 + iwl_mvm_ftm_remove_pasn_sta(mvm, addr); 57 + 56 58 pasn->cipher = iwl_mvm_cipher_to_location_cipher(cipher); 57 59 58 60 switch (pasn->cipher) {
+1 -1
drivers/net/wireless/intel/iwlwifi/mvm/link.c
··· 279 279 280 280 RCU_INIT_POINTER(mvm->link_id_to_link_conf[link_info->fw_link_id], 281 281 NULL); 282 + iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id); 282 283 return 0; 283 284 } 284 285 ··· 297 296 return 0; 298 297 299 298 cmd.link_id = cpu_to_le32(link_info->fw_link_id); 300 - iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id); 301 299 link_info->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID; 302 300 cmd.spec_link_id = link_conf->link_id; 303 301 cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/scan.c
··· 2829 2829 if (ver_handler->version != scan_ver) 2830 2830 continue; 2831 2831 2832 - return ver_handler->handler(mvm, vif, params, type, uid); 2832 + err = ver_handler->handler(mvm, vif, params, type, uid); 2833 + return err ? : uid; 2833 2834 } 2834 2835 2835 2836 err = iwl_mvm_scan_umac(mvm, vif, params, type, uid);
+1 -1
drivers/net/wireless/virtual/mac80211_hwsim.c
··· 3905 3905 } 3906 3906 3907 3907 nla_for_each_nested(peer, peers, rem) { 3908 - struct cfg80211_pmsr_result result; 3908 + struct cfg80211_pmsr_result result = {}; 3909 3909 3910 3910 err = mac80211_hwsim_parse_pmsr_result(peer, &result, info); 3911 3911 if (err)
+23 -19
drivers/nfc/trf7970a.c
··· 424 424 enum trf7970a_state state; 425 425 struct device *dev; 426 426 struct spi_device *spi; 427 - struct regulator *regulator; 427 + struct regulator *vin_regulator; 428 + struct regulator *vddio_regulator; 428 429 struct nfc_digital_dev *ddev; 429 430 u32 quirks; 430 431 bool is_initiator; ··· 1884 1883 if (trf->state != TRF7970A_ST_PWR_OFF) 1885 1884 return 0; 1886 1885 1887 - ret = regulator_enable(trf->regulator); 1886 + ret = regulator_enable(trf->vin_regulator); 1888 1887 if (ret) { 1889 1888 dev_err(trf->dev, "%s - Can't enable VIN: %d\n", __func__, ret); 1890 1889 return ret; ··· 1927 1926 if (trf->en2_gpiod && !(trf->quirks & TRF7970A_QUIRK_EN2_MUST_STAY_LOW)) 1928 1927 gpiod_set_value_cansleep(trf->en2_gpiod, 0); 1929 1928 1930 - ret = regulator_disable(trf->regulator); 1929 + ret = regulator_disable(trf->vin_regulator); 1931 1930 if (ret) 1932 1931 dev_err(trf->dev, "%s - Can't disable VIN: %d\n", __func__, 1933 1932 ret); ··· 2066 2065 mutex_init(&trf->lock); 2067 2066 INIT_DELAYED_WORK(&trf->timeout_work, trf7970a_timeout_work_handler); 2068 2067 2069 - trf->regulator = devm_regulator_get(&spi->dev, "vin"); 2070 - if (IS_ERR(trf->regulator)) { 2071 - ret = PTR_ERR(trf->regulator); 2068 + trf->vin_regulator = devm_regulator_get(&spi->dev, "vin"); 2069 + if (IS_ERR(trf->vin_regulator)) { 2070 + ret = PTR_ERR(trf->vin_regulator); 2072 2071 dev_err(trf->dev, "Can't get VIN regulator: %d\n", ret); 2073 2072 goto err_destroy_lock; 2074 2073 } 2075 2074 2076 - ret = regulator_enable(trf->regulator); 2075 + ret = regulator_enable(trf->vin_regulator); 2077 2076 if (ret) { 2078 2077 dev_err(trf->dev, "Can't enable VIN: %d\n", ret); 2079 2078 goto err_destroy_lock; 2080 2079 } 2081 2080 2082 - uvolts = regulator_get_voltage(trf->regulator); 2081 + uvolts = regulator_get_voltage(trf->vin_regulator); 2083 2082 if (uvolts > 4000000) 2084 2083 trf->chip_status_ctrl = TRF7970A_CHIP_STATUS_VRS5_3; 2085 2084 2086 - trf->regulator = devm_regulator_get(&spi->dev, 
"vdd-io"); 2087 - if (IS_ERR(trf->regulator)) { 2088 - ret = PTR_ERR(trf->regulator); 2085 + trf->vddio_regulator = devm_regulator_get(&spi->dev, "vdd-io"); 2086 + if (IS_ERR(trf->vddio_regulator)) { 2087 + ret = PTR_ERR(trf->vddio_regulator); 2089 2088 dev_err(trf->dev, "Can't get VDD_IO regulator: %d\n", ret); 2090 - goto err_destroy_lock; 2089 + goto err_disable_vin_regulator; 2091 2090 } 2092 2091 2093 - ret = regulator_enable(trf->regulator); 2092 + ret = regulator_enable(trf->vddio_regulator); 2094 2093 if (ret) { 2095 2094 dev_err(trf->dev, "Can't enable VDD_IO: %d\n", ret); 2096 - goto err_destroy_lock; 2095 + goto err_disable_vin_regulator; 2097 2096 } 2098 2097 2099 - if (regulator_get_voltage(trf->regulator) == 1800000) { 2098 + if (regulator_get_voltage(trf->vddio_regulator) == 1800000) { 2100 2099 trf->io_ctrl = TRF7970A_REG_IO_CTRL_IO_LOW; 2101 2100 dev_dbg(trf->dev, "trf7970a config vdd_io to 1.8V\n"); 2102 2101 } ··· 2109 2108 if (!trf->ddev) { 2110 2109 dev_err(trf->dev, "Can't allocate NFC digital device\n"); 2111 2110 ret = -ENOMEM; 2112 - goto err_disable_regulator; 2111 + goto err_disable_vddio_regulator; 2113 2112 } 2114 2113 2115 2114 nfc_digital_set_parent_dev(trf->ddev, trf->dev); ··· 2138 2137 trf7970a_shutdown(trf); 2139 2138 err_free_ddev: 2140 2139 nfc_digital_free_device(trf->ddev); 2141 - err_disable_regulator: 2142 - regulator_disable(trf->regulator); 2140 + err_disable_vddio_regulator: 2141 + regulator_disable(trf->vddio_regulator); 2142 + err_disable_vin_regulator: 2143 + regulator_disable(trf->vin_regulator); 2143 2144 err_destroy_lock: 2144 2145 mutex_destroy(&trf->lock); 2145 2146 return ret; ··· 2160 2157 nfc_digital_unregister_device(trf->ddev); 2161 2158 nfc_digital_free_device(trf->ddev); 2162 2159 2163 - regulator_disable(trf->regulator); 2160 + regulator_disable(trf->vddio_regulator); 2161 + regulator_disable(trf->vin_regulator); 2164 2162 2165 2163 mutex_destroy(&trf->lock); 2166 2164 }
+8 -5
drivers/s390/cio/device.c
··· 363 363 364 364 spin_lock_irq(cdev->ccwlock); 365 365 ret = ccw_device_online(cdev); 366 - spin_unlock_irq(cdev->ccwlock); 367 - if (ret == 0) 368 - wait_event(cdev->private->wait_q, dev_fsm_final_state(cdev)); 369 - else { 366 + if (ret) { 367 + spin_unlock_irq(cdev->ccwlock); 370 368 CIO_MSG_EVENT(0, "ccw_device_online returned %d, " 371 369 "device 0.%x.%04x\n", 372 370 ret, cdev->private->dev_id.ssid, ··· 373 375 put_device(&cdev->dev); 374 376 return ret; 375 377 } 376 - spin_lock_irq(cdev->ccwlock); 378 + /* Wait until a final state is reached */ 379 + while (!dev_fsm_final_state(cdev)) { 380 + spin_unlock_irq(cdev->ccwlock); 381 + wait_event(cdev->private->wait_q, dev_fsm_final_state(cdev)); 382 + spin_lock_irq(cdev->ccwlock); 383 + } 377 384 /* Check if online processing was successful */ 378 385 if ((cdev->private->state != DEV_STATE_ONLINE) && 379 386 (cdev->private->state != DEV_STATE_W4SENSE)) {
+5
drivers/s390/cio/device_fsm.c
··· 504 504 ccw_device_done(cdev, DEV_STATE_ONLINE); 505 505 /* Deliver fake irb to device driver, if needed. */ 506 506 if (cdev->private->flags.fake_irb) { 507 + CIO_MSG_EVENT(2, "fakeirb: deliver device 0.%x.%04x intparm %lx type=%d\n", 508 + cdev->private->dev_id.ssid, 509 + cdev->private->dev_id.devno, 510 + cdev->private->intparm, 511 + cdev->private->flags.fake_irb); 507 512 create_fake_irb(&cdev->private->dma_area->irb, 508 513 cdev->private->flags.fake_irb); 509 514 cdev->private->flags.fake_irb = 0;
+8
drivers/s390/cio/device_ops.c
··· 208 208 if (!cdev->private->flags.fake_irb) { 209 209 cdev->private->flags.fake_irb = FAKE_CMD_IRB; 210 210 cdev->private->intparm = intparm; 211 + CIO_MSG_EVENT(2, "fakeirb: queue device 0.%x.%04x intparm %lx type=%d\n", 212 + cdev->private->dev_id.ssid, 213 + cdev->private->dev_id.devno, intparm, 214 + cdev->private->flags.fake_irb); 211 215 return 0; 212 216 } else 213 217 /* There's already a fake I/O around. */ ··· 555 551 if (!cdev->private->flags.fake_irb) { 556 552 cdev->private->flags.fake_irb = FAKE_TM_IRB; 557 553 cdev->private->intparm = intparm; 554 + CIO_MSG_EVENT(2, "fakeirb: queue device 0.%x.%04x intparm %lx type=%d\n", 555 + cdev->private->dev_id.ssid, 556 + cdev->private->dev_id.devno, intparm, 557 + cdev->private->flags.fake_irb); 558 558 return 0; 559 559 } else 560 560 /* There's already a fake I/O around. */
+23 -5
drivers/s390/cio/qdio_main.c
··· 722 722 lgr_info_log(); 723 723 } 724 724 725 - static void qdio_establish_handle_irq(struct qdio_irq *irq_ptr, int cstat, 726 - int dstat) 725 + static int qdio_establish_handle_irq(struct qdio_irq *irq_ptr, int cstat, 726 + int dstat, int dcc) 727 727 { 728 728 DBF_DEV_EVENT(DBF_INFO, irq_ptr, "qest irq"); 729 729 ··· 731 731 goto error; 732 732 if (dstat & ~(DEV_STAT_DEV_END | DEV_STAT_CHN_END)) 733 733 goto error; 734 + if (dcc == 1) 735 + return -EAGAIN; 734 736 if (!(dstat & DEV_STAT_DEV_END)) 735 737 goto error; 736 738 qdio_set_state(irq_ptr, QDIO_IRQ_STATE_ESTABLISHED); 737 - return; 739 + return 0; 738 740 739 741 error: 740 742 DBF_ERROR("%4x EQ:error", irq_ptr->schid.sch_no); 741 743 DBF_ERROR("ds: %2x cs:%2x", dstat, cstat); 742 744 qdio_set_state(irq_ptr, QDIO_IRQ_STATE_ERR); 745 + return -EIO; 743 746 } 744 747 745 748 /* qdio interrupt handler */ ··· 751 748 { 752 749 struct qdio_irq *irq_ptr = cdev->private->qdio_data; 753 750 struct subchannel_id schid; 754 - int cstat, dstat; 751 + int cstat, dstat, rc, dcc; 755 752 756 753 if (!intparm || !irq_ptr) { 757 754 ccw_device_get_schid(cdev, &schid); ··· 771 768 qdio_irq_check_sense(irq_ptr, irb); 772 769 cstat = irb->scsw.cmd.cstat; 773 770 dstat = irb->scsw.cmd.dstat; 771 + dcc = scsw_cmd_is_valid_cc(&irb->scsw) ? 
irb->scsw.cmd.cc : 0; 772 + rc = 0; 774 773 775 774 switch (irq_ptr->state) { 776 775 case QDIO_IRQ_STATE_INACTIVE: 777 - qdio_establish_handle_irq(irq_ptr, cstat, dstat); 776 + rc = qdio_establish_handle_irq(irq_ptr, cstat, dstat, dcc); 778 777 break; 779 778 case QDIO_IRQ_STATE_CLEANUP: 780 779 qdio_set_state(irq_ptr, QDIO_IRQ_STATE_INACTIVE); ··· 790 785 if (cstat || dstat) 791 786 qdio_handle_activate_check(irq_ptr, intparm, cstat, 792 787 dstat); 788 + else if (dcc == 1) 789 + rc = -EAGAIN; 793 790 break; 794 791 case QDIO_IRQ_STATE_STOPPED: 795 792 break; 796 793 default: 797 794 WARN_ON_ONCE(1); 798 795 } 796 + 797 + if (rc == -EAGAIN) { 798 + DBF_DEV_EVENT(DBF_INFO, irq_ptr, "qint retry"); 799 + rc = ccw_device_start(cdev, irq_ptr->ccw, intparm, 0, 0); 800 + if (!rc) 801 + return; 802 + DBF_ERROR("%4x RETRY ERR", irq_ptr->schid.sch_no); 803 + DBF_ERROR("rc:%4x", rc); 804 + qdio_set_state(irq_ptr, QDIO_IRQ_STATE_ERR); 805 + } 806 + 799 807 wake_up(&cdev->private->wait_q); 800 808 } 801 809
+3 -4
drivers/scsi/scsi_lib.c
··· 635 635 if (blk_queue_add_random(q)) 636 636 add_disk_randomness(req->q->disk); 637 637 638 - if (!blk_rq_is_passthrough(req)) { 639 - WARN_ON_ONCE(!(cmd->flags & SCMD_INITIALIZED)); 640 - cmd->flags &= ~SCMD_INITIALIZED; 641 - } 638 + WARN_ON_ONCE(!blk_rq_is_passthrough(req) && 639 + !(cmd->flags & SCMD_INITIALIZED)); 640 + cmd->flags = 0; 642 641 643 642 /* 644 643 * Calling rcu_barrier() is not necessary here because the
+1
drivers/thermal/thermal_debugfs.c
··· 616 616 tze->trip_stats[trip_id].timestamp = now; 617 617 tze->trip_stats[trip_id].max = max(tze->trip_stats[trip_id].max, temperature); 618 618 tze->trip_stats[trip_id].min = min(tze->trip_stats[trip_id].min, temperature); 619 + tze->trip_stats[trip_id].count++; 619 620 tze->trip_stats[trip_id].avg = tze->trip_stats[trip_id].avg + 620 621 (temperature - tze->trip_stats[trip_id].avg) / 621 622 tze->trip_stats[trip_id].count;
+40 -10
drivers/thunderbolt/switch.c
···
3180 3180 {
3181 3181 struct tb_port *up, *down;
3182 3182
3183 - if (sw->is_unplugged)
3184 - return;
3185 3183 if (!tb_route(sw) || tb_switch_is_icm(sw))
3184 + return;
3185 +
3186 + /*
3187 + * Unconfigure downstream port so that wake-on-connect can be
3188 + * configured after router unplug. No need to unconfigure upstream port
3189 + * since its router is unplugged.
3190 + */
3191 + up = tb_upstream_port(sw);
3192 + down = up->remote;
3193 + if (tb_switch_is_usb4(down->sw))
3194 + usb4_port_unconfigure(down);
3195 + else
3196 + tb_lc_unconfigure_port(down);
3197 +
3198 + if (sw->is_unplugged)
3186 3199 return;
3187 3200
3188 3201 up = tb_upstream_port(sw);
···
3203 3190 usb4_port_unconfigure(up);
3204 3191 else
3205 3192 tb_lc_unconfigure_port(up);
3206 -
3207 - down = up->remote;
3208 - if (tb_switch_is_usb4(down->sw))
3209 - usb4_port_unconfigure(down);
3210 - else
3211 - tb_lc_unconfigure_port(down);
3212 3193 }
3213 3194
3214 3195 static void tb_switch_credits_init(struct tb_switch *sw)
···
3448 3441 return tb_lc_set_wake(sw, flags);
3449 3442 }
3450 3443
3451 - int tb_switch_resume(struct tb_switch *sw)
3444 + static void tb_switch_check_wakes(struct tb_switch *sw)
3445 + {
3446 + if (device_may_wakeup(&sw->dev)) {
3447 + if (tb_switch_is_usb4(sw))
3448 + usb4_switch_check_wakes(sw);
3449 + }
3450 + }
3451 +
3452 + /**
3453 + * tb_switch_resume() - Resume a switch after sleep
3454 + * @sw: Switch to resume
3455 + * @runtime: Is this resume from runtime suspend or system sleep
3456 + *
3457 + * Resumes and re-enumerates router (and all its children), if still plugged
3458 + * after suspend. Don't enumerate device router whose UID was changed during
3459 + * suspend. If this is resume from system sleep, notifies PM core about the
3460 + * wakes occurred during suspend. Disables all wakes, except USB4 wake of
3461 + * upstream port for USB4 routers that shall be always enabled.
3462 + */
3463 + int tb_switch_resume(struct tb_switch *sw, bool runtime)
3452 3464 {
3453 3465 struct tb_port *port;
3454 3466 int err;
···
3516 3490 if (err)
3517 3491 return err;
3518 3492
3493 + if (!runtime)
3494 + tb_switch_check_wakes(sw);
3495 +
3519 3496 /* Disable wakes */
3520 3497 tb_switch_set_wake(sw, 0);
3521 3498
···
3548 3519 */
3549 3520 if (tb_port_unlock(port))
3550 3521 tb_port_warn(port, "failed to unlock port\n");
3551 - if (port->remote && tb_switch_resume(port->remote->sw)) {
3522 + if (port->remote &&
3523 + tb_switch_resume(port->remote->sw, runtime)) {
3552 3524 tb_port_warn(port,
3553 3525 "lost during suspend, disconnecting\n");
3554 3526 tb_sw_set_unplugged(port->remote->sw);
+8 -2
drivers/thunderbolt/tb.c
··· 1801 1801 continue; 1802 1802 } 1803 1803 1804 + /* Needs to be on different routers */ 1805 + if (in->sw == port->sw) { 1806 + tb_port_dbg(port, "skipping DP OUT on same router\n"); 1807 + continue; 1808 + } 1809 + 1804 1810 tb_port_dbg(port, "DP OUT available\n"); 1805 1811 1806 1812 /* ··· 2942 2936 if (!tb_switch_is_usb4(tb->root_switch)) 2943 2937 tb_switch_reset(tb->root_switch); 2944 2938 2945 - tb_switch_resume(tb->root_switch); 2939 + tb_switch_resume(tb->root_switch, false); 2946 2940 tb_free_invalid_tunnels(tb); 2947 2941 tb_free_unplugged_children(tb->root_switch); 2948 2942 tb_restore_children(tb->root_switch); ··· 3068 3062 struct tb_tunnel *tunnel, *n; 3069 3063 3070 3064 mutex_lock(&tb->lock); 3071 - tb_switch_resume(tb->root_switch); 3065 + tb_switch_resume(tb->root_switch, true); 3072 3066 tb_free_invalid_tunnels(tb); 3073 3067 tb_restore_children(tb->root_switch); 3074 3068 list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
+2 -1
drivers/thunderbolt/tb.h
··· 827 827 int tb_switch_add(struct tb_switch *sw); 828 828 void tb_switch_remove(struct tb_switch *sw); 829 829 void tb_switch_suspend(struct tb_switch *sw, bool runtime); 830 - int tb_switch_resume(struct tb_switch *sw); 830 + int tb_switch_resume(struct tb_switch *sw, bool runtime); 831 831 int tb_switch_reset(struct tb_switch *sw); 832 832 int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit, 833 833 u32 value, int timeout_msec); ··· 1288 1288 return usb4_switch_version(sw) > 0; 1289 1289 } 1290 1290 1291 + void usb4_switch_check_wakes(struct tb_switch *sw); 1291 1292 int usb4_switch_setup(struct tb_switch *sw); 1292 1293 int usb4_switch_configuration_valid(struct tb_switch *sw); 1293 1294 int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid);
+7 -6
drivers/thunderbolt/usb4.c
··· 155 155 tx_dwords, rx_data, rx_dwords); 156 156 } 157 157 158 - static void usb4_switch_check_wakes(struct tb_switch *sw) 158 + /** 159 + * usb4_switch_check_wakes() - Check for wakes and notify PM core about them 160 + * @sw: Router whose wakes to check 161 + * 162 + * Checks wakes occurred during suspend and notify the PM core about them. 163 + */ 164 + void usb4_switch_check_wakes(struct tb_switch *sw) 159 165 { 160 166 bool wakeup_usb4 = false; 161 167 struct usb4_port *usb4; 162 168 struct tb_port *port; 163 169 bool wakeup = false; 164 170 u32 val; 165 - 166 - if (!device_may_wakeup(&sw->dev)) 167 - return; 168 171 169 172 if (tb_route(sw)) { 170 173 if (tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_6, 1)) ··· 246 243 bool tbt3, xhci; 247 244 u32 val = 0; 248 245 int ret; 249 - 250 - usb4_switch_check_wakes(sw); 251 246 252 247 if (!tb_route(sw)) 253 248 return 0;
+3 -3
drivers/tty/serial/8250/8250_dw.c
··· 356 356 long rate; 357 357 int ret; 358 358 359 + clk_disable_unprepare(d->clk); 359 360 rate = clk_round_rate(d->clk, newrate); 360 - if (rate > 0 && p->uartclk != rate) { 361 - clk_disable_unprepare(d->clk); 361 + if (rate > 0) { 362 362 /* 363 363 * Note that any clock-notifer worker will block in 364 364 * serial8250_update_uartclk() until we are done. ··· 366 366 ret = clk_set_rate(d->clk, newrate); 367 367 if (!ret) 368 368 p->uartclk = rate; 369 - clk_prepare_enable(d->clk); 370 369 } 370 + clk_prepare_enable(d->clk); 371 371 372 372 dw8250_do_set_termios(p, termios, old); 373 373 }
+1 -1
drivers/tty/serial/8250/8250_lpc18xx.c
··· 151 151 152 152 ret = uart_read_port_properties(&uart.port); 153 153 if (ret) 154 - return ret; 154 + goto dis_uart_clk; 155 155 156 156 uart.port.iotype = UPIO_MEM32; 157 157 uart.port.regshift = 2;
-6
drivers/tty/serial/8250/8250_pci.c
··· 5010 5010 { PCI_VENDOR_ID_LAVA, PCI_DEVICE_ID_LAVA_QUATRO_B, 5011 5011 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5012 5012 pbn_b0_bt_2_115200 }, 5013 - { PCI_VENDOR_ID_LAVA, PCI_DEVICE_ID_LAVA_QUATTRO_A, 5014 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5015 - pbn_b0_bt_2_115200 }, 5016 - { PCI_VENDOR_ID_LAVA, PCI_DEVICE_ID_LAVA_QUATTRO_B, 5017 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5018 - pbn_b0_bt_2_115200 }, 5019 5013 { PCI_VENDOR_ID_LAVA, PCI_DEVICE_ID_LAVA_OCTO_A, 5020 5014 PCI_ANY_ID, PCI_ANY_ID, 0, 0, 5021 5015 pbn_b0_bt_4_460800 },
+6 -2
drivers/tty/serial/mxs-auart.c
··· 1086 1086 1087 1087 static irqreturn_t mxs_auart_irq_handle(int irq, void *context) 1088 1088 { 1089 - u32 istat; 1089 + u32 istat, stat; 1090 1090 struct mxs_auart_port *s = context; 1091 1091 u32 mctrl_temp = s->mctrl_prev; 1092 - u32 stat = mxs_read(s, REG_STAT); 1093 1092 1093 + uart_port_lock(&s->port); 1094 + 1095 + stat = mxs_read(s, REG_STAT); 1094 1096 istat = mxs_read(s, REG_INTR); 1095 1097 1096 1098 /* ack irq */ ··· 1127 1125 mxs_auart_tx_chars(s); 1128 1126 istat &= ~AUART_INTR_TXIS; 1129 1127 } 1128 + 1129 + uart_port_unlock(&s->port); 1130 1130 1131 1131 return IRQ_HANDLED; 1132 1132 }
-14
drivers/tty/serial/pmac_zilog.c
··· 210 210 { 211 211 struct tty_port *port; 212 212 unsigned char ch, r1, drop, flag; 213 - int loops = 0; 214 213 215 214 /* Sanity check, make sure the old bug is no longer happening */ 216 215 if (uap->port.state == NULL) { ··· 290 291 if (r1 & Rx_OVR) 291 292 tty_insert_flip_char(port, 0, TTY_OVERRUN); 292 293 next_char: 293 - /* We can get stuck in an infinite loop getting char 0 when the 294 - * line is in a wrong HW state, we break that here. 295 - * When that happens, I disable the receive side of the driver. 296 - * Note that what I've been experiencing is a real irq loop where 297 - * I'm getting flooded regardless of the actual port speed. 298 - * Something strange is going on with the HW 299 - */ 300 - if ((++loops) > 1000) 301 - goto flood; 302 294 ch = read_zsreg(uap, R0); 303 295 if (!(ch & Rx_CH_AV)) 304 296 break; 305 297 } 306 298 307 - return true; 308 - flood: 309 - pmz_interrupt_control(uap, 0); 310 - pmz_error("pmz: rx irq flood !\n"); 311 299 return true; 312 300 } 313 301
+4
drivers/tty/serial/serial_base.h
··· 22 22 struct serial_port_device { 23 23 struct device dev; 24 24 struct uart_port *port; 25 + unsigned int tx_enabled:1; 25 26 }; 26 27 27 28 int serial_base_ctrl_init(void); ··· 30 29 31 30 int serial_base_port_init(void); 32 31 void serial_base_port_exit(void); 32 + 33 + void serial_base_port_startup(struct uart_port *port); 34 + void serial_base_port_shutdown(struct uart_port *port); 33 35 34 36 int serial_base_driver_register(struct device_driver *driver); 35 37 void serial_base_driver_unregister(struct device_driver *driver);
+19 -4
drivers/tty/serial/serial_core.c
··· 156 156 * enabled, serial_port_runtime_resume() calls start_tx() again 157 157 * after enabling the device. 158 158 */ 159 - if (pm_runtime_active(&port_dev->dev)) 159 + if (!pm_runtime_enabled(port->dev) || pm_runtime_active(&port_dev->dev)) 160 160 port->ops->start_tx(port); 161 161 pm_runtime_mark_last_busy(&port_dev->dev); 162 162 pm_runtime_put_autosuspend(&port_dev->dev); ··· 323 323 bool init_hw) 324 324 { 325 325 struct tty_port *port = &state->port; 326 + struct uart_port *uport; 326 327 int retval; 327 328 328 329 if (tty_port_initialized(port)) 329 - return 0; 330 + goto out_base_port_startup; 330 331 331 332 retval = uart_port_startup(tty, state, init_hw); 332 - if (retval) 333 + if (retval) { 333 334 set_bit(TTY_IO_ERROR, &tty->flags); 335 + return retval; 336 + } 334 337 335 - return retval; 338 + out_base_port_startup: 339 + uport = uart_port_check(state); 340 + if (!uport) 341 + return -EIO; 342 + 343 + serial_base_port_startup(uport); 344 + 345 + return 0; 336 346 } 337 347 338 348 /* ··· 364 354 */ 365 355 if (tty) 366 356 set_bit(TTY_IO_ERROR, &tty->flags); 357 + 358 + if (uport) 359 + serial_base_port_shutdown(uport); 367 360 368 361 if (tty_port_initialized(port)) { 369 362 tty_port_set_initialized(port, false); ··· 1788 1775 uport->ops->stop_rx(uport); 1789 1776 uart_port_unlock_irq(uport); 1790 1777 1778 + serial_base_port_shutdown(uport); 1791 1779 uart_port_shutdown(port); 1792 1780 1793 1781 /* ··· 1802 1788 * Free the transmit buffer. 1803 1789 */ 1804 1790 uart_port_lock_irq(uport); 1791 + uart_circ_clear(&state->xmit); 1805 1792 buf = state->xmit.buf; 1806 1793 state->xmit.buf = NULL; 1807 1794 uart_port_unlock_irq(uport);
+34
drivers/tty/serial/serial_port.c
··· 39 39 40 40 /* Flush any pending TX for the port */ 41 41 uart_port_lock_irqsave(port, &flags); 42 + if (!port_dev->tx_enabled) 43 + goto unlock; 42 44 if (__serial_port_busy(port)) 43 45 port->ops->start_tx(port); 46 + 47 + unlock: 44 48 uart_port_unlock_irqrestore(port, flags); 45 49 46 50 out: ··· 64 60 return 0; 65 61 66 62 uart_port_lock_irqsave(port, &flags); 63 + if (!port_dev->tx_enabled) { 64 + uart_port_unlock_irqrestore(port, flags); 65 + return 0; 66 + } 67 + 67 68 busy = __serial_port_busy(port); 68 69 if (busy) 69 70 port->ops->start_tx(port); ··· 78 69 pm_runtime_mark_last_busy(dev); 79 70 80 71 return busy ? -EBUSY : 0; 72 + } 73 + 74 + static void serial_base_port_set_tx(struct uart_port *port, 75 + struct serial_port_device *port_dev, 76 + bool enabled) 77 + { 78 + unsigned long flags; 79 + 80 + uart_port_lock_irqsave(port, &flags); 81 + port_dev->tx_enabled = enabled; 82 + uart_port_unlock_irqrestore(port, flags); 83 + } 84 + 85 + void serial_base_port_startup(struct uart_port *port) 86 + { 87 + struct serial_port_device *port_dev = port->port_dev; 88 + 89 + serial_base_port_set_tx(port, port_dev, true); 90 + } 91 + 92 + void serial_base_port_shutdown(struct uart_port *port) 93 + { 94 + struct serial_port_device *port_dev = port->port_dev; 95 + 96 + serial_base_port_set_tx(port, port_dev, false); 81 97 } 82 98 83 99 static DEFINE_RUNTIME_DEV_PM_OPS(serial_port_pm,
+11 -2
drivers/tty/serial/stm32-usart.c
··· 861 861 const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; 862 862 u32 sr; 863 863 unsigned int size; 864 + irqreturn_t ret = IRQ_NONE; 864 865 865 866 sr = readl_relaxed(port->membase + ofs->isr); 866 867 ··· 870 869 (sr & USART_SR_TC)) { 871 870 stm32_usart_tc_interrupt_disable(port); 872 871 stm32_usart_rs485_rts_disable(port); 872 + ret = IRQ_HANDLED; 873 873 } 874 874 875 - if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG) 875 + if ((sr & USART_SR_RTOF) && ofs->icr != UNDEF_REG) { 876 876 writel_relaxed(USART_ICR_RTOCF, 877 877 port->membase + ofs->icr); 878 + ret = IRQ_HANDLED; 879 + } 878 880 879 881 if ((sr & USART_SR_WUF) && ofs->icr != UNDEF_REG) { 880 882 /* Clear wake up flag and disable wake up interrupt */ ··· 886 882 stm32_usart_clr_bits(port, ofs->cr3, USART_CR3_WUFIE); 887 883 if (irqd_is_wakeup_set(irq_get_irq_data(port->irq))) 888 884 pm_wakeup_event(tport->tty->dev, 0); 885 + ret = IRQ_HANDLED; 889 886 } 890 887 891 888 /* ··· 901 896 uart_unlock_and_check_sysrq(port); 902 897 if (size) 903 898 tty_flip_buffer_push(tport); 899 + ret = IRQ_HANDLED; 904 900 } 905 901 } 906 902 ··· 909 903 uart_port_lock(port); 910 904 stm32_usart_transmit_chars(port); 911 905 uart_port_unlock(port); 906 + ret = IRQ_HANDLED; 912 907 } 913 908 914 909 /* Receiver timeout irq for DMA RX */ ··· 919 912 uart_unlock_and_check_sysrq(port); 920 913 if (size) 921 914 tty_flip_buffer_push(tport); 915 + ret = IRQ_HANDLED; 922 916 } 923 917 924 - return IRQ_HANDLED; 918 + return ret; 925 919 } 926 920 927 921 static void stm32_usart_set_mctrl(struct uart_port *port, unsigned int mctrl) ··· 1092 1084 val |= USART_CR2_SWAP; 1093 1085 writel_relaxed(val, port->membase + ofs->cr2); 1094 1086 } 1087 + stm32_port->throttled = false; 1095 1088 1096 1089 /* RX FIFO Flush */ 1097 1090 if (ofs->rqr != UNDEF_REG)
+7 -1
drivers/ufs/host/ufs-qcom.c
···
47 47 TSTBUS_MAX,
48 48 };
49 49
50 - #define QCOM_UFS_MAX_GEAR 4
50 + #define QCOM_UFS_MAX_GEAR 5
51 51 #define QCOM_UFS_MAX_LANE 2
52 52
53 53 enum {
···
67 67 [MODE_PWM][UFS_PWM_G2][UFS_LANE_1] = { 1844, 1000 },
68 68 [MODE_PWM][UFS_PWM_G3][UFS_LANE_1] = { 3688, 1000 },
69 69 [MODE_PWM][UFS_PWM_G4][UFS_LANE_1] = { 7376, 1000 },
70 + [MODE_PWM][UFS_PWM_G5][UFS_LANE_1] = { 14752, 1000 },
70 71 [MODE_PWM][UFS_PWM_G1][UFS_LANE_2] = { 1844, 1000 },
71 72 [MODE_PWM][UFS_PWM_G2][UFS_LANE_2] = { 3688, 1000 },
72 73 [MODE_PWM][UFS_PWM_G3][UFS_LANE_2] = { 7376, 1000 },
73 74 [MODE_PWM][UFS_PWM_G4][UFS_LANE_2] = { 14752, 1000 },
75 + [MODE_PWM][UFS_PWM_G5][UFS_LANE_2] = { 29504, 1000 },
74 76 [MODE_HS_RA][UFS_HS_G1][UFS_LANE_1] = { 127796, 1000 },
75 77 [MODE_HS_RA][UFS_HS_G2][UFS_LANE_1] = { 255591, 1000 },
76 78 [MODE_HS_RA][UFS_HS_G3][UFS_LANE_1] = { 1492582, 102400 },
77 79 [MODE_HS_RA][UFS_HS_G4][UFS_LANE_1] = { 2915200, 204800 },
80 + [MODE_HS_RA][UFS_HS_G5][UFS_LANE_1] = { 5836800, 409600 },
78 81 [MODE_HS_RA][UFS_HS_G1][UFS_LANE_2] = { 255591, 1000 },
79 82 [MODE_HS_RA][UFS_HS_G2][UFS_LANE_2] = { 511181, 1000 },
80 83 [MODE_HS_RA][UFS_HS_G3][UFS_LANE_2] = { 1492582, 204800 },
81 84 [MODE_HS_RA][UFS_HS_G4][UFS_LANE_2] = { 2915200, 409600 },
85 + [MODE_HS_RA][UFS_HS_G5][UFS_LANE_2] = { 5836800, 819200 },
82 86 [MODE_HS_RB][UFS_HS_G1][UFS_LANE_1] = { 149422, 1000 },
83 87 [MODE_HS_RB][UFS_HS_G2][UFS_LANE_1] = { 298189, 1000 },
84 88 [MODE_HS_RB][UFS_HS_G3][UFS_LANE_1] = { 1492582, 102400 },
85 89 [MODE_HS_RB][UFS_HS_G4][UFS_LANE_1] = { 2915200, 204800 },
90 + [MODE_HS_RB][UFS_HS_G5][UFS_LANE_1] = { 5836800, 409600 },
86 91 [MODE_HS_RB][UFS_HS_G1][UFS_LANE_2] = { 298189, 1000 },
87 92 [MODE_HS_RB][UFS_HS_G2][UFS_LANE_2] = { 596378, 1000 },
88 93 [MODE_HS_RB][UFS_HS_G3][UFS_LANE_2] = { 1492582, 204800 },
89 94 [MODE_HS_RB][UFS_HS_G4][UFS_LANE_2] = { 2915200, 409600 },
95 + [MODE_HS_RB][UFS_HS_G5][UFS_LANE_2] = { 5836800, 819200 },
90 96 [MODE_MAX][0][0] = { 7643136, 307200 },
91 97 };
92 98
+1 -5
drivers/usb/class/cdc-wdm.c
··· 485 485 static int service_outstanding_interrupt(struct wdm_device *desc) 486 486 { 487 487 int rv = 0; 488 - int used; 489 488 490 489 /* submit read urb only if the device is waiting for it */ 491 490 if (!desc->resp_count || !--desc->resp_count) ··· 499 500 goto out; 500 501 } 501 502 502 - used = test_and_set_bit(WDM_RESPONDING, &desc->flags); 503 - if (used) 504 - goto out; 505 - 503 + set_bit(WDM_RESPONDING, &desc->flags); 506 504 spin_unlock_irq(&desc->iuspin); 507 505 rv = usb_submit_urb(desc->response, GFP_KERNEL); 508 506 spin_lock_irq(&desc->iuspin);
+3 -1
drivers/usb/core/port.c
··· 449 449 { 450 450 struct usb_port *port_dev = to_usb_port(dev); 451 451 452 - if (port_dev->child) 452 + if (port_dev->child) { 453 453 usb_disable_usb2_hardware_lpm(port_dev->child); 454 + usb_unlocked_disable_lpm(port_dev->child); 455 + } 454 456 } 455 457 456 458 static const struct dev_pm_ops usb_port_pm_ops = {
+3 -1
drivers/usb/dwc2/hcd_ddma.c
··· 867 867 struct dwc2_dma_desc *dma_desc; 868 868 struct dwc2_hcd_iso_packet_desc *frame_desc; 869 869 u16 frame_desc_idx; 870 - struct urb *usb_urb = qtd->urb->priv; 870 + struct urb *usb_urb; 871 871 u16 remain = 0; 872 872 int rc = 0; 873 873 874 874 if (!qtd->urb) 875 875 return -EINVAL; 876 + 877 + usb_urb = qtd->urb->priv; 876 878 877 879 dma_sync_single_for_cpu(hsotg->dev, qh->desc_list_dma + (idx * 878 880 sizeof(struct dwc2_dma_desc)),
+2 -1
drivers/usb/dwc3/ep0.c
··· 226 226 227 227 /* reinitialize physical ep1 */ 228 228 dep = dwc->eps[1]; 229 - dep->flags = DWC3_EP_ENABLED; 229 + dep->flags &= DWC3_EP_RESOURCE_ALLOCATED; 230 + dep->flags |= DWC3_EP_ENABLED; 230 231 231 232 /* stall is always issued on EP0 */ 232 233 dep = dwc->eps[0];
+16 -13
drivers/usb/gadget/function/f_fs.c
··· 46 46 47 47 #define FUNCTIONFS_MAGIC 0xa647361 /* Chosen by a honest dice roll ;) */ 48 48 49 + #define DMABUF_ENQUEUE_TIMEOUT_MS 5000 50 + 49 51 MODULE_IMPORT_NS(DMA_BUF); 50 52 51 53 /* Reference counter handling */ ··· 1580 1578 struct ffs_dmabuf_priv *priv; 1581 1579 struct ffs_dma_fence *fence; 1582 1580 struct usb_request *usb_req; 1581 + enum dma_resv_usage resv_dir; 1583 1582 struct dma_buf *dmabuf; 1583 + unsigned long timeout; 1584 1584 struct ffs_ep *ep; 1585 1585 bool cookie; 1586 1586 u32 seqno; 1587 + long retl; 1587 1588 int ret; 1588 1589 1589 1590 if (req->flags & ~USB_FFS_DMABUF_TRANSFER_MASK) ··· 1620 1615 goto err_attachment_put; 1621 1616 1622 1617 /* Make sure we don't have writers */ 1623 - if (!dma_resv_test_signaled(dmabuf->resv, DMA_RESV_USAGE_WRITE)) { 1624 - pr_vdebug("FFS WRITE fence is not signaled\n"); 1625 - ret = -EBUSY; 1626 - goto err_resv_unlock; 1627 - } 1628 - 1629 - /* If we're writing to the DMABUF, make sure we don't have readers */ 1630 - if (epfile->in && 1631 - !dma_resv_test_signaled(dmabuf->resv, DMA_RESV_USAGE_READ)) { 1632 - pr_vdebug("FFS READ fence is not signaled\n"); 1633 - ret = -EBUSY; 1618 + timeout = nonblock ? 0 : msecs_to_jiffies(DMABUF_ENQUEUE_TIMEOUT_MS); 1619 + retl = dma_resv_wait_timeout(dmabuf->resv, 1620 + dma_resv_usage_rw(epfile->in), 1621 + true, timeout); 1622 + if (retl == 0) 1623 + retl = -EBUSY; 1624 + if (retl < 0) { 1625 + ret = (int)retl; 1634 1626 goto err_resv_unlock; 1635 1627 } 1636 1628 ··· 1667 1665 dma_fence_init(&fence->base, &ffs_dmabuf_fence_ops, 1668 1666 &priv->lock, priv->context, seqno); 1669 1667 1670 - dma_resv_add_fence(dmabuf->resv, &fence->base, 1671 - dma_resv_usage_rw(epfile->in)); 1668 + resv_dir = epfile->in ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ; 1669 + 1670 + dma_resv_add_fence(dmabuf->resv, &fence->base, resv_dir); 1672 1671 dma_resv_unlock(dmabuf->resv); 1673 1672 1674 1673 /* Now that the dma_fence is in place, queue the transfer. */
+2 -2
drivers/usb/gadget/function/f_ncm.c
··· 878 878 if (alt > 1) 879 879 goto fail; 880 880 881 - if (ncm->port.in_ep->enabled) { 881 + if (ncm->netdev) { 882 882 DBG(cdev, "reset ncm\n"); 883 883 ncm->netdev = NULL; 884 884 gether_disconnect(&ncm->port); ··· 1367 1367 1368 1368 DBG(cdev, "ncm deactivated\n"); 1369 1369 1370 - if (ncm->port.in_ep->enabled) { 1370 + if (ncm->netdev) { 1371 1371 ncm->netdev = NULL; 1372 1372 gether_disconnect(&ncm->port); 1373 1373 }
+2 -3
drivers/usb/gadget/udc/fsl_udc_core.c
··· 868 868 { 869 869 struct fsl_ep *ep = container_of(_ep, struct fsl_ep, ep); 870 870 struct fsl_req *req = container_of(_req, struct fsl_req, req); 871 - struct fsl_udc *udc; 871 + struct fsl_udc *udc = ep->udc; 872 872 unsigned long flags; 873 873 int ret; 874 874 ··· 878 878 dev_vdbg(&udc->gadget.dev, "%s, bad params\n", __func__); 879 879 return -EINVAL; 880 880 } 881 - if (unlikely(!_ep || !ep->ep.desc)) { 881 + if (unlikely(!ep->ep.desc)) { 882 882 dev_vdbg(&udc->gadget.dev, "%s, bad ep\n", __func__); 883 883 return -EINVAL; 884 884 } ··· 887 887 return -EMSGSIZE; 888 888 } 889 889 890 - udc = ep->udc; 891 890 if (!udc->driver || udc->gadget.speed == USB_SPEED_UNKNOWN) 892 891 return -ESHUTDOWN; 893 892
+4 -5
drivers/usb/host/xhci-ring.c
··· 3133 3133 irqreturn_t xhci_irq(struct usb_hcd *hcd) 3134 3134 { 3135 3135 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 3136 - irqreturn_t ret = IRQ_NONE; 3136 + irqreturn_t ret = IRQ_HANDLED; 3137 3137 u32 status; 3138 3138 3139 3139 spin_lock(&xhci->lock); ··· 3141 3141 status = readl(&xhci->op_regs->status); 3142 3142 if (status == ~(u32)0) { 3143 3143 xhci_hc_died(xhci); 3144 - ret = IRQ_HANDLED; 3145 3144 goto out; 3146 3145 } 3147 3146 3148 - if (!(status & STS_EINT)) 3147 + if (!(status & STS_EINT)) { 3148 + ret = IRQ_NONE; 3149 3149 goto out; 3150 + } 3150 3151 3151 3152 if (status & STS_HCE) { 3152 3153 xhci_warn(xhci, "WARNING: Host Controller Error\n"); ··· 3157 3156 if (status & STS_FATAL) { 3158 3157 xhci_warn(xhci, "WARNING: Host System Error\n"); 3159 3158 xhci_halt(xhci); 3160 - ret = IRQ_HANDLED; 3161 3159 goto out; 3162 3160 } 3163 3161 ··· 3167 3167 */ 3168 3168 status |= STS_EINT; 3169 3169 writel(status, &xhci->op_regs->status); 3170 - ret = IRQ_HANDLED; 3171 3170 3172 3171 /* This is the handler of the primary interrupter */ 3173 3172 xhci_handle_events(xhci, xhci->interrupters[0]);
+5 -7
drivers/usb/host/xhci-trace.h
··· 172 172 __field(void *, vdev) 173 173 __field(unsigned long long, out_ctx) 174 174 __field(unsigned long long, in_ctx) 175 - __field(int, hcd_portnum) 176 - __field(int, hw_portnum) 175 + __field(int, slot_id) 177 176 __field(u16, current_mel) 178 177 179 178 ), ··· 180 181 __entry->vdev = vdev; 181 182 __entry->in_ctx = (unsigned long long) vdev->in_ctx->dma; 182 183 __entry->out_ctx = (unsigned long long) vdev->out_ctx->dma; 183 - __entry->hcd_portnum = (int) vdev->rhub_port->hcd_portnum; 184 - __entry->hw_portnum = (int) vdev->rhub_port->hw_portnum; 184 + __entry->slot_id = (int) vdev->slot_id; 185 185 __entry->current_mel = (u16) vdev->current_mel; 186 186 ), 187 - TP_printk("vdev %p ctx %llx | %llx hcd_portnum %d hw_portnum %d current_mel %d", 188 - __entry->vdev, __entry->in_ctx, __entry->out_ctx, 189 - __entry->hcd_portnum, __entry->hw_portnum, __entry->current_mel 187 + TP_printk("vdev %p slot %d ctx %llx | %llx current_mel %d", 188 + __entry->vdev, __entry->slot_id, __entry->in_ctx, 189 + __entry->out_ctx, __entry->current_mel 190 190 ) 191 191 ); 192 192
+5 -1
drivers/usb/misc/onboard_usb_hub.c
··· 78 78 err = regulator_bulk_enable(hub->pdata->num_supplies, hub->supplies); 79 79 if (err) { 80 80 dev_err(hub->dev, "failed to enable supplies: %pe\n", ERR_PTR(err)); 81 - return err; 81 + goto disable_clk; 82 82 } 83 83 84 84 fsleep(hub->pdata->reset_us); ··· 87 87 hub->is_powered_on = true; 88 88 89 89 return 0; 90 + 91 + disable_clk: 92 + clk_disable_unprepare(hub->clk); 93 + return err; 90 94 } 91 95 92 96 static int onboard_hub_power_off(struct onboard_hub *hub)
+40
drivers/usb/serial/option.c
···
255 255 #define QUECTEL_PRODUCT_EM061K_LMS 0x0124
256 256 #define QUECTEL_PRODUCT_EC25 0x0125
257 257 #define QUECTEL_PRODUCT_EM060K_128 0x0128
258 + #define QUECTEL_PRODUCT_EM060K_129 0x0129
259 + #define QUECTEL_PRODUCT_EM060K_12a 0x012a
260 + #define QUECTEL_PRODUCT_EM060K_12b 0x012b
261 + #define QUECTEL_PRODUCT_EM060K_12c 0x012c
258 262 #define QUECTEL_PRODUCT_EG91 0x0191
259 263 #define QUECTEL_PRODUCT_EG95 0x0195
260 264 #define QUECTEL_PRODUCT_BG96 0x0296
···
1222 1218 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x30) },
1223 1219 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0x00, 0x40) },
1224 1220 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_128, 0xff, 0xff, 0x40) },
1221 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x30) },
1222 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0x00, 0x40) },
1223 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_129, 0xff, 0xff, 0x40) },
1224 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x30) },
1225 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0x00, 0x40) },
1226 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12a, 0xff, 0xff, 0x40) },
1227 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x30) },
1228 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0x00, 0x40) },
1229 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12b, 0xff, 0xff, 0x40) },
1230 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x30) },
1231 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0x00, 0x40) },
1232 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K_12c, 0xff, 0xff, 0x40) },
1225 1233 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x30) },
1226 1234 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0x00, 0x40) },
1227 1235 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM061K_LCN, 0xff, 0xff, 0x40) },
···
1376 1360 .driver_info = NCTRL(2) | RSVD(3) },
1377 1361 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */
1378 1362 .driver_info = NCTRL(0) | RSVD(1) },
1363 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */
1364 + .driver_info = RSVD(0) | NCTRL(3) },
1365 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a4, 0xff), /* Telit FN20C04 (rmnet) */
1366 + .driver_info = RSVD(0) | NCTRL(3) },
1367 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a9, 0xff), /* Telit FN20C04 (rmnet) */
1368 + .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
1379 1369 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
1380 1370 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
1381 1371 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
···
2074 2052 .driver_info = RSVD(3) },
2075 2053 { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9803, 0xff),
2076 2054 .driver_info = RSVD(4) },
2055 + { USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b05), /* Longsung U8300 */
2056 + .driver_info = RSVD(4) | RSVD(5) },
2057 + { USB_DEVICE(LONGCHEER_VENDOR_ID, 0x9b3c), /* Longsung U9300 */
2058 + .driver_info = RSVD(0) | RSVD(4) },
2077 2059 { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) },
2078 2060 { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) },
2079 2061 { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) },
···
2298 2272 { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0xff, 0x30) }, /* Fibocom FG150 Diag */
2299 2273 { USB_DEVICE_AND_INTERFACE_INFO(0x2cb7, 0x010b, 0xff, 0, 0) }, /* Fibocom FG150 AT */
2300 2274 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0111, 0xff) }, /* Fibocom FM160 (MBIM mode) */
2275 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0115, 0xff), /* Fibocom FM135 (laptop MBIM) */
2276 + .driver_info = RSVD(5) },
2301 2277 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a0, 0xff) }, /* Fibocom NL668-AM/NL652-EU (laptop MBIM) */
2302 2278 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a2, 0xff) }, /* Fibocom FM101-GL (laptop MBIM) */
2303 2279 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a3, 0xff) }, /* Fibocom FM101-GL (laptop MBIM) */
2304 2280 { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x01a4, 0xff), /* Fibocom FM101-GL (laptop MBIM) */
2305 2281 .driver_info = RSVD(4) },
2282 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a04, 0xff) }, /* Fibocom FM650-CN (ECM mode) */
2283 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a05, 0xff) }, /* Fibocom FM650-CN (NCM mode) */
2284 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a06, 0xff) }, /* Fibocom FM650-CN (RNDIS mode) */
2285 + { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a07, 0xff) }, /* Fibocom FM650-CN (MBIM mode) */
2306 2286 { USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */
2307 2287 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
2308 2288 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
2309 2289 { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1406, 0xff) }, /* GosunCn GM500 ECM/NCM */
2290 + { USB_DEVICE(0x33f8, 0x0104), /* Rolling RW101-GL (laptop RMNET) */
2291 + .driver_info = RSVD(4) | RSVD(5) },
2292 + { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a2, 0xff) }, /* Rolling RW101-GL (laptop MBIM) */
2293 + { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a3, 0xff) }, /* Rolling RW101-GL (laptop MBIM) */
2294 + { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x01a4, 0xff), /* Rolling RW101-GL (laptop MBIM) */
2295 + .driver_info = RSVD(4) },
2296 + { USB_DEVICE_INTERFACE_CLASS(0x33f8, 0x0115, 0xff), /* Rolling RW135-GL (laptop MBIM) */
2297 + .driver_info = RSVD(5) },
2310 2298 { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
2311 2299 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
2312 2300 { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+1 -1
drivers/usb/typec/mux/it5205.c
··· 22 22 #include <linux/usb/typec_mux.h> 23 23 24 24 #define IT5205_REG_CHIP_ID(x) (0x4 + (x)) 25 - #define IT5205FN_CHIP_ID 0x35323035 /* "5205" */ 25 + #define IT5205FN_CHIP_ID 0x35303235 /* "5025" -> "5205" */ 26 26 27 27 /* MUX power down register */ 28 28 #define IT5205_REG_MUXPDR 0x10
+2 -2
drivers/usb/typec/tcpm/tcpm.c
··· 6855 6855 if (data->sink_desc.pdo[0]) { 6856 6856 for (i = 0; i < PDO_MAX_OBJECTS && data->sink_desc.pdo[i]; i++) 6857 6857 port->snk_pdo[i] = data->sink_desc.pdo[i]; 6858 - port->nr_snk_pdo = i + 1; 6858 + port->nr_snk_pdo = i; 6859 6859 port->operating_snk_mw = data->operating_snk_mw; 6860 6860 } 6861 6861 6862 6862 if (data->source_desc.pdo[0]) { 6863 6863 for (i = 0; i < PDO_MAX_OBJECTS && data->source_desc.pdo[i]; i++) 6864 6864 port->src_pdo[i] = data->source_desc.pdo[i]; 6865 - port->nr_src_pdo = i + 1; 6865 + port->nr_src_pdo = i; 6866 6866 } 6867 6867 6868 6868 switch (port->state) {
+4 -2
drivers/usb/typec/ucsi/ucsi.c
··· 1736 1736 ucsi->connector = connector; 1737 1737 ucsi->ntfy = ntfy; 1738 1738 1739 + mutex_lock(&ucsi->ppm_lock); 1739 1740 ret = ucsi->ops->read(ucsi, UCSI_CCI, &cci, sizeof(cci)); 1741 + mutex_unlock(&ucsi->ppm_lock); 1740 1742 if (ret) 1741 1743 return ret; 1742 - if (UCSI_CCI_CONNECTOR(READ_ONCE(cci))) 1743 - ucsi_connector_change(ucsi, cci); 1744 + if (UCSI_CCI_CONNECTOR(cci)) 1745 + ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci)); 1744 1746 1745 1747 return 0; 1746 1748
-3
fs/9p/fid.h
··· 49 49 static inline void v9fs_fid_add_modes(struct p9_fid *fid, unsigned int s_flags, 50 50 unsigned int s_cache, unsigned int f_flags) 51 51 { 52 - if (fid->qid.type != P9_QTFILE) 53 - return; 54 - 55 52 if ((!s_cache) || 56 53 ((fid->qid.version == 0) && !(s_flags & V9FS_IGNORE_QV)) || 57 54 (s_flags & V9FS_DIRECT_IO) || (f_flags & O_DIRECT)) {
+2
fs/9p/vfs_file.c
··· 520 520 .splice_read = v9fs_file_splice_read, 521 521 .splice_write = iter_file_splice_write, 522 522 .fsync = v9fs_file_fsync, 523 + .setlease = simple_nosetlease, 523 524 }; 524 525 525 526 const struct file_operations v9fs_file_operations_dotl = { ··· 535 534 .splice_read = v9fs_file_splice_read, 536 535 .splice_write = iter_file_splice_write, 537 536 .fsync = v9fs_file_fsync_dotl, 537 + .setlease = simple_nosetlease, 538 538 };
+4 -3
fs/9p/vfs_inode.c
··· 83 83 int res; 84 84 int mode = stat->mode; 85 85 86 - res = mode & S_IALLUGO; 86 + res = mode & 0777; /* S_IRWXUGO */ 87 87 if (v9fs_proto_dotu(v9ses)) { 88 88 if ((mode & P9_DMSETUID) == P9_DMSETUID) 89 89 res |= S_ISUID; ··· 177 177 ret = P9_ORDWR; 178 178 break; 179 179 } 180 + 181 + if (uflags & O_TRUNC) 182 + ret |= P9_OTRUNC; 180 183 181 184 if (extended) { 182 185 if (uflags & O_EXCL) ··· 1063 1060 umode_t mode; 1064 1061 struct v9fs_session_info *v9ses = sb->s_fs_info; 1065 1062 struct v9fs_inode *v9inode = V9FS_I(inode); 1066 - 1067 - set_nlink(inode, 1); 1068 1063 1069 1064 inode_set_atime(inode, stat->atime, 0); 1070 1065 inode_set_mtime(inode, stat->mtime, 0);
+17
fs/9p/vfs_super.c
··· 244 244 return res; 245 245 } 246 246 247 + static int v9fs_drop_inode(struct inode *inode) 248 + { 249 + struct v9fs_session_info *v9ses; 250 + 251 + v9ses = v9fs_inode2v9ses(inode); 252 + if (v9ses->cache & (CACHE_META|CACHE_LOOSE)) 253 + return generic_drop_inode(inode); 254 + /* 255 + * in case of non cached mode always drop the 256 + * inode because we want the inode attribute 257 + * to always match that on the server. 258 + */ 259 + return 1; 260 + } 261 + 247 262 static int v9fs_write_inode(struct inode *inode, 248 263 struct writeback_control *wbc) 249 264 { ··· 283 268 .alloc_inode = v9fs_alloc_inode, 284 269 .free_inode = v9fs_free_inode, 285 270 .statfs = simple_statfs, 271 + .drop_inode = v9fs_drop_inode, 286 272 .evict_inode = v9fs_evict_inode, 287 273 .show_options = v9fs_show_options, 288 274 .umount_begin = v9fs_umount_begin, ··· 294 278 .alloc_inode = v9fs_alloc_inode, 295 279 .free_inode = v9fs_free_inode, 296 280 .statfs = v9fs_statfs, 281 + .drop_inode = v9fs_drop_inode, 297 282 .evict_inode = v9fs_evict_inode, 298 283 .show_options = v9fs_show_options, 299 284 .umount_begin = v9fs_umount_begin,
+1 -1
fs/bcachefs/backpointers.c
··· 470 470 goto err; 471 471 } 472 472 473 - bio = bio_alloc(ca->disk_sb.bdev, 1, REQ_OP_READ, GFP_KERNEL); 473 + bio = bio_alloc(ca->disk_sb.bdev, buf_pages(data_buf, bytes), REQ_OP_READ, GFP_KERNEL); 474 474 bio->bi_iter.bi_sector = p.ptr.offset; 475 475 bch2_bio_map(bio, data_buf, bytes); 476 476 ret = submit_bio_wait(bio);
+2 -1
fs/bcachefs/bcachefs_format.h
··· 1504 1504 BIT_ULL(KEY_TYPE_stripe)) \ 1505 1505 x(reflink, 7, BTREE_ID_EXTENTS|BTREE_ID_DATA, \ 1506 1506 BIT_ULL(KEY_TYPE_reflink_v)| \ 1507 - BIT_ULL(KEY_TYPE_indirect_inline_data)) \ 1507 + BIT_ULL(KEY_TYPE_indirect_inline_data)| \ 1508 + BIT_ULL(KEY_TYPE_error)) \ 1508 1509 x(subvolumes, 8, 0, \ 1509 1510 BIT_ULL(KEY_TYPE_subvolume)) \ 1510 1511 x(snapshots, 9, 0, \
+2 -1
fs/bcachefs/btree_gc.c
··· 1587 1587 struct bkey_i *new = bch2_bkey_make_mut_noupdate(trans, k); 1588 1588 ret = PTR_ERR_OR_ZERO(new); 1589 1589 if (ret) 1590 - return ret; 1590 + goto out; 1591 1591 1592 1592 if (!r->refcount) 1593 1593 new->k.type = KEY_TYPE_deleted; ··· 1595 1595 *bkey_refcount(bkey_i_to_s(new)) = cpu_to_le64(r->refcount); 1596 1596 ret = bch2_trans_update(trans, iter, new, 0); 1597 1597 } 1598 + out: 1598 1599 fsck_err: 1599 1600 printbuf_exit(&buf); 1600 1601 return ret;
+1 -1
fs/bcachefs/btree_io.c
··· 888 888 -BCH_ERR_btree_node_read_err_fixable, 889 889 c, NULL, b, i, 890 890 btree_node_bkey_bad_u64s, 891 - "bad k->u64s %u (min %u max %lu)", k->u64s, 891 + "bad k->u64s %u (min %u max %zu)", k->u64s, 892 892 bkeyp_key_u64s(&b->format, k), 893 893 U8_MAX - BKEY_U64s + bkeyp_key_u64s(&b->format, k))) 894 894 goto drop_this_key;
+4 -15
fs/bcachefs/btree_key_cache.c
··· 842 842 * Newest freed entries are at the end of the list - once we hit one 843 843 * that's too new to be freed, we can bail out: 844 844 */ 845 - scanned += bc->nr_freed_nonpcpu; 846 - 847 845 list_for_each_entry_safe(ck, t, &bc->freed_nonpcpu, list) { 848 846 if (!poll_state_synchronize_srcu(&c->btree_trans_barrier, 849 847 ck->btree_trans_barrier_seq)) ··· 855 857 bc->nr_freed_nonpcpu--; 856 858 } 857 859 858 - if (scanned >= nr) 859 - goto out; 860 - 861 - scanned += bc->nr_freed_pcpu; 862 - 863 860 list_for_each_entry_safe(ck, t, &bc->freed_pcpu, list) { 864 861 if (!poll_state_synchronize_srcu(&c->btree_trans_barrier, 865 862 ck->btree_trans_barrier_seq)) ··· 867 874 freed++; 868 875 bc->nr_freed_pcpu--; 869 876 } 870 - 871 - if (scanned >= nr) 872 - goto out; 873 877 874 878 rcu_read_lock(); 875 879 tbl = rht_dereference_rcu(bc->table.tbl, &bc->table); ··· 883 893 next = rht_dereference_bucket_rcu(pos->next, tbl, bc->shrink_iter); 884 894 ck = container_of(pos, struct bkey_cached, hash); 885 895 886 - if (test_bit(BKEY_CACHED_DIRTY, &ck->flags)) 896 + if (test_bit(BKEY_CACHED_DIRTY, &ck->flags)) { 887 897 goto next; 888 - 889 - if (test_bit(BKEY_CACHED_ACCESSED, &ck->flags)) 898 + } else if (test_bit(BKEY_CACHED_ACCESSED, &ck->flags)) { 890 899 clear_bit(BKEY_CACHED_ACCESSED, &ck->flags); 891 - else if (bkey_cached_lock_for_evict(ck)) { 900 + goto next; 901 + } else if (bkey_cached_lock_for_evict(ck)) { 892 902 bkey_cached_evict(bc, ck); 893 903 bkey_cached_free(bc, ck); 894 904 } ··· 906 916 } while (scanned < nr && bc->shrink_iter != start); 907 917 908 918 rcu_read_unlock(); 909 - out: 910 919 memalloc_nofs_restore(flags); 911 920 srcu_read_unlock(&c->btree_trans_barrier, srcu_idx); 912 921 mutex_unlock(&bc->lock);
+2
fs/bcachefs/btree_node_scan.c
··· 302 302 303 303 start->max_key = bpos_predecessor(n->min_key); 304 304 start->range_updated = true; 305 + } else if (n->level) { 306 + n->overwritten = true; 305 307 } else { 306 308 struct printbuf buf = PRINTBUF; 307 309
+1 -1
fs/bcachefs/btree_types.h
··· 321 321 struct btree_bkey_cached_common c; 322 322 323 323 unsigned long flags; 324 + unsigned long btree_trans_barrier_seq; 324 325 u16 u64s; 325 326 bool valid; 326 - u32 btree_trans_barrier_seq; 327 327 struct bkey_cached_key key; 328 328 329 329 struct rhash_head hash;
+5 -1
fs/bcachefs/btree_update_interior.c
··· 1960 1960 if ((flags & BCH_WATERMARK_MASK) == BCH_WATERMARK_interior_updates) 1961 1961 return 0; 1962 1962 1963 - flags &= ~BCH_WATERMARK_MASK; 1963 + if ((flags & BCH_WATERMARK_MASK) <= BCH_WATERMARK_reclaim) { 1964 + flags &= ~BCH_WATERMARK_MASK; 1965 + flags |= BCH_WATERMARK_btree; 1966 + flags |= BCH_TRANS_COMMIT_journal_reclaim; 1967 + } 1964 1968 1965 1969 b = trans->paths[path].l[level].b; 1966 1970
+3 -1
fs/bcachefs/chardev.c
··· 232 232 /* We need request_key() to be called before we punt to kthread: */ 233 233 opt_set(thr->opts, nostart, true); 234 234 235 + bch2_thread_with_stdio_init(&thr->thr, &bch2_offline_fsck_ops); 236 + 235 237 thr->c = bch2_fs_open(devs.data, arg.nr_devs, thr->opts); 236 238 237 239 if (!IS_ERR(thr->c) && 238 240 thr->c->opts.errors == BCH_ON_ERROR_panic) 239 241 thr->c->opts.errors = BCH_ON_ERROR_ro; 240 242 241 - ret = bch2_run_thread_with_stdio(&thr->thr, &bch2_offline_fsck_ops); 243 + ret = __bch2_run_thread_with_stdio(&thr->thr); 242 244 out: 243 245 darray_for_each(devs, i) 244 246 kfree(*i);
+6 -3
fs/bcachefs/fs.c
··· 188 188 BUG_ON(!old); 189 189 190 190 if (unlikely(old != inode)) { 191 - discard_new_inode(&inode->v); 191 + __destroy_inode(&inode->v); 192 + kmem_cache_free(bch2_inode_cache, inode); 192 193 inode = old; 193 194 } else { 194 195 mutex_lock(&c->vfs_inodes_lock); ··· 226 225 227 226 if (unlikely(!inode)) { 228 227 int ret = drop_locks_do(trans, (inode = to_bch_ei(new_inode(c->vfs_sb))) ? 0 : -ENOMEM); 229 - if (ret && inode) 230 - discard_new_inode(&inode->v); 228 + if (ret && inode) { 229 + __destroy_inode(&inode->v); 230 + kmem_cache_free(bch2_inode_cache, inode); 231 + } 231 232 if (ret) 232 233 return ERR_PTR(ret); 233 234 }
+42 -18
fs/bcachefs/journal_io.c
··· 1723 1723 percpu_ref_put(&ca->io_ref); 1724 1724 } 1725 1725 1726 - static CLOSURE_CALLBACK(do_journal_write) 1726 + static CLOSURE_CALLBACK(journal_write_submit) 1727 1727 { 1728 1728 closure_type(w, struct journal_buf, io); 1729 1729 struct journal *j = container_of(w, struct journal, buf[w->idx]); ··· 1766 1766 } 1767 1767 1768 1768 continue_at(cl, journal_write_done, j->wq); 1769 + } 1770 + 1771 + static CLOSURE_CALLBACK(journal_write_preflush) 1772 + { 1773 + closure_type(w, struct journal_buf, io); 1774 + struct journal *j = container_of(w, struct journal, buf[w->idx]); 1775 + struct bch_fs *c = container_of(j, struct bch_fs, journal); 1776 + 1777 + if (j->seq_ondisk + 1 != le64_to_cpu(w->data->seq)) { 1778 + spin_lock(&j->lock); 1779 + closure_wait(&j->async_wait, cl); 1780 + spin_unlock(&j->lock); 1781 + 1782 + continue_at(cl, journal_write_preflush, j->wq); 1783 + return; 1784 + } 1785 + 1786 + if (w->separate_flush) { 1787 + for_each_rw_member(c, ca) { 1788 + percpu_ref_get(&ca->io_ref); 1789 + 1790 + struct journal_device *ja = &ca->journal; 1791 + struct bio *bio = &ja->bio[w->idx]->bio; 1792 + bio_reset(bio, ca->disk_sb.bdev, 1793 + REQ_OP_WRITE|REQ_SYNC|REQ_META|REQ_PREFLUSH); 1794 + bio->bi_end_io = journal_write_endio; 1795 + bio->bi_private = ca; 1796 + closure_bio_submit(bio, cl); 1797 + } 1798 + 1799 + continue_at(cl, journal_write_submit, j->wq); 1800 + } else { 1801 + /* 1802 + * no need to punt to another work item if we're not waiting on 1803 + * preflushes 1804 + */ 1805 + journal_write_submit(&cl->work); 1806 + } 1769 1807 } 1770 1808 1771 1809 static int bch2_journal_write_prep(struct journal *j, struct journal_buf *w) ··· 2071 2033 goto err; 2072 2034 2073 2035 if (!JSET_NO_FLUSH(w->data)) 2074 - closure_wait_event(&j->async_wait, j->seq_ondisk + 1 == le64_to_cpu(w->data->seq)); 2075 - 2076 - if (!JSET_NO_FLUSH(w->data) && w->separate_flush) { 2077 - for_each_rw_member(c, ca) { 2078 - percpu_ref_get(&ca->io_ref); 2079 - 2080 - struct 
journal_device *ja = &ca->journal; 2081 - struct bio *bio = &ja->bio[w->idx]->bio; 2082 - bio_reset(bio, ca->disk_sb.bdev, 2083 - REQ_OP_WRITE|REQ_SYNC|REQ_META|REQ_PREFLUSH); 2084 - bio->bi_end_io = journal_write_endio; 2085 - bio->bi_private = ca; 2086 - closure_bio_submit(bio, cl); 2087 - } 2088 - } 2089 - 2090 - continue_at(cl, do_journal_write, j->wq); 2036 + continue_at(cl, journal_write_preflush, j->wq); 2037 + else 2038 + continue_at(cl, journal_write_submit, j->wq); 2091 2039 return; 2092 2040 no_io: 2093 2041 continue_at(cl, journal_write_done, j->wq);
+4 -1
fs/bcachefs/recovery.c
··· 249 249 250 250 struct journal_key *k = *kp; 251 251 252 - replay_now_at(j, k->journal_seq); 252 + if (k->journal_seq) 253 + replay_now_at(j, k->journal_seq); 254 + else 255 + replay_now_at(j, j->replay_journal_seq_end); 253 256 254 257 ret = commit_do(trans, NULL, NULL, 255 258 BCH_TRANS_COMMIT_no_enospc|
+8
fs/bcachefs/sb-clean.c
··· 29 29 for (entry = clean->start; 30 30 entry < (struct jset_entry *) vstruct_end(&clean->field); 31 31 entry = vstruct_next(entry)) { 32 + if (vstruct_end(entry) > vstruct_end(&clean->field)) { 33 + bch_err(c, "journal entry (u64s %u) overran end of superblock clean section (u64s %u) by %zu", 34 + le16_to_cpu(entry->u64s), le32_to_cpu(clean->field.u64s), 35 + (u64 *) vstruct_end(entry) - (u64 *) vstruct_end(&clean->field)); 36 + bch2_sb_error_count(c, BCH_FSCK_ERR_sb_clean_entry_overrun); 37 + return -BCH_ERR_fsck_repair_unimplemented; 38 + } 39 + 32 40 ret = bch2_journal_entry_validate(c, NULL, entry, 33 41 le16_to_cpu(c->disk_sb.sb->version), 34 42 BCH_SB_BIG_ENDIAN(c->disk_sb.sb),
+2 -1
fs/bcachefs/sb-errors_types.h
··· 271 271 x(btree_root_unreadable_and_scan_found_nothing, 263) \ 272 272 x(snapshot_node_missing, 264) \ 273 273 x(dup_backpointer_to_bad_csum_extent, 265) \ 274 - x(btree_bitmap_not_marked, 266) 274 + x(btree_bitmap_not_marked, 266) \ 275 + x(sb_clean_entry_overrun, 267) 275 276 276 277 enum bch_sb_error_id { 277 278 #define x(t, n) BCH_FSCK_ERR_##t = n,
+2 -2
fs/bcachefs/sb-members.c
··· 463 463 m->btree_bitmap_shift += resize; 464 464 } 465 465 466 - for (unsigned bit = sectors >> m->btree_bitmap_shift; 467 - bit << m->btree_bitmap_shift < end; 466 + for (unsigned bit = start >> m->btree_bitmap_shift; 467 + (u64) bit << m->btree_bitmap_shift < end; 468 468 bit++) 469 469 bitmap |= BIT_ULL(bit); 470 470
+3 -3
fs/bcachefs/sb-members.h
··· 235 235 { 236 236 u64 end = start + sectors; 237 237 238 - if (end > 64 << ca->mi.btree_bitmap_shift) 238 + if (end > 64ULL << ca->mi.btree_bitmap_shift) 239 239 return false; 240 240 241 - for (unsigned bit = sectors >> ca->mi.btree_bitmap_shift; 242 - bit << ca->mi.btree_bitmap_shift < end; 241 + for (unsigned bit = start >> ca->mi.btree_bitmap_shift; 242 + (u64) bit << ca->mi.btree_bitmap_shift < end; 243 243 bit++) 244 244 if (!(ca->mi.btree_allocated_bitmap & BIT_ULL(bit))) 245 245 return false;
+1
fs/bcachefs/super.c
··· 544 544 545 545 bch2_find_btree_nodes_exit(&c->found_btree_nodes); 546 546 bch2_free_pending_node_rewrites(c); 547 + bch2_fs_allocator_background_exit(c); 547 548 bch2_fs_sb_errors_exit(c); 548 549 bch2_fs_counters_exit(c); 549 550 bch2_fs_snapshots_exit(c);
+13 -2
fs/bcachefs/thread_with_file.c
··· 294 294 return 0; 295 295 } 296 296 297 - int bch2_run_thread_with_stdio(struct thread_with_stdio *thr, 298 - const struct thread_with_stdio_ops *ops) 297 + void bch2_thread_with_stdio_init(struct thread_with_stdio *thr, 298 + const struct thread_with_stdio_ops *ops) 299 299 { 300 300 stdio_buf_init(&thr->stdio.input); 301 301 stdio_buf_init(&thr->stdio.output); 302 302 thr->ops = ops; 303 + } 303 304 305 + int __bch2_run_thread_with_stdio(struct thread_with_stdio *thr) 306 + { 304 307 return bch2_run_thread_with_file(&thr->thr, &thread_with_stdio_fops, thread_with_stdio_fn); 308 + } 309 + 310 + int bch2_run_thread_with_stdio(struct thread_with_stdio *thr, 311 + const struct thread_with_stdio_ops *ops) 312 + { 313 + bch2_thread_with_stdio_init(thr, ops); 314 + 315 + return __bch2_run_thread_with_stdio(thr); 305 316 } 306 317 307 318 int bch2_run_thread_with_stdout(struct thread_with_stdio *thr,
+3
fs/bcachefs/thread_with_file.h
··· 63 63 const struct thread_with_stdio_ops *ops; 64 64 }; 65 65 66 + void bch2_thread_with_stdio_init(struct thread_with_stdio *, 67 + const struct thread_with_stdio_ops *); 68 + int __bch2_run_thread_with_stdio(struct thread_with_stdio *); 66 69 int bch2_run_thread_with_stdio(struct thread_with_stdio *, 67 70 const struct thread_with_stdio_ops *); 68 71 int bch2_run_thread_with_stdout(struct thread_with_stdio *,
+3 -9
fs/btrfs/backref.c
··· 2776 2776 size_t alloc_bytes; 2777 2777 2778 2778 alloc_bytes = max_t(size_t, total_bytes, sizeof(*data)); 2779 - data = kvmalloc(alloc_bytes, GFP_KERNEL); 2779 + data = kvzalloc(alloc_bytes, GFP_KERNEL); 2780 2780 if (!data) 2781 2781 return ERR_PTR(-ENOMEM); 2782 2782 2783 - if (total_bytes >= sizeof(*data)) { 2783 + if (total_bytes >= sizeof(*data)) 2784 2784 data->bytes_left = total_bytes - sizeof(*data); 2785 - data->bytes_missing = 0; 2786 - } else { 2785 + else 2787 2786 data->bytes_missing = sizeof(*data) - total_bytes; 2788 - data->bytes_left = 0; 2789 - } 2790 - 2791 - data->elem_cnt = 0; 2792 - data->elem_missed = 0; 2793 2787 2794 2788 return data; 2795 2789 }
+1 -1
fs/btrfs/extent_map.c
··· 817 817 split->block_len = em->block_len; 818 818 split->orig_start = em->orig_start; 819 819 } else { 820 - const u64 diff = start + len - em->start; 820 + const u64 diff = end - em->start; 821 821 822 822 split->block_len = split->len; 823 823 split->block_start += diff;
+6 -7
fs/btrfs/inode.c
··· 1145 1145 0, *alloc_hint, &ins, 1, 1); 1146 1146 if (ret) { 1147 1147 /* 1148 - * Here we used to try again by going back to non-compressed 1149 - * path for ENOSPC. But we can't reserve space even for 1150 - * compressed size, how could it work for uncompressed size 1151 - * which requires larger size? So here we directly go error 1152 - * path. 1148 + * We can't reserve contiguous space for the compressed size. 1149 + * Unlikely, but it's possible that we could have enough 1150 + * non-contiguous space for the uncompressed size instead. So 1151 + * fall back to uncompressed. 1153 1152 */ 1154 - goto out_free; 1153 + submit_uncompressed_range(inode, async_extent, locked_page); 1154 + goto done; 1155 1155 } 1156 1156 1157 1157 /* Here we're doing allocation and writeback of the compressed pages */ ··· 1203 1203 out_free_reserve: 1204 1204 btrfs_dec_block_group_reservations(fs_info, ins.objectid); 1205 1205 btrfs_free_reserved_extent(fs_info, ins.objectid, ins.offset, 1); 1206 - out_free: 1207 1206 mapping_set_error(inode->vfs_inode.i_mapping, -EIO); 1208 1207 extent_clear_unlock_delalloc(inode, start, end, 1209 1208 NULL, EXTENT_LOCKED | EXTENT_DELALLOC |
+1 -1
fs/btrfs/messages.c
··· 7 7 8 8 #ifdef CONFIG_PRINTK 9 9 10 - #define STATE_STRING_PREFACE ": state " 10 + #define STATE_STRING_PREFACE " state " 11 11 #define STATE_STRING_BUF_LEN (sizeof(STATE_STRING_PREFACE) + BTRFS_FS_STATE_COUNT + 1) 12 12 13 13 /*
+9 -9
fs/btrfs/scrub.c
··· 1012 1012 struct btrfs_fs_info *fs_info = sctx->fs_info; 1013 1013 int num_copies = btrfs_num_copies(fs_info, stripe->bg->start, 1014 1014 stripe->bg->length); 1015 + unsigned long repaired; 1015 1016 int mirror; 1016 1017 int i; 1017 1018 ··· 1079 1078 * Submit the repaired sectors. For zoned case, we cannot do repair 1080 1079 * in-place, but queue the bg to be relocated. 1081 1080 */ 1082 - if (btrfs_is_zoned(fs_info)) { 1083 - if (!bitmap_empty(&stripe->error_bitmap, stripe->nr_sectors)) 1081 + bitmap_andnot(&repaired, &stripe->init_error_bitmap, &stripe->error_bitmap, 1082 + stripe->nr_sectors); 1083 + if (!sctx->readonly && !bitmap_empty(&repaired, stripe->nr_sectors)) { 1084 + if (btrfs_is_zoned(fs_info)) { 1084 1085 btrfs_repair_one_zone(fs_info, sctx->stripes[0].bg->start); 1085 - } else if (!sctx->readonly) { 1086 - unsigned long repaired; 1087 - 1088 - bitmap_andnot(&repaired, &stripe->init_error_bitmap, 1089 - &stripe->error_bitmap, stripe->nr_sectors); 1090 - scrub_write_sectors(sctx, stripe, repaired, false); 1091 - wait_scrub_stripe_io(stripe); 1086 + } else { 1087 + scrub_write_sectors(sctx, stripe, repaired, false); 1088 + wait_scrub_stripe_io(stripe); 1089 + } 1092 1090 } 1093 1091 1094 1092 scrub_stripe_report_errors(sctx, stripe);
+5
fs/btrfs/tests/extent-map-tests.c
··· 847 847 goto out; 848 848 } 849 849 850 + if (em->block_start != SZ_32K + SZ_4K) { 851 + test_err("em->block_start is %llu, expected 36K", em->block_start); 852 + goto out; 853 + } 854 + 850 855 free_extent_map(em); 851 856 852 857 read_lock(&em_tree->lock);
+4
fs/fuse/cuse.c
··· 310 310 /** 311 311 * cuse_process_init_reply - finish initializing CUSE channel 312 312 * 313 + * @fm: The fuse mount information containing the CUSE connection. 314 + * @args: The arguments passed to the init reply. 315 + * @error: The error code signifying if any error occurred during the process. 316 + * 313 317 * This function creates the character device and sets up all the 314 318 * required data structures for it. Please read the comment at the 315 319 * top of this file for high level overview.
+1
fs/fuse/dir.c
··· 1321 1321 err = fuse_do_statx(inode, file, stat); 1322 1322 if (err == -ENOSYS) { 1323 1323 fc->no_statx = 1; 1324 + err = 0; 1324 1325 goto retry; 1325 1326 } 1326 1327 } else {
+7 -5
fs/fuse/file.c
··· 1362 1362 bool *exclusive) 1363 1363 { 1364 1364 struct inode *inode = file_inode(iocb->ki_filp); 1365 - struct fuse_file *ff = iocb->ki_filp->private_data; 1365 + struct fuse_inode *fi = get_fuse_inode(inode); 1366 1366 1367 1367 *exclusive = fuse_dio_wr_exclusive_lock(iocb, from); 1368 1368 if (*exclusive) { ··· 1377 1377 * have raced, so check it again. 1378 1378 */ 1379 1379 if (fuse_io_past_eof(iocb, from) || 1380 - fuse_file_uncached_io_start(inode, ff, NULL) != 0) { 1380 + fuse_inode_uncached_io_start(fi, NULL) != 0) { 1381 1381 inode_unlock_shared(inode); 1382 1382 inode_lock(inode); 1383 1383 *exclusive = true; ··· 1388 1388 static void fuse_dio_unlock(struct kiocb *iocb, bool exclusive) 1389 1389 { 1390 1390 struct inode *inode = file_inode(iocb->ki_filp); 1391 - struct fuse_file *ff = iocb->ki_filp->private_data; 1391 + struct fuse_inode *fi = get_fuse_inode(inode); 1392 1392 1393 1393 if (exclusive) { 1394 1394 inode_unlock(inode); 1395 1395 } else { 1396 1396 /* Allow opens in caching mode after last parallel dio end */ 1397 - fuse_file_uncached_io_end(inode, ff); 1397 + fuse_inode_uncached_io_end(fi); 1398 1398 inode_unlock_shared(inode); 1399 1399 } 1400 1400 } ··· 2574 2574 * First mmap of direct_io file enters caching inode io mode. 2575 2575 * Also waits for parallel dio writers to go into serial mode 2576 2576 * (exclusive instead of shared lock). 2577 + * After first mmap, the inode stays in caching io mode until 2578 + * the direct_io file release. 2577 2579 */ 2578 - rc = fuse_file_cached_io_start(inode, ff); 2580 + rc = fuse_file_cached_io_open(inode, ff); 2579 2581 if (rc) 2580 2582 return rc; 2581 2583 }
+4 -3
fs/fuse/fuse_i.h
··· 1394 1394 struct dentry *dentry, struct fileattr *fa); 1395 1395 1396 1396 /* iomode.c */ 1397 - int fuse_file_cached_io_start(struct inode *inode, struct fuse_file *ff); 1398 - int fuse_file_uncached_io_start(struct inode *inode, struct fuse_file *ff, struct fuse_backing *fb); 1399 - void fuse_file_uncached_io_end(struct inode *inode, struct fuse_file *ff); 1397 + int fuse_file_cached_io_open(struct inode *inode, struct fuse_file *ff); 1398 + int fuse_inode_uncached_io_start(struct fuse_inode *fi, 1399 + struct fuse_backing *fb); 1400 + void fuse_inode_uncached_io_end(struct fuse_inode *fi); 1400 1401 1401 1402 int fuse_file_io_open(struct file *file, struct inode *inode); 1402 1403 void fuse_file_io_release(struct fuse_file *ff, struct inode *inode);
+1
fs/fuse/inode.c
··· 175 175 } 176 176 } 177 177 if (S_ISREG(inode->i_mode) && !fuse_is_bad(inode)) { 178 + WARN_ON(fi->iocachectr != 0); 178 179 WARN_ON(!list_empty(&fi->write_files)); 179 180 WARN_ON(!list_empty(&fi->queued_writes)); 180 181 }
+41 -19
fs/fuse/iomode.c
··· 21 21 } 22 22 23 23 /* 24 - * Start cached io mode. 24 + * Called on cached file open() and on first mmap() of direct_io file. 25 + * Takes cached_io inode mode reference to be dropped on file release. 25 26 * 26 27 * Blocks new parallel dio writes and waits for the in-progress parallel dio 27 28 * writes to complete. 28 29 */ 29 - int fuse_file_cached_io_start(struct inode *inode, struct fuse_file *ff) 30 + int fuse_file_cached_io_open(struct inode *inode, struct fuse_file *ff) 30 31 { 31 32 struct fuse_inode *fi = get_fuse_inode(inode); 32 33 ··· 68 67 return 0; 69 68 } 70 69 71 - static void fuse_file_cached_io_end(struct inode *inode, struct fuse_file *ff) 70 + static void fuse_file_cached_io_release(struct fuse_file *ff, 71 + struct fuse_inode *fi) 72 72 { 73 - struct fuse_inode *fi = get_fuse_inode(inode); 74 - 75 73 spin_lock(&fi->lock); 76 74 WARN_ON(fi->iocachectr <= 0); 77 75 WARN_ON(ff->iomode != IOM_CACHED); ··· 82 82 } 83 83 84 84 /* Start strictly uncached io mode where cache access is not allowed */ 85 - int fuse_file_uncached_io_start(struct inode *inode, struct fuse_file *ff, struct fuse_backing *fb) 85 + int fuse_inode_uncached_io_start(struct fuse_inode *fi, struct fuse_backing *fb) 86 86 { 87 - struct fuse_inode *fi = get_fuse_inode(inode); 88 87 struct fuse_backing *oldfb; 89 88 int err = 0; 90 89 91 90 spin_lock(&fi->lock); 92 91 /* deny conflicting backing files on same fuse inode */ 93 92 oldfb = fuse_inode_backing(fi); 94 - if (oldfb && oldfb != fb) { 93 + if (fb && oldfb && oldfb != fb) { 95 94 err = -EBUSY; 96 95 goto unlock; 97 96 } ··· 98 99 err = -ETXTBSY; 99 100 goto unlock; 100 101 } 101 - WARN_ON(ff->iomode != IOM_NONE); 102 102 fi->iocachectr--; 103 - ff->iomode = IOM_UNCACHED; 104 103 105 104 /* fuse inode holds a single refcount of backing file */ 106 - if (!oldfb) { 105 + if (fb && !oldfb) { 107 106 oldfb = fuse_inode_backing_set(fi, fb); 108 107 WARN_ON_ONCE(oldfb != NULL); 109 108 } else { ··· 112 115 return err; 113 116 } 
114 117 115 - void fuse_file_uncached_io_end(struct inode *inode, struct fuse_file *ff) 118 + /* Takes uncached_io inode mode reference to be dropped on file release */ 119 + static int fuse_file_uncached_io_open(struct inode *inode, 120 + struct fuse_file *ff, 121 + struct fuse_backing *fb) 116 122 { 117 123 struct fuse_inode *fi = get_fuse_inode(inode); 124 + int err; 125 + 126 + err = fuse_inode_uncached_io_start(fi, fb); 127 + if (err) 128 + return err; 129 + 130 + WARN_ON(ff->iomode != IOM_NONE); 131 + ff->iomode = IOM_UNCACHED; 132 + return 0; 133 + } 134 + 135 + void fuse_inode_uncached_io_end(struct fuse_inode *fi) 136 + { 118 137 struct fuse_backing *oldfb = NULL; 119 138 120 139 spin_lock(&fi->lock); 121 140 WARN_ON(fi->iocachectr >= 0); 122 - WARN_ON(ff->iomode != IOM_UNCACHED); 123 - ff->iomode = IOM_NONE; 124 141 fi->iocachectr++; 125 142 if (!fi->iocachectr) { 126 143 wake_up(&fi->direct_io_waitq); ··· 143 132 spin_unlock(&fi->lock); 144 133 if (oldfb) 145 134 fuse_backing_put(oldfb); 135 + } 136 + 137 + /* Drop uncached_io reference from passthrough open */ 138 + static void fuse_file_uncached_io_release(struct fuse_file *ff, 139 + struct fuse_inode *fi) 140 + { 141 + WARN_ON(ff->iomode != IOM_UNCACHED); 142 + ff->iomode = IOM_NONE; 143 + fuse_inode_uncached_io_end(fi); 146 144 } 147 145 148 146 /* ··· 183 163 return PTR_ERR(fb); 184 164 185 165 /* First passthrough file open denies caching inode io mode */ 186 - err = fuse_file_uncached_io_start(inode, ff, fb); 166 + err = fuse_file_uncached_io_open(inode, ff, fb); 187 167 if (!err) 188 168 return 0; 189 169 ··· 236 216 if (ff->open_flags & FOPEN_PASSTHROUGH) 237 217 err = fuse_file_passthrough_open(inode, file); 238 218 else 239 - err = fuse_file_cached_io_start(inode, ff); 219 + err = fuse_file_cached_io_open(inode, ff); 240 220 if (err) 241 221 goto fail; 242 222 ··· 256 236 /* No more pending io and no new io possible to inode via open/mmapped file */ 257 237 void fuse_file_io_release(struct 
fuse_file *ff, struct inode *inode) 258 238 { 239 + struct fuse_inode *fi = get_fuse_inode(inode); 240 + 259 241 /* 260 - * Last parallel dio close allows caching inode io mode. 242 + * Last passthrough file close allows caching inode io mode. 261 243 * Last caching file close exits caching inode io mode. 262 244 */ 263 245 switch (ff->iomode) { ··· 267 245 /* Nothing to do */ 268 246 break; 269 247 case IOM_UNCACHED: 270 - fuse_file_uncached_io_end(inode, ff); 248 + fuse_file_uncached_io_release(ff, fi); 271 249 break; 272 250 case IOM_CACHED: 273 - fuse_file_cached_io_end(inode, ff); 251 + fuse_file_cached_io_release(ff, fi); 274 252 break; 275 253 } 276 254 }
+5 -21
fs/nfsd/nfs4callback.c
··· 983 983 static bool nfsd4_queue_cb(struct nfsd4_callback *cb) 984 984 { 985 985 trace_nfsd_cb_queue(cb->cb_clp, cb); 986 - return queue_delayed_work(callback_wq, &cb->cb_work, 0); 987 - } 988 - 989 - static void nfsd4_queue_cb_delayed(struct nfsd4_callback *cb, 990 - unsigned long msecs) 991 - { 992 - trace_nfsd_cb_queue(cb->cb_clp, cb); 993 - queue_delayed_work(callback_wq, &cb->cb_work, 994 - msecs_to_jiffies(msecs)); 986 + return queue_work(callback_wq, &cb->cb_work); 995 987 } 996 988 997 989 static void nfsd41_cb_inflight_begin(struct nfs4_client *clp) ··· 1482 1490 nfsd4_run_cb_work(struct work_struct *work) 1483 1491 { 1484 1492 struct nfsd4_callback *cb = 1485 - container_of(work, struct nfsd4_callback, cb_work.work); 1493 + container_of(work, struct nfsd4_callback, cb_work); 1486 1494 struct nfs4_client *clp = cb->cb_clp; 1487 1495 struct rpc_clnt *clnt; 1488 1496 int flags; ··· 1494 1502 1495 1503 clnt = clp->cl_cb_client; 1496 1504 if (!clnt) { 1497 - if (test_bit(NFSD4_CLIENT_CB_KILL, &clp->cl_flags)) 1498 - nfsd41_destroy_cb(cb); 1499 - else { 1500 - /* 1501 - * XXX: Ideally, we could wait for the client to 1502 - * reconnect, but I haven't figured out how 1503 - * to do that yet. 1504 - */ 1505 - nfsd4_queue_cb_delayed(cb, 25); 1506 - } 1505 + /* Callback channel broken, or client killed; give up: */ 1506 + nfsd41_destroy_cb(cb); 1507 1507 return; 1508 1508 } 1509 1509 ··· 1528 1544 cb->cb_msg.rpc_argp = cb; 1529 1545 cb->cb_msg.rpc_resp = cb; 1530 1546 cb->cb_ops = ops; 1531 - INIT_DELAYED_WORK(&cb->cb_work, nfsd4_run_cb_work); 1547 + INIT_WORK(&cb->cb_work, nfsd4_run_cb_work); 1532 1548 cb->cb_status = 0; 1533 1549 cb->cb_need_restart = false; 1534 1550 cb->cb_holds_slot = false;
+1 -1
fs/nfsd/state.h
··· 68 68 struct nfs4_client *cb_clp; 69 69 struct rpc_message cb_msg; 70 70 const struct nfsd4_callback_ops *cb_ops; 71 - struct delayed_work cb_work; 71 + struct work_struct cb_work; 72 72 int cb_seq_status; 73 73 int cb_status; 74 74 bool cb_need_restart;
+1 -1
fs/nilfs2/dir.c
··· 240 240 241 241 #define S_SHIFT 12 242 242 static unsigned char 243 - nilfs_type_by_mode[S_IFMT >> S_SHIFT] = { 243 + nilfs_type_by_mode[(S_IFMT >> S_SHIFT) + 1] = { 244 244 [S_IFREG >> S_SHIFT] = NILFS_FT_REG_FILE, 245 245 [S_IFDIR >> S_SHIFT] = NILFS_FT_DIR, 246 246 [S_IFCHR >> S_SHIFT] = NILFS_FT_CHRDEV,
+3
fs/smb/client/cifsfs.c
··· 389 389 * server, can not assume caching of file data or metadata. 390 390 */ 391 391 cifs_set_oplock_level(cifs_inode, 0); 392 + cifs_inode->lease_granted = false; 392 393 cifs_inode->flags = 0; 393 394 spin_lock_init(&cifs_inode->writers_lock); 394 395 cifs_inode->writers = 0; ··· 740 739 741 740 spin_lock(&cifs_tcp_ses_lock); 742 741 spin_lock(&tcon->tc_lock); 742 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 743 + netfs_trace_tcon_ref_see_umount); 743 744 if ((tcon->tc_count > 1) || (tcon->status == TID_EXITING)) { 744 745 /* we have other mounts to same share or we have 745 746 already tried to umount this and woken up
+3
fs/smb/client/cifsglob.h
··· 1190 1190 */ 1191 1191 struct cifs_tcon { 1192 1192 struct list_head tcon_list; 1193 + int debug_id; /* Debugging for tracing */ 1193 1194 int tc_count; 1194 1195 struct list_head rlist; /* reconnect list */ 1195 1196 spinlock_t tc_lock; /* protect anything here that is not protected */ ··· 1277 1276 __u32 max_cached_dirs; 1278 1277 #ifdef CONFIG_CIFS_FSCACHE 1279 1278 u64 resource_id; /* server resource id */ 1279 + bool fscache_acquired; /* T if we've tried acquiring a cookie */ 1280 1280 struct fscache_volume *fscache; /* cookie for share */ 1281 + struct mutex fscache_lock; /* Prevent regetting a cookie */ 1281 1282 #endif 1282 1283 struct list_head pending_opens; /* list of incomplete opens */ 1283 1284 struct cached_fids *cfids;
+4 -5
fs/smb/client/cifsproto.h
··· 303 303 struct TCP_Server_Info *primary_server); 304 304 extern void cifs_put_tcp_session(struct TCP_Server_Info *server, 305 305 int from_reconnect); 306 - extern void cifs_put_tcon(struct cifs_tcon *tcon); 306 + extern void cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace); 307 307 308 308 extern void cifs_release_automount_timer(void); 309 309 ··· 530 530 531 531 extern struct cifs_ses *sesInfoAlloc(void); 532 532 extern void sesInfoFree(struct cifs_ses *); 533 - extern struct cifs_tcon *tcon_info_alloc(bool dir_leases_enabled); 534 - extern void tconInfoFree(struct cifs_tcon *); 533 + extern struct cifs_tcon *tcon_info_alloc(bool dir_leases_enabled, 534 + enum smb3_tcon_ref_trace trace); 535 + extern void tconInfoFree(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace); 535 536 536 537 extern int cifs_sign_rqst(struct smb_rqst *rqst, struct TCP_Server_Info *server, 537 538 __u32 *pexpected_response_sequence_number); ··· 722 721 return options; 723 722 } 724 723 725 - struct super_block *cifs_get_tcon_super(struct cifs_tcon *tcon); 726 - void cifs_put_tcon_super(struct super_block *sb); 727 724 int cifs_wait_for_server_reconnect(struct TCP_Server_Info *server, bool retry); 728 725 729 726 /* Put references of @ses and its children */
+12 -9
fs/smb/client/connect.c
··· 1943 1943 } 1944 1944 1945 1945 /* no need to setup directory caching on IPC share, so pass in false */ 1946 - tcon = tcon_info_alloc(false); 1946 + tcon = tcon_info_alloc(false, netfs_trace_tcon_ref_new_ipc); 1947 1947 if (tcon == NULL) 1948 1948 return -ENOMEM; 1949 1949 ··· 1960 1960 1961 1961 if (rc) { 1962 1962 cifs_server_dbg(VFS, "failed to connect to IPC (rc=%d)\n", rc); 1963 - tconInfoFree(tcon); 1963 + tconInfoFree(tcon, netfs_trace_tcon_ref_free_ipc_fail); 1964 1964 goto out; 1965 1965 } 1966 1966 ··· 2043 2043 * files on session close, as specified in MS-SMB2 3.3.5.6 Receiving an 2044 2044 * SMB2 LOGOFF Request. 2045 2045 */ 2046 - tconInfoFree(tcon); 2046 + tconInfoFree(tcon, netfs_trace_tcon_ref_free_ipc); 2047 2047 if (do_logoff) { 2048 2048 xid = get_xid(); 2049 2049 rc = server->ops->logoff(xid, ses); ··· 2432 2432 continue; 2433 2433 } 2434 2434 ++tcon->tc_count; 2435 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 2436 + netfs_trace_tcon_ref_get_find); 2435 2437 spin_unlock(&tcon->tc_lock); 2436 2438 spin_unlock(&cifs_tcp_ses_lock); 2437 2439 return tcon; ··· 2443 2441 } 2444 2442 2445 2443 void 2446 - cifs_put_tcon(struct cifs_tcon *tcon) 2444 + cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace) 2447 2445 { 2448 2446 unsigned int xid; 2449 2447 struct cifs_ses *ses; ··· 2459 2457 cifs_dbg(FYI, "%s: tc_count=%d\n", __func__, tcon->tc_count); 2460 2458 spin_lock(&cifs_tcp_ses_lock); 2461 2459 spin_lock(&tcon->tc_lock); 2460 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count - 1, trace); 2462 2461 if (--tcon->tc_count > 0) { 2463 2462 spin_unlock(&tcon->tc_lock); 2464 2463 spin_unlock(&cifs_tcp_ses_lock); ··· 2496 2493 _free_xid(xid); 2497 2494 2498 2495 cifs_fscache_release_super_cookie(tcon); 2499 - tconInfoFree(tcon); 2496 + tconInfoFree(tcon, netfs_trace_tcon_ref_free); 2500 2497 cifs_put_smb_ses(ses); 2501 2498 } 2502 2499 ··· 2550 2547 nohandlecache = ctx->nohandlecache; 2551 2548 else 2552 2549 nohandlecache = 
true; 2553 - tcon = tcon_info_alloc(!nohandlecache); 2550 + tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new); 2554 2551 if (tcon == NULL) { 2555 2552 rc = -ENOMEM; 2556 2553 goto out_fail; ··· 2740 2737 return tcon; 2741 2738 2742 2739 out_fail: 2743 - tconInfoFree(tcon); 2740 + tconInfoFree(tcon, netfs_trace_tcon_ref_free_fail); 2744 2741 return ERR_PTR(rc); 2745 2742 } 2746 2743 ··· 2757 2754 } 2758 2755 2759 2756 if (!IS_ERR(tlink_tcon(tlink))) 2760 - cifs_put_tcon(tlink_tcon(tlink)); 2757 + cifs_put_tcon(tlink_tcon(tlink), netfs_trace_tcon_ref_put_tlink); 2761 2758 kfree(tlink); 2762 2759 } 2763 2760 ··· 3322 3319 int rc = 0; 3323 3320 3324 3321 if (mnt_ctx->tcon) 3325 - cifs_put_tcon(mnt_ctx->tcon); 3322 + cifs_put_tcon(mnt_ctx->tcon, netfs_trace_tcon_ref_put_mnt_ctx); 3326 3323 else if (mnt_ctx->ses) 3327 3324 cifs_put_smb_ses(mnt_ctx->ses); 3328 3325 else if (mnt_ctx->server)
+12
fs/smb/client/fs_context.c
··· 748 748 /* set the port that we got earlier */ 749 749 cifs_set_port((struct sockaddr *)&ctx->dstaddr, ctx->port); 750 750 751 + if (ctx->uid_specified && !ctx->forceuid_specified) { 752 + ctx->override_uid = 1; 753 + pr_notice("enabling forceuid mount option implicitly because uid= option is specified\n"); 754 + } 755 + 756 + if (ctx->gid_specified && !ctx->forcegid_specified) { 757 + ctx->override_gid = 1; 758 + pr_notice("enabling forcegid mount option implicitly because gid= option is specified\n"); 759 + } 760 + 751 761 if (ctx->override_uid && !ctx->uid_specified) { 752 762 ctx->override_uid = 0; 753 763 pr_notice("ignoring forceuid mount option specified with no uid= option\n"); ··· 1029 1019 ctx->override_uid = 0; 1030 1020 else 1031 1021 ctx->override_uid = 1; 1022 + ctx->forceuid_specified = true; 1032 1023 break; 1033 1024 case Opt_forcegid: 1034 1025 if (result.negated) 1035 1026 ctx->override_gid = 0; 1036 1027 else 1037 1028 ctx->override_gid = 1; 1029 + ctx->forcegid_specified = true; 1038 1030 break; 1039 1031 case Opt_perm: 1040 1032 if (result.negated)
+2
fs/smb/client/fs_context.h
··· 165 165 }; 166 166 167 167 struct smb3_fs_context { 168 + bool forceuid_specified; 169 + bool forcegid_specified; 168 170 bool uid_specified; 169 171 bool cruid_specified; 170 172 bool gid_specified;
+20
fs/smb/client/fscache.c
··· 43 43 char *key; 44 44 int ret = -ENOMEM; 45 45 46 + if (tcon->fscache_acquired) 47 + return 0; 48 + 49 + mutex_lock(&tcon->fscache_lock); 50 + if (tcon->fscache_acquired) { 51 + mutex_unlock(&tcon->fscache_lock); 52 + return 0; 53 + } 54 + tcon->fscache_acquired = true; 55 + 46 56 tcon->fscache = NULL; 47 57 switch (sa->sa_family) { 48 58 case AF_INET: 49 59 case AF_INET6: 50 60 break; 51 61 default: 62 + mutex_unlock(&tcon->fscache_lock); 52 63 cifs_dbg(VFS, "Unknown network family '%d'\n", sa->sa_family); 53 64 return -EINVAL; 54 65 } ··· 68 57 69 58 sharename = extract_sharename(tcon->tree_name); 70 59 if (IS_ERR(sharename)) { 60 + mutex_unlock(&tcon->fscache_lock); 71 61 cifs_dbg(FYI, "%s: couldn't extract sharename\n", __func__); 72 62 return PTR_ERR(sharename); 73 63 } ··· 94 82 } 95 83 pr_err("Cache volume key already in use (%s)\n", key); 96 84 vcookie = NULL; 85 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 86 + netfs_trace_tcon_ref_see_fscache_collision); 87 + } else { 88 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 89 + netfs_trace_tcon_ref_see_fscache_okay); 97 90 } 98 91 99 92 tcon->fscache = vcookie; ··· 107 90 kfree(key); 108 91 out: 109 92 kfree(sharename); 93 + mutex_unlock(&tcon->fscache_lock); 110 94 return ret; 111 95 } 112 96 ··· 120 102 cifs_fscache_fill_volume_coherency(tcon, &cd); 121 103 fscache_relinquish_volume(tcon->fscache, &cd, false); 122 104 tcon->fscache = NULL; 105 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 106 + netfs_trace_tcon_ref_see_fscache_relinq); 123 107 } 124 108 125 109 void cifs_fscache_get_inode_cookie(struct inode *inode)
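The fscache hunk above makes cookie acquisition idempotent: a flag is checked once without the lock as a fast path, then re-checked under `fscache_lock` before the expensive setup runs. A minimal userspace sketch of that double-checked pattern, using pthreads — all names here are illustrative, not the kernel API:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy container: acquire the "cookie" at most once, even if several
 * callers race into acquire_cookie() at the same time. */
struct conn {
	pthread_mutex_t lock;
	bool acquired;		/* mirrors tcon->fscache_acquired */
	int cookie;		/* stands in for the fscache volume cookie */
	int acquire_calls;	/* counts how often the slow path ran */
};

static int acquire_cookie(struct conn *c)
{
	if (c->acquired)		/* fast path: no lock taken */
		return 0;

	pthread_mutex_lock(&c->lock);
	if (c->acquired) {		/* re-check under the lock */
		pthread_mutex_unlock(&c->lock);
		return 0;
	}
	c->acquired = true;		/* mark "tried", as the patch does */
	c->cookie = 42;			/* expensive setup happens once */
	c->acquire_calls++;
	pthread_mutex_unlock(&c->lock);
	return 0;
}
```

As in the patch, the flag records that acquisition was *attempted*, so a failed attempt is not retried on every open.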
+10 -3
fs/smb/client/misc.c
··· 111 111 } 112 112 113 113 struct cifs_tcon * 114 - tcon_info_alloc(bool dir_leases_enabled) 114 + tcon_info_alloc(bool dir_leases_enabled, enum smb3_tcon_ref_trace trace) 115 115 { 116 116 struct cifs_tcon *ret_buf; 117 + static atomic_t tcon_debug_id; 117 118 118 119 ret_buf = kzalloc(sizeof(*ret_buf), GFP_KERNEL); 119 120 if (!ret_buf) ··· 131 130 132 131 atomic_inc(&tconInfoAllocCount); 133 132 ret_buf->status = TID_NEW; 134 - ++ret_buf->tc_count; 133 + ret_buf->debug_id = atomic_inc_return(&tcon_debug_id); 134 + ret_buf->tc_count = 1; 135 135 spin_lock_init(&ret_buf->tc_lock); 136 136 INIT_LIST_HEAD(&ret_buf->openFileList); 137 137 INIT_LIST_HEAD(&ret_buf->tcon_list); ··· 141 139 atomic_set(&ret_buf->num_local_opens, 0); 142 140 atomic_set(&ret_buf->num_remote_opens, 0); 143 141 ret_buf->stats_from_time = ktime_get_real_seconds(); 142 + #ifdef CONFIG_CIFS_FSCACHE 143 + mutex_init(&ret_buf->fscache_lock); 144 + #endif 145 + trace_smb3_tcon_ref(ret_buf->debug_id, ret_buf->tc_count, trace); 144 146 145 147 return ret_buf; 146 148 } 147 149 148 150 void 149 - tconInfoFree(struct cifs_tcon *tcon) 151 + tconInfoFree(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace) 150 152 { 151 153 if (tcon == NULL) { 152 154 cifs_dbg(FYI, "Null buffer passed to tconInfoFree\n"); 153 155 return; 154 156 } 157 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, trace); 155 158 free_cached_dirs(tcon->cfids); 156 159 atomic_dec(&tconInfoAllocCount); 157 160 kfree(tcon->nativeFileSystem);
+7 -3
fs/smb/client/smb2misc.c
··· 767 767 if (rc) 768 768 cifs_tcon_dbg(VFS, "Close cancelled mid failed rc:%d\n", rc); 769 769 770 - cifs_put_tcon(tcon); 770 + cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cancelled_close_fid); 771 771 kfree(cancelled); 772 772 } 773 773 ··· 811 811 if (tcon->tc_count <= 0) { 812 812 struct TCP_Server_Info *server = NULL; 813 813 814 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 815 + netfs_trace_tcon_ref_see_cancelled_close); 814 816 WARN_ONCE(tcon->tc_count < 0, "tcon refcount is negative"); 815 817 spin_unlock(&cifs_tcp_ses_lock); 816 818 ··· 825 823 return 0; 826 824 } 827 825 tcon->tc_count++; 826 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 827 + netfs_trace_tcon_ref_get_cancelled_close); 828 828 spin_unlock(&cifs_tcp_ses_lock); 829 829 830 830 rc = __smb2_handle_cancelled_cmd(tcon, SMB2_CLOSE_HE, 0, 831 831 persistent_fid, volatile_fid); 832 832 if (rc) 833 - cifs_put_tcon(tcon); 833 + cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cancelled_close); 834 834 835 835 return rc; 836 836 } ··· 860 856 rsp->PersistentFileId, 861 857 rsp->VolatileFileId); 862 858 if (rc) 863 - cifs_put_tcon(tcon); 859 + cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cancelled_mid); 864 860 865 861 return rc; 866 862 }
+6 -1
fs/smb/client/smb2ops.c
··· 2915 2915 tcon = list_first_entry_or_null(&ses->tcon_list, 2916 2916 struct cifs_tcon, 2917 2917 tcon_list); 2918 - if (tcon) 2918 + if (tcon) { 2919 2919 tcon->tc_count++; 2920 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 2921 + netfs_trace_tcon_ref_get_dfs_refer); 2922 + } 2920 2923 spin_unlock(&cifs_tcp_ses_lock); 2921 2924 } 2922 2925 ··· 2983 2980 /* ipc tcons are not refcounted */ 2984 2981 spin_lock(&cifs_tcp_ses_lock); 2985 2982 tcon->tc_count--; 2983 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 2984 + netfs_trace_tcon_ref_dec_dfs_refer); 2986 2985 /* tc_count can never go negative */ 2987 2986 WARN_ON(tcon->tc_count < 0); 2988 2987 spin_unlock(&cifs_tcp_ses_lock);
+5 -3
fs/smb/client/smb2pdu.c
··· 4138 4138 list_for_each_entry(tcon, &ses->tcon_list, tcon_list) { 4139 4139 if (tcon->need_reconnect || tcon->need_reopen_files) { 4140 4140 tcon->tc_count++; 4141 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 4142 + netfs_trace_tcon_ref_get_reconnect_server); 4141 4143 list_add_tail(&tcon->rlist, &tmp_list); 4142 4144 tcon_selected = true; 4143 4145 } ··· 4178 4176 if (tcon->ipc) 4179 4177 cifs_put_smb_ses(tcon->ses); 4180 4178 else 4181 - cifs_put_tcon(tcon); 4179 + cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_reconnect_server); 4182 4180 } 4183 4181 4184 4182 if (!ses_exist) 4185 4183 goto done; 4186 4184 4187 4185 /* allocate a dummy tcon struct used for reconnect */ 4188 - tcon = tcon_info_alloc(false); 4186 + tcon = tcon_info_alloc(false, netfs_trace_tcon_ref_new_reconnect_server); 4189 4187 if (!tcon) { 4190 4188 resched = true; 4191 4189 list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) { ··· 4208 4206 list_del_init(&ses->rlist); 4209 4207 cifs_put_smb_ses(ses); 4210 4208 } 4211 - tconInfoFree(tcon); 4209 + tconInfoFree(tcon, netfs_trace_tcon_ref_free_reconnect_server); 4212 4210 4213 4211 done: 4214 4212 cifs_dbg(FYI, "Reconnecting tcons and channels finished\n");
+2
fs/smb/client/smb2transport.c
··· 189 189 if (tcon->tid != tid) 190 190 continue; 191 191 ++tcon->tc_count; 192 + trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count, 193 + netfs_trace_tcon_ref_get_find_sess_tcon); 192 194 return tcon; 193 195 } 194 196
+90 -2
fs/smb/client/trace.h
··· 3 3 * Copyright (C) 2018, Microsoft Corporation. 4 4 * 5 5 * Author(s): Steve French <stfrench@microsoft.com> 6 + * 7 + * Please use this 3-part article as a reference for writing new tracepoints: 8 + * https://lwn.net/Articles/379903/ 6 9 */ 7 10 #undef TRACE_SYSTEM 8 11 #define TRACE_SYSTEM cifs ··· 18 15 #include <linux/inet.h> 19 16 20 17 /* 21 - * Please use this 3-part article as a reference for writing new tracepoints: 22 - * https://lwn.net/Articles/379903/ 18 + * Specify enums for tracing information. 23 19 */ 20 + #define smb3_tcon_ref_traces \ 21 + EM(netfs_trace_tcon_ref_dec_dfs_refer, "DEC DfsRef") \ 22 + EM(netfs_trace_tcon_ref_free, "FRE ") \ 23 + EM(netfs_trace_tcon_ref_free_fail, "FRE Fail ") \ 24 + EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \ 25 + EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \ 26 + EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \ 27 + EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \ 28 + EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \ 29 + EM(netfs_trace_tcon_ref_get_find, "GET Find ") \ 30 + EM(netfs_trace_tcon_ref_get_find_sess_tcon, "GET FndSes") \ 31 + EM(netfs_trace_tcon_ref_get_reconnect_server, "GET Reconn") \ 32 + EM(netfs_trace_tcon_ref_new, "NEW ") \ 33 + EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \ 34 + EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \ 35 + EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \ 36 + EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \ 37 + EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \ 38 + EM(netfs_trace_tcon_ref_put_mnt_ctx, "PUT MntCtx") \ 39 + EM(netfs_trace_tcon_ref_put_reconnect_server, "PUT Reconn") \ 40 + EM(netfs_trace_tcon_ref_put_tlink, "PUT Tlink ") \ 41 + EM(netfs_trace_tcon_ref_see_cancelled_close, "SEE Cn-Cls") \ 42 + EM(netfs_trace_tcon_ref_see_fscache_collision, "SEE FV-CO!") \ 43 + EM(netfs_trace_tcon_ref_see_fscache_okay, "SEE FV-Ok ") \ 44 + 
EM(netfs_trace_tcon_ref_see_fscache_relinq, "SEE FV-Rlq") \ 45 + E_(netfs_trace_tcon_ref_see_umount, "SEE Umount") 46 + 47 + #undef EM 48 + #undef E_ 49 + 50 + /* 51 + * Define those tracing enums. 52 + */ 53 + #ifndef __SMB3_DECLARE_TRACE_ENUMS_ONCE_ONLY 54 + #define __SMB3_DECLARE_TRACE_ENUMS_ONCE_ONLY 55 + 56 + #define EM(a, b) a, 57 + #define E_(a, b) a 58 + 59 + enum smb3_tcon_ref_trace { smb3_tcon_ref_traces } __mode(byte); 60 + 61 + #undef EM 62 + #undef E_ 63 + #endif 64 + 65 + /* 66 + * Export enum symbols via userspace. 67 + */ 68 + #define EM(a, b) TRACE_DEFINE_ENUM(a); 69 + #define E_(a, b) TRACE_DEFINE_ENUM(a); 70 + 71 + smb3_tcon_ref_traces; 72 + 73 + #undef EM 74 + #undef E_ 75 + 76 + /* 77 + * Now redefine the EM() and E_() macros to map the enums to the strings that 78 + * will be printed in the output. 79 + */ 80 + #define EM(a, b) { a, b }, 81 + #define E_(a, b) { a, b } 24 82 25 83 /* For logging errors in read or write */ 26 84 DECLARE_EVENT_CLASS(smb3_rw_err_class, ··· 1189 1125 DEFINE_SMB3_CREDIT_EVENT(overflow_credits); 1190 1126 DEFINE_SMB3_CREDIT_EVENT(set_credits); 1191 1127 1128 + 1129 + TRACE_EVENT(smb3_tcon_ref, 1130 + TP_PROTO(unsigned int tcon_debug_id, int ref, 1131 + enum smb3_tcon_ref_trace trace), 1132 + TP_ARGS(tcon_debug_id, ref, trace), 1133 + TP_STRUCT__entry( 1134 + __field(unsigned int, tcon) 1135 + __field(int, ref) 1136 + __field(enum smb3_tcon_ref_trace, trace) 1137 + ), 1138 + TP_fast_assign( 1139 + __entry->tcon = tcon_debug_id; 1140 + __entry->ref = ref; 1141 + __entry->trace = trace; 1142 + ), 1143 + TP_printk("TC=%08x %s r=%u", 1144 + __entry->tcon, 1145 + __print_symbolic(__entry->trace, smb3_tcon_ref_traces), 1146 + __entry->ref) 1147 + ); 1148 + 1149 + 1150 + #undef EM 1151 + #undef E_ 1192 1152 #endif /* _CIFS_TRACE_H */ 1193 1153 1194 1154 #undef TRACE_INCLUDE_PATH
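The trace.h change generates both the `smb3_tcon_ref_trace` enum and its enum-to-string table from a single list by redefining `EM()`/`E_()` between expansions. A compact sketch of the same X-macro technique, with made-up names standing in for the netfs trace symbols:

```c
#include <assert.h>
#include <string.h>

/* One X-macro list pairs each enum symbol with its display string. */
#define REF_TRACES \
	EM(ref_trace_new,  "NEW ") \
	EM(ref_trace_get,  "GET ") \
	E_(ref_trace_free, "FRE ")

/* First expansion: emit the enumerators themselves. */
#define EM(a, b) a,
#define E_(a, b) a
enum ref_trace { REF_TRACES };
#undef EM
#undef E_

/* Second expansion: emit a parallel table of printable names,
 * indexed by the enum values defined above. */
#define EM(a, b) [a] = b,
#define E_(a, b) [a] = b
static const char *const ref_trace_names[] = { REF_TRACES };
#undef EM
#undef E_
```

Because both definitions expand the same list, the enum and the table can never drift apart — which is exactly what `__print_symbolic()` relies on in the new `smb3_tcon_ref` tracepoint.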
+1 -1
fs/smb/common/smb2pdu.h
··· 711 711 __le16 StructureSize; /* 60 */ 712 712 __le16 Flags; 713 713 __le32 Reserved; 714 - struct_group(network_open_info, 714 + struct_group_attr(network_open_info, __packed, 715 715 __le64 CreationTime; 716 716 __le64 LastAccessTime; 717 717 __le64 LastWriteTime;
+18 -17
fs/smb/server/ksmbd_netlink.h
··· 340 340 /* 341 341 * Share config flags. 342 342 */ 343 - #define KSMBD_SHARE_FLAG_INVALID (0) 344 - #define KSMBD_SHARE_FLAG_AVAILABLE BIT(0) 345 - #define KSMBD_SHARE_FLAG_BROWSEABLE BIT(1) 346 - #define KSMBD_SHARE_FLAG_WRITEABLE BIT(2) 347 - #define KSMBD_SHARE_FLAG_READONLY BIT(3) 348 - #define KSMBD_SHARE_FLAG_GUEST_OK BIT(4) 349 - #define KSMBD_SHARE_FLAG_GUEST_ONLY BIT(5) 350 - #define KSMBD_SHARE_FLAG_STORE_DOS_ATTRS BIT(6) 351 - #define KSMBD_SHARE_FLAG_OPLOCKS BIT(7) 352 - #define KSMBD_SHARE_FLAG_PIPE BIT(8) 353 - #define KSMBD_SHARE_FLAG_HIDE_DOT_FILES BIT(9) 354 - #define KSMBD_SHARE_FLAG_INHERIT_OWNER BIT(10) 355 - #define KSMBD_SHARE_FLAG_STREAMS BIT(11) 356 - #define KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS BIT(12) 357 - #define KSMBD_SHARE_FLAG_ACL_XATTR BIT(13) 358 - #define KSMBD_SHARE_FLAG_UPDATE BIT(14) 359 - #define KSMBD_SHARE_FLAG_CROSSMNT BIT(15) 343 + #define KSMBD_SHARE_FLAG_INVALID (0) 344 + #define KSMBD_SHARE_FLAG_AVAILABLE BIT(0) 345 + #define KSMBD_SHARE_FLAG_BROWSEABLE BIT(1) 346 + #define KSMBD_SHARE_FLAG_WRITEABLE BIT(2) 347 + #define KSMBD_SHARE_FLAG_READONLY BIT(3) 348 + #define KSMBD_SHARE_FLAG_GUEST_OK BIT(4) 349 + #define KSMBD_SHARE_FLAG_GUEST_ONLY BIT(5) 350 + #define KSMBD_SHARE_FLAG_STORE_DOS_ATTRS BIT(6) 351 + #define KSMBD_SHARE_FLAG_OPLOCKS BIT(7) 352 + #define KSMBD_SHARE_FLAG_PIPE BIT(8) 353 + #define KSMBD_SHARE_FLAG_HIDE_DOT_FILES BIT(9) 354 + #define KSMBD_SHARE_FLAG_INHERIT_OWNER BIT(10) 355 + #define KSMBD_SHARE_FLAG_STREAMS BIT(11) 356 + #define KSMBD_SHARE_FLAG_FOLLOW_SYMLINKS BIT(12) 357 + #define KSMBD_SHARE_FLAG_ACL_XATTR BIT(13) 358 + #define KSMBD_SHARE_FLAG_UPDATE BIT(14) 359 + #define KSMBD_SHARE_FLAG_CROSSMNT BIT(15) 360 + #define KSMBD_SHARE_FLAG_CONTINUOUS_AVAILABILITY BIT(16) 360 361 361 362 /* 362 363 * Tree connect request flags.
+5 -8
fs/smb/server/server.c
··· 167 167 int rc; 168 168 bool is_chained = false; 169 169 170 - if (conn->ops->allocate_rsp_buf(work)) 171 - return; 172 - 173 170 if (conn->ops->is_transform_hdr && 174 171 conn->ops->is_transform_hdr(work->request_buf)) { 175 172 rc = conn->ops->decrypt_req(work); 176 - if (rc < 0) { 177 - conn->ops->set_rsp_status(work, STATUS_DATA_ERROR); 178 - goto send; 179 - } 180 - 173 + if (rc < 0) 174 + return; 181 175 work->encrypted = true; 182 176 } 177 + 178 + if (conn->ops->allocate_rsp_buf(work)) 179 + return; 183 180 184 181 rc = conn->ops->init_rsp_hdr(work); 185 182 if (rc) {
+13 -2
fs/smb/server/smb2pdu.c
··· 535 535 if (cmd == SMB2_QUERY_INFO_HE) { 536 536 struct smb2_query_info_req *req; 537 537 538 + if (get_rfc1002_len(work->request_buf) < 539 + offsetof(struct smb2_query_info_req, OutputBufferLength)) 540 + return -EINVAL; 541 + 538 542 req = smb2_get_msg(work->request_buf); 539 543 if ((req->InfoType == SMB2_O_INFO_FILE && 540 544 (req->FileInfoClass == FILE_FULL_EA_INFORMATION || ··· 1988 1984 write_unlock(&sess->tree_conns_lock); 1989 1985 rsp->StructureSize = cpu_to_le16(16); 1990 1986 out_err1: 1991 - rsp->Capabilities = 0; 1987 + if (server_conf.flags & KSMBD_GLOBAL_FLAG_DURABLE_HANDLE && 1988 + test_share_config_flag(share, 1989 + KSMBD_SHARE_FLAG_CONTINUOUS_AVAILABILITY)) 1990 + rsp->Capabilities = SMB2_SHARE_CAP_CONTINUOUS_AVAILABILITY; 1991 + else 1992 + rsp->Capabilities = 0; 1992 1993 rsp->Reserved = 0; 1993 1994 /* default manual caching */ 1994 1995 rsp->ShareFlags = SMB2_SHAREFLAG_MANUAL_CACHING; ··· 3507 3498 memcpy(fp->client_guid, conn->ClientGUID, SMB2_CLIENT_GUID_SIZE); 3508 3499 3509 3500 if (dh_info.type == DURABLE_REQ_V2 || dh_info.type == DURABLE_REQ) { 3510 - if (dh_info.type == DURABLE_REQ_V2 && dh_info.persistent) 3501 + if (dh_info.type == DURABLE_REQ_V2 && dh_info.persistent && 3502 + test_share_config_flag(work->tcon->share_conf, 3503 + KSMBD_SHARE_FLAG_CONTINUOUS_AVAILABILITY)) 3511 3504 fp->is_persistent = true; 3512 3505 else 3513 3506 fp->is_durable = true;
+5
fs/smb/server/vfs.c
··· 754 754 goto out4; 755 755 } 756 756 757 + /* 758 + * explicitly handle file overwrite case, for compatibility with 759 + * filesystems that may not support rename flags (e.g: fuse) 760 + */ 757 761 if ((flags & RENAME_NOREPLACE) && d_is_positive(new_dentry)) { 758 762 err = -EEXIST; 759 763 goto out4; 760 764 } 765 + flags &= ~(RENAME_NOREPLACE); 761 766 762 767 if (old_child == trap) { 763 768 err = -EINVAL;
+4 -1
fs/squashfs/inode.c
··· 48 48 gid_t i_gid; 49 49 int err; 50 50 51 + inode->i_ino = le32_to_cpu(sqsh_ino->inode_number); 52 + if (inode->i_ino == 0) 53 + return -EINVAL; 54 + 51 55 err = squashfs_get_id(sb, le16_to_cpu(sqsh_ino->uid), &i_uid); 52 56 if (err) 53 57 return err; ··· 62 58 63 59 i_uid_write(inode, i_uid); 64 60 i_gid_write(inode, i_gid); 65 - inode->i_ino = le32_to_cpu(sqsh_ino->inode_number); 66 61 inode_set_mtime(inode, le32_to_cpu(sqsh_ino->mtime), 0); 67 62 inode_set_atime(inode, inode_get_mtime_sec(inode), 0); 68 63 inode_set_ctime(inode, inode_get_mtime_sec(inode), 0);
+2
fs/sysfs/file.c
··· 463 463 kn = kernfs_find_and_get(kobj->sd, attr->name); 464 464 if (kn) 465 465 kernfs_break_active_protection(kn); 466 + else 467 + kobject_put(kobj); 466 468 return kn; 467 469 } 468 470 EXPORT_SYMBOL_GPL(sysfs_break_active_protection);
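The sysfs fix balances a `kobject_get()` taken earlier in the function with a `kobject_put()` on the lookup-failure path, so the reference count is restored when no node is returned. The discipline as a toy refcount sketch (names hypothetical, not the kobject API):

```c
#include <assert.h>

/* Toy refcounted object mirroring the kobject get/put discipline. */
struct kobj { int refcount; };

static void kobj_get(struct kobj *k) { k->refcount++; }
static void kobj_put(struct kobj *k) { k->refcount--; }

/* Takes a reference on entry; on the failure path it must drop that
 * reference itself, as the sysfs_break_active_protection() fix does. */
static int break_protection(struct kobj *k, int lookup_succeeds)
{
	kobj_get(k);		/* reference taken up front */
	if (lookup_succeeds)
		return 0;	/* success: caller now owns the ref */
	kobj_put(k);		/* failure: balance the get here */
	return -1;
}
```

Without the `kobj_put()` on failure, every failed call would leak one reference and the object could never be released.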
+8
include/asm-generic/barrier.h
··· 294 294 #define io_stop_wc() do { } while (0) 295 295 #endif 296 296 297 + /* 298 + * Architectures that guarantee an implicit smp_mb() in switch_mm() 299 + * can override smp_mb__after_switch_mm. 300 + */ 301 + #ifndef smp_mb__after_switch_mm 302 + # define smp_mb__after_switch_mm() smp_mb() 303 + #endif 304 + 297 305 #endif /* !__ASSEMBLY__ */ 298 306 #endif /* __ASM_GENERIC_BARRIER_H */
+2
include/linux/blkdev.h
··· 128 128 #define BLK_OPEN_WRITE_IOCTL ((__force blk_mode_t)(1 << 4)) 129 129 /* open is exclusive wrt all other BLK_OPEN_WRITE opens to the device */ 130 130 #define BLK_OPEN_RESTRICT_WRITES ((__force blk_mode_t)(1 << 5)) 131 + /* return partition scanning errors */ 132 + #define BLK_OPEN_STRICT_SCAN ((__force blk_mode_t)(1 << 6)) 131 133 132 134 struct gendisk { 133 135 /*
+6 -1
include/linux/bootconfig.h
··· 288 288 int __init xbc_get_info(int *node_size, size_t *data_size); 289 289 290 290 /* XBC cleanup data structures */ 291 - void __init xbc_exit(void); 291 + void __init _xbc_exit(bool early); 292 + 293 + static inline void xbc_exit(void) 294 + { 295 + _xbc_exit(false); 296 + } 292 297 293 298 /* XBC embedded bootconfig data in kernel */ 294 299 #ifdef CONFIG_BOOT_CONFIG_EMBED
+5
include/linux/clk.h
··· 286 286 return 0; 287 287 } 288 288 289 + static inline int devm_clk_rate_exclusive_get(struct device *dev, struct clk *clk) 290 + { 291 + return 0; 292 + } 293 + 289 294 static inline void clk_rate_exclusive_put(struct clk *clk) {} 290 295 291 296 #endif
+25
include/linux/etherdevice.h
··· 612 612 } 613 613 614 614 /** 615 + * eth_skb_pkt_type - Assign packet type if destination address does not match 616 + * @skb: Assigned a packet type if address does not match @dev address 617 + * @dev: Network device used to compare packet address against 618 + * 619 + * If the destination MAC address of the packet does not match the network 620 + * device address, assign an appropriate packet type. 621 + */ 622 + static inline void eth_skb_pkt_type(struct sk_buff *skb, 623 + const struct net_device *dev) 624 + { 625 + const struct ethhdr *eth = eth_hdr(skb); 626 + 627 + if (unlikely(!ether_addr_equal_64bits(eth->h_dest, dev->dev_addr))) { 628 + if (unlikely(is_multicast_ether_addr_64bits(eth->h_dest))) { 629 + if (ether_addr_equal_64bits(eth->h_dest, dev->broadcast)) 630 + skb->pkt_type = PACKET_BROADCAST; 631 + else 632 + skb->pkt_type = PACKET_MULTICAST; 633 + } else { 634 + skb->pkt_type = PACKET_OTHERHOST; 635 + } 636 + } 637 + } 638 + 639 + /** 615 640 * eth_skb_pad - Pad buffer to mininum number of octets for Ethernet frame 616 641 * @skb: Buffer to pad 617 642 *
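The new `eth_skb_pkt_type()` helper classifies a frame by its destination MAC: frames addressed to the device keep the default type, group addresses split into broadcast vs. multicast, and everything else is "other host". A simplified, self-contained version of the same decision tree, using plain `memcmp()` in place of the kernel's 64-bit compare helpers:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum pkt_type { PKT_HOST, PKT_BROADCAST, PKT_MULTICAST, PKT_OTHERHOST };

/* Classify a frame by destination MAC, mirroring eth_skb_pkt_type():
 * @dst:      destination address from the Ethernet header
 * @dev_addr: the device's own unicast address
 * @bcast:    the device's broadcast address (ff:ff:ff:ff:ff:ff) */
static enum pkt_type classify(const uint8_t dst[6],
			      const uint8_t dev_addr[6],
			      const uint8_t bcast[6])
{
	if (memcmp(dst, dev_addr, 6) == 0)
		return PKT_HOST;	/* addressed to us: default type */
	if (dst[0] & 1) {		/* I/G bit set: group address */
		if (memcmp(dst, bcast, 6) == 0)
			return PKT_BROADCAST;
		return PKT_MULTICAST;
	}
	return PKT_OTHERHOST;
}
```

The broadcast check nests under the multicast test because the all-ones address also has the I/G bit set, exactly as in the inline helper above.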
-1
include/linux/peci.h
··· 58 58 /** 59 59 * struct peci_device - PECI device 60 60 * @dev: device object to register PECI device to the device model 61 - * @controller: manages the bus segment hosting this PECI device 62 61 * @info: PECI device characteristics 63 62 * @info.family: device family 64 63 * @info.model: device model
+9
include/linux/shmem_fs.h
··· 110 110 extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end); 111 111 int shmem_unuse(unsigned int type); 112 112 113 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 113 114 extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force, 114 115 struct mm_struct *mm, unsigned long vm_flags); 116 + #else 117 + static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force, 118 + struct mm_struct *mm, unsigned long vm_flags) 119 + { 120 + return false; 121 + } 122 + #endif 123 + 115 124 #ifdef CONFIG_SHMEM 116 125 extern unsigned long shmem_swap_usage(struct vm_area_struct *vma); 117 126 #else
+3 -10
include/linux/sunrpc/svc_rdma.h
··· 210 210 */ 211 211 struct svc_rdma_write_info { 212 212 struct svcxprt_rdma *wi_rdma; 213 - struct list_head wi_list; 214 213 215 214 const struct svc_rdma_chunk *wi_chunk; 216 215 ··· 238 239 struct ib_cqe sc_cqe; 239 240 struct xdr_buf sc_hdrbuf; 240 241 struct xdr_stream sc_stream; 241 - 242 - struct list_head sc_write_info_list; 243 242 struct svc_rdma_write_info sc_reply_info; 244 - 245 243 void *sc_xprt_buf; 246 244 int sc_page_count; 247 245 int sc_cur_sge_no; ··· 270 274 extern void svc_rdma_cc_release(struct svcxprt_rdma *rdma, 271 275 struct svc_rdma_chunk_ctxt *cc, 272 276 enum dma_data_direction dir); 273 - extern void svc_rdma_write_chunk_release(struct svcxprt_rdma *rdma, 274 - struct svc_rdma_send_ctxt *ctxt); 275 277 extern void svc_rdma_reply_chunk_release(struct svcxprt_rdma *rdma, 276 278 struct svc_rdma_send_ctxt *ctxt); 277 - extern int svc_rdma_prepare_write_list(struct svcxprt_rdma *rdma, 278 - const struct svc_rdma_pcl *write_pcl, 279 - struct svc_rdma_send_ctxt *sctxt, 280 - const struct xdr_buf *xdr); 279 + extern int svc_rdma_send_write_list(struct svcxprt_rdma *rdma, 280 + const struct svc_rdma_recv_ctxt *rctxt, 281 + const struct xdr_buf *xdr); 281 282 extern int svc_rdma_prepare_reply_chunk(struct svcxprt_rdma *rdma, 282 283 const struct svc_rdma_pcl *write_pcl, 283 284 const struct svc_rdma_pcl *reply_pcl,
+33 -32
include/linux/swapops.h
··· 390 390 } 391 391 #endif /* CONFIG_MIGRATION */ 392 392 393 + #ifdef CONFIG_MEMORY_FAILURE 394 + 395 + /* 396 + * Support for hardware poisoned pages 397 + */ 398 + static inline swp_entry_t make_hwpoison_entry(struct page *page) 399 + { 400 + BUG_ON(!PageLocked(page)); 401 + return swp_entry(SWP_HWPOISON, page_to_pfn(page)); 402 + } 403 + 404 + static inline int is_hwpoison_entry(swp_entry_t entry) 405 + { 406 + return swp_type(entry) == SWP_HWPOISON; 407 + } 408 + 409 + #else 410 + 411 + static inline swp_entry_t make_hwpoison_entry(struct page *page) 412 + { 413 + return swp_entry(0, 0); 414 + } 415 + 416 + static inline int is_hwpoison_entry(swp_entry_t swp) 417 + { 418 + return 0; 419 + } 420 + #endif 421 + 393 422 typedef unsigned long pte_marker; 394 423 395 424 #define PTE_MARKER_UFFD_WP BIT(0) ··· 512 483 513 484 /* 514 485 * A pfn swap entry is a special type of swap entry that always has a pfn stored 515 - * in the swap offset. They are used to represent unaddressable device memory 516 - * and to restrict access to a page undergoing migration. 486 + * in the swap offset. They can either be used to represent unaddressable device 487 + * memory, to restrict access to a page undergoing migration or to represent a 488 + * pfn which has been hwpoisoned and unmapped. 
517 489 */ 518 490 static inline bool is_pfn_swap_entry(swp_entry_t entry) 519 491 { ··· 522 492 BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS); 523 493 524 494 return is_migration_entry(entry) || is_device_private_entry(entry) || 525 - is_device_exclusive_entry(entry); 495 + is_device_exclusive_entry(entry) || is_hwpoison_entry(entry); 526 496 } 527 497 528 498 struct page_vma_mapped_walk; ··· 590 560 return 0; 591 561 } 592 562 #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */ 593 - 594 - #ifdef CONFIG_MEMORY_FAILURE 595 - 596 - /* 597 - * Support for hardware poisoned pages 598 - */ 599 - static inline swp_entry_t make_hwpoison_entry(struct page *page) 600 - { 601 - BUG_ON(!PageLocked(page)); 602 - return swp_entry(SWP_HWPOISON, page_to_pfn(page)); 603 - } 604 - 605 - static inline int is_hwpoison_entry(swp_entry_t entry) 606 - { 607 - return swp_type(entry) == SWP_HWPOISON; 608 - } 609 - 610 - #else 611 - 612 - static inline swp_entry_t make_hwpoison_entry(struct page *page) 613 - { 614 - return swp_entry(0, 0); 615 - } 616 - 617 - static inline int is_hwpoison_entry(swp_entry_t swp) 618 - { 619 - return 0; 620 - } 621 - #endif 622 563 623 564 static inline int non_swap_entry(swp_entry_t entry) 624 565 {
+3
include/net/af_unix.h
··· 100 100 U_LOCK_NORMAL, 101 101 U_LOCK_SECOND, /* for double locking, see unix_state_double_lock(). */ 102 102 U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */ 103 + U_LOCK_GC_LISTENER, /* used for listening socket while determining gc 104 + * candidates to close a small race window. 105 + */ 103 106 }; 104 107 105 108 static inline void unix_state_lock_nested(struct sock *sk,
+7 -1
include/net/bluetooth/hci_core.h
··· 738 738 __u8 le_per_adv_data[HCI_MAX_PER_AD_TOT_LEN]; 739 739 __u16 le_per_adv_data_len; 740 740 __u16 le_per_adv_data_offset; 741 + __u8 le_adv_phy; 742 + __u8 le_adv_sec_phy; 741 743 __u8 le_tx_phy; 742 744 __u8 le_rx_phy; 743 745 __s8 rssi; ··· 1514 1512 enum conn_reasons conn_reason); 1515 1513 struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst, 1516 1514 u8 dst_type, bool dst_resolved, u8 sec_level, 1517 - u16 conn_timeout, u8 role); 1515 + u16 conn_timeout, u8 role, u8 phy, u8 sec_phy); 1518 1516 void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status); 1519 1517 struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst, 1520 1518 u8 sec_level, u8 auth_type, ··· 1906 1904 1907 1905 #define privacy_mode_capable(dev) (use_ll_privacy(dev) && \ 1908 1906 (hdev->commands[39] & 0x04)) 1907 + 1908 + #define read_key_size_capable(dev) \ 1909 + ((dev)->commands[20] & 0x10 && \ 1910 + !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks)) 1909 1911 1910 1912 /* Use enhanced synchronous connection if command is supported and its quirk 1911 1913 * has not been set.
+3
include/net/mac80211.h
··· 953 953 * of their QoS TID or other priority field values. 954 954 * @IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX: first MLO TX, used mostly internally 955 955 * for sequence number assignment 956 + * @IEEE80211_TX_CTRL_SCAN_TX: Indicates that this frame is transmitted 957 + * due to scanning, not in normal operation on the interface. 956 958 * @IEEE80211_TX_CTRL_MLO_LINK: If not @IEEE80211_LINK_UNSPECIFIED, this 957 959 * frame should be transmitted on the specific link. This really is 958 960 * only relevant for frames that do not have data present, and is ··· 975 973 IEEE80211_TX_CTRL_NO_SEQNO = BIT(7), 976 974 IEEE80211_TX_CTRL_DONT_REORDER = BIT(8), 977 975 IEEE80211_TX_CTRL_MCAST_MLO_FIRST_TX = BIT(9), 976 + IEEE80211_TX_CTRL_SCAN_TX = BIT(10), 978 977 IEEE80211_TX_CTRL_MLO_LINK = 0xf0000000, 979 978 }; 980 979
+2
include/net/macsec.h
··· 321 321 * for the TX tag 322 322 * @needed_tailroom: number of bytes reserved at the end of the sk_buff for the 323 323 * TX tag 324 + * @rx_uses_md_dst: whether MACsec device offload supports sk_buff md_dst 324 325 */ 325 326 struct macsec_ops { 326 327 /* Device wide */ ··· 353 352 struct sk_buff *skb); 354 353 unsigned int needed_headroom; 355 354 unsigned int needed_tailroom; 355 + bool rx_uses_md_dst; 356 356 }; 357 357 358 358 void macsec_pn_wrapped(struct macsec_secy *secy, struct macsec_tx_sa *tx_sa);
+21 -19
include/net/sock.h
··· 1410 1410 #define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT)) 1411 1411 extern int sysctl_mem_pcpu_rsv; 1412 1412 1413 - static inline void 1414 - sk_memory_allocated_add(struct sock *sk, int amt) 1413 + static inline void proto_memory_pcpu_drain(struct proto *proto) 1415 1414 { 1416 - int local_reserve; 1415 + int val = this_cpu_xchg(*proto->per_cpu_fw_alloc, 0); 1417 1416 1418 - preempt_disable(); 1419 - local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt); 1420 - if (local_reserve >= READ_ONCE(sysctl_mem_pcpu_rsv)) { 1421 - __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve); 1422 - atomic_long_add(local_reserve, sk->sk_prot->memory_allocated); 1423 - } 1424 - preempt_enable(); 1417 + if (val) 1418 + atomic_long_add(val, proto->memory_allocated); 1425 1419 } 1426 1420 1427 1421 static inline void 1428 - sk_memory_allocated_sub(struct sock *sk, int amt) 1422 + sk_memory_allocated_add(const struct sock *sk, int val) 1429 1423 { 1430 - int local_reserve; 1424 + struct proto *proto = sk->sk_prot; 1431 1425 1432 - preempt_disable(); 1433 - local_reserve = __this_cpu_sub_return(*sk->sk_prot->per_cpu_fw_alloc, amt); 1434 - if (local_reserve <= -READ_ONCE(sysctl_mem_pcpu_rsv)) { 1435 - __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve); 1436 - atomic_long_add(local_reserve, sk->sk_prot->memory_allocated); 1437 - } 1438 - preempt_enable(); 1426 + val = this_cpu_add_return(*proto->per_cpu_fw_alloc, val); 1427 + 1428 + if (unlikely(val >= READ_ONCE(sysctl_mem_pcpu_rsv))) 1429 + proto_memory_pcpu_drain(proto); 1430 + } 1431 + 1432 + static inline void 1433 + sk_memory_allocated_sub(const struct sock *sk, int val) 1434 + { 1435 + struct proto *proto = sk->sk_prot; 1436 + 1437 + val = this_cpu_sub_return(*proto->per_cpu_fw_alloc, val); 1438 + 1439 + if (unlikely(val <= -READ_ONCE(sysctl_mem_pcpu_rsv))) 1440 + proto_memory_pcpu_drain(proto); 1439 1441 } 1440 1442 1441 1443 #define SK_ALLOC_PERCPU_COUNTER_BATCH 16
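The sock.h rewrite batches protocol memory accounting in a per-CPU reserve and folds it into the shared counter only once the reserve crosses `sysctl_mem_pcpu_rsv` in either direction, keeping the hot path off the contended atomic. A single-threaded sketch of that batching scheme — the kernel uses percpu and `atomic_long_t` ops, plain `long`s stand in here:

```c
#include <assert.h>

#define PCPU_RSV 16	/* drain threshold; the kernel reads a sysctl */

static long memory_allocated;	/* global counter, normally atomic_long_t */
static long per_cpu_fw_alloc;	/* local reserve, normally a percpu var */

/* Move whatever has accumulated locally into the global counter. */
static void pcpu_drain(void)
{
	long val = per_cpu_fw_alloc;

	per_cpu_fw_alloc = 0;
	if (val)
		memory_allocated += val;
}

static void memory_allocated_add(long val)
{
	per_cpu_fw_alloc += val;
	if (per_cpu_fw_alloc >= PCPU_RSV)	/* reserve grew too large */
		pcpu_drain();
}

static void memory_allocated_sub(long val)
{
	per_cpu_fw_alloc -= val;
	if (per_cpu_fw_alloc <= -PCPU_RSV)	/* reserve owed too much */
		pcpu_drain();
}
```

Small add/sub pairs cancel locally and never touch `memory_allocated`; only sustained drift in one direction hits the shared counter.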
+2 -1
include/net/tls.h
··· 111 111 u32 stopped : 1; 112 112 u32 copy_mode : 1; 113 113 u32 mixed_decrypted : 1; 114 - u32 msg_ready : 1; 114 + 115 + bool msg_ready; 115 116 116 117 struct strp_msg stm; 117 118
+2
init/main.c
··· 636 636 if (!saved_command_line) 637 637 panic("%s: Failed to allocate %zu bytes\n", __func__, len + ilen); 638 638 639 + len = xlen + strlen(command_line) + 1; 640 + 639 641 static_command_line = memblock_alloc(len, SMP_CACHE_BYTES); 640 642 if (!static_command_line) 641 643 panic("%s: Failed to allocate %zu bytes\n", __func__, len);
+6 -5
kernel/configs/hardening.config
··· 39 39 CONFIG_UBSAN_TRAP=y 40 40 CONFIG_UBSAN_BOUNDS=y 41 41 # CONFIG_UBSAN_SHIFT is not set 42 - # CONFIG_UBSAN_DIV_ZERO 43 - # CONFIG_UBSAN_UNREACHABLE 44 - # CONFIG_UBSAN_BOOL 45 - # CONFIG_UBSAN_ENUM 46 - # CONFIG_UBSAN_ALIGNMENT 42 + # CONFIG_UBSAN_DIV_ZERO is not set 43 + # CONFIG_UBSAN_UNREACHABLE is not set 44 + # CONFIG_UBSAN_SIGNED_WRAP is not set 45 + # CONFIG_UBSAN_BOOL is not set 46 + # CONFIG_UBSAN_ENUM is not set 47 + # CONFIG_UBSAN_ALIGNMENT is not set 47 48 48 49 # Sampling-based heap out-of-bounds and use-after-free detection. 49 50 CONFIG_KFENCE=y
+17 -16
kernel/fork.c
··· 714 714 } else if (anon_vma_fork(tmp, mpnt)) 715 715 goto fail_nomem_anon_vma_fork; 716 716 vm_flags_clear(tmp, VM_LOCKED_MASK); 717 + /* 718 + * Copy/update hugetlb private vma information. 719 + */ 720 + if (is_vm_hugetlb_page(tmp)) 721 + hugetlb_dup_vma_private(tmp); 722 + 723 + /* 724 + * Link the vma into the MT. After using __mt_dup(), memory 725 + * allocation is not necessary here, so it cannot fail. 726 + */ 727 + vma_iter_bulk_store(&vmi, tmp); 728 + 729 + mm->map_count++; 730 + 731 + if (tmp->vm_ops && tmp->vm_ops->open) 732 + tmp->vm_ops->open(tmp); 733 + 717 734 file = tmp->vm_file; 718 735 if (file) { 719 736 struct address_space *mapping = file->f_mapping; ··· 747 730 i_mmap_unlock_write(mapping); 748 731 } 749 732 750 - /* 751 - * Copy/update hugetlb private vma information. 752 - */ 753 - if (is_vm_hugetlb_page(tmp)) 754 - hugetlb_dup_vma_private(tmp); 755 - 756 - /* 757 - * Link the vma into the MT. After using __mt_dup(), memory 758 - * allocation is not necessary here, so it cannot fail. 759 - */ 760 - vma_iter_bulk_store(&vmi, tmp); 761 - 762 - mm->map_count++; 763 733 if (!(tmp->vm_flags & VM_WIPEONFORK)) 764 734 retval = copy_page_range(tmp, mpnt); 765 - 766 - if (tmp->vm_ops && tmp->vm_ops->open) 767 - tmp->vm_ops->open(tmp); 768 735 769 736 if (retval) { 770 737 mpnt = vma_next(&vmi);
+14 -6
kernel/sched/sched.h
··· 79 79 # include <asm/paravirt_api_clock.h> 80 80 #endif 81 81 82 + #include <asm/barrier.h> 83 + 82 84 #include "cpupri.h" 83 85 #include "cpudeadline.h" 84 86 ··· 3447 3445 * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu]. 3448 3446 * Provide it here. 3449 3447 */ 3450 - if (!prev->mm) // from kernel 3448 + if (!prev->mm) { // from kernel 3451 3449 smp_mb(); 3452 - /* 3453 - * user -> user transition guarantees a memory barrier through 3454 - * switch_mm() when current->mm changes. If current->mm is 3455 - * unchanged, no barrier is needed. 3456 - */ 3450 + } else { // from user 3451 + /* 3452 + * user->user transition relies on an implicit 3453 + * memory barrier in switch_mm() when 3454 + * current->mm changes. If the architecture 3455 + * switch_mm() does not have an implicit memory 3456 + * barrier, it is emitted here. If current->mm 3457 + * is unchanged, no barrier is needed. 3458 + */ 3459 + smp_mb__after_switch_mm(); 3460 + } 3457 3461 } 3458 3462 if (prev->mm_cid_active) { 3459 3463 mm_cid_snapshot_time(rq, prev->mm);
+13 -9
lib/bootconfig.c
··· 61 61 return memblock_alloc(size, SMP_CACHE_BYTES); 62 62 } 63 63 64 - static inline void __init xbc_free_mem(void *addr, size_t size) 64 + static inline void __init xbc_free_mem(void *addr, size_t size, bool early) 65 65 { 66 - memblock_free(addr, size); 66 + if (early) 67 + memblock_free(addr, size); 68 + else if (addr) 69 + memblock_free_late(__pa(addr), size); 67 70 } 68 71 69 72 #else /* !__KERNEL__ */ ··· 76 73 return malloc(size); 77 74 } 78 75 79 - static inline void xbc_free_mem(void *addr, size_t size) 76 + static inline void xbc_free_mem(void *addr, size_t size, bool early) 80 77 { 81 78 free(addr); 82 79 } ··· 901 898 } 902 899 903 900 /** 904 - * xbc_exit() - Clean up all parsed bootconfig 901 + * _xbc_exit() - Clean up all parsed bootconfig 902 + * @early: Set to true if this is called before the buddy system is initialized. 905 903 * 906 904 * This clears all data structures of parsed bootconfig on memory. 907 905 * If you need to reuse xbc_init() with new boot config, you can 908 906 * use this. 909 907 */ 910 - void __init xbc_exit(void) 908 + void __init _xbc_exit(bool early) 911 909 { 912 - xbc_free_mem(xbc_data, xbc_data_size); 910 + xbc_free_mem(xbc_data, xbc_data_size, early); 913 911 xbc_data = NULL; 914 912 xbc_data_size = 0; 915 913 xbc_node_num = 0; 916 - xbc_free_mem(xbc_nodes, sizeof(struct xbc_node) * XBC_NODE_MAX); 914 + xbc_free_mem(xbc_nodes, sizeof(struct xbc_node) * XBC_NODE_MAX, early); 917 915 xbc_nodes = NULL; 918 916 brace_index = 0; 919 917 } ··· 967 963 if (!xbc_nodes) { 968 964 if (emsg) 969 965 *emsg = "Failed to allocate bootconfig nodes"; 970 - xbc_exit(); 966 + _xbc_exit(true); 971 967 return -ENOMEM; 972 968 } 973 969 memset(xbc_nodes, 0, sizeof(struct xbc_node) * XBC_NODE_MAX); ··· 981 977 *epos = xbc_err_pos; 982 978 if (emsg) 983 979 *emsg = xbc_err_msg; 984 - xbc_exit(); 980 + _xbc_exit(true); 985 981 } else 986 982 ret = xbc_node_num; 987 983
+16 -2
lib/ubsan.c
··· 44 44 case ubsan_shift_out_of_bounds: 45 45 return "UBSAN: shift out of bounds"; 46 46 #endif 47 - #ifdef CONFIG_UBSAN_DIV_ZERO 47 + #if defined(CONFIG_UBSAN_DIV_ZERO) || defined(CONFIG_UBSAN_SIGNED_WRAP) 48 48 /* 49 - * SanitizerKind::IntegerDivideByZero emits 49 + * SanitizerKind::IntegerDivideByZero and 50 + * SanitizerKind::SignedIntegerOverflow emit 50 51 * SanitizerHandler::DivremOverflow. 51 52 */ 52 53 case ubsan_divrem_overflow: ··· 78 77 return "UBSAN: alignment assumption"; 79 78 case ubsan_type_mismatch: 80 79 return "UBSAN: type mismatch"; 80 + #endif 81 + #ifdef CONFIG_UBSAN_SIGNED_WRAP 82 + /* 83 + * SanitizerKind::SignedIntegerOverflow emits 84 + * SanitizerHandler::AddOverflow, SanitizerHandler::SubOverflow, 85 + * or SanitizerHandler::MulOverflow. 86 + */ 87 + case ubsan_add_overflow: 88 + return "UBSAN: integer addition overflow"; 89 + case ubsan_sub_overflow: 90 + return "UBSAN: integer subtraction overflow"; 91 + case ubsan_mul_overflow: 92 + return "UBSAN: integer multiplication overflow"; 81 93 #endif 82 94 default: 83 95 return "UBSAN: unrecognized failure code";
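The new `CONFIG_UBSAN_SIGNED_WRAP` cases name the handlers Clang/GCC emit for signed add, sub, and mul overflow. The bug class itself can be probed without tripping undefined behaviour by using the compilers' checked-arithmetic builtins; a small sketch (assumes GCC or Clang for the `__builtin_*_overflow` family):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* The three operations CONFIG_UBSAN_SIGNED_WRAP instruments, checked
 * explicitly here instead of trapping in a UBSAN handler at runtime. */
static bool add_overflows(int a, int b)
{
	int res;

	return __builtin_add_overflow(a, b, &res);
}

static bool sub_overflows(int a, int b)
{
	int res;

	return __builtin_sub_overflow(a, b, &res);
}

static bool mul_overflows(int a, int b)
{
	int res;

	return __builtin_mul_overflow(a, b, &res);
}
```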
+32 -22
mm/gup.c
··· 1206 1206 1207 1207 /* first iteration or cross vma bound */ 1208 1208 if (!vma || start >= vma->vm_end) { 1209 + /* 1210 + * MADV_POPULATE_(READ|WRITE) wants to handle VMA 1211 + * lookups+error reporting differently. 1212 + */ 1213 + if (gup_flags & FOLL_MADV_POPULATE) { 1214 + vma = vma_lookup(mm, start); 1215 + if (!vma) { 1216 + ret = -ENOMEM; 1217 + goto out; 1218 + } 1219 + if (check_vma_flags(vma, gup_flags)) { 1220 + ret = -EINVAL; 1221 + goto out; 1222 + } 1223 + goto retry; 1224 + } 1209 1225 vma = gup_vma_lookup(mm, start); 1210 1226 if (!vma && in_gate_area(mm, start)) { 1211 1227 ret = get_gate_page(mm, start & PAGE_MASK, ··· 1701 1685 } 1702 1686 1703 1687 /* 1704 - * faultin_vma_page_range() - populate (prefault) page tables inside the 1705 - * given VMA range readable/writable 1688 + * faultin_page_range() - populate (prefault) page tables inside the 1689 + * given range readable/writable 1706 1690 * 1707 1691 * This takes care of mlocking the pages, too, if VM_LOCKED is set. 1708 1692 * 1709 - * @vma: target vma 1693 + * @mm: the mm to populate page tables in 1710 1694 * @start: start address 1711 1695 * @end: end address 1712 1696 * @write: whether to prefault readable or writable 1713 1697 * @locked: whether the mmap_lock is still held 1714 1698 * 1715 - * Returns either number of processed pages in the vma, or a negative error 1716 - * code on error (see __get_user_pages()). 1699 + * Returns either number of processed pages in the MM, or a negative error 1700 + * code on error (see __get_user_pages()). Note that this function reports 1701 + * errors related to VMAs, such as incompatible mappings, as expected by 1702 + * MADV_POPULATE_(READ|WRITE). 1717 1703 * 1718 - * vma->vm_mm->mmap_lock must be held. The range must be page-aligned and 1719 - * covered by the VMA. If it's released, *@locked will be set to 0. 1704 + * The range must be page-aligned. 1705 + * 1706 + * mm->mmap_lock must be held. If it's released, *@locked will be set to 0. 
1720 1707 */ 1721 - long faultin_vma_page_range(struct vm_area_struct *vma, unsigned long start, 1722 - unsigned long end, bool write, int *locked) 1708 + long faultin_page_range(struct mm_struct *mm, unsigned long start, 1709 + unsigned long end, bool write, int *locked) 1723 1710 { 1724 - struct mm_struct *mm = vma->vm_mm; 1725 1711 unsigned long nr_pages = (end - start) / PAGE_SIZE; 1726 1712 int gup_flags; 1727 1713 long ret; 1728 1714 1729 1715 VM_BUG_ON(!PAGE_ALIGNED(start)); 1730 1716 VM_BUG_ON(!PAGE_ALIGNED(end)); 1731 - VM_BUG_ON_VMA(start < vma->vm_start, vma); 1732 - VM_BUG_ON_VMA(end > vma->vm_end, vma); 1733 1717 mmap_assert_locked(mm); 1734 1718 1735 1719 /* ··· 1741 1725 * a poisoned page. 1742 1726 * !FOLL_FORCE: Require proper access permissions. 1743 1727 */ 1744 - gup_flags = FOLL_TOUCH | FOLL_HWPOISON | FOLL_UNLOCKABLE; 1728 + gup_flags = FOLL_TOUCH | FOLL_HWPOISON | FOLL_UNLOCKABLE | 1729 + FOLL_MADV_POPULATE; 1745 1730 if (write) 1746 1731 gup_flags |= FOLL_WRITE; 1747 1732 1748 - /* 1749 - * We want to report -EINVAL instead of -EFAULT for any permission 1750 - * problems or incompatible mappings. 1751 - */ 1752 - if (check_vma_flags(vma, gup_flags)) 1753 - return -EINVAL; 1754 - 1755 - ret = __get_user_pages(mm, start, nr_pages, gup_flags, 1756 - NULL, locked); 1733 + ret = __get_user_pages_locked(mm, start, nr_pages, NULL, locked, 1734 + gup_flags); 1757 1735 lru_add_drain(); 1758 1736 return ret; 1759 1737 }
+3 -3
mm/huge_memory.c
··· 2259 2259 goto unlock_ptls; 2260 2260 } 2261 2261 2262 - folio_move_anon_rmap(src_folio, dst_vma); 2263 - WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr)); 2264 - 2265 2262 src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd); 2266 2263 /* Folio got pinned from under us. Put it back and fail the move. */ 2267 2264 if (folio_maybe_dma_pinned(src_folio)) { ··· 2266 2269 err = -EBUSY; 2267 2270 goto unlock_ptls; 2268 2271 } 2272 + 2273 + folio_move_anon_rmap(src_folio, dst_vma); 2274 + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr)); 2269 2275 2270 2276 _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot); 2271 2277 /* Follow mremap() behavior and treat the entry dirty after the move */
+7 -3
mm/hugetlb.c
··· 7044 7044 if (!pte_same(pte, newpte)) 7045 7045 set_huge_pte_at(mm, address, ptep, newpte, psize); 7046 7046 } else if (unlikely(is_pte_marker(pte))) { 7047 - /* No other markers apply for now. */ 7048 - WARN_ON_ONCE(!pte_marker_uffd_wp(pte)); 7049 - if (uffd_wp_resolve) 7047 + /* 7048 + * Do nothing on a poison marker; page is 7049 + * corrupted, permissions do not apply. Here 7050 + * pte_marker_uffd_wp()==true implies !poison 7051 + * because they're mutually exclusive. 7052 + */ 7053 + if (pte_marker_uffd_wp(pte) && uffd_wp_resolve) 7050 7054 /* Safe to modify directly (non-present->none). */ 7051 7055 huge_pte_clear(mm, address, ptep, psize); 7052 7056 } else if (!huge_pte_none(pte)) {
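In the new hugetlb branch, a poison marker and a uffd-wp marker are mutually exclusive, and only the latter is acted on during write-protect resolution. The decision reduces to a predicate like the following (flag names hypothetical, userspace sketch of the branch condition only):

```c
#include <assert.h>
#include <stdbool.h>

#define MARKER_UFFD_WP  0x1u   /* stand-in for pte_marker_uffd_wp() */
#define MARKER_POISONED 0x2u   /* stand-in for a poison marker */

/* Clear the entry only for a uffd-wp marker during wp resolution;
 * a poisoned entry is left alone since permissions no longer apply. */
static bool should_clear(unsigned int marker, bool uffd_wp_resolve)
{
	return (marker & MARKER_UFFD_WP) && uffd_wp_resolve;
}
```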
+6 -4
mm/internal.h
··· 686 686 void unmap_mapping_folio(struct folio *folio); 687 687 extern long populate_vma_page_range(struct vm_area_struct *vma, 688 688 unsigned long start, unsigned long end, int *locked); 689 - extern long faultin_vma_page_range(struct vm_area_struct *vma, 690 - unsigned long start, unsigned long end, 691 - bool write, int *locked); 689 + extern long faultin_page_range(struct mm_struct *mm, unsigned long start, 690 + unsigned long end, bool write, int *locked); 692 691 extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags, 693 692 unsigned long bytes); 694 693 ··· 1126 1127 FOLL_FAST_ONLY = 1 << 20, 1127 1128 /* allow unlocking the mmap lock */ 1128 1129 FOLL_UNLOCKABLE = 1 << 21, 1130 + /* VMA lookup+checks compatible with MADV_POPULATE_(READ|WRITE) */ 1131 + FOLL_MADV_POPULATE = 1 << 22, 1129 1132 }; 1130 1133 1131 1134 #define INTERNAL_GUP_FLAGS (FOLL_TOUCH | FOLL_TRIED | FOLL_REMOTE | FOLL_PIN | \ 1132 - FOLL_FAST_ONLY | FOLL_UNLOCKABLE) 1135 + FOLL_FAST_ONLY | FOLL_UNLOCKABLE | \ 1136 + FOLL_MADV_POPULATE) 1133 1137 1134 1138 /* 1135 1139 * Indicates for which pages that are write-protected in the page table,
+2 -15
mm/madvise.c
··· 908 908 { 909 909 const bool write = behavior == MADV_POPULATE_WRITE; 910 910 struct mm_struct *mm = vma->vm_mm; 911 - unsigned long tmp_end; 912 911 int locked = 1; 913 912 long pages; 914 913 915 914 *prev = vma; 916 915 917 916 while (start < end) { 918 - /* 919 - * We might have temporarily dropped the lock. For example, 920 - * our VMA might have been split. 921 - */ 922 - if (!vma || start >= vma->vm_end) { 923 - vma = vma_lookup(mm, start); 924 - if (!vma) 925 - return -ENOMEM; 926 - } 927 - 928 - tmp_end = min_t(unsigned long, end, vma->vm_end); 929 917 /* Populate (prefault) page tables readable/writable. */ 930 - pages = faultin_vma_page_range(vma, start, tmp_end, write, 931 - &locked); 918 + pages = faultin_page_range(mm, start, end, write, &locked); 932 919 if (!locked) { 933 920 mmap_read_lock(mm); 934 921 locked = 1; ··· 936 949 pr_warn_once("%s: unhandled return value: %ld\n", 937 950 __func__, pages); 938 951 fallthrough; 939 - case -ENOMEM: 952 + case -ENOMEM: /* No VMA or out of memory. */ 940 953 return -ENOMEM; 941 954 } 942 955 }
+15 -3
mm/memory-failure.c
··· 154 154 { 155 155 int ret; 156 156 157 - zone_pcp_disable(page_zone(page)); 157 + /* 158 + * zone_pcp_disable() can't be used here. It will 159 + * hold pcp_batch_high_lock and dissolve_free_huge_page() might hold 160 + * cpu_hotplug_lock via static_key_slow_dec() when hugetlb vmemmap 161 + * optimization is enabled. This will break the current lock dependency 162 + * chain and lead to deadlock. 163 + * Disabling pcp before dissolving the page was a deterministic 164 + * approach because we made sure that those pages cannot end up in any 165 + * PCP list. Draining PCP lists expels those pages to the buddy system, 166 + * but nothing guarantees that those pages do not get back to a PCP 167 + * queue if we need to refill those. 168 + */ 158 169 ret = dissolve_free_huge_page(page); 159 - if (!ret) 170 + if (!ret) { 171 + drain_all_pages(page_zone(page)); 160 172 ret = take_page_off_buddy(page); 161 - zone_pcp_enable(page_zone(page)); 173 + } 162 174 163 175 return ret; 164 176 }
+110 -80
mm/page_owner.c
··· 118 118 register_dummy_stack(); 119 119 register_failure_stack(); 120 120 register_early_stack(); 121 - static_branch_enable(&page_owner_inited); 122 121 init_early_allocated_pages(); 123 122 /* Initialize dummy and failure stacks and link them to stack_list */ 124 123 dummy_stack.stack_record = __stack_depot_get_stack_record(dummy_handle); ··· 128 129 refcount_set(&failure_stack.stack_record->count, 1); 129 130 dummy_stack.next = &failure_stack; 130 131 stack_list = &dummy_stack; 132 + static_branch_enable(&page_owner_inited); 131 133 } 132 134 133 135 struct page_ext_operations page_owner_ops = { ··· 196 196 spin_unlock_irqrestore(&stack_list_lock, flags); 197 197 } 198 198 199 - static void inc_stack_record_count(depot_stack_handle_t handle, gfp_t gfp_mask) 199 + static void inc_stack_record_count(depot_stack_handle_t handle, gfp_t gfp_mask, 200 + int nr_base_pages) 200 201 { 201 202 struct stack_record *stack_record = __stack_depot_get_stack_record(handle); 202 203 ··· 218 217 /* Add the new stack_record to our list */ 219 218 add_stack_record_to_list(stack_record, gfp_mask); 220 219 } 221 - refcount_inc(&stack_record->count); 220 + refcount_add(nr_base_pages, &stack_record->count); 222 221 } 223 222 224 - static void dec_stack_record_count(depot_stack_handle_t handle) 223 + static void dec_stack_record_count(depot_stack_handle_t handle, 224 + int nr_base_pages) 225 225 { 226 226 struct stack_record *stack_record = __stack_depot_get_stack_record(handle); 227 227 228 - if (stack_record) 229 - refcount_dec(&stack_record->count); 228 + if (!stack_record) 229 + return; 230 + 231 + if (refcount_sub_and_test(nr_base_pages, &stack_record->count)) 232 + pr_warn("%s: refcount went to 0 for %u handle\n", __func__, 233 + handle); 234 + } 235 + 236 + static inline void __update_page_owner_handle(struct page_ext *page_ext, 237 + depot_stack_handle_t handle, 238 + unsigned short order, 239 + gfp_t gfp_mask, 240 + short last_migrate_reason, u64 ts_nsec, 241 + pid_t pid, 
pid_t tgid, char *comm) 242 + { 243 + int i; 244 + struct page_owner *page_owner; 245 + 246 + for (i = 0; i < (1 << order); i++) { 247 + page_owner = get_page_owner(page_ext); 248 + page_owner->handle = handle; 249 + page_owner->order = order; 250 + page_owner->gfp_mask = gfp_mask; 251 + page_owner->last_migrate_reason = last_migrate_reason; 252 + page_owner->pid = pid; 253 + page_owner->tgid = tgid; 254 + page_owner->ts_nsec = ts_nsec; 255 + strscpy(page_owner->comm, comm, 256 + sizeof(page_owner->comm)); 257 + __set_bit(PAGE_EXT_OWNER, &page_ext->flags); 258 + __set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags); 259 + page_ext = page_ext_next(page_ext); 260 + } 261 + } 262 + 263 + static inline void __update_page_owner_free_handle(struct page_ext *page_ext, 264 + depot_stack_handle_t handle, 265 + unsigned short order, 266 + pid_t pid, pid_t tgid, 267 + u64 free_ts_nsec) 268 + { 269 + int i; 270 + struct page_owner *page_owner; 271 + 272 + for (i = 0; i < (1 << order); i++) { 273 + page_owner = get_page_owner(page_ext); 274 + /* Only __reset_page_owner() wants to clear the bit */ 275 + if (handle) { 276 + __clear_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags); 277 + page_owner->free_handle = handle; 278 + } 279 + page_owner->free_ts_nsec = free_ts_nsec; 280 + page_owner->free_pid = current->pid; 281 + page_owner->free_tgid = current->tgid; 282 + page_ext = page_ext_next(page_ext); 283 + } 230 284 } 231 285 232 286 void __reset_page_owner(struct page *page, unsigned short order) 233 287 { 234 - int i; 235 288 struct page_ext *page_ext; 236 289 depot_stack_handle_t handle; 237 290 depot_stack_handle_t alloc_handle; ··· 300 245 alloc_handle = page_owner->handle; 301 246 302 247 handle = save_stack(GFP_NOWAIT | __GFP_NOWARN); 303 - for (i = 0; i < (1 << order); i++) { 304 - __clear_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags); 305 - page_owner->free_handle = handle; 306 - page_owner->free_ts_nsec = free_ts_nsec; 307 - page_owner->free_pid = current->pid; 308 - 
page_owner->free_tgid = current->tgid; 309 - page_ext = page_ext_next(page_ext); 310 - page_owner = get_page_owner(page_ext); 311 - } 248 + __update_page_owner_free_handle(page_ext, handle, order, current->pid, 249 + current->tgid, free_ts_nsec); 312 250 page_ext_put(page_ext); 251 + 313 252 if (alloc_handle != early_handle) 314 253 /* 315 254 * early_handle is being set as a handle for all those ··· 312 263 * the machinery is not ready yet, we cannot decrement 313 264 * their refcount either. 314 265 */ 315 - dec_stack_record_count(alloc_handle); 316 - } 317 - 318 - static inline void __set_page_owner_handle(struct page_ext *page_ext, 319 - depot_stack_handle_t handle, 320 - unsigned short order, gfp_t gfp_mask) 321 - { 322 - struct page_owner *page_owner; 323 - int i; 324 - u64 ts_nsec = local_clock(); 325 - 326 - for (i = 0; i < (1 << order); i++) { 327 - page_owner = get_page_owner(page_ext); 328 - page_owner->handle = handle; 329 - page_owner->order = order; 330 - page_owner->gfp_mask = gfp_mask; 331 - page_owner->last_migrate_reason = -1; 332 - page_owner->pid = current->pid; 333 - page_owner->tgid = current->tgid; 334 - page_owner->ts_nsec = ts_nsec; 335 - strscpy(page_owner->comm, current->comm, 336 - sizeof(page_owner->comm)); 337 - __set_bit(PAGE_EXT_OWNER, &page_ext->flags); 338 - __set_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags); 339 - 340 - page_ext = page_ext_next(page_ext); 341 - } 266 + dec_stack_record_count(alloc_handle, 1 << order); 342 267 } 343 268 344 269 noinline void __set_page_owner(struct page *page, unsigned short order, 345 270 gfp_t gfp_mask) 346 271 { 347 272 struct page_ext *page_ext; 273 + u64 ts_nsec = local_clock(); 348 274 depot_stack_handle_t handle; 349 275 350 276 handle = save_stack(gfp_mask); ··· 327 303 page_ext = page_ext_get(page); 328 304 if (unlikely(!page_ext)) 329 305 return; 330 - __set_page_owner_handle(page_ext, handle, order, gfp_mask); 306 + __update_page_owner_handle(page_ext, handle, order, gfp_mask, -1, 307 
+ current->pid, current->tgid, ts_nsec, 308 + current->comm); 331 309 page_ext_put(page_ext); 332 - inc_stack_record_count(handle, gfp_mask); 310 + inc_stack_record_count(handle, gfp_mask, 1 << order); 333 311 } 334 312 335 313 void __set_page_owner_migrate_reason(struct page *page, int reason) ··· 366 340 367 341 void __folio_copy_owner(struct folio *newfolio, struct folio *old) 368 342 { 343 + int i; 369 344 struct page_ext *old_ext; 370 345 struct page_ext *new_ext; 371 - struct page_owner *old_page_owner, *new_page_owner; 346 + struct page_owner *old_page_owner; 347 + struct page_owner *new_page_owner; 348 + depot_stack_handle_t migrate_handle; 372 349 373 350 old_ext = page_ext_get(&old->page); 374 351 if (unlikely(!old_ext)) ··· 385 356 386 357 old_page_owner = get_page_owner(old_ext); 387 358 new_page_owner = get_page_owner(new_ext); 388 - new_page_owner->order = old_page_owner->order; 389 - new_page_owner->gfp_mask = old_page_owner->gfp_mask; 390 - new_page_owner->last_migrate_reason = 391 - old_page_owner->last_migrate_reason; 392 - new_page_owner->handle = old_page_owner->handle; 393 - new_page_owner->pid = old_page_owner->pid; 394 - new_page_owner->tgid = old_page_owner->tgid; 395 - new_page_owner->free_pid = old_page_owner->free_pid; 396 - new_page_owner->free_tgid = old_page_owner->free_tgid; 397 - new_page_owner->ts_nsec = old_page_owner->ts_nsec; 398 - new_page_owner->free_ts_nsec = old_page_owner->ts_nsec; 399 - strcpy(new_page_owner->comm, old_page_owner->comm); 400 - 359 + migrate_handle = new_page_owner->handle; 360 + __update_page_owner_handle(new_ext, old_page_owner->handle, 361 + old_page_owner->order, old_page_owner->gfp_mask, 362 + old_page_owner->last_migrate_reason, 363 + old_page_owner->ts_nsec, old_page_owner->pid, 364 + old_page_owner->tgid, old_page_owner->comm); 401 365 /* 402 - * We don't clear the bit on the old folio as it's going to be freed 403 - * after migration. 
Until then, the info can be useful in case of 404 - * a bug, and the overall stats will be off a bit only temporarily. 405 - * Also, migrate_misplaced_transhuge_page() can still fail the 406 - * migration and then we want the old folio to retain the info. But 407 - * in that case we also don't need to explicitly clear the info from 408 - * the new page, which will be freed. 366 + * Do not proactively clear PAGE_EXT_OWNER{_ALLOCATED} bits as the folio 367 + * will be freed after migration. Keep them until then as they may be 368 + * useful. 409 369 */ 410 - __set_bit(PAGE_EXT_OWNER, &new_ext->flags); 411 - __set_bit(PAGE_EXT_OWNER_ALLOCATED, &new_ext->flags); 370 + __update_page_owner_free_handle(new_ext, 0, old_page_owner->order, 371 + old_page_owner->free_pid, 372 + old_page_owner->free_tgid, 373 + old_page_owner->free_ts_nsec); 374 + /* 375 + * We linked the original stack to the new folio, we need to do the same 376 + * for the new one and the old folio otherwise there will be an imbalance 377 + * when subtracting those pages from the stack. 378 + */ 379 + for (i = 0; i < (1 << new_page_owner->order); i++) { 380 + old_page_owner->handle = migrate_handle; 381 + old_ext = page_ext_next(old_ext); 382 + old_page_owner = get_page_owner(old_ext); 383 + } 384 + 412 385 page_ext_put(new_ext); 413 386 page_ext_put(old_ext); 414 387 } ··· 818 787 goto ext_put_continue; 819 788 820 789 /* Found early allocated page */ 821 - __set_page_owner_handle(page_ext, early_handle, 822 - 0, 0); 790 + __update_page_owner_handle(page_ext, early_handle, 0, 0, 791 + -1, local_clock(), current->pid, 792 + current->tgid, current->comm); 823 793 count++; 824 794 ext_put_continue: 825 795 page_ext_put(page_ext); ··· 872 840 * value of stack_list. 
873 841 */ 874 842 stack = smp_load_acquire(&stack_list); 843 + m->private = stack; 875 844 } else { 876 845 stack = m->private; 877 - stack = stack->next; 878 846 } 879 - 880 - m->private = stack; 881 847 882 848 return stack; 883 849 } ··· 891 861 return stack; 892 862 } 893 863 894 - static unsigned long page_owner_stack_threshold; 864 + static unsigned long page_owner_pages_threshold; 895 865 896 866 static int stack_print(struct seq_file *m, void *v) 897 867 { 898 - int i, stack_count; 868 + int i, nr_base_pages; 899 869 struct stack *stack = v; 900 870 unsigned long *entries; 901 871 unsigned long nr_entries; ··· 906 876 907 877 nr_entries = stack_record->size; 908 878 entries = stack_record->entries; 909 - stack_count = refcount_read(&stack_record->count) - 1; 879 + nr_base_pages = refcount_read(&stack_record->count) - 1; 910 880 911 - if (stack_count < 1 || stack_count < page_owner_stack_threshold) 881 + if (nr_base_pages < 1 || nr_base_pages < page_owner_pages_threshold) 912 882 return 0; 913 883 914 884 for (i = 0; i < nr_entries; i++) 915 885 seq_printf(m, " %pS\n", (void *)entries[i]); 916 - seq_printf(m, "stack_count: %d\n\n", stack_count); 886 + seq_printf(m, "nr_base_pages: %d\n\n", nr_base_pages); 917 887 918 888 return 0; 919 889 } ··· 943 913 944 914 static int page_owner_threshold_get(void *data, u64 *val) 945 915 { 946 - *val = READ_ONCE(page_owner_stack_threshold); 916 + *val = READ_ONCE(page_owner_pages_threshold); 947 917 return 0; 948 918 } 949 919 950 920 static int page_owner_threshold_set(void *data, u64 val) 951 921 { 952 - WRITE_ONCE(page_owner_stack_threshold, val); 922 + WRITE_ONCE(page_owner_pages_threshold, val); 953 923 return 0; 954 924 } 955 925
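The page_owner rework changes what a stack_record's refcount means: it now counts outstanding base pages, so an order-N allocation adds 1 << N, its free subtracts the same, and the debugfs file reports `nr_base_pages` instead of an allocation count. The accounting invariant, sketched with a plain counter standing in for the kernel's `refcount_t` (biased to 1 the way `refcount_set(..., 1)` initializes it):

```c
#include <assert.h>

static long stack_count = 1;   /* bias from refcount_set(..., 1) */

/* inc_stack_record_count() analogue: one unit per base page allocated. */
static void stack_alloc(unsigned short order)
{
	stack_count += 1L << order;
}

/* dec_stack_record_count() analogue: the free gives back the same amount. */
static void stack_free(unsigned short order)
{
	stack_count -= 1L << order;
}

/* What the reworked stack_print() reports. */
static long nr_base_pages(void)
{
	return stack_count - 1;
}
```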
-6
mm/shmem.c
··· 748 748 749 749 #define shmem_huge SHMEM_HUGE_DENY 750 750 751 - bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force, 752 - struct mm_struct *mm, unsigned long vm_flags) 753 - { 754 - return false; 755 - } 756 - 757 751 static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo, 758 752 struct shrink_control *sc, unsigned long nr_to_split) 759 753 {
+1 -1
net/ax25/af_ax25.c
··· 103 103 s->ax25_dev = NULL; 104 104 if (sk->sk_socket) { 105 105 netdev_put(ax25_dev->dev, 106 - &ax25_dev->dev_tracker); 106 + &s->dev_tracker); 107 107 ax25_dev_put(ax25_dev); 108 108 } 109 109 ax25_cb_del(s);
+4 -2
net/bluetooth/hci_conn.c
··· 1263 1263 1264 1264 struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst, 1265 1265 u8 dst_type, bool dst_resolved, u8 sec_level, 1266 - u16 conn_timeout, u8 role) 1266 + u16 conn_timeout, u8 role, u8 phy, u8 sec_phy) 1267 1267 { 1268 1268 struct hci_conn *conn; 1269 1269 struct smp_irk *irk; ··· 1326 1326 conn->dst_type = dst_type; 1327 1327 conn->sec_level = BT_SECURITY_LOW; 1328 1328 conn->conn_timeout = conn_timeout; 1329 + conn->le_adv_phy = phy; 1330 + conn->le_adv_sec_phy = sec_phy; 1329 1331 1330 1332 err = hci_connect_le_sync(hdev, conn); 1331 1333 if (err) { ··· 2275 2273 le = hci_connect_le(hdev, dst, dst_type, false, 2276 2274 BT_SECURITY_LOW, 2277 2275 HCI_LE_CONN_TIMEOUT, 2278 - HCI_ROLE_SLAVE); 2276 + HCI_ROLE_SLAVE, 0, 0); 2279 2277 else 2280 2278 le = hci_connect_le_scan(hdev, dst, dst_type, 2281 2279 BT_SECURITY_LOW,
+14 -11
net/bluetooth/hci_event.c
··· 3218 3218 if (key) { 3219 3219 set_bit(HCI_CONN_ENCRYPT, &conn->flags); 3220 3220 3221 - if (!(hdev->commands[20] & 0x10)) { 3221 + if (!read_key_size_capable(hdev)) { 3222 3222 conn->enc_key_size = HCI_LINK_KEY_SIZE; 3223 3223 } else { 3224 3224 cp.handle = cpu_to_le16(conn->handle); ··· 3666 3666 * controller really supports it. If it doesn't, assume 3667 3667 * the default size (16). 3668 3668 */ 3669 - if (!(hdev->commands[20] & 0x10) || 3670 - test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks)) { 3669 + if (!read_key_size_capable(hdev)) { 3671 3670 conn->enc_key_size = HCI_LINK_KEY_SIZE; 3672 3671 goto notify; 3673 3672 } ··· 6037 6038 static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev, 6038 6039 bdaddr_t *addr, 6039 6040 u8 addr_type, bool addr_resolved, 6040 - u8 adv_type) 6041 + u8 adv_type, u8 phy, u8 sec_phy) 6041 6042 { 6042 6043 struct hci_conn *conn; 6043 6044 struct hci_conn_params *params; ··· 6092 6093 6093 6094 conn = hci_connect_le(hdev, addr, addr_type, addr_resolved, 6094 6095 BT_SECURITY_LOW, hdev->def_le_autoconnect_timeout, 6095 - HCI_ROLE_MASTER); 6096 + HCI_ROLE_MASTER, phy, sec_phy); 6096 6097 if (!IS_ERR(conn)) { 6097 6098 /* If HCI_AUTO_CONN_EXPLICIT is set, conn is already owned 6098 6099 * by higher layer that tried to connect, if no then ··· 6127 6128 6128 6129 static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr, 6129 6130 u8 bdaddr_type, bdaddr_t *direct_addr, 6130 - u8 direct_addr_type, s8 rssi, u8 *data, u8 len, 6131 - bool ext_adv, bool ctl_time, u64 instant) 6131 + u8 direct_addr_type, u8 phy, u8 sec_phy, s8 rssi, 6132 + u8 *data, u8 len, bool ext_adv, bool ctl_time, 6133 + u64 instant) 6132 6134 { 6133 6135 struct discovery_state *d = &hdev->discovery; 6134 6136 struct smp_irk *irk; ··· 6217 6217 * for advertising reports) and is already verified to be RPA above. 
6218 6218 */ 6219 6219 conn = check_pending_le_conn(hdev, bdaddr, bdaddr_type, bdaddr_resolved, 6220 - type); 6220 + type, phy, sec_phy); 6221 6221 if (!ext_adv && conn && type == LE_ADV_IND && 6222 6222 len <= max_adv_len(hdev)) { 6223 6223 /* Store report for later inclusion by ··· 6363 6363 if (info->length <= max_adv_len(hdev)) { 6364 6364 rssi = info->data[info->length]; 6365 6365 process_adv_report(hdev, info->type, &info->bdaddr, 6366 - info->bdaddr_type, NULL, 0, rssi, 6366 + info->bdaddr_type, NULL, 0, 6367 + HCI_ADV_PHY_1M, 0, rssi, 6367 6368 info->data, info->length, false, 6368 6369 false, instant); 6369 6370 } else { ··· 6449 6448 if (legacy_evt_type != LE_ADV_INVALID) { 6450 6449 process_adv_report(hdev, legacy_evt_type, &info->bdaddr, 6451 6450 info->bdaddr_type, NULL, 0, 6451 + info->primary_phy, 6452 + info->secondary_phy, 6452 6453 info->rssi, info->data, info->length, 6453 6454 !(evt_type & LE_EXT_ADV_LEGACY_PDU), 6454 6455 false, instant); ··· 6733 6730 6734 6731 process_adv_report(hdev, info->type, &info->bdaddr, 6735 6732 info->bdaddr_type, &info->direct_addr, 6736 - info->direct_addr_type, info->rssi, NULL, 0, 6737 - false, false, instant); 6733 + info->direct_addr_type, HCI_ADV_PHY_1M, 0, 6734 + info->rssi, NULL, 0, false, false, instant); 6738 6735 } 6739 6736 6740 6737 hci_dev_unlock(hdev);
+6 -3
net/bluetooth/hci_sync.c
··· 6346 6346 6347 6347 plen = sizeof(*cp); 6348 6348 6349 - if (scan_1m(hdev)) { 6349 + if (scan_1m(hdev) && (conn->le_adv_phy == HCI_ADV_PHY_1M || 6350 + conn->le_adv_sec_phy == HCI_ADV_PHY_1M)) { 6350 6351 cp->phys |= LE_SCAN_PHY_1M; 6351 6352 set_ext_conn_params(conn, p); 6352 6353 ··· 6355 6354 plen += sizeof(*p); 6356 6355 } 6357 6356 6358 - if (scan_2m(hdev)) { 6357 + if (scan_2m(hdev) && (conn->le_adv_phy == HCI_ADV_PHY_2M || 6358 + conn->le_adv_sec_phy == HCI_ADV_PHY_2M)) { 6359 6359 cp->phys |= LE_SCAN_PHY_2M; 6360 6360 set_ext_conn_params(conn, p); 6361 6361 ··· 6364 6362 plen += sizeof(*p); 6365 6363 } 6366 6364 6367 - if (scan_coded(hdev)) { 6365 + if (scan_coded(hdev) && (conn->le_adv_phy == HCI_ADV_PHY_CODED || 6366 + conn->le_adv_sec_phy == HCI_ADV_PHY_CODED)) { 6368 6367 cp->phys |= LE_SCAN_PHY_CODED; 6369 6368 set_ext_conn_params(conn, p); 6370 6369
+1 -1
net/bluetooth/l2cap_core.c
··· 7018 7018 if (hci_dev_test_flag(hdev, HCI_ADVERTISING)) 7019 7019 hcon = hci_connect_le(hdev, dst, dst_type, false, 7020 7020 chan->sec_level, timeout, 7021 - HCI_ROLE_SLAVE); 7021 + HCI_ROLE_SLAVE, 0, 0); 7022 7022 else 7023 7023 hcon = hci_connect_le_scan(hdev, dst, dst_type, 7024 7024 chan->sec_level, timeout,
+4 -3
net/bluetooth/l2cap_sock.c
··· 439 439 struct l2cap_chan *chan = l2cap_pi(sk)->chan; 440 440 struct l2cap_options opts; 441 441 struct l2cap_conninfo cinfo; 442 - int len, err = 0; 442 + int err = 0; 443 + size_t len; 443 444 u32 opt; 444 445 445 446 BT_DBG("sk %p", sk); ··· 487 486 488 487 BT_DBG("mode 0x%2.2x", chan->mode); 489 488 490 - len = min_t(unsigned int, len, sizeof(opts)); 489 + len = min(len, sizeof(opts)); 491 490 if (copy_to_user(optval, (char *) &opts, len)) 492 491 err = -EFAULT; 493 492 ··· 537 536 cinfo.hci_handle = chan->conn->hcon->handle; 538 537 memcpy(cinfo.dev_class, chan->conn->hcon->dev_class, 3); 539 538 540 - len = min_t(unsigned int, len, sizeof(cinfo)); 539 + len = min(len, sizeof(cinfo)); 541 540 if (copy_to_user(optval, (char *) &cinfo, len)) 542 541 err = -EFAULT; 543 542
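The l2cap_sock.c hunk above (and the matching sco.c one further down) changes `len` from `int` to `size_t` so the plain, type-checked `min()` can replace `min_t(unsigned int, ...)`. The pitfall being closed: `min_t()` force-casts both operands, so a negative `int` length wraps to a huge unsigned value and the clamp silently resolves to the structure size. A minimal userspace sketch of the two patterns (the helper names are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stddef.h>

/* Mimics min_t(unsigned int, len, cap): both sides are cast before the
 * compare, so a negative len wraps to a huge value and the "clamp"
 * returns cap even though the caller passed an invalid length. */
static unsigned int clamp_like_min_t(int len, unsigned int cap)
{
	unsigned int a = (unsigned int)len;

	return a < cap ? a : cap;
}

/* The fixed pattern: keep the length unsigned (size_t) end to end, so a
 * negative value can never sneak in and a cast-free min() is safe. */
static size_t clamp_like_min(size_t len, size_t cap)
{
	return len < cap ? len : cap;
}
```

With a corrupt `len` of -1, `clamp_like_min_t()` happily returns the full `cap`, so the later copy-out would hand back the whole structure; keeping the length unsigned from the start makes that state unrepresentable.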
+17 -7
net/bluetooth/mgmt.c
··· 2623 2623 goto failed; 2624 2624 } 2625 2625 2626 - err = hci_cmd_sync_queue(hdev, add_uuid_sync, cmd, mgmt_class_complete); 2626 + /* MGMT_OP_ADD_UUID don't require adapter the UP/Running so use 2627 + * hci_cmd_sync_submit instead of hci_cmd_sync_queue. 2628 + */ 2629 + err = hci_cmd_sync_submit(hdev, add_uuid_sync, cmd, 2630 + mgmt_class_complete); 2627 2631 if (err < 0) { 2628 2632 mgmt_pending_free(cmd); 2629 2633 goto failed; ··· 2721 2717 goto unlock; 2722 2718 } 2723 2719 2724 - err = hci_cmd_sync_queue(hdev, remove_uuid_sync, cmd, 2725 - mgmt_class_complete); 2720 + /* MGMT_OP_REMOVE_UUID don't require adapter the UP/Running so use 2721 + * hci_cmd_sync_submit instead of hci_cmd_sync_queue. 2722 + */ 2723 + err = hci_cmd_sync_submit(hdev, remove_uuid_sync, cmd, 2724 + mgmt_class_complete); 2726 2725 if (err < 0) 2727 2726 mgmt_pending_free(cmd); 2728 2727 ··· 2791 2784 goto unlock; 2792 2785 } 2793 2786 2794 - err = hci_cmd_sync_queue(hdev, set_class_sync, cmd, 2795 - mgmt_class_complete); 2787 + /* MGMT_OP_SET_DEV_CLASS don't require adapter the UP/Running so use 2788 + * hci_cmd_sync_submit instead of hci_cmd_sync_queue. 2789 + */ 2790 + err = hci_cmd_sync_submit(hdev, set_class_sync, cmd, 2791 + mgmt_class_complete); 2796 2792 if (err < 0) 2797 2793 mgmt_pending_free(cmd); 2798 2794 ··· 5485 5475 goto unlock; 5486 5476 } 5487 5477 5488 - err = hci_cmd_sync_queue(hdev, mgmt_remove_adv_monitor_sync, cmd, 5489 - mgmt_remove_adv_monitor_complete); 5478 + err = hci_cmd_sync_submit(hdev, mgmt_remove_adv_monitor_sync, cmd, 5479 + mgmt_remove_adv_monitor_complete); 5490 5480 5491 5481 if (err) { 5492 5482 mgmt_pending_remove(cmd);
+4 -3
net/bluetooth/sco.c
··· 964 964 struct sock *sk = sock->sk; 965 965 struct sco_options opts; 966 966 struct sco_conninfo cinfo; 967 - int len, err = 0; 967 + int err = 0; 968 + size_t len; 968 969 969 970 BT_DBG("sk %p", sk); 970 971 ··· 987 986 988 987 BT_DBG("mtu %u", opts.mtu); 989 988 990 - len = min_t(unsigned int, len, sizeof(opts)); 989 + len = min(len, sizeof(opts)); 991 990 if (copy_to_user(optval, (char *)&opts, len)) 992 991 err = -EFAULT; 993 992 ··· 1005 1004 cinfo.hci_handle = sco_pi(sk)->conn->hcon->handle; 1006 1005 memcpy(cinfo.dev_class, sco_pi(sk)->conn->hcon->dev_class, 3); 1007 1006 1008 - len = min_t(unsigned int, len, sizeof(cinfo)); 1007 + len = min(len, sizeof(cinfo)); 1009 1008 if (copy_to_user(optval, (char *)&cinfo, len)) 1010 1009 err = -EFAULT; 1011 1010
+1 -1
net/bridge/br_netlink.c
··· 667 667 { 668 668 u32 filter = RTEXT_FILTER_BRVLAN_COMPRESSED; 669 669 670 - return br_info_notify(event, br, port, filter); 670 + br_info_notify(event, br, port, filter); 671 671 } 672 672 673 673 /*
+1 -11
net/ethernet/eth.c
··· 164 164 eth = (struct ethhdr *)skb->data; 165 165 skb_pull_inline(skb, ETH_HLEN); 166 166 167 - if (unlikely(!ether_addr_equal_64bits(eth->h_dest, 168 - dev->dev_addr))) { 169 - if (unlikely(is_multicast_ether_addr_64bits(eth->h_dest))) { 170 - if (ether_addr_equal_64bits(eth->h_dest, dev->broadcast)) 171 - skb->pkt_type = PACKET_BROADCAST; 172 - else 173 - skb->pkt_type = PACKET_MULTICAST; 174 - } else { 175 - skb->pkt_type = PACKET_OTHERHOST; 176 - } 177 - } 167 + eth_skb_pkt_type(skb, dev); 178 168 179 169 /* 180 170 * Some variants of DSA tagging don't have an ethertype field
+10 -2
net/ipv4/icmp.c
··· 92 92 #include <net/inet_common.h> 93 93 #include <net/ip_fib.h> 94 94 #include <net/l3mdev.h> 95 + #include <net/addrconf.h> 95 96 96 97 /* 97 98 * Build xmit assembly blocks ··· 1033 1032 struct icmp_ext_hdr *ext_hdr, _ext_hdr; 1034 1033 struct icmp_ext_echo_iio *iio, _iio; 1035 1034 struct net *net = dev_net(skb->dev); 1035 + struct inet6_dev *in6_dev; 1036 + struct in_device *in_dev; 1036 1037 struct net_device *dev; 1037 1038 char buff[IFNAMSIZ]; 1038 1039 u16 ident_len; ··· 1118 1115 /* Fill bits in reply message */ 1119 1116 if (dev->flags & IFF_UP) 1120 1117 status |= ICMP_EXT_ECHOREPLY_ACTIVE; 1121 - if (__in_dev_get_rcu(dev) && __in_dev_get_rcu(dev)->ifa_list) 1118 + 1119 + in_dev = __in_dev_get_rcu(dev); 1120 + if (in_dev && rcu_access_pointer(in_dev->ifa_list)) 1122 1121 status |= ICMP_EXT_ECHOREPLY_IPV4; 1123 - if (!list_empty(&rcu_dereference(dev->ip6_ptr)->addr_list)) 1122 + 1123 + in6_dev = __in6_dev_get(dev); 1124 + if (in6_dev && !list_empty(&in6_dev->addr_list)) 1124 1125 status |= ICMP_EXT_ECHOREPLY_IPV6; 1126 + 1125 1127 dev_put(dev); 1126 1128 icmphdr->un.echo.sequence |= htons(status); 1127 1129 return true;
+3
net/ipv4/route.c
··· 2154 2154 int err = -EINVAL; 2155 2155 u32 tag = 0; 2156 2156 2157 + if (!in_dev) 2158 + return -EINVAL; 2159 + 2157 2160 if (ipv4_is_multicast(saddr) || ipv4_is_lbcast(saddr)) 2158 2161 goto martian_source; 2159 2162
+2 -1
net/ipv4/tcp_ao.c
··· 1068 1068 { 1069 1069 struct tcp_sock *tp = tcp_sk(sk); 1070 1070 struct tcp_ao_info *ao_info; 1071 + struct hlist_node *next; 1071 1072 union tcp_ao_addr *addr; 1072 1073 struct tcp_ao_key *key; 1073 1074 int family, l3index; ··· 1091 1090 l3index = l3mdev_master_ifindex_by_index(sock_net(sk), 1092 1091 sk->sk_bound_dev_if); 1093 1092 1094 - hlist_for_each_entry_rcu(key, &ao_info->head, node) { 1093 + hlist_for_each_entry_safe(key, next, &ao_info->head, node) { 1095 1094 if (!tcp_ao_key_cmp(key, l3index, addr, key->prefixlen, family, -1, -1)) 1096 1095 continue; 1097 1096
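The tcp_ao.c hunk above swaps `hlist_for_each_entry_rcu()` for `hlist_for_each_entry_safe()` because the loop body may unlink and free the entry it is standing on; the `_safe` variant caches the next pointer before the body runs (the openvswitch conntrack hunk further down makes the same substitution). The core idea, sketched on a plain singly linked list in userspace C (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	int val;
	struct node *next;
};

/* Deletion-safe traversal: grab `next` before the body may free `pos`,
 * which is what the kernel's *_safe iterators do under the hood. */
static int remove_matching(struct node **head, int val)
{
	struct node **link = head, *pos, *next;
	int removed = 0;

	for (pos = *head; pos; pos = next) {
		next = pos->next;	/* cached before pos can go away */
		if (pos->val == val) {
			*link = next;	/* unlink, then free the entry */
			free(pos);
			removed++;
		} else {
			link = &pos->next;
		}
	}
	return removed;
}

/* Tiny helper for building test lists; no malloc failure handling. */
static struct node *push(struct node *head, int val)
{
	struct node *n = malloc(sizeof(*n));

	n->val = val;
	n->next = head;
	return n;
}
```

A non-`_safe` loop would dereference `pos->next` after `free(pos)`, a use-after-free that only bites when the allocator reuses the memory.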
+3 -2
net/ipv4/udp.c
··· 1134 1134 1135 1135 if (msg->msg_controllen) { 1136 1136 err = udp_cmsg_send(sk, msg, &ipc.gso_size); 1137 - if (err > 0) 1137 + if (err > 0) { 1138 1138 err = ip_cmsg_send(sk, msg, &ipc, 1139 1139 sk->sk_family == AF_INET6); 1140 + connected = 0; 1141 + } 1140 1142 if (unlikely(err < 0)) { 1141 1143 kfree(ipc.opt); 1142 1144 return err; 1143 1145 } 1144 1146 if (ipc.opt) 1145 1147 free = 1; 1146 - connected = 0; 1147 1148 } 1148 1149 if (!ipc.opt) { 1149 1150 struct ip_options_rcu *inet_opt;
+3 -2
net/ipv6/udp.c
··· 1487 1487 ipc6.opt = opt; 1488 1488 1489 1489 err = udp_cmsg_send(sk, msg, &ipc6.gso_size); 1490 - if (err > 0) 1490 + if (err > 0) { 1491 1491 err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, fl6, 1492 1492 &ipc6); 1493 + connected = false; 1494 + } 1493 1495 if (err < 0) { 1494 1496 fl6_sock_release(flowlabel); 1495 1497 return err; ··· 1503 1501 } 1504 1502 if (!(opt->opt_nflen|opt->opt_flen)) 1505 1503 opt = NULL; 1506 - connected = false; 1507 1504 } 1508 1505 if (!opt) { 1509 1506 opt = txopt_get(np);
+22 -5
net/mac80211/chan.c
··· 819 819 struct ieee80211_local *local = sdata->local; 820 820 struct ieee80211_chanctx_conf *conf; 821 821 struct ieee80211_chanctx *curr_ctx = NULL; 822 + bool new_idle; 822 823 int ret; 823 824 824 825 if (WARN_ON(sdata->vif.type == NL80211_IFTYPE_NAN)) ··· 857 856 858 857 rcu_assign_pointer(link->conf->chanctx_conf, conf); 859 858 860 - sdata->vif.cfg.idle = !conf; 861 - 862 859 if (curr_ctx && ieee80211_chanctx_num_assigned(local, curr_ctx) > 0) { 863 860 ieee80211_recalc_chanctx_chantype(local, curr_ctx); 864 861 ieee80211_recalc_smps_chanctx(local, curr_ctx); ··· 869 870 ieee80211_recalc_chanctx_min_def(local, new_ctx, NULL); 870 871 } 871 872 872 - if (sdata->vif.type != NL80211_IFTYPE_P2P_DEVICE && 873 - sdata->vif.type != NL80211_IFTYPE_MONITOR) 874 - ieee80211_vif_cfg_change_notify(sdata, BSS_CHANGED_IDLE); 873 + if (conf) { 874 + new_idle = false; 875 + } else { 876 + struct ieee80211_link_data *tmp; 877 + 878 + new_idle = true; 879 + for_each_sdata_link(local, tmp) { 880 + if (rcu_access_pointer(tmp->conf->chanctx_conf)) { 881 + new_idle = false; 882 + break; 883 + } 884 + } 885 + } 886 + 887 + if (new_idle != sdata->vif.cfg.idle) { 888 + sdata->vif.cfg.idle = new_idle; 889 + 890 + if (sdata->vif.type != NL80211_IFTYPE_P2P_DEVICE && 891 + sdata->vif.type != NL80211_IFTYPE_MONITOR) 892 + ieee80211_vif_cfg_change_notify(sdata, BSS_CHANGED_IDLE); 893 + } 875 894 876 895 ieee80211_check_fast_xmit_iface(sdata); 877 896
+7 -1
net/mac80211/mesh.c
··· 747 747 struct sk_buff *skb, u32 ctrl_flags) 748 748 { 749 749 struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; 750 + struct ieee80211_mesh_fast_tx_key key = { 751 + .type = MESH_FAST_TX_TYPE_LOCAL 752 + }; 750 753 struct ieee80211_mesh_fast_tx *entry; 751 754 struct ieee80211s_hdr *meshhdr; 752 755 u8 sa[ETH_ALEN] __aligned(2); ··· 785 782 return false; 786 783 } 787 784 788 - entry = mesh_fast_tx_get(sdata, skb->data); 785 + ether_addr_copy(key.addr, skb->data); 786 + if (!ether_addr_equal(skb->data + ETH_ALEN, sdata->vif.addr)) 787 + key.type = MESH_FAST_TX_TYPE_PROXIED; 788 + entry = mesh_fast_tx_get(sdata, &key); 789 789 if (!entry) 790 790 return false; 791 791
+33 -3
net/mac80211/mesh.h
··· 135 135 #define MESH_FAST_TX_CACHE_TIMEOUT 8000 /* msecs */ 136 136 137 137 /** 138 + * enum ieee80211_mesh_fast_tx_type - cached mesh fast tx entry type 139 + * 140 + * @MESH_FAST_TX_TYPE_LOCAL: tx from the local vif address as SA 141 + * @MESH_FAST_TX_TYPE_PROXIED: local tx with a different SA (e.g. bridged) 142 + * @MESH_FAST_TX_TYPE_FORWARDED: forwarded from a different mesh point 143 + * @NUM_MESH_FAST_TX_TYPE: number of entry types 144 + */ 145 + enum ieee80211_mesh_fast_tx_type { 146 + MESH_FAST_TX_TYPE_LOCAL, 147 + MESH_FAST_TX_TYPE_PROXIED, 148 + MESH_FAST_TX_TYPE_FORWARDED, 149 + 150 + /* must be last */ 151 + NUM_MESH_FAST_TX_TYPE 152 + }; 153 + 154 + 155 + /** 156 + * struct ieee80211_mesh_fast_tx_key - cached mesh fast tx entry key 157 + * 158 + * @addr: The Ethernet DA for this entry 159 + * @type: cache entry type 160 + */ 161 + struct ieee80211_mesh_fast_tx_key { 162 + u8 addr[ETH_ALEN] __aligned(2); 163 + u16 type; 164 + }; 165 + 166 + /** 138 167 * struct ieee80211_mesh_fast_tx - cached mesh fast tx entry 139 168 * @rhash: rhashtable pointer 140 - * @addr_key: The Ethernet DA which is the key for this entry 169 + * @key: the lookup key for this cache entry 141 170 * @fast_tx: base fast_tx data 142 171 * @hdr: cached mesh and rfc1042 headers 143 172 * @hdrlen: length of mesh + rfc1042 ··· 177 148 */ 178 149 struct ieee80211_mesh_fast_tx { 179 150 struct rhash_head rhash; 180 - u8 addr_key[ETH_ALEN] __aligned(2); 151 + struct ieee80211_mesh_fast_tx_key key; 181 152 182 153 struct ieee80211_fast_tx fast_tx; 183 154 u8 hdr[sizeof(struct ieee80211s_hdr) + sizeof(rfc1042_header)]; ··· 363 334 364 335 bool mesh_action_is_path_sel(struct ieee80211_mgmt *mgmt); 365 336 struct ieee80211_mesh_fast_tx * 366 - mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, const u8 *addr); 337 + mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, 338 + struct ieee80211_mesh_fast_tx_key *key); 367 339 bool ieee80211_mesh_xmit_fast(struct ieee80211_sub_if_data *sdata, 368 340 struct sk_buff *skb, u32 ctrl_flags); 369 341 void mesh_fast_tx_cache(struct ieee80211_sub_if_data *sdata,
+22 -9
net/mac80211/mesh_pathtbl.c
··· 37 37 static const struct rhashtable_params fast_tx_rht_params = { 38 38 .nelem_hint = 10, 39 39 .automatic_shrinking = true, 40 - .key_len = ETH_ALEN, 41 - .key_offset = offsetof(struct ieee80211_mesh_fast_tx, addr_key), 40 + .key_len = sizeof_field(struct ieee80211_mesh_fast_tx, key), 41 + .key_offset = offsetof(struct ieee80211_mesh_fast_tx, key), 42 42 .head_offset = offsetof(struct ieee80211_mesh_fast_tx, rhash), 43 43 .hashfn = mesh_table_hash, 44 44 }; ··· 431 431 } 432 432 433 433 struct ieee80211_mesh_fast_tx * 434 - mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, const u8 *addr) 434 + mesh_fast_tx_get(struct ieee80211_sub_if_data *sdata, 435 + struct ieee80211_mesh_fast_tx_key *key) 435 436 { 436 437 struct ieee80211_mesh_fast_tx *entry; 437 438 struct mesh_tx_cache *cache; 438 439 439 440 cache = &sdata->u.mesh.tx_cache; 440 - entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params); 441 + entry = rhashtable_lookup(&cache->rht, key, fast_tx_rht_params); 441 442 if (!entry) 442 443 return NULL; 443 444 444 445 if (!(entry->mpath->flags & MESH_PATH_ACTIVE) || 445 446 mpath_expired(entry->mpath)) { 446 447 spin_lock_bh(&cache->walk_lock); 447 - entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params); 448 + entry = rhashtable_lookup(&cache->rht, key, fast_tx_rht_params); 448 449 if (entry) 449 450 mesh_fast_tx_entry_free(cache, entry); 450 451 spin_unlock_bh(&cache->walk_lock); ··· 490 489 if (!sta) 491 490 return; 492 491 492 + build.key.type = MESH_FAST_TX_TYPE_LOCAL; 493 493 if ((meshhdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6) { 494 494 /* This is required to keep the mppath alive */ 495 495 mppath = mpp_path_lookup(sdata, meshhdr->eaddr1); 496 496 if (!mppath) 497 497 return; 498 498 build.mppath = mppath; 499 + if (!ether_addr_equal(meshhdr->eaddr2, sdata->vif.addr)) 500 + build.key.type = MESH_FAST_TX_TYPE_PROXIED; 499 501 } else if (ieee80211_has_a4(hdr->frame_control)) { 500 502 mppath = mpath; 501 503 } else { 502 504 return; 503 505 } 506 507 + if (!ether_addr_equal(hdr->addr4, sdata->vif.addr)) 508 + build.key.type = MESH_FAST_TX_TYPE_FORWARDED; 504 509 505 510 /* rate limit, in case fast xmit can't be enabled */ 506 511 if (mppath->fast_tx_check == jiffies) ··· 554 547 } 555 548 } 556 549 557 - memcpy(build.addr_key, mppath->dst, ETH_ALEN); 550 + memcpy(build.key.addr, mppath->dst, ETH_ALEN); 558 551 build.timestamp = jiffies; 559 552 build.fast_tx.band = info->band; 560 553 build.fast_tx.da_offs = offsetof(struct ieee80211_hdr, addr3); ··· 653 646 const u8 *addr) 654 647 { 655 648 struct mesh_tx_cache *cache = &sdata->u.mesh.tx_cache; 649 + struct ieee80211_mesh_fast_tx_key key = {}; 656 650 struct ieee80211_mesh_fast_tx *entry; 651 + int i; 657 652 653 + ether_addr_copy(key.addr, addr); 658 654 spin_lock_bh(&cache->walk_lock); 659 - entry = rhashtable_lookup_fast(&cache->rht, addr, fast_tx_rht_params); 660 - if (entry) 661 - mesh_fast_tx_entry_free(cache, entry); 655 + for (i = 0; i < NUM_MESH_FAST_TX_TYPE; i++) { 656 + key.type = i; 657 + entry = rhashtable_lookup_fast(&cache->rht, &key, fast_tx_rht_params); 658 + if (entry) 659 + mesh_fast_tx_entry_free(cache, entry); 660 + } 662 661 spin_unlock_bh(&cache->walk_lock); 663 662 } 664 663
+21 -10
net/mac80211/mlme.c
··· 620 620 .from_ap = true, 621 621 .start = ies->data, 622 622 .len = ies->len, 623 - .mode = conn->mode, 624 623 }; 625 624 struct ieee802_11_elems *elems; 626 625 struct ieee80211_supported_band *sband; ··· 628 629 int ret; 629 630 630 631 again: 632 + parse_params.mode = conn->mode; 631 633 elems = ieee802_11_parse_elems_full(&parse_params); 632 634 if (!elems) 633 635 return ERR_PTR(-ENOMEM); ··· 636 636 ap_mode = ieee80211_determine_ap_chan(sdata, channel, bss->vht_cap_info, 637 637 elems, false, conn, &ap_chandef); 638 638 639 - mlme_link_id_dbg(sdata, link_id, "determined AP %pM to be %s\n", 640 - cbss->bssid, ieee80211_conn_mode_str(ap_mode)); 641 - 642 639 /* this should be impossible since parsing depends on our mode */ 643 640 if (WARN_ON(ap_mode > conn->mode)) { 644 641 ret = -EINVAL; 645 642 goto free; 646 643 } 644 + 645 + if (conn->mode != ap_mode) { 646 + conn->mode = ap_mode; 647 + kfree(elems); 648 + goto again; 649 + } 650 + 651 + mlme_link_id_dbg(sdata, link_id, "determined AP %pM to be %s\n", 652 + cbss->bssid, ieee80211_conn_mode_str(ap_mode)); 647 653 648 654 sband = sdata->local->hw.wiphy->bands[channel->band]; 649 655 ··· 701 695 break; 702 696 } 703 697 704 - conn->mode = ap_mode; 705 698 chanreq->oper = ap_chandef; 706 699 707 700 /* wider-bandwidth OFDMA is only done in EHT */ ··· 762 757 } 763 758 764 759 /* the mode can only decrease, so this must terminate */ 765 - if (ap_mode != conn->mode) 760 + if (ap_mode != conn->mode) { 761 + kfree(elems); 766 762 goto again; 763 + } 767 764 768 765 mlme_link_id_dbg(sdata, link_id, 769 766 "connecting with %s mode, max bandwidth %d MHz\n", ··· 5844 5837 */ 5845 5838 if (control & 5846 5839 IEEE80211_MLE_STA_RECONF_CONTROL_AP_REM_TIMER_PRESENT) 5847 - link_removal_timeout[link_id] = le16_to_cpu(*(__le16 *)pos); 5840 + link_removal_timeout[link_id] = get_unaligned_le16(pos); 5848 5841 } 5849 5842 5850 5843 removed_links &= sdata->vif.valid_links; ··· 5869 5862 continue; 5870 5863 } 5871 5864 5872 - link_delay = link_conf->beacon_int * 5873 - link_removal_timeout[link_id]; 5865 + if (link_removal_timeout[link_id] < 1) 5866 + link_delay = 0; 5867 + else 5868 + link_delay = link_conf->beacon_int * 5869 + (link_removal_timeout[link_id] - 1); 5874 5870 5875 5871 if (!delay) 5876 5872 delay = link_delay; ··· 6228 6218 link->u.mgd.dtim_period = elems->dtim_period; 6229 6219 link->u.mgd.have_beacon = true; 6230 6220 ifmgd->assoc_data->need_beacon = false; 6231 - if (ieee80211_hw_check(&local->hw, TIMING_BEACON_ONLY)) { 6221 + if (ieee80211_hw_check(&local->hw, TIMING_BEACON_ONLY) && 6222 + !ieee80211_is_s1g_beacon(hdr->frame_control)) { 6232 6223 link->conf->sync_tsf = 6233 6224 le64_to_cpu(mgmt->u.beacon.timestamp); 6234 6225 link->conf->sync_device_ts =
+5 -1
net/mac80211/rate.c
··· 877 877 struct ieee80211_sub_if_data *sdata; 878 878 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 879 879 struct ieee80211_supported_band *sband; 880 + u32 mask = ~0; 880 881 881 882 rate_control_fill_sta_table(sta, info, dest, max_rates); 882 883 ··· 890 889 if (ieee80211_is_tx_data(skb)) 891 890 rate_control_apply_mask(sdata, sta, sband, dest, max_rates); 892 891 892 + if (!(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX)) 893 + mask = sdata->rc_rateidx_mask[info->band]; 894 + 893 895 if (dest[0].idx < 0) 894 896 __rate_control_send_low(&sdata->local->hw, sband, sta, info, 895 - sdata->rc_rateidx_mask[info->band]); 897 + mask); 896 898 897 899 if (sta) 898 900 rate_fixup_ratelist(vif, sband, info, dest, max_rates);
+14 -3
net/mac80211/rx.c
··· 2763 2763 struct sk_buff *skb, int hdrlen) 2764 2764 { 2765 2765 struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; 2766 - struct ieee80211_mesh_fast_tx *entry = NULL; 2766 + struct ieee80211_mesh_fast_tx_key key = { 2767 + .type = MESH_FAST_TX_TYPE_FORWARDED 2768 + }; 2769 + struct ieee80211_mesh_fast_tx *entry; 2767 2770 struct ieee80211s_hdr *mesh_hdr; 2768 2771 struct tid_ampdu_tx *tid_tx; 2769 2772 struct sta_info *sta; ··· 2775 2772 2776 2773 mesh_hdr = (struct ieee80211s_hdr *)(skb->data + sizeof(eth)); 2777 2774 if ((mesh_hdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6) 2778 - entry = mesh_fast_tx_get(sdata, mesh_hdr->eaddr1); 2775 + ether_addr_copy(key.addr, mesh_hdr->eaddr1); 2779 2776 else if (!(mesh_hdr->flags & MESH_FLAGS_AE)) 2780 - entry = mesh_fast_tx_get(sdata, skb->data); 2777 + ether_addr_copy(key.addr, skb->data); 2778 + else 2779 + return false; 2780 + 2781 + entry = mesh_fast_tx_get(sdata, &key); 2781 2782 if (!entry) 2782 2783 return false; 2783 2784 ··· 3787 3780 } 3788 3781 break; 3789 3782 case WLAN_CATEGORY_PROTECTED_EHT: 3783 + if (len < offsetofend(typeof(*mgmt), 3784 + u.action.u.ttlm_req.action_code)) 3785 + break; 3786 + 3790 3787 switch (mgmt->u.action.u.ttlm_req.action_code) { 3791 3788 case WLAN_PROTECTED_EHT_ACTION_TTLM_REQ: 3792 3789 if (sdata->vif.type != NL80211_IFTYPE_STATION)
+1
net/mac80211/scan.c
··· 648 648 cpu_to_le16(IEEE80211_SN_TO_SEQ(sn)); 649 649 } 650 650 IEEE80211_SKB_CB(skb)->flags |= tx_flags; 651 + IEEE80211_SKB_CB(skb)->control.flags |= IEEE80211_TX_CTRL_SCAN_TX; 651 652 ieee80211_tx_skb_tid_band(sdata, skb, 7, channel->band); 652 653 } 653 654 }
+9 -4
net/mac80211/tx.c
··· 698 698 txrc.bss_conf = &tx->sdata->vif.bss_conf; 699 699 txrc.skb = tx->skb; 700 700 txrc.reported_rate.idx = -1; 701 - txrc.rate_idx_mask = tx->sdata->rc_rateidx_mask[info->band]; 702 701 703 - if (tx->sdata->rc_has_mcs_mask[info->band]) 704 - txrc.rate_idx_mcs_mask = 705 - tx->sdata->rc_rateidx_mcs_mask[info->band]; 702 + if (unlikely(info->control.flags & IEEE80211_TX_CTRL_SCAN_TX)) { 703 + txrc.rate_idx_mask = ~0; 704 + } else { 705 + txrc.rate_idx_mask = tx->sdata->rc_rateidx_mask[info->band]; 706 + 707 + if (tx->sdata->rc_has_mcs_mask[info->band]) 708 + txrc.rate_idx_mcs_mask = 709 + tx->sdata->rc_rateidx_mcs_mask[info->band]; 710 + } 706 711 707 712 txrc.bss = (tx->sdata->vif.type == NL80211_IFTYPE_AP || 708 713 tx->sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+4 -2
net/netfilter/ipvs/ip_vs_proto_sctp.c
··· 126 126 if (sctph->source != cp->vport || payload_csum || 127 127 skb->ip_summed == CHECKSUM_PARTIAL) { 128 128 sctph->source = cp->vport; 129 - sctp_nat_csum(skb, sctph, sctphoff); 129 + if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb)) 130 + sctp_nat_csum(skb, sctph, sctphoff); 130 131 } else { 131 132 skb->ip_summed = CHECKSUM_UNNECESSARY; 132 133 } ··· 175 174 (skb->ip_summed == CHECKSUM_PARTIAL && 176 175 !(skb_dst(skb)->dev->features & NETIF_F_SCTP_CRC))) { 177 176 sctph->dest = cp->dport; 178 - sctp_nat_csum(skb, sctph, sctphoff); 177 + if (!skb_is_gso(skb) || !skb_is_gso_sctp(skb)) 178 + sctp_nat_csum(skb, sctph, sctphoff); 179 179 } else if (skb->ip_summed != CHECKSUM_PARTIAL) { 180 180 skb->ip_summed = CHECKSUM_UNNECESSARY; 181 181 }
+3 -1
net/netfilter/nft_chain_filter.c
··· 338 338 return; 339 339 340 340 if (n > 1) { 341 - nf_unregister_net_hook(ctx->net, &found->ops); 341 + if (!(ctx->chain->table->flags & NFT_TABLE_F_DORMANT)) 342 + nf_unregister_net_hook(ctx->net, &found->ops); 343 + 342 344 list_del_rcu(&found->list); 343 345 kfree_rcu(found, rcu); 344 346 return;
+2 -2
net/openvswitch/conntrack.c
··· 1593 1593 for (i = 0; i < CT_LIMIT_HASH_BUCKETS; ++i) { 1594 1594 struct hlist_head *head = &info->limits[i]; 1595 1595 struct ovs_ct_limit *ct_limit; 1596 + struct hlist_node *next; 1596 1597 1597 - hlist_for_each_entry_rcu(ct_limit, head, hlist_node, 1598 - lockdep_ovsl_is_held()) 1598 + hlist_for_each_entry_safe(ct_limit, next, head, hlist_node) 1599 1599 kfree_rcu(ct_limit, rcu); 1600 1600 } 1601 1601 kfree(info->limits);
+22 -64
net/sunrpc/xprtrdma/svc_rdma_rw.c
··· 231 231 } 232 232 233 233 /** 234 - * svc_rdma_write_chunk_release - Release Write chunk I/O resources 235 - * @rdma: controlling transport 236 - * @ctxt: Send context that is being released 237 - */ 238 - void svc_rdma_write_chunk_release(struct svcxprt_rdma *rdma, 239 - struct svc_rdma_send_ctxt *ctxt) 240 - { 241 - struct svc_rdma_write_info *info; 242 - struct svc_rdma_chunk_ctxt *cc; 243 - 244 - while (!list_empty(&ctxt->sc_write_info_list)) { 245 - info = list_first_entry(&ctxt->sc_write_info_list, 246 - struct svc_rdma_write_info, wi_list); 247 - list_del(&info->wi_list); 248 - 249 - cc = &info->wi_cc; 250 - svc_rdma_wake_send_waiters(rdma, cc->cc_sqecount); 251 - svc_rdma_write_info_free(info); 252 - } 253 - } 254 - 255 - /** 256 234 * svc_rdma_reply_chunk_release - Release Reply chunk I/O resources 257 235 * @rdma: controlling transport 258 236 * @ctxt: Send context that is being released ··· 286 308 struct ib_cqe *cqe = wc->wr_cqe; 287 309 struct svc_rdma_chunk_ctxt *cc = 288 310 container_of(cqe, struct svc_rdma_chunk_ctxt, cc_cqe); 311 + struct svc_rdma_write_info *info = 312 + container_of(cc, struct svc_rdma_write_info, wi_cc); 289 313 290 314 switch (wc->status) { 291 315 case IB_WC_SUCCESS: 292 316 trace_svcrdma_wc_write(&cc->cc_cid); 293 - return; 317 + break; 294 318 case IB_WC_WR_FLUSH_ERR: 295 319 trace_svcrdma_wc_write_flush(wc, &cc->cc_cid); 296 320 break; ··· 300 320 trace_svcrdma_wc_write_err(wc, &cc->cc_cid); 301 321 } 302 322 303 - /* The RDMA Write has flushed, so the client won't get 304 - some of the outgoing RPC message. Signal the loss 305 - to the client by closing the connection. 306 - */ 307 - svc_xprt_deferred_close(&rdma->sc_xprt); 323 + svc_rdma_wake_send_waiters(rdma, cc->cc_sqecount); 324 + 325 + if (unlikely(wc->status != IB_WC_SUCCESS)) 326 + svc_xprt_deferred_close(&rdma->sc_xprt); 327 + 328 + svc_rdma_write_info_free(info); 308 329 } 309 330 310 331 /** ··· 601 620 return xdr->len; 602 621 } 603 622 604 - /* Link Write WRs for @chunk onto @sctxt's WR chain. 605 - */ 606 - static int svc_rdma_prepare_write_chunk(struct svcxprt_rdma *rdma, 607 - struct svc_rdma_send_ctxt *sctxt, 608 - const struct svc_rdma_chunk *chunk, 609 - const struct xdr_buf *xdr) 623 + static int svc_rdma_send_write_chunk(struct svcxprt_rdma *rdma, 624 + const struct svc_rdma_chunk *chunk, 625 + const struct xdr_buf *xdr) 610 626 { 611 627 struct svc_rdma_write_info *info; 612 628 struct svc_rdma_chunk_ctxt *cc; 613 - struct ib_send_wr *first_wr; 614 629 struct xdr_buf payload; 615 - struct list_head *pos; 616 - struct ib_cqe *cqe; 617 630 int ret; 618 631 619 632 if (xdr_buf_subsegment(xdr, &payload, chunk->ch_position, ··· 623 648 if (ret != payload.len) 624 649 goto out_err; 625 650 626 - ret = -EINVAL; 627 - if (unlikely(cc->cc_sqecount > rdma->sc_sq_depth)) 628 - goto out_err; 629 - 630 - first_wr = sctxt->sc_wr_chain; 631 - cqe = &cc->cc_cqe; 632 - list_for_each(pos, &cc->cc_rwctxts) { 633 - struct svc_rdma_rw_ctxt *rwc; 634 - 635 - rwc = list_entry(pos, struct svc_rdma_rw_ctxt, rw_list); 636 - first_wr = rdma_rw_ctx_wrs(&rwc->rw_ctx, rdma->sc_qp, 637 - rdma->sc_port_num, cqe, first_wr); 638 - cqe = NULL; 639 - } 640 - sctxt->sc_wr_chain = first_wr; 641 - sctxt->sc_sqecount += cc->cc_sqecount; 642 - list_add(&info->wi_list, &sctxt->sc_write_info_list); 643 - 644 651 trace_svcrdma_post_write_chunk(&cc->cc_cid, cc->cc_sqecount); 652 + ret = svc_rdma_post_chunk_ctxt(rdma, cc); 653 + if (ret < 0) 654 + goto out_err; 645 655 return 0; 646 656 647 657 out_err: ··· 635 675 } 636 676 637 677 /** 638 - * svc_rdma_prepare_write_list - Construct WR chain for sending Write list 678 + * svc_rdma_send_write_list - Send all chunks on the Write list 639 679 * @rdma: controlling RDMA transport 640 - * @write_pcl: Write list provisioned by the client 641 - * @sctxt: Send WR resources 680 + * @rctxt: Write list provisioned by the client 642 681 * @xdr: xdr_buf containing an RPC Reply message 643 682 * 644 683 * Returns zero on success, or a negative errno if one or more 645 684 * Write chunks could not be sent. 646 685 */ 647 - int svc_rdma_prepare_write_list(struct svcxprt_rdma *rdma, 648 - const struct svc_rdma_pcl *write_pcl, 649 - struct svc_rdma_send_ctxt *sctxt, 650 - const struct xdr_buf *xdr) 686 + int svc_rdma_send_write_list(struct svcxprt_rdma *rdma, 687 + const struct svc_rdma_recv_ctxt *rctxt, 688 + const struct xdr_buf *xdr) 651 689 { 652 690 struct svc_rdma_chunk *chunk; 653 691 int ret; 654 692 655 - pcl_for_each_chunk(chunk, write_pcl) { 693 + pcl_for_each_chunk(chunk, &rctxt->rc_write_pcl) { 656 694 if (!chunk->ch_payload_length) 657 695 break; 658 - ret = svc_rdma_prepare_write_chunk(rdma, sctxt, chunk, xdr); 696 + ret = svc_rdma_send_write_chunk(rdma, chunk, xdr); 659 697 if (ret < 0) 660 698 return ret; 661 699 }
+1 -4
net/sunrpc/xprtrdma/svc_rdma_sendto.c
··· 142 142 ctxt->sc_send_wr.sg_list = ctxt->sc_sges; 143 143 ctxt->sc_send_wr.send_flags = IB_SEND_SIGNALED; 144 144 ctxt->sc_cqe.done = svc_rdma_wc_send; 145 - INIT_LIST_HEAD(&ctxt->sc_write_info_list); 146 145 ctxt->sc_xprt_buf = buffer; 147 146 xdr_buf_init(&ctxt->sc_hdrbuf, ctxt->sc_xprt_buf, 148 147 rdma->sc_max_req_size); ··· 227 228 struct ib_device *device = rdma->sc_cm_id->device; 228 229 unsigned int i; 229 230 230 - svc_rdma_write_chunk_release(rdma, ctxt); 231 231 svc_rdma_reply_chunk_release(rdma, ctxt); 232 232 233 233 if (ctxt->sc_page_count) ··· 1013 1015 if (!p) 1014 1016 goto put_ctxt; 1015 1017 1016 - ret = svc_rdma_prepare_write_list(rdma, &rctxt->rc_write_pcl, sctxt, 1017 - &rqstp->rq_res); 1018 + ret = svc_rdma_send_write_list(rdma, rctxt, &rqstp->rq_res); 1018 1019 if (ret < 0) 1019 1020 goto put_ctxt; 1020 1021
+1 -1
net/tls/tls.h
··· 215 215 216 216 static inline bool tls_strp_msg_ready(struct tls_sw_context_rx *ctx) 217 217 { 218 - return ctx->strp.msg_ready; 218 + return READ_ONCE(ctx->strp.msg_ready); 219 219 } 220 220 221 221 static inline bool tls_strp_msg_mixed_decrypted(struct tls_sw_context_rx *ctx)
+3 -3
net/tls/tls_strp.c
··· 361 361 if (strp->stm.full_len && strp->stm.full_len == skb->len) { 362 362 desc->count = 0; 363 363 364 - strp->msg_ready = 1; 364 + WRITE_ONCE(strp->msg_ready, 1); 365 365 tls_rx_msg_ready(strp); 366 366 } 367 367 ··· 529 529 if (!tls_strp_check_queue_ok(strp)) 530 530 return tls_strp_read_copy(strp, false); 531 531 532 - strp->msg_ready = 1; 532 + WRITE_ONCE(strp->msg_ready, 1); 533 533 tls_rx_msg_ready(strp); 534 534 535 535 return 0; ··· 581 581 else 582 582 tls_strp_flush_anchor_copy(strp); 583 583 584 - strp->msg_ready = 0; 584 + WRITE_ONCE(strp->msg_ready, 0); 585 585 memset(&strp->stm, 0, sizeof(strp->stm)); 586 586 587 587 tls_strp_check_rcv(strp);
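The tls.h/tls_strp.c hunks above wrap every access to `strp->msg_ready` in READ_ONCE()/WRITE_ONCE() because the flag is written by the strparser and read from another context without a common lock; the annotations stop the compiler from tearing, caching, or duplicating the plain load and store. In portable userspace code a loose analogue is a relaxed C11 atomic (a sketch, not the kernel macros):

```c
#include <assert.h>
#include <stdatomic.h>

struct strp_state {
	/* stands in for strp->msg_ready: READ_ONCE/WRITE_ONCE in the
	 * kernel, a relaxed atomic here */
	atomic_int msg_ready;
};

/* Writer side: publish the flag with a single, untorn store. */
static void strp_mark_ready(struct strp_state *s)
{
	atomic_store_explicit(&s->msg_ready, 1, memory_order_relaxed);
}

/* Reader side: force a fresh load on every call instead of letting the
 * compiler keep a stale copy in a register. */
static int strp_msg_ready(struct strp_state *s)
{
	return atomic_load_explicit(&s->msg_ready, memory_order_relaxed);
}
```

Relaxed ordering suffices when the flag itself is the only datum being communicated; if the flag were to publish other writes, acquire/release ordering (or a lock) would be needed instead.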
+2
net/wireless/nl80211.c
··· 14031 14031 error: 14032 14032 for (i = 0; i < new_coalesce.n_rules; i++) { 14033 14033 tmp_rule = &new_coalesce.rules[i]; 14034 + if (!tmp_rule) 14035 + continue; 14034 14036 for (j = 0; j < tmp_rule->n_patterns; j++) 14035 14037 kfree(tmp_rule->patterns[j].mask); 14036 14038 kfree(tmp_rule->patterns);
+2 -2
net/wireless/trace.h
··· 1758 1758 1759 1759 DECLARE_EVENT_CLASS(tx_rx_evt, 1760 1760 TP_PROTO(struct wiphy *wiphy, u32 tx, u32 rx), 1761 - TP_ARGS(wiphy, rx, tx), 1761 + TP_ARGS(wiphy, tx, rx), 1762 1762 TP_STRUCT__entry( 1763 1763 WIPHY_ENTRY 1764 1764 __field(u32, tx) ··· 1775 1775 1776 1776 DEFINE_EVENT(tx_rx_evt, rdev_set_antenna, 1777 1777 TP_PROTO(struct wiphy *wiphy, u32 tx, u32 rx), 1778 - TP_ARGS(wiphy, rx, tx) 1778 + TP_ARGS(wiphy, tx, rx) 1779 1779 ); 1780 1780 1781 1781 DECLARE_EVENT_CLASS(wiphy_netdev_id_evt,
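The trace.h fix above restores `TP_ARGS(wiphy, tx, rx)` to match the prototype; because both masks are `u32`, the transposed arguments compiled cleanly and the tracepoint simply recorded tx and rx swapped. A userspace sketch of why identical parameter types hide this, plus one conventional remedy (single-member wrapper structs; all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

struct antenna {
	uint32_t tx, rx;
};

/* Plain u32 parameters: swapping the arguments at a call site still
 * compiles, so the mistake survives until someone reads the trace. */
static struct antenna record_plain(uint32_t tx, uint32_t rx)
{
	return (struct antenna){ .tx = tx, .rx = rx };
}

/* Distinct wrapper types: a transposed call no longer type-checks. */
struct tx_mask { uint32_t v; };
struct rx_mask { uint32_t v; };

static struct antenna record_typed(struct tx_mask tx, struct rx_mask rx)
{
	return (struct antenna){ .tx = tx.v, .rx = rx.v };
}
```

The wrapper-struct trick costs nothing at runtime (the structs have the same representation as the bare integer) but turns an argument-order slip into a compile error.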
+1 -1
sound/core/seq/seq_ump_convert.c
··· 428 428 midi1->note.group = midi2->note.group; 429 429 midi1->note.status = midi2->note.status; 430 430 midi1->note.channel = midi2->note.channel; 431 - switch (midi2->note.status << 4) { 431 + switch (midi2->note.status) { 432 432 case UMP_MSG_STATUS_NOTE_ON: 433 433 case UMP_MSG_STATUS_NOTE_OFF: 434 434 midi1->note.note = midi2->note.note;
+45 -4
sound/pci/hda/patch_realtek.c
··· 7467 7467 ALC285_FIXUP_CS35L56_I2C_2, 7468 7468 ALC285_FIXUP_CS35L56_I2C_4, 7469 7469 ALC285_FIXUP_ASUS_GA403U, 7470 + ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC, 7471 + ALC285_FIXUP_ASUS_GA403U_I2C_SPEAKER2_TO_DAC1, 7472 + ALC285_FIXUP_ASUS_GU605_SPI_2_HEADSET_MIC, 7473 + ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1 7470 7474 }; 7471 7475 7472 7476 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 9694 9690 .type = HDA_FIXUP_FUNC, 9695 9691 .v.func = alc285_fixup_asus_ga403u, 9696 9692 }, 9693 + [ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC] = { 9694 + .type = HDA_FIXUP_PINS, 9695 + .v.pins = (const struct hda_pintbl[]) { 9696 + { 0x19, 0x03a11050 }, 9697 + { 0x1b, 0x03a11c30 }, 9698 + { } 9699 + }, 9700 + .chained = true, 9701 + .chain_id = ALC285_FIXUP_ASUS_GA403U_I2C_SPEAKER2_TO_DAC1 9702 + }, 9703 + [ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1] = { 9704 + .type = HDA_FIXUP_FUNC, 9705 + .v.func = alc285_fixup_speaker2_to_dac1, 9706 + .chained = true, 9707 + .chain_id = ALC285_FIXUP_ASUS_GU605_SPI_2_HEADSET_MIC, 9708 + }, 9709 + [ALC285_FIXUP_ASUS_GU605_SPI_2_HEADSET_MIC] = { 9710 + .type = HDA_FIXUP_PINS, 9711 + .v.pins = (const struct hda_pintbl[]) { 9712 + { 0x19, 0x03a11050 }, 9713 + { 0x1b, 0x03a11c30 }, 9714 + { } 9715 + }, 9716 + .chained = true, 9717 + .chain_id = ALC285_FIXUP_CS35L56_SPI_2 9718 + }, 9719 + [ALC285_FIXUP_ASUS_GA403U_I2C_SPEAKER2_TO_DAC1] = { 9720 + .type = HDA_FIXUP_FUNC, 9721 + .v.func = alc285_fixup_speaker2_to_dac1, 9722 + .chained = true, 9723 + .chain_id = ALC285_FIXUP_ASUS_GA403U, 9724 + }, 9697 9725 }; 9698 9726 9699 9727 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 10120 10084 SND_PCI_QUIRK(0x103c, 0x8ca7, "HP ZBook Fury", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 10121 10085 SND_PCI_QUIRK(0x103c, 0x8cdd, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2), 10122 10086 SND_PCI_QUIRK(0x103c, 0x8cde, "HP Spectre", ALC287_FIXUP_CS35L41_I2C_2), 10087 + SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10088 + SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED), 10123 10089 SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED), 10124 10090 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 10125 10091 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), ··· 10181 10143 SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2), 10182 10144 SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2), 10183 10145 SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B), 10184 - SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U), 10146 + SND_PCI_QUIRK(0x1043, 0x1b13, "ASUS U41SV/GA403U", ALC285_FIXUP_ASUS_GA403U_HEADSET_MIC), 10185 10147 SND_PCI_QUIRK(0x1043, 0x1b93, "ASUS G614JVR/JIR", ALC245_FIXUP_CS35L41_SPI_2), 10186 10148 SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE), 10187 10149 SND_PCI_QUIRK(0x1043, 0x1c03, "ASUS UM3406HA", ALC287_FIXUP_CS35L41_I2C_2), ··· 10189 10151 SND_PCI_QUIRK(0x1043, 0x1c33, "ASUS UX5304MA", ALC245_FIXUP_CS35L41_SPI_2), 10190 10152 SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2), 10191 10153 SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401), 10192 - SND_PCI_QUIRK(0x1043, 0x1c63, "ASUS GU605M", ALC285_FIXUP_CS35L56_SPI_2), 10154 + SND_PCI_QUIRK(0x1043, 0x1c63, "ASUS GU605M", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1), 10193 10155 SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS), 10194 10156 SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JU/JV/JI", ALC285_FIXUP_ASUS_HEADSET_MIC), 10195 10157 SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JY/JZ/JI/JG", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS), ··· 10266 10228 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC), 10267 10229 SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC), 10268 10230 SND_PCI_QUIRK(0x152d, 0x1082, "Quanta NL3", ALC269_FIXUP_LIFEBOOK), 10231 + SND_PCI_QUIRK(0x152d, 0x1262, "Huawei NBLB-WAX9N", ALC2XX_FIXUP_HEADSET_MIC), 10269 10232 SND_PCI_QUIRK(0x1558, 0x0353, "Clevo V35[05]SN[CDE]Q", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10270 10233 SND_PCI_QUIRK(0x1558, 0x1323, "Clevo N130ZU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10271 10234 SND_PCI_QUIRK(0x1558, 0x1325, "Clevo N15[01][CW]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 10372 10333 SND_PCI_QUIRK(0x17aa, 0x222e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK), 10373 10334 SND_PCI_QUIRK(0x17aa, 0x2231, "Thinkpad T560", ALC292_FIXUP_TPT460), 10374 10335 SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC292_FIXUP_TPT460), 10336 + SND_PCI_QUIRK(0x17aa, 0x2234, "Thinkpad ICE-1", ALC287_FIXUP_TAS2781_I2C), 10375 10337 SND_PCI_QUIRK(0x17aa, 0x2245, "Thinkpad T470", ALC298_FIXUP_TPT470_DOCK), 10376 10338 SND_PCI_QUIRK(0x17aa, 0x2246, "Thinkpad", ALC298_FIXUP_TPT470_DOCK), 10377 10339 SND_PCI_QUIRK(0x17aa, 0x2247, "Thinkpad", ALC298_FIXUP_TPT470_DOCK), ··· 10434 10394 SND_PCI_QUIRK(0x17aa, 0x3886, "Y780 VECO DUAL", ALC287_FIXUP_TAS2781_I2C), 10435 10395 SND_PCI_QUIRK(0x17aa, 0x38a7, "Y780P AMD YG dual", ALC287_FIXUP_TAS2781_I2C), 10436 10396 SND_PCI_QUIRK(0x17aa, 0x38a8, "Y780P AMD VECO dual", ALC287_FIXUP_TAS2781_I2C), 10437 - SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_CS35L41_I2C_2), 10438 - SND_PCI_QUIRK(0x17aa, 0x38ab, "Thinkbook 16P", ALC287_FIXUP_CS35L41_I2C_2), 10397 + SND_PCI_QUIRK(0x17aa, 0x38a9, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 10398 + SND_PCI_QUIRK(0x17aa, 0x38ab, "Thinkbook 16P", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD), 10439 10399 SND_PCI_QUIRK(0x17aa, 0x38b4, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2), 10440 10400 SND_PCI_QUIRK(0x17aa, 0x38b5, "Legion Slim 7 16IRH8", ALC287_FIXUP_CS35L41_I2C_2), 10441 10401 SND_PCI_QUIRK(0x17aa, 0x38b6, "Legion Slim 7
16APH8", ALC287_FIXUP_CS35L41_I2C_2), ··· 10497 10457 SND_PCI_QUIRK(0x1d05, 0x115c, "TongFang GMxTGxx", ALC269_FIXUP_NO_SHUTUP), 10498 10458 SND_PCI_QUIRK(0x1d05, 0x121b, "TongFang GMxAGxx", ALC269_FIXUP_NO_SHUTUP), 10499 10459 SND_PCI_QUIRK(0x1d05, 0x1387, "TongFang GMxIXxx", ALC2XX_FIXUP_HEADSET_MIC), 10460 + SND_PCI_QUIRK(0x1d17, 0x3288, "Haier Boyue G42", ALC269VC_FIXUP_ACER_VCOPPERBOX_PINS), 10500 10461 SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC), 10501 10462 SND_PCI_QUIRK(0x1d72, 0x1701, "XiaomiNotebook Pro", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE), 10502 10463 SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+2 -2
sound/pci/hda/tas2781_hda_i2c.c
··· 514 514 static void tas2781_apply_calib(struct tasdevice_priv *tas_priv) 515 515 { 516 516 static const unsigned char page_array[CALIB_MAX] = { 517 - 0x17, 0x18, 0x18, 0x0d, 0x18 517 + 0x17, 0x18, 0x18, 0x13, 0x18, 518 518 }; 519 519 static const unsigned char rgno_array[CALIB_MAX] = { 520 - 0x74, 0x0c, 0x14, 0x3c, 0x7c 520 + 0x74, 0x0c, 0x14, 0x70, 0x7c, 521 521 }; 522 522 unsigned char *data; 523 523 int i, j, rc;
+4
tools/arch/arm64/include/asm/cputype.h
··· 61 61 #define ARM_CPU_IMP_HISI 0x48 62 62 #define ARM_CPU_IMP_APPLE 0x61 63 63 #define ARM_CPU_IMP_AMPERE 0xC0 64 + #define ARM_CPU_IMP_MICROSOFT 0x6D 64 65 65 66 #define ARM_CPU_PART_AEM_V8 0xD0F 66 67 #define ARM_CPU_PART_FOUNDATION 0xD00 ··· 136 135 137 136 #define AMPERE_CPU_PART_AMPERE1 0xAC3 138 137 138 + #define MICROSOFT_CPU_PART_AZURE_COBALT_100 0xD49 /* Based on r0p0 of ARM Neoverse N2 */ 139 + 139 140 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) 140 141 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) 141 142 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72) ··· 196 193 #define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX) 197 194 #define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX) 198 195 #define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1) 196 + #define MIDR_MICROSOFT_AZURE_COBALT_100 MIDR_CPU_MODEL(ARM_CPU_IMP_MICROSOFT, MICROSOFT_CPU_PART_AZURE_COBALT_100) 199 197 200 198 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */ 201 199 #define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX
+9 -6
tools/arch/arm64/include/uapi/asm/kvm.h
··· 37 37 #include <asm/ptrace.h> 38 38 #include <asm/sve_context.h> 39 39 40 - #define __KVM_HAVE_GUEST_DEBUG 41 40 #define __KVM_HAVE_IRQ_LINE 42 - #define __KVM_HAVE_READONLY_MEM 43 41 #define __KVM_HAVE_VCPU_EVENTS 44 42 45 43 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 ··· 74 76 75 77 /* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */ 76 78 #define KVM_ARM_DEVICE_TYPE_SHIFT 0 77 - #define KVM_ARM_DEVICE_TYPE_MASK GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \ 78 - KVM_ARM_DEVICE_TYPE_SHIFT) 79 + #define KVM_ARM_DEVICE_TYPE_MASK __GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \ 80 + KVM_ARM_DEVICE_TYPE_SHIFT) 79 81 #define KVM_ARM_DEVICE_ID_SHIFT 16 80 - #define KVM_ARM_DEVICE_ID_MASK GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \ 81 - KVM_ARM_DEVICE_ID_SHIFT) 82 + #define KVM_ARM_DEVICE_ID_MASK __GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \ 83 + KVM_ARM_DEVICE_ID_SHIFT) 82 84 83 85 /* Supported device IDs */ 84 86 #define KVM_ARM_DEVICE_VGIC_V2 0 ··· 159 161 /* Used with KVM_CAP_ARM_USER_IRQ */ 160 162 __u64 device_irq_level; 161 163 }; 164 + 165 + /* Bits for run->s.regs.device_irq_level */ 166 + #define KVM_ARM_DEV_EL1_VTIMER (1 << 0) 167 + #define KVM_ARM_DEV_EL1_PTIMER (1 << 1) 168 + #define KVM_ARM_DEV_PMU (1 << 2) 162 169 163 170 /* 164 171 * PMU filter structure. Describe a range of events with a particular
+44 -1
tools/arch/powerpc/include/uapi/asm/kvm.h
··· 28 28 #define __KVM_HAVE_PPC_SMT 29 29 #define __KVM_HAVE_IRQCHIP 30 30 #define __KVM_HAVE_IRQ_LINE 31 - #define __KVM_HAVE_GUEST_DEBUG 32 31 33 32 /* Not always available, but if it is, this is the correct offset. */ 34 33 #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 ··· 731 732 732 733 #define KVM_XIVE_TIMA_PAGE_OFFSET 0 733 734 #define KVM_XIVE_ESB_PAGE_OFFSET 4 735 + 736 + /* for KVM_PPC_GET_PVINFO */ 737 + 738 + #define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0) 739 + 740 + struct kvm_ppc_pvinfo { 741 + /* out */ 742 + __u32 flags; 743 + __u32 hcall[4]; 744 + __u8 pad[108]; 745 + }; 746 + 747 + /* for KVM_PPC_GET_SMMU_INFO */ 748 + #define KVM_PPC_PAGE_SIZES_MAX_SZ 8 749 + 750 + struct kvm_ppc_one_page_size { 751 + __u32 page_shift; /* Page shift (or 0) */ 752 + __u32 pte_enc; /* Encoding in the HPTE (>>12) */ 753 + }; 754 + 755 + struct kvm_ppc_one_seg_page_size { 756 + __u32 page_shift; /* Base page shift of segment (or 0) */ 757 + __u32 slb_enc; /* SLB encoding for BookS */ 758 + struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ]; 759 + }; 760 + 761 + #define KVM_PPC_PAGE_SIZES_REAL 0x00000001 762 + #define KVM_PPC_1T_SEGMENTS 0x00000002 763 + #define KVM_PPC_NO_HASH 0x00000004 764 + 765 + struct kvm_ppc_smmu_info { 766 + __u64 flags; 767 + __u32 slb_size; 768 + __u16 data_keys; /* # storage keys supported for data */ 769 + __u16 instr_keys; /* # storage keys supported for instructions */ 770 + struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ]; 771 + }; 772 + 773 + /* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */ 774 + struct kvm_ppc_resize_hpt { 775 + __u64 flags; 776 + __u32 shift; 777 + __u32 pad; 778 + }; 734 779 735 780 #endif /* __LINUX_KVM_POWERPC_H */
+314 -1
tools/arch/s390/include/uapi/asm/kvm.h
··· 12 12 #include <linux/types.h> 13 13 14 14 #define __KVM_S390 15 - #define __KVM_HAVE_GUEST_DEBUG 15 + 16 + struct kvm_s390_skeys { 17 + __u64 start_gfn; 18 + __u64 count; 19 + __u64 skeydata_addr; 20 + __u32 flags; 21 + __u32 reserved[9]; 22 + }; 23 + 24 + #define KVM_S390_CMMA_PEEK (1 << 0) 25 + 26 + /** 27 + * kvm_s390_cmma_log - Used for CMMA migration. 28 + * 29 + * Used both for input and output. 30 + * 31 + * @start_gfn: Guest page number to start from. 32 + * @count: Size of the result buffer. 33 + * @flags: Control operation mode via KVM_S390_CMMA_* flags 34 + * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty 35 + * pages are still remaining. 36 + * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set 37 + * in the PGSTE. 38 + * @values: Pointer to the values buffer. 39 + * 40 + * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls. 41 + */ 42 + struct kvm_s390_cmma_log { 43 + __u64 start_gfn; 44 + __u32 count; 45 + __u32 flags; 46 + union { 47 + __u64 remaining; 48 + __u64 mask; 49 + }; 50 + __u64 values; 51 + }; 52 + 53 + #define KVM_S390_RESET_POR 1 54 + #define KVM_S390_RESET_CLEAR 2 55 + #define KVM_S390_RESET_SUBSYSTEM 4 56 + #define KVM_S390_RESET_CPU_INIT 8 57 + #define KVM_S390_RESET_IPL 16 58 + 59 + /* for KVM_S390_MEM_OP */ 60 + struct kvm_s390_mem_op { 61 + /* in */ 62 + __u64 gaddr; /* the guest address */ 63 + __u64 flags; /* flags */ 64 + __u32 size; /* amount of bytes */ 65 + __u32 op; /* type of operation */ 66 + __u64 buf; /* buffer in userspace */ 67 + union { 68 + struct { 69 + __u8 ar; /* the access register number */ 70 + __u8 key; /* access key, ignored if flag unset */ 71 + __u8 pad1[6]; /* ignored */ 72 + __u64 old_addr; /* ignored if cmpxchg flag unset */ 73 + }; 74 + __u32 sida_offset; /* offset into the sida */ 75 + __u8 reserved[32]; /* ignored */ 76 + }; 77 + }; 78 + /* types for kvm_s390_mem_op->op */ 79 + #define KVM_S390_MEMOP_LOGICAL_READ 0 80 + #define KVM_S390_MEMOP_LOGICAL_WRITE 1 81 
+ #define KVM_S390_MEMOP_SIDA_READ 2 82 + #define KVM_S390_MEMOP_SIDA_WRITE 3 83 + #define KVM_S390_MEMOP_ABSOLUTE_READ 4 84 + #define KVM_S390_MEMOP_ABSOLUTE_WRITE 5 85 + #define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG 6 86 + 87 + /* flags for kvm_s390_mem_op->flags */ 88 + #define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0) 89 + #define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1) 90 + #define KVM_S390_MEMOP_F_SKEY_PROTECTION (1ULL << 2) 91 + 92 + /* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */ 93 + #define KVM_S390_MEMOP_EXTENSION_CAP_BASE (1 << 0) 94 + #define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG (1 << 1) 95 + 96 + struct kvm_s390_psw { 97 + __u64 mask; 98 + __u64 addr; 99 + }; 100 + 101 + /* valid values for type in kvm_s390_interrupt */ 102 + #define KVM_S390_SIGP_STOP 0xfffe0000u 103 + #define KVM_S390_PROGRAM_INT 0xfffe0001u 104 + #define KVM_S390_SIGP_SET_PREFIX 0xfffe0002u 105 + #define KVM_S390_RESTART 0xfffe0003u 106 + #define KVM_S390_INT_PFAULT_INIT 0xfffe0004u 107 + #define KVM_S390_INT_PFAULT_DONE 0xfffe0005u 108 + #define KVM_S390_MCHK 0xfffe1000u 109 + #define KVM_S390_INT_CLOCK_COMP 0xffff1004u 110 + #define KVM_S390_INT_CPU_TIMER 0xffff1005u 111 + #define KVM_S390_INT_VIRTIO 0xffff2603u 112 + #define KVM_S390_INT_SERVICE 0xffff2401u 113 + #define KVM_S390_INT_EMERGENCY 0xffff1201u 114 + #define KVM_S390_INT_EXTERNAL_CALL 0xffff1202u 115 + /* Anything below 0xfffe0000u is taken by INT_IO */ 116 + #define KVM_S390_INT_IO(ai,cssid,ssid,schid) \ 117 + (((schid)) | \ 118 + ((ssid) << 16) | \ 119 + ((cssid) << 18) | \ 120 + ((ai) << 26)) 121 + #define KVM_S390_INT_IO_MIN 0x00000000u 122 + #define KVM_S390_INT_IO_MAX 0xfffdffffu 123 + #define KVM_S390_INT_IO_AI_MASK 0x04000000u 124 + 125 + 126 + struct kvm_s390_interrupt { 127 + __u32 type; 128 + __u32 parm; 129 + __u64 parm64; 130 + }; 131 + 132 + struct kvm_s390_io_info { 133 + __u16 subchannel_id; 134 + __u16 subchannel_nr; 135 + __u32 io_int_parm; 136 + __u32 io_int_word; 137 + }; 
138 + 139 + struct kvm_s390_ext_info { 140 + __u32 ext_params; 141 + __u32 pad; 142 + __u64 ext_params2; 143 + }; 144 + 145 + struct kvm_s390_pgm_info { 146 + __u64 trans_exc_code; 147 + __u64 mon_code; 148 + __u64 per_address; 149 + __u32 data_exc_code; 150 + __u16 code; 151 + __u16 mon_class_nr; 152 + __u8 per_code; 153 + __u8 per_atmid; 154 + __u8 exc_access_id; 155 + __u8 per_access_id; 156 + __u8 op_access_id; 157 + #define KVM_S390_PGM_FLAGS_ILC_VALID 0x01 158 + #define KVM_S390_PGM_FLAGS_ILC_0 0x02 159 + #define KVM_S390_PGM_FLAGS_ILC_1 0x04 160 + #define KVM_S390_PGM_FLAGS_ILC_MASK 0x06 161 + #define KVM_S390_PGM_FLAGS_NO_REWIND 0x08 162 + __u8 flags; 163 + __u8 pad[2]; 164 + }; 165 + 166 + struct kvm_s390_prefix_info { 167 + __u32 address; 168 + }; 169 + 170 + struct kvm_s390_extcall_info { 171 + __u16 code; 172 + }; 173 + 174 + struct kvm_s390_emerg_info { 175 + __u16 code; 176 + }; 177 + 178 + #define KVM_S390_STOP_FLAG_STORE_STATUS 0x01 179 + struct kvm_s390_stop_info { 180 + __u32 flags; 181 + }; 182 + 183 + struct kvm_s390_mchk_info { 184 + __u64 cr14; 185 + __u64 mcic; 186 + __u64 failing_storage_address; 187 + __u32 ext_damage_code; 188 + __u32 pad; 189 + __u8 fixed_logout[16]; 190 + }; 191 + 192 + struct kvm_s390_irq { 193 + __u64 type; 194 + union { 195 + struct kvm_s390_io_info io; 196 + struct kvm_s390_ext_info ext; 197 + struct kvm_s390_pgm_info pgm; 198 + struct kvm_s390_emerg_info emerg; 199 + struct kvm_s390_extcall_info extcall; 200 + struct kvm_s390_prefix_info prefix; 201 + struct kvm_s390_stop_info stop; 202 + struct kvm_s390_mchk_info mchk; 203 + char reserved[64]; 204 + } u; 205 + }; 206 + 207 + struct kvm_s390_irq_state { 208 + __u64 buf; 209 + __u32 flags; /* will stay unused for compatibility reasons */ 210 + __u32 len; 211 + __u32 reserved[4]; /* will stay unused for compatibility reasons */ 212 + }; 213 + 214 + struct kvm_s390_ucas_mapping { 215 + __u64 user_addr; 216 + __u64 vcpu_addr; 217 + __u64 length; 218 + }; 219 + 220 + 
struct kvm_s390_pv_sec_parm { 221 + __u64 origin; 222 + __u64 length; 223 + }; 224 + 225 + struct kvm_s390_pv_unp { 226 + __u64 addr; 227 + __u64 size; 228 + __u64 tweak; 229 + }; 230 + 231 + enum pv_cmd_dmp_id { 232 + KVM_PV_DUMP_INIT, 233 + KVM_PV_DUMP_CONFIG_STOR_STATE, 234 + KVM_PV_DUMP_COMPLETE, 235 + KVM_PV_DUMP_CPU, 236 + }; 237 + 238 + struct kvm_s390_pv_dmp { 239 + __u64 subcmd; 240 + __u64 buff_addr; 241 + __u64 buff_len; 242 + __u64 gaddr; /* For dump storage state */ 243 + __u64 reserved[4]; 244 + }; 245 + 246 + enum pv_cmd_info_id { 247 + KVM_PV_INFO_VM, 248 + KVM_PV_INFO_DUMP, 249 + }; 250 + 251 + struct kvm_s390_pv_info_dump { 252 + __u64 dump_cpu_buffer_len; 253 + __u64 dump_config_mem_buffer_per_1m; 254 + __u64 dump_config_finalize_len; 255 + }; 256 + 257 + struct kvm_s390_pv_info_vm { 258 + __u64 inst_calls_list[4]; 259 + __u64 max_cpus; 260 + __u64 max_guests; 261 + __u64 max_guest_addr; 262 + __u64 feature_indication; 263 + }; 264 + 265 + struct kvm_s390_pv_info_header { 266 + __u32 id; 267 + __u32 len_max; 268 + __u32 len_written; 269 + __u32 reserved; 270 + }; 271 + 272 + struct kvm_s390_pv_info { 273 + struct kvm_s390_pv_info_header header; 274 + union { 275 + struct kvm_s390_pv_info_dump dump; 276 + struct kvm_s390_pv_info_vm vm; 277 + }; 278 + }; 279 + 280 + enum pv_cmd_id { 281 + KVM_PV_ENABLE, 282 + KVM_PV_DISABLE, 283 + KVM_PV_SET_SEC_PARMS, 284 + KVM_PV_UNPACK, 285 + KVM_PV_VERIFY, 286 + KVM_PV_PREP_RESET, 287 + KVM_PV_UNSHARE_ALL, 288 + KVM_PV_INFO, 289 + KVM_PV_DUMP, 290 + KVM_PV_ASYNC_CLEANUP_PREPARE, 291 + KVM_PV_ASYNC_CLEANUP_PERFORM, 292 + }; 293 + 294 + struct kvm_pv_cmd { 295 + __u32 cmd; /* Command to be executed */ 296 + __u16 rc; /* Ultravisor return code */ 297 + __u16 rrc; /* Ultravisor return reason code */ 298 + __u64 data; /* Data or address */ 299 + __u32 flags; /* flags for future extensions. 
Must be 0 for now */ 300 + __u32 reserved[3]; 301 + }; 302 + 303 + struct kvm_s390_zpci_op { 304 + /* in */ 305 + __u32 fh; /* target device */ 306 + __u8 op; /* operation to perform */ 307 + __u8 pad[3]; 308 + union { 309 + /* for KVM_S390_ZPCIOP_REG_AEN */ 310 + struct { 311 + __u64 ibv; /* Guest addr of interrupt bit vector */ 312 + __u64 sb; /* Guest addr of summary bit */ 313 + __u32 flags; 314 + __u32 noi; /* Number of interrupts */ 315 + __u8 isc; /* Guest interrupt subclass */ 316 + __u8 sbo; /* Offset of guest summary bit vector */ 317 + __u16 pad; 318 + } reg_aen; 319 + __u64 reserved[8]; 320 + } u; 321 + }; 322 + 323 + /* types for kvm_s390_zpci_op->op */ 324 + #define KVM_S390_ZPCIOP_REG_AEN 0 325 + #define KVM_S390_ZPCIOP_DEREG_AEN 1 326 + 327 + /* flags for kvm_s390_zpci_op->u.reg_aen.flags */ 328 + #define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0) 16 329 17 330 /* Device control API: s390-specific devices */ 18 331 #define KVM_DEV_FLIC_GET_ALL_IRQS 1
+12 -5
tools/arch/x86/include/asm/cpufeatures.h
··· 13 13 /* 14 14 * Defines x86 CPU feature bits 15 15 */ 16 - #define NCAPINTS 21 /* N 32-bit words worth of info */ 16 + #define NCAPINTS 22 /* N 32-bit words worth of info */ 17 17 #define NBUGINTS 2 /* N 32-bit bug flags */ 18 18 19 19 /* ··· 81 81 #define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */ 82 82 #define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */ 83 83 #define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */ 84 - 85 - /* CPU types for specific tunings: */ 86 84 #define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */ 87 - /* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */ 85 + #define X86_FEATURE_ZEN5 ( 3*32+ 5) /* "" CPU based on Zen5 microarchitecture */ 88 86 #define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */ 89 87 #define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */ 90 88 #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */ ··· 95 97 #define X86_FEATURE_SYSENTER32 ( 3*32+15) /* "" sysenter in IA32 userspace */ 96 98 #define X86_FEATURE_REP_GOOD ( 3*32+16) /* REP microcode works well */ 97 99 #define X86_FEATURE_AMD_LBR_V2 ( 3*32+17) /* AMD Last Branch Record Extension Version 2 */ 98 - /* FREE, was #define X86_FEATURE_LFENCE_RDTSC ( 3*32+18) "" LFENCE synchronizes RDTSC */ 100 + #define X86_FEATURE_CLEAR_CPU_BUF ( 3*32+18) /* "" Clear CPU buffers using VERW */ 99 101 #define X86_FEATURE_ACC_POWER ( 3*32+19) /* AMD Accumulated Power Mechanism */ 100 102 #define X86_FEATURE_NOPL ( 3*32+20) /* The NOPL (0F 1F) instructions */ 101 103 #define X86_FEATURE_ALWAYS ( 3*32+21) /* "" Always-present feature */ ··· 460 462 #define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */ 461 463 462 464 /* 465 + * Extended auxiliary flags: Linux defined - for features scattered in various 466 + * CPUID levels like 0x80000022, etc. 467 + * 468 + * Reuse free bits when adding new feature flags! 
469 + */ 470 + #define X86_FEATURE_AMD_LBR_PMC_FREEZE (21*32+ 0) /* AMD LBR and PMC Freeze */ 471 + 472 + /* 463 473 * BUG word(s) 464 474 */ 465 475 #define X86_BUG(x) (NCAPINTS*32 + (x)) ··· 514 508 /* BUG word 2 */ 515 509 #define X86_BUG_SRSO X86_BUG(1*32 + 0) /* AMD SRSO bug */ 516 510 #define X86_BUG_DIV0 X86_BUG(1*32 + 1) /* AMD DIV0 speculation bug */ 511 + #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* CPU is vulnerable to Register File Data Sampling */ 517 512 #endif /* _ASM_X86_CPUFEATURES_H */
+9 -2
tools/arch/x86/include/asm/disabled-features.h
··· 123 123 # define DISABLE_FRED (1 << (X86_FEATURE_FRED & 31)) 124 124 #endif 125 125 126 + #ifdef CONFIG_KVM_AMD_SEV 127 + #define DISABLE_SEV_SNP 0 128 + #else 129 + #define DISABLE_SEV_SNP (1 << (X86_FEATURE_SEV_SNP & 31)) 130 + #endif 131 + 126 132 /* 127 133 * Make sure to add features to the correct mask 128 134 */ ··· 153 147 DISABLE_ENQCMD) 154 148 #define DISABLED_MASK17 0 155 149 #define DISABLED_MASK18 (DISABLE_IBT) 156 - #define DISABLED_MASK19 0 150 + #define DISABLED_MASK19 (DISABLE_SEV_SNP) 157 151 #define DISABLED_MASK20 0 158 - #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21) 152 + #define DISABLED_MASK21 0 153 + #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22) 159 154 160 155 #endif /* _ASM_X86_DISABLED_FEATURES_H */
-2
tools/arch/x86/include/asm/irq_vectors.h
··· 84 84 #define HYPERVISOR_CALLBACK_VECTOR 0xf3 85 85 86 86 /* Vector for KVM to deliver posted interrupt IPI */ 87 - #if IS_ENABLED(CONFIG_KVM) 88 87 #define POSTED_INTR_VECTOR 0xf2 89 88 #define POSTED_INTR_WAKEUP_VECTOR 0xf1 90 89 #define POSTED_INTR_NESTED_VECTOR 0xf0 91 - #endif 92 90 93 91 #define MANAGED_IRQ_SHUTDOWN_VECTOR 0xef 94 92
+51 -23
tools/arch/x86/include/asm/msr-index.h
··· 176 176 * CPU is not vulnerable to Gather 177 177 * Data Sampling (GDS). 178 178 */ 179 + #define ARCH_CAP_RFDS_NO BIT(27) /* 180 + * Not susceptible to Register 181 + * File Data Sampling. 182 + */ 183 + #define ARCH_CAP_RFDS_CLEAR BIT(28) /* 184 + * VERW clears CPU Register 185 + * File. 186 + */ 179 187 180 188 #define ARCH_CAP_XAPIC_DISABLE BIT(21) /* 181 189 * IA32_XAPIC_DISABLE_STATUS MSR ··· 613 605 #define MSR_AMD64_SEV_ES_GHCB 0xc0010130 614 606 #define MSR_AMD64_SEV 0xc0010131 615 607 #define MSR_AMD64_SEV_ENABLED_BIT 0 616 - #define MSR_AMD64_SEV_ES_ENABLED_BIT 1 617 - #define MSR_AMD64_SEV_SNP_ENABLED_BIT 2 618 608 #define MSR_AMD64_SEV_ENABLED BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT) 609 + #define MSR_AMD64_SEV_ES_ENABLED_BIT 1 619 610 #define MSR_AMD64_SEV_ES_ENABLED BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT) 611 + #define MSR_AMD64_SEV_SNP_ENABLED_BIT 2 620 612 #define MSR_AMD64_SEV_SNP_ENABLED BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT) 621 - 622 - /* SNP feature bits enabled by the hypervisor */ 623 - #define MSR_AMD64_SNP_VTOM BIT_ULL(3) 624 - #define MSR_AMD64_SNP_REFLECT_VC BIT_ULL(4) 625 - #define MSR_AMD64_SNP_RESTRICTED_INJ BIT_ULL(5) 626 - #define MSR_AMD64_SNP_ALT_INJ BIT_ULL(6) 627 - #define MSR_AMD64_SNP_DEBUG_SWAP BIT_ULL(7) 628 - #define MSR_AMD64_SNP_PREVENT_HOST_IBS BIT_ULL(8) 629 - #define MSR_AMD64_SNP_BTB_ISOLATION BIT_ULL(9) 630 - #define MSR_AMD64_SNP_VMPL_SSS BIT_ULL(10) 631 - #define MSR_AMD64_SNP_SECURE_TSC BIT_ULL(11) 632 - #define MSR_AMD64_SNP_VMGEXIT_PARAM BIT_ULL(12) 633 - #define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(14) 634 - #define MSR_AMD64_SNP_VMSA_REG_PROTECTION BIT_ULL(16) 635 - #define MSR_AMD64_SNP_SMT_PROTECTION BIT_ULL(17) 636 - 637 - /* SNP feature bits reserved for future use. 
*/ 638 - #define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13) 639 - #define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15) 640 - #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 18) 613 + #define MSR_AMD64_SNP_VTOM_BIT 3 614 + #define MSR_AMD64_SNP_VTOM BIT_ULL(MSR_AMD64_SNP_VTOM_BIT) 615 + #define MSR_AMD64_SNP_REFLECT_VC_BIT 4 616 + #define MSR_AMD64_SNP_REFLECT_VC BIT_ULL(MSR_AMD64_SNP_REFLECT_VC_BIT) 617 + #define MSR_AMD64_SNP_RESTRICTED_INJ_BIT 5 618 + #define MSR_AMD64_SNP_RESTRICTED_INJ BIT_ULL(MSR_AMD64_SNP_RESTRICTED_INJ_BIT) 619 + #define MSR_AMD64_SNP_ALT_INJ_BIT 6 620 + #define MSR_AMD64_SNP_ALT_INJ BIT_ULL(MSR_AMD64_SNP_ALT_INJ_BIT) 621 + #define MSR_AMD64_SNP_DEBUG_SWAP_BIT 7 622 + #define MSR_AMD64_SNP_DEBUG_SWAP BIT_ULL(MSR_AMD64_SNP_DEBUG_SWAP_BIT) 623 + #define MSR_AMD64_SNP_PREVENT_HOST_IBS_BIT 8 624 + #define MSR_AMD64_SNP_PREVENT_HOST_IBS BIT_ULL(MSR_AMD64_SNP_PREVENT_HOST_IBS_BIT) 625 + #define MSR_AMD64_SNP_BTB_ISOLATION_BIT 9 626 + #define MSR_AMD64_SNP_BTB_ISOLATION BIT_ULL(MSR_AMD64_SNP_BTB_ISOLATION_BIT) 627 + #define MSR_AMD64_SNP_VMPL_SSS_BIT 10 628 + #define MSR_AMD64_SNP_VMPL_SSS BIT_ULL(MSR_AMD64_SNP_VMPL_SSS_BIT) 629 + #define MSR_AMD64_SNP_SECURE_TSC_BIT 11 630 + #define MSR_AMD64_SNP_SECURE_TSC BIT_ULL(MSR_AMD64_SNP_SECURE_TSC_BIT) 631 + #define MSR_AMD64_SNP_VMGEXIT_PARAM_BIT 12 632 + #define MSR_AMD64_SNP_VMGEXIT_PARAM BIT_ULL(MSR_AMD64_SNP_VMGEXIT_PARAM_BIT) 633 + #define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13) 634 + #define MSR_AMD64_SNP_IBS_VIRT_BIT 14 635 + #define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(MSR_AMD64_SNP_IBS_VIRT_BIT) 636 + #define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15) 637 + #define MSR_AMD64_SNP_VMSA_REG_PROT_BIT 16 638 + #define MSR_AMD64_SNP_VMSA_REG_PROT BIT_ULL(MSR_AMD64_SNP_VMSA_REG_PROT_BIT) 639 + #define MSR_AMD64_SNP_SMT_PROT_BIT 17 640 + #define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT) 641 + #define MSR_AMD64_SNP_RESV_BIT 18 642 + #define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 
MSR_AMD64_SNP_RESV_BIT) 641 643 642 644 #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f 645 + 646 + #define MSR_AMD64_RMP_BASE 0xc0010132 647 + #define MSR_AMD64_RMP_END 0xc0010133 643 648 644 649 /* AMD Collaborative Processor Performance Control MSRs */ 645 650 #define MSR_AMD_CPPC_CAP1 0xc00102b0 ··· 740 719 #define MSR_K8_TOP_MEM1 0xc001001a 741 720 #define MSR_K8_TOP_MEM2 0xc001001d 742 721 #define MSR_AMD64_SYSCFG 0xc0010010 743 - #define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23 722 + #define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23 744 723 #define MSR_AMD64_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT) 724 + #define MSR_AMD64_SYSCFG_SNP_EN_BIT 24 725 + #define MSR_AMD64_SYSCFG_SNP_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_EN_BIT) 726 + #define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT 25 727 + #define MSR_AMD64_SYSCFG_SNP_VMPL_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT) 728 + #define MSR_AMD64_SYSCFG_MFDM_BIT 19 729 + #define MSR_AMD64_SYSCFG_MFDM BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT) 730 + 745 731 #define MSR_K8_INT_PENDING_MSG 0xc0010055 746 732 /* C1E active bits in int pending message */ 747 733 #define K8_INTP_C1E_ACTIVE_MASK 0x18000000
+2 -1
tools/arch/x86/include/asm/required-features.h
··· 99 99 #define REQUIRED_MASK18 0 100 100 #define REQUIRED_MASK19 0 101 101 #define REQUIRED_MASK20 0 102 - #define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21) 102 + #define REQUIRED_MASK21 0 103 + #define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22) 103 104 104 105 #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+300 -8
tools/arch/x86/include/uapi/asm/kvm.h
··· 7 7 * 8 8 */ 9 9 10 + #include <linux/const.h> 11 + #include <linux/bits.h> 10 12 #include <linux/types.h> 11 13 #include <linux/ioctl.h> 12 14 #include <linux/stddef.h> ··· 42 40 #define __KVM_HAVE_IRQ_LINE 43 41 #define __KVM_HAVE_MSI 44 42 #define __KVM_HAVE_USER_NMI 45 - #define __KVM_HAVE_GUEST_DEBUG 46 43 #define __KVM_HAVE_MSIX 47 44 #define __KVM_HAVE_MCE 48 45 #define __KVM_HAVE_PIT_STATE2 ··· 50 49 #define __KVM_HAVE_DEBUGREGS 51 50 #define __KVM_HAVE_XSAVE 52 51 #define __KVM_HAVE_XCRS 53 - #define __KVM_HAVE_READONLY_MEM 54 52 55 53 /* Architectural interrupt line count. */ 56 54 #define KVM_NR_INTERRUPTS 256 ··· 526 526 #define KVM_PMU_EVENT_ALLOW 0 527 527 #define KVM_PMU_EVENT_DENY 1 528 528 529 - #define KVM_PMU_EVENT_FLAG_MASKED_EVENTS BIT(0) 529 + #define KVM_PMU_EVENT_FLAG_MASKED_EVENTS _BITUL(0) 530 530 #define KVM_PMU_EVENT_FLAGS_VALID_MASK (KVM_PMU_EVENT_FLAG_MASKED_EVENTS) 531 + 532 + /* for KVM_CAP_MCE */ 533 + struct kvm_x86_mce { 534 + __u64 status; 535 + __u64 addr; 536 + __u64 misc; 537 + __u64 mcg_status; 538 + __u8 bank; 539 + __u8 pad1[7]; 540 + __u64 pad2[3]; 541 + }; 542 + 543 + /* for KVM_CAP_XEN_HVM */ 544 + #define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR (1 << 0) 545 + #define KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL (1 << 1) 546 + #define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2) 547 + #define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3) 548 + #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4) 549 + #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) 550 + #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6) 551 + #define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE (1 << 7) 552 + #define KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA (1 << 8) 553 + 554 + struct kvm_xen_hvm_config { 555 + __u32 flags; 556 + __u32 msr; 557 + __u64 blob_addr_32; 558 + __u64 blob_addr_64; 559 + __u8 blob_size_32; 560 + __u8 blob_size_64; 561 + __u8 pad2[30]; 562 + }; 563 + 564 + struct kvm_xen_hvm_attr { 565 + __u16 type; 566 + __u16 pad[3]; 567 + union { 568 + __u8 long_mode; 569 + 
__u8 vector; 570 + __u8 runstate_update_flag; 571 + union { 572 + __u64 gfn; 573 + #define KVM_XEN_INVALID_GFN ((__u64)-1) 574 + __u64 hva; 575 + } shared_info; 576 + struct { 577 + __u32 send_port; 578 + __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */ 579 + __u32 flags; 580 + #define KVM_XEN_EVTCHN_DEASSIGN (1 << 0) 581 + #define KVM_XEN_EVTCHN_UPDATE (1 << 1) 582 + #define KVM_XEN_EVTCHN_RESET (1 << 2) 583 + /* 584 + * Events sent by the guest are either looped back to 585 + * the guest itself (potentially on a different port#) 586 + * or signalled via an eventfd. 587 + */ 588 + union { 589 + struct { 590 + __u32 port; 591 + __u32 vcpu; 592 + __u32 priority; 593 + } port; 594 + struct { 595 + __u32 port; /* Zero for eventfd */ 596 + __s32 fd; 597 + } eventfd; 598 + __u32 padding[4]; 599 + } deliver; 600 + } evtchn; 601 + __u32 xen_version; 602 + __u64 pad[8]; 603 + } u; 604 + }; 605 + 606 + 607 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ 608 + #define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0 609 + #define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1 610 + #define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2 611 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 612 + #define KVM_XEN_ATTR_TYPE_EVTCHN 0x3 613 + #define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4 614 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */ 615 + #define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG 0x5 616 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */ 617 + #define KVM_XEN_ATTR_TYPE_SHARED_INFO_HVA 0x6 618 + 619 + struct kvm_xen_vcpu_attr { 620 + __u16 type; 621 + __u16 pad[3]; 622 + union { 623 + __u64 gpa; 624 + #define KVM_XEN_INVALID_GPA ((__u64)-1) 625 + __u64 hva; 626 + __u64 pad[8]; 627 + struct { 628 + __u64 state; 629 + __u64 state_entry_time; 630 + __u64 time_running; 631 + __u64 time_runnable; 632 + __u64 time_blocked; 633 + __u64 time_offline; 634 + } runstate; 635 + __u32 vcpu_id; 636 + struct { 637 + 
__u32 port; 638 + __u32 priority; 639 + __u64 expires_ns; 640 + } timer; 641 + __u8 vector; 642 + } u; 643 + }; 644 + 645 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ 646 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO 0x0 647 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO 0x1 648 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR 0x2 649 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3 650 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4 651 + #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5 652 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 653 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6 654 + #define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7 655 + #define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8 656 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO_HVA */ 657 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO_HVA 0x9 658 + 659 + /* Secure Encrypted Virtualization command */ 660 + enum sev_cmd_id { 661 + /* Guest initialization commands */ 662 + KVM_SEV_INIT = 0, 663 + KVM_SEV_ES_INIT, 664 + /* Guest launch commands */ 665 + KVM_SEV_LAUNCH_START, 666 + KVM_SEV_LAUNCH_UPDATE_DATA, 667 + KVM_SEV_LAUNCH_UPDATE_VMSA, 668 + KVM_SEV_LAUNCH_SECRET, 669 + KVM_SEV_LAUNCH_MEASURE, 670 + KVM_SEV_LAUNCH_FINISH, 671 + /* Guest migration commands (outgoing) */ 672 + KVM_SEV_SEND_START, 673 + KVM_SEV_SEND_UPDATE_DATA, 674 + KVM_SEV_SEND_UPDATE_VMSA, 675 + KVM_SEV_SEND_FINISH, 676 + /* Guest migration commands (incoming) */ 677 + KVM_SEV_RECEIVE_START, 678 + KVM_SEV_RECEIVE_UPDATE_DATA, 679 + KVM_SEV_RECEIVE_UPDATE_VMSA, 680 + KVM_SEV_RECEIVE_FINISH, 681 + /* Guest status and debug commands */ 682 + KVM_SEV_GUEST_STATUS, 683 + KVM_SEV_DBG_DECRYPT, 684 + KVM_SEV_DBG_ENCRYPT, 685 + /* Guest certificates commands */ 686 + KVM_SEV_CERT_EXPORT, 687 + /* Attestation report */ 688 + KVM_SEV_GET_ATTESTATION_REPORT, 689 + /* Guest Migration Extension */ 690 + KVM_SEV_SEND_CANCEL, 691 + 692 + KVM_SEV_NR_MAX, 693 + }; 694 + 695 + 
struct kvm_sev_cmd { 696 + __u32 id; 697 + __u32 pad0; 698 + __u64 data; 699 + __u32 error; 700 + __u32 sev_fd; 701 + }; 702 + 703 + struct kvm_sev_launch_start { 704 + __u32 handle; 705 + __u32 policy; 706 + __u64 dh_uaddr; 707 + __u32 dh_len; 708 + __u32 pad0; 709 + __u64 session_uaddr; 710 + __u32 session_len; 711 + __u32 pad1; 712 + }; 713 + 714 + struct kvm_sev_launch_update_data { 715 + __u64 uaddr; 716 + __u32 len; 717 + __u32 pad0; 718 + }; 719 + 720 + 721 + struct kvm_sev_launch_secret { 722 + __u64 hdr_uaddr; 723 + __u32 hdr_len; 724 + __u32 pad0; 725 + __u64 guest_uaddr; 726 + __u32 guest_len; 727 + __u32 pad1; 728 + __u64 trans_uaddr; 729 + __u32 trans_len; 730 + __u32 pad2; 731 + }; 732 + 733 + struct kvm_sev_launch_measure { 734 + __u64 uaddr; 735 + __u32 len; 736 + __u32 pad0; 737 + }; 738 + 739 + struct kvm_sev_guest_status { 740 + __u32 handle; 741 + __u32 policy; 742 + __u32 state; 743 + }; 744 + 745 + struct kvm_sev_dbg { 746 + __u64 src_uaddr; 747 + __u64 dst_uaddr; 748 + __u32 len; 749 + __u32 pad0; 750 + }; 751 + 752 + struct kvm_sev_attestation_report { 753 + __u8 mnonce[16]; 754 + __u64 uaddr; 755 + __u32 len; 756 + __u32 pad0; 757 + }; 758 + 759 + struct kvm_sev_send_start { 760 + __u32 policy; 761 + __u32 pad0; 762 + __u64 pdh_cert_uaddr; 763 + __u32 pdh_cert_len; 764 + __u32 pad1; 765 + __u64 plat_certs_uaddr; 766 + __u32 plat_certs_len; 767 + __u32 pad2; 768 + __u64 amd_certs_uaddr; 769 + __u32 amd_certs_len; 770 + __u32 pad3; 771 + __u64 session_uaddr; 772 + __u32 session_len; 773 + __u32 pad4; 774 + }; 775 + 776 + struct kvm_sev_send_update_data { 777 + __u64 hdr_uaddr; 778 + __u32 hdr_len; 779 + __u32 pad0; 780 + __u64 guest_uaddr; 781 + __u32 guest_len; 782 + __u32 pad1; 783 + __u64 trans_uaddr; 784 + __u32 trans_len; 785 + __u32 pad2; 786 + }; 787 + 788 + struct kvm_sev_receive_start { 789 + __u32 handle; 790 + __u32 policy; 791 + __u64 pdh_uaddr; 792 + __u32 pdh_len; 793 + __u32 pad0; 794 + __u64 session_uaddr; 795 + __u32 
session_len; 796 + __u32 pad1; 797 + }; 798 + 799 + struct kvm_sev_receive_update_data { 800 + __u64 hdr_uaddr; 801 + __u32 hdr_len; 802 + __u32 pad0; 803 + __u64 guest_uaddr; 804 + __u32 guest_len; 805 + __u32 pad1; 806 + __u64 trans_uaddr; 807 + __u32 trans_len; 808 + __u32 pad2; 809 + }; 810 + 811 + #define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0) 812 + #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1) 813 + 814 + struct kvm_hyperv_eventfd { 815 + __u32 conn_id; 816 + __s32 fd; 817 + __u32 flags; 818 + __u32 padding[3]; 819 + }; 820 + 821 + #define KVM_HYPERV_CONN_ID_MASK 0x00ffffff 822 + #define KVM_HYPERV_EVENTFD_DEASSIGN (1 << 0) 531 823 532 824 /* 533 825 * Masked event layout. ··· 841 549 ((__u64)(!!(exclude)) << 55)) 842 550 843 551 #define KVM_PMU_MASKED_ENTRY_EVENT_SELECT \ 844 - (GENMASK_ULL(7, 0) | GENMASK_ULL(35, 32)) 845 - #define KVM_PMU_MASKED_ENTRY_UMASK_MASK (GENMASK_ULL(63, 56)) 846 - #define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (GENMASK_ULL(15, 8)) 847 - #define KVM_PMU_MASKED_ENTRY_EXCLUDE (BIT_ULL(55)) 552 + (__GENMASK_ULL(7, 0) | __GENMASK_ULL(35, 32)) 553 + #define KVM_PMU_MASKED_ENTRY_UMASK_MASK (__GENMASK_ULL(63, 56)) 554 + #define KVM_PMU_MASKED_ENTRY_UMASK_MATCH (__GENMASK_ULL(15, 8)) 555 + #define KVM_PMU_MASKED_ENTRY_EXCLUDE (_BITULL(55)) 848 556 #define KVM_PMU_MASKED_ENTRY_UMASK_MASK_SHIFT (56) 849 557 850 558 /* for KVM_{GET,SET,HAS}_DEVICE_ATTR */ ··· 852 560 #define KVM_VCPU_TSC_OFFSET 0 /* attribute for the TSC offset */ 853 561 854 562 /* x86-specific KVM_EXIT_HYPERCALL flags. */ 855 - #define KVM_EXIT_HYPERCALL_LONG_MODE BIT(0) 563 + #define KVM_EXIT_HYPERCALL_LONG_MODE _BITULL(0) 856 564 857 565 #define KVM_X86_DEFAULT_VM 0 858 566 #define KVM_X86_SW_PROTECTED_VM 1
+6 -2
tools/include/asm-generic/bitops/__fls.h
··· 5 5 #include <asm/types.h> 6 6 7 7 /** 8 - * __fls - find last (most-significant) set bit in a long word 8 + * generic___fls - find last (most-significant) set bit in a long word 9 9 * @word: the word to search 10 10 * 11 11 * Undefined if no set bit exists, so code should check against 0 first. 12 12 */ 13 - static __always_inline unsigned long __fls(unsigned long word) 13 + static __always_inline unsigned long generic___fls(unsigned long word) 14 14 { 15 15 int num = BITS_PER_LONG - 1; 16 16 ··· 40 40 num -= 1; 41 41 return num; 42 42 } 43 + 44 + #ifndef __HAVE_ARCH___FLS 45 + #define __fls(word) generic___fls(word) 46 + #endif 43 47 44 48 #endif /* _ASM_GENERIC_BITOPS___FLS_H_ */
+6 -2
tools/include/asm-generic/bitops/fls.h
··· 3 3 #define _ASM_GENERIC_BITOPS_FLS_H_ 4 4 5 5 /** 6 - * fls - find last (most-significant) bit set 6 + * generic_fls - find last (most-significant) bit set 7 7 * @x: the word to search 8 8 * 9 9 * This is defined the same way as ffs. 10 10 * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32. 11 11 */ 12 12 13 - static __always_inline int fls(unsigned int x) 13 + static __always_inline int generic_fls(unsigned int x) 14 14 { 15 15 int r = 32; 16 16 ··· 38 38 } 39 39 return r; 40 40 } 41 + 42 + #ifndef __HAVE_ARCH_FLS 43 + #define fls(x) generic_fls(x) 44 + #endif 41 45 42 46 #endif /* _ASM_GENERIC_BITOPS_FLS_H_ */
+16
tools/include/uapi/drm/i915_drm.h
··· 3013 3013 * - %DRM_I915_QUERY_MEMORY_REGIONS (see struct drm_i915_query_memory_regions) 3014 3014 * - %DRM_I915_QUERY_HWCONFIG_BLOB (see `GuC HWCONFIG blob uAPI`) 3015 3015 * - %DRM_I915_QUERY_GEOMETRY_SUBSLICES (see struct drm_i915_query_topology_info) 3016 + * - %DRM_I915_QUERY_GUC_SUBMISSION_VERSION (see struct drm_i915_query_guc_submission_version) 3016 3017 */ 3017 3018 __u64 query_id; 3018 3019 #define DRM_I915_QUERY_TOPOLOGY_INFO 1 ··· 3022 3021 #define DRM_I915_QUERY_MEMORY_REGIONS 4 3023 3022 #define DRM_I915_QUERY_HWCONFIG_BLOB 5 3024 3023 #define DRM_I915_QUERY_GEOMETRY_SUBSLICES 6 3024 + #define DRM_I915_QUERY_GUC_SUBMISSION_VERSION 7 3025 3025 /* Must be kept compact -- no holes and well documented */ 3026 3026 3027 3027 /** ··· 3566 3564 3567 3565 /** @regions: Info about each supported region */ 3568 3566 struct drm_i915_memory_region_info regions[]; 3567 + }; 3568 + 3569 + /** 3570 + * struct drm_i915_query_guc_submission_version - query GuC submission interface version 3571 + */ 3572 + struct drm_i915_query_guc_submission_version { 3573 + /** @branch: Firmware branch version. */ 3574 + __u32 branch; 3575 + /** @major: Firmware major version. */ 3576 + __u32 major; 3577 + /** @minor: Firmware minor version. */ 3578 + __u32 minor; 3579 + /** @patch: Firmware patch version. */ 3580 + __u32 patch; 3569 3581 }; 3570 3582 3571 3583 /**
+29 -1
tools/include/uapi/linux/fs.h
··· 64 64 __u64 minlen; 65 65 }; 66 66 67 + /* 68 + * We include a length field because some filesystems (vfat) have an identifier 69 + * that we do want to expose as a UUID, but doesn't have the standard length. 70 + * 71 + * We use a fixed size buffer beacuse this interface will, by fiat, never 72 + * support "UUIDs" longer than 16 bytes; we don't want to force all downstream 73 + * users to have to deal with that. 74 + */ 75 + struct fsuuid2 { 76 + __u8 len; 77 + __u8 uuid[16]; 78 + }; 79 + 80 + struct fs_sysfs_path { 81 + __u8 len; 82 + __u8 name[128]; 83 + }; 84 + 67 85 /* extent-same (dedupe) ioctls; these MUST match the btrfs ioctl definitions */ 68 86 #define FILE_DEDUPE_RANGE_SAME 0 69 87 #define FILE_DEDUPE_RANGE_DIFFERS 1 ··· 233 215 #define FS_IOC_FSSETXATTR _IOW('X', 32, struct fsxattr) 234 216 #define FS_IOC_GETFSLABEL _IOR(0x94, 49, char[FSLABEL_MAX]) 235 217 #define FS_IOC_SETFSLABEL _IOW(0x94, 50, char[FSLABEL_MAX]) 218 + /* Returns the external filesystem UUID, the same one blkid returns */ 219 + #define FS_IOC_GETFSUUID _IOR(0x15, 0, struct fsuuid2) 220 + /* 221 + * Returns the path component under /sys/fs/ that refers to this filesystem; 222 + * also /sys/kernel/debug/ for filesystems with debugfs exports 223 + */ 224 + #define FS_IOC_GETFSSYSFSPATH _IOR(0x15, 1, struct fs_sysfs_path) 236 225 237 226 /* 238 227 * Inode flags (FS_IOC_GETFLAGS / FS_IOC_SETFLAGS) ··· 326 301 /* per-IO O_APPEND */ 327 302 #define RWF_APPEND ((__force __kernel_rwf_t)0x00000010) 328 303 304 + /* per-IO negation of O_APPEND */ 305 + #define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020) 306 + 329 307 /* mask of flags supported by the kernel */ 330 308 #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\ 331 - RWF_APPEND) 309 + RWF_APPEND | RWF_NOAPPEND) 332 310 333 311 /* Pagemap ioctl */ 334 312 #define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
+5 -684
tools/include/uapi/linux/kvm.h
··· 16 16 17 17 #define KVM_API_VERSION 12 18 18 19 + /* 20 + * Backwards-compatible definitions. 21 + */ 22 + #define __KVM_HAVE_GUEST_DEBUG 23 + 19 24 /* for KVM_SET_USER_MEMORY_REGION */ 20 25 struct kvm_userspace_memory_region { 21 26 __u32 slot; ··· 89 84 }; 90 85 91 86 #define KVM_PIT_SPEAKER_DUMMY 1 92 - 93 - struct kvm_s390_skeys { 94 - __u64 start_gfn; 95 - __u64 count; 96 - __u64 skeydata_addr; 97 - __u32 flags; 98 - __u32 reserved[9]; 99 - }; 100 - 101 - #define KVM_S390_CMMA_PEEK (1 << 0) 102 - 103 - /** 104 - * kvm_s390_cmma_log - Used for CMMA migration. 105 - * 106 - * Used both for input and output. 107 - * 108 - * @start_gfn: Guest page number to start from. 109 - * @count: Size of the result buffer. 110 - * @flags: Control operation mode via KVM_S390_CMMA_* flags 111 - * @remaining: Used with KVM_S390_GET_CMMA_BITS. Indicates how many dirty 112 - * pages are still remaining. 113 - * @mask: Used with KVM_S390_SET_CMMA_BITS. Bitmap of bits to actually set 114 - * in the PGSTE. 115 - * @values: Pointer to the values buffer. 116 - * 117 - * Used in KVM_S390_{G,S}ET_CMMA_BITS ioctls. 
118 - */ 119 - struct kvm_s390_cmma_log { 120 - __u64 start_gfn; 121 - __u32 count; 122 - __u32 flags; 123 - union { 124 - __u64 remaining; 125 - __u64 mask; 126 - }; 127 - __u64 values; 128 - }; 129 87 130 88 struct kvm_hyperv_exit { 131 89 #define KVM_EXIT_HYPERV_SYNIC 1 ··· 283 315 __u32 ipb; 284 316 } s390_sieic; 285 317 /* KVM_EXIT_S390_RESET */ 286 - #define KVM_S390_RESET_POR 1 287 - #define KVM_S390_RESET_CLEAR 2 288 - #define KVM_S390_RESET_SUBSYSTEM 4 289 - #define KVM_S390_RESET_CPU_INIT 8 290 - #define KVM_S390_RESET_IPL 16 291 318 __u64 s390_reset_flags; 292 319 /* KVM_EXIT_S390_UCONTROL */ 293 320 struct { ··· 499 536 __u8 pad[5]; 500 537 }; 501 538 502 - /* for KVM_S390_MEM_OP */ 503 - struct kvm_s390_mem_op { 504 - /* in */ 505 - __u64 gaddr; /* the guest address */ 506 - __u64 flags; /* flags */ 507 - __u32 size; /* amount of bytes */ 508 - __u32 op; /* type of operation */ 509 - __u64 buf; /* buffer in userspace */ 510 - union { 511 - struct { 512 - __u8 ar; /* the access register number */ 513 - __u8 key; /* access key, ignored if flag unset */ 514 - __u8 pad1[6]; /* ignored */ 515 - __u64 old_addr; /* ignored if cmpxchg flag unset */ 516 - }; 517 - __u32 sida_offset; /* offset into the sida */ 518 - __u8 reserved[32]; /* ignored */ 519 - }; 520 - }; 521 - /* types for kvm_s390_mem_op->op */ 522 - #define KVM_S390_MEMOP_LOGICAL_READ 0 523 - #define KVM_S390_MEMOP_LOGICAL_WRITE 1 524 - #define KVM_S390_MEMOP_SIDA_READ 2 525 - #define KVM_S390_MEMOP_SIDA_WRITE 3 526 - #define KVM_S390_MEMOP_ABSOLUTE_READ 4 527 - #define KVM_S390_MEMOP_ABSOLUTE_WRITE 5 528 - #define KVM_S390_MEMOP_ABSOLUTE_CMPXCHG 6 529 - 530 - /* flags for kvm_s390_mem_op->flags */ 531 - #define KVM_S390_MEMOP_F_CHECK_ONLY (1ULL << 0) 532 - #define KVM_S390_MEMOP_F_INJECT_EXCEPTION (1ULL << 1) 533 - #define KVM_S390_MEMOP_F_SKEY_PROTECTION (1ULL << 2) 534 - 535 - /* flags specifying extension support via KVM_CAP_S390_MEM_OP_EXTENSION */ 536 - #define 
KVM_S390_MEMOP_EXTENSION_CAP_BASE (1 << 0) 537 - #define KVM_S390_MEMOP_EXTENSION_CAP_CMPXCHG (1 << 1) 538 - 539 539 /* for KVM_INTERRUPT */ 540 540 struct kvm_interrupt { 541 541 /* in */ ··· 563 637 __u32 mp_state; 564 638 }; 565 639 566 - struct kvm_s390_psw { 567 - __u64 mask; 568 - __u64 addr; 569 - }; 570 - 571 - /* valid values for type in kvm_s390_interrupt */ 572 - #define KVM_S390_SIGP_STOP 0xfffe0000u 573 - #define KVM_S390_PROGRAM_INT 0xfffe0001u 574 - #define KVM_S390_SIGP_SET_PREFIX 0xfffe0002u 575 - #define KVM_S390_RESTART 0xfffe0003u 576 - #define KVM_S390_INT_PFAULT_INIT 0xfffe0004u 577 - #define KVM_S390_INT_PFAULT_DONE 0xfffe0005u 578 - #define KVM_S390_MCHK 0xfffe1000u 579 - #define KVM_S390_INT_CLOCK_COMP 0xffff1004u 580 - #define KVM_S390_INT_CPU_TIMER 0xffff1005u 581 - #define KVM_S390_INT_VIRTIO 0xffff2603u 582 - #define KVM_S390_INT_SERVICE 0xffff2401u 583 - #define KVM_S390_INT_EMERGENCY 0xffff1201u 584 - #define KVM_S390_INT_EXTERNAL_CALL 0xffff1202u 585 - /* Anything below 0xfffe0000u is taken by INT_IO */ 586 - #define KVM_S390_INT_IO(ai,cssid,ssid,schid) \ 587 - (((schid)) | \ 588 - ((ssid) << 16) | \ 589 - ((cssid) << 18) | \ 590 - ((ai) << 26)) 591 - #define KVM_S390_INT_IO_MIN 0x00000000u 592 - #define KVM_S390_INT_IO_MAX 0xfffdffffu 593 - #define KVM_S390_INT_IO_AI_MASK 0x04000000u 594 - 595 - 596 - struct kvm_s390_interrupt { 597 - __u32 type; 598 - __u32 parm; 599 - __u64 parm64; 600 - }; 601 - 602 - struct kvm_s390_io_info { 603 - __u16 subchannel_id; 604 - __u16 subchannel_nr; 605 - __u32 io_int_parm; 606 - __u32 io_int_word; 607 - }; 608 - 609 - struct kvm_s390_ext_info { 610 - __u32 ext_params; 611 - __u32 pad; 612 - __u64 ext_params2; 613 - }; 614 - 615 - struct kvm_s390_pgm_info { 616 - __u64 trans_exc_code; 617 - __u64 mon_code; 618 - __u64 per_address; 619 - __u32 data_exc_code; 620 - __u16 code; 621 - __u16 mon_class_nr; 622 - __u8 per_code; 623 - __u8 per_atmid; 624 - __u8 exc_access_id; 625 - __u8 per_access_id; 626 - 
__u8 op_access_id; 627 - #define KVM_S390_PGM_FLAGS_ILC_VALID 0x01 628 - #define KVM_S390_PGM_FLAGS_ILC_0 0x02 629 - #define KVM_S390_PGM_FLAGS_ILC_1 0x04 630 - #define KVM_S390_PGM_FLAGS_ILC_MASK 0x06 631 - #define KVM_S390_PGM_FLAGS_NO_REWIND 0x08 632 - __u8 flags; 633 - __u8 pad[2]; 634 - }; 635 - 636 - struct kvm_s390_prefix_info { 637 - __u32 address; 638 - }; 639 - 640 - struct kvm_s390_extcall_info { 641 - __u16 code; 642 - }; 643 - 644 - struct kvm_s390_emerg_info { 645 - __u16 code; 646 - }; 647 - 648 - #define KVM_S390_STOP_FLAG_STORE_STATUS 0x01 649 - struct kvm_s390_stop_info { 650 - __u32 flags; 651 - }; 652 - 653 - struct kvm_s390_mchk_info { 654 - __u64 cr14; 655 - __u64 mcic; 656 - __u64 failing_storage_address; 657 - __u32 ext_damage_code; 658 - __u32 pad; 659 - __u8 fixed_logout[16]; 660 - }; 661 - 662 - struct kvm_s390_irq { 663 - __u64 type; 664 - union { 665 - struct kvm_s390_io_info io; 666 - struct kvm_s390_ext_info ext; 667 - struct kvm_s390_pgm_info pgm; 668 - struct kvm_s390_emerg_info emerg; 669 - struct kvm_s390_extcall_info extcall; 670 - struct kvm_s390_prefix_info prefix; 671 - struct kvm_s390_stop_info stop; 672 - struct kvm_s390_mchk_info mchk; 673 - char reserved[64]; 674 - } u; 675 - }; 676 - 677 - struct kvm_s390_irq_state { 678 - __u64 buf; 679 - __u32 flags; /* will stay unused for compatibility reasons */ 680 - __u32 len; 681 - __u32 reserved[4]; /* will stay unused for compatibility reasons */ 682 - }; 683 - 684 640 /* for KVM_SET_GUEST_DEBUG */ 685 641 686 642 #define KVM_GUESTDBG_ENABLE 0x00000001 ··· 616 808 __u32 flags; 617 809 __u64 args[4]; 618 810 __u8 pad[64]; 619 - }; 620 - 621 - /* for KVM_PPC_GET_PVINFO */ 622 - 623 - #define KVM_PPC_PVINFO_FLAGS_EV_IDLE (1<<0) 624 - 625 - struct kvm_ppc_pvinfo { 626 - /* out */ 627 - __u32 flags; 628 - __u32 hcall[4]; 629 - __u8 pad[108]; 630 - }; 631 - 632 - /* for KVM_PPC_GET_SMMU_INFO */ 633 - #define KVM_PPC_PAGE_SIZES_MAX_SZ 8 634 - 635 - struct kvm_ppc_one_page_size { 636 - 
__u32 page_shift; /* Page shift (or 0) */ 637 - __u32 pte_enc; /* Encoding in the HPTE (>>12) */ 638 - }; 639 - 640 - struct kvm_ppc_one_seg_page_size { 641 - __u32 page_shift; /* Base page shift of segment (or 0) */ 642 - __u32 slb_enc; /* SLB encoding for BookS */ 643 - struct kvm_ppc_one_page_size enc[KVM_PPC_PAGE_SIZES_MAX_SZ]; 644 - }; 645 - 646 - #define KVM_PPC_PAGE_SIZES_REAL 0x00000001 647 - #define KVM_PPC_1T_SEGMENTS 0x00000002 648 - #define KVM_PPC_NO_HASH 0x00000004 649 - 650 - struct kvm_ppc_smmu_info { 651 - __u64 flags; 652 - __u32 slb_size; 653 - __u16 data_keys; /* # storage keys supported for data */ 654 - __u16 instr_keys; /* # storage keys supported for instructions */ 655 - struct kvm_ppc_one_seg_page_size sps[KVM_PPC_PAGE_SIZES_MAX_SZ]; 656 - }; 657 - 658 - /* for KVM_PPC_RESIZE_HPT_{PREPARE,COMMIT} */ 659 - struct kvm_ppc_resize_hpt { 660 - __u64 flags; 661 - __u32 shift; 662 - __u32 pad; 663 811 }; 664 812 665 813 #define KVMIO 0xAE ··· 687 923 /* Bug in KVM_SET_USER_MEMORY_REGION fixed: */ 688 924 #define KVM_CAP_DESTROY_MEMORY_REGION_WORKS 21 689 925 #define KVM_CAP_USER_NMI 22 690 - #ifdef __KVM_HAVE_GUEST_DEBUG 691 926 #define KVM_CAP_SET_GUEST_DEBUG 23 692 - #endif 693 927 #ifdef __KVM_HAVE_PIT 694 928 #define KVM_CAP_REINJECT_CONTROL 24 695 929 #endif ··· 918 1156 #define KVM_CAP_GUEST_MEMFD 234 919 1157 #define KVM_CAP_VM_TYPES 235 920 1158 921 - #ifdef KVM_CAP_IRQ_ROUTING 922 - 923 1159 struct kvm_irq_routing_irqchip { 924 1160 __u32 irqchip; 925 1161 __u32 pin; ··· 981 1221 __u32 flags; 982 1222 struct kvm_irq_routing_entry entries[]; 983 1223 }; 984 - 985 - #endif 986 - 987 - #ifdef KVM_CAP_MCE 988 - /* x86 MCE */ 989 - struct kvm_x86_mce { 990 - __u64 status; 991 - __u64 addr; 992 - __u64 misc; 993 - __u64 mcg_status; 994 - __u8 bank; 995 - __u8 pad1[7]; 996 - __u64 pad2[3]; 997 - }; 998 - #endif 999 - 1000 - #ifdef KVM_CAP_XEN_HVM 1001 - #define KVM_XEN_HVM_CONFIG_HYPERCALL_MSR (1 << 0) 1002 - #define 
KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL (1 << 1) 1003 - #define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2) 1004 - #define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3) 1005 - #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4) 1006 - #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) 1007 - #define KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG (1 << 6) 1008 - #define KVM_XEN_HVM_CONFIG_PVCLOCK_TSC_UNSTABLE (1 << 7) 1009 - 1010 - struct kvm_xen_hvm_config { 1011 - __u32 flags; 1012 - __u32 msr; 1013 - __u64 blob_addr_32; 1014 - __u64 blob_addr_64; 1015 - __u8 blob_size_32; 1016 - __u8 blob_size_64; 1017 - __u8 pad2[30]; 1018 - }; 1019 - #endif 1020 1224 1021 1225 #define KVM_IRQFD_FLAG_DEASSIGN (1 << 0) 1022 1226 /* ··· 1166 1442 struct kvm_userspace_memory_region2) 1167 1443 1168 1444 /* enable ucontrol for s390 */ 1169 - struct kvm_s390_ucas_mapping { 1170 - __u64 user_addr; 1171 - __u64 vcpu_addr; 1172 - __u64 length; 1173 - }; 1174 1445 #define KVM_S390_UCAS_MAP _IOW(KVMIO, 0x50, struct kvm_s390_ucas_mapping) 1175 1446 #define KVM_S390_UCAS_UNMAP _IOW(KVMIO, 0x51, struct kvm_s390_ucas_mapping) 1176 1447 #define KVM_S390_VCPU_FAULT _IOW(KVMIO, 0x52, unsigned long) ··· 1360 1641 #define KVM_S390_NORMAL_RESET _IO(KVMIO, 0xc3) 1361 1642 #define KVM_S390_CLEAR_RESET _IO(KVMIO, 0xc4) 1362 1643 1363 - struct kvm_s390_pv_sec_parm { 1364 - __u64 origin; 1365 - __u64 length; 1366 - }; 1367 - 1368 - struct kvm_s390_pv_unp { 1369 - __u64 addr; 1370 - __u64 size; 1371 - __u64 tweak; 1372 - }; 1373 - 1374 - enum pv_cmd_dmp_id { 1375 - KVM_PV_DUMP_INIT, 1376 - KVM_PV_DUMP_CONFIG_STOR_STATE, 1377 - KVM_PV_DUMP_COMPLETE, 1378 - KVM_PV_DUMP_CPU, 1379 - }; 1380 - 1381 - struct kvm_s390_pv_dmp { 1382 - __u64 subcmd; 1383 - __u64 buff_addr; 1384 - __u64 buff_len; 1385 - __u64 gaddr; /* For dump storage state */ 1386 - __u64 reserved[4]; 1387 - }; 1388 - 1389 - enum pv_cmd_info_id { 1390 - KVM_PV_INFO_VM, 1391 - KVM_PV_INFO_DUMP, 1392 - }; 1393 - 1394 - struct kvm_s390_pv_info_dump { 1395 - __u64 
dump_cpu_buffer_len; 1396 - __u64 dump_config_mem_buffer_per_1m; 1397 - __u64 dump_config_finalize_len; 1398 - }; 1399 - 1400 - struct kvm_s390_pv_info_vm { 1401 - __u64 inst_calls_list[4]; 1402 - __u64 max_cpus; 1403 - __u64 max_guests; 1404 - __u64 max_guest_addr; 1405 - __u64 feature_indication; 1406 - }; 1407 - 1408 - struct kvm_s390_pv_info_header { 1409 - __u32 id; 1410 - __u32 len_max; 1411 - __u32 len_written; 1412 - __u32 reserved; 1413 - }; 1414 - 1415 - struct kvm_s390_pv_info { 1416 - struct kvm_s390_pv_info_header header; 1417 - union { 1418 - struct kvm_s390_pv_info_dump dump; 1419 - struct kvm_s390_pv_info_vm vm; 1420 - }; 1421 - }; 1422 - 1423 - enum pv_cmd_id { 1424 - KVM_PV_ENABLE, 1425 - KVM_PV_DISABLE, 1426 - KVM_PV_SET_SEC_PARMS, 1427 - KVM_PV_UNPACK, 1428 - KVM_PV_VERIFY, 1429 - KVM_PV_PREP_RESET, 1430 - KVM_PV_UNSHARE_ALL, 1431 - KVM_PV_INFO, 1432 - KVM_PV_DUMP, 1433 - KVM_PV_ASYNC_CLEANUP_PREPARE, 1434 - KVM_PV_ASYNC_CLEANUP_PERFORM, 1435 - }; 1436 - 1437 - struct kvm_pv_cmd { 1438 - __u32 cmd; /* Command to be executed */ 1439 - __u16 rc; /* Ultravisor return code */ 1440 - __u16 rrc; /* Ultravisor return reason code */ 1441 - __u64 data; /* Data or address */ 1442 - __u32 flags; /* flags for future extensions. 
Must be 0 for now */ 1443 - __u32 reserved[3]; 1444 - }; 1445 - 1446 1644 /* Available with KVM_CAP_S390_PROTECTED */ 1447 1645 #define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd) 1448 1646 ··· 1373 1737 #define KVM_XEN_HVM_GET_ATTR _IOWR(KVMIO, 0xc8, struct kvm_xen_hvm_attr) 1374 1738 #define KVM_XEN_HVM_SET_ATTR _IOW(KVMIO, 0xc9, struct kvm_xen_hvm_attr) 1375 1739 1376 - struct kvm_xen_hvm_attr { 1377 - __u16 type; 1378 - __u16 pad[3]; 1379 - union { 1380 - __u8 long_mode; 1381 - __u8 vector; 1382 - __u8 runstate_update_flag; 1383 - struct { 1384 - __u64 gfn; 1385 - #define KVM_XEN_INVALID_GFN ((__u64)-1) 1386 - } shared_info; 1387 - struct { 1388 - __u32 send_port; 1389 - __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */ 1390 - __u32 flags; 1391 - #define KVM_XEN_EVTCHN_DEASSIGN (1 << 0) 1392 - #define KVM_XEN_EVTCHN_UPDATE (1 << 1) 1393 - #define KVM_XEN_EVTCHN_RESET (1 << 2) 1394 - /* 1395 - * Events sent by the guest are either looped back to 1396 - * the guest itself (potentially on a different port#) 1397 - * or signalled via an eventfd. 
1398 - */ 1399 - union { 1400 - struct { 1401 - __u32 port; 1402 - __u32 vcpu; 1403 - __u32 priority; 1404 - } port; 1405 - struct { 1406 - __u32 port; /* Zero for eventfd */ 1407 - __s32 fd; 1408 - } eventfd; 1409 - __u32 padding[4]; 1410 - } deliver; 1411 - } evtchn; 1412 - __u32 xen_version; 1413 - __u64 pad[8]; 1414 - } u; 1415 - }; 1416 - 1417 - 1418 - /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ 1419 - #define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0 1420 - #define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1 1421 - #define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2 1422 - /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1423 - #define KVM_XEN_ATTR_TYPE_EVTCHN 0x3 1424 - #define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4 1425 - /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_RUNSTATE_UPDATE_FLAG */ 1426 - #define KVM_XEN_ATTR_TYPE_RUNSTATE_UPDATE_FLAG 0x5 1427 - 1428 1740 /* Per-vCPU Xen attributes */ 1429 1741 #define KVM_XEN_VCPU_GET_ATTR _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr) 1430 1742 #define KVM_XEN_VCPU_SET_ATTR _IOW(KVMIO, 0xcb, struct kvm_xen_vcpu_attr) ··· 1382 1798 1383 1799 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2) 1384 1800 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2) 1385 - 1386 - struct kvm_xen_vcpu_attr { 1387 - __u16 type; 1388 - __u16 pad[3]; 1389 - union { 1390 - __u64 gpa; 1391 - #define KVM_XEN_INVALID_GPA ((__u64)-1) 1392 - __u64 pad[8]; 1393 - struct { 1394 - __u64 state; 1395 - __u64 state_entry_time; 1396 - __u64 time_running; 1397 - __u64 time_runnable; 1398 - __u64 time_blocked; 1399 - __u64 time_offline; 1400 - } runstate; 1401 - __u32 vcpu_id; 1402 - struct { 1403 - __u32 port; 1404 - __u32 priority; 1405 - __u64 expires_ns; 1406 - } timer; 1407 - __u8 vector; 1408 - } u; 1409 - }; 1410 - 1411 - /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_SHARED_INFO */ 1412 - #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO 0x0 1413 - #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO 0x1 1414 
- #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADDR 0x2 1415 - #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3 1416 - #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4 1417 - #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5 1418 - /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1419 - #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6 1420 - #define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7 1421 - #define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8 1422 - 1423 - /* Secure Encrypted Virtualization command */ 1424 - enum sev_cmd_id { 1425 - /* Guest initialization commands */ 1426 - KVM_SEV_INIT = 0, 1427 - KVM_SEV_ES_INIT, 1428 - /* Guest launch commands */ 1429 - KVM_SEV_LAUNCH_START, 1430 - KVM_SEV_LAUNCH_UPDATE_DATA, 1431 - KVM_SEV_LAUNCH_UPDATE_VMSA, 1432 - KVM_SEV_LAUNCH_SECRET, 1433 - KVM_SEV_LAUNCH_MEASURE, 1434 - KVM_SEV_LAUNCH_FINISH, 1435 - /* Guest migration commands (outgoing) */ 1436 - KVM_SEV_SEND_START, 1437 - KVM_SEV_SEND_UPDATE_DATA, 1438 - KVM_SEV_SEND_UPDATE_VMSA, 1439 - KVM_SEV_SEND_FINISH, 1440 - /* Guest migration commands (incoming) */ 1441 - KVM_SEV_RECEIVE_START, 1442 - KVM_SEV_RECEIVE_UPDATE_DATA, 1443 - KVM_SEV_RECEIVE_UPDATE_VMSA, 1444 - KVM_SEV_RECEIVE_FINISH, 1445 - /* Guest status and debug commands */ 1446 - KVM_SEV_GUEST_STATUS, 1447 - KVM_SEV_DBG_DECRYPT, 1448 - KVM_SEV_DBG_ENCRYPT, 1449 - /* Guest certificates commands */ 1450 - KVM_SEV_CERT_EXPORT, 1451 - /* Attestation report */ 1452 - KVM_SEV_GET_ATTESTATION_REPORT, 1453 - /* Guest Migration Extension */ 1454 - KVM_SEV_SEND_CANCEL, 1455 - 1456 - KVM_SEV_NR_MAX, 1457 - }; 1458 - 1459 - struct kvm_sev_cmd { 1460 - __u32 id; 1461 - __u64 data; 1462 - __u32 error; 1463 - __u32 sev_fd; 1464 - }; 1465 - 1466 - struct kvm_sev_launch_start { 1467 - __u32 handle; 1468 - __u32 policy; 1469 - __u64 dh_uaddr; 1470 - __u32 dh_len; 1471 - __u64 session_uaddr; 1472 - __u32 session_len; 1473 - }; 1474 - 1475 - struct kvm_sev_launch_update_data { 1476 - __u64 uaddr; 1477 - __u32 len; 1478 - }; 
1479 - 1480 - 1481 - struct kvm_sev_launch_secret { 1482 - __u64 hdr_uaddr; 1483 - __u32 hdr_len; 1484 - __u64 guest_uaddr; 1485 - __u32 guest_len; 1486 - __u64 trans_uaddr; 1487 - __u32 trans_len; 1488 - }; 1489 - 1490 - struct kvm_sev_launch_measure { 1491 - __u64 uaddr; 1492 - __u32 len; 1493 - }; 1494 - 1495 - struct kvm_sev_guest_status { 1496 - __u32 handle; 1497 - __u32 policy; 1498 - __u32 state; 1499 - }; 1500 - 1501 - struct kvm_sev_dbg { 1502 - __u64 src_uaddr; 1503 - __u64 dst_uaddr; 1504 - __u32 len; 1505 - }; 1506 - 1507 - struct kvm_sev_attestation_report { 1508 - __u8 mnonce[16]; 1509 - __u64 uaddr; 1510 - __u32 len; 1511 - }; 1512 - 1513 - struct kvm_sev_send_start { 1514 - __u32 policy; 1515 - __u64 pdh_cert_uaddr; 1516 - __u32 pdh_cert_len; 1517 - __u64 plat_certs_uaddr; 1518 - __u32 plat_certs_len; 1519 - __u64 amd_certs_uaddr; 1520 - __u32 amd_certs_len; 1521 - __u64 session_uaddr; 1522 - __u32 session_len; 1523 - }; 1524 - 1525 - struct kvm_sev_send_update_data { 1526 - __u64 hdr_uaddr; 1527 - __u32 hdr_len; 1528 - __u64 guest_uaddr; 1529 - __u32 guest_len; 1530 - __u64 trans_uaddr; 1531 - __u32 trans_len; 1532 - }; 1533 - 1534 - struct kvm_sev_receive_start { 1535 - __u32 handle; 1536 - __u32 policy; 1537 - __u64 pdh_uaddr; 1538 - __u32 pdh_len; 1539 - __u64 session_uaddr; 1540 - __u32 session_len; 1541 - }; 1542 - 1543 - struct kvm_sev_receive_update_data { 1544 - __u64 hdr_uaddr; 1545 - __u32 hdr_len; 1546 - __u64 guest_uaddr; 1547 - __u32 guest_len; 1548 - __u64 trans_uaddr; 1549 - __u32 trans_len; 1550 - }; 1551 - 1552 - #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0) 1553 - #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1) 1554 - #define KVM_DEV_ASSIGN_MASK_INTX (1 << 2) 1555 - 1556 - struct kvm_assigned_pci_dev { 1557 - __u32 assigned_dev_id; 1558 - __u32 busnr; 1559 - __u32 devfn; 1560 - __u32 flags; 1561 - __u32 segnr; 1562 - union { 1563 - __u32 reserved[11]; 1564 - }; 1565 - }; 1566 - 1567 - #define KVM_DEV_IRQ_HOST_INTX (1 << 0) 1568 - #define 
KVM_DEV_IRQ_HOST_MSI (1 << 1) 1569 - #define KVM_DEV_IRQ_HOST_MSIX (1 << 2) 1570 - 1571 - #define KVM_DEV_IRQ_GUEST_INTX (1 << 8) 1572 - #define KVM_DEV_IRQ_GUEST_MSI (1 << 9) 1573 - #define KVM_DEV_IRQ_GUEST_MSIX (1 << 10) 1574 - 1575 - #define KVM_DEV_IRQ_HOST_MASK 0x00ff 1576 - #define KVM_DEV_IRQ_GUEST_MASK 0xff00 1577 - 1578 - struct kvm_assigned_irq { 1579 - __u32 assigned_dev_id; 1580 - __u32 host_irq; /* ignored (legacy field) */ 1581 - __u32 guest_irq; 1582 - __u32 flags; 1583 - union { 1584 - __u32 reserved[12]; 1585 - }; 1586 - }; 1587 - 1588 - struct kvm_assigned_msix_nr { 1589 - __u32 assigned_dev_id; 1590 - __u16 entry_nr; 1591 - __u16 padding; 1592 - }; 1593 - 1594 - #define KVM_MAX_MSIX_PER_DEV 256 1595 - struct kvm_assigned_msix_entry { 1596 - __u32 assigned_dev_id; 1597 - __u32 gsi; 1598 - __u16 entry; /* The index of entry in the MSI-X table */ 1599 - __u16 padding[3]; 1600 - }; 1601 - 1602 - #define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0) 1603 - #define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1) 1604 - 1605 - /* Available with KVM_CAP_ARM_USER_IRQ */ 1606 - 1607 - /* Bits for run->s.regs.device_irq_level */ 1608 - #define KVM_ARM_DEV_EL1_VTIMER (1 << 0) 1609 - #define KVM_ARM_DEV_EL1_PTIMER (1 << 1) 1610 - #define KVM_ARM_DEV_PMU (1 << 2) 1611 - 1612 - struct kvm_hyperv_eventfd { 1613 - __u32 conn_id; 1614 - __s32 fd; 1615 - __u32 flags; 1616 - __u32 padding[3]; 1617 - }; 1618 - 1619 - #define KVM_HYPERV_CONN_ID_MASK 0x00ffffff 1620 - #define KVM_HYPERV_EVENTFD_DEASSIGN (1 << 0) 1621 1801 1622 1802 #define KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE (1 << 0) 1623 1803 #define KVM_DIRTY_LOG_INITIALLY_SET (1 << 1) ··· 1527 2179 1528 2180 /* Available with KVM_CAP_S390_ZPCI_OP */ 1529 2181 #define KVM_S390_ZPCI_OP _IOW(KVMIO, 0xd1, struct kvm_s390_zpci_op) 1530 - 1531 - struct kvm_s390_zpci_op { 1532 - /* in */ 1533 - __u32 fh; /* target device */ 1534 - __u8 op; /* operation to perform */ 1535 - __u8 pad[3]; 1536 - union { 1537 - /* for 
KVM_S390_ZPCIOP_REG_AEN */ 1538 - struct { 1539 - __u64 ibv; /* Guest addr of interrupt bit vector */ 1540 - __u64 sb; /* Guest addr of summary bit */ 1541 - __u32 flags; 1542 - __u32 noi; /* Number of interrupts */ 1543 - __u8 isc; /* Guest interrupt subclass */ 1544 - __u8 sbo; /* Offset of guest summary bit vector */ 1545 - __u16 pad; 1546 - } reg_aen; 1547 - __u64 reserved[8]; 1548 - } u; 1549 - }; 1550 - 1551 - /* types for kvm_s390_zpci_op->op */ 1552 - #define KVM_S390_ZPCIOP_REG_AEN 0 1553 - #define KVM_S390_ZPCIOP_DEREG_AEN 1 1554 - 1555 - /* flags for kvm_s390_zpci_op->u.reg_aen.flags */ 1556 - #define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0) 1557 2182 1558 2183 /* Available with KVM_CAP_MEMORY_ATTRIBUTES */ 1559 2184 #define KVM_SET_MEMORY_ATTRIBUTES _IOW(KVMIO, 0xd2, struct kvm_memory_attributes)
+2 -2
tools/include/uapi/sound/asound.h
··· 142 142 * * 143 143 *****************************************************************************/ 144 144 145 - #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 16) 145 + #define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 17) 146 146 147 147 typedef unsigned long snd_pcm_uframes_t; 148 148 typedef signed long snd_pcm_sframes_t; ··· 416 416 unsigned int rmask; /* W: requested masks */ 417 417 unsigned int cmask; /* R: changed masks */ 418 418 unsigned int info; /* R: Info flags for returned setup */ 419 - unsigned int msbits; /* R: used most significant bits */ 419 + unsigned int msbits; /* R: used most significant bits (in sample bit-width) */ 420 420 unsigned int rate_num; /* R: rate numerator */ 421 421 unsigned int rate_den; /* R: rate denominator */ 422 422 snd_pcm_uframes_t fifo_size; /* R: chip FIFO size in frames */
+1
tools/net/ynl/lib/ynl.py
··· 203 203 self.done = 1 204 204 extack_off = 20 205 205 elif self.nl_type == Netlink.NLMSG_DONE: 206 + self.error = struct.unpack("i", self.raw[0:4])[0] 206 207 self.done = 1 207 208 extack_off = 4 208 209
+1 -1
tools/perf/ui/browsers/annotate.c
··· 970 970 if (dso->annotate_warned) 971 971 return -1; 972 972 973 - if (not_annotated) { 973 + if (not_annotated || !sym->annotate2) { 974 974 err = symbol__annotate2(ms, evsel, &browser.arch); 975 975 if (err) { 976 976 char msg[BUFSIZ];
+3
tools/perf/util/annotate.c
··· 2461 2461 	if (parch)
2462 2462 		*parch = arch;
2463 2463
2464 + 	if (!list_empty(&notes->src->source))
2465 + 		return 0;
2466 +
2464 2467 	args.arch = arch;
2465 2468 	args.ms = *ms;
2466 2469 	if (annotate_opts.full_addr)
+4 -1
tools/perf/util/bpf_skel/lock_contention.bpf.c
··· 284 284 	struct task_struct *curr;
285 285 	struct mm_struct___old *mm_old;
286 286 	struct mm_struct___new *mm_new;
287 + 	struct sighand_struct *sighand;
287 288
288 289 	switch (flags) {
289 290 	case LCB_F_READ:  /* rwsem */
··· 306 305 		break;
307 306 	case LCB_F_SPIN:  /* spinlock */
308 307 		curr = bpf_get_current_task_btf();
309 - 		if (&curr->sighand->siglock == (void *)lock)
308 + 		sighand = curr->sighand;
309 +
310 + 		if (sighand && &sighand->siglock == (void *)lock)
310 311 			return LCD_F_SIGHAND_LOCK;
311 312 		break;
312 313 	default:
+2
tools/testing/selftests/iommu/config
··· 1 1 CONFIG_IOMMUFD=y
2 + CONFIG_FAULT_INJECTION_DEBUG_FS=y
2 3 CONFIG_FAULT_INJECTION=y
3 4 CONFIG_IOMMUFD_TEST=y
5 + CONFIG_FAILSLAB=y
+6 -9
tools/testing/selftests/kvm/max_guest_memory_test.c
··· 22 22 {
23 23 	uint64_t gpa;
24 24
25 - 	for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
26 - 		*((volatile uint64_t *)gpa) = gpa;
27 -
28 - 	GUEST_DONE();
25 + 	for (;;) {
26 + 		for (gpa = start_gpa; gpa < end_gpa; gpa += stride)
27 + 			*((volatile uint64_t *)gpa) = gpa;
28 + 		GUEST_SYNC(0);
29 + 	}
29 30 }
30 31
31 32 struct vcpu_info {
··· 56 55 static void run_vcpu(struct kvm_vcpu *vcpu)
57 56 {
58 57 	vcpu_run(vcpu);
59 - 	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_DONE);
58 + 	TEST_ASSERT_EQ(get_ucall(vcpu, NULL), UCALL_SYNC);
60 59 }
61 60
62 61 static void *vcpu_worker(void *data)
··· 65 64 	struct kvm_vcpu *vcpu = info->vcpu;
66 65 	struct kvm_vm *vm = vcpu->vm;
67 66 	struct kvm_sregs sregs;
68 - 	struct kvm_regs regs;
69 67
70 68 	vcpu_args_set(vcpu, 3, info->start_gpa, info->end_gpa, vm->page_size);
71 69
72 - 	/* Snapshot regs before the first run. */
73 - 	vcpu_regs_get(vcpu, &regs);
74 70 	rendezvous_with_boss();
75 71
76 72 	run_vcpu(vcpu);
77 73 	rendezvous_with_boss();
78 - 	vcpu_regs_set(vcpu, &regs);
79 74 	vcpu_sregs_get(vcpu, &sregs);
80 75 #ifdef __x86_64__
81 76 	/* Toggle CR0.WP to trigger a MMU context reset. */
+1 -1
tools/testing/selftests/kvm/set_memory_region_test.c
··· 333 333 	struct kvm_vm *vm;
334 334 	int r, i;
335 335
336 - #if defined __aarch64__ || defined __x86_64__
336 + #if defined __aarch64__ || defined __riscv || defined __x86_64__
337 337 	supported_flags |= KVM_MEM_READONLY;
338 338 #endif
339 339
+19 -1
tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
··· 416 416
417 417 static void guest_test_gp_counters(void)
418 418 {
419 + 	uint8_t pmu_version = guest_get_pmu_version();
419 420 	uint8_t nr_gp_counters = 0;
420 421 	uint32_t base_msr;
421 422
422 - 	if (guest_get_pmu_version())
423 + 	if (pmu_version)
423 424 		nr_gp_counters = this_cpu_property(X86_PROPERTY_PMU_NR_GP_COUNTERS);
425 +
426 + 	/*
427 + 	 * For v2+ PMUs, PERF_GLOBAL_CTRL's architectural post-RESET value is
428 + 	 * "Sets bits n-1:0 and clears the upper bits", where 'n' is the number
429 + 	 * of GP counters.  If there are no GP counters, require KVM to leave
430 + 	 * PERF_GLOBAL_CTRL '0'.  This edge case isn't covered by the SDM, but
431 + 	 * follow the spirit of the architecture and only globally enable GP
432 + 	 * counters, of which there are none.
433 + 	 */
434 + 	if (pmu_version > 1) {
435 + 		uint64_t global_ctrl = rdmsr(MSR_CORE_PERF_GLOBAL_CTRL);
436 +
437 + 		if (nr_gp_counters)
438 + 			GUEST_ASSERT_EQ(global_ctrl, GENMASK_ULL(nr_gp_counters - 1, 0));
439 + 		else
440 + 			GUEST_ASSERT_EQ(global_ctrl, 0);
441 + 	}
424 442
425 443 	if (this_cpu_has(X86_FEATURE_PDCM) &&
426 444 	    rdmsr(MSR_IA32_PERF_CAPABILITIES) & PMU_CAP_FW_WRITES)
+46 -14
tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c
··· 28 28 #define NESTED_TEST_MEM1		0xc0001000
29 29 #define NESTED_TEST_MEM2		0xc0002000
30 30
31 - static void l2_guest_code(void)
31 + static void l2_guest_code(u64 *a, u64 *b)
32 32 {
33 - 	*(volatile uint64_t *)NESTED_TEST_MEM1;
34 - 	*(volatile uint64_t *)NESTED_TEST_MEM1 = 1;
33 + 	READ_ONCE(*a);
34 + 	WRITE_ONCE(*a, 1);
35 35 	GUEST_SYNC(true);
36 36 	GUEST_SYNC(false);
37 37
38 - 	*(volatile uint64_t *)NESTED_TEST_MEM2 = 1;
38 + 	WRITE_ONCE(*b, 1);
39 39 	GUEST_SYNC(true);
40 - 	*(volatile uint64_t *)NESTED_TEST_MEM2 = 1;
40 + 	WRITE_ONCE(*b, 1);
41 41 	GUEST_SYNC(true);
42 42 	GUEST_SYNC(false);
··· 45 45 	vmcall();
46 46 }
47 47
48 + static void l2_guest_code_ept_enabled(void)
49 + {
50 + 	l2_guest_code((u64 *)NESTED_TEST_MEM1, (u64 *)NESTED_TEST_MEM2);
51 + }
52 +
53 + static void l2_guest_code_ept_disabled(void)
54 + {
55 + 	/* Access the same L1 GPAs as l2_guest_code_ept_enabled() */
56 + 	l2_guest_code((u64 *)GUEST_TEST_MEM, (u64 *)GUEST_TEST_MEM);
57 + }
58 +
48 59 void l1_guest_code(struct vmx_pages *vmx)
49 60 {
50 61 #define L2_GUEST_STACK_SIZE 64
51 62 	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
63 + 	void *l2_rip;
52 64
53 65 	GUEST_ASSERT(vmx->vmcs_gpa);
54 66 	GUEST_ASSERT(prepare_for_vmx_operation(vmx));
55 67 	GUEST_ASSERT(load_vmcs(vmx));
56 68
57 - 	prepare_vmcs(vmx, l2_guest_code,
58 - 		     &l2_guest_stack[L2_GUEST_STACK_SIZE]);
69 + 	if (vmx->eptp_gpa)
70 + 		l2_rip = l2_guest_code_ept_enabled;
71 + 	else
72 + 		l2_rip = l2_guest_code_ept_disabled;
73 +
74 + 	prepare_vmcs(vmx, l2_rip, &l2_guest_stack[L2_GUEST_STACK_SIZE]);
59 75
60 76 	GUEST_SYNC(false);
61 77 	GUEST_ASSERT(!vmlaunch());
··· 80 64 	GUEST_DONE();
81 65 }
82 66
83 - int main(int argc, char *argv[])
67 + static void test_vmx_dirty_log(bool enable_ept)
84 68 {
85 69 	vm_vaddr_t vmx_pages_gva = 0;
86 70 	struct vmx_pages *vmx;
··· 92 76 	struct ucall uc;
93 77 	bool done = false;
94 78
95 - 	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
96 - 	TEST_REQUIRE(kvm_cpu_has_ept());
79 + 	pr_info("Nested EPT: %s\n", enable_ept ? "enabled" : "disabled");
97 80
98 81 	/* Create VM */
99 82 	vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
··· 118 103 	 *
119 104 	 * Note that prepare_eptp should be called only L1's GPA map is done,
120 105 	 * meaning after the last call to virt_map.
106 + 	 *
107 + 	 * When EPT is disabled, the L2 guest code will still access the same L1
108 + 	 * GPAs as the EPT enabled case.
121 109 	 */
122 - 	prepare_eptp(vmx, vm, 0);
123 - 	nested_map_memslot(vmx, vm, 0);
124 - 	nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, 4096);
125 - 	nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, 4096);
110 + 	if (enable_ept) {
111 + 		prepare_eptp(vmx, vm, 0);
112 + 		nested_map_memslot(vmx, vm, 0);
113 + 		nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, 4096);
114 + 		nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, 4096);
115 + 	}
126 116
127 117 	bmap = bitmap_zalloc(TEST_MEM_PAGES);
128 118 	host_test_mem = addr_gpa2hva(vm, GUEST_TEST_MEM);
··· 167 147 			TEST_FAIL("Unknown ucall %lu", uc.cmd);
168 148 		}
169 149 	}
150 + }
151 +
152 + int main(int argc, char *argv[])
153 + {
154 + 	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_VMX));
155 +
156 + 	test_vmx_dirty_log(/*enable_ept=*/false);
157 +
158 + 	if (kvm_cpu_has_ept())
159 + 		test_vmx_dirty_log(/*enable_ept=*/true);
160 +
161 + 	return 0;
170 162 }
+1 -1
tools/testing/selftests/powerpc/papr_vpd/papr_vpd.c
··· 154 154 static int papr_vpd_close_handle_without_reading(void)
155 155 {
156 156 	const int devfd = open(DEVPATH, O_RDONLY);
157 - 	struct papr_location_code lc;
157 + 	struct papr_location_code lc = { .str = "", };
158 158 	int fd;
159 159
160 160 	SKIP_IF_MSG(devfd < 0 && errno == ENOENT,
+1 -2
virt/kvm/kvm_main.c
··· 832 832 	 * mn_active_invalidate_count (see above) instead of
833 833 	 * mmu_invalidate_in_progress.
834 834 	 */
835 - 	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end,
836 - 					  hva_range.may_block);
835 + 	gfn_to_pfn_cache_invalidate_start(kvm, range->start, range->end);
837 836
838 837 	/*
839 838 	 * If one or more memslots were found and thus zapped, notify arch code
+2 -4
virt/kvm/kvm_mm.h
··· 26 26 #ifdef CONFIG_HAVE_KVM_PFNCACHE
27 27 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
28 28 				       unsigned long start,
29 - 				       unsigned long end,
30 - 				       bool may_block);
29 + 				       unsigned long end);
31 30 #else
32 31 static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
33 32 						     unsigned long start,
34 - 						     unsigned long end,
35 - 						     bool may_block)
33 + 						     unsigned long end)
36 34 {
37 35 }
38 36 #endif /* HAVE_KVM_PFNCACHE */
+35 -15
virt/kvm/pfncache.c
··· 23 23  * MMU notifier 'invalidate_range_start' hook.
24 24  */
25 25 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
26 - 				       unsigned long end, bool may_block)
26 + 				       unsigned long end)
27 27 {
28 28 	struct gfn_to_pfn_cache *gpc;
29 29
··· 57 57 	spin_unlock(&kvm->gpc_lock);
58 58 }
59 59
60 + static bool kvm_gpc_is_valid_len(gpa_t gpa, unsigned long uhva,
61 + 				 unsigned long len)
62 + {
63 + 	unsigned long offset = kvm_is_error_gpa(gpa) ? offset_in_page(uhva) :
64 + 						       offset_in_page(gpa);
65 +
66 + 	/*
67 + 	 * The cached access must fit within a single page.  The 'len' argument
68 + 	 * to activate() and refresh() exists only to enforce that.
69 + 	 */
70 + 	return offset + len <= PAGE_SIZE;
71 + }
72 +
60 73 bool kvm_gpc_check(struct gfn_to_pfn_cache *gpc, unsigned long len)
61 74 {
62 75 	struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
··· 87 74 	if (kvm_is_error_hva(gpc->uhva))
88 75 		return false;
89 76
90 - 	if (offset_in_page(gpc->uhva) + len > PAGE_SIZE)
77 + 	if (!kvm_gpc_is_valid_len(gpc->gpa, gpc->uhva, len))
91 78 		return false;
92 79
93 80 	if (!gpc->valid)
··· 245 232 	return -EFAULT;
246 233 }
247 234
248 - static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long uhva,
249 - 			     unsigned long len)
235 + static int __kvm_gpc_refresh(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long uhva)
250 236 {
251 237 	unsigned long page_offset;
252 238 	bool unmap_old = false;
··· 257 245
258 246 	/* Either gpa or uhva must be valid, but not both */
259 247 	if (WARN_ON_ONCE(kvm_is_error_gpa(gpa) == kvm_is_error_hva(uhva)))
260 - 		return -EINVAL;
261 -
262 - 	/*
263 - 	 * The cached acces must fit within a single page.  The 'len' argument
264 - 	 * exists only to enforce that.
265 - 	 */
266 - 	page_offset = kvm_is_error_gpa(gpa) ? offset_in_page(uhva) :
267 - 					      offset_in_page(gpa);
268 - 	if (page_offset + len > PAGE_SIZE)
269 248 		return -EINVAL;
270 249
271 250 	lockdep_assert_held(&gpc->refresh_lock);
··· 273 270 	old_uhva = PAGE_ALIGN_DOWN(gpc->uhva);
274 271
275 272 	if (kvm_is_error_gpa(gpa)) {
273 + 		page_offset = offset_in_page(uhva);
274 +
276 275 		gpc->gpa = INVALID_GPA;
277 276 		gpc->memslot = NULL;
278 277 		gpc->uhva = PAGE_ALIGN_DOWN(uhva);
··· 283 278 			hva_change = true;
284 279 	} else {
285 280 		struct kvm_memslots *slots = kvm_memslots(gpc->kvm);
281 +
282 + 		page_offset = offset_in_page(gpa);
286 283
287 284 		if (gpc->gpa != gpa || gpc->generation != slots->generation ||
288 285 		    kvm_is_error_hva(gpc->uhva)) {
··· 361 354
362 355 	guard(mutex)(&gpc->refresh_lock);
363 356
357 + 	if (!kvm_gpc_is_valid_len(gpc->gpa, gpc->uhva, len))
358 + 		return -EINVAL;
359 +
364 360 	/*
365 361 	 * If the GPA is valid then ignore the HVA, as a cache can be GPA-based
366 362 	 * or HVA-based, not both.  For GPA-based caches, the HVA will be
··· 371 361 	 */
372 362 	uhva = kvm_is_error_gpa(gpc->gpa) ? gpc->uhva : KVM_HVA_ERR_BAD;
373 363
374 - 	return __kvm_gpc_refresh(gpc, gpc->gpa, uhva, len);
364 + 	return __kvm_gpc_refresh(gpc, gpc->gpa, uhva);
375 365 }
376 366
377 367 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm)
··· 390 380 			      unsigned long len)
391 381 {
392 382 	struct kvm *kvm = gpc->kvm;
383 +
384 + 	if (!kvm_gpc_is_valid_len(gpa, uhva, len))
385 + 		return -EINVAL;
393 386
394 387 	guard(mutex)(&gpc->refresh_lock);
395 388
··· 413 400 		gpc->active = true;
414 401 		write_unlock_irq(&gpc->lock);
415 402 	}
416 - 	return __kvm_gpc_refresh(gpc, gpa, uhva, len);
403 + 	return __kvm_gpc_refresh(gpc, gpa, uhva);
417 404 }
418 405
419 406 int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
420 407 {
408 + 	/*
409 + 	 * Explicitly disallow INVALID_GPA so that the magic value can be used
410 + 	 * by KVM to differentiate between GPA-based and HVA-based caches.
411 + 	 */
412 + 	if (WARN_ON_ONCE(kvm_is_error_gpa(gpa)))
413 + 		return -EINVAL;
414 +
421 415 	return __kvm_gpc_activate(gpc, gpa, KVM_HVA_ERR_BAD, len);
422 416 }