Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Three sets of overlapping changes, two in the packet scheduler
and one in the meson-gxl PHY driver.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2947 -3468
+1
Documentation/arm64/silicon-errata.txt
···
75 75  | Qualcomm Tech. | Falkor v1       | E1003           | QCOM_FALKOR_ERRATUM_1003    |
76 76  | Qualcomm Tech. | Falkor v1       | E1009           | QCOM_FALKOR_ERRATUM_1009    |
77 77  | Qualcomm Tech. | QDF2400 ITS     | E0065           | QCOM_QDF2400_ERRATUM_0065   |
78  +  | Qualcomm Tech. | Falkor v{1,2}   | E1041           | QCOM_FALKOR_ERRATUM_1041    |
+7
Documentation/cgroup-v2.txt
···
898 898  normal scheduling policy and absolute bandwidth allocation model for
899 899  realtime scheduling policy.
900 900
901   +  WARNING: cgroup2 doesn't yet support control of realtime processes and
902   +  the cpu controller can only be enabled when all RT processes are in
903   +  the root cgroup. Be aware that system management software may already
904   +  have placed RT processes into nonroot cgroups during the system boot
905   +  process, and these processes may need to be moved to the root cgroup
906   +  before the cpu controller can be enabled.
907   +
901 908
902 909  CPU Interface Files
903 910  ~~~~~~~~~~~~~~~~~~~
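The move described in the added warning can be scripted. A minimal sketch, assuming a cgroup2 hierarchy mounted at /sys/fs/cgroup, root privileges, and procps `ps` (whose `class` field reports FF/RR for SCHED_FIFO/SCHED_RR); this is illustrative administration, not part of the patch:

```
# Move every realtime task back to the root cgroup (requires root).
for pid in $(ps -eo pid=,class= | awk '$2 == "FF" || $2 == "RR" {print $1}'); do
        echo "$pid" > /sys/fs/cgroup/cgroup.procs
done

# Only once no RT task remains in a nonroot cgroup can the cpu
# controller be enabled for child cgroups:
echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control
```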
+2
Documentation/devicetree/bindings/usb/am33xx-usb.txt
···
 95  95      reg = <0x47401300 0x100>;
 96  96      reg-names = "phy";
 97  97      ti,ctrl_mod = <&ctrl_mod>;
 98   +      #phy-cells = <0>;
 98  99  };
 99 100
100 101  usb0: usb@47401000 {
···
142 141      reg = <0x47401b00 0x100>;
143 142      reg-names = "phy";
144 143      ti,ctrl_mod = <&ctrl_mod>;
144   +      #phy-cells = <0>;
145 145  };
146 146
147 147  usb1: usb@47401800 {
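For context, a complete PHY node with the added property might look like the sketch below. The node label, unit address and `compatible` string are taken from the binding's style but are assumptions here, not part of the change; `#phy-cells = <0>` means consumers reference the PHY with a bare phandle (`phys = <&usb0_phy>;`):

```
usb0_phy: usb-phy@47401300 {
	compatible = "ti,am335x-usb-phy";
	reg = <0x47401300 0x100>;
	reg-names = "phy";
	ti,ctrl_mod = <&ctrl_mod>;
	#phy-cells = <0>;	/* phandle alone identifies this PHY */
};
```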
+34
Documentation/filesystems/overlayfs.txt
···
156 156  root of the overlay.  Finally the directory is moved to the new
157 157  location.
158 158
159   +  There are several ways to tune the "redirect_dir" feature.
160   +
161   +  Kernel config options:
162   +
163   +  - OVERLAY_FS_REDIRECT_DIR:
164   +      If this is enabled, then redirect_dir is turned on by default.
165   +  - OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW:
166   +      If this is enabled, then redirects are always followed by default. Enabling
167   +      this results in a less secure configuration.  Enable this option only when
168   +      worried about backward compatibility with kernels that have the redirect_dir
169   +      feature and follow redirects even if turned off.
170   +
171   +  Module options (can also be changed through /sys/module/overlay/parameters/*):
172   +
173   +  - "redirect_dir=BOOL":
174   +      See OVERLAY_FS_REDIRECT_DIR kernel config option above.
175   +  - "redirect_always_follow=BOOL":
176   +      See OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW kernel config option above.
177   +  - "redirect_max=NUM":
178   +      The maximum number of bytes in an absolute redirect (default is 256).
179   +
180   +  Mount options:
181   +
182   +  - "redirect_dir=on":
183   +      Redirects are enabled.
184   +  - "redirect_dir=follow":
185   +      Redirects are not created, but followed.
186   +  - "redirect_dir=off":
187   +      Redirects are not created and only followed if "redirect_always_follow"
188   +      feature is enabled in the kernel/module config.
189   +  - "redirect_dir=nofollow":
190   +      Redirects are not created and not followed (equivalent to "redirect_dir=off"
191   +      if "redirect_always_follow" feature is not enabled).
192   +
159 193  Non-directories
160 194  ---------------
161 195
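As an illustration of the knobs documented above, a module default plus a per-mount override could look like this. This is a sketch: the modprobe.d file name and the /lower, /upper, /work and /merged paths are placeholders:

```
# /etc/modprobe.d/overlay.conf -- hypothetical file setting module defaults
options overlay redirect_dir=Y redirect_max=256

# Per-mount override: follow existing redirects but never create new ones
mount -t overlay overlay \
      -o lowerdir=/lower,upperdir=/upper,workdir=/work,redirect_dir=follow \
      /merged
```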
-874
Documentation/locking/crossrelease.txt
··· 1 - Crossrelease 2 - ============ 3 - 4 - Started by Byungchul Park <byungchul.park@lge.com> 5 - 6 - Contents: 7 - 8 - (*) Background 9 - 10 - - What causes deadlock 11 - - How lockdep works 12 - 13 - (*) Limitation 14 - 15 - - Limit lockdep 16 - - Pros from the limitation 17 - - Cons from the limitation 18 - - Relax the limitation 19 - 20 - (*) Crossrelease 21 - 22 - - Introduce crossrelease 23 - - Introduce commit 24 - 25 - (*) Implementation 26 - 27 - - Data structures 28 - - How crossrelease works 29 - 30 - (*) Optimizations 31 - 32 - - Avoid duplication 33 - - Lockless for hot paths 34 - 35 - (*) APPENDIX A: What lockdep does to work aggresively 36 - 37 - (*) APPENDIX B: How to avoid adding false dependencies 38 - 39 - 40 - ========== 41 - Background 42 - ========== 43 - 44 - What causes deadlock 45 - -------------------- 46 - 47 - A deadlock occurs when a context is waiting for an event to happen, 48 - which is impossible because another (or the) context who can trigger the 49 - event is also waiting for another (or the) event to happen, which is 50 - also impossible due to the same reason. 51 - 52 - For example: 53 - 54 - A context going to trigger event C is waiting for event A to happen. 55 - A context going to trigger event A is waiting for event B to happen. 56 - A context going to trigger event B is waiting for event C to happen. 57 - 58 - A deadlock occurs when these three wait operations run at the same time, 59 - because event C cannot be triggered if event A does not happen, which in 60 - turn cannot be triggered if event B does not happen, which in turn 61 - cannot be triggered if event C does not happen. After all, no event can 62 - be triggered since any of them never meets its condition to wake up. 63 - 64 - A dependency might exist between two waiters and a deadlock might happen 65 - due to an incorrect releationship between dependencies. Thus, we must 66 - define what a dependency is first. A dependency exists between them if: 67 - 68 - 1. 
There are two waiters waiting for each event at a given time. 69 - 2. The only way to wake up each waiter is to trigger its event. 70 - 3. Whether one can be woken up depends on whether the other can. 71 - 72 - Each wait in the example creates its dependency like: 73 - 74 - Event C depends on event A. 75 - Event A depends on event B. 76 - Event B depends on event C. 77 - 78 - NOTE: Precisely speaking, a dependency is one between whether a 79 - waiter for an event can be woken up and whether another waiter for 80 - another event can be woken up. However from now on, we will describe 81 - a dependency as if it's one between an event and another event for 82 - simplicity. 83 - 84 - And they form circular dependencies like: 85 - 86 - -> C -> A -> B - 87 - / \ 88 - \ / 89 - ---------------- 90 - 91 - where 'A -> B' means that event A depends on event B. 92 - 93 - Such circular dependencies lead to a deadlock since no waiter can meet 94 - its condition to wake up as described. 95 - 96 - CONCLUSION 97 - 98 - Circular dependencies cause a deadlock. 99 - 100 - 101 - How lockdep works 102 - ----------------- 103 - 104 - Lockdep tries to detect a deadlock by checking dependencies created by 105 - lock operations, acquire and release. Waiting for a lock corresponds to 106 - waiting for an event, and releasing a lock corresponds to triggering an 107 - event in the previous section. 108 - 109 - In short, lockdep does: 110 - 111 - 1. Detect a new dependency. 112 - 2. Add the dependency into a global graph. 113 - 3. Check if that makes dependencies circular. 114 - 4. Report a deadlock or its possibility if so. 115 - 116 - For example, consider a graph built by lockdep that looks like: 117 - 118 - A -> B - 119 - \ 120 - -> E 121 - / 122 - C -> D - 123 - 124 - where A, B,..., E are different lock classes. 125 - 126 - Lockdep will add a dependency into the graph on detection of a new 127 - dependency. 
For example, it will add a dependency 'E -> C' when a new 128 - dependency between lock E and lock C is detected. Then the graph will be: 129 - 130 - A -> B - 131 - \ 132 - -> E - 133 - / \ 134 - -> C -> D - \ 135 - / / 136 - \ / 137 - ------------------ 138 - 139 - where A, B,..., E are different lock classes. 140 - 141 - This graph contains a subgraph which demonstrates circular dependencies: 142 - 143 - -> E - 144 - / \ 145 - -> C -> D - \ 146 - / / 147 - \ / 148 - ------------------ 149 - 150 - where C, D and E are different lock classes. 151 - 152 - This is the condition under which a deadlock might occur. Lockdep 153 - reports it on detection after adding a new dependency. This is the way 154 - how lockdep works. 155 - 156 - CONCLUSION 157 - 158 - Lockdep detects a deadlock or its possibility by checking if circular 159 - dependencies were created after adding each new dependency. 160 - 161 - 162 - ========== 163 - Limitation 164 - ========== 165 - 166 - Limit lockdep 167 - ------------- 168 - 169 - Limiting lockdep to work on only typical locks e.g. spin locks and 170 - mutexes, which are released within the acquire context, the 171 - implementation becomes simple but its capacity for detection becomes 172 - limited. Let's check pros and cons in next section. 173 - 174 - 175 - Pros from the limitation 176 - ------------------------ 177 - 178 - Given the limitation, when acquiring a lock, locks in a held_locks 179 - cannot be released if the context cannot acquire it so has to wait to 180 - acquire it, which means all waiters for the locks in the held_locks are 181 - stuck. It's an exact case to create dependencies between each lock in 182 - the held_locks and the lock to acquire. 183 - 184 - For example: 185 - 186 - CONTEXT X 187 - --------- 188 - acquire A 189 - acquire B /* Add a dependency 'A -> B' */ 190 - release B 191 - release A 192 - 193 - where A and B are different lock classes. 
194 - 195 - When acquiring lock A, the held_locks of CONTEXT X is empty thus no 196 - dependency is added. But when acquiring lock B, lockdep detects and adds 197 - a new dependency 'A -> B' between lock A in the held_locks and lock B. 198 - They can be simply added whenever acquiring each lock. 199 - 200 - And data required by lockdep exists in a local structure, held_locks 201 - embedded in task_struct. Forcing to access the data within the context, 202 - lockdep can avoid racy problems without explicit locks while handling 203 - the local data. 204 - 205 - Lastly, lockdep only needs to keep locks currently being held, to build 206 - a dependency graph. However, relaxing the limitation, it needs to keep 207 - even locks already released, because a decision whether they created 208 - dependencies might be long-deferred. 209 - 210 - To sum up, we can expect several advantages from the limitation: 211 - 212 - 1. Lockdep can easily identify a dependency when acquiring a lock. 213 - 2. Races are avoidable while accessing local locks in a held_locks. 214 - 3. Lockdep only needs to keep locks currently being held. 215 - 216 - CONCLUSION 217 - 218 - Given the limitation, the implementation becomes simple and efficient. 219 - 220 - 221 - Cons from the limitation 222 - ------------------------ 223 - 224 - Given the limitation, lockdep is applicable only to typical locks. For 225 - example, page locks for page access or completions for synchronization 226 - cannot work with lockdep. 227 - 228 - Can we detect deadlocks below, under the limitation? 229 - 230 - Example 1: 231 - 232 - CONTEXT X CONTEXT Y CONTEXT Z 233 - --------- --------- ---------- 234 - mutex_lock A 235 - lock_page B 236 - lock_page B 237 - mutex_lock A /* DEADLOCK */ 238 - unlock_page B held by X 239 - unlock_page B 240 - mutex_unlock A 241 - mutex_unlock A 242 - 243 - where A and B are different lock classes. 244 - 245 - No, we cannot. 
246 - 247 - Example 2: 248 - 249 - CONTEXT X CONTEXT Y 250 - --------- --------- 251 - mutex_lock A 252 - mutex_lock A 253 - wait_for_complete B /* DEADLOCK */ 254 - complete B 255 - mutex_unlock A 256 - mutex_unlock A 257 - 258 - where A is a lock class and B is a completion variable. 259 - 260 - No, we cannot. 261 - 262 - CONCLUSION 263 - 264 - Given the limitation, lockdep cannot detect a deadlock or its 265 - possibility caused by page locks or completions. 266 - 267 - 268 - Relax the limitation 269 - -------------------- 270 - 271 - Under the limitation, things to create dependencies are limited to 272 - typical locks. However, synchronization primitives like page locks and 273 - completions, which are allowed to be released in any context, also 274 - create dependencies and can cause a deadlock. So lockdep should track 275 - these locks to do a better job. We have to relax the limitation for 276 - these locks to work with lockdep. 277 - 278 - Detecting dependencies is very important for lockdep to work because 279 - adding a dependency means adding an opportunity to check whether it 280 - causes a deadlock. The more lockdep adds dependencies, the more it 281 - thoroughly works. Thus Lockdep has to do its best to detect and add as 282 - many true dependencies into a graph as possible. 283 - 284 - For example, considering only typical locks, lockdep builds a graph like: 285 - 286 - A -> B - 287 - \ 288 - -> E 289 - / 290 - C -> D - 291 - 292 - where A, B,..., E are different lock classes. 293 - 294 - On the other hand, under the relaxation, additional dependencies might 295 - be created and added. Assuming additional 'FX -> C' and 'E -> GX' are 296 - added thanks to the relaxation, the graph will be: 297 - 298 - A -> B - 299 - \ 300 - -> E -> GX 301 - / 302 - FX -> C -> D - 303 - 304 - where A, B,..., E, FX and GX are different lock classes, and a suffix 305 - 'X' is added on non-typical locks. 
306 - 307 - The latter graph gives us more chances to check circular dependencies 308 - than the former. However, it might suffer performance degradation since 309 - relaxing the limitation, with which design and implementation of lockdep 310 - can be efficient, might introduce inefficiency inevitably. So lockdep 311 - should provide two options, strong detection and efficient detection. 312 - 313 - Choosing efficient detection: 314 - 315 - Lockdep works with only locks restricted to be released within the 316 - acquire context. However, lockdep works efficiently. 317 - 318 - Choosing strong detection: 319 - 320 - Lockdep works with all synchronization primitives. However, lockdep 321 - suffers performance degradation. 322 - 323 - CONCLUSION 324 - 325 - Relaxing the limitation, lockdep can add additional dependencies giving 326 - additional opportunities to check circular dependencies. 327 - 328 - 329 - ============ 330 - Crossrelease 331 - ============ 332 - 333 - Introduce crossrelease 334 - ---------------------- 335 - 336 - In order to allow lockdep to handle additional dependencies by what 337 - might be released in any context, namely 'crosslock', we have to be able 338 - to identify those created by crosslocks. The proposed 'crossrelease' 339 - feature provoides a way to do that. 340 - 341 - Crossrelease feature has to do: 342 - 343 - 1. Identify dependencies created by crosslocks. 344 - 2. Add the dependencies into a dependency graph. 345 - 346 - That's all. Once a meaningful dependency is added into graph, then 347 - lockdep would work with the graph as it did. The most important thing 348 - crossrelease feature has to do is to correctly identify and add true 349 - dependencies into the global graph. 350 - 351 - A dependency e.g. 'A -> B' can be identified only in the A's release 352 - context because a decision required to identify the dependency can be 353 - made only in the release context. 
That is to decide whether A can be 354 - released so that a waiter for A can be woken up. It cannot be made in 355 - other than the A's release context. 356 - 357 - It's no matter for typical locks because each acquire context is same as 358 - its release context, thus lockdep can decide whether a lock can be 359 - released in the acquire context. However for crosslocks, lockdep cannot 360 - make the decision in the acquire context but has to wait until the 361 - release context is identified. 362 - 363 - Therefore, deadlocks by crosslocks cannot be detected just when it 364 - happens, because those cannot be identified until the crosslocks are 365 - released. However, deadlock possibilities can be detected and it's very 366 - worth. See 'APPENDIX A' section to check why. 367 - 368 - CONCLUSION 369 - 370 - Using crossrelease feature, lockdep can work with what might be released 371 - in any context, namely crosslock. 372 - 373 - 374 - Introduce commit 375 - ---------------- 376 - 377 - Since crossrelease defers the work adding true dependencies of 378 - crosslocks until they are actually released, crossrelease has to queue 379 - all acquisitions which might create dependencies with the crosslocks. 380 - Then it identifies dependencies using the queued data in batches at a 381 - proper time. We call it 'commit'. 382 - 383 - There are four types of dependencies: 384 - 385 - 1. TT type: 'typical lock A -> typical lock B' 386 - 387 - Just when acquiring B, lockdep can see it's in the A's release 388 - context. So the dependency between A and B can be identified 389 - immediately. Commit is unnecessary. 390 - 391 - 2. TC type: 'typical lock A -> crosslock BX' 392 - 393 - Just when acquiring BX, lockdep can see it's in the A's release 394 - context. So the dependency between A and BX can be identified 395 - immediately. Commit is unnecessary, too. 396 - 397 - 3. 
CT type: 'crosslock AX -> typical lock B' 398 - 399 - When acquiring B, lockdep cannot identify the dependency because 400 - there's no way to know if it's in the AX's release context. It has 401 - to wait until the decision can be made. Commit is necessary. 402 - 403 - 4. CC type: 'crosslock AX -> crosslock BX' 404 - 405 - When acquiring BX, lockdep cannot identify the dependency because 406 - there's no way to know if it's in the AX's release context. It has 407 - to wait until the decision can be made. Commit is necessary. 408 - But, handling CC type is not implemented yet. It's a future work. 409 - 410 - Lockdep can work without commit for typical locks, but commit step is 411 - necessary once crosslocks are involved. Introducing commit, lockdep 412 - performs three steps. What lockdep does in each step is: 413 - 414 - 1. Acquisition: For typical locks, lockdep does what it originally did 415 - and queues the lock so that CT type dependencies can be checked using 416 - it at the commit step. For crosslocks, it saves data which will be 417 - used at the commit step and increases a reference count for it. 418 - 419 - 2. Commit: No action is reauired for typical locks. For crosslocks, 420 - lockdep adds CT type dependencies using the data saved at the 421 - acquisition step. 422 - 423 - 3. Release: No changes are required for typical locks. When a crosslock 424 - is released, it decreases a reference count for it. 425 - 426 - CONCLUSION 427 - 428 - Crossrelease introduces commit step to handle dependencies of crosslocks 429 - in batches at a proper time. 430 - 431 - 432 - ============== 433 - Implementation 434 - ============== 435 - 436 - Data structures 437 - --------------- 438 - 439 - Crossrelease introduces two main data structures. 440 - 441 - 1. hist_lock 442 - 443 - This is an array embedded in task_struct, for keeping lock history so 444 - that dependencies can be added using them at the commit step. 
Since 445 - it's local data, it can be accessed locklessly in the owner context. 446 - The array is filled at the acquisition step and consumed at the 447 - commit step. And it's managed in circular manner. 448 - 449 - 2. cross_lock 450 - 451 - One per lockdep_map exists. This is for keeping data of crosslocks 452 - and used at the commit step. 453 - 454 - 455 - How crossrelease works 456 - ---------------------- 457 - 458 - It's the key of how crossrelease works, to defer necessary works to an 459 - appropriate point in time and perform in at once at the commit step. 460 - Let's take a look with examples step by step, starting from how lockdep 461 - works without crossrelease for typical locks. 462 - 463 - acquire A /* Push A onto held_locks */ 464 - acquire B /* Push B onto held_locks and add 'A -> B' */ 465 - acquire C /* Push C onto held_locks and add 'B -> C' */ 466 - release C /* Pop C from held_locks */ 467 - release B /* Pop B from held_locks */ 468 - release A /* Pop A from held_locks */ 469 - 470 - where A, B and C are different lock classes. 471 - 472 - NOTE: This document assumes that readers already understand how 473 - lockdep works without crossrelease thus omits details. But there's 474 - one thing to note. Lockdep pretends to pop a lock from held_locks 475 - when releasing it. But it's subtly different from the original pop 476 - operation because lockdep allows other than the top to be poped. 477 - 478 - In this case, lockdep adds 'the top of held_locks -> the lock to acquire' 479 - dependency every time acquiring a lock. 480 - 481 - After adding 'A -> B', a dependency graph will be: 482 - 483 - A -> B 484 - 485 - where A and B are different lock classes. 486 - 487 - And after adding 'B -> C', the graph will be: 488 - 489 - A -> B -> C 490 - 491 - where A, B and C are different lock classes. 492 - 493 - Let's performs commit step even for typical locks to add dependencies. 
494 - Of course, commit step is not necessary for them, however, it would work 495 - well because this is a more general way. 496 - 497 - acquire A 498 - /* 499 - * Queue A into hist_locks 500 - * 501 - * In hist_locks: A 502 - * In graph: Empty 503 - */ 504 - 505 - acquire B 506 - /* 507 - * Queue B into hist_locks 508 - * 509 - * In hist_locks: A, B 510 - * In graph: Empty 511 - */ 512 - 513 - acquire C 514 - /* 515 - * Queue C into hist_locks 516 - * 517 - * In hist_locks: A, B, C 518 - * In graph: Empty 519 - */ 520 - 521 - commit C 522 - /* 523 - * Add 'C -> ?' 524 - * Answer the following to decide '?' 525 - * What has been queued since acquire C: Nothing 526 - * 527 - * In hist_locks: A, B, C 528 - * In graph: Empty 529 - */ 530 - 531 - release C 532 - 533 - commit B 534 - /* 535 - * Add 'B -> ?' 536 - * Answer the following to decide '?' 537 - * What has been queued since acquire B: C 538 - * 539 - * In hist_locks: A, B, C 540 - * In graph: 'B -> C' 541 - */ 542 - 543 - release B 544 - 545 - commit A 546 - /* 547 - * Add 'A -> ?' 548 - * Answer the following to decide '?' 549 - * What has been queued since acquire A: B, C 550 - * 551 - * In hist_locks: A, B, C 552 - * In graph: 'B -> C', 'A -> B', 'A -> C' 553 - */ 554 - 555 - release A 556 - 557 - where A, B and C are different lock classes. 558 - 559 - In this case, dependencies are added at the commit step as described. 560 - 561 - After commits for A, B and C, the graph will be: 562 - 563 - A -> B -> C 564 - 565 - where A, B and C are different lock classes. 566 - 567 - NOTE: A dependency 'A -> C' is optimized out. 568 - 569 - We can see the former graph built without commit step is same as the 570 - latter graph built using commit steps. Of course the former way leads to 571 - earlier finish for building the graph, which means we can detect a 572 - deadlock or its possibility sooner. So the former way would be prefered 573 - when possible. But we cannot avoid using the latter way for crosslocks. 
574 - 575 - Let's look at how commit steps work for crosslocks. In this case, the 576 - commit step is performed only on crosslock AX as real. And it assumes 577 - that the AX release context is different from the AX acquire context. 578 - 579 - BX RELEASE CONTEXT BX ACQUIRE CONTEXT 580 - ------------------ ------------------ 581 - acquire A 582 - /* 583 - * Push A onto held_locks 584 - * Queue A into hist_locks 585 - * 586 - * In held_locks: A 587 - * In hist_locks: A 588 - * In graph: Empty 589 - */ 590 - 591 - acquire BX 592 - /* 593 - * Add 'the top of held_locks -> BX' 594 - * 595 - * In held_locks: A 596 - * In hist_locks: A 597 - * In graph: 'A -> BX' 598 - */ 599 - 600 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 601 - It must be guaranteed that the following operations are seen after 602 - acquiring BX globally. It can be done by things like barrier. 603 - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 604 - 605 - acquire C 606 - /* 607 - * Push C onto held_locks 608 - * Queue C into hist_locks 609 - * 610 - * In held_locks: C 611 - * In hist_locks: C 612 - * In graph: 'A -> BX' 613 - */ 614 - 615 - release C 616 - /* 617 - * Pop C from held_locks 618 - * 619 - * In held_locks: Empty 620 - * In hist_locks: C 621 - * In graph: 'A -> BX' 622 - */ 623 - acquire D 624 - /* 625 - * Push D onto held_locks 626 - * Queue D into hist_locks 627 - * Add 'the top of held_locks -> D' 628 - * 629 - * In held_locks: A, D 630 - * In hist_locks: A, D 631 - * In graph: 'A -> BX', 'A -> D' 632 - */ 633 - acquire E 634 - /* 635 - * Push E onto held_locks 636 - * Queue E into hist_locks 637 - * 638 - * In held_locks: E 639 - * In hist_locks: C, E 640 - * In graph: 'A -> BX', 'A -> D' 641 - */ 642 - 643 - release E 644 - /* 645 - * Pop E from held_locks 646 - * 647 - * In held_locks: Empty 648 - * In hist_locks: D, E 649 - * In graph: 'A -> BX', 'A -> D' 650 - */ 651 - release D 652 - /* 653 - * Pop D from held_locks 654 - * 655 
- * In held_locks: A 656 - * In hist_locks: A, D 657 - * In graph: 'A -> BX', 'A -> D' 658 - */ 659 - commit BX 660 - /* 661 - * Add 'BX -> ?' 662 - * What has been queued since acquire BX: C, E 663 - * 664 - * In held_locks: Empty 665 - * In hist_locks: D, E 666 - * In graph: 'A -> BX', 'A -> D', 667 - * 'BX -> C', 'BX -> E' 668 - */ 669 - 670 - release BX 671 - /* 672 - * In held_locks: Empty 673 - * In hist_locks: D, E 674 - * In graph: 'A -> BX', 'A -> D', 675 - * 'BX -> C', 'BX -> E' 676 - */ 677 - release A 678 - /* 679 - * Pop A from held_locks 680 - * 681 - * In held_locks: Empty 682 - * In hist_locks: A, D 683 - * In graph: 'A -> BX', 'A -> D', 684 - * 'BX -> C', 'BX -> E' 685 - */ 686 - 687 - where A, BX, C,..., E are different lock classes, and a suffix 'X' is 688 - added on crosslocks. 689 - 690 - Crossrelease considers all acquisitions after acqiuring BX are 691 - candidates which might create dependencies with BX. True dependencies 692 - will be determined when identifying the release context of BX. Meanwhile, 693 - all typical locks are queued so that they can be used at the commit step. 694 - And then two dependencies 'BX -> C' and 'BX -> E' are added at the 695 - commit step when identifying the release context. 696 - 697 - The final graph will be, with crossrelease: 698 - 699 - -> C 700 - / 701 - -> BX - 702 - / \ 703 - A - -> E 704 - \ 705 - -> D 706 - 707 - where A, BX, C,..., E are different lock classes, and a suffix 'X' is 708 - added on crosslocks. 709 - 710 - However, the final graph will be, without crossrelease: 711 - 712 - A -> D 713 - 714 - where A and D are different lock classes. 715 - 716 - The former graph has three more dependencies, 'A -> BX', 'BX -> C' and 717 - 'BX -> E' giving additional opportunities to check if they cause 718 - deadlocks. This way lockdep can detect a deadlock or its possibility 719 - caused by crosslocks. 720 - 721 - CONCLUSION 722 - 723 - We checked how crossrelease works with several examples. 
724 - 725 - 726 - ============= 727 - Optimizations 728 - ============= 729 - 730 - Avoid duplication 731 - ----------------- 732 - 733 - Crossrelease feature uses a cache like what lockdep already uses for 734 - dependency chains, but this time it's for caching CT type dependencies. 735 - Once that dependency is cached, the same will never be added again. 736 - 737 - 738 - Lockless for hot paths 739 - ---------------------- 740 - 741 - To keep all locks for later use at the commit step, crossrelease adopts 742 - a local array embedded in task_struct, which makes access to the data 743 - lockless by forcing it to happen only within the owner context. It's 744 - like how lockdep handles held_locks. Lockless implmentation is important 745 - since typical locks are very frequently acquired and released. 746 - 747 - 748 - ================================================= 749 - APPENDIX A: What lockdep does to work aggresively 750 - ================================================= 751 - 752 - A deadlock actually occurs when all wait operations creating circular 753 - dependencies run at the same time. Even though they don't, a potential 754 - deadlock exists if the problematic dependencies exist. Thus it's 755 - meaningful to detect not only an actual deadlock but also its potential 756 - possibility. The latter is rather valuable. When a deadlock occurs 757 - actually, we can identify what happens in the system by some means or 758 - other even without lockdep. However, there's no way to detect possiblity 759 - without lockdep unless the whole code is parsed in head. It's terrible. 760 - Lockdep does the both, and crossrelease only focuses on the latter. 761 - 762 - Whether or not a deadlock actually occurs depends on several factors. 763 - For example, what order contexts are switched in is a factor. 
Assuming 764 - circular dependencies exist, a deadlock would occur when contexts are 765 - switched so that all wait operations creating the dependencies run 766 - simultaneously. Thus to detect a deadlock possibility even in the case 767 - that it has not occured yet, lockdep should consider all possible 768 - combinations of dependencies, trying to: 769 - 770 - 1. Use a global dependency graph. 771 - 772 - Lockdep combines all dependencies into one global graph and uses them, 773 - regardless of which context generates them or what order contexts are 774 - switched in. Aggregated dependencies are only considered so they are 775 - prone to be circular if a problem exists. 776 - 777 - 2. Check dependencies between classes instead of instances. 778 - 779 - What actually causes a deadlock are instances of lock. However, 780 - lockdep checks dependencies between classes instead of instances. 781 - This way lockdep can detect a deadlock which has not happened but 782 - might happen in future by others but the same class. 783 - 784 - 3. Assume all acquisitions lead to waiting. 785 - 786 - Although locks might be acquired without waiting which is essential 787 - to create dependencies, lockdep assumes all acquisitions lead to 788 - waiting since it might be true some time or another. 789 - 790 - CONCLUSION 791 - 792 - Lockdep detects not only an actual deadlock but also its possibility, 793 - and the latter is more valuable. 794 - 795 - 796 - ================================================== 797 - APPENDIX B: How to avoid adding false dependencies 798 - ================================================== 799 - 800 - Remind what a dependency is. A dependency exists if: 801 - 802 - 1. There are two waiters waiting for each event at a given time. 803 - 2. The only way to wake up each waiter is to trigger its event. 804 - 3. Whether one can be woken up depends on whether the other can. 
805 - 806 - For example: 807 - 808 - acquire A 809 - acquire B /* A dependency 'A -> B' exists */ 810 - release B 811 - release A 812 - 813 - where A and B are different lock classes. 814 - 815 - A depedency 'A -> B' exists since: 816 - 817 - 1. A waiter for A and a waiter for B might exist when acquiring B. 818 - 2. Only way to wake up each is to release what it waits for. 819 - 3. Whether the waiter for A can be woken up depends on whether the 820 - other can. IOW, TASK X cannot release A if it fails to acquire B. 821 - 822 - For another example: 823 - 824 - TASK X TASK Y 825 - ------ ------ 826 - acquire AX 827 - acquire B /* A dependency 'AX -> B' exists */ 828 - release B 829 - release AX held by Y 830 - 831 - where AX and B are different lock classes, and a suffix 'X' is added 832 - on crosslocks. 833 - 834 - Even in this case involving crosslocks, the same rule can be applied. A 835 - depedency 'AX -> B' exists since: 836 - 837 - 1. A waiter for AX and a waiter for B might exist when acquiring B. 838 - 2. Only way to wake up each is to release what it waits for. 839 - 3. Whether the waiter for AX can be woken up depends on whether the 840 - other can. IOW, TASK X cannot release AX if it fails to acquire B. 841 - 842 - Let's take a look at more complicated example: 843 - 844 - TASK X TASK Y 845 - ------ ------ 846 - acquire B 847 - release B 848 - fork Y 849 - acquire AX 850 - acquire C /* A dependency 'AX -> C' exists */ 851 - release C 852 - release AX held by Y 853 - 854 - where AX, B and C are different lock classes, and a suffix 'X' is 855 - added on crosslocks. 856 - 857 - Does a dependency 'AX -> B' exist? Nope. 858 - 859 - Two waiters are essential to create a dependency. However, waiters for 860 - AX and B to create 'AX -> B' cannot exist at the same time in this 861 - example. Thus the dependency 'AX -> B' cannot be created. 862 - 863 - It would be ideal if the full set of true ones can be considered. 
But 864 - we can ensure nothing but what actually happened. Relying on what 865 - actually happens at runtime, we can anyway add only true ones, though 866 - they might be a subset of true ones. It's similar to how lockdep works 867 - for typical locks. There might be more true dependencies than what 868 - lockdep has detected in runtime. Lockdep has no choice but to rely on 869 - what actually happens. Crossrelease also relies on it. 870 - 871 - CONCLUSION 872 - 873 - Relying on what actually happens, lockdep can avoid adding false 874 - dependencies.
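The removed appendix reasons about lockdep's global dependency graph becoming circular when a possible deadlock exists. As an illustration only (this is not code from the patch, and the helper names are made up), the kind of check it describes can be sketched as a tiny cycle detector over lock classes:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of lockdep's global dependency graph: lock classes are
 * small integers and dep[a][b] records an observed dependency 'a -> b'. */
#define MAX_CLASSES 8

static bool dep[MAX_CLASSES][MAX_CLASSES];

static void add_dependency(int a, int b)
{
	dep[a][b] = true;
}

/* Depth-first search: does a path from 'from' to 'to' already exist? */
static bool reaches(int from, int to, bool *seen)
{
	if (from == to)
		return true;
	seen[from] = true;
	for (int next = 0; next < MAX_CLASSES; next++)
		if (dep[from][next] && !seen[next] && reaches(next, to, seen))
			return true;
	return false;
}

/* Adding 'a -> b' would close a cycle, i.e. a possible deadlock,
 * iff b already reaches a through recorded dependencies. */
static bool would_deadlock(int a, int b)
{
	bool seen[MAX_CLASSES] = { false };

	return reaches(b, a, seen);
}
```

With dependencies 0 -> 1 and 1 -> 2 recorded, attempting 2 -> 0 is flagged even though no deadlock has actually happened, matching the appendix's point that aggregated dependencies are "prone to be circular if a problem exists".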
+12 -3
Documentation/virtual/kvm/api.txt
··· 2901 2901 2902 2902 struct kvm_s390_irq_state { 2903 2903 __u64 buf; 2904 - __u32 flags; 2904 + __u32 flags; /* will stay unused for compatibility reasons */ 2905 2905 __u32 len; 2906 - __u32 reserved[4]; 2906 + __u32 reserved[4]; /* will stay unused for compatibility reasons */ 2907 2907 }; 2908 2908 2909 2909 Userspace passes in the above struct and for each pending interrupt a 2910 2910 struct kvm_s390_irq is copied to the provided buffer. 2911 + 2912 + The structure contains a flags and a reserved field for future extensions. As 2913 + the kernel never checked for flags == 0 and QEMU never pre-zeroed flags and 2914 + reserved, these fields can not be used in the future without breaking 2915 + compatibility. 2911 2916 2912 2917 If -ENOBUFS is returned the buffer provided was too small and userspace 2913 2918 may retry with a bigger buffer. ··· 2937 2932 2938 2933 struct kvm_s390_irq_state { 2939 2934 __u64 buf; 2935 + __u32 flags; /* will stay unused for compatibility reasons */ 2940 2936 __u32 len; 2941 - __u32 pad; 2937 + __u32 reserved[4]; /* will stay unused for compatibility reasons */ 2942 2938 }; 2939 + 2940 + The restrictions for flags and reserved apply as well. 2941 + (see KVM_S390_GET_IRQ_STATE) 2943 2942 2944 2943 The userspace memory referenced by buf contains a struct kvm_s390_irq 2945 2944 for each interrupt to be injected into the guest.
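Since the kernel never enforced flags == 0, the robust pattern for userspace remains zeroing the whole struct before the ioctl, so the unused fields really do stay unused. A hedged sketch (the struct is mirrored locally for illustration; real code gets it from the kvm uapi headers, and the helper name is made up):

```c
#include <stdint.h>
#include <string.h>

/* Local mirror of the uapi layout described above, for illustration only. */
struct kvm_s390_irq_state {
	uint64_t buf;
	uint32_t flags;       /* must stay 0: never checked by the kernel */
	uint32_t len;
	uint32_t reserved[4]; /* must stay 0 for the same reason */
};

/* Prepare the ioctl argument with every non-essential field zeroed. */
static void init_irq_state(struct kvm_s390_irq_state *st,
			   void *buffer, uint32_t buffer_len)
{
	memset(st, 0, sizeof(*st));
	st->buf = (uint64_t)(uintptr_t)buffer;
	st->len = buffer_len;
}
```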
+21 -1
Documentation/vm/zswap.txt
··· 98 98 original compressor. Once all pages are removed from an old zpool, the zpool 99 99 and its compressor are freed. 100 100 101 + Some of the pages in zswap are same-value filled pages (i.e. contents of the 102 + page have same value or repetitive pattern). These pages include zero-filled 103 + pages and they are handled differently. During store operation, a page is 104 + checked if it is a same-value filled page before compressing it. If true, the 105 + compressed length of the page is set to zero and the pattern or same-filled 106 + value is stored. 107 + 108 + Same-value filled pages identification feature is enabled by default and can be 109 + disabled at boot time by setting the "same_filled_pages_enabled" attribute to 0, 110 + e.g. zswap.same_filled_pages_enabled=0. It can also be enabled and disabled at 111 + runtime using the sysfs "same_filled_pages_enabled" attribute, e.g. 112 + 113 + echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled 114 + 115 + When zswap same-filled page identification is disabled at runtime, it will stop 116 + checking for the same-value filled pages during store operation. However, the 117 + existing pages which are marked as same-value filled pages remain stored 118 + unchanged in zswap until they are either loaded or invalidated. 119 + 101 120 A debugfs interface is provided for various statistic about pool size, number 102 - of pages stored, and various counters for the reasons pages are rejected. 121 + of pages stored, same-value filled pages and various counters for the reasons 122 + pages are rejected.
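The same-value check described above amounts to comparing the page word-by-word against its first word and remembering that word as the pattern. A simplified sketch of the idea (not the actual zswap implementation, which also handles storing the pattern in place of compressed data):

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Return true if the page consists of one repeated machine word; on
 * success, report the repeated word through *pattern, as zswap stores
 * the pattern instead of a compressed copy of the page. */
static bool page_same_filled(const void *page, unsigned long *pattern)
{
	const unsigned long *word = page;
	size_t nwords = PAGE_SIZE / sizeof(unsigned long);

	for (size_t i = 1; i < nwords; i++)
		if (word[i] != word[0])
			return false;
	*pattern = word[0];
	return true;
}
```

A zero-filled page is just the special case where the pattern is 0.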
+3 -2
MAINTAINERS
··· 2047 2047 F: arch/arm/include/asm/hardware/cache-uniphier.h 2048 2048 F: arch/arm/mach-uniphier/ 2049 2049 F: arch/arm/mm/cache-uniphier.c 2050 - F: arch/arm64/boot/dts/socionext/ 2050 + F: arch/arm64/boot/dts/socionext/uniphier* 2051 2051 F: drivers/bus/uniphier-system-bus.c 2052 2052 F: drivers/clk/uniphier/ 2053 2053 F: drivers/gpio/gpio-uniphier.c ··· 5435 5435 5436 5436 FCOE SUBSYSTEM (libfc, libfcoe, fcoe) 5437 5437 M: Johannes Thumshirn <jth@kernel.org> 5438 - L: fcoe-devel@open-fcoe.org 5438 + L: linux-scsi@vger.kernel.org 5439 5439 W: www.Open-FCoE.org 5440 5440 S: Supported 5441 5441 F: drivers/scsi/libfc/ ··· 13133 13133 13134 13134 SYNOPSYS DESIGNWARE ENTERPRISE ETHERNET DRIVER 13135 13135 M: Jie Deng <jiedeng@synopsys.com> 13136 + M: Jose Abreu <Jose.Abreu@synopsys.com> 13136 13137 L: netdev@vger.kernel.org 13137 13138 S: Supported 13138 13139 F: drivers/net/ethernet/synopsys/
+1 -1
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 15 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc2 5 + EXTRAVERSION = -rc3 6 6 NAME = Fearless Coyote 7 7 8 8 # *DOCUMENTATION*
+2
arch/arm/boot/dts/am33xx.dtsi
··· 630 630 reg-names = "phy"; 631 631 status = "disabled"; 632 632 ti,ctrl_mod = <&usb_ctrl_mod>; 633 + #phy-cells = <0>; 633 634 }; 634 635 635 636 usb0: usb@47401000 { ··· 679 678 reg-names = "phy"; 680 679 status = "disabled"; 681 680 ti,ctrl_mod = <&usb_ctrl_mod>; 681 + #phy-cells = <0>; 682 682 }; 683 683 684 684 usb1: usb@47401800 {
+4 -2
arch/arm/boot/dts/am4372.dtsi
··· 927 927 reg = <0x48038000 0x2000>, 928 928 <0x46000000 0x400000>; 929 929 reg-names = "mpu", "dat"; 930 - interrupts = <80>, <81>; 930 + interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>, 931 + <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>; 931 932 interrupt-names = "tx", "rx"; 932 933 status = "disabled"; 933 934 dmas = <&edma 8 2>, ··· 942 941 reg = <0x4803C000 0x2000>, 943 942 <0x46400000 0x400000>; 944 943 reg-names = "mpu", "dat"; 945 - interrupts = <82>, <83>; 944 + interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>, 945 + <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; 946 946 interrupt-names = "tx", "rx"; 947 947 status = "disabled"; 948 948 dmas = <&edma 10 2>,
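The hunks above replace bare interrupt numbers with full three-cell GIC specifiers. Per the arm,gic binding, the first cell selects SPI or PPI, the second is the interrupt number within that space, and the third the trigger type; SPIs map to hardware IRQ number + 32 and PPIs to number + 16. A minimal sketch of that mapping (the helper name is made up for illustration):

```c
/* First cell of an arm,gic interrupt specifier. */
#define GIC_SPI 0
#define GIC_PPI 1

/* Hardware IRQ number for a GIC specifier: SPIs start at hwirq 32,
 * PPIs at hwirq 16, per the arm,gic devicetree binding. */
static int gic_hwirq(int type, int number)
{
	return number + (type == GIC_SPI ? 32 : 16);
}
```

So `<GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>` names hardware interrupt 112, which is what the bare `<80>` failed to convey.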
+2 -2
arch/arm/boot/dts/am437x-cm-t43.dts
··· 301 301 status = "okay"; 302 302 pinctrl-names = "default"; 303 303 pinctrl-0 = <&spi0_pins>; 304 - dmas = <&edma 16 305 - &edma 17>; 304 + dmas = <&edma 16 0 305 + &edma 17 0>; 306 306 dma-names = "tx0", "rx0"; 307 307 308 308 flash: w25q64cvzpig@0 {
+1
arch/arm/boot/dts/armada-385-db-ap.dts
··· 236 236 usb3_phy: usb3_phy { 237 237 compatible = "usb-nop-xceiv"; 238 238 vcc-supply = <&reg_xhci0_vbus>; 239 + #phy-cells = <0>; 239 240 }; 240 241 241 242 reg_xhci0_vbus: xhci0-vbus {
+1
arch/arm/boot/dts/armada-385-linksys.dtsi
··· 66 66 usb3_1_phy: usb3_1-phy { 67 67 compatible = "usb-nop-xceiv"; 68 68 vcc-supply = <&usb3_1_vbus>; 69 + #phy-cells = <0>; 69 70 }; 70 71 71 72 usb3_1_vbus: usb3_1-vbus {
+2
arch/arm/boot/dts/armada-385-synology-ds116.dts
··· 191 191 usb3_0_phy: usb3_0_phy { 192 192 compatible = "usb-nop-xceiv"; 193 193 vcc-supply = <&reg_usb3_0_vbus>; 194 + #phy-cells = <0>; 194 195 }; 195 196 196 197 usb3_1_phy: usb3_1_phy { 197 198 compatible = "usb-nop-xceiv"; 198 199 vcc-supply = <&reg_usb3_1_vbus>; 200 + #phy-cells = <0>; 199 201 }; 200 202 201 203 reg_usb3_0_vbus: usb3-vbus0 {
+2
arch/arm/boot/dts/armada-388-gp.dts
··· 276 276 usb2_1_phy: usb2_1_phy { 277 277 compatible = "usb-nop-xceiv"; 278 278 vcc-supply = <&reg_usb2_1_vbus>; 279 + #phy-cells = <0>; 279 280 }; 280 281 281 282 usb3_phy: usb3_phy { 282 283 compatible = "usb-nop-xceiv"; 283 284 vcc-supply = <&reg_usb3_vbus>; 285 + #phy-cells = <0>; 284 286 }; 285 287 286 288 reg_usb3_vbus: usb3-vbus {
+2 -2
arch/arm/boot/dts/bcm-nsp.dtsi
··· 85 85 timer@20200 { 86 86 compatible = "arm,cortex-a9-global-timer"; 87 87 reg = <0x20200 0x100>; 88 - interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>; 88 + interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>; 89 89 clocks = <&periph_clk>; 90 90 }; 91 91 ··· 93 93 compatible = "arm,cortex-a9-twd-timer"; 94 94 reg = <0x20600 0x20>; 95 95 interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | 96 - IRQ_TYPE_LEVEL_HIGH)>; 96 + IRQ_TYPE_EDGE_RISING)>; 97 97 clocks = <&periph_clk>; 98 98 }; 99 99
+1
arch/arm/boot/dts/bcm283x.dtsi
··· 639 639 640 640 usbphy: phy { 641 641 compatible = "usb-nop-xceiv"; 642 + #phy-cells = <0>; 642 643 }; 643 644 };
-4
arch/arm/boot/dts/bcm958623hr.dts
··· 141 141 status = "okay"; 142 142 }; 143 143 144 - &sata { 145 - status = "okay"; 146 - }; 147 - 148 144 &qspi { 149 145 bspi-sel = <0>; 150 146 flash: m25p80@0 {
-4
arch/arm/boot/dts/bcm958625hr.dts
··· 177 177 status = "okay"; 178 178 }; 179 179 180 - &sata { 181 - status = "okay"; 182 - }; 183 - 184 180 &srab { 185 181 compatible = "brcm,bcm58625-srab", "brcm,nsp-srab"; 186 182 status = "okay";
+2
arch/arm/boot/dts/dm814x.dtsi
··· 75 75 reg = <0x47401300 0x100>; 76 76 reg-names = "phy"; 77 77 ti,ctrl_mod = <&usb_ctrl_mod>; 78 + #phy-cells = <0>; 78 79 }; 79 80 80 81 usb0: usb@47401000 { ··· 386 385 reg = <0x1b00 0x100>; 387 386 reg-names = "phy"; 388 387 ti,ctrl_mod = <&usb_ctrl_mod>; 388 + #phy-cells = <0>; 389 389 }; 390 390 }; 391 391
-9
arch/arm/boot/dts/imx53.dtsi
··· 433 433 clock-names = "ipg", "per"; 434 434 }; 435 435 436 - srtc: srtc@53fa4000 { 437 - compatible = "fsl,imx53-rtc", "fsl,imx25-rtc"; 438 - reg = <0x53fa4000 0x4000>; 439 - interrupts = <24>; 440 - interrupt-parent = <&tzic>; 441 - clocks = <&clks IMX5_CLK_SRTC_GATE>; 442 - clock-names = "ipg"; 443 - }; 444 - 445 436 iomuxc: iomuxc@53fa8000 { 446 437 compatible = "fsl,imx53-iomuxc"; 447 438 reg = <0x53fa8000 0x4000>;
+2 -1
arch/arm/boot/dts/logicpd-som-lv-37xx-devkit.dts
··· 72 72 }; 73 73 74 74 &gpmc { 75 - ranges = <1 0 0x08000000 0x1000000>; /* CS1: 16MB for LAN9221 */ 75 + ranges = <0 0 0x30000000 0x1000000 /* CS0: 16MB for NAND */ 76 + 1 0 0x2c000000 0x1000000>; /* CS1: 16MB for LAN9221 */ 76 77 77 78 ethernet@gpmc { 78 79 pinctrl-names = "default";
+11 -6
arch/arm/boot/dts/logicpd-som-lv.dtsi
··· 33 33 hsusb2_phy: hsusb2_phy { 34 34 compatible = "usb-nop-xceiv"; 35 35 reset-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>; /* gpio_4 */ 36 + #phy-cells = <0>; 36 37 }; 37 38 }; 38 39 39 40 &gpmc { 40 - ranges = <0 0 0x00000000 0x1000000>; /* CS0: 16MB for NAND */ 41 + ranges = <0 0 0x30000000 0x1000000>; /* CS0: 16MB for NAND */ 41 42 42 43 nand@0,0 { 43 44 compatible = "ti,omap2-nand"; ··· 122 121 123 122 &mmc3 { 124 123 interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>; 125 - pinctrl-0 = <&mmc3_pins>; 124 + pinctrl-0 = <&mmc3_pins &wl127x_gpio>; 126 125 pinctrl-names = "default"; 127 126 vmmc-supply = <&wl12xx_vmmc>; 128 127 non-removable; ··· 133 132 wlcore: wlcore@2 { 134 133 compatible = "ti,wl1273"; 135 134 reg = <2>; 136 - interrupt-parent = <&gpio5>; 137 - interrupts = <24 IRQ_TYPE_LEVEL_HIGH>; /* gpio 152 */ 135 + interrupt-parent = <&gpio1>; 136 + interrupts = <2 IRQ_TYPE_LEVEL_HIGH>; /* gpio 2 */ 138 137 ref-clock-frequency = <26000000>; 139 138 }; 140 139 }; ··· 158 157 OMAP3_CORE1_IOPAD(0x2166, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat5.sdmmc3_dat1 */ 159 158 OMAP3_CORE1_IOPAD(0x2168, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat6.sdmmc3_dat2 */ 160 159 OMAP3_CORE1_IOPAD(0x216a, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat6.sdmmc3_dat3 */ 161 - OMAP3_CORE1_IOPAD(0x2184, PIN_INPUT_PULLUP | MUX_MODE4) /* mcbsp4_clkx.gpio_152 */ 162 - OMAP3_CORE1_IOPAD(0x2a0c, PIN_OUTPUT | MUX_MODE4) /* sys_boot1.gpio_3 */ 163 160 OMAP3_CORE1_IOPAD(0x21d0, PIN_INPUT_PULLUP | MUX_MODE3) /* mcspi1_cs1.sdmmc3_cmd */ 164 161 OMAP3_CORE1_IOPAD(0x21d2, PIN_INPUT_PULLUP | MUX_MODE3) /* mcspi1_cs2.sdmmc_clk */ 165 162 >; ··· 225 226 hsusb2_reset_pin: pinmux_hsusb1_reset_pin { 226 227 pinctrl-single,pins = < 227 228 OMAP3_WKUP_IOPAD(0x2a0e, PIN_OUTPUT | MUX_MODE4) /* sys_boot2.gpio_4 */ 229 + >; 230 + }; 231 + wl127x_gpio: pinmux_wl127x_gpio_pin { 232 + pinctrl-single,pins = < 233 + OMAP3_WKUP_IOPAD(0x2a0c, PIN_INPUT | MUX_MODE4) /* sys_boot0.gpio_2 */ 234 + 
OMAP3_WKUP_IOPAD(0x2a0c, PIN_OUTPUT | MUX_MODE4) /* sys_boot1.gpio_3 */ 228 235 >; 229 236 }; 230 237 };
+9 -9
arch/arm/boot/dts/meson.dtsi
··· 85 85 reg = <0x7c00 0x200>; 86 86 }; 87 87 88 - gpio_intc: interrupt-controller@9880 { 89 - compatible = "amlogic,meson-gpio-intc"; 90 - reg = <0xc1109880 0x10>; 91 - interrupt-controller; 92 - #interrupt-cells = <2>; 93 - amlogic,channel-interrupts = <64 65 66 67 68 69 70 71>; 94 - status = "disabled"; 95 - }; 96 - 97 88 hwrng: rng@8100 { 98 89 compatible = "amlogic,meson-rng"; 99 90 reg = <0x8100 0x8>; ··· 179 188 reg = <0x8c80 0x80>; 180 189 #address-cells = <1>; 181 190 #size-cells = <0>; 191 + status = "disabled"; 192 + }; 193 + 194 + gpio_intc: interrupt-controller@9880 { 195 + compatible = "amlogic,meson-gpio-intc"; 196 + reg = <0x9880 0x10>; 197 + interrupt-controller; 198 + #interrupt-cells = <2>; 199 + amlogic,channel-interrupts = <64 65 66 67 68 69 70 71>; 182 200 status = "disabled"; 183 201 }; 184 202
+1
arch/arm/boot/dts/nspire.dtsi
··· 56 56 57 57 usb_phy: usb_phy { 58 58 compatible = "usb-nop-xceiv"; 59 + #phy-cells = <0>; 59 60 }; 60 61 61 62 vbus_reg: vbus_reg {
+1
arch/arm/boot/dts/omap3-beagle-xm.dts
··· 90 90 compatible = "usb-nop-xceiv"; 91 91 reset-gpios = <&gpio5 19 GPIO_ACTIVE_LOW>; /* gpio_147 */ 92 92 vcc-supply = <&hsusb2_power>; 93 + #phy-cells = <0>; 93 94 }; 94 95 95 96 tfp410: encoder0 {
+1
arch/arm/boot/dts/omap3-beagle.dts
··· 64 64 compatible = "usb-nop-xceiv"; 65 65 reset-gpios = <&gpio5 19 GPIO_ACTIVE_LOW>; /* gpio_147 */ 66 66 vcc-supply = <&hsusb2_power>; 67 + #phy-cells = <0>; 67 68 }; 68 69 69 70 sound {
+2
arch/arm/boot/dts/omap3-cm-t3x.dtsi
··· 43 43 hsusb1_phy: hsusb1_phy { 44 44 compatible = "usb-nop-xceiv"; 45 45 vcc-supply = <&hsusb1_power>; 46 + #phy-cells = <0>; 46 47 }; 47 48 48 49 /* HS USB Host PHY on PORT 2 */ 49 50 hsusb2_phy: hsusb2_phy { 50 51 compatible = "usb-nop-xceiv"; 51 52 vcc-supply = <&hsusb2_power>; 53 + #phy-cells = <0>; 52 54 }; 53 55 54 56 ads7846reg: ads7846-reg {
+1
arch/arm/boot/dts/omap3-evm-common.dtsi
··· 29 29 compatible = "usb-nop-xceiv"; 30 30 reset-gpios = <&gpio1 21 GPIO_ACTIVE_LOW>; /* gpio_21 */ 31 31 vcc-supply = <&hsusb2_power>; 32 + #phy-cells = <0>; 32 33 }; 33 34 34 35 leds {
+1
arch/arm/boot/dts/omap3-gta04.dtsi
··· 120 120 hsusb2_phy: hsusb2_phy { 121 121 compatible = "usb-nop-xceiv"; 122 122 reset-gpios = <&gpio6 14 GPIO_ACTIVE_LOW>; 123 + #phy-cells = <0>; 123 124 }; 124 125 125 126 tv0: connector {
+1
arch/arm/boot/dts/omap3-igep0020-common.dtsi
··· 58 58 compatible = "usb-nop-xceiv"; 59 59 reset-gpios = <&gpio1 24 GPIO_ACTIVE_LOW>; /* gpio_24 */ 60 60 vcc-supply = <&hsusb1_power>; 61 + #phy-cells = <0>; 61 62 }; 62 63 63 64 tfp410: encoder {
+1
arch/arm/boot/dts/omap3-igep0030-common.dtsi
··· 37 37 hsusb2_phy: hsusb2_phy { 38 38 compatible = "usb-nop-xceiv"; 39 39 reset-gpios = <&gpio2 22 GPIO_ACTIVE_LOW>; /* gpio_54 */ 40 + #phy-cells = <0>; 40 41 }; 41 42 }; 42 43
+1
arch/arm/boot/dts/omap3-lilly-a83x.dtsi
··· 51 51 hsusb1_phy: hsusb1_phy { 52 52 compatible = "usb-nop-xceiv"; 53 53 vcc-supply = <&reg_vcc3>; 54 + #phy-cells = <0>; 54 55 }; 55 56 }; 56 57
+1
arch/arm/boot/dts/omap3-overo-base.dtsi
··· 51 51 compatible = "usb-nop-xceiv"; 52 52 reset-gpios = <&gpio6 23 GPIO_ACTIVE_LOW>; /* gpio_183 */ 53 53 vcc-supply = <&hsusb2_power>; 54 + #phy-cells = <0>; 54 55 }; 55 56 56 57 /* Regulator to trigger the nPoweron signal of the Wifi module */
+1
arch/arm/boot/dts/omap3-pandora-common.dtsi
··· 205 205 compatible = "usb-nop-xceiv"; 206 206 reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>; /* GPIO_16 */ 207 207 vcc-supply = <&vaux2>; 208 + #phy-cells = <0>; 208 209 }; 209 210 210 211 /* HS USB Host VBUS supply
+1
arch/arm/boot/dts/omap3-tao3530.dtsi
··· 46 46 compatible = "usb-nop-xceiv"; 47 47 reset-gpios = <&gpio6 2 GPIO_ACTIVE_LOW>; /* gpio_162 */ 48 48 vcc-supply = <&hsusb2_power>; 49 + #phy-cells = <0>; 49 50 }; 50 51 51 52 sound {
+1
arch/arm/boot/dts/omap3.dtsi
··· 715 715 compatible = "ti,ohci-omap3"; 716 716 reg = <0x48064400 0x400>; 717 717 interrupts = <76>; 718 + remote-wakeup-connected; 718 719 }; 719 720 720 721 usbhsehci: ehci@48064800 {
+1
arch/arm/boot/dts/omap4-droid4-xt894.dts
··· 73 73 /* HS USB Host PHY on PORT 1 */ 74 74 hsusb1_phy: hsusb1_phy { 75 75 compatible = "usb-nop-xceiv"; 76 + #phy-cells = <0>; 76 77 }; 77 78 78 79 /* LCD regulator from sw5 source */
+1
arch/arm/boot/dts/omap4-duovero.dtsi
··· 43 43 hsusb1_phy: hsusb1_phy { 44 44 compatible = "usb-nop-xceiv"; 45 45 reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */ 46 + #phy-cells = <0>; 46 47 47 48 pinctrl-names = "default"; 48 49 pinctrl-0 = <&hsusb1phy_pins>;
+1
arch/arm/boot/dts/omap4-panda-common.dtsi
··· 89 89 hsusb1_phy: hsusb1_phy { 90 90 compatible = "usb-nop-xceiv"; 91 91 reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */ 92 + #phy-cells = <0>; 92 93 vcc-supply = <&hsusb1_power>; 93 94 clocks = <&auxclk3_ck>; 94 95 clock-names = "main_clk";
+1
arch/arm/boot/dts/omap4-var-som-om44.dtsi
··· 44 44 45 45 reset-gpios = <&gpio6 17 GPIO_ACTIVE_LOW>; /* gpio 177 */ 46 46 vcc-supply = <&vbat>; 47 + #phy-cells = <0>; 47 48 48 49 clocks = <&auxclk3_ck>; 49 50 clock-names = "main_clk";
+2 -3
arch/arm/boot/dts/omap4.dtsi
··· 398 398 elm: elm@48078000 { 399 399 compatible = "ti,am3352-elm"; 400 400 reg = <0x48078000 0x2000>; 401 - interrupts = <4>; 401 + interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>; 402 402 ti,hwmods = "elm"; 403 403 status = "disabled"; 404 404 }; ··· 1081 1081 usbhsohci: ohci@4a064800 { 1082 1082 compatible = "ti,ohci-omap3"; 1083 1083 reg = <0x4a064800 0x400>; 1084 - interrupt-parent = <&gic>; 1085 1084 interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>; 1085 + remote-wakeup-connected; 1086 1086 }; 1087 1087 1088 1088 usbhsehci: ehci@4a064c00 { 1089 1089 compatible = "ti,ehci-omap"; 1090 1090 reg = <0x4a064c00 0x400>; 1091 - interrupt-parent = <&gic>; 1092 1091 interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>; 1093 1092 }; 1094 1093 };
+2
arch/arm/boot/dts/omap5-board-common.dtsi
··· 73 73 clocks = <&auxclk1_ck>; 74 74 clock-names = "main_clk"; 75 75 clock-frequency = <19200000>; 76 + #phy-cells = <0>; 76 77 }; 77 78 78 79 /* HS USB Host PHY on PORT 3 */ 79 80 hsusb3_phy: hsusb3_phy { 80 81 compatible = "usb-nop-xceiv"; 81 82 reset-gpios = <&gpio3 15 GPIO_ACTIVE_LOW>; /* gpio3_79 ETH_NRESET */ 83 + #phy-cells = <0>; 82 84 }; 83 85 84 86 tpd12s015: encoder {
+2
arch/arm/boot/dts/omap5-cm-t54.dts
··· 63 63 hsusb2_phy: hsusb2_phy { 64 64 compatible = "usb-nop-xceiv"; 65 65 reset-gpios = <&gpio3 12 GPIO_ACTIVE_LOW>; /* gpio3_76 HUB_RESET */ 66 + #phy-cells = <0>; 66 67 }; 67 68 68 69 /* HS USB Host PHY on PORT 3 */ 69 70 hsusb3_phy: hsusb3_phy { 70 71 compatible = "usb-nop-xceiv"; 71 72 reset-gpios = <&gpio3 19 GPIO_ACTIVE_LOW>; /* gpio3_83 ETH_RESET */ 73 + #phy-cells = <0>; 72 74 }; 73 75 74 76 leds {
+1
arch/arm/boot/dts/omap5.dtsi
··· 940 940 compatible = "ti,ohci-omap3"; 941 941 reg = <0x4a064800 0x400>; 942 942 interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>; 943 + remote-wakeup-connected; 943 944 }; 944 945 945 946 usbhsehci: ehci@4a064c00 {
+1
arch/arm/boot/dts/r8a7790.dtsi
··· 1201 1201 clock-names = "extal", "usb_extal"; 1202 1202 #clock-cells = <2>; 1203 1203 #power-domain-cells = <0>; 1204 + #reset-cells = <1>; 1204 1205 }; 1205 1206 1206 1207 prr: chipid@ff000044 {
+1
arch/arm/boot/dts/r8a7792.dtsi
··· 829 829 clock-names = "extal"; 830 830 #clock-cells = <2>; 831 831 #power-domain-cells = <0>; 832 + #reset-cells = <1>; 832 833 }; 833 834 }; 834 835
+1
arch/arm/boot/dts/r8a7793.dtsi
··· 1088 1088 clock-names = "extal", "usb_extal"; 1089 1089 #clock-cells = <2>; 1090 1090 #power-domain-cells = <0>; 1091 + #reset-cells = <1>; 1091 1092 }; 1092 1093 1093 1094 rst: reset-controller@e6160000 {
+1
arch/arm/boot/dts/r8a7794.dtsi
··· 1099 1099 clock-names = "extal", "usb_extal"; 1100 1100 #clock-cells = <2>; 1101 1101 #power-domain-cells = <0>; 1102 + #reset-cells = <1>; 1102 1103 }; 1103 1104 1104 1105 rst: reset-controller@e6160000 {
+3 -3
arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
··· 121 121 switch0port10: port@10 { 122 122 reg = <10>; 123 123 label = "dsa"; 124 - phy-mode = "xgmii"; 124 + phy-mode = "xaui"; 125 125 link = <&switch1port10>; 126 126 }; 127 127 }; ··· 208 208 switch1port10: port@10 { 209 209 reg = <10>; 210 210 label = "dsa"; 211 - phy-mode = "xgmii"; 211 + phy-mode = "xaui"; 212 212 link = <&switch0port10>; 213 213 }; 214 214 }; ··· 359 359 }; 360 360 361 361 &i2c1 { 362 - at24mac602@0 { 362 + at24mac602@50 { 363 363 compatible = "atmel,24c02"; 364 364 reg = <0x50>; 365 365 read-only;
+1 -2
arch/arm/include/asm/kvm_arm.h
··· 161 161 #else 162 162 #define VTTBR_X (5 - KVM_T0SZ) 163 163 #endif 164 - #define VTTBR_BADDR_SHIFT (VTTBR_X - 1) 165 - #define VTTBR_BADDR_MASK (((_AC(1, ULL) << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT) 164 + #define VTTBR_BADDR_MASK (((_AC(1, ULL) << (40 - VTTBR_X)) - 1) << VTTBR_X) 166 165 #define VTTBR_VMID_SHIFT _AC(48, ULL) 167 166 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT) 168 167
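The dropped VTTBR_BADDR_SHIFT was defined as VTTBR_X - 1, so the mask was shifted one bit short of the VTTBR_X alignment the architecture requires for the base address. The arithmetic can be checked in isolation (40-bit PA and VTTBR_X = 5 as in the KVM_T0SZ = 0 case above; the _OLD/_NEW names are ours):

```c
#include <stdint.h>

#define VTTBR_X 5

/* Old (buggy) mask: shifted by VTTBR_X - 1. */
#define VTTBR_BADDR_MASK_OLD \
	((((uint64_t)1 << (40 - VTTBR_X)) - 1) << (VTTBR_X - 1))

/* New mask: shifted by VTTBR_X, i.e. bits [39:VTTBR_X]. */
#define VTTBR_BADDR_MASK_NEW \
	((((uint64_t)1 << (40 - VTTBR_X)) - 1) << VTTBR_X)
```

The old mask kept bit 4 set and dropped bit 39, so it neither enforced the alignment nor covered the full base-address field.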
+5
arch/arm/include/asm/kvm_host.h
··· 285 285 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {} 286 286 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {} 287 287 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {} 288 + static inline bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, 289 + struct kvm_run *run) 290 + { 291 + return false; 292 + } 288 293 289 294 int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu, 290 295 struct kvm_device_attr *attr);
+1 -1
arch/arm/mach-meson/platsmp.c
··· 102 102 103 103 scu_base = of_iomap(node, 0); 104 104 if (!scu_base) { 105 - pr_err("Couln't map SCU registers\n"); 105 + pr_err("Couldn't map SCU registers\n"); 106 106 return; 107 107 } 108 108
+5 -1
arch/arm/mach-omap2/cm_common.c
··· 68 68 int cm_split_idlest_reg(struct clk_omap_reg *idlest_reg, s16 *prcm_inst, 69 69 u8 *idlest_reg_id) 70 70 { 71 + int ret; 71 72 if (!cm_ll_data->split_idlest_reg) { 72 73 WARN_ONCE(1, "cm: %s: no low-level function defined\n", 73 74 __func__); 74 75 return -EINVAL; 75 76 } 76 77 77 - return cm_ll_data->split_idlest_reg(idlest_reg, prcm_inst, 78 + ret = cm_ll_data->split_idlest_reg(idlest_reg, prcm_inst, 78 79 idlest_reg_id); 80 + *prcm_inst -= cm_base.offset; 81 + return ret; 79 82 } 80 83 81 84 /** ··· 340 337 if (mem) { 341 338 mem->pa = res.start + data->offset; 342 339 mem->va = data->mem + data->offset; 340 + mem->offset = data->offset; 343 341 } 344 342 345 343 data->np = np;
+21
arch/arm/mach-omap2/omap-secure.c
··· 73 73 return omap_secure_memblock_base; 74 74 } 75 75 76 + #if defined(CONFIG_ARCH_OMAP3) && defined(CONFIG_PM) 77 + u32 omap3_save_secure_ram(void __iomem *addr, int size) 78 + { 79 + u32 ret; 80 + u32 param[5]; 81 + 82 + if (size != OMAP3_SAVE_SECURE_RAM_SZ) 83 + return OMAP3_SAVE_SECURE_RAM_SZ; 84 + 85 + param[0] = 4; /* Number of arguments */ 86 + param[1] = __pa(addr); /* Physical address for saving */ 87 + param[2] = 0; 88 + param[3] = 1; 89 + param[4] = 1; 90 + 91 + ret = save_secure_ram_context(__pa(param)); 92 + 93 + return ret; 94 + } 95 + #endif 96 + 76 97 /** 77 98 * rx51_secure_dispatcher: Routine to dispatch secure PPA API calls 78 99 * @idx: The PPA API index
+4
arch/arm/mach-omap2/omap-secure.h
··· 31 31 /* Maximum Secure memory storage size */ 32 32 #define OMAP_SECURE_RAM_STORAGE (88 * SZ_1K) 33 33 34 + #define OMAP3_SAVE_SECURE_RAM_SZ 0x803F 35 + 34 36 /* Secure low power HAL API index */ 35 37 #define OMAP4_HAL_SAVESECURERAM_INDEX 0x1a 36 38 #define OMAP4_HAL_SAVEHW_INDEX 0x1b ··· 67 65 extern u32 omap_smc3(u32 id, u32 process, u32 flag, u32 pargs); 68 66 extern phys_addr_t omap_secure_ram_mempool_base(void); 69 67 extern int omap_secure_ram_reserve_memblock(void); 68 + extern u32 save_secure_ram_context(u32 args_pa); 69 + extern u32 omap3_save_secure_ram(void __iomem *save_regs, int size); 70 70 71 71 extern u32 rx51_secure_dispatcher(u32 idx, u32 process, u32 flag, u32 nargs, 72 72 u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+5 -5
arch/arm/mach-omap2/omap_device.c
··· 391 391 const char *name; 392 392 int error, irq = 0; 393 393 394 - if (!oh || !oh->od || !oh->od->pdev) { 395 - error = -EINVAL; 396 - goto error; 397 - } 394 + if (!oh || !oh->od || !oh->od->pdev) 395 + return -EINVAL; 398 396 399 397 np = oh->od->pdev->dev.of_node; 400 398 if (!np) { ··· 514 516 goto odbs_exit1; 515 517 516 518 od = omap_device_alloc(pdev, &oh, 1); 517 - if (IS_ERR(od)) 519 + if (IS_ERR(od)) { 520 + ret = PTR_ERR(od); 518 521 goto odbs_exit1; 522 + } 519 523 520 524 ret = platform_device_add_data(pdev, pdata, pdata_len); 521 525 if (ret)
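The second hunk fixes a path where omap_device_alloc() failed but ret was never set from the returned error pointer. The kernel convention that makes the added `ret = PTR_ERR(od)` necessary encodes small negative errno values in the pointer itself; a simplified userspace model of the <linux/err.h> helpers (for illustration, not the kernel's exact definitions):

```c
#include <stdint.h>

/* Errno values are small, so the top MAX_ERRNO addresses of the
 * pointer range can double as encoded error codes. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Without the added assignment, the function would branch to cleanup with whatever value ret happened to hold, instead of the errno encoded in the pointer.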
+1
arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
··· 1646 1646 .main_clk = "mmchs3_fck", 1647 1647 .prcm = { 1648 1648 .omap2 = { 1649 + .module_offs = CORE_MOD, 1649 1650 .prcm_reg_id = 1, 1650 1651 .module_bit = OMAP3430_EN_MMC3_SHIFT, 1651 1652 .idlest_reg_id = 1,
-4
arch/arm/mach-omap2/pm.h
··· 81 81 /* ... and its pointer from SRAM after copy */ 82 82 extern void (*omap3_do_wfi_sram)(void); 83 83 84 - /* save_secure_ram_context function pointer and size, for copy to SRAM */ 85 - extern int save_secure_ram_context(u32 *addr); 86 - extern unsigned int save_secure_ram_context_sz; 87 - 88 84 extern void omap3_save_scratchpad_contents(void); 89 85 90 86 #define PM_RTA_ERRATUM_i608 (1 << 0)
+4 -9
arch/arm/mach-omap2/pm34xx.c
··· 48 48 #include "prm3xxx.h" 49 49 #include "pm.h" 50 50 #include "sdrc.h" 51 + #include "omap-secure.h" 51 52 #include "sram.h" 52 53 #include "control.h" 53 54 #include "vc.h" ··· 67 66 68 67 static LIST_HEAD(pwrst_list); 69 68 70 - static int (*_omap_save_secure_sram)(u32 *addr); 71 69 void (*omap3_do_wfi_sram)(void); 72 70 73 71 static struct powerdomain *mpu_pwrdm, *neon_pwrdm; ··· 121 121 * will hang the system. 122 122 */ 123 123 pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON); 124 - ret = _omap_save_secure_sram((u32 *)(unsigned long) 125 - __pa(omap3_secure_ram_storage)); 124 + ret = omap3_save_secure_ram(omap3_secure_ram_storage, 125 + OMAP3_SAVE_SECURE_RAM_SZ); 126 126 pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state); 127 127 /* Following is for error tracking, it should not happen */ 128 128 if (ret) { ··· 434 434 * 435 435 * The minimum set of functions is pushed to SRAM for execution: 436 436 * - omap3_do_wfi for erratum i581 WA, 437 - * - save_secure_ram_context for security extensions. 438 437 */ 439 438 void omap_push_sram_idle(void) 440 439 { 441 440 omap3_do_wfi_sram = omap_sram_push(omap3_do_wfi, omap3_do_wfi_sz); 442 - 443 - if (omap_type() != OMAP2_DEVICE_TYPE_GP) 444 - _omap_save_secure_sram = omap_sram_push(save_secure_ram_context, 445 - save_secure_ram_context_sz); 446 441 } 447 442 448 443 static void __init pm_errata_configure(void) ··· 548 553 clkdm_add_wkdep(neon_clkdm, mpu_clkdm); 549 554 if (omap_type() != OMAP2_DEVICE_TYPE_GP) { 550 555 omap3_secure_ram_storage = 551 - kmalloc(0x803F, GFP_KERNEL); 556 + kmalloc(OMAP3_SAVE_SECURE_RAM_SZ, GFP_KERNEL); 552 557 if (!omap3_secure_ram_storage) 553 558 pr_err("Memory allocation failed when allocating for secure sram context\n"); 554 559
+1
arch/arm/mach-omap2/prcm-common.h
··· 528 528 struct omap_domain_base { 529 529 u32 pa; 530 530 void __iomem *va; 531 + s16 offset; 531 532 }; 532 533 533 534 /**
-12
arch/arm/mach-omap2/prm33xx.c
··· 176 176 return v; 177 177 } 178 178 179 - static int am33xx_pwrdm_read_prev_pwrst(struct powerdomain *pwrdm) 180 - { 181 - u32 v; 182 - 183 - v = am33xx_prm_read_reg(pwrdm->prcm_offs, pwrdm->pwrstst_offs); 184 - v &= AM33XX_LASTPOWERSTATEENTERED_MASK; 185 - v >>= AM33XX_LASTPOWERSTATEENTERED_SHIFT; 186 - 187 - return v; 188 - } 189 - 190 179 static int am33xx_pwrdm_set_lowpwrstchange(struct powerdomain *pwrdm) 191 180 { 192 181 am33xx_prm_rmw_reg_bits(AM33XX_LOWPOWERSTATECHANGE_MASK, ··· 346 357 .pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst, 347 358 .pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst, 348 359 .pwrdm_read_pwrst = am33xx_pwrdm_read_pwrst, 349 - .pwrdm_read_prev_pwrst = am33xx_pwrdm_read_prev_pwrst, 350 360 .pwrdm_set_logic_retst = am33xx_pwrdm_set_logic_retst, 351 361 .pwrdm_read_logic_pwrst = am33xx_pwrdm_read_logic_pwrst, 352 362 .pwrdm_read_logic_retst = am33xx_pwrdm_read_logic_retst,
+4 -22
arch/arm/mach-omap2/sleep34xx.S
··· 93 93 ENDPROC(enable_omap3630_toggle_l2_on_restore) 94 94 95 95 /* 96 - * Function to call rom code to save secure ram context. This gets 97 - * relocated to SRAM, so it can be all in .data section. Otherwise 98 - * we need to initialize api_params separately. 96 + * Function to call rom code to save secure ram context. 97 + * 98 + * r0 = physical address of the parameters 99 99 */ 100 - .data 101 - .align 3 102 100 ENTRY(save_secure_ram_context) 103 101 stmfd sp!, {r4 - r11, lr} @ save registers on stack 104 - adr r3, api_params @ r3 points to parameters 105 - str r0, [r3,#0x4] @ r0 has sdram address 106 - ldr r12, high_mask 107 - and r3, r3, r12 108 - ldr r12, sram_phy_addr_mask 109 - orr r3, r3, r12 102 + mov r3, r0 @ physical address of parameters 110 103 mov r0, #25 @ set service ID for PPA 111 104 mov r12, r0 @ copy secure service ID in r12 112 105 mov r1, #0 @ set task id for ROM code in r1 ··· 113 120 nop 114 121 nop 115 122 ldmfd sp!, {r4 - r11, pc} 116 - .align 117 - sram_phy_addr_mask: 118 - .word SRAM_BASE_P 119 - high_mask: 120 - .word 0xffff 121 - api_params: 122 - .word 0x4, 0x0, 0x0, 0x1, 0x1 123 123 ENDPROC(save_secure_ram_context) 124 - ENTRY(save_secure_ram_context_sz) 125 - .word . - save_secure_ram_context 126 - 127 - .text 128 124 129 125 /* 130 126 * ======================
+11 -1
arch/arm64/Kconfig
··· 557 557 558 558 If unsure, say Y. 559 559 560 - 561 560 config SOCIONEXT_SYNQUACER_PREITS 562 561 bool "Socionext Synquacer: Workaround for GICv3 pre-ITS" 563 562 default y ··· 575 576 a 128kB offset to be applied to the target address in this commands. 576 577 577 578 If unsure, say Y. 579 + 580 + config QCOM_FALKOR_ERRATUM_E1041 581 + bool "Falkor E1041: Speculative instruction fetches might cause errant memory access" 582 + default y 583 + help 584 + Falkor CPU may speculatively fetch instructions from an improper 585 + memory location when MMU translation is changed from SCTLR_ELn[M]=1 586 + to SCTLR_ELn[M]=0. Prefix an ISB instruction to fix the problem. 587 + 588 + If unsure, say Y. 589 + 578 590 endmenu 579 591 580 592
+1 -1
arch/arm64/boot/dts/Makefile
··· 12 12 subdir-y += exynos 13 13 subdir-y += freescale 14 14 subdir-y += hisilicon 15 + subdir-y += lg 15 16 subdir-y += marvell 16 17 subdir-y += mediatek 17 18 subdir-y += nvidia ··· 23 22 subdir-y += socionext 24 23 subdir-y += sprd 25 24 subdir-y += xilinx 26 - subdir-y += lg 27 25 subdir-y += zte
+2 -2
arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
··· 753 753 754 754 &uart_B { 755 755 clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>; 756 - clock-names = "xtal", "core", "baud"; 756 + clock-names = "xtal", "pclk", "baud"; 757 757 }; 758 758 759 759 &uart_C { 760 760 clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>; 761 - clock-names = "xtal", "core", "baud"; 761 + clock-names = "xtal", "pclk", "baud"; 762 762 }; 763 763 764 764 &vpu {
+3 -3
arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
··· 688 688 689 689 &uart_A { 690 690 clocks = <&xtal>, <&clkc CLKID_UART0>, <&xtal>; 691 - clock-names = "xtal", "core", "baud"; 691 + clock-names = "xtal", "pclk", "baud"; 692 692 }; 693 693 694 694 &uart_AO { ··· 703 703 704 704 &uart_B { 705 705 clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>; 706 - clock-names = "xtal", "core", "baud"; 706 + clock-names = "xtal", "pclk", "baud"; 707 707 }; 708 708 709 709 &uart_C { 710 710 clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>; 711 - clock-names = "xtal", "core", "baud"; 711 + clock-names = "xtal", "pclk", "baud"; 712 712 }; 713 713 714 714 &vpu {
-1
arch/arm64/boot/dts/socionext/uniphier-ld11-ref.dts
··· 40 40 }; 41 41 42 42 &ethsc { 43 - interrupt-parent = <&gpio>; 44 43 interrupts = <0 8>; 45 44 }; 46 45
-1
arch/arm64/boot/dts/socionext/uniphier-ld20-ref.dts
··· 40 40 }; 41 41 42 42 &ethsc { 43 - interrupt-parent = <&gpio>; 44 43 interrupts = <0 8>; 45 44 }; 46 45
+1 -2
arch/arm64/boot/dts/socionext/uniphier-pxs3-ref.dts
··· 38 38 }; 39 39 40 40 &ethsc { 41 - interrupt-parent = <&gpio>; 42 - interrupts = <0 8>; 41 + interrupts = <4 8>; 43 42 }; 44 43 45 44 &serial0 {
+10
arch/arm64/include/asm/assembler.h
··· 512 512 #endif 513 513 .endm 514 514 515 + /** 516 + * Errata workaround prior to disable MMU. Insert an ISB immediately prior 517 + * to executing the MSR that will change SCTLR_ELn[M] from a value of 1 to 0. 518 + */ 519 + .macro pre_disable_mmu_workaround 520 + #ifdef CONFIG_QCOM_FALKOR_ERRATUM_E1041 521 + isb 522 + #endif 523 + .endm 524 + 515 525 #endif /* __ASM_ASSEMBLER_H */
+3
arch/arm64/include/asm/cpufeature.h
··· 60 60 #define FTR_VISIBLE true /* Feature visible to the user space */ 61 61 #define FTR_HIDDEN false /* Feature is hidden from the user */ 62 62 63 + #define FTR_VISIBLE_IF_IS_ENABLED(config) \ 64 + (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN) 65 + 63 66 struct arm64_ftr_bits { 64 67 bool sign; /* Value is signed ? */ 65 68 bool visible;
+2
arch/arm64/include/asm/cputype.h
··· 91 91 #define BRCM_CPU_PART_VULCAN 0x516 92 92 93 93 #define QCOM_CPU_PART_FALKOR_V1 0x800 94 + #define QCOM_CPU_PART_FALKOR 0xC00 94 95 95 96 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) 96 97 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) ··· 100 99 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX) 101 100 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX) 102 101 #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1) 102 + #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR) 103 103 104 104 #ifndef __ASSEMBLY__ 105 105
+1 -2
arch/arm64/include/asm/kvm_arm.h
··· 170 170 #define VTCR_EL2_FLAGS (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN_FLAGS) 171 171 #define VTTBR_X (VTTBR_X_TGRAN_MAGIC - VTCR_EL2_T0SZ_IPA) 172 172 173 - #define VTTBR_BADDR_SHIFT (VTTBR_X - 1) 174 - #define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT) 173 + #define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_X) 175 174 #define VTTBR_VMID_SHIFT (UL(48)) 176 175 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT) 177 176
+1
arch/arm64/include/asm/kvm_host.h
··· 370 370 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu); 371 371 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu); 372 372 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu); 373 + bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run); 373 374 int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu, 374 375 struct kvm_device_attr *attr); 375 376 int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
+22 -19
arch/arm64/include/asm/pgtable.h
··· 42 42 #include <asm/cmpxchg.h> 43 43 #include <asm/fixmap.h> 44 44 #include <linux/mmdebug.h> 45 + #include <linux/mm_types.h> 46 + #include <linux/sched.h> 45 47 46 48 extern void __pte_error(const char *file, int line, unsigned long val); 47 49 extern void __pmd_error(const char *file, int line, unsigned long val); ··· 151 149 152 150 static inline pte_t pte_mkclean(pte_t pte) 153 151 { 154 - return clear_pte_bit(pte, __pgprot(PTE_DIRTY)); 152 + pte = clear_pte_bit(pte, __pgprot(PTE_DIRTY)); 153 + pte = set_pte_bit(pte, __pgprot(PTE_RDONLY)); 154 + 155 + return pte; 155 156 } 156 157 157 158 static inline pte_t pte_mkdirty(pte_t pte) 158 159 { 159 - return set_pte_bit(pte, __pgprot(PTE_DIRTY)); 160 + pte = set_pte_bit(pte, __pgprot(PTE_DIRTY)); 161 + 162 + if (pte_write(pte)) 163 + pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY)); 164 + 165 + return pte; 160 166 } 161 167 162 168 static inline pte_t pte_mkold(pte_t pte) ··· 217 207 } 218 208 } 219 209 220 - struct mm_struct; 221 - struct vm_area_struct; 222 - 223 210 extern void __sync_icache_dcache(pte_t pteval, unsigned long addr); 224 211 225 212 /* ··· 245 238 * hardware updates of the pte (ptep_set_access_flags safely changes 246 239 * valid ptes without going through an invalid entry). 247 240 */ 248 - if (pte_valid(*ptep) && pte_valid(pte)) { 241 + if (IS_ENABLED(CONFIG_DEBUG_VM) && pte_valid(*ptep) && pte_valid(pte) && 242 + (mm == current->active_mm || atomic_read(&mm->mm_users) > 1)) { 249 243 VM_WARN_ONCE(!pte_young(pte), 250 244 "%s: racy access flag clearing: 0x%016llx -> 0x%016llx", 251 245 __func__, pte_val(*ptep), pte_val(pte)); ··· 649 641 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 650 642 651 643 /* 652 - * ptep_set_wrprotect - mark read-only while preserving the hardware update of 653 - * the Access Flag. 644 + * ptep_set_wrprotect - mark read-only while transferring potential hardware 645 + * dirty status (PTE_DBM && !PTE_RDONLY) to the software PTE_DIRTY bit. 
654 646 */ 655 647 #define __HAVE_ARCH_PTEP_SET_WRPROTECT 656 648 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep) 657 649 { 658 650 pte_t old_pte, pte; 659 651 660 - /* 661 - * ptep_set_wrprotect() is only called on CoW mappings which are 662 - * private (!VM_SHARED) with the pte either read-only (!PTE_WRITE && 663 - * PTE_RDONLY) or writable and software-dirty (PTE_WRITE && 664 - * !PTE_RDONLY && PTE_DIRTY); see is_cow_mapping() and 665 - * protection_map[]. There is no race with the hardware update of the 666 - * dirty state: clearing of PTE_RDONLY when PTE_WRITE (a.k.a. PTE_DBM) 667 - * is set. 668 - */ 669 - VM_WARN_ONCE(pte_write(*ptep) && !pte_dirty(*ptep), 670 - "%s: potential race with hardware DBM", __func__); 671 652 pte = READ_ONCE(*ptep); 672 653 do { 673 654 old_pte = pte; 655 + /* 656 + * If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY 657 + * clear), set the PTE_DIRTY bit. 658 + */ 659 + if (pte_hw_dirty(pte)) 660 + pte = pte_mkdirty(pte); 674 661 pte = pte_wrprotect(pte); 675 662 pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep), 676 663 pte_val(old_pte), pte_val(pte));
+1
arch/arm64/kernel/cpu-reset.S
··· 37 37 mrs x12, sctlr_el1 38 38 ldr x13, =SCTLR_ELx_FLAGS 39 39 bic x12, x12, x13 40 + pre_disable_mmu_workaround 40 41 msr sctlr_el1, x12 41 42 isb 42 43
+2 -1
arch/arm64/kernel/cpufeature.c
··· 145 145 }; 146 146 147 147 static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = { 148 - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0), 148 + ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE), 149 + FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0), 149 150 ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0), 150 151 S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI), 151 152 S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+2
arch/arm64/kernel/efi-entry.S
··· 96 96 mrs x0, sctlr_el2 97 97 bic x0, x0, #1 << 0 // clear SCTLR.M 98 98 bic x0, x0, #1 << 2 // clear SCTLR.C 99 + pre_disable_mmu_workaround 99 100 msr sctlr_el2, x0 100 101 isb 101 102 b 2f ··· 104 103 mrs x0, sctlr_el1 105 104 bic x0, x0, #1 << 0 // clear SCTLR.M 106 105 bic x0, x0, #1 << 2 // clear SCTLR.C 106 + pre_disable_mmu_workaround 107 107 msr sctlr_el1, x0 108 108 isb 109 109 2:
+1 -1
arch/arm64/kernel/fpsimd.c
··· 1043 1043 1044 1044 local_bh_disable(); 1045 1045 1046 - current->thread.fpsimd_state = *state; 1046 + current->thread.fpsimd_state.user_fpsimd = state->user_fpsimd; 1047 1047 if (system_supports_sve() && test_thread_flag(TIF_SVE)) 1048 1048 fpsimd_to_sve(current); 1049 1049
+1
arch/arm64/kernel/head.S
··· 750 750 * to take into account by discarding the current kernel mapping and 751 751 * creating a new one. 752 752 */ 753 + pre_disable_mmu_workaround 753 754 msr sctlr_el1, x20 // disable the MMU 754 755 isb 755 756 bl __create_page_tables // recreate kernel mapping
+1 -1
arch/arm64/kernel/hw_breakpoint.c
··· 28 28 #include <linux/perf_event.h> 29 29 #include <linux/ptrace.h> 30 30 #include <linux/smp.h> 31 + #include <linux/uaccess.h> 31 32 32 33 #include <asm/compat.h> 33 34 #include <asm/current.h> ··· 37 36 #include <asm/traps.h> 38 37 #include <asm/cputype.h> 39 38 #include <asm/system_misc.h> 40 - #include <asm/uaccess.h> 41 39 42 40 /* Breakpoint currently in use for each BRP. */ 43 41 static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[ARM_MAX_BRP]);
+1
arch/arm64/kernel/relocate_kernel.S
··· 45 45 mrs x0, sctlr_el2 46 46 ldr x1, =SCTLR_ELx_FLAGS 47 47 bic x0, x0, x1 48 + pre_disable_mmu_workaround 48 49 msr sctlr_el2, x0 49 50 isb 50 51 1:
+21
arch/arm64/kvm/debug.c
··· 221 221 } 222 222 } 223 223 } 224 + 225 + 226 + /* 227 + * After successfully emulating an instruction, we might want to 228 + * return to user space with a KVM_EXIT_DEBUG. We can only do this 229 + * once the emulation is complete, though, so for userspace emulations 230 + * we have to wait until we have re-entered KVM before calling this 231 + * helper. 232 + * 233 + * Return true (and set exit_reason) to return to userspace or false 234 + * if no further action is required. 235 + */ 236 + bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run) 237 + { 238 + if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { 239 + run->exit_reason = KVM_EXIT_DEBUG; 240 + run->debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT; 241 + return true; 242 + } 243 + return false; 244 + }
+42 -15
arch/arm64/kvm/handle_exit.c
··· 28 28 #include <asm/kvm_emulate.h> 29 29 #include <asm/kvm_mmu.h> 30 30 #include <asm/kvm_psci.h> 31 + #include <asm/debug-monitors.h> 31 32 32 33 #define CREATE_TRACE_POINTS 33 34 #include "trace.h" ··· 188 187 } 189 188 190 189 /* 190 + * We may be single-stepping an emulated instruction. If the emulation 191 + * has been completed in the kernel, we can return to userspace with a 192 + * KVM_EXIT_DEBUG, otherwise userspace needs to complete its 193 + * emulation first. 194 + */ 195 + static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run) 196 + { 197 + int handled; 198 + 199 + /* 200 + * See ARM ARM B1.14.1: "Hyp traps on instructions 201 + * that fail their condition code check" 202 + */ 203 + if (!kvm_condition_valid(vcpu)) { 204 + kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); 205 + handled = 1; 206 + } else { 207 + exit_handle_fn exit_handler; 208 + 209 + exit_handler = kvm_get_exit_handler(vcpu); 210 + handled = exit_handler(vcpu, run); 211 + } 212 + 213 + /* 214 + * kvm_arm_handle_step_debug() sets the exit_reason on the kvm_run 215 + * structure if we need to return to userspace. 216 + */ 217 + if (handled > 0 && kvm_arm_handle_step_debug(vcpu, run)) 218 + handled = 0; 219 + 220 + return handled; 221 + } 222 + 223 + /* 191 224 * Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on 192 225 * proper exit to userspace. 
193 226 */ 194 227 int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run, 195 228 int exception_index) 196 229 { 197 - exit_handle_fn exit_handler; 198 - 199 230 if (ARM_SERROR_PENDING(exception_index)) { 200 231 u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu)); 201 232 ··· 253 220 return 1; 254 221 case ARM_EXCEPTION_EL1_SERROR: 255 222 kvm_inject_vabt(vcpu); 256 - return 1; 257 - case ARM_EXCEPTION_TRAP: 258 - /* 259 - * See ARM ARM B1.14.1: "Hyp traps on instructions 260 - * that fail their condition code check" 261 - */ 262 - if (!kvm_condition_valid(vcpu)) { 263 - kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); 223 + /* We may still need to return for single-step */ 224 + if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS) 225 + && kvm_arm_handle_step_debug(vcpu, run)) 226 + return 0; 227 + else 264 228 return 1; 265 - } 266 - 267 - exit_handler = kvm_get_exit_handler(vcpu); 268 - 269 - return exit_handler(vcpu, run); 229 + case ARM_EXCEPTION_TRAP: 230 + return handle_trap_exceptions(vcpu, run); 270 231 case ARM_EXCEPTION_HYP_GONE: 271 232 /* 272 233 * EL2 has been reset to the hyp-stub. This happens when a guest
+1
arch/arm64/kvm/hyp-init.S
··· 151 151 mrs x5, sctlr_el2 152 152 ldr x6, =SCTLR_ELx_FLAGS 153 153 bic x5, x5, x6 // Clear SCTL_M and etc 154 + pre_disable_mmu_workaround 154 155 msr sctlr_el2, x5 155 156 isb 156 157
+30 -7
arch/arm64/kvm/hyp/switch.c
··· 22 22 #include <asm/kvm_emulate.h> 23 23 #include <asm/kvm_hyp.h> 24 24 #include <asm/fpsimd.h> 25 + #include <asm/debug-monitors.h> 25 26 26 27 static bool __hyp_text __fpsimd_enabled_nvhe(void) 27 28 { ··· 270 269 return true; 271 270 } 272 271 273 - static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu) 272 + /* Skip an instruction which has been emulated. Returns true if 273 + * execution can continue or false if we need to exit hyp mode because 274 + * single-step was in effect. 275 + */ 276 + static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu) 274 277 { 275 278 *vcpu_pc(vcpu) = read_sysreg_el2(elr); 276 279 ··· 287 282 } 288 283 289 284 write_sysreg_el2(*vcpu_pc(vcpu), elr); 285 + 286 + if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) { 287 + vcpu->arch.fault.esr_el2 = 288 + (ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT) | 0x22; 289 + return false; 290 + } else { 291 + return true; 292 + } 290 293 } 291 294 292 295 int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu) ··· 355 342 int ret = __vgic_v2_perform_cpuif_access(vcpu); 356 343 357 344 if (ret == 1) { 358 - __skip_instr(vcpu); 359 - goto again; 345 + if (__skip_instr(vcpu)) 346 + goto again; 347 + else 348 + exit_code = ARM_EXCEPTION_TRAP; 360 349 } 361 350 362 351 if (ret == -1) { 363 - /* Promote an illegal access to an SError */ 364 - __skip_instr(vcpu); 352 + /* Promote an illegal access to an 353 + * SError. If we would be returning 354 + * due to single-step clear the SS 355 + * bit so handle_exit knows what to 356 + * do after dealing with the error. 
357 + */ 358 + if (!__skip_instr(vcpu)) 359 + *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; 365 360 exit_code = ARM_EXCEPTION_EL1_SERROR; 366 361 } 367 362 ··· 384 363 int ret = __vgic_v3_perform_cpuif_access(vcpu); 385 364 386 365 if (ret == 1) { 387 - __skip_instr(vcpu); 388 - goto again; 366 + if (__skip_instr(vcpu)) 367 + goto again; 368 + else 369 + exit_code = ARM_EXCEPTION_TRAP; 389 370 } 390 371 391 372 /* 0 falls through to be handled out of EL2 */
+1 -1
arch/arm64/mm/dump.c
··· 389 389 .check_wx = true, 390 390 }; 391 391 392 - walk_pgd(&st, &init_mm, 0); 392 + walk_pgd(&st, &init_mm, VA_START); 393 393 note_page(&st, 0, 0, 0); 394 394 if (st.wx_pages || st.uxn_pages) 395 395 pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
+2 -3
arch/arm64/mm/fault.c
··· 574 574 { 575 575 struct siginfo info; 576 576 const struct fault_info *inf; 577 - int ret = 0; 578 577 579 578 inf = esr_to_fault_info(esr); 580 579 pr_err("Synchronous External Abort: %s (0x%08x) at 0x%016lx\n", ··· 588 589 if (interrupts_enabled(regs)) 589 590 nmi_enter(); 590 591 591 - ret = ghes_notify_sea(); 592 + ghes_notify_sea(); 592 593 593 594 if (interrupts_enabled(regs)) 594 595 nmi_exit(); ··· 603 604 info.si_addr = (void __user *)addr; 604 605 arm64_notify_die("", regs, &info, esr); 605 606 606 - return ret; 607 + return 0; 607 608 } 608 609 609 610 static const struct fault_info fault_info[] = {
+2 -1
arch/arm64/mm/init.c
··· 476 476 477 477 reserve_elfcorehdr(); 478 478 479 + high_memory = __va(memblock_end_of_DRAM() - 1) + 1; 480 + 479 481 dma_contiguous_reserve(arm64_dma_phys_limit); 480 482 481 483 memblock_allow_resize(); ··· 504 502 sparse_init(); 505 503 zone_sizes_init(min, max); 506 504 507 - high_memory = __va((max << PAGE_SHIFT) - 1) + 1; 508 505 memblock_dump_all(); 509 506 } 510 507
+19
arch/riscv/include/asm/barrier.h
··· 38 38 #define smp_rmb() RISCV_FENCE(r,r) 39 39 #define smp_wmb() RISCV_FENCE(w,w) 40 40 41 + /* 42 + * This is a very specific barrier: it's currently only used in two places in 43 + * the kernel, both in the scheduler. See include/linux/spinlock.h for the two 44 + * orderings it guarantees, but the "critical section is RCsc" guarantee 45 + * mandates a barrier on RISC-V. The sequence looks like: 46 + * 47 + * lr.aq lock 48 + * sc lock <= LOCKED 49 + * smp_mb__after_spinlock() 50 + * // critical section 51 + * lr lock 52 + * sc.rl lock <= UNLOCKED 53 + * 54 + * The AQ/RL pair provides a RCpc critical section, but there's not really any 55 + * way we can take advantage of that here because the ordering is only enforced 56 + * on that one lock. Thus, we're just doing a full fence. 57 + */ 58 + #define smp_mb__after_spinlock() RISCV_FENCE(rw,rw) 59 + 41 60 #include <asm-generic/barrier.h> 42 61 43 62 #endif /* __ASSEMBLY__ */
-11
arch/riscv/kernel/setup.c
··· 38 38 #include <asm/tlbflush.h> 39 39 #include <asm/thread_info.h> 40 40 41 - #ifdef CONFIG_HVC_RISCV_SBI 42 - #include <asm/hvc_riscv_sbi.h> 43 - #endif 44 - 45 41 #ifdef CONFIG_DUMMY_CONSOLE 46 42 struct screen_info screen_info = { 47 43 .orig_video_lines = 30, ··· 208 212 209 213 void __init setup_arch(char **cmdline_p) 210 214 { 211 - #if defined(CONFIG_HVC_RISCV_SBI) 212 - if (likely(early_console == NULL)) { 213 - early_console = &riscv_sbi_early_console_dev; 214 - register_console(early_console); 215 - } 216 - #endif 217 - 218 215 #ifdef CONFIG_CMDLINE_BOOL 219 216 #ifdef CONFIG_CMDLINE_OVERRIDE 220 217 strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+1 -1
arch/riscv/kernel/sys_riscv.c
··· 70 70 bool local = (flags & SYS_RISCV_FLUSH_ICACHE_LOCAL) != 0; 71 71 72 72 /* Check the reserved flags. */ 73 - if (unlikely(flags & !SYS_RISCV_FLUSH_ICACHE_ALL)) 73 + if (unlikely(flags & ~SYS_RISCV_FLUSH_ICACHE_ALL)) 74 74 return -EINVAL; 75 75 76 76 flush_icache_mm(mm, local);
-6
arch/s390/include/asm/pgtable.h
··· 1264 1264 return pud; 1265 1265 } 1266 1266 1267 - #define pud_write pud_write 1268 - static inline int pud_write(pud_t pud) 1269 - { 1270 - return (pud_val(pud) & _REGION3_ENTRY_WRITE) != 0; 1271 - } 1272 - 1273 1267 static inline pud_t pud_mkclean(pud_t pud) 1274 1268 { 1275 1269 if (pud_large(pud)) {
+1
arch/s390/kernel/compat_linux.c
··· 263 263 return retval; 264 264 } 265 265 266 + groups_sort(group_info); 266 267 retval = set_current_groups(group_info); 267 268 put_group_info(group_info); 268 269
+1 -4
arch/s390/kvm/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 1 2 # Makefile for kernel virtual machines on s390 2 3 # 3 4 # Copyright IBM Corp. 2008 4 - # 5 - # This program is free software; you can redistribute it and/or modify 6 - # it under the terms of the GNU General Public License (version 2 only) 7 - # as published by the Free Software Foundation. 8 5 9 6 KVM := ../../../virt/kvm 10 7 common-objs = $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/async_pf.o $(KVM)/irqchip.o $(KVM)/vfio.o
+1 -4
arch/s390/kvm/diag.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * handling diagnose instructions 3 4 * 4 5 * Copyright IBM Corp. 2008, 2011 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com>
+1 -4
arch/s390/kvm/gaccess.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * access guest memory 3 4 * 4 5 * Copyright IBM Corp. 2008, 2014 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 */
+1 -4
arch/s390/kvm/guestdbg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * kvm guest debug support 3 4 * 4 5 * Copyright IBM Corp. 2014 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com> 11 8 */
+1 -4
arch/s390/kvm/intercept.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * in-kernel handling for sie intercepts 3 4 * 4 5 * Copyright IBM Corp. 2008, 2014 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com>
+1 -4
arch/s390/kvm/interrupt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * handling kvm guest interrupts 3 4 * 4 5 * Copyright IBM Corp. 2008, 2015 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 */
+1 -4
arch/s390/kvm/irq.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * s390 irqchip routines 3 4 * 4 5 * Copyright IBM Corp. 2014 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Cornelia Huck <cornelia.huck@de.ibm.com> 11 8 */
+5 -6
arch/s390/kvm/kvm-s390.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 - * hosting zSeries kernel virtual machines 3 + * hosting IBM Z kernel virtual machines (s390x) 3 4 * 4 - * Copyright IBM Corp. 2008, 2009 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 5 + * Copyright IBM Corp. 2008, 2017 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com> ··· 3805 3808 r = -EINVAL; 3806 3809 break; 3807 3810 } 3811 + /* do not use irq_state.flags, it will break old QEMUs */ 3808 3812 r = kvm_s390_set_irq_state(vcpu, 3809 3813 (void __user *) irq_state.buf, 3810 3814 irq_state.len); ··· 3821 3823 r = -EINVAL; 3822 3824 break; 3823 3825 } 3826 + /* do not use irq_state.flags, it will break old QEMUs */ 3824 3827 r = kvm_s390_get_irq_state(vcpu, 3825 3828 (__u8 __user *) irq_state.buf, 3826 3829 irq_state.len);
+1 -4
arch/s390/kvm/kvm-s390.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * definition for kvm on s390 3 4 * 4 5 * Copyright IBM Corp. 2008, 2009 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com>
+10 -6
arch/s390/kvm/priv.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * handling privileged instructions 3 4 * 4 5 * Copyright IBM Corp. 2008, 2013 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com> ··· 232 235 VCPU_EVENT(vcpu, 4, "%s", "retrying storage key operation"); 233 236 return -EAGAIN; 234 237 } 235 - if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) 236 - return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); 237 238 return 0; 238 239 } 239 240 ··· 241 246 unsigned char key; 242 247 int reg1, reg2; 243 248 int rc; 249 + 250 + if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) 251 + return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); 244 252 245 253 rc = try_handle_skey(vcpu); 246 254 if (rc) ··· 273 275 unsigned long addr; 274 276 int reg1, reg2; 275 277 int rc; 278 + 279 + if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) 280 + return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); 276 281 277 282 rc = try_handle_skey(vcpu); 278 283 if (rc) ··· 311 310 unsigned char key, oldkey; 312 311 int reg1, reg2; 313 312 int rc; 313 + 314 + if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE) 315 + return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP); 314 316 315 317 rc = try_handle_skey(vcpu); 316 318 if (rc)
+1 -4
arch/s390/kvm/sigp.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * handling interprocessor communication 3 4 * 4 5 * Copyright IBM Corp. 2008, 2013 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): Carsten Otte <cotte@de.ibm.com> 11 8 * Christian Borntraeger <borntraeger@de.ibm.com>
+1 -4
arch/s390/kvm/vsie.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * kvm nested virtualization support for s390x 3 4 * 4 5 * Copyright IBM Corp. 2016 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License (version 2 only) 8 - * as published by the Free Software Foundation. 9 6 * 10 7 * Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com> 11 8 */
+2 -2
arch/sparc/mm/gup.c
··· 75 75 if (!(pmd_val(pmd) & _PAGE_VALID)) 76 76 return 0; 77 77 78 - if (!pmd_access_permitted(pmd, write)) 78 + if (write && !pmd_write(pmd)) 79 79 return 0; 80 80 81 81 refs = 0; ··· 114 114 if (!(pud_val(pud) & _PAGE_VALID)) 115 115 return 0; 116 116 117 - if (!pud_access_permitted(pud, write)) 117 + if (write && !pud_write(pud)) 118 118 return 0; 119 119 120 120 refs = 0;
+1
arch/um/include/asm/Kbuild
··· 1 1 generic-y += barrier.h 2 + generic-y += bpf_perf_event.h 2 3 generic-y += bug.h 3 4 generic-y += clkdev.h 4 5 generic-y += current.h
+1
arch/x86/Kconfig.debug
··· 400 400 config UNWINDER_GUESS 401 401 bool "Guess unwinder" 402 402 depends on EXPERT 403 + depends on !STACKDEPOT 403 404 ---help--- 404 405 This option enables the "guess" unwinder for unwinding kernel stack 405 406 traces. It scans the stack and reports every kernel text address it
+1
arch/x86/boot/compressed/Makefile
··· 80 80 ifdef CONFIG_X86_64 81 81 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/pagetable.o 82 82 vmlinux-objs-y += $(obj)/mem_encrypt.o 83 + vmlinux-objs-y += $(obj)/pgtable_64.o 83 84 endif 84 85 85 86 $(obj)/eboot.o: KBUILD_CFLAGS += -fshort-wchar -mno-red-zone
+12 -4
arch/x86/boot/compressed/head_64.S
··· 305 305 leaq boot_stack_end(%rbx), %rsp 306 306 307 307 #ifdef CONFIG_X86_5LEVEL 308 - /* Check if 5-level paging has already enabled */ 309 - movq %cr4, %rax 310 - testl $X86_CR4_LA57, %eax 311 - jnz lvl5 308 + /* 309 + * Check if we need to enable 5-level paging. 310 + * RSI holds real mode data and need to be preserved across 311 + * a function call. 312 + */ 313 + pushq %rsi 314 + call l5_paging_required 315 + popq %rsi 316 + 317 + /* If l5_paging_required() returned zero, we're done here. */ 318 + cmpq $0, %rax 319 + je lvl5 312 320 313 321 /* 314 322 * At this point we are in long mode with 4-level paging enabled,
+16
arch/x86/boot/compressed/misc.c
··· 169 169 } 170 170 } 171 171 172 + static bool l5_supported(void) 173 + { 174 + /* Check if leaf 7 is supported. */ 175 + if (native_cpuid_eax(0) < 7) 176 + return 0; 177 + 178 + /* Check if la57 is supported. */ 179 + return native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)); 180 + } 181 + 172 182 #if CONFIG_X86_NEED_RELOCS 173 183 static void handle_relocations(void *output, unsigned long output_len, 174 184 unsigned long virt_addr) ··· 371 361 372 362 console_init(); 373 363 debug_putstr("early console in extract_kernel\n"); 364 + 365 + if (IS_ENABLED(CONFIG_X86_5LEVEL) && !l5_supported()) { 366 + error("This linux kernel as configured requires 5-level paging\n" 367 + "This CPU does not support the required 'cr4.la57' feature\n" 368 + "Unable to boot - please use a kernel appropriate for your CPU\n"); 369 + } 374 370 375 371 free_mem_ptr = heap; /* Heap */ 376 372 free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
+28
arch/x86/boot/compressed/pgtable_64.c
··· 1 + #include <asm/processor.h> 2 + 3 + /* 4 + * __force_order is used by special_insns.h asm code to force instruction 5 + * serialization. 6 + * 7 + * It is not referenced from the code, but GCC < 5 with -fPIE would fail 8 + * due to an undefined symbol. Define it to make these ancient GCCs work. 9 + */ 10 + unsigned long __force_order; 11 + 12 + int l5_paging_required(void) 13 + { 14 + /* Check if leaf 7 is supported. */ 15 + 16 + if (native_cpuid_eax(0) < 7) 17 + return 0; 18 + 19 + /* Check if la57 is supported. */ 20 + if (!(native_cpuid_ecx(7) & (1 << (X86_FEATURE_LA57 & 31)))) 21 + return 0; 22 + 23 + /* Check if 5-level paging has already been enabled. */ 24 + if (native_read_cr4() & X86_CR4_LA57) 25 + return 0; 26 + 27 + return 1; 28 + }
+3 -1
arch/x86/boot/genimage.sh
··· 44 44 45 45 # Make sure the files actually exist 46 46 verify "$FBZIMAGE" 47 - verify "$MTOOLSRC" 48 47 49 48 genbzdisk() { 49 + verify "$MTOOLSRC" 50 50 mformat a: 51 51 syslinux $FIMAGE 52 52 echo "$KCMDLINE" | mcopy - a:syslinux.cfg ··· 57 57 } 58 58 59 59 genfdimage144() { 60 + verify "$MTOOLSRC" 60 61 dd if=/dev/zero of=$FIMAGE bs=1024 count=1440 2> /dev/null 61 62 mformat v: 62 63 syslinux $FIMAGE ··· 69 68 } 70 69 71 70 genfdimage288() { 71 + verify "$MTOOLSRC" 72 72 dd if=/dev/zero of=$FIMAGE bs=1024 count=2880 2> /dev/null 73 73 mformat w: 74 74 syslinux $FIMAGE
-7
arch/x86/crypto/salsa20_glue.c
··· 59 59 60 60 salsa20_ivsetup(ctx, walk.iv); 61 61 62 - if (likely(walk.nbytes == nbytes)) 63 - { 64 - salsa20_encrypt_bytes(ctx, walk.src.virt.addr, 65 - walk.dst.virt.addr, nbytes); 66 - return blkcipher_walk_done(desc, &walk, 0); 67 - } 68 - 69 62 while (walk.nbytes >= 64) { 70 63 salsa20_encrypt_bytes(ctx, walk.src.virt.addr, 71 64 walk.dst.virt.addr,
-2
arch/x86/include/asm/kvm_emulate.h
··· 214 214 void (*halt)(struct x86_emulate_ctxt *ctxt); 215 215 void (*wbinvd)(struct x86_emulate_ctxt *ctxt); 216 216 int (*fix_hypercall)(struct x86_emulate_ctxt *ctxt); 217 - void (*get_fpu)(struct x86_emulate_ctxt *ctxt); /* disables preempt */ 218 - void (*put_fpu)(struct x86_emulate_ctxt *ctxt); /* reenables preempt */ 219 217 int (*intercept)(struct x86_emulate_ctxt *ctxt, 220 218 struct x86_instruction_info *info, 221 219 enum x86_intercept_stage stage);
+16
arch/x86/include/asm/kvm_host.h
··· 536 536 struct kvm_mmu_memory_cache mmu_page_cache; 537 537 struct kvm_mmu_memory_cache mmu_page_header_cache; 538 538 539 + /* 540 + * QEMU userspace and the guest each have their own FPU state. 541 + * In vcpu_run, we switch between the user and guest FPU contexts. 542 + * While running a VCPU, the VCPU thread will have the guest FPU 543 + * context. 544 + * 545 + * Note that while the PKRU state lives inside the fpu registers, 546 + * it is switched out separately at VMENTER and VMEXIT time. The 547 + * "guest_fpu" state here contains the guest FPU context, with the 548 + * host PRKU bits. 549 + */ 550 + struct fpu user_fpu; 539 551 struct fpu guest_fpu; 552 + 540 553 u64 xcr0; 541 554 u64 guest_supported_xcr0; 542 555 u32 guest_xstate_size; ··· 1447 1434 1448 1435 #define put_smstate(type, buf, offset, val) \ 1449 1436 *(type *)((buf) + (offset) - 0x7e00) = val 1437 + 1438 + void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, 1439 + unsigned long start, unsigned long end); 1450 1440 1451 1441 #endif /* _ASM_X86_KVM_HOST_H */
+7 -1
arch/x86/include/asm/suspend_32.h
··· 12 12 13 13 /* image of the saved processor state */ 14 14 struct saved_context { 15 - u16 es, fs, gs, ss; 15 + /* 16 + * On x86_32, all segment registers, with the possible exception of 17 + * gs, are saved at kernel entry in pt_regs. 18 + */ 19 + #ifdef CONFIG_X86_32_LAZY_GS 20 + u16 gs; 21 + #endif 16 22 unsigned long cr0, cr2, cr3, cr4; 17 23 u64 misc_enable; 18 24 bool misc_enable_saved;
+15 -4
arch/x86/include/asm/suspend_64.h
··· 20 20 */ 21 21 struct saved_context { 22 22 struct pt_regs regs; 23 - u16 ds, es, fs, gs, ss; 24 - unsigned long gs_base, gs_kernel_base, fs_base; 23 + 24 + /* 25 + * User CS and SS are saved in current_pt_regs(). The rest of the 26 + * segment selectors need to be saved and restored here. 27 + */ 28 + u16 ds, es, fs, gs; 29 + 30 + /* 31 + * Usermode FSBASE and GSBASE may not match the fs and gs selectors, 32 + * so we save them separately. We save the kernelmode GSBASE to 33 + * restore percpu access after resume. 34 + */ 35 + unsigned long kernelmode_gs_base, usermode_gs_base, fs_base; 36 + 25 37 unsigned long cr0, cr2, cr3, cr4, cr8; 26 38 u64 misc_enable; 27 39 bool misc_enable_saved; ··· 42 30 u16 gdt_pad; /* Unused */ 43 31 struct desc_ptr gdt_desc; 44 32 u16 idt_pad; 45 - u16 idt_limit; 46 - unsigned long idt_base; 33 + struct desc_ptr idt; 47 34 u16 ldt; 48 35 u16 tss; 49 36 unsigned long tr;
+2 -2
arch/x86/kernel/smpboot.c
··· 106 106 static unsigned int logical_packages __read_mostly; 107 107 108 108 /* Maximum number of SMT threads on any online core */ 109 - int __max_smt_threads __read_mostly; 109 + int __read_mostly __max_smt_threads = 1; 110 110 111 111 /* Flag to indicate if a complete sched domain rebuild is required */ 112 112 bool x86_topology_update; ··· 1304 1304 * Today neither Intel nor AMD support heterogenous systems so 1305 1305 * extrapolate the boot cpu's data to all packages. 1306 1306 */ 1307 - ncpus = cpu_data(0).booted_cores * smp_num_siblings; 1307 + ncpus = cpu_data(0).booted_cores * topology_max_smt_threads(); 1308 1308 __max_logical_packages = DIV_ROUND_UP(nr_cpu_ids, ncpus); 1309 1309 pr_info("Max logical packages: %u\n", __max_logical_packages); 1310 1310
-24
arch/x86/kvm/emulate.c
··· 1046 1046 1047 1047 static void read_sse_reg(struct x86_emulate_ctxt *ctxt, sse128_t *data, int reg) 1048 1048 { 1049 - ctxt->ops->get_fpu(ctxt); 1050 1049 switch (reg) { 1051 1050 case 0: asm("movdqa %%xmm0, %0" : "=m"(*data)); break; 1052 1051 case 1: asm("movdqa %%xmm1, %0" : "=m"(*data)); break; ··· 1067 1068 #endif 1068 1069 default: BUG(); 1069 1070 } 1070 - ctxt->ops->put_fpu(ctxt); 1071 1071 } 1072 1072 1073 1073 static void write_sse_reg(struct x86_emulate_ctxt *ctxt, sse128_t *data, 1074 1074 int reg) 1075 1075 { 1076 - ctxt->ops->get_fpu(ctxt); 1077 1076 switch (reg) { 1078 1077 case 0: asm("movdqa %0, %%xmm0" : : "m"(*data)); break; 1079 1078 case 1: asm("movdqa %0, %%xmm1" : : "m"(*data)); break; ··· 1093 1096 #endif 1094 1097 default: BUG(); 1095 1098 } 1096 - ctxt->ops->put_fpu(ctxt); 1097 1099 } 1098 1100 1099 1101 static void read_mmx_reg(struct x86_emulate_ctxt *ctxt, u64 *data, int reg) 1100 1102 { 1101 - ctxt->ops->get_fpu(ctxt); 1102 1103 switch (reg) { 1103 1104 case 0: asm("movq %%mm0, %0" : "=m"(*data)); break; 1104 1105 case 1: asm("movq %%mm1, %0" : "=m"(*data)); break; ··· 1108 1113 case 7: asm("movq %%mm7, %0" : "=m"(*data)); break; 1109 1114 default: BUG(); 1110 1115 } 1111 - ctxt->ops->put_fpu(ctxt); 1112 1116 } 1113 1117 1114 1118 static void write_mmx_reg(struct x86_emulate_ctxt *ctxt, u64 *data, int reg) 1115 1119 { 1116 - ctxt->ops->get_fpu(ctxt); 1117 1120 switch (reg) { 1118 1121 case 0: asm("movq %0, %%mm0" : : "m"(*data)); break; 1119 1122 case 1: asm("movq %0, %%mm1" : : "m"(*data)); break; ··· 1123 1130 case 7: asm("movq %0, %%mm7" : : "m"(*data)); break; 1124 1131 default: BUG(); 1125 1132 } 1126 - ctxt->ops->put_fpu(ctxt); 1127 1133 } 1128 1134 1129 1135 static int em_fninit(struct x86_emulate_ctxt *ctxt) ··· 1130 1138 if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM)) 1131 1139 return emulate_nm(ctxt); 1132 1140 1133 - ctxt->ops->get_fpu(ctxt); 1134 1141 asm volatile("fninit"); 1135 - ctxt->ops->put_fpu(ctxt); 
1136 1142 return X86EMUL_CONTINUE; 1137 1143 } 1138 1144 ··· 1141 1151 if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM)) 1142 1152 return emulate_nm(ctxt); 1143 1153 1144 - ctxt->ops->get_fpu(ctxt); 1145 1154 asm volatile("fnstcw %0": "+m"(fcw)); 1146 - ctxt->ops->put_fpu(ctxt); 1147 1155 1148 1156 ctxt->dst.val = fcw; 1149 1157 ··· 1155 1167 if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM)) 1156 1168 return emulate_nm(ctxt); 1157 1169 1158 - ctxt->ops->get_fpu(ctxt); 1159 1170 asm volatile("fnstsw %0": "+m"(fsw)); 1160 - ctxt->ops->put_fpu(ctxt); 1161 1171 1162 1172 ctxt->dst.val = fsw; 1163 1173 ··· 3987 4001 if (rc != X86EMUL_CONTINUE) 3988 4002 return rc; 3989 4003 3990 - ctxt->ops->get_fpu(ctxt); 3991 - 3992 4004 rc = asm_safe("fxsave %[fx]", , [fx] "+m"(fx_state)); 3993 - 3994 - ctxt->ops->put_fpu(ctxt); 3995 4005 3996 4006 if (rc != X86EMUL_CONTINUE) 3997 4007 return rc; ··· 4031 4049 if (rc != X86EMUL_CONTINUE) 4032 4050 return rc; 4033 4051 4034 - ctxt->ops->get_fpu(ctxt); 4035 - 4036 4052 if (size < __fxstate_size(16)) { 4037 4053 rc = fxregs_fixup(&fx_state, size); 4038 4054 if (rc != X86EMUL_CONTINUE) ··· 4046 4066 rc = asm_safe("fxrstor %[fx]", : [fx] "m"(fx_state)); 4047 4067 4048 4068 out: 4049 - ctxt->ops->put_fpu(ctxt); 4050 - 4051 4069 return rc; 4052 4070 } 4053 4071 ··· 5295 5317 { 5296 5318 int rc; 5297 5319 5298 - ctxt->ops->get_fpu(ctxt); 5299 5320 rc = asm_safe("fwait"); 5300 - ctxt->ops->put_fpu(ctxt); 5301 5321 5302 5322 if (unlikely(rc != X86EMUL_CONTINUE)) 5303 5323 return emulate_exception(ctxt, MF_VECTOR, 0, false);
-6
arch/x86/kvm/vmx.c
··· 6751 6751 goto out; 6752 6752 } 6753 6753 6754 - vmx_io_bitmap_b = (unsigned long *)__get_free_page(GFP_KERNEL); 6755 6754 memset(vmx_vmread_bitmap, 0xff, PAGE_SIZE); 6756 6755 memset(vmx_vmwrite_bitmap, 0xff, PAGE_SIZE); 6757 6756 6758 - /* 6759 - * Allow direct access to the PC debug port (it is often used for I/O 6760 - * delays, but the vmexits simply slow things down). 6761 - */ 6762 6757 memset(vmx_io_bitmap_a, 0xff, PAGE_SIZE); 6763 - clear_bit(0x80, vmx_io_bitmap_a); 6764 6758 6765 6759 memset(vmx_io_bitmap_b, 0xff, PAGE_SIZE); 6766 6760
+31 -32
arch/x86/kvm/x86.c
··· 2937 2937 srcu_read_unlock(&vcpu->kvm->srcu, idx); 2938 2938 pagefault_enable(); 2939 2939 kvm_x86_ops->vcpu_put(vcpu); 2940 - kvm_put_guest_fpu(vcpu); 2941 2940 vcpu->arch.last_host_tsc = rdtsc(); 2942 2941 } 2943 2942 ··· 5251 5252 emul_to_vcpu(ctxt)->arch.halt_request = 1; 5252 5253 } 5253 5254 5254 - static void emulator_get_fpu(struct x86_emulate_ctxt *ctxt) 5255 - { 5256 - preempt_disable(); 5257 - kvm_load_guest_fpu(emul_to_vcpu(ctxt)); 5258 - } 5259 - 5260 - static void emulator_put_fpu(struct x86_emulate_ctxt *ctxt) 5261 - { 5262 - preempt_enable(); 5263 - } 5264 - 5265 5255 static int emulator_intercept(struct x86_emulate_ctxt *ctxt, 5266 5256 struct x86_instruction_info *info, 5267 5257 enum x86_intercept_stage stage) ··· 5328 5340 .halt = emulator_halt, 5329 5341 .wbinvd = emulator_wbinvd, 5330 5342 .fix_hypercall = emulator_fix_hypercall, 5331 - .get_fpu = emulator_get_fpu, 5332 - .put_fpu = emulator_put_fpu, 5333 5343 .intercept = emulator_intercept, 5334 5344 .get_cpuid = emulator_get_cpuid, 5335 5345 .set_nmi_mask = emulator_set_nmi_mask, ··· 6764 6778 kvm_x86_ops->tlb_flush(vcpu); 6765 6779 } 6766 6780 6781 + void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, 6782 + unsigned long start, unsigned long end) 6783 + { 6784 + unsigned long apic_address; 6785 + 6786 + /* 6787 + * The physical address of apic access page is stored in the VMCS. 6788 + * Update it when it becomes invalid. 6789 + */ 6790 + apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT); 6791 + if (start <= apic_address && apic_address < end) 6792 + kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD); 6793 + } 6794 + 6767 6795 void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu) 6768 6796 { 6769 6797 struct page *page = NULL; ··· 6952 6952 preempt_disable(); 6953 6953 6954 6954 kvm_x86_ops->prepare_guest_switch(vcpu); 6955 - kvm_load_guest_fpu(vcpu); 6956 6955 6957 6956 /* 6958 6957 * Disable IRQs before setting IN_GUEST_MODE. 
Posted interrupt ··· 7296 7297 } 7297 7298 } 7298 7299 7300 + kvm_load_guest_fpu(vcpu); 7301 + 7299 7302 if (unlikely(vcpu->arch.complete_userspace_io)) { 7300 7303 int (*cui)(struct kvm_vcpu *) = vcpu->arch.complete_userspace_io; 7301 7304 vcpu->arch.complete_userspace_io = NULL; 7302 7305 r = cui(vcpu); 7303 7306 if (r <= 0) 7304 - goto out; 7307 + goto out_fpu; 7305 7308 } else 7306 7309 WARN_ON(vcpu->arch.pio.count || vcpu->mmio_needed); 7307 7310 ··· 7312 7311 else 7313 7312 r = vcpu_run(vcpu); 7314 7313 7314 + out_fpu: 7315 + kvm_put_guest_fpu(vcpu); 7315 7316 out: 7316 7317 post_kvm_run_save(vcpu); 7317 7318 kvm_sigset_deactivate(vcpu); ··· 7707 7704 vcpu->arch.cr0 |= X86_CR0_ET; 7708 7705 } 7709 7706 7707 + /* Swap (qemu) user FPU context for the guest FPU context. */ 7710 7708 void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) 7711 7709 { 7712 - if (vcpu->guest_fpu_loaded) 7713 - return; 7714 - 7715 - /* 7716 - * Restore all possible states in the guest, 7717 - * and assume host would use all available bits. 7718 - * Guest xcr0 would be loaded later. 7719 - */ 7720 - vcpu->guest_fpu_loaded = 1; 7721 - __kernel_fpu_begin(); 7710 + preempt_disable(); 7711 + copy_fpregs_to_fpstate(&vcpu->arch.user_fpu); 7722 7712 /* PKRU is separately restored in kvm_x86_ops->run. */ 7723 7713 __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state, 7724 7714 ~XFEATURE_MASK_PKRU); 7715 + preempt_enable(); 7725 7716 trace_kvm_fpu(1); 7726 7717 } 7727 7718 7719 + /* When vcpu_run ends, restore user space FPU context. 
*/ 7728 7720 void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) 7729 7721 { 7730 - if (!vcpu->guest_fpu_loaded) 7731 - return; 7732 - 7733 - vcpu->guest_fpu_loaded = 0; 7722 + preempt_disable(); 7734 7723 copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu); 7735 - __kernel_fpu_end(); 7724 + copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state); 7725 + preempt_enable(); 7736 7726 ++vcpu->stat.fpu_reload; 7737 7727 trace_kvm_fpu(0); 7738 7728 } ··· 7842 7846 * To avoid have the INIT path from kvm_apic_has_events() that be 7843 7847 * called with loaded FPU and does not let userspace fix the state. 7844 7848 */ 7845 - kvm_put_guest_fpu(vcpu); 7849 + if (init_event) 7850 + kvm_put_guest_fpu(vcpu); 7846 7851 mpx_state_buffer = get_xsave_addr(&vcpu->arch.guest_fpu.state.xsave, 7847 7852 XFEATURE_MASK_BNDREGS); 7848 7853 if (mpx_state_buffer) ··· 7852 7855 XFEATURE_MASK_BNDCSR); 7853 7856 if (mpx_state_buffer) 7854 7857 memset(mpx_state_buffer, 0, sizeof(struct mpx_bndcsr)); 7858 + if (init_event) 7859 + kvm_load_guest_fpu(vcpu); 7855 7860 } 7856 7861 7857 7862 if (!init_event) {
+11 -2
arch/x86/lib/x86-opcode-map.txt
··· 607 607 fc: paddb Pq,Qq | vpaddb Vx,Hx,Wx (66),(v1) 608 608 fd: paddw Pq,Qq | vpaddw Vx,Hx,Wx (66),(v1) 609 609 fe: paddd Pq,Qq | vpaddd Vx,Hx,Wx (66),(v1) 610 - ff: 610 + ff: UD0 611 611 EndTable 612 612 613 613 Table: 3-byte opcode 1 (0x0f 0x38) ··· 717 717 7e: vpermt2d/q Vx,Hx,Wx (66),(ev) 718 718 7f: vpermt2ps/d Vx,Hx,Wx (66),(ev) 719 719 80: INVEPT Gy,Mdq (66) 720 - 81: INVPID Gy,Mdq (66) 720 + 81: INVVPID Gy,Mdq (66) 721 721 82: INVPCID Gy,Mdq (66) 722 722 83: vpmultishiftqb Vx,Hx,Wx (66),(ev) 723 723 88: vexpandps/d Vpd,Wpd (66),(ev) ··· 970 970 EndTable 971 971 972 972 GrpTable: Grp10 973 + # all are UD1 974 + 0: UD1 975 + 1: UD1 976 + 2: UD1 977 + 3: UD1 978 + 4: UD1 979 + 5: UD1 980 + 6: UD1 981 + 7: UD1 973 982 EndTable 974 983 975 984 # Grp11A and Grp11B are expressed as Grp11 in Intel SDM
+2 -2
arch/x86/mm/ioremap.c
··· 404 404 return; 405 405 } 406 406 407 + mmiotrace_iounmap(addr); 408 + 407 409 addr = (volatile void __iomem *) 408 410 (PAGE_MASK & (unsigned long __force)addr); 409 - 410 - mmiotrace_iounmap(addr); 411 411 412 412 /* Use the vm area unlocked, assuming the caller 413 413 ensures there isn't another iounmap for the same address
+7 -5
arch/x86/mm/kmmio.c
··· 435 435 unsigned long flags; 436 436 int ret = 0; 437 437 unsigned long size = 0; 438 + unsigned long addr = p->addr & PAGE_MASK; 438 439 const unsigned long size_lim = p->len + (p->addr & ~PAGE_MASK); 439 440 unsigned int l; 440 441 pte_t *pte; 441 442 442 443 spin_lock_irqsave(&kmmio_lock, flags); 443 - if (get_kmmio_probe(p->addr)) { 444 + if (get_kmmio_probe(addr)) { 444 445 ret = -EEXIST; 445 446 goto out; 446 447 } 447 448 448 - pte = lookup_address(p->addr, &l); 449 + pte = lookup_address(addr, &l); 449 450 if (!pte) { 450 451 ret = -EINVAL; 451 452 goto out; ··· 455 454 kmmio_count++; 456 455 list_add_rcu(&p->list, &kmmio_probes); 457 456 while (size < size_lim) { 458 - if (add_kmmio_fault_page(p->addr + size)) 457 + if (add_kmmio_fault_page(addr + size)) 459 458 pr_err("Unable to set page fault.\n"); 460 459 size += page_level_size(l); 461 460 } ··· 529 528 { 530 529 unsigned long flags; 531 530 unsigned long size = 0; 531 + unsigned long addr = p->addr & PAGE_MASK; 532 532 const unsigned long size_lim = p->len + (p->addr & ~PAGE_MASK); 533 533 struct kmmio_fault_page *release_list = NULL; 534 534 struct kmmio_delayed_release *drelease; 535 535 unsigned int l; 536 536 pte_t *pte; 537 537 538 - pte = lookup_address(p->addr, &l); 538 + pte = lookup_address(addr, &l); 539 539 if (!pte) 540 540 return; 541 541 542 542 spin_lock_irqsave(&kmmio_lock, flags); 543 543 while (size < size_lim) { 544 - release_kmmio_fault_page(p->addr + size, &release_list); 544 + release_kmmio_fault_page(addr + size, &release_list); 545 545 size += page_level_size(l); 546 546 } 547 547 list_del_rcu(&p->list);
+21 -6
arch/x86/pci/fixup.c
··· 665 665 unsigned i; 666 666 u32 base, limit, high; 667 667 struct resource *res, *conflict; 668 + struct pci_dev *other; 669 + 670 + /* Check that we are the only device of that type */ 671 + other = pci_get_device(dev->vendor, dev->device, NULL); 672 + if (other != dev || 673 + (other = pci_get_device(dev->vendor, dev->device, other))) { 674 + /* This is a multi-socket system, don't touch it for now */ 675 + pci_dev_put(other); 676 + return; 677 + } 668 678 669 679 for (i = 0; i < 8; i++) { 670 680 pci_read_config_dword(dev, AMD_141b_MMIO_BASE(i), &base); ··· 706 696 res->end = 0xfd00000000ull - 1; 707 697 708 698 /* Just grab the free area behind system memory for this */ 709 - while ((conflict = request_resource_conflict(&iomem_resource, res))) 699 + while ((conflict = request_resource_conflict(&iomem_resource, res))) { 700 + if (conflict->end >= res->end) { 701 + kfree(res); 702 + return; 703 + } 710 704 res->start = conflict->end + 1; 705 + } 711 706 712 707 dev_info(&dev->dev, "adding root bus resource %pR\n", res); 713 708 ··· 729 714 730 715 pci_bus_add_resource(dev->bus, res, 0); 731 716 } 732 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1401, pci_amd_enable_64bit_bar); 733 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x141b, pci_amd_enable_64bit_bar); 734 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar); 735 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar); 736 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar); 717 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1401, pci_amd_enable_64bit_bar); 718 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x141b, pci_amd_enable_64bit_bar); 719 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1571, pci_amd_enable_64bit_bar); 720 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15b1, pci_amd_enable_64bit_bar); 721 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1601, pci_amd_enable_64bit_bar); 737 722 738 723 #endif
+46 -55
arch/x86/power/cpu.c
··· 82 82 /* 83 83 * descriptor tables 84 84 */ 85 - #ifdef CONFIG_X86_32 86 85 store_idt(&ctxt->idt); 87 - #else 88 - /* CONFIG_X86_64 */ 89 - store_idt((struct desc_ptr *)&ctxt->idt_limit); 90 - #endif 86 + 91 87 /* 92 88 * We save it here, but restore it only in the hibernate case. 93 89 * For ACPI S3 resume, this is loaded via 'early_gdt_desc' in 64-bit ··· 99 103 /* 100 104 * segment registers 101 105 */ 102 - #ifdef CONFIG_X86_32 103 - savesegment(es, ctxt->es); 104 - savesegment(fs, ctxt->fs); 106 + #ifdef CONFIG_X86_32_LAZY_GS 105 107 savesegment(gs, ctxt->gs); 106 - savesegment(ss, ctxt->ss); 107 - #else 108 - /* CONFIG_X86_64 */ 109 - asm volatile ("movw %%ds, %0" : "=m" (ctxt->ds)); 110 - asm volatile ("movw %%es, %0" : "=m" (ctxt->es)); 111 - asm volatile ("movw %%fs, %0" : "=m" (ctxt->fs)); 112 - asm volatile ("movw %%gs, %0" : "=m" (ctxt->gs)); 113 - asm volatile ("movw %%ss, %0" : "=m" (ctxt->ss)); 108 + #endif 109 + #ifdef CONFIG_X86_64 110 + savesegment(gs, ctxt->gs); 111 + savesegment(fs, ctxt->fs); 112 + savesegment(ds, ctxt->ds); 113 + savesegment(es, ctxt->es); 114 114 115 115 rdmsrl(MSR_FS_BASE, ctxt->fs_base); 116 - rdmsrl(MSR_GS_BASE, ctxt->gs_base); 117 - rdmsrl(MSR_KERNEL_GS_BASE, ctxt->gs_kernel_base); 116 + rdmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base); 117 + rdmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base); 118 118 mtrr_save_fixed_ranges(NULL); 119 119 120 120 rdmsrl(MSR_EFER, ctxt->efer); ··· 170 178 write_gdt_entry(desc, GDT_ENTRY_TSS, &tss, DESC_TSS); 171 179 172 180 syscall_init(); /* This sets MSR_*STAR and related */ 181 + #else 182 + if (boot_cpu_has(X86_FEATURE_SEP)) 183 + enable_sep_cpu(); 173 184 #endif 174 185 load_TR_desc(); /* This does ltr */ 175 186 load_mm_ldt(current->active_mm); /* This does lldt */ ··· 185 190 } 186 191 187 192 /** 188 - * __restore_processor_state - restore the contents of CPU registers saved 189 - * by __save_processor_state() 190 - * @ctxt - structure to load the registers contents from 193 + * 
__restore_processor_state - restore the contents of CPU registers saved 194 + * by __save_processor_state() 195 + * @ctxt - structure to load the registers contents from 196 + * 197 + * The asm code that gets us here will have restored a usable GDT, although 198 + * it will be pointing to the wrong alias. 191 199 */ 192 200 static void notrace __restore_processor_state(struct saved_context *ctxt) 193 201 { ··· 213 215 write_cr2(ctxt->cr2); 214 216 write_cr0(ctxt->cr0); 215 217 216 - /* 217 - * now restore the descriptor tables to their proper values 218 - * ltr is done i fix_processor_context(). 219 - */ 220 - #ifdef CONFIG_X86_32 218 + /* Restore the IDT. */ 221 219 load_idt(&ctxt->idt); 222 - #else 223 - /* CONFIG_X86_64 */ 224 - load_idt((const struct desc_ptr *)&ctxt->idt_limit); 225 - #endif 226 220 227 - #ifdef CONFIG_X86_64 228 221 /* 229 - * We need GSBASE restored before percpu access can work. 230 - * percpu access can happen in exception handlers or in complicated 231 - * helpers like load_gs_index(). 222 + * Just in case the asm code got us here with the SS, DS, or ES 223 + * out of sync with the GDT, update them. 232 224 */ 233 - wrmsrl(MSR_GS_BASE, ctxt->gs_base); 225 + loadsegment(ss, __KERNEL_DS); 226 + loadsegment(ds, __USER_DS); 227 + loadsegment(es, __USER_DS); 228 + 229 + /* 230 + * Restore percpu access. Percpu access can happen in exception 231 + * handlers or in complicated helpers like load_gs_index(). 232 + */ 233 + #ifdef CONFIG_X86_64 234 + wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base); 235 + #else 236 + loadsegment(fs, __KERNEL_PERCPU); 237 + loadsegment(gs, __KERNEL_STACK_CANARY); 234 238 #endif 235 239 240 + /* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */ 236 241 fix_processor_context(); 237 242 238 243 /* 239 - * Restore segment registers. This happens after restoring the GDT 240 - * and LDT, which happen in fix_processor_context(). 
244 + * Now that we have descriptor tables fully restored and working 245 + * exception handling, restore the usermode segments. 241 246 */ 242 - #ifdef CONFIG_X86_32 247 + #ifdef CONFIG_X86_64 248 + loadsegment(ds, ctxt->es); 243 249 loadsegment(es, ctxt->es); 244 250 loadsegment(fs, ctxt->fs); 245 - loadsegment(gs, ctxt->gs); 246 - loadsegment(ss, ctxt->ss); 247 - 248 - /* 249 - * sysenter MSRs 250 - */ 251 - if (boot_cpu_has(X86_FEATURE_SEP)) 252 - enable_sep_cpu(); 253 - #else 254 - /* CONFIG_X86_64 */ 255 - asm volatile ("movw %0, %%ds" :: "r" (ctxt->ds)); 256 - asm volatile ("movw %0, %%es" :: "r" (ctxt->es)); 257 - asm volatile ("movw %0, %%fs" :: "r" (ctxt->fs)); 258 251 load_gs_index(ctxt->gs); 259 - asm volatile ("movw %0, %%ss" :: "r" (ctxt->ss)); 260 252 261 253 /* 262 - * Restore FSBASE and user GSBASE after reloading the respective 263 - * segment selectors. 254 + * Restore FSBASE and GSBASE after restoring the selectors, since 255 + * restoring the selectors clobbers the bases. Keep in mind 256 + * that MSR_KERNEL_GS_BASE is horribly misnamed. 264 257 */ 265 258 wrmsrl(MSR_FS_BASE, ctxt->fs_base); 266 - wrmsrl(MSR_KERNEL_GS_BASE, ctxt->gs_kernel_base); 259 + wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base); 260 + #elif defined(CONFIG_X86_32_LAZY_GS) 261 + loadsegment(gs, ctxt->gs); 267 262 #endif 268 263 269 264 do_fpu_end();
+1 -1
arch/x86/xen/apic.c
··· 57 57 return 0; 58 58 59 59 if (reg == APIC_LVR) 60 - return 0x10; 60 + return 0x14; 61 61 #ifdef CONFIG_X86_32 62 62 if (reg == APIC_LDR) 63 63 return SET_APIC_LOGICAL_ID(1UL << smp_processor_id());
+7 -6
crypto/af_alg.c
··· 672 672 } 673 673 674 674 tsgl = areq->tsgl; 675 - for_each_sg(tsgl, sg, areq->tsgl_entries, i) { 676 - if (!sg_page(sg)) 677 - continue; 678 - put_page(sg_page(sg)); 679 - } 675 + if (tsgl) { 676 + for_each_sg(tsgl, sg, areq->tsgl_entries, i) { 677 + if (!sg_page(sg)) 678 + continue; 679 + put_page(sg_page(sg)); 680 + } 680 681 681 - if (areq->tsgl && areq->tsgl_entries) 682 682 sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl)); 683 + } 683 684 } 684 685 EXPORT_SYMBOL_GPL(af_alg_free_areq_sgls); 685 686
+1 -1
crypto/algif_aead.c
··· 503 503 struct aead_tfm *tfm = private; 504 504 505 505 crypto_free_aead(tfm->aead); 506 + crypto_put_default_null_skcipher2(); 506 507 kfree(tfm); 507 508 } 508 509 ··· 536 535 unsigned int ivlen = crypto_aead_ivsize(tfm); 537 536 538 537 af_alg_pull_tsgl(sk, ctx->used, NULL, 0); 539 - crypto_put_default_null_skcipher2(); 540 538 sock_kzfree_s(sk, ctx->iv, ivlen); 541 539 sock_kfree_s(sk, ctx, ctx->len); 542 540 af_alg_release_parent(sk);
+3 -1
crypto/asymmetric_keys/pkcs7_parser.c
··· 148 148 } 149 149 150 150 ret = pkcs7_check_authattrs(ctx->msg); 151 - if (ret < 0) 151 + if (ret < 0) { 152 + msg = ERR_PTR(ret); 152 153 goto out; 154 + } 153 155 154 156 msg = ctx->msg; 155 157 ctx->msg = NULL;
+1 -1
crypto/asymmetric_keys/pkcs7_trust.c
··· 69 69 /* Self-signed certificates form roots of their own, and if we 70 70 * don't know them, then we can't accept them. 71 71 */ 72 - if (x509->next == x509) { 72 + if (x509->signer == x509) { 73 73 kleave(" = -ENOKEY [unknown self-signed]"); 74 74 return -ENOKEY; 75 75 }
+3 -6
crypto/asymmetric_keys/pkcs7_verify.c
··· 59 59 desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP; 60 60 61 61 /* Digest the message [RFC2315 9.3] */ 62 - ret = crypto_shash_init(desc); 63 - if (ret < 0) 64 - goto error; 65 - ret = crypto_shash_finup(desc, pkcs7->data, pkcs7->data_len, 66 - sig->digest); 62 + ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len, 63 + sig->digest); 67 64 if (ret < 0) 68 65 goto error; 69 66 pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest); ··· 147 150 pr_devel("Sig %u: Found cert serial match X.509[%u]\n", 148 151 sinfo->index, certix); 149 152 150 - if (x509->pub->pkey_algo != sinfo->sig->pkey_algo) { 153 + if (strcmp(x509->pub->pkey_algo, sinfo->sig->pkey_algo) != 0) { 151 154 pr_warn("Sig %u: X.509 algo and PKCS#7 sig algo don't match\n", 152 155 sinfo->index); 153 156 continue;
+5 -2
crypto/asymmetric_keys/public_key.c
··· 73 73 char alg_name_buf[CRYPTO_MAX_ALG_NAME]; 74 74 void *output; 75 75 unsigned int outlen; 76 - int ret = -ENOMEM; 76 + int ret; 77 77 78 78 pr_devel("==>%s()\n", __func__); 79 79 ··· 99 99 if (IS_ERR(tfm)) 100 100 return PTR_ERR(tfm); 101 101 102 + ret = -ENOMEM; 102 103 req = akcipher_request_alloc(tfm, GFP_KERNEL); 103 104 if (!req) 104 105 goto error_free_tfm; ··· 128 127 * signature and returns that to us. 129 128 */ 130 129 ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait); 131 - if (ret < 0) 130 + if (ret) 132 131 goto out_free_output; 133 132 134 133 /* Do the actual verification step. */ ··· 143 142 error_free_tfm: 144 143 crypto_free_akcipher(tfm); 145 144 pr_devel("<==%s() = %d\n", __func__, ret); 145 + if (WARN_ON_ONCE(ret > 0)) 146 + ret = -EINVAL; 146 147 return ret; 147 148 } 148 149 EXPORT_SYMBOL_GPL(public_key_verify_signature);
+2
crypto/asymmetric_keys/x509_cert_parser.c
··· 409 409 ctx->cert->pub->pkey_algo = "rsa"; 410 410 411 411 /* Discard the BIT STRING metadata */ 412 + if (vlen < 1 || *(const u8 *)value != 0) 413 + return -EBADMSG; 412 414 ctx->key = value + 1; 413 415 ctx->key_size = vlen - 1; 414 416 return 0;
+2 -6
crypto/asymmetric_keys/x509_public_key.c
··· 79 79 desc->tfm = tfm; 80 80 desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP; 81 81 82 - ret = crypto_shash_init(desc); 83 - if (ret < 0) 84 - goto error_2; 85 - might_sleep(); 86 - ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest); 82 + ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size, sig->digest); 87 83 if (ret < 0) 88 84 goto error_2; 89 85 ··· 131 135 } 132 136 133 137 ret = -EKEYREJECTED; 134 - if (cert->pub->pkey_algo != cert->sig->pkey_algo) 138 + if (strcmp(cert->pub->pkey_algo, cert->sig->pkey_algo) != 0) 135 139 goto out; 136 140 137 141 ret = public_key_verify_signature(cert->pub, cert->sig);
+5 -1
crypto/hmac.c
··· 195 195 salg = shash_attr_alg(tb[1], 0, 0); 196 196 if (IS_ERR(salg)) 197 197 return PTR_ERR(salg); 198 + alg = &salg->base; 198 199 200 + /* The underlying hash algorithm must be unkeyed */ 199 201 err = -EINVAL; 202 + if (crypto_shash_alg_has_setkey(salg)) 203 + goto out_put_alg; 204 + 200 205 ds = salg->digestsize; 201 206 ss = salg->statesize; 202 - alg = &salg->base; 203 207 if (ds > alg->cra_blocksize || 204 208 ss < alg->cra_blocksize) 205 209 goto out_put_alg;
+1 -1
crypto/rsa_helper.c
··· 30 30 return -EINVAL; 31 31 32 32 if (fips_enabled) { 33 - while (!*ptr && n_sz) { 33 + while (n_sz && !*ptr) { 34 34 ptr++; 35 35 n_sz--; 36 36 }
-7
crypto/salsa20_generic.c
··· 188 188 189 189 salsa20_ivsetup(ctx, walk.iv); 190 190 191 - if (likely(walk.nbytes == nbytes)) 192 - { 193 - salsa20_encrypt_bytes(ctx, walk.dst.virt.addr, 194 - walk.src.virt.addr, nbytes); 195 - return blkcipher_walk_done(desc, &walk, 0); 196 - } 197 - 198 191 while (walk.nbytes >= 64) { 199 192 salsa20_encrypt_bytes(ctx, walk.dst.virt.addr, 200 193 walk.src.virt.addr,
+3 -2
crypto/shash.c
··· 25 25 26 26 static const struct crypto_type crypto_shash_type; 27 27 28 - static int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, 29 - unsigned int keylen) 28 + int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, 29 + unsigned int keylen) 30 30 { 31 31 return -ENOSYS; 32 32 } 33 + EXPORT_SYMBOL_GPL(shash_no_setkey); 33 34 34 35 static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key, 35 36 unsigned int keylen)
+1 -1
drivers/acpi/device_pm.c
··· 1138 1138 * skip all of the subsequent "thaw" callbacks for the device. 1139 1139 */ 1140 1140 if (dev_pm_smart_suspend_and_suspended(dev)) { 1141 - dev->power.direct_complete = true; 1141 + dev_pm_skip_next_resume_phases(dev); 1142 1142 return 0; 1143 1143 } 1144 1144
+3 -3
drivers/ata/ahci_mtk.c
··· 1 1 /* 2 - * MeidaTek AHCI SATA driver 2 + * MediaTek AHCI SATA driver 3 3 * 4 4 * Copyright (c) 2017 MediaTek Inc. 5 5 * Author: Ryder Lee <ryder.lee@mediatek.com> ··· 25 25 #include <linux/reset.h> 26 26 #include "ahci.h" 27 27 28 - #define DRV_NAME "ahci" 28 + #define DRV_NAME "ahci-mtk" 29 29 30 30 #define SYS_CFG 0x14 31 31 #define SYS_CFG_SATA_MSK GENMASK(31, 30) ··· 192 192 }; 193 193 module_platform_driver(mtk_ahci_driver); 194 194 195 - MODULE_DESCRIPTION("MeidaTek SATA AHCI Driver"); 195 + MODULE_DESCRIPTION("MediaTek SATA AHCI Driver"); 196 196 MODULE_LICENSE("GPL v2");
+12
drivers/ata/ahci_qoriq.c
··· 35 35 36 36 /* port register default value */ 37 37 #define AHCI_PORT_PHY_1_CFG 0xa003fffe 38 + #define AHCI_PORT_PHY2_CFG 0x28184d1f 39 + #define AHCI_PORT_PHY3_CFG 0x0e081509 38 40 #define AHCI_PORT_TRANS_CFG 0x08000029 39 41 #define AHCI_PORT_AXICC_CFG 0x3fffffff 40 42 ··· 185 183 writel(readl(qpriv->ecc_addr) | ECC_DIS_ARMV8_CH2, 186 184 qpriv->ecc_addr); 187 185 writel(AHCI_PORT_PHY_1_CFG, reg_base + PORT_PHY1); 186 + writel(AHCI_PORT_PHY2_CFG, reg_base + PORT_PHY2); 187 + writel(AHCI_PORT_PHY3_CFG, reg_base + PORT_PHY3); 188 188 writel(AHCI_PORT_TRANS_CFG, reg_base + PORT_TRANS); 189 189 if (qpriv->is_dmacoherent) 190 190 writel(AHCI_PORT_AXICC_CFG, reg_base + PORT_AXICC); ··· 194 190 195 191 case AHCI_LS2080A: 196 192 writel(AHCI_PORT_PHY_1_CFG, reg_base + PORT_PHY1); 193 + writel(AHCI_PORT_PHY2_CFG, reg_base + PORT_PHY2); 194 + writel(AHCI_PORT_PHY3_CFG, reg_base + PORT_PHY3); 197 195 writel(AHCI_PORT_TRANS_CFG, reg_base + PORT_TRANS); 198 196 if (qpriv->is_dmacoherent) 199 197 writel(AHCI_PORT_AXICC_CFG, reg_base + PORT_AXICC); ··· 207 201 writel(readl(qpriv->ecc_addr) | ECC_DIS_ARMV8_CH2, 208 202 qpriv->ecc_addr); 209 203 writel(AHCI_PORT_PHY_1_CFG, reg_base + PORT_PHY1); 204 + writel(AHCI_PORT_PHY2_CFG, reg_base + PORT_PHY2); 205 + writel(AHCI_PORT_PHY3_CFG, reg_base + PORT_PHY3); 210 206 writel(AHCI_PORT_TRANS_CFG, reg_base + PORT_TRANS); 211 207 if (qpriv->is_dmacoherent) 212 208 writel(AHCI_PORT_AXICC_CFG, reg_base + PORT_AXICC); ··· 220 212 writel(readl(qpriv->ecc_addr) | ECC_DIS_LS1088A, 221 213 qpriv->ecc_addr); 222 214 writel(AHCI_PORT_PHY_1_CFG, reg_base + PORT_PHY1); 215 + writel(AHCI_PORT_PHY2_CFG, reg_base + PORT_PHY2); 216 + writel(AHCI_PORT_PHY3_CFG, reg_base + PORT_PHY3); 223 217 writel(AHCI_PORT_TRANS_CFG, reg_base + PORT_TRANS); 224 218 if (qpriv->is_dmacoherent) 225 219 writel(AHCI_PORT_AXICC_CFG, reg_base + PORT_AXICC); ··· 229 219 230 220 case AHCI_LS2088A: 231 221 writel(AHCI_PORT_PHY_1_CFG, reg_base + PORT_PHY1); 222 + 
writel(AHCI_PORT_PHY2_CFG, reg_base + PORT_PHY2); 223 + writel(AHCI_PORT_PHY3_CFG, reg_base + PORT_PHY3); 232 224 writel(AHCI_PORT_TRANS_CFG, reg_base + PORT_TRANS); 233 225 if (qpriv->is_dmacoherent) 234 226 writel(AHCI_PORT_AXICC_CFG, reg_base + PORT_AXICC);
+9 -3
drivers/ata/libata-core.c
··· 3082 3082 bit = fls(mask) - 1; 3083 3083 mask &= ~(1 << bit); 3084 3084 3085 - /* Mask off all speeds higher than or equal to the current 3086 - * one. Force 1.5Gbps if current SPD is not available. 3085 + /* 3086 + * Mask off all speeds higher than or equal to the current one. At 3087 + * this point, if current SPD is not available and we previously 3088 + * recorded the link speed from SStatus, the driver has already 3089 + * masked off the highest bit so mask should already be 1 or 0. 3090 + * Otherwise, we should not force 1.5Gbps on a link where we have 3091 + * not previously recorded speed from SStatus. Just return in this 3092 + * case. 3087 3093 */ 3088 3094 if (spd > 1) 3089 3095 mask &= (1 << (spd - 1)) - 1; 3090 3096 else 3091 - mask &= 1; 3097 + return -EINVAL; 3092 3098 3093 3099 /* were we already at the bottom? */ 3094 3100 if (!mask)
+6 -10
drivers/ata/pata_pdc2027x.c
··· 82 82 * is issued to the device. However, if the controller clock is 133MHz, 83 83 * the following tables must be used. 84 84 */ 85 - static struct pdc2027x_pio_timing { 85 + static const struct pdc2027x_pio_timing { 86 86 u8 value0, value1, value2; 87 87 } pdc2027x_pio_timing_tbl[] = { 88 88 { 0xfb, 0x2b, 0xac }, /* PIO mode 0 */ ··· 92 92 { 0x23, 0x09, 0x25 }, /* PIO mode 4, IORDY on, Prefetch off */ 93 93 }; 94 94 95 - static struct pdc2027x_mdma_timing { 95 + static const struct pdc2027x_mdma_timing { 96 96 u8 value0, value1; 97 97 } pdc2027x_mdma_timing_tbl[] = { 98 98 { 0xdf, 0x5f }, /* MDMA mode 0 */ ··· 100 100 { 0x69, 0x25 }, /* MDMA mode 2 */ 101 101 }; 102 102 103 - static struct pdc2027x_udma_timing { 103 + static const struct pdc2027x_udma_timing { 104 104 u8 value0, value1, value2; 105 105 } pdc2027x_udma_timing_tbl[] = { 106 106 { 0x4a, 0x0f, 0xd5 }, /* UDMA mode 0 */ ··· 649 649 * @host: target ATA host 650 650 * @board_idx: board identifier 651 651 */ 652 - static int pdc_hardware_init(struct ata_host *host, unsigned int board_idx) 652 + static void pdc_hardware_init(struct ata_host *host, unsigned int board_idx) 653 653 { 654 654 long pll_clock; 655 655 ··· 665 665 666 666 /* Adjust PLL control register */ 667 667 pdc_adjust_pll(host, pll_clock, board_idx); 668 - 669 - return 0; 670 668 } 671 669 672 670 /** ··· 751 753 //pci_enable_intx(pdev); 752 754 753 755 /* initialize adapter */ 754 - if (pdc_hardware_init(host, board_idx) != 0) 755 - return -EIO; 756 + pdc_hardware_init(host, board_idx); 756 757 757 758 pci_set_master(pdev); 758 759 return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt, ··· 775 778 else 776 779 board_idx = PDC_UDMA_133; 777 780 778 - if (pdc_hardware_init(host, board_idx)) 779 - return -EIO; 781 + pdc_hardware_init(host, board_idx); 780 782 781 783 ata_host_resume(host); 782 784 return 0;
+15
drivers/base/power/main.c
··· 526 526 /*------------------------- Resume routines -------------------------*/ 527 527 528 528 /** 529 + * dev_pm_skip_next_resume_phases - Skip next system resume phases for device. 530 + * @dev: Target device. 531 + * 532 + * Make the core skip the "early resume" and "resume" phases for @dev. 533 + * 534 + * This function can be called by middle-layer code during the "noirq" phase of 535 + * system resume if necessary, but not by device drivers. 536 + */ 537 + void dev_pm_skip_next_resume_phases(struct device *dev) 538 + { 539 + dev->power.is_late_suspended = false; 540 + dev->power.is_suspended = false; 541 + } 542 + 543 + /** 529 544 * device_resume_noirq - Execute a "noirq resume" callback for given device. 530 545 * @dev: Device to handle. 531 546 * @state: PM transition of the system being carried out.
+5 -2
drivers/bus/arm-cci.c
··· 1755 1755 raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock); 1756 1756 mutex_init(&cci_pmu->reserve_mutex); 1757 1757 atomic_set(&cci_pmu->active_events, 0); 1758 - cpumask_set_cpu(smp_processor_id(), &cci_pmu->cpus); 1758 + cpumask_set_cpu(get_cpu(), &cci_pmu->cpus); 1759 1759 1760 1760 ret = cci_pmu_init(cci_pmu, pdev); 1761 - if (ret) 1761 + if (ret) { 1762 + put_cpu(); 1762 1763 return ret; 1764 + } 1763 1765 1764 1766 cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE, 1765 1767 &cci_pmu->node); 1768 + put_cpu(); 1766 1769 pr_info("ARM %s PMU driver probed", cci_pmu->model->name); 1767 1770 return 0; 1768 1771 }
+15 -10
drivers/bus/arm-ccn.c
··· 262 262 NULL 263 263 }; 264 264 265 - static struct attribute_group arm_ccn_pmu_format_attr_group = { 265 + static const struct attribute_group arm_ccn_pmu_format_attr_group = { 266 266 .name = "format", 267 267 .attrs = arm_ccn_pmu_format_attrs, 268 268 }; ··· 451 451 static struct attribute 452 452 *arm_ccn_pmu_events_attrs[ARRAY_SIZE(arm_ccn_pmu_events) + 1]; 453 453 454 - static struct attribute_group arm_ccn_pmu_events_attr_group = { 454 + static const struct attribute_group arm_ccn_pmu_events_attr_group = { 455 455 .name = "events", 456 456 .is_visible = arm_ccn_pmu_events_is_visible, 457 457 .attrs = arm_ccn_pmu_events_attrs, ··· 548 548 NULL 549 549 }; 550 550 551 - static struct attribute_group arm_ccn_pmu_cmp_mask_attr_group = { 551 + static const struct attribute_group arm_ccn_pmu_cmp_mask_attr_group = { 552 552 .name = "cmp_mask", 553 553 .attrs = arm_ccn_pmu_cmp_mask_attrs, 554 554 }; ··· 569 569 NULL, 570 570 }; 571 571 572 - static struct attribute_group arm_ccn_pmu_cpumask_attr_group = { 572 + static const struct attribute_group arm_ccn_pmu_cpumask_attr_group = { 573 573 .attrs = arm_ccn_pmu_cpumask_attrs, 574 574 }; 575 575 ··· 1268 1268 if (ccn->dt.id == 0) { 1269 1269 name = "ccn"; 1270 1270 } else { 1271 - int len = snprintf(NULL, 0, "ccn_%d", ccn->dt.id); 1272 - 1273 - name = devm_kzalloc(ccn->dev, len + 1, GFP_KERNEL); 1274 - snprintf(name, len + 1, "ccn_%d", ccn->dt.id); 1271 + name = devm_kasprintf(ccn->dev, GFP_KERNEL, "ccn_%d", 1272 + ccn->dt.id); 1273 + if (!name) { 1274 + err = -ENOMEM; 1275 + goto error_choose_name; 1276 + } 1275 1277 } 1276 1278 1277 1279 /* Perf driver registration */ ··· 1300 1298 } 1301 1299 1302 1300 /* Pick one CPU which we will use to collect data from CCN... 
*/ 1303 - cpumask_set_cpu(smp_processor_id(), &ccn->dt.cpu); 1301 + cpumask_set_cpu(get_cpu(), &ccn->dt.cpu); 1304 1302 1305 1303 /* Also make sure that the overflow interrupt is handled by this CPU */ 1306 1304 if (ccn->irq) { ··· 1317 1315 1318 1316 cpuhp_state_add_instance_nocalls(CPUHP_AP_PERF_ARM_CCN_ONLINE, 1319 1317 &ccn->dt.node); 1318 + put_cpu(); 1320 1319 return 0; 1321 1320 1322 1321 error_pmu_register: 1323 1322 error_set_affinity: 1323 + put_cpu(); 1324 + error_choose_name: 1324 1325 ida_simple_remove(&arm_ccn_pmu_ida, ccn->dt.id); 1325 1326 for (i = 0; i < ccn->num_xps; i++) 1326 1327 writel(0, ccn->xp[i].base + CCN_XP_DT_CONTROL); ··· 1586 1581 1587 1582 static void __exit arm_ccn_exit(void) 1588 1583 { 1589 - cpuhp_remove_multi_state(CPUHP_AP_PERF_ARM_CCN_ONLINE); 1590 1584 platform_driver_unregister(&arm_ccn_driver); 1585 + cpuhp_remove_multi_state(CPUHP_AP_PERF_ARM_CCN_ONLINE); 1591 1586 } 1592 1587 1593 1588 module_init(arm_ccn_init);
+23 -21
drivers/char/ipmi/ipmi_si_intf.c
··· 199 199 /* The timer for this si. */ 200 200 struct timer_list si_timer; 201 201 202 + /* This flag is set, if the timer can be set */ 203 + bool timer_can_start; 204 + 202 205 /* This flag is set, if the timer is running (timer_pending() isn't enough) */ 203 206 bool timer_running; 204 207 ··· 358 355 359 356 static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) 360 357 { 358 + if (!smi_info->timer_can_start) 359 + return; 361 360 smi_info->last_timeout_jiffies = jiffies; 362 361 mod_timer(&smi_info->si_timer, new_val); 363 362 smi_info->timer_running = true; ··· 379 374 smi_info->handlers->start_transaction(smi_info->si_sm, msg, size); 380 375 } 381 376 382 - static void start_check_enables(struct smi_info *smi_info, bool start_timer) 377 + static void start_check_enables(struct smi_info *smi_info) 383 378 { 384 379 unsigned char msg[2]; 385 380 386 381 msg[0] = (IPMI_NETFN_APP_REQUEST << 2); 387 382 msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD; 388 383 389 - if (start_timer) 390 - start_new_msg(smi_info, msg, 2); 391 - else 392 - smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2); 384 + start_new_msg(smi_info, msg, 2); 393 385 smi_info->si_state = SI_CHECKING_ENABLES; 394 386 } 395 387 396 - static void start_clear_flags(struct smi_info *smi_info, bool start_timer) 388 + static void start_clear_flags(struct smi_info *smi_info) 397 389 { 398 390 unsigned char msg[3]; 399 391 ··· 399 397 msg[1] = IPMI_CLEAR_MSG_FLAGS_CMD; 400 398 msg[2] = WDT_PRE_TIMEOUT_INT; 401 399 402 - if (start_timer) 403 - start_new_msg(smi_info, msg, 3); 404 - else 405 - smi_info->handlers->start_transaction(smi_info->si_sm, msg, 3); 400 + start_new_msg(smi_info, msg, 3); 406 401 smi_info->si_state = SI_CLEARING_FLAGS; 407 402 } 408 403 ··· 434 435 * Note that we cannot just use disable_irq(), since the interrupt may 435 436 * be shared. 
436 437 */ 437 - static inline bool disable_si_irq(struct smi_info *smi_info, bool start_timer) 438 + static inline bool disable_si_irq(struct smi_info *smi_info) 438 439 { 439 440 if ((smi_info->io.irq) && (!smi_info->interrupt_disabled)) { 440 441 smi_info->interrupt_disabled = true; 441 - start_check_enables(smi_info, start_timer); 442 + start_check_enables(smi_info); 442 443 return true; 443 444 } 444 445 return false; ··· 448 449 { 449 450 if ((smi_info->io.irq) && (smi_info->interrupt_disabled)) { 450 451 smi_info->interrupt_disabled = false; 451 - start_check_enables(smi_info, true); 452 + start_check_enables(smi_info); 452 453 return true; 453 454 } 454 455 return false; ··· 466 467 467 468 msg = ipmi_alloc_smi_msg(); 468 469 if (!msg) { 469 - if (!disable_si_irq(smi_info, true)) 470 + if (!disable_si_irq(smi_info)) 470 471 smi_info->si_state = SI_NORMAL; 471 472 } else if (enable_si_irq(smi_info)) { 472 473 ipmi_free_smi_msg(msg); ··· 482 483 /* Watchdog pre-timeout */ 483 484 smi_inc_stat(smi_info, watchdog_pretimeouts); 484 485 485 - start_clear_flags(smi_info, true); 486 + start_clear_flags(smi_info); 486 487 smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT; 487 488 if (smi_info->intf) 488 489 ipmi_smi_watchdog_pretimeout(smi_info->intf); ··· 865 866 * disable and messages disabled. 866 867 */ 867 868 if (smi_info->supports_event_msg_buff || smi_info->io.irq) { 868 - start_check_enables(smi_info, true); 869 + start_check_enables(smi_info); 869 870 } else { 870 871 smi_info->curr_msg = alloc_msg_handle_irq(smi_info); 871 872 if (!smi_info->curr_msg) ··· 1166 1167 1167 1168 /* Set up the timer that drives the interface. */ 1168 1169 timer_setup(&new_smi->si_timer, smi_timeout, 0); 1170 + new_smi->timer_can_start = true; 1169 1171 smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES); 1170 1172 1171 1173 /* Try to claim any interrupts. 
*/ ··· 1936 1936 check_set_rcv_irq(smi_info); 1937 1937 } 1938 1938 1939 - static inline void wait_for_timer_and_thread(struct smi_info *smi_info) 1939 + static inline void stop_timer_and_thread(struct smi_info *smi_info) 1940 1940 { 1941 1941 if (smi_info->thread != NULL) 1942 1942 kthread_stop(smi_info->thread); 1943 + 1944 + smi_info->timer_can_start = false; 1943 1945 if (smi_info->timer_running) 1944 1946 del_timer_sync(&smi_info->si_timer); 1945 1947 } ··· 2154 2152 * Start clearing the flags before we enable interrupts or the 2155 2153 * timer to avoid racing with the timer. 2156 2154 */ 2157 - start_clear_flags(new_smi, false); 2155 + start_clear_flags(new_smi); 2158 2156 2159 2157 /* 2160 2158 * IRQ is defined to be set when non-zero. req_events will ··· 2240 2238 dev_set_drvdata(new_smi->io.dev, NULL); 2241 2239 2242 2240 out_err_stop_timer: 2243 - wait_for_timer_and_thread(new_smi); 2241 + stop_timer_and_thread(new_smi); 2244 2242 2245 2243 out_err: 2246 2244 new_smi->interrupt_disabled = true; ··· 2390 2388 */ 2391 2389 if (to_clean->io.irq_cleanup) 2392 2390 to_clean->io.irq_cleanup(&to_clean->io); 2393 - wait_for_timer_and_thread(to_clean); 2391 + stop_timer_and_thread(to_clean); 2394 2392 2395 2393 /* 2396 2394 * Timeouts are stopped, now make sure the interrupts are off ··· 2402 2400 schedule_timeout_uninterruptible(1); 2403 2401 } 2404 2402 if (to_clean->handlers) 2405 - disable_si_irq(to_clean, false); 2403 + disable_si_irq(to_clean); 2406 2404 while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) { 2407 2405 poll(to_clean); 2408 2406 schedule_timeout_uninterruptible(1);
+2
drivers/char/ipmi/ipmi_si_parisc.c
··· 10 10 { 11 11 struct si_sm_io io; 12 12 13 + memset(&io, 0, sizeof(io)); 14 + 13 15 io.si_type = SI_KCS; 14 16 io.addr_source = SI_DEVICETREE; 15 17 io.addr_type = IPMI_MEM_ADDR_SPACE;
+5 -2
drivers/char/ipmi/ipmi_si_pci.c
··· 103 103 io.addr_source_cleanup = ipmi_pci_cleanup; 104 104 io.addr_source_data = pdev; 105 105 106 - if (pci_resource_flags(pdev, 0) & IORESOURCE_IO) 106 + if (pci_resource_flags(pdev, 0) & IORESOURCE_IO) { 107 107 io.addr_type = IPMI_IO_ADDR_SPACE; 108 - else 108 + io.io_setup = ipmi_si_port_setup; 109 + } else { 109 110 io.addr_type = IPMI_MEM_ADDR_SPACE; 111 + io.io_setup = ipmi_si_mem_setup; 112 + } 110 113 io.addr_data = pci_resource_start(pdev, 0); 111 114 112 115 io.regspacing = ipmi_pci_probe_regspacing(&io);
+127 -85
drivers/firmware/arm_scpi.c
··· 28 28 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 29 29 30 30 #include <linux/bitmap.h> 31 - #include <linux/bitfield.h> 32 31 #include <linux/device.h> 33 32 #include <linux/err.h> 34 33 #include <linux/export.h> ··· 72 73 73 74 #define MAX_DVFS_DOMAINS 8 74 75 #define MAX_DVFS_OPPS 16 76 + #define DVFS_LATENCY(hdr) (le32_to_cpu(hdr) >> 16) 77 + #define DVFS_OPP_COUNT(hdr) ((le32_to_cpu(hdr) >> 8) & 0xff) 75 78 76 - #define PROTO_REV_MAJOR_MASK GENMASK(31, 16) 77 - #define PROTO_REV_MINOR_MASK GENMASK(15, 0) 79 + #define PROTOCOL_REV_MINOR_BITS 16 80 + #define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1) 81 + #define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS) 82 + #define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK) 78 83 79 - #define FW_REV_MAJOR_MASK GENMASK(31, 24) 80 - #define FW_REV_MINOR_MASK GENMASK(23, 16) 81 - #define FW_REV_PATCH_MASK GENMASK(15, 0) 84 + #define FW_REV_MAJOR_BITS 24 85 + #define FW_REV_MINOR_BITS 16 86 + #define FW_REV_PATCH_MASK ((1U << FW_REV_MINOR_BITS) - 1) 87 + #define FW_REV_MINOR_MASK ((1U << FW_REV_MAJOR_BITS) - 1) 88 + #define FW_REV_MAJOR(x) ((x) >> FW_REV_MAJOR_BITS) 89 + #define FW_REV_MINOR(x) (((x) & FW_REV_MINOR_MASK) >> FW_REV_MINOR_BITS) 90 + #define FW_REV_PATCH(x) ((x) & FW_REV_PATCH_MASK) 82 91 83 92 #define MAX_RX_TIMEOUT (msecs_to_jiffies(30)) 84 93 ··· 311 304 u8 name[20]; 312 305 } __packed; 313 306 307 + struct clk_get_value { 308 + __le32 rate; 309 + } __packed; 310 + 314 311 struct clk_set_value { 315 312 __le16 id; 316 313 __le16 reserved; ··· 328 317 } __packed; 329 318 330 319 struct dvfs_info { 331 - u8 domain; 332 - u8 opp_count; 333 - __le16 latency; 320 + __le32 header; 334 321 struct { 335 322 __le32 freq; 336 323 __le32 m_volt; ··· 350 341 u8 trigger_type; 351 342 char name[20]; 352 343 }; 344 + 345 + struct sensor_value { 346 + __le32 lo_val; 347 + __le32 hi_val; 348 + } __packed; 353 349 354 350 struct dev_pstate_set { 355 351 __le16 dev_id; ··· 419 405 
unsigned int len; 420 406 421 407 if (scpi_info->is_legacy) { 422 - struct legacy_scpi_shared_mem __iomem *mem = 423 - ch->rx_payload; 408 + struct legacy_scpi_shared_mem *mem = ch->rx_payload; 424 409 425 410 /* RX Length is not replied by the legacy Firmware */ 426 411 len = match->rx_len; 427 412 428 - match->status = ioread32(&mem->status); 413 + match->status = le32_to_cpu(mem->status); 429 414 memcpy_fromio(match->rx_buf, mem->payload, len); 430 415 } else { 431 - struct scpi_shared_mem __iomem *mem = ch->rx_payload; 416 + struct scpi_shared_mem *mem = ch->rx_payload; 432 417 433 418 len = min(match->rx_len, CMD_SIZE(cmd)); 434 419 435 - match->status = ioread32(&mem->status); 420 + match->status = le32_to_cpu(mem->status); 436 421 memcpy_fromio(match->rx_buf, mem->payload, len); 437 422 } 438 423 ··· 445 432 static void scpi_handle_remote_msg(struct mbox_client *c, void *msg) 446 433 { 447 434 struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); 448 - struct scpi_shared_mem __iomem *mem = ch->rx_payload; 435 + struct scpi_shared_mem *mem = ch->rx_payload; 449 436 u32 cmd = 0; 450 437 451 438 if (!scpi_info->is_legacy) 452 - cmd = ioread32(&mem->command); 439 + cmd = le32_to_cpu(mem->command); 453 440 454 441 scpi_process_cmd(ch, cmd); 455 442 } ··· 459 446 unsigned long flags; 460 447 struct scpi_xfer *t = msg; 461 448 struct scpi_chan *ch = container_of(c, struct scpi_chan, cl); 462 - struct scpi_shared_mem __iomem *mem = ch->tx_payload; 449 + struct scpi_shared_mem *mem = (struct scpi_shared_mem *)ch->tx_payload; 463 450 464 451 if (t->tx_buf) { 465 452 if (scpi_info->is_legacy) ··· 478 465 } 479 466 480 467 if (!scpi_info->is_legacy) 481 - iowrite32(t->cmd, &mem->command); 468 + mem->command = cpu_to_le32(t->cmd); 482 469 } 483 470 484 471 static struct scpi_xfer *get_scpi_xfer(struct scpi_chan *ch) ··· 583 570 static unsigned long scpi_clk_get_val(u16 clk_id) 584 571 { 585 572 int ret; 586 - __le32 rate; 573 + struct clk_get_value clk; 587 574 
__le16 le_clk_id = cpu_to_le16(clk_id); 588 575 589 576 ret = scpi_send_message(CMD_GET_CLOCK_VALUE, &le_clk_id, 590 - sizeof(le_clk_id), &rate, sizeof(rate)); 577 + sizeof(le_clk_id), &clk, sizeof(clk)); 591 578 592 - return ret ? ret : le32_to_cpu(rate); 579 + return ret ? ret : le32_to_cpu(clk.rate); 593 580 } 594 581 595 582 static int scpi_clk_set_val(u16 clk_id, unsigned long rate) ··· 645 632 646 633 static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain) 647 634 { 648 - if (domain >= MAX_DVFS_DOMAINS) 649 - return ERR_PTR(-EINVAL); 650 - 651 - return scpi_info->dvfs[domain] ?: ERR_PTR(-EINVAL); 652 - } 653 - 654 - static int scpi_dvfs_populate_info(struct device *dev, u8 domain) 655 - { 656 635 struct scpi_dvfs_info *info; 657 636 struct scpi_opp *opp; 658 637 struct dvfs_info buf; 659 638 int ret, i; 660 639 640 + if (domain >= MAX_DVFS_DOMAINS) 641 + return ERR_PTR(-EINVAL); 642 + 643 + if (scpi_info->dvfs[domain]) /* data already populated */ 644 + return scpi_info->dvfs[domain]; 645 + 661 646 ret = scpi_send_message(CMD_GET_DVFS_INFO, &domain, sizeof(domain), 662 647 &buf, sizeof(buf)); 663 648 if (ret) 664 - return ret; 649 + return ERR_PTR(ret); 665 650 666 - info = devm_kmalloc(dev, sizeof(*info), GFP_KERNEL); 651 + info = kmalloc(sizeof(*info), GFP_KERNEL); 667 652 if (!info) 668 - return -ENOMEM; 653 + return ERR_PTR(-ENOMEM); 669 654 670 - info->count = buf.opp_count; 671 - info->latency = le16_to_cpu(buf.latency) * 1000; /* uS to nS */ 655 + info->count = DVFS_OPP_COUNT(buf.header); 656 + info->latency = DVFS_LATENCY(buf.header) * 1000; /* uS to nS */ 672 657 673 - info->opps = devm_kcalloc(dev, info->count, sizeof(*opp), GFP_KERNEL); 674 - if (!info->opps) 675 - return -ENOMEM; 658 + info->opps = kcalloc(info->count, sizeof(*opp), GFP_KERNEL); 659 + if (!info->opps) { 660 + kfree(info); 661 + return ERR_PTR(-ENOMEM); 662 + } 676 663 677 664 for (i = 0, opp = info->opps; i < info->count; i++, opp++) { 678 665 opp->freq = 
le32_to_cpu(buf.opps[i].freq); ··· 682 669 sort(info->opps, info->count, sizeof(*opp), opp_cmp_func, NULL); 683 670 684 671 scpi_info->dvfs[domain] = info; 685 - return 0; 686 - } 687 - 688 - static void scpi_dvfs_populate(struct device *dev) 689 - { 690 - int domain; 691 - 692 - for (domain = 0; domain < MAX_DVFS_DOMAINS; domain++) 693 - scpi_dvfs_populate_info(dev, domain); 672 + return info; 694 673 } 695 674 696 675 static int scpi_dev_domain_id(struct device *dev) ··· 712 707 713 708 if (IS_ERR(info)) 714 709 return PTR_ERR(info); 710 + 711 + if (!info->latency) 712 + return 0; 715 713 716 714 return info->latency; 717 715 } ··· 776 768 static int scpi_sensor_get_value(u16 sensor, u64 *val) 777 769 { 778 770 __le16 id = cpu_to_le16(sensor); 779 - __le64 value; 771 + struct sensor_value buf; 780 772 int ret; 781 773 782 774 ret = scpi_send_message(CMD_SENSOR_VALUE, &id, sizeof(id), 783 - &value, sizeof(value)); 775 + &buf, sizeof(buf)); 784 776 if (ret) 785 777 return ret; 786 778 787 779 if (scpi_info->is_legacy) 788 - /* only 32-bits supported, upper 32 bits can be junk */ 789 - *val = le32_to_cpup((__le32 *)&value); 780 + /* only 32-bits supported, hi_val can be junk */ 781 + *val = le32_to_cpu(buf.lo_val); 790 782 else 791 - *val = le64_to_cpu(value); 783 + *val = (u64)le32_to_cpu(buf.hi_val) << 32 | 784 + le32_to_cpu(buf.lo_val); 792 785 793 786 return 0; 794 787 } ··· 862 853 static ssize_t protocol_version_show(struct device *dev, 863 854 struct device_attribute *attr, char *buf) 864 855 { 865 - return sprintf(buf, "%lu.%lu\n", 866 - FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version), 867 - FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version)); 856 + struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); 857 + 858 + return sprintf(buf, "%d.%d\n", 859 + PROTOCOL_REV_MAJOR(scpi_info->protocol_version), 860 + PROTOCOL_REV_MINOR(scpi_info->protocol_version)); 868 861 } 869 862 static DEVICE_ATTR_RO(protocol_version); 870 863 871 864 static 
ssize_t firmware_version_show(struct device *dev, 872 865 struct device_attribute *attr, char *buf) 873 866 { 874 - return sprintf(buf, "%lu.%lu.%lu\n", 875 - FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version), 876 - FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version), 877 - FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version)); 867 + struct scpi_drvinfo *scpi_info = dev_get_drvdata(dev); 868 + 869 + return sprintf(buf, "%d.%d.%d\n", 870 + FW_REV_MAJOR(scpi_info->firmware_version), 871 + FW_REV_MINOR(scpi_info->firmware_version), 872 + FW_REV_PATCH(scpi_info->firmware_version)); 878 873 } 879 874 static DEVICE_ATTR_RO(firmware_version); 880 875 ··· 889 876 }; 890 877 ATTRIBUTE_GROUPS(versions); 891 878 892 - static void scpi_free_channels(void *data) 879 + static void 880 + scpi_free_channels(struct device *dev, struct scpi_chan *pchan, int count) 893 881 { 894 - struct scpi_drvinfo *info = data; 895 882 int i; 896 883 897 - for (i = 0; i < info->num_chans; i++) 898 - mbox_free_channel(info->channels[i].chan); 884 + for (i = 0; i < count && pchan->chan; i++, pchan++) { 885 + mbox_free_channel(pchan->chan); 886 + devm_kfree(dev, pchan->xfers); 887 + devm_iounmap(dev, pchan->rx_payload); 888 + } 889 + } 890 + 891 + static int scpi_remove(struct platform_device *pdev) 892 + { 893 + int i; 894 + struct device *dev = &pdev->dev; 895 + struct scpi_drvinfo *info = platform_get_drvdata(pdev); 896 + 897 + scpi_info = NULL; /* stop exporting SCPI ops through get_scpi_ops */ 898 + 899 + of_platform_depopulate(dev); 900 + sysfs_remove_groups(&dev->kobj, versions_groups); 901 + scpi_free_channels(dev, info->channels, info->num_chans); 902 + platform_set_drvdata(pdev, NULL); 903 + 904 + for (i = 0; i < MAX_DVFS_DOMAINS && info->dvfs[i]; i++) { 905 + kfree(info->dvfs[i]->opps); 906 + kfree(info->dvfs[i]); 907 + } 908 + devm_kfree(dev, info->channels); 909 + devm_kfree(dev, info); 910 + 911 + return 0; 899 912 } 900 913 901 914 #define MAX_SCPI_XFERS 10 ··· 952 
913 { 953 914 int count, idx, ret; 954 915 struct resource res; 916 + struct scpi_chan *scpi_chan; 955 917 struct device *dev = &pdev->dev; 956 918 struct device_node *np = dev->of_node; 957 919 ··· 969 929 return -ENODEV; 970 930 } 971 931 972 - scpi_info->channels = devm_kcalloc(dev, count, sizeof(struct scpi_chan), 973 - GFP_KERNEL); 974 - if (!scpi_info->channels) 932 + scpi_chan = devm_kcalloc(dev, count, sizeof(*scpi_chan), GFP_KERNEL); 933 + if (!scpi_chan) 975 934 return -ENOMEM; 976 935 977 - ret = devm_add_action(dev, scpi_free_channels, scpi_info); 978 - if (ret) 979 - return ret; 980 - 981 - for (; scpi_info->num_chans < count; scpi_info->num_chans++) { 936 + for (idx = 0; idx < count; idx++) { 982 937 resource_size_t size; 983 - int idx = scpi_info->num_chans; 984 - struct scpi_chan *pchan = scpi_info->channels + idx; 938 + struct scpi_chan *pchan = scpi_chan + idx; 985 939 struct mbox_client *cl = &pchan->cl; 986 940 struct device_node *shmem = of_parse_phandle(np, "shmem", idx); 987 941 ··· 983 949 of_node_put(shmem); 984 950 if (ret) { 985 951 dev_err(dev, "failed to get SCPI payload mem resource\n"); 986 - return ret; 952 + goto err; 987 953 } 988 954 989 955 size = resource_size(&res); 990 956 pchan->rx_payload = devm_ioremap(dev, res.start, size); 991 957 if (!pchan->rx_payload) { 992 958 dev_err(dev, "failed to ioremap SCPI payload\n"); 993 - return -EADDRNOTAVAIL; 959 + ret = -EADDRNOTAVAIL; 960 + goto err; 994 961 } 995 962 pchan->tx_payload = pchan->rx_payload + (size >> 1); 996 963 ··· 1017 982 dev_err(dev, "failed to get channel%d err %d\n", 1018 983 idx, ret); 1019 984 } 985 + err: 986 + scpi_free_channels(dev, scpi_chan, idx); 987 + scpi_info = NULL; 1020 988 return ret; 1021 989 } 1022 990 991 + scpi_info->channels = scpi_chan; 992 + scpi_info->num_chans = count; 1023 993 scpi_info->commands = scpi_std_commands; 1024 - scpi_info->scpi_ops = &scpi_ops; 994 + 995 + platform_set_drvdata(pdev, scpi_info); 1025 996 1026 997 if 
(scpi_info->is_legacy) { 1027 998 /* Replace with legacy variants */ ··· 1043 1002 ret = scpi_init_versions(scpi_info); 1044 1003 if (ret) { 1045 1004 dev_err(dev, "incorrect or no SCP firmware found\n"); 1005 + scpi_remove(pdev); 1046 1006 return ret; 1047 1007 } 1048 1008 1049 - scpi_dvfs_populate(dev); 1009 + _dev_info(dev, "SCP Protocol %d.%d Firmware %d.%d.%d version\n", 1010 + PROTOCOL_REV_MAJOR(scpi_info->protocol_version), 1011 + PROTOCOL_REV_MINOR(scpi_info->protocol_version), 1012 + FW_REV_MAJOR(scpi_info->firmware_version), 1013 + FW_REV_MINOR(scpi_info->firmware_version), 1014 + FW_REV_PATCH(scpi_info->firmware_version)); 1015 + scpi_info->scpi_ops = &scpi_ops; 1050 1016 1051 - _dev_info(dev, "SCP Protocol %lu.%lu Firmware %lu.%lu.%lu version\n", 1052 - FIELD_GET(PROTO_REV_MAJOR_MASK, scpi_info->protocol_version), 1053 - FIELD_GET(PROTO_REV_MINOR_MASK, scpi_info->protocol_version), 1054 - FIELD_GET(FW_REV_MAJOR_MASK, scpi_info->firmware_version), 1055 - FIELD_GET(FW_REV_MINOR_MASK, scpi_info->firmware_version), 1056 - FIELD_GET(FW_REV_PATCH_MASK, scpi_info->firmware_version)); 1057 - 1058 - ret = devm_device_add_groups(dev, versions_groups); 1017 + ret = sysfs_create_groups(&dev->kobj, versions_groups); 1059 1018 if (ret) 1060 1019 dev_err(dev, "unable to create sysfs version group\n"); 1061 1020 1062 - return devm_of_platform_populate(dev); 1021 + return of_platform_populate(dev->of_node, NULL, NULL, dev); 1063 1022 } 1064 1023 1065 1024 static const struct of_device_id scpi_of_match[] = { ··· 1076 1035 .of_match_table = scpi_of_match, 1077 1036 }, 1078 1037 .probe = scpi_probe, 1038 + .remove = scpi_remove, 1079 1039 }; 1080 1040 module_platform_driver(scpi_driver); 1081 1041
+48 -15
drivers/gpu/drm/drm_connector.c
··· 152 152 connector->funcs->destroy(connector); 153 153 } 154 154 155 - static void drm_connector_free_work_fn(struct work_struct *work) 155 + void drm_connector_free_work_fn(struct work_struct *work) 156 156 { 157 - struct drm_connector *connector = 158 - container_of(work, struct drm_connector, free_work); 159 - struct drm_device *dev = connector->dev; 157 + struct drm_connector *connector, *n; 158 + struct drm_device *dev = 159 + container_of(work, struct drm_device, mode_config.connector_free_work); 160 + struct drm_mode_config *config = &dev->mode_config; 161 + unsigned long flags; 162 + struct llist_node *freed; 160 163 161 - drm_mode_object_unregister(dev, &connector->base); 162 - connector->funcs->destroy(connector); 164 + spin_lock_irqsave(&config->connector_list_lock, flags); 165 + freed = llist_del_all(&config->connector_free_list); 166 + spin_unlock_irqrestore(&config->connector_list_lock, flags); 167 + 168 + llist_for_each_entry_safe(connector, n, freed, free_node) { 169 + drm_mode_object_unregister(dev, &connector->base); 170 + connector->funcs->destroy(connector); 171 + } 163 172 } 164 173 165 174 /** ··· 199 190 false, drm_connector_free); 200 191 if (ret) 201 192 return ret; 202 - 203 - INIT_WORK(&connector->free_work, drm_connector_free_work_fn); 204 193 205 194 connector->base.properties = &connector->properties; 206 195 connector->dev = dev; ··· 554 547 * actually release the connector when dropping our final reference. 
555 548 */ 556 549 static void 557 - drm_connector_put_safe(struct drm_connector *conn) 550 + __drm_connector_put_safe(struct drm_connector *conn) 558 551 { 559 - if (refcount_dec_and_test(&conn->base.refcount.refcount)) 560 - schedule_work(&conn->free_work); 552 + struct drm_mode_config *config = &conn->dev->mode_config; 553 + 554 + lockdep_assert_held(&config->connector_list_lock); 555 + 556 + if (!refcount_dec_and_test(&conn->base.refcount.refcount)) 557 + return; 558 + 559 + llist_add(&conn->free_node, &config->connector_free_list); 560 + schedule_work(&config->connector_free_work); 561 561 } 562 562 563 563 /** ··· 596 582 597 583 /* loop until it's not a zombie connector */ 598 584 } while (!kref_get_unless_zero(&iter->conn->base.refcount)); 599 - spin_unlock_irqrestore(&config->connector_list_lock, flags); 600 585 601 586 if (old_conn) 602 - drm_connector_put_safe(old_conn); 587 + __drm_connector_put_safe(old_conn); 588 + spin_unlock_irqrestore(&config->connector_list_lock, flags); 603 589 604 590 return iter->conn; 605 591 } ··· 616 602 */ 617 603 void drm_connector_list_iter_end(struct drm_connector_list_iter *iter) 618 604 { 605 + struct drm_mode_config *config = &iter->dev->mode_config; 606 + unsigned long flags; 607 + 619 608 iter->dev = NULL; 620 - if (iter->conn) 621 - drm_connector_put_safe(iter->conn); 609 + if (iter->conn) { 610 + spin_lock_irqsave(&config->connector_list_lock, flags); 611 + __drm_connector_put_safe(iter->conn); 612 + spin_unlock_irqrestore(&config->connector_list_lock, flags); 613 + } 622 614 lock_release(&connector_list_iter_dep_map, 0, _RET_IP_); 623 615 } 624 616 EXPORT_SYMBOL(drm_connector_list_iter_end); ··· 1250 1230 1251 1231 if (edid) 1252 1232 size = EDID_LENGTH * (1 + edid->extensions); 1233 + 1234 + /* Set the display info, using edid if available, otherwise 1235 + * reseting the values to defaults. 
This duplicates the work 1236 + * done in drm_add_edid_modes, but that function is not 1237 + * consistently called before this one in all drivers and the 1238 + * computation is cheap enough that it seems better to 1239 + * duplicate it rather than attempt to ensure some arbitrary 1240 + * ordering of calls. 1241 + */ 1242 + if (edid) 1243 + drm_add_display_info(connector, edid); 1244 + else 1245 + drm_reset_display_info(connector); 1253 1246 1254 1247 drm_object_property_set_value(&connector->base, 1255 1248 dev->mode_config.non_desktop_property,
+1
drivers/gpu/drm/drm_crtc_internal.h
··· 142 142 uint64_t value); 143 143 int drm_connector_create_standard_properties(struct drm_device *dev); 144 144 const char *drm_get_connector_force_name(enum drm_connector_force force); 145 + void drm_connector_free_work_fn(struct work_struct *work); 145 146 146 147 /* IOCTL */ 147 148 int drm_mode_connector_property_set_ioctl(struct drm_device *dev,
+38 -14
drivers/gpu/drm/drm_edid.c
··· 1731 1731 * 1732 1732 * Returns true if @vendor is in @edid, false otherwise 1733 1733 */ 1734 - static bool edid_vendor(struct edid *edid, const char *vendor) 1734 + static bool edid_vendor(const struct edid *edid, const char *vendor) 1735 1735 { 1736 1736 char edid_vendor[3]; 1737 1737 ··· 1749 1749 * 1750 1750 * This tells subsequent routines what fixes they need to apply. 1751 1751 */ 1752 - static u32 edid_get_quirks(struct edid *edid) 1752 + static u32 edid_get_quirks(const struct edid *edid) 1753 1753 { 1754 1754 const struct edid_quirk *quirk; 1755 1755 int i; ··· 2813 2813 /* 2814 2814 * Search EDID for CEA extension block. 2815 2815 */ 2816 - static u8 *drm_find_edid_extension(struct edid *edid, int ext_id) 2816 + static u8 *drm_find_edid_extension(const struct edid *edid, int ext_id) 2817 2817 { 2818 2818 u8 *edid_ext = NULL; 2819 2819 int i; ··· 2835 2835 return edid_ext; 2836 2836 } 2837 2837 2838 - static u8 *drm_find_cea_extension(struct edid *edid) 2838 + static u8 *drm_find_cea_extension(const struct edid *edid) 2839 2839 { 2840 2840 return drm_find_edid_extension(edid, CEA_EXT); 2841 2841 } 2842 2842 2843 - static u8 *drm_find_displayid_extension(struct edid *edid) 2843 + static u8 *drm_find_displayid_extension(const struct edid *edid) 2844 2844 { 2845 2845 return drm_find_edid_extension(edid, DISPLAYID_EXT); 2846 2846 } ··· 4363 4363 } 4364 4364 4365 4365 static void drm_parse_cea_ext(struct drm_connector *connector, 4366 - struct edid *edid) 4366 + const struct edid *edid) 4367 4367 { 4368 4368 struct drm_display_info *info = &connector->display_info; 4369 4369 const u8 *edid_ext; ··· 4397 4397 } 4398 4398 } 4399 4399 4400 - static void drm_add_display_info(struct drm_connector *connector, 4401 - struct edid *edid, u32 quirks) 4400 + /* A connector has no EDID information, so we've got no EDID to compute quirks from. 
Reset 4401 + * all of the values which would have been set from EDID 4402 + */ 4403 + void 4404 + drm_reset_display_info(struct drm_connector *connector) 4402 4405 { 4403 4406 struct drm_display_info *info = &connector->display_info; 4407 + 4408 + info->width_mm = 0; 4409 + info->height_mm = 0; 4410 + 4411 + info->bpc = 0; 4412 + info->color_formats = 0; 4413 + info->cea_rev = 0; 4414 + info->max_tmds_clock = 0; 4415 + info->dvi_dual = false; 4416 + 4417 + info->non_desktop = 0; 4418 + } 4419 + EXPORT_SYMBOL_GPL(drm_reset_display_info); 4420 + 4421 + u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edid) 4422 + { 4423 + struct drm_display_info *info = &connector->display_info; 4424 + 4425 + u32 quirks = edid_get_quirks(edid); 4404 4426 4405 4427 info->width_mm = edid->width_cm * 10; 4406 4428 info->height_mm = edid->height_cm * 10; ··· 4436 4414 4437 4415 info->non_desktop = !!(quirks & EDID_QUIRK_NON_DESKTOP); 4438 4416 4417 + DRM_DEBUG_KMS("non_desktop set to %d\n", info->non_desktop); 4418 + 4439 4419 if (edid->revision < 3) 4440 - return; 4420 + return quirks; 4441 4421 4442 4422 if (!(edid->input & DRM_EDID_INPUT_DIGITAL)) 4443 - return; 4423 + return quirks; 4444 4424 4445 4425 drm_parse_cea_ext(connector, edid); 4446 4426 ··· 4462 4438 4463 4439 /* Only defined for 1.4 with digital displays */ 4464 4440 if (edid->revision < 4) 4465 - return; 4441 + return quirks; 4466 4442 4467 4443 switch (edid->input & DRM_EDID_DIGITAL_DEPTH_MASK) { 4468 4444 case DRM_EDID_DIGITAL_DEPTH_6: ··· 4497 4473 info->color_formats |= DRM_COLOR_FORMAT_YCRCB444; 4498 4474 if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422) 4499 4475 info->color_formats |= DRM_COLOR_FORMAT_YCRCB422; 4476 + return quirks; 4500 4477 } 4478 + EXPORT_SYMBOL_GPL(drm_add_display_info); 4501 4479 4502 4480 static int validate_displayid(u8 *displayid, int length, int idx) 4503 4481 { ··· 4653 4627 return 0; 4654 4628 } 4655 4629 4656 - quirks = edid_get_quirks(edid); 4657 - 4658 
4630 /* 4659 4631 * CEA-861-F adds ycbcr capability map block, for HDMI 2.0 sinks. 4660 4632 * To avoid multiple parsing of same block, lets parse that map 4661 4633 * from sink info, before parsing CEA modes. 4662 4634 */ 4663 - drm_add_display_info(connector, edid, quirks); 4635 + quirks = drm_add_display_info(connector, edid); 4664 4636 4665 4637 /* 4666 4638 * EDID spec says modes should be preferred in this order:
+2 -2
drivers/gpu/drm/drm_lease.c
··· 254 254 return lessee; 255 255 256 256 out_lessee: 257 - drm_master_put(&lessee); 258 - 259 257 mutex_unlock(&dev->mode_config.idr_mutex); 258 + 259 + drm_master_put(&lessee); 260 260 261 261 return ERR_PTR(error); 262 262 }
+5 -3
drivers/gpu/drm/drm_mm.c
··· 575 575 */ 576 576 void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new) 577 577 { 578 + struct drm_mm *mm = old->mm; 579 + 578 580 DRM_MM_BUG_ON(!old->allocated); 579 581 580 582 *new = *old; 581 583 582 584 list_replace(&old->node_list, &new->node_list); 583 - rb_replace_node(&old->rb, &new->rb, &old->mm->interval_tree.rb_root); 585 + rb_replace_node_cached(&old->rb, &new->rb, &mm->interval_tree); 584 586 585 587 if (drm_mm_hole_follows(old)) { 586 588 list_replace(&old->hole_stack, &new->hole_stack); 587 589 rb_replace_node(&old->rb_hole_size, 588 590 &new->rb_hole_size, 589 - &old->mm->holes_size); 591 + &mm->holes_size); 590 592 rb_replace_node(&old->rb_hole_addr, 591 593 &new->rb_hole_addr, 592 - &old->mm->holes_addr); 594 + &mm->holes_addr); 593 595 } 594 596 595 597 old->allocated = false;
+4 -1
drivers/gpu/drm/drm_mode_config.c
··· 382 382 ida_init(&dev->mode_config.connector_ida); 383 383 spin_lock_init(&dev->mode_config.connector_list_lock); 384 384 385 + init_llist_head(&dev->mode_config.connector_free_list); 386 + INIT_WORK(&dev->mode_config.connector_free_work, drm_connector_free_work_fn); 387 + 385 388 drm_mode_create_standard_properties(dev); 386 389 387 390 /* Just to be sure */ ··· 435 432 } 436 433 drm_connector_list_iter_end(&conn_iter); 437 434 /* connector_iter drops references in a work item. */ 438 - flush_scheduled_work(); 435 + flush_work(&dev->mode_config.connector_free_work); 439 436 if (WARN_ON(!list_empty(&dev->mode_config.connector_list))) { 440 437 drm_connector_list_iter_begin(dev, &conn_iter); 441 438 drm_for_each_connector_iter(connector, &conn_iter)
+3 -1
drivers/gpu/drm/vc4/vc4_gem.c
··· 888 888 /* If we got force-completed because of GPU reset rather than 889 889 * through our IRQ handler, signal the fence now. 890 890 */ 891 - if (exec->fence) 891 + if (exec->fence) { 892 892 dma_fence_signal(exec->fence); 893 + dma_fence_put(exec->fence); 894 + } 893 895 894 896 if (exec->bo) { 895 897 for (i = 0; i < exec->bo_count; i++) {
+1
drivers/gpu/drm/vc4/vc4_irq.c
··· 139 139 list_move_tail(&exec->head, &vc4->job_done_list); 140 140 if (exec->fence) { 141 141 dma_fence_signal_locked(exec->fence); 142 + dma_fence_put(exec->fence); 142 143 exec->fence = NULL; 143 144 } 144 145 vc4_submit_next_render_job(dev);
+4 -2
drivers/hwtracing/stm/ftrace.c
··· 42 42 * @len: length of the data packet 43 43 */ 44 44 static void notrace 45 - stm_ftrace_write(const void *buf, unsigned int len) 45 + stm_ftrace_write(struct trace_export *export, const void *buf, unsigned int len) 46 46 { 47 - stm_source_write(&stm_ftrace.data, STM_FTRACE_CHAN, buf, len); 47 + struct stm_ftrace *stm = container_of(export, struct stm_ftrace, ftrace); 48 + 49 + stm_source_write(&stm->data, STM_FTRACE_CHAN, buf, len); 48 50 } 49 51 50 52 static int stm_ftrace_link(struct stm_source_data *data)
+1 -1
drivers/i2c/busses/i2c-cht-wc.c
··· 379 379 return 0; 380 380 } 381 381 382 - static struct platform_device_id cht_wc_i2c_adap_id_table[] = { 382 + static const struct platform_device_id cht_wc_i2c_adap_id_table[] = { 383 383 { .name = "cht_wcove_ext_chgr" }, 384 384 {}, 385 385 };
+1 -1
drivers/i2c/busses/i2c-piix4.c
··· 983 983 984 984 if (adapdata->smba) { 985 985 i2c_del_adapter(adap); 986 - if (adapdata->port == (0 << 1)) { 986 + if (adapdata->port == (0 << piix4_port_shift_sb800)) { 987 987 release_region(adapdata->smba, SMBIOSIZE); 988 988 if (adapdata->sb800_main) 989 989 release_region(SB800_PIIX4_SMB_IDX, 2);
+2 -1
drivers/i2c/busses/i2c-stm32.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * i2c-stm32.h 3 4 * 4 5 * Copyright (C) M'boumba Cedric Madianga 2017 6 + * Copyright (C) STMicroelectronics 2017 5 7 * Author: M'boumba Cedric Madianga <cedric.madianga@gmail.com> 6 8 * 7 - * License terms: GNU General Public License (GPL), version 2 8 9 */ 9 10 10 11 #ifndef _I2C_STM32_H
+2 -1
drivers/i2c/busses/i2c-stm32f4.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Driver for STMicroelectronics STM32 I2C controller 3 4 * ··· 7 6 * http://www.st.com/resource/en/reference_manual/DM00031020.pdf 8 7 * 9 8 * Copyright (C) M'boumba Cedric Madianga 2016 9 + * Copyright (C) STMicroelectronics 2017 10 10 * Author: M'boumba Cedric Madianga <cedric.madianga@gmail.com> 11 11 * 12 12 * This driver is based on i2c-st.c 13 13 * 14 - * License terms: GNU General Public License (GPL), version 2 15 14 */ 16 15 17 16 #include <linux/clk.h>
+2 -1
drivers/i2c/busses/i2c-stm32f7.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Driver for STMicroelectronics STM32F7 I2C controller 3 4 * ··· 8 7 * http://www.st.com/resource/en/reference_manual/dm00124865.pdf 9 8 * 10 9 * Copyright (C) M'boumba Cedric Madianga 2017 10 + * Copyright (C) STMicroelectronics 2017 11 11 * Author: M'boumba Cedric Madianga <cedric.madianga@gmail.com> 12 12 * 13 13 * This driver is based on i2c-stm32f4.c 14 14 * 15 - * License terms: GNU General Public License (GPL), version 2 16 15 */ 17 16 #include <linux/clk.h> 18 17 #include <linux/delay.h>
+1 -1
drivers/infiniband/core/cma.c
··· 4458 4458 return skb->len; 4459 4459 } 4460 4460 4461 - static const struct rdma_nl_cbs cma_cb_table[] = { 4461 + static const struct rdma_nl_cbs cma_cb_table[RDMA_NL_RDMA_CM_NUM_OPS] = { 4462 4462 [RDMA_NL_RDMA_CM_ID_STATS] = { .dump = cma_get_id_stats}, 4463 4463 }; 4464 4464
+1 -1
drivers/infiniband/core/device.c
··· 1146 1146 } 1147 1147 EXPORT_SYMBOL(ib_get_net_dev_by_params); 1148 1148 1149 - static const struct rdma_nl_cbs ibnl_ls_cb_table[] = { 1149 + static const struct rdma_nl_cbs ibnl_ls_cb_table[RDMA_NL_LS_NUM_OPS] = { 1150 1150 [RDMA_NL_LS_OP_RESOLVE] = { 1151 1151 .doit = ib_nl_handle_resolve_resp, 1152 1152 .flags = RDMA_NL_ADMIN_PERM,
+1 -1
drivers/infiniband/core/iwcm.c
··· 80 80 } 81 81 EXPORT_SYMBOL(iwcm_reject_msg); 82 82 83 - static struct rdma_nl_cbs iwcm_nl_cb_table[] = { 83 + static struct rdma_nl_cbs iwcm_nl_cb_table[RDMA_NL_IWPM_NUM_OPS] = { 84 84 [RDMA_NL_IWPM_REG_PID] = {.dump = iwpm_register_pid_cb}, 85 85 [RDMA_NL_IWPM_ADD_MAPPING] = {.dump = iwpm_add_mapping_cb}, 86 86 [RDMA_NL_IWPM_QUERY_MAPPING] = {.dump = iwpm_add_and_query_mapping_cb},
+1 -1
drivers/infiniband/core/nldev.c
··· 303 303 return skb->len; 304 304 } 305 305 306 - static const struct rdma_nl_cbs nldev_cb_table[] = { 306 + static const struct rdma_nl_cbs nldev_cb_table[RDMA_NLDEV_NUM_OPS] = { 307 307 [RDMA_NLDEV_CMD_GET] = { 308 308 .doit = nldev_get_doit, 309 309 .dump = nldev_get_dumpit,
+5 -2
drivers/infiniband/core/security.c
··· 739 739 if (!rdma_protocol_ib(map->agent.device, map->agent.port_num)) 740 740 return 0; 741 741 742 - if (map->agent.qp->qp_type == IB_QPT_SMI && !map->agent.smp_allowed) 743 - return -EACCES; 742 + if (map->agent.qp->qp_type == IB_QPT_SMI) { 743 + if (!map->agent.smp_allowed) 744 + return -EACCES; 745 + return 0; 746 + } 744 747 745 748 return ib_security_pkey_access(map->agent.device, 746 749 map->agent.port_num,
+6
drivers/infiniband/core/uverbs_cmd.c
··· 1971 1971 goto release_qp; 1972 1972 } 1973 1973 1974 + if ((cmd->base.attr_mask & IB_QP_ALT_PATH) && 1975 + !rdma_is_port_valid(qp->device, cmd->base.alt_port_num)) { 1976 + ret = -EINVAL; 1977 + goto release_qp; 1978 + } 1979 + 1974 1980 attr->qp_state = cmd->base.qp_state; 1975 1981 attr->cur_qp_state = cmd->base.cur_qp_state; 1976 1982 attr->path_mtu = cmd->base.path_mtu;
+5
drivers/infiniband/hw/cxgb4/cq.c
··· 395 395 396 396 static int cqe_completes_wr(struct t4_cqe *cqe, struct t4_wq *wq) 397 397 { 398 + if (CQE_OPCODE(cqe) == C4IW_DRAIN_OPCODE) { 399 + WARN_ONCE(1, "Unexpected DRAIN CQE qp id %u!\n", wq->sq.qid); 400 + return 0; 401 + } 402 + 398 403 if (CQE_OPCODE(cqe) == FW_RI_TERMINATE) 399 404 return 0; 400 405
+16 -6
drivers/infiniband/hw/cxgb4/qp.c
··· 868 868 869 869 qhp = to_c4iw_qp(ibqp); 870 870 spin_lock_irqsave(&qhp->lock, flag); 871 - if (t4_wq_in_error(&qhp->wq)) { 871 + 872 + /* 873 + * If the qp has been flushed, then just insert a special 874 + * drain cqe. 875 + */ 876 + if (qhp->wq.flushed) { 872 877 spin_unlock_irqrestore(&qhp->lock, flag); 873 878 complete_sq_drain_wr(qhp, wr); 874 879 return err; ··· 1016 1011 1017 1012 qhp = to_c4iw_qp(ibqp); 1018 1013 spin_lock_irqsave(&qhp->lock, flag); 1019 - if (t4_wq_in_error(&qhp->wq)) { 1014 + 1015 + /* 1016 + * If the qp has been flushed, then just insert a special 1017 + * drain cqe. 1018 + */ 1019 + if (qhp->wq.flushed) { 1020 1020 spin_unlock_irqrestore(&qhp->lock, flag); 1021 1021 complete_rq_drain_wr(qhp, wr); 1022 1022 return err; ··· 1295 1285 spin_unlock_irqrestore(&rchp->lock, flag); 1296 1286 1297 1287 if (schp == rchp) { 1298 - if (t4_clear_cq_armed(&rchp->cq) && 1299 - (rq_flushed || sq_flushed)) { 1288 + if ((rq_flushed || sq_flushed) && 1289 + t4_clear_cq_armed(&rchp->cq)) { 1300 1290 spin_lock_irqsave(&rchp->comp_handler_lock, flag); 1301 1291 (*rchp->ibcq.comp_handler)(&rchp->ibcq, 1302 1292 rchp->ibcq.cq_context); 1303 1293 spin_unlock_irqrestore(&rchp->comp_handler_lock, flag); 1304 1294 } 1305 1295 } else { 1306 - if (t4_clear_cq_armed(&rchp->cq) && rq_flushed) { 1296 + if (rq_flushed && t4_clear_cq_armed(&rchp->cq)) { 1307 1297 spin_lock_irqsave(&rchp->comp_handler_lock, flag); 1308 1298 (*rchp->ibcq.comp_handler)(&rchp->ibcq, 1309 1299 rchp->ibcq.cq_context); 1310 1300 spin_unlock_irqrestore(&rchp->comp_handler_lock, flag); 1311 1301 } 1312 - if (t4_clear_cq_armed(&schp->cq) && sq_flushed) { 1302 + if (sq_flushed && t4_clear_cq_armed(&schp->cq)) { 1313 1303 spin_lock_irqsave(&schp->comp_handler_lock, flag); 1314 1304 (*schp->ibcq.comp_handler)(&schp->ibcq, 1315 1305 schp->ibcq.cq_context);
+19 -7
drivers/infiniband/hw/mlx4/qp.c
··· 666 666 return (-EOPNOTSUPP); 667 667 } 668 668 669 + if (ucmd->rx_hash_fields_mask & ~(MLX4_IB_RX_HASH_SRC_IPV4 | 670 + MLX4_IB_RX_HASH_DST_IPV4 | 671 + MLX4_IB_RX_HASH_SRC_IPV6 | 672 + MLX4_IB_RX_HASH_DST_IPV6 | 673 + MLX4_IB_RX_HASH_SRC_PORT_TCP | 674 + MLX4_IB_RX_HASH_DST_PORT_TCP | 675 + MLX4_IB_RX_HASH_SRC_PORT_UDP | 676 + MLX4_IB_RX_HASH_DST_PORT_UDP)) { 677 + pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n", 678 + ucmd->rx_hash_fields_mask); 679 + return (-EOPNOTSUPP); 680 + } 681 + 669 682 if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_IPV4) && 670 683 (ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_DST_IPV4)) { 671 684 rss_ctx->flags = MLX4_RSS_IPV4; ··· 704 691 return (-EOPNOTSUPP); 705 692 } 706 693 707 - if (rss_ctx->flags & MLX4_RSS_IPV4) { 694 + if (rss_ctx->flags & MLX4_RSS_IPV4) 708 695 rss_ctx->flags |= MLX4_RSS_UDP_IPV4; 709 - } else if (rss_ctx->flags & MLX4_RSS_IPV6) { 696 + if (rss_ctx->flags & MLX4_RSS_IPV6) 710 697 rss_ctx->flags |= MLX4_RSS_UDP_IPV6; 711 - } else { 698 + if (!(rss_ctx->flags & (MLX4_RSS_IPV6 | MLX4_RSS_IPV4))) { 712 699 pr_debug("RX Hash fields_mask is not supported - UDP must be set with IPv4 or IPv6\n"); 713 700 return (-EOPNOTSUPP); 714 701 } ··· 720 707 721 708 if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_PORT_TCP) && 722 709 (ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_DST_PORT_TCP)) { 723 - if (rss_ctx->flags & MLX4_RSS_IPV4) { 710 + if (rss_ctx->flags & MLX4_RSS_IPV4) 724 711 rss_ctx->flags |= MLX4_RSS_TCP_IPV4; 725 - } else if (rss_ctx->flags & MLX4_RSS_IPV6) { 712 + if (rss_ctx->flags & MLX4_RSS_IPV6) 726 713 rss_ctx->flags |= MLX4_RSS_TCP_IPV6; 727 - } else { 714 + if (!(rss_ctx->flags & (MLX4_RSS_IPV6 | MLX4_RSS_IPV4))) { 728 715 pr_debug("RX Hash fields_mask is not supported - TCP must be set with IPv4 or IPv6\n"); 729 716 return (-EOPNOTSUPP); 730 717 } 731 - 732 718 } else if ((ucmd->rx_hash_fields_mask & MLX4_IB_RX_HASH_SRC_PORT_TCP) || 733 719 (ucmd->rx_hash_fields_mask & 
MLX4_IB_RX_HASH_DST_PORT_TCP)) { 734 720 pr_debug("RX Hash fields_mask is not supported - both TCP SRC and DST must be set\n");
+1
drivers/infiniband/ulp/ipoib/ipoib_cm.c
··· 1145 1145 noio_flag = memalloc_noio_save(); 1146 1146 p->tx_ring = vzalloc(ipoib_sendq_size * sizeof(*p->tx_ring)); 1147 1147 if (!p->tx_ring) { 1148 + memalloc_noio_restore(noio_flag); 1148 1149 ret = -ENOMEM; 1149 1150 goto err_tx; 1150 1151 }
+6 -2
drivers/md/dm-bufio.c
··· 1611 1611 int l; 1612 1612 struct dm_buffer *b, *tmp; 1613 1613 unsigned long freed = 0; 1614 - unsigned long count = nr_to_scan; 1614 + unsigned long count = c->n_buffers[LIST_CLEAN] + 1615 + c->n_buffers[LIST_DIRTY]; 1615 1616 unsigned long retain_target = get_retain_buffers(c); 1616 1617 1617 1618 for (l = 0; l < LIST_SIZE; l++) { ··· 1648 1647 dm_bufio_shrink_count(struct shrinker *shrink, struct shrink_control *sc) 1649 1648 { 1650 1649 struct dm_bufio_client *c = container_of(shrink, struct dm_bufio_client, shrinker); 1650 + unsigned long count = READ_ONCE(c->n_buffers[LIST_CLEAN]) + 1651 + READ_ONCE(c->n_buffers[LIST_DIRTY]); 1652 + unsigned long retain_target = get_retain_buffers(c); 1651 1653 1652 - return READ_ONCE(c->n_buffers[LIST_CLEAN]) + READ_ONCE(c->n_buffers[LIST_DIRTY]); 1654 + return (count < retain_target) ? 0 : (count - retain_target); 1653 1655 } 1654 1656 1655 1657 /*
+6 -6
drivers/md/dm-cache-target.c
··· 3472 3472 { 3473 3473 int r; 3474 3474 3475 - r = dm_register_target(&cache_target); 3476 - if (r) { 3477 - DMERR("cache target registration failed: %d", r); 3478 - return r; 3479 - } 3480 - 3481 3475 migration_cache = KMEM_CACHE(dm_cache_migration, 0); 3482 3476 if (!migration_cache) { 3483 3477 dm_unregister_target(&cache_target); 3484 3478 return -ENOMEM; 3479 + } 3480 + 3481 + r = dm_register_target(&cache_target); 3482 + if (r) { 3483 + DMERR("cache target registration failed: %d", r); 3484 + return r; 3485 3485 } 3486 3486 3487 3487 return 0;
+51 -16
drivers/md/dm-mpath.c
··· 458 458 } while (0) 459 459 460 460 /* 461 + * Check whether bios must be queued in the device-mapper core rather 462 + * than here in the target. 463 + * 464 + * If MPATHF_QUEUE_IF_NO_PATH and MPATHF_SAVED_QUEUE_IF_NO_PATH hold 465 + * the same value then we are not between multipath_presuspend() 466 + * and multipath_resume() calls and we have no need to check 467 + * for the DMF_NOFLUSH_SUSPENDING flag. 468 + */ 469 + static bool __must_push_back(struct multipath *m, unsigned long flags) 470 + { 471 + return ((test_bit(MPATHF_QUEUE_IF_NO_PATH, &flags) != 472 + test_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &flags)) && 473 + dm_noflush_suspending(m->ti)); 474 + } 475 + 476 + /* 477 + * Following functions use READ_ONCE to get atomic access to 478 + * all m->flags to avoid taking spinlock 479 + */ 480 + static bool must_push_back_rq(struct multipath *m) 481 + { 482 + unsigned long flags = READ_ONCE(m->flags); 483 + return test_bit(MPATHF_QUEUE_IF_NO_PATH, &flags) || __must_push_back(m, flags); 484 + } 485 + 486 + static bool must_push_back_bio(struct multipath *m) 487 + { 488 + unsigned long flags = READ_ONCE(m->flags); 489 + return __must_push_back(m, flags); 490 + } 491 + 492 + /* 461 493 * Map cloned requests (request-based multipath) 462 494 */ 463 495 static int multipath_clone_and_map(struct dm_target *ti, struct request *rq, ··· 510 478 pgpath = choose_pgpath(m, nr_bytes); 511 479 512 480 if (!pgpath) { 513 - if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) 481 + if (must_push_back_rq(m)) 514 482 return DM_MAPIO_DELAY_REQUEUE; 515 483 dm_report_EIO(m); /* Failed */ 516 484 return DM_MAPIO_KILL; ··· 585 553 } 586 554 587 555 if (!pgpath) { 588 - if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) 556 + if (must_push_back_bio(m)) 589 557 return DM_MAPIO_REQUEUE; 590 558 dm_report_EIO(m); 591 559 return DM_MAPIO_KILL; ··· 683 651 assign_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags, 684 652 (save_old_value && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) || 685 653 
(!save_old_value && queue_if_no_path)); 686 - assign_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags, 687 - queue_if_no_path || dm_noflush_suspending(m->ti)); 654 + assign_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags, queue_if_no_path); 688 655 spin_unlock_irqrestore(&m->lock, flags); 689 656 690 657 if (!queue_if_no_path) { ··· 1517 1486 fail_path(pgpath); 1518 1487 1519 1488 if (atomic_read(&m->nr_valid_paths) == 0 && 1520 - !test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) { 1489 + !must_push_back_rq(m)) { 1521 1490 if (error == BLK_STS_IOERR) 1522 1491 dm_report_EIO(m); 1523 1492 /* complete with the original error */ ··· 1552 1521 1553 1522 if (atomic_read(&m->nr_valid_paths) == 0 && 1554 1523 !test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) { 1555 - dm_report_EIO(m); 1556 - *error = BLK_STS_IOERR; 1524 + if (must_push_back_bio(m)) { 1525 + r = DM_ENDIO_REQUEUE; 1526 + } else { 1527 + dm_report_EIO(m); 1528 + *error = BLK_STS_IOERR; 1529 + } 1557 1530 goto done; 1558 1531 } 1559 1532 ··· 1992 1957 { 1993 1958 int r; 1994 1959 1995 - r = dm_register_target(&multipath_target); 1996 - if (r < 0) { 1997 - DMERR("request-based register failed %d", r); 1998 - r = -EINVAL; 1999 - goto bad_register_target; 2000 - } 2001 - 2002 1960 kmultipathd = alloc_workqueue("kmpathd", WQ_MEM_RECLAIM, 0); 2003 1961 if (!kmultipathd) { 2004 1962 DMERR("failed to create workqueue kmpathd"); ··· 2013 1985 goto bad_alloc_kmpath_handlerd; 2014 1986 } 2015 1987 1988 + r = dm_register_target(&multipath_target); 1989 + if (r < 0) { 1990 + DMERR("request-based register failed %d", r); 1991 + r = -EINVAL; 1992 + goto bad_register_target; 1993 + } 1994 + 2016 1995 return 0; 2017 1996 1997 + bad_register_target: 1998 + destroy_workqueue(kmpath_handlerd); 2018 1999 bad_alloc_kmpath_handlerd: 2019 2000 destroy_workqueue(kmultipathd); 2020 2001 bad_alloc_kmultipathd: 2021 - dm_unregister_target(&multipath_target); 2022 - bad_register_target: 2023 2002 return r; 2024 2003 } 2025 2004
+24 -24
drivers/md/dm-snap.c
··· 2411 2411 return r; 2412 2412 } 2413 2413 2414 - r = dm_register_target(&snapshot_target); 2415 - if (r < 0) { 2416 - DMERR("snapshot target register failed %d", r); 2417 - goto bad_register_snapshot_target; 2418 - } 2419 - 2420 - r = dm_register_target(&origin_target); 2421 - if (r < 0) { 2422 - DMERR("Origin target register failed %d", r); 2423 - goto bad_register_origin_target; 2424 - } 2425 - 2426 - r = dm_register_target(&merge_target); 2427 - if (r < 0) { 2428 - DMERR("Merge target register failed %d", r); 2429 - goto bad_register_merge_target; 2430 - } 2431 - 2432 2414 r = init_origin_hash(); 2433 2415 if (r) { 2434 2416 DMERR("init_origin_hash failed."); ··· 2431 2449 goto bad_pending_cache; 2432 2450 } 2433 2451 2452 + r = dm_register_target(&snapshot_target); 2453 + if (r < 0) { 2454 + DMERR("snapshot target register failed %d", r); 2455 + goto bad_register_snapshot_target; 2456 + } 2457 + 2458 + r = dm_register_target(&origin_target); 2459 + if (r < 0) { 2460 + DMERR("Origin target register failed %d", r); 2461 + goto bad_register_origin_target; 2462 + } 2463 + 2464 + r = dm_register_target(&merge_target); 2465 + if (r < 0) { 2466 + DMERR("Merge target register failed %d", r); 2467 + goto bad_register_merge_target; 2468 + } 2469 + 2434 2470 return 0; 2435 2471 2436 - bad_pending_cache: 2437 - kmem_cache_destroy(exception_cache); 2438 - bad_exception_cache: 2439 - exit_origin_hash(); 2440 - bad_origin_hash: 2441 - dm_unregister_target(&merge_target); 2442 2472 bad_register_merge_target: 2443 2473 dm_unregister_target(&origin_target); 2444 2474 bad_register_origin_target: 2445 2475 dm_unregister_target(&snapshot_target); 2446 2476 bad_register_snapshot_target: 2477 + kmem_cache_destroy(pending_cache); 2478 + bad_pending_cache: 2479 + kmem_cache_destroy(exception_cache); 2480 + bad_exception_cache: 2481 + exit_origin_hash(); 2482 + bad_origin_hash: 2447 2483 dm_exception_store_exit(); 2448 2484 2449 2485 return r;
+3 -2
drivers/md/dm-table.c
··· 453 453 454 454 refcount_set(&dd->count, 1); 455 455 list_add(&dd->list, &t->devices); 456 + goto out; 456 457 457 458 } else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) { 458 459 r = upgrade_mode(dd, mode, t->md); 459 460 if (r) 460 461 return r; 461 - refcount_inc(&dd->count); 462 462 } 463 - 463 + refcount_inc(&dd->count); 464 + out: 464 465 *result = dd->dm_dev; 465 466 return 0; 466 467 }
+10 -12
drivers/md/dm-thin.c
··· 4355 4355 4356 4356 static int __init dm_thin_init(void) 4357 4357 { 4358 - int r; 4358 + int r = -ENOMEM; 4359 4359 4360 4360 pool_table_init(); 4361 4361 4362 + _new_mapping_cache = KMEM_CACHE(dm_thin_new_mapping, 0); 4363 + if (!_new_mapping_cache) 4364 + return r; 4365 + 4362 4366 r = dm_register_target(&thin_target); 4363 4367 if (r) 4364 - return r; 4368 + goto bad_new_mapping_cache; 4365 4369 4366 4370 r = dm_register_target(&pool_target); 4367 4371 if (r) 4368 - goto bad_pool_target; 4369 - 4370 - r = -ENOMEM; 4371 - 4372 - _new_mapping_cache = KMEM_CACHE(dm_thin_new_mapping, 0); 4373 - if (!_new_mapping_cache) 4374 - goto bad_new_mapping_cache; 4372 + goto bad_thin_target; 4375 4373 4376 4374 return 0; 4377 4375 4378 - bad_new_mapping_cache: 4379 - dm_unregister_target(&pool_target); 4380 - bad_pool_target: 4376 + bad_thin_target: 4381 4377 dm_unregister_target(&thin_target); 4378 + bad_new_mapping_cache: 4379 + kmem_cache_destroy(_new_mapping_cache); 4382 4380 4383 4381 return r; 4384 4382 }
+11 -15
drivers/misc/eeprom/at24.c
··· 562 562 static int at24_read(void *priv, unsigned int off, void *val, size_t count) 563 563 { 564 564 struct at24_data *at24 = priv; 565 - struct i2c_client *client; 565 + struct device *dev = &at24->client[0]->dev; 566 566 char *buf = val; 567 567 int ret; 568 568 ··· 572 572 if (off + count > at24->chip.byte_len) 573 573 return -EINVAL; 574 574 575 - client = at24_translate_offset(at24, &off); 576 - 577 - ret = pm_runtime_get_sync(&client->dev); 575 + ret = pm_runtime_get_sync(dev); 578 576 if (ret < 0) { 579 - pm_runtime_put_noidle(&client->dev); 577 + pm_runtime_put_noidle(dev); 580 578 return ret; 581 579 } 582 580 ··· 590 592 status = at24->read_func(at24, buf, off, count); 591 593 if (status < 0) { 592 594 mutex_unlock(&at24->lock); 593 - pm_runtime_put(&client->dev); 595 + pm_runtime_put(dev); 594 596 return status; 595 597 } 596 598 buf += status; ··· 600 602 601 603 mutex_unlock(&at24->lock); 602 604 603 - pm_runtime_put(&client->dev); 605 + pm_runtime_put(dev); 604 606 605 607 return 0; 606 608 } ··· 608 610 static int at24_write(void *priv, unsigned int off, void *val, size_t count) 609 611 { 610 612 struct at24_data *at24 = priv; 611 - struct i2c_client *client; 613 + struct device *dev = &at24->client[0]->dev; 612 614 char *buf = val; 613 615 int ret; 614 616 ··· 618 620 if (off + count > at24->chip.byte_len) 619 621 return -EINVAL; 620 622 621 - client = at24_translate_offset(at24, &off); 622 - 623 - ret = pm_runtime_get_sync(&client->dev); 623 + ret = pm_runtime_get_sync(dev); 624 624 if (ret < 0) { 625 - pm_runtime_put_noidle(&client->dev); 625 + pm_runtime_put_noidle(dev); 626 626 return ret; 627 627 } 628 628 ··· 636 640 status = at24->write_func(at24, buf, off, count); 637 641 if (status < 0) { 638 642 mutex_unlock(&at24->lock); 639 - pm_runtime_put(&client->dev); 643 + pm_runtime_put(dev); 640 644 return status; 641 645 } 642 646 buf += status; ··· 646 650 647 651 mutex_unlock(&at24->lock); 648 652 649 - pm_runtime_put(&client->dev); 653 + 
pm_runtime_put(dev); 650 654 651 655 return 0; 652 656 } ··· 876 880 at24->nvmem_config.reg_read = at24_read; 877 881 at24->nvmem_config.reg_write = at24_write; 878 882 at24->nvmem_config.priv = at24; 879 - at24->nvmem_config.stride = 4; 883 + at24->nvmem_config.stride = 1; 880 884 at24->nvmem_config.word_size = 1; 881 885 at24->nvmem_config.size = chip.byte_len; 882 886
+2
drivers/mmc/core/card.h
··· 75 75 #define EXT_CSD_REV_ANY (-1u) 76 76 77 77 #define CID_MANFID_SANDISK 0x2 78 + #define CID_MANFID_ATP 0x9 78 79 #define CID_MANFID_TOSHIBA 0x11 79 80 #define CID_MANFID_MICRON 0x13 80 81 #define CID_MANFID_SAMSUNG 0x15 82 + #define CID_MANFID_APACER 0x27 81 83 #define CID_MANFID_KINGSTON 0x70 82 84 #define CID_MANFID_HYNIX 0x90 83 85
+1 -1
drivers/mmc/core/mmc.c
··· 1290 1290 1291 1291 static void mmc_select_driver_type(struct mmc_card *card) 1292 1292 { 1293 - int card_drv_type, drive_strength, drv_type; 1293 + int card_drv_type, drive_strength, drv_type = 0; 1294 1294 int fixed_drv_type = card->host->fixed_drv_type; 1295 1295 1296 1296 card_drv_type = card->ext_csd.raw_driver_strength |
+8
drivers/mmc/core/quirks.h
··· 53 53 MMC_QUIRK_BLK_NO_CMD23), 54 54 55 55 /* 56 + * Some SD cards lockup while using CMD23 multiblock transfers. 57 + */ 58 + MMC_FIXUP("AF SD", CID_MANFID_ATP, CID_OEMID_ANY, add_quirk_sd, 59 + MMC_QUIRK_BLK_NO_CMD23), 60 + MMC_FIXUP("APUSD", CID_MANFID_APACER, 0x5048, add_quirk_sd, 61 + MMC_QUIRK_BLK_NO_CMD23), 62 + 63 + /* 56 64 * Some MMC cards need longer data read timeout than indicated in CSD. 57 65 */ 58 66 MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc,
+1
drivers/net/dsa/mv88e6xxx/port.c
··· 338 338 cmode = MV88E6XXX_PORT_STS_CMODE_2500BASEX; 339 339 break; 340 340 case PHY_INTERFACE_MODE_XGMII: 341 + case PHY_INTERFACE_MODE_XAUI: 341 342 cmode = MV88E6XXX_PORT_STS_CMODE_XAUI; 342 343 break; 343 344 case PHY_INTERFACE_MODE_RXAUI:
+3 -2
drivers/net/ethernet/aquantia/atlantic/aq_cfg.h
··· 50 50 #define AQ_CFG_PCI_FUNC_MSIX_IRQS 9U 51 51 #define AQ_CFG_PCI_FUNC_PORTS 2U 52 52 53 - #define AQ_CFG_SERVICE_TIMER_INTERVAL (2 * HZ) 53 + #define AQ_CFG_SERVICE_TIMER_INTERVAL (1 * HZ) 54 54 #define AQ_CFG_POLLING_TIMER_INTERVAL ((unsigned int)(2 * HZ)) 55 55 56 56 #define AQ_CFG_SKB_FRAGS_MAX 32U ··· 80 80 #define AQ_CFG_DRV_VERSION __stringify(NIC_MAJOR_DRIVER_VERSION)"."\ 81 81 __stringify(NIC_MINOR_DRIVER_VERSION)"."\ 82 82 __stringify(NIC_BUILD_DRIVER_VERSION)"."\ 83 - __stringify(NIC_REVISION_DRIVER_VERSION) 83 + __stringify(NIC_REVISION_DRIVER_VERSION) \ 84 + AQ_CFG_DRV_VERSION_SUFFIX 84 85 85 86 #endif /* AQ_CFG_H */
+8 -8
drivers/net/ethernet/aquantia/atlantic/aq_ethtool.c
··· 66 66 "OutUCast", 67 67 "OutMCast", 68 68 "OutBCast", 69 - "InUCastOctects", 70 - "OutUCastOctects", 71 - "InMCastOctects", 72 - "OutMCastOctects", 73 - "InBCastOctects", 74 - "OutBCastOctects", 75 - "InOctects", 76 - "OutOctects", 69 + "InUCastOctets", 70 + "OutUCastOctets", 71 + "InMCastOctets", 72 + "OutMCastOctets", 73 + "InBCastOctets", 74 + "OutBCastOctets", 75 + "InOctets", 76 + "OutOctets", 77 77 "InPacketsDma", 78 78 "OutPacketsDma", 79 79 "InOctetsDma",
+26 -3
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 46 46 unsigned int mbps; 47 47 }; 48 48 49 + struct aq_stats_s { 50 + u64 uprc; 51 + u64 mprc; 52 + u64 bprc; 53 + u64 erpt; 54 + u64 uptc; 55 + u64 mptc; 56 + u64 bptc; 57 + u64 erpr; 58 + u64 mbtc; 59 + u64 bbtc; 60 + u64 mbrc; 61 + u64 bbrc; 62 + u64 ubrc; 63 + u64 ubtc; 64 + u64 dpc; 65 + u64 dma_pkt_rc; 66 + u64 dma_pkt_tc; 67 + u64 dma_oct_rc; 68 + u64 dma_oct_tc; 69 + }; 70 + 49 71 #define AQ_HW_IRQ_INVALID 0U 50 72 #define AQ_HW_IRQ_LEGACY 1U 51 73 #define AQ_HW_IRQ_MSI 2U ··· 107 85 void (*destroy)(struct aq_hw_s *self); 108 86 109 87 int (*get_hw_caps)(struct aq_hw_s *self, 110 - struct aq_hw_caps_s *aq_hw_caps); 88 + struct aq_hw_caps_s *aq_hw_caps, 89 + unsigned short device, 90 + unsigned short subsystem_device); 111 91 112 92 int (*hw_ring_tx_xmit)(struct aq_hw_s *self, struct aq_ring_s *aq_ring, 113 93 unsigned int frags); ··· 188 164 189 165 int (*hw_update_stats)(struct aq_hw_s *self); 190 166 191 - int (*hw_get_hw_stats)(struct aq_hw_s *self, u64 *data, 192 - unsigned int *p_count); 167 + struct aq_stats_s *(*hw_get_hw_stats)(struct aq_hw_s *self); 193 168 194 169 int (*hw_get_fw_version)(struct aq_hw_s *self, u32 *fw_version); 195 170
+55 -27
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 37 37 module_param_named(aq_itr_rx, aq_itr_rx, uint, 0644); 38 38 MODULE_PARM_DESC(aq_itr_rx, "RX interrupt throttle rate"); 39 39 40 + static void aq_nic_update_ndev_stats(struct aq_nic_s *self); 41 + 40 42 static void aq_nic_rss_init(struct aq_nic_s *self, unsigned int num_rss_queues) 41 43 { 42 44 struct aq_nic_cfg_s *cfg = &self->aq_nic_cfg; ··· 168 166 static void aq_nic_service_timer_cb(struct timer_list *t) 169 167 { 170 168 struct aq_nic_s *self = from_timer(self, t, service_timer); 171 - struct net_device *ndev = aq_nic_get_ndev(self); 169 + int ctimer = AQ_CFG_SERVICE_TIMER_INTERVAL; 172 170 int err = 0; 173 - unsigned int i = 0U; 174 - struct aq_ring_stats_rx_s stats_rx; 175 - struct aq_ring_stats_tx_s stats_tx; 176 171 177 172 if (aq_utils_obj_test(&self->header.flags, AQ_NIC_FLAGS_IS_NOT_READY)) 178 173 goto err_exit; ··· 181 182 if (self->aq_hw_ops.hw_update_stats) 182 183 self->aq_hw_ops.hw_update_stats(self->aq_hw); 183 184 184 - memset(&stats_rx, 0U, sizeof(struct aq_ring_stats_rx_s)); 185 - memset(&stats_tx, 0U, sizeof(struct aq_ring_stats_tx_s)); 186 - for (i = AQ_DIMOF(self->aq_vec); i--;) { 187 - if (self->aq_vec[i]) 188 - aq_vec_add_stats(self->aq_vec[i], &stats_rx, &stats_tx); 189 - } 185 + aq_nic_update_ndev_stats(self); 190 186 191 - ndev->stats.rx_packets = stats_rx.packets; 192 - ndev->stats.rx_bytes = stats_rx.bytes; 193 - ndev->stats.rx_errors = stats_rx.errors; 194 - ndev->stats.tx_packets = stats_tx.packets; 195 - ndev->stats.tx_bytes = stats_tx.bytes; 196 - ndev->stats.tx_errors = stats_tx.errors; 187 + /* If no link - use faster timer rate to detect link up asap */ 188 + if (!netif_carrier_ok(self->ndev)) 189 + ctimer = max(ctimer / 2, 1); 197 190 198 191 err_exit: 199 - mod_timer(&self->service_timer, 200 - jiffies + AQ_CFG_SERVICE_TIMER_INTERVAL); 192 + mod_timer(&self->service_timer, jiffies + ctimer); 201 193 } 202 194 203 195 static void aq_nic_polling_timer_cb(struct timer_list *t) ··· 212 222 213 223 struct aq_nic_s 
*aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, 214 224 const struct ethtool_ops *et_ops, 215 - struct device *dev, 225 + struct pci_dev *pdev, 216 226 struct aq_pci_func_s *aq_pci_func, 217 227 unsigned int port, 218 228 const struct aq_hw_ops *aq_hw_ops) ··· 232 242 ndev->netdev_ops = ndev_ops; 233 243 ndev->ethtool_ops = et_ops; 234 244 235 - SET_NETDEV_DEV(ndev, dev); 245 + SET_NETDEV_DEV(ndev, &pdev->dev); 236 246 237 247 ndev->if_port = port; 238 248 self->ndev = ndev; ··· 244 254 245 255 self->aq_hw = self->aq_hw_ops.create(aq_pci_func, self->port, 246 256 &self->aq_hw_ops); 247 - err = self->aq_hw_ops.get_hw_caps(self->aq_hw, &self->aq_hw_caps); 257 + err = self->aq_hw_ops.get_hw_caps(self->aq_hw, &self->aq_hw_caps, 258 + pdev->device, pdev->subsystem_device); 248 259 if (err < 0) 249 260 goto err_exit; 250 261 ··· 740 749 741 750 void aq_nic_get_stats(struct aq_nic_s *self, u64 *data) 742 751 { 743 - struct aq_vec_s *aq_vec = NULL; 744 752 unsigned int i = 0U; 745 753 unsigned int count = 0U; 746 - int err = 0; 754 + struct aq_vec_s *aq_vec = NULL; 755 + struct aq_stats_s *stats = self->aq_hw_ops.hw_get_hw_stats(self->aq_hw); 747 756 748 - err = self->aq_hw_ops.hw_get_hw_stats(self->aq_hw, data, &count); 749 - if (err < 0) 757 + if (!stats) 750 758 goto err_exit; 751 759 752 - data += count; 760 + data[i] = stats->uprc + stats->mprc + stats->bprc; 761 + data[++i] = stats->uprc; 762 + data[++i] = stats->mprc; 763 + data[++i] = stats->bprc; 764 + data[++i] = stats->erpt; 765 + data[++i] = stats->uptc + stats->mptc + stats->bptc; 766 + data[++i] = stats->uptc; 767 + data[++i] = stats->mptc; 768 + data[++i] = stats->bptc; 769 + data[++i] = stats->ubrc; 770 + data[++i] = stats->ubtc; 771 + data[++i] = stats->mbrc; 772 + data[++i] = stats->mbtc; 773 + data[++i] = stats->bbrc; 774 + data[++i] = stats->bbtc; 775 + data[++i] = stats->ubrc + stats->mbrc + stats->bbrc; 776 + data[++i] = stats->ubtc + stats->mbtc + stats->bbtc; 777 + data[++i] = 
stats->dma_pkt_rc; 778 + data[++i] = stats->dma_pkt_tc; 779 + data[++i] = stats->dma_oct_rc; 780 + data[++i] = stats->dma_oct_tc; 781 + data[++i] = stats->dpc; 782 + 783 + i++; 784 + 785 + data += i; 753 786 count = 0U; 754 787 755 788 for (i = 0U, aq_vec = self->aq_vec[0]; ··· 783 768 } 784 769 785 770 err_exit:; 786 - (void)err; 771 + } 772 + 773 + static void aq_nic_update_ndev_stats(struct aq_nic_s *self) 774 + { 775 + struct net_device *ndev = self->ndev; 776 + struct aq_stats_s *stats = self->aq_hw_ops.hw_get_hw_stats(self->aq_hw); 777 + 778 + ndev->stats.rx_packets = stats->uprc + stats->mprc + stats->bprc; 779 + ndev->stats.rx_bytes = stats->ubrc + stats->mbrc + stats->bbrc; 780 + ndev->stats.rx_errors = stats->erpr; 781 + ndev->stats.tx_packets = stats->uptc + stats->mptc + stats->bptc; 782 + ndev->stats.tx_bytes = stats->ubtc + stats->mbtc + stats->bbtc; 783 + ndev->stats.tx_errors = stats->erpt; 784 + ndev->stats.multicast = stats->mprc; 787 785 } 788 786 789 787 void aq_nic_get_link_ksettings(struct aq_nic_s *self,
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
··· 71 71 72 72 struct aq_nic_s *aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, 73 73 const struct ethtool_ops *et_ops, 74 - struct device *dev, 74 + struct pci_dev *pdev, 75 75 struct aq_pci_func_s *aq_pci_func, 76 76 unsigned int port, 77 77 const struct aq_hw_ops *aq_hw_ops);
+3 -2
drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
··· 51 51 pci_set_drvdata(pdev, self); 52 52 self->pdev = pdev; 53 53 54 - err = aq_hw_ops->get_hw_caps(NULL, &self->aq_hw_caps); 54 + err = aq_hw_ops->get_hw_caps(NULL, &self->aq_hw_caps, pdev->device, 55 + pdev->subsystem_device); 55 56 if (err < 0) 56 57 goto err_exit; 57 58 ··· 60 59 61 60 for (port = 0; port < self->ports; ++port) { 62 61 struct aq_nic_s *aq_nic = aq_nic_alloc_cold(ndev_ops, eth_ops, 63 - &pdev->dev, self, 62 + pdev, self, 64 63 port, aq_hw_ops); 65 64 66 65 if (!aq_nic) {
+16 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c
··· 18 18 #include "hw_atl_a0_internal.h" 19 19 20 20 static int hw_atl_a0_get_hw_caps(struct aq_hw_s *self, 21 - struct aq_hw_caps_s *aq_hw_caps) 21 + struct aq_hw_caps_s *aq_hw_caps, 22 + unsigned short device, 23 + unsigned short subsystem_device) 22 24 { 23 25 memcpy(aq_hw_caps, &hw_atl_a0_hw_caps_, sizeof(*aq_hw_caps)); 26 + 27 + if (device == HW_ATL_DEVICE_ID_D108 && subsystem_device == 0x0001) 28 + aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_10G; 29 + 30 + if (device == HW_ATL_DEVICE_ID_D109 && subsystem_device == 0x0001) { 31 + aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_10G; 32 + aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_5G; 33 + } 34 + 24 35 return 0; 25 36 } 26 37 ··· 343 332 hw_atl_a0_hw_qos_set(self); 344 333 hw_atl_a0_hw_rss_set(self, &aq_nic_cfg->aq_rss); 345 334 hw_atl_a0_hw_rss_hash_set(self, &aq_nic_cfg->aq_rss); 335 + 336 + /* Reset link status and read out initial hardware counters */ 337 + self->aq_link_status.mbps = 0; 338 + hw_atl_utils_update_stats(self); 346 339 347 340 err = aq_hw_err_from_flags(self); 348 341 if (err < 0)
+28 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 16 16 #include "hw_atl_utils.h" 17 17 #include "hw_atl_llh.h" 18 18 #include "hw_atl_b0_internal.h" 19 + #include "hw_atl_llh_internal.h" 19 20 20 21 static int hw_atl_b0_get_hw_caps(struct aq_hw_s *self, 21 - struct aq_hw_caps_s *aq_hw_caps) 22 + struct aq_hw_caps_s *aq_hw_caps, 23 + unsigned short device, 24 + unsigned short subsystem_device) 22 25 { 23 26 memcpy(aq_hw_caps, &hw_atl_b0_hw_caps_, sizeof(*aq_hw_caps)); 27 + 28 + if (device == HW_ATL_DEVICE_ID_D108 && subsystem_device == 0x0001) 29 + aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_10G; 30 + 31 + if (device == HW_ATL_DEVICE_ID_D109 && subsystem_device == 0x0001) { 32 + aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_10G; 33 + aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_5G; 34 + } 35 + 24 36 return 0; 25 37 } 26 38 ··· 369 357 }; 370 358 371 359 int err = 0; 360 + u32 val; 372 361 373 362 self->aq_nic_cfg = aq_nic_cfg; 374 363 ··· 386 373 hw_atl_b0_hw_qos_set(self); 387 374 hw_atl_b0_hw_rss_set(self, &aq_nic_cfg->aq_rss); 388 375 hw_atl_b0_hw_rss_hash_set(self, &aq_nic_cfg->aq_rss); 376 + 377 + /* Force limit MRRS on RDM/TDM to 2K */ 378 + val = aq_hw_read_reg(self, pci_reg_control6_adr); 379 + aq_hw_write_reg(self, pci_reg_control6_adr, (val & ~0x707) | 0x404); 380 + 381 + /* TX DMA total request limit. B0 hardware is not capable of 382 + * handling more than (8K-MRRS) of incoming DMA data. 383 + * Value is 24, in 256-byte units. 384 + */ 385 + aq_hw_write_reg(self, tx_dma_total_req_limit_adr, 24); 386 + 387 + /* Reset link status and read out initial hardware counters */ 388 + self->aq_link_status.mbps = 0; 389 + hw_atl_utils_update_stats(self); 389 390 390 391 err = aq_hw_err_from_flags(self); 391 392 if (err < 0)
+6
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
··· 2343 2343 #define tx_dma_desc_base_addrmsw_adr(descriptor) \ 2344 2344 (0x00007c04u + (descriptor) * 0x40) 2345 2345 2346 + /* tx dma total request limit */ 2347 + #define tx_dma_total_req_limit_adr 0x00007b20u 2348 + 2346 2349 /* tx interrupt moderation control register definitions 2347 2350 * Preprocessor definitions for TX Interrupt Moderation Control Register 2348 2351 * Base Address: 0x00008980 ··· 2371 2368 #define pci_reg_res_dsbl_width 1 2372 2369 /* default value of bitfield reg_res_dsbl */ 2373 2370 #define pci_reg_res_dsbl_default 0x1 2371 + 2372 + /* PCI core control register */ 2373 + #define pci_reg_control6_adr 0x1014u 2374 2374 2375 2375 /* global microprocessor scratch pad definitions */ 2376 2376 #define glb_cpu_scratch_scp_adr(scratch_scp) (0x00000300u + (scratch_scp) * 0x4)
+23 -53
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
··· 503 503 struct hw_atl_s *hw_self = PHAL_ATLANTIC; 504 504 struct hw_aq_atl_utils_mbox mbox; 505 505 506 - if (!self->aq_link_status.mbps) 507 - return 0; 508 - 509 506 hw_atl_utils_mpi_read_stats(self, &mbox); 510 507 511 508 #define AQ_SDELTA(_N_) (hw_self->curr_stats._N_ += \ 512 509 mbox.stats._N_ - hw_self->last_stats._N_) 510 + if (self->aq_link_status.mbps) { 511 + AQ_SDELTA(uprc); 512 + AQ_SDELTA(mprc); 513 + AQ_SDELTA(bprc); 514 + AQ_SDELTA(erpt); 513 515 514 - AQ_SDELTA(uprc); 515 - AQ_SDELTA(mprc); 516 - AQ_SDELTA(bprc); 517 - AQ_SDELTA(erpt); 516 + AQ_SDELTA(uptc); 517 + AQ_SDELTA(mptc); 518 + AQ_SDELTA(bptc); 519 + AQ_SDELTA(erpr); 518 520 519 - AQ_SDELTA(uptc); 520 - AQ_SDELTA(mptc); 521 - AQ_SDELTA(bptc); 522 - AQ_SDELTA(erpr); 523 - 524 - AQ_SDELTA(ubrc); 525 - AQ_SDELTA(ubtc); 526 - AQ_SDELTA(mbrc); 527 - AQ_SDELTA(mbtc); 528 - AQ_SDELTA(bbrc); 529 - AQ_SDELTA(bbtc); 530 - AQ_SDELTA(dpc); 531 - 521 + AQ_SDELTA(ubrc); 522 + AQ_SDELTA(ubtc); 523 + AQ_SDELTA(mbrc); 524 + AQ_SDELTA(mbtc); 525 + AQ_SDELTA(bbrc); 526 + AQ_SDELTA(bbtc); 527 + AQ_SDELTA(dpc); 528 + } 532 529 #undef AQ_SDELTA 530 + hw_self->curr_stats.dma_pkt_rc = stats_rx_dma_good_pkt_counterlsw_get(self); 531 + hw_self->curr_stats.dma_pkt_tc = stats_tx_dma_good_pkt_counterlsw_get(self); 532 + hw_self->curr_stats.dma_oct_rc = stats_rx_dma_good_octet_counterlsw_get(self); 533 + hw_self->curr_stats.dma_oct_tc = stats_tx_dma_good_octet_counterlsw_get(self); 533 534 534 535 memcpy(&hw_self->last_stats, &mbox.stats, sizeof(mbox.stats)); 535 536 536 537 return 0; 537 538 } 538 539 539 - int hw_atl_utils_get_hw_stats(struct aq_hw_s *self, 540 - u64 *data, unsigned int *p_count) 540 + struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self) 541 541 { 542 - struct hw_atl_s *hw_self = PHAL_ATLANTIC; 543 - struct hw_atl_stats_s *stats = &hw_self->curr_stats; 544 - int i = 0; 545 - 546 - data[i] = stats->uprc + stats->mprc + stats->bprc; 547 - data[++i] = stats->uprc; 548 - data[++i] = 
stats->mprc; 549 - data[++i] = stats->bprc; 550 - data[++i] = stats->erpt; 551 - data[++i] = stats->uptc + stats->mptc + stats->bptc; 552 - data[++i] = stats->uptc; 553 - data[++i] = stats->mptc; 554 - data[++i] = stats->bptc; 555 - data[++i] = stats->ubrc; 556 - data[++i] = stats->ubtc; 557 - data[++i] = stats->mbrc; 558 - data[++i] = stats->mbtc; 559 - data[++i] = stats->bbrc; 560 - data[++i] = stats->bbtc; 561 - data[++i] = stats->ubrc + stats->mbrc + stats->bbrc; 562 - data[++i] = stats->ubtc + stats->mbtc + stats->bbtc; 563 - data[++i] = stats_rx_dma_good_pkt_counterlsw_get(self); 564 - data[++i] = stats_tx_dma_good_pkt_counterlsw_get(self); 565 - data[++i] = stats_rx_dma_good_octet_counterlsw_get(self); 566 - data[++i] = stats_tx_dma_good_octet_counterlsw_get(self); 567 - data[++i] = stats->dpc; 568 - 569 - if (p_count) 570 - *p_count = ++i; 571 - 572 - return 0; 542 + return &PHAL_ATLANTIC->curr_stats; 573 543 } 574 544 575 545 static const u32 hw_atl_utils_hw_mac_regs[] = {
+2 -4
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.h
··· 129 129 struct __packed hw_atl_s { 130 130 struct aq_hw_s base; 131 131 struct hw_atl_stats_s last_stats; 132 - struct hw_atl_stats_s curr_stats; 132 + struct aq_stats_s curr_stats; 133 133 u64 speed; 134 134 unsigned int chip_features; 135 135 u32 fw_ver_actual; ··· 207 207 208 208 int hw_atl_utils_update_stats(struct aq_hw_s *self); 209 209 210 - int hw_atl_utils_get_hw_stats(struct aq_hw_s *self, 211 - u64 *data, 212 - unsigned int *p_count); 210 + struct aq_stats_s *hw_atl_utils_get_hw_stats(struct aq_hw_s *self); 213 211 214 212 #endif /* HW_ATL_UTILS_H */
+4 -2
drivers/net/ethernet/aquantia/atlantic/ver.h
··· 11 11 #define VER_H 12 12 13 13 #define NIC_MAJOR_DRIVER_VERSION 1 14 - #define NIC_MINOR_DRIVER_VERSION 5 15 - #define NIC_BUILD_DRIVER_VERSION 345 14 + #define NIC_MINOR_DRIVER_VERSION 6 15 + #define NIC_BUILD_DRIVER_VERSION 13 16 16 #define NIC_REVISION_DRIVER_VERSION 0 17 + 18 + #define AQ_CFG_DRV_VERSION_SUFFIX "-kern" 17 19 18 20 #endif /* VER_H */
+7 -3
drivers/net/ethernet/arc/emac_rockchip.c
··· 199 199 200 200 /* RMII interface needs always a rate of 50MHz */ 201 201 err = clk_set_rate(priv->refclk, 50000000); 202 - if (err) 202 + if (err) { 203 203 dev_err(dev, 204 204 "failed to change reference clock rate (%d)\n", err); 205 + goto out_regulator_disable; 206 + } 205 207 206 208 if (priv->soc_data->need_div_macclk) { 207 209 priv->macclk = devm_clk_get(dev, "macclk"); ··· 232 230 err = arc_emac_probe(ndev, interface); 233 231 if (err) { 234 232 dev_err(dev, "failed to probe arc emac (%d)\n", err); 235 - goto out_regulator_disable; 233 + goto out_clk_disable_macclk; 236 234 } 237 235 238 236 return 0; 237 + 239 238 out_clk_disable_macclk: 240 - clk_disable_unprepare(priv->macclk); 239 + if (priv->soc_data->need_div_macclk) 240 + clk_disable_unprepare(priv->macclk); 241 241 out_regulator_disable: 242 242 if (priv->regulator) 243 243 regulator_disable(priv->regulator);
-1
drivers/net/ethernet/marvell/skge.c
··· 4081 4081 if (hw->ports > 1) { 4082 4082 skge_write32(hw, B0_IMSK, 0); 4083 4083 skge_read32(hw, B0_IMSK); 4084 - free_irq(pdev->irq, hw); 4085 4084 } 4086 4085 spin_unlock_irq(&hw->hw_lock); 4087 4086
+31 -26
drivers/net/ethernet/mellanox/mlx4/en_port.c
··· 188 188 struct net_device *dev = mdev->pndev[port]; 189 189 struct mlx4_en_priv *priv = netdev_priv(dev); 190 190 struct net_device_stats *stats = &dev->stats; 191 - struct mlx4_cmd_mailbox *mailbox; 191 + struct mlx4_cmd_mailbox *mailbox, *mailbox_priority; 192 192 u64 in_mod = reset << 8 | port; 193 193 int err; 194 194 int i, counter_index; ··· 198 198 mailbox = mlx4_alloc_cmd_mailbox(mdev->dev); 199 199 if (IS_ERR(mailbox)) 200 200 return PTR_ERR(mailbox); 201 + 202 + mailbox_priority = mlx4_alloc_cmd_mailbox(mdev->dev); 203 + if (IS_ERR(mailbox_priority)) { 204 + mlx4_free_cmd_mailbox(mdev->dev, mailbox); 205 + return PTR_ERR(mailbox_priority); 206 + } 207 + 201 208 err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma, in_mod, 0, 202 209 MLX4_CMD_DUMP_ETH_STATS, MLX4_CMD_TIME_CLASS_B, 203 210 MLX4_CMD_NATIVE); ··· 212 205 goto out; 213 206 214 207 mlx4_en_stats = mailbox->buf; 208 + 209 + memset(&tmp_counter_stats, 0, sizeof(tmp_counter_stats)); 210 + counter_index = mlx4_get_default_counter_index(mdev->dev, port); 211 + err = mlx4_get_counter_stats(mdev->dev, counter_index, 212 + &tmp_counter_stats, reset); 213 + 214 + /* 0xffs indicates invalid value */ 215 + memset(mailbox_priority->buf, 0xff, 216 + sizeof(*flowstats) * MLX4_NUM_PRIORITIES); 217 + 218 + if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_FLOWSTATS_EN) { 219 + memset(mailbox_priority->buf, 0, 220 + sizeof(*flowstats) * MLX4_NUM_PRIORITIES); 221 + err = mlx4_cmd_box(mdev->dev, 0, mailbox_priority->dma, 222 + in_mod | MLX4_DUMP_ETH_STATS_FLOW_CONTROL, 223 + 0, MLX4_CMD_DUMP_ETH_STATS, 224 + MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE); 225 + if (err) 226 + goto out; 227 + } 228 + 229 + flowstats = mailbox_priority->buf; 215 230 216 231 spin_lock_bh(&priv->stats_lock); 217 232 ··· 374 345 priv->pkstats.tx_prio[8][0] = be64_to_cpu(mlx4_en_stats->TTOT_novlan); 375 346 priv->pkstats.tx_prio[8][1] = be64_to_cpu(mlx4_en_stats->TOCT_novlan); 376 347 377 - spin_unlock_bh(&priv->stats_lock); 378 - 379 - 
memset(&tmp_counter_stats, 0, sizeof(tmp_counter_stats)); 380 - counter_index = mlx4_get_default_counter_index(mdev->dev, port); 381 - err = mlx4_get_counter_stats(mdev->dev, counter_index, 382 - &tmp_counter_stats, reset); 383 - 384 - /* 0xffs indicates invalid value */ 385 - memset(mailbox->buf, 0xff, sizeof(*flowstats) * MLX4_NUM_PRIORITIES); 386 - 387 - if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_FLOWSTATS_EN) { 388 - memset(mailbox->buf, 0, 389 - sizeof(*flowstats) * MLX4_NUM_PRIORITIES); 390 - err = mlx4_cmd_box(mdev->dev, 0, mailbox->dma, 391 - in_mod | MLX4_DUMP_ETH_STATS_FLOW_CONTROL, 392 - 0, MLX4_CMD_DUMP_ETH_STATS, 393 - MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE); 394 - if (err) 395 - goto out; 396 - } 397 - 398 - flowstats = mailbox->buf; 399 - 400 - spin_lock_bh(&priv->stats_lock); 401 - 402 348 if (tmp_counter_stats.counter_mode == 0) { 403 349 priv->pf_stats.rx_bytes = be64_to_cpu(tmp_counter_stats.rx_bytes); 404 350 priv->pf_stats.tx_bytes = be64_to_cpu(tmp_counter_stats.tx_bytes); ··· 414 410 415 411 out: 416 412 mlx4_free_cmd_mailbox(mdev->dev, mailbox); 413 + mlx4_free_cmd_mailbox(mdev->dev, mailbox_priority); 417 414 return err; 418 415 } 419 416
+1 -1
drivers/net/ethernet/mellanox/mlx4/en_selftest.c
··· 185 185 if (priv->mdev->dev->caps.flags & 186 186 MLX4_DEV_CAP_FLAG_UC_LOOPBACK) { 187 187 buf[3] = mlx4_en_test_registers(priv); 188 - if (priv->port_up) 188 + if (priv->port_up && dev->mtu >= MLX4_SELFTEST_LB_MIN_MTU) 189 189 buf[4] = mlx4_en_test_loopback(priv); 190 190 } 191 191
+3
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 153 153 #define SMALL_PACKET_SIZE (256 - NET_IP_ALIGN) 154 154 #define HEADER_COPY_SIZE (128 - NET_IP_ALIGN) 155 155 #define MLX4_LOOPBACK_TEST_PAYLOAD (HEADER_COPY_SIZE - ETH_HLEN) 156 + #define PREAMBLE_LEN 8 157 + #define MLX4_SELFTEST_LB_MIN_MTU (MLX4_LOOPBACK_TEST_PAYLOAD + NET_IP_ALIGN + \ 158 + ETH_HLEN + PREAMBLE_LEN) 156 159 157 160 #define MLX4_EN_MIN_MTU 46 158 161 /* VLAN_HLEN is added twice, to support skb vlan tagged with multiple
-1
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 611 611 MLX4_MAX_PORTS; 612 612 else 613 613 res_alloc->guaranteed[t] = 0; 614 - res_alloc->res_free -= res_alloc->guaranteed[t]; 615 614 break; 616 615 default: 617 616 break;
+18
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 4346 4346 4347 4347 static int mlxsw_sp_port_ovs_join(struct mlxsw_sp_port *mlxsw_sp_port) 4348 4348 { 4349 + u16 vid = 1; 4349 4350 int err; 4350 4351 4351 4352 err = mlxsw_sp_port_vp_mode_set(mlxsw_sp_port, true); ··· 4359 4358 true, false); 4360 4359 if (err) 4361 4360 goto err_port_vlan_set; 4361 + 4362 + for (; vid <= VLAN_N_VID - 1; vid++) { 4363 + err = mlxsw_sp_port_vid_learning_set(mlxsw_sp_port, 4364 + vid, false); 4365 + if (err) 4366 + goto err_vid_learning_set; 4367 + } 4368 + 4362 4369 return 0; 4363 4370 4371 + err_vid_learning_set: 4372 + for (vid--; vid >= 1; vid--) 4373 + mlxsw_sp_port_vid_learning_set(mlxsw_sp_port, vid, true); 4364 4374 err_port_vlan_set: 4365 4375 mlxsw_sp_port_stp_set(mlxsw_sp_port, false); 4366 4376 err_port_stp_set: ··· 4381 4369 4382 4370 static void mlxsw_sp_port_ovs_leave(struct mlxsw_sp_port *mlxsw_sp_port) 4383 4371 { 4372 + u16 vid; 4373 + 4374 + for (vid = VLAN_N_VID - 1; vid >= 1; vid--) 4375 + mlxsw_sp_port_vid_learning_set(mlxsw_sp_port, 4376 + vid, true); 4377 + 4384 4378 mlxsw_sp_port_vlan_set(mlxsw_sp_port, 2, VLAN_N_VID - 1, 4385 4379 false, false); 4386 4380 mlxsw_sp_port_stp_set(mlxsw_sp_port, false);
+4 -3
drivers/net/ethernet/qualcomm/emac/emac-phy.c
··· 47 47 #define MDIO_CLK_25_28 7 48 48 49 49 #define MDIO_WAIT_TIMES 1000 50 + #define MDIO_STATUS_DELAY_TIME 1 50 51 51 52 static int emac_mdio_read(struct mii_bus *bus, int addr, int regnum) 52 53 { ··· 66 65 67 66 if (readl_poll_timeout(adpt->base + EMAC_MDIO_CTRL, reg, 68 67 !(reg & (MDIO_START | MDIO_BUSY)), 69 - 100, MDIO_WAIT_TIMES * 100)) 68 + MDIO_STATUS_DELAY_TIME, MDIO_WAIT_TIMES * 100)) 70 69 return -EIO; 71 70 72 71 return (reg >> MDIO_DATA_SHFT) & MDIO_DATA_BMSK; ··· 89 88 writel(reg, adpt->base + EMAC_MDIO_CTRL); 90 89 91 90 if (readl_poll_timeout(adpt->base + EMAC_MDIO_CTRL, reg, 92 - !(reg & (MDIO_START | MDIO_BUSY)), 100, 93 - MDIO_WAIT_TIMES * 100)) 91 + !(reg & (MDIO_START | MDIO_BUSY)), 92 + MDIO_STATUS_DELAY_TIME, MDIO_WAIT_TIMES * 100)) 94 93 return -EIO; 95 94 96 95 return 0;
+2 -25
drivers/net/ethernet/renesas/ravb_main.c
··· 2308 2308 struct ravb_private *priv = netdev_priv(ndev); 2309 2309 int ret = 0; 2310 2310 2311 - if (priv->wol_enabled) { 2312 - /* Reduce the usecount of the clock to zero and then 2313 - * restore it to its original value. This is done to force 2314 - * the clock to be re-enabled which is a workaround 2315 - * for renesas-cpg-mssr driver which do not enable clocks 2316 - * when resuming from PSCI suspend/resume. 2317 - * 2318 - * Without this workaround the driver fails to communicate 2319 - * with the hardware if WoL was enabled when the system 2320 - * entered PSCI suspend. This is due to that if WoL is enabled 2321 - * we explicitly keep the clock from being turned off when 2322 - * suspending, but in PSCI sleep power is cut so the clock 2323 - * is disabled anyhow, the clock driver is not aware of this 2324 - * so the clock is not turned back on when resuming. 2325 - * 2326 - * TODO: once the renesas-cpg-mssr suspend/resume is working 2327 - * this clock dance should be removed. 2328 - */ 2329 - clk_disable(priv->clk); 2330 - clk_disable(priv->clk); 2331 - clk_enable(priv->clk); 2332 - clk_enable(priv->clk); 2333 - 2334 - /* Set reset mode to rearm the WoL logic */ 2311 + /* If WoL is enabled, set reset mode to rearm the WoL logic */ 2312 + if (priv->wol_enabled) 2335 2313 ravb_write(ndev, CCC_OPC_RESET, CCC); 2336 - } 2337 2314 2338 2315 /* All registers have been reset to default values. 2339 2316 * Restore all registers which were set up at probe time and
+10
drivers/net/ethernet/renesas/sh_eth.c
··· 1892 1892 return PTR_ERR(phydev); 1893 1893 } 1894 1894 1895 + /* mask with MAC supported features */ 1896 + if (mdp->cd->register_type != SH_ETH_REG_GIGABIT) { 1897 + int err = phy_set_max_speed(phydev, SPEED_100); 1898 + if (err) { 1899 + netdev_err(ndev, "failed to limit PHY to 100 Mbit/s\n"); 1900 + phy_disconnect(phydev); 1901 + return err; 1902 + } 1903 + } 1904 + 1895 1905 phy_attached_info(phydev); 1896 1906 1897 1907 return 0;
+1 -1
drivers/net/hippi/rrunner.c
··· 1379 1379 rrpriv->info_dma); 1380 1380 rrpriv->info = NULL; 1381 1381 1382 - free_irq(pdev->irq, dev); 1383 1382 spin_unlock_irqrestore(&rrpriv->lock, flags); 1383 + free_irq(pdev->irq, dev); 1384 1384 1385 1385 return 0; 1386 1386 }
-4
drivers/net/phy/at803x.c
··· 238 238 { 239 239 int value; 240 240 241 - mutex_lock(&phydev->lock); 242 - 243 241 value = phy_read(phydev, MII_BMCR); 244 242 value &= ~(BMCR_PDOWN | BMCR_ISOLATE); 245 243 phy_write(phydev, MII_BMCR, value); 246 - 247 - mutex_unlock(&phydev->lock); 248 244 249 245 return 0; 250 246 }
+4
drivers/net/phy/marvell.c
··· 637 637 if (err < 0) 638 638 goto error; 639 639 640 + /* Do not touch the fiber page if we're in copper->sgmii mode */ 641 + if (phydev->interface == PHY_INTERFACE_MODE_SGMII) 642 + return 0; 643 + 640 644 /* Then the fiber link */ 641 645 err = marvell_set_page(phydev, MII_MARVELL_FIBER_PAGE); 642 646 if (err < 0)
+1
drivers/net/phy/mdio_bus.c
··· 288 288 289 289 if (addr == mdiodev->addr) { 290 290 dev->of_node = child; 291 + dev->fwnode = of_fwnode_handle(child); 291 292 return; 292 293 } 293 294 }
+73
drivers/net/phy/meson-gxl.c
··· 22 22 #include <linux/ethtool.h> 23 23 #include <linux/phy.h> 24 24 #include <linux/netdevice.h> 25 + #include <linux/bitfield.h> 25 26 26 27 static int meson_gxl_config_init(struct phy_device *phydev) 27 28 { ··· 51 50 return 0; 52 51 } 53 52 53 + /* This function is provided to cope with the possible failures of this PHY 54 + * during the aneg process. When aneg fails, the PHY reports that aneg is done 55 + * but the value found in MII_LPA is wrong: 56 + * - Early failures: MII_LPA is just 0x0001. If MII_EXPANSION reports that 57 + * the link partner (LP) supports aneg but the LP never acked our base 58 + * code word, it is likely that we never sent it to begin with. 59 + * - Late failures: MII_LPA is filled with a value which seems to make sense 60 + * but it actually is not what the LP is advertising. It seems that we 61 + * can detect this using a magic bit in the WOL bank (reg 12 - bit 12). 62 + * If this particular bit is not set when aneg is reported as done, 63 + * it means MII_LPA is likely to be wrong. 64 + * 65 + * In both cases, forcing a restart of the aneg process solves the problem. 
66 + * When this failure happens, the first retry is usually successful but, 67 + * in some cases, it may take up to 6 retries to get a decent result 68 + */ 69 + static int meson_gxl_read_status(struct phy_device *phydev) 70 + { 71 + int ret, wol, lpa, exp; 72 + 73 + if (phydev->autoneg == AUTONEG_ENABLE) { 74 + ret = genphy_aneg_done(phydev); 75 + if (ret < 0) 76 + return ret; 77 + else if (!ret) 78 + goto read_status_continue; 79 + 80 + /* Need to access WOL bank, make sure the access is open */ 81 + ret = phy_write(phydev, 0x14, 0x0000); 82 + if (ret) 83 + return ret; 84 + ret = phy_write(phydev, 0x14, 0x0400); 85 + if (ret) 86 + return ret; 87 + ret = phy_write(phydev, 0x14, 0x0000); 88 + if (ret) 89 + return ret; 90 + ret = phy_write(phydev, 0x14, 0x0400); 91 + if (ret) 92 + return ret; 93 + 94 + /* Request LPI_STATUS WOL register */ 95 + ret = phy_write(phydev, 0x14, 0x8D80); 96 + if (ret) 97 + return ret; 98 + 99 + /* Read LPI_STATUS value */ 100 + wol = phy_read(phydev, 0x15); 101 + if (wol < 0) 102 + return wol; 103 + 104 + lpa = phy_read(phydev, MII_LPA); 105 + if (lpa < 0) 106 + return lpa; 107 + 108 + exp = phy_read(phydev, MII_EXPANSION); 109 + if (exp < 0) 110 + return exp; 111 + 112 + if (!(wol & BIT(12)) || 113 + ((exp & EXPANSION_NWAY) && !(lpa & LPA_LPACK))) { 114 + /* Looks like aneg failed after all */ 115 + phydev_dbg(phydev, "LPA corruption - aneg restart\n"); 116 + return genphy_restart_aneg(phydev); 117 + } 118 + } 119 + 120 + read_status_continue: 121 + return genphy_read_status(phydev); 122 + } 123 + 54 124 static struct phy_driver meson_gxl_phy[] = { 55 125 { 56 126 .phy_id = 0x01814400, ··· 131 59 .flags = PHY_IS_INTERNAL, 132 60 .config_init = meson_gxl_config_init, 133 61 .aneg_done = genphy_aneg_done, 62 + .read_status = meson_gxl_read_status, 134 63 .suspend = genphy_suspend, 135 64 .resume = genphy_resume, 136 65 },
+3 -6
drivers/net/phy/phy.c
··· 806 806 */ 807 807 void phy_start(struct phy_device *phydev) 808 808 { 809 - bool do_resume = false; 810 809 int err = 0; 811 810 812 811 mutex_lock(&phydev->lock); ··· 818 819 phydev->state = PHY_UP; 819 820 break; 820 821 case PHY_HALTED: 822 + /* if phy was suspended, bring the physical link up again */ 823 + phy_resume(phydev); 824 + 821 825 /* make sure interrupts are re-enabled for the PHY */ 822 826 if (phydev->irq != PHY_POLL) { 823 827 err = phy_enable_interrupts(phydev); ··· 829 827 } 830 828 831 829 phydev->state = PHY_RESUMING; 832 - do_resume = true; 833 830 break; 834 831 default: 835 832 break; 836 833 } 837 834 mutex_unlock(&phydev->lock); 838 - 839 - /* if phy was suspended, bring the physical link up again */ 840 - if (do_resume) 841 - phy_resume(phydev); 842 835 843 836 phy_trigger_machine(phydev, true); 844 837 }
+6 -4
drivers/net/phy/phy_device.c
··· 135 135 if (!mdio_bus_phy_may_suspend(phydev)) 136 136 goto no_resume; 137 137 138 + mutex_lock(&phydev->lock); 138 139 ret = phy_resume(phydev); 140 + mutex_unlock(&phydev->lock); 139 141 if (ret < 0) 140 142 return ret; 141 143 ··· 1041 1039 if (err) 1042 1040 goto error; 1043 1041 1042 + mutex_lock(&phydev->lock); 1044 1043 phy_resume(phydev); 1044 + mutex_unlock(&phydev->lock); 1045 1045 phy_led_triggers_register(phydev); 1046 1046 1047 1047 return err; ··· 1176 1172 { 1177 1173 struct phy_driver *phydrv = to_phy_driver(phydev->mdio.dev.driver); 1178 1174 int ret = 0; 1175 + 1176 + WARN_ON(!mutex_is_locked(&phydev->lock)); 1179 1177 1180 1178 if (phydev->drv && phydrv->resume) 1181 1179 ret = phydrv->resume(phydev); ··· 1685 1679 { 1686 1680 int value; 1687 1681 1688 - mutex_lock(&phydev->lock); 1689 - 1690 1682 value = phy_read(phydev, MII_BMCR); 1691 1683 phy_write(phydev, MII_BMCR, value & ~BMCR_PDOWN); 1692 - 1693 - mutex_unlock(&phydev->lock); 1694 1684 1695 1685 return 0; 1696 1686 }
+2
drivers/net/usb/qmi_wwan.c
··· 1204 1204 {QMI_FIXED_INTF(0x1199, 0x9079, 10)}, /* Sierra Wireless EM74xx */ 1205 1205 {QMI_FIXED_INTF(0x1199, 0x907b, 8)}, /* Sierra Wireless EM74xx */ 1206 1206 {QMI_FIXED_INTF(0x1199, 0x907b, 10)}, /* Sierra Wireless EM74xx */ 1207 + {QMI_FIXED_INTF(0x1199, 0x9091, 8)}, /* Sierra Wireless EM7565 */ 1207 1208 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ 1208 1209 {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ 1209 1210 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */ 1210 1211 {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ 1211 1212 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ 1212 1213 {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ 1214 + {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ 1213 1215 {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ 1214 1216 {QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)}, /* Telit LE920, LE920A4 */ 1215 1217 {QMI_FIXED_INTF(0x1c9e, 0x9801, 3)}, /* Telewell TW-3G HSPA+ */
+3
drivers/of/of_mdio.c
··· 85 85 * can be looked up later */ 86 86 of_node_get(child); 87 87 phy->mdio.dev.of_node = child; 88 + phy->mdio.dev.fwnode = of_fwnode_handle(child); 88 89 89 90 /* All data is now stored in the phy struct; 90 91 * register it */ ··· 116 115 */ 117 116 of_node_get(child); 118 117 mdiodev->dev.of_node = child; 118 + mdiodev->dev.fwnode = of_fwnode_handle(child); 119 119 120 120 /* All data is now stored in the mdiodev struct; register it. */ 121 121 rc = mdio_device_register(mdiodev); ··· 212 210 mdio->phy_mask = ~0; 213 211 214 212 mdio->dev.of_node = np; 213 + mdio->dev.fwnode = of_fwnode_handle(np); 215 214 216 215 /* Get bus level PHY reset GPIO details */ 217 216 mdio->reset_delay_us = DEFAULT_GPIO_RESET_DELAY;
+4 -4
drivers/pci/host/pcie-rcar.c
··· 1128 1128 err = rcar_pcie_get_resources(pcie); 1129 1129 if (err < 0) { 1130 1130 dev_err(dev, "failed to request resources: %d\n", err); 1131 - goto err_free_bridge; 1131 + goto err_free_resource_list; 1132 1132 } 1133 1133 1134 1134 err = rcar_pcie_parse_map_dma_ranges(pcie, dev->of_node); 1135 1135 if (err) 1136 - goto err_free_bridge; 1136 + goto err_free_resource_list; 1137 1137 1138 1138 pm_runtime_enable(dev); 1139 1139 err = pm_runtime_get_sync(dev); ··· 1176 1176 err_pm_disable: 1177 1177 pm_runtime_disable(dev); 1178 1178 1179 - err_free_bridge: 1180 - pci_free_host_bridge(bridge); 1179 + err_free_resource_list: 1181 1180 pci_free_resource_list(&pcie->resources); 1181 + pci_free_host_bridge(bridge); 1182 1182 1183 1183 return err; 1184 1184 }
+1 -1
drivers/pci/pci-driver.c
··· 999 999 * the subsequent "thaw" callbacks for the device. 1000 1000 */ 1001 1001 if (dev_pm_smart_suspend_and_suspended(dev)) { 1002 - dev->power.direct_complete = true; 1002 + dev_pm_skip_next_resume_phases(dev); 1003 1003 return 0; 1004 1004 } 1005 1005
+1
drivers/platform/x86/asus-wireless.c
··· 118 118 return; 119 119 } 120 120 input_report_key(data->idev, KEY_RFKILL, 1); 121 + input_sync(data->idev); 121 122 input_report_key(data->idev, KEY_RFKILL, 0); 122 123 input_sync(data->idev); 123 124 }
+17
drivers/platform/x86/dell-laptop.c
··· 37 37 38 38 struct quirk_entry { 39 39 u8 touchpad_led; 40 + u8 kbd_led_levels_off_1; 40 41 41 42 int needs_kbd_timeouts; 42 43 /* ··· 66 65 static struct quirk_entry quirk_dell_xps13_9333 = { 67 66 .needs_kbd_timeouts = 1, 68 67 .kbd_timeouts = { 0, 5, 15, 60, 5 * 60, 15 * 60, -1 }, 68 + }; 69 + 70 + static struct quirk_entry quirk_dell_latitude_e6410 = { 71 + .kbd_led_levels_off_1 = 1, 69 72 }; 70 73 71 74 static struct platform_driver platform_driver = { ··· 273 268 DMI_MATCH(DMI_PRODUCT_NAME, "XPS13 9333"), 274 269 }, 275 270 .driver_data = &quirk_dell_xps13_9333, 271 + }, 272 + { 273 + .callback = dmi_matched, 274 + .ident = "Dell Latitude E6410", 275 + .matches = { 276 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 277 + DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6410"), 278 + }, 279 + .driver_data = &quirk_dell_latitude_e6410, 276 280 }, 277 281 { } 278 282 }; ··· 1162 1148 info->triggers = buffer->output[2] & 0xFF; 1163 1149 units = (buffer->output[2] >> 8) & 0xFF; 1164 1150 info->levels = (buffer->output[2] >> 16) & 0xFF; 1151 + 1152 + if (quirks && quirks->kbd_led_levels_off_1 && info->levels) 1153 + info->levels--; 1165 1154 1166 1155 if (units & BIT(0)) 1167 1156 info->seconds = (buffer->output[3] >> 0) & 0xFF;
+2
drivers/platform/x86/dell-wmi.c
··· 639 639 int ret; 640 640 641 641 buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL); 642 + if (!buffer) 643 + return -ENOMEM; 642 644 buffer->cmd_class = CLASS_INFO; 643 645 buffer->cmd_select = SELECT_APP_REGISTRATION; 644 646 buffer->input[0] = 0x10000;
+3 -3
drivers/s390/net/qeth_core.h
··· 565 565 }; 566 566 567 567 struct qeth_ipato { 568 - int enabled; 569 - int invert4; 570 - int invert6; 568 + bool enabled; 569 + bool invert4; 570 + bool invert6; 571 571 struct list_head entries; 572 572 }; 573 573
+3 -3
drivers/s390/net/qeth_core_main.c
··· 1480 1480 qeth_set_intial_options(card); 1481 1481 /* IP address takeover */ 1482 1482 INIT_LIST_HEAD(&card->ipato.entries); 1483 - card->ipato.enabled = 0; 1484 - card->ipato.invert4 = 0; 1485 - card->ipato.invert6 = 0; 1483 + card->ipato.enabled = false; 1484 + card->ipato.invert4 = false; 1485 + card->ipato.invert6 = false; 1486 1486 /* init QDIO stuff */ 1487 1487 qeth_init_qdio_info(card); 1488 1488 INIT_DELAYED_WORK(&card->buffer_reclaim_work, qeth_buffer_reclaim_work);
+1 -1
drivers/s390/net/qeth_l3.h
··· 82 82 int qeth_l3_add_rxip(struct qeth_card *, enum qeth_prot_versions, const u8 *); 83 83 void qeth_l3_del_rxip(struct qeth_card *card, enum qeth_prot_versions, 84 84 const u8 *); 85 - int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *, struct qeth_ipaddr *); 85 + void qeth_l3_update_ipato(struct qeth_card *card); 86 86 struct qeth_ipaddr *qeth_l3_get_addr_buffer(enum qeth_prot_versions); 87 87 int qeth_l3_add_ip(struct qeth_card *, struct qeth_ipaddr *); 88 88 int qeth_l3_delete_ip(struct qeth_card *, struct qeth_ipaddr *);
+31 -5
drivers/s390/net/qeth_l3_main.c
··· 164 164 } 165 165 } 166 166 167 - int qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card, 168 - struct qeth_ipaddr *addr) 167 + static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card, 168 + struct qeth_ipaddr *addr) 169 169 { 170 170 struct qeth_ipato_entry *ipatoe; 171 171 u8 addr_bits[128] = {0, }; ··· 173 173 int rc = 0; 174 174 175 175 if (!card->ipato.enabled) 176 + return 0; 177 + if (addr->type != QETH_IP_TYPE_NORMAL) 176 178 return 0; 177 179 178 180 qeth_l3_convert_addr_to_bits((u8 *) &addr->u, addr_bits, ··· 292 290 memcpy(addr, tmp_addr, sizeof(struct qeth_ipaddr)); 293 291 addr->ref_counter = 1; 294 292 295 - if (addr->type == QETH_IP_TYPE_NORMAL && 296 - qeth_l3_is_addr_covered_by_ipato(card, addr)) { 293 + if (qeth_l3_is_addr_covered_by_ipato(card, addr)) { 297 294 QETH_CARD_TEXT(card, 2, "tkovaddr"); 298 295 addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG; 299 296 } ··· 606 605 /* 607 606 * IP address takeover related functions 608 607 */ 608 + 609 + /** 610 + * qeth_l3_update_ipato() - Update 'takeover' property, for all NORMAL IPs. 611 + * 612 + * Caller must hold ip_lock. 
613 + */ 614 + void qeth_l3_update_ipato(struct qeth_card *card) 615 + { 616 + struct qeth_ipaddr *addr; 617 + unsigned int i; 618 + 619 + hash_for_each(card->ip_htable, i, addr, hnode) { 620 + if (addr->type != QETH_IP_TYPE_NORMAL) 621 + continue; 622 + if (qeth_l3_is_addr_covered_by_ipato(card, addr)) 623 + addr->set_flags |= QETH_IPA_SETIP_TAKEOVER_FLAG; 624 + else 625 + addr->set_flags &= ~QETH_IPA_SETIP_TAKEOVER_FLAG; 626 + } 627 + } 628 + 609 629 static void qeth_l3_clear_ipato_list(struct qeth_card *card) 610 630 { 611 631 struct qeth_ipato_entry *ipatoe, *tmp; ··· 638 616 kfree(ipatoe); 639 617 } 640 618 619 + qeth_l3_update_ipato(card); 641 620 spin_unlock_bh(&card->ip_lock); 642 621 } 643 622 ··· 663 640 } 664 641 } 665 642 666 - if (!rc) 643 + if (!rc) { 667 644 list_add_tail(&new->entry, &card->ipato.entries); 645 + qeth_l3_update_ipato(card); 646 + } 668 647 669 648 spin_unlock_bh(&card->ip_lock); 670 649 ··· 689 664 (proto == QETH_PROT_IPV4)? 4:16) && 690 665 (ipatoe->mask_bits == mask_bits)) { 691 666 list_del(&ipatoe->entry); 667 + qeth_l3_update_ipato(card); 692 668 kfree(ipatoe); 693 669 } 694 670 }
+41 -34
drivers/s390/net/qeth_l3_sys.c
··· 370 370 struct device_attribute *attr, const char *buf, size_t count) 371 371 { 372 372 struct qeth_card *card = dev_get_drvdata(dev); 373 - struct qeth_ipaddr *addr; 374 - int i, rc = 0; 373 + bool enable; 374 + int rc = 0; 375 375 376 376 if (!card) 377 377 return -EINVAL; ··· 384 384 } 385 385 386 386 if (sysfs_streq(buf, "toggle")) { 387 - card->ipato.enabled = (card->ipato.enabled)? 0 : 1; 388 - } else if (sysfs_streq(buf, "1")) { 389 - card->ipato.enabled = 1; 390 - hash_for_each(card->ip_htable, i, addr, hnode) { 391 - if ((addr->type == QETH_IP_TYPE_NORMAL) && 392 - qeth_l3_is_addr_covered_by_ipato(card, addr)) 393 - addr->set_flags |= 394 - QETH_IPA_SETIP_TAKEOVER_FLAG; 395 - } 396 - } else if (sysfs_streq(buf, "0")) { 397 - card->ipato.enabled = 0; 398 - hash_for_each(card->ip_htable, i, addr, hnode) { 399 - if (addr->set_flags & 400 - QETH_IPA_SETIP_TAKEOVER_FLAG) 401 - addr->set_flags &= 402 - ~QETH_IPA_SETIP_TAKEOVER_FLAG; 403 - } 404 - } else 387 + enable = !card->ipato.enabled; 388 + } else if (kstrtobool(buf, &enable)) { 405 389 rc = -EINVAL; 390 + goto out; 391 + } 392 + 393 + if (card->ipato.enabled != enable) { 394 + card->ipato.enabled = enable; 395 + spin_lock_bh(&card->ip_lock); 396 + qeth_l3_update_ipato(card); 397 + spin_unlock_bh(&card->ip_lock); 398 + } 406 399 out: 407 400 mutex_unlock(&card->conf_mutex); 408 401 return rc ? rc : count; ··· 421 428 const char *buf, size_t count) 422 429 { 423 430 struct qeth_card *card = dev_get_drvdata(dev); 431 + bool invert; 424 432 int rc = 0; 425 433 426 434 if (!card) 427 435 return -EINVAL; 428 436 429 437 mutex_lock(&card->conf_mutex); 430 - if (sysfs_streq(buf, "toggle")) 431 - card->ipato.invert4 = (card->ipato.invert4)? 0 : 1; 432 - else if (sysfs_streq(buf, "1")) 433 - card->ipato.invert4 = 1; 434 - else if (sysfs_streq(buf, "0")) 435 - card->ipato.invert4 = 0; 436 - else 438 + if (sysfs_streq(buf, "toggle")) { 439 + invert = !card->ipato.invert4; 440 + } else if (kstrtobool(buf, &invert)) { 437 441 rc = -EINVAL; 442 + goto out; 443 + } 444 + 445 + if (card->ipato.invert4 != invert) { 446 + card->ipato.invert4 = invert; 447 + spin_lock_bh(&card->ip_lock); 448 + qeth_l3_update_ipato(card); 449 + spin_unlock_bh(&card->ip_lock); 450 + } 451 + out: 438 452 mutex_unlock(&card->conf_mutex); 439 453 return rc ? rc : count; 440 454 } ··· 607 607 struct device_attribute *attr, const char *buf, size_t count) 608 608 { 609 609 struct qeth_card *card = dev_get_drvdata(dev); 610 + bool invert; 610 611 int rc = 0; 611 612 612 613 if (!card) 613 614 return -EINVAL; 614 615 615 616 mutex_lock(&card->conf_mutex); 616 - if (sysfs_streq(buf, "toggle")) 617 - card->ipato.invert6 = (card->ipato.invert6)? 0 : 1; 618 - else if (sysfs_streq(buf, "1")) 619 - card->ipato.invert6 = 1; 620 - else if (sysfs_streq(buf, "0")) 621 - card->ipato.invert6 = 0; 622 - else 617 + if (sysfs_streq(buf, "toggle")) { 618 + invert = !card->ipato.invert6; 619 + } else if (kstrtobool(buf, &invert)) { 623 620 rc = -EINVAL; 621 + goto out; 622 + } 623 + 624 + if (card->ipato.invert6 != invert) { 625 + card->ipato.invert6 = invert; 626 + spin_lock_bh(&card->ip_lock); 627 + qeth_l3_update_ipato(card); 628 + spin_unlock_bh(&card->ip_lock); 629 + } 630 + out: 624 631 mutex_unlock(&card->conf_mutex); 625 632 return rc ? rc : count; 626 633 }
+6 -2
drivers/scsi/aacraid/commsup.c
··· 2482 2482 /* Synchronize our watches */ 2483 2483 if (((NSEC_PER_SEC - (NSEC_PER_SEC / HZ)) > now.tv_nsec) 2484 2484 && (now.tv_nsec > (NSEC_PER_SEC / HZ))) 2485 - difference = (((NSEC_PER_SEC - now.tv_nsec) * HZ) 2486 - + NSEC_PER_SEC / 2) / NSEC_PER_SEC; 2485 + difference = HZ + HZ / 2 - 2486 + now.tv_nsec / (NSEC_PER_SEC / HZ); 2487 2487 else { 2488 2488 if (now.tv_nsec > NSEC_PER_SEC / 2) 2489 2489 ++now.tv_sec; ··· 2507 2507 if (kthread_should_stop()) 2508 2508 break; 2509 2509 2510 + /* 2511 + * we probably want usleep_range() here instead of the 2512 + * jiffies computation 2513 + */ 2510 2514 schedule_timeout(difference); 2511 2515 2512 2516 if (kthread_should_stop())
+4 -2
drivers/scsi/bfa/bfad_bsg.c
··· 3135 3135 struct fc_bsg_request *bsg_request = job->request; 3136 3136 struct fc_bsg_reply *bsg_reply = job->reply; 3137 3137 uint32_t vendor_cmd = bsg_request->rqst_data.h_vendor.vendor_cmd[0]; 3138 - struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job)); 3138 + struct Scsi_Host *shost = fc_bsg_to_shost(job); 3139 + struct bfad_im_port_s *im_port = bfad_get_im_port(shost); 3139 3140 struct bfad_s *bfad = im_port->bfad; 3140 3141 void *payload_kbuf; 3141 3142 int rc = -EINVAL; ··· 3351 3350 bfad_im_bsg_els_ct_request(struct bsg_job *job) 3352 3351 { 3353 3352 struct bfa_bsg_data *bsg_data; 3354 - struct bfad_im_port_s *im_port = shost_priv(fc_bsg_to_shost(job)); 3353 + struct Scsi_Host *shost = fc_bsg_to_shost(job); 3354 + struct bfad_im_port_s *im_port = bfad_get_im_port(shost); 3355 3355 struct bfad_s *bfad = im_port->bfad; 3356 3356 bfa_bsg_fcpt_t *bsg_fcpt; 3357 3357 struct bfad_fcxp *drv_fcxp;
+4 -2
drivers/scsi/bfa/bfad_im.c
··· 546 546 bfad_im_scsi_host_alloc(struct bfad_s *bfad, struct bfad_im_port_s *im_port, 547 547 struct device *dev) 548 548 { 549 + struct bfad_im_port_pointer *im_portp; 549 550 int error = 1; 550 551 551 552 mutex_lock(&bfad_mutex); ··· 565 564 goto out_free_idr; 566 565 } 567 566 568 - im_port->shost->hostdata[0] = (unsigned long)im_port; 567 + im_portp = shost_priv(im_port->shost); 568 + im_portp->p = im_port; 569 569 im_port->shost->unique_id = im_port->idr_id; 570 570 im_port->shost->this_id = -1; 571 571 im_port->shost->max_id = MAX_FCP_TARGET; ··· 750 748 751 749 sht->sg_tablesize = bfad->cfg_data.io_max_sge; 752 750 753 - return scsi_host_alloc(sht, sizeof(unsigned long)); 751 + return scsi_host_alloc(sht, sizeof(struct bfad_im_port_pointer)); 754 752 } 755 753 756 754 void
+10
drivers/scsi/bfa/bfad_im.h
··· 69 69 struct fc_vport *fc_vport; 70 70 }; 71 71 72 + struct bfad_im_port_pointer { 73 + struct bfad_im_port_s *p; 74 + }; 75 + 76 + static inline struct bfad_im_port_s *bfad_get_im_port(struct Scsi_Host *host) 77 + { 78 + struct bfad_im_port_pointer *im_portp = shost_priv(host); 79 + return im_portp->p; 80 + } 81 + 72 82 enum bfad_itnim_state { 73 83 ITNIM_STATE_NONE, 74 84 ITNIM_STATE_ONLINE,
+4
drivers/scsi/libfc/fc_lport.c
··· 904 904 case ELS_FLOGI: 905 905 if (!lport->point_to_multipoint) 906 906 fc_lport_recv_flogi_req(lport, fp); 907 + else 908 + fc_rport_recv_req(lport, fp); 907 909 break; 908 910 case ELS_LOGO: 909 911 if (fc_frame_sid(fp) == FC_FID_FLOGI) 910 912 fc_lport_recv_logo_req(lport, fp); 913 + else 914 + fc_rport_recv_req(lport, fp); 911 915 break; 912 916 case ELS_RSCN: 913 917 lport->tt.disc_recv_req(lport, fp);
+5 -5
drivers/scsi/libsas/sas_expander.c
··· 2145 2145 struct sas_rphy *rphy) 2146 2146 { 2147 2147 struct domain_device *dev; 2148 - unsigned int reslen = 0; 2148 + unsigned int rcvlen = 0; 2149 2149 int ret = -EINVAL; 2150 2150 2151 2151 /* no rphy means no smp target support (ie aic94xx host) */ ··· 2179 2179 2180 2180 ret = smp_execute_task_sg(dev, job->request_payload.sg_list, 2181 2181 job->reply_payload.sg_list); 2182 - if (ret > 0) { 2183 - /* positive number is the untransferred residual */ 2184 - reslen = ret; 2182 + if (ret >= 0) { 2183 + /* bsg_job_done() requires the length received */ 2184 + rcvlen = job->reply_payload.payload_len - ret; 2185 2185 ret = 0; 2186 2186 } 2187 2187 2188 2188 out: 2189 - bsg_job_done(job, ret, reslen); 2189 + bsg_job_done(job, ret, rcvlen); 2190 2190 }
+1 -1
drivers/scsi/lpfc/lpfc_mem.c
··· 753 753 drqe.address_hi = putPaddrHigh(rqb_entry->dbuf.phys); 754 754 rc = lpfc_sli4_rq_put(rqb_entry->hrq, rqb_entry->drq, &hrqe, &drqe); 755 755 if (rc < 0) { 756 - (rqbp->rqb_free_buffer)(phba, rqb_entry); 757 756 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 758 757 "6409 Cannot post to RQ %d: %x %x\n", 759 758 rqb_entry->hrq->queue_id, 760 759 rqb_entry->hrq->host_index, 761 760 rqb_entry->hrq->hba_index); 761 + (rqbp->rqb_free_buffer)(phba, rqb_entry); 762 762 } else { 763 763 list_add_tail(&rqb_entry->hbuf.list, &rqbp->rqb_buffer_list); 764 764 rqbp->buffer_count++;
+4 -2
drivers/scsi/scsi_debugfs.c
··· 8 8 { 9 9 struct scsi_cmnd *cmd = container_of(scsi_req(rq), typeof(*cmd), req); 10 10 int msecs = jiffies_to_msecs(jiffies - cmd->jiffies_at_alloc); 11 - char buf[80]; 11 + const u8 *const cdb = READ_ONCE(cmd->cmnd); 12 + char buf[80] = "(?)"; 12 13 13 - __scsi_format_command(buf, sizeof(buf), cmd->cmnd, cmd->cmd_len); 14 + if (cdb) 15 + __scsi_format_command(buf, sizeof(buf), cdb, cmd->cmd_len); 14 16 seq_printf(m, ", .cmd=%s, .retries=%d, allocated %d.%03d s ago", buf, 15 17 cmd->retries, msecs / 1000, msecs % 1000); 16 18 }
+10 -17
drivers/scsi/scsi_devinfo.c
··· 34 34 }; 35 35 36 36 37 - static const char spaces[] = " "; /* 16 of them */ 38 37 static blist_flags_t scsi_default_dev_flags; 39 38 static LIST_HEAD(scsi_dev_info_list); 40 39 static char scsi_dev_flags[256]; ··· 297 298 size_t from_length; 298 299 299 300 from_length = strlen(from); 300 - strncpy(to, from, min(to_length, from_length)); 301 - if (from_length < to_length) { 302 - if (compatible) { 303 - /* 304 - * NUL terminate the string if it is short. 305 - */ 306 - to[from_length] = '\0'; 307 - } else { 308 - /* 309 - * space pad the string if it is short. 310 - */ 311 - strncpy(&to[from_length], spaces, 312 - to_length - from_length); 313 - } 301 + /* This zero-pads the destination */ 302 + strncpy(to, from, to_length); 303 + if (from_length < to_length && !compatible) { 304 + /* 305 + * space pad the string if it is short. 306 + */ 307 + memset(&to[from_length], ' ', to_length - from_length); 314 308 } 315 309 if (from_length > to_length) 316 310 printk(KERN_WARNING "%s: %s string '%s' is too long\n", ··· 450 458 /* 451 459 * vendor strings must be an exact match 452 460 */ 453 - if (vmax != strlen(devinfo->vendor) || 461 + if (vmax != strnlen(devinfo->vendor, 462 + sizeof(devinfo->vendor)) || 454 463 memcmp(devinfo->vendor, vskip, vmax)) 455 464 continue; 456 465 ··· 459 466 * @model specifies the full string, and 460 467 * must be larger or equal to devinfo->model 461 468 */ 462 - mlen = strlen(devinfo->model); 469 + mlen = strnlen(devinfo->model, sizeof(devinfo->model)); 463 470 if (mmax < mlen || memcmp(devinfo->model, mskip, mlen)) 464 471 continue; 465 472 return devinfo;
+2
drivers/scsi/scsi_lib.c
··· 1967 1967 out_put_device: 1968 1968 put_device(&sdev->sdev_gendev); 1969 1969 out: 1970 + if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev)) 1971 + blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY); 1970 1972 return false; 1971 1973 } 1972 1974
+3 -1
drivers/scsi/sd.c
··· 1312 1312 static void sd_uninit_command(struct scsi_cmnd *SCpnt) 1313 1313 { 1314 1314 struct request *rq = SCpnt->request; 1315 + u8 *cmnd; 1315 1316 1316 1317 if (SCpnt->flags & SCMD_ZONE_WRITE_LOCK) 1317 1318 sd_zbc_write_unlock_zone(SCpnt); ··· 1321 1320 __free_page(rq->special_vec.bv_page); 1322 1321 1323 1322 if (SCpnt->cmnd != scsi_req(rq)->cmd) { 1324 - mempool_free(SCpnt->cmnd, sd_cdb_pool); 1323 + cmnd = SCpnt->cmnd; 1325 1324 SCpnt->cmnd = NULL; 1326 1325 SCpnt->cmd_len = 0; 1326 + mempool_free(cmnd, sd_cdb_pool); 1327 1327 } 1328 1328 } 1329 1329
+2 -2
drivers/soc/amlogic/meson-gx-socinfo.c
··· 20 20 #define AO_SEC_SOCINFO_OFFSET AO_SEC_SD_CFG8 21 21 22 22 #define SOCINFO_MAJOR GENMASK(31, 24) 23 - #define SOCINFO_MINOR GENMASK(23, 16) 24 - #define SOCINFO_PACK GENMASK(15, 8) 23 + #define SOCINFO_PACK GENMASK(23, 16) 24 + #define SOCINFO_MINOR GENMASK(15, 8) 25 25 #define SOCINFO_MISC GENMASK(7, 0) 26 26 27 27 static const struct meson_gx_soc_id {
+1 -1
drivers/staging/ccree/ssi_hash.c
··· 1769 1769 struct device *dev = drvdata_to_dev(ctx->drvdata); 1770 1770 struct ahash_req_ctx *state = ahash_request_ctx(req); 1771 1771 u32 tmp; 1772 - int rc; 1772 + int rc = 0; 1773 1773 1774 1774 memcpy(&tmp, in, sizeof(u32)); 1775 1775 if (tmp != CC_EXPORT_MAGIC) {
+1 -1
drivers/staging/pi433/rf69.c
··· 102 102 103 103 currentValue = READ_REG(REG_DATAMODUL); 104 104 105 - switch (currentValue & MASK_DATAMODUL_MODULATION_TYPE >> 3) { // TODO improvement: change 3 to define 105 + switch (currentValue & MASK_DATAMODUL_MODULATION_TYPE) { 106 106 case DATAMODUL_MODULATION_TYPE_OOK: return OOK; 107 107 case DATAMODUL_MODULATION_TYPE_FSK: return FSK; 108 108 default: return undefined;
-1
drivers/tee/optee/core.c
··· 590 590 return -ENODEV; 591 591 592 592 np = of_find_matching_node(fw_np, optee_match); 593 - of_node_put(fw_np); 594 593 if (!np) 595 594 return -ENODEV; 596 595
+3 -1
drivers/usb/core/config.c
··· 555 555 unsigned iad_num = 0; 556 556 557 557 memcpy(&config->desc, buffer, USB_DT_CONFIG_SIZE); 558 + nintf = nintf_orig = config->desc.bNumInterfaces; 559 + config->desc.bNumInterfaces = 0; // Adjusted later 560 + 558 561 if (config->desc.bDescriptorType != USB_DT_CONFIG || 559 562 config->desc.bLength < USB_DT_CONFIG_SIZE || 560 563 config->desc.bLength > size) { ··· 571 568 buffer += config->desc.bLength; 572 569 size -= config->desc.bLength; 573 570 574 - nintf = nintf_orig = config->desc.bNumInterfaces; 575 571 if (nintf > USB_MAXINTERFACES) { 576 572 dev_warn(ddev, "config %d has too many interfaces: %d, " 577 573 "using maximum allowed: %d\n",
+4
drivers/usb/dwc2/core.h
··· 537 537 * 2 - Internal DMA 538 538 * @power_optimized Are power optimizations enabled? 539 539 * @num_dev_ep Number of device endpoints available 540 + * @num_dev_in_eps Number of device IN endpoints available 540 541 * @num_dev_perio_in_ep Number of device periodic IN endpoints 541 542 * available 542 543 * @dev_token_q_depth Device Mode IN Token Sequence Learning Queue ··· 566 565 * 2 - 8 or 16 bits 567 566 * @snpsid: Value from SNPSID register 568 567 * @dev_ep_dirs: Direction of device endpoints (GHWCFG1) 568 + * @g_tx_fifo_size[] Power-on values of TxFIFO sizes 569 569 */ 570 570 struct dwc2_hw_params { 571 571 unsigned op_mode:3; ··· 588 586 unsigned fs_phy_type:2; 589 587 unsigned i2c_enable:1; 590 588 unsigned num_dev_ep:4; 589 + unsigned num_dev_in_eps : 4; 591 590 unsigned num_dev_perio_in_ep:4; 592 591 unsigned total_fifo_size:16; 593 592 unsigned power_optimized:1; 594 593 unsigned utmi_phy_data_width:2; 595 594 u32 snpsid; 596 595 u32 dev_ep_dirs; 596 + u32 g_tx_fifo_size[MAX_EPS_CHANNELS]; 597 597 }; 598 598 599 599 /* Size of control and EP0 buffers */
+2 -40
drivers/usb/dwc2/gadget.c
··· 195 195 { 196 196 if (hsotg->hw_params.en_multiple_tx_fifo) 197 197 /* In dedicated FIFO mode we need count of IN EPs */ 198 - return (dwc2_readl(hsotg->regs + GHWCFG4) & 199 - GHWCFG4_NUM_IN_EPS_MASK) >> GHWCFG4_NUM_IN_EPS_SHIFT; 198 + return hsotg->hw_params.num_dev_in_eps; 200 199 else 201 200 /* In shared FIFO mode we need count of Periodic IN EPs */ 202 201 return hsotg->hw_params.num_dev_perio_in_ep; 203 - } 204 - 205 - /** 206 - * dwc2_hsotg_ep_info_size - return Endpoint Info Control block size in DWORDs 207 - */ 208 - static int dwc2_hsotg_ep_info_size(struct dwc2_hsotg *hsotg) 209 - { 210 - int val = 0; 211 - int i; 212 - u32 ep_dirs; 213 - 214 - /* 215 - * Don't need additional space for ep info control registers in 216 - * slave mode. 217 - */ 218 - if (!using_dma(hsotg)) { 219 - dev_dbg(hsotg->dev, "Buffer DMA ep info size 0\n"); 220 - return 0; 221 - } 222 - 223 - /* 224 - * Buffer DMA mode - 1 location per endpoit 225 - * Descriptor DMA mode - 4 locations per endpoint 226 - */ 227 - ep_dirs = hsotg->hw_params.dev_ep_dirs; 228 - 229 - for (i = 0; i <= hsotg->hw_params.num_dev_ep; i++) { 230 - val += ep_dirs & 3 ? 1 : 2; 231 - ep_dirs >>= 2; 232 - } 233 - 234 - if (using_desc_dma(hsotg)) 235 - val = val * 4; 236 - 237 - return val; 238 202 } 239 203 240 204 /** ··· 207 243 */ 208 244 int dwc2_hsotg_tx_fifo_total_depth(struct dwc2_hsotg *hsotg) 209 245 { 210 - int ep_info_size; 211 246 int addr; 212 247 int tx_addr_max; 213 248 u32 np_tx_fifo_size; ··· 215 252 hsotg->params.g_np_tx_fifo_size); 216 253 217 254 /* Get Endpoint Info Control block size in DWORDs. */ 218 - ep_info_size = dwc2_hsotg_ep_info_size(hsotg); 219 - tx_addr_max = hsotg->hw_params.total_fifo_size - ep_info_size; 255 + tx_addr_max = hsotg->hw_params.total_fifo_size; 220 256 221 257 addr = hsotg->params.g_rx_fifo_size + np_tx_fifo_size; 222 258 if (tx_addr_max <= addr)
+19 -10
drivers/usb/dwc2/params.c
··· 484 484 } 485 485 486 486 for (fifo = 1; fifo <= fifo_count; fifo++) { 487 - dptxfszn = (dwc2_readl(hsotg->regs + DPTXFSIZN(fifo)) & 488 - FIFOSIZE_DEPTH_MASK) >> FIFOSIZE_DEPTH_SHIFT; 487 + dptxfszn = hsotg->hw_params.g_tx_fifo_size[fifo]; 489 488 490 489 if (hsotg->params.g_tx_fifo_size[fifo] < min || 491 490 hsotg->params.g_tx_fifo_size[fifo] > dptxfszn) { ··· 608 609 struct dwc2_hw_params *hw = &hsotg->hw_params; 609 610 bool forced; 610 611 u32 gnptxfsiz; 612 + int fifo, fifo_count; 611 613 612 614 if (hsotg->dr_mode == USB_DR_MODE_HOST) 613 615 return; ··· 616 616 forced = dwc2_force_mode_if_needed(hsotg, false); 617 617 618 618 gnptxfsiz = dwc2_readl(hsotg->regs + GNPTXFSIZ); 619 + 620 + fifo_count = dwc2_hsotg_tx_fifo_count(hsotg); 621 + 622 + for (fifo = 1; fifo <= fifo_count; fifo++) { 623 + hw->g_tx_fifo_size[fifo] = 624 + (dwc2_readl(hsotg->regs + DPTXFSIZN(fifo)) & 625 + FIFOSIZE_DEPTH_MASK) >> FIFOSIZE_DEPTH_SHIFT; 626 + } 619 627 620 628 if (forced) 621 629 dwc2_clear_force_mode(hsotg); ··· 669 661 hwcfg4 = dwc2_readl(hsotg->regs + GHWCFG4); 670 662 grxfsiz = dwc2_readl(hsotg->regs + GRXFSIZ); 671 663 672 - /* 673 - * Host specific hardware parameters. Reading these parameters 674 - * requires the controller to be in host mode. The mode will 675 - * be forced, if necessary, to read these values. 676 - */ 677 - dwc2_get_host_hwparams(hsotg); 678 - dwc2_get_dev_hwparams(hsotg); 679 - 680 664 /* hwcfg1 */ 681 665 hw->dev_ep_dirs = hwcfg1; 682 666 ··· 711 711 hw->en_multiple_tx_fifo = !!(hwcfg4 & GHWCFG4_DED_FIFO_EN); 712 712 hw->num_dev_perio_in_ep = (hwcfg4 & GHWCFG4_NUM_DEV_PERIO_IN_EP_MASK) >> 713 713 GHWCFG4_NUM_DEV_PERIO_IN_EP_SHIFT; 714 + hw->num_dev_in_eps = (hwcfg4 & GHWCFG4_NUM_IN_EPS_MASK) >> 715 + GHWCFG4_NUM_IN_EPS_SHIFT; 714 716 hw->dma_desc_enable = !!(hwcfg4 & GHWCFG4_DESC_DMA); 715 717 hw->power_optimized = !!(hwcfg4 & GHWCFG4_POWER_OPTIMIZ); 716 718 hw->utmi_phy_data_width = (hwcfg4 & GHWCFG4_UTMI_PHY_DATA_WIDTH_MASK) >> ··· 721 719 /* fifo sizes */ 722 720 hw->rx_fifo_size = (grxfsiz & GRXFSIZ_DEPTH_MASK) >> 723 721 GRXFSIZ_DEPTH_SHIFT; 722 + /* 723 + * Host specific hardware parameters. Reading these parameters 724 + * requires the controller to be in host mode. The mode will 725 + * be forced, if necessary, to read these values. 726 + */ 727 + dwc2_get_host_hwparams(hsotg); 728 + dwc2_get_dev_hwparams(hsotg); 724 729 725 730 return 0; 726 731 }
+4 -1
drivers/usb/dwc3/dwc3-of-simple.c
··· 51 51 52 52 clk = of_clk_get(np, i); 53 53 if (IS_ERR(clk)) { 54 - while (--i >= 0) 54 + while (--i >= 0) { 55 + clk_disable_unprepare(simple->clks[i]); 55 56 clk_put(simple->clks[i]); 57 + } 56 58 return PTR_ERR(clk); 57 59 } 58 60 ··· 205 203 .driver = { 206 204 .name = "dwc3-of-simple", 207 205 .of_match_table = of_dwc3_simple_match, 206 + .pm = &dwc3_of_simple_dev_pm_ops, 208 207 }, 209 208 }; 210 209
+2 -2
drivers/usb/dwc3/gadget.c
··· 259 259 { 260 260 const struct usb_endpoint_descriptor *desc = dep->endpoint.desc; 261 261 struct dwc3 *dwc = dep->dwc; 262 - u32 timeout = 500; 262 + u32 timeout = 1000; 263 263 u32 reg; 264 264 265 265 int cmd_status = 0; ··· 912 912 */ 913 913 if (speed == USB_SPEED_HIGH) { 914 914 struct usb_ep *ep = &dep->endpoint; 915 - unsigned int mult = ep->mult - 1; 915 + unsigned int mult = 2; 916 916 unsigned int maxp = usb_endpoint_maxp(ep->desc); 917 917 918 918 if (length <= (2 * maxp))
+2 -2
drivers/usb/gadget/Kconfig
··· 508 508 controller, and the relevant drivers for each function declared 509 509 by the device. 510 510 511 - endchoice 512 - 513 511 source "drivers/usb/gadget/legacy/Kconfig" 512 + 513 + endchoice 514 514 515 515 endif # USB_GADGET
+1 -11
drivers/usb/gadget/legacy/Kconfig
··· 13 13 # both kinds of controller can also support "USB On-the-Go" (CONFIG_USB_OTG). 14 14 # 15 15 16 - menuconfig USB_GADGET_LEGACY 17 - bool "Legacy USB Gadget Support" 18 - help 19 - Legacy USB gadgets are USB gadgets that do not use the USB gadget 20 - configfs interface. 21 - 22 - if USB_GADGET_LEGACY 23 - 24 16 config USB_ZERO 25 17 tristate "Gadget Zero (DEVELOPMENT)" 26 18 select USB_LIBCOMPOSITE ··· 479 487 # or video class gadget drivers), or specific hardware, here. 480 488 config USB_G_WEBCAM 481 489 tristate "USB Webcam Gadget" 482 - depends on VIDEO_DEV 490 + depends on VIDEO_V4L2 483 491 select USB_LIBCOMPOSITE 484 492 select VIDEOBUF2_VMALLOC 485 493 select USB_F_UVC ··· 490 498 491 499 Say "y" to link the driver statically, or "m" to build a 492 500 dynamically linked module called "g_webcam". 493 - 494 - endif
+11 -4
drivers/usb/host/xhci-mem.c
··· 971 971 return 0; 972 972 } 973 973 974 - xhci->devs[slot_id] = kzalloc(sizeof(*xhci->devs[slot_id]), flags); 975 - if (!xhci->devs[slot_id]) 974 + dev = kzalloc(sizeof(*dev), flags); 975 + if (!dev) 976 976 return 0; 977 - dev = xhci->devs[slot_id]; 978 977 979 978 /* Allocate the (output) device context that will be used in the HC. */ 980 979 dev->out_ctx = xhci_alloc_container_ctx(xhci, XHCI_CTX_TYPE_DEVICE, flags); ··· 1014 1015 1015 1016 trace_xhci_alloc_virt_device(dev); 1016 1017 1018 + xhci->devs[slot_id] = dev; 1019 + 1017 1020 return 1; 1018 1021 fail: 1019 - xhci_free_virt_device(xhci, slot_id); 1022 + 1023 + if (dev->in_ctx) 1024 + xhci_free_container_ctx(xhci, dev->in_ctx); 1025 + if (dev->out_ctx) 1026 + xhci_free_container_ctx(xhci, dev->out_ctx); 1027 + kfree(dev); 1028 + 1020 1029 return 0; 1021 1030 } 1022 1031
+3 -3
drivers/usb/host/xhci-ring.c
··· 3112 3112 { 3113 3113 u32 maxp, total_packet_count; 3114 3114 3115 - /* MTK xHCI is mostly 0.97 but contains some features from 1.0 */ 3115 + /* MTK xHCI 0.96 contains some features from 1.0 */ 3116 3116 if (xhci->hci_version < 0x100 && !(xhci->quirks & XHCI_MTK_HOST)) 3117 3117 return ((td_total_len - transferred) >> 10); 3118 3118 ··· 3121 3121 trb_buff_len == td_total_len) 3122 3122 return 0; 3123 3123 3124 - /* for MTK xHCI, TD size doesn't include this TRB */ 3125 - if (xhci->quirks & XHCI_MTK_HOST) 3124 + /* for MTK xHCI 0.96, TD size include this TRB, but not in 1.x */ 3125 + if ((xhci->quirks & XHCI_MTK_HOST) && (xhci->hci_version < 0x100)) 3126 3126 trb_buff_len = 0; 3127 3127 3128 3128 maxp = usb_endpoint_maxp(&urb->ep->desc);
+9 -1
drivers/usb/musb/da8xx.c
··· 284 284 musb->xceiv->otg->state = OTG_STATE_A_WAIT_VRISE; 285 285 portstate(musb->port1_status |= USB_PORT_STAT_POWER); 286 286 del_timer(&musb->dev_timer); 287 - } else { 287 + } else if (!(musb->int_usb & MUSB_INTR_BABBLE)) { 288 + /* 289 + * When babble condition happens, drvvbus interrupt 290 + * is also generated. Ignore this drvvbus interrupt 291 + * and let babble interrupt handler recovers the 292 + * controller; otherwise, the host-mode flag is lost 293 + * due to the MUSB_DEV_MODE() call below and babble 294 + * recovery logic will not be called. 295 + */ 288 296 musb->is_active = 0; 289 297 MUSB_DEV_MODE(musb); 290 298 otg->default_a = 0;
+7
drivers/usb/storage/unusual_devs.h
··· 2100 2100 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 2101 2101 US_FL_BROKEN_FUA ), 2102 2102 2103 + /* Reported by David Kozub <zub@linux.fjfi.cvut.cz> */ 2104 + UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999, 2105 + "JMicron", 2106 + "JMS567", 2107 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 2108 + US_FL_BROKEN_FUA), 2109 + 2103 2110 /* 2104 2111 * Reported by Alexandre Oliva <oliva@lsd.ic.unicamp.br> 2105 2112 * JMicron responds to USN and several other SCSI ioctls with a
+7
drivers/usb/storage/unusual_uas.h
··· 129 129 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 130 130 US_FL_BROKEN_FUA | US_FL_NO_REPORT_OPCODES), 131 131 132 + /* Reported-by: David Kozub <zub@linux.fjfi.cvut.cz> */ 133 + UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999, 134 + "JMicron", 135 + "JMS567", 136 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 137 + US_FL_BROKEN_FUA), 138 + 132 139 /* Reported-by: Hans de Goede <hdegoede@redhat.com> */ 133 140 UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999, 134 141 "VIA",
+41 -10
drivers/usb/usbip/stub_rx.c
··· 322 322 return priv; 323 323 } 324 324 325 - static int get_pipe(struct stub_device *sdev, int epnum, int dir) 325 + static int get_pipe(struct stub_device *sdev, struct usbip_header *pdu) 326 326 { 327 327 struct usb_device *udev = sdev->udev; 328 328 struct usb_host_endpoint *ep; 329 329 struct usb_endpoint_descriptor *epd = NULL; 330 + int epnum = pdu->base.ep; 331 + int dir = pdu->base.direction; 332 + 333 + if (epnum < 0 || epnum > 15) 334 + goto err_ret; 330 335 331 336 if (dir == USBIP_DIR_IN) 332 337 ep = udev->ep_in[epnum & 0x7f]; 333 338 else 334 339 ep = udev->ep_out[epnum & 0x7f]; 335 - if (!ep) { 336 - dev_err(&sdev->udev->dev, "no such endpoint?, %d\n", 337 - epnum); 338 - BUG(); 339 - } 340 + if (!ep) 341 + goto err_ret; 340 342 341 343 epd = &ep->desc; 344 + 345 + /* validate transfer_buffer_length */ 346 + if (pdu->u.cmd_submit.transfer_buffer_length > INT_MAX) { 347 + dev_err(&sdev->udev->dev, 348 + "CMD_SUBMIT: -EMSGSIZE transfer_buffer_length %d\n", 349 + pdu->u.cmd_submit.transfer_buffer_length); 350 + return -1; 351 + } 352 + 342 353 if (usb_endpoint_xfer_control(epd)) { 343 354 if (dir == USBIP_DIR_OUT) 344 355 return usb_sndctrlpipe(udev, epnum); ··· 372 361 } 373 362 374 363 if (usb_endpoint_xfer_isoc(epd)) { 364 + /* validate packet size and number of packets */ 365 + unsigned int maxp, packets, bytes; 366 + 367 + maxp = usb_endpoint_maxp(epd); 368 + maxp *= usb_endpoint_maxp_mult(epd); 369 + bytes = pdu->u.cmd_submit.transfer_buffer_length; 370 + packets = DIV_ROUND_UP(bytes, maxp); 371 + 372 + if (pdu->u.cmd_submit.number_of_packets < 0 || 373 + pdu->u.cmd_submit.number_of_packets > packets) { 374 + dev_err(&sdev->udev->dev, 375 + "CMD_SUBMIT: isoc invalid num packets %d\n", 376 + pdu->u.cmd_submit.number_of_packets); 377 + return -1; 378 + } 375 379 if (dir == USBIP_DIR_OUT) 376 380 return usb_sndisocpipe(udev, epnum); 377 381 else 378 382 return usb_rcvisocpipe(udev, epnum); 379 383 } 380 384 385 + err_ret: 381 386 /* NOT REACHED */ 382 - dev_err(&sdev->udev->dev, "get pipe, epnum %d\n", epnum); 383 - return 0; 387 + dev_err(&sdev->udev->dev, "CMD_SUBMIT: invalid epnum %d\n", epnum); 388 + return -1; 384 389 385 390 static void masking_bogus_flags(struct urb *urb) ··· 460 433 struct stub_priv *priv; 461 434 struct usbip_device *ud = &sdev->ud; 462 435 struct usb_device *udev = sdev->udev; 463 - int pipe = get_pipe(sdev, pdu->base.ep, pdu->base.direction); 436 + int pipe = get_pipe(sdev, pdu); 437 + 438 + if (pipe == -1) 439 + return; 464 440 465 441 priv = stub_priv_alloc(sdev, pdu); 466 442 if (!priv) ··· 482 452 } 483 453 484 454 /* allocate urb transfer buffer, if needed */ 485 - if (pdu->u.cmd_submit.transfer_buffer_length > 0) { 455 + if (pdu->u.cmd_submit.transfer_buffer_length > 0 && 456 + pdu->u.cmd_submit.transfer_buffer_length <= INT_MAX) { 486 457 priv->urb->transfer_buffer = 487 458 kzalloc(pdu->u.cmd_submit.transfer_buffer_length, 488 459 GFP_KERNEL);
+7
drivers/usb/usbip/stub_tx.c
··· 167 167 memset(&pdu_header, 0, sizeof(pdu_header)); 168 168 memset(&msg, 0, sizeof(msg)); 169 169 170 + if (urb->actual_length > 0 && !urb->transfer_buffer) { 171 + dev_err(&sdev->udev->dev, 172 + "urb: actual_length %d transfer_buffer null\n", 173 + urb->actual_length); 174 + return -1; 175 + } 176 + 170 177 if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) 171 178 iovnum = 2 + urb->number_of_packets; 172 179 else
+1
drivers/usb/usbip/usbip_common.h
··· 256 256 /* lock for status */ 257 257 spinlock_t lock; 258 258 259 + int sockfd; 259 260 struct socket *tcp_socket; 260 261 261 262 struct task_struct *tcp_rx;
+16 -9
drivers/usb/usbip/vhci_sysfs.c
··· 17 17 18 18 /* 19 19 * output example: 20 - * hub port sta spd dev socket local_busid 21 - * hs 0000 004 000 00000000 c5a7bb80 1-2.3 20 + * hub port sta spd dev sockfd local_busid 21 + * hs 0000 004 000 00000000 3 1-2.3 22 22 * ................................................ 23 - * ss 0008 004 000 00000000 d8cee980 2-3.4 23 + * ss 0008 004 000 00000000 4 2-3.4 24 24 * ................................................ 25 25 * 26 - * IP address can be retrieved from a socket pointer address by looking 27 - * up /proc/net/{tcp,tcp6}. Also, a userland program may remember a 28 - * port number and its peer IP address. 26 + * Output includes socket fd instead of socket pointer address to avoid 27 + * leaking kernel memory address in: 28 + * /sys/devices/platform/vhci_hcd.0/status and in debug output. 29 + * The socket pointer address is not used at the moment and it was made 30 + * visible as a convenient way to find IP address from socket pointer 31 + * address by looking up /proc/net/{tcp,tcp6}. As this opens a security 32 + * hole, the change is made to use sockfd instead. 33 + * 29 34 */ 30 35 static void port_show_vhci(char **out, int hub, int port, struct vhci_device *vdev) 31 36 { ··· 44 39 if (vdev->ud.status == VDEV_ST_USED) { 45 40 *out += sprintf(*out, "%03u %08x ", 46 41 vdev->speed, vdev->devid); 47 - *out += sprintf(*out, "%16p %s", 48 - vdev->ud.tcp_socket, 42 + *out += sprintf(*out, "%u %s", 43 + vdev->ud.sockfd, 49 44 dev_name(&vdev->udev->dev)); 50 45 51 46 } else { ··· 165 160 char *s = out; 166 161 167 162 /* 168 - * Half the ports are for SPEED_HIGH and half for SPEED_SUPER, thus the * 2. 163 + * Half the ports are for SPEED_HIGH and half for SPEED_SUPER, 164 + * thus the * 2. 169 165 */ 170 166 out += sprintf(out, "%d\n", VHCI_PORTS * vhci_num_controllers); 171 167 return out - s; ··· 372 366 373 367 vdev->devid = devid; 374 368 vdev->speed = speed; 369 + vdev->ud.sockfd = sockfd; 375 370 vdev->ud.tcp_socket = socket; 376 371 vdev->ud.status = VDEV_ST_NOTASSIGNED; 377 372
+9 -34
drivers/virtio/virtio_mmio.c
··· 522 522 return -EBUSY; 523 523 524 524 vm_dev = devm_kzalloc(&pdev->dev, sizeof(*vm_dev), GFP_KERNEL); 525 - if (!vm_dev) { 526 - rc = -ENOMEM; 527 - goto free_mem; 528 - } 525 + if (!vm_dev) 526 + return -ENOMEM; 529 527 530 528 vm_dev->vdev.dev.parent = &pdev->dev; 531 529 vm_dev->vdev.dev.release = virtio_mmio_release_dev; ··· 533 535 spin_lock_init(&vm_dev->lock); 534 536 535 537 vm_dev->base = devm_ioremap(&pdev->dev, mem->start, resource_size(mem)); 536 - if (vm_dev->base == NULL) { 537 - rc = -EFAULT; 538 - goto free_vmdev; 539 - } 538 + if (vm_dev->base == NULL) 539 + return -EFAULT; 540 540 541 541 /* Check magic value */ 542 542 magic = readl(vm_dev->base + VIRTIO_MMIO_MAGIC_VALUE); 543 543 if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) { 544 544 dev_warn(&pdev->dev, "Wrong magic value 0x%08lx!\n", magic); 545 - rc = -ENODEV; 546 - goto unmap; 545 + return -ENODEV; 547 546 } 548 547 549 548 /* Check device version */ ··· 548 553 if (vm_dev->version < 1 || vm_dev->version > 2) { 549 554 dev_err(&pdev->dev, "Version %ld not supported!\n", 550 555 vm_dev->version); 551 - rc = -ENXIO; 552 - goto unmap; 556 + return -ENXIO; 553 557 } 554 558 555 559 vm_dev->vdev.id.device = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID); ··· 557 563 * virtio-mmio device with an ID 0 is a (dummy) placeholder 558 564 * with no function. End probing now with no error reported. 559 565 */ 560 - rc = -ENODEV; 561 - goto unmap; 566 + return -ENODEV; 562 567 } 563 568 vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID); 564 569 ··· 583 590 platform_set_drvdata(pdev, vm_dev); 584 591 585 592 rc = register_virtio_device(&vm_dev->vdev); 586 - if (rc) { 587 - iounmap(vm_dev->base); 588 - devm_release_mem_region(&pdev->dev, mem->start, 589 - resource_size(mem)); 593 + if (rc) 590 594 put_device(&vm_dev->vdev.dev); 591 - } 592 - return rc; 593 - unmap: 594 - iounmap(vm_dev->base); 595 - free_mem: 596 - devm_release_mem_region(&pdev->dev, mem->start, 597 - resource_size(mem)); 598 - free_vmdev: 599 - devm_kfree(&pdev->dev, vm_dev); 595 + 600 596 return rc; 601 597 } 602 598 603 599 static int virtio_mmio_remove(struct platform_device *pdev) 604 600 { 605 601 struct virtio_mmio_device *vm_dev = platform_get_drvdata(pdev); 606 - struct resource *mem; 607 - 608 - iounmap(vm_dev->base); 609 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 610 - if (mem) 611 - devm_release_mem_region(&pdev->dev, mem->start, 612 - resource_size(mem)); 613 602 unregister_virtio_device(&vm_dev->vdev); 614 603 615 604 return 0;
+1 -1
drivers/xen/Kconfig
··· 269 269 270 270 config XEN_ACPI_PROCESSOR 271 271 tristate "Xen ACPI processor" 272 - depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ 272 + depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ 273 273 default m 274 274 help 275 275 This ACPI processor uploads Power Management information to the Xen
-1
fs/autofs4/waitq.c
··· 170 170 171 171 mutex_unlock(&sbi->wq_mutex); 172 172 173 - if (autofs4_write(sbi, pipe, &pkt, pktsz)) 174 173 switch (ret = autofs4_write(sbi, pipe, &pkt, pktsz)) { 175 174 case 0: 176 175 break;
+12 -6
fs/btrfs/ctree.c
··· 1032 1032 root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && 1033 1033 !(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF)) { 1034 1034 ret = btrfs_inc_ref(trans, root, buf, 1); 1035 - BUG_ON(ret); /* -ENOMEM */ 1035 + if (ret) 1036 + return ret; 1036 1037 1037 1038 if (root->root_key.objectid == 1038 1039 BTRFS_TREE_RELOC_OBJECTID) { 1039 1040 ret = btrfs_dec_ref(trans, root, buf, 0); 1040 - BUG_ON(ret); /* -ENOMEM */ 1041 + if (ret) 1042 + return ret; 1041 1043 ret = btrfs_inc_ref(trans, root, cow, 1); 1042 - BUG_ON(ret); /* -ENOMEM */ 1044 + if (ret) 1045 + return ret; 1043 1046 } 1044 1047 new_flags |= BTRFS_BLOCK_FLAG_FULL_BACKREF; 1045 1048 } else { ··· 1052 1049 ret = btrfs_inc_ref(trans, root, cow, 1); 1053 1050 else 1054 1051 ret = btrfs_inc_ref(trans, root, cow, 0); 1055 - BUG_ON(ret); /* -ENOMEM */ 1052 + if (ret) 1053 + return ret; 1056 1054 } 1057 1055 if (new_flags != 0) { 1058 1056 int level = btrfs_header_level(buf); ··· 1072 1068 ret = btrfs_inc_ref(trans, root, cow, 1); 1073 1069 else 1074 1070 ret = btrfs_inc_ref(trans, root, cow, 0); 1075 - BUG_ON(ret); /* -ENOMEM */ 1071 + if (ret) 1072 + return ret; 1076 1073 ret = btrfs_dec_ref(trans, root, buf, 1); 1077 - BUG_ON(ret); /* -ENOMEM */ 1074 + if (ret) 1075 + return ret; 1078 1076 } 1079 1077 clean_tree_block(fs_info, buf); 1080 1078 *last_ref = 1;
+5 -7
fs/btrfs/disk-io.c
··· 3231 3231 int errors = 0; 3232 3232 u32 crc; 3233 3233 u64 bytenr; 3234 + int op_flags; 3234 3235 3235 3236 if (max_mirrors == 0) 3236 3237 max_mirrors = BTRFS_SUPER_MIRROR_MAX; ··· 3274 3273 * we fua the first super. The others we allow 3275 3274 * to go down lazy. 3276 3275 */ 3277 - if (i == 0) { 3278 - ret = btrfsic_submit_bh(REQ_OP_WRITE, 3279 - REQ_SYNC | REQ_FUA | REQ_META | REQ_PRIO, bh); 3280 - } else { 3281 - ret = btrfsic_submit_bh(REQ_OP_WRITE, 3282 - REQ_SYNC | REQ_META | REQ_PRIO, bh); 3283 - } 3276 + op_flags = REQ_SYNC | REQ_META | REQ_PRIO; 3277 + if (i == 0 && !btrfs_test_opt(device->fs_info, NOBARRIER)) 3278 + op_flags |= REQ_FUA; 3279 + ret = btrfsic_submit_bh(REQ_OP_WRITE, op_flags, bh); 3284 3280 if (ret) 3285 3281 errors++; 3286 3282 }
+1
fs/btrfs/extent-tree.c
··· 9206 9206 ret = btrfs_del_root(trans, fs_info, &root->root_key); 9207 9207 if (ret) { 9208 9208 btrfs_abort_transaction(trans, ret); 9209 + err = ret; 9209 9210 goto out_end_trans; 9210 9211 } 9211 9212
+2
fs/btrfs/inode.c
··· 3005 3005 compress_type = ordered_extent->compress_type; 3006 3006 if (test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags)) { 3007 3007 BUG_ON(compress_type); 3008 + btrfs_qgroup_free_data(inode, NULL, ordered_extent->file_offset, 3009 + ordered_extent->len); 3008 3010 ret = btrfs_mark_extent_written(trans, BTRFS_I(inode), 3009 3011 ordered_extent->file_offset, 3010 3012 ordered_extent->file_offset +
+1 -1
fs/btrfs/ioctl.c
··· 2206 2206 if (!path) 2207 2207 return -ENOMEM; 2208 2208 2209 - ptr = &name[BTRFS_INO_LOOKUP_PATH_MAX]; 2209 + ptr = &name[BTRFS_INO_LOOKUP_PATH_MAX - 1]; 2210 2210 2211 2211 key.objectid = tree_id; 2212 2212 key.type = BTRFS_ROOT_ITEM_KEY;
+38 -4
fs/ceph/mds_client.c
··· 1440 1440 return request_close_session(mdsc, session); 1441 1441 } 1442 1442 1443 + static bool drop_negative_children(struct dentry *dentry) 1444 + { 1445 + struct dentry *child; 1446 + bool all_negative = true; 1447 + 1448 + if (!d_is_dir(dentry)) 1449 + goto out; 1450 + 1451 + spin_lock(&dentry->d_lock); 1452 + list_for_each_entry(child, &dentry->d_subdirs, d_child) { 1453 + if (d_really_is_positive(child)) { 1454 + all_negative = false; 1455 + break; 1456 + } 1457 + } 1458 + spin_unlock(&dentry->d_lock); 1459 + 1460 + if (all_negative) 1461 + shrink_dcache_parent(dentry); 1462 + out: 1463 + return all_negative; 1464 + } 1465 + 1443 1466 /* 1444 1467 * Trim old(er) caps. 1445 1468 * ··· 1513 1490 if ((used | wanted) & ~oissued & mine) 1514 1491 goto out; /* we need these caps */ 1515 1492 1516 - session->s_trim_caps--; 1517 1493 if (oissued) { 1518 1494 /* we aren't the only cap.. just remove us */ 1519 1495 __ceph_remove_cap(cap, true); 1496 + session->s_trim_caps--; 1520 1497 } else { 1498 + struct dentry *dentry; 1521 1499 /* try dropping referring dentries */ 1522 1500 spin_unlock(&ci->i_ceph_lock); 1523 - d_prune_aliases(inode); 1524 - dout("trim_caps_cb %p cap %p pruned, count now %d\n", 1525 - inode, cap, atomic_read(&inode->i_count)); 1501 + dentry = d_find_any_alias(inode); 1502 + if (dentry && drop_negative_children(dentry)) { 1503 + int count; 1504 + dput(dentry); 1505 + d_prune_aliases(inode); 1506 + count = atomic_read(&inode->i_count); 1507 + if (count == 1) 1508 + session->s_trim_caps--; 1509 + dout("trim_caps_cb %p cap %p pruned, count now %d\n", 1510 + inode, cap, count); 1511 + } else { 1512 + dput(dentry); 1513 + } 1526 1514 return 0; 1527 1515 } 1528 1516
+2 -1
fs/cifs/smb2ops.c
··· 1406 1406 } while (rc == -EAGAIN); 1407 1407 1408 1408 if (rc) { 1409 - cifs_dbg(VFS, "ioctl error in smb2_get_dfs_refer rc=%d\n", rc); 1409 + if (rc != -ENOENT) 1410 + cifs_dbg(VFS, "ioctl error in smb2_get_dfs_refer rc=%d\n", rc); 1410 1411 goto out; 1411 1412 } 1412 1413
+16 -16
fs/cifs/smb2pdu.c
··· 2678 2678 cifs_small_buf_release(req); 2679 2679 2680 2680 rsp = (struct smb2_read_rsp *)rsp_iov.iov_base; 2681 - shdr = get_sync_hdr(rsp); 2682 - 2683 - if (shdr->Status == STATUS_END_OF_FILE) { 2684 - free_rsp_buf(resp_buftype, rsp_iov.iov_base); 2685 - return 0; 2686 - } 2687 2681 2688 2682 if (rc) { 2689 - cifs_stats_fail_inc(io_parms->tcon, SMB2_READ_HE); 2690 - cifs_dbg(VFS, "Send error in read = %d\n", rc); 2691 - } else { 2692 - *nbytes = le32_to_cpu(rsp->DataLength); 2693 - if ((*nbytes > CIFS_MAX_MSGSIZE) || 2694 - (*nbytes > io_parms->length)) { 2695 - cifs_dbg(FYI, "bad length %d for count %d\n", 2696 - *nbytes, io_parms->length); 2697 - rc = -EIO; 2698 - *nbytes = 0; 2683 + if (rc != -ENODATA) { 2684 + cifs_stats_fail_inc(io_parms->tcon, SMB2_READ_HE); 2685 + cifs_dbg(VFS, "Send error in read = %d\n", rc); 2699 2686 } 2687 + free_rsp_buf(resp_buftype, rsp_iov.iov_base); 2688 + return rc == -ENODATA ? 0 : rc; 2700 2689 } 2690 + 2691 + *nbytes = le32_to_cpu(rsp->DataLength); 2692 + if ((*nbytes > CIFS_MAX_MSGSIZE) || 2693 + (*nbytes > io_parms->length)) { 2694 + cifs_dbg(FYI, "bad length %d for count %d\n", 2695 + *nbytes, io_parms->length); 2696 + rc = -EIO; 2697 + *nbytes = 0; 2698 + } 2699 + 2700 + shdr = get_sync_hdr(rsp); 2701 2701 2702 2702 if (*buf) { 2703 2703 memcpy(*buf, (char *)shdr + rsp->DataOffset, *nbytes);
+1 -2
fs/dax.c
··· 627 627 628 628 if (pfn != pmd_pfn(*pmdp)) 629 629 goto unlock_pmd; 630 - if (!pmd_dirty(*pmdp) 631 - && !pmd_access_permitted(*pmdp, WRITE)) 630 + if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp)) 632 631 goto unlock_pmd; 633 632 634 633 flush_cache_page(vma, address, pfn);
+3 -4
fs/exec.c
··· 1216 1216 return -EAGAIN; 1217 1217 } 1218 1218 1219 - char *get_task_comm(char *buf, struct task_struct *tsk) 1219 + char *__get_task_comm(char *buf, size_t buf_size, struct task_struct *tsk) 1220 1220 { 1221 - /* buf must be at least sizeof(tsk->comm) in size */ 1222 1221 task_lock(tsk); 1223 - strncpy(buf, tsk->comm, sizeof(tsk->comm)); 1222 + strncpy(buf, tsk->comm, buf_size); 1224 1223 task_unlock(tsk); 1225 1224 return buf; 1226 1225 } 1227 - EXPORT_SYMBOL_GPL(get_task_comm); 1226 + EXPORT_SYMBOL_GPL(__get_task_comm); 1228 1227 1229 1228 /* 1230 1229 * These functions flushes out all traces of the currently running executable
-1
fs/hpfs/dir.c
··· 150 150 if (unlikely(ret < 0)) 151 151 goto out; 152 152 ctx->pos = ((loff_t) hpfs_de_as_down_as_possible(inode->i_sb, hpfs_inode->i_dno) << 4) + 1; 153 - file->f_version = inode->i_version; 154 153 } 155 154 next_pos = ctx->pos; 156 155 if (!(de = map_pos_dirent(inode, &next_pos, &qbh))) {
-2
fs/hpfs/dnode.c
··· 419 419 c = 1; 420 420 goto ret; 421 421 } 422 - i->i_version++; 423 422 c = hpfs_add_to_dnode(i, dno, name, namelen, new_de, 0); 424 423 ret: 425 424 return c; ··· 725 726 return 2; 726 727 } 727 728 } 728 - i->i_version++; 729 729 for_all_poss(i, hpfs_pos_del, (t = get_pos(dnode, de)) + 1, 1); 730 730 hpfs_delete_de(i->i_sb, dnode, de); 731 731 hpfs_mark_4buffers_dirty(qbh);
-1
fs/hpfs/super.c
··· 235 235 ei = kmem_cache_alloc(hpfs_inode_cachep, GFP_NOFS); 236 236 if (!ei) 237 237 return NULL; 238 - ei->vfs_inode.i_version = 1; 239 238 return &ei->vfs_inode; 240 239 } 241 240
+11
fs/nfs/client.c
··· 291 291 const struct sockaddr *sap = data->addr; 292 292 struct nfs_net *nn = net_generic(data->net, nfs_net_id); 293 293 294 + again: 294 295 list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) { 295 296 const struct sockaddr *clap = (struct sockaddr *)&clp->cl_addr; 296 297 /* Don't match clients that failed to initialise properly */ 297 298 if (clp->cl_cons_state < 0) 298 299 continue; 300 + 301 + /* If a client is still initializing then we need to wait */ 302 + if (clp->cl_cons_state > NFS_CS_READY) { 303 + refcount_inc(&clp->cl_count); 304 + spin_unlock(&nn->nfs_client_lock); 305 + nfs_wait_client_init_complete(clp); 306 + nfs_put_client(clp); 307 + spin_lock(&nn->nfs_client_lock); 308 + goto again; 309 + } 299 310 300 311 /* Different NFS versions cannot share the same nfs_client */ 301 312 if (clp->rpc_ops != data->nfs_mod->rpc_ops)
+13 -4
fs/nfs/nfs4client.c
··· 404 404 if (error < 0) 405 405 goto error; 406 406 407 - if (!nfs4_has_session(clp)) 408 - nfs_mark_client_ready(clp, NFS_CS_READY); 409 - 410 407 error = nfs4_discover_server_trunking(clp, &old); 411 408 if (error < 0) 412 409 goto error; 413 410 414 - if (clp != old) 411 + if (clp != old) { 415 412 clp->cl_preserve_clid = true; 413 + /* 414 + * Mark the client as having failed initialization so other 415 + * processes walking the nfs_client_list in nfs_match_client() 416 + * won't try to use it. 417 + */ 418 + nfs_mark_client_ready(clp, -EPERM); 419 + } 416 420 nfs_put_client(clp); 417 421 clear_bit(NFS_CS_TSM_POSSIBLE, &clp->cl_flags); 418 422 return old; ··· 543 539 spin_lock(&nn->nfs_client_lock); 544 540 list_for_each_entry(pos, &nn->nfs_client_list, cl_share_link) { 545 541 542 + if (pos == new) 543 + goto found; 544 + 546 545 status = nfs4_match_client(pos, new, &prev, nn); 547 546 if (status < 0) 548 547 goto out_unlock; ··· 566 559 * way that a SETCLIENTID_CONFIRM to pos can succeed is 567 560 * if new and pos point to the same server: 568 561 */ 562 + found: 569 563 refcount_inc(&pos->cl_count); 570 564 spin_unlock(&nn->nfs_client_lock); 571 565 ··· 580 572 case 0: 581 573 nfs4_swap_callback_idents(pos, new); 582 574 pos->cl_confirm = new->cl_confirm; 575 + nfs_mark_client_ready(pos, NFS_CS_READY); 583 576 584 577 prev = NULL; 585 578 *result = pos;
+2
fs/nfs/write.c
··· 1890 1890 if (res) 1891 1891 error = nfs_generic_commit_list(inode, &head, how, &cinfo); 1892 1892 nfs_commit_end(cinfo.mds); 1893 + if (res == 0) 1894 + return res; 1893 1895 if (error < 0) 1894 1896 goto out_error; 1895 1897 if (!may_wait)
+3
fs/nfsd/auth.c
··· 60 60 gi->gid[i] = exp->ex_anon_gid; 61 61 else 62 62 gi->gid[i] = rqgi->gid[i]; 63 + 64 + /* Each thread allocates its own gi, no race */ 65 + groups_sort(gi); 63 66 } 64 67 } else { 65 68 gi = get_group_info(rqgi);
+10
fs/overlayfs/Kconfig
··· 24 24 an overlay which has redirects on a kernel that doesn't support this 25 25 feature will have unexpected results. 26 26 27 + config OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW 28 + bool "Overlayfs: follow redirects even if redirects are turned off" 29 + default y 30 + depends on OVERLAY_FS 31 + help 32 + Disable this to get a possibly more secure configuration, but that 33 + might not be backward compatible with previous kernels. 34 + 35 + For more information, see Documentation/filesystems/overlayfs.txt 36 + 27 37 config OVERLAY_FS_INDEX 28 38 bool "Overlayfs: turn on inodes index feature by default" 29 39 depends on OVERLAY_FS
+2 -1
fs/overlayfs/dir.c
··· 887 887 spin_unlock(&dentry->d_lock); 888 888 } else { 889 889 kfree(redirect); 890 - pr_warn_ratelimited("overlay: failed to set redirect (%i)\n", err); 890 + pr_warn_ratelimited("overlayfs: failed to set redirect (%i)\n", 891 + err); 891 892 /* Fall back to userspace copy-up */ 892 893 err = -EXDEV; 893 894 }
+17 -1
fs/overlayfs/namei.c
··· 435 435 436 436 /* Check if index is orphan and don't warn before cleaning it */ 437 437 if (d_inode(index)->i_nlink == 1 && 438 - ovl_get_nlink(index, origin.dentry, 0) == 0) 438 + ovl_get_nlink(origin.dentry, index, 0) == 0) 439 439 err = -ENOENT; 440 440 441 441 dput(origin.dentry); ··· 680 680 681 681 if (d.stop) 682 682 break; 683 + 684 + /* 685 + * Following redirects can have security consequences: it's like 686 + * a symlink into the lower layer without the permission checks. 687 + * This is only a problem if the upper layer is untrusted (e.g 688 + * comes from an USB drive). This can allow a non-readable file 689 + * or directory to become readable. 690 + * 691 + * Only following redirects when redirects are enabled disables 692 + * this attack vector when not necessary. 693 + */ 694 + err = -EPERM; 695 + if (d.redirect && !ofs->config.redirect_follow) { 696 + pr_warn_ratelimited("overlay: refusing to follow redirect for (%pd2)\n", dentry); 697 + goto out_put; 698 + } 683 699 684 700 if (d.redirect && d.redirect[0] == '/' && poe != roe) { 685 701 poe = roe;
+1 -1
fs/overlayfs/overlayfs.h
··· 180 180 static inline struct dentry *ovl_do_tmpfile(struct dentry *dentry, umode_t mode) 181 181 { 182 182 struct dentry *ret = vfs_tmpfile(dentry, mode, 0); 183 - int err = IS_ERR(ret) ? PTR_ERR(ret) : 0; 183 + int err = PTR_ERR_OR_ZERO(ret); 184 184 185 185 pr_debug("tmpfile(%pd2, 0%o) = %i\n", dentry, mode, err); 186 186 return ret;
+2
fs/overlayfs/ovl_entry.h
··· 14 14 char *workdir; 15 15 bool default_permissions; 16 16 bool redirect_dir; 17 + bool redirect_follow; 18 + const char *redirect_mode; 17 19 bool index; 18 20 }; 19 21
+5 -2
fs/overlayfs/readdir.c
··· 499 499 return err; 500 500 501 501 fail: 502 - pr_warn_ratelimited("overlay: failed to look up (%s) for ino (%i)\n", 502 + pr_warn_ratelimited("overlayfs: failed to look up (%s) for ino (%i)\n", 503 503 p->name, err); 504 504 goto out; 505 505 } ··· 663 663 return PTR_ERR(rdt.cache); 664 664 } 665 665 666 - return iterate_dir(od->realfile, &rdt.ctx); 666 + err = iterate_dir(od->realfile, &rdt.ctx); 667 + ctx->pos = rdt.ctx.pos; 668 + 669 + return err; 667 670 } 668 671 669 672
+66 -21
fs/overlayfs/super.c
··· 33 33 MODULE_PARM_DESC(ovl_redirect_dir_def, 34 34 "Default to on or off for the redirect_dir feature"); 35 35 36 + static bool ovl_redirect_always_follow = 37 + IS_ENABLED(CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW); 38 + module_param_named(redirect_always_follow, ovl_redirect_always_follow, 39 + bool, 0644); 40 + MODULE_PARM_DESC(ovl_redirect_always_follow, 41 + "Follow redirects even if redirect_dir feature is turned off"); 42 + 36 43 static bool ovl_index_def = IS_ENABLED(CONFIG_OVERLAY_FS_INDEX); 37 44 module_param_named(index, ovl_index_def, bool, 0644); 38 45 MODULE_PARM_DESC(ovl_index_def, ··· 239 232 kfree(ofs->config.lowerdir); 240 233 kfree(ofs->config.upperdir); 241 234 kfree(ofs->config.workdir); 235 + kfree(ofs->config.redirect_mode); 242 236 if (ofs->creator_cred) 243 237 put_cred(ofs->creator_cred); 244 238 kfree(ofs); ··· 252 244 ovl_free_fs(ofs); 253 245 } 254 246 247 + /* Sync real dirty inodes in upper filesystem (if it exists) */ 255 248 static int ovl_sync_fs(struct super_block *sb, int wait) 256 249 { 257 250 struct ovl_fs *ofs = sb->s_fs_info; ··· 261 252 262 253 if (!ofs->upper_mnt) 263 254 return 0; 264 - upper_sb = ofs->upper_mnt->mnt_sb; 265 - if (!upper_sb->s_op->sync_fs) 255 + 256 + /* 257 + * If this is a sync(2) call or an emergency sync, all the super blocks 258 + * will be iterated, including upper_sb, so no need to do anything. 259 + * 260 + * If this is a syncfs(2) call, then we do need to call 261 + * sync_filesystem() on upper_sb, but enough if we do it when being 262 + * called with wait == 1. 263 + */ 264 + if (!wait) 266 265 return 0; 267 266 268 - /* real inodes have already been synced by sync_filesystem(ovl_sb) */ 267 + upper_sb = ofs->upper_mnt->mnt_sb; 268 + 269 269 down_read(&upper_sb->s_umount); 270 - ret = upper_sb->s_op->sync_fs(upper_sb, wait); 270 + ret = sync_filesystem(upper_sb); 271 271 up_read(&upper_sb->s_umount); 272 + 272 273 return ret; 273 274 } 274 275 ··· 314 295 return (!ofs->upper_mnt || !ofs->workdir); 315 296 } 316 297 298 + static const char *ovl_redirect_mode_def(void) 299 + { 300 + return ovl_redirect_dir_def ? "on" : "off"; 301 + } 302 + 317 303 /** 318 304 * ovl_show_options 319 305 * ··· 337 313 } 338 314 if (ofs->config.default_permissions) 339 315 seq_puts(m, ",default_permissions"); 340 - if (ofs->config.redirect_dir != ovl_redirect_dir_def) 341 - seq_printf(m, ",redirect_dir=%s", 342 - ofs->config.redirect_dir ? "on" : "off"); 316 + if (strcmp(ofs->config.redirect_mode, ovl_redirect_mode_def()) != 0) 317 + seq_printf(m, ",redirect_dir=%s", ofs->config.redirect_mode); 343 318 if (ofs->config.index != ovl_index_def) 344 - seq_printf(m, ",index=%s", 345 - ofs->config.index ? "on" : "off"); 319 + seq_printf(m, ",index=%s", ofs->config.index ? "on" : "off"); 346 320 return 0; 347 321 } 348 322 ··· 370 348 OPT_UPPERDIR, 371 349 OPT_WORKDIR, 372 350 OPT_DEFAULT_PERMISSIONS, 373 - OPT_REDIRECT_DIR_ON, 374 - OPT_REDIRECT_DIR_OFF, 351 + OPT_REDIRECT_DIR, 375 352 OPT_INDEX_ON, 376 353 OPT_INDEX_OFF, 377 354 OPT_ERR, ··· 381 360 {OPT_UPPERDIR, "upperdir=%s"}, 382 361 {OPT_WORKDIR, "workdir=%s"}, 383 362 {OPT_DEFAULT_PERMISSIONS, "default_permissions"}, 384 - {OPT_REDIRECT_DIR_ON, "redirect_dir=on"}, 385 - {OPT_REDIRECT_DIR_OFF, "redirect_dir=off"}, 363 + {OPT_REDIRECT_DIR, "redirect_dir=%s"}, 386 364 {OPT_INDEX_ON, "index=on"}, 387 365 {OPT_INDEX_OFF, "index=off"}, 388 366 {OPT_ERR, NULL} ··· 410 390 return sbegin; 411 391 } 412 392 393 + static int ovl_parse_redirect_mode(struct ovl_config *config, const char *mode) 394 + { 395 + if (strcmp(mode, "on") == 0) { 396 + config->redirect_dir = true; 397 + /* 398 + * Does not make sense to have redirect creation without 399 + * redirect following. 400 + */ 401 + config->redirect_follow = true; 402 + } else if (strcmp(mode, "follow") == 0) { 403 + config->redirect_follow = true; 404 + } else if (strcmp(mode, "off") == 0) { 405 + if (ovl_redirect_always_follow) 406 + config->redirect_follow = true; 407 + } else if (strcmp(mode, "nofollow") != 0) { 408 + pr_err("overlayfs: bad mount option \"redirect_dir=%s\"\n", 409 + mode); 410 + return -EINVAL; 411 + } 412 + 413 + return 0; 414 + } 415 + 413 416 static int ovl_parse_opt(char *opt, struct ovl_config *config) 414 417 { 415 418 char *p; 419 + 420 + config->redirect_mode = kstrdup(ovl_redirect_mode_def(), GFP_KERNEL); 421 + if (!config->redirect_mode) 422 + return -ENOMEM; 416 423 417 424 while ((p = ovl_next_opt(&opt)) != NULL) { 418 425 int token; ··· 475 428 config->default_permissions = true; 476 429 break; 477 430 478 - case OPT_REDIRECT_DIR_ON: 479 - config->redirect_dir = true; 480 - break; 481 - 482 - case OPT_REDIRECT_DIR_OFF: 483 - config->redirect_dir = false; 431 + case OPT_REDIRECT_DIR: 432 + kfree(config->redirect_mode); 433 + config->redirect_mode = match_strdup(&args[0]); 434 + if (!config->redirect_mode) 435 + return -ENOMEM; 484 436 break; 485 437 486 438 case OPT_INDEX_ON: ··· 504 458 config->workdir = NULL; 505 459 } 506 460 507 - return 0; 461 + return ovl_parse_redirect_mode(config, config->redirect_mode); 508 462 } 509 463 510 464 #define OVL_WORKDIR_NAME "work" ··· 1206 1160 if (!cred) 1207 1161 goto out_err; 1208 1162 1209 - ofs->config.redirect_dir = ovl_redirect_dir_def; 1210 1163 ofs->config.index = ovl_index_def; 1211 1164 err = ovl_parse_opt((char *) data, &ofs->config); 1212 1165 if (err)
+3 -7
fs/xfs/libxfs/xfs_ialloc.c
··· 920 920 xfs_ialloc_ag_select( 921 921 xfs_trans_t *tp, /* transaction pointer */ 922 922 xfs_ino_t parent, /* parent directory inode number */ 923 - umode_t mode, /* bits set to indicate file type */ 924 - int okalloc) /* ok to allocate more space */ 923 + umode_t mode) /* bits set to indicate file type */ 925 924 { 926 925 xfs_agnumber_t agcount; /* number of ag's in the filesystem */ 927 926 xfs_agnumber_t agno; /* current ag number */ ··· 976 977 xfs_perag_put(pag); 977 978 return agno; 978 979 } 979 - 980 - if (!okalloc) 981 - goto nextag; 982 980 983 981 if (!pag->pagf_init) { 984 982 error = xfs_alloc_pagf_init(mp, tp, agno, flags); ··· 1676 1680 struct xfs_trans *tp, 1677 1681 xfs_ino_t parent, 1678 1682 umode_t mode, 1679 - int okalloc, 1680 1683 struct xfs_buf **IO_agbp, 1681 1684 xfs_ino_t *inop) 1682 1685 { ··· 1687 1692 int noroom = 0; 1688 1693 xfs_agnumber_t start_agno; 1689 1694 struct xfs_perag *pag; 1695 + int okalloc = 1; 1690 1696 1691 1697 if (*IO_agbp) { 1692 1698 /* ··· 1703 1707 * We do not have an agbp, so select an initial allocation 1704 1708 * group for inode allocation. 1705 1709 */ 1706 - start_agno = xfs_ialloc_ag_select(tp, parent, mode, okalloc); 1710 + start_agno = xfs_ialloc_ag_select(tp, parent, mode); 1707 1711 if (start_agno == NULLAGNUMBER) { 1708 1712 *inop = NULLFSINO; 1709 1713 return 0;
-1
fs/xfs/libxfs/xfs_ialloc.h
··· 81 81 struct xfs_trans *tp, /* transaction pointer */ 82 82 xfs_ino_t parent, /* parent inode (directory) */ 83 83 umode_t mode, /* mode bits for new inode */ 84 - int okalloc, /* ok to allocate more space */ 85 84 struct xfs_buf **agbp, /* buf for a.g. inode header */ 86 85 xfs_ino_t *inop); /* inode number allocated */ 87 86
-1
fs/xfs/scrub/scrub.c
··· 46 46 #include "scrub/scrub.h" 47 47 #include "scrub/common.h" 48 48 #include "scrub/trace.h" 49 - #include "scrub/scrub.h" 50 49 #include "scrub/btree.h" 51 50 52 51 /*
-1
fs/xfs/scrub/trace.c
··· 26 26 #include "xfs_mount.h" 27 27 #include "xfs_defer.h" 28 28 #include "xfs_da_format.h" 29 - #include "xfs_defer.h" 30 29 #include "xfs_inode.h" 31 30 #include "xfs_btree.h" 32 31 #include "xfs_trans.h"
+7 -26
fs/xfs/xfs_inode.c
··· 749 749 xfs_nlink_t nlink, 750 750 dev_t rdev, 751 751 prid_t prid, 752 - int okalloc, 753 752 xfs_buf_t **ialloc_context, 754 753 xfs_inode_t **ipp) 755 754 { ··· 764 765 * Call the space management code to pick 765 766 * the on-disk inode to be allocated. 766 767 */ 767 - error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, okalloc, 768 + error = xfs_dialloc(tp, pip ? pip->i_ino : 0, mode, 768 769 ialloc_context, &ino); 769 770 if (error) 770 771 return error; ··· 956 957 xfs_nlink_t nlink, 957 958 dev_t rdev, 958 959 prid_t prid, /* project id */ 959 - int okalloc, /* ok to allocate new space */ 960 960 xfs_inode_t **ipp, /* pointer to inode; it will be 961 961 locked. */ 962 962 int *committed) ··· 986 988 * transaction commit so that no other process can steal 987 989 * the inode(s) that we've just allocated. 988 990 */ 989 - code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid, okalloc, 990 - &ialloc_context, &ip); 991 + code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid, &ialloc_context, 992 + &ip); 991 993 992 994 /* 993 995 * Return an error if we were unable to allocate a new inode. ··· 1059 1061 * this call should always succeed. 1060 1062 */ 1061 1063 code = xfs_ialloc(tp, dp, mode, nlink, rdev, prid, 1062 - okalloc, &ialloc_context, &ip); 1064 + &ialloc_context, &ip); 1063 1065 1064 1066 /* 1065 1067 * If we get an error at this point, return to the caller ··· 1180 1182 xfs_flush_inodes(mp); 1181 1183 error = xfs_trans_alloc(mp, tres, resblks, 0, 0, &tp); 1182 1184 } 1183 - if (error == -ENOSPC) { 1184 - /* No space at all so try a "no-allocation" reservation */ 1185 - resblks = 0; 1186 - error = xfs_trans_alloc(mp, tres, 0, 0, 0, &tp); 1187 - } 1188 1185 if (error) 1189 1186 goto out_release_inode; ··· 1196 1203 if (error) 1197 1204 goto out_trans_cancel; 1198 1205 1199 - if (!resblks) { 1200 - error = xfs_dir_canenter(tp, dp, name); 1201 - if (error) 1202 - goto out_trans_cancel; 1203 - } 1204 - 1205 1206 /* 1206 1207 * A newly created regular or special file just has one directory 1207 1208 * entry pointing to them, but a directory also the "." entry 1208 1209 * pointing to itself. 1209 1210 */ 1210 - error = xfs_dir_ialloc(&tp, dp, mode, is_dir ? 2 : 1, rdev, 1211 - prid, resblks > 0, &ip, NULL); 1211 + error = xfs_dir_ialloc(&tp, dp, mode, is_dir ? 2 : 1, rdev, prid, &ip, 1212 + NULL); 1212 1213 if (error) 1213 1214 goto out_trans_cancel; 1214 1215 ··· 1327 1340 tres = &M_RES(mp)->tr_create_tmpfile; 1328 1341 1329 1342 error = xfs_trans_alloc(mp, tres, resblks, 0, 0, &tp); 1330 - if (error == -ENOSPC) { 1331 - /* No space at all so try a "no-allocation" reservation */ 1332 - resblks = 0; 1333 - error = xfs_trans_alloc(mp, tres, 0, 0, 0, &tp); 1334 - } 1335 1343 if (error) 1336 1344 goto out_release_inode; ··· 1335 1353 if (error) 1336 1354 goto out_trans_cancel; 1337 1355 1338 - error = xfs_dir_ialloc(&tp, dp, mode, 1, 0, 1339 - prid, resblks > 0, &ip, NULL); 1356 + error = xfs_dir_ialloc(&tp, dp, mode, 1, 0, prid, &ip, NULL); 1340 1357 if (error) 1341 1358 goto out_trans_cancel; 1342 1359
+1 -1
fs/xfs/xfs_inode.h
··· 428 428 xfs_extlen_t xfs_get_cowextsz_hint(struct xfs_inode *ip); 429 429 430 430 int xfs_dir_ialloc(struct xfs_trans **, struct xfs_inode *, umode_t, 431 - xfs_nlink_t, dev_t, prid_t, int, 431 + xfs_nlink_t, dev_t, prid_t, 432 432 struct xfs_inode **, int *); 433 433 434 434 /* from xfs_file.c */
+1 -1
fs/xfs/xfs_iomap.c
··· 1213 1213 1214 1214 ASSERT(ip->i_d.di_aformat != XFS_DINODE_FMT_LOCAL); 1215 1215 error = xfs_bmapi_read(ip, offset_fsb, end_fsb - offset_fsb, &imap, 1216 - &nimaps, XFS_BMAPI_ENTIRE | XFS_BMAPI_ATTRFORK); 1216 + &nimaps, XFS_BMAPI_ATTRFORK); 1217 1217 out_unlock: 1218 1218 xfs_iunlock(ip, lockmode); 1219 1219
+2 -2
fs/xfs/xfs_qm.c
··· 793 793 return error; 794 794 795 795 if (need_alloc) { 796 - error = xfs_dir_ialloc(&tp, NULL, S_IFREG, 1, 0, 0, 1, ip, 797 - &committed); 796 + error = xfs_dir_ialloc(&tp, NULL, S_IFREG, 1, 0, 0, ip, 797 + &committed); 798 798 if (error) { 799 799 xfs_trans_cancel(tp); 800 800 return error;
-2
fs/xfs/xfs_reflink.c
··· 49 49 #include "xfs_alloc.h" 50 50 #include "xfs_quota_defs.h" 51 51 #include "xfs_quota.h" 52 - #include "xfs_btree.h" 53 - #include "xfs_bmap_btree.h" 54 52 #include "xfs_reflink.h" 55 53 #include "xfs_iomap.h" 56 54 #include "xfs_rmap_btree.h"
+1 -14
fs/xfs/xfs_symlink.c
··· 232 232 resblks = XFS_SYMLINK_SPACE_RES(mp, link_name->len, fs_blocks); 233 233 234 234 error = xfs_trans_alloc(mp, &M_RES(mp)->tr_symlink, resblks, 0, 0, &tp); 235 - if (error == -ENOSPC && fs_blocks == 0) { 236 - resblks = 0; 237 - error = xfs_trans_alloc(mp, &M_RES(mp)->tr_symlink, 0, 0, 0, 238 - &tp); 239 - } 240 235 if (error) 241 236 goto out_release_inode; 242 237 ··· 255 260 goto out_trans_cancel; 256 261 257 262 /* 258 - * Check for ability to enter directory entry, if no space reserved. 259 - */ 260 - if (!resblks) { 261 - error = xfs_dir_canenter(tp, dp, link_name); 262 - if (error) 263 - goto out_trans_cancel; 264 - } 265 - /* 266 263 * Initialize the bmap freelist prior to calling either 267 264 * bmapi or the directory create code. 268 265 */ ··· 264 277 * Allocate an inode for the symlink. 265 278 */ 266 279 error = xfs_dir_ialloc(&tp, dp, S_IFLNK | (mode & ~S_IFMT), 1, 0, 267 - prid, resblks > 0, &ip, NULL); 280 + prid, &ip, NULL); 268 281 if (error) 269 282 goto out_trans_cancel; 270 283
-1
fs/xfs/xfs_trace.c
··· 24 24 #include "xfs_mount.h" 25 25 #include "xfs_defer.h" 26 26 #include "xfs_da_format.h" 27 - #include "xfs_defer.h" 28 27 #include "xfs_inode.h" 29 28 #include "xfs_btree.h" 30 29 #include "xfs_da_btree.h"
+8
include/crypto/internal/hash.h
··· 82 82 struct ahash_instance *inst); 83 83 void ahash_free_instance(struct crypto_instance *inst); 84 84 85 + int shash_no_setkey(struct crypto_shash *tfm, const u8 *key, 86 + unsigned int keylen); 87 + 88 + static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg) 89 + { 90 + return alg->setkey != shash_no_setkey; 91 + } 92 + 85 93 int crypto_init_ahash_spawn(struct crypto_ahash_spawn *spawn, 86 94 struct hash_alg_common *alg, 87 95 struct crypto_instance *inst);
+6 -4
include/drm/drm_connector.h
··· 24 24 #define __DRM_CONNECTOR_H__ 25 25 26 26 #include <linux/list.h> 27 + #include <linux/llist.h> 27 28 #include <linux/ctype.h> 28 29 #include <linux/hdmi.h> 29 30 #include <drm/drm_mode_object.h> ··· 919 918 uint16_t tile_h_size, tile_v_size; 920 919 921 920 /** 922 - * @free_work: 921 + * @free_node: 923 922 * 924 - * Work used only by &drm_connector_iter to be able to clean up a 925 - * connector from any context. 923 + * List used only by &drm_connector_iter to be able to clean up a 924 + * connector from any context, in conjunction with 925 + * &drm_mode_config.connector_free_work. 926 926 */ 927 - struct work_struct free_work; 927 + struct llist_node free_node; 928 928 }; 929 929 930 930 #define obj_to_connector(x) container_of(x, struct drm_connector, base)
+2
include/drm/drm_edid.h
··· 465 465 struct edid *drm_get_edid_switcheroo(struct drm_connector *connector, 466 466 struct i2c_adapter *adapter); 467 467 struct edid *drm_edid_duplicate(const struct edid *edid); 468 + void drm_reset_display_info(struct drm_connector *connector); 469 + u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edid); 468 470 int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid); 469 471 470 472 u8 drm_match_cea_mode(const struct drm_display_mode *to_match);
+17 -1
include/drm/drm_mode_config.h
··· 27 27 #include <linux/types.h> 28 28 #include <linux/idr.h> 29 29 #include <linux/workqueue.h> 30 + #include <linux/llist.h> 30 31 31 32 #include <drm/drm_modeset_lock.h> 32 33 ··· 394 393 395 394 /** 396 395 * @connector_list_lock: Protects @num_connector and 397 - * @connector_list. 396 + * @connector_list and @connector_free_list. 398 397 */ 399 398 spinlock_t connector_list_lock; 400 399 /** ··· 414 413 * &struct drm_connector_list_iter to walk this list. 415 414 */ 416 415 struct list_head connector_list; 416 + /** 417 + * @connector_free_list: 418 + * 419 + * List of connector objects linked with &drm_connector.free_head. 420 + * Protected by @connector_list_lock. Used by 421 + * drm_for_each_connector_iter() and 422 + * &struct drm_connector_list_iter to savely free connectors using 423 + * @connector_free_work. 424 + */ 425 + struct llist_head connector_free_list; 426 + /** 427 + * @connector_free_work: Work to clean up @connector_free_list. 428 + */ 429 + struct work_struct connector_free_work; 430 + 417 431 /** 418 432 * @num_encoder: 419 433 *
-3
include/kvm/arm_arch_timer.h
··· 93 93 #define vcpu_vtimer(v) (&(v)->arch.timer_cpu.vtimer) 94 94 #define vcpu_ptimer(v) (&(v)->arch.timer_cpu.ptimer) 95 95 96 - void enable_el1_phys_timer_access(void); 97 - void disable_el1_phys_timer_access(void); 98 - 99 96 #endif
+11 -36
include/linux/compiler.h
··· 220 220 /* 221 221 * Prevent the compiler from merging or refetching reads or writes. The 222 222 * compiler is also forbidden from reordering successive instances of 223 - * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the 224 - * compiler is aware of some particular ordering. One way to make the 225 - * compiler aware of ordering is to put the two invocations of READ_ONCE, 226 - * WRITE_ONCE or ACCESS_ONCE() in different C statements. 223 + * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some 224 + * particular ordering. One way to make the compiler aware of ordering is to 225 + * put the two invocations of READ_ONCE or WRITE_ONCE in different C 226 + * statements. 227 227 * 228 - * In contrast to ACCESS_ONCE these two macros will also work on aggregate 229 - * data types like structs or unions. If the size of the accessed data 230 - * type exceeds the word size of the machine (e.g., 32 bits or 64 bits) 231 - * READ_ONCE() and WRITE_ONCE() will fall back to memcpy(). There's at 232 - * least two memcpy()s: one for the __builtin_memcpy() and then one for 233 - * the macro doing the copy of variable - '__u' allocated on the stack. 228 + * These two macros will also work on aggregate data types like structs or 229 + * unions. If the size of the accessed data type exceeds the word size of 230 + * the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will 231 + * fall back to memcpy(). There's at least two memcpy()s: one for the 232 + * __builtin_memcpy() and then one for the macro doing the copy of variable 233 + * - '__u' allocated on the stack. 234 234 * 235 235 * Their two major use cases are: (1) Mediating communication between 236 236 * process-level code and irq/NMI handlers, all running on the same CPU, 237 - * and (2) Ensuring that the compiler does not fold, spindle, or otherwise 237 + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise 238 238 * mutilate accesses that either do not require ordering or that interact 239 239 * with an explicit memory barrier or atomic instruction that provides the 240 240 * required ordering. ··· 326 326 #define compiletime_assert_atomic_type(t) \ 327 327 compiletime_assert(__native_word(t), \ 328 328 "Need native word sized stores/loads for atomicity.") 329 - 330 - /* 331 - * Prevent the compiler from merging or refetching accesses. The compiler 332 - * is also forbidden from reordering successive instances of ACCESS_ONCE(), 333 - * but only when the compiler is aware of some particular ordering. One way 334 - * to make the compiler aware of ordering is to put the two invocations of 335 - * ACCESS_ONCE() in different C statements. 336 - * 337 - * ACCESS_ONCE will only work on scalar types. For union types, ACCESS_ONCE 338 - * on a union member will work as long as the size of the member matches the 339 - * size of the union and the size is smaller than word size. 340 - * 341 - * The major use cases of ACCESS_ONCE used to be (1) Mediating communication 342 - * between process-level code and irq/NMI handlers, all running on the same CPU, 343 - * and (2) Ensuring that the compiler does not fold, spindle, or otherwise 344 - * mutilate accesses that either do not require ordering or that interact 345 - * with an explicit memory barrier or atomic instruction that provides the 346 - * required ordering. 347 - * 348 - * If possible use READ_ONCE()/WRITE_ONCE() instead. 349 - */ 350 - #define __ACCESS_ONCE(x) ({ \ 351 - __maybe_unused typeof(x) __var = (__force typeof(x)) 0; \ 352 - (volatile typeof(x) *)&(x); }) 353 - #define ACCESS_ONCE(x) (*__ACCESS_ONCE(x)) 354 329 355 330 #endif /* __LINUX_COMPILER_H */
-45
include/linux/completion.h
··· 10 10 */ 11 11 12 12 #include <linux/wait.h> 13 - #ifdef CONFIG_LOCKDEP_COMPLETIONS 14 - #include <linux/lockdep.h> 15 - #endif 16 13 17 14 /* 18 15 * struct completion - structure used to maintain state for a "completion" ··· 26 29 struct completion { 27 30 unsigned int done; 28 31 wait_queue_head_t wait; 29 - #ifdef CONFIG_LOCKDEP_COMPLETIONS 30 - struct lockdep_map_cross map; 31 - #endif 32 32 }; 33 33 34 - #ifdef CONFIG_LOCKDEP_COMPLETIONS 35 - static inline void complete_acquire(struct completion *x) 36 - { 37 - lock_acquire_exclusive((struct lockdep_map *)&x->map, 0, 0, NULL, _RET_IP_); 38 - } 39 - 40 - static inline void complete_release(struct completion *x) 41 - { 42 - lock_release((struct lockdep_map *)&x->map, 0, _RET_IP_); 43 - } 44 - 45 - static inline void complete_release_commit(struct completion *x) 46 - { 47 - lock_commit_crosslock((struct lockdep_map *)&x->map); 48 - } 49 - 50 - #define init_completion_map(x, m) \ 51 - do { \ 52 - lockdep_init_map_crosslock((struct lockdep_map *)&(x)->map, \ 53 - (m)->name, (m)->key, 0); \ 54 - __init_completion(x); \ 55 - } while (0) 56 - 57 - #define init_completion(x) \ 58 - do { \ 59 - static struct lock_class_key __key; \ 60 - lockdep_init_map_crosslock((struct lockdep_map *)&(x)->map, \ 61 - "(completion)" #x, \ 62 - &__key, 0); \ 63 - __init_completion(x); \ 64 - } while (0) 65 - #else 66 34 #define init_completion_map(x, m) __init_completion(x) 67 35 #define init_completion(x) __init_completion(x) 68 36 static inline void complete_acquire(struct completion *x) {} 69 37 static inline void complete_release(struct completion *x) {} 70 38 static inline void complete_release_commit(struct completion *x) {} 71 - #endif 72 39 73 - #ifdef CONFIG_LOCKDEP_COMPLETIONS 74 - #define COMPLETION_INITIALIZER(work) \ 75 - { 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait), \ 76 - STATIC_CROSS_LOCKDEP_MAP_INIT("(completion)" #work, &(work)) } 77 - #else 78 40 #define COMPLETION_INITIALIZER(work) \ 79 41 { 0, 
__WAIT_QUEUE_HEAD_INITIALIZER((work).wait) } 80 - #endif 81 42 82 43 #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ 83 44 (*({ init_completion_map(&(work), &(map)); &(work); }))
+1
include/linux/cred.h
··· 83 83 extern void set_groups(struct cred *, struct group_info *); 84 84 extern int groups_search(const struct group_info *, kgid_t); 85 85 extern bool may_setgroups(void); 86 + extern void groups_sort(struct group_info *); 86 87 87 88 /* 88 89 * The security context of a task
+1
include/linux/idr.h
··· 15 15 #include <linux/radix-tree.h> 16 16 #include <linux/gfp.h> 17 17 #include <linux/percpu.h> 18 + #include <linux/bug.h> 18 19 19 20 struct idr { 20 21 struct radix_tree_root idr_rt;
+1 -1
include/linux/kvm_host.h
··· 232 232 struct mutex mutex; 233 233 struct kvm_run *run; 234 234 235 - int guest_fpu_loaded, guest_xcr0_loaded; 235 + int guest_xcr0_loaded; 236 236 struct swait_queue_head wq; 237 237 struct pid __rcu *pid; 238 238 int sigset_active;
-125
include/linux/lockdep.h
··· 158 158 int cpu; 159 159 unsigned long ip; 160 160 #endif 161 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 162 - /* 163 - * Whether it's a crosslock. 164 - */ 165 - int cross; 166 - #endif 167 161 }; 168 162 169 163 static inline void lockdep_copy_map(struct lockdep_map *to, ··· 261 267 unsigned int hardirqs_off:1; 262 268 unsigned int references:12; /* 32 bits */ 263 269 unsigned int pin_count; 264 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 265 - /* 266 - * Generation id. 267 - * 268 - * A value of cross_gen_id will be stored when holding this, 269 - * which is globally increased whenever each crosslock is held. 270 - */ 271 - unsigned int gen_id; 272 - #endif 273 270 }; 274 - 275 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 276 - #define MAX_XHLOCK_TRACE_ENTRIES 5 277 - 278 - /* 279 - * This is for keeping locks waiting for commit so that true dependencies 280 - * can be added at commit step. 281 - */ 282 - struct hist_lock { 283 - /* 284 - * Id for each entry in the ring buffer. This is used to 285 - * decide whether the ring buffer was overwritten or not. 286 - * 287 - * For example, 288 - * 289 - * |<----------- hist_lock ring buffer size ------->| 290 - * pppppppppppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiii 291 - * wrapped > iiiiiiiiiiiiiiiiiiiiiiiiiii....................... 292 - * 293 - * where 'p' represents an acquisition in process 294 - * context, 'i' represents an acquisition in irq 295 - * context. 296 - * 297 - * In this example, the ring buffer was overwritten by 298 - * acquisitions in irq context, that should be detected on 299 - * rollback or commit. 300 - */ 301 - unsigned int hist_id; 302 - 303 - /* 304 - * Seperate stack_trace data. This will be used at commit step. 305 - */ 306 - struct stack_trace trace; 307 - unsigned long trace_entries[MAX_XHLOCK_TRACE_ENTRIES]; 308 - 309 - /* 310 - * Seperate hlock instance. This will be used at commit step. 311 - * 312 - * TODO: Use a smaller data structure containing only necessary 313 - * data. 
However, we should make lockdep code able to handle the 314 - * smaller one first. 315 - */ 316 - struct held_lock hlock; 317 - }; 318 - 319 - /* 320 - * To initialize a lock as crosslock, lockdep_init_map_crosslock() should 321 - * be called instead of lockdep_init_map(). 322 - */ 323 - struct cross_lock { 324 - /* 325 - * When more than one acquisition of crosslocks are overlapped, 326 - * we have to perform commit for them based on cross_gen_id of 327 - * the first acquisition, which allows us to add more true 328 - * dependencies. 329 - * 330 - * Moreover, when no acquisition of a crosslock is in progress, 331 - * we should not perform commit because the lock might not exist 332 - * any more, which might cause incorrect memory access. So we 333 - * have to track the number of acquisitions of a crosslock. 334 - */ 335 - int nr_acquire; 336 - 337 - /* 338 - * Seperate hlock instance. This will be used at commit step. 339 - * 340 - * TODO: Use a smaller data structure containing only necessary 341 - * data. However, we should make lockdep code able to handle the 342 - * smaller one first. 343 - */ 344 - struct held_lock hlock; 345 - }; 346 - 347 - struct lockdep_map_cross { 348 - struct lockdep_map map; 349 - struct cross_lock xlock; 350 - }; 351 - #endif 352 271 353 272 /* 354 273 * Initialization, self-test and debugging-output methods: ··· 467 560 XHLOCK_CTX_NR, 468 561 }; 469 562 470 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 471 - extern void lockdep_init_map_crosslock(struct lockdep_map *lock, 472 - const char *name, 473 - struct lock_class_key *key, 474 - int subclass); 475 - extern void lock_commit_crosslock(struct lockdep_map *lock); 476 - 477 - /* 478 - * What we essencially have to initialize is 'nr_acquire'. Other members 479 - * will be initialized in add_xlock(). 
480 - */ 481 - #define STATIC_CROSS_LOCK_INIT() \ 482 - { .nr_acquire = 0,} 483 - 484 - #define STATIC_CROSS_LOCKDEP_MAP_INIT(_name, _key) \ 485 - { .map.name = (_name), .map.key = (void *)(_key), \ 486 - .map.cross = 1, .xlock = STATIC_CROSS_LOCK_INIT(), } 487 - 488 - /* 489 - * To initialize a lockdep_map statically use this macro. 490 - * Note that _name must not be NULL. 491 - */ 492 - #define STATIC_LOCKDEP_MAP_INIT(_name, _key) \ 493 - { .name = (_name), .key = (void *)(_key), .cross = 0, } 494 - 495 - extern void crossrelease_hist_start(enum xhlock_context_t c); 496 - extern void crossrelease_hist_end(enum xhlock_context_t c); 497 - extern void lockdep_invariant_state(bool force); 498 - extern void lockdep_init_task(struct task_struct *task); 499 - extern void lockdep_free_task(struct task_struct *task); 500 - #else /* !CROSSRELEASE */ 501 563 #define lockdep_init_map_crosslock(m, n, k, s) do {} while (0) 502 564 /* 503 565 * To initialize a lockdep_map statically use this macro. ··· 480 604 static inline void lockdep_invariant_state(bool force) {} 481 605 static inline void lockdep_init_task(struct task_struct *task) {} 482 606 static inline void lockdep_free_task(struct task_struct *task) {} 483 - #endif /* CROSSRELEASE */ 484 607 485 608 #ifdef CONFIG_LOCK_STAT 486 609
+9
include/linux/oom.h
··· 67 67 } 68 68 69 69 /* 70 + * Use this helper if tsk->mm != mm and the victim mm needs special 71 + * handling. This is guaranteed to stay true once set. 72 + */ 73 + static inline bool mm_is_oom_victim(struct mm_struct *mm) 74 + { 75 + return test_bit(MMF_OOM_VICTIM, &mm->flags); 76 + } 77 + 78 + /* 70 79 * Checks whether a page fault on the given mm is still reliable. 71 80 * This is no longer true if the oom reaper started to reap the 72 81 * address space which is reflected by MMF_UNSTABLE flag set in
+3
include/linux/pci.h
··· 1675 1675 static inline struct pci_dev *pci_get_bus_and_slot(unsigned int bus, 1676 1676 unsigned int devfn) 1677 1677 { return NULL; } 1678 + static inline struct pci_dev *pci_get_domain_bus_and_slot(int domain, 1679 + unsigned int bus, unsigned int devfn) 1680 + { return NULL; } 1678 1681 1679 1682 static inline int pci_domain_nr(struct pci_bus *bus) { return 0; } 1680 1683 static inline struct pci_dev *pci_dev_get(struct pci_dev *dev) { return NULL; }
+1
include/linux/pm.h
··· 765 765 extern int pm_generic_poweroff(struct device *dev); 766 766 extern void pm_generic_complete(struct device *dev); 767 767 768 + extern void dev_pm_skip_next_resume_phases(struct device *dev); 768 769 extern bool dev_pm_smart_suspend_and_suspended(struct device *dev); 769 770 770 771 #else /* !CONFIG_PM_SLEEP */
+9
include/linux/ptr_ring.h
··· 101 101 102 102 /* Note: callers invoking this in a loop must use a compiler barrier, 103 103 * for example cpu_relax(). Callers must hold producer_lock. 104 + * Callers are responsible for making sure the pointer that is being queued 105 + * points to valid data. 104 106 */ 105 107 static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr) 106 108 { 107 109 if (unlikely(!r->size) || r->queue[r->producer]) 108 110 return -ENOSPC; 111 + 112 + /* Make sure the pointer we are storing points to valid data. */ 113 + /* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */ 114 + smp_wmb(); 109 115 110 116 r->queue[r->producer++] = ptr; 111 117 if (unlikely(r->producer >= r->size)) ··· 281 275 if (ptr) 282 276 __ptr_ring_discard_one(r); 283 277 278 + /* Make sure anyone accessing data through the pointer is up to date. */ 279 + /* Pairs with smp_wmb in __ptr_ring_produce. */ 280 + smp_read_barrier_depends(); 284 281 return ptr; 285 282 286 283
+2
include/linux/rbtree.h
··· 99 99 struct rb_root *root); 100 100 extern void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new, 101 101 struct rb_root *root); 102 + extern void rb_replace_node_cached(struct rb_node *victim, struct rb_node *new, 103 + struct rb_root_cached *root); 102 104 103 105 static inline void rb_link_node(struct rb_node *node, struct rb_node *parent, 104 106 struct rb_node **rb_link)
-3
include/linux/rwlock_types.h
··· 10 10 */ 11 11 typedef struct { 12 12 arch_rwlock_t raw_lock; 13 - #ifdef CONFIG_GENERIC_LOCKBREAK 14 - unsigned int break_lock; 15 - #endif 16 13 #ifdef CONFIG_DEBUG_SPINLOCK 17 14 unsigned int magic, owner_cpu; 18 15 void *owner;
+5 -12
include/linux/sched.h
··· 849 849 struct held_lock held_locks[MAX_LOCK_DEPTH]; 850 850 #endif 851 851 852 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 853 - #define MAX_XHLOCKS_NR 64UL 854 - struct hist_lock *xhlocks; /* Crossrelease history locks */ 855 - unsigned int xhlock_idx; 856 - /* For restoring at history boundaries */ 857 - unsigned int xhlock_idx_hist[XHLOCK_CTX_NR]; 858 - unsigned int hist_id; 859 - /* For overwrite check at each context exit */ 860 - unsigned int hist_id_save[XHLOCK_CTX_NR]; 861 - #endif 862 - 863 852 #ifdef CONFIG_UBSAN 864 853 unsigned int in_ubsan; 865 854 #endif ··· 1492 1503 __set_task_comm(tsk, from, false); 1493 1504 } 1494 1505 1495 - extern char *get_task_comm(char *to, struct task_struct *tsk); 1506 + extern char *__get_task_comm(char *to, size_t len, struct task_struct *tsk); 1507 + #define get_task_comm(buf, tsk) ({ \ 1508 + BUILD_BUG_ON(sizeof(buf) != TASK_COMM_LEN); \ 1509 + __get_task_comm(buf, sizeof(buf), tsk); \ 1510 + }) 1496 1511 1497 1512 #ifdef CONFIG_SMP 1498 1513 void scheduler_ipi(void);
+1
include/linux/sched/coredump.h
··· 70 70 #define MMF_UNSTABLE 22 /* mm is unstable for copy_from_user */ 71 71 #define MMF_HUGE_ZERO_PAGE 23 /* mm has ever used the global huge zero page */ 72 72 #define MMF_DISABLE_THP 24 /* disable THP for all VMAs */ 73 + #define MMF_OOM_VICTIM 25 /* mm is the oom victim */ 73 74 #define MMF_DISABLE_THP_MASK (1 << MMF_DISABLE_THP) 74 75 75 76 #define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
-5
include/linux/spinlock.h
··· 107 107 108 108 #define raw_spin_is_locked(lock) arch_spin_is_locked(&(lock)->raw_lock) 109 109 110 - #ifdef CONFIG_GENERIC_LOCKBREAK 111 - #define raw_spin_is_contended(lock) ((lock)->break_lock) 112 - #else 113 - 114 110 #ifdef arch_spin_is_contended 115 111 #define raw_spin_is_contended(lock) arch_spin_is_contended(&(lock)->raw_lock) 116 112 #else 117 113 #define raw_spin_is_contended(lock) (((void)(lock), 0)) 118 114 #endif /*arch_spin_is_contended*/ 119 - #endif 120 115 121 116 /* 122 117 * This barrier must provide two things:
-3
include/linux/spinlock_types.h
··· 19 19 20 20 typedef struct raw_spinlock { 21 21 arch_spinlock_t raw_lock; 22 - #ifdef CONFIG_GENERIC_LOCKBREAK 23 - unsigned int break_lock; 24 - #endif 25 22 #ifdef CONFIG_DEBUG_SPINLOCK 26 23 unsigned int magic, owner_cpu; 27 24 void *owner;
+4 -1
include/linux/string.h
··· 259 259 { 260 260 __kernel_size_t ret; 261 261 size_t p_size = __builtin_object_size(p, 0); 262 - if (p_size == (size_t)-1) 262 + 263 + /* Work around gcc excess stack consumption issue */ 264 + if (p_size == (size_t)-1 || 265 + (__builtin_constant_p(p[p_size - 1]) && p[p_size - 1] == '\0')) 263 266 return __builtin_strlen(p); 264 267 ret = strnlen(p, p_size); 265 268 if (p_size <= ret)
+1 -1
include/linux/trace.h
··· 18 18 */ 19 19 struct trace_export { 20 20 struct trace_export __rcu *next; 21 - void (*write)(const void *, unsigned int); 21 + void (*write)(struct trace_export *, const void *, unsigned int); 22 22 }; 23 23 24 24 int register_ftrace_export(struct trace_export *export);
+9 -9
include/net/gue.h
··· 44 44 #else 45 45 #error "Please fix <asm/byteorder.h>" 46 46 #endif 47 - __u8 proto_ctype; 48 - __u16 flags; 47 + __u8 proto_ctype; 48 + __be16 flags; 49 49 }; 50 - __u32 word; 50 + __be32 word; 51 51 }; 52 52 }; 53 53 ··· 84 84 * if there is an unknown standard or private flags, or the options length for 85 85 * the flags exceeds the options length specific in hlen of the GUE header. 86 86 */ 87 - static inline int validate_gue_flags(struct guehdr *guehdr, 88 - size_t optlen) 87 + static inline int validate_gue_flags(struct guehdr *guehdr, size_t optlen) 89 88 { 89 + __be16 flags = guehdr->flags; 90 90 size_t len; 91 - __be32 flags = guehdr->flags; 92 91 93 92 if (flags & ~GUE_FLAGS_ALL) 94 93 return 1; ··· 100 101 /* Private flags are last four bytes accounted in 101 102 * guehdr_flags_len 102 103 */ 103 - flags = *(__be32 *)((void *)&guehdr[1] + len - GUE_LEN_PRIV); 104 + __be32 pflags = *(__be32 *)((void *)&guehdr[1] + 105 + len - GUE_LEN_PRIV); 104 106 105 - if (flags & ~GUE_PFLAGS_ALL) 107 + if (pflags & ~GUE_PFLAGS_ALL) 106 108 return 1; 107 109 108 - len += guehdr_priv_flags_len(flags); 110 + len += guehdr_priv_flags_len(pflags); 109 111 if (len > optlen) 110 112 return 1; 111 113 }
+1
include/net/ip.h
··· 36 36 #include <net/netns/hash.h> 37 37 38 38 #define IPV4_MAX_PMTU 65535U /* RFC 2675, Section 5.1 */ 39 + #define IPV4_MIN_MTU 68 /* RFC 791 */ 39 40 40 41 struct sock; 41 42
+1
include/net/sch_generic.h
··· 72 72 */ 73 73 #define TCQ_F_INVISIBLE 0x80 /* invisible by default in dump */ 74 74 #define TCQ_F_NOLOCK 0x100 /* qdisc does not require locking */ 75 + #define TCQ_F_OFFLOADED 0x200 /* qdisc is offloaded to HW */ 75 76 u32 limit; 76 77 const struct Qdisc_ops *ops; 77 78 struct qdisc_size_table __rcu *stab;
+7 -4
include/trace/events/preemptirq.h
··· 56 56 57 57 #include <trace/define_trace.h> 58 58 59 - #else /* !CONFIG_PREEMPTIRQ_EVENTS */ 59 + #endif /* !CONFIG_PREEMPTIRQ_EVENTS */ 60 60 61 + #if !defined(CONFIG_PREEMPTIRQ_EVENTS) || defined(CONFIG_PROVE_LOCKING) 61 62 #define trace_irq_enable(...) 62 63 #define trace_irq_disable(...) 63 - #define trace_preempt_enable(...) 64 - #define trace_preempt_disable(...) 65 64 #define trace_irq_enable_rcuidle(...) 66 65 #define trace_irq_disable_rcuidle(...) 66 + #endif 67 + 68 + #if !defined(CONFIG_PREEMPTIRQ_EVENTS) || !defined(CONFIG_DEBUG_PREEMPT) 69 + #define trace_preempt_enable(...) 70 + #define trace_preempt_disable(...) 67 71 #define trace_preempt_enable_rcuidle(...) 68 72 #define trace_preempt_disable_rcuidle(...) 69 - 70 73 #endif
+2 -2
include/uapi/linux/kvm.h
··· 630 630 631 631 struct kvm_s390_irq_state { 632 632 __u64 buf; 633 - __u32 flags; 633 + __u32 flags; /* will stay unused for compatibility reasons */ 634 634 __u32 len; 635 - __u32 reserved[4]; 635 + __u32 reserved[4]; /* will stay unused for compatibility reasons */ 636 636 }; 637 637 638 638 /* for KVM_SET_GUEST_DEBUG */
-1
include/uapi/linux/pkt_sched.h
··· 256 256 #define TC_RED_ECN 1 257 257 #define TC_RED_HARDDROP 2 258 258 #define TC_RED_ADAPTATIVE 4 259 - #define TC_RED_OFFLOADED 8 260 259 }; 261 260 262 261 struct tc_red_xstats {
+1
include/uapi/linux/rtnetlink.h
··· 557 557 TCA_PAD, 558 558 TCA_DUMP_INVISIBLE, 559 559 TCA_CHAIN, 560 + TCA_HW_OFFLOAD, 560 561 __TCA_MAX 561 562 }; 562 563
+6 -1
init/main.c
··· 589 589 radix_tree_init(); 590 590 591 591 /* 592 + * Set up housekeeping before setting up workqueues to allow the unbound 593 + * workqueue to take non-housekeeping into account. 594 + */ 595 + housekeeping_init(); 596 + 597 + /* 592 598 * Allow workqueue creation and work item queueing/cancelling 593 599 * early. Work item execution depends on kthreads and starts after 594 600 * workqueue_init(). ··· 611 605 early_irq_init(); 612 606 init_IRQ(); 613 607 tick_init(); 614 - housekeeping_init(); 615 608 rcu_init_nohz(); 616 609 init_timers(); 617 610 hrtimers_init();
+2
kernel/bpf/hashtab.c
··· 114 114 pptr = htab_elem_get_ptr(get_htab_elem(htab, i), 115 115 htab->map.key_size); 116 116 free_percpu(pptr); 117 + cond_resched(); 117 118 } 118 119 free_elems: 119 120 bpf_map_area_free(htab->elems); ··· 160 159 goto free_elems; 161 160 htab_elem_set_ptr(get_htab_elem(htab, i), htab->map.key_size, 162 161 pptr); 162 + cond_resched(); 163 163 } 164 164 165 165 skip_percpu_elems:
+2 -2
kernel/cgroup/debug.c
··· 50 50 51 51 spin_lock_irq(&css_set_lock); 52 52 rcu_read_lock(); 53 - cset = rcu_dereference(current->cgroups); 53 + cset = task_css_set(current); 54 54 refcnt = refcount_read(&cset->refcount); 55 55 seq_printf(seq, "css_set %pK %d", cset, refcnt); 56 56 if (refcnt > cset->nr_tasks) ··· 96 96 97 97 spin_lock_irq(&css_set_lock); 98 98 rcu_read_lock(); 99 - cset = rcu_dereference(current->cgroups); 99 + cset = task_css_set(current); 100 100 list_for_each_entry(link, &cset->cgrp_links, cgrp_link) { 101 101 struct cgroup *c = link->cgrp; 102 102
+6 -2
kernel/cgroup/stat.c
··· 296 296 } 297 297 298 298 /* ->updated_children list is self terminated */ 299 - for_each_possible_cpu(cpu) 300 - cgroup_cpu_stat(cgrp, cpu)->updated_children = cgrp; 299 + for_each_possible_cpu(cpu) { 300 + struct cgroup_cpu_stat *cstat = cgroup_cpu_stat(cgrp, cpu); 301 + 302 + cstat->updated_children = cgrp; 303 + u64_stats_init(&cstat->sync); 304 + } 301 305 302 306 prev_cputime_init(&cgrp->stat.prev_cputime); 303 307
+8
kernel/exit.c
··· 1755 1755 return -EFAULT; 1756 1756 } 1757 1757 #endif 1758 + 1759 + __weak void abort(void) 1760 + { 1761 + BUG(); 1762 + 1763 + /* if that doesn't kill us, halt */ 1764 + panic("Oops failed to kill thread"); 1765 + }
+2 -2
kernel/futex.c
··· 1582 1582 { 1583 1583 unsigned int op = (encoded_op & 0x70000000) >> 28; 1584 1584 unsigned int cmp = (encoded_op & 0x0f000000) >> 24; 1585 - int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 12); 1586 - int cmparg = sign_extend32(encoded_op & 0x00000fff, 12); 1585 + int oparg = sign_extend32((encoded_op & 0x00fff000) >> 12, 11); 1586 + int cmparg = sign_extend32(encoded_op & 0x00000fff, 11); 1587 1587 int oldval, ret; 1588 1588 1589 1589 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) {
+3 -2
kernel/groups.c
··· 86 86 return gid_gt(a, b) - gid_lt(a, b); 87 87 } 88 88 89 - static void groups_sort(struct group_info *group_info) 89 + void groups_sort(struct group_info *group_info) 90 90 { 91 91 sort(group_info->gid, group_info->ngroups, sizeof(*group_info->gid), 92 92 gid_cmp, NULL); 93 93 } 94 + EXPORT_SYMBOL(groups_sort); 94 95 95 96 /* a simple bsearch */ 96 97 int groups_search(const struct group_info *group_info, kgid_t grp) ··· 123 122 void set_groups(struct cred *new, struct group_info *group_info) 124 123 { 125 124 put_group_info(new->group_info); 126 - groups_sort(group_info); 127 125 get_group_info(group_info); 128 126 new->group_info = group_info; 129 127 } ··· 206 206 return retval; 207 207 } 208 208 209 + groups_sort(group_info); 209 210 retval = set_current_groups(group_info); 210 211 put_group_info(group_info); 211 212
+2 -2
kernel/kcov.c
··· 157 157 } 158 158 EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2); 159 159 160 - void notrace __sanitizer_cov_trace_cmp4(u16 arg1, u16 arg2) 160 + void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2) 161 161 { 162 162 write_comp_data(KCOV_CMP_SIZE(2), arg1, arg2, _RET_IP_); 163 163 } ··· 183 183 } 184 184 EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2); 185 185 186 - void notrace __sanitizer_cov_trace_const_cmp4(u16 arg1, u16 arg2) 186 + void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2) 187 187 { 188 188 write_comp_data(KCOV_CMP_SIZE(2) | KCOV_CMP_CONST, arg1, arg2, 189 189 _RET_IP_);
+38 -620
kernel/locking/lockdep.c
··· 57 57 #define CREATE_TRACE_POINTS 58 58 #include <trace/events/lock.h> 59 59 60 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 61 - #include <linux/slab.h> 62 - #endif 63 - 64 60 #ifdef CONFIG_PROVE_LOCKING 65 61 int prove_locking = 1; 66 62 module_param(prove_locking, int, 0644); ··· 70 74 #else 71 75 #define lock_stat 0 72 76 #endif 73 - 74 - #ifdef CONFIG_BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK 75 - static int crossrelease_fullstack = 1; 76 - #else 77 - static int crossrelease_fullstack; 78 - #endif 79 - static int __init allow_crossrelease_fullstack(char *str) 80 - { 81 - crossrelease_fullstack = 1; 82 - return 0; 83 - } 84 - 85 - early_param("crossrelease_fullstack", allow_crossrelease_fullstack); 86 77 87 78 /* 88 79 * lockdep_lock: protects the lockdep graph, the hashes and the ··· 723 740 return is_static || static_obj(lock->key) ? NULL : ERR_PTR(-EINVAL); 724 741 } 725 742 726 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 727 - static void cross_init(struct lockdep_map *lock, int cross); 728 - static int cross_lock(struct lockdep_map *lock); 729 - static int lock_acquire_crosslock(struct held_lock *hlock); 730 - static int lock_release_crosslock(struct lockdep_map *lock); 731 - #else 732 - static inline void cross_init(struct lockdep_map *lock, int cross) {} 733 - static inline int cross_lock(struct lockdep_map *lock) { return 0; } 734 - static inline int lock_acquire_crosslock(struct held_lock *hlock) { return 2; } 735 - static inline int lock_release_crosslock(struct lockdep_map *lock) { return 2; } 736 - #endif 737 - 738 743 /* 739 744 * Register a lock's class in the hash-table, if the class is not present 740 745 * yet. Otherwise we look it up. 
We cache the result in the lock object ··· 1122 1151 printk(KERN_CONT "\n\n"); 1123 1152 } 1124 1153 1125 - if (cross_lock(tgt->instance)) { 1126 - printk(" Possible unsafe locking scenario by crosslock:\n\n"); 1127 - printk(" CPU0 CPU1\n"); 1128 - printk(" ---- ----\n"); 1129 - printk(" lock("); 1130 - __print_lock_name(parent); 1131 - printk(KERN_CONT ");\n"); 1132 - printk(" lock("); 1133 - __print_lock_name(target); 1134 - printk(KERN_CONT ");\n"); 1135 - printk(" lock("); 1136 - __print_lock_name(source); 1137 - printk(KERN_CONT ");\n"); 1138 - printk(" unlock("); 1139 - __print_lock_name(target); 1140 - printk(KERN_CONT ");\n"); 1141 - printk("\n *** DEADLOCK ***\n\n"); 1142 - } else { 1143 - printk(" Possible unsafe locking scenario:\n\n"); 1144 - printk(" CPU0 CPU1\n"); 1145 - printk(" ---- ----\n"); 1146 - printk(" lock("); 1147 - __print_lock_name(target); 1148 - printk(KERN_CONT ");\n"); 1149 - printk(" lock("); 1150 - __print_lock_name(parent); 1151 - printk(KERN_CONT ");\n"); 1152 - printk(" lock("); 1153 - __print_lock_name(target); 1154 - printk(KERN_CONT ");\n"); 1155 - printk(" lock("); 1156 - __print_lock_name(source); 1157 - printk(KERN_CONT ");\n"); 1158 - printk("\n *** DEADLOCK ***\n\n"); 1159 - } 1154 + printk(" Possible unsafe locking scenario:\n\n"); 1155 + printk(" CPU0 CPU1\n"); 1156 + printk(" ---- ----\n"); 1157 + printk(" lock("); 1158 + __print_lock_name(target); 1159 + printk(KERN_CONT ");\n"); 1160 + printk(" lock("); 1161 + __print_lock_name(parent); 1162 + printk(KERN_CONT ");\n"); 1163 + printk(" lock("); 1164 + __print_lock_name(target); 1165 + printk(KERN_CONT ");\n"); 1166 + printk(" lock("); 1167 + __print_lock_name(source); 1168 + printk(KERN_CONT ");\n"); 1169 + printk("\n *** DEADLOCK ***\n\n"); 1160 1170 } 1161 1171 1162 1172 /* ··· 1163 1211 curr->comm, task_pid_nr(curr)); 1164 1212 print_lock(check_src); 1165 1213 1166 - if (cross_lock(check_tgt->instance)) 1167 - pr_warn("\nbut now in release context of a crosslock 
acquired at the following:\n"); 1168 - else 1169 - pr_warn("\nbut task is already holding lock:\n"); 1214 + pr_warn("\nbut task is already holding lock:\n"); 1170 1215 1171 1216 print_lock(check_tgt); 1172 1217 pr_warn("\nwhich lock already depends on the new lock.\n\n"); ··· 1193 1244 if (!debug_locks_off_graph_unlock() || debug_locks_silent) 1194 1245 return 0; 1195 1246 1196 - if (cross_lock(check_tgt->instance)) 1197 - this->trace = *trace; 1198 - else if (!save_trace(&this->trace)) 1247 + if (!save_trace(&this->trace)) 1199 1248 return 0; 1200 1249 1201 1250 depth = get_lock_depth(target); ··· 1797 1850 if (nest) 1798 1851 return 2; 1799 1852 1800 - if (cross_lock(prev->instance)) 1801 - continue; 1802 - 1803 1853 return print_deadlock_bug(curr, prev, next); 1804 1854 } 1805 1855 return 1; ··· 1962 2018 for (;;) { 1963 2019 int distance = curr->lockdep_depth - depth + 1; 1964 2020 hlock = curr->held_locks + depth - 1; 1965 - /* 1966 - * Only non-crosslock entries get new dependencies added. 
1967 - * Crosslock entries will be added by commit later: 1968 - */ 1969 - if (!cross_lock(hlock->instance)) { 1970 - /* 1971 - * Only non-recursive-read entries get new dependencies 1972 - * added: 1973 - */ 1974 - if (hlock->read != 2 && hlock->check) { 1975 - int ret = check_prev_add(curr, hlock, next, 1976 - distance, &trace, save_trace); 1977 - if (!ret) 1978 - return 0; 1979 2021 1980 - /* 1981 - * Stop after the first non-trylock entry, 1982 - * as non-trylock entries have added their 1983 - * own direct dependencies already, so this 1984 - * lock is connected to them indirectly: 1985 - */ 1986 - if (!hlock->trylock) 1987 - break; 1988 - } 2022 + /* 2023 + * Only non-recursive-read entries get new dependencies 2024 + * added: 2025 + */ 2026 + if (hlock->read != 2 && hlock->check) { 2027 + int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace); 2028 + if (!ret) 2029 + return 0; 2030 + 2031 + /* 2032 + * Stop after the first non-trylock entry, 2033 + * as non-trylock entries have added their 2034 + * own direct dependencies already, so this 2035 + * lock is connected to them indirectly: 2036 + */ 2037 + if (!hlock->trylock) 2038 + break; 1989 2039 } 2040 + 1990 2041 depth--; 1991 2042 /* 1992 2043 * End of lock-stack? 
··· 3231 3292 void lockdep_init_map(struct lockdep_map *lock, const char *name, 3232 3293 struct lock_class_key *key, int subclass) 3233 3294 { 3234 - cross_init(lock, 0); 3235 3295 __lockdep_init_map(lock, name, key, subclass); 3236 3296 } 3237 3297 EXPORT_SYMBOL_GPL(lockdep_init_map); 3238 - 3239 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 3240 - void lockdep_init_map_crosslock(struct lockdep_map *lock, const char *name, 3241 - struct lock_class_key *key, int subclass) 3242 - { 3243 - cross_init(lock, 1); 3244 - __lockdep_init_map(lock, name, key, subclass); 3245 - } 3246 - EXPORT_SYMBOL_GPL(lockdep_init_map_crosslock); 3247 - #endif 3248 3298 3249 3299 struct lock_class_key __lockdep_no_validate__; 3250 3300 EXPORT_SYMBOL_GPL(__lockdep_no_validate__); ··· 3290 3362 int chain_head = 0; 3291 3363 int class_idx; 3292 3364 u64 chain_key; 3293 - int ret; 3294 3365 3295 3366 if (unlikely(!debug_locks)) 3296 3367 return 0; ··· 3338 3411 3339 3412 class_idx = class - lock_classes + 1; 3340 3413 3341 - /* TODO: nest_lock is not implemented for crosslock yet. */ 3342 - if (depth && !cross_lock(lock)) { 3414 + if (depth) { 3343 3415 hlock = curr->held_locks + depth - 1; 3344 3416 if (hlock->class_idx == class_idx && nest_lock) { 3345 3417 if (hlock->references) { ··· 3425 3499 3426 3500 if (!validate_chain(curr, lock, hlock, chain_head, chain_key)) 3427 3501 return 0; 3428 - 3429 - ret = lock_acquire_crosslock(hlock); 3430 - /* 3431 - * 2 means normal acquire operations are needed. Otherwise, it's 3432 - * ok just to return with '0:fail, 1:success'. 
3433 - */ 3434 - if (ret != 2) 3435 - return ret; 3436 3502 3437 3503 curr->curr_chain_key = chain_key; 3438 3504 curr->lockdep_depth++; ··· 3663 3745 struct task_struct *curr = current; 3664 3746 struct held_lock *hlock; 3665 3747 unsigned int depth; 3666 - int ret, i; 3748 + int i; 3667 3749 3668 3750 if (unlikely(!debug_locks)) 3669 3751 return 0; 3670 - 3671 - ret = lock_release_crosslock(lock); 3672 - /* 3673 - * 2 means normal release operations are needed. Otherwise, it's 3674 - * ok just to return with '0:fail, 1:success'. 3675 - */ 3676 - if (ret != 2) 3677 - return ret; 3678 3752 3679 3753 depth = curr->lockdep_depth; 3680 3754 /* ··· 4585 4675 dump_stack(); 4586 4676 } 4587 4677 EXPORT_SYMBOL_GPL(lockdep_rcu_suspicious); 4588 - 4589 - #ifdef CONFIG_LOCKDEP_CROSSRELEASE 4590 - 4591 - /* 4592 - * Crossrelease works by recording a lock history for each thread and 4593 - * connecting those historic locks that were taken after the 4594 - * wait_for_completion() in the complete() context. 4595 - * 4596 - * Task-A Task-B 4597 - * 4598 - * mutex_lock(&A); 4599 - * mutex_unlock(&A); 4600 - * 4601 - * wait_for_completion(&C); 4602 - * lock_acquire_crosslock(); 4603 - * atomic_inc_return(&cross_gen_id); 4604 - * | 4605 - * | mutex_lock(&B); 4606 - * | mutex_unlock(&B); 4607 - * | 4608 - * | complete(&C); 4609 - * `-- lock_commit_crosslock(); 4610 - * 4611 - * Which will then add a dependency between B and C. 4612 - */ 4613 - 4614 - #define xhlock(i) (current->xhlocks[(i) % MAX_XHLOCKS_NR]) 4615 - 4616 - /* 4617 - * Whenever a crosslock is held, cross_gen_id will be increased. 4618 - */ 4619 - static atomic_t cross_gen_id; /* Can be wrapped */ 4620 - 4621 - /* 4622 - * Make an entry of the ring buffer invalid. 4623 - */ 4624 - static inline void invalidate_xhlock(struct hist_lock *xhlock) 4625 - { 4626 - /* 4627 - * Normally, xhlock->hlock.instance must be !NULL. 
4628 - */ 4629 - xhlock->hlock.instance = NULL; 4630 - } 4631 - 4632 - /* 4633 - * Lock history stacks; we have 2 nested lock history stacks: 4634 - * 4635 - * HARD(IRQ) 4636 - * SOFT(IRQ) 4637 - * 4638 - * The thing is that once we complete a HARD/SOFT IRQ the future task locks 4639 - * should not depend on any of the locks observed while running the IRQ. So 4640 - * what we do is rewind the history buffer and erase all our knowledge of that 4641 - * temporal event. 4642 - */ 4643 - 4644 - void crossrelease_hist_start(enum xhlock_context_t c) 4645 - { 4646 - struct task_struct *cur = current; 4647 - 4648 - if (!cur->xhlocks) 4649 - return; 4650 - 4651 - cur->xhlock_idx_hist[c] = cur->xhlock_idx; 4652 - cur->hist_id_save[c] = cur->hist_id; 4653 - } 4654 - 4655 - void crossrelease_hist_end(enum xhlock_context_t c) 4656 - { 4657 - struct task_struct *cur = current; 4658 - 4659 - if (cur->xhlocks) { 4660 - unsigned int idx = cur->xhlock_idx_hist[c]; 4661 - struct hist_lock *h = &xhlock(idx); 4662 - 4663 - cur->xhlock_idx = idx; 4664 - 4665 - /* Check if the ring was overwritten. */ 4666 - if (h->hist_id != cur->hist_id_save[c]) 4667 - invalidate_xhlock(h); 4668 - } 4669 - } 4670 - 4671 - /* 4672 - * lockdep_invariant_state() is used to annotate independence inside a task, to 4673 - * make one task look like multiple independent 'tasks'. 4674 - * 4675 - * Take for instance workqueues; each work is independent of the last. The 4676 - * completion of a future work does not depend on the completion of a past work 4677 - * (in general). Therefore we must not carry that (lock) dependency across 4678 - * works. 4679 - * 4680 - * This is true for many things; pretty much all kthreads fall into this 4681 - * pattern, where they have an invariant state and future completions do not 4682 - * depend on past completions. Its just that since they all have the 'same' 4683 - * form -- the kthread does the same over and over -- it doesn't typically 4684 - * matter. 
4685 - * 4686 - * The same is true for system-calls, once a system call is completed (we've 4687 - * returned to userspace) the next system call does not depend on the lock 4688 - * history of the previous system call. 4689 - * 4690 - * They key property for independence, this invariant state, is that it must be 4691 - * a point where we hold no locks and have no history. Because if we were to 4692 - * hold locks, the restore at _end() would not necessarily recover it's history 4693 - * entry. Similarly, independence per-definition means it does not depend on 4694 - * prior state. 4695 - */ 4696 - void lockdep_invariant_state(bool force) 4697 - { 4698 - /* 4699 - * We call this at an invariant point, no current state, no history. 4700 - * Verify the former, enforce the latter. 4701 - */ 4702 - WARN_ON_ONCE(!force && current->lockdep_depth); 4703 - if (current->xhlocks) 4704 - invalidate_xhlock(&xhlock(current->xhlock_idx)); 4705 - } 4706 - 4707 - static int cross_lock(struct lockdep_map *lock) 4708 - { 4709 - return lock ? lock->cross : 0; 4710 - } 4711 - 4712 - /* 4713 - * This is needed to decide the relationship between wrapable variables. 4714 - */ 4715 - static inline int before(unsigned int a, unsigned int b) 4716 - { 4717 - return (int)(a - b) < 0; 4718 - } 4719 - 4720 - static inline struct lock_class *xhlock_class(struct hist_lock *xhlock) 4721 - { 4722 - return hlock_class(&xhlock->hlock); 4723 - } 4724 - 4725 - static inline struct lock_class *xlock_class(struct cross_lock *xlock) 4726 - { 4727 - return hlock_class(&xlock->hlock); 4728 - } 4729 - 4730 - /* 4731 - * Should we check a dependency with previous one? 4732 - */ 4733 - static inline int depend_before(struct held_lock *hlock) 4734 - { 4735 - return hlock->read != 2 && hlock->check && !hlock->trylock; 4736 - } 4737 - 4738 - /* 4739 - * Should we check a dependency with next one? 
4740 - */ 4741 - static inline int depend_after(struct held_lock *hlock) 4742 - { 4743 - return hlock->read != 2 && hlock->check; 4744 - } 4745 - 4746 - /* 4747 - * Check if the xhlock is valid, which would be false if, 4748 - * 4749 - * 1. Has not used after initializaion yet. 4750 - * 2. Got invalidated. 4751 - * 4752 - * Remind hist_lock is implemented as a ring buffer. 4753 - */ 4754 - static inline int xhlock_valid(struct hist_lock *xhlock) 4755 - { 4756 - /* 4757 - * xhlock->hlock.instance must be !NULL. 4758 - */ 4759 - return !!xhlock->hlock.instance; 4760 - } 4761 - 4762 - /* 4763 - * Record a hist_lock entry. 4764 - * 4765 - * Irq disable is only required. 4766 - */ 4767 - static void add_xhlock(struct held_lock *hlock) 4768 - { 4769 - unsigned int idx = ++current->xhlock_idx; 4770 - struct hist_lock *xhlock = &xhlock(idx); 4771 - 4772 - #ifdef CONFIG_DEBUG_LOCKDEP 4773 - /* 4774 - * This can be done locklessly because they are all task-local 4775 - * state, we must however ensure IRQs are disabled. 4776 - */ 4777 - WARN_ON_ONCE(!irqs_disabled()); 4778 - #endif 4779 - 4780 - /* Initialize hist_lock's members */ 4781 - xhlock->hlock = *hlock; 4782 - xhlock->hist_id = ++current->hist_id; 4783 - 4784 - xhlock->trace.nr_entries = 0; 4785 - xhlock->trace.max_entries = MAX_XHLOCK_TRACE_ENTRIES; 4786 - xhlock->trace.entries = xhlock->trace_entries; 4787 - 4788 - if (crossrelease_fullstack) { 4789 - xhlock->trace.skip = 3; 4790 - save_stack_trace(&xhlock->trace); 4791 - } else { 4792 - xhlock->trace.nr_entries = 1; 4793 - xhlock->trace.entries[0] = hlock->acquire_ip; 4794 - } 4795 - } 4796 - 4797 - static inline int same_context_xhlock(struct hist_lock *xhlock) 4798 - { 4799 - return xhlock->hlock.irq_context == task_irq_context(current); 4800 - } 4801 - 4802 - /* 4803 - * This should be lockless as far as possible because this would be 4804 - * called very frequently. 
4805 - */ 4806 - static void check_add_xhlock(struct held_lock *hlock) 4807 - { 4808 - /* 4809 - * Record a hist_lock, only in case that acquisitions ahead 4810 - * could depend on the held_lock. For example, if the held_lock 4811 - * is trylock then acquisitions ahead never depends on that. 4812 - * In that case, we don't need to record it. Just return. 4813 - */ 4814 - if (!current->xhlocks || !depend_before(hlock)) 4815 - return; 4816 - 4817 - add_xhlock(hlock); 4818 - } 4819 - 4820 - /* 4821 - * For crosslock. 4822 - */ 4823 - static int add_xlock(struct held_lock *hlock) 4824 - { 4825 - struct cross_lock *xlock; 4826 - unsigned int gen_id; 4827 - 4828 - if (!graph_lock()) 4829 - return 0; 4830 - 4831 - xlock = &((struct lockdep_map_cross *)hlock->instance)->xlock; 4832 - 4833 - /* 4834 - * When acquisitions for a crosslock are overlapped, we use 4835 - * nr_acquire to perform commit for them, based on cross_gen_id 4836 - * of the first acquisition, which allows to add additional 4837 - * dependencies. 4838 - * 4839 - * Moreover, when no acquisition of a crosslock is in progress, 4840 - * we should not perform commit because the lock might not exist 4841 - * any more, which might cause incorrect memory access. So we 4842 - * have to track the number of acquisitions of a crosslock. 4843 - * 4844 - * depend_after() is necessary to initialize only the first 4845 - * valid xlock so that the xlock can be used on its commit. 4846 - */ 4847 - if (xlock->nr_acquire++ && depend_after(&xlock->hlock)) 4848 - goto unlock; 4849 - 4850 - gen_id = (unsigned int)atomic_inc_return(&cross_gen_id); 4851 - xlock->hlock = *hlock; 4852 - xlock->hlock.gen_id = gen_id; 4853 - unlock: 4854 - graph_unlock(); 4855 - return 1; 4856 - } 4857 - 4858 - /* 4859 - * Called for both normal and crosslock acquires. Normal locks will be 4860 - * pushed on the hist_lock queue. 
Cross locks will record state and 4861 - * stop regular lock_acquire() to avoid being placed on the held_lock 4862 - * stack. 4863 - * 4864 - * Return: 0 - failure; 4865 - * 1 - crosslock, done; 4866 - * 2 - normal lock, continue to held_lock[] ops. 4867 - */ 4868 - static int lock_acquire_crosslock(struct held_lock *hlock) 4869 - { 4870 - /* 4871 - * CONTEXT 1 CONTEXT 2 4872 - * --------- --------- 4873 - * lock A (cross) 4874 - * X = atomic_inc_return(&cross_gen_id) 4875 - * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 4876 - * Y = atomic_read_acquire(&cross_gen_id) 4877 - * lock B 4878 - * 4879 - * atomic_read_acquire() is for ordering between A and B, 4880 - * IOW, A happens before B, when CONTEXT 2 see Y >= X. 4881 - * 4882 - * Pairs with atomic_inc_return() in add_xlock(). 4883 - */ 4884 - hlock->gen_id = (unsigned int)atomic_read_acquire(&cross_gen_id); 4885 - 4886 - if (cross_lock(hlock->instance)) 4887 - return add_xlock(hlock); 4888 - 4889 - check_add_xhlock(hlock); 4890 - return 2; 4891 - } 4892 - 4893 - static int copy_trace(struct stack_trace *trace) 4894 - { 4895 - unsigned long *buf = stack_trace + nr_stack_trace_entries; 4896 - unsigned int max_nr = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries; 4897 - unsigned int nr = min(max_nr, trace->nr_entries); 4898 - 4899 - trace->nr_entries = nr; 4900 - memcpy(buf, trace->entries, nr * sizeof(trace->entries[0])); 4901 - trace->entries = buf; 4902 - nr_stack_trace_entries += nr; 4903 - 4904 - if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) { 4905 - if (!debug_locks_off_graph_unlock()) 4906 - return 0; 4907 - 4908 - print_lockdep_off("BUG: MAX_STACK_TRACE_ENTRIES too low!"); 4909 - dump_stack(); 4910 - 4911 - return 0; 4912 - } 4913 - 4914 - return 1; 4915 - } 4916 - 4917 - static int commit_xhlock(struct cross_lock *xlock, struct hist_lock *xhlock) 4918 - { 4919 - unsigned int xid, pid; 4920 - u64 chain_key; 4921 - 4922 - xid = xlock_class(xlock) - lock_classes; 4923 - chain_key = iterate_chain_key((u64)0, 
xid); 4924 - pid = xhlock_class(xhlock) - lock_classes; 4925 - chain_key = iterate_chain_key(chain_key, pid); 4926 - 4927 - if (lookup_chain_cache(chain_key)) 4928 - return 1; 4929 - 4930 - if (!add_chain_cache_classes(xid, pid, xhlock->hlock.irq_context, 4931 - chain_key)) 4932 - return 0; 4933 - 4934 - if (!check_prev_add(current, &xlock->hlock, &xhlock->hlock, 1, 4935 - &xhlock->trace, copy_trace)) 4936 - return 0; 4937 - 4938 - return 1; 4939 - } 4940 - 4941 - static void commit_xhlocks(struct cross_lock *xlock) 4942 - { 4943 - unsigned int cur = current->xhlock_idx; 4944 - unsigned int prev_hist_id = xhlock(cur).hist_id; 4945 - unsigned int i; 4946 - 4947 - if (!graph_lock()) 4948 - return; 4949 - 4950 - if (xlock->nr_acquire) { 4951 - for (i = 0; i < MAX_XHLOCKS_NR; i++) { 4952 - struct hist_lock *xhlock = &xhlock(cur - i); 4953 - 4954 - if (!xhlock_valid(xhlock)) 4955 - break; 4956 - 4957 - if (before(xhlock->hlock.gen_id, xlock->hlock.gen_id)) 4958 - break; 4959 - 4960 - if (!same_context_xhlock(xhlock)) 4961 - break; 4962 - 4963 - /* 4964 - * Filter out the cases where the ring buffer was 4965 - * overwritten and the current entry has a bigger 4966 - * hist_id than the previous one, which is impossible 4967 - * otherwise: 4968 - */ 4969 - if (unlikely(before(prev_hist_id, xhlock->hist_id))) 4970 - break; 4971 - 4972 - prev_hist_id = xhlock->hist_id; 4973 - 4974 - /* 4975 - * commit_xhlock() returns 0 with graph_lock already 4976 - * released if fail. 
4977 - */ 4978 - if (!commit_xhlock(xlock, xhlock)) 4979 - return; 4980 - } 4981 - } 4982 - 4983 - graph_unlock(); 4984 - } 4985 - 4986 - void lock_commit_crosslock(struct lockdep_map *lock) 4987 - { 4988 - struct cross_lock *xlock; 4989 - unsigned long flags; 4990 - 4991 - if (unlikely(!debug_locks || current->lockdep_recursion)) 4992 - return; 4993 - 4994 - if (!current->xhlocks) 4995 - return; 4996 - 4997 - /* 4998 - * Do commit hist_locks with the cross_lock, only in case that 4999 - * the cross_lock could depend on acquisitions after that. 5000 - * 5001 - * For example, if the cross_lock does not have the 'check' flag 5002 - * then we don't need to check dependencies and commit for that. 5003 - * Just skip it. In that case, of course, the cross_lock does 5004 - * not depend on acquisitions ahead, either. 5005 - * 5006 - * WARNING: Don't do that in add_xlock() in advance. When an 5007 - * acquisition context is different from the commit context, 5008 - * invalid(skipped) cross_lock might be accessed. 5009 - */ 5010 - if (!depend_after(&((struct lockdep_map_cross *)lock)->xlock.hlock)) 5011 - return; 5012 - 5013 - raw_local_irq_save(flags); 5014 - check_flags(flags); 5015 - current->lockdep_recursion = 1; 5016 - xlock = &((struct lockdep_map_cross *)lock)->xlock; 5017 - commit_xhlocks(xlock); 5018 - current->lockdep_recursion = 0; 5019 - raw_local_irq_restore(flags); 5020 - } 5021 - EXPORT_SYMBOL_GPL(lock_commit_crosslock); 5022 - 5023 - /* 5024 - * Return: 0 - failure; 5025 - * 1 - crosslock, done; 5026 - * 2 - normal lock, continue to held_lock[] ops. 
5027 - */ 5028 - static int lock_release_crosslock(struct lockdep_map *lock) 5029 - { 5030 - if (cross_lock(lock)) { 5031 - if (!graph_lock()) 5032 - return 0; 5033 - ((struct lockdep_map_cross *)lock)->xlock.nr_acquire--; 5034 - graph_unlock(); 5035 - return 1; 5036 - } 5037 - return 2; 5038 - } 5039 - 5040 - static void cross_init(struct lockdep_map *lock, int cross) 5041 - { 5042 - if (cross) 5043 - ((struct lockdep_map_cross *)lock)->xlock.nr_acquire = 0; 5044 - 5045 - lock->cross = cross; 5046 - 5047 - /* 5048 - * Crossrelease assumes that the ring buffer size of xhlocks 5049 - * is aligned with power of 2. So force it on build. 5050 - */ 5051 - BUILD_BUG_ON(MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1)); 5052 - } 5053 - 5054 - void lockdep_init_task(struct task_struct *task) 5055 - { 5056 - int i; 5057 - 5058 - task->xhlock_idx = UINT_MAX; 5059 - task->hist_id = 0; 5060 - 5061 - for (i = 0; i < XHLOCK_CTX_NR; i++) { 5062 - task->xhlock_idx_hist[i] = UINT_MAX; 5063 - task->hist_id_save[i] = 0; 5064 - } 5065 - 5066 - task->xhlocks = kzalloc(sizeof(struct hist_lock) * MAX_XHLOCKS_NR, 5067 - GFP_KERNEL); 5068 - } 5069 - 5070 - void lockdep_free_task(struct task_struct *task) 5071 - { 5072 - if (task->xhlocks) { 5073 - void *tmp = task->xhlocks; 5074 - /* Diable crossrelease for current */ 5075 - task->xhlocks = NULL; 5076 - kfree(tmp); 5077 - } 5078 - } 5079 - #endif
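The crossrelease history removed above lived in a per-task ring buffer indexed with `xhlock_idx % MAX_XHLOCKS_NR`, which is why `cross_init()` enforced a power-of-two size via `BUILD_BUG_ON(MAX_XHLOCKS_NR & (MAX_XHLOCKS_NR - 1))`. A minimal userspace sketch of that idiom, with illustrative names rather than kernel API:

```c
#include <assert.h>

/* Ring buffer whose size is a power of two, so wrapped indexing
 * (idx % RING_NR) compiles down to a cheap mask. */
#define RING_NR 8   /* must be a power of two */

/* C11 analog of the kernel's BUILD_BUG_ON(RING_NR & (RING_NR - 1)) */
_Static_assert((RING_NR & (RING_NR - 1)) == 0, "RING_NR must be a power of 2");

static int ring[RING_NR];

static void ring_put(unsigned int idx, int val)
{
	/* Wraps around; same slot as idx & (RING_NR - 1). */
	ring[idx % RING_NR] = val;
}
```

Because the index wraps, a later entry silently overwrites an older one in the same slot, which is exactly why the removed code had to track `hist_id` to detect overwrites.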
+3 -10
kernel/locking/spinlock.c
··· 66 66 break; \ 67 67 preempt_enable(); \ 68 68 \ 69 - if (!(lock)->break_lock) \ 70 - (lock)->break_lock = 1; \ 71 - while ((lock)->break_lock) \ 72 - arch_##op##_relax(&lock->raw_lock); \ 69 + arch_##op##_relax(&lock->raw_lock); \ 73 70 } \ 74 - (lock)->break_lock = 0; \ 75 71 } \ 76 72 \ 77 73 unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock) \ ··· 82 86 local_irq_restore(flags); \ 83 87 preempt_enable(); \ 84 88 \ 85 - if (!(lock)->break_lock) \ 86 - (lock)->break_lock = 1; \ 87 - while ((lock)->break_lock) \ 88 - arch_##op##_relax(&lock->raw_lock); \ 89 + arch_##op##_relax(&lock->raw_lock); \ 89 90 } \ 90 - (lock)->break_lock = 0; \ 91 + \ 91 92 return flags; \ 92 93 } \ 93 94 \
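With `break_lock` gone, the `__raw_##op##_lock()` fallback above reduces to: trylock, back off, relax the CPU, retry. A hedged userspace analog using C11 atomics; `cpu_relax_stub()` and the call counter are illustrative stand-ins, not kernel code:

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static int relax_calls;

/* Stands in for arch_##op##_relax(); counts spins for inspection. */
static void cpu_relax_stub(void)
{
	relax_calls++;
}

static void raw_lock(void)
{
	/* On contention: relax and retry -- no shared break_lock state. */
	while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
		cpu_relax_stub();
}

static void raw_unlock(void)
{
	atomic_flag_clear_explicit(&lock, memory_order_release);
}
```

The kernel version additionally toggles preemption and interrupts around the trylock; that bookkeeping is omitted here to isolate the spin-and-relax shape the patch leaves behind.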
+11 -11
kernel/sched/core.c
··· 5097 5097 return ret; 5098 5098 } 5099 5099 5100 - /** 5101 - * sys_sched_rr_get_interval - return the default timeslice of a process. 5102 - * @pid: pid of the process. 5103 - * @interval: userspace pointer to the timeslice value. 5104 - * 5105 - * this syscall writes the default timeslice value of a given process 5106 - * into the user-space timespec buffer. A value of '0' means infinity. 5107 - * 5108 - * Return: On success, 0 and the timeslice is in @interval. Otherwise, 5109 - * an error code. 5110 - */ 5111 5100 static int sched_rr_get_interval(pid_t pid, struct timespec64 *t) 5112 5101 { 5113 5102 struct task_struct *p; ··· 5133 5144 return retval; 5134 5145 } 5135 5146 5147 + /** 5148 + * sys_sched_rr_get_interval - return the default timeslice of a process. 5149 + * @pid: pid of the process. 5150 + * @interval: userspace pointer to the timeslice value. 5151 + * 5152 + * this syscall writes the default timeslice value of a given process 5153 + * into the user-space timespec buffer. A value of '0' means infinity. 5154 + * 5155 + * Return: On success, 0 and the timeslice is in @interval. Otherwise, 5156 + * an error code. 5157 + */ 5136 5158 SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid, 5137 5159 struct timespec __user *, interval) 5138 5160 {
+7 -1
kernel/sched/rt.c
··· 2034 2034 bool resched = false; 2035 2035 struct task_struct *p; 2036 2036 struct rq *src_rq; 2037 + int rt_overload_count = rt_overloaded(this_rq); 2037 2038 2038 - if (likely(!rt_overloaded(this_rq))) 2039 + if (likely(!rt_overload_count)) 2039 2040 return; 2040 2041 2041 2042 /* ··· 2044 2043 * see overloaded we must also see the rto_mask bit. 2045 2044 */ 2046 2045 smp_rmb(); 2046 + 2047 + /* If we are the only overloaded CPU do nothing */ 2048 + if (rt_overload_count == 1 && 2049 + cpumask_test_cpu(this_rq->cpu, this_rq->rd->rto_mask)) 2050 + return; 2047 2051 2048 2052 #ifdef HAVE_RT_PUSH_IPI 2049 2053 if (sched_feat(RT_PUSH_IPI)) {
+1
kernel/trace/Kconfig
··· 164 164 bool "Enable trace events for preempt and irq disable/enable" 165 165 select TRACE_IRQFLAGS 166 166 depends on DEBUG_PREEMPT || !PROVE_LOCKING 167 + depends on TRACING 167 168 default n 168 169 help 169 170 Enable tracing of disable and enable events for preemption and irqs.
+12 -7
kernel/trace/bpf_trace.c
··· 343 343 .arg4_type = ARG_CONST_SIZE, 344 344 }; 345 345 346 - static DEFINE_PER_CPU(struct perf_sample_data, bpf_sd); 346 + static DEFINE_PER_CPU(struct perf_sample_data, bpf_trace_sd); 347 347 348 348 static __always_inline u64 349 349 __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map, 350 - u64 flags, struct perf_raw_record *raw) 350 + u64 flags, struct perf_sample_data *sd) 351 351 { 352 352 struct bpf_array *array = container_of(map, struct bpf_array, map); 353 - struct perf_sample_data *sd = this_cpu_ptr(&bpf_sd); 354 353 unsigned int cpu = smp_processor_id(); 355 354 u64 index = flags & BPF_F_INDEX_MASK; 356 355 struct bpf_event_entry *ee; ··· 372 373 if (unlikely(event->oncpu != cpu)) 373 374 return -EOPNOTSUPP; 374 375 375 - perf_sample_data_init(sd, 0, 0); 376 - sd->raw = raw; 377 376 perf_event_output(event, sd, regs); 378 377 return 0; 379 378 } ··· 379 382 BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map, 380 383 u64, flags, void *, data, u64, size) 381 384 { 385 + struct perf_sample_data *sd = this_cpu_ptr(&bpf_trace_sd); 382 386 struct perf_raw_record raw = { 383 387 .frag = { 384 388 .size = size, ··· 390 392 if (unlikely(flags & ~(BPF_F_INDEX_MASK))) 391 393 return -EINVAL; 392 394 393 - return __bpf_perf_event_output(regs, map, flags, &raw); 395 + perf_sample_data_init(sd, 0, 0); 396 + sd->raw = &raw; 397 + 398 + return __bpf_perf_event_output(regs, map, flags, sd); 394 399 } 395 400 396 401 static const struct bpf_func_proto bpf_perf_event_output_proto = { ··· 408 407 }; 409 408 410 409 static DEFINE_PER_CPU(struct pt_regs, bpf_pt_regs); 410 + static DEFINE_PER_CPU(struct perf_sample_data, bpf_misc_sd); 411 411 412 412 u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size, 413 413 void *ctx, u64 ctx_size, bpf_ctx_copy_t ctx_copy) 414 414 { 415 + struct perf_sample_data *sd = this_cpu_ptr(&bpf_misc_sd); 415 416 struct pt_regs *regs = this_cpu_ptr(&bpf_pt_regs); 416 417 
struct perf_raw_frag frag = { 417 418 .copy = ctx_copy, ··· 431 428 }; 432 429 433 430 perf_fetch_caller_regs(regs); 431 + perf_sample_data_init(sd, 0, 0); 432 + sd->raw = &raw; 434 433 435 - return __bpf_perf_event_output(regs, map, flags, &raw); 434 + return __bpf_perf_event_output(regs, map, flags, sd); 436 435 } 437 436 438 437 BPF_CALL_0(bpf_get_current_task)
-6
kernel/trace/ring_buffer.c
··· 1799 1799 } 1800 1800 EXPORT_SYMBOL_GPL(ring_buffer_change_overwrite); 1801 1801 1802 - static __always_inline void * 1803 - __rb_data_page_index(struct buffer_data_page *bpage, unsigned index) 1804 - { 1805 - return bpage->data + index; 1806 - } 1807 - 1808 1802 static __always_inline void *__rb_page_index(struct buffer_page *bpage, unsigned index) 1809 1803 { 1810 1804 return bpage->page->data + index;
+15 -26
kernel/trace/trace.c
··· 362 362 } 363 363 364 364 /** 365 - * trace_pid_filter_add_remove - Add or remove a task from a pid_list 365 + * trace_pid_filter_add_remove_task - Add or remove a task from a pid_list 366 366 * @pid_list: The list to modify 367 367 * @self: The current task for fork or NULL for exit 368 368 * @task: The task to add or remove ··· 925 925 } 926 926 927 927 /** 928 - * trace_snapshot - take a snapshot of the current buffer. 928 + * tracing_snapshot - take a snapshot of the current buffer. 929 929 * 930 930 * This causes a swap between the snapshot buffer and the current live 931 931 * tracing buffer. You can use this to take snapshots of the live ··· 1004 1004 EXPORT_SYMBOL_GPL(tracing_alloc_snapshot); 1005 1005 1006 1006 /** 1007 - * trace_snapshot_alloc - allocate and take a snapshot of the current buffer. 1007 + * tracing_snapshot_alloc - allocate and take a snapshot of the current buffer. 1008 1008 * 1009 - * This is similar to trace_snapshot(), but it will allocate the 1009 + * This is similar to tracing_snapshot(), but it will allocate the 1010 1010 * snapshot buffer if it isn't already allocated. Use this only 1011 1011 * where it is safe to sleep, as the allocation may sleep. 1012 1012 * ··· 1303 1303 /* 1304 1304 * Copy the new maximum trace into the separate maximum-trace 1305 1305 * structure. 
(this way the maximum trace is permanently saved, 1306 - * for later retrieval via /sys/kernel/debug/tracing/latency_trace) 1306 + * for later retrieval via /sys/kernel/tracing/tracing_max_latency) 1307 1307 */ 1308 1308 static void 1309 1309 __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu) ··· 2415 2415 2416 2416 entry = ring_buffer_event_data(event); 2417 2417 size = ring_buffer_event_length(event); 2418 - export->write(entry, size); 2418 + export->write(export, entry, size); 2419 2419 } 2420 2420 2421 2421 static DEFINE_MUTEX(ftrace_export_lock); ··· 4178 4178 .llseek = seq_lseek, 4179 4179 }; 4180 4180 4181 - /* 4182 - * The tracer itself will not take this lock, but still we want 4183 - * to provide a consistent cpumask to user-space: 4184 - */ 4185 - static DEFINE_MUTEX(tracing_cpumask_update_lock); 4186 - 4187 - /* 4188 - * Temporary storage for the character representation of the 4189 - * CPU bitmask (and one more byte for the newline): 4190 - */ 4191 - static char mask_str[NR_CPUS + 1]; 4192 - 4193 4181 static ssize_t 4194 4182 tracing_cpumask_read(struct file *filp, char __user *ubuf, 4195 4183 size_t count, loff_t *ppos) 4196 4184 { 4197 4185 struct trace_array *tr = file_inode(filp)->i_private; 4186 + char *mask_str; 4198 4187 int len; 4199 4188 4200 - mutex_lock(&tracing_cpumask_update_lock); 4189 + len = snprintf(NULL, 0, "%*pb\n", 4190 + cpumask_pr_args(tr->tracing_cpumask)) + 1; 4191 + mask_str = kmalloc(len, GFP_KERNEL); 4192 + if (!mask_str) 4193 + return -ENOMEM; 4201 4194 4202 - len = snprintf(mask_str, count, "%*pb\n", 4195 + len = snprintf(mask_str, len, "%*pb\n", 4203 4196 cpumask_pr_args(tr->tracing_cpumask)); 4204 4197 if (len >= count) { 4205 4198 count = -EINVAL; 4206 4199 goto out_err; 4207 4200 } 4208 - count = simple_read_from_buffer(ubuf, count, ppos, mask_str, NR_CPUS+1); 4201 + count = simple_read_from_buffer(ubuf, count, ppos, mask_str, len); 4209 4202 4210 4203 out_err: 4211 - 
mutex_unlock(&tracing_cpumask_update_lock); 4204 + kfree(mask_str); 4212 4205 4213 4206 return count; 4214 4207 } ··· 4220 4227 err = cpumask_parse_user(ubuf, count, tracing_cpumask_new); 4221 4228 if (err) 4222 4229 goto err_unlock; 4223 - 4224 - mutex_lock(&tracing_cpumask_update_lock); 4225 4230 4226 4231 local_irq_disable(); 4227 4232 arch_spin_lock(&tr->max_lock); ··· 4243 4252 local_irq_enable(); 4244 4253 4245 4254 cpumask_copy(tr->tracing_cpumask, tracing_cpumask_new); 4246 - 4247 - mutex_unlock(&tracing_cpumask_update_lock); 4248 4255 free_cpumask_var(tracing_cpumask_new); 4249 4256 4250 4257 return count;
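The `tracing_cpumask_read()` change replaces the static `mask_str[NR_CPUS + 1]` buffer (and the mutex protecting it) with a heap buffer sized by a first `snprintf(NULL, 0, ...)` pass. A small userspace sketch of that two-pass sizing idiom; the `format_alloc()` helper is hypothetical:

```c
#include <stdio.h>
#include <stdlib.h>

/* Format into a heap buffer sized by a probing snprintf() pass,
 * mirroring the tracing_cpumask_read() change (kmalloc -> malloc). */
static char *format_alloc(unsigned int mask)
{
	/* Pass 1: snprintf() with a NULL buffer returns the length it
	 * would have written, excluding the NUL; add one byte for it. */
	int len = snprintf(NULL, 0, "%08x\n", mask) + 1;
	char *buf = malloc(len);

	if (!buf)
		return NULL;

	/* Pass 2: the real format into the exactly-sized buffer. */
	snprintf(buf, len, "%08x\n", mask);
	return buf;
}
```

Besides dropping the fixed `NR_CPUS` bound, per-call allocation removes the shared buffer that forced `tracing_cpumask_update_lock` in the first place.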
+4
kernel/trace/trace_stack.c
··· 209 209 if (__this_cpu_read(disable_stack_tracer) != 1) 210 210 goto out; 211 211 212 + /* If rcu is not watching, then save stack trace can fail */ 213 + if (!rcu_is_watching()) 214 + goto out; 215 + 212 216 ip += MCOUNT_INSN_SIZE; 213 217 214 218 check_stack(ip, &stack);
+1
kernel/uid16.c
··· 192 192 return retval; 193 193 } 194 194 195 + groups_sort(group_info); 195 196 retval = set_current_groups(group_info); 196 197 put_group_info(group_info); 197 198
+12 -21
kernel/workqueue.c
··· 38 38 #include <linux/hardirq.h> 39 39 #include <linux/mempolicy.h> 40 40 #include <linux/freezer.h> 41 - #include <linux/kallsyms.h> 42 41 #include <linux/debug_locks.h> 43 42 #include <linux/lockdep.h> 44 43 #include <linux/idr.h> ··· 47 48 #include <linux/nodemask.h> 48 49 #include <linux/moduleparam.h> 49 50 #include <linux/uaccess.h> 51 + #include <linux/sched/isolation.h> 50 52 51 53 #include "workqueue_internal.h" 52 54 ··· 1634 1634 mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT); 1635 1635 1636 1636 /* 1637 - * Sanity check nr_running. Because wq_unbind_fn() releases 1637 + * Sanity check nr_running. Because unbind_workers() releases 1638 1638 * pool->lock between setting %WORKER_UNBOUND and zapping 1639 1639 * nr_running, the warning may trigger spuriously. Check iff 1640 1640 * unbind is not in progress. ··· 4510 4510 * cpu comes back online. 4511 4511 */ 4512 4512 4513 - static void wq_unbind_fn(struct work_struct *work) 4513 + static void unbind_workers(int cpu) 4514 4514 { 4515 - int cpu = smp_processor_id(); 4516 4515 struct worker_pool *pool; 4517 4516 struct worker *worker; 4518 4517 ··· 4587 4588 pool->attrs->cpumask) < 0); 4588 4589 4589 4590 spin_lock_irq(&pool->lock); 4590 - 4591 - /* 4592 - * XXX: CPU hotplug notifiers are weird and can call DOWN_FAILED 4593 - * w/o preceding DOWN_PREPARE. Work around it. CPU hotplug is 4594 - * being reworked and this can go away in time. 
4595 - */ 4596 - if (!(pool->flags & POOL_DISASSOCIATED)) { 4597 - spin_unlock_irq(&pool->lock); 4598 - return; 4599 - } 4600 4591 4601 4592 pool->flags &= ~POOL_DISASSOCIATED; 4602 4593 ··· 4698 4709 4699 4710 int workqueue_offline_cpu(unsigned int cpu) 4700 4711 { 4701 - struct work_struct unbind_work; 4702 4712 struct workqueue_struct *wq; 4703 4713 4704 4714 /* unbinding per-cpu workers should happen on the local CPU */ 4705 - INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn); 4706 - queue_work_on(cpu, system_highpri_wq, &unbind_work); 4715 + if (WARN_ON(cpu != smp_processor_id())) 4716 + return -1; 4717 + 4718 + unbind_workers(cpu); 4707 4719 4708 4720 /* update NUMA affinity of unbound workqueues */ 4709 4721 mutex_lock(&wq_pool_mutex); ··· 4712 4722 wq_update_unbound_numa(wq, cpu, false); 4713 4723 mutex_unlock(&wq_pool_mutex); 4714 4724 4715 - /* wait for per-cpu unbinding to finish */ 4716 - flush_work(&unbind_work); 4717 - destroy_work_on_stack(&unbind_work); 4718 4725 return 0; 4719 4726 } 4720 4727 ··· 4944 4957 if (!zalloc_cpumask_var(&saved_cpumask, GFP_KERNEL)) 4945 4958 return -ENOMEM; 4946 4959 4960 + /* 4961 + * Not excluding isolated cpus on purpose. 4962 + * If the user wishes to include them, we allow that. 4963 + */ 4947 4964 cpumask_and(cpumask, cpumask, cpu_possible_mask); 4948 4965 if (!cpumask_empty(cpumask)) { 4949 4966 apply_wqattrs_lock(); ··· 5546 5555 WARN_ON(__alignof__(struct pool_workqueue) < __alignof__(long long)); 5547 5556 5548 5557 BUG_ON(!alloc_cpumask_var(&wq_unbound_cpumask, GFP_KERNEL)); 5549 - cpumask_copy(wq_unbound_cpumask, cpu_possible_mask); 5558 + cpumask_copy(wq_unbound_cpumask, housekeeping_cpumask(HK_FLAG_DOMAIN)); 5550 5559 5551 5560 pwq_cache = KMEM_CACHE(pool_workqueue, SLAB_PANIC); 5552 5561
-33
lib/Kconfig.debug
··· 1099 1099 select DEBUG_MUTEXES 1100 1100 select DEBUG_RT_MUTEXES if RT_MUTEXES 1101 1101 select DEBUG_LOCK_ALLOC 1102 - select LOCKDEP_CROSSRELEASE 1103 - select LOCKDEP_COMPLETIONS 1104 1102 select TRACE_IRQFLAGS 1105 1103 default n 1106 1104 help ··· 1167 1169 1168 1170 CONFIG_LOCK_STAT defines "contended" and "acquired" lock events. 1169 1171 (CONFIG_LOCKDEP defines "acquire" and "release" events.) 1170 - 1171 - config LOCKDEP_CROSSRELEASE 1172 - bool 1173 - help 1174 - This makes lockdep work for crosslock which is a lock allowed to 1175 - be released in a different context from the acquisition context. 1176 - Normally a lock must be released in the context acquiring the lock. 1177 - However, relexing this constraint helps synchronization primitives 1178 - such as page locks or completions can use the lock correctness 1179 - detector, lockdep. 1180 - 1181 - config LOCKDEP_COMPLETIONS 1182 - bool 1183 - help 1184 - A deadlock caused by wait_for_completion() and complete() can be 1185 - detected by lockdep using crossrelease feature. 1186 - 1187 - config BOOTPARAM_LOCKDEP_CROSSRELEASE_FULLSTACK 1188 - bool "Enable the boot parameter, crossrelease_fullstack" 1189 - depends on LOCKDEP_CROSSRELEASE 1190 - default n 1191 - help 1192 - The lockdep "cross-release" feature needs to record stack traces 1193 - (of calling functions) for all acquisitions, for eventual later 1194 - use during analysis. By default only a single caller is recorded, 1195 - because the unwind operation can be very expensive with deeper 1196 - stack chains. 1197 - 1198 - However a boot parameter, crossrelease_fullstack, was 1199 - introduced since sometimes deeper traces are required for full 1200 - analysis. This option turns on the boot parameter. 1201 1172 1202 1173 config DEBUG_LOCKDEP 1203 1174 bool "Lock dependency engine debugging"
+28 -21
lib/asn1_decoder.c
··· 313 313 314 314 /* Decide how to handle the operation */ 315 315 switch (op) { 316 - case ASN1_OP_MATCH_ANY_ACT: 317 - case ASN1_OP_MATCH_ANY_ACT_OR_SKIP: 318 - case ASN1_OP_COND_MATCH_ANY_ACT: 319 - case ASN1_OP_COND_MATCH_ANY_ACT_OR_SKIP: 320 - ret = actions[machine[pc + 1]](context, hdr, tag, data + dp, len); 321 - if (ret < 0) 322 - return ret; 323 - goto skip_data; 324 - 325 - case ASN1_OP_MATCH_ACT: 326 - case ASN1_OP_MATCH_ACT_OR_SKIP: 327 - case ASN1_OP_COND_MATCH_ACT_OR_SKIP: 328 - ret = actions[machine[pc + 2]](context, hdr, tag, data + dp, len); 329 - if (ret < 0) 330 - return ret; 331 - goto skip_data; 332 - 333 316 case ASN1_OP_MATCH: 334 317 case ASN1_OP_MATCH_OR_SKIP: 318 + case ASN1_OP_MATCH_ACT: 319 + case ASN1_OP_MATCH_ACT_OR_SKIP: 335 320 case ASN1_OP_MATCH_ANY: 336 321 case ASN1_OP_MATCH_ANY_OR_SKIP: 322 + case ASN1_OP_MATCH_ANY_ACT: 323 + case ASN1_OP_MATCH_ANY_ACT_OR_SKIP: 337 324 case ASN1_OP_COND_MATCH_OR_SKIP: 325 + case ASN1_OP_COND_MATCH_ACT_OR_SKIP: 338 326 case ASN1_OP_COND_MATCH_ANY: 339 327 case ASN1_OP_COND_MATCH_ANY_OR_SKIP: 340 - skip_data: 328 + case ASN1_OP_COND_MATCH_ANY_ACT: 329 + case ASN1_OP_COND_MATCH_ANY_ACT_OR_SKIP: 330 + 341 331 if (!(flags & FLAG_CONS)) { 342 332 if (flags & FLAG_INDEFINITE_LENGTH) { 333 + size_t tmp = dp; 334 + 343 335 ret = asn1_find_indefinite_length( 344 - data, datalen, &dp, &len, &errmsg); 336 + data, datalen, &tmp, &len, &errmsg); 345 337 if (ret < 0) 346 338 goto error; 347 - } else { 348 - dp += len; 349 339 } 350 340 pr_debug("- LEAF: %zu\n", len); 351 341 } 342 + 343 + if (op & ASN1_OP_MATCH__ACT) { 344 + unsigned char act; 345 + 346 + if (op & ASN1_OP_MATCH__ANY) 347 + act = machine[pc + 1]; 348 + else 349 + act = machine[pc + 2]; 350 + ret = actions[act](context, hdr, tag, data + dp, len); 351 + if (ret < 0) 352 + return ret; 353 + } 354 + 355 + if (!(flags & FLAG_CONS)) 356 + dp += len; 352 357 pc += asn1_op_lengths[op]; 353 358 goto next_op; 354 359 ··· 439 434 else 440 435 act = 
machine[pc + 1]; 441 436 ret = actions[act](context, hdr, 0, data + tdp, len); 437 + if (ret < 0) 438 + return ret; 442 439 } 443 440 pc += asn1_op_lengths[op]; 444 441 goto next_op;
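The asn1_decoder change above collapses the per-variant `case` arms into one group and derives the action slot from opcode flag bits (`ASN1_OP_MATCH__ACT`, `ASN1_OP_MATCH__ANY`). A toy illustration of that flag-bit dispatch, with made-up opcode values rather than the real ASN.1 opcode encoding:

```c
/* Hypothetical opcode layout: low bits encode variants of one MATCH
 * op, so one case arm can branch on flags instead of enumerating
 * every ACT/ANY combination separately. */
enum {
	OP_MATCH = 0x00,
	OP_F_ACT = 0x01,	/* op carries an action index */
	OP_F_ANY = 0x02,	/* "match any": action index sits one slot earlier */
};

/* Return which machine[] slot holds the action index, or -1 if the
 * op carries no action -- the shape of the "op & ASN1_OP_MATCH__ACT"
 * test in the hunk above. */
static int action_slot(unsigned char op)
{
	if (!(op & OP_F_ACT))
		return -1;
	return (op & OP_F_ANY) ? 1 : 2;
}
```

Centralizing the dispatch this way is what lets the fix run the action *after* the indefinite-length handling, instead of duplicating that logic across a dozen cases.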
+10 -6
lib/oid_registry.c
··· 116 116 int count; 117 117 118 118 if (v >= end) 119 - return -EBADMSG; 119 + goto bad; 120 120 121 121 n = *v++; 122 122 ret = count = snprintf(buffer, bufsize, "%u.%u", n / 40, n % 40); 123 + if (count >= bufsize) 124 + return -ENOBUFS; 123 125 buffer += count; 124 126 bufsize -= count; 125 - if (bufsize == 0) 126 - return -ENOBUFS; 127 127 128 128 while (v < end) { 129 129 num = 0; ··· 134 134 num = n & 0x7f; 135 135 do { 136 136 if (v >= end) 137 - return -EBADMSG; 137 + goto bad; 138 138 n = *v++; 139 139 num <<= 7; 140 140 num |= n & 0x7f; 141 141 } while (n & 0x80); 142 142 } 143 143 ret += count = snprintf(buffer, bufsize, ".%lu", num); 144 - buffer += count; 145 - if (bufsize <= count) 144 + if (count >= bufsize) 146 145 return -ENOBUFS; 146 + buffer += count; 147 147 bufsize -= count; 148 148 } 149 149 150 150 return ret; 151 + 152 + bad: 153 + snprintf(buffer, bufsize, "(bad)"); 154 + return -EBADMSG; 151 155 } 152 156 EXPORT_SYMBOL_GPL(sprint_oid); 153 157
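The `sprint_oid()` fix checks `count >= bufsize` immediately after each `snprintf()` and only then advances the cursor, relying on `snprintf()` returning the would-be length when output is truncated. A userspace sketch of that pattern; the `join_nums()` helper is hypothetical:

```c
#include <stdio.h>
#include <stddef.h>

/* Append dot-separated numbers into a fixed buffer, failing cleanly
 * on truncation: check the count against the remaining space *before*
 * advancing, the ordering the sprint_oid() fix establishes.
 * Returns the total length written, or -1 on overflow. */
static int join_nums(char *buffer, size_t bufsize, const unsigned int *v,
		     size_t n)
{
	int ret, count;
	size_t i;

	ret = count = snprintf(buffer, bufsize, "%u", v[0]);
	if ((size_t)count >= bufsize)
		return -1;		/* snprintf truncated: buffer too small */
	buffer += count;
	bufsize -= count;

	for (i = 1; i < n; i++) {
		ret += count = snprintf(buffer, bufsize, ".%u", v[i]);
		if ((size_t)count >= bufsize)
			return -1;	/* advance only after the bound check */
		buffer += count;
		bufsize -= count;
	}
	return ret;
}
```

The original bug was the mirror image: advancing `buffer` first and comparing with `<=`/`==` afterwards, which could step past the end or reject an exactly-full buffer.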
+10
lib/rbtree.c
··· 603 603 } 604 604 EXPORT_SYMBOL(rb_replace_node); 605 605 606 + void rb_replace_node_cached(struct rb_node *victim, struct rb_node *new, 607 + struct rb_root_cached *root) 608 + { 609 + rb_replace_node(victim, new, &root->rb_root); 610 + 611 + if (root->rb_leftmost == victim) 612 + root->rb_leftmost = new; 613 + } 614 + EXPORT_SYMBOL(rb_replace_node_cached); 615 + 606 616 void rb_replace_node_rcu(struct rb_node *victim, struct rb_node *new, 607 617 struct rb_root *root) 608 618 {
+1 -1
mm/early_ioremap.c
··· 111 111 enum fixed_addresses idx; 112 112 int i, slot; 113 113 114 - WARN_ON(system_state != SYSTEM_BOOTING); 114 + WARN_ON(system_state >= SYSTEM_RUNNING); 115 115 116 116 slot = -1; 117 117 for (i = 0; i < FIX_BTMAPS_SLOTS; i++) {
+4 -2
mm/frame_vector.c
··· 62 62 * get_user_pages_longterm() and disallow it for filesystem-dax 63 63 * mappings. 64 64 */ 65 - if (vma_is_fsdax(vma)) 66 - return -EOPNOTSUPP; 65 + if (vma_is_fsdax(vma)) { 66 + ret = -EOPNOTSUPP; 67 + goto out; 68 + } 67 69 68 70 if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) { 69 71 vec->got_ref = true;
+1 -1
mm/gup.c
··· 66 66 */ 67 67 static inline bool can_follow_write_pte(pte_t pte, unsigned int flags) 68 68 { 69 - return pte_access_permitted(pte, WRITE) || 69 + return pte_write(pte) || 70 70 ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)); 71 71 } 72 72
+4 -4
mm/hmm.c
··· 391 391 if (pmd_protnone(pmd)) 392 392 return hmm_vma_walk_clear(start, end, walk); 393 393 394 - if (!pmd_access_permitted(pmd, write_fault)) 394 + if (write_fault && !pmd_write(pmd)) 395 395 return hmm_vma_walk_clear(start, end, walk); 396 396 397 397 pfn = pmd_pfn(pmd) + pte_index(addr); 398 - flag |= pmd_access_permitted(pmd, WRITE) ? HMM_PFN_WRITE : 0; 398 + flag |= pmd_write(pmd) ? HMM_PFN_WRITE : 0; 399 399 for (; addr < end; addr += PAGE_SIZE, i++, pfn++) 400 400 pfns[i] = hmm_pfn_t_from_pfn(pfn) | flag; 401 401 return 0; ··· 456 456 continue; 457 457 } 458 458 459 - if (!pte_access_permitted(pte, write_fault)) 459 + if (write_fault && !pte_write(pte)) 460 460 goto fault; 461 461 462 462 pfns[i] = hmm_pfn_t_from_pfn(pte_pfn(pte)) | flag; 463 - pfns[i] |= pte_access_permitted(pte, WRITE) ? HMM_PFN_WRITE : 0; 463 + pfns[i] |= pte_write(pte) ? HMM_PFN_WRITE : 0; 464 464 continue; 465 465 466 466 fault:
+3 -3
mm/huge_memory.c
··· 870 870 */ 871 871 WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set"); 872 872 873 - if (!pmd_access_permitted(*pmd, flags & FOLL_WRITE)) 873 + if (flags & FOLL_WRITE && !pmd_write(*pmd)) 874 874 return NULL; 875 875 876 876 if (pmd_present(*pmd) && pmd_devmap(*pmd)) ··· 1012 1012 1013 1013 assert_spin_locked(pud_lockptr(mm, pud)); 1014 1014 1015 - if (!pud_access_permitted(*pud, flags & FOLL_WRITE)) 1015 + if (flags & FOLL_WRITE && !pud_write(*pud)) 1016 1016 return NULL; 1017 1017 1018 1018 if (pud_present(*pud) && pud_devmap(*pud)) ··· 1386 1386 */ 1387 1387 static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags) 1388 1388 { 1389 - return pmd_access_permitted(pmd, WRITE) || 1389 + return pmd_write(pmd) || 1390 1390 ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd)); 1391 1391 } 1392 1392
+1 -1
mm/kmemleak.c
··· 1523 1523 if (page_count(page) == 0) 1524 1524 continue; 1525 1525 scan_block(page, page + 1, NULL); 1526 - if (!(pfn % (MAX_SCAN_SIZE / sizeof(*page)))) 1526 + if (!(pfn & 63)) 1527 1527 cond_resched(); 1528 1528 } 1529 1529 }
+6 -5
mm/memory.c
··· 3831 3831 return VM_FAULT_FALLBACK; 3832 3832 } 3833 3833 3834 - static int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd) 3834 + /* `inline' is required to avoid gcc 4.1.2 build error */ 3835 + static inline int wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd) 3835 3836 { 3836 3837 if (vma_is_anonymous(vmf->vma)) 3837 3838 return do_huge_pmd_wp_page(vmf, orig_pmd); ··· 3949 3948 if (unlikely(!pte_same(*vmf->pte, entry))) 3950 3949 goto unlock; 3951 3950 if (vmf->flags & FAULT_FLAG_WRITE) { 3952 - if (!pte_access_permitted(entry, WRITE)) 3951 + if (!pte_write(entry)) 3953 3952 return do_wp_page(vmf); 3954 3953 entry = pte_mkdirty(entry); 3955 3954 } ··· 4014 4013 4015 4014 /* NUMA case for anonymous PUDs would go here */ 4016 4015 4017 - if (dirty && !pud_access_permitted(orig_pud, WRITE)) { 4016 + if (dirty && !pud_write(orig_pud)) { 4018 4017 ret = wp_huge_pud(&vmf, orig_pud); 4019 4018 if (!(ret & VM_FAULT_FALLBACK)) 4020 4019 return ret; ··· 4047 4046 if (pmd_protnone(orig_pmd) && vma_is_accessible(vma)) 4048 4047 return do_huge_pmd_numa_page(&vmf, orig_pmd); 4049 4048 4050 - if (dirty && !pmd_access_permitted(orig_pmd, WRITE)) { 4049 + if (dirty && !pmd_write(orig_pmd)) { 4051 4050 ret = wp_huge_pmd(&vmf, orig_pmd); 4052 4051 if (!(ret & VM_FAULT_FALLBACK)) 4053 4052 return ret; ··· 4337 4336 goto out; 4338 4337 pte = *ptep; 4339 4338 4340 - if (!pte_access_permitted(pte, flags & FOLL_WRITE)) 4339 + if ((flags & FOLL_WRITE) && !pte_write(pte)) 4341 4340 goto unlock; 4342 4341 4343 4342 *prot = pgprot_val(pte_pgprot(pte));
+5 -5
mm/mmap.c
··· 3019 3019 /* Use -1 here to ensure all VMAs in the mm are unmapped */ 3020 3020 unmap_vmas(&tlb, vma, 0, -1); 3021 3021 3022 - set_bit(MMF_OOM_SKIP, &mm->flags); 3023 - if (unlikely(tsk_is_oom_victim(current))) { 3022 + if (unlikely(mm_is_oom_victim(mm))) { 3024 3023 /* 3025 3024 * Wait for oom_reap_task() to stop working on this 3026 3025 * mm. Because MMF_OOM_SKIP is already set before 3027 3026 * calling down_read(), oom_reap_task() will not run 3028 3027 * on this "mm" post up_write(). 3029 3028 * 3030 - * tsk_is_oom_victim() cannot be set from under us 3031 - * either because current->mm is already set to NULL 3029 + * mm_is_oom_victim() cannot be set from under us 3030 + * either because victim->mm is already set to NULL 3032 3031 * under task_lock before calling mmput and oom_mm is 3033 - * set not NULL by the OOM killer only if current->mm 3032 + * set not NULL by the OOM killer only if victim->mm 3034 3033 * is found not NULL while holding the task_lock. 3035 3034 */ 3035 + set_bit(MMF_OOM_SKIP, &mm->flags); 3036 3036 down_write(&mm->mmap_sem); 3037 3037 up_write(&mm->mmap_sem); 3038 3038 }
+3 -1
mm/oom_kill.c
··· 683 683 return; 684 684 685 685 /* oom_mm is bound to the signal struct life time. */ 686 - if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) 686 + if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) { 687 687 mmgrab(tsk->signal->oom_mm); 688 + set_bit(MMF_OOM_VICTIM, &mm->flags); 689 + } 688 690 689 691 /* 690 692 * Make sure that the task is woken up from uninterruptible sleep
+11
mm/page_alloc.c
··· 2684 2684 { 2685 2685 struct page *page, *next; 2686 2686 unsigned long flags, pfn; 2687 + int batch_count = 0; 2687 2688 2688 2689 /* Prepare pages for freeing */ 2689 2690 list_for_each_entry_safe(page, next, list, lru) { ··· 2701 2700 set_page_private(page, 0); 2702 2701 trace_mm_page_free_batched(page); 2703 2702 free_unref_page_commit(page, pfn); 2703 + 2704 + /* 2705 + * Guard against excessive IRQ disabled times when we get 2706 + * a large list of pages to free. 2707 + */ 2708 + if (++batch_count == SWAP_CLUSTER_MAX) { 2709 + local_irq_restore(flags); 2710 + batch_count = 0; 2711 + local_irq_save(flags); 2712 + } 2704 2713 } 2705 2714 local_irq_restore(flags); 2706 2715 }
+4
mm/percpu.c
··· 2719 2719 2720 2720 if (pcpu_setup_first_chunk(ai, fc) < 0) 2721 2721 panic("Failed to initialize percpu areas."); 2722 + #ifdef CONFIG_CRIS 2723 + #warning "the CRIS architecture has physical and virtual addresses confused" 2724 + #else 2722 2725 pcpu_free_alloc_info(ai); 2726 + #endif 2723 2727 } 2724 2728 2725 2729 #endif /* CONFIG_SMP */
+10 -13
mm/slab.c
··· 1584 1584 *dbg_redzone2(cachep, objp)); 1585 1585 } 1586 1586 1587 - if (cachep->flags & SLAB_STORE_USER) { 1588 - pr_err("Last user: [<%p>](%pSR)\n", 1589 - *dbg_userword(cachep, objp), 1590 - *dbg_userword(cachep, objp)); 1591 - } 1587 + if (cachep->flags & SLAB_STORE_USER) 1588 + pr_err("Last user: (%pSR)\n", *dbg_userword(cachep, objp)); 1592 1589 realobj = (char *)objp + obj_offset(cachep); 1593 1590 size = cachep->object_size; 1594 1591 for (i = 0; i < size && lines; i += 16, lines--) { ··· 1618 1621 /* Mismatch ! */ 1619 1622 /* Print header */ 1620 1623 if (lines == 0) { 1621 - pr_err("Slab corruption (%s): %s start=%p, len=%d\n", 1624 + pr_err("Slab corruption (%s): %s start=%px, len=%d\n", 1622 1625 print_tainted(), cachep->name, 1623 1626 realobj, size); 1624 1627 print_objinfo(cachep, objp, 0); ··· 1647 1650 if (objnr) { 1648 1651 objp = index_to_obj(cachep, page, objnr - 1); 1649 1652 realobj = (char *)objp + obj_offset(cachep); 1650 - pr_err("Prev obj: start=%p, len=%d\n", realobj, size); 1653 + pr_err("Prev obj: start=%px, len=%d\n", realobj, size); 1651 1654 print_objinfo(cachep, objp, 2); 1652 1655 } 1653 1656 if (objnr + 1 < cachep->num) { 1654 1657 objp = index_to_obj(cachep, page, objnr + 1); 1655 1658 realobj = (char *)objp + obj_offset(cachep); 1656 - pr_err("Next obj: start=%p, len=%d\n", realobj, size); 1659 + pr_err("Next obj: start=%px, len=%d\n", realobj, size); 1657 1660 print_objinfo(cachep, objp, 2); 1658 1661 } 1659 1662 } ··· 2605 2608 /* Verify double free bug */ 2606 2609 for (i = page->active; i < cachep->num; i++) { 2607 2610 if (get_free_obj(page, i) == objnr) { 2608 - pr_err("slab: double free detected in cache '%s', objp %p\n", 2611 + pr_err("slab: double free detected in cache '%s', objp %px\n", 2609 2612 cachep->name, objp); 2610 2613 BUG(); 2611 2614 } ··· 2769 2772 else 2770 2773 slab_error(cache, "memory outside object was overwritten"); 2771 2774 2772 - pr_err("%p: redzone 1:0x%llx, redzone 2:0x%llx\n", 2775 + pr_err("%px: redzone 1:0x%llx, redzone 2:0x%llx\n", 2773 2776 obj, redzone1, redzone2); 2774 2777 } 2775 2778 ··· 3075 3078 if (*dbg_redzone1(cachep, objp) != RED_INACTIVE || 3076 3079 *dbg_redzone2(cachep, objp) != RED_INACTIVE) { 3077 3080 slab_error(cachep, "double free, or memory outside object was overwritten"); 3078 - pr_err("%p: redzone 1:0x%llx, redzone 2:0x%llx\n", 3081 + pr_err("%px: redzone 1:0x%llx, redzone 2:0x%llx\n", 3079 3082 objp, *dbg_redzone1(cachep, objp), 3080 3083 *dbg_redzone2(cachep, objp)); 3081 3084 } ··· 3088 3091 cachep->ctor(objp); 3089 3092 if (ARCH_SLAB_MINALIGN && 3090 3093 ((unsigned long)objp & (ARCH_SLAB_MINALIGN-1))) { 3091 - pr_err("0x%p: not aligned to ARCH_SLAB_MINALIGN=%d\n", 3094 + pr_err("0x%px: not aligned to ARCH_SLAB_MINALIGN=%d\n", 3092 3095 objp, (int)ARCH_SLAB_MINALIGN); 3093 3096 } 3094 3097 return objp; ··· 4280 4283 return; 4281 4284 } 4282 4285 #endif 4283 - seq_printf(m, "%p", (void *)address); 4286 + seq_printf(m, "%px", (void *)address); 4284 4287 } 4285 4288 4286 4289 static int leaks_show(struct seq_file *m, void *p)
+2 -2
net/batman-adv/bat_iv_ogm.c
··· 1214 1214 orig_node->last_seen = jiffies; 1215 1215 1216 1216 /* find packet count of corresponding one hop neighbor */ 1217 - spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock); 1217 + spin_lock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock); 1218 1218 if_num = if_incoming->if_num; 1219 1219 orig_eq_count = orig_neigh_node->bat_iv.bcast_own_sum[if_num]; 1220 1220 neigh_ifinfo = batadv_neigh_ifinfo_new(neigh_node, if_outgoing); ··· 1224 1224 } else { 1225 1225 neigh_rq_count = 0; 1226 1226 } 1227 - spin_unlock_bh(&orig_node->bat_iv.ogm_cnt_lock); 1227 + spin_unlock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock); 1228 1228 1229 1229 /* pay attention to not get a value bigger than 100 % */ 1230 1230 if (orig_eq_count > neigh_rq_count)
+1 -1
net/batman-adv/bat_v.c
··· 814 814 } 815 815 816 816 orig_gw = batadv_gw_node_get(bat_priv, orig_node); 817 - if (!orig_node) 817 + if (!orig_gw) 818 818 goto out; 819 819 820 820 if (batadv_v_gw_throughput_get(orig_gw, &orig_throughput) < 0)
+2
net/batman-adv/fragmentation.c
··· 499 499 */ 500 500 if (skb->priority >= 256 && skb->priority <= 263) 501 501 frag_header.priority = skb->priority - 256; 502 + else 503 + frag_header.priority = 0; 502 504 503 505 ether_addr_copy(frag_header.orig, primary_if->net_dev->dev_addr); 504 506 ether_addr_copy(frag_header.dest, orig_node->orig);
+2 -2
net/batman-adv/tp_meter.c
··· 482 482 483 483 /** 484 484 * batadv_tp_sender_timeout - timer that fires in case of packet loss 485 - * @arg: address of the related tp_vars 485 + * @t: address to timer_list inside tp_vars 486 486 * 487 487 * If fired it means that there was packet loss. 488 488 * Switch to Slow Start, set the ss_threshold to half of the current cwnd and ··· 1106 1106 /** 1107 1107 * batadv_tp_receiver_shutdown - stop a tp meter receiver when timeout is 1108 1108 * reached without received ack 1109 - * @arg: address of the related tp_vars 1109 + * @t: address to timer_list inside tp_vars 1110 1110 */ 1111 1111 static void batadv_tp_receiver_shutdown(struct timer_list *t) 1112 1112 {
-1
net/core/netprio_cgroup.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/types.h> 17 - #include <linux/module.h> 18 17 #include <linux/string.h> 19 18 #include <linux/errno.h> 20 19 #include <linux/skbuff.h>
+5 -1
net/core/skbuff.c
··· 4293 4293 struct sock *sk = skb->sk; 4294 4294 4295 4295 if (!skb_may_tx_timestamp(sk, false)) 4296 - return; 4296 + goto err; 4297 4297 4298 4298 /* Take a reference to prevent skb_orphan() from freeing the socket, 4299 4299 * but only if the socket refcount is not zero. ··· 4302 4302 *skb_hwtstamps(skb) = *hwtstamps; 4303 4303 __skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND, false); 4304 4304 sock_put(sk); 4305 + return; 4305 4306 } 4307 + 4308 + err: 4309 + kfree_skb(skb); 4306 4310 } 4307 4311 EXPORT_SYMBOL_GPL(skb_complete_tx_timestamp); 4308 4312
-1
net/dsa/slave.c
··· 16 16 #include <linux/of_net.h> 17 17 #include <linux/of_mdio.h> 18 18 #include <linux/mdio.h> 19 - #include <linux/list.h> 20 19 #include <net/rtnetlink.h> 21 20 #include <net/pkt_cls.h> 22 21 #include <net/tc_act/tc_mirred.h>
+1 -1
net/ipv4/devinet.c
··· 1428 1428 1429 1429 static bool inetdev_valid_mtu(unsigned int mtu) 1430 1430 { 1431 - return mtu >= 68; 1431 + return mtu >= IPV4_MIN_MTU; 1432 1432 } 1433 1433 1434 1434 static void inetdev_send_gratuitous_arp(struct net_device *dev,
+34 -10
net/ipv4/igmp.c
··· 89 89 #include <linux/rtnetlink.h> 90 90 #include <linux/times.h> 91 91 #include <linux/pkt_sched.h> 92 + #include <linux/byteorder/generic.h> 92 93 93 94 #include <net/net_namespace.h> 94 95 #include <net/arp.h> ··· 322 321 return scount; 323 322 } 324 323 324 + /* source address selection per RFC 3376 section 4.2.13 */ 325 + static __be32 igmpv3_get_srcaddr(struct net_device *dev, 326 + const struct flowi4 *fl4) 327 + { 328 + struct in_device *in_dev = __in_dev_get_rcu(dev); 329 + 330 + if (!in_dev) 331 + return htonl(INADDR_ANY); 332 + 333 + for_ifa(in_dev) { 334 + if (inet_ifa_match(fl4->saddr, ifa)) 335 + return fl4->saddr; 336 + } endfor_ifa(in_dev); 337 + 338 + return htonl(INADDR_ANY); 339 + } 340 + 325 341 static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu) 326 342 { 327 343 struct sk_buff *skb; ··· 386 368 pip->frag_off = htons(IP_DF); 387 369 pip->ttl = 1; 388 370 pip->daddr = fl4.daddr; 389 - pip->saddr = fl4.saddr; 371 + pip->saddr = igmpv3_get_srcaddr(dev, &fl4); 390 372 pip->protocol = IPPROTO_IGMP; 391 373 pip->tot_len = 0; /* filled in later */ 392 374 ip_select_ident(net, skb, NULL); ··· 422 404 } 423 405 424 406 static struct sk_buff *add_grhead(struct sk_buff *skb, struct ip_mc_list *pmc, 425 - int type, struct igmpv3_grec **ppgr) 407 + int type, struct igmpv3_grec **ppgr, unsigned int mtu) 426 408 { 427 409 struct net_device *dev = pmc->interface->dev; 428 410 struct igmpv3_report *pih; 429 411 struct igmpv3_grec *pgr; 430 412 431 - if (!skb) 432 - skb = igmpv3_newpack(dev, dev->mtu); 433 - if (!skb) 434 - return NULL; 413 + if (!skb) { 414 + skb = igmpv3_newpack(dev, mtu); 415 + if (!skb) 416 + return NULL; 417 + } 435 418 pgr = skb_put(skb, sizeof(struct igmpv3_grec)); 436 419 pgr->grec_type = type; 437 420 pgr->grec_auxwords = 0; ··· 455 436 struct igmpv3_grec *pgr = NULL; 456 437 struct ip_sf_list *psf, *psf_next, *psf_prev, **psf_list; 457 438 int scount, stotal, first, isquery, truncate; 439 + unsigned int mtu; 458 440 459 441 if (pmc->multiaddr == IGMP_ALL_HOSTS) 460 442 return skb; 461 443 if (ipv4_is_local_multicast(pmc->multiaddr) && !net->ipv4.sysctl_igmp_llm_reports) 444 + return skb; 445 + 446 + mtu = READ_ONCE(dev->mtu); 447 + if (mtu < IPV4_MIN_MTU) 462 448 return skb; 463 449 464 450 isquery = type == IGMPV3_MODE_IS_INCLUDE || ··· 486 462 AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) { 487 463 if (skb) 488 464 igmpv3_sendpack(skb); 489 - skb = igmpv3_newpack(dev, dev->mtu); 465 + skb = igmpv3_newpack(dev, mtu); 490 466 } 491 467 } 492 468 first = 1; ··· 522 498 pgr->grec_nsrcs = htons(scount); 523 499 if (skb) 524 500 igmpv3_sendpack(skb); 525 - skb = igmpv3_newpack(dev, dev->mtu); 501 + skb = igmpv3_newpack(dev, mtu); 526 502 first = 1; 527 503 scount = 0; 528 504 } 529 505 if (first) { 530 - skb = add_grhead(skb, pmc, type, &pgr); 506 + skb = add_grhead(skb, pmc, type, &pgr, mtu); 531 507 first = 0; 532 508 } 533 509 if (!skb) ··· 562 538 igmpv3_sendpack(skb); 563 539 skb = NULL; /* add_grhead will get a new one */ 564 540 } 565 - skb = add_grhead(skb, pmc, type, &pgr); 541 + skb = add_grhead(skb, pmc, type, &pgr, mtu); 566 542 } 567 543 } 568 544 if (pgr)
+1 -1
net/ipv4/ip_gre.c
··· 269 269 270 270 /* Check based hdr len */ 271 271 if (unlikely(!pskb_may_pull(skb, len))) 272 - return -ENOMEM; 272 + return PACKET_REJECT; 273 273 274 274 iph = ip_hdr(skb); 275 275 ershdr = (struct erspan_base_hdr *)(skb->data + gre_hdr_len);
+2 -2
net/ipv4/ip_tunnel.c
··· 349 349 dev->needed_headroom = t_hlen + hlen; 350 350 mtu -= (dev->hard_header_len + t_hlen); 351 351 352 - if (mtu < 68) 353 - mtu = 68; 352 + if (mtu < IPV4_MIN_MTU) 353 + mtu = IPV4_MIN_MTU; 354 354 355 355 return mtu; 356 356 }
-1
net/ipv4/netfilter/arp_tables.c
··· 373 373 if (!xt_find_jump_offset(offsets, newpos, 374 374 newinfo->number)) 375 375 return 0; 376 - e = entry0 + newpos; 377 376 } else { 378 377 /* ... this is a fallthru */ 379 378 newpos = pos + e->next_offset;
-1
net/ipv4/netfilter/ip_tables.c
··· 439 439 if (!xt_find_jump_offset(offsets, newpos, 440 440 newinfo->number)) 441 441 return 0; 442 - e = entry0 + newpos; 443 442 } else { 444 443 /* ... this is a fallthru */ 445 444 newpos = pos + e->next_offset;
+2 -1
net/ipv4/netfilter/ipt_CLUSTERIP.c
··· 813 813 814 814 static void clusterip_net_exit(struct net *net) 815 815 { 816 - #ifdef CONFIG_PROC_FS 817 816 struct clusterip_net *cn = net_generic(net, clusterip_net_id); 817 + #ifdef CONFIG_PROC_FS 818 818 proc_remove(cn->procdir); 819 819 cn->procdir = NULL; 820 820 #endif 821 821 nf_unregister_net_hook(net, &cip_arp_ops); 822 + WARN_ON_ONCE(!list_empty(&cn->configs)); 822 823 } 823 824 824 825 static struct pernet_operations clusterip_net_ops = {
+10 -5
net/ipv4/raw.c
··· 513 513 int err; 514 514 struct ip_options_data opt_copy; 515 515 struct raw_frag_vec rfv; 516 + int hdrincl; 516 517 517 518 err = -EMSGSIZE; 518 519 if (len > 0xFFFF) 519 520 goto out; 520 521 522 + /* hdrincl should be READ_ONCE(inet->hdrincl) 523 + * but READ_ONCE() doesn't work with bit fields 524 + */ 525 + hdrincl = inet->hdrincl; 521 526 /* 522 527 * Check the flags. 523 528 */ ··· 598 593 /* Linux does not mangle headers on raw sockets, 599 594 * so that IP options + IP_HDRINCL is non-sense. 600 595 */ 601 - if (inet->hdrincl) 596 + if (hdrincl) 602 597 goto done; 603 598 if (ipc.opt->opt.srr) { 604 599 if (!daddr) ··· 620 615 621 616 flowi4_init_output(&fl4, ipc.oif, sk->sk_mark, tos, 622 617 RT_SCOPE_UNIVERSE, 623 - inet->hdrincl ? IPPROTO_RAW : sk->sk_protocol, 618 + hdrincl ? IPPROTO_RAW : sk->sk_protocol, 624 619 inet_sk_flowi_flags(sk) | 625 - (inet->hdrincl ? FLOWI_FLAG_KNOWN_NH : 0), 620 + (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0), 626 621 daddr, saddr, 0, 0, sk->sk_uid); 627 622 628 - if (!inet->hdrincl) { 623 + if (!hdrincl) { 629 624 rfv.msg = msg; 630 625 rfv.hlen = 0; 631 626 ··· 650 645 goto do_confirm; 651 646 back_from_confirm: 652 647 653 - if (inet->hdrincl) 648 + if (hdrincl) 654 649 err = raw_send_hdrinc(sk, &fl4, msg, len, 655 650 &rt, msg->msg_flags, &ipc.sockc); 656 651
+6 -4
net/ipv4/tcp_input.c
··· 508 508 u32 new_sample = tp->rcv_rtt_est.rtt_us; 509 509 long m = sample; 510 510 511 - if (m == 0) 512 - m = 1; 513 - 514 511 if (new_sample != 0) { 515 512 /* If we sample in larger samples in the non-timestamp 516 513 * case, we could grossly overestimate the RTT especially ··· 544 547 if (before(tp->rcv_nxt, tp->rcv_rtt_est.seq)) 545 548 return; 546 549 delta_us = tcp_stamp_us_delta(tp->tcp_mstamp, tp->rcv_rtt_est.time); 550 + if (!delta_us) 551 + delta_us = 1; 547 552 tcp_rcv_rtt_update(tp, delta_us, 1); 548 553 549 554 new_measure: ··· 562 563 (TCP_SKB_CB(skb)->end_seq - 563 564 TCP_SKB_CB(skb)->seq >= inet_csk(sk)->icsk_ack.rcv_mss)) { 564 565 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 565 - u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 566 + u32 delta_us; 566 567 568 + if (!delta) 569 + delta = 1; 570 + delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 567 571 tcp_rcv_rtt_update(tp, delta_us, 0); 568 572 } 569 573 }
+1 -1
net/ipv4/tcp_ipv4.c
··· 848 848 tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, 849 849 req->ts_recent, 850 850 0, 851 - tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&ip_hdr(skb)->daddr, 851 + tcp_md5_do_lookup(sk, (union tcp_md5_addr *)&ip_hdr(skb)->saddr, 852 852 AF_INET), 853 853 inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0, 854 854 ip_hdr(skb)->tos);
+2
net/ipv4/tcp_timer.c
··· 249 249 icsk->icsk_ack.pingpong = 0; 250 250 icsk->icsk_ack.ato = TCP_ATO_MIN; 251 251 } 252 + tcp_mstamp_refresh(tcp_sk(sk)); 252 253 tcp_send_ack(sk); 253 254 __NET_INC_STATS(sock_net(sk), LINUX_MIB_DELAYEDACKS); 254 255 } ··· 618 617 goto out; 619 618 } 620 619 620 + tcp_mstamp_refresh(tp); 621 621 if (sk->sk_state == TCP_FIN_WAIT2 && sock_flag(sk, SOCK_DEAD)) { 622 622 if (tp->linger2 >= 0) { 623 623 const int tmo = tcp_fin_time(sk) - TCP_TIMEWAIT_LEN;
+15 -10
net/ipv6/mcast.c
··· 1682 1682 } 1683 1683 1684 1684 static struct sk_buff *add_grhead(struct sk_buff *skb, struct ifmcaddr6 *pmc, 1685 - int type, struct mld2_grec **ppgr) 1685 + int type, struct mld2_grec **ppgr, unsigned int mtu) 1686 1686 { 1687 - struct net_device *dev = pmc->idev->dev; 1688 1687 struct mld2_report *pmr; 1689 1688 struct mld2_grec *pgr; 1690 1689 1691 - if (!skb) 1692 - skb = mld_newpack(pmc->idev, dev->mtu); 1693 - if (!skb) 1694 - return NULL; 1690 + if (!skb) { 1691 + skb = mld_newpack(pmc->idev, mtu); 1692 + if (!skb) 1693 + return NULL; 1694 + } 1695 1695 pgr = skb_put(skb, sizeof(struct mld2_grec)); 1696 1696 pgr->grec_type = type; 1697 1697 pgr->grec_auxwords = 0; ··· 1714 1714 struct mld2_grec *pgr = NULL; 1715 1715 struct ip6_sf_list *psf, *psf_next, *psf_prev, **psf_list; 1716 1716 int scount, stotal, first, isquery, truncate; 1717 + unsigned int mtu; 1717 1718 1718 1719 if (pmc->mca_flags & MAF_NOREPORT) 1720 + return skb; 1721 + 1722 + mtu = READ_ONCE(dev->mtu); 1723 + if (mtu < IPV6_MIN_MTU) 1719 1724 return skb; 1720 1725 1721 1726 isquery = type == MLD2_MODE_IS_INCLUDE || ··· 1743 1738 AVAILABLE(skb) < grec_size(pmc, type, gdeleted, sdeleted)) { 1744 1739 if (skb) 1745 1740 mld_sendpack(skb); 1746 - skb = mld_newpack(idev, dev->mtu); 1741 + skb = mld_newpack(idev, mtu); 1747 1742 } 1748 1743 } 1749 1744 first = 1; ··· 1779 1774 pgr->grec_nsrcs = htons(scount); 1780 1775 if (skb) 1781 1776 mld_sendpack(skb); 1782 - skb = mld_newpack(idev, dev->mtu); 1777 + skb = mld_newpack(idev, mtu); 1783 1778 first = 1; 1784 1779 scount = 0; 1785 1780 } 1786 1781 if (first) { 1787 - skb = add_grhead(skb, pmc, type, &pgr); 1782 + skb = add_grhead(skb, pmc, type, &pgr, mtu); 1788 1783 first = 0; 1789 1784 } 1790 1785 if (!skb) ··· 1819 1814 mld_sendpack(skb); 1820 1815 skb = NULL; /* add_grhead will get a new one */ 1821 1816 } 1822 - skb = add_grhead(skb, pmc, type, &pgr); 1817 + skb = add_grhead(skb, pmc, type, &pgr, mtu); 1823 1818 } 1824 1819 } 1825 1820 if (pgr)
-1
net/ipv6/netfilter/ip6_tables.c
··· 458 458 if (!xt_find_jump_offset(offsets, newpos, 459 459 newinfo->number)) 460 460 return 0; 461 - e = entry0 + newpos; 462 461 } else { 463 462 /* ... this is a fallthru */ 464 463 newpos = pos + e->next_offset;
+7 -1
net/ipv6/netfilter/ip6t_MASQUERADE.c
··· 33 33 34 34 if (range->flags & NF_NAT_RANGE_MAP_IPS) 35 35 return -EINVAL; 36 - return 0; 36 + return nf_ct_netns_get(par->net, par->family); 37 + } 38 + 39 + static void masquerade_tg6_destroy(const struct xt_tgdtor_param *par) 40 + { 41 + nf_ct_netns_put(par->net, par->family); 37 42 } 38 43 39 44 static struct xt_target masquerade_tg6_reg __read_mostly = { 40 45 .name = "MASQUERADE", 41 46 .family = NFPROTO_IPV6, 42 47 .checkentry = masquerade_tg6_checkentry, 48 + .destroy = masquerade_tg6_destroy, 43 49 .target = masquerade_tg6, 44 50 .targetsize = sizeof(struct nf_nat_range), 45 51 .table = "nat",
+1 -1
net/ipv6/tcp_ipv6.c
··· 994 994 req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale, 995 995 tcp_time_stamp_raw() + tcp_rsk(req)->ts_off, 996 996 req->ts_recent, sk->sk_bound_dev_if, 997 - tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->daddr), 997 + tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr), 998 998 0, 0); 999 999 } 1000 1000
+2 -3
net/mac80211/ht.c
··· 291 291 int i; 292 292 293 293 mutex_lock(&sta->ampdu_mlme.mtx); 294 - for (i = 0; i < IEEE80211_NUM_TIDS; i++) { 294 + for (i = 0; i < IEEE80211_NUM_TIDS; i++) 295 295 ___ieee80211_stop_rx_ba_session(sta, i, WLAN_BACK_RECIPIENT, 296 296 WLAN_REASON_QSTA_LEAVE_QBSS, 297 297 reason != AGG_STOP_DESTROY_STA && 298 298 reason != AGG_STOP_PEER_REQUEST); 299 - } 300 - mutex_unlock(&sta->ampdu_mlme.mtx); 301 299 302 300 for (i = 0; i < IEEE80211_NUM_TIDS; i++) 303 301 ___ieee80211_stop_tx_ba_session(sta, i, reason); 302 + mutex_unlock(&sta->ampdu_mlme.mtx); 304 303 305 304 /* stopping might queue the work again - so cancel only afterwards */ 306 305 cancel_work_sync(&sta->ampdu_mlme.work);
+98 -30
net/netfilter/nf_conntrack_h323_asn1.c
··· 103 103 #define INC_BIT(bs) if((++(bs)->bit)>7){(bs)->cur++;(bs)->bit=0;} 104 104 #define INC_BITS(bs,b) if(((bs)->bit+=(b))>7){(bs)->cur+=(bs)->bit>>3;(bs)->bit&=7;} 105 105 #define BYTE_ALIGN(bs) if((bs)->bit){(bs)->cur++;(bs)->bit=0;} 106 - #define CHECK_BOUND(bs,n) if((bs)->cur+(n)>(bs)->end)return(H323_ERROR_BOUND) 107 106 static unsigned int get_len(struct bitstr *bs); 108 107 static unsigned int get_bit(struct bitstr *bs); 109 108 static unsigned int get_bits(struct bitstr *bs, unsigned int b); ··· 162 163 } 163 164 164 165 return v; 166 + } 167 + 168 + static int nf_h323_error_boundary(struct bitstr *bs, size_t bytes, size_t bits) 169 + { 170 + bits += bs->bit; 171 + bytes += bits / BITS_PER_BYTE; 172 + if (bits % BITS_PER_BYTE > 0) 173 + bytes++; 174 + 175 + if (*bs->cur + bytes > *bs->end) 176 + return 1; 177 + 178 + return 0; 165 179 } 166 180 167 181 /****************************************************************************/ ··· 291 279 PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name); 292 280 293 281 INC_BIT(bs); 294 - 295 - CHECK_BOUND(bs, 0); 282 + if (nf_h323_error_boundary(bs, 0, 0)) 283 + return H323_ERROR_BOUND; 296 284 return H323_ERROR_NONE; 297 285 } 298 286 ··· 305 293 PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name); 306 294 307 295 BYTE_ALIGN(bs); 308 - CHECK_BOUND(bs, 1); 296 + if (nf_h323_error_boundary(bs, 1, 0)) 297 + return H323_ERROR_BOUND; 298 + 309 299 len = *bs->cur++; 310 300 bs->cur += len; 301 + if (nf_h323_error_boundary(bs, 0, 0)) 302 + return H323_ERROR_BOUND; 311 303 312 - CHECK_BOUND(bs, 0); 313 304 return H323_ERROR_NONE; 314 305 } 315 306 ··· 334 319 bs->cur += 2; 335 320 break; 336 321 case CONS: /* 64K < Range < 4G */ 322 + if (nf_h323_error_boundary(bs, 0, 2)) 323 + return H323_ERROR_BOUND; 337 324 len = get_bits(bs, 2) + 1; 338 325 BYTE_ALIGN(bs); 339 326 if (base && (f->attr & DECODE)) { /* timeToLive */ ··· 347 330 break; 348 331 case UNCO: 349 332 BYTE_ALIGN(bs); 350 - CHECK_BOUND(bs, 2); 333 + if 
(nf_h323_error_boundary(bs, 2, 0)) 334 + return H323_ERROR_BOUND; 351 335 len = get_len(bs); 352 336 bs->cur += len; 353 337 break; ··· 359 341 360 342 PRINT("\n"); 361 343 362 - CHECK_BOUND(bs, 0); 344 + if (nf_h323_error_boundary(bs, 0, 0)) 345 + return H323_ERROR_BOUND; 363 346 return H323_ERROR_NONE; 364 347 } 365 348 ··· 376 357 INC_BITS(bs, f->sz); 377 358 } 378 359 379 - CHECK_BOUND(bs, 0); 360 + if (nf_h323_error_boundary(bs, 0, 0)) 361 + return H323_ERROR_BOUND; 380 362 return H323_ERROR_NONE; 381 363 } 382 364 ··· 395 375 len = f->lb; 396 376 break; 397 377 case WORD: /* 2-byte length */ 398 - CHECK_BOUND(bs, 2); 378 + if (nf_h323_error_boundary(bs, 2, 0)) 379 + return H323_ERROR_BOUND; 399 380 len = (*bs->cur++) << 8; 400 381 len += (*bs->cur++) + f->lb; 401 382 break; 402 383 case SEMI: 403 - CHECK_BOUND(bs, 2); 384 + if (nf_h323_error_boundary(bs, 2, 0)) 385 + return H323_ERROR_BOUND; 404 386 len = get_len(bs); 405 387 break; 406 388 default: ··· 413 391 bs->cur += len >> 3; 414 392 bs->bit = len & 7; 415 393 416 - CHECK_BOUND(bs, 0); 394 + if (nf_h323_error_boundary(bs, 0, 0)) 395 + return H323_ERROR_BOUND; 417 396 return H323_ERROR_NONE; 418 397 } 419 398 ··· 427 404 PRINT("%*.s%s\n", level * TAB_SIZE, " ", f->name); 428 405 429 406 /* 2 <= Range <= 255 */ 407 + if (nf_h323_error_boundary(bs, 0, f->sz)) 408 + return H323_ERROR_BOUND; 430 409 len = get_bits(bs, f->sz) + f->lb; 431 410 432 411 BYTE_ALIGN(bs); 433 412 INC_BITS(bs, (len << 2)); 434 413 435 - CHECK_BOUND(bs, 0); 414 + if (nf_h323_error_boundary(bs, 0, 0)) 415 + return H323_ERROR_BOUND; 436 416 return H323_ERROR_NONE; 437 417 } 438 418 ··· 466 440 break; 467 441 case BYTE: /* Range == 256 */ 468 442 BYTE_ALIGN(bs); 469 - CHECK_BOUND(bs, 1); 443 + if (nf_h323_error_boundary(bs, 1, 0)) 444 + return H323_ERROR_BOUND; 470 445 len = (*bs->cur++) + f->lb; 471 446 break; 472 447 case SEMI: 473 448 BYTE_ALIGN(bs); 474 - CHECK_BOUND(bs, 2); 449 + if (nf_h323_error_boundary(bs, 2, 0)) 450 + return 
H323_ERROR_BOUND; 475 451 len = get_len(bs) + f->lb; 476 452 break; 477 453 default: /* 2 <= Range <= 255 */ 454 + if (nf_h323_error_boundary(bs, 0, f->sz)) 455 + return H323_ERROR_BOUND; 478 456 len = get_bits(bs, f->sz) + f->lb; 479 457 BYTE_ALIGN(bs); 480 458 break; ··· 488 458 489 459 PRINT("\n"); 490 460 491 - CHECK_BOUND(bs, 0); 461 + if (nf_h323_error_boundary(bs, 0, 0)) 462 + return H323_ERROR_BOUND; 492 463 return H323_ERROR_NONE; 493 464 } 494 465 ··· 504 473 switch (f->sz) { 505 474 case BYTE: /* Range == 256 */ 506 475 BYTE_ALIGN(bs); 507 - CHECK_BOUND(bs, 1); 476 + if (nf_h323_error_boundary(bs, 1, 0)) 477 + return H323_ERROR_BOUND; 508 478 len = (*bs->cur++) + f->lb; 509 479 break; 510 480 default: /* 2 <= Range <= 255 */ 481 + if (nf_h323_error_boundary(bs, 0, f->sz)) 482 + return H323_ERROR_BOUND; 511 483 len = get_bits(bs, f->sz) + f->lb; 512 484 BYTE_ALIGN(bs); 513 485 break; ··· 518 484 519 485 bs->cur += len << 1; 520 486 521 - CHECK_BOUND(bs, 0); 487 + if (nf_h323_error_boundary(bs, 0, 0)) 488 + return H323_ERROR_BOUND; 522 489 return H323_ERROR_NONE; 523 490 } 524 491 ··· 538 503 base = (base && (f->attr & DECODE)) ? base + f->offset : NULL; 539 504 540 505 /* Extensible? */ 506 + if (nf_h323_error_boundary(bs, 0, 1)) 507 + return H323_ERROR_BOUND; 541 508 ext = (f->attr & EXT) ? 
get_bit(bs) : 0; 542 509 543 510 /* Get fields bitmap */ 511 + if (nf_h323_error_boundary(bs, 0, f->sz)) 512 + return H323_ERROR_BOUND; 544 513 bmp = get_bitmap(bs, f->sz); 545 514 if (base) 546 515 *(unsigned int *)base = bmp; ··· 564 525 565 526 /* Decode */ 566 527 if (son->attr & OPEN) { /* Open field */ 567 - CHECK_BOUND(bs, 2); 528 + if (nf_h323_error_boundary(bs, 2, 0)) 529 + return H323_ERROR_BOUND; 568 530 len = get_len(bs); 569 - CHECK_BOUND(bs, len); 531 + if (nf_h323_error_boundary(bs, len, 0)) 532 + return H323_ERROR_BOUND; 570 533 if (!base || !(son->attr & DECODE)) { 571 534 PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, 572 535 " ", son->name); ··· 596 555 return H323_ERROR_NONE; 597 556 598 557 /* Get the extension bitmap */ 558 + if (nf_h323_error_boundary(bs, 0, 7)) 559 + return H323_ERROR_BOUND; 599 560 bmp2_len = get_bits(bs, 7) + 1; 600 - CHECK_BOUND(bs, (bmp2_len + 7) >> 3); 561 + if (nf_h323_error_boundary(bs, 0, bmp2_len)) 562 + return H323_ERROR_BOUND; 601 563 bmp2 = get_bitmap(bs, bmp2_len); 602 564 bmp |= bmp2 >> f->sz; 603 565 if (base) ··· 611 567 for (opt = 0; opt < bmp2_len; opt++, i++, son++) { 612 568 /* Check Range */ 613 569 if (i >= f->ub) { /* Newer Version? 
*/ 614 - CHECK_BOUND(bs, 2); 570 + if (nf_h323_error_boundary(bs, 2, 0)) 571 + return H323_ERROR_BOUND; 615 572 len = get_len(bs); 616 - CHECK_BOUND(bs, len); 573 + if (nf_h323_error_boundary(bs, len, 0)) 574 + return H323_ERROR_BOUND; 617 575 bs->cur += len; 618 576 continue; 619 577 } ··· 629 583 if (!((0x80000000 >> opt) & bmp2)) /* Not present */ 630 584 continue; 631 585 632 - CHECK_BOUND(bs, 2); 586 + if (nf_h323_error_boundary(bs, 2, 0)) 587 + return H323_ERROR_BOUND; 633 588 len = get_len(bs); 634 - CHECK_BOUND(bs, len); 589 + if (nf_h323_error_boundary(bs, len, 0)) 590 + return H323_ERROR_BOUND; 635 591 if (!base || !(son->attr & DECODE)) { 636 592 PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, " ", 637 593 son->name); ··· 671 623 switch (f->sz) { 672 624 case BYTE: 673 625 BYTE_ALIGN(bs); 674 - CHECK_BOUND(bs, 1); 626 + if (nf_h323_error_boundary(bs, 1, 0)) 627 + return H323_ERROR_BOUND; 675 628 count = *bs->cur++; 676 629 break; 677 630 case WORD: 678 631 BYTE_ALIGN(bs); 679 - CHECK_BOUND(bs, 2); 632 + if (nf_h323_error_boundary(bs, 2, 0)) 633 + return H323_ERROR_BOUND; 680 634 count = *bs->cur++; 681 635 count <<= 8; 682 636 count += *bs->cur++; 683 637 break; 684 638 case SEMI: 685 639 BYTE_ALIGN(bs); 686 - CHECK_BOUND(bs, 2); 640 + if (nf_h323_error_boundary(bs, 2, 0)) 641 + return H323_ERROR_BOUND; 687 642 count = get_len(bs); 688 643 break; 689 644 default: 645 + if (nf_h323_error_boundary(bs, 0, f->sz)) 646 + return H323_ERROR_BOUND; 690 647 count = get_bits(bs, f->sz); 691 648 break; 692 649 } ··· 711 658 for (i = 0; i < count; i++) { 712 659 if (son->attr & OPEN) { 713 660 BYTE_ALIGN(bs); 661 + if (nf_h323_error_boundary(bs, 2, 0)) 662 + return H323_ERROR_BOUND; 714 663 len = get_len(bs); 715 - CHECK_BOUND(bs, len); 664 + if (nf_h323_error_boundary(bs, len, 0)) 665 + return H323_ERROR_BOUND; 716 666 if (!base || !(son->attr & DECODE)) { 717 667 PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, 718 668 " ", son->name); ··· 766 710 base = (base && (f->attr & 
DECODE)) ? base + f->offset : NULL; 767 711 768 712 /* Decode the choice index number */ 713 + if (nf_h323_error_boundary(bs, 0, 1)) 714 + return H323_ERROR_BOUND; 769 715 if ((f->attr & EXT) && get_bit(bs)) { 770 716 ext = 1; 717 + if (nf_h323_error_boundary(bs, 0, 7)) 718 + return H323_ERROR_BOUND; 771 719 type = get_bits(bs, 7) + f->lb; 772 720 } else { 773 721 ext = 0; 722 + if (nf_h323_error_boundary(bs, 0, f->sz)) 723 + return H323_ERROR_BOUND; 774 724 type = get_bits(bs, f->sz); 775 725 if (type >= f->lb) 776 726 return H323_ERROR_RANGE; ··· 789 727 /* Check Range */ 790 728 if (type >= f->ub) { /* Newer version? */ 791 729 BYTE_ALIGN(bs); 730 + if (nf_h323_error_boundary(bs, 2, 0)) 731 + return H323_ERROR_BOUND; 792 732 len = get_len(bs); 793 - CHECK_BOUND(bs, len); 733 + if (nf_h323_error_boundary(bs, len, 0)) 734 + return H323_ERROR_BOUND; 794 735 bs->cur += len; 795 736 return H323_ERROR_NONE; 796 737 } ··· 807 742 808 743 if (ext || (son->attr & OPEN)) { 809 744 BYTE_ALIGN(bs); 745 + if (nf_h323_error_boundary(bs, len, 0)) 746 + return H323_ERROR_BOUND; 810 747 len = get_len(bs); 811 - CHECK_BOUND(bs, len); 748 + if (nf_h323_error_boundary(bs, len, 0)) 749 + return H323_ERROR_BOUND; 812 750 if (!base || !(son->attr & DECODE)) { 813 751 PRINT("%*.s%s\n", (level + 1) * TAB_SIZE, " ", 814 752 son->name);
+9 -4
net/netfilter/nf_conntrack_netlink.c
···
45 45 #include <net/netfilter/nf_conntrack_zones.h>
46 46 #include <net/netfilter/nf_conntrack_timestamp.h>
47 47 #include <net/netfilter/nf_conntrack_labels.h>
48 - #include <net/netfilter/nf_conntrack_seqadj.h>
49 48 #include <net/netfilter/nf_conntrack_synproxy.h>
50 49 #ifdef CONFIG_NF_NAT_NEEDED
51 50 #include <net/netfilter/nf_nat_core.h>
···
1565 1566 static int ctnetlink_change_timeout(struct nf_conn *ct,
1566 1567 const struct nlattr * const cda[])
1567 1568 {
1568 - u_int32_t timeout = ntohl(nla_get_be32(cda[CTA_TIMEOUT]));
1569 + u64 timeout = (u64)ntohl(nla_get_be32(cda[CTA_TIMEOUT])) * HZ;
1569 1570
1570 - ct->timeout = nfct_time_stamp + timeout * HZ;
1571 + if (timeout > INT_MAX)
1572 + timeout = INT_MAX;
1573 + ct->timeout = nfct_time_stamp + (u32)timeout;
1571 1574
1572 1575 if (test_bit(IPS_DYING_BIT, &ct->status))
1573 1576 return -ETIME;
···
1769 1768 int err = -EINVAL;
1770 1769 struct nf_conntrack_helper *helper;
1771 1770 struct nf_conn_tstamp *tstamp;
1771 + u64 timeout;
1772 1772
1773 1773 ct = nf_conntrack_alloc(net, zone, otuple, rtuple, GFP_ATOMIC);
1774 1774 if (IS_ERR(ct))
···
1778 1776 if (!cda[CTA_TIMEOUT])
1779 1777 goto err1;
1780 1778
1781 - ct->timeout = nfct_time_stamp + ntohl(nla_get_be32(cda[CTA_TIMEOUT])) * HZ;
1779 + timeout = (u64)ntohl(nla_get_be32(cda[CTA_TIMEOUT])) * HZ;
1780 + if (timeout > INT_MAX)
1781 + timeout = INT_MAX;
1782 + ct->timeout = (u32)timeout + nfct_time_stamp;
1782 1783
1783 1784 rcu_read_lock();
1784 1785 if (cda[CTA_HELP]) {
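The ctnetlink hunks above widen the seconds-to-jiffies conversion to 64 bits before clamping, so a large userspace-supplied CTA_TIMEOUT cannot overflow. A minimal user-space sketch of that pattern (the HZ value and the helper name are illustrative, not from the patch):

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define HZ 1000 /* illustrative tick rate; the real value is per-config */

/* Hypothetical helper mirroring the fix: widen the user-supplied
 * seconds value to 64 bits *before* multiplying by HZ, then clamp to
 * INT_MAX so the resulting jiffies offset cannot overflow when added
 * to the clock. */
static uint32_t clamp_timeout(uint32_t seconds)
{
	uint64_t timeout = (uint64_t)seconds * HZ;

	if (timeout > INT_MAX)
		timeout = INT_MAX;
	return (uint32_t)timeout;
}
```

Multiplying in 32 bits first, as the removed code did, would wrap before the clamp could catch it.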
+3
net/netfilter/nf_conntrack_proto_tcp.c
···
1039 1039 IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED &&
1040 1040 timeouts[new_state] > timeouts[TCP_CONNTRACK_UNACK])
1041 1041 timeout = timeouts[TCP_CONNTRACK_UNACK];
1042 + else if (ct->proto.tcp.last_win == 0 &&
1043 + timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS])
1044 + timeout = timeouts[TCP_CONNTRACK_RETRANS];
1042 1045 else
1043 1046 timeout = timeouts[new_state];
1044 1047 spin_unlock_bh(&ct->lock);
+7
net/netfilter/nf_tables_api.c
···
5847 5847 return 0;
5848 5848 }
5849 5849
5850 + static void __net_exit nf_tables_exit_net(struct net *net)
5851 + {
5852 + WARN_ON_ONCE(!list_empty(&net->nft.af_info));
5853 + WARN_ON_ONCE(!list_empty(&net->nft.commit_list));
5854 + }
5855 +
5850 5856 int __nft_release_basechain(struct nft_ctx *ctx)
5851 5857 {
5852 5858 struct nft_rule *rule, *nr;
···
5923 5917
5924 5918 static struct pernet_operations nf_tables_net_ops = {
5925 5919 .init = nf_tables_init_net,
5920 + .exit = nf_tables_exit_net,
5926 5921 };
5927 5922
5928 5923 static int __init nf_tables_module_init(void)
+2
net/netfilter/nft_exthdr.c
···
214 214 [NFTA_EXTHDR_OFFSET] = { .type = NLA_U32 },
215 215 [NFTA_EXTHDR_LEN] = { .type = NLA_U32 },
216 216 [NFTA_EXTHDR_FLAGS] = { .type = NLA_U32 },
217 + [NFTA_EXTHDR_OP] = { .type = NLA_U32 },
218 + [NFTA_EXTHDR_SREG] = { .type = NLA_U32 },
217 219 };
218 220
219 221 static int nft_exthdr_init(const struct nft_ctx *ctx,
+9
net/netfilter/x_tables.c
···
1729 1729 return 0;
1730 1730 }
1731 1731
1732 + static void __net_exit xt_net_exit(struct net *net)
1733 + {
1734 + int i;
1735 +
1736 + for (i = 0; i < NFPROTO_NUMPROTO; i++)
1737 + WARN_ON_ONCE(!list_empty(&net->xt.tables[i]));
1738 + }
1739 +
1732 1740 static struct pernet_operations xt_net_ops = {
1733 1741 .init = xt_net_init,
1742 + .exit = xt_net_exit,
1734 1743 };
1735 1744
1736 1745 static int __init xt_init(void)
+6
net/netfilter/xt_bpf.c
···
27 27 {
28 28 struct sock_fprog_kern program;
29 29
30 + if (len > XT_BPF_MAX_NUM_INSTR)
31 + return -EINVAL;
32 +
30 33 program.len = len;
31 34 program.filter = insns;
32 35
···
57 54 {
58 55 mm_segment_t oldfs = get_fs();
59 56 int retval, fd;
57 +
58 + if (strnlen(path, XT_BPF_PATH_MAX) == XT_BPF_PATH_MAX)
59 + return -EINVAL;
60 60
61 61 set_fs(KERNEL_DS);
62 62 fd = bpf_obj_get_user(path, 0);
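The xt_bpf hunks above reject unterminated pinned-program paths before they are used. A stand-alone sketch of the strnlen() idiom (the XT_BPF_PATH_MAX value here is assumed for illustration; the real limit lives in the uapi header):

```c
#include <assert.h>
#include <string.h>

#define XT_BPF_PATH_MAX 512 /* illustrative; real value is in the uapi header */

/* Sketch of the added validation: if strnlen() reaches the limit
 * without finding a NUL, the user-supplied path is unterminated and
 * must be rejected before anything dereferences it as a string. */
static int path_ok(const char *path)
{
	return strnlen(path, XT_BPF_PATH_MAX) < XT_BPF_PATH_MAX;
}

static int demo(void)
{
	char bad[XT_BPF_PATH_MAX];

	memset(bad, 'a', sizeof(bad)); /* no terminating NUL within the limit */
	return path_ok(bad);
}
```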
+7
net/netfilter/xt_osf.c
···
19 19 #include <linux/module.h>
20 20 #include <linux/kernel.h>
21 21
22 + #include <linux/capability.h>
22 23 #include <linux/if.h>
23 24 #include <linux/inetdevice.h>
24 25 #include <linux/ip.h>
···
71 70 struct xt_osf_finger *kf = NULL, *sf;
72 71 int err = 0;
73 72
73 + if (!capable(CAP_NET_ADMIN))
74 + return -EPERM;
75 +
74 76 if (!osf_attrs[OSF_ATTR_FINGER])
75 77 return -EINVAL;
76 78
···
118 114 struct xt_osf_user_finger *f;
119 115 struct xt_osf_finger *sf;
120 116 int err = -ENOENT;
117 +
118 + if (!capable(CAP_NET_ADMIN))
119 + return -EPERM;
121 120
122 121 if (!osf_attrs[OSF_ATTR_FINGER])
123 122 return -EINVAL;
+3
net/netlink/af_netlink.c
···
284 284 struct sock *sk = skb->sk;
285 285 int ret = -ENOMEM;
286 286
287 + if (!net_eq(dev_net(dev), sock_net(sk)))
288 + return 0;
289 +
287 290 dev_hold(dev);
288 291
289 292 if (is_vmalloc_addr(skb->head))
-1
net/sched/act_meta_mark.c
···
22 22 #include <net/pkt_sched.h>
23 23 #include <uapi/linux/tc_act/tc_ife.h>
24 24 #include <net/tc_act/tc_ife.h>
25 - #include <linux/rtnetlink.h>
26 25
27 26 static int skbmark_encode(struct sk_buff *skb, void *skbdata,
28 27 struct tcf_meta_info *e)
-1
net/sched/act_meta_skbtcindex.c
···
22 22 #include <net/pkt_sched.h>
23 23 #include <uapi/linux/tc_act/tc_ife.h>
24 24 #include <net/tc_act/tc_ife.h>
25 - #include <linux/rtnetlink.h>
26 25
27 26 static int skbtcindex_encode(struct sk_buff *skb, void *skbdata,
28 27 struct tcf_meta_info *e)
+2 -3
net/sched/cls_api.c
···
23 23 #include <linux/skbuff.h>
24 24 #include <linux/init.h>
25 25 #include <linux/kmod.h>
26 - #include <linux/err.h>
27 26 #include <linux/slab.h>
28 27 #include <net/net_namespace.h>
29 28 #include <net/sock.h>
···
344 345 /* Hold a refcnt for all chains, so that they don't disappear
345 346 * while we are iterating.
346 347 */
348 + if (!block)
349 + return;
347 350 list_for_each_entry(chain, &block->chain_list, list)
348 351 tcf_chain_hold(chain);
349 352
···
368 367 {
369 368 struct tcf_block_ext_info ei = {0, };
370 369
371 - if (!block)
372 - return;
373 370 tcf_block_put_ext(block, block->q, &ei);
374 371 }
375 372
-1
net/sched/cls_u32.c
···
45 45 #include <net/netlink.h>
46 46 #include <net/act_api.h>
47 47 #include <net/pkt_cls.h>
48 - #include <linux/netdevice.h>
49 48 #include <linux/idr.h>
50 49
51 50 struct tc_u_knode {
+2
net/sched/sch_api.c
···
795 795 tcm->tcm_info = refcount_read(&q->refcnt);
796 796 if (nla_put_string(skb, TCA_KIND, q->ops->id))
797 797 goto nla_put_failure;
798 + if (nla_put_u8(skb, TCA_HW_OFFLOAD, !!(q->flags & TCQ_F_OFFLOADED)))
799 + goto nla_put_failure;
798 800 if (q->ops->dump && q->ops->dump(q, skb) < 0)
799 801 goto nla_put_failure;
800 802
+6 -9
net/sched/sch_ingress.c
···
68 68 struct net_device *dev = qdisc_dev(sch);
69 69 int err;
70 70
71 + net_inc_ingress_queue();
72 +
71 73 mini_qdisc_pair_init(&q->miniqp, sch, &dev->miniq_ingress);
72 74
73 75 q->block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
···
80 78 if (err)
81 79 return err;
82 80
83 - net_inc_ingress_queue();
84 81 sch->flags |= TCQ_F_CPUSTATS;
85 82
86 83 return 0;
···
173 172 struct net_device *dev = qdisc_dev(sch);
174 173 int err;
175 174
175 + net_inc_ingress_queue();
176 + net_inc_egress_queue();
177 +
176 178 mini_qdisc_pair_init(&q->miniqp_ingress, sch, &dev->miniq_ingress);
177 179
178 180 q->ingress_block_info.binder_type = TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
···
194 190
195 191 err = tcf_block_get_ext(&q->egress_block, sch, &q->egress_block_info);
196 192 if (err)
197 - goto err_egress_block_get;
198 -
199 - net_inc_ingress_queue();
200 - net_inc_egress_queue();
193 + return err;
201 194
202 195 sch->flags |= TCQ_F_CPUSTATS;
203 196
204 197 return 0;
205 -
206 - err_egress_block_get:
207 - tcf_block_put_ext(q->ingress_block, sch, &q->ingress_block_info);
208 - return err;
209 198 }
210 199
211 200 static void clsact_destroy(struct Qdisc *sch)
+15 -16
net/sched/sch_red.c
···
157 157 .handle = sch->handle,
158 158 .parent = sch->parent,
159 159 };
160 + int err;
160 161
161 162 if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
162 163 return -EOPNOTSUPP;
···
172 171 opt.command = TC_RED_DESTROY;
173 172 }
174 173
175 - return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &opt);
174 + err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &opt);
175 +
176 + if (!err && enable)
177 + sch->flags |= TCQ_F_OFFLOADED;
178 + else
179 + sch->flags &= ~TCQ_F_OFFLOADED;
180 +
181 + return err;
176 182 }
177 183
178 184 static void red_destroy(struct Qdisc *sch)
···
282 274 return red_change(sch, opt);
283 275 }
284 276
285 - static int red_dump_offload(struct Qdisc *sch, struct tc_red_qopt *opt)
277 + static int red_dump_offload_stats(struct Qdisc *sch, struct tc_red_qopt *opt)
286 278 {
287 279 struct net_device *dev = qdisc_dev(sch);
288 280 struct tc_red_qopt_offload hw_stats = {
···
294 286 .stats.qstats = &sch->qstats,
295 287 },
296 288 };
297 - int err;
298 289
299 - opt->flags &= ~TC_RED_OFFLOADED;
300 - if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
290 + if (!(sch->flags & TCQ_F_OFFLOADED))
301 291 return 0;
302 292
303 - err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED,
304 - &hw_stats);
305 - if (err == -EOPNOTSUPP)
306 - return 0;
307 -
308 - if (!err)
309 - opt->flags |= TC_RED_OFFLOADED;
310 -
311 - return err;
293 + return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED,
294 + &hw_stats);
312 295 }
313 296
314 297 static int red_dump(struct Qdisc *sch, struct sk_buff *skb)
···
318 319 int err;
319 320
320 321 sch->qstats.backlog = q->qdisc->qstats.backlog;
321 - err = red_dump_offload(sch, &opt);
322 + err = red_dump_offload_stats(sch, &opt);
322 323 if (err)
323 324 goto nla_put_failure;
324 325
···
346 347 .marked = q->stats.prob_mark + q->stats.forced_mark,
347 348 };
348 349
349 - if (tc_can_offload(dev) && dev->netdev_ops->ndo_setup_tc) {
350 + if (sch->flags & TCQ_F_OFFLOADED) {
350 351 struct red_stats hw_stats = {0};
351 352 struct tc_red_qopt_offload hw_stats_request = {
352 353 .command = TC_RED_XSTATS,
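The sch_red hunks above replace per-dump driver queries with a TCQ_F_OFFLOADED flag recorded once at offload time, so later dump and stats paths just test the flag. A small sketch of that bookkeeping (the flag value and helper name here are illustrative, not the kernel's):

```c
#include <assert.h>

#define TCQ_F_OFFLOADED 0x200 /* illustrative bit value, not the kernel's */

/* Sketch of the new bookkeeping in red_offload(): remember whether the
 * qdisc is actually offloaded -- set the flag only when the driver call
 * succeeded and offload was being enabled, clear it otherwise. */
static unsigned int track_offload(unsigned int flags, int err, int enable)
{
	if (!err && enable)
		flags |= TCQ_F_OFFLOADED;
	else
		flags &= ~TCQ_F_OFFLOADED;
	return flags;
}
```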
+5 -1
net/sctp/socket.c
···
3924 3924 struct sctp_association *asoc;
3925 3925 int retval = -EINVAL;
3926 3926
3927 - if (optlen < sizeof(struct sctp_reset_streams))
3927 + if (optlen < sizeof(*params))
3928 3928 return -EINVAL;
3929 3929
3930 3930 params = memdup_user(optval, optlen);
3931 3931 if (IS_ERR(params))
3932 3932 return PTR_ERR(params);
3933 +
3934 + if (params->srs_number_streams * sizeof(__u16) >
3935 + optlen - sizeof(*params))
3936 + goto out;
3933 3937
3934 3938 asoc = sctp_id2assoc(sk, params->srs_assoc_id);
3935 3939 if (!asoc)
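The sctp hunk above validates that the variable-length stream list declared by srs_number_streams actually fits inside the optlen the caller passed. A user-space sketch of the check (the struct layout is illustrative, loosely modeled on the uapi struct sctp_reset_streams):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout; field names and sizes are for demonstration only. */
struct reset_streams {
	int assoc_id;
	uint16_t flags;
	uint16_t number_streams;
	uint16_t stream_list[]; /* number_streams entries follow */
};

/* Sketch of the added bounds check: the tail promised by
 * number_streams must fit inside optlen, and optlen must cover
 * the fixed header at all. */
static int streams_fit(uint16_t number_streams, size_t optlen)
{
	if (optlen < sizeof(struct reset_streams))
		return 0;
	return (size_t)number_streams * sizeof(uint16_t) <=
	       optlen - sizeof(struct reset_streams);
}
```

Subtracting the header size only after the minimum-length check avoids the unsigned underflow that would otherwise make the comparison pass.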
+1
net/sunrpc/auth_gss/gss_rpc_xdr.c
···
231 231 goto out_free_groups;
232 232 creds->cr_group_info->gid[i] = kgid;
233 233 }
234 + groups_sort(creds->cr_group_info);
234 235
235 236 return 0;
236 237 out_free_groups:
+1
net/sunrpc/auth_gss/svcauth_gss.c
···
481 481 goto out;
482 482 rsci.cred.cr_group_info->gid[i] = kgid;
483 483 }
484 + groups_sort(rsci.cred.cr_group_info);
484 485
485 486 /* mech name */
486 487 len = qword_get(&mesg, buf, mlen);
+2
net/sunrpc/svcauth_unix.c
···
520 520 ug.gi->gid[i] = kgid;
521 521 }
522 522
523 + groups_sort(ug.gi);
523 524 ugp = unix_gid_lookup(cd, uid);
524 525 if (ugp) {
525 526 struct cache_head *ch;
···
820 819 kgid_t kgid = make_kgid(&init_user_ns, svc_getnl(argv));
821 820 cred->cr_group_info->gid[i] = kgid;
822 821 }
822 + groups_sort(cred->cr_group_info);
823 823 if (svc_getu32(argv) != htonl(RPC_AUTH_NULL) || svc_getu32(argv) != 0) {
824 824 *authp = rpc_autherr_badverf;
825 825 return SVC_DENIED;
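The groups_sort() calls added across the sunrpc hunks exist because group membership is later tested with a binary search, which silently misses entries in an unsorted array. A stand-alone sketch using qsort/bsearch (helper names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the kernel's group list handling: membership checks do
 * a binary search, so the gid array must be sorted first -- the step
 * the added groups_sort() calls guarantee. */
static int cmp_gid(const void *a, const void *b)
{
	unsigned int x = *(const unsigned int *)a;
	unsigned int y = *(const unsigned int *)b;

	return (x > y) - (x < y);
}

static int gid_member(const unsigned int *gids, size_t n, unsigned int gid)
{
	return bsearch(&gid, gids, n, sizeof(*gids), cmp_gid) != NULL;
}

static int demo(void)
{
	unsigned int gids[] = { 50, 10, 30, 20 };

	/* the "groups_sort()" step; without it bsearch() may miss members */
	qsort(gids, 4, sizeof(*gids), cmp_gid);
	return gid_member(gids, 4, 30) && !gid_member(gids, 4, 40);
}
```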
+23 -13
net/sunrpc/xprt.c
···
1001 1001 {
1002 1002 struct rpc_rqst *req = task->tk_rqstp;
1003 1003 struct rpc_xprt *xprt = req->rq_xprt;
1004 + unsigned int connect_cookie;
1004 1005 int status, numreqs;
1005 1006
1006 1007 dprintk("RPC: %5u xprt_transmit(%u)\n", task->tk_pid, req->rq_slen);
···
1025 1024 } else if (!req->rq_bytes_sent)
1026 1025 return;
1027 1026
1027 + connect_cookie = xprt->connect_cookie;
1028 1028 req->rq_xtime = ktime_get();
1029 1029 status = xprt->ops->send_request(task);
1030 1030 trace_xprt_transmit(xprt, req->rq_xid, status);
···
1049 1047 xprt->stat.bklog_u += xprt->backlog.qlen;
1050 1048 xprt->stat.sending_u += xprt->sending.qlen;
1051 1049 xprt->stat.pending_u += xprt->pending.qlen;
1052 -
1053 - /* Don't race with disconnect */
1054 - if (!xprt_connected(xprt))
1055 - task->tk_status = -ENOTCONN;
1056 - else {
1057 - /*
1058 - * Sleep on the pending queue since
1059 - * we're expecting a reply.
1060 - */
1061 - if (!req->rq_reply_bytes_recvd && rpc_reply_expected(task))
1062 - rpc_sleep_on(&xprt->pending, task, xprt_timer);
1063 - req->rq_connect_cookie = xprt->connect_cookie;
1064 - }
1065 1050 spin_unlock_bh(&xprt->transport_lock);
1051 +
1052 + req->rq_connect_cookie = connect_cookie;
1053 + if (rpc_reply_expected(task) && !READ_ONCE(req->rq_reply_bytes_recvd)) {
1054 + /*
1055 + * Sleep on the pending queue if we're expecting a reply.
1056 + * The spinlock ensures atomicity between the test of
1057 + * req->rq_reply_bytes_recvd, and the call to rpc_sleep_on().
1058 + */
1059 + spin_lock(&xprt->recv_lock);
1060 + if (!req->rq_reply_bytes_recvd) {
1061 + rpc_sleep_on(&xprt->pending, task, xprt_timer);
1062 + /*
1063 + * Send an extra queue wakeup call if the
1064 + * connection was dropped in case the call to
1065 + * rpc_sleep_on() raced.
1066 + */
1067 + if (!xprt_connected(xprt))
1068 + xprt_wake_pending_tasks(xprt, -ENOTCONN);
1069 + }
1070 + spin_unlock(&xprt->recv_lock);
1071 + }
1066 1072 }
1067 1073
1068 1074 static void xprt_add_backlog(struct rpc_xprt *xprt, struct rpc_task *task)
+1 -5
net/sunrpc/xprtrdma/rpc_rdma.c
···
1408 1408 dprintk("RPC: %s: reply %p completes request %p (xid 0x%08x)\n",
1409 1409 __func__, rep, req, be32_to_cpu(rep->rr_xid));
1410 1410
1411 - if (list_empty(&req->rl_registered) &&
1412 - !test_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags))
1413 - rpcrdma_complete_rqst(rep);
1414 - else
1415 - queue_work(rpcrdma_receive_wq, &rep->rr_work);
1411 + queue_work_on(req->rl_cpu, rpcrdma_receive_wq, &rep->rr_work);
1416 1412 return;
1417 1413
1418 1414 out_badstatus:
+2
net/sunrpc/xprtrdma/transport.c
···
52 52 #include <linux/slab.h>
53 53 #include <linux/seq_file.h>
54 54 #include <linux/sunrpc/addr.h>
55 + #include <linux/smp.h>
55 56
56 57 #include "xprt_rdma.h"
57 58
···
657 656 task->tk_pid, __func__, rqst->rq_callsize,
658 657 rqst->rq_rcvsize, req);
659 658
659 + req->rl_cpu = smp_processor_id();
660 660 req->rl_connect_cookie = 0; /* our reserved value */
661 661 rpcrdma_set_xprtdata(rqst, req);
662 662 rqst->rq_buffer = req->rl_sendbuf->rg_base;
+1 -1
net/sunrpc/xprtrdma/verbs.c
···
83 83 struct workqueue_struct *recv_wq;
84 84
85 85 recv_wq = alloc_workqueue("xprtrdma_receive",
86 - WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_HIGHPRI,
86 + WQ_MEM_RECLAIM | WQ_HIGHPRI,
87 87 0);
88 88 if (!recv_wq)
89 89 return -ENOMEM;
+1
net/sunrpc/xprtrdma/xprt_rdma.h
···
342 342 struct rpcrdma_buffer;
343 343 struct rpcrdma_req {
344 344 struct list_head rl_list;
345 + int rl_cpu;
345 346 unsigned int rl_connect_cookie;
346 347 struct rpcrdma_buffer *rl_buffer;
347 348 struct rpcrdma_rep *rl_reply;
+1 -1
net/tipc/socket.c
···
1140 1140 __skb_dequeue(arrvq);
1141 1141 __skb_queue_tail(inputq, skb);
1142 1142 }
1143 - refcount_dec(&skb->users);
1143 + kfree_skb(skb);
1144 1144 spin_unlock_bh(&inputq->lock);
1145 1145 continue;
1146 1146 }
+38 -10
net/wireless/Makefile
···
25 25
26 26 $(obj)/shipped-certs.c: $(wildcard $(srctree)/$(src)/certs/*.x509)
27 27 @$(kecho) " GEN $@"
28 - @echo '#include "reg.h"' > $@
29 - @echo 'const u8 shipped_regdb_certs[] = {' >> $@
30 - @for f in $^ ; do hexdump -v -e '1/1 "0x%.2x," "\n"' < $$f >> $@ ; done
31 - @echo '};' >> $@
32 - @echo 'unsigned int shipped_regdb_certs_len = sizeof(shipped_regdb_certs);' >> $@
28 + @(set -e; \
29 + allf=""; \
30 + for f in $^ ; do \
31 + # similar to hexdump -v -e '1/1 "0x%.2x," "\n"' \
32 + thisf=$$(od -An -v -tx1 < $$f | \
33 + sed -e 's/ /\n/g' | \
34 + sed -e 's/^[0-9a-f]\+$$/\0/;t;d' | \
35 + sed -e 's/^/0x/;s/$$/,/'); \
36 + # file should not be empty - maybe command substitution failed? \
37 + test ! -z "$$thisf";\
38 + allf=$$allf$$thisf;\
39 + done; \
40 + ( \
41 + echo '#include "reg.h"'; \
42 + echo 'const u8 shipped_regdb_certs[] = {'; \
43 + echo "$$allf"; \
44 + echo '};'; \
45 + echo 'unsigned int shipped_regdb_certs_len = sizeof(shipped_regdb_certs);'; \
46 + ) >> $@)
33 47
34 48 $(obj)/extra-certs.c: $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR:"%"=%) \
35 49 $(wildcard $(CONFIG_CFG80211_EXTRA_REGDB_KEYDIR:"%"=%)/*.x509)
36 50 @$(kecho) " GEN $@"
37 - @echo '#include "reg.h"' > $@
38 - @echo 'const u8 extra_regdb_certs[] = {' >> $@
39 - @for f in $^ ; do test -f $$f && hexdump -v -e '1/1 "0x%.2x," "\n"' < $$f >> $@ || true ; done
40 - @echo '};' >> $@
41 - @echo 'unsigned int extra_regdb_certs_len = sizeof(extra_regdb_certs);' >> $@
51 + @(set -e; \
52 + allf=""; \
53 + for f in $^ ; do \
54 + # similar to hexdump -v -e '1/1 "0x%.2x," "\n"' \
55 + thisf=$$(od -An -v -tx1 < $$f | \
56 + sed -e 's/ /\n/g' | \
57 + sed -e 's/^[0-9a-f]\+$$/\0/;t;d' | \
58 + sed -e 's/^/0x/;s/$$/,/'); \
59 + # file should not be empty - maybe command substitution failed? \
60 + test ! -z "$$thisf";\
61 + allf=$$allf$$thisf;\
62 + done; \
63 + ( \
64 + echo '#include "reg.h"'; \
65 + echo 'const u8 extra_regdb_certs[] = {'; \
66 + echo "$$allf"; \
67 + echo '};'; \
68 + echo 'unsigned int extra_regdb_certs_len = sizeof(extra_regdb_certs);'; \
69 + ) >> $@)
-22
scripts/checkpatch.pl
···
6233 6233 }
6234 6234 }
6235 6235
6236 - # whine about ACCESS_ONCE
6237 - if ($^V && $^V ge 5.10.0 &&
6238 - $line =~ /\bACCESS_ONCE\s*$balanced_parens\s*(=(?!=))?\s*($FuncArg)?/) {
6239 - my $par = $1;
6240 - my $eq = $2;
6241 - my $fun = $3;
6242 - $par =~ s/^\(\s*(.*)\s*\)$/$1/;
6243 - if (defined($eq)) {
6244 - if (WARN("PREFER_WRITE_ONCE",
6245 - "Prefer WRITE_ONCE(<FOO>, <BAR>) over ACCESS_ONCE(<FOO>) = <BAR>\n" . $herecurr) &&
6246 - $fix) {
6247 - $fixed[$fixlinenr] =~ s/\bACCESS_ONCE\s*\(\s*\Q$par\E\s*\)\s*$eq\s*\Q$fun\E/WRITE_ONCE($par, $fun)/;
6248 - }
6249 - } else {
6250 - if (WARN("PREFER_READ_ONCE",
6251 - "Prefer READ_ONCE(<FOO>) over ACCESS_ONCE(<FOO>)\n" . $herecurr) &&
6252 - $fix) {
6253 - $fixed[$fixlinenr] =~ s/\bACCESS_ONCE\s*\(\s*\Q$par\E\s*\)/READ_ONCE($par)/;
6254 - }
6255 - }
6256 - }
6257 -
6258 6236 # check for mutex_trylock_recursive usage
6259 6237 if ($line =~ /mutex_trylock_recursive/) {
6260 6238 ERROR("LOCKING",
+4 -4
scripts/faddr2line
···
44 44 set -o errexit
45 45 set -o nounset
46 46
47 - READELF="${CROSS_COMPILE}readelf"
48 - ADDR2LINE="${CROSS_COMPILE}addr2line"
49 - SIZE="${CROSS_COMPILE}size"
50 - NM="${CROSS_COMPILE}nm"
47 + READELF="${CROSS_COMPILE:-}readelf"
48 + ADDR2LINE="${CROSS_COMPILE:-}addr2line"
49 + SIZE="${CROSS_COMPILE:-}size"
50 + NM="${CROSS_COMPILE:-}nm"
51 51
52 52 command -v awk >/dev/null 2>&1 || die "awk isn't installed"
53 53 command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed"
-1
security/keys/key.c
···
833 833
834 834 key_check(keyring);
835 835
836 - key_ref = ERR_PTR(-EPERM);
837 836 if (!(flags & KEY_ALLOC_BYPASS_RESTRICTION))
838 837 restrict_link = keyring->restrict_link;
839 838
+10 -14
security/keys/keyctl.c
···
1588 1588 * The caller must have Setattr permission to change keyring restrictions.
1589 1589 *
1590 1590 * The requested type name may be a NULL pointer to reject all attempts
1591 - * to link to the keyring. If _type is non-NULL, _restriction can be
1592 - * NULL or a pointer to a string describing the restriction. If _type is
1593 - * NULL, _restriction must also be NULL.
1591 + * to link to the keyring. In this case, _restriction must also be NULL.
1592 + * Otherwise, both _type and _restriction must be non-NULL.
1594 1593 *
1595 1594 * Returns 0 if successful.
1596 1595 */
···
1597 1598 const char __user *_restriction)
1598 1599 {
1599 1600 key_ref_t key_ref;
1600 - bool link_reject = !_type;
1601 1601 char type[32];
1602 1602 char *restriction = NULL;
1603 1603 long ret;
···
1605 1607 if (IS_ERR(key_ref))
1606 1608 return PTR_ERR(key_ref);
1607 1609
1610 + ret = -EINVAL;
1608 1611 if (_type) {
1612 + if (!_restriction)
1613 + goto error;
1614 +
1609 1615 ret = key_get_type_from_user(type, _type, sizeof(type));
1610 1616 if (ret < 0)
1611 1617 goto error;
1612 -	}
1613 -
1614 - if (_restriction) {
1615 - if (!_type) {
1616 - ret = -EINVAL;
1617 - goto error;
1618 - }
1619 1618
1620 1619 restriction = strndup_user(_restriction, PAGE_SIZE);
1621 1620 if (IS_ERR(restriction)) {
1622 1621 ret = PTR_ERR(restriction);
1623 1622 goto error;
1624 1623 }
1624 + } else {
1625 + if (_restriction)
1626 + goto error;
1625 1627 }
1626 1628
1627 - ret = keyring_restrict(key_ref, link_reject ? NULL : type, restriction);
1629 + ret = keyring_restrict(key_ref, _type ? type : NULL, restriction);
1628 1630 kfree(restriction);
1629 -
1630 1631 error:
1631 1632 key_ref_put(key_ref);
1632 -
1633 1633 return ret;
1634 1634 }
1635 1635
+37 -11
security/keys/request_key.c
···
251 251 * The keyring selected is returned with an extra reference upon it which the
252 252 * caller must release.
253 253 */
254 - static void construct_get_dest_keyring(struct key **_dest_keyring)
254 + static int construct_get_dest_keyring(struct key **_dest_keyring)
255 255 {
256 256 struct request_key_auth *rka;
257 257 const struct cred *cred = current_cred();
258 258 struct key *dest_keyring = *_dest_keyring, *authkey;
259 + int ret;
259 260
260 261 kenter("%p", dest_keyring);
261 262
···
265 264 /* the caller supplied one */
266 265 key_get(dest_keyring);
267 266 } else {
267 + bool do_perm_check = true;
268 +
268 269 /* use a default keyring; falling through the cases until we
269 270 * find one that we actually have */
270 271 switch (cred->jit_keyring) {
···
281 278 dest_keyring =
282 279 key_get(rka->dest_keyring);
283 280 up_read(&authkey->sem);
284 - if (dest_keyring)
281 + if (dest_keyring) {
282 + do_perm_check = false;
285 283 break;
284 + }
286 285 }
287 286
288 287 case KEY_REQKEY_DEFL_THREAD_KEYRING:
···
319 314 default:
320 315 BUG();
321 316 }
317 +
318 + /*
319 + * Require Write permission on the keyring. This is essential
320 + * because the default keyring may be the session keyring, and
321 + * joining a keyring only requires Search permission.
322 + *
323 + * However, this check is skipped for the "requestor keyring" so
324 + * that /sbin/request-key can itself use request_key() to add
325 + * keys to the original requestor's destination keyring.
326 + */
327 + if (dest_keyring && do_perm_check) {
328 + ret = key_permission(make_key_ref(dest_keyring, 1),
329 + KEY_NEED_WRITE);
330 + if (ret) {
331 + key_put(dest_keyring);
332 + return ret;
333 + }
334 + }
322 335 }
323 336
324 337 *_dest_keyring = dest_keyring;
325 338 kleave(" [dk %d]", key_serial(dest_keyring));
326 - return;
339 + return 0;
327 340 }
328 341
329 342 /*
···
467 444 if (ctx->index_key.type == &key_type_keyring)
468 445 return ERR_PTR(-EPERM);
469 446
470 - user = key_user_lookup(current_fsuid());
471 - if (!user)
472 - return ERR_PTR(-ENOMEM);
447 + ret = construct_get_dest_keyring(&dest_keyring);
448 + if (ret)
449 + goto error;
473 450
474 - construct_get_dest_keyring(&dest_keyring);
451 + user = key_user_lookup(current_fsuid());
452 + if (!user) {
453 + ret = -ENOMEM;
454 + goto error_put_dest_keyring;
455 + }
475 456
476 457 ret = construct_alloc_key(ctx, dest_keyring, flags, user, &key);
477 458 key_user_put(user);
···
490 463 } else if (ret == -EINPROGRESS) {
491 464 ret = 0;
492 465 } else {
493 - goto couldnt_alloc_key;
466 + goto error_put_dest_keyring;
494 467 }
495 468
496 469 key_put(dest_keyring);
···
500 473 construction_failed:
501 474 key_negate_and_link(key, key_negative_timeout, NULL, NULL);
502 475 key_put(key);
503 - couldnt_alloc_key:
476 + error_put_dest_keyring:
504 477 key_put(dest_keyring);
478 + error:
505 479 kleave(" = %d", ret);
506 480 return ERR_PTR(ret);
507 481 }
···
574 546 if (!IS_ERR(key_ref)) {
575 547 key = key_ref_to_ptr(key_ref);
576 548 if (dest_keyring) {
577 - construct_get_dest_keyring(&dest_keyring);
578 549 ret = key_link(dest_keyring, key);
579 - key_put(dest_keyring);
580 550 if (ret < 0) {
581 551 key_put(key);
582 552 key = ERR_PTR(ret);
+1
tools/arch/x86/include/asm/cpufeatures.h
···
266 266 /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
267 267 #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
268 268 #define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
269 + #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
269 270
270 271 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
271 272 #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
+9 -12
tools/include/linux/compiler.h
···
84 84
85 85 #define uninitialized_var(x) x = *(&(x))
86 86
87 - #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
88 -
89 87 #include <linux/types.h>
90 88
91 89 /*
···
133 135 /*
134 136 * Prevent the compiler from merging or refetching reads or writes. The
135 137 * compiler is also forbidden from reordering successive instances of
136 - * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
137 - * compiler is aware of some particular ordering. One way to make the
138 - * compiler aware of ordering is to put the two invocations of READ_ONCE,
139 - * WRITE_ONCE or ACCESS_ONCE() in different C statements.
138 + * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some
139 + * particular ordering. One way to make the compiler aware of ordering is to
140 + * put the two invocations of READ_ONCE or WRITE_ONCE in different C
141 + * statements.
140 142 *
141 - * In contrast to ACCESS_ONCE these two macros will also work on aggregate
142 - * data types like structs or unions. If the size of the accessed data
143 - * type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
144 - * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a
145 - * compile-time warning.
143 + * These two macros will also work on aggregate data types like structs or
144 + * unions. If the size of the accessed data type exceeds the word size of
145 + * the machine (e.g., 32 bits or 64 bits) READ_ONCE() and WRITE_ONCE() will
146 + * fall back to memcpy and print a compile-time warning.
146 147 *
147 148 * Their two major use cases are: (1) Mediating communication between
148 149 * process-level code and irq/NMI handlers, all running on the same CPU,
149 - * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
150 + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise
150 151 * mutilate accesses that either do not require ordering or that interact
151 152 * with an explicit memory barrier or atomic instruction that provides the
152 153 * required ordering.
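The hunk above retires ACCESS_ONCE in favor of READ_ONCE/WRITE_ONCE. A user-space sketch of the scalar idiom the comment describes (macro names suffixed _SKETCH to make clear these are not the kernel definitions, which additionally handle aggregate types via memcpy):

```c
#include <assert.h>

/* Casting through a volatile pointer forces the compiler to emit
 * exactly one access and forbids merging or refetching it -- the core
 * of what READ_ONCE/WRITE_ONCE guarantee for scalar types. */
#define READ_ONCE_SKETCH(x)     (*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE_SKETCH(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int shared_flag;

static int demo(void)
{
	WRITE_ONCE_SKETCH(shared_flag, 7);
	return READ_ONCE_SKETCH(shared_flag);
}
```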
+1
tools/include/linux/lockdep.h
···
48 48 #define printk(...) dprintf(STDOUT_FILENO, __VA_ARGS__)
49 49 #define pr_err(format, ...) fprintf (stderr, format, ## __VA_ARGS__)
50 50 #define pr_warn pr_err
51 + #define pr_cont pr_err
51 52
52 53 #define list_del_rcu list_del
53 54
+7
tools/include/uapi/asm/bpf_perf_event.h
···
1 + #if defined(__aarch64__)
2 + #include "../../arch/arm64/include/uapi/asm/bpf_perf_event.h"
3 + #elif defined(__s390__)
4 + #include "../../arch/s390/include/uapi/asm/bpf_perf_event.h"
5 + #else
6 + #include <uapi/asm-generic/bpf_perf_event.h>
7 + #endif
+2 -2
tools/include/uapi/linux/kvm.h
···
630 630
631 631 struct kvm_s390_irq_state {
632 632 __u64 buf;
633 - __u32 flags;
633 + __u32 flags; /* will stay unused for compatibility reasons */
634 634 __u32 len;
635 - __u32 reserved[4];
635 + __u32 reserved[4]; /* will stay unused for compatibility reasons */
636 636 };
637 637
638 638 /* for KVM_SET_GUEST_DEBUG */
+12 -3
tools/objtool/arch/x86/lib/x86-opcode-map.txt
···
607 607 fc: paddb Pq,Qq | vpaddb Vx,Hx,Wx (66),(v1)
608 608 fd: paddw Pq,Qq | vpaddw Vx,Hx,Wx (66),(v1)
609 609 fe: paddd Pq,Qq | vpaddd Vx,Hx,Wx (66),(v1)
610 - ff:
610 + ff: UD0
611 611 EndTable
612 612
613 613 Table: 3-byte opcode 1 (0x0f 0x38)
···
717 717 7e: vpermt2d/q Vx,Hx,Wx (66),(ev)
718 718 7f: vpermt2ps/d Vx,Hx,Wx (66),(ev)
719 719 80: INVEPT Gy,Mdq (66)
720 - 81: INVPID Gy,Mdq (66)
720 + 81: INVVPID Gy,Mdq (66)
721 721 82: INVPCID Gy,Mdq (66)
722 722 83: vpmultishiftqb Vx,Hx,Wx (66),(ev)
723 723 88: vexpandps/d Vpd,Wpd (66),(ev)
···
896 896
897 897 GrpTable: Grp3_1
898 898 0: TEST Eb,Ib
899 - 1:
899 + 1: TEST Eb,Ib
900 900 2: NOT Eb
901 901 3: NEG Eb
902 902 4: MUL AL,Eb
···
970 970 EndTable
971 971
972 972 GrpTable: Grp10
973 + # all are UD1
974 + 0: UD1
975 + 1: UD1
976 + 2: UD1
977 + 3: UD1
978 + 4: UD1
979 + 5: UD1
980 + 6: UD1
981 + 7: UD1
973 982 EndTable
974 983
975 984 # Grp11A and Grp11B are expressed as Grp11 in Intel SDM
+11 -2
tools/perf/util/intel-pt-decoder/x86-opcode-map.txt
··· 607 607 fc: paddb Pq,Qq | vpaddb Vx,Hx,Wx (66),(v1) 608 608 fd: paddw Pq,Qq | vpaddw Vx,Hx,Wx (66),(v1) 609 609 fe: paddd Pq,Qq | vpaddd Vx,Hx,Wx (66),(v1) 610 - ff: 610 + ff: UD0 611 611 EndTable 612 612 613 613 Table: 3-byte opcode 1 (0x0f 0x38) ··· 717 717 7e: vpermt2d/q Vx,Hx,Wx (66),(ev) 718 718 7f: vpermt2ps/d Vx,Hx,Wx (66),(ev) 719 719 80: INVEPT Gy,Mdq (66) 720 - 81: INVPID Gy,Mdq (66) 720 + 81: INVVPID Gy,Mdq (66) 721 721 82: INVPCID Gy,Mdq (66) 722 722 83: vpmultishiftqb Vx,Hx,Wx (66),(ev) 723 723 88: vexpandps/d Vpd,Wpd (66),(ev) ··· 970 970 EndTable 971 971 972 972 GrpTable: Grp10 973 + # all are UD1 974 + 0: UD1 975 + 1: UD1 976 + 2: UD1 977 + 3: UD1 978 + 4: UD1 979 + 5: UD1 980 + 6: UD1 981 + 7: UD1 973 982 EndTable 974 983 975 984 # Grp11A and Grp11B are expressed as Grp11 in Intel SDM
+1 -1
tools/perf/util/mmap.h
··· 70 70 static inline u64 perf_mmap__read_head(struct perf_mmap *mm) 71 71 { 72 72 struct perf_event_mmap_page *pc = mm->base; 73 - u64 head = ACCESS_ONCE(pc->data_head); 73 + u64 head = READ_ONCE(pc->data_head); 74 74 rmb(); 75 75 return head; 76 76 }
+1 -12
tools/testing/selftests/bpf/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - ifeq ($(srctree),) 4 - srctree := $(patsubst %/,%,$(dir $(CURDIR))) 5 - srctree := $(patsubst %/,%,$(dir $(srctree))) 6 - srctree := $(patsubst %/,%,$(dir $(srctree))) 7 - srctree := $(patsubst %/,%,$(dir $(srctree))) 8 - endif 9 - include $(srctree)/tools/scripts/Makefile.arch 10 - 11 - $(call detected_var,SRCARCH) 12 - 13 3 LIBDIR := ../../../lib 14 4 BPFDIR := $(LIBDIR)/bpf 15 5 APIDIR := ../../../include/uapi 16 - ASMDIR:= ../../../arch/$(ARCH)/include/uapi 17 6 GENDIR := ../../../../include/generated 18 7 GENHDR := $(GENDIR)/autoconf.h 19 8 ··· 10 21 GENFLAGS := -DHAVE_GENHDR 11 22 endif 12 23 13 - CFLAGS += -Wall -O2 -I$(APIDIR) -I$(ASMDIR) -I$(LIBDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include 24 + CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include 14 25 LDLIBS += -lcap -lelf 15 26 16 27 TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
+5 -5
tools/usb/usbip/libsrc/vhci_driver.c
··· 50 50 51 51 while (*c != '\0') { 52 52 int port, status, speed, devid; 53 - unsigned long socket; 53 + int sockfd; 54 54 char lbusid[SYSFS_BUS_ID_SIZE]; 55 55 struct usbip_imported_device *idev; 56 56 char hub[3]; 57 57 58 - ret = sscanf(c, "%2s %d %d %d %x %lx %31s\n", 58 + ret = sscanf(c, "%2s %d %d %d %x %u %31s\n", 59 59 hub, &port, &status, &speed, 60 - &devid, &socket, lbusid); 60 + &devid, &sockfd, lbusid); 61 61 62 62 if (ret < 5) { 63 63 dbg("sscanf failed: %d", ret); ··· 66 66 67 67 dbg("hub %s port %d status %d speed %d devid %x", 68 68 hub, port, status, speed, devid); 69 - dbg("socket %lx lbusid %s", socket, lbusid); 69 + dbg("sockfd %u lbusid %s", sockfd, lbusid); 70 70 71 71 /* if a device is connected, look at it */ 72 72 idev = &vhci_driver->idev[port]; ··· 106 106 return 0; 107 107 } 108 108 109 - #define MAX_STATUS_NAME 16 109 + #define MAX_STATUS_NAME 18 110 110 111 111 static int refresh_imported_device_list(void) 112 112 {
+23 -6
tools/virtio/ringtest/ptr_ring.c
··· 16 16 #define unlikely(x) (__builtin_expect(!!(x), 0)) 17 17 #define likely(x) (__builtin_expect(!!(x), 1)) 18 18 #define ALIGN(x, a) (((x) + (a) - 1) / (a) * (a)) 19 + #define SIZE_MAX (~(size_t)0) 20 + 19 21 typedef pthread_spinlock_t spinlock_t; 20 22 21 23 typedef int gfp_t; 22 - static void *kmalloc(unsigned size, gfp_t gfp) 23 - { 24 - return memalign(64, size); 25 - } 24 + #define __GFP_ZERO 0x1 26 25 27 - static void *kzalloc(unsigned size, gfp_t gfp) 26 + static void *kmalloc(unsigned size, gfp_t gfp) 28 27 { 29 28 void *p = memalign(64, size); 30 29 if (!p) 31 30 return p; 32 - memset(p, 0, size); 33 31 32 + if (gfp & __GFP_ZERO) 33 + memset(p, 0, size); 34 34 return p; 35 + } 36 + 37 + static inline void *kzalloc(unsigned size, gfp_t flags) 38 + { 39 + return kmalloc(size, flags | __GFP_ZERO); 40 + } 41 + 42 + static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags) 43 + { 44 + if (size != 0 && n > SIZE_MAX / size) 45 + return NULL; 46 + return kmalloc(n * size, flags); 47 + } 48 + 49 + static inline void *kcalloc(size_t n, size_t size, gfp_t flags) 50 + { 51 + return kmalloc_array(n, size, flags | __GFP_ZERO); 35 52 } 36 53 37 54 static void kfree(void *p)
+1 -1
tools/vm/slabinfo-gnuplot.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 3 3 # Sergey Senozhatsky, 2015 4 4 # sergey.senozhatsky.work@gmail.com
+4 -7
virt/kvm/arm/arch_timer.c
··· 479 479 480 480 vtimer_restore_state(vcpu); 481 481 482 - if (has_vhe()) 483 - disable_el1_phys_timer_access(); 484 - 485 482 /* Set the background timer for the physical timer emulation. */ 486 483 phys_timer_emulate(vcpu); 487 484 } ··· 506 509 507 510 if (unlikely(!timer->enabled)) 508 511 return; 509 - 510 - if (has_vhe()) 511 - enable_el1_phys_timer_access(); 512 512 513 513 vtimer_save_state(vcpu); 514 514 ··· 835 841 no_vgic: 836 842 preempt_disable(); 837 843 timer->enabled = 1; 838 - kvm_timer_vcpu_load_vgic(vcpu); 844 + if (!irqchip_in_kernel(vcpu->kvm)) 845 + kvm_timer_vcpu_load_user(vcpu); 846 + else 847 + kvm_timer_vcpu_load_vgic(vcpu); 839 848 preempt_enable(); 840 849 841 850 return 0;
+5 -2
virt/kvm/arm/arm.c
··· 188 188 kvm->vcpus[i] = NULL; 189 189 } 190 190 } 191 + atomic_set(&kvm->online_vcpus, 0); 191 192 } 192 193 193 194 int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) ··· 297 296 { 298 297 kvm_mmu_free_memory_caches(vcpu); 299 298 kvm_timer_vcpu_terminate(vcpu); 300 - kvm_vgic_vcpu_destroy(vcpu); 301 299 kvm_pmu_vcpu_destroy(vcpu); 302 300 kvm_vcpu_uninit(vcpu); 303 301 kmem_cache_free(kvm_vcpu_cache, vcpu); ··· 627 627 ret = kvm_handle_mmio_return(vcpu, vcpu->run); 628 628 if (ret) 629 629 return ret; 630 + if (kvm_arm_handle_step_debug(vcpu, vcpu->run)) 631 + return 0; 632 + 630 633 } 631 634 632 635 if (run->immediate_exit) ··· 1505 1502 bool in_hyp_mode; 1506 1503 1507 1504 if (!is_hyp_mode_available()) { 1508 - kvm_err("HYP mode not available\n"); 1505 + kvm_info("HYP mode not available\n"); 1509 1506 return -ENODEV; 1510 1507 } 1511 1508
+20 -28
virt/kvm/arm/hyp/timer-sr.c
··· 27 27 write_sysreg(cntvoff, cntvoff_el2); 28 28 } 29 29 30 - void __hyp_text enable_el1_phys_timer_access(void) 31 - { 32 - u64 val; 33 - 34 - /* Allow physical timer/counter access for the host */ 35 - val = read_sysreg(cnthctl_el2); 36 - val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; 37 - write_sysreg(val, cnthctl_el2); 38 - } 39 - 40 - void __hyp_text disable_el1_phys_timer_access(void) 41 - { 42 - u64 val; 43 - 44 - /* 45 - * Disallow physical timer access for the guest 46 - * Physical counter access is allowed 47 - */ 48 - val = read_sysreg(cnthctl_el2); 49 - val &= ~CNTHCTL_EL1PCEN; 50 - val |= CNTHCTL_EL1PCTEN; 51 - write_sysreg(val, cnthctl_el2); 52 - } 53 - 54 30 void __hyp_text __timer_disable_traps(struct kvm_vcpu *vcpu) 55 31 { 56 32 /* 57 33 * We don't need to do this for VHE since the host kernel runs in EL2 58 34 * with HCR_EL2.TGE ==1, which makes those bits have no impact. 59 35 */ 60 - if (!has_vhe()) 61 - enable_el1_phys_timer_access(); 36 + if (!has_vhe()) { 37 + u64 val; 38 + 39 + /* Allow physical timer/counter access for the host */ 40 + val = read_sysreg(cnthctl_el2); 41 + val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN; 42 + write_sysreg(val, cnthctl_el2); 43 + } 62 44 } 63 45 64 46 void __hyp_text __timer_enable_traps(struct kvm_vcpu *vcpu) 65 47 { 66 - if (!has_vhe()) 67 - disable_el1_phys_timer_access(); 48 + if (!has_vhe()) { 49 + u64 val; 50 + 51 + /* 52 + * Disallow physical timer access for the guest 53 + * Physical counter access is allowed 54 + */ 55 + val = read_sysreg(cnthctl_el2); 56 + val &= ~CNTHCTL_EL1PCEN; 57 + val |= CNTHCTL_EL1PCTEN; 58 + write_sysreg(val, cnthctl_el2); 59 + } 68 60 }
-4
virt/kvm/arm/hyp/vgic-v2-sr.c
··· 34 34 else 35 35 elrsr1 = 0; 36 36 37 - #ifdef CONFIG_CPU_BIG_ENDIAN 38 - cpu_if->vgic_elrsr = ((u64)elrsr0 << 32) | elrsr1; 39 - #else 40 37 cpu_if->vgic_elrsr = ((u64)elrsr1 << 32) | elrsr0; 41 - #endif 42 38 } 43 39 44 40 static void __hyp_text save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
+1 -2
virt/kvm/arm/vgic/vgic-irqfd.c
··· 112 112 u32 nr = dist->nr_spis; 113 113 int i, ret; 114 114 115 - entries = kcalloc(nr, sizeof(struct kvm_kernel_irq_routing_entry), 116 - GFP_KERNEL); 115 + entries = kcalloc(nr, sizeof(*entries), GFP_KERNEL); 117 116 if (!entries) 118 117 return -ENOMEM; 119 118
+3 -1
virt/kvm/arm/vgic/vgic-its.c
··· 421 421 u32 *intids; 422 422 int nr_irqs, i; 423 423 unsigned long flags; 424 + u8 pendmask; 424 425 425 426 nr_irqs = vgic_copy_lpi_list(vcpu, &intids); 426 427 if (nr_irqs < 0) ··· 429 428 430 429 for (i = 0; i < nr_irqs; i++) { 431 430 int byte_offset, bit_nr; 432 - u8 pendmask; 433 431 434 432 byte_offset = intids[i] / BITS_PER_BYTE; 435 433 bit_nr = intids[i] % BITS_PER_BYTE; ··· 821 821 return E_ITS_MAPC_COLLECTION_OOR; 822 822 823 823 collection = kzalloc(sizeof(*collection), GFP_KERNEL); 824 + if (!collection) 825 + return -ENOMEM; 824 826 825 827 collection->collection_id = coll_id; 826 828 collection->target_addr = COLLECTION_NOT_MAPPED;
+1 -1
virt/kvm/arm/vgic/vgic-v3.c
··· 327 327 int last_byte_offset = -1; 328 328 struct vgic_irq *irq; 329 329 int ret; 330 + u8 val; 330 331 331 332 list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) { 332 333 int byte_offset, bit_nr; 333 334 struct kvm_vcpu *vcpu; 334 335 gpa_t pendbase, ptr; 335 336 bool stored; 336 - u8 val; 337 337 338 338 vcpu = irq->target_vcpu; 339 339 if (!vcpu)
+4 -2
virt/kvm/arm/vgic/vgic-v4.c
··· 337 337 goto out; 338 338 339 339 WARN_ON(!(irq->hw && irq->host_irq == virq)); 340 - irq->hw = false; 341 - ret = its_unmap_vlpi(virq); 340 + if (irq->hw) { 341 + irq->hw = false; 342 + ret = its_unmap_vlpi(virq); 343 + } 342 344 343 345 out: 344 346 mutex_unlock(&its->its_lock);
+5 -3
virt/kvm/arm/vgic/vgic.c
··· 492 492 int kvm_vgic_set_owner(struct kvm_vcpu *vcpu, unsigned int intid, void *owner) 493 493 { 494 494 struct vgic_irq *irq; 495 + unsigned long flags; 495 496 int ret = 0; 496 497 497 498 if (!vgic_initialized(vcpu->kvm)) ··· 503 502 return -EINVAL; 504 503 505 504 irq = vgic_get_irq(vcpu->kvm, vcpu, intid); 506 - spin_lock(&irq->irq_lock); 505 + spin_lock_irqsave(&irq->irq_lock, flags); 507 506 if (irq->owner && irq->owner != owner) 508 507 ret = -EEXIST; 509 508 else 510 509 irq->owner = owner; 511 - spin_unlock(&irq->irq_lock); 510 + spin_unlock_irqrestore(&irq->irq_lock, flags); 512 511 513 512 return ret; 514 513 } ··· 824 823 825 824 bool kvm_vgic_map_is_active(struct kvm_vcpu *vcpu, unsigned int vintid) 826 825 { 827 - struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, vintid); 826 + struct vgic_irq *irq; 828 827 bool map_is_active; 829 828 unsigned long flags; 830 829 831 830 if (!vgic_initialized(vcpu->kvm)) 832 831 return false; 833 832 833 + irq = vgic_get_irq(vcpu->kvm, vcpu, vintid); 834 834 spin_lock_irqsave(&irq->irq_lock, flags); 835 835 map_is_active = irq->hw && irq->active; 836 836 spin_unlock_irqrestore(&irq->irq_lock, flags);
+8
virt/kvm/kvm_main.c
··· 135 135 static unsigned long long kvm_createvm_count; 136 136 static unsigned long long kvm_active_vms; 137 137 138 + __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, 139 + unsigned long start, unsigned long end) 140 + { 141 + } 142 + 138 143 bool kvm_is_reserved_pfn(kvm_pfn_t pfn) 139 144 { 140 145 if (pfn_valid(pfn)) ··· 365 360 kvm_flush_remote_tlbs(kvm); 366 361 367 362 spin_unlock(&kvm->mmu_lock); 363 + 364 + kvm_arch_mmu_notifier_invalidate_range(kvm, start, end); 365 + 368 366 srcu_read_unlock(&kvm->srcu, idx); 369 367 } 370 368