Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge drm/drm-next into drm-intel-next-queued

To facilitate merging topic/hdr-formats from Maarten.

Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

+3356 -2182
+9 -11
CREDITS
···
 
 N: Helge Deller
 E: deller@gmx.de
-E: hdeller@redhat.de
-D: PA-RISC Linux hacker, LASI-, ASP-, WAX-, LCD/LED-driver
-S: Schimmelsrain 1
-S: D-69231 Rauenberg
+W: http://www.parisc-linux.org/
+D: PA-RISC Linux architecture maintainer
+D: LASI-, ASP-, WAX-, LCD/LED-driver
 S: Germany
 
 N: Jean Delvare
···
 S: South Africa
 
 N: Grant Grundler
-E: grundler@parisc-linux.org
+E: grantgrundler@gmail.com
 W: http://obmouse.sourceforge.net/
 W: http://www.parisc-linux.org/
 D: obmouse - rewrote Olivier Florent's Omnibook 600 "pop-up" mouse driver
···
 S: USA
 
 N: Kyle McMartin
-E: kyle@parisc-linux.org
+E: kyle@mcmartin.ca
 D: Linux/PARISC hacker
 D: AD1889 sound driver
 S: Ottawa, Canada
···
 S: Cupertino, CA 95014
 S: USA
 
-N: Thibaut Varene
-E: T-Bone@parisc-linux.org
-W: http://www.parisc-linux.org/~varenet/
-P: 1024D/B7D2F063 E67C 0D43 A75E 12A5 BB1C FA2F 1E32 C3DA B7D2 F063
+N: Thibaut Varène
+E: hacks+kernel@slashdirt.org
+W: http://hacks.slashdirt.org/
 D: PA-RISC port minion, PDC and GSCPS2 drivers, debuglocks and other bits
 D: Some ARM at91rm9200 bits, S1D13XXX FB driver, random patches here and there
 D: AD1889 sound driver
-S: Paris, France
+S: France
 
 N: Heikki Vatiainen
 E: hessu@cs.tut.fi
+16 -16
Documentation/admin-guide/README.rst
···
 .. _readme:
 
-Linux kernel release 4.x <http://kernel.org/>
+Linux kernel release 5.x <http://kernel.org/>
 =============================================
 
-These are the release notes for Linux version 4. Read them carefully,
+These are the release notes for Linux version 5. Read them carefully,
 as they tell you what this is all about, explain how to install the
 kernel, and what to do if something goes wrong.
···
   directory where you have permissions (e.g. your home directory) and
   unpack it::
 
-     xz -cd linux-4.X.tar.xz | tar xvf -
+     xz -cd linux-5.x.tar.xz | tar xvf -
 
   Replace "X" with the version number of the latest kernel.
···
   files. They should match the library, and not get messed up by
   whatever the kernel-du-jour happens to be.
 
- - You can also upgrade between 4.x releases by patching. Patches are
+ - You can also upgrade between 5.x releases by patching. Patches are
   distributed in the xz format. To install by patching, get all the
   newer patch files, enter the top level directory of the kernel source
-  (linux-4.X) and execute::
+  (linux-5.x) and execute::
 
-     xz -cd ../patch-4.x.xz | patch -p1
+     xz -cd ../patch-5.x.xz | patch -p1
 
-  Replace "x" for all versions bigger than the version "X" of your current
+  Replace "x" for all versions bigger than the version "x" of your current
   source tree, **in_order**, and you should be ok. You may want to remove
   the backup files (some-file-name~ or some-file-name.orig), and make sure
   that there are no failed patches (some-file-name# or some-file-name.rej).
   If there are, either you or I have made a mistake.
 
-  Unlike patches for the 4.x kernels, patches for the 4.x.y kernels
+  Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
   (also known as the -stable kernels) are not incremental but instead apply
-  directly to the base 4.x kernel. For example, if your base kernel is 4.0
-  and you want to apply the 4.0.3 patch, you must not first apply the 4.0.1
-  and 4.0.2 patches. Similarly, if you are running kernel version 4.0.2 and
-  want to jump to 4.0.3, you must first reverse the 4.0.2 patch (that is,
-  patch -R) **before** applying the 4.0.3 patch. You can read more on this in
+  directly to the base 5.x kernel. For example, if your base kernel is 5.0
+  and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
+  and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
+  want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
+  patch -R) **before** applying the 5.0.3 patch. You can read more on this in
   :ref:`Documentation/process/applying-patches.rst <applying_patches>`.
 
   Alternatively, the script patch-kernel can be used to automate this
···
 Software requirements
 ---------------------
 
-Compiling and running the 4.x kernels requires up-to-date
+Compiling and running the 5.x kernels requires up-to-date
 versions of various software packages. Consult
 :ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
 required and how to get updates for these packages. Beware that using
···
   place for the output files (including .config).
   Example::
 
-    kernel source code: /usr/src/linux-4.X
+    kernel source code: /usr/src/linux-5.x
     build directory:    /home/name/build/kernel
 
   To configure and build the kernel, use::
 
-    cd /usr/src/linux-4.X
+    cd /usr/src/linux-5.x
     make O=/home/name/build/kernel menuconfig
     make O=/home/name/build/kernel
     sudo make O=/home/name/build/kernel modules_install install
+3 -7
Documentation/networking/dsa/dsa.txt
···
 function that the driver has to call for each VLAN the given port is a member
 of. A switchdev object is used to carry the VID and bridge flags.
 
-- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
-  installation of a Forwarding Database entry. If the operation is not
-  supported, this function should return -EOPNOTSUPP to inform the bridge code
-  to fallback to a software implementation. No hardware setup must be done in
-  this function. See port_fdb_add for this and details.
-
 - port_fdb_add: bridge layer function invoked when the bridge wants to install a
   Forwarding Database entry, the switch hardware should be programmed with the
   specified address in the specified VLAN Id in the forwarding database
-  associated with this VLAN ID
+  associated with this VLAN ID. If the operation is not supported, this
+  function should return -EOPNOTSUPP to inform the bridge code to fallback to
+  a software implementation.
 
 Note: VLAN ID 0 corresponds to the port private database, which, in the context
 of DSA, would be the its port-based VLAN, used by the associated bridge device.
+1 -1
Documentation/networking/msg_zerocopy.rst
···
 =====
 
 The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
-The feature is currently implemented for TCP sockets.
+The feature is currently implemented for TCP and UDP sockets.
 
 
 Opportunity and Caveats
+5 -5
Documentation/networking/switchdev.txt
···
 Switch ID
 ^^^^^^^^^
 
-The switchdev driver must implement the switchdev op switchdev_port_attr_get
-for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
-physical ID for each port of a switch. The ID must be unique between switches
-on the same system. The ID does not need to be unique between switches on
-different systems.
+The switchdev driver must implement the net_device operation
+ndo_get_port_parent_id for each port netdev, returning the same physical ID for
+each port of a switch. The ID must be unique between switches on the same
+system. The ID does not need to be unique between switches on different
+systems.
 
 The switch ID is used to locate ports on a switch and to know if aggregated
 ports belong to the same switch.
+61 -56
Documentation/process/applying-patches.rst
···
 generate a patch representing the differences between two patches and then
 apply the result.
 
-This will let you move from something like 4.7.2 to 4.7.3 in a single
+This will let you move from something like 5.7.2 to 5.7.3 in a single
 step. The -z flag to interdiff will even let you feed it patches in gzip or
 bzip2 compressed form directly without the use of zcat or bzcat or manual
 decompression.
 
-Here's how you'd go from 4.7.2 to 4.7.3 in a single step::
+Here's how you'd go from 5.7.2 to 5.7.3 in a single step::
 
-    interdiff -z ../patch-4.7.2.gz ../patch-4.7.3.gz | patch -p1
+    interdiff -z ../patch-5.7.2.gz ../patch-5.7.3.gz | patch -p1
 
 Although interdiff may save you a step or two you are generally advised to
 do the additional steps since interdiff can get things wrong in some cases.
···
 Most recent patches are linked from the front page, but they also have
 specific homes.
 
-The 4.x.y (-stable) and 4.x patches live at
+The 5.x.y (-stable) and 5.x patches live at
 
-    https://www.kernel.org/pub/linux/kernel/v4.x/
+    https://www.kernel.org/pub/linux/kernel/v5.x/
 
-The -rc patches live at
+The -rc patches are not stored on the webserver but are generated on
+demand from git tags such as
 
-    https://www.kernel.org/pub/linux/kernel/v4.x/testing/
+    https://git.kernel.org/torvalds/p/v5.1-rc1/v5.0
+
+The stable -rc patches live at
+
+    https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/
 
 
-The 4.x kernels
+The 5.x kernels
 ===============
 
 These are the base stable releases released by Linus. The highest numbered
 release is the most recent.
 
 If regressions or other serious flaws are found, then a -stable fix patch
-will be released (see below) on top of this base. Once a new 4.x base
+will be released (see below) on top of this base. Once a new 5.x base
 kernel is released, a patch is made available that is a delta between the
-previous 4.x kernel and the new one.
+previous 5.x kernel and the new one.
 
-To apply a patch moving from 4.6 to 4.7, you'd do the following (note
-that such patches do **NOT** apply on top of 4.x.y kernels but on top of the
-base 4.x kernel -- if you need to move from 4.x.y to 4.x+1 you need to
-first revert the 4.x.y patch).
+To apply a patch moving from 5.6 to 5.7, you'd do the following (note
+that such patches do **NOT** apply on top of 5.x.y kernels but on top of the
+base 5.x kernel -- if you need to move from 5.x.y to 5.x+1 you need to
+first revert the 5.x.y patch).
 
 Here are some examples::
 
-    # moving from 4.6 to 4.7
+    # moving from 5.6 to 5.7
 
-    $ cd ~/linux-4.6                # change to kernel source dir
-    $ patch -p1 < ../patch-4.7      # apply the 4.7 patch
+    $ cd ~/linux-5.6                # change to kernel source dir
+    $ patch -p1 < ../patch-5.7      # apply the 5.7 patch
     $ cd ..
-    $ mv linux-4.6 linux-4.7        # rename source dir
+    $ mv linux-5.6 linux-5.7        # rename source dir
 
-    # moving from 4.6.1 to 4.7
+    # moving from 5.6.1 to 5.7
 
-    $ cd ~/linux-4.6.1              # change to kernel source dir
-    $ patch -p1 -R < ../patch-4.6.1 # revert the 4.6.1 patch
-                                    # source dir is now 4.6
-    $ patch -p1 < ../patch-4.7      # apply new 4.7 patch
+    $ cd ~/linux-5.6.1              # change to kernel source dir
+    $ patch -p1 -R < ../patch-5.6.1 # revert the 5.6.1 patch
+                                    # source dir is now 5.6
+    $ patch -p1 < ../patch-5.7      # apply new 5.7 patch
     $ cd ..
-    $ mv linux-4.6.1 linux-4.7      # rename source dir
+    $ mv linux-5.6.1 linux-5.7      # rename source dir
 
 
-The 4.x.y kernels
+The 5.x.y kernels
 =================
 
 Kernels with 3-digit versions are -stable kernels. They contain small(ish)
 critical fixes for security problems or significant regressions discovered
-in a given 4.x kernel.
+in a given 5.x kernel.
 
 This is the recommended branch for users who want the most recent stable
 kernel and are not interested in helping test development/experimental
 versions.
 
-If no 4.x.y kernel is available, then the highest numbered 4.x kernel is
+If no 5.x.y kernel is available, then the highest numbered 5.x kernel is
 the current stable kernel.
 
 .. note::
···
 The -stable team usually do make incremental patches available as well
 as patches against the latest mainline release, but I only cover the
 non-incremental ones below. The incremental ones can be found at
-https://www.kernel.org/pub/linux/kernel/v4.x/incr/
+https://www.kernel.org/pub/linux/kernel/v5.x/incr/
 
-These patches are not incremental, meaning that for example the 4.7.3
-patch does not apply on top of the 4.7.2 kernel source, but rather on top
-of the base 4.7 kernel source.
+These patches are not incremental, meaning that for example the 5.7.3
+patch does not apply on top of the 5.7.2 kernel source, but rather on top
+of the base 5.7 kernel source.
 
-So, in order to apply the 4.7.3 patch to your existing 4.7.2 kernel
-source you have to first back out the 4.7.2 patch (so you are left with a
-base 4.7 kernel source) and then apply the new 4.7.3 patch.
+So, in order to apply the 5.7.3 patch to your existing 5.7.2 kernel
+source you have to first back out the 5.7.2 patch (so you are left with a
+base 5.7 kernel source) and then apply the new 5.7.3 patch.
 
 Here's a small example::
 
-    $ cd ~/linux-4.7.2                   # change to the kernel source dir
-    $ patch -p1 -R < ../patch-4.7.2      # revert the 4.7.2 patch
-    $ patch -p1 < ../patch-4.7.3         # apply the new 4.7.3 patch
+    $ cd ~/linux-5.7.2                   # change to the kernel source dir
+    $ patch -p1 -R < ../patch-5.7.2      # revert the 5.7.2 patch
+    $ patch -p1 < ../patch-5.7.3         # apply the new 5.7.3 patch
     $ cd ..
-    $ mv linux-4.7.2 linux-4.7.3         # rename the kernel source dir
+    $ mv linux-5.7.2 linux-5.7.3         # rename the kernel source dir
 
 The -rc kernels
 ===============
···
 development kernels but do not want to run some of the really experimental
 stuff (such people should see the sections about -next and -mm kernels below).
 
-The -rc patches are not incremental, they apply to a base 4.x kernel, just
-like the 4.x.y patches described above. The kernel version before the -rcN
+The -rc patches are not incremental, they apply to a base 5.x kernel, just
+like the 5.x.y patches described above. The kernel version before the -rcN
 suffix denotes the version of the kernel that this -rc kernel will eventually
 turn into.
 
-So, 4.8-rc5 means that this is the fifth release candidate for the 4.8
-kernel and the patch should be applied on top of the 4.7 kernel source.
+So, 5.8-rc5 means that this is the fifth release candidate for the 5.8
+kernel and the patch should be applied on top of the 5.7 kernel source.
 
 Here are 3 examples of how to apply these patches::
 
-    # first an example of moving from 4.7 to 4.8-rc3
+    # first an example of moving from 5.7 to 5.8-rc3
 
-    $ cd ~/linux-4.7                       # change to the 4.7 source dir
-    $ patch -p1 < ../patch-4.8-rc3         # apply the 4.8-rc3 patch
+    $ cd ~/linux-5.7                       # change to the 5.7 source dir
+    $ patch -p1 < ../patch-5.8-rc3         # apply the 5.8-rc3 patch
     $ cd ..
-    $ mv linux-4.7 linux-4.8-rc3           # rename the source dir
+    $ mv linux-5.7 linux-5.8-rc3           # rename the source dir
 
-    # now let's move from 4.8-rc3 to 4.8-rc5
+    # now let's move from 5.8-rc3 to 5.8-rc5
 
-    $ cd ~/linux-4.8-rc3                   # change to the 4.8-rc3 dir
-    $ patch -p1 -R < ../patch-4.8-rc3      # revert the 4.8-rc3 patch
-    $ patch -p1 < ../patch-4.8-rc5         # apply the new 4.8-rc5 patch
+    $ cd ~/linux-5.8-rc3                   # change to the 5.8-rc3 dir
+    $ patch -p1 -R < ../patch-5.8-rc3      # revert the 5.8-rc3 patch
+    $ patch -p1 < ../patch-5.8-rc5         # apply the new 5.8-rc5 patch
     $ cd ..
-    $ mv linux-4.8-rc3 linux-4.8-rc5       # rename the source dir
+    $ mv linux-5.8-rc3 linux-5.8-rc5       # rename the source dir
 
-    # finally let's try and move from 4.7.3 to 4.8-rc5
+    # finally let's try and move from 5.7.3 to 5.8-rc5
 
-    $ cd ~/linux-4.7.3                     # change to the kernel source dir
-    $ patch -p1 -R < ../patch-4.7.3        # revert the 4.7.3 patch
-    $ patch -p1 < ../patch-4.8-rc5         # apply new 4.8-rc5 patch
+    $ cd ~/linux-5.7.3                     # change to the kernel source dir
+    $ patch -p1 -R < ../patch-5.7.3        # revert the 5.7.3 patch
+    $ patch -p1 < ../patch-5.8-rc5         # apply new 5.8-rc5 patch
     $ cd ..
-    $ mv linux-4.7.3 linux-4.8-rc5         # rename the kernel source dir
+    $ mv linux-5.7.3 linux-5.8-rc5         # rename the kernel source dir
 
 
 The -mm patches and the linux-next tree
+1 -1
Documentation/translations/it_IT/admin-guide/README.rst
···
 
 .. _it_readme:
 
-Rilascio del kernel Linux 4.x <http://kernel.org/>
+Rilascio del kernel Linux 5.x <http://kernel.org/>
 ===================================================
 
 .. warning::
+16 -6
MAINTAINERS
···
 F: include/uapi/linux/wmi.h
 
 AD1889 ALSA SOUND DRIVER
-M: Thibaut Varene <T-Bone@parisc-linux.org>
-W: http://wiki.parisc-linux.org/AD1889
+W: https://parisc.wiki.kernel.org/index.php/AD1889
 L: linux-parisc@vger.kernel.org
 S: Maintained
 F: sound/pci/ad1889.*
···
 R: Song Liu <songliubraving@fb.com>
 R: Yonghong Song <yhs@fb.com>
 L: netdev@vger.kernel.org
-L: linux-kernel@vger.kernel.org
+L: bpf@vger.kernel.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
 Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
···
 BPF JIT for ARM
 M: Shubham Bansal <illusionist.neo@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/arm/net/
···
 M: Alexei Starovoitov <ast@kernel.org>
 M: Zi Shen Lim <zlim.lnx@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: arch/arm64/net/
 
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M: Paul Burton <paul.burton@mips.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/mips/net/
 
 BPF JIT for NFP NICs
 M: Jakub Kicinski <jakub.kicinski@netronome.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/netronome/nfp/bpf/
···
 M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M: Sandipan Das <sandipan@linux.ibm.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/powerpc/net/
···
 M: Martin Schwidefsky <schwidefsky@de.ibm.com>
 M: Heiko Carstens <heiko.carstens@de.ibm.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/s390/net/
 X: arch/s390/net/pnet.c
···
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M: David S. Miller <davem@davemloft.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/sparc/net/
 
 BPF JIT for X86 32-BIT
 M: Wang YanQing <udknight@gmail.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: arch/x86/net/bpf_jit_comp32.c
···
 M: Alexei Starovoitov <ast@kernel.org>
 M: Daniel Borkmann <daniel@iogearbox.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: arch/x86/net/
 X: arch/x86/net/bpf_jit_comp32.c
···
 F: drivers/media/platform/marvell-ccic/
 
 CAIF NETWORK LAYER
-M: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
 L: netdev@vger.kernel.org
-S: Supported
+S: Orphan
 F: Documentation/networking/caif/
 F: drivers/net/caif/
 F: include/uapi/linux/caif/
···
 M: John Fastabend <john.fastabend@gmail.com>
 M: Daniel Borkmann <daniel@iogearbox.net>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: include/linux/skmsg.h
 F: net/core/skmsg.c
···
 F: drivers/block/paride/
 
 PARISC ARCHITECTURE
-M: "James E.J. Bottomley" <jejb@parisc-linux.org>
+M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
 M: Helge Deller <deller@gmx.de>
 L: linux-parisc@vger.kernel.org
 W: http://www.parisc-linux.org/
···
 M: John Fastabend <john.fastabend@gmail.com>
 L: netdev@vger.kernel.org
 L: xdp-newbies@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Supported
 F: net/core/xdp.c
 F: include/net/xdp.h
···
 M: Björn Töpel <bjorn.topel@intel.com>
 M: Magnus Karlsson <magnus.karlsson@intel.com>
 L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
 S: Maintained
 F: kernel/bpf/xskmap.c
 F: net/xdp/
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Shy Crocodile
 
 # *DOCUMENTATION*
+8 -12
arch/arc/Kconfig
···
 
 config ARC_SMP_HALT_ON_RESET
 	bool "Enable Halt-on-reset boot mode"
-	default y if ARC_UBOOT_SUPPORT
 	help
 	  In SMP configuration cores can be configured as Halt-on-reset
 	  or they could all start at same time. For Halt-on-reset, non
···
 	  (also referred to as r58:r59). These can also be used by gcc as GPR so
 	  kernel needs to save/restore per process
 
+config ARC_IRQ_NO_AUTOSAVE
+	bool "Disable hardware autosave regfile on interrupts"
+	default n
+	help
+	  On HS cores, taken interrupt auto saves the regfile on stack.
+	  This is programmable and can be optionally disabled in which case
+	  software INTERRUPT_PROLOGUE/EPILGUE do the needed work
+
 endif	# ISA_ARCV2
 
 endmenu	# "ARC CPU Configuration"
···
 	bool "Paranoia Checks in Low Level TLB Handlers"
 
 endif
-
-config ARC_UBOOT_SUPPORT
-	bool "Support uboot arg Handling"
-	help
-	  ARC Linux by default checks for uboot provided args as pointers to
-	  external cmdline or DTB. This however breaks in absence of uboot,
-	  when booting from Metaware debugger directly, as the registers are
-	  not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
-	  registers look like uboot args to kernel which then chokes.
-	  So only enable the uboot arg checking/processing if users are sure
-	  of uboot being in play.
 
 config ARC_BUILTIN_DTB_NAME
 	string "Built in DTB"
-1
arch/arc/configs/nps_defconfig
···
 # CONFIG_ARC_HAS_LLSC is not set
 CONFIG_ARC_KVADDR_SIZE=402
 CONFIG_ARC_EMUL_UNALIGNED=y
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_PREEMPT=y
 CONFIG_NET=y
 CONFIG_UNIX=y
-1
arch/arc/configs/vdk_hs38_defconfig
···
 CONFIG_ARC_PLAT_AXS10X=y
 CONFIG_AXS103=y
 CONFIG_ISA_ARCV2=y
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38"
 CONFIG_PREEMPT=y
 CONFIG_NET=y
-2
arch/arc/configs/vdk_hs38_smp_defconfig
···
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
-# CONFIG_ARC_SMP_HALT_ON_RESET is not set
-CONFIG_ARC_UBOOT_SUPPORT=y
 CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
 CONFIG_PREEMPT=y
 CONFIG_NET=y
+8
arch/arc/include/asm/arcregs.h
···
 #endif
 };
 
+struct bcr_uarch_build_arcv2 {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+	unsigned int pad:8, prod:8, maj:8, min:8;
+#else
+	unsigned int min:8, maj:8, prod:8, pad:8;
+#endif
+};
+
 struct bcr_mpy {
 #ifdef CONFIG_CPU_BIG_ENDIAN
 	unsigned int pad:8, x1616:8, dsp:4, cycles:2, type:2, ver:8;
+11
arch/arc/include/asm/cache.h
···
 #define cache_line_size()	SMP_CACHE_BYTES
 #define ARCH_DMA_MINALIGN	SMP_CACHE_BYTES
 
+/*
+ * Make sure slab-allocated buffers are 64-bit aligned when atomic64_t uses
+ * ARCv2 64-bit atomics (LLOCKD/SCONDD). This guarantess runtime 64-bit
+ * alignment for any atomic64_t embedded in buffer.
+ * Default ARCH_SLAB_MINALIGN is __alignof__(long long) which has a relaxed
+ * value of 4 (and not 8) in ARC ABI.
+ */
+#if defined(CONFIG_ARC_HAS_LL64) && defined(CONFIG_ARC_HAS_LLSC)
+#define ARCH_SLAB_MINALIGN	8
+#endif
+
 extern void arc_cache_init(void);
 extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
 extern void read_decode_cache_bcr(void);
+54
arch/arc/include/asm/entry-arcv2.h
···
 ;
 ; Now manually save: r12, sp, fp, gp, r25
 
+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	st.as	r9, [sp, -10]	; save r9 in it's final stack slot
+	sub	sp, sp, 12	; skip JLI, LDI, EI
+
+	PUSH	lp_count
+	PUSHAX	lp_start
+	PUSHAX	lp_end
+	PUSH	blink
+
+	PUSH	r11
+	PUSH	r10
+
+	sub	sp, sp, 4	; skip r9
+
+	PUSH	r8
+	PUSH	r7
+	PUSH	r6
+	PUSH	r5
+	PUSH	r4
+	PUSH	r3
+	PUSH	r2
+	PUSH	r1
+	PUSH	r0
+.endif
+#endif
+
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
 	PUSH	r59
 	PUSH	r58
···
 #ifdef CONFIG_ARC_HAS_ACCL_REGS
 	POP	r58
 	POP	r59
 #endif
 
+#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE
+.ifnc \called_from, exception
+	POP	r0
+	POP	r1
+	POP	r2
+	POP	r3
+	POP	r4
+	POP	r5
+	POP	r6
+	POP	r7
+	POP	r8
+	POP	r9
+	POP	r10
+	POP	r11
+
+	POP	blink
+	POPAX	lp_end
+	POPAX	lp_start
+
+	POP	r9
+	mov	lp_count, r9
+
+	add	sp, sp, 12	; skip JLI, LDI, EI
+	ld.as	r9, [sp, -10]	; reload r9 which got clobbered
+.endif
+#endif
 
 .endm
+4 -4
arch/arc/include/asm/uaccess.h
···
 	 */
 	  "=&r" (tmp), "+r" (to), "+r" (from)
 	:
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return n;
 }
···
 	 */
 	  "=&r" (tmp), "+r" (to), "+r" (from)
 	:
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return n;
 }
···
 	"	.previous	\n"
 	: "+r"(d_char), "+r"(res)
 	: "i"(0)
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return res;
 }
···
 	"	.previous	\n"
 	: "+r"(res), "+r"(dst), "+r"(src), "=r"(val)
 	: "g"(-EFAULT), "r"(count)
-	: "lp_count", "lp_start", "lp_end", "memory");
+	: "lp_count", "memory");
 
 	return res;
 }
+3 -1
arch/arc/kernel/entry-arcv2.S
···
 ;####### Return from Intr #######
 
 debug_marker_l1:
-	bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
+	; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot
+	btst	r0, STATUS_DE_BIT		; Z flag set if bit clear
+	bnz	.Lintr_ret_to_delay_slot	; branch if STATUS_DE_BIT set
 
 .Lisr_ret_fast_path:
 	; Handle special case #1: (Entry via Exception, Return via IRQ)
+12 -4
arch/arc/kernel/head.S
···
 #include <asm/entry.h>
 #include <asm/arcregs.h>
 #include <asm/cache.h>
+#include <asm/irqflags.h>
 
 .macro CPU_EARLY_SETUP
···
 	sr	r5, [ARC_REG_DC_CTRL]
 
 1:
+
+#ifdef CONFIG_ISA_ARCV2
+	; Unaligned access is disabled at reset, so re-enable early as
+	; gcc 7.3.1 (ARC GNU 2018.03) onwards generates unaligned access
+	; by default
+	lr	r5, [status32]
+	bset	r5, r5, STATUS_AD_BIT
+	kflag	r5
+#endif
 .endm
 
 .section .init.text, "ax",@progbits
···
 	st.ab	0, [r5, 4]
 1:
 
-#ifdef CONFIG_ARC_UBOOT_SUPPORT
 	; Uboot - kernel ABI
 	;    r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
-	;    r1 = magic number (board identity, unused as of now
+	;    r1 = magic number (always zero as of now)
 	;    r2 = pointer to uboot provided cmdline or external DTB in mem
-	; These are handled later in setup_arch()
+	; These are handled later in handle_uboot_args()
 	st	r0, [@uboot_tag]
 	st	r2, [@uboot_arg]
-#endif
 
 	; setup "current" tsk and optionally cache it in dedicated r25
 	mov	r9, @init_task
+2
arch/arc/kernel/intc-arcv2.c
···
 
 	*(unsigned int *)&ictrl = 0;
 
+#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE
 	ictrl.save_nr_gpr_pairs = 6;	/* r0 to r11 (r12 saved manually) */
 	ictrl.save_blink = 1;
 	ictrl.save_lp_regs = 1;		/* LP_COUNT, LP_START, LP_END */
 	ictrl.save_u_to_u = 0;		/* user ctxt saved on kernel stack */
 	ictrl.save_idx_regs = 1;	/* JLI, LDI, EI */
+#endif
 
 	WRITE_AUX(AUX_IRQ_CTRL, ictrl);
+89 -38
arch/arc/kernel/setup.c
··· 199 199 cpu->bpu.ret_stk = 4 << bpu.rse; 200 200 201 201 if (cpu->core.family >= 0x54) { 202 - unsigned int exec_ctrl; 203 202 204 - READ_BCR(AUX_EXEC_CTRL, exec_ctrl); 205 - cpu->extn.dual_enb = !(exec_ctrl & 1); 203 + struct bcr_uarch_build_arcv2 uarch; 206 204 207 - /* dual issue always present for this core */ 208 - cpu->extn.dual = 1; 205 + /* 206 + * The first 0x54 core (uarch maj:min 0:1 or 0:2) was 207 + * dual issue only (HS4x). But next uarch rev (1:0) 208 + * allows it be configured for single issue (HS3x) 209 + * Ensure we fiddle with dual issue only on HS4x 210 + */ 211 + READ_BCR(ARC_REG_MICRO_ARCH_BCR, uarch); 212 + 213 + if (uarch.prod == 4) { 214 + unsigned int exec_ctrl; 215 + 216 + /* dual issue hardware always present */ 217 + cpu->extn.dual = 1; 218 + 219 + READ_BCR(AUX_EXEC_CTRL, exec_ctrl); 220 + 221 + /* dual issue hardware enabled ? */ 222 + cpu->extn.dual_enb = !(exec_ctrl & 1); 223 + 224 + } 209 225 } 210 226 } 211 227 212 228 READ_BCR(ARC_REG_AP_BCR, ap); 213 229 if (ap.ver) { 214 230 cpu->extn.ap_num = 2 << ap.num; 215 - cpu->extn.ap_full = !!ap.min; 231 + cpu->extn.ap_full = !ap.min; 216 232 } 217 233 218 234 READ_BCR(ARC_REG_SMART_BCR, bcr); ··· 478 462 arc_chk_core_config(); 479 463 } 480 464 481 - static inline int is_kernel(unsigned long addr) 465 + static inline bool uboot_arg_invalid(unsigned long addr) 482 466 { 483 - if (addr >= (unsigned long)_stext && addr <= (unsigned long)_end) 484 - return 1; 485 - return 0; 467 + /* 468 + * Check that it is a untranslated address (although MMU is not enabled 469 + * yet, it being a high address ensures this is not by fluke) 470 + */ 471 + if (addr < PAGE_OFFSET) 472 + return true; 473 + 474 + /* Check that address doesn't clobber resident kernel image */ 475 + return addr >= (unsigned long)_stext && addr <= (unsigned long)_end; 476 + } 477 + 478 + #define IGNORE_ARGS "Ignore U-boot args: " 479 + 480 + /* uboot_tag values for U-boot - kernel ABI revision 0; see head.S */ 481 + #define 
UBOOT_TAG_NONE 0 482 + #define UBOOT_TAG_CMDLINE 1 483 + #define UBOOT_TAG_DTB 2 484 + 485 + void __init handle_uboot_args(void) 486 + { 487 + bool use_embedded_dtb = true; 488 + bool append_cmdline = false; 489 + 490 + /* check that we know this tag */ 491 + if (uboot_tag != UBOOT_TAG_NONE && 492 + uboot_tag != UBOOT_TAG_CMDLINE && 493 + uboot_tag != UBOOT_TAG_DTB) { 494 + pr_warn(IGNORE_ARGS "invalid uboot tag: '%08x'\n", uboot_tag); 495 + goto ignore_uboot_args; 496 + } 497 + 498 + if (uboot_tag != UBOOT_TAG_NONE && 499 + uboot_arg_invalid((unsigned long)uboot_arg)) { 500 + pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg); 501 + goto ignore_uboot_args; 502 + } 503 + 504 + /* see if U-boot passed an external Device Tree blob */ 505 + if (uboot_tag == UBOOT_TAG_DTB) { 506 + machine_desc = setup_machine_fdt((void *)uboot_arg); 507 + 508 + /* external Device Tree blob is invalid - use embedded one */ 509 + use_embedded_dtb = !machine_desc; 510 + } 511 + 512 + if (uboot_tag == UBOOT_TAG_CMDLINE) 513 + append_cmdline = true; 514 + 515 + ignore_uboot_args: 516 + 517 + if (use_embedded_dtb) { 518 + machine_desc = setup_machine_fdt(__dtb_start); 519 + if (!machine_desc) 520 + panic("Embedded DT invalid\n"); 521 + } 522 + 523 + /* 524 + * NOTE: @boot_command_line is populated by setup_machine_fdt() so this 525 + * append processing can only happen after. 
526 + */ 527 + if (append_cmdline) { 528 + /* Ensure a whitespace between the 2 cmdlines */ 529 + strlcat(boot_command_line, " ", COMMAND_LINE_SIZE); 530 + strlcat(boot_command_line, uboot_arg, COMMAND_LINE_SIZE); 531 + } 486 532 } 487 533 488 534 void __init setup_arch(char **cmdline_p) 489 535 { 490 - #ifdef CONFIG_ARC_UBOOT_SUPPORT 491 - /* make sure that uboot passed pointer to cmdline/dtb is valid */ 492 - if (uboot_tag && is_kernel((unsigned long)uboot_arg)) 493 - panic("Invalid uboot arg\n"); 494 - 495 - /* See if u-boot passed an external Device Tree blob */ 496 - machine_desc = setup_machine_fdt(uboot_arg); /* uboot_tag == 2 */ 497 - if (!machine_desc) 498 - #endif 499 - { 500 - /* No, so try the embedded one */ 501 - machine_desc = setup_machine_fdt(__dtb_start); 502 - if (!machine_desc) 503 - panic("Embedded DT invalid\n"); 504 - 505 - /* 506 - * If we are here, it is established that @uboot_arg didn't 507 - * point to DT blob. Instead if u-boot says it is cmdline, 508 - * append to embedded DT cmdline. 509 - * setup_machine_fdt() would have populated @boot_command_line 510 - */ 511 - if (uboot_tag == 1) { 512 - /* Ensure a whitespace between the 2 cmdlines */ 513 - strlcat(boot_command_line, " ", COMMAND_LINE_SIZE); 514 - strlcat(boot_command_line, uboot_arg, 515 - COMMAND_LINE_SIZE); 516 - } 517 - } 536 + handle_uboot_args(); 518 537 519 538 /* Save unparsed command line copy for /proc/cmdline */ 520 539 *cmdline_p = boot_command_line;
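The `uboot_arg_invalid()` logic in the hunk above is self-contained enough to check in plain C. A minimal sketch, with made-up stand-ins for `PAGE_OFFSET` and the `_stext`/`_end` linker symbols (the real values come from the platform memory map and the linker script):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins; NOT the real kernel values. */
#define PAGE_OFFSET   0x80000000UL	/* start of translated kernel space */
#define KERNEL_START  0x80000000UL	/* _stext */
#define KERNEL_END    0x80800000UL	/* _end */

/* Mirrors the two checks above: the U-boot pointer must be a high
 * (untranslated) address and must not overlap the resident kernel image. */
static bool uboot_arg_invalid(unsigned long addr)
{
	if (addr < PAGE_OFFSET)
		return true;
	return addr >= KERNEL_START && addr <= KERNEL_END;
}
```

Note that, unlike the old `is_kernel()` check, a bogus pointer now makes the kernel fall back to the embedded DTB instead of panicking.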
-14
arch/arc/lib/memcpy-archs.S
··· 25 25 #endif 26 26 27 27 #ifdef CONFIG_ARC_HAS_LL64 28 - # define PREFETCH_READ(RX) prefetch [RX, 56] 29 - # define PREFETCH_WRITE(RX) prefetchw [RX, 64] 30 28 # define LOADX(DST,RX) ldd.ab DST, [RX, 8] 31 29 # define STOREX(SRC,RX) std.ab SRC, [RX, 8] 32 30 # define ZOLSHFT 5 33 31 # define ZOLAND 0x1F 34 32 #else 35 - # define PREFETCH_READ(RX) prefetch [RX, 28] 36 - # define PREFETCH_WRITE(RX) prefetchw [RX, 32] 37 33 # define LOADX(DST,RX) ld.ab DST, [RX, 4] 38 34 # define STOREX(SRC,RX) st.ab SRC, [RX, 4] 39 35 # define ZOLSHFT 4 ··· 37 41 #endif 38 42 39 43 ENTRY_CFI(memcpy) 40 - prefetch [r1] ; Prefetch the read location 41 - prefetchw [r0] ; Prefetch the write location 42 44 mov.f 0, r2 43 45 ;;; if size is zero 44 46 jz.d [blink] ··· 66 72 lpnz @.Lcopy32_64bytes 67 73 ;; LOOP START 68 74 LOADX (r6, r1) 69 - PREFETCH_READ (r1) 70 - PREFETCH_WRITE (r3) 71 75 LOADX (r8, r1) 72 76 LOADX (r10, r1) 73 77 LOADX (r4, r1) ··· 109 117 lpnz @.Lcopy8bytes_1 110 118 ;; LOOP START 111 119 ld.ab r6, [r1, 4] 112 - prefetch [r1, 28] ;Prefetch the next read location 113 120 ld.ab r8, [r1,4] 114 - prefetchw [r3, 32] ;Prefetch the next write location 115 121 116 122 SHIFT_1 (r7, r6, 24) 117 123 or r7, r7, r5 ··· 152 162 lpnz @.Lcopy8bytes_2 153 163 ;; LOOP START 154 164 ld.ab r6, [r1, 4] 155 - prefetch [r1, 28] ;Prefetch the next read location 156 165 ld.ab r8, [r1,4] 157 - prefetchw [r3, 32] ;Prefetch the next write location 158 166 159 167 SHIFT_1 (r7, r6, 16) 160 168 or r7, r7, r5 ··· 192 204 lpnz @.Lcopy8bytes_3 193 205 ;; LOOP START 194 206 ld.ab r6, [r1, 4] 195 - prefetch [r1, 28] ;Prefetch the next read location 196 207 ld.ab r8, [r1,4] 197 - prefetchw [r3, 32] ;Prefetch the next write location 198 208 199 209 SHIFT_1 (r7, r6, 8) 200 210 or r7, r7, r5
+1
arch/arc/plat-hsdk/Kconfig
··· 9 9 bool "ARC HS Development Kit SOC" 10 10 depends on ISA_ARCV2 11 11 select ARC_HAS_ACCL_REGS 12 + select ARC_IRQ_NO_AUTOSAVE 12 13 select CLK_HSDK 13 14 select RESET_HSDK 14 15 select HAVE_PCI
+1
arch/arm/Kconfig
··· 1400 1400 config HOTPLUG_CPU 1401 1401 bool "Support for hot-pluggable CPUs" 1402 1402 depends on SMP 1403 + select GENERIC_IRQ_MIGRATION 1403 1404 help 1404 1405 Say Y here to experiment with turning CPUs off and on. CPUs 1405 1406 can be controlled through /sys/devices/system/cpu.
+1 -1
arch/arm/boot/dts/am335x-evm.dts
··· 729 729 730 730 &cpsw_emac0 { 731 731 phy-handle = <&ethphy0>; 732 - phy-mode = "rgmii-txid"; 732 + phy-mode = "rgmii-id"; 733 733 }; 734 734 735 735 &tscadc {
+2 -2
arch/arm/boot/dts/am335x-evmsk.dts
··· 651 651 652 652 &cpsw_emac0 { 653 653 phy-handle = <&ethphy0>; 654 - phy-mode = "rgmii-txid"; 654 + phy-mode = "rgmii-id"; 655 655 dual_emac_res_vlan = <1>; 656 656 }; 657 657 658 658 &cpsw_emac1 { 659 659 phy-handle = <&ethphy1>; 660 - phy-mode = "rgmii-txid"; 660 + phy-mode = "rgmii-id"; 661 661 dual_emac_res_vlan = <2>; 662 662 }; 663 663
+22 -20
arch/arm/boot/dts/armada-xp-db.dts
··· 144 144 status = "okay"; 145 145 }; 146 146 147 - nand@d0000 { 147 + nand-controller@d0000 { 148 148 status = "okay"; 149 - label = "pxa3xx_nand-0"; 150 - num-cs = <1>; 151 - marvell,nand-keep-config; 152 - nand-on-flash-bbt; 153 149 154 - partitions { 155 - compatible = "fixed-partitions"; 156 - #address-cells = <1>; 157 - #size-cells = <1>; 150 + nand@0 { 151 + reg = <0>; 152 + label = "pxa3xx_nand-0"; 153 + nand-rb = <0>; 154 + nand-on-flash-bbt; 158 155 159 - partition@0 { 160 - label = "U-Boot"; 161 - reg = <0 0x800000>; 162 - }; 163 - partition@800000 { 164 - label = "Linux"; 165 - reg = <0x800000 0x800000>; 166 - }; 167 - partition@1000000 { 168 - label = "Filesystem"; 169 - reg = <0x1000000 0x3f000000>; 156 + partitions { 157 + compatible = "fixed-partitions"; 158 + #address-cells = <1>; 159 + #size-cells = <1>; 170 160 161 + partition@0 { 162 + label = "U-Boot"; 163 + reg = <0 0x800000>; 164 + }; 165 + partition@800000 { 166 + label = "Linux"; 167 + reg = <0x800000 0x800000>; 168 + }; 169 + partition@1000000 { 170 + label = "Filesystem"; 171 + reg = <0x1000000 0x3f000000>; 172 + }; 171 173 }; 172 174 }; 173 175 };
+8 -5
arch/arm/boot/dts/armada-xp-gp.dts
··· 160 160 status = "okay"; 161 161 }; 162 162 163 - nand@d0000 { 163 + nand-controller@d0000 { 164 164 status = "okay"; 165 - label = "pxa3xx_nand-0"; 166 - num-cs = <1>; 167 - marvell,nand-keep-config; 168 - nand-on-flash-bbt; 165 + 166 + nand@0 { 167 + reg = <0>; 168 + label = "pxa3xx_nand-0"; 169 + nand-rb = <0>; 170 + nand-on-flash-bbt; 171 + }; 169 172 }; 170 173 }; 171 174
+38 -35
arch/arm/boot/dts/armada-xp-lenovo-ix4-300d.dts
··· 81 81 82 82 }; 83 83 84 - nand@d0000 { 84 + nand-controller@d0000 { 85 85 status = "okay"; 86 - label = "pxa3xx_nand-0"; 87 - num-cs = <1>; 88 - marvell,nand-keep-config; 89 - nand-on-flash-bbt; 90 86 91 - partitions { 92 - compatible = "fixed-partitions"; 93 - #address-cells = <1>; 94 - #size-cells = <1>; 87 + nand@0 { 88 + reg = <0>; 89 + label = "pxa3xx_nand-0"; 90 + nand-rb = <0>; 91 + nand-on-flash-bbt; 95 92 96 - partition@0 { 97 - label = "u-boot"; 98 - reg = <0x00000000 0x000e0000>; 99 - read-only; 100 - }; 93 + partitions { 94 + compatible = "fixed-partitions"; 95 + #address-cells = <1>; 96 + #size-cells = <1>; 101 97 102 - partition@e0000 { 103 - label = "u-boot-env"; 104 - reg = <0x000e0000 0x00020000>; 105 - read-only; 106 - }; 98 + partition@0 { 99 + label = "u-boot"; 100 + reg = <0x00000000 0x000e0000>; 101 + read-only; 102 + }; 107 103 108 - partition@100000 { 109 - label = "u-boot-env2"; 110 - reg = <0x00100000 0x00020000>; 111 - read-only; 112 - }; 104 + partition@e0000 { 105 + label = "u-boot-env"; 106 + reg = <0x000e0000 0x00020000>; 107 + read-only; 108 + }; 113 109 114 - partition@120000 { 115 - label = "zImage"; 116 - reg = <0x00120000 0x00400000>; 117 - }; 110 + partition@100000 { 111 + label = "u-boot-env2"; 112 + reg = <0x00100000 0x00020000>; 113 + read-only; 114 + }; 118 115 119 - partition@520000 { 120 - label = "initrd"; 121 - reg = <0x00520000 0x00400000>; 122 - }; 116 + partition@120000 { 117 + label = "zImage"; 118 + reg = <0x00120000 0x00400000>; 119 + }; 123 120 124 - partition@e00000 { 125 - label = "boot"; 126 - reg = <0x00e00000 0x3f200000>; 121 + partition@520000 { 122 + label = "initrd"; 123 + reg = <0x00520000 0x00400000>; 124 + }; 125 + 126 + partition@e00000 { 127 + label = "boot"; 128 + reg = <0x00e00000 0x3f200000>; 129 + }; 127 130 }; 128 131 }; 129 132 };
+16 -1
arch/arm/boot/dts/tegra124-nyan.dtsi
··· 13 13 stdout-path = "serial0:115200n8"; 14 14 }; 15 15 16 - memory@80000000 { 16 + /* 17 + * Note that recent version of the device tree compiler (starting with 18 + * version 1.4.2) warn about this node containing a reg property, but 19 + * missing a unit-address. However, the bootloader on these Chromebook 20 + * devices relies on the full name of this node to be exactly /memory. 21 + * Adding the unit-address causes the bootloader to create a /memory 22 + * node and write the memory bank configuration to that node, which in 23 + * turn leads the kernel to believe that the device has 2 GiB of 24 + * memory instead of the amount detected by the bootloader. 25 + * 26 + * The name of this node is effectively ABI and must not be changed. 27 + */ 28 + memory { 29 + device_type = "memory"; 17 30 reg = <0x0 0x80000000 0x0 0x80000000>; 18 31 }; 32 + 33 + /delete-node/ memory@80000000; 19 34 20 35 host1x@50000000 { 21 36 hdmi@54280000 {
+2 -1
arch/arm/crypto/sha256-armv4.pl
··· 212 212 .global sha256_block_data_order 213 213 .type sha256_block_data_order,%function 214 214 sha256_block_data_order: 215 + .Lsha256_block_data_order: 215 216 #if __ARM_ARCH__<7 216 217 sub r3,pc,#8 @ sha256_block_data_order 217 218 #else 218 - adr r3,sha256_block_data_order 219 + adr r3,.Lsha256_block_data_order 219 220 #endif 220 221 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) 221 222 ldr r12,.LOPENSSL_armcap
+2 -1
arch/arm/crypto/sha256-core.S_shipped
··· 93 93 .global sha256_block_data_order 94 94 .type sha256_block_data_order,%function 95 95 sha256_block_data_order: 96 + .Lsha256_block_data_order: 96 97 #if __ARM_ARCH__<7 97 98 sub r3,pc,#8 @ sha256_block_data_order 98 99 #else 99 - adr r3,sha256_block_data_order 100 + adr r3,.Lsha256_block_data_order 100 101 #endif 101 102 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) 102 103 ldr r12,.LOPENSSL_armcap
+2 -1
arch/arm/crypto/sha512-armv4.pl
··· 274 274 .global sha512_block_data_order 275 275 .type sha512_block_data_order,%function 276 276 sha512_block_data_order: 277 + .Lsha512_block_data_order: 277 278 #if __ARM_ARCH__<7 278 279 sub r3,pc,#8 @ sha512_block_data_order 279 280 #else 280 - adr r3,sha512_block_data_order 281 + adr r3,.Lsha512_block_data_order 281 282 #endif 282 283 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) 283 284 ldr r12,.LOPENSSL_armcap
+2 -1
arch/arm/crypto/sha512-core.S_shipped
··· 141 141 .global sha512_block_data_order 142 142 .type sha512_block_data_order,%function 143 143 sha512_block_data_order: 144 + .Lsha512_block_data_order: 144 145 #if __ARM_ARCH__<7 145 146 sub r3,pc,#8 @ sha512_block_data_order 146 147 #else 147 - adr r3,sha512_block_data_order 148 + adr r3,.Lsha512_block_data_order 148 149 #endif 149 150 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__) 150 151 ldr r12,.LOPENSSL_armcap
-1
arch/arm/include/asm/irq.h
··· 25 25 #ifndef __ASSEMBLY__ 26 26 struct irqaction; 27 27 struct pt_regs; 28 - extern void migrate_irqs(void); 29 28 30 29 extern void asm_do_IRQ(unsigned int, struct pt_regs *); 31 30 void handle_IRQ(unsigned int, struct pt_regs *);
-62
arch/arm/kernel/irq.c
··· 31 31 #include <linux/smp.h> 32 32 #include <linux/init.h> 33 33 #include <linux/seq_file.h> 34 - #include <linux/ratelimit.h> 35 34 #include <linux/errno.h> 36 35 #include <linux/list.h> 37 36 #include <linux/kallsyms.h> ··· 108 109 return nr_irqs; 109 110 } 110 111 #endif 111 - 112 - #ifdef CONFIG_HOTPLUG_CPU 113 - static bool migrate_one_irq(struct irq_desc *desc) 114 - { 115 - struct irq_data *d = irq_desc_get_irq_data(desc); 116 - const struct cpumask *affinity = irq_data_get_affinity_mask(d); 117 - struct irq_chip *c; 118 - bool ret = false; 119 - 120 - /* 121 - * If this is a per-CPU interrupt, or the affinity does not 122 - * include this CPU, then we have nothing to do. 123 - */ 124 - if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity)) 125 - return false; 126 - 127 - if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { 128 - affinity = cpu_online_mask; 129 - ret = true; 130 - } 131 - 132 - c = irq_data_get_irq_chip(d); 133 - if (!c->irq_set_affinity) 134 - pr_debug("IRQ%u: unable to set affinity\n", d->irq); 135 - else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret) 136 - cpumask_copy(irq_data_get_affinity_mask(d), affinity); 137 - 138 - return ret; 139 - } 140 - 141 - /* 142 - * The current CPU has been marked offline. Migrate IRQs off this CPU. 143 - * If the affinity settings do not allow other CPUs, force them onto any 144 - * available CPU. 145 - * 146 - * Note: we must iterate over all IRQs, whether they have an attached 147 - * action structure or not, as we need to get chained interrupts too. 
148 - */ 149 - void migrate_irqs(void) 150 - { 151 - unsigned int i; 152 - struct irq_desc *desc; 153 - unsigned long flags; 154 - 155 - local_irq_save(flags); 156 - 157 - for_each_irq_desc(i, desc) { 158 - bool affinity_broken; 159 - 160 - raw_spin_lock(&desc->lock); 161 - affinity_broken = migrate_one_irq(desc); 162 - raw_spin_unlock(&desc->lock); 163 - 164 - if (affinity_broken) 165 - pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n", 166 - i, smp_processor_id()); 167 - } 168 - 169 - local_irq_restore(flags); 170 - } 171 - #endif /* CONFIG_HOTPLUG_CPU */
+1 -1
arch/arm/kernel/smp.c
··· 254 254 /* 255 255 * OK - migrate IRQs away from this CPU 256 256 */ 257 - migrate_irqs(); 257 + irq_migrate_all_off_this_cpu(); 258 258 259 259 /* 260 260 * Flush user cache and TLB mappings, and then remove this CPU
+2
arch/arm/mm/dma-mapping.c
··· 2390 2390 return; 2391 2391 2392 2392 arm_teardown_iommu_dma_ops(dev); 2393 + /* Let arch_setup_dma_ops() start again from scratch upon re-probe */ 2394 + set_dma_ops(dev, NULL); 2393 2395 }
+1 -1
arch/arm/probes/kprobes/opt-arm.c
··· 247 247 } 248 248 249 249 /* Copy arch-dep-instance from template. */ 250 - memcpy(code, (unsigned char *)optprobe_template_entry, 250 + memcpy(code, (unsigned long *)&optprobe_template_entry, 251 251 TMPL_END_IDX * sizeof(kprobe_opcode_t)); 252 252 253 253 /* Adjust buffer according to instruction. */
+1 -1
arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
··· 351 351 reg = <0>; 352 352 pinctrl-names = "default"; 353 353 pinctrl-0 = <&cp0_copper_eth_phy_reset>; 354 - reset-gpios = <&cp1_gpio1 11 GPIO_ACTIVE_LOW>; 354 + reset-gpios = <&cp0_gpio2 11 GPIO_ACTIVE_LOW>; 355 355 reset-assert-us = <10000>; 356 356 }; 357 357
+1 -1
arch/arm64/boot/dts/qcom/msm8998.dtsi
··· 37 37 }; 38 38 39 39 memory@86200000 { 40 - reg = <0x0 0x86200000 0x0 0x2600000>; 40 + reg = <0x0 0x86200000 0x0 0x2d00000>; 41 41 no-map; 42 42 }; 43 43
+18 -2
arch/arm64/crypto/chacha-neon-core.S
··· 158 158 mov w3, w2 159 159 bl chacha_permute 160 160 161 - st1 {v0.16b}, [x1], #16 162 - st1 {v3.16b}, [x1] 161 + st1 {v0.4s}, [x1], #16 162 + st1 {v3.4s}, [x1] 163 163 164 164 ldp x29, x30, [sp], #16 165 165 ret ··· 532 532 add v3.4s, v3.4s, v19.4s 533 533 add a2, a2, w8 534 534 add a3, a3, w9 535 + CPU_BE( rev a0, a0 ) 536 + CPU_BE( rev a1, a1 ) 537 + CPU_BE( rev a2, a2 ) 538 + CPU_BE( rev a3, a3 ) 535 539 536 540 ld4r {v24.4s-v27.4s}, [x0], #16 537 541 ld4r {v28.4s-v31.4s}, [x0] ··· 556 552 add v7.4s, v7.4s, v23.4s 557 553 add a6, a6, w8 558 554 add a7, a7, w9 555 + CPU_BE( rev a4, a4 ) 556 + CPU_BE( rev a5, a5 ) 557 + CPU_BE( rev a6, a6 ) 558 + CPU_BE( rev a7, a7 ) 559 559 560 560 // x8[0-3] += s2[0] 561 561 // x9[0-3] += s2[1] ··· 577 569 add v11.4s, v11.4s, v27.4s 578 570 add a10, a10, w8 579 571 add a11, a11, w9 572 + CPU_BE( rev a8, a8 ) 573 + CPU_BE( rev a9, a9 ) 574 + CPU_BE( rev a10, a10 ) 575 + CPU_BE( rev a11, a11 ) 580 576 581 577 // x12[0-3] += s3[0] 582 578 // x13[0-3] += s3[1] ··· 598 586 add v15.4s, v15.4s, v31.4s 599 587 add a14, a14, w8 600 588 add a15, a15, w9 589 + CPU_BE( rev a12, a12 ) 590 + CPU_BE( rev a13, a13 ) 591 + CPU_BE( rev a14, a14 ) 592 + CPU_BE( rev a15, a15 ) 601 593 602 594 // interleave 32-bit words in state n, n+1 603 595 ldp w6, w7, [x2], #64
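The `CPU_BE( rev ... )` lines added above byte-swap the scalar state words on big-endian builds, because ChaCha serializes its keystream little-endian. The same operation in C, using the GCC/Clang byte-swap builtin (the A64 `rev` instruction does this in one step):

```c
#include <assert.h>
#include <stdint.h>

/* What each added `CPU_BE( rev aN, aN )` accomplishes: convert a
 * native-endian 32-bit word to ChaCha's little-endian serialization
 * on a big-endian CPU (the swap is compiled out on little-endian). */
static uint32_t chacha_word_to_le(uint32_t native_be)
{
	return __builtin_bswap32(native_be);
}
```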
+4
arch/arm64/include/asm/neon-intrinsics.h
··· 36 36 #include <arm_neon.h> 37 37 #endif 38 38 39 + #ifdef CONFIG_CC_IS_CLANG 40 + #pragma clang diagnostic ignored "-Wincompatible-pointer-types" 41 + #endif 42 + 39 43 #endif /* __ASM_NEON_INTRINSICS_H */
+1 -2
arch/arm64/kernel/head.S
··· 539 539 /* GICv3 system register access */ 540 540 mrs x0, id_aa64pfr0_el1 541 541 ubfx x0, x0, #24, #4 542 - cmp x0, #1 543 - b.ne 3f 542 + cbz x0, 3f 544 543 545 544 mrs_s x0, SYS_ICC_SRE_EL2 546 545 orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1
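The head.S change above replaces an exact `cmp x0, #1` test with `cbz`, i.e. "GIC field is nonzero" rather than "equals 1". A small C model of the `ubfx` extraction of ID_AA64PFR0_EL1.GIC (bits 27:24); the value 3 below is just a hypothetical nonzero encoding that the old equality test would have wrongly rejected:

```c
#include <assert.h>
#include <stdint.h>

/* Models `ubfx x0, reg, #24, #4`: unsigned 4-bit bitfield extract. */
static unsigned int ubfx4(uint64_t reg, unsigned int lsb)
{
	return (unsigned int)(reg >> lsb) & 0xfu;
}
```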
+8 -7
arch/arm64/kernel/ptrace.c
··· 1702 1702 } 1703 1703 1704 1704 /* 1705 - * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487C.a 1706 - * We also take into account DIT (bit 24), which is not yet documented, and 1707 - * treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may be 1708 - * allocated an EL0 meaning in future. 1705 + * SPSR_ELx bits which are always architecturally RES0 per ARM DDI 0487D.a. 1706 + * We permit userspace to set SSBS (AArch64 bit 12, AArch32 bit 23) which is 1707 + * not described in ARM DDI 0487D.a. 1708 + * We treat PAN and UAO as RES0 bits, as they are meaningless at EL0, and may 1709 + * be allocated an EL0 meaning in future. 1709 1710 * Userspace cannot use these until they have an architectural meaning. 1710 1711 * Note that this follows the SPSR_ELx format, not the AArch32 PSR format. 1711 1712 * We also reserve IL for the kernel; SS is handled dynamically. 1712 1713 */ 1713 1714 #define SPSR_EL1_AARCH64_RES0_BITS \ 1714 - (GENMASK_ULL(63,32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \ 1715 - GENMASK_ULL(20, 10) | GENMASK_ULL(5, 5)) 1715 + (GENMASK_ULL(63, 32) | GENMASK_ULL(27, 25) | GENMASK_ULL(23, 22) | \ 1716 + GENMASK_ULL(20, 13) | GENMASK_ULL(11, 10) | GENMASK_ULL(5, 5)) 1716 1717 #define SPSR_EL1_AARCH32_RES0_BITS \ 1717 - (GENMASK_ULL(63,32) | GENMASK_ULL(23, 22) | GENMASK_ULL(20,20)) 1718 + (GENMASK_ULL(63, 32) | GENMASK_ULL(22, 22) | GENMASK_ULL(20, 20)) 1718 1719 1719 1720 static int valid_compat_regs(struct user_pt_regs *regs) 1720 1721 {
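The RES0 mask changes above are easier to audit with the bit maths written out. A self-contained restatement of the kernel's `GENMASK_ULL()` (bits h..l inclusive, same shape as the real macro), which can confirm that the new AArch64 mask is exactly the old `GENMASK_ULL(20, 10)` minus bit 12 (SSBS):

```c
#include <assert.h>
#include <stdint.h>

/* Same shape as the kernel's GENMASK_ULL(): set bits h down to l. */
#define GENMASK_ULL(h, l) \
	((~0ULL >> (63 - (h))) & (~0ULL << (l)))
```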
+3
arch/arm64/kernel/setup.c
··· 339 339 smp_init_cpus(); 340 340 smp_build_mpidr_hash(); 341 341 342 + /* Init percpu seeds for random tags after cpus are set up. */ 343 + kasan_init_tags(); 344 + 342 345 #ifdef CONFIG_ARM64_SW_TTBR0_PAN 343 346 /* 344 347 * Make sure init_thread_info.ttbr0 always generates translation
-2
arch/arm64/mm/kasan_init.c
··· 252 252 memset(kasan_early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE); 253 253 cpu_replace_ttbr1(lm_alias(swapper_pg_dir)); 254 254 255 - kasan_init_tags(); 256 - 257 255 /* At this point kasan is fully initialized. Enable error messages */ 258 256 init_task.kasan_depth = 0; 259 257 pr_info("KernelAddressSanitizer initialized\n");
+8
arch/mips/bcm63xx/dev-enet.c
··· 70 70 71 71 static int shared_device_registered; 72 72 73 + static u64 enet_dmamask = DMA_BIT_MASK(32); 74 + 73 75 static struct resource enet0_res[] = { 74 76 { 75 77 .start = -1, /* filled at runtime */ ··· 101 99 .resource = enet0_res, 102 100 .dev = { 103 101 .platform_data = &enet0_pd, 102 + .dma_mask = &enet_dmamask, 103 + .coherent_dma_mask = DMA_BIT_MASK(32), 104 104 }, 105 105 }; 106 106 ··· 135 131 .resource = enet1_res, 136 132 .dev = { 137 133 .platform_data = &enet1_pd, 134 + .dma_mask = &enet_dmamask, 135 + .coherent_dma_mask = DMA_BIT_MASK(32), 138 136 }, 139 137 }; 140 138 ··· 163 157 .resource = enetsw_res, 164 158 .dev = { 165 159 .platform_data = &enetsw_pd, 160 + .dma_mask = &enet_dmamask, 161 + .coherent_dma_mask = DMA_BIT_MASK(32), 166 162 }, 167 163 }; 168 164
+1 -2
arch/mips/kernel/cmpxchg.c
··· 54 54 unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old, 55 55 unsigned long new, unsigned int size) 56 56 { 57 - u32 mask, old32, new32, load32; 57 + u32 mask, old32, new32, load32, load; 58 58 volatile u32 *ptr32; 59 59 unsigned int shift; 60 - u8 load; 61 60 62 61 /* Check that ptr is naturally aligned */ 63 62 WARN_ON((unsigned long)ptr & (size - 1));
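The hunk above widens `load` from `u8` to `u32` inside the sub-word cmpxchg emulation. A minimal, non-atomic sketch of that emulation for a 1-byte exchange on an aligned 32-bit word, assuming little-endian byte order (the real helper wraps this in an atomic 32-bit cmpxchg loop and handles both sub-word sizes and both endiannesses):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative, non-atomic sketch of emulating a 1-byte compare-and-
 * exchange on top of the enclosing aligned 32-bit word. */
static uint8_t cmpxchg_u8_emulated(volatile void *ptr, uint8_t old, uint8_t new)
{
	unsigned long addr = (unsigned long)ptr;
	volatile uint32_t *ptr32 = (volatile uint32_t *)(addr & ~0x3UL);
	unsigned int shift = (addr & 0x3) * 8;	/* byte offset within word */
	uint32_t mask = 0xffU << shift;
	uint32_t load32 = *ptr32;
	uint32_t load = (load32 & mask) >> shift;	/* full u32 width */

	if (load != old)
		return (uint8_t)load;	/* value differed: no store */

	*ptr32 = (load32 & ~mask) | ((uint32_t)new << shift);
	return old;
}

/* Self-check helper: swap byte 1 of 0x44332211 and return the word
 * afterwards (little-endian layout assumed). */
static uint32_t demo_exchange(void)
{
	uint32_t word = 0x44332211;

	cmpxchg_u8_emulated((uint8_t *)&word + 1, 0x22, 0xaa);
	return word;
}
```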
+2 -1
arch/mips/kernel/setup.c
··· 384 384 init_initrd(); 385 385 reserved_end = (unsigned long) PFN_UP(__pa_symbol(&_end)); 386 386 387 - memblock_reserve(PHYS_OFFSET, reserved_end << PAGE_SHIFT); 387 + memblock_reserve(PHYS_OFFSET, 388 + (reserved_end << PAGE_SHIFT) - PHYS_OFFSET); 388 389 389 390 /* 390 391 * max_low_pfn is not a number of pages. The number of pages
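The `memblock_reserve()` fix above is a "size vs. end" slip: the second argument is a byte count, so with a nonzero `PHYS_OFFSET` the size is end minus base, not end. A sketch of the arithmetic with the PFN shift elided and an illustrative `PHYS_OFFSET`:

```c
#include <assert.h>

#define PHYS_OFFSET 0x20000000UL	/* illustrative base of RAM */

/* Byte size of the region [PHYS_OFFSET, reserved_end), as the fixed
 * memblock_reserve() call computes it. */
static unsigned long reserve_size(unsigned long reserved_end)
{
	return reserved_end - PHYS_OFFSET;
}
```

Before the fix, passing `reserved_end` itself over-reserved by `PHYS_OFFSET` bytes past the kernel image.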
+2 -2
arch/mips/lantiq/xway/vmmc.c
··· 31 31 dma_addr_t dma; 32 32 33 33 cp1_base = 34 - (void *) CPHYSADDR(dma_alloc_coherent(NULL, CP1_SIZE, 35 - &dma, GFP_ATOMIC)); 34 + (void *) CPHYSADDR(dma_alloc_coherent(&pdev->dev, CP1_SIZE, 35 + &dma, GFP_KERNEL)); 36 36 37 37 gpio_count = of_gpio_count(pdev->dev.of_node); 38 38 while (gpio_count > 0) {
+13 -13
arch/mips/net/ebpf_jit.c
··· 79 79 REG_64BIT_32BIT, 80 80 /* 32-bit compatible, need truncation for 64-bit ops. */ 81 81 REG_32BIT, 82 - /* 32-bit zero extended. */ 83 - REG_32BIT_ZERO_EX, 84 82 /* 32-bit no sign/zero extension needed. */ 85 83 REG_32BIT_POS 86 84 }; ··· 341 343 const struct bpf_prog *prog = ctx->skf; 342 344 int stack_adjust = ctx->stack_size; 343 345 int store_offset = stack_adjust - 8; 346 + enum reg_val_type td; 344 347 int r0 = MIPS_R_V0; 345 348 346 - if (dest_reg == MIPS_R_RA && 347 - get_reg_val_type(ctx, prog->len, BPF_REG_0) == REG_32BIT_ZERO_EX) 349 + if (dest_reg == MIPS_R_RA) { 348 350 /* Don't let zero extended value escape. */ 349 - emit_instr(ctx, sll, r0, r0, 0); 351 + td = get_reg_val_type(ctx, prog->len, BPF_REG_0); 352 + if (td == REG_64BIT) 353 + emit_instr(ctx, sll, r0, r0, 0); 354 + } 350 355 351 356 if (ctx->flags & EBPF_SAVE_RA) { 352 357 emit_instr(ctx, ld, MIPS_R_RA, store_offset, MIPS_R_SP); ··· 693 692 if (dst < 0) 694 693 return dst; 695 694 td = get_reg_val_type(ctx, this_idx, insn->dst_reg); 696 - if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { 695 + if (td == REG_64BIT) { 697 696 /* sign extend */ 698 697 emit_instr(ctx, sll, dst, dst, 0); 699 698 } ··· 708 707 if (dst < 0) 709 708 return dst; 710 709 td = get_reg_val_type(ctx, this_idx, insn->dst_reg); 711 - if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { 710 + if (td == REG_64BIT) { 712 711 /* sign extend */ 713 712 emit_instr(ctx, sll, dst, dst, 0); 714 713 } ··· 722 721 if (dst < 0) 723 722 return dst; 724 723 td = get_reg_val_type(ctx, this_idx, insn->dst_reg); 725 - if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) 724 + if (td == REG_64BIT) 726 725 /* sign extend */ 727 726 emit_instr(ctx, sll, dst, dst, 0); 728 727 if (insn->imm == 1) { ··· 861 860 if (src < 0 || dst < 0) 862 861 return -EINVAL; 863 862 td = get_reg_val_type(ctx, this_idx, insn->dst_reg); 864 - if (td == REG_64BIT || td == REG_32BIT_ZERO_EX) { 863 + if (td == REG_64BIT) { 865 864 /* sign extend */ 866 865 
emit_instr(ctx, sll, dst, dst, 0); 867 866 } 868 867 did_move = false; 869 868 ts = get_reg_val_type(ctx, this_idx, insn->src_reg); 870 - if (ts == REG_64BIT || ts == REG_32BIT_ZERO_EX) { 869 + if (ts == REG_64BIT) { 871 870 int tmp_reg = MIPS_R_AT; 872 871 873 872 if (bpf_op == BPF_MOV) { ··· 1255 1254 if (insn->imm == 64 && td == REG_32BIT) 1256 1255 emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32); 1257 1256 1258 - if (insn->imm != 64 && 1259 - (td == REG_64BIT || td == REG_32BIT_ZERO_EX)) { 1257 + if (insn->imm != 64 && td == REG_64BIT) { 1260 1258 /* sign extend */ 1261 1259 emit_instr(ctx, sll, dst, dst, 0); 1262 1260 } ··· 1819 1819 1820 1820 /* Update the icache */ 1821 1821 flush_icache_range((unsigned long)ctx.target, 1822 - (unsigned long)(ctx.target + ctx.idx * sizeof(u32))); 1822 + (unsigned long)&ctx.target[ctx.idx]); 1823 1823 1824 1824 if (bpf_jit_enable > 1) 1825 1825 /* Dump JIT code */
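The `flush_icache_range()` change at the end of the hunk fixes a pointer-arithmetic slip: `ctx.target` is a `u32` pointer, so `ctx.target + ctx.idx * sizeof(u32)` advances by that many *elements*, overshooting fourfold, while `&ctx.target[ctx.idx]` is the intended end address. Demonstrated with a hypothetical buffer:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

static uint32_t jit_buf[64];	/* stand-in for ctx.target */

/* Correct end-of-code address: idx elements past base. */
static uintptr_t end_addr_right(uint32_t *base, size_t idx)
{
	return (uintptr_t)&base[idx];
}

/* The pre-fix expression: pointer arithmetic already scales by the
 * element size, so this lands idx * sizeof(uint32_t) ELEMENTS along. */
static uintptr_t end_addr_wrong(uint32_t *base, size_t idx)
{
	return (uintptr_t)(base + idx * sizeof(uint32_t));
}
```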
+21 -8
arch/parisc/kernel/ptrace.c
··· 308 308 309 309 long do_syscall_trace_enter(struct pt_regs *regs) 310 310 { 311 - if (test_thread_flag(TIF_SYSCALL_TRACE) && 312 - tracehook_report_syscall_entry(regs)) { 311 + if (test_thread_flag(TIF_SYSCALL_TRACE)) { 312 + int rc = tracehook_report_syscall_entry(regs); 313 + 313 314 /* 314 - * Tracing decided this syscall should not happen or the 315 - * debugger stored an invalid system call number. Skip 316 - * the system call and the system call restart handling. 315 + * As tracesys_next does not set %r28 to -ENOSYS 316 + * when %r20 is set to -1, initialize it here. 317 317 */ 318 - regs->gr[20] = -1UL; 319 - goto out; 318 + regs->gr[28] = -ENOSYS; 319 + 320 + if (rc) { 321 + /* 322 + * A nonzero return code from 323 + * tracehook_report_syscall_entry() tells us 324 + * to prevent the syscall execution. Skip 325 + * the syscall call and the syscall restart handling. 326 + * 327 + * Note that the tracer may also just change 328 + * regs->gr[20] to an invalid syscall number, 329 + * that is handled by tracesys_next. 330 + */ 331 + regs->gr[20] = -1UL; 332 + return -1; 333 + } 320 334 } 321 335 322 336 /* Do the secure computing check after ptrace. */ ··· 354 340 regs->gr[24] & 0xffffffff, 355 341 regs->gr[23] & 0xffffffff); 356 342 357 - out: 358 343 /* 359 344 * Sign extend the syscall number to 64bit since it may have been 360 345 * modified by a compat ptrace call
+2
arch/powerpc/platforms/powernv/pci-ioda.c
··· 1593 1593 1594 1594 pnv_pci_ioda2_setup_dma_pe(phb, pe); 1595 1595 #ifdef CONFIG_IOMMU_API 1596 + iommu_register_group(&pe->table_group, 1597 + pe->phb->hose->global_number, pe->pe_number); 1596 1598 pnv_ioda_setup_bus_iommu_group(pe, &pe->table_group, NULL); 1597 1599 #endif 1598 1600 }
+2
arch/powerpc/platforms/powernv/pci.c
··· 1147 1147 return 0; 1148 1148 1149 1149 pe = &phb->ioda.pe_array[pdn->pe_number]; 1150 + if (!pe->table_group.group) 1151 + return 0; 1150 1152 iommu_add_device(&pe->table_group, dev); 1151 1153 return 0; 1152 1154 case BUS_NOTIFY_DEL_DEVICE:
+1 -1
arch/s390/kvm/vsie.c
··· 297 297 scb_s->crycbd = 0; 298 298 299 299 apie_h = vcpu->arch.sie_block->eca & ECA_APIE; 300 - if (!apie_h && !key_msk) 300 + if (!apie_h && (!key_msk || fmt_o == CRYCB_FORMAT0)) 301 301 return 0; 302 302 303 303 if (!crycb_addr)
+1 -1
arch/sh/boot/dts/Makefile
··· 1 1 ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"") 2 - obj-y += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o 2 + obj-$(CONFIG_USE_BUILTIN_DTB) += $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o 3 3 endif
+1 -1
arch/x86/include/asm/hyperv-tlfs.h
··· 841 841 * count is equal with how many entries of union hv_gpa_page_range can 842 842 * be populated into the input parameter page. 843 843 */ 844 - #define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) / \ 844 + #define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) / \ 845 845 sizeof(union hv_gpa_page_range)) 846 846 847 847 struct hv_guest_mapping_flush_list {
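The one-character hunk above is an operator-precedence fix: without the added parentheses, the division applied only to the `2 * sizeof(u64)` header, not to the remaining page. The arithmetic, assuming a 4 KiB page and an 8-byte `hv_gpa_page_range` entry (sizes here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_BYTES   4096
#define HEADER_BYTES (2 * sizeof(uint64_t))	/* two u64 header fields */
#define ENTRY_BYTES  sizeof(uint64_t)		/* assumed entry size */

/* Pre-fix: division binds tighter than subtraction, so this is
 * PAGE_BYTES - (HEADER_BYTES / ENTRY_BYTES) -- far too large. */
#define REP_COUNT_BUGGY (PAGE_BYTES - HEADER_BYTES / ENTRY_BYTES)

/* Post-fix: subtract the header first, then divide by the entry size. */
#define REP_COUNT_FIXED ((PAGE_BYTES - HEADER_BYTES) / ENTRY_BYTES)
```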
+2
arch/x86/include/asm/kvm_host.h
··· 299 299 unsigned int cr4_smap:1; 300 300 unsigned int cr4_smep:1; 301 301 unsigned int cr4_la57:1; 302 + unsigned int maxphyaddr:6; 302 303 }; 303 304 }; 304 305 ··· 398 397 void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, 399 398 u64 *spte, const void *pte); 400 399 hpa_t root_hpa; 400 + gpa_t root_cr3; 401 401 union kvm_mmu_role mmu_role; 402 402 u8 root_level; 403 403 u8 shadow_root_level;
+4 -2
arch/x86/include/asm/uaccess.h
··· 284 284 __put_user_goto(x, ptr, "l", "k", "ir", label); \ 285 285 break; \ 286 286 case 8: \ 287 - __put_user_goto_u64((__typeof__(*ptr))(x), ptr, label); \ 287 + __put_user_goto_u64(x, ptr, label); \ 288 288 break; \ 289 289 default: \ 290 290 __put_user_bad(); \ ··· 431 431 ({ \ 432 432 __label__ __pu_label; \ 433 433 int __pu_err = -EFAULT; \ 434 + __typeof__(*(ptr)) __pu_val; \ 435 + __pu_val = x; \ 434 436 __uaccess_begin(); \ 435 - __put_user_size((x), (ptr), (size), __pu_label); \ 437 + __put_user_size(__pu_val, (ptr), (size), __pu_label); \ 436 438 __pu_err = 0; \ 437 439 __pu_label: \ 438 440 __uaccess_end(); \
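The `__put_user()` change above makes the macro evaluate `x` exactly once, into a local, before the user-access window opens. One classic hazard that single evaluation avoids, shown with hypothetical macros (`PUT_BUGGY`/`PUT_FIXED` are illustrations, not the kernel macros; `__typeof__` is the GNU extension the kernel itself uses):

```c
#include <assert.h>

static int calls;

static int side_effect(void)
{
	return ++calls;	/* an argument with a visible side effect */
}

/* Expands its argument twice: once in the test, once in the store. */
#define PUT_BUGGY(x, p) do { if ((x) >= 0) *(p) = (x); } while (0)

/* Evaluates the argument exactly once, like the fixed __put_user(). */
#define PUT_FIXED(x, p) do {			\
	__typeof__(*(p)) __v = (x);		\
	if (__v >= 0)				\
		*(p) = __v;			\
} while (0)

static int eval_count_buggy(void)
{
	int out = 0;

	calls = 0;
	PUT_BUGGY(side_effect(), &out);
	return calls;
}

static int eval_count_fixed(void)
{
	int out = 0;

	calls = 0;
	PUT_FIXED(side_effect(), &out);
	return calls;
}
```

In the kernel case there is an extra motivation: `x` must not be evaluated between `__uaccess_begin()` and `__uaccess_end()`, where SMAP protection is relaxed.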
+4
arch/x86/kvm/cpuid.c
··· 335 335 unsigned f_xsaves = kvm_x86_ops->xsaves_supported() ? F(XSAVES) : 0; 336 336 unsigned f_umip = kvm_x86_ops->umip_emulated() ? F(UMIP) : 0; 337 337 unsigned f_intel_pt = kvm_x86_ops->pt_supported() ? F(INTEL_PT) : 0; 338 + unsigned f_la57 = 0; 338 339 339 340 /* cpuid 1.edx */ 340 341 const u32 kvm_cpuid_1_edx_x86_features = ··· 490 489 // TSC_ADJUST is emulated 491 490 entry->ebx |= F(TSC_ADJUST); 492 491 entry->ecx &= kvm_cpuid_7_0_ecx_x86_features; 492 + f_la57 = entry->ecx & F(LA57); 493 493 cpuid_mask(&entry->ecx, CPUID_7_ECX); 494 + /* Set LA57 based on hardware capability. */ 495 + entry->ecx |= f_la57; 494 496 entry->ecx |= f_umip; 495 497 /* PKU is not yet implemented for shadow paging. */ 496 498 if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
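The LA57 hunk above follows a save-mask-restore pattern: `cpuid_mask()` drops bits the host kernel doesn't advertise, but LA57 should track raw hardware capability, so the bit is captured first and OR'ed back afterwards. The shape of it, using CPUID.(EAX=7,ECX=0):ECX bit 16 for LA57:

```c
#include <assert.h>
#include <stdint.h>

#define F_LA57 (1u << 16)	/* CPUID.(EAX=7,ECX=0):ECX.LA57 */

/* Apply a supported-features mask but keep LA57 as reported by hardware. */
static uint32_t mask_ecx_keep_la57(uint32_t ecx, uint32_t supported)
{
	uint32_t f_la57 = ecx & F_LA57;	/* save the hardware bit */

	ecx &= supported;		/* masking may clear it */
	return ecx | f_la57;		/* put it back */
}
```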
+14 -4
arch/x86/kvm/mmu.c
··· 3555 3555 &invalid_list); 3556 3556 mmu->root_hpa = INVALID_PAGE; 3557 3557 } 3558 + mmu->root_cr3 = 0; 3558 3559 } 3559 3560 3560 3561 kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list); ··· 3611 3610 vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->pae_root); 3612 3611 } else 3613 3612 BUG(); 3613 + vcpu->arch.mmu->root_cr3 = vcpu->arch.mmu->get_cr3(vcpu); 3614 3614 3615 3615 return 0; 3616 3616 } ··· 3620 3618 { 3621 3619 struct kvm_mmu_page *sp; 3622 3620 u64 pdptr, pm_mask; 3623 - gfn_t root_gfn; 3621 + gfn_t root_gfn, root_cr3; 3624 3622 int i; 3625 3623 3626 - root_gfn = vcpu->arch.mmu->get_cr3(vcpu) >> PAGE_SHIFT; 3624 + root_cr3 = vcpu->arch.mmu->get_cr3(vcpu); 3625 + root_gfn = root_cr3 >> PAGE_SHIFT; 3627 3626 3628 3627 if (mmu_check_root(vcpu, root_gfn)) 3629 3628 return 1; ··· 3649 3646 ++sp->root_count; 3650 3647 spin_unlock(&vcpu->kvm->mmu_lock); 3651 3648 vcpu->arch.mmu->root_hpa = root; 3652 - return 0; 3649 + goto set_root_cr3; 3653 3650 } 3654 3651 3655 3652 /* ··· 3714 3711 3715 3712 vcpu->arch.mmu->root_hpa = __pa(vcpu->arch.mmu->lm_root); 3716 3713 } 3714 + 3715 + set_root_cr3: 3716 + vcpu->arch.mmu->root_cr3 = root_cr3; 3717 3717 3718 3718 return 0; 3719 3719 } ··· 4169 4163 struct kvm_mmu_root_info root; 4170 4164 struct kvm_mmu *mmu = vcpu->arch.mmu; 4171 4165 4172 - root.cr3 = mmu->get_cr3(vcpu); 4166 + root.cr3 = mmu->root_cr3; 4173 4167 root.hpa = mmu->root_hpa; 4174 4168 4175 4169 for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) { ··· 4182 4176 } 4183 4177 4184 4178 mmu->root_hpa = root.hpa; 4179 + mmu->root_cr3 = root.cr3; 4185 4180 4186 4181 return i < KVM_MMU_NUM_PREV_ROOTS; 4187 4182 } ··· 4777 4770 ext.cr4_pse = !!is_pse(vcpu); 4778 4771 ext.cr4_pke = !!kvm_read_cr4_bits(vcpu, X86_CR4_PKE); 4779 4772 ext.cr4_la57 = !!kvm_read_cr4_bits(vcpu, X86_CR4_LA57); 4773 + ext.maxphyaddr = cpuid_maxphyaddr(vcpu); 4780 4774 4781 4775 ext.valid = 1; 4782 4776 ··· 5524 5516 vcpu->arch.walk_mmu = &vcpu->arch.root_mmu; 5525 5517 5526 5518 
vcpu->arch.root_mmu.root_hpa = INVALID_PAGE; 5519 + vcpu->arch.root_mmu.root_cr3 = 0; 5527 5520 vcpu->arch.root_mmu.translate_gpa = translate_gpa; 5528 5521 for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) 5529 5522 vcpu->arch.root_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID; 5530 5523 5531 5524 vcpu->arch.guest_mmu.root_hpa = INVALID_PAGE; 5525 + vcpu->arch.guest_mmu.root_cr3 = 0; 5532 5526 vcpu->arch.guest_mmu.translate_gpa = translate_gpa; 5533 5527 for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) 5534 5528 vcpu->arch.guest_mmu.prev_roots[i] = KVM_MMU_ROOT_INFO_INVALID;
-58
arch/x86/mm/extable.c
··· 117 117 } 118 118 EXPORT_SYMBOL_GPL(ex_handler_fprestore); 119 119 120 - /* Helper to check whether a uaccess fault indicates a kernel bug. */ 121 - static bool bogus_uaccess(struct pt_regs *regs, int trapnr, 122 - unsigned long fault_addr) 123 - { 124 - /* This is the normal case: #PF with a fault address in userspace. */ 125 - if (trapnr == X86_TRAP_PF && fault_addr < TASK_SIZE_MAX) 126 - return false; 127 - 128 - /* 129 - * This code can be reached for machine checks, but only if the #MC 130 - * handler has already decided that it looks like a candidate for fixup. 131 - * This e.g. happens when attempting to access userspace memory which 132 - * the CPU can't access because of uncorrectable bad memory. 133 - */ 134 - if (trapnr == X86_TRAP_MC) 135 - return false; 136 - 137 - /* 138 - * There are two remaining exception types we might encounter here: 139 - * - #PF for faulting accesses to kernel addresses 140 - * - #GP for faulting accesses to noncanonical addresses 141 - * Complain about anything else. 142 - */ 143 - if (trapnr != X86_TRAP_PF && trapnr != X86_TRAP_GP) { 144 - WARN(1, "unexpected trap %d in uaccess\n", trapnr); 145 - return false; 146 - } 147 - 148 - /* 149 - * This is a faulting memory access in kernel space, on a kernel 150 - * address, in a usercopy function. This can e.g. be caused by improper 151 - * use of helpers like __put_user and by improper attempts to access 152 - * userspace addresses in KERNEL_DS regions. 153 - * The one (semi-)legitimate exception are probe_kernel_{read,write}(), 154 - * which can be invoked from places like kgdb, /dev/mem (for reading) 155 - * and privileged BPF code (for reading). 156 - * The probe_kernel_*() functions set the kernel_uaccess_faults_ok flag 157 - * to tell us that faulting on kernel addresses, and even noncanonical 158 - * addresses, in a userspace accessor does not necessarily imply a 159 - * kernel bug, root might just be doing weird stuff. 
160 - */ 161 - if (current->kernel_uaccess_faults_ok) 162 - return false; 163 - 164 - /* This is bad. Refuse the fixup so that we go into die(). */ 165 - if (trapnr == X86_TRAP_PF) { 166 - pr_emerg("BUG: pagefault on kernel address 0x%lx in non-whitelisted uaccess\n", 167 - fault_addr); 168 - } else { 169 - pr_emerg("BUG: GPF in non-whitelisted uaccess (non-canonical address?)\n"); 170 - } 171 - return true; 172 - } 173 - 174 120 __visible bool ex_handler_uaccess(const struct exception_table_entry *fixup, 175 121 struct pt_regs *regs, int trapnr, 176 122 unsigned long error_code, 177 123 unsigned long fault_addr) 178 124 { 179 - if (bogus_uaccess(regs, trapnr, fault_addr)) 180 - return false; 181 125 regs->ip = ex_fixup_addr(fixup); 182 126 return true; 183 127 } ··· 132 188 unsigned long error_code, 133 189 unsigned long fault_addr) 134 190 { 135 - if (bogus_uaccess(regs, trapnr, fault_addr)) 136 - return false; 137 191 /* Special hack for uaccess_err */ 138 192 current->thread.uaccess_err = 1; 139 193 regs->ip = ex_fixup_addr(fixup);
+3 -1
crypto/af_alg.c
··· 122 122 123 123 int af_alg_release(struct socket *sock) 124 124 { 125 - if (sock->sk) 125 + if (sock->sk) { 126 126 sock_put(sock->sk); 127 + sock->sk = NULL; 128 + } 127 129 return 0; 128 130 } 129 131 EXPORT_SYMBOL_GPL(af_alg_release);
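The `af_alg_release()` hunk above clears `sock->sk` after dropping its reference, so a second call to release cannot `sock_put()` the same socket twice. A minimal sketch of the put-and-clear pattern (all names here are hypothetical stand-ins, not the kernel's actual types):

```c
#include <stddef.h>

/* Hypothetical miniature of the af_alg_release() fix: drop the reference
 * once and clear the pointer so a repeated release call is a no-op. */
struct obj { int refcount; };
struct socket { struct obj *sk; };

static void obj_put(struct obj *o) { o->refcount--; }

static int release(struct socket *sock)
{
    if (sock->sk) {
        obj_put(sock->sk);
        sock->sk = NULL;   /* a second release() must not put again */
    }
    return 0;
}

/* Release twice; returns the final refcount (1 with the fix, 0 without). */
static int demo_release_twice(void)
{
    struct obj o = { .refcount = 2 };
    struct socket s = { .sk = &o };

    release(&s);
    release(&s);
    return o.refcount;
}
```

Without the cleared pointer, any path that calls release twice would drop the refcount to zero while another holder still expects the object to be live.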
+1 -1
drivers/base/power/runtime.c
··· 95 95 static void pm_runtime_deactivate_timer(struct device *dev) 96 96 { 97 97 if (dev->power.timer_expires > 0) { 98 - hrtimer_cancel(&dev->power.suspend_timer); 98 + hrtimer_try_to_cancel(&dev->power.suspend_timer); 99 99 dev->power.timer_expires = 0; 100 100 } 101 101 }
+2 -3
drivers/clk/at91/at91sam9x5.c
··· 144 144 return; 145 145 146 146 at91sam9x5_pmc = pmc_data_allocate(PMC_MAIN + 1, 147 - nck(at91sam9x5_systemck), 148 - nck(at91sam9x35_periphck), 0); 147 + nck(at91sam9x5_systemck), 31, 0); 149 148 if (!at91sam9x5_pmc) 150 149 return; 151 150 ··· 209 210 parent_names[1] = "mainck"; 210 211 parent_names[2] = "plladivck"; 211 212 parent_names[3] = "utmick"; 212 - parent_names[4] = "mck"; 213 + parent_names[4] = "masterck"; 213 214 for (i = 0; i < 2; i++) { 214 215 char name[6]; 215 216
+2 -2
drivers/clk/at91/sama5d2.c
··· 240 240 parent_names[1] = "mainck"; 241 241 parent_names[2] = "plladivck"; 242 242 parent_names[3] = "utmick"; 243 - parent_names[4] = "mck"; 243 + parent_names[4] = "masterck"; 244 244 for (i = 0; i < 3; i++) { 245 245 char name[6]; 246 246 ··· 291 291 parent_names[1] = "mainck"; 292 292 parent_names[2] = "plladivck"; 293 293 parent_names[3] = "utmick"; 294 - parent_names[4] = "mck"; 294 + parent_names[4] = "masterck"; 295 295 parent_names[5] = "audiopll_pmcck"; 296 296 for (i = 0; i < ARRAY_SIZE(sama5d2_gck); i++) { 297 297 hw = at91_clk_register_generated(regmap, &pmc_pcr_lock,
+1 -1
drivers/clk/at91/sama5d4.c
··· 207 207 parent_names[1] = "mainck"; 208 208 parent_names[2] = "plladivck"; 209 209 parent_names[3] = "utmick"; 210 - parent_names[4] = "mck"; 210 + parent_names[4] = "masterck"; 211 211 for (i = 0; i < 3; i++) { 212 212 char name[6]; 213 213
+2 -2
drivers/clk/sunxi-ng/ccu-sun6i-a31.c
··· 264 264 static SUNXI_CCU_GATE(ahb1_mmc2_clk, "ahb1-mmc2", "ahb1", 265 265 0x060, BIT(10), 0); 266 266 static SUNXI_CCU_GATE(ahb1_mmc3_clk, "ahb1-mmc3", "ahb1", 267 - 0x060, BIT(12), 0); 267 + 0x060, BIT(11), 0); 268 268 static SUNXI_CCU_GATE(ahb1_nand1_clk, "ahb1-nand1", "ahb1", 269 - 0x060, BIT(13), 0); 269 + 0x060, BIT(12), 0); 270 270 static SUNXI_CCU_GATE(ahb1_nand0_clk, "ahb1-nand0", "ahb1", 271 271 0x060, BIT(13), 0); 272 272 static SUNXI_CCU_GATE(ahb1_sdram_clk, "ahb1-sdram", "ahb1",
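The A31 gate hunk shifts the mmc3 and nand1 gates down to bits 11 and 12, so nand0 no longer shares `BIT(13)` with nand1. A sketch of the mask arithmetic involved (the `BIT()` definition matches the usual single-bit-mask convention; the combined-mask check is illustrative, not kernel code):

```c
/* BIT(n), as used in the gate tables above: a single-bit mask. */
#define BIT(n) (1UL << (n))

/* With the fix, mmc3 (bit 11), nand1 (bit 12) and nand0 (bit 13) occupy
 * three distinct bits; before it, nand1 and nand0 both claimed bit 13. */
static unsigned long demo_gate_mask(void)
{
    return BIT(11) | BIT(12) | BIT(13);
}
```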
+1 -1
drivers/clk/sunxi-ng/ccu-sun8i-v3s.c
··· 542 542 [RST_BUS_OHCI0] = { 0x2c0, BIT(29) }, 543 543 544 544 [RST_BUS_VE] = { 0x2c4, BIT(0) }, 545 - [RST_BUS_TCON0] = { 0x2c4, BIT(3) }, 545 + [RST_BUS_TCON0] = { 0x2c4, BIT(4) }, 546 546 [RST_BUS_CSI] = { 0x2c4, BIT(8) }, 547 547 [RST_BUS_DE] = { 0x2c4, BIT(12) }, 548 548 [RST_BUS_DBG] = { 0x2c4, BIT(31) },
+1 -1
drivers/cpufreq/scmi-cpufreq.c
··· 187 187 188 188 cpufreq_cooling_unregister(priv->cdev); 189 189 dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table); 190 - kfree(priv); 191 190 dev_pm_opp_remove_all_dynamic(priv->cpu_dev); 191 + kfree(priv); 192 192 193 193 return 0; 194 194 }
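The scmi-cpufreq hunk is a pure ordering fix: `dev_pm_opp_remove_all_dynamic(priv->cpu_dev)` dereferences `priv`, so it must run before `kfree(priv)`, not after. A minimal sketch of the corrected teardown order (hypothetical stand-in types, not the driver's real ones):

```c
#include <stdlib.h>

/* Miniature of the ordering fix: every teardown step that reads through
 * priv must run before kfree(priv). */
struct device { int dynamic_opps; };
struct priv { struct device *cpu_dev; };

static void opp_remove_all_dynamic(struct device *dev)
{
    dev->dynamic_opps = 0;
}

/* Returns the device's remaining OPP count after a correctly ordered
 * teardown; the old order would have read priv->cpu_dev from freed memory. */
static int demo_teardown(void)
{
    struct device d = { .dynamic_opps = 4 };
    struct priv *priv = malloc(sizeof(*priv));

    priv->cpu_dev = &d;
    opp_remove_all_dynamic(priv->cpu_dev);  /* last use of priv ... */
    free(priv);                             /* ... then free it */
    return d.dynamic_opps;
}
```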
+1 -1
drivers/crypto/ccree/cc_pm.h
··· 30 30 return 0; 31 31 } 32 32 33 - static void cc_pm_go(struct cc_drvdata *drvdata) {} 33 + static inline void cc_pm_go(struct cc_drvdata *drvdata) {} 34 34 35 35 static inline void cc_pm_fini(struct cc_drvdata *drvdata) {} 36 36
+10 -10
drivers/gpio/gpio-mt7621.c
··· 30 30 #define GPIO_REG_EDGE 0xA0 31 31 32 32 struct mtk_gc { 33 + struct irq_chip irq_chip; 33 34 struct gpio_chip chip; 34 35 spinlock_t lock; 35 36 int bank; ··· 190 189 return 0; 191 190 } 192 191 193 - static struct irq_chip mediatek_gpio_irq_chip = { 194 - .irq_unmask = mediatek_gpio_irq_unmask, 195 - .irq_mask = mediatek_gpio_irq_mask, 196 - .irq_mask_ack = mediatek_gpio_irq_mask, 197 - .irq_set_type = mediatek_gpio_irq_type, 198 - }; 199 - 200 192 static int 201 193 mediatek_gpio_xlate(struct gpio_chip *chip, 202 194 const struct of_phandle_args *spec, u32 *flags) ··· 248 254 return ret; 249 255 } 250 256 257 + rg->irq_chip.name = dev_name(dev); 258 + rg->irq_chip.parent_device = dev; 259 + rg->irq_chip.irq_unmask = mediatek_gpio_irq_unmask; 260 + rg->irq_chip.irq_mask = mediatek_gpio_irq_mask; 261 + rg->irq_chip.irq_mask_ack = mediatek_gpio_irq_mask; 262 + rg->irq_chip.irq_set_type = mediatek_gpio_irq_type; 263 + 251 264 if (mtk->gpio_irq) { 252 265 /* 253 266 * Manually request the irq here instead of passing ··· 271 270 return ret; 272 271 } 273 272 274 - ret = gpiochip_irqchip_add(&rg->chip, &mediatek_gpio_irq_chip, 273 + ret = gpiochip_irqchip_add(&rg->chip, &rg->irq_chip, 275 274 0, handle_simple_irq, IRQ_TYPE_NONE); 276 275 if (ret) { 277 276 dev_err(dev, "failed to add gpiochip_irqchip\n"); 278 277 return ret; 279 278 } 280 279 281 - gpiochip_set_chained_irqchip(&rg->chip, &mediatek_gpio_irq_chip, 280 + gpiochip_set_chained_irqchip(&rg->chip, &rg->irq_chip, 282 281 mtk->gpio_irq, NULL); 283 282 } 284 283 ··· 311 310 mtk->gpio_irq = irq_of_parse_and_map(np, 0); 312 311 mtk->dev = dev; 313 312 platform_set_drvdata(pdev, mtk); 314 - mediatek_gpio_irq_chip.name = dev_name(dev); 315 313 316 314 for (i = 0; i < MTK_BANK_CNT; i++) { 317 315 ret = mediatek_gpio_bank_probe(dev, np, i);
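The gpio-mt7621 hunk moves the `irq_chip` from a single mutable `static` into each bank's `struct mtk_gc`, so every instance can carry its own `name` and `parent_device` instead of all banks overwriting one shared structure. A minimal sketch of the per-instance-embedding pattern (hypothetical stand-in types):

```c
#include <string.h>

/* Miniature of the gpio-mt7621 change: embed the irq_chip in each bank's
 * state instead of sharing one mutable static across all instances. */
struct irq_chip { char name[16]; };
struct mtk_gc { struct irq_chip irq_chip; int bank; };

static void bank_probe(struct mtk_gc *rg, int bank, const char *devname)
{
    rg->bank = bank;
    strncpy(rg->irq_chip.name, devname, sizeof(rg->irq_chip.name) - 1);
    rg->irq_chip.name[sizeof(rg->irq_chip.name) - 1] = '\0';
}

/* Two banks keep distinct names; a single shared static chip could not. */
static int demo_two_banks(void)
{
    struct mtk_gc a = { 0 }, b = { 0 };

    bank_probe(&a, 0, "bank0");
    bank_probe(&b, 1, "bank1");
    return strcmp(a.irq_chip.name, "bank0") == 0 &&
           strcmp(b.irq_chip.name, "bank1") == 0;
}
```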
+1
drivers/gpio/gpio-pxa.c
··· 245 245 { 246 246 switch (gpio_type) { 247 247 case PXA3XX_GPIO: 248 + case MMP2_GPIO: 248 249 return false; 249 250 250 251 default:
+2
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 411 411 struct amdgpu_ctx_mgr ctx_mgr; 412 412 }; 413 413 414 + int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv); 415 + 414 416 int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, 415 417 unsigned size, struct amdgpu_ib *ib); 416 418 void amdgpu_ib_free(struct amdgpu_device *adev, struct amdgpu_ib *ib,
+17 -30
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 131 131 132 132 void amdgpu_amdkfd_device_init(struct amdgpu_device *adev) 133 133 { 134 - int i, n; 134 + int i; 135 135 int last_valid_bit; 136 136 137 137 if (adev->kfd.dev) { ··· 142 142 .gpuvm_size = min(adev->vm_manager.max_pfn 143 143 << AMDGPU_GPU_PAGE_SHIFT, 144 144 AMDGPU_GMC_HOLE_START), 145 - .drm_render_minor = adev->ddev->render->index 145 + .drm_render_minor = adev->ddev->render->index, 146 + .sdma_doorbell_idx = adev->doorbell_index.sdma_engine, 147 + 146 148 }; 147 149 148 150 /* this is going to have a few of the MSBs set that we need to ··· 174 172 &gpu_resources.doorbell_aperture_size, 175 173 &gpu_resources.doorbell_start_offset); 176 174 177 - if (adev->asic_type < CHIP_VEGA10) { 178 - kgd2kfd_device_init(adev->kfd.dev, &gpu_resources); 179 - return; 180 - } 181 - 182 - n = (adev->asic_type < CHIP_VEGA20) ? 2 : 8; 183 - 184 - for (i = 0; i < n; i += 2) { 185 - /* On SOC15 the BIF is involved in routing 186 - * doorbells using the low 12 bits of the 187 - * address. Communicate the assignments to 188 - * KFD. KFD uses two doorbell pages per 189 - * process in case of 64-bit doorbells so we 190 - * can use each doorbell assignment twice. 191 - */ 192 - gpu_resources.sdma_doorbell[0][i] = 193 - adev->doorbell_index.sdma_engine[0] + (i >> 1); 194 - gpu_resources.sdma_doorbell[0][i+1] = 195 - adev->doorbell_index.sdma_engine[0] + 0x200 + (i >> 1); 196 - gpu_resources.sdma_doorbell[1][i] = 197 - adev->doorbell_index.sdma_engine[1] + (i >> 1); 198 - gpu_resources.sdma_doorbell[1][i+1] = 199 - adev->doorbell_index.sdma_engine[1] + 0x200 + (i >> 1); 200 - } 201 - /* Doorbells 0x0e0-0ff and 0x2e0-2ff are reserved for 202 - * SDMA, IH and VCN. So don't use them for the CP. 175 + /* Since SOC15, BIF starts to statically use the 176 + * lower 12 bits of doorbell addresses for routing 177 + * based on settings in registers like 178 + * SDMA0_DOORBELL_RANGE etc.. 
179 + * In order to route a doorbell to CP engine, the lower 180 + * 12 bits of its address has to be outside the range 181 + * set for SDMA, VCN, and IH blocks. 203 182 */ 204 - gpu_resources.reserved_doorbell_mask = 0x1e0; 205 - gpu_resources.reserved_doorbell_val = 0x0e0; 183 + if (adev->asic_type >= CHIP_VEGA10) { 184 + gpu_resources.non_cp_doorbells_start = 185 + adev->doorbell_index.first_non_cp; 186 + gpu_resources.non_cp_doorbells_end = 187 + adev->doorbell_index.last_non_cp; 188 + } 206 189 207 190 kgd2kfd_device_init(adev->kfd.dev, &gpu_resources); 208 191 }
+13 -125
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 204 204 } 205 205 206 206 207 - /* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence(s) from BO's 207 + /* amdgpu_amdkfd_remove_eviction_fence - Removes eviction fence from BO's 208 208 * reservation object. 209 209 * 210 210 * @bo: [IN] Remove eviction fence(s) from this BO 211 - * @ef: [IN] If ef is specified, then this eviction fence is removed if it 211 + * @ef: [IN] This eviction fence is removed if it 212 212 * is present in the shared list. 213 - * @ef_list: [OUT] Returns list of eviction fences. These fences are removed 214 - * from BO's reservation object shared list. 215 - * @ef_count: [OUT] Number of fences in ef_list. 216 213 * 217 - * NOTE: If called with ef_list, then amdgpu_amdkfd_add_eviction_fence must be 218 - * called to restore the eviction fences and to avoid memory leak. This is 219 - * useful for shared BOs. 220 214 * NOTE: Must be called with BO reserved i.e. bo->tbo.resv->lock held. 221 215 */ 222 216 static int amdgpu_amdkfd_remove_eviction_fence(struct amdgpu_bo *bo, 223 - struct amdgpu_amdkfd_fence *ef, 224 - struct amdgpu_amdkfd_fence ***ef_list, 225 - unsigned int *ef_count) 217 + struct amdgpu_amdkfd_fence *ef) 226 218 { 227 219 struct reservation_object *resv = bo->tbo.resv; 228 220 struct reservation_object_list *old, *new; 229 221 unsigned int i, j, k; 230 222 231 - if (!ef && !ef_list) 223 + if (!ef) 232 224 return -EINVAL; 233 - 234 - if (ef_list) { 235 - *ef_list = NULL; 236 - *ef_count = 0; 237 - } 238 225 239 226 old = reservation_object_get_list(resv); 240 227 if (!old) ··· 241 254 f = rcu_dereference_protected(old->shared[i], 242 255 reservation_object_held(resv)); 243 256 244 - if ((ef && f->context == ef->base.context) || 245 - (!ef && to_amdgpu_amdkfd_fence(f))) 257 + if (f->context == ef->base.context) 246 258 RCU_INIT_POINTER(new->shared[--j], f); 247 259 else 248 260 RCU_INIT_POINTER(new->shared[k++], f); 249 261 } 250 262 new->shared_max = old->shared_max; 251 263 new->shared_count = k; 252 - 253 - if 
(!ef) { 254 - unsigned int count = old->shared_count - j; 255 - 256 - /* Alloc memory for count number of eviction fence pointers. 257 - * Fill the ef_list array and ef_count 258 - */ 259 - *ef_list = kcalloc(count, sizeof(**ef_list), GFP_KERNEL); 260 - *ef_count = count; 261 - 262 - if (!*ef_list) { 263 - kfree(new); 264 - return -ENOMEM; 265 - } 266 - } 267 264 268 265 /* Install the new fence list, seqcount provides the barriers */ 269 266 preempt_disable(); ··· 262 291 263 292 f = rcu_dereference_protected(new->shared[i], 264 293 reservation_object_held(resv)); 265 - if (!ef) 266 - (*ef_list)[k++] = to_amdgpu_amdkfd_fence(f); 267 - else 268 - dma_fence_put(f); 294 + dma_fence_put(f); 269 295 } 270 296 kfree_rcu(old, rcu); 271 297 272 298 return 0; 273 - } 274 - 275 - /* amdgpu_amdkfd_add_eviction_fence - Adds eviction fence(s) back into BO's 276 - * reservation object. 277 - * 278 - * @bo: [IN] Add eviction fences to this BO 279 - * @ef_list: [IN] List of eviction fences to be added 280 - * @ef_count: [IN] Number of fences in ef_list. 281 - * 282 - * NOTE: Must call amdgpu_amdkfd_remove_eviction_fence before calling this 283 - * function. 284 - */ 285 - static void amdgpu_amdkfd_add_eviction_fence(struct amdgpu_bo *bo, 286 - struct amdgpu_amdkfd_fence **ef_list, 287 - unsigned int ef_count) 288 - { 289 - int i; 290 - 291 - if (!ef_list || !ef_count) 292 - return; 293 - 294 - for (i = 0; i < ef_count; i++) { 295 - amdgpu_bo_fence(bo, &ef_list[i]->base, true); 296 - /* Re-adding the fence takes an additional reference. Drop that 297 - * reference. 
298 - */ 299 - dma_fence_put(&ef_list[i]->base); 300 - } 301 - 302 - kfree(ef_list); 303 299 } 304 300 305 301 static int amdgpu_amdkfd_bo_validate(struct amdgpu_bo *bo, uint32_t domain, ··· 284 346 ret = ttm_bo_validate(&bo->tbo, &bo->placement, &ctx); 285 347 if (ret) 286 348 goto validate_fail; 287 - if (wait) { 288 - struct amdgpu_amdkfd_fence **ef_list; 289 - unsigned int ef_count; 290 - 291 - ret = amdgpu_amdkfd_remove_eviction_fence(bo, NULL, &ef_list, 292 - &ef_count); 293 - if (ret) 294 - goto validate_fail; 295 - 296 - ttm_bo_wait(&bo->tbo, false, false); 297 - amdgpu_amdkfd_add_eviction_fence(bo, ef_list, ef_count); 298 - } 349 + if (wait) 350 + amdgpu_bo_sync_wait(bo, AMDGPU_FENCE_OWNER_KFD, false); 299 351 300 352 validate_fail: 301 353 return ret; ··· 372 444 { 373 445 int ret; 374 446 struct kfd_bo_va_list *bo_va_entry; 375 - struct amdgpu_bo *pd = vm->root.base.bo; 376 447 struct amdgpu_bo *bo = mem->bo; 377 448 uint64_t va = mem->va; 378 449 struct list_head *list_bo_va = &mem->bo_va_list; ··· 411 484 *p_bo_va_entry = bo_va_entry; 412 485 413 486 /* Allocate new page tables if needed and validate 414 - * them. Clearing of new page tables and validate need to wait 415 - * on move fences. We don't want that to trigger the eviction 416 - * fence, so remove it temporarily. 487 + * them. 
417 488 */ 418 - amdgpu_amdkfd_remove_eviction_fence(pd, 419 - vm->process_info->eviction_fence, 420 - NULL, NULL); 421 - 422 489 ret = amdgpu_vm_alloc_pts(adev, vm, va, amdgpu_bo_size(bo)); 423 490 if (ret) { 424 491 pr_err("Failed to allocate pts, err=%d\n", ret); ··· 425 504 goto err_alloc_pts; 426 505 } 427 506 428 - /* Add the eviction fence back */ 429 - amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true); 430 - 431 507 return 0; 432 508 433 509 err_alloc_pts: 434 - amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true); 435 510 amdgpu_vm_bo_rmv(adev, bo_va_entry->bo_va); 436 511 list_del(&bo_va_entry->bo_list); 437 512 err_vmadd: ··· 726 809 { 727 810 struct amdgpu_bo_va *bo_va = entry->bo_va; 728 811 struct amdgpu_vm *vm = bo_va->base.vm; 729 - struct amdgpu_bo *pd = vm->root.base.bo; 730 812 731 - /* Remove eviction fence from PD (and thereby from PTs too as 732 - * they share the resv. object). Otherwise during PT update 733 - * job (see amdgpu_vm_bo_update_mapping), eviction fence would 734 - * get added to job->sync object and job execution would 735 - * trigger the eviction fence. 
736 - */ 737 - amdgpu_amdkfd_remove_eviction_fence(pd, 738 - vm->process_info->eviction_fence, 739 - NULL, NULL); 740 813 amdgpu_vm_bo_unmap(adev, bo_va, entry->va); 741 814 742 815 amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update); 743 - 744 - /* Add the eviction fence back */ 745 - amdgpu_bo_fence(pd, &vm->process_info->eviction_fence->base, true); 746 816 747 817 amdgpu_sync_fence(NULL, sync, bo_va->last_pt_update, false); 748 818 ··· 906 1002 pr_err("validate_pt_pd_bos() failed\n"); 907 1003 goto validate_pd_fail; 908 1004 } 909 - ret = ttm_bo_wait(&vm->root.base.bo->tbo, false, false); 1005 + amdgpu_bo_sync_wait(vm->root.base.bo, AMDGPU_FENCE_OWNER_KFD, false); 910 1006 if (ret) 911 1007 goto wait_pd_fail; 912 1008 amdgpu_bo_fence(vm->root.base.bo, ··· 1293 1389 * attached 1294 1390 */ 1295 1391 amdgpu_amdkfd_remove_eviction_fence(mem->bo, 1296 - process_info->eviction_fence, 1297 - NULL, NULL); 1392 + process_info->eviction_fence); 1298 1393 pr_debug("Release VA 0x%llx - 0x%llx\n", mem->va, 1299 1394 mem->va + bo_size * (1 + mem->aql_queue)); 1300 1395 ··· 1520 1617 if (mem->mapped_to_gpu_memory == 0 && 1521 1618 !amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) && !mem->bo->pin_count) 1522 1619 amdgpu_amdkfd_remove_eviction_fence(mem->bo, 1523 - process_info->eviction_fence, 1524 - NULL, NULL); 1620 + process_info->eviction_fence); 1525 1621 1526 1622 unreserve_out: 1527 1623 unreserve_bo_and_vms(&ctx, false, false); ··· 1581 1679 } 1582 1680 1583 1681 amdgpu_amdkfd_remove_eviction_fence( 1584 - bo, mem->process_info->eviction_fence, NULL, NULL); 1682 + bo, mem->process_info->eviction_fence); 1585 1683 list_del_init(&mem->validate_list.head); 1586 1684 1587 1685 if (size) ··· 1847 1945 1848 1946 amdgpu_sync_create(&sync); 1849 1947 1850 - /* Avoid triggering eviction fences when unmapping invalid 1851 - * userptr BOs (waits for all fences, doesn't use 1852 - * FENCE_OWNER_VM) 1853 - */ 1854 - list_for_each_entry(peer_vm, &process_info->vm_list_head, 1855 - 
vm_list_node) 1856 - amdgpu_amdkfd_remove_eviction_fence(peer_vm->root.base.bo, 1857 - process_info->eviction_fence, 1858 - NULL, NULL); 1859 - 1860 1948 ret = process_validate_vms(process_info); 1861 1949 if (ret) 1862 1950 goto unreserve_out; ··· 1907 2015 ret = process_update_pds(process_info, &sync); 1908 2016 1909 2017 unreserve_out: 1910 - list_for_each_entry(peer_vm, &process_info->vm_list_head, 1911 - vm_list_node) 1912 - amdgpu_bo_fence(peer_vm->root.base.bo, 1913 - &process_info->eviction_fence->base, true); 1914 2018 ttm_eu_backoff_reservation(&ticket, &resv_list); 1915 2019 amdgpu_sync_wait(&sync, false); 1916 2020 amdgpu_sync_free(&sync);
+8 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
··· 124 124 struct amdgpu_ring *rings[AMDGPU_MAX_RINGS]; 125 125 struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS]; 126 126 unsigned num_rings; 127 + unsigned num_rqs = 0; 127 128 128 129 switch (i) { 129 130 case AMDGPU_HW_IP_GFX: ··· 167 166 break; 168 167 } 169 168 170 - for (j = 0; j < num_rings; ++j) 171 - rqs[j] = &rings[j]->sched.sched_rq[priority]; 169 + for (j = 0; j < num_rings; ++j) { 170 + if (!rings[j]->adev) 171 + continue; 172 + 173 + rqs[num_rqs++] = &rings[j]->sched.sched_rq[priority]; 174 + } 172 175 173 176 for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j) 174 177 r = drm_sched_entity_init(&ctx->entities[i][j].entity, 175 - rqs, num_rings, &ctx->guilty); 178 + rqs, num_rqs, &ctx->guilty); 176 179 if (r) 177 180 goto error_cleanup_entities; 178 181 }
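The amdgpu_ctx hunk stops assuming every ring is initialized: it compacts only the usable rings into `rqs[]`, counting them in the new `num_rqs`, and passes that count to `drm_sched_entity_init()`. The filter-and-compact idiom in isolation (illustrative sketch, with plain ints standing in for ring pointers):

```c
/* Miniature of the amdgpu_ctx fix: compact valid entries into rqs[] and
 * count them, instead of assuming all num_rings entries are usable. */
static unsigned int compact_valid(const int *rings, unsigned int num_rings,
                                  int *rqs)
{
    unsigned int j, num_rqs = 0;

    for (j = 0; j < num_rings; ++j) {
        if (!rings[j])          /* stands in for rings[j]->adev == NULL */
            continue;
        rqs[num_rqs++] = rings[j];
    }
    return num_rqs;
}

/* Four rings, one uninitialized: only three run queues are used. */
static unsigned int demo_compact(void)
{
    int rings[4] = { 10, 0, 20, 30 };
    int rqs[4];

    return compact_valid(rings, 4, rqs);
}
```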
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 158 158 while (size) { 159 159 uint32_t value; 160 160 161 - if (*pos > adev->rmmio_size) 162 - goto end; 163 - 164 161 if (read) { 165 162 value = RREG32(*pos >> 2); 166 163 r = put_user(value, (uint32_t *)buf);
+9
drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
··· 71 71 uint32_t vce_ring6_7; 72 72 } uvd_vce; 73 73 }; 74 + uint32_t first_non_cp; 75 + uint32_t last_non_cp; 74 76 uint32_t max_assignment; 75 77 /* Per engine SDMA doorbell size in dword */ 76 78 uint32_t sdma_doorbell_range; ··· 145 143 AMDGPU_VEGA20_DOORBELL64_VCE_RING2_3 = 0x18D, 146 144 AMDGPU_VEGA20_DOORBELL64_VCE_RING4_5 = 0x18E, 147 145 AMDGPU_VEGA20_DOORBELL64_VCE_RING6_7 = 0x18F, 146 + 147 + AMDGPU_VEGA20_DOORBELL64_FIRST_NON_CP = AMDGPU_VEGA20_DOORBELL_sDMA_ENGINE0, 148 + AMDGPU_VEGA20_DOORBELL64_LAST_NON_CP = AMDGPU_VEGA20_DOORBELL64_VCE_RING6_7, 149 + 148 150 AMDGPU_VEGA20_DOORBELL_MAX_ASSIGNMENT = 0x18F, 149 151 AMDGPU_VEGA20_DOORBELL_INVALID = 0xFFFF 150 152 } AMDGPU_VEGA20_DOORBELL_ASSIGNMENT; ··· 227 221 AMDGPU_DOORBELL64_VCE_RING2_3 = 0xFD, 228 222 AMDGPU_DOORBELL64_VCE_RING4_5 = 0xFE, 229 223 AMDGPU_DOORBELL64_VCE_RING6_7 = 0xFF, 224 + 225 + AMDGPU_DOORBELL64_FIRST_NON_CP = AMDGPU_DOORBELL64_sDMA_ENGINE0, 226 + AMDGPU_DOORBELL64_LAST_NON_CP = AMDGPU_DOORBELL64_VCE_RING6_7, 230 227 231 228 AMDGPU_DOORBELL64_MAX_ASSIGNMENT = 0xFF, 232 229 AMDGPU_DOORBELL64_INVALID = 0xFFFF
-88
drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.c
··· 184 184 return vrefresh; 185 185 } 186 186 187 - void amdgpu_calculate_u_and_p(u32 i, u32 r_c, u32 p_b, 188 - u32 *p, u32 *u) 189 - { 190 - u32 b_c = 0; 191 - u32 i_c; 192 - u32 tmp; 193 - 194 - i_c = (i * r_c) / 100; 195 - tmp = i_c >> p_b; 196 - 197 - while (tmp) { 198 - b_c++; 199 - tmp >>= 1; 200 - } 201 - 202 - *u = (b_c + 1) / 2; 203 - *p = i_c / (1 << (2 * (*u))); 204 - } 205 - 206 - int amdgpu_calculate_at(u32 t, u32 h, u32 fh, u32 fl, u32 *tl, u32 *th) 207 - { 208 - u32 k, a, ah, al; 209 - u32 t1; 210 - 211 - if ((fl == 0) || (fh == 0) || (fl > fh)) 212 - return -EINVAL; 213 - 214 - k = (100 * fh) / fl; 215 - t1 = (t * (k - 100)); 216 - a = (1000 * (100 * h + t1)) / (10000 + (t1 / 100)); 217 - a = (a + 5) / 10; 218 - ah = ((a * t) + 5000) / 10000; 219 - al = a - ah; 220 - 221 - *th = t - ah; 222 - *tl = t + al; 223 - 224 - return 0; 225 - } 226 - 227 - bool amdgpu_is_uvd_state(u32 class, u32 class2) 228 - { 229 - if (class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE) 230 - return true; 231 - if (class & ATOM_PPLIB_CLASSIFICATION_HD2STATE) 232 - return true; 233 - if (class & ATOM_PPLIB_CLASSIFICATION_HDSTATE) 234 - return true; 235 - if (class & ATOM_PPLIB_CLASSIFICATION_SDSTATE) 236 - return true; 237 - if (class2 & ATOM_PPLIB_CLASSIFICATION2_MVC) 238 - return true; 239 - return false; 240 - } 241 - 242 187 bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor) 243 188 { 244 189 switch (sensor) { ··· 892 947 return AMDGPU_PCIE_GEN1; 893 948 } 894 949 return AMDGPU_PCIE_GEN1; 895 - } 896 - 897 - u16 amdgpu_get_pcie_lane_support(struct amdgpu_device *adev, 898 - u16 asic_lanes, 899 - u16 default_lanes) 900 - { 901 - switch (asic_lanes) { 902 - case 0: 903 - default: 904 - return default_lanes; 905 - case 1: 906 - return 1; 907 - case 2: 908 - return 2; 909 - case 4: 910 - return 4; 911 - case 8: 912 - return 8; 913 - case 12: 914 - return 12; 915 - case 16: 916 - return 16; 917 - } 918 - } 919 - 920 - u8 amdgpu_encode_pci_lane_width(u32 
lanes) 921 - { 922 - u8 encoded_lanes[] = { 0, 1, 2, 0, 3, 0, 0, 0, 4, 0, 0, 0, 5, 0, 0, 0, 6 }; 923 - 924 - if (lanes > 16) 925 - return 0; 926 - 927 - return encoded_lanes[lanes]; 928 950 } 929 951 930 952 struct amd_vce_state*
-9
drivers/gpu/drm/amd/amdgpu/amdgpu_dpm.h
··· 486 486 u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev); 487 487 u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev); 488 488 void amdgpu_dpm_get_active_displays(struct amdgpu_device *adev); 489 - bool amdgpu_is_uvd_state(u32 class, u32 class2); 490 - void amdgpu_calculate_u_and_p(u32 i, u32 r_c, u32 p_b, 491 - u32 *p, u32 *u); 492 - int amdgpu_calculate_at(u32 t, u32 h, u32 fh, u32 fl, u32 *tl, u32 *th); 493 489 494 490 bool amdgpu_is_internal_thermal_sensor(enum amdgpu_int_thermal_type sensor); 495 491 ··· 500 504 u32 sys_mask, 501 505 enum amdgpu_pcie_gen asic_gen, 502 506 enum amdgpu_pcie_gen default_gen); 503 - 504 - u16 amdgpu_get_pcie_lane_support(struct amdgpu_device *adev, 505 - u16 asic_lanes, 506 - u16 default_lanes); 507 - u8 amdgpu_encode_pci_lane_width(u32 lanes); 508 507 509 508 struct amd_vce_state* 510 509 amdgpu_get_vce_clock_state(void *handle, u32 idx);
+18 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 73 73 * - 3.27.0 - Add new chunk to to AMDGPU_CS to enable BO_LIST creation. 74 74 * - 3.28.0 - Add AMDGPU_CHUNK_ID_SCHEDULED_DEPENDENCIES 75 75 * - 3.29.0 - Add AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID 76 + * - 3.30.0 - Add AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE. 76 77 */ 77 78 #define KMS_DRIVER_MAJOR 3 78 - #define KMS_DRIVER_MINOR 29 79 + #define KMS_DRIVER_MINOR 30 79 80 #define KMS_DRIVER_PATCHLEVEL 0 80 81 81 82 int amdgpu_vram_limit = 0; ··· 1178 1177 .compat_ioctl = amdgpu_kms_compat_ioctl, 1179 1178 #endif 1180 1179 }; 1180 + 1181 + int amdgpu_file_to_fpriv(struct file *filp, struct amdgpu_fpriv **fpriv) 1182 + { 1183 + struct drm_file *file; 1184 + 1185 + if (!filp) 1186 + return -EINVAL; 1187 + 1188 + if (filp->f_op != &amdgpu_driver_kms_fops) { 1189 + return -EINVAL; 1190 + } 1191 + 1192 + file = filp->private_data; 1193 + *fpriv = file->driver_priv; 1194 + return 0; 1195 + } 1181 1196 1182 1197 static bool 1183 1198 amdgpu_get_crtc_scanout_position(struct drm_device *dev, unsigned int pipe,
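The new `amdgpu_file_to_fpriv()` above refuses to interpret `filp->private_data` unless `filp->f_op` matches the driver's own fops table, since an fd handed in via ioctl could belong to any driver. The validate-before-trust shape, reduced to a sketch (hypothetical stand-in types, `-1` standing in for `-EINVAL`):

```c
#include <stddef.h>

/* Miniature of amdgpu_file_to_fpriv(): check the file's ops table before
 * trusting private_data, so an fd from another driver is rejected. */
struct file_ops { int tag; };
struct file { const struct file_ops *f_op; void *private_data; };

static const struct file_ops driver_fops = { .tag = 1 };

static int file_to_priv(struct file *filp, void **priv)
{
    if (!filp || filp->f_op != &driver_fops)
        return -1;              /* stands in for -EINVAL */
    *priv = filp->private_data;
    return 0;
}

/* A foreign file is rejected; the driver's own file yields its data. */
static int demo_check(void)
{
    static int data = 7;
    struct file_ops other = { .tag = 2 };
    struct file mine = { &driver_fops, &data };
    struct file foreign = { &other, &data };
    void *p = NULL;

    if (file_to_priv(&foreign, &p) != -1)
        return -1;
    if (file_to_priv(&mine, &p) != 0)
        return -1;
    return *(int *)p;
}
```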
+2 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_ih.c
··· 140 140 * Interrupt hander (VI), walk the IH ring. 141 141 * Returns irq process return code. 142 142 */ 143 - int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih, 144 - void (*callback)(struct amdgpu_device *adev, 145 - struct amdgpu_ih_ring *ih)) 143 + int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih) 146 144 { 147 145 u32 wptr; 148 146 ··· 160 162 rmb(); 161 163 162 164 while (ih->rptr != wptr) { 163 - callback(adev, ih); 165 + amdgpu_irq_dispatch(adev, ih); 164 166 ih->rptr &= ih->ptr_mask; 165 167 } 166 168
+1 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
··· 69 69 int amdgpu_ih_ring_init(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih, 70 70 unsigned ring_size, bool use_bus_addr); 71 71 void amdgpu_ih_ring_fini(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih); 72 - int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih, 73 - void (*callback)(struct amdgpu_device *adev, 74 - struct amdgpu_ih_ring *ih)); 72 + int amdgpu_ih_process(struct amdgpu_device *adev, struct amdgpu_ih_ring *ih); 75 73 76 74 #endif
+17 -31
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 131 131 } 132 132 133 133 /** 134 - * amdgpu_irq_callback - callback from the IH ring 135 - * 136 - * @adev: amdgpu device pointer 137 - * @ih: amdgpu ih ring 138 - * 139 - * Callback from IH ring processing to handle the entry at the current position 140 - * and advance the read pointer. 141 - */ 142 - static void amdgpu_irq_callback(struct amdgpu_device *adev, 143 - struct amdgpu_ih_ring *ih) 144 - { 145 - u32 ring_index = ih->rptr >> 2; 146 - struct amdgpu_iv_entry entry; 147 - 148 - entry.iv_entry = (const uint32_t *)&ih->ring[ring_index]; 149 - amdgpu_ih_decode_iv(adev, &entry); 150 - 151 - trace_amdgpu_iv(ih - &adev->irq.ih, &entry); 152 - 153 - amdgpu_irq_dispatch(adev, &entry); 154 - } 155 - 156 - /** 157 134 * amdgpu_irq_handler - IRQ handler 158 135 * 159 136 * @irq: IRQ number (unused) ··· 147 170 struct amdgpu_device *adev = dev->dev_private; 148 171 irqreturn_t ret; 149 172 150 - ret = amdgpu_ih_process(adev, &adev->irq.ih, amdgpu_irq_callback); 173 + ret = amdgpu_ih_process(adev, &adev->irq.ih); 151 174 if (ret == IRQ_HANDLED) 152 175 pm_runtime_mark_last_busy(dev->dev); 153 176 return ret; ··· 165 188 struct amdgpu_device *adev = container_of(work, struct amdgpu_device, 166 189 irq.ih1_work); 167 190 168 - amdgpu_ih_process(adev, &adev->irq.ih1, amdgpu_irq_callback); 191 + amdgpu_ih_process(adev, &adev->irq.ih1); 169 192 } 170 193 171 194 /** ··· 180 203 struct amdgpu_device *adev = container_of(work, struct amdgpu_device, 181 204 irq.ih2_work); 182 205 183 - amdgpu_ih_process(adev, &adev->irq.ih2, amdgpu_irq_callback); 206 + amdgpu_ih_process(adev, &adev->irq.ih2); 184 207 } 185 208 186 209 /** ··· 371 394 * Dispatches IRQ to IP blocks. 
372 395 */ 373 396 void amdgpu_irq_dispatch(struct amdgpu_device *adev, 374 - struct amdgpu_iv_entry *entry) 397 + struct amdgpu_ih_ring *ih) 375 398 { 376 - unsigned client_id = entry->client_id; 377 - unsigned src_id = entry->src_id; 399 + u32 ring_index = ih->rptr >> 2; 400 + struct amdgpu_iv_entry entry; 401 + unsigned client_id, src_id; 378 402 struct amdgpu_irq_src *src; 379 403 bool handled = false; 380 404 int r; 405 + 406 + entry.iv_entry = (const uint32_t *)&ih->ring[ring_index]; 407 + amdgpu_ih_decode_iv(adev, &entry); 408 + 409 + trace_amdgpu_iv(ih - &adev->irq.ih, &entry); 410 + 411 + client_id = entry.client_id; 412 + src_id = entry.src_id; 381 413 382 414 if (client_id >= AMDGPU_IRQ_CLIENTID_MAX) { 383 415 DRM_DEBUG("Invalid client_id in IV: %d\n", client_id); ··· 402 416 client_id, src_id); 403 417 404 418 } else if ((src = adev->irq.client[client_id].sources[src_id])) { 405 - r = src->funcs->process(adev, src, entry); 419 + r = src->funcs->process(adev, src, &entry); 406 420 if (r < 0) 407 421 DRM_ERROR("error processing interrupt (%d)\n", r); 408 422 else if (r) ··· 414 428 415 429 /* Send it to amdkfd as well if it isn't already handled */ 416 430 if (!handled) 417 - amdgpu_amdkfd_interrupt(adev, entry->iv_entry); 431 + amdgpu_amdkfd_interrupt(adev, entry.iv_entry); 418 432 } 419 433 420 434 /**
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.h
··· 108 108 unsigned client_id, unsigned src_id, 109 109 struct amdgpu_irq_src *source); 110 110 void amdgpu_irq_dispatch(struct amdgpu_device *adev, 111 - struct amdgpu_iv_entry *entry); 111 + struct amdgpu_ih_ring *ih); 112 112 int amdgpu_irq_update(struct amdgpu_device *adev, struct amdgpu_irq_src *src, 113 113 unsigned type); 114 114 int amdgpu_irq_get(struct amdgpu_device *adev, struct amdgpu_irq_src *src,
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 207 207 if (!r) { 208 208 acpi_status = amdgpu_acpi_init(adev); 209 209 if (acpi_status) 210 - dev_dbg(&dev->pdev->dev, 210 + dev_dbg(&dev->pdev->dev, 211 211 "Error during ACPI methods call\n"); 212 212 } 213 213 214 214 if (amdgpu_device_is_px(dev)) { 215 + dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP); 215 216 pm_runtime_use_autosuspend(dev->dev); 216 217 pm_runtime_set_autosuspend_delay(dev->dev, 5000); 217 218 pm_runtime_set_active(dev->dev);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 406 406 struct amdgpu_flip_work *pflip_works; 407 407 enum amdgpu_flip_status pflip_status; 408 408 int deferred_flip_completion; 409 + u64 last_flip_vblank; 409 410 /* pll sharing */ 410 411 struct amdgpu_atom_ss ss; 411 412 bool ss_enabled;
+24
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 1285 1285 } 1286 1286 1287 1287 /** 1288 + * amdgpu_bo_sync_wait - Wait for BO reservation fences 1289 + * 1290 + * @bo: buffer object 1291 + * @owner: fence owner 1292 + * @intr: Whether the wait is interruptible 1293 + * 1294 + * Returns: 1295 + * 0 on success, errno otherwise. 1296 + */ 1297 + int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr) 1298 + { 1299 + struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev); 1300 + struct amdgpu_sync sync; 1301 + int r; 1302 + 1303 + amdgpu_sync_create(&sync); 1304 + amdgpu_sync_resv(adev, &sync, bo->tbo.resv, owner, false); 1305 + r = amdgpu_sync_wait(&sync, intr); 1306 + amdgpu_sync_free(&sync); 1307 + 1308 + return r; 1309 + } 1310 + 1311 + /** 1288 1312 * amdgpu_bo_gpu_offset - return GPU offset of bo 1289 1313 * @bo: amdgpu object for which we query the offset 1290 1314 *
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
··· 266 266 int amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo); 267 267 void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, 268 268 bool shared); 269 + int amdgpu_bo_sync_wait(struct amdgpu_bo *bo, void *owner, bool intr); 269 270 u64 amdgpu_bo_gpu_offset(struct amdgpu_bo *bo); 270 271 int amdgpu_bo_validate(struct amdgpu_bo *bo); 271 272 int amdgpu_bo_restore_shadow(struct amdgpu_bo *shadow,
+47 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c
··· 54 54 enum drm_sched_priority priority) 55 55 { 56 56 struct file *filp = fget(fd); 57 - struct drm_file *file; 58 57 struct amdgpu_fpriv *fpriv; 59 58 struct amdgpu_ctx *ctx; 60 59 uint32_t id; 60 + int r; 61 61 62 62 if (!filp) 63 63 return -EINVAL; 64 64 65 - file = filp->private_data; 66 - fpriv = file->driver_priv; 65 + r = amdgpu_file_to_fpriv(filp, &fpriv); 66 + if (r) { 67 + fput(filp); 68 + return r; 69 + } 70 + 67 71 idr_for_each_entry(&fpriv->ctx_mgr.ctx_handles, ctx, id) 68 72 amdgpu_ctx_priority_override(ctx, priority); 69 73 74 + fput(filp); 75 + 76 + return 0; 77 + } 78 + 79 + static int amdgpu_sched_context_priority_override(struct amdgpu_device *adev, 80 + int fd, 81 + unsigned ctx_id, 82 + enum drm_sched_priority priority) 83 + { 84 + struct file *filp = fget(fd); 85 + struct amdgpu_fpriv *fpriv; 86 + struct amdgpu_ctx *ctx; 87 + int r; 88 + 89 + if (!filp) 90 + return -EINVAL; 91 + 92 + r = amdgpu_file_to_fpriv(filp, &fpriv); 93 + if (r) { 94 + fput(filp); 95 + return r; 96 + } 97 + 98 + ctx = amdgpu_ctx_get(fpriv, ctx_id); 99 + 100 + if (!ctx) { 101 + fput(filp); 102 + return -EINVAL; 103 + } 104 + 105 + amdgpu_ctx_priority_override(ctx, priority); 106 + amdgpu_ctx_put(ctx); 70 107 fput(filp); 71 108 72 109 return 0; ··· 118 81 int r; 119 82 120 83 priority = amdgpu_to_sched_priority(args->in.priority); 121 - if (args->in.flags || priority == DRM_SCHED_PRIORITY_INVALID) 84 + if (priority == DRM_SCHED_PRIORITY_INVALID) 122 85 return -EINVAL; 123 86 124 87 switch (args->in.op) { 125 88 case AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE: 126 89 r = amdgpu_sched_process_priority_override(adev, 127 90 args->in.fd, 91 + priority); 92 + break; 93 + case AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE: 94 + r = amdgpu_sched_context_priority_override(adev, 95 + args->in.fd, 96 + args->in.ctx_id, 128 97 priority); 129 98 break; 130 99 default:
+8 -32
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 652 652 struct ttm_bo_global *glob = adev->mman.bdev.glob; 653 653 struct amdgpu_vm_bo_base *bo_base; 654 654 655 + #if 0 655 656 if (vm->bulk_moveable) { 656 657 spin_lock(&glob->lru_lock); 657 658 ttm_bo_bulk_move_lru_tail(&vm->lru_bulk_move); 658 659 spin_unlock(&glob->lru_lock); 659 660 return; 660 661 } 662 + #endif 661 663 662 664 memset(&vm->lru_bulk_move, 0, sizeof(vm->lru_bulk_move)); 663 665 ··· 699 697 { 700 698 struct amdgpu_vm_bo_base *bo_base, *tmp; 701 699 int r = 0; 702 - 703 - vm->bulk_moveable &= list_empty(&vm->evicted); 704 700 705 701 list_for_each_entry_safe(bo_base, tmp, &vm->evicted, vm_status) { 706 702 struct amdgpu_bo *bo = bo_base->bo; ··· 828 828 829 829 WARN_ON(job->ibs[0].length_dw > 64); 830 830 r = amdgpu_sync_resv(adev, &job->sync, bo->tbo.resv, 831 - AMDGPU_FENCE_OWNER_UNDEFINED, false); 831 + AMDGPU_FENCE_OWNER_KFD, false); 832 832 if (r) 833 833 goto error_free; 834 834 ··· 1332 1332 } 1333 1333 } 1334 1334 1335 - 1336 - /** 1337 - * amdgpu_vm_wait_pd - Wait for PT BOs to be free. 1338 - * 1339 - * @adev: amdgpu_device pointer 1340 - * @vm: related vm 1341 - * @owner: fence owner 1342 - * 1343 - * Returns: 1344 - * 0 on success, errno otherwise. 
1345 - */ 1346 - static int amdgpu_vm_wait_pd(struct amdgpu_device *adev, struct amdgpu_vm *vm, 1347 - void *owner) 1348 - { 1349 - struct amdgpu_sync sync; 1350 - int r; 1351 - 1352 - amdgpu_sync_create(&sync); 1353 - amdgpu_sync_resv(adev, &sync, vm->root.base.bo->tbo.resv, owner, false); 1354 - r = amdgpu_sync_wait(&sync, true); 1355 - amdgpu_sync_free(&sync); 1356 - 1357 - return r; 1358 - } 1359 - 1360 1335 /** 1361 1336 * amdgpu_vm_update_func - helper to call update function 1362 1337 * ··· 1426 1451 params.adev = adev; 1427 1452 1428 1453 if (vm->use_cpu_for_update) { 1429 - r = amdgpu_vm_wait_pd(adev, vm, AMDGPU_FENCE_OWNER_VM); 1454 + r = amdgpu_bo_sync_wait(vm->root.base.bo, 1455 + AMDGPU_FENCE_OWNER_VM, true); 1430 1456 if (unlikely(r)) 1431 1457 return r; 1432 1458 ··· 1748 1772 params.adev = adev; 1749 1773 params.vm = vm; 1750 1774 1751 - /* sync to everything on unmapping */ 1775 + /* sync to everything except eviction fences on unmapping */ 1752 1776 if (!(flags & AMDGPU_PTE_VALID)) 1753 - owner = AMDGPU_FENCE_OWNER_UNDEFINED; 1777 + owner = AMDGPU_FENCE_OWNER_KFD; 1754 1778 1755 1779 if (vm->use_cpu_for_update) { 1756 1780 /* params.src is used as flag to indicate system Memory */ ··· 1760 1784 /* Wait for PT BOs to be idle. PTs share the same resv. object 1761 1785 * as the root PD BO 1762 1786 */ 1763 - r = amdgpu_vm_wait_pd(adev, vm, owner); 1787 + r = amdgpu_bo_sync_wait(vm->root.base.bo, owner, true); 1764 1788 if (unlikely(r)) 1765 1789 return r; 1766 1790
+1 -1
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
··· 2980 2980 struct amdgpu_irq_src *source, 2981 2981 struct amdgpu_iv_entry *entry) 2982 2982 { 2983 - unsigned long flags; 2983 + unsigned long flags; 2984 2984 unsigned crtc_id; 2985 2985 struct amdgpu_crtc *amdgpu_crtc; 2986 2986 struct amdgpu_flip_work *works;
+2 -1
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
··· 266 266 } 267 267 268 268 /* Trigger recovery for world switch failure if no TDR */ 269 - if (amdgpu_device_should_recover_gpu(adev)) 269 + if (amdgpu_device_should_recover_gpu(adev) 270 + && amdgpu_lockup_timeout == MAX_SCHEDULE_TIMEOUT) 270 271 amdgpu_device_gpu_recover(adev, NULL); 271 272 } 272 273
+1 -1
drivers/gpu/drm/amd/amdgpu/nbio_v7_4.c
··· 32 32 33 33 static u32 nbio_v7_4_get_rev_id(struct amdgpu_device *adev) 34 34 { 35 - u32 tmp = RREG32_SOC15(NBIO, 0, mmRCC_DEV0_EPF0_STRAP0); 35 + u32 tmp = RREG32_SOC15(NBIO, 0, mmRCC_DEV0_EPF0_STRAP0); 36 36 37 37 tmp &= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0_MASK; 38 38 tmp >>= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0__SHIFT;
+2 -2
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
··· 128 128 129 129 static const struct soc15_reg_golden golden_settings_sdma0_4_2[] = 130 130 { 131 - SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831d07), 131 + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CHICKEN_BITS, 0xfe931f07, 0x02831f07), 132 132 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_CLK_CTRL, 0xffffffff, 0x3f000100), 133 133 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0000773f, 0x00004002), 134 134 SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002), ··· 158 158 }; 159 159 160 160 static const struct soc15_reg_golden golden_settings_sdma1_4_2[] = { 161 - SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), 161 + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07), 162 162 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100), 163 163 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0000773f, 0x00004002), 164 164 SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0000773f, 0x00004002),
+1 -1
drivers/gpu/drm/amd/amdgpu/si.c
··· 1436 1436 AMD_CG_SUPPORT_UVD_MGCG | 1437 1437 AMD_CG_SUPPORT_HDP_LS | 1438 1438 AMD_CG_SUPPORT_HDP_MGCG; 1439 - adev->pg_flags = 0; 1439 + adev->pg_flags = 0; 1440 1440 adev->external_rev_id = (adev->rev_id == 0) ? 1 : 1441 1441 (adev->rev_id == 1) ? 5 : 6; 1442 1442 break;
+2
drivers/gpu/drm/amd/amdgpu/si_dpm.c
··· 6216 6216 si_pi->force_pcie_gen = AMDGPU_PCIE_GEN2; 6217 6217 if (current_link_speed == AMDGPU_PCIE_GEN2) 6218 6218 break; 6219 + /* fall through */ 6219 6220 case AMDGPU_PCIE_GEN2: 6220 6221 if (amdgpu_acpi_pcie_performance_request(adev, PCIE_PERF_REQ_PECI_GEN2, false) == 0) 6221 6222 break; 6222 6223 #endif 6224 + /* fall through */ 6223 6225 default: 6224 6226 si_pi->force_pcie_gen = si_get_current_pcie_speed(adev); 6225 6227 break;
+4
drivers/gpu/drm/amd/amdgpu/vega10_reg_init.c
··· 81 81 adev->doorbell_index.uvd_vce.vce_ring2_3 = AMDGPU_DOORBELL64_VCE_RING2_3; 82 82 adev->doorbell_index.uvd_vce.vce_ring4_5 = AMDGPU_DOORBELL64_VCE_RING4_5; 83 83 adev->doorbell_index.uvd_vce.vce_ring6_7 = AMDGPU_DOORBELL64_VCE_RING6_7; 84 + 85 + adev->doorbell_index.first_non_cp = AMDGPU_DOORBELL64_FIRST_NON_CP; 86 + adev->doorbell_index.last_non_cp = AMDGPU_DOORBELL64_LAST_NON_CP; 87 + 84 88 /* In unit of dword doorbell */ 85 89 adev->doorbell_index.max_assignment = AMDGPU_DOORBELL64_MAX_ASSIGNMENT << 1; 86 90 adev->doorbell_index.sdma_doorbell_range = 4;
+4
drivers/gpu/drm/amd/amdgpu/vega20_reg_init.c
··· 85 85 adev->doorbell_index.uvd_vce.vce_ring2_3 = AMDGPU_VEGA20_DOORBELL64_VCE_RING2_3; 86 86 adev->doorbell_index.uvd_vce.vce_ring4_5 = AMDGPU_VEGA20_DOORBELL64_VCE_RING4_5; 87 87 adev->doorbell_index.uvd_vce.vce_ring6_7 = AMDGPU_VEGA20_DOORBELL64_VCE_RING6_7; 88 + 89 + adev->doorbell_index.first_non_cp = AMDGPU_VEGA20_DOORBELL64_FIRST_NON_CP; 90 + adev->doorbell_index.last_non_cp = AMDGPU_VEGA20_DOORBELL64_LAST_NON_CP; 91 + 88 92 adev->doorbell_index.max_assignment = AMDGPU_VEGA20_DOORBELL_MAX_ASSIGNMENT << 1; 89 93 adev->doorbell_index.sdma_doorbell_range = 20; 90 94 }
+11 -5
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 134 134 */ 135 135 q->doorbell_id = q->properties.queue_id; 136 136 } else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) { 137 - /* For SDMA queues on SOC15, use static doorbell 138 - * assignments based on the engine and queue. 137 + /* For SDMA queues on SOC15 with 8-byte doorbell, use static 138 + * doorbell assignments based on the engine and queue id. 139 + * The doorbell index distance between RLC (2*i) and (2*i+1) 140 + * for an SDMA engine is 512. 141 + */ 139 - q->doorbell_id = dev->shared_resources.sdma_doorbell 140 - [q->properties.sdma_engine_id] 141 - [q->properties.sdma_queue_id]; 142 + uint32_t *idx_offset = 143 + dev->shared_resources.sdma_doorbell_idx; 144 + 145 + q->doorbell_id = idx_offset[q->properties.sdma_engine_id] 146 + + (q->properties.sdma_queue_id & 1) 147 + * KFD_QUEUE_DOORBELL_MIRROR_OFFSET 148 + + (q->properties.sdma_queue_id >> 1); 143 149 } else { 144 150 /* For CP queues on SOC15 reserve a free doorbell ID */ 145 151 unsigned int found;
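The static SDMA doorbell assignment in the hunk above reduces to one expression: even-numbered RLC queues (2*i) sit at the engine's base index, odd ones (2*i+1) land `KFD_QUEUE_DOORBELL_MIRROR_OFFSET` (512) slots higher, and consecutive pairs advance by one. A minimal stand-alone sketch of that arithmetic — the `idx_offset` table contents below are illustrative, not values from the driver:

```c
#include <stdint.h>

#define KFD_QUEUE_DOORBELL_MIRROR_OFFSET 512

/* doorbell = engine base + 512 * (queue id is odd) + queue id / 2:
 * RLC queues 2*i and 2*i+1 end up one doorbell page apart. */
static uint32_t sdma_doorbell_id(const uint32_t *idx_offset,
                                 uint32_t engine_id, uint32_t queue_id)
{
    return idx_offset[engine_id]
         + (queue_id & 1) * KFD_QUEUE_DOORBELL_MIRROR_OFFSET
         + (queue_id >> 1);
}
```

With a two-engine table `{0x100, 0x10a}`, queues 0 and 2 of engine 0 map to 0x100 and 0x101, while queue 1 maps to 0x300, one page above the base.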
+17 -5
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 97 97 #define KFD_CWSR_TBA_TMA_SIZE (PAGE_SIZE * 2) 98 98 #define KFD_CWSR_TMA_OFFSET PAGE_SIZE 99 99 100 + #define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE \ 101 + (KFD_MAX_NUM_OF_PROCESSES * \ 102 + KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) 103 + 104 + #define KFD_KERNEL_QUEUE_SIZE 2048 105 + 106 + /* 107 + * 512 = 0x200 is the doorbell index distance between SDMA RLC (2*i) and 108 + * (2*i+1) in the same SDMA engine on SOC15, which has 8-byte doorbells 109 + * for SDMA. 110 + * A distance of 512 8-byte doorbells (i.e. one page away) ensures that the 111 + * SDMA RLC (2*i+1) doorbells, in terms of their lower 12 address bits, still 112 + * lie within the OFFSET and SIZE set in registers like BIF_SDMA0_DOORBELL_RANGE. 113 + */ 114 + #define KFD_QUEUE_DOORBELL_MIRROR_OFFSET 512 115 + 116 + 100 117 /* 101 118 * Kernel module parameter to specify maximum number of supported queues per 102 119 * device 103 120 */ 104 121 extern int max_num_of_queues_per_device; 105 122 106 - #define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE \ 107 - (KFD_MAX_NUM_OF_PROCESSES * \ 108 - KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) 109 - 110 - #define KFD_KERNEL_QUEUE_SIZE 2048 111 - 112 123 /* Kernel module parameter to specify the scheduling policy */ 113 124 extern int sched_policy;
+9 -5
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 607 607 if (!qpd->doorbell_bitmap) 608 608 return -ENOMEM; 609 609 610 - /* Mask out any reserved doorbells */ 611 - for (i = 0; i < KFD_MAX_NUM_OF_QUEUES_PER_PROCESS; i++) 612 - if ((dev->shared_resources.reserved_doorbell_mask & i) == 613 - dev->shared_resources.reserved_doorbell_val) { 610 + /* Mask out doorbells reserved for SDMA, IH, and VCN on SOC15. */ 611 + for (i = 0; i < KFD_MAX_NUM_OF_QUEUES_PER_PROCESS / 2; i++) { 612 + if (i >= dev->shared_resources.non_cp_doorbells_start 613 + && i <= dev->shared_resources.non_cp_doorbells_end) { 614 614 set_bit(i, qpd->doorbell_bitmap); 615 - pr_debug("reserved doorbell 0x%03x\n", i); 615 + set_bit(i + KFD_QUEUE_DOORBELL_MIRROR_OFFSET, 616 + qpd->doorbell_bitmap); 617 + pr_debug("reserved doorbell 0x%03x and 0x%03x\n", i, 618 + i + KFD_QUEUE_DOORBELL_MIRROR_OFFSET); 616 619 } 620 + } 617 621 618 622 return 0; 619 623 }
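The kfd_process.c hunk above reserves each non-CP doorbell twice: once at index i and once at its mirrored (2*i+1) slot 512 doorbells (one page) above. A small stand-alone model of that bitmap update, using a plain bool array in place of the kernel bitmap API; the array size is an illustrative stand-in for `KFD_MAX_NUM_OF_QUEUES_PER_PROCESS`:

```c
#include <stdbool.h>

#define MAX_QUEUES_PER_PROCESS 1024   /* illustrative bitmap size */
#define MIRROR_OFFSET 512             /* one doorbell page */

/* Reserve every non-CP doorbell in [start, end] together with its
 * mirrored (2*i+1) slot one page above, as the masking loop does. */
static void reserve_non_cp_doorbells(bool *bitmap, int start, int end)
{
    for (int i = 0; i < MAX_QUEUES_PER_PROCESS / 2; i++) {
        if (i >= start && i <= end) {
            bitmap[i] = true;
            bitmap[i + MIRROR_OFFSET] = true;
        }
    }
}
```

Only the first half of the bitmap is scanned, since the second half consists entirely of the mirrored slots.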
+98 -23
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 303 303 return; 304 304 } 305 305 306 + /* Update to correct count(s) if racing with vblank irq */ 307 + amdgpu_crtc->last_flip_vblank = drm_crtc_accurate_vblank_count(&amdgpu_crtc->base); 306 308 307 309 /* wake up userspace */ 308 310 if (amdgpu_crtc->event) { 309 - /* Update to correct count(s) if racing with vblank irq */ 310 - drm_crtc_accurate_vblank_count(&amdgpu_crtc->base); 311 - 312 311 drm_crtc_send_vblank_event(&amdgpu_crtc->base, amdgpu_crtc->event); 313 312 314 313 /* page flip completed. clean up */ ··· 785 786 struct amdgpu_display_manager *dm = &adev->dm; 786 787 int ret = 0; 787 788 789 + WARN_ON(adev->dm.cached_state); 790 + adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev); 791 + 788 792 s3_handle_mst(adev->ddev, true); 789 793 790 794 amdgpu_dm_irq_suspend(adev); 791 795 792 - WARN_ON(adev->dm.cached_state); 793 - adev->dm.cached_state = drm_atomic_helper_suspend(adev->ddev); 794 796 795 797 dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3); 796 798 ··· 3790 3790 * check will succeed, and let DC implement proper check 3791 3791 */ 3792 3792 static const uint32_t rgb_formats[] = { 3793 - DRM_FORMAT_RGB888, 3794 3793 DRM_FORMAT_XRGB8888, 3795 3794 DRM_FORMAT_ARGB8888, 3796 3795 DRM_FORMAT_RGBA8888, ··· 4645 4646 struct amdgpu_bo *abo; 4646 4647 uint64_t tiling_flags, dcc_address; 4647 4648 uint32_t target, target_vblank; 4649 + uint64_t last_flip_vblank; 4650 + bool vrr_active = acrtc_state->freesync_config.state == VRR_STATE_ACTIVE_VARIABLE; 4648 4651 4649 4652 struct { 4650 4653 struct dc_surface_update surface_updates[MAX_SURFACES]; ··· 4679 4678 struct dc_plane_state *dc_plane; 4680 4679 struct dm_plane_state *dm_new_plane_state = to_dm_plane_state(new_plane_state); 4681 4680 4682 - if (plane->type == DRM_PLANE_TYPE_CURSOR) { 4683 - handle_cursor_update(plane, old_plane_state); 4681 + /* Cursor plane is handled after stream updates */ 4682 + if (plane->type == DRM_PLANE_TYPE_CURSOR) 4684 4683 continue; 4685 - } 4686 4684 
4687 4685 if (!fb || !crtc || pcrtc != crtc) 4688 4686 continue; ··· 4712 4712 */ 4713 4713 abo = gem_to_amdgpu_bo(fb->obj[0]); 4714 4714 r = amdgpu_bo_reserve(abo, true); 4715 - if (unlikely(r != 0)) { 4715 + if (unlikely(r != 0)) 4716 4716 DRM_ERROR("failed to reserve buffer before flip\n"); 4717 - WARN_ON(1); 4718 - } 4719 4717 4720 - /* Wait for all fences on this FB */ 4721 - WARN_ON(reservation_object_wait_timeout_rcu(abo->tbo.resv, true, false, 4722 - MAX_SCHEDULE_TIMEOUT) < 0); 4718 + /* 4719 + * Wait for all fences on this FB. Do limited wait to avoid 4720 + * deadlock during GPU reset when this fence will not signal 4721 + * but we hold reservation lock for the BO. 4722 + */ 4723 + r = reservation_object_wait_timeout_rcu(abo->tbo.resv, 4724 + true, false, 4725 + msecs_to_jiffies(5000)); 4726 + if (unlikely(r == 0)) 4727 + DRM_ERROR("Waiting for fences timed out."); 4728 + 4729 + 4723 4730 4724 4731 amdgpu_bo_get_tiling_flags(abo, &tiling_flags); 4725 4732 ··· 4806 4799 * hopefully eliminating dc_*_update structs in their entirety. 4807 4800 */ 4808 4801 if (flip_count) { 4809 - target = (uint32_t)drm_crtc_vblank_count(pcrtc) + *wait_for_vblank; 4802 + if (!vrr_active) { 4803 + /* Use old throttling in non-vrr fixed refresh rate mode 4804 + * to keep flip scheduling based on target vblank counts 4805 + * working in a backwards compatible way, e.g., for 4806 + * clients using the GLX_OML_sync_control extension or 4807 + * DRI3/Present extension with defined target_msc. 4808 + */ 4809 + last_flip_vblank = drm_crtc_vblank_count(pcrtc); 4810 + } 4811 + else { 4812 + /* For variable refresh rate mode only: 4813 + * Get vblank of last completed flip to avoid > 1 vrr 4814 + * flips per video frame by use of throttling, but allow 4815 + * flip programming anywhere in the possibly large 4816 + * variable vrr vblank interval for fine-grained flip 4817 + * timing control and more opportunity to avoid stutter 4818 + * on late submission of flips. 
4819 + */ 4820 + spin_lock_irqsave(&pcrtc->dev->event_lock, flags); 4821 + last_flip_vblank = acrtc_attach->last_flip_vblank; 4822 + spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags); 4823 + } 4824 + 4825 + target = (uint32_t)last_flip_vblank + *wait_for_vblank; 4826 + 4810 4827 /* Prepare wait for target vblank early - before the fence-waits */ 4811 4828 target_vblank = target - (uint32_t)drm_crtc_vblank_count(pcrtc) + 4812 4829 amdgpu_get_vblank_counter_kms(pcrtc->dev, acrtc_attach->crtc_id); ··· 4904 4873 dc_state); 4905 4874 mutex_unlock(&dm->dc_lock); 4906 4875 } 4876 + 4877 + for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) 4878 + if (plane->type == DRM_PLANE_TYPE_CURSOR) 4879 + handle_cursor_update(plane, old_plane_state); 4907 4880 4908 4881 cleanup: 4909 4882 kfree(flip); ··· 5834 5799 old_dm_crtc_state = to_dm_crtc_state(old_crtc_state); 5835 5800 num_plane = 0; 5836 5801 5837 - if (!new_dm_crtc_state->stream) { 5838 - if (!new_dm_crtc_state->stream && old_dm_crtc_state->stream) { 5839 - update_type = UPDATE_TYPE_FULL; 5840 - goto cleanup; 5841 - } 5842 - 5843 - continue; 5802 + if (new_dm_crtc_state->stream != old_dm_crtc_state->stream) { 5803 + update_type = UPDATE_TYPE_FULL; 5804 + goto cleanup; 5844 5805 } 5806 + 5807 + if (!new_dm_crtc_state->stream) 5808 + continue; 5845 5809 5846 5810 for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, j) { 5847 5811 new_plane_crtc = new_plane_state->crtc; ··· 5850 5816 5851 5817 if (plane->type == DRM_PLANE_TYPE_CURSOR) 5852 5818 continue; 5819 + 5820 + if (new_dm_plane_state->dc_state != old_dm_plane_state->dc_state) { 5821 + update_type = UPDATE_TYPE_FULL; 5822 + goto cleanup; 5823 + } 5853 5824 5854 5825 if (!state->allow_modeset) 5855 5826 continue; ··· 5992 5953 ret = drm_atomic_add_affected_planes(state, crtc); 5993 5954 if (ret) 5994 5955 goto fail; 5956 + } 5957 + 5958 + /* 5959 + * Add all primary and overlay planes on the CRTC to the state 
5960 + * whenever a plane is enabled to maintain correct z-ordering 5961 + * and to enable fast surface updates. 5962 + */ 5963 + drm_for_each_crtc(crtc, dev) { 5964 + bool modified = false; 5965 + 5966 + for_each_oldnew_plane_in_state(state, plane, old_plane_state, new_plane_state, i) { 5967 + if (plane->type == DRM_PLANE_TYPE_CURSOR) 5968 + continue; 5969 + 5970 + if (new_plane_state->crtc == crtc || 5971 + old_plane_state->crtc == crtc) { 5972 + modified = true; 5973 + break; 5974 + } 5975 + } 5976 + 5977 + if (!modified) 5978 + continue; 5979 + 5980 + drm_for_each_plane_mask(plane, state->dev, crtc->state->plane_mask) { 5981 + if (plane->type == DRM_PLANE_TYPE_CURSOR) 5982 + continue; 5983 + 5984 + new_plane_state = 5985 + drm_atomic_get_plane_state(state, plane); 5986 + 5987 + if (IS_ERR(new_plane_state)) { 5988 + ret = PTR_ERR(new_plane_state); 5989 + goto fail; 5990 + } 5991 + } 5995 5992 } 5996 5993 5997 5994 /* Remove exiting planes if they are modified */
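The flip-throttling change in the amdgpu_dm.c hunk above picks the base vblank count per mode: fixed refresh keeps the classic current-vblank target (compatible with GLX_OML_sync_control / DRI3 target-MSC clients), while VRR throttles against the vblank of the last completed flip so at most one flip is programmed per variable-length refresh cycle. A sketch of that selection; the function name and parameters are illustrative, not driver API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Base the flip target on the current vblank counter in fixed-refresh
 * mode, or on the last completed flip's vblank in VRR mode. */
static uint64_t flip_target_vblank(bool vrr_active,
                                   uint64_t current_vblank,
                                   uint64_t last_flip_vblank,
                                   uint32_t wait_for_vblank)
{
    uint64_t base = vrr_active ? last_flip_vblank : current_vblank;
    return base + wait_for_vblank;
}
```

In VRR mode the target can thus sit well before the current counter, leaving the whole (possibly long) vblank interval available for fine-grained flip timing.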
+2
drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
··· 265 265 && id.enum_id == obj_id.enum_id) 266 266 return &bp->object_info_tbl.v1_4->display_path[i]; 267 267 } 268 + /* fall through */ 268 269 case OBJECT_TYPE_CONNECTOR: 269 270 case OBJECT_TYPE_GENERIC: 270 271 /* Both Generic and Connector Object ID ··· 278 277 && id.enum_id == obj_id.enum_id) 279 278 return &bp->object_info_tbl.v1_4->display_path[i]; 280 279 } 280 + /* fall through */ 281 281 default: 282 282 return NULL; 283 283 }
+9 -6
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 1138 1138 /* pplib is notified if disp_num changed */ 1139 1139 dc->hwss.optimize_bandwidth(dc, context); 1140 1140 1141 + for (i = 0; i < context->stream_count; i++) 1142 + context->streams[i]->mode_changed = false; 1143 + 1141 1144 dc_release_state(dc->current_state); 1142 1145 1143 1146 dc->current_state = context; ··· 1626 1623 stream_update->adjust->v_total_min, 1627 1624 stream_update->adjust->v_total_max); 1628 1625 1629 - if (stream_update->periodic_vsync_config && pipe_ctx->stream_res.tg->funcs->program_vline_interrupt) 1630 - pipe_ctx->stream_res.tg->funcs->program_vline_interrupt( 1631 - pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing, VLINE0, &stream->periodic_vsync_config); 1626 + if (stream_update->periodic_interrupt0 && 1627 + dc->hwss.setup_periodic_interrupt) 1628 + dc->hwss.setup_periodic_interrupt(pipe_ctx, VLINE0); 1632 1629 1633 - if (stream_update->enhanced_sync_config && pipe_ctx->stream_res.tg->funcs->program_vline_interrupt) 1634 - pipe_ctx->stream_res.tg->funcs->program_vline_interrupt( 1635 - pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing, VLINE1, &stream->enhanced_sync_config); 1630 + if (stream_update->periodic_interrupt1 && 1631 + dc->hwss.setup_periodic_interrupt) 1632 + dc->hwss.setup_periodic_interrupt(pipe_ctx, VLINE1); 1636 1633 1637 1634 if ((stream_update->hdr_static_metadata && !stream->use_dynamic_meta) || 1638 1635 stream_update->vrr_infopacket ||
+17 -7
drivers/gpu/drm/amd/display/dc/dc_stream.h
··· 51 51 bool dummy; 52 52 }; 53 53 54 - union vline_config { 55 - unsigned int line_number; 56 - unsigned long long delta_in_ns; 54 + enum vertical_interrupt_ref_point { 55 + START_V_UPDATE = 0, 56 + START_V_SYNC, 57 + INVALID_POINT 58 + 59 + //For now, only v_update interrupt is used. 60 + //START_V_BLANK, 61 + //START_V_ACTIVE 62 + }; 63 + 64 + struct periodic_interrupt_config { 65 + enum vertical_interrupt_ref_point ref_point; 66 + int lines_offset; 57 67 }; 58 68 59 69 ··· 116 106 /* DMCU info */ 117 107 unsigned int abm_level; 118 108 119 - union vline_config periodic_vsync_config; 120 - union vline_config enhanced_sync_config; 109 + struct periodic_interrupt_config periodic_interrupt0; 110 + struct periodic_interrupt_config periodic_interrupt1; 121 111 122 112 /* from core_stream struct */ 123 113 struct dc_context *ctx; ··· 168 158 struct dc_info_packet *hdr_static_metadata; 169 159 unsigned int *abm_level; 170 160 171 - union vline_config *periodic_vsync_config; 172 - union vline_config *enhanced_sync_config; 161 + struct periodic_interrupt_config *periodic_interrupt0; 162 + struct periodic_interrupt_config *periodic_interrupt1; 173 163 174 164 struct dc_crtc_timing_adjust *adjust; 175 165 struct dc_info_packet *vrr_infopacket;
+24 -21
drivers/gpu/drm/amd/display/dc/dce/dce_abm.c
··· 53 53 54 54 #define MCP_DISABLE_ABM_IMMEDIATELY 255 55 55 56 + static bool dce_abm_set_pipe(struct abm *abm, uint32_t controller_id) 57 + { 58 + struct dce_abm *abm_dce = TO_DCE_ABM(abm); 59 + uint32_t rampingBoundary = 0xFFFF; 60 + 61 + REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 62 + 1, 80000); 63 + 64 + /* set ramping boundary */ 65 + REG_WRITE(MASTER_COMM_DATA_REG1, rampingBoundary); 66 + 67 + /* setDMCUParam_Pipe */ 68 + REG_UPDATE_2(MASTER_COMM_CMD_REG, 69 + MASTER_COMM_CMD_REG_BYTE0, MCP_ABM_PIPE_SET, 70 + MASTER_COMM_CMD_REG_BYTE1, controller_id); 71 + 72 + /* notifyDMCUMsg */ 73 + REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); 74 + 75 + return true; 76 + } 56 77 57 78 static unsigned int calculate_16_bit_backlight_from_pwm(struct dce_abm *abm_dce) 58 79 { ··· 196 175 uint32_t controller_id) 197 176 { 198 177 unsigned int backlight_8_bit = 0; 199 - uint32_t rampingBoundary = 0xFFFF; 200 178 uint32_t s2; 201 179 202 180 if (backlight_pwm_u16_16 & 0x10000) ··· 205 185 // Take MSB of fractional part since backlight is not max 206 186 backlight_8_bit = (backlight_pwm_u16_16 >> 8) & 0xFF; 207 187 208 - /* set ramping boundary */ 209 - REG_WRITE(MASTER_COMM_DATA_REG1, rampingBoundary); 210 - 211 - /* setDMCUParam_Pipe */ 212 - REG_UPDATE_2(MASTER_COMM_CMD_REG, 213 - MASTER_COMM_CMD_REG_BYTE0, MCP_ABM_PIPE_SET, 214 - MASTER_COMM_CMD_REG_BYTE1, controller_id); 215 - 216 - /* notifyDMCUMsg */ 217 - REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); 188 + dce_abm_set_pipe(&abm_dce->base, controller_id); 218 189 219 190 /* waitDMCUReadyForCmd */ 220 191 REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, ··· 320 309 { 321 310 struct dce_abm *abm_dce = TO_DCE_ABM(abm); 322 311 323 - REG_WAIT(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 0, 324 - 1, 80000); 325 - 326 - /* setDMCUParam_ABMLevel */ 327 - REG_UPDATE_2(MASTER_COMM_CMD_REG, 328 - MASTER_COMM_CMD_REG_BYTE0, MCP_ABM_PIPE_SET, 329 - MASTER_COMM_CMD_REG_BYTE1, 
MCP_DISABLE_ABM_IMMEDIATELY); 330 - 331 - /* notifyDMCUMsg */ 332 - REG_UPDATE(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, 1); 312 + dce_abm_set_pipe(abm, MCP_DISABLE_ABM_IMMEDIATELY); 333 313 334 314 abm->stored_backlight_registers.BL_PWM_CNTL = 335 315 REG_READ(BL_PWM_CNTL); ··· 421 419 .abm_init = dce_abm_init, 422 420 .set_abm_level = dce_abm_set_level, 423 421 .init_backlight = dce_abm_init_backlight, 422 + .set_pipe = dce_abm_set_pipe, 424 423 .set_backlight_level_pwm = dce_abm_set_backlight_level_pwm, 425 424 .get_current_backlight = dce_abm_get_current_backlight, 426 425 .get_target_backlight = dce_abm_get_target_backlight,
+8 -3
drivers/gpu/drm/amd/display/dc/dce/dce_clk_mgr.c
··· 696 696 { 697 697 struct dce_clk_mgr *clk_mgr_dce = TO_DCE_CLK_MGR(clk_mgr); 698 698 struct dm_pp_power_level_change_request level_change_req; 699 + int patched_disp_clk = context->bw.dce.dispclk_khz; 700 + 701 + /*TODO: W/A for dal3 linux, investigate why this works */ 702 + if (!clk_mgr_dce->dfs_bypass_active) 703 + patched_disp_clk = patched_disp_clk * 115 / 100; 699 704 700 705 level_change_req.power_level = dce_get_required_clocks_state(clk_mgr, context); 701 706 /* get max clock state from PPLIB */ ··· 710 705 clk_mgr_dce->cur_min_clks_state = level_change_req.power_level; 711 706 } 712 707 713 - if (should_set_clock(safe_to_lower, context->bw.dce.dispclk_khz, clk_mgr->clks.dispclk_khz)) { 714 - context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, context->bw.dce.dispclk_khz); 715 - clk_mgr->clks.dispclk_khz = context->bw.dce.dispclk_khz; 708 + if (should_set_clock(safe_to_lower, patched_disp_clk, clk_mgr->clks.dispclk_khz)) { 709 + context->bw.dce.dispclk_khz = dce_set_clock(clk_mgr, patched_disp_clk); 710 + clk_mgr->clks.dispclk_khz = patched_disp_clk; 716 711 } 717 712 dce11_pplib_apply_display_requirements(clk_mgr->ctx->dc, context); 718 713 }
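The dce_clk_mgr.c workaround above pads the requested display clock by 15% whenever the DFS bypass clock is not active, and programs the padded value instead of the raw request. As a stand-alone sketch of the integer math (kHz, matching the `* 115 / 100` in the hunk; the helper name is illustrative):

```c
#include <stdbool.h>

/* Pad the requested display clock by 15% unless the DFS bypass
 * clock is active, mirroring the dce clk_mgr workaround. */
static int patched_dispclk_khz(int requested_khz, bool dfs_bypass_active)
{
    return dfs_bypass_active ? requested_khz
                             : requested_khz * 115 / 100;
}
```

Integer division truncates, which matches the kernel expression; e.g. a 600 MHz request becomes 690 MHz.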
+1 -1
drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
··· 479 479 case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F: 480 480 sign = 1; 481 481 floating = 1; 482 - /* no break */ 482 + /* fall through */ 483 483 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F: /* shouldn't this get float too? */ 484 484 case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616: 485 485 grph_depth = 3;
+4
drivers/gpu/drm/amd/display/dc/dce100/dce100_hw_sequencer.h
··· 37 37 struct dc *dc, 38 38 struct dc_state *context); 39 39 40 + void dce100_optimize_bandwidth( 41 + struct dc *dc, 42 + struct dc_state *context); 43 + 40 44 bool dce100_enable_display_power_gating(struct dc *dc, uint8_t controller_id, 41 45 struct dc_bios *dcb, 42 46 enum pipe_gating_control power_gating);
+17 -5
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 1300 1300 struct drr_params params = {0}; 1301 1301 unsigned int event_triggers = 0; 1302 1302 1303 + if (dc->hwss.disable_stream_gating) { 1304 + dc->hwss.disable_stream_gating(dc, pipe_ctx); 1305 + } 1306 + 1303 1307 if (pipe_ctx->stream_res.audio != NULL) { 1304 1308 struct audio_output audio_output; 1305 1309 ··· 1333 1329 if (!pipe_ctx->stream->apply_seamless_boot_optimization) 1334 1330 dc->hwss.enable_stream_timing(pipe_ctx, context, dc); 1335 1331 1336 - if (pipe_ctx->stream_res.tg->funcs->program_vupdate_interrupt) 1337 - pipe_ctx->stream_res.tg->funcs->program_vupdate_interrupt( 1338 - pipe_ctx->stream_res.tg, 1339 - &stream->timing); 1332 + if (dc->hwss.setup_vupdate_interrupt) 1333 + dc->hwss.setup_vupdate_interrupt(pipe_ctx); 1340 1334 1341 1335 params.vertical_total_min = stream->adjust.v_total_min; 1342 1336 params.vertical_total_max = stream->adjust.v_total_max; ··· 1523 1521 struct dc_link *edp_link = get_link_for_edp(dc); 1524 1522 bool can_edp_fast_boot_optimize = false; 1525 1523 bool apply_edp_fast_boot_optimization = false; 1524 + bool can_apply_seamless_boot = false; 1525 + 1526 + for (i = 0; i < context->stream_count; i++) { 1527 + if (context->streams[i]->apply_seamless_boot_optimization) { 1528 + can_apply_seamless_boot = true; 1529 + break; 1530 + } 1531 + } 1526 1532 1527 1533 if (edp_link) { 1528 1534 /* this seems to cause blank screens on DCE8 */ ··· 1559 1549 } 1560 1550 } 1561 1551 1562 - if (!apply_edp_fast_boot_optimization) { 1552 + if (!apply_edp_fast_boot_optimization && !can_apply_seamless_boot) { 1563 1553 if (edp_link_to_turnoff) { 1564 1554 /*turn off backlight before DP_blank and encoder powered down*/ 1565 1555 dc->hwss.edp_backlight_control(edp_link_to_turnoff, false); ··· 2686 2676 .set_static_screen_control = set_static_screen_control, 2687 2677 .reset_hw_ctx_wrap = dce110_reset_hw_ctx_wrap, 2688 2678 .enable_stream_timing = dce110_enable_stream_timing, 2679 + .disable_stream_gating = NULL, 2680 + 
.enable_stream_gating = NULL, 2689 2681 .setup_stereo = NULL, 2690 2682 .set_avmute = dce110_set_avmute, 2691 2683 .wait_for_mpcc_disconnect = dce110_wait_for_mpcc_disconnect,
+1 -1
drivers/gpu/drm/amd/display/dc/dce80/dce80_hw_sequencer.c
··· 77 77 dc->hwss.enable_display_power_gating = dce100_enable_display_power_gating; 78 78 dc->hwss.pipe_control_lock = dce_pipe_control_lock; 79 79 dc->hwss.prepare_bandwidth = dce100_prepare_bandwidth; 80 - dc->hwss.optimize_bandwidth = dce100_prepare_bandwidth; 80 + dc->hwss.optimize_bandwidth = dce100_optimize_bandwidth; 81 81 } 82 82
+16 -3
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
··· 792 792 struct dc *dc, 793 793 struct dc_state *context) 794 794 { 795 - /* TODO implement when needed but for now hardcode max value*/ 796 - context->bw.dce.dispclk_khz = 681000; 797 - context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; 795 + int i; 796 + bool at_least_one_pipe = false; 797 + 798 + for (i = 0; i < dc->res_pool->pipe_count; i++) { 799 + if (context->res_ctx.pipe_ctx[i].stream) 800 + at_least_one_pipe = true; 801 + } 802 + 803 + if (at_least_one_pipe) { 804 + /* TODO implement when needed but for now hardcode max value*/ 805 + context->bw.dce.dispclk_khz = 681000; 806 + context->bw.dce.yclk_khz = 250000 * MEMORY_TYPE_MULTIPLIER_CZ; 807 + } else { 808 + context->bw.dce.dispclk_khz = 0; 809 + context->bw.dce.yclk_khz = 0; 810 + } 798 811 799 812 return true; 800 813 }
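The dce80_resource.c change above requests the hardcoded maximum clocks only when at least one pipe actually carries a stream, and zero otherwise so clocks can drop when the display is idle. A stand-alone model of that decision; the struct, helper name, and the `MEMORY_TYPE_MULTIPLIER_CZ` value of 4 are illustrative placeholders, not taken from the driver:

```c
#include <stdbool.h>

#define MEMORY_TYPE_MULTIPLIER_CZ 4   /* illustrative placeholder */

struct bw_out {
    int dispclk_khz;
    int yclk_khz;
};

/* Request max clocks only when some pipe has a stream; otherwise
 * request zero, letting the clocks ramp down on an idle display. */
static struct bw_out dce80_bandwidth(const bool *pipe_has_stream,
                                     int pipe_count)
{
    for (int i = 0; i < pipe_count; i++)
        if (pipe_has_stream[i])
            return (struct bw_out){ 681000,
                                    250000 * MEMORY_TYPE_MULTIPLIER_CZ };
    return (struct bw_out){ 0, 0 };
}
```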
+183 -6
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 959 959 static void dcn10_init_pipes(struct dc *dc, struct dc_state *context) 960 960 { 961 961 int i; 962 + bool can_apply_seamless_boot = false; 963 + 964 + for (i = 0; i < context->stream_count; i++) { 965 + if (context->streams[i]->apply_seamless_boot_optimization) { 966 + can_apply_seamless_boot = true; 967 + break; 968 + } 969 + } 962 970 963 971 for (i = 0; i < dc->res_pool->pipe_count; i++) { 964 972 struct timing_generator *tg = dc->res_pool->timing_generators[i]; 973 + struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; 974 + 975 + /* There is assumption that pipe_ctx is not mapping irregularly 976 + * to non-preferred front end. If pipe_ctx->stream is not NULL, 977 + * we will use the pipe, so don't disable 978 + */ 979 + if (pipe_ctx->stream != NULL) 980 + continue; 965 981 966 982 if (tg->funcs->is_tg_enabled(tg)) 967 983 tg->funcs->lock(tg); ··· 991 975 } 992 976 } 993 977 994 - dc->res_pool->mpc->funcs->mpc_init(dc->res_pool->mpc); 978 + /* Cannot reset the MPC mux if seamless boot */ 979 + if (!can_apply_seamless_boot) 980 + dc->res_pool->mpc->funcs->mpc_init(dc->res_pool->mpc); 995 981 996 982 for (i = 0; i < dc->res_pool->pipe_count; i++) { 997 983 struct timing_generator *tg = dc->res_pool->timing_generators[i]; 998 984 struct hubp *hubp = dc->res_pool->hubps[i]; 999 985 struct dpp *dpp = dc->res_pool->dpps[i]; 1000 986 struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i]; 987 + 988 + // W/A for issue with dc_post_update_surfaces_to_stream 989 + hubp->power_gated = true; 990 + 991 + /* There is assumption that pipe_ctx is not mapping irregularly 992 + * to non-preferred front end. 
If pipe_ctx->stream is not NULL, 993 + * we will use the pipe, so don't disable 994 + */ 995 + if (pipe_ctx->stream != NULL) 996 + continue; 1001 997 1002 998 dpp->funcs->dpp_reset(dpp); 1003 999 ··· 1165 1137 struct clock_source *old_clk = pipe_ctx_old->clock_source; 1166 1138 1167 1139 reset_back_end_for_pipe(dc, pipe_ctx_old, dc->current_state); 1140 + if (dc->hwss.enable_stream_gating) { 1141 + dc->hwss.enable_stream_gating(dc, pipe_ctx); 1142 + } 1168 1143 if (old_clk) 1169 1144 old_clk->funcs->cs_power_down(old_clk); 1170 1145 } 1171 1146 } 1172 - 1173 1147 } 1174 1148 1175 1149 static bool patch_address_for_sbs_tb_stereo( ··· 2192 2162 if (!blank) { 2193 2163 if (stream_res->tg->funcs->set_blank) 2194 2164 stream_res->tg->funcs->set_blank(stream_res->tg, blank); 2195 - if (stream_res->abm) 2165 + if (stream_res->abm) { 2166 + stream_res->abm->funcs->set_pipe(stream_res->abm, stream_res->tg->inst + 1); 2196 2167 stream_res->abm->funcs->set_abm_level(stream_res->abm, stream->abm_level); 2168 + } 2197 2169 } else if (blank) { 2198 2170 if (stream_res->abm) 2199 2171 stream_res->abm->funcs->set_abm_immediate_disable(stream_res->abm); ··· 2693 2661 .mirror = pipe_ctx->plane_state->horizontal_mirror 2694 2662 }; 2695 2663 2696 - pos_cpy.x -= pipe_ctx->plane_state->dst_rect.x; 2697 - pos_cpy.y -= pipe_ctx->plane_state->dst_rect.y; 2664 + pos_cpy.x_hotspot += pipe_ctx->plane_state->dst_rect.x; 2665 + pos_cpy.y_hotspot += pipe_ctx->plane_state->dst_rect.y; 2698 2666 2699 2667 if (pipe_ctx->plane_state->address.type 2700 2668 == PLN_ADDR_TYPE_VIDEO_PROGRESSIVE) ··· 2739 2707 2740 2708 pipe_ctx->plane_res.dpp->funcs->set_optional_cursor_attributes( 2741 2709 pipe_ctx->plane_res.dpp, &opt_attr); 2710 + } 2711 + 2712 + /** 2713 + * apply_front_porch_workaround TODO FPGA still need? 2714 + * 2715 + * This is a workaround for a bug that has existed since R5xx and has not been 2716 + * fixed keep Front porch at minimum 2 for Interlaced mode or 1 for progressive. 
2717 + */ 2718 + static void apply_front_porch_workaround( 2719 + struct dc_crtc_timing *timing) 2720 + { 2721 + if (timing->flags.INTERLACE == 1) { 2722 + if (timing->v_front_porch < 2) 2723 + timing->v_front_porch = 2; 2724 + } else { 2725 + if (timing->v_front_porch < 1) 2726 + timing->v_front_porch = 1; 2727 + } 2728 + } 2729 + 2730 + int get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx) 2731 + { 2732 + struct timing_generator *optc = pipe_ctx->stream_res.tg; 2733 + const struct dc_crtc_timing *dc_crtc_timing = &pipe_ctx->stream->timing; 2734 + struct dc_crtc_timing patched_crtc_timing; 2735 + int vesa_sync_start; 2736 + int asic_blank_end; 2737 + int interlace_factor; 2738 + int vertical_line_start; 2739 + 2740 + patched_crtc_timing = *dc_crtc_timing; 2741 + apply_front_porch_workaround(&patched_crtc_timing); 2742 + 2743 + interlace_factor = patched_crtc_timing.flags.INTERLACE ? 2 : 1; 2744 + 2745 + vesa_sync_start = patched_crtc_timing.v_addressable + 2746 + patched_crtc_timing.v_border_bottom + 2747 + patched_crtc_timing.v_front_porch; 2748 + 2749 + asic_blank_end = (patched_crtc_timing.v_total - 2750 + vesa_sync_start - 2751 + patched_crtc_timing.v_border_top) 2752 + * interlace_factor; 2753 + 2754 + vertical_line_start = asic_blank_end - 2755 + optc->dlg_otg_param.vstartup_start + 1; 2756 + 2757 + return vertical_line_start; 2758 + } 2759 + 2760 + static void calc_vupdate_position( 2761 + struct pipe_ctx *pipe_ctx, 2762 + uint32_t *start_line, 2763 + uint32_t *end_line) 2764 + { 2765 + const struct dc_crtc_timing *dc_crtc_timing = &pipe_ctx->stream->timing; 2766 + int vline_int_offset_from_vupdate = 2767 + pipe_ctx->stream->periodic_interrupt0.lines_offset; 2768 + int vupdate_offset_from_vsync = get_vupdate_offset_from_vsync(pipe_ctx); 2769 + int start_position; 2770 + 2771 + if (vline_int_offset_from_vupdate > 0) 2772 + vline_int_offset_from_vupdate--; 2773 + else if (vline_int_offset_from_vupdate < 0) 2774 + vline_int_offset_from_vupdate++; 2775 + 
2776 + start_position = vline_int_offset_from_vupdate + vupdate_offset_from_vsync; 2777 + 2778 + if (start_position >= 0) 2779 + *start_line = start_position; 2780 + else 2781 + *start_line = dc_crtc_timing->v_total + start_position - 1; 2782 + 2783 + *end_line = *start_line + 2; 2784 + 2785 + if (*end_line >= dc_crtc_timing->v_total) 2786 + *end_line = 2; 2787 + } 2788 + 2789 + static void cal_vline_position( 2790 + struct pipe_ctx *pipe_ctx, 2791 + enum vline_select vline, 2792 + uint32_t *start_line, 2793 + uint32_t *end_line) 2794 + { 2795 + enum vertical_interrupt_ref_point ref_point = INVALID_POINT; 2796 + 2797 + if (vline == VLINE0) 2798 + ref_point = pipe_ctx->stream->periodic_interrupt0.ref_point; 2799 + else if (vline == VLINE1) 2800 + ref_point = pipe_ctx->stream->periodic_interrupt1.ref_point; 2801 + 2802 + switch (ref_point) { 2803 + case START_V_UPDATE: 2804 + calc_vupdate_position( 2805 + pipe_ctx, 2806 + start_line, 2807 + end_line); 2808 + break; 2809 + case START_V_SYNC: 2810 + // Suppose to do nothing because vsync is 0; 2811 + break; 2812 + default: 2813 + ASSERT(0); 2814 + break; 2815 + } 2816 + } 2817 + 2818 + static void dcn10_setup_periodic_interrupt( 2819 + struct pipe_ctx *pipe_ctx, 2820 + enum vline_select vline) 2821 + { 2822 + struct timing_generator *tg = pipe_ctx->stream_res.tg; 2823 + 2824 + if (vline == VLINE0) { 2825 + uint32_t start_line = 0; 2826 + uint32_t end_line = 0; 2827 + 2828 + cal_vline_position(pipe_ctx, vline, &start_line, &end_line); 2829 + 2830 + tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line); 2831 + 2832 + } else if (vline == VLINE1) { 2833 + pipe_ctx->stream_res.tg->funcs->setup_vertical_interrupt1( 2834 + tg, 2835 + pipe_ctx->stream->periodic_interrupt1.lines_offset); 2836 + } 2837 + } 2838 + 2839 + static void dcn10_setup_vupdate_interrupt(struct pipe_ctx *pipe_ctx) 2840 + { 2841 + struct timing_generator *tg = pipe_ctx->stream_res.tg; 2842 + int start_line = 
get_vupdate_offset_from_vsync(pipe_ctx); 2843 + 2844 + if (start_line < 0) { 2845 + ASSERT(0); 2846 + start_line = 0; 2847 + } 2848 + 2849 + if (tg->funcs->setup_vertical_interrupt2) 2850 + tg->funcs->setup_vertical_interrupt2(tg, start_line); 2742 2851 } 2743 2852 2744 2853 static const struct hw_sequencer_funcs dcn10_funcs = { ··· 2929 2756 .edp_wait_for_hpd_ready = hwss_edp_wait_for_hpd_ready, 2930 2757 .set_cursor_position = dcn10_set_cursor_position, 2931 2758 .set_cursor_attribute = dcn10_set_cursor_attribute, 2932 - .set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level 2759 + .set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level, 2760 + .disable_stream_gating = NULL, 2761 + .enable_stream_gating = NULL, 2762 + .setup_periodic_interrupt = dcn10_setup_periodic_interrupt, 2763 + .setup_vupdate_interrupt = dcn10_setup_vupdate_interrupt 2933 2764 }; 2934 2765 2935 2766
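The `get_vupdate_offset_from_vsync()` helper introduced above reduces to a few lines of CRTC timing arithmetic. A standalone sketch under assumed simplified types (the struct fields mirror the `dc_crtc_timing` names used in the hunk; the sample timings in the assertions are illustrative, not real mode lines):

```c
#include <assert.h>

struct crtc_timing {
    int v_total;
    int v_addressable;
    int v_front_porch;
    int v_border_top;
    int v_border_bottom;
    int interlace; /* models flags.INTERLACE */
};

/* front porch workaround: minimum 2 lines interlaced, 1 progressive */
static void front_porch_workaround(struct crtc_timing *t)
{
    int min = t->interlace ? 2 : 1;

    if (t->v_front_porch < min)
        t->v_front_porch = min;
}

static int vupdate_offset_from_vsync(struct crtc_timing timing,
                                     int vstartup_start)
{
    int interlace_factor, vesa_sync_start, asic_blank_end;

    front_porch_workaround(&timing);
    interlace_factor = timing.interlace ? 2 : 1;

    vesa_sync_start = timing.v_addressable +
                      timing.v_border_bottom +
                      timing.v_front_porch;

    asic_blank_end = (timing.v_total - vesa_sync_start -
                      timing.v_border_top) * interlace_factor;

    return asic_blank_end - vstartup_start + 1;
}
```

Moving this math out of the OPTC programming path is what lets the sequencer reuse it for both the VUPDATE (vertical interrupt 2) and the VLINE0 position calculations above.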
+2
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
··· 81 81 struct dc_state *context, 82 82 const struct dc_stream_state *stream); 83 83 84 + int get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx); 85 + 84 86 #endif /* __DC_HWSS_DCN10_H__ */
+20 -117
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.c
··· 92 92 OTG_3D_STRUCTURE_STEREO_SEL_OVR, 0); 93 93 } 94 94 95 - static uint32_t get_start_vline(struct timing_generator *optc, const struct dc_crtc_timing *dc_crtc_timing) 96 - { 97 - struct dc_crtc_timing patched_crtc_timing; 98 - int vesa_sync_start; 99 - int asic_blank_end; 100 - int interlace_factor; 101 - int vertical_line_start; 102 - 103 - patched_crtc_timing = *dc_crtc_timing; 104 - optc1_apply_front_porch_workaround(optc, &patched_crtc_timing); 105 - 106 - vesa_sync_start = patched_crtc_timing.h_addressable + 107 - patched_crtc_timing.h_border_right + 108 - patched_crtc_timing.h_front_porch; 109 - 110 - asic_blank_end = patched_crtc_timing.h_total - 111 - vesa_sync_start - 112 - patched_crtc_timing.h_border_left; 113 - 114 - interlace_factor = patched_crtc_timing.flags.INTERLACE ? 2 : 1; 115 - 116 - vesa_sync_start = patched_crtc_timing.v_addressable + 117 - patched_crtc_timing.v_border_bottom + 118 - patched_crtc_timing.v_front_porch; 119 - 120 - asic_blank_end = (patched_crtc_timing.v_total - 121 - vesa_sync_start - 122 - patched_crtc_timing.v_border_top) 123 - * interlace_factor; 124 - 125 - vertical_line_start = asic_blank_end - optc->dlg_otg_param.vstartup_start + 1; 126 - if (vertical_line_start < 0) { 127 - ASSERT(0); 128 - vertical_line_start = 0; 129 - } 130 - 131 - return vertical_line_start; 132 - } 133 - 134 - static void calc_vline_position( 95 + void optc1_setup_vertical_interrupt0( 135 96 struct timing_generator *optc, 136 - const struct dc_crtc_timing *dc_crtc_timing, 137 - unsigned long long vsync_delta, 138 - uint32_t *start_line, 139 - uint32_t *end_line) 140 - { 141 - unsigned long long req_delta_tens_of_usec = div64_u64((vsync_delta + 9999), 10000); 142 - unsigned long long pix_clk_hundreds_khz = div64_u64((dc_crtc_timing->pix_clk_100hz + 999), 1000); 143 - uint32_t req_delta_lines = (uint32_t) div64_u64( 144 - (req_delta_tens_of_usec * pix_clk_hundreds_khz + dc_crtc_timing->h_total - 1), 145 - dc_crtc_timing->h_total); 146 - 147 - 
uint32_t vsync_line = get_start_vline(optc, dc_crtc_timing); 148 - 149 - if (req_delta_lines != 0) 150 - req_delta_lines--; 151 - 152 - if (req_delta_lines > vsync_line) 153 - *start_line = dc_crtc_timing->v_total - (req_delta_lines - vsync_line) - 1; 154 - else 155 - *start_line = vsync_line - req_delta_lines; 156 - 157 - *end_line = *start_line + 2; 158 - 159 - if (*end_line >= dc_crtc_timing->v_total) 160 - *end_line = 2; 161 - } 162 - 163 - void optc1_program_vline_interrupt( 164 - struct timing_generator *optc, 165 - const struct dc_crtc_timing *dc_crtc_timing, 166 - enum vline_select vline, 167 - const union vline_config *vline_config) 97 + uint32_t start_line, 98 + uint32_t end_line) 168 99 { 169 100 struct optc *optc1 = DCN10TG_FROM_TG(optc); 170 - uint32_t start_line = 0; 171 - uint32_t end_line = 0; 172 101 173 - switch (vline) { 174 - case VLINE0: 175 - calc_vline_position(optc, dc_crtc_timing, vline_config->delta_in_ns, &start_line, &end_line); 176 - REG_SET_2(OTG_VERTICAL_INTERRUPT0_POSITION, 0, 177 - OTG_VERTICAL_INTERRUPT0_LINE_START, start_line, 178 - OTG_VERTICAL_INTERRUPT0_LINE_END, end_line); 179 - break; 180 - case VLINE1: 181 - REG_SET(OTG_VERTICAL_INTERRUPT1_POSITION, 0, 182 - OTG_VERTICAL_INTERRUPT1_LINE_START, vline_config->line_number); 183 - break; 184 - default: 185 - break; 186 - } 102 + REG_SET_2(OTG_VERTICAL_INTERRUPT0_POSITION, 0, 103 + OTG_VERTICAL_INTERRUPT0_LINE_START, start_line, 104 + OTG_VERTICAL_INTERRUPT0_LINE_END, end_line); 187 105 } 188 106 189 - void optc1_program_vupdate_interrupt( 107 + void optc1_setup_vertical_interrupt1( 190 108 struct timing_generator *optc, 191 - const struct dc_crtc_timing *dc_crtc_timing) 109 + uint32_t start_line) 192 110 { 193 111 struct optc *optc1 = DCN10TG_FROM_TG(optc); 194 - int32_t vertical_line_start; 195 - uint32_t asic_blank_end; 196 - uint32_t vesa_sync_start; 197 - struct dc_crtc_timing patched_crtc_timing; 198 112 199 - patched_crtc_timing = *dc_crtc_timing; 200 - 
optc1_apply_front_porch_workaround(optc, &patched_crtc_timing); 113 + REG_SET(OTG_VERTICAL_INTERRUPT1_POSITION, 0, 114 + OTG_VERTICAL_INTERRUPT1_LINE_START, start_line); 115 + } 201 116 202 - /* asic_h_blank_end = HsyncWidth + HbackPorch = 203 - * vesa. usHorizontalTotal - vesa. usHorizontalSyncStart - 204 - * vesa.h_left_border 205 - */ 206 - vesa_sync_start = patched_crtc_timing.h_addressable + 207 - patched_crtc_timing.h_border_right + 208 - patched_crtc_timing.h_front_porch; 209 - 210 - asic_blank_end = patched_crtc_timing.h_total - 211 - vesa_sync_start - 212 - patched_crtc_timing.h_border_left; 213 - 214 - /* Use OTG_VERTICAL_INTERRUPT2 replace VUPDATE interrupt, 215 - * program the reg for interrupt postition. 216 - */ 217 - vertical_line_start = asic_blank_end - optc->dlg_otg_param.vstartup_start + 1; 218 - if (vertical_line_start < 0) 219 - vertical_line_start = 0; 117 + void optc1_setup_vertical_interrupt2( 118 + struct timing_generator *optc, 119 + uint32_t start_line) 120 + { 121 + struct optc *optc1 = DCN10TG_FROM_TG(optc); 220 122 221 123 REG_SET(OTG_VERTICAL_INTERRUPT2_POSITION, 0, 222 - OTG_VERTICAL_INTERRUPT2_LINE_START, vertical_line_start); 124 + OTG_VERTICAL_INTERRUPT2_LINE_START, start_line); 223 125 } 224 126 225 127 /** ··· 1382 1480 static const struct timing_generator_funcs dcn10_tg_funcs = { 1383 1481 .validate_timing = optc1_validate_timing, 1384 1482 .program_timing = optc1_program_timing, 1385 - .program_vline_interrupt = optc1_program_vline_interrupt, 1386 - .program_vupdate_interrupt = optc1_program_vupdate_interrupt, 1483 + .setup_vertical_interrupt0 = optc1_setup_vertical_interrupt0, 1484 + .setup_vertical_interrupt1 = optc1_setup_vertical_interrupt1, 1485 + .setup_vertical_interrupt2 = optc1_setup_vertical_interrupt2, 1387 1486 .program_global_sync = optc1_program_global_sync, 1388 1487 .enable_crtc = optc1_enable_crtc, 1389 1488 .disable_crtc = optc1_disable_crtc,
+9 -4
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_optc.h
··· 483 483 const struct dc_crtc_timing *dc_crtc_timing, 484 484 bool use_vbios); 485 485 486 - void optc1_program_vline_interrupt( 486 + void optc1_setup_vertical_interrupt0( 487 487 struct timing_generator *optc, 488 - const struct dc_crtc_timing *dc_crtc_timing, 489 - enum vline_select vline, 490 - const union vline_config *vline_config); 488 + uint32_t start_line, 489 + uint32_t end_line); 490 + void optc1_setup_vertical_interrupt1( 491 + struct timing_generator *optc, 492 + uint32_t start_line); 493 + void optc1_setup_vertical_interrupt2( 494 + struct timing_generator *optc, 495 + uint32_t start_line); 491 496 492 497 void optc1_program_global_sync( 493 498 struct timing_generator *optc);
+1
drivers/gpu/drm/amd/display/dc/inc/hw/abm.h
··· 46 46 void (*abm_init)(struct abm *abm); 47 47 bool (*set_abm_level)(struct abm *abm, unsigned int abm_level); 48 48 bool (*set_abm_immediate_disable)(struct abm *abm); 49 + bool (*set_pipe)(struct abm *abm, unsigned int controller_id); 49 50 bool (*init_backlight)(struct abm *abm); 50 51 51 52 /* backlight_pwm_u16_16 is unsigned 32 bit,
+9 -14
drivers/gpu/drm/amd/display/dc/inc/hw/timing_generator.h
··· 134 134 135 135 struct drr_params; 136 136 137 - union vline_config; 138 - 139 - 140 - enum vline_select { 141 - VLINE0, 142 - VLINE1, 143 - VLINE2 144 - }; 145 137 146 138 struct timing_generator_funcs { 147 139 bool (*validate_timing)(struct timing_generator *tg, ··· 141 149 void (*program_timing)(struct timing_generator *tg, 142 150 const struct dc_crtc_timing *timing, 143 151 bool use_vbios); 144 - void (*program_vline_interrupt)( 152 + void (*setup_vertical_interrupt0)( 145 153 struct timing_generator *optc, 146 - const struct dc_crtc_timing *dc_crtc_timing, 147 - enum vline_select vline, 148 - const union vline_config *vline_config); 154 + uint32_t start_line, 155 + uint32_t end_line); 156 + void (*setup_vertical_interrupt1)( 157 + struct timing_generator *optc, 158 + uint32_t start_line); 159 + void (*setup_vertical_interrupt2)( 160 + struct timing_generator *optc, 161 + uint32_t start_line); 149 162 150 - void (*program_vupdate_interrupt)(struct timing_generator *optc, 151 - const struct dc_crtc_timing *dc_crtc_timing); 152 163 bool (*enable_crtc)(struct timing_generator *tg); 153 164 bool (*disable_crtc)(struct timing_generator *tg); 154 165 bool (*is_counter_moving)(struct timing_generator *tg);
+12
drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
··· 38 38 PIPE_GATING_CONTROL_INIT 39 39 }; 40 40 41 + enum vline_select { 42 + VLINE0, 43 + VLINE1 44 + }; 45 + 41 46 struct dce_hwseq_wa { 42 47 bool blnd_crtc_trigger; 43 48 bool DEGVIDCN10_253; ··· 72 67 struct stream_resource; 73 68 74 69 struct hw_sequencer_funcs { 70 + 71 + void (*disable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx); 72 + 73 + void (*enable_stream_gating)(struct dc *dc, struct pipe_ctx *pipe_ctx); 75 74 76 75 void (*init_hw)(struct dc *dc); 77 76 ··· 228 219 void (*set_cursor_position)(struct pipe_ctx *pipe); 229 220 void (*set_cursor_attribute)(struct pipe_ctx *pipe); 230 221 void (*set_cursor_sdr_white_level)(struct pipe_ctx *pipe); 222 + 223 + void (*setup_periodic_interrupt)(struct pipe_ctx *pipe_ctx, enum vline_select vline); 224 + void (*setup_vupdate_interrupt)(struct pipe_ctx *pipe_ctx); 231 225 232 226 }; 233 227
+3
drivers/gpu/drm/amd/display/include/dal_asic_id.h
··· 131 131 #define INTERNAL_REV_RAVEN_A0 0x00 /* First spin of Raven */ 132 132 #define RAVEN_A0 0x01 133 133 #define RAVEN_B0 0x21 134 + #define PICASSO_A0 0x41 134 135 #if defined(CONFIG_DRM_AMD_DC_DCN1_01) 135 136 /* DCN1_01 */ 136 137 #define RAVEN2_A0 0x81 ··· 165 164 #define FAMILY_AI 141 166 165 167 166 #define FAMILY_UNKNOWN 0xFF 167 + 168 + 168 169 169 170 #endif /* __DAL_ASIC_ID_H__ */
+4 -19
drivers/gpu/drm/amd/display/modules/power/power_helpers.c
··· 165 165 }; 166 166 #pragma pack(pop) 167 167 168 - static uint16_t backlight_8_to_16(unsigned int backlight_8bit) 169 - { 170 - return (uint16_t)(backlight_8bit * 0x101); 171 - } 172 - 173 168 static void fill_backlight_transform_table(struct dmcu_iram_parameters params, 174 169 struct iram_table_v_2 *table) 175 170 { 176 171 unsigned int i; 177 172 unsigned int num_entries = NUM_BL_CURVE_SEGS; 178 - unsigned int query_input_8bit; 179 - unsigned int query_output_8bit; 180 173 unsigned int lut_index; 181 174 182 175 table->backlight_thresholds[0] = 0; ··· 187 194 * format U4.10. 188 195 */ 189 196 for (i = 1; i+1 < num_entries; i++) { 190 - query_input_8bit = DIV_ROUNDUP((i * 256), num_entries); 191 - 192 197 lut_index = (params.backlight_lut_array_size - 1) * i / (num_entries - 1); 193 198 ASSERT(lut_index < params.backlight_lut_array_size); 194 - query_output_8bit = params.backlight_lut_array[lut_index] >> 8; 195 199 196 200 table->backlight_thresholds[i] = 197 - backlight_8_to_16(query_input_8bit); 201 + cpu_to_be16(DIV_ROUNDUP((i * 65536), num_entries)); 198 202 table->backlight_offsets[i] = 199 - backlight_8_to_16(query_output_8bit); 203 + cpu_to_be16(params.backlight_lut_array[lut_index]); 200 204 } 201 205 } 202 206 ··· 202 212 { 203 213 unsigned int i; 204 214 unsigned int num_entries = NUM_BL_CURVE_SEGS; 205 - unsigned int query_input_8bit; 206 - unsigned int query_output_8bit; 207 215 unsigned int lut_index; 208 216 209 217 table->backlight_thresholds[0] = 0; ··· 219 231 * format U4.10. 
220 232 */ 221 233 for (i = 1; i+1 < num_entries; i++) { 222 - query_input_8bit = DIV_ROUNDUP((i * 256), num_entries); 223 - 224 234 lut_index = (params.backlight_lut_array_size - 1) * i / (num_entries - 1); 225 235 ASSERT(lut_index < params.backlight_lut_array_size); 226 - query_output_8bit = params.backlight_lut_array[lut_index] >> 8; 227 236 228 237 table->backlight_thresholds[i] = 229 - backlight_8_to_16(query_input_8bit); 238 + cpu_to_be16(DIV_ROUNDUP((i * 65536), num_entries)); 230 239 table->backlight_offsets[i] = 231 - backlight_8_to_16(query_output_8bit); 240 + cpu_to_be16(params.backlight_lut_array[lut_index]); 232 241 } 233 242 } 234 243
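The power_helpers.c change above stops truncating the backlight LUT to 8 bits (the old `backlight_8_to_16()` round-trip) and instead writes full 16-bit values byte-swapped for the big-endian DMCU firmware. A host-endianness-independent sketch of that conversion (`DIV_ROUNDUP` and the 65536 scale come from the hunk; `to_be16()` is a userspace stand-in for the kernel's `cpu_to_be16()`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DIV_ROUNDUP(a, b) (((a) + (b) - 1) / (b))

/* pack v in big-endian byte order, whatever the host endianness */
static uint16_t to_be16(uint16_t v)
{
    uint8_t bytes[2] = { (uint8_t)(v >> 8), (uint8_t)(v & 0xff) };
    uint16_t out;

    memcpy(&out, bytes, sizeof(out));
    return out;
}

/* i-th of num_entries evenly spaced thresholds over the full 16-bit range */
static uint16_t backlight_threshold_be(unsigned int i, unsigned int num_entries)
{
    return to_be16((uint16_t)DIV_ROUNDUP(i * 65536u, num_entries));
}

/* return byte k (0 = most significant on the wire) of the packed value */
static uint8_t be16_byte(uint16_t be, int k)
{
    uint8_t bytes[2];

    memcpy(bytes, &be, sizeof(bytes));
    return bytes[k];
}
```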
+8 -11
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
··· 137 137 /* Bit n == 1 means Queue n is available for KFD */ 138 138 DECLARE_BITMAP(queue_bitmap, KGD_MAX_QUEUES); 139 139 140 - /* Doorbell assignments (SOC15 and later chips only). Only 140 + /* SDMA doorbell assignments (SOC15 and later chips only). Only 141 141 * specific doorbells are routed to each SDMA engine. Others 142 142 * are routed to IH and VCN. They are not usable by the CP. 143 - * 144 - * Any doorbell number D that satisfies the following condition 145 - * is reserved: (D & reserved_doorbell_mask) == reserved_doorbell_val 146 - * 147 - * KFD currently uses 1024 (= 0x3ff) doorbells per process. If 148 - * doorbells 0x0e0-0x0ff and 0x2e0-0x2ff are reserved, that means 149 - * mask would be set to 0x1e0 and val set to 0x0e0. 150 143 */ 151 - unsigned int sdma_doorbell[2][8]; 152 - unsigned int reserved_doorbell_mask; 153 - unsigned int reserved_doorbell_val; 144 + uint32_t *sdma_doorbell_idx; 145 + 146 + /* From SOC15 onward, the doorbell index range that is not usable 147 + * for CP queues. 148 + */ 149 + uint32_t non_cp_doorbells_start; 150 + uint32_t non_cp_doorbells_end; 154 151 155 152 /* Base address of doorbell aperture. */ 156 153 phys_addr_t doorbell_physical_address;

+3 -5
drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c
··· 139 139 static int smu10_init_dynamic_state_adjustment_rule_settings( 140 140 struct pp_hwmgr *hwmgr) 141 141 { 142 - uint32_t table_size = 143 - sizeof(struct phm_clock_voltage_dependency_table) + 144 - (7 * sizeof(struct phm_clock_voltage_dependency_record)); 142 + struct phm_clock_voltage_dependency_table *table_clk_vlt; 145 143 146 - struct phm_clock_voltage_dependency_table *table_clk_vlt = 147 - kzalloc(table_size, GFP_KERNEL); 144 + table_clk_vlt = kzalloc(struct_size(table_clk_vlt, entries, 7), 145 + GFP_KERNEL); 148 146 149 147 if (NULL == table_clk_vlt) { 150 148 pr_err("Can not allocate memory!\n");
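The smu10 (and matching smu8) hunks replace an open-coded allocation size with the kernel's `struct_size()` helper for a struct ending in a flexible array member. A userspace sketch of the equivalence (`STRUCT_SIZE` is a simplified stand-in; the real `struct_size()` in `<linux/overflow.h>` additionally saturates on arithmetic overflow, and the record/table layouts here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct clock_voltage_record { unsigned int clk; unsigned int v; };

struct clock_voltage_table {
    unsigned int count;
    struct clock_voltage_record entries[]; /* flexible array member */
};

/* simplified struct_size(): the header plus n trailing entries */
#define STRUCT_SIZE(ptr, member, n) \
    (sizeof(*(ptr)) + (n) * sizeof((ptr)->member[0]))

static size_t table_alloc_size(unsigned int n)
{
    struct clock_voltage_table *t = NULL; /* sizeof does not evaluate t */

    return STRUCT_SIZE(t, entries, n);
}
```

The point of the conversion is that the element type and count live in one expression tied to the struct definition, so the size can't silently go stale if a member changes.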
+2
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
··· 3681 3681 data->force_pcie_gen = PP_PCIEGen2; 3682 3682 if (current_link_speed == PP_PCIEGen2) 3683 3683 break; 3684 + /* fall through */ 3684 3685 case PP_PCIEGen2: 3685 3686 if (0 == amdgpu_acpi_pcie_performance_request(hwmgr->adev, PCIE_PERF_REQ_GEN2, false)) 3686 3687 break; 3687 3688 #endif 3689 + /* fall through */ 3688 3690 default: 3689 3691 data->force_pcie_gen = smu7_get_current_pcie_speed(hwmgr); 3690 3692 break;
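The smu7_hwmgr.c hunk adds explicit `/* fall through */` comments so the intentional case cascade survives `-Wimplicit-fallthrough`. A minimal illustration of the pattern (the gen numbers are made up for the example, not the `PP_PCIEGen` values):

```c
#include <assert.h>

/* each case deliberately falls into the next, accumulating one step per level */
static int steps_from_gen(int gen)
{
    int steps = 0;

    switch (gen) {
    case 3:
        steps++;
        /* fall through */
    case 2:
        steps++;
        /* fall through */
    default:
        steps++;
        break;
    }
    return steps;
}
```

Without the comment (or, in newer kernels, the `fallthrough` pseudo-keyword), the compiler cannot distinguish a deliberate cascade from a forgotten `break`.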
+1 -1
drivers/gpu/drm/amd/powerplay/hwmgr/smu7_powertune.c
··· 1211 1211 hwmgr->platform_descriptor.TDPAdjustment : 1212 1212 (-1 * hwmgr->platform_descriptor.TDPAdjustment); 1213 1213 1214 - if (hwmgr->chip_id > CHIP_TONGA) 1214 + if (hwmgr->chip_id > CHIP_TONGA) 1215 1215 target_tdp = ((100 + adjust_percent) * (int)(cac_table->usTDP * 256)) / 100; 1216 1216 else 1217 1217 target_tdp = ((100 + adjust_percent) * (int)(cac_table->usConfigurableTDP * 256)) / 100;
+3 -5
drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c
··· 272 272 struct pp_hwmgr *hwmgr, 273 273 ATOM_CLK_VOLT_CAPABILITY *disp_voltage_table) 274 274 { 275 - uint32_t table_size = 276 - sizeof(struct phm_clock_voltage_dependency_table) + 277 - (7 * sizeof(struct phm_clock_voltage_dependency_record)); 275 + struct phm_clock_voltage_dependency_table *table_clk_vlt; 278 276 279 - struct phm_clock_voltage_dependency_table *table_clk_vlt = 280 - kzalloc(table_size, GFP_KERNEL); 277 + table_clk_vlt = kzalloc(struct_size(table_clk_vlt, entries, 7), 278 + GFP_KERNEL); 281 279 282 280 if (NULL == table_clk_vlt) { 283 281 pr_err("Can not allocate memory!\n");
+24 -2
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_baco.c
··· 1 + /* 2 + * Copyright 2018 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + * 22 + */ 1 23 #include "amdgpu.h" 2 24 #include "soc15.h" 3 25 #include "soc15_hw_ip.h" ··· 136 114 if (soc15_baco_program_registers(hwmgr, pre_baco_tbl, 137 115 ARRAY_SIZE(pre_baco_tbl))) { 138 116 if (smum_send_msg_to_smc(hwmgr, PPSMC_MSG_EnterBaco)) 139 - return -1; 117 + return -EINVAL; 140 118 141 119 if (soc15_baco_program_registers(hwmgr, enter_baco_tbl, 142 120 ARRAY_SIZE(enter_baco_tbl))) ··· 154 132 } 155 133 } 156 134 157 - return -1; 135 + return -EINVAL; 158 136 }
+2 -2
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_baco.h
··· 20 20 * OTHER DEALINGS IN THE SOFTWARE. 21 21 * 22 22 */ 23 - #ifndef __VEGA10_BOCO_H__ 24 - #define __VEGA10_BOCO_H__ 23 + #ifndef __VEGA10_BACO_H__ 24 + #define __VEGA10_BACO_H__ 25 25 #include "hwmgr.h" 26 26 #include "common_baco.h" 27 27
+25 -3
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_baco.c
··· 1 + /* 2 + * Copyright 2018 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice shall be included in 12 + * all copies or substantial portions of the Software. 13 + * 14 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 + * OTHER DEALINGS IN THE SOFTWARE. 21 + * 22 + */ 1 23 #include "amdgpu.h" 2 24 #include "soc15.h" 3 25 #include "soc15_hw_ip.h" ··· 89 67 90 68 91 69 if(smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_EnterBaco, 0)) 92 - return -1; 70 + return -EINVAL; 93 71 94 72 } else if (state == BACO_STATE_OUT) { 95 73 if (smum_send_msg_to_smc(hwmgr, PPSMC_MSG_ExitBaco)) 96 - return -1; 74 + return -EINVAL; 97 75 if (!soc15_baco_program_registers(hwmgr, clean_baco_tbl, 98 76 ARRAY_SIZE(clean_baco_tbl))) 99 - return -1; 77 + return -EINVAL; 100 78 } 101 79 102 80 return 0;
+2 -2
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_baco.h
··· 20 20 * OTHER DEALINGS IN THE SOFTWARE. 21 21 * 22 22 */ 23 - #ifndef __VEGA20_BOCO_H__ 24 - #define __VEGA20_BOCO_H__ 23 + #ifndef __VEGA20_BACO_H__ 24 + #define __VEGA20_BACO_H__ 25 25 #include "hwmgr.h" 26 26 #include "common_baco.h" 27 27
+1 -1
drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c
··· 3456 3456 disable_mclk_switching = ((1 < hwmgr->display_config->num_display) && 3457 3457 !hwmgr->display_config->multi_monitor_in_sync) || 3458 3458 vblank_too_short; 3459 - latency = hwmgr->display_config->dce_tolerable_mclk_in_active_latency; 3459 + latency = hwmgr->display_config->dce_tolerable_mclk_in_active_latency; 3460 3460 3461 3461 /* gfxclk */ 3462 3462 dpm_table = &(data->dpm_table.gfx_table);
+4
drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
··· 29 29 #include <drm/amdgpu_drm.h> 30 30 #include "smumgr.h" 31 31 32 + MODULE_FIRMWARE("amdgpu/bonaire_smc.bin"); 33 + MODULE_FIRMWARE("amdgpu/bonaire_k_smc.bin"); 34 + MODULE_FIRMWARE("amdgpu/hawaii_smc.bin"); 35 + MODULE_FIRMWARE("amdgpu/hawaii_k_smc.bin"); 32 36 MODULE_FIRMWARE("amdgpu/topaz_smc.bin"); 33 37 MODULE_FIRMWARE("amdgpu/topaz_k_smc.bin"); 34 38 MODULE_FIRMWARE("amdgpu/tonga_smc.bin");
+4
drivers/gpu/drm/bochs/bochs_drv.c
··· 145 145 if (IS_ERR(dev)) 146 146 return PTR_ERR(dev); 147 147 148 + ret = pci_enable_device(pdev); 149 + if (ret) 150 + goto err_free_dev; 151 + 148 152 dev->pdev = pdev; 149 153 pci_set_drvdata(pdev, dev); 150 154
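The bochs fix above checks `pci_enable_device()` and unwinds through a `goto err_free_dev` label rather than leaking the freshly allocated `drm_device`. The idiom in isolation, under stated assumptions (`fail_enable` models `pci_enable_device()` failing; the allocation is a stand-in for the device):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of a probe-style error path: once the device is allocated, every
 * later failure must unwind through a label that frees it before returning.
 */
static int probe(int fail_enable)
{
    int *dev;
    int ret;

    dev = malloc(sizeof(*dev));
    if (!dev)
        return -1;

    ret = fail_enable ? -1 : 0; /* pci_enable_device() stand-in */
    if (ret)
        goto err_free_dev;

    /* success path would register the device here */
    free(dev); /* sketch only: a real probe keeps the device alive */
    return 0;

err_free_dev:
    free(dev);
    return ret;
}
```

The labels are ordered inversely to the acquisitions, so later failure sites simply jump to the label that undoes everything acquired so far.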
+9
drivers/gpu/drm/drm_atomic_helper.c
··· 1608 1608 old_plane_state->crtc != new_plane_state->crtc) 1609 1609 return -EINVAL; 1610 1610 1611 + /* 1612 + * FIXME: Since prepare_fb and cleanup_fb are always called on 1613 + * the new_plane_state for async updates we need to block framebuffer 1614 + * changes. This prevents use of a fb that's been cleaned up and 1615 + * double cleanups from occurring. 1616 + */ 1617 + if (old_plane_state->fb != new_plane_state->fb) 1618 + return -EINVAL; 1619 + 1611 1620 funcs = plane->helper_private; 1612 1621 if (!funcs->atomic_async_update) 1613 1622 return -EINVAL;

+16 -6
drivers/gpu/drm/drm_file.c
··· 262 262 kfree(file); 263 263 } 264 264 265 + static void drm_close_helper(struct file *filp) 266 + { 267 + struct drm_file *file_priv = filp->private_data; 268 + struct drm_device *dev = file_priv->minor->dev; 269 + 270 + mutex_lock(&dev->filelist_mutex); 271 + list_del(&file_priv->lhead); 272 + mutex_unlock(&dev->filelist_mutex); 273 + 274 + drm_file_free(file_priv); 275 + } 276 + 265 277 static int drm_setup(struct drm_device * dev) 266 278 { 267 279 int ret; ··· 330 318 goto err_undo; 331 319 if (need_setup) { 332 320 retcode = drm_setup(dev); 333 - if (retcode) 321 + if (retcode) { 322 + drm_close_helper(filp); 334 323 goto err_undo; 324 + } 335 325 } 336 326 return 0; 337 327 ··· 487 473 488 474 DRM_DEBUG("open_count = %d\n", dev->open_count); 489 475 490 - mutex_lock(&dev->filelist_mutex); 491 - list_del(&file_priv->lhead); 492 - mutex_unlock(&dev->filelist_mutex); 493 - 494 - drm_file_free(file_priv); 476 + drm_close_helper(filp); 495 477 496 478 if (!--dev->open_count) { 497 479 drm_lastclose(dev);
+17 -5
drivers/gpu/drm/drm_ioctl.c
··· 508 508 return err; 509 509 } 510 510 511 + static inline bool 512 + drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags) 513 + { 514 + return drm_core_check_feature(dev, DRIVER_RENDER) && 515 + (flags & DRM_RENDER_ALLOW); 516 + } 517 + 511 518 /** 512 519 * drm_ioctl_permit - Check ioctl permissions against caller 513 520 * ··· 529 522 */ 530 523 int drm_ioctl_permit(u32 flags, struct drm_file *file_priv) 531 524 { 525 + const struct drm_device *dev = file_priv->minor->dev; 526 + 532 527 /* ROOT_ONLY is only for CAP_SYS_ADMIN */ 533 528 if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN))) 534 529 return -EACCES; 535 530 536 - /* AUTH is only for authenticated or render client */ 537 - if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) && 538 - !file_priv->authenticated)) 539 - return -EACCES; 531 + /* AUTH is only for master ... */ 532 + if (unlikely((flags & DRM_AUTH) && drm_is_primary_client(file_priv))) { 533 + /* authenticated ones, or render capable on DRM_RENDER_ALLOW. */ 534 + if (!file_priv->authenticated && 535 + !drm_render_driver_and_ioctl(dev, flags)) 536 + return -EACCES; 537 + } 540 538 541 539 /* MASTER is only for master or control clients */ 542 540 if (unlikely((flags & DRM_MASTER) && ··· 582 570 DRM_IOCTL_DEF(DRM_IOCTL_SET_UNIQUE, drm_invalid_op, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 583 571 DRM_IOCTL_DEF(DRM_IOCTL_BLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 584 572 DRM_IOCTL_DEF(DRM_IOCTL_UNBLOCK, drm_noop, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 585 - DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_AUTH|DRM_UNLOCKED|DRM_MASTER), 573 + DRM_IOCTL_DEF(DRM_IOCTL_AUTH_MAGIC, drm_authmagic, DRM_UNLOCKED|DRM_MASTER), 586 574 587 575 DRM_IOCTL_DEF(DRM_IOCTL_ADD_MAP, drm_legacy_addmap_ioctl, DRM_AUTH|DRM_MASTER|DRM_ROOT_ONLY), 588 576 DRM_IOCTL_DEF(DRM_IOCTL_RM_MAP, drm_legacy_rmmap_ioctl, DRM_AUTH),
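The drm_ioctl.c change narrows `DRM_AUTH` so it only gates primary-node clients, while letting unauthenticated callers through when the driver is render-capable and the ioctl carries `DRM_RENDER_ALLOW`. The decision boils down to one predicate (the flag bits below are illustrative constants, not the kernel's values):

```c
#include <assert.h>
#include <stdbool.h>

#define F_AUTH          0x1u
#define F_RENDER_ALLOW  0x2u

/* mirrors the DRM_AUTH branch of drm_ioctl_permit() after the patch */
static bool auth_permits(unsigned int flags, bool primary_client,
                         bool authenticated, bool render_driver)
{
    if ((flags & F_AUTH) && primary_client) {
        /* authenticated clients, or render-capable driver on RENDER_ALLOW */
        if (!authenticated &&
            !(render_driver && (flags & F_RENDER_ALLOW)))
            return false;
    }
    return true;
}
```

Render clients fall through the check entirely, which matches the intent that render nodes never need DRM-master authentication.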
+2 -1
drivers/gpu/drm/imx/Kconfig
··· 4 4 select VIDEOMODE_HELPERS 5 5 select DRM_GEM_CMA_HELPER 6 6 select DRM_KMS_CMA_HELPER 7 - depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM) 7 + depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM || COMPILE_TEST) 8 8 depends on IMX_IPUV3_CORE 9 9 help 10 10 enable i.MX graphics support ··· 18 18 config DRM_IMX_TVE 19 19 tristate "Support for TV and VGA displays" 20 20 depends on DRM_IMX 21 + depends on COMMON_CLK 21 22 select REGMAP_MMIO 22 23 help 23 24 Choose this to enable the internal Television Encoder (TVe)
+2 -5
drivers/gpu/drm/imx/imx-drm-core.c
··· 49 49 { 50 50 int ret; 51 51 52 - ret = drm_atomic_helper_check_modeset(dev, state); 53 - if (ret) 54 - return ret; 55 - 56 - ret = drm_atomic_helper_check_planes(dev, state); 52 + ret = drm_atomic_helper_check(dev, state); 57 53 if (ret) 58 54 return ret; 59 55 ··· 225 229 drm->mode_config.funcs = &imx_drm_mode_config_funcs; 226 230 drm->mode_config.helper_private = &imx_drm_mode_config_helpers; 227 231 drm->mode_config.allow_fb_modifiers = true; 232 + drm->mode_config.normalize_zpos = true; 228 233 229 234 drm_mode_config_init(drm); 230 235
+28 -2
drivers/gpu/drm/imx/ipuv3-crtc.c
··· 34 34 struct ipu_dc *dc; 35 35 struct ipu_di *di; 36 36 int irq; 37 + struct drm_pending_vblank_event *event; 37 38 }; 38 39 39 40 static inline struct ipu_crtc *to_ipu_crtc(struct drm_crtc *crtc) ··· 174 173 static irqreturn_t ipu_irq_handler(int irq, void *dev_id) 175 174 { 176 175 struct ipu_crtc *ipu_crtc = dev_id; 176 + struct drm_crtc *crtc = &ipu_crtc->base; 177 + unsigned long flags; 178 + int i; 177 179 178 - drm_crtc_handle_vblank(&ipu_crtc->base); 180 + drm_crtc_handle_vblank(crtc); 181 + 182 + if (ipu_crtc->event) { 183 + for (i = 0; i < ARRAY_SIZE(ipu_crtc->plane); i++) { 184 + struct ipu_plane *plane = ipu_crtc->plane[i]; 185 + 186 + if (!plane) 187 + continue; 188 + 189 + if (ipu_plane_atomic_update_pending(&plane->base)) 190 + break; 191 + } 192 + 193 + if (i == ARRAY_SIZE(ipu_crtc->plane)) { 194 + spin_lock_irqsave(&crtc->dev->event_lock, flags); 195 + drm_crtc_send_vblank_event(crtc, ipu_crtc->event); 196 + ipu_crtc->event = NULL; 197 + drm_crtc_vblank_put(crtc); 198 + spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 199 + } 200 + } 179 201 180 202 return IRQ_HANDLED; 181 203 } ··· 247 223 { 248 224 spin_lock_irq(&crtc->dev->event_lock); 249 225 if (crtc->state->event) { 226 + struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc); 227 + 250 228 WARN_ON(drm_crtc_vblank_get(crtc)); 251 - drm_crtc_arm_vblank_event(crtc, crtc->state->event); 229 + ipu_crtc->event = crtc->state->event; 252 230 crtc->state->event = NULL; 253 231 } 254 232 spin_unlock_irq(&crtc->dev->event_lock);
+51 -25
drivers/gpu/drm/imx/ipuv3-plane.c
··· 273 273 274 274 static void ipu_plane_state_reset(struct drm_plane *plane) 275 275 { 276 + unsigned int zpos = (plane->type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1; 276 277 struct ipu_plane_state *ipu_state; 277 278 278 279 if (plane->state) { ··· 285 284 286 285 ipu_state = kzalloc(sizeof(*ipu_state), GFP_KERNEL); 287 286 288 - if (ipu_state) 287 + if (ipu_state) { 289 288 __drm_atomic_helper_plane_reset(plane, &ipu_state->base); 289 + ipu_state->base.zpos = zpos; 290 + ipu_state->base.normalized_zpos = zpos; 291 + } 290 292 } 291 293 292 294 static struct drm_plane_state * ··· 564 560 if (ipu_plane->dp_flow == IPU_DP_FLOW_SYNC_FG) 565 561 ipu_dp_set_window_pos(ipu_plane->dp, dst->x1, dst->y1); 566 562 563 + switch (ipu_plane->dp_flow) { 564 + case IPU_DP_FLOW_SYNC_BG: 565 + if (state->normalized_zpos == 1) { 566 + ipu_dp_set_global_alpha(ipu_plane->dp, 567 + !fb->format->has_alpha, 0xff, 568 + true); 569 + } else { 570 + ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true); 571 + } 572 + break; 573 + case IPU_DP_FLOW_SYNC_FG: 574 + if (state->normalized_zpos == 1) { 575 + ipu_dp_set_global_alpha(ipu_plane->dp, 576 + !fb->format->has_alpha, 0xff, 577 + false); 578 + } 579 + break; 580 + } 581 + 567 582 eba = drm_plane_state_to_eba(state, 0); 568 583 569 584 /* ··· 605 582 active = ipu_idmac_get_current_buffer(ipu_plane->ipu_ch); 606 583 ipu_cpmem_set_buffer(ipu_plane->ipu_ch, !active, eba); 607 584 ipu_idmac_select_buffer(ipu_plane->ipu_ch, !active); 585 + ipu_plane->next_buf = !active; 608 586 if (ipu_plane_separate_alpha(ipu_plane)) { 609 587 active = ipu_idmac_get_current_buffer(ipu_plane->alpha_ch); 610 588 ipu_cpmem_set_buffer(ipu_plane->alpha_ch, !active, ··· 619 595 switch (ipu_plane->dp_flow) { 620 596 case IPU_DP_FLOW_SYNC_BG: 621 597 ipu_dp_setup_channel(ipu_plane->dp, ics, IPUV3_COLORSPACE_RGB); 622 - ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true); 623 598 break; 624 599 case IPU_DP_FLOW_SYNC_FG: 625 600 ipu_dp_setup_channel(ipu_plane->dp, ics, 
626 601 IPUV3_COLORSPACE_UNKNOWN); 627 - /* Enable local alpha on partial plane */ 628 - switch (fb->format->format) { 629 - case DRM_FORMAT_ARGB1555: 630 - case DRM_FORMAT_ABGR1555: 631 - case DRM_FORMAT_RGBA5551: 632 - case DRM_FORMAT_BGRA5551: 633 - case DRM_FORMAT_ARGB4444: 634 - case DRM_FORMAT_ARGB8888: 635 - case DRM_FORMAT_ABGR8888: 636 - case DRM_FORMAT_RGBA8888: 637 - case DRM_FORMAT_BGRA8888: 638 - case DRM_FORMAT_RGB565_A8: 639 - case DRM_FORMAT_BGR565_A8: 640 - case DRM_FORMAT_RGB888_A8: 641 - case DRM_FORMAT_BGR888_A8: 642 - case DRM_FORMAT_RGBX8888_A8: 643 - case DRM_FORMAT_BGRX8888_A8: 644 - ipu_dp_set_global_alpha(ipu_plane->dp, false, 0, false); 645 - break; 646 - default: 647 - ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true); 648 - break; 649 - } 602 + break; 650 603 } 651 604 652 605 ipu_dmfc_config_wait4eot(ipu_plane->dmfc, drm_rect_width(dst)); ··· 710 709 ipu_cpmem_set_buffer(ipu_plane->ipu_ch, 1, eba); 711 710 ipu_idmac_lock_enable(ipu_plane->ipu_ch, num_bursts); 712 711 ipu_plane_enable(ipu_plane); 712 + ipu_plane->next_buf = -1; 713 713 } 714 714 715 715 static const struct drm_plane_helper_funcs ipu_plane_helper_funcs = { ··· 720 718 .atomic_update = ipu_plane_atomic_update, 721 719 }; 722 720 721 + bool ipu_plane_atomic_update_pending(struct drm_plane *plane) 722 + { 723 + struct ipu_plane *ipu_plane = to_ipu_plane(plane); 724 + struct drm_plane_state *state = plane->state; 725 + struct ipu_plane_state *ipu_state = to_ipu_plane_state(state); 726 + 727 + /* disabled crtcs must not block the update */ 728 + if (!state->crtc) 729 + return false; 730 + 731 + if (ipu_state->use_pre) 732 + return ipu_prg_channel_configure_pending(ipu_plane->ipu_ch); 733 + else if (ipu_plane->next_buf >= 0) 734 + return ipu_idmac_get_current_buffer(ipu_plane->ipu_ch) != 735 + ipu_plane->next_buf; 736 + 737 + return false; 738 + } 723 739 int ipu_planes_assign_pre(struct drm_device *dev, 724 740 struct drm_atomic_state *state) 725 741 { ··· 826 806 { 827 
807 struct ipu_plane *ipu_plane; 828 808 const uint64_t *modifiers = ipu_format_modifiers; 809 + unsigned int zpos = (type == DRM_PLANE_TYPE_PRIMARY) ? 0 : 1; 829 810 int ret; 830 811 831 812 DRM_DEBUG_KMS("channel %d, dp flow %d, possible_crtcs=0x%x\n", ··· 856 835 } 857 836 858 837 drm_plane_helper_add(&ipu_plane->base, &ipu_plane_helper_funcs); 838 + 839 + if (dp == IPU_DP_FLOW_SYNC_BG || dp == IPU_DP_FLOW_SYNC_FG) 840 + drm_plane_create_zpos_property(&ipu_plane->base, zpos, 0, 1); 841 + else 842 + drm_plane_create_zpos_immutable_property(&ipu_plane->base, 0); 859 843 860 844 return ipu_plane; 861 845 }
+2
drivers/gpu/drm/imx/ipuv3-plane.h
··· 27 27 int dp_flow; 28 28 29 29 bool disabling; 30 + int next_buf; 30 31 }; 31 32 32 33 struct ipu_plane *ipu_plane_init(struct drm_device *dev, struct ipu_soc *ipu, ··· 49 48 50 49 void ipu_plane_disable(struct ipu_plane *ipu_plane, bool disable_dp_channel); 51 50 void ipu_plane_disable_deferred(struct drm_plane *plane); 51 + bool ipu_plane_atomic_update_pending(struct drm_plane *plane); 52 52 53 53 #endif
+2
drivers/gpu/drm/radeon/ci_dpm.c
··· 4869 4869 pi->force_pcie_gen = RADEON_PCIE_GEN2; 4870 4870 if (current_link_speed == RADEON_PCIE_GEN2) 4871 4871 break; 4872 + /* fall through */ 4872 4873 case RADEON_PCIE_GEN2: 4873 4874 if (radeon_acpi_pcie_performance_request(rdev, PCIE_PERF_REQ_PECI_GEN2, false) == 0) 4874 4875 break; 4875 4876 #endif 4877 + /* fall through */ 4876 4878 default: 4877 4879 pi->force_pcie_gen = ci_get_current_pcie_speed(rdev); 4878 4880 break;
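The ci_dpm.c hunk above (and the matching si_dpm.c hunk below) only adds `/* fall through */` comments: the fall-throughs were always intentional, and the comment is what silences GCC's `-Wimplicit-fallthrough` warning. A minimal userspace sketch of the pattern, using a hypothetical function rather than driver code:

```c
#include <assert.h>

/* Each case deliberately continues into the next one, accumulating bits;
 * the comment marks the fall-through as intentional for
 * -Wimplicit-fallthrough. Illustrative only, not the radeon logic. */
static int gen_to_mask(int gen)
{
	int mask = 0;

	switch (gen) {
	case 3:
		mask |= 4;
		/* fall through */
	case 2:
		mask |= 2;
		/* fall through */
	case 1:
		mask |= 1;
		break;
	default:
		mask = 0;
		break;
	}
	return mask;
}
```

Newer kernels replace the comment with the `fallthrough;` pseudo-keyword, but the effect on the warning is the same.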
+1
drivers/gpu/drm/radeon/evergreen_cs.c
··· 1299 1299 return -EINVAL; 1300 1300 } 1301 1301 ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff); 1302 + break; 1302 1303 case CB_TARGET_MASK: 1303 1304 track->cb_target_mask = radeon_get_ib_value(p, idx); 1304 1305 track->cb_dirty = true;
+1
drivers/gpu/drm/radeon/radeon_kms.c
··· 172 172 } 173 173 174 174 if (radeon_is_px(dev)) { 175 + dev_pm_set_driver_flags(dev->dev, DPM_FLAG_NEVER_SKIP); 175 176 pm_runtime_use_autosuspend(dev->dev); 176 177 pm_runtime_set_autosuspend_delay(dev->dev, 5000); 177 178 pm_runtime_set_active(dev->dev);
+2
drivers/gpu/drm/radeon/si_dpm.c
··· 5762 5762 si_pi->force_pcie_gen = RADEON_PCIE_GEN2; 5763 5763 if (current_link_speed == RADEON_PCIE_GEN2) 5764 5764 break; 5765 + /* fall through */ 5765 5766 case RADEON_PCIE_GEN2: 5766 5767 if (radeon_acpi_pcie_performance_request(rdev, PCIE_PERF_REQ_PECI_GEN2, false) == 0) 5767 5768 break; 5768 5769 #endif 5770 + /* fall through */ 5769 5771 default: 5770 5772 si_pi->force_pcie_gen = si_get_current_pcie_speed(rdev); 5771 5773 break;
+26 -13
drivers/gpu/drm/scheduler/sched_entity.c
··· 52 52 { 53 53 int i; 54 54 55 - if (!(entity && rq_list && num_rq_list > 0 && rq_list[0])) 55 + if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0]))) 56 56 return -EINVAL; 57 57 58 58 memset(entity, 0, sizeof(struct drm_sched_entity)); 59 59 INIT_LIST_HEAD(&entity->list); 60 - entity->rq = rq_list[0]; 60 + entity->rq = NULL; 61 61 entity->guilty = guilty; 62 62 entity->num_rq_list = num_rq_list; 63 63 entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *), ··· 67 67 68 68 for (i = 0; i < num_rq_list; ++i) 69 69 entity->rq_list[i] = rq_list[i]; 70 + 71 + if (num_rq_list) 72 + entity->rq = rq_list[0]; 73 + 70 74 entity->last_scheduled = NULL; 71 75 72 76 spin_lock_init(&entity->rq_lock); ··· 168 164 struct drm_gpu_scheduler *sched; 169 165 struct task_struct *last_user; 170 166 long ret = timeout; 167 + 168 + if (!entity->rq) 169 + return 0; 171 170 172 171 sched = entity->rq->sched; 173 172 /** ··· 271 264 */ 272 265 void drm_sched_entity_fini(struct drm_sched_entity *entity) 273 266 { 274 - struct drm_gpu_scheduler *sched; 267 + struct drm_gpu_scheduler *sched = NULL; 275 268 276 - sched = entity->rq->sched; 277 - drm_sched_rq_remove_entity(entity->rq, entity); 269 + if (entity->rq) { 270 + sched = entity->rq->sched; 271 + drm_sched_rq_remove_entity(entity->rq, entity); 272 + } 278 273 279 274 /* Consumption of existing IBs wasn't completed. Forcefully 280 275 * remove them here. 281 276 */ 282 277 if (spsc_queue_peek(&entity->job_queue)) { 283 - /* Park the kernel for a moment to make sure it isn't processing 284 - * our enity. 285 - */ 286 - kthread_park(sched->thread); 287 - kthread_unpark(sched->thread); 278 + if (sched) { 279 + /* Park the kernel for a moment to make sure it isn't processing 280 + * our enity. 
281 + */ 282 + kthread_park(sched->thread); 283 + kthread_unpark(sched->thread); 284 + } 288 285 if (entity->dependency) { 289 286 dma_fence_remove_callback(entity->dependency, 290 287 &entity->cb); ··· 373 362 for (i = 0; i < entity->num_rq_list; ++i) 374 363 drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority); 375 364 376 - drm_sched_rq_remove_entity(entity->rq, entity); 377 - drm_sched_entity_set_rq_priority(&entity->rq, priority); 378 - drm_sched_rq_add_entity(entity->rq, entity); 365 + if (entity->rq) { 366 + drm_sched_rq_remove_entity(entity->rq, entity); 367 + drm_sched_entity_set_rq_priority(&entity->rq, priority); 368 + drm_sched_rq_add_entity(entity->rq, entity); 369 + } 379 370 380 371 spin_unlock(&entity->rq_lock); 381 372 }
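The sched_entity.c hunks make an empty runqueue list a legal way to initialize an entity, which in turn forces every later user of `entity->rq` to tolerate NULL. A stripped-down sketch of the new init invariant (the struct and names here are stand-ins, not the scheduler's real types):

```c
#include <assert.h>
#include <stddef.h>

struct rq;

struct entity {
	struct rq *rq;
	struct rq **rq_list;
	unsigned int num_rq_list;
};

/* Mirrors the drm_sched_entity_init() change: num_rq_list == 0 is now
 * accepted, and entity->rq stays NULL until a runqueue actually exists.
 * A non-empty list must still have a valid first entry. */
static int entity_init(struct entity *e, struct rq **rq_list,
		       unsigned int num_rq_list)
{
	if (!e || !rq_list || (num_rq_list > 0 && !rq_list[0]))
		return -1;

	e->rq_list = rq_list;
	e->num_rq_list = num_rq_list;
	e->rq = num_rq_list ? rq_list[0] : NULL;
	return 0;
}
```

The rest of the hunk is the consequence of this invariant: `drm_sched_entity_flush()`, `drm_sched_entity_fini()`, and the priority setter all grow `if (entity->rq)` guards.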
+6
drivers/gpu/ipu-v3/ipu-pre.c
··· 265 265 writel(IPU_PRE_CTRL_SDW_UPDATE, pre->regs + IPU_PRE_CTRL_SET); 266 266 } 267 267 268 + bool ipu_pre_update_pending(struct ipu_pre *pre) 269 + { 270 + return !!(readl_relaxed(pre->regs + IPU_PRE_CTRL) & 271 + IPU_PRE_CTRL_SDW_UPDATE); 272 + } 273 + 268 274 u32 ipu_pre_get_baddr(struct ipu_pre *pre) 269 275 { 270 276 return (u32)pre->buffer_paddr;
+16
drivers/gpu/ipu-v3/ipu-prg.c
··· 347 347 } 348 348 EXPORT_SYMBOL_GPL(ipu_prg_channel_configure); 349 349 350 + bool ipu_prg_channel_configure_pending(struct ipuv3_channel *ipu_chan) 351 + { 352 + int prg_chan = ipu_prg_ipu_to_prg_chan(ipu_chan->num); 353 + struct ipu_prg *prg = ipu_chan->ipu->prg_priv; 354 + struct ipu_prg_channel *chan; 355 + 356 + if (prg_chan < 0) 357 + return false; 358 + 359 + chan = &prg->chan[prg_chan]; 360 + WARN_ON(!chan->enabled); 361 + 362 + return ipu_pre_update_pending(prg->pres[chan->used_pre]); 363 + } 364 + EXPORT_SYMBOL_GPL(ipu_prg_channel_configure_pending); 365 + 350 366 static int ipu_prg_probe(struct platform_device *pdev) 351 367 { 352 368 struct device *dev = &pdev->dev;
+1
drivers/gpu/ipu-v3/ipu-prv.h
··· 272 272 unsigned int height, unsigned int stride, u32 format, 273 273 uint64_t modifier, unsigned int bufaddr); 274 274 void ipu_pre_update(struct ipu_pre *pre, unsigned int bufaddr); 275 + bool ipu_pre_update_pending(struct ipu_pre *pre); 275 276 276 277 struct ipu_prg *ipu_prg_lookup_by_phandle(struct device *dev, const char *name, 277 278 int ipu_id);
+13 -2
drivers/infiniband/hw/cxgb4/device.c
··· 783 783 static int c4iw_rdev_open(struct c4iw_rdev *rdev) 784 784 { 785 785 int err; 786 + unsigned int factor; 786 787 787 788 c4iw_init_dev_ucontext(rdev, &rdev->uctx); 788 789 ··· 807 806 return -EINVAL; 808 807 } 809 808 810 - rdev->qpmask = rdev->lldi.udb_density - 1; 811 - rdev->cqmask = rdev->lldi.ucq_density - 1; 809 + /* This implementation requires a sge_host_page_size <= PAGE_SIZE. */ 810 + if (rdev->lldi.sge_host_page_size > PAGE_SIZE) { 811 + pr_err("%s: unsupported sge host page size %u\n", 812 + pci_name(rdev->lldi.pdev), 813 + rdev->lldi.sge_host_page_size); 814 + return -EINVAL; 815 + } 816 + 817 + factor = PAGE_SIZE / rdev->lldi.sge_host_page_size; 818 + rdev->qpmask = (rdev->lldi.udb_density * factor) - 1; 819 + rdev->cqmask = (rdev->lldi.ucq_density * factor) - 1; 820 + 812 821 pr_debug("dev %s stag start 0x%0x size 0x%0x num stags %d pbl start 0x%0x size 0x%0x rq start 0x%0x size 0x%0x qp qid start %u size %u cq qid start %u size %u srq size %u\n", 813 822 pci_name(rdev->lldi.pdev), rdev->lldi.vr->stag.start, 814 823 rdev->lldi.vr->stag.size, c4iw_num_stags(rdev),
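The cxgb4 hunk scales the doorbell densities by `PAGE_SIZE / sge_host_page_size` before building the QID masks, instead of using the densities directly. Since both page sizes and both densities are powers of two, the result is still a contiguous low-bit mask. A sketch of that arithmetic with the values passed in explicitly (the real code reads them from the LLD info struct):

```c
#include <assert.h>

/* Mask computation from the c4iw_rdev_open() hunk: when the kernel page
 * size is larger than the SGE host page size, each kernel page holds
 * `factor` doorbell pages, so the effective density grows by that factor.
 * Hypothetical standalone helper; inputs assumed to be powers of two. */
static unsigned int qid_mask(unsigned int page_size,
			     unsigned int host_page_size,
			     unsigned int density)
{
	unsigned int factor = page_size / host_page_size;

	return (density * factor) - 1;
}
```

With equal page sizes the factor is 1 and the old behavior (`density - 1`) is preserved, which is why the hunk also rejects `sge_host_page_size > PAGE_SIZE` up front.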
-10
drivers/infiniband/ulp/srp/ib_srp.c
··· 3032 3032 { 3033 3033 struct srp_target_port *target = host_to_target(scmnd->device->host); 3034 3034 struct srp_rdma_ch *ch; 3035 - int i, j; 3036 3035 u8 status; 3037 3036 3038 3037 shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n"); ··· 3042 3043 return FAILED; 3043 3044 if (status) 3044 3045 return FAILED; 3045 - 3046 - for (i = 0; i < target->ch_count; i++) { 3047 - ch = &target->ch[i]; 3048 - for (j = 0; j < target->req_ring_size; ++j) { 3049 - struct srp_request *req = &ch->req_ring[j]; 3050 - 3051 - srp_finish_req(ch, req, scmnd->device, DID_RESET << 16); 3052 - } 3053 - } 3054 3046 3055 3047 return SUCCESS; 3056 3048 }
+1 -1
drivers/iommu/dmar.c
··· 144 144 for (tmp = dev; tmp; tmp = tmp->bus->self) 145 145 level++; 146 146 147 - size = sizeof(*info) + level * sizeof(struct acpi_dmar_pci_path); 147 + size = sizeof(*info) + level * sizeof(info->path[0]); 148 148 if (size <= sizeof(dmar_pci_notify_info_buf)) { 149 149 info = (struct dmar_pci_notify_info *)dmar_pci_notify_info_buf; 150 150 } else {
+2 -2
drivers/mailbox/bcm-flexrm-mailbox.c
··· 1396 1396 1397 1397 /* Clear ring flush state */ 1398 1398 timeout = 1000; /* timeout of 1s */ 1399 - writel_relaxed(0x0, ring + RING_CONTROL); 1399 + writel_relaxed(0x0, ring->regs + RING_CONTROL); 1400 1400 do { 1401 - if (!(readl_relaxed(ring + RING_FLUSH_DONE) & 1401 + if (!(readl_relaxed(ring->regs + RING_FLUSH_DONE) & 1402 1402 FLUSH_DONE_MASK)) 1403 1403 break; 1404 1404 mdelay(1);
+1
drivers/mailbox/mailbox.c
··· 310 310 311 311 return ret; 312 312 } 313 + EXPORT_SYMBOL_GPL(mbox_flush); 313 314 314 315 /** 315 316 * mbox_request_channel - Request a mailbox channel.
-6
drivers/mmc/core/block.c
··· 2380 2380 snprintf(md->disk->disk_name, sizeof(md->disk->disk_name), 2381 2381 "mmcblk%u%s", card->host->index, subname ? subname : ""); 2382 2382 2383 - if (mmc_card_mmc(card)) 2384 - blk_queue_logical_block_size(md->queue.queue, 2385 - card->ext_csd.data_sector_size); 2386 - else 2387 - blk_queue_logical_block_size(md->queue.queue, 512); 2388 - 2389 2383 set_capacity(md->disk, size); 2390 2384 2391 2385 if (mmc_host_cmd23(card->host)) {
+1 -1
drivers/mmc/core/core.c
··· 95 95 if (!data) 96 96 return; 97 97 98 - if (cmd->error || data->error || 98 + if ((cmd && cmd->error) || data->error || 99 99 !should_fail(&host->fail_mmc_request, data->blksz * data->blocks)) 100 100 return; 101 101
+8 -1
drivers/mmc/core/queue.c
··· 355 355 { 356 356 struct mmc_host *host = card->host; 357 357 u64 limit = BLK_BOUNCE_HIGH; 358 + unsigned block_size = 512; 358 359 359 360 if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask) 360 361 limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT; ··· 369 368 blk_queue_max_hw_sectors(mq->queue, 370 369 min(host->max_blk_count, host->max_req_size / 512)); 371 370 blk_queue_max_segments(mq->queue, host->max_segs); 372 - blk_queue_max_segment_size(mq->queue, host->max_seg_size); 371 + 372 + if (mmc_card_mmc(card)) 373 + block_size = card->ext_csd.data_sector_size; 374 + 375 + blk_queue_logical_block_size(mq->queue, block_size); 376 + blk_queue_max_segment_size(mq->queue, 377 + round_down(host->max_seg_size, block_size)); 373 378 374 379 INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler); 375 380 INIT_WORK(&mq->complete_work, mmc_blk_mq_complete_work);
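The queue.c hunk above rounds `host->max_seg_size` down to a multiple of the logical block size before handing it to the block layer. The kernel's `round_down()` macro does this for power-of-two multiples by clearing the low bits; a minimal sketch:

```c
#include <assert.h>

/* Power-of-two round_down as used in the queue.c hunk (equivalent to the
 * kernel macro): clearing the low bits makes the segment size an exact
 * multiple of the logical block size, so no segment can split a block. */
static unsigned int round_down_pow2(unsigned int x, unsigned int multiple)
{
	return x & ~(multiple - 1);
}
```

Note this only works when `multiple` is a power of two, which logical block sizes always are.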
+11 -2
drivers/mmc/host/cqhci.c
··· 201 201 cq_host->desc_size = cq_host->slot_sz * cq_host->num_slots; 202 202 203 203 cq_host->data_size = cq_host->trans_desc_len * cq_host->mmc->max_segs * 204 - (cq_host->num_slots - 1); 204 + cq_host->mmc->cqe_qdepth; 205 205 206 206 pr_debug("%s: cqhci: desc_size: %zu data_sz: %zu slot-sz: %d\n", 207 207 mmc_hostname(cq_host->mmc), cq_host->desc_size, cq_host->data_size, ··· 217 217 cq_host->desc_size, 218 218 &cq_host->desc_dma_base, 219 219 GFP_KERNEL); 220 + if (!cq_host->desc_base) 221 + return -ENOMEM; 222 + 220 223 cq_host->trans_desc_base = dmam_alloc_coherent(mmc_dev(cq_host->mmc), 221 224 cq_host->data_size, 222 225 &cq_host->trans_desc_dma_base, 223 226 GFP_KERNEL); 224 - if (!cq_host->desc_base || !cq_host->trans_desc_base) 227 + if (!cq_host->trans_desc_base) { 228 + dmam_free_coherent(mmc_dev(cq_host->mmc), cq_host->desc_size, 229 + cq_host->desc_base, 230 + cq_host->desc_dma_base); 231 + cq_host->desc_base = NULL; 232 + cq_host->desc_dma_base = 0; 225 233 return -ENOMEM; 234 + } 226 235 227 236 pr_debug("%s: cqhci: desc-base: 0x%p trans-base: 0x%p\n desc_dma 0x%llx trans_dma: 0x%llx\n", 228 237 mmc_hostname(cq_host->mmc), cq_host->desc_base, cq_host->trans_desc_base,
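The cqhci.c hunk splits one combined allocation check into two, so that when the second DMA allocation fails the first one is released and its pointers cleared, rather than leaking until teardown. A userspace sketch of the same unwind pattern with `malloc` standing in for `dmam_alloc_coherent` (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

struct two_bufs {
	void *desc;
	void *trans;
};

/* Error-unwind pattern from the cqhci hunk: each allocation is checked
 * immediately, and failure of the second releases the first and resets
 * its pointer so a later retry starts from a clean state. */
static int two_bufs_alloc(struct two_bufs *b, size_t desc_sz, size_t trans_sz)
{
	b->desc = malloc(desc_sz);
	if (!b->desc)
		return -1;

	b->trans = malloc(trans_sz);
	if (!b->trans) {
		free(b->desc);
		b->desc = NULL;
		return -1;
	}
	return 0;
}
```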
+1
drivers/mmc/host/mmc_spi.c
··· 1450 1450 mmc->caps &= ~MMC_CAP_NEEDS_POLL; 1451 1451 mmc_gpiod_request_cd_irq(mmc); 1452 1452 } 1453 + mmc_detect_change(mmc, 0); 1453 1454 1454 1455 /* Index 1 is write protect/read only */ 1455 1456 status = mmc_gpiod_request_ro(mmc, NULL, 1, false, 0, NULL);
+1
drivers/mmc/host/renesas_sdhi_sys_dmac.c
··· 65 65 .scc_offset = 0x0300, 66 66 .taps = rcar_gen2_scc_taps, 67 67 .taps_num = ARRAY_SIZE(rcar_gen2_scc_taps), 68 + .max_blk_count = 0xffffffff, 68 69 }; 69 70 70 71 /* Definitions for sampling clocks */
+5 -4
drivers/mmc/host/sdhci-esdhc-imx.c
··· 1095 1095 writel(readl(host->ioaddr + SDHCI_HOST_CONTROL) 1096 1096 | ESDHC_BURST_LEN_EN_INCR, 1097 1097 host->ioaddr + SDHCI_HOST_CONTROL); 1098 + 1098 1099 /* 1099 - * erratum ESDHC_FLAG_ERR004536 fix for MX6Q TO1.2 and MX6DL 1100 - * TO1.1, it's harmless for MX6SL 1101 - */ 1102 - writel(readl(host->ioaddr + 0x6c) | BIT(7), 1100 + * erratum ESDHC_FLAG_ERR004536 fix for MX6Q TO1.2 and MX6DL 1101 + * TO1.1, it's harmless for MX6SL 1102 + */ 1103 + writel(readl(host->ioaddr + 0x6c) & ~BIT(7), 1103 1104 host->ioaddr + 0x6c); 1104 1105 1105 1106 /* disable DLL_CTRL delay line settings */
+5
drivers/mmc/host/tmio_mmc.h
··· 277 277 iowrite16(val >> 16, host->ctl + ((addr + 2) << host->bus_shift)); 278 278 } 279 279 280 + static inline void sd_ctrl_write32(struct tmio_mmc_host *host, int addr, u32 val) 281 + { 282 + iowrite32(val, host->ctl + (addr << host->bus_shift)); 283 + } 284 + 280 285 static inline void sd_ctrl_write32_rep(struct tmio_mmc_host *host, int addr, 281 286 const u32 *buf, int count) 282 287 {
+12 -5
drivers/mmc/host/tmio_mmc_core.c
··· 43 43 #include <linux/regulator/consumer.h> 44 44 #include <linux/mmc/sdio.h> 45 45 #include <linux/scatterlist.h> 46 + #include <linux/sizes.h> 46 47 #include <linux/spinlock.h> 47 48 #include <linux/swiotlb.h> 48 49 #include <linux/workqueue.h> ··· 630 629 return false; 631 630 } 632 631 633 - static void __tmio_mmc_sdio_irq(struct tmio_mmc_host *host) 632 + static bool __tmio_mmc_sdio_irq(struct tmio_mmc_host *host) 634 633 { 635 634 struct mmc_host *mmc = host->mmc; 636 635 struct tmio_mmc_data *pdata = host->pdata; ··· 638 637 unsigned int sdio_status; 639 638 640 639 if (!(pdata->flags & TMIO_MMC_SDIO_IRQ)) 641 - return; 640 + return false; 642 641 643 642 status = sd_ctrl_read16(host, CTL_SDIO_STATUS); 644 643 ireg = status & TMIO_SDIO_MASK_ALL & ~host->sdio_irq_mask; ··· 651 650 652 651 if (mmc->caps & MMC_CAP_SDIO_IRQ && ireg & TMIO_SDIO_STAT_IOIRQ) 653 652 mmc_signal_sdio_irq(mmc); 653 + 654 + return ireg; 654 655 } 655 656 656 657 irqreturn_t tmio_mmc_irq(int irq, void *devid) ··· 671 668 if (__tmio_mmc_sdcard_irq(host, ireg, status)) 672 669 return IRQ_HANDLED; 673 670 674 - __tmio_mmc_sdio_irq(host); 671 + if (__tmio_mmc_sdio_irq(host)) 672 + return IRQ_HANDLED; 675 673 676 - return IRQ_HANDLED; 674 + return IRQ_NONE; 677 675 } 678 676 EXPORT_SYMBOL_GPL(tmio_mmc_irq); 679 677 ··· 704 700 705 701 /* Set transfer length / blocksize */ 706 702 sd_ctrl_write16(host, CTL_SD_XFER_LEN, data->blksz); 707 - sd_ctrl_write16(host, CTL_XFER_BLK_COUNT, data->blocks); 703 + if (host->mmc->max_blk_count >= SZ_64K) 704 + sd_ctrl_write32(host, CTL_XFER_BLK_COUNT, data->blocks); 705 + else 706 + sd_ctrl_write16(host, CTL_XFER_BLK_COUNT, data->blocks); 708 707 709 708 tmio_mmc_start_dma(host, data); 710 709
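The tmio_mmc_core.c hunk changes the interrupt handler to return `IRQ_NONE` when neither the SD-card nor the SDIO sub-handler consumed anything, instead of unconditionally claiming the interrupt. A simplified model of that control flow (the enum mimics the kernel's `irqreturn_t`; the booleans stand in for the real status-register checks):

```c
#include <assert.h>
#include <stdbool.h>

enum irqreturn { IRQ_NONE = 0, IRQ_HANDLED = 1 };

/* Simplified tmio_mmc_irq(): each sub-handler reports whether it consumed
 * an event, and only then does the top level claim the interrupt.
 * Reporting IRQ_NONE for spurious interrupts is what lets the kernel's
 * shared-IRQ core detect and disable a stuck line. */
static enum irqreturn handle_irq(bool card_event, bool sdio_event)
{
	if (card_event)
		return IRQ_HANDLED;
	if (sdio_event)
		return IRQ_HANDLED;
	return IRQ_NONE;
}
```

This is also why `__tmio_mmc_sdio_irq()` changes its return type from `void` to `bool` in the same hunk.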
+1 -1
drivers/mtd/devices/powernv_flash.c
··· 212 212 * Going to have to check what details I need to set and how to 213 213 * get them 214 214 */ 215 - mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFn", dev->of_node); 215 + mtd->name = devm_kasprintf(dev, GFP_KERNEL, "%pOFP", dev->of_node); 216 216 mtd->type = MTD_NORFLASH; 217 217 mtd->flags = MTD_WRITEABLE; 218 218 mtd->size = size;
+1
drivers/mtd/mtdcore.c
··· 507 507 { 508 508 struct nvmem_config config = {}; 509 509 510 + config.id = -1; 510 511 config.dev = &mtd->dev; 511 512 config.name = mtd->name; 512 513 config.owner = THIS_MODULE;
+14 -21
drivers/net/bonding/bond_main.c
··· 1183 1183 } 1184 1184 } 1185 1185 1186 - /* Link-local multicast packets should be passed to the 1187 - * stack on the link they arrive as well as pass them to the 1188 - * bond-master device. These packets are mostly usable when 1189 - * stack receives it with the link on which they arrive 1190 - * (e.g. LLDP) they also must be available on master. Some of 1191 - * the use cases include (but are not limited to): LLDP agents 1192 - * that must be able to operate both on enslaved interfaces as 1193 - * well as on bonds themselves; linux bridges that must be able 1194 - * to process/pass BPDUs from attached bonds when any kind of 1195 - * STP version is enabled on the network. 1186 + /* 1187 + * For packets determined by bond_should_deliver_exact_match() call to 1188 + * be suppressed we want to make an exception for link-local packets. 1189 + * This is necessary for e.g. LLDP daemons to be able to monitor 1190 + * inactive slave links without being forced to bind to them 1191 + * explicitly. 1192 + * 1193 + * At the same time, packets that are passed to the bonding master 1194 + * (including link-local ones) can have their originating interface 1195 + * determined via PACKET_ORIGDEV socket option. 1196 1196 */ 1197 - if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) { 1198 - struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC); 1199 - 1200 - if (nskb) { 1201 - nskb->dev = bond->dev; 1202 - nskb->queue_mapping = 0; 1203 - netif_rx(nskb); 1204 - } 1205 - return RX_HANDLER_PASS; 1206 - } 1207 - if (bond_should_deliver_exact_match(skb, slave, bond)) 1197 + if (bond_should_deliver_exact_match(skb, slave, bond)) { 1198 + if (is_link_local_ether_addr(eth_hdr(skb)->h_dest)) 1199 + return RX_HANDLER_PASS; 1208 1200 return RX_HANDLER_EXACT; 1201 + } 1209 1202 1210 1203 skb->dev = bond->dev; 1211 1204
+71 -19
drivers/net/dsa/b53/b53_common.c
··· 344 344 b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_CTRL, mgmt); 345 345 } 346 346 347 - static void b53_enable_vlan(struct b53_device *dev, bool enable) 347 + static void b53_enable_vlan(struct b53_device *dev, bool enable, 348 + bool enable_filtering) 348 349 { 349 350 u8 mgmt, vc0, vc1, vc4 = 0, vc5; 350 351 ··· 370 369 vc0 |= VC0_VLAN_EN | VC0_VID_CHK_EN | VC0_VID_HASH_VID; 371 370 vc1 |= VC1_RX_MCST_UNTAG_EN | VC1_RX_MCST_FWD_EN; 372 371 vc4 &= ~VC4_ING_VID_CHECK_MASK; 373 - vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S; 374 - vc5 |= VC5_DROP_VTABLE_MISS; 372 + if (enable_filtering) { 373 + vc4 |= VC4_ING_VID_VIO_DROP << VC4_ING_VID_CHECK_S; 374 + vc5 |= VC5_DROP_VTABLE_MISS; 375 + } else { 376 + vc4 |= VC4_ING_VID_VIO_FWD << VC4_ING_VID_CHECK_S; 377 + vc5 &= ~VC5_DROP_VTABLE_MISS; 378 + } 375 379 376 380 if (is5325(dev)) 377 381 vc0 &= ~VC0_RESERVED_1; ··· 426 420 } 427 421 428 422 b53_write8(dev, B53_CTRL_PAGE, B53_SWITCH_MODE, mgmt); 423 + 424 + dev->vlan_enabled = enable; 425 + dev->vlan_filtering_enabled = enable_filtering; 429 426 } 430 427 431 428 static int b53_set_jumbo(struct b53_device *dev, bool enable, bool allow_10_100) ··· 641 632 b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc); 642 633 } 643 634 635 + static u16 b53_default_pvid(struct b53_device *dev) 636 + { 637 + if (is5325(dev) || is5365(dev)) 638 + return 1; 639 + else 640 + return 0; 641 + } 642 + 644 643 int b53_configure_vlan(struct dsa_switch *ds) 645 644 { 646 645 struct b53_device *dev = ds->priv; 647 646 struct b53_vlan vl = { 0 }; 648 - int i; 647 + int i, def_vid; 648 + 649 + def_vid = b53_default_pvid(dev); 649 650 650 651 /* clear all vlan entries */ 651 652 if (is5325(dev) || is5365(dev)) { 652 - for (i = 1; i < dev->num_vlans; i++) 653 + for (i = def_vid; i < dev->num_vlans; i++) 653 654 b53_set_vlan_entry(dev, i, &vl); 654 655 } else { 655 656 b53_do_vlan_op(dev, VTA_CMD_CLEAR); 656 657 } 657 658 658 - b53_enable_vlan(dev, false); 659 + b53_enable_vlan(dev, 
false, dev->vlan_filtering_enabled); 659 660 660 661 b53_for_each_port(dev, i) 661 662 b53_write16(dev, B53_VLAN_PAGE, 662 - B53_VLAN_PORT_DEF_TAG(i), 1); 663 + B53_VLAN_PORT_DEF_TAG(i), def_vid); 663 664 664 665 if (!is5325(dev) && !is5365(dev)) 665 666 b53_set_jumbo(dev, dev->enable_jumbo, false); ··· 1274 1255 1275 1256 int b53_vlan_filtering(struct dsa_switch *ds, int port, bool vlan_filtering) 1276 1257 { 1258 + struct b53_device *dev = ds->priv; 1259 + struct net_device *bridge_dev; 1260 + unsigned int i; 1261 + u16 pvid, new_pvid; 1262 + 1263 + /* Handle the case were multiple bridges span the same switch device 1264 + * and one of them has a different setting than what is being requested 1265 + * which would be breaking filtering semantics for any of the other 1266 + * bridge devices. 1267 + */ 1268 + b53_for_each_port(dev, i) { 1269 + bridge_dev = dsa_to_port(ds, i)->bridge_dev; 1270 + if (bridge_dev && 1271 + bridge_dev != dsa_to_port(ds, port)->bridge_dev && 1272 + br_vlan_enabled(bridge_dev) != vlan_filtering) { 1273 + netdev_err(bridge_dev, 1274 + "VLAN filtering is global to the switch!\n"); 1275 + return -EINVAL; 1276 + } 1277 + } 1278 + 1279 + b53_read16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), &pvid); 1280 + new_pvid = pvid; 1281 + if (dev->vlan_filtering_enabled && !vlan_filtering) { 1282 + /* Filtering is currently enabled, use the default PVID since 1283 + * the bridge does not expect tagging anymore 1284 + */ 1285 + dev->ports[port].pvid = pvid; 1286 + new_pvid = b53_default_pvid(dev); 1287 + } else if (!dev->vlan_filtering_enabled && vlan_filtering) { 1288 + /* Filtering is currently disabled, restore the previous PVID */ 1289 + new_pvid = dev->ports[port].pvid; 1290 + } 1291 + 1292 + if (pvid != new_pvid) 1293 + b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), 1294 + new_pvid); 1295 + 1296 + b53_enable_vlan(dev, dev->vlan_enabled, vlan_filtering); 1297 + 1277 1298 return 0; 1278 1299 } 1279 1300 
EXPORT_SYMBOL(b53_vlan_filtering); ··· 1329 1270 if (vlan->vid_end > dev->num_vlans) 1330 1271 return -ERANGE; 1331 1272 1332 - b53_enable_vlan(dev, true); 1273 + b53_enable_vlan(dev, true, dev->vlan_filtering_enabled); 1333 1274 1334 1275 return 0; 1335 1276 } ··· 1359 1300 b53_fast_age_vlan(dev, vid); 1360 1301 } 1361 1302 1362 - if (pvid) { 1303 + if (pvid && !dsa_is_cpu_port(ds, port)) { 1363 1304 b53_write16(dev, B53_VLAN_PAGE, B53_VLAN_PORT_DEF_TAG(port), 1364 1305 vlan->vid_end); 1365 1306 b53_fast_age_vlan(dev, vid); ··· 1385 1326 1386 1327 vl->members &= ~BIT(port); 1387 1328 1388 - if (pvid == vid) { 1389 - if (is5325(dev) || is5365(dev)) 1390 - pvid = 1; 1391 - else 1392 - pvid = 0; 1393 - } 1329 + if (pvid == vid) 1330 + pvid = b53_default_pvid(dev); 1394 1331 1395 1332 if (untagged && !dsa_is_cpu_port(ds, port)) 1396 1333 vl->untag &= ~(BIT(port)); ··· 1699 1644 b53_write16(dev, B53_PVLAN_PAGE, B53_PVLAN_PORT_MASK(port), pvlan); 1700 1645 dev->ports[port].vlan_ctl_mask = pvlan; 1701 1646 1702 - if (is5325(dev) || is5365(dev)) 1703 - pvid = 1; 1704 - else 1705 - pvid = 0; 1647 + pvid = b53_default_pvid(dev); 1706 1648 1707 1649 /* Make this port join all VLANs without VLAN entries */ 1708 1650 if (is58xx(dev)) {
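The core of the new `b53_vlan_filtering()` logic above is a save/restore of the port PVID around the filtering toggle: disabling filtering parks the port on the chip's default PVID and stashes the bridge-assigned one, re-enabling restores it. A stripped-down model (struct and field names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

struct port_state {
	unsigned short pvid;       /* PVID currently programmed in hardware */
	unsigned short saved_pvid; /* PVID stashed while filtering is off */
};

/* Simplified model of the PVID handling added to b53_vlan_filtering().
 * def_pvid would come from b53_default_pvid(): 1 on 5325/5365 (which
 * reserve VID 0), 0 otherwise. */
static void set_vlan_filtering(struct port_state *p, bool was_on, bool on,
			       unsigned short def_pvid)
{
	if (was_on && !on) {
		p->saved_pvid = p->pvid;
		p->pvid = def_pvid;
	} else if (!was_on && on) {
		p->pvid = p->saved_pvid;
	}
}
```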
+3
drivers/net/dsa/b53/b53_priv.h
··· 91 91 struct b53_port { 92 92 u16 vlan_ctl_mask; 93 93 struct ethtool_eee eee; 94 + u16 pvid; 94 95 }; 95 96 96 97 struct b53_vlan { ··· 138 137 139 138 unsigned int num_vlans; 140 139 struct b53_vlan *vlans; 140 + bool vlan_enabled; 141 + bool vlan_filtering_enabled; 141 142 unsigned int num_ports; 142 143 struct b53_port *ports; 143 144 };
+6 -4
drivers/net/dsa/bcm_sf2.c
··· 726 726 { 727 727 struct net_device *p = ds->ports[port].cpu_dp->master; 728 728 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 729 - struct ethtool_wolinfo pwol; 729 + struct ethtool_wolinfo pwol = { }; 730 730 731 731 /* Get the parent device WoL settings */ 732 - p->ethtool_ops->get_wol(p, &pwol); 732 + if (p->ethtool_ops->get_wol) 733 + p->ethtool_ops->get_wol(p, &pwol); 733 734 734 735 /* Advertise the parent device supported settings */ 735 736 wol->supported = pwol.supported; ··· 751 750 struct net_device *p = ds->ports[port].cpu_dp->master; 752 751 struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds); 753 752 s8 cpu_port = ds->ports[port].cpu_dp->index; 754 - struct ethtool_wolinfo pwol; 753 + struct ethtool_wolinfo pwol = { }; 755 754 756 - p->ethtool_ops->get_wol(p, &pwol); 755 + if (p->ethtool_ops->get_wol) 756 + p->ethtool_ops->get_wol(p, &pwol); 757 757 if (wol->wolopts & ~pwol.supported) 758 758 return -EINVAL; 759 759
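The bcm_sf2.c hunk pairs two defenses: `pwol` is zero-initialized, and the master device's `get_wol` op is checked for NULL before it is called. Either fix alone would be incomplete; together they guarantee the later reads of `pwol.supported` see defined (all-zero) capabilities when the op is absent. A sketch of the combined pattern with a generic callback standing in for the ethtool op:

```c
#include <assert.h>

struct wolinfo {
	unsigned int supported;
	unsigned int wolopts;
};

/* Models the bcm_sf2 fix: zero-initialize the out-struct (the kernel code
 * writes `= { }`; `{ 0 }` is the portable spelling) and treat a missing
 * fill callback as "nothing supported" instead of reading stack garbage. */
static unsigned int supported_wol(void (*get_wol)(struct wolinfo *))
{
	struct wolinfo pwol = { 0 };

	if (get_wol)
		get_wol(&pwol);
	return pwol.supported;
}
```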
+6
drivers/net/dsa/lantiq_gswip.c
··· 1162 1162 1163 1163 module_platform_driver(gswip_driver); 1164 1164 1165 + MODULE_FIRMWARE("lantiq/xrx300_phy11g_a21.bin"); 1166 + MODULE_FIRMWARE("lantiq/xrx300_phy22f_a21.bin"); 1167 + MODULE_FIRMWARE("lantiq/xrx200_phy11g_a14.bin"); 1168 + MODULE_FIRMWARE("lantiq/xrx200_phy11g_a22.bin"); 1169 + MODULE_FIRMWARE("lantiq/xrx200_phy22f_a14.bin"); 1170 + MODULE_FIRMWARE("lantiq/xrx200_phy22f_a22.bin"); 1165 1171 MODULE_AUTHOR("Hauke Mehrtens <hauke@hauke-m.de>"); 1166 1172 MODULE_DESCRIPTION("Lantiq / Intel GSWIP driver"); 1167 1173 MODULE_LICENSE("GPL v2");
+12 -2
drivers/net/dsa/mv88e6xxx/chip.c
··· 896 896 default: 897 897 return U64_MAX; 898 898 } 899 - value = (((u64)high) << 16) | low; 899 + value = (((u64)high) << 32) | low; 900 900 return value; 901 901 } 902 902 ··· 3093 3093 .port_disable_pri_override = mv88e6xxx_port_disable_pri_override, 3094 3094 .port_link_state = mv88e6352_port_link_state, 3095 3095 .port_get_cmode = mv88e6185_port_get_cmode, 3096 - .stats_snapshot = mv88e6320_g1_stats_snapshot, 3096 + .stats_snapshot = mv88e6xxx_g1_stats_snapshot, 3097 3097 .stats_set_histogram = mv88e6095_g1_stats_set_histogram, 3098 3098 .stats_get_sset_count = mv88e6095_stats_get_sset_count, 3099 3099 .stats_get_strings = mv88e6095_stats_get_strings, ··· 4595 4595 return 0; 4596 4596 } 4597 4597 4598 + static void mv88e6xxx_ports_cmode_init(struct mv88e6xxx_chip *chip) 4599 + { 4600 + int i; 4601 + 4602 + for (i = 0; i < mv88e6xxx_num_ports(chip); i++) 4603 + chip->ports[i].cmode = MV88E6XXX_PORT_STS_CMODE_INVALID; 4604 + } 4605 + 4598 4606 static enum dsa_tag_protocol mv88e6xxx_get_tag_protocol(struct dsa_switch *ds, 4599 4607 int port) 4600 4608 { ··· 4638 4630 err = mv88e6xxx_detect(chip); 4639 4631 if (err) 4640 4632 goto free; 4633 + 4634 + mv88e6xxx_ports_cmode_init(chip); 4641 4635 4642 4636 mutex_lock(&chip->reg_lock); 4643 4637 err = mv88e6xxx_switch_reset(chip);
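The one-character change at the top of the mv88e6xxx chip.c hunk (`<< 16` to `<< 32`) fixes how a 64-bit statistics counter is assembled from its two 32-bit snapshot halves; with the old shift, the high word landed in bits 47:16 and clobbered the low word. The corrected composition, extracted into a testable function:

```c
#include <stdint.h>

/* Corrected composition from the stats path: the high snapshot register
 * holds bits 63:32 of the counter, so it must be shifted by 32 before
 * being OR-ed with the low 32 bits. The old << 16 corrupted any counter
 * that had overflowed 32 bits. */
static uint64_t combine_counter_halves(uint32_t high, uint32_t low)
{
	return (((uint64_t)high) << 32) | low;
}
```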
+6 -2
drivers/net/dsa/mv88e6xxx/port.c
··· 398 398 cmode = 0; 399 399 } 400 400 401 + /* cmode doesn't change, nothing to do for us */ 402 + if (cmode == chip->ports[port].cmode) 403 + return 0; 404 + 401 405 lane = mv88e6390x_serdes_get_lane(chip, port); 402 406 if (lane < 0) 403 407 return lane; ··· 412 408 return err; 413 409 } 414 410 415 - err = mv88e6390_serdes_power(chip, port, false); 411 + err = mv88e6390x_serdes_power(chip, port, false); 416 412 if (err) 417 413 return err; 418 414 ··· 428 424 if (err) 429 425 return err; 430 426 431 - err = mv88e6390_serdes_power(chip, port, true); 427 + err = mv88e6390x_serdes_power(chip, port, true); 432 428 if (err) 433 429 return err; 434 430
+1
drivers/net/dsa/mv88e6xxx/port.h
··· 52 52 #define MV88E6185_PORT_STS_CMODE_1000BASE_X 0x0005 53 53 #define MV88E6185_PORT_STS_CMODE_PHY 0x0006 54 54 #define MV88E6185_PORT_STS_CMODE_DISABLED 0x0007 55 + #define MV88E6XXX_PORT_STS_CMODE_INVALID 0xff 55 56 56 57 /* Offset 0x01: MAC (or PCS or Physical) Control Register */ 57 58 #define MV88E6XXX_PORT_MAC_CTL 0x01
+3
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 275 275 276 276 static int hw_atl_b0_hw_init_tx_path(struct aq_hw_s *self) 277 277 { 278 + /* Tx TC/Queue number config */ 279 + hw_atl_rpb_tps_tx_tc_mode_set(self, 1U); 280 + 278 281 hw_atl_thm_lso_tcp_flag_of_first_pkt_set(self, 0x0FF6U); 279 282 hw_atl_thm_lso_tcp_flag_of_middle_pkt_set(self, 0x0FF6U); 280 283 hw_atl_thm_lso_tcp_flag_of_last_pkt_set(self, 0x0F7FU);
+9
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.c
··· 1274 1274 HW_ATL_TPB_TX_BUF_EN_SHIFT, tx_buff_en); 1275 1275 } 1276 1276 1277 + void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw, 1278 + u32 tx_traf_class_mode) 1279 + { 1280 + aq_hw_write_reg_bit(aq_hw, HW_ATL_TPB_TX_TC_MODE_ADDR, 1281 + HW_ATL_TPB_TX_TC_MODE_MSK, 1282 + HW_ATL_TPB_TX_TC_MODE_SHIFT, 1283 + tx_traf_class_mode); 1284 + } 1285 + 1277 1286 void hw_atl_tpb_tx_buff_hi_threshold_per_tc_set(struct aq_hw_s *aq_hw, 1278 1287 u32 tx_buff_hi_threshold_per_tc, 1279 1288 u32 buffer)
+4
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh.h
··· 605 605 606 606 /* tpb */ 607 607 608 + /* set TX Traffic Class Mode */ 609 + void hw_atl_rpb_tps_tx_tc_mode_set(struct aq_hw_s *aq_hw, 610 + u32 tx_traf_class_mode); 611 + 608 612 /* set tx buffer enable */ 609 613 void hw_atl_tpb_tx_buff_en_set(struct aq_hw_s *aq_hw, u32 tx_buff_en); 610 614
+13
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h
··· 1948 1948 /* default value of bitfield tx_buf_en */ 1949 1949 #define HW_ATL_TPB_TX_BUF_EN_DEFAULT 0x0 1950 1950 1951 + /* register address for bitfield tx_tc_mode */ 1952 + #define HW_ATL_TPB_TX_TC_MODE_ADDR 0x00007900 1953 + /* bitmask for bitfield tx_tc_mode */ 1954 + #define HW_ATL_TPB_TX_TC_MODE_MSK 0x00000100 1955 + /* inverted bitmask for bitfield tx_tc_mode */ 1956 + #define HW_ATL_TPB_TX_TC_MODE_MSKN 0xFFFFFEFF 1957 + /* lower bit position of bitfield tx_tc_mode */ 1958 + #define HW_ATL_TPB_TX_TC_MODE_SHIFT 8 1959 + /* width of bitfield tx_tc_mode */ 1960 + #define HW_ATL_TPB_TX_TC_MODE_WIDTH 1 1961 + /* default value of bitfield tx_tc_mode */ 1962 + #define HW_ATL_TPB_TX_TC_MODE_DEFAULT 0x0 1963 + 1951 1964 /* tx tx{b}_hi_thresh[c:0] bitfield definitions 1952 1965 * preprocessor definitions for the bitfield "tx{b}_hi_thresh[c:0]". 1953 1966 * parameter: buffer {b} | stride size 0x10 | range [0, 7]
+1 -3
drivers/net/ethernet/atheros/atlx/atl2.c
··· 1335 1335 { 1336 1336 struct net_device *netdev; 1337 1337 struct atl2_adapter *adapter; 1338 - static int cards_found; 1338 + static int cards_found = 0; 1339 1339 unsigned long mmio_start; 1340 1340 int mmio_len; 1341 1341 int err; 1342 - 1343 - cards_found = 0; 1344 1342 1345 1343 err = pci_enable_device(pdev); 1346 1344 if (err)
+4
drivers/net/ethernet/broadcom/bcmsysport.c
··· 134 134 135 135 priv->rx_chk_en = !!(wanted & NETIF_F_RXCSUM); 136 136 reg = rxchk_readl(priv, RXCHK_CONTROL); 137 + /* Clear L2 header checks, which would prevent BPDUs 138 + * from being received. 139 + */ 140 + reg &= ~RXCHK_L2_HDR_DIS; 137 141 if (priv->rx_chk_en) 138 142 reg |= RXCHK_EN; 139 143 else
+8 -2
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 500 500 } 501 501 502 502 length >>= 9; 503 + if (unlikely(length >= ARRAY_SIZE(bnxt_lhint_arr))) { 504 + dev_warn_ratelimited(&pdev->dev, "Dropped oversize %d bytes TX packet.\n", 505 + skb->len); 506 + i = 0; 507 + goto tx_dma_error; 508 + } 503 509 flags |= bnxt_lhint_arr[length]; 504 510 txbd->tx_bd_len_flags_type = cpu_to_le32(flags); 505 511 ··· 3909 3903 if (len) 3910 3904 break; 3911 3905 /* on first few passes, just barely sleep */ 3912 - if (i < DFLT_HWRM_CMD_TIMEOUT) 3906 + if (i < HWRM_SHORT_TIMEOUT_COUNTER) 3913 3907 usleep_range(HWRM_SHORT_MIN_TIMEOUT, 3914 3908 HWRM_SHORT_MAX_TIMEOUT); 3915 3909 else ··· 3932 3926 dma_rmb(); 3933 3927 if (*valid) 3934 3928 break; 3935 - udelay(1); 3929 + usleep_range(1, 5); 3936 3930 } 3937 3931 3938 3932 if (j >= HWRM_VALID_BIT_DELAY_USEC) {
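The first bnxt hunk above guards a table lookup: the packet length is converted to a 512-byte bucket index into `bnxt_lhint_arr`, and an oversize packet previously indexed past the end of the array. A self-contained sketch of the bounds-check pattern (table contents here are made up, not the driver's real hint values):

```c
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Illustrative stand-in for bnxt_lhint_arr: maps 512-byte length buckets
 * to TX descriptor hint flags. */
static const uint32_t lhint_arr[] = { 0x0, 0x1, 0x2, 0x3 };

/* The pattern from the fix: validate the bucket index before the table
 * lookup; on failure the caller drops the oversize packet (the driver
 * additionally rate-limits a warning). */
static int lookup_lhint(unsigned int len, uint32_t *flags)
{
	unsigned int bucket = len >> 9;

	if (bucket >= ARRAY_SIZE(lhint_arr))
		return -1;
	*flags = lhint_arr[bucket];
	return 0;
}
```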
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 582 582 (HWRM_SHORT_TIMEOUT_COUNTER * HWRM_SHORT_MIN_TIMEOUT + \ 583 583 ((n) - HWRM_SHORT_TIMEOUT_COUNTER) * HWRM_MIN_TIMEOUT)) 584 584 585 - #define HWRM_VALID_BIT_DELAY_USEC 20 585 + #define HWRM_VALID_BIT_DELAY_USEC 150 586 586 587 587 #define BNXT_HWRM_CHNL_CHIMP 0 588 588 #define BNXT_HWRM_CHNL_KONG 1
+8 -6
drivers/net/ethernet/cavium/thunder/nic.h
··· 271 271 }; 272 272 273 273 struct nicvf_work { 274 - struct delayed_work work; 274 + struct work_struct work; 275 275 u8 mode; 276 276 struct xcast_addr_list *mc; 277 277 }; ··· 327 327 struct nicvf_work rx_mode_work; 328 328 /* spinlock to protect workqueue arguments from concurrent access */ 329 329 spinlock_t rx_mode_wq_lock; 330 - 330 + /* workqueue for handling kernel ndo_set_rx_mode() calls */ 331 + struct workqueue_struct *nicvf_rx_mode_wq; 332 + /* mutex to protect VF's mailbox contents from concurrent access */ 333 + struct mutex rx_mode_mtx; 334 + struct delayed_work link_change_work; 331 335 /* PTP timestamp */ 332 336 struct cavium_ptp *ptp_clock; 333 337 /* Inbound timestamping is on */ ··· 579 575 580 576 struct xcast { 581 577 u8 msg; 582 - union { 583 - u8 mode; 584 - u64 mac; 585 - } data; 578 + u8 mode; 579 + u64 mac:48; 586 580 }; 587 581 588 582 /* 128 bit shared memory between PF and each VF */
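The last nic.h hunk flattens the `xcast` mailbox message: the old `union { u8 mode; u64 mac; }` could carry either the mode or a MAC address, never both, while the new layout holds a mode byte plus a 48-bit MAC bit-field side by side. A compilable sketch of the new shape (`uint64_t` bit-fields are a GCC/Clang extension, which matches the kernel's toolchain):

```c
#include <stdint.h>

/* Sketch of the flattened mailbox message: a 48-bit bit-field is exactly
 * wide enough for an Ethernet MAC, and dropping the union lets one
 * message carry both fields at once. */
struct xcast_msg {
	uint8_t msg;
	uint8_t mode;
	uint64_t mac : 48;
};
```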
+46 -103
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 57 57 #define NIC_GET_BGX_FROM_VF_LMAC_MAP(map) ((map >> 4) & 0xF) 58 58 #define NIC_GET_LMAC_FROM_VF_LMAC_MAP(map) (map & 0xF) 59 59 u8 *vf_lmac_map; 60 - struct delayed_work dwork; 61 - struct workqueue_struct *check_link; 62 - u8 *link; 63 - u8 *duplex; 64 - u32 *speed; 65 60 u16 cpi_base[MAX_NUM_VFS_SUPPORTED]; 66 61 u16 rssi_base[MAX_NUM_VFS_SUPPORTED]; 67 - bool mbx_lock[MAX_NUM_VFS_SUPPORTED]; 68 62 69 63 /* MSI-X */ 70 64 u8 num_vec; ··· 923 929 nic_reg_write(nic, NIC_PF_PKIND_0_15_CFG | (pkind_idx << 3), pkind_val); 924 930 } 925 931 932 + /* Get BGX LMAC link status and update corresponding VF 933 + * if there is a change, valid only if internal L2 switch 934 + * is not present otherwise VF link is always treated as up 935 + */ 936 + static void nic_link_status_get(struct nicpf *nic, u8 vf) 937 + { 938 + union nic_mbx mbx = {}; 939 + struct bgx_link_status link; 940 + u8 bgx, lmac; 941 + 942 + mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE; 943 + 944 + /* Get BGX, LMAC indices for the VF */ 945 + bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 946 + lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 947 + 948 + /* Get interface link status */ 949 + bgx_get_lmac_link_state(nic->node, bgx, lmac, &link); 950 + 951 + /* Send a mbox message to VF with current link status */ 952 + mbx.link_status.link_up = link.link_up; 953 + mbx.link_status.duplex = link.duplex; 954 + mbx.link_status.speed = link.speed; 955 + mbx.link_status.mac_type = link.mac_type; 956 + 957 + /* reply with link status */ 958 + nic_send_msg_to_vf(nic, vf, &mbx); 959 + } 960 + 926 961 /* Interrupt handler to handle mailbox messages from VFs */ 927 962 static void nic_handle_mbx_intr(struct nicpf *nic, int vf) 928 963 { ··· 963 940 int bgx, lmac; 964 941 int i; 965 942 int ret = 0; 966 - 967 - nic->mbx_lock[vf] = true; 968 943 969 944 mbx_addr = nic_get_mbx_addr(vf); 970 945 mbx_data = (u64 *)&mbx; ··· 978 957 switch (mbx.msg.msg) { 979 958 case NIC_MBOX_MSG_READY: 
980 959 nic_mbx_send_ready(nic, vf); 981 - if (vf < nic->num_vf_en) { 982 - nic->link[vf] = 0; 983 - nic->duplex[vf] = 0; 984 - nic->speed[vf] = 0; 985 - } 986 - goto unlock; 960 + return; 987 961 case NIC_MBOX_MSG_QS_CFG: 988 962 reg_addr = NIC_PF_QSET_0_127_CFG | 989 963 (mbx.qs.num << NIC_QS_ID_SHIFT); ··· 1047 1031 break; 1048 1032 case NIC_MBOX_MSG_RSS_SIZE: 1049 1033 nic_send_rss_size(nic, vf); 1050 - goto unlock; 1034 + return; 1051 1035 case NIC_MBOX_MSG_RSS_CFG: 1052 1036 case NIC_MBOX_MSG_RSS_CFG_CONT: 1053 1037 nic_config_rss(nic, &mbx.rss_cfg); ··· 1055 1039 case NIC_MBOX_MSG_CFG_DONE: 1056 1040 /* Last message of VF config msg sequence */ 1057 1041 nic_enable_vf(nic, vf, true); 1058 - goto unlock; 1042 + break; 1059 1043 case NIC_MBOX_MSG_SHUTDOWN: 1060 1044 /* First msg in VF teardown sequence */ 1061 1045 if (vf >= nic->num_vf_en) ··· 1065 1049 break; 1066 1050 case NIC_MBOX_MSG_ALLOC_SQS: 1067 1051 nic_alloc_sqs(nic, &mbx.sqs_alloc); 1068 - goto unlock; 1052 + return; 1069 1053 case NIC_MBOX_MSG_NICVF_PTR: 1070 1054 nic->nicvf[vf] = mbx.nicvf.nicvf; 1071 1055 break; 1072 1056 case NIC_MBOX_MSG_PNICVF_PTR: 1073 1057 nic_send_pnicvf(nic, vf); 1074 - goto unlock; 1058 + return; 1075 1059 case NIC_MBOX_MSG_SNICVF_PTR: 1076 1060 nic_send_snicvf(nic, &mbx.nicvf); 1077 - goto unlock; 1061 + return; 1078 1062 case NIC_MBOX_MSG_BGX_STATS: 1079 1063 nic_get_bgx_stats(nic, &mbx.bgx_stats); 1080 - goto unlock; 1064 + return; 1081 1065 case NIC_MBOX_MSG_LOOPBACK: 1082 1066 ret = nic_config_loopback(nic, &mbx.lbk); 1083 1067 break; ··· 1086 1070 break; 1087 1071 case NIC_MBOX_MSG_PFC: 1088 1072 nic_pause_frame(nic, vf, &mbx.pfc); 1089 - goto unlock; 1073 + return; 1090 1074 case NIC_MBOX_MSG_PTP_CFG: 1091 1075 nic_config_timestamp(nic, vf, &mbx.ptp); 1092 1076 break; ··· 1110 1094 bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1111 1095 lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1112 1096 bgx_set_dmac_cam_filter(nic->node, bgx, lmac, 
1113 - mbx.xcast.data.mac, 1097 + mbx.xcast.mac, 1114 1098 vf < NIC_VF_PER_MBX_REG ? vf : 1115 1099 vf - NIC_VF_PER_MBX_REG); 1116 1100 break; ··· 1122 1106 } 1123 1107 bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1124 1108 lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1125 - bgx_set_xcast_mode(nic->node, bgx, lmac, mbx.xcast.data.mode); 1109 + bgx_set_xcast_mode(nic->node, bgx, lmac, mbx.xcast.mode); 1126 1110 break; 1111 + case NIC_MBOX_MSG_BGX_LINK_CHANGE: 1112 + if (vf >= nic->num_vf_en) { 1113 + ret = -1; /* NACK */ 1114 + break; 1115 + } 1116 + nic_link_status_get(nic, vf); 1117 + return; 1127 1118 default: 1128 1119 dev_err(&nic->pdev->dev, 1129 1120 "Invalid msg from VF%d, msg 0x%x\n", vf, mbx.msg.msg); ··· 1144 1121 mbx.msg.msg, vf); 1145 1122 nic_mbx_send_nack(nic, vf); 1146 1123 } 1147 - unlock: 1148 - nic->mbx_lock[vf] = false; 1149 1124 } 1150 1125 1151 1126 static irqreturn_t nic_mbx_intr_handler(int irq, void *nic_irq) ··· 1291 1270 return 0; 1292 1271 } 1293 1272 1294 - /* Poll for BGX LMAC link status and update corresponding VF 1295 - * if there is a change, valid only if internal L2 switch 1296 - * is not present otherwise VF link is always treated as up 1297 - */ 1298 - static void nic_poll_for_link(struct work_struct *work) 1299 - { 1300 - union nic_mbx mbx = {}; 1301 - struct nicpf *nic; 1302 - struct bgx_link_status link; 1303 - u8 vf, bgx, lmac; 1304 - 1305 - nic = container_of(work, struct nicpf, dwork.work); 1306 - 1307 - mbx.link_status.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE; 1308 - 1309 - for (vf = 0; vf < nic->num_vf_en; vf++) { 1310 - /* Poll only if VF is UP */ 1311 - if (!nic->vf_enabled[vf]) 1312 - continue; 1313 - 1314 - /* Get BGX, LMAC indices for the VF */ 1315 - bgx = NIC_GET_BGX_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1316 - lmac = NIC_GET_LMAC_FROM_VF_LMAC_MAP(nic->vf_lmac_map[vf]); 1317 - /* Get interface link status */ 1318 - bgx_get_lmac_link_state(nic->node, bgx, lmac, &link); 1319 - 1320 - /* 
Inform VF only if link status changed */ 1321 - if (nic->link[vf] == link.link_up) 1322 - continue; 1323 - 1324 - if (!nic->mbx_lock[vf]) { 1325 - nic->link[vf] = link.link_up; 1326 - nic->duplex[vf] = link.duplex; 1327 - nic->speed[vf] = link.speed; 1328 - 1329 - /* Send a mbox message to VF with current link status */ 1330 - mbx.link_status.link_up = link.link_up; 1331 - mbx.link_status.duplex = link.duplex; 1332 - mbx.link_status.speed = link.speed; 1333 - mbx.link_status.mac_type = link.mac_type; 1334 - nic_send_msg_to_vf(nic, vf, &mbx); 1335 - } 1336 - } 1337 - queue_delayed_work(nic->check_link, &nic->dwork, HZ * 2); 1338 - } 1339 - 1340 1273 static int nic_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 1341 1274 { 1342 1275 struct device *dev = &pdev->dev; ··· 1359 1384 if (!nic->vf_lmac_map) 1360 1385 goto err_release_regions; 1361 1386 1362 - nic->link = devm_kmalloc_array(dev, max_lmac, sizeof(u8), GFP_KERNEL); 1363 - if (!nic->link) 1364 - goto err_release_regions; 1365 - 1366 - nic->duplex = devm_kmalloc_array(dev, max_lmac, sizeof(u8), GFP_KERNEL); 1367 - if (!nic->duplex) 1368 - goto err_release_regions; 1369 - 1370 - nic->speed = devm_kmalloc_array(dev, max_lmac, sizeof(u32), GFP_KERNEL); 1371 - if (!nic->speed) 1372 - goto err_release_regions; 1373 - 1374 1387 /* Initialize hardware */ 1375 1388 nic_init_hw(nic); 1376 1389 ··· 1374 1411 if (err) 1375 1412 goto err_unregister_interrupts; 1376 1413 1377 - /* Register a physical link status poll fn() */ 1378 - nic->check_link = alloc_workqueue("check_link_status", 1379 - WQ_UNBOUND | WQ_MEM_RECLAIM, 1); 1380 - if (!nic->check_link) { 1381 - err = -ENOMEM; 1382 - goto err_disable_sriov; 1383 - } 1384 - 1385 - INIT_DELAYED_WORK(&nic->dwork, nic_poll_for_link); 1386 - queue_delayed_work(nic->check_link, &nic->dwork, 0); 1387 - 1388 1414 return 0; 1389 1415 1390 - err_disable_sriov: 1391 - if (nic->flags & NIC_SRIOV_ENABLED) 1392 - pci_disable_sriov(pdev); 1393 1416 err_unregister_interrupts: 
1394 1417 nic_unregister_interrupts(nic); 1395 1418 err_release_regions: ··· 1395 1446 1396 1447 if (nic->flags & NIC_SRIOV_ENABLED) 1397 1448 pci_disable_sriov(pdev); 1398 - 1399 - if (nic->check_link) { 1400 - /* Destroy work Queue */ 1401 - cancel_delayed_work_sync(&nic->dwork); 1402 - destroy_workqueue(nic->check_link); 1403 - } 1404 1449 1405 1450 nic_unregister_interrupts(nic); 1406 1451 pci_release_regions(pdev);
+86 -42
drivers/net/ethernet/cavium/thunder/nicvf_main.c
··· 68 68 MODULE_PARM_DESC(cpi_alg, 69 69 "PFC algorithm (0=none, 1=VLAN, 2=VLAN16, 3=IP Diffserv)"); 70 70 71 - /* workqueue for handling kernel ndo_set_rx_mode() calls */ 72 - static struct workqueue_struct *nicvf_rx_mode_wq; 73 - 74 71 static inline u8 nicvf_netdev_qidx(struct nicvf *nic, u8 qidx) 75 72 { 76 73 if (nic->sqs_mode) ··· 124 127 { 125 128 int timeout = NIC_MBOX_MSG_TIMEOUT; 126 129 int sleep = 10; 130 + int ret = 0; 131 + 132 + mutex_lock(&nic->rx_mode_mtx); 127 133 128 134 nic->pf_acked = false; 129 135 nic->pf_nacked = false; ··· 139 139 netdev_err(nic->netdev, 140 140 "PF NACK to mbox msg 0x%02x from VF%d\n", 141 141 (mbx->msg.msg & 0xFF), nic->vf_id); 142 - return -EINVAL; 142 + ret = -EINVAL; 143 + break; 143 144 } 144 145 msleep(sleep); 145 146 if (nic->pf_acked) ··· 150 149 netdev_err(nic->netdev, 151 150 "PF didn't ACK to mbox msg 0x%02x from VF%d\n", 152 151 (mbx->msg.msg & 0xFF), nic->vf_id); 153 - return -EBUSY; 152 + ret = -EBUSY; 153 + break; 154 154 } 155 155 } 156 - return 0; 156 + mutex_unlock(&nic->rx_mode_mtx); 157 + return ret; 157 158 } 158 159 159 160 /* Checks if VF is able to comminicate with PF ··· 173 170 } 174 171 175 172 return 1; 173 + } 174 + 175 + static void nicvf_send_cfg_done(struct nicvf *nic) 176 + { 177 + union nic_mbx mbx = {}; 178 + 179 + mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE; 180 + if (nicvf_send_msg_to_pf(nic, &mbx)) { 181 + netdev_err(nic->netdev, 182 + "PF didn't respond to CFG DONE msg\n"); 183 + } 176 184 } 177 185 178 186 static void nicvf_read_bgx_stats(struct nicvf *nic, struct bgx_stats_msg *bgx) ··· 242 228 break; 243 229 case NIC_MBOX_MSG_BGX_LINK_CHANGE: 244 230 nic->pf_acked = true; 245 - nic->link_up = mbx.link_status.link_up; 246 - nic->duplex = mbx.link_status.duplex; 247 - nic->speed = mbx.link_status.speed; 248 - nic->mac_type = mbx.link_status.mac_type; 249 - if (nic->link_up) { 250 - netdev_info(nic->netdev, "Link is Up %d Mbps %s duplex\n", 251 - nic->speed, 252 - nic->duplex == DUPLEX_FULL ? 
253 - "Full" : "Half"); 254 - netif_carrier_on(nic->netdev); 255 - netif_tx_start_all_queues(nic->netdev); 256 - } else { 257 - netdev_info(nic->netdev, "Link is Down\n"); 258 - netif_carrier_off(nic->netdev); 259 - netif_tx_stop_all_queues(nic->netdev); 231 + if (nic->link_up != mbx.link_status.link_up) { 232 + nic->link_up = mbx.link_status.link_up; 233 + nic->duplex = mbx.link_status.duplex; 234 + nic->speed = mbx.link_status.speed; 235 + nic->mac_type = mbx.link_status.mac_type; 236 + if (nic->link_up) { 237 + netdev_info(nic->netdev, 238 + "Link is Up %d Mbps %s duplex\n", 239 + nic->speed, 240 + nic->duplex == DUPLEX_FULL ? 241 + "Full" : "Half"); 242 + netif_carrier_on(nic->netdev); 243 + netif_tx_start_all_queues(nic->netdev); 244 + } else { 245 + netdev_info(nic->netdev, "Link is Down\n"); 246 + netif_carrier_off(nic->netdev); 247 + netif_tx_stop_all_queues(nic->netdev); 248 + } 260 249 } 261 250 break; 262 251 case NIC_MBOX_MSG_ALLOC_SQS: ··· 1328 1311 struct nicvf_cq_poll *cq_poll = NULL; 1329 1312 union nic_mbx mbx = {}; 1330 1313 1314 + cancel_delayed_work_sync(&nic->link_change_work); 1315 + 1316 + /* wait till all queued set_rx_mode tasks completes */ 1317 + drain_workqueue(nic->nicvf_rx_mode_wq); 1318 + 1331 1319 mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN; 1332 1320 nicvf_send_msg_to_pf(nic, &mbx); 1333 1321 ··· 1432 1410 return nicvf_send_msg_to_pf(nic, &mbx); 1433 1411 } 1434 1412 1413 + static void nicvf_link_status_check_task(struct work_struct *work_arg) 1414 + { 1415 + struct nicvf *nic = container_of(work_arg, 1416 + struct nicvf, 1417 + link_change_work.work); 1418 + union nic_mbx mbx = {}; 1419 + mbx.msg.msg = NIC_MBOX_MSG_BGX_LINK_CHANGE; 1420 + nicvf_send_msg_to_pf(nic, &mbx); 1421 + queue_delayed_work(nic->nicvf_rx_mode_wq, 1422 + &nic->link_change_work, 2 * HZ); 1423 + } 1424 + 1435 1425 int nicvf_open(struct net_device *netdev) 1436 1426 { 1437 1427 int cpu, err, qidx; 1438 1428 struct nicvf *nic = netdev_priv(netdev); 1439 1429 struct 
queue_set *qs = nic->qs; 1440 1430 struct nicvf_cq_poll *cq_poll = NULL; 1441 - union nic_mbx mbx = {}; 1431 + 1432 + /* wait till all queued set_rx_mode tasks completes if any */ 1433 + drain_workqueue(nic->nicvf_rx_mode_wq); 1442 1434 1443 1435 netif_carrier_off(netdev); 1444 1436 ··· 1548 1512 nicvf_enable_intr(nic, NICVF_INTR_RBDR, qidx); 1549 1513 1550 1514 /* Send VF config done msg to PF */ 1551 - mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE; 1552 - nicvf_write_to_mbx(nic, &mbx); 1515 + nicvf_send_cfg_done(nic); 1516 + 1517 + INIT_DELAYED_WORK(&nic->link_change_work, 1518 + nicvf_link_status_check_task); 1519 + queue_delayed_work(nic->nicvf_rx_mode_wq, 1520 + &nic->link_change_work, 0); 1553 1521 1554 1522 return 0; 1555 1523 cleanup: ··· 1981 1941 1982 1942 /* flush DMAC filters and reset RX mode */ 1983 1943 mbx.xcast.msg = NIC_MBOX_MSG_RESET_XCAST; 1984 - nicvf_send_msg_to_pf(nic, &mbx); 1944 + if (nicvf_send_msg_to_pf(nic, &mbx) < 0) 1945 + goto free_mc; 1985 1946 1986 1947 if (mode & BGX_XCAST_MCAST_FILTER) { 1987 1948 /* once enabling filtering, we need to signal to PF to add 1988 1949 * its' own LMAC to the filter to accept packets for it. 
1989 1950 */ 1990 1951 mbx.xcast.msg = NIC_MBOX_MSG_ADD_MCAST; 1991 - mbx.xcast.data.mac = 0; 1992 - nicvf_send_msg_to_pf(nic, &mbx); 1952 + mbx.xcast.mac = 0; 1953 + if (nicvf_send_msg_to_pf(nic, &mbx) < 0) 1954 + goto free_mc; 1993 1955 } 1994 1956 1995 1957 /* check if we have any specific MACs to be added to PF DMAC filter */ ··· 1999 1957 /* now go through kernel list of MACs and add them one by one */ 2000 1958 for (idx = 0; idx < mc_addrs->count; idx++) { 2001 1959 mbx.xcast.msg = NIC_MBOX_MSG_ADD_MCAST; 2002 - mbx.xcast.data.mac = mc_addrs->mc[idx]; 2003 - nicvf_send_msg_to_pf(nic, &mbx); 1960 + mbx.xcast.mac = mc_addrs->mc[idx]; 1961 + if (nicvf_send_msg_to_pf(nic, &mbx) < 0) 1962 + goto free_mc; 2004 1963 } 2005 - kfree(mc_addrs); 2006 1964 } 2007 1965 2008 1966 /* and finally set rx mode for PF accordingly */ 2009 1967 mbx.xcast.msg = NIC_MBOX_MSG_SET_XCAST; 2010 - mbx.xcast.data.mode = mode; 1968 + mbx.xcast.mode = mode; 2011 1969 2012 1970 nicvf_send_msg_to_pf(nic, &mbx); 1971 + free_mc: 1972 + kfree(mc_addrs); 2013 1973 } 2014 1974 2015 1975 static void nicvf_set_rx_mode_task(struct work_struct *work_arg) 2016 1976 { 2017 1977 struct nicvf_work *vf_work = container_of(work_arg, struct nicvf_work, 2018 - work.work); 1978 + work); 2019 1979 struct nicvf *nic = container_of(vf_work, struct nicvf, rx_mode_work); 2020 1980 u8 mode; 2021 1981 struct xcast_addr_list *mc; ··· 2074 2030 kfree(nic->rx_mode_work.mc); 2075 2031 nic->rx_mode_work.mc = mc_list; 2076 2032 nic->rx_mode_work.mode = mode; 2077 - queue_delayed_work(nicvf_rx_mode_wq, &nic->rx_mode_work.work, 0); 2033 + queue_work(nic->nicvf_rx_mode_wq, &nic->rx_mode_work.work); 2078 2034 spin_unlock(&nic->rx_mode_wq_lock); 2079 2035 } 2080 2036 ··· 2231 2187 2232 2188 INIT_WORK(&nic->reset_task, nicvf_reset_task); 2233 2189 2234 - INIT_DELAYED_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task); 2190 + nic->nicvf_rx_mode_wq = alloc_ordered_workqueue("nicvf_rx_mode_wq_VF%d", 2191 + WQ_MEM_RECLAIM, 2192 
+ nic->vf_id); 2193 + INIT_WORK(&nic->rx_mode_work.work, nicvf_set_rx_mode_task); 2235 2194 spin_lock_init(&nic->rx_mode_wq_lock); 2195 + mutex_init(&nic->rx_mode_mtx); 2236 2196 2237 2197 err = register_netdev(netdev); 2238 2198 if (err) { ··· 2276 2228 nic = netdev_priv(netdev); 2277 2229 pnetdev = nic->pnicvf->netdev; 2278 2230 2279 - cancel_delayed_work_sync(&nic->rx_mode_work.work); 2280 - 2281 2231 /* Check if this Qset is assigned to different VF. 2282 2232 * If yes, clean primary and all secondary Qsets. 2283 2233 */ 2284 2234 if (pnetdev && (pnetdev->reg_state == NETREG_REGISTERED)) 2285 2235 unregister_netdev(pnetdev); 2236 + if (nic->nicvf_rx_mode_wq) { 2237 + destroy_workqueue(nic->nicvf_rx_mode_wq); 2238 + nic->nicvf_rx_mode_wq = NULL; 2239 + } 2286 2240 nicvf_unregister_interrupts(nic); 2287 2241 pci_set_drvdata(pdev, NULL); 2288 2242 if (nic->drv_stats) ··· 2311 2261 static int __init nicvf_init_module(void) 2312 2262 { 2313 2263 pr_info("%s, ver %s\n", DRV_NAME, DRV_VERSION); 2314 - nicvf_rx_mode_wq = alloc_ordered_workqueue("nicvf_generic", 2315 - WQ_MEM_RECLAIM); 2316 2264 return pci_register_driver(&nicvf_driver); 2317 2265 } 2318 2266 2319 2267 static void __exit nicvf_cleanup_module(void) 2320 2268 { 2321 - if (nicvf_rx_mode_wq) { 2322 - destroy_workqueue(nicvf_rx_mode_wq); 2323 - nicvf_rx_mode_wq = NULL; 2324 - } 2325 2269 pci_unregister_driver(&nicvf_driver); 2326 2270 } 2327 2271
+1 -1
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
··· 1217 1217 1218 1218 /* Disable MAC steering (NCSI traffic) */ 1219 1219 for (i = 0; i < RX_TRAFFIC_STEER_RULE_COUNT; i++) 1220 - bgx_reg_write(bgx, 0, BGX_CMR_RX_STREERING + (i * 8), 0x00); 1220 + bgx_reg_write(bgx, 0, BGX_CMR_RX_STEERING + (i * 8), 0x00); 1221 1221 } 1222 1222 1223 1223 static u8 bgx_get_lane2sds_cfg(struct bgx *bgx, struct lmac *lmac)
+1 -1
drivers/net/ethernet/cavium/thunder/thunder_bgx.h
··· 60 60 #define RX_DMACX_CAM_EN BIT_ULL(48) 61 61 #define RX_DMACX_CAM_LMACID(x) (((u64)x) << 49) 62 62 #define RX_DMAC_COUNT 32 63 - #define BGX_CMR_RX_STREERING 0x300 63 + #define BGX_CMR_RX_STEERING 0x300 64 64 #define RX_TRAFFIC_STEER_RULE_COUNT 8 65 65 #define BGX_CMR_CHAN_MSK_AND 0x450 66 66 #define BGX_CMR_BIST_STATUS 0x460
+1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c
··· 660 660 lld->cclk_ps = 1000000000 / adap->params.vpd.cclk; 661 661 lld->udb_density = 1 << adap->params.sge.eq_qpp; 662 662 lld->ucq_density = 1 << adap->params.sge.iq_qpp; 663 + lld->sge_host_page_size = 1 << (adap->params.sge.hps + 10); 663 664 lld->filt_mode = adap->params.tp.vlan_pri_map; 664 665 /* MODQ_REQ_MAP sets queues 0-3 to chan 0-3 */ 665 666 for (i = 0; i < NCHAN; i++)
+1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h
··· 336 336 unsigned int cclk_ps; /* Core clock period in psec */ 337 337 unsigned short udb_density; /* # of user DB/page */ 338 338 unsigned short ucq_density; /* # of user CQs/page */ 339 + unsigned int sge_host_page_size; /* SGE host page size */ 339 340 unsigned short filt_mode; /* filter optional components */ 340 341 unsigned short tx_modq[NCHAN]; /* maps each tx channel to a */ 341 342 /* scheduler queue */
+3
drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
··· 3128 3128 dsaf_set_bit(credit, DSAF_SBM_ROCEE_CFG_CRD_EN_B, 1); 3129 3129 dsaf_write_dev(dsaf_dev, DSAF_SBM_ROCEE_CFG_REG_REG, credit); 3130 3130 } 3131 + 3132 + put_device(&pdev->dev); 3133 + 3131 3134 return 0; 3132 3135 } 3133 3136 EXPORT_SYMBOL(hns_dsaf_roce_reset);
+24 -3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 3289 3289 i40e_alloc_rx_buffers_zc(ring, I40E_DESC_UNUSED(ring)) : 3290 3290 !i40e_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring)); 3291 3291 if (!ok) { 3292 + /* Log this in case the user has forgotten to give the kernel 3293 + * any buffers, even later in the application. 3294 + */ 3292 3295 dev_info(&vsi->back->pdev->dev, 3293 - "Failed allocate some buffers on %sRx ring %d (pf_q %d)\n", 3296 + "Failed to allocate some buffers on %sRx ring %d (pf_q %d)\n", 3294 3297 ring->xsk_umem ? "UMEM enabled " : "", 3295 3298 ring->queue_index, pf_q); 3296 3299 } ··· 6728 6725 6729 6726 for (i = 0; i < vsi->num_queue_pairs; i++) { 6730 6727 i40e_clean_tx_ring(vsi->tx_rings[i]); 6731 - if (i40e_enabled_xdp_vsi(vsi)) 6728 + if (i40e_enabled_xdp_vsi(vsi)) { 6729 + /* Make sure that in-progress ndo_xdp_xmit 6730 + * calls are completed. 6731 + */ 6732 + synchronize_rcu(); 6732 6733 i40e_clean_tx_ring(vsi->xdp_rings[i]); 6734 + } 6733 6735 i40e_clean_rx_ring(vsi->rx_rings[i]); 6734 6736 } 6735 6737 ··· 11903 11895 if (old_prog) 11904 11896 bpf_prog_put(old_prog); 11905 11897 11898 + /* Kick start the NAPI context if there is an AF_XDP socket open 11899 + * on that queue id. This so that receiving will start. 11900 + */ 11901 + if (need_reset && prog) 11902 + for (i = 0; i < vsi->num_queue_pairs; i++) 11903 + if (vsi->xdp_rings[i]->xsk_umem) 11904 + (void)i40e_xsk_async_xmit(vsi->netdev, i); 11905 + 11906 11906 return 0; 11907 11907 } 11908 11908 ··· 11971 11955 static void i40e_queue_pair_clean_rings(struct i40e_vsi *vsi, int queue_pair) 11972 11956 { 11973 11957 i40e_clean_tx_ring(vsi->tx_rings[queue_pair]); 11974 - if (i40e_enabled_xdp_vsi(vsi)) 11958 + if (i40e_enabled_xdp_vsi(vsi)) { 11959 + /* Make sure that in-progress ndo_xdp_xmit calls are 11960 + * completed. 11961 + */ 11962 + synchronize_rcu(); 11975 11963 i40e_clean_tx_ring(vsi->xdp_rings[queue_pair]); 11964 + } 11976 11965 i40e_clean_rx_ring(vsi->rx_rings[queue_pair]); 11977 11966 } 11978 11967
+3 -1
drivers/net/ethernet/intel/i40e/i40e_txrx.c
··· 3709 3709 struct i40e_netdev_priv *np = netdev_priv(dev); 3710 3710 unsigned int queue_index = smp_processor_id(); 3711 3711 struct i40e_vsi *vsi = np->vsi; 3712 + struct i40e_pf *pf = vsi->back; 3712 3713 struct i40e_ring *xdp_ring; 3713 3714 int drops = 0; 3714 3715 int i; ··· 3717 3716 if (test_bit(__I40E_VSI_DOWN, vsi->state)) 3718 3717 return -ENETDOWN; 3719 3718 3720 - if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs) 3719 + if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs || 3720 + test_bit(__I40E_CONFIG_BUSY, pf->state)) 3721 3721 return -ENXIO; 3722 3722 3723 3723 if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+5
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 183 183 err = i40e_queue_pair_enable(vsi, qid); 184 184 if (err) 185 185 return err; 186 + 187 + /* Kick start the NAPI context so that receiving will start */ 188 + err = i40e_xsk_async_xmit(vsi->netdev, qid); 189 + if (err) 190 + return err; 186 191 } 187 192 188 193 return 0;
+16 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 3953 3953 else 3954 3954 mrqc = IXGBE_MRQC_VMDQRSS64EN; 3955 3955 3956 - /* Enable L3/L4 for Tx Switched packets */ 3957 - mrqc |= IXGBE_MRQC_L3L4TXSWEN; 3956 + /* Enable L3/L4 for Tx Switched packets only for X550, 3957 + * older devices do not support this feature 3958 + */ 3959 + if (hw->mac.type >= ixgbe_mac_X550) 3960 + mrqc |= IXGBE_MRQC_L3L4TXSWEN; 3958 3961 } else { 3959 3962 if (tcs > 4) 3960 3963 mrqc = IXGBE_MRQC_RTRSS8TCEN; ··· 10228 10225 int i, frame_size = dev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN; 10229 10226 struct ixgbe_adapter *adapter = netdev_priv(dev); 10230 10227 struct bpf_prog *old_prog; 10228 + bool need_reset; 10231 10229 10232 10230 if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) 10233 10231 return -EINVAL; ··· 10251 10247 return -ENOMEM; 10252 10248 10253 10249 old_prog = xchg(&adapter->xdp_prog, prog); 10250 + need_reset = (!!prog != !!old_prog); 10254 10251 10255 10252 /* If transitioning XDP modes reconfigure rings */ 10256 - if (!!prog != !!old_prog) { 10253 + if (need_reset) { 10257 10254 int err = ixgbe_setup_tc(dev, adapter->hw_tcs); 10258 10255 10259 10256 if (err) { ··· 10269 10264 10270 10265 if (old_prog) 10271 10266 bpf_prog_put(old_prog); 10267 + 10268 + /* Kick start the NAPI context if there is an AF_XDP socket open 10269 + * on that queue id. This so that receiving will start. 10270 + */ 10271 + if (need_reset && prog) 10272 + for (i = 0; i < adapter->num_rx_queues; i++) 10273 + if (adapter->xdp_ring[i]->xsk_umem) 10274 + (void)ixgbe_xsk_async_xmit(adapter->netdev, i); 10272 10275 10273 10276 return 0; 10274 10277 }
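The ixgbe XDP hunk above introduces `need_reset = (!!prog != !!old_prog)` so the post-setup NAPI kick only fires when a program was actually attached. The `!!` double negation normalizes each pointer to 0 or 1 before comparing, isolated here as a testable predicate:

```c
#include <stdbool.h>

/* The !!-normalization used in the fix: a ring reconfiguration (and the
 * follow-up NAPI kick for AF_XDP sockets) is only needed when an XDP
 * program appears or disappears, not when one loaded program replaces
 * another. */
static bool xdp_needs_reset(const void *old_prog, const void *new_prog)
{
	return !!old_prog != !!new_prog;
}
```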
+12 -3
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
··· 144 144 ixgbe_txrx_ring_disable(adapter, qid); 145 145 146 146 err = ixgbe_add_xsk_umem(adapter, umem, qid); 147 + if (err) 148 + return err; 147 149 148 - if (if_running) 150 + if (if_running) { 149 151 ixgbe_txrx_ring_enable(adapter, qid); 150 152 151 - return err; 153 + /* Kick start the NAPI context so that receiving will start */ 154 + err = ixgbe_xsk_async_xmit(adapter->netdev, qid); 155 + if (err) 156 + return err; 157 + } 158 + 159 + return 0; 152 160 } 153 161 154 162 static int ixgbe_xsk_umem_disable(struct ixgbe_adapter *adapter, u16 qid) ··· 642 634 dma_addr_t dma; 643 635 644 636 while (budget-- > 0) { 645 - if (unlikely(!ixgbe_desc_unused(xdp_ring))) { 637 + if (unlikely(!ixgbe_desc_unused(xdp_ring)) || 638 + !netif_carrier_ok(xdp_ring->netdev)) { 646 639 work_done = false; 647 640 break; 648 641 }
+6 -1
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 2879 2879 2880 2880 ret = mv643xx_eth_shared_of_probe(pdev); 2881 2881 if (ret) 2882 - return ret; 2882 + goto err_put_clk; 2883 2883 pd = dev_get_platdata(&pdev->dev); 2884 2884 2885 2885 msp->tx_csum_limit = (pd != NULL && pd->tx_csum_limit) ? ··· 2887 2887 infer_hw_params(msp); 2888 2888 2889 2889 return 0; 2890 + 2891 + err_put_clk: 2892 + if (!IS_ERR(msp->clk)) 2893 + clk_disable_unprepare(msp->clk); 2894 + return ret; 2890 2895 } 2891 2896 2892 2897 static int mv643xx_eth_shared_remove(struct platform_device *pdev)
+1 -1
drivers/net/ethernet/marvell/mvneta.c
··· 2146 2146 if (unlikely(!skb)) 2147 2147 goto err_drop_frame_ret_pool; 2148 2148 2149 - dma_sync_single_range_for_cpu(dev->dev.parent, 2149 + dma_sync_single_range_for_cpu(&pp->bm_priv->pdev->dev, 2150 2150 rx_desc->buf_phys_addr, 2151 2151 MVNETA_MH_SIZE + NET_SKB_PAD, 2152 2152 rx_bytes,
+1 -1
drivers/net/ethernet/marvell/sky2.c
··· 5073 5073 INIT_WORK(&hw->restart_work, sky2_restart); 5074 5074 5075 5075 pci_set_drvdata(pdev, hw); 5076 - pdev->d3_delay = 200; 5076 + pdev->d3_delay = 300; 5077 5077 5078 5078 return 0; 5079 5079
+1 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 3360 3360 dev->addr_len = ETH_ALEN; 3361 3361 mlx4_en_u64_to_mac(dev->dev_addr, mdev->dev->caps.def_mac[priv->port]); 3362 3362 if (!is_valid_ether_addr(dev->dev_addr)) { 3363 - en_err(priv, "Port: %d, invalid mac burned: %pM, quiting\n", 3363 + en_err(priv, "Port: %d, invalid mac burned: %pM, quitting\n", 3364 3364 priv->port, dev->dev_addr); 3365 3365 err = -EINVAL; 3366 3366 goto out;
+7 -5
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 862 862 for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { 863 863 bool configure = false; 864 864 bool pfc = false; 865 + u16 thres_cells; 866 + u16 delay_cells; 865 867 bool lossy; 866 - u16 thres; 867 868 868 869 for (j = 0; j < IEEE_8021QAZ_MAX_TCS; j++) { 869 870 if (prio_tc[j] == i) { ··· 878 877 continue; 879 878 880 879 lossy = !(pfc || pause_en); 881 - thres = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu); 882 - delay = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay, pfc, 883 - pause_en); 884 - mlxsw_sp_pg_buf_pack(pbmc_pl, i, thres + delay, thres, lossy); 880 + thres_cells = mlxsw_sp_pg_buf_threshold_get(mlxsw_sp, mtu); 881 + delay_cells = mlxsw_sp_pg_buf_delay_get(mlxsw_sp, mtu, delay, 882 + pfc, pause_en); 883 + mlxsw_sp_pg_buf_pack(pbmc_pl, i, thres_cells + delay_cells, 884 + thres_cells, lossy); 885 885 } 886 886 887 887 return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(pbmc), pbmc_pl);
+1 -1
drivers/net/ethernet/microchip/enc28j60.c
··· 1681 1681 MODULE_AUTHOR("Claudio Lanconelli <lanconelli.claudio@eptar.com>"); 1682 1682 MODULE_LICENSE("GPL"); 1683 1683 module_param_named(debug, debug.msg_enable, int, 0); 1684 - MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., ffff=all)"); 1684 + MODULE_PARM_DESC(debug, "Debug verbosity level in amount of bits set (0=none, ..., 31=all)"); 1685 1685 MODULE_ALIAS("spi:" DRV_NAME);
+12 -4
drivers/net/ethernet/microchip/lan743x_main.c
··· 1400 1400 } 1401 1401 1402 1402 static void lan743x_tx_frame_add_lso(struct lan743x_tx *tx, 1403 - unsigned int frame_length) 1403 + unsigned int frame_length, 1404 + int nr_frags) 1404 1405 { 1405 1406 /* called only from within lan743x_tx_xmit_frame. 1406 1407 * assuming tx->ring_lock has already been acquired. ··· 1411 1410 1412 1411 /* wrap up previous descriptor */ 1413 1412 tx->frame_data0 |= TX_DESC_DATA0_EXT_; 1413 + if (nr_frags <= 0) { 1414 + tx->frame_data0 |= TX_DESC_DATA0_LS_; 1415 + tx->frame_data0 |= TX_DESC_DATA0_IOC_; 1416 + } 1414 1417 tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail]; 1415 1418 tx_descriptor->data0 = tx->frame_data0; 1416 1419 ··· 1519 1514 u32 tx_tail_flags = 0; 1520 1515 1521 1516 /* wrap up previous descriptor */ 1522 - tx->frame_data0 |= TX_DESC_DATA0_LS_; 1523 - tx->frame_data0 |= TX_DESC_DATA0_IOC_; 1517 + if ((tx->frame_data0 & TX_DESC_DATA0_DTYPE_MASK_) == 1518 + TX_DESC_DATA0_DTYPE_DATA_) { 1519 + tx->frame_data0 |= TX_DESC_DATA0_LS_; 1520 + tx->frame_data0 |= TX_DESC_DATA0_IOC_; 1521 + } 1524 1522 1525 1523 tx_descriptor = &tx->ring_cpu_ptr[tx->frame_tail]; 1526 1524 buffer_info = &tx->buffer_info[tx->frame_tail]; ··· 1608 1600 } 1609 1601 1610 1602 if (gso) 1611 - lan743x_tx_frame_add_lso(tx, frame_length); 1603 + lan743x_tx_frame_add_lso(tx, frame_length, nr_frags); 1612 1604 1613 1605 if (nr_frags <= 0) 1614 1606 goto finish;
+6 -11
drivers/net/ethernet/netronome/nfp/bpf/jit.c
··· 1291 1291 1292 1292 static int 1293 1293 wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, 1294 - enum alu_op alu_op, bool skip) 1294 + enum alu_op alu_op) 1295 1295 { 1296 1296 const struct bpf_insn *insn = &meta->insn; 1297 - 1298 - if (skip) { 1299 - meta->skip = true; 1300 - return 0; 1301 - } 1302 1297 1303 1298 wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm); 1304 1299 wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0); ··· 2304 2309 2305 2310 static int xor_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) 2306 2311 { 2307 - return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR, !~meta->insn.imm); 2312 + return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR); 2308 2313 } 2309 2314 2310 2315 static int and_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) ··· 2314 2319 2315 2320 static int and_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) 2316 2321 { 2317 - return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND, !~meta->insn.imm); 2322 + return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND); 2318 2323 } 2319 2324 2320 2325 static int or_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) ··· 2324 2329 2325 2330 static int or_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) 2326 2331 { 2327 - return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR, !meta->insn.imm); 2332 + return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR); 2328 2333 } 2329 2334 2330 2335 static int add_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) ··· 2334 2339 2335 2340 static int add_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) 2336 2341 { 2337 - return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD, !meta->insn.imm); 2342 + return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD); 2338 2343 } 2339 2344 2340 2345 static int sub_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) ··· 2344 2349 2345 2350 static int sub_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) 2346 2351 { 2347 - return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB, !meta->insn.imm); 2352 + return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB); 2348 2353 } 2349 2354 2350 2355 static int mul_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
+15 -6
drivers/net/ethernet/qlogic/qed/qed_iwarp.c
··· 1688 1688 1689 1689 eth_hlen = ETH_HLEN + (vlan_valid ? sizeof(u32) : 0); 1690 1690 1691 + if (!ether_addr_equal(ethh->h_dest, 1692 + p_hwfn->p_rdma_info->iwarp.mac_addr)) { 1693 + DP_VERBOSE(p_hwfn, 1694 + QED_MSG_RDMA, 1695 + "Got unexpected mac %pM instead of %pM\n", 1696 + ethh->h_dest, p_hwfn->p_rdma_info->iwarp.mac_addr); 1697 + return -EINVAL; 1698 + } 1699 + 1691 1700 ether_addr_copy(remote_mac_addr, ethh->h_source); 1692 1701 ether_addr_copy(local_mac_addr, ethh->h_dest); 1693 1702 ··· 2614 2605 struct qed_iwarp_info *iwarp_info; 2615 2606 struct qed_ll2_acquire_data data; 2616 2607 struct qed_ll2_cbs cbs; 2617 - u32 mpa_buff_size; 2608 + u32 buff_size; 2618 2609 u16 n_ooo_bufs; 2619 2610 int rc = 0; 2620 2611 int i; ··· 2641 2632 2642 2633 memset(&data, 0, sizeof(data)); 2643 2634 data.input.conn_type = QED_LL2_TYPE_IWARP; 2644 - data.input.mtu = QED_IWARP_MAX_SYN_PKT_SIZE; 2635 + data.input.mtu = params->max_mtu; 2645 2636 data.input.rx_num_desc = QED_IWARP_LL2_SYN_RX_SIZE; 2646 2637 data.input.tx_num_desc = QED_IWARP_LL2_SYN_TX_SIZE; 2647 2638 data.input.tx_max_bds_per_packet = 1; /* will never be fragmented */ ··· 2663 2654 goto err; 2664 2655 } 2665 2656 2657 + buff_size = QED_IWARP_MAX_BUF_SIZE(params->max_mtu); 2666 2658 rc = qed_iwarp_ll2_alloc_buffers(p_hwfn, 2667 2659 QED_IWARP_LL2_SYN_RX_SIZE, 2668 - QED_IWARP_MAX_SYN_PKT_SIZE, 2660 + buff_size, 2669 2661 iwarp_info->ll2_syn_handle); 2670 2662 if (rc) 2671 2663 goto err; ··· 2720 2710 if (rc) 2721 2711 goto err; 2722 2712 2723 - mpa_buff_size = QED_IWARP_MAX_BUF_SIZE(params->max_mtu); 2724 2713 rc = qed_iwarp_ll2_alloc_buffers(p_hwfn, 2725 2714 data.input.rx_num_desc, 2726 - mpa_buff_size, 2715 + buff_size, 2727 2716 iwarp_info->ll2_mpa_handle); 2728 2717 if (rc) 2729 2718 goto err; ··· 2735 2726 2736 2727 2737 2728 iwarp_info->max_num_partial_fpdus = (u16)p_hwfn->p_rdma_info->num_qps; 2738 2729 2739 - iwarp_info->mpa_intermediate_buf = kzalloc(mpa_buff_size, GFP_KERNEL); 2730 + iwarp_info->mpa_intermediate_buf = kzalloc(buff_size, GFP_KERNEL); 2740 2731 if (!iwarp_info->mpa_intermediate_buf) 2741 2732 goto err;
-1
drivers/net/ethernet/qlogic/qed/qed_iwarp.h
··· 46 46 47 47 #define QED_IWARP_LL2_SYN_TX_SIZE (128) 48 48 #define QED_IWARP_LL2_SYN_RX_SIZE (256) 49 - #define QED_IWARP_MAX_SYN_PKT_SIZE (128) 50 49 51 50 #define QED_IWARP_LL2_OOO_DEF_TX_SIZE (256) 52 51 #define QED_IWARP_MAX_OOO (16)
+6 -3
drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
··· 241 241 static int dwmac4_rx_check_timestamp(void *desc) 242 242 { 243 243 struct dma_desc *p = (struct dma_desc *)desc; 244 + unsigned int rdes0 = le32_to_cpu(p->des0); 245 + unsigned int rdes1 = le32_to_cpu(p->des1); 246 + unsigned int rdes3 = le32_to_cpu(p->des3); 244 247 u32 own, ctxt; 245 248 int ret = 1; 246 249 247 - own = p->des3 & RDES3_OWN; 248 - ctxt = ((p->des3 & RDES3_CONTEXT_DESCRIPTOR) 250 + own = rdes3 & RDES3_OWN; 251 + ctxt = ((rdes3 & RDES3_CONTEXT_DESCRIPTOR) 249 252 >> RDES3_CONTEXT_DESCRIPTOR_SHIFT); 250 253 251 254 if (likely(!own && ctxt)) { 252 - if ((p->des0 == 0xffffffff) && (p->des1 == 0xffffffff)) 255 + if ((rdes0 == 0xffffffff) && (rdes1 == 0xffffffff)) 253 256 /* Corrupted value */ 254 257 ret = -EINVAL; 255 258 else
+12 -10
drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
··· 696 696 struct ethtool_eee *edata) 697 697 { 698 698 struct stmmac_priv *priv = netdev_priv(dev); 699 + int ret; 699 700 700 - priv->eee_enabled = edata->eee_enabled; 701 - 702 - if (!priv->eee_enabled) 701 + if (!edata->eee_enabled) { 703 702 stmmac_disable_eee_mode(priv); 704 - else { 703 + } else { 705 704 /* We are asking for enabling the EEE but it is safe 706 705 * to verify all by invoking the eee_init function. 707 706 * In case of failure it will return an error. 708 707 */ 709 - priv->eee_enabled = stmmac_eee_init(priv); 710 - if (!priv->eee_enabled) 708 + edata->eee_enabled = stmmac_eee_init(priv); 709 + if (!edata->eee_enabled) 711 710 return -EOPNOTSUPP; 712 - 713 - /* Do not change tx_lpi_timer in case of failure */ 714 - priv->tx_lpi_timer = edata->tx_lpi_timer; 715 711 } 716 712 717 - return phy_ethtool_set_eee(dev->phydev, edata); 713 + ret = phy_ethtool_set_eee(dev->phydev, edata); 714 + if (ret) 715 + return ret; 716 + 717 + priv->eee_enabled = edata->eee_enabled; 718 + priv->tx_lpi_timer = edata->tx_lpi_timer; 719 + return 0; 718 720 } 719 721 720 722 static u32 stmmac_usec2riwt(u32 usec, struct stmmac_priv *priv)
+1 -1
drivers/net/ethernet/ti/netcp_core.c
··· 259 259 const char *name; 260 260 char node_name[32]; 261 261 262 - if (of_property_read_string(node, "label", &name) < 0) { 262 + if (of_property_read_string(child, "label", &name) < 0) { 263 263 snprintf(node_name, sizeof(node_name), "%pOFn", child); 264 264 name = node_name; 265 265 }
+8 -3
drivers/net/geneve.c
··· 692 692 static int geneve_open(struct net_device *dev) 693 693 { 694 694 struct geneve_dev *geneve = netdev_priv(dev); 695 - bool ipv6 = !!(geneve->info.mode & IP_TUNNEL_INFO_IPV6); 696 695 bool metadata = geneve->collect_md; 696 + bool ipv4, ipv6; 697 697 int ret = 0; 698 698 699 + ipv6 = geneve->info.mode & IP_TUNNEL_INFO_IPV6 || metadata; 700 + ipv4 = !ipv6 || metadata; 699 701 #if IS_ENABLED(CONFIG_IPV6) 700 - if (ipv6 || metadata) 702 + if (ipv6) { 701 703 ret = geneve_sock_add(geneve, true); 704 + if (ret < 0 && ret != -EAFNOSUPPORT) 705 + ipv4 = false; 706 + } 702 707 #endif 703 - if (!ret && (!ipv6 || metadata)) 708 + if (ipv4) 704 709 ret = geneve_sock_add(geneve, false); 705 710 if (ret < 0) 706 711 geneve_sock_release(geneve);
+19 -3
drivers/net/hyperv/netvsc_drv.c
··· 744 744 schedule_delayed_work(&ndev_ctx->dwork, 0); 745 745 } 746 746 747 + static void netvsc_comp_ipcsum(struct sk_buff *skb) 748 + { 749 + struct iphdr *iph = (struct iphdr *)skb->data; 750 + 751 + iph->check = 0; 752 + iph->check = ip_fast_csum(iph, iph->ihl); 753 + } 754 + 747 755 static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net, 748 756 struct netvsc_channel *nvchan) 749 757 { ··· 778 770 /* skb is already created with CHECKSUM_NONE */ 779 771 skb_checksum_none_assert(skb); 780 772 781 - /* 782 - * In Linux, the IP checksum is always checked. 783 - * Do L4 checksum offload if enabled and present. 773 + /* Incoming packets may have IP header checksum verified by the host. 774 + * They may not have IP header checksum computed after coalescing. 775 + * We compute it here if the flags are set, because on Linux, the IP 776 + * checksum is always checked. 777 + */ 778 + if (csum_info && csum_info->receive.ip_checksum_value_invalid && 779 + csum_info->receive.ip_checksum_succeeded && 780 + skb->protocol == htons(ETH_P_IP)) 781 + netvsc_comp_ipcsum(skb); 782 + 783 + /* Do L4 checksum offload if enabled and present. 784 784 */ 785 785 if (csum_info && (net->features & NETIF_F_RXCSUM)) { 786 786 if (csum_info->receive.tcp_checksum_succeeded ||
+4
drivers/net/ipvlan/ipvlan_main.c
··· 499 499 500 500 if (!data) 501 501 return 0; 502 + if (!ns_capable(dev_net(ipvlan->phy_dev)->user_ns, CAP_NET_ADMIN)) 503 + return -EPERM; 502 504 503 505 if (data[IFLA_IPVLAN_MODE]) { 504 506 u16 nmode = nla_get_u16(data[IFLA_IPVLAN_MODE]); ··· 603 601 struct ipvl_dev *tmp = netdev_priv(phy_dev); 604 602 605 603 phy_dev = tmp->phy_dev; 604 + if (!ns_capable(dev_net(phy_dev)->user_ns, CAP_NET_ADMIN)) 605 + return -EPERM; 606 606 } else if (!netif_is_ipvlan_port(phy_dev)) { 607 607 /* Exit early if the underlying link is invalid or busy */ 608 608 if (phy_dev->type != ARPHRD_ETHER ||
+3
drivers/net/phy/dp83867.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/of.h> 21 21 #include <linux/phy.h> 22 + #include <linux/delay.h> 22 23 23 24 #include <dt-bindings/net/ti-dp83867.h> 24 25 ··· 325 324 err = phy_write(phydev, DP83867_CTRL, DP83867_SW_RESET); 326 325 if (err < 0) 327 326 return err; 327 + 328 + usleep_range(10, 20); 328 329 329 330 return dp83867_config_init(phydev); 330 331 }
+5 -1
drivers/net/phy/marvell10g.c
··· 26 26 #include <linux/marvell_phy.h> 27 27 #include <linux/phy.h> 28 28 29 + #define MDIO_AN_10GBT_CTRL_ADV_NBT_MASK 0x01e0 30 + 29 31 enum { 30 32 MV_PCS_BASE_T = 0x0000, 31 33 MV_PCS_BASE_R = 0x1000, ··· 388 386 else 389 387 reg = 0; 390 388 389 + /* Make sure we clear unsupported 2.5G/5G advertising */ 391 390 ret = mv3310_modify(phydev, MDIO_MMD_AN, MDIO_AN_10GBT_CTRL, 392 - MDIO_AN_10GBT_CTRL_ADV10G, reg); 391 + MDIO_AN_10GBT_CTRL_ADV10G | 392 + MDIO_AN_10GBT_CTRL_ADV_NBT_MASK, reg); 393 393 if (ret < 0) 394 394 return ret; 395 395 if (ret > 0)
-1
drivers/net/phy/mdio_bus.c
··· 379 379 err = device_register(&bus->dev); 380 380 if (err) { 381 381 pr_err("mii_bus %s failed to register\n", bus->id); 382 - put_device(&bus->dev); 383 382 return -EINVAL; 384 383 } 385 384
+12 -1
drivers/net/phy/micrel.c
··· 344 344 return genphy_config_aneg(phydev); 345 345 } 346 346 347 + static int ksz8061_config_init(struct phy_device *phydev) 348 + { 349 + int ret; 350 + 351 + ret = phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_DEVID1, 0xB61A); 352 + if (ret) 353 + return ret; 354 + 355 + return kszphy_config_init(phydev); 356 + } 357 + 347 358 static int ksz9021_load_values_from_of(struct phy_device *phydev, 348 359 const struct device_node *of_node, 349 360 u16 reg, ··· 1051 1040 .name = "Micrel KSZ8061", 1052 1041 .phy_id_mask = MICREL_PHY_ID_MASK, 1053 1042 .features = PHY_BASIC_FEATURES, 1054 - .config_init = kszphy_config_init, 1043 + .config_init = ksz8061_config_init, 1055 1044 .ack_interrupt = kszphy_ack_interrupt, 1056 1045 .config_intr = kszphy_config_intr, 1057 1046 .suspend = genphy_suspend,
+4
drivers/net/phy/phylink.c
··· 320 320 linkmode_zero(state->lp_advertising); 321 321 state->interface = pl->link_config.interface; 322 322 state->an_enabled = pl->link_config.an_enabled; 323 + state->speed = SPEED_UNKNOWN; 324 + state->duplex = DUPLEX_UNKNOWN; 325 + state->pause = MLO_PAUSE_NONE; 326 + state->an_complete = 0; 323 327 state->link = 1; 324 328 325 329 return pl->ops->mac_link_state(ndev, state);
+7
drivers/net/phy/realtek.c
··· 282 282 .name = "RTL8366RB Gigabit Ethernet", 283 283 .features = PHY_GBIT_FEATURES, 284 284 .config_init = &rtl8366rb_config_init, 285 + /* These interrupts are handled by the irq controller 286 + * embedded inside the RTL8366RB, they get unmasked when the 287 + * irq is requested and ACKed by reading the status register, 288 + * which is done by the irqchip code. 289 + */ 290 + .ack_interrupt = genphy_no_ack_interrupt, 291 + .config_intr = genphy_no_config_intr, 285 292 .suspend = genphy_suspend, 286 293 .resume = genphy_resume, 287 294 },
+4 -1
drivers/net/phy/xilinx_gmii2rgmii.c
··· 44 44 u16 val = 0; 45 45 int err; 46 46 47 - err = priv->phy_drv->read_status(phydev); 47 + if (priv->phy_drv->read_status) 48 + err = priv->phy_drv->read_status(phydev); 49 + else 50 + err = genphy_read_status(phydev); 48 51 if (err < 0) 49 52 return err; 50 53
+2 -2
drivers/net/team/team.c
··· 1256 1256 list_add_tail_rcu(&port->list, &team->port_list); 1257 1257 team_port_enable(team, port); 1258 1258 __team_compute_features(team); 1259 - __team_port_change_port_added(port, !!netif_carrier_ok(port_dev)); 1259 + __team_port_change_port_added(port, !!netif_oper_up(port_dev)); 1260 1260 __team_options_change_check(team); 1261 1261 1262 1262 netdev_info(dev, "Port device %s added\n", portname); ··· 2915 2915 2916 2916 switch (event) { 2917 2917 case NETDEV_UP: 2918 - if (netif_carrier_ok(dev)) 2918 + if (netif_oper_up(dev)) 2919 2919 team_port_change_check(port, true); 2920 2920 break; 2921 2921 case NETDEV_DOWN:
+2 -2
drivers/net/tun.c
··· 2167 2167 } 2168 2168 2169 2169 add_wait_queue(&tfile->wq.wait, &wait); 2170 - current->state = TASK_INTERRUPTIBLE; 2171 2170 2172 2171 while (1) { 2172 + set_current_state(TASK_INTERRUPTIBLE); 2173 2173 ptr = ptr_ring_consume(&tfile->tx_ring); 2174 2174 if (ptr) 2175 2175 break; ··· 2185 2185 schedule(); 2186 2186 } 2187 2187 2188 - current->state = TASK_RUNNING; 2188 + __set_current_state(TASK_RUNNING); 2189 2189 remove_wait_queue(&tfile->wq.wait, &wait); 2190 2190 2191 2191 out:
+2 -2
drivers/net/usb/qmi_wwan.c
··· 1201 1201 {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ 1202 1202 {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ 1203 1203 {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ 1204 - {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354 */ 1205 - {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC7304/MC7354 */ 1204 + {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354, WP76xx */ 1205 + {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 10)},/* Sierra Wireless MC7304/MC7354 */ 1206 1206 {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ 1207 1207 {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ 1208 1208 {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */
+3 -2
drivers/net/usb/r8152.c
··· 557 557 /* MAC PASSTHRU */ 558 558 #define AD_MASK 0xfee0 559 559 #define BND_MASK 0x0004 560 + #define BD_MASK 0x0001 560 561 #define EFUSE 0xcfdb 561 562 #define PASS_THRU_MASK 0x1 562 563 ··· 1177 1176 return -ENODEV; 1178 1177 } 1179 1178 } else { 1180 - /* test for RTL8153-BND */ 1179 + /* test for RTL8153-BND and RTL8153-BD */ 1181 1180 ocp_data = ocp_read_byte(tp, MCU_TYPE_USB, USB_MISC_1); 1182 - if ((ocp_data & BND_MASK) == 0) { 1181 + if ((ocp_data & BND_MASK) == 0 && (ocp_data & BD_MASK) == 0) { 1183 1182 netif_dbg(tp, probe, tp->netdev, 1184 1183 "Invalid variant for MAC pass through\n"); 1185 1184 return -ENODEV;
+3
drivers/net/vrf.c
··· 1273 1273 1274 1274 /* default to no qdisc; user can add if desired */ 1275 1275 dev->priv_flags |= IFF_NO_QUEUE; 1276 + 1277 + dev->min_mtu = 0; 1278 + dev->max_mtu = 0; 1276 1279 } 1277 1280 1278 1281 static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+1 -1
drivers/net/wireless/mac80211_hwsim.c
··· 3554 3554 goto out_err; 3555 3555 } 3556 3556 3557 - genlmsg_reply(skb, info); 3557 + res = genlmsg_reply(skb, info); 3558 3558 break; 3559 3559 } 3560 3560
+30 -18
drivers/net/wireless/mediatek/mt76/mt76x0/usb.c
··· 158 158 .get_txpower = mt76x02_get_txpower, 159 159 }; 160 160 161 + static int mt76x0u_init_hardware(struct mt76x02_dev *dev) 162 + { 163 + int err; 164 + 165 + mt76x0_chip_onoff(dev, true, true); 166 + 167 + if (!mt76x02_wait_for_mac(&dev->mt76)) 168 + return -ETIMEDOUT; 169 + 170 + err = mt76x0u_mcu_init(dev); 171 + if (err < 0) 172 + return err; 173 + 174 + mt76x0_init_usb_dma(dev); 175 + err = mt76x0_init_hardware(dev); 176 + if (err < 0) 177 + return err; 178 + 179 + mt76_rmw(dev, MT_US_CYC_CFG, MT_US_CYC_CNT, 0x1e); 180 + mt76_wr(dev, MT_TXOP_CTRL_CFG, 181 + FIELD_PREP(MT_TXOP_TRUN_EN, 0x3f) | 182 + FIELD_PREP(MT_TXOP_EXT_CCA_DLY, 0x58)); 183 + 184 + return 0; 185 + } 186 + 161 187 static int mt76x0u_register_device(struct mt76x02_dev *dev) 162 188 { 163 189 struct ieee80211_hw *hw = dev->mt76.hw; ··· 197 171 if (err < 0) 198 172 goto out_err; 199 173 200 - mt76x0_chip_onoff(dev, true, true); 201 - if (!mt76x02_wait_for_mac(&dev->mt76)) { 202 - err = -ETIMEDOUT; 203 - goto out_err; 204 - } 205 - 206 - err = mt76x0u_mcu_init(dev); 174 + err = mt76x0u_init_hardware(dev); 207 175 if (err < 0) 208 176 goto out_err; 209 - 210 - mt76x0_init_usb_dma(dev); 211 - err = mt76x0_init_hardware(dev); 212 - if (err < 0) 213 - goto out_err; 214 - 215 - mt76_rmw(dev, MT_US_CYC_CFG, MT_US_CYC_CNT, 0x1e); 216 - mt76_wr(dev, MT_TXOP_CTRL_CFG, 217 - FIELD_PREP(MT_TXOP_TRUN_EN, 0x3f) | 218 - FIELD_PREP(MT_TXOP_EXT_CCA_DLY, 0x58)); 219 177 220 178 err = mt76x0_register_device(dev); 221 179 if (err < 0) ··· 311 301 312 302 mt76u_stop_queues(&dev->mt76); 313 303 mt76x0u_mac_stop(dev); 304 + clear_bit(MT76_STATE_MCU_RUNNING, &dev->mt76.state); 305 + mt76x0_chip_onoff(dev, false, false); 314 306 usb_kill_urb(usb->mcu.res.urb); 315 307 316 308 return 0; ··· 340 328 tasklet_enable(&usb->rx_tasklet); 341 329 tasklet_enable(&usb->tx_tasklet); 342 330 343 - ret = mt76x0_init_hardware(dev); 331 + ret = mt76x0u_init_hardware(dev); 344 332 if (ret) 345 333 goto err; 346 334
+2
drivers/net/xen-netback/hash.c
··· 454 454 if (xenvif_hash_cache_size == 0) 455 455 return; 456 456 457 + BUG_ON(vif->hash.cache.count); 458 + 457 459 spin_lock_init(&vif->hash.cache.lock); 458 460 INIT_LIST_HEAD(&vif->hash.cache.list); 459 461 }
+7
drivers/net/xen-netback/interface.c
··· 153 153 { 154 154 struct xenvif *vif = netdev_priv(dev); 155 155 unsigned int size = vif->hash.size; 156 + unsigned int num_queues; 157 + 158 + /* If queues are not set up internally - always return 0 159 + * as the packet going to be dropped anyway */ 160 + num_queues = READ_ONCE(vif->num_queues); 161 + if (num_queues < 1) 162 + return 0; 156 163 157 164 if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE) 158 165 return fallback(dev, skb, NULL) % dev->real_num_tx_queues;
+5 -5
drivers/net/xen-netback/netback.c
··· 1072 1072 skb_frag_size_set(&frags[i], len); 1073 1073 } 1074 1074 1075 - /* Copied all the bits from the frag list -- free it. */ 1076 - skb_frag_list_init(skb); 1077 - xenvif_skb_zerocopy_prepare(queue, nskb); 1078 - kfree_skb(nskb); 1079 - 1080 1075 /* Release all the original (foreign) frags. */ 1081 1076 for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) 1082 1077 skb_frag_unref(skb, f); ··· 1140 1145 xenvif_fill_frags(queue, skb); 1141 1146 1142 1147 if (unlikely(skb_has_frag_list(skb))) { 1148 + struct sk_buff *nskb = skb_shinfo(skb)->frag_list; 1149 + xenvif_skb_zerocopy_prepare(queue, nskb); 1143 1150 if (xenvif_handle_frag_list(queue, skb)) { 1144 1151 if (net_ratelimit()) 1145 1152 netdev_err(queue->vif->dev, ··· 1150 1153 kfree_skb(skb); 1151 1154 continue; 1152 1155 } 1156 + /* Copied all the bits from the frag list -- free it. */ 1157 + skb_frag_list_init(skb); 1158 + kfree_skb(nskb); 1153 1159 } 1154 1160 1155 1161 skb->dev = queue->vif->dev;
+1 -1
drivers/pinctrl/meson/pinctrl-meson8b.c
··· 693 693 694 694 static const char * const sdxc_a_groups[] = { 695 695 "sdxc_d0_0_a", "sdxc_d13_0_a", "sdxc_d47_a", "sdxc_clk_a", 696 - "sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d0_13_1_a" 696 + "sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d13_1_a" 697 697 }; 698 698 699 699 static const char * const pcm_a_groups[] = {
+1 -1
drivers/pinctrl/qcom/pinctrl-qcs404.c
··· 79 79 .intr_cfg_reg = 0, \ 80 80 .intr_status_reg = 0, \ 81 81 .intr_target_reg = 0, \ 82 - .tile = NORTH, \ 82 + .tile = SOUTH, \ 83 83 .mux_bit = -1, \ 84 84 .pull_bit = pull, \ 85 85 .drv_bit = drv, \
+9 -5
drivers/scsi/3w-9xxx.c
··· 2009 2009 struct Scsi_Host *host = NULL; 2010 2010 TW_Device_Extension *tw_dev; 2011 2011 unsigned long mem_addr, mem_len; 2012 - int retval = -ENODEV; 2012 + int retval; 2013 2013 2014 2014 retval = pci_enable_device(pdev); 2015 2015 if (retval) { ··· 2020 2020 pci_set_master(pdev); 2021 2021 pci_try_set_mwi(pdev); 2022 2022 2023 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 2024 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 2023 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 2024 + if (retval) 2025 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 2026 + if (retval) { 2025 2027 TW_PRINTK(host, TW_DRIVER, 0x23, "Failed to set dma mask"); 2026 2028 retval = -ENODEV; 2027 2029 goto out_disable_device; ··· 2242 2240 pci_set_master(pdev); 2243 2241 pci_try_set_mwi(pdev); 2244 2242 2245 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 2246 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 2243 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 2244 + if (retval) 2245 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 2246 + if (retval) { 2247 2247 TW_PRINTK(host, TW_DRIVER, 0x40, "Failed to set dma mask during resume"); 2248 2248 retval = -ENODEV; 2249 2249 goto out_disable_device;
+8 -4
drivers/scsi/3w-sas.c
··· 1573 1573 pci_set_master(pdev); 1574 1574 pci_try_set_mwi(pdev); 1575 1575 1576 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 1577 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 1576 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 1577 + if (retval) 1578 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 1579 + if (retval) { 1578 1580 TW_PRINTK(host, TW_DRIVER, 0x18, "Failed to set dma mask"); 1579 1581 retval = -ENODEV; 1580 1582 goto out_disable_device; ··· 1807 1805 pci_set_master(pdev); 1808 1806 pci_try_set_mwi(pdev); 1809 1807 1810 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 1811 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 1808 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 1809 + if (retval) 1810 + retval = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 1811 + if (retval) { 1812 1812 TW_PRINTK(host, TW_DRIVER, 0x25, "Failed to set dma mask during resume"); 1813 1813 retval = -ENODEV; 1814 1814 goto out_disable_device;
+5 -3
drivers/scsi/aic94xx/aic94xx_init.c
··· 769 769 if (err) 770 770 goto Err_remove; 771 771 772 - err = -ENODEV; 773 - if (dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)) || 774 - dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32))) { 772 + err = dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(64)); 773 + if (err) 774 + err = dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)); 775 + if (err) { 776 + err = -ENODEV; 775 777 asd_printk("no suitable DMA mask for %s\n", pci_name(dev)); 776 778 goto Err_remove; 777 779 }
+13 -5
drivers/scsi/bfa/bfad.c
··· 727 727 int 728 728 bfad_pci_init(struct pci_dev *pdev, struct bfad_s *bfad) 729 729 { 730 - int rc = -ENODEV; 730 + int rc = -ENODEV; 731 731 732 732 if (pci_enable_device(pdev)) { 733 733 printk(KERN_ERR "pci_enable_device fail %p\n", pdev); ··· 739 739 740 740 pci_set_master(pdev); 741 741 742 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 743 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 742 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 743 + if (rc) 744 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 745 + 746 + if (rc) { 747 + rc = -ENODEV; 744 748 printk(KERN_ERR "dma_set_mask_and_coherent fail %p\n", pdev); 745 749 goto out_release_region; 746 750 } ··· 1538 1534 { 1539 1535 struct bfad_s *bfad = pci_get_drvdata(pdev); 1540 1536 u8 byte; 1537 + int rc; 1541 1538 1542 1539 dev_printk(KERN_ERR, &pdev->dev, 1543 1540 "bfad_pci_slot_reset flags: 0x%x\n", bfad->bfad_flags); ··· 1566 1561 pci_save_state(pdev); 1567 1562 pci_set_master(pdev); 1568 1563 1569 - if (dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(64)) || 1570 - dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(32))) 1564 + rc = dma_set_mask_and_coherent(&bfad->pcidev->dev, DMA_BIT_MASK(64)); 1565 + if (rc) 1566 + rc = dma_set_mask_and_coherent(&bfad->pcidev->dev, 1567 + DMA_BIT_MASK(32)); 1568 + if (rc) 1571 1569 goto out_disable_device; 1572 1570 1573 1571 if (restart_bfa(bfad) == -1)
+5 -2
drivers/scsi/csiostor/csio_init.c
··· 210 210 pci_set_master(pdev); 211 211 pci_try_set_mwi(pdev); 212 212 213 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 214 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 213 + rv = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 214 + if (rv) 215 + rv = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 216 + if (rv) { 217 + rv = -ENODEV; 215 218 dev_err(&pdev->dev, "No suitable DMA available.\n"); 216 219 goto err_release_regions; 217 220 }
+6 -2
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 2323 2323 struct Scsi_Host *shost; 2324 2324 struct hisi_hba *hisi_hba; 2325 2325 struct device *dev = &pdev->dev; 2326 + int error; 2326 2327 2327 2328 shost = scsi_host_alloc(hw->sht, sizeof(*hisi_hba)); 2328 2329 if (!shost) { ··· 2344 2343 if (hisi_sas_get_fw_info(hisi_hba) < 0) 2345 2344 goto err_out; 2346 2345 2347 - if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) && 2348 - dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) { 2346 + error = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)); 2347 + if (error) 2348 + error = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)); 2349 + 2350 + if (error) { 2349 2351 dev_err(dev, "No usable DMA addressing method\n"); 2350 2352 goto err_out; 2351 2353 }
+5 -3
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 2447 2447 if (rc) 2448 2448 goto err_out_disable_device; 2449 2449 2450 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 2451 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 2450 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 2451 + if (rc) 2452 + rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 2453 + if (rc) { 2452 2454 dev_err(dev, "No usable DMA addressing method\n"); 2453 - rc = -EIO; 2455 + rc = -ENODEV; 2454 2456 goto err_out_regions; 2455 2457 } 2456 2458
+7 -3
drivers/scsi/hptiop.c
··· 1292 1292 dma_addr_t start_phy; 1293 1293 void *start_virt; 1294 1294 u32 offset, i, req_size; 1295 + int rc; 1295 1296 1296 1297 dprintk("hptiop_probe(%p)\n", pcidev); 1297 1298 ··· 1309 1308 1310 1309 /* Enable 64bit DMA if possible */ 1311 1310 iop_ops = (struct hptiop_adapter_ops *)id->driver_data; 1312 - if (dma_set_mask(&pcidev->dev, 1313 - DMA_BIT_MASK(iop_ops->hw_dma_bit_mask)) || 1314 - dma_set_mask(&pcidev->dev, DMA_BIT_MASK(32))) { 1311 + rc = dma_set_mask(&pcidev->dev, 1312 + DMA_BIT_MASK(iop_ops->hw_dma_bit_mask)); 1313 + if (rc) 1314 + rc = dma_set_mask(&pcidev->dev, DMA_BIT_MASK(32)); 1315 + 1316 + if (rc) { 1315 1317 printk(KERN_ERR "hptiop: fail to set dma_mask\n"); 1316 1318 goto disable_pci_device; 1317 1319 }
+6
drivers/scsi/libiscsi.c
··· 1459 1459 if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx)) 1460 1460 return -ENODATA; 1461 1461 1462 + spin_lock_bh(&conn->session->back_lock); 1463 + if (conn->task == NULL) { 1464 + spin_unlock_bh(&conn->session->back_lock); 1465 + return -ENODATA; 1466 + } 1462 1467 __iscsi_get_task(task); 1468 + spin_unlock_bh(&conn->session->back_lock); 1463 1469 spin_unlock_bh(&conn->session->frwd_lock); 1464 1470 rc = conn->session->tt->xmit_task(task); 1465 1471 spin_lock_bh(&conn->session->frwd_lock);
+2
drivers/scsi/libsas/sas_expander.c
··· 828 828 rphy = sas_end_device_alloc(phy->port); 829 829 if (!rphy) 830 830 goto out_free; 831 + rphy->identify.phy_identifier = phy_id; 831 832 832 833 child->rphy = rphy; 833 834 get_device(&rphy->dev); ··· 855 854 856 855 child->rphy = rphy; 857 856 get_device(&rphy->dev); 857 + rphy->identify.phy_identifier = phy_id; 858 858 sas_fill_in_rphy(child, rphy); 859 859 860 860 list_add_tail(&child->disco_list_node, &parent->port->disco_list);
+12 -7
drivers/scsi/lpfc/lpfc_init.c
··· 7361 7361 unsigned long bar0map_len, bar2map_len; 7362 7362 int i, hbq_count; 7363 7363 void *ptr; 7364 - int error = -ENODEV; 7364 + int error; 7365 7365 7366 7366 if (!pdev) 7367 - return error; 7367 + return -ENODEV; 7368 7368 7369 7369 /* Set the device DMA mask size */ 7370 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 7371 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) 7370 + error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 7371 + if (error) 7372 + error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 7373 + if (error) 7372 7374 return error; 7375 + error = -ENODEV; 7373 7376 7374 7377 /* Get the bus address of Bar0 and Bar2 and the number of bytes 7375 7378 * required by each mapping. ··· 9745 9742 uint32_t if_type; 9746 9743 9747 9744 if (!pdev) 9748 - return error; 9745 + return -ENODEV; 9749 9746 9750 9747 /* Set the device DMA mask size */ 9751 - if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) || 9752 - dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) 9748 + error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); 9749 + if (error) 9750 + error = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 9751 + if (error) 9753 9752 return error; 9754 9753 9755 9754 /*
+1 -1
drivers/scsi/scsi_lib.c
··· 655 655 set_host_byte(cmd, DID_OK); 656 656 return BLK_STS_TARGET; 657 657 case DID_NEXUS_FAILURE: 658 + set_host_byte(cmd, DID_OK); 658 659 return BLK_STS_NEXUS; 659 660 case DID_ALLOC_FAILURE: 660 661 set_host_byte(cmd, DID_OK); ··· 2598 2597 * device deleted during suspend) 2599 2598 */ 2600 2599 mutex_lock(&sdev->state_mutex); 2601 - WARN_ON_ONCE(!sdev->quiesced_by); 2602 2600 sdev->quiesced_by = NULL; 2603 2601 blk_clear_pm_only(sdev->request_queue); 2604 2602 if (sdev->sdev_state == SDEV_QUIESCE)
+5 -3
drivers/scsi/sd_zbc.c
··· 142 142 return -EOPNOTSUPP; 143 143 144 144 /* 145 - * Get a reply buffer for the number of requested zones plus a header. 146 - * For ATA, buffers must be aligned to 512B. 145 + * Get a reply buffer for the number of requested zones plus a header, 146 + * without exceeding the device maximum command size. For ATA disks, 147 + * buffers must be aligned to 512B. 147 148 */ 148 - buflen = roundup((nrz + 1) * 64, 512); 149 + buflen = min(queue_max_hw_sectors(disk->queue) << 9, 150 + roundup((nrz + 1) * 64, 512)); 149 151 buf = kmalloc(buflen, gfp_mask); 150 152 if (!buf) 151 153 return -ENOMEM;
+3 -1
drivers/tee/optee/core.c
··· 699 699 return -ENODEV; 700 700 701 701 np = of_find_matching_node(fw_np, optee_match); 702 - if (!np || !of_device_is_available(np)) 702 + if (!np || !of_device_is_available(np)) { 703 + of_node_put(np); 703 704 return -ENODEV; 705 + } 704 706 705 707 optee = optee_probe(np); 706 708 of_node_put(np);
+1 -1
drivers/vhost/vhost.c
··· 1788 1788 1789 1789 ret = translate_desc(vq, (uintptr_t)vq->used + used_offset, 1790 1790 len, iov, 64, VHOST_ACCESS_WO); 1791 - if (ret) 1791 + if (ret < 0) 1792 1792 return ret; 1793 1793 1794 1794 for (i = 0; i < ret; i++) {
+1
fs/afs/cell.c
··· 173 173 174 174 rcu_assign_pointer(cell->vl_servers, vllist); 175 175 cell->dns_expiry = TIME64_MAX; 176 + __clear_bit(AFS_CELL_FL_NO_LOOKUP_YET, &cell->flags); 176 177 } else { 177 178 cell->dns_expiry = ktime_get_real_seconds(); 178 179 }
+48 -9
fs/binfmt_script.c
··· 14 14 #include <linux/err.h> 15 15 #include <linux/fs.h> 16 16 17 + static inline bool spacetab(char c) { return c == ' ' || c == '\t'; } 18 + static inline char *next_non_spacetab(char *first, const char *last) 19 + { 20 + for (; first <= last; first++) 21 + if (!spacetab(*first)) 22 + return first; 23 + return NULL; 24 + } 25 + static inline char *next_terminator(char *first, const char *last) 26 + { 27 + for (; first <= last; first++) 28 + if (spacetab(*first) || !*first) 29 + return first; 30 + return NULL; 31 + } 32 + 17 33 static int load_script(struct linux_binprm *bprm) 18 34 { 19 35 const char *i_arg, *i_name; 20 - char *cp; 36 + char *cp, *buf_end; 21 37 struct file *file; 22 38 int retval; 23 39 40 + /* Not ours to exec if we don't start with "#!". */ 24 41 if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!')) 25 42 return -ENOEXEC; 26 43 ··· 50 33 if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE) 51 34 return -ENOENT; 52 35 53 - /* 54 - * This section does the #! interpretation. 55 - * Sorta complicated, but hopefully it will work. -TYT 56 - */ 57 - 36 + /* Release since we are not mapping a binary into memory. */ 58 37 allow_write_access(bprm->file); 59 38 fput(bprm->file); 60 39 bprm->file = NULL; 61 40 62 - bprm->buf[BINPRM_BUF_SIZE - 1] = '\0'; 63 - if ((cp = strchr(bprm->buf, '\n')) == NULL) 64 - cp = bprm->buf+BINPRM_BUF_SIZE-1; 41 + /* 42 + * This section handles parsing the #! line into separate 43 + * interpreter path and argument strings. We must be careful 44 + * because bprm->buf is not yet guaranteed to be NUL-terminated 45 + * (though the buffer will have trailing NUL padding when the 46 + * file size was smaller than the buffer size). 47 + * 48 + * We do not want to exec a truncated interpreter path, so either 49 + * we find a newline (which indicates nothing is truncated), or 50 + * we find a space/tab/NUL after the interpreter path (which 51 + * itself may be preceded by spaces/tabs). Truncating the 52 + * arguments is fine: the interpreter can re-read the script to 53 + * parse them on its own. 54 + */ 55 + buf_end = bprm->buf + sizeof(bprm->buf) - 1; 56 + cp = strnchr(bprm->buf, sizeof(bprm->buf), '\n'); 57 + if (!cp) { 58 + cp = next_non_spacetab(bprm->buf + 2, buf_end); 59 + if (!cp) 60 + return -ENOEXEC; /* Entire buf is spaces/tabs */ 61 + /* 62 + * If there is no later space/tab/NUL we must assume the 63 + * interpreter path is truncated. 64 + */ 65 + if (!next_terminator(cp, buf_end)) 66 + return -ENOEXEC; 67 + cp = buf_end; 68 + } 69 + /* NUL-terminate the buffer and any trailing spaces/tabs. */ 65 70 *cp = '\0'; 66 71 while (cp > bprm->buf) { 67 72 cp--;
+2 -1
fs/ceph/snap.c
··· 616 616 capsnap->size); 617 617 618 618 spin_lock(&mdsc->snap_flush_lock); 619 - list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); 619 + if (list_empty(&ci->i_snap_flush_item)) 620 + list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); 620 621 spin_unlock(&mdsc->snap_flush_lock); 621 622 return 1; /* caller may want to ceph_flush_snaps */ 622 623 }
+12
fs/hugetlbfs/inode.c
··· 859 859 rc = migrate_huge_page_move_mapping(mapping, newpage, page); 860 860 if (rc != MIGRATEPAGE_SUCCESS) 861 861 return rc; 862 + 863 + /* 864 + * page_private is subpool pointer in hugetlb pages. Transfer to 865 + * new page. PagePrivate is not associated with page_private for 866 + * hugetlb pages and can not be set here as only page_huge_active 867 + * pages can be migrated. 868 + */ 869 + if (page_private(page)) { 870 + set_page_private(newpage, page_private(page)); 871 + set_page_private(page, 0); 872 + } 873 + 862 874 if (mode != MIGRATE_SYNC_NO_COPY) 863 875 migrate_page_copy(newpage, page); 864 876 else
-2
fs/namespace.c
··· 2698 2698 if (!access_ok(from, n)) 2699 2699 return n; 2700 2700 2701 - current->kernel_uaccess_faults_ok++; 2702 2701 while (n) { 2703 2702 if (__get_user(c, f)) { 2704 2703 memset(t, 0, n); ··· 2707 2708 f++; 2708 2709 n--; 2709 2710 } 2710 - current->kernel_uaccess_faults_ok--; 2711 2711 return n; 2712 2712 } 2713 2713
+17 -14
fs/nfs/nfs4idmap.c
··· 44 44 #include <linux/keyctl.h> 45 45 #include <linux/key-type.h> 46 46 #include <keys/user-type.h> 47 + #include <keys/request_key_auth-type.h> 47 48 #include <linux/module.h> 48 49 49 50 #include "internal.h" ··· 60 59 struct idmap_legacy_upcalldata { 61 60 struct rpc_pipe_msg pipe_msg; 62 61 struct idmap_msg idmap_msg; 63 - struct key_construction *key_cons; 62 + struct key *authkey; 64 63 struct idmap *idmap; 65 64 }; 66 65 ··· 385 384 { Opt_find_err, NULL } 386 385 }; 387 386 388 - static int nfs_idmap_legacy_upcall(struct key_construction *, const char *, void *); 387 + static int nfs_idmap_legacy_upcall(struct key *, void *); 389 388 static ssize_t idmap_pipe_downcall(struct file *, const char __user *, 390 389 size_t); 391 390 static void idmap_release_pipe(struct inode *); ··· 550 549 static void 551 550 nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret) 552 551 { 553 - struct key_construction *cons = idmap->idmap_upcall_data->key_cons; 552 + struct key *authkey = idmap->idmap_upcall_data->authkey; 554 553 555 554 kfree(idmap->idmap_upcall_data); 556 555 idmap->idmap_upcall_data = NULL; 557 - complete_request_key(cons, ret); 556 + complete_request_key(authkey, ret); 557 + key_put(authkey); 558 558 } 559 559 560 560 static void ··· 565 563 nfs_idmap_complete_pipe_upcall_locked(idmap, ret); 566 564 } 567 565 568 - static int nfs_idmap_legacy_upcall(struct key_construction *cons, 569 - const char *op, 570 - void *aux) 566 + static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux) 571 567 { 572 568 struct idmap_legacy_upcalldata *data; 569 + struct request_key_auth *rka = get_request_key_auth(authkey); 573 570 struct rpc_pipe_msg *msg; 574 571 struct idmap_msg *im; 575 572 struct idmap *idmap = (struct idmap *)aux; 576 - struct key *key = cons->key; 573 + struct key *key = rka->target_key; 577 574 int ret = -ENOKEY; 578 575 579 576 if (!aux) ··· 587 586 msg = &data->pipe_msg; 588 587 im = &data->idmap_msg; 589 588 data->idmap = idmap; 590 - data->key_cons = cons; 589 + data->authkey = key_get(authkey); 591 590 592 591 ret = nfs_idmap_prepare_message(key->description, idmap, im, msg); 593 592 if (ret < 0) ··· 605 604 out2: 606 605 kfree(data); 607 606 out1: 608 - complete_request_key(cons, ret); 607 + complete_request_key(authkey, ret); 609 608 return ret; 610 609 } 611 610 ··· 652 651 static ssize_t 653 652 idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen) 654 653 { 654 + struct request_key_auth *rka; 655 655 struct rpc_inode *rpci = RPC_I(file_inode(filp)); 656 656 struct idmap *idmap = (struct idmap *)rpci->private; 657 - struct key_construction *cons; 657 + struct key *authkey; 658 658 struct idmap_msg im; 659 659 size_t namelen_in; 660 660 int ret = -ENOKEY; ··· 667 665 if (idmap->idmap_upcall_data == NULL) 668 666 goto out_noupcall; 669 667 670 - cons = idmap->idmap_upcall_data->key_cons; 668 + authkey = idmap->idmap_upcall_data->authkey; 669 + rka = get_request_key_auth(authkey); 671 670 672 671 if (mlen != sizeof(im)) { 673 672 ret = -ENOSPC; ··· 693 690 694 691 ret = nfs_idmap_read_and_verify_message(&im, 695 692 &idmap->idmap_upcall_data->idmap_msg, 696 - cons->key, cons->authkey); 693 + rka->target_key, authkey); 697 694 if (ret >= 0) { 698 - key_set_timeout(cons->key, nfs_idmap_cache_timeout); 695 + key_set_timeout(rka->target_key, nfs_idmap_cache_timeout); 699 696 ret = mlen; 700 697 } 701 698
-4
fs/orangefs/file.c
··· 398 398 loff_t pos = iocb->ki_pos; 399 399 ssize_t rc = 0; 400 400 401 - BUG_ON(iocb->private); 402 - 403 401 gossip_debug(GOSSIP_FILE_DEBUG, "orangefs_file_read_iter\n"); 404 402 405 403 orangefs_stats.reads++; ··· 413 415 struct file *file = iocb->ki_filp; 414 416 loff_t pos; 415 417 ssize_t rc; 416 - 417 - BUG_ON(iocb->private); 418 418 419 419 gossip_debug(GOSSIP_FILE_DEBUG, "orangefs_file_write_iter\n"); 420 420
-4
fs/proc/base.c
··· 1086 1086 1087 1087 task_lock(p); 1088 1088 if (!p->vfork_done && process_shares_mm(p, mm)) { 1089 - pr_info("updating oom_score_adj for %d (%s) from %d to %d because it shares mm with %d (%s). Report if this is unexpected.\n", 1090 - task_pid_nr(p), p->comm, 1091 - p->signal->oom_score_adj, oom_adj, 1092 - task_pid_nr(task), task->comm); 1093 1089 p->signal->oom_score_adj = oom_adj; 1094 1090 if (!legacy && has_capability_noaudit(current, CAP_SYS_RESOURCE)) 1095 1091 p->signal->oom_score_adj_min = (short)oom_adj;
+1 -1
include/drm/drm_drv.h
··· 767 767 * 768 768 * Returns true if the @feature is supported, false otherwise. 769 769 */ 770 - static inline bool drm_core_check_feature(struct drm_device *dev, u32 feature) 770 + static inline bool drm_core_check_feature(const struct drm_device *dev, u32 feature) 771 771 { 772 772 return dev->driver->driver_features & dev->driver_features & feature; 773 773 }
+36
include/keys/request_key_auth-type.h
··· 1 + /* request_key authorisation token key type 2 + * 3 + * Copyright (C) 2005 Red Hat, Inc. All Rights Reserved. 4 + * Written by David Howells (dhowells@redhat.com) 5 + * 6 + * This program is free software; you can redistribute it and/or 7 + * modify it under the terms of the GNU General Public Licence 8 + * as published by the Free Software Foundation; either version 9 + * 2 of the Licence, or (at your option) any later version. 10 + */ 11 + 12 + #ifndef _KEYS_REQUEST_KEY_AUTH_TYPE_H 13 + #define _KEYS_REQUEST_KEY_AUTH_TYPE_H 14 + 15 + #include <linux/key.h> 16 + 17 + /* 18 + * Authorisation record for request_key(). 19 + */ 20 + struct request_key_auth { 21 + struct key *target_key; 22 + struct key *dest_keyring; 23 + const struct cred *cred; 24 + void *callout_info; 25 + size_t callout_len; 26 + pid_t pid; 27 + char op[8]; 28 + } __randomize_layout; 29 + 30 + static inline struct request_key_auth *get_request_key_auth(const struct key *key) 31 + { 32 + return key->payload.data[0]; 33 + } 34 + 35 + 36 + #endif /* _KEYS_REQUEST_KEY_AUTH_TYPE_H */
+1 -1
include/keys/user-type.h
··· 31 31 struct user_key_payload { 32 32 struct rcu_head rcu; /* RCU destructor */ 33 33 unsigned short datalen; /* length of this data */ 34 - char data[0]; /* actual data */ 34 + char data[0] __aligned(__alignof__(u64)); /* actual data */ 35 35 }; 36 36 37 37 extern struct key_type key_type_user;
+6 -16
include/linux/key-type.h
··· 21 21 struct kernel_pkey_params; 22 22 23 23 /* 24 - * key under-construction record 25 - * - passed to the request_key actor if supplied 26 - */ 27 - struct key_construction { 28 - struct key *key; /* key being constructed */ 29 - struct key *authkey;/* authorisation for key being constructed */ 30 - }; 31 - 32 - /* 33 24 * Pre-parsed payload, used by key add, update and instantiate. 34 25 * 35 26 * This struct will be cleared and data and datalen will be set with the data ··· 41 50 time64_t expiry; /* Expiry time of key */ 42 51 } __randomize_layout; 43 52 44 - typedef int (*request_key_actor_t)(struct key_construction *key, 45 - const char *op, void *aux); 53 + typedef int (*request_key_actor_t)(struct key *auth_key, void *aux); 46 54 47 55 /* 48 56 * Preparsed matching criterion. ··· 171 181 const void *data, 172 182 size_t datalen, 173 183 struct key *keyring, 174 - struct key *instkey); 184 + struct key *authkey); 175 185 extern int key_reject_and_link(struct key *key, 176 186 unsigned timeout, 177 187 unsigned error, 178 188 struct key *keyring, 179 - struct key *instkey); 180 - extern void complete_request_key(struct key_construction *cons, int error); 189 + struct key *authkey); 190 + extern void complete_request_key(struct key *authkey, int error); 181 191 182 192 static inline int key_negate_and_link(struct key *key, 183 193 unsigned timeout, 184 194 struct key *keyring, 185 - struct key *instkey) 195 + struct key *authkey) 186 196 { 187 - return key_reject_and_link(key, timeout, ENOKEY, keyring, instkey); 197 + return key_reject_and_link(key, timeout, ENOKEY, keyring, authkey); 188 198 } 189 199 190 200 extern int generic_key_instantiate(struct key *key, struct key_preparsed_payload *prep);
+22 -2
include/linux/netdev_features.h
··· 11 11 #define _LINUX_NETDEV_FEATURES_H 12 12 13 13 #include <linux/types.h> 14 + #include <linux/bitops.h> 15 + #include <asm/byteorder.h> 14 16 15 17 typedef u64 netdev_features_t; 16 18 ··· 156 154 #define NETIF_F_HW_TLS_TX __NETIF_F(HW_TLS_TX) 157 155 #define NETIF_F_HW_TLS_RX __NETIF_F(HW_TLS_RX) 158 156 159 - #define for_each_netdev_feature(mask_addr, bit) \ 160 - for_each_set_bit(bit, (unsigned long *)mask_addr, NETDEV_FEATURE_COUNT) 157 + /* Finds the next feature with the highest number of the range of start till 0. 158 + */ 159 + static inline int find_next_netdev_feature(u64 feature, unsigned long start) 160 + { 161 + /* like BITMAP_LAST_WORD_MASK() for u64 162 + * this sets the most significant 64 - start to 0. 163 + */ 164 + feature &= ~0ULL >> (-start & ((sizeof(feature) * 8) - 1)); 165 + 166 + return fls64(feature) - 1; 167 + } 168 + 169 + /* This goes for the MSB to the LSB through the set feature bits, 170 + * mask_addr should be a u64 and bit an int 171 + */ 172 + #define for_each_netdev_feature(mask_addr, bit) \ 173 + for ((bit) = find_next_netdev_feature((mask_addr), \ 174 + NETDEV_FEATURE_COUNT); \ 175 + (bit) >= 0; \ 176 + (bit) = find_next_netdev_feature((mask_addr), (bit) - 1)) 161 177 162 178 /* Features valid for ethtool to change */ 163 179 /* = all defined minus driver/device-class-related */
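The `find_next_netdev_feature()` helper added above walks a `u64` feature mask from the most significant set bit downwards, which (unlike casting the mask to `unsigned long *` and using `for_each_set_bit()`) gives the same result on big- and little-endian hosts. A hedged userspace sketch of the helper, with the kernel's `fls64()` replaced by a GCC/Clang builtin since the kernel headers are not available here; the test mask values are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's fls64(): 1-based index of the highest set
 * bit, 0 when no bit is set. */
static int fls64(uint64_t x)
{
	return x ? 64 - __builtin_clzll(x) : 0;
}

/* Mirrors the helper above: clear every bit at or above `start`
 * (a BITMAP_LAST_WORD_MASK()-style trick, so 64 keeps everything),
 * then report the highest remaining set bit, or -1 when none is left. */
static int find_next_netdev_feature(uint64_t feature, unsigned long start)
{
	feature &= ~0ULL >> (-start & 63);
	return fls64(feature) - 1;
}
```

Calling it repeatedly, feeding each result back in as the new `start`, visits the set bits strictly from MSB to LSB: a mask with bits 13, 12 and 10 set yields 13, 12, 10 and then -1.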
+1 -1
include/linux/netdevice.h
··· 3861 3861 if (debug_value == 0) /* no output */ 3862 3862 return 0; 3863 3863 /* set low N bits */ 3864 - return (1 << debug_value) - 1; 3864 + return (1U << debug_value) - 1; 3865 3865 } 3866 3866 3867 3867 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
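The one-character change above matters because `(1 << 31)` shifts a set bit into the sign bit of a signed `int`, which is undefined behaviour in C; the unsigned literal makes the whole 0..31 range well defined. A small sketch of the helper, simplified from the kernel's `netif_msg_init()` (the surrounding bounds checks follow the kernel version, the exact constants in the test are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Build the "enable the low N message categories" bitmask from a
 * module's debug parameter, as netif_msg_init() does. */
static uint32_t netif_msg_init(int debug_value, int default_msg_enable_bits)
{
	/* Use default when the value is out of range. */
	if (debug_value < 0 || debug_value >= 32)
		return (uint32_t)default_msg_enable_bits;
	if (debug_value == 0)	/* no output */
		return 0;
	/* Set the low N bits; 1U keeps debug_value == 31 well defined. */
	return (1U << debug_value) - 1;
}
```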
+8
include/linux/phy.h
··· 992 992 { 993 993 return 0; 994 994 } 995 + static inline int genphy_no_ack_interrupt(struct phy_device *phydev) 996 + { 997 + return 0; 998 + } 999 + static inline int genphy_no_config_intr(struct phy_device *phydev) 1000 + { 1001 + return 0; 1002 + } 995 1003 int genphy_read_mmd_unsupported(struct phy_device *phdev, int devad, 996 1004 u16 regnum); 997 1005 int genphy_write_mmd_unsupported(struct phy_device *phdev, int devnum,
-6
include/linux/sched.h
··· 739 739 unsigned use_memdelay:1; 740 740 #endif 741 741 742 - /* 743 - * May usercopy functions fault on kernel addresses? 744 - * This is not just a single bit because this can potentially nest. 745 - */ 746 - unsigned int kernel_uaccess_faults_ok; 747 - 748 742 unsigned long atomic_flags; /* Flags requiring atomic access. */ 749 743 750 744 struct restart_block restart_block;
+7 -1
include/linux/skbuff.h
··· 2434 2434 2435 2435 if (skb_flow_dissect_flow_keys_basic(skb, &keys, NULL, 0, 0, 0, 0)) 2436 2436 skb_set_transport_header(skb, keys.control.thoff); 2437 - else 2437 + else if (offset_hint >= 0) 2438 2438 skb_set_transport_header(skb, offset_hint); 2439 2439 } 2440 2440 ··· 4210 4210 static inline bool skb_is_gso_sctp(const struct sk_buff *skb) 4211 4211 { 4212 4212 return skb_shinfo(skb)->gso_type & SKB_GSO_SCTP; 4213 + } 4214 + 4215 + static inline bool skb_is_gso_tcp(const struct sk_buff *skb) 4216 + { 4217 + return skb_is_gso(skb) && 4218 + skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6); 4213 4219 } 4214 4220 4215 4221 static inline void skb_gso_reset(struct sk_buff *skb)
+19
include/linux/virtio_net.h
··· 57 57 58 58 if (!skb_partial_csum_set(skb, start, off)) 59 59 return -EINVAL; 60 + } else { 61 + /* gso packets without NEEDS_CSUM do not set transport_offset. 62 + * probe and drop if does not match one of the above types. 63 + */ 64 + if (gso_type && skb->network_header) { 65 + if (!skb->protocol) 66 + virtio_net_hdr_set_proto(skb, hdr); 67 + retry: 68 + skb_probe_transport_header(skb, -1); 69 + if (!skb_transport_header_was_set(skb)) { 70 + /* UFO does not specify ipv4 or 6: try both */ 71 + if (gso_type & SKB_GSO_UDP && 72 + skb->protocol == htons(ETH_P_IP)) { 73 + skb->protocol = htons(ETH_P_IPV6); 74 + goto retry; 75 + } 76 + return -EINVAL; 77 + } 78 + } 60 79 } 61 80 62 81 if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+8 -1
include/net/icmp.h
··· 22 22 23 23 #include <net/inet_sock.h> 24 24 #include <net/snmp.h> 25 + #include <net/ip.h> 25 26 26 27 struct icmp_err { 27 28 int errno; ··· 40 39 struct sk_buff; 41 40 struct net; 42 41 43 - void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info); 42 + void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, 43 + const struct ip_options *opt); 44 + static inline void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info) 45 + { 46 + __icmp_send(skb_in, type, code, info, &IPCB(skb_in)->opt); 47 + } 48 + 44 49 int icmp_rcv(struct sk_buff *skb); 45 50 int icmp_err(struct sk_buff *skb, u32 info); 46 51 int icmp_init(void);
+3 -1
include/net/ip.h
··· 667 667 } 668 668 669 669 void ip_options_fragment(struct sk_buff *skb); 670 + int __ip_options_compile(struct net *net, struct ip_options *opt, 671 + struct sk_buff *skb, __be32 *info); 670 672 int ip_options_compile(struct net *net, struct ip_options *opt, 671 673 struct sk_buff *skb); 672 674 int ip_options_get(struct net *net, struct ip_options_rcu **optp, ··· 718 716 int ip_misc_proc_init(void); 719 717 #endif 720 718 721 - int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, 719 + int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family, 722 720 struct netlink_ext_ack *extack); 723 721 724 722 #endif /* _IP_H */
+3 -2
include/net/phonet/pep.h
··· 63 63 u8 state_after_reset; /* reset request */ 64 64 u8 error_code; /* any response */ 65 65 u8 pep_type; /* status indication */ 66 - u8 data[1]; 66 + u8 data0; /* anything else */ 67 67 }; 68 + u8 data[]; 68 69 }; 69 - #define other_pep_type data[1] 70 + #define other_pep_type data[0] 70 71 71 72 static inline struct pnpipehdr *pnp_hdr(struct sk_buff *skb) 72 73 {
+9 -3
include/net/xfrm.h
··· 853 853 xfrm_pol_put(pols[i]); 854 854 } 855 855 856 - void __xfrm_state_destroy(struct xfrm_state *); 856 + void __xfrm_state_destroy(struct xfrm_state *, bool); 857 857 858 858 static inline void __xfrm_state_put(struct xfrm_state *x) 859 859 { ··· 863 863 static inline void xfrm_state_put(struct xfrm_state *x) 864 864 { 865 865 if (refcount_dec_and_test(&x->refcnt)) 866 - __xfrm_state_destroy(x); 866 + __xfrm_state_destroy(x, false); 867 + } 868 + 869 + static inline void xfrm_state_put_sync(struct xfrm_state *x) 870 + { 871 + if (refcount_dec_and_test(&x->refcnt)) 872 + __xfrm_state_destroy(x, true); 867 873 } 868 874 869 875 static inline void xfrm_state_hold(struct xfrm_state *x) ··· 1596 1590 1597 1591 struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq); 1598 1592 int xfrm_state_delete(struct xfrm_state *x); 1599 - int xfrm_state_flush(struct net *net, u8 proto, bool task_valid); 1593 + int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync); 1600 1594 int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid); 1601 1595 void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si); 1602 1596 void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
+2 -1
include/uapi/drm/amdgpu_drm.h
··· 272 272 273 273 /* sched ioctl */ 274 274 #define AMDGPU_SCHED_OP_PROCESS_PRIORITY_OVERRIDE 1 275 + #define AMDGPU_SCHED_OP_CONTEXT_PRIORITY_OVERRIDE 2 275 276 276 277 struct drm_amdgpu_sched_in { 277 278 /* AMDGPU_SCHED_OP_* */ 278 279 __u32 op; 279 280 __u32 fd; 280 281 __s32 priority; 281 - __u32 flags; 282 + __u32 ctx_id; 282 283 }; 283 284 284 285 union drm_amdgpu_sched {
+1
include/video/imx-ipu-v3.h
··· 348 348 unsigned int axi_id, unsigned int width, 349 349 unsigned int height, unsigned int stride, 350 350 u32 format, uint64_t modifier, unsigned long *eba); 351 + bool ipu_prg_channel_configure_pending(struct ipuv3_channel *ipu_chan); 351 352 352 353 /* 353 354 * IPU CMOS Sensor Interface (csi) functions
+3 -3
init/initramfs.c
··· 550 550 initrd_end = 0; 551 551 } 552 552 553 + #ifdef CONFIG_BLK_DEV_RAM 553 554 #define BUF_SIZE 1024 554 555 static void __init clean_rootfs(void) 555 556 { ··· 597 596 ksys_close(fd); 598 597 kfree(buf); 599 598 } 599 + #endif 600 600 601 601 static int __init populate_rootfs(void) 602 602 { ··· 640 638 printk(KERN_INFO "Unpacking initramfs...\n"); 641 639 err = unpack_to_rootfs((char *)initrd_start, 642 640 initrd_end - initrd_start); 643 - if (err) { 641 + if (err) 644 642 printk(KERN_EMERG "Initramfs unpacking failed: %s\n", err); 645 - clean_rootfs(); 646 - } 647 643 free_initrd(); 648 644 #endif 649 645 }
+1
kernel/bpf/lpm_trie.c
··· 471 471 } 472 472 473 473 if (!node || node->prefixlen != key->prefixlen || 474 + node->prefixlen != matchlen || 474 475 (node->flags & LPM_TREE_NODE_FLAG_IM)) { 475 476 ret = -ENOENT; 476 477 goto out;
+7 -1
kernel/bpf/stackmap.c
··· 44 44 struct stack_map_irq_work *work; 45 45 46 46 work = container_of(entry, struct stack_map_irq_work, irq_work); 47 - up_read(work->sem); 47 + up_read_non_owner(work->sem); 48 48 work->sem = NULL; 49 49 } 50 50 ··· 338 338 } else { 339 339 work->sem = &current->mm->mmap_sem; 340 340 irq_work_queue(&work->irq_work); 341 + /* 342 + * The irq_work will release the mmap_sem with 343 + * up_read_non_owner(). The rwsem_release() is called 344 + * here to release the lock from lockdep's perspective. 345 + */ 346 + rwsem_release(&current->mm->mmap_sem.dep_map, 1, _RET_IP_); 341 347 } 342 348 } 343 349
+3 -3
kernel/bpf/syscall.c
··· 559 559 err = bpf_map_new_fd(map, f_flags); 560 560 if (err < 0) { 561 561 /* failed to allocate fd. 562 - * bpf_map_put() is needed because the above 562 + * bpf_map_put_with_uref() is needed because the above 563 563 * bpf_map_alloc_id() has published the map 564 564 * to the userspace and the userspace may 565 565 * have refcnt-ed it through BPF_MAP_GET_FD_BY_ID. 566 566 */ 567 - bpf_map_put(map); 567 + bpf_map_put_with_uref(map); 568 568 return err; 569 569 } 570 570 ··· 1986 1986 1987 1987 fd = bpf_map_new_fd(map, f_flags); 1988 1988 if (fd < 0) 1989 - bpf_map_put(map); 1989 + bpf_map_put_with_uref(map); 1990 1990 1991 1991 return fd; 1992 1992 }
+9 -5
kernel/bpf/verifier.c
··· 1617 1617 return 0; 1618 1618 } 1619 1619 1620 - static int check_sock_access(struct bpf_verifier_env *env, u32 regno, int off, 1621 - int size, enum bpf_access_type t) 1620 + static int check_sock_access(struct bpf_verifier_env *env, int insn_idx, 1621 + u32 regno, int off, int size, 1622 + enum bpf_access_type t) 1622 1623 { 1623 1624 struct bpf_reg_state *regs = cur_regs(env); 1624 1625 struct bpf_reg_state *reg = &regs[regno]; 1625 - struct bpf_insn_access_aux info; 1626 + struct bpf_insn_access_aux info = {}; 1626 1627 1627 1628 if (reg->smin_value < 0) { 1628 1629 verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n", ··· 1636 1635 off, size); 1637 1636 return -EACCES; 1638 1637 } 1638 + 1639 + env->insn_aux_data[insn_idx].ctx_field_size = info.ctx_field_size; 1639 1640 1640 1641 return 0; 1641 1642 } ··· 2035 2032 verbose(env, "cannot write into socket\n"); 2036 2033 return -EACCES; 2037 2034 } 2038 - err = check_sock_access(env, regno, off, size, t); 2035 + err = check_sock_access(env, insn_idx, regno, off, size, t); 2039 2036 if (!err && value_regno >= 0) 2040 2037 mark_reg_unknown(env, regs, value_regno); 2041 2038 } else { ··· 6920 6917 u32 off_reg; 6921 6918 6922 6919 aux = &env->insn_aux_data[i + delta]; 6923 - if (!aux->alu_state) 6920 + if (!aux->alu_state || 6921 + aux->alu_state == BPF_ALU_NON_POINTER) 6924 6922 continue; 6925 6923 6926 6924 isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
+1 -1
kernel/sched/psi.c
··· 322 322 expires = group->next_update; 323 323 if (now < expires) 324 324 goto out; 325 - if (now - expires > psi_period) 325 + if (now - expires >= psi_period) 326 326 missed_periods = div_u64(now - expires, psi_period); 327 327 328 328 /*
+2
kernel/trace/trace.c
··· 3384 3384 const char tgid_space[] = " "; 3385 3385 const char space[] = " "; 3386 3386 3387 + print_event_info(buf, m); 3388 + 3387 3389 seq_printf(m, "# %s _-----=> irqs-off\n", 3388 3390 tgid ? tgid_space : space); 3389 3391 seq_printf(m, "# %s / _----=> need-resched\n",
+1 -9
kernel/trace/trace_kprobe.c
··· 861 861 static nokprobe_inline int 862 862 fetch_store_strlen(unsigned long addr) 863 863 { 864 - mm_segment_t old_fs; 865 864 int ret, len = 0; 866 865 u8 c; 867 866 868 - old_fs = get_fs(); 869 - set_fs(KERNEL_DS); 870 - pagefault_disable(); 871 - 872 867 do { 873 - ret = __copy_from_user_inatomic(&c, (u8 *)addr + len, 1); 868 + ret = probe_mem_read(&c, (u8 *)addr + len, 1); 874 869 len++; 875 870 } while (c && ret == 0 && len < MAX_STRING_SIZE); 876 - 877 - pagefault_enable(); 878 - set_fs(old_fs); 879 871 880 872 return (ret < 0) ? ret : len; 881 873 }
+22
lib/Kconfig.kasan
··· 113 113 114 114 endchoice 115 115 116 + config KASAN_STACK_ENABLE 117 + bool "Enable stack instrumentation (unsafe)" if CC_IS_CLANG && !COMPILE_TEST 118 + default !(CLANG_VERSION < 90000) 119 + depends on KASAN 120 + help 121 + The LLVM stack address sanitizer has a known problem that 122 + causes excessive stack usage in a lot of functions, see 123 + https://bugs.llvm.org/show_bug.cgi?id=38809 124 + Disabling asan-stack makes it safe to run kernels built 125 + with clang-8 with KASAN enabled, though it loses some of 126 + the functionality. 127 + This feature is always disabled when compile-testing with clang-8 128 + or earlier to avoid cluttering the output in stack overflow 129 + warnings, but clang-8 users can still enable it for builds without 130 + CONFIG_COMPILE_TEST. On gcc and later clang versions it is 131 + assumed to always be safe to use and enabled by default. 132 + 133 + config KASAN_STACK 134 + int 135 + default 1 if KASAN_STACK_ENABLE || CC_IS_GCC 136 + default 0 137 + 116 138 config KASAN_S390_4_LEVEL_PAGING 117 139 bool "KASan: use 4-level paging" 118 140 depends on KASAN && S390
+5 -3
lib/assoc_array.c
··· 768 768 new_s0->index_key[i] = 769 769 ops->get_key_chunk(index_key, i * ASSOC_ARRAY_KEY_CHUNK_SIZE); 770 770 771 - blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK); 772 - pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank); 773 - new_s0->index_key[keylen - 1] &= ~blank; 771 + if (level & ASSOC_ARRAY_KEY_CHUNK_MASK) { 772 + blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK); 773 + pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank); 774 + new_s0->index_key[keylen - 1] &= ~blank; 775 + } 774 776 775 777 /* This now reduces to a node splitting exercise for which we'll need 776 778 * to regenerate the disparity table.
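The guard added in the assoc_array hunk above matters because when `level` is an exact multiple of the chunk size, `level & ASSOC_ARRAY_KEY_CHUNK_MASK` is 0 and `ULONG_MAX << 0` is still `ULONG_MAX`, so the unconditional `&= ~blank` would wipe the entire final index-key word instead of leaving it intact. A userspace sketch of just that masking step (the function name `mask_last_chunk` and the test values are illustrative, not kernel code):

```c
#include <assert.h>
#include <limits.h>

/* Chunk size is the width of unsigned long in bits, as in assoc_array. */
#define ASSOC_ARRAY_KEY_CHUNK_MASK (sizeof(unsigned long) * 8 - 1)

/* Blank out the bits of the last index-key word above `level`,
 * skipping the no-op case where level lands on a chunk boundary. */
static unsigned long mask_last_chunk(unsigned long word, int level)
{
	if (level & ASSOC_ARRAY_KEY_CHUNK_MASK) {
		unsigned long blank =
			ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK);
		word &= ~blank;
	}
	return word;
}
```

With the guard, a level on a chunk boundary leaves the word untouched; without it, the same input would have been zeroed.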
+3 -1
mm/debug.c
··· 44 44 45 45 void __dump_page(struct page *page, const char *reason) 46 46 { 47 - struct address_space *mapping = page_mapping(page); 47 + struct address_space *mapping; 48 48 bool page_poisoned = PagePoisoned(page); 49 49 int mapcount; 50 50 ··· 57 57 pr_warn("page:%px is uninitialized and poisoned", page); 58 58 goto hex_only; 59 59 } 60 + 61 + mapping = page_mapping(page); 60 62 61 63 /* 62 64 * Avoid VM_BUG_ON() in page_mapcount().
+13 -3
mm/hugetlb.c
··· 3624 3624 copy_user_huge_page(new_page, old_page, address, vma, 3625 3625 pages_per_huge_page(h)); 3626 3626 __SetPageUptodate(new_page); 3627 - set_page_huge_active(new_page); 3628 3627 3629 3628 mmu_notifier_range_init(&range, mm, haddr, haddr + huge_page_size(h)); 3630 3629 mmu_notifier_invalidate_range_start(&range); ··· 3644 3645 make_huge_pte(vma, new_page, 1)); 3645 3646 page_remove_rmap(old_page, true); 3646 3647 hugepage_add_new_anon_rmap(new_page, vma, haddr); 3648 + set_page_huge_active(new_page); 3647 3649 /* Make the old page be freed below */ 3648 3650 new_page = old_page; 3649 3651 } ··· 3729 3729 pte_t new_pte; 3730 3730 spinlock_t *ptl; 3731 3731 unsigned long haddr = address & huge_page_mask(h); 3732 + bool new_page = false; 3732 3733 3733 3734 /* 3734 3735 * Currently, we are forced to kill the process in the event the ··· 3791 3790 } 3792 3791 clear_huge_page(page, address, pages_per_huge_page(h)); 3793 3792 __SetPageUptodate(page); 3794 - set_page_huge_active(page); 3793 + new_page = true; 3795 3794 3796 3795 if (vma->vm_flags & VM_MAYSHARE) { 3797 3796 int err = huge_add_to_page_cache(page, mapping, idx); ··· 3862 3861 } 3863 3862 3864 3863 spin_unlock(ptl); 3864 + 3865 + /* 3866 + * Only make newly allocated pages active. Existing pages found 3867 + * in the pagecache could be !page_huge_active() if they have been 3868 + * isolated for migration. 3869 + */ 3870 + if (new_page) 3871 + set_page_huge_active(page); 3872 + 3865 3873 unlock_page(page); 3866 3874 out: 3867 3875 return ret; ··· 4105 4095 * the set_pte_at() write. 4106 4096 */ 4107 4097 __SetPageUptodate(page); 4108 - set_page_huge_active(page); 4109 4098 4110 4099 mapping = dst_vma->vm_file->f_mapping; 4111 4100 idx = vma_hugecache_offset(h, dst_vma, dst_addr); ··· 4172 4163 update_mmu_cache(dst_vma, dst_addr, dst_pte); 4173 4164 4174 4165 spin_unlock(ptl); 4166 + set_page_huge_active(page); 4175 4167 if (vm_shared) 4176 4168 unlock_page(page); 4177 4169 ret = 0;
+2
mm/kasan/Makefile
··· 7 7 8 8 CFLAGS_REMOVE_common.o = -pg 9 9 CFLAGS_REMOVE_generic.o = -pg 10 + CFLAGS_REMOVE_tags.o = -pg 11 + 10 12 # Function splitter causes unnecessary splits in __asan_load1/__asan_store1 11 13 # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533 12 14
+17 -12
mm/kasan/common.c
··· 361 361 * get different tags. 362 362 */ 363 363 static u8 assign_tag(struct kmem_cache *cache, const void *object, 364 - bool init, bool krealloc) 364 + bool init, bool keep_tag) 365 365 { 366 - /* Reuse the same tag for krealloc'ed objects. */ 367 - if (krealloc) 366 + /* 367 + * 1. When an object is kmalloc()'ed, two hooks are called: 368 + * kasan_slab_alloc() and kasan_kmalloc(). We assign the 369 + * tag only in the first one. 370 + * 2. We reuse the same tag for krealloc'ed objects. 371 + */ 372 + if (keep_tag) 368 373 return get_tag(object); 369 374 370 375 /* ··· 408 403 assign_tag(cache, object, true, false)); 409 404 410 405 return (void *)object; 411 - } 412 - 413 - void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object, 414 - gfp_t flags) 415 - { 416 - return kasan_kmalloc(cache, object, cache->object_size, flags); 417 406 } 418 408 static inline bool shadow_invalid(u8 tag, s8 shadow_byte) ··· 466 467 } 467 468 468 469 static void *__kasan_kmalloc(struct kmem_cache *cache, const void *object, 469 - size_t size, gfp_t flags, bool krealloc) 470 + size_t size, gfp_t flags, bool keep_tag) 470 471 { 471 472 unsigned long redzone_start; 472 473 unsigned long redzone_end; ··· 484 485 KASAN_SHADOW_SCALE_SIZE); 485 486 486 487 if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) 487 - tag = assign_tag(cache, object, false, krealloc); 488 + tag = assign_tag(cache, object, false, keep_tag); 488 489 489 490 /* Tag is ignored in set_tag without CONFIG_KASAN_SW_TAGS */ 490 491 kasan_unpoison_shadow(set_tag(object, tag), size); ··· 497 498 return set_tag(object, tag); 498 499 } 499 500 501 + void * __must_check kasan_slab_alloc(struct kmem_cache *cache, void *object, 502 + gfp_t flags) 503 + { 504 + return __kasan_kmalloc(cache, object, cache->object_size, flags, false); 505 + } 506 + 500 507 void * __must_check kasan_kmalloc(struct kmem_cache *cache, const void *object, 501 508 size_t size, gfp_t flags) 502 509 { 503 - return __kasan_kmalloc(cache, object, size, flags, false); 510 + return __kasan_kmalloc(cache, object, size, flags, true); 504 511 } 505 512 EXPORT_SYMBOL(kasan_kmalloc); 506 513
+1 -1
mm/kasan/tags.c
··· 46 46 int cpu; 47 47 48 48 for_each_possible_cpu(cpu) 49 - per_cpu(prng_state, cpu) = get_random_u32(); 49 + per_cpu(prng_state, cpu) = (u32)get_cycles(); 50 50 } 51 51 52 52 /*
+7 -3
mm/kmemleak.c
··· 574 574 unsigned long flags; 575 575 struct kmemleak_object *object, *parent; 576 576 struct rb_node **link, *rb_parent; 577 + unsigned long untagged_ptr; 577 578 578 579 object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp)); 579 580 if (!object) { ··· 620 619 621 620 write_lock_irqsave(&kmemleak_lock, flags); 622 621 623 - min_addr = min(min_addr, ptr); 624 - max_addr = max(max_addr, ptr + size); 622 + untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr); 623 + min_addr = min(min_addr, untagged_ptr); 624 + max_addr = max(max_addr, untagged_ptr + size); 625 625 link = &object_tree_root.rb_node; 626 626 rb_parent = NULL; 627 627 while (*link) { ··· 1335 1333 unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER); 1336 1334 unsigned long *end = _end - (BYTES_PER_POINTER - 1); 1337 1335 unsigned long flags; 1336 + unsigned long untagged_ptr; 1338 1337 1339 1338 read_lock_irqsave(&kmemleak_lock, flags); 1340 1339 for (ptr = start; ptr < end; ptr++) { ··· 1350 1347 pointer = *ptr; 1351 1348 kasan_enable_current(); 1352 1349 1353 - if (pointer < min_addr || pointer >= max_addr) 1350 + untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer); 1351 + if (untagged_ptr < min_addr || untagged_ptr >= max_addr) 1354 1352 continue; 1355 1353 1356 1354 /*
-6
mm/maccess.c
··· 30 30 31 31 set_fs(KERNEL_DS); 32 32 pagefault_disable(); 33 - current->kernel_uaccess_faults_ok++; 34 33 ret = __copy_from_user_inatomic(dst, 35 34 (__force const void __user *)src, size); 36 - current->kernel_uaccess_faults_ok--; 37 35 pagefault_enable(); 38 36 set_fs(old_fs); 39 37 ··· 58 60 59 61 set_fs(KERNEL_DS); 60 62 pagefault_disable(); 61 - current->kernel_uaccess_faults_ok++; 62 63 ret = __copy_to_user_inatomic((__force void __user *)dst, src, size); 63 - current->kernel_uaccess_faults_ok--; 64 64 pagefault_enable(); 65 65 set_fs(old_fs); 66 66 ··· 94 98 95 99 set_fs(KERNEL_DS); 96 100 pagefault_disable(); 97 - current->kernel_uaccess_faults_ok++; 98 101 99 102 do { 100 103 ret = __get_user(*dst++, (const char __user __force *)src++); 101 104 } while (dst[-1] && ret == 0 && src - unsafe_addr < count); 102 105 103 - current->kernel_uaccess_faults_ok--; 104 106 dst[-1] = '\0'; 105 107 pagefault_enable(); 106 108 set_fs(old_fs);
+15 -12
mm/memory_hotplug.c
··· 1188 1188 return PageBuddy(page) && page_order(page) >= pageblock_order; 1189 1189 } 1190 1190 1191 - /* Return the start of the next active pageblock after a given page */ 1192 - static struct page *next_active_pageblock(struct page *page) 1191 + /* Return the pfn of the start of the next active pageblock after a given pfn */ 1192 + static unsigned long next_active_pageblock(unsigned long pfn) 1193 1193 { 1194 + struct page *page = pfn_to_page(pfn); 1195 + 1194 1196 /* Ensure the starting page is pageblock-aligned */ 1195 - BUG_ON(page_to_pfn(page) & (pageblock_nr_pages - 1)); 1197 + BUG_ON(pfn & (pageblock_nr_pages - 1)); 1196 1198 1197 1199 /* If the entire pageblock is free, move to the end of free page */ 1198 1200 if (pageblock_free(page)) { ··· 1202 1200 /* be careful. we don't have locks, page_order can be changed.*/ 1203 1201 order = page_order(page); 1204 1202 if ((order < MAX_ORDER) && (order >= pageblock_order)) 1205 - return page + (1 << order); 1203 + return pfn + (1 << order); 1206 1204 } 1207 1205 1208 - return page + pageblock_nr_pages; 1206 + return pfn + pageblock_nr_pages; 1209 1207 } 1210 1208 1211 - static bool is_pageblock_removable_nolock(struct page *page) 1209 + static bool is_pageblock_removable_nolock(unsigned long pfn) 1212 1210 { 1211 + struct page *page = pfn_to_page(pfn); 1213 1212 struct zone *zone; 1214 - unsigned long pfn; 1215 1213 1216 1214 /* 1217 1215 * We have to be careful here because we are iterating over memory ··· 1234 1232 /* Checks if this range of memory is likely to be hot-removable. */ 1235 1233 bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages) 1236 1234 { 1237 - struct page *page = pfn_to_page(start_pfn); 1238 - unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page))); 1239 - struct page *end_page = pfn_to_page(end_pfn); 1235 + unsigned long end_pfn, pfn; 1236 + 1237 + end_pfn = min(start_pfn + nr_pages, 1238 + zone_end_pfn(page_zone(pfn_to_page(start_pfn)))); 1240 1239 1241 1240 /* Check the starting page of each pageblock within the range */ 1242 - for (; page < end_page; page = next_active_pageblock(page)) { 1243 - if (!is_pageblock_removable_nolock(page)) 1241 + for (pfn = start_pfn; pfn < end_pfn; pfn = next_active_pageblock(pfn)) { 1242 + if (!is_pageblock_removable_nolock(pfn)) 1244 1243 return false; 1245 1244 cond_resched(); 1246 1245 }
+3 -3
mm/mempolicy.c
··· 1314 1314 nodemask_t *nodes) 1315 1315 { 1316 1316 unsigned long copy = ALIGN(maxnode-1, 64) / 8; 1317 - const int nbytes = BITS_TO_LONGS(MAX_NUMNODES) * sizeof(long); 1317 + unsigned int nbytes = BITS_TO_LONGS(nr_node_ids) * sizeof(long); 1318 1318 1319 1319 if (copy > nbytes) { 1320 1320 if (copy > PAGE_SIZE) ··· 1491 1491 int uninitialized_var(pval); 1492 1492 nodemask_t nodes; 1493 1493 1494 - if (nmask != NULL && maxnode < MAX_NUMNODES) 1494 + if (nmask != NULL && maxnode < nr_node_ids) 1495 1495 return -EINVAL; 1496 1496 1497 1497 err = do_get_mempolicy(&pval, &nodes, addr, flags); ··· 1527 1527 unsigned long nr_bits, alloc_size; 1528 1528 DECLARE_BITMAP(bm, MAX_NUMNODES); 1529 1529 1530 - nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES); 1530 + nr_bits = min_t(unsigned long, maxnode-1, nr_node_ids); 1531 1531 alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8; 1532 1532 1533 1533 if (nmask)
+11
mm/migrate.c
··· 1315 1315 lock_page(hpage); 1316 1316 } 1317 1317 1318 + /* 1319 + * Check for pages which are in the process of being freed. Without 1320 + * page_mapping() set, hugetlbfs specific move page routine will not 1321 + * be called and we could leak usage counts for subpools. 1322 + */ 1323 + if (page_private(hpage) && !page_mapping(hpage)) { 1324 + rc = -EBUSY; 1325 + goto out_unlock; 1326 + } 1327 + 1318 1328 if (PageAnon(hpage)) 1319 1329 anon_vma = page_get_anon_vma(hpage); 1320 1330 ··· 1355 1345 put_new_page = NULL; 1356 1346 } 1357 1347 1348 + out_unlock: 1358 1349 unlock_page(hpage); 1359 1350 out: 1360 1351 if (rc != -EAGAIN)
+3 -4
mm/mmap.c
··· 2426 2426 { 2427 2427 struct mm_struct *mm = vma->vm_mm; 2428 2428 struct vm_area_struct *prev; 2429 - int error; 2429 + int error = 0; 2430 2430 2431 2431 address &= PAGE_MASK; 2432 - error = security_mmap_addr(address); 2433 - if (error) 2434 - return error; 2432 + if (address < mmap_min_addr) 2433 + return -EPERM; 2435 2434 2436 2435 /* Enforce stack_guard_gap */ 2437 2436 prev = vma->vm_prev;
+16 -4
mm/page_alloc.c
··· 2170 2170 2171 2171 max_boost = mult_frac(zone->_watermark[WMARK_HIGH], 2172 2172 watermark_boost_factor, 10000); 2173 + 2174 + /* 2175 + * high watermark may be uninitialised if fragmentation occurs 2176 + * very early in boot so do not boost. We do not fall 2177 + * through and boost by pageblock_nr_pages as failing 2178 + * allocations that early means that reclaim is not going 2179 + * to help and it may even be impossible to reclaim the 2180 + * boosted watermark resulting in a hang. 2181 + */ 2182 + if (!max_boost) 2183 + return; 2184 + 2173 2185 max_boost = max(pageblock_nr_pages, max_boost); 2174 2186 2175 2187 zone->watermark_boost = min(zone->watermark_boost + pageblock_nr_pages, ··· 4687 4675 /* Even if we own the page, we do not use atomic_set(). 4688 4676 * This would break get_page_unless_zero() users. 4689 4677 */ 4690 - page_ref_add(page, size); 4678 + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE); 4691 4679 4692 4680 /* reset page count bias and offset to start of new frag */ 4693 4681 nc->pfmemalloc = page_is_pfmemalloc(page); 4694 - nc->pagecnt_bias = size + 1; 4682 + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; 4695 4683 nc->offset = size; 4696 4684 } 4697 4685 ··· 4707 4695 size = nc->size; 4708 4696 #endif 4709 4697 /* OK, page count is 0, we can safely set it */ 4710 - set_page_count(page, size + 1); 4698 + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1); 4711 4699 4712 4700 /* reset page count bias and offset to start of new frag */ 4713 - nc->pagecnt_bias = size + 1; 4701 + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1; 4714 4702 offset = size - fragsz; 4715 4703 } 4716 4704
+8 -4
mm/shmem.c
··· 2848 2848 static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry) 2849 2849 { 2850 2850 struct inode *inode = d_inode(old_dentry); 2851 - int ret; 2851 + int ret = 0; 2852 2852 2853 2853 /* 2854 2854 * No ordinary (disk based) filesystem counts links as inodes; 2855 2855 * but each new link needs a new dentry, pinning lowmem, and 2856 2856 * tmpfs dentries cannot be pruned until they are unlinked. 2857 + * But if an O_TMPFILE file is linked into the tmpfs, the 2858 + * first link must skip that, to get the accounting right. 2857 2859 */ 2858 - ret = shmem_reserve_inode(inode->i_sb); 2859 - if (ret) 2860 - goto out; 2860 + if (inode->i_nlink) { 2861 + ret = shmem_reserve_inode(inode->i_sb); 2862 + if (ret) 2863 + goto out; 2864 + } 2861 2865 2862 2866 dir->i_size += BOGO_DIRENT_SIZE; 2863 2867 inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode);
+11 -4
mm/slab.c
··· 2359 2359 void *freelist; 2360 2360 void *addr = page_address(page); 2361 2361 2362 - page->s_mem = kasan_reset_tag(addr) + colour_off; 2362 + page->s_mem = addr + colour_off; 2363 2363 page->active = 0; 2364 2364 2365 2365 if (OBJFREELIST_SLAB(cachep)) ··· 2368 2368 /* Slab management obj is off-slab. */ 2369 2369 freelist = kmem_cache_alloc_node(cachep->freelist_cache, 2370 2370 local_flags, nodeid); 2371 + freelist = kasan_reset_tag(freelist); 2371 2372 if (!freelist) 2372 2373 return NULL; 2373 2374 } else { ··· 2682 2681 2683 2682 offset *= cachep->colour_off; 2684 2683 2684 + /* 2685 + * Call kasan_poison_slab() before calling alloc_slabmgmt(), so 2686 + * page_address() in the latter returns a non-tagged pointer, 2687 + * as it should be for slab pages. 2688 + */ 2689 + kasan_poison_slab(page); 2690 + 2685 2691 /* Get slab management. */ 2686 2692 freelist = alloc_slabmgmt(cachep, page, offset, 2687 2693 local_flags & ~GFP_CONSTRAINT_MASK, page_node); ··· 2697 2689 2698 2690 slab_map_pages(cachep, page, freelist); 2699 2691 2700 - kasan_poison_slab(page); 2701 2692 cache_init_objs(cachep, page); 2702 2693 2703 2694 if (gfpflags_allow_blocking(local_flags)) ··· 3547 3540 { 3548 3541 void *ret = slab_alloc(cachep, flags, _RET_IP_); 3549 3542 3550 - ret = kasan_slab_alloc(cachep, ret, flags); 3551 3543 trace_kmem_cache_alloc(_RET_IP_, ret, 3552 3544 cachep->object_size, cachep->size, flags); 3553 3545 ··· 3636 3630 { 3637 3631 void *ret = slab_alloc_node(cachep, flags, nodeid, _RET_IP_); 3638 3632 3639 - ret = kasan_slab_alloc(cachep, ret, flags); 3640 3633 trace_kmem_cache_alloc_node(_RET_IP_, ret, 3641 3634 cachep->object_size, cachep->size, 3642 3635 flags, nodeid); ··· 4412 4407 struct kmem_cache *cachep; 4413 4408 unsigned int objnr; 4414 4409 unsigned long offset; 4410 + 4411 + ptr = kasan_reset_tag(ptr); 4415 4412 4416 4413 /* Find and validate object. */ 4417 4414 cachep = page->slab_cache;
+3 -4
mm/slab.h
··· 437 437 438 438 flags &= gfp_allowed_mask; 439 439 for (i = 0; i < size; i++) { 440 - void *object = p[i]; 441 - 442 - kmemleak_alloc_recursive(object, s->object_size, 1, 440 + p[i] = kasan_slab_alloc(s, p[i], flags); 441 + /* As p[i] might get tagged, call kmemleak hook after KASAN. */ 442 + kmemleak_alloc_recursive(p[i], s->object_size, 1, 443 443 s->flags, flags); 444 - p[i] = kasan_slab_alloc(s, object, flags); 445 444 } 446 445 447 446 if (memcg_kmem_enabled())
+2 -1
mm/slab_common.c
··· 1228 1228 flags |= __GFP_COMP; 1229 1229 page = alloc_pages(flags, order); 1230 1230 ret = page ? page_address(page) : NULL; 1231 - kmemleak_alloc(ret, size, 1, flags); 1232 1231 ret = kasan_kmalloc_large(ret, size, flags); 1232 + /* As ret might get tagged, call kmemleak hook after KASAN. */ 1233 + kmemleak_alloc(ret, size, 1, flags); 1233 1234 return ret; 1234 1235 } 1235 1236 EXPORT_SYMBOL(kmalloc_order);
+39 -20
mm/slub.c
··· 249 249 unsigned long ptr_addr) 250 250 { 251 251 #ifdef CONFIG_SLAB_FREELIST_HARDENED 252 - return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr); 252 + /* 253 + * When CONFIG_KASAN_SW_TAGS is enabled, ptr_addr might be tagged. 254 + * Normally, this doesn't cause any issues, as both set_freepointer() 255 + * and get_freepointer() are called with a pointer with the same tag. 256 + * However, there are some issues with CONFIG_SLUB_DEBUG code. For 257 + * example, when __free_slub() iterates over objects in a cache, it 258 + * passes untagged pointers to check_object(). check_object() in turns 259 + * calls get_freepointer() with an untagged pointer, which causes the 260 + * freepointer to be restored incorrectly. 261 + */ 262 + return (void *)((unsigned long)ptr ^ s->random ^ 263 + (unsigned long)kasan_reset_tag((void *)ptr_addr)); 253 264 #else 254 265 return ptr; 255 266 #endif ··· 314 303 __p < (__addr) + (__objects) * (__s)->size; \ 315 304 __p += (__s)->size) 316 305 317 - #define for_each_object_idx(__p, __idx, __s, __addr, __objects) \ 318 - for (__p = fixup_red_left(__s, __addr), __idx = 1; \ 319 - __idx <= __objects; \ 320 - __p += (__s)->size, __idx++) 321 - 322 306 /* Determine object index from a given position */ 323 307 static inline unsigned int slab_index(void *p, struct kmem_cache *s, void *addr) 324 308 { 325 - return (p - addr) / s->size; 309 + return (kasan_reset_tag(p) - addr) / s->size; 326 310 } 327 311 328 312 static inline unsigned int order_objects(unsigned int order, unsigned int size) ··· 513 507 return 1; 514 508 515 509 base = page_address(page); 510 + object = kasan_reset_tag(object); 516 511 object = restore_red_left(s, object); 517 512 if (object < base || object >= base + page->objects * s->size || 518 513 (object - base) % s->size) { ··· 1082 1075 init_tracking(s, object); 1083 1076 } 1084 1077 1078 + static void setup_page_debug(struct kmem_cache *s, void *addr, int order) 1079 + { 1080 + if (!(s->flags & SLAB_POISON)) 1081 + return; 1082 + 1083 + metadata_access_enable(); 1084 + memset(addr, POISON_INUSE, PAGE_SIZE << order); 1085 + metadata_access_disable(); 1086 + } 1087 + 1085 1088 static inline int alloc_consistency_checks(struct kmem_cache *s, 1086 1089 struct page *page, 1087 1090 void *object, unsigned long addr) ··· 1347 1330 #else /* !CONFIG_SLUB_DEBUG */ 1348 1331 static inline void setup_object_debug(struct kmem_cache *s, 1349 1332 struct page *page, void *object) {} 1333 + static inline void setup_page_debug(struct kmem_cache *s, 1334 + void *addr, int order) {} 1350 1335 1351 1336 static inline int alloc_debug_processing(struct kmem_cache *s, 1352 1337 struct page *page, void *object, unsigned long addr) { return 0; } ··· 1393 1374 */ 1394 1375 static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) 1395 1376 { 1377 + ptr = kasan_kmalloc_large(ptr, size, flags); 1378 + /* As ptr might get tagged, call kmemleak hook after KASAN. */ 1396 1379 kmemleak_alloc(ptr, size, 1, flags); 1397 - return kasan_kmalloc_large(ptr, size, flags); 1380 + return ptr; 1398 1381 } 1399 1382 1400 1383 static __always_inline void kfree_hook(void *x) ··· 1662 1641 if (page_is_pfmemalloc(page)) 1663 1642 SetPageSlabPfmemalloc(page); 1664 1643 1644 + kasan_poison_slab(page); 1645 + 1665 1646 start = page_address(page); 1666 1647 1667 - if (unlikely(s->flags & SLAB_POISON)) 1668 - memset(start, POISON_INUSE, PAGE_SIZE << order); 1669 - 1670 - kasan_poison_slab(page); 1648 + setup_page_debug(s, start, order); 1671 1649 1672 1650 shuffle = shuffle_freelist(s, page); 1673 1651 1674 1652 if (!shuffle) { 1675 - for_each_object_idx(p, idx, s, start, page->objects) { 1676 - if (likely(idx < page->objects)) { 1677 - next = p + s->size; 1678 - next = setup_object(s, page, next); 1679 - set_freepointer(s, p, next); 1680 - } else 1681 - set_freepointer(s, p, NULL); 1682 - } 1683 1653 start = fixup_red_left(s, start); 1684 1654 start = setup_object(s, page, start); 1685 1655 page->freelist = start; 1656 + for (idx = 0, p = start; idx < page->objects - 1; idx++) { 1657 + next = p + s->size; 1658 + next = setup_object(s, page, next); 1659 + set_freepointer(s, p, next); 1660 + p = next; 1661 + } 1662 + set_freepointer(s, p, NULL); 1686 1663 } 1687 1664 1688 1665 page->inuse = page->objects;
+10 -7
mm/swap.c
··· 320 320 { 321 321 } 322 322 323 - static bool need_activate_page_drain(int cpu) 324 - { 325 - return false; 326 - } 327 - 328 323 void activate_page(struct page *page) 329 324 { 330 325 struct zone *zone = page_zone(page); ··· 648 653 put_cpu(); 649 654 } 650 655 656 + #ifdef CONFIG_SMP 657 + 658 + static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work); 659 + 651 660 static void lru_add_drain_per_cpu(struct work_struct *dummy) 652 661 { 653 662 lru_add_drain(); 654 663 } 655 - 656 - static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work); 657 664 658 665 /* 659 666 * Doesn't need any cpu hotplug locking because we do rely on per-cpu ··· 699 702 700 703 mutex_unlock(&lock); 701 704 } 705 + #else 706 + void lru_add_drain_all(void) 707 + { 708 + lru_add_drain(); 709 + } 710 + #endif 702 711 703 712 /** 704 713 * release_pages - batched put_page()
+1 -1
mm/util.c
··· 150 150 { 151 151 void *p; 152 152 153 - p = kmalloc_track_caller(len, GFP_USER); 153 + p = kmalloc_track_caller(len, GFP_USER | __GFP_NOWARN); 154 154 if (!p) 155 155 return ERR_PTR(-ENOMEM); 156 156
+24 -21
net/bpf/test_run.c
··· 13 13 #include <net/sock.h> 14 14 #include <net/tcp.h> 15 15 16 - static __always_inline u32 bpf_test_run_one(struct bpf_prog *prog, void *ctx, 17 - struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE]) 18 - { 19 - u32 ret; 20 - 21 - preempt_disable(); 22 - rcu_read_lock(); 23 - bpf_cgroup_storage_set(storage); 24 - ret = BPF_PROG_RUN(prog, ctx); 25 - rcu_read_unlock(); 26 - preempt_enable(); 27 - 28 - return ret; 29 - } 30 - 31 - static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, u32 *ret, 32 - u32 *time) 16 + static int bpf_test_run(struct bpf_prog *prog, void *ctx, u32 repeat, 17 + u32 *retval, u32 *time) 33 18 { 34 19 struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE] = { 0 }; 35 20 enum bpf_cgroup_storage_type stype; 36 21 u64 time_start, time_spent = 0; 22 + int ret = 0; 37 23 u32 i; 38 24 39 25 for_each_cgroup_storage_type(stype) { ··· 34 48 35 49 if (!repeat) 36 50 repeat = 1; 51 + 52 + rcu_read_lock(); 53 + preempt_disable(); 37 54 time_start = ktime_get_ns(); 38 55 for (i = 0; i < repeat; i++) { 39 - *ret = bpf_test_run_one(prog, ctx, storage); 56 + bpf_cgroup_storage_set(storage); 57 + *retval = BPF_PROG_RUN(prog, ctx); 58 + 59 + if (signal_pending(current)) { 60 + ret = -EINTR; 61 + break; 62 + } 63 + 40 64 if (need_resched()) { 41 - if (signal_pending(current)) 42 - break; 43 65 time_spent += ktime_get_ns() - time_start; 66 + preempt_enable(); 67 + rcu_read_unlock(); 68 + 44 69 cond_resched(); 70 + 71 + rcu_read_lock(); 72 + preempt_disable(); 45 73 time_start = ktime_get_ns(); 46 74 } 47 75 } 48 76 time_spent += ktime_get_ns() - time_start; 77 + preempt_enable(); 78 + rcu_read_unlock(); 79 + 49 80 do_div(time_spent, repeat); 50 81 *time = time_spent > U32_MAX ? U32_MAX : (u32)time_spent; 51 82 52 83 for_each_cgroup_storage_type(stype) 53 84 bpf_cgroup_storage_free(storage[stype]); 54 85 55 - return 0; 86 + return ret; 56 87 } 57 88 58 89 static int bpf_test_finish(const union bpf_attr *kattr,
+1 -8
net/bridge/br_multicast.c
··· 1204 1204 return; 1205 1205 1206 1206 br_multicast_update_query_timer(br, query, max_delay); 1207 - 1208 - /* Based on RFC4541, section 2.1.1 IGMP Forwarding Rules, 1209 - * the arrival port for IGMP Queries where the source address 1210 - * is 0.0.0.0 should not be added to router port list. 1211 - */ 1212 - if ((saddr->proto == htons(ETH_P_IP) && saddr->u.ip4) || 1213 - saddr->proto == htons(ETH_P_IPV6)) 1214 - br_multicast_mark_router(br, port); 1207 + br_multicast_mark_router(br, port); 1215 1208 } 1216 1209 1217 1210 static void br_ip4_multicast_query(struct net_bridge *br,
+9 -6
net/ceph/messenger.c
··· 2058 2058 dout("process_connect on %p tag %d\n", con, (int)con->in_tag); 2059 2059 2060 2060 if (con->auth) { 2061 + int len = le32_to_cpu(con->in_reply.authorizer_len); 2062 + 2061 2063 /* 2062 2064 * Any connection that defines ->get_authorizer() 2063 2065 * should also define ->add_authorizer_challenge() and ··· 2069 2067 */ 2070 2068 if (con->in_reply.tag == CEPH_MSGR_TAG_CHALLENGE_AUTHORIZER) { 2071 2069 ret = con->ops->add_authorizer_challenge( 2072 - con, con->auth->authorizer_reply_buf, 2073 - le32_to_cpu(con->in_reply.authorizer_len)); 2070 + con, con->auth->authorizer_reply_buf, len); 2074 2071 if (ret < 0) 2075 2072 return ret; 2076 2073 ··· 2079 2078 return 0; 2080 2079 } 2081 2080 2082 - ret = con->ops->verify_authorizer_reply(con); 2083 - if (ret < 0) { 2084 - con->error_msg = "bad authorize reply"; 2085 - return ret; 2081 + if (len) { 2082 + ret = con->ops->verify_authorizer_reply(con); 2083 + if (ret < 0) { 2084 + con->error_msg = "bad authorize reply"; 2085 + return ret; 2086 + } 2086 2087 } 2087 2088 } 2088 2089
+5 -1
net/compat.c
··· 388 388 char __user *optval, unsigned int optlen) 389 389 { 390 390 int err; 391 - struct socket *sock = sockfd_lookup(fd, &err); 391 + struct socket *sock; 392 392 393 + if (optlen > INT_MAX) 394 + return -EINVAL; 395 + 396 + sock = sockfd_lookup(fd, &err); 393 397 if (sock) { 394 398 err = security_socket_setsockopt(sock, level, optname); 395 399 if (err) {
+2 -2
net/core/dev.c
··· 8152 8152 netdev_features_t feature; 8153 8153 int feature_bit; 8154 8154 8155 - for_each_netdev_feature(&upper_disables, feature_bit) { 8155 + for_each_netdev_feature(upper_disables, feature_bit) { 8156 8156 feature = __NETIF_F_BIT(feature_bit); 8157 8157 if (!(upper->wanted_features & feature) 8158 8158 && (features & feature)) { ··· 8172 8172 netdev_features_t feature; 8173 8173 int feature_bit; 8174 8174 8175 - for_each_netdev_feature(&upper_disables, feature_bit) { 8175 + for_each_netdev_feature(upper_disables, feature_bit) { 8176 8176 feature = __NETIF_F_BIT(feature_bit); 8177 8177 if (!(features & feature) && (lower->features & feature)) { 8178 8178 netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
+4 -8
net/core/filter.c
··· 2789 2789 u32 off = skb_mac_header_len(skb); 2790 2790 int ret; 2791 2791 2792 - /* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */ 2793 - if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb))) 2792 + if (!skb_is_gso_tcp(skb)) 2794 2793 return -ENOTSUPP; 2795 2794 2796 2795 ret = skb_cow(skb, len_diff); ··· 2830 2831 u32 off = skb_mac_header_len(skb); 2831 2832 int ret; 2832 2833 2833 - /* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */ 2834 - if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb))) 2834 + if (!skb_is_gso_tcp(skb)) 2835 2835 return -ENOTSUPP; 2836 2836 2837 2837 ret = skb_unclone(skb, GFP_ATOMIC); ··· 2955 2957 u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb); 2956 2958 int ret; 2957 2959 2958 - /* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */ 2959 - if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb))) 2960 + if (!skb_is_gso_tcp(skb)) 2960 2961 return -ENOTSUPP; 2961 2962 2962 2963 ret = skb_cow(skb, len_diff); ··· 2984 2987 u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb); 2985 2988 int ret; 2986 2989 2987 - /* SCTP uses GSO_BY_FRAGS, thus cannot adjust it. */ 2988 - if (skb_is_gso(skb) && unlikely(skb_is_gso_sctp(skb))) 2990 + if (!skb_is_gso_tcp(skb)) 2989 2991 return -ENOTSUPP; 2990 2992 2991 2993 ret = skb_unclone(skb, GFP_ATOMIC);
+4
net/core/skbuff.c
··· 356 356 */ 357 357 void *netdev_alloc_frag(unsigned int fragsz) 358 358 { 359 + fragsz = SKB_DATA_ALIGN(fragsz); 360 + 359 361 return __netdev_alloc_frag(fragsz, GFP_ATOMIC); 360 362 } 361 363 EXPORT_SYMBOL(netdev_alloc_frag); ··· 371 369 372 370 void *napi_alloc_frag(unsigned int fragsz) 373 371 { 372 + fragsz = SKB_DATA_ALIGN(fragsz); 373 + 374 374 return __napi_alloc_frag(fragsz, GFP_ATOMIC); 375 375 } 376 376 EXPORT_SYMBOL(napi_alloc_frag);
+10 -6
net/dsa/dsa2.c
··· 612 612 { 613 613 struct device_node *ports, *port; 614 614 struct dsa_port *dp; 615 + int err = 0; 615 616 u32 reg; 616 - int err; 617 617 618 618 ports = of_get_child_by_name(dn, "ports"); 619 619 if (!ports) { ··· 624 624 for_each_available_child_of_node(ports, port) { 625 625 err = of_property_read_u32(port, "reg", &reg); 626 626 if (err) 627 - return err; 627 + goto out_put_node; 628 628 629 - if (reg >= ds->num_ports) 630 - return -EINVAL; 629 + if (reg >= ds->num_ports) { 630 + err = -EINVAL; 631 + goto out_put_node; 632 + } 631 633 632 634 dp = &ds->ports[reg]; 633 635 634 636 err = dsa_port_parse_of(dp, port); 635 637 if (err) 636 - return err; 638 + goto out_put_node; 637 639 } 638 640 639 - return 0; 641 + out_put_node: 642 + of_node_put(ports); 643 + return err; 640 644 } 641 645 642 646 static int dsa_switch_parse_member_of(struct dsa_switch *ds,
+5 -3
net/dsa/port.c
··· 69 69 70 70 int dsa_port_enable(struct dsa_port *dp, struct phy_device *phy) 71 71 { 72 - u8 stp_state = dp->bridge_dev ? BR_STATE_BLOCKING : BR_STATE_FORWARDING; 73 72 struct dsa_switch *ds = dp->ds; 74 73 int port = dp->index; 75 74 int err; ··· 79 80 return err; 80 81 } 81 82 82 - dsa_port_set_state_now(dp, stp_state); 83 + if (!dp->bridge_dev) 84 + dsa_port_set_state_now(dp, BR_STATE_FORWARDING); 83 85 84 86 return 0; 85 87 } ··· 90 90 struct dsa_switch *ds = dp->ds; 91 91 int port = dp->index; 92 92 93 - dsa_port_set_state_now(dp, BR_STATE_DISABLED); 93 + if (!dp->bridge_dev) 94 + dsa_port_set_state_now(dp, BR_STATE_DISABLED); 94 95 95 96 if (ds->ops->port_disable) 96 97 ds->ops->port_disable(ds, port, phy); ··· 292 291 return ERR_PTR(-EPROBE_DEFER); 293 292 } 294 293 294 + of_node_put(phy_dn); 295 295 return phydev; 296 296 } 297 297
+17 -3
net/ipv4/cipso_ipv4.c
··· 667 667 case CIPSO_V4_MAP_PASS: 668 668 return 0; 669 669 case CIPSO_V4_MAP_TRANS: 670 - if (doi_def->map.std->lvl.cipso[level] < CIPSO_V4_INV_LVL) 670 + if ((level < doi_def->map.std->lvl.cipso_size) && 671 + (doi_def->map.std->lvl.cipso[level] < CIPSO_V4_INV_LVL)) 671 672 return 0; 672 673 break; 673 674 } ··· 1736 1735 */ 1737 1736 void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway) 1738 1737 { 1738 + unsigned char optbuf[sizeof(struct ip_options) + 40]; 1739 + struct ip_options *opt = (struct ip_options *)optbuf; 1740 + 1739 1741 if (ip_hdr(skb)->protocol == IPPROTO_ICMP || error != -EACCES) 1740 1742 return; 1741 1743 1744 + /* 1745 + * We might be called above the IP layer, 1746 + * so we can not use icmp_send and IPCB here. 1747 + */ 1748 + 1749 + memset(opt, 0, sizeof(struct ip_options)); 1750 + opt->optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr); 1751 + if (__ip_options_compile(dev_net(skb->dev), opt, skb, NULL)) 1752 + return; 1753 + 1742 1754 if (gateway) 1743 - icmp_send(skb, ICMP_DEST_UNREACH, ICMP_NET_ANO, 0); 1755 + __icmp_send(skb, ICMP_DEST_UNREACH, ICMP_NET_ANO, 0, opt); 1744 1756 else 1745 - icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_ANO, 0); 1757 + __icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_ANO, 0, opt); 1746 1758 } 1747 1759 1748 1760 /**
+1 -1
net/ipv4/esp4.c
··· 328 328 skb->len += tailen; 329 329 skb->data_len += tailen; 330 330 skb->truesize += tailen; 331 - if (sk) 331 + if (sk && sk_fullsock(sk)) 332 332 refcount_add(tailen, &sk->sk_wmem_alloc); 333 333 334 334 goto out;
+4
net/ipv4/fib_frontend.c
··· 710 710 case RTA_GATEWAY: 711 711 cfg->fc_gw = nla_get_be32(attr); 712 712 break; 713 + case RTA_VIA: 714 + NL_SET_ERR_MSG(extack, "IPv4 does not support RTA_VIA attribute"); 715 + err = -EINVAL; 716 + goto errout; 713 717 case RTA_PRIORITY: 714 718 cfg->fc_priority = nla_get_u32(attr); 715 719 break;
+4 -3
net/ipv4/icmp.c
··· 570 570 * MUST reply to only the first fragment. 571 571 */ 572 572 573 - void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info) 573 + void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info, 574 + const struct ip_options *opt) 574 575 { 575 576 struct iphdr *iph; 576 577 int room; ··· 692 691 iph->tos; 693 692 mark = IP4_REPLY_MARK(net, skb_in->mark); 694 693 695 - if (ip_options_echo(net, &icmp_param.replyopts.opt.opt, skb_in)) 694 + if (__ip_options_echo(net, &icmp_param.replyopts.opt.opt, skb_in, opt)) 696 695 goto out_unlock; 697 696 698 697 ··· 743 742 local_bh_enable(); 744 743 out:; 745 744 } 746 - EXPORT_SYMBOL(icmp_send); 745 + EXPORT_SYMBOL(__icmp_send); 747 746 748 747 749 748 static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
+17 -16
net/ipv4/ip_gre.c
··· 1457 1457 struct ip_tunnel_parm *p = &t->parms; 1458 1458 __be16 o_flags = p->o_flags; 1459 1459 1460 - if ((t->erspan_ver == 1 || t->erspan_ver == 2) && 1461 - !t->collect_md) 1462 - o_flags |= TUNNEL_KEY; 1460 + if (t->erspan_ver == 1 || t->erspan_ver == 2) { 1461 + if (!t->collect_md) 1462 + o_flags |= TUNNEL_KEY; 1463 + 1464 + if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver)) 1465 + goto nla_put_failure; 1466 + 1467 + if (t->erspan_ver == 1) { 1468 + if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index)) 1469 + goto nla_put_failure; 1470 + } else { 1471 + if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir)) 1472 + goto nla_put_failure; 1473 + if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid)) 1474 + goto nla_put_failure; 1475 + } 1476 + } 1463 1477 1464 1478 if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) || 1465 1479 nla_put_be16(skb, IFLA_GRE_IFLAGS, ··· 1506 1492 1507 1493 if (t->collect_md) { 1508 1494 if (nla_put_flag(skb, IFLA_GRE_COLLECT_METADATA)) 1509 - goto nla_put_failure; 1510 - } 1511 - 1512 - if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, t->erspan_ver)) 1513 - goto nla_put_failure; 1514 - 1515 - if (t->erspan_ver == 1) { 1516 - if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, t->index)) 1517 - goto nla_put_failure; 1518 - } else if (t->erspan_ver == 2) { 1519 - if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, t->dir)) 1520 - goto nla_put_failure; 1521 - if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, t->hwid)) 1522 1495 goto nla_put_failure; 1523 1496 } 1524 1497
+5 -4
net/ipv4/ip_input.c
··· 307 307 } 308 308 309 309 static int ip_rcv_finish_core(struct net *net, struct sock *sk, 310 - struct sk_buff *skb) 310 + struct sk_buff *skb, struct net_device *dev) 311 311 { 312 312 const struct iphdr *iph = ip_hdr(skb); 313 313 int (*edemux)(struct sk_buff *skb); 314 - struct net_device *dev = skb->dev; 315 314 struct rtable *rt; 316 315 int err; 317 316 ··· 399 400 400 401 static int ip_rcv_finish(struct net *net, struct sock *sk, struct sk_buff *skb) 401 402 { 403 + struct net_device *dev = skb->dev; 402 404 int ret; 403 405 404 406 /* if ingress device is enslaved to an L3 master device pass the ··· 409 409 if (!skb) 410 410 return NET_RX_SUCCESS; 411 411 412 - ret = ip_rcv_finish_core(net, sk, skb); 412 + ret = ip_rcv_finish_core(net, sk, skb, dev); 413 413 if (ret != NET_RX_DROP) 414 414 ret = dst_input(skb); 415 415 return ret; ··· 545 545 546 546 INIT_LIST_HEAD(&sublist); 547 547 list_for_each_entry_safe(skb, next, head, list) { 548 + struct net_device *dev = skb->dev; 548 549 struct dst_entry *dst; 549 550 550 551 skb_list_del_init(skb); ··· 555 554 skb = l3mdev_ip_rcv(skb); 556 555 if (!skb) 557 556 continue; 558 - if (ip_rcv_finish_core(net, sk, skb) == NET_RX_DROP) 557 + if (ip_rcv_finish_core(net, sk, skb, dev) == NET_RX_DROP) 559 558 continue; 560 559 561 560 dst = skb_dst(skb);
+17 -5
net/ipv4/ip_options.c
··· 251 251 * If opt == NULL, then skb->data should point to IP header. 252 252 */ 253 253 254 - int ip_options_compile(struct net *net, 255 - struct ip_options *opt, struct sk_buff *skb) 254 + int __ip_options_compile(struct net *net, 255 + struct ip_options *opt, struct sk_buff *skb, 256 + __be32 *info) 256 257 { 257 258 __be32 spec_dst = htonl(INADDR_ANY); 258 259 unsigned char *pp_ptr = NULL; ··· 469 468 return 0; 470 469 471 470 error: 472 - if (skb) { 473 - icmp_send(skb, ICMP_PARAMETERPROB, 0, htonl((pp_ptr-iph)<<24)); 474 - } 471 + if (info) 472 + *info = htonl((pp_ptr-iph)<<24); 475 473 return -EINVAL; 474 + } 475 + 476 + int ip_options_compile(struct net *net, 477 + struct ip_options *opt, struct sk_buff *skb) 478 + { 479 + int ret; 480 + __be32 info; 481 + 482 + ret = __ip_options_compile(net, opt, skb, &info); 483 + if (ret != 0 && skb) 484 + icmp_send(skb, ICMP_PARAMETERPROB, 0, info); 485 + return ret; 476 486 } 477 487 EXPORT_SYMBOL(ip_options_compile); 478 488
+14 -5
net/ipv4/netlink.c
··· 3 3 #include <linux/types.h> 4 4 #include <net/net_namespace.h> 5 5 #include <net/netlink.h> 6 + #include <linux/in6.h> 6 7 #include <net/ip.h> 7 8 8 - int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, 9 + int rtm_getroute_parse_ip_proto(struct nlattr *attr, u8 *ip_proto, u8 family, 9 10 struct netlink_ext_ack *extack) 10 11 { 11 12 *ip_proto = nla_get_u8(attr); ··· 14 13 switch (*ip_proto) { 15 14 case IPPROTO_TCP: 16 15 case IPPROTO_UDP: 17 - case IPPROTO_ICMP: 18 16 return 0; 19 - default: 20 - NL_SET_ERR_MSG(extack, "Unsupported ip proto"); 21 - return -EOPNOTSUPP; 17 + case IPPROTO_ICMP: 18 + if (family != AF_INET) 19 + break; 20 + return 0; 21 + #if IS_ENABLED(CONFIG_IPV6) 22 + case IPPROTO_ICMPV6: 23 + if (family != AF_INET6) 24 + break; 25 + return 0; 26 + #endif 22 27 } 28 + NL_SET_ERR_MSG(extack, "Unsupported ip proto"); 29 + return -EOPNOTSUPP; 23 30 } 24 31 EXPORT_SYMBOL_GPL(rtm_getroute_parse_ip_proto);
+1 -1
net/ipv4/route.c
··· 2803 2803 2804 2804 if (tb[RTA_IP_PROTO]) { 2805 2805 err = rtm_getroute_parse_ip_proto(tb[RTA_IP_PROTO], 2806 - &ip_proto, extack); 2806 + &ip_proto, AF_INET, extack); 2807 2807 if (err) 2808 2808 return err; 2809 2809 }
+1 -1
net/ipv4/tcp.c
··· 2528 2528 sk_mem_reclaim(sk); 2529 2529 tcp_clear_all_retrans_hints(tcp_sk(sk)); 2530 2530 tcp_sk(sk)->packets_out = 0; 2531 + inet_csk(sk)->icsk_backoff = 0; 2531 2532 } 2532 2533 2533 2534 int tcp_disconnect(struct sock *sk, int flags) ··· 2577 2576 tp->write_seq += tp->max_window + 2; 2578 2577 if (tp->write_seq == 0) 2579 2578 tp->write_seq = 1; 2580 - icsk->icsk_backoff = 0; 2581 2579 tp->snd_cwnd = 2; 2582 2580 icsk->icsk_probes_out = 0; 2583 2581 tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
+4 -1
net/ipv4/tcp_ipv4.c
··· 536 536 if (sock_owned_by_user(sk)) 537 537 break; 538 538 539 + skb = tcp_rtx_queue_head(sk); 540 + if (WARN_ON_ONCE(!skb)) 541 + break; 542 + 539 543 icsk->icsk_backoff--; 540 544 icsk->icsk_rto = tp->srtt_us ? __tcp_set_rto(tp) : 541 545 TCP_TIMEOUT_INIT; 542 546 icsk->icsk_rto = inet_csk_rto_backoff(icsk, TCP_RTO_MAX); 543 547 544 - skb = tcp_rtx_queue_head(sk); 545 548 546 549 tcp_mstamp_refresh(tp); 547 550 delta_us = (u32)(tp->tcp_mstamp - tcp_skb_timestamp_us(skb));
+1
net/ipv4/tcp_output.c
··· 2347 2347 /* "skb_mstamp_ns" is used as a start point for the retransmit timer */ 2348 2348 skb->skb_mstamp_ns = tp->tcp_wstamp_ns = tp->tcp_clock_cache; 2349 2349 list_move_tail(&skb->tcp_tsorted_anchor, &tp->tsorted_sent_queue); 2350 + tcp_init_tso_segs(skb, mss_now); 2350 2351 goto repair; /* Skip network transmission */ 2351 2352 } 2352 2353
+4 -2
net/ipv4/udp.c
··· 562 562 563 563 for (i = 0; i < MAX_IPTUN_ENCAP_OPS; i++) { 564 564 int (*handler)(struct sk_buff *skb, u32 info); 565 + const struct ip_tunnel_encap_ops *encap; 565 566 566 - if (!iptun_encaps[i]) 567 + encap = rcu_dereference(iptun_encaps[i]); 568 + if (!encap) 567 569 continue; 568 - handler = rcu_dereference(iptun_encaps[i]->err_handler); 570 + handler = encap->err_handler; 569 571 if (handler && !handler(skb, info)) 570 572 return 0; 571 573 }
+1 -1
net/ipv6/esp6.c
··· 296 296 skb->len += tailen; 297 297 skb->data_len += tailen; 298 298 skb->truesize += tailen; 299 - if (sk) 299 + if (sk && sk_fullsock(sk)) 300 300 refcount_add(tailen, &sk->sk_wmem_alloc); 301 301 302 302 goto out;
+1 -1
net/ipv6/fou6.c
··· 72 72 73 73 static int gue6_err_proto_handler(int proto, struct sk_buff *skb, 74 74 struct inet6_skb_parm *opt, 75 - u8 type, u8 code, int offset, u32 info) 75 + u8 type, u8 code, int offset, __be32 info) 76 76 { 77 77 const struct inet6_protocol *ipprot; 78 78
+41 -32
net/ipv6/ip6_gre.c
··· 1719 1719 return 0; 1720 1720 } 1721 1721 1722 + static void ip6erspan_set_version(struct nlattr *data[], 1723 + struct __ip6_tnl_parm *parms) 1724 + { 1725 + if (!data) 1726 + return; 1727 + 1728 + parms->erspan_ver = 1; 1729 + if (data[IFLA_GRE_ERSPAN_VER]) 1730 + parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]); 1731 + 1732 + if (parms->erspan_ver == 1) { 1733 + if (data[IFLA_GRE_ERSPAN_INDEX]) 1734 + parms->index = nla_get_u32(data[IFLA_GRE_ERSPAN_INDEX]); 1735 + } else if (parms->erspan_ver == 2) { 1736 + if (data[IFLA_GRE_ERSPAN_DIR]) 1737 + parms->dir = nla_get_u8(data[IFLA_GRE_ERSPAN_DIR]); 1738 + if (data[IFLA_GRE_ERSPAN_HWID]) 1739 + parms->hwid = nla_get_u16(data[IFLA_GRE_ERSPAN_HWID]); 1740 + } 1741 + } 1742 + 1722 1743 static void ip6gre_netlink_parms(struct nlattr *data[], 1723 1744 struct __ip6_tnl_parm *parms) 1724 1745 { ··· 1788 1767 1789 1768 if (data[IFLA_GRE_COLLECT_METADATA]) 1790 1769 parms->collect_md = true; 1791 - 1792 - parms->erspan_ver = 1; 1793 - if (data[IFLA_GRE_ERSPAN_VER]) 1794 - parms->erspan_ver = nla_get_u8(data[IFLA_GRE_ERSPAN_VER]); 1795 - 1796 - if (parms->erspan_ver == 1) { 1797 - if (data[IFLA_GRE_ERSPAN_INDEX]) 1798 - parms->index = nla_get_u32(data[IFLA_GRE_ERSPAN_INDEX]); 1799 - } else if (parms->erspan_ver == 2) { 1800 - if (data[IFLA_GRE_ERSPAN_DIR]) 1801 - parms->dir = nla_get_u8(data[IFLA_GRE_ERSPAN_DIR]); 1802 - if (data[IFLA_GRE_ERSPAN_HWID]) 1803 - parms->hwid = nla_get_u16(data[IFLA_GRE_ERSPAN_HWID]); 1804 - } 1805 1770 } 1806 1771 1807 1772 static int ip6gre_tap_init(struct net_device *dev) ··· 2107 2100 struct __ip6_tnl_parm *p = &t->parms; 2108 2101 __be16 o_flags = p->o_flags; 2109 2102 2110 - if ((p->erspan_ver == 1 || p->erspan_ver == 2) && 2111 - !p->collect_md) 2112 - o_flags |= TUNNEL_KEY; 2103 + if (p->erspan_ver == 1 || p->erspan_ver == 2) { 2104 + if (!p->collect_md) 2105 + o_flags |= TUNNEL_KEY; 2106 + 2107 + if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, p->erspan_ver)) 2108 + goto nla_put_failure; 2109 + 2110 + if (p->erspan_ver == 1) { 2111 + if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index)) 2112 + goto nla_put_failure; 2113 + } else { 2114 + if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, p->dir)) 2115 + goto nla_put_failure; 2116 + if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, p->hwid)) 2117 + goto nla_put_failure; 2118 + } 2119 + } 2113 2120 2114 2121 if (nla_put_u32(skb, IFLA_GRE_LINK, p->link) || 2115 2122 nla_put_be16(skb, IFLA_GRE_IFLAGS, ··· 2138 2117 nla_put_u8(skb, IFLA_GRE_ENCAP_LIMIT, p->encap_limit) || 2139 2118 nla_put_be32(skb, IFLA_GRE_FLOWINFO, p->flowinfo) || 2140 2119 nla_put_u32(skb, IFLA_GRE_FLAGS, p->flags) || 2141 - nla_put_u32(skb, IFLA_GRE_FWMARK, p->fwmark) || 2142 - nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index)) 2120 + nla_put_u32(skb, IFLA_GRE_FWMARK, p->fwmark)) 2143 2121 goto nla_put_failure; 2144 2122 2145 2123 if (nla_put_u16(skb, IFLA_GRE_ENCAP_TYPE, ··· 2153 2133 2154 2134 if (p->collect_md) { 2155 2135 if (nla_put_flag(skb, IFLA_GRE_COLLECT_METADATA)) 2156 - goto nla_put_failure; 2157 - } 2158 - 2159 - if (nla_put_u8(skb, IFLA_GRE_ERSPAN_VER, p->erspan_ver)) 2160 - goto nla_put_failure; 2161 - 2162 - if (p->erspan_ver == 1) { 2163 - if (nla_put_u32(skb, IFLA_GRE_ERSPAN_INDEX, p->index)) 2164 - goto nla_put_failure; 2165 - } else if (p->erspan_ver == 2) { 2166 - if (nla_put_u8(skb, IFLA_GRE_ERSPAN_DIR, p->dir)) 2167 - goto nla_put_failure; 2168 - if (nla_put_u16(skb, IFLA_GRE_ERSPAN_HWID, p->hwid)) 2169 2136 goto nla_put_failure; 2170 2137 } 2171 2138 ··· 2210 2203 int err; 2211 2204 2212 2205 ip6gre_netlink_parms(data, &nt->parms); 2206 + ip6erspan_set_version(data, &nt->parms); 2213 2207 ign = net_generic(net, ip6gre_net_id); 2214 2208 2215 2209 if (nt->parms.collect_md) { ··· 2256 2248 if (IS_ERR(t)) 2257 2249 return PTR_ERR(t); 2258 2250 2251 + ip6erspan_set_version(data, &p); 2259 2252 ip6gre_tunnel_unlink_md(ign, t); 2260 2253 ip6gre_tunnel_unlink(ign, t); 2261 2254 ip6erspan_tnl_change(t, &p, !tb[IFLA_MTU]);
+30 -9
net/ipv6/route.c
··· 1274 1274 static void rt6_remove_exception(struct rt6_exception_bucket *bucket, 1275 1275 struct rt6_exception *rt6_ex) 1276 1276 { 1277 + struct fib6_info *from; 1277 1278 struct net *net; 1278 1279 1279 1280 if (!bucket || !rt6_ex) 1280 1281 return; 1281 1282 1282 1283 net = dev_net(rt6_ex->rt6i->dst.dev); 1284 + net->ipv6.rt6_stats->fib_rt_cache--; 1285 + 1286 + /* purge completely the exception to allow releasing the held resources: 1287 + * some [sk] cache may keep the dst around for unlimited time 1288 + */ 1289 + from = rcu_dereference_protected(rt6_ex->rt6i->from, 1290 + lockdep_is_held(&rt6_exception_lock)); 1291 + rcu_assign_pointer(rt6_ex->rt6i->from, NULL); 1292 + fib6_info_release(from); 1293 + dst_dev_put(&rt6_ex->rt6i->dst); 1294 + 1283 1295 hlist_del_rcu(&rt6_ex->hlist); 1284 1296 dst_release(&rt6_ex->rt6i->dst); 1285 1297 kfree_rcu(rt6_ex, rcu); 1286 1298 WARN_ON_ONCE(!bucket->depth); 1287 1299 bucket->depth--; 1288 - net->ipv6.rt6_stats->fib_rt_cache--; 1289 1300 } 1290 1301 1291 1302 /* Remove oldest rt6_ex in bucket and free the memory ··· 1610 1599 static void rt6_update_exception_stamp_rt(struct rt6_info *rt) 1611 1600 { 1612 1601 struct rt6_exception_bucket *bucket; 1613 - struct fib6_info *from = rt->from; 1614 1602 struct in6_addr *src_key = NULL; 1615 1603 struct rt6_exception *rt6_ex; 1616 - 1617 - if (!from || 1618 - !(rt->rt6i_flags & RTF_CACHE)) 1619 - return; 1604 + struct fib6_info *from; 1620 1605 1621 1606 rcu_read_lock(); 1607 + from = rcu_dereference(rt->from); 1608 + if (!from || !(rt->rt6i_flags & RTF_CACHE)) 1609 + goto unlock; 1610 + 1622 1611 bucket = rcu_dereference(from->rt6i_exception_bucket); 1623 1612 1624 1613 #ifdef CONFIG_IPV6_SUBTREES ··· 1637 1626 if (rt6_ex) 1638 1627 rt6_ex->stamp = jiffies; 1639 1628 1629 + unlock: 1640 1630 rcu_read_unlock(); 1641 1631 } 1642 1632 ··· 2754 2742 u32 tbid = l3mdev_fib_table(dev) ? : RT_TABLE_MAIN; 2755 2743 const struct in6_addr *gw_addr = &cfg->fc_gateway; 2756 2744 u32 flags = RTF_LOCAL | RTF_ANYCAST | RTF_REJECT; 2745 + struct fib6_info *from; 2757 2746 struct rt6_info *grt; 2758 2747 int err; 2759 2748 2760 2749 err = 0; 2761 2750 grt = ip6_nh_lookup_table(net, cfg, gw_addr, tbid, 0); 2762 2751 if (grt) { 2752 + rcu_read_lock(); 2753 + from = rcu_dereference(grt->from); 2763 2754 if (!grt->dst.error && 2764 2755 /* ignore match if it is the default route */ 2765 - grt->from && !ipv6_addr_any(&grt->from->fib6_dst.addr) && 2756 + from && !ipv6_addr_any(&from->fib6_dst.addr) && 2766 2757 (grt->rt6i_flags & flags || dev != grt->dst.dev)) { 2767 2758 NL_SET_ERR_MSG(extack, 2768 2759 "Nexthop has invalid gateway or device mismatch"); 2769 2760 err = -EINVAL; 2770 2761 } 2762 + rcu_read_unlock(); 2771 2763 2772 2764 ip6_rt_put(grt); 2773 2765 } ··· 4182 4166 cfg->fc_gateway = nla_get_in6_addr(tb[RTA_GATEWAY]); 4183 4167 cfg->fc_flags |= RTF_GATEWAY; 4184 4168 } 4169 + if (tb[RTA_VIA]) { 4170 + NL_SET_ERR_MSG(extack, "IPv6 does not support RTA_VIA attribute"); 4171 + goto errout; 4172 + } 4185 4173 4186 4174 if (tb[RTA_DST]) { 4187 4175 int plen = (rtm->rtm_dst_len + 7) >> 3; ··· 4669 4649 table = rt->fib6_table->tb6_id; 4670 4650 else 4671 4651 table = RT6_TABLE_UNSPEC; 4672 - rtm->rtm_table = table; 4652 + rtm->rtm_table = table < 256 ? table : RT_TABLE_COMPAT; 4673 4653 if (nla_put_u32(skb, RTA_TABLE, table)) 4674 4654 goto nla_put_failure; 4675 4655 ··· 4893 4873 4894 4874 if (tb[RTA_IP_PROTO]) { 4895 4875 err = rtm_getroute_parse_ip_proto(tb[RTA_IP_PROTO], 4896 - &fl6.flowi6_proto, extack); 4876 + &fl6.flowi6_proto, AF_INET6, 4877 + extack); 4897 4878 if (err) 4898 4879 goto errout; 4899 4880 }
+1
net/ipv6/sit.c
··· 1873 1873 1874 1874 err_reg_dev: 1875 1875 ipip6_dev_free(sitn->fb_tunnel_dev); 1876 + free_netdev(sitn->fb_tunnel_dev); 1876 1877 err_alloc_dev: 1877 1878 return err; 1878 1879 }
+7 -5
net/ipv6/udp.c
··· 288 288 int peeked, peeking, off; 289 289 int err; 290 290 int is_udplite = IS_UDPLITE(sk); 291 + struct udp_mib __percpu *mib; 291 292 bool checksum_valid = false; 292 - struct udp_mib *mib; 293 293 int is_udp4; 294 294 295 295 if (flags & MSG_ERRQUEUE) ··· 420 420 */ 421 421 static int __udp6_lib_err_encap_no_sk(struct sk_buff *skb, 422 422 struct inet6_skb_parm *opt, 423 - u8 type, u8 code, int offset, u32 info) 423 + u8 type, u8 code, int offset, __be32 info) 424 424 { 425 425 int i; 426 426 427 427 for (i = 0; i < MAX_IPTUN_ENCAP_OPS; i++) { 428 428 int (*handler)(struct sk_buff *skb, struct inet6_skb_parm *opt, 429 - u8 type, u8 code, int offset, u32 info); 429 + u8 type, u8 code, int offset, __be32 info); 430 + const struct ip6_tnl_encap_ops *encap; 430 431 431 - if (!ip6tun_encaps[i]) 432 + encap = rcu_dereference(ip6tun_encaps[i]); 433 + if (!encap) 432 434 continue; 433 - handler = rcu_dereference(ip6tun_encaps[i]->err_handler); 435 + handler = encap->err_handler; 434 436 if (handler && !handler(skb, opt, type, code, offset, info)) 435 437 return 0; 436 438 }
+1 -1
net/ipv6/xfrm6_tunnel.c
··· 344 344 struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net); 345 345 unsigned int i; 346 346 347 - xfrm_state_flush(net, IPSEC_PROTO_ANY, false); 348 347 xfrm_flush_gc(); 348 + xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true); 349 349 350 350 for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++) 351 351 WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));
+16 -26
net/key/af_key.c
··· 196 196 return 0; 197 197 } 198 198 199 - static int pfkey_broadcast_one(struct sk_buff *skb, struct sk_buff **skb2, 200 - gfp_t allocation, struct sock *sk) 199 + static int pfkey_broadcast_one(struct sk_buff *skb, gfp_t allocation, 200 + struct sock *sk) 201 201 { 202 202 int err = -ENOBUFS; 203 203 204 - sock_hold(sk); 205 - if (*skb2 == NULL) { 206 - if (refcount_read(&skb->users) != 1) { 207 - *skb2 = skb_clone(skb, allocation); 208 - } else { 209 - *skb2 = skb; 210 - refcount_inc(&skb->users); 211 - } 204 + if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf) 205 + return err; 206 + 207 + skb = skb_clone(skb, allocation); 208 + 209 + if (skb) { 210 + skb_set_owner_r(skb, sk); 211 + skb_queue_tail(&sk->sk_receive_queue, skb); 212 + sk->sk_data_ready(sk); 213 + err = 0; 212 214 } 213 - if (*skb2 != NULL) { 214 - if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf) { 215 - skb_set_owner_r(*skb2, sk); 216 - skb_queue_tail(&sk->sk_receive_queue, *skb2); 217 - sk->sk_data_ready(sk); 218 - *skb2 = NULL; 219 - err = 0; 220 - } 221 - } 222 - sock_put(sk); 223 215 return err; 224 216 } 225 217 ··· 226 234 { 227 235 struct netns_pfkey *net_pfkey = net_generic(net, pfkey_net_id); 228 236 struct sock *sk; 229 - struct sk_buff *skb2 = NULL; 230 237 int err = -ESRCH; 231 238 232 239 /* XXX Do we need something like netlink_overrun? I think ··· 244 253 * socket. 245 254 */ 246 255 if (pfk->promisc) 247 - pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk); 256 + pfkey_broadcast_one(skb, GFP_ATOMIC, sk); 248 257 249 258 /* the exact target will be processed later */ 250 259 if (sk == one_sk) ··· 259 268 continue; 260 269 } 261 270 262 - err2 = pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk); 271 + err2 = pfkey_broadcast_one(skb, GFP_ATOMIC, sk); 263 272 264 273 /* Error is cleared after successful sending to at least one 265 274 * registered KM */ ··· 269 278 rcu_read_unlock(); 270 279 271 280 if (one_sk != NULL) 272 - err = pfkey_broadcast_one(skb, &skb2, allocation, one_sk); 281 + err = pfkey_broadcast_one(skb, allocation, one_sk); 273 282 274 - kfree_skb(skb2); 275 283 kfree_skb(skb); 276 284 return err; 277 285 } ··· 1773 1783 if (proto == 0) 1774 1784 return -EINVAL; 1775 1785 1776 - err = xfrm_state_flush(net, proto, true); 1786 + err = xfrm_state_flush(net, proto, true, false); 1777 1787 err2 = unicast_flush_resp(sk, hdr); 1778 1788 if (err || err2) { 1779 1789 if (err == -ESRCH) /* empty table - go quietly */
+5 -1
net/mac80211/cfg.c
··· 941 941 BSS_CHANGED_P2P_PS | 942 942 BSS_CHANGED_TXPOWER; 943 943 int err; 944 + int prev_beacon_int; 944 945 945 946 old = sdata_dereference(sdata->u.ap.beacon, sdata); 946 947 if (old) ··· 964 963 965 964 sdata->needed_rx_chains = sdata->local->rx_chains; 966 965 966 + prev_beacon_int = sdata->vif.bss_conf.beacon_int; 967 967 sdata->vif.bss_conf.beacon_int = params->beacon_interval; 968 968 969 969 if (params->he_cap) ··· 976 974 if (!err) 977 975 ieee80211_vif_copy_chanctx_to_vlans(sdata, false); 978 976 mutex_unlock(&local->mtx); 979 - if (err) 977 + if (err) { 978 + sdata->vif.bss_conf.beacon_int = prev_beacon_int; 980 979 return err; 980 + } 981 981 982 982 /* 983 983 * Apply control port protocol, this allows us to
+2 -2
net/mac80211/main.c
··· 615 615 * We need a bit of data queued to build aggregates properly, so 616 616 * instruct the TCP stack to allow more than a single ms of data 617 617 * to be queued in the stack. The value is a bit-shift of 1 618 - * second, so 8 is ~4ms of queued data. Only affects local TCP 618 + * second, so 7 is ~8ms of queued data. Only affects local TCP 619 619 * sockets. 620 620 * This is the default, anyhow - drivers may need to override it 621 621 * for local reasons (longer buffers, longer completion time, or 622 622 * similar). 623 623 */ 624 - local->hw.tx_sk_pacing_shift = 8; 624 + local->hw.tx_sk_pacing_shift = 7; 625 625 626 626 /* set up some defaults */ 627 627 local->hw.queues = 1;
+6
net/mac80211/mesh.h
··· 70 70 * @dst: mesh path destination mac address 71 71 * @mpp: mesh proxy mac address 72 72 * @rhash: rhashtable list pointer 73 + * @walk_list: linked list containing all mesh_path objects. 73 74 * @gate_list: list pointer for known gates list 74 75 * @sdata: mesh subif 75 76 * @next_hop: mesh neighbor to which frames for this destination will be ··· 106 105 u8 dst[ETH_ALEN]; 107 106 u8 mpp[ETH_ALEN]; /* used for MPP or MAP */ 108 107 struct rhash_head rhash; 108 + struct hlist_node walk_list; 109 109 struct hlist_node gate_list; 110 110 struct ieee80211_sub_if_data *sdata; 111 111 struct sta_info __rcu *next_hop; ··· 135 133 * gate's mpath may or may not be resolved and active. 136 134 * @gates_lock: protects updates to known_gates 137 135 * @rhead: the rhashtable containing struct mesh_paths, keyed by dest addr 136 + * @walk_head: linked list containing all mesh_path objects 137 + * @walk_lock: lock protecting walk_head 138 138 * @entries: number of entries in the table 139 139 */ 140 140 struct mesh_table { 141 141 struct hlist_head known_gates; 142 142 spinlock_t gates_lock; 143 143 struct rhashtable rhead; 144 + struct hlist_head walk_head; 145 + spinlock_t walk_lock; 144 146 atomic_t entries; /* Up to MAX_MESH_NEIGHBOURS */ 145 147 }; 146 148
+47 -110
net/mac80211/mesh_pathtbl.c
··· 59 59 return NULL; 60 60 61 61 INIT_HLIST_HEAD(&newtbl->known_gates); 62 + INIT_HLIST_HEAD(&newtbl->walk_head); 62 63 atomic_set(&newtbl->entries, 0); 63 64 spin_lock_init(&newtbl->gates_lock); 65 + spin_lock_init(&newtbl->walk_lock); 64 66 65 67 return newtbl; 66 68 } ··· 251 249 static struct mesh_path * 252 250 __mesh_path_lookup_by_idx(struct mesh_table *tbl, int idx) 253 251 { 254 - int i = 0, ret; 255 - struct mesh_path *mpath = NULL; 256 - struct rhashtable_iter iter; 252 + int i = 0; 253 + struct mesh_path *mpath; 257 254 258 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC); 259 - if (ret) 260 - return NULL; 261 - 262 - rhashtable_walk_start(&iter); 263 - 264 - while ((mpath = rhashtable_walk_next(&iter))) { 265 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 266 - continue; 267 - if (IS_ERR(mpath)) 268 - break; 255 + hlist_for_each_entry_rcu(mpath, &tbl->walk_head, walk_list) { 269 256 if (i++ == idx) 270 257 break; 271 258 } 272 - rhashtable_walk_stop(&iter); 273 - rhashtable_walk_exit(&iter); 274 259 275 - if (IS_ERR(mpath) || !mpath) 260 + if (!mpath) 276 261 return NULL; 277 262 278 263 if (mpath_expired(mpath)) { ··· 421 432 return ERR_PTR(-ENOMEM); 422 433 423 434 tbl = sdata->u.mesh.mesh_paths; 435 + spin_lock_bh(&tbl->walk_lock); 424 436 do { 425 437 ret = rhashtable_lookup_insert_fast(&tbl->rhead, 426 438 &new_mpath->rhash, ··· 431 441 mpath = rhashtable_lookup_fast(&tbl->rhead, 432 442 dst, 433 443 mesh_rht_params); 434 - 444 + else if (!ret) 445 + hlist_add_head(&new_mpath->walk_list, &tbl->walk_head); 435 446 } while (unlikely(ret == -EEXIST && !mpath)); 447 + spin_unlock_bh(&tbl->walk_lock); 436 448 437 - if (ret && ret != -EEXIST) 438 - return ERR_PTR(ret); 439 - 440 - /* At this point either new_mpath was added, or we found a 441 - * matching entry already in the table; in the latter case 442 - * free the unnecessary new entry. 443 - */ 444 - if (ret == -EEXIST) { 449 + if (ret) { 445 450 kfree(new_mpath); 451 + 452 + if (ret != -EEXIST) 453 + return ERR_PTR(ret); 454 + 446 455 new_mpath = mpath; 447 456 } 457 + 448 458 sdata->u.mesh.mesh_paths_generation++; 449 459 return new_mpath; 450 460 } ··· 470 480 471 481 memcpy(new_mpath->mpp, mpp, ETH_ALEN); 472 482 tbl = sdata->u.mesh.mpp_paths; 483 + 484 + spin_lock_bh(&tbl->walk_lock); 473 485 ret = rhashtable_lookup_insert_fast(&tbl->rhead, 474 486 &new_mpath->rhash, 475 487 mesh_rht_params); 488 + if (!ret) 489 + hlist_add_head_rcu(&new_mpath->walk_list, &tbl->walk_head); 490 + spin_unlock_bh(&tbl->walk_lock); 491 + 492 + if (ret) 493 + kfree(new_mpath); 476 494 477 495 sdata->u.mesh.mpp_paths_generation++; 478 496 return ret; ··· 501 503 struct mesh_table *tbl = sdata->u.mesh.mesh_paths; 502 504 static const u8 bcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}; 503 505 struct mesh_path *mpath; 504 - struct rhashtable_iter iter; 505 - int ret; 506 506 507 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC); 508 - if (ret) 509 - return; 510 - 511 - rhashtable_walk_start(&iter); 512 - 513 - while ((mpath = rhashtable_walk_next(&iter))) { 514 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 515 - continue; 516 - if (IS_ERR(mpath)) 517 - break; 507 + rcu_read_lock(); 508 + hlist_for_each_entry_rcu(mpath, &tbl->walk_head, walk_list) { 518 509 if (rcu_access_pointer(mpath->next_hop) == sta && 519 510 mpath->flags & MESH_PATH_ACTIVE && 520 511 !(mpath->flags & MESH_PATH_FIXED)) { ··· 517 530 WLAN_REASON_MESH_PATH_DEST_UNREACHABLE, bcast); 518 531 } 519 532 } 520 - rhashtable_walk_stop(&iter); 521 - rhashtable_walk_exit(&iter); 533 + rcu_read_unlock(); 522 534 } 523 535 524 536 static void mesh_path_free_rcu(struct mesh_table *tbl, ··· 537 551 538 552 static void __mesh_path_del(struct mesh_table *tbl, struct mesh_path *mpath) 539 553 { 554 + hlist_del_rcu(&mpath->walk_list); 540 555 rhashtable_remove_fast(&tbl->rhead, &mpath->rhash, mesh_rht_params); 541 556 mesh_path_free_rcu(tbl, mpath); 542 557 } ··· 558 571 struct ieee80211_sub_if_data *sdata = sta->sdata; 559 572 struct mesh_table *tbl = sdata->u.mesh.mesh_paths; 560 573 struct mesh_path *mpath; 561 - struct rhashtable_iter iter; 562 - int ret; 574 + struct hlist_node *n; 563 575 564 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC); 565 - if (ret) 566 - return; 567 - 568 - rhashtable_walk_start(&iter); 569 - 570 - while ((mpath = rhashtable_walk_next(&iter))) { 571 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 572 - continue; 573 - if (IS_ERR(mpath)) 574 - break; 575 - 576 + spin_lock_bh(&tbl->walk_lock); 577 + hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) { 576 578 if (rcu_access_pointer(mpath->next_hop) == sta) 577 579 __mesh_path_del(tbl, mpath); 578 580 } 579 - 580 - rhashtable_walk_stop(&iter); 581 - rhashtable_walk_exit(&iter); 581 + spin_unlock_bh(&tbl->walk_lock); 582 582 } 583 583 584 584 static void mpp_flush_by_proxy(struct ieee80211_sub_if_data *sdata, ··· 573 599 { 574 600 struct mesh_table *tbl = sdata->u.mesh.mpp_paths; 575 601 struct mesh_path *mpath; 576 - struct rhashtable_iter iter; 577 - int ret; 602 + struct hlist_node *n; 578 603 579 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC); 580 - if (ret) 581 - return; 582 - 583 - rhashtable_walk_start(&iter); 584 - 585 - while ((mpath = rhashtable_walk_next(&iter))) { 586 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 587 - continue; 588 - if (IS_ERR(mpath)) 589 - break; 590 - 604 + spin_lock_bh(&tbl->walk_lock); 605 + hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) { 591 606 if (ether_addr_equal(mpath->mpp, proxy)) 592 607 __mesh_path_del(tbl, mpath); 593 608 } 594 - 595 - rhashtable_walk_stop(&iter); 596 - rhashtable_walk_exit(&iter); 609 + spin_unlock_bh(&tbl->walk_lock); 597 610 } 598 611 599 612 static void table_flush_by_iface(struct mesh_table *tbl) 600 613 { 601 614 struct mesh_path *mpath; 602 - struct rhashtable_iter iter; 603 - int ret; 615 + struct hlist_node *n; 604 616 605 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_ATOMIC); 606 - if (ret) 607 - return; 608 - 609 - rhashtable_walk_start(&iter); 610 - 611 - while ((mpath = rhashtable_walk_next(&iter))) { 612 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 613 - continue; 614 - if (IS_ERR(mpath)) 615 - break; 617 + spin_lock_bh(&tbl->walk_lock); 618 + hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) { 616 619 __mesh_path_del(tbl, mpath); 617 620 } 618 - 619 - rhashtable_walk_stop(&iter); 620 - rhashtable_walk_exit(&iter); 621 + spin_unlock_bh(&tbl->walk_lock); 621 622 } 622 623 623 624 /** ··· 624 675 { 625 676 struct mesh_path *mpath; 626 677 627 - rcu_read_lock(); 678 + spin_lock_bh(&tbl->walk_lock); 628 679 mpath = rhashtable_lookup_fast(&tbl->rhead, addr, mesh_rht_params); 629 680 if (!mpath) { 630 - rcu_read_unlock(); 681 + spin_unlock_bh(&tbl->walk_lock); 631 682 return -ENXIO; 632 683 } 633 684 634 685 __mesh_path_del(tbl, mpath); 635 - rcu_read_unlock(); 686 + spin_unlock_bh(&tbl->walk_lock); 636 687 return 0; 637 688 } 638 689 ··· 803 854 struct mesh_table *tbl) 804 855 { 805 856 struct mesh_path *mpath; 806 - struct rhashtable_iter iter; 807 - int ret; 857 + struct hlist_node *n; 808 858 809 - ret = rhashtable_walk_init(&tbl->rhead, &iter, GFP_KERNEL); 810 - if (ret) 811 - return; 812 - 813 - rhashtable_walk_start(&iter); 814 - 815 - while ((mpath = rhashtable_walk_next(&iter))) { 816 - if (IS_ERR(mpath) && PTR_ERR(mpath) == -EAGAIN) 817 - continue; 818 - if (IS_ERR(mpath)) 819 - break; 859 + spin_lock_bh(&tbl->walk_lock); 860 + hlist_for_each_entry_safe(mpath, n, &tbl->walk_head, walk_list) { 820 861 if ((!(mpath->flags & MESH_PATH_RESOLVING)) && 821 862 (!(mpath->flags & MESH_PATH_FIXED)) && 822 863 time_after(jiffies, mpath->exp_time + MESH_PATH_EXPIRE)) 823 864 __mesh_path_del(tbl, mpath); 824 865 } 825 866 826 - rhashtable_walk_stop(&iter); 827 - rhashtable_walk_exit(&iter); 866 + spin_unlock_bh(&tbl->walk_lock); 828 867 } 829 868 830 869 void mesh_path_expire(struct ieee80211_sub_if_data *sdata)
+6 -1
net/mac80211/rx.c
··· 2644 2644 struct ieee80211_sub_if_data *sdata = rx->sdata; 2645 2645 struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh; 2646 2646 u16 ac, q, hdrlen; 2647 + int tailroom = 0; 2647 2648 2648 2649 hdr = (struct ieee80211_hdr *) skb->data; 2649 2650 hdrlen = ieee80211_hdrlen(hdr->frame_control); ··· 2733 2732 if (!ifmsh->mshcfg.dot11MeshForwarding) 2734 2733 goto out; 2735 2734 2735 + if (sdata->crypto_tx_tailroom_needed_cnt) 2736 + tailroom = IEEE80211_ENCRYPT_TAILROOM; 2737 + 2736 2738 fwd_skb = skb_copy_expand(skb, local->tx_headroom + 2737 - sdata->encrypt_headroom, 0, GFP_ATOMIC); 2739 + sdata->encrypt_headroom, 2740 + tailroom, GFP_ATOMIC); 2738 2741 if (!fwd_skb) 2739 2742 goto out; 2740 2743
+3
net/mpls/af_mpls.c
··· 1838 1838 goto errout; 1839 1839 break; 1840 1840 } 1841 + case RTA_GATEWAY: 1842 + NL_SET_ERR_MSG(extack, "MPLS does not support RTA_GATEWAY attribute"); 1843 + goto errout; 1841 1844 case RTA_VIA: 1842 1845 { 1843 1846 if (nla_get_via(nla, &cfg->rc_via_alen,
+2 -1
net/netfilter/ipvs/ip_vs_ctl.c
··· 896 896 { 897 897 struct ip_vs_dest *dest; 898 898 unsigned int atype, i; 899 - int ret = 0; 900 899 901 900 EnterFunction(2); 902 901 903 902 #ifdef CONFIG_IP_VS_IPV6 904 903 if (udest->af == AF_INET6) { 904 + int ret; 905 + 905 906 atype = ipv6_addr_type(&udest->addr.in6); 906 907 if ((!(atype & IPV6_ADDR_UNICAST) || 907 908 atype & IPV6_ADDR_LINKLOCAL) &&
+3
net/netfilter/nf_tables_api.c
··· 313 313 int err; 314 314 315 315 list_for_each_entry(rule, &ctx->chain->rules, list) { 316 + if (!nft_is_active_next(ctx->net, rule)) 317 + continue; 318 + 316 319 err = nft_delrule(ctx, rule); 317 320 if (err < 0) 318 321 return err;
+2 -1
net/netlabel/netlabel_kapi.c
··· 903 903 (state == 0 && (byte & bitmask) == 0)) 904 904 return bit_spot; 905 905 906 - bit_spot++; 906 + if (++bit_spot >= bitmap_len) 907 + return -1; 907 908 bitmask >>= 1; 908 909 if (bitmask == 0) { 909 910 byte = bitmap[++byte_offset];
+20
net/nfc/llcp_commands.c
··· 419 419 sock->service_name, 420 420 sock->service_name_len, 421 421 &service_name_tlv_length); 422 + if (!service_name_tlv) { 423 + err = -ENOMEM; 424 + goto error_tlv; 425 + } 422 426 size += service_name_tlv_length; 423 427 } 424 428 ··· 433 429 434 430 miux_tlv = nfc_llcp_build_tlv(LLCP_TLV_MIUX, (u8 *)&miux, 0, 435 431 &miux_tlv_length); 432 + if (!miux_tlv) { 433 + err = -ENOMEM; 434 + goto error_tlv; 435 + } 436 436 size += miux_tlv_length; 437 437 438 438 rw_tlv = nfc_llcp_build_tlv(LLCP_TLV_RW, &rw, 0, &rw_tlv_length); 439 + if (!rw_tlv) { 440 + err = -ENOMEM; 441 + goto error_tlv; 442 + } 439 443 size += rw_tlv_length; 440 444 441 445 pr_debug("SKB size %d SN length %zu\n", size, sock->service_name_len); ··· 496 484 497 485 miux_tlv = nfc_llcp_build_tlv(LLCP_TLV_MIUX, (u8 *)&miux, 0, 498 486 &miux_tlv_length); 487 + if (!miux_tlv) { 488 + err = -ENOMEM; 489 + goto error_tlv; 490 + } 499 491 size += miux_tlv_length; 500 492 501 493 rw_tlv = nfc_llcp_build_tlv(LLCP_TLV_RW, &rw, 0, &rw_tlv_length); 494 + if (!rw_tlv) { 495 + err = -ENOMEM; 496 + goto error_tlv; 497 + } 502 498 size += rw_tlv_length; 503 499 504 500 skb = llcp_allocate_pdu(sock, LLCP_PDU_CC, size);
+20 -4
net/nfc/llcp_core.c
··· 532 532 533 533 static int nfc_llcp_build_gb(struct nfc_llcp_local *local) 534 534 { 535 - u8 *gb_cur, *version_tlv, version, version_length; 536 - u8 *lto_tlv, lto_length; 537 - u8 *wks_tlv, wks_length; 538 - u8 *miux_tlv, miux_length; 535 + u8 *gb_cur, version, version_length; 536 + u8 lto_length, wks_length, miux_length; 537 + u8 *version_tlv = NULL, *lto_tlv = NULL, 538 + *wks_tlv = NULL, *miux_tlv = NULL; 539 539 __be16 wks = cpu_to_be16(local->local_wks); 540 540 u8 gb_len = 0; 541 541 int ret = 0; ··· 543 543 version = LLCP_VERSION_11; 544 544 version_tlv = nfc_llcp_build_tlv(LLCP_TLV_VERSION, &version, 545 545 1, &version_length); 546 + if (!version_tlv) { 547 + ret = -ENOMEM; 548 + goto out; 549 + } 546 550 gb_len += version_length; 547 551 548 552 lto_tlv = nfc_llcp_build_tlv(LLCP_TLV_LTO, &local->lto, 1, &lto_length); 553 + if (!lto_tlv) { 554 + ret = -ENOMEM; 555 + goto out; 556 + } 549 557 gb_len += lto_length; 550 558 551 559 pr_debug("Local wks 0x%lx\n", local->local_wks); 552 560 wks_tlv = nfc_llcp_build_tlv(LLCP_TLV_WKS, (u8 *)&wks, 2, &wks_length); 561 + if (!wks_tlv) { 562 + ret = -ENOMEM; 563 + goto out; 564 + } 553 565 gb_len += wks_length; 554 566 555 567 miux_tlv = nfc_llcp_build_tlv(LLCP_TLV_MIUX, (u8 *)&local->miux, 0, 556 568 &miux_length); 569 + if (!miux_tlv) { 570 + ret = -ENOMEM; 571 + goto out; 572 + } 557 573 gb_len += miux_length; 558 574 559 575 gb_len += ARRAY_SIZE(llcp_magic);
+16 -16
net/phonet/pep.c
··· 132 132 ph->utid = 0; 133 133 ph->message_id = id; 134 134 ph->pipe_handle = pn->pipe_handle; 135 - ph->data[0] = code; 135 + ph->error_code = code; 136 136 return pn_skb_send(sk, skb, NULL); 137 137 } 138 138 ··· 153 153 ph->utid = id; /* whatever */ 154 154 ph->message_id = id; 155 155 ph->pipe_handle = pn->pipe_handle; 156 - ph->data[0] = code; 156 + ph->error_code = code; 157 157 return pn_skb_send(sk, skb, NULL); 158 158 } 159 159 ··· 208 208 struct pnpipehdr *ph; 209 209 struct sockaddr_pn dst; 210 210 u8 data[4] = { 211 - oph->data[0], /* PEP type */ 211 + oph->pep_type, /* PEP type */ 212 212 code, /* error code, at an unusual offset */ 213 213 PAD, PAD, 214 214 }; ··· 221 221 ph->utid = oph->utid; 222 222 ph->message_id = PNS_PEP_CTRL_RESP; 223 223 ph->pipe_handle = oph->pipe_handle; 224 - ph->data[0] = oph->data[1]; /* CTRL id */ 224 + ph->data0 = oph->data[0]; /* CTRL id */ 225 225 226 226 pn_skb_get_src_sockaddr(oskb, &dst); 227 227 return pn_skb_send(sk, skb, &dst); ··· 272 272 return -EINVAL; 273 273 274 274 hdr = pnp_hdr(skb); 275 - if (hdr->data[0] != PN_PEP_TYPE_COMMON) { 275 + if (hdr->pep_type != PN_PEP_TYPE_COMMON) { 276 276 net_dbg_ratelimited("Phonet unknown PEP type: %u\n", 277 - (unsigned int)hdr->data[0]); 277 + (unsigned int)hdr->pep_type); 278 278 return -EOPNOTSUPP; 279 279 } 280 280 281 - switch (hdr->data[1]) { 281 + switch (hdr->data[0]) { 282 282 case PN_PEP_IND_FLOW_CONTROL: 283 283 switch (pn->tx_fc) { 284 284 case PN_LEGACY_FLOW_CONTROL: 285 - switch (hdr->data[4]) { 285 + switch (hdr->data[3]) { 286 286 case PEP_IND_BUSY: 287 287 atomic_set(&pn->tx_credits, 0); 288 288 break; ··· 292 292 } 293 293 break; 294 294 case PN_ONE_CREDIT_FLOW_CONTROL: 295 - if (hdr->data[4] == PEP_IND_READY) 295 + if (hdr->data[3] == PEP_IND_READY) 296 296 atomic_set(&pn->tx_credits, wake = 1); 297 297 break; 298 298 } ··· 301 301 case PN_PEP_IND_ID_MCFC_GRANT_CREDITS: 302 302 if (pn->tx_fc != PN_MULTI_CREDIT_FLOW_CONTROL) 303 303 break; 304 - 
atomic_add(wake = hdr->data[4], &pn->tx_credits); 304 + atomic_add(wake = hdr->data[3], &pn->tx_credits); 305 305 break; 306 306 307 307 default: 308 308 net_dbg_ratelimited("Phonet unknown PEP indication: %u\n", 309 - (unsigned int)hdr->data[1]); 309 + (unsigned int)hdr->data[0]); 310 310 return -EOPNOTSUPP; 311 311 } 312 312 if (wake) ··· 318 318 { 319 319 struct pep_sock *pn = pep_sk(sk); 320 320 struct pnpipehdr *hdr = pnp_hdr(skb); 321 - u8 n_sb = hdr->data[0]; 321 + u8 n_sb = hdr->data0; 322 322 323 323 pn->rx_fc = pn->tx_fc = PN_LEGACY_FLOW_CONTROL; 324 324 __skb_pull(skb, sizeof(*hdr)); ··· 506 506 return -ECONNREFUSED; 507 507 508 508 /* Parse sub-blocks */ 509 - n_sb = hdr->data[4]; 509 + n_sb = hdr->data[3]; 510 510 while (n_sb > 0) { 511 511 u8 type, buf[6], len = sizeof(buf); 512 512 const u8 *data = pep_get_sb(skb, &type, &len, buf); ··· 739 739 ph->utid = 0; 740 740 ph->message_id = PNS_PIPE_REMOVE_REQ; 741 741 ph->pipe_handle = pn->pipe_handle; 742 - ph->data[0] = PAD; 742 + ph->data0 = PAD; 743 743 return pn_skb_send(sk, skb, NULL); 744 744 } 745 745 ··· 817 817 peer_type = hdr->other_pep_type << 8; 818 818 819 819 /* Parse sub-blocks (options) */ 820 - n_sb = hdr->data[4]; 820 + n_sb = hdr->data[3]; 821 821 while (n_sb > 0) { 822 822 u8 type, buf[1], len = sizeof(buf); 823 823 const u8 *data = pep_get_sb(skb, &type, &len, buf); ··· 1109 1109 ph->utid = 0; 1110 1110 if (pn->aligned) { 1111 1111 ph->message_id = PNS_PIPE_ALIGNED_DATA; 1112 - ph->data[0] = 0; /* padding */ 1112 + ph->data0 = 0; /* padding */ 1113 1113 } else 1114 1114 ph->message_id = PNS_PIPE_DATA; 1115 1115 ph->pipe_handle = pn->pipe_handle;
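The pep.c hunk above replaces raw `ph->data[n]` indexing with named fields (`error_code`, `pep_type`, `data0`) and shifts every remaining index down by one. The underlying idea: give the first payload byte named aliases and let the variable part start after it, so old out-of-bounds `data[4]` accesses become in-bounds `data[3]`. A simplified stand-in for `pnpipehdr` (field names follow the diff; the struct itself is illustrative):

```c
#include <stdint.h>
#include <string.h>

/* Simplified sketch of the reshuffled header: the first payload byte
 * gets role-specific names in a union, and the variable payload is its
 * own array, so every old data[n] access becomes data[n-1]. */
struct pipehdr {
    uint8_t utid;
    uint8_t message_id;
    uint8_t pipe_handle;
    union {
        uint8_t data0;       /* first payload byte ...            */
        uint8_t error_code;  /* ... under its role-specific names */
        uint8_t pep_type;
    };
    uint8_t data[4];         /* remaining payload: old data[1..4] */
};

/* Read the credit count that the old code fetched as hdr->data[4]. */
unsigned credits_from(const uint8_t *wire)
{
    struct pipehdr h;
    memcpy(&h, wire, sizeof(h));
    return h.data[3];
}
```

Because every member is a `uint8_t`, the struct has no padding and overlays an 8-byte wire buffer exactly.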
+1 -2
net/sched/act_ipt.c
··· 199 199 err2: 200 200 kfree(tname); 201 201 err1: 202 - if (ret == ACT_P_CREATED) 203 - tcf_idr_release(*a, bind); 202 + tcf_idr_release(*a, bind); 204 203 return err; 205 204 } 206 205
+1 -2
net/sched/act_skbedit.c
··· 189 189 190 190 params_new = kzalloc(sizeof(*params_new), GFP_KERNEL); 191 191 if (unlikely(!params_new)) { 192 - if (ret == ACT_P_CREATED) 193 - tcf_idr_release(*a, bind); 192 + tcf_idr_release(*a, bind); 194 193 return -ENOMEM; 195 194 } 196 195
+2 -1
net/sched/act_tunnel_key.c
··· 377 377 return ret; 378 378 379 379 release_tun_meta: 380 - dst_release(&metadata->dst); 380 + if (metadata) 381 + dst_release(&metadata->dst); 381 382 382 383 err_out: 383 384 if (exists)
+7 -3
net/sched/sch_netem.c
··· 447 447 int nb = 0; 448 448 int count = 1; 449 449 int rc = NET_XMIT_SUCCESS; 450 + int rc_drop = NET_XMIT_DROP; 450 451 451 452 /* Do not fool qdisc_drop_all() */ 452 453 skb->prev = NULL; ··· 487 486 q->duplicate = 0; 488 487 rootq->enqueue(skb2, rootq, to_free); 489 488 q->duplicate = dupsave; 489 + rc_drop = NET_XMIT_SUCCESS; 490 490 } 491 491 492 492 /* ··· 500 498 if (skb_is_gso(skb)) { 501 499 segs = netem_segment(skb, sch, to_free); 502 500 if (!segs) 503 - return NET_XMIT_DROP; 501 + return rc_drop; 504 502 } else { 505 503 segs = skb; 506 504 } ··· 523 521 1<<(prandom_u32() % 8); 524 522 } 525 523 526 - if (unlikely(sch->q.qlen >= sch->limit)) 527 - return qdisc_drop_all(skb, sch, to_free); 524 + if (unlikely(sch->q.qlen >= sch->limit)) { 525 + qdisc_drop_all(skb, sch, to_free); 526 + return rc_drop; 527 + } 528 528 529 529 qdisc_qstats_backlog_inc(sch, skb); 530 530
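The netem hunk above introduces `rc_drop` so that when the packet was duplicated to the root qdisc, dropping the original is reported as `NET_XMIT_SUCCESS`: one copy did make it into a queue, and reporting a drop would double-count. A hedged sketch of just that control flow (the two flags stand in for the duplication and limit checks in `netem_enqueue()`):

```c
#include <stdbool.h>

enum { NET_XMIT_SUCCESS = 0, NET_XMIT_DROP = 1 };

/* Decide what verdict to report when the original skb is dropped:
 * if a duplicate was already enqueued, the drop of the original must
 * still be reported as success, since a copy survived. */
int enqueue_result(bool duplicated, bool over_limit)
{
    int rc_drop = NET_XMIT_DROP;     /* default verdict for a drop */

    if (duplicated)
        rc_drop = NET_XMIT_SUCCESS;  /* a copy already went through */

    if (over_limit)
        return rc_drop;              /* drop original, report honestly */

    return NET_XMIT_SUCCESS;
}
```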
+1 -1
net/sctp/chunk.c
··· 192 192 if (unlikely(!max_data)) { 193 193 max_data = sctp_min_frag_point(sctp_sk(asoc->base.sk), 194 194 sctp_datachk_len(&asoc->stream)); 195 - pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%Zu)", 195 + pr_warn_ratelimited("%s: asoc:%p frag_point is zero, forcing max_data to default minimum (%zu)", 196 196 __func__, asoc, max_data); 197 197 } 198 198
+2 -1
net/sctp/transport.c
··· 207 207 208 208 /* When a data chunk is sent, reset the heartbeat interval. */ 209 209 expires = jiffies + sctp_transport_timeout(transport); 210 - if (time_before(transport->hb_timer.expires, expires) && 210 + if ((time_before(transport->hb_timer.expires, expires) || 211 + !timer_pending(&transport->hb_timer)) && 211 212 !mod_timer(&transport->hb_timer, 212 213 expires + prandom_u32_max(transport->rto))) 213 214 sctp_transport_hold(transport);
+3 -3
net/smc/smc.h
··· 113 113 } __aligned(8); 114 114 115 115 enum smc_urg_state { 116 - SMC_URG_VALID, /* data present */ 117 - SMC_URG_NOTYET, /* data pending */ 118 - SMC_URG_READ /* data was already read */ 116 + SMC_URG_VALID = 1, /* data present */ 117 + SMC_URG_NOTYET = 2, /* data pending */ 118 + SMC_URG_READ = 3, /* data was already read */ 119 119 }; 120 120 121 121 struct smc_connection {
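The smc.h hunk numbers the urgent-data states from 1 because connection structs are zero-initialized (kzalloc), so value 0 must not collide with a meaningful state like "urgent data present". A small sketch of the failure mode the renumbering avoids (struct and names are illustrative):

```c
#include <string.h>

/* With the old implicit numbering, the first enumerator was 0, making a
 * freshly zeroed struct look like it already held valid urgent data. */
enum urg_state {
    URG_NONE   = 0,  /* implicit state of a freshly zeroed struct */
    URG_VALID  = 1,  /* data present   */
    URG_NOTYET = 2,  /* data pending   */
    URG_READ   = 3,  /* already read   */
};

struct conn {
    enum urg_state urg_state;
};

int fresh_conn_has_urgent_data(void)
{
    struct conn c;
    memset(&c, 0, sizeof(c));            /* models kzalloc() */
    return c.urg_state == URG_VALID;     /* true under the old numbering */
}
```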
+1
net/socket.c
··· 577 577 if (inode) 578 578 inode_lock(inode); 579 579 sock->ops->release(sock); 580 + sock->sk = NULL; 580 581 if (inode) 581 582 inode_unlock(inode); 582 583 sock->ops = NULL;
+11 -6
net/tipc/socket.c
··· 379 379 380 380 #define tipc_wait_for_cond(sock_, timeo_, condition_) \ 381 381 ({ \ 382 + DEFINE_WAIT_FUNC(wait_, woken_wake_function); \ 382 383 struct sock *sk_; \ 383 384 int rc_; \ 384 385 \ 385 386 while ((rc_ = !(condition_))) { \ 386 - DEFINE_WAIT_FUNC(wait_, woken_wake_function); \ 387 + /* coupled with smp_wmb() in tipc_sk_proto_rcv() */ \ 388 + smp_rmb(); \ 387 389 sk_ = (sock_)->sk; \ 388 390 rc_ = tipc_sk_sock_err((sock_), timeo_); \ 389 391 if (rc_) \ 390 392 break; \ 391 - prepare_to_wait(sk_sleep(sk_), &wait_, TASK_INTERRUPTIBLE); \ 393 + add_wait_queue(sk_sleep(sk_), &wait_); \ 392 394 release_sock(sk_); \ 393 395 *(timeo_) = wait_woken(&wait_, TASK_INTERRUPTIBLE, *(timeo_)); \ 394 396 sched_annotate_sleep(); \ ··· 1679 1677 static int tipc_wait_for_rcvmsg(struct socket *sock, long *timeop) 1680 1678 { 1681 1679 struct sock *sk = sock->sk; 1682 - DEFINE_WAIT(wait); 1680 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 1683 1681 long timeo = *timeop; 1684 1682 int err = sock_error(sk); 1685 1683 ··· 1687 1685 return err; 1688 1686 1689 1687 for (;;) { 1690 - prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE); 1691 1688 if (timeo && skb_queue_empty(&sk->sk_receive_queue)) { 1692 1689 if (sk->sk_shutdown & RCV_SHUTDOWN) { 1693 1690 err = -ENOTCONN; 1694 1691 break; 1695 1692 } 1693 + add_wait_queue(sk_sleep(sk), &wait); 1696 1694 release_sock(sk); 1697 - timeo = schedule_timeout(timeo); 1695 + timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo); 1696 + sched_annotate_sleep(); 1698 1697 lock_sock(sk); 1698 + remove_wait_queue(sk_sleep(sk), &wait); 1699 1699 } 1700 1700 err = 0; 1701 1701 if (!skb_queue_empty(&sk->sk_receive_queue)) ··· 1713 1709 if (err) 1714 1710 break; 1715 1711 } 1716 - finish_wait(sk_sleep(sk), &wait); 1717 1712 *timeop = timeo; 1718 1713 return err; 1719 1714 } ··· 1985 1982 return; 1986 1983 case SOCK_WAKEUP: 1987 1984 tipc_dest_del(&tsk->cong_links, msg_orignode(hdr), 0); 1985 + /* coupled with smp_rmb() in 
tipc_wait_for_cond() */ 1986 + smp_wmb(); 1988 1987 tsk->cong_link_cnt--; 1989 1988 wakeup = true; 1990 1989 break;
+34 -25
net/unix/af_unix.c
··· 890 890 addr->hash ^= sk->sk_type; 891 891 892 892 __unix_remove_socket(sk); 893 - u->addr = addr; 893 + smp_store_release(&u->addr, addr); 894 894 __unix_insert_socket(&unix_socket_table[addr->hash], sk); 895 895 spin_unlock(&unix_table_lock); 896 896 err = 0; ··· 1060 1060 1061 1061 err = 0; 1062 1062 __unix_remove_socket(sk); 1063 - u->addr = addr; 1063 + smp_store_release(&u->addr, addr); 1064 1064 __unix_insert_socket(list, sk); 1065 1065 1066 1066 out_unlock: ··· 1331 1331 RCU_INIT_POINTER(newsk->sk_wq, &newu->peer_wq); 1332 1332 otheru = unix_sk(other); 1333 1333 1334 - /* copy address information from listening to new sock*/ 1335 - if (otheru->addr) { 1336 - refcount_inc(&otheru->addr->refcnt); 1337 - newu->addr = otheru->addr; 1338 - } 1334 + /* copy address information from listening to new sock 1335 + * 1336 + * The contents of *(otheru->addr) and otheru->path 1337 + * are seen fully set up here, since we have found 1338 + * otheru in hash under unix_table_lock. Insertion 1339 + * into the hash chain we'd found it in had been done 1340 + * in an earlier critical area protected by unix_table_lock, 1341 + * the same one where we'd set *(otheru->addr) contents, 1342 + * as well as otheru->path and otheru->addr itself. 1343 + * 1344 + * Using smp_store_release() here to set newu->addr 1345 + * is enough to make those stores, as well as stores 1346 + * to newu->path visible to anyone who gets newu->addr 1347 + * by smp_load_acquire(). IOW, the same warranties 1348 + * as for unix_sock instances bound in unix_bind() or 1349 + * in unix_autobind(). 
1350 + */ 1339 1351 if (otheru->path.dentry) { 1340 1352 path_get(&otheru->path); 1341 1353 newu->path = otheru->path; 1342 1354 } 1355 + refcount_inc(&otheru->addr->refcnt); 1356 + smp_store_release(&newu->addr, otheru->addr); 1343 1357 1344 1358 /* Set credentials */ 1345 1359 copy_peercred(sk, other); ··· 1467 1453 static int unix_getname(struct socket *sock, struct sockaddr *uaddr, int peer) 1468 1454 { 1469 1455 struct sock *sk = sock->sk; 1470 - struct unix_sock *u; 1456 + struct unix_address *addr; 1471 1457 DECLARE_SOCKADDR(struct sockaddr_un *, sunaddr, uaddr); 1472 1458 int err = 0; 1473 1459 ··· 1482 1468 sock_hold(sk); 1483 1469 } 1484 1470 1485 - u = unix_sk(sk); 1486 - unix_state_lock(sk); 1487 - if (!u->addr) { 1471 + addr = smp_load_acquire(&unix_sk(sk)->addr); 1472 + if (!addr) { 1488 1473 sunaddr->sun_family = AF_UNIX; 1489 1474 sunaddr->sun_path[0] = 0; 1490 1475 err = sizeof(short); 1491 1476 } else { 1492 - struct unix_address *addr = u->addr; 1493 - 1494 1477 err = addr->len; 1495 1478 memcpy(sunaddr, addr->name, addr->len); 1496 1479 } 1497 - unix_state_unlock(sk); 1498 1480 sock_put(sk); 1499 1481 out: 1500 1482 return err; ··· 2083 2073 2084 2074 static void unix_copy_addr(struct msghdr *msg, struct sock *sk) 2085 2075 { 2086 - struct unix_sock *u = unix_sk(sk); 2076 + struct unix_address *addr = smp_load_acquire(&unix_sk(sk)->addr); 2087 2077 2088 - if (u->addr) { 2089 - msg->msg_namelen = u->addr->len; 2090 - memcpy(msg->msg_name, u->addr->name, u->addr->len); 2078 + if (addr) { 2079 + msg->msg_namelen = addr->len; 2080 + memcpy(msg->msg_name, addr->name, addr->len); 2091 2081 } 2092 2082 } 2093 2083 ··· 2591 2581 if (!ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) 2592 2582 return -EPERM; 2593 2583 2594 - unix_state_lock(sk); 2595 - path = unix_sk(sk)->path; 2596 - if (!path.dentry) { 2597 - unix_state_unlock(sk); 2584 + if (!smp_load_acquire(&unix_sk(sk)->addr)) 2598 2585 return -ENOENT; 2599 - } 2586 + 2587 + path = 
unix_sk(sk)->path; 2588 + if (!path.dentry) 2589 + return -ENOENT; 2600 2590 2601 2591 path_get(&path); 2602 - unix_state_unlock(sk); 2603 2592 2604 2593 fd = get_unused_fd_flags(O_CLOEXEC); 2605 2594 if (fd < 0) ··· 2839 2830 (s->sk_state == TCP_ESTABLISHED ? SS_CONNECTING : SS_DISCONNECTING), 2840 2831 sock_i_ino(s)); 2841 2832 2842 - if (u->addr) { 2833 + if (u->addr) { // under unix_table_lock here 2843 2834 int i, len; 2844 2835 seq_putc(seq, ' '); 2845 2836
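The af_unix hunk pairs `smp_store_release()` on `u->addr` with `smp_load_acquire()` at the readers: the writer fully initializes the address, then publishes the pointer with release semantics, and any reader that load-acquires a non-NULL pointer is guaranteed to see the initialized contents without taking a lock. A hedged userspace sketch of the same publication pattern using C11 atomics (names are illustrative):

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

struct addr {
    char name[16];
    int len;
};

/* The published pointer; NULL means "not bound yet", like !u->addr. */
static _Atomic(struct addr *) published;

void publish_addr(void)
{
    struct addr *a = malloc(sizeof(*a));
    strcpy(a->name, "socketname");
    a->len = (int)strlen(a->name);
    /* Publish only after every field is written: the release store
     * orders the initialization before the pointer becomes visible. */
    atomic_store_explicit(&published, a, memory_order_release);
}

int peer_name_len(void)
{
    /* The acquire load makes the writer's initialization visible. */
    struct addr *a = atomic_load_explicit(&published, memory_order_acquire);
    return a ? a->len : -1;
}
```

This is the same "warranty" the long comment in the hunk describes for `newu->addr`: readers either see NULL or a fully set-up object.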
+2 -1
net/unix/diag.c
··· 10 10 11 11 static int sk_diag_dump_name(struct sock *sk, struct sk_buff *nlskb) 12 12 { 13 - struct unix_address *addr = unix_sk(sk)->addr; 13 + /* might or might not have unix_table_lock */ 14 + struct unix_address *addr = smp_load_acquire(&unix_sk(sk)->addr); 14 15 15 16 if (!addr) 16 17 return 0;
+8 -5
net/x25/af_x25.c
··· 679 679 struct sockaddr_x25 *addr = (struct sockaddr_x25 *)uaddr; 680 680 int len, i, rc = 0; 681 681 682 - if (!sock_flag(sk, SOCK_ZAPPED) || 683 - addr_len != sizeof(struct sockaddr_x25) || 682 + if (addr_len != sizeof(struct sockaddr_x25) || 684 683 addr->sx25_family != AF_X25) { 685 684 rc = -EINVAL; 686 685 goto out; ··· 698 699 } 699 700 700 701 lock_sock(sk); 701 - x25_sk(sk)->source_addr = addr->sx25_addr; 702 - x25_insert_socket(sk); 703 - sock_reset_flag(sk, SOCK_ZAPPED); 702 + if (sock_flag(sk, SOCK_ZAPPED)) { 703 + x25_sk(sk)->source_addr = addr->sx25_addr; 704 + x25_insert_socket(sk); 705 + sock_reset_flag(sk, SOCK_ZAPPED); 706 + } else { 707 + rc = -EINVAL; 708 + } 704 709 release_sock(sk); 705 710 SOCK_DEBUG(sk, "x25_bind: socket is bound\n"); 706 711 out:
+6 -5
net/xdp/xdp_umem.c
··· 125 125 return 0; 126 126 127 127 err_unreg_umem: 128 - xdp_clear_umem_at_qid(dev, queue_id); 129 128 if (!force_zc) 130 129 err = 0; /* fallback to copy mode */ 130 + if (err) 131 + xdp_clear_umem_at_qid(dev, queue_id); 131 132 out_rtnl_unlock: 132 133 rtnl_unlock(); 133 134 return err; ··· 260 259 if (!umem->pgs) 261 260 return -ENOMEM; 262 261 263 - down_write(&current->mm->mmap_sem); 264 - npgs = get_user_pages(umem->address, umem->npgs, 265 - gup_flags, &umem->pgs[0], NULL); 266 - up_write(&current->mm->mmap_sem); 262 + down_read(&current->mm->mmap_sem); 263 + npgs = get_user_pages_longterm(umem->address, umem->npgs, 264 + gup_flags, &umem->pgs[0], NULL); 265 + up_read(&current->mm->mmap_sem); 267 266 268 267 if (npgs != umem->npgs) { 269 268 if (npgs >= 0) {
+19 -1
net/xdp/xsk.c
··· 366 366 367 367 xskq_destroy(xs->rx); 368 368 xskq_destroy(xs->tx); 369 - xdp_put_umem(xs->umem); 370 369 371 370 sock_orphan(sk); 372 371 sock->sk = NULL; ··· 668 669 if (!umem) 669 670 return -EINVAL; 670 671 672 + /* Matches the smp_wmb() in XDP_UMEM_REG */ 673 + smp_rmb(); 671 674 if (offset == XDP_UMEM_PGOFF_FILL_RING) 672 675 q = READ_ONCE(umem->fq); 673 676 else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING) ··· 679 678 if (!q) 680 679 return -EINVAL; 681 680 681 + /* Matches the smp_wmb() in xsk_init_queue */ 682 + smp_rmb(); 682 683 qpg = virt_to_head_page(q->ring); 683 684 if (size > (PAGE_SIZE << compound_order(qpg))) 684 685 return -EINVAL; ··· 717 714 .sendpage = sock_no_sendpage, 718 715 }; 719 716 717 + static void xsk_destruct(struct sock *sk) 718 + { 719 + struct xdp_sock *xs = xdp_sk(sk); 720 + 721 + if (!sock_flag(sk, SOCK_DEAD)) 722 + return; 723 + 724 + xdp_put_umem(xs->umem); 725 + 726 + sk_refcnt_debug_dec(sk); 727 + } 728 + 720 729 static int xsk_create(struct net *net, struct socket *sock, int protocol, 721 730 int kern) 722 731 { ··· 754 739 sock_init_data(sock, sk); 755 740 756 741 sk->sk_family = PF_XDP; 742 + 743 + sk->sk_destruct = xsk_destruct; 744 + sk_refcnt_debug_inc(sk); 757 745 758 746 sock_set_flag(sk, SOCK_RCU_FREE); 759 747
+2 -2
net/xfrm/xfrm_interface.c
··· 76 76 int ifindex; 77 77 struct xfrm_if *xi; 78 78 79 - if (!skb->dev) 79 + if (!secpath_exists(skb) || !skb->dev) 80 80 return NULL; 81 81 82 - xfrmn = net_generic(dev_net(skb->dev), xfrmi_net_id); 82 + xfrmn = net_generic(xs_net(xfrm_input_state(skb)), xfrmi_net_id); 83 83 ifindex = skb->dev->ifindex; 84 84 85 85 for_each_xfrmi_rcu(xfrmn->xfrmi[0], xi) {
+3 -1
net/xfrm/xfrm_policy.c
··· 3314 3314 3315 3315 if (ifcb) { 3316 3316 xi = ifcb->decode_session(skb); 3317 - if (xi) 3317 + if (xi) { 3318 3318 if_id = xi->p.if_id; 3319 + net = xi->net; 3320 + } 3319 3321 } 3320 3322 rcu_read_unlock(); 3321 3323
+19 -11
net/xfrm/xfrm_state.c
··· 432 432 } 433 433 EXPORT_SYMBOL(xfrm_state_free); 434 434 435 - static void xfrm_state_gc_destroy(struct xfrm_state *x) 435 + static void ___xfrm_state_destroy(struct xfrm_state *x) 436 436 { 437 437 tasklet_hrtimer_cancel(&x->mtimer); 438 438 del_timer_sync(&x->rtimer); ··· 474 474 synchronize_rcu(); 475 475 476 476 hlist_for_each_entry_safe(x, tmp, &gc_list, gclist) 477 - xfrm_state_gc_destroy(x); 477 + ___xfrm_state_destroy(x); 478 478 } 479 479 480 480 static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) ··· 598 598 } 599 599 EXPORT_SYMBOL(xfrm_state_alloc); 600 600 601 - void __xfrm_state_destroy(struct xfrm_state *x) 601 + void __xfrm_state_destroy(struct xfrm_state *x, bool sync) 602 602 { 603 603 WARN_ON(x->km.state != XFRM_STATE_DEAD); 604 604 605 - spin_lock_bh(&xfrm_state_gc_lock); 606 - hlist_add_head(&x->gclist, &xfrm_state_gc_list); 607 - spin_unlock_bh(&xfrm_state_gc_lock); 608 - schedule_work(&xfrm_state_gc_work); 605 + if (sync) { 606 + synchronize_rcu(); 607 + ___xfrm_state_destroy(x); 608 + } else { 609 + spin_lock_bh(&xfrm_state_gc_lock); 610 + hlist_add_head(&x->gclist, &xfrm_state_gc_list); 611 + spin_unlock_bh(&xfrm_state_gc_lock); 612 + schedule_work(&xfrm_state_gc_work); 613 + } 609 614 } 610 615 EXPORT_SYMBOL(__xfrm_state_destroy); 611 616 ··· 713 708 } 714 709 #endif 715 710 716 - int xfrm_state_flush(struct net *net, u8 proto, bool task_valid) 711 + int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync) 717 712 { 718 713 int i, err = 0, cnt = 0; 719 714 ··· 735 730 err = xfrm_state_delete(x); 736 731 xfrm_audit_state_delete(x, err ? 
0 : 1, 737 732 task_valid); 738 - xfrm_state_put(x); 733 + if (sync) 734 + xfrm_state_put_sync(x); 735 + else 736 + xfrm_state_put(x); 739 737 if (!err) 740 738 cnt++; 741 739 ··· 2223 2215 if (atomic_read(&t->tunnel_users) == 2) 2224 2216 xfrm_state_delete(t); 2225 2217 atomic_dec(&t->tunnel_users); 2226 - xfrm_state_put(t); 2218 + xfrm_state_put_sync(t); 2227 2219 x->tunnel = NULL; 2228 2220 } 2229 2221 } ··· 2383 2375 unsigned int sz; 2384 2376 2385 2377 flush_work(&net->xfrm.state_hash_work); 2386 - xfrm_state_flush(net, IPSEC_PROTO_ANY, false); 2387 2378 flush_work(&xfrm_state_gc_work); 2379 + xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true); 2388 2380 2389 2381 WARN_ON(!list_empty(&net->xfrm.state_all)); 2390 2382
+1 -1
net/xfrm/xfrm_user.c
··· 1932 1932 struct xfrm_usersa_flush *p = nlmsg_data(nlh); 1933 1933 int err; 1934 1934 1935 - err = xfrm_state_flush(net, p->proto, true); 1935 + err = xfrm_state_flush(net, p->proto, true, false); 1936 1936 if (err) { 1937 1937 if (err == -ESRCH) /* empty table */ 1938 1938 return 0;
+1 -1
scripts/Makefile.kasan
··· 26 26 CFLAGS_KASAN := $(CFLAGS_KASAN_SHADOW) \ 27 27 $(call cc-param,asan-globals=1) \ 28 28 $(call cc-param,asan-instrumentation-with-call-threshold=$(call_threshold)) \ 29 - $(call cc-param,asan-stack=1) \ 29 + $(call cc-param,asan-stack=$(CONFIG_KASAN_STACK)) \ 30 30 $(call cc-param,asan-use-after-scope=1) \ 31 31 $(call cc-param,asan-instrument-allocas=1) 32 32 endif
+2 -2
scripts/kallsyms.c
··· 118 118 fprintf(stderr, "Read error or end of file.\n"); 119 119 return -1; 120 120 } 121 - if (strlen(sym) > KSYM_NAME_LEN) { 122 - fprintf(stderr, "Symbol %s too long for kallsyms (%zu vs %d).\n" 121 + if (strlen(sym) >= KSYM_NAME_LEN) { 122 + fprintf(stderr, "Symbol %s too long for kallsyms (%zu >= %d).\n" 123 123 "Please increase KSYM_NAME_LEN both in kernel and kallsyms.c\n", 124 124 sym, strlen(sym), KSYM_NAME_LEN); 125 125 return -1;
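The kallsyms hunk tightens `>` to `>=`: a symbol of exactly `KSYM_NAME_LEN` characters must also be rejected, because storing it takes `KSYM_NAME_LEN + 1` bytes once the NUL terminator is counted. A sketch of the corrected check (the kernel's value is 128; a small value is used here to keep the test short):

```c
#include <string.h>

#define KSYM_NAME_LEN 8   /* kernel uses 128; shrunk here for brevity */

/* Reject names that would not fit in a KSYM_NAME_LEN-byte buffer:
 * strlen() excludes the NUL, so equality is already one byte too many. */
int symbol_fits(const char *sym)
{
    return !(strlen(sym) >= KSYM_NAME_LEN);
}
```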
+1 -12
security/keys/internal.h
··· 186 186 return key_task_permission(key_ref, current_cred(), perm); 187 187 } 188 188 189 - /* 190 - * Authorisation record for request_key(). 191 - */ 192 - struct request_key_auth { 193 - struct key *target_key; 194 - struct key *dest_keyring; 195 - const struct cred *cred; 196 - void *callout_info; 197 - size_t callout_len; 198 - pid_t pid; 199 - } __randomize_layout; 200 - 201 189 extern struct key_type key_type_request_key_auth; 202 190 extern struct key *request_key_auth_new(struct key *target, 191 + const char *op, 203 192 const void *callout_info, 204 193 size_t callout_len, 205 194 struct key *dest_keyring);
+3 -2
security/keys/key.c
··· 265 265 266 266 spin_lock(&user->lock); 267 267 if (!(flags & KEY_ALLOC_QUOTA_OVERRUN)) { 268 - if (user->qnkeys + 1 >= maxkeys || 269 - user->qnbytes + quotalen >= maxbytes || 268 + if (user->qnkeys + 1 > maxkeys || 269 + user->qnbytes + quotalen > maxbytes || 270 270 user->qnbytes + quotalen < user->qnbytes) 271 271 goto no_quota; 272 272 } ··· 297 297 key->gid = gid; 298 298 key->perm = perm; 299 299 key->restrict_link = restrict_link; 300 + key->last_used_at = ktime_get_real_seconds(); 300 301 301 302 if (!(flags & KEY_ALLOC_NOT_IN_QUOTA)) 302 303 key->flags |= 1 << KEY_FLAG_IN_QUOTA;
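The key.c hunk relaxes the quota test from `>=` to `>`: the check runs against `qnkeys + 1` before the new key is counted, so with `>=` a user whose limit was exactly `maxkeys` could never allocate the final key of their quota. A minimal sketch of the corrected boundary:

```c
/* Allow the allocation iff the count after adding this key still fits
 * the limit; the old `qnkeys + 1 >= maxkeys` refused the last slot. */
int quota_allows(unsigned int qnkeys, unsigned int maxkeys)
{
    return qnkeys + 1 <= maxkeys;   /* i.e. !(qnkeys + 1 > maxkeys) */
}
```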
+1
security/keys/keyctl.c
··· 25 25 #include <linux/security.h> 26 26 #include <linux/uio.h> 27 27 #include <linux/uaccess.h> 28 + #include <keys/request_key_auth-type.h> 28 29 #include "internal.h" 29 30 30 31 #define KEY_MAX_DESC_SIZE 4096
+1 -3
security/keys/keyring.c
··· 661 661 BUG_ON((ctx->flags & STATE_CHECKS) == 0 || 662 662 (ctx->flags & STATE_CHECKS) == STATE_CHECKS); 663 663 664 - if (ctx->index_key.description) 665 - ctx->index_key.desc_len = strlen(ctx->index_key.description); 666 - 667 664 /* Check to see if this top-level keyring is what we are looking for 668 665 * and whether it is valid or not. 669 666 */ ··· 911 914 struct keyring_search_context ctx = { 912 915 .index_key.type = type, 913 916 .index_key.description = description, 917 + .index_key.desc_len = strlen(description), 914 918 .cred = current_cred(), 915 919 .match_data.cmp = key_default_cmp, 916 920 .match_data.raw_data = description,
+1 -2
security/keys/proc.c
··· 165 165 int rc; 166 166 167 167 struct keyring_search_context ctx = { 168 - .index_key.type = key->type, 169 - .index_key.description = key->description, 168 + .index_key = key->index_key, 170 169 .cred = m->file->f_cred, 171 170 .match_data.cmp = lookup_user_key_possessed, 172 171 .match_data.raw_data = key,
+1
security/keys/process_keys.c
··· 19 19 #include <linux/security.h> 20 20 #include <linux/user_namespace.h> 21 21 #include <linux/uaccess.h> 22 + #include <keys/request_key_auth-type.h> 22 23 #include "internal.h" 23 24 24 25 /* Session keyring create vs join semaphore */
+30 -43
security/keys/request_key.c
··· 18 18 #include <linux/keyctl.h> 19 19 #include <linux/slab.h> 20 20 #include "internal.h" 21 + #include <keys/request_key_auth-type.h> 21 22 22 23 #define key_negative_timeout 60 /* default timeout on a negative key's existence */ 23 24 24 25 /** 25 26 * complete_request_key - Complete the construction of a key. 26 - * @cons: The key construction record. 27 + * @auth_key: The authorisation key. 27 28 * @error: The success or failute of the construction. 28 29 * 29 30 * Complete the attempt to construct a key. The key will be negated 30 31 * if an error is indicated. The authorisation key will be revoked 31 32 * unconditionally. 32 33 */ 33 - void complete_request_key(struct key_construction *cons, int error) 34 + void complete_request_key(struct key *authkey, int error) 34 35 { 35 - kenter("{%d,%d},%d", cons->key->serial, cons->authkey->serial, error); 36 + struct request_key_auth *rka = get_request_key_auth(authkey); 37 + struct key *key = rka->target_key; 38 + 39 + kenter("%d{%d},%d", authkey->serial, key->serial, error); 36 40 37 41 if (error < 0) 38 - key_negate_and_link(cons->key, key_negative_timeout, NULL, 39 - cons->authkey); 42 + key_negate_and_link(key, key_negative_timeout, NULL, authkey); 40 43 else 41 - key_revoke(cons->authkey); 42 - 43 - key_put(cons->key); 44 - key_put(cons->authkey); 45 - kfree(cons); 44 + key_revoke(authkey); 46 45 } 47 46 EXPORT_SYMBOL(complete_request_key); 48 47 ··· 90 91 * Request userspace finish the construction of a key 91 92 * - execute "/sbin/request-key <op> <key> <uid> <gid> <keyring> <keyring> <keyring>" 92 93 */ 93 - static int call_sbin_request_key(struct key_construction *cons, 94 - const char *op, 95 - void *aux) 94 + static int call_sbin_request_key(struct key *authkey, void *aux) 96 95 { 97 96 static char const request_key[] = "/sbin/request-key"; 97 + struct request_key_auth *rka = get_request_key_auth(authkey); 98 98 const struct cred *cred = current_cred(); 99 99 key_serial_t prkey, sskey; 100 - struct key 
*key = cons->key, *authkey = cons->authkey, *keyring, 101 - *session; 100 + struct key *key = rka->target_key, *keyring, *session; 102 101 char *argv[9], *envp[3], uid_str[12], gid_str[12]; 103 102 char key_str[12], keyring_str[3][12]; 104 103 char desc[20]; 105 104 int ret, i; 106 105 107 - kenter("{%d},{%d},%s", key->serial, authkey->serial, op); 106 + kenter("{%d},{%d},%s", key->serial, authkey->serial, rka->op); 108 107 109 108 ret = install_user_keyrings(); 110 109 if (ret < 0) ··· 160 163 /* set up the argument list */ 161 164 i = 0; 162 165 argv[i++] = (char *)request_key; 163 - argv[i++] = (char *) op; 166 + argv[i++] = (char *)rka->op; 164 167 argv[i++] = key_str; 165 168 argv[i++] = uid_str; 166 169 argv[i++] = gid_str; ··· 188 191 key_put(keyring); 189 192 190 193 error_alloc: 191 - complete_request_key(cons, ret); 194 + complete_request_key(authkey, ret); 192 195 kleave(" = %d", ret); 193 196 return ret; 194 197 } ··· 202 205 size_t callout_len, void *aux, 203 206 struct key *dest_keyring) 204 207 { 205 - struct key_construction *cons; 206 208 request_key_actor_t actor; 207 209 struct key *authkey; 208 210 int ret; 209 211 210 212 kenter("%d,%p,%zu,%p", key->serial, callout_info, callout_len, aux); 211 213 212 - cons = kmalloc(sizeof(*cons), GFP_KERNEL); 213 - if (!cons) 214 - return -ENOMEM; 215 - 216 214 /* allocate an authorisation key */ 217 - authkey = request_key_auth_new(key, callout_info, callout_len, 215 + authkey = request_key_auth_new(key, "create", callout_info, callout_len, 218 216 dest_keyring); 219 - if (IS_ERR(authkey)) { 220 - kfree(cons); 221 - ret = PTR_ERR(authkey); 222 - authkey = NULL; 223 - } else { 224 - cons->authkey = key_get(authkey); 225 - cons->key = key_get(key); 217 + if (IS_ERR(authkey)) 218 + return PTR_ERR(authkey); 226 219 227 - /* make the call */ 228 - actor = call_sbin_request_key; 229 - if (key->type->request_key) 230 - actor = key->type->request_key; 220 + /* Make the call */ 221 + actor = call_sbin_request_key; 
222 + if (key->type->request_key) 223 + actor = key->type->request_key; 231 224 232 - ret = actor(cons, "create", aux); 225 + ret = actor(authkey, aux); 233 226 234 - /* check that the actor called complete_request_key() prior to 235 - * returning an error */ 236 - WARN_ON(ret < 0 && 237 - !test_bit(KEY_FLAG_REVOKED, &authkey->flags)); 238 - key_put(authkey); 239 - } 227 + /* check that the actor called complete_request_key() prior to 228 + * returning an error */ 229 + WARN_ON(ret < 0 && 230 + !test_bit(KEY_FLAG_REVOKED, &authkey->flags)); 240 231 232 + key_put(authkey); 241 233 kleave(" = %d", ret); 242 234 return ret; 243 235 } ··· 261 275 if (cred->request_key_auth) { 262 276 authkey = cred->request_key_auth; 263 277 down_read(&authkey->sem); 264 - rka = authkey->payload.data[0]; 278 + rka = get_request_key_auth(authkey); 265 279 if (!test_bit(KEY_FLAG_REVOKED, 266 280 &authkey->flags)) 267 281 dest_keyring = ··· 531 545 struct keyring_search_context ctx = { 532 546 .index_key.type = type, 533 547 .index_key.description = description, 548 + .index_key.desc_len = strlen(description), 534 549 .cred = current_cred(), 535 550 .match_data.cmp = key_default_cmp, 536 551 .match_data.raw_data = description,
+10 -8
security/keys/request_key_auth.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/uaccess.h> 19 19 #include "internal.h" 20 - #include <keys/user-type.h> 20 + #include <keys/request_key_auth-type.h> 21 21 22 22 static int request_key_auth_preparse(struct key_preparsed_payload *); 23 23 static void request_key_auth_free_preparse(struct key_preparsed_payload *); ··· 68 68 static void request_key_auth_describe(const struct key *key, 69 69 struct seq_file *m) 70 70 { 71 - struct request_key_auth *rka = key->payload.data[0]; 71 + struct request_key_auth *rka = get_request_key_auth(key); 72 72 73 73 seq_puts(m, "key:"); 74 74 seq_puts(m, key->description); ··· 83 83 static long request_key_auth_read(const struct key *key, 84 84 char __user *buffer, size_t buflen) 85 85 { 86 - struct request_key_auth *rka = key->payload.data[0]; 86 + struct request_key_auth *rka = get_request_key_auth(key); 87 87 size_t datalen; 88 88 long ret; 89 89 ··· 109 109 */ 110 110 static void request_key_auth_revoke(struct key *key) 111 111 { 112 - struct request_key_auth *rka = key->payload.data[0]; 112 + struct request_key_auth *rka = get_request_key_auth(key); 113 113 114 114 kenter("{%d}", key->serial); 115 115 ··· 136 136 */ 137 137 static void request_key_auth_destroy(struct key *key) 138 138 { 139 - struct request_key_auth *rka = key->payload.data[0]; 139 + struct request_key_auth *rka = get_request_key_auth(key); 140 140 141 141 kenter("{%d}", key->serial); 142 142 ··· 147 147 * Create an authorisation token for /sbin/request-key or whoever to gain 148 148 * access to the caller's security data. 
149 149 */ 150 - struct key *request_key_auth_new(struct key *target, const void *callout_info, 151 - size_t callout_len, struct key *dest_keyring) 150 + struct key *request_key_auth_new(struct key *target, const char *op, 151 + const void *callout_info, size_t callout_len, 152 + struct key *dest_keyring) 152 153 { 153 154 struct request_key_auth *rka, *irka; 154 155 const struct cred *cred = current->cred; ··· 167 166 if (!rka->callout_info) 168 167 goto error_free_rka; 169 168 rka->callout_len = callout_len; 169 + strlcpy(rka->op, op, sizeof(rka->op)); 170 170 171 171 /* see if the calling process is already servicing the key request of 172 172 * another process */ ··· 247 245 struct key *authkey; 248 246 key_ref_t authkey_ref; 249 247 250 - sprintf(description, "%x", target_id); 248 + ctx.index_key.desc_len = sprintf(description, "%x", target_id); 251 249 252 250 authkey_ref = search_process_keyrings(&ctx); 253 251
+6 -4
security/lsm_audit.c
··· 321 321 if (a->u.net->sk) { 322 322 struct sock *sk = a->u.net->sk; 323 323 struct unix_sock *u; 324 + struct unix_address *addr; 324 325 int len = 0; 325 326 char *p = NULL; 326 327 ··· 352 351 #endif 353 352 case AF_UNIX: 354 353 u = unix_sk(sk); 354 + addr = smp_load_acquire(&u->addr); 355 + if (!addr) 356 + break; 355 357 if (u->path.dentry) { 356 358 audit_log_d_path(ab, " path=", &u->path); 357 359 break; 358 360 } 359 - if (!u->addr) 360 - break; 361 - len = u->addr->len-sizeof(short); 362 - p = &u->addr->name->sun_path[0]; 361 + len = addr->len-sizeof(short); 362 + p = &addr->name->sun_path[0]; 363 363 audit_log_format(ab, " path="); 364 364 if (*p) 365 365 audit_log_untrustedstring(ab, p);
+41 -1
sound/pci/hda/patch_realtek.c
··· 1855 1855 ALC887_FIXUP_BASS_CHMAP, 1856 1856 ALC1220_FIXUP_GB_DUAL_CODECS, 1857 1857 ALC1220_FIXUP_CLEVO_P950, 1858 + ALC1220_FIXUP_SYSTEM76_ORYP5, 1859 + ALC1220_FIXUP_SYSTEM76_ORYP5_PINS, 1858 1860 }; 1859 1861 1860 1862 static void alc889_fixup_coef(struct hda_codec *codec, ··· 2056 2054 */ 2057 2055 snd_hda_override_conn_list(codec, 0x14, 1, conn1); 2058 2056 snd_hda_override_conn_list(codec, 0x1b, 1, conn1); 2057 + } 2058 + 2059 + static void alc_fixup_headset_mode_no_hp_mic(struct hda_codec *codec, 2060 + const struct hda_fixup *fix, int action); 2061 + 2062 + static void alc1220_fixup_system76_oryp5(struct hda_codec *codec, 2063 + const struct hda_fixup *fix, 2064 + int action) 2065 + { 2066 + alc1220_fixup_clevo_p950(codec, fix, action); 2067 + alc_fixup_headset_mode_no_hp_mic(codec, fix, action); 2059 2068 } 2060 2069 2061 2070 static const struct hda_fixup alc882_fixups[] = { ··· 2313 2300 .type = HDA_FIXUP_FUNC, 2314 2301 .v.func = alc1220_fixup_clevo_p950, 2315 2302 }, 2303 + [ALC1220_FIXUP_SYSTEM76_ORYP5] = { 2304 + .type = HDA_FIXUP_FUNC, 2305 + .v.func = alc1220_fixup_system76_oryp5, 2306 + }, 2307 + [ALC1220_FIXUP_SYSTEM76_ORYP5_PINS] = { 2308 + .type = HDA_FIXUP_PINS, 2309 + .v.pins = (const struct hda_pintbl[]) { 2310 + { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 2311 + {} 2312 + }, 2313 + .chained = true, 2314 + .chain_id = ALC1220_FIXUP_SYSTEM76_ORYP5, 2315 + }, 2316 2316 }; 2317 2317 2318 2318 static const struct snd_pci_quirk alc882_fixup_tbl[] = { ··· 2402 2376 SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950), 2403 2377 SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950), 2404 2378 SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950), 2379 + SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS), 2380 + SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS), 2405 2381 
SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), 2406 2382 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), 2407 2383 SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530), ··· 5660 5632 ALC294_FIXUP_ASUS_SPK, 5661 5633 ALC225_FIXUP_HEADSET_JACK, 5662 5634 ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 5635 + ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE, 5663 5636 }; 5664 5637 5665 5638 static const struct hda_fixup alc269_fixups[] = { ··· 6616 6587 .chained = true, 6617 6588 .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 6618 6589 }, 6590 + [ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE] = { 6591 + .type = HDA_FIXUP_VERBS, 6592 + .v.verbs = (const struct hda_verb[]) { 6593 + /* Disable PCBEEP-IN passthrough */ 6594 + { 0x20, AC_VERB_SET_COEF_INDEX, 0x36 }, 6595 + { 0x20, AC_VERB_SET_PROC_COEF, 0x57d7 }, 6596 + { } 6597 + }, 6598 + .chained = true, 6599 + .chain_id = ALC285_FIXUP_LENOVO_HEADPHONE_NOISE 6600 + }, 6619 6601 }; 6620 6602 6621 6603 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 7312 7272 {0x12, 0x90a60130}, 7313 7273 {0x19, 0x03a11020}, 7314 7274 {0x21, 0x0321101f}), 7315 - SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 7275 + SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE, 7316 7276 {0x12, 0x90a60130}, 7317 7277 {0x14, 0x90170110}, 7318 7278 {0x19, 0x04a11040},
+1 -1
sound/soc/generic/simple-card.c
··· 462 462 conf_idx = 0; 463 463 node = of_get_child_by_name(top, PREFIX "dai-link"); 464 464 if (!node) { 465 - node = dev->of_node; 465 + node = of_node_get(top); 466 466 loop = 0; 467 467 } 468 468
+7 -1
sound/soc/samsung/i2s.c
··· 604 604 unsigned int fmt) 605 605 { 606 606 struct i2s_dai *i2s = to_info(dai); 607 + struct i2s_dai *other = get_other_dai(i2s); 607 608 int lrp_shift, sdf_shift, sdf_mask, lrp_rlow, mod_slave; 608 609 u32 mod, tmp = 0; 609 610 unsigned long flags; ··· 662 661 * CLK_I2S_RCLK_SRC clock is not exposed so we ensure any 663 662 * clock configuration assigned in DT is not overwritten. 664 663 */ 665 - if (i2s->rclk_srcrate == 0 && i2s->clk_data.clks == NULL) 664 + if (i2s->rclk_srcrate == 0 && i2s->clk_data.clks == NULL && 665 + other->clk_data.clks == NULL) 666 666 i2s_set_sysclk(dai, SAMSUNG_I2S_RCLKSRC_0, 667 667 0, SND_SOC_CLOCK_IN); 668 668 break; ··· 701 699 struct snd_pcm_hw_params *params, struct snd_soc_dai *dai) 702 700 { 703 701 struct i2s_dai *i2s = to_info(dai); 702 + struct i2s_dai *other = get_other_dai(i2s); 704 703 u32 mod, mask = 0, val = 0; 705 704 struct clk *rclksrc; 706 705 unsigned long flags; ··· 787 784 i2s->frmclk = params_rate(params); 788 785 789 786 rclksrc = i2s->clk_table[CLK_I2S_RCLK_SRC]; 787 + if (!rclksrc || IS_ERR(rclksrc)) 788 + rclksrc = other->clk_table[CLK_I2S_RCLK_SRC]; 789 + 790 790 if (rclksrc && !IS_ERR(rclksrc)) 791 791 i2s->rclk_srcrate = clk_get_rate(rclksrc); 792 792
+7 -1
sound/soc/soc-topology.c
··· 2487 2487 struct snd_soc_tplg_ops *ops, const struct firmware *fw, u32 id) 2488 2488 { 2489 2489 struct soc_tplg tplg; 2490 + int ret; 2490 2491 2491 2492 /* setup parsing context */ 2492 2493 memset(&tplg, 0, sizeof(tplg)); ··· 2501 2500 tplg.bytes_ext_ops = ops->bytes_ext_ops; 2502 2501 tplg.bytes_ext_ops_count = ops->bytes_ext_ops_count; 2503 2502 2504 - return soc_tplg_load(&tplg); 2503 + ret = soc_tplg_load(&tplg); 2504 + /* free the created components if fail to load topology */ 2505 + if (ret) 2506 + snd_soc_tplg_component_remove(comp, SND_SOC_TPLG_INDEX_ALL); 2507 + 2508 + return ret; 2505 2509 } 2506 2510 EXPORT_SYMBOL_GPL(snd_soc_tplg_component_load); 2507 2511
+10
tools/testing/selftests/bpf/test_lpm_map.c
··· 474 474 assert(bpf_map_lookup_elem(map_fd, key, &value) == -1 && 475 475 errno == ENOENT); 476 476 477 + key->prefixlen = 30; // unused prefix so far 478 + inet_pton(AF_INET, "192.255.0.0", key->data); 479 + assert(bpf_map_delete_elem(map_fd, key) == -1 && 480 + errno == ENOENT); 481 + 482 + key->prefixlen = 16; // same prefix as the root node 483 + inet_pton(AF_INET, "192.255.0.0", key->data); 484 + assert(bpf_map_delete_elem(map_fd, key) == -1 && 485 + errno == ENOENT); 486 + 477 487 /* assert initial lookup */ 478 488 key->prefixlen = 32; 479 489 inet_pton(AF_INET, "192.168.0.1", key->data);
+1
tools/testing/selftests/net/fib_tests.sh
··· 388 388 389 389 set -e 390 390 $IP link set dev dummy0 carrier off 391 + sleep 1 391 392 set +e 392 393 393 394 echo " Carrier down"
+80 -16
tools/testing/selftests/net/pmtu.sh
··· 103 103 # and check that configured MTU is used on link creation and changes, and 104 104 # that MTU is properly calculated instead when MTU is not configured from 105 105 # userspace 106 + # 107 + # - cleanup_ipv4_exception 108 + # Similar to pmtu_ipv4_vxlan4_exception, but explicitly generate PMTU 109 + # exceptions on multiple CPUs and check that the veth device tear-down 110 + # happens in a timely manner 111 + # 112 + # - cleanup_ipv6_exception 113 + # Same as above, but use IPv6 transport from A to B 114 + 106 115 107 116 # Kselftest framework requirement - SKIP code is 4. 108 117 ksft_skip=4 ··· 144 135 pmtu_vti6_default_mtu vti6: default MTU assignment 145 136 pmtu_vti4_link_add_mtu vti4: MTU setting on link creation 146 137 pmtu_vti6_link_add_mtu vti6: MTU setting on link creation 147 - pmtu_vti6_link_change_mtu vti6: MTU changes on link changes" 138 + pmtu_vti6_link_change_mtu vti6: MTU changes on link changes 139 + cleanup_ipv4_exception ipv4: cleanup of cached exceptions 140 + cleanup_ipv6_exception ipv6: cleanup of cached exceptions" 148 141 149 142 NS_A="ns-$(mktemp -u XXXXXX)" 150 143 NS_B="ns-$(mktemp -u XXXXXX)" ··· 274 263 275 264 ${ns_a} ip link set ${encap}_a up 276 265 ${ns_b} ip link set ${encap}_b up 277 - 278 - sleep 1 279 266 } 280 267 281 268 setup_fou44() { ··· 311 302 setup_namespaces() { 312 303 for n in ${NS_A} ${NS_B} ${NS_R1} ${NS_R2}; do 313 304 ip netns add ${n} || return 1 305 + 306 + # Disable DAD, so that we don't have to wait to use the 307 + # configured IPv6 addresses 308 + ip netns exec ${n} sysctl -q net/ipv6/conf/default/accept_dad=0 314 309 done 315 310 } 316 311 ··· 350 337 351 338 ${ns_a} ip link set vti${proto}_a up 352 339 ${ns_b} ip link set vti${proto}_b up 353 - 354 - sleep 1 355 340 } 356 341 357 342 setup_vti4() { ··· 386 375 387 376 ${ns_a} ip link set ${type}_a up 388 377 ${ns_b} ip link set ${type}_b up 389 - 390 - sleep 1 391 378 } 392 379 393 380 setup_geneve4() { ··· 597 588 mtu "${ns_b}" veth_B-R2 1500 
598 589 599 590 # Create route exceptions 600 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s 1800 ${dst1} > /dev/null 601 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s 1800 ${dst2} > /dev/null 591 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst1} > /dev/null 592 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1800 ${dst2} > /dev/null 602 593 603 594 # Check that exceptions have been created with the correct PMTU 604 595 pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst1})" ··· 630 621 # Decrease remote MTU on path via R2, get new exception 631 622 mtu "${ns_r2}" veth_R2-B 400 632 623 mtu "${ns_b}" veth_B-R2 400 633 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s 1400 ${dst2} > /dev/null 624 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1400 ${dst2} > /dev/null 634 625 pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst2})" 635 626 check_pmtu_value "lock 552" "${pmtu_2}" "exceeding MTU, with MTU < min_pmtu" || return 1 636 627 ··· 647 638 check_pmtu_value "1500" "${pmtu_2}" "increasing local MTU" || return 1 648 639 649 640 # Get new exception 650 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s 1400 ${dst2} > /dev/null 641 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s 1400 ${dst2} > /dev/null 651 642 pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst2})" 652 643 check_pmtu_value "lock 552" "${pmtu_2}" "exceeding MTU, with MTU < min_pmtu" || return 1 653 644 } ··· 696 687 697 688 mtu "${ns_a}" ${type}_a $((${ll_mtu} + 1000)) 698 689 mtu "${ns_b}" ${type}_b $((${ll_mtu} + 1000)) 699 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s $((${ll_mtu} + 500)) ${dst} > /dev/null 690 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s $((${ll_mtu} + 500)) ${dst} > /dev/null 700 691 701 692 # Check that exception was created 702 693 pmtu="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst})" ··· 776 767 777 768 mtu "${ns_a}" ${encap}_a $((${ll_mtu} + 1000)) 778 769 mtu "${ns_b}" ${encap}_b $((${ll_mtu} + 1000)) 779 - ${ns_a} ${ping} -q -M want -i 0.1 -w 2 -s $((${ll_mtu} + 
500)) ${dst} > /dev/null 770 + ${ns_a} ${ping} -q -M want -i 0.1 -w 1 -s $((${ll_mtu} + 500)) ${dst} > /dev/null 780 771 781 772 # Check that exception was created 782 773 pmtu="$(route_get_dst_pmtu_from_exception "${ns_a}" ${dst})" ··· 834 825 835 826 # Send DF packet without exceeding link layer MTU, check that no 836 827 # exception is created 837 - ${ns_a} ping -q -M want -i 0.1 -w 2 -s ${ping_payload} ${tunnel4_b_addr} > /dev/null 828 + ${ns_a} ping -q -M want -i 0.1 -w 1 -s ${ping_payload} ${tunnel4_b_addr} > /dev/null 838 829 pmtu="$(route_get_dst_pmtu_from_exception "${ns_a}" ${tunnel4_b_addr})" 839 830 check_pmtu_value "" "${pmtu}" "sending packet smaller than PMTU (IP payload length ${esp_payload_rfc4106})" || return 1 840 831 841 832 # Now exceed link layer MTU by one byte, check that exception is created 842 833 # with the right PMTU value 843 - ${ns_a} ping -q -M want -i 0.1 -w 2 -s $((ping_payload + 1)) ${tunnel4_b_addr} > /dev/null 834 + ${ns_a} ping -q -M want -i 0.1 -w 1 -s $((ping_payload + 1)) ${tunnel4_b_addr} > /dev/null 844 835 pmtu="$(route_get_dst_pmtu_from_exception "${ns_a}" ${tunnel4_b_addr})" 845 836 check_pmtu_value "${esp_payload_rfc4106}" "${pmtu}" "exceeding PMTU (IP payload length $((esp_payload_rfc4106 + 1)))" 846 837 } ··· 856 847 mtu "${ns_b}" veth_b 4000 857 848 mtu "${ns_a}" vti6_a 5000 858 849 mtu "${ns_b}" vti6_b 5000 859 - ${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${tunnel6_b_addr} > /dev/null 850 + ${ns_a} ${ping6} -q -i 0.1 -w 1 -s 60000 ${tunnel6_b_addr} > /dev/null 860 851 861 852 # Check that exception was created 862 853 pmtu="$(route_get_dst_pmtu_from_exception "${ns_a}" ${tunnel6_b_addr})" ··· 1015 1006 fi 1016 1007 1017 1008 return ${fail} 1009 + } 1010 + 1011 + check_command() { 1012 + cmd=${1} 1013 + 1014 + if ! 
which ${cmd} > /dev/null 2>&1; then 1015 + err " missing required command: '${cmd}'" 1016 + return 1 1017 + fi 1018 + return 0 1019 + } 1020 + 1021 + test_cleanup_vxlanX_exception() { 1022 + outer="${1}" 1023 + encap="vxlan" 1024 + ll_mtu=4000 1025 + 1026 + check_command taskset || return 2 1027 + cpu_list=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2) 1028 + 1029 + setup namespaces routing ${encap}${outer} || return 2 1030 + trace "${ns_a}" ${encap}_a "${ns_b}" ${encap}_b \ 1031 + "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \ 1032 + "${ns_b}" veth_B-R1 "${ns_r1}" veth_R1-B 1033 + 1034 + # Create route exception by exceeding link layer MTU 1035 + mtu "${ns_a}" veth_A-R1 $((${ll_mtu} + 1000)) 1036 + mtu "${ns_r1}" veth_R1-A $((${ll_mtu} + 1000)) 1037 + mtu "${ns_b}" veth_B-R1 ${ll_mtu} 1038 + mtu "${ns_r1}" veth_R1-B ${ll_mtu} 1039 + 1040 + mtu "${ns_a}" ${encap}_a $((${ll_mtu} + 1000)) 1041 + mtu "${ns_b}" ${encap}_b $((${ll_mtu} + 1000)) 1042 + 1043 + # Fill exception cache for multiple CPUs (2) 1044 + # we can always use inner IPv4 for that 1045 + for cpu in ${cpu_list}; do 1046 + taskset --cpu-list ${cpu} ${ns_a} ping -q -M want -i 0.1 -w 1 -s $((${ll_mtu} + 500)) ${tunnel4_b_addr} > /dev/null 1047 + done 1048 + 1049 + ${ns_a} ip link del dev veth_A-R1 & 1050 + iplink_pid=$! 1051 + sleep 1 1052 + if [ "$(cat /proc/${iplink_pid}/cmdline 2>/dev/null | tr -d '\0')" = "iplinkdeldevveth_A-R1" ]; then 1053 + err " can't delete veth device in a timely manner, PMTU dst likely leaked" 1054 + return 1 1055 + fi 1056 + } 1057 + 1058 + test_cleanup_ipv6_exception() { 1059 + test_cleanup_vxlanX_exception 6 1060 + } 1061 + 1062 + test_cleanup_ipv4_exception() { 1063 + test_cleanup_vxlanX_exception 4 1018 1064 } 1019 1065 1020 1066 usage() {
+4 -4
tools/testing/selftests/net/udpgro.sh
··· 37 37 38 38 cfg_veth 39 39 40 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} && \ 40 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} && \ 41 41 echo "ok" || \ 42 42 echo "failed" & 43 43 ··· 81 81 # will land on the 'plain' one 82 82 ip netns exec "${PEER_NS}" ./udpgso_bench_rx -G ${family} -b ${addr1} -n 0 & 83 83 pid=$! 84 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${family} -b ${addr2%/*} ${rx_args} && \ 84 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${family} -b ${addr2%/*} ${rx_args} && \ 85 85 echo "ok" || \ 86 86 echo "failed"& 87 87 ··· 99 99 100 100 cfg_veth 101 101 102 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -p 12345 & 103 - ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} && \ 102 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 1000 -R 10 ${rx_args} -p 12345 & 103 + ip netns exec "${PEER_NS}" ./udpgso_bench_rx -C 2000 -R 10 ${rx_args} && \ 104 104 echo "ok" || \ 105 105 echo "failed" & 106 106
+29 -13
tools/testing/selftests/net/udpgso_bench_rx.c
··· 45 45 static int cfg_expected_pkt_nr; 46 46 static int cfg_expected_pkt_len; 47 47 static int cfg_expected_gso_size; 48 + static int cfg_connect_timeout_ms; 49 + static int cfg_rcv_timeout_ms; 48 50 static struct sockaddr_storage cfg_bind_addr; 49 51 50 52 static bool interrupted; ··· 89 87 return (tv.tv_sec * 1000) + (tv.tv_usec / 1000); 90 88 } 91 89 92 - static void do_poll(int fd) 90 + static void do_poll(int fd, int timeout_ms) 93 91 { 94 92 struct pollfd pfd; 95 93 int ret; ··· 104 102 break; 105 103 if (ret == -1) 106 104 error(1, errno, "poll"); 107 - if (ret == 0) 108 - continue; 105 + if (ret == 0) { 106 + if (!timeout_ms) 107 + continue; 108 + 109 + timeout_ms -= 10; 110 + if (timeout_ms <= 0) { 111 + interrupted = true; 112 + break; 113 + } 114 + } 109 115 if (pfd.revents != POLLIN) 110 116 error(1, errno, "poll: 0x%x expected 0x%x\n", 111 117 pfd.revents, POLLIN); ··· 144 134 if (listen(accept_fd, 1)) 145 135 error(1, errno, "listen"); 146 136 147 - do_poll(accept_fd); 137 + do_poll(accept_fd, cfg_connect_timeout_ms); 148 138 if (interrupted) 149 139 exit(0); 150 140 ··· 283 273 284 274 static void usage(const char *filepath) 285 275 { 286 - error(1, 0, "Usage: %s [-Grtv] [-b addr] [-p port] [-l pktlen] [-n packetnr] [-S gsosize]", filepath); 276 + error(1, 0, "Usage: %s [-C connect_timeout] [-Grtv] [-b addr] [-p port]" 277 + " [-l pktlen] [-n packetnr] [-R rcv_timeout] [-S gsosize]", 278 + filepath); 287 279 } 288 280 289 281 static void parse_opts(int argc, char **argv) ··· 294 282 295 283 /* bind to any by default */ 296 284 setup_sockaddr(PF_INET6, "::", &cfg_bind_addr); 297 - while ((c = getopt(argc, argv, "4b:Gl:n:p:rS:tv")) != -1) { 285 + while ((c = getopt(argc, argv, "4b:C:Gl:n:p:rR:S:tv")) != -1) { 298 286 switch (c) { 299 287 case '4': 300 288 cfg_family = PF_INET; ··· 303 291 break; 304 292 case 'b': 305 293 setup_sockaddr(cfg_family, optarg, &cfg_bind_addr); 294 + break; 295 + case 'C': 296 + cfg_connect_timeout_ms = strtoul(optarg, 
NULL, 0); 306 297 break; 307 298 case 'G': 308 299 cfg_gro_segment = true; ··· 321 306 break; 322 307 case 'r': 323 308 cfg_read_all = true; 309 + break; 310 + case 'R': 311 + cfg_rcv_timeout_ms = strtoul(optarg, NULL, 0); 324 312 break; 325 313 case 'S': 326 314 cfg_expected_gso_size = strtol(optarg, NULL, 0); ··· 347 329 348 330 static void do_recv(void) 349 331 { 332 + int timeout_ms = cfg_tcp ? cfg_rcv_timeout_ms : cfg_connect_timeout_ms; 350 333 unsigned long tnow, treport; 351 - int fd, loop = 0; 334 + int fd; 352 335 353 336 fd = do_socket(cfg_tcp); 354 337 ··· 361 342 362 343 treport = gettimeofday_ms() + 1000; 363 344 do { 364 - /* force termination after the second poll(); this cope both 365 - * with sender slower than receiver and missing packet errors 366 - */ 367 - if (cfg_expected_pkt_nr && loop++) 368 - interrupted = true; 369 - do_poll(fd); 345 + do_poll(fd, timeout_ms); 370 346 371 347 if (cfg_tcp) 372 348 do_flush_tcp(fd); ··· 378 364 bytes = packets = 0; 379 365 treport = tnow + 1000; 380 366 } 367 + 368 + timeout_ms = cfg_rcv_timeout_ms; 381 369 382 370 } while (!interrupted); 383 371
+1 -1
virt/kvm/kvm_main.c
··· 4044 4044 } 4045 4045 add_uevent_var(env, "PID=%d", kvm->userspace_pid); 4046 4046 4047 - if (kvm->debugfs_dentry) { 4047 + if (!IS_ERR_OR_NULL(kvm->debugfs_dentry)) { 4048 4048 char *tmp, *p = kmalloc(PATH_MAX, GFP_KERNEL); 4049 4049 4050 4050 if (p) {