Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.7-rc5 into tty-next

We need the tty fixes in here too.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4390 -2041
+8 -5
Documentation/admin-guide/device-mapper/dm-integrity.rst
··· 182 182 space-efficient. If this option is not present, large padding is 183 183 used - that is for compatibility with older kernels. 184 184 185 + allow_discards 186 + Allow block discard requests (a.k.a. TRIM) for the integrity device. 187 + Discards are only allowed to devices using internal hash. 185 188 186 - The journal mode (D/J), buffer_sectors, journal_watermark, commit_time can 187 - be changed when reloading the target (load an inactive table and swap the 188 - tables with suspend and resume). The other arguments should not be changed 189 - when reloading the target because the layout of disk data depend on them 190 - and the reloaded target would be non-functional. 189 + The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and 190 + allow_discards can be changed when reloading the target (load an inactive 191 + table and swap the tables with suspend and resume). The other arguments 192 + should not be changed when reloading the target because the layout of disk 193 + data depend on them and the reloaded target would be non-functional. 191 194 192 195 193 196 The layout of the formatted block device:
+3 -4
Documentation/devicetree/bindings/dma/socionext,uniphier-xdmac.yaml
··· 22 22 const: socionext,uniphier-xdmac 23 23 24 24 reg: 25 - items: 26 - - description: XDMAC base register region (offset and length) 27 - - description: XDMAC extension register region (offset and length) 25 + maxItems: 1 28 26 29 27 interrupts: 30 28 maxItems: 1 ··· 47 49 - reg 48 50 - interrupts 49 51 - "#dma-cells" 52 + - dma-channels 50 53 51 54 examples: 52 55 - | 53 56 xdmac: dma-controller@5fc10000 { 54 57 compatible = "socionext,uniphier-xdmac"; 55 - reg = <0x5fc10000 0x1000>, <0x5fc20000 0x800>; 58 + reg = <0x5fc10000 0x5300>; 56 59 interrupts = <0 188 4>; 57 60 #dma-cells = <2>; 58 61 dma-channels = <16>;
+2 -2
Documentation/networking/devlink/ice.rst
··· 61 61 - running 62 62 - ICE OS Default Package 63 63 - The name of the DDP package that is active in the device. The DDP 64 - package is loaded by the driver during initialization. Each varation 65 - of DDP package shall have a unique name. 64 + package is loaded by the driver during initialization. Each 65 + variation of the DDP package has a unique name. 66 66 * - ``fw.app`` 67 67 - running 68 68 - 1.3.1.0
+2
Documentation/virt/kvm/index.rst
··· 28 28 arm/index 29 29 30 30 devices/index 31 + 32 + running-nested-guests
+276
Documentation/virt/kvm/running-nested-guests.rst
··· 1 + ============================== 2 + Running nested guests with KVM 3 + ============================== 4 + 5 + A nested guest is the ability to run a guest inside another guest (it 6 + can be KVM-based or a different hypervisor). The straightforward 7 + example is a KVM guest that in turn runs on a KVM guest (the rest of 8 + this document is built on this example):: 9 + 10 + .----------------. .----------------. 11 + | | | | 12 + | L2 | | L2 | 13 + | (Nested Guest) | | (Nested Guest) | 14 + | | | | 15 + |----------------'--'----------------| 16 + | | 17 + | L1 (Guest Hypervisor) | 18 + | KVM (/dev/kvm) | 19 + | | 20 + .------------------------------------------------------. 21 + | L0 (Host Hypervisor) | 22 + | KVM (/dev/kvm) | 23 + |------------------------------------------------------| 24 + | Hardware (with virtualization extensions) | 25 + '------------------------------------------------------' 26 + 27 + Terminology: 28 + 29 + - L0 – level-0; the bare metal host, running KVM 30 + 31 + - L1 – level-1 guest; a VM running on L0; also called the "guest 32 + hypervisor", as it itself is capable of running KVM. 33 + 34 + - L2 – level-2 guest; a VM running on L1, this is the "nested guest" 35 + 36 + .. note:: The above diagram is modelled after the x86 architecture; 37 + s390x, ppc64 and other architectures are likely to have 38 + a different design for nesting. 39 + 40 + For example, s390x always has an LPAR (LogicalPARtition) 41 + hypervisor running on bare metal, adding another layer and 42 + resulting in at least four levels in a nested setup — L0 (bare 43 + metal, running the LPAR hypervisor), L1 (host hypervisor), L2 44 + (guest hypervisor), L3 (nested guest). 45 + 46 + This document will stick with the three-level terminology (L0, 47 + L1, and L2) for all architectures; and will largely focus on 48 + x86. 
49 + 50 + 51 + Use Cases 52 + --------- 53 + 54 + There are several scenarios where nested KVM can be useful, to name a 55 + few: 56 + 57 + - As a developer, you want to test your software on different operating 58 + systems (OSes). Instead of renting multiple VMs from a Cloud 59 + Provider, using nested KVM lets you rent a large enough "guest 60 + hypervisor" (level-1 guest). This in turn allows you to create 61 + multiple nested guests (level-2 guests), running different OSes, on 62 + which you can develop and test your software. 63 + 64 + - Live migration of "guest hypervisors" and their nested guests, for 65 + load balancing, disaster recovery, etc. 66 + 67 + - VM image creation tools (e.g. ``virt-install``, etc) often run 68 + their own VM, and users expect these to work inside a VM. 69 + 70 + - Some OSes use virtualization internally for security (e.g. to let 71 + applications run safely in isolation). 72 + 73 + 74 + Enabling "nested" (x86) 75 + ----------------------- 76 + 77 + From Linux kernel v4.19 onwards, the ``nested`` KVM parameter is enabled 78 + by default for Intel and AMD. (Though your Linux distribution might 79 + override this default.) 80 + 81 + In case you are running a Linux kernel older than v4.19, to enable 82 + nesting, set the ``nested`` KVM module parameter to ``Y`` or ``1``. To 83 + persist this setting across reboots, you can add it to a config file, as 84 + shown below: 85 + 86 + 1. On the bare metal host (L0), list the kernel modules and ensure that 87 + the KVM modules are loaded:: 88 + 89 + $ lsmod | grep -i kvm 90 + kvm_intel 133627 0 91 + kvm 435079 1 kvm_intel 92 + 93 + 2. Show information for ``kvm_intel`` module:: 94 + 95 + $ modinfo kvm_intel | grep -i nested 96 + parm: nested:bool 97 + 98 + 3.
For the nested KVM configuration to persist across reboots, place the 99 + below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it 100 + doesn't exist):: 101 + 102 + $ cat /etc/modprobe.d/kvm_intel.conf 103 + options kvm-intel nested=y 104 + 105 + 4. Unload and re-load the KVM Intel module:: 106 + 107 + $ sudo rmmod kvm-intel 108 + $ sudo modprobe kvm-intel 109 + 110 + 5. Verify that the ``nested`` parameter for KVM is enabled:: 111 + 112 + $ cat /sys/module/kvm_intel/parameters/nested 113 + Y 114 + 115 + For AMD hosts, the process is the same as above, except that the module 116 + name is ``kvm-amd``. 117 + 118 + 119 + Additional nested-related kernel parameters (x86) 120 + ------------------------------------------------- 121 + 122 + If your hardware is sufficiently advanced (Intel Haswell processor or 123 + higher, which has newer hardware virt extensions), the following 124 + additional features will also be enabled by default: "Shadow VMCS 125 + (Virtual Machine Control Structure)" and APIC Virtualization on your bare 126 + metal host (L0). Parameters for Intel hosts:: 127 + 128 + $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs 129 + Y 130 + 131 + $ cat /sys/module/kvm_intel/parameters/enable_apicv 132 + Y 133 + 134 + $ cat /sys/module/kvm_intel/parameters/ept 135 + Y 136 + 137 + .. note:: If you suspect your L2 (i.e. nested guest) is running slower, 138 + ensure the above are enabled (particularly 139 + ``enable_shadow_vmcs`` and ``ept``). 140 + 141 + 142 + Starting a nested guest (x86) 143 + ----------------------------- 144 + 145 + Once your bare metal host (L0) is configured for nesting, you should be 146 + able to start an L1 guest with:: 147 + 148 + $ qemu-kvm -cpu host [...] 149 + 150 + The above will pass through the host CPU's capabilities as-is to the 151 + guest; or for better live migration compatibility, use a named CPU 152 + model supported by QEMU.
e.g.:: 153 + 154 + $ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on 155 + 156 + The guest hypervisor will then be capable of running a 157 + nested guest with accelerated KVM. 158 + 159 + 160 + Enabling "nested" (s390x) 161 + ------------------------- 162 + 163 + 1. On the host hypervisor (L0), enable the ``nested`` parameter on 164 + s390x:: 165 + 166 + $ rmmod kvm 167 + $ modprobe kvm nested=1 168 + 169 + .. note:: On s390x, the kernel parameter ``hpage`` is mutually exclusive 170 + with the ``nested`` parameter — i.e. to be able to enable 171 + ``nested``, the ``hpage`` parameter *must* be disabled. 172 + 173 + 2. The guest hypervisor (L1) must be provided with the ``sie`` CPU 174 + feature — with QEMU, this can be done by using "host passthrough" 175 + (via the command-line ``-cpu host``). 176 + 177 + 3. Now the KVM module can be loaded in the L1 (guest hypervisor):: 178 + 179 + $ modprobe kvm 180 + 181 + 182 + Live migration with nested KVM 183 + ------------------------------ 184 + 185 + Migrating an L1 guest, with a *live* nested guest in it, to another 186 + bare metal host, works as of Linux kernel 5.3 and QEMU 4.2.0 for 187 + Intel x86 systems, and even on older versions for s390x. 188 + 189 + On AMD systems, once an L1 guest has started an L2 guest, the L1 guest 190 + should no longer be migrated or saved (refer to QEMU documentation on 191 + "savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate 192 + or save-and-load an L1 guest while an L2 guest is running will result in 193 + undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a 194 + kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1 195 + guest can no longer be considered stable or secure, and must be restarted. 196 + Migrating an L1 guest merely configured to support nesting, while not 197 + actually running L2 guests, is expected to function normally even on AMD 198 + systems but may fail once guests are started.
199 + 200 + Migrating an L2 guest is always expected to succeed, so all the following 201 + scenarios should work even on AMD systems: 202 + 203 + - Migrating a nested guest (L2) to another L1 guest on the *same* bare 204 + metal host. 205 + 206 + - Migrating a nested guest (L2) to another L1 guest on a *different* 207 + bare metal host. 208 + 209 + - Migrating a nested guest (L2) to a bare metal host. 210 + 211 + Reporting bugs from nested setups 212 + ----------------------------------- 213 + 214 + Debugging "nested" problems can involve sifting through log files across 215 + L0, L1 and L2; this can result in tedious back-and-forth between the bug 216 + reporter and the bug fixer. 217 + 218 + - Mention that you are in a "nested" setup. If you are running any kind 219 + of "nesting" at all, say so. Unfortunately, this needs to be called 220 + out because when reporting bugs, people tend to forget to even 221 + *mention* that they're using nested virtualization. 222 + 223 + - Ensure you are actually running KVM on KVM. Sometimes people do not 224 + have KVM enabled for their guest hypervisor (L1), which results in 225 + them running with pure emulation, or what QEMU calls "TCG", but 226 + they think they're running nested KVM. They thus confuse "nested virt" 227 + (which could also mean QEMU on KVM) with "nested KVM" (KVM on KVM).
228 + 229 + Information to collect (generic) 230 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 231 + 232 + The following is not an exhaustive list, but a very good starting point: 233 + 234 + - Kernel, libvirt, and QEMU version from L0 235 + 236 + - Kernel, libvirt, and QEMU version from L1 237 + 238 + - QEMU command-line of L1 -- when using libvirt, you'll find it here: 239 + ``/var/log/libvirt/qemu/instance.log`` 240 + 241 + - QEMU command-line of L2 -- as above, when using libvirt, get the 242 + complete libvirt-generated QEMU command-line 243 + 244 + - ``cat /proc/cpuinfo`` from L0 245 + 246 + - ``cat /proc/cpuinfo`` from L1 247 + 248 + - ``lscpu`` from L0 249 + 250 + - ``lscpu`` from L1 251 + 252 + - Full ``dmesg`` output from L0 253 + 254 + - Full ``dmesg`` output from L1 255 + 256 + x86-specific info to collect 257 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 258 + 259 + Both the below commands, ``x86info`` and ``dmidecode``, should be 260 + available on most Linux distributions with the same name: 261 + 262 + - Output of: ``x86info -a`` from L0 263 + 264 + - Output of: ``x86info -a`` from L1 265 + 266 + - Output of: ``dmidecode`` from L0 267 + 268 + - Output of: ``dmidecode`` from L1 269 + 270 + s390x-specific info to collect 271 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 272 + 273 + Along with the generic details mentioned earlier, the following is 274 + also recommended: 275 + 276 + - ``/proc/sysinfo`` from L1; this will also include the info from L0
+4 -14
MAINTAINERS
··· 3657 3657 S: Maintained 3658 3658 W: http://btrfs.wiki.kernel.org/ 3659 3659 Q: http://patchwork.kernel.org/project/linux-btrfs/list/ 3660 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git 3660 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git 3661 3661 F: Documentation/filesystems/btrfs.rst 3662 3662 F: fs/btrfs/ 3663 3663 F: include/linux/btrfs* ··· 3936 3936 CEPH COMMON CODE (LIBCEPH) 3937 3937 M: Ilya Dryomov <idryomov@gmail.com> 3938 3938 M: Jeff Layton <jlayton@kernel.org> 3939 - M: Sage Weil <sage@redhat.com> 3940 3939 L: ceph-devel@vger.kernel.org 3941 3940 S: Supported 3942 3941 W: http://ceph.com/ 3943 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 3944 3942 T: git git://github.com/ceph/ceph-client.git 3945 3943 F: include/linux/ceph/ 3946 3944 F: include/linux/crush/ ··· 3946 3948 3947 3949 CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH) 3948 3950 M: Jeff Layton <jlayton@kernel.org> 3949 - M: Sage Weil <sage@redhat.com> 3950 3951 M: Ilya Dryomov <idryomov@gmail.com> 3951 3952 L: ceph-devel@vger.kernel.org 3952 3953 S: Supported 3953 3954 W: http://ceph.com/ 3954 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 3955 3955 T: git git://github.com/ceph/ceph-client.git 3956 3956 F: Documentation/filesystems/ceph.rst 3957 3957 F: fs/ceph/ ··· 5931 5935 DYNAMIC INTERRUPT MODERATION 5932 5936 M: Tal Gilboa <talgi@mellanox.com> 5933 5937 S: Maintained 5938 + F: Documentation/networking/net_dim.rst 5934 5939 F: include/linux/dim.h 5935 5940 F: lib/dim/ 5936 - F: Documentation/networking/net_dim.rst 5937 5941 5938 5942 DZ DECSTATION DZ11 SERIAL DRIVER 5939 5943 M: "Maciej W. 
Rozycki" <macro@linux-mips.org> ··· 7115 7119 7116 7120 GENERIC PHY FRAMEWORK 7117 7121 M: Kishon Vijay Abraham I <kishon@ti.com> 7122 + M: Vinod Koul <vkoul@kernel.org> 7118 7123 L: linux-kernel@vger.kernel.org 7119 7124 S: Supported 7120 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy.git 7125 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git 7121 7126 F: Documentation/devicetree/bindings/phy/ 7122 7127 F: drivers/phy/ 7123 7128 F: include/linux/phy/ ··· 7742 7745 L: platform-driver-x86@vger.kernel.org 7743 7746 S: Orphan 7744 7747 F: drivers/platform/x86/tc1100-wmi.c 7745 - 7746 - HP100: Driver for HP 10/100 Mbit/s Voice Grade Network Adapter Series 7747 - M: Jaroslav Kysela <perex@perex.cz> 7748 - S: Obsolete 7749 - F: drivers/staging/hp/hp100.* 7750 7748 7751 7749 HPET: High Precision Event Timers driver 7752 7750 M: Clemens Ladisch <clemens@ladisch.de> ··· 14094 14102 14095 14103 RADOS BLOCK DEVICE (RBD) 14096 14104 M: Ilya Dryomov <idryomov@gmail.com> 14097 - M: Sage Weil <sage@redhat.com> 14098 14105 R: Dongsheng Yang <dongsheng.yang@easystack.cn> 14099 14106 L: ceph-devel@vger.kernel.org 14100 14107 S: Supported 14101 14108 W: http://ceph.com/ 14102 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 14103 14109 T: git git://github.com/ceph/ceph-client.git 14104 14110 F: Documentation/ABI/testing/sysfs-bus-rbd 14105 14111 F: drivers/block/rbd.c
+12 -5
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 7 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc5 6 6 NAME = Kleptomaniac Octopus 7 7 8 8 # *DOCUMENTATION* ··· 729 729 KBUILD_CFLAGS += -Os 730 730 endif 731 731 732 - ifdef CONFIG_CC_DISABLE_WARN_MAYBE_UNINITIALIZED 733 - KBUILD_CFLAGS += -Wno-maybe-uninitialized 734 - endif 735 - 736 732 # Tell gcc to never replace conditional load with a non-conditional one 737 733 KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0) 738 734 KBUILD_CFLAGS += $(call cc-option,-fno-allow-store-data-races) ··· 876 880 877 881 # disable stringop warnings in gcc 8+ 878 882 KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation) 883 + 884 + # We'll want to enable this eventually, but it's not going away for 5.7 at least 885 + KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds) 886 + KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds) 887 + KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow) 888 + 889 + # Another good warning that we'll want to enable eventually 890 + KBUILD_CFLAGS += $(call cc-disable-warning, restrict) 891 + 892 + # Enabled with W=2, disabled by default as noisy 893 + KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized) 879 894 880 895 # disable invalid "can't wrap" optimizations for signed / pointers 881 896 KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
+11 -3
arch/arm/crypto/chacha-glue.c
··· 91 91 return; 92 92 } 93 93 94 - kernel_neon_begin(); 95 - chacha_doneon(state, dst, src, bytes, nrounds); 96 - kernel_neon_end(); 94 + do { 95 + unsigned int todo = min_t(unsigned int, bytes, SZ_4K); 96 + 97 + kernel_neon_begin(); 98 + chacha_doneon(state, dst, src, todo, nrounds); 99 + kernel_neon_end(); 100 + 101 + bytes -= todo; 102 + src += todo; 103 + dst += todo; 104 + } while (bytes); 97 105 } 98 106 EXPORT_SYMBOL(chacha_crypt_arch); 99 107
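The hunk above bounds each kernel_neon_begin()/kernel_neon_end() section to 4 KiB of input, so NEON use no longer keeps preemption disabled for an entire (possibly huge) request; the nhpoly1305 and poly1305 hunks in this merge apply the same bound. A minimal userspace sketch of that chunking loop, with fake_neon_begin()/fake_neon_end() and memcpy() as illustrative stand-ins for the kernel APIs:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define SZ_4K 4096  /* same bound the hunk uses via min_t(..., SZ_4K) */

static unsigned int neon_sections;  /* counts begin/end pairs */

static void fake_neon_begin(void) { neon_sections++; }
static void fake_neon_end(void) { }

/*
 * Mirrors the loop added to chacha_crypt_arch(): never hold the NEON
 * context for more than 4 KiB of data, so preemption latency stays
 * bounded. memcpy() stands in for chacha_doneon(); bytes must be > 0,
 * as in the kernel caller.
 */
static void chunked_process(unsigned char *dst, const unsigned char *src,
                            size_t bytes)
{
	do {
		size_t todo = bytes < SZ_4K ? bytes : SZ_4K;

		fake_neon_begin();
		memcpy(dst, src, todo);
		fake_neon_end();

		bytes -= todo;
		src += todo;
		dst += todo;
	} while (bytes);
}
```

Each begin/end pair covers at most SZ_4K bytes, so a large request splits into several short non-preemptible sections rather than one long one.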
+1 -1
arch/arm/crypto/nhpoly1305-neon-glue.c
··· 30 30 return crypto_nhpoly1305_update(desc, src, srclen); 31 31 32 32 do { 33 - unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); 33 + unsigned int n = min_t(unsigned int, srclen, SZ_4K); 34 34 35 35 kernel_neon_begin(); 36 36 crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+11 -4
arch/arm/crypto/poly1305-glue.c
··· 160 160 unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE); 161 161 162 162 if (static_branch_likely(&have_neon) && do_neon) { 163 - kernel_neon_begin(); 164 - poly1305_blocks_neon(&dctx->h, src, len, 1); 165 - kernel_neon_end(); 163 + do { 164 + unsigned int todo = min_t(unsigned int, len, SZ_4K); 165 + 166 + kernel_neon_begin(); 167 + poly1305_blocks_neon(&dctx->h, src, todo, 1); 168 + kernel_neon_end(); 169 + 170 + len -= todo; 171 + src += todo; 172 + } while (len); 166 173 } else { 167 174 poly1305_blocks_arm(&dctx->h, src, len, 1); 175 + src += len; 168 176 } 169 - src += len; 170 177 nbytes %= POLY1305_BLOCK_SIZE; 171 178 } 172 179
+7 -2
arch/arm/include/asm/futex.h
··· 165 165 preempt_enable(); 166 166 #endif 167 167 168 - if (!ret) 169 - *oval = oldval; 168 + /* 169 + * Store unconditionally. If ret != 0 the extra store is the least 170 + * of the worries but GCC cannot figure out that __futex_atomic_op() 171 + * is either setting ret to -EFAULT or storing the old value in 172 + * oldval which results in a uninitialized warning at the call site. 173 + */ 174 + *oval = oldval; 170 175 171 176 return ret; 172 177 }
+11 -3
arch/arm64/crypto/chacha-neon-glue.c
··· 87 87 !crypto_simd_usable()) 88 88 return chacha_crypt_generic(state, dst, src, bytes, nrounds); 89 89 90 - kernel_neon_begin(); 91 - chacha_doneon(state, dst, src, bytes, nrounds); 92 - kernel_neon_end(); 90 + do { 91 + unsigned int todo = min_t(unsigned int, bytes, SZ_4K); 92 + 93 + kernel_neon_begin(); 94 + chacha_doneon(state, dst, src, todo, nrounds); 95 + kernel_neon_end(); 96 + 97 + bytes -= todo; 98 + src += todo; 99 + dst += todo; 100 + } while (bytes); 93 101 } 94 102 EXPORT_SYMBOL(chacha_crypt_arch); 95 103
+1 -1
arch/arm64/crypto/nhpoly1305-neon-glue.c
··· 30 30 return crypto_nhpoly1305_update(desc, src, srclen); 31 31 32 32 do { 33 - unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); 33 + unsigned int n = min_t(unsigned int, srclen, SZ_4K); 34 34 35 35 kernel_neon_begin(); 36 36 crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+11 -4
arch/arm64/crypto/poly1305-glue.c
··· 143 143 unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE); 144 144 145 145 if (static_branch_likely(&have_neon) && crypto_simd_usable()) { 146 - kernel_neon_begin(); 147 - poly1305_blocks_neon(&dctx->h, src, len, 1); 148 - kernel_neon_end(); 146 + do { 147 + unsigned int todo = min_t(unsigned int, len, SZ_4K); 148 + 149 + kernel_neon_begin(); 150 + poly1305_blocks_neon(&dctx->h, src, todo, 1); 151 + kernel_neon_end(); 152 + 153 + len -= todo; 154 + src += todo; 155 + } while (len); 149 156 } else { 150 157 poly1305_blocks(&dctx->h, src, len, 1); 158 + src += len; 151 159 } 152 - src += len; 153 160 nbytes %= POLY1305_BLOCK_SIZE; 154 161 } 155 162
+1 -1
arch/arm64/kernel/vdso/Makefile
··· 32 32 OBJECT_FILES_NON_STANDARD := y 33 33 KCOV_INSTRUMENT := n 34 34 35 - CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny 35 + CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny -fasynchronous-unwind-tables 36 36 37 37 ifneq ($(c-gettimeofday-y),) 38 38 CFLAGS_vgettimeofday.o += -include $(c-gettimeofday-y)
+7
arch/arm64/kvm/guest.c
··· 200 200 } 201 201 202 202 memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id)); 203 + 204 + if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) { 205 + int i; 206 + 207 + for (i = 0; i < 16; i++) 208 + *vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i); 209 + } 203 210 out: 204 211 return err; 205 212 }
+23
arch/arm64/kvm/hyp/entry.S
··· 18 18 19 19 #define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x) 20 20 #define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x) 21 + #define CPU_SP_EL0_OFFSET (CPU_XREG_OFFSET(30) + 8) 21 22 22 23 .text 23 24 .pushsection .hyp.text, "ax" ··· 48 47 ldp x29, lr, [\ctxt, #CPU_XREG_OFFSET(29)] 49 48 .endm 50 49 50 + .macro save_sp_el0 ctxt, tmp 51 + mrs \tmp, sp_el0 52 + str \tmp, [\ctxt, #CPU_SP_EL0_OFFSET] 53 + .endm 54 + 55 + .macro restore_sp_el0 ctxt, tmp 56 + ldr \tmp, [\ctxt, #CPU_SP_EL0_OFFSET] 57 + msr sp_el0, \tmp 58 + .endm 59 + 51 60 /* 52 61 * u64 __guest_enter(struct kvm_vcpu *vcpu, 53 62 * struct kvm_cpu_context *host_ctxt); ··· 70 59 71 60 // Store the host regs 72 61 save_callee_saved_regs x1 62 + 63 + // Save the host's sp_el0 64 + save_sp_el0 x1, x2 73 65 74 66 // Now the host state is stored if we have a pending RAS SError it must 75 67 // affect the host. If any asynchronous exception is pending we defer ··· 96 82 // as it may cause Pointer Authentication key signing mismatch errors 97 83 // when this feature is enabled for kernel code. 98 84 ptrauth_switch_to_guest x29, x0, x1, x2 85 + 86 + // Restore the guest's sp_el0 87 + restore_sp_el0 x29, x0 99 88 100 89 // Restore guest regs x0-x17 101 90 ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)] ··· 147 130 // Store the guest regs x18-x29, lr 148 131 save_callee_saved_regs x1 149 132 133 + // Store the guest's sp_el0 134 + save_sp_el0 x1, x2 135 + 150 136 get_host_ctxt x2, x3 151 137 152 138 // Macro ptrauth_switch_to_guest format: ··· 158 138 // as it may cause Pointer Authentication key signing mismatch errors 159 139 // when this feature is enabled for kernel code. 160 140 ptrauth_switch_to_host x1, x2, x3, x4, x5 141 + 142 + // Restore the hosts's sp_el0 143 + restore_sp_el0 x2, x3 161 144 162 145 // Now restore the host regs 163 146 restore_callee_saved_regs x2
-1
arch/arm64/kvm/hyp/hyp-entry.S
··· 198 198 .macro invalid_vector label, target = __hyp_panic 199 199 .align 2 200 200 SYM_CODE_START(\label) 201 - \label: 202 201 b \target 203 202 SYM_CODE_END(\label) 204 203 .endm
+3 -14
arch/arm64/kvm/hyp/sysreg-sr.c
··· 15 15 /* 16 16 * Non-VHE: Both host and guest must save everything. 17 17 * 18 - * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate, 19 - * which are handled as part of the el2 return state) on every switch. 18 + * VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and 19 + * pstate, which are handled as part of the el2 return state) on every 20 + * switch (sp_el0 is being dealt with in the assembly code). 20 21 * tpidr_el0 and tpidrro_el0 only need to be switched when going 21 22 * to host userspace or a different VCPU. EL1 registers only need to be 22 23 * switched when potentially going to run a different VCPU. The latter two ··· 27 26 static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt) 28 27 { 29 28 ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1); 30 - 31 - /* 32 - * The host arm64 Linux uses sp_el0 to point to 'current' and it must 33 - * therefore be saved/restored on every entry/exit to/from the guest. 34 - */ 35 - ctxt->gp_regs.regs.sp = read_sysreg(sp_el0); 36 29 } 37 30 38 31 static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt) ··· 94 99 static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt) 95 100 { 96 101 write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1); 97 - 98 - /* 99 - * The host arm64 Linux uses sp_el0 to point to 'current' and it must 100 - * therefore be saved/restored on every entry/exit to/from the guest. 101 - */ 102 - write_sysreg(ctxt->gp_regs.regs.sp, sp_el0); 103 102 } 104 103 105 104 static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
+2
arch/arm64/mm/hugetlbpage.c
··· 230 230 ptep = (pte_t *)pudp; 231 231 } else if (sz == (CONT_PTE_SIZE)) { 232 232 pmdp = pmd_alloc(mm, pudp, addr); 233 + if (!pmdp) 234 + return NULL; 233 235 234 236 WARN_ON(addr & (sz - 1)); 235 237 /*
+1
arch/powerpc/kvm/powerpc.c
··· 521 521 case KVM_CAP_IOEVENTFD: 522 522 case KVM_CAP_DEVICE_CTRL: 523 523 case KVM_CAP_IMMEDIATE_EXIT: 524 + case KVM_CAP_SET_GUEST_DEBUG: 524 525 r = 1; 525 526 break; 526 527 case KVM_CAP_PPC_GUEST_DEBUG_SSTEP:
+1 -1
arch/riscv/Kconfig
··· 60 60 select ARCH_HAS_GIGANTIC_PAGE 61 61 select ARCH_HAS_SET_DIRECT_MAP 62 62 select ARCH_HAS_SET_MEMORY 63 - select ARCH_HAS_STRICT_KERNEL_RWX 63 + select ARCH_HAS_STRICT_KERNEL_RWX if MMU 64 64 select ARCH_WANT_HUGE_PMD_SHARE if 64BIT 65 65 select SPARSEMEM_STATIC if 32BIT 66 66 select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
-3
arch/riscv/include/asm/csr.h
··· 51 51 #define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1)) 52 52 53 53 /* Interrupt causes (minus the high bit) */ 54 - #define IRQ_U_SOFT 0 55 54 #define IRQ_S_SOFT 1 56 55 #define IRQ_M_SOFT 3 57 - #define IRQ_U_TIMER 4 58 56 #define IRQ_S_TIMER 5 59 57 #define IRQ_M_TIMER 7 60 - #define IRQ_U_EXT 8 61 58 #define IRQ_S_EXT 9 62 59 #define IRQ_M_EXT 11 63 60
+22
arch/riscv/include/asm/hwcap.h
··· 8 8 #ifndef _ASM_RISCV_HWCAP_H 9 9 #define _ASM_RISCV_HWCAP_H 10 10 11 + #include <linux/bits.h> 11 12 #include <uapi/asm/hwcap.h> 12 13 13 14 #ifndef __ASSEMBLY__ ··· 23 22 }; 24 23 25 24 extern unsigned long elf_hwcap; 25 + 26 + #define RISCV_ISA_EXT_a ('a' - 'a') 27 + #define RISCV_ISA_EXT_c ('c' - 'a') 28 + #define RISCV_ISA_EXT_d ('d' - 'a') 29 + #define RISCV_ISA_EXT_f ('f' - 'a') 30 + #define RISCV_ISA_EXT_h ('h' - 'a') 31 + #define RISCV_ISA_EXT_i ('i' - 'a') 32 + #define RISCV_ISA_EXT_m ('m' - 'a') 33 + #define RISCV_ISA_EXT_s ('s' - 'a') 34 + #define RISCV_ISA_EXT_u ('u' - 'a') 35 + 36 + #define RISCV_ISA_EXT_MAX 64 37 + 38 + unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap); 39 + 40 + #define riscv_isa_extension_mask(ext) BIT_MASK(RISCV_ISA_EXT_##ext) 41 + 42 + bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit); 43 + #define riscv_isa_extension_available(isa_bitmap, ext) \ 44 + __riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext) 45 + 26 46 #endif 27 47 28 48 #endif /* _ASM_RISCV_HWCAP_H */
-8
arch/riscv/include/asm/set_memory.h
··· 22 22 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; } 23 23 #endif 24 24 25 - #ifdef CONFIG_STRICT_KERNEL_RWX 26 - void set_kernel_text_ro(void); 27 - void set_kernel_text_rw(void); 28 - #else 29 - static inline void set_kernel_text_ro(void) { } 30 - static inline void set_kernel_text_rw(void) { } 31 - #endif 32 - 33 25 int set_direct_map_invalid_noflush(struct page *page); 34 26 int set_direct_map_default_noflush(struct page *page); 35 27
+2 -2
arch/riscv/kernel/cpu_ops.c
··· 15 15 16 16 const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init; 17 17 18 - void *__cpu_up_stack_pointer[NR_CPUS]; 19 - void *__cpu_up_task_pointer[NR_CPUS]; 18 + void *__cpu_up_stack_pointer[NR_CPUS] __section(.data); 19 + void *__cpu_up_task_pointer[NR_CPUS] __section(.data); 20 20 21 21 extern const struct cpu_operations cpu_ops_sbi; 22 22 extern const struct cpu_operations cpu_ops_spinwait;
+80 -3
arch/riscv/kernel/cpufeature.c
··· 6 6 * Copyright (C) 2017 SiFive 7 7 */ 8 8 9 + #include <linux/bitmap.h> 9 10 #include <linux/of.h> 10 11 #include <asm/processor.h> 11 12 #include <asm/hwcap.h> ··· 14 13 #include <asm/switch_to.h> 15 14 16 15 unsigned long elf_hwcap __read_mostly; 16 + 17 + /* Host ISA bitmap */ 18 + static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly; 19 + 17 20 #ifdef CONFIG_FPU 18 21 bool has_fpu __read_mostly; 19 22 #endif 23 + 24 + /** 25 + * riscv_isa_extension_base() - Get base extension word 26 + * 27 + * @isa_bitmap: ISA bitmap to use 28 + * Return: base extension word as unsigned long value 29 + * 30 + * NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used. 31 + */ 32 + unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap) 33 + { 34 + if (!isa_bitmap) 35 + return riscv_isa[0]; 36 + return isa_bitmap[0]; 37 + } 38 + EXPORT_SYMBOL_GPL(riscv_isa_extension_base); 39 + 40 + /** 41 + * __riscv_isa_extension_available() - Check whether given extension 42 + * is available or not 43 + * 44 + * @isa_bitmap: ISA bitmap to use 45 + * @bit: bit position of the desired extension 46 + * Return: true or false 47 + * 48 + * NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used. 49 + */ 50 + bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit) 51 + { 52 + const unsigned long *bmap = (isa_bitmap) ? isa_bitmap : riscv_isa; 53 + 54 + if (bit >= RISCV_ISA_EXT_MAX) 55 + return false; 56 + 57 + return test_bit(bit, bmap) ? 
true : false; 58 + } 59 + EXPORT_SYMBOL_GPL(__riscv_isa_extension_available); 20 60 21 61 void riscv_fill_hwcap(void) 22 62 { 23 63 struct device_node *node; 24 64 const char *isa; 25 - size_t i; 65 + char print_str[BITS_PER_LONG + 1]; 66 + size_t i, j, isa_len; 26 67 static unsigned long isa2hwcap[256] = {0}; 27 68 28 69 isa2hwcap['i'] = isa2hwcap['I'] = COMPAT_HWCAP_ISA_I; ··· 76 33 77 34 elf_hwcap = 0; 78 35 36 + bitmap_zero(riscv_isa, RISCV_ISA_EXT_MAX); 37 + 79 38 for_each_of_cpu_node(node) { 80 39 unsigned long this_hwcap = 0; 40 + unsigned long this_isa = 0; 81 41 82 42 if (riscv_of_processor_hartid(node) < 0) 83 43 continue; ··· 90 44 continue; 91 45 } 92 46 93 - for (i = 0; i < strlen(isa); ++i) 47 + i = 0; 48 + isa_len = strlen(isa); 49 + #if IS_ENABLED(CONFIG_32BIT) 50 + if (!strncmp(isa, "rv32", 4)) 51 + i += 4; 52 + #elif IS_ENABLED(CONFIG_64BIT) 53 + if (!strncmp(isa, "rv64", 4)) 54 + i += 4; 55 + #endif 56 + for (; i < isa_len; ++i) { 94 57 this_hwcap |= isa2hwcap[(unsigned char)(isa[i])]; 58 + /* 59 + * TODO: X, Y and Z extension parsing for Host ISA 60 + * bitmap will be added in-future. 61 + */ 62 + if ('a' <= isa[i] && isa[i] < 'x') 63 + this_isa |= (1UL << (isa[i] - 'a')); 64 + } 95 65 96 66 /* 97 67 * All "okay" hart should have same isa. 
Set HWCAP based on ··· 118 56 elf_hwcap &= this_hwcap; 119 57 else 120 58 elf_hwcap = this_hwcap; 59 + 60 + if (riscv_isa[0]) 61 + riscv_isa[0] &= this_isa; 62 + else 63 + riscv_isa[0] = this_isa; 121 64 } 122 65 123 66 /* We don't support systems with F but without D, so mask those out ··· 132 65 elf_hwcap &= ~COMPAT_HWCAP_ISA_F; 133 66 } 134 67 135 - pr_info("elf_hwcap is 0x%lx\n", elf_hwcap); 68 + memset(print_str, 0, sizeof(print_str)); 69 + for (i = 0, j = 0; i < BITS_PER_LONG; i++) 70 + if (riscv_isa[0] & BIT_MASK(i)) 71 + print_str[j++] = (char)('a' + i); 72 + pr_info("riscv: ISA extensions %s\n", print_str); 73 + 74 + memset(print_str, 0, sizeof(print_str)); 75 + for (i = 0, j = 0; i < BITS_PER_LONG; i++) 76 + if (elf_hwcap & BIT_MASK(i)) 77 + print_str[j++] = (char)('a' + i); 78 + pr_info("riscv: ELF capabilities %s\n", print_str); 136 79 137 80 #ifdef CONFIG_FPU 138 81 if (elf_hwcap & (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D))
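The parsing added to riscv_fill_hwcap() above can be exercised in isolation. The userspace sketch below reproduces its logic (parse_isa_string() is an illustrative name, not a kernel function): skip the "rv32"/"rv64" prefix, then record one bit per single-letter extension, matching the RISCV_ISA_EXT_x = ('x' - 'a') encoding introduced in hwcap.h:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the single-letter ISA-string parsing from this merge.
 * 'a' maps to bit 0, 'b' to bit 1, and so on; the multi-letter
 * X/Y/Z extensions are skipped, matching the TODO in the hunk.
 */
static unsigned long parse_isa_string(const char *isa)
{
	unsigned long bitmap = 0;
	size_t i = 0, len = strlen(isa);

	/* Skip the "rv32"/"rv64" prefix (the patch checks per XLEN). */
	if (!strncmp(isa, "rv32", 4) || !strncmp(isa, "rv64", 4))
		i += 4;

	for (; i < len; ++i)
		if ('a' <= isa[i] && isa[i] < 'x')
			bitmap |= 1UL << (isa[i] - 'a');

	return bitmap;
}
```

With this encoding a device-tree string such as "rv64imafdc" yields a word where, e.g., bit ('c' - 'a') answers "is the compressed extension present?" with a single test, which is what __riscv_isa_extension_available() does against the shared bitmap.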
+10 -7
arch/riscv/kernel/sbi.c
··· 102 102 { 103 103 sbi_ecall(SBI_EXT_0_1_SHUTDOWN, 0, 0, 0, 0, 0, 0, 0); 104 104 } 105 - EXPORT_SYMBOL(sbi_set_timer); 105 + EXPORT_SYMBOL(sbi_shutdown); 106 106 107 107 /** 108 108 * sbi_clear_ipi() - Clear any pending IPIs for the calling hart. ··· 113 113 { 114 114 sbi_ecall(SBI_EXT_0_1_CLEAR_IPI, 0, 0, 0, 0, 0, 0, 0); 115 115 } 116 - EXPORT_SYMBOL(sbi_shutdown); 116 + EXPORT_SYMBOL(sbi_clear_ipi); 117 117 118 118 /** 119 119 * sbi_set_timer_v01() - Program the timer for next timer event. ··· 167 167 168 168 return result; 169 169 } 170 + 171 + static void sbi_set_power_off(void) 172 + { 173 + pm_power_off = sbi_shutdown; 174 + } 170 175 #else 171 176 static void __sbi_set_timer_v01(uint64_t stime_value) 172 177 { ··· 196 191 197 192 return 0; 198 193 } 194 + 195 + static void sbi_set_power_off(void) {} 199 196 #endif /* CONFIG_RISCV_SBI_V01 */ 200 197 201 198 static void __sbi_set_timer_v02(uint64_t stime_value) ··· 547 540 return __sbi_base_ecall(SBI_EXT_BASE_GET_IMP_VERSION); 548 541 } 549 542 550 - static void sbi_power_off(void) 551 - { 552 - sbi_shutdown(); 553 - } 554 543 555 544 int __init sbi_init(void) 556 545 { 557 546 int ret; 558 547 559 - pm_power_off = sbi_power_off; 548 + sbi_set_power_off(); 560 549 ret = sbi_get_spec_version(); 561 550 if (ret > 0) 562 551 sbi_spec_version = ret;
+2
arch/riscv/kernel/smp.c
··· 10 10 11 11 #include <linux/cpu.h> 12 12 #include <linux/interrupt.h> 13 + #include <linux/module.h> 13 14 #include <linux/profile.h> 14 15 #include <linux/smp.h> 15 16 #include <linux/sched.h> ··· 64 63 for_each_cpu(cpu, in) 65 64 cpumask_set_cpu(cpuid_to_hartid_map(cpu), out); 66 65 } 66 + EXPORT_SYMBOL_GPL(riscv_cpuid_to_hartid_mask); 67 67 68 68 bool arch_match_cpu_phys_id(int cpu, u64 phys_id) 69 69 {
+2 -2
arch/riscv/kernel/stacktrace.c
··· 12 12 #include <linux/stacktrace.h> 13 13 #include <linux/ftrace.h> 14 14 15 + register unsigned long sp_in_global __asm__("sp"); 16 + 15 17 #ifdef CONFIG_FRAME_POINTER 16 18 17 19 struct stackframe { 18 20 unsigned long fp; 19 21 unsigned long ra; 20 22 }; 21 - 22 - register unsigned long sp_in_global __asm__("sp"); 23 23 24 24 void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs, 25 25 bool (*fn)(unsigned long, void *), void *arg)
+4 -4
arch/riscv/kernel/vdso/Makefile
··· 12 12 vdso-syms += flush_icache 13 13 14 14 # Files to link into the vdso 15 - obj-vdso = $(patsubst %, %.o, $(vdso-syms)) 15 + obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o 16 16 17 17 # Build rules 18 18 targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o ··· 33 33 $(call if_changed,vdsold) 34 34 35 35 # We also create a special relocatable object that should mirror the symbol 36 - # table and layout of the linked DSO. With ld -R we can then refer to 37 - # these symbols in the kernel code rather than hand-coded addresses. 36 + # table and layout of the linked DSO. With ld --just-symbols we can then 37 + # refer to these symbols in the kernel code rather than hand-coded addresses. 38 38 39 39 SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \ 40 40 -Wl,--build-id -Wl,--hash-style=both 41 41 $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE 42 42 $(call if_changed,vdsold) 43 43 44 - LDFLAGS_vdso-syms.o := -r -R 44 + LDFLAGS_vdso-syms.o := -r --just-symbols 45 45 $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE 46 46 $(call if_changed,ld) 47 47
+12
arch/riscv/kernel/vdso/note.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* 3 + * This supplies .note.* sections to go into the PT_NOTE inside the vDSO text. 4 + * Here we can supply some information useful to userland. 5 + */ 6 + 7 + #include <linux/elfnote.h> 8 + #include <linux/version.h> 9 + 10 + ELFNOTE_START(Linux, 0, "a") 11 + .long LINUX_VERSION_CODE 12 + ELFNOTE_END
+2 -17
arch/riscv/mm/init.c
··· 150 150 memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start); 151 151 152 152 set_max_mapnr(PFN_DOWN(mem_size)); 153 - max_low_pfn = PFN_DOWN(memblock_end_of_DRAM()); 153 + max_pfn = PFN_DOWN(memblock_end_of_DRAM()); 154 + max_low_pfn = max_pfn; 154 155 155 156 #ifdef CONFIG_BLK_DEV_INITRD 156 157 setup_initrd(); ··· 502 501 #endif /* CONFIG_MMU */ 503 502 504 503 #ifdef CONFIG_STRICT_KERNEL_RWX 505 - void set_kernel_text_rw(void) 506 - { 507 - unsigned long text_start = (unsigned long)_text; 508 - unsigned long text_end = (unsigned long)_etext; 509 - 510 - set_memory_rw(text_start, (text_end - text_start) >> PAGE_SHIFT); 511 - } 512 - 513 - void set_kernel_text_ro(void) 514 - { 515 - unsigned long text_start = (unsigned long)_text; 516 - unsigned long text_end = (unsigned long)_etext; 517 - 518 - set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT); 519 - } 520 - 521 504 void mark_rodata_ro(void) 522 505 { 523 506 unsigned long text_start = (unsigned long)_text;
+1
arch/s390/kvm/kvm-s390.c
··· 545 545 case KVM_CAP_S390_AIS: 546 546 case KVM_CAP_S390_AIS_MIGRATION: 547 547 case KVM_CAP_S390_VCPU_RESETS: 548 + case KVM_CAP_SET_GUEST_DEBUG: 548 549 r = 1; 549 550 break; 550 551 case KVM_CAP_S390_HPAGE_1M:
+3 -1
arch/s390/kvm/priv.c
··· 626 626 * available for the guest are AQIC and TAPQ with the t bit set 627 627 * since we do not set IC.3 (FIII) we currently will only intercept 628 628 * the AQIC function code. 629 + * Note: running nested under z/VM can result in intercepts for other 630 + * function codes, e.g. PQAP(QCI). We do not support this and bail out. 629 631 */ 630 632 reg0 = vcpu->run->s.regs.gprs[0]; 631 633 fc = (reg0 >> 24) & 0xff; 632 - if (WARN_ON_ONCE(fc != 0x03)) 634 + if (fc != 0x03) 633 635 return -EOPNOTSUPP; 634 636 635 637 /* PQAP instruction is allowed for guest kernel only */
+4
arch/s390/lib/uaccess.c
··· 64 64 { 65 65 mm_segment_t old_fs; 66 66 unsigned long asce, cr; 67 + unsigned long flags; 67 68 68 69 old_fs = current->thread.mm_segment; 69 70 if (old_fs & 1) 70 71 return old_fs; 72 + /* protect against a concurrent page table upgrade */ 73 + local_irq_save(flags); 71 74 current->thread.mm_segment |= 1; 72 75 asce = S390_lowcore.kernel_asce; 73 76 if (likely(old_fs == USER_DS)) { ··· 86 83 __ctl_load(asce, 7, 7); 87 84 set_cpu_flag(CIF_ASCE_SECONDARY); 88 85 } 86 + local_irq_restore(flags); 89 87 return old_fs; 90 88 } 91 89 EXPORT_SYMBOL(enable_sacf_uaccess);
+14 -2
arch/s390/mm/pgalloc.c
··· 70 70 { 71 71 struct mm_struct *mm = arg; 72 72 73 - if (current->active_mm == mm) 74 - set_user_asce(mm); 73 + /* we must change all active ASCEs to avoid the creation of new TLBs */ 74 + if (current->active_mm == mm) { 75 + S390_lowcore.user_asce = mm->context.asce; 76 + if (current->thread.mm_segment == USER_DS) { 77 + __ctl_load(S390_lowcore.user_asce, 1, 1); 78 + /* Mark user-ASCE present in CR1 */ 79 + clear_cpu_flag(CIF_ASCE_PRIMARY); 80 + } 81 + if (current->thread.mm_segment == USER_DS_SACF) { 82 + __ctl_load(S390_lowcore.user_asce, 7, 7); 83 + /* enable_sacf_uaccess does all or nothing */ 84 + WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY)); 85 + } 86 + } 75 87 __tlb_flush_local(); 76 88 } 77 89
+4 -6
arch/x86/crypto/blake2s-glue.c
··· 32 32 const u32 inc) 33 33 { 34 34 /* SIMD disables preemption, so relax after processing each page. */ 35 - BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8); 35 + BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8); 36 36 37 37 if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) { 38 38 blake2s_compress_generic(state, block, nblocks, inc); 39 39 return; 40 40 } 41 41 42 - for (;;) { 42 + do { 43 43 const size_t blocks = min_t(size_t, nblocks, 44 - PAGE_SIZE / BLAKE2S_BLOCK_SIZE); 44 + SZ_4K / BLAKE2S_BLOCK_SIZE); 45 45 46 46 kernel_fpu_begin(); 47 47 if (IS_ENABLED(CONFIG_AS_AVX512) && ··· 52 52 kernel_fpu_end(); 53 53 54 54 nblocks -= blocks; 55 - if (!nblocks) 56 - break; 57 55 block += blocks * BLAKE2S_BLOCK_SIZE; 58 - } 56 + } while (nblocks); 59 57 } 60 58 EXPORT_SYMBOL(blake2s_compress_arch); 61 59
+11 -3
arch/x86/crypto/chacha_glue.c
··· 153 153 bytes <= CHACHA_BLOCK_SIZE) 154 154 return chacha_crypt_generic(state, dst, src, bytes, nrounds); 155 155 156 - kernel_fpu_begin(); 157 - chacha_dosimd(state, dst, src, bytes, nrounds); 158 - kernel_fpu_end(); 156 + do { 157 + unsigned int todo = min_t(unsigned int, bytes, SZ_4K); 158 + 159 + kernel_fpu_begin(); 160 + chacha_dosimd(state, dst, src, todo, nrounds); 161 + kernel_fpu_end(); 162 + 163 + bytes -= todo; 164 + src += todo; 165 + dst += todo; 166 + } while (bytes); 159 167 } 160 168 EXPORT_SYMBOL(chacha_crypt_arch); 161 169
+1 -1
arch/x86/crypto/nhpoly1305-avx2-glue.c
··· 29 29 return crypto_nhpoly1305_update(desc, src, srclen); 30 30 31 31 do { 32 - unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); 32 + unsigned int n = min_t(unsigned int, srclen, SZ_4K); 33 33 34 34 kernel_fpu_begin(); 35 35 crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
+1 -1
arch/x86/crypto/nhpoly1305-sse2-glue.c
··· 29 29 return crypto_nhpoly1305_update(desc, src, srclen); 30 30 31 31 do { 32 - unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE); 32 + unsigned int n = min_t(unsigned int, srclen, SZ_4K); 33 33 34 34 kernel_fpu_begin(); 35 35 crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
+6 -7
arch/x86/crypto/poly1305_glue.c
··· 91 91 struct poly1305_arch_internal *state = ctx; 92 92 93 93 /* SIMD disables preemption, so relax after processing each page. */ 94 - BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE || 95 - PAGE_SIZE % POLY1305_BLOCK_SIZE); 94 + BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE || 95 + SZ_4K % POLY1305_BLOCK_SIZE); 96 96 97 97 if (!static_branch_likely(&poly1305_use_avx) || 98 98 (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) || ··· 102 102 return; 103 103 } 104 104 105 - for (;;) { 106 - const size_t bytes = min_t(size_t, len, PAGE_SIZE); 105 + do { 106 + const size_t bytes = min_t(size_t, len, SZ_4K); 107 107 108 108 kernel_fpu_begin(); 109 109 if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512)) ··· 113 113 else 114 114 poly1305_blocks_avx(ctx, inp, bytes, padbit); 115 115 kernel_fpu_end(); 116 + 116 117 len -= bytes; 117 - if (!len) 118 - break; 119 118 inp += bytes; 120 - } 119 + } while (len); 121 120 } 122 121 123 122 static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+21 -19
arch/x86/entry/calling.h
··· 98 98 #define SIZEOF_PTREGS 21*8 99 99 100 100 .macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0 101 - /* 102 - * Push registers and sanitize registers of values that a 103 - * speculation attack might otherwise want to exploit. The 104 - * lower registers are likely clobbered well before they 105 - * could be put to use in a speculative execution gadget. 106 - * Interleave XOR with PUSH for better uop scheduling: 107 - */ 108 101 .if \save_ret 109 102 pushq %rsi /* pt_regs->si */ 110 103 movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */ ··· 107 114 pushq %rsi /* pt_regs->si */ 108 115 .endif 109 116 pushq \rdx /* pt_regs->dx */ 110 - xorl %edx, %edx /* nospec dx */ 111 117 pushq %rcx /* pt_regs->cx */ 112 - xorl %ecx, %ecx /* nospec cx */ 113 118 pushq \rax /* pt_regs->ax */ 114 119 pushq %r8 /* pt_regs->r8 */ 115 - xorl %r8d, %r8d /* nospec r8 */ 116 120 pushq %r9 /* pt_regs->r9 */ 117 - xorl %r9d, %r9d /* nospec r9 */ 118 121 pushq %r10 /* pt_regs->r10 */ 119 - xorl %r10d, %r10d /* nospec r10 */ 120 122 pushq %r11 /* pt_regs->r11 */ 121 - xorl %r11d, %r11d /* nospec r11*/ 122 123 pushq %rbx /* pt_regs->rbx */ 123 - xorl %ebx, %ebx /* nospec rbx*/ 124 124 pushq %rbp /* pt_regs->rbp */ 125 - xorl %ebp, %ebp /* nospec rbp*/ 126 125 pushq %r12 /* pt_regs->r12 */ 127 - xorl %r12d, %r12d /* nospec r12*/ 128 126 pushq %r13 /* pt_regs->r13 */ 129 - xorl %r13d, %r13d /* nospec r13*/ 130 127 pushq %r14 /* pt_regs->r14 */ 131 - xorl %r14d, %r14d /* nospec r14*/ 132 128 pushq %r15 /* pt_regs->r15 */ 133 - xorl %r15d, %r15d /* nospec r15*/ 134 129 UNWIND_HINT_REGS 130 + 135 131 .if \save_ret 136 132 pushq %rsi /* return address on top of stack */ 137 133 .endif 134 + 135 + /* 136 + * Sanitize registers of values that a speculation attack might 137 + * otherwise want to exploit. The lower registers are likely clobbered 138 + * well before they could be put to use in a speculative execution 139 + * gadget. 
140 + */ 141 + xorl %edx, %edx /* nospec dx */ 142 + xorl %ecx, %ecx /* nospec cx */ 143 + xorl %r8d, %r8d /* nospec r8 */ 144 + xorl %r9d, %r9d /* nospec r9 */ 145 + xorl %r10d, %r10d /* nospec r10 */ 146 + xorl %r11d, %r11d /* nospec r11 */ 147 + xorl %ebx, %ebx /* nospec rbx */ 148 + xorl %ebp, %ebp /* nospec rbp */ 149 + xorl %r12d, %r12d /* nospec r12 */ 150 + xorl %r13d, %r13d /* nospec r13 */ 151 + xorl %r14d, %r14d /* nospec r14 */ 152 + xorl %r15d, %r15d /* nospec r15 */ 153 + 138 154 .endm 139 155 140 156 .macro POP_REGS pop_rdi=1 skip_r11rcx=0
+7 -7
arch/x86/entry/entry_64.S
··· 249 249 */ 250 250 syscall_return_via_sysret: 251 251 /* rcx and r11 are already restored (see code above) */ 252 - UNWIND_HINT_EMPTY 253 252 POP_REGS pop_rdi=0 skip_r11rcx=1 254 253 255 254 /* ··· 257 258 */ 258 259 movq %rsp, %rdi 259 260 movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp 261 + UNWIND_HINT_EMPTY 260 262 261 263 pushq RSP-RDI(%rdi) /* RSP */ 262 264 pushq (%rdi) /* RDI */ ··· 279 279 * %rdi: prev task 280 280 * %rsi: next task 281 281 */ 282 - SYM_CODE_START(__switch_to_asm) 283 - UNWIND_HINT_FUNC 282 + SYM_FUNC_START(__switch_to_asm) 284 283 /* 285 284 * Save callee-saved registers 286 285 * This must match the order in inactive_task_frame ··· 320 321 popq %rbp 321 322 322 323 jmp __switch_to 323 - SYM_CODE_END(__switch_to_asm) 324 + SYM_FUNC_END(__switch_to_asm) 324 325 325 326 /* 326 327 * A newly forked process directly context switches into this address. ··· 511 512 * +----------------------------------------------------+ 512 513 */ 513 514 SYM_CODE_START(interrupt_entry) 514 - UNWIND_HINT_FUNC 515 + UNWIND_HINT_IRET_REGS offset=16 515 516 ASM_CLAC 516 517 cld 517 518 ··· 543 544 pushq 5*8(%rdi) /* regs->eflags */ 544 545 pushq 4*8(%rdi) /* regs->cs */ 545 546 pushq 3*8(%rdi) /* regs->ip */ 547 + UNWIND_HINT_IRET_REGS 546 548 pushq 2*8(%rdi) /* regs->orig_ax */ 547 549 pushq 8(%rdi) /* return address */ 548 - UNWIND_HINT_FUNC 549 550 550 551 movq (%rdi), %rdi 551 552 jmp 2f ··· 636 637 */ 637 638 movq %rsp, %rdi 638 639 movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp 640 + UNWIND_HINT_EMPTY 639 641 640 642 /* Copy the IRET frame to the trampoline stack. */ 641 643 pushq 6*8(%rdi) /* SS */ ··· 1739 1739 1740 1740 movq PER_CPU_VAR(cpu_current_top_of_stack), %rax 1741 1741 leaq -PTREGS_SIZE(%rax), %rsp 1742 - UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE 1742 + UNWIND_HINT_REGS 1743 1743 1744 1744 call do_exit 1745 1745 SYM_CODE_END(rewind_stack_do_exit)
+10 -2
arch/x86/hyperv/hv_init.c
··· 73 73 struct page *pg; 74 74 75 75 input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg); 76 - pg = alloc_page(GFP_KERNEL); 76 + /* hv_cpu_init() can be called with IRQs disabled from hv_resume() */ 77 + pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL); 77 78 if (unlikely(!pg)) 78 79 return -ENOMEM; 79 80 *input_arg = page_address(pg); ··· 255 254 static int hv_suspend(void) 256 255 { 257 256 union hv_x64_msr_hypercall_contents hypercall_msr; 257 + int ret; 258 258 259 259 /* 260 260 * Reset the hypercall page as it is going to be invalidated ··· 272 270 hypercall_msr.enable = 0; 273 271 wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64); 274 272 275 - return 0; 273 + ret = hv_cpu_die(0); 274 + return ret; 276 275 } 277 276 278 277 static void hv_resume(void) 279 278 { 280 279 union hv_x64_msr_hypercall_contents hypercall_msr; 280 + int ret; 281 + 282 + ret = hv_cpu_init(0); 283 + WARN_ON(ret); 281 284 282 285 /* Re-enable the hypercall page */ 283 286 rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64); ··· 295 288 hv_hypercall_pg_saved = NULL; 296 289 } 297 290 291 + /* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */ 298 292 static struct syscore_ops hv_syscore_ops = { 299 293 .suspend = hv_suspend, 300 294 .resume = hv_resume,
+3 -2
arch/x86/include/asm/ftrace.h
··· 61 61 { 62 62 /* 63 63 * Compare the symbol name with the system call name. Skip the 64 - * "__x64_sys", "__ia32_sys" or simple "sys" prefix. 64 + * "__x64_sys", "__ia32_sys", "__do_sys" or simple "sys" prefix. 65 65 */ 66 66 return !strcmp(sym + 3, name + 3) || 67 67 (!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) || 68 - (!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)); 68 + (!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)) || 69 + (!strncmp(sym, "__do_sys", 8) && !strcmp(sym + 8, name + 3)); 69 70 } 70 71 71 72 #ifndef COMPILE_OFFSETS
+2 -2
arch/x86/include/asm/kvm_host.h
··· 1663 1663 static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq) 1664 1664 { 1665 1665 /* We can only post Fixed and LowPrio IRQs */ 1666 - return (irq->delivery_mode == dest_Fixed || 1667 - irq->delivery_mode == dest_LowestPrio); 1666 + return (irq->delivery_mode == APIC_DM_FIXED || 1667 + irq->delivery_mode == APIC_DM_LOWEST); 1668 1668 } 1669 1669 1670 1670 static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+2
arch/x86/include/asm/mshyperv.h
··· 35 35 rdmsrl(HV_X64_MSR_SINT0 + int_num, val) 36 36 #define hv_set_synint_state(int_num, val) \ 37 37 wrmsrl(HV_X64_MSR_SINT0 + int_num, val) 38 + #define hv_recommend_using_aeoi() \ 39 + (!(ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED)) 38 40 39 41 #define hv_get_crash_ctl(val) \ 40 42 rdmsrl(HV_X64_MSR_CRASH_CTL, val)
+1 -1
arch/x86/include/asm/unwind.h
··· 19 19 #if defined(CONFIG_UNWINDER_ORC) 20 20 bool signal, full_regs; 21 21 unsigned long sp, bp, ip; 22 - struct pt_regs *regs; 22 + struct pt_regs *regs, *prev_regs; 23 23 #elif defined(CONFIG_UNWINDER_FRAME_POINTER) 24 24 bool got_irq; 25 25 unsigned long *bp, *orig_sp, ip;
+14 -13
arch/x86/kernel/apic/apic.c
··· 352 352 * According to Intel, MFENCE can do the serialization here. 353 353 */ 354 354 asm volatile("mfence" : : : "memory"); 355 - 356 - printk_once(KERN_DEBUG "TSC deadline timer enabled\n"); 357 355 return; 358 356 } 359 357 ··· 544 546 }; 545 547 static DEFINE_PER_CPU(struct clock_event_device, lapic_events); 546 548 547 - static u32 hsx_deadline_rev(void) 549 + static __init u32 hsx_deadline_rev(void) 548 550 { 549 551 switch (boot_cpu_data.x86_stepping) { 550 552 case 0x02: return 0x3a; /* EP */ ··· 554 556 return ~0U; 555 557 } 556 558 557 - static u32 bdx_deadline_rev(void) 559 + static __init u32 bdx_deadline_rev(void) 558 560 { 559 561 switch (boot_cpu_data.x86_stepping) { 560 562 case 0x02: return 0x00000011; ··· 566 568 return ~0U; 567 569 } 568 570 569 - static u32 skx_deadline_rev(void) 571 + static __init u32 skx_deadline_rev(void) 570 572 { 571 573 switch (boot_cpu_data.x86_stepping) { 572 574 case 0x03: return 0x01000136; ··· 579 581 return ~0U; 580 582 } 581 583 582 - static const struct x86_cpu_id deadline_match[] = { 584 + static const struct x86_cpu_id deadline_match[] __initconst = { 583 585 X86_MATCH_INTEL_FAM6_MODEL( HASWELL_X, &hsx_deadline_rev), 584 586 X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_X, 0x0b000020), 585 587 X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_D, &bdx_deadline_rev), ··· 601 603 {}, 602 604 }; 603 605 604 - static void apic_check_deadline_errata(void) 606 + static __init bool apic_validate_deadline_timer(void) 605 607 { 606 608 const struct x86_cpu_id *m; 607 609 u32 rev; 608 610 609 - if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) || 610 - boot_cpu_has(X86_FEATURE_HYPERVISOR)) 611 - return; 611 + if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER)) 612 + return false; 613 + if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) 614 + return true; 612 615 613 616 m = x86_match_cpu(deadline_match); 614 617 if (!m) 615 - return; 618 + return true; 616 619 617 620 /* 618 621 * Function pointers will have the MSB set due to address layout, ··· 
625 626 rev = (u32)m->driver_data; 626 627 627 628 if (boot_cpu_data.microcode >= rev) 628 - return; 629 + return true; 629 630 630 631 setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); 631 632 pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; " 632 633 "please update microcode to version: 0x%x (or later)\n", rev); 634 + return false; 633 635 } 634 636 635 637 /* ··· 2092 2092 { 2093 2093 unsigned int new_apicid; 2094 2094 2095 - apic_check_deadline_errata(); 2095 + if (apic_validate_deadline_timer()) 2096 + pr_debug("TSC deadline timer available\n"); 2096 2097 2097 2098 if (x2apic_mode) { 2098 2099 boot_cpu_physical_apicid = read_apic_id();
+2 -1
arch/x86/kernel/dumpstack_64.c
··· 183 183 */ 184 184 if (visit_mask) { 185 185 if (*visit_mask & (1UL << info->type)) { 186 - printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type); 186 + if (task == current) 187 + printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type); 187 188 goto unknown; 188 189 } 189 190 *visit_mask |= 1UL << info->type;
+3
arch/x86/kernel/unwind_frame.c
··· 344 344 if (IS_ENABLED(CONFIG_X86_32)) 345 345 goto the_end; 346 346 347 + if (state->task != current) 348 + goto the_end; 349 + 347 350 if (state->regs) { 348 351 printk_deferred_once(KERN_WARNING 349 352 "WARNING: kernel stack regs at %p in %s:%d has bad 'bp' value %p\n",
+74 -39
arch/x86/kernel/unwind_orc.c
··· 8 8 #include <asm/orc_lookup.h> 9 9 10 10 #define orc_warn(fmt, ...) \ 11 - printk_deferred_once(KERN_WARNING pr_fmt("WARNING: " fmt), ##__VA_ARGS__) 11 + printk_deferred_once(KERN_WARNING "WARNING: " fmt, ##__VA_ARGS__) 12 + 13 + #define orc_warn_current(args...) \ 14 + ({ \ 15 + if (state->task == current) \ 16 + orc_warn(args); \ 17 + }) 12 18 13 19 extern int __start_orc_unwind_ip[]; 14 20 extern int __stop_orc_unwind_ip[]; 15 21 extern struct orc_entry __start_orc_unwind[]; 16 22 extern struct orc_entry __stop_orc_unwind[]; 17 23 18 - static DEFINE_MUTEX(sort_mutex); 19 - int *cur_orc_ip_table = __start_orc_unwind_ip; 20 - struct orc_entry *cur_orc_table = __start_orc_unwind; 21 - 22 - unsigned int lookup_num_blocks; 23 - bool orc_init; 24 + static bool orc_init __ro_after_init; 25 + static unsigned int lookup_num_blocks __ro_after_init; 24 26 25 27 static inline unsigned long orc_ip(const int *ip) 26 28 { ··· 144 142 { 145 143 static struct orc_entry *orc; 146 144 147 - if (!orc_init) 148 - return NULL; 149 - 150 145 if (ip == 0) 151 146 return &null_orc_entry; 152 147 ··· 187 188 } 188 189 189 190 #ifdef CONFIG_MODULES 191 + 192 + static DEFINE_MUTEX(sort_mutex); 193 + static int *cur_orc_ip_table = __start_orc_unwind_ip; 194 + static struct orc_entry *cur_orc_table = __start_orc_unwind; 190 195 191 196 static void orc_sort_swap(void *_a, void *_b, int size) 192 197 { ··· 384 381 return true; 385 382 } 386 383 384 + /* 385 + * If state->regs is non-NULL, and points to a full pt_regs, just get the reg 386 + * value from state->regs. 387 + * 388 + * Otherwise, if state->regs just points to IRET regs, and the previous frame 389 + * had full regs, it's safe to get the value from the previous regs. This can 390 + * happen when early/late IRQ entry code gets interrupted by an NMI. 
391 + */ 392 + static bool get_reg(struct unwind_state *state, unsigned int reg_off, 393 + unsigned long *val) 394 + { 395 + unsigned int reg = reg_off/8; 396 + 397 + if (!state->regs) 398 + return false; 399 + 400 + if (state->full_regs) { 401 + *val = ((unsigned long *)state->regs)[reg]; 402 + return true; 403 + } 404 + 405 + if (state->prev_regs) { 406 + *val = ((unsigned long *)state->prev_regs)[reg]; 407 + return true; 408 + } 409 + 410 + return false; 411 + } 412 + 387 413 bool unwind_next_frame(struct unwind_state *state) 388 414 { 389 - unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp; 415 + unsigned long ip_p, sp, tmp, orig_ip = state->ip, prev_sp = state->sp; 390 416 enum stack_type prev_type = state->stack_info.type; 391 417 struct orc_entry *orc; 392 418 bool indirect = false; ··· 477 445 break; 478 446 479 447 case ORC_REG_R10: 480 - if (!state->regs || !state->full_regs) { 481 - orc_warn("missing regs for base reg R10 at ip %pB\n", 482 - (void *)state->ip); 448 + if (!get_reg(state, offsetof(struct pt_regs, r10), &sp)) { 449 + orc_warn_current("missing R10 value at %pB\n", 450 + (void *)state->ip); 483 451 goto err; 484 452 } 485 - sp = state->regs->r10; 486 453 break; 487 454 488 455 case ORC_REG_R13: 489 - if (!state->regs || !state->full_regs) { 490 - orc_warn("missing regs for base reg R13 at ip %pB\n", 491 - (void *)state->ip); 456 + if (!get_reg(state, offsetof(struct pt_regs, r13), &sp)) { 457 + orc_warn_current("missing R13 value at %pB\n", 458 + (void *)state->ip); 492 459 goto err; 493 460 } 494 - sp = state->regs->r13; 495 461 break; 496 462 497 463 case ORC_REG_DI: 498 - if (!state->regs || !state->full_regs) { 499 - orc_warn("missing regs for base reg DI at ip %pB\n", 500 - (void *)state->ip); 464 + if (!get_reg(state, offsetof(struct pt_regs, di), &sp)) { 465 + orc_warn_current("missing RDI value at %pB\n", 466 + (void *)state->ip); 501 467 goto err; 502 468 } 503 - sp = state->regs->di; 504 469 break; 505 470 506 471 
case ORC_REG_DX: 507 - if (!state->regs || !state->full_regs) { 508 - orc_warn("missing regs for base reg DX at ip %pB\n", 509 - (void *)state->ip); 472 + if (!get_reg(state, offsetof(struct pt_regs, dx), &sp)) { 473 + orc_warn_current("missing DX value at %pB\n", 474 + (void *)state->ip); 510 475 goto err; 511 476 } 512 - sp = state->regs->dx; 513 477 break; 514 478 515 479 default: 516 - orc_warn("unknown SP base reg %d for ip %pB\n", 480 + orc_warn("unknown SP base reg %d at %pB\n", 517 481 orc->sp_reg, (void *)state->ip); 518 482 goto err; 519 483 } ··· 532 504 533 505 state->sp = sp; 534 506 state->regs = NULL; 507 + state->prev_regs = NULL; 535 508 state->signal = false; 536 509 break; 537 510 538 511 case ORC_TYPE_REGS: 539 512 if (!deref_stack_regs(state, sp, &state->ip, &state->sp)) { 540 - orc_warn("can't dereference registers at %p for ip %pB\n", 541 - (void *)sp, (void *)orig_ip); 513 + orc_warn_current("can't access registers at %pB\n", 514 + (void *)orig_ip); 542 515 goto err; 543 516 } 544 517 545 518 state->regs = (struct pt_regs *)sp; 519 + state->prev_regs = NULL; 546 520 state->full_regs = true; 547 521 state->signal = true; 548 522 break; 549 523 550 524 case ORC_TYPE_REGS_IRET: 551 525 if (!deref_stack_iret_regs(state, sp, &state->ip, &state->sp)) { 552 - orc_warn("can't dereference iret registers at %p for ip %pB\n", 553 - (void *)sp, (void *)orig_ip); 526 + orc_warn_current("can't access iret registers at %pB\n", 527 + (void *)orig_ip); 554 528 goto err; 555 529 } 556 530 531 + if (state->full_regs) 532 + state->prev_regs = state->regs; 557 533 state->regs = (void *)sp - IRET_FRAME_OFFSET; 558 534 state->full_regs = false; 559 535 state->signal = true; 560 536 break; 561 537 562 538 default: 563 - orc_warn("unknown .orc_unwind entry type %d for ip %pB\n", 539 + orc_warn("unknown .orc_unwind entry type %d at %pB\n", 564 540 orc->type, (void *)orig_ip); 565 - break; 541 + goto err; 566 542 } 567 543 568 544 /* Find BP: */ 569 545 switch 
(orc->bp_reg) { 570 546 case ORC_REG_UNDEFINED: 571 - if (state->regs && state->full_regs) 572 - state->bp = state->regs->bp; 547 + if (get_reg(state, offsetof(struct pt_regs, bp), &tmp)) 548 + state->bp = tmp; 573 549 break; 574 550 575 551 case ORC_REG_PREV_SP: ··· 596 564 if (state->stack_info.type == prev_type && 597 565 on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) && 598 566 state->sp <= prev_sp) { 599 - orc_warn("stack going in the wrong direction? ip=%pB\n", 600 - (void *)orig_ip); 567 + orc_warn_current("stack going in the wrong direction? at %pB\n", 568 + (void *)orig_ip); 601 569 goto err; 602 570 } 603 571 ··· 617 585 void __unwind_start(struct unwind_state *state, struct task_struct *task, 618 586 struct pt_regs *regs, unsigned long *first_frame) 619 587 { 588 + if (!orc_init) 589 + goto done; 590 + 620 591 memset(state, 0, sizeof(*state)); 621 592 state->task = task; 622 593 ··· 686 651 /* Otherwise, skip ahead to the user-specified starting frame: */ 687 652 while (!unwind_done(state) && 688 653 (!on_stack(&state->stack_info, first_frame, sizeof(long)) || 689 - state->sp <= (unsigned long)first_frame)) 654 + state->sp < (unsigned long)first_frame)) 690 655 unwind_next_frame(state); 691 656 692 657 return;
+5 -5
arch/x86/kvm/ioapic.c
··· 225 225 } 226 226 227 227 /* 228 - * AMD SVM AVIC accelerate EOI write and do not trap, 229 - * in-kernel IOAPIC will not be able to receive the EOI. 230 - * In this case, we do lazy update of the pending EOI when 231 - * trying to set IOAPIC irq. 228 + * AMD SVM AVIC accelerate EOI write iff the interrupt is edge 229 + * triggered, in which case the in-kernel IOAPIC will not be able 230 + * to receive the EOI. In this case, we do a lazy update of the 231 + * pending EOI when trying to set IOAPIC irq. 232 232 */ 233 - if (kvm_apicv_activated(ioapic->kvm)) 233 + if (edge && kvm_apicv_activated(ioapic->kvm)) 234 234 ioapic_lazy_update_eoi(ioapic, irq); 235 235 236 236 /*
+1 -1
arch/x86/kvm/svm/sev.c
··· 345 345 return NULL; 346 346 347 347 /* Pin the user virtual address. */ 348 - npinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages); 348 + npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages); 349 349 if (npinned != npages) { 350 350 pr_err("SEV: Failure locking %lu pages.\n", npages); 351 351 goto err;
+2
arch/x86/kvm/svm/svm.c
··· 1752 1752 if (svm->vcpu.guest_debug & 1753 1753 (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) { 1754 1754 kvm_run->exit_reason = KVM_EXIT_DEBUG; 1755 + kvm_run->debug.arch.dr6 = svm->vmcb->save.dr6; 1756 + kvm_run->debug.arch.dr7 = svm->vmcb->save.dr7; 1755 1757 kvm_run->debug.arch.pc = 1756 1758 svm->vmcb->save.cs.base + svm->vmcb->save.rip; 1757 1759 kvm_run->debug.arch.exception = DB_VECTOR;
+1 -1
arch/x86/kvm/vmx/nested.c
··· 5165 5165 */ 5166 5166 break; 5167 5167 default: 5168 - BUG_ON(1); 5168 + BUG(); 5169 5169 break; 5170 5170 } 5171 5171
+3
arch/x86/kvm/vmx/vmenter.S
··· 82 82 /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */ 83 83 FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE 84 84 85 + /* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */ 86 + or $1, %_ASM_AX 87 + 85 88 pop %_ASM_AX 86 89 .Lvmexit_skip_rsb: 87 90 #endif
+6 -15
arch/x86/kvm/x86.c
··· 926 926 __reserved_bits; \ 927 927 }) 928 928 929 - static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c) 930 - { 931 - u64 reserved_bits = __cr4_reserved_bits(cpu_has, c); 932 - 933 - if (kvm_cpu_cap_has(X86_FEATURE_LA57)) 934 - reserved_bits &= ~X86_CR4_LA57; 935 - 936 - if (kvm_cpu_cap_has(X86_FEATURE_UMIP)) 937 - reserved_bits &= ~X86_CR4_UMIP; 938 - 939 - return reserved_bits; 940 - } 941 - 942 929 static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) 943 930 { 944 931 if (cr4 & cr4_reserved_bits) ··· 3372 3385 case KVM_CAP_GET_MSR_FEATURES: 3373 3386 case KVM_CAP_MSR_PLATFORM_INFO: 3374 3387 case KVM_CAP_EXCEPTION_PAYLOAD: 3388 + case KVM_CAP_SET_GUEST_DEBUG: 3375 3389 r = 1; 3376 3390 break; 3377 3391 case KVM_CAP_SYNC_REGS: ··· 9663 9675 if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES)) 9664 9676 supported_xss = 0; 9665 9677 9666 - cr4_reserved_bits = kvm_host_cr4_reserved_bits(&boot_cpu_data); 9678 + #define __kvm_cpu_cap_has(UNUSED_, f) kvm_cpu_cap_has(f) 9679 + cr4_reserved_bits = __cr4_reserved_bits(__kvm_cpu_cap_has, UNUSED_); 9680 + #undef __kvm_cpu_cap_has 9667 9681 9668 9682 if (kvm_has_tsc_control) { 9669 9683 /* ··· 9697 9707 9698 9708 WARN_ON(!irqs_disabled()); 9699 9709 9700 - if (kvm_host_cr4_reserved_bits(c) != cr4_reserved_bits) 9710 + if (__cr4_reserved_bits(cpu_has, c) != 9711 + __cr4_reserved_bits(cpu_has, &boot_cpu_data)) 9701 9712 return -EIO; 9702 9713 9703 9714 return ops->check_processor_compatibility();
+8 -4
arch/x86/mm/pat/set_memory.c
··· 43 43 unsigned long pfn; 44 44 unsigned int flags; 45 45 unsigned int force_split : 1, 46 - force_static_prot : 1; 46 + force_static_prot : 1, 47 + force_flush_all : 1; 47 48 struct page **pages; 48 49 }; 49 50 ··· 356 355 return; 357 356 } 358 357 359 - if (cpa->numpages <= tlb_single_page_flush_ceiling) 360 - on_each_cpu(__cpa_flush_tlb, cpa, 1); 361 - else 358 + if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling) 362 359 flush_tlb_all(); 360 + else 361 + on_each_cpu(__cpa_flush_tlb, cpa, 1); 363 362 364 363 if (!cache) 365 364 return; ··· 1599 1598 alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY); 1600 1599 alias_cpa.curpage = 0; 1601 1600 1601 + cpa->force_flush_all = 1; 1602 + 1602 1603 ret = __change_page_attr_set_clr(&alias_cpa, 0); 1603 1604 if (ret) 1604 1605 return ret; ··· 1621 1618 alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY); 1622 1619 alias_cpa.curpage = 0; 1623 1620 1621 + cpa->force_flush_all = 1; 1624 1622 /* 1625 1623 * The high mapping range is imprecise, so ignore the 1626 1624 * return value.
+4 -2
block/bfq-iosched.c
··· 123 123 #include <linux/ioprio.h> 124 124 #include <linux/sbitmap.h> 125 125 #include <linux/delay.h> 126 + #include <linux/backing-dev.h> 126 127 127 128 #include "blk.h" 128 129 #include "blk-mq.h" ··· 4977 4976 ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); 4978 4977 switch (ioprio_class) { 4979 4978 default: 4980 - dev_err(bfqq->bfqd->queue->backing_dev_info->dev, 4981 - "bfq: bad prio class %d\n", ioprio_class); 4979 + pr_err("bdi %s: bfq: bad prio class %d\n", 4980 + bdi_dev_name(bfqq->bfqd->queue->backing_dev_info), 4981 + ioprio_class); 4982 4982 /* fall through */ 4983 4983 case IOPRIO_CLASS_NONE: 4984 4984 /*
+1 -1
block/blk-cgroup.c
··· 496 496 { 497 497 /* some drivers (floppy) instantiate a queue w/o disk registered */ 498 498 if (blkg->q->backing_dev_info->dev) 499 - return dev_name(blkg->q->backing_dev_info->dev); 499 + return bdi_dev_name(blkg->q->backing_dev_info); 500 500 return NULL; 501 501 } 502 502
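Both block-layer hunks above (bfq-iosched.c and blk-cgroup.c) swap a direct dev_name(...->backing_dev_info->dev) for bdi_dev_name(), which tolerates a queue whose device was never registered or is already gone. A simplified userspace model of that accessor (the struct layouts are stand-ins; the "(unknown)" fallback matches the kernel helper):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for struct device / struct backing_dev_info. */
struct device { const char *name; };
struct backing_dev_info { struct device *dev; };

/* bdi_dev_name-style accessor: never dereferences a NULL ->dev, which
 * is exactly the window blkg_dev_name() and bfq could hit for queues
 * without a registered device (e.g. floppy). */
static const char *bdi_dev_name(const struct backing_dev_info *bdi)
{
	if (!bdi || !bdi->dev)
		return "(unknown)";
	return bdi->dev->name;
}
```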
+71 -46
block/blk-iocost.c
··· 466 466 */ 467 467 atomic64_t vtime; 468 468 atomic64_t done_vtime; 469 - atomic64_t abs_vdebt; 469 + u64 abs_vdebt; 470 470 u64 last_vtime; 471 471 472 472 /* ··· 1142 1142 struct iocg_wake_ctx ctx = { .iocg = iocg }; 1143 1143 u64 margin_ns = (u64)(ioc->period_us * 1144 1144 WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC; 1145 - u64 abs_vdebt, vdebt, vshortage, expires, oexpires; 1145 + u64 vdebt, vshortage, expires, oexpires; 1146 1146 s64 vbudget; 1147 1147 u32 hw_inuse; 1148 1148 ··· 1152 1152 vbudget = now->vnow - atomic64_read(&iocg->vtime); 1153 1153 1154 1154 /* pay off debt */ 1155 - abs_vdebt = atomic64_read(&iocg->abs_vdebt); 1156 - vdebt = abs_cost_to_cost(abs_vdebt, hw_inuse); 1155 + vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse); 1157 1156 if (vdebt && vbudget > 0) { 1158 1157 u64 delta = min_t(u64, vbudget, vdebt); 1159 1158 u64 abs_delta = min(cost_to_abs_cost(delta, hw_inuse), 1160 - abs_vdebt); 1159 + iocg->abs_vdebt); 1161 1160 1162 1161 atomic64_add(delta, &iocg->vtime); 1163 1162 atomic64_add(delta, &iocg->done_vtime); 1164 - atomic64_sub(abs_delta, &iocg->abs_vdebt); 1165 - if (WARN_ON_ONCE(atomic64_read(&iocg->abs_vdebt) < 0)) 1166 - atomic64_set(&iocg->abs_vdebt, 0); 1163 + iocg->abs_vdebt -= abs_delta; 1167 1164 } 1168 1165 1169 1166 /* ··· 1216 1219 u64 expires, oexpires; 1217 1220 u32 hw_inuse; 1218 1221 1222 + lockdep_assert_held(&iocg->waitq.lock); 1223 + 1219 1224 /* debt-adjust vtime */ 1220 1225 current_hweight(iocg, NULL, &hw_inuse); 1221 - vtime += abs_cost_to_cost(atomic64_read(&iocg->abs_vdebt), hw_inuse); 1226 + vtime += abs_cost_to_cost(iocg->abs_vdebt, hw_inuse); 1222 1227 1223 - /* clear or maintain depending on the overage */ 1224 - if (time_before_eq64(vtime, now->vnow)) { 1228 + /* 1229 + * Clear or maintain depending on the overage. Non-zero vdebt is what 1230 + * guarantees that @iocg is online and future iocg_kick_delay() will 1231 + * clear use_delay. Don't leave it on when there's no vdebt. 
1232 + */ 1233 + if (!iocg->abs_vdebt || time_before_eq64(vtime, now->vnow)) { 1225 1234 blkcg_clear_delay(blkg); 1226 1235 return false; 1227 1236 } ··· 1261 1258 { 1262 1259 struct ioc_gq *iocg = container_of(timer, struct ioc_gq, delay_timer); 1263 1260 struct ioc_now now; 1261 + unsigned long flags; 1264 1262 1263 + spin_lock_irqsave(&iocg->waitq.lock, flags); 1265 1264 ioc_now(iocg->ioc, &now); 1266 1265 iocg_kick_delay(iocg, &now, 0); 1266 + spin_unlock_irqrestore(&iocg->waitq.lock, flags); 1267 1267 1268 1268 return HRTIMER_NORESTART; 1269 1269 } ··· 1374 1368 * should have woken up in the last period and expire idle iocgs. 1375 1369 */ 1376 1370 list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) { 1377 - if (!waitqueue_active(&iocg->waitq) && 1378 - !atomic64_read(&iocg->abs_vdebt) && !iocg_is_idle(iocg)) 1371 + if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt && 1372 + !iocg_is_idle(iocg)) 1379 1373 continue; 1380 1374 1381 1375 spin_lock(&iocg->waitq.lock); 1382 1376 1383 - if (waitqueue_active(&iocg->waitq) || 1384 - atomic64_read(&iocg->abs_vdebt)) { 1377 + if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt) { 1385 1378 /* might be oversleeping vtime / hweight changes, kick */ 1386 1379 iocg_kick_waitq(iocg, &now); 1387 1380 iocg_kick_delay(iocg, &now, 0); ··· 1723 1718 * tests are racy but the races aren't systemic - we only miss once 1724 1719 * in a while which is fine. 1725 1720 */ 1726 - if (!waitqueue_active(&iocg->waitq) && 1727 - !atomic64_read(&iocg->abs_vdebt) && 1721 + if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt && 1728 1722 time_before_eq64(vtime + cost, now.vnow)) { 1729 1723 iocg_commit_bio(iocg, bio, cost); 1730 1724 return; 1731 1725 } 1732 1726 1733 1727 /* 1734 - * We're over budget. If @bio has to be issued regardless, 1735 - * remember the abs_cost instead of advancing vtime. 1736 - * iocg_kick_waitq() will pay off the debt before waking more IOs. 
1728 + * We activated above but w/o any synchronization. Deactivation is 1729 + * synchronized with waitq.lock and we won't get deactivated as long 1730 + * as we're waiting or has debt, so we're good if we're activated 1731 + * here. In the unlikely case that we aren't, just issue the IO. 1732 + */ 1733 + spin_lock_irq(&iocg->waitq.lock); 1734 + 1735 + if (unlikely(list_empty(&iocg->active_list))) { 1736 + spin_unlock_irq(&iocg->waitq.lock); 1737 + iocg_commit_bio(iocg, bio, cost); 1738 + return; 1739 + } 1740 + 1741 + /* 1742 + * We're over budget. If @bio has to be issued regardless, remember 1743 + * the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay 1744 + * off the debt before waking more IOs. 1745 + * 1737 1746 * This way, the debt is continuously paid off each period with the 1738 - * actual budget available to the cgroup. If we just wound vtime, 1739 - * we would incorrectly use the current hw_inuse for the entire 1740 - * amount which, for example, can lead to the cgroup staying 1741 - * blocked for a long time even with substantially raised hw_inuse. 1747 + * actual budget available to the cgroup. If we just wound vtime, we 1748 + * would incorrectly use the current hw_inuse for the entire amount 1749 + * which, for example, can lead to the cgroup staying blocked for a 1750 + * long time even with substantially raised hw_inuse. 1751 + * 1752 + * An iocg with vdebt should stay online so that the timer can keep 1753 + * deducting its vdebt and [de]activate use_delay mechanism 1754 + * accordingly. We don't want to race against the timer trying to 1755 + * clear them and leave @iocg inactive w/ dangling use_delay heavily 1756 + * penalizing the cgroup and its descendants. 
1742 1757 */ 1743 1758 if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) { 1744 - atomic64_add(abs_cost, &iocg->abs_vdebt); 1759 + iocg->abs_vdebt += abs_cost; 1745 1760 if (iocg_kick_delay(iocg, &now, cost)) 1746 1761 blkcg_schedule_throttle(rqos->q, 1747 1762 (bio->bi_opf & REQ_SWAP) == REQ_SWAP); 1763 + spin_unlock_irq(&iocg->waitq.lock); 1748 1764 return; 1749 1765 } 1750 1766 ··· 1782 1756 * All waiters are on iocg->waitq and the wait states are 1783 1757 * synchronized using waitq.lock. 1784 1758 */ 1785 - spin_lock_irq(&iocg->waitq.lock); 1786 - 1787 - /* 1788 - * We activated above but w/o any synchronization. Deactivation is 1789 - * synchronized with waitq.lock and we won't get deactivated as 1790 - * long as we're waiting, so we're good if we're activated here. 1791 - * In the unlikely case that we are deactivated, just issue the IO. 1792 - */ 1793 - if (unlikely(list_empty(&iocg->active_list))) { 1794 - spin_unlock_irq(&iocg->waitq.lock); 1795 - iocg_commit_bio(iocg, bio, cost); 1796 - return; 1797 - } 1798 - 1799 1759 init_waitqueue_func_entry(&wait.wait, iocg_wake_fn); 1800 1760 wait.wait.private = current; 1801 1761 wait.bio = bio; ··· 1813 1801 struct ioc_now now; 1814 1802 u32 hw_inuse; 1815 1803 u64 abs_cost, cost; 1804 + unsigned long flags; 1816 1805 1817 1806 /* bypass if disabled or for root cgroup */ 1818 1807 if (!ioc->enabled || !iocg->level) ··· 1833 1820 iocg->cursor = bio_end; 1834 1821 1835 1822 /* 1836 - * Charge if there's enough vtime budget and the existing request 1837 - * has cost assigned. Otherwise, account it as debt. See debt 1838 - * handling in ioc_rqos_throttle() for details. 1823 + * Charge if there's enough vtime budget and the existing request has 1824 + * cost assigned. 
1839 1825 */ 1840 1826 if (rq->bio && rq->bio->bi_iocost_cost && 1841 - time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) 1827 + time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) { 1842 1828 iocg_commit_bio(iocg, bio, cost); 1843 - else 1844 - atomic64_add(abs_cost, &iocg->abs_vdebt); 1829 + return; 1830 + } 1831 + 1832 + /* 1833 + * Otherwise, account it as debt if @iocg is online, which it should 1834 + * be for the vast majority of cases. See debt handling in 1835 + * ioc_rqos_throttle() for details. 1836 + */ 1837 + spin_lock_irqsave(&iocg->waitq.lock, flags); 1838 + if (likely(!list_empty(&iocg->active_list))) { 1839 + iocg->abs_vdebt += abs_cost; 1840 + iocg_kick_delay(iocg, &now, cost); 1841 + } else { 1842 + iocg_commit_bio(iocg, bio, cost); 1843 + } 1844 + spin_unlock_irqrestore(&iocg->waitq.lock, flags); 1845 1845 } 1846 1846 1847 1847 static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio) ··· 2024 1998 iocg->ioc = ioc; 2025 1999 atomic64_set(&iocg->vtime, now.vnow); 2026 2000 atomic64_set(&iocg->done_vtime, now.vnow); 2027 - atomic64_set(&iocg->abs_vdebt, 0); 2028 2001 atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period)); 2029 2002 INIT_LIST_HEAD(&iocg->active_list); 2030 2003 iocg->hweight_active = HWEIGHT_WHOLE;
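The blk-iocost hunks above demote abs_vdebt from an atomic64_t to a plain u64 that may only be touched under iocg->waitq.lock (note the added lockdep_assert_held() and the spin_lock_irq sections). With the lock held, the old atomic version's underflow defense (WARN_ON plus reset to 0) becomes unnecessary, since the read-modify-write can no longer race. A stripped-down sketch of the resulting invariant, with locking elided and callers assumed to hold waitq.lock:

```c
#include <assert.h>
#include <stdint.h>

struct iocg {
	uint64_t abs_vdebt;   /* protected by iocg->waitq.lock */
};

/* caller must hold waitq.lock */
static void iocg_add_debt(struct iocg *iocg, uint64_t abs_cost)
{
	iocg->abs_vdebt += abs_cost;
}

/* caller must hold waitq.lock; pay off up to `budget`, return the
 * amount actually paid.  min() + subtraction cannot underflow here,
 * so no WARN_ON/clamp is needed. */
static uint64_t iocg_pay_debt(struct iocg *iocg, uint64_t budget)
{
	uint64_t paid = iocg->abs_vdebt < budget ? iocg->abs_vdebt : budget;

	iocg->abs_vdebt -= paid;
	return paid;
}
```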
+1 -1
block/partitions/core.c
··· 496 496 497 497 if (!disk_part_scan_enabled(disk)) 498 498 return 0; 499 - if (bdev->bd_part_count || bdev->bd_openers > 1) 499 + if (bdev->bd_part_count) 500 500 return -EBUSY; 501 501 res = invalidate_partition(disk, 0); 502 502 if (res)
+3 -3
crypto/lrw.c
··· 287 287 crypto_free_skcipher(ctx->child); 288 288 } 289 289 290 - static void free(struct skcipher_instance *inst) 290 + static void free_inst(struct skcipher_instance *inst) 291 291 { 292 292 crypto_drop_skcipher(skcipher_instance_ctx(inst)); 293 293 kfree(inst); ··· 400 400 inst->alg.encrypt = encrypt; 401 401 inst->alg.decrypt = decrypt; 402 402 403 - inst->free = free; 403 + inst->free = free_inst; 404 404 405 405 err = skcipher_register_instance(tmpl, inst); 406 406 if (err) { 407 407 err_free_inst: 408 - free(inst); 408 + free_inst(inst); 409 409 } 410 410 return err; 411 411 }
+3 -3
crypto/xts.c
··· 322 322 crypto_free_cipher(ctx->tweak); 323 323 } 324 324 325 - static void free(struct skcipher_instance *inst) 325 + static void free_inst(struct skcipher_instance *inst) 326 326 { 327 327 crypto_drop_skcipher(skcipher_instance_ctx(inst)); 328 328 kfree(inst); ··· 434 434 inst->alg.encrypt = encrypt; 435 435 inst->alg.decrypt = decrypt; 436 436 437 - inst->free = free; 437 + inst->free = free_inst; 438 438 439 439 err = skcipher_register_instance(tmpl, inst); 440 440 if (err) { 441 441 err_free_inst: 442 - free(inst); 442 + free_inst(inst); 443 443 } 444 444 return err; 445 445 }
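The identical rename in lrw.c and xts.c above changes nothing behaviorally; a plausible reading is that a file-scope function named free collides with the compiler's built-in declaration of free(void *), which newer gcc releases diagnose. A userspace sketch of the renamed shape (the struct and the allocation pair are simplified stand-ins for the crypto template's alloc/drop path):

```c
#include <assert.h>
#include <stdlib.h>

struct skcipher_instance { int registered; };

static struct skcipher_instance *alloc_inst(void)
{
	return calloc(1, sizeof(struct skcipher_instance));
}

/* Naming this helper `free`, as lrw.c/xts.c previously did, shadows
 * the built-in free(void *) within the translation unit; an
 * unambiguous name avoids the clash without changing behaviour. */
static void free_inst(struct skcipher_instance *inst)
{
	free(inst);   /* stands in for crypto_drop_skcipher() + kfree() */
}
```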
+2 -2
drivers/acpi/device_pm.c
··· 273 273 end: 274 274 if (result) { 275 275 dev_warn(&device->dev, "Failed to change power state to %s\n", 276 - acpi_power_state_string(state)); 276 + acpi_power_state_string(target_state)); 277 277 } else { 278 278 device->power.state = target_state; 279 279 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 280 280 "Device [%s] transitioned to %s\n", 281 281 device->pnp.bus_id, 282 - acpi_power_state_string(state))); 282 + acpi_power_state_string(target_state))); 283 283 } 284 284 285 285 return result;
+1
drivers/amba/bus.c
··· 645 645 dev->dev.release = amba_device_release; 646 646 dev->dev.bus = &amba_bustype; 647 647 dev->dev.dma_mask = &dev->dev.coherent_dma_mask; 648 + dev->dev.dma_parms = &dev->dma_parms; 648 649 dev->res.name = dev_name(&dev->dev); 649 650 } 650 651
+5 -3
drivers/base/component.c
··· 256 256 ret = master->ops->bind(master->dev); 257 257 if (ret < 0) { 258 258 devres_release_group(master->dev, NULL); 259 - dev_info(master->dev, "master bind failed: %d\n", ret); 259 + if (ret != -EPROBE_DEFER) 260 + dev_info(master->dev, "master bind failed: %d\n", ret); 260 261 return ret; 261 262 } 262 263 ··· 612 611 devres_release_group(component->dev, NULL); 613 612 devres_release_group(master->dev, NULL); 614 613 615 - dev_err(master->dev, "failed to bind %s (ops %ps): %d\n", 616 - dev_name(component->dev), component->ops, ret); 614 + if (ret != -EPROBE_DEFER) 615 + dev_err(master->dev, "failed to bind %s (ops %ps): %d\n", 616 + dev_name(component->dev), component->ops, ret); 617 617 } 618 618 619 619 return ret;
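The component.c change above stops logging -EPROBE_DEFER as an error: a deferred bind is a normal, retried condition, and printing it on every pass floods the console on systems where bind order settles late. The filtering idiom, sketched in userspace (EPROBE_DEFER is a kernel-internal errno, so it is defined locally here):

```c
#include <assert.h>

#define EPROBE_DEFER 517   /* kernel-internal errno; not in userspace headers */

static int errors_logged;

/* Log a bind failure only when it is a real failure; a deferral will
 * be retried and should stay quiet. */
static int report_bind_result(int ret)
{
	if (ret < 0 && ret != -EPROBE_DEFER)
		errors_logged++;   /* dev_err(...) in the driver core */
	return ret;
}
```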
+6 -1
drivers/base/core.c
··· 2370 2370 return fw_devlink_flags; 2371 2371 } 2372 2372 2373 + static bool fw_devlink_is_permissive(void) 2374 + { 2375 + return fw_devlink_flags == DL_FLAG_SYNC_STATE_ONLY; 2376 + } 2377 + 2373 2378 /** 2374 2379 * device_add - add device to device hierarchy. 2375 2380 * @dev: device. ··· 2529 2524 if (fw_devlink_flags && is_fwnode_dev && 2530 2525 fwnode_has_op(dev->fwnode, add_links)) { 2531 2526 fw_ret = fwnode_call_int_op(dev->fwnode, add_links, dev); 2532 - if (fw_ret == -ENODEV) 2527 + if (fw_ret == -ENODEV && !fw_devlink_is_permissive()) 2533 2528 device_link_wait_for_mandatory_supplier(dev); 2534 2529 else if (fw_ret) 2535 2530 device_link_wait_for_optional_supplier(dev);
+8 -12
drivers/base/dd.c
··· 224 224 } 225 225 DEFINE_SHOW_ATTRIBUTE(deferred_devs); 226 226 227 - #ifdef CONFIG_MODULES 228 - /* 229 - * In the case of modules, set the default probe timeout to 230 - * 30 seconds to give userland some time to load needed modules 231 - */ 232 - int driver_deferred_probe_timeout = 30; 233 - #else 234 - /* In the case of !modules, no probe timeout needed */ 235 - int driver_deferred_probe_timeout = -1; 236 - #endif 227 + int driver_deferred_probe_timeout; 237 228 EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout); 229 + static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue); 238 230 239 231 static int __init deferred_probe_timeout_setup(char *str) 240 232 { ··· 258 266 return -ENODEV; 259 267 } 260 268 261 - if (!driver_deferred_probe_timeout) { 262 - dev_WARN(dev, "deferred probe timeout, ignoring dependency"); 269 + if (!driver_deferred_probe_timeout && initcalls_done) { 270 + dev_warn(dev, "deferred probe timeout, ignoring dependency"); 263 271 return -ETIMEDOUT; 264 272 } 265 273 ··· 276 284 277 285 list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe) 278 286 dev_info(private->device, "deferred probe pending"); 287 + wake_up(&probe_timeout_waitqueue); 279 288 } 280 289 static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func); 281 290 ··· 651 658 */ 652 659 void wait_for_device_probe(void) 653 660 { 661 + /* wait for probe timeout */ 662 + wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout); 663 + 654 664 /* wait for the deferred probe workqueue to finish */ 655 665 flush_work(&deferred_probe_work); 656 666
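The dd.c hunk above flips the deferred-probe timeout default to 0 and, to keep that from breaking early boot, makes the give-up branch additionally require initcalls_done (it also downgrades dev_WARN to dev_warn and adds a waitqueue so wait_for_device_probe() waits out the timeout). A simplified model of the state check (return values follow the kernel's, with EPROBE_DEFER defined locally since it is kernel-internal):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define EPROBE_DEFER 517   /* kernel-internal errno */

static int deferred_probe_timeout;   /* now defaults to 0, not 30/-1 */
static bool initcalls_done;
static int warnings;

/* Simplified driver_deferred_probe_check_state(): only give up on a
 * missing dependency once initcalls have finished and no timeout is
 * still pending; otherwise keep deferring. */
static int deferred_probe_check_state(void)
{
	if (!deferred_probe_timeout && initcalls_done) {
		warnings++;              /* dev_warn(), no longer dev_WARN() */
		return -ETIMEDOUT;
	}
	return -EPROBE_DEFER;
}
```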
+2
drivers/base/platform.c
··· 380 380 */ 381 381 static void setup_pdev_dma_masks(struct platform_device *pdev) 382 382 { 383 + pdev->dev.dma_parms = &pdev->dma_parms; 384 + 383 385 if (!pdev->dev.coherent_dma_mask) 384 386 pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 385 387 if (!pdev->dev.dma_mask) {
+78 -8
drivers/block/virtio_blk.c
··· 33 33 } ____cacheline_aligned_in_smp; 34 34 35 35 struct virtio_blk { 36 + /* 37 + * This mutex must be held by anything that may run after 38 + * virtblk_remove() sets vblk->vdev to NULL. 39 + * 40 + * blk-mq, virtqueue processing, and sysfs attribute code paths are 41 + * shut down before vblk->vdev is set to NULL and therefore do not need 42 + * to hold this mutex. 43 + */ 44 + struct mutex vdev_mutex; 36 45 struct virtio_device *vdev; 37 46 38 47 /* The disk structure for the kernel. */ ··· 52 43 53 44 /* Process context for config space updates */ 54 45 struct work_struct config_work; 46 + 47 + /* 48 + * Tracks references from block_device_operations open/release and 49 + * virtio_driver probe/remove so this object can be freed once no 50 + * longer in use. 51 + */ 52 + refcount_t refs; 55 53 56 54 /* What host tells us, plus 2 for header & tailer. */ 57 55 unsigned int sg_elems; ··· 311 295 return err; 312 296 } 313 297 298 + static void virtblk_get(struct virtio_blk *vblk) 299 + { 300 + refcount_inc(&vblk->refs); 301 + } 302 + 303 + static void virtblk_put(struct virtio_blk *vblk) 304 + { 305 + if (refcount_dec_and_test(&vblk->refs)) { 306 + ida_simple_remove(&vd_index_ida, vblk->index); 307 + mutex_destroy(&vblk->vdev_mutex); 308 + kfree(vblk); 309 + } 310 + } 311 + 312 + static int virtblk_open(struct block_device *bd, fmode_t mode) 313 + { 314 + struct virtio_blk *vblk = bd->bd_disk->private_data; 315 + int ret = 0; 316 + 317 + mutex_lock(&vblk->vdev_mutex); 318 + 319 + if (vblk->vdev) 320 + virtblk_get(vblk); 321 + else 322 + ret = -ENXIO; 323 + 324 + mutex_unlock(&vblk->vdev_mutex); 325 + return ret; 326 + } 327 + 328 + static void virtblk_release(struct gendisk *disk, fmode_t mode) 329 + { 330 + struct virtio_blk *vblk = disk->private_data; 331 + 332 + virtblk_put(vblk); 333 + } 334 + 314 335 /* We provide getgeo only to please some old bootloader/partitioning tools */ 315 336 static int virtblk_getgeo(struct block_device *bd, struct hd_geometry 
*geo) 316 337 { 317 338 struct virtio_blk *vblk = bd->bd_disk->private_data; 339 + int ret = 0; 340 + 341 + mutex_lock(&vblk->vdev_mutex); 342 + 343 + if (!vblk->vdev) { 344 + ret = -ENXIO; 345 + goto out; 346 + } 318 347 319 348 /* see if the host passed in geometry config */ 320 349 if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) { ··· 375 314 geo->sectors = 1 << 5; 376 315 geo->cylinders = get_capacity(bd->bd_disk) >> 11; 377 316 } 378 - return 0; 317 + out: 318 + mutex_unlock(&vblk->vdev_mutex); 319 + return ret; 379 320 } 380 321 381 322 static const struct block_device_operations virtblk_fops = { 382 323 .owner = THIS_MODULE, 324 + .open = virtblk_open, 325 + .release = virtblk_release, 383 326 .getgeo = virtblk_getgeo, 384 327 }; 385 328 ··· 720 655 goto out_free_index; 721 656 } 722 657 658 + /* This reference is dropped in virtblk_remove(). */ 659 + refcount_set(&vblk->refs, 1); 660 + mutex_init(&vblk->vdev_mutex); 661 + 723 662 vblk->vdev = vdev; 724 663 vblk->sg_elems = sg_elems; 725 664 ··· 889 820 static void virtblk_remove(struct virtio_device *vdev) 890 821 { 891 822 struct virtio_blk *vblk = vdev->priv; 892 - int index = vblk->index; 893 - int refc; 894 823 895 824 /* Make sure no work handler is accessing the device. */ 896 825 flush_work(&vblk->config_work); ··· 898 831 899 832 blk_mq_free_tag_set(&vblk->tag_set); 900 833 834 + mutex_lock(&vblk->vdev_mutex); 835 + 901 836 /* Stop all the virtqueues. */ 902 837 vdev->config->reset(vdev); 903 838 904 - refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref); 839 + /* Virtqueues are stopped, nothing can use vblk->vdev anymore. 
*/ 840 + vblk->vdev = NULL; 841 + 905 842 put_disk(vblk->disk); 906 843 vdev->config->del_vqs(vdev); 907 844 kfree(vblk->vqs); 908 - kfree(vblk); 909 845 910 - /* Only free device id if we don't have any users */ 911 - if (refc == 1) 912 - ida_simple_remove(&vd_index_ida, index); 846 + mutex_unlock(&vblk->vdev_mutex); 847 + 848 + virtblk_put(vblk); 913 849 } 914 850 915 851 #ifdef CONFIG_PM_SLEEP
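The virtio_blk diff above closes an open-vs-hot-unplug race by refcounting the driver-private object: remove() clears ->vdev under a mutex but only drops its own reference, so an open block device keeps the object alive, and new opens fail cleanly once the device is gone. The lifetime shape, stripped of the mutex and kernel types (all names here are simplified stand-ins):

```c
#include <assert.h>
#include <stdlib.h>

static int frees;
static int dev_token;   /* stands in for the virtio_device */

/* Simplified vblk lifetime: refs plays the role of refcount_t, vdev
 * goes NULL on remove instead of the object being freed outright. */
struct vblk {
	int refs;
	void *vdev;   /* NULL once remove() has run */
};

static struct vblk *vblk_probe(void)
{
	struct vblk *v = calloc(1, sizeof(*v));

	v->refs = 1;          /* dropped in vblk_remove() */
	v->vdev = &dev_token;
	return v;
}

static void vblk_put(struct vblk *v)
{
	if (--v->refs == 0) {
		free(v);      /* ida_simple_remove() + kfree() in the driver */
		frees++;
	}
}

static int vblk_open(struct vblk *v)
{
	if (!v->vdev)
		return -1;    /* -ENXIO: device already removed */
	v->refs++;
	return 0;
}

static void vblk_remove(struct vblk *v)
{
	v->vdev = NULL;       /* nothing may touch the device past here */
	vblk_put(v);          /* drop probe's reference only */
}
```

Compare this with the old code's kref_read() peek at the disk kobject, which guessed at the remaining users instead of tracking them.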
+3 -4
drivers/bus/mhi/core/init.c
··· 812 812 if (!mhi_cntrl) 813 813 return -EINVAL; 814 814 815 - if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put) 816 - return -EINVAL; 817 - 818 - if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status) 815 + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put || 816 + !mhi_cntrl->status_cb || !mhi_cntrl->read_reg || 817 + !mhi_cntrl->write_reg) 819 818 return -EINVAL; 820 819 821 820 ret = parse_config(mhi_cntrl, config);
-3
drivers/bus/mhi/core/internal.h
··· 11 11 12 12 extern struct bus_type mhi_bus_type; 13 13 14 - /* MHI MMIO register mapping */ 15 - #define PCI_INVALID_READ(val) (val == U32_MAX) 16 - 17 14 #define MHIREGLEN (0x0) 18 15 #define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF) 19 16 #define MHIREGLEN_MHIREGLEN_SHIFT (0)
+5 -13
drivers/bus/mhi/core/main.c
··· 18 18 int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl, 19 19 void __iomem *base, u32 offset, u32 *out) 20 20 { 21 - u32 tmp = readl(base + offset); 22 - 23 - /* If there is any unexpected value, query the link status */ 24 - if (PCI_INVALID_READ(tmp) && 25 - mhi_cntrl->link_status(mhi_cntrl)) 26 - return -EIO; 27 - 28 - *out = tmp; 29 - 30 - return 0; 21 + return mhi_cntrl->read_reg(mhi_cntrl, base + offset, out); 31 22 } 32 23 33 24 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl, ··· 40 49 void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base, 41 50 u32 offset, u32 val) 42 51 { 43 - writel(val, base + offset); 52 + mhi_cntrl->write_reg(mhi_cntrl, base + offset, val); 44 53 } 45 54 46 55 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base, ··· 285 294 !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee))) 286 295 continue; 287 296 mhi_dev = mhi_alloc_device(mhi_cntrl); 288 - if (!mhi_dev) 297 + if (IS_ERR(mhi_dev)) 289 298 return; 290 299 291 300 mhi_dev->dev_type = MHI_DEVICE_XFER; ··· 327 336 328 337 /* Channel name is same for both UL and DL */ 329 338 mhi_dev->chan_name = mhi_chan->name; 330 - dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan, 339 + dev_set_name(&mhi_dev->dev, "%s_%s", 340 + dev_name(mhi_cntrl->cntrl_dev), 331 341 mhi_dev->chan_name); 332 342 333 343 /* Init wakeup source if available */
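The MHI hunks above remove the core's readl()-then-second-guess pattern (the PCI_INVALID_READ/link_status dance) and instead require the controller driver to supply read_reg/write_reg callbacks that own the bus-error policy. A toy model of the indirection, with a controller backed by a plain array (sizes and the error convention are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* The core sees only the callbacks; how a dead link is detected is
 * the controller driver's business. */
struct mhi_ctrl {
	int (*read_reg)(struct mhi_ctrl *c, unsigned off, uint32_t *out);
	void (*write_reg)(struct mhi_ctrl *c, unsigned off, uint32_t val);
	uint32_t regs[4];
};

static int mem_read(struct mhi_ctrl *c, unsigned off, uint32_t *out)
{
	if (off >= 4)
		return -EIO;   /* this controller's idea of a bad access */
	*out = c->regs[off];
	return 0;
}

static void mem_write(struct mhi_ctrl *c, unsigned off, uint32_t val)
{
	if (off < 4)
		c->regs[off] = val;
}

/* mhi_read_reg()/mhi_write_reg() shrink to trampolines, as in the patch. */
static int mhi_read_reg(struct mhi_ctrl *c, unsigned off, uint32_t *out)
{
	return c->read_reg(c, off, out);
}
```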
+5 -1
drivers/bus/mhi/core/pm.c
··· 902 902 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), 903 903 msecs_to_jiffies(mhi_cntrl->timeout_ms)); 904 904 905 - return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO; 905 + ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT; 906 + if (ret) 907 + mhi_power_down(mhi_cntrl, false); 908 + 909 + return ret; 906 910 } 907 911 EXPORT_SYMBOL(mhi_sync_power_up); 908 912
+1 -1
drivers/cpufreq/intel_pstate.c
··· 1059 1059 1060 1060 update_turbo_state(); 1061 1061 if (global.turbo_disabled) { 1062 - pr_warn("Turbo disabled by BIOS or unavailable on processor\n"); 1062 + pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n"); 1063 1063 mutex_unlock(&intel_pstate_limits_lock); 1064 1064 mutex_unlock(&intel_pstate_driver_lock); 1065 1065 return -EPERM;
+7 -3
drivers/crypto/caam/caamalg.c
··· 963 963 struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); 964 964 struct aead_edesc *edesc; 965 965 int ecode = 0; 966 + bool has_bklog; 966 967 967 968 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); 968 969 969 970 edesc = rctx->edesc; 971 + has_bklog = edesc->bklog; 970 972 971 973 if (err) 972 974 ecode = caam_jr_strstatus(jrdev, err); ··· 981 979 * If no backlog flag, the completion of the request is done 982 980 * by CAAM, not crypto engine. 983 981 */ 984 - if (!edesc->bklog) 982 + if (!has_bklog) 985 983 aead_request_complete(req, ecode); 986 984 else 987 985 crypto_finalize_aead_request(jrp->engine, req, ecode); ··· 997 995 struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev); 998 996 int ivsize = crypto_skcipher_ivsize(skcipher); 999 997 int ecode = 0; 998 + bool has_bklog; 1000 999 1001 1000 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); 1002 1001 1003 1002 edesc = rctx->edesc; 1003 + has_bklog = edesc->bklog; 1004 1004 if (err) 1005 1005 ecode = caam_jr_strstatus(jrdev, err); 1006 1006 ··· 1032 1028 * If no backlog flag, the completion of the request is done 1033 1029 * by CAAM, not crypto engine. 1034 1030 */ 1035 - if (!edesc->bklog) 1031 + if (!has_bklog) 1036 1032 skcipher_request_complete(req, ecode); 1037 1033 else 1038 1034 crypto_finalize_skcipher_request(jrp->engine, req, ecode); ··· 1715 1711 1716 1712 if (ivsize || mapped_dst_nents > 1) 1717 1713 sg_to_sec4_set_last(edesc->sec4_sg + dst_sg_idx + 1718 - mapped_dst_nents); 1714 + mapped_dst_nents - 1 + !!ivsize); 1719 1715 1720 1716 if (sec4_sg_bytes) { 1721 1717 edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
+6 -2
drivers/crypto/caam/caamhash.c
··· 583 583 struct caam_hash_state *state = ahash_request_ctx(req); 584 584 struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); 585 585 int ecode = 0; 586 + bool has_bklog; 586 587 587 588 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); 588 589 589 590 edesc = state->edesc; 591 + has_bklog = edesc->bklog; 590 592 591 593 if (err) 592 594 ecode = caam_jr_strstatus(jrdev, err); ··· 605 603 * If no backlog flag, the completion of the request is done 606 604 * by CAAM, not crypto engine. 607 605 */ 608 - if (!edesc->bklog) 606 + if (!has_bklog) 609 607 req->base.complete(&req->base, ecode); 610 608 else 611 609 crypto_finalize_hash_request(jrp->engine, req, ecode); ··· 634 632 struct caam_hash_state *state = ahash_request_ctx(req); 635 633 int digestsize = crypto_ahash_digestsize(ahash); 636 634 int ecode = 0; 635 + bool has_bklog; 637 636 638 637 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); 639 638 640 639 edesc = state->edesc; 640 + has_bklog = edesc->bklog; 641 641 if (err) 642 642 ecode = caam_jr_strstatus(jrdev, err); 643 643 ··· 667 663 * If no backlog flag, the completion of the request is done 668 664 * by CAAM, not crypto engine. 669 665 */ 670 - if (!edesc->bklog) 666 + if (!has_bklog) 671 667 req->base.complete(&req->base, ecode); 672 668 else 673 669 crypto_finalize_hash_request(jrp->engine, req, ecode);
+6 -2
drivers/crypto/caam/caampkc.c
··· 121 121 struct caam_drv_private_jr *jrp = dev_get_drvdata(dev); 122 122 struct rsa_edesc *edesc; 123 123 int ecode = 0; 124 + bool has_bklog; 124 125 125 126 if (err) 126 127 ecode = caam_jr_strstatus(dev, err); 127 128 128 129 edesc = req_ctx->edesc; 130 + has_bklog = edesc->bklog; 129 131 130 132 rsa_pub_unmap(dev, edesc, req); 131 133 rsa_io_unmap(dev, edesc, req); ··· 137 135 * If no backlog flag, the completion of the request is done 138 136 * by CAAM, not crypto engine. 139 137 */ 140 - if (!edesc->bklog) 138 + if (!has_bklog) 141 139 akcipher_request_complete(req, ecode); 142 140 else 143 141 crypto_finalize_akcipher_request(jrp->engine, req, ecode); ··· 154 152 struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req); 155 153 struct rsa_edesc *edesc; 156 154 int ecode = 0; 155 + bool has_bklog; 157 156 158 157 if (err) 159 158 ecode = caam_jr_strstatus(dev, err); 160 159 161 160 edesc = req_ctx->edesc; 161 + has_bklog = edesc->bklog; 162 162 163 163 switch (key->priv_form) { 164 164 case FORM1: ··· 180 176 * If no backlog flag, the completion of the request is done 181 177 * by CAAM, not crypto engine. 182 178 */ 183 - if (!edesc->bklog) 179 + if (!has_bklog) 184 180 akcipher_request_complete(req, ecode); 185 181 else 186 182 crypto_finalize_akcipher_request(jrp->engine, req, ecode);
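The fix repeated across all three caam files above (caamalg.c, caamhash.c, caampkc.c) is the same use-after-free pattern: the completion path may free edesc, so the done() handler must latch edesc->bklog into a local before that can happen and branch on the copy afterwards. In miniature (the structs and the free point are simplified stand-ins):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct edesc {
	bool bklog;   /* request went through the crypto-engine backlog */
};

struct req {
	struct edesc *edesc;
};

/* Stands in for the unmap/complete path: by the time done() decides
 * how to finish the request, the descriptor has been freed. */
static void unmap_and_free(struct req *r)
{
	free(r->edesc);
	r->edesc = NULL;
}

/* The corrected shape: read the flag while the descriptor is still
 * valid, then use only the local copy after it is gone. */
static bool req_done(struct req *r)
{
	bool has_bklog = r->edesc->bklog;   /* latch before any free */

	unmap_and_free(r);                  /* edesc dangles past here */
	return has_bklog;                   /* safe: local copy */
}
```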
+46 -37
drivers/crypto/chelsio/chcr_ktls.c
··· 673 673 return 0; 674 674 } 675 675 676 - /* 677 - * chcr_write_cpl_set_tcb_ulp: update tcb values. 678 - * TCB is responsible to create tcp headers, so all the related values 679 - * should be correctly updated. 680 - * @tx_info - driver specific tls info. 681 - * @q - tx queue on which packet is going out. 682 - * @tid - TCB identifier. 683 - * @pos - current index where should we start writing. 684 - * @word - TCB word. 685 - * @mask - TCB word related mask. 686 - * @val - TCB word related value. 687 - * @reply - set 1 if looking for TP response. 688 - * return - next position to write. 689 - */ 690 - static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, 691 - struct sge_eth_txq *q, u32 tid, 692 - void *pos, u16 word, u64 mask, 676 + static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, 677 + u32 tid, void *pos, u16 word, u64 mask, 693 678 u64 val, u32 reply) 694 679 { 695 680 struct cpl_set_tcb_field_core *cpl; 696 681 struct ulptx_idata *idata; 697 682 struct ulp_txpkt *txpkt; 698 - void *save_pos = NULL; 699 - u8 buf[48] = {0}; 700 - int left; 701 683 702 - left = (void *)q->q.stat - pos; 703 - if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) { 704 - if (!left) { 705 - pos = q->q.desc; 706 - } else { 707 - save_pos = pos; 708 - pos = buf; 709 - } 710 - } 711 684 /* ULP_TXPKT */ 712 685 txpkt = pos; 713 686 txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0)); ··· 705 732 idata = (struct ulptx_idata *)(cpl + 1); 706 733 idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP)); 707 734 idata->len = htonl(0); 735 + pos = idata + 1; 708 736 709 - if (save_pos) { 710 - pos = chcr_copy_to_txd(buf, &q->q, save_pos, 711 - CHCR_SET_TCB_FIELD_LEN); 712 - } else { 713 - /* check again if we are at the end of the queue */ 714 - if (left == CHCR_SET_TCB_FIELD_LEN) 737 + return pos; 738 + } 739 + 740 + 741 + /* 742 + * chcr_write_cpl_set_tcb_ulp: update tcb values. 
743 + * TCB is responsible to create tcp headers, so all the related values 744 + * should be correctly updated. 745 + * @tx_info - driver specific tls info. 746 + * @q - tx queue on which packet is going out. 747 + * @tid - TCB identifier. 748 + * @pos - current index where should we start writing. 749 + * @word - TCB word. 750 + * @mask - TCB word related mask. 751 + * @val - TCB word related value. 752 + * @reply - set 1 if looking for TP response. 753 + * return - next position to write. 754 + */ 755 + static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info, 756 + struct sge_eth_txq *q, u32 tid, 757 + void *pos, u16 word, u64 mask, 758 + u64 val, u32 reply) 759 + { 760 + int left = (void *)q->q.stat - pos; 761 + 762 + if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) { 763 + if (!left) { 715 764 pos = q->q.desc; 716 - else 717 - pos = idata + 1; 765 + } else { 766 + u8 buf[48] = {0}; 767 + 768 + __chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word, 769 + mask, val, reply); 770 + 771 + return chcr_copy_to_txd(buf, &q->q, pos, 772 + CHCR_SET_TCB_FIELD_LEN); 773 + } 718 774 } 775 + 776 + pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word, 777 + mask, val, reply); 778 + 779 + /* check again if we are at the end of the queue */ 780 + if (left == CHCR_SET_TCB_FIELD_LEN) 781 + pos = q->q.desc; 719 782 720 783 return pos; 721 784 }
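The chcr_ktls restructuring above separates descriptor construction (__chcr_write_cpl_set_tcb_ulp) from ring placement, so the wrap-around case can build into a stack buffer and copy it into the ring in two pieces. A scaled-down model with a 32-byte ring and a fixed 16-byte record (sizes invented for illustration; a two-piece memcpy stands in for chcr_copy_to_txd()):

```c
#include <assert.h>
#include <string.h>

#define RING_BYTES 32
#define REC_BYTES  16

static unsigned char ring[RING_BYTES];

/* Build the record at `pos` -- the __chcr_write_cpl_set_tcb_ulp role. */
static void build_record(unsigned char *pos, unsigned char tag)
{
	memset(pos, tag, REC_BYTES);
}

/* Place the record, handling the end of the ring: exactly at the end
 * means wrap first; a partial tail means stage in a stack buffer and
 * split the copy. Returns the next write position. */
static unsigned char *write_record(unsigned char *pos, unsigned char tag)
{
	long left = (ring + RING_BYTES) - pos;

	if (left < REC_BYTES) {
		if (left == 0) {
			pos = ring;                 /* clean wrap */
		} else {
			unsigned char buf[REC_BYTES];

			build_record(buf, tag);
			memcpy(pos, buf, left);                      /* tail */
			memcpy(ring, buf + left, REC_BYTES - left);  /* head */
			return ring + (REC_BYTES - left);
		}
	}
	build_record(pos, tag);
	pos += REC_BYTES;
	if (pos == ring + RING_BYTES)   /* landed exactly on the end */
		pos = ring;
	return pos;
}
```

Compared with the old interleaved version, the construction helper no longer needs to know about the ring at all, which is what made the save_pos/buf juggling disappear.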
+4 -3
drivers/dma-buf/dma-buf.c
··· 388 388 389 389 return ret; 390 390 391 - case DMA_BUF_SET_NAME: 391 + case DMA_BUF_SET_NAME_A: 392 + case DMA_BUF_SET_NAME_B: 392 393 return dma_buf_set_name(dmabuf, (const char __user *)arg); 393 394 394 395 default: ··· 656 655 * calls attach() of dma_buf_ops to allow device-specific attach functionality 657 656 * @dmabuf: [in] buffer to attach device to. 658 657 * @dev: [in] device to be attached. 659 - * @importer_ops [in] importer operations for the attachment 660 - * @importer_priv [in] importer private pointer for the attachment 658 + * @importer_ops: [in] importer operations for the attachment 659 + * @importer_priv: [in] importer private pointer for the attachment 661 660 * 662 661 * Returns struct dma_buf_attachment pointer for this attachment. Attachments 663 662 * must be cleaned up by calling dma_buf_detach().
+2 -1
drivers/dma/Kconfig
··· 241 241 242 242 config HISI_DMA 243 243 tristate "HiSilicon DMA Engine support" 244 - depends on ARM64 || (COMPILE_TEST && PCI_MSI) 244 + depends on ARM64 || COMPILE_TEST 245 + depends on PCI_MSI 245 246 select DMA_ENGINE 246 247 select DMA_VIRTUAL_CHANNELS 247 248 help
+26 -34
drivers/dma/dmaengine.c
··· 232 232 struct dma_chan_dev *chan_dev;
233 233
234 234 chan_dev = container_of(dev, typeof(*chan_dev), device);
235 - if (atomic_dec_and_test(chan_dev->idr_ref)) {
236 - ida_free(&dma_ida, chan_dev->dev_id);
237 - kfree(chan_dev->idr_ref);
238 - }
239 235 kfree(chan_dev);
240 236 }
241 237
··· 1039 1043 }
1040 1044
1041 1045 static int __dma_async_device_channel_register(struct dma_device *device,
1042 - struct dma_chan *chan,
1043 - int chan_id)
1046 + struct dma_chan *chan)
1044 1047 {
1045 1048 int rc = 0;
1046 - int chancnt = device->chancnt;
1047 - atomic_t *idr_ref;
1048 - struct dma_chan *tchan;
1049 -
1050 - tchan = list_first_entry_or_null(&device->channels,
1051 - struct dma_chan, device_node);
1052 - if (!tchan)
1053 - return -ENODEV;
1054 -
1055 - if (tchan->dev) {
1056 - idr_ref = tchan->dev->idr_ref;
1057 - } else {
1058 - idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL);
1059 - if (!idr_ref)
1060 - return -ENOMEM;
1061 - atomic_set(idr_ref, 0);
1062 - }
1063 1049
1064 1050 chan->local = alloc_percpu(typeof(*chan->local));
1065 1051 if (!chan->local)
··· 1057 1079 * When the chan_id is a negative value, we are dynamically adding
1058 1080 * the channel. Otherwise we are static enumerating.
1059 1081 */
1060 - chan->chan_id = chan_id < 0 ? chancnt : chan_id;
1082 + mutex_lock(&device->chan_mutex);
1083 + chan->chan_id = ida_alloc(&device->chan_ida, GFP_KERNEL);
1084 + mutex_unlock(&device->chan_mutex);
1085 + if (chan->chan_id < 0) {
1086 + pr_err("%s: unable to alloc ida for chan: %d\n",
1087 + __func__, chan->chan_id);
1088 + goto err_out;
1089 + }
1090 +
1061 1091 chan->dev->device.class = &dma_devclass;
1062 1092 chan->dev->device.parent = device->dev;
1063 1093 chan->dev->chan = chan;
1064 - chan->dev->idr_ref = idr_ref;
1065 1094 chan->dev->dev_id = device->dev_id;
1066 - atomic_inc(idr_ref);
1067 1095 dev_set_name(&chan->dev->device, "dma%dchan%d",
1068 1096 device->dev_id, chan->chan_id);
1069 -
1070 1097 rc = device_register(&chan->dev->device);
1071 1098 if (rc)
1072 - goto err_out;
1099 + goto err_out_ida;
1073 1100 chan->client_count = 0;
1074 - device->chancnt = chan->chan_id + 1;
1101 + device->chancnt++;
1075 1102
1076 1103 return 0;
1077 1104
1105 + err_out_ida:
1106 + mutex_lock(&device->chan_mutex);
1107 + ida_free(&device->chan_ida, chan->chan_id);
1108 + mutex_unlock(&device->chan_mutex);
1078 1109 err_out:
1079 1110 free_percpu(chan->local);
1080 1111 kfree(chan->dev);
1081 - if (atomic_dec_return(idr_ref) == 0)
1082 - kfree(idr_ref);
1083 1112 return rc;
1084 1113 }
··· 1095 1110 {
1096 1111 int rc;
1097 1112
1098 - rc = __dma_async_device_channel_register(device, chan, -1);
1113 + rc = __dma_async_device_channel_register(device, chan);
1099 1114 if (rc < 0)
1100 1115 return rc;
1101 1116
··· 1115 1130 device->chancnt--;
1116 1131 chan->dev->chan = NULL;
1117 1132 mutex_unlock(&dma_list_mutex);
1133 + mutex_lock(&device->chan_mutex);
1134 + ida_free(&device->chan_ida, chan->chan_id);
1135 + mutex_unlock(&device->chan_mutex);
1118 1136 device_unregister(&chan->dev->device);
1119 1137 free_percpu(chan->local);
1120 1138 }
··· 1140 1152 */
1141 1153 int dma_async_device_register(struct dma_device *device)
1142 1154 {
1143 - int rc, i = 0;
1155 + int rc;
1144 1156 struct dma_chan *chan;
1145 1157
1146 1158 if (!device)
··· 1245 1257 if (rc != 0)
1246 1258 return rc;
1247 1259
1260 + mutex_init(&device->chan_mutex);
1261 + ida_init(&device->chan_ida);
1262 +
1248 1263 /* represent channels in sysfs. Probably want devs too */
1249 1264 list_for_each_entry(chan, &device->channels, device_node) {
1250 - rc = __dma_async_device_channel_register(device, chan, i++);
1265 + rc = __dma_async_device_channel_register(device, chan);
1251 1266 if (rc < 0)
1252 1267 goto err_out;
1253 1268 }
··· 1325 1334 */
1326 1335 dma_cap_set(DMA_PRIVATE, device->cap_mask);
1327 1336 dma_channel_rebalance();
1337 + ida_free(&dma_ida, device->dev_id);
1328 1338 dma_device_put(device);
1329 1339 mutex_unlock(&dma_list_mutex);
1330 1340 }
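The dmaengine change above replaces a shared, refcounted ID space with a per-device `ida_alloc()`/`ida_free()` under `device->chan_mutex`. A minimal userspace sketch of that allocation pattern, with the kernel's IDA modelled by a 64-bit bitmap and illustrative (non-kernel) names:

```c
/* Toy analogue of ida_alloc()/ida_free(): lowest free id wins, and a
 * freed id becomes immediately reusable, which is why the patch can
 * drop the old chancnt-based numbering. */
struct chan_ida { unsigned long long used; };   /* bit i set => id i taken */

static int ida_alloc_lowest(struct chan_ida *ida)
{
    for (int i = 0; i < 64; i++) {
        if (!(ida->used & (1ULL << i))) {
            ida->used |= 1ULL << i;   /* claim the lowest free id */
            return i;
        }
    }
    return -1;                        /* id space exhausted */
}

static void ida_release(struct chan_ida *ida, int id)
{
    ida->used &= ~(1ULL << id);       /* id may be handed out again */
}
```

As in the patch's `err_out_ida` path, the id must be released on every failure after allocation, or the channel number leaks for the lifetime of the device.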
+3 -3
drivers/dma/dmatest.c
··· 240 240 struct dmatest_thread *thread; 241 241 242 242 list_for_each_entry(thread, &dtc->threads, node) { 243 - if (!thread->done) 243 + if (!thread->done && !thread->pending) 244 244 return true; 245 245 } 246 246 } ··· 662 662 flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; 663 663 664 664 ktime = ktime_get(); 665 - while (!kthread_should_stop() 666 - && !(params->iterations && total_tests >= params->iterations)) { 665 + while (!(kthread_should_stop() || 666 + (params->iterations && total_tests >= params->iterations))) { 667 667 struct dma_async_tx_descriptor *tx = NULL; 668 668 struct dmaengine_unmap_data *um; 669 669 dma_addr_t *dsts;
+4 -1
drivers/dma/mmp_tdma.c
··· 363 363 gen_pool_free(gpool, (unsigned long)tdmac->desc_arr, 364 364 size); 365 365 tdmac->desc_arr = NULL; 366 + if (tdmac->status == DMA_ERROR) 367 + tdmac->status = DMA_COMPLETE; 366 368 367 369 return; 368 370 } ··· 445 443 if (!desc) 446 444 goto err_out; 447 445 448 - mmp_tdma_config_write(chan, direction, &tdmac->slave_config); 446 + if (mmp_tdma_config_write(chan, direction, &tdmac->slave_config)) 447 + goto err_out; 449 448 450 449 while (buf < buf_len) { 451 450 desc = &tdmac->desc_arr[i];
+1 -1
drivers/dma/pch_dma.c
··· 865 865 } 866 866 867 867 pci_set_master(pdev); 868 + pd->dma.dev = &pdev->dev; 868 869 869 870 err = request_irq(pdev->irq, pd_irq, IRQF_SHARED, DRV_NAME, pd); 870 871 if (err) { ··· 881 880 goto err_free_irq; 882 881 } 883 882 884 - pd->dma.dev = &pdev->dev; 885 883 886 884 INIT_LIST_HEAD(&pd->dma.channels); 887 885
+9
drivers/dma/tegra20-apb-dma.c
··· 816 816 static void tegra_dma_synchronize(struct dma_chan *dc) 817 817 { 818 818 struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc); 819 + int err; 820 + 821 + err = pm_runtime_get_sync(tdc->tdma->dev); 822 + if (err < 0) { 823 + dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err); 824 + return; 825 + } 819 826 820 827 /* 821 828 * CPU, which handles interrupt, could be busy in ··· 832 825 wait_event(tdc->wq, tegra_dma_eoc_interrupt_deasserted(tdc)); 833 826 834 827 tasklet_kill(&tdc->tasklet); 828 + 829 + pm_runtime_put(tdc->tdma->dev); 835 830 } 836 831 837 832 static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,
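The tegra20-apb-dma fix brackets the hardware access in `tegra_dma_synchronize()` with a runtime-PM reference. A toy model of that bracket (the `dev_pm` struct and return value `-5` are stand-ins, not the real `pm_runtime_get_sync()` API):

```c
/* Model of the get/put bracket: take a power reference before touching
 * the device, drop it afterwards, and bail out early if resume fails. */
struct dev_pm { int usage; int broken; };

static int pm_get_sync(struct dev_pm *pm)
{
    if (pm->broken)
        return -5;          /* stand-in for an -EIO style resume failure */
    return ++pm->usage;     /* >= 0: device is now powered */
}

static void pm_put(struct dev_pm *pm) { --pm->usage; }

/* Returns 0 if the synchronize ran, negative if resume failed. */
static int dma_synchronize(struct dev_pm *pm)
{
    int err = pm_get_sync(pm);
    if (err < 0)
        return err;         /* never touch powered-down hardware */
    /* ... wait for the EOC interrupt to deassert, kill the tasklet ... */
    pm_put(pm);
    return 0;
}
```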
+1
drivers/dma/ti/k3-psil.c
··· 27 27 soc_ep_map = &j721e_ep_map; 28 28 } else { 29 29 pr_err("PSIL: No compatible machine found for map\n"); 30 + mutex_unlock(&ep_map_mutex); 30 31 return ERR_PTR(-ENOTSUPP); 31 32 } 32 33 pr_debug("%s: Using map for %s\n", __func__, soc_ep_map->name);
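The k3-psil one-liner fixes a classic lock leak: an early error return that skipped `mutex_unlock(&ep_map_mutex)`. A sketch of the fixed control flow, using a counting toy lock so the leak would be observable (all names here are illustrative, not the driver's API):

```c
/* Counting "mutex" so a missed unlock shows up as held != 0. */
static int ep_map_mutex_held;
static void ep_map_lock(void)   { ep_map_mutex_held++; }
static void ep_map_unlock(void) { ep_map_mutex_held--; }

static const char *soc_ep_map;

/* Both exits drop the lock -- the error path's unlock is the line the
 * patch adds before the ERR_PTR(-ENOTSUPP) return. */
static const char *lookup_ep_map(const char *compat)
{
    ep_map_lock();
    if (!compat) {
        ep_map_unlock();        /* without this, callers deadlock later */
        return 0;               /* error path */
    }
    soc_ep_map = compat;
    ep_map_unlock();
    return soc_ep_map;
}
```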
+10 -10
drivers/dma/xilinx/xilinx_dma.c
··· 1230 1230 return ret; 1231 1231 1232 1232 spin_lock_irqsave(&chan->lock, flags); 1233 - 1234 - desc = list_last_entry(&chan->active_list, 1235 - struct xilinx_dma_tx_descriptor, node); 1236 - /* 1237 - * VDMA and simple mode do not support residue reporting, so the 1238 - * residue field will always be 0. 1239 - */ 1240 - if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA) 1241 - residue = xilinx_dma_get_residue(chan, desc); 1242 - 1233 + if (!list_empty(&chan->active_list)) { 1234 + desc = list_last_entry(&chan->active_list, 1235 + struct xilinx_dma_tx_descriptor, node); 1236 + /* 1237 + * VDMA and simple mode do not support residue reporting, so the 1238 + * residue field will always be 0. 1239 + */ 1240 + if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA) 1241 + residue = xilinx_dma_get_residue(chan, desc); 1242 + } 1243 1243 spin_unlock_irqrestore(&chan->lock, flags); 1244 1244 1245 1245 dma_set_residue(txstate, residue);
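The xilinx_dma fix guards `list_last_entry()` with a `list_empty()` check: on an empty kernel list, the "last entry" is the head itself, so dereferencing it reads garbage. A self-contained sketch with a minimal circular list in the style of the kernel's `list.h`:

```c
#include <stddef.h>

/* Minimal kernel-style circular list, enough to show the guard. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static int  list_empty(const struct list_head *h) { return h->next == h; }
static void list_add_tail(struct list_head *n, struct list_head *h)
{
    n->prev = h->prev; n->next = h;
    h->prev->next = n; h->prev = n;
}

struct desc { int residue; struct list_head node; };

/* Last descriptor's residue, or 0 when nothing is active -- the check
 * the patch adds before list_last_entry() in tx_status. */
static int last_residue(struct list_head *active)
{
    if (list_empty(active))
        return 0;
    struct desc *d = (struct desc *)((char *)active->prev -
                                     offsetof(struct desc, node));
    return d->residue;
}
```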
+1 -1
drivers/firmware/efi/tpm.c
··· 16 16 int efi_tpm_final_log_size; 17 17 EXPORT_SYMBOL(efi_tpm_final_log_size); 18 18 19 - static int tpm2_calc_event_log_size(void *data, int count, void *size_info) 19 + static int __init tpm2_calc_event_log_size(void *data, int count, void *size_info) 20 20 { 21 21 struct tcg_pcr_event2_head *header; 22 22 int event_size, size = 0;
+2 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3372 3372 } 3373 3373 } 3374 3374 3375 - amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); 3376 - amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE); 3377 - 3378 - amdgpu_amdkfd_suspend(adev, !fbcon); 3379 - 3380 3375 amdgpu_ras_suspend(adev); 3381 3376 3382 3377 r = amdgpu_device_ip_suspend_phase1(adev); 3378 + 3379 + amdgpu_amdkfd_suspend(adev, !fbcon); 3383 3380 3384 3381 /* evict vram memory */ 3385 3382 amdgpu_bo_evict_vram(adev);
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 85 85 * - 3.34.0 - Non-DC can flip correctly between buffers with different pitches 86 86 * - 3.35.0 - Add drm_amdgpu_info_device::tcc_disabled_mask 87 87 * - 3.36.0 - Allow reading more status registers on si/cik 88 + * - 3.37.0 - L2 is invalidated before SDMA IBs, needed for correctness 88 89 */ 89 90 #define KMS_DRIVER_MAJOR 3 90 - #define KMS_DRIVER_MINOR 36 91 + #define KMS_DRIVER_MINOR 37 91 92 #define KMS_DRIVER_PATCHLEVEL 0 92 93 93 94 int amdgpu_vram_limit = 0;
+16
drivers/gpu/drm/amd/amdgpu/navi10_sdma_pkt_open.h
··· 73 73 #define SDMA_OP_AQL_COPY 0 74 74 #define SDMA_OP_AQL_BARRIER_OR 0 75 75 76 + #define SDMA_GCR_RANGE_IS_PA (1 << 18) 77 + #define SDMA_GCR_SEQ(x) (((x) & 0x3) << 16) 78 + #define SDMA_GCR_GL2_WB (1 << 15) 79 + #define SDMA_GCR_GL2_INV (1 << 14) 80 + #define SDMA_GCR_GL2_DISCARD (1 << 13) 81 + #define SDMA_GCR_GL2_RANGE(x) (((x) & 0x3) << 11) 82 + #define SDMA_GCR_GL2_US (1 << 10) 83 + #define SDMA_GCR_GL1_INV (1 << 9) 84 + #define SDMA_GCR_GLV_INV (1 << 8) 85 + #define SDMA_GCR_GLK_INV (1 << 7) 86 + #define SDMA_GCR_GLK_WB (1 << 6) 87 + #define SDMA_GCR_GLM_INV (1 << 5) 88 + #define SDMA_GCR_GLM_WB (1 << 4) 89 + #define SDMA_GCR_GL1_RANGE(x) (((x) & 0x3) << 2) 90 + #define SDMA_GCR_GLI_INV(x) (((x) & 0x3) << 0) 91 + 76 92 /*define for op field*/ 77 93 #define SDMA_PKT_HEADER_op_offset 0 78 94 #define SDMA_PKT_HEADER_op_mask 0x000000FF
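The `SDMA_GCR_*` defines above are plain shift-and-mask field encodings. Purely as arithmetic (no hardware involved), this is how the sdma_v5_0 change below composes the GL2 + GLM invalidate/writeback word it places in the upper half of the GCR packet dword:

```c
/* Field encodings copied from the header above. */
#define SDMA_GCR_GL2_WB   (1u << 15)
#define SDMA_GCR_GL2_INV  (1u << 14)
#define SDMA_GCR_GLM_INV  (1u << 5)
#define SDMA_GCR_GLM_WB   (1u << 4)

/* The third dword written by the GCR_REQ packet in emit_ib: the flag
 * group is shifted into bits 31:16 of the packet word. */
static unsigned int gcr_flush_word(void)
{
    return (SDMA_GCR_GL2_INV | SDMA_GCR_GL2_WB |
            SDMA_GCR_GLM_INV | SDMA_GCR_GLM_WB) << 16;
}
```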
+13 -1
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
··· 382 382 unsigned vmid = AMDGPU_JOB_GET_VMID(job); 383 383 uint64_t csa_mc_addr = amdgpu_sdma_get_csa_mc_addr(ring, vmid); 384 384 385 + /* Invalidate L2, because if we don't do it, we might get stale cache 386 + * lines from previous IBs. 387 + */ 388 + amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_GCR_REQ)); 389 + amdgpu_ring_write(ring, 0); 390 + amdgpu_ring_write(ring, (SDMA_GCR_GL2_INV | 391 + SDMA_GCR_GL2_WB | 392 + SDMA_GCR_GLM_INV | 393 + SDMA_GCR_GLM_WB) << 16); 394 + amdgpu_ring_write(ring, 0xffffff80); 395 + amdgpu_ring_write(ring, 0xffff); 396 + 385 397 /* An IB packet must end on a 8 DW boundary--the next dword 386 398 * must be on a 8-dword boundary. Our IB packet below is 6 387 399 * dwords long, thus add x number of NOPs, such that, in ··· 1607 1595 SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 + 1608 1596 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 * 2 + 1609 1597 10 + 10 + 10, /* sdma_v5_0_ring_emit_fence x3 for user fence, vm fence */ 1610 - .emit_ib_size = 7 + 6, /* sdma_v5_0_ring_emit_ib */ 1598 + .emit_ib_size = 5 + 7 + 6, /* sdma_v5_0_ring_emit_ib */ 1611 1599 .emit_ib = sdma_v5_0_ring_emit_ib, 1612 1600 .emit_fence = sdma_v5_0_ring_emit_fence, 1613 1601 .emit_pipeline_sync = sdma_v5_0_ring_emit_pipeline_sync,
+40 -15
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 2008 2008 dc_sink_retain(aconnector->dc_sink);
2009 2009 if (sink->dc_edid.length == 0) {
2010 2010 aconnector->edid = NULL;
2011 - drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
2011 + if (aconnector->dc_link->aux_mode) {
2012 + drm_dp_cec_unset_edid(
2013 + &aconnector->dm_dp_aux.aux);
2014 + }
2012 2015 } else {
2013 2016 aconnector->edid =
2014 - (struct edid *) sink->dc_edid.raw_edid;
2015 -
2017 + (struct edid *)sink->dc_edid.raw_edid;
2016 2018
2017 2019 drm_connector_update_edid_property(connector,
2018 - aconnector->edid);
2019 - drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
2020 - aconnector->edid);
2020 + aconnector->edid);
2021 +
2022 + if (aconnector->dc_link->aux_mode)
2023 + drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
2024 + aconnector->edid);
2021 2025 }
2026 +
2022 2027 amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
2023 2028 update_connector_ext_caps(aconnector);
2024 2029 } else {
··· 3345 3340 const union dc_tiling_info *tiling_info,
3346 3341 const uint64_t info,
3347 3342 struct dc_plane_dcc_param *dcc,
3348 - struct dc_plane_address *address)
3343 + struct dc_plane_address *address,
3344 + bool force_disable_dcc)
3349 3345 {
3350 3346 struct dc *dc = adev->dm.dc;
3351 3347 struct dc_dcc_surface_param input;
··· 3357 3351
3358 3352 memset(&input, 0, sizeof(input));
3359 3353 memset(&output, 0, sizeof(output));
3354 +
3355 + if (force_disable_dcc)
3356 + return 0;
3360 3357
3361 3358 if (!offset)
3362 3359 return 0;
··· 3410 3401 union dc_tiling_info *tiling_info,
3411 3402 struct plane_size *plane_size,
3412 3403 struct dc_plane_dcc_param *dcc,
3413 - struct dc_plane_address *address)
3404 + struct dc_plane_address *address,
3405 + bool force_disable_dcc)
3414 3406 {
3415 3407 const struct drm_framebuffer *fb = &afb->base;
3416 3408 int ret;
··· 3517 3507
3518 3508 ret = fill_plane_dcc_attributes(adev, afb, format, rotation,
3519 3509 plane_size, tiling_info,
3520 - tiling_flags, dcc, address);
3510 + tiling_flags, dcc, address,
3511 + force_disable_dcc);
3521 3512 if (ret)
3522 3513 return ret;
3523 3514 }
··· 3610 3599 const struct drm_plane_state *plane_state,
3611 3600 const uint64_t tiling_flags,
3612 3601 struct dc_plane_info *plane_info,
3613 - struct dc_plane_address *address)
3602 + struct dc_plane_address *address,
3603 + bool force_disable_dcc)
3614 3604 {
3615 3605 const struct drm_framebuffer *fb = plane_state->fb;
3616 3606 const struct amdgpu_framebuffer *afb =
··· 3693 3681 plane_info->rotation, tiling_flags,
3694 3682 &plane_info->tiling_info,
3695 3683 &plane_info->plane_size,
3696 - &plane_info->dcc, address);
3684 + &plane_info->dcc, address,
3685 + force_disable_dcc);
3697 3686 if (ret)
3698 3687 return ret;
3699 3688
··· 3717 3704 struct dc_plane_info plane_info;
3718 3705 uint64_t tiling_flags;
3719 3706 int ret;
3707 + bool force_disable_dcc = false;
3720 3708
3721 3709 ret = fill_dc_scaling_info(plane_state, &scaling_info);
3722 3710 if (ret)
··· 3732 3718 if (ret)
3733 3719 return ret;
3734 3720
3721 + force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
3735 3722 ret = fill_dc_plane_info_and_addr(adev, plane_state, tiling_flags,
3736 3723 &plane_info,
3737 - &dc_plane_state->address);
3724 + &dc_plane_state->address,
3725 + force_disable_dcc);
3738 3726 if (ret)
3739 3727 return ret;
3740 3728
··· 5358 5342 uint64_t tiling_flags;
5359 5343 uint32_t domain;
5360 5344 int r;
5345 + bool force_disable_dcc = false;
5361 5346
5362 5347 dm_plane_state_old = to_dm_plane_state(plane->state);
5363 5348 dm_plane_state_new = to_dm_plane_state(new_state);
··· 5417 5400 dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
5418 5401 struct dc_plane_state *plane_state = dm_plane_state_new->dc_state;
5419 5402
5403 + force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
5420 5404 fill_plane_buffer_attributes(
5421 5405 adev, afb, plane_state->format, plane_state->rotation,
5422 5406 tiling_flags, &plane_state->tiling_info,
5423 5407 &plane_state->plane_size, &plane_state->dcc,
5424 - &plane_state->address);
5408 + &plane_state->address,
5409 + force_disable_dcc);
5425 5410 }
5426 5411
5427 5412 return 0;
··· 6695 6676 fill_dc_plane_info_and_addr(
6696 6677 dm->adev, new_plane_state, tiling_flags,
6697 6678 &bundle->plane_infos[planes_count],
6698 - &bundle->flip_addrs[planes_count].address);
6679 + &bundle->flip_addrs[planes_count].address,
6680 + false);
6681 +
6682 + DRM_DEBUG_DRIVER("plane: id=%d dcc_en=%d\n",
6683 + new_plane_state->plane->index,
6684 + bundle->plane_infos[planes_count].dcc.enable);
6699 6685
6700 6686 bundle->surface_updates[planes_count].plane_info =
6701 6687 &bundle->plane_infos[planes_count];
··· 8120 8096 ret = fill_dc_plane_info_and_addr(
8121 8097 dm->adev, new_plane_state, tiling_flags,
8122 8098 plane_info,
8123 - &flip_addr->address);
8099 + &flip_addr->address,
8100 + false);
8124 8101 if (ret)
8125 8102 goto cleanup;
8126 8103
+2 -3
drivers/gpu/drm/amd/display/dc/core/dc.c
··· 834 834 static void wait_for_no_pipes_pending(struct dc *dc, struct dc_state *context) 835 835 { 836 836 int i; 837 - int count = 0; 838 - struct pipe_ctx *pipe; 839 837 PERF_TRACE(); 840 838 for (i = 0; i < MAX_PIPES; i++) { 841 - pipe = &context->res_ctx.pipe_ctx[i]; 839 + int count = 0; 840 + struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i]; 842 841 843 842 if (!pipe->plane_state) 844 843 continue;
+4 -36
drivers/gpu/drm/amd/display/dc/core/dc_stream.c
··· 231 231 return dc_stream_get_status_from_state(dc->current_state, stream);
232 232 }
233 233
234 - static void delay_cursor_until_vupdate(struct pipe_ctx *pipe_ctx, struct dc *dc)
235 - {
236 - #if defined(CONFIG_DRM_AMD_DC_DCN)
237 - unsigned int vupdate_line;
238 - unsigned int lines_to_vupdate, us_to_vupdate, vpos, nvpos;
239 - struct dc_stream_state *stream = pipe_ctx->stream;
240 - unsigned int us_per_line;
241 -
242 - if (stream->ctx->asic_id.chip_family == FAMILY_RV &&
243 - ASICREV_IS_RAVEN(stream->ctx->asic_id.hw_internal_rev)) {
244 -
245 - vupdate_line = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
246 - if (!dc_stream_get_crtc_position(dc, &stream, 1, &vpos, &nvpos))
247 - return;
248 -
249 - if (vpos >= vupdate_line)
250 - return;
251 -
252 - us_per_line = stream->timing.h_total * 10000 / stream->timing.pix_clk_100hz;
253 - lines_to_vupdate = vupdate_line - vpos;
254 - us_to_vupdate = lines_to_vupdate * us_per_line;
255 -
256 - /* 70 us is a conservative estimate of cursor update time*/
257 - if (us_to_vupdate < 70)
258 - udelay(us_to_vupdate);
259 - }
260 - #endif
261 - }
262 234
263 235 /**
264 236 * dc_stream_set_cursor_attributes() - Update cursor attributes and set cursor surface address
··· 270 298
271 299 if (!pipe_to_program) {
272 300 pipe_to_program = pipe_ctx;
273 -
274 - delay_cursor_until_vupdate(pipe_ctx, dc);
275 - dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
301 + dc->hwss.cursor_lock(dc, pipe_to_program, true);
276 302 }
277 303
278 304 dc->hwss.set_cursor_attribute(pipe_ctx);
··· 279 309 }
280 310
281 311 if (pipe_to_program)
282 - dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
312 + dc->hwss.cursor_lock(dc, pipe_to_program, false);
283 313
284 314 return true;
285 315 }
··· 319 349
320 350 if (!pipe_to_program) {
321 351 pipe_to_program = pipe_ctx;
322 -
323 - delay_cursor_until_vupdate(pipe_ctx, dc);
324 - dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
352 + dc->hwss.cursor_lock(dc, pipe_to_program, true);
325 353 }
326 354
327 355 dc->hwss.set_cursor_position(pipe_ctx);
328 356 }
329 357
330 358 if (pipe_to_program)
331 - dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
359 + dc->hwss.cursor_lock(dc, pipe_to_program, false);
332 360
333 361 return true;
334 362 }
+1
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 2757 2757 .disable_plane = dce110_power_down_fe, 2758 2758 .pipe_control_lock = dce_pipe_control_lock, 2759 2759 .interdependent_update_lock = NULL, 2760 + .cursor_lock = dce_pipe_control_lock, 2760 2761 .prepare_bandwidth = dce110_prepare_bandwidth, 2761 2762 .optimize_bandwidth = dce110_optimize_bandwidth, 2762 2763 .set_drr = set_drr,
+10
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
··· 1625 1625 hws->funcs.verify_allow_pstate_change_high(dc); 1626 1626 } 1627 1627 1628 + void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock) 1629 + { 1630 + /* cursor lock is per MPCC tree, so only need to lock one pipe per stream */ 1631 + if (!pipe || pipe->top_pipe) 1632 + return; 1633 + 1634 + dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc, 1635 + pipe->stream_res.opp->inst, lock); 1636 + } 1637 + 1628 1638 static bool wait_for_reset_trigger_to_occur( 1629 1639 struct dc_context *dc_ctx, 1630 1640 struct timing_generator *tg)
+1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
··· 49 49 struct dc *dc, 50 50 struct pipe_ctx *pipe, 51 51 bool lock); 52 + void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock); 52 53 void dcn10_blank_pixel_data( 53 54 struct dc *dc, 54 55 struct pipe_ctx *pipe_ctx,
+1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
··· 50 50 .disable_audio_stream = dce110_disable_audio_stream, 51 51 .disable_plane = dcn10_disable_plane, 52 52 .pipe_control_lock = dcn10_pipe_control_lock, 53 + .cursor_lock = dcn10_cursor_lock, 53 54 .interdependent_update_lock = dcn10_lock_all_pipes, 54 55 .prepare_bandwidth = dcn10_prepare_bandwidth, 55 56 .optimize_bandwidth = dcn10_optimize_bandwidth,
+15
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
··· 223 223 REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, dpp_id);
224 224 REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, tree->opp_id);
225 225
226 + /* Configure VUPDATE lock set for this MPCC to map to the OPP */
227 + REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, tree->opp_id);
228 +
226 229 /* update mpc tree mux setting */
227 230 if (tree->opp_list == insert_above_mpcc) {
228 231 /* insert the toppest mpcc */
··· 321 318 REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
322 319 REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
323 320 REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, 0xf);
321 + REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
324 322
325 323 /* mark this mpcc as not in use */
326 324 mpc10->mpcc_in_use_mask &= ~(1 << mpcc_id);
··· 332 328 REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
333 329 REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
334 330 REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, 0xf);
331 + REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
335 332 }
336 333 }
··· 366 361 REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
367 362 REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
368 363 REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, 0xf);
364 + REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
369 365
370 366 mpc1_init_mpcc(&(mpc->mpcc_array[mpcc_id]), mpcc_id);
371 367 }
··· 387 381 REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
388 382 REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
389 383 REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, 0xf);
384 + REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
390 385
391 386 mpc1_init_mpcc(&(mpc->mpcc_array[mpcc_id]), mpcc_id);
392 387
··· 460 453 MPCC_BUSY, &s->busy);
461 454 }
462 455
456 + void mpc1_cursor_lock(struct mpc *mpc, int opp_id, bool lock)
457 + {
458 + struct dcn10_mpc *mpc10 = TO_DCN10_MPC(mpc);
459 +
460 + REG_SET(CUR[opp_id], 0, CUR_VUPDATE_LOCK_SET, lock ? 1 : 0);
461 + }
462 +
463 463 static const struct mpc_funcs dcn10_mpc_funcs = {
464 464 .read_mpcc_state = mpc1_read_mpcc_state,
465 465 .insert_plane = mpc1_insert_plane,
··· 478 464 .assert_mpcc_idle_before_connect = mpc1_assert_mpcc_idle_before_connect,
479 465 .init_mpcc_list_from_hw = mpc1_init_mpcc_list_from_hw,
480 466 .update_blending = mpc1_update_blending,
467 + .cursor_lock = mpc1_cursor_lock,
481 468 .set_denorm = NULL,
482 469 .set_denorm_clamp = NULL,
483 470 .set_output_csc = NULL,
+14 -6
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.h
··· 39 39 SRII(MPCC_BG_G_Y, MPCC, inst),\ 40 40 SRII(MPCC_BG_R_CR, MPCC, inst),\ 41 41 SRII(MPCC_BG_B_CB, MPCC, inst),\ 42 - SRII(MPCC_BG_B_CB, MPCC, inst),\ 43 - SRII(MPCC_SM_CONTROL, MPCC, inst) 42 + SRII(MPCC_SM_CONTROL, MPCC, inst),\ 43 + SRII(MPCC_UPDATE_LOCK_SEL, MPCC, inst) 44 44 45 45 #define MPC_OUT_MUX_COMMON_REG_LIST_DCN1_0(inst) \ 46 - SRII(MUX, MPC_OUT, inst) 46 + SRII(MUX, MPC_OUT, inst),\ 47 + VUPDATE_SRII(CUR, VUPDATE_LOCK_SET, inst) 47 48 48 49 #define MPC_COMMON_REG_VARIABLE_LIST \ 49 50 uint32_t MPCC_TOP_SEL[MAX_MPCC]; \ ··· 56 55 uint32_t MPCC_BG_R_CR[MAX_MPCC]; \ 57 56 uint32_t MPCC_BG_B_CB[MAX_MPCC]; \ 58 57 uint32_t MPCC_SM_CONTROL[MAX_MPCC]; \ 59 - uint32_t MUX[MAX_OPP]; 58 + uint32_t MUX[MAX_OPP]; \ 59 + uint32_t MPCC_UPDATE_LOCK_SEL[MAX_MPCC]; \ 60 + uint32_t CUR[MAX_OPP]; 60 61 61 62 #define MPC_COMMON_MASK_SH_LIST_DCN1_0(mask_sh)\ 62 63 SF(MPCC0_MPCC_TOP_SEL, MPCC_TOP_SEL, mask_sh),\ ··· 81 78 SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FIELD_ALT, mask_sh),\ 82 79 SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FORCE_NEXT_FRAME_POL, mask_sh),\ 83 80 SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FORCE_NEXT_TOP_POL, mask_sh),\ 84 - SF(MPC_OUT0_MUX, MPC_OUT_MUX, mask_sh) 81 + SF(MPC_OUT0_MUX, MPC_OUT_MUX, mask_sh),\ 82 + SF(MPCC0_MPCC_UPDATE_LOCK_SEL, MPCC_UPDATE_LOCK_SEL, mask_sh) 85 83 86 84 #define MPC_REG_FIELD_LIST(type) \ 87 85 type MPCC_TOP_SEL;\ ··· 105 101 type MPCC_SM_FIELD_ALT;\ 106 102 type MPCC_SM_FORCE_NEXT_FRAME_POL;\ 107 103 type MPCC_SM_FORCE_NEXT_TOP_POL;\ 108 - type MPC_OUT_MUX; 104 + type MPC_OUT_MUX;\ 105 + type MPCC_UPDATE_LOCK_SEL;\ 106 + type CUR_VUPDATE_LOCK_SET; 109 107 110 108 struct dcn_mpc_registers { 111 109 MPC_COMMON_REG_VARIABLE_LIST ··· 197 191 struct mpc *mpc, 198 192 int mpcc_inst, 199 193 struct mpcc_state *s); 194 + 195 + void mpc1_cursor_lock(struct mpc *mpc, int opp_id, bool lock); 200 196 201 197 #endif
+12 -2
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
··· 181 181 .reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \ 182 182 mm ## block ## id ## _ ## reg_name 183 183 184 + #define VUPDATE_SRII(reg_name, block, id)\ 185 + .reg_name[id] = BASE(mm ## reg_name ## 0 ## _ ## block ## id ## _BASE_IDX) + \ 186 + mm ## reg_name ## 0 ## _ ## block ## id 187 + 188 + /* set field/register/bitfield name */ 189 + #define SFRB(field_name, reg_name, bitfield, post_fix)\ 190 + .field_name = reg_name ## __ ## bitfield ## post_fix 191 + 184 192 /* NBIO */ 185 193 #define NBIO_BASE_INNER(seg) \ 186 194 NBIF_BASE__INST0_SEG ## seg ··· 427 419 }; 428 420 429 421 static const struct dcn_mpc_shift mpc_shift = { 430 - MPC_COMMON_MASK_SH_LIST_DCN1_0(__SHIFT) 422 + MPC_COMMON_MASK_SH_LIST_DCN1_0(__SHIFT),\ 423 + SFRB(CUR_VUPDATE_LOCK_SET, CUR0_VUPDATE_LOCK_SET0, CUR0_VUPDATE_LOCK_SET, __SHIFT) 431 424 }; 432 425 433 426 static const struct dcn_mpc_mask mpc_mask = { 434 - MPC_COMMON_MASK_SH_LIST_DCN1_0(_MASK), 427 + MPC_COMMON_MASK_SH_LIST_DCN1_0(_MASK),\ 428 + SFRB(CUR_VUPDATE_LOCK_SET, CUR0_VUPDATE_LOCK_SET0, CUR0_VUPDATE_LOCK_SET, _MASK) 435 429 }; 436 430 437 431 #define tg_regs(id)\
+2 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 2294 2294 2295 2295 REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, 2); 2296 2296 REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_ENABLE, 1); 2297 - REG_WRITE(REFCLK_CNTL, 0); 2297 + if (REG(REFCLK_CNTL)) 2298 + REG_WRITE(REFCLK_CNTL, 0); 2298 2299 // 2299 2300 2300 2301
+1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
··· 52 52 .disable_plane = dcn20_disable_plane, 53 53 .pipe_control_lock = dcn20_pipe_control_lock, 54 54 .interdependent_update_lock = dcn10_lock_all_pipes, 55 + .cursor_lock = dcn10_cursor_lock, 55 56 .prepare_bandwidth = dcn20_prepare_bandwidth, 56 57 .optimize_bandwidth = dcn20_optimize_bandwidth, 57 58 .update_bandwidth = dcn20_update_bandwidth,
+1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
··· 545 545 .mpc_init = mpc1_mpc_init, 546 546 .mpc_init_single_inst = mpc1_mpc_init_single_inst, 547 547 .update_blending = mpc2_update_blending, 548 + .cursor_lock = mpc1_cursor_lock, 548 549 .get_mpcc_for_dpp = mpc2_get_mpcc_for_dpp, 549 550 .wait_for_idle = mpc2_assert_idle_mpcc, 550 551 .assert_mpcc_idle_before_connect = mpc2_assert_mpcc_idle_before_connect,
+2 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.h
··· 179 179 SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MAX_G_Y, mask_sh),\ 180 180 SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MIN_G_Y, mask_sh),\ 181 181 SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MAX_B_CB, mask_sh),\ 182 - SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MIN_B_CB, mask_sh) 182 + SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MIN_B_CB, mask_sh),\ 183 + SF(CUR_VUPDATE_LOCK_SET0, CUR_VUPDATE_LOCK_SET, mask_sh) 183 184 184 185 /* 185 186 * DCN2 MPC_OCSC debug status register:
+27 -8
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 508 508 .block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \ 509 509 mm ## block ## id ## _ ## reg_name 510 510 511 + #define VUPDATE_SRII(reg_name, block, id)\ 512 + .reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \ 513 + mm ## reg_name ## _ ## block ## id 514 + 511 515 /* NBIO */ 512 516 #define NBIO_BASE_INNER(seg) \ 513 517 NBIO_BASE__INST0_SEG ## seg ··· 3068 3064 return out; 3069 3065 } 3070 3066 3071 - 3072 - bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context, 3073 - bool fast_validate) 3067 + /* 3068 + * This must be noinline to ensure anything that deals with FP registers 3069 + * is contained within this call; previously our compiling with hard-float 3070 + * would result in fp instructions being emitted outside of the boundaries 3071 + * of the DC_FP_START/END macros, which makes sense as the compiler has no 3072 + * idea about what is wrapped and what is not 3073 + * 3074 + * This is largely just a workaround to avoid breakage introduced with 5.6, 3075 + * ideally all fp-using code should be moved into its own file, only that 3076 + * should be compiled with hard-float, and all code exported from there 3077 + * should be strictly wrapped with DC_FP_START/END 3078 + */ 3079 + static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc, 3080 + struct dc_state *context, bool fast_validate) 3074 3081 { 3075 3082 bool voltage_supported = false; 3076 3083 bool full_pstate_supported = false; 3077 3084 bool dummy_pstate_supported = false; 3078 3085 double p_state_latency_us; 3079 3086 3080 - DC_FP_START(); 3081 3087 p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us; 3082 3088 context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support = 3083 3089 dc->debug.disable_dram_clock_change_vactive_support; 3084 3090 3085 3091 if (fast_validate) { 3086 - voltage_supported = dcn20_validate_bandwidth_internal(dc, context, true); 3087 - 3088 - DC_FP_END(); 
3089 - return voltage_supported; 3092 + return dcn20_validate_bandwidth_internal(dc, context, true); 3090 3093 } 3091 3094 3092 3095 // Best case, we support full UCLK switch latency ··· 3122 3111 3123 3112 restore_dml_state: 3124 3113 context->bw_ctx.dml.soc.dram_clock_change_latency_us = p_state_latency_us; 3114 + return voltage_supported; 3115 + } 3125 3116 3117 + bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context, 3118 + bool fast_validate) 3119 + { 3120 + bool voltage_supported = false; 3121 + DC_FP_START(); 3122 + voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate); 3126 3123 DC_FP_END(); 3127 3124 return voltage_supported; 3128 3125 }
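The long comment in the dcn20_resource change describes a pattern worth isolating: keep all floating-point math inside one `noinline` function so the `DC_FP_START`/`DC_FP_END` brackets in the caller are guaranteed to enclose every FP instruction the compiler emits. A runnable userspace sketch, with the brackets modelled as no-op stubs and the function names illustrative:

```c
/* DC_FP_START/END normally save and restore the FPU state in the kernel;
 * here they are stubs, since only the structure is being demonstrated. */
#define DC_FP_START()  do { } while (0)
#define DC_FP_END()    do { } while (0)

/* noinline keeps the double math from being hoisted into the caller,
 * i.e. outside the START/END bracket -- the bug the comment describes. */
__attribute__((noinline))
static int validate_bandwidth_fp(double latency_us, double budget_us)
{
    return latency_us <= budget_us;   /* all FP confined to this body */
}

static int validate_bandwidth(double latency_us, double budget_us)
{
    int ok;

    DC_FP_START();
    ok = validate_bandwidth_fp(latency_us, budget_us);
    DC_FP_END();
    return ok;
}
```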
+1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
··· 53 53 .disable_plane = dcn20_disable_plane, 54 54 .pipe_control_lock = dcn20_pipe_control_lock, 55 55 .interdependent_update_lock = dcn10_lock_all_pipes, 56 + .cursor_lock = dcn10_cursor_lock, 56 57 .prepare_bandwidth = dcn20_prepare_bandwidth, 57 58 .optimize_bandwidth = dcn20_optimize_bandwidth, 58 59 .update_bandwidth = dcn20_update_bandwidth,
+32 -43
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
··· 284 284 .dram_channel_width_bytes = 4, 285 285 .fabric_datapath_to_dcn_data_return_bytes = 32, 286 286 .dcn_downspread_percent = 0.5, 287 - .downspread_percent = 0.5, 287 + .downspread_percent = 0.38, 288 288 .dram_page_open_time_ns = 50.0, 289 289 .dram_rw_turnaround_time_ns = 17.5, 290 290 .dram_return_buffer_per_channel_bytes = 8192, ··· 339 339 #define DCCG_SRII(reg_name, block, id)\ 340 340 .block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \ 341 341 mm ## block ## id ## _ ## reg_name 342 + 343 + #define VUPDATE_SRII(reg_name, block, id)\ 344 + .reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \ 345 + mm ## reg_name ## _ ## block ## id 342 346 343 347 /* NBIO */ 344 348 #define NBIO_BASE_INNER(seg) \ ··· 1378 1374 { 1379 1375 struct dcn21_resource_pool *pool = TO_DCN21_RES_POOL(dc->res_pool); 1380 1376 struct clk_limit_table *clk_table = &bw_params->clk_table; 1381 - unsigned int i, j, k; 1382 - int closest_clk_lvl; 1377 + struct _vcs_dpi_voltage_scaling_st clock_limits[DC__VOLTAGE_STATES]; 1378 + unsigned int i, j, closest_clk_lvl; 1383 1379 1384 1380 // Default clock levels are used for diags, which may lead to overclocking. 
1385 - if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) && !IS_DIAG_DC(dc->ctx->dce_environment)) { 1381 + if (!IS_DIAG_DC(dc->ctx->dce_environment)) { 1386 1382 dcn2_1_ip.max_num_otg = pool->base.res_cap->num_timing_generator; 1387 1383 dcn2_1_ip.max_num_dpp = pool->base.pipe_count; 1388 1384 dcn2_1_soc.num_chans = bw_params->num_channels; 1389 1385 1390 - /* Vmin: leave lowest DCN clocks, override with dcfclk, fclk, memclk from fuse */ 1391 - dcn2_1_soc.clock_limits[0].state = 0; 1392 - dcn2_1_soc.clock_limits[0].dcfclk_mhz = clk_table->entries[0].dcfclk_mhz; 1393 - dcn2_1_soc.clock_limits[0].fabricclk_mhz = clk_table->entries[0].fclk_mhz; 1394 - dcn2_1_soc.clock_limits[0].socclk_mhz = clk_table->entries[0].socclk_mhz; 1395 - dcn2_1_soc.clock_limits[0].dram_speed_mts = clk_table->entries[0].memclk_mhz * 2; 1396 - 1397 - /* 1398 - * Other levels: find closest DCN clocks that fit the given clock limit using dcfclk 1399 - * as indicator 1400 - */ 1401 - 1402 - closest_clk_lvl = -1; 1403 - /* index currently being filled */ 1404 - k = 1; 1405 - for (i = 1; i < clk_table->num_entries; i++) { 1406 - /* loop backwards, skip duplicate state*/ 1407 - for (j = dcn2_1_soc.num_states - 1; j >= k; j--) { 1386 + ASSERT(clk_table->num_entries); 1387 + for (i = 0; i < clk_table->num_entries; i++) { 1388 + /* loop backwards*/ 1389 + for (closest_clk_lvl = 0, j = dcn2_1_soc.num_states - 1; j >= 0; j--) { 1408 1390 if ((unsigned int) dcn2_1_soc.clock_limits[j].dcfclk_mhz <= clk_table->entries[i].dcfclk_mhz) { 1409 1391 closest_clk_lvl = j; 1410 1392 break; 1411 1393 } 1412 1394 } 1413 1395 1414 - /* if found a lvl that fits, use the DCN clks from it, if not, go to next clk limit*/ 1415 - if (closest_clk_lvl != -1) { 1416 - dcn2_1_soc.clock_limits[k].state = i; 1417 - dcn2_1_soc.clock_limits[k].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz; 1418 - dcn2_1_soc.clock_limits[k].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
1419 - dcn2_1_soc.clock_limits[k].socclk_mhz = clk_table->entries[i].socclk_mhz; 1420 - dcn2_1_soc.clock_limits[k].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2; 1396 + clock_limits[i].state = i; 1397 + clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz; 1398 + clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz; 1399 + clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz; 1400 + clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2; 1421 1401 1422 - dcn2_1_soc.clock_limits[k].dispclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dispclk_mhz; 1423 - dcn2_1_soc.clock_limits[k].dppclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dppclk_mhz; 1424 - dcn2_1_soc.clock_limits[k].dram_bw_per_chan_gbps = dcn2_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps; 1425 - dcn2_1_soc.clock_limits[k].dscclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz; 1426 - dcn2_1_soc.clock_limits[k].dtbclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz; 1427 - dcn2_1_soc.clock_limits[k].phyclk_d18_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz; 1428 - dcn2_1_soc.clock_limits[k].phyclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz; 1429 - k++; 1430 - } 1402 + clock_limits[i].dispclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dispclk_mhz; 1403 + clock_limits[i].dppclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dppclk_mhz; 1404 + clock_limits[i].dram_bw_per_chan_gbps = dcn2_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps; 1405 + clock_limits[i].dscclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz; 1406 + clock_limits[i].dtbclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz; 1407 + clock_limits[i].phyclk_d18_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz; 1408 + clock_limits[i].phyclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz; 1431 1409 } 1432 - dcn2_1_soc.num_states = k; 1410 + for (i = 0; i < clk_table->num_entries; i++)
1411 + dcn2_1_soc.clock_limits[i] = clock_limits[i]; 1412 + if (clk_table->num_entries) { 1413 + dcn2_1_soc.num_states = clk_table->num_entries; 1414 + /* duplicate last level */ 1415 + dcn2_1_soc.clock_limits[dcn2_1_soc.num_states] = dcn2_1_soc.clock_limits[dcn2_1_soc.num_states - 1]; 1416 + dcn2_1_soc.clock_limits[dcn2_1_soc.num_states].state = dcn2_1_soc.num_states; 1417 + } 1433 1418 } 1434 - 1435 - /* duplicate last level */ 1436 - dcn2_1_soc.clock_limits[dcn2_1_soc.num_states] = dcn2_1_soc.clock_limits[dcn2_1_soc.num_states - 1]; 1437 - dcn2_1_soc.clock_limits[dcn2_1_soc.num_states].state = dcn2_1_soc.num_states; 1438 1419 1439 1420 dml_init_instance(&dc->dml, &dcn2_1_soc, &dcn2_1_ip, DML_PROJECT_DCN21); 1440 1421 }
+4 -4
drivers/gpu/drm/amd/display/dc/dml/dcn21/display_rq_dlg_calc_21.c
··· 1200 1200 min_hratio_fact_l = 1.0; 1201 1201 min_hratio_fact_c = 1.0; 1202 1202 1203 - if (htaps_l <= 1) 1203 + if (hratio_l <= 1) 1204 1204 min_hratio_fact_l = 2.0; 1205 1205 else if (htaps_l <= 6) { 1206 1206 if ((hratio_l * 2.0) > 4.0) ··· 1216 1216 1217 1217 hscale_pixel_rate_l = min_hratio_fact_l * dppclk_freq_in_mhz; 1218 1218 1219 - if (htaps_c <= 1) 1219 + if (hratio_c <= 1) 1220 1220 min_hratio_fact_c = 2.0; 1221 1221 else if (htaps_c <= 6) { 1222 1222 if ((hratio_c * 2.0) > 4.0) ··· 1522 1522 1523 1523 disp_dlg_regs->refcyc_per_vm_group_vblank = get_refcyc_per_vm_group_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; 1524 1524 disp_dlg_regs->refcyc_per_vm_group_flip = get_refcyc_per_vm_group_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; 1525 - disp_dlg_regs->refcyc_per_vm_req_vblank = get_refcyc_per_vm_req_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; 1526 - disp_dlg_regs->refcyc_per_vm_req_flip = get_refcyc_per_vm_req_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz; 1525 + disp_dlg_regs->refcyc_per_vm_req_vblank = get_refcyc_per_vm_req_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10); 1526 + disp_dlg_regs->refcyc_per_vm_req_flip = get_refcyc_per_vm_req_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10); 1527 1527 1528 1528 // Clamp to max for now 1529 1529 if (disp_dlg_regs->refcyc_per_vm_group_vblank >= (unsigned int)dml_pow(2, 23))
+16
drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
··· 210 210 struct mpcc_blnd_cfg *blnd_cfg, 211 211 int mpcc_id); 212 212 213 + /* 214 + * Lock cursor updates for the specified OPP. 215 + * OPP defines the set of MPCC that are locked together for cursor. 216 + * 217 + * Parameters: 218 + * [in] mpc - MPC context. 219 + * [in] opp_id - The OPP to lock cursor updates on 220 + * [in] lock - lock/unlock the OPP 221 + * 222 + * Return: void 223 + */ 224 + void (*cursor_lock)( 225 + struct mpc *mpc, 226 + int opp_id, 227 + bool lock); 228 + 213 229 struct mpcc* (*get_mpcc_for_dpp)( 214 230 struct mpc_tree *tree, 215 231 int dpp_id);
+1
drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
··· 86 86 struct dc_state *context, bool lock); 87 87 void (*set_flip_control_gsl)(struct pipe_ctx *pipe_ctx, 88 88 bool flip_immediate); 89 + void (*cursor_lock)(struct dc *dc, struct pipe_ctx *pipe, bool lock); 89 90 90 91 /* Timing Related */ 91 92 void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes,
+1 -1
drivers/gpu/drm/amd/display/dc/os_types.h
··· 108 108 #define ASSERT(expr) ASSERT_CRITICAL(expr) 109 109 110 110 #else 111 - #define ASSERT(expr) WARN_ON(!(expr)) 111 + #define ASSERT(expr) WARN_ON_ONCE(!(expr)) 112 112 #endif 113 113 114 114 #define BREAK_TO_DEBUGGER() ASSERT(0)
+5 -4
drivers/gpu/drm/amd/powerplay/amd_powerplay.c
··· 1435 1435 if (!hwmgr) 1436 1436 return -EINVAL; 1437 1437 1438 - if (!hwmgr->pm_en || !hwmgr->hwmgr_func->get_asic_baco_capability) 1438 + if (!(hwmgr->not_vf && amdgpu_dpm) || 1439 + !hwmgr->hwmgr_func->get_asic_baco_capability) 1439 1440 return 0; 1440 1441 1441 1442 mutex_lock(&hwmgr->smu_lock); ··· 1453 1452 if (!hwmgr) 1454 1453 return -EINVAL; 1455 1454 1456 - if (!(hwmgr->not_vf && amdgpu_dpm) || 1457 - !hwmgr->hwmgr_func->get_asic_baco_state) 1455 + if (!hwmgr->pm_en || !hwmgr->hwmgr_func->get_asic_baco_state) 1458 1456 return 0; 1459 1457 1460 1458 mutex_lock(&hwmgr->smu_lock); ··· 1470 1470 if (!hwmgr) 1471 1471 return -EINVAL; 1472 1472 1473 - if (!hwmgr->pm_en || !hwmgr->hwmgr_func->set_asic_baco_state) 1473 + if (!(hwmgr->not_vf && amdgpu_dpm) || 1474 + !hwmgr->hwmgr_func->set_asic_baco_state) 1474 1475 return 0; 1475 1476 1476 1477 mutex_lock(&hwmgr->smu_lock);
+6 -2
drivers/gpu/drm/drm_dp_mst_topology.c
··· 3442 3442 drm_dp_queue_down_tx(mgr, txmsg); 3443 3443 3444 3444 ret = drm_dp_mst_wait_tx_reply(mstb, txmsg); 3445 - if (ret > 0 && txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) 3446 - ret = -EIO; 3445 + if (ret > 0) { 3446 + if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) 3447 + ret = -EIO; 3448 + else 3449 + ret = size; 3450 + } 3447 3451 3448 3452 kfree(txmsg); 3449 3453 fail_put:
+1 -1
drivers/gpu/drm/drm_edid.c
··· 5111 5111 struct drm_display_mode *mode; 5112 5112 unsigned pixel_clock = (timings->pixel_clock[0] | 5113 5113 (timings->pixel_clock[1] << 8) | 5114 - (timings->pixel_clock[2] << 16)); 5114 + (timings->pixel_clock[2] << 16)) + 1; 5115 5115 unsigned hactive = (timings->hactive[0] | timings->hactive[1] << 8) + 1; 5116 5116 unsigned hblank = (timings->hblank[0] | timings->hblank[1] << 8) + 1; 5117 5117 unsigned hsync = (timings->hsync[0] | (timings->hsync[1] & 0x7f) << 8) + 1;
+7 -1
drivers/gpu/drm/drm_hdcp.c
··· 241 241 242 242 ret = request_firmware_direct(&fw, (const char *)fw_name, 243 243 drm_dev->dev); 244 - if (ret < 0) 244 + if (ret < 0) { 245 + *revoked_ksv_cnt = 0; 246 + *revoked_ksv_list = NULL; 247 + ret = 0; 245 248 goto exit; 249 + } 246 250 247 251 if (fw->size && fw->data) 248 252 ret = drm_hdcp_srm_update(fw->data, fw->size, revoked_ksv_list, ··· 291 287 292 288 ret = drm_hdcp_request_srm(drm_dev, &revoked_ksv_list, 293 289 &revoked_ksv_cnt); 290 + if (ret) 291 + return ret; 294 292 295 293 /* revoked_ksv_cnt will be zero when above function failed */ 296 294 for (i = 0; i < revoked_ksv_cnt; i++)
+20 -4
drivers/gpu/drm/i915/gem/i915_gem_tiling.c
··· 182 182 int tiling_mode, unsigned int stride) 183 183 { 184 184 struct i915_ggtt *ggtt = &to_i915(obj->base.dev)->ggtt; 185 - struct i915_vma *vma; 185 + struct i915_vma *vma, *vn; 186 + LIST_HEAD(unbind); 186 187 int ret = 0; 187 188 188 189 if (tiling_mode == I915_TILING_NONE) 189 190 return 0; 190 191 191 192 mutex_lock(&ggtt->vm.mutex); 193 + 194 + spin_lock(&obj->vma.lock); 192 195 for_each_ggtt_vma(vma, obj) { 196 + GEM_BUG_ON(vma->vm != &ggtt->vm); 197 + 193 198 if (i915_vma_fence_prepare(vma, tiling_mode, stride)) 194 199 continue; 195 200 196 - ret = __i915_vma_unbind(vma); 197 - if (ret) 198 - break; 201 + list_move(&vma->vm_link, &unbind); 199 202 } 203 + spin_unlock(&obj->vma.lock); 204 + 205 + list_for_each_entry_safe(vma, vn, &unbind, vm_link) { 206 + ret = __i915_vma_unbind(vma); 207 + if (ret) { 208 + /* Restore the remaining vma on an error */ 209 + list_splice(&unbind, &ggtt->vm.bound_list); 210 + break; 211 + } 212 + } 213 + 200 214 mutex_unlock(&ggtt->vm.mutex); 201 215 202 216 return ret; ··· 282 268 } 283 269 mutex_unlock(&obj->mm.lock); 284 270 271 + spin_lock(&obj->vma.lock); 285 272 for_each_ggtt_vma(vma, obj) { 286 273 vma->fence_size = 287 274 i915_gem_fence_size(i915, vma->size, tiling, stride); ··· 293 278 if (vma->fence) 294 279 vma->fence->dirty = true; 295 280 } 281 + spin_unlock(&obj->vma.lock); 296 282 297 283 obj->tiling_and_stride = tiling | stride; 298 284 i915_gem_object_unlock(obj);
+8 -4
drivers/gpu/drm/i915/gem/selftests/huge_pages.c
··· 1477 1477 unsigned int page_size = BIT(first); 1478 1478 1479 1479 obj = i915_gem_object_create_internal(dev_priv, page_size); 1480 - if (IS_ERR(obj)) 1481 - return PTR_ERR(obj); 1480 + if (IS_ERR(obj)) { 1481 + err = PTR_ERR(obj); 1482 + goto out_vm; 1483 + } 1482 1484 1483 1485 vma = i915_vma_instance(obj, vm, NULL); 1484 1486 if (IS_ERR(vma)) { ··· 1533 1531 } 1534 1532 1535 1533 obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE); 1536 - if (IS_ERR(obj)) 1537 - return PTR_ERR(obj); 1534 + if (IS_ERR(obj)) { 1535 + err = PTR_ERR(obj); 1536 + goto out_vm; 1537 + } 1538 1538 1539 1539 vma = i915_vma_instance(obj, vm, NULL); 1540 1540 if (IS_ERR(vma)) {
+2
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 521 521 522 522 rcu_read_lock(); 523 523 cl = rcu_dereference(from->hwsp_cacheline); 524 + if (i915_request_completed(from)) /* confirm cacheline is valid */ 525 + goto unlock; 524 526 if (unlikely(!i915_active_acquire_if_busy(&cl->active))) 525 527 goto unlock; /* seqno wrapped and completed! */ 526 528 if (unlikely(i915_request_completed(from)))
+2 -4
drivers/gpu/drm/i915/i915_irq.c
··· 3358 3358 { 3359 3359 struct intel_uncore *uncore = &dev_priv->uncore; 3360 3360 3361 - u32 de_pipe_masked = GEN8_PIPE_CDCLK_CRC_DONE; 3361 + u32 de_pipe_masked = gen8_de_pipe_fault_mask(dev_priv) | 3362 + GEN8_PIPE_CDCLK_CRC_DONE; 3362 3363 u32 de_pipe_enables; 3363 3364 u32 de_port_masked = GEN8_AUX_CHANNEL_A; 3364 3365 u32 de_port_enables; ··· 3370 3369 de_misc_masked |= GEN8_DE_MISC_GSE; 3371 3370 3372 3371 if (INTEL_GEN(dev_priv) >= 9) { 3373 - de_pipe_masked |= GEN9_DE_PIPE_IRQ_FAULT_ERRORS; 3374 3372 de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C | 3375 3373 GEN9_AUX_CHANNEL_D; 3376 3374 if (IS_GEN9_LP(dev_priv)) 3377 3375 de_port_masked |= BXT_DE_PORT_GMBUS; 3378 - } else { 3379 - de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS; 3380 3376 } 3381 3377 3382 3378 if (INTEL_GEN(dev_priv) >= 11)
+6 -4
drivers/gpu/drm/i915/i915_vma.c
··· 158 158 159 159 GEM_BUG_ON(!IS_ALIGNED(vma->size, I915_GTT_PAGE_SIZE)); 160 160 161 + spin_lock(&obj->vma.lock); 162 + 161 163 if (i915_is_ggtt(vm)) { 162 164 if (unlikely(overflows_type(vma->size, u32))) 163 - goto err_vma; 165 + goto err_unlock; 164 166 165 167 vma->fence_size = i915_gem_fence_size(vm->i915, vma->size, 166 168 i915_gem_object_get_tiling(obj), 167 169 i915_gem_object_get_stride(obj)); 168 170 if (unlikely(vma->fence_size < vma->size || /* overflow */ 169 171 vma->fence_size > vm->total)) 170 - goto err_vma; 172 + goto err_unlock; 171 173 172 174 GEM_BUG_ON(!IS_ALIGNED(vma->fence_size, I915_GTT_MIN_ALIGNMENT)); 173 175 ··· 180 178 181 179 __set_bit(I915_VMA_GGTT_BIT, __i915_vma_flags(vma)); 182 180 } 183 - 184 - spin_lock(&obj->vma.lock); 185 181 186 182 rb = NULL; 187 183 p = &obj->vma.tree.rb_node; ··· 225 225 226 226 return vma; 227 227 228 + err_unlock: 229 + spin_unlock(&obj->vma.lock); 228 230 err_vma: 229 231 i915_vma_free(vma); 230 232 return ERR_PTR(-E2BIG);
+1
drivers/gpu/drm/ingenic/ingenic-drm.c
··· 843 843 { .compatible = "ingenic,jz4770-lcd", .data = &jz4770_soc_info }, 844 844 { /* sentinel */ }, 845 845 }; 846 + MODULE_DEVICE_TABLE(of, ingenic_drm_of_match); 846 847 847 848 static struct platform_driver ingenic_drm_driver = { 848 849 .driver = {
+5 -5
drivers/gpu/drm/qxl/qxl_cmd.c
··· 480 480 return ret; 481 481 482 482 ret = qxl_release_reserve_list(release, true); 483 - if (ret) 483 + if (ret) { 484 + qxl_release_free(qdev, release); 484 485 return ret; 485 - 486 + } 486 487 cmd = (struct qxl_surface_cmd *)qxl_release_map(qdev, release); 487 488 cmd->type = QXL_SURFACE_CMD_CREATE; 488 489 cmd->flags = QXL_SURF_FLAG_KEEP_DATA; ··· 500 499 /* no need to add a release to the fence for this surface bo, 501 500 since it is only released when we ask to destroy the surface 502 501 and it would never signal otherwise */ 503 - qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); 504 502 qxl_release_fence_buffer_objects(release); 503 + qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); 505 504 506 505 surf->hw_surf_alloc = true; 507 506 spin_lock(&qdev->surf_id_idr_lock); ··· 543 542 cmd->surface_id = id; 544 543 qxl_release_unmap(qdev, release, &cmd->release_info); 545 544 546 - qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); 547 - 548 545 qxl_release_fence_buffer_objects(release); 546 + qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false); 549 547 550 548 return 0; 551 549 }
+3 -3
drivers/gpu/drm/qxl/qxl_display.c
··· 510 510 cmd->u.set.visible = 1; 511 511 qxl_release_unmap(qdev, release, &cmd->release_info); 512 512 513 - qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 514 513 qxl_release_fence_buffer_objects(release); 514 + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 515 515 516 516 return ret; 517 517 ··· 652 652 cmd->u.position.y = plane->state->crtc_y + fb->hot_y; 653 653 654 654 qxl_release_unmap(qdev, release, &cmd->release_info); 655 - qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 656 655 qxl_release_fence_buffer_objects(release); 656 + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 657 657 658 658 if (old_cursor_bo != NULL) 659 659 qxl_bo_unpin(old_cursor_bo); ··· 700 700 cmd->type = QXL_CURSOR_HIDE; 701 701 qxl_release_unmap(qdev, release, &cmd->release_info); 702 702 703 - qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 704 703 qxl_release_fence_buffer_objects(release); 704 + qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false); 705 705 } 706 706 707 707 static void qxl_update_dumb_head(struct qxl_device *qdev,
+4 -3
drivers/gpu/drm/qxl/qxl_draw.c
··· 209 209 goto out_release_backoff; 210 210 211 211 rects = drawable_set_clipping(qdev, num_clips, clips_bo); 212 - if (!rects) 212 + if (!rects) { 213 + ret = -EINVAL; 213 214 goto out_release_backoff; 214 - 215 + } 215 216 drawable = (struct qxl_drawable *)qxl_release_map(qdev, release); 216 217 217 218 drawable->clip.type = SPICE_CLIP_TYPE_RECTS; ··· 243 242 } 244 243 qxl_bo_kunmap(clips_bo); 245 244 246 - qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); 247 245 qxl_release_fence_buffer_objects(release); 246 + qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false); 248 247 249 248 out_release_backoff: 250 249 if (ret)
+2 -1
drivers/gpu/drm/qxl/qxl_image.c
··· 212 212 break; 213 213 default: 214 214 DRM_ERROR("unsupported image bit depth\n"); 215 - return -EINVAL; /* TODO: cleanup */ 215 + qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr); 216 + return -EINVAL; 216 217 } 217 218 image->u.bitmap.flags = QXL_BITMAP_TOP_DOWN; 218 219 image->u.bitmap.x = width;
+1 -4
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 261 261 apply_surf_reloc(qdev, &reloc_info[i]); 262 262 } 263 263 264 + qxl_release_fence_buffer_objects(release); 264 265 ret = qxl_push_command_ring_release(qdev, release, cmd->type, true); 265 - if (ret) 266 - qxl_release_backoff_reserve_list(release); 267 - else 268 - qxl_release_fence_buffer_objects(release); 269 266 270 267 out_free_bos: 271 268 out_free_release:
+1 -1
drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
··· 717 717 struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode; 718 718 struct sun6i_dsi *dsi = encoder_to_sun6i_dsi(encoder); 719 719 struct mipi_dsi_device *device = dsi->device; 720 - union phy_configure_opts opts = { 0 }; 720 + union phy_configure_opts opts = { }; 721 721 struct phy_configure_opts_mipi_dphy *cfg = &opts.mipi_dphy; 722 722 u16 delay; 723 723 int err;
+1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 221 221 /* virtio_ioctl.c */ 222 222 #define DRM_VIRTIO_NUM_IOCTLS 10 223 223 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS]; 224 + void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file); 224 225 225 226 /* virtio_kms.c */ 226 227 int virtio_gpu_init(struct drm_device *dev);
+3
drivers/gpu/drm/virtio/virtgpu_gem.c
··· 39 39 int ret; 40 40 u32 handle; 41 41 42 + if (vgdev->has_virgl_3d) 43 + virtio_gpu_create_context(dev, file); 44 + 42 45 ret = virtio_gpu_object_create(vgdev, params, &obj, NULL); 43 46 if (ret < 0) 44 47 return ret;
+1 -2
drivers/gpu/drm/virtio/virtgpu_ioctl.c
··· 34 34 35 35 #include "virtgpu_drv.h" 36 36 37 - static void virtio_gpu_create_context(struct drm_device *dev, 38 - struct drm_file *file) 37 + void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file) 39 38 { 40 39 struct virtio_gpu_device *vgdev = dev->dev_private; 41 40 struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
+6 -11
drivers/gpu/drm/virtio/virtgpu_kms.c
··· 53 53 events_clear, &events_clear); 54 54 } 55 55 56 - static void virtio_gpu_context_destroy(struct virtio_gpu_device *vgdev, 57 - uint32_t ctx_id) 58 - { 59 - virtio_gpu_cmd_context_destroy(vgdev, ctx_id); 60 - virtio_gpu_notify(vgdev); 61 - ida_free(&vgdev->ctx_id_ida, ctx_id - 1); 62 - } 63 - 64 56 static void virtio_gpu_init_vq(struct virtio_gpu_queue *vgvq, 65 57 void (*work_func)(struct work_struct *work)) 66 58 { ··· 267 275 void virtio_gpu_driver_postclose(struct drm_device *dev, struct drm_file *file) 268 276 { 269 277 struct virtio_gpu_device *vgdev = dev->dev_private; 270 - struct virtio_gpu_fpriv *vfpriv; 278 + struct virtio_gpu_fpriv *vfpriv = file->driver_priv; 271 279 272 280 if (!vgdev->has_virgl_3d) 273 281 return; 274 282 275 - vfpriv = file->driver_priv; 283 + if (vfpriv->context_created) { 284 + virtio_gpu_cmd_context_destroy(vgdev, vfpriv->ctx_id); 285 + virtio_gpu_notify(vgdev); 286 + } 276 287 277 - virtio_gpu_context_destroy(vgdev, vfpriv->ctx_id); 288 + ida_free(&vgdev->ctx_id_ida, vfpriv->ctx_id - 1); 278 289 mutex_destroy(&vfpriv->context_lock); 279 290 kfree(vfpriv); 280 291 file->driver_priv = NULL;
+1
drivers/hid/Kconfig
··· 1155 1155 config HID_MCP2221 1156 1156 tristate "Microchip MCP2221 HID USB-to-I2C/SMbus host support" 1157 1157 depends on USB_HID && I2C 1158 + depends on GPIOLIB 1158 1159 ---help--- 1159 1160 Provides I2C and SMBUS host adapter functionality over USB-HID 1160 1161 through MCP2221 device.
+1
drivers/hid/hid-alps.c
··· 802 802 break; 803 803 case HID_DEVICE_ID_ALPS_U1_DUAL: 804 804 case HID_DEVICE_ID_ALPS_U1: 805 + case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY: 805 806 data->dev_type = U1; 806 807 break; 807 808 default:
+7 -1
drivers/hid/hid-ids.h
··· 79 79 #define HID_DEVICE_ID_ALPS_U1_DUAL_PTP 0x121F 80 80 #define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP 0x1220 81 81 #define HID_DEVICE_ID_ALPS_U1 0x1215 82 + #define HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY 0x121E 82 83 #define HID_DEVICE_ID_ALPS_T4_BTNLESS 0x120C 83 84 #define HID_DEVICE_ID_ALPS_1222 0x1222 84 - 85 85 86 86 #define USB_VENDOR_ID_AMI 0x046b 87 87 #define USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE 0xff10 ··· 385 385 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349 0x7349 386 386 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7 0x73f7 387 387 #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001 388 + #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002 0xc002 388 389 389 390 #define USB_VENDOR_ID_ELAN 0x04f3 390 391 #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401 ··· 760 759 #define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2 0xc218 761 760 #define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2_2 0xc219 762 761 #define USB_DEVICE_ID_LOGITECH_G15_LCD 0xc222 762 + #define USB_DEVICE_ID_LOGITECH_G11 0xc225 763 763 #define USB_DEVICE_ID_LOGITECH_G15_V2_LCD 0xc227 764 764 #define USB_DEVICE_ID_LOGITECH_G510 0xc22d 765 765 #define USB_DEVICE_ID_LOGITECH_G510_USB_AUDIO 0xc22e ··· 1099 1097 #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 1100 1098 #define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200 1101 1099 1100 + #define I2C_VENDOR_ID_SYNAPTICS 0x06cb 1101 + #define I2C_PRODUCT_ID_SYNAPTICS_SYNA2393 0x7a13 1102 + 1102 1103 #define USB_VENDOR_ID_SYNAPTICS 0x06cb 1103 1104 #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 1104 1105 #define USB_DEVICE_ID_SYNAPTICS_INT_TP 0x0002 ··· 1116 1111 #define USB_DEVICE_ID_SYNAPTICS_LTS2 0x1d10 1117 1112 #define USB_DEVICE_ID_SYNAPTICS_HD 0x0ac3 1118 1113 #define USB_DEVICE_ID_SYNAPTICS_QUAD_HD 0x1ac3 1114 + #define USB_DEVICE_ID_SYNAPTICS_DELL_K12A 0x2819 1119 1115 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968 1120 1116 #define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710 1121 1117 #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7
+4
drivers/hid/hid-lg-g15.c
··· 872 872 } 873 873 874 874 static const struct hid_device_id lg_g15_devices[] = { 875 + /* The G11 is a G15 without the LCD, treat it as a G15 */ 876 + { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 877 + USB_DEVICE_ID_LOGITECH_G11), 878 + .driver_data = LG_G15 }, 875 879 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 876 880 USB_DEVICE_ID_LOGITECH_G15_LCD), 877 881 .driver_data = LG_G15 },
+3
drivers/hid/hid-multitouch.c
··· 1922 1922 { .driver_data = MT_CLS_EGALAX_SERIAL, 1923 1923 MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 1924 1924 USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001) }, 1925 + { .driver_data = MT_CLS_EGALAX, 1926 + MT_USB_DEVICE(USB_VENDOR_ID_DWAV, 1927 + USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) }, 1925 1928 1926 1929 /* Elitegroup panel */ 1927 1930 { .driver_data = MT_CLS_SERIAL,
+1
drivers/hid/hid-quirks.c
··· 163 163 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2), HID_QUIRK_NO_INIT_REPORTS }, 164 164 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_QUAD_HD), HID_QUIRK_NO_INIT_REPORTS }, 165 165 { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP_V103), HID_QUIRK_NO_INIT_REPORTS }, 166 + { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DELL_K12A), HID_QUIRK_NO_INIT_REPORTS }, 166 167 { HID_USB_DEVICE(USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD), HID_QUIRK_BADPAD }, 167 168 { HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT }, 168 169 { HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
+2
drivers/hid/i2c-hid/i2c-hid-core.c
··· 177 177 I2C_HID_QUIRK_BOGUS_IRQ }, 178 178 { USB_VENDOR_ID_ALPS_JP, HID_ANY_ID, 179 179 I2C_HID_QUIRK_RESET_ON_RESUME }, 180 + { I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393, 181 + I2C_HID_QUIRK_RESET_ON_RESUME }, 180 182 { USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720, 181 183 I2C_HID_QUIRK_BAD_INPUT_SIZE }, 182 184 { 0, 0 }
+29 -8
drivers/hid/usbhid/hid-core.c
··· 682 682 struct usbhid_device *usbhid = hid->driver_data; 683 683 int res; 684 684 685 + mutex_lock(&usbhid->mutex); 686 + 685 687 set_bit(HID_OPENED, &usbhid->iofl); 686 688 687 - if (hid->quirks & HID_QUIRK_ALWAYS_POLL) 688 - return 0; 689 + if (hid->quirks & HID_QUIRK_ALWAYS_POLL) { 690 + res = 0; 691 + goto Done; 692 + } 689 693 690 694 res = usb_autopm_get_interface(usbhid->intf); 691 695 /* the device must be awake to reliably request remote wakeup */ 692 696 if (res < 0) { 693 697 clear_bit(HID_OPENED, &usbhid->iofl); 694 - return -EIO; 698 + res = -EIO; 699 + goto Done; 695 700 } 696 701 697 702 usbhid->intf->needs_remote_wakeup = 1; ··· 730 725 msleep(50); 731 726 732 727 clear_bit(HID_RESUME_RUNNING, &usbhid->iofl); 728 + 729 + Done: 730 + mutex_unlock(&usbhid->mutex); 733 731 return res; 734 732 } 735 733 736 734 static void usbhid_close(struct hid_device *hid) 737 735 { 738 736 struct usbhid_device *usbhid = hid->driver_data; 737 + 738 + mutex_lock(&usbhid->mutex); 739 739 740 740 /* 741 741 * Make sure we don't restart data acquisition due to ··· 753 743 clear_bit(HID_IN_POLLING, &usbhid->iofl); 754 744 spin_unlock_irq(&usbhid->lock); 755 745 756 - if (hid->quirks & HID_QUIRK_ALWAYS_POLL) 757 - return; 746 + if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) { 747 + hid_cancel_delayed_stuff(usbhid); 748 + usb_kill_urb(usbhid->urbin); 749 + usbhid->intf->needs_remote_wakeup = 0; 750 + } 758 751 759 - hid_cancel_delayed_stuff(usbhid); 760 - usb_kill_urb(usbhid->urbin); 761 - usbhid->intf->needs_remote_wakeup = 0; 752 + mutex_unlock(&usbhid->mutex); 762 753 } 763 754 764 755 /* ··· 1068 1057 unsigned int n, insize = 0; 1069 1058 int ret; 1070 1059 1060 + mutex_lock(&usbhid->mutex); 1061 + 1071 1062 clear_bit(HID_DISCONNECTED, &usbhid->iofl); 1072 1063 1073 1064 usbhid->bufsize = HID_MIN_BUFFER_SIZE; ··· 1190 1177 usbhid_set_leds(hid); 1191 1178 device_set_wakeup_enable(&dev->dev, 1); 1192 1179 } 1180 + 1181 + mutex_unlock(&usbhid->mutex); 1193 1182 return 0; 
1194 1183 1195 1184 fail: ··· 1202 1187 usbhid->urbout = NULL; 1203 1188 usbhid->urbctrl = NULL; 1204 1189 hid_free_buffers(dev, hid); 1190 + mutex_unlock(&usbhid->mutex); 1205 1191 return ret; 1206 1192 } 1207 1193 ··· 1217 1201 clear_bit(HID_IN_POLLING, &usbhid->iofl); 1218 1202 usbhid->intf->needs_remote_wakeup = 0; 1219 1203 } 1204 + 1205 + mutex_lock(&usbhid->mutex); 1220 1206 1221 1207 clear_bit(HID_STARTED, &usbhid->iofl); 1222 1208 spin_lock_irq(&usbhid->lock); /* Sync with error and led handlers */ ··· 1240 1222 usbhid->urbout = NULL; 1241 1223 1242 1224 hid_free_buffers(hid_to_usb_dev(hid), hid); 1225 + 1226 + mutex_unlock(&usbhid->mutex); 1243 1227 } 1244 1228 1245 1229 static int usbhid_power(struct hid_device *hid, int lvl) ··· 1402 1382 INIT_WORK(&usbhid->reset_work, hid_reset); 1403 1383 timer_setup(&usbhid->io_retry, hid_retry_timeout, 0); 1404 1384 spin_lock_init(&usbhid->lock); 1385 + mutex_init(&usbhid->mutex); 1405 1386 1406 1387 ret = hid_add_device(hid); 1407 1388 if (ret) {
+1
drivers/hid/usbhid/usbhid.h
··· 80 80 dma_addr_t outbuf_dma; /* Output buffer dma */ 81 81 unsigned long last_out; /* record of last output for timeouts */ 82 82 83 + struct mutex mutex; /* start/stop/open/close */ 83 84 spinlock_t lock; /* fifo spinlock */ 84 85 unsigned long iofl; /* I/O flags (CTRL_RUNNING, OUT_RUNNING) */ 85 86 struct timer_list io_retry; /* Retry timer */
+3 -1
drivers/hid/wacom_sys.c
··· 319 319 data[0] = field->report->id; 320 320 ret = wacom_get_report(hdev, HID_FEATURE_REPORT, 321 321 data, n, WAC_CMD_RETRIES); 322 - if (ret == n) { 322 + if (ret == n && features->type == HID_GENERIC) { 323 323 ret = hid_report_raw_event(hdev, 324 324 HID_FEATURE_REPORT, data, n, 0); 325 + } else if (ret == 2 && features->type != HID_GENERIC) { 326 + features->touch_max = data[1]; 325 327 } else { 326 328 features->touch_max = 16; 327 329 hid_warn(hdev, "wacom_feature_mapping: "
+22 -66
drivers/hid/wacom_wac.c
··· 1427 1427 { 1428 1428 struct input_dev *pad_input = wacom->pad_input; 1429 1429 unsigned char *data = wacom->data; 1430 + int nbuttons = wacom->features.numbered_buttons; 1430 1431 1431 - int buttons = data[282] | ((data[281] & 0x40) << 2); 1432 + int expresskeys = data[282]; 1433 + int center = (data[281] & 0x40) >> 6; 1432 1434 int ring = data[285] & 0x7F; 1433 1435 bool ringstatus = data[285] & 0x80; 1434 - bool prox = buttons || ringstatus; 1436 + bool prox = expresskeys || center || ringstatus; 1435 1437 1436 1438 /* Fix touchring data: userspace expects 0 at left and increasing clockwise */ 1437 1439 ring = 71 - ring; ··· 1441 1439 if (ring > 71) 1442 1440 ring -= 72; 1443 1441 1444 - wacom_report_numbered_buttons(pad_input, 9, buttons); 1442 + wacom_report_numbered_buttons(pad_input, nbuttons, 1443 + expresskeys | (center << (nbuttons - 1))); 1445 1444 1446 1445 input_report_abs(pad_input, ABS_WHEEL, ringstatus ? ring : 0); ··· 2640 2637 case HID_DG_TIPSWITCH: 2641 2638 hid_data->last_slot_field = equivalent_usage; 2642 2639 break; 2640 + case HID_DG_CONTACTCOUNT: 2641 + hid_data->cc_report = report->id; 2642 + hid_data->cc_index = i; 2643 + hid_data->cc_value_index = j; 2644 + break; 2643 2645 } 2644 2646 } 2647 + } 2648 + 2649 + if (hid_data->cc_report != 0 && 2650 + hid_data->cc_index >= 0) { 2651 + struct hid_field *field = report->field[hid_data->cc_index]; 2652 + int value = field->value[hid_data->cc_value_index]; 2653 + if (value) 2654 + hid_data->num_expected = value; 2655 + } 2656 + else { 2657 + hid_data->num_expected = wacom_wac->features.touch_max; 2645 2658 } 2646 2659 } ··· 2668 2649 struct wacom_wac *wacom_wac = &wacom->wacom_wac; 2669 2650 struct input_dev *input = wacom_wac->touch_input; 2670 2651 unsigned touch_max = wacom_wac->features.touch_max; 2671 - struct hid_data *hid_data = &wacom_wac->hid_data; 2672 2652 2673 2653 /* If more packets of data are expected, give us a chance to
2674 2654 * process them rather than immediately syncing a partial ··· 2681 2663 2682 2664 input_sync(input); 2683 2665 wacom_wac->hid_data.num_received = 0; 2684 - hid_data->num_expected = 0; 2685 2666 2686 2667 /* keep touch state for pen event */ 2687 2668 wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac); ··· 2755 2738 } 2756 2739 } 2757 2740 2758 - static void wacom_set_num_expected(struct hid_device *hdev, 2759 - struct hid_report *report, 2760 - int collection_index, 2761 - struct hid_field *field, 2762 - int field_index) 2763 - { 2764 - struct wacom *wacom = hid_get_drvdata(hdev); 2765 - struct wacom_wac *wacom_wac = &wacom->wacom_wac; 2766 - struct hid_data *hid_data = &wacom_wac->hid_data; 2767 - unsigned int original_collection_level = 2768 - hdev->collection[collection_index].level; 2769 - bool end_collection = false; 2770 - int i; 2771 - 2772 - if (hid_data->num_expected) 2773 - return; 2774 - 2775 - // find the contact count value for this segment 2776 - for (i = field_index; i < report->maxfield && !end_collection; i++) { 2777 - struct hid_field *field = report->field[i]; 2778 - unsigned int field_level = 2779 - hdev->collection[field->usage[0].collection_index].level; 2780 - unsigned int j; 2781 - 2782 - if (field_level != original_collection_level) 2783 - continue; 2784 - 2785 - for (j = 0; j < field->maxusage; j++) { 2786 - struct hid_usage *usage = &field->usage[j]; 2787 - 2788 - if (usage->collection_index != collection_index) { 2789 - end_collection = true; 2790 - break; 2791 - } 2792 - if (wacom_equivalent_usage(usage->hid) == HID_DG_CONTACTCOUNT) { 2793 - hid_data->cc_report = report->id; 2794 - hid_data->cc_index = i; 2795 - hid_data->cc_value_index = j; 2796 - 2797 - if (hid_data->cc_report != 0 && 2798 - hid_data->cc_index >= 0) { 2799 - 2800 - struct hid_field *field = 2801 - report->field[hid_data->cc_index]; 2802 - int value = 2803 - field->value[hid_data->cc_value_index]; 2804 - 2805 - if (value) 2806 - hid_data->num_expected = value;
2808 - } 2809 - } 2810 - } 2811 - 2812 - if (hid_data->cc_report == 0 || hid_data->cc_index < 0) 2813 - hid_data->num_expected = wacom_wac->features.touch_max; 2814 - } 2815 - 2816 2741 static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *report, 2817 2742 int collection_index, struct hid_field *field, 2818 2743 int field_index) 2819 2744 { 2820 2745 struct wacom *wacom = hid_get_drvdata(hdev); 2821 2746 2822 - if (WACOM_FINGER_FIELD(field)) 2823 - wacom_set_num_expected(hdev, report, collection_index, field, 2824 - field_index); 2825 2747 wacom_report_events(hdev, report, collection_index, field_index); 2826 2748 2827 2749 /*
+1 -5
drivers/hv/hv.c
··· 184 184 185 185 shared_sint.vector = HYPERVISOR_CALLBACK_VECTOR; 186 186 shared_sint.masked = false; 187 - if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED) 188 - shared_sint.auto_eoi = false; 189 - else 190 - shared_sint.auto_eoi = true; 191 - 187 + shared_sint.auto_eoi = hv_recommend_using_aeoi(); 192 188 hv_set_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64); 193 189 194 190 /* Enable the global synic bit */
+2 -2
drivers/hv/hv_trace.h
··· 286 286 __field(int, ret) 287 287 ), 288 288 TP_fast_assign( 289 - memcpy(__entry->guest_id, &msg->guest_endpoint_id.b, 16); 290 - memcpy(__entry->host_id, &msg->host_service_id.b, 16); 289 + export_guid(__entry->guest_id, &msg->guest_endpoint_id); 290 + export_guid(__entry->host_id, &msg->host_service_id); 291 291 __entry->ret = ret; 292 292 ), 293 293 TP_printk("sending guest_endpoint_id %pUl, host_service_id %pUl, "
+34 -9
drivers/hv/vmbus_drv.c
··· 978 978 979 979 return drv->resume(dev); 980 980 } 981 + #else 982 + #define vmbus_suspend NULL 983 + #define vmbus_resume NULL 981 984 #endif /* CONFIG_PM_SLEEP */ 982 985 983 986 /* ··· 1000 997 } 1001 998 1002 999 /* 1003 - * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than 1004 - * SET_SYSTEM_SLEEP_PM_OPS: see the comment before vmbus_bus_pm. 1000 + * Note: we must use the "noirq" ops: see the comment before vmbus_bus_pm. 1001 + * 1002 + * suspend_noirq/resume_noirq are set to NULL to support Suspend-to-Idle: we 1003 + * shouldn't suspend the vmbus devices upon Suspend-to-Idle, otherwise there 1004 + * is no way to wake up a Generation-2 VM. 1005 + * 1006 + * The other 4 ops are for hibernation. 1005 1007 */ 1008 + 1006 1009 static const struct dev_pm_ops vmbus_pm = { 1007 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_suspend, vmbus_resume) 1010 + .suspend_noirq = NULL, 1011 + .resume_noirq = NULL, 1012 + .freeze_noirq = vmbus_suspend, 1013 + .thaw_noirq = vmbus_resume, 1014 + .poweroff_noirq = vmbus_suspend, 1015 + .restore_noirq = vmbus_resume, 1008 1016 }; 1009 1017 1010 1018 /* The one and only one */ ··· 2295 2281 2296 2282 return 0; 2297 2283 } 2284 + #else 2285 + #define vmbus_bus_suspend NULL 2286 + #define vmbus_bus_resume NULL 2298 2287 #endif /* CONFIG_PM_SLEEP */ 2299 2288 2300 2289 static const struct acpi_device_id vmbus_acpi_device_ids[] = { ··· 2308 2291 MODULE_DEVICE_TABLE(acpi, vmbus_acpi_device_ids); 2309 2292 2310 2293 /* 2311 - * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than 2312 - * SET_SYSTEM_SLEEP_PM_OPS, otherwise NIC SR-IOV can not work, because the 2313 - * "pci_dev_pm_ops" uses the "noirq" callbacks: in the resume path, the 2314 - * pci "noirq" restore callback runs before "non-noirq" callbacks (see 2294 + * Note: we must use the "no_irq" ops, otherwise hibernation can not work with 2295 + * PCI device assignment, because "pci_dev_pm_ops" uses the "noirq" ops: in 2296 + * the resume path, the pci "noirq" restore 
op runs before "non-noirq" op (see 2315 2297 * resume_target_kernel() -> dpm_resume_start(), and hibernation_restore() -> 2316 2298 * dpm_resume_end()). This means vmbus_bus_resume() and the pci-hyperv's 2317 - * resume callback must also run via the "noirq" callbacks. 2299 + * resume callback must also run via the "noirq" ops. 2300 + * 2301 + * Set suspend_noirq/resume_noirq to NULL for Suspend-to-Idle: see the comment 2302 + * earlier in this file before vmbus_pm. 2318 2303 */ 2304 + 2319 2305 static const struct dev_pm_ops vmbus_bus_pm = { 2320 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_bus_suspend, vmbus_bus_resume) 2306 + .suspend_noirq = NULL, 2307 + .resume_noirq = NULL, 2308 + .freeze_noirq = vmbus_bus_suspend, 2309 + .thaw_noirq = vmbus_bus_resume, 2310 + .poweroff_noirq = vmbus_bus_suspend, 2311 + .restore_noirq = vmbus_bus_resume 2321 2312 }; 2322 2313 2323 2314 static struct acpi_driver vmbus_acpi_driver = {
+1 -1
drivers/i2c/busses/i2c-amd-mp2-pci.c
··· 349 349 if (!privdata) 350 350 return -ENOMEM; 351 351 352 + privdata->pci_dev = pci_dev; 352 353 rc = amd_mp2_pci_init(privdata, pci_dev); 353 354 if (rc) 354 355 return rc; 355 356 356 357 mutex_init(&privdata->c2p_lock); 357 - privdata->pci_dev = pci_dev; 358 358 359 359 pm_runtime_set_autosuspend_delay(&pci_dev->dev, 1000); 360 360 pm_runtime_use_autosuspend(&pci_dev->dev);
+4 -1
drivers/i2c/busses/i2c-aspeed.c
··· 603 603 /* Ack all interrupts except for Rx done */ 604 604 writel(irq_received & ~ASPEED_I2CD_INTR_RX_DONE, 605 605 bus->base + ASPEED_I2C_INTR_STS_REG); 606 + readl(bus->base + ASPEED_I2C_INTR_STS_REG); 606 607 irq_remaining = irq_received; 607 608 608 609 #if IS_ENABLED(CONFIG_I2C_SLAVE) ··· 646 645 irq_received, irq_handled); 647 646 648 647 /* Ack Rx done */ 649 - if (irq_received & ASPEED_I2CD_INTR_RX_DONE) 648 + if (irq_received & ASPEED_I2CD_INTR_RX_DONE) { 650 649 writel(ASPEED_I2CD_INTR_RX_DONE, 651 650 bus->base + ASPEED_I2C_INTR_STS_REG); 651 + readl(bus->base + ASPEED_I2C_INTR_STS_REG); 652 + } 652 653 spin_unlock(&bus->lock); 653 654 return irq_remaining ? IRQ_NONE : IRQ_HANDLED; 654 655 }
+3
drivers/i2c/busses/i2c-bcm-iproc.c
··· 360 360 value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK); 361 361 i2c_slave_event(iproc_i2c->slave, 362 362 I2C_SLAVE_WRITE_RECEIVED, &value); 363 + if (rx_status == I2C_SLAVE_RX_END) 364 + i2c_slave_event(iproc_i2c->slave, 365 + I2C_SLAVE_STOP, &value); 363 366 } 364 367 } else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) { 365 368 /* Master read other than start */
+12 -24
drivers/i2c/busses/i2c-tegra.c
··· 996 996 do { 997 997 u32 status = i2c_readl(i2c_dev, I2C_INT_STATUS); 998 998 999 - if (status) 999 + if (status) { 1000 1000 tegra_i2c_isr(i2c_dev->irq, i2c_dev); 1001 1001 1002 - if (completion_done(complete)) { 1003 - s64 delta = ktime_ms_delta(ktimeout, ktime); 1002 + if (completion_done(complete)) { 1003 + s64 delta = ktime_ms_delta(ktimeout, ktime); 1004 1004 1005 - return msecs_to_jiffies(delta) ?: 1; 1005 + return msecs_to_jiffies(delta) ?: 1; 1006 + } 1006 1007 } 1007 1008 1008 1009 ktime = ktime_get(); ··· 1030 1029 disable_irq(i2c_dev->irq); 1031 1030 1032 1031 /* 1033 - * Under some rare circumstances (like running KASAN + 1034 - * NFS root) CPU, which handles interrupt, may stuck in 1035 - * uninterruptible state for a significant time. In this 1036 - * case we will get timeout if I2C transfer is running on 1037 - * a sibling CPU, despite of IRQ being raised. 1038 - * 1039 - * In order to handle this rare condition, the IRQ status 1040 - * needs to be checked after timeout. 1032 + * There is a chance that completion may happen after IRQ 1033 + * synchronization, which is done by disable_irq(). 1041 1034 */ 1042 - if (ret == 0) 1043 - ret = tegra_i2c_poll_completion_timeout(i2c_dev, 1044 - complete, 0); 1035 + if (ret == 0 && completion_done(complete)) { 1036 + dev_warn(i2c_dev->dev, 1037 + "completion done after timeout\n"); 1038 + ret = 1; 1039 + } 1045 1040 } 1046 1041 1047 1042 return ret; ··· 1215 1218 if (dma) { 1216 1219 time_left = tegra_i2c_wait_completion_timeout( 1217 1220 i2c_dev, &i2c_dev->dma_complete, xfer_time); 1218 - 1219 - /* 1220 - * Synchronize DMA first, since dmaengine_terminate_sync() 1221 - * performs synchronization after the transfer's termination 1222 - * and we want to get a completion if transfer succeeded. 1223 - */ 1224 - dmaengine_synchronize(i2c_dev->msg_read ? 1225 - i2c_dev->rx_dma_chan : 1226 - i2c_dev->tx_dma_chan); 1227 1221 1228 1222 dmaengine_terminate_sync(i2c_dev->msg_read ? 1229 1223 i2c_dev->rx_dma_chan :
+14 -12
drivers/infiniband/core/cm.c
··· 862 862 863 863 ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b, 864 864 &cm.local_id_next, GFP_KERNEL); 865 - if (ret) 865 + if (ret < 0) 866 866 goto error; 867 867 cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand; 868 868 ··· 1828 1828 1829 1829 static void cm_format_rej(struct cm_rej_msg *rej_msg, 1830 1830 struct cm_id_private *cm_id_priv, 1831 - enum ib_cm_rej_reason reason, 1832 - void *ari, 1833 - u8 ari_length, 1834 - const void *private_data, 1835 - u8 private_data_len) 1831 + enum ib_cm_rej_reason reason, void *ari, 1832 + u8 ari_length, const void *private_data, 1833 + u8 private_data_len, enum ib_cm_state state) 1836 1834 { 1837 1835 lockdep_assert_held(&cm_id_priv->lock); 1838 1836 ··· 1838 1840 IBA_SET(CM_REJ_REMOTE_COMM_ID, rej_msg, 1839 1841 be32_to_cpu(cm_id_priv->id.remote_id)); 1840 1842 1841 - switch(cm_id_priv->id.state) { 1843 + switch (state) { 1842 1844 case IB_CM_REQ_RCVD: 1843 1845 IBA_SET(CM_REJ_LOCAL_COMM_ID, rej_msg, be32_to_cpu(0)); 1844 1846 IBA_SET(CM_REJ_MESSAGE_REJECTED, rej_msg, CM_MSG_RESPONSE_REQ); ··· 1903 1905 cm_id_priv->private_data_len); 1904 1906 break; 1905 1907 case IB_CM_TIMEWAIT: 1906 - cm_format_rej((struct cm_rej_msg *) msg->mad, cm_id_priv, 1907 - IB_CM_REJ_STALE_CONN, NULL, 0, NULL, 0); 1908 + cm_format_rej((struct cm_rej_msg *)msg->mad, cm_id_priv, 1909 + IB_CM_REJ_STALE_CONN, NULL, 0, NULL, 0, 1910 + IB_CM_TIMEWAIT); 1908 1911 break; 1909 1912 default: 1910 1913 goto unlock; ··· 2903 2904 u8 ari_length, const void *private_data, 2904 2905 u8 private_data_len) 2905 2906 { 2907 + enum ib_cm_state state = cm_id_priv->id.state; 2906 2908 struct ib_mad_send_buf *msg; 2907 2909 int ret; 2908 2910 ··· 2913 2913 (ari && ari_length > IB_CM_REJ_ARI_LENGTH)) 2914 2914 return -EINVAL; 2915 2915 2916 - switch (cm_id_priv->id.state) { 2916 + switch (state) { 2917 2917 case IB_CM_REQ_SENT: 2918 2918 case IB_CM_MRA_REQ_RCVD: 2919 2919 case IB_CM_REQ_RCVD: ··· 2925 2925 if (ret) 2926 
2926 return ret; 2927 2927 cm_format_rej((struct cm_rej_msg *)msg->mad, cm_id_priv, reason, 2928 - ari, ari_length, private_data, private_data_len); 2928 + ari, ari_length, private_data, private_data_len, 2929 + state); 2929 2930 break; 2930 2931 case IB_CM_REP_SENT: 2931 2932 case IB_CM_MRA_REP_RCVD: ··· 2935 2934 if (ret) 2936 2935 return ret; 2937 2936 cm_format_rej((struct cm_rej_msg *)msg->mad, cm_id_priv, reason, 2938 - ari, ari_length, private_data, private_data_len); 2937 + ari, ari_length, private_data, private_data_len, 2938 + state); 2939 2939 break; 2940 2940 default: 2941 2941 pr_debug("%s: local_id %d, cm_id->state: %d\n", __func__,
+4 -5
drivers/infiniband/core/rdma_core.c
··· 360 360 * uverbs_uobject_fd_release(), and the caller is expected to ensure 361 361 * that release is never done while a call to lookup is possible. 362 362 */ 363 - if (f->f_op != fd_type->fops) { 363 + if (f->f_op != fd_type->fops || uobject->ufile != ufile) { 364 364 fput(f); 365 365 return ERR_PTR(-EBADF); 366 366 } ··· 474 474 filp = anon_inode_getfile(fd_type->name, fd_type->fops, NULL, 475 475 fd_type->flags); 476 476 if (IS_ERR(filp)) { 477 + uverbs_uobject_put(uobj); 477 478 uobj = ERR_CAST(filp); 478 - goto err_uobj; 479 + goto err_fd; 479 480 } 480 481 uobj->object = filp; 481 482 482 483 uobj->id = new_fd; 483 484 return uobj; 484 485 485 - err_uobj: 486 - uverbs_uobject_put(uobj); 487 486 err_fd: 488 487 put_unused_fd(new_fd); 489 488 return uobj; ··· 678 679 enum rdma_lookup_mode mode) 679 680 { 680 681 assert_uverbs_usecnt(uobj, mode); 681 - uobj->uapi_object->type_class->lookup_put(uobj, mode); 682 682 /* 683 683 * In order to unlock an object, either decrease its usecnt for 684 684 * read access or zero it in case of exclusive access. See ··· 694 696 break; 695 697 } 696 698 699 + uobj->uapi_object->type_class->lookup_put(uobj, mode); 697 700 /* Pairs with the kref obtained by type->lookup_get */ 698 701 uverbs_uobject_put(uobj); 699 702 }
+4
drivers/infiniband/core/uverbs_main.c
··· 820 820 ret = mmget_not_zero(mm); 821 821 if (!ret) { 822 822 list_del_init(&priv->list); 823 + if (priv->entry) { 824 + rdma_user_mmap_entry_put(priv->entry); 825 + priv->entry = NULL; 826 + } 823 827 mm = NULL; 824 828 continue; 825 829 }
+1 -1
drivers/infiniband/hw/i40iw/i40iw_ctrl.c
··· 1046 1046 u64 header; 1047 1047 1048 1048 wqe = i40iw_sc_cqp_get_next_send_wqe(cqp, scratch); 1049 - if (wqe) 1049 + if (!wqe) 1050 1050 return I40IW_ERR_RING_FULL; 1051 1051 1052 1052 set_64bit_val(wqe, 32, feat_mem->pa);
+2 -1
drivers/infiniband/hw/mlx4/main.c
··· 1499 1499 int i; 1500 1500 1501 1501 for (i = 0; i < ARRAY_SIZE(pdefault_rules->rules_create_list); i++) { 1502 + union ib_flow_spec ib_spec = {}; 1502 1503 int ret; 1503 - union ib_flow_spec ib_spec; 1504 + 1504 1505 switch (pdefault_rules->rules_create_list[i]) { 1505 1506 case 0: 1506 1507 /* no rule */
+3 -1
drivers/infiniband/hw/mlx5/qp.c
··· 5558 5558 rdma_ah_set_path_bits(ah_attr, path->grh_mlid & 0x7f); 5559 5559 rdma_ah_set_static_rate(ah_attr, 5560 5560 path->static_rate ? path->static_rate - 5 : 0); 5561 - if (path->grh_mlid & (1 << 7)) { 5561 + 5562 + if (path->grh_mlid & (1 << 7) || 5563 + ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) { 5562 5564 u32 tc_fl = be32_to_cpu(path->tclass_flowlabel); 5563 5565 5564 5566 rdma_ah_set_grh(ah_attr, NULL,
+2 -2
drivers/infiniband/sw/rdmavt/cq.c
··· 248 248 */ 249 249 if (udata && udata->outlen >= sizeof(__u64)) { 250 250 cq->ip = rvt_create_mmap_info(rdi, sz, udata, u_wc); 251 - if (!cq->ip) { 252 - err = -ENOMEM; 251 + if (IS_ERR(cq->ip)) { 252 + err = PTR_ERR(cq->ip); 253 253 goto bail_wc; 254 254 } 255 255
+2 -2
drivers/infiniband/sw/rdmavt/mmap.c
··· 154 154 * @udata: user data (must be valid!) 155 155 * @obj: opaque pointer to a cq, wq etc 156 156 * 157 - * Return: rvt_mmap struct on success 157 + * Return: rvt_mmap struct on success, ERR_PTR on failure 158 158 */ 159 159 struct rvt_mmap_info *rvt_create_mmap_info(struct rvt_dev_info *rdi, u32 size, 160 160 struct ib_udata *udata, void *obj) ··· 166 166 167 167 ip = kmalloc_node(sizeof(*ip), GFP_KERNEL, rdi->dparms.node); 168 168 if (!ip) 169 - return ip; 169 + return ERR_PTR(-ENOMEM); 170 170 171 171 size = PAGE_ALIGN(size); 172 172
+2 -2
drivers/infiniband/sw/rdmavt/qp.c
··· 1244 1244 1245 1245 qp->ip = rvt_create_mmap_info(rdi, s, udata, 1246 1246 qp->r_rq.wq); 1247 - if (!qp->ip) { 1248 - ret = ERR_PTR(-ENOMEM); 1247 + if (IS_ERR(qp->ip)) { 1248 + ret = ERR_CAST(qp->ip); 1249 1249 goto bail_qpn; 1250 1250 } 1251 1251
+2 -2
drivers/infiniband/sw/rdmavt/srq.c
··· 111 111 u32 s = sizeof(struct rvt_rwq) + srq->rq.size * sz; 112 112 113 113 srq->ip = rvt_create_mmap_info(dev, s, udata, srq->rq.wq); 114 - if (!srq->ip) { 115 - ret = -ENOMEM; 114 + if (IS_ERR(srq->ip)) { 115 + ret = PTR_ERR(srq->ip); 116 116 goto bail_wq; 117 117 } 118 118
+11 -4
drivers/infiniband/sw/siw/siw_qp_tx.c
··· 920 920 { 921 921 struct ib_mr *base_mr = (struct ib_mr *)(uintptr_t)sqe->base_mr; 922 922 struct siw_device *sdev = to_siw_dev(pd->device); 923 - struct siw_mem *mem = siw_mem_id2obj(sdev, sqe->rkey >> 8); 923 + struct siw_mem *mem; 924 924 int rv = 0; 925 925 926 926 siw_dbg_pd(pd, "STag 0x%08x\n", sqe->rkey); 927 927 928 - if (unlikely(!mem || !base_mr)) { 928 + if (unlikely(!base_mr)) { 929 929 pr_warn("siw: fastreg: STag 0x%08x unknown\n", sqe->rkey); 930 930 return -EINVAL; 931 931 } 932 + 932 933 if (unlikely(base_mr->rkey >> 8 != sqe->rkey >> 8)) { 933 934 pr_warn("siw: fastreg: STag 0x%08x: bad MR\n", sqe->rkey); 934 - rv = -EINVAL; 935 - goto out; 935 + return -EINVAL; 936 936 } 937 + 938 + mem = siw_mem_id2obj(sdev, sqe->rkey >> 8); 939 + if (unlikely(!mem)) { 940 + pr_warn("siw: fastreg: STag 0x%08x unknown\n", sqe->rkey); 941 + return -EINVAL; 942 + } 943 + 937 944 if (unlikely(mem->pd != pd)) { 938 945 pr_warn("siw: fastreg: PD mismatch\n"); 939 946 rv = -EINVAL;
+2 -2
drivers/interconnect/qcom/osm-l3.c
··· 78 78 [SLAVE_OSM_L3] = &sdm845_osm_l3, 79 79 }; 80 80 81 - const static struct qcom_icc_desc sdm845_icc_osm_l3 = { 81 + static const struct qcom_icc_desc sdm845_icc_osm_l3 = { 82 82 .nodes = sdm845_osm_l3_nodes, 83 83 .num_nodes = ARRAY_SIZE(sdm845_osm_l3_nodes), 84 84 }; ··· 91 91 [SLAVE_OSM_L3] = &sc7180_osm_l3, 92 92 }; 93 93 94 - const static struct qcom_icc_desc sc7180_icc_osm_l3 = { 94 + static const struct qcom_icc_desc sc7180_icc_osm_l3 = { 95 95 .nodes = sc7180_osm_l3_nodes, 96 96 .num_nodes = ARRAY_SIZE(sc7180_osm_l3_nodes), 97 97 };
+8 -8
drivers/interconnect/qcom/sdm845.c
··· 192 192 [SLAVE_ANOC_PCIE_A1NOC_SNOC] = &qns_pcie_a1noc_snoc, 193 193 }; 194 194 195 - const static struct qcom_icc_desc sdm845_aggre1_noc = { 195 + static const struct qcom_icc_desc sdm845_aggre1_noc = { 196 196 .nodes = aggre1_noc_nodes, 197 197 .num_nodes = ARRAY_SIZE(aggre1_noc_nodes), 198 198 .bcms = aggre1_noc_bcms, ··· 220 220 [SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc, 221 221 }; 222 222 223 - const static struct qcom_icc_desc sdm845_aggre2_noc = { 223 + static const struct qcom_icc_desc sdm845_aggre2_noc = { 224 224 .nodes = aggre2_noc_nodes, 225 225 .num_nodes = ARRAY_SIZE(aggre2_noc_nodes), 226 226 .bcms = aggre2_noc_bcms, ··· 281 281 [SLAVE_SERVICE_CNOC] = &srvc_cnoc, 282 282 }; 283 283 284 - const static struct qcom_icc_desc sdm845_config_noc = { 284 + static const struct qcom_icc_desc sdm845_config_noc = { 285 285 .nodes = config_noc_nodes, 286 286 .num_nodes = ARRAY_SIZE(config_noc_nodes), 287 287 .bcms = config_noc_bcms, ··· 297 297 [SLAVE_MEM_NOC_CFG] = &qhs_memnoc, 298 298 }; 299 299 300 - const static struct qcom_icc_desc sdm845_dc_noc = { 300 + static const struct qcom_icc_desc sdm845_dc_noc = { 301 301 .nodes = dc_noc_nodes, 302 302 .num_nodes = ARRAY_SIZE(dc_noc_nodes), 303 303 .bcms = dc_noc_bcms, ··· 315 315 [SLAVE_SERVICE_GNOC] = &srvc_gnoc, 316 316 }; 317 317 318 - const static struct qcom_icc_desc sdm845_gladiator_noc = { 318 + static const struct qcom_icc_desc sdm845_gladiator_noc = { 319 319 .nodes = gladiator_noc_nodes, 320 320 .num_nodes = ARRAY_SIZE(gladiator_noc_nodes), 321 321 .bcms = gladiator_noc_bcms, ··· 350 350 [SLAVE_EBI1] = &ebi, 351 351 }; 352 352 353 - const static struct qcom_icc_desc sdm845_mem_noc = { 353 + static const struct qcom_icc_desc sdm845_mem_noc = { 354 354 .nodes = mem_noc_nodes, 355 355 .num_nodes = ARRAY_SIZE(mem_noc_nodes), 356 356 .bcms = mem_noc_bcms, ··· 384 384 [SLAVE_CAMNOC_UNCOMP] = &qns_camnoc_uncomp, 385 385 }; 386 386 387 - const static struct qcom_icc_desc sdm845_mmss_noc = { 387 + static const 
struct qcom_icc_desc sdm845_mmss_noc = { 388 388 .nodes = mmss_noc_nodes, 389 389 .num_nodes = ARRAY_SIZE(mmss_noc_nodes), 390 390 .bcms = mmss_noc_bcms, ··· 430 430 [SLAVE_TCU] = &xs_sys_tcu_cfg, 431 431 }; 432 432 433 - const static struct qcom_icc_desc sdm845_system_noc = { 433 + static const struct qcom_icc_desc sdm845_system_noc = { 434 434 .nodes = system_noc_nodes, 435 435 .num_nodes = ARRAY_SIZE(system_noc_nodes), 436 436 .bcms = system_noc_bcms,
+2 -2
drivers/iommu/Kconfig
··· 362 362 363 363 config SPAPR_TCE_IOMMU 364 364 bool "sPAPR TCE IOMMU Support" 365 - depends on PPC_POWERNV || PPC_PSERIES || (PPC && COMPILE_TEST) 365 + depends on PPC_POWERNV || PPC_PSERIES 366 366 select IOMMU_API 367 367 help 368 368 Enables bits of IOMMU API required by VFIO. The iommu_ops ··· 457 457 458 458 config MTK_IOMMU 459 459 bool "MTK IOMMU Support" 460 - depends on ARM || ARM64 || COMPILE_TEST 460 + depends on HAS_DMA 461 461 depends on ARCH_MEDIATEK || COMPILE_TEST 462 462 select ARM_DMA_USE_IOMMU 463 463 select IOMMU_API
+154 -44
drivers/iommu/amd_iommu.c
··· 101 101 static void update_domain(struct protection_domain *domain); 102 102 static int protection_domain_init(struct protection_domain *domain); 103 103 static void detach_device(struct device *dev); 104 + static void update_and_flush_device_table(struct protection_domain *domain, 105 + struct domain_pgtable *pgtable); 104 106 105 107 /**************************************************************************** 106 108 * ··· 151 149 static struct protection_domain *to_pdomain(struct iommu_domain *dom) 152 150 { 153 151 return container_of(dom, struct protection_domain, domain); 152 + } 153 + 154 + static void amd_iommu_domain_get_pgtable(struct protection_domain *domain, 155 + struct domain_pgtable *pgtable) 156 + { 157 + u64 pt_root = atomic64_read(&domain->pt_root); 158 + 159 + pgtable->root = (u64 *)(pt_root & PAGE_MASK); 160 + pgtable->mode = pt_root & 7; /* lowest 3 bits encode pgtable mode */ 161 + } 162 + 163 + static u64 amd_iommu_domain_encode_pgtable(u64 *root, int mode) 164 + { 165 + u64 pt_root; 166 + 167 + /* lowest 3 bits encode pgtable mode */ 168 + pt_root = mode & 7; 169 + pt_root |= (u64)root; 170 + 171 + return pt_root; 154 172 } 155 173 156 174 static struct iommu_dev_data *alloc_dev_data(u16 devid) ··· 1419 1397 1420 1398 static void free_pagetable(struct protection_domain *domain) 1421 1399 { 1422 - unsigned long root = (unsigned long)domain->pt_root; 1400 + struct domain_pgtable pgtable; 1423 1401 struct page *freelist = NULL; 1402 + unsigned long root; 1424 1403 1425 - BUG_ON(domain->mode < PAGE_MODE_NONE || 1426 - domain->mode > PAGE_MODE_6_LEVEL); 1404 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1405 + atomic64_set(&domain->pt_root, 0); 1427 1406 1428 - freelist = free_sub_pt(root, domain->mode, freelist); 1407 + BUG_ON(pgtable.mode < PAGE_MODE_NONE || 1408 + pgtable.mode > PAGE_MODE_6_LEVEL); 1409 + 1410 + root = (unsigned long)pgtable.root; 1411 + freelist = free_sub_pt(root, pgtable.mode, freelist); 1429 1412 1430 1413 
free_page_list(freelist); 1431 1414 } ··· 1444 1417 unsigned long address, 1445 1418 gfp_t gfp) 1446 1419 { 1420 + struct domain_pgtable pgtable; 1447 1421 unsigned long flags; 1448 - bool ret = false; 1449 - u64 *pte; 1422 + bool ret = true; 1423 + u64 *pte, root; 1450 1424 1451 1425 spin_lock_irqsave(&domain->lock, flags); 1452 1426 1453 - if (address <= PM_LEVEL_SIZE(domain->mode) || 1454 - WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL)) 1427 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1428 + 1429 + if (address <= PM_LEVEL_SIZE(pgtable.mode)) 1430 + goto out; 1431 + 1432 + ret = false; 1433 + if (WARN_ON_ONCE(pgtable.mode == PAGE_MODE_6_LEVEL)) 1455 1434 goto out; 1456 1435 1457 1436 pte = (void *)get_zeroed_page(gfp); 1458 1437 if (!pte) 1459 1438 goto out; 1460 1439 1461 - *pte = PM_LEVEL_PDE(domain->mode, 1462 - iommu_virt_to_phys(domain->pt_root)); 1463 - domain->pt_root = pte; 1464 - domain->mode += 1; 1440 + *pte = PM_LEVEL_PDE(pgtable.mode, iommu_virt_to_phys(pgtable.root)); 1441 + 1442 + pgtable.root = pte; 1443 + pgtable.mode += 1; 1444 + update_and_flush_device_table(domain, &pgtable); 1445 + domain_flush_complete(domain); 1446 + 1447 + /* 1448 + * Device Table needs to be updated and flushed before the new root can 1449 + * be published. 
1450 + */ 1451 + root = amd_iommu_domain_encode_pgtable(pte, pgtable.mode); 1452 + atomic64_set(&domain->pt_root, root); 1465 1453 1466 1454 ret = true; 1467 1455 ··· 1493 1451 gfp_t gfp, 1494 1452 bool *updated) 1495 1453 { 1454 + struct domain_pgtable pgtable; 1496 1455 int level, end_lvl; 1497 1456 u64 *pte, *page; 1498 1457 1499 1458 BUG_ON(!is_power_of_2(page_size)); 1500 1459 1501 - while (address > PM_LEVEL_SIZE(domain->mode)) 1502 - *updated = increase_address_space(domain, address, gfp) || *updated; 1460 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1503 1461 1504 - level = domain->mode - 1; 1505 - pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; 1462 + while (address > PM_LEVEL_SIZE(pgtable.mode)) { 1463 + /* 1464 + * Return an error if there is no memory to update the 1465 + * page-table. 1466 + */ 1467 + if (!increase_address_space(domain, address, gfp)) 1468 + return NULL; 1469 + 1470 + /* Read new values to check if update was successful */ 1471 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1472 + } 1473 + 1474 + 1475 + level = pgtable.mode - 1; 1476 + pte = &pgtable.root[PM_LEVEL_INDEX(level, address)]; 1506 1477 address = PAGE_SIZE_ALIGN(address, page_size); 1507 1478 end_lvl = PAGE_SIZE_LEVEL(page_size); 1508 1479 ··· 1591 1536 unsigned long address, 1592 1537 unsigned long *page_size) 1593 1538 { 1539 + struct domain_pgtable pgtable; 1594 1540 int level; 1595 1541 u64 *pte; 1596 1542 1597 1543 *page_size = 0; 1598 1544 1599 - if (address > PM_LEVEL_SIZE(domain->mode)) 1545 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1546 + 1547 + if (address > PM_LEVEL_SIZE(pgtable.mode)) 1600 1548 return NULL; 1601 1549 1602 - level = domain->mode - 1; 1603 - pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)]; 1550 + level = pgtable.mode - 1; 1551 + pte = &pgtable.root[PM_LEVEL_INDEX(level, address)]; 1604 1552 *page_size = PTE_LEVEL_PAGE_SIZE(level); 1605 1553 1606 1554 while (level > 0) { ··· 1718 1660 unsigned long flags; 1719 1661 
1720 1662 spin_lock_irqsave(&dom->lock, flags); 1721 - update_domain(dom); 1663 + /* 1664 + * Flush domain TLB(s) and wait for completion. Any Device-Table 1665 + * Updates and flushing already happened in 1666 + * increase_address_space(). 1667 + */ 1668 + domain_flush_tlb_pde(dom); 1669 + domain_flush_complete(dom); 1722 1670 spin_unlock_irqrestore(&dom->lock, flags); 1723 1671 } 1724 1672 ··· 1870 1806 static struct protection_domain *dma_ops_domain_alloc(void) 1871 1807 { 1872 1808 struct protection_domain *domain; 1809 + u64 *pt_root, root; 1873 1810 1874 1811 domain = kzalloc(sizeof(struct protection_domain), GFP_KERNEL); 1875 1812 if (!domain) ··· 1879 1814 if (protection_domain_init(domain)) 1880 1815 goto free_domain; 1881 1816 1882 - domain->mode = PAGE_MODE_3_LEVEL; 1883 - domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL); 1884 - domain->flags = PD_DMA_OPS_MASK; 1885 - if (!domain->pt_root) 1817 + pt_root = (void *)get_zeroed_page(GFP_KERNEL); 1818 + if (!pt_root) 1886 1819 goto free_domain; 1820 + 1821 + root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL); 1822 + atomic64_set(&domain->pt_root, root); 1823 + domain->flags = PD_DMA_OPS_MASK; 1887 1824 1888 1825 if (iommu_get_dma_cookie(&domain->domain) == -ENOMEM) 1889 1826 goto free_domain; ··· 1908 1841 } 1909 1842 1910 1843 static void set_dte_entry(u16 devid, struct protection_domain *domain, 1844 + struct domain_pgtable *pgtable, 1911 1845 bool ats, bool ppr) 1912 1846 { 1913 1847 u64 pte_root = 0; 1914 1848 u64 flags = 0; 1915 1849 u32 old_domid; 1916 1850 1917 - if (domain->mode != PAGE_MODE_NONE) 1918 - pte_root = iommu_virt_to_phys(domain->pt_root); 1851 + if (pgtable->mode != PAGE_MODE_NONE) 1852 + pte_root = iommu_virt_to_phys(pgtable->root); 1919 1853 1920 - pte_root |= (domain->mode & DEV_ENTRY_MODE_MASK) 1854 + pte_root |= (pgtable->mode & DEV_ENTRY_MODE_MASK) 1921 1855 << DEV_ENTRY_MODE_SHIFT; 1922 1856 pte_root |= DTE_FLAG_IR | DTE_FLAG_IW | DTE_FLAG_V | DTE_FLAG_TV; 
1923 1857 ··· 1991 1923 static void do_attach(struct iommu_dev_data *dev_data, 1992 1924 struct protection_domain *domain) 1993 1925 { 1926 + struct domain_pgtable pgtable; 1994 1927 struct amd_iommu *iommu; 1995 1928 bool ats; 1996 1929 ··· 2007 1938 domain->dev_cnt += 1; 2008 1939 2009 1940 /* Update device table */ 2010 - set_dte_entry(dev_data->devid, domain, ats, dev_data->iommu_v2); 1941 + amd_iommu_domain_get_pgtable(domain, &pgtable); 1942 + set_dte_entry(dev_data->devid, domain, &pgtable, 1943 + ats, dev_data->iommu_v2); 2011 1944 clone_aliases(dev_data->pdev); 2012 1945 2013 1946 device_flush_dte(dev_data); ··· 2320 2249 * 2321 2250 *****************************************************************************/ 2322 2251 2323 - static void update_device_table(struct protection_domain *domain) 2252 + static void update_device_table(struct protection_domain *domain, 2253 + struct domain_pgtable *pgtable) 2324 2254 { 2325 2255 struct iommu_dev_data *dev_data; 2326 2256 2327 2257 list_for_each_entry(dev_data, &domain->dev_list, list) { 2328 - set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled, 2329 - dev_data->iommu_v2); 2258 + set_dte_entry(dev_data->devid, domain, pgtable, 2259 + dev_data->ats.enabled, dev_data->iommu_v2); 2330 2260 clone_aliases(dev_data->pdev); 2331 2261 } 2332 2262 } 2333 2263 2264 + static void update_and_flush_device_table(struct protection_domain *domain, 2265 + struct domain_pgtable *pgtable) 2266 + { 2267 + update_device_table(domain, pgtable); 2268 + domain_flush_devices(domain); 2269 + } 2270 + 2334 2271 static void update_domain(struct protection_domain *domain) 2335 2272 { 2336 - update_device_table(domain); 2273 + struct domain_pgtable pgtable; 2337 2274 2338 - domain_flush_devices(domain); 2275 + /* Update device table */ 2276 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2277 + update_and_flush_device_table(domain, &pgtable); 2278 + 2279 + /* Flush domain TLB(s) and wait for completion */ 2339 2280 
domain_flush_tlb_pde(domain); 2281 + domain_flush_complete(domain); 2340 2282 } 2341 2283 2342 2284 int __init amd_iommu_init_api(void) ··· 2459 2375 static struct iommu_domain *amd_iommu_domain_alloc(unsigned type) 2460 2376 { 2461 2377 struct protection_domain *pdomain; 2378 + u64 *pt_root, root; 2462 2379 2463 2380 switch (type) { 2464 2381 case IOMMU_DOMAIN_UNMANAGED: ··· 2467 2382 if (!pdomain) 2468 2383 return NULL; 2469 2384 2470 - pdomain->mode = PAGE_MODE_3_LEVEL; 2471 - pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL); 2472 - if (!pdomain->pt_root) { 2385 + pt_root = (void *)get_zeroed_page(GFP_KERNEL); 2386 + if (!pt_root) { 2473 2387 protection_domain_free(pdomain); 2474 2388 return NULL; 2475 2389 } 2390 + 2391 + root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL); 2392 + atomic64_set(&pdomain->pt_root, root); 2476 2393 2477 2394 pdomain->domain.geometry.aperture_start = 0; 2478 2395 pdomain->domain.geometry.aperture_end = ~0ULL; ··· 2493 2406 if (!pdomain) 2494 2407 return NULL; 2495 2408 2496 - pdomain->mode = PAGE_MODE_NONE; 2409 + atomic64_set(&pdomain->pt_root, PAGE_MODE_NONE); 2497 2410 break; 2498 2411 default: 2499 2412 return NULL; ··· 2505 2418 static void amd_iommu_domain_free(struct iommu_domain *dom) 2506 2419 { 2507 2420 struct protection_domain *domain; 2421 + struct domain_pgtable pgtable; 2508 2422 2509 2423 domain = to_pdomain(dom); 2510 2424 ··· 2523 2435 dma_ops_domain_free(domain); 2524 2436 break; 2525 2437 default: 2526 - if (domain->mode != PAGE_MODE_NONE) 2438 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2439 + 2440 + if (pgtable.mode != PAGE_MODE_NONE) 2527 2441 free_pagetable(domain); 2528 2442 2529 2443 if (domain->flags & PD_IOMMUV2_MASK) ··· 2608 2518 gfp_t gfp) 2609 2519 { 2610 2520 struct protection_domain *domain = to_pdomain(dom); 2521 + struct domain_pgtable pgtable; 2611 2522 int prot = 0; 2612 2523 int ret; 2613 2524 2614 - if (domain->mode == PAGE_MODE_NONE) 2525 + 
amd_iommu_domain_get_pgtable(domain, &pgtable); 2526 + if (pgtable.mode == PAGE_MODE_NONE) 2615 2527 return -EINVAL; 2616 2528 2617 2529 if (iommu_prot & IOMMU_READ) ··· 2633 2541 struct iommu_iotlb_gather *gather) 2634 2542 { 2635 2543 struct protection_domain *domain = to_pdomain(dom); 2544 + struct domain_pgtable pgtable; 2636 2545 2637 - if (domain->mode == PAGE_MODE_NONE) 2546 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2547 + if (pgtable.mode == PAGE_MODE_NONE) 2638 2548 return 0; 2639 2549 2640 2550 return iommu_unmap_page(domain, iova, page_size); ··· 2647 2553 { 2648 2554 struct protection_domain *domain = to_pdomain(dom); 2649 2555 unsigned long offset_mask, pte_pgsize; 2556 + struct domain_pgtable pgtable; 2650 2557 u64 *pte, __pte; 2651 2558 2652 - if (domain->mode == PAGE_MODE_NONE) 2559 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2560 + if (pgtable.mode == PAGE_MODE_NONE) 2653 2561 return iova; 2654 2562 2655 2563 pte = fetch_pte(domain, iova, &pte_pgsize); ··· 2804 2708 void amd_iommu_domain_direct_map(struct iommu_domain *dom) 2805 2709 { 2806 2710 struct protection_domain *domain = to_pdomain(dom); 2711 + struct domain_pgtable pgtable; 2807 2712 unsigned long flags; 2713 + u64 pt_root; 2808 2714 2809 2715 spin_lock_irqsave(&domain->lock, flags); 2810 2716 2717 + /* First save pgtable configuration*/ 2718 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2719 + 2811 2720 /* Update data structure */ 2812 - domain->mode = PAGE_MODE_NONE; 2721 + pt_root = amd_iommu_domain_encode_pgtable(NULL, PAGE_MODE_NONE); 2722 + atomic64_set(&domain->pt_root, pt_root); 2813 2723 2814 2724 /* Make changes visible to IOMMUs */ 2815 2725 update_domain(domain); 2726 + 2727 + /* Restore old pgtable in domain->ptroot to free page-table */ 2728 + pt_root = amd_iommu_domain_encode_pgtable(pgtable.root, pgtable.mode); 2729 + atomic64_set(&domain->pt_root, pt_root); 2816 2730 2817 2731 /* Page-table is not visible to IOMMU anymore, so free it */ 2818 2732 
free_pagetable(domain); ··· 3014 2908 static int __set_gcr3(struct protection_domain *domain, int pasid, 3015 2909 unsigned long cr3) 3016 2910 { 2911 + struct domain_pgtable pgtable; 3017 2912 u64 *pte; 3018 2913 3019 - if (domain->mode != PAGE_MODE_NONE) 2914 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2915 + if (pgtable.mode != PAGE_MODE_NONE) 3020 2916 return -EINVAL; 3021 2917 3022 2918 pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, true); ··· 3032 2924 3033 2925 static int __clear_gcr3(struct protection_domain *domain, int pasid) 3034 2926 { 2927 + struct domain_pgtable pgtable; 3035 2928 u64 *pte; 3036 2929 3037 - if (domain->mode != PAGE_MODE_NONE) 2930 + amd_iommu_domain_get_pgtable(domain, &pgtable); 2931 + if (pgtable.mode != PAGE_MODE_NONE) 3038 2932 return -EINVAL; 3039 2933 3040 2934 pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, false);
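The amd_iommu.c changes above replace the separate `mode` and `pt_root` fields with a single `atomic64_t` so both can be published in one atomic store. The trick works because a page-table root is page-aligned, leaving its low 12 bits free to carry the paging mode. A minimal userspace sketch of that packing, with illustrative helper names (not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* A page-aligned root pointer has zero low bits, so the paging mode
 * (0-6) can be stored there and both fields read or written with a
 * single 64-bit access. */
#define PM_PAGE_SHIFT 12
#define PM_MODE_MASK  ((1ULL << PM_PAGE_SHIFT) - 1)

static inline uint64_t pgtable_encode(uint64_t root_pa, int mode)
{
    return (root_pa & ~PM_MODE_MASK) | ((uint64_t)mode & PM_MODE_MASK);
}

static inline uint64_t pgtable_root(uint64_t packed)
{
    return packed & ~PM_MODE_MASK;
}

static inline int pgtable_mode(uint64_t packed)
{
    return (int)(packed & PM_MODE_MASK);
}
```

In the patch the packed value lives in `domain->pt_root` and is set with `atomic64_set()`, so readers via `amd_iommu_domain_get_pgtable()` always see a consistent root/mode pair.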
+1 -1
drivers/iommu/amd_iommu_init.c
··· 2936 2936 { 2937 2937 for (; *str; ++str) { 2938 2938 if (strncmp(str, "legacy", 6) == 0) { 2939 - amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY; 2939 + amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA; 2940 2940 break; 2941 2941 } 2942 2942 if (strncmp(str, "vapic", 5) == 0) {
+7 -2
drivers/iommu/amd_iommu_types.h
··· 468 468 iommu core code */ 469 469 spinlock_t lock; /* mostly used to lock the page table*/ 470 470 u16 id; /* the domain id written to the device table */ 471 - int mode; /* paging mode (0-6 levels) */ 472 - u64 *pt_root; /* page table root pointer */ 471 + atomic64_t pt_root; /* pgtable root and pgtable mode */ 473 472 int glx; /* Number of levels for GCR3 table */ 474 473 u64 *gcr3_tbl; /* Guest CR3 table */ 475 474 unsigned long flags; /* flags to find out type of domain */ 476 475 unsigned dev_cnt; /* devices assigned to this domain */ 477 476 unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */ 477 + }; 478 + 479 + /* For decoded pt_root */ 480 + struct domain_pgtable { 481 + int mode; 482 + u64 *root; 478 483 }; 479 484 480 485 /*
+2 -2
drivers/iommu/intel-iommu.c
··· 371 371 int dmar_disabled = 1; 372 372 #endif /* CONFIG_INTEL_IOMMU_DEFAULT_ON */ 373 373 374 - #ifdef INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON 374 + #ifdef CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON 375 375 int intel_iommu_sm = 1; 376 376 #else 377 377 int intel_iommu_sm; 378 - #endif /* INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON */ 378 + #endif /* CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON */ 379 379 380 380 int intel_iommu_enabled = 0; 381 381 EXPORT_SYMBOL_GPL(intel_iommu_enabled);
+2 -1
drivers/iommu/iommu.c
··· 170 170 171 171 static void dev_iommu_free(struct device *dev) 172 172 { 173 + iommu_fwspec_free(dev); 173 174 kfree(dev->iommu); 174 175 dev->iommu = NULL; 175 176 } ··· 1429 1428 1430 1429 return group; 1431 1430 } 1432 - EXPORT_SYMBOL(iommu_group_get_for_dev); 1431 + EXPORT_SYMBOL_GPL(iommu_group_get_for_dev); 1433 1432 1434 1433 struct iommu_domain *iommu_group_default_domain(struct iommu_group *group) 1435 1434 {
+4 -1
drivers/iommu/qcom_iommu.c
··· 824 824 qcom_iommu->dev = dev; 825 825 826 826 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 827 - if (res) 827 + if (res) { 828 828 qcom_iommu->local_base = devm_ioremap_resource(dev, res); 829 + if (IS_ERR(qcom_iommu->local_base)) 830 + return PTR_ERR(qcom_iommu->local_base); 831 + } 829 832 830 833 qcom_iommu->iface_clk = devm_clk_get(dev, "iface"); 831 834 if (IS_ERR(qcom_iommu->iface_clk)) {
+1 -1
drivers/iommu/virtio-iommu.c
··· 453 453 if (!region) 454 454 return -ENOMEM; 455 455 456 - list_add(&vdev->resv_regions, &region->list); 456 + list_add(&region->list, &vdev->resv_regions); 457 457 return 0; 458 458 } 459 459
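The virtio-iommu fix above swaps the two arguments of `list_add()`: in the kernel's list API the new entry comes first and the list head second, and reversing them splices the head into the entry's list instead. A minimal userspace sketch of the same circular-list semantics (my own struct, not `<linux/list.h>`):

```c
#include <assert.h>
#include <stddef.h>

/* Circular doubly-linked list mirroring the kernel API shape:
 * list_add(new, head) inserts 'new' immediately after 'head'. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }

static void list_add(struct list_head *new, struct list_head *head)
{
    new->next = head->next;
    new->prev = head;
    head->next->prev = new;
    head->next = new;
}

static int list_count(const struct list_head *head)
{
    int n = 0;
    for (const struct list_head *p = head->next; p != head; p = p->next)
        n++;
    return n;
}

static int list_demo(void)
{
    struct list_head head, a, b;

    list_init(&head);
    list_add(&a, &head);   /* head -> a */
    list_add(&b, &head);   /* head -> b -> a */
    return list_count(&head) == 2 && head.next == &b &&
           head.next->next == &a;
}
```

With the arguments reversed, `&vdev->resv_regions` would have been linked into `region->list` and the regions list itself would stay empty, which is exactly the bug being fixed.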
+4 -2
drivers/md/dm-mpath.c
··· 585 585 586 586 /* Do we need to select a new pgpath? */ 587 587 pgpath = READ_ONCE(m->current_pgpath); 588 - queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags); 589 - if (!pgpath || !queue_io) 588 + if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags)) 590 589 pgpath = choose_pgpath(m, bio->bi_iter.bi_size); 590 + 591 + /* MPATHF_QUEUE_IO might have been cleared by choose_pgpath. */ 592 + queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags); 591 593 592 594 if ((pgpath && queue_io) || 593 595 (!pgpath && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))) {
+1 -1
drivers/md/dm-verity-fec.c
··· 435 435 fio->level++; 436 436 437 437 if (type == DM_VERITY_BLOCK_TYPE_METADATA) 438 - block += v->data_blocks; 438 + block = block - v->hash_start + v->data_blocks; 439 439 440 440 /* 441 441 * For RS(M, N), the continuous FEC data is divided into blocks of N
+37 -15
drivers/md/dm-writecache.c
··· 931 931 return 0; 932 932 } 933 933 934 + static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors) 935 + { 936 + struct dm_io_region region; 937 + struct dm_io_request req; 938 + 939 + region.bdev = wc->ssd_dev->bdev; 940 + region.sector = wc->start_sector; 941 + region.count = n_sectors; 942 + req.bi_op = REQ_OP_READ; 943 + req.bi_op_flags = REQ_SYNC; 944 + req.mem.type = DM_IO_VMA; 945 + req.mem.ptr.vma = (char *)wc->memory_map; 946 + req.client = wc->dm_io; 947 + req.notify.fn = NULL; 948 + 949 + return dm_io(&req, 1, &region, NULL); 950 + } 951 + 934 952 static void writecache_resume(struct dm_target *ti) 935 953 { 936 954 struct dm_writecache *wc = ti->private; ··· 959 941 960 942 wc_lock(wc); 961 943 962 - if (WC_MODE_PMEM(wc)) 944 + if (WC_MODE_PMEM(wc)) { 963 945 persistent_memory_invalidate_cache(wc->memory_map, wc->memory_map_size); 946 + } else { 947 + r = writecache_read_metadata(wc, wc->metadata_sectors); 948 + if (r) { 949 + size_t sb_entries_offset; 950 + writecache_error(wc, r, "unable to read metadata: %d", r); 951 + sb_entries_offset = offsetof(struct wc_memory_superblock, entries); 952 + memset((char *)wc->memory_map + sb_entries_offset, -1, 953 + (wc->metadata_sectors << SECTOR_SHIFT) - sb_entries_offset); 954 + } 955 + } 964 956 965 957 wc->tree = RB_ROOT; 966 958 INIT_LIST_HEAD(&wc->lru); ··· 2130 2102 ti->error = "Invalid block size"; 2131 2103 goto bad; 2132 2104 } 2105 + if (wc->block_size < bdev_logical_block_size(wc->dev->bdev) || 2106 + wc->block_size < bdev_logical_block_size(wc->ssd_dev->bdev)) { 2107 + r = -EINVAL; 2108 + ti->error = "Block size is smaller than device logical block size"; 2109 + goto bad; 2110 + } 2133 2111 wc->block_size_bits = __ffs(wc->block_size); 2134 2112 2135 2113 wc->max_writeback_jobs = MAX_WRITEBACK_JOBS; ··· 2234 2200 goto bad; 2235 2201 } 2236 2202 } else { 2237 - struct dm_io_region region; 2238 - struct dm_io_request req; 2239 2203 size_t n_blocks, n_metadata_blocks; 2240 
2204 uint64_t n_bitmap_bits; 2241 2205 ··· 2290 2258 goto bad; 2291 2259 } 2292 2260 2293 - region.bdev = wc->ssd_dev->bdev; 2294 - region.sector = wc->start_sector; 2295 - region.count = wc->metadata_sectors; 2296 - req.bi_op = REQ_OP_READ; 2297 - req.bi_op_flags = REQ_SYNC; 2298 - req.mem.type = DM_IO_VMA; 2299 - req.mem.ptr.vma = (char *)wc->memory_map; 2300 - req.client = wc->dm_io; 2301 - req.notify.fn = NULL; 2302 - 2303 - r = dm_io(&req, 1, &region, NULL); 2261 + r = writecache_read_metadata(wc, wc->block_size >> SECTOR_SHIFT); 2304 2262 if (r) { 2305 - ti->error = "Unable to read metadata"; 2263 + ti->error = "Unable to read first block of metadata"; 2306 2264 goto bad; 2307 2265 } 2308 2266 }
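On a failed metadata read, the dm-writecache resume path above poisons only the entry array after the superblock header, using `offsetof()` so the header fields survive. A small sketch of that partial-reset pattern, with an illustrative struct layout (not the real `wc_memory_superblock`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for a metadata superblock: a fixed header
 * followed by an entry array that can be wiped independently. */
struct sb {
    uint32_t magic;
    uint32_t version;
    uint8_t  entries[64];
};

/* Fill everything from 'entries' onward with 0xff (memset value -1),
 * leaving the header intact - the same shape as the patch's
 * memset(map + sb_entries_offset, -1, size - sb_entries_offset). */
static void poison_entries(struct sb *s)
{
    size_t off = offsetof(struct sb, entries);

    memset((char *)s + off, -1, sizeof(*s) - off);
}

static int poison_demo(void)
{
    struct sb s = { .magic = 0x1234, .version = 2 };

    poison_entries(&s);
    return s.magic == 0x1234 && s.version == 2 &&
           s.entries[0] == 0xff && s.entries[63] == 0xff;
}
```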
+8
drivers/misc/mei/hw-me.c
··· 1465 1465 MEI_CFG_DMA_128, 1466 1466 }; 1467 1467 1468 + /* LBG with quirk for SPS Firmware exclusion */ 1469 + static const struct mei_cfg mei_me_pch12_sps_cfg = { 1470 + MEI_CFG_PCH8_HFS, 1471 + MEI_CFG_FW_VER_SUPP, 1472 + MEI_CFG_FW_SPS, 1473 + }; 1474 + 1468 1475 /* Tiger Lake and newer devices */ 1469 1476 static const struct mei_cfg mei_me_pch15_cfg = { 1470 1477 MEI_CFG_PCH8_HFS, ··· 1494 1487 [MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg, 1495 1488 [MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg, 1496 1489 [MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg, 1490 + [MEI_ME_PCH12_SPS_CFG] = &mei_me_pch12_sps_cfg, 1497 1491 [MEI_ME_PCH15_CFG] = &mei_me_pch15_cfg, 1498 1492 }; 1499 1493
+4
drivers/misc/mei/hw-me.h
··· 80 80 * servers platforms with quirk for 81 81 * SPS firmware exclusion. 82 82 * @MEI_ME_PCH12_CFG: Platform Controller Hub Gen12 and newer 83 + * @MEI_ME_PCH12_SPS_CFG: Platform Controller Hub Gen12 and newer 84 + * servers platforms with quirk for 85 + * SPS firmware exclusion. 83 86 * @MEI_ME_PCH15_CFG: Platform Controller Hub Gen15 and newer 84 87 * @MEI_ME_NUM_CFG: Upper Sentinel. 85 88 */ ··· 96 93 MEI_ME_PCH8_CFG, 97 94 MEI_ME_PCH8_SPS_CFG, 98 95 MEI_ME_PCH12_CFG, 96 + MEI_ME_PCH12_SPS_CFG, 99 97 MEI_ME_PCH15_CFG, 100 98 MEI_ME_NUM_CFG, 101 99 };
+1 -1
drivers/misc/mei/pci-me.c
··· 70 70 {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, MEI_ME_PCH8_CFG)}, 71 71 {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_CFG)}, 72 72 {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_CFG)}, 73 - {MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_CFG)}, 73 + {MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_CFG)}, 74 74 75 75 {MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, MEI_ME_PCH8_CFG)}, 76 76 {MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, MEI_ME_PCH8_CFG)},
+1 -1
drivers/mmc/core/mmc_ops.c
··· 878 878 * Issued High Priority Interrupt, and check for card status 879 879 * until out-of prg-state. 880 880 */ 881 - int mmc_interrupt_hpi(struct mmc_card *card) 881 + static int mmc_interrupt_hpi(struct mmc_card *card) 882 882 { 883 883 int err; 884 884 u32 status;
+10 -11
drivers/mmc/host/cqhci.c
··· 5 5 #include <linux/delay.h> 6 6 #include <linux/highmem.h> 7 7 #include <linux/io.h> 8 + #include <linux/iopoll.h> 8 9 #include <linux/module.h> 9 10 #include <linux/dma-mapping.h> 10 11 #include <linux/slab.h> ··· 350 349 /* CQHCI is idle and should halt immediately, so set a small timeout */ 351 350 #define CQHCI_OFF_TIMEOUT 100 352 351 352 + static u32 cqhci_read_ctl(struct cqhci_host *cq_host) 353 + { 354 + return cqhci_readl(cq_host, CQHCI_CTL); 355 + } 356 + 353 357 static void cqhci_off(struct mmc_host *mmc) 354 358 { 355 359 struct cqhci_host *cq_host = mmc->cqe_private; 356 - ktime_t timeout; 357 - bool timed_out; 358 360 u32 reg; 361 + int err; 359 362 360 363 if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt) 361 364 return; ··· 369 364 370 365 cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL); 371 366 372 - timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT); 373 - while (1) { 374 - timed_out = ktime_compare(ktime_get(), timeout) > 0; 375 - reg = cqhci_readl(cq_host, CQHCI_CTL); 376 - if ((reg & CQHCI_HALT) || timed_out) 377 - break; 378 - } 379 - 380 - if (timed_out) 367 + err = readx_poll_timeout(cqhci_read_ctl, cq_host, reg, 368 + reg & CQHCI_HALT, 0, CQHCI_OFF_TIMEOUT); 369 + if (err < 0) 381 370 pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc)); 382 371 else 383 372 pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
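The cqhci change above replaces an open-coded ktime busy-wait with `readx_poll_timeout()`, which repeatedly invokes an accessor until a condition holds or a timeout expires. A userspace sketch of that helper's shape, using an iteration budget in place of a real clock (names and the fake register are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Poll op(ctx) until (value & mask) is set or the budget runs out.
 * Returns 0 on success, -1 standing in for -ETIMEDOUT. */
typedef unsigned int (*read_fn)(void *ctx);

static int poll_until(read_fn op, void *ctx, unsigned int mask,
                      int max_iters, unsigned int *out)
{
    for (int i = 0; i < max_iters; i++) {
        *out = op(ctx);
        if (*out & mask)
            return 0;
    }
    return -1;
}

/* Fake CTL register whose HALT bit (bit 0) appears on the third read. */
static unsigned int fake_reads;

static unsigned int fake_read_ctl(void *ctx)
{
    (void)ctx;
    return ++fake_reads >= 3 ? 0x1u : 0x0u;
}

static int poll_demo(void)
{
    unsigned int reg = 0;

    fake_reads = 0;
    return poll_until(fake_read_ctl, NULL, 0x1u, 10, &reg) == 0 &&
           reg == 1;
}
```

The advantage over the removed loop is the same as in the patch: the timeout handling, the final read, and the success test live in one well-tested helper instead of being re-derived at each call site.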
+1 -10
drivers/mmc/host/meson-mx-sdio.c
··· 357 357 meson_mx_mmc_start_cmd(mmc, mrq->cmd); 358 358 } 359 359 360 - static int meson_mx_mmc_card_busy(struct mmc_host *mmc) 361 - { 362 - struct meson_mx_mmc_host *host = mmc_priv(mmc); 363 - u32 irqc = readl(host->base + MESON_MX_SDIO_IRQC); 364 - 365 - return !!(irqc & MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK); 366 - } 367 - 368 360 static void meson_mx_mmc_read_response(struct mmc_host *mmc, 369 361 struct mmc_command *cmd) 370 362 { ··· 498 506 static struct mmc_host_ops meson_mx_mmc_ops = { 499 507 .request = meson_mx_mmc_request, 500 508 .set_ios = meson_mx_mmc_set_ios, 501 - .card_busy = meson_mx_mmc_card_busy, 502 509 .get_cd = mmc_gpio_get_cd, 503 510 .get_ro = mmc_gpio_get_ro, 504 511 }; ··· 561 570 mmc->f_max = clk_round_rate(host->cfg_div_clk, 562 571 clk_get_rate(host->parent_clk)); 563 572 564 - mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23; 573 + mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_WAIT_WHILE_BUSY; 565 574 mmc->ops = &meson_mx_mmc_ops; 566 575 567 576 ret = mmc_of_parse(mmc);
+2
drivers/mmc/host/sdhci-msm.c
··· 2087 2087 goto clk_disable; 2088 2088 } 2089 2089 2090 + msm_host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_NEED_RSP_BUSY; 2091 + 2090 2092 pm_runtime_get_noresume(&pdev->dev); 2091 2093 pm_runtime_set_active(&pdev->dev); 2092 2094 pm_runtime_enable(&pdev->dev);
+3
drivers/mmc/host/sdhci-pci-core.c
··· 601 601 struct sdhci_pci_slot *slot = sdhci_priv(host); 602 602 struct intel_host *intel_host = sdhci_pci_priv(slot); 603 603 604 + if (!(mmc_driver_type_mask(intel_host->drv_strength) & card_drv)) 605 + return 0; 606 + 604 607 return intel_host->drv_strength; 605 608 } 606 609
+10
drivers/mmc/host/sdhci-xenon.c
··· 235 235 { 236 236 /* Wait for 5ms after set 1.8V signal enable bit */ 237 237 usleep_range(5000, 5500); 238 + 239 + /* 240 + * For some reason the controller's Host Control2 register reports 241 + * the bit representing 1.8V signaling as 0 when read after it was 242 + * written as 1. Subsequent read reports 1. 243 + * 244 + * Since this may cause some issues, do an empty read of the Host 245 + * Control2 register here to circumvent this. 246 + */ 247 + sdhci_readw(host, SDHCI_HOST_CONTROL2); 238 248 } 239 249 240 250 static const struct sdhci_ops sdhci_xenon_ops = {
+1 -1
drivers/most/core.c
··· 1483 1483 ida_destroy(&mdev_id); 1484 1484 } 1485 1485 1486 - module_init(most_init); 1486 + subsys_initcall(most_init); 1487 1487 module_exit(most_exit); 1488 1488 MODULE_LICENSE("GPL"); 1489 1489 MODULE_AUTHOR("Christian Gromm <christian.gromm@microchip.com>");
+1 -1
drivers/net/dsa/mv88e6xxx/Kconfig
··· 24 24 bool "PTP support for Marvell 88E6xxx" 25 25 default n 26 26 depends on NET_DSA_MV88E6XXX_GLOBAL2 27 + depends on PTP_1588_CLOCK 27 28 imply NETWORK_PHY_TIMESTAMPING 28 - imply PTP_1588_CLOCK 29 29 help 30 30 Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch 31 31 chips that support it.
-4
drivers/net/dsa/mv88e6xxx/chip.c
··· 3962 3962 .serdes_get_stats = mv88e6390_serdes_get_stats, 3963 3963 .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 3964 3964 .serdes_get_regs = mv88e6390_serdes_get_regs, 3965 - .phylink_validate = mv88e6390_phylink_validate, 3966 3965 .gpio_ops = &mv88e6352_gpio_ops, 3967 3966 .phylink_validate = mv88e6390_phylink_validate, 3968 3967 }; ··· 4020 4021 .serdes_get_stats = mv88e6390_serdes_get_stats, 4021 4022 .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 4022 4023 .serdes_get_regs = mv88e6390_serdes_get_regs, 4023 - .phylink_validate = mv88e6390_phylink_validate, 4024 4024 .gpio_ops = &mv88e6352_gpio_ops, 4025 4025 .phylink_validate = mv88e6390x_phylink_validate, 4026 4026 }; ··· 4077 4079 .serdes_get_stats = mv88e6390_serdes_get_stats, 4078 4080 .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 4079 4081 .serdes_get_regs = mv88e6390_serdes_get_regs, 4080 - .phylink_validate = mv88e6390_phylink_validate, 4081 4082 .avb_ops = &mv88e6390_avb_ops, 4082 4083 .ptp_ops = &mv88e6352_ptp_ops, 4083 4084 .phylink_validate = mv88e6390_phylink_validate, ··· 4232 4235 .serdes_get_stats = mv88e6390_serdes_get_stats, 4233 4236 .serdes_get_regs_len = mv88e6390_serdes_get_regs_len, 4234 4237 .serdes_get_regs = mv88e6390_serdes_get_regs, 4235 - .phylink_validate = mv88e6390_phylink_validate, 4236 4238 .gpio_ops = &mv88e6352_gpio_ops, 4237 4239 .avb_ops = &mv88e6390_avb_ops, 4238 4240 .ptp_ops = &mv88e6352_ptp_ops,
+1
drivers/net/dsa/ocelot/felix.c
··· 400 400 ocelot->stats_layout = felix->info->stats_layout; 401 401 ocelot->num_stats = felix->info->num_stats; 402 402 ocelot->shared_queue_sz = felix->info->shared_queue_sz; 403 + ocelot->num_mact_rows = felix->info->num_mact_rows; 403 404 ocelot->vcap_is2_keys = felix->info->vcap_is2_keys; 404 405 ocelot->vcap_is2_actions= felix->info->vcap_is2_actions; 405 406 ocelot->vcap = felix->info->vcap;
+1
drivers/net/dsa/ocelot/felix.h
··· 15 15 const u32 *const *map; 16 16 const struct ocelot_ops *ops; 17 17 int shared_queue_sz; 18 + int num_mact_rows; 18 19 const struct ocelot_stat_layout *stats_layout; 19 20 unsigned int num_stats; 20 21 int num_ports;
+1
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1220 1220 .vcap_is2_actions = vsc9959_vcap_is2_actions, 1221 1221 .vcap = vsc9959_vcap_props, 1222 1222 .shared_queue_sz = 128 * 1024, 1223 + .num_mact_rows = 2048, 1223 1224 .num_ports = 6, 1224 1225 .switch_pci_bar = 4, 1225 1226 .imdio_pci_bar = 0,
+1
drivers/net/dsa/sja1105/Kconfig
··· 20 20 config NET_DSA_SJA1105_PTP 21 21 bool "Support for the PTP clock on the NXP SJA1105 Ethernet switch" 22 22 depends on NET_DSA_SJA1105 23 + depends on PTP_1588_CLOCK 23 24 help 24 25 This enables support for timestamping and PTP clock manipulations in 25 26 the SJA1105 DSA driver.
+18 -8
drivers/net/dsa/sja1105/sja1105_ptp.c
··· 16 16 17 17 /* PTPSYNCTS has no interrupt or update mechanism, because the intended 18 18 * hardware use case is for the timestamp to be collected synchronously, 19 - * immediately after the CAS_MASTER SJA1105 switch has triggered a CASSYNC 20 - * pulse on the PTP_CLK pin. When used as a generic extts source, it needs 21 - * polling and a comparison with the old value. The polling interval is just 22 - * the Nyquist rate of a canonical PPS input (e.g. from a GPS module). 23 - * Anything of higher frequency than 1 Hz will be lost, since there is no 24 - * timestamp FIFO. 19 + * immediately after the CAS_MASTER SJA1105 switch has performed a CASSYNC 20 + * one-shot toggle (no return to level) on the PTP_CLK pin. When used as a 21 + * generic extts source, the PTPSYNCTS register needs polling and a comparison 22 + * with the old value. The polling interval is configured as the Nyquist rate 23 + * of a signal with 50% duty cycle and 1Hz frequency, which is sadly all that 24 + * this hardware can do (but may be enough for some setups). Anything of higher 25 + * frequency than 1 Hz will be lost, since there is no timestamp FIFO. 25 26 */ 26 - #define SJA1105_EXTTS_INTERVAL (HZ / 2) 27 + #define SJA1105_EXTTS_INTERVAL (HZ / 4) 27 28 28 29 /* This range is actually +/- SJA1105_MAX_ADJ_PPB 29 30 * divided by 1000 (ppb -> ppm) and with a 16-bit ··· 755 754 return -EOPNOTSUPP; 756 755 757 756 /* Reject requests with unsupported flags */ 758 - if (extts->flags) 757 + if (extts->flags & ~(PTP_ENABLE_FEATURE | 758 + PTP_RISING_EDGE | 759 + PTP_FALLING_EDGE | 760 + PTP_STRICT_FLAGS)) 761 + return -EOPNOTSUPP; 762 + 763 + /* We can only enable time stamping on both edges, sadly. */ 764 + if ((extts->flags & PTP_STRICT_FLAGS) && 765 + (extts->flags & PTP_ENABLE_FEATURE) && 766 + (extts->flags & PTP_EXTTS_EDGES) != PTP_EXTTS_EDGES) 759 767 return -EOPNOTSUPP; 760 768 761 769 rc = sja1105_change_ptp_clk_pin_func(priv, PTP_PF_EXTTS);
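The sja1105 change above tightens extts request validation: unknown flag bits are rejected with a `flags & ~allowed` test, and under `PTP_STRICT_FLAGS` an enable request must ask for both edges, since the hardware can only timestamp both. A sketch of that validation pattern with illustrative flag values (not the real PTP UAPI constants):

```c
#include <assert.h>

#define F_ENABLE  (1u << 0)
#define F_RISING  (1u << 1)
#define F_FALLING (1u << 2)
#define F_STRICT  (1u << 3)
#define F_EDGES   (F_RISING | F_FALLING)
#define F_ALLOWED (F_ENABLE | F_RISING | F_FALLING | F_STRICT)

/* Returns 0 if the request is acceptable, -1 (standing in for
 * -EOPNOTSUPP) otherwise. */
static int extts_check(unsigned int flags)
{
    /* Any bit outside the supported set is an unknown flag. */
    if (flags & ~F_ALLOWED)
        return -1;

    /* Strict mode: an enable must request exactly both edges,
     * because the hardware cannot timestamp a single edge. */
    if ((flags & F_STRICT) && (flags & F_ENABLE) &&
        (flags & F_EDGES) != F_EDGES)
        return -1;

    return 0;
}
```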
+1 -1
drivers/net/ethernet/amazon/ena/ena_netdev.h
··· 69 69 * 16kB. 70 70 */ 71 71 #if PAGE_SIZE > SZ_16K 72 - #define ENA_PAGE_SIZE SZ_16K 72 + #define ENA_PAGE_SIZE (_AC(SZ_16K, UL)) 73 73 #else 74 74 #define ENA_PAGE_SIZE PAGE_SIZE 75 75 #endif
+1 -1
drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
··· 57 57 { AQ_DEVICE_ID_D108, AQ_HWREV_2, &hw_atl_ops_b0, &hw_atl_b0_caps_aqc108, }, 58 58 { AQ_DEVICE_ID_D109, AQ_HWREV_2, &hw_atl_ops_b0, &hw_atl_b0_caps_aqc109, }, 59 59 60 - { AQ_DEVICE_ID_AQC100, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, }, 60 + { AQ_DEVICE_ID_AQC100, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc100, }, 61 61 { AQ_DEVICE_ID_AQC107, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, }, 62 62 { AQ_DEVICE_ID_AQC108, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc108, }, 63 63 { AQ_DEVICE_ID_AQC109, AQ_HWREV_ANY, &hw_atl_ops_b1, &hw_atl_b0_caps_aqc109, },
+15 -9
drivers/net/ethernet/broadcom/bgmac-platform.c
··· 172 172 { 173 173 struct device_node *np = pdev->dev.of_node; 174 174 struct bgmac *bgmac; 175 + struct resource *regs; 175 176 const u8 *mac_addr; 176 177 177 178 bgmac = bgmac_alloc(&pdev->dev); ··· 207 206 if (IS_ERR(bgmac->plat.base)) 208 207 return PTR_ERR(bgmac->plat.base); 209 208 210 - bgmac->plat.idm_base = 211 - devm_platform_ioremap_resource_byname(pdev, "idm_base"); 212 - if (IS_ERR(bgmac->plat.idm_base)) 213 - return PTR_ERR(bgmac->plat.idm_base); 214 - bgmac->feature_flags &= ~BGMAC_FEAT_IDM_MASK; 209 + regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "idm_base"); 210 + if (regs) { 211 + bgmac->plat.idm_base = devm_ioremap_resource(&pdev->dev, regs); 212 + if (IS_ERR(bgmac->plat.idm_base)) 213 + return PTR_ERR(bgmac->plat.idm_base); 214 + bgmac->feature_flags &= ~BGMAC_FEAT_IDM_MASK; 215 + } 215 216 216 - bgmac->plat.nicpm_base = 217 - devm_platform_ioremap_resource_byname(pdev, "nicpm_base"); 218 - if (IS_ERR(bgmac->plat.nicpm_base)) 219 - return PTR_ERR(bgmac->plat.nicpm_base); 217 + regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "nicpm_base"); 218 + if (regs) { 219 + bgmac->plat.nicpm_base = devm_ioremap_resource(&pdev->dev, 220 + regs); 221 + if (IS_ERR(bgmac->plat.nicpm_base)) 222 + return PTR_ERR(bgmac->plat.nicpm_base); 223 + } 220 224 221 225 bgmac->read = platform_bgmac_read; 222 226 bgmac->write = platform_bgmac_write;
+13 -7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 6642 6642 int rc; 6643 6643 6644 6644 if (!mem_size) 6645 - return 0; 6645 + return -EINVAL; 6646 6646 6647 6647 ctx_pg->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE); 6648 6648 if (ctx_pg->nr_pages > MAX_CTX_TOTAL_PAGES) { ··· 9780 9780 netdev_features_t features) 9781 9781 { 9782 9782 struct bnxt *bp = netdev_priv(dev); 9783 + netdev_features_t vlan_features; 9783 9784 9784 9785 if ((features & NETIF_F_NTUPLE) && !bnxt_rfs_capable(bp)) 9785 9786 features &= ~NETIF_F_NTUPLE; ··· 9797 9796 /* Both CTAG and STAG VLAN accelaration on the RX side have to be 9798 9797 * turned on or off together. 9799 9798 */ 9800 - if ((features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX)) != 9801 - (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX)) { 9799 + vlan_features = features & (NETIF_F_HW_VLAN_CTAG_RX | 9800 + NETIF_F_HW_VLAN_STAG_RX); 9801 + if (vlan_features != (NETIF_F_HW_VLAN_CTAG_RX | 9802 + NETIF_F_HW_VLAN_STAG_RX)) { 9802 9803 if (dev->features & NETIF_F_HW_VLAN_CTAG_RX) 9803 9804 features &= ~(NETIF_F_HW_VLAN_CTAG_RX | 9804 9805 NETIF_F_HW_VLAN_STAG_RX); 9805 - else 9806 + else if (vlan_features) 9806 9807 features |= NETIF_F_HW_VLAN_CTAG_RX | 9807 9808 NETIF_F_HW_VLAN_STAG_RX; 9808 9809 } ··· 12215 12212 bnxt_ulp_start(bp, err); 12216 12213 } 12217 12214 12218 - if (result != PCI_ERS_RESULT_RECOVERED && netif_running(netdev)) 12219 - dev_close(netdev); 12215 + if (result != PCI_ERS_RESULT_RECOVERED) { 12216 + if (netif_running(netdev)) 12217 + dev_close(netdev); 12218 + pci_disable_device(pdev); 12219 + } 12220 12220 12221 12221 rtnl_unlock(); 12222 12222 12223 - return PCI_ERS_RESULT_RECOVERED; 12223 + return result; 12224 12224 } 12225 12225 12226 12226 /**
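The bnxt fix above normalizes a feature request where CTAG and STAG RX VLAN acceleration must be toggled together: a partial request is rounded toward off when VLAN RX is currently enabled, and up to both bits when it is off and anything was requested. A sketch of that "both or neither" normalization with illustrative bit values (not the real `NETIF_F_*` flags):

```c
#include <assert.h>

#define F_CTAG_RX (1u << 0)
#define F_STAG_RX (1u << 1)
#define F_VLAN_RX (F_CTAG_RX | F_STAG_RX)

/* Mirror of the patch's logic: if the requested VLAN RX bits are not
 * exactly both, either clear both (when currently enabled) or, for a
 * non-empty partial request while disabled, set both. */
static unsigned int fix_vlan_features(unsigned int requested,
                                      unsigned int current_features)
{
    unsigned int vlan = requested & F_VLAN_RX;

    if (vlan != F_VLAN_RX) {
        if (current_features & F_CTAG_RX)
            requested &= ~F_VLAN_RX;   /* currently on: turn both off */
        else if (vlan)
            requested |= F_VLAN_RX;    /* partial while off: both on */
    }
    return requested;
}
```

The `else if (vlan)` guard is the actual bug fix: before the patch an empty request (neither bit set) while the feature was off would incorrectly turn both bits back on.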
-1
drivers/net/ethernet/broadcom/bnxt/bnxt.h
··· 1066 1066 #define BNXT_VF_LINK_FORCED 0x4 1067 1067 #define BNXT_VF_LINK_UP 0x8 1068 1068 #define BNXT_VF_TRUST 0x10 1069 - u32 func_flags; /* func cfg flags */ 1070 1069 u32 min_tx_rate; 1071 1070 u32 max_tx_rate; 1072 1071 void *hwrm_cmd_req_addr;
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
··· 43 43 #define BNXT_NVM_CFG_VER_BITS 24 44 44 #define BNXT_NVM_CFG_VER_BYTES 4 45 45 46 - #define BNXT_MSIX_VEC_MAX 1280 46 + #define BNXT_MSIX_VEC_MAX 512 47 47 #define BNXT_MSIX_VEC_MIN_MAX 128 48 48 49 49 enum bnxt_nvm_dir_type {
+2 -8
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
··· 85 85 if (old_setting == setting) 86 86 return 0; 87 87 88 - func_flags = vf->func_flags; 89 88 if (setting) 90 - func_flags |= FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE; 89 + func_flags = FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE; 91 90 else 92 - func_flags |= FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE; 91 + func_flags = FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE; 93 92 /*TODO: if the driver supports VLAN filter on guest VLAN, 94 93 * the spoof check should also include vlan anti-spoofing 95 94 */ ··· 97 98 req.flags = cpu_to_le32(func_flags); 98 99 rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 99 100 if (!rc) { 100 - vf->func_flags = func_flags; 101 101 if (setting) 102 102 vf->flags |= BNXT_VF_SPOOFCHK; 103 103 else ··· 226 228 memcpy(vf->mac_addr, mac, ETH_ALEN); 227 229 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); 228 230 req.fid = cpu_to_le16(vf->fw_fid); 229 - req.flags = cpu_to_le32(vf->func_flags); 230 231 req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_MAC_ADDR); 231 232 memcpy(req.dflt_mac_addr, mac, ETH_ALEN); 232 233 return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); ··· 263 266 264 267 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); 265 268 req.fid = cpu_to_le16(vf->fw_fid); 266 - req.flags = cpu_to_le32(vf->func_flags); 267 269 req.dflt_vlan = cpu_to_le16(vlan_tag); 268 270 req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_VLAN); 269 271 rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); ··· 301 305 return 0; 302 306 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); 303 307 req.fid = cpu_to_le16(vf->fw_fid); 304 - req.flags = cpu_to_le32(vf->func_flags); 305 308 req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_MAX_BW); 306 309 req.max_bw = cpu_to_le32(max_tx_rate); 307 310 req.enables |= cpu_to_le32(FUNC_CFG_REQ_ENABLES_MIN_BW); ··· 472 477 vf = &bp->pf.vf[vf_id]; 473 478 bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1); 474 479 req.fid = 
cpu_to_le16(vf->fw_fid); 475 - req.flags = cpu_to_le32(vf->func_flags); 476 480 477 481 if (is_valid_ether_addr(vf->mac_addr)) { 478 482 req.enables |= cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_MAC_ADDR);
+1 -1
drivers/net/ethernet/cadence/Kconfig
··· 35 35 config MACB_USE_HWSTAMP 36 36 bool "Use IEEE 1588 hwstamp" 37 37 depends on MACB 38 + depends on PTP_1588_CLOCK 38 39 default y 39 - imply PTP_1588_CLOCK 40 40 ---help--- 41 41 Enable IEEE 1588 Precision Time Protocol (PTP) support for MACB. 42 42
+12 -12
drivers/net/ethernet/cadence/macb_main.c
··· 334 334 int status; 335 335 336 336 status = pm_runtime_get_sync(&bp->pdev->dev); 337 - if (status < 0) 337 + if (status < 0) { 338 + pm_runtime_put_noidle(&bp->pdev->dev); 338 339 goto mdio_pm_exit; 340 + } 339 341 340 342 status = macb_mdio_wait_for_idle(bp); 341 343 if (status < 0) ··· 388 386 int status; 389 387 390 388 status = pm_runtime_get_sync(&bp->pdev->dev); 391 - if (status < 0) 389 + if (status < 0) { 390 + pm_runtime_put_noidle(&bp->pdev->dev); 392 391 goto mdio_pm_exit; 392 + } 393 393 394 394 status = macb_mdio_wait_for_idle(bp); 395 395 if (status < 0) ··· 3820 3816 int ret; 3821 3817 3822 3818 ret = pm_runtime_get_sync(&lp->pdev->dev); 3823 - if (ret < 0) 3819 + if (ret < 0) { 3820 + pm_runtime_put_noidle(&lp->pdev->dev); 3824 3821 return ret; 3822 + } 3825 3823 3826 3824 /* Clear internal statistics */ 3827 3825 ctl = macb_readl(lp, NCR); ··· 4178 4172 4179 4173 static int fu540_c000_init(struct platform_device *pdev) 4180 4174 { 4181 - struct resource *res; 4182 - 4183 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 4184 - if (!res) 4185 - return -ENODEV; 4186 - 4187 - mgmt->reg = ioremap(res->start, resource_size(res)); 4188 - if (!mgmt->reg) 4189 - return -ENOMEM; 4175 + mgmt->reg = devm_platform_ioremap_resource(pdev, 1); 4176 + if (IS_ERR(mgmt->reg)) 4177 + return PTR_ERR(mgmt->reg); 4190 4178 4191 4179 return macb_init(pdev); 4192 4180 }
+1 -1
drivers/net/ethernet/cavium/Kconfig
··· 54 54 config CAVIUM_PTP 55 55 tristate "Cavium PTP coprocessor as PTP clock" 56 56 depends on 64BIT && PCI 57 - imply PTP_1588_CLOCK 57 + depends on PTP_1588_CLOCK 58 58 ---help--- 59 59 This driver adds support for the Precision Time Protocol Clocks and 60 60 Timestamping coprocessor (PTP) found on Cavium processors.
+37 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2207 2207 if (unlikely(skip_eotx_wr)) { 2208 2208 start = (u64 *)wr; 2209 2209 eosw_txq->state = next_state; 2210 + eosw_txq->cred -= wrlen16; 2211 + eosw_txq->ncompl++; 2212 + eosw_txq->last_compl = 0; 2210 2213 goto write_wr_headers; 2211 2214 } 2212 2215 ··· 2368 2365 return cxgb4_eth_xmit(skb, dev); 2369 2366 } 2370 2367 2368 + static void eosw_txq_flush_pending_skbs(struct sge_eosw_txq *eosw_txq) 2369 + { 2370 + int pktcount = eosw_txq->pidx - eosw_txq->last_pidx; 2371 + int pidx = eosw_txq->pidx; 2372 + struct sk_buff *skb; 2373 + 2374 + if (!pktcount) 2375 + return; 2376 + 2377 + if (pktcount < 0) 2378 + pktcount += eosw_txq->ndesc; 2379 + 2380 + while (pktcount--) { 2381 + pidx--; 2382 + if (pidx < 0) 2383 + pidx += eosw_txq->ndesc; 2384 + 2385 + skb = eosw_txq->desc[pidx].skb; 2386 + if (skb) { 2387 + dev_consume_skb_any(skb); 2388 + eosw_txq->desc[pidx].skb = NULL; 2389 + eosw_txq->inuse--; 2390 + } 2391 + } 2392 + 2393 + eosw_txq->pidx = eosw_txq->last_pidx + 1; 2394 + } 2395 + 2371 2396 /** 2372 2397 * cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc. 2373 2398 * @dev - netdevice ··· 2471 2440 FW_FLOWC_MNEM_EOSTATE_CLOSING : 2472 2441 FW_FLOWC_MNEM_EOSTATE_ESTABLISHED); 2473 2442 2474 - eosw_txq->cred -= len16; 2475 - eosw_txq->ncompl++; 2476 - eosw_txq->last_compl = 0; 2443 + /* Free up any pending skbs to ensure there's room for 2444 + * termination FLOWC. 2445 + */ 2446 + if (tc == FW_SCHED_CLS_NONE) 2447 + eosw_txq_flush_pending_skbs(eosw_txq); 2477 2448 2478 2449 ret = eosw_txq_enqueue(eosw_txq, skb); 2479 2450 if (ret) { ··· 2728 2695 * is ever running at a time ... 2729 2696 */ 2730 2697 static void service_ofldq(struct sge_uld_txq *q) 2698 + __must_hold(&q->sendq.lock) 2731 2699 { 2732 2700 u64 *pos, *before, *end; 2733 2701 int credits;
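The new `eosw_txq_flush_pending_skbs()` above walks backwards from `pidx` to `last_pidx` in a descriptor ring, so both the pending count and each step must wrap modulo `ndesc`. A sketch of that wrap-around arithmetic, with the skb free replaced by a counter:

```c
#include <assert.h>

/* Pending entries between last_pidx (exclusive, oldest side) and pidx
 * in a ring of ndesc descriptors. */
static int ring_pending(int pidx, int last_pidx, int ndesc)
{
    int n = pidx - last_pidx;

    return n < 0 ? n + ndesc : n;
}

static int ring_flush_demo(void)
{
    /* pidx has wrapped past the end of an 8-entry ring:
     * pending entries are indices 7 and 0. */
    int ndesc = 8, last_pidx = 7, pidx = 1;
    int count = ring_pending(pidx, last_pidx, ndesc);
    int freed = 0;

    while (count--) {
        pidx--;
        if (pidx < 0)
            pidx += ndesc;    /* wrap backwards, as in the patch */
        freed++;              /* stands in for dev_consume_skb_any() */
    }
    return freed;
}
```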
+1 -1
drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
··· 74 74 pci_disable_device(pdev); 75 75 err_pci_enable: 76 76 err_mdiobus_alloc: 77 - iounmap(port_regs); 78 77 err_hw_alloc: 78 + iounmap(port_regs); 79 79 err_ioremap: 80 80 return err; 81 81 }
+2 -1
drivers/net/ethernet/ibm/ibmvnic.c
··· 2189 2189 rc = do_hard_reset(adapter, rwi, reset_state); 2190 2190 rtnl_unlock(); 2191 2191 } 2192 - } else { 2192 + } else if (!(rwi->reset_reason == VNIC_RESET_FATAL && 2193 + adapter->from_passive_init)) { 2193 2194 rc = do_reset(adapter, rwi, reset_state); 2194 2195 } 2195 2196 kfree(rwi);
+3
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
··· 1428 1428 struct mvpp2_ethtool_fs *efs; 1429 1429 int ret; 1430 1430 1431 + if (info->fs.location >= MVPP2_N_RFS_ENTRIES_PER_FLOW) 1432 + return -EINVAL; 1433 + 1431 1434 efs = port->rfs_rules[info->fs.location]; 1432 1435 if (!efs) 1433 1436 return -EINVAL;
+2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 4329 4329 4330 4330 if (!mvpp22_rss_is_supported()) 4331 4331 return -EOPNOTSUPP; 4332 + if (rss_context >= MVPP22_N_RSS_TABLES) 4333 + return -EINVAL; 4332 4334 4333 4335 if (hfunc) 4334 4336 *hfunc = ETH_RSS_HASH_CRC32;
+3 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 2550 2550 2551 2551 if (!err || err == -ENOSPC) { 2552 2552 priv->def_counter[port] = idx; 2553 + err = 0; 2553 2554 } else if (err == -ENOENT) { 2554 2555 err = 0; 2555 2556 continue; ··· 2601 2600 MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED); 2602 2601 if (!err) 2603 2602 *idx = get_param_l(&out_param); 2604 - 2603 + if (WARN_ON(err == -ENOSPC)) 2604 + err = -EINVAL; 2605 2605 return err; 2606 2606 } 2607 2607 return __mlx4_counter_alloc(dev, idx);
+5 -1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 888 888 } 889 889 890 890 cmd->ent_arr[ent->idx] = ent; 891 - set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state); 892 891 lay = get_inst(cmd, ent->idx); 893 892 ent->lay = lay; 894 893 memset(lay, 0, sizeof(*lay)); ··· 909 910 910 911 if (ent->callback) 911 912 schedule_delayed_work(&ent->cb_timeout_work, cb_timeout); 913 + set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state); 912 914 913 915 /* Skip sending command to fw if internal error */ 914 916 if (pci_channel_offline(dev->pdev) || ··· 922 922 MLX5_SET(mbox_out, ent->out, syndrome, drv_synd); 923 923 924 924 mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); 925 + /* no doorbell, no need to keep the entry */ 926 + free_ent(cmd, ent->idx); 927 + if (ent->callback) 928 + free_cmd(ent); 925 929 return; 926 930 } 927 931
+2 -7
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1773 1773 1774 1774 static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv) 1775 1775 { 1776 - int err = mlx5e_init_rep_rx(priv); 1777 - 1778 - if (err) 1779 - return err; 1780 - 1781 1776 mlx5e_create_q_counters(priv); 1782 - return 0; 1777 + return mlx5e_init_rep_rx(priv); 1783 1778 } 1784 1779 1785 1780 static void mlx5e_cleanup_ul_rep_rx(struct mlx5e_priv *priv) 1786 1781 { 1787 - mlx5e_destroy_q_counters(priv); 1788 1782 mlx5e_cleanup_rep_rx(priv); 1783 + mlx5e_destroy_q_counters(priv); 1789 1784 } 1790 1785 1791 1786 static int mlx5e_init_uplink_rep_tx(struct mlx5e_rep_priv *rpriv)
+9 -9
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 1550 1550 MLX5_FLOW_NAMESPACE_KERNEL, 1, 1551 1551 modact); 1552 1552 if (IS_ERR(mod_hdr)) { 1553 + err = PTR_ERR(mod_hdr); 1553 1554 esw_warn(dev, "Failed to create restore mod header, err: %d\n", 1554 1555 err); 1555 - err = PTR_ERR(mod_hdr); 1556 1556 goto err_mod_hdr; 1557 1557 } 1558 1558 ··· 2219 2219 total_vports = num_vfs + MLX5_SPECIAL_VPORTS(esw->dev); 2220 2220 2221 2221 memset(&esw->fdb_table.offloads, 0, sizeof(struct offloads_fdb)); 2222 + mutex_init(&esw->fdb_table.offloads.vports.lock); 2223 + hash_init(esw->fdb_table.offloads.vports.table); 2222 2224 2223 2225 err = esw_create_uplink_offloads_acl_tables(esw); 2224 2226 if (err) 2225 - return err; 2227 + goto create_acl_err; 2226 2228 2227 2229 err = esw_create_offloads_table(esw, total_vports); 2228 2230 if (err) ··· 2242 2240 if (err) 2243 2241 goto create_fg_err; 2244 2242 2245 - mutex_init(&esw->fdb_table.offloads.vports.lock); 2246 - hash_init(esw->fdb_table.offloads.vports.table); 2247 - 2248 2243 return 0; 2249 2244 2250 2245 create_fg_err: ··· 2252 2253 esw_destroy_offloads_table(esw); 2253 2254 create_offloads_err: 2254 2255 esw_destroy_uplink_offloads_acl_tables(esw); 2255 - 2256 + create_acl_err: 2257 + mutex_destroy(&esw->fdb_table.offloads.vports.lock); 2256 2258 return err; 2257 2259 } 2258 2260 2259 2261 static void esw_offloads_steering_cleanup(struct mlx5_eswitch *esw) 2260 2262 { 2261 - mutex_destroy(&esw->fdb_table.offloads.vports.lock); 2262 2263 esw_destroy_vport_rx_group(esw); 2263 2264 esw_destroy_offloads_fdb_tables(esw); 2264 2265 esw_destroy_restore_table(esw); 2265 2266 esw_destroy_offloads_table(esw); 2266 2267 esw_destroy_uplink_offloads_acl_tables(esw); 2268 + mutex_destroy(&esw->fdb_table.offloads.vports.lock); 2267 2269 } 2268 2270 2269 2271 static void ··· 2377 2377 err_vports: 2378 2378 esw_offloads_unload_rep(esw, MLX5_VPORT_UPLINK); 2379 2379 err_uplink: 2380 - esw_set_passing_vport_metadata(esw, false); 2381 - err_steering_init: 2382 2380 esw_offloads_steering_cleanup(esw); 2381 + err_steering_init: 2382 + esw_set_passing_vport_metadata(esw, false); 2383 2383 err_vport_metadata: 2384 2384 mlx5_rdma_disable_roce(esw->dev); 2385 2385 mutex_destroy(&esw->offloads.termtbl_mutex);
+13 -1
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
··· 695 695 pr_info("CQ event %u on CQ #%u\n", event, mcq->cqn); 696 696 } 697 697 698 + static void dr_cq_complete(struct mlx5_core_cq *mcq, 699 + struct mlx5_eqe *eqe) 700 + { 701 + pr_err("CQ completion CQ: #%u\n", mcq->cqn); 702 + } 703 + 698 704 static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev, 699 705 struct mlx5_uars_page *uar, 700 706 size_t ncqe) ··· 762 756 mlx5_fill_page_frag_array(&cq->wq_ctrl.buf, pas); 763 757 764 758 cq->mcq.event = dr_cq_event; 759 + cq->mcq.comp = dr_cq_complete; 765 760 766 761 err = mlx5_core_create_cq(mdev, &cq->mcq, in, inlen, out, sizeof(out)); 767 762 kvfree(in); ··· 774 767 cq->mcq.set_ci_db = cq->wq_ctrl.db.db; 775 768 cq->mcq.arm_db = cq->wq_ctrl.db.db + 1; 776 769 *cq->mcq.set_ci_db = 0; 777 - *cq->mcq.arm_db = 0; 770 + 771 + /* Set a non-zero value, to avoid having the HW run db-recovery on a 772 + * CQ that is used in polling mode. 773 + */ 774 + *cq->mcq.arm_db = cpu_to_be32(2 << 28); 775 + 778 776 cq->mcq.vector = 0; 779 777 cq->mcq.irqn = irqn; 780 778 cq->mcq.uar = uar;
+10 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
··· 986 986 unsigned int priority, 987 987 struct mlxsw_afk_element_usage *elusage) 988 988 { 989 + struct mlxsw_sp_acl_tcam_vchunk *vchunk, *vchunk2; 989 990 struct mlxsw_sp_acl_tcam_vregion *vregion; 990 - struct mlxsw_sp_acl_tcam_vchunk *vchunk; 991 + struct list_head *pos; 991 992 int err; 992 993 993 994 if (priority == MLXSW_SP_ACL_TCAM_CATCHALL_PRIO) ··· 1026 1025 } 1027 1026 1028 1027 mlxsw_sp_acl_tcam_rehash_ctx_vregion_changed(vregion); 1029 - list_add_tail(&vchunk->list, &vregion->vchunk_list); 1028 + 1029 + /* Position the vchunk inside the list according to priority */ 1030 + list_for_each(pos, &vregion->vchunk_list) { 1031 + vchunk2 = list_entry(pos, typeof(*vchunk2), list); 1032 + if (vchunk2->priority > priority) 1033 + break; 1034 + } 1035 + list_add_tail(&vchunk->list, pos); 1030 1036 mutex_unlock(&vregion->lock); 1031 1037 1032 1038 return vchunk;
+2 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
··· 36 36 err = mlxsw_sp_acl_rulei_act_count(mlxsw_sp, rulei, extack); 37 37 if (err) 38 38 return err; 39 - } else if (act->hw_stats != FLOW_ACTION_HW_STATS_DISABLED) { 39 + } else if (act->hw_stats != FLOW_ACTION_HW_STATS_DISABLED && 40 + act->hw_stats != FLOW_ACTION_HW_STATS_DONT_CARE) { 40 41 NL_SET_ERR_MSG_MOD(extack, "Unsupported action HW stats type"); 41 42 return -EOPNOTSUPP; 42 43 }
+1 -1
drivers/net/ethernet/moxa/moxart_ether.c
··· 564 564 struct net_device *ndev = platform_get_drvdata(pdev); 565 565 566 566 unregister_netdev(ndev); 567 - free_irq(ndev->irq, ndev); 567 + devm_free_irq(&pdev->dev, ndev->irq, ndev); 568 568 moxart_mac_free_memory(ndev); 569 569 free_netdev(ndev); 570 570
+11 -6
drivers/net/ethernet/mscc/ocelot.c
··· 1031 1031 { 1032 1032 int i, j; 1033 1033 1034 - /* Loop through all the mac tables entries. There are 1024 rows of 4 1035 - * entries. 1036 - */ 1037 - for (i = 0; i < 1024; i++) { 1034 + /* Loop through all the mac tables entries. */ 1035 + for (i = 0; i < ocelot->num_mact_rows; i++) { 1038 1036 for (j = 0; j < 4; j++) { 1039 1037 struct ocelot_mact_entry entry; 1040 1038 bool is_static; ··· 1451 1453 1452 1454 void ocelot_set_ageing_time(struct ocelot *ocelot, unsigned int msecs) 1453 1455 { 1454 - ocelot_write(ocelot, ANA_AUTOAGE_AGE_PERIOD(msecs / 2), 1455 - ANA_AUTOAGE); 1456 + unsigned int age_period = ANA_AUTOAGE_AGE_PERIOD(msecs / 2000); 1457 + 1458 + /* Setting AGE_PERIOD to zero effectively disables automatic aging, 1459 + * which is clearly not what our intention is. So avoid that. 1460 + */ 1461 + if (!age_period) 1462 + age_period = 1; 1463 + 1464 + ocelot_rmw(ocelot, age_period, ANA_AUTOAGE_AGE_PERIOD_M, ANA_AUTOAGE); 1456 1465 } 1457 1466 EXPORT_SYMBOL(ocelot_set_ageing_time); 1458 1467
+1
drivers/net/ethernet/mscc/ocelot_regs.c
··· 431 431 ocelot->stats_layout = ocelot_stats_layout; 432 432 ocelot->num_stats = ARRAY_SIZE(ocelot_stats_layout); 433 433 ocelot->shared_queue_sz = 224 * 1024; 434 + ocelot->num_mact_rows = 1024; 434 435 ocelot->ops = ops; 435 436 436 437 ret = ocelot_regfields_init(ocelot, ocelot_regfields);
+4 -2
drivers/net/ethernet/natsemi/jazzsonic.c
··· 208 208 209 209 err = register_netdev(dev); 210 210 if (err) 211 - goto out1; 211 + goto undo_probe1; 212 212 213 213 return 0; 214 214 215 - out1: 215 + undo_probe1: 216 + dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode), 217 + lp->descriptors, lp->descriptors_laddr); 216 218 release_mem_region(dev->base_addr, SONIC_MEM_SIZE); 217 219 out: 218 220 free_netdev(dev);
+1
drivers/net/ethernet/netronome/nfp/abm/main.c
··· 283 283 if (!nfp_nsp_has_hwinfo_lookup(nsp)) { 284 284 nfp_warn(pf->cpp, "NSP doesn't support PF MAC generation\n"); 285 285 eth_hw_addr_random(nn->dp.netdev); 286 + nfp_nsp_close(nsp); 286 287 return; 287 288 } 288 289
+1 -2
drivers/net/ethernet/pensando/ionic/ionic_debugfs.c
··· 170 170 debugfs_create_x64("base_pa", 0400, cq_dentry, &cq->base_pa); 171 171 debugfs_create_u32("num_descs", 0400, cq_dentry, &cq->num_descs); 172 172 debugfs_create_u32("desc_size", 0400, cq_dentry, &cq->desc_size); 173 - debugfs_create_u8("done_color", 0400, cq_dentry, 174 - (u8 *)&cq->done_color); 173 + debugfs_create_bool("done_color", 0400, cq_dentry, &cq->done_color); 175 174 176 175 debugfs_create_file("tail", 0400, cq_dentry, cq, &cq_tail_fops); 177 176
+2 -2
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 2101 2101 ionic_txrx_free(lif); 2102 2102 } 2103 2103 ionic_lifs_deinit(ionic); 2104 + ionic_reset(ionic); 2104 2105 ionic_qcqs_free(lif); 2105 2106 2106 2107 dev_info(ionic->dev, "FW Down: LIFs stopped\n"); ··· 2117 2116 2118 2117 dev_info(ionic->dev, "FW Up: restarting LIFs\n"); 2119 2118 2119 + ionic_init_devinfo(ionic); 2120 2120 err = ionic_qcqs_alloc(lif); 2121 2121 if (err) 2122 2122 goto err_out; ··· 2551 2549 dev_err(ionic->dev, "Cannot register net device, aborting\n"); 2552 2550 return err; 2553 2551 } 2554 - 2555 - ionic_link_status_check_request(ionic->master_lif); 2556 2552 ionic->master_lif->registered = true; 2557 2553 2558 2554 return 0;
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac5.c
··· 624 624 total_offset += offset; 625 625 } 626 626 627 - total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000; 627 + total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL; 628 628 total_ctr += total_offset; 629 629 630 630 ctr_low = do_div(total_ctr, 1000000000);
+4 -7
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 4060 4060 /** 4061 4061 * stmmac_interrupt - main ISR 4062 4062 * @irq: interrupt number. 4063 - * @dev_id: to pass the net device pointer. 4063 + * @dev_id: to pass the net device pointer (must be valid). 4064 4064 * Description: this is the main driver interrupt service routine. 4065 4065 * It can call: 4066 4066 * o DMA service routine (to manage incoming frame reception and transmission ··· 4083 4083 4084 4084 if (priv->irq_wake) 4085 4085 pm_wakeup_event(priv->device, 0); 4086 - 4087 - if (unlikely(!dev)) { 4088 - netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__); 4089 - return IRQ_NONE; 4090 - } 4091 4086 4092 4087 /* Check if adapter is up */ 4093 4088 if (test_bit(STMMAC_DOWN, &priv->state)) ··· 4986 4991 priv->plat->bsp_priv); 4987 4992 4988 4993 if (ret < 0) 4989 - return ret; 4994 + goto error_serdes_powerup; 4990 4995 } 4991 4996 4992 4997 #ifdef CONFIG_DEBUG_FS ··· 4995 5000 4996 5001 return ret; 4997 5002 5003 + error_serdes_powerup: 5004 + unregister_netdev(ndev); 4998 5005 error_netdev_register: 4999 5006 phylink_destroy(priv->phylink); 5000 5007 error_phy_setup:
+1 -2
drivers/net/ethernet/ti/Kconfig
··· 90 90 config TI_CPTS_MOD 91 91 tristate 92 92 depends on TI_CPTS 93 + depends on PTP_1588_CLOCK 93 94 default y if TI_CPSW=y || TI_KEYSTONE_NETCP=y || TI_CPSW_SWITCHDEV=y 94 - select NET_PTP_CLASSIFY 95 - imply PTP_1588_CLOCK 96 95 default m 97 96 98 97 config TI_K3_AM65_CPSW_NUSS
+3 -2
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1719 1719 1720 1720 ret = devm_request_irq(dev, tx_chn->irq, 1721 1721 am65_cpsw_nuss_tx_irq, 1722 - 0, tx_chn->tx_chn_name, tx_chn); 1722 + IRQF_TRIGGER_HIGH, 1723 + tx_chn->tx_chn_name, tx_chn); 1723 1724 if (ret) { 1724 1725 dev_err(dev, "failure requesting tx%u irq %u, %d\n", 1725 1726 tx_chn->id, tx_chn->irq, ret); ··· 1745 1744 1746 1745 ret = devm_request_irq(dev, common->rx_chns.irq, 1747 1746 am65_cpsw_nuss_rx_irq, 1748 - 0, dev_name(dev), common); 1747 + IRQF_TRIGGER_HIGH, dev_name(dev), common); 1749 1748 if (ret) { 1750 1749 dev_err(dev, "failure requesting rx irq %u, %d\n", 1751 1750 common->rx_chns.irq, ret);
+1 -1
drivers/net/ethernet/toshiba/tc35815.c
··· 643 643 linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, mask); 644 644 linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask); 645 645 } 646 - linkmode_and(phydev->supported, phydev->supported, mask); 646 + linkmode_andnot(phydev->supported, phydev->supported, mask); 647 647 linkmode_copy(phydev->advertising, phydev->supported); 648 648 649 649 lp->link = 0;
+5 -4
drivers/net/gtp.c
··· 1169 1169 static struct genl_family gtp_genl_family; 1170 1170 1171 1171 static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq, 1172 - u32 type, struct pdp_ctx *pctx) 1172 + int flags, u32 type, struct pdp_ctx *pctx) 1173 1173 { 1174 1174 void *genlh; 1175 1175 1176 - genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, 0, 1176 + genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, flags, 1177 1177 type); 1178 1178 if (genlh == NULL) 1179 1179 goto nlmsg_failure; ··· 1227 1227 goto err_unlock; 1228 1228 } 1229 1229 1230 - err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid, 1231 - info->snd_seq, info->nlhdr->nlmsg_type, pctx); 1230 + err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid, info->snd_seq, 1231 + 0, info->nlhdr->nlmsg_type, pctx); 1232 1232 if (err < 0) 1233 1233 goto err_unlock_free; 1234 1234 ··· 1271 1271 gtp_genl_fill_info(skb, 1272 1272 NETLINK_CB(cb->skb).portid, 1273 1273 cb->nlh->nlmsg_seq, 1274 + NLM_F_MULTI, 1274 1275 cb->nlh->nlmsg_type, pctx)) { 1275 1276 cb->args[0] = i; 1276 1277 cb->args[1] = j;
+2 -1
drivers/net/hyperv/netvsc_drv.c
··· 707 707 goto drop; 708 708 } 709 709 710 - static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *ndev) 710 + static netdev_tx_t netvsc_start_xmit(struct sk_buff *skb, 711 + struct net_device *ndev) 711 712 { 712 713 return netvsc_xmit(skb, ndev, false); 713 714 }
+9 -2
drivers/net/ipa/gsi.c
··· 1041 1041 1042 1042 complete(&gsi->completion); 1043 1043 } 1044 + 1044 1045 /* Inter-EE interrupt handler */ 1045 1046 static void gsi_isr_glob_ee(struct gsi *gsi) 1046 1047 { ··· 1494 1493 struct completion *completion = &gsi->completion; 1495 1494 u32 val; 1496 1495 1496 + /* First zero the result code field */ 1497 + val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET); 1498 + val &= ~GENERIC_EE_RESULT_FMASK; 1499 + iowrite32(val, gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET); 1500 + 1501 + /* Now issue the command */ 1497 1502 val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK); 1498 1503 val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK); 1499 1504 val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK); ··· 1805 1798 1806 1799 /* Worst case we need an event for every outstanding TRE */ 1807 1800 if (data->channel.tre_count > data->channel.event_count) { 1808 - dev_warn(gsi->dev, "channel %u limited to %u TREs\n", 1809 - data->channel_id, data->channel.tre_count); 1810 1801 tre_count = data->channel.event_count; 1802 + dev_warn(gsi->dev, "channel %u limited to %u TREs\n", 1803 + data->channel_id, tre_count); 1811 1804 } else { 1812 1805 tre_count = data->channel.tre_count; 1813 1806 }
+2
drivers/net/ipa/gsi_reg.h
··· 410 410 #define INTER_EE_RESULT_FMASK GENMASK(2, 0) 411 411 #define GENERIC_EE_RESULT_FMASK GENMASK(7, 5) 412 412 #define GENERIC_EE_SUCCESS_FVAL 1 413 + #define GENERIC_EE_INCORRECT_DIRECTION_FVAL 3 414 + #define GENERIC_EE_INCORRECT_CHANNEL_FVAL 5 413 415 #define GENERIC_EE_NO_RESOURCES_FVAL 7 414 416 #define USB_MAX_PACKET_FMASK GENMASK(15, 15) /* 0: HS; 1: SS */ 415 417 #define MHI_BASE_CHANNEL_FMASK GENMASK(31, 24)
+2 -5
drivers/net/ipa/ipa_endpoint.c
··· 1283 1283 */ 1284 1284 int ipa_endpoint_stop(struct ipa_endpoint *endpoint) 1285 1285 { 1286 - u32 retries = endpoint->toward_ipa ? 0 : IPA_ENDPOINT_STOP_RX_RETRIES; 1286 + u32 retries = IPA_ENDPOINT_STOP_RX_RETRIES; 1287 1287 int ret; 1288 1288 1289 1289 do { ··· 1291 1291 struct gsi *gsi = &ipa->gsi; 1292 1292 1293 1293 ret = gsi_channel_stop(gsi, endpoint->channel_id); 1294 - if (ret != -EAGAIN) 1294 + if (ret != -EAGAIN || endpoint->toward_ipa) 1295 1295 break; 1296 - 1297 - if (endpoint->toward_ipa) 1298 - continue; 1299 1296 1300 1297 /* For IPA v3.5.1, send a DMA read task and check again */ 1301 1298 if (ipa->version == IPA_VERSION_3_5_1) {
+4 -2
drivers/net/macsec.c
··· 1305 1305 struct crypto_aead *tfm; 1306 1306 int ret; 1307 1307 1308 - tfm = crypto_alloc_aead("gcm(aes)", 0, 0); 1308 + /* Pick a sync gcm(aes) cipher to ensure order is preserved. */ 1309 + tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC); 1309 1310 1310 1311 if (IS_ERR(tfm)) 1311 1312 return tfm; ··· 2641 2640 if (ret) 2642 2641 goto rollback; 2643 2642 2644 - rtnl_unlock(); 2645 2643 /* Force features update, since they are different for SW MACSec and 2646 2644 * HW offloading cases. 2647 2645 */ 2648 2646 netdev_update_features(dev); 2647 + 2648 + rtnl_unlock(); 2649 2649 return 0; 2650 2650 2651 2651 rollback:
+1 -1
drivers/net/phy/dp83640.c
··· 1120 1120 goto out; 1121 1121 } 1122 1122 dp83640_clock_init(clock, bus); 1123 - list_add_tail(&phyter_clocks, &clock->list); 1123 + list_add_tail(&clock->list, &phyter_clocks); 1124 1124 out: 1125 1125 mutex_unlock(&phyter_clocks_lock); 1126 1126
+15 -17
drivers/net/phy/dp83822.c
··· 137 137 value &= ~DP83822_WOL_SECURE_ON; 138 138 } 139 139 140 - value |= (DP83822_WOL_EN | DP83822_WOL_INDICATION_SEL | 141 - DP83822_WOL_CLR_INDICATION); 142 - phy_write_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG, 143 - value); 144 - } else { 145 - value = phy_read_mmd(phydev, DP83822_DEVADDR, 146 - MII_DP83822_WOL_CFG); 147 - value &= ~DP83822_WOL_EN; 148 - phy_write_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG, 149 - value); 150 - } 140 + /* Clear any pending WoL interrupt */ 141 + phy_read(phydev, MII_DP83822_MISR2); 151 142 152 - return 0; 143 + value |= DP83822_WOL_EN | DP83822_WOL_INDICATION_SEL | 144 + DP83822_WOL_CLR_INDICATION; 145 + 146 + return phy_write_mmd(phydev, DP83822_DEVADDR, 147 + MII_DP83822_WOL_CFG, value); 148 + } else { 149 + return phy_clear_bits_mmd(phydev, DP83822_DEVADDR, 150 + MII_DP83822_WOL_CFG, DP83822_WOL_EN); 151 + } 153 152 } 154 153 155 154 static void dp83822_get_wol(struct phy_device *phydev, ··· 257 258 258 259 static int dp83822_config_init(struct phy_device *phydev) 259 260 { 260 - int value; 261 + int value = DP83822_WOL_EN | DP83822_WOL_MAGIC_EN | 262 + DP83822_WOL_SECURE_ON; 261 263 262 - value = DP83822_WOL_MAGIC_EN | DP83822_WOL_SECURE_ON | DP83822_WOL_EN; 263 - 264 - return phy_write_mmd(phydev, DP83822_DEVADDR, MII_DP83822_WOL_CFG, 265 - value); 264 + return phy_clear_bits_mmd(phydev, DP83822_DEVADDR, 265 + MII_DP83822_WOL_CFG, value); 266 266 } 267 267 268 268 static int dp83822_phy_reset(struct phy_device *phydev)
+12 -9
drivers/net/phy/dp83tc811.c
··· 139 139 value &= ~DP83811_WOL_SECURE_ON; 140 140 } 141 141 142 - value |= (DP83811_WOL_EN | DP83811_WOL_INDICATION_SEL | 143 - DP83811_WOL_CLR_INDICATION); 144 - phy_write_mmd(phydev, DP83811_DEVADDR, MII_DP83811_WOL_CFG, 145 - value); 142 + /* Clear any pending WoL interrupt */ 143 + phy_read(phydev, MII_DP83811_INT_STAT1); 144 + 145 + value |= DP83811_WOL_EN | DP83811_WOL_INDICATION_SEL | 146 + DP83811_WOL_CLR_INDICATION; 147 + 148 + return phy_write_mmd(phydev, DP83811_DEVADDR, 149 + MII_DP83811_WOL_CFG, value); 146 150 } else { 147 - phy_clear_bits_mmd(phydev, DP83811_DEVADDR, MII_DP83811_WOL_CFG, 148 - DP83811_WOL_EN); 151 + return phy_clear_bits_mmd(phydev, DP83811_DEVADDR, 152 + MII_DP83811_WOL_CFG, DP83811_WOL_EN); 149 153 } 150 154 151 - return 0; 152 155 } 153 156 154 157 static void dp83811_get_wol(struct phy_device *phydev, ··· 295 292 296 293 value = DP83811_WOL_MAGIC_EN | DP83811_WOL_SECURE_ON | DP83811_WOL_EN; 297 294 298 - return phy_write_mmd(phydev, DP83811_DEVADDR, MII_DP83811_WOL_CFG, 299 - value); 295 + return phy_clear_bits_mmd(phydev, DP83811_DEVADDR, MII_DP83811_WOL_CFG, 296 + value); 300 297 } 301 298 302 299 static int dp83811_phy_reset(struct phy_device *phydev)
+26 -1
drivers/net/phy/marvell10g.c
··· 66 66 MV_PCS_CSSR1_SPD2_2500 = 0x0004, 67 67 MV_PCS_CSSR1_SPD2_10000 = 0x0000, 68 68 69 + /* Temperature read register (88E2110 only) */ 70 + MV_PCS_TEMP = 0x8042, 71 + 69 72 /* These registers appear at 0x800X and 0xa00X - the 0xa00X control 70 73 * registers appear to set themselves to the 0x800X when AN is 71 74 * restarted, but status registers appear readable from either. ··· 80 77 MV_V2_PORT_CTRL = 0xf001, 81 78 MV_V2_PORT_CTRL_SWRST = BIT(15), 82 79 MV_V2_PORT_CTRL_PWRDOWN = BIT(11), 80 + /* Temperature control/read registers (88X3310 only) */ 83 81 MV_V2_TEMP_CTRL = 0xf08a, 84 82 MV_V2_TEMP_CTRL_MASK = 0xc000, 85 83 MV_V2_TEMP_CTRL_SAMPLE = 0x0000, ··· 108 104 return 0; 109 105 } 110 106 107 + static int mv3310_hwmon_read_temp_reg(struct phy_device *phydev) 108 + { 109 + return phy_read_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP); 110 + } 111 + 112 + static int mv2110_hwmon_read_temp_reg(struct phy_device *phydev) 113 + { 114 + return phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_TEMP); 115 + } 116 + 117 + static int mv10g_hwmon_read_temp_reg(struct phy_device *phydev) 118 + { 119 + if (phydev->drv->phy_id == MARVELL_PHY_ID_88X3310) 120 + return mv3310_hwmon_read_temp_reg(phydev); 121 + else /* MARVELL_PHY_ID_88E2110 */ 122 + return mv2110_hwmon_read_temp_reg(phydev); 123 + } 124 + 111 125 static int mv3310_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 112 126 u32 attr, int channel, long *value) 113 127 { ··· 138 116 } 139 117 140 118 if (type == hwmon_temp && attr == hwmon_temp_input) { 141 - temp = phy_read_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP); 119 + temp = mv10g_hwmon_read_temp_reg(phydev); 142 120 if (temp < 0) 143 121 return temp; 144 122 ··· 190 168 { 191 169 u16 val; 192 170 int ret; 171 + 172 + if (phydev->drv->phy_id != MARVELL_PHY_ID_88X3310) 173 + return 0; 193 174 194 175 ret = phy_write_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP, 195 176 MV_V2_TEMP_UNKNOWN);
+1
drivers/net/usb/qmi_wwan.c
··· 1359 1359 {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */ 1360 1360 {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */ 1361 1361 {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */ 1362 + {QMI_FIXED_INTF(0x413c, 0x81cc, 8)}, /* Dell Wireless 5816e */ 1362 1363 {QMI_FIXED_INTF(0x413c, 0x81d7, 0)}, /* Dell Wireless 5821e */ 1363 1364 {QMI_FIXED_INTF(0x413c, 0x81d7, 1)}, /* Dell Wireless 5821e preproduction config */ 1364 1365 {QMI_FIXED_INTF(0x413c, 0x81e0, 0)}, /* Dell Wireless 5821e with eSIM support*/
+3 -1
drivers/net/wireguard/queueing.c
··· 35 35 if (multicore) { 36 36 queue->worker = wg_packet_percpu_multicore_worker_alloc( 37 37 function, queue); 38 - if (!queue->worker) 38 + if (!queue->worker) { 39 + ptr_ring_cleanup(&queue->ring, NULL); 39 40 return -ENOMEM; 41 + } 40 42 } else { 41 43 INIT_WORK(&queue->work, function); 42 44 }
+10 -11
drivers/net/wireguard/receive.c
··· 226 226 static void keep_key_fresh(struct wg_peer *peer) 227 227 { 228 228 struct noise_keypair *keypair; 229 - bool send = false; 229 + bool send; 230 230 231 231 if (peer->sent_lastminute_handshake) 232 232 return; 233 233 234 234 rcu_read_lock_bh(); 235 235 keypair = rcu_dereference_bh(peer->keypairs.current_keypair); 236 - if (likely(keypair && READ_ONCE(keypair->sending.is_valid)) && 237 - keypair->i_am_the_initiator && 238 - unlikely(wg_birthdate_has_expired(keypair->sending.birthdate, 239 - REJECT_AFTER_TIME - KEEPALIVE_TIMEOUT - REKEY_TIMEOUT))) 240 - send = true; 236 + send = keypair && READ_ONCE(keypair->sending.is_valid) && 237 + keypair->i_am_the_initiator && 238 + wg_birthdate_has_expired(keypair->sending.birthdate, 239 + REJECT_AFTER_TIME - KEEPALIVE_TIMEOUT - REKEY_TIMEOUT); 241 240 rcu_read_unlock_bh(); 242 241 243 - if (send) { 242 + if (unlikely(send)) { 244 243 peer->sent_lastminute_handshake = true; 245 244 wg_packet_send_queued_handshake_initiation(peer, false); 246 245 } ··· 392 393 len = ntohs(ip_hdr(skb)->tot_len); 393 394 if (unlikely(len < sizeof(struct iphdr))) 394 395 goto dishonest_packet_size; 395 - if (INET_ECN_is_ce(PACKET_CB(skb)->ds)) 396 - IP_ECN_set_ce(ip_hdr(skb)); 396 + INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ip_hdr(skb)->tos); 397 397 } else if (skb->protocol == htons(ETH_P_IPV6)) { 398 398 len = ntohs(ipv6_hdr(skb)->payload_len) + 399 399 sizeof(struct ipv6hdr); 400 - if (INET_ECN_is_ce(PACKET_CB(skb)->ds)) 401 - IP6_ECN_set_ce(skb, ipv6_hdr(skb)); 400 + INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ipv6_get_dsfield(ipv6_hdr(skb))); 402 401 } else { 403 402 goto dishonest_packet_type; 404 403 } ··· 515 518 &PACKET_CB(skb)->keypair->receiving)) ? 516 519 PACKET_STATE_CRYPTED : PACKET_STATE_DEAD; 517 520 wg_queue_enqueue_per_peer_napi(skb, state); 521 + if (need_resched()) 522 + cond_resched(); 518 523 } 519 524 } 520 525
+2 -2
drivers/net/wireguard/selftest/ratelimiter.c
··· 120 120 enum { TRIALS_BEFORE_GIVING_UP = 5000 }; 121 121 bool success = false; 122 122 int test = 0, trials; 123 - struct sk_buff *skb4, *skb6; 123 + struct sk_buff *skb4, *skb6 = NULL; 124 124 struct iphdr *hdr4; 125 - struct ipv6hdr *hdr6; 125 + struct ipv6hdr *hdr6 = NULL; 126 126 127 127 if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN)) 128 128 return true;
+10 -10
drivers/net/wireguard/send.c
··· 124 124 static void keep_key_fresh(struct wg_peer *peer) 125 125 { 126 126 struct noise_keypair *keypair; 127 - bool send = false; 127 + bool send; 128 128 129 129 rcu_read_lock_bh(); 130 130 keypair = rcu_dereference_bh(peer->keypairs.current_keypair); 131 - if (likely(keypair && READ_ONCE(keypair->sending.is_valid)) && 132 - (unlikely(atomic64_read(&keypair->sending.counter.counter) > 133 - REKEY_AFTER_MESSAGES) || 134 - (keypair->i_am_the_initiator && 135 - unlikely(wg_birthdate_has_expired(keypair->sending.birthdate, 136 - REKEY_AFTER_TIME))))) 137 - send = true; 131 + send = keypair && READ_ONCE(keypair->sending.is_valid) && 132 + (atomic64_read(&keypair->sending.counter.counter) > REKEY_AFTER_MESSAGES || 133 + (keypair->i_am_the_initiator && 134 + wg_birthdate_has_expired(keypair->sending.birthdate, REKEY_AFTER_TIME))); 138 135 rcu_read_unlock_bh(); 139 136 140 - if (send) 137 + if (unlikely(send)) 141 138 wg_packet_send_queued_handshake_initiation(peer, false); 142 139 } 143 140 ··· 278 281 279 282 wg_noise_keypair_put(keypair, false); 280 283 wg_peer_put(peer); 284 + if (need_resched()) 285 + cond_resched(); 281 286 } 282 287 } 283 288 ··· 303 304 } 304 305 wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first, 305 306 state); 306 - 307 + if (need_resched()) 308 + cond_resched(); 307 309 } 308 310 } 309 311
-12
drivers/net/wireguard/socket.c
··· 76 76 net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n", 77 77 wg->dev->name, &endpoint->addr, ret); 78 78 goto err; 79 - } else if (unlikely(rt->dst.dev == skb->dev)) { 80 - ip_rt_put(rt); 81 - ret = -ELOOP; 82 - net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n", 83 - wg->dev->name, &endpoint->addr); 84 - goto err; 85 79 } 86 80 if (cache) 87 81 dst_cache_set_ip4(cache, &rt->dst, fl.saddr); ··· 142 148 ret = PTR_ERR(dst); 143 149 net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n", 144 150 wg->dev->name, &endpoint->addr, ret); 145 - goto err; 146 - } else if (unlikely(dst->dev == skb->dev)) { 147 - dst_release(dst); 148 - ret = -ELOOP; 149 - net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n", 150 - wg->dev->name, &endpoint->addr); 151 151 goto err; 152 152 } 153 153 if (cache)
+3 -1
drivers/nvme/host/core.c
··· 1110 1110 * Don't treat an error as fatal, as we potentially already 1111 1111 * have a NGUID or EUI-64. 1112 1112 */ 1113 - if (status > 0) 1113 + if (status > 0 && !(status & NVME_SC_DNR)) 1114 1114 status = 0; 1115 1115 goto free_data; 1116 1116 } ··· 3642 3642 3643 3643 return; 3644 3644 out_put_disk: 3645 + /* prevent double queue cleanup */ 3646 + ns->disk->queue = NULL; 3645 3647 put_disk(ns->disk); 3646 3648 out_unlink_ns: 3647 3649 mutex_lock(&ctrl->subsys->lock);
+5 -1
drivers/nvme/host/pci.c
··· 973 973 974 974 static inline void nvme_update_cq_head(struct nvme_queue *nvmeq) 975 975 { 976 - if (++nvmeq->cq_head == nvmeq->q_depth) { 976 + u16 tmp = nvmeq->cq_head + 1; 977 + 978 + if (tmp == nvmeq->q_depth) { 977 979 nvmeq->cq_head = 0; 978 980 nvmeq->cq_phase ^= 1; 981 + } else { 982 + nvmeq->cq_head = tmp; 979 983 } 980 984 } 981 985
+7
drivers/phy/qualcomm/phy-qcom-qusb2.c
··· 816 816 .compatible = "qcom,msm8998-qusb2-phy", 817 817 .data = &msm8998_phy_cfg, 818 818 }, { 819 + /* 820 + * Deprecated. Only here to support legacy device 821 + * trees that didn't include "qcom,qusb2-v2-phy" 822 + */ 823 + .compatible = "qcom,sdm845-qusb2-phy", 824 + .data = &qusb2_v2_phy_cfg, 825 + }, { 819 826 .compatible = "qcom,qusb2-v2-phy", 820 827 .data = &qusb2_v2_phy_cfg, 821 828 },
+21 -11
drivers/phy/qualcomm/phy-qcom-usb-hs-28nm.c
··· 160 160 ret = regulator_bulk_enable(VREG_NUM, priv->vregs); 161 161 if (ret) 162 162 return ret; 163 - ret = clk_bulk_prepare_enable(priv->num_clks, priv->clks); 164 - if (ret) 165 - goto err_disable_regulator; 163 + 166 164 qcom_snps_hsphy_disable_hv_interrupts(priv); 167 165 qcom_snps_hsphy_exit_retention(priv); 168 166 169 167 return 0; 170 - 171 - err_disable_regulator: 172 - regulator_bulk_disable(VREG_NUM, priv->vregs); 173 - 174 - return ret; 175 168 } 176 169 177 170 static int qcom_snps_hsphy_power_off(struct phy *phy) ··· 173 180 174 181 qcom_snps_hsphy_enter_retention(priv); 175 182 qcom_snps_hsphy_enable_hv_interrupts(priv); 176 - clk_bulk_disable_unprepare(priv->num_clks, priv->clks); 177 183 regulator_bulk_disable(VREG_NUM, priv->vregs); 178 184 179 185 return 0; ··· 258 266 struct hsphy_priv *priv = phy_get_drvdata(phy); 259 267 int ret; 260 268 261 - ret = qcom_snps_hsphy_reset(priv); 269 + ret = clk_bulk_prepare_enable(priv->num_clks, priv->clks); 262 270 if (ret) 263 271 return ret; 272 + 273 + ret = qcom_snps_hsphy_reset(priv); 274 + if (ret) 275 + goto disable_clocks; 264 276 265 277 qcom_snps_hsphy_init_sequence(priv); 266 278 267 279 ret = qcom_snps_hsphy_por_reset(priv); 268 280 if (ret) 269 - return ret; 281 + goto disable_clocks; 282 + 283 + return 0; 284 + 285 + disable_clocks: 286 + clk_bulk_disable_unprepare(priv->num_clks, priv->clks); 287 + return ret; 288 + } 289 + 290 + static int qcom_snps_hsphy_exit(struct phy *phy) 291 + { 292 + struct hsphy_priv *priv = phy_get_drvdata(phy); 293 + 294 + clk_bulk_disable_unprepare(priv->num_clks, priv->clks); 270 295 271 296 return 0; 272 297 } 273 298 274 299 static const struct phy_ops qcom_snps_hsphy_ops = { 275 300 .init = qcom_snps_hsphy_init, 301 + .exit = qcom_snps_hsphy_exit, 276 302 .power_on = qcom_snps_hsphy_power_on, 277 303 .power_off = qcom_snps_hsphy_power_off, 278 304 .set_mode = qcom_snps_hsphy_set_mode,
+46 -34
drivers/platform/chrome/cros_ec_sensorhub.c
··· 52 52 int sensor_type[MOTIONSENSE_TYPE_MAX] = { 0 }; 53 53 struct cros_ec_command *msg = sensorhub->msg; 54 54 struct cros_ec_dev *ec = sensorhub->ec; 55 - int ret, i, sensor_num; 55 + int ret, i; 56 56 char *name; 57 57 58 - sensor_num = cros_ec_get_sensor_count(ec); 59 - if (sensor_num < 0) { 60 - dev_err(dev, 61 - "Unable to retrieve sensor information (err:%d)\n", 62 - sensor_num); 63 - return sensor_num; 64 - } 65 - 66 - sensorhub->sensor_num = sensor_num; 67 - if (sensor_num == 0) { 68 - dev_err(dev, "Zero sensors reported.\n"); 69 - return -EINVAL; 70 - } 71 58 72 59 msg->version = 1; 73 60 msg->insize = sizeof(struct ec_response_motion_sense); 74 61 msg->outsize = sizeof(struct ec_params_motion_sense); 75 62 76 - for (i = 0; i < sensor_num; i++) { 63 + for (i = 0; i < sensorhub->sensor_num; i++) { 77 64 sensorhub->params->cmd = MOTIONSENSE_CMD_INFO; 78 65 sensorhub->params->info.sensor_num = i; 79 66 ··· 127 140 struct cros_ec_dev *ec = dev_get_drvdata(dev->parent); 128 141 struct cros_ec_sensorhub *data; 129 142 struct cros_ec_command *msg; 130 - int ret; 131 - int i; 143 + int ret, i, sensor_num; 132 144 133 145 msg = devm_kzalloc(dev, sizeof(struct cros_ec_command) + 134 146 max((u16)sizeof(struct ec_params_motion_sense), ··· 152 166 dev_set_drvdata(dev, data); 153 167 154 168 /* Check whether this EC is a sensor hub. */ 155 - if (cros_ec_check_features(data->ec, EC_FEATURE_MOTION_SENSE)) { 169 + if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE)) { 170 + sensor_num = cros_ec_get_sensor_count(ec); 171 + if (sensor_num < 0) { 172 + dev_err(dev, 173 + "Unable to retrieve sensor information (err:%d)\n", 174 + sensor_num); 175 + return sensor_num; 176 + } 177 + if (sensor_num == 0) { 178 + dev_err(dev, "Zero sensors reported.\n"); 179 + return -EINVAL; 180 + } 181 + data->sensor_num = sensor_num; 182 + 183 + /* 184 + * Prepare the ring handler before enumering the 185 + * sensors. 
186 + */ 187 + if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) { 188 + ret = cros_ec_sensorhub_ring_allocate(data); 189 + if (ret) 190 + return ret; 191 + } 192 + 193 + /* Enumerate the sensors.*/ 156 194 ret = cros_ec_sensorhub_register(dev, data); 157 195 if (ret) 158 196 return ret; 197 + 198 + /* 199 + * When the EC does not have a FIFO, the sensors will query 200 + * their data themselves via sysfs or a software trigger. 201 + */ 202 + if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) { 203 + ret = cros_ec_sensorhub_ring_add(data); 204 + if (ret) 205 + return ret; 206 + /* 207 + * The msg and its data is not under the control of the 208 + * ring handler. 209 + */ 210 + return devm_add_action_or_reset(dev, 211 + cros_ec_sensorhub_ring_remove, 212 + data); 213 + } 214 + 159 215 } else { 160 216 /* 161 217 * If the device has sensors but does not claim to ··· 212 184 } 213 185 } 214 186 215 - /* 216 - * If the EC does not have a FIFO, the sensors will query their data 217 - * themselves via sysfs or a software trigger. 218 - */ 219 - if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) { 220 - ret = cros_ec_sensorhub_ring_add(data); 221 - if (ret) 222 - return ret; 223 - /* 224 - * The msg and its data is not under the control of the ring 225 - * handler. 226 - */ 227 - return devm_add_action_or_reset(dev, 228 - cros_ec_sensorhub_ring_remove, 229 - data); 230 - } 231 187 232 188 return 0; 233 189 }
+47 -28
drivers/platform/chrome/cros_ec_sensorhub_ring.c
··· 957 957 } 958 958 959 959 /** 960 + * cros_ec_sensorhub_ring_allocate() - Prepare the FIFO functionality if the EC 961 + * supports it. 962 + * 963 + * @sensorhub : Sensor Hub object. 964 + * 965 + * Return: 0 on success. 966 + */ 967 + int cros_ec_sensorhub_ring_allocate(struct cros_ec_sensorhub *sensorhub) 968 + { 969 + int fifo_info_length = 970 + sizeof(struct ec_response_motion_sense_fifo_info) + 971 + sizeof(u16) * sensorhub->sensor_num; 972 + 973 + /* Allocate the array for lost events. */ 974 + sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length, 975 + GFP_KERNEL); 976 + if (!sensorhub->fifo_info) 977 + return -ENOMEM; 978 + 979 + /* 980 + * Allocate the callback area based on the number of sensors. 981 + * Add one for the sensor ring. 982 + */ 983 + sensorhub->push_data = devm_kcalloc(sensorhub->dev, 984 + sensorhub->sensor_num, 985 + sizeof(*sensorhub->push_data), 986 + GFP_KERNEL); 987 + if (!sensorhub->push_data) 988 + return -ENOMEM; 989 + 990 + sensorhub->tight_timestamps = cros_ec_check_features( 991 + sensorhub->ec, 992 + EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS); 993 + 994 + if (sensorhub->tight_timestamps) { 995 + sensorhub->batch_state = devm_kcalloc(sensorhub->dev, 996 + sensorhub->sensor_num, 997 + sizeof(*sensorhub->batch_state), 998 + GFP_KERNEL); 999 + if (!sensorhub->batch_state) 1000 + return -ENOMEM; 1001 + } 1002 + 1003 + return 0; 1004 + } 1005 + 1006 + /** 960 1007 * cros_ec_sensorhub_ring_add() - Add the FIFO functionality if the EC 961 1008 * supports it. 962 1009 * ··· 1018 971 int fifo_info_length = 1019 972 sizeof(struct ec_response_motion_sense_fifo_info) + 1020 973 sizeof(u16) * sensorhub->sensor_num; 1021 - 1022 - /* Allocate the array for lost events. */ 1023 - sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length, 1024 - GFP_KERNEL); 1025 - if (!sensorhub->fifo_info) 1026 - return -ENOMEM; 1027 974 1028 975 /* Retrieve FIFO information */ 1029 976 sensorhub->msg->version = 2; ··· 1039 998 if (!sensorhub->ring) 1040 999 return -ENOMEM; 1041 1000 1042 - /* 1043 - * Allocate the callback area based on the number of sensors. 1044 - */ 1045 - sensorhub->push_data = devm_kcalloc( 1046 - sensorhub->dev, sensorhub->sensor_num, 1047 - sizeof(*sensorhub->push_data), 1048 - GFP_KERNEL); 1049 - if (!sensorhub->push_data) 1050 - return -ENOMEM; 1051 - 1052 1001 sensorhub->fifo_timestamp[CROS_EC_SENSOR_LAST_TS] = 1053 1002 cros_ec_get_time_ns(); 1054 - 1055 - sensorhub->tight_timestamps = cros_ec_check_features( 1056 - ec, EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS); 1057 - 1058 - if (sensorhub->tight_timestamps) { 1059 - sensorhub->batch_state = devm_kcalloc(sensorhub->dev, 1060 - sensorhub->sensor_num, 1061 - sizeof(*sensorhub->batch_state), 1062 - GFP_KERNEL); 1063 - if (!sensorhub->batch_state) 1064 - return -ENOMEM; 1065 - } 1066 1003 1067 1004 /* Register the notifier that will act as a top half interrupt. */ 1068 1005 sensorhub->notifier.notifier_call = cros_ec_sensorhub_event;
+24
drivers/platform/x86/asus-nb-wmi.c
··· 515 515 .detect_quirks = asus_nb_wmi_quirks, 516 516 }; 517 517 518 + static const struct dmi_system_id asus_nb_wmi_blacklist[] __initconst = { 519 + { 520 + /* 521 + * asus-nb-wm adds no functionality. The T100TA has a detachable 522 + * USB kbd, so no hotkeys and it has no WMI rfkill; and loading 523 + * asus-nb-wm causes the camera LED to turn and _stay_ on. 524 + */ 525 + .matches = { 526 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 527 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"), 528 + }, 529 + }, 530 + { 531 + /* The Asus T200TA has the same issue as the T100TA */ 532 + .matches = { 533 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 534 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T200TA"), 535 + }, 536 + }, 537 + {} /* Terminating entry */ 538 + }; 518 539 519 540 static int __init asus_nb_wmi_init(void) 520 541 { 542 + if (dmi_check_system(asus_nb_wmi_blacklist)) 543 + return -ENODEV; 544 + 521 545 return asus_wmi_register_driver(&asus_nb_wmi_driver); 522 546 } 523 547
+1 -1
drivers/platform/x86/intel-uncore-frequency.c
··· 53 53 /* Storage for uncore data for all instances */ 54 54 static struct uncore_data *uncore_instances; 55 55 /* Root of the all uncore sysfs kobjs */ 56 - struct kobject *uncore_root_kobj; 56 + static struct kobject *uncore_root_kobj; 57 57 /* Stores the CPU mask of the target CPUs to use during uncore read/write */ 58 58 static cpumask_t uncore_cpu_mask; 59 59 /* CPU online callback register instance */
+5 -19
drivers/platform/x86/intel_pmc_core.c
··· 255 255 }; 256 256 257 257 static const struct pmc_bit_map icl_pfear_map[] = { 258 - /* Ice Lake generation onwards only */ 258 + /* Ice Lake and Jasper Lake generation onwards only */ 259 259 {"RES_65", BIT(0)}, 260 260 {"RES_66", BIT(1)}, 261 261 {"RES_67", BIT(2)}, ··· 274 274 }; 275 275 276 276 static const struct pmc_bit_map tgl_pfear_map[] = { 277 - /* Tiger Lake, Elkhart Lake and Jasper Lake generation onwards only */ 277 + /* Tiger Lake and Elkhart Lake generation onwards only */ 278 278 {"PSF9", BIT(0)}, 279 279 {"RES_66", BIT(1)}, 280 280 {"RES_67", BIT(2)}, ··· 692 692 kfree(lpm_regs); 693 693 } 694 694 695 - #if IS_ENABLED(CONFIG_DEBUG_FS) 696 695 static bool slps0_dbg_latch; 697 696 698 697 static inline u8 pmc_core_reg_read_byte(struct pmc_dev *pmcdev, int offset) ··· 1132 1133 &pmc_core_substate_l_sts_regs_fops); 1133 1134 } 1134 1135 } 1135 - #else 1136 - static inline void pmc_core_dbgfs_register(struct pmc_dev *pmcdev) 1137 - { 1138 - } 1139 - 1140 - static inline void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev) 1141 - { 1142 - } 1143 - #endif /* CONFIG_DEBUG_FS */ 1144 1136 1145 1137 static const struct x86_cpu_id intel_pmc_core_ids[] = { 1146 1138 X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &spt_reg_map), ··· 1146 1156 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &tgl_reg_map), 1147 1157 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &tgl_reg_map), 1148 1158 X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, &tgl_reg_map), 1149 - X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &tgl_reg_map), 1159 + X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &icl_reg_map), 1150 1160 {} 1151 1161 }; 1152 1162 ··· 1250 1260 return 0; 1251 1261 } 1252 1262 1253 - #ifdef CONFIG_PM_SLEEP 1254 - 1255 1263 static bool warn_on_s0ix_failures; 1256 1264 module_param(warn_on_s0ix_failures, bool, 0644); 1257 1265 MODULE_PARM_DESC(warn_on_s0ix_failures, "Check and warn for S0ix failures"); 1258 1266 1259 - static int pmc_core_suspend(struct device *dev) 1267 + static __maybe_unused int pmc_core_suspend(struct device *dev) 1260 1268 { 1261 1269 struct pmc_dev *pmcdev = dev_get_drvdata(dev); 1262 1270 ··· 1306 1318 return false; 1307 1319 } 1308 1320 1309 - static int pmc_core_resume(struct device *dev) 1321 + static __maybe_unused int pmc_core_resume(struct device *dev) 1310 1322 { 1311 1323 struct pmc_dev *pmcdev = dev_get_drvdata(dev); 1312 1324 const struct pmc_bit_map **maps = pmcdev->map->lpm_sts; ··· 1335 1347 1336 1348 return 0; 1337 1349 } 1338 - 1339 - #endif 1340 1350 1341 1351 static const struct dev_pm_ops pmc_core_pm_ops = { 1342 1352 SET_LATE_SYSTEM_SLEEP_PM_OPS(pmc_core_suspend, pmc_core_resume)
-2
drivers/platform/x86/intel_pmc_core.h
··· 282 282 u32 base_addr; 283 283 void __iomem *regbase; 284 284 const struct pmc_reg_map *map; 285 - #if IS_ENABLED(CONFIG_DEBUG_FS) 286 285 struct dentry *dbgfs_dir; 287 - #endif /* CONFIG_DEBUG_FS */ 288 286 int pmc_xram_read_bit; 289 287 struct mutex lock; /* generic mutex lock for PMC Core */ 290 288
+2 -2
drivers/platform/x86/surface3_power.c
··· 522 522 strlcpy(board_info.type, "MSHW0011-bat0", I2C_NAME_SIZE); 523 523 524 524 bat0 = i2c_acpi_new_device(dev, 1, &board_info); 525 - if (!bat0) 526 - return -ENOMEM; 525 + if (IS_ERR(bat0)) 526 + return PTR_ERR(bat0); 527 527 528 528 data->bat0 = bat0; 529 529 i2c_set_clientdata(bat0, data);
+1 -1
drivers/platform/x86/thinkpad_acpi.c
··· 9548 9548 if (!battery_info.batteries[battery].start_support) 9549 9549 return -ENODEV; 9550 9550 /* valid values are [0, 99] */ 9551 - if (value < 0 || value > 99) 9551 + if (value > 99) 9552 9552 return -EINVAL; 9553 9553 if (value > battery_info.batteries[battery].charge_stop) 9554 9554 return -EINVAL;
+2 -2
drivers/platform/x86/xiaomi-wmi.c
··· 23 23 unsigned int key_code; 24 24 }; 25 25 26 - int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context) 26 + static int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context) 27 27 { 28 28 struct xiaomi_wmi *data; 29 29 ··· 48 48 return input_register_device(data->input_dev); 49 49 } 50 50 51 - void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy) 51 + static void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy) 52 52 { 53 53 struct xiaomi_wmi *data; 54 54
+11 -14
drivers/regulator/core.c
··· 5754 5754 5755 5755 static int __init regulator_init_complete(void) 5756 5756 { 5757 - int delay = driver_deferred_probe_timeout; 5758 - 5759 - if (delay < 0) 5760 - delay = 0; 5761 5757 /* 5762 5758 * Since DT doesn't provide an idiomatic mechanism for 5763 5759 * enabling full constraints and since it's much more natural ··· 5764 5768 has_full_constraints = true; 5765 5769 5766 5770 /* 5767 - * If driver_deferred_probe_timeout is set, we punt 5768 - * completion for that many seconds since systems like 5769 - * distros will load many drivers from userspace so consumers 5770 - * might not always be ready yet, this is particularly an 5771 - * issue with laptops where this might bounce the display off 5772 - * then on. Ideally we'd get a notification from userspace 5773 - * when this happens but we don't so just wait a bit and hope 5774 - * we waited long enough. It'd be better if we'd only do 5775 - * this on systems that need it. 5771 + * We punt completion for an arbitrary amount of time since 5772 + * systems like distros will load many drivers from userspace 5773 + * so consumers might not always be ready yet, this is 5774 + * particularly an issue with laptops where this might bounce 5775 + * the display off then on. Ideally we'd get a notification 5776 + * from userspace when this happens but we don't so just wait 5777 + * a bit and hope we waited long enough. It'd be better if 5778 + * we'd only do this on systems that need it, and a kernel 5779 + * command line option might be useful. 5776 5780 */ 5777 - schedule_delayed_work(&regulator_init_complete_work, delay * HZ); 5781 + schedule_delayed_work(&regulator_init_complete_work, 5782 + msecs_to_jiffies(30000)); 5778 5783 5779 5784 return 0; 5780 5785 }
+5 -5
drivers/s390/net/qeth_core_main.c
··· 6717 6717 unsigned int i; 6718 6718 6719 6719 /* Quiesce the NAPI instances: */ 6720 - qeth_for_each_output_queue(card, queue, i) { 6720 + qeth_for_each_output_queue(card, queue, i) 6721 6721 napi_disable(&queue->napi); 6722 - del_timer_sync(&queue->timer); 6723 - } 6724 6722 6725 6723 /* Stop .ndo_start_xmit, might still access queue->napi. */ 6726 6724 netif_tx_disable(dev); 6727 6725 6728 - /* Queues may get re-allocated, so remove the NAPIs here. */ 6729 - qeth_for_each_output_queue(card, queue, i) 6726 + qeth_for_each_output_queue(card, queue, i) { 6727 + del_timer_sync(&queue->timer); 6728 + /* Queues may get re-allocated, so remove the NAPIs. */ 6730 6729 netif_napi_del(&queue->napi); 6730 + } 6731 6731 } else { 6732 6732 netif_tx_disable(dev); 6733 6733 }
+5
drivers/scsi/ibmvscsi/ibmvfc.c
··· 3640 3640 struct ibmvfc_host *vhost = tgt->vhost; 3641 3641 struct ibmvfc_event *evt; 3642 3642 3643 + if (!vhost->logged_in) { 3644 + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT); 3645 + return; 3646 + } 3647 + 3643 3648 if (vhost->discovery_threads >= disc_threads) 3644 3649 return; 3645 3650
-4
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 2320 2320 static int ibmvscsi_remove(struct vio_dev *vdev) 2321 2321 { 2322 2322 struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev); 2323 - unsigned long flags; 2324 2323 2325 2324 srp_remove_host(hostdata->host); 2326 2325 scsi_remove_host(hostdata->host); 2327 2326 2328 2327 purge_requests(hostdata, DID_ERROR); 2329 - 2330 - spin_lock_irqsave(hostdata->host->host_lock, flags); 2331 2328 release_event_pool(&hostdata->pool, hostdata); 2332 - spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2333 2329 2334 2330 ibmvscsi_release_crq_queue(&hostdata->queue, hostdata, 2335 2331 max_events);
+1 -1
drivers/scsi/qla2xxx/qla_attr.c
··· 3031 3031 test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags)) 3032 3032 msleep(1000); 3033 3033 3034 - qla_nvme_delete(vha); 3035 3034 3036 3035 qla24xx_disable_vp(vha); 3037 3036 qla2x00_wait_for_sess_deletion(vha); 3038 3037 3038 + qla_nvme_delete(vha); 3039 3039 vha->flags.delete_progress = 1; 3040 3040 3041 3041 qlt_remove_target(ha, vha);
+1 -1
drivers/scsi/qla2xxx/qla_mbx.c
··· 3153 3153 ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x108c, 3154 3154 "Entered %s.\n", __func__); 3155 3155 3156 - if (vha->flags.qpairs_available && sp->qpair) 3156 + if (sp->qpair) 3157 3157 req = sp->qpair->req; 3158 3158 else 3159 3159 return QLA_FUNCTION_FAILED;
+17 -18
drivers/scsi/qla2xxx/qla_os.c
··· 3732 3732 } 3733 3733 qla2x00_wait_for_hba_ready(base_vha); 3734 3734 3735 + /* 3736 + * if UNLOADING flag is already set, then continue unload, 3737 + * where it was set first. 3738 + */ 3739 + if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags)) 3740 + return; 3741 + 3735 3742 if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) || 3736 3743 IS_QLA28XX(ha)) { 3737 3744 if (ha->flags.fw_started) ··· 3756 3749 } 3757 3750 3758 3751 qla2x00_wait_for_sess_deletion(base_vha); 3759 - 3760 - /* 3761 - * if UNLOAD flag is already set, then continue unload, 3762 - * where it was set first. 3763 - */ 3764 - if (test_bit(UNLOADING, &base_vha->dpc_flags)) 3765 - return; 3766 - 3767 - set_bit(UNLOADING, &base_vha->dpc_flags); 3768 3752 3769 3753 qla_nvme_delete(base_vha); 3770 3754 ··· 4861 4863 { 4862 4864 struct qla_work_evt *e; 4863 4865 uint8_t bail; 4866 + 4867 + if (test_bit(UNLOADING, &vha->dpc_flags)) 4868 + return NULL; 4864 4869 4865 4870 QLA_VHA_MARK_BUSY(vha, bail); 4866 4871 if (bail) ··· 6629 6628 struct pci_dev *pdev = ha->pdev; 6630 6629 scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev); 6631 6630 6632 - /* 6633 - * if UNLOAD flag is already set, then continue unload, 6634 - * where it was set first. 6635 - */ 6636 - if (test_bit(UNLOADING, &base_vha->dpc_flags)) 6637 - return; 6638 - 6639 6631 ql_log(ql_log_warn, base_vha, 0x015b, 6640 6632 "Disabling adapter.\n"); 6641 6633 ··· 6639 6645 return; 6640 6646 } 6641 6647 6642 - qla2x00_wait_for_sess_deletion(base_vha); 6648 + /* 6649 + * if UNLOADING flag is already set, then continue unload, 6650 + * where it was set first. 6651 + */ 6652 + if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags)) 6653 + return; 6643 6654 6644 - set_bit(UNLOADING, &base_vha->dpc_flags); 6655 + qla2x00_wait_for_sess_deletion(base_vha); 6645 6656 6646 6657 qla2x00_delete_all_vps(ha, base_vha); 6647 6658
+1
drivers/scsi/scsi_lib.c
··· 2284 2284 switch (oldstate) { 2285 2285 case SDEV_RUNNING: 2286 2286 case SDEV_CREATED_BLOCK: 2287 + case SDEV_QUIESCE: 2287 2288 case SDEV_OFFLINE: 2288 2289 break; 2289 2290 default:
+4
drivers/staging/gasket/gasket_core.c
··· 925 925 gasket_get_bar_index(gasket_dev, 926 926 (vma->vm_pgoff << PAGE_SHIFT) + 927 927 driver_desc->legacy_mmap_address_offset); 928 + 929 + if (bar_index < 0) 930 + return DO_MAP_REGION_INVALID; 931 + 928 932 phys_base = gasket_dev->bar_data[bar_index].phys_base + phys_offset; 929 933 while (mapped_bytes < map_length) { 930 934 /*
-1
drivers/staging/ks7010/TODO
··· 30 30 31 31 Please send any patches to: 32 32 Greg Kroah-Hartman <gregkh@linuxfoundation.org> 33 - Wolfram Sang <wsa@the-dreams.de> 34 33 Linux Driver Project Developer List <driverdev-devel@linuxdriverproject.org>
+1 -1
drivers/target/target_core_iblock.c
··· 432 432 target_to_linux_sector(dev, cmd->t_task_lba), 433 433 target_to_linux_sector(dev, 434 434 sbc_get_write_same_sectors(cmd)), 435 - GFP_KERNEL, false); 435 + GFP_KERNEL, BLKDEV_ZERO_NOUNMAP); 436 436 if (ret) 437 437 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 438 438
+3
drivers/thunderbolt/usb4.c
··· 182 182 return ret; 183 183 184 184 ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1); 185 + if (ret) 186 + return ret; 187 + 185 188 if (val & ROUTER_CS_26_ONS) 186 189 return -EOPNOTSUPP; 187 190
+1 -1
drivers/tty/hvc/Kconfig
··· 88 88 89 89 config HVC_RISCV_SBI 90 90 bool "RISC-V SBI console support" 91 - depends on RISCV_SBI 91 + depends on RISCV_SBI_V01 92 92 select HVC_DRIVER 93 93 help 94 94 This enables support for console output via RISC-V SBI calls, which
+1 -1
drivers/tty/serial/Kconfig
··· 86 86 87 87 config SERIAL_EARLYCON_RISCV_SBI 88 88 bool "Early console using RISC-V SBI" 89 - depends on RISCV_SBI 89 + depends on RISCV_SBI_V01 90 90 select SERIAL_CORE 91 91 select SERIAL_CORE_CONSOLE 92 92 select SERIAL_EARLYCON
+1 -3
drivers/tty/serial/bcm63xx_uart.c
··· 843 843 if (IS_ERR(clk) && pdev->dev.of_node) 844 844 clk = of_clk_get(pdev->dev.of_node, 0); 845 845 846 - if (IS_ERR(clk)) { 847 - clk_put(clk); 846 + if (IS_ERR(clk)) 848 847 return -ENODEV; 849 - } 850 848 851 849 port->iotype = UPIO_MEM; 852 850 port->irq = res_irq->start;
+1
drivers/tty/serial/xilinx_uartps.c
··· 1465 1465 cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS; 1466 1466 #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE 1467 1467 cdns_uart_uart_driver.cons = &cdns_uart_console; 1468 + cdns_uart_console.index = id; 1468 1469 #endif 1469 1470 1470 1471 rc = uart_register_driver(&cdns_uart_uart_driver);
+7 -2
drivers/tty/vt/vt.c
··· 365 365 return uniscr; 366 366 } 367 367 368 + static void vc_uniscr_free(struct uni_screen *uniscr) 369 + { 370 + vfree(uniscr); 371 + } 372 + 368 373 static void vc_uniscr_set(struct vc_data *vc, struct uni_screen *new_uniscr) 369 374 { 370 - vfree(vc->vc_uni_screen); 375 + vc_uniscr_free(vc->vc_uni_screen); 371 376 vc->vc_uni_screen = new_uniscr; 372 377 } 373 378 ··· 1235 1230 err = resize_screen(vc, new_cols, new_rows, user); 1236 1231 if (err) { 1237 1232 kfree(newscreen); 1238 - kfree(new_uniscr); 1233 + vc_uniscr_free(new_uniscr); 1239 1234 return err; 1240 1235 } 1241 1236
+1 -1
drivers/usb/chipidea/ci_hdrc_msm.c
··· 114 114 hw_write_id_reg(ci, HS_PHY_GENCONFIG_2, 115 115 HS_PHY_ULPI_TX_PKT_EN_CLR_FIX, 0); 116 116 117 - if (!IS_ERR(ci->platdata->vbus_extcon.edev)) { 117 + if (!IS_ERR(ci->platdata->vbus_extcon.edev) || ci->role_switch) { 118 118 hw_write_id_reg(ci, HS_PHY_GENCONFIG_2, 119 119 HS_PHY_SESS_VLD_CTRL_EN, 120 120 HS_PHY_SESS_VLD_CTRL_EN);
+2 -3
drivers/usb/core/devio.c
··· 217 217 { 218 218 struct usb_memory *usbm = NULL; 219 219 struct usb_dev_state *ps = file->private_data; 220 + struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus); 220 221 size_t size = vma->vm_end - vma->vm_start; 221 222 void *mem; 222 223 unsigned long flags; ··· 251 250 usbm->vma_use_count = 1; 252 251 INIT_LIST_HEAD(&usbm->memlist); 253 252 254 - if (remap_pfn_range(vma, vma->vm_start, 255 - virt_to_phys(usbm->mem) >> PAGE_SHIFT, 256 - size, vma->vm_page_prot) < 0) { 253 + if (dma_mmap_coherent(hcd->self.sysdev, vma, mem, dma_handle, size)) { 257 254 dec_usb_memory_use_count(usbm, &usbm->vma_use_count); 258 255 return -EAGAIN; 259 256 }
+2 -2
drivers/usb/core/message.c
··· 1144 1144 1145 1145 if (usb_endpoint_out(epaddr)) { 1146 1146 ep = dev->ep_out[epnum]; 1147 - if (reset_hardware) 1147 + if (reset_hardware && epnum != 0) 1148 1148 dev->ep_out[epnum] = NULL; 1149 1149 } else { 1150 1150 ep = dev->ep_in[epnum]; 1151 - if (reset_hardware) 1151 + if (reset_hardware && epnum != 0) 1152 1152 dev->ep_in[epnum] = NULL; 1153 1153 } 1154 1154 if (ep) {
+2 -2
drivers/usb/serial/garmin_gps.c
··· 1138 1138 send it directly to the tty port */ 1139 1139 if (garmin_data_p->flags & FLAGS_QUEUING) { 1140 1140 pkt_add(garmin_data_p, data, data_length); 1141 - } else if (bulk_data || 1142 - getLayerId(data) == GARMIN_LAYERID_APPL) { 1141 + } else if (bulk_data || (data_length >= sizeof(u32) && 1142 + getLayerId(data) == GARMIN_LAYERID_APPL)) { 1143 1143 1144 1144 spin_lock_irqsave(&garmin_data_p->lock, flags); 1145 1145 garmin_data_p->flags |= APP_RESP_SEEN;
+1
drivers/usb/serial/qcserial.c
··· 173 173 {DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */ 174 174 {DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */ 175 175 {DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */ 176 + {DEVICE_SWI(0x413c, 0x81cc)}, /* Dell Wireless 5816e */ 176 177 {DEVICE_SWI(0x413c, 0x81cf)}, /* Dell Wireless 5819 */ 177 178 {DEVICE_SWI(0x413c, 0x81d0)}, /* Dell Wireless 5819 */ 178 179 {DEVICE_SWI(0x413c, 0x81d1)}, /* Dell Wireless 5818 */
+7
drivers/usb/storage/unusual_uas.h
··· 28 28 * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org> 29 29 */ 30 30 31 + /* Reported-by: Julian Groß <julian.g@posteo.de> */ 32 + UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999, 33 + "LaCie", 34 + "2Big Quadra USB3", 35 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 36 + US_FL_NO_REPORT_OPCODES), 37 + 31 38 /* 32 39 * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI 33 40 * commands in UAS mode. Observed with the 1.28 firmware; are there others?
+6 -2
drivers/usb/typec/mux/intel_pmc_mux.c
··· 157 157 req.mode_data |= (state->mode - TYPEC_STATE_MODAL) << 158 158 PMC_USB_ALTMODE_DP_MODE_SHIFT; 159 159 160 + if (data->status & DP_STATUS_HPD_STATE) 161 + req.mode_data |= PMC_USB_DP_HPD_LVL << 162 + PMC_USB_ALTMODE_DP_MODE_SHIFT; 163 + 160 164 return pmc_usb_command(port, (void *)&req, sizeof(req)); 161 165 } 162 166 ··· 302 298 struct typec_mux_desc mux_desc = { }; 303 299 int ret; 304 300 305 - ret = fwnode_property_read_u8(fwnode, "usb2-port", &port->usb2_port); 301 + ret = fwnode_property_read_u8(fwnode, "usb2-port-number", &port->usb2_port); 306 302 if (ret) 307 303 return ret; 308 304 309 - ret = fwnode_property_read_u8(fwnode, "usb3-port", &port->usb3_port); 305 + ret = fwnode_property_read_u8(fwnode, "usb3-port-number", &port->usb3_port); 310 306 if (ret) 311 307 return ret; 312 308
+5 -5
drivers/vfio/vfio_iommu_type1.c
··· 342 342 vma = find_vma_intersection(mm, vaddr, vaddr + 1); 343 343 344 344 if (vma && vma->vm_flags & VM_PFNMAP) { 345 - *pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff; 346 - if (is_invalid_reserved_pfn(*pfn)) 345 + if (!follow_pfn(vma, vaddr, pfn) && 346 + is_invalid_reserved_pfn(*pfn)) 347 347 ret = 0; 348 348 } 349 349 done: ··· 555 555 continue; 556 556 } 557 557 558 - remote_vaddr = dma->vaddr + iova - dma->iova; 558 + remote_vaddr = dma->vaddr + (iova - dma->iova); 559 559 ret = vfio_pin_page_external(dma, remote_vaddr, &phys_pfn[i], 560 560 do_accounting); 561 561 if (ret) ··· 2345 2345 vaddr = dma->vaddr + offset; 2346 2346 2347 2347 if (write) 2348 - *copied = __copy_to_user((void __user *)vaddr, data, 2348 + *copied = copy_to_user((void __user *)vaddr, data, 2349 2349 count) ? 0 : count; 2350 2350 else 2351 - *copied = __copy_from_user(data, (void __user *)vaddr, 2351 + *copied = copy_from_user(data, (void __user *)vaddr, 2352 2352 count) ? 0 : count; 2353 2353 if (kthread) 2354 2354 unuse_mm(mm);
+16 -5
drivers/vhost/vsock.c
··· 181 181 break; 182 182 } 183 183 184 - vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len); 185 - added = true; 186 - 187 - /* Deliver to monitoring devices all correctly transmitted 188 - * packets. 184 + /* Deliver to monitoring devices all packets that we 185 + * will transmit. 189 186 */ 190 187 virtio_transport_deliver_tap_pkt(pkt); 188 + 189 + vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len); 190 + added = true; 191 191 192 192 pkt->off += payload_len; 193 193 total_len += payload_len; ··· 196 196 * to send it with the next available buffer. 197 197 */ 198 198 if (pkt->off < pkt->len) { 199 + /* We are queueing the same virtio_vsock_pkt to handle 200 + * the remaining bytes, and we want to deliver it 201 + * to monitoring devices in the next iteration. 202 + */ 203 + pkt->tap_delivered = false; 204 + 199 205 spin_lock_bh(&vsock->send_pkt_list_lock); 200 206 list_add(&pkt->list, &vsock->send_pkt_list); 201 207 spin_unlock_bh(&vsock->send_pkt_list_lock); ··· 548 542 549 543 mutex_unlock(&vq->mutex); 550 544 } 545 + 546 + /* Some packets may have been queued before the device was started, 547 + * let's kick the send worker to send them. 548 + */ 549 + vhost_work_queue(&vsock->dev, &vsock->send_pkt_work); 551 550 552 551 mutex_unlock(&vsock->dev.mutex); 553 552 return 0;
+1 -1
fs/btrfs/backref.c
··· 391 391 struct rb_node **p = &preftrees->direct.root.rb_root.rb_node; 392 392 struct rb_node *parent = NULL; 393 393 struct prelim_ref *ref = NULL; 394 - struct prelim_ref target = {0}; 394 + struct prelim_ref target = {}; 395 395 int result; 396 396 397 397 target.parent = bytenr;
+14 -6
fs/btrfs/block-group.c
··· 916 916 path = btrfs_alloc_path(); 917 917 if (!path) { 918 918 ret = -ENOMEM; 919 - goto out; 919 + goto out_put_group; 920 920 } 921 921 922 922 /* ··· 954 954 ret = btrfs_orphan_add(trans, BTRFS_I(inode)); 955 955 if (ret) { 956 956 btrfs_add_delayed_iput(inode); 957 - goto out; 957 + goto out_put_group; 958 958 } 959 959 clear_nlink(inode); 960 960 /* One for the block groups ref */ ··· 977 977 978 978 ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1); 979 979 if (ret < 0) 980 - goto out; 980 + goto out_put_group; 981 981 if (ret > 0) 982 982 btrfs_release_path(path); 983 983 if (ret == 0) { 984 984 ret = btrfs_del_item(trans, tree_root, path); 985 985 if (ret) 986 - goto out; 986 + goto out_put_group; 987 987 btrfs_release_path(path); 988 988 } 989 989 ··· 1102 1102 1103 1103 ret = remove_block_group_free_space(trans, block_group); 1104 1104 if (ret) 1105 - goto out; 1105 + goto out_put_group; 1106 1106 1107 - btrfs_put_block_group(block_group); 1107 + /* Once for the block groups rbtree */ 1108 1108 btrfs_put_block_group(block_group); 1109 1109 1110 1110 ret = btrfs_search_slot(trans, root, &key, path, -1, 1); ··· 1127 1127 /* once for the tree */ 1128 1128 free_extent_map(em); 1129 1129 } 1130 + 1131 + out_put_group: 1132 + /* Once for the lookup reference */ 1133 + btrfs_put_block_group(block_group); 1130 1134 out: 1131 1135 if (remove_rsv) 1132 1136 btrfs_delayed_refs_rsv_release(fs_info, 1); ··· 1292 1288 if (ret) 1293 1289 goto err; 1294 1290 mutex_unlock(&fs_info->unused_bg_unpin_mutex); 1291 + if (prev_trans) 1292 + btrfs_put_transaction(prev_trans); 1295 1293 1296 1294 return true; 1297 1295 1298 1296 err: 1299 1297 mutex_unlock(&fs_info->unused_bg_unpin_mutex); 1298 + if (prev_trans) 1299 + btrfs_put_transaction(prev_trans); 1300 1300 btrfs_dec_block_group_ro(bg); 1301 1301 return false; 1302 1302 }
+1 -1
fs/btrfs/discard.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 3 3 #ifndef BTRFS_DISCARD_H 4 4 #define BTRFS_DISCARD_H
+32 -4
fs/btrfs/disk-io.c
··· 2036 2036 for (i = 0; i < ret; i++) 2037 2037 btrfs_drop_and_free_fs_root(fs_info, gang[i]); 2038 2038 } 2039 - 2040 - if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) 2041 - btrfs_free_log_root_tree(NULL, fs_info); 2042 2039 2043 2040 static void btrfs_init_scrub(struct btrfs_fs_info *fs_info) ··· 3885 3888 spin_unlock(&fs_info->fs_roots_radix_lock); 3886 3889 3887 3890 if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) { 3888 - btrfs_free_log(NULL, root); 3891 + ASSERT(root->log_root == NULL); 3889 3892 if (root->reloc_root) { 3890 3893 btrfs_put_root(root->reloc_root); 3891 3894 root->reloc_root = NULL; ··· 4206 4209 4207 4210 down_write(&fs_info->cleanup_work_sem); 4208 4211 up_write(&fs_info->cleanup_work_sem); 4212 + } 4213 + 4214 + static void btrfs_drop_all_logs(struct btrfs_fs_info *fs_info) 4215 + { 4216 + struct btrfs_root *gang[8]; 4217 + u64 root_objectid = 0; 4218 + int ret; 4219 + 4220 + spin_lock(&fs_info->fs_roots_radix_lock); 4221 + while ((ret = radix_tree_gang_lookup(&fs_info->fs_roots_radix, 4222 + (void **)gang, root_objectid, 4223 + ARRAY_SIZE(gang))) != 0) { 4224 + int i; 4225 + 4226 + for (i = 0; i < ret; i++) 4227 + gang[i] = btrfs_grab_root(gang[i]); 4228 + spin_unlock(&fs_info->fs_roots_radix_lock); 4229 + 4230 + for (i = 0; i < ret; i++) { 4231 + if (!gang[i]) 4232 + continue; 4233 + root_objectid = gang[i]->root_key.objectid; 4234 + btrfs_free_log(NULL, gang[i]); 4235 + btrfs_put_root(gang[i]); 4236 + } 4237 + root_objectid++; 4238 + spin_lock(&fs_info->fs_roots_radix_lock); 4239 + } 4240 + spin_unlock(&fs_info->fs_roots_radix_lock); 4241 + btrfs_free_log_root_tree(NULL, fs_info); 4209 4242 } 4210 4243 4211 4244 static void btrfs_destroy_ordered_extents(struct btrfs_root *root) ··· 4630 4603 btrfs_destroy_delayed_inodes(fs_info); 4631 4604 btrfs_assert_delayed_root_empty(fs_info); 4632 4605 btrfs_destroy_all_delalloc_inodes(fs_info); 4606 + btrfs_drop_all_logs(fs_info); 4633 4607 mutex_unlock(&fs_info->transaction_kthread_mutex); 4634 4608 4635 4609 return 0;
+1
fs/btrfs/relocation.c
··· 4559 4559 if (IS_ERR(fs_root)) { 4560 4560 err = PTR_ERR(fs_root); 4561 4561 list_add_tail(&reloc_root->root_list, &reloc_roots); 4562 + btrfs_end_transaction(trans); 4562 4563 goto out_unset; 4563 4564 } 4564 4565
+11 -2
fs/btrfs/transaction.c
··· 662 662 } 663 663 664 664 got_it: 665 - btrfs_record_root_in_trans(h, root); 666 - 667 665 if (!current->journal_info) 668 666 current->journal_info = h; 667 + 668 + /* 669 + * btrfs_record_root_in_trans() needs to alloc new extents, and may 670 + * call btrfs_join_transaction() while we're also starting a 671 + * transaction. 672 + * 673 + * Thus it need to be called after current->journal_info initialized, 674 + * or we can deadlock. 675 + */ 676 + btrfs_record_root_in_trans(h, root); 677 + 669 678 return h; 670 679 671 680 join_fail:
+40 -3
fs/btrfs/tree-log.c
··· 4226 4226 const u64 ino = btrfs_ino(inode); 4227 4227 struct btrfs_path *dst_path = NULL; 4228 4228 bool dropped_extents = false; 4229 + u64 truncate_offset = i_size; 4230 + struct extent_buffer *leaf; 4231 + int slot; 4229 4232 int ins_nr = 0; 4230 4233 int start_slot; 4231 4234 int ret; ··· 4243 4240 if (ret < 0) 4244 4241 goto out; 4245 4242 4243 + /* 4244 + * We must check if there is a prealloc extent that starts before the 4245 + * i_size and crosses the i_size boundary. This is to ensure later we 4246 + * truncate down to the end of that extent and not to the i_size, as 4247 + * otherwise we end up losing part of the prealloc extent after a log 4248 + * replay and with an implicit hole if there is another prealloc extent 4249 + * that starts at an offset beyond i_size. 4250 + */ 4251 + ret = btrfs_previous_item(root, path, ino, BTRFS_EXTENT_DATA_KEY); 4252 + if (ret < 0) 4253 + goto out; 4254 + 4255 + if (ret == 0) { 4256 + struct btrfs_file_extent_item *ei; 4257 + 4258 + leaf = path->nodes[0]; 4259 + slot = path->slots[0]; 4260 + ei = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item); 4261 + 4262 + if (btrfs_file_extent_type(leaf, ei) == 4263 + BTRFS_FILE_EXTENT_PREALLOC) { 4264 + u64 extent_end; 4265 + 4266 + btrfs_item_key_to_cpu(leaf, &key, slot); 4267 + extent_end = key.offset + 4268 + btrfs_file_extent_num_bytes(leaf, ei); 4269 + 4270 + if (extent_end > i_size) 4271 + truncate_offset = extent_end; 4272 + } 4273 + } else { 4274 + ret = 0; 4275 + } 4276 + 4246 4277 while (true) { 4247 - struct extent_buffer *leaf = path->nodes[0]; 4248 - int slot = path->slots[0]; 4278 + leaf = path->nodes[0]; 4279 + slot = path->slots[0]; 4249 4280 4250 4281 if (slot >= btrfs_header_nritems(leaf)) { 4251 4282 if (ins_nr > 0) { ··· 4317 4280 ret = btrfs_truncate_inode_items(trans, 4318 4281 root->log_root, 4319 4282 &inode->vfs_inode, 4320 - i_size, 4283 + truncate_offset, 4321 4284 BTRFS_EXTENT_DATA_KEY); 4322 4285 } while (ret == -EAGAIN); 4323 4286 if (ret)
+2 -1
fs/ceph/caps.c
··· 2749 2749 2750 2750 ret = try_get_cap_refs(inode, need, want, 0, flags, got); 2751 2751 /* three special error codes */ 2752 - if (ret == -EAGAIN || ret == -EFBIG || ret == -EAGAIN) 2752 + if (ret == -EAGAIN || ret == -EFBIG || ret == -ESTALE) 2753 2753 ret = 0; 2754 2754 return ret; 2755 2755 } ··· 3746 3746 WARN_ON(1); 3747 3747 tsession = NULL; 3748 3748 target = -1; 3749 + mutex_lock(&session->s_mutex); 3749 3750 } 3750 3751 goto retry; 3751 3752
+1 -1
fs/ceph/debugfs.c
··· 271 271 &congestion_kb_fops); 272 272 273 273 snprintf(name, sizeof(name), "../../bdi/%s", 274 - dev_name(fsc->sb->s_bdi->dev)); 274 + bdi_dev_name(fsc->sb->s_bdi)); 275 275 fsc->debugfs_bdi = 276 276 debugfs_create_symlink("bdi", 277 277 fsc->client->debugfs_dir,
+3 -5
fs/ceph/mds_client.c
··· 3251 3251 void *end = p + msg->front.iov_len; 3252 3252 struct ceph_mds_session_head *h; 3253 3253 u32 op; 3254 - u64 seq; 3255 - unsigned long features = 0; 3254 + u64 seq, features = 0; 3256 3255 int wake = 0; 3257 3256 bool blacklisted = false; 3258 3257 ··· 3270 3271 goto bad; 3271 3272 /* version >= 3, feature bits */ 3272 3273 ceph_decode_32_safe(&p, end, len, bad); 3273 - ceph_decode_need(&p, end, len, bad); 3274 - memcpy(&features, p, min_t(size_t, len, sizeof(features))); 3275 - p += len; 3274 + ceph_decode_64_safe(&p, end, features, bad); 3275 + p += len - sizeof(features); 3276 3276 } 3277 3277 3278 3278 mutex_lock(&mdsc->mutex);
+2 -2
fs/ceph/quota.c
··· 159 159 } 160 160 161 161 if (IS_ERR(in)) { 162 - pr_warn("Can't lookup inode %llx (err: %ld)\n", 163 - realm->ino, PTR_ERR(in)); 162 + dout("Can't lookup inode %llx (err: %ld)\n", 163 + realm->ino, PTR_ERR(in)); 164 164 qri->timeout = jiffies + msecs_to_jiffies(60 * 1000); /* XXX */ 165 165 } else { 166 166 qri->timeout = 0;
+1
fs/configfs/dir.c
··· 1519 1519 spin_lock(&configfs_dirent_lock); 1520 1520 configfs_detach_rollback(dentry); 1521 1521 spin_unlock(&configfs_dirent_lock); 1522 + config_item_put(parent_item); 1522 1523 return -EINTR; 1523 1524 } 1524 1525 frag->frag_dead = true;
+8
fs/coredump.c
··· 788 788 if (displaced) 789 789 put_files_struct(displaced); 790 790 if (!dump_interrupted()) { 791 + /* 792 + * umh disabled with CONFIG_STATIC_USERMODEHELPER_PATH="" would 793 + * have this set to NULL. 794 + */ 795 + if (!cprm.file) { 796 + pr_info("Core dump to |%s disabled\n", cn.corename); 797 + goto close_fail; 798 + } 791 799 file_start_write(cprm.file); 792 800 core_dumped = binfmt->core_dump(&cprm); 793 801 file_end_write(cprm.file);
+33 -28
fs/eventpoll.c
··· 1171 1171 { 1172 1172 struct eventpoll *ep = epi->ep; 1173 1173 1174 + /* Fast preliminary check */ 1175 + if (epi->next != EP_UNACTIVE_PTR) 1176 + return false; 1177 + 1174 1178 /* Check that the same epi has not been just chained from another CPU */ 1175 1179 if (cmpxchg(&epi->next, EP_UNACTIVE_PTR, NULL) != EP_UNACTIVE_PTR) 1176 1180 return false; ··· 1241 1237 * chained in ep->ovflist and requeued later on. 1242 1238 */ 1243 1239 if (READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR) { 1244 - if (epi->next == EP_UNACTIVE_PTR && 1245 - chain_epi_lockless(epi)) 1240 + if (chain_epi_lockless(epi)) 1246 1241 ep_pm_stay_awake_rcu(epi); 1247 - goto out_unlock; 1248 - } 1249 - 1250 - /* If this file is already in the ready list we exit soon */ 1251 - if (!ep_is_linked(epi) && 1252 - list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) { 1253 - ep_pm_stay_awake_rcu(epi); 1242 + } else if (!ep_is_linked(epi)) { 1243 + /* In the usual case, add event to ready list. */ 1244 + if (list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) 1245 + ep_pm_stay_awake_rcu(epi); 1254 1246 } 1255 1247 1256 1248 /* ··· 1822 1822 { 1823 1823 int res = 0, eavail, timed_out = 0; 1824 1824 u64 slack = 0; 1825 - bool waiter = false; 1826 1825 wait_queue_entry_t wait; 1827 1826 ktime_t expires, *to = NULL; 1828 1827 ··· 1866 1867 */ 1867 1868 ep_reset_busy_poll_napi_id(ep); 1868 1869 1869 - /* 1870 - * We don't have any available event to return to the caller. We need 1871 - * to sleep here, and we will be woken by ep_poll_callback() when events 1872 - * become available. 1873 - */ 1874 - if (!waiter) { 1875 - waiter = true; 1876 - init_waitqueue_entry(&wait, current); 1877 - 1870 + do { 1871 + /* 1872 + * Internally init_wait() uses autoremove_wake_function(), 1873 + * thus wait entry is removed from the wait queue on each 1874 + * wakeup. Why it is important? In case of several waiters 1875 + * each new wakeup will hit the next waiter, giving it the 1876 + * chance to harvest new event. 
Otherwise wakeup can be 1877 + * lost. This is also good performance-wise, because on 1878 + * normal wakeup path no need to call __remove_wait_queue() 1879 + * explicitly, thus ep->lock is not taken, which halts the 1880 + * event delivery. 1881 + */ 1882 + init_wait(&wait); 1878 1883 write_lock_irq(&ep->lock); 1879 1884 __add_wait_queue_exclusive(&ep->wq, &wait); 1880 1885 write_unlock_irq(&ep->lock); 1881 - } 1882 1886 1883 - for (;;) { 1884 1887 /* 1885 1888 * We don't want to sleep if the ep_poll_callback() sends us 1886 1889 * a wakeup in between. That's why we set the task state ··· 1912 1911 timed_out = 1; 1913 1912 break; 1914 1913 } 1915 - } 1914 + 1915 + /* We were woken up, thus go and try to harvest some events */ 1916 + eavail = 1; 1917 + 1918 + } while (0); 1916 1919 1917 1920 __set_current_state(TASK_RUNNING); 1921 + 1922 + if (!list_empty_careful(&wait.entry)) { 1923 + write_lock_irq(&ep->lock); 1924 + __remove_wait_queue(&ep->wq, &wait); 1925 + write_unlock_irq(&ep->lock); 1926 + } 1918 1927 1919 1928 send_events: 1920 1929 /* ··· 1935 1924 if (!res && eavail && 1936 1925 !(res = ep_send_events(ep, events, maxevents)) && !timed_out) 1937 1926 goto fetch_events; 1938 - 1939 - if (waiter) { 1940 - write_lock_irq(&ep->lock); 1941 - __remove_wait_queue(&ep->wq, &wait); 1942 - write_unlock_irq(&ep->lock); 1943 - } 1944 1927 1945 1928 return res; 1946 1929 }
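The "fast preliminary check" added to chain_epi_lockless() above is a common lock-free pattern: a plain load filters out the already-chained case before paying for the cmpxchg. A minimal user-space sketch with C11 atomics — `EP_UNACTIVE` and the function name are stand-ins, not the kernel's definitions:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define EP_UNACTIVE ((void *)-1)   /* stand-in for EP_UNACTIVE_PTR */

static _Atomic(void *) next_ptr = EP_UNACTIVE;

/* Returns 1 if this caller chained the entry, 0 if it was already chained. */
static int chain_lockless(void)
{
    void *expected = EP_UNACTIVE;

    /* Fast preliminary check: skip the cmpxchg if already chained. */
    if (atomic_load(&next_ptr) != EP_UNACTIVE)
        return 0;

    /* Slow path: only one caller wins the EP_UNACTIVE -> NULL transition. */
    return atomic_compare_exchange_strong(&next_ptr, &expected, NULL);
}
```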
+53 -70
fs/io_uring.c
··· 524 524 REQ_F_OVERFLOW_BIT, 525 525 REQ_F_POLLED_BIT, 526 526 REQ_F_BUFFER_SELECTED_BIT, 527 + REQ_F_NO_FILE_TABLE_BIT, 527 528 528 529 /* not a real bit, just to check we're not overflowing the space */ 529 530 __REQ_F_LAST_BIT, ··· 578 577 REQ_F_POLLED = BIT(REQ_F_POLLED_BIT), 579 578 /* buffer already selected */ 580 579 REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT), 580 + /* doesn't need file table for this request */ 581 + REQ_F_NO_FILE_TABLE = BIT(REQ_F_NO_FILE_TABLE_BIT), 581 582 }; 582 583 583 584 struct async_poll { ··· 680 677 unsigned needs_mm : 1; 681 678 /* needs req->file assigned */ 682 679 unsigned needs_file : 1; 683 - /* needs req->file assigned IFF fd is >= 0 */ 684 - unsigned fd_non_neg : 1; 685 680 /* hash wq insertion if file is a regular file */ 686 681 unsigned hash_reg_file : 1; 687 682 /* unbound wq insertion if file is a non-regular file */ ··· 782 781 .needs_file = 1, 783 782 }, 784 783 [IORING_OP_OPENAT] = { 785 - .needs_file = 1, 786 - .fd_non_neg = 1, 787 784 .file_table = 1, 788 785 .needs_fs = 1, 789 786 }, ··· 795 796 }, 796 797 [IORING_OP_STATX] = { 797 798 .needs_mm = 1, 798 - .needs_file = 1, 799 - .fd_non_neg = 1, 800 799 .needs_fs = 1, 800 + .file_table = 1, 801 801 }, 802 802 [IORING_OP_READ] = { 803 803 .needs_mm = 1, ··· 831 833 .buffer_select = 1, 832 834 }, 833 835 [IORING_OP_OPENAT2] = { 834 - .needs_file = 1, 835 - .fd_non_neg = 1, 836 836 .file_table = 1, 837 837 .needs_fs = 1, 838 838 }, ··· 1287 1291 struct io_kiocb *req; 1288 1292 1289 1293 req = ctx->fallback_req; 1290 - if (!test_and_set_bit_lock(0, (unsigned long *) ctx->fallback_req)) 1294 + if (!test_and_set_bit_lock(0, (unsigned long *) &ctx->fallback_req)) 1291 1295 return req; 1292 1296 1293 1297 return NULL; ··· 1374 1378 if (likely(!io_is_fallback_req(req))) 1375 1379 kmem_cache_free(req_cachep, req); 1376 1380 else 1377 - clear_bit_unlock(0, (unsigned long *) req->ctx->fallback_req); 1381 + clear_bit_unlock(0, (unsigned long *) 
&req->ctx->fallback_req); 1378 1382 } 1379 1383 1380 1384 struct req_batch { ··· 2030 2034 * any file. For now, just ensure that anything potentially problematic is done 2031 2035 * inline. 2032 2036 */ 2033 - static bool io_file_supports_async(struct file *file) 2037 + static bool io_file_supports_async(struct file *file, int rw) 2034 2038 { 2035 2039 umode_t mode = file_inode(file)->i_mode; 2036 2040 ··· 2039 2043 if (S_ISREG(mode) && file->f_op != &io_uring_fops) 2040 2044 return true; 2041 2045 2042 - return false; 2046 + if (!(file->f_mode & FMODE_NOWAIT)) 2047 + return false; 2048 + 2049 + if (rw == READ) 2050 + return file->f_op->read_iter != NULL; 2051 + 2052 + return file->f_op->write_iter != NULL; 2043 2053 } 2044 2054 2045 2055 static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe, ··· 2573 2571 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so 2574 2572 * we know to async punt it even if it was opened O_NONBLOCK 2575 2573 */ 2576 - if (force_nonblock && !io_file_supports_async(req->file)) 2574 + if (force_nonblock && !io_file_supports_async(req->file, READ)) 2577 2575 goto copy_iov; 2578 2576 2579 2577 iov_count = iov_iter_count(&iter); ··· 2596 2594 if (ret) 2597 2595 goto out_free; 2598 2596 /* any defer here is final, must blocking retry */ 2599 - if (!(req->flags & REQ_F_NOWAIT)) 2597 + if (!(req->flags & REQ_F_NOWAIT) && 2598 + !file_can_poll(req->file)) 2600 2599 req->flags |= REQ_F_MUST_PUNT; 2601 2600 return -EAGAIN; 2602 2601 } ··· 2665 2662 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so 2666 2663 * we know to async punt it even if it was opened O_NONBLOCK 2667 2664 */ 2668 - if (force_nonblock && !io_file_supports_async(req->file)) 2665 + if (force_nonblock && !io_file_supports_async(req->file, WRITE)) 2669 2666 goto copy_iov; 2670 2667 2671 2668 /* file path doesn't support NOWAIT for non-direct_IO */ ··· 2719 2716 if (ret) 2720 2717 goto out_free; 2721 2718 /* any defer here is final, 
must blocking retry */ 2722 - req->flags |= REQ_F_MUST_PUNT; 2719 + if (!file_can_poll(req->file)) 2720 + req->flags |= REQ_F_MUST_PUNT; 2723 2721 return -EAGAIN; 2724 2722 } 2725 2723 } ··· 2760 2756 return 0; 2761 2757 } 2762 2758 2763 - static bool io_splice_punt(struct file *file) 2764 - { 2765 - if (get_pipe_info(file)) 2766 - return false; 2767 - if (!io_file_supports_async(file)) 2768 - return true; 2769 - return !(file->f_flags & O_NONBLOCK); 2770 - } 2771 - 2772 2759 static int io_splice(struct io_kiocb *req, bool force_nonblock) 2773 2760 { 2774 2761 struct io_splice *sp = &req->splice; ··· 2769 2774 loff_t *poff_in, *poff_out; 2770 2775 long ret; 2771 2776 2772 - if (force_nonblock) { 2773 - if (io_splice_punt(in) || io_splice_punt(out)) 2774 - return -EAGAIN; 2775 - flags |= SPLICE_F_NONBLOCK; 2776 - } 2777 + if (force_nonblock) 2778 + return -EAGAIN; 2777 2779 2778 2780 poff_in = (sp->off_in == -1) ? NULL : &sp->off_in; 2779 2781 poff_out = (sp->off_out == -1) ? NULL : &sp->off_out; ··· 3347 3355 struct kstat stat; 3348 3356 int ret; 3349 3357 3350 - if (force_nonblock) 3358 + if (force_nonblock) { 3359 + /* only need file table for an actual valid fd */ 3360 + if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD) 3361 + req->flags |= REQ_F_NO_FILE_TABLE; 3351 3362 return -EAGAIN; 3363 + } 3352 3364 3353 3365 if (vfs_stat_set_lookup_flags(&lookup_flags, ctx->how.flags)) 3354 3366 return -EINVAL; ··· 3498 3502 if (io_req_cancelled(req)) 3499 3503 return; 3500 3504 __io_sync_file_range(req); 3501 - io_put_req(req); /* put submission ref */ 3505 + io_steal_work(req, workptr); 3502 3506 } 3503 3507 3504 3508 static int io_sync_file_range(struct io_kiocb *req, bool force_nonblock) ··· 5011 5015 int ret; 5012 5016 5013 5017 /* Still need defer if there is pending req in defer list. 
*/ 5014 - if (!req_need_defer(req) && list_empty(&ctx->defer_list)) 5018 + if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list)) 5015 5019 return 0; 5016 5020 5017 5021 if (!req->io && io_alloc_async_ctx(req)) ··· 5360 5364 io_steal_work(req, workptr); 5361 5365 } 5362 5366 5363 - static int io_req_needs_file(struct io_kiocb *req, int fd) 5364 - { 5365 - if (!io_op_defs[req->opcode].needs_file) 5366 - return 0; 5367 - if ((fd == -1 || fd == AT_FDCWD) && io_op_defs[req->opcode].fd_non_neg) 5368 - return 0; 5369 - return 1; 5370 - } 5371 - 5372 5367 static inline struct file *io_file_from_index(struct io_ring_ctx *ctx, 5373 5368 int index) 5374 5369 { ··· 5397 5410 } 5398 5411 5399 5412 static int io_req_set_file(struct io_submit_state *state, struct io_kiocb *req, 5400 - int fd, unsigned int flags) 5413 + int fd) 5401 5414 { 5402 5415 bool fixed; 5403 5416 5404 - if (!io_req_needs_file(req, fd)) 5405 - return 0; 5406 - 5407 - fixed = (flags & IOSQE_FIXED_FILE); 5417 + fixed = (req->flags & REQ_F_FIXED_FILE) != 0; 5408 5418 if (unlikely(!fixed && req->needs_fixed_file)) 5409 5419 return -EBADF; 5410 5420 ··· 5413 5429 int ret = -EBADF; 5414 5430 struct io_ring_ctx *ctx = req->ctx; 5415 5431 5416 - if (req->work.files) 5432 + if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE)) 5417 5433 return 0; 5418 5434 if (!ctx->ring_file) 5419 5435 return -EBADF; ··· 5778 5794 struct io_submit_state *state, bool async) 5779 5795 { 5780 5796 unsigned int sqe_flags; 5781 - int id, fd; 5797 + int id; 5782 5798 5783 5799 /* 5784 5800 * All io need record the previous position, if LINK vs DARIN, ··· 5830 5846 IOSQE_ASYNC | IOSQE_FIXED_FILE | 5831 5847 IOSQE_BUFFER_SELECT | IOSQE_IO_LINK); 5832 5848 5833 - fd = READ_ONCE(sqe->fd); 5834 - return io_req_set_file(state, req, fd, sqe_flags); 5849 + if (!io_op_defs[req->opcode].needs_file) 5850 + return 0; 5851 + 5852 + return io_req_set_file(state, req, READ_ONCE(sqe->fd)); 5835 5853 } 5836 5854 5837 5855 static int 
io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr, ··· 7313 7327 * it could cause shutdown to hang. 7314 7328 */ 7315 7329 while (ctx->sqo_thread && !wq_has_sleeper(&ctx->sqo_wait)) 7316 - cpu_relax(); 7330 + cond_resched(); 7317 7331 7318 7332 io_kill_timeouts(ctx); 7319 7333 io_poll_remove_all(ctx); ··· 7342 7356 static void io_uring_cancel_files(struct io_ring_ctx *ctx, 7343 7357 struct files_struct *files) 7344 7358 { 7345 - struct io_kiocb *req; 7346 - DEFINE_WAIT(wait); 7347 - 7348 7359 while (!list_empty_careful(&ctx->inflight_list)) { 7349 - struct io_kiocb *cancel_req = NULL; 7360 + struct io_kiocb *cancel_req = NULL, *req; 7361 + DEFINE_WAIT(wait); 7350 7362 7351 7363 spin_lock_irq(&ctx->inflight_lock); 7352 7364 list_for_each_entry(req, &ctx->inflight_list, inflight_entry) { ··· 7384 7400 */ 7385 7401 if (refcount_sub_and_test(2, &cancel_req->refs)) { 7386 7402 io_put_req(cancel_req); 7403 + finish_wait(&ctx->inflight_wait, &wait); 7387 7404 continue; 7388 7405 } 7389 7406 } ··· 7392 7407 io_wq_cancel_work(ctx->io_wq, &cancel_req->work); 7393 7408 io_put_req(cancel_req); 7394 7409 schedule(); 7410 + finish_wait(&ctx->inflight_wait, &wait); 7395 7411 } 7396 - finish_wait(&ctx->inflight_wait, &wait); 7397 7412 } 7398 7413 7399 7414 static int io_uring_flush(struct file *file, void *data) ··· 7742 7757 return ret; 7743 7758 } 7744 7759 7745 - static int io_uring_create(unsigned entries, struct io_uring_params *p) 7760 + static int io_uring_create(unsigned entries, struct io_uring_params *p, 7761 + struct io_uring_params __user *params) 7746 7762 { 7747 7763 struct user_struct *user = NULL; 7748 7764 struct io_ring_ctx *ctx; ··· 7835 7849 p->cq_off.overflow = offsetof(struct io_rings, cq_overflow); 7836 7850 p->cq_off.cqes = offsetof(struct io_rings, cqes); 7837 7851 7852 + p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP | 7853 + IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS | 7854 + IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL; 
7855 + 7856 + if (copy_to_user(params, p, sizeof(*p))) { 7857 + ret = -EFAULT; 7858 + goto err; 7859 + } 7838 7860 /* 7839 7861 * Install ring fd as the very last thing, so we don't risk someone 7840 7862 * having closed it before we finish setup ··· 7851 7857 if (ret < 0) 7852 7858 goto err; 7853 7859 7854 - p->features = IORING_FEAT_SINGLE_MMAP | IORING_FEAT_NODROP | 7855 - IORING_FEAT_SUBMIT_STABLE | IORING_FEAT_RW_CUR_POS | 7856 - IORING_FEAT_CUR_PERSONALITY | IORING_FEAT_FAST_POLL; 7857 7860 trace_io_uring_create(ret, ctx, p->sq_entries, p->cq_entries, p->flags); 7858 7861 return ret; 7859 7862 err: ··· 7866 7875 static long io_uring_setup(u32 entries, struct io_uring_params __user *params) 7867 7876 { 7868 7877 struct io_uring_params p; 7869 - long ret; 7870 7878 int i; 7871 7879 7872 7880 if (copy_from_user(&p, params, sizeof(p))) ··· 7880 7890 IORING_SETUP_CLAMP | IORING_SETUP_ATTACH_WQ)) 7881 7891 return -EINVAL; 7882 7892 7883 - ret = io_uring_create(entries, &p); 7884 - if (ret < 0) 7885 - return ret; 7886 - 7887 - if (copy_to_user(params, &p, sizeof(p))) 7888 - return -EFAULT; 7889 - 7890 - return ret; 7893 + return io_uring_create(entries, &p, params); 7891 7894 } 7892 7895 7893 7896 SYSCALL_DEFINE2(io_uring_setup, u32, entries,
+8
fs/ioctl.c
··· 55 55 static int ioctl_fibmap(struct file *filp, int __user *p) 56 56 { 57 57 struct inode *inode = file_inode(filp); 58 + struct super_block *sb = inode->i_sb; 58 59 int error, ur_block; 59 60 sector_t block; 60 61 ··· 71 70 72 71 block = ur_block; 73 72 error = bmap(inode, &block); 73 + 74 + if (block > INT_MAX) { 75 + error = -ERANGE; 76 + pr_warn_ratelimited("[%s/%d] FS: %s File: %pD4 would truncate fibmap result\n", 77 + current->comm, task_pid_nr(current), 78 + sb->s_id, filp); 79 + } 74 80 75 81 if (error) 76 82 ur_block = 0;
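The fs/ioctl.c hunk above guards against silently truncating a 64-bit block number into FIBMAP's 32-bit result. A hedged sketch of just that range check — the function name and the -34 literal for -ERANGE are illustrative:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define ERANGE_NEG (-34)   /* numeric value of -ERANGE on Linux */

/* Illustrative version of the check added to ioctl_fibmap(): the ioctl
 * returns the block through an int, so anything above INT_MAX must be
 * rejected rather than wrapped. */
static int fibmap_store(uint64_t block, int *ur_block)
{
    if (block > INT_MAX)
        return ERANGE_NEG;
    *ur_block = (int)block;
    return 0;
}
```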
+1 -4
fs/iomap/fiemap.c
··· 117 117 118 118 if (iomap->type == IOMAP_MAPPED) { 119 119 addr = (pos - iomap->offset + iomap->addr) >> inode->i_blkbits; 120 - if (addr > INT_MAX) 121 - WARN(1, "would truncate bmap result\n"); 122 - else 123 - *bno = addr; 120 + *bno = addr; 124 121 } 125 122 return 0; 126 123 }
+15 -7
fs/nfs/nfs3acl.c
··· 253 253 254 254 int nfs3_set_acl(struct inode *inode, struct posix_acl *acl, int type) 255 255 { 256 - struct posix_acl *alloc = NULL, *dfacl = NULL; 256 + struct posix_acl *orig = acl, *dfacl = NULL, *alloc; 257 257 int status; 258 258 259 259 if (S_ISDIR(inode->i_mode)) { 260 260 switch(type) { 261 261 case ACL_TYPE_ACCESS: 262 - alloc = dfacl = get_acl(inode, ACL_TYPE_DEFAULT); 262 + alloc = get_acl(inode, ACL_TYPE_DEFAULT); 263 263 if (IS_ERR(alloc)) 264 264 goto fail; 265 + dfacl = alloc; 265 266 break; 266 267 267 268 case ACL_TYPE_DEFAULT: 268 - dfacl = acl; 269 - alloc = acl = get_acl(inode, ACL_TYPE_ACCESS); 269 + alloc = get_acl(inode, ACL_TYPE_ACCESS); 270 270 if (IS_ERR(alloc)) 271 271 goto fail; 272 + dfacl = acl; 273 + acl = alloc; 272 274 break; 273 275 } 274 276 } 275 277 276 278 if (acl == NULL) { 277 - alloc = acl = posix_acl_from_mode(inode->i_mode, GFP_KERNEL); 279 + alloc = posix_acl_from_mode(inode->i_mode, GFP_KERNEL); 278 280 if (IS_ERR(alloc)) 279 281 goto fail; 282 + acl = alloc; 280 283 } 281 284 status = __nfs3_proc_setacls(inode, acl, dfacl); 282 - posix_acl_release(alloc); 285 + out: 286 + if (acl != orig) 287 + posix_acl_release(acl); 288 + if (dfacl != orig) 289 + posix_acl_release(dfacl); 283 290 return status; 284 291 285 292 fail: 286 - return PTR_ERR(alloc); 293 + status = PTR_ERR(alloc); 294 + goto out; 287 295 } 288 296 289 297 const struct xattr_handler *nfs3_xattr_handlers[] = {
+9 -2
fs/nfs/nfs4proc.c
··· 7891 7891 nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata) 7892 7892 { 7893 7893 struct nfs41_bind_conn_to_session_args *args = task->tk_msg.rpc_argp; 7894 + struct nfs41_bind_conn_to_session_res *res = task->tk_msg.rpc_resp; 7894 7895 struct nfs_client *clp = args->client; 7895 7896 7896 7897 switch (task->tk_status) { ··· 7899 7898 case -NFS4ERR_DEADSESSION: 7900 7899 nfs4_schedule_session_recovery(clp->cl_session, 7901 7900 task->tk_status); 7901 + } 7902 + if (args->dir == NFS4_CDFC4_FORE_OR_BOTH && 7903 + res->dir != NFS4_CDFS4_BOTH) { 7904 + rpc_task_close_connection(task); 7905 + if (args->retries++ < MAX_BIND_CONN_TO_SESSION_RETRIES) 7906 + rpc_restart_call(task); 7902 7907 } 7903 7908 } 7904 7909 ··· 7928 7921 struct nfs41_bind_conn_to_session_args args = { 7929 7922 .client = clp, 7930 7923 .dir = NFS4_CDFC4_FORE_OR_BOTH, 7924 + .retries = 0, 7931 7925 }; 7932 7926 struct nfs41_bind_conn_to_session_res res; 7933 7927 struct rpc_message msg = { ··· 9199 9191 nfs4_init_sequence(&lgp->args.seq_args, &lgp->res.seq_res, 0, 0); 9200 9192 9201 9193 task = rpc_run_task(&task_setup_data); 9202 - if (IS_ERR(task)) 9203 - return ERR_CAST(task); 9194 + 9204 9195 status = rpc_wait_for_completion_task(task); 9205 9196 if (status != 0) 9206 9197 goto out;
+5 -6
fs/nfs/pnfs.c
··· 1332 1332 !valid_layout) { 1333 1333 spin_unlock(&ino->i_lock); 1334 1334 dprintk("NFS: %s no layout segments to return\n", __func__); 1335 - goto out_put_layout_hdr; 1335 + goto out_wait_layoutreturn; 1336 1336 } 1337 1337 1338 1338 send = pnfs_prepare_layoutreturn(lo, &stateid, &cred, NULL); 1339 1339 spin_unlock(&ino->i_lock); 1340 1340 if (send) 1341 1341 status = pnfs_send_layoutreturn(lo, &stateid, &cred, IOMODE_ANY, true); 1342 + out_wait_layoutreturn: 1343 + wait_on_bit(&lo->plh_flags, NFS_LAYOUT_RETURN, TASK_UNINTERRUPTIBLE); 1342 1344 out_put_layout_hdr: 1343 1345 pnfs_free_lseg_list(&tmp_list); 1344 1346 pnfs_put_layout_hdr(lo); ··· 1458 1456 /* lo ref dropped in pnfs_roc_release() */ 1459 1457 layoutreturn = pnfs_prepare_layoutreturn(lo, &stateid, &lc_cred, &iomode); 1460 1458 /* If the creds don't match, we can't compound the layoutreturn */ 1461 - if (!layoutreturn) 1459 + if (!layoutreturn || cred_fscmp(cred, lc_cred) != 0) 1462 1460 goto out_noroc; 1463 - if (cred_fscmp(cred, lc_cred) != 0) 1464 - goto out_noroc_put_cred; 1465 1461 1466 1462 roc = layoutreturn; 1467 1463 pnfs_init_layoutreturn_args(args, lo, &stateid, iomode); 1468 1464 res->lrs_present = 0; 1469 1465 layoutreturn = false; 1470 - 1471 - out_noroc_put_cred: 1472 1466 put_cred(lc_cred); 1467 + 1473 1468 out_noroc: 1474 1469 spin_unlock(&ino->i_lock); 1475 1470 rcu_read_unlock();
+1 -1
fs/nfs/super.c
··· 185 185 186 186 rcu_read_lock(); 187 187 list_for_each_entry_rcu(server, head, client_link) { 188 - if (!nfs_sb_active(server->super)) 188 + if (!(server->super && nfs_sb_active(server->super))) 189 189 continue; 190 190 rcu_read_unlock(); 191 191 if (last)
+12 -15
fs/ocfs2/dlmfs/dlmfs.c
··· 275 275 loff_t *ppos) 276 276 { 277 277 int bytes_left; 278 - ssize_t writelen; 279 278 char *lvb_buf; 280 279 struct inode *inode = file_inode(filp); 281 280 ··· 284 285 if (*ppos >= i_size_read(inode)) 285 286 return -ENOSPC; 286 287 288 + /* don't write past the lvb */ 289 + if (count > i_size_read(inode) - *ppos) 290 + count = i_size_read(inode) - *ppos; 291 + 287 292 if (!count) 288 293 return 0; 289 294 290 295 if (!access_ok(buf, count)) 291 296 return -EFAULT; 292 297 293 - /* don't write past the lvb */ 294 - if ((count + *ppos) > i_size_read(inode)) 295 - writelen = i_size_read(inode) - *ppos; 296 - else 297 - writelen = count - *ppos; 298 - 299 - lvb_buf = kmalloc(writelen, GFP_NOFS); 298 + lvb_buf = kmalloc(count, GFP_NOFS); 300 299 if (!lvb_buf) 301 300 return -ENOMEM; 302 301 303 - bytes_left = copy_from_user(lvb_buf, buf, writelen); 304 - writelen -= bytes_left; 305 - if (writelen) 306 - user_dlm_write_lvb(inode, lvb_buf, writelen); 302 + bytes_left = copy_from_user(lvb_buf, buf, count); 303 + count -= bytes_left; 304 + if (count) 305 + user_dlm_write_lvb(inode, lvb_buf, count); 307 306 308 307 kfree(lvb_buf); 309 308 310 - *ppos = *ppos + writelen; 311 - mlog(0, "wrote %zd bytes\n", writelen); 312 - return writelen; 309 + *ppos = *ppos + count; 310 + mlog(0, "wrote %zu bytes\n", count); 311 + return count; 313 312 } 314 313 315 314 static void dlmfs_init_once(void *foo)
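The dlmfs rewrite above replaces the buggy `count - *ppos` arithmetic with a straightforward clamp of `count` to the space remaining before the end of the LVB. A sketch of the clamp, assuming the caller already rejected `pos >= size` (as dlmfs_file_write() does with -ENOSPC); names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* Clamp a write of `count` bytes at offset `pos` so it never runs past
 * `size` (the LVB length). Mirrors the fixed logic in the hunk above. */
static size_t clamp_write(size_t count, long long pos, long long size)
{
    if (count > (size_t)(size - pos))
        count = (size_t)(size - pos);
    return count;
}
```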
+4 -5
fs/pnode.c
··· 261 261 child = copy_tree(last_source, last_source->mnt.mnt_root, type); 262 262 if (IS_ERR(child)) 263 263 return PTR_ERR(child); 264 + read_seqlock_excl(&mount_lock); 264 265 mnt_set_mountpoint(m, mp, child); 266 + if (m->mnt_master != dest_master) 267 + SET_MNT_MARK(m->mnt_master); 268 + read_sequnlock_excl(&mount_lock); 265 269 last_dest = m; 266 270 last_source = child; 267 - if (m->mnt_master != dest_master) { 268 - read_seqlock_excl(&mount_lock); 269 - SET_MNT_MARK(m->mnt_master); 270 - read_sequnlock_excl(&mount_lock); 271 - } 272 271 hlist_add_head(&child->mnt_hash, list); 273 272 return count_mounts(m->mnt_ns, child); 274 273 }
+18 -27
fs/splice.c
··· 1118 1118 loff_t offset; 1119 1119 long ret; 1120 1120 1121 + if (unlikely(!(in->f_mode & FMODE_READ) || 1122 + !(out->f_mode & FMODE_WRITE))) 1123 + return -EBADF; 1124 + 1121 1125 ipipe = get_pipe_info(in); 1122 1126 opipe = get_pipe_info(out); 1123 1127 1124 1128 if (ipipe && opipe) { 1125 1129 if (off_in || off_out) 1126 1130 return -ESPIPE; 1127 - 1128 - if (!(in->f_mode & FMODE_READ)) 1129 - return -EBADF; 1130 - 1131 - if (!(out->f_mode & FMODE_WRITE)) 1132 - return -EBADF; 1133 1131 1134 1132 /* Splicing to self would be fun, but... */ 1135 1133 if (ipipe == opipe) ··· 1150 1152 } else { 1151 1153 offset = out->f_pos; 1152 1154 } 1153 - 1154 - if (unlikely(!(out->f_mode & FMODE_WRITE))) 1155 - return -EBADF; 1156 1155 1157 1156 if (unlikely(out->f_flags & O_APPEND)) 1158 1157 return -EINVAL; ··· 1435 1440 error = -EBADF; 1436 1441 in = fdget(fd_in); 1437 1442 if (in.file) { 1438 - if (in.file->f_mode & FMODE_READ) { 1439 - out = fdget(fd_out); 1440 - if (out.file) { 1441 - if (out.file->f_mode & FMODE_WRITE) 1442 - error = do_splice(in.file, off_in, 1443 - out.file, off_out, 1444 - len, flags); 1445 - fdput(out); 1446 - } 1443 + out = fdget(fd_out); 1444 + if (out.file) { 1445 + error = do_splice(in.file, off_in, out.file, off_out, 1446 + len, flags); 1447 + fdput(out); 1447 1448 } 1448 1449 fdput(in); 1449 1450 } ··· 1761 1770 struct pipe_inode_info *opipe = get_pipe_info(out); 1762 1771 int ret = -EINVAL; 1763 1772 1773 + if (unlikely(!(in->f_mode & FMODE_READ) || 1774 + !(out->f_mode & FMODE_WRITE))) 1775 + return -EBADF; 1776 + 1764 1777 /* 1765 1778 * Duplicate the contents of ipipe to opipe without actually 1766 1779 * copying the data. 
··· 1790 1795 1791 1796 SYSCALL_DEFINE4(tee, int, fdin, int, fdout, size_t, len, unsigned int, flags) 1792 1797 { 1793 - struct fd in; 1798 + struct fd in, out; 1794 1799 int error; 1795 1800 1796 1801 if (unlikely(flags & ~SPLICE_F_ALL)) ··· 1802 1807 error = -EBADF; 1803 1808 in = fdget(fdin); 1804 1809 if (in.file) { 1805 - if (in.file->f_mode & FMODE_READ) { 1806 - struct fd out = fdget(fdout); 1807 - if (out.file) { 1808 - if (out.file->f_mode & FMODE_WRITE) 1809 - error = do_tee(in.file, out.file, 1810 - len, flags); 1811 - fdput(out); 1812 - } 1810 + out = fdget(fdout); 1811 + if (out.file) { 1812 + error = do_tee(in.file, out.file, len, flags); 1813 + fdput(out); 1813 1814 } 1814 1815 fdput(in); 1815 1816 }
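The fs/splice.c hunks above hoist the FMODE_READ/FMODE_WRITE validation into a single early check inside do_splice() and do_tee(), so the syscall wrappers no longer repeat it. A minimal sketch of the hoisted check — the mode constants mirror include/linux/fs.h, and the -9 literal for -EBADF is illustrative:

```c
#include <assert.h>

#define FMODE_READ  0x1
#define FMODE_WRITE 0x2
#define EBADF_NEG   (-9)   /* numeric value of -EBADF on Linux */

/* Illustrative version of the check hoisted into do_splice()/do_tee():
 * fail early unless `in` is readable and `out` is writable. */
static long splice_mode_check(unsigned int in_mode, unsigned int out_mode)
{
    if (!(in_mode & FMODE_READ) || !(out_mode & FMODE_WRITE))
        return EBADF_NEG;
    return 0;
}
```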
+1 -1
fs/super.c
··· 1302 1302 mutex_lock(&bdev->bd_fsfreeze_mutex); 1303 1303 if (bdev->bd_fsfreeze_count > 0) { 1304 1304 mutex_unlock(&bdev->bd_fsfreeze_mutex); 1305 - blkdev_put(bdev, mode); 1306 1305 warnf(fc, "%pg: Can't mount, blockdev is frozen", bdev); 1306 + blkdev_put(bdev, mode); 1307 1307 return -EBUSY; 1308 1308 } 1309 1309
+1 -1
fs/vboxsf/super.c
··· 164 164 goto fail_free; 165 165 } 166 166 167 - err = super_setup_bdi_name(sb, "vboxsf-%s.%d", fc->source, sbi->bdi_id); 167 + err = super_setup_bdi_name(sb, "vboxsf-%d", sbi->bdi_id); 168 168 if (err) 169 169 goto fail_free; 170 170
+1
include/linux/amba/bus.h
··· 65 65 struct device dev; 66 66 struct resource res; 67 67 struct clk *pclk; 68 + struct device_dma_parameters dma_parms; 68 69 unsigned int periphid; 69 70 unsigned int cid; 70 71 struct amba_cs_uci_id uci;
+1
include/linux/backing-dev-defs.h
··· 219 219 wait_queue_head_t wb_waitq; 220 220 221 221 struct device *dev; 222 + char dev_name[64]; 222 223 struct device *owner; 223 224 224 225 struct timer_list laptop_mode_wb_timer;
+1 -8
include/linux/backing-dev.h
··· 505 505 (1 << WB_async_congested)); 506 506 } 507 507 508 - extern const char *bdi_unknown_name; 509 - 510 - static inline const char *bdi_dev_name(struct backing_dev_info *bdi) 511 - { 512 - if (!bdi || !bdi->dev) 513 - return bdi_unknown_name; 514 - return dev_name(bdi->dev); 515 - } 508 + const char *bdi_dev_name(struct backing_dev_info *bdi); 516 509 517 510 #endif /* _LINUX_BACKING_DEV_H */
+1 -2
include/linux/dma-buf.h
··· 329 329 330 330 /** 331 331 * struct dma_buf_attach_ops - importer operations for an attachment 332 - * @move_notify: [optional] notification that the DMA-buf is moving 333 332 * 334 333 * Attachment operations implemented by the importer. 335 334 */ 336 335 struct dma_buf_attach_ops { 337 336 /** 338 - * @move_notify 337 + * @move_notify: [optional] notification that the DMA-buf is moving 339 338 * 340 339 * If this callback is provided the framework can avoid pinning the 341 340 * backing store while mappings exists.
+6 -6
include/linux/dmaengine.h
··· 83 83 /** 84 84 * Interleaved Transfer Request 85 85 * ---------------------------- 86 - * A chunk is collection of contiguous bytes to be transfered. 86 + * A chunk is collection of contiguous bytes to be transferred. 87 87 * The gap(in bytes) between two chunks is called inter-chunk-gap(ICG). 88 - * ICGs may or maynot change between chunks. 88 + * ICGs may or may not change between chunks. 89 89 * A FRAME is the smallest series of contiguous {chunk,icg} pairs, 90 90 * that when repeated an integral number of times, specifies the transfer. 91 91 * A transfer template is specification of a Frame, the number of times ··· 341 341 * @chan: driver channel device 342 342 * @device: sysfs device 343 343 * @dev_id: parent dma_device dev_id 344 - * @idr_ref: reference count to gate release of dma_device dev_id 345 344 */ 346 345 struct dma_chan_dev { 347 346 struct dma_chan *chan; 348 347 struct device device; 349 348 int dev_id; 350 - atomic_t *idr_ref; 351 349 }; 352 350 353 351 /** ··· 833 835 int dev_id; 834 836 struct device *dev; 835 837 struct module *owner; 838 + struct ida chan_ida; 839 + struct mutex chan_mutex; /* to protect chan_ida */ 836 840 837 841 u32 src_addr_widths; 838 842 u32 dst_addr_widths; ··· 1069 1069 * dmaengine_synchronize() needs to be called before it is safe to free 1070 1070 * any memory that is accessed by previously submitted descriptors or before 1071 1071 * freeing any resources accessed from within the completion callback of any 1072 - * perviously submitted descriptors. 1072 + * previously submitted descriptors. 1073 1073 * 1074 1074 * This function can be called from atomic context as well as from within a 1075 1075 * complete callback of a descriptor submitted on the same channel. ··· 1091 1091 * 1092 1092 * Synchronizes to the DMA channel termination to the current context. 
When this 1093 1093 * function returns it is guaranteed that all transfers for previously issued 1094 - * descriptors have stopped and and it is safe to free the memory assoicated 1094 + * descriptors have stopped and it is safe to free the memory associated 1095 1095 * with them. Furthermore it is guaranteed that all complete callback functions 1096 1096 * for a previously submitted descriptor have finished running and it is safe to 1097 1097 * free resources accessed from within the complete callbacks.
+1 -1
include/linux/fs.h
··· 983 983 __u32 handle_bytes; 984 984 int handle_type; 985 985 /* file identifier */ 986 - unsigned char f_handle[0]; 986 + unsigned char f_handle[]; 987 987 }; 988 988 989 989 static inline struct file *get_file(struct file *f)
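The one-line fs.h hunk above converts the old GNU zero-length array `f_handle[0]` to a C99 flexible array member `f_handle[]`. Allocation semantics are unchanged: `sizeof` the struct excludes the trailing array, so callers allocate header plus payload. A user-space sketch of the idiom (the struct and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Shape of struct file_handle after the change: trailing flexible array. */
struct fh_like {
    uint32_t handle_bytes;
    int handle_type;
    unsigned char f_handle[];   /* not counted by sizeof(struct fh_like) */
};

static struct fh_like *fh_alloc(const void *bytes, uint32_t n)
{
    struct fh_like *fh = malloc(sizeof(*fh) + n);  /* header + payload */

    if (fh) {
        fh->handle_bytes = n;
        fh->handle_type = 0;
        memcpy(fh->f_handle, bytes, n);
    }
    return fh;
}
```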
+1 -1
include/linux/lsm_hook_defs.h
··· 55 55 LSM_HOOK(void, LSM_RET_VOID, bprm_committed_creds, struct linux_binprm *bprm) 56 56 LSM_HOOK(int, 0, fs_context_dup, struct fs_context *fc, 57 57 struct fs_context *src_sc) 58 - LSM_HOOK(int, 0, fs_context_parse_param, struct fs_context *fc, 58 + LSM_HOOK(int, -ENOPARAM, fs_context_parse_param, struct fs_context *fc, 59 59 struct fs_parameter *param) 60 60 LSM_HOOK(int, 0, sb_alloc_security, struct super_block *sb) 61 61 LSM_HOOK(void, LSM_RET_VOID, sb_free_security, struct super_block *sb)
+10 -6
include/linux/mhi.h
··· 53 53 * @MHI_CHAIN: Linked transfer 54 54 */ 55 55 enum mhi_flags { 56 - MHI_EOB, 57 - MHI_EOT, 58 - MHI_CHAIN, 56 + MHI_EOB = BIT(0), 57 + MHI_EOT = BIT(1), 58 + MHI_CHAIN = BIT(2), 59 59 }; 60 60 61 61 /** ··· 335 335 * @syserr_worker: System error worker 336 336 * @state_event: State change event 337 337 * @status_cb: CB function to notify power states of the device (required) 338 - * @link_status: CB function to query link status of the device (required) 339 338 * @wake_get: CB function to assert device wake (optional) 340 339 * @wake_put: CB function to de-assert device wake (optional) 341 340 * @wake_toggle: CB function to assert and de-assert device wake (optional) 342 341 * @runtime_get: CB function to controller runtime resume (required) 343 - * @runtimet_put: CB function to decrement pm usage (required) 342 + * @runtime_put: CB function to decrement pm usage (required) 344 343 * @map_single: CB function to create TRE buffer 345 344 * @unmap_single: CB function to destroy TRE buffer 345 + * @read_reg: Read a MHI register via the physical link (required) 346 + * @write_reg: Write a MHI register via the physical link (required) 346 347 * @buffer_len: Bounce buffer length 347 348 * @bounce_buf: Use of bounce buffer 348 349 * @fbc_download: MHI host needs to do complete image transfer (optional) ··· 418 417 419 418 void (*status_cb)(struct mhi_controller *mhi_cntrl, 420 419 enum mhi_callback cb); 421 - int (*link_status)(struct mhi_controller *mhi_cntrl); 422 420 void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override); 423 421 void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override); 424 422 void (*wake_toggle)(struct mhi_controller *mhi_cntrl); ··· 427 427 struct mhi_buf_info *buf); 428 428 void (*unmap_single)(struct mhi_controller *mhi_cntrl, 429 429 struct mhi_buf_info *buf); 430 + int (*read_reg)(struct mhi_controller *mhi_cntrl, void __iomem *addr, 431 + u32 *out); 432 + void (*write_reg)(struct mhi_controller *mhi_cntrl, void 
__iomem *addr, 433 + u32 val); 430 434 431 435 size_t buffer_len; 432 436 bool bounce_buf;
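The `mhi_flags` hunk above replaces sequential enum values (0, 1, 2) with `BIT()` values so the flags can actually be OR'ed together and tested independently — with the old values, `MHI_EOB` was indistinguishable from "no flags" and `MHI_EOB | MHI_EOT` equaled `MHI_CHAIN`. A small sketch of why the distinction matters (`DEMO_*` names are illustrative):

```c
#include <assert.h>

#define BIT(n) (1U << (n))

/* Each flag occupies its own bit, mirroring the patched mhi_flags enum.
 * Combinations are unambiguous and each flag is individually testable. */
enum demo_flags {
	DEMO_EOB   = BIT(0),
	DEMO_EOT   = BIT(1),
	DEMO_CHAIN = BIT(2),
};
```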
+2
include/linux/nfs_xdr.h
··· 1317 1317 struct nfstime4 date; 1318 1318 }; 1319 1319 1320 + #define MAX_BIND_CONN_TO_SESSION_RETRIES 3 1320 1321 struct nfs41_bind_conn_to_session_args { 1321 1322 struct nfs_client *client; 1322 1323 struct nfs4_sessionid sessionid; 1323 1324 u32 dir; 1324 1325 bool use_conn_in_rdma_mode; 1326 + int retries; 1325 1327 }; 1326 1328 1327 1329 struct nfs41_bind_conn_to_session_res {
+1
include/linux/platform_data/cros_ec_sensorhub.h
··· 185 185 void cros_ec_sensorhub_unregister_push_data(struct cros_ec_sensorhub *sensorhub, 186 186 u8 sensor_num); 187 187 188 + int cros_ec_sensorhub_ring_allocate(struct cros_ec_sensorhub *sensorhub); 188 189 int cros_ec_sensorhub_ring_add(struct cros_ec_sensorhub *sensorhub); 189 190 void cros_ec_sensorhub_ring_remove(void *arg); 190 191 int cros_ec_sensorhub_ring_fifo_enable(struct cros_ec_sensorhub *sensorhub,
+1
include/linux/platform_device.h
··· 25 25 bool id_auto; 26 26 struct device dev; 27 27 u64 platform_dma_mask; 28 + struct device_dma_parameters dma_parms; 28 29 u32 num_resources; 29 30 struct resource *resource; 30 31
+12 -1
include/linux/sunrpc/clnt.h
··· 71 71 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 72 72 struct dentry *cl_debugfs; /* debugfs directory */ 73 73 #endif 74 - struct rpc_xprt_iter cl_xpi; 74 + /* cl_work is only needed after cl_xpi is no longer used, 75 + * and that are of similar size 76 + */ 77 + union { 78 + struct rpc_xprt_iter cl_xpi; 79 + struct work_struct cl_work; 80 + }; 75 81 const struct cred *cl_cred; 76 82 }; 77 83 ··· 242 236 (task->tk_msg.rpc_proc->p_decode != NULL); 243 237 } 244 238 239 + static inline void rpc_task_close_connection(struct rpc_task *task) 240 + { 241 + if (task->tk_xprt) 242 + xprt_force_disconnect(task->tk_xprt); 243 + } 245 244 #endif /* _LINUX_SUNRPC_CLNT_H */
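The `clnt.h` hunk overlays `cl_work` on `cl_xpi` in an anonymous union: the two members are never live at the same time, so the struct only pays for the larger of the two. A sketch of the space-sharing property with stand-in (hypothetical) member types:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins of comparable size for rpc_xprt_iter and work_struct;
 * the real types are kernel-internal. */
struct fake_xpi  { void *cursor; unsigned long state[3]; };
struct fake_work { void *func;   unsigned long data[3];  };

/* A union stores only its largest member, not the sum of all members. */
union overlay {
	struct fake_xpi  xpi;
	struct fake_work work;
};
```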
-51
include/linux/tcp.h
··· 78 78 #define TCP_SACK_SEEN (1 << 0) /*1 = peer is SACK capable, */ 79 79 #define TCP_DSACK_SEEN (1 << 2) /*1 = DSACK was received from peer*/ 80 80 81 - #if IS_ENABLED(CONFIG_MPTCP) 82 - struct mptcp_options_received { 83 - u64 sndr_key; 84 - u64 rcvr_key; 85 - u64 data_ack; 86 - u64 data_seq; 87 - u32 subflow_seq; 88 - u16 data_len; 89 - u16 mp_capable : 1, 90 - mp_join : 1, 91 - dss : 1, 92 - add_addr : 1, 93 - rm_addr : 1, 94 - family : 4, 95 - echo : 1, 96 - backup : 1; 97 - u32 token; 98 - u32 nonce; 99 - u64 thmac; 100 - u8 hmac[20]; 101 - u8 join_id; 102 - u8 use_map:1, 103 - dsn64:1, 104 - data_fin:1, 105 - use_ack:1, 106 - ack64:1, 107 - mpc_map:1, 108 - __unused:2; 109 - u8 addr_id; 110 - u8 rm_id; 111 - union { 112 - struct in_addr addr; 113 - #if IS_ENABLED(CONFIG_MPTCP_IPV6) 114 - struct in6_addr addr6; 115 - #endif 116 - }; 117 - u64 ahmac; 118 - u16 port; 119 - }; 120 - #endif 121 - 122 81 struct tcp_options_received { 123 82 /* PAWS/RTTM data */ 124 83 int ts_recent_stamp;/* Time we stored ts_recent (for aging) */ ··· 95 136 u8 num_sacks; /* Number of SACK blocks */ 96 137 u16 user_mss; /* mss requested by user in ioctl */ 97 138 u16 mss_clamp; /* Maximal mss, negotiated at connection setup */ 98 - #if IS_ENABLED(CONFIG_MPTCP) 99 - struct mptcp_options_received mptcp; 100 - #endif 101 139 }; 102 140 103 141 static inline void tcp_clear_options(struct tcp_options_received *rx_opt) ··· 103 147 rx_opt->wscale_ok = rx_opt->snd_wscale = 0; 104 148 #if IS_ENABLED(CONFIG_SMC) 105 149 rx_opt->smc_ok = 0; 106 - #endif 107 - #if IS_ENABLED(CONFIG_MPTCP) 108 - rx_opt->mptcp.mp_capable = 0; 109 - rx_opt->mptcp.mp_join = 0; 110 - rx_opt->mptcp.add_addr = 0; 111 - rx_opt->mptcp.rm_addr = 0; 112 - rx_opt->mptcp.dss = 0; 113 150 #endif 114 151 } 115 152
+1 -1
include/linux/tty.h
··· 66 66 int read; 67 67 int flags; 68 68 /* Data points here */ 69 - unsigned long data[0]; 69 + unsigned long data[]; 70 70 }; 71 71 72 72 /* Values for .flags field of tty_buffer */
+24 -2
include/linux/virtio_net.h
··· 3 3 #define _LINUX_VIRTIO_NET_H 4 4 5 5 #include <linux/if_vlan.h> 6 + #include <uapi/linux/tcp.h> 7 + #include <uapi/linux/udp.h> 6 8 #include <uapi/linux/virtio_net.h> 7 9 8 10 static inline int virtio_net_hdr_set_proto(struct sk_buff *skb, ··· 30 28 bool little_endian) 31 29 { 32 30 unsigned int gso_type = 0; 31 + unsigned int thlen = 0; 32 + unsigned int ip_proto; 33 33 34 34 if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) { 35 35 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { 36 36 case VIRTIO_NET_HDR_GSO_TCPV4: 37 37 gso_type = SKB_GSO_TCPV4; 38 + ip_proto = IPPROTO_TCP; 39 + thlen = sizeof(struct tcphdr); 38 40 break; 39 41 case VIRTIO_NET_HDR_GSO_TCPV6: 40 42 gso_type = SKB_GSO_TCPV6; 43 + ip_proto = IPPROTO_TCP; 44 + thlen = sizeof(struct tcphdr); 41 45 break; 42 46 case VIRTIO_NET_HDR_GSO_UDP: 43 47 gso_type = SKB_GSO_UDP; 48 + ip_proto = IPPROTO_UDP; 49 + thlen = sizeof(struct udphdr); 44 50 break; 45 51 default: 46 52 return -EINVAL; ··· 67 57 68 58 if (!skb_partial_csum_set(skb, start, off)) 69 59 return -EINVAL; 60 + 61 + if (skb_transport_offset(skb) + thlen > skb_headlen(skb)) 62 + return -EINVAL; 70 63 } else { 71 64 /* gso packets without NEEDS_CSUM do not set transport_offset. 72 65 * probe and drop if does not match one of the above types. 
73 66 */ 74 67 if (gso_type && skb->network_header) { 68 + struct flow_keys_basic keys; 69 + 75 70 if (!skb->protocol) 76 71 virtio_net_hdr_set_proto(skb, hdr); 77 72 retry: 78 - skb_probe_transport_header(skb); 79 - if (!skb_transport_header_was_set(skb)) { 73 + if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys, 74 + NULL, 0, 0, 0, 75 + 0)) { 80 76 /* UFO does not specify ipv4 or 6: try both */ 81 77 if (gso_type & SKB_GSO_UDP && 82 78 skb->protocol == htons(ETH_P_IP)) { ··· 91 75 } 92 76 return -EINVAL; 93 77 } 78 + 79 + if (keys.control.thoff + thlen > skb_headlen(skb) || 80 + keys.basic.ip_proto != ip_proto) 81 + return -EINVAL; 82 + 83 + skb_set_transport_header(skb, keys.control.thoff); 94 84 } 95 85 } 96 86
+1
include/linux/virtio_vsock.h
··· 48 48 u32 len; 49 49 u32 off; 50 50 bool reply; 51 + bool tap_delivered; 51 52 }; 52 53 53 54 struct virtio_vsock_pkt_info {
+8 -1
include/net/flow_offload.h
··· 166 166 enum flow_action_hw_stats_bit { 167 167 FLOW_ACTION_HW_STATS_IMMEDIATE_BIT, 168 168 FLOW_ACTION_HW_STATS_DELAYED_BIT, 169 + FLOW_ACTION_HW_STATS_DISABLED_BIT, 169 170 }; 170 171 171 172 enum flow_action_hw_stats { 172 - FLOW_ACTION_HW_STATS_DISABLED = 0, 173 + FLOW_ACTION_HW_STATS_DONT_CARE = 0, 173 174 FLOW_ACTION_HW_STATS_IMMEDIATE = 174 175 BIT(FLOW_ACTION_HW_STATS_IMMEDIATE_BIT), 175 176 FLOW_ACTION_HW_STATS_DELAYED = BIT(FLOW_ACTION_HW_STATS_DELAYED_BIT), 176 177 FLOW_ACTION_HW_STATS_ANY = FLOW_ACTION_HW_STATS_IMMEDIATE | 177 178 FLOW_ACTION_HW_STATS_DELAYED, 179 + FLOW_ACTION_HW_STATS_DISABLED = 180 + BIT(FLOW_ACTION_HW_STATS_DISABLED_BIT), 178 181 }; 179 182 180 183 typedef void (*action_destr)(void *priv); ··· 328 325 return true; 329 326 if (!flow_action_mixed_hw_stats_check(action, extack)) 330 327 return false; 328 + 331 329 action_entry = flow_action_first_entry_get(action); 330 + if (action_entry->hw_stats == FLOW_ACTION_HW_STATS_DONT_CARE) 331 + return true; 332 + 332 333 if (!check_allow_bit && 333 334 action_entry->hw_stats != FLOW_ACTION_HW_STATS_ANY) { 334 335 NL_SET_ERR_MSG_MOD(extack, "Driver supports only default HW stats type \"any\"");
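The `flow_offload.h` hunk above splits the old all-zero `FLOW_ACTION_HW_STATS_DISABLED` into two distinct things: a `DONT_CARE` sentinel (value 0, matches any driver capability) and a real `DISABLED` bit that drivers must explicitly support. A sketch of the two roles (`HW_*` names are illustrative, not the kernel identifiers):

```c
#include <assert.h>

#define BIT(n) (1U << (n))

enum { HW_IMMEDIATE_BIT, HW_DELAYED_BIT, HW_DISABLED_BIT };

enum hw_stats {
	HW_DONT_CARE = 0,		/* sentinel: no preference stated */
	HW_IMMEDIATE = BIT(HW_IMMEDIATE_BIT),
	HW_DELAYED   = BIT(HW_DELAYED_BIT),
	HW_ANY       = HW_IMMEDIATE | HW_DELAYED,
	HW_DISABLED  = BIT(HW_DISABLED_BIT),	/* a real request, not absence of one */
};

/* Mirrors the added early-return: a don't-care entry passes the
 * capability check regardless of what the driver supports. */
static int stats_type_ok(enum hw_stats requested, unsigned int driver_caps)
{
	if (requested == HW_DONT_CARE)
		return 1;
	return (requested & ~driver_caps) == 0;
}
```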
+55 -2
include/net/inet_ecn.h
··· 99 99 return 1; 100 100 } 101 101 102 + static inline int IP_ECN_set_ect1(struct iphdr *iph) 103 + { 104 + u32 check = (__force u32)iph->check; 105 + 106 + if ((iph->tos & INET_ECN_MASK) != INET_ECN_ECT_0) 107 + return 0; 108 + 109 + check += (__force u16)htons(0x100); 110 + 111 + iph->check = (__force __sum16)(check + (check>=0xFFFF)); 112 + iph->tos ^= INET_ECN_MASK; 113 + return 1; 114 + } 115 + 102 116 static inline void IP_ECN_clear(struct iphdr *iph) 103 117 { 104 118 iph->tos &= ~INET_ECN_MASK; ··· 148 134 return 1; 149 135 } 150 136 137 + static inline int IP6_ECN_set_ect1(struct sk_buff *skb, struct ipv6hdr *iph) 138 + { 139 + __be32 from, to; 140 + 141 + if ((ipv6_get_dsfield(iph) & INET_ECN_MASK) != INET_ECN_ECT_0) 142 + return 0; 143 + 144 + from = *(__be32 *)iph; 145 + to = from ^ htonl(INET_ECN_MASK << 20); 146 + *(__be32 *)iph = to; 147 + if (skb->ip_summed == CHECKSUM_COMPLETE) 148 + skb->csum = csum_add(csum_sub(skb->csum, (__force __wsum)from), 149 + (__force __wsum)to); 150 + return 1; 151 + } 152 + 151 153 static inline void ipv6_copy_dscp(unsigned int dscp, struct ipv6hdr *inner) 152 154 { 153 155 dscp &= ~INET_ECN_MASK; ··· 183 153 if (skb_network_header(skb) + sizeof(struct ipv6hdr) <= 184 154 skb_tail_pointer(skb)) 185 155 return IP6_ECN_set_ce(skb, ipv6_hdr(skb)); 156 + break; 157 + } 158 + 159 + return 0; 160 + } 161 + 162 + static inline int INET_ECN_set_ect1(struct sk_buff *skb) 163 + { 164 + switch (skb->protocol) { 165 + case cpu_to_be16(ETH_P_IP): 166 + if (skb_network_header(skb) + sizeof(struct iphdr) <= 167 + skb_tail_pointer(skb)) 168 + return IP_ECN_set_ect1(ip_hdr(skb)); 169 + break; 170 + 171 + case cpu_to_be16(ETH_P_IPV6): 172 + if (skb_network_header(skb) + sizeof(struct ipv6hdr) <= 173 + skb_tail_pointer(skb)) 174 + return IP6_ECN_set_ect1(skb, ipv6_hdr(skb)); 186 175 break; 187 176 } 188 177 ··· 257 208 int rc; 258 209 259 210 rc = __INET_ECN_decapsulate(outer, inner, &set_ce); 260 - if (!rc && set_ce) 261 - 
INET_ECN_set_ce(skb); 211 + if (!rc) { 212 + if (set_ce) 213 + INET_ECN_set_ce(skb); 214 + else if ((outer & INET_ECN_MASK) == INET_ECN_ECT_1) 215 + INET_ECN_set_ect1(skb); 216 + } 262 217 263 218 return rc; 264 219 }
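The new `IP_ECN_set_ect1()` above flips ECT(0) (0b10) to ECT(1) (0b01) with `tos ^= INET_ECN_MASK` and patches `iph->check` incrementally instead of recomputing it over the whole header. The sketch below demonstrates that an RFC 1624-style incremental update (`HC' = ~(~HC + ~m + m')`) leaves the header checksum valid after the ToS flip; it is a userspace illustration over raw bytes, not the kernel's `__sum16` arithmetic:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Fold a 32-bit accumulator to a 16-bit one's-complement sum. */
static uint16_t fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffffu) + (sum >> 16);
	return (uint16_t)sum;
}

/* One's-complement sum of big-endian 16-bit words. */
static uint16_t ocsum(const uint8_t *p, size_t len)
{
	uint32_t sum = 0;

	for (size_t i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)p[i] << 8) | p[i + 1];
	return fold(sum);
}

/* Minimal 20-byte IPv4-ish header carrying ECT(0), with a valid checksum. */
static void build_header(uint8_t *hdr)
{
	uint16_t check;

	memset(hdr, 0, 20);
	hdr[0] = 0x45;		/* version 4, IHL 5 */
	hdr[1] = 0x02;		/* ToS: ECT(0) */
	hdr[3] = 20;		/* total length */
	hdr[8] = 64;		/* TTL */
	hdr[9] = 6;		/* protocol: TCP */
	check = ~ocsum(hdr, 20);	/* checksum field is still zero here */
	hdr[10] = check >> 8;
	hdr[11] = check & 0xff;
}

/* Flip ECT(0) -> ECT(1) (assumes the header carries ECT(0)) and patch the
 * stored checksum incrementally: HC' = ~(~HC + ~m + m'), RFC 1624 eqn. 3,
 * where m/m' are the old/new 16-bit word containing the ToS byte. */
static void set_ect1(uint8_t *hdr)
{
	uint16_t old_word = ((uint16_t)hdr[0] << 8) | hdr[1];
	uint16_t new_word, check;
	uint32_t sum;

	hdr[1] ^= 0x03;		/* ECT(0)=0b10 -> ECT(1)=0b01 */
	new_word = ((uint16_t)hdr[0] << 8) | hdr[1];

	check = ((uint16_t)hdr[10] << 8) | hdr[11];
	sum = (uint16_t)~check;
	sum += (uint16_t)~old_word;
	sum += new_word;
	check = ~fold(sum);
	hdr[10] = check >> 8;
	hdr[11] = check & 0xff;
}

/* A received IPv4 header is valid iff the sum over all its words,
 * including the checksum field, folds to 0xffff. */
static int header_valid(const uint8_t *hdr)
{
	return ocsum(hdr, 20) == 0xffff;
}
```

The kernel's `check += htons(0x100)` is the same update specialized to this one flip: the word holding ToS always decreases by exactly 1 when 0b10 becomes 0b01, so the complemented checksum increases by 1 in the appropriate byte order, with `check + (check >= 0xFFFF)` handling the end-around carry.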
+4
include/net/ip6_fib.h
··· 203 203 struct rt6_info { 204 204 struct dst_entry dst; 205 205 struct fib6_info __rcu *from; 206 + int sernum; 206 207 207 208 struct rt6key rt6i_dst; 208 209 struct rt6key rt6i_src; ··· 291 290 { 292 291 struct fib6_info *from; 293 292 u32 cookie = 0; 293 + 294 + if (rt->sernum) 295 + return rt->sernum; 294 296 295 297 rcu_read_lock(); 296 298
-3
include/net/mptcp.h
··· 68 68 return tcp_rsk(req)->is_mptcp; 69 69 } 70 70 71 - void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr, 72 - int opsize, struct tcp_options_received *opt_rx); 73 71 bool mptcp_syn_options(struct sock *sk, const struct sk_buff *skb, 74 72 unsigned int *size, struct mptcp_out_options *opts); 75 - void mptcp_rcv_synsent(struct sock *sk); 76 73 bool mptcp_synack_options(const struct request_sock *req, unsigned int *size, 77 74 struct mptcp_out_options *opts); 78 75 bool mptcp_established_options(struct sock *sk, struct sk_buff *skb,
+7
include/net/net_namespace.h
··· 437 437 return atomic_read(&net->ipv4.rt_genid); 438 438 } 439 439 440 + #if IS_ENABLED(CONFIG_IPV6) 441 + static inline int rt_genid_ipv6(const struct net *net) 442 + { 443 + return atomic_read(&net->ipv6.fib6_sernum); 444 + } 445 + #endif 446 + 440 447 static inline void rt_genid_bump_ipv4(struct net *net) 441 448 { 442 449 atomic_inc(&net->ipv4.rt_genid);
+1
include/net/sch_generic.h
··· 407 407 struct mutex lock; 408 408 struct list_head chain_list; 409 409 u32 index; /* block index for shared blocks */ 410 + u32 classid; /* which class this block belongs to */ 410 411 refcount_t refcnt; 411 412 struct net *net; 412 413 struct Qdisc *q;
+1
include/soc/mscc/ocelot.h
··· 502 502 unsigned int num_stats; 503 503 504 504 int shared_queue_sz; 505 + int num_mact_rows; 505 506 506 507 struct net_device *hw_bridge_dev; 507 508 u16 bridge_mask;
+1 -1
include/trace/events/gpu_mem.h
··· 24 24 * 25 25 * @pid: Put 0 for global total, while positive pid for process total. 26 26 * 27 - * @size: Virtual size of the allocation in bytes. 27 + * @size: Size of the allocation in bytes. 28 28 * 29 29 */ 30 30 TRACE_EVENT(gpu_mem_total,
+4 -8
include/trace/events/rpcrdma.h
··· 692 692 693 693 TRACE_EVENT(xprtrdma_post_send, 694 694 TP_PROTO( 695 - const struct rpcrdma_req *req, 696 - int status 695 + const struct rpcrdma_req *req 697 696 ), 698 697 699 - TP_ARGS(req, status), 698 + TP_ARGS(req), 700 699 701 700 TP_STRUCT__entry( 702 701 __field(const void *, req) ··· 704 705 __field(unsigned int, client_id) 705 706 __field(int, num_sge) 706 707 __field(int, signaled) 707 - __field(int, status) 708 708 ), 709 709 710 710 TP_fast_assign( ··· 716 718 __entry->sc = req->rl_sendctx; 717 719 __entry->num_sge = req->rl_wr.num_sge; 718 720 __entry->signaled = req->rl_wr.send_flags & IB_SEND_SIGNALED; 719 - __entry->status = status; 720 721 ), 721 722 722 - TP_printk("task:%u@%u req=%p sc=%p (%d SGE%s) %sstatus=%d", 723 + TP_printk("task:%u@%u req=%p sc=%p (%d SGE%s) %s", 723 724 __entry->task_id, __entry->client_id, 724 725 __entry->req, __entry->sc, __entry->num_sge, 725 726 (__entry->num_sge == 1 ? "" : "s"), 726 - (__entry->signaled ? "signaled " : ""), 727 - __entry->status 727 + (__entry->signaled ? "signaled" : "") 728 728 ) 729 729 ); 730 730
+4 -4
include/trace/events/wbt.h
··· 33 33 ), 34 34 35 35 TP_fast_assign( 36 - strlcpy(__entry->name, dev_name(bdi->dev), 36 + strlcpy(__entry->name, bdi_dev_name(bdi), 37 37 ARRAY_SIZE(__entry->name)); 38 38 __entry->rmean = stat[0].mean; 39 39 __entry->rmin = stat[0].min; ··· 68 68 ), 69 69 70 70 TP_fast_assign( 71 - strlcpy(__entry->name, dev_name(bdi->dev), 71 + strlcpy(__entry->name, bdi_dev_name(bdi), 72 72 ARRAY_SIZE(__entry->name)); 73 73 __entry->lat = div_u64(lat, 1000); 74 74 ), ··· 105 105 ), 106 106 107 107 TP_fast_assign( 108 - strlcpy(__entry->name, dev_name(bdi->dev), 108 + strlcpy(__entry->name, bdi_dev_name(bdi), 109 109 ARRAY_SIZE(__entry->name)); 110 110 __entry->msg = msg; 111 111 __entry->step = step; ··· 141 141 ), 142 142 143 143 TP_fast_assign( 144 - strlcpy(__entry->name, dev_name(bdi->dev), 144 + strlcpy(__entry->name, bdi_dev_name(bdi), 145 145 ARRAY_SIZE(__entry->name)); 146 146 __entry->status = status; 147 147 __entry->step = step;
+4
include/uapi/drm/amdgpu_drm.h
··· 346 346 #define AMDGPU_TILING_DCC_PITCH_MAX_MASK 0x3FFF 347 347 #define AMDGPU_TILING_DCC_INDEPENDENT_64B_SHIFT 43 348 348 #define AMDGPU_TILING_DCC_INDEPENDENT_64B_MASK 0x1 349 + #define AMDGPU_TILING_DCC_INDEPENDENT_128B_SHIFT 44 350 + #define AMDGPU_TILING_DCC_INDEPENDENT_128B_MASK 0x1 351 + #define AMDGPU_TILING_SCANOUT_SHIFT 63 352 + #define AMDGPU_TILING_SCANOUT_MASK 0x1 349 353 350 354 /* Set/Get helpers for tiling flags. */ 351 355 #define AMDGPU_TILING_SET(field, value) \
+1 -1
include/uapi/linux/bpf.h
··· 73 73 /* Key of an a BPF_MAP_TYPE_LPM_TRIE entry */ 74 74 struct bpf_lpm_trie_key { 75 75 __u32 prefixlen; /* up to 32 for AF_INET, 128 for AF_INET6 */ 76 - __u8 data[]; /* Arbitrary size */ 76 + __u8 data[0]; /* Arbitrary size */ 77 77 }; 78 78 79 79 struct bpf_cgroup_storage_key {
+2 -2
include/uapi/linux/dlm_device.h
··· 45 45 void __user *bastaddr; 46 46 struct dlm_lksb __user *lksb; 47 47 char lvb[DLM_USER_LVB_LEN]; 48 - char name[]; 48 + char name[0]; 49 49 }; 50 50 51 51 struct dlm_lspace_params { 52 52 __u32 flags; 53 53 __u32 minor; 54 - char name[]; 54 + char name[0]; 55 55 }; 56 56 57 57 struct dlm_purge_params {
+6
include/uapi/linux/dma-buf.h
··· 39 39 40 40 #define DMA_BUF_BASE 'b' 41 41 #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) 42 + 43 + /* 32/64bitness of this uapi was botched in android, there's no difference 44 + * between them in actual uapi, they're just different numbers. 45 + */ 42 46 #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) 47 + #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) 48 + #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) 43 49 44 50 #endif
+1 -1
include/uapi/linux/fiemap.h
··· 34 34 __u32 fm_mapped_extents;/* number of extents that were mapped (out) */ 35 35 __u32 fm_extent_count; /* size of fm_extents array (in) */ 36 36 __u32 fm_reserved; 37 - struct fiemap_extent fm_extents[]; /* array of mapped extents (out) */ 37 + struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */ 38 38 }; 39 39 40 40 #define FIEMAP_MAX_OFFSET (~0ULL)
+2 -2
include/uapi/linux/hyperv.h
··· 119 119 120 120 struct hv_fcopy_hdr { 121 121 __u32 operation; 122 - uuid_le service_id0; /* currently unused */ 123 - uuid_le service_id1; /* currently unused */ 122 + __u8 service_id0[16]; /* currently unused */ 123 + __u8 service_id1[16]; /* currently unused */ 124 124 } __attribute__((packed)); 125 125 126 126 #define OVER_WRITE 0x1
+3 -3
include/uapi/linux/if_arcnet.h
··· 60 60 __u8 proto; /* protocol ID field - varies */ 61 61 __u8 split_flag; /* for use with split packets */ 62 62 __be16 sequence; /* sequence number */ 63 - __u8 payload[]; /* space remaining in packet (504 bytes)*/ 63 + __u8 payload[0]; /* space remaining in packet (504 bytes)*/ 64 64 }; 65 65 #define RFC1201_HDR_SIZE 4 66 66 ··· 69 69 */ 70 70 struct arc_rfc1051 { 71 71 __u8 proto; /* ARC_P_RFC1051_ARP/RFC1051_IP */ 72 - __u8 payload[]; /* 507 bytes */ 72 + __u8 payload[0]; /* 507 bytes */ 73 73 }; 74 74 #define RFC1051_HDR_SIZE 1 75 75 ··· 80 80 struct arc_eth_encap { 81 81 __u8 proto; /* Always ARC_P_ETHER */ 82 82 struct ethhdr eth; /* standard ethernet header (yuck!) */ 83 - __u8 payload[]; /* 493 bytes */ 83 + __u8 payload[0]; /* 493 bytes */ 84 84 }; 85 85 #define ETH_ENCAP_HDR_SIZE 14 86 86
+1 -1
include/uapi/linux/mmc/ioctl.h
··· 57 57 */ 58 58 struct mmc_ioc_multi_cmd { 59 59 __u64 num_of_cmds; 60 - struct mmc_ioc_cmd cmds[]; 60 + struct mmc_ioc_cmd cmds[0]; 61 61 }; 62 62 63 63 #define MMC_IOC_CMD _IOWR(MMC_BLOCK_MAJOR, 0, struct mmc_ioc_cmd)
+2 -2
include/uapi/linux/net_dropmon.h
··· 29 29 30 30 struct net_dm_config_msg { 31 31 __u32 entries; 32 - struct net_dm_config_entry options[]; 32 + struct net_dm_config_entry options[0]; 33 33 }; 34 34 35 35 struct net_dm_alert_msg { 36 36 __u32 entries; 37 - struct net_dm_drop_point points[]; 37 + struct net_dm_drop_point points[0]; 38 38 }; 39 39 40 40 struct net_dm_user_msg {
+1 -1
include/uapi/linux/netfilter_bridge/ebt_among.h
··· 40 40 struct ebt_mac_wormhash { 41 41 int table[257]; 42 42 int poolsize; 43 - struct ebt_mac_wormhash_tuple pool[]; 43 + struct ebt_mac_wormhash_tuple pool[0]; 44 44 }; 45 45 46 46 #define ebt_mac_wormhash_size(x) ((x) ? sizeof(struct ebt_mac_wormhash) \
+1 -1
include/uapi/scsi/scsi_bsg_fc.h
··· 209 209 __u64 vendor_id; 210 210 211 211 /* start of vendor command area */ 212 - __u32 vendor_cmd[]; 212 + __u32 vendor_cmd[0]; 213 213 }; 214 214 215 215 /* Response:
-18
init/Kconfig
··· 39 39 config CC_HAS_ASM_INLINE 40 40 def_bool $(success,echo 'void foo(void) { asm inline (""); }' | $(CC) -x c - -c -o /dev/null) 41 41 42 - config CC_HAS_WARN_MAYBE_UNINITIALIZED 43 - def_bool $(cc-option,-Wmaybe-uninitialized) 44 - help 45 - GCC >= 4.7 supports this option. 46 - 47 - config CC_DISABLE_WARN_MAYBE_UNINITIALIZED 48 - bool 49 - depends on CC_HAS_WARN_MAYBE_UNINITIALIZED 50 - default CC_IS_GCC && GCC_VERSION < 40900 # unreliable for GCC < 4.9 51 - help 52 - GCC's -Wmaybe-uninitialized is not reliable by definition. 53 - Lots of false positive warnings are produced in some cases. 54 - 55 - If this option is enabled, -Wno-maybe-uninitialzed is passed 56 - to the compiler to suppress maybe-uninitialized warnings. 57 - 58 42 config CONSTRUCTORS 59 43 bool 60 44 depends on !UML ··· 1241 1257 config CC_OPTIMIZE_FOR_PERFORMANCE_O3 1242 1258 bool "Optimize more for performance (-O3)" 1243 1259 depends on ARC 1244 - imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives 1245 1260 help 1246 1261 Choosing this option will pass "-O3" to your compiler to optimize 1247 1262 the kernel yet more for performance. 1248 1263 1249 1264 config CC_OPTIMIZE_FOR_SIZE 1250 1265 bool "Optimize for size (-Os)" 1251 - imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives 1252 1266 help 1253 1267 Choosing this option will pass "-Os" to your compiler resulting 1254 1268 in a smaller kernel.
+1 -1
init/initramfs.c
··· 542 542 } 543 543 544 544 #ifdef CONFIG_KEXEC_CORE 545 - static bool kexec_free_initrd(void) 545 + static bool __init kexec_free_initrd(void) 546 546 { 547 547 unsigned long crashk_start = (unsigned long)__va(crashk_res.start); 548 548 unsigned long crashk_end = (unsigned long)__va(crashk_res.end);
+52 -17
init/main.c
··· 257 257 258 258 early_param("loglevel", loglevel); 259 259 260 + #ifdef CONFIG_BLK_DEV_INITRD 261 + static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum) 262 + { 263 + u32 size, csum; 264 + char *data; 265 + u32 *hdr; 266 + 267 + if (!initrd_end) 268 + return NULL; 269 + 270 + data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN; 271 + if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN)) 272 + return NULL; 273 + 274 + hdr = (u32 *)(data - 8); 275 + size = hdr[0]; 276 + csum = hdr[1]; 277 + 278 + data = ((void *)hdr) - size; 279 + if ((unsigned long)data < initrd_start) { 280 + pr_err("bootconfig size %d is greater than initrd size %ld\n", 281 + size, initrd_end - initrd_start); 282 + return NULL; 283 + } 284 + 285 + /* Remove bootconfig from initramfs/initrd */ 286 + initrd_end = (unsigned long)data; 287 + if (_size) 288 + *_size = size; 289 + if (_csum) 290 + *_csum = csum; 291 + 292 + return data; 293 + } 294 + #else 295 + static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum) 296 + { 297 + return NULL; 298 + } 299 + #endif 300 + 260 301 #ifdef CONFIG_BOOT_CONFIG 261 302 262 303 char xbc_namebuf[XBC_KEYLEN_MAX] __initdata; ··· 398 357 int pos; 399 358 u32 size, csum; 400 359 char *data, *copy; 401 - u32 *hdr; 402 360 int ret; 361 + 362 + data = get_boot_config_from_initrd(&size, &csum); 363 + if (!data) 364 + goto not_found; 403 365 404 366 strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE); 405 367 parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL, ··· 411 367 if (!bootconfig_found) 412 368 return; 413 369 414 - if (!initrd_end) 415 - goto not_found; 416 - 417 - data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN; 418 - if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN)) 419 - goto not_found; 420 - 421 - hdr = (u32 *)(data - 8); 422 - size = hdr[0]; 423 - csum = hdr[1]; 424 - 425 370 if (size >= XBC_DATA_MAX) { 426 371 pr_err("bootconfig size %d greater than max size %d\n", 427 372 size, XBC_DATA_MAX); 
428 373 return; 429 374 } 430 - 431 - data = ((void *)hdr) - size; 432 - if ((unsigned long)data < initrd_start) 433 - goto not_found; 434 375 435 376 if (boot_config_checksum((unsigned char *)data, size) != csum) { 436 377 pr_err("bootconfig checksum failed\n"); ··· 449 420 not_found: 450 421 pr_err("'bootconfig' found on command line, but no bootconfig found\n"); 451 422 } 423 + 452 424 #else 453 - #define setup_boot_config(cmdline) do { } while (0) 425 + 426 + static void __init setup_boot_config(const char *cmdline) 427 + { 428 + /* Remove bootconfig data from initrd */ 429 + get_boot_config_from_initrd(NULL, NULL); 430 + } 454 431 455 432 static int __init warn_bootconfig(char *str) 456 433 {
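The `main.c` refactor above moves the footer parsing into `get_boot_config_from_initrd()`, which walks backwards from the end of the initrd: magic string last, preceded by two `u32`s (size, checksum), preceded by the bootconfig payload itself. A userspace sketch of parsing that trailing-footer layout — `find_footer()` is an illustrative stand-in, and the little-endian `u32` handling is a simplification:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAGIC		"#BOOTCONFIG\n"	/* BOOTCONFIG_MAGIC in the kernel */
#define MAGIC_LEN	12

/* Given a blob that may end in [payload][u32 size][u32 csum][magic],
 * return a pointer to the payload and fill in size/csum, or NULL if the
 * magic is absent or the recorded size exceeds the space in front of it
 * (the same bogus-size case the patch reports with pr_err). */
static const uint8_t *find_footer(const uint8_t *blob, size_t blob_len,
				  uint32_t *size, uint32_t *csum)
{
	const uint8_t *end = blob + blob_len;
	uint32_t sz, cs;

	if (blob_len < MAGIC_LEN + 8)
		return NULL;
	if (memcmp(end - MAGIC_LEN, MAGIC, MAGIC_LEN) != 0)
		return NULL;

	memcpy(&sz, end - MAGIC_LEN - 8, 4);
	memcpy(&cs, end - MAGIC_LEN - 4, 4);
	if (sz > blob_len - MAGIC_LEN - 8)
		return NULL;

	*size = sz;
	*csum = cs;
	return end - MAGIC_LEN - 8 - sz;
}
```

Placing the footer at the end is what lets the kernel strip the bootconfig by simply lowering `initrd_end`, which the refactored helper now does even when `CONFIG_BOOT_CONFIG` is off.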
+26 -8
ipc/mqueue.c
··· 142 142 143 143 struct sigevent notify; 144 144 struct pid *notify_owner; 145 + u32 notify_self_exec_id; 145 146 struct user_namespace *notify_user_ns; 146 147 struct user_struct *user; /* user who created, for accounting */ 147 148 struct sock *notify_sock; ··· 774 773 * synchronously. */ 775 774 if (info->notify_owner && 776 775 info->attr.mq_curmsgs == 1) { 777 - struct kernel_siginfo sig_i; 778 776 switch (info->notify.sigev_notify) { 779 777 case SIGEV_NONE: 780 778 break; 781 - case SIGEV_SIGNAL: 782 - /* sends signal */ 779 + case SIGEV_SIGNAL: { 780 + struct kernel_siginfo sig_i; 781 + struct task_struct *task; 782 + 783 + /* do_mq_notify() accepts sigev_signo == 0, why?? */ 784 + if (!info->notify.sigev_signo) 785 + break; 783 786 784 787 clear_siginfo(&sig_i); 785 788 sig_i.si_signo = info->notify.sigev_signo; 786 789 sig_i.si_errno = 0; 787 790 sig_i.si_code = SI_MESGQ; 788 791 sig_i.si_value = info->notify.sigev_value; 789 - /* map current pid/uid into info->owner's namespaces */ 790 792 rcu_read_lock(); 793 + /* map current pid/uid into info->owner's namespaces */ 791 794 sig_i.si_pid = task_tgid_nr_ns(current, 792 795 ns_of_pid(info->notify_owner)); 793 - sig_i.si_uid = from_kuid_munged(info->notify_user_ns, current_uid()); 796 + sig_i.si_uid = from_kuid_munged(info->notify_user_ns, 797 + current_uid()); 798 + /* 799 + * We can't use kill_pid_info(), this signal should 800 + * bypass check_kill_permission(). It is from kernel 801 + * but si_fromuser() can't know this. 802 + * We do check the self_exec_id, to avoid sending 803 + * signals to programs that don't expect them. 
804 + */ 805 + task = pid_task(info->notify_owner, PIDTYPE_TGID); 806 + if (task && task->self_exec_id == 807 + info->notify_self_exec_id) { 808 + do_send_sig_info(info->notify.sigev_signo, 809 + &sig_i, task, PIDTYPE_TGID); 810 + } 794 811 rcu_read_unlock(); 795 - 796 - kill_pid_info(info->notify.sigev_signo, 797 - &sig_i, info->notify_owner); 798 812 break; 813 + } 799 814 case SIGEV_THREAD: 800 815 set_cookie(info->notify_cookie, NOTIFY_WOKENUP); 801 816 netlink_sendskb(info->notify_sock, info->notify_cookie); ··· 1400 1383 info->notify.sigev_signo = notification->sigev_signo; 1401 1384 info->notify.sigev_value = notification->sigev_value; 1402 1385 info->notify.sigev_notify = SIGEV_SIGNAL; 1386 + info->notify_self_exec_id = current->self_exec_id; 1403 1387 break; 1404 1388 } 1405 1389
+2 -2
kernel/kcov.c
··· 740 740 * kcov_remote_handle() with KCOV_SUBSYSTEM_COMMON as the subsystem id and an 741 741 * arbitrary 4-byte non-zero number as the instance id). This common handle 742 742 * then gets saved into the task_struct of the process that issued the 743 - * KCOV_REMOTE_ENABLE ioctl. When this proccess issues system calls that spawn 744 - * kernel threads, the common handle must be retrived via kcov_common_handle() 743 + * KCOV_REMOTE_ENABLE ioctl. When this process issues system calls that spawn 744 + * kernel threads, the common handle must be retrieved via kcov_common_handle() 745 745 * and passed to the spawned threads via custom annotations. Those kernel 746 746 * threads must in turn be annotated with kcov_remote_start(common_handle) and 747 747 * kcov_remote_stop(). All of the threads that are spawned by the same process
+7
kernel/power/hibernate.c
··· 898 898 error = freeze_processes(); 899 899 if (error) 900 900 goto Close_Finish; 901 + 902 + error = freeze_kernel_threads(); 903 + if (error) { 904 + thaw_processes(); 905 + goto Close_Finish; 906 + } 907 + 901 908 error = load_image_and_restore(); 902 909 thaw_processes(); 903 910 Finish:
-1
kernel/trace/Kconfig
··· 466 466 config PROFILE_ALL_BRANCHES 467 467 bool "Profile all if conditionals" if !FORTIFY_SOURCE 468 468 select TRACE_BRANCH_PROFILING 469 - imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED # avoid false positives 470 469 help 471 470 This tracer profiles all branch conditions. Every if () 472 471 taken in the kernel is recorded whether it hit or miss.
+24 -6
kernel/trace/preemptirq_delay_test.c
··· 113 113 114 114 for (i = 0; i < s; i++) 115 115 (testfuncs[i])(i); 116 + 117 + set_current_state(TASK_INTERRUPTIBLE); 118 + while (!kthread_should_stop()) { 119 + schedule(); 120 + set_current_state(TASK_INTERRUPTIBLE); 121 + } 122 + 123 + __set_current_state(TASK_RUNNING); 124 + 116 125 return 0; 117 126 } 118 127 119 - static struct task_struct *preemptirq_start_test(void) 128 + static int preemptirq_run_test(void) 120 129 { 130 + struct task_struct *task; 131 + 121 132 char task_name[50]; 122 133 123 134 snprintf(task_name, sizeof(task_name), "%s_test", test_mode); 124 - return kthread_run(preemptirq_delay_run, NULL, task_name); 135 + task = kthread_run(preemptirq_delay_run, NULL, task_name); 136 + if (IS_ERR(task)) 137 + return PTR_ERR(task); 138 + if (task) 139 + kthread_stop(task); 140 + return 0; 125 141 } 126 142 127 143 128 144 static ssize_t trigger_store(struct kobject *kobj, struct kobj_attribute *attr, 129 145 const char *buf, size_t count) 130 146 { 131 - preemptirq_start_test(); 147 + ssize_t ret; 148 + 149 + ret = preemptirq_run_test(); 150 + if (ret) 151 + return ret; 132 152 return count; 133 153 } 134 154 ··· 168 148 169 149 static int __init preemptirq_delay_init(void) 170 150 { 171 - struct task_struct *test_task; 172 151 int retval; 173 152 174 - test_task = preemptirq_start_test(); 175 - retval = PTR_ERR_OR_ZERO(test_task); 153 + retval = preemptirq_run_test(); 176 154 if (retval != 0) 177 155 return retval; 178 156
+15 -1
kernel/trace/trace.c
··· 947 947 EXPORT_SYMBOL_GPL(__trace_bputs); 948 948 949 949 #ifdef CONFIG_TRACER_SNAPSHOT 950 - void tracing_snapshot_instance_cond(struct trace_array *tr, void *cond_data) 950 + static void tracing_snapshot_instance_cond(struct trace_array *tr, 951 + void *cond_data) 951 952 { 952 953 struct tracer *tracer = tr->current_trace; 953 954 unsigned long flags; ··· 8526 8525 */ 8527 8526 allocate_snapshot = false; 8528 8527 #endif 8528 + 8529 + /* 8530 + * Because of some magic with the way alloc_percpu() works on 8531 + * x86_64, we need to synchronize the pgd of all the tables, 8532 + * otherwise the trace events that happen in x86_64 page fault 8533 + * handlers can't cope with accessing the chance that a 8534 + * alloc_percpu()'d memory might be touched in the page fault trace 8535 + * event. Oh, and we need to audit all other alloc_percpu() and vmalloc() 8536 + * calls in tracing, because something might get triggered within a 8537 + * page fault trace event! 8538 + */ 8539 + vmalloc_sync_mappings(); 8540 + 8529 8541 return 0; 8530 8542 } 8531 8543
+10 -14
kernel/trace/trace_boot.c
··· 95 95 struct xbc_node *anode; 96 96 char buf[MAX_BUF_LEN]; 97 97 const char *val; 98 - int ret; 99 - 100 - kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN); 101 - 102 - ret = kprobe_event_gen_cmd_start(&cmd, event, NULL); 103 - if (ret) 104 - return ret; 98 + int ret = 0; 105 99 106 100 xbc_node_for_each_array_value(node, "probes", anode, val) { 107 - ret = kprobe_event_add_field(&cmd, val); 108 - if (ret) 109 - return ret; 110 - } 101 + kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN); 111 102 112 - ret = kprobe_event_gen_cmd_end(&cmd); 113 - if (ret) 114 - pr_err("Failed to add probe: %s\n", buf); 103 + ret = kprobe_event_gen_cmd_start(&cmd, event, val); 104 + if (ret) 105 + break; 106 + 107 + ret = kprobe_event_gen_cmd_end(&cmd); 108 + if (ret) 109 + pr_err("Failed to add probe: %s\n", buf); 110 + } 115 111 116 112 return ret; 117 113 }
+7 -1
kernel/trace/trace_kprobe.c
··· 453 453 454 454 static bool within_notrace_func(struct trace_kprobe *tk) 455 455 { 456 - unsigned long addr = addr = trace_kprobe_address(tk); 456 + unsigned long addr = trace_kprobe_address(tk); 457 457 char symname[KSYM_NAME_LEN], *p; 458 458 459 459 if (!__within_notrace_func(addr)) ··· 940 940 * complete command or only the first part of it; in the latter case, 941 941 * kprobe_event_add_fields() can be used to add more fields following this. 942 942 * 943 + * Unlike synth_event_gen_cmd_start(), @loc must be specified. This 944 + * returns -EINVAL if @loc == NULL. 945 + * 943 946 * Return: 0 if successful, error otherwise. 944 947 */ 945 948 int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd, bool kretprobe, ··· 954 951 int ret; 955 952 956 953 if (cmd->type != DYNEVENT_TYPE_KPROBE) 954 + return -EINVAL; 955 + 956 + if (!loc) 957 957 return -EINVAL; 958 958 959 959 if (kretprobe)
+5
kernel/umh.c
··· 544 544 * Runs a user-space application. The application is started 545 545 * asynchronously if wait is not set, and runs as a child of system workqueues. 546 546 * (ie. it runs with full root capabilities and optimized affinity). 547 + * 548 + * Note: successful return value does not guarantee the helper was called at 549 + * all. You can't rely on sub_info->{init,cleanup} being called even for 550 + * UMH_WAIT_* wait modes as STATIC_USERMODEHELPER_PATH="" turns all helpers 551 + * into a successful no-op. 547 552 */ 548 553 int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait) 549 554 {
+7 -10
lib/Kconfig.ubsan
··· 60 60 Enabling this option will get kernel image size increased 61 61 significantly. 62 62 63 - config UBSAN_NO_ALIGNMENT 64 - bool "Disable checking of pointers alignment" 65 - default y if HAVE_EFFICIENT_UNALIGNED_ACCESS 66 - help 67 - This option disables the check of unaligned memory accesses. 68 - This option should be used when building allmodconfig. 69 - Disabling this option on architectures that support unaligned 70 - accesses may produce a lot of false positives. 71 - 72 63 config UBSAN_ALIGNMENT 73 - def_bool !UBSAN_NO_ALIGNMENT 64 + bool "Enable checks for pointers alignment" 65 + default !HAVE_EFFICIENT_UNALIGNED_ACCESS 66 + depends on !X86 || !COMPILE_TEST 67 + help 68 + This option enables the check of unaligned memory accesses. 69 + Enabling this option on architectures that support unaligned 70 + accesses may produce a lot of false positives. 74 71 75 72 config TEST_UBSAN 76 73 tristate "Module for testing for undefined behavior detection"
+1 -1
lib/kunit/test.c
··· 93 93 * representation. 94 94 */ 95 95 if (suite) 96 - pr_info("%s %zd - %s", 96 + pr_info("%s %zd - %s\n", 97 97 kunit_status_to_string(is_ok), 98 98 test_number, description); 99 99 else
+11 -2
mm/backing-dev.c
··· 21 21 EXPORT_SYMBOL_GPL(noop_backing_dev_info); 22 22 23 23 static struct class *bdi_class; 24 - const char *bdi_unknown_name = "(unknown)"; 24 + static const char *bdi_unknown_name = "(unknown)"; 25 25 26 26 /* 27 27 * bdi_lock protects bdi_tree and updates to bdi_list. bdi_list has RCU ··· 938 938 if (bdi->dev) /* The driver needs to use separate queues per device */ 939 939 return 0; 940 940 941 - dev = device_create_vargs(bdi_class, NULL, MKDEV(0, 0), bdi, fmt, args); 941 + vsnprintf(bdi->dev_name, sizeof(bdi->dev_name), fmt, args); 942 + dev = device_create(bdi_class, NULL, MKDEV(0, 0), bdi, bdi->dev_name); 942 943 if (IS_ERR(dev)) 943 944 return PTR_ERR(dev); 944 945 ··· 1043 1042 kref_put(&bdi->refcnt, release_bdi); 1044 1043 } 1045 1044 EXPORT_SYMBOL(bdi_put); 1045 + 1046 + const char *bdi_dev_name(struct backing_dev_info *bdi) 1047 + { 1048 + if (!bdi || !bdi->dev) 1049 + return bdi_unknown_name; 1050 + return bdi->dev_name; 1051 + } 1052 + EXPORT_SYMBOL_GPL(bdi_dev_name); 1046 1053 1047 1054 static wait_queue_head_t congestion_wqh[2] = { 1048 1055 __WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]),
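The mm/backing-dev.c change above formats the device name once into a buffer owned by the bdi and adds a bdi_dev_name() accessor that falls back to "(unknown)" when no device is attached. A minimal userspace sketch of that pattern (the fake_* struct and names here are hypothetical stand-ins, not the kernel types):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for struct backing_dev_info: the formatted name lives in a
 * fixed buffer inside the object, so it stays readable even while the
 * device itself is being torn down. */
struct fake_bdi {
	int has_dev;       /* models bdi->dev being non-NULL */
	char dev_name[64]; /* models bdi->dev_name */
};

static const char *fake_bdi_unknown_name = "(unknown)";

/* Models the vsnprintf() into bdi->dev_name done at registration time. */
static void fake_bdi_register(struct fake_bdi *bdi, const char *fmt, ...)
{
	va_list args;

	va_start(args, fmt);
	vsnprintf(bdi->dev_name, sizeof(bdi->dev_name), fmt, args);
	va_end(args);
	bdi->has_dev = 1;
}

/* Models bdi_dev_name(): never returns NULL, falls back to "(unknown)". */
static const char *fake_bdi_dev_name(const struct fake_bdi *bdi)
{
	if (!bdi || !bdi->has_dev)
		return fake_bdi_unknown_name;
	return bdi->dev_name;
}

static int demo_bdi_name(void)
{
	struct fake_bdi bdi = { 0 };

	assert(strcmp(fake_bdi_dev_name(NULL), "(unknown)") == 0);
	assert(strcmp(fake_bdi_dev_name(&bdi), "(unknown)") == 0);
	fake_bdi_register(&bdi, "%u:%u", 8, 0);
	assert(strcmp(fake_bdi_dev_name(&bdi), "8:0") == 0);
	return 0;
}
```

Snapshotting the name into an owned buffer is what lets callers print it safely during device teardown, which is the race the patch addresses.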
+9 -6
mm/memcontrol.c
··· 4990 4990 unsigned int size; 4991 4991 int node; 4992 4992 int __maybe_unused i; 4993 + long error = -ENOMEM; 4993 4994 4994 4995 size = sizeof(struct mem_cgroup); 4995 4996 size += nr_node_ids * sizeof(struct mem_cgroup_per_node *); 4996 4997 4997 4998 memcg = kzalloc(size, GFP_KERNEL); 4998 4999 if (!memcg) 4999 - return NULL; 5000 + return ERR_PTR(error); 5000 5001 5001 5002 memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL, 5002 5003 1, MEM_CGROUP_ID_MAX, 5003 5004 GFP_KERNEL); 5004 - if (memcg->id.id < 0) 5005 + if (memcg->id.id < 0) { 5006 + error = memcg->id.id; 5005 5007 goto fail; 5008 + } 5006 5009 5007 5010 memcg->vmstats_local = alloc_percpu(struct memcg_vmstats_percpu); 5008 5011 if (!memcg->vmstats_local) ··· 5049 5046 fail: 5050 5047 mem_cgroup_id_remove(memcg); 5051 5048 __mem_cgroup_free(memcg); 5052 - return NULL; 5049 + return ERR_PTR(error); 5053 5050 } 5054 5051 5055 5052 static struct cgroup_subsys_state * __ref ··· 5060 5057 long error = -ENOMEM; 5061 5058 5062 5059 memcg = mem_cgroup_alloc(); 5063 - if (!memcg) 5064 - return ERR_PTR(error); 5060 + if (IS_ERR(memcg)) 5061 + return ERR_CAST(memcg); 5065 5062 5066 5063 WRITE_ONCE(memcg->high, PAGE_COUNTER_MAX); 5067 5064 memcg->soft_limit = PAGE_COUNTER_MAX; ··· 5111 5108 fail: 5112 5109 mem_cgroup_id_remove(memcg); 5113 5110 mem_cgroup_free(memcg); 5114 - return ERR_PTR(-ENOMEM); 5111 + return ERR_PTR(error); 5115 5112 } 5116 5113 5117 5114 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
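The mm/memcontrol.c hunk switches mem_cgroup_alloc() from returning NULL to propagating the real errno via ERR_PTR(), so css_alloc can report -ENOSPC from idr exhaustion rather than a blanket -ENOMEM. A simplified userspace rendition of the kernel's ERR_PTR/PTR_ERR/IS_ERR convention (the real macros live in include/linux/err.h; this is only a sketch with hypothetical names):

```c
#include <assert.h>
#include <errno.h>

#define MAX_ERRNO 4095

/* Encode a small negative errno at the top of the pointer range, the
 * same trick the kernel's ERR_PTR()/PTR_ERR()/IS_ERR() macros use. */
static inline void *err_ptr(long error)
{
	return (void *)error;
}

static inline long ptr_err(const void *ptr)
{
	return (long)ptr;
}

static inline int is_err(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical allocator in the style of the patched mem_cgroup_alloc():
 * failure paths report *which* error occurred, not just NULL. */
static void *alloc_thing(int fail_errno)
{
	static int thing;

	if (fail_errno)
		return err_ptr(-fail_errno);
	return &thing;
}

static int demo_err_ptr(void)
{
	void *ok = alloc_thing(0);
	void *bad = alloc_thing(ENOMEM);

	assert(!is_err(ok));
	assert(is_err(bad));
	assert(ptr_err(bad) == -ENOMEM);
	return 0;
}
```

The caller-side change in the hunk follows directly: test with IS_ERR() instead of NULL and forward the encoded error with ERR_CAST() rather than inventing a new one.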
+9
mm/page_alloc.c
··· 1607 1607 if (!__pageblock_pfn_to_page(block_start_pfn, 1608 1608 block_end_pfn, zone)) 1609 1609 return; 1610 + cond_resched(); 1610 1611 } 1611 1612 1612 1613 /* We confirm that there is no hole */ ··· 2400 2399 unsigned long max_boost; 2401 2400 2402 2401 if (!watermark_boost_factor) 2402 + return; 2403 + /* 2404 + * Don't bother in zones that are unlikely to produce results. 2405 + * On small machines, including kdump capture kernels running 2406 + * in a small area, boosting the watermark can cause an out of 2407 + * memory situation immediately. 2408 + */ 2409 + if ((pageblock_nr_pages * 4) > zone_managed_pages(zone)) 2403 2410 return; 2404 2411 2405 2412 max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
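The watermark-boost hunk adds a cheap size guard: if four pageblocks already exceed the zone's managed pages, boosting a tiny zone (say, a kdump capture kernel) could trigger an immediate OOM. The predicate, extracted into a standalone function for illustration (function name and parameters are hypothetical):

```c
#include <assert.h>

/* Returns nonzero when boosting the watermark is worthwhile, mirroring
 * the two early-return checks in the patched boost_watermark(). */
static int should_boost_watermark(unsigned long pageblock_nr_pages,
				  unsigned long zone_managed_pages,
				  unsigned long watermark_boost_factor)
{
	if (!watermark_boost_factor)
		return 0;
	/* Small zone: four pageblocks would cover most of it. */
	if ((pageblock_nr_pages * 4) > zone_managed_pages)
		return 0;
	return 1;
}

static int demo_boost(void)
{
	/* 2MB pageblocks (512 x 4KB pages) on a 4GB zone: boost. */
	assert(should_boost_watermark(512, 1UL << 20, 15000));
	/* Same pageblocks on a tiny kdump-sized zone: skip. */
	assert(!should_boost_watermark(512, 1024, 15000));
	/* Boosting disabled via the factor: skip. */
	assert(!should_boost_watermark(512, 1UL << 20, 0));
	return 0;
}
```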
+10 -4
mm/percpu.c
··· 80 80 #include <linux/workqueue.h> 81 81 #include <linux/kmemleak.h> 82 82 #include <linux/sched.h> 83 + #include <linux/sched/mm.h> 83 84 84 85 #include <asm/cacheflush.h> 85 86 #include <asm/sections.h> ··· 1558 1557 static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved, 1559 1558 gfp_t gfp) 1560 1559 { 1561 - /* whitelisted flags that can be passed to the backing allocators */ 1562 - gfp_t pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN); 1563 - bool is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL; 1564 - bool do_warn = !(gfp & __GFP_NOWARN); 1560 + gfp_t pcpu_gfp; 1561 + bool is_atomic; 1562 + bool do_warn; 1565 1563 static int warn_limit = 10; 1566 1564 struct pcpu_chunk *chunk, *next; 1567 1565 const char *err; ··· 1568 1568 unsigned long flags; 1569 1569 void __percpu *ptr; 1570 1570 size_t bits, bit_align; 1571 + 1572 + gfp = current_gfp_context(gfp); 1573 + /* whitelisted flags that can be passed to the backing allocators */ 1574 + pcpu_gfp = gfp & (GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN); 1575 + is_atomic = (gfp & GFP_KERNEL) != GFP_KERNEL; 1576 + do_warn = !(gfp & __GFP_NOWARN); 1571 1577 1572 1578 /* 1573 1579 * There is now a minimum allocation size of PCPU_MIN_ALLOC_SIZE,
+30 -15
mm/slub.c
··· 551 551 metadata_access_disable(); 552 552 } 553 553 554 + /* 555 + * See comment in calculate_sizes(). 556 + */ 557 + static inline bool freeptr_outside_object(struct kmem_cache *s) 558 + { 559 + return s->offset >= s->inuse; 560 + } 561 + 562 + /* 563 + * Return offset of the end of info block which is inuse + free pointer if 564 + * not overlapping with object. 565 + */ 566 + static inline unsigned int get_info_end(struct kmem_cache *s) 567 + { 568 + if (freeptr_outside_object(s)) 569 + return s->inuse + sizeof(void *); 570 + else 571 + return s->inuse; 572 + } 573 + 554 574 static struct track *get_track(struct kmem_cache *s, void *object, 555 575 enum track_item alloc) 556 576 { 557 577 struct track *p; 558 578 559 - if (s->offset) 560 - p = object + s->offset + sizeof(void *); 561 - else 562 - p = object + s->inuse; 579 + p = object + get_info_end(s); 563 580 564 581 return p + alloc; 565 582 } ··· 703 686 print_section(KERN_ERR, "Redzone ", p + s->object_size, 704 687 s->inuse - s->object_size); 705 688 706 - if (s->offset) 707 - off = s->offset + sizeof(void *); 708 - else 709 - off = s->inuse; 689 + off = get_info_end(s); 710 690 711 691 if (s->flags & SLAB_STORE_USER) 712 692 off += 2 * sizeof(struct track); ··· 796 782 * object address 797 783 * Bytes of the object to be managed. 798 784 * If the freepointer may overlay the object then the free 799 - * pointer is the first word of the object. 785 + * pointer is at the middle of the object. 800 786 * 801 787 * Poisoning uses 0x6b (POISON_FREE) and the last byte is 802 788 * 0xa5 (POISON_END) ··· 830 816 831 817 static int check_pad_bytes(struct kmem_cache *s, struct page *page, u8 *p) 832 818 { 833 - unsigned long off = s->inuse; /* The end of info */ 834 - 835 - if (s->offset) 836 - /* Freepointer is placed after the object. 
*/ 837 - off += sizeof(void *); 819 + unsigned long off = get_info_end(s); /* The end of info */ 838 820 839 821 if (s->flags & SLAB_STORE_USER) 840 822 /* We also have user information there */ ··· 917 907 check_pad_bytes(s, page, p); 918 908 } 919 909 920 - if (!s->offset && val == SLUB_RED_ACTIVE) 910 + if (!freeptr_outside_object(s) && val == SLUB_RED_ACTIVE) 921 911 /* 922 912 * Object and freepointer overlap. Cannot check 923 913 * freepointer while object is allocated. ··· 3597 3587 * 3598 3588 * This is the case if we do RCU, have a constructor or 3599 3589 * destructor or are poisoning the objects. 3590 + * 3591 + * The assumption that s->offset >= s->inuse means free 3592 + * pointer is outside of the object is used in the 3593 + * freeptr_outside_object() function. If that is no 3594 + * longer true, the function needs to be modified. 3600 3595 */ 3601 3596 s->offset = size; 3602 3597 size += sizeof(void *);
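The mm/slub.c change replaces scattered `if (s->offset)` tests with two helpers, freeptr_outside_object() and get_info_end(), because the free pointer can now sit in the middle of the object rather than at its start. The arithmetic is self-contained enough to sketch in userspace (the fake_* struct stands in for the two kmem_cache fields involved):

```c
#include <assert.h>

/* Minimal stand-in for the two kmem_cache fields the helpers read:
 * ->offset is where the free pointer lives, ->inuse is the number of
 * bytes occupied by the object itself. */
struct fake_cache {
	unsigned int offset;
	unsigned int inuse;
};

/* Mirrors freeptr_outside_object(): the free pointer sits past the object. */
static int fake_freeptr_outside_object(const struct fake_cache *s)
{
	return s->offset >= s->inuse;
}

/* Mirrors get_info_end(): debugging info (tracking, user data) starts
 * after the object, plus the free pointer when it does not overlap it. */
static unsigned int fake_get_info_end(const struct fake_cache *s)
{
	if (fake_freeptr_outside_object(s))
		return s->inuse + sizeof(void *);
	return s->inuse;
}

static int demo_info_end(void)
{
	struct fake_cache outside = { .offset = 64, .inuse = 64 };
	struct fake_cache inside  = { .offset = 32, .inuse = 64 };

	assert(fake_get_info_end(&outside) == 64 + sizeof(void *));
	assert(fake_get_info_end(&inside) == 64);
	return 0;
}
```

Note the comment added in calculate_sizes(): the helper relies on the invariant that an out-of-object free pointer always satisfies offset >= inuse.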
-1
mm/vmscan.c
··· 1625 1625 * @dst: The temp list to put pages on to. 1626 1626 * @nr_scanned: The number of pages that were scanned. 1627 1627 * @sc: The scan_control struct for this reclaim session 1628 - * @mode: One of the LRU isolation modes 1629 1628 * @lru: LRU list id for isolating 1630 1629 * 1631 1630 * returns how many pages were moved onto *@dst.
+10 -10
net/atm/common.c
··· 177 177 178 178 set_bit(ATM_VF_CLOSE, &vcc->flags); 179 179 clear_bit(ATM_VF_READY, &vcc->flags); 180 - if (vcc->dev) { 181 - if (vcc->dev->ops->close) 182 - vcc->dev->ops->close(vcc); 183 - if (vcc->push) 184 - vcc->push(vcc, NULL); /* atmarpd has no push */ 185 - module_put(vcc->owner); 180 + if (vcc->dev && vcc->dev->ops->close) 181 + vcc->dev->ops->close(vcc); 182 + if (vcc->push) 183 + vcc->push(vcc, NULL); /* atmarpd has no push */ 184 + module_put(vcc->owner); 186 185 187 - while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) { 188 - atm_return(vcc, skb->truesize); 189 - kfree_skb(skb); 190 - } 186 + while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL) { 187 + atm_return(vcc, skb->truesize); 188 + kfree_skb(skb); 189 + } 191 190 191 + if (vcc->dev && vcc->dev->ops->owner) { 192 192 module_put(vcc->dev->ops->owner); 193 193 atm_dev_put(vcc->dev); 194 194 }
+6
net/atm/lec.c
··· 1264 1264 entry->vcc = NULL; 1265 1265 } 1266 1266 if (entry->recv_vcc) { 1267 + struct atm_vcc *vcc = entry->recv_vcc; 1268 + struct lec_vcc_priv *vpriv = LEC_VCC_PRIV(vcc); 1269 + 1270 + kfree(vpriv); 1271 + vcc->user_back = NULL; 1272 + 1267 1273 entry->recv_vcc->push = entry->old_recv_push; 1268 1274 vcc_release_async(entry->recv_vcc, -EPIPE); 1269 1275 entry->recv_vcc = NULL;
+1 -1
net/batman-adv/bat_v_ogm.c
··· 893 893 894 894 orig_node = batadv_v_ogm_orig_get(bat_priv, ogm_packet->orig); 895 895 if (!orig_node) 896 - return; 896 + goto out; 897 897 898 898 neigh_node = batadv_neigh_node_get_or_create(orig_node, if_incoming, 899 899 ethhdr->h_source);
+1 -8
net/batman-adv/network-coding.c
··· 1009 1009 */ 1010 1010 static u8 batadv_nc_random_weight_tq(u8 tq) 1011 1011 { 1012 - u8 rand_val, rand_tq; 1013 - 1014 - get_random_bytes(&rand_val, sizeof(rand_val)); 1015 - 1016 1012 /* randomize the estimated packet loss (max TQ - estimated TQ) */ 1017 - rand_tq = rand_val * (BATADV_TQ_MAX_VALUE - tq); 1018 - 1019 - /* normalize the randomized packet loss */ 1020 - rand_tq /= BATADV_TQ_MAX_VALUE; 1013 + u8 rand_tq = prandom_u32_max(BATADV_TQ_MAX_VALUE + 1 - tq); 1021 1014 1022 1015 /* convert to (randomized) estimated tq again */ 1023 1016 return BATADV_TQ_MAX_VALUE - rand_tq;
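The network-coding hunk replaces a hand-rolled "scale a random byte" computation with prandom_u32_max(), which draws from [0, range). A userspace sketch using rand() in place of the kernel PRNG (the `%` here carries a slight modulo bias that the real prandom_u32_max() avoids; this is only an illustration of the transformed formula):

```c
#include <assert.h>
#include <stdlib.h>

#define TQ_MAX_VALUE 255

/* Stand-in for prandom_u32_max(m): a value in [0, m). */
static unsigned int fake_prandom_u32_max(unsigned int m)
{
	return (unsigned int)rand() % m;
}

/* Mirrors the patched batadv_nc_random_weight_tq(): randomize the
 * estimated packet loss (TQ_MAX_VALUE - tq), then convert back to a
 * (randomized) estimated TQ. */
static unsigned char random_weight_tq(unsigned char tq)
{
	unsigned int rand_tq = fake_prandom_u32_max(TQ_MAX_VALUE + 1 - tq);

	return (unsigned char)(TQ_MAX_VALUE - rand_tq);
}

static int demo_weight_tq(void)
{
	int i;

	for (i = 0; i < 1000; i++) {
		unsigned char w = random_weight_tq(200);

		/* result always lands between the input TQ and the maximum */
		assert(w >= 200 && w <= TQ_MAX_VALUE);
	}
	/* a perfect TQ has zero loss to randomize */
	assert(random_weight_tq(TQ_MAX_VALUE) == TQ_MAX_VALUE);
	return 0;
}
```

The old code multiplied an 8-bit random value by the loss and divided by 255 inside a u8, which truncated; drawing directly from the target interval sidesteps that overflow.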
+2 -1
net/batman-adv/sysfs.c
··· 1150 1150 ret = batadv_parse_throughput(net_dev, buff, "throughput_override", 1151 1151 &tp_override); 1152 1152 if (!ret) 1153 - return count; 1153 + goto out; 1154 1154 1155 1155 old_tp_override = atomic_read(&hard_iface->bat_v.throughput_override); 1156 1156 if (old_tp_override == tp_override) ··· 1190 1190 1191 1191 tp_override = atomic_read(&hard_iface->bat_v.throughput_override); 1192 1192 1193 + batadv_hardif_put(hard_iface); 1193 1194 return sprintf(buff, "%u.%u MBit\n", tp_override / 10, 1194 1195 tp_override % 10); 1195 1196 }
+1
net/bridge/br_netlink.c
··· 612 612 v - 1, rtm_cmd); 613 613 v_change_start = 0; 614 614 } 615 + cond_resched(); 615 616 } 616 617 /* v_change_start is set only if the last/whole range changed */ 617 618 if (v_change_start)
+10 -2
net/core/devlink.c
··· 4283 4283 end_offset = nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]); 4284 4284 end_offset += nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]); 4285 4285 dump = false; 4286 + 4287 + if (start_offset == end_offset) { 4288 + err = 0; 4289 + goto nla_put_failure; 4290 + } 4286 4291 } 4287 4292 4288 4293 err = devlink_nl_region_read_snapshot_fill(skb, devlink, ··· 5368 5363 { 5369 5364 enum devlink_health_reporter_state prev_health_state; 5370 5365 struct devlink *devlink = reporter->devlink; 5366 + unsigned long recover_ts_threshold; 5371 5367 5372 5368 /* write a log message of the current error */ 5373 5369 WARN_ON(!msg); ··· 5379 5373 devlink_recover_notify(reporter, DEVLINK_CMD_HEALTH_REPORTER_RECOVER); 5380 5374 5381 5375 /* abort if the previous error wasn't recovered */ 5376 + recover_ts_threshold = reporter->last_recovery_ts + 5377 + msecs_to_jiffies(reporter->graceful_period); 5382 5378 if (reporter->auto_recover && 5383 5379 (prev_health_state != DEVLINK_HEALTH_REPORTER_STATE_HEALTHY || 5384 - jiffies - reporter->last_recovery_ts < 5385 - msecs_to_jiffies(reporter->graceful_period))) { 5380 + (reporter->last_recovery_ts && reporter->recovery_count && 5381 + time_is_after_jiffies(recover_ts_threshold)))) { 5386 5382 trace_devlink_health_recover_aborted(devlink, 5387 5383 reporter->ops->name, 5388 5384 reporter->health_state,
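The devlink hunk rewrites the grace-period test with time_is_after_jiffies() so the comparison stays correct across jiffies wraparound. The underlying trick is a signed difference on an unsigned free-running counter; a minimal sketch (the function name is hypothetical, but the idiom matches the kernel's time_after() family):

```c
#include <assert.h>
#include <stdint.h>

/* Wrap-safe "is a later than b?" on a free-running 32-bit counter,
 * the same signed-difference trick behind the kernel's time_after()
 * and time_is_after_jiffies() macros. */
static int counter_after(uint32_t a, uint32_t b)
{
	return (int32_t)(b - a) < 0;
}

static int demo_counter_after(void)
{
	assert(counter_after(5, 1));
	assert(!counter_after(1, 5));
	/* near a wrap: 0x00000010 is "after" 0xFFFFFFF0 */
	assert(counter_after(0x00000010u, 0xFFFFFFF0u));
	assert(!counter_after(0xFFFFFFF0u, 0x00000010u));
	return 0;
}
```

A naive `jiffies - last < threshold` subtraction misbehaves when `last` is still zero or the counter has wrapped, which is why the patch also gates the check on a recovery actually having happened.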
+7 -4
net/core/drop_monitor.c
··· 213 213 static void trace_drop_common(struct sk_buff *skb, void *location) 214 214 { 215 215 struct net_dm_alert_msg *msg; 216 + struct net_dm_drop_point *point; 216 217 struct nlmsghdr *nlh; 217 218 struct nlattr *nla; 218 219 int i; ··· 232 231 nlh = (struct nlmsghdr *)dskb->data; 233 232 nla = genlmsg_data(nlmsg_data(nlh)); 234 233 msg = nla_data(nla); 234 + point = msg->points; 235 235 for (i = 0; i < msg->entries; i++) { 236 - if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) { 237 - msg->points[i].count++; 236 + if (!memcmp(&location, &point->pc, sizeof(void *))) { 237 + point->count++; 238 238 goto out; 239 239 } 240 + point++; 240 241 } 241 242 if (msg->entries == dm_hit_limit) 242 243 goto out; ··· 247 244 */ 248 245 __nla_reserve_nohdr(dskb, sizeof(struct net_dm_drop_point)); 249 246 nla->nla_len += NLA_ALIGN(sizeof(struct net_dm_drop_point)); 250 - memcpy(msg->points[msg->entries].pc, &location, sizeof(void *)); 251 - msg->points[msg->entries].count = 1; 247 + memcpy(point->pc, &location, sizeof(void *)); 248 + point->count = 1; 252 249 msg->entries++; 253 250 254 251 if (!timer_pending(&data->send_timer)) {
+3 -3
net/core/neighbour.c
··· 1956 1956 NEIGH_UPDATE_F_OVERRIDE_ISROUTER); 1957 1957 } 1958 1958 1959 + if (protocol) 1960 + neigh->protocol = protocol; 1961 + 1959 1962 if (ndm->ndm_flags & NTF_EXT_LEARNED) 1960 1963 flags |= NEIGH_UPDATE_F_EXT_LEARNED; 1961 1964 ··· 1971 1968 } else 1972 1969 err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags, 1973 1970 NETLINK_CB(skb).portid, extack); 1974 - 1975 - if (protocol) 1976 - neigh->protocol = protocol; 1977 1971 1978 1972 neigh_release(neigh); 1979 1973
-1
net/core/sock.c
··· 2364 2364 } 2365 2365 } 2366 2366 2367 - /* On 32bit arches, an skb frag is limited to 2^15 */ 2368 2367 #define SKB_FRAG_PAGE_ORDER get_order(32768) 2369 2368 DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key); 2370 2369
+1 -1
net/dsa/dsa2.c
··· 459 459 list_for_each_entry(dp, &dst->ports, list) { 460 460 err = dsa_port_setup(dp); 461 461 if (err) 462 - goto teardown; 462 + continue; 463 463 } 464 464 465 465 return 0;
+2 -1
net/dsa/master.c
··· 289 289 { 290 290 struct dsa_port *cpu_dp = dev->dsa_ptr; 291 291 292 - dev->netdev_ops = cpu_dp->orig_ndo_ops; 292 + if (cpu_dp->orig_ndo_ops) 293 + dev->netdev_ops = cpu_dp->orig_ndo_ops; 293 294 cpu_dp->orig_ndo_ops = NULL; 294 295 } 295 296
+3 -5
net/dsa/slave.c
··· 856 856 struct dsa_port *to_dp; 857 857 int err; 858 858 859 - act = &cls->rule->action.entries[0]; 860 - 861 859 if (!ds->ops->port_mirror_add) 862 860 return -EOPNOTSUPP; 863 - 864 - if (!act->dev) 865 - return -EINVAL; 866 861 867 862 if (!flow_action_basic_hw_stats_check(&cls->rule->action, 868 863 cls->common.extack)) 869 864 return -EOPNOTSUPP; 870 865 871 866 act = &cls->rule->action.entries[0]; 867 + 868 + if (!act->dev) 869 + return -EINVAL; 872 870 873 871 if (!dsa_slave_dev_check(act->dev)) 874 872 return -EOPNOTSUPP;
+1 -1
net/hsr/hsr_slave.c
··· 18 18 { 19 19 struct sk_buff *skb = *pskb; 20 20 struct hsr_port *port; 21 - u16 protocol; 21 + __be16 protocol; 22 22 23 23 if (!skb_mac_header_was_set(skb)) { 24 24 WARN_ONCE(1, "%s: skb invalid", __func__);
-7
net/ipv4/tcp_input.c
··· 3926 3926 */ 3927 3927 break; 3928 3928 #endif 3929 - case TCPOPT_MPTCP: 3930 - mptcp_parse_option(skb, ptr, opsize, opt_rx); 3931 - break; 3932 - 3933 3929 case TCPOPT_FASTOPEN: 3934 3930 tcp_parse_fastopen_option( 3935 3931 opsize - TCPOLEN_FASTOPEN_BASE, ··· 5985 5989 5986 5990 tcp_sync_mss(sk, icsk->icsk_pmtu_cookie); 5987 5991 tcp_initialize_rcv_mss(sk); 5988 - 5989 - if (sk_is_mptcp(sk)) 5990 - mptcp_rcv_synsent(sk); 5991 5992 5992 5993 /* Remember, tcp_poll() does not lock socket! 5993 5994 * Change state from SYN-SENT only after copied_seq
+25
net/ipv6/route.c
··· 1385 1385 } 1386 1386 ip6_rt_copy_init(pcpu_rt, res); 1387 1387 pcpu_rt->rt6i_flags |= RTF_PCPU; 1388 + 1389 + if (f6i->nh) 1390 + pcpu_rt->sernum = rt_genid_ipv6(dev_net(dev)); 1391 + 1388 1392 return pcpu_rt; 1393 + } 1394 + 1395 + static bool rt6_is_valid(const struct rt6_info *rt6) 1396 + { 1397 + return rt6->sernum == rt_genid_ipv6(dev_net(rt6->dst.dev)); 1389 1398 } 1390 1399 1391 1400 /* It should be called with rcu_read_lock() acquired */ ··· 1403 1394 struct rt6_info *pcpu_rt; 1404 1395 1405 1396 pcpu_rt = this_cpu_read(*res->nh->rt6i_pcpu); 1397 + 1398 + if (pcpu_rt && pcpu_rt->sernum && !rt6_is_valid(pcpu_rt)) { 1399 + struct rt6_info *prev, **p; 1400 + 1401 + p = this_cpu_ptr(res->nh->rt6i_pcpu); 1402 + prev = xchg(p, NULL); 1403 + if (prev) { 1404 + dst_dev_put(&prev->dst); 1405 + dst_release(&prev->dst); 1406 + } 1407 + 1408 + pcpu_rt = NULL; 1409 + } 1406 1410 1407 1411 return pcpu_rt; 1408 1412 } ··· 2614 2592 struct rt6_info *rt; 2615 2593 2616 2594 rt = container_of(dst, struct rt6_info, dst); 2595 + 2596 + if (rt->sernum) 2597 + return rt6_is_valid(rt) ? dst : NULL; 2617 2598 2618 2599 rcu_read_lock(); 2619 2600
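The net/ipv6/route.c change stamps nexthop-based per-cpu cached routes with the namespace generation id (sernum) and treats a cached entry as stale once the id has moved on. The invalidation idea in isolation, as a userspace sketch (the fake_* names are stand-ins; the kernel additionally uses xchg() to drop the stale per-cpu entry safely):

```c
#include <assert.h>

/* Models rt_genid_ipv6(): bumped whenever routing state changes. */
static unsigned int fake_genid;

/* Models a per-cpu cached route carrying the genid it was created under. */
struct fake_cached_rt {
	unsigned int sernum; /* 0 would mean "not genid-tracked" */
	int payload;
};

/* Mirrors rt6_is_valid(): the cache entry is usable only while the
 * generation id it recorded still matches the current one. */
static int fake_rt_is_valid(const struct fake_cached_rt *rt)
{
	return rt->sernum == fake_genid;
}

static int demo_genid_cache(void)
{
	struct fake_cached_rt rt = { .sernum = fake_genid, .payload = 42 };

	assert(fake_rt_is_valid(&rt));
	assert(rt.payload == 42);
	fake_genid++; /* e.g. a route was added or deleted */
	assert(!fake_rt_is_valid(&rt));
	return 0;
}
```

Generation counters make invalidation O(1): instead of walking every cached entry on a routing change, each reader lazily discards entries whose recorded id no longer matches.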
+8 -2
net/ipv6/seg6.c
··· 27 27 28 28 bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len) 29 29 { 30 - int trailing; 31 30 unsigned int tlv_offset; 31 + int max_last_entry; 32 + int trailing; 32 33 33 34 if (srh->type != IPV6_SRCRT_TYPE_4) 34 35 return false; ··· 37 36 if (((srh->hdrlen + 1) << 3) != len) 38 37 return false; 39 38 40 - if (srh->segments_left > srh->first_segment) 39 + max_last_entry = (srh->hdrlen / 2) - 1; 40 + 41 + if (srh->first_segment > max_last_entry) 42 + return false; 43 + 44 + if (srh->segments_left > srh->first_segment + 1) 41 45 return false; 42 46 43 47 tlv_offset = sizeof(*srh) + ((srh->first_segment + 1) << 4);
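seg6_validate_srh() now also checks first_segment against the number of entries that actually fit in the header: hdrlen counts 8-octet units and each segment is 16 bytes, so hdrlen/2 entries fit. The bounds arithmetic as a standalone check (the struct is a trimmed, hypothetical stand-in for ipv6_sr_hdr):

```c
#include <assert.h>

/* Trimmed stand-in for the ipv6_sr_hdr fields the checks use. */
struct fake_srh {
	unsigned char hdrlen;        /* length in 8-octet units, excl. first 8 */
	unsigned char segments_left;
	unsigned char first_segment;
};

/* Mirrors the tightened bounds checks in seg6_validate_srh(). */
static int fake_srh_bounds_ok(const struct fake_srh *srh, int len)
{
	int max_last_entry;

	if (((srh->hdrlen + 1) << 3) != len)
		return 0;

	/* each 16-byte segment consumes two 8-octet units */
	max_last_entry = (srh->hdrlen / 2) - 1;
	if (srh->first_segment > max_last_entry)
		return 0;

	if (srh->segments_left > srh->first_segment + 1)
		return 0;

	return 1;
}

static int demo_srh(void)
{
	/* one segment: hdrlen = 2, total length (2 + 1) * 8 = 24 bytes */
	struct fake_srh one = { .hdrlen = 2, .segments_left = 1, .first_segment = 0 };

	assert(fake_srh_bounds_ok(&one, 24));

	one.first_segment = 1; /* claims a second entry that cannot fit */
	assert(!fake_srh_bounds_ok(&one, 24));

	one.first_segment = 0;
	one.segments_left = 2; /* more segments left than exist */
	assert(!fake_srh_bounds_ok(&one, 24));
	return 0;
}
```

Without the first_segment cap, a crafted header could point segment lookups past the buffer even though the overall length field was consistent.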
+41 -54
net/mptcp/options.c
··· 16 16 return (flags & MPTCP_CAP_FLAG_MASK) == MPTCP_CAP_HMAC_SHA256; 17 17 } 18 18 19 - void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr, 20 - int opsize, struct tcp_options_received *opt_rx) 19 + static void mptcp_parse_option(const struct sk_buff *skb, 20 + const unsigned char *ptr, int opsize, 21 + struct mptcp_options_received *mp_opt) 21 22 { 22 - struct mptcp_options_received *mp_opt = &opt_rx->mptcp; 23 23 u8 subtype = *ptr >> 4; 24 24 int expected_opsize; 25 25 u8 version; ··· 283 283 } 284 284 285 285 void mptcp_get_options(const struct sk_buff *skb, 286 - struct tcp_options_received *opt_rx) 286 + struct mptcp_options_received *mp_opt) 287 287 { 288 - const unsigned char *ptr; 289 288 const struct tcphdr *th = tcp_hdr(skb); 290 - int length = (th->doff * 4) - sizeof(struct tcphdr); 289 + const unsigned char *ptr; 290 + int length; 291 291 292 + /* initialize option status */ 293 + mp_opt->mp_capable = 0; 294 + mp_opt->mp_join = 0; 295 + mp_opt->add_addr = 0; 296 + mp_opt->rm_addr = 0; 297 + mp_opt->dss = 0; 298 + 299 + length = (th->doff * 4) - sizeof(struct tcphdr); 292 300 ptr = (const unsigned char *)(th + 1); 293 301 294 302 while (length > 0) { ··· 316 308 if (opsize > length) 317 309 return; /* don't parse partial options */ 318 310 if (opcode == TCPOPT_MPTCP) 319 - mptcp_parse_option(skb, ptr, opsize, opt_rx); 311 + mptcp_parse_option(skb, ptr, opsize, mp_opt); 320 312 ptr += opsize - 2; 321 313 length -= opsize; 322 314 } ··· 350 342 return true; 351 343 } 352 344 return false; 353 - } 354 - 355 - void mptcp_rcv_synsent(struct sock *sk) 356 - { 357 - struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 358 - struct tcp_sock *tp = tcp_sk(sk); 359 - 360 - if (subflow->request_mptcp && tp->rx_opt.mptcp.mp_capable) { 361 - subflow->mp_capable = 1; 362 - subflow->can_ack = 1; 363 - subflow->remote_key = tp->rx_opt.mptcp.sndr_key; 364 - pr_debug("subflow=%p, remote_key=%llu", subflow, 365 - subflow->remote_key); 
366 - } else if (subflow->request_join && tp->rx_opt.mptcp.mp_join) { 367 - subflow->mp_join = 1; 368 - subflow->thmac = tp->rx_opt.mptcp.thmac; 369 - subflow->remote_nonce = tp->rx_opt.mptcp.nonce; 370 - pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", subflow, 371 - subflow->thmac, subflow->remote_nonce); 372 - } else if (subflow->request_mptcp) { 373 - tcp_sk(sk)->is_mptcp = 0; 374 - } 375 345 } 376 346 377 347 /* MP_JOIN client subflow must wait for 4th ack before sending any data: ··· 695 709 if (TCP_SKB_CB(skb)->seq != subflow->ssn_offset + 1) 696 710 return subflow->mp_capable; 697 711 698 - if (mp_opt->use_ack) { 712 + if (mp_opt->dss && mp_opt->use_ack) { 699 713 /* subflows are fully established as soon as we get any 700 714 * additional ack. 701 715 */ 702 716 subflow->fully_established = 1; 703 717 goto fully_established; 704 718 } 705 - 706 - WARN_ON_ONCE(subflow->can_ack); 707 719 708 720 /* If the first established packet does not contain MP_CAPABLE + data 709 721 * then fallback to TCP ··· 712 728 return false; 713 729 } 714 730 731 + if (unlikely(!READ_ONCE(msk->pm.server_side))) 732 + pr_warn_once("bogus mpc option on established client sk"); 715 733 subflow->fully_established = 1; 716 734 subflow->remote_key = mp_opt->sndr_key; 717 735 subflow->can_ack = 1; ··· 805 819 { 806 820 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 807 821 struct mptcp_sock *msk = mptcp_sk(subflow->conn); 808 - struct mptcp_options_received *mp_opt; 822 + struct mptcp_options_received mp_opt; 809 823 struct mptcp_ext *mpext; 810 824 811 - mp_opt = &opt_rx->mptcp; 812 - if (!check_fully_established(msk, sk, subflow, skb, mp_opt)) 825 + mptcp_get_options(skb, &mp_opt); 826 + if (!check_fully_established(msk, sk, subflow, skb, &mp_opt)) 813 827 return; 814 828 815 - if (mp_opt->add_addr && add_addr_hmac_valid(msk, mp_opt)) { 829 + if (mp_opt.add_addr && add_addr_hmac_valid(msk, &mp_opt)) { 816 830 struct mptcp_addr_info addr; 817 831 818 - addr.port = 
htons(mp_opt->port); 819 - addr.id = mp_opt->addr_id; 820 - if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) { 832 + addr.port = htons(mp_opt.port); 833 + addr.id = mp_opt.addr_id; 834 + if (mp_opt.family == MPTCP_ADDR_IPVERSION_4) { 821 835 addr.family = AF_INET; 822 - addr.addr = mp_opt->addr; 836 + addr.addr = mp_opt.addr; 823 837 } 824 838 #if IS_ENABLED(CONFIG_MPTCP_IPV6) 825 - else if (mp_opt->family == MPTCP_ADDR_IPVERSION_6) { 839 + else if (mp_opt.family == MPTCP_ADDR_IPVERSION_6) { 826 840 addr.family = AF_INET6; 827 - addr.addr6 = mp_opt->addr6; 841 + addr.addr6 = mp_opt.addr6; 828 842 } 829 843 #endif 830 - if (!mp_opt->echo) 844 + if (!mp_opt.echo) 831 845 mptcp_pm_add_addr_received(msk, &addr); 832 - mp_opt->add_addr = 0; 846 + mp_opt.add_addr = 0; 833 847 } 834 848 835 - if (!mp_opt->dss) 849 + if (!mp_opt.dss) 836 850 return; 837 851 838 852 /* we can't wait for recvmsg() to update the ack_seq, otherwise 839 853 * monodirectional flows will stuck 840 854 */ 841 - if (mp_opt->use_ack) 842 - update_una(msk, mp_opt); 855 + if (mp_opt.use_ack) 856 + update_una(msk, &mp_opt); 843 857 844 858 mpext = skb_ext_add(skb, SKB_EXT_MPTCP); 845 859 if (!mpext) ··· 847 861 848 862 memset(mpext, 0, sizeof(*mpext)); 849 863 850 - if (mp_opt->use_map) { 851 - if (mp_opt->mpc_map) { 864 + if (mp_opt.use_map) { 865 + if (mp_opt.mpc_map) { 852 866 /* this is an MP_CAPABLE carrying MPTCP data 853 867 * we know this map the first chunk of data 854 868 */ ··· 858 872 mpext->subflow_seq = 1; 859 873 mpext->dsn64 = 1; 860 874 mpext->mpc_map = 1; 875 + mpext->data_fin = 0; 861 876 } else { 862 - mpext->data_seq = mp_opt->data_seq; 863 - mpext->subflow_seq = mp_opt->subflow_seq; 864 - mpext->dsn64 = mp_opt->dsn64; 865 - mpext->data_fin = mp_opt->data_fin; 877 + mpext->data_seq = mp_opt.data_seq; 878 + mpext->subflow_seq = mp_opt.subflow_seq; 879 + mpext->dsn64 = mp_opt.dsn64; 880 + mpext->data_fin = mp_opt.data_fin; 866 881 } 867 - mpext->data_len = mp_opt->data_len; 882 + 
mpext->data_len = mp_opt.data_len; 868 883 mpext->use_map = 1; 869 884 } 870 885 }
+9 -8
net/mptcp/protocol.c
··· 1316 1316 1317 1317 static int mptcp_disconnect(struct sock *sk, int flags) 1318 1318 { 1319 - lock_sock(sk); 1320 - __mptcp_clear_xmit(sk); 1321 - release_sock(sk); 1322 - mptcp_cancel_work(sk); 1323 - return tcp_disconnect(sk, flags); 1319 + /* Should never be called. 1320 + * inet_stream_connect() calls ->disconnect, but that 1321 + * refers to the subflow socket, not the mptcp one. 1322 + */ 1323 + WARN_ON_ONCE(1); 1324 + return 0; 1324 1325 } 1325 1326 1326 1327 #if IS_ENABLED(CONFIG_MPTCP_IPV6) ··· 1334 1333 #endif 1335 1334 1336 1335 struct sock *mptcp_sk_clone(const struct sock *sk, 1337 - const struct tcp_options_received *opt_rx, 1336 + const struct mptcp_options_received *mp_opt, 1338 1337 struct request_sock *req) 1339 1338 { 1340 1339 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); ··· 1373 1372 1374 1373 msk->write_seq = subflow_req->idsn + 1; 1375 1374 atomic64_set(&msk->snd_una, msk->write_seq); 1376 - if (opt_rx->mptcp.mp_capable) { 1375 + if (mp_opt->mp_capable) { 1377 1376 msk->can_ack = true; 1378 - msk->remote_key = opt_rx->mptcp.sndr_key; 1377 + msk->remote_key = mp_opt->sndr_key; 1379 1378 mptcp_crypto_key_sha(msk->remote_key, NULL, &ack_seq); 1380 1379 ack_seq++; 1381 1380 msk->ack_seq = ack_seq;
+41 -2
net/mptcp/protocol.h
··· 91 91 #define MPTCP_WORK_RTX 2 92 92 #define MPTCP_WORK_EOF 3 93 93 94 + struct mptcp_options_received { 95 + u64 sndr_key; 96 + u64 rcvr_key; 97 + u64 data_ack; 98 + u64 data_seq; 99 + u32 subflow_seq; 100 + u16 data_len; 101 + u16 mp_capable : 1, 102 + mp_join : 1, 103 + dss : 1, 104 + add_addr : 1, 105 + rm_addr : 1, 106 + family : 4, 107 + echo : 1, 108 + backup : 1; 109 + u32 token; 110 + u32 nonce; 111 + u64 thmac; 112 + u8 hmac[20]; 113 + u8 join_id; 114 + u8 use_map:1, 115 + dsn64:1, 116 + data_fin:1, 117 + use_ack:1, 118 + ack64:1, 119 + mpc_map:1, 120 + __unused:2; 121 + u8 addr_id; 122 + u8 rm_id; 123 + union { 124 + struct in_addr addr; 125 + #if IS_ENABLED(CONFIG_MPTCP_IPV6) 126 + struct in6_addr addr6; 127 + #endif 128 + }; 129 + u64 ahmac; 130 + u16 port; 131 + }; 132 + 94 133 static inline __be32 mptcp_option(u8 subopt, u8 len, u8 nib, u8 field) 95 134 { 96 135 return htonl((TCPOPT_MPTCP << 24) | (len << 16) | (subopt << 12) | ··· 370 331 #endif 371 332 372 333 struct sock *mptcp_sk_clone(const struct sock *sk, 373 - const struct tcp_options_received *opt_rx, 334 + const struct mptcp_options_received *mp_opt, 374 335 struct request_sock *req); 375 336 void mptcp_get_options(const struct sk_buff *skb, 376 - struct tcp_options_received *opt_rx); 337 + struct mptcp_options_received *mp_opt); 377 338 378 339 void mptcp_finish_connect(struct sock *sk); 379 340 void mptcp_data_ready(struct sock *sk, struct sock *ssk);
+55 -31
net/mptcp/subflow.c
··· 124 124 { 125 125 struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk_listener); 126 126 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req); 127 - struct tcp_options_received rx_opt; 127 + struct mptcp_options_received mp_opt; 128 128 129 129 pr_debug("subflow_req=%p, listener=%p", subflow_req, listener); 130 130 131 - memset(&rx_opt.mptcp, 0, sizeof(rx_opt.mptcp)); 132 - mptcp_get_options(skb, &rx_opt); 131 + mptcp_get_options(skb, &mp_opt); 133 132 134 133 subflow_req->mp_capable = 0; 135 134 subflow_req->mp_join = 0; ··· 141 142 return; 142 143 #endif 143 144 144 - if (rx_opt.mptcp.mp_capable) { 145 + if (mp_opt.mp_capable) { 145 146 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_MPCAPABLEPASSIVE); 146 147 147 - if (rx_opt.mptcp.mp_join) 148 + if (mp_opt.mp_join) 148 149 return; 149 - } else if (rx_opt.mptcp.mp_join) { 150 + } else if (mp_opt.mp_join) { 150 151 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINSYNRX); 151 152 } 152 153 153 - if (rx_opt.mptcp.mp_capable && listener->request_mptcp) { 154 + if (mp_opt.mp_capable && listener->request_mptcp) { 154 155 int err; 155 156 156 157 err = mptcp_token_new_request(req); ··· 158 159 subflow_req->mp_capable = 1; 159 160 160 161 subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq; 161 - } else if (rx_opt.mptcp.mp_join && listener->request_mptcp) { 162 + } else if (mp_opt.mp_join && listener->request_mptcp) { 162 163 subflow_req->ssn_offset = TCP_SKB_CB(skb)->seq; 163 164 subflow_req->mp_join = 1; 164 - subflow_req->backup = rx_opt.mptcp.backup; 165 - subflow_req->remote_id = rx_opt.mptcp.join_id; 166 - subflow_req->token = rx_opt.mptcp.token; 167 - subflow_req->remote_nonce = rx_opt.mptcp.nonce; 165 + subflow_req->backup = mp_opt.backup; 166 + subflow_req->remote_id = mp_opt.join_id; 167 + subflow_req->token = mp_opt.token; 168 + subflow_req->remote_nonce = mp_opt.nonce; 168 169 pr_debug("token=%u, remote_nonce=%u", subflow_req->token, 169 170 subflow_req->remote_nonce); 170 171 if 
(!subflow_token_join_request(req, skb)) { ··· 220 221 static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb) 221 222 { 222 223 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 224 + struct mptcp_options_received mp_opt; 223 225 struct sock *parent = subflow->conn; 226 + struct tcp_sock *tp = tcp_sk(sk); 224 227 225 228 subflow->icsk_af_ops->sk_rx_dst_set(sk, skb); 226 229 227 - if (inet_sk_state_load(parent) != TCP_ESTABLISHED) { 230 + if (inet_sk_state_load(parent) == TCP_SYN_SENT) { 228 231 inet_sk_state_store(parent, TCP_ESTABLISHED); 229 232 parent->sk_state_change(parent); 230 233 } 231 234 232 - if (subflow->conn_finished || !tcp_sk(sk)->is_mptcp) 235 + /* be sure no special action on any packet other than syn-ack */ 236 + if (subflow->conn_finished) 237 + return; 238 + 239 + subflow->conn_finished = 1; 240 + 241 + mptcp_get_options(skb, &mp_opt); 242 + if (subflow->request_mptcp && mp_opt.mp_capable) { 243 + subflow->mp_capable = 1; 244 + subflow->can_ack = 1; 245 + subflow->remote_key = mp_opt.sndr_key; 246 + pr_debug("subflow=%p, remote_key=%llu", subflow, 247 + subflow->remote_key); 248 + } else if (subflow->request_join && mp_opt.mp_join) { 249 + subflow->mp_join = 1; 250 + subflow->thmac = mp_opt.thmac; 251 + subflow->remote_nonce = mp_opt.nonce; 252 + pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", subflow, 253 + subflow->thmac, subflow->remote_nonce); 254 + } else if (subflow->request_mptcp) { 255 + tp->is_mptcp = 0; 256 + } 257 + 258 + if (!tp->is_mptcp) 233 259 return; 234 260 235 261 if (subflow->mp_capable) { 236 262 pr_debug("subflow=%p, remote_key=%llu", mptcp_subflow_ctx(sk), 237 263 subflow->remote_key); 238 264 mptcp_finish_connect(sk); 239 - subflow->conn_finished = 1; 240 265 241 266 if (skb) { 242 267 pr_debug("synack seq=%u", TCP_SKB_CB(skb)->seq); ··· 287 264 if (!mptcp_finish_join(sk)) 288 265 goto do_reset; 289 266 290 - subflow->conn_finished = 1; 291 267 MPTCP_INC_STATS(sock_net(sk), 
MPTCP_MIB_JOINSYNACKRX); 292 268 } else { 293 269 do_reset: ··· 344 322 345 323 /* validate hmac received in third ACK */ 346 324 static bool subflow_hmac_valid(const struct request_sock *req, 347 - const struct tcp_options_received *rx_opt) 325 + const struct mptcp_options_received *mp_opt) 348 326 { 349 327 const struct mptcp_subflow_request_sock *subflow_req; 350 328 u8 hmac[MPTCPOPT_HMAC_LEN]; ··· 361 339 subflow_req->local_nonce, hmac); 362 340 363 341 ret = true; 364 - if (crypto_memneq(hmac, rx_opt->mptcp.hmac, sizeof(hmac))) 342 + if (crypto_memneq(hmac, mp_opt->hmac, sizeof(hmac))) 365 343 ret = false; 366 344 367 345 sock_put((struct sock *)msk); ··· 417 395 { 418 396 struct mptcp_subflow_context *listener = mptcp_subflow_ctx(sk); 419 397 struct mptcp_subflow_request_sock *subflow_req; 420 - struct tcp_options_received opt_rx; 398 + struct mptcp_options_received mp_opt; 421 399 bool fallback_is_fatal = false; 422 400 struct sock *new_msk = NULL; 423 401 bool fallback = false; ··· 425 403 426 404 pr_debug("listener=%p, req=%p, conn=%p", listener, req, listener->conn); 427 405 428 - opt_rx.mptcp.mp_capable = 0; 406 + /* we need later a valid 'mp_capable' value even when options are not 407 + * parsed 408 + */ 409 + mp_opt.mp_capable = 0; 429 410 if (tcp_rsk(req)->is_mptcp == 0) 430 411 goto create_child; 431 412 ··· 443 418 goto create_msk; 444 419 } 445 420 446 - mptcp_get_options(skb, &opt_rx); 447 - if (!opt_rx.mptcp.mp_capable) { 421 + mptcp_get_options(skb, &mp_opt); 422 + if (!mp_opt.mp_capable) { 448 423 fallback = true; 449 424 goto create_child; 450 425 } 451 426 452 427 create_msk: 453 - new_msk = mptcp_sk_clone(listener->conn, &opt_rx, req); 428 + new_msk = mptcp_sk_clone(listener->conn, &mp_opt, req); 454 429 if (!new_msk) 455 430 fallback = true; 456 431 } else if (subflow_req->mp_join) { 457 432 fallback_is_fatal = true; 458 - opt_rx.mptcp.mp_join = 0; 459 - mptcp_get_options(skb, &opt_rx); 460 - if (!opt_rx.mptcp.mp_join || 461 - 
!subflow_hmac_valid(req, &opt_rx)) { 433 + mptcp_get_options(skb, &mp_opt); 434 + if (!mp_opt.mp_join || 435 + !subflow_hmac_valid(req, &mp_opt)) { 462 436 SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC); 463 437 return NULL; 464 438 } ··· 497 473 /* with OoO packets we can reach here without ingress 498 474 * mpc option 499 475 */ 500 - ctx->remote_key = opt_rx.mptcp.sndr_key; 501 - ctx->fully_established = opt_rx.mptcp.mp_capable; 502 - ctx->can_ack = opt_rx.mptcp.mp_capable; 476 + ctx->remote_key = mp_opt.sndr_key; 477 + ctx->fully_established = mp_opt.mp_capable; 478 + ctx->can_ack = mp_opt.mp_capable; 503 479 } else if (ctx->mp_join) { 504 480 struct mptcp_sock *owner; 505 481 ··· 523 499 /* check for expected invariant - should never trigger, just help 524 500 * catching eariler subtle bugs 525 501 */ 526 - WARN_ON_ONCE(*own_req && child && tcp_sk(child)->is_mptcp && 502 + WARN_ON_ONCE(child && *own_req && tcp_sk(child)->is_mptcp && 527 503 (!mptcp_subflow_ctx(child) || 528 504 !mptcp_subflow_ctx(child)->conn)); 529 505 return child;
+1 -3
net/netfilter/nf_nat_proto.c
··· 68 68 enum nf_nat_manip_type maniptype) 69 69 { 70 70 struct udphdr *hdr; 71 - bool do_csum; 72 71 73 72 if (skb_ensure_writable(skb, hdroff + sizeof(*hdr))) 74 73 return false; 75 74 76 75 hdr = (struct udphdr *)(skb->data + hdroff); 77 - do_csum = hdr->check || skb->ip_summed == CHECKSUM_PARTIAL; 76 + __udp_manip_pkt(skb, iphdroff, hdr, tuple, maniptype, !!hdr->check); 78 77 79 - __udp_manip_pkt(skb, iphdroff, hdr, tuple, maniptype, do_csum); 80 78 return true; 81 79 } 82 80
+16 -6
net/sched/cls_api.c
··· 2070 2070 err = PTR_ERR(block); 2071 2071 goto errout; 2072 2072 } 2073 + block->classid = parent; 2073 2074 2074 2075 chain_index = tca[TCA_CHAIN] ? nla_get_u32(tca[TCA_CHAIN]) : 0; 2075 2076 if (chain_index > TC_ACT_EXT_VAL_MASK) { ··· 2613 2612 return skb->len; 2614 2613 2615 2614 parent = tcm->tcm_parent; 2616 - if (!parent) { 2615 + if (!parent) 2617 2616 q = dev->qdisc; 2618 - parent = q->handle; 2619 - } else { 2617 + else 2620 2618 q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent)); 2621 - } 2622 2619 if (!q) 2623 2620 goto out; 2624 2621 cops = q->ops->cl_ops; ··· 2632 2633 block = cops->tcf_block(q, cl, NULL); 2633 2634 if (!block) 2634 2635 goto out; 2636 + parent = block->classid; 2635 2637 if (tcf_block_shared(block)) 2636 2638 q = NULL; 2637 2639 } ··· 3523 3523 #endif 3524 3524 } 3525 3525 3526 + static enum flow_action_hw_stats tc_act_hw_stats(u8 hw_stats) 3527 + { 3528 + if (WARN_ON_ONCE(hw_stats > TCA_ACT_HW_STATS_ANY)) 3529 + return FLOW_ACTION_HW_STATS_DONT_CARE; 3530 + else if (!hw_stats) 3531 + return FLOW_ACTION_HW_STATS_DISABLED; 3532 + 3533 + return hw_stats; 3534 + } 3535 + 3526 3536 int tc_setup_flow_action(struct flow_action *flow_action, 3527 3537 const struct tcf_exts *exts) 3528 3538 { ··· 3556 3546 if (err) 3557 3547 goto err_out_locked; 3558 3548 3559 - entry->hw_stats = act->hw_stats; 3549 + entry->hw_stats = tc_act_hw_stats(act->hw_stats); 3560 3550 3561 3551 if (is_tcf_gact_ok(act)) { 3562 3552 entry->id = FLOW_ACTION_ACCEPT; ··· 3624 3614 entry->mangle.mask = tcf_pedit_mask(act, k); 3625 3615 entry->mangle.val = tcf_pedit_val(act, k); 3626 3616 entry->mangle.offset = tcf_pedit_offset(act, k); 3627 - entry->hw_stats = act->hw_stats; 3617 + entry->hw_stats = tc_act_hw_stats(act->hw_stats); 3628 3618 entry = &flow_action->entries[++j]; 3629 3619 } 3630 3620 } else if (is_tcf_csum(act)) {
+2 -1
net/sched/sch_choke.c
··· 323 323 324 324 sch->q.qlen = 0; 325 325 sch->qstats.backlog = 0; 326 - memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *)); 326 + if (q->tab) 327 + memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *)); 327 328 q->head = q->tail = 0; 328 329 red_restart(&q->vars); 329 330 }
+1 -1
net/sched/sch_fq_codel.c
··· 416 416 q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM])); 417 417 418 418 if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]) 419 - q->drop_batch_size = min(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])); 419 + q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])); 420 420 421 421 if (tb[TCA_FQ_CODEL_MEMORY_LIMIT]) 422 422 q->memory_limit = min(1U << 31, nla_get_u32(tb[TCA_FQ_CODEL_MEMORY_LIMIT]));
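The one-character fix above turns an accidental upper bound into the intended lower bound: `min(1U, v)` pinned every configured `TCA_FQ_CODEL_DROP_BATCH_SIZE` to 1, while `max(1U, v)` only raises 0 to 1. A minimal sketch of the two clamps (the helper names are ours, standing in for the kernel's `min()`/`max()` macros):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's min()/max() macros; the
 * function names are ours. The patched line is a lower-bound clamp
 * that had been written as an upper-bound clamp. */
static unsigned int batch_before(unsigned int v)
{
	return v < 1u ? v : 1u;	/* min(1U, v): caps everything at 1 */
}

static unsigned int batch_after(unsigned int v)
{
	return v > 1u ? v : 1u;	/* max(1U, v): only raises 0 up to 1 */
}
```

With the old code a request for a drop batch of 64 came back as 1, silently defeating batched drops under load.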
+9
net/sched/sch_sfq.c
··· 637 637 if (ctl->divisor && 638 638 (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536)) 639 639 return -EINVAL; 640 + 641 + /* slot->allot is a short, make sure quantum is not too big. */ 642 + if (ctl->quantum) { 643 + unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum); 644 + 645 + if (scaled <= 0 || scaled > SHRT_MAX) 646 + return -EINVAL; 647 + } 648 + 640 649 if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max, 641 650 ctl_v1->Wlog)) 642 651 return -EINVAL;
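The added SFQ check rejects quantums whose scaled per-slot credit no longer fits in `slot->allot`, which is a short. A rough model of that validation (the shift value mirrors mainline's `SFQ_ALLOT_SHIFT`, assumed here to be 3 for illustration):

```c
#include <assert.h>
#include <limits.h>

/* Sketch of the new bound: the configured quantum is shifted left
 * before being stored in a short, so over-large values used to
 * truncate silently. ALLOT_SHIFT below is our stand-in for the
 * kernel's SFQ_ALLOT_SHIFT (assumed 3). */
#define ALLOT_SHIFT	3
#define ALLOT_SIZE(x)	((x) << ALLOT_SHIFT)

static int quantum_valid(unsigned int quantum)
{
	unsigned int scaled = ALLOT_SIZE(quantum);

	return scaled > 0 && scaled <= SHRT_MAX;
}
```

A typical MTU-sized quantum passes, while zero and anything that scales past `SHRT_MAX` are rejected up front.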
+3
net/sched/sch_skbprio.c
··· 169 169 { 170 170 struct tc_skbprio_qopt *ctl = nla_data(opt); 171 171 172 + if (opt->nla_len != nla_attr_size(sizeof(*ctl))) 173 + return -EINVAL; 174 + 172 175 sch->limit = ctl->limit; 173 176 return 0; 174 177 }
+18 -6
net/sunrpc/clnt.c
··· 880 880 /* 881 881 * Free an RPC client 882 882 */ 883 + static void rpc_free_client_work(struct work_struct *work) 884 + { 885 + struct rpc_clnt *clnt = container_of(work, struct rpc_clnt, cl_work); 886 + 887 + /* These might block on processes that might allocate memory, 888 + * so they cannot be called in rpciod, so they are handled separately 889 + * here. 890 + */ 891 + rpc_clnt_debugfs_unregister(clnt); 892 + rpc_clnt_remove_pipedir(clnt); 893 + 894 + kfree(clnt); 895 + rpciod_down(); 896 + } 883 897 static struct rpc_clnt * 884 898 rpc_free_client(struct rpc_clnt *clnt) 885 899 { ··· 904 890 rcu_dereference(clnt->cl_xprt)->servername); 905 891 if (clnt->cl_parent != clnt) 906 892 parent = clnt->cl_parent; 907 - rpc_clnt_debugfs_unregister(clnt); 908 - rpc_clnt_remove_pipedir(clnt); 909 893 rpc_unregister_client(clnt); 910 894 rpc_free_iostats(clnt->cl_metrics); 911 895 clnt->cl_metrics = NULL; 912 896 xprt_put(rcu_dereference_raw(clnt->cl_xprt)); 913 897 xprt_iter_destroy(&clnt->cl_xpi); 914 - rpciod_down(); 915 898 put_cred(clnt->cl_cred); 916 899 rpc_free_clid(clnt); 917 - kfree(clnt); 900 + 901 + INIT_WORK(&clnt->cl_work, rpc_free_client_work); 902 + schedule_work(&clnt->cl_work); 918 903 return parent; 919 904 } 920 905 ··· 2821 2808 task = rpc_call_null_helper(clnt, xprt, NULL, 2822 2809 RPC_TASK_SOFT|RPC_TASK_SOFTCONN|RPC_TASK_ASYNC|RPC_TASK_NULLCREDS, 2823 2810 &rpc_cb_add_xprt_call_ops, data); 2824 - if (IS_ERR(task)) 2825 - return PTR_ERR(task); 2811 + 2826 2812 rpc_put_task(task); 2827 2813 success: 2828 2814 return 1;
+11 -4
net/sunrpc/xprtrdma/rpc_rdma.c
··· 388 388 } while (nsegs); 389 389 390 390 done: 391 - return xdr_stream_encode_item_absent(xdr); 391 + if (xdr_stream_encode_item_absent(xdr) < 0) 392 + return -EMSGSIZE; 393 + return 0; 392 394 } 393 395 394 396 /* Register and XDR encode the Write list. Supports encoding a list ··· 456 454 *segcount = cpu_to_be32(nchunks); 457 455 458 456 done: 459 - return xdr_stream_encode_item_absent(xdr); 457 + if (xdr_stream_encode_item_absent(xdr) < 0) 458 + return -EMSGSIZE; 459 + return 0; 460 460 } 461 461 462 462 /* Register and XDR encode the Reply chunk. Supports encoding an array ··· 484 480 int nsegs, nchunks; 485 481 __be32 *segcount; 486 482 487 - if (wtype != rpcrdma_replych) 488 - return xdr_stream_encode_item_absent(xdr); 483 + if (wtype != rpcrdma_replych) { 484 + if (xdr_stream_encode_item_absent(xdr) < 0) 485 + return -EMSGSIZE; 486 + return 0; 487 + } 489 488 490 489 seg = req->rl_segments; 491 490 nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_rcv_buf, 0, wtype, seg);
+2 -1
net/sunrpc/xprtrdma/verbs.c
··· 289 289 case RDMA_CM_EVENT_DISCONNECTED: 290 290 ep->re_connect_status = -ECONNABORTED; 291 291 disconnected: 292 + xprt_force_disconnect(xprt); 292 293 return rpcrdma_ep_destroy(ep); 293 294 default: 294 295 break; ··· 1356 1355 --ep->re_send_count; 1357 1356 } 1358 1357 1358 + trace_xprtrdma_post_send(req); 1359 1359 rc = frwr_send(r_xprt, req); 1360 - trace_xprtrdma_post_send(req, rc); 1361 1360 if (rc) 1362 1361 return -ENOTCONN; 1363 1362 return 0;
+3 -2
net/tipc/topsrv.c
··· 402 402 read_lock_bh(&sk->sk_callback_lock); 403 403 ret = tipc_conn_rcv_sub(srv, con, &s); 404 404 read_unlock_bh(&sk->sk_callback_lock); 405 + if (!ret) 406 + return 0; 405 407 } 406 - if (ret < 0) 407 - tipc_conn_close(con); 408 408 409 + tipc_conn_close(con); 409 410 return ret; 410 411 } 411 412
+5 -2
net/tls/tls_sw.c
··· 800 800 *copied -= sk_msg_free(sk, msg); 801 801 tls_free_open_rec(sk); 802 802 } 803 + if (psock) 804 + sk_psock_put(sk, psock); 803 805 return err; 804 806 } 805 807 more_data: ··· 2083 2081 strp_data_ready(&ctx->strp); 2084 2082 2085 2083 psock = sk_psock_get(sk); 2086 - if (psock && !list_empty(&psock->ingress_msg)) { 2087 - ctx->saved_data_ready(sk); 2084 + if (psock) { 2085 + if (!list_empty(&psock->ingress_msg)) 2086 + ctx->saved_data_ready(sk); 2088 2087 sk_psock_put(sk, psock); 2089 2088 } 2090 2089 }
+4
net/vmw_vsock/virtio_transport_common.c
··· 157 157 158 158 void virtio_transport_deliver_tap_pkt(struct virtio_vsock_pkt *pkt) 159 159 { 160 + if (pkt->tap_delivered) 161 + return; 162 + 160 163 vsock_deliver_tap(virtio_transport_build_skb, pkt); 164 + pkt->tap_delivered = true; 161 165 } 162 166 EXPORT_SYMBOL_GPL(virtio_transport_deliver_tap_pkt); 163 167
+6
net/x25/x25_subr.c
··· 357 357 sk->sk_state_change(sk); 358 358 sock_set_flag(sk, SOCK_DEAD); 359 359 } 360 + if (x25->neighbour) { 361 + read_lock_bh(&x25_list_lock); 362 + x25_neigh_put(x25->neighbour); 363 + x25->neighbour = NULL; 364 + read_unlock_bh(&x25_list_lock); 365 + } 360 366 } 361 367 362 368 /*
+1 -1
samples/trace_events/trace-events-sample.h
··· 416 416 * Note, TRACE_EVENT() itself is simply defined as: 417 417 * 418 418 * #define TRACE_EVENT(name, proto, args, tstruct, assign, printk) \ 419 - * DEFINE_EVENT_CLASS(name, proto, args, tstruct, assign, printk); \ 419 + * DECLARE_EVENT_CLASS(name, proto, args, tstruct, assign, printk); \ 420 420 * DEFINE_EVENT(name, name, proto, args) 421 421 * 422 422 * The DEFINE_EVENT() also can be declared with conditions and reg functions:
+1 -1
scripts/decodecode
··· 126 126 faultline=`cat $T.dis | head -1 | cut -d":" -f2-` 127 127 faultline=`echo "$faultline" | sed -e 's/\[/\\\[/g; s/\]/\\\]/g'` 128 128 129 - cat $T.oo | sed -e "${faultlinenum}s/^\(.*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/" 129 + cat $T.oo | sed -e "${faultlinenum}s/^\([^:]*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/" 130 130 echo 131 131 cat $T.aa 132 132 cleanup
+1
scripts/gcc-plugins/Makefile
··· 4 4 HOST_EXTRACXXFLAGS += -I$(GCC_PLUGINS_DIR)/include -I$(src) -std=gnu++98 -fno-rtti 5 5 HOST_EXTRACXXFLAGS += -fno-exceptions -fasynchronous-unwind-tables -ggdb 6 6 HOST_EXTRACXXFLAGS += -Wno-narrowing -Wno-unused-variable -Wno-c++11-compat 7 + HOST_EXTRACXXFLAGS += -Wno-format-diag 7 8 8 9 $(obj)/randomize_layout_plugin.o: $(objtree)/$(obj)/randomize_layout_seed.h 9 10 quiet_cmd_create_randomize_layout_seed = GENSEED $@
+4
scripts/gcc-plugins/gcc-common.h
··· 35 35 #include "ggc.h" 36 36 #include "timevar.h" 37 37 38 + #if BUILDING_GCC_VERSION < 10000 38 39 #include "params.h" 40 + #endif 39 41 40 42 #if BUILDING_GCC_VERSION <= 4009 41 43 #include "pointer-set.h" ··· 849 847 return gimple_build_assign(lhs, subcode, op1, op2 PASS_MEM_STAT); 850 848 } 851 849 850 + #if BUILDING_GCC_VERSION < 10000 852 851 template <> 853 852 template <> 854 853 inline bool is_a_helper<const ggoto *>::test(const_gimple gs) ··· 863 860 { 864 861 return gs->code == GIMPLE_RETURN; 865 862 } 863 + #endif 866 864 867 865 static inline gasm *as_a_gasm(gimple stmt) 868 866 {
+2 -3
scripts/gcc-plugins/stackleak_plugin.c
··· 51 51 gimple stmt; 52 52 gcall *stackleak_track_stack; 53 53 cgraph_node_ptr node; 54 - int frequency; 55 54 basic_block bb; 56 55 57 56 /* Insert call to void stackleak_track_stack(void) */ ··· 67 68 bb = gimple_bb(stackleak_track_stack); 68 69 node = cgraph_get_create_node(track_function_decl); 69 70 gcc_assert(node); 70 - frequency = compute_call_stmt_bb_frequency(current_function_decl, bb); 71 71 cgraph_create_edge(cgraph_get_node(current_function_decl), node, 72 - stackleak_track_stack, bb->count, frequency); 72 + stackleak_track_stack, bb->count, 73 + compute_call_stmt_bb_frequency(current_function_decl, bb)); 73 74 } 74 75 75 76 static bool is_alloca(gimple stmt)
+2 -2
scripts/gdb/linux/rbtree.py
··· 12 12 13 13 def rb_first(root): 14 14 if root.type == rb_root_type.get_type(): 15 - node = node.address.cast(rb_root_type.get_type().pointer()) 15 + node = root.address.cast(rb_root_type.get_type().pointer()) 16 16 elif root.type != rb_root_type.get_type().pointer(): 17 17 raise gdb.GdbError("Must be struct rb_root not {}".format(root.type)) 18 18 ··· 28 28 29 29 def rb_last(root): 30 30 if root.type == rb_root_type.get_type(): 31 - node = node.address.cast(rb_root_type.get_type().pointer()) 31 + node = root.address.cast(rb_root_type.get_type().pointer()) 32 32 elif root.type != rb_root_type.get_type().pointer(): 33 33 raise gdb.GdbError("Must be struct rb_root not {}".format(root.type)) 34 34
+1 -1
scripts/kallsyms.c
··· 34 34 unsigned int len; 35 35 unsigned int start_pos; 36 36 unsigned int percpu_absolute; 37 - unsigned char sym[0]; 37 + unsigned char sym[]; 38 38 }; 39 39 40 40 struct addr_range {
+45 -25
security/selinux/hooks.c
··· 5842 5842 5843 5843 static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb) 5844 5844 { 5845 - int err = 0; 5846 - u32 perm; 5845 + int rc = 0; 5846 + unsigned int msg_len; 5847 + unsigned int data_len = skb->len; 5848 + unsigned char *data = skb->data; 5847 5849 struct nlmsghdr *nlh; 5848 5850 struct sk_security_struct *sksec = sk->sk_security; 5851 + u16 sclass = sksec->sclass; 5852 + u32 perm; 5849 5853 5850 - if (skb->len < NLMSG_HDRLEN) { 5851 - err = -EINVAL; 5852 - goto out; 5853 - } 5854 - nlh = nlmsg_hdr(skb); 5854 + while (data_len >= nlmsg_total_size(0)) { 5855 + nlh = (struct nlmsghdr *)data; 5855 5856 5856 - err = selinux_nlmsg_lookup(sksec->sclass, nlh->nlmsg_type, &perm); 5857 - if (err) { 5858 - if (err == -EINVAL) { 5857 + /* NOTE: the nlmsg_len field isn't reliably set by some netlink 5858 + * users which means we can't reject skb's with bogus 5859 + * length fields; our solution is to follow what 5860 + * netlink_rcv_skb() does and simply skip processing at 5861 + * messages with length fields that are clearly junk 5862 + */ 5863 + if (nlh->nlmsg_len < NLMSG_HDRLEN || nlh->nlmsg_len > data_len) 5864 + return 0; 5865 + 5866 + rc = selinux_nlmsg_lookup(sclass, nlh->nlmsg_type, &perm); 5867 + if (rc == 0) { 5868 + rc = sock_has_perm(sk, perm); 5869 + if (rc) 5870 + return rc; 5871 + } else if (rc == -EINVAL) { 5872 + /* -EINVAL is a missing msg/perm mapping */ 5859 5873 pr_warn_ratelimited("SELinux: unrecognized netlink" 5860 - " message: protocol=%hu nlmsg_type=%hu sclass=%s" 5861 - " pid=%d comm=%s\n", 5862 - sk->sk_protocol, nlh->nlmsg_type, 5863 - secclass_map[sksec->sclass - 1].name, 5864 - task_pid_nr(current), current->comm); 5865 - if (!enforcing_enabled(&selinux_state) || 5866 - security_get_allow_unknown(&selinux_state)) 5867 - err = 0; 5874 + " message: protocol=%hu nlmsg_type=%hu sclass=%s" 5875 + " pid=%d comm=%s\n", 5876 + sk->sk_protocol, nlh->nlmsg_type, 5877 + secclass_map[sclass - 1].name, 5878 + 
task_pid_nr(current), current->comm); 5879 + if (enforcing_enabled(&selinux_state) && 5880 + !security_get_allow_unknown(&selinux_state)) 5881 + return rc; 5882 + rc = 0; 5883 + } else if (rc == -ENOENT) { 5884 + /* -ENOENT is a missing socket/class mapping, ignore */ 5885 + rc = 0; 5886 + } else { 5887 + return rc; 5868 5888 } 5869 5889 5870 - /* Ignore */ 5871 - if (err == -ENOENT) 5872 - err = 0; 5873 - goto out; 5890 + /* move to the next message after applying netlink padding */ 5891 + msg_len = NLMSG_ALIGN(nlh->nlmsg_len); 5892 + if (msg_len >= data_len) 5893 + return 0; 5894 + data_len -= msg_len; 5895 + data += msg_len; 5874 5896 } 5875 5897 5876 - err = sock_has_perm(sk, perm); 5877 - out: 5878 - return err; 5898 + return rc; 5879 5899 } 5880 5900 5881 5901 static void ipc_init_security(struct ipc_security_struct *isec, u16 sclass)
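The selinux_netlink_send() rewrite above matters because one skb may carry several netlink messages back to back, each a 4-byte-aligned, length-prefixed record; previously only the first was checked. A simplified model of the walk the patch introduces, skipping quietly on junk length fields the way netlink_rcv_skb() does (the structure and helper names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy header standing in for struct nlmsghdr: a length that covers
 * the header plus payload, then a message type. */
struct nlhdr {
	uint32_t len;
	uint16_t type;
};
#define HDRLEN		((uint32_t)sizeof(struct nlhdr))
#define ALIGN4(x)	(((x) + 3u) & ~3u)

/* Count well-formed messages in a buffer, stopping at the first bogus
 * length instead of failing the whole send. */
static int count_msgs(const unsigned char *data, uint32_t data_len)
{
	int n = 0;

	while (data_len >= HDRLEN) {
		struct nlhdr h;
		uint32_t msg_len;

		memcpy(&h, data, sizeof(h));
		if (h.len < HDRLEN || h.len > data_len)
			break;		/* clearly junk: skip processing */
		n++;
		msg_len = ALIGN4(h.len);	/* netlink padding */
		if (msg_len >= data_len)
			break;		/* last (possibly unpadded) message */
		data += msg_len;
		data_len -= msg_len;
	}
	return n;
}

/* Two back-to-back messages (lengths 10 and 8); the first is padded
 * up to 12 bytes before the second starts. */
static int demo_two_msgs(void)
{
	unsigned char buf[24] = { 0 };
	struct nlhdr h1 = { 10, 1 }, h2 = { 8, 2 };

	memcpy(buf, &h1, sizeof(h1));
	memcpy(buf + 12, &h2, sizeof(h2));
	return count_msgs(buf, 20);
}

/* A length field shorter than the header itself is skipped, not an
 * error, matching the patch's comment about unreliable nlmsg_len. */
static int demo_junk(void)
{
	unsigned char buf[24] = { 0 };
	struct nlhdr bad = { 3, 1 };

	memcpy(buf, &bad, sizeof(bad));
	return count_msgs(buf, 20);
}
```

The real hook additionally maps each message type to a permission and checks it per message; the loop structure is the part modeled here.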
+1 -1
security/selinux/ss/conditional.c
··· 429 429 430 430 p->cond_list = kcalloc(len, sizeof(*p->cond_list), GFP_KERNEL); 431 431 if (!p->cond_list) 432 - return rc; 432 + return -ENOMEM; 433 433 434 434 rc = avtab_alloc(&(p->te_cond_avtab), p->te_avtab.nel); 435 435 if (rc)
+6 -4
sound/core/oss/pcm_plugin.c
··· 205 205 plugin = snd_pcm_plug_first(plug); 206 206 while (plugin && frames > 0) { 207 207 plugin_next = plugin->next; 208 + if (check_size && plugin->buf_frames && 209 + frames > plugin->buf_frames) 210 + frames = plugin->buf_frames; 208 211 if (plugin->dst_frames) { 209 212 frames = plugin->dst_frames(plugin, frames); 210 213 if (frames < 0) 211 214 return frames; 212 215 } 213 - if (check_size && frames > plugin->buf_frames) 214 - frames = plugin->buf_frames; 215 216 plugin = plugin_next; 216 217 } 217 218 return frames; ··· 226 225 227 226 plugin = snd_pcm_plug_last(plug); 228 227 while (plugin && frames > 0) { 229 - if (check_size && frames > plugin->buf_frames) 230 - frames = plugin->buf_frames; 231 228 plugin_prev = plugin->prev; 232 229 if (plugin->src_frames) { 233 230 frames = plugin->src_frames(plugin, frames); 234 231 if (frames < 0) 235 232 return frames; 236 233 } 234 + if (check_size && plugin->buf_frames && 235 + frames > plugin->buf_frames) 236 + frames = plugin->buf_frames; 237 237 plugin = plugin_prev; 238 238 } 239 239 return frames;
+6 -3
sound/isa/opti9xx/miro.c
··· 867 867 spin_unlock_irqrestore(&chip->lock, flags); 868 868 } 869 869 870 + static inline void snd_miro_write_mask(struct snd_miro *chip, 871 + unsigned char reg, unsigned char value, unsigned char mask) 872 + { 873 + unsigned char oldval = snd_miro_read(chip, reg); 870 874 871 - #define snd_miro_write_mask(chip, reg, value, mask) \ 872 - snd_miro_write(chip, reg, \ 873 - (snd_miro_read(chip, reg) & ~(mask)) | ((value) & (mask))) 875 + snd_miro_write(chip, reg, (oldval & ~mask) | (value & mask)); 876 + } 874 877 875 878 /* 876 879 * Proc Interface
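Converting snd_miro_write_mask() from a macro to an inline function is not just cosmetic: the macro expanded `chip` and `reg` twice (once for the read, once for the write), so arguments with side effects were evaluated twice, and the arguments were unsequenced. A small model with a fake register file (all names here are ours):

```c
#include <assert.h>

/* Fake register file standing in for the chip's indexed registers. */
static unsigned char regs[4];

static unsigned char rd(int reg)		{ return regs[reg]; }
static void wr(int reg, unsigned char v)	{ regs[reg] = v; }

/* The old macro shape: note `reg` appears twice, so `reg` expressions
 * with side effects would run twice. */
#define WRITE_MASK_MACRO(reg, value, mask) \
	wr((reg), (rd(reg) & ~(mask)) | ((value) & (mask)))

/* The patched shape: each argument is evaluated exactly once. */
static inline void write_mask_fn(int reg, unsigned char value,
				 unsigned char mask)
{
	unsigned char oldval = rd(reg);

	wr(reg, (oldval & ~mask) | (value & mask));
}

/* Read-modify-write through the helper: masked bits replaced,
 * unmasked bits preserved. */
static unsigned char demo(unsigned char initial, unsigned char value,
			  unsigned char mask)
{
	regs[0] = initial;
	write_mask_fn(0, value, mask);
	return regs[0];
}
```

The same conversion is applied to snd_opti9xx_write_mask() in the next file for the same reason.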
+6 -3
sound/isa/opti9xx/opti92x-ad1848.c
··· 317 317 } 318 318 319 319 320 - #define snd_opti9xx_write_mask(chip, reg, value, mask) \ 321 - snd_opti9xx_write(chip, reg, \ 322 - (snd_opti9xx_read(chip, reg) & ~(mask)) | ((value) & (mask))) 320 + static inline void snd_opti9xx_write_mask(struct snd_opti9xx *chip, 321 + unsigned char reg, unsigned char value, unsigned char mask) 322 + { 323 + unsigned char oldval = snd_opti9xx_read(chip, reg); 323 324 325 + snd_opti9xx_write(chip, reg, (oldval & ~mask) | (value & mask)); 326 + } 324 327 325 328 static int snd_opti9xx_configure(struct snd_opti9xx *chip, 326 329 long port,
+5 -4
sound/pci/hda/hda_intel.c
··· 2078 2078 * some HD-audio PCI entries are exposed without any codecs, and such devices 2079 2079 * should be ignored from the beginning. 2080 2080 */ 2081 - static const struct snd_pci_quirk driver_blacklist[] = { 2082 - SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0), 2083 - SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0), 2081 + static const struct pci_device_id driver_blacklist[] = { 2082 + { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1043, 0x874f) }, /* ASUS ROG Zenith II / Strix */ 2083 + { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb59) }, /* MSI TRX40 Creator */ 2084 + { PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb60) }, /* MSI TRX40 */ 2084 2085 {} 2085 2086 }; 2086 2087 ··· 2101 2100 bool schedule_probe; 2102 2101 int err; 2103 2102 2104 - if (snd_pci_quirk_lookup(pci, driver_blacklist)) { 2103 + if (pci_match_id(driver_blacklist, pci)) { 2105 2104 dev_info(&pci->dev, "Skipping the blacklisted device\n"); 2106 2105 return -ENODEV; 2107 2106 }
+5 -1
sound/pci/hda/patch_hdmi.c
··· 1848 1848 /* Add sanity check to pass klockwork check. 1849 1849 * This should never happen. 1850 1850 */ 1851 - if (WARN_ON(spdif == NULL)) 1851 + if (WARN_ON(spdif == NULL)) { 1852 + mutex_unlock(&codec->spdif_mutex); 1852 1853 return true; 1854 + } 1853 1855 non_pcm = !!(spdif->status & IEC958_AES0_NONAUDIO); 1854 1856 mutex_unlock(&codec->spdif_mutex); 1855 1857 return non_pcm; ··· 2200 2198 2201 2199 for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) { 2202 2200 struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx); 2201 + struct hdmi_eld *pin_eld = &per_pin->sink_eld; 2203 2202 2203 + pin_eld->eld_valid = false; 2204 2204 hdmi_present_sense(per_pin, 0); 2205 2205 } 2206 2206
+1
sound/pci/hda/patch_realtek.c
··· 7420 7420 SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC), 7421 7421 SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC), 7422 7422 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS), 7423 + SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7423 7424 SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), 7424 7425 SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE), 7425 7426 SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+5 -17
sound/usb/line6/podhd.c
··· 21 21 enum { 22 22 LINE6_PODHD300, 23 23 LINE6_PODHD400, 24 - LINE6_PODHD500_0, 25 - LINE6_PODHD500_1, 24 + LINE6_PODHD500, 26 25 LINE6_PODX3, 27 26 LINE6_PODX3LIVE, 28 27 LINE6_PODHD500X, ··· 317 318 /* TODO: no need to alloc data interfaces when only audio is used */ 318 319 { LINE6_DEVICE(0x5057), .driver_info = LINE6_PODHD300 }, 319 320 { LINE6_DEVICE(0x5058), .driver_info = LINE6_PODHD400 }, 320 - { LINE6_IF_NUM(0x414D, 0), .driver_info = LINE6_PODHD500_0 }, 321 - { LINE6_IF_NUM(0x414D, 1), .driver_info = LINE6_PODHD500_1 }, 321 + { LINE6_IF_NUM(0x414D, 0), .driver_info = LINE6_PODHD500 }, 322 322 { LINE6_IF_NUM(0x414A, 0), .driver_info = LINE6_PODX3 }, 323 323 { LINE6_IF_NUM(0x414B, 0), .driver_info = LINE6_PODX3LIVE }, 324 324 { LINE6_IF_NUM(0x4159, 0), .driver_info = LINE6_PODHD500X }, ··· 350 352 .ep_audio_r = 0x82, 351 353 .ep_audio_w = 0x01, 352 354 }, 353 - [LINE6_PODHD500_0] = { 355 + [LINE6_PODHD500] = { 354 356 .id = "PODHD500", 355 357 .name = "POD HD500", 356 - .capabilities = LINE6_CAP_PCM 358 + .capabilities = LINE6_CAP_PCM | LINE6_CAP_CONTROL 357 359 | LINE6_CAP_HWMON, 358 360 .altsetting = 1, 359 - .ep_ctrl_r = 0x81, 360 - .ep_ctrl_w = 0x01, 361 - .ep_audio_r = 0x86, 362 - .ep_audio_w = 0x02, 363 - }, 364 - [LINE6_PODHD500_1] = { 365 - .id = "PODHD500", 366 - .name = "POD HD500", 367 - .capabilities = LINE6_CAP_PCM 368 - | LINE6_CAP_HWMON, 369 - .altsetting = 0, 361 + .ctrl_if = 1, 370 362 .ep_ctrl_r = 0x81, 371 363 .ep_ctrl_w = 0x01, 372 364 .ep_audio_r = 0x86,
+1 -1
sound/usb/quirks.c
··· 1687 1687 1688 1688 case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */ 1689 1689 case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */ 1690 - case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */ 1690 + case USB_ID(0x16d0, 0x06b2): /* NuPrime DAC-10 */ 1691 1691 case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ 1692 1692 case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */ 1693 1693 case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */
+6 -3
tools/bootconfig/main.c
··· 314 314 ret = delete_xbc(path); 315 315 if (ret < 0) { 316 316 pr_err("Failed to delete previous boot config: %d\n", ret); 317 + free(data); 317 318 return ret; 318 319 } 319 320 ··· 322 321 fd = open(path, O_RDWR | O_APPEND); 323 322 if (fd < 0) { 324 323 pr_err("Failed to open %s: %d\n", path, fd); 324 + free(data); 325 325 return fd; 326 326 } 327 327 /* TODO: Ensure the @path is initramfs/initrd image */ 328 328 ret = write(fd, data, size + 8); 329 329 if (ret < 0) { 330 330 pr_err("Failed to apply a boot config: %d\n", ret); 331 - return ret; 331 + goto out; 332 332 } 333 333 /* Write a magic word of the bootconfig */ 334 334 ret = write(fd, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN); 335 335 if (ret < 0) { 336 336 pr_err("Failed to apply a boot config magic: %d\n", ret); 337 - return ret; 337 + goto out; 338 338 } 339 + out: 339 340 close(fd); 340 341 free(data); 341 342 342 - return 0; 343 + return ret; 343 344 } 344 345 345 346 int usage(void)
+6 -1
tools/cgroup/iocost_monitor.py
··· 159 159 else: 160 160 self.inflight_pct = 0 161 161 162 - self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000 162 + # vdebt used to be an atomic64_t and is now u64, support both 163 + try: 164 + self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000 165 + except: 166 + self.debt_ms = iocg.abs_vdebt.value_() / VTIME_PER_USEC / 1000 167 + 163 168 self.use_delay = blkg.use_delay.counter.value_() 164 169 self.delay_ms = blkg.delay_nsec.counter.value_() / 1_000_000 165 170
+14 -3
tools/objtool/check.c
··· 72 72 return find_insn(file, func->cfunc->sec, func->cfunc->offset); 73 73 } 74 74 75 + static struct instruction *prev_insn_same_sym(struct objtool_file *file, 76 + struct instruction *insn) 77 + { 78 + struct instruction *prev = list_prev_entry(insn, list); 79 + 80 + if (&prev->list != &file->insn_list && prev->func == insn->func) 81 + return prev; 82 + 83 + return NULL; 84 + } 85 + 75 86 #define func_for_each_insn(file, func, insn) \ 76 87 for (insn = find_insn(file, func->sec, func->offset); \ 77 88 insn; \ ··· 1061 1050 * it. 1062 1051 */ 1063 1052 for (; 1064 - &insn->list != &file->insn_list && insn->func && insn->func->pfunc == func; 1065 - insn = insn->first_jump_src ?: list_prev_entry(insn, list)) { 1053 + insn && insn->func && insn->func->pfunc == func; 1054 + insn = insn->first_jump_src ?: prev_insn_same_sym(file, insn)) { 1066 1055 1067 1056 if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC) 1068 1057 break; ··· 1460 1449 struct cfi_reg *cfa = &state->cfa; 1461 1450 struct stack_op *op = &insn->stack_op; 1462 1451 1463 - if (cfa->base != CFI_SP) 1452 + if (cfa->base != CFI_SP && cfa->base != CFI_SP_INDIRECT) 1464 1453 return 0; 1465 1454 1466 1455 /* push */
+4 -3
tools/objtool/elf.h
··· 87 87 #define OFFSET_STRIDE (1UL << OFFSET_STRIDE_BITS) 88 88 #define OFFSET_STRIDE_MASK (~(OFFSET_STRIDE - 1)) 89 89 90 - #define for_offset_range(_offset, _start, _end) \ 91 - for (_offset = ((_start) & OFFSET_STRIDE_MASK); \ 92 - _offset <= ((_end) & OFFSET_STRIDE_MASK); \ 90 + #define for_offset_range(_offset, _start, _end) \ 91 + for (_offset = ((_start) & OFFSET_STRIDE_MASK); \ 92 + _offset >= ((_start) & OFFSET_STRIDE_MASK) && \ 93 + _offset <= ((_end) & OFFSET_STRIDE_MASK); \ 93 94 _offset += OFFSET_STRIDE) 94 95 95 96 static inline u32 sec_offset_hash(struct section *sec, unsigned long offset)
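The extra `_offset >= ((_start) & OFFSET_STRIDE_MASK)` clause in for_offset_range() guards against unsigned wraparound: when the end bound sits near the top of the type's range, `_offset += OFFSET_STRIDE` overflows back to 0, which is still `<=` the end bound, so the old loop never terminated. A scaled-down model using 8-bit arithmetic (the stride and types here are ours, chosen so the wrap is easy to hit):

```c
#include <assert.h>
#include <stdint.h>

#define STRIDE		16u
#define STRIDE_MASK	((uint8_t)~(STRIDE - 1))

/* Count the stride-aligned buckets covering [start, end]. The added
 * lower-bound test breaks the loop once `off` wraps past zero. */
static int count_buckets(uint8_t start, uint8_t end)
{
	uint8_t off;
	int n = 0;

	for (off = start & STRIDE_MASK;
	     off >= (uint8_t)(start & STRIDE_MASK) &&
	     off <= (uint8_t)(end & STRIDE_MASK);
	     off = (uint8_t)(off + STRIDE))
		n++;

	return n;
}
```

For a range ending in the top bucket (e.g. 0xF0..0xFF) the increment wraps 0xF0 to 0x00; without the new clause, 0x00 still satisfies the end test and the loop spins forever.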
+146
tools/testing/selftests/filesystems/epoll/epoll_wakeup_test.c
··· 3 3 #define _GNU_SOURCE 4 4 #include <poll.h> 5 5 #include <unistd.h> 6 + #include <assert.h> 6 7 #include <signal.h> 7 8 #include <pthread.h> 8 9 #include <sys/epoll.h> ··· 3135 3134 } 3136 3135 close(ctx.efd[0]); 3137 3136 close(ctx.sfd[0]); 3137 + } 3138 + 3139 + enum { 3140 + EPOLL60_EVENTS_NR = 10, 3141 + }; 3142 + 3143 + struct epoll60_ctx { 3144 + volatile int stopped; 3145 + int ready; 3146 + int waiters; 3147 + int epfd; 3148 + int evfd[EPOLL60_EVENTS_NR]; 3149 + }; 3150 + 3151 + static void *epoll60_wait_thread(void *ctx_) 3152 + { 3153 + struct epoll60_ctx *ctx = ctx_; 3154 + struct epoll_event e; 3155 + sigset_t sigmask; 3156 + uint64_t v; 3157 + int ret; 3158 + 3159 + /* Block SIGUSR1 */ 3160 + sigemptyset(&sigmask); 3161 + sigaddset(&sigmask, SIGUSR1); 3162 + sigprocmask(SIG_SETMASK, &sigmask, NULL); 3163 + 3164 + /* Prepare empty mask for epoll_pwait() */ 3165 + sigemptyset(&sigmask); 3166 + 3167 + while (!ctx->stopped) { 3168 + /* Mark we are ready */ 3169 + __atomic_fetch_add(&ctx->ready, 1, __ATOMIC_ACQUIRE); 3170 + 3171 + /* Start when all are ready */ 3172 + while (__atomic_load_n(&ctx->ready, __ATOMIC_ACQUIRE) && 3173 + !ctx->stopped); 3174 + 3175 + /* Account this waiter */ 3176 + __atomic_fetch_add(&ctx->waiters, 1, __ATOMIC_ACQUIRE); 3177 + 3178 + ret = epoll_pwait(ctx->epfd, &e, 1, 2000, &sigmask); 3179 + if (ret != 1) { 3180 + /* We expect only signal delivery on stop */ 3181 + assert(ret < 0 && errno == EINTR && "Lost wakeup!\n"); 3182 + assert(ctx->stopped); 3183 + break; 3184 + } 3185 + 3186 + ret = read(e.data.fd, &v, sizeof(v)); 3187 + /* Since we are on ET mode, thus each thread gets its own fd. 
*/ 3188 + assert(ret == sizeof(v)); 3189 + 3190 + __atomic_fetch_sub(&ctx->waiters, 1, __ATOMIC_RELEASE); 3191 + } 3192 + 3193 + return NULL; 3194 + } 3195 + 3196 + static inline unsigned long long msecs(void) 3197 + { 3198 + struct timespec ts; 3199 + unsigned long long msecs; 3200 + 3201 + clock_gettime(CLOCK_REALTIME, &ts); 3202 + msecs = ts.tv_sec * 1000ull; 3203 + msecs += ts.tv_nsec / 1000000ull; 3204 + 3205 + return msecs; 3206 + } 3207 + 3208 + static inline int count_waiters(struct epoll60_ctx *ctx) 3209 + { 3210 + return __atomic_load_n(&ctx->waiters, __ATOMIC_ACQUIRE); 3211 + } 3212 + 3213 + TEST(epoll60) 3214 + { 3215 + struct epoll60_ctx ctx = { 0 }; 3216 + pthread_t waiters[ARRAY_SIZE(ctx.evfd)]; 3217 + struct epoll_event e; 3218 + int i, n, ret; 3219 + 3220 + signal(SIGUSR1, signal_handler); 3221 + 3222 + ctx.epfd = epoll_create1(0); 3223 + ASSERT_GE(ctx.epfd, 0); 3224 + 3225 + /* Create event fds */ 3226 + for (i = 0; i < ARRAY_SIZE(ctx.evfd); i++) { 3227 + ctx.evfd[i] = eventfd(0, EFD_NONBLOCK); 3228 + ASSERT_GE(ctx.evfd[i], 0); 3229 + 3230 + e.events = EPOLLIN | EPOLLET; 3231 + e.data.fd = ctx.evfd[i]; 3232 + ASSERT_EQ(epoll_ctl(ctx.epfd, EPOLL_CTL_ADD, ctx.evfd[i], &e), 0); 3233 + } 3234 + 3235 + /* Create waiter threads */ 3236 + for (i = 0; i < ARRAY_SIZE(waiters); i++) 3237 + ASSERT_EQ(pthread_create(&waiters[i], NULL, 3238 + epoll60_wait_thread, &ctx), 0); 3239 + 3240 + for (i = 0; i < 300; i++) { 3241 + uint64_t v = 1, ms; 3242 + 3243 + /* Wait for all to be ready */ 3244 + while (__atomic_load_n(&ctx.ready, __ATOMIC_ACQUIRE) != 3245 + ARRAY_SIZE(ctx.evfd)) 3246 + ; 3247 + 3248 + /* Steady, go */ 3249 + __atomic_fetch_sub(&ctx.ready, ARRAY_SIZE(ctx.evfd), 3250 + __ATOMIC_ACQUIRE); 3251 + 3252 + /* Wait all have gone to kernel */ 3253 + while (count_waiters(&ctx) != ARRAY_SIZE(ctx.evfd)) 3254 + ; 3255 + 3256 + /* 1ms should be enough to schedule away */ 3257 + usleep(1000); 3258 + 3259 + /* Quickly signal all handles at once */ 3260 + for (n = 0; n < ARRAY_SIZE(ctx.evfd); n++) { 3261 + ret = write(ctx.evfd[n], &v, sizeof(v)); 3262 + ASSERT_EQ(ret, sizeof(v)); 3263 + } 3264 + 3265 + /* Busy loop for 1s and wait for all waiters to wake up */ 3266 + ms = msecs(); 3267 + while (count_waiters(&ctx) && msecs() < ms + 1000) 3268 + ; 3269 + 3270 + ASSERT_EQ(count_waiters(&ctx), 0); 3271 + } 3272 + ctx.stopped = 1; 3273 + /* Stop waiters */ 3274 + for (i = 0; i < ARRAY_SIZE(waiters); i++) 3275 + ret = pthread_kill(waiters[i], SIGUSR1); 3276 + for (i = 0; i < ARRAY_SIZE(waiters); i++) 3277 + pthread_join(waiters[i], NULL); 3278 + 3279 + for (i = 0; i < ARRAY_SIZE(waiters); i++) 3280 + close(ctx.evfd[i]); 3281 + close(ctx.epfd); 3138 3282 } 3139 3283 3140 3284 TEST_HARNESS_MAIN
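The epoll60 selftest above stresses edge-triggered eventfd wakeups across many waiter threads. The core kernel interface it exercises can be sketched single-threaded as below; this is a minimal standalone illustration (the `demo_epoll_eventfd` helper name is my own, not part of the selftest), not the harness-based test itself.

```c
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Arm one non-blocking eventfd behind an edge-triggered epoll watch,
 * signal it once, and return how many events a zero-timeout wait sees. */
static int demo_epoll_eventfd(void)
{
	struct epoll_event e = { .events = EPOLLIN | EPOLLET };
	uint64_t v = 1;
	int epfd, evfd, n;

	epfd = epoll_create1(0);
	evfd = eventfd(0, EFD_NONBLOCK);
	if (epfd < 0 || evfd < 0)
		return -1;

	e.data.fd = evfd;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, evfd, &e) < 0)
		return -1;

	/* The write bumps the eventfd counter and generates one edge. */
	if (write(evfd, &v, sizeof(v)) != sizeof(v))
		return -1;

	/* Edge-triggered: the pending edge is reported exactly once. */
	n = epoll_wait(epfd, &e, 1, 0);

	close(evfd);
	close(epfd);
	return n;
}
```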
+30 -2
tools/testing/selftests/ftrace/ftracetest
··· 17 17 echo " -vv Alias of -v -v (Show all results in stdout)" 18 18 echo " -vvv Alias of -v -v -v (Show all commands immediately)" 19 19 echo " --fail-unsupported Treat UNSUPPORTED as a failure" 20 + echo " --fail-unresolved Treat UNRESOLVED as a failure" 20 21 echo " -d|--debug Debug mode (trace all shell commands)" 21 22 echo " -l|--logdir <dir> Save logs on the <dir>" 22 23 echo " If <dir> is -, all logs output in console only" ··· 30 29 # kselftest skip code is 4 31 30 err_skip=4 32 31 32 + # cgroup RT scheduling prevents chrt commands from succeeding, which 33 + # induces failures in test wakeup tests. Disable for the duration of 34 + # the tests. 35 + 36 + readonly sched_rt_runtime=/proc/sys/kernel/sched_rt_runtime_us 37 + 38 + sched_rt_runtime_orig=$(cat $sched_rt_runtime) 39 + 40 + setup() { 41 + echo -1 > $sched_rt_runtime 42 + } 43 + 44 + cleanup() { 45 + echo $sched_rt_runtime_orig > $sched_rt_runtime 46 + } 47 + 33 48 errexit() { # message 34 49 echo "Error: $1" 1>&2 50 + cleanup 35 51 exit $err_ret 36 52 } 37 53 ··· 56 38 if [ `id -u` -ne 0 ]; then 57 39 errexit "this must be run by root user" 58 40 fi 41 + 42 + setup 59 43 60 44 # Utilities 61 45 absdir() { # file_path ··· 111 91 ;; 112 92 --fail-unsupported) 113 93 UNSUPPORTED_RESULT=1 94 + shift 1 95 + ;; 96 + --fail-unresolved) 97 + UNRESOLVED_RESULT=1 114 98 shift 1 115 99 ;; 116 100 --logdir|-l) ··· 181 157 DEBUG=0 182 158 VERBOSE=0 183 159 UNSUPPORTED_RESULT=0 160 + UNRESOLVED_RESULT=0 184 161 STOP_FAILURE=0 185 162 # Parse command-line options 186 163 parse_opts $* ··· 260 235 261 236 INSTANCE= 262 237 CASENO=0 238 + 263 239 testcase() { # testfile 264 240 CASENO=$((CASENO+1)) 265 241 desc=`grep "^#[ \t]*description:" $1 | cut -f2 -d:` ··· 286 260 $UNRESOLVED) 287 261 prlog " [${color_blue}UNRESOLVED${color_reset}]" 288 262 UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO" 289 - return 1 # this is a kind of bug.. something happened. 
263 + return $UNRESOLVED_RESULT # depends on use case 290 264 ;; 291 265 $UNTESTED) 292 266 prlog " [${color_blue}UNTESTED${color_reset}]" ··· 299 273 return $UNSUPPORTED_RESULT # depends on use case 300 274 ;; 301 275 $XFAIL) 302 - prlog " [${color_red}XFAIL${color_reset}]" 276 + prlog " [${color_green}XFAIL${color_reset}]" 303 277 XFAILED_CASES="$XFAILED_CASES $CASENO" 304 278 return 0 305 279 ;; ··· 431 405 prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w` 432 406 prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w` 433 407 prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w` 408 + 409 + cleanup 434 410 435 411 # if no error, return 0 436 412 exit $TOTAL_RESULT
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/fgraph-filter-stack.tc
··· 10 10 exit_unsupported 11 11 fi 12 12 13 - if [ ! -f set_ftrace_filter ]; then 14 - echo "set_ftrace_filter not found? Is dynamic ftrace not set?" 15 - exit_unsupported 16 - fi 13 + check_filter_file set_ftrace_filter 17 14 18 15 do_reset() { 19 16 if [ -e /proc/sys/kernel/stack_tracer_enabled ]; then
+2
tools/testing/selftests/ftrace/test.d/ftrace/fgraph-filter.tc
··· 9 9 exit_unsupported 10 10 fi 11 11 12 + check_filter_file set_ftrace_filter 13 + 12 14 fail() { # msg 13 15 echo $1 14 16 exit_fail
+2
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-glob.tc
··· 9 9 exit_unsupported 10 10 fi 11 11 12 + check_filter_file set_ftrace_filter 13 + 12 14 disable_tracing 13 15 clear_trace 14 16
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-notrace-pid.tc
··· 15 15 exit_unsupported 16 16 fi 17 17 18 - if [ ! -f set_ftrace_filter ]; then 19 - echo "set_ftrace_filter not found? Is function tracer not set?" 20 - exit_unsupported 21 - fi 18 + check_filter_file set_ftrace_filter 22 19 23 20 do_function_fork=1 24 21
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-pid.tc
··· 16 16 exit_unsupported 17 17 fi 18 18 19 - if [ ! -f set_ftrace_filter ]; then 20 - echo "set_ftrace_filter not found? Is function tracer not set?" 21 - exit_unsupported 22 - fi 19 + check_filter_file set_ftrace_filter 23 20 24 21 do_function_fork=1 25 22
+1 -1
tools/testing/selftests/ftrace/test.d/ftrace/func-filter-stacktrace.tc
··· 3 3 # description: ftrace - stacktrace filter command 4 4 # flags: instance 5 5 6 - [ ! -f set_ftrace_filter ] && exit_unsupported 6 + check_filter_file set_ftrace_filter 7 7 8 8 echo _do_fork:stacktrace >> set_ftrace_filter 9 9
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
··· 11 11 # 12 12 13 13 # The triggers are set within the set_ftrace_filter file 14 - if [ ! -f set_ftrace_filter ]; then 15 - echo "set_ftrace_filter not found? Is dynamic ftrace not set?" 16 - exit_unsupported 17 - fi 14 + check_filter_file set_ftrace_filter 18 15 19 16 do_reset() { 20 17 reset_ftrace_filter
+1 -1
tools/testing/selftests/ftrace/test.d/ftrace/func_mod_trace.tc
··· 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: ftrace - function trace on module 4 4 5 - [ ! -f set_ftrace_filter ] && exit_unsupported 5 + check_filter_file set_ftrace_filter 6 6 7 7 : "mod: allows to filter a non exist function" 8 8 echo 'non_exist_func:mod:non_exist_module' > set_ftrace_filter
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func_profiler.tc
··· 18 18 exit_unsupported; 19 19 fi 20 20 21 - if [ ! -f set_ftrace_filter ]; then 22 - echo "set_ftrace_filter not found? Is dynamic ftrace not set?" 23 - exit_unsupported 24 - fi 21 + check_filter_file set_ftrace_filter 25 22 26 23 if [ ! -f function_profile_enabled ]; then 27 24 echo "function_profile_enabled not found, function profiling enabled?"
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func_set_ftrace_file.tc
··· 10 10 # 11 11 12 12 # The triggers are set within the set_ftrace_filter file 13 - if [ ! -f set_ftrace_filter ]; then 14 - echo "set_ftrace_filter not found? Is dynamic ftrace not set?" 15 - exit_unsupported 16 - fi 13 + check_filter_file set_ftrace_filter 17 14 18 15 fail() { # mesg 19 16 echo $1
+2
tools/testing/selftests/ftrace/test.d/ftrace/func_stack_tracer.tc
··· 8 8 exit_unsupported 9 9 fi 10 10 11 + check_filter_file stack_trace_filter 12 + 11 13 echo > stack_trace_filter 12 14 echo 0 > stack_max_size 13 15 echo 1 > /proc/sys/kernel/stack_tracer_enabled
+1 -4
tools/testing/selftests/ftrace/test.d/ftrace/func_traceonoff_triggers.tc
··· 11 11 # 12 12 13 13 # The triggers are set within the set_ftrace_filter file 14 - if [ ! -f set_ftrace_filter ]; then 15 - echo "set_ftrace_filter not found? Is dynamic ftrace not set?" 16 - exit_unsupported 17 - fi 14 + check_filter_file set_ftrace_filter 18 15 19 16 fail() { # mesg 20 17 echo $1
+6
tools/testing/selftests/ftrace/test.d/functions
··· 1 + check_filter_file() { # check filter file introduced by dynamic ftrace 2 + if [ ! -f "$1" ]; then 3 + echo "$1 not found? Is dynamic ftrace not set?" 4 + exit_unsupported 5 + fi 6 + } 1 7 2 8 clear_trace() { # reset trace output 3 9 echo > trace
+1 -1
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc
··· 38 38 echo 0 > events/kprobes/testprobe/enable 39 39 40 40 : "Confirm the arguments is recorded in given types correctly" 41 - ARGS=`grep "testprobe" trace | sed -e 's/.* arg1=\(.*\) arg2=\(.*\) arg3=\(.*\) arg4=\(.*\)/\1 \2 \3 \4/'` 41 + ARGS=`grep "testprobe" trace | head -n 1 | sed -e 's/.* arg1=\(.*\) arg2=\(.*\) arg3=\(.*\) arg4=\(.*\)/\1 \2 \3 \4/'` 42 42 check_types $ARGS $width 43 43 44 44 : "Clear event for next loop"
+2
tools/testing/selftests/ftrace/test.d/kprobe/kprobe_ftrace.tc
··· 5 5 [ -f kprobe_events ] || exit_unsupported # this is configurable 6 6 grep "function" available_tracers || exit_unsupported # this is configurable 7 7 8 + check_filter_file set_ftrace_filter 9 + 8 10 # prepare 9 11 echo nop > current_tracer 10 12 echo _do_fork > set_ftrace_filter
+6 -6
tools/testing/selftests/gpio/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - MOUNT_CFLAGS := $(shell pkg-config --cflags mount 2>/dev/null) 4 - MOUNT_LDLIBS := $(shell pkg-config --libs mount 2>/dev/null) 5 - ifeq ($(MOUNT_LDLIBS),) 6 - MOUNT_LDLIBS := -lmount -I/usr/include/libmount 3 + VAR_CFLAGS := $(shell pkg-config --cflags mount 2>/dev/null) 4 + VAR_LDLIBS := $(shell pkg-config --libs mount 2>/dev/null) 5 + ifeq ($(VAR_LDLIBS),) 6 + VAR_LDLIBS := -lmount -I/usr/include/libmount 7 7 endif 8 8 9 - CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/ $(MOUNT_CFLAGS) 10 - LDLIBS += $(MOUNT_LDLIBS) 9 + CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/ $(VAR_CFLAGS) 10 + LDLIBS += $(VAR_LDLIBS) 11 11 12 12 TEST_PROGS := gpio-mockup.sh 13 13 TEST_FILES := gpio-mockup-sysfs.sh
+1 -1
tools/testing/selftests/intel_pstate/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE 3 - LDLIBS := $(LDLIBS) -lm 3 + LDLIBS += -lm 4 4 5 5 uname_M := $(shell uname -m 2>/dev/null || echo not) 6 6 ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+272
tools/testing/selftests/kselftest_deps.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # kselftest_deps.sh 4 + # 5 + # Checks for kselftest build dependencies on the build system. 6 + # Copyright (c) 2020 Shuah Khan <skhan@linuxfoundation.org> 7 + # 8 + # 9 + 10 + usage() 11 + { 12 + 13 + echo -e "Usage: $0 -[p] <compiler> [test_name]\n" 14 + echo -e "\tkselftest_deps.sh [-p] gcc" 15 + echo -e "\tkselftest_deps.sh [-p] gcc vm" 16 + echo -e "\tkselftest_deps.sh [-p] aarch64-linux-gnu-gcc" 17 + echo -e "\tkselftest_deps.sh [-p] aarch64-linux-gnu-gcc vm\n" 18 + echo "- Should be run in selftests directory in the kernel repo." 19 + echo "- Checks if Kselftests can be built/cross-built on a system." 20 + echo "- Parses all test/sub-test Makefile to find library dependencies." 21 + echo "- Runs compile test on a trivial C file with LDLIBS specified" 22 + echo " in the test Makefiles to identify missing library dependencies." 23 + echo "- Prints suggested target list for a system filtering out tests" 24 + echo " failed the build dependency check from the TARGETS in Selftests" 25 + echo " main Makefile when optional -p is specified." 26 + echo "- Prints pass/fail dependency check for each tests/sub-test." 27 + echo "- Prints pass/fail targets and libraries." 28 + echo "- Default: runs dependency checks on all tests." 29 + echo "- Optional test name can be specified to check dependencies for it." 30 + exit 1 31 + 32 + } 33 + 34 + # Start main() 35 + main() 36 + { 37 + 38 + base_dir=`pwd` 39 + # Make sure we're in the selftests top-level directory. 40 + if [ $(basename "$base_dir") != "selftests" ]; then 41 + echo -e "\tPlease run $0 in" 42 + echo -e "\ttools/testing/selftests directory ..." 
43 + exit 1 44 + fi 45 + 46 + print_targets=0 47 + 48 + while getopts "p" arg; do 49 + case $arg in 50 + p) 51 + print_targets=1 52 + shift;; 53 + esac 54 + done 55 + 56 + if [ $# -eq 0 ] 57 + then 58 + usage 59 + fi 60 + 61 + # Compiler 62 + CC=$1 63 + 64 + tmp_file=$(mktemp).c 65 + trap "rm -f $tmp_file.o $tmp_file $tmp_file.bin" EXIT 66 + #echo $tmp_file 67 + 68 + pass=$(mktemp).out 69 + trap "rm -f $pass" EXIT 70 + #echo $pass 71 + 72 + fail=$(mktemp).out 73 + trap "rm -f $fail" EXIT 74 + #echo $fail 75 + 76 + # Generate tmp source file for compile test 77 + cat << "EOF" > $tmp_file 78 + int main() 79 + { 80 + } 81 + EOF 82 + 83 + # Save results 84 + total_cnt=0 85 + fail_trgts=() 86 + fail_libs=() 87 + fail_cnt=0 88 + pass_trgts=() 89 + pass_libs=() 90 + pass_cnt=0 91 + 92 + # Get all TARGETS from selftests Makefile 93 + targets=$(egrep "^TARGETS +|^TARGETS =" Makefile | cut -d "=" -f2) 94 + 95 + # Single test case 96 + if [ $# -eq 2 ] 97 + then 98 + test=$2/Makefile 99 + 100 + l1_test $test 101 + l2_test $test 102 + l3_test $test 103 + 104 + print_results $1 $2 105 + exit $? 106 + fi 107 + 108 + # Level 1: LDLIBS set static. 109 + # 110 + # Find all LDLIBS set statically for all executables built by a Makefile 111 + # and filter out VAR_LDLIBS to discard the following: 112 + # gpio/Makefile:LDLIBS += $(VAR_LDLIBS) 113 + # Append space at the end of the list to append more tests. 114 + 115 + l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \ 116 + grep -v "VAR_LDLIBS" | awk -F: '{print $1}') 117 + 118 + # Level 2: LDLIBS set dynamically. 119 + # 120 + # Level 2 121 + # Some tests have multiple valid LDLIBS lines for individual sub-tests 122 + # that need dependency checks. Find them and append them to the tests 123 + # e.g: vm/Makefile:$(OUTPUT)/userfaultfd: LDLIBS += -lpthread 124 + # Filter out VAR_LDLIBS to discard the following: 125 + # memfd/Makefile:$(OUTPUT)/fuse_mnt: LDLIBS += $(VAR_LDLIBS) 126 + # Append space at the end of the list to append more tests. 127 + 128 + l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \ 129 + grep -v "VAR_LDLIBS" | awk -F: '{print $1}') 130 + 131 + # Level 3 132 + # gpio, memfd and others use pkg-config to find mount and fuse libs 133 + # respectively and save it in VAR_LDLIBS. If pkg-config doesn't find 134 + # any, VAR_LDLIBS set to default. 135 + # Use the default value and filter out pkg-config for dependency check. 136 + # e.g: 137 + # gpio/Makefile 138 + # VAR_LDLIBS := $(shell pkg-config --libs mount 2>/dev/null) 139 + # memfd/Makefile 140 + # VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null) 141 + 142 + l3_tests=$(grep -r --include=Makefile "^VAR_LDLIBS" | \ 143 + grep -v "pkg-config" | awk -F: '{print $1}') 144 + 145 + #echo $l1_tests 146 + #echo $l2_1_tests 147 + #echo $l3_tests 148 + 149 + all_tests 150 + print_results $1 $2 151 + 152 + exit $? 153 + } 154 + # end main() 155 + 156 + all_tests() 157 + { 158 + for test in $l1_tests; do 159 + l1_test $test 160 + done 161 + 162 + for test in $l2_tests; do 163 + l2_test $test 164 + done 165 + 166 + for test in $l3_tests; do 167 + l3_test $test 168 + done 169 + } 170 + 171 + # Use same parsing used for l1_tests and pick libraries this time. 172 + l1_test() 173 + { 174 + test_libs=$(grep --include=Makefile "^LDLIBS" $test | \ 175 + grep -v "VAR_LDLIBS" | \ 176 + sed -e 's/\:/ /' | \ 177 + sed -e 's/+/ /' | cut -d "=" -f 2) 178 + 179 + check_libs $test $test_libs 180 + } 181 + 182 + # Use same parsing used for l2_tests and pick libraries this time.
183 + l2_test() 184 + { 185 + test_libs=$(grep --include=Makefile ": LDLIBS" $test | \ 186 + grep -v "VAR_LDLIBS" | \ 187 + sed -e 's/\:/ /' | sed -e 's/+/ /' | \ 188 + cut -d "=" -f 2) 189 + 190 + check_libs $test $test_libs 191 + } 192 + 193 + l3_test() 194 + { 195 + test_libs=$(grep --include=Makefile "^VAR_LDLIBS" $test | \ 196 + grep -v "pkg-config" | sed -e 's/\:/ /' | 197 + sed -e 's/+/ /' | cut -d "=" -f 2) 198 + 199 + check_libs $test $test_libs 200 + } 201 + 202 + check_libs() 203 + { 204 + 205 + if [[ ! -z "${test_libs// }" ]] 206 + then 207 + 208 + #echo $test_libs 209 + 210 + for lib in $test_libs; do 211 + 212 + let total_cnt+=1 213 + $CC -o $tmp_file.bin $lib $tmp_file > /dev/null 2>&1 214 + if [ $? -ne 0 ]; then 215 + echo "FAIL: $test dependency check: $lib" >> $fail 216 + let fail_cnt+=1 217 + fail_libs+="$lib " 218 + fail_target=$(echo "$test" | cut -d "/" -f1) 219 + fail_trgts+="$fail_target " 220 + targets=$(echo "$targets" | grep -v "$fail_target") 221 + else 222 + echo "PASS: $test dependency check passed $lib" >> $pass 223 + let pass_cnt+=1 224 + pass_libs+="$lib " 225 + pass_trgts+="$(echo "$test" | cut -d "/" -f1) " 226 + fi 227 + 228 + done 229 + fi 230 + } 231 + 232 + print_results() 233 + { 234 + echo -e "========================================================"; 235 + echo -e "Kselftest Dependency Check for [$0 $1 $2] results..." 
236 + 237 + if [ $print_targets -ne 0 ] 238 + then 239 + echo -e "Suggested Selftest Targets for your configuration:" 240 + echo -e "$targets"; 241 + fi 242 + 243 + echo -e "========================================================"; 244 + echo -e "Checked tests defining LDLIBS dependencies" 245 + echo -e "--------------------------------------------------------"; 246 + echo -e "Total tests with Dependencies:" 247 + echo -e "$total_cnt Pass: $pass_cnt Fail: $fail_cnt"; 248 + 249 + if [ $pass_cnt -ne 0 ]; then 250 + echo -e "--------------------------------------------------------"; 251 + cat $pass 252 + echo -e "--------------------------------------------------------"; 253 + echo -e "Targets passed build dependency check on system:" 254 + echo -e "$(echo "$pass_trgts" | xargs -n1 | sort -u | xargs)" 255 + fi 256 + 257 + if [ $fail_cnt -ne 0 ]; then 258 + echo -e "--------------------------------------------------------"; 259 + cat $fail 260 + echo -e "--------------------------------------------------------"; 261 + echo -e "Targets failed build dependency check on system:" 262 + echo -e "$(echo "$fail_trgts" | xargs -n1 | sort -u | xargs)" 263 + echo -e "--------------------------------------------------------"; 264 + echo -e "Missing libraries system" 265 + echo -e "$(echo "$fail_libs" | xargs -n1 | sort -u | xargs)" 266 + fi 267 + 268 + echo -e "--------------------------------------------------------"; 269 + echo -e "========================================================"; 270 + } 271 + 272 + main "$@"
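The core probe in kselftest_deps.sh is a link test: compile a trivial translation unit against each LDLIBS entry and record whether the linker finds the library. A rough C rendering of that idea is below; the `check_lib` helper and the `/tmp` paths are my own illustration, it assumes a `cc` driver on `PATH`, and the real script of course does this in shell with the cross-compiler passed on the command line.

```c
#include <stdio.h>
#include <stdlib.h>

/* Write a trivial translation unit and try to link it with the given
 * -l flag -- the same per-LDLIBS probe kselftest_deps.sh performs.
 * Returns 0 when the library links, nonzero otherwise. */
static int check_lib(const char *ldflag)
{
	char cmd[256];
	FILE *f = fopen("/tmp/dep_probe.c", "w");

	if (!f)
		return -1;
	fputs("int main(void) { return 0; }\n", f);
	fclose(f);

	/* Discard compiler chatter; only the exit status matters. */
	snprintf(cmd, sizeof(cmd),
		 "cc -o /tmp/dep_probe.bin /tmp/dep_probe.c %s >/dev/null 2>&1",
		 ldflag);
	return system(cmd);
}
```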
+28 -1
tools/testing/selftests/kvm/Makefile
··· 5 5 6 6 top_srcdir = ../../../.. 7 7 KSFT_KHDR_INSTALL := 1 8 + 9 + # For cross-builds to work, UNAME_M has to map to ARCH and arch specific 10 + # directories and targets in this Makefile. "uname -m" doesn't map to 11 + # arch specific sub-directory names. 12 + # 13 + # UNAME_M variable is used to run the compiles pointing to the right arch 14 + # directories and build the right targets for these supported architectures. 15 + # 16 + # TEST_GEN_PROGS and LIBKVM are set using UNAME_M variable. 17 + # LINUX_TOOL_ARCH_INCLUDE is set using ARCH variable. 18 + # 19 + # x86_64 targets are named to include x86_64 as a suffix and directories 20 + # for includes are in x86_64 sub-directory. s390x and aarch64 follow the 21 + # same convention. "uname -m" doesn't result in the correct mapping for 22 + # s390x and aarch64. 23 + # 24 + # No change necessary for x86_64 8 25 UNAME_M := $(shell uname -m) 26 + 27 + # Set UNAME_M for arm64 compile/install to work 28 + ifeq ($(ARCH),arm64) 29 + UNAME_M := aarch64 30 + endif 31 + # Set UNAME_M s390x compile/install to work 32 + ifeq ($(ARCH),s390) 33 + UNAME_M := s390x 34 + endif 9 35 10 36 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c lib/test_util.c 11 37 LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c ··· 79 53 INSTALL_HDR_PATH = $(top_srcdir)/usr 80 54 LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/ 81 55 LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include 82 - LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include 56 + LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include 83 57 CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \ 84 58 -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \ 85 59 -I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \ ··· 110 84 $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ) 111 85 $(AR) crs $@ $^ 112 86 87 + x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) 113 88 all: $(STATIC_LIBS) 114 89 $(TEST_GEN_PROGS): $(STATIC_LIBS) 115 90
+2 -2
tools/testing/selftests/kvm/include/evmcs.h
··· 219 219 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK \ 220 220 (~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1)) 221 221 222 - struct hv_enlightened_vmcs *current_evmcs; 223 - struct hv_vp_assist_page *current_vp_assist; 222 + extern struct hv_enlightened_vmcs *current_evmcs; 223 + extern struct hv_vp_assist_page *current_vp_assist; 224 224 225 225 int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id); 226 226
+3
tools/testing/selftests/kvm/lib/x86_64/vmx.c
··· 17 17 18 18 bool enable_evmcs; 19 19 20 + struct hv_enlightened_vmcs *current_evmcs; 21 + struct hv_vp_assist_page *current_vp_assist; 22 + 20 23 struct eptPageTableEntry { 21 24 uint64_t readable:1; 22 25 uint64_t writable:1;
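The evmcs.h/vmx.c pair of hunks above fixes a classic C pitfall: a variable *defined* in a header that several translation units include yields one definition per unit, which is a multiple-definition link error (the default since GCC 10 switched to `-fno-common`). The fix keeps only an `extern` declaration in the header and moves the single definition into one .c file. A one-file sketch of the split, using a stand-in struct name of my own rather than the real `hv_enlightened_vmcs`:

```c
#include <stddef.h>

struct evmcs_like { int dummy; };

/* Header side: a declaration only -- no storage is allocated here,
 * so any number of translation units may include it. */
extern struct evmcs_like *current_evmcs_like;

/* Exactly one .c file side: the single definition (zero-initialized). */
struct evmcs_like *current_evmcs_like;
```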
+12 -2
tools/testing/selftests/memfd/Makefile
··· 8 8 TEST_PROGS := run_fuse_test.sh run_hugetlbfs_test.sh 9 9 TEST_GEN_FILES := fuse_test fuse_mnt 10 10 11 - fuse_mnt.o: CFLAGS += $(shell pkg-config fuse --cflags) 11 + VAR_CFLAGS := $(shell pkg-config fuse --cflags 2>/dev/null) 12 + ifeq ($(VAR_CFLAGS),) 13 + VAR_CFLAGS := -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse 14 + endif 15 + 16 + VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null) 17 + ifeq ($(VAR_LDLIBS),) 18 + VAR_LDLIBS := -lfuse -pthread 19 + endif 20 + 21 + fuse_mnt.o: CFLAGS += $(VAR_CFLAGS) 12 22 13 23 include ../lib.mk 14 24 15 - $(OUTPUT)/fuse_mnt: LDLIBS += $(shell pkg-config fuse --libs) 25 + $(OUTPUT)/fuse_mnt: LDLIBS += $(VAR_LDLIBS) 16 26 17 27 $(OUTPUT)/memfd_test: memfd_test.c common.c 18 28 $(OUTPUT)/fuse_test: fuse_test.c common.c
+5 -2
tools/testing/selftests/net/tcp_mmap.c
··· 165 165 socklen_t zc_len = sizeof(zc); 166 166 int res; 167 167 168 + memset(&zc, 0, sizeof(zc)); 168 169 zc.address = (__u64)((unsigned long)addr); 169 170 zc.length = chunk_size; 170 - zc.recv_skip_hint = 0; 171 + 171 172 res = getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, 172 173 &zc, &zc_len); 173 174 if (res == -1) ··· 282 281 static void do_accept(int fdlisten) 283 282 { 284 283 pthread_attr_t attr; 284 + int rcvlowat; 285 285 286 286 pthread_attr_init(&attr); 287 287 pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED); 288 288 289 + rcvlowat = chunk_size; 289 290 if (setsockopt(fdlisten, SOL_SOCKET, SO_RCVLOWAT, 290 - &chunk_size, sizeof(chunk_size)) == -1) { 291 + &rcvlowat, sizeof(rcvlowat)) == -1) { 291 292 perror("setsockopt SO_RCVLOWAT"); 292 293 } 293 294
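The tcp_mmap.c hunk replaces piecemeal field initialization with a `memset` of the whole `struct tcp_zerocopy_receive` before the `getsockopt()` call: such ABI structs can grow fields over kernel releases, and leaving stack garbage in fields the caller doesn't know about risks misinterpretation or rejection. The shape of the pattern, with a stand-in struct of my own rather than the real UAPI layout:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for a kernel ABI struct that may gain fields over time. */
struct zc_args {
	uint64_t address;
	uint32_t length;
	uint32_t recv_skip_hint;
	uint32_t future_field;	/* unknown to older callers */
};

static void prepare_args(struct zc_args *zc, uint64_t addr, uint32_t len)
{
	/* Zero everything first so fields this caller never heard of
	 * are guaranteed to be 0, then fill in the known ones. */
	memset(zc, 0, sizeof(*zc));
	zc->address = addr;
	zc->length = len;
}
```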
+51 -3
tools/testing/selftests/wireguard/netns.sh
··· 48 48 exec 2>/dev/null 49 49 printf "$orig_message_cost" > /proc/sys/net/core/message_cost 50 50 ip0 link del dev wg0 51 + ip0 link del dev wg1 51 52 ip1 link del dev wg0 53 + ip1 link del dev wg1 52 54 ip2 link del dev wg0 55 + ip2 link del dev wg1 53 56 local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)" 54 57 [[ -n $to_kill ]] && kill $to_kill 55 58 pp ip netns del $netns1 ··· 80 77 key1="$(pp wg genkey)" 81 78 key2="$(pp wg genkey)" 82 79 key3="$(pp wg genkey)" 80 + key4="$(pp wg genkey)" 83 81 pub1="$(pp wg pubkey <<<"$key1")" 84 82 pub2="$(pp wg pubkey <<<"$key2")" 85 83 pub3="$(pp wg pubkey <<<"$key3")" 84 + pub4="$(pp wg pubkey <<<"$key4")" 86 85 psk="$(pp wg genpsk)" 87 86 [[ -n $key1 && -n $key2 && -n $psk ]] 88 87 89 88 configure_peers() { 90 89 ip1 addr add 192.168.241.1/24 dev wg0 91 - ip1 addr add fd00::1/24 dev wg0 90 + ip1 addr add fd00::1/112 dev wg0 92 91 93 92 ip2 addr add 192.168.241.2/24 dev wg0 94 - ip2 addr add fd00::2/24 dev wg0 93 + ip2 addr add fd00::2/112 dev wg0 95 94 96 95 n1 wg set wg0 \ 97 96 private-key <(echo "$key1") \ ··· 235 230 n1 wg set wg0 private-key <(echo "$key3") 236 231 n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove 237 232 n1 ping -W 1 -c 1 192.168.241.2 233 + n2 wg set wg0 peer "$pub3" remove 238 234 239 - ip1 link del wg0 235 + # Test that we can route wg through wg 236 + ip1 addr flush dev wg0 237 + ip2 addr flush dev wg0 238 + ip1 addr add fd00::5:1/112 dev wg0 239 + ip2 addr add fd00::5:2/112 dev wg0 240 + n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips fd00::5:2/128 endpoint 127.0.0.1:2 241 + n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips fd00::5:1/128 endpoint 127.212.121.99:9998 242 + ip1 link add wg1 type wireguard 243 + ip2 link add wg1 type wireguard 244 + ip1 addr add 192.168.241.1/24 dev wg1 245 + ip1 addr add fd00::1/112 dev wg1 246 + ip2 addr add 192.168.241.2/24 dev wg1 247 + ip2 addr add fd00::2/112 dev wg1 248 + ip1 link set mtu 1340 up dev wg1 249 + ip2 link set mtu 1340 up dev wg1 250 + n1 wg set wg1 listen-port 5 private-key <(echo "$key3") peer "$pub4" allowed-ips 192.168.241.2/32,fd00::2/128 endpoint [fd00::5:2]:5 251 + n2 wg set wg1 listen-port 5 private-key <(echo "$key4") peer "$pub3" allowed-ips 192.168.241.1/32,fd00::1/128 endpoint [fd00::5:1]:5 252 + tests 253 + # Try to set up a routing loop between the two namespaces 254 + ip1 link set netns $netns0 dev wg1 255 + ip0 addr add 192.168.241.1/24 dev wg1 256 + ip0 link set up dev wg1 257 + n0 ping -W 1 -c 1 192.168.241.2 258 + n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7 240 259 ip2 link del wg0 260 + ip2 link del wg1 261 + ! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel 262 + 263 + ip0 link del wg1 264 + ip1 link del wg0 241 265 242 266 # Test using NAT. We now change the topology to this: 243 267 # ┌────────────────────────────────────────┐ ┌────────────────────────────────────────────────┐ ┌────────────────────────────────────────┐ ··· 315 281 pp sleep 3 316 282 n2 ping -W 1 -c 1 192.168.241.1 317 283 n1 wg set wg0 peer "$pub2" persistent-keepalive 0 284 + 285 + # Test that onion routing works, even when it loops 286 + n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5 287 + ip1 addr add 192.168.242.1/24 dev wg0 288 + ip2 link add wg1 type wireguard 289 + ip2 addr add 192.168.242.2/24 dev wg1 290 + n2 wg set wg1 private-key <(echo "$key3") listen-port 5 peer "$pub1" allowed-ips 192.168.242.1/32 291 + ip2 link set wg1 up 292 + n1 ping -W 1 -c 1 192.168.242.2 293 + ip2 link del wg1 294 + n1 wg set wg0 peer "$pub3" endpoint 192.168.242.2:5 295 + ! n1 ping -W 1 -c 1 192.168.242.2 || false # Should not crash kernel 296 + n1 wg set wg0 peer "$pub3" remove 297 + ip1 addr del 192.168.242.1/24 dev wg0 318 298 319 299 # Do a wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs. 320 300 ip1 -6 addr add fc00::9/96 dev vethc
+1
tools/testing/selftests/wireguard/qemu/arch/powerpc64le.config
··· 10 10 CONFIG_CMDLINE="console=hvc0 wg.success=hvc1" 11 11 CONFIG_SECTION_MISMATCH_WARN_ONLY=y 12 12 CONFIG_FRAME_WARN=1280 13 + CONFIG_THREAD_SHIFT=14
-1
tools/testing/selftests/wireguard/qemu/debug.config
··· 25 25 CONFIG_KASAN_INLINE=y 26 26 CONFIG_UBSAN=y 27 27 CONFIG_UBSAN_SANITIZE_ALL=y 28 - CONFIG_UBSAN_NO_ALIGNMENT=y 29 28 CONFIG_UBSAN_NULL=y 30 29 CONFIG_DEBUG_KMEMLEAK=y 31 30 CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=8192
+6 -2
virt/kvm/arm/hyp/aarch32.c
··· 125 125 */ 126 126 void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr) 127 127 { 128 + u32 pc = *vcpu_pc(vcpu); 128 129 bool is_thumb; 129 130 130 131 is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT); 131 132 if (is_thumb && !is_wide_instr) 132 - *vcpu_pc(vcpu) += 2; 133 + pc += 2; 133 134 else 134 - *vcpu_pc(vcpu) += 4; 135 + pc += 4; 136 + 137 + *vcpu_pc(vcpu) = pc; 138 + 135 139 kvm_adjust_itstate(vcpu); 136 140 }
+40
virt/kvm/arm/psci.c
··· 186 186 kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_RESET); 187 187 } 188 188 189 + static void kvm_psci_narrow_to_32bit(struct kvm_vcpu *vcpu) 190 + { 191 + int i; 192 + 193 + /* 194 + * Zero the input registers' upper 32 bits. They will be fully 195 + * zeroed on exit, so we're fine changing them in place. 196 + */ 197 + for (i = 1; i < 4; i++) 198 + vcpu_set_reg(vcpu, i, lower_32_bits(vcpu_get_reg(vcpu, i))); 199 + } 200 + 201 + static unsigned long kvm_psci_check_allowed_function(struct kvm_vcpu *vcpu, u32 fn) 202 + { 203 + switch(fn) { 204 + case PSCI_0_2_FN64_CPU_SUSPEND: 205 + case PSCI_0_2_FN64_CPU_ON: 206 + case PSCI_0_2_FN64_AFFINITY_INFO: 207 + /* Disallow these functions for 32bit guests */ 208 + if (vcpu_mode_is_32bit(vcpu)) 209 + return PSCI_RET_NOT_SUPPORTED; 210 + break; 211 + } 212 + 213 + return 0; 214 + } 215 + 189 216 static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu) 190 217 { 191 218 struct kvm *kvm = vcpu->kvm; 192 219 u32 psci_fn = smccc_get_function(vcpu); 193 220 unsigned long val; 194 221 int ret = 1; 222 + 223 + val = kvm_psci_check_allowed_function(vcpu, psci_fn); 224 + if (val) 225 + goto out; 195 226 196 227 switch (psci_fn) { 197 228 case PSCI_0_2_FN_PSCI_VERSION: ··· 241 210 val = PSCI_RET_SUCCESS; 242 211 break; 243 212 case PSCI_0_2_FN_CPU_ON: 213 + kvm_psci_narrow_to_32bit(vcpu); 214 + fallthrough; 244 215 case PSCI_0_2_FN64_CPU_ON: 245 216 mutex_lock(&kvm->lock); 246 217 val = kvm_psci_vcpu_on(vcpu); 247 218 mutex_unlock(&kvm->lock); 248 219 break; 249 220 case PSCI_0_2_FN_AFFINITY_INFO: 221 + kvm_psci_narrow_to_32bit(vcpu); 222 + fallthrough; 250 223 case PSCI_0_2_FN64_AFFINITY_INFO: 251 224 val = kvm_psci_vcpu_affinity_info(vcpu); 252 225 break; ··· 291 256 break; 292 257 } 293 258 259 + out: 294 260 smccc_set_retval(vcpu, val, 0, 0, 0); 295 261 return ret; 296 262 } ··· 309 273 break; 310 274 case PSCI_1_0_FN_PSCI_FEATURES: 311 275 feature = smccc_get_arg1(vcpu); 276 + val = kvm_psci_check_allowed_function(vcpu, feature); 277 + if (val) 278 + break; 279 + 312 280 switch(feature) { 313 281 case PSCI_0_2_FN_PSCI_VERSION: 314 282 case PSCI_0_2_FN_CPU_SUSPEND:
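`kvm_psci_narrow_to_32bit()` in the psci.c hunk masks off the upper halves of the guest's argument registers, since an AArch32 caller only defines the low 32 bits of each PSCI argument. The masking primitive is the kernel's `lower_32_bits()`; a user-space sketch of the same narrowing (with a plain array standing in for the vCPU register accessors):

```c
#include <stdint.h>

/* User-space equivalent of the kernel's lower_32_bits(): keep bits [31:0]. */
static inline uint32_t lower_32_bits(uint64_t v)
{
	return (uint32_t)(v & 0xffffffffULL);
}

/* Narrow argument registers x1..x3 in place, leaving x0 (the function
 * ID) alone -- mirroring the loop bounds in kvm_psci_narrow_to_32bit(). */
static void narrow_regs_to_32bit(uint64_t regs[4])
{
	int i;

	for (i = 1; i < 4; i++)
		regs[i] = lower_32_bits(regs[i]);
}
```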
+16 -3
virt/kvm/arm/vgic/vgic-init.c
··· 294 294 } 295 295 } 296 296 297 - if (vgic_has_its(kvm)) { 297 + if (vgic_has_its(kvm)) 298 298 vgic_lpi_translation_cache_init(kvm); 299 + 300 + /* 301 + * If we have GICv4.1 enabled, unconditionnaly request enable the 302 + * v4 support so that we get HW-accelerated vSGIs. Otherwise, only 303 + * enable it if we present a virtual ITS to the guest. 304 + */ 305 + if (vgic_supports_direct_msis(kvm)) { 299 306 ret = vgic_v4_init(kvm); 300 307 if (ret) 301 308 goto out; ··· 355 348 { 356 349 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 357 350 351 + /* 352 + * Retire all pending LPIs on this vcpu anyway as we're 353 + * going to destroy it. 354 + */ 355 + vgic_flush_pending_lpis(vcpu); 356 + 358 357 INIT_LIST_HEAD(&vgic_cpu->ap_list_head); 359 358 } 360 359 ··· 372 359 373 360 vgic_debug_destroy(kvm); 374 361 375 - kvm_vgic_dist_destroy(kvm); 376 - 377 362 kvm_for_each_vcpu(i, vcpu, kvm) 378 363 kvm_vgic_vcpu_destroy(vcpu); 364 + 365 + kvm_vgic_dist_destroy(kvm); 379 366 } 380 367 381 368 void kvm_vgic_destroy(struct kvm *kvm)
+9 -2
virt/kvm/arm/vgic/vgic-its.c
··· 96 96 * We "cache" the configuration table entries in our struct vgic_irq's. 97 97 * However we only have those structs for mapped IRQs, so we read in 98 98 * the respective config data from memory here upon mapping the LPI. 99 + * 100 + * Should any of these fail, behave as if we couldn't create the LPI 101 + * by dropping the refcount and returning the error. 99 102 */ 100 103 ret = update_lpi_config(kvm, irq, NULL, false); 101 - if (ret) 104 + if (ret) { 105 + vgic_put_irq(kvm, irq); 102 106 return ERR_PTR(ret); 107 + } 103 108 104 109 ret = vgic_v3_lpi_sync_pending_status(kvm, irq); 105 - if (ret) 110 + if (ret) { 111 + vgic_put_irq(kvm, irq); 106 112 return ERR_PTR(ret); 113 + } 107 114 108 115 return irq; 109 116 }
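The vgic-its.c hunk makes the two follow-up steps drop the reference taken when the LPI was created before returning the error, so a failed setup no longer leaks a count. The generic shape of that error-path discipline, in a deliberately simplified sketch (the `obj_*` names are my own, not vgic API):

```c
/* A toy refcounted object: get/put bump a plain counter. */
struct obj {
	int refcount;
};

static void obj_get(struct obj *o) { o->refcount++; }
static void obj_put(struct obj *o) { o->refcount--; }

static int setup_ok(struct obj *o)   { (void)o; return 0;  }
static int setup_fail(struct obj *o) { (void)o; return -1; }

/* Take a reference, then run a setup step; on failure, drop the
 * reference before propagating the error so no count is leaked --
 * the same fix applied to vgic_add_lpi()'s error paths. */
static int obj_init(struct obj *o, int (*setup)(struct obj *))
{
	int ret;

	obj_get(o);
	ret = setup(o);
	if (ret) {
		obj_put(o);
		return ret;
	}
	return 0;
}
```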
+10 -6
virt/kvm/arm/vgic/vgic-mmio-v2.c
··· 409 409 NULL, vgic_mmio_uaccess_write_v2_group, 1, 410 410 VGIC_ACCESS_32bit), 411 411 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_SET, 412 - vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1, 412 + vgic_mmio_read_enable, vgic_mmio_write_senable, 413 + NULL, vgic_uaccess_write_senable, 1, 413 414 VGIC_ACCESS_32bit), 414 415 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_CLEAR, 415 - vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1, 416 + vgic_mmio_read_enable, vgic_mmio_write_cenable, 417 + NULL, vgic_uaccess_write_cenable, 1, 416 418 VGIC_ACCESS_32bit), 417 419 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET, 418 - vgic_mmio_read_pending, vgic_mmio_write_spending, NULL, NULL, 1, 420 + vgic_mmio_read_pending, vgic_mmio_write_spending, 421 + NULL, vgic_uaccess_write_spending, 1, 419 422 VGIC_ACCESS_32bit), 420 423 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR, 421 - vgic_mmio_read_pending, vgic_mmio_write_cpending, NULL, NULL, 1, 424 + vgic_mmio_read_pending, vgic_mmio_write_cpending, 425 + NULL, vgic_uaccess_write_cpending, 1, 422 426 VGIC_ACCESS_32bit), 423 427 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET, 424 428 vgic_mmio_read_active, vgic_mmio_write_sactive, 425 - NULL, vgic_mmio_uaccess_write_sactive, 1, 429 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 1, 426 430 VGIC_ACCESS_32bit), 427 431 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_CLEAR, 428 432 vgic_mmio_read_active, vgic_mmio_write_cactive, 429 - NULL, vgic_mmio_uaccess_write_cactive, 1, 433 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive, 1, 430 434 VGIC_ACCESS_32bit), 431 435 REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PRI, 432 436 vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+18 -13
virt/kvm/arm/vgic/vgic-mmio-v3.c
··· 50 50 
51 51 bool vgic_supports_direct_msis(struct kvm *kvm)
52 52 {
53 - return kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm);
53 + return (kvm_vgic_global_state.has_gicv4_1 ||
54 + (kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm)));
54 55 }
55 56 
56 57 /*
··· 539 538 vgic_mmio_read_group, vgic_mmio_write_group, NULL, NULL, 1,
540 539 VGIC_ACCESS_32bit),
541 540 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISENABLER,
542 - vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1,
541 + vgic_mmio_read_enable, vgic_mmio_write_senable,
542 + NULL, vgic_uaccess_write_senable, 1,
543 543 VGIC_ACCESS_32bit),
544 544 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICENABLER,
545 - vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
545 + vgic_mmio_read_enable, vgic_mmio_write_cenable,
546 + NULL, vgic_uaccess_write_cenable, 1,
546 547 VGIC_ACCESS_32bit),
547 548 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISPENDR,
548 549 vgic_mmio_read_pending, vgic_mmio_write_spending,
··· 556 553 VGIC_ACCESS_32bit),
557 554 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER,
558 555 vgic_mmio_read_active, vgic_mmio_write_sactive,
559 - NULL, vgic_mmio_uaccess_write_sactive, 1,
556 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 1,
560 557 VGIC_ACCESS_32bit),
561 558 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICACTIVER,
562 559 vgic_mmio_read_active, vgic_mmio_write_cactive,
563 - NULL, vgic_mmio_uaccess_write_cactive,
560 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive,
564 561 1, VGIC_ACCESS_32bit),
565 562 REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IPRIORITYR,
566 563 vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
··· 612 609 REGISTER_DESC_WITH_LENGTH(SZ_64K + GICR_IGROUPR0,
613 610 vgic_mmio_read_group, vgic_mmio_write_group, 4,
614 611 VGIC_ACCESS_32bit),
615 - REGISTER_DESC_WITH_LENGTH(SZ_64K + GICR_ISENABLER0,
616 - vgic_mmio_read_enable, vgic_mmio_write_senable, 4,
612 + REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ISENABLER0,
613 + vgic_mmio_read_enable, vgic_mmio_write_senable,
614 + NULL, vgic_uaccess_write_senable, 4,
617 615 VGIC_ACCESS_32bit),
618 - REGISTER_DESC_WITH_LENGTH(SZ_64K + GICR_ICENABLER0,
619 - vgic_mmio_read_enable, vgic_mmio_write_cenable, 4,
616 + REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ICENABLER0,
617 + vgic_mmio_read_enable, vgic_mmio_write_cenable,
618 + NULL, vgic_uaccess_write_cenable, 4,
620 619 VGIC_ACCESS_32bit),
621 620 REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ISPENDR0,
622 621 vgic_mmio_read_pending, vgic_mmio_write_spending,
··· 630 625 VGIC_ACCESS_32bit),
631 626 REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ISACTIVER0,
632 627 vgic_mmio_read_active, vgic_mmio_write_sactive,
633 - NULL, vgic_mmio_uaccess_write_sactive,
634 - 4, VGIC_ACCESS_32bit),
628 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 4,
629 + VGIC_ACCESS_32bit),
635 630 REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ICACTIVER0,
636 631 vgic_mmio_read_active, vgic_mmio_write_cactive,
637 - NULL, vgic_mmio_uaccess_write_cactive,
638 - 4, VGIC_ACCESS_32bit),
632 + vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive, 4,
633 + VGIC_ACCESS_32bit),
639 634 REGISTER_DESC_WITH_LENGTH(SZ_64K + GICR_IPRIORITYR0,
640 635 vgic_mmio_read_priority, vgic_mmio_write_priority, 32,
641 636 VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
+170 -58
virt/kvm/arm/vgic/vgic-mmio.c
··· 184 184 }
185 185 }
186 186 
187 + int vgic_uaccess_write_senable(struct kvm_vcpu *vcpu,
188 + gpa_t addr, unsigned int len,
189 + unsigned long val)
190 + {
191 + u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
192 + int i;
193 + unsigned long flags;
194 + 
195 + for_each_set_bit(i, &val, len * 8) {
196 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
197 + 
198 + raw_spin_lock_irqsave(&irq->irq_lock, flags);
199 + irq->enabled = true;
200 + vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
201 + 
202 + vgic_put_irq(vcpu->kvm, irq);
203 + }
204 + 
205 + return 0;
206 + }
207 + 
208 + int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
209 + gpa_t addr, unsigned int len,
210 + unsigned long val)
211 + {
212 + u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
213 + int i;
214 + unsigned long flags;
215 + 
216 + for_each_set_bit(i, &val, len * 8) {
217 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
218 + 
219 + raw_spin_lock_irqsave(&irq->irq_lock, flags);
220 + irq->enabled = false;
221 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
222 + 
223 + vgic_put_irq(vcpu->kvm, irq);
224 + }
225 + 
226 + return 0;
227 + }
228 + 
187 229 unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
188 230 gpa_t addr, unsigned int len)
189 231 {
··· 261 219 return value;
262 220 }
263 221 
264 - /* Must be called with irq->irq_lock held */
265 - static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
266 - bool is_uaccess)
267 - {
268 - if (is_uaccess)
269 - return;
270 - 
271 - irq->pending_latch = true;
272 - vgic_irq_set_phys_active(irq, true);
273 - }
274 - 
275 222 static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
276 223 {
277 224 return (vgic_irq_is_sgi(irq->intid) &&
··· 271 240 gpa_t addr, unsigned int len,
272 241 unsigned long val)
273 242 {
274 - bool is_uaccess = !kvm_get_running_vcpu();
275 243 u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
276 244 int i;
277 245 unsigned long flags;
··· 300 270 continue;
301 271 }
302 272 
273 + irq->pending_latch = true;
303 274 if (irq->hw)
304 - vgic_hw_irq_spending(vcpu, irq, is_uaccess);
305 - else
306 - irq->pending_latch = true;
275 + vgic_irq_set_phys_active(irq, true);
276 + 
307 277 vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
308 278 vgic_put_irq(vcpu->kvm, irq);
309 279 }
310 280 }
311 281 
312 - /* Must be called with irq->irq_lock held */
313 - static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
314 - bool is_uaccess)
282 + int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
283 + gpa_t addr, unsigned int len,
284 + unsigned long val)
315 285 {
316 - if (is_uaccess)
317 - return;
286 + u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
287 + int i;
288 + unsigned long flags;
318 289 
290 + for_each_set_bit(i, &val, len * 8) {
291 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
292 + 
293 + raw_spin_lock_irqsave(&irq->irq_lock, flags);
294 + irq->pending_latch = true;
295 + 
296 + /*
297 + * GICv2 SGIs are terribly broken. We can't restore
298 + * the source of the interrupt, so just pick the vcpu
299 + * itself as the source...
300 + */
301 + if (is_vgic_v2_sgi(vcpu, irq))
302 + irq->source |= BIT(vcpu->vcpu_id);
303 + 
304 + vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
305 + 
306 + vgic_put_irq(vcpu->kvm, irq);
307 + }
308 + 
309 + return 0;
310 + }
311 + 
312 + /* Must be called with irq->irq_lock held */
313 + static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
314 + {
319 315 irq->pending_latch = false;
320 316 
321 317 /*
··· 364 308 gpa_t addr, unsigned int len,
365 309 unsigned long val)
366 310 {
367 - bool is_uaccess = !kvm_get_running_vcpu();
368 311 u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
369 312 int i;
370 313 unsigned long flags;
··· 394 339 }
395 340 
396 341 if (irq->hw)
397 - vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
342 + vgic_hw_irq_cpending(vcpu, irq);
398 343 else
399 344 irq->pending_latch = false;
400 345 
··· 403 348 }
404 349 }
405 350 
406 - unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
407 - gpa_t addr, unsigned int len)
351 + int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
352 + gpa_t addr, unsigned int len,
353 + unsigned long val)
354 + {
355 + u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
356 + int i;
357 + unsigned long flags;
358 + 
359 + for_each_set_bit(i, &val, len * 8) {
360 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
361 + 
362 + raw_spin_lock_irqsave(&irq->irq_lock, flags);
363 + /*
364 + * More fun with GICv2 SGIs! If we're clearing one of them
365 + * from userspace, which source vcpu to clear? Let's not
366 + * even think of it, and blow the whole set.
367 + */
368 + if (is_vgic_v2_sgi(vcpu, irq))
369 + irq->source = 0;
370 + 
371 + irq->pending_latch = false;
372 + 
373 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
374 + 
375 + vgic_put_irq(vcpu->kvm, irq);
376 + }
377 + 
378 + return 0;
379 + }
380 + 
381 + /*
382 + * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
383 + * is not queued on some running VCPU's LRs, because then the change to the
384 + * active state can be overwritten when the VCPU's state is synced coming back
385 + * from the guest.
386 + *
387 + * For shared interrupts as well as GICv3 private interrupts, we have to
388 + * stop all the VCPUs because interrupts can be migrated while we don't hold
389 + * the IRQ locks and we don't want to be chasing moving targets.
390 + *
391 + * For GICv2 private interrupts we don't have to do anything because
392 + * userspace accesses to the VGIC state already require all VCPUs to be
393 + * stopped, and only the VCPU itself can modify its private interrupts
394 + * active state, which guarantees that the VCPU is not running.
395 + */
396 + static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
397 + {
398 + if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
399 + intid >= VGIC_NR_PRIVATE_IRQS)
400 + kvm_arm_halt_guest(vcpu->kvm);
401 + }
402 + 
403 + /* See vgic_access_active_prepare */
404 + static void vgic_access_active_finish(struct kvm_vcpu *vcpu, u32 intid)
405 + {
406 + if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
407 + intid >= VGIC_NR_PRIVATE_IRQS)
408 + kvm_arm_resume_guest(vcpu->kvm);
409 + }
410 + 
411 + static unsigned long __vgic_mmio_read_active(struct kvm_vcpu *vcpu,
412 + gpa_t addr, unsigned int len)
408 413 {
409 414 u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
410 415 u32 value = 0;
··· 474 359 for (i = 0; i < len * 8; i++) {
475 360 struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
476 361 
362 + /*
363 + * Even for HW interrupts, don't evaluate the HW state as
364 + * all the guest is interested in is the virtual state.
365 + */
477 366 if (irq->active)
478 367 value |= (1U << i);
479 368 
··· 485 366 }
486 367 
487 368 return value;
369 + }
370 + 
371 + unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
372 + gpa_t addr, unsigned int len)
373 + {
374 + u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
375 + u32 val;
376 + 
377 + mutex_lock(&vcpu->kvm->lock);
378 + vgic_access_active_prepare(vcpu, intid);
379 + 
380 + val = __vgic_mmio_read_active(vcpu, addr, len);
381 + 
382 + vgic_access_active_finish(vcpu, intid);
383 + mutex_unlock(&vcpu->kvm->lock);
384 + 
385 + return val;
386 + }
387 + 
388 + unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
389 + gpa_t addr, unsigned int len)
390 + {
391 + return __vgic_mmio_read_active(vcpu, addr, len);
488 392 }
489 393 
490 394 /* Must be called with irq->irq_lock held */
··· 568 426 raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
569 427 }
570 428 
571 - /*
572 - * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
573 - * is not queued on some running VCPU's LRs, because then the change to the
574 - * active state can be overwritten when the VCPU's state is synced coming back
575 - * from the guest.
576 - *
577 - * For shared interrupts, we have to stop all the VCPUs because interrupts can
578 - * be migrated while we don't hold the IRQ locks and we don't want to be
579 - * chasing moving targets.
580 - *
581 - * For private interrupts we don't have to do anything because userspace
582 - * accesses to the VGIC state already require all VCPUs to be stopped, and
583 - * only the VCPU itself can modify its private interrupts active state, which
584 - * guarantees that the VCPU is not running.
585 - */
586 - static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
587 - {
588 - if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
589 - intid > VGIC_NR_PRIVATE_IRQS)
590 - kvm_arm_halt_guest(vcpu->kvm);
591 - }
592 - 
593 - /* See vgic_change_active_prepare */
594 - static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
595 - {
596 - if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
597 - intid > VGIC_NR_PRIVATE_IRQS)
598 - kvm_arm_resume_guest(vcpu->kvm);
599 - }
600 - 
601 429 static void __vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
602 430 gpa_t addr, unsigned int len,
603 431 unsigned long val)
··· 589 477 u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
590 478 
591 479 mutex_lock(&vcpu->kvm->lock);
592 - vgic_change_active_prepare(vcpu, intid);
480 + vgic_access_active_prepare(vcpu, intid);
593 481 
594 482 __vgic_mmio_write_cactive(vcpu, addr, len, val);
595 483 
596 - vgic_change_active_finish(vcpu, intid);
484 + vgic_access_active_finish(vcpu, intid);
597 485 mutex_unlock(&vcpu->kvm->lock);
598 486 }
599 487 
··· 626 514 u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
627 515 
628 516 mutex_lock(&vcpu->kvm->lock);
629 - vgic_change_active_prepare(vcpu, intid);
517 + vgic_access_active_prepare(vcpu, intid);
630 518 
631 519 __vgic_mmio_write_sactive(vcpu, addr, len, val);
632 520 
633 - vgic_change_active_finish(vcpu, intid);
521 + vgic_access_active_finish(vcpu, intid);
634 522 mutex_unlock(&vcpu->kvm->lock);
635 523 }
636 524 
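The renamed vgic_access_active_prepare()/finish() helpers now bracket reads of the active state as well as writes, and the comparison was tightened from `>` to `>=` so the first SPI (intid 32, the first shared interrupt) also halts the guest. A standalone sketch of that decision, under simplified assumptions (the enum, counters and helpers here are illustrative stand-ins, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

#define VGIC_NR_PRIVATE_IRQS 32U	/* intids 0..31 are private */

enum vgic_model { VGIC_V2, VGIC_V3 };

static int halt_count; /* tracks halt/resume pairing in this sketch */

static void halt_guest(void)   { halt_count++; }
static void resume_guest(void) { halt_count--; }

/*
 * Mirror of vgic_access_active_prepare()/finish(): GICv3 always halts
 * (private state can live in redistributor LRs too), GICv2 halts only
 * for shared interrupts. The >= comparison covers intid 32, which the
 * old > test missed.
 */
static bool needs_halt(enum vgic_model model, unsigned int intid)
{
	return model == VGIC_V3 || intid >= VGIC_NR_PRIVATE_IRQS;
}

static unsigned int read_active(enum vgic_model model, unsigned int intid,
				unsigned int active_bits)
{
	unsigned int val;

	if (needs_halt(model, intid))
		halt_guest();		/* prepare: stop the vCPUs */

	val = active_bits;		/* report only the virtual state */

	if (needs_halt(model, intid))
		resume_guest();		/* finish: let them run again */

	return val;
}
```

Bracketing the read the same way as the writes is the point of the hunk: otherwise an in-flight list register could overwrite the active state between reading and reporting it.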
+19
virt/kvm/arm/vgic/vgic-mmio.h
··· 138 138 gpa_t addr, unsigned int len,
139 139 unsigned long val);
140 140 
141 + int vgic_uaccess_write_senable(struct kvm_vcpu *vcpu,
142 + gpa_t addr, unsigned int len,
143 + unsigned long val);
144 + 
145 + int vgic_uaccess_write_cenable(struct kvm_vcpu *vcpu,
146 + gpa_t addr, unsigned int len,
147 + unsigned long val);
148 + 
141 149 unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
142 150 gpa_t addr, unsigned int len);
143 151 
··· 157 149 gpa_t addr, unsigned int len,
158 150 unsigned long val);
159 151 
152 + int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
153 + gpa_t addr, unsigned int len,
154 + unsigned long val);
155 + 
156 + int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
157 + gpa_t addr, unsigned int len,
158 + unsigned long val);
159 + 
160 160 unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
161 + gpa_t addr, unsigned int len);
162 + 
163 + unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
161 164 gpa_t addr, unsigned int len);
162 165 
163 166 void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
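The vgic_uaccess_write_* handlers declared above all share one shape: walk each set bit of the value userspace wrote and update only the matching interrupts. A self-contained approximation of that loop (the kernel uses for_each_set_bit() with per-IRQ locking; `struct virq` and the flat array here are illustrative stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_IRQS 64

struct virq {
	bool enabled;
};

/* Set-enable: only the bits that are 1 in val take effect. */
static void uaccess_write_senable(struct virq *irqs, unsigned int intid,
				  unsigned long val, unsigned int len)
{
	for (unsigned int i = 0; i < len * 8; i++)
		if (val & (1UL << i))
			irqs[intid + i].enabled = true;
}

/* Clear-enable: same walk, but set bits disable their interrupt. */
static void uaccess_write_cenable(struct virq *irqs, unsigned int intid,
				  unsigned long val, unsigned int len)
{
	for (unsigned int i = 0; i < len * 8; i++)
		if (val & (1UL << i))
			irqs[intid + i].enabled = false;
}
```

The set/clear split mirrors the GIC register pairs (ISENABLER/ICENABLER): writing zeros is a no-op in both directions, which is what makes the registers safe to update without a read-modify-write.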