Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicts:

tools/testing/selftests/net/config
62199e3f1658 ("selftests: net: Add VXLAN MDB test")
3a0385be133e ("selftests: add the missing CONFIG_IP_SCTP in net config")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2628 -1384
+2
.mailmap
···
 Krzysztof Kozlowski <krzk@kernel.org> <krzysztof.kozlowski@canonical.com>
 Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
 Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
+Leonard Crestez <leonard.crestez@nxp.com> Leonard Crestez <cdleonard@gmail.com>
 Leonardo Bras <leobras.c@gmail.com> <leonardo@linux.ibm.com>
+Leonard Göhrs <l.goehrs@pengutronix.de>
 Leonid I Ananiev <leonid.i.ananiev@intel.com>
 Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
 Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
+3 -3
Documentation/devicetree/bindings/interrupt-controller/loongarch,cpu-interrupt-controller.yaml → Documentation/devicetree/bindings/interrupt-controller/loongson,cpu-interrupt-controller.yaml
···
 # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
 %YAML 1.2
 ---
-$id: http://devicetree.org/schemas/interrupt-controller/loongarch,cpu-interrupt-controller.yaml#
+$id: http://devicetree.org/schemas/interrupt-controller/loongson,cpu-interrupt-controller.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#

 title: LoongArch CPU Interrupt Controller
···
 properties:
   compatible:
-    const: loongarch,cpu-interrupt-controller
+    const: loongson,cpu-interrupt-controller

   '#interrupt-cells':
     const: 1
···
 examples:
   - |
     interrupt-controller {
-      compatible = "loongarch,cpu-interrupt-controller";
+      compatible = "loongson,cpu-interrupt-controller";
       #interrupt-cells = <1>;
       interrupt-controller;
     };
+2 -2
Documentation/devicetree/bindings/serial/renesas,scif.yaml
···
       - description: Error interrupt
       - description: Receive buffer full interrupt
       - description: Transmit buffer empty interrupt
-      - description: Transmit End interrupt
+      - description: Break interrupt
     - items:
       - description: Error interrupt
       - description: Receive buffer full interrupt
···
       - const: eri
       - const: rxi
       - const: txi
-      - const: tei
+      - const: bri
     - items:
       - const: eri
       - const: rxi
+76 -59
Documentation/mm/zsmalloc.rst
···
 # cat /sys/kernel/debug/zsmalloc/zram0/classes

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage
+class  size   10%   20%   30%   40%   50%   60%   70%   80%   90%   99%  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
 ...
 ...
-    9   176           0            1           186        129          8                4
-   10   192           1            0          2880       2872        135                3
-   11   208           0            1           819        795         42                2
-   12   224           0            1           219        159         12                4
+   30   512     0    12     4     1     0     1     0     0     1     0   414          3464       3346        433                1       14
+   31   528     2     7     2     2     1     0     1     0     0     2   117          4154       3793        536                4       44
+   32   544     6     3     4     1     2     1     0     0     0     1   260          4170       3965        556                2       26
 ...
 ...
···
 index
 size
     object size zspage stores
-almost_empty
-    the number of ZS_ALMOST_EMPTY zspages(see below)
-almost_full
-    the number of ZS_ALMOST_FULL zspages(see below)
+10%
+    the number of zspages with usage ratio less than 10% (see below)
+20%
+    the number of zspages with usage ratio between 10% and 20%
+30%
+    the number of zspages with usage ratio between 20% and 30%
+40%
+    the number of zspages with usage ratio between 30% and 40%
+50%
+    the number of zspages with usage ratio between 40% and 50%
+60%
+    the number of zspages with usage ratio between 50% and 60%
+70%
+    the number of zspages with usage ratio between 60% and 70%
+80%
+    the number of zspages with usage ratio between 70% and 80%
+90%
+    the number of zspages with usage ratio between 80% and 90%
+99%
+    the number of zspages with usage ratio between 90% and 99%
+100%
+    the number of zspages with usage ratio 100%
 obj_allocated
     the number of objects allocated
 obj_used
···
     the number of pages allocated for the class
 pages_per_zspage
     the number of 0-order pages to make a zspage
+freeable
+    the approximate number of pages class compaction can free

-We assign a zspage to ZS_ALMOST_EMPTY fullness group when n <= N / f, where
-
-* n = number of allocated objects
-* N = total number of objects zspage can store
-* f = fullness_threshold_frac(ie, 4 at the moment)
-
-Similarly, we assign zspage to:
-
-* ZS_ALMOST_FULL  when n > N / f
-* ZS_EMPTY        when n == 0
-* ZS_FULL         when n == N
+Each zspage maintains an inuse counter which keeps track of the number of
+objects stored in the zspage. The inuse counter determines the zspage's
+"fullness group", which is calculated as the ratio of the "inuse" objects to
+the total number of objects the zspage can hold (objs_per_zspage). The
+closer the inuse counter is to objs_per_zspage, the better.

 Internals
 =========
···
 For instance, consider the following size classes:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
 ...
-   94  1536           0            0             0          0          0                3        0
-  100  1632           0            0             0          0          0                2        0
+   94  1536     0 ....     0             0          0          0                3        0
+  100  1632     0 ....     0             0          0          0                2        0
 ...

···
 Let's take a closer look at the bottom of `/sys/kernel/debug/zsmalloc/zramX/classes`:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
+
 ...
-  202  3264           0            0             0          0          0                4        0
-  254  4096           0            0             0          0          0                1        0
+  202  3264     0 ..     0             0          0          0                4        0
+  254  4096     0 ..     0             0          0          0                1        0
 ...

 Size class #202 stores objects of size 3264 bytes and has a maximum of 4 pages
···
 For zspage chain size of 8, huge class watermark becomes 3632 bytes:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
+
 ...
-  202  3264           0            0             0          0          0                4        0
-  211  3408           0            0             0          0          0                5        0
-  217  3504           0            0             0          0          0                6        0
-  222  3584           0            0             0          0          0                7        0
-  225  3632           0            0             0          0          0                8        0
-  254  4096           0            0             0          0          0                1        0
+  202  3264     0 ..     0             0          0          0                4        0
+  211  3408     0 ..     0             0          0          0                5        0
+  217  3504     0 ..     0             0          0          0                6        0
+  222  3584     0 ..     0             0          0          0                7        0
+  225  3632     0 ..     0             0          0          0                8        0
+  254  4096     0 ..     0             0          0          0                1        0
 ...

 For zspage chain size of 16, huge class watermark becomes 3840 bytes:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
+
 ...
-  202  3264           0            0             0          0          0                4        0
-  206  3328           0            0             0          0          0               13        0
-  207  3344           0            0             0          0          0                9        0
-  208  3360           0            0             0          0          0               14        0
-  211  3408           0            0             0          0          0                5        0
-  212  3424           0            0             0          0          0               16        0
-  214  3456           0            0             0          0          0               11        0
-  217  3504           0            0             0          0          0                6        0
-  219  3536           0            0             0          0          0               13        0
-  222  3584           0            0             0          0          0                7        0
-  223  3600           0            0             0          0          0               15        0
-  225  3632           0            0             0          0          0                8        0
-  228  3680           0            0             0          0          0                9        0
-  230  3712           0            0             0          0          0               10        0
-  232  3744           0            0             0          0          0               11        0
-  234  3776           0            0             0          0          0               12        0
-  235  3792           0            0             0          0          0               13        0
-  236  3808           0            0             0          0          0               14        0
-  238  3840           0            0             0          0          0               15        0
-  254  4096           0            0             0          0          0                1        0
+  202  3264     0 ..     0             0          0          0                4        0
+  206  3328     0 ..     0             0          0          0               13        0
+  207  3344     0 ..     0             0          0          0                9        0
+  208  3360     0 ..     0             0          0          0               14        0
+  211  3408     0 ..     0             0          0          0                5        0
+  212  3424     0 ..     0             0          0          0               16        0
+  214  3456     0 ..     0             0          0          0               11        0
+  217  3504     0 ..     0             0          0          0                6        0
+  219  3536     0 ..     0             0          0          0               13        0
+  222  3584     0 ..     0             0          0          0                7        0
+  223  3600     0 ..     0             0          0          0               15        0
+  225  3632     0 ..     0             0          0          0                8        0
+  228  3680     0 ..     0             0          0          0                9        0
+  230  3712     0 ..     0             0          0          0               10        0
+  232  3744     0 ..     0             0          0          0               11        0
+  234  3776     0 ..     0             0          0          0               12        0
+  235  3792     0 ..     0             0          0          0               13        0
+  236  3808     0 ..     0             0          0          0               14        0
+  238  3840     0 ..     0             0          0          0               15        0
+  254  4096     0 ..     0             0          0          0                1        0
 ...

 Overall the combined zspage chain size effect on zsmalloc pool configuration:::
···
 zsmalloc classes stats:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
+
 ...
-Total               13           51        413836     412973     159955                3
+Total               13 ..    51        413836     412973     159955                3

 zram mm_stat:::
···
 zsmalloc classes stats:::

-class  size almost_full almost_empty obj_allocated   obj_used pages_used pages_per_zspage freeable
+class  size   10% ....  100% obj_allocated   obj_used pages_used pages_per_zspage freeable
+
 ...
-Total               18           87        414852     412978     156666                0
+Total               18 ..    87        414852     412978     156666                0

 zram mm_stat:::
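The usage-ratio buckets in the new classes output can be sketched as a small helper. This is an illustrative reading of the documented bucket edges, not the kernel's actual fullness-group code; the placement of exact boundary values is an assumption:

```python
def fullness_bucket(inuse, objs_per_zspage):
    """Map a zspage's usage ratio to the debugfs bucket labels above.

    Buckets follow the documented semantics: <10%, then 10%-20% ...
    90%-99%, and a dedicated bucket for fully used (100%) zspages.
    """
    ratio = 100 * inuse // objs_per_zspage
    if ratio == 100:
        return "100%"
    if ratio >= 90:
        return "99%"
    if ratio < 10:
        return "10%"
    # 10 <= ratio < 90: report the bucket's upper edge
    return f"{(ratio // 10 + 1) * 10}%"
```

A zspage holding 95 of 100 possible objects would land in the 99% bucket; only completely full zspages count toward the 100% column.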
+2
Documentation/networking/ip-sysctl.rst
···
 	Reserve max(window/2^tcp_app_win, mss) of window for application
 	buffer. Value 0 is special, it means that nothing is reserved.

+	Possible values are [0, 31], inclusive.
+
 	Default: 31

 tcp_autocorking - BOOLEAN
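The documented tcp_app_win formula can be illustrated directly; the function name and parameters below are invented for the example:

```python
def tcp_app_win_reserve(window, mss, tcp_app_win=31):
    """Bytes of receive window reserved for the application buffer,
    per the documented formula max(window / 2^tcp_app_win, mss).
    A value of 0 is special and reserves nothing."""
    if tcp_app_win == 0:
        return 0
    return max(window >> tcp_app_win, mss)
```

With the default of 31, `window >> 31` is effectively zero for realistic window sizes, so roughly one MSS ends up reserved.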
+8 -8
MAINTAINERS
···
 F:	drivers/net/ethernet/8390/

 9P FILE SYSTEM
-M:	Eric Van Hensbergen <ericvh@gmail.com>
+M:	Eric Van Hensbergen <ericvh@kernel.org>
 M:	Latchesar Ionkov <lucho@ionkov.net>
 M:	Dominique Martinet <asmadeus@codewreck.org>
 R:	Christian Schoenebeck <linux_oss@crudebyte.com>
-L:	v9fs-developer@lists.sourceforge.net
+L:	v9fs@lists.linux.dev
 S:	Maintained
-W:	http://swik.net/v9fs
+W:	http://github.com/v9fs
 Q:	http://patchwork.kernel.org/project/v9fs-devel/list/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
 T:	git git://github.com/martinetd/linux.git
···
 F:	drivers/net/ieee802154/ca8210.c

 CANAAN/KENDRYTE K210 SOC FPIOA DRIVER
-M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Damien Le Moal <dlemoal@kernel.org>
 L:	linux-riscv@lists.infradead.org
 L:	linux-gpio@vger.kernel.org (pinctrl driver)
 F:	Documentation/devicetree/bindings/pinctrl/canaan,k210-fpioa.yaml
 F:	drivers/pinctrl/pinctrl-k210.c

 CANAAN/KENDRYTE K210 SOC RESET CONTROLLER DRIVER
-M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Damien Le Moal <dlemoal@kernel.org>
 L:	linux-kernel@vger.kernel.org
 L:	linux-riscv@lists.infradead.org
 S:	Maintained
···
 F:	drivers/reset/reset-k210.c

 CANAAN/KENDRYTE K210 SOC SYSTEM CONTROLLER DRIVER
-M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Damien Le Moal <dlemoal@kernel.org>
 L:	linux-riscv@lists.infradead.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/mfd/canaan,k210-sysctl.yaml
···
 F:	drivers/ata/sata_promise.*

 LIBATA SUBSYSTEM (Serial and Parallel ATA drivers)
-M:	Damien Le Moal <damien.lemoal@opensource.wdc.com>
+M:	Damien Le Moal <dlemoal@kernel.org>
 L:	linux-ide@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata.git
···
 F:	arch/x86/kernel/cpu/zhaoxin.c

 ZONEFS FILESYSTEM
-M:	Damien Le Moal <damien.lemoal@opensource.wdc.com>
+M:	Damien Le Moal <dlemoal@kernel.org>
 M:	Naohiro Aota <naohiro.aota@wdc.com>
 R:	Johannes Thumshirn <jth@kernel.org>
 L:	linux-fsdevel@vger.kernel.org
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 3
 SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc6
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
+14 -18
arch/arm64/kernel/compat_alignment.c
···
 	int (*handler)(unsigned long addr, u32 instr, struct pt_regs *regs);
 	unsigned int type;
 	u32 instr = 0;
-	u16 tinstr = 0;
 	int isize = 4;
 	int thumb2_32b = 0;
-	int fault;

 	instrptr = instruction_pointer(regs);

 	if (compat_thumb_mode(regs)) {
 		__le16 __user *ptr = (__le16 __user *)(instrptr & ~1);
+		u16 tinstr, tinst2;

-		fault = alignment_get_thumb(regs, ptr, &tinstr);
-		if (!fault) {
-			if (IS_T32(tinstr)) {
-				/* Thumb-2 32-bit */
-				u16 tinst2;
-				fault = alignment_get_thumb(regs, ptr + 1, &tinst2);
-				instr = ((u32)tinstr << 16) | tinst2;
-				thumb2_32b = 1;
-			} else {
-				isize = 2;
-				instr = thumb2arm(tinstr);
-			}
+		if (alignment_get_thumb(regs, ptr, &tinstr))
+			return 1;
+
+		if (IS_T32(tinstr)) { /* Thumb-2 32-bit */
+			if (alignment_get_thumb(regs, ptr + 1, &tinst2))
+				return 1;
+			instr = ((u32)tinstr << 16) | tinst2;
+			thumb2_32b = 1;
+		} else {
+			isize = 2;
+			instr = thumb2arm(tinstr);
 		}
 	} else {
-		fault = alignment_get_arm(regs, (__le32 __user *)instrptr, &instr);
+		if (alignment_get_arm(regs, (__le32 __user *)instrptr, &instr))
+			return 1;
 	}
-
-	if (fault)
-		return 1;

 	switch (CODING_BITS(instr)) {
 	case 0x00000000:	/* 3.13.4 load/store instruction extensions */
+25 -1
arch/arm64/kvm/arm.c
···
 	return ret;
 }

+static u64 get_hyp_id_aa64pfr0_el1(void)
+{
+	/*
+	 * Track whether the system isn't affected by spectre/meltdown in the
+	 * hypervisor's view of id_aa64pfr0_el1, used for protected VMs.
+	 * Although this is per-CPU, we make it global for simplicity, e.g., not
+	 * to have to worry about vcpu migration.
+	 *
+	 * Unlike for non-protected VMs, userspace cannot override this for
+	 * protected VMs.
+	 */
+	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
+		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
+
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
+			  arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED);
+	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
+			  arm64_get_meltdown_state() == SPECTRE_UNAFFECTED);
+
+	return val;
+}
+
 static void kvm_hyp_init_symbols(void)
 {
-	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	kvm_nvhe_sym(id_aa64pfr0_el1_sys_val) = get_hyp_id_aa64pfr0_el1();
 	kvm_nvhe_sym(id_aa64pfr1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
 	kvm_nvhe_sym(id_aa64isar0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR0_EL1);
 	kvm_nvhe_sym(id_aa64isar1_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64ISAR1_EL1);
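The clear-then-FIELD_PREP pattern in this hunk (mask out a register field, then insert a new value shifted into the field's bit positions) can be sketched outside the kernel. The mask value below is purely illustrative, not the real CSV2/CSV3 field placement:

```python
def field_prep(mask, val):
    """Shift val into the bit positions selected by mask,
    FIELD_PREP-style: the lowest set bit of mask gives the shift,
    and the result is truncated to the field width."""
    shift = (mask & -mask).bit_length() - 1  # index of lowest set bit
    return (val << shift) & mask

CSV2_MASK = 0xF << 60  # illustrative 4-bit field at bits [63:60]
reg = 0x1234
reg &= ~CSV2_MASK              # clear the field
reg |= field_prep(CSV2_MASK, 1)  # insert the new field value
```

This mirrors how the diff forces CSV2/CSV3 to reflect the hypervisor's own spectre/meltdown state rather than the sanitised register value.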
+4 -1
arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
···
 * Allow for protected VMs:
 * - Floating-point and Advanced SIMD
 * - Data Independent Timing
+ * - Spectre/Meltdown Mitigation
 */
 #define PVM_ID_AA64PFR0_ALLOW (\
 	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_FP) | \
 	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AdvSIMD) | \
-	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_DIT) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) | \
+	ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3) \
 	)

 /*
-7
arch/arm64/kvm/hyp/nvhe/sys_regs.c
···
 static u64 get_pvm_id_aa64pfr0(const struct kvm_vcpu *vcpu)
 {
-	const struct kvm *kvm = (const struct kvm *)kern_hyp_va(vcpu->kvm);
 	u64 set_mask = 0;
 	u64 allow_mask = PVM_ID_AA64PFR0_ALLOW;

 	set_mask |= get_restricted_features_unsigned(id_aa64pfr0_el1_sys_val,
 		PVM_ID_AA64PFR0_RESTRICT_UNSIGNED);
-
-	/* Spectre and Meltdown mitigation in KVM */
-	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
-			       (u64)kvm->arch.pfr0_csv2);
-	set_mask |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
-			       (u64)kvm->arch.pfr0_csv3);

 	return (id_aa64pfr0_el1_sys_val & allow_mask) | set_mask;
 }
+1
arch/arm64/kvm/pmu-emul.c
···
 		for_each_set_bit(i, &mask, 32)
 			kvm_pmu_set_pmc_value(kvm_vcpu_idx_to_pmc(vcpu, i), 0, true);
 	}
+	kvm_vcpu_pmu_restore_guest(vcpu);
 }

 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc)
-1
arch/arm64/kvm/sys_regs.c
···
 		if (!kvm_supports_32bit_el0())
 			val |= ARMV8_PMU_PMCR_LC;
 		kvm_pmu_handle_pmcr(vcpu, val);
-		kvm_vcpu_pmu_restore_guest(vcpu);
 	} else {
 		/* PMCR.P & PMCR.C are RAZ */
 		val = __vcpu_sys_reg(vcpu, PMCR_EL0)
+4
arch/arm64/net/bpf_jit.h
···
 /* DMB */
 #define A64_DMB_ISH aarch64_insn_gen_dmb(AARCH64_INSN_MB_ISH)

+/* ADR */
+#define A64_ADR(Rd, offset) \
+	aarch64_insn_gen_adr(0, offset, Rd, AARCH64_INSN_ADR_TYPE_ADR)
+
 #endif /* _BPF_JIT_H */
+2 -1
arch/arm64/net/bpf_jit_comp.c
···
 	restore_args(ctx, args_off, nargs);
 	/* call original func */
 	emit(A64_LDR64I(A64_R(10), A64_SP, retaddr_off), ctx);
-	emit(A64_BLR(A64_R(10)), ctx);
+	emit(A64_ADR(A64_LR, AARCH64_INSN_SIZE * 2), ctx);
+	emit(A64_RET(A64_R(10)), ctx);
 	/* store return value */
 	emit(A64_STR64I(A64_R(0), A64_SP, retval_off), ctx);
 	/* reserve a nop for bpf_tramp_image_put */
+4
arch/loongarch/net/bpf_jit.c
···
 		emit_atomic(insn, ctx);
 		break;

+	/* Speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		break;
+
 	default:
 		pr_err("bpf_jit: unknown opcode %02x\n", code);
 		return -EINVAL;
+5
arch/x86/Makefile.um
···
 #
 # Disable SSE and other FP/SIMD instructions to match normal x86
+# This is required to work around issues in older LLVM versions, but breaks
+# GCC versions < 11. See:
+# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
 #
+ifeq ($(CONFIG_CC_IS_CLANG),y)
 KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
 KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
+endif

 ifeq ($(CONFIG_X86_32),y)
 START := 0x8048000
+2
arch/x86/include/asm/intel-family.h
···
 #define INTEL_FAM6_LUNARLAKE_M		0xBD

+#define INTEL_FAM6_ARROWLAKE		0xC6
+
 /* "Small Core" Processors (Atom/E-Core) */

 #define INTEL_FAM6_ATOM_BONNELL	0x1C /* Diamondville, Pineview */
+7 -2
arch/x86/kernel/acpi/boot.c
···
 		pr_debug("Local APIC address 0x%08x\n", madt->address);
 	}
-	if (madt->header.revision >= 5)
+
+	/* ACPI 6.3 and newer support the online capable bit. */
+	if (acpi_gbl_FADT.header.revision > 6 ||
+	    (acpi_gbl_FADT.header.revision == 6 &&
+	     acpi_gbl_FADT.minor_revision >= 3))
 		acpi_support_online_capable = true;

 	default_acpi_madt_oem_check(madt->header.oem_id,
···
 	if (lapic_flags & ACPI_MADT_ENABLED)
 		return true;

-	if (acpi_support_online_capable && (lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
+	if (!acpi_support_online_capable ||
+	    (lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
 		return true;

 	return false;
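The new FADT check reduces to a major/minor version comparison: the online-capable MADT flag only exists from ACPI 6.3 onward. A hedged sketch of just that predicate (the helper name is invented):

```python
def supports_online_capable(major, minor):
    """True when the FADT revision is 6.3 or newer, the first ACPI
    revision that defines the MADT online-capable flag."""
    # Tuple comparison is lexicographic, matching the C condition
    # (major > 6) || (major == 6 && minor >= 3).
    return (major, minor) >= (6, 3)
```

Note the diff replaces a check of the MADT table's own revision with the FADT's major.minor pair, which is where the spec version is actually reported.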
+21
arch/x86/pci/fixup.c
···
 #include <linux/dmi.h>
 #include <linux/pci.h>
 #include <linux/vgaarb.h>
+#include <asm/amd_nb.h>
 #include <asm/hpet.h>
 #include <asm/pci_x86.h>
···
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x7910, rs690_fix_64bit_dma);

+#endif
+
+#ifdef CONFIG_AMD_NB
+
+#define AMD_15B8_RCC_DEV2_EPF0_STRAP2				0x10136008
+#define AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK	0x00000080L
+
+static void quirk_clear_strap_no_soft_reset_dev2_f0(struct pci_dev *dev)
+{
+	u32 data;
+
+	if (!amd_smn_read(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, &data)) {
+		data &= ~AMD_15B8_RCC_DEV2_EPF0_STRAP2_NO_SOFT_RESET_DEV2_F0_MASK;
+		if (amd_smn_write(0, AMD_15B8_RCC_DEV2_EPF0_STRAP2, data))
+			pci_err(dev, "Failed to write data 0x%x\n", data);
+	} else {
+		pci_err(dev, "Failed to read data\n");
+	}
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x15b8, quirk_clear_strap_no_soft_reset_dev2_f0);
 #endif
+1 -3
block/blk-mq.c
···
 		return false;
 	if (rq->mq_hctx->type != HCTX_TYPE_POLL)
 		return false;
-	if (WARN_ON_ONCE(!rq->bio))
-		return false;
 	return true;
 }
 EXPORT_SYMBOL_GPL(blk_rq_is_poll);
···
 static void blk_rq_poll_completion(struct request *rq, struct completion *wait)
 {
 	do {
-		bio_poll(rq->bio, NULL, 0);
+		blk_mq_poll(rq->q, blk_rq_to_qc(rq), NULL, 0);
 		cond_resched();
 	} while (!completion_done(wait));
 }
+7 -1
block/genhd.c
···
 	if (disk->open_partitions)
 		return -EBUSY;

-	set_bit(GD_NEED_PART_SCAN, &disk->state);
 	/*
 	 * If the device is opened exclusively by current thread already, it's
 	 * safe to scan partitions, otherwise, use bd_prepare_to_claim() to
···
 		return ret;
 	}

+	set_bit(GD_NEED_PART_SCAN, &disk->state);
 	bdev = blkdev_get_by_dev(disk_devt(disk), mode & ~FMODE_EXCL, NULL);
 	if (IS_ERR(bdev))
 		ret = PTR_ERR(bdev);
 	else
 		blkdev_put(bdev, mode & ~FMODE_EXCL);

+	/*
+	 * If blkdev_get_by_dev() failed early, GD_NEED_PART_SCAN is still set,
+	 * and re-assembling a partitioned raid device would then create a
+	 * partition for the underlying disk.
+	 */
+	clear_bit(GD_NEED_PART_SCAN, &disk->state);
 	if (!(mode & FMODE_EXCL))
 		bd_abort_claiming(disk->part0, disk_scan_partitions);
 	return ret;
+13 -2
drivers/acpi/acpi_video.c
···
 static int acpi_video_bus_add(struct acpi_device *device)
 {
 	struct acpi_video_bus *video;
+	bool auto_detect;
 	int error;
 	acpi_status status;
···
 	mutex_unlock(&video_list_lock);

 	/*
-	 * The userspace visible backlight_device gets registered separately
-	 * from acpi_video_register_backlight().
+	 * If backlight-type auto-detection is used then a native backlight may
+	 * show up later and this may change the result from video to native.
+	 * Therefore normally the userspace visible /sys/class/backlight device
+	 * gets registered separately by the GPU driver calling
+	 * acpi_video_register_backlight() when an internal panel is detected.
+	 * Register the backlight now when not using auto-detection, so that
+	 * when the kernel cmdline or DMI-quirks are used the backlight will
+	 * get registered even if acpi_video_register_backlight() is not called.
 	 */
 	acpi_video_run_bcl_for_osi(video);
+	if (__acpi_video_get_backlight_type(false, &auto_detect) == acpi_backlight_video &&
+	    !auto_detect)
+		acpi_video_bus_register_backlight(video);
+
 	acpi_video_bus_add_notify_handler(video);

 	return 0;
+45 -13
drivers/acpi/video_detect.c
···
 	},

+	/*
+	 * Models which need acpi_video backlight control where the GPU drivers
+	 * do not call acpi_video_register_backlight() because no internal panel
+	 * is detected. Typically these are all-in-ones (monitors with builtin
+	 * PC) where the panel connection shows up as regular DP instead of eDP.
+	 */
+	{
+		.callback = video_detect_force_video,
+		/* Apple iMac14,1 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "iMac14,1"),
+		},
+	},
+	{
+		.callback = video_detect_force_video,
+		/* Apple iMac14,2 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+			DMI_MATCH(DMI_PRODUCT_NAME, "iMac14,2"),
+		},
+	},
+
+	/*
+	 * Older models with nvidia GPU which need acpi_video backlight
+	 * control and where the old nvidia binary driver series does not
+	 * call acpi_video_register_backlight().
+	 */
+	{
+		.callback = video_detect_force_video,
+		/* ThinkPad W530 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W530"),
+		},
+	},
+
 	/*
 	 * These models have a working acpi_video backlight control, and using
 	 * native backlight causes a regression where backlight does not work
 	 * when userspace is not handling brightness key events. Disable
···
 * Determine which type of backlight interface to use on this system,
 * First check cmdline, then dmi quirks, then do autodetect.
 */
-static enum acpi_backlight_type __acpi_video_get_backlight_type(bool native)
+enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, bool *auto_detect)
 {
 	static DEFINE_MUTEX(init_mutex);
 	static bool nvidia_wmi_ec_present;
···
 	native_available = true;
 	mutex_unlock(&init_mutex);

+	if (auto_detect)
+		*auto_detect = false;
+
 	/*
 	 * The below heuristics / detection steps are in order of descending
 	 * precedence. The commandline takes precedence over anything else.
···
 	/* DMI quirks override any autodetection. */
 	if (acpi_backlight_dmi != acpi_backlight_undef)
 		return acpi_backlight_dmi;
+
+	if (auto_detect)
+		*auto_detect = true;

 	/* Special cases such as nvidia_wmi_ec and apple gmux. */
 	if (nvidia_wmi_ec_present)
···
 	/* No ACPI video/native (old hw), use vendor specific fw methods. */
 	return acpi_backlight_vendor;
 }
-
-enum acpi_backlight_type acpi_video_get_backlight_type(void)
-{
-	return __acpi_video_get_backlight_type(false);
-}
-EXPORT_SYMBOL(acpi_video_get_backlight_type);
-
-bool acpi_video_backlight_use_native(void)
-{
-	return __acpi_video_get_backlight_type(true) == acpi_backlight_native;
-}
-EXPORT_SYMBOL(acpi_video_backlight_use_native);
+EXPORT_SYMBOL(__acpi_video_get_backlight_type);
+23 -3
drivers/block/ublk_drv.c
···
 	if (ub->params.types & UBLK_PARAM_TYPE_BASIC) {
 		const struct ublk_param_basic *p = &ub->params.basic;

-		if (p->logical_bs_shift > PAGE_SHIFT)
+		if (p->logical_bs_shift > PAGE_SHIFT || p->logical_bs_shift < 9)
 			return -EINVAL;

 		if (p->logical_bs_shift > p->physical_bs_shift)
···
 	ublk_queue_cmd(ubq, req);
 }

-static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
+			       unsigned int issue_flags,
+			       struct ublksrv_io_cmd *ub_cmd)
 {
-	struct ublksrv_io_cmd *ub_cmd = (struct ublksrv_io_cmd *)cmd->cmd;
 	struct ublk_device *ub = cmd->file->private_data;
 	struct ublk_queue *ubq;
 	struct ublk_io *io;
···
 	pr_devel("%s: complete: cmd op %d, tag %d ret %x io_flags %x\n",
 			__func__, cmd_op, tag, ret, io->flags);
 	return -EIOCBQUEUED;
+}
+
+static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
+{
+	struct ublksrv_io_cmd *ub_src = (struct ublksrv_io_cmd *) cmd->cmd;
+	struct ublksrv_io_cmd ub_cmd;
+
+	/*
+	 * Not necessary for async retry, but let's keep it simple and always
+	 * copy the values to avoid any potential reuse.
+	 */
+	ub_cmd.q_id = READ_ONCE(ub_src->q_id);
+	ub_cmd.tag = READ_ONCE(ub_src->tag);
+	ub_cmd.result = READ_ONCE(ub_src->result);
+	ub_cmd.addr = READ_ONCE(ub_src->addr);
+
+	return __ublk_ch_uring_cmd(cmd, issue_flags, &ub_cmd);
 }

 static const struct file_operations ublk_ch_fops = {
···
 		/* clear all we don't support yet */
 		ub->params.types &= UBLK_PARAM_TYPE_ALL;
 		ret = ublk_validate_params(ub);
+		if (ret)
+			ub->params.types = 0;
 	}
 	mutex_unlock(&ub->mutex);
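The ublk refactor above snapshots every field of the userspace-shared `ublksrv_io_cmd` exactly once (via READ_ONCE) into a kernel-local copy before acting on it, which closes a double-fetch window: a concurrent writer can no longer change a field between its validation and its use. The same idea in a standalone sketch; the byte layout here is an assumption for illustration, not the real ABI:

```python
import struct

def snapshot_io_cmd(shared: bytes):
    """Copy the command fields out of a shared buffer in one read.

    Illustrative layout: q_id (u16), tag (u16), result (s32),
    addr (u64), little-endian, no padding.
    """
    q_id, tag, result, addr = struct.unpack_from("<HHiQ", shared, 0)
    # All later validation and use operates on this snapshot only,
    # never on the shared buffer again.
    return {"q_id": q_id, "tag": tag, "result": result, "addr": addr}
```

The kernel version copies field by field with READ_ONCE for the same effect: one fetch per field, then the shared memory is never consulted again.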
+173 -96
drivers/block/virtio_blk.c
···
 	/*
 	 * The zone append command has an extended in header.
-	 * The status field in zone_append_in_hdr must have
-	 * the same offset in virtblk_req as the non-zoned
-	 * status field above.
+	 * The status field in zone_append_in_hdr must always
+	 * be the last byte.
 	 */
 	struct {
+		__virtio64 sector;
 		u8 status;
-		u8 reserved[7];
-		__le64 append_sector;
-	} zone_append_in_hdr;
-};
+	} zone_append;
+} in_hdr;

 size_t in_hdr_len;
···
 		sgs[num_out + num_in++] = vbr->sg_table.sgl;
 	}

-	sg_init_one(&in_hdr, &vbr->status, vbr->in_hdr_len);
+	sg_init_one(&in_hdr, &vbr->in_hdr.status, vbr->in_hdr_len);
 	sgs[num_out + num_in++] = &in_hdr;

 	return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
···
 				      struct request *req,
 				      struct virtblk_req *vbr)
 {
-	size_t in_hdr_len = sizeof(vbr->status);
+	size_t in_hdr_len = sizeof(vbr->in_hdr.status);
 	bool unmap = false;
 	u32 type;
 	u64 sector = 0;
+
+	if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) && op_is_zone_mgmt(req_op(req)))
+		return BLK_STS_NOTSUPP;

 	/* Set fields for all request types */
 	vbr->out_hdr.ioprio = cpu_to_virtio32(vdev, req_get_ioprio(req));
···
 	case REQ_OP_ZONE_APPEND:
 		type = VIRTIO_BLK_T_ZONE_APPEND;
 		sector = blk_rq_pos(req);
-		in_hdr_len = sizeof(vbr->zone_append_in_hdr);
+		in_hdr_len = sizeof(vbr->in_hdr.zone_append);
 		break;
 	case REQ_OP_ZONE_RESET:
 		type = VIRTIO_BLK_T_ZONE_RESET;
···
 		type = VIRTIO_BLK_T_ZONE_RESET_ALL;
 		break;
 	case REQ_OP_DRV_IN:
-		/* Out header already filled in, nothing to do */
+		/*
+		 * Out header has already been prepared by the caller (virtblk_get_id()
+		 * or virtblk_submit_zone_report()), nothing to do here.
+		 */
 		return 0;
 	default:
 		WARN_ON_ONCE(1);
···
 	return 0;
 }

+/*
+ * The status byte is always the last byte of the virtblk request
+ * in-header. This helper fetches its value for all in-header formats
+ * that are currently defined.
+ */
+static inline u8 virtblk_vbr_status(struct virtblk_req *vbr)
+{
+	return *((u8 *)&vbr->in_hdr + vbr->in_hdr_len - 1);
+}
+
 static inline void virtblk_request_done(struct request *req)
 {
 	struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
-	blk_status_t status = virtblk_result(vbr->status);
+	blk_status_t status = virtblk_result(virtblk_vbr_status(vbr));
+	struct virtio_blk *vblk = req->mq_hctx->queue->queuedata;

 	virtblk_unmap_data(req, vbr);
 	virtblk_cleanup_cmd(req);

 	if (req_op(req) == REQ_OP_ZONE_APPEND)
-		req->__sector = le64_to_cpu(vbr->zone_append_in_hdr.append_sector);
+		req->__sector = virtio64_to_cpu(vblk->vdev,
+						vbr->in_hdr.zone_append.sector);

 	blk_mq_end_request(req, status);
 }
···
 		if (likely(!blk_should_fake_timeout(req->q)) &&
 		    !blk_mq_complete_request_remote(req) &&
-		    !blk_mq_add_to_batch(req, iob, vbr->status,
+		    !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
 					 virtblk_complete_batch))
 			virtblk_request_done(req);
 		req_done++;
···
 #ifdef CONFIG_BLK_DEV_ZONED
 static void *virtblk_alloc_report_buffer(struct virtio_blk *vblk,
 					 unsigned int nr_zones,
-					 unsigned int zone_sectors,
 					 size_t *buflen)
 {
 	struct request_queue *q = vblk->disk->queue;
···
 	void *buf;

 	nr_zones = min_t(unsigned int, nr_zones,
-			 get_capacity(vblk->disk) >> ilog2(zone_sectors));
+			 get_capacity(vblk->disk) >> ilog2(vblk->zone_sectors));

 	bufsize = sizeof(struct virtio_blk_zone_report) +
 		nr_zones * sizeof(struct virtio_blk_zone_descriptor);
···
 		return PTR_ERR(req);

 	vbr = blk_mq_rq_to_pdu(req);
-	vbr->in_hdr_len = sizeof(vbr->status);
+	vbr->in_hdr_len = sizeof(vbr->in_hdr.status);
 	vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_ZONE_REPORT);
 	vbr->out_hdr.sector = cpu_to_virtio64(vblk->vdev, sector);
···
 		goto out;

 	blk_execute_rq(req, false);
-	err = blk_status_to_errno(virtblk_result(vbr->status));
+	err = blk_status_to_errno(virtblk_result(vbr->in_hdr.status));
 out:
 	blk_mq_free_request(req);
 	return err;
···
 static int virtblk_parse_zone(struct virtio_blk *vblk,
 			      struct virtio_blk_zone_descriptor *entry,
-			      unsigned int idx, unsigned int zone_sectors,
-			      report_zones_cb cb, void *data)
+			      unsigned int idx, report_zones_cb cb, void *data)
 {
 	struct blk_zone zone = { };

-	if (entry->z_type != VIRTIO_BLK_ZT_SWR &&
-	    entry->z_type != VIRTIO_BLK_ZT_SWP &&
-	    entry->z_type != VIRTIO_BLK_ZT_CONV) {
-		dev_err(&vblk->vdev->dev, "invalid zone type %#x\n",
-			entry->z_type);
-		return -EINVAL;
+	zone.start = virtio64_to_cpu(vblk->vdev, entry->z_start);
+	if (zone.start + vblk->zone_sectors <= get_capacity(vblk->disk))
+		zone.len = vblk->zone_sectors;
+	else
+		zone.len = get_capacity(vblk->disk) - zone.start;
+	zone.capacity = virtio64_to_cpu(vblk->vdev, entry->z_cap);
+	zone.wp = virtio64_to_cpu(vblk->vdev, entry->z_wp);
+
+	switch (entry->z_type) {
+	case VIRTIO_BLK_ZT_SWR:
+		zone.type = BLK_ZONE_TYPE_SEQWRITE_REQ;
+		break;
+	case VIRTIO_BLK_ZT_SWP:
+		zone.type = BLK_ZONE_TYPE_SEQWRITE_PREF;
+		break;
+	case VIRTIO_BLK_ZT_CONV:
+		zone.type = BLK_ZONE_TYPE_CONVENTIONAL;
+		break;
+	default:
+		dev_err(&vblk->vdev->dev, "zone %llu: invalid type %#x\n",
+			zone.start, entry->z_type);
+		return -EIO;
 	}

-	zone.type = entry->z_type;
-	zone.cond = entry->z_state;
-	zone.len = zone_sectors;
-	zone.capacity = le64_to_cpu(entry->z_cap);
-	zone.start = le64_to_cpu(entry->z_start);
-	if (zone.cond == BLK_ZONE_COND_FULL)
+	switch (entry->z_state) {
+	case VIRTIO_BLK_ZS_EMPTY:
+		zone.cond = BLK_ZONE_COND_EMPTY;
+		break;
+	case VIRTIO_BLK_ZS_CLOSED:
+		zone.cond = BLK_ZONE_COND_CLOSED;
+		break;
+	case VIRTIO_BLK_ZS_FULL:
+		zone.cond = BLK_ZONE_COND_FULL;
 		zone.wp = zone.start + zone.len;
-	else
-		zone.wp = le64_to_cpu(entry->z_wp);
+		break;
+	case VIRTIO_BLK_ZS_EOPEN:
+		zone.cond = BLK_ZONE_COND_EXP_OPEN;
+		break;
+	case VIRTIO_BLK_ZS_IOPEN:
+		zone.cond = BLK_ZONE_COND_IMP_OPEN;
+		break;
+	case VIRTIO_BLK_ZS_NOT_WP:
+		zone.cond = BLK_ZONE_COND_NOT_WP;
+		break;
+	case VIRTIO_BLK_ZS_RDONLY:
+		zone.cond = BLK_ZONE_COND_READONLY;
+		zone.wp = ULONG_MAX;
+		break;
+	case VIRTIO_BLK_ZS_OFFLINE:
+		zone.cond = BLK_ZONE_COND_OFFLINE;
+		zone.wp = ULONG_MAX;
+		break;
+	default:
+		dev_err(&vblk->vdev->dev, "zone %llu: invalid condition %#x\n",
+			zone.start, entry->z_state);
+		return -EIO;
+	}

+	/*
+	 * The callback below checks the validity of the reported
+	 * entry data, no need to further validate it here.
677 + */ 650 678 return cb(&zone, idx, data); 651 679 } 652 680 ··· 699 641 { 700 642 struct virtio_blk *vblk = disk->private_data; 701 643 struct virtio_blk_zone_report *report; 702 - unsigned int zone_sectors = vblk->zone_sectors; 703 - unsigned int nz, i; 704 - int ret, zone_idx = 0; 644 + unsigned long long nz, i; 705 645 size_t buflen; 646 + unsigned int zone_idx = 0; 647 + int ret; 706 648 707 649 if (WARN_ON_ONCE(!vblk->zone_sectors)) 708 650 return -EOPNOTSUPP; 709 651 710 - report = virtblk_alloc_report_buffer(vblk, nr_zones, 711 - zone_sectors, &buflen); 652 + report = virtblk_alloc_report_buffer(vblk, nr_zones, &buflen); 712 653 if (!report) 713 654 return -ENOMEM; 655 + 656 + mutex_lock(&vblk->vdev_mutex); 657 + 658 + if (!vblk->vdev) { 659 + ret = -ENXIO; 660 + goto fail_report; 661 + } 714 662 715 663 while (zone_idx < nr_zones && sector < get_capacity(vblk->disk)) { 716 664 memset(report, 0, buflen); 717 665 718 666 ret = virtblk_submit_zone_report(vblk, (char *)report, 719 667 buflen, sector); 720 - if (ret) { 721 - if (ret > 0) 722 - ret = -EIO; 723 - goto out_free; 724 - } 725 - nz = min((unsigned int)le64_to_cpu(report->nr_zones), nr_zones); 668 + if (ret) 669 + goto fail_report; 670 + 671 + nz = min_t(u64, virtio64_to_cpu(vblk->vdev, report->nr_zones), 672 + nr_zones); 726 673 if (!nz) 727 674 break; 728 675 729 676 for (i = 0; i < nz && zone_idx < nr_zones; i++) { 730 677 ret = virtblk_parse_zone(vblk, &report->zones[i], 731 - zone_idx, zone_sectors, cb, data); 678 + zone_idx, cb, data); 732 679 if (ret) 733 - goto out_free; 734 - sector = le64_to_cpu(report->zones[i].z_start) + zone_sectors; 680 + goto fail_report; 681 + 682 + sector = virtio64_to_cpu(vblk->vdev, 683 + report->zones[i].z_start) + 684 + vblk->zone_sectors; 735 685 zone_idx++; 736 686 } 737 687 } ··· 748 682 ret = zone_idx; 749 683 else 750 684 ret = -EINVAL; 751 - out_free: 685 + fail_report: 686 + mutex_unlock(&vblk->vdev_mutex); 752 687 kvfree(report); 753 688 return ret; 754 
689 } ··· 758 691 { 759 692 u8 model; 760 693 761 - if (!vblk->zone_sectors) 762 - return; 763 - 764 694 virtio_cread(vblk->vdev, struct virtio_blk_config, 765 695 zoned.model, &model); 766 - if (!blk_revalidate_disk_zones(vblk->disk, NULL)) 767 - set_capacity_and_notify(vblk->disk, 0); 696 + switch (model) { 697 + default: 698 + dev_err(&vblk->vdev->dev, "unknown zone model %d\n", model); 699 + fallthrough; 700 + case VIRTIO_BLK_Z_NONE: 701 + case VIRTIO_BLK_Z_HA: 702 + disk_set_zoned(vblk->disk, BLK_ZONED_NONE); 703 + return; 704 + case VIRTIO_BLK_Z_HM: 705 + WARN_ON_ONCE(!vblk->zone_sectors); 706 + if (!blk_revalidate_disk_zones(vblk->disk, NULL)) 707 + set_capacity_and_notify(vblk->disk, 0); 708 + } 768 709 } 769 710 770 711 static int virtblk_probe_zoned_device(struct virtio_device *vdev, 771 712 struct virtio_blk *vblk, 772 713 struct request_queue *q) 773 714 { 774 - u32 v; 715 + u32 v, wg; 775 716 u8 model; 776 717 int ret; 777 718 ··· 788 713 789 714 switch (model) { 790 715 case VIRTIO_BLK_Z_NONE: 716 + case VIRTIO_BLK_Z_HA: 717 + /* Present the host-aware device as non-zoned */ 791 718 return 0; 792 719 case VIRTIO_BLK_Z_HM: 793 720 break; 794 - case VIRTIO_BLK_Z_HA: 795 - /* 796 - * Present the host-aware device as a regular drive. 797 - * TODO It is possible to add an option to make it appear 798 - * in the system as a zoned drive. 
799 - */ 800 - return 0; 801 721 default: 802 722 dev_err(&vdev->dev, "unsupported zone model %d\n", model); 803 723 return -EINVAL; ··· 805 735 806 736 virtio_cread(vdev, struct virtio_blk_config, 807 737 zoned.max_open_zones, &v); 808 - disk_set_max_open_zones(vblk->disk, le32_to_cpu(v)); 809 - 810 - dev_dbg(&vdev->dev, "max open zones = %u\n", le32_to_cpu(v)); 738 + disk_set_max_open_zones(vblk->disk, v); 739 + dev_dbg(&vdev->dev, "max open zones = %u\n", v); 811 740 812 741 virtio_cread(vdev, struct virtio_blk_config, 813 742 zoned.max_active_zones, &v); 814 - disk_set_max_active_zones(vblk->disk, le32_to_cpu(v)); 815 - dev_dbg(&vdev->dev, "max active zones = %u\n", le32_to_cpu(v)); 743 + disk_set_max_active_zones(vblk->disk, v); 744 + dev_dbg(&vdev->dev, "max active zones = %u\n", v); 816 745 817 746 virtio_cread(vdev, struct virtio_blk_config, 818 - zoned.write_granularity, &v); 819 - if (!v) { 747 + zoned.write_granularity, &wg); 748 + if (!wg) { 820 749 dev_warn(&vdev->dev, "zero write granularity reported\n"); 821 750 return -ENODEV; 822 751 } 823 - blk_queue_physical_block_size(q, le32_to_cpu(v)); 824 - blk_queue_io_min(q, le32_to_cpu(v)); 752 + blk_queue_physical_block_size(q, wg); 753 + blk_queue_io_min(q, wg); 825 754 826 - dev_dbg(&vdev->dev, "write granularity = %u\n", le32_to_cpu(v)); 755 + dev_dbg(&vdev->dev, "write granularity = %u\n", wg); 827 756 828 757 /* 829 758 * virtio ZBD specification doesn't require zones to be a power of 830 759 * two sectors in size, but the code in this driver expects that. 
831 760 */ 832 - virtio_cread(vdev, struct virtio_blk_config, zoned.zone_sectors, &v); 833 - vblk->zone_sectors = le32_to_cpu(v); 761 + virtio_cread(vdev, struct virtio_blk_config, zoned.zone_sectors, 762 + &vblk->zone_sectors); 834 763 if (vblk->zone_sectors == 0 || !is_power_of_2(vblk->zone_sectors)) { 835 764 dev_err(&vdev->dev, 836 765 "zoned device with non power of two zone size %u\n", ··· 852 783 dev_warn(&vdev->dev, "zero max_append_sectors reported\n"); 853 784 return -ENODEV; 854 785 } 855 - blk_queue_max_zone_append_sectors(q, le32_to_cpu(v)); 856 - dev_dbg(&vdev->dev, "max append sectors = %u\n", le32_to_cpu(v)); 786 + if ((v << SECTOR_SHIFT) < wg) { 787 + dev_err(&vdev->dev, 788 + "write granularity %u exceeds max_append_sectors %u limit\n", 789 + wg, v); 790 + return -ENODEV; 791 + } 792 + 793 + blk_queue_max_zone_append_sectors(q, v); 794 + dev_dbg(&vdev->dev, "max append sectors = %u\n", v); 857 795 } 858 796 859 797 return ret; 860 798 } 861 799 862 - static inline bool virtblk_has_zoned_feature(struct virtio_device *vdev) 863 - { 864 - return virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED); 865 - } 866 800 #else 867 801 868 802 /* 869 803 * Zoned block device support is not configured in this kernel. 870 - * We only need to define a few symbols to avoid compilation errors. 804 + * Host-managed zoned devices can't be supported, but others are 805 + * good to go as regular block devices. 
871 806 */ 872 807 #define virtblk_report_zones NULL 808 + 873 809 static inline void virtblk_revalidate_zones(struct virtio_blk *vblk) 874 810 { 875 811 } 812 + 876 813 static inline int virtblk_probe_zoned_device(struct virtio_device *vdev, 877 814 struct virtio_blk *vblk, struct request_queue *q) 878 815 { 879 - return -EOPNOTSUPP; 880 - } 816 + u8 model; 881 817 882 - static inline bool virtblk_has_zoned_feature(struct virtio_device *vdev) 883 - { 884 - return false; 818 + virtio_cread(vdev, struct virtio_blk_config, zoned.model, &model); 819 + if (model == VIRTIO_BLK_Z_HM) { 820 + dev_err(&vdev->dev, 821 + "virtio_blk: zoned devices are not supported"); 822 + return -EOPNOTSUPP; 823 + } 824 + 825 + return 0; 885 826 } 886 827 #endif /* CONFIG_BLK_DEV_ZONED */ 887 828 ··· 910 831 return PTR_ERR(req); 911 832 912 833 vbr = blk_mq_rq_to_pdu(req); 913 - vbr->in_hdr_len = sizeof(vbr->status); 834 + vbr->in_hdr_len = sizeof(vbr->in_hdr.status); 914 835 vbr->out_hdr.type = cpu_to_virtio32(vblk->vdev, VIRTIO_BLK_T_GET_ID); 915 836 vbr->out_hdr.sector = 0; 916 837 ··· 919 840 goto out; 920 841 921 842 blk_execute_rq(req, false); 922 - err = blk_status_to_errno(virtblk_result(vbr->status)); 843 + err = blk_status_to_errno(virtblk_result(vbr->in_hdr.status)); 923 844 out: 924 845 blk_mq_free_request(req); 925 846 return err; ··· 1577 1498 virtblk_update_capacity(vblk, false); 1578 1499 virtio_device_ready(vdev); 1579 1500 1580 - if (virtblk_has_zoned_feature(vdev)) { 1501 + /* 1502 + * All steps that follow use the VQs therefore they need to be 1503 + * placed after the virtio_device_ready() call above. 
1504 + */ 1505 + if (virtio_has_feature(vdev, VIRTIO_BLK_F_ZONED)) { 1581 1506 err = virtblk_probe_zoned_device(vdev, vblk, q); 1582 1507 if (err) 1583 1508 goto out_cleanup_disk; 1584 1509 } 1585 - 1586 - dev_info(&vdev->dev, "blk config size: %zu\n", 1587 - sizeof(struct virtio_blk_config)); 1588 1510 1589 1511 err = device_add_disk(&vdev->dev, vblk->disk, virtblk_attr_groups); 1590 1512 if (err) ··· 1687 1607 VIRTIO_BLK_F_RO, VIRTIO_BLK_F_BLK_SIZE, 1688 1608 VIRTIO_BLK_F_FLUSH, VIRTIO_BLK_F_TOPOLOGY, VIRTIO_BLK_F_CONFIG_WCE, 1689 1609 VIRTIO_BLK_F_MQ, VIRTIO_BLK_F_DISCARD, VIRTIO_BLK_F_WRITE_ZEROES, 1690 - VIRTIO_BLK_F_SECURE_ERASE, 1691 - #ifdef CONFIG_BLK_DEV_ZONED 1692 - VIRTIO_BLK_F_ZONED, 1693 - #endif /* CONFIG_BLK_DEV_ZONED */ 1610 + VIRTIO_BLK_F_SECURE_ERASE, VIRTIO_BLK_F_ZONED, 1694 1611 }; 1695 1612 1696 1613 static struct virtio_driver virtio_blk = {
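The virtio_blk changes above hinge on one invariant: whichever in-header format a request uses, its status field is the last byte, so a single helper (`virtblk_vbr_status()`) can fetch it with one offset computation. A minimal user-space sketch of that technique — struct and helper names here are illustrative, not the kernel's, and the zone-append layout is packed so that no trailing padding sits after the status byte:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Two in-header layouts; by convention the status byte comes last.
 * The packed attribute keeps sizeof(zone_append) at 9 bytes, so the
 * "last byte" really is the status field, not trailing padding. */
struct in_hdr {
    union {
        uint8_t status;                  /* plain requests */
        struct {
            uint64_t sector;
            uint8_t status;              /* zone append: still last */
        } __attribute__((packed)) zone_append;
    };
};

/* Fetch the status byte given the in-header length actually used. */
static uint8_t hdr_status(const struct in_hdr *h, size_t in_hdr_len)
{
    return *((const uint8_t *)h + in_hdr_len - 1);
}
```

The kernel helper performs the same arithmetic on `&vbr->in_hdr` using the `vbr->in_hdr_len` recorded when the request was prepared.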
+1 -1
drivers/bluetooth/btbcm.c
··· 511 511 len = strlen(tmp) + 1; 512 512 board_type = devm_kzalloc(dev, len, GFP_KERNEL); 513 513 strscpy(board_type, tmp, len); 514 - for (i = 0; i < board_type[i]; i++) { 514 + for (i = 0; i < len; i++) { 515 515 if (board_type[i] == '/') 516 516 board_type[i] = '-'; 517 517 }
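The one-line btbcm.c fix above replaces a loop bound that read the string's own byte values (`i < board_type[i]`) with the buffer length, so the '/'-to-'-' substitution reliably covers the whole board-type string. A stand-alone sketch of the corrected loop (the function name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Replace '/' with '-' in place, bounded by the buffer length
 * rather than by the byte values, as the fixed loop above does. */
static void sanitize_board_type(char *s, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (s[i] == '/')
            s[i] = '-';
    }
}
```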
+1
drivers/bluetooth/btsdio.c
··· 358 358 if (!data) 359 359 return; 360 360 361 + cancel_work_sync(&data->work); 361 362 hdev = data->hdev; 362 363 363 364 sdio_set_drvdata(func, NULL);
+6
drivers/bus/imx-weim.c
··· 329 329 "Failed to setup timing for '%pOF'\n", rd->dn); 330 330 331 331 if (!of_node_check_flag(rd->dn, OF_POPULATED)) { 332 + /* 333 + * Clear the flag before adding the device so that 334 + * fw_devlink doesn't skip adding consumers to this 335 + * device. 336 + */ 337 + rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; 332 338 if (!of_platform_device_create(rd->dn, NULL, &pdev->dev)) { 333 339 dev_err(&pdev->dev, 334 340 "Failed to create child device '%pOF'\n",
+9 -22
drivers/counter/104-quad-8.c
··· 97 97 struct quad8_reg __iomem *reg; 98 98 }; 99 99 100 - /* Borrow Toggle flip-flop */ 101 - #define QUAD8_FLAG_BT BIT(0) 102 - /* Carry Toggle flip-flop */ 103 - #define QUAD8_FLAG_CT BIT(1) 104 100 /* Error flag */ 105 101 #define QUAD8_FLAG_E BIT(4) 106 102 /* Up/Down flag */ ··· 129 133 #define QUAD8_CMR_QUADRATURE_X2 0x10 130 134 #define QUAD8_CMR_QUADRATURE_X4 0x18 131 135 136 + /* Each Counter is 24 bits wide */ 137 + #define LS7267_CNTR_MAX GENMASK(23, 0) 138 + 132 139 static int quad8_signal_read(struct counter_device *counter, 133 140 struct counter_signal *signal, 134 141 enum counter_signal_level *level) ··· 155 156 { 156 157 struct quad8 *const priv = counter_priv(counter); 157 158 struct channel_reg __iomem *const chan = priv->reg->channel + count->id; 158 - unsigned int flags; 159 - unsigned int borrow; 160 - unsigned int carry; 161 159 unsigned long irqflags; 162 160 int i; 163 161 164 - flags = ioread8(&chan->control); 165 - borrow = flags & QUAD8_FLAG_BT; 166 - carry = !!(flags & QUAD8_FLAG_CT); 167 - 168 - /* Borrow XOR Carry effectively doubles count range */ 169 - *val = (unsigned long)(borrow ^ carry) << 24; 162 + *val = 0; 170 163 171 164 spin_lock_irqsave(&priv->lock, irqflags); 172 165 ··· 182 191 unsigned long irqflags; 183 192 int i; 184 193 185 - /* Only 24-bit values are supported */ 186 - if (val > 0xFFFFFF) 194 + if (val > LS7267_CNTR_MAX) 187 195 return -ERANGE; 188 196 189 197 spin_lock_irqsave(&priv->lock, irqflags); ··· 368 378 369 379 /* Handle Index signals */ 370 380 if (synapse->signal->id >= 16) { 371 - if (priv->preset_enable[count->id]) 381 + if (!priv->preset_enable[count->id]) 372 382 *action = COUNTER_SYNAPSE_ACTION_RISING_EDGE; 373 383 else 374 384 *action = COUNTER_SYNAPSE_ACTION_NONE; ··· 796 806 struct quad8 *const priv = counter_priv(counter); 797 807 unsigned long irqflags; 798 808 799 - /* Only 24-bit values are supported */ 800 - if (preset > 0xFFFFFF) 809 + if (preset > LS7267_CNTR_MAX) 801 810 return 
-ERANGE; 802 811 803 812 spin_lock_irqsave(&priv->lock, irqflags); ··· 823 834 *ceiling = priv->preset[count->id]; 824 835 break; 825 836 default: 826 - /* By default 0x1FFFFFF (25 bits unsigned) is maximum count */ 827 - *ceiling = 0x1FFFFFF; 837 + *ceiling = LS7267_CNTR_MAX; 828 838 break; 829 839 } 830 840 ··· 838 850 struct quad8 *const priv = counter_priv(counter); 839 851 unsigned long irqflags; 840 852 841 - /* Only 24-bit values are supported */ 842 - if (ceiling > 0xFFFFFF) 853 + if (ceiling > LS7267_CNTR_MAX) 843 854 return -ERANGE; 844 855 845 856 spin_lock_irqsave(&priv->lock, irqflags);
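The 104-quad-8 hunks above replace scattered `0xFFFFFF` literals with a single `LS7267_CNTR_MAX` mask built from the kernel's `GENMASK(23, 0)`, making the 24-bit counter width explicit at every range check. A user-space sketch of the same bound check — `GENMASK_ULL` is re-derived here as a stand-in for the kernel macro, and `-ERANGE` mirrors the driver's return value:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* User-space stand-in for the kernel's GENMASK_ULL(h, l). */
#define GENMASK_ULL(h, l) \
    (((~0ULL) >> (63 - (h))) & ((~0ULL) << (l)))

#define LS7267_CNTR_MAX GENMASK_ULL(23, 0)  /* counters are 24 bits wide */

/* Reject values that do not fit in the 24-bit counter. */
static int check_count(uint64_t val)
{
    return val > LS7267_CNTR_MAX ? -ERANGE : 0;
}
```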
+68 -58
drivers/cxl/core/hdm.c
··· 101 101 BIT(CXL_CM_CAP_CAP_ID_HDM)); 102 102 } 103 103 104 - static struct cxl_hdm *devm_cxl_setup_emulated_hdm(struct cxl_port *port, 105 - struct cxl_endpoint_dvsec_info *info) 104 + static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info) 106 105 { 107 - struct device *dev = &port->dev; 108 106 struct cxl_hdm *cxlhdm; 107 + void __iomem *hdm; 108 + u32 ctrl; 109 + int i; 109 110 111 + if (!info) 112 + return false; 113 + 114 + cxlhdm = dev_get_drvdata(&info->port->dev); 115 + hdm = cxlhdm->regs.hdm_decoder; 116 + 117 + if (!hdm) 118 + return true; 119 + 120 + /* 121 + * If HDM decoders are present and the driver is in control of 122 + * Mem_Enable skip DVSEC based emulation 123 + */ 110 124 if (!info->mem_enabled) 111 - return ERR_PTR(-ENODEV); 125 + return false; 112 126 113 - cxlhdm = devm_kzalloc(dev, sizeof(*cxlhdm), GFP_KERNEL); 114 - if (!cxlhdm) 115 - return ERR_PTR(-ENOMEM); 127 + /* 128 + * If any decoders are committed already, there should not be any 129 + * emulated DVSEC decoders. 
130 + */ 131 + for (i = 0; i < cxlhdm->decoder_count; i++) { 132 + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i)); 133 + if (FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl)) 134 + return false; 135 + } 116 136 117 - cxlhdm->port = port; 118 - cxlhdm->decoder_count = info->ranges; 119 - cxlhdm->target_count = info->ranges; 120 - dev_set_drvdata(&port->dev, cxlhdm); 121 - 122 - return cxlhdm; 137 + return true; 123 138 } 124 139 125 140 /** ··· 153 138 cxlhdm = devm_kzalloc(dev, sizeof(*cxlhdm), GFP_KERNEL); 154 139 if (!cxlhdm) 155 140 return ERR_PTR(-ENOMEM); 156 - 157 141 cxlhdm->port = port; 158 - crb = ioremap(port->component_reg_phys, CXL_COMPONENT_REG_BLOCK_SIZE); 159 - if (!crb) { 160 - if (info && info->mem_enabled) 161 - return devm_cxl_setup_emulated_hdm(port, info); 142 + dev_set_drvdata(dev, cxlhdm); 162 143 144 + crb = ioremap(port->component_reg_phys, CXL_COMPONENT_REG_BLOCK_SIZE); 145 + if (!crb && info && info->mem_enabled) { 146 + cxlhdm->decoder_count = info->ranges; 147 + return cxlhdm; 148 + } else if (!crb) { 163 149 dev_err(dev, "No component registers mapped\n"); 164 150 return ERR_PTR(-ENXIO); 165 151 } ··· 176 160 return ERR_PTR(-ENXIO); 177 161 } 178 162 179 - dev_set_drvdata(dev, cxlhdm); 163 + /* 164 + * Now that the hdm capability is parsed, decide if range 165 + * register emulation is needed and fixup cxlhdm accordingly. 166 + */ 167 + if (should_emulate_decoders(info)) { 168 + dev_dbg(dev, "Fallback map %d range register%s\n", info->ranges, 169 + info->ranges > 1 ? 
"s" : ""); 170 + cxlhdm->decoder_count = info->ranges; 171 + } 180 172 181 173 return cxlhdm; 182 174 } ··· 738 714 return 0; 739 715 } 740 716 741 - static int cxl_setup_hdm_decoder_from_dvsec(struct cxl_port *port, 742 - struct cxl_decoder *cxld, int which, 743 - struct cxl_endpoint_dvsec_info *info) 717 + static int cxl_setup_hdm_decoder_from_dvsec( 718 + struct cxl_port *port, struct cxl_decoder *cxld, u64 *dpa_base, 719 + int which, struct cxl_endpoint_dvsec_info *info) 744 720 { 721 + struct cxl_endpoint_decoder *cxled; 722 + u64 len; 723 + int rc; 724 + 745 725 if (!is_cxl_endpoint(port)) 746 726 return -EOPNOTSUPP; 747 727 748 - if (!range_len(&info->dvsec_range[which])) 728 + cxled = to_cxl_endpoint_decoder(&cxld->dev); 729 + len = range_len(&info->dvsec_range[which]); 730 + if (!len) 749 731 return -ENOENT; 750 732 751 733 cxld->target_type = CXL_DECODER_EXPANDER; ··· 766 736 cxld->flags |= CXL_DECODER_F_ENABLE | CXL_DECODER_F_LOCK; 767 737 port->commit_end = cxld->id; 768 738 769 - return 0; 770 - } 771 - 772 - static bool should_emulate_decoders(struct cxl_port *port) 773 - { 774 - struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev); 775 - void __iomem *hdm = cxlhdm->regs.hdm_decoder; 776 - u32 ctrl; 777 - int i; 778 - 779 - if (!is_cxl_endpoint(cxlhdm->port)) 780 - return false; 781 - 782 - if (!hdm) 783 - return true; 784 - 785 - /* 786 - * If any decoders are committed already, there should not be any 787 - * emulated DVSEC decoders. 
788 - */ 789 - for (i = 0; i < cxlhdm->decoder_count; i++) { 790 - ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i)); 791 - if (FIELD_GET(CXL_HDM_DECODER0_CTRL_COMMITTED, ctrl)) 792 - return false; 739 + rc = devm_cxl_dpa_reserve(cxled, *dpa_base, len, 0); 740 + if (rc) { 741 + dev_err(&port->dev, 742 + "decoder%d.%d: Failed to reserve DPA range %#llx - %#llx\n (%d)", 743 + port->id, cxld->id, *dpa_base, *dpa_base + len - 1, rc); 744 + return rc; 793 745 } 746 + *dpa_base += len; 747 + cxled->state = CXL_DECODER_STATE_AUTO; 794 748 795 - return true; 749 + return 0; 796 750 } 797 751 798 752 static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, 799 753 int *target_map, void __iomem *hdm, int which, 800 754 u64 *dpa_base, struct cxl_endpoint_dvsec_info *info) 801 755 { 802 - struct cxl_endpoint_decoder *cxled = NULL; 756 + struct cxl_endpoint_decoder *cxled; 803 757 u64 size, base, skip, dpa_size; 804 758 bool committed; 805 759 u32 remainder; ··· 794 780 unsigned char target_id[8]; 795 781 } target_list; 796 782 797 - if (should_emulate_decoders(port)) 798 - return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); 799 - 800 - if (is_endpoint_decoder(&cxld->dev)) 801 - cxled = to_cxl_endpoint_decoder(&cxld->dev); 783 + if (should_emulate_decoders(info)) 784 + return cxl_setup_hdm_decoder_from_dvsec(port, cxld, dpa_base, 785 + which, info); 802 786 803 787 ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which)); 804 788 base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which)); ··· 817 805 .start = base, 818 806 .end = base + size - 1, 819 807 }; 820 - 821 - if (cxled && !committed && range_len(&info->dvsec_range[which])) 822 - return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); 823 808 824 809 /* decoders are enabled if committed */ 825 810 if (committed) { ··· 855 846 if (rc) 856 847 return rc; 857 848 858 - if (!cxled) { 849 + if (!info) { 859 850 target_list.value = 860 851 ioread64_hi_lo(hdm + 
CXL_HDM_DECODER0_TL_LOW(which)); 861 852 for (i = 0; i < cxld->interleave_ways; i++) ··· 875 866 return -ENXIO; 876 867 } 877 868 skip = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SKIP_LOW(which)); 869 + cxled = to_cxl_endpoint_decoder(&cxld->dev); 878 870 rc = devm_cxl_dpa_reserve(cxled, *dpa_base + skip, dpa_size, skip); 879 871 if (rc) { 880 872 dev_err(&port->dev,
+23 -15
drivers/cxl/core/pci.c
··· 462 462 return NULL; 463 463 } 464 464 465 - #define CDAT_DOE_REQ(entry_handle) \ 465 + #define CDAT_DOE_REQ(entry_handle) cpu_to_le32 \ 466 466 (FIELD_PREP(CXL_DOE_TABLE_ACCESS_REQ_CODE, \ 467 467 CXL_DOE_TABLE_ACCESS_REQ_CODE_READ) | \ 468 468 FIELD_PREP(CXL_DOE_TABLE_ACCESS_TABLE_TYPE, \ ··· 475 475 } 476 476 477 477 struct cdat_doe_task { 478 - u32 request_pl; 479 - u32 response_pl[32]; 478 + __le32 request_pl; 479 + __le32 response_pl[32]; 480 480 struct completion c; 481 481 struct pci_doe_task task; 482 482 }; ··· 510 510 return rc; 511 511 } 512 512 wait_for_completion(&t.c); 513 - if (t.task.rv < sizeof(u32)) 513 + if (t.task.rv < 2 * sizeof(__le32)) 514 514 return -EIO; 515 515 516 - *length = t.response_pl[1]; 516 + *length = le32_to_cpu(t.response_pl[1]); 517 517 dev_dbg(dev, "CDAT length %zu\n", *length); 518 518 519 519 return 0; ··· 524 524 struct cxl_cdat *cdat) 525 525 { 526 526 size_t length = cdat->length; 527 - u32 *data = cdat->table; 527 + __le32 *data = cdat->table; 528 528 int entry_handle = 0; 529 529 530 530 do { 531 531 DECLARE_CDAT_DOE_TASK(CDAT_DOE_REQ(entry_handle), t); 532 + struct cdat_entry_header *entry; 532 533 size_t entry_dw; 533 - u32 *entry; 534 534 int rc; 535 535 536 536 rc = pci_doe_submit_task(cdat_doe, &t.task); ··· 539 539 return rc; 540 540 } 541 541 wait_for_completion(&t.c); 542 - /* 1 DW header + 1 DW data min */ 543 - if (t.task.rv < (2 * sizeof(u32))) 542 + 543 + /* 1 DW Table Access Response Header + CDAT entry */ 544 + entry = (struct cdat_entry_header *)(t.response_pl + 1); 545 + if ((entry_handle == 0 && 546 + t.task.rv != sizeof(__le32) + sizeof(struct cdat_header)) || 547 + (entry_handle > 0 && 548 + (t.task.rv < sizeof(__le32) + sizeof(*entry) || 549 + t.task.rv != sizeof(__le32) + le16_to_cpu(entry->length)))) 544 550 return -EIO; 545 551 546 552 /* Get the CXL table access header entry handle */ 547 553 entry_handle = FIELD_GET(CXL_DOE_TABLE_ACCESS_ENTRY_HANDLE, 548 - t.response_pl[0]); 549 - entry = 
t.response_pl + 1; 550 - entry_dw = t.task.rv / sizeof(u32); 554 + le32_to_cpu(t.response_pl[0])); 555 + entry_dw = t.task.rv / sizeof(__le32); 551 556 /* Skip Header */ 552 557 entry_dw -= 1; 553 - entry_dw = min(length / sizeof(u32), entry_dw); 558 + entry_dw = min(length / sizeof(__le32), entry_dw); 554 559 /* Prevent length < 1 DW from causing a buffer overflow */ 555 560 if (entry_dw) { 556 - memcpy(data, entry, entry_dw * sizeof(u32)); 557 - length -= entry_dw * sizeof(u32); 561 + memcpy(data, entry, entry_dw * sizeof(__le32)); 562 + length -= entry_dw * sizeof(__le32); 558 563 data += entry_dw; 559 564 } 560 565 } while (entry_handle != CXL_DOE_TABLE_ACCESS_LAST_ENTRY); 566 + 567 + /* Length in CDAT header may exceed concatenation of CDAT entries */ 568 + cdat->length -= length; 561 569 562 570 return 0; 563 571 }
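The CDAT retrieval fix above annotates the DOE request and response payloads as `__le32` and converts with `le32_to_cpu()`, since the DOE mailbox carries little-endian data regardless of host byte order. A portable user-space equivalent of that conversion, assembling the value byte by byte (the kernel helper itself is not reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Portable equivalent of le32_to_cpu() on a raw byte buffer: build
 * the value from individual bytes so host endianness never matters. */
static uint32_t get_le32(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```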
+3 -3
drivers/cxl/core/pmem.c
··· 62 62 return is_cxl_nvdimm_bridge(dev); 63 63 } 64 64 65 - struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct device *start) 65 + struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_memdev *cxlmd) 66 66 { 67 - struct cxl_port *port = find_cxl_root(start); 67 + struct cxl_port *port = find_cxl_root(dev_get_drvdata(&cxlmd->dev)); 68 68 struct device *dev; 69 69 70 70 if (!port) ··· 253 253 struct device *dev; 254 254 int rc; 255 255 256 - cxl_nvb = cxl_find_nvdimm_bridge(&cxlmd->dev); 256 + cxl_nvb = cxl_find_nvdimm_bridge(cxlmd); 257 257 if (!cxl_nvb) 258 258 return -ENODEV; 259 259
+7 -31
drivers/cxl/core/port.c
··· 823 823 return false; 824 824 } 825 825 826 - /* Find a 2nd level CXL port that has a dport that is an ancestor of @match */ 827 - static int match_root_child(struct device *dev, const void *match) 826 + struct cxl_port *find_cxl_root(struct cxl_port *port) 828 827 { 829 - const struct device *iter = NULL; 830 - struct cxl_dport *dport; 831 - struct cxl_port *port; 828 + struct cxl_port *iter = port; 832 829 833 - if (!dev_is_cxl_root_child(dev)) 834 - return 0; 830 + while (iter && !is_cxl_root(iter)) 831 + iter = to_cxl_port(iter->dev.parent); 835 832 836 - port = to_cxl_port(dev); 837 - iter = match; 838 - while (iter) { 839 - dport = cxl_find_dport_by_dev(port, iter); 840 - if (dport) 841 - break; 842 - iter = iter->parent; 843 - } 844 - 845 - return !!iter; 846 - } 847 - 848 - struct cxl_port *find_cxl_root(struct device *dev) 849 - { 850 - struct device *port_dev; 851 - struct cxl_port *root; 852 - 853 - port_dev = bus_find_device(&cxl_bus_type, NULL, dev, match_root_child); 854 - if (!port_dev) 833 + if (!iter) 855 834 return NULL; 856 - 857 - root = to_cxl_port(port_dev->parent); 858 - get_device(&root->dev); 859 - put_device(port_dev); 860 - return root; 835 + get_device(&iter->dev); 836 + return iter; 861 837 } 862 838 EXPORT_SYMBOL_NS_GPL(find_cxl_root, CXL); 863 839
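The rewritten `find_cxl_root()` above drops the `bus_find_device()` search in favor of simply walking the port's parent chain until it reaches a root port. A stripped-down sketch of that walk — `struct port` here is illustrative, and the kernel's reference counting via `get_device()` is omitted:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the kernel structure: a port either has
 * a parent port or is the root of the hierarchy. */
struct port {
    struct port *parent;
    bool is_root;
};

/* Walk the parent chain until the root is found, as the simplified
 * find_cxl_root() above does; returns NULL if no root exists. */
static struct port *find_root(struct port *p)
{
    struct port *iter = p;

    while (iter && !iter->is_root)
        iter = iter->parent;
    return iter;
}
```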
+29 -4
drivers/cxl/core/region.c
··· 134 134 struct cxl_endpoint_decoder *cxled = p->targets[i]; 135 135 struct cxl_memdev *cxlmd = cxled_to_memdev(cxled); 136 136 struct cxl_port *iter = cxled_to_port(cxled); 137 + struct cxl_dev_state *cxlds = cxlmd->cxlds; 137 138 struct cxl_ep *ep; 138 139 int rc = 0; 140 + 141 + if (cxlds->rcd) 142 + goto endpoint_reset; 139 143 140 144 while (!is_cxl_root(to_cxl_port(iter->dev.parent))) 141 145 iter = to_cxl_port(iter->dev.parent); ··· 157 153 return rc; 158 154 } 159 155 156 + endpoint_reset: 160 157 rc = cxled->cxld.reset(&cxled->cxld); 161 158 if (rc) 162 159 return rc; ··· 1204 1199 { 1205 1200 struct cxl_region_params *p = &cxlr->params; 1206 1201 struct cxl_endpoint_decoder *cxled; 1202 + struct cxl_dev_state *cxlds; 1207 1203 struct cxl_memdev *cxlmd; 1208 1204 struct cxl_port *iter; 1209 1205 struct cxl_ep *ep; ··· 1220 1214 for (i = 0; i < p->nr_targets; i++) { 1221 1215 cxled = p->targets[i]; 1222 1216 cxlmd = cxled_to_memdev(cxled); 1217 + cxlds = cxlmd->cxlds; 1218 + 1219 + if (cxlds->rcd) 1220 + continue; 1223 1221 1224 1222 iter = cxled_to_port(cxled); 1225 1223 while (!is_cxl_root(to_cxl_port(iter->dev.parent))) ··· 1239 1229 { 1240 1230 struct cxl_region_params *p = &cxlr->params; 1241 1231 struct cxl_endpoint_decoder *cxled; 1232 + struct cxl_dev_state *cxlds; 1233 + int i, rc, rch = 0, vh = 0; 1242 1234 struct cxl_memdev *cxlmd; 1243 1235 struct cxl_port *iter; 1244 1236 struct cxl_ep *ep; 1245 - int i, rc; 1246 1237 1247 1238 for (i = 0; i < p->nr_targets; i++) { 1248 1239 cxled = p->targets[i]; 1249 1240 cxlmd = cxled_to_memdev(cxled); 1241 + cxlds = cxlmd->cxlds; 1242 + 1243 + /* validate that all targets agree on topology */ 1244 + if (!cxlds->rcd) { 1245 + vh++; 1246 + } else { 1247 + rch++; 1248 + continue; 1249 + } 1250 1250 1251 1251 iter = cxled_to_port(cxled); 1252 1252 while (!is_cxl_root(to_cxl_port(iter->dev.parent))) ··· 1274 1254 return rc; 1275 1255 } 1276 1256 } 1257 + } 1258 + 1259 + if (rch && vh) { 1260 + 
dev_err(&cxlr->dev, "mismatched CXL topologies detected\n"); 1261 + cxl_region_teardown_targets(cxlr); 1262 + return -ENXIO; 1277 1263 } 1278 1264 1279 1265 return 0; ··· 1674 1648 if (rc) 1675 1649 goto err_decrement; 1676 1650 p->state = CXL_CONFIG_ACTIVE; 1651 + set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags); 1677 1652 } 1678 1653 1679 1654 cxled->cxld.interleave_ways = p->interleave_ways; ··· 1776 1749 1777 1750 down_read(&cxl_dpa_rwsem); 1778 1751 rc = cxl_region_attach(cxlr, cxled, pos); 1779 - if (rc == 0) 1780 - set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags); 1781 1752 up_read(&cxl_dpa_rwsem); 1782 1753 up_write(&cxl_region_rwsem); 1783 1754 return rc; ··· 2276 2251 * bridge for one device is the same for all. 2277 2252 */ 2278 2253 if (i == 0) { 2279 - cxl_nvb = cxl_find_nvdimm_bridge(&cxlmd->dev); 2254 + cxl_nvb = cxl_find_nvdimm_bridge(cxlmd); 2280 2255 if (!cxl_nvb) { 2281 2256 cxlr_pmem = ERR_PTR(-ENODEV); 2282 2257 goto out;
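`cxl_region_setup_targets()` above counts restricted-host (`rcd`) and virtual-host targets and rejects a region that mixes the two with `-ENXIO`. A small sketch of just that validation step — the surrounding decoder programming is omitted and the function name is illustrative:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Fail with -ENXIO when the targets mix restricted-host (RCH) and
 * virtual-host (VH) topologies, mirroring the rch/vh counters above. */
static int check_topology(const bool *rcd, size_t n)
{
    size_t rch = 0, vh = 0;

    for (size_t i = 0; i < n; i++) {
        if (rcd[i])
            rch++;
        else
            vh++;
    }
    return (rch && vh) ? -ENXIO : 0;
}
```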
+5 -3
drivers/cxl/cxl.h
··· 658 658 struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport, 659 659 resource_size_t component_reg_phys, 660 660 struct cxl_dport *parent_dport); 661 - struct cxl_port *find_cxl_root(struct device *dev); 661 + struct cxl_port *find_cxl_root(struct cxl_port *port); 662 662 int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd); 663 663 void cxl_bus_rescan(void); 664 664 void cxl_bus_drain(void); ··· 695 695 696 696 /** 697 697 * struct cxl_endpoint_dvsec_info - Cached DVSEC info 698 - * @mem_enabled: cached value of mem_enabled in the DVSEC, PCIE_DEVICE 698 + * @mem_enabled: cached value of mem_enabled in the DVSEC at init time 699 699 * @ranges: Number of active HDM ranges this device uses. 700 + * @port: endpoint port associated with this info instance 700 701 * @dvsec_range: cached attributes of the ranges in the DVSEC, PCIE_DEVICE 701 702 */ 702 703 struct cxl_endpoint_dvsec_info { 703 704 bool mem_enabled; 704 705 int ranges; 706 + struct cxl_port *port; 705 707 struct range dvsec_range[2]; 706 708 }; 707 709 ··· 760 758 bool is_cxl_nvdimm(struct device *dev); 761 759 bool is_cxl_nvdimm_bridge(struct device *dev); 762 760 int devm_cxl_add_nvdimm(struct cxl_memdev *cxlmd); 763 - struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct device *dev); 761 + struct cxl_nvdimm_bridge *cxl_find_nvdimm_bridge(struct cxl_memdev *cxlmd); 764 762 765 763 #ifdef CONFIG_CXL_REGION 766 764 bool is_cxl_pmem_region(struct device *dev);
+14
drivers/cxl/cxlpci.h
··· 68 68 CXL_REGLOC_RBI_TYPES 69 69 }; 70 70 71 + struct cdat_header { 72 + __le32 length; 73 + u8 revision; 74 + u8 checksum; 75 + u8 reserved[6]; 76 + __le32 sequence; 77 + } __packed; 78 + 79 + struct cdat_entry_header { 80 + u8 type; 81 + u8 reserved; 82 + __le16 length; 83 + } __packed; 84 + 71 85 int devm_cxl_port_enumerate_dports(struct cxl_port *port); 72 86 struct cxl_dev_state; 73 87 int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
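The new `cdat_entry_header` above fixes the CDAT entry layout at four bytes: a type byte, a reserved byte, and a 16-bit little-endian length that the response-size checks in cxl/core/pci.c validate against. A sketch of decoding that header from raw bytes (helper and struct names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the 4-byte cdat_entry_header layout added above:
 * type (1 byte), reserved (1 byte), little-endian length (2 bytes). */
struct cdat_entry {
    uint8_t type;
    uint16_t length;
};

static struct cdat_entry parse_cdat_entry(const uint8_t *p)
{
    struct cdat_entry e;

    e.type = p[0];
    /* bytes 2..3 hold the little-endian length field */
    e.length = (uint16_t)(p[2] | (p[3] << 8));
    return e;
}
```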
+2 -2
drivers/cxl/port.c
··· 78 78 79 79 static int cxl_endpoint_port_probe(struct cxl_port *port) 80 80 { 81 + struct cxl_endpoint_dvsec_info info = { .port = port }; 81 82 struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport); 82 - struct cxl_endpoint_dvsec_info info = { 0 }; 83 83 struct cxl_dev_state *cxlds = cxlmd->cxlds; 84 84 struct cxl_hdm *cxlhdm; 85 85 struct cxl_port *root; ··· 119 119 * This can't fail in practice as CXL root exit unregisters all 120 120 * descendant ports and that in turn synchronizes with cxl_port_probe() 121 121 */ 122 - root = find_cxl_root(&cxlmd->dev); 122 + root = find_cxl_root(port); 123 123 124 124 /* 125 125 * Now that all endpoint decoders are successfully enumerated, try to
+17 -3
drivers/dma/apple-admac.c
··· 75 75 76 76 #define REG_TX_INTSTATE(idx) (0x0030 + (idx) * 4) 77 77 #define REG_RX_INTSTATE(idx) (0x0040 + (idx) * 4) 78 + #define REG_GLOBAL_INTSTATE(idx) (0x0050 + (idx) * 4) 78 79 #define REG_CHAN_INTSTATUS(ch, idx) (0x8010 + (ch) * 0x200 + (idx) * 4) 79 80 #define REG_CHAN_INTMASK(ch, idx) (0x8020 + (ch) * 0x200 + (idx) * 4) 80 81 ··· 512 511 admac_stop_chan(adchan); 513 512 admac_reset_rings(adchan); 514 513 515 - adchan->current_tx = NULL; 514 + if (adchan->current_tx) { 515 + list_add_tail(&adchan->current_tx->node, &adchan->to_free); 516 + adchan->current_tx = NULL; 517 + } 516 518 /* 517 519 * Descriptors can only be freed after the tasklet 518 520 * has been killed (in admac_synchronize). ··· 676 672 static irqreturn_t admac_interrupt(int irq, void *devid) 677 673 { 678 674 struct admac_data *ad = devid; 679 - u32 rx_intstate, tx_intstate; 675 + u32 rx_intstate, tx_intstate, global_intstate; 680 676 int i; 681 677 682 678 rx_intstate = readl_relaxed(ad->base + REG_RX_INTSTATE(ad->irq_index)); 683 679 tx_intstate = readl_relaxed(ad->base + REG_TX_INTSTATE(ad->irq_index)); 680 + global_intstate = readl_relaxed(ad->base + REG_GLOBAL_INTSTATE(ad->irq_index)); 684 681 685 - if (!tx_intstate && !rx_intstate) 682 + if (!tx_intstate && !rx_intstate && !global_intstate) 686 683 return IRQ_NONE; 687 684 688 685 for (i = 0; i < ad->nchannels; i += 2) { ··· 696 691 if (rx_intstate & 1) 697 692 admac_handle_chan_int(ad, i); 698 693 rx_intstate >>= 1; 694 + } 695 + 696 + if (global_intstate) { 697 + dev_warn(ad->dev, "clearing unknown global interrupt flag: %x\n", 698 + global_intstate); 699 + writel_relaxed(~(u32) 0, ad->base + REG_GLOBAL_INTSTATE(ad->irq_index)); 699 700 } 700 701 701 702 return IRQ_HANDLED; ··· 861 850 862 851 dma->directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM); 863 852 dma->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST; 853 + dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | 854 + BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | 855 + BIT(DMA_SLAVE_BUSWIDTH_4_BYTES); 864 856 dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | 865 857 BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | 866 858 BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+1 -1
drivers/dma/dmaengine.c
··· 1342 1342 if (ret) 1343 1343 return ret; 1344 1344 1345 - return devm_add_action(device->dev, dmaenginem_async_device_unregister, device); 1345 + return devm_add_action_or_reset(device->dev, dmaenginem_async_device_unregister, device); 1346 1346 } 1347 1347 EXPORT_SYMBOL(dmaenginem_async_device_register); 1348 1348
+1 -1
drivers/dma/xilinx/xdma.c
··· 277 277 278 278 /** 279 279 * xdma_xfer_start - Start DMA transfer 280 - * @xdma_chan: DMA channel pointer 280 + * @xchan: DMA channel pointer 281 281 */ 282 282 static int xdma_xfer_start(struct xdma_chan *xchan) 283 283 {
+1 -1
drivers/gpio/Kconfig
··· 100 100 tristate 101 101 102 102 config GPIO_REGMAP 103 - depends on REGMAP 103 + select REGMAP 104 104 tristate 105 105 106 106 # put drivers in the right section, in alphabetical order
+1 -4
drivers/gpio/gpio-davinci.c
··· 324 324 .irq_enable = gpio_irq_enable, 325 325 .irq_disable = gpio_irq_disable, 326 326 .irq_set_type = gpio_irq_type, 327 - .flags = IRQCHIP_SET_TYPE_MASKED, 327 + .flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE, 328 328 }; 329 329 330 330 static void gpio_irq_handler(struct irq_desc *desc) ··· 640 640 context->set_rising = readl_relaxed(&g->set_rising); 641 641 context->set_falling = readl_relaxed(&g->set_falling); 642 642 } 643 - 644 - /* Clear Bank interrupt enable bit */ 645 - writel_relaxed(0, base + BINTEN); 646 643 647 644 /* Clear all interrupt status registers */ 648 645 writel_relaxed(GENMASK(31, 0), &g->intstat);
+50 -7
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
··· 177 177 const struct dc_link *link) 178 178 {} 179 179 180 + static void dm_helpers_construct_old_payload( 181 + struct dc_link *link, 182 + int pbn_per_slot, 183 + struct drm_dp_mst_atomic_payload *new_payload, 184 + struct drm_dp_mst_atomic_payload *old_payload) 185 + { 186 + struct link_mst_stream_allocation_table current_link_table = 187 + link->mst_stream_alloc_table; 188 + struct link_mst_stream_allocation *dc_alloc; 189 + int i; 190 + 191 + *old_payload = *new_payload; 192 + 193 + /* Set correct time_slots/PBN of old payload. 194 + * other fields (delete & dsc_enabled) in 195 + * struct drm_dp_mst_atomic_payload are don't care fields 196 + * while calling drm_dp_remove_payload() 197 + */ 198 + for (i = 0; i < current_link_table.stream_count; i++) { 199 + dc_alloc = 200 + &current_link_table.stream_allocations[i]; 201 + 202 + if (dc_alloc->vcp_id == new_payload->vcpi) { 203 + old_payload->time_slots = dc_alloc->slot_count; 204 + old_payload->pbn = dc_alloc->slot_count * pbn_per_slot; 205 + break; 206 + } 207 + } 208 + 209 + /* make sure there is an old payload*/ 210 + ASSERT(i != current_link_table.stream_count); 211 + 212 + } 213 + 180 214 /* 181 215 * Writes payload allocation table in immediate downstream device. 182 216 */ ··· 222 188 { 223 189 struct amdgpu_dm_connector *aconnector; 224 190 struct drm_dp_mst_topology_state *mst_state; 225 - struct drm_dp_mst_atomic_payload *payload; 191 + struct drm_dp_mst_atomic_payload *target_payload, *new_payload, old_payload; 226 192 struct drm_dp_mst_topology_mgr *mst_mgr; 227 193 228 194 aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context; ··· 238 204 mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state); 239 205 240 206 /* It's OK for this to fail */ 241 - payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port); 242 - if (enable) 243 - drm_dp_add_payload_part1(mst_mgr, mst_state, payload); 244 - else 245 - drm_dp_remove_payload(mst_mgr, mst_state, payload, payload); 207 + new_payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->mst_output_port); 208 + 209 + if (enable) { 210 + target_payload = new_payload; 211 + 212 + drm_dp_add_payload_part1(mst_mgr, mst_state, new_payload); 213 + } else { 214 + /* construct old payload by VCPI*/ 215 + dm_helpers_construct_old_payload(stream->link, mst_state->pbn_div, 216 + new_payload, &old_payload); 217 + target_payload = &old_payload; 218 + 219 + drm_dp_remove_payload(mst_mgr, mst_state, &old_payload, new_payload); 220 + } 246 221 247 222 /* mst_mgr->->payloads are VC payload notify MST branch using DPCD or 248 223 * AUX message. The sequence is slot 1-63 allocated sequence for each 249 224 * stream. AMD ASIC stream slot allocation should follow the same 250 225 * sequence. copy DRM MST allocation to dc */ 251 - fill_dc_mst_payload_table_from_drm(stream->link, enable, payload, proposed_table); 226 + fill_dc_mst_payload_table_from_drm(stream->link, enable, target_payload, proposed_table); 252 227 253 228 return true; 254 229 }
+6
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 61 61 #define CTF_OFFSET_HOTSPOT 5 62 62 #define CTF_OFFSET_MEM 5 63 63 64 + static const int pmfw_decoded_link_speed[5] = {1, 2, 3, 4, 5}; 65 + static const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16}; 66 + 67 + #define DECODE_GEN_SPEED(gen_speed_idx) (pmfw_decoded_link_speed[gen_speed_idx]) 68 + #define DECODE_LANE_WIDTH(lane_width_idx) (pmfw_decoded_link_width[lane_width_idx]) 69 + 64 70 struct smu_13_0_max_sustainable_clocks { 65 71 uint32_t display_clock; 66 72 uint32_t phy_clock;
+2 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 1144 1144 (pcie_table->pcie_lane[i] == 5) ? "x12" : 1145 1145 (pcie_table->pcie_lane[i] == 6) ? "x16" : "", 1146 1146 pcie_table->clk_freq[i], 1147 - ((gen_speed - 1) == pcie_table->pcie_gen[i]) && 1148 - (lane_width == link_width[pcie_table->pcie_lane[i]]) ? 1147 + (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) && 1148 + (lane_width == DECODE_LANE_WIDTH(link_width[pcie_table->pcie_lane[i]])) ? 1149 1149 "*" : ""); 1150 1150 break; 1151 1151
+77 -10
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 575 575 dpm_table); 576 576 if (ret) 577 577 return ret; 578 + 579 + if (skutable->DriverReportedClocks.GameClockAc && 580 + (dpm_table->dpm_levels[dpm_table->count - 1].value > 581 + skutable->DriverReportedClocks.GameClockAc)) { 582 + dpm_table->dpm_levels[dpm_table->count - 1].value = 583 + skutable->DriverReportedClocks.GameClockAc; 584 + dpm_table->max = skutable->DriverReportedClocks.GameClockAc; 585 + } 578 586 } else { 579 587 dpm_table->count = 1; 580 588 dpm_table->dpm_levels[0].value = smu->smu_table.boot_values.gfxclk / 100; ··· 836 828 return ret; 837 829 } 838 830 831 + static int smu_v13_0_7_get_dpm_ultimate_freq(struct smu_context *smu, 832 + enum smu_clk_type clk_type, 833 + uint32_t *min, 834 + uint32_t *max) 835 + { 836 + struct smu_13_0_dpm_context *dpm_context = 837 + smu->smu_dpm.dpm_context; 838 + struct smu_13_0_dpm_table *dpm_table; 839 + 840 + switch (clk_type) { 841 + case SMU_MCLK: 842 + case SMU_UCLK: 843 + /* uclk dpm table */ 844 + dpm_table = &dpm_context->dpm_tables.uclk_table; 845 + break; 846 + case SMU_GFXCLK: 847 + case SMU_SCLK: 848 + /* gfxclk dpm table */ 849 + dpm_table = &dpm_context->dpm_tables.gfx_table; 850 + break; 851 + case SMU_SOCCLK: 852 + /* socclk dpm table */ 853 + dpm_table = &dpm_context->dpm_tables.soc_table; 854 + break; 855 + case SMU_FCLK: 856 + /* fclk dpm table */ 857 + dpm_table = &dpm_context->dpm_tables.fclk_table; 858 + break; 859 + case SMU_VCLK: 860 + case SMU_VCLK1: 861 + /* vclk dpm table */ 862 + dpm_table = &dpm_context->dpm_tables.vclk_table; 863 + break; 864 + case SMU_DCLK: 865 + case SMU_DCLK1: 866 + /* dclk dpm table */ 867 + dpm_table = &dpm_context->dpm_tables.dclk_table; 868 + break; 869 + default: 870 + dev_err(smu->adev->dev, "Unsupported clock type!\n"); 871 + return -EINVAL; 872 + } 873 + 874 + if (min) 875 + *min = dpm_table->min; 876 + if (max) 877 + *max = dpm_table->max; 878 + 879 + return 0; 880 + } 881 + 839 882 static int smu_v13_0_7_read_sensor(struct smu_context *smu, 840 883 enum amd_pp_sensors sensor, 841 884 void *data, ··· 1133 1074 (pcie_table->pcie_lane[i] == 5) ? "x12" : 1134 1075 (pcie_table->pcie_lane[i] == 6) ? "x16" : "", 1135 1076 pcie_table->clk_freq[i], 1136 - (gen_speed == pcie_table->pcie_gen[i]) && 1137 - (lane_width == pcie_table->pcie_lane[i]) ? 1077 + (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) && 1078 + (lane_width == DECODE_LANE_WIDTH(pcie_table->pcie_lane[i])) ? 1138 1079 "*" : ""); 1139 1080 break; 1140 1081 ··· 1388 1329 &dpm_context->dpm_tables.fclk_table; 1389 1330 struct smu_umd_pstate_table *pstate_table = 1390 1331 &smu->pstate_table; 1332 + struct smu_table_context *table_context = &smu->smu_table; 1333 + PPTable_t *pptable = table_context->driver_pptable; 1334 + DriverReportedClocks_t driver_clocks = 1335 + pptable->SkuTable.DriverReportedClocks; 1391 1336 1392 1337 pstate_table->gfxclk_pstate.min = gfx_table->min; 1393 - pstate_table->gfxclk_pstate.peak = gfx_table->max; 1338 + if (driver_clocks.GameClockAc && 1339 + (driver_clocks.GameClockAc < gfx_table->max)) 1340 + pstate_table->gfxclk_pstate.peak = driver_clocks.GameClockAc; 1341 + else 1342 + pstate_table->gfxclk_pstate.peak = gfx_table->max; 1394 1343 1395 1344 pstate_table->uclk_pstate.min = mem_table->min; 1396 1345 pstate_table->uclk_pstate.peak = mem_table->max; ··· 1415 1348 pstate_table->fclk_pstate.min = fclk_table->min; 1416 1349 pstate_table->fclk_pstate.peak = fclk_table->max; 1417 1350 1418 - /* 1419 - * For now, just use the mininum clock frequency. 1420 - * TODO: update them when the real pstate settings available 1421 - */ 1422 - pstate_table->gfxclk_pstate.standard = gfx_table->min; 1423 - pstate_table->uclk_pstate.standard = mem_table->min; 1351 + if (driver_clocks.BaseClockAc && 1352 + driver_clocks.BaseClockAc < gfx_table->max) 1353 + pstate_table->gfxclk_pstate.standard = driver_clocks.BaseClockAc; 1354 + else 1355 + pstate_table->gfxclk_pstate.standard = gfx_table->max; 1356 + pstate_table->uclk_pstate.standard = mem_table->max; 1424 1357 pstate_table->socclk_pstate.standard = soc_table->min; 1425 1358 pstate_table->vclk_pstate.standard = vclk_table->min; 1426 1359 pstate_table->dclk_pstate.standard = dclk_table->min; ··· 1743 1676 .dpm_set_jpeg_enable = smu_v13_0_set_jpeg_enable, 1744 1677 .init_pptable_microcode = smu_v13_0_init_pptable_microcode, 1745 1678 .populate_umd_state_clk = smu_v13_0_7_populate_umd_state_clk, 1746 - .get_dpm_ultimate_freq = smu_v13_0_get_dpm_ultimate_freq, 1747 1679 .get_dpm_ultimate_freq = smu_v13_0_7_get_dpm_ultimate_freq, 1747 1680 .get_vbios_bootup_values = smu_v13_0_get_vbios_bootup_values, 1748 1681 .read_sensor = smu_v13_0_7_read_sensor, 1749 1682 .feature_is_enabled = smu_cmn_feature_is_enabled,
-1
drivers/gpu/drm/armada/armada_drv.c
··· 99 99 if (ret) { 100 100 dev_err(dev, "[" DRM_NAME ":%s] can't kick out simple-fb: %d\n", 101 101 __func__, ret); 102 - kfree(priv); 103 102 return ret; 104 103 } 105 104
+16 -4
drivers/gpu/drm/i915/display/icl_dsi.c
··· 300 300 { 301 301 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev); 302 302 struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder); 303 + i915_reg_t dss_ctl1_reg, dss_ctl2_reg; 303 304 u32 dss_ctl1; 304 305 305 - dss_ctl1 = intel_de_read(dev_priv, DSS_CTL1); 306 + /* FIXME: Move all DSS handling to intel_vdsc.c */ 307 + if (DISPLAY_VER(dev_priv) >= 12) { 308 + struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc); 309 + 310 + dss_ctl1_reg = ICL_PIPE_DSS_CTL1(crtc->pipe); 311 + dss_ctl2_reg = ICL_PIPE_DSS_CTL2(crtc->pipe); 312 + } else { 313 + dss_ctl1_reg = DSS_CTL1; 314 + dss_ctl2_reg = DSS_CTL2; 315 + } 316 + 317 + dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg); 306 318 dss_ctl1 |= SPLITTER_ENABLE; 307 319 dss_ctl1 &= ~OVERLAP_PIXELS_MASK; 308 320 dss_ctl1 |= OVERLAP_PIXELS(intel_dsi->pixel_overlap); ··· 335 323 336 324 dss_ctl1 &= ~LEFT_DL_BUF_TARGET_DEPTH_MASK; 337 325 dss_ctl1 |= LEFT_DL_BUF_TARGET_DEPTH(dl_buffer_depth); 338 - dss_ctl2 = intel_de_read(dev_priv, DSS_CTL2); 326 + dss_ctl2 = intel_de_read(dev_priv, dss_ctl2_reg); 339 327 dss_ctl2 &= ~RIGHT_DL_BUF_TARGET_DEPTH_MASK; 340 328 dss_ctl2 |= RIGHT_DL_BUF_TARGET_DEPTH(dl_buffer_depth); 341 - intel_de_write(dev_priv, DSS_CTL2, dss_ctl2); 329 + intel_de_write(dev_priv, dss_ctl2_reg, dss_ctl2); 342 330 } else { 343 331 /* Interleave */ 344 332 dss_ctl1 |= DUAL_LINK_MODE_INTERLEAVE; 345 333 } 346 334 347 - intel_de_write(dev_priv, DSS_CTL1, dss_ctl1); 335 + intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1); 348 336 } 349 337 350 338 /* aka DSI 8X clock */
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gf108.c
··· 31 31 .init = gf100_fb_init, 32 32 .init_page = gf100_fb_init_page, 33 33 .intr = gf100_fb_intr, 34 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 34 35 .ram_new = gf108_ram_new, 35 36 .default_bigpage = 17, 36 37 };
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk104.c
··· 77 77 .init = gf100_fb_init, 78 78 .init_page = gf100_fb_init_page, 79 79 .intr = gf100_fb_intr, 80 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 80 81 .ram_new = gk104_ram_new, 81 82 .default_bigpage = 17, 82 83 .clkgate_pack = gk104_fb_clkgate_pack,
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gk110.c
··· 59 59 .init = gf100_fb_init, 60 60 .init_page = gf100_fb_init_page, 61 61 .intr = gf100_fb_intr, 62 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 62 63 .ram_new = gk104_ram_new, 63 64 .default_bigpage = 17, 64 65 .clkgate_pack = gk110_fb_clkgate_pack,
+1
drivers/gpu/drm/nouveau/nvkm/subdev/fb/gm107.c
··· 31 31 .init = gf100_fb_init, 32 32 .init_page = gf100_fb_init_page, 33 33 .intr = gf100_fb_intr, 34 + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, 34 35 .ram_new = gm107_ram_new, 35 36 .default_bigpage = 17, 36 37 };
+9 -2
drivers/gpu/drm/scheduler/sched_entity.c
··· 507 507 { 508 508 struct drm_sched_entity *entity = sched_job->entity; 509 509 bool first; 510 + ktime_t submit_ts; 510 511 511 512 trace_drm_sched_job(sched_job, entity); 512 513 atomic_inc(entity->rq->sched->score); 513 514 WRITE_ONCE(entity->last_user, current->group_leader); 515 + 516 + /* 517 + * After the sched_job is pushed into the entity queue, it may be 518 + * completed and freed up at any time. We can no longer access it. 519 + * Make sure to set the submit_ts first, to avoid a race. 520 + */ 521 + sched_job->submit_ts = submit_ts = ktime_get(); 514 522 first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node); 515 - sched_job->submit_ts = ktime_get(); 516 523 517 524 /* first job wakes up scheduler */ 518 525 if (first) { ··· 536 529 spin_unlock(&entity->rq_lock); 537 530 538 531 if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) 539 - drm_sched_rq_update_fifo(entity, sched_job->submit_ts); 532 + drm_sched_rq_update_fifo(entity, submit_ts); 540 533 541 534 drm_sched_wakeup(entity->rq->sched); 542 535 }
+1 -1
drivers/hid/Kconfig
··· 1122 1122 tristate "Topre REALFORCE keyboards" 1123 1123 depends on HID 1124 1124 help 1125 - Say Y for N-key rollover support on Topre REALFORCE R2 108 key keyboards. 1125 + Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key keyboards. 1126 1126 1127 1127 config HID_THINGM 1128 1128 tristate "ThingM blink(1) USB RGB LED"
+4
drivers/hid/hid-ids.h
··· 420 420 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A 421 421 #define I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN 0x2A1C 422 422 #define I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN 0x279F 423 + #define I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100 0x29F5 424 + #define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1 0x2BED 425 + #define I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2 0x2BEE 423 426 424 427 #define USB_VENDOR_ID_ELECOM 0x056e 425 428 #define USB_DEVICE_ID_ELECOM_BM084 0x0061 ··· 1252 1249 1253 1250 #define USB_VENDOR_ID_TOPRE 0x0853 1254 1251 #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_108 0x0148 1252 + #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_87 0x0146 1255 1253 1256 1254 #define USB_VENDOR_ID_TOPSEED 0x0766 1257 1255 #define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
+6
drivers/hid/hid-input.c
··· 398 398 HID_BATTERY_QUIRK_IGNORE }, 399 399 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_LENOVO_YOGA_C630_TOUCHSCREEN), 400 400 HID_BATTERY_QUIRK_IGNORE }, 401 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13T_AW100), 402 + HID_BATTERY_QUIRK_IGNORE }, 403 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V1), 404 + HID_BATTERY_QUIRK_IGNORE }, 405 + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_14T_EA100_V2), 406 + HID_BATTERY_QUIRK_IGNORE }, 401 407 {} 402 408 }; 403 409
+1 -1
drivers/hid/hid-sensor-custom.c
··· 940 940 struct hid_sensor_hub_device *hsdev, 941 941 const struct hid_sensor_custom_match *match) 942 942 { 943 - char real_usage[HID_SENSOR_USAGE_LENGTH]; 943 + char real_usage[HID_SENSOR_USAGE_LENGTH] = { 0 }; 944 944 struct platform_device *custom_pdev; 945 945 const char *dev_name; 946 946 char *c;
+2
drivers/hid/hid-topre.c
··· 36 36 static const struct hid_device_id topre_id_table[] = { 37 37 { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE, 38 38 USB_DEVICE_ID_TOPRE_REALFORCE_R2_108) }, 39 + { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE, 40 + USB_DEVICE_ID_TOPRE_REALFORCE_R2_87) }, 39 41 { } 40 42 }; 41 43 MODULE_DEVICE_TABLE(hid, topre_id_table);
+2 -2
drivers/hid/intel-ish-hid/ishtp/bus.c
··· 241 241 struct ishtp_cl_device *device = to_ishtp_cl_device(dev); 242 242 struct ishtp_cl_driver *driver = to_ishtp_cl_driver(drv); 243 243 244 - return guid_equal(&driver->id[0].guid, 245 - &device->fw_client->props.protocol_name); 244 + return(device->fw_client ? guid_equal(&driver->id[0].guid, 245 + &device->fw_client->props.protocol_name) : 0); 246 246 } 247 247 248 248 /**
+10 -14
drivers/hwtracing/coresight/coresight-etm4x-core.c
··· 472 472 if (etm4x_sspcicrn_present(drvdata, i)) 473 473 etm4x_relaxed_write32(csa, config->ss_pe_cmp[i], TRCSSPCICRn(i)); 474 474 } 475 - for (i = 0; i < drvdata->nr_addr_cmp; i++) { 475 + for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) { 476 476 etm4x_relaxed_write64(csa, config->addr_val[i], TRCACVRn(i)); 477 477 etm4x_relaxed_write64(csa, config->addr_acc[i], TRCACATRn(i)); 478 478 } ··· 1070 1070 struct csdev_access *csa) 1071 1071 { 1072 1072 u32 devarch = readl_relaxed(drvdata->base + TRCDEVARCH); 1073 - u32 idr1 = readl_relaxed(drvdata->base + TRCIDR1); 1074 1073 1075 1074 /* 1076 1075 * All ETMs must implement TRCDEVARCH to indicate that 1077 - * the component is an ETMv4. To support any broken 1078 - * implementations we fall back to TRCIDR1 check, which 1079 - * is not really reliable. 1076 + * the component is an ETMv4. Even though TRCIDR1 also 1077 + * contains the information, it is part of the "Trace" 1078 + * register and must be accessed with the OSLK cleared, 1079 + * with MMIO. But we cannot touch the OSLK until we are 1080 + * sure this is an ETM. So rely only on the TRCDEVARCH. 1080 1081 */ 1081 - if ((devarch & ETM_DEVARCH_ID_MASK) == ETM_DEVARCH_ETMv4x_ARCH) { 1082 - drvdata->arch = etm_devarch_to_arch(devarch); 1083 - } else { 1084 - pr_warn("CPU%d: ETM4x incompatible TRCDEVARCH: %x, falling back to TRCIDR1\n", 1085 - smp_processor_id(), devarch); 1086 - 1087 - if (ETM_TRCIDR1_ARCH_MAJOR(idr1) != ETM_TRCIDR1_ARCH_ETMv4) 1088 - return false; 1089 - drvdata->arch = etm_trcidr_to_arch(idr1); 1082 + if ((devarch & ETM_DEVARCH_ID_MASK) != ETM_DEVARCH_ETMv4x_ARCH) { 1083 + pr_warn_once("TRCDEVARCH doesn't match ETMv4 architecture\n"); 1084 + return false; 1090 1085 } 1091 1086 1087 + drvdata->arch = etm_devarch_to_arch(devarch); 1092 1088 *csa = CSDEV_ACCESS_IOMEM(drvdata->base); 1093 1089 return true; 1094 1090 }
+6 -14
drivers/hwtracing/coresight/coresight-etm4x.h
··· 753 753 * TRCDEVARCH - CoreSight architected register 754 754 * - Bits[15:12] - Major version 755 755 * - Bits[19:16] - Minor version 756 - * TRCIDR1 - ETM architected register 757 - * - Bits[11:8] - Major version 758 - * - Bits[7:4] - Minor version 759 - * We must rely on TRCDEVARCH for the version information, 760 - * however we don't want to break the support for potential 761 - * old implementations which might not implement it. Thus 762 - * we fall back to TRCIDR1 if TRCDEVARCH is not implemented 763 - * for memory mapped components. 756 + * 757 + * We must rely only on TRCDEVARCH for the version information. Even though, 758 + * TRCIDR1 also provides the architecture version, it is a "Trace" register 759 + * and as such must be accessed only with Trace power domain ON. This may 760 + * not be available at probe time. 761 + * 764 762 * Now to make certain decisions easier based on the version 765 763 * we use an internal representation of the version in the 766 764 * driver, as follows : ··· 782 784 { 783 785 return ETM_ARCH_VERSION(ETM_DEVARCH_ARCHID_ARCH_VER(devarch), 784 786 ETM_DEVARCH_REVISION(devarch)); 785 - } 786 - 787 - static inline u8 etm_trcidr_to_arch(u32 trcidr1) 788 - { 789 - return ETM_ARCH_VERSION(ETM_TRCIDR1_ARCH_MAJOR(trcidr1), 790 - ETM_TRCIDR1_ARCH_MINOR(trcidr1)); 791 787 } 792 788 793 789 enum etm_impdef_type {
+5
drivers/i2c/i2c-core-of.c
··· 178 178 return NOTIFY_OK; 179 179 } 180 180 181 + /* 182 + * Clear the flag before adding the device so that fw_devlink 183 + * doesn't skip adding consumers to this device. 184 + */ 185 + rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; 181 186 client = of_i2c_register_device(adap, rd->dn); 182 187 if (IS_ERR(client)) { 183 188 dev_err(&adap->dev, "failed to create client for '%pOF'\n",
+1 -1
drivers/iio/accel/kionix-kx022a.c
··· 864 864 if (ret < 0) 865 865 goto err_read; 866 866 867 - iio_push_to_buffers_with_timestamp(idev, data->buffer, pf->timestamp); 867 + iio_push_to_buffers_with_timestamp(idev, data->buffer, data->timestamp); 868 868 err_read: 869 869 iio_trigger_notify_done(idev->trig); 870 870
+1 -1
drivers/iio/adc/ad7791.c
··· 253 253 .has_registers = true, 254 254 .addr_shift = 4, 255 255 .read_mask = BIT(3), 256 - .irq_flags = IRQF_TRIGGER_LOW, 256 + .irq_flags = IRQF_TRIGGER_FALLING, 257 257 }; 258 258 259 259 static int ad7791_read_raw(struct iio_dev *indio_dev,
+2 -4
drivers/iio/adc/ltc2497.c
··· 28 28 struct ltc2497core_driverdata common_ddata; 29 29 struct i2c_client *client; 30 30 u32 recv_size; 31 - u32 sub_lsb; 32 31 /* 33 32 * DMA (thus cache coherency maintenance) may require the 34 33 * transfer buffers to live in their own cache lines. ··· 64 65 * equivalent to a sign extension. 65 66 */ 66 67 if (st->recv_size == 3) { 67 - *val = (get_unaligned_be24(st->data.d8) >> st->sub_lsb) 68 + *val = (get_unaligned_be24(st->data.d8) >> 6) 68 69 - BIT(ddata->chip_info->resolution + 1); 69 70 } else { 70 - *val = (be32_to_cpu(st->data.d32) >> st->sub_lsb) 71 + *val = (be32_to_cpu(st->data.d32) >> 6) 71 72 - BIT(ddata->chip_info->resolution + 1); 72 73 } 73 74 ··· 121 122 st->common_ddata.chip_info = chip_info; 122 123 123 124 resolution = chip_info->resolution; 124 - st->sub_lsb = 31 - (resolution + 1); 125 125 st->recv_size = BITS_TO_BYTES(resolution) + 1; 126 126 127 127 return ltc2497core_probe(dev, indio_dev);
+15 -7
drivers/iio/adc/max11410.c
··· 414 414 if (!ret) 415 415 return -ETIMEDOUT; 416 416 } else { 417 + int ret2; 418 + 417 419 /* Wait for status register Conversion Ready flag */ 418 - ret = read_poll_timeout(max11410_read_reg, ret, 419 - ret || (val & MAX11410_STATUS_CONV_READY_BIT), 420 + ret = read_poll_timeout(max11410_read_reg, ret2, 421 + ret2 || (val & MAX11410_STATUS_CONV_READY_BIT), 420 422 5000, MAX11410_CONVERSION_TIMEOUT_MS * 1000, 421 423 true, st, MAX11410_REG_STATUS, &val); 422 424 if (ret) 423 425 return ret; 426 + if (ret2) 427 + return ret2; 424 428 } 425 429 426 430 /* Read ADC Data */ ··· 855 851 856 852 static int max11410_calibrate(struct max11410_state *st, u32 cal_type) 857 853 { 858 - int ret, val; 854 + int ret, ret2, val; 859 855 860 856 ret = max11410_write_reg(st, MAX11410_REG_CAL_START, cal_type); 861 857 if (ret) 862 858 return ret; 863 859 864 860 /* Wait for status register Calibration Ready flag */ 865 - return read_poll_timeout(max11410_read_reg, ret, 866 - ret || (val & MAX11410_STATUS_CAL_READY_BIT), 867 - 50000, MAX11410_CALIB_TIMEOUT_MS * 1000, true, 868 - st, MAX11410_REG_STATUS, &val); 861 + ret = read_poll_timeout(max11410_read_reg, ret2, 862 + ret2 || (val & MAX11410_STATUS_CAL_READY_BIT), 863 + 50000, MAX11410_CALIB_TIMEOUT_MS * 1000, true, 864 + st, MAX11410_REG_STATUS, &val); 865 + if (ret) 866 + return ret; 867 + 868 + return ret2; 869 869 } 870 870 871 871 static int max11410_self_calibrate(struct max11410_state *st)
+1 -1
drivers/iio/adc/palmas_gpadc.c
··· 639 639 640 640 static int palmas_gpadc_remove(struct platform_device *pdev) 641 641 { 642 - struct iio_dev *indio_dev = dev_to_iio_dev(&pdev->dev); 642 + struct iio_dev *indio_dev = dev_get_drvdata(&pdev->dev); 643 643 struct palmas_gpadc *adc = iio_priv(indio_dev); 644 644 645 645 if (adc->wakeup1_enable || adc->wakeup2_enable)
+9 -1
drivers/iio/adc/qcom-spmi-adc5.c
··· 628 628 struct fwnode_handle *fwnode, 629 629 const struct adc5_data *data) 630 630 { 631 - const char *name = fwnode_get_name(fwnode), *channel_name; 631 + const char *channel_name; 632 + char *name; 632 633 u32 chan, value, varr[2]; 633 634 u32 sid = 0; 634 635 int ret; 635 636 struct device *dev = adc->dev; 637 + 638 + name = devm_kasprintf(dev, GFP_KERNEL, "%pfwP", fwnode); 639 + if (!name) 640 + return -ENOMEM; 641 + 642 + /* Cut the address part */ 643 + name[strchrnul(name, '@') - name] = '\0'; 636 644 637 645 ret = fwnode_property_read_u32(fwnode, "reg", &chan); 638 646 if (ret) {
+1
drivers/iio/adc/ti-ads7950.c
··· 634 634 st->chip.label = dev_name(&st->spi->dev); 635 635 st->chip.parent = &st->spi->dev; 636 636 st->chip.owner = THIS_MODULE; 637 + st->chip.can_sleep = true; 637 638 st->chip.base = -1; 638 639 st->chip.ngpio = TI_ADS7950_NUM_GPIOS; 639 640 st->chip.get_direction = ti_ads7950_get_direction;
+2 -2
drivers/iio/dac/cio-dac.c
··· 66 66 if (mask != IIO_CHAN_INFO_RAW) 67 67 return -EINVAL; 68 68 69 - /* DAC can only accept up to a 16-bit value */ 70 - if ((unsigned int)val > 65535) 69 + /* DAC can only accept up to a 12-bit value */ 70 + if ((unsigned int)val > 4095) 71 71 return -EINVAL; 72 72 73 73 priv->chan_out_states[chan->channel] = val;
+1
drivers/iio/imu/Kconfig
··· 47 47 depends on SPI 48 48 select IIO_ADIS_LIB 49 49 select IIO_ADIS_LIB_BUFFER if IIO_BUFFER 50 + select CRC32 50 51 help 51 52 Say yes here to build support for Analog Devices ADIS16375, ADIS16480, 52 53 ADIS16485, ADIS16488 inertial sensors.
+12 -9
drivers/iio/industrialio-buffer.c
··· 203 203 break; 204 204 } 205 205 206 + if (filp->f_flags & O_NONBLOCK) { 207 + if (!written) 208 + ret = -EAGAIN; 209 + break; 210 + } 211 + 206 212 wait_woken(&wait, TASK_INTERRUPTIBLE, 207 213 MAX_SCHEDULE_TIMEOUT); 208 214 continue; 209 215 } 210 216 211 217 ret = rb->access->write(rb, n - written, buf + written); 212 - if (ret == 0 && (filp->f_flags & O_NONBLOCK)) 213 - ret = -EAGAIN; 218 + if (ret < 0) 219 + break; 214 220 215 - if (ret > 0) { 216 - written += ret; 217 - if (written != n && !(filp->f_flags & O_NONBLOCK)) 218 - continue; 219 - } 220 - } while (ret == 0); 221 + written += ret; 222 + 223 + } while (written != n); 221 224 remove_wait_queue(&rb->pollq, &wait); 222 225 223 - return ret < 0 ? ret : n; 226 + return ret < 0 ? ret : written; 224 227 } 225 228 226 229 /**
+12
drivers/iio/light/cm32181.c
··· 429 429 .attrs = &cm32181_attribute_group, 430 430 }; 431 431 432 + static void cm32181_unregister_dummy_client(void *data) 433 + { 434 + struct i2c_client *client = data; 435 + 436 + /* Unregister the dummy client */ 437 + i2c_unregister_device(client); 438 + } 439 + 432 440 static int cm32181_probe(struct i2c_client *client) 433 441 { 434 442 struct device *dev = &client->dev; ··· 468 460 client = i2c_acpi_new_device(dev, 1, &board_info); 469 461 if (IS_ERR(client)) 470 462 return PTR_ERR(client); 463 + 464 + ret = devm_add_action_or_reset(dev, cm32181_unregister_dummy_client, client); 465 + if (ret) 466 + return ret; 471 467 } 472 468 473 469 cm32181 = iio_priv(indio_dev);
+2 -1
drivers/iio/light/vcnl4000.c
··· 208 208 209 209 data->rev = ret & 0xf; 210 210 data->al_scale = 250000; 211 - mutex_init(&data->vcnl4000_lock); 212 211 213 212 return data->chip_spec->set_power_state(data, true); 214 213 }; ··· 1365 1366 data->client = client; 1366 1367 data->id = id->driver_data; 1367 1368 data->chip_spec = &vcnl4000_chip_spec_cfg[data->id]; 1369 + 1370 + mutex_init(&data->vcnl4000_lock); 1368 1371 1369 1372 ret = data->chip_spec->init(data); 1370 1373 if (ret < 0)
+8 -4
drivers/mtd/mtdblock.c
··· 153 153 mtdblk->cache_state = STATE_EMPTY; 154 154 ret = mtd_read(mtd, sect_start, sect_size, 155 155 &retlen, mtdblk->cache_data); 156 - if (ret) 156 + if (ret && !mtd_is_bitflip(ret)) 157 157 return ret; 158 158 if (retlen != sect_size) 159 159 return -EIO; ··· 188 188 pr_debug("mtdblock: read on \"%s\" at 0x%lx, size 0x%x\n", 189 189 mtd->name, pos, len); 190 190 191 - if (!sect_size) 192 - return mtd_read(mtd, pos, len, &retlen, buf); 191 + if (!sect_size) { 192 + ret = mtd_read(mtd, pos, len, &retlen, buf); 193 + if (ret && !mtd_is_bitflip(ret)) 194 + return ret; 195 + return 0; 196 + } 193 197 194 198 while (len > 0) { 195 199 unsigned long sect_start = (pos/sect_size)*sect_size; ··· 213 209 memcpy (buf, mtdblk->cache_data + offset, size); 214 210 } else { 215 211 ret = mtd_read(mtd, pos, size, &retlen, buf); 216 - if (ret) 212 + if (ret && !mtd_is_bitflip(ret)) 217 213 return ret; 218 214 if (retlen != size) 219 215 return -EIO;
+3 -3
drivers/mtd/nand/raw/meson_nand.c
··· 280 280 281 281 if (raw) { 282 282 len = mtd->writesize + mtd->oobsize; 283 - cmd = (len & GENMASK(5, 0)) | scrambler | DMA_DIR(dir); 283 + cmd = (len & GENMASK(13, 0)) | scrambler | DMA_DIR(dir); 284 284 writel(cmd, nfc->reg_base + NFC_REG_CMD); 285 285 return; 286 286 } ··· 544 544 if (ret) 545 545 goto out; 546 546 547 - cmd = NFC_CMD_N2M | (len & GENMASK(5, 0)); 547 + cmd = NFC_CMD_N2M | (len & GENMASK(13, 0)); 548 548 writel(cmd, nfc->reg_base + NFC_REG_CMD); 549 549 550 550 meson_nfc_drain_cmd(nfc); ··· 568 568 if (ret) 569 569 return ret; 570 570 571 - cmd = NFC_CMD_M2N | (len & GENMASK(5, 0)); 571 + cmd = NFC_CMD_M2N | (len & GENMASK(13, 0)); 572 572 writel(cmd, nfc->reg_base + NFC_REG_CMD); 573 573 574 574 meson_nfc_drain_cmd(nfc);
+3
drivers/mtd/nand/raw/stm32_fmc2_nand.c
··· 1531 1531 if (IS_ERR(sdrt)) 1532 1532 return PTR_ERR(sdrt); 1533 1533 1534 + if (conf->timings.mode > 3) 1535 + return -EOPNOTSUPP; 1536 + 1534 1537 if (chipnr == NAND_DATA_IFACE_CHECK_ONLY) 1535 1538 return 0; 1536 1539
+3 -2
drivers/net/bonding/bond_main.c
··· 3269 3269 3270 3270 combined = skb_header_pointer(skb, 0, sizeof(_combined), &_combined); 3271 3271 if (!combined || combined->ip6.nexthdr != NEXTHDR_ICMP || 3272 - combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT) 3272 + (combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_SOLICITATION && 3273 + combined->icmp6.icmp6_type != NDISC_NEIGHBOUR_ADVERTISEMENT)) 3273 3274 goto out; 3274 3275 3275 3276 saddr = &combined->ip6.saddr; ··· 3292 3291 else if (curr_active_slave && 3293 3292 time_after(slave_last_rx(bond, curr_active_slave), 3294 3293 curr_active_slave->last_link_up)) 3295 - bond_validate_na(bond, slave, saddr, daddr); 3294 + bond_validate_na(bond, slave, daddr, saddr); 3296 3295 else if (curr_arp_slave && 3297 3296 bond_time_in_interval(bond, slave_last_tx(curr_arp_slave), 1)) 3298 3297 bond_validate_na(bond, slave, saddr, daddr);
+4
drivers/net/ethernet/cadence/macb_main.c
··· 1063 1063 } 1064 1064 #endif 1065 1065 addr |= MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, desc->addr)); 1066 + #ifdef CONFIG_MACB_USE_HWSTAMP 1067 + if (bp->hw_dma_cap & HW_DMA_CAP_PTP) 1068 + addr &= ~GEM_BIT(DMA_RXVALID); 1069 + #endif 1066 1070 return addr; 1067 1071 } 1068 1072
+16
drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
··· 989 989 return 0; 990 990 } 991 991 992 + /* FIXME: Workaround for the link partner's verification failing if ENETC 993 + * priorly received too much express traffic. The documentation doesn't 994 + * suggest this is needed. 995 + */ 996 + static void enetc_restart_emac_rx(struct enetc_si *si) 997 + { 998 + u32 val = enetc_port_rd(&si->hw, ENETC_PM0_CMD_CFG); 999 + 1000 + enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val & ~ENETC_PM0_RX_EN); 1001 + 1002 + if (val & ENETC_PM0_RX_EN) 1003 + enetc_port_wr(&si->hw, ENETC_PM0_CMD_CFG, val); 1004 + } 1005 + 992 1006 static int enetc_set_mm(struct net_device *ndev, struct ethtool_mm_cfg *cfg, 993 1007 struct netlink_ext_ack *extack) 994 1008 { ··· 1053 1039 val |= ENETC_MMCSR_RAFS(add_frag_size); 1054 1040 1055 1041 enetc_port_wr(hw, ENETC_MMCSR, val); 1042 + 1043 + enetc_restart_emac_rx(priv->si); 1056 1044 1057 1045 mutex_unlock(&priv->mm_lock); 1058 1046
+12 -8
drivers/net/ethernet/intel/iavf/iavf.h
··· 58 58 struct iavf_vsi { 59 59 struct iavf_adapter *back; 60 60 struct net_device *netdev; 61 - unsigned long active_cvlans[BITS_TO_LONGS(VLAN_N_VID)]; 62 - unsigned long active_svlans[BITS_TO_LONGS(VLAN_N_VID)]; 63 61 u16 seid; 64 62 u16 id; 65 63 DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__); ··· 155 157 u16 tpid; 156 158 }; 157 159 160 + enum iavf_vlan_state_t { 161 + IAVF_VLAN_INVALID, 162 + IAVF_VLAN_ADD, /* filter needs to be added */ 163 + IAVF_VLAN_IS_NEW, /* filter is new, wait for PF answer */ 164 + IAVF_VLAN_ACTIVE, /* filter is accepted by PF */ 165 + IAVF_VLAN_DISABLE, /* filter needs to be deleted by PF, then marked INACTIVE */ 166 + IAVF_VLAN_INACTIVE, /* filter is inactive, we are in IFF_DOWN */ 167 + IAVF_VLAN_REMOVE, /* filter needs to be removed from list */ 168 + }; 169 + 158 170 struct iavf_vlan_filter { 159 171 struct list_head list; 160 172 struct iavf_vlan vlan; 161 - struct { 162 - u8 is_new_vlan:1; /* filter is new, wait for PF answer */ 163 - u8 remove:1; /* filter needs to be removed */ 164 - u8 add:1; /* filter needs to be added */ 165 - u8 padding:5; 166 - }; 173 + enum iavf_vlan_state_t state; 167 174 }; 168 175 169 176 #define IAVF_MAX_TRAFFIC_CLASS 4 ··· 260 257 wait_queue_head_t vc_waitqueue; 261 258 struct iavf_q_vector *q_vectors; 262 259 struct list_head vlan_filter_list; 260 + int num_vlan_filters; 263 261 struct list_head mac_filter_list; 264 262 struct mutex crit_lock; 265 263 struct mutex client_lock;
+18 -26
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 791 791 f->vlan = vlan; 792 792 793 793 list_add_tail(&f->list, &adapter->vlan_filter_list); 794 - f->add = true; 794 + f->state = IAVF_VLAN_ADD; 795 + adapter->num_vlan_filters++; 795 796 adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER; 796 797 } 797 798 ··· 814 813 815 814 f = iavf_find_vlan(adapter, vlan); 816 815 if (f) { 817 - f->remove = true; 816 + f->state = IAVF_VLAN_REMOVE; 818 817 adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER; 819 818 } 820 819 ··· 829 828 **/ 830 829 static void iavf_restore_filters(struct iavf_adapter *adapter) 831 830 { 832 - u16 vid; 831 + struct iavf_vlan_filter *f; 833 832 834 833 /* re-add all VLAN filters */ 835 - for_each_set_bit(vid, adapter->vsi.active_cvlans, VLAN_N_VID) 836 - iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021Q)); 834 + spin_lock_bh(&adapter->mac_vlan_list_lock); 837 835 838 - for_each_set_bit(vid, adapter->vsi.active_svlans, VLAN_N_VID) 839 - iavf_add_vlan(adapter, IAVF_VLAN(vid, ETH_P_8021AD)); 836 + list_for_each_entry(f, &adapter->vlan_filter_list, list) { 837 + if (f->state == IAVF_VLAN_INACTIVE) 838 + f->state = IAVF_VLAN_ADD; 839 + } 840 + 841 + spin_unlock_bh(&adapter->mac_vlan_list_lock); 842 + adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER; 840 843 } 841 844 842 845 /** ··· 849 844 */ 850 845 u16 iavf_get_num_vlans_added(struct iavf_adapter *adapter) 851 846 { 852 - return bitmap_weight(adapter->vsi.active_cvlans, VLAN_N_VID) + 853 - bitmap_weight(adapter->vsi.active_svlans, VLAN_N_VID); 847 + return adapter->num_vlan_filters; 854 848 } 855 849 856 850 /** ··· 932 928 return 0; 933 929 934 930 iavf_del_vlan(adapter, IAVF_VLAN(vid, be16_to_cpu(proto))); 935 - if (proto == cpu_to_be16(ETH_P_8021Q)) 936 - clear_bit(vid, adapter->vsi.active_cvlans); 937 - else 938 - clear_bit(vid, adapter->vsi.active_svlans); 939 - 940 931 return 0; 941 932 } 942 933 ··· 1292 1293 } 1293 1294 } 1294 1295 1295 - /* remove all VLAN filters */ 1296 + /* disable all VLAN filters */ 1296 1297 
list_for_each_entry_safe(vlf, vlftmp, &adapter->vlan_filter_list, 1297 - list) { 1298 - if (vlf->add) { 1299 - list_del(&vlf->list); 1300 - kfree(vlf); 1301 - } else { 1302 - vlf->remove = true; 1303 - } 1304 - } 1298 + list) 1299 + vlf->state = IAVF_VLAN_DISABLE; 1300 + 1305 1301 spin_unlock_bh(&adapter->mac_vlan_list_lock); 1306 1302 } 1307 1303 ··· 2908 2914 list_del(&fv->list); 2909 2915 kfree(fv); 2910 2916 } 2917 + adapter->num_vlan_filters = 0; 2911 2918 2912 2919 spin_unlock_bh(&adapter->mac_vlan_list_lock); 2913 2920 ··· 3125 3130 adapter->aq_required |= IAVF_FLAG_AQ_ADD_MAC_FILTER; 3126 3131 adapter->aq_required |= IAVF_FLAG_AQ_ADD_CLOUD_FILTER; 3127 3132 iavf_misc_irq_enable(adapter); 3128 - 3129 - bitmap_clear(adapter->vsi.active_cvlans, 0, VLAN_N_VID); 3130 - bitmap_clear(adapter->vsi.active_svlans, 0, VLAN_N_VID); 3131 3133 3132 3134 mod_delayed_work(adapter->wq, &adapter->watchdog_task, 2); 3133 3135
+36 -32
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
··· 642 642 643 643 spin_lock_bh(&adapter->mac_vlan_list_lock); 644 644 list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { 645 - if (f->is_new_vlan) { 646 - if (f->vlan.tpid == ETH_P_8021Q) 647 - clear_bit(f->vlan.vid, 648 - adapter->vsi.active_cvlans); 649 - else 650 - clear_bit(f->vlan.vid, 651 - adapter->vsi.active_svlans); 652 - 645 + if (f->state == IAVF_VLAN_IS_NEW) { 653 646 list_del(&f->list); 654 647 kfree(f); 648 + adapter->num_vlan_filters--; 655 649 } 656 650 } 657 651 spin_unlock_bh(&adapter->mac_vlan_list_lock); ··· 673 679 spin_lock_bh(&adapter->mac_vlan_list_lock); 674 680 675 681 list_for_each_entry(f, &adapter->vlan_filter_list, list) { 676 - if (f->add) 682 + if (f->state == IAVF_VLAN_ADD) 677 683 count++; 678 684 } 679 685 if (!count || !VLAN_FILTERING_ALLOWED(adapter)) { ··· 704 710 vvfl->vsi_id = adapter->vsi_res->vsi_id; 705 711 vvfl->num_elements = count; 706 712 list_for_each_entry(f, &adapter->vlan_filter_list, list) { 707 - if (f->add) { 713 + if (f->state == IAVF_VLAN_ADD) { 708 714 vvfl->vlan_id[i] = f->vlan.vid; 709 715 i++; 710 - f->add = false; 711 - f->is_new_vlan = true; 716 + f->state = IAVF_VLAN_IS_NEW; 712 717 if (i == count) 713 718 break; 714 719 } ··· 753 760 vvfl_v2->vport_id = adapter->vsi_res->vsi_id; 754 761 vvfl_v2->num_elements = count; 755 762 list_for_each_entry(f, &adapter->vlan_filter_list, list) { 756 - if (f->add) { 763 + if (f->state == IAVF_VLAN_ADD) { 757 764 struct virtchnl_vlan_supported_caps *filtering_support = 758 765 &adapter->vlan_v2_caps.filtering.filtering_support; 759 766 struct virtchnl_vlan *vlan; ··· 771 778 vlan->tpid = f->vlan.tpid; 772 779 773 780 i++; 774 - f->add = false; 775 - f->is_new_vlan = true; 781 + f->state = IAVF_VLAN_IS_NEW; 776 782 } 777 783 } 778 784 ··· 814 822 * filters marked for removal to enable bailing out before 815 823 * sending a virtchnl message 816 824 */ 817 - if (f->remove && !VLAN_FILTERING_ALLOWED(adapter)) { 825 + if (f->state == IAVF_VLAN_REMOVE 
&& 826 + !VLAN_FILTERING_ALLOWED(adapter)) { 818 827 list_del(&f->list); 819 828 kfree(f); 820 - } else if (f->remove) { 829 + adapter->num_vlan_filters--; 830 + } else if (f->state == IAVF_VLAN_DISABLE && 831 + !VLAN_FILTERING_ALLOWED(adapter)) { 832 + f->state = IAVF_VLAN_INACTIVE; 833 + } else if (f->state == IAVF_VLAN_REMOVE || 834 + f->state == IAVF_VLAN_DISABLE) { 821 835 count++; 822 836 } 823 837 } ··· 855 857 vvfl->vsi_id = adapter->vsi_res->vsi_id; 856 858 vvfl->num_elements = count; 857 859 list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { 858 - if (f->remove) { 860 + if (f->state == IAVF_VLAN_DISABLE) { 859 861 vvfl->vlan_id[i] = f->vlan.vid; 862 + f->state = IAVF_VLAN_INACTIVE; 860 863 i++; 864 + if (i == count) 865 + break; 866 + } else if (f->state == IAVF_VLAN_REMOVE) { 867 + vvfl->vlan_id[i] = f->vlan.vid; 861 868 list_del(&f->list); 862 869 kfree(f); 870 + adapter->num_vlan_filters--; 871 + i++; 863 872 if (i == count) 864 873 break; 865 874 } ··· 906 901 vvfl_v2->vport_id = adapter->vsi_res->vsi_id; 907 902 vvfl_v2->num_elements = count; 908 903 list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) { 909 - if (f->remove) { 904 + if (f->state == IAVF_VLAN_DISABLE || 905 + f->state == IAVF_VLAN_REMOVE) { 910 906 struct virtchnl_vlan_supported_caps *filtering_support = 911 907 &adapter->vlan_v2_caps.filtering.filtering_support; 912 908 struct virtchnl_vlan *vlan; ··· 921 915 vlan->tci = f->vlan.vid; 922 916 vlan->tpid = f->vlan.tpid; 923 917 924 - list_del(&f->list); 925 - kfree(f); 918 + if (f->state == IAVF_VLAN_DISABLE) { 919 + f->state = IAVF_VLAN_INACTIVE; 920 + } else { 921 + list_del(&f->list); 922 + kfree(f); 923 + adapter->num_vlan_filters--; 924 + } 926 925 i++; 927 926 if (i == count) 928 927 break; ··· 2203 2192 list_for_each_entry(vlf, 2204 2193 &adapter->vlan_filter_list, 2205 2194 list) 2206 - vlf->add = true; 2195 + vlf->state = IAVF_VLAN_ADD; 2207 2196 2208 2197 adapter->aq_required |= 2209 2198 
IAVF_FLAG_AQ_ADD_VLAN_FILTER; ··· 2271 2260 list_for_each_entry(vlf, 2272 2261 &adapter->vlan_filter_list, 2273 2262 list) 2274 - vlf->add = true; 2263 + vlf->state = IAVF_VLAN_ADD; 2275 2264 2276 2265 aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER; 2277 2266 } ··· 2455 2444 2456 2445 spin_lock_bh(&adapter->mac_vlan_list_lock); 2457 2446 list_for_each_entry(f, &adapter->vlan_filter_list, list) { 2458 - if (f->is_new_vlan) { 2459 - f->is_new_vlan = false; 2460 - if (f->vlan.tpid == ETH_P_8021Q) 2461 - set_bit(f->vlan.vid, 2462 - adapter->vsi.active_cvlans); 2463 - else 2464 - set_bit(f->vlan.vid, 2465 - adapter->vsi.active_svlans); 2466 - } 2447 + if (f->state == IAVF_VLAN_IS_NEW) 2448 + f->state = IAVF_VLAN_ACTIVE; 2467 2449 } 2468 2450 spin_unlock_bh(&adapter->mac_vlan_list_lock); 2469 2451 }
+20 -2
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 681 681 return 0; 682 682 } 683 683 684 - int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash) 684 + int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash, 685 + enum xdp_rss_hash_type *rss_type) 685 686 { 686 687 struct mlx4_en_xdp_buff *_ctx = (void *)ctx; 688 + struct mlx4_cqe *cqe = _ctx->cqe; 689 + enum xdp_rss_hash_type xht = 0; 690 + __be16 status; 687 691 688 692 if (unlikely(!(_ctx->dev->features & NETIF_F_RXHASH))) 689 693 return -ENODATA; 690 694 691 - *hash = be32_to_cpu(_ctx->cqe->immed_rss_invalid); 695 + *hash = be32_to_cpu(cqe->immed_rss_invalid); 696 + status = cqe->status; 697 + if (status & cpu_to_be16(MLX4_CQE_STATUS_TCP)) 698 + xht = XDP_RSS_L4_TCP; 699 + if (status & cpu_to_be16(MLX4_CQE_STATUS_UDP)) 700 + xht = XDP_RSS_L4_UDP; 701 + if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV4 | MLX4_CQE_STATUS_IPV4F)) 702 + xht |= XDP_RSS_L3_IPV4; 703 + if (status & cpu_to_be16(MLX4_CQE_STATUS_IPV6)) { 704 + xht |= XDP_RSS_L3_IPV6; 705 + if (cqe->ipv6_ext_mask) 706 + xht |= XDP_RSS_L3_DYNHDR; 707 + } 708 + *rss_type = xht; 709 + 692 710 return 0; 693 711 } 694 712
+2 -1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 798 798 799 799 struct xdp_md; 800 800 int mlx4_en_xdp_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp); 801 - int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash); 801 + int mlx4_en_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash, 802 + enum xdp_rss_hash_type *rss_type); 802 803 803 804 /* 804 805 * Functions for time stamping
+61 -2
drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
··· 34 34 #include <net/xdp_sock_drv.h> 35 35 #include "en/xdp.h" 36 36 #include "en/params.h" 37 + #include <linux/bitfield.h> 37 38 38 39 int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk) 39 40 { ··· 170 169 return 0; 171 170 } 172 171 173 - static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash) 172 + /* Mapping HW RSS Type bits CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4 into 4-bits*/ 173 + #define RSS_TYPE_MAX_TABLE 16 /* 4-bits max 16 entries */ 174 + #define RSS_L4 GENMASK(1, 0) 175 + #define RSS_L3 GENMASK(3, 2) /* Same as CQE_RSS_HTYPE_IP */ 176 + 177 + /* Valid combinations of CQE_RSS_HTYPE_IP + CQE_RSS_HTYPE_L4 sorted numerical */ 178 + enum mlx5_rss_hash_type { 179 + RSS_TYPE_NO_HASH = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IP_NONE) | 180 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)), 181 + RSS_TYPE_L3_IPV4 = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) | 182 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)), 183 + RSS_TYPE_L4_IPV4_TCP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) | 184 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)), 185 + RSS_TYPE_L4_IPV4_UDP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) | 186 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)), 187 + RSS_TYPE_L4_IPV4_IPSEC = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV4) | 188 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)), 189 + RSS_TYPE_L3_IPV6 = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) | 190 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_NONE)), 191 + RSS_TYPE_L4_IPV6_TCP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) | 192 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_TCP)), 193 + RSS_TYPE_L4_IPV6_UDP = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) | 194 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_UDP)), 195 + RSS_TYPE_L4_IPV6_IPSEC = (FIELD_PREP_CONST(RSS_L3, CQE_RSS_IPV6) | 196 + FIELD_PREP_CONST(RSS_L4, CQE_RSS_L4_IPSEC)), 197 + }; 198 + 199 + /* Invalid combinations will simply return zero, allows no boundary checks */ 200 + static const enum xdp_rss_hash_type mlx5_xdp_rss_type[RSS_TYPE_MAX_TABLE] = { 201 + 
[RSS_TYPE_NO_HASH] = XDP_RSS_TYPE_NONE, 202 + [1] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 203 + [2] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 204 + [3] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 205 + [RSS_TYPE_L3_IPV4] = XDP_RSS_TYPE_L3_IPV4, 206 + [RSS_TYPE_L4_IPV4_TCP] = XDP_RSS_TYPE_L4_IPV4_TCP, 207 + [RSS_TYPE_L4_IPV4_UDP] = XDP_RSS_TYPE_L4_IPV4_UDP, 208 + [RSS_TYPE_L4_IPV4_IPSEC] = XDP_RSS_TYPE_L4_IPV4_IPSEC, 209 + [RSS_TYPE_L3_IPV6] = XDP_RSS_TYPE_L3_IPV6, 210 + [RSS_TYPE_L4_IPV6_TCP] = XDP_RSS_TYPE_L4_IPV6_TCP, 211 + [RSS_TYPE_L4_IPV6_UDP] = XDP_RSS_TYPE_L4_IPV6_UDP, 212 + [RSS_TYPE_L4_IPV6_IPSEC] = XDP_RSS_TYPE_L4_IPV6_IPSEC, 213 + [12] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 214 + [13] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 215 + [14] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 216 + [15] = XDP_RSS_TYPE_NONE, /* Implicit zero */ 217 + }; 218 + 219 + static int mlx5e_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash, 220 + enum xdp_rss_hash_type *rss_type) 174 221 { 175 222 const struct mlx5e_xdp_buff *_ctx = (void *)ctx; 223 + const struct mlx5_cqe64 *cqe = _ctx->cqe; 224 + u32 hash_type, l4_type, ip_type, lookup; 176 225 177 226 if (unlikely(!(_ctx->xdp.rxq->dev->features & NETIF_F_RXHASH))) 178 227 return -ENODATA; 179 228 180 - *hash = be32_to_cpu(_ctx->cqe->rss_hash_result); 229 + *hash = be32_to_cpu(cqe->rss_hash_result); 230 + 231 + hash_type = cqe->rss_hash_type; 232 + BUILD_BUG_ON(CQE_RSS_HTYPE_IP != RSS_L3); /* same mask */ 233 + ip_type = hash_type & CQE_RSS_HTYPE_IP; 234 + l4_type = FIELD_GET(CQE_RSS_HTYPE_L4, hash_type); 235 + lookup = ip_type | l4_type; 236 + *rss_type = mlx5_xdp_rss_type[lookup]; 237 + 181 238 return 0; 182 239 } 183 240
+7 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_ctx.c
··· 628 628 int i, err, ring; 629 629 630 630 if (dev->flags & QLCNIC_NEED_FLR) { 631 - pci_reset_function(dev->pdev); 631 + err = pci_reset_function(dev->pdev); 632 + if (err) { 633 + dev_err(&dev->pdev->dev, 634 + "Adapter reset failed (%d). Please reboot\n", 635 + err); 636 + return err; 637 + } 632 638 dev->flags &= ~QLCNIC_NEED_FLR; 633 639 } 634 640
+1 -1
drivers/net/ethernet/sun/niu.c
··· 4522 4522 4523 4523 err = niu_rbr_fill(np, rp, GFP_KERNEL); 4524 4524 if (err) 4525 - return err; 4525 + goto out_err; 4526 4526 } 4527 4527 4528 4528 tx_rings = kcalloc(num_tx_rings, sizeof(struct tx_ring_info),
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 27 27 #include <linux/of.h> 28 28 #include <linux/of_mdio.h> 29 29 #include <linux/of_net.h> 30 - #include <linux/of_device.h> 30 + #include <linux/of_platform.h> 31 31 #include <linux/if_vlan.h> 32 32 #include <linux/kmemleak.h> 33 33 #include <linux/sys_soc.h>
+2 -1
drivers/net/ethernet/ti/cpsw_new.c
··· 7 7 8 8 #include <linux/io.h> 9 9 #include <linux/clk.h> 10 + #include <linux/platform_device.h> 10 11 #include <linux/timer.h> 11 12 #include <linux/module.h> 12 13 #include <linux/irqreturn.h> ··· 24 23 #include <linux/of.h> 25 24 #include <linux/of_mdio.h> 26 25 #include <linux/of_net.h> 27 - #include <linux/of_device.h> 26 + #include <linux/of_platform.h> 28 27 #include <linux/if_vlan.h> 29 28 #include <linux/kmemleak.h> 30 29 #include <linux/sys_soc.h>
+13 -1
drivers/net/phy/nxp-c45-tja11xx.c
··· 191 191 #define MAX_ID_PS 2260U 192 192 #define DEFAULT_ID_PS 2000U 193 193 194 - #define PPM_TO_SUBNS_INC(ppb) div_u64(GENMASK(31, 0) * (ppb) * \ 194 + #define PPM_TO_SUBNS_INC(ppb) div_u64(GENMASK_ULL(31, 0) * (ppb) * \ 195 195 PTP_CLK_PERIOD_100BT1, NSEC_PER_SEC) 196 196 197 197 #define NXP_C45_SKB_CB(skb) ((struct nxp_c45_skb_cb *)(skb)->cb) ··· 1337 1337 return ret; 1338 1338 } 1339 1339 1340 + static void nxp_c45_remove(struct phy_device *phydev) 1341 + { 1342 + struct nxp_c45_phy *priv = phydev->priv; 1343 + 1344 + if (priv->ptp_clock) 1345 + ptp_clock_unregister(priv->ptp_clock); 1346 + 1347 + skb_queue_purge(&priv->tx_queue); 1348 + skb_queue_purge(&priv->rx_queue); 1349 + } 1350 + 1340 1351 static struct phy_driver nxp_c45_driver[] = { 1341 1352 { 1342 1353 PHY_ID_MATCH_MODEL(PHY_ID_TJA_1103), ··· 1370 1359 .set_loopback = genphy_c45_loopback, 1371 1360 .get_sqi = nxp_c45_get_sqi, 1372 1361 .get_sqi_max = nxp_c45_get_sqi_max, 1362 + .remove = nxp_c45_remove, 1373 1363 }, 1374 1364 }; 1375 1365
+14 -5
drivers/net/phy/sfp.c
··· 210 210 #define SFP_PHY_ADDR 22 211 211 #define SFP_PHY_ADDR_ROLLBALL 17 212 212 213 + /* SFP_EEPROM_BLOCK_SIZE is the size of data chunk to read the EEPROM 214 + * at a time. Some SFP modules and also some Linux I2C drivers do not like 215 + * reads longer than 16 bytes. 216 + */ 217 + #define SFP_EEPROM_BLOCK_SIZE 16 218 + 213 219 struct sff_data { 214 220 unsigned int gpios; 215 221 bool (*module_supported)(const struct sfp_eeprom_id *id); ··· 1957 1951 u8 check; 1958 1952 int ret; 1959 1953 1960 - /* Some SFP modules and also some Linux I2C drivers do not like reads 1961 - * longer than 16 bytes, so read the EEPROM in chunks of 16 bytes at 1962 - * a time. 1963 - */ 1964 - sfp->i2c_block_size = 16; 1954 + sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE; 1965 1955 1966 1956 ret = sfp_read(sfp, false, 0, &id.base, sizeof(id.base)); 1967 1957 if (ret < 0) { ··· 2515 2513 unsigned int first, last, len; 2516 2514 int ret; 2517 2515 2516 + if (!(sfp->state & SFP_F_PRESENT)) 2517 + return -ENODEV; 2518 + 2518 2519 if (ee->len == 0) 2519 2520 return -EINVAL; 2520 2521 ··· 2550 2545 const struct ethtool_module_eeprom *page, 2551 2546 struct netlink_ext_ack *extack) 2552 2547 { 2548 + if (!(sfp->state & SFP_F_PRESENT)) 2549 + return -ENODEV; 2550 + 2553 2551 if (page->bank) { 2554 2552 NL_SET_ERR_MSG(extack, "Banks not supported"); 2555 2553 return -EOPNOTSUPP; ··· 2657 2649 return ERR_PTR(-ENOMEM); 2658 2650 2659 2651 sfp->dev = dev; 2652 + sfp->i2c_block_size = SFP_EEPROM_BLOCK_SIZE; 2660 2653 2661 2654 mutex_init(&sfp->sm_mutex); 2662 2655 mutex_init(&sfp->st_mutex);
+1 -1
drivers/net/usb/r8152.c
··· 1943 1943 if (!rx_agg) 1944 1944 return NULL; 1945 1945 1946 - rx_agg->page = alloc_pages(mflags | __GFP_COMP, order); 1946 + rx_agg->page = alloc_pages(mflags | __GFP_COMP | __GFP_NOWARN, order); 1947 1947 if (!rx_agg->page) 1948 1948 goto free_rx; 1949 1949
+7 -3
drivers/net/veth.c
··· 1648 1648 return 0; 1649 1649 } 1650 1650 1651 - static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash) 1651 + static int veth_xdp_rx_hash(const struct xdp_md *ctx, u32 *hash, 1652 + enum xdp_rss_hash_type *rss_type) 1652 1653 { 1653 1654 struct veth_xdp_buff *_ctx = (void *)ctx; 1655 + struct sk_buff *skb = _ctx->skb; 1654 1656 1655 - if (!_ctx->skb) 1657 + if (!skb) 1656 1658 return -ENODATA; 1657 1659 1658 - *hash = skb_get_hash(_ctx->skb); 1660 + *hash = skb_get_hash(skb); 1661 + *rss_type = skb->l4_hash ? XDP_RSS_TYPE_L4_ANY : XDP_RSS_TYPE_NONE; 1662 + 1659 1663 return 0; 1660 1664 } 1661 1665
+2 -1
drivers/net/wwan/iosm/iosm_ipc_pcie.c
··· 295 295 ret = dma_set_mask(ipc_pcie->dev, DMA_BIT_MASK(64)); 296 296 if (ret) { 297 297 dev_err(ipc_pcie->dev, "Could not set PCI DMA mask: %d", ret); 298 - return ret; 298 + goto set_mask_fail; 299 299 } 300 300 301 301 ipc_pcie_config_aspm(ipc_pcie); ··· 323 323 imem_init_fail: 324 324 ipc_pcie_resources_release(ipc_pcie); 325 325 resources_req_fail: 326 + set_mask_fail: 326 327 pci_disable_device(pci); 327 328 pci_enable_fail: 328 329 kfree(ipc_pcie);
+3 -3
drivers/nvme/host/core.c
··· 1674 1674 struct request_queue *queue = disk->queue; 1675 1675 u32 size = queue_logical_block_size(queue); 1676 1676 1677 + if (ctrl->dmrsl && ctrl->dmrsl <= nvme_sect_to_lba(ns, UINT_MAX)) 1678 + ctrl->max_discard_sectors = nvme_lba_to_sect(ns, ctrl->dmrsl); 1679 + 1677 1680 if (ctrl->max_discard_sectors == 0) { 1678 1681 blk_queue_max_discard_sectors(queue, 0); 1679 1682 return; ··· 1690 1687 /* If discard is already enabled, don't reset queue limits */ 1691 1688 if (queue->limits.max_discard_sectors) 1692 1689 return; 1693 - 1694 - if (ctrl->dmrsl && ctrl->dmrsl <= nvme_sect_to_lba(ns, UINT_MAX)) 1695 - ctrl->max_discard_sectors = nvme_lba_to_sect(ns, ctrl->dmrsl); 1696 1690 1697 1691 blk_queue_max_discard_sectors(queue, ctrl->max_discard_sectors); 1698 1692 blk_queue_max_discard_segments(queue, ctrl->max_discard_segments);
+1
drivers/of/dynamic.c
··· 226 226 np->sibling = np->parent->child; 227 227 np->parent->child = np; 228 228 of_node_clear_flag(np, OF_DETACHED); 229 + np->fwnode.flags |= FWNODE_FLAG_NOT_DEVICE; 229 230 } 230 231 231 232 /**
+5
drivers/of/platform.c
··· 737 737 if (of_node_check_flag(rd->dn, OF_POPULATED)) 738 738 return NOTIFY_OK; 739 739 740 + /* 741 + * Clear the flag before adding the device so that fw_devlink 742 + * doesn't skip adding consumers to this device. 743 + */ 744 + rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; 740 745 /* pdev_parent may be NULL when no bus platform device */ 741 746 pdev_parent = of_find_device_by_node(rd->dn->parent); 742 747 pdev = of_platform_device_create(rd->dn, NULL,
+18 -12
drivers/pci/doe.c
··· 128 128 return -EIO; 129 129 130 130 /* Length is 2 DW of header + length of payload in DW */ 131 - length = 2 + task->request_pl_sz / sizeof(u32); 131 + length = 2 + task->request_pl_sz / sizeof(__le32); 132 132 if (length > PCI_DOE_MAX_LENGTH) 133 133 return -EIO; 134 134 if (length == PCI_DOE_MAX_LENGTH) ··· 141 141 pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, 142 142 FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, 143 143 length)); 144 - for (i = 0; i < task->request_pl_sz / sizeof(u32); i++) 144 + for (i = 0; i < task->request_pl_sz / sizeof(__le32); i++) 145 145 pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, 146 - task->request_pl[i]); 146 + le32_to_cpu(task->request_pl[i])); 147 147 148 148 pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_GO); 149 149 ··· 195 195 196 196 /* First 2 dwords have already been read */ 197 197 length -= 2; 198 - payload_length = min(length, task->response_pl_sz / sizeof(u32)); 198 + payload_length = min(length, task->response_pl_sz / sizeof(__le32)); 199 199 /* Read the rest of the response payload */ 200 200 for (i = 0; i < payload_length; i++) { 201 - pci_read_config_dword(pdev, offset + PCI_DOE_READ, 202 - &task->response_pl[i]); 201 + pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val); 202 + task->response_pl[i] = cpu_to_le32(val); 203 203 /* Prior to the last ack, ensure Data Object Ready */ 204 204 if (i == (payload_length - 1) && !pci_doe_data_obj_ready(doe_mb)) 205 205 return -EIO; ··· 217 217 if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) 218 218 return -EIO; 219 219 220 - return min(length, task->response_pl_sz / sizeof(u32)) * sizeof(u32); 220 + return min(length, task->response_pl_sz / sizeof(__le32)) * sizeof(__le32); 221 221 } 222 222 223 223 static void signal_task_complete(struct pci_doe_task *task, int rv) 224 224 { 225 225 task->rv = rv; 226 226 task->complete(task); 227 + destroy_work_on_stack(&task->work); 227 228 } 228 229 229 230 static void signal_task_abort(struct pci_doe_task *task, int rv) ··· 
318 317 { 319 318 u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, 320 319 *index); 320 + __le32 request_pl_le = cpu_to_le32(request_pl); 321 + __le32 response_pl_le; 321 322 u32 response_pl; 322 323 DECLARE_COMPLETION_ONSTACK(c); 323 324 struct pci_doe_task task = { 324 325 .prot.vid = PCI_VENDOR_ID_PCI_SIG, 325 326 .prot.type = PCI_DOE_PROTOCOL_DISCOVERY, 326 - .request_pl = &request_pl, 327 + .request_pl = &request_pl_le, 327 328 .request_pl_sz = sizeof(request_pl), 328 - .response_pl = &response_pl, 329 + .response_pl = &response_pl_le, 329 330 .response_pl_sz = sizeof(response_pl), 330 331 .complete = pci_doe_task_complete, 331 332 .private = &c, ··· 343 340 if (task.rv != sizeof(response_pl)) 344 341 return -EIO; 345 342 343 + response_pl = le32_to_cpu(response_pl_le); 346 344 *vid = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID, response_pl); 347 345 *protocol = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL, 348 346 response_pl); ··· 524 520 * task->complete will be called when the state machine is done processing this 525 521 * task. 526 522 * 523 + * @task must be allocated on the stack. 524 + * 527 525 * Excess data will be discarded. 528 526 * 529 527 * RETURNS: 0 when task has been successfully queued, -ERRNO on error ··· 539 533 * DOE requests must be a whole number of DW and the response needs to 540 534 * be big enough for at least 1 DW 541 535 */ 542 - if (task->request_pl_sz % sizeof(u32) || 543 - task->response_pl_sz < sizeof(u32)) 536 + if (task->request_pl_sz % sizeof(__le32) || 537 + task->response_pl_sz < sizeof(__le32)) 544 538 return -EINVAL; 545 539 546 540 if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) 547 541 return -EIO; 548 542 549 543 task->doe_mb = doe_mb; 550 - INIT_WORK(&task->work, doe_statemachine_work); 544 + INIT_WORK_ONSTACK(&task->work, doe_statemachine_work); 551 545 queue_work(doe_mb->work_queue, &task->work); 552 546 return 0; 553 547 }
+3 -2
drivers/pci/remove.c
··· 157 157 list_for_each_entry_safe(child, tmp, 158 158 &bus->devices, bus_list) 159 159 pci_remove_bus_device(child); 160 - pci_remove_bus(bus); 161 - host_bridge->bus = NULL; 162 160 163 161 #ifdef CONFIG_PCI_DOMAINS_GENERIC 164 162 /* Release domain_nr if it was dynamically allocated */ 165 163 if (host_bridge->domain_nr == PCI_DOMAIN_NR_NOT_SET) 166 164 pci_bus_release_domain_nr(bus, host_bridge->dev.parent); 167 165 #endif 166 + 167 + pci_remove_bus(bus); 168 + host_bridge->bus = NULL; 168 169 169 170 /* remove the host bridge */ 170 171 device_del(&host_bridge->dev);
+16 -20
drivers/pinctrl/pinctrl-amd.c
··· 872 872 .pin_config_group_set = amd_pinconf_group_set, 873 873 }; 874 874 875 - static void amd_gpio_irq_init_pin(struct amd_gpio *gpio_dev, int pin) 875 + static void amd_gpio_irq_init(struct amd_gpio *gpio_dev) 876 876 { 877 - const struct pin_desc *pd; 877 + struct pinctrl_desc *desc = gpio_dev->pctrl->desc; 878 878 unsigned long flags; 879 879 u32 pin_reg, mask; 880 + int i; 880 881 881 882 mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) | 882 883 BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) | 883 884 BIT(WAKE_CNTRL_OFF_S4); 884 885 885 - pd = pin_desc_get(gpio_dev->pctrl, pin); 886 + for (i = 0; i < desc->npins; i++) { 887 - if (!pd) 887 + int pin = desc->pins[i].number; 888 - return; 888 + const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin); 889 890 889 - raw_spin_lock_irqsave(&gpio_dev->lock, flags); 890 - pin_reg = readl(gpio_dev->base + pin * 4); 891 - pin_reg &= ~mask; 892 - writel(pin_reg, gpio_dev->base + pin * 4); 893 - raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 894 - } 890 + if (!pd) 891 + continue; 895 892 896 - static void amd_gpio_irq_init(struct amd_gpio *gpio_dev) 897 - { 898 - struct pinctrl_desc *desc = gpio_dev->pctrl->desc; 899 - int i; 893 + raw_spin_lock_irqsave(&gpio_dev->lock, flags); 900 894 901 - for (i = 0; i < desc->npins; i++) 902 - amd_gpio_irq_init_pin(gpio_dev, i); 895 + pin_reg = readl(gpio_dev->base + i * 4); 896 + pin_reg &= ~mask; 897 + writel(pin_reg, gpio_dev->base + i * 4); 898 + 899 + raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 900 + } 903 901 } 904 902 905 903 #ifdef CONFIG_PM_SLEEP ··· 950 952 for (i = 0; i < desc->npins; i++) { 951 953 int pin = desc->pins[i].number; 952 954 953 - if (!amd_gpio_should_save(gpio_dev, pin)) { 954 - amd_gpio_irq_init_pin(gpio_dev, pin); 955 + if (!amd_gpio_should_save(gpio_dev, pin)) 955 956 continue; 956 957 - } 957 958 raw_spin_lock_irqsave(&gpio_dev->lock, flags); 958 959 gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
+1 -2
drivers/scsi/iscsi_tcp.c
··· 771 771 iscsi_set_param(cls_conn, param, buf, buflen); 772 772 break; 773 773 case ISCSI_PARAM_DATADGST_EN: 774 - iscsi_set_param(cls_conn, param, buf, buflen); 775 - 776 774 mutex_lock(&tcp_sw_conn->sock_lock); 777 775 if (!tcp_sw_conn->sock) { 778 776 mutex_unlock(&tcp_sw_conn->sock_lock); 779 777 return -ENOTCONN; 780 778 } 779 + iscsi_set_param(cls_conn, param, buf, buflen); 781 780 tcp_sw_conn->sendpage = conn->datadgst_en ? 782 781 sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage; 783 782 mutex_unlock(&tcp_sw_conn->sock_lock);
+1 -1
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 2526 2526 mrioc->unrecoverable = 1; 2527 2527 goto schedule_work; 2528 2528 case MPI3_SYSIF_FAULT_CODE_SOFT_RESET_IN_PROGRESS: 2529 - return; 2529 + goto schedule_work; 2530 2530 case MPI3_SYSIF_FAULT_CODE_CI_ACTIVATION_RESET: 2531 2531 reset_reason = MPI3MR_RESET_FROM_CIACTIV_FAULT; 2532 2532 break;
+1
drivers/scsi/qla2xxx/qla_os.c
··· 3617 3617 probe_failed: 3618 3618 qla_enode_stop(base_vha); 3619 3619 qla_edb_stop(base_vha); 3620 + vfree(base_vha->scan.l); 3620 3621 if (base_vha->gnl.l) { 3621 3622 dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size, 3622 3623 base_vha->gnl.l, base_vha->gnl.ldma);
+5
drivers/spi/spi.c
··· 4456 4456 return NOTIFY_OK; 4457 4457 } 4458 4458 4459 + /* 4460 + * Clear the flag before adding the device so that fw_devlink 4461 + * doesn't skip adding consumers to this device. 4462 + */ 4463 + rd->dn->fwnode.flags &= ~FWNODE_FLAG_NOT_DEVICE; 4459 4464 spi = of_register_spi_device(ctlr, rd->dn); 4460 4465 put_device(&ctlr->dev); 4461 4466
+11
drivers/tty/serial/8250/8250_port.c
··· 1903 1903 static bool handle_rx_dma(struct uart_8250_port *up, unsigned int iir)
1904 1904 {
1905 1905 switch (iir & 0x3f) {
1906 + case UART_IIR_THRI:
1907 + /*
1908 + * Postpone the DMA-or-not decision to IIR_RDI or
1909 + * IIR_RX_TIMEOUT, since an informed decision cannot be
1910 + * made from IIR_THRI alone.
1911 + *
1912 + * This also fixes a known DMA Rx corruption issue where
1913 + * DR is asserted but DMA Rx only gets a corrupted zero byte
1914 + * (too early DR?).
1915 + */
1916 + return false;
1906 1917 case UART_IIR_RDI:
1907 1918 if (!up->dma->rx_running)
1908 1919 break;
+8 -2
drivers/tty/serial/fsl_lpuart.c
··· 858 858 struct lpuart_port, port); 859 859 unsigned long stat = lpuart32_read(port, UARTSTAT); 860 860 unsigned long sfifo = lpuart32_read(port, UARTFIFO); 861 + unsigned long ctrl = lpuart32_read(port, UARTCTRL); 861 862 862 863 if (sport->dma_tx_in_progress) 863 864 return 0; 864 865 865 - if (stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) 866 + /* 867 + * LPUART Transmission Complete Flag may never be set while queuing a break 868 + * character, so avoid checking for transmission complete when UARTCTRL_SBK 869 + * is asserted. 870 + */ 871 + if ((stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) || ctrl & UARTCTRL_SBK) 866 872 return TIOCSER_TEMT; 867 873 868 874 return 0; ··· 2948 2942 tty = tty_port_tty_get(port); 2949 2943 if (tty) { 2950 2944 tty_dev = tty->dev; 2951 - may_wake = device_may_wakeup(tty_dev); 2945 + may_wake = tty_dev && device_may_wakeup(tty_dev); 2952 2946 tty_kref_put(tty); 2953 2947 } 2954 2948
+9 -1
drivers/tty/serial/sh-sci.c
··· 31 31 #include <linux/ioport.h> 32 32 #include <linux/ktime.h> 33 33 #include <linux/major.h> 34 + #include <linux/minmax.h> 34 35 #include <linux/module.h> 35 36 #include <linux/mm.h> 36 37 #include <linux/of.h> ··· 2865 2864 sci_port->irqs[i] = platform_get_irq(dev, i); 2866 2865 } 2867 2866 2867 + /* 2868 + * The fourth interrupt on SCI port is transmit end interrupt, so 2869 + * shuffle the interrupts. 2870 + */ 2871 + if (p->type == PORT_SCI) 2872 + swap(sci_port->irqs[SCIx_BRI_IRQ], sci_port->irqs[SCIx_TEI_IRQ]); 2873 + 2868 2874 /* The SCI generates several interrupts. They can be muxed together or 2869 2875 * connected to different interrupt lines. In the muxed case only one 2870 2876 * interrupt resource is specified as there is only one interrupt ID. ··· 2937 2929 port->flags = UPF_FIXED_PORT | UPF_BOOT_AUTOCONF | p->flags; 2938 2930 port->fifosize = sci_port->params->fifosize; 2939 2931 2940 - if (port->type == PORT_SCI) { 2932 + if (port->type == PORT_SCI && !dev->dev.of_node) { 2941 2933 if (sci_port->reg_size >= 0x20) 2942 2934 port->regshift = 2; 2943 2935 else
+16 -31
drivers/ufs/core/ufshcd.c
··· 1409 1409 struct ufs_clk_info *clki; 1410 1410 unsigned long irq_flags; 1411 1411 1412 - /* 1413 - * Skip devfreq if UFS initialization is not finished. 1414 - * Otherwise ufs could be in a inconsistent state. 1415 - */ 1416 - if (!smp_load_acquire(&hba->logical_unit_scan_finished)) 1417 - return 0; 1418 - 1419 1412 if (!ufshcd_is_clkscaling_supported(hba)) 1420 1413 return -EINVAL; 1421 1414 ··· 8392 8399 if (ret) 8393 8400 goto out; 8394 8401 8402 + /* Initialize devfreq after UFS device is detected */ 8403 + if (ufshcd_is_clkscaling_supported(hba)) { 8404 + memcpy(&hba->clk_scaling.saved_pwr_info.info, 8405 + &hba->pwr_info, 8406 + sizeof(struct ufs_pa_layer_attr)); 8407 + hba->clk_scaling.saved_pwr_info.is_valid = true; 8408 + hba->clk_scaling.is_allowed = true; 8409 + 8410 + ret = ufshcd_devfreq_init(hba); 8411 + if (ret) 8412 + goto out; 8413 + 8414 + hba->clk_scaling.is_enabled = true; 8415 + ufshcd_init_clk_scaling_sysfs(hba); 8416 + } 8417 + 8395 8418 ufs_bsg_probe(hba); 8396 8419 ufshpb_init(hba); 8397 8420 scsi_scan_host(hba->host); ··· 8679 8670 if (ret) { 8680 8671 pm_runtime_put_sync(hba->dev); 8681 8672 ufshcd_hba_exit(hba); 8682 - } else { 8683 - /* 8684 - * Make sure that when reader code sees UFS initialization has finished, 8685 - * all initialization steps have really been executed. 
8686 - */ 8687 - smp_store_release(&hba->logical_unit_scan_finished, true); 8688 8673 } 8689 8674 } 8690 8675 ··· 10319 10316 */ 10320 10317 ufshcd_set_ufs_dev_active(hba); 10321 10318 10322 - /* Initialize devfreq */ 10323 - if (ufshcd_is_clkscaling_supported(hba)) { 10324 - memcpy(&hba->clk_scaling.saved_pwr_info.info, 10325 - &hba->pwr_info, 10326 - sizeof(struct ufs_pa_layer_attr)); 10327 - hba->clk_scaling.saved_pwr_info.is_valid = true; 10328 - hba->clk_scaling.is_allowed = true; 10329 - 10330 - err = ufshcd_devfreq_init(hba); 10331 - if (err) 10332 - goto rpm_put_sync; 10333 - 10334 - hba->clk_scaling.is_enabled = true; 10335 - ufshcd_init_clk_scaling_sysfs(hba); 10336 - } 10337 - 10338 10319 async_schedule(ufshcd_async_scan, hba); 10339 10320 ufs_sysfs_add_nodes(hba->dev); 10340 10321 10341 10322 device_enable_async_suspend(dev); 10342 10323 return 0; 10343 10324 10344 - rpm_put_sync: 10345 - pm_runtime_put_sync(dev); 10346 10325 free_tmf_queue: 10347 10326 blk_mq_destroy_queue(hba->tmf_queue); 10348 10327 blk_put_queue(hba->tmf_queue);
+1 -2
drivers/usb/cdns3/cdnsp-ep0.c
··· 414 414 void cdnsp_setup_analyze(struct cdnsp_device *pdev) 415 415 { 416 416 struct usb_ctrlrequest *ctrl = &pdev->setup; 417 - int ret = 0; 417 + int ret = -EINVAL; 418 418 u16 len; 419 419 420 420 trace_cdnsp_ctrl_req(ctrl); ··· 424 424 425 425 if (pdev->gadget.state == USB_STATE_NOTATTACHED) { 426 426 dev_err(pdev->dev, "ERR: Setup detected in unattached state\n"); 427 - ret = -EINVAL; 428 427 goto out; 429 428 } 430 429
+4
drivers/usb/dwc3/dwc3-pci.c
··· 49 49 #define PCI_DEVICE_ID_INTEL_RPLS 0x7a61 50 50 #define PCI_DEVICE_ID_INTEL_MTLM 0x7eb1 51 51 #define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1 52 + #define PCI_DEVICE_ID_INTEL_MTLS 0x7f6f 52 53 #define PCI_DEVICE_ID_INTEL_MTL 0x7e7e 53 54 #define PCI_DEVICE_ID_INTEL_TGL 0x9a15 54 55 #define PCI_DEVICE_ID_AMD_MR 0x163a ··· 473 472 (kernel_ulong_t) &dwc3_pci_intel_swnode, }, 474 473 475 474 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLP), 475 + (kernel_ulong_t) &dwc3_pci_intel_swnode, }, 476 + 477 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLS), 476 478 (kernel_ulong_t) &dwc3_pci_intel_swnode, }, 477 479 478 480 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL),
+1 -1
drivers/usb/gadget/function/f_fs.c
··· 1251 1251 p->kiocb = kiocb; 1252 1252 if (p->aio) { 1253 1253 p->to_free = dup_iter(&p->data, to, GFP_KERNEL); 1254 - if (!p->to_free) { 1254 + if (!iter_is_ubuf(&p->data) && !p->to_free) { 1255 1255 kfree(p); 1256 1256 return -ENOMEM; 1257 1257 }
+1 -1
drivers/usb/gadget/legacy/inode.c
··· 614 614 if (!priv) 615 615 goto fail; 616 616 priv->to_free = dup_iter(&priv->to, to, GFP_KERNEL); 617 - if (!priv->to_free) { 617 + if (!iter_is_ubuf(&priv->to) && !priv->to_free) { 618 618 kfree(priv); 619 619 goto fail; 620 620 }
+3 -4
drivers/usb/host/xhci-pci.c
··· 771 771 /* suspend and resume implemented later */ 772 772 773 773 .shutdown = usb_hcd_pci_shutdown, 774 - .driver = { 775 774 #ifdef CONFIG_PM 776 - .pm = &usb_hcd_pci_pm_ops, 777 - #endif 778 - .probe_type = PROBE_PREFER_ASYNCHRONOUS, 775 + .driver = { 776 + .pm = &usb_hcd_pci_pm_ops 779 777 }, 778 + #endif 780 779 }; 781 780 782 781 static int __init xhci_pci_init(void)
+3 -3
drivers/usb/host/xhci-tegra.c
··· 1360 1360 1361 1361 mutex_unlock(&tegra->lock); 1362 1362 1363 + tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl, 1364 + tegra->otg_usb2_port); 1365 + 1363 1366 if (tegra->host_mode) { 1364 1367 /* switch to host mode */ 1365 1368 if (tegra->otg_usb3_port >= 0) { ··· 1477 1474 } 1478 1475 1479 1476 tegra->otg_usb2_port = tegra_xusb_get_usb2_port(tegra, usbphy); 1480 - tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion( 1481 - tegra->padctl, 1482 - tegra->otg_usb2_port); 1483 1477 1484 1478 tegra->host_mode = (usbphy->last_event == USB_EVENT_ID) ? true : false; 1485 1479
+6 -1
drivers/usb/host/xhci.c
··· 9 9 */ 10 10 11 11 #include <linux/pci.h> 12 + #include <linux/iommu.h> 12 13 #include <linux/iopoll.h> 13 14 #include <linux/irq.h> 14 15 #include <linux/log2.h> ··· 229 228 static void xhci_zero_64b_regs(struct xhci_hcd *xhci) 230 229 { 231 230 struct device *dev = xhci_to_hcd(xhci)->self.sysdev; 231 + struct iommu_domain *domain; 232 232 int err, i; 233 233 u64 val; 234 234 u32 intrs; ··· 248 246 * an iommu. Doing anything when there is no iommu is definitely 249 247 * unsafe... 250 248 */ 251 - if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !device_iommu_mapped(dev)) 249 + domain = iommu_get_domain_for_dev(dev); 250 + if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !domain || 251 + domain->type == IOMMU_DOMAIN_IDENTITY) 252 252 return; 253 253 254 254 xhci_info(xhci, "Zeroing 64bit base registers, expecting fault\n"); ··· 4442 4438 4443 4439 if (!virt_dev || max_exit_latency == virt_dev->current_mel) { 4444 4440 spin_unlock_irqrestore(&xhci->lock, flags); 4441 + xhci_free_command(xhci, command); 4445 4442 return 0; 4446 4443 } 4447 4444
+1
drivers/usb/serial/cp210x.c
··· 120 120 { USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */ 121 121 { USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */ 122 122 { USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */ 123 + { USB_DEVICE(0x10C4, 0x82AA) }, /* Silicon Labs IFS-USB-DATACABLE used with Quint UPS */ 123 124 { USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */ 124 125 { USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */ 125 126 { USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */
+10
drivers/usb/serial/option.c
··· 1198 1198 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) }, 1199 1199 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) }, 1200 1200 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) }, 1201 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */ 1202 + .driver_info = ZLP }, 1201 1203 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) }, 1202 1204 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1203 1205 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, ··· 1302 1300 .driver_info = NCTRL(0) | RSVD(1) }, 1303 1301 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */ 1304 1302 .driver_info = RSVD(0) }, 1303 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */ 1304 + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1305 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */ 1306 + .driver_info = NCTRL(0) | RSVD(1) }, 1307 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */ 1308 + .driver_info = NCTRL(2) | RSVD(3) }, 1309 + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */ 1310 + .driver_info = NCTRL(0) | RSVD(1) }, 1305 1311 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), 1306 1312 .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, 1307 1313 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+5 -1
drivers/usb/typec/altmodes/displayport.c
··· 112 112 if (dp->data.status & DP_STATUS_PREFER_MULTI_FUNC && 113 113 pin_assign & DP_PIN_ASSIGN_MULTI_FUNC_MASK) 114 114 pin_assign &= DP_PIN_ASSIGN_MULTI_FUNC_MASK; 115 - else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) 115 + else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) { 116 116 pin_assign &= DP_PIN_ASSIGN_DP_ONLY_MASK; 117 + /* Default to pin assign C if available */ 118 + if (pin_assign & BIT(DP_PIN_ASSIGN_C)) 119 + pin_assign = BIT(DP_PIN_ASSIGN_C); 120 + } 117 121 118 122 if (!pin_assign) 119 123 return -EINVAL;
+6 -2
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 2467 2467 err = 0; 2468 2468 goto out; 2469 2469 } 2470 + mlx5_vdpa_add_debugfs(ndev); 2470 2471 err = setup_virtqueues(mvdev); 2471 2472 if (err) { 2472 2473 mlx5_vdpa_warn(mvdev, "setup_virtqueues\n"); 2473 - goto out; 2474 + goto err_setup; 2474 2475 } 2475 2476 2476 2477 err = create_rqt(ndev); ··· 2501 2500 destroy_rqt(ndev); 2502 2501 err_rqt: 2503 2502 teardown_virtqueues(ndev); 2503 + err_setup: 2504 + mlx5_vdpa_remove_debugfs(ndev->debugfs); 2504 2505 out: 2505 2506 return err; 2506 2507 } ··· 2516 2513 if (!ndev->setup) 2517 2514 return; 2518 2515 2516 + mlx5_vdpa_remove_debugfs(ndev->debugfs); 2517 + ndev->debugfs = NULL; 2519 2518 teardown_steering(ndev); 2520 2519 destroy_tir(ndev); 2521 2520 destroy_rqt(ndev); ··· 3266 3261 if (err) 3267 3262 goto err_reg; 3268 3263 3269 - mlx5_vdpa_add_debugfs(ndev); 3270 3264 mgtdev->ndev = ndev; 3271 3265 return 0; 3272 3266
+9 -4
drivers/vdpa/vdpa_sim/vdpa_sim_net.c
··· 466 466
467 467 vdpasim_net_setup_config(simdev, config);
468 468
469 - ret = _vdpa_register_device(&simdev->vdpa, VDPASIM_NET_VQ_NUM);
470 - if (ret)
471 - goto reg_err;
472 -
473 469 net = sim_to_net(simdev);
474 470
475 471 u64_stats_init(&net->tx_stats.syncp);
476 472 u64_stats_init(&net->rx_stats.syncp);
477 473 u64_stats_init(&net->cq_stats.syncp);
474 +
475 + /*
476 + * Initialization must be completed before this call, because it
477 + * connects the device to the vDPA bus, and requests can arrive
478 + * as soon as it returns.
479 + */
480 + ret = _vdpa_register_device(&simdev->vdpa, VDPASIM_NET_VQ_NUM);
481 + if (ret)
482 + goto reg_err;
478 483
479 484 return 0;
480 485
+7 -32
drivers/vhost/scsi.c
··· 125 125 struct se_portal_group se_tpg; 126 126 /* Pointer back to vhost_scsi, protected by tv_tpg_mutex */ 127 127 struct vhost_scsi *vhost_scsi; 128 - struct list_head tmf_queue; 129 128 }; 130 129 131 130 struct vhost_scsi_tport { ··· 205 206 206 207 struct vhost_scsi_tmf { 207 208 struct vhost_work vwork; 208 - struct vhost_scsi_tpg *tpg; 209 209 struct vhost_scsi *vhost; 210 210 struct vhost_scsi_virtqueue *svq; 211 - struct list_head queue_entry; 212 211 213 212 struct se_cmd se_cmd; 214 213 u8 scsi_resp; ··· 349 352 350 353 static void vhost_scsi_release_tmf_res(struct vhost_scsi_tmf *tmf) 351 354 { 352 - struct vhost_scsi_tpg *tpg = tmf->tpg; 353 355 struct vhost_scsi_inflight *inflight = tmf->inflight; 354 356 355 - mutex_lock(&tpg->tv_tpg_mutex); 356 - list_add_tail(&tpg->tmf_queue, &tmf->queue_entry); 357 - mutex_unlock(&tpg->tv_tpg_mutex); 357 + kfree(tmf); 358 358 vhost_scsi_put_inflight(inflight); 359 359 } 360 360 ··· 1188 1194 goto send_reject; 1189 1195 } 1190 1196 1191 - mutex_lock(&tpg->tv_tpg_mutex); 1192 - if (list_empty(&tpg->tmf_queue)) { 1193 - pr_err("Missing reserve TMF. 
Could not handle LUN RESET.\n"); 1194 - mutex_unlock(&tpg->tv_tpg_mutex); 1197 + tmf = kzalloc(sizeof(*tmf), GFP_KERNEL); 1198 + if (!tmf) 1195 1199 goto send_reject; 1196 - } 1197 1200 1198 - tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf, 1199 - queue_entry); 1200 - list_del_init(&tmf->queue_entry); 1201 - mutex_unlock(&tpg->tv_tpg_mutex); 1202 - 1203 - tmf->tpg = tpg; 1201 + vhost_work_init(&tmf->vwork, vhost_scsi_tmf_resp_work); 1204 1202 tmf->vhost = vs; 1205 1203 tmf->svq = svq; 1206 1204 tmf->resp_iov = vq->iov[vc->out]; ··· 1644 1658 for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) { 1645 1659 tpg = vs_tpg[i]; 1646 1660 if (tpg) { 1661 + mutex_lock(&tpg->tv_tpg_mutex); 1662 + tpg->vhost_scsi = NULL; 1647 1663 tpg->tv_tpg_vhost_count--; 1664 + mutex_unlock(&tpg->tv_tpg_mutex); 1648 1665 target_undepend_item(&tpg->se_tpg.tpg_group.cg_item); 1649 1666 } 1650 1667 } ··· 2021 2032 { 2022 2033 struct vhost_scsi_tpg *tpg = container_of(se_tpg, 2023 2034 struct vhost_scsi_tpg, se_tpg); 2024 - struct vhost_scsi_tmf *tmf; 2025 - 2026 - tmf = kzalloc(sizeof(*tmf), GFP_KERNEL); 2027 - if (!tmf) 2028 - return -ENOMEM; 2029 - INIT_LIST_HEAD(&tmf->queue_entry); 2030 - vhost_work_init(&tmf->vwork, vhost_scsi_tmf_resp_work); 2031 2035 2032 2036 mutex_lock(&vhost_scsi_mutex); 2033 2037 2034 2038 mutex_lock(&tpg->tv_tpg_mutex); 2035 2039 tpg->tv_tpg_port_count++; 2036 - list_add_tail(&tmf->queue_entry, &tpg->tmf_queue); 2037 2040 mutex_unlock(&tpg->tv_tpg_mutex); 2038 2041 2039 2042 vhost_scsi_hotplug(tpg, lun); ··· 2040 2059 { 2041 2060 struct vhost_scsi_tpg *tpg = container_of(se_tpg, 2042 2061 struct vhost_scsi_tpg, se_tpg); 2043 - struct vhost_scsi_tmf *tmf; 2044 2062 2045 2063 mutex_lock(&vhost_scsi_mutex); 2046 2064 2047 2065 mutex_lock(&tpg->tv_tpg_mutex); 2048 2066 tpg->tv_tpg_port_count--; 2049 - tmf = list_first_entry(&tpg->tmf_queue, struct vhost_scsi_tmf, 2050 - queue_entry); 2051 - list_del(&tmf->queue_entry); 2052 - kfree(tmf); 2053 2067 
mutex_unlock(&tpg->tv_tpg_mutex); 2054 2068 2055 2069 vhost_scsi_hotunplug(tpg, lun); ··· 2305 2329 } 2306 2330 mutex_init(&tpg->tv_tpg_mutex); 2307 2331 INIT_LIST_HEAD(&tpg->tv_tpg_list); 2308 - INIT_LIST_HEAD(&tpg->tmf_queue); 2309 2332 tpg->tport = tport; 2310 2333 tpg->tport_tpgt = tpgt; 2311 2334
+9 -9
drivers/video/fbdev/core/fbcon.c
··· 823 823 int oldidx = con2fb_map[unit]; 824 824 struct fb_info *info = fbcon_registered_fb[newidx]; 825 825 struct fb_info *oldinfo = NULL; 826 - int found, err = 0, show_logo; 826 + int err = 0, show_logo; 827 827 828 828 WARN_CONSOLE_UNLOCKED(); 829 829 ··· 841 841 if (oldidx != -1) 842 842 oldinfo = fbcon_registered_fb[oldidx]; 843 843 844 - found = search_fb_in_map(newidx); 845 - 846 - if (!err && !found) { 844 + if (!search_fb_in_map(newidx)) { 847 845 err = con2fb_acquire_newinfo(vc, info, unit); 848 - if (!err) 849 - con2fb_map[unit] = newidx; 846 + if (err) 847 + return err; 848 + 849 + fbcon_add_cursor_work(info); 850 850 } 851 + 852 + con2fb_map[unit] = newidx; 851 853 852 854 /* 853 855 * If old fb is not mapped to any of the consoles, 854 856 * fbcon should release it. 855 857 */ 856 - if (!err && oldinfo && !search_fb_in_map(oldidx)) 858 + if (oldinfo && !search_fb_in_map(oldidx)) 857 859 con2fb_release_oldinfo(vc, oldinfo, info); 858 860 859 861 show_logo = (fg_console == 0 && !user && 860 862 logo_shown != FBCON_LOGO_DONTSHOW); 861 863 862 - if (!found) 863 - fbcon_add_cursor_work(info); 864 864 con2fb_map_boot[unit] = newidx; 865 865 con2fb_init_display(vc, info, unit, show_logo); 866 866
+2
drivers/video/fbdev/core/fbmem.c
··· 1116 1116 case FBIOPUT_VSCREENINFO: 1117 1117 if (copy_from_user(&var, argp, sizeof(var))) 1118 1118 return -EFAULT; 1119 + /* only for kernel-internal use */ 1120 + var.activate &= ~FB_ACTIVATE_KD_TEXT; 1119 1121 console_lock(); 1120 1122 lock_fb_info(info); 1121 1123 ret = fbcon_modechange_possible(info, &var);
+5 -3
fs/9p/xattr.c
··· 35 35 return retval; 36 36 } 37 37 if (attr_size > buffer_size) { 38 - if (!buffer_size) /* request to get the attr_size */ 39 - retval = attr_size; 40 - else 38 + if (buffer_size) 41 39 retval = -ERANGE; 40 + else if (attr_size > SSIZE_MAX) 41 + retval = -EOVERFLOW; 42 + else /* request to get the attr_size */ 43 + retval = attr_size; 42 44 } else { 43 45 iov_iter_truncate(&to, attr_size); 44 46 retval = p9_client_read(attr_fid, 0, &to, &err);
+14
fs/btrfs/disk-io.c
··· 2250 2250 2251 2251 fs_info->csum_shash = csum_shash; 2252 2252 2253 + /* 2254 + * Check if the checksum implementation is a fast accelerated one. 2255 + * As-is this is a bit of a hack and should be replaced once the csum 2256 + * implementations provide that information themselves. 2257 + */ 2258 + switch (csum_type) { 2259 + case BTRFS_CSUM_TYPE_CRC32: 2260 + if (!strstr(crypto_shash_driver_name(csum_shash), "generic")) 2261 + set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags); 2262 + break; 2263 + default: 2264 + break; 2265 + } 2266 + 2253 2267 btrfs_info(fs_info, "using %s (%s) checksum algorithm", 2254 2268 btrfs_super_csum_name(csum_type), 2255 2269 crypto_shash_driver_name(csum_shash));
+2 -2
fs/btrfs/super.c
··· 1516 1516 shrinker_debugfs_rename(&s->s_shrink, "sb-%s:%s", fs_type->name, 1517 1517 s->s_id); 1518 1518 btrfs_sb(s)->bdev_holder = fs_type; 1519 - if (!strstr(crc32c_impl(), "generic")) 1520 - set_bit(BTRFS_FS_CSUM_IMPL_FAST, &fs_info->flags); 1521 1519 error = btrfs_fill_super(s, fs_devices, data); 1522 1520 } 1523 1521 if (!error) ··· 1629 1631 btrfs_workqueue_set_max(fs_info->hipri_workers, new_pool_size); 1630 1632 btrfs_workqueue_set_max(fs_info->delalloc_workers, new_pool_size); 1631 1633 btrfs_workqueue_set_max(fs_info->caching_workers, new_pool_size); 1634 + workqueue_set_max_active(fs_info->endio_workers, new_pool_size); 1635 + workqueue_set_max_active(fs_info->endio_meta_workers, new_pool_size); 1632 1636 btrfs_workqueue_set_max(fs_info->endio_write_workers, new_pool_size); 1633 1637 btrfs_workqueue_set_max(fs_info->endio_freespace_worker, new_pool_size); 1634 1638 btrfs_workqueue_set_max(fs_info->delayed_workers, new_pool_size);
+1 -1
fs/cifs/cifssmb.c
··· 120 120 spin_lock(&server->srv_lock); 121 121 if (server->tcpStatus == CifsNeedReconnect) { 122 122 spin_unlock(&server->srv_lock); 123 - mutex_lock(&ses->session_mutex); 123 + mutex_unlock(&ses->session_mutex); 124 124 125 125 if (tcon->retry) 126 126 goto again;
+7 -6
fs/cifs/fs_context.c
··· 441 441 * but there are some bugs that prevent rename from working if there are 442 442 * multiple delimiters. 443 443 * 444 - * Returns a sanitized duplicate of @path. The caller is responsible for 445 - * cleaning up the original. 444 + * Returns a sanitized duplicate of @path. @gfp indicates the GFP_* flags 445 + * for kstrdup. 446 + * The caller is responsible for freeing the original. 446 447 */ 447 448 #define IS_DELIM(c) ((c) == '/' || (c) == '\\') 448 - static char *sanitize_path(char *path) 449 + char *cifs_sanitize_prepath(char *prepath, gfp_t gfp) 449 450 { 450 - char *cursor1 = path, *cursor2 = path; 451 + char *cursor1 = prepath, *cursor2 = prepath; 451 452 452 453 /* skip all prepended delimiters */ 453 454 while (IS_DELIM(*cursor1)) ··· 470 469 cursor2--; 471 470 472 471 *(cursor2) = '\0'; 473 - return kstrdup(path, GFP_KERNEL); 472 + return kstrdup(prepath, gfp); 474 473 } 475 474 476 475 /* ··· 532 531 if (!*pos) 533 532 return 0; 534 533 535 - ctx->prepath = sanitize_path(pos); 534 + ctx->prepath = cifs_sanitize_prepath(pos, GFP_KERNEL); 536 535 if (!ctx->prepath) 537 536 return -ENOMEM; 538 537
+3
fs/cifs/fs_context.h
··· 287 287 */ 288 288 #define SMB3_MAX_DCLOSETIMEO (1 << 30) 289 289 #define SMB3_DEF_DCLOSETIMEO (1 * HZ) /* even 1 sec enough to help eg open/write/close/open/read */ 290 + 291 + extern char *cifs_sanitize_prepath(char *prepath, gfp_t gfp); 292 + 290 293 #endif
+1 -1
fs/cifs/misc.c
··· 1195 1195 kfree(cifs_sb->prepath); 1196 1196 1197 1197 if (prefix && *prefix) { 1198 - cifs_sb->prepath = kstrdup(prefix, GFP_ATOMIC); 1198 + cifs_sb->prepath = cifs_sanitize_prepath(prefix, GFP_ATOMIC); 1199 1199 if (!cifs_sb->prepath) 1200 1200 return -ENOMEM; 1201 1201
+47 -5
fs/dax.c
··· 781 781 return ret; 782 782 } 783 783 784 + static int __dax_clear_dirty_range(struct address_space *mapping, 785 + pgoff_t start, pgoff_t end) 786 + { 787 + XA_STATE(xas, &mapping->i_pages, start); 788 + unsigned int scanned = 0; 789 + void *entry; 790 + 791 + xas_lock_irq(&xas); 792 + xas_for_each(&xas, entry, end) { 793 + entry = get_unlocked_entry(&xas, 0); 794 + xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY); 795 + xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE); 796 + put_unlocked_entry(&xas, entry, WAKE_NEXT); 797 + 798 + if (++scanned % XA_CHECK_SCHED) 799 + continue; 800 + 801 + xas_pause(&xas); 802 + xas_unlock_irq(&xas); 803 + cond_resched(); 804 + xas_lock_irq(&xas); 805 + } 806 + xas_unlock_irq(&xas); 807 + 808 + return 0; 809 + } 810 + 784 811 /* 785 812 * Delete DAX entry at @index from @mapping. Wait for it 786 813 * to be unlocked before deleting it. ··· 1285 1258 /* don't bother with blocks that are not shared to start with */ 1286 1259 if (!(iomap->flags & IOMAP_F_SHARED)) 1287 1260 return length; 1288 - /* don't bother with holes or unwritten extents */ 1289 - if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN) 1290 - return length; 1291 1261 1292 1262 id = dax_read_lock(); 1293 1263 ret = dax_iomap_direct_access(iomap, pos, length, &daddr, NULL); 1294 1264 if (ret < 0) 1295 1265 goto out_unlock; 1266 + 1267 + /* zero the distance if srcmap is HOLE or UNWRITTEN */ 1268 + if (srcmap->flags & IOMAP_F_SHARED || srcmap->type == IOMAP_UNWRITTEN) { 1269 + memset(daddr, 0, length); 1270 + dax_flush(iomap->dax_dev, daddr, length); 1271 + ret = length; 1272 + goto out_unlock; 1273 + } 1296 1274 1297 1275 ret = dax_iomap_direct_access(srcmap, pos, length, &saddr, NULL); 1298 1276 if (ret < 0) ··· 1467 1435 * written by write(2) is visible in mmap. 1468 1436 */ 1469 1437 if (iomap->flags & IOMAP_F_NEW || cow) { 1438 + /* 1439 + * Filesystem allows CoW on non-shared extents. The src extents 1440 + * may have been mmapped with dirty mark before. 
To be able to 1441 + * invalidate its dax entries, we need to clear the dirty mark 1442 + * in advance. 1443 + */ 1444 + if (cow) 1445 + __dax_clear_dirty_range(iomi->inode->i_mapping, 1446 + pos >> PAGE_SHIFT, 1447 + (end - 1) >> PAGE_SHIFT); 1470 1448 invalidate_inode_pages2_range(iomi->inode->i_mapping, 1471 1449 pos >> PAGE_SHIFT, 1472 1450 (end - 1) >> PAGE_SHIFT); ··· 2064 2022 2065 2023 while ((ret = iomap_iter(&src_iter, ops)) > 0 && 2066 2024 (ret = iomap_iter(&dst_iter, ops)) > 0) { 2067 - compared = dax_range_compare_iter(&src_iter, &dst_iter, len, 2068 - same); 2025 + compared = dax_range_compare_iter(&src_iter, &dst_iter, 2026 + min(src_iter.len, dst_iter.len), same); 2069 2027 if (compared < 0) 2070 2028 return ret; 2071 2029 src_iter.processed = dst_iter.processed = compared;
+6 -11
fs/ksmbd/connection.c
··· 112 112 struct ksmbd_conn *conn = work->conn; 113 113 struct list_head *requests_queue = NULL; 114 114 115 - if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE) { 115 + if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE) 116 116 requests_queue = &conn->requests; 117 - work->synchronous = true; 118 - } 119 117 120 118 if (requests_queue) { 121 119 atomic_inc(&conn->req_running); ··· 134 136 135 137 if (!work->multiRsp) 136 138 atomic_dec(&conn->req_running); 137 - spin_lock(&conn->request_lock); 138 139 if (!work->multiRsp) { 140 + spin_lock(&conn->request_lock); 139 141 list_del_init(&work->request_entry); 140 - if (!work->synchronous) 141 - list_del_init(&work->async_request_entry); 142 + spin_unlock(&conn->request_lock); 143 + if (work->asynchronous) 144 + release_async_work(work); 142 145 ret = 0; 143 146 } 144 - spin_unlock(&conn->request_lock); 145 147 146 148 wake_up_all(&conn->req_running_q); 147 149 return ret; ··· 324 326 325 327 /* 4 for rfc1002 length field */ 326 328 size = pdu_size + 4; 327 - conn->request_buf = kvmalloc(size, 328 - GFP_KERNEL | 329 - __GFP_NOWARN | 330 - __GFP_NORETRY); 329 + conn->request_buf = kvmalloc(size, GFP_KERNEL); 331 330 if (!conn->request_buf) 332 331 break; 333 332
+1 -1
fs/ksmbd/ksmbd_work.h
··· 68 68 /* Request is encrypted */ 69 69 bool encrypted:1; 70 70 /* Is this SYNC or ASYNC ksmbd_work */ 71 - bool synchronous:1; 71 + bool asynchronous:1; 72 72 bool need_invalidate_rkey:1; 73 73 74 74 unsigned int remote_key;
+1 -4
fs/ksmbd/server.c
··· 289 289 work->request_buf = conn->request_buf; 290 290 conn->request_buf = NULL; 291 291 292 - if (ksmbd_init_smb_server(work)) { 293 - ksmbd_free_work_struct(work); 294 - return -EINVAL; 295 - } 292 + ksmbd_init_smb_server(work); 296 293 297 294 ksmbd_conn_enqueue_request(work); 298 295 atomic_inc(&conn->r_count);
+21 -15
fs/ksmbd/smb2pdu.c
··· 229 229 struct smb2_negotiate_rsp *rsp; 230 230 struct ksmbd_conn *conn = work->conn; 231 231 232 - if (conn->need_neg == false) 233 - return -EINVAL; 234 - 235 232 *(__be32 *)work->response_buf = 236 233 cpu_to_be32(conn->vals->header_size); 237 234 ··· 495 498 rsp_hdr->SessionId = rcv_hdr->SessionId; 496 499 memcpy(rsp_hdr->Signature, rcv_hdr->Signature, 16); 497 500 498 - work->synchronous = true; 499 - if (work->async_id) { 500 - ksmbd_release_id(&conn->async_ida, work->async_id); 501 - work->async_id = 0; 502 - } 503 - 504 501 return 0; 505 502 } 506 503 ··· 635 644 pr_err("Failed to alloc async message id\n"); 636 645 return id; 637 646 } 638 - work->synchronous = false; 647 + work->asynchronous = true; 639 648 work->async_id = id; 640 649 rsp_hdr->Id.AsyncId = cpu_to_le64(id); 641 650 ··· 653 662 } 654 663 655 664 return 0; 665 + } 666 + 667 + void release_async_work(struct ksmbd_work *work) 668 + { 669 + struct ksmbd_conn *conn = work->conn; 670 + 671 + spin_lock(&conn->request_lock); 672 + list_del_init(&work->async_request_entry); 673 + spin_unlock(&conn->request_lock); 674 + 675 + work->asynchronous = 0; 676 + work->cancel_fn = NULL; 677 + kfree(work->cancel_argv); 678 + work->cancel_argv = NULL; 679 + if (work->async_id) { 680 + ksmbd_release_id(&conn->async_ida, work->async_id); 681 + work->async_id = 0; 682 + } 656 683 } 657 684 658 685 void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status) ··· 7054 7045 7055 7046 ksmbd_vfs_posix_lock_wait(flock); 7056 7047 7057 - spin_lock(&work->conn->request_lock); 7058 7048 spin_lock(&fp->f_lock); 7059 7049 list_del(&work->fp_entry); 7060 - work->cancel_fn = NULL; 7061 - kfree(argv); 7062 7050 spin_unlock(&fp->f_lock); 7063 - spin_unlock(&work->conn->request_lock); 7064 7051 7065 7052 if (work->state != KSMBD_WORK_ACTIVE) { 7066 7053 list_del(&smb_lock->llist); ··· 7074 7069 work->send_no_response = 1; 7075 7070 goto out; 7076 7071 } 7072 + 7077 7073 init_smb2_rsp_hdr(work); 7078 7074 
smb2_set_err_rsp(work); 7079 7075 rsp->hdr.Status = ··· 7087 7081 spin_lock(&work->conn->llist_lock); 7088 7082 list_del(&smb_lock->clist); 7089 7083 spin_unlock(&work->conn->llist_lock); 7090 - 7084 + release_async_work(work); 7091 7085 goto retry; 7092 7086 } else if (!rc) { 7093 7087 spin_lock(&work->conn->llist_lock);
+1
fs/ksmbd/smb2pdu.h
··· 486 486 struct file_lock *smb_flock_init(struct file *f); 487 487 int setup_async_work(struct ksmbd_work *work, void (*fn)(void **), 488 488 void **arg); 489 + void release_async_work(struct ksmbd_work *work); 489 490 void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status); 490 491 struct channel *lookup_chann_list(struct ksmbd_session *sess, 491 492 struct ksmbd_conn *conn);
+110 -30
fs/ksmbd/smb_common.c
··· 283 283 return BAD_PROT_ID; 284 284 } 285 285 286 - int ksmbd_init_smb_server(struct ksmbd_work *work) 287 - { 288 - struct ksmbd_conn *conn = work->conn; 286 + #define SMB_COM_NEGOTIATE_EX 0x0 289 287 290 - if (conn->need_neg == false) 288 + /** 289 + * get_smb1_cmd_val() - get smb command value from smb header 290 + * @work: smb work containing smb header 291 + * 292 + * Return: smb command value 293 + */ 294 + static u16 get_smb1_cmd_val(struct ksmbd_work *work) 295 + { 296 + return SMB_COM_NEGOTIATE_EX; 297 + } 298 + 299 + /** 300 + * init_smb1_rsp_hdr() - initialize smb negotiate response header 301 + * @work: smb work containing smb request 302 + * 303 + * Return: 0 on success, otherwise -EINVAL 304 + */ 305 + static int init_smb1_rsp_hdr(struct ksmbd_work *work) 306 + { 307 + struct smb_hdr *rsp_hdr = (struct smb_hdr *)work->response_buf; 308 + struct smb_hdr *rcv_hdr = (struct smb_hdr *)work->request_buf; 309 + 310 + /* 311 + * Remove 4 byte direct TCP header. 312 + */ 313 + *(__be32 *)work->response_buf = 314 + cpu_to_be32(sizeof(struct smb_hdr) - 4); 315 + 316 + rsp_hdr->Command = SMB_COM_NEGOTIATE; 317 + *(__le32 *)rsp_hdr->Protocol = SMB1_PROTO_NUMBER; 318 + rsp_hdr->Flags = SMBFLG_RESPONSE; 319 + rsp_hdr->Flags2 = SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS | 320 + SMBFLG2_EXT_SEC | SMBFLG2_IS_LONG_NAME; 321 + rsp_hdr->Pid = rcv_hdr->Pid; 322 + rsp_hdr->Mid = rcv_hdr->Mid; 323 + return 0; 324 + } 325 + 326 + /** 327 + * smb1_check_user_session() - check for valid session for a user 328 + * @work: smb work containing smb request buffer 329 + * 330 + * Return: 0 on success, otherwise error 331 + */ 332 + static int smb1_check_user_session(struct ksmbd_work *work) 333 + { 334 + unsigned int cmd = work->conn->ops->get_cmd_val(work); 335 + 336 + if (cmd == SMB_COM_NEGOTIATE_EX) 291 337 return 0; 292 338 293 - init_smb3_11_server(conn); 339 + return -EINVAL; 340 + } 294 341 295 - if (conn->ops->get_cmd_val(work) != SMB_COM_NEGOTIATE) 296 - conn->need_neg = 
false; 342 + /** 343 + * smb1_allocate_rsp_buf() - allocate response buffer for a command 344 + * @work: smb work containing smb request 345 + * 346 + * Return: 0 on success, otherwise -ENOMEM 347 + */ 348 + static int smb1_allocate_rsp_buf(struct ksmbd_work *work) 349 + { 350 + work->response_buf = kmalloc(MAX_CIFS_SMALL_BUFFER_SIZE, 351 + GFP_KERNEL | __GFP_ZERO); 352 + work->response_sz = MAX_CIFS_SMALL_BUFFER_SIZE; 353 + 354 + if (!work->response_buf) { 355 + pr_err("Failed to allocate %u bytes buffer\n", 356 + MAX_CIFS_SMALL_BUFFER_SIZE); 357 + return -ENOMEM; 358 + } 359 + 297 360 return 0; 361 + } 362 + 363 + static struct smb_version_ops smb1_server_ops = { 364 + .get_cmd_val = get_smb1_cmd_val, 365 + .init_rsp_hdr = init_smb1_rsp_hdr, 366 + .allocate_rsp_buf = smb1_allocate_rsp_buf, 367 + .check_user_session = smb1_check_user_session, 368 + }; 369 + 370 + static int smb1_negotiate(struct ksmbd_work *work) 371 + { 372 + return ksmbd_smb_negotiate_common(work, SMB_COM_NEGOTIATE); 373 + } 374 + 375 + static struct smb_version_cmds smb1_server_cmds[1] = { 376 + [SMB_COM_NEGOTIATE_EX] = { .proc = smb1_negotiate, }, 377 + }; 378 + 379 + static void init_smb1_server(struct ksmbd_conn *conn) 380 + { 381 + conn->ops = &smb1_server_ops; 382 + conn->cmds = smb1_server_cmds; 383 + conn->max_cmds = ARRAY_SIZE(smb1_server_cmds); 384 + } 385 + 386 + void ksmbd_init_smb_server(struct ksmbd_work *work) 387 + { 388 + struct ksmbd_conn *conn = work->conn; 389 + __le32 proto; 390 + 391 + if (conn->need_neg == false) 392 + return; 393 + 394 + proto = *(__le32 *)((struct smb_hdr *)work->request_buf)->Protocol; 395 + if (proto == SMB1_PROTO_NUMBER) 396 + init_smb1_server(conn); 397 + else 398 + init_smb3_11_server(conn); 298 399 } 299 400 300 401 int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work, int info_level, ··· 545 444 546 445 ksmbd_debug(SMB, "Unsupported SMB1 protocol\n"); 547 446 548 - /* 549 - * Remove 4 byte direct TCP header, add 2 byte bcc and 550 - * 2 
byte DialectIndex. 551 - */ 552 - *(__be32 *)work->response_buf = 553 - cpu_to_be32(sizeof(struct smb_hdr) - 4 + 2 + 2); 447 + /* Add 2 byte bcc and 2 byte DialectIndex. */ 448 + inc_rfc1001_len(work->response_buf, 4); 554 449 neg_rsp->hdr.Status.CifsError = STATUS_SUCCESS; 555 - 556 - neg_rsp->hdr.Command = SMB_COM_NEGOTIATE; 557 - *(__le32 *)neg_rsp->hdr.Protocol = SMB1_PROTO_NUMBER; 558 - neg_rsp->hdr.Flags = SMBFLG_RESPONSE; 559 - neg_rsp->hdr.Flags2 = SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS | 560 - SMBFLG2_EXT_SEC | SMBFLG2_IS_LONG_NAME; 561 450 562 451 neg_rsp->hdr.WordCount = 1; 563 452 neg_rsp->DialectIndex = cpu_to_le16(work->conn->dialect); ··· 565 474 ksmbd_debug(SMB, "conn->dialect 0x%x\n", conn->dialect); 566 475 567 476 if (command == SMB2_NEGOTIATE_HE) { 568 - struct smb2_hdr *smb2_hdr = smb2_get_msg(work->request_buf); 569 - 570 - if (smb2_hdr->ProtocolId != SMB2_PROTO_NUMBER) { 571 - ksmbd_debug(SMB, "Downgrade to SMB1 negotiation\n"); 572 - command = SMB_COM_NEGOTIATE; 573 - } 574 - } 575 - 576 - if (command == SMB2_NEGOTIATE_HE) { 577 477 ret = smb2_handle_negotiate(work); 578 - init_smb2_neg_rsp(work); 579 478 return ret; 580 479 } 581 480 582 481 if (command == SMB_COM_NEGOTIATE) { 583 482 if (__smb2_negotiate(conn)) { 584 - conn->need_neg = true; 585 483 init_smb3_11_server(conn); 586 484 init_smb2_neg_rsp(work); 587 485 ksmbd_debug(SMB, "Upgrade to SMB2 negotiation\n");
+1 -1
fs/ksmbd/smb_common.h
··· 427 427 428 428 int ksmbd_lookup_dialect_by_id(__le16 *cli_dialects, __le16 dialects_count); 429 429 430 - int ksmbd_init_smb_server(struct ksmbd_work *work); 430 + void ksmbd_init_smb_server(struct ksmbd_work *work); 431 431 432 432 struct ksmbd_kstat; 433 433 int ksmbd_populate_dot_dotdot_entries(struct ksmbd_work *work,
-18
fs/ksmbd/unicode.c
··· 114 114 } 115 115 116 116 /* 117 - * is_char_allowed() - check for valid character 118 - * @ch: input character to be checked 119 - * 120 - * Return: 1 if char is allowed, otherwise 0 121 - */ 122 - static inline int is_char_allowed(char *ch) 123 - { 124 - /* check for control chars, wildcards etc. */ 125 - if (!(*ch & 0x80) && 126 - (*ch <= 0x1f || 127 - *ch == '?' || *ch == '"' || *ch == '<' || 128 - *ch == '>' || *ch == '|')) 129 - return 0; 130 - 131 - return 1; 132 - } 133 - 134 - /* 135 117 * smb_from_utf16() - convert utf16le string to local charset 136 118 * @to: destination buffer 137 119 * @from: source buffer
+1 -1
fs/netfs/iterator.c
··· 139 139 size_t seg = min_t(size_t, PAGE_SIZE - off, len); 140 140 141 141 *pages++ = NULL; 142 - sg_set_page(sg, page, len, off); 142 + sg_set_page(sg, page, seg, off); 143 143 sgtable->nents++; 144 144 sg++; 145 145 len -= seg;
+1
fs/nilfs2/btree.c
··· 2219 2219 /* on-disk format */ 2220 2220 binfo->bi_dat.bi_blkoff = cpu_to_le64(key); 2221 2221 binfo->bi_dat.bi_level = level; 2222 + memset(binfo->bi_dat.bi_pad, 0, sizeof(binfo->bi_dat.bi_pad)); 2222 2223 2223 2224 return 0; 2224 2225 }
+1
fs/nilfs2/direct.c
··· 314 314 315 315 binfo->bi_dat.bi_blkoff = cpu_to_le64(key); 316 316 binfo->bi_dat.bi_level = 0; 317 + memset(binfo->bi_dat.bi_pad, 0, sizeof(binfo->bi_dat.bi_pad)); 317 318 318 319 return 0; 319 320 }
+1 -2
fs/nilfs2/segment.c
··· 2609 2609 goto loop; 2610 2610 2611 2611 end_thread: 2612 - spin_unlock(&sci->sc_state_lock); 2613 - 2614 2612 /* end sync. */ 2615 2613 sci->sc_task = NULL; 2616 2614 wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */ 2615 + spin_unlock(&sci->sc_state_lock); 2617 2616 return 0; 2618 2617 } 2619 2618
+2
fs/nilfs2/super.c
··· 482 482 up_write(&nilfs->ns_sem); 483 483 } 484 484 485 + nilfs_sysfs_delete_device_group(nilfs); 485 486 iput(nilfs->ns_sufile); 486 487 iput(nilfs->ns_cpfile); 487 488 iput(nilfs->ns_dat); ··· 1106 1105 nilfs_put_root(fsroot); 1107 1106 1108 1107 failed_unload: 1108 + nilfs_sysfs_delete_device_group(nilfs); 1109 1109 iput(nilfs->ns_sufile); 1110 1110 iput(nilfs->ns_cpfile); 1111 1111 iput(nilfs->ns_dat);
+7 -5
fs/nilfs2/the_nilfs.c
··· 87 87 { 88 88 might_sleep(); 89 89 if (nilfs_init(nilfs)) { 90 - nilfs_sysfs_delete_device_group(nilfs); 91 90 brelse(nilfs->ns_sbh[0]); 92 91 brelse(nilfs->ns_sbh[1]); 93 92 } ··· 304 305 goto failed; 305 306 } 306 307 308 + err = nilfs_sysfs_create_device_group(sb); 309 + if (unlikely(err)) 310 + goto sysfs_error; 311 + 307 312 if (valid_fs) 308 313 goto skip_recovery; 309 314 ··· 369 366 goto failed; 370 367 371 368 failed_unload: 369 + nilfs_sysfs_delete_device_group(nilfs); 370 + 371 + sysfs_error: 372 372 iput(nilfs->ns_cpfile); 373 373 iput(nilfs->ns_sufile); 374 374 iput(nilfs->ns_dat); ··· 700 694 nilfs->ns_mount_state = le16_to_cpu(sbp->s_state); 701 695 702 696 err = nilfs_store_log_cursor(nilfs, sbp); 703 - if (err) 704 - goto failed_sbh; 705 - 706 - err = nilfs_sysfs_create_device_group(sb); 707 697 if (err) 708 698 goto failed_sbh; 709 699
+13 -2
include/acpi/video.h
··· 59 59 extern void acpi_video_register_backlight(void); 60 60 extern int acpi_video_get_edid(struct acpi_device *device, int type, 61 61 int device_id, void **edid); 62 - extern enum acpi_backlight_type acpi_video_get_backlight_type(void); 63 - extern bool acpi_video_backlight_use_native(void); 64 62 /* 65 63 * Note: The value returned by acpi_video_handles_brightness_key_presses() 66 64 * may change over time and should not be cached. ··· 67 69 extern int acpi_video_get_levels(struct acpi_device *device, 68 70 struct acpi_video_device_brightness **dev_br, 69 71 int *pmax_level); 72 + 73 + extern enum acpi_backlight_type __acpi_video_get_backlight_type(bool native, 74 + bool *auto_detect); 75 + 76 + static inline enum acpi_backlight_type acpi_video_get_backlight_type(void) 77 + { 78 + return __acpi_video_get_backlight_type(false, NULL); 79 + } 80 + 81 + static inline bool acpi_video_backlight_use_native(void) 82 + { 83 + return __acpi_video_get_backlight_type(true, NULL) == acpi_backlight_native; 84 + } 70 85 #else 71 86 static inline void acpi_video_report_nolcd(void) { return; }; 72 87 static inline int acpi_video_register(void) { return -ENODEV; }
+12 -2
include/linux/mlx5/device.h
··· 36 36 #include <linux/types.h> 37 37 #include <rdma/ib_verbs.h> 38 38 #include <linux/mlx5/mlx5_ifc.h> 39 + #include <linux/bitfield.h> 39 40 40 41 #if defined(__LITTLE_ENDIAN) 41 42 #define MLX5_SET_HOST_ENDIANNESS 0 ··· 981 980 }; 982 981 983 982 enum { 984 - CQE_RSS_HTYPE_IP = 0x3 << 2, 983 + CQE_RSS_HTYPE_IP = GENMASK(3, 2), 985 984 /* cqe->rss_hash_type[3:2] - IP destination selected for hash 986 985 * (00 = none, 01 = IPv4, 10 = IPv6, 11 = Reserved) 987 986 */ 988 - CQE_RSS_HTYPE_L4 = 0x3 << 6, 987 + CQE_RSS_IP_NONE = 0x0, 988 + CQE_RSS_IPV4 = 0x1, 989 + CQE_RSS_IPV6 = 0x2, 990 + CQE_RSS_RESERVED = 0x3, 991 + 992 + CQE_RSS_HTYPE_L4 = GENMASK(7, 6), 989 993 /* cqe->rss_hash_type[7:6] - L4 destination selected for hash 990 994 * (00 = none, 01 = TCP. 10 = UDP, 11 = IPSEC.SPI 991 995 */ 996 + CQE_RSS_L4_NONE = 0x0, 997 + CQE_RSS_L4_TCP = 0x1, 998 + CQE_RSS_L4_UDP = 0x2, 999 + CQE_RSS_L4_IPSEC = 0x3, 992 1000 }; 993 1001 994 1002 enum {
+2 -1
include/linux/mm_types.h
··· 774 774 unsigned long cpu_bitmap[]; 775 775 }; 776 776 777 - #define MM_MT_FLAGS (MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN) 777 + #define MM_MT_FLAGS (MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | \ 778 + MT_FLAGS_USE_RCU) 778 779 extern struct mm_struct init_mm; 779 780 780 781 /* Pointer magic because the dynamic array size confuses some compilers. */
+2 -1
include/linux/netdevice.h
··· 1650 1650 1651 1651 struct xdp_metadata_ops { 1652 1652 int (*xmo_rx_timestamp)(const struct xdp_md *ctx, u64 *timestamp); 1653 - int (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash); 1653 + int (*xmo_rx_hash)(const struct xdp_md *ctx, u32 *hash, 1654 + enum xdp_rss_hash_type *rss_type); 1654 1655 }; 1655 1656 1656 1657 /**
+6 -2
include/linux/pci-doe.h
··· 34 34 * @work: Used internally by the mailbox 35 35 * @doe_mb: Used internally by the mailbox 36 36 * 37 + * Payloads are treated as opaque byte streams which are transmitted verbatim, 38 + * without byte-swapping. If payloads contain little-endian register values, 39 + * the caller is responsible for conversion with cpu_to_le32() / le32_to_cpu(). 40 + * 37 41 * The payload sizes and rv are specified in bytes with the following 38 42 * restrictions concerning the protocol. 39 43 * ··· 49 45 */ 50 46 struct pci_doe_task { 51 47 struct pci_doe_protocol prot; 52 - u32 *request_pl; 48 + __le32 *request_pl; 53 49 size_t request_pl_sz; 54 - u32 *response_pl; 50 + __le32 *response_pl; 55 51 size_t response_pl_sz; 56 52 int rv; 57 53 void (*complete)(struct pci_doe_task *task);
+2
include/linux/pci.h
··· 1624 1624 flags, NULL); 1625 1625 } 1626 1626 1627 + static inline bool pci_msix_can_alloc_dyn(struct pci_dev *dev) 1628 + { return false; } 1627 1629 static inline struct msi_map pci_msix_alloc_irq_at(struct pci_dev *dev, unsigned int index, 1628 1630 const struct irq_affinity_desc *affdesc) 1629 1631 {
+2 -1
include/linux/rtnetlink.h
··· 25 25 struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev, 26 26 unsigned change, u32 event, 27 27 gfp_t flags, int *new_nsid, 28 - int new_ifindex, u32 portid, u32 seq); 28 + int new_ifindex, u32 portid, 29 + const struct nlmsghdr *nlh); 29 30 void rtmsg_ifinfo_send(struct sk_buff *skb, struct net_device *dev, 30 31 gfp_t flags, u32 portid, const struct nlmsghdr *nlh); 31 32
+1
include/net/bluetooth/hci_core.h
··· 954 954 HCI_CONN_STK_ENCRYPT, 955 955 HCI_CONN_AUTH_INITIATOR, 956 956 HCI_CONN_DROP, 957 + HCI_CONN_CANCEL, 957 958 HCI_CONN_PARAM_REMOVAL_PEND, 958 959 HCI_CONN_NEW_LINK_KEY, 959 960 HCI_CONN_SCANNING,
+6 -2
include/net/bonding.h
··· 761 761 #if IS_ENABLED(CONFIG_IPV6) 762 762 static inline int bond_get_targets_ip6(struct in6_addr *targets, struct in6_addr *ip) 763 763 { 764 + struct in6_addr mcaddr; 764 765 int i; 765 766 766 - for (i = 0; i < BOND_MAX_NS_TARGETS; i++) 767 - if (ipv6_addr_equal(&targets[i], ip)) 767 + for (i = 0; i < BOND_MAX_NS_TARGETS; i++) { 768 + addrconf_addr_solict_mult(&targets[i], &mcaddr); 769 + if ((ipv6_addr_equal(&targets[i], ip)) || 770 + (ipv6_addr_equal(&mcaddr, ip))) 768 771 return i; 769 772 else if (ipv6_addr_any(&targets[i])) 770 773 break; 774 + } 771 775 772 776 return -1; 773 777 }
+47
include/net/xdp.h
··· 8 8 9 9 #include <linux/skbuff.h> /* skb_shared_info */ 10 10 #include <uapi/linux/netdev.h> 11 + #include <linux/bitfield.h> 11 12 12 13 /** 13 14 * DOC: XDP RX-queue information ··· 424 423 XDP_METADATA_KFUNC_xxx 425 424 #undef XDP_METADATA_KFUNC 426 425 MAX_XDP_METADATA_KFUNC, 426 + }; 427 + 428 + enum xdp_rss_hash_type { 429 + /* First part: Individual bits for L3/L4 types */ 430 + XDP_RSS_L3_IPV4 = BIT(0), 431 + XDP_RSS_L3_IPV6 = BIT(1), 432 + 433 + /* The fixed (L3) IPv4 and IPv6 headers can both be followed by 434 + * variable/dynamic headers, IPv4 called Options and IPv6 called 435 + * Extension Headers. HW RSS type can contain this info. 436 + */ 437 + XDP_RSS_L3_DYNHDR = BIT(2), 438 + 439 + /* When RSS hash covers L4 then drivers MUST set XDP_RSS_L4 bit in 440 + * addition to the protocol specific bit. This ease interaction with 441 + * SKBs and avoids reserving a fixed mask for future L4 protocol bits. 442 + */ 443 + XDP_RSS_L4 = BIT(3), /* L4 based hash, proto can be unknown */ 444 + XDP_RSS_L4_TCP = BIT(4), 445 + XDP_RSS_L4_UDP = BIT(5), 446 + XDP_RSS_L4_SCTP = BIT(6), 447 + XDP_RSS_L4_IPSEC = BIT(7), /* L4 based hash include IPSEC SPI */ 448 + 449 + /* Second part: RSS hash type combinations used for driver HW mapping */ 450 + XDP_RSS_TYPE_NONE = 0, 451 + XDP_RSS_TYPE_L2 = XDP_RSS_TYPE_NONE, 452 + 453 + XDP_RSS_TYPE_L3_IPV4 = XDP_RSS_L3_IPV4, 454 + XDP_RSS_TYPE_L3_IPV6 = XDP_RSS_L3_IPV6, 455 + XDP_RSS_TYPE_L3_IPV4_OPT = XDP_RSS_L3_IPV4 | XDP_RSS_L3_DYNHDR, 456 + XDP_RSS_TYPE_L3_IPV6_EX = XDP_RSS_L3_IPV6 | XDP_RSS_L3_DYNHDR, 457 + 458 + XDP_RSS_TYPE_L4_ANY = XDP_RSS_L4, 459 + XDP_RSS_TYPE_L4_IPV4_TCP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_TCP, 460 + XDP_RSS_TYPE_L4_IPV4_UDP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_UDP, 461 + XDP_RSS_TYPE_L4_IPV4_SCTP = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_SCTP, 462 + XDP_RSS_TYPE_L4_IPV4_IPSEC = XDP_RSS_L3_IPV4 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC, 463 + 464 + XDP_RSS_TYPE_L4_IPV6_TCP = XDP_RSS_L3_IPV6 | 
XDP_RSS_L4 | XDP_RSS_L4_TCP, 465 + XDP_RSS_TYPE_L4_IPV6_UDP = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_UDP, 466 + XDP_RSS_TYPE_L4_IPV6_SCTP = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_SCTP, 467 + XDP_RSS_TYPE_L4_IPV6_IPSEC = XDP_RSS_L3_IPV6 | XDP_RSS_L4 | XDP_RSS_L4_IPSEC, 468 + 469 + XDP_RSS_TYPE_L4_IPV6_TCP_EX = XDP_RSS_TYPE_L4_IPV6_TCP | XDP_RSS_L3_DYNHDR, 470 + XDP_RSS_TYPE_L4_IPV6_UDP_EX = XDP_RSS_TYPE_L4_IPV6_UDP | XDP_RSS_L3_DYNHDR, 471 + XDP_RSS_TYPE_L4_IPV6_SCTP_EX = XDP_RSS_TYPE_L4_IPV6_SCTP | XDP_RSS_L3_DYNHDR, 427 472 }; 428 473 429 474 #ifdef CONFIG_NET
+9 -9
include/uapi/linux/virtio_blk.h
··· 140 140 141 141 /* Zoned block device characteristics (if VIRTIO_BLK_F_ZONED) */ 142 142 struct virtio_blk_zoned_characteristics { 143 - __le32 zone_sectors; 144 - __le32 max_open_zones; 145 - __le32 max_active_zones; 146 - __le32 max_append_sectors; 147 - __le32 write_granularity; 143 + __virtio32 zone_sectors; 144 + __virtio32 max_open_zones; 145 + __virtio32 max_active_zones; 146 + __virtio32 max_append_sectors; 147 + __virtio32 write_granularity; 148 148 __u8 model; 149 149 __u8 unused2[3]; 150 150 } zoned; ··· 241 241 */ 242 242 struct virtio_blk_zone_descriptor { 243 243 /* Zone capacity */ 244 - __le64 z_cap; 244 + __virtio64 z_cap; 245 245 /* The starting sector of the zone */ 246 - __le64 z_start; 246 + __virtio64 z_start; 247 247 /* Zone write pointer position in sectors */ 248 - __le64 z_wp; 248 + __virtio64 z_wp; 249 249 /* Zone type */ 250 250 __u8 z_type; 251 251 /* Zone state */ ··· 254 254 }; 255 255 256 256 struct virtio_blk_zone_report { 257 - __le64 nr_zones; 257 + __virtio64 nr_zones; 258 258 __u8 reserved[56]; 259 259 struct virtio_blk_zone_descriptor zones[]; 260 260 };
-1
include/ufs/ufshcd.h
··· 979 979 struct completion *uic_async_done; 980 980 981 981 enum ufshcd_state ufshcd_state; 982 - bool logical_unit_scan_finished; 983 982 u32 eh_flags; 984 983 u32 intr_mask; 985 984 u16 ee_ctrl_mask;
+1 -1
io_uring/io_uring.c
··· 2789 2789 io_eventfd_unregister(ctx); 2790 2790 io_alloc_cache_free(&ctx->apoll_cache, io_apoll_cache_free); 2791 2791 io_alloc_cache_free(&ctx->netmsg_cache, io_netmsg_cache_free); 2792 - mutex_unlock(&ctx->uring_lock); 2793 2792 io_destroy_buffers(ctx); 2793 + mutex_unlock(&ctx->uring_lock); 2794 2794 if (ctx->sq_creds) 2795 2795 put_cred(ctx->sq_creds); 2796 2796 if (ctx->submitter_task)
+4 -3
io_uring/kbuf.c
··· 228 228 return i; 229 229 } 230 230 231 - /* the head kbuf is the list itself */ 231 + /* protects io_buffers_cache */ 232 + lockdep_assert_held(&ctx->uring_lock); 233 + 232 234 while (!list_empty(&bl->buf_list)) { 233 235 struct io_buffer *nxt; 234 236 235 237 nxt = list_first_entry(&bl->buf_list, struct io_buffer, list); 236 - list_del(&nxt->list); 238 + list_move(&nxt->list, &ctx->io_buffers_cache); 237 239 if (++i == nbufs) 238 240 return i; 239 241 cond_resched(); 240 242 } 241 - i++; 242 243 243 244 return i; 244 245 }
+3 -3
kernel/dma/swiotlb.c
··· 623 623 phys_to_dma_unencrypted(dev, mem->start) & boundary_mask; 624 624 unsigned long max_slots = get_max_slots(boundary_mask); 625 625 unsigned int iotlb_align_mask = 626 - dma_get_min_align_mask(dev) & ~(IO_TLB_SIZE - 1); 626 + dma_get_min_align_mask(dev) | alloc_align_mask; 627 627 unsigned int nslots = nr_slots(alloc_size), stride; 628 628 unsigned int offset = swiotlb_align_offset(dev, orig_addr); 629 629 unsigned int index, slots_checked, count = 0, i; ··· 639 639 * allocations. 640 640 */ 641 641 if (alloc_size >= PAGE_SIZE) 642 - iotlb_align_mask &= PAGE_MASK; 643 - iotlb_align_mask &= alloc_align_mask; 642 + iotlb_align_mask |= ~PAGE_MASK; 643 + iotlb_align_mask &= ~(IO_TLB_SIZE - 1); 644 644 645 645 /* 646 646 * For mappings with an alignment requirement don't bother looping to
+8 -6
kernel/events/core.c
··· 12173 12173 /* 12174 12174 * If its not a per-cpu rb, it must be the same task. 12175 12175 */ 12176 - if (output_event->cpu == -1 && output_event->ctx != event->ctx) 12176 + if (output_event->cpu == -1 && output_event->hw.target != event->hw.target) 12177 12177 goto out; 12178 12178 12179 12179 /* ··· 12893 12893 __perf_pmu_remove(src_ctx, src_cpu, pmu, &src_ctx->pinned_groups, &events); 12894 12894 __perf_pmu_remove(src_ctx, src_cpu, pmu, &src_ctx->flexible_groups, &events); 12895 12895 12896 - /* 12897 - * Wait for the events to quiesce before re-instating them. 12898 - */ 12899 - synchronize_rcu(); 12896 + if (!list_empty(&events)) { 12897 + /* 12898 + * Wait for the events to quiesce before re-instating them. 12899 + */ 12900 + synchronize_rcu(); 12900 12901 12901 - __perf_pmu_install(dst_ctx, dst_cpu, pmu, &events); 12902 + __perf_pmu_install(dst_ctx, dst_cpu, pmu, &events); 12903 + } 12902 12904 12903 12905 mutex_unlock(&dst_ctx->mutex); 12904 12906 mutex_unlock(&src_ctx->mutex);
+3
kernel/fork.c
··· 617 617 if (retval) 618 618 goto out; 619 619 620 + mt_clear_in_rcu(vmi.mas.tree); 620 621 for_each_vma(old_vmi, mpnt) { 621 622 struct file *file; 622 623 ··· 701 700 retval = arch_dup_mmap(oldmm, mm); 702 701 loop_out: 703 702 vma_iter_free(&vmi); 703 + if (!retval) 704 + mt_set_in_rcu(vmi.mas.tree); 704 705 out: 705 706 mmap_write_unlock(mm); 706 707 flush_tlb_mm(oldmm);
+19 -8
kernel/rcu/tree.c
··· 3024 3024 return !!READ_ONCE(krcp->head); 3025 3025 } 3026 3026 3027 + static bool 3028 + need_wait_for_krwp_work(struct kfree_rcu_cpu_work *krwp) 3029 + { 3030 + int i; 3031 + 3032 + for (i = 0; i < FREE_N_CHANNELS; i++) 3033 + if (!list_empty(&krwp->bulk_head_free[i])) 3034 + return true; 3035 + 3036 + return !!krwp->head_free; 3037 + } 3038 + 3027 3039 static int krc_count(struct kfree_rcu_cpu *krcp) 3028 3040 { 3029 3041 int sum = atomic_read(&krcp->head_count); ··· 3119 3107 for (i = 0; i < KFREE_N_BATCHES; i++) { 3120 3108 struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]); 3121 3109 3122 - // Try to detach bulk_head or head and attach it over any 3123 - // available corresponding free channel. It can be that 3124 - // a previous RCU batch is in progress, it means that 3125 - // immediately to queue another one is not possible so 3126 - // in that case the monitor work is rearmed. 3127 - if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) || 3128 - (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) || 3129 - (READ_ONCE(krcp->head) && !krwp->head_free)) { 3110 + // Try to detach bulk_head or head and attach it, only when 3111 + // all channels are free. Any channel is not free means at krwp 3112 + // there is on-going rcu work to handle krwp's free business. 3113 + if (need_wait_for_krwp_work(krwp)) 3114 + continue; 3130 3115 3116 + // kvfree_rcu_drain_ready() might handle this krcp, if so give up. 3117 + if (need_offload_krc(krcp)) { 3131 3118 // Channel 1 corresponds to the SLAB-pointer bulk path. 3132 3119 // Channel 2 corresponds to vmalloc-pointer bulk path. 3133 3120 for (j = 0; j < FREE_N_CHANNELS; j++) {
+9 -6
kernel/trace/ftrace.c
··· 5667 5667 ret = 0; 5668 5668 } 5669 5669 5670 - if (unlikely(ret && new_direct)) { 5671 - direct->count++; 5672 - list_del_rcu(&new_direct->next); 5673 - synchronize_rcu_tasks(); 5674 - kfree(new_direct); 5675 - ftrace_direct_func_count--; 5670 + if (ret) { 5671 + direct->addr = old_addr; 5672 + if (unlikely(new_direct)) { 5673 + direct->count++; 5674 + list_del_rcu(&new_direct->next); 5675 + synchronize_rcu_tasks(); 5676 + kfree(new_direct); 5677 + ftrace_direct_func_count--; 5678 + } 5676 5679 } 5677 5680 5678 5681 out_unlock:
+1 -1
kernel/trace/trace_events_synth.c
··· 44 44 45 45 static const char *err_text[] = { ERRORS }; 46 46 47 - DEFINE_MUTEX(lastcmd_mutex); 47 + static DEFINE_MUTEX(lastcmd_mutex); 48 48 static char *last_cmd; 49 49 50 50 static int errpos(const char *str)
+2 -2
lib/Kconfig.debug
··· 1143 1143 1144 1144 config SCHED_DEBUG 1145 1145 bool "Collect scheduler debugging info" 1146 - depends on DEBUG_KERNEL && PROC_FS 1146 + depends on DEBUG_KERNEL && DEBUG_FS 1147 1147 default y 1148 1148 help 1149 1149 If you say Y here, the /sys/kernel/debug/sched file will be provided ··· 1392 1392 range 10 30 1393 1393 default 14 1394 1394 help 1395 - Try increasing this value if you need large MAX_STACK_TRACE_ENTRIES. 1395 + Try increasing this value if you need large STACK_TRACE_HASH_SIZE. 1396 1396 1397 1397 config LOCKDEP_CIRCULAR_QUEUE_BITS 1398 1398 int "Bitsize for elements in circular_queue struct"
+189 -96
lib/maple_tree.c
··· 185 185 */ 186 186 static void ma_free_rcu(struct maple_node *node) 187 187 { 188 - node->parent = ma_parent_ptr(node); 188 + WARN_ON(node->parent != ma_parent_ptr(node)); 189 189 call_rcu(&node->rcu, mt_free_rcu); 190 190 } 191 191 ··· 539 539 */ 540 540 static inline bool ma_dead_node(const struct maple_node *node) 541 541 { 542 - struct maple_node *parent = (void *)((unsigned long) 543 - node->parent & ~MAPLE_NODE_MASK); 542 + struct maple_node *parent; 544 543 544 + /* Do not reorder reads from the node prior to the parent check */ 545 + smp_rmb(); 546 + parent = (void *)((unsigned long) node->parent & ~MAPLE_NODE_MASK); 545 547 return (parent == node); 546 548 } 549 + 547 550 /* 548 551 * mte_dead_node() - check if the @enode is dead. 549 552 * @enode: The encoded maple node ··· 558 555 struct maple_node *parent, *node; 559 556 560 557 node = mte_to_node(enode); 558 + /* Do not reorder reads from the node prior to the parent check */ 559 + smp_rmb(); 561 560 parent = mte_parent(enode); 562 561 return (parent == node); 563 562 } ··· 629 624 * ma_pivots() - Get a pointer to the maple node pivots. 630 625 * @node - the maple node 631 626 * @type - the node type 627 + * 628 + * In the event of a dead node, this array may be %NULL 632 629 * 633 630 * Return: A pointer to the maple node pivots 634 631 */ ··· 824 817 return rcu_dereference_check(slots[offset], mt_locked(mt)); 825 818 } 826 819 820 + static inline void *mt_slot_locked(struct maple_tree *mt, void __rcu **slots, 821 + unsigned char offset) 822 + { 823 + return rcu_dereference_protected(slots[offset], mt_locked(mt)); 824 + } 827 825 /* 828 826 * mas_slot_locked() - Get the slot value when holding the maple tree lock. 
829 827 * @mas: The maple state ··· 840 828 static inline void *mas_slot_locked(struct ma_state *mas, void __rcu **slots, 841 829 unsigned char offset) 842 830 { 843 - return rcu_dereference_protected(slots[offset], mt_locked(mas->tree)); 831 + return mt_slot_locked(mas->tree, slots, offset); 844 832 } 845 833 846 834 /* ··· 909 897 910 898 meta->gap = offset; 911 899 meta->end = end; 900 + } 901 + 902 + /* 903 + * mt_clear_meta() - clear the metadata information of a node, if it exists 904 + * @mt: The maple tree 905 + * @mn: The maple node 906 + * @type: The maple node type 907 + * @offset: The offset of the highest sub-gap in this node. 908 + * @end: The end of the data in this node. 909 + */ 910 + static inline void mt_clear_meta(struct maple_tree *mt, struct maple_node *mn, 911 + enum maple_type type) 912 + { 913 + struct maple_metadata *meta; 914 + unsigned long *pivots; 915 + void __rcu **slots; 916 + void *next; 917 + 918 + switch (type) { 919 + case maple_range_64: 920 + pivots = mn->mr64.pivot; 921 + if (unlikely(pivots[MAPLE_RANGE64_SLOTS - 2])) { 922 + slots = mn->mr64.slot; 923 + next = mt_slot_locked(mt, slots, 924 + MAPLE_RANGE64_SLOTS - 1); 925 + if (unlikely((mte_to_node(next) && 926 + mte_node_type(next)))) 927 + return; /* no metadata, could be node */ 928 + } 929 + fallthrough; 930 + case maple_arange_64: 931 + meta = ma_meta(mn, type); 932 + break; 933 + default: 934 + return; 935 + } 936 + 937 + meta->gap = 0; 938 + meta->end = 0; 912 939 } 913 940 914 941 /* ··· 1147 1096 a_type = mas_parent_enum(mas, p_enode); 1148 1097 a_node = mte_parent(p_enode); 1149 1098 a_slot = mte_parent_slot(p_enode); 1150 - pivots = ma_pivots(a_node, a_type); 1151 1099 a_enode = mt_mk_node(a_node, a_type); 1100 + pivots = ma_pivots(a_node, a_type); 1101 + 1102 + if (unlikely(ma_dead_node(a_node))) 1103 + return 1; 1152 1104 1153 1105 if (!set_min && a_slot) { 1154 1106 set_min = true; ··· 1408 1354 mas->max = ULONG_MAX; 1409 1355 mas->depth = 0; 1410 1356 1357 + 
retry: 1411 1358 root = mas_root(mas); 1412 1359 /* Tree with nodes */ 1413 1360 if (likely(xa_is_node(root))) { 1414 1361 mas->depth = 1; 1415 1362 mas->node = mte_safe_root(root); 1416 1363 mas->offset = 0; 1364 + if (mte_dead_node(mas->node)) 1365 + goto retry; 1366 + 1417 1367 return NULL; 1418 1368 } 1419 1369 ··· 1459 1401 { 1460 1402 unsigned char offset; 1461 1403 1404 + if (!pivots) 1405 + return 0; 1406 + 1462 1407 if (type == maple_arange_64) 1463 1408 return ma_meta_end(node, type); 1464 1409 ··· 1497 1436 return ma_meta_end(node, type); 1498 1437 1499 1438 pivots = ma_pivots(node, type); 1439 + if (unlikely(ma_dead_node(node))) 1440 + return 0; 1441 + 1500 1442 offset = mt_pivots[type] - 1; 1501 1443 if (likely(!pivots[offset])) 1502 1444 return ma_meta_end(node, type); ··· 1788 1724 rcu_assign_pointer(slots[offset], mas->node); 1789 1725 } 1790 1726 1791 - if (!advanced) 1727 + if (!advanced) { 1728 + mte_set_node_dead(old_enode); 1792 1729 mas_free(mas, old_enode); 1730 + } 1793 1731 } 1794 1732 1795 1733 /* ··· 3725 3659 slot++; 3726 3660 mas->depth = 1; 3727 3661 mas_set_height(mas); 3728 - 3662 + ma_set_meta(node, maple_leaf_64, 0, slot); 3729 3663 /* swap the new root into the tree */ 3730 3664 rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node)); 3731 - ma_set_meta(node, maple_leaf_64, 0, slot); 3732 3665 return slot; 3733 3666 } 3734 3667 ··· 3940 3875 end = ma_data_end(node, type, pivots, max); 3941 3876 if (unlikely(ma_dead_node(node))) 3942 3877 goto dead_node; 3943 - 3944 - if (pivots[offset] >= mas->index) 3945 - goto next; 3946 - 3947 3878 do { 3948 - offset++; 3949 - } while ((offset < end) && (pivots[offset] < mas->index)); 3879 + if (pivots[offset] >= mas->index) { 3880 + max = pivots[offset]; 3881 + break; 3882 + } 3883 + } while (++offset < end); 3950 3884 3951 - if (likely(offset > end)) 3952 - max = pivots[offset]; 3953 - 3954 - next: 3955 3885 slots = ma_slots(node, type); 3956 3886 next = mt_slot(mas->tree, slots, 
offset); 3957 3887 if (unlikely(ma_dead_node(node))) ··· 4224 4164 done: 4225 4165 mas_leaf_set_meta(mas, newnode, dst_pivots, maple_leaf_64, new_end); 4226 4166 if (in_rcu) { 4167 + mte_set_node_dead(mas->node); 4227 4168 mas->node = mt_mk_node(newnode, wr_mas->type); 4228 4169 mas_replace(mas, false); 4229 4170 } else { ··· 4566 4505 node = mas_mn(mas); 4567 4506 slots = ma_slots(node, mt); 4568 4507 pivots = ma_pivots(node, mt); 4508 + if (unlikely(ma_dead_node(node))) 4509 + return 1; 4510 + 4569 4511 mas->max = pivots[offset]; 4570 4512 if (offset) 4571 4513 mas->min = pivots[offset - 1] + 1; ··· 4590 4526 slots = ma_slots(node, mt); 4591 4527 pivots = ma_pivots(node, mt); 4592 4528 offset = ma_data_end(node, mt, pivots, mas->max); 4529 + if (unlikely(ma_dead_node(node))) 4530 + return 1; 4531 + 4593 4532 if (offset) 4594 4533 mas->min = pivots[offset - 1] + 1; 4595 4534 ··· 4641 4574 struct maple_enode *enode; 4642 4575 int level = 0; 4643 4576 unsigned char offset; 4577 + unsigned char node_end; 4644 4578 enum maple_type mt; 4645 4579 void __rcu **slots; 4646 4580 ··· 4665 4597 node = mas_mn(mas); 4666 4598 mt = mte_node_type(mas->node); 4667 4599 pivots = ma_pivots(node, mt); 4668 - } while (unlikely(offset == ma_data_end(node, mt, pivots, mas->max))); 4600 + node_end = ma_data_end(node, mt, pivots, mas->max); 4601 + if (unlikely(ma_dead_node(node))) 4602 + return 1; 4603 + 4604 + } while (unlikely(offset == node_end)); 4669 4605 4670 4606 slots = ma_slots(node, mt); 4671 4607 pivot = mas_safe_pivot(mas, pivots, ++offset, mt); ··· 4685 4613 mt = mte_node_type(mas->node); 4686 4614 slots = ma_slots(node, mt); 4687 4615 pivots = ma_pivots(node, mt); 4616 + if (unlikely(ma_dead_node(node))) 4617 + return 1; 4618 + 4688 4619 offset = 0; 4689 4620 pivot = pivots[0]; 4690 4621 } ··· 4734 4659 return NULL; 4735 4660 } 4736 4661 4737 - pivots = ma_pivots(node, type); 4738 4662 slots = ma_slots(node, type); 4739 - mas->index = mas_safe_min(mas, pivots, mas->offset); 
+	pivots = ma_pivots(node, type);
 	count = ma_data_end(node, type, pivots, mas->max);
-	if (ma_dead_node(node))
+	if (unlikely(ma_dead_node(node)))
+		return NULL;
+
+	mas->index = mas_safe_min(mas, pivots, mas->offset);
+	if (unlikely(ma_dead_node(node)))
 		return NULL;
 
 	if (mas->index > max)
···
 
 	slots = ma_slots(mn, mt);
 	pivots = ma_pivots(mn, mt);
+	if (unlikely(ma_dead_node(mn))) {
+		mas_rewalk(mas, index);
+		goto retry;
+	}
+
 	if (offset == mt_pivots[mt])
 		pivot = mas->max;
 	else
···
 }
 
 /*
- * mas_dead_leaves() - Mark all leaves of a node as dead.
+ * mte_dead_leaves() - Mark all leaves of a node as dead.
  * @mas: The maple state
  * @slots: Pointer to the slot array
+ * @type: The maple node type
  *
  * Must hold the write lock.
  *
  * Return: The number of leaves marked as dead.
  */
 static inline
-unsigned char mas_dead_leaves(struct ma_state *mas, void __rcu **slots)
+unsigned char mte_dead_leaves(struct maple_enode *enode, struct maple_tree *mt,
+			      void __rcu **slots)
 {
 	struct maple_node *node;
 	enum maple_type type;
 	void *entry;
 	int offset;
 
-	for (offset = 0; offset < mt_slot_count(mas->node); offset++) {
-		entry = mas_slot_locked(mas, slots, offset);
+	for (offset = 0; offset < mt_slot_count(enode); offset++) {
+		entry = mt_slot(mt, slots, offset);
 		type = mte_node_type(entry);
 		node = mte_to_node(entry);
 		/* Use both node and type to catch LE & BE metadata */
···
 			break;
 
 		mte_set_node_dead(entry);
-		smp_wmb(); /* Needed for RCU */
 		node->type = type;
 		rcu_assign_pointer(slots[offset], node);
 	}
···
 	return offset;
 }
 
-static void __rcu **mas_dead_walk(struct ma_state *mas, unsigned char offset)
+/**
+ * mte_dead_walk() - Walk down a dead tree to just before the leaves
+ * @enode: The maple encoded node
+ * @offset: The starting offset
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
+static void __rcu **mte_dead_walk(struct maple_enode **enode, unsigned char offset)
 {
 	struct maple_node *node, *next;
 	void __rcu **slots = NULL;
 
-	next = mas_mn(mas);
+	next = mte_to_node(*enode);
 	do {
-		mas->node = ma_enode_ptr(next);
-		node = mas_mn(mas);
+		*enode = ma_enode_ptr(next);
+		node = mte_to_node(*enode);
 		slots = ma_slots(node, node->type);
-		next = mas_slot_locked(mas, slots, offset);
+		next = rcu_dereference_protected(slots[offset],
+					lock_is_held(&rcu_callback_map));
 		offset = 0;
 	} while (!ma_is_leaf(next->type));
 
 	return slots;
 }
 
+/**
+ * mt_free_walk() - Walk & free a tree in the RCU callback context
+ * @head: The RCU head that's within the node.
+ *
+ * Note: This can only be used from the RCU callback context.
+ */
 static void mt_free_walk(struct rcu_head *head)
 {
 	void __rcu **slots;
 	struct maple_node *node, *start;
-	struct maple_tree mt;
+	struct maple_enode *enode;
 	unsigned char offset;
 	enum maple_type type;
-	MA_STATE(mas, &mt, 0, 0);
 
 	node = container_of(head, struct maple_node, rcu);
 
 	if (ma_is_leaf(node->type))
 		goto free_leaf;
 
-	mt_init_flags(&mt, node->ma_flags);
-	mas_lock(&mas);
 	start = node;
-	mas.node = mt_mk_node(node, node->type);
-	slots = mas_dead_walk(&mas, 0);
-	node = mas_mn(&mas);
+	enode = mt_mk_node(node, node->type);
+	slots = mte_dead_walk(&enode, 0);
+	node = mte_to_node(enode);
 	do {
 		mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
 
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
-		if ((offset < mt_slots[type]) && (slots[offset]))
-			slots = mas_dead_walk(&mas, offset);
-
-		node = mas_mn(&mas);
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
+		if ((offset < mt_slots[type]) &&
+		    rcu_dereference_protected(slots[offset],
+					      lock_is_held(&rcu_callback_map)))
+			slots = mte_dead_walk(&enode, offset);
+		node = mte_to_node(enode);
 	} while ((node != start) || (node->slot_len < offset));
 
 	slots = ma_slots(node, node->type);
 	mt_free_bulk(node->slot_len, slots);
 
-start_slots_free:
-	mas_unlock(&mas);
 free_leaf:
 	mt_free_rcu(&node->rcu);
 }
 
-static inline void __rcu **mas_destroy_descend(struct ma_state *mas,
-		struct maple_enode *prev, unsigned char offset)
+static inline void __rcu **mte_destroy_descend(struct maple_enode **enode,
+	struct maple_tree *mt, struct maple_enode *prev, unsigned char offset)
 {
 	struct maple_node *node;
-	struct maple_enode *next = mas->node;
+	struct maple_enode *next = *enode;
 	void __rcu **slots = NULL;
+	enum maple_type type;
+	unsigned char next_offset = 0;
 
 	do {
-		mas->node = next;
-		node = mas_mn(mas);
-		slots = ma_slots(node, mte_node_type(mas->node));
-		next = mas_slot_locked(mas, slots, 0);
+		*enode = next;
+		node = mte_to_node(*enode);
+		type = mte_node_type(*enode);
+		slots = ma_slots(node, type);
+		next = mt_slot_locked(mt, slots, next_offset);
 		if ((mte_dead_node(next)))
-			next = mas_slot_locked(mas, slots, 1);
+			next = mt_slot_locked(mt, slots, ++next_offset);
 
-		mte_set_node_dead(mas->node);
-		node->type = mte_node_type(mas->node);
+		mte_set_node_dead(*enode);
+		node->type = type;
 		node->piv_parent = prev;
 		node->parent_slot = offset;
-		offset = 0;
-		prev = mas->node;
+		offset = next_offset;
+		next_offset = 0;
+		prev = *enode;
 	} while (!mte_is_leaf(next));
 
 	return slots;
 }
 
-static void mt_destroy_walk(struct maple_enode *enode, unsigned char ma_flags,
+static void mt_destroy_walk(struct maple_enode *enode, struct maple_tree *mt,
 		bool free)
 {
 	void __rcu **slots;
 	struct maple_node *node = mte_to_node(enode);
 	struct maple_enode *start;
-	struct maple_tree mt;
 
-	MA_STATE(mas, &mt, 0, 0);
-
-	if (mte_is_leaf(enode))
+	if (mte_is_leaf(enode)) {
+		node->type = mte_node_type(enode);
 		goto free_leaf;
+	}
 
-	mt_init_flags(&mt, ma_flags);
-	mas_lock(&mas);
-
-	mas.node = start = enode;
-	slots = mas_destroy_descend(&mas, start, 0);
-	node = mas_mn(&mas);
+	start = enode;
+	slots = mte_destroy_descend(&enode, mt, start, 0);
+	node = mte_to_node(enode); // Updated in the above call.
 	do {
 		enum maple_type type;
 		unsigned char offset;
 		struct maple_enode *parent, *tmp;
 
-		node->slot_len = mas_dead_leaves(&mas, slots);
+		node->slot_len = mte_dead_leaves(enode, mt, slots);
 		if (free)
 			mt_free_bulk(node->slot_len, slots);
 		offset = node->parent_slot + 1;
-		mas.node = node->piv_parent;
-		if (mas_mn(&mas) == node)
-			goto start_slots_free;
+		enode = node->piv_parent;
+		if (mte_to_node(enode) == node)
+			goto free_leaf;
 
-		type = mte_node_type(mas.node);
-		slots = ma_slots(mte_to_node(mas.node), type);
+		type = mte_node_type(enode);
+		slots = ma_slots(mte_to_node(enode), type);
 		if (offset >= mt_slots[type])
 			goto next;
 
-		tmp = mas_slot_locked(&mas, slots, offset);
+		tmp = mt_slot_locked(mt, slots, offset);
 		if (mte_node_type(tmp) && mte_to_node(tmp)) {
-			parent = mas.node;
-			mas.node = tmp;
-			slots = mas_destroy_descend(&mas, parent, offset);
+			parent = enode;
+			enode = tmp;
+			slots = mte_destroy_descend(&enode, mt, parent, offset);
 		}
 next:
-		node = mas_mn(&mas);
-	} while (start != mas.node);
+		node = mte_to_node(enode);
+	} while (start != enode);
 
-	node = mas_mn(&mas);
-	node->slot_len = mas_dead_leaves(&mas, slots);
+	node = mte_to_node(enode);
+	node->slot_len = mte_dead_leaves(enode, mt, slots);
 	if (free)
 		mt_free_bulk(node->slot_len, slots);
-
-start_slots_free:
-	mas_unlock(&mas);
 
 free_leaf:
 	if (free)
 		mt_free_rcu(&node->rcu);
+	else
+		mt_clear_meta(mt, node, node->type);
 }
 
 /*
···
 	struct maple_node *node = mte_to_node(enode);
 
 	if (mt_in_rcu(mt)) {
-		mt_destroy_walk(enode, mt->ma_flags, false);
+		mt_destroy_walk(enode, mt, false);
 		call_rcu(&node->rcu, mt_free_walk);
 	} else {
-		mt_destroy_walk(enode, mt->ma_flags, true);
+		mt_destroy_walk(enode, mt, true);
 	}
 }
···
 	while (likely(!ma_is_leaf(mt))) {
 		MT_BUG_ON(mas->tree, mte_dead_node(mas->node));
 		slots = ma_slots(mn, mt);
-		pivots = ma_pivots(mn, mt);
-		max = pivots[0];
 		entry = mas_slot(mas, slots, 0);
+		pivots = ma_pivots(mn, mt);
 		if (unlikely(ma_dead_node(mn)))
 			return NULL;
+		max = pivots[0];
 		mas->node = entry;
 		mn = mas_mn(mas);
 		mt = mte_node_type(mas->node);
···
 	if (likely(entry))
 		return entry;
 
-	pivots = ma_pivots(mn, mt);
-	mas->index = pivots[0] + 1;
 	mas->offset = 1;
 	entry = mas_slot(mas, slots, 1);
+	pivots = ma_pivots(mn, mt);
 	if (unlikely(ma_dead_node(mn)))
 		return NULL;
 
+	mas->index = pivots[0] + 1;
 	if (mas->index > limit)
 		goto none;
 
+12 -2
mm/hugetlb.c
···
 		       struct folio *pagecache_folio, spinlock_t *ptl)
 {
 	const bool unshare = flags & FAULT_FLAG_UNSHARE;
-	pte_t pte;
+	pte_t pte = huge_ptep_get(ptep);
 	struct hstate *h = hstate_vma(vma);
 	struct page *old_page;
 	struct folio *new_folio;
···
 	vm_fault_t ret = 0;
 	unsigned long haddr = address & huge_page_mask(h);
 	struct mmu_notifier_range range;
+
+	/*
+	 * Never handle CoW for uffd-wp protected pages. It should be only
+	 * handled when the uffd-wp protection is removed.
+	 *
+	 * Note that only the CoW optimization path (in hugetlb_no_page())
+	 * can trigger this, because hugetlb_fault() will always resolve
+	 * uffd-wp bit first.
+	 */
+	if (!unshare && huge_pte_uffd_wp(pte))
+		return 0;
 
 	/*
 	 * hugetlb does not support FOLL_FORCE-style write faults that keep the
···
 		return 0;
 	}
 
-	pte = huge_ptep_get(ptep);
 	old_page = pte_page(pte);
 
 	delayacct_wpcopy_start();
+16 -16
mm/kfence/core.c
···
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;
 
-		/* Verify we do not have a compound head page. */
-		if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
-			return addr;
-
 		__folio_set_slab(slab_folio(slab));
 #ifdef CONFIG_MEMCG
···
 
 		/* Protect the right redzone. */
 		if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
-			return addr;
+			goto reset_slab;
 
 		addr += 2 * PAGE_SIZE;
 	}
 
 	return 0;
+
+reset_slab:
+	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
+		struct slab *slab = page_slab(nth_page(pages, i));
+
+		if (!i || (i % 2))
+			continue;
+#ifdef CONFIG_MEMCG
+		slab->memcg_data = 0;
+#endif
+		__folio_clear_slab(slab_folio(slab));
+	}
+
+	return addr;
 }
 
 static bool __init kfence_init_pool_early(void)
···
 	 * fails for the first page, and therefore expect addr==__kfence_pool in
 	 * most failure cases.
	 */
-	for (char *p = (char *)addr; p < __kfence_pool + KFENCE_POOL_SIZE; p += PAGE_SIZE) {
-		struct slab *slab = virt_to_slab(p);
-
-		if (!slab)
-			continue;
-#ifdef CONFIG_MEMCG
-		slab->memcg_data = 0;
-#endif
-		__folio_clear_slab(slab_folio(slab));
-	}
 	memblock_free_late(__pa(addr), KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool));
 	__kfence_pool = NULL;
 	return false;
+15 -1
mm/memory.c
···
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				      vma->vm_mm, vmf->address & PAGE_MASK,
 				      (vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
···
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
+2 -1
mm/mmap.c
···
 	int count = 0;
 	int error = -ENOMEM;
 	MA_STATE(mas_detach, &mt_detach, 0, 0);
-	mt_init_flags(&mt_detach, MT_FLAGS_LOCK_EXTERN);
+	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
 	mt_set_external_lock(&mt_detach, &mm->mmap_lock);
 
 	/*
···
 	 */
 	set_bit(MMF_OOM_SKIP, &mm->flags);
 	mmap_write_lock(mm);
+	mt_clear_in_rcu(&mm->mm_mt);
 	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
 		      USER_PGTABLES_CEILING);
 	tlb_finish_mmu(&tlb);
+2 -1
mm/swapfile.c
···
 {
 	int nid;
 
+	assert_spin_locked(&p->lock);
 	for_each_node(nid)
 		plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]);
 }
···
 		spin_unlock(&swap_lock);
 		goto out_dput;
 	}
-	del_from_avail_list(p);
 	spin_lock(&p->lock);
+	del_from_avail_list(p);
 	if (p->prio < 0) {
 		struct swap_info_struct *si = p;
 		int nid;
+5 -3
mm/vmalloc.c
···
 	 * allocation request, free them via vfree() if any.
 	 */
 	if (area->nr_pages != nr_small_pages) {
-		warn_alloc(gfp_mask, NULL,
-			"vmalloc error: size %lu, page order %u, failed to allocate pages",
-			area->nr_pages * PAGE_SIZE, page_order);
+		/* vm_area_alloc_pages() can also fail due to a fatal signal */
+		if (!fatal_signal_pending(current))
+			warn_alloc(gfp_mask, NULL,
+				"vmalloc error: size %lu, page order %u, failed to allocate pages",
+				area->nr_pages * PAGE_SIZE, page_order);
 		goto fail;
 	}
 
+4
net/9p/trans_xen.c
···
 	write_unlock(&xen_9pfs_lock);
 
 	for (i = 0; i < priv->num_rings; i++) {
+		struct xen_9pfs_dataring *ring = &priv->rings[i];
+
+		cancel_work_sync(&ring->work);
+
 		if (!priv->rings[i].intf)
 			break;
 		if (priv->rings[i].irq > 0)
+53 -36
net/bluetooth/hci_conn.c
···
 };
 
 /* This function requires the caller holds hdev->lock */
-static void hci_connect_le_scan_cleanup(struct hci_conn *conn)
+static void hci_connect_le_scan_cleanup(struct hci_conn *conn, u8 status)
 {
 	struct hci_conn_params *params;
 	struct hci_dev *hdev = conn->hdev;
···
 
 	params = hci_pend_le_action_lookup(&hdev->pend_le_conns, bdaddr,
 					   bdaddr_type);
-	if (!params || !params->explicit_connect)
+	if (!params)
 		return;
+
+	if (params->conn) {
+		hci_conn_drop(params->conn);
+		hci_conn_put(params->conn);
+		params->conn = NULL;
+	}
+
+	if (!params->explicit_connect)
+		return;
+
+	/* If the status indicates successful cancellation of
+	 * the attempt (i.e. Unknown Connection Id) there's no point of
+	 * notifying failure since we'll go back to keep trying to
+	 * connect. The only exception is explicit connect requests
+	 * where a timeout + cancel does indicate an actual failure.
+	 */
+	if (status && status != HCI_ERROR_UNKNOWN_CONN_ID)
+		mgmt_connect_failed(hdev, &conn->dst, conn->type,
+				    conn->dst_type, status);
 
 	/* The connection attempt was doing scan for new RPA, and is
 	 * in scan phase. If params are not associated with any other
···
 	rcu_read_unlock();
 
 	if (c == conn) {
-		hci_connect_le_scan_cleanup(conn);
+		hci_connect_le_scan_cleanup(conn, 0x00);
 		hci_conn_cleanup(conn);
 	}
 
···
 	return conn;
 }
 
+static bool hci_conn_unlink(struct hci_conn *conn)
+{
+	if (!conn->link)
+		return false;
+
+	conn->link->link = NULL;
+	conn->link = NULL;
+
+	return true;
+}
+
 int hci_conn_del(struct hci_conn *conn)
 {
 	struct hci_dev *hdev = conn->hdev;
···
 	cancel_delayed_work_sync(&conn->idle_work);
 
 	if (conn->type == ACL_LINK) {
-		struct hci_conn *sco = conn->link;
-		if (sco) {
-			sco->link = NULL;
+		struct hci_conn *link = conn->link;
+
+		if (link) {
+			hci_conn_unlink(conn);
 			/* Due to race, SCO connection might be not established
 			 * yet at this point. Delete it now, otherwise it is
 			 * possible for it to be stuck and can't be deleted.
 			 */
-			if (sco->handle == HCI_CONN_HANDLE_UNSET)
-				hci_conn_del(sco);
+			if (link->handle == HCI_CONN_HANDLE_UNSET)
+				hci_conn_del(link);
 		}
 
 		/* Unacked frames */
···
 		struct hci_conn *acl = conn->link;
 
 		if (acl) {
-			acl->link = NULL;
+			hci_conn_unlink(conn);
 			hci_conn_drop(acl);
 		}
 
···
 static void hci_le_conn_failed(struct hci_conn *conn, u8 status)
 {
 	struct hci_dev *hdev = conn->hdev;
-	struct hci_conn_params *params;
 
-	params = hci_pend_le_action_lookup(&hdev->pend_le_conns, &conn->dst,
-					   conn->dst_type);
-	if (params && params->conn) {
-		hci_conn_drop(params->conn);
-		hci_conn_put(params->conn);
-		params->conn = NULL;
-	}
-
-	/* If the status indicates successful cancellation of
-	 * the attempt (i.e. Unknown Connection Id) there's no point of
-	 * notifying failure since we'll go back to keep trying to
-	 * connect. The only exception is explicit connect requests
-	 * where a timeout + cancel does indicate an actual failure.
-	 */
-	if (status != HCI_ERROR_UNKNOWN_CONN_ID ||
-	    (params && params->explicit_connect))
-		mgmt_connect_failed(hdev, &conn->dst, conn->type,
-				    conn->dst_type, status);
-
-	/* Since we may have temporarily stopped the background scanning in
-	 * favor of connection establishment, we should restart it.
-	 */
-	hci_update_passive_scan(hdev);
+	hci_connect_le_scan_cleanup(conn, status);
 
 	/* Enable advertising in case this was a failed connection
 	 * attempt as a peripheral.
···
 {
 	struct hci_conn *conn = data;
 
+	bt_dev_dbg(hdev, "err %d", err);
+
 	hci_dev_lock(hdev);
 
 	if (!err) {
-		hci_connect_le_scan_cleanup(conn);
+		hci_connect_le_scan_cleanup(conn, 0x00);
 		goto done;
 	}
-
-	bt_dev_err(hdev, "request failed to create LE connection: err %d", err);
 
 	/* Check if connection is still pending */
 	if (conn != hci_lookup_le_connect(hdev))
···
 		c->state = BT_CLOSED;
 
 		hci_disconn_cfm(c, HCI_ERROR_LOCAL_HOST_TERM);
+
+		/* Unlink before deleting otherwise it is possible that
+		 * hci_conn_del removes the link which may cause the list to
+		 * contain items already freed.
+		 */
+		hci_conn_unlink(c);
 		hci_conn_del(c);
 	}
 }
···
 int hci_abort_conn(struct hci_conn *conn, u8 reason)
 {
 	int r = 0;
+
+	if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
+		return 0;
 
 	switch (conn->state) {
 	case BT_CONNECTED:
+7 -11
net/bluetooth/hci_event.c
···
 
 	conn->resp_addr_type = peer_addr_type;
 	bacpy(&conn->resp_addr, peer_addr);
-
-	/* We don't want the connection attempt to stick around
-	 * indefinitely since LE doesn't have a page timeout concept
-	 * like BR/EDR. Set a timer for any connection that doesn't use
-	 * the accept list for connecting.
-	 */
-	if (filter_policy == HCI_LE_USE_PEER_ADDR)
-		queue_delayed_work(conn->hdev->workqueue,
-				   &conn->le_conn_timeout,
-				   conn->conn_timeout);
 }
 
 static void hci_cs_le_create_conn(struct hci_dev *hdev, u8 status)
···
 	if (status)
 		goto unlock;
 
+	/* Drop the connection if it has been aborted */
+	if (test_bit(HCI_CONN_CANCEL, &conn->flags)) {
+		hci_conn_drop(conn);
+		goto unlock;
+	}
+
 	if (conn->dst_type == ADDR_LE_DEV_PUBLIC)
 		addr_type = BDADDR_LE_PUBLIC;
 	else
···
 		bis->iso_qos.in.latency = le16_to_cpu(ev->interval) * 125 / 100;
 		bis->iso_qos.in.sdu = le16_to_cpu(ev->max_pdu);
 
-		hci_connect_cfm(bis, ev->status);
+		hci_iso_setup_path(bis);
 	}
 
 	hci_dev_unlock(hdev);
+10 -3
net/bluetooth/hci_sync.c
···
 
 	skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk);
 	if (IS_ERR(skb)) {
-		bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
-			   PTR_ERR(skb));
+		if (!event)
+			bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
+				   PTR_ERR(skb));
 		return PTR_ERR(skb);
 	}
 
···
 	if (test_bit(HCI_CONN_SCANNING, &conn->flags))
 		return 0;
 
+	if (test_and_set_bit(HCI_CONN_CANCEL, &conn->flags))
+		return 0;
+
 	return __hci_cmd_sync_status(hdev, HCI_OP_LE_CREATE_CONN_CANCEL,
-				     6, &conn->dst, HCI_CMD_TIMEOUT);
+				     0, NULL, HCI_CMD_TIMEOUT);
 }
 
 static int hci_connect_cancel_sync(struct hci_dev *hdev, struct hci_conn *conn)
···
 				       conn->conn_timeout, NULL);
 
 done:
+	if (err == -ETIMEDOUT)
+		hci_le_connect_cancel_sync(hdev, conn);
+
 	/* Re-enable advertising after the connection attempt is finished. */
 	hci_resume_advertising_sync(hdev);
 	return err;
+1 -1
net/bluetooth/hidp/core.c
···
 static void hidp_del_timer(struct hidp_session *session)
 {
 	if (session->idle_to > 0)
-		del_timer(&session->timer);
+		del_timer_sync(&session->timer);
 }
 
 static void hidp_process_report(struct hidp_session *session, int type,
+6 -18
net/bluetooth/l2cap_core.c
···
 
 	BT_DBG("scid 0x%4.4x dcid 0x%4.4x", scid, dcid);
 
-	mutex_lock(&conn->chan_lock);
-
-	chan = __l2cap_get_chan_by_scid(conn, dcid);
+	chan = l2cap_get_chan_by_scid(conn, dcid);
 	if (!chan) {
-		mutex_unlock(&conn->chan_lock);
 		cmd_reject_invalid_cid(conn, cmd->ident, dcid, scid);
 		return 0;
 	}
-
-	l2cap_chan_hold(chan);
-	l2cap_chan_lock(chan);
 
 	rsp.dcid = cpu_to_le16(chan->scid);
 	rsp.scid = cpu_to_le16(chan->dcid);
···
 
 	chan->ops->set_shutdown(chan);
 
+	mutex_lock(&conn->chan_lock);
 	l2cap_chan_del(chan, ECONNRESET);
+	mutex_unlock(&conn->chan_lock);
 
 	chan->ops->close(chan);
 
 	l2cap_chan_unlock(chan);
 	l2cap_chan_put(chan);
-
-	mutex_unlock(&conn->chan_lock);
 
 	return 0;
 }
···
 
 	BT_DBG("dcid 0x%4.4x scid 0x%4.4x", dcid, scid);
 
-	mutex_lock(&conn->chan_lock);
-
-	chan = __l2cap_get_chan_by_scid(conn, scid);
+	chan = l2cap_get_chan_by_scid(conn, scid);
 	if (!chan) {
 		mutex_unlock(&conn->chan_lock);
 		return 0;
 	}
 
-	l2cap_chan_hold(chan);
-	l2cap_chan_lock(chan);
-
 	if (chan->state != BT_DISCONN) {
 		l2cap_chan_unlock(chan);
 		l2cap_chan_put(chan);
-		mutex_unlock(&conn->chan_lock);
 		return 0;
 	}
 
+	mutex_lock(&conn->chan_lock);
 	l2cap_chan_del(chan, 0);
+	mutex_unlock(&conn->chan_lock);
 
 	chan->ops->close(chan);
 
 	l2cap_chan_unlock(chan);
 	l2cap_chan_put(chan);
-
-	mutex_unlock(&conn->chan_lock);
 
 	return 0;
 }
+49 -36
net/bluetooth/sco.c
···
 	return err;
 }
 
-static int sco_connect(struct hci_dev *hdev, struct sock *sk)
+static int sco_connect(struct sock *sk)
 {
 	struct sco_conn *conn;
 	struct hci_conn *hcon;
+	struct hci_dev *hdev;
 	int err, type;
 
 	BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst);
+
+	hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR);
+	if (!hdev)
+		return -EHOSTUNREACH;
+
+	hci_dev_lock(hdev);
 
 	if (lmp_esco_capable(hdev) && !disable_esco)
 		type = ESCO_LINK;
···
 		type = SCO_LINK;
 
 	if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
-	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)))
-		return -EOPNOTSUPP;
+	    (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+		err = -EOPNOTSUPP;
+		goto unlock;
+	}
 
 	hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
 			       sco_pi(sk)->setting, &sco_pi(sk)->codec);
-	if (IS_ERR(hcon))
-		return PTR_ERR(hcon);
+	if (IS_ERR(hcon)) {
+		err = PTR_ERR(hcon);
+		goto unlock;
+	}
+
+	hci_dev_unlock(hdev);
+	hci_dev_put(hdev);
 
 	conn = sco_conn_add(hcon);
 	if (!conn) {
···
 		return -ENOMEM;
 	}
 
-	/* Update source addr of the socket */
-	bacpy(&sco_pi(sk)->src, &hcon->src);
-
 	err = sco_chan_add(conn, sk, NULL);
 	if (err)
 		return err;
+
+	lock_sock(sk);
+
+	/* Update source addr of the socket */
+	bacpy(&sco_pi(sk)->src, &hcon->src);
 
 	if (hcon->state == BT_CONNECTED) {
 		sco_sock_clear_timer(sk);
···
 		sco_sock_set_timer(sk, sk->sk_sndtimeo);
 	}
 
+	release_sock(sk);
+
+	return err;
+
+unlock:
+	hci_dev_unlock(hdev);
+	hci_dev_put(hdev);
 	return err;
 }
 
···
 {
 	struct sockaddr_sco *sa = (struct sockaddr_sco *) addr;
 	struct sock *sk = sock->sk;
-	struct hci_dev *hdev;
 	int err;
 
 	BT_DBG("sk %p", sk);
···
 	    addr->sa_family != AF_BLUETOOTH)
 		return -EINVAL;
 
-	lock_sock(sk);
-	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
-		err = -EBADFD;
-		goto done;
-	}
+	if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND)
+		return -EBADFD;
 
-	if (sk->sk_type != SOCK_SEQPACKET) {
+	if (sk->sk_type != SOCK_SEQPACKET)
 		err = -EINVAL;
-		goto done;
-	}
 
-	hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR);
-	if (!hdev) {
-		err = -EHOSTUNREACH;
-		goto done;
-	}
-	hci_dev_lock(hdev);
-
+	lock_sock(sk);
 	/* Set destination address and psm */
 	bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr);
+	release_sock(sk);
 
-	err = sco_connect(hdev, sk);
-	hci_dev_unlock(hdev);
-	hci_dev_put(hdev);
+	err = sco_connect(sk);
 	if (err)
-		goto done;
+		return err;
+
+	lock_sock(sk);
 
 	err = bt_sock_wait_state(sk, BT_CONNECTED,
 				 sock_sndtimeo(sk, flags & O_NONBLOCK));
 
-done:
 	release_sock(sk);
 	return err;
 }
···
 		break;
 	}
 
+	release_sock(sk);
+
 	/* find total buffer size required to copy codec + caps */
 	hci_dev_lock(hdev);
 	list_for_each_entry(c, &hdev->local_codecs, list) {
···
 		buf_len += sizeof(struct bt_codecs);
 		if (buf_len > len) {
 			hci_dev_put(hdev);
-			err = -ENOBUFS;
-			break;
+			return -ENOBUFS;
 		}
 		ptr = optval;
 
 		if (put_user(num_codecs, ptr)) {
 			hci_dev_put(hdev);
-			err = -EFAULT;
-			break;
+			return -EFAULT;
 		}
 		ptr += sizeof(num_codecs);
 
···
 			ptr += len;
 		}
 
-		if (!err && put_user(buf_len, optlen))
-			err = -EFAULT;
-
 		hci_dev_unlock(hdev);
 		hci_dev_put(hdev);
+
+		lock_sock(sk);
+
+		if (!err && put_user(buf_len, optlen))
+			err = -EFAULT;
 
 		break;
 
+2 -1
net/core/dev.c
···
 	}
 
 	if (skb_rx_queue_recorded(skb)) {
+		DEBUG_NET_WARN_ON_ONCE(qcount == 0);
 		hash = skb_get_rx_queue(skb);
 		if (hash >= qoffset)
 			hash -= qoffset;
···
 	    dev->rtnl_link_state == RTNL_LINK_INITIALIZED)
 		skb = rtmsg_ifinfo_build_skb(RTM_DELLINK, dev, ~0U, 0,
 					     GFP_KERNEL, NULL, 0,
-					     portid, nlmsg_seq(nlh));
+					     portid, nlh);
 
 	/*
 	 * Flush the unicast and multicast chains
+9 -2
net/core/rtnetlink.c
···
 struct sk_buff *rtmsg_ifinfo_build_skb(int type, struct net_device *dev,
 				       unsigned int change,
 				       u32 event, gfp_t flags, int *new_nsid,
-				       int new_ifindex, u32 portid, u32 seq)
+				       int new_ifindex, u32 portid,
+				       const struct nlmsghdr *nlh)
 {
 	struct net *net = dev_net(dev);
 	struct sk_buff *skb;
 	int err = -ENOBUFS;
+	u32 seq = 0;
 
 	skb = nlmsg_new(if_nlmsg_size(dev, 0), flags);
 	if (skb == NULL)
 		goto errout;
+
+	if (nlmsg_report(nlh))
+		seq = nlmsg_seq(nlh);
+	else
+		portid = 0;
 
 	err = rtnl_fill_ifinfo(skb, dev, dev_net(dev),
 			       type, portid, seq, change, 0, 0, event,
···
 		return;
 
 	skb = rtmsg_ifinfo_build_skb(type, dev, change, event, flags, new_nsid,
-				     new_ifindex, portid, nlmsg_seq(nlh));
+				     new_ifindex, portid, nlh);
 	if (skb)
 		rtmsg_ifinfo_send(skb, dev, flags, portid, nlh);
 }
+8 -8
net/core/skbuff.c
···
 	if (skb_cloned(to))
 		return false;
 
-	/* In general, avoid mixing slab allocated and page_pool allocated
-	 * pages within the same SKB. However when @to is not pp_recycle and
-	 * @from is cloned, we can transition frag pages from page_pool to
-	 * reference counted.
-	 *
-	 * On the other hand, don't allow coalescing two pp_recycle SKBs if
-	 * @from is cloned, in case the SKB is using page_pool fragment
+	/* In general, avoid mixing page_pool and non-page_pool allocated
+	 * pages within the same SKB. Additionally avoid dealing with clones
+	 * with page_pool pages, in case the SKB is using page_pool fragment
 	 * references (PP_FLAG_PAGE_FRAG). Since we only take full page
 	 * references for cloned SKBs at the moment that would result in
 	 * inconsistent reference counts.
+	 * In theory we could take full references if @from is cloned and
+	 * !@to->pp_recycle but its tricky (due to potential race with
+	 * the clone disappearing) and rare, so not worth dealing with.
 	 */
-	if (to->pp_recycle != (from->pp_recycle && !skb_cloned(from)))
+	if (to->pp_recycle != from->pp_recycle ||
+	    (from->pp_recycle && skb_cloned(from)))
 		return false;
 
 	if (len <= skb_tailroom(to)) {
+9 -1
net/core/xdp.c
···
  * bpf_xdp_metadata_rx_hash - Read XDP frame RX hash.
  * @ctx: XDP context pointer.
  * @hash: Return value pointer.
+ * @rss_type: Return value pointer for RSS type.
+ *
+ * The RSS hash type (@rss_type) specifies what portion of packet headers NIC
+ * hardware used when calculating RSS hash value. The RSS type can be decoded
+ * via &enum xdp_rss_hash_type either matching on individual L3/L4 bits
+ * ``XDP_RSS_L*`` or by combined traditional *RSS Hashing Types*
+ * ``XDP_RSS_TYPE_L*``.
  *
  * Return:
  * * Returns 0 on success or ``-errno`` on error.
  * * ``-EOPNOTSUPP`` : means device driver doesn't implement kfunc
  * * ``-ENODATA``    : means no RX-hash available for this frame
  */
-__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash)
+__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
+					 enum xdp_rss_hash_type *rss_type)
 {
 	return -EOPNOTSUPP;
 }
+3
net/ipv4/sysctl_net_ipv4.c
··· 25 25 static int ip_local_port_range_max[] = { 65535, 65535 }; 26 26 static int tcp_adv_win_scale_min = -31; 27 27 static int tcp_adv_win_scale_max = 31; 28 + static int tcp_app_win_max = 31; 28 29 static int tcp_min_snd_mss_min = TCP_MIN_SND_MSS; 29 30 static int tcp_min_snd_mss_max = 65535; 30 31 static int ip_privileged_port_min; ··· 1199 1198 .maxlen = sizeof(u8), 1200 1199 .mode = 0644, 1201 1200 .proc_handler = proc_dou8vec_minmax, 1201 + .extra1 = SYSCTL_ZERO, 1202 + .extra2 = &tcp_app_win_max, 1202 1203 }, 1203 1204 { 1204 1205 .procname = "tcp_adv_win_scale",
+2 -2
net/ipv4/tcp_ipv4.c
··· 2780 2780 static void bpf_iter_tcp_put_batch(struct bpf_tcp_iter_state *iter) 2781 2781 { 2782 2782 while (iter->cur_sk < iter->end_sk) 2783 - sock_put(iter->batch[iter->cur_sk++]); 2783 + sock_gen_put(iter->batch[iter->cur_sk++]); 2784 2784 } 2785 2785 2786 2786 static int bpf_iter_tcp_realloc_batch(struct bpf_tcp_iter_state *iter, ··· 2941 2941 * st->bucket. See tcp_seek_last_pos(). 2942 2942 */ 2943 2943 st->offset++; 2944 - sock_put(iter->batch[iter->cur_sk++]); 2944 + sock_gen_put(iter->batch[iter->cur_sk++]); 2945 2945 } 2946 2946 2947 2947 if (iter->cur_sk < iter->end_sk)
+5 -3
net/ipv6/udp.c
··· 1397 1397 msg->msg_name = &sin; 1398 1398 msg->msg_namelen = sizeof(sin); 1399 1399 do_udp_sendmsg: 1400 - if (ipv6_only_sock(sk)) 1401 - return -ENETUNREACH; 1402 - return udp_sendmsg(sk, msg, len); 1400 + err = ipv6_only_sock(sk) ? 1401 + -ENETUNREACH : udp_sendmsg(sk, msg, len); 1402 + msg->msg_name = sin6; 1403 + msg->msg_namelen = addr_len; 1404 + return err; 1403 1405 } 1404 1406 } 1405 1407
+9 -2
net/mptcp/fastopen.c
··· 9 9 void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow, 10 10 struct request_sock *req) 11 11 { 12 - struct sock *ssk = subflow->tcp_sock; 13 - struct sock *sk = subflow->conn; 12 + struct sock *sk, *ssk; 14 13 struct sk_buff *skb; 15 14 struct tcp_sock *tp; 16 15 16 + /* on early fallback the subflow context is deleted by 17 + * subflow_syn_recv_sock() 18 + */ 19 + if (!subflow) 20 + return; 21 + 22 + ssk = subflow->tcp_sock; 23 + sk = subflow->conn; 17 24 tp = tcp_sk(ssk); 18 25 19 26 subflow->is_mptfo = 1;
+2 -3
net/mptcp/options.c
··· 1192 1192 */ 1193 1193 if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq) { 1194 1194 if (mp_opt.data_fin && mp_opt.data_len == 1 && 1195 - mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64) && 1196 - schedule_work(&msk->work)) 1197 - sock_hold(subflow->conn); 1195 + mptcp_update_rcv_data_fin(msk, mp_opt.data_seq, mp_opt.dsn64)) 1196 + mptcp_schedule_work((struct sock *)msk); 1198 1197 1199 1198 return true; 1200 1199 }
+1 -1
net/mptcp/protocol.c
··· 2626 2626 2627 2627 lock_sock(sk); 2628 2628 state = sk->sk_state; 2629 - if (unlikely(state == TCP_CLOSE)) 2629 + if (unlikely((1 << state) & (TCPF_CLOSE | TCPF_LISTEN))) 2630 2630 goto unlock; 2631 2631 2632 2632 mptcp_check_data_fin_ack(sk);
+6 -12
net/mptcp/subflow.c
··· 408 408 409 409 tcp_send_active_reset(ssk, GFP_ATOMIC); 410 410 tcp_done(ssk); 411 - if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) && 412 - schedule_work(&mptcp_sk(sk)->work)) 413 - return; /* worker will put sk for us */ 411 + if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags)) 412 + mptcp_schedule_work(sk); 414 413 415 414 sock_put(sk); 416 415 } ··· 1100 1101 skb_ext_del(skb, SKB_EXT_MPTCP); 1101 1102 return MAPPING_OK; 1102 1103 } else { 1103 - if (updated && schedule_work(&msk->work)) 1104 - sock_hold((struct sock *)msk); 1104 + if (updated) 1105 + mptcp_schedule_work((struct sock *)msk); 1105 1106 1106 1107 return MAPPING_DATA_FIN; 1107 1108 } ··· 1204 1205 /* sched mptcp worker to remove the subflow if no more data is pending */ 1205 1206 static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk) 1206 1207 { 1207 - struct sock *sk = (struct sock *)msk; 1208 - 1209 1208 if (likely(ssk->sk_state != TCP_CLOSE)) 1210 1209 return; 1211 1210 1212 1211 if (skb_queue_empty(&ssk->sk_receive_queue) && 1213 - !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags)) { 1214 - sock_hold(sk); 1215 - if (!schedule_work(&msk->work)) 1216 - sock_put(sk); 1217 - } 1212 + !test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags)) 1213 + mptcp_schedule_work((struct sock *)msk); 1218 1214 } 1219 1215 1220 1216 static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
+1 -1
net/openvswitch/actions.c
··· 913 913 { 914 914 struct vport *vport = ovs_vport_rcu(dp, out_port); 915 915 916 - if (likely(vport)) { 916 + if (likely(vport && netif_carrier_ok(vport->dev))) { 917 917 u16 mru = OVS_CB(skb)->mru; 918 918 u32 cutlen = OVS_CB(skb)->cutlen; 919 919
+5 -3
net/qrtr/af_qrtr.c
··· 498 498 if (!size || len != ALIGN(size, 4) + hdrlen) 499 499 goto err; 500 500 501 + if ((cb->type == QRTR_TYPE_NEW_SERVER || 502 + cb->type == QRTR_TYPE_RESUME_TX) && 503 + size < sizeof(struct qrtr_ctrl_pkt)) 504 + goto err; 505 + 501 506 if (cb->dst_port != QRTR_PORT_CTRL && cb->type != QRTR_TYPE_DATA && 502 507 cb->type != QRTR_TYPE_RESUME_TX) 503 508 goto err; ··· 514 509 if (cb->type == QRTR_TYPE_NEW_SERVER) { 515 510 /* Remote node endpoint can bridge other distant nodes */ 516 511 const struct qrtr_ctrl_pkt *pkt; 517 - 518 - if (size < sizeof(*pkt)) 519 - goto err; 520 512 521 513 pkt = data + hdrlen; 522 514 qrtr_node_assign(node, le32_to_cpu(pkt->server.node));
+2 -1
net/sctp/stream_interleave.c
··· 1154 1154 1155 1155 #define _sctp_walk_ifwdtsn(pos, chunk, end) \ 1156 1156 for (pos = chunk->subh.ifwdtsn_hdr->skip; \ 1157 - (void *)pos < (void *)chunk->subh.ifwdtsn_hdr->skip + (end); pos++) 1157 + (void *)pos <= (void *)chunk->subh.ifwdtsn_hdr->skip + (end) - \ 1158 + sizeof(struct sctp_ifwdtsn_skip); pos++) 1158 1159 1159 1160 #define sctp_walk_ifwdtsn(pos, ch) \ 1160 1161 _sctp_walk_ifwdtsn((pos), (ch), ntohs((ch)->chunk_hdr->length) - \
+11
net/smc/af_smc.c
··· 3270 3270 sk_common_release(sk); 3271 3271 goto out; 3272 3272 } 3273 + 3274 + /* smc_clcsock_release() does not wait smc->clcsock->sk's 3275 + * destruction; its sk_state might not be TCP_CLOSE after 3276 + * smc->sk is close()d, and TCP timers can be fired later, 3277 + * which need net ref. 3278 + */ 3279 + sk = smc->clcsock->sk; 3280 + __netns_tracker_free(net, &sk->ns_tracker, false); 3281 + sk->sk_net_refcnt = 1; 3282 + get_net_track(net, &sk->ns_tracker, GFP_KERNEL); 3283 + sock_inuse_add(net, 1); 3273 3284 } else { 3274 3285 smc->clcsock = clcsock; 3275 3286 }
+16
tools/testing/radix-tree/maple.c
··· 108 108 MT_BUG_ON(mt, mn->slot[1] != NULL); 109 109 MT_BUG_ON(mt, mas_allocated(&mas) != 0); 110 110 111 + mn->parent = ma_parent_ptr(mn); 111 112 ma_free_rcu(mn); 112 113 mas.node = MAS_START; 113 114 mas_nomem(&mas, GFP_KERNEL); ··· 161 160 MT_BUG_ON(mt, mas_allocated(&mas) != i); 162 161 MT_BUG_ON(mt, !mn); 163 162 MT_BUG_ON(mt, not_empty(mn)); 163 + mn->parent = ma_parent_ptr(mn); 164 164 ma_free_rcu(mn); 165 165 } 166 166 ··· 194 192 MT_BUG_ON(mt, not_empty(mn)); 195 193 MT_BUG_ON(mt, mas_allocated(&mas) != i - 1); 196 194 MT_BUG_ON(mt, !mn); 195 + mn->parent = ma_parent_ptr(mn); 197 196 ma_free_rcu(mn); 198 197 } 199 198 ··· 213 210 mn = mas_pop_node(&mas); 214 211 MT_BUG_ON(mt, not_empty(mn)); 215 212 MT_BUG_ON(mt, mas_allocated(&mas) != j - 1); 213 + mn->parent = ma_parent_ptr(mn); 216 214 ma_free_rcu(mn); 217 215 } 218 216 MT_BUG_ON(mt, mas_allocated(&mas) != 0); ··· 237 233 MT_BUG_ON(mt, mas_allocated(&mas) != i - j); 238 234 mn = mas_pop_node(&mas); 239 235 MT_BUG_ON(mt, not_empty(mn)); 236 + mn->parent = ma_parent_ptr(mn); 240 237 ma_free_rcu(mn); 241 238 MT_BUG_ON(mt, mas_allocated(&mas) != i - j - 1); 242 239 } ··· 274 269 mn = mas_pop_node(&mas); /* get the next node. */ 275 270 MT_BUG_ON(mt, mn == NULL); 276 271 MT_BUG_ON(mt, not_empty(mn)); 272 + mn->parent = ma_parent_ptr(mn); 277 273 ma_free_rcu(mn); 278 274 } 279 275 MT_BUG_ON(mt, mas_allocated(&mas) != 0); ··· 300 294 mn = mas_pop_node(&mas2); /* get the next node. 
*/ 301 295 MT_BUG_ON(mt, mn == NULL); 302 296 MT_BUG_ON(mt, not_empty(mn)); 297 + mn->parent = ma_parent_ptr(mn); 303 298 ma_free_rcu(mn); 304 299 } 305 300 MT_BUG_ON(mt, mas_allocated(&mas2) != 0); ··· 341 334 MT_BUG_ON(mt, mas_allocated(&mas) != MAPLE_ALLOC_SLOTS + 2); 342 335 mn = mas_pop_node(&mas); 343 336 MT_BUG_ON(mt, not_empty(mn)); 337 + mn->parent = ma_parent_ptr(mn); 344 338 ma_free_rcu(mn); 345 339 for (i = 1; i <= MAPLE_ALLOC_SLOTS + 1; i++) { 346 340 mn = mas_pop_node(&mas); 347 341 MT_BUG_ON(mt, not_empty(mn)); 342 + mn->parent = ma_parent_ptr(mn); 348 343 ma_free_rcu(mn); 349 344 } 350 345 MT_BUG_ON(mt, mas_allocated(&mas) != 0); ··· 384 375 mas_node_count(&mas, i); /* Request */ 385 376 mas_nomem(&mas, GFP_KERNEL); /* Fill request */ 386 377 mn = mas_pop_node(&mas); /* get the next node. */ 378 + mn->parent = ma_parent_ptr(mn); 387 379 ma_free_rcu(mn); 388 380 mas_destroy(&mas); 389 381 ··· 392 382 mas_node_count(&mas, i); /* Request */ 393 383 mas_nomem(&mas, GFP_KERNEL); /* Fill request */ 394 384 mn = mas_pop_node(&mas); /* get the next node. */ 385 + mn->parent = ma_parent_ptr(mn); 395 386 ma_free_rcu(mn); 396 387 mn = mas_pop_node(&mas); /* get the next node. */ 388 + mn->parent = ma_parent_ptr(mn); 397 389 ma_free_rcu(mn); 398 390 mn = mas_pop_node(&mas); /* get the next node. 
*/ 391 + mn->parent = ma_parent_ptr(mn); 399 392 ma_free_rcu(mn); 400 393 mas_destroy(&mas); 401 394 } ··· 35382 35369 MT_BUG_ON(mt, allocated != 1 + height * 3); 35383 35370 mn = mas_pop_node(&mas); 35384 35371 MT_BUG_ON(mt, mas_allocated(&mas) != allocated - 1); 35372 + mn->parent = ma_parent_ptr(mn); 35385 35373 ma_free_rcu(mn); 35386 35374 MT_BUG_ON(mt, mas_preallocate(&mas, GFP_KERNEL) != 0); 35387 35375 mas_destroy(&mas); ··· 35400 35386 mas_destroy(&mas); 35401 35387 allocated = mas_allocated(&mas); 35402 35388 MT_BUG_ON(mt, allocated != 0); 35389 + mn->parent = ma_parent_ptr(mn); 35403 35390 ma_free_rcu(mn); 35404 35391 35405 35392 MT_BUG_ON(mt, mas_preallocate(&mas, GFP_KERNEL) != 0); ··· 35771 35756 tree.ma_root = mt_mk_node(node, maple_leaf_64); 35772 35757 mt_dump(&tree); 35773 35758 35759 + node->parent = ma_parent_ptr(node); 35774 35760 ma_free_rcu(node); 35775 35761 35776 35762 /* Check things that will make lockdep angry */

+27 -3
tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c
··· 159 159 160 160 if (!ASSERT_EQ(query_opts.feature_flags, 161 161 NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 162 - NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | 163 - NETDEV_XDP_ACT_NDO_XMIT_SG, 162 + NETDEV_XDP_ACT_RX_SG, 164 163 "veth_src query_opts.feature_flags")) 165 164 goto out; 166 165 ··· 169 170 170 171 if (!ASSERT_EQ(query_opts.feature_flags, 171 172 NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 173 + NETDEV_XDP_ACT_RX_SG, 174 + "veth_dst query_opts.feature_flags")) 175 + goto out; 176 + 177 + /* Enable GRO */ 178 + SYS("ethtool -K veth_src gro on"); 179 + SYS("ethtool -K veth_dst gro on"); 180 + 181 + err = bpf_xdp_query(ifindex_src, XDP_FLAGS_DRV_MODE, &query_opts); 182 + if (!ASSERT_OK(err, "veth_src bpf_xdp_query gro on")) 183 + goto out; 184 + 185 + if (!ASSERT_EQ(query_opts.feature_flags, 186 + NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 172 187 NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | 173 188 NETDEV_XDP_ACT_NDO_XMIT_SG, 174 - "veth_dst query_opts.feature_flags")) 189 + "veth_src query_opts.feature_flags gro on")) 190 + goto out; 191 + 192 + err = bpf_xdp_query(ifindex_dst, XDP_FLAGS_DRV_MODE, &query_opts); 193 + if (!ASSERT_OK(err, "veth_dst bpf_xdp_query gro on")) 194 + goto out; 195 + 196 + if (!ASSERT_EQ(query_opts.feature_flags, 197 + NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | 198 + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | 199 + NETDEV_XDP_ACT_NDO_XMIT_SG, 200 + "veth_dst query_opts.feature_flags gro on")) 175 201 goto out; 176 202 177 203 memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN);
+2
tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
··· 268 268 if (!ASSERT_NEQ(meta->rx_hash, 0, "rx_hash")) 269 269 return -1; 270 270 271 + ASSERT_EQ(meta->rx_hash_type, 0, "rx_hash_type"); 272 + 271 273 xsk_ring_cons__release(&xsk->rx, 1); 272 274 refill_rx(xsk, comp_addr); 273 275
+24 -18
tools/testing/selftests/bpf/progs/xdp_hw_metadata.c
··· 12 12 __type(value, __u32); 13 13 } xsk SEC(".maps"); 14 14 15 + __u64 pkts_skip = 0; 16 + __u64 pkts_fail = 0; 17 + __u64 pkts_redir = 0; 18 + 15 19 extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, 16 20 __u64 *timestamp) __ksym; 17 - extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, 18 - __u32 *hash) __ksym; 21 + extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash, 22 + enum xdp_rss_hash_type *rss_type) __ksym; 19 23 20 24 SEC("xdp") 21 25 int rx(struct xdp_md *ctx) ··· 30 26 struct udphdr *udp = NULL; 31 27 struct iphdr *iph = NULL; 32 28 struct xdp_meta *meta; 33 - int ret; 29 + int err; 34 30 35 31 data = (void *)(long)ctx->data; 36 32 data_end = (void *)(long)ctx->data_end; ··· 50 46 udp = NULL; 51 47 } 52 48 53 - if (!udp) 49 + if (!udp) { 50 + __sync_add_and_fetch(&pkts_skip, 1); 54 51 return XDP_PASS; 52 + } 55 53 56 - if (udp->dest != bpf_htons(9091)) 54 + /* Forwarding UDP:9091 to AF_XDP */ 55 + if (udp->dest != bpf_htons(9091)) { 56 + __sync_add_and_fetch(&pkts_skip, 1); 57 57 return XDP_PASS; 58 + } 58 59 59 - bpf_printk("forwarding UDP:9091 to AF_XDP"); 60 - 61 - ret = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta)); 62 - if (ret != 0) { 63 - bpf_printk("bpf_xdp_adjust_meta returned %d", ret); 60 + err = bpf_xdp_adjust_meta(ctx, -(int)sizeof(struct xdp_meta)); 61 + if (err) { 62 + __sync_add_and_fetch(&pkts_fail, 1); 64 63 return XDP_PASS; 65 64 } 66 65 ··· 72 65 meta = data_meta; 73 66 74 67 if (meta + 1 > data) { 75 - bpf_printk("bpf_xdp_adjust_meta doesn't appear to work"); 68 + __sync_add_and_fetch(&pkts_fail, 1); 76 69 return XDP_PASS; 77 70 } 78 71 79 - if (!bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp)) 80 - bpf_printk("populated rx_timestamp with %llu", meta->rx_timestamp); 81 - else 72 + err = bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp); 73 + if (err) 82 74 meta->rx_timestamp = 0; /* Used by AF_XDP as not avail signal */ 83 75
84 - if (!bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash)) 85 - bpf_printk("populated rx_hash with %u", meta->rx_hash); 86 - else 87 - meta->rx_hash = 0; /* Used by AF_XDP as not avail signal */ 76 + err = bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type); 77 + if (err < 0) 78 + meta->rx_hash_err = err; /* Used by AF_XDP as no hash signal */ 88 79 80 + __sync_add_and_fetch(&pkts_redir, 1); 89 81 return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS); 90 82 91 83
+3 -3
tools/testing/selftests/bpf/progs/xdp_metadata.c
··· 21 21 22 22 extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, 23 23 __u64 *timestamp) __ksym; 24 - extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, 25 - __u32 *hash) __ksym; 24 + extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash, 25 + enum xdp_rss_hash_type *rss_type) __ksym; 26 26 27 27 SEC("xdp") 28 28 int rx(struct xdp_md *ctx) ··· 56 56 if (timestamp == 0) 57 57 meta->rx_timestamp = 1; 58 58 59 - bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash); 59 + bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash, &meta->rx_hash_type); 60 60 61 61 return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS); 62 62 }
+4 -3
tools/testing/selftests/bpf/progs/xdp_metadata2.c
··· 5 5 #include <bpf/bpf_helpers.h> 6 6 #include <bpf/bpf_endian.h> 7 7 8 - extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, 9 - __u32 *hash) __ksym; 8 + extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash, 9 + enum xdp_rss_hash_type *rss_type) __ksym; 10 10 11 11 int called; 12 12 13 13 SEC("freplace/rx") 14 14 int freplace_rx(struct xdp_md *ctx) 15 15 { 16 + enum xdp_rss_hash_type type = 0; 16 17 u32 hash = 0; 17 18 /* Call _any_ metadata function to make sure we don't crash. */ 18 - bpf_xdp_metadata_rx_hash(ctx, &hash); 19 + bpf_xdp_metadata_rx_hash(ctx, &hash, &type); 19 20 called++; 20 21 return XDP_PASS; 21 22 }
+8 -2
tools/testing/selftests/bpf/xdp_hw_metadata.c
··· 141 141 meta = data - sizeof(*meta); 142 142 143 143 printf("rx_timestamp: %llu\n", meta->rx_timestamp); 144 - printf("rx_hash: %u\n", meta->rx_hash); 144 + if (meta->rx_hash_err < 0) 145 + printf("No rx_hash err=%d\n", meta->rx_hash_err); 146 + else 147 + printf("rx_hash: 0x%X with RSS type:0x%X\n", 148 + meta->rx_hash, meta->rx_hash_type); 145 149 } 146 150 147 151 static void verify_skb_metadata(int fd) ··· 216 212 while (true) { 217 213 errno = 0; 218 214 ret = poll(fds, rxq + 1, 1000); 219 - printf("poll: %d (%d)\n", ret, errno); 215 + printf("poll: %d (%d) skip=%llu fail=%llu redir=%llu\n", 216 + ret, errno, bpf_obj->bss->pkts_skip, 217 + bpf_obj->bss->pkts_fail, bpf_obj->bss->pkts_redir); 220 218 if (ret < 0) 221 219 break; 222 220 if (ret == 0)
+4
tools/testing/selftests/bpf/xdp_metadata.h
··· 12 12 struct xdp_meta { 13 13 __u64 rx_timestamp; 14 14 __u32 rx_hash; 15 + union { 16 + __u32 rx_hash_type; 17 + __s32 rx_hash_err; 18 + }; 15 19 };
+2 -1
tools/testing/selftests/drivers/net/bonding/Makefile
··· 8 8 dev_addr_lists.sh \ 9 9 mode-1-recovery-updelay.sh \ 10 10 mode-2-recovery-updelay.sh \ 11 - option_prio.sh \ 11 + bond_options.sh \ 12 12 bond-eth-type-change.sh 13 13 14 14 TEST_FILES := \ 15 15 lag_lib.sh \ 16 + bond_topo_3d1c.sh \ 16 17 net_forwarding_lib.sh 17 18 18 19 include ../../../lib.mk
+264
tools/testing/selftests/drivers/net/bonding/bond_options.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Test bonding options with mode 1,5,6 5 + 6 + ALL_TESTS=" 7 + prio 8 + arp_validate 9 + " 10 + 11 + REQUIRE_MZ=no 12 + NUM_NETIFS=0 13 + lib_dir=$(dirname "$0") 14 + source ${lib_dir}/net_forwarding_lib.sh 15 + source ${lib_dir}/bond_topo_3d1c.sh 16 + 17 + skip_prio() 18 + { 19 + local skip=1 20 + 21 + # check if iproute support prio option 22 + ip -n ${s_ns} link set eth0 type bond_slave prio 10 23 + [[ $? -ne 0 ]] && skip=0 24 + 25 + # check if kernel support prio option 26 + ip -n ${s_ns} -d link show eth0 | grep -q "prio 10" 27 + [[ $? -ne 0 ]] && skip=0 28 + 29 + return $skip 30 + } 31 + 32 + skip_ns() 33 + { 34 + local skip=1 35 + 36 + # check if iproute support ns_ip6_target option 37 + ip -n ${s_ns} link add bond1 type bond ns_ip6_target ${g_ip6} 38 + [[ $? -ne 0 ]] && skip=0 39 + 40 + # check if kernel support ns_ip6_target option 41 + ip -n ${s_ns} -d link show bond1 | grep -q "ns_ip6_target ${g_ip6}" 42 + [[ $? -ne 0 ]] && skip=0 43 + 44 + ip -n ${s_ns} link del bond1 45 + 46 + return $skip 47 + } 48 + 49 + active_slave="" 50 + check_active_slave() 51 + { 52 + local target_active_slave=$1 53 + active_slave=$(cmd_jq "ip -n ${s_ns} -d -j link show bond0" ".[].linkinfo.info_data.active_slave") 54 + test "$active_slave" = "$target_active_slave" 55 + check_err $? "Current active slave is $active_slave but not $target_active_slave" 56 + } 57 + 58 + 59 + # Test bonding prio option 60 + prio_test() 61 + { 62 + local param="$1" 63 + RET=0 64 + 65 + # create bond 66 + bond_reset "${param}" 67 + 68 + # check bonding member prio value 69 + ip -n ${s_ns} link set eth0 type bond_slave prio 0 70 + ip -n ${s_ns} link set eth1 type bond_slave prio 10 71 + ip -n ${s_ns} link set eth2 type bond_slave prio 11 72 + cmd_jq "ip -n ${s_ns} -d -j link show eth0" \ 73 + ".[].linkinfo.info_slave_data | select (.prio == 0)" "-e" &> /dev/null 74 + check_err $? 
"eth0 prio is not 0" 75 + cmd_jq "ip -n ${s_ns} -d -j link show eth1" \ 76 + ".[].linkinfo.info_slave_data | select (.prio == 10)" "-e" &> /dev/null 77 + check_err $? "eth1 prio is not 10" 78 + cmd_jq "ip -n ${s_ns} -d -j link show eth2" \ 79 + ".[].linkinfo.info_slave_data | select (.prio == 11)" "-e" &> /dev/null 80 + check_err $? "eth2 prio is not 11" 81 + 82 + bond_check_connection "setup" 83 + 84 + # active slave should be the primary slave 85 + check_active_slave eth1 86 + 87 + # active slave should be the higher prio slave 88 + ip -n ${s_ns} link set $active_slave down 89 + bond_check_connection "fail over" 90 + check_active_slave eth2 91 + 92 + # when only 1 slave is up 93 + ip -n ${s_ns} link set $active_slave down 94 + bond_check_connection "only 1 slave up" 95 + check_active_slave eth0 96 + 97 + # when a higher prio slave change to up 98 + ip -n ${s_ns} link set eth2 up 99 + bond_check_connection "higher prio slave up" 100 + case $primary_reselect in 101 + "0") 102 + check_active_slave "eth2" 103 + ;; 104 + "1") 105 + check_active_slave "eth0" 106 + ;; 107 + "2") 108 + check_active_slave "eth0" 109 + ;; 110 + esac 111 + local pre_active_slave=$active_slave 112 + 113 + # when the primary slave change to up 114 + ip -n ${s_ns} link set eth1 up 115 + bond_check_connection "primary slave up" 116 + case $primary_reselect in 117 + "0") 118 + check_active_slave "eth1" 119 + ;; 120 + "1") 121 + check_active_slave "$pre_active_slave" 122 + ;; 123 + "2") 124 + check_active_slave "$pre_active_slave" 125 + ip -n ${s_ns} link set $active_slave down 126 + bond_check_connection "pre_active slave down" 127 + check_active_slave "eth1" 128 + ;; 129 + esac 130 + 131 + # Test changing bond slave prio 132 + if [[ "$primary_reselect" == "0" ]];then 133 + ip -n ${s_ns} link set eth0 type bond_slave prio 1000000 134 + ip -n ${s_ns} link set eth1 type bond_slave prio 0 135 + ip -n ${s_ns} link set eth2 type bond_slave prio -50 136 + ip -n ${s_ns} -d link show eth0 | grep -q 
'prio 1000000' 137 + check_err $? "eth0 prio is not 1000000" 138 + ip -n ${s_ns} -d link show eth1 | grep -q 'prio 0' 139 + check_err $? "eth1 prio is not 0" 140 + ip -n ${s_ns} -d link show eth2 | grep -q 'prio -50' 141 + check_err $? "eth3 prio is not -50" 142 + check_active_slave "eth1" 143 + 144 + ip -n ${s_ns} link set $active_slave down 145 + bond_check_connection "change slave prio" 146 + check_active_slave "eth0" 147 + fi 148 + } 149 + 150 + prio_miimon() 151 + { 152 + local primary_reselect 153 + local mode=$1 154 + 155 + for primary_reselect in 0 1 2; do 156 + prio_test "mode $mode miimon 100 primary eth1 primary_reselect $primary_reselect" 157 + log_test "prio" "$mode miimon primary_reselect $primary_reselect" 158 + done 159 + } 160 + 161 + prio_arp() 162 + { 163 + local primary_reselect 164 + local mode=$1 165 + 166 + for primary_reselect in 0 1 2; do 167 + prio_test "mode active-backup arp_interval 100 arp_ip_target ${g_ip4} primary eth1 primary_reselect $primary_reselect" 168 + log_test "prio" "$mode arp_ip_target primary_reselect $primary_reselect" 169 + done 170 + } 171 + 172 + prio_ns() 173 + { 174 + local primary_reselect 175 + local mode=$1 176 + 177 + if skip_ns; then 178 + log_test_skip "prio ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'." 179 + return 0 180 + fi 181 + 182 + for primary_reselect in 0 1 2; do 183 + prio_test "mode active-backup arp_interval 100 ns_ip6_target ${g_ip6} primary eth1 primary_reselect $primary_reselect" 184 + log_test "prio" "$mode ns_ip6_target primary_reselect $primary_reselect" 185 + done 186 + } 187 + 188 + prio() 189 + { 190 + local mode modes="active-backup balance-tlb balance-alb" 191 + 192 + if skip_prio; then 193 + log_test_skip "prio" "Current iproute or kernel doesn't support bond option 'prio'." 
194 + return 0 195 + fi 196 + 197 + for mode in $modes; do 198 + prio_miimon $mode 199 + prio_arp $mode 200 + prio_ns $mode 201 + done 202 + } 203 + 204 + arp_validate_test() 205 + { 206 + local param="$1" 207 + RET=0 208 + 209 + # create bond 210 + bond_reset "${param}" 211 + 212 + bond_check_connection 213 + [ $RET -ne 0 ] && log_test "arp_validate" "$retmsg" 214 + 215 + # wait for a while to make sure the mii status stable 216 + sleep 5 217 + for i in $(seq 0 2); do 218 + mii_status=$(cmd_jq "ip -n ${s_ns} -j -d link show eth$i" ".[].linkinfo.info_slave_data.mii_status") 219 + if [ ${mii_status} != "UP" ]; then 220 + RET=1 221 + log_test "arp_validate" "interface eth$i mii_status $mii_status" 222 + fi 223 + done 224 + } 225 + 226 + arp_validate_arp() 227 + { 228 + local mode=$1 229 + local val 230 + for val in $(seq 0 6); do 231 + arp_validate_test "mode $mode arp_interval 100 arp_ip_target ${g_ip4} arp_validate $val" 232 + log_test "arp_validate" "$mode arp_ip_target arp_validate $val" 233 + done 234 + } 235 + 236 + arp_validate_ns() 237 + { 238 + local mode=$1 239 + local val 240 + 241 + if skip_ns; then 242 + log_test_skip "arp_validate ns" "Current iproute or kernel doesn't support bond option 'ns_ip6_target'." 243 + return 0 244 + fi 245 + 246 + for val in $(seq 0 6); do 247 + arp_validate_test "mode $mode arp_interval 100 ns_ip6_target ${g_ip6} arp_validate $val" 248 + log_test "arp_validate" "$mode ns_ip6_target arp_validate $val" 249 + done 250 + } 251 + 252 + arp_validate() 253 + { 254 + arp_validate_arp "active-backup" 255 + arp_validate_ns "active-backup" 256 + } 257 + 258 + trap cleanup EXIT 259 + 260 + setup_prepare 261 + setup_wait 262 + tests_run 263 + 264 + exit $EXIT_STATUS
+143
tools/testing/selftests/drivers/net/bonding/bond_topo_3d1c.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + # 4 + # Topology for Bond mode 1,5,6 testing 5 + # 6 + # +-------------------------------------+ 7 + # | bond0 | 8 + # | + | Server 9 + # | eth0 | eth1 eth2 | 192.0.2.1/24 10 + # | +-------------------+ | 2001:db8::1/24 11 + # | | | | | 12 + # +-------------------------------------+ 13 + # | | | 14 + # +-------------------------------------+ 15 + # | | | | | 16 + # | +---+---------+---------+---+ | Gateway 17 + # | | br0 | | 192.0.2.254/24 18 + # | +-------------+-------------+ | 2001:db8::254/24 19 + # | | | 20 + # +-------------------------------------+ 21 + # | 22 + # +-------------------------------------+ 23 + # | | | Client 24 + # | + | 192.0.2.10/24 25 + # | eth0 | 2001:db8::10/24 26 + # +-------------------------------------+ 27 + 28 + s_ns="s-$(mktemp -u XXXXXX)" 29 + c_ns="c-$(mktemp -u XXXXXX)" 30 + g_ns="g-$(mktemp -u XXXXXX)" 31 + s_ip4="192.0.2.1" 32 + c_ip4="192.0.2.10" 33 + g_ip4="192.0.2.254" 34 + s_ip6="2001:db8::1" 35 + c_ip6="2001:db8::10" 36 + g_ip6="2001:db8::254" 37 + 38 + gateway_create() 39 + { 40 + ip netns add ${g_ns} 41 + ip -n ${g_ns} link add br0 type bridge 42 + ip -n ${g_ns} link set br0 up 43 + ip -n ${g_ns} addr add ${g_ip4}/24 dev br0 44 + ip -n ${g_ns} addr add ${g_ip6}/24 dev br0 45 + } 46 + 47 + gateway_destroy() 48 + { 49 + ip -n ${g_ns} link del br0 50 + ip netns del ${g_ns} 51 + } 52 + 53 + server_create() 54 + { 55 + ip netns add ${s_ns} 56 + ip -n ${s_ns} link add bond0 type bond mode active-backup miimon 100 57 + 58 + for i in $(seq 0 2); do 59 + ip -n ${s_ns} link add eth${i} type veth peer name s${i} netns ${g_ns} 60 + 61 + ip -n ${g_ns} link set s${i} up 62 + ip -n ${g_ns} link set s${i} master br0 63 + ip -n ${s_ns} link set eth${i} master bond0 64 + done 65 + 66 + ip -n ${s_ns} link set bond0 up 67 + ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0 68 + ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0 69 + sleep 2 70 + } 71 +
72 + # Reset bond with new mode and options 73 + bond_reset() 74 + { 75 + local param="$1" 76 + 77 + ip -n ${s_ns} link set bond0 down 78 + ip -n ${s_ns} link del bond0 79 + 80 + ip -n ${s_ns} link add bond0 type bond $param 81 + for i in $(seq 0 2); do 82 + ip -n ${s_ns} link set eth$i master bond0 83 + done 84 + 85 + ip -n ${s_ns} link set bond0 up 86 + ip -n ${s_ns} addr add ${s_ip4}/24 dev bond0 87 + ip -n ${s_ns} addr add ${s_ip6}/24 dev bond0 88 + sleep 2 89 + } 90 + 91 + server_destroy() 92 + { 93 + for i in $(seq 0 2); do 94 + ip -n ${s_ns} link del eth${i} 95 + done 96 + ip netns del ${s_ns} 97 + } 98 + 99 + client_create() 100 + { 101 + ip netns add ${c_ns} 102 + ip -n ${c_ns} link add eth0 type veth peer name c0 netns ${g_ns} 103 + 104 + ip -n ${g_ns} link set c0 up 105 + ip -n ${g_ns} link set c0 master br0 106 + 107 + ip -n ${c_ns} link set eth0 up 108 + ip -n ${c_ns} addr add ${c_ip4}/24 dev eth0 109 + ip -n ${c_ns} addr add ${c_ip6}/24 dev eth0 110 + } 111 + 112 + client_destroy() 113 + { 114 + ip -n ${c_ns} link del eth0 115 + ip netns del ${c_ns} 116 + } 117 + 118 + setup_prepare() 119 + { 120 + gateway_create 121 + server_create 122 + client_create 123 + } 124 + 125 + cleanup() 126 + { 127 + pre_cleanup 128 + 129 + client_destroy 130 + server_destroy 131 + gateway_destroy 132 + } 133 + 134 + bond_check_connection() 135 + { 136 + local msg=${1:-"check connection"} 137 + 138 + sleep 2 139 + ip netns exec ${s_ns} ping ${c_ip4} -c5 -i 0.1 &>/dev/null 140 + check_err $? "${msg}: ping failed" 141 + ip netns exec ${s_ns} ping6 ${c_ip6} -c5 -i 0.1 &>/dev/null 142 + check_err $? "${msg}: ping6 failed" 143 + }
-245
tools/testing/selftests/drivers/net/bonding/option_prio.sh
··· 1 - #!/bin/bash
2 - # SPDX-License-Identifier: GPL-2.0
3 - #
4 - # Test bonding option prio
5 - #
6 -
7 - ALL_TESTS="
8 - 	prio_arp_ip_target_test
9 - 	prio_miimon_test
10 - "
11 -
12 - REQUIRE_MZ=no
13 - REQUIRE_JQ=no
14 - NUM_NETIFS=0
15 - lib_dir=$(dirname "$0")
16 - source "$lib_dir"/net_forwarding_lib.sh
17 -
18 - destroy()
19 - {
20 - 	ip link del bond0 &>/dev/null
21 - 	ip link del br0 &>/dev/null
22 - 	ip link del veth0 &>/dev/null
23 - 	ip link del veth1 &>/dev/null
24 - 	ip link del veth2 &>/dev/null
25 - 	ip netns del ns1 &>/dev/null
26 - 	ip link del veth3 &>/dev/null
27 - }
28 -
29 - cleanup()
30 - {
31 - 	pre_cleanup
32 -
33 - 	destroy
34 - }
35 -
36 - skip()
37 - {
38 - 	local skip=1
39 - 	ip link add name bond0 type bond mode 1 miimon 100 &>/dev/null
40 - 	ip link add name veth0 type veth peer name veth0_p
41 - 	ip link set veth0 master bond0
42 -
43 - 	# check if iproute supports the prio option
44 - 	ip link set dev veth0 type bond_slave prio 10
45 - 	[[ $? -ne 0 ]] && skip=0
46 -
47 - 	# check if bonding supports the prio option
48 - 	ip -d link show veth0 | grep -q "prio 10"
49 - 	[[ $? -ne 0 ]] && skip=0
50 -
51 - 	ip link del bond0 &>/dev/null
52 - 	ip link del veth0
53 -
54 - 	return $skip
55 - }
56 -
57 - active_slave=""
58 - check_active_slave()
59 - {
60 - 	local target_active_slave=$1
61 - 	active_slave="$(cat /sys/class/net/bond0/bonding/active_slave)"
62 - 	test "$active_slave" = "$target_active_slave"
63 - 	check_err $? "Current active slave is $active_slave but not $target_active_slave"
64 - }
65 -
66 -
67 - # Test bonding prio option with mode=$mode monitor=$monitor
68 - # and primary_reselect=$primary_reselect
69 - prio_test()
70 - {
71 - 	RET=0
72 -
73 - 	local monitor=$1
74 - 	local mode=$2
75 - 	local primary_reselect=$3
76 -
77 - 	local bond_ip4="192.169.1.2"
78 - 	local peer_ip4="192.169.1.1"
79 - 	local bond_ip6="2009:0a:0b::02"
80 - 	local peer_ip6="2009:0a:0b::01"
81 -
82 -
83 - 	# create veths
84 - 	ip link add name veth0 type veth peer name veth0_p
85 - 	ip link add name veth1 type veth peer name veth1_p
86 - 	ip link add name veth2 type veth peer name veth2_p
87 -
88 - 	# create bond
89 - 	if [[ "$monitor" == "miimon" ]]; then
90 - 		ip link add name bond0 type bond mode $mode miimon 100 primary veth1 primary_reselect $primary_reselect
91 - 	elif [[ "$monitor" == "arp_ip_target" ]]; then
92 - 		ip link add name bond0 type bond mode $mode arp_interval 1000 arp_ip_target $peer_ip4 primary veth1 primary_reselect $primary_reselect
93 - 	elif [[ "$monitor" == "ns_ip6_target" ]]; then
94 - 		ip link add name bond0 type bond mode $mode arp_interval 1000 ns_ip6_target $peer_ip6 primary veth1 primary_reselect $primary_reselect
95 - 	fi
96 - 	ip link set bond0 up
97 - 	ip link set veth0 master bond0
98 - 	ip link set veth1 master bond0
99 - 	ip link set veth2 master bond0
100 - 	# check bonding member prio value
101 - 	ip link set dev veth0 type bond_slave prio 0
102 - 	ip link set dev veth1 type bond_slave prio 10
103 - 	ip link set dev veth2 type bond_slave prio 11
104 - 	ip -d link show veth0 | grep -q 'prio 0'
105 - 	check_err $? "veth0 prio is not 0"
106 - 	ip -d link show veth1 | grep -q 'prio 10'
107 - 	check_err $? "veth1 prio is not 10"
108 - 	ip -d link show veth2 | grep -q 'prio 11'
109 - 	check_err $? "veth2 prio is not 11"
110 -
111 - 	ip link set veth0 up
112 - 	ip link set veth1 up
113 - 	ip link set veth2 up
114 - 	ip link set veth0_p up
115 - 	ip link set veth1_p up
116 - 	ip link set veth2_p up
117 -
118 - 	# prepare ping target
119 - 	ip link add name br0 type bridge
120 - 	ip link set br0 up
121 - 	ip link set veth0_p master br0
122 - 	ip link set veth1_p master br0
123 - 	ip link set veth2_p master br0
124 - 	ip link add name veth3 type veth peer name veth3_p
125 - 	ip netns add ns1
126 - 	ip link set veth3_p master br0 up
127 - 	ip link set veth3 netns ns1 up
128 - 	ip netns exec ns1 ip addr add $peer_ip4/24 dev veth3
129 - 	ip netns exec ns1 ip addr add $peer_ip6/64 dev veth3
130 - 	ip addr add $bond_ip4/24 dev bond0
131 - 	ip addr add $bond_ip6/64 dev bond0
132 - 	sleep 5
133 -
134 - 	ping $peer_ip4 -c5 -I bond0 &>/dev/null
135 - 	check_err $? "ping failed 1."
136 - 	ping6 $peer_ip6 -c5 -I bond0 &>/dev/null
137 - 	check_err $? "ping6 failed 1."
138 -
139 - 	# active slave should be the primary slave
140 - 	check_active_slave veth1
141 -
142 - 	# active slave should be the higher prio slave
143 - 	ip link set $active_slave down
144 - 	ping $peer_ip4 -c5 -I bond0 &>/dev/null
145 - 	check_err $? "ping failed 2."
146 - 	check_active_slave veth2
147 -
148 - 	# when only 1 slave is up
149 - 	ip link set $active_slave down
150 - 	ping $peer_ip4 -c5 -I bond0 &>/dev/null
151 - 	check_err $? "ping failed 3."
152 - 	check_active_slave veth0
153 -
154 - 	# when a higher prio slave changes to up
155 - 	ip link set veth2 up
156 - 	ping $peer_ip4 -c5 -I bond0 &>/dev/null
157 - 	check_err $? "ping failed 4."
158 - 	case $primary_reselect in
159 - 		"0")
160 - 			check_active_slave "veth2"
161 - 			;;
162 - 		"1")
163 - 			check_active_slave "veth0"
164 - 			;;
165 - 		"2")
166 - 			check_active_slave "veth0"
167 - 			;;
168 - 	esac
169 - 	local pre_active_slave=$active_slave
170 -
171 - 	# when the primary slave changes to up
172 - 	ip link set veth1 up
173 - 	ping $peer_ip4 -c5 -I bond0 &>/dev/null
174 - 	check_err $? "ping failed 5."
175 - 	case $primary_reselect in
176 - 		"0")
177 - 			check_active_slave "veth1"
178 - 			;;
179 - 		"1")
180 - 			check_active_slave "$pre_active_slave"
181 - 			;;
182 - 		"2")
183 - 			check_active_slave "$pre_active_slave"
184 - 			ip link set $active_slave down
185 - 			ping $peer_ip4 -c5 -I bond0 &>/dev/null
186 - 			check_err $? "ping failed 6."
187 - 			check_active_slave "veth1"
188 - 			;;
189 - 	esac
190 -
191 - 	# Test changing bond slave prio
192 - 	if [[ "$primary_reselect" == "0" ]]; then
193 - 		ip link set dev veth0 type bond_slave prio 1000000
194 - 		ip link set dev veth1 type bond_slave prio 0
195 - 		ip link set dev veth2 type bond_slave prio -50
196 - 		ip -d link show veth0 | grep -q 'prio 1000000'
197 - 		check_err $? "veth0 prio is not 1000000"
198 - 		ip -d link show veth1 | grep -q 'prio 0'
199 - 		check_err $? "veth1 prio is not 0"
200 - 		ip -d link show veth2 | grep -q 'prio -50'
201 - 		check_err $? "veth2 prio is not -50"
202 - 		check_active_slave "veth1"
203 -
204 - 		ip link set $active_slave down
205 - 		ping $peer_ip4 -c5 -I bond0 &>/dev/null
206 - 		check_err $? "ping failed 7."
207 - 		check_active_slave "veth0"
208 - 	fi
209 -
210 - 	cleanup
211 -
212 - 	log_test "prio_test" "Test bonding option 'prio' with mode=$mode monitor=$monitor and primary_reselect=$primary_reselect"
213 - }
214 -
215 - prio_miimon_test()
216 - {
217 - 	local mode
218 - 	local primary_reselect
219 -
220 - 	for mode in 1 5 6; do
221 - 		for primary_reselect in 0 1 2; do
222 - 			prio_test "miimon" $mode $primary_reselect
223 - 		done
224 - 	done
225 - }
226 -
227 - prio_arp_ip_target_test()
228 - {
229 - 	local primary_reselect
230 -
231 - 	for primary_reselect in 0 1 2; do
232 - 		prio_test "arp_ip_target" 1 $primary_reselect
233 - 	done
234 - }
235 -
236 - if skip; then
237 - 	log_test_skip "option_prio.sh" "Current iproute doesn't support 'prio'."
238 - 	exit 0
239 - fi
240 -
241 - trap cleanup EXIT
242 -
243 - tests_run
244 -
245 - exit "$EXIT_STATUS"
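The deleted test leans on helpers sourced from net_forwarding_lib.sh (`RET`, `check_err`, `log_test`). A minimal, self-contained sketch of that convention — a simplified model written for illustration, not the actual library code — is:

```shell
#!/bin/bash
# Simplified sketch of the RET/check_err/log_test convention used by the
# net selftests above (assumption: modeled on, not copied from, the
# shared forwarding library).

RET=0
retmsg=""

check_err()
{
	local err=$1
	shift
	# Record only the first failure; later checks keep the first message.
	if [[ $RET -eq 0 && $err -ne 0 ]]; then
		RET=$err
		retmsg="$*"
	fi
}

log_test()
{
	if [[ $RET -ne 0 ]]; then
		echo "TEST: $* [FAIL]"
		echo "	$retmsg"
		return 1
	fi
	echo "TEST: $* [ OK ]"
}

# Usage: one passing check, then one failing check.
true
check_err $? "true unexpectedly failed"
false
check_err $? "false failed as expected"
log_test "demo"
```

Because `check_err` latches only the first nonzero status, a long chain of `ip`/`ping` checks (as in prio_test above) can run to completion while still reporting the earliest point of failure.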
+1
tools/testing/selftests/net/config
··· 49 49 CONFIG_CRYPTO_SM4_GENERIC=y 50 50 CONFIG_AMT=m 51 51 CONFIG_VXLAN=m 52 + CONFIG_IP_SCTP=m
+2
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 913 913 $client4_port > /dev/null 2>&1 & 914 914 local listener_pid=$! 915 915 916 + sleep 0.5 916 917 verify_listener_events $client_evts $LISTENER_CREATED $AF_INET 10.0.2.2 $client4_port 917 918 918 919 # ADD_ADDR from client to server machine reusing the subflow port ··· 929 928 # Delete the listener from the client ns, if one was created 930 929 kill_wait $listener_pid 931 930 931 + sleep 0.5 932 932 verify_listener_events $client_evts $LISTENER_CLOSED $AF_INET 10.0.2.2 $client4_port 933 933 } 934 934
+1 -1
tools/testing/selftests/net/openvswitch/ovs-dpctl.py
··· 62 62 nla_map = ( 63 63 ("OVS_DP_ATTR_UNSPEC", "none"), 64 64 ("OVS_DP_ATTR_NAME", "asciiz"), 65 - ("OVS_DP_ATTR_UPCALL_PID", "uint32"), 65 + ("OVS_DP_ATTR_UPCALL_PID", "array(uint32)"), 66 66 ("OVS_DP_ATTR_STATS", "dpstats"), 67 67 ("OVS_DP_ATTR_MEGAFLOW_STATS", "megaflowstats"), 68 68 ("OVS_DP_ATTR_USER_FEATURES", "uint32"),
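The ovs-dpctl.py fix above changes `OVS_DP_ATTR_UPCALL_PID` from a scalar `uint32` to `array(uint32)`: the attribute payload is a packed sequence of 32-bit netlink PIDs, one per upcall handler, not a single value. A standalone sketch of decoding such a payload with the standard library (hypothetical helper, not part of ovs-dpctl.py):

```python
import struct

def parse_upcall_pids(payload: bytes) -> list[int]:
    """Decode a packed array of native-endian u32 values, as carried by
    a netlink attribute like OVS_DP_ATTR_UPCALL_PID (one PID per
    handler thread)."""
    count = len(payload) // 4
    return list(struct.unpack(f"={count}I", payload))

# Two PIDs packed back-to-back, as the kernel would emit them.
pids = parse_upcall_pids(struct.pack("=2I", 1234, 5678))
print(pids)  # [1234, 5678]
```

Mapping the attribute as a plain `uint32` would silently drop every PID after the first, which is why the pyroute2 nla_map entry needed the array type.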
+1 -1
tools/virtio/virtio-trace/README
··· 61 61 id=channel0,name=agent-ctl-path\ 62 62 ##data path## 63 63 -chardev pipe,id=charchannel1,path=/tmp/virtio-trace/trace-path-cpu0\ 64 - -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel0,\ 64 + -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,\ 65 65 id=channel1,name=trace-path-cpu0\ 66 66 ... 67 67