
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Minor overlapping changes in the btusb and ixgbe drivers.

Signed-off-by: David S. Miller <davem@davemloft.net>

+1670 -1375
+14 -3
.clang-format
··· 107 107 - 'css_for_each_descendant_post' 108 108 - 'css_for_each_descendant_pre' 109 109 - 'device_for_each_child_node' 110 + - 'dma_fence_chain_for_each' 110 111 - 'drm_atomic_crtc_for_each_plane' 111 112 - 'drm_atomic_crtc_state_for_each_plane' 112 113 - 'drm_atomic_crtc_state_for_each_plane_state' 113 114 - 'drm_atomic_for_each_plane_damage' 115 + - 'drm_client_for_each_connector_iter' 116 + - 'drm_client_for_each_modeset' 114 117 - 'drm_connector_for_each_possible_encoder' 115 118 - 'drm_for_each_connector_iter' 116 119 - 'drm_for_each_crtc' ··· 129 126 - 'drm_mm_for_each_node_in_range' 130 127 - 'drm_mm_for_each_node_safe' 131 128 - 'flow_action_for_each' 129 + - 'for_each_active_dev_scope' 132 130 - 'for_each_active_drhd_unit' 133 131 - 'for_each_active_iommu' 134 132 - 'for_each_available_child_of_node' ··· 157 153 - 'for_each_cpu_not' 158 154 - 'for_each_cpu_wrap' 159 155 - 'for_each_dev_addr' 156 + - 'for_each_dev_scope' 157 + - 'for_each_displayid_db' 160 158 - 'for_each_dma_cap_mask' 161 159 - 'for_each_dpcm_be' 162 160 - 'for_each_dpcm_be_rollback' ··· 175 169 - 'for_each_evictable_lru' 176 170 - 'for_each_fib6_node_rt_rcu' 177 171 - 'for_each_fib6_walker_rt' 172 + - 'for_each_free_mem_pfn_range_in_zone' 173 + - 'for_each_free_mem_pfn_range_in_zone_from' 178 174 - 'for_each_free_mem_range' 179 175 - 'for_each_free_mem_range_reverse' 180 176 - 'for_each_func_rsrc' ··· 186 178 - 'for_each_ip_tunnel_rcu' 187 179 - 'for_each_irq_nr' 188 180 - 'for_each_link_codecs' 181 + - 'for_each_link_platforms' 189 182 - 'for_each_lru' 190 183 - 'for_each_matching_node' 191 184 - 'for_each_matching_node_and_match' ··· 311 302 - 'ide_port_for_each_present_dev' 312 303 - 'idr_for_each_entry' 313 304 - 'idr_for_each_entry_continue' 305 + - 'idr_for_each_entry_continue_ul' 314 306 - 'idr_for_each_entry_ul' 307 + - 'in_dev_for_each_ifa_rcu' 308 + - 'in_dev_for_each_ifa_rtnl' 315 309 - 'inet_bind_bucket_for_each' 316 310 - 'inet_lhash2_for_each_icsk_rcu' 317 311 - 'key_for_each' ··· 355 343 - 'media_device_for_each_intf' 356 344 - 'media_device_for_each_link' 357 345 - 'media_device_for_each_pad' 358 - - 'mp_bvec_for_each_page' 359 - - 'mp_bvec_for_each_segment' 360 346 - 'nanddev_io_for_each_page' 361 347 - 'netdev_for_each_lower_dev' 362 348 - 'netdev_for_each_lower_private' ··· 391 381 - 'radix_tree_for_each_slot' 392 382 - 'radix_tree_for_each_tagged' 393 383 - 'rbtree_postorder_for_each_entry_safe' 384 + - 'rdma_for_each_block' 394 385 - 'rdma_for_each_port' 395 386 - 'resource_list_for_each_entry' 396 387 - 'resource_list_for_each_entry_safe' 397 388 - 'rhl_for_each_entry_rcu' 398 389 - 'rhl_for_each_rcu' 399 390 - 'rht_for_each' 400 - - 'rht_for_each_from' 401 391 - 'rht_for_each_entry' 402 392 - 'rht_for_each_entry_from' 403 393 - 'rht_for_each_entry_rcu' 404 394 - 'rht_for_each_entry_rcu_from' 405 395 - 'rht_for_each_entry_safe' 396 + - 'rht_for_each_from' 406 397 - 'rht_for_each_rcu' 407 398 - 'rht_for_each_rcu_from' 408 399 - '__rq_for_each_bio'
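Aside: the entries above feed clang-format's ForEachMacros list, which tells the formatter to lay a macro out as a loop statement rather than as a function call. A minimal sketch of why that matters, using a hypothetical iterator macro (not one from this diff):

    /* Toy trailing-iterator macro, analogous to the entries above.
     * Once whitelisted in ForEachMacros, clang-format keeps the
     * attached brace and loop-style indentation seen below. */
    #define my_for_each_node(pos, head) \
            for ((pos) = (head); (pos); (pos) = (pos)->next)

    struct node { int val; struct node *next; };

    static int sum_nodes(struct node *head)
    {
            struct node *pos;
            int total = 0;

            my_for_each_node(pos, head) {
                    total += pos->val;
            }
            return total;
    }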
+7 -7
Documentation/process/embargoed-hardware-issues.rst
··· 5 5 ----- 6 6 7 7 Hardware issues which result in security problems are a different category 8 - of security bugs than pure software bugs which only affect the Linux 8 + of security bugs than pure software bugs which only affect the Linux 9 9 kernel. 10 10 11 11 Hardware issues like Meltdown, Spectre, L1TF etc. must be treated ··· 159 159 160 160 The initial response team sets up an encrypted mailing-list or repurposes 161 161 an existing one if appropriate. The disclosing party should provide a list 162 - of contacts for all other parties who have already been, or should be 162 + of contacts for all other parties who have already been, or should be, 163 163 informed about the issue. The response team contacts these parties so they 164 164 can name experts who should be subscribed to the mailing-list. 165 165 ··· 217 217 AMD 218 218 IBM 219 219 Intel 220 - Qualcomm 220 + Qualcomm Trilok Soni <tsoni@codeaurora.org> 221 221 222 - Microsoft 222 + Microsoft Sasha Levin <sashal@kernel.org> 223 223 VMware 224 - XEN 224 + Xen Andrew Cooper <andrew.cooper3@citrix.com> 225 225 226 226 Canonical Tyler Hicks <tyhicks@canonical.com> 227 227 Debian Ben Hutchings <ben@decadent.org.uk> ··· 230 230 SUSE Jiri Kosina <jkosina@suse.cz> 231 231 232 232 Amazon 233 - Google 234 - ============== ======================================================== 233 + Google Kees Cook <keescook@chromium.org> 234 + ============= ======================================================== 235 235 236 236 If you want your organization to be added to the ambassadors list, please 237 237 contact the hardware security team. The nominated ambassador has to
+7 -6
Documentation/riscv/boot-image-header.txt
··· 18 18 u32 res1 = 0; /* Reserved */ 19 19 u64 res2 = 0; /* Reserved */ 20 20 u64 magic = 0x5643534952; /* Magic number, little endian, "RISCV" */ 21 - u32 res3; /* Reserved for additional RISC-V specific header */ 21 + u32 magic2 = 0x05435352; /* Magic number 2, little endian, "RSC\x05" */ 22 22 u32 res4; /* Reserved for PE COFF offset */ 23 23 24 24 This header format is compliant with PE/COFF header and largely inspired from ··· 37 37 Bits 16:31 - Major version 38 38 39 39 This preserves compatibility across newer and older version of the header. 40 - The current version is defined as 0.1. 40 + The current version is defined as 0.2. 41 41 42 - - res3 is reserved for offset to any other additional fields. This makes the 43 - header extendible in future. One example would be to accommodate ISA 44 - extension for RISC-V in future. For current version, it is set to be zero. 42 + - The "magic" field is deprecated as of version 0.2. In a future 43 + release, it may be removed. This originally should have matched up 44 + with the ARM64 header "magic" field, but unfortunately does not. 45 + The "magic2" field replaces it, matching up with the ARM64 header. 45 46 46 - - In current header, the flag field has only one field. 47 + - In current header, the flags field has only one field. 47 48 Bit 0: Kernel endianness. 1 if BE, 0 if LE. 48 49 49 50 - Image size is mandatory for boot loader to load kernel image. Booting will
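Aside: a loader-side validation sketch for the layout documented above. The struct mirrors the documented fields (see also arch/riscv/include/asm/image.h later in this series); the hosted includes and function name are illustrative, not part of the kernel:

    #include <stdint.h>

    struct riscv_image_header {
            uint32_t code0;
            uint32_t code1;
            uint64_t text_offset;
            uint64_t image_size;
            uint64_t flags;
            uint32_t version;
            uint32_t res1;
            uint64_t res2;
            uint64_t magic;         /* "RISCV", deprecated as of 0.2 */
            uint32_t magic2;        /* "RSC\x05", little endian */
            uint32_t res4;
    };

    static int riscv_image_valid(const struct riscv_image_header *h)
    {
            /* magic2 sits at the same offset as the ARM64 magic */
            return h->magic2 == 0x05435352u;
    }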
+1 -2
MAINTAINERS
··· 17732 17732 F: include/uapi/linux/fsmap.h 17733 17733 17734 17734 XILINX AXI ETHERNET DRIVER 17735 - M: Anirudha Sarangi <anirudh@xilinx.com> 17736 - M: John Linn <John.Linn@xilinx.com> 17735 + M: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com> 17737 17736 S: Maintained 17738 17737 F: drivers/net/ethernet/xilinx/xilinx_axienet* 17739 17738
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 3 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc7 5 + EXTRAVERSION = -rc8 6 6 NAME = Bobtail Squid 7 7 8 8 # *DOCUMENTATION*
+1
arch/arm64/boot/dts/renesas/hihope-common.dtsi
··· 279 279 mmc-hs200-1_8v; 280 280 non-removable; 281 281 fixed-emmc-driver-type = <1>; 282 + status = "okay"; 282 283 }; 283 284 284 285 &usb_extal_clk {
+3 -3
arch/arm64/boot/dts/renesas/r8a77995-draak.dts
··· 97 97 reg = <0x0 0x48000000 0x0 0x18000000>; 98 98 }; 99 99 100 - reg_1p8v: regulator0 { 100 + reg_1p8v: regulator-1p8v { 101 101 compatible = "regulator-fixed"; 102 102 regulator-name = "fixed-1.8V"; 103 103 regulator-min-microvolt = <1800000>; ··· 106 106 regulator-always-on; 107 107 }; 108 108 109 - reg_3p3v: regulator1 { 109 + reg_3p3v: regulator-3p3v { 110 110 compatible = "regulator-fixed"; 111 111 regulator-name = "fixed-3.3V"; 112 112 regulator-min-microvolt = <3300000>; ··· 115 115 regulator-always-on; 116 116 }; 117 117 118 - reg_12p0v: regulator1 { 118 + reg_12p0v: regulator-12p0v { 119 119 compatible = "regulator-fixed"; 120 120 regulator-name = "D12.0V"; 121 121 regulator-min-microvolt = <12000000>;
+4 -17
arch/powerpc/kernel/process.c
··· 101 101 } 102 102 } 103 103 104 - static bool tm_active_with_fp(struct task_struct *tsk) 105 - { 106 - return MSR_TM_ACTIVE(tsk->thread.regs->msr) && 107 - (tsk->thread.ckpt_regs.msr & MSR_FP); 108 - } 109 - 110 - static bool tm_active_with_altivec(struct task_struct *tsk) 111 - { 112 - return MSR_TM_ACTIVE(tsk->thread.regs->msr) && 113 - (tsk->thread.ckpt_regs.msr & MSR_VEC); 114 - } 115 104 #else 116 105 static inline void check_if_tm_restore_required(struct task_struct *tsk) { } 117 - static inline bool tm_active_with_fp(struct task_struct *tsk) { return false; } 118 - static inline bool tm_active_with_altivec(struct task_struct *tsk) { return false; } 119 106 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */ 120 107 121 108 bool strict_msr_control; ··· 239 252 240 253 static int restore_fp(struct task_struct *tsk) 241 254 { 242 - if (tsk->thread.load_fp || tm_active_with_fp(tsk)) { 255 + if (tsk->thread.load_fp) { 243 256 load_fp_state(&current->thread.fp_state); 244 257 current->thread.load_fp++; 245 258 return 1; ··· 321 334 322 335 static int restore_altivec(struct task_struct *tsk) 323 336 { 324 - if (cpu_has_feature(CPU_FTR_ALTIVEC) && 325 - (tsk->thread.load_vec || tm_active_with_altivec(tsk))) { 337 + if (cpu_has_feature(CPU_FTR_ALTIVEC) && (tsk->thread.load_vec)) { 326 338 load_vr_state(&tsk->thread.vr_state); 327 339 tsk->thread.used_vr = 1; 328 340 tsk->thread.load_vec++; ··· 483 497 if (!tsk->thread.regs) 484 498 return; 485 499 500 + check_if_tm_restore_required(tsk); 501 + 486 502 usermsr = tsk->thread.regs->msr; 487 503 488 504 if ((usermsr & msr_all_available) == 0) 489 505 return; 490 506 491 507 msr_check_and_set(msr_all_available); 492 - check_if_tm_restore_required(tsk); 493 508 494 509 WARN_ON((usermsr & MSR_VSX) && !((usermsr & MSR_FP) && (usermsr & MSR_VEC))); 495 510
-1
arch/powerpc/mm/nohash/tlb.c
··· 630 630 #ifdef CONFIG_PPC_FSL_BOOK3E 631 631 if (mmu_has_feature(MMU_FTR_TYPE_FSL_E)) { 632 632 unsigned int num_cams; 633 - int __maybe_unused cpu = smp_processor_id(); 634 633 bool map = true; 635 634 636 635 /* use a quarter of the TLBCAM for bolted linear map */
+6 -6
arch/riscv/include/asm/image.h
··· 3 3 #ifndef __ASM_IMAGE_H 4 4 #define __ASM_IMAGE_H 5 5 6 - #define RISCV_IMAGE_MAGIC "RISCV" 6 + #define RISCV_IMAGE_MAGIC "RISCV\0\0\0" 7 + #define RISCV_IMAGE_MAGIC2 "RSC\x05" 7 8 8 9 #define RISCV_IMAGE_FLAG_BE_SHIFT 0 9 10 #define RISCV_IMAGE_FLAG_BE_MASK 0x1 ··· 24 23 #define __HEAD_FLAGS (__HEAD_FLAG(BE)) 25 24 26 25 #define RISCV_HEADER_VERSION_MAJOR 0 27 - #define RISCV_HEADER_VERSION_MINOR 1 26 + #define RISCV_HEADER_VERSION_MINOR 2 28 27 29 28 #define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \ 30 29 RISCV_HEADER_VERSION_MINOR) ··· 40 39 * @version: version 41 40 * @res1: reserved 42 41 * @res2: reserved 43 - * @magic: Magic number 44 - * @res3: reserved (will be used for additional RISC-V specific 45 - * header) 42 + * @magic: Magic number (RISC-V specific; deprecated) 43 + * @magic2: Magic number 2 (to match the ARM64 'magic' field pos) 46 44 * @res4: reserved (will be used for PE COFF offset) 47 45 * 48 46 * The intention is for this header format to be shared between multiple ··· 58 58 u32 res1; 59 59 u64 res2; 60 60 u64 magic; 61 - u32 res3; 61 + u32 magic2; 62 62 u32 res4; 63 63 }; 64 64 #endif /* __ASSEMBLY__ */
+2 -2
arch/riscv/kernel/head.S
··· 39 39 .word RISCV_HEADER_VERSION 40 40 .word 0 41 41 .dword 0 42 - .asciz RISCV_IMAGE_MAGIC 43 - .word 0 42 + .ascii RISCV_IMAGE_MAGIC 44 43 .balign 4 44 + .ascii RISCV_IMAGE_MAGIC2 45 45 .word 0 46 46 47 47 .global _start_kernel
+10
arch/s390/kvm/interrupt.c
··· 1961 1961 case KVM_S390_MCHK: 1962 1962 irq->u.mchk.mcic = s390int->parm64; 1963 1963 break; 1964 + case KVM_S390_INT_PFAULT_INIT: 1965 + irq->u.ext.ext_params = s390int->parm; 1966 + irq->u.ext.ext_params2 = s390int->parm64; 1967 + break; 1968 + case KVM_S390_RESTART: 1969 + case KVM_S390_INT_CLOCK_COMP: 1970 + case KVM_S390_INT_CPU_TIMER: 1971 + break; 1972 + default: 1973 + return -EINVAL; 1964 1974 } 1965 1975 return 0; 1966 1976 }
+3 -1
arch/s390/kvm/kvm-s390.c
··· 1018 1018 /* mark all the pages in active slots as dirty */ 1019 1019 for (slotnr = 0; slotnr < slots->used_slots; slotnr++) { 1020 1020 ms = slots->memslots + slotnr; 1021 + if (!ms->dirty_bitmap) 1022 + return -EINVAL; 1021 1023 /* 1022 1024 * The second half of the bitmap is only used on x86, 1023 1025 * and would be wasted otherwise, so we put it to good ··· 4325 4323 } 4326 4324 case KVM_S390_INTERRUPT: { 4327 4325 struct kvm_s390_interrupt s390int; 4328 - struct kvm_s390_irq s390irq; 4326 + struct kvm_s390_irq s390irq = {}; 4329 4327 4330 4328 if (copy_from_user(&s390int, argp, sizeof(s390int))) 4331 4329 return -EFAULT;
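Aside: the `struct kvm_s390_irq s390irq = {};` hunk is the standard guard against leaking stack contents, paired with the interrupt.c change above. A miniature of the idea, with hypothetical names:

    /* If the conversion helper only fills fields for some interrupt
     * types, an uninitialized struct would carry stale stack bytes
     * into whatever consumes it. The "= {}" makes them all zero. */
    struct demo_irq { unsigned int type; unsigned long payload; };

    static void demo_convert(struct demo_irq *irq, unsigned int type)
    {
            irq->type = type;       /* payload untouched for this type */
    }

    static void demo_inject(unsigned int type)
    {
            struct demo_irq irq = {};       /* zero-initialized */

            demo_convert(&irq, type);
            /* ... irq.payload is 0, not leaked stack data ... */
    }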
+18 -15
arch/sparc/kernel/sys_sparc_64.c
··· 336 336 { 337 337 long err; 338 338 339 + if (!IS_ENABLED(CONFIG_SYSVIPC)) 340 + return -ENOSYS; 341 + 339 342 /* No need for backward compatibility. We can start fresh... */ 340 343 if (call <= SEMTIMEDOP) { 341 344 switch (call) { 342 345 case SEMOP: 343 - err = sys_semtimedop(first, ptr, 344 - (unsigned int)second, NULL); 346 + err = ksys_semtimedop(first, ptr, 347 + (unsigned int)second, NULL); 345 348 goto out; 346 349 case SEMTIMEDOP: 347 - err = sys_semtimedop(first, ptr, (unsigned int)second, 350 + err = ksys_semtimedop(first, ptr, (unsigned int)second, 348 351 (const struct __kernel_timespec __user *) 349 - (unsigned long) fifth); 352 + (unsigned long) fifth); 350 353 goto out; 351 354 case SEMGET: 352 - err = sys_semget(first, (int)second, (int)third); 355 + err = ksys_semget(first, (int)second, (int)third); 353 356 goto out; 354 357 case SEMCTL: { 355 - err = sys_semctl(first, second, 356 - (int)third | IPC_64, 357 - (unsigned long) ptr); 358 + err = ksys_old_semctl(first, second, 359 + (int)third | IPC_64, 360 + (unsigned long) ptr); 358 361 goto out; 359 362 } 360 363 default: ··· 368 365 if (call <= MSGCTL) { 369 366 switch (call) { 370 367 case MSGSND: 371 - err = sys_msgsnd(first, ptr, (size_t)second, 368 + err = ksys_msgsnd(first, ptr, (size_t)second, 372 369 (int)third); 373 370 goto out; 374 371 case MSGRCV: 375 - err = sys_msgrcv(first, ptr, (size_t)second, fifth, 372 + err = ksys_msgrcv(first, ptr, (size_t)second, fifth, 376 373 (int)third); 377 374 goto out; 378 375 case MSGGET: 379 - err = sys_msgget((key_t)first, (int)second); 376 + err = ksys_msgget((key_t)first, (int)second); 380 377 goto out; 381 378 case MSGCTL: 382 - err = sys_msgctl(first, (int)second | IPC_64, ptr); 379 + err = ksys_old_msgctl(first, (int)second | IPC_64, ptr); 383 380 goto out; 384 381 default: 385 382 err = -ENOSYS; ··· 399 396 goto out; 400 397 } 401 398 case SHMDT: 402 - err = sys_shmdt(ptr); 399 + err = ksys_shmdt(ptr); 403 400 goto out; 404 401 case SHMGET: 405 - err = sys_shmget(first, (size_t)second, (int)third); 402 + err = ksys_shmget(first, (size_t)second, (int)third); 406 403 goto out; 407 404 case SHMCTL: 408 - err = sys_shmctl(first, (int)second | IPC_64, ptr); 405 + err = ksys_old_shmctl(first, (int)second | IPC_64, ptr); 409 406 goto out; 410 407 default: 411 408 err = -ENOSYS;
+5 -3
arch/x86/hyperv/mmu.c
··· 37 37 * Lower 12 bits encode the number of additional 38 38 * pages to flush (in addition to the 'cur' page). 39 39 */ 40 - if (diff >= HV_TLB_FLUSH_UNIT) 40 + if (diff >= HV_TLB_FLUSH_UNIT) { 41 41 gva_list[gva_n] |= ~PAGE_MASK; 42 - else if (diff) 42 + cur += HV_TLB_FLUSH_UNIT; 43 + } else if (diff) { 43 44 gva_list[gva_n] |= (diff - 1) >> PAGE_SHIFT; 45 + cur = end; 46 + } 44 47 45 - cur += HV_TLB_FLUSH_UNIT; 46 48 gva_n++; 47 49 48 50 } while (cur < end);
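Aside: the bug being fixed is the unconditional `cur += HV_TLB_FLUSH_UNIT`, which can wrap when the range ends near the top of the address space, so `cur < end` holds again and the loop never terminates. A user-space analogue (the constant is a stand-in, not the real HV_TLB_FLUSH_UNIT):

    #include <stdint.h>
    #include <stdio.h>

    #define UNIT 0x400000ULL        /* stand-in for HV_TLB_FLUSH_UNIT */

    int main(void)
    {
            /* A range ending within one UNIT of the wrap point. */
            uint64_t cur = UINT64_MAX - 0x1000, end = UINT64_MAX;
            uint64_t diff = end - cur;

            /* Old logic: cur += UNIT wraps to a small value and the
             * loop spins forever. Fixed logic: a final partial chunk
             * jumps straight to end. */
            cur = (diff >= UNIT) ? cur + UNIT : end;
            printf("terminates: %s\n", cur == end ? "yes" : "no");
            return 0;
    }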
+1
arch/x86/include/asm/bootparam_utils.h
··· 70 70 BOOT_PARAM_PRESERVE(eddbuf_entries), 71 71 BOOT_PARAM_PRESERVE(edd_mbr_sig_buf_entries), 72 72 BOOT_PARAM_PRESERVE(edd_mbr_sig_buffer), 73 + BOOT_PARAM_PRESERVE(secure_boot), 73 74 BOOT_PARAM_PRESERVE(hdr), 74 75 BOOT_PARAM_PRESERVE(e820_table), 75 76 BOOT_PARAM_PRESERVE(eddbuf),
+2
arch/x86/include/asm/kvm_host.h
··· 335 335 int root_count; /* Currently serving as active root */ 336 336 unsigned int unsync_children; 337 337 struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */ 338 + unsigned long mmu_valid_gen; 338 339 DECLARE_BITMAP(unsync_child_bitmap, 512); 339 340 340 341 #ifdef CONFIG_X86_32 ··· 857 856 unsigned long n_requested_mmu_pages; 858 857 unsigned long n_max_mmu_pages; 859 858 unsigned int indirect_shadow_pages; 859 + unsigned long mmu_valid_gen; 860 860 struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES]; 861 861 /* 862 862 * Hash table of struct kvm_mmu_page.
+3 -1
arch/x86/include/asm/uaccess.h
··· 444 444 ({ \ 445 445 int __gu_err; \ 446 446 __inttype(*(ptr)) __gu_val; \ 447 + __typeof__(ptr) __gu_ptr = (ptr); \ 448 + __typeof__(size) __gu_size = (size); \ 447 449 __uaccess_begin_nospec(); \ 448 - __get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \ 450 + __get_user_size(__gu_val, __gu_ptr, __gu_size, __gu_err, -EFAULT); \ 449 451 __uaccess_end(); \ 450 452 (x) = (__force __typeof__(*(ptr)))__gu_val; \ 451 453 __builtin_expect(__gu_err, 0); \
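Aside: latching `ptr` and `size` into `__gu_ptr`/`__gu_size` makes the macro evaluate its arguments exactly once. The classic hazard, shown with a deliberately bad toy macro:

    #include <stdio.h>

    /* (p) expands twice, so any side effect in the argument runs twice. */
    #define BAD_READ(x, p)  ((x) = *(p), (x) += *(p))

    int main(void)
    {
            int buf[2] = { 1, 2 }, *p = buf, x;

            BAD_READ(x, p++);       /* p++ executes twice here */
            printf("x=%d, p advanced by %td slots\n", x, p - buf); /* 2, not 1 */
            return 0;
    }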
+4 -4
arch/x86/kernel/apic/apic.c
··· 834 834 if (!boot_cpu_has(X86_FEATURE_APIC)) 835 835 return true; 836 836 837 + /* Virt guests may lack ARAT, but still have DEADLINE */ 838 + if (!boot_cpu_has(X86_FEATURE_ARAT)) 839 + return true; 840 + 837 841 /* Deadline timer is based on TSC so no further PIT action required */ 838 842 if (boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER)) 839 843 return false; ··· 1183 1179 apic_write(APIC_LVT0, v | APIC_LVT_MASKED); 1184 1180 v = apic_read(APIC_LVT1); 1185 1181 apic_write(APIC_LVT1, v | APIC_LVT_MASKED); 1186 - if (!x2apic_enabled()) { 1187 - v = apic_read(APIC_LDR) & ~APIC_LDR_MASK; 1188 - apic_write(APIC_LDR, v); 1189 - } 1190 1182 if (maxlvt >= 4) { 1191 1183 v = apic_read(APIC_LVTPC); 1192 1184 apic_write(APIC_LVTPC, v | APIC_LVT_MASKED);
+99 -2
arch/x86/kvm/mmu.c
··· 2095 2095 if (!direct) 2096 2096 sp->gfns = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_cache); 2097 2097 set_page_private(virt_to_page(sp->spt), (unsigned long)sp); 2098 + 2099 + /* 2100 + * active_mmu_pages must be a FIFO list, as kvm_zap_obsolete_pages() 2101 + * depends on valid pages being added to the head of the list. See 2102 + * comments in kvm_zap_obsolete_pages(). 2103 + */ 2098 2104 list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages); 2099 2105 kvm_mod_used_mmu_pages(vcpu->kvm, +1); 2100 2106 return sp; ··· 2250 2244 #define for_each_valid_sp(_kvm, _sp, _gfn) \ 2251 2245 hlist_for_each_entry(_sp, \ 2252 2246 &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \ 2253 - if ((_sp)->role.invalid) { \ 2247 + if (is_obsolete_sp((_kvm), (_sp)) || (_sp)->role.invalid) { \ 2254 2248 } else 2255 2249 2256 2250 #define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn) \ ··· 2306 2300 static void kvm_mmu_audit(struct kvm_vcpu *vcpu, int point) { } 2307 2301 static void mmu_audit_disable(void) { } 2308 2302 #endif 2303 + 2304 + static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp) 2305 + { 2306 + return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen); 2307 + } 2309 2308 2310 2309 static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, 2311 2310 struct list_head *invalid_list) ··· 2536 2525 if (level > PT_PAGE_TABLE_LEVEL && need_sync) 2537 2526 flush |= kvm_sync_pages(vcpu, gfn, &invalid_list); 2538 2527 } 2528 + sp->mmu_valid_gen = vcpu->kvm->arch.mmu_valid_gen; 2539 2529 clear_page(sp->spt); 2540 2530 trace_kvm_mmu_get_page(sp, true); 2541 2531 ··· 4245 4233 return false; 4246 4234 4247 4235 if (cached_root_available(vcpu, new_cr3, new_role)) { 4236 + /* 4237 + * It is possible that the cached previous root page is 4238 + * obsolete because of a change in the MMU generation 4239 + * number. However, changing the generation number is 4240 + * accompanied by KVM_REQ_MMU_RELOAD, which will free 4241 + * the root set here and allocate a new one. 4242 + */ 4248 4243 kvm_make_request(KVM_REQ_LOAD_CR3, vcpu); 4249 4244 if (!skip_tlb_flush) { 4250 4245 kvm_make_request(KVM_REQ_MMU_SYNC, vcpu); ··· 5668 5649 return alloc_mmu_pages(vcpu); 5669 5650 } 5670 5651 5652 + 5653 + static void kvm_zap_obsolete_pages(struct kvm *kvm) 5654 + { 5655 + struct kvm_mmu_page *sp, *node; 5656 + LIST_HEAD(invalid_list); 5657 + int ign; 5658 + 5659 + restart: 5660 + list_for_each_entry_safe_reverse(sp, node, 5661 + &kvm->arch.active_mmu_pages, link) { 5662 + /* 5663 + * No obsolete valid page exists before a newly created page 5664 + * since active_mmu_pages is a FIFO list. 5665 + */ 5666 + if (!is_obsolete_sp(kvm, sp)) 5667 + break; 5668 + 5669 + /* 5670 + * Do not repeatedly zap a root page to avoid unnecessary 5671 + * KVM_REQ_MMU_RELOAD, otherwise we may not be able to 5672 + * progress: 5673 + * vcpu 0 vcpu 1 5674 + * call vcpu_enter_guest(): 5675 + * 1): handle KVM_REQ_MMU_RELOAD 5676 + * and require mmu-lock to 5677 + * load mmu 5678 + * repeat: 5679 + * 1): zap root page and 5680 + * send KVM_REQ_MMU_RELOAD 5681 + * 5682 + * 2): if (cond_resched_lock(mmu-lock)) 5683 + * 5684 + * 2): hold mmu-lock and load mmu 5685 + * 5686 + * 3): see KVM_REQ_MMU_RELOAD bit 5687 + * on vcpu->requests is set 5688 + * then return 1 to call 5689 + * vcpu_enter_guest() again. 
5690 + * goto repeat; 5691 + * 5692 + * Since we are reversely walking the list and the invalid 5693 + * list will be moved to the head, skip the invalid page 5694 + * can help us to avoid the infinity list walking. 5695 + */ 5696 + if (sp->role.invalid) 5697 + continue; 5698 + 5699 + if (need_resched() || spin_needbreak(&kvm->mmu_lock)) { 5700 + kvm_mmu_commit_zap_page(kvm, &invalid_list); 5701 + cond_resched_lock(&kvm->mmu_lock); 5702 + goto restart; 5703 + } 5704 + 5705 + if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) 5706 + goto restart; 5707 + } 5708 + 5709 + kvm_mmu_commit_zap_page(kvm, &invalid_list); 5710 + } 5711 + 5712 + /* 5713 + * Fast invalidate all shadow pages and use lock-break technique 5714 + * to zap obsolete pages. 5715 + * 5716 + * It's required when memslot is being deleted or VM is being 5717 + * destroyed, in these cases, we should ensure that KVM MMU does 5718 + * not use any resource of the being-deleted slot or all slots 5719 + * after calling the function. 5720 + */ 5721 + static void kvm_mmu_zap_all_fast(struct kvm *kvm) 5722 + { 5723 + spin_lock(&kvm->mmu_lock); 5724 + kvm->arch.mmu_valid_gen++; 5725 + 5726 + kvm_zap_obsolete_pages(kvm); 5727 + spin_unlock(&kvm->mmu_lock); 5728 + } 5729 + 5671 5730 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm, 5672 5731 struct kvm_memory_slot *slot, 5673 5732 struct kvm_page_track_notifier_node *node) 5674 5733 { 5675 - kvm_mmu_zap_all(kvm); 5734 + kvm_mmu_zap_all_fast(kvm); 5676 5735 } 5677 5736 5678 5737 void kvm_mmu_init_vm(struct kvm *kvm)
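Aside: the core trick above is generation-number invalidation: bumping `mmu_valid_gen` makes every existing shadow page stale in O(1), and pages are lazily skipped or zapped by comparing generations. In miniature, with hypothetical types:

    #include <linux/types.h>

    struct demo_cache { unsigned long gen; };
    struct demo_entry { unsigned long gen; };

    static bool demo_entry_obsolete(struct demo_cache *c, struct demo_entry *e)
    {
            return e->gen != c->gen;
    }

    static void demo_invalidate_all(struct demo_cache *c)
    {
            c->gen++;       /* every existing entry is now obsolete */
    }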
+3 -1
arch/x86/kvm/vmx/nested.c
··· 4540 4540 int len; 4541 4541 gva_t gva = 0; 4542 4542 struct vmcs12 *vmcs12; 4543 + struct x86_exception e; 4543 4544 short offset; 4544 4545 4545 4546 if (!nested_vmx_check_permission(vcpu)) ··· 4589 4588 vmx_instruction_info, true, len, &gva)) 4590 4589 return 1; 4591 4590 /* _system ok, nested_vmx_check_permission has verified cpl=0 */ 4592 - kvm_write_guest_virt_system(vcpu, gva, &field_value, len, NULL); 4591 + if (kvm_write_guest_virt_system(vcpu, gva, &field_value, len, &e)) 4592 + kvm_inject_page_fault(vcpu, &e); 4593 4593 } 4594 4594 4595 4595 return nested_vmx_succeed(vcpu);
+7
arch/x86/kvm/x86.c
··· 5312 5312 /* kvm_write_guest_virt_system can pull in tons of pages. */ 5313 5313 vcpu->arch.l1tf_flush_l1d = true; 5314 5314 5315 + /* 5316 + * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED 5317 + * is returned, but our callers are not ready for that and they blindly 5318 + * call kvm_inject_page_fault. Ensure that they at least do not leak 5319 + * uninitialized kernel stack memory into cr2 and error code. 5320 + */ 5321 + memset(exception, 0, sizeof(*exception)); 5315 5322 return kvm_write_guest_virt_helper(addr, val, bytes, vcpu, 5316 5323 PFERR_WRITE_MASK, exception); 5317 5324 }
+19 -16
arch/x86/purgatory/Makefile
··· 18 18 KASAN_SANITIZE := n 19 19 KCOV_INSTRUMENT := n 20 20 21 + # These are adjustments to the compiler flags used for objects that 22 + # make up the standalone purgatory.ro 23 + 24 + PURGATORY_CFLAGS_REMOVE := -mcmodel=kernel 25 + PURGATORY_CFLAGS := -mcmodel=large -ffreestanding -fno-zero-initialized-in-bss 26 + 21 27 # Default KBUILD_CFLAGS can have -pg option set when FTRACE is enabled. That 22 28 # in turn leaves some undefined symbols like __fentry__ in purgatory and not 23 29 # sure how to relocate those. 24 30 ifdef CONFIG_FUNCTION_TRACER 25 - CFLAGS_REMOVE_sha256.o += $(CC_FLAGS_FTRACE) 26 - CFLAGS_REMOVE_purgatory.o += $(CC_FLAGS_FTRACE) 27 - CFLAGS_REMOVE_string.o += $(CC_FLAGS_FTRACE) 28 - CFLAGS_REMOVE_kexec-purgatory.o += $(CC_FLAGS_FTRACE) 31 + PURGATORY_CFLAGS_REMOVE += $(CC_FLAGS_FTRACE) 29 32 endif 30 33 31 34 ifdef CONFIG_STACKPROTECTOR 32 - CFLAGS_REMOVE_sha256.o += -fstack-protector 33 - CFLAGS_REMOVE_purgatory.o += -fstack-protector 34 - CFLAGS_REMOVE_string.o += -fstack-protector 35 - CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector 35 + PURGATORY_CFLAGS_REMOVE += -fstack-protector 36 36 endif 37 37 38 38 ifdef CONFIG_STACKPROTECTOR_STRONG 39 - CFLAGS_REMOVE_sha256.o += -fstack-protector-strong 40 - CFLAGS_REMOVE_purgatory.o += -fstack-protector-strong 41 - CFLAGS_REMOVE_string.o += -fstack-protector-strong 42 - CFLAGS_REMOVE_kexec-purgatory.o += -fstack-protector-strong 39 + PURGATORY_CFLAGS_REMOVE += -fstack-protector-strong 43 40 endif 44 41 45 42 ifdef CONFIG_RETPOLINE 46 - CFLAGS_REMOVE_sha256.o += $(RETPOLINE_CFLAGS) 47 - CFLAGS_REMOVE_purgatory.o += $(RETPOLINE_CFLAGS) 48 - CFLAGS_REMOVE_string.o += $(RETPOLINE_CFLAGS) 49 - CFLAGS_REMOVE_kexec-purgatory.o += $(RETPOLINE_CFLAGS) 43 + PURGATORY_CFLAGS_REMOVE += $(RETPOLINE_CFLAGS) 50 44 endif 45 + 46 + CFLAGS_REMOVE_purgatory.o += $(PURGATORY_CFLAGS_REMOVE) 47 + CFLAGS_purgatory.o += $(PURGATORY_CFLAGS) 48 + 49 + CFLAGS_REMOVE_sha256.o += $(PURGATORY_CFLAGS_REMOVE) 50 + CFLAGS_sha256.o += $(PURGATORY_CFLAGS) 51 + 52 + CFLAGS_REMOVE_string.o += $(PURGATORY_CFLAGS_REMOVE) 53 + CFLAGS_string.o += $(PURGATORY_CFLAGS) 51 54 52 55 $(obj)/purgatory.ro: $(PURGATORY_OBJS) FORCE 53 56 $(call if_changed,ld)
+1 -1
drivers/bluetooth/bpa10x.c
··· 337 337 338 338 usb_free_urb(urb); 339 339 340 - return 0; 340 + return err; 341 341 } 342 342 343 343 static int bpa10x_set_diag(struct hci_dev *hdev, bool enable)
+3 -5
drivers/bluetooth/btusb.c
··· 384 384 { USB_DEVICE(0x13d3, 0x3526), .driver_info = BTUSB_REALTEK }, 385 385 { USB_DEVICE(0x0b05, 0x185c), .driver_info = BTUSB_REALTEK }, 386 386 387 + /* Additional Realtek 8822CE Bluetooth devices */ 388 + { USB_DEVICE(0x04ca, 0x4005), .driver_info = BTUSB_REALTEK }, 389 + 387 390 /* Silicon Wave based devices */ 388 391 { USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE }, 389 392 ··· 1204 1201 } 1205 1202 1206 1203 data->intf->needs_remote_wakeup = 1; 1207 - /* device specific wakeup source enabled and required for USB 1208 - * remote wakeup while host is suspended 1209 - */ 1210 - device_wakeup_enable(&data->udev->dev); 1211 1204 1212 1205 /* Disable device remote wakeup when host is suspended 1213 1206 * For Realtek chips, global suspend without ··· 1280 1281 if (test_bit(BTUSB_WAKEUP_DISABLE, &data->flags)) 1281 1282 data->intf->needs_remote_wakeup = 1; 1282 1283 1283 - device_wakeup_disable(&data->udev->dev); 1284 1284 usb_autopm_put_interface(data->intf); 1285 1285 1286 1286 failed:
+6 -4
drivers/bluetooth/hci_qca.c
··· 309 309 ws_awake_device); 310 310 struct hci_uart *hu = qca->hu; 311 311 unsigned long retrans_delay; 312 + unsigned long flags; 312 313 313 314 BT_DBG("hu %p wq awake device", hu); 314 315 315 316 /* Vote for serial clock */ 316 317 serial_clock_vote(HCI_IBS_TX_VOTE_CLOCK_ON, hu); 317 318 318 - spin_lock(&qca->hci_ibs_lock); 319 + spin_lock_irqsave(&qca->hci_ibs_lock, flags); 319 320 320 321 /* Send wake indication to device */ 321 322 if (send_hci_ibs_cmd(HCI_IBS_WAKE_IND, hu) < 0) ··· 328 327 retrans_delay = msecs_to_jiffies(qca->wake_retrans); 329 328 mod_timer(&qca->wake_retrans_timer, jiffies + retrans_delay); 330 329 331 - spin_unlock(&qca->hci_ibs_lock); 330 + spin_unlock_irqrestore(&qca->hci_ibs_lock, flags); 332 331 333 332 /* Actually send the packets */ 334 333 hci_uart_tx_wakeup(hu); ··· 339 338 struct qca_data *qca = container_of(work, struct qca_data, 340 339 ws_awake_rx); 341 340 struct hci_uart *hu = qca->hu; 341 + unsigned long flags; 342 342 343 343 BT_DBG("hu %p wq awake rx", hu); 344 344 345 345 serial_clock_vote(HCI_IBS_RX_VOTE_CLOCK_ON, hu); 346 346 347 - spin_lock(&qca->hci_ibs_lock); 347 + spin_lock_irqsave(&qca->hci_ibs_lock, flags); 348 348 qca->rx_ibs_state = HCI_IBS_RX_AWAKE; 349 349 350 350 /* Always acknowledge device wake up, ··· 356 354 357 355 qca->ibs_sent_wacks++; 358 356 359 - spin_unlock(&qca->hci_ibs_lock); 357 + spin_unlock_irqrestore(&qca->hci_ibs_lock, flags); 360 358 361 359 /* Actually send the packets */ 362 360 hci_uart_tx_wakeup(hu);
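Aside: the workqueue handlers share `hci_ibs_lock` with code reachable from interrupt context, so they must use the irqsave variants; a plain spin_lock() in process context can deadlock if the interrupt arrives while the lock is held. The pattern, sketched with hypothetical names:

    #include <linux/spinlock.h>

    struct demo_state {
            spinlock_t lock;        /* also taken from IRQ context */
            int rx_state;
    };

    static void demo_work_fn(struct demo_state *s)
    {
            unsigned long flags;

            /* Masks local interrupts for the critical section, so the
             * IRQ path cannot preempt us and spin on the same lock. */
            spin_lock_irqsave(&s->lock, flags);
            s->rx_state = 1;
            spin_unlock_irqrestore(&s->lock, flags);
    }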
+19 -9
drivers/dma/sh/rcar-dmac.c
··· 192 192 * @iomem: remapped I/O memory base 193 193 * @n_channels: number of available channels 194 194 * @channels: array of DMAC channels 195 + * @channels_mask: bitfield of which DMA channels are managed by this driver 195 196 * @modules: bitmask of client modules in use 196 197 */ 197 198 struct rcar_dmac { ··· 203 202 204 203 unsigned int n_channels; 205 204 struct rcar_dmac_chan *channels; 205 + unsigned int channels_mask; 206 206 207 207 DECLARE_BITMAP(modules, 256); 208 208 }; ··· 440 438 u16 dmaor; 441 439 442 440 /* Clear all channels and enable the DMAC globally. */ 443 - rcar_dmac_write(dmac, RCAR_DMACHCLR, GENMASK(dmac->n_channels - 1, 0)); 441 + rcar_dmac_write(dmac, RCAR_DMACHCLR, dmac->channels_mask); 444 442 rcar_dmac_write(dmac, RCAR_DMAOR, 445 443 RCAR_DMAOR_PRI_FIXED | RCAR_DMAOR_DME); 446 444 ··· 815 813 /* Stop all channels. */ 816 814 for (i = 0; i < dmac->n_channels; ++i) { 817 815 struct rcar_dmac_chan *chan = &dmac->channels[i]; 816 + 817 + if (!(dmac->channels_mask & BIT(i))) 818 + continue; 818 819 819 820 /* Stop and reinitialize the channel. */ 820 821 spin_lock_irq(&chan->lock); ··· 1781 1776 return 0; 1782 1777 } 1783 1778 1779 + #define RCAR_DMAC_MAX_CHANNELS 32 1780 + 1784 1781 static int rcar_dmac_parse_of(struct device *dev, struct rcar_dmac *dmac) 1785 1782 { 1786 1783 struct device_node *np = dev->of_node; ··· 1794 1787 return ret; 1795 1788 } 1796 1789 1797 - if (dmac->n_channels <= 0 || dmac->n_channels >= 100) { 1790 + /* The hardware and driver don't support more than 32 bits in CHCLR */ 1791 + if (dmac->n_channels <= 0 || 1792 + dmac->n_channels >= RCAR_DMAC_MAX_CHANNELS) { 1798 1793 dev_err(dev, "invalid number of channels %u\n", 1799 1794 dmac->n_channels); 1800 1795 return -EINVAL; 1801 1796 } 1797 + 1798 + dmac->channels_mask = GENMASK(dmac->n_channels - 1, 0); 1802 1799 1803 1800 return 0; 1804 1801 } ··· 1813 1802 DMA_SLAVE_BUSWIDTH_2_BYTES | DMA_SLAVE_BUSWIDTH_4_BYTES | 1814 1803 DMA_SLAVE_BUSWIDTH_8_BYTES | DMA_SLAVE_BUSWIDTH_16_BYTES | 1815 1804 DMA_SLAVE_BUSWIDTH_32_BYTES | DMA_SLAVE_BUSWIDTH_64_BYTES; 1816 - unsigned int channels_offset = 0; 1817 1805 struct dma_device *engine; 1818 1806 struct rcar_dmac *dmac; 1819 1807 struct resource *mem; ··· 1841 1831 * level we can't disable it selectively, so ignore channel 0 for now if 1842 1832 * the device is part of an IOMMU group. 1843 1833 */ 1844 - if (device_iommu_mapped(&pdev->dev)) { 1845 - dmac->n_channels--; 1846 - channels_offset = 1; 1847 - } 1834 + if (device_iommu_mapped(&pdev->dev)) 1835 + dmac->channels_mask &= ~BIT(0); 1848 1836 1849 1837 dmac->channels = devm_kcalloc(&pdev->dev, dmac->n_channels, 1850 1838 sizeof(*dmac->channels), GFP_KERNEL); ··· 1900 1892 INIT_LIST_HEAD(&engine->channels); 1901 1893 1902 1894 for (i = 0; i < dmac->n_channels; ++i) { 1903 - ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], 1904 - i + channels_offset); 1895 + if (!(dmac->channels_mask & BIT(i))) 1896 + continue; 1897 + 1898 + ret = rcar_dmac_chan_probe(dmac, &dmac->channels[i], i); 1905 1899 if (ret < 0) 1906 1900 goto error; 1907 1901 }
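Aside: the patch replaces an index-shifting scheme with a channel bitmask, so holes (such as channel 0 behind an IOMMU) are skipped without renumbering the remaining channels. The shape of it, with hypothetical driver state:

    #include <linux/bits.h>
    #include <linux/types.h>

    struct demo_dmac {
            unsigned int n_channels;
            unsigned int channels_mask;
    };

    static void demo_init_channels(struct demo_dmac *dmac, bool skip_ch0)
    {
            unsigned int i;

            dmac->channels_mask = GENMASK(dmac->n_channels - 1, 0);
            if (skip_ch0)                   /* e.g. behind an IOMMU */
                    dmac->channels_mask &= ~BIT(0);

            for (i = 0; i < dmac->n_channels; i++) {
                    if (!(dmac->channels_mask & BIT(i)))
                            continue;       /* unmanaged channel */
                    /* ... probe channel i ... */
            }
    }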
+8 -2
drivers/dma/sprd-dma.c
··· 908 908 struct sprd_dma_chn *schan = to_sprd_dma_chan(chan); 909 909 struct dma_slave_config *slave_cfg = &schan->slave_cfg; 910 910 dma_addr_t src = 0, dst = 0; 911 + dma_addr_t start_src = 0, start_dst = 0; 911 912 struct sprd_dma_desc *sdesc; 912 913 struct scatterlist *sg; 913 914 u32 len = 0; ··· 955 954 dst = sg_dma_address(sg); 956 955 } 957 956 957 + if (!i) { 958 + start_src = src; 959 + start_dst = dst; 960 + } 961 + 958 962 /* 959 963 * The link-list mode needs at least 2 link-list 960 964 * configurations. If there is only one sg, it doesn't ··· 976 970 } 977 971 } 978 972 979 - ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, src, dst, len, 980 - dir, flags, slave_cfg); 973 + ret = sprd_dma_fill_desc(chan, &sdesc->chn_hw, 0, 0, start_src, 974 + start_dst, len, dir, flags, slave_cfg); 981 975 if (ret) { 982 976 kfree(sdesc); 983 977 return NULL;
+3 -1
drivers/dma/ti/dma-crossbar.c
··· 391 391 392 392 ret = of_property_read_u32_array(node, pname, (u32 *)rsv_events, 393 393 nelm * 2); 394 - if (ret) 394 + if (ret) { 395 + kfree(rsv_events); 395 396 return ret; 397 + } 396 398 397 399 for (i = 0; i < nelm; i++) { 398 400 ti_dra7_xbar_reserve(rsv_events[i][0], rsv_events[i][1],
+3 -1
drivers/dma/ti/omap-dma.c
··· 1540 1540 1541 1541 rc = devm_request_irq(&pdev->dev, irq, omap_dma_irq, 1542 1542 IRQF_SHARED, "omap-dma-engine", od); 1543 - if (rc) 1543 + if (rc) { 1544 + omap_dma_free(od); 1544 1545 return rc; 1546 + } 1545 1547 } 1546 1548 1547 1549 if (omap_dma_glbl_read(od, CAPS_0) & CAPS_0_SUPPORT_LL123)
+1
drivers/gpio/gpio-mockup.c
··· 309 309 .read = gpio_mockup_debugfs_read, 310 310 .write = gpio_mockup_debugfs_write, 311 311 .llseek = no_llseek, 312 + .release = single_release, 312 313 }; 313 314 314 315 static void gpio_mockup_debugfs_setup(struct device *dev,
+6 -9
drivers/gpio/gpio-pca953x.c
··· 604 604 u8 new_irqs; 605 605 int level, i; 606 606 u8 invert_irq_mask[MAX_BANK]; 607 - int reg_direction[MAX_BANK]; 607 + u8 reg_direction[MAX_BANK]; 608 608 609 - regmap_bulk_read(chip->regmap, chip->regs->direction, reg_direction, 610 - NBANK(chip)); 609 + pca953x_read_regs(chip, chip->regs->direction, reg_direction); 611 610 612 611 if (chip->driver_data & PCA_PCAL) { 613 612 /* Enable latch on interrupt-enabled inputs */ ··· 678 679 bool pending_seen = false; 679 680 bool trigger_seen = false; 680 681 u8 trigger[MAX_BANK]; 681 - int reg_direction[MAX_BANK]; 682 + u8 reg_direction[MAX_BANK]; 682 683 int ret, i; 683 684 684 685 if (chip->driver_data & PCA_PCAL) { ··· 709 710 return false; 710 711 711 712 /* Remove output pins from the equation */ 712 - regmap_bulk_read(chip->regmap, chip->regs->direction, reg_direction, 713 - NBANK(chip)); 713 + pca953x_read_regs(chip, chip->regs->direction, reg_direction); 714 714 for (i = 0; i < NBANK(chip); i++) 715 715 cur_stat[i] &= reg_direction[i]; 716 716 ··· 766 768 { 767 769 struct i2c_client *client = chip->client; 768 770 struct irq_chip *irq_chip = &chip->irq_chip; 769 - int reg_direction[MAX_BANK]; 771 + u8 reg_direction[MAX_BANK]; 770 772 int ret, i; 771 773 772 774 if (!client->irq) ··· 787 789 * interrupt. We have to rely on the previous read for 788 790 * this purpose. 789 791 */ 790 - regmap_bulk_read(chip->regmap, chip->regs->direction, reg_direction, 791 - NBANK(chip)); 792 + pca953x_read_regs(chip, chip->regs->direction, reg_direction); 792 793 for (i = 0; i < NBANK(chip); i++) 793 794 chip->irq_stat[i] &= reg_direction[i]; 794 795 mutex_init(&chip->irq_lock);
+38 -4
drivers/gpio/gpiolib-acpi.c
··· 7 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 8 8 */ 9 9 10 + #include <linux/dmi.h> 10 11 #include <linux/errno.h> 11 12 #include <linux/gpio/consumer.h> 12 13 #include <linux/gpio/driver.h> ··· 19 18 #include <linux/pinctrl/pinctrl.h> 20 19 21 20 #include "gpiolib.h" 21 + 22 + static int run_edge_events_on_boot = -1; 23 + module_param(run_edge_events_on_boot, int, 0444); 24 + MODULE_PARM_DESC(run_edge_events_on_boot, 25 + "Run edge _AEI event-handlers at boot: 0=no, 1=yes, -1=auto"); 22 26 23 27 /** 24 28 * struct acpi_gpio_event - ACPI GPIO event handler data ··· 176 170 event->irq_requested = true; 177 171 178 172 /* Make sure we trigger the initial state of edge-triggered IRQs */ 179 - value = gpiod_get_raw_value_cansleep(event->desc); 180 - if (((event->irqflags & IRQF_TRIGGER_RISING) && value == 1) || 181 - ((event->irqflags & IRQF_TRIGGER_FALLING) && value == 0)) 182 - event->handler(event->irq, event); 173 + if (run_edge_events_on_boot && 174 + (event->irqflags & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING))) { 175 + value = gpiod_get_raw_value_cansleep(event->desc); 176 + if (((event->irqflags & IRQF_TRIGGER_RISING) && value == 1) || 177 + ((event->irqflags & IRQF_TRIGGER_FALLING) && value == 0)) 178 + event->handler(event->irq, event); 179 + } 183 180 } 184 181 185 182 static void acpi_gpiochip_request_irqs(struct acpi_gpio_chip *acpi_gpio) ··· 1292 1283 } 1293 1284 /* We must use _sync so that this runs after the first deferred_probe run */ 1294 1285 late_initcall_sync(acpi_gpio_handle_deferred_request_irqs); 1286 + 1287 + static const struct dmi_system_id run_edge_events_on_boot_blacklist[] = { 1288 + { 1289 + .matches = { 1290 + DMI_MATCH(DMI_SYS_VENDOR, "MINIX"), 1291 + DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"), 1292 + } 1293 + }, 1294 + {} /* Terminating entry */ 1295 + }; 1296 + 1297 + static int acpi_gpio_setup_params(void) 1298 + { 1299 + if (run_edge_events_on_boot < 0) { 1300 + if (dmi_check_system(run_edge_events_on_boot_blacklist)) 1301 + run_edge_events_on_boot = 0; 1302 + else 1303 + run_edge_events_on_boot = 1; 1304 + } 1305 + 1306 + return 0; 1307 + } 1308 + 1309 + /* Directly after dmi_setup() which runs as core_initcall() */ 1310 + postcore_initcall(acpi_gpio_setup_params);
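Aside: run_edge_events_on_boot uses the tri-state module-parameter idiom: -1 means "decide automatically", resolved once at init time (here via a DMI blacklist). Sketched with a hypothetical predicate standing in for the DMI check:

    #include <linux/types.h>

    static bool demo_machine_is_quirky(void);   /* hypothetical */

    static int demo_param = -1;     /* 0=no, 1=yes, -1=auto */

    static int demo_resolve_param(void)
    {
            if (demo_param < 0)
                    demo_param = demo_machine_is_quirky() ? 0 : 1;
            return 0;
    }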
+9 -18
drivers/gpio/gpiolib-of.c
··· 343 343 344 344 desc = of_get_named_gpiod_flags(dev->of_node, prop_name, idx, 345 345 &of_flags); 346 - /* 347 - * -EPROBE_DEFER in our case means that we found a 348 - * valid GPIO property, but no controller has been 349 - * registered so far. 350 - * 351 - * This means we don't need to look any further for 352 - * alternate name conventions, and we should really 353 - * preserve the return code for our user to be able to 354 - * retry probing later. 355 - */ 356 - if (IS_ERR(desc) && PTR_ERR(desc) == -EPROBE_DEFER) 357 - return desc; 358 346 359 - if (!IS_ERR(desc) || (PTR_ERR(desc) != -ENOENT)) 347 + if (!IS_ERR(desc) || PTR_ERR(desc) != -ENOENT) 360 348 break; 361 349 } 362 350 363 - /* Special handling for SPI GPIOs if used */ 364 - if (IS_ERR(desc)) 351 + if (IS_ERR(desc) && PTR_ERR(desc) == -ENOENT) { 352 + /* Special handling for SPI GPIOs if used */ 365 353 desc = of_find_spi_gpio(dev, con_id, &of_flags); 366 - if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER) { 354 + } 355 + 356 + if (IS_ERR(desc) && PTR_ERR(desc) == -ENOENT) { 367 357 /* This quirk looks up flags and all */ 368 358 desc = of_find_spi_cs_gpio(dev, con_id, idx, flags); 369 359 if (!IS_ERR(desc)) 370 360 return desc; 371 361 } 372 362 373 - /* Special handling for regulator GPIOs if used */ 374 - if (IS_ERR(desc) && PTR_ERR(desc) != -EPROBE_DEFER) 363 + if (IS_ERR(desc) && PTR_ERR(desc) == -ENOENT) { 364 + /* Special handling for regulator GPIOs if used */ 375 365 desc = of_find_regulator_gpio(dev, con_id, &of_flags); 366 + } 376 367 377 368 if (IS_ERR(desc)) 378 369 return desc;
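Aside: the rewritten lookup chain runs each fallback quirk only when the previous step returned -ENOENT ("no such mapping"), so real errors such as -EPROBE_DEFER propagate to the caller instead of being swallowed by a quirk. The shape of it, with hypothetical helpers:

    #include <linux/err.h>
    #include <linux/errno.h>

    struct device;
    struct gpio_desc;

    static struct gpio_desc *demo_primary_lookup(struct device *dev);
    static struct gpio_desc *demo_quirk_a(struct device *dev);
    static struct gpio_desc *demo_quirk_b(struct device *dev);

    static struct gpio_desc *demo_lookup(struct device *dev)
    {
            struct gpio_desc *desc = demo_primary_lookup(dev);

            if (IS_ERR(desc) && PTR_ERR(desc) == -ENOENT)
                    desc = demo_quirk_a(dev);

            if (IS_ERR(desc) && PTR_ERR(desc) == -ENOENT)
                    desc = demo_quirk_b(dev);

            return desc;    /* -EPROBE_DEFER etc. pass through */
    }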
+11 -5
drivers/gpio/gpiolib.c
··· 536 536 return -EINVAL; 537 537 538 538 /* 539 + * Do not allow both INPUT & OUTPUT flags to be set as they are 540 + * contradictory. 541 + */ 542 + if ((lflags & GPIOHANDLE_REQUEST_INPUT) && 543 + (lflags & GPIOHANDLE_REQUEST_OUTPUT)) 544 + return -EINVAL; 545 + 546 + /* 539 547 * Do not allow OPEN_SOURCE & OPEN_DRAIN flags in a single request. If 540 548 * the hardware actually supports enabling both at the same time the 541 549 * electrical result would be disastrous. ··· 934 926 } 935 927 936 928 /* This is just wrong: we don't look for events on output lines */ 937 - if (lflags & GPIOHANDLE_REQUEST_OUTPUT) { 929 + if ((lflags & GPIOHANDLE_REQUEST_OUTPUT) || 930 + (lflags & GPIOHANDLE_REQUEST_OPEN_DRAIN) || 931 + (lflags & GPIOHANDLE_REQUEST_OPEN_SOURCE)) { 938 932 ret = -EINVAL; 939 933 goto out_free_label; 940 934 } ··· 950 940 951 941 if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW) 952 942 set_bit(FLAG_ACTIVE_LOW, &desc->flags); 953 - if (lflags & GPIOHANDLE_REQUEST_OPEN_DRAIN) 954 - set_bit(FLAG_OPEN_DRAIN, &desc->flags); 955 - if (lflags & GPIOHANDLE_REQUEST_OPEN_SOURCE) 956 - set_bit(FLAG_OPEN_SOURCE, &desc->flags); 957 943 958 944 ret = gpiod_direction_input(desc); 959 945 if (ret)
+48 -6
drivers/gpu/drm/drm_modes.c
··· 1454 1454 } 1455 1455 1456 1456 static int drm_mode_parse_cmdline_extra(const char *str, int length, 1457 + bool freestanding, 1457 1458 const struct drm_connector *connector, 1458 1459 struct drm_cmdline_mode *mode) 1459 1460 { ··· 1463 1462 for (i = 0; i < length; i++) { 1464 1463 switch (str[i]) { 1465 1464 case 'i': 1465 + if (freestanding) 1466 + return -EINVAL; 1467 + 1466 1468 mode->interlace = true; 1467 1469 break; 1468 1470 case 'm': 1471 + if (freestanding) 1472 + return -EINVAL; 1473 + 1469 1474 mode->margins = true; 1470 1475 break; 1471 1476 case 'D': ··· 1549 1542 if (extras) { 1550 1543 int ret = drm_mode_parse_cmdline_extra(end_ptr + i, 1551 1544 1, 1545 + false, 1552 1546 connector, 1553 1547 mode); 1554 1548 if (ret) ··· 1677 1669 return 0; 1678 1670 } 1679 1671 1672 + static const char * const drm_named_modes_whitelist[] = { 1673 + "NTSC", 1674 + "PAL", 1675 + }; 1676 + 1677 + static bool drm_named_mode_is_in_whitelist(const char *mode, unsigned int size) 1678 + { 1679 + int i; 1680 + 1681 + for (i = 0; i < ARRAY_SIZE(drm_named_modes_whitelist); i++) 1682 + if (!strncmp(mode, drm_named_modes_whitelist[i], size)) 1683 + return true; 1684 + 1685 + return false; 1686 + } 1687 + 1680 1688 /** 1681 1689 * drm_mode_parse_command_line_for_connector - parse command line modeline for connector 1682 1690 * @mode_option: optional per connector mode option ··· 1749 1725 * bunch of things: 1750 1726 * - We need to make sure that the first character (which 1751 1727 * would be our resolution in X) is a digit. 1752 - * - However, if the X resolution is missing, then we end up 1753 - * with something like x<yres>, with our first character 1754 - * being an alpha-numerical character, which would be 1755 - * considered a named mode. 1728 + * - If not, then it's either a named mode or a force on/off. 1729 + * To distinguish between the two, we need to run the 1730 + * extra parsing function, and if not, then we consider it 1731 + * a named mode. 1756 1732 * 1757 1733 * If this isn't enough, we should add more heuristics here, 1758 1734 * and matching unit-tests. 1759 1735 */ 1760 - if (!isdigit(name[0]) && name[0] != 'x') 1736 + if (!isdigit(name[0]) && name[0] != 'x') { 1737 + unsigned int namelen = strlen(name); 1738 + 1739 + /* 1740 + * Only the force on/off options can be in that case, 1741 + * and they all take a single character. 1742 + */ 1743 + if (namelen == 1) { 1744 + ret = drm_mode_parse_cmdline_extra(name, namelen, true, 1745 + connector, mode); 1746 + if (!ret) 1747 + return true; 1748 + } 1749 + 1761 1750 named_mode = true; 1751 + } 1762 1752 1763 1753 /* Try to locate the bpp and refresh specifiers, if any */ 1764 1754 bpp_ptr = strchr(name, '-'); ··· 1810 1772 if (named_mode) { 1811 1773 if (mode_end + 1 > DRM_DISPLAY_MODE_LEN) 1812 1774 return false; 1775 + 1776 + if (!drm_named_mode_is_in_whitelist(name, mode_end)) 1777 + return false; 1778 + 1813 1779 strscpy(mode->name, name, mode_end + 1); 1814 1780 } else { 1815 1781 ret = drm_mode_parse_cmdline_res_mode(name, mode_end, ··· 1853 1811 extra_ptr != options_ptr) { 1854 1812 int len = strlen(name) - (extra_ptr - name); 1855 1813 1856 - ret = drm_mode_parse_cmdline_extra(extra_ptr, len, 1814 + ret = drm_mode_parse_cmdline_extra(extra_ptr, len, false, 1857 1815 connector, mode); 1858 1816 if (ret) 1859 1817 return false;
+9 -1
drivers/gpu/drm/i915/display/intel_dp_mst.c
··· 128 128 limits.max_lane_count = intel_dp_max_lane_count(intel_dp); 129 129 130 130 limits.min_bpp = intel_dp_min_bpp(pipe_config); 131 - limits.max_bpp = pipe_config->pipe_bpp; 131 + /* 132 + * FIXME: If all the streams can't fit into the link with 133 + * their current pipe_bpp we should reduce pipe_bpp across 134 + * the board until things start to fit. Until then we 135 + * limit to <= 8bpc since that's what was hardcoded for all 136 + * MST streams previously. This hack should be removed once 137 + * we have the proper retry logic in place. 138 + */ 139 + limits.max_bpp = min(pipe_config->pipe_bpp, 24); 132 140 133 141 intel_dp_adjust_compliance_config(intel_dp, pipe_config, &limits); 134 142
+1 -9
drivers/gpu/drm/i915/gem/i915_gem_userptr.c
··· 664 664 665 665 for_each_sgt_page(page, sgt_iter, pages) { 666 666 if (obj->mm.dirty) 667 - /* 668 - * As this may not be anonymous memory (e.g. shmem) 669 - * but exist on a real mapping, we have to lock 670 - * the page in order to dirty it -- holding 671 - * the page reference is not sufficient to 672 - * prevent the inode from being truncated. 673 - * Play safe and take the lock. 674 - */ 675 - set_page_dirty_lock(page); 667 + set_page_dirty(page); 676 668 677 669 mark_page_accessed(page); 678 670 put_page(page);
-5
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 308 308 FLOW_CONTROL_ENABLE | 309 309 PARTIAL_INSTRUCTION_SHOOTDOWN_DISABLE); 310 310 311 - /* Syncing dependencies between camera and graphics:skl,bxt,kbl */ 312 - if (!IS_COFFEELAKE(i915)) 313 - WA_SET_BIT_MASKED(HALF_SLICE_CHICKEN3, 314 - GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC); 315 - 316 311 /* WaEnableYV12BugFixInHalfSliceChicken7:skl,bxt,kbl,glk,cfl */ 317 312 /* WaEnableSamplerGPGPUPreemptionSupport:skl,bxt,kbl,cfl */ 318 313 WA_SET_BIT_MASKED(GEN9_HALF_SLICE_CHICKEN7,
+2 -3
drivers/gpu/drm/ingenic/ingenic-drm.c
··· 656 656 return ret; 657 657 } 658 658 659 - if (panel) { 659 + if (panel) 660 660 bridge = devm_drm_panel_bridge_add(dev, panel, 661 - DRM_MODE_CONNECTOR_Unknown); 662 - } 661 + DRM_MODE_CONNECTOR_DPI); 663 662 664 663 priv->dma_hwdesc = dma_alloc_coherent(dev, sizeof(*priv->dma_hwdesc), 665 664 &priv->dma_hwdesc_phys,
+1 -1
drivers/gpu/drm/lima/lima_gem.c
··· 342 342 timeout = drm_timeout_abs_to_jiffies(timeout_ns); 343 343 344 344 ret = drm_gem_reservation_object_wait(file, handle, write, timeout); 345 - if (ret == 0) 345 + if (ret == -ETIME) 346 346 ret = timeout ? -ETIMEDOUT : -EBUSY; 347 347 348 348 return ret;
+12
drivers/gpu/drm/nouveau/nvkm/subdev/secboot/gp102.c
··· 190 190 MODULE_FIRMWARE("nvidia/gp102/sec2/desc.bin"); 191 191 MODULE_FIRMWARE("nvidia/gp102/sec2/image.bin"); 192 192 MODULE_FIRMWARE("nvidia/gp102/sec2/sig.bin"); 193 + MODULE_FIRMWARE("nvidia/gp102/sec2/desc-1.bin"); 194 + MODULE_FIRMWARE("nvidia/gp102/sec2/image-1.bin"); 195 + MODULE_FIRMWARE("nvidia/gp102/sec2/sig-1.bin"); 193 196 MODULE_FIRMWARE("nvidia/gp104/acr/bl.bin"); 194 197 MODULE_FIRMWARE("nvidia/gp104/acr/unload_bl.bin"); 195 198 MODULE_FIRMWARE("nvidia/gp104/acr/ucode_load.bin"); ··· 213 210 MODULE_FIRMWARE("nvidia/gp104/sec2/desc.bin"); 214 211 MODULE_FIRMWARE("nvidia/gp104/sec2/image.bin"); 215 212 MODULE_FIRMWARE("nvidia/gp104/sec2/sig.bin"); 213 + MODULE_FIRMWARE("nvidia/gp104/sec2/desc-1.bin"); 214 + MODULE_FIRMWARE("nvidia/gp104/sec2/image-1.bin"); 215 + MODULE_FIRMWARE("nvidia/gp104/sec2/sig-1.bin"); 216 216 MODULE_FIRMWARE("nvidia/gp106/acr/bl.bin"); 217 217 MODULE_FIRMWARE("nvidia/gp106/acr/unload_bl.bin"); 218 218 MODULE_FIRMWARE("nvidia/gp106/acr/ucode_load.bin"); ··· 236 230 MODULE_FIRMWARE("nvidia/gp106/sec2/desc.bin"); 237 231 MODULE_FIRMWARE("nvidia/gp106/sec2/image.bin"); 238 232 MODULE_FIRMWARE("nvidia/gp106/sec2/sig.bin"); 233 + MODULE_FIRMWARE("nvidia/gp106/sec2/desc-1.bin"); 234 + MODULE_FIRMWARE("nvidia/gp106/sec2/image-1.bin"); 235 + MODULE_FIRMWARE("nvidia/gp106/sec2/sig-1.bin"); 239 236 MODULE_FIRMWARE("nvidia/gp107/acr/bl.bin"); 240 237 MODULE_FIRMWARE("nvidia/gp107/acr/unload_bl.bin"); 241 238 MODULE_FIRMWARE("nvidia/gp107/acr/ucode_load.bin"); ··· 259 250 MODULE_FIRMWARE("nvidia/gp107/sec2/desc.bin"); 260 251 MODULE_FIRMWARE("nvidia/gp107/sec2/image.bin"); 261 252 MODULE_FIRMWARE("nvidia/gp107/sec2/sig.bin"); 253 + MODULE_FIRMWARE("nvidia/gp107/sec2/desc-1.bin"); 254 + MODULE_FIRMWARE("nvidia/gp107/sec2/image-1.bin"); 255 + MODULE_FIRMWARE("nvidia/gp107/sec2/sig-1.bin");
+7
drivers/gpu/drm/selftests/drm_cmdline_selftests.h
··· 9 9 10 10 #define cmdline_test(test) selftest(test, test) 11 11 12 + cmdline_test(drm_cmdline_test_force_d_only) 13 + cmdline_test(drm_cmdline_test_force_D_only_dvi) 14 + cmdline_test(drm_cmdline_test_force_D_only_hdmi) 15 + cmdline_test(drm_cmdline_test_force_D_only_not_digital) 16 + cmdline_test(drm_cmdline_test_force_e_only) 17 + cmdline_test(drm_cmdline_test_margin_only) 18 + cmdline_test(drm_cmdline_test_interlace_only) 12 19 cmdline_test(drm_cmdline_test_res) 13 20 cmdline_test(drm_cmdline_test_res_missing_x) 14 21 cmdline_test(drm_cmdline_test_res_missing_y)
+130
drivers/gpu/drm/selftests/test-drm_cmdline_parser.c
··· 17 17 18 18 static const struct drm_connector no_connector = {}; 19 19 20 + static int drm_cmdline_test_force_e_only(void *ignored) 21 + { 22 + struct drm_cmdline_mode mode = { }; 23 + 24 + FAIL_ON(!drm_mode_parse_command_line_for_connector("e", 25 + &no_connector, 26 + &mode)); 27 + FAIL_ON(mode.specified); 28 + FAIL_ON(mode.refresh_specified); 29 + FAIL_ON(mode.bpp_specified); 30 + 31 + FAIL_ON(mode.rb); 32 + FAIL_ON(mode.cvt); 33 + FAIL_ON(mode.interlace); 34 + FAIL_ON(mode.margins); 35 + FAIL_ON(mode.force != DRM_FORCE_ON); 36 + 37 + return 0; 38 + } 39 + 40 + static int drm_cmdline_test_force_D_only_not_digital(void *ignored) 41 + { 42 + struct drm_cmdline_mode mode = { }; 43 + 44 + FAIL_ON(!drm_mode_parse_command_line_for_connector("D", 45 + &no_connector, 46 + &mode)); 47 + FAIL_ON(mode.specified); 48 + FAIL_ON(mode.refresh_specified); 49 + FAIL_ON(mode.bpp_specified); 50 + 51 + FAIL_ON(mode.rb); 52 + FAIL_ON(mode.cvt); 53 + FAIL_ON(mode.interlace); 54 + FAIL_ON(mode.margins); 55 + FAIL_ON(mode.force != DRM_FORCE_ON); 56 + 57 + return 0; 58 + } 59 + 60 + static const struct drm_connector connector_hdmi = { 61 + .connector_type = DRM_MODE_CONNECTOR_HDMIB, 62 + }; 63 + 64 + static int drm_cmdline_test_force_D_only_hdmi(void *ignored) 65 + { 66 + struct drm_cmdline_mode mode = { }; 67 + 68 + FAIL_ON(!drm_mode_parse_command_line_for_connector("D", 69 + &connector_hdmi, 70 + &mode)); 71 + FAIL_ON(mode.specified); 72 + FAIL_ON(mode.refresh_specified); 73 + FAIL_ON(mode.bpp_specified); 74 + 75 + FAIL_ON(mode.rb); 76 + FAIL_ON(mode.cvt); 77 + FAIL_ON(mode.interlace); 78 + FAIL_ON(mode.margins); 79 + FAIL_ON(mode.force != DRM_FORCE_ON_DIGITAL); 80 + 81 + return 0; 82 + } 83 + 84 + static const struct drm_connector connector_dvi = { 85 + .connector_type = DRM_MODE_CONNECTOR_DVII, 86 + }; 87 + 88 + static int drm_cmdline_test_force_D_only_dvi(void *ignored) 89 + { 90 + struct drm_cmdline_mode mode = { }; 91 + 92 + FAIL_ON(!drm_mode_parse_command_line_for_connector("D", 93 + &connector_dvi, 94 + &mode)); 95 + FAIL_ON(mode.specified); 96 + FAIL_ON(mode.refresh_specified); 97 + FAIL_ON(mode.bpp_specified); 98 + 99 + FAIL_ON(mode.rb); 100 + FAIL_ON(mode.cvt); 101 + FAIL_ON(mode.interlace); 102 + FAIL_ON(mode.margins); 103 + FAIL_ON(mode.force != DRM_FORCE_ON_DIGITAL); 104 + 105 + return 0; 106 + } 107 + 108 + static int drm_cmdline_test_force_d_only(void *ignored) 109 + { 110 + struct drm_cmdline_mode mode = { }; 111 + 112 + FAIL_ON(!drm_mode_parse_command_line_for_connector("d", 113 + &no_connector, 114 + &mode)); 115 + FAIL_ON(mode.specified); 116 + FAIL_ON(mode.refresh_specified); 117 + FAIL_ON(mode.bpp_specified); 118 + 119 + FAIL_ON(mode.rb); 120 + FAIL_ON(mode.cvt); 121 + FAIL_ON(mode.interlace); 122 + FAIL_ON(mode.margins); 123 + FAIL_ON(mode.force != DRM_FORCE_OFF); 124 + 125 + return 0; 126 + } 127 + 128 + static int drm_cmdline_test_margin_only(void *ignored) 129 + { 130 + struct drm_cmdline_mode mode = { }; 131 + 132 + FAIL_ON(drm_mode_parse_command_line_for_connector("m", 133 + &no_connector, 134 + &mode)); 135 + 136 + return 0; 137 + } 138 + 139 + static int drm_cmdline_test_interlace_only(void *ignored) 140 + { 141 + struct drm_cmdline_mode mode = { }; 142 + 143 + FAIL_ON(drm_mode_parse_command_line_for_connector("i", 144 + &no_connector, 145 + &mode)); 146 + 147 + return 0; 148 + } 149 + 20 150 static int drm_cmdline_test_res(void *ignored) 21 151 { 22 152 struct drm_cmdline_mode mode = { };
+3 -5
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
··· 353 353 !!(HIGH_WORD(ecx) & MESSAGE_STATUS_HB)); 354 354 if ((HIGH_WORD(ebx) & MESSAGE_STATUS_SUCCESS) == 0) { 355 355 kfree(reply); 356 - 356 + reply = NULL; 357 357 if ((HIGH_WORD(ebx) & MESSAGE_STATUS_CPT) != 0) { 358 358 /* A checkpoint occurred. Retry. */ 359 359 continue; ··· 377 377 378 378 if ((HIGH_WORD(ecx) & MESSAGE_STATUS_SUCCESS) == 0) { 379 379 kfree(reply); 380 - 380 + reply = NULL; 381 381 if ((HIGH_WORD(ecx) & MESSAGE_STATUS_CPT) != 0) { 382 382 /* A checkpoint occurred. Retry. */ 383 383 continue; ··· 389 389 break; 390 390 } 391 391 392 - if (retries == RETRIES) { 393 - kfree(reply); 392 + if (!reply) 394 393 return -EINVAL; 395 - } 396 394 397 395 *msg_len = reply_len; 398 396 *msg = reply;
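Aside: the fix turns the retry loop into the standard free-and-NULL pattern and keys the final error check off the pointer, so the old `retries == RETRIES` path can no longer free `reply` a second time. A condensed sketch with hypothetical helpers:

    #include <linux/errno.h>
    #include <linux/slab.h>

    #define RETRIES 3

    static void *demo_try_recv(size_t *len);    /* hypothetical */
    static bool demo_checkpoint_hit(void);      /* hypothetical */

    static int demo_receive(void **msg, size_t *msg_len)
    {
            size_t reply_len = 0;
            void *reply = NULL;
            int retries;

            for (retries = 0; retries < RETRIES; retries++) {
                    reply = demo_try_recv(&reply_len);
                    if (demo_checkpoint_hit()) {
                            kfree(reply);
                            reply = NULL;   /* cannot be freed twice */
                            continue;
                    }
                    break;
            }

            if (!reply)     /* every attempt hit a checkpoint */
                    return -EINVAL;

            *msg = reply;
            *msg_len = reply_len;
            return 0;
    }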
+35 -5
drivers/iommu/amd_iommu.c
··· 1143 1143 iommu_completion_wait(iommu); 1144 1144 } 1145 1145 1146 + static void amd_iommu_flush_tlb_domid(struct amd_iommu *iommu, u32 dom_id) 1147 + { 1148 + struct iommu_cmd cmd; 1149 + 1150 + build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS, 1151 + dom_id, 1); 1152 + iommu_queue_command(iommu, &cmd); 1153 + 1154 + iommu_completion_wait(iommu); 1155 + } 1156 + 1146 1157 static void amd_iommu_flush_all(struct amd_iommu *iommu) 1147 1158 { 1148 1159 struct iommu_cmd cmd; ··· 1435 1424 * another level increases the size of the address space by 9 bits to a size up 1436 1425 * to 64 bits. 1437 1426 */ 1438 - static bool increase_address_space(struct protection_domain *domain, 1427 + static void increase_address_space(struct protection_domain *domain, 1439 1428 gfp_t gfp) 1440 1429 { 1430 + unsigned long flags; 1441 1431 u64 *pte; 1442 1432 1443 - if (domain->mode == PAGE_MODE_6_LEVEL) 1433 + spin_lock_irqsave(&domain->lock, flags); 1434 + 1435 + if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL)) 1444 1436 /* address space already 64 bit large */ 1445 - return false; 1437 + goto out; 1446 1438 1447 1439 pte = (void *)get_zeroed_page(gfp); 1448 1440 if (!pte) 1449 - return false; 1441 + goto out; 1450 1442 1451 1443 *pte = PM_LEVEL_PDE(domain->mode, 1452 1444 iommu_virt_to_phys(domain->pt_root)); ··· 1457 1443 domain->mode += 1; 1458 1444 domain->updated = true; 1459 1445 1460 - return true; 1446 + out: 1447 + spin_unlock_irqrestore(&domain->lock, flags); 1448 + 1449 + return; 1461 1450 } 1462 1451 1463 1452 static u64 *alloc_pte(struct protection_domain *domain, ··· 1890 1873 { 1891 1874 u64 pte_root = 0; 1892 1875 u64 flags = 0; 1876 + u32 old_domid; 1893 1877 1894 1878 if (domain->mode != PAGE_MODE_NONE) 1895 1879 pte_root = iommu_virt_to_phys(domain->pt_root); ··· 1940 1922 flags &= ~DEV_DOMID_MASK; 1941 1923 flags |= domain->id; 1942 1924 1925 + old_domid = amd_iommu_dev_table[devid].data[1] & DEV_DOMID_MASK; 1943 1926 amd_iommu_dev_table[devid].data[1] = flags; 1944 1927 amd_iommu_dev_table[devid].data[0] = pte_root; 1928 + 1929 + /* 1930 + * A kdump kernel might be replacing a domain ID that was copied from 1931 + * the previous kernel--if so, it needs to flush the translation cache 1932 + * entries for the old domain ID that is being overwritten 1933 + */ 1934 + if (old_domid) { 1935 + struct amd_iommu *iommu = amd_iommu_rlookup_table[devid]; 1936 + 1937 + amd_iommu_flush_tlb_domid(iommu, old_domid); 1938 + } 1945 1939 } 1946 1940 1947 1941 static void clear_dte_entry(u16 devid)
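Aside: the increase_address_space() change is a check-then-act race fix: the mode test and the root-table swap now happen under domain->lock, so two concurrent faults cannot both pass the PAGE_MODE_6_LEVEL check and grow the table twice. The generic shape, with hypothetical names:

    #include <linux/bug.h>
    #include <linux/spinlock.h>

    #define DEMO_MAX_LEVEL 6

    struct demo_domain {
            spinlock_t lock;
            int mode;               /* current page-table depth */
    };

    static void demo_increase_address_space(struct demo_domain *d)
    {
            unsigned long flags;

            spin_lock_irqsave(&d->lock, flags);
            if (WARN_ON_ONCE(d->mode == DEMO_MAX_LEVEL))
                    goto out;       /* already as deep as allowed */
            /* ... allocate a new root, link old root beneath it ... */
            d->mode += 1;
    out:
            spin_unlock_irqrestore(&d->lock, flags);
    }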
+53 -2
drivers/iommu/intel-iommu.c
··· 339 339 static void domain_remove_dev_info(struct dmar_domain *domain); 340 340 static void dmar_remove_one_dev_info(struct device *dev); 341 341 static void __dmar_remove_one_dev_info(struct device_domain_info *info); 342 + static void domain_context_clear(struct intel_iommu *iommu, 343 + struct device *dev); 342 344 static int domain_detach_iommu(struct dmar_domain *domain, 343 345 struct intel_iommu *iommu); 344 346 static bool device_is_rmrr_locked(struct device *dev); ··· 2107 2105 return ret; 2108 2106 } 2109 2107 2108 + struct domain_context_mapping_data { 2109 + struct dmar_domain *domain; 2110 + struct intel_iommu *iommu; 2111 + struct pasid_table *table; 2112 + }; 2113 + 2114 + static int domain_context_mapping_cb(struct pci_dev *pdev, 2115 + u16 alias, void *opaque) 2116 + { 2117 + struct domain_context_mapping_data *data = opaque; 2118 + 2119 + return domain_context_mapping_one(data->domain, data->iommu, 2120 + data->table, PCI_BUS_NUM(alias), 2121 + alias & 0xff); 2122 + } 2123 + 2110 2124 static int 2111 2125 domain_context_mapping(struct dmar_domain *domain, struct device *dev) 2112 2126 { 2127 + struct domain_context_mapping_data data; 2113 2128 struct pasid_table *table; 2114 2129 struct intel_iommu *iommu; 2115 2130 u8 bus, devfn; ··· 2136 2117 return -ENODEV; 2137 2118 2138 2119 table = intel_pasid_get_table(dev); 2139 - return domain_context_mapping_one(domain, iommu, table, bus, devfn); 2120 + 2121 + if (!dev_is_pci(dev)) 2122 + return domain_context_mapping_one(domain, iommu, table, 2123 + bus, devfn); 2124 + 2125 + data.domain = domain; 2126 + data.iommu = iommu; 2127 + data.table = table; 2128 + 2129 + return pci_for_each_dma_alias(to_pci_dev(dev), 2130 + &domain_context_mapping_cb, &data); 2140 2131 } 2141 2132 2142 2133 static int domain_context_mapped_cb(struct pci_dev *pdev, ··· 4788 4759 return ret; 4789 4760 } 4790 4761 4762 + static int domain_context_clear_one_cb(struct pci_dev *pdev, u16 alias, void *opaque) 4763 + { 4764 + struct intel_iommu *iommu = opaque; 4765 + 4766 + domain_context_clear_one(iommu, PCI_BUS_NUM(alias), alias & 0xff); 4767 + return 0; 4768 + } 4769 + 4770 + /* 4771 + * NB - intel-iommu lacks any sort of reference counting for the users of 4772 + * dependent devices. If multiple endpoints have intersecting dependent 4773 + * devices, unbinding the driver from any one of them will possibly leave 4774 + * the others unable to operate. 4775 + */ 4776 + static void domain_context_clear(struct intel_iommu *iommu, struct device *dev) 4777 + { 4778 + if (!iommu || !dev || !dev_is_pci(dev)) 4779 + return; 4780 + 4781 + pci_for_each_dma_alias(to_pci_dev(dev), &domain_context_clear_one_cb, iommu); 4782 + } 4783 + 4791 4784 static void __dmar_remove_one_dev_info(struct device_domain_info *info) 4792 4785 { 4793 4786 struct dmar_domain *domain; ··· 4830 4779 PASID_RID2PASID); 4831 4780 4832 4781 iommu_disable_dev_iotlb(info); 4833 - domain_context_clear_one(iommu, info->bus, info->devfn); 4782 + domain_context_clear(iommu, info->dev); 4834 4783 intel_pasid_free_table(info->dev); 4835 4784 } 4836 4785
+15 -21
drivers/iommu/intel-svm.c
··· 100 100 } 101 101 102 102 static void intel_flush_svm_range_dev (struct intel_svm *svm, struct intel_svm_dev *sdev, 103 - unsigned long address, unsigned long pages, int ih, int gl) 103 + unsigned long address, unsigned long pages, int ih) 104 104 { 105 105 struct qi_desc desc; 106 106 107 - if (pages == -1) { 108 - /* For global kernel pages we have to flush them in *all* PASIDs 109 - * because that's the only option the hardware gives us. Despite 110 - * the fact that they are actually only accessible through one. */ 111 - if (gl) 112 - desc.qw0 = QI_EIOTLB_PASID(svm->pasid) | 113 - QI_EIOTLB_DID(sdev->did) | 114 - QI_EIOTLB_GRAN(QI_GRAN_ALL_ALL) | 115 - QI_EIOTLB_TYPE; 116 - else 117 - desc.qw0 = QI_EIOTLB_PASID(svm->pasid) | 118 - QI_EIOTLB_DID(sdev->did) | 119 - QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | 120 - QI_EIOTLB_TYPE; 107 + /* 108 + * Do PASID granu IOTLB invalidation if page selective capability is 109 + * not available. 110 + */ 111 + if (pages == -1 || !cap_pgsel_inv(svm->iommu->cap)) { 112 + desc.qw0 = QI_EIOTLB_PASID(svm->pasid) | 113 + QI_EIOTLB_DID(sdev->did) | 114 + QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) | 115 + QI_EIOTLB_TYPE; 121 116 desc.qw1 = 0; 122 117 } else { 123 118 int mask = ilog2(__roundup_pow_of_two(pages)); ··· 122 127 QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) | 123 128 QI_EIOTLB_TYPE; 124 129 desc.qw1 = QI_EIOTLB_ADDR(address) | 125 - QI_EIOTLB_GL(gl) | 126 130 QI_EIOTLB_IH(ih) | 127 131 QI_EIOTLB_AM(mask); 128 132 } ··· 156 162 } 157 163 158 164 static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address, 159 - unsigned long pages, int ih, int gl) 165 + unsigned long pages, int ih) 160 166 { 161 167 struct intel_svm_dev *sdev; 162 168 163 169 rcu_read_lock(); 164 170 list_for_each_entry_rcu(sdev, &svm->devs, list) 165 - intel_flush_svm_range_dev(svm, sdev, address, pages, ih, gl); 171 + intel_flush_svm_range_dev(svm, sdev, address, pages, ih); 166 172 rcu_read_unlock(); 167 173 } 168 174 ··· 174 180 struct intel_svm *svm = container_of(mn, struct intel_svm, notifier); 175 181 176 182 intel_flush_svm_range(svm, start, 177 - (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0, 0); 183 + (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 0); 178 184 } 179 185 180 186 static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm) ··· 197 203 rcu_read_lock(); 198 204 list_for_each_entry_rcu(sdev, &svm->devs, list) { 199 205 intel_pasid_tear_down_entry(svm->iommu, sdev->dev, svm->pasid); 200 - intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm); 206 + intel_flush_svm_range_dev(svm, sdev, 0, -1, 0); 201 207 } 202 208 rcu_read_unlock(); 203 209 ··· 419 425 * large and has to be physically contiguous. So it's 420 426 * hard to be as defensive as we might like. */ 421 427 intel_pasid_tear_down_entry(iommu, dev, svm->pasid); 422 - intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm); 428 + intel_flush_svm_range_dev(svm, sdev, 0, -1, 0); 423 429 kfree_rcu(sdev, rcu); 424 430 425 431 if (list_empty(&svm->devs)) {
+9 -1
drivers/isdn/capi/capi.c
··· 688 688 if (!cdev->ap.applid) 689 689 return -ENODEV; 690 690 691 + if (count < CAPIMSG_BASELEN) 692 + return -EINVAL; 693 + 691 694 skb = alloc_skb(count, GFP_USER); 692 695 if (!skb) 693 696 return -ENOMEM; ··· 701 698 } 702 699 mlen = CAPIMSG_LEN(skb->data); 703 700 if (CAPIMSG_CMD(skb->data) == CAPI_DATA_B3_REQ) { 704 - if ((size_t)(mlen + CAPIMSG_DATALEN(skb->data)) != count) { 701 + if (count < CAPI_DATA_B3_REQ_LEN || 702 + (size_t)(mlen + CAPIMSG_DATALEN(skb->data)) != count) { 705 703 kfree_skb(skb); 706 704 return -EINVAL; 707 705 } ··· 715 711 CAPIMSG_SETAPPID(skb->data, cdev->ap.applid); 716 712 717 713 if (CAPIMSG_CMD(skb->data) == CAPI_DISCONNECT_B3_RESP) { 714 + if (count < CAPI_DISCONNECT_B3_RESP_LEN) { 715 + kfree_skb(skb); 716 + return -EINVAL; 717 + } 718 718 mutex_lock(&cdev->lock); 719 719 capincci_free(cdev, CAPIMSG_NCCI(skb->data)); 720 720 mutex_unlock(&cdev->lock);
+1 -1
drivers/mmc/core/mmc_ops.c
··· 564 564 if (index == EXT_CSD_SANITIZE_START) 565 565 cmd.sanitize_busy = true; 566 566 567 - err = mmc_wait_for_cmd(host, &cmd, 0); 567 + err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES); 568 568 if (err) 569 569 goto out; 570 570
+1 -1
drivers/mmc/host/bcm2835.c
··· 597 597 struct dma_chan *terminate_chan = NULL; 598 598 struct mmc_request *mrq; 599 599 600 - cancel_delayed_work_sync(&host->timeout_work); 600 + cancel_delayed_work(&host->timeout_work); 601 601 602 602 mrq = host->mrq; 603 603
-6
drivers/mmc/host/renesas_sdhi_core.c
··· 774 774 /* All SDHI have SDIO status bits which must be 1 */ 775 775 mmc_data->flags |= TMIO_MMC_SDIO_STATUS_SETBITS; 776 776 777 - pm_runtime_enable(&pdev->dev); 778 - 779 777 ret = renesas_sdhi_clk_enable(host); 780 778 if (ret) 781 779 goto efree; ··· 854 856 efree: 855 857 tmio_mmc_host_free(host); 856 858 857 - pm_runtime_disable(&pdev->dev); 858 - 859 859 return ret; 860 860 } 861 861 EXPORT_SYMBOL_GPL(renesas_sdhi_probe); ··· 864 868 865 869 tmio_mmc_host_remove(host); 866 870 renesas_sdhi_clk_disable(host); 867 - 868 - pm_runtime_disable(&pdev->dev); 869 871 870 872 return 0; 871 873 }
+1 -1
drivers/mmc/host/sdhci-pci-o2micro.c
··· 432 432 mmc_hostname(host->mmc)); 433 433 host->flags &= ~SDHCI_SIGNALING_330; 434 434 host->flags |= SDHCI_SIGNALING_180; 435 - host->quirks2 |= SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD; 436 435 host->mmc->caps2 |= MMC_CAP2_NO_SD; 437 436 host->mmc->caps2 |= MMC_CAP2_NO_SDIO; 438 437 pci_write_config_dword(chip->pdev, ··· 681 682 const struct sdhci_pci_fixes sdhci_o2 = { 682 683 .probe = sdhci_pci_o2_probe, 683 684 .quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC, 685 + .quirks2 = SDHCI_QUIRK2_CLEAR_TRANSFERMODE_REG_BEFORE_CMD, 684 686 .probe_slot = sdhci_pci_o2_probe_slot, 685 687 #ifdef CONFIG_PM_SLEEP 686 688 .resume = sdhci_pci_o2_resume,
-5
drivers/mmc/host/tmio_mmc.c
··· 172 172 host->mmc->f_max = pdata->hclk; 173 173 host->mmc->f_min = pdata->hclk / 512; 174 174 175 - pm_runtime_enable(&pdev->dev); 176 - 177 175 ret = tmio_mmc_host_probe(host); 178 176 if (ret) 179 177 goto host_free; ··· 191 193 tmio_mmc_host_remove(host); 192 194 host_free: 193 195 tmio_mmc_host_free(host); 194 - pm_runtime_disable(&pdev->dev); 195 196 cell_disable: 196 197 if (cell->disable) 197 198 cell->disable(pdev); ··· 206 209 tmio_mmc_host_remove(host); 207 210 if (cell->disable) 208 211 cell->disable(pdev); 209 - 210 - pm_runtime_disable(&pdev->dev); 211 212 212 213 return 0; 213 214 }
+1
drivers/mmc/host/tmio_mmc.h
··· 163 163 unsigned long last_req_ts; 164 164 struct mutex ios_lock; /* protect set_ios() context */ 165 165 bool native_hotplug; 166 + bool runtime_synced; 166 167 bool sdio_irq_enabled; 167 168 168 169 /* Mandatory callback */
+14 -13
drivers/mmc/host/tmio_mmc_core.c
··· 1153 1153 } 1154 1154 EXPORT_SYMBOL_GPL(tmio_mmc_host_free); 1155 1155 1156 - /** 1157 - * tmio_mmc_host_probe() - Common probe for all implementations 1158 - * @_host: Host to probe 1159 - * 1160 - * Perform tasks common to all implementations probe functions. 1161 - * 1162 - * The caller should have called pm_runtime_enable() prior to calling 1163 - * the common probe function. 1164 - */ 1165 1156 int tmio_mmc_host_probe(struct tmio_mmc_host *_host) 1166 1157 { 1167 1158 struct platform_device *pdev = _host->pdev; ··· 1248 1257 /* See if we also get DMA */ 1249 1258 tmio_mmc_request_dma(_host, pdata); 1250 1259 1251 - pm_runtime_set_active(&pdev->dev); 1252 1260 pm_runtime_set_autosuspend_delay(&pdev->dev, 50); 1253 1261 pm_runtime_use_autosuspend(&pdev->dev); 1262 + pm_runtime_enable(&pdev->dev); 1263 + pm_runtime_get_sync(&pdev->dev); 1254 1264 1255 1265 ret = mmc_add_host(mmc); 1256 1266 if (ret) 1257 1267 goto remove_host; 1258 1268 1259 1269 dev_pm_qos_expose_latency_limit(&pdev->dev, 100); 1270 + pm_runtime_put(&pdev->dev); 1260 1271 1261 1272 return 0; 1262 1273 1263 1274 remove_host: 1275 + pm_runtime_put_noidle(&pdev->dev); 1264 1276 tmio_mmc_host_remove(_host); 1265 1277 return ret; 1266 1278 } ··· 1274 1280 struct platform_device *pdev = host->pdev; 1275 1281 struct mmc_host *mmc = host->mmc; 1276 1282 1283 + pm_runtime_get_sync(&pdev->dev); 1284 + 1277 1285 if (host->pdata->flags & TMIO_MMC_SDIO_IRQ) 1278 1286 sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0000); 1279 - 1280 - if (!host->native_hotplug) 1281 - pm_runtime_get_sync(&pdev->dev); 1282 1287 1283 1288 dev_pm_qos_hide_latency_limit(&pdev->dev); 1284 1289 ··· 1287 1294 tmio_mmc_release_dma(host); 1288 1295 1289 1296 pm_runtime_dont_use_autosuspend(&pdev->dev); 1297 + if (host->native_hotplug) 1298 + pm_runtime_put_noidle(&pdev->dev); 1290 1299 pm_runtime_put_sync(&pdev->dev); 1300 + pm_runtime_disable(&pdev->dev); 1291 1301 } 1292 1302 EXPORT_SYMBOL_GPL(tmio_mmc_host_remove); 1293 1303 ··· 1332 1336 int tmio_mmc_host_runtime_resume(struct device *dev) 1333 1337 { 1334 1338 struct tmio_mmc_host *host = dev_get_drvdata(dev); 1339 + 1340 + if (!host->runtime_synced) { 1341 + host->runtime_synced = true; 1342 + return 0; 1343 + } 1335 1344 1336 1345 tmio_mmc_clk_enable(host); 1337 1346 tmio_mmc_hw_reset(host->mmc);
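The tmio rework moves ownership of the runtime-PM lifecycle out of the glue drivers (renesas_sdhi, tmio_mmc, uniphier-sd elsewhere in this merge) and into the core: enable runtime PM, pin the device across probe, drop the reference once the host is registered, and balance with put_noidle on failure. A condensed sketch of the pairing, with hypothetical register_things()/unregister_things() standing in for the MMC specifics:

        /* probe: every get is balanced on every path */
        pm_runtime_set_autosuspend_delay(dev, 50);
        pm_runtime_use_autosuspend(dev);
        pm_runtime_enable(dev);
        pm_runtime_get_sync(dev);       /* keep powered while registering */
        ret = register_things(dev);     /* hypothetical */
        if (ret) {
                pm_runtime_put_noidle(dev);
                pm_runtime_disable(dev);
                return ret;
        }
        pm_runtime_put(dev);            /* may autosuspend from here on */

        /* remove: the mirror image, ending disabled and balanced */
        pm_runtime_get_sync(dev);
        unregister_things(dev);         /* hypothetical */
        pm_runtime_put_sync(dev);
        pm_runtime_disable(dev);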
-3
drivers/mmc/host/uniphier-sd.c
··· 631 631 host->clk_disable = uniphier_sd_clk_disable; 632 632 host->set_clock = uniphier_sd_set_clock; 633 633 634 - pm_runtime_enable(&pdev->dev); 635 634 ret = uniphier_sd_clk_enable(host); 636 635 if (ret) 637 636 goto free_host; ··· 652 653 653 654 free_host: 654 655 tmio_mmc_host_free(host); 655 - pm_runtime_disable(&pdev->dev); 656 656 657 657 return ret; 658 658 } ··· 662 664 663 665 tmio_mmc_host_remove(host); 664 666 uniphier_sd_clk_disable(host); 665 - pm_runtime_disable(&pdev->dev); 666 667 667 668 return 0; 668 669 }
+1 -1
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
··· 98 98 .reset_level = HNAE3_GLOBAL_RESET }, 99 99 { .int_msk = BIT(1), .msg = "rx_stp_fifo_overflow", 100 100 .reset_level = HNAE3_GLOBAL_RESET }, 101 - { .int_msk = BIT(2), .msg = "rx_stp_fifo_undeflow", 101 + { .int_msk = BIT(2), .msg = "rx_stp_fifo_underflow", 102 102 .reset_level = HNAE3_GLOBAL_RESET }, 103 103 { .int_msk = BIT(3), .msg = "tx_buf_overflow", 104 104 .reset_level = HNAE3_GLOBAL_RESET },
+6 -3
drivers/net/ethernet/ibm/ibmvnic.c
··· 1984 1984 rwi = get_next_rwi(adapter); 1985 1985 while (rwi) { 1986 1986 if (adapter->state == VNIC_REMOVING || 1987 - adapter->state == VNIC_REMOVED) 1988 - goto out; 1987 + adapter->state == VNIC_REMOVED) { 1988 + kfree(rwi); 1989 + rc = EBUSY; 1990 + break; 1991 + } 1989 1992 1990 1993 if (adapter->force_reset_recovery) { 1991 1994 adapter->force_reset_recovery = false; ··· 2014 2011 netdev_dbg(adapter->netdev, "Reset failed\n"); 2015 2012 free_all_rwi(adapter); 2016 2013 } 2017 - out: 2014 + 2018 2015 adapter->resetting = false; 2019 2016 if (we_lock_rtnl) 2020 2017 rtnl_unlock();
+5 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 36 36 #include <net/vxlan.h> 37 37 #include <net/mpls.h> 38 38 #include <net/xdp_sock.h> 39 + #include <net/xfrm.h> 39 40 40 41 #include "ixgbe.h" 41 42 #include "ixgbe_common.h" ··· 2624 2623 /* 16K ints/sec to 9.2K ints/sec */ 2625 2624 avg_wire_size *= 15; 2626 2625 avg_wire_size += 11452; 2627 - } else if (avg_wire_size <= 1980) { 2626 + } else if (avg_wire_size < 1968) { 2628 2627 /* 9.2K ints/sec to 8K ints/sec */ 2629 2628 avg_wire_size *= 5; 2630 2629 avg_wire_size += 22420; ··· 2657 2656 case IXGBE_LINK_SPEED_2_5GB_FULL: 2658 2657 case IXGBE_LINK_SPEED_1GB_FULL: 2659 2658 case IXGBE_LINK_SPEED_10_FULL: 2659 + if (avg_wire_size > 8064) 2660 + avg_wire_size = 8064; 2660 2661 itr += DIV_ROUND_UP(avg_wire_size, 2661 2662 IXGBE_ITR_ADAPTIVE_MIN_INC * 64) * 2662 2663 IXGBE_ITR_ADAPTIVE_MIN_INC; ··· 8701 8698 #endif /* IXGBE_FCOE */ 8702 8699 8703 8700 #ifdef CONFIG_IXGBE_IPSEC 8704 - if (secpath_exists(skb) && 8701 + if (xfrm_offload(skb) && 8705 8702 !ixgbe_ipsec_tx(tx_ring, first, &ipsec_tx)) 8706 8703 goto out_drop; 8707 8704 #endif
+11 -19
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
··· 642 642 bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector, 643 643 struct ixgbe_ring *tx_ring, int napi_budget) 644 644 { 645 + u16 ntc = tx_ring->next_to_clean, ntu = tx_ring->next_to_use; 645 646 unsigned int total_packets = 0, total_bytes = 0; 646 - u32 i = tx_ring->next_to_clean, xsk_frames = 0; 647 - unsigned int budget = q_vector->tx.work_limit; 648 647 struct xdp_umem *umem = tx_ring->xsk_umem; 649 648 union ixgbe_adv_tx_desc *tx_desc; 650 649 struct ixgbe_tx_buffer *tx_bi; 651 - bool xmit_done; 650 + u32 xsk_frames = 0; 652 651 653 - tx_bi = &tx_ring->tx_buffer_info[i]; 654 - tx_desc = IXGBE_TX_DESC(tx_ring, i); 655 - i -= tx_ring->count; 652 + tx_bi = &tx_ring->tx_buffer_info[ntc]; 653 + tx_desc = IXGBE_TX_DESC(tx_ring, ntc); 656 654 657 - do { 655 + while (ntc != ntu) { 658 656 if (!(tx_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD))) 659 657 break; 660 658 ··· 668 670 669 671 tx_bi++; 670 672 tx_desc++; 671 - i++; 672 - if (unlikely(!i)) { 673 - i -= tx_ring->count; 673 + ntc++; 674 + if (unlikely(ntc == tx_ring->count)) { 675 + ntc = 0; 674 676 tx_bi = tx_ring->tx_buffer_info; 675 677 tx_desc = IXGBE_TX_DESC(tx_ring, 0); 676 678 } 677 679 678 680 /* issue prefetch for next Tx descriptor */ 679 681 prefetch(tx_desc); 682 + } 680 683 681 - /* update budget accounting */ 682 - budget--; 683 - } while (likely(budget)); 684 - 685 - i += tx_ring->count; 686 - tx_ring->next_to_clean = i; 684 + tx_ring->next_to_clean = ntc; 687 685 688 686 u64_stats_update_begin(&tx_ring->syncp); 689 687 tx_ring->stats.bytes += total_bytes; ··· 698 704 xsk_clear_tx_need_wakeup(tx_ring->xsk_umem); 699 705 } 700 706 701 - xmit_done = ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit); 702 - 703 - return budget > 0 && xmit_done; 707 + return ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit); 704 708 } 705 709 706 710 int ixgbe_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
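The rewritten completion path drops the driver-private budget entirely: since next_to_use only advances for descriptors actually handed to hardware, walking next_to_clean toward it with an explicit wrap already bounds the loop. Stripped of the XDP/zero-copy bookkeeping, the traversal pattern looks roughly like this (names simplified from the hunk):

        u16 ntc = ring->next_to_clean, ntu = ring->next_to_use;

        while (ntc != ntu) {
                if (!(ring->desc[ntc].status & STAT_DD))
                        break;                  /* hardware not done yet */
                complete_buffer(ring, ntc);     /* hypothetical accounting */
                if (++ntc == ring->count)
                        ntc = 0;                /* wrap around the ring */
        }
        ring->next_to_clean = ntc;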
+2 -1
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 30 30 #include <linux/bpf.h> 31 31 #include <linux/bpf_trace.h> 32 32 #include <linux/atomic.h> 33 + #include <net/xfrm.h> 33 34 34 35 #include "ixgbevf.h" 35 36 ··· 4168 4167 first->protocol = vlan_get_protocol(skb); 4169 4168 4170 4169 #ifdef CONFIG_IXGBEVF_IPSEC 4171 - if (secpath_exists(skb) && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx)) 4170 + if (xfrm_offload(skb) && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx)) 4172 4171 goto out_drop; 4173 4172 #endif 4174 4173 tso = ixgbevf_tso(tx_ring, first, &hdr_len, &ipsec_tx);
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 2240 2240 for (i = 1; i <= dev->caps.num_ports; i++) { 2241 2241 if (mlx4_dev_port(dev, i, &port_cap)) { 2242 2242 mlx4_err(dev, 2243 - "QUERY_DEV_CAP command failed, can't veify DMFS high rate steering.\n"); 2243 + "QUERY_DEV_CAP command failed, can't verify DMFS high rate steering.\n"); 2244 2244 } else if ((dev->caps.dmfs_high_steer_mode != 2245 2245 MLX4_STEERING_DMFS_A0_DEFAULT) && 2246 2246 (port_cap.dmfs_optimized_state ==
+3 -3
drivers/net/ethernet/natsemi/sonic.c
··· 232 232 233 233 laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE); 234 234 if (!laddr) { 235 - printk(KERN_ERR "%s: failed to map tx DMA buffer.\n", dev->name); 236 - dev_kfree_skb(skb); 237 - return NETDEV_TX_BUSY; 235 + pr_err_ratelimited("%s: failed to map tx DMA buffer.\n", dev->name); 236 + dev_kfree_skb_any(skb); 237 + return NETDEV_TX_OK; 238 238 } 239 239 240 240 sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */
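The sonic change brings the driver in line with the ndo_start_xmit contract: a packet that can never be mapped must be consumed (freed) with NETDEV_TX_OK, because NETDEV_TX_BUSY asks the stack to retry the same doomed skb, and the error print is rate-limited so a failing mapping cannot flood the log. The general shape, sketched (note the canonical mapping check is dma_mapping_error() rather than testing the address for zero):

        laddr = dma_map_single(dev, skb->data, length, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, laddr)) {
                pr_err_ratelimited("%s: failed to map tx DMA buffer\n", name);
                dev_kfree_skb_any(skb); /* safe in hard/soft irq context */
                return NETDEV_TX_OK;    /* consumed: dropped, not requeued */
        }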
+5 -5
drivers/net/ethernet/netronome/nfp/flower/cmsg.c
··· 260 260 261 261 type = cmsg_hdr->type; 262 262 switch (type) { 263 - case NFP_FLOWER_CMSG_TYPE_PORT_REIFY: 264 - nfp_flower_cmsg_portreify_rx(app, skb); 265 - break; 266 263 case NFP_FLOWER_CMSG_TYPE_PORT_MOD: 267 264 nfp_flower_cmsg_portmod_rx(app, skb); 268 265 break; ··· 325 328 struct nfp_flower_priv *priv = app->priv; 326 329 struct sk_buff_head *skb_head; 327 330 328 - if (type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY || 329 - type == NFP_FLOWER_CMSG_TYPE_PORT_MOD) 331 + if (type == NFP_FLOWER_CMSG_TYPE_PORT_MOD) 330 332 skb_head = &priv->cmsg_skbs_high; 331 333 else 332 334 skb_head = &priv->cmsg_skbs_low; ··· 363 367 dev_consume_skb_any(skb); 364 368 } else if (cmsg_hdr->type == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH) { 365 369 /* Acks from the NFP that the route is added - ignore. */ 370 + dev_consume_skb_any(skb); 371 + } else if (cmsg_hdr->type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY) { 372 + /* Handle REIFY acks outside wq to prevent RTNL conflict. */ 373 + nfp_flower_cmsg_portreify_rx(app, skb); 366 374 dev_consume_skb_any(skb); 367 375 } else { 368 376 nfp_flower_queue_ctl_msg(app, skb, cmsg_hdr->type);
+99 -44
drivers/net/ethernet/nvidia/forcedeth.c
··· 713 713 struct nv_skb_map *next_tx_ctx; 714 714 }; 715 715 716 + struct nv_txrx_stats { 717 + u64 stat_rx_packets; 718 + u64 stat_rx_bytes; /* not always available in HW */ 719 + u64 stat_rx_missed_errors; 720 + u64 stat_rx_dropped; 721 + u64 stat_tx_packets; /* not always available in HW */ 722 + u64 stat_tx_bytes; 723 + u64 stat_tx_dropped; 724 + }; 725 + 726 + #define nv_txrx_stats_inc(member) \ 727 + __this_cpu_inc(np->txrx_stats->member) 728 + #define nv_txrx_stats_add(member, count) \ 729 + __this_cpu_add(np->txrx_stats->member, (count)) 730 + 716 731 /* 717 732 * SMP locking: 718 733 * All hardware access under netdev_priv(dev)->lock, except the performance ··· 812 797 813 798 /* RX software stats */ 814 799 struct u64_stats_sync swstats_rx_syncp; 815 - u64 stat_rx_packets; 816 - u64 stat_rx_bytes; /* not always available in HW */ 817 - u64 stat_rx_missed_errors; 818 - u64 stat_rx_dropped; 800 + struct nv_txrx_stats __percpu *txrx_stats; 819 801 820 802 /* media detection workaround. 821 803 * Locking: Within irq hander or disable_irq+spin_lock(&np->lock); ··· 838 826 839 827 /* TX software stats */ 840 828 struct u64_stats_sync swstats_tx_syncp; 841 - u64 stat_tx_packets; /* not always available in HW */ 842 - u64 stat_tx_bytes; 843 - u64 stat_tx_dropped; 844 829 845 830 /* msi/msi-x fields */ 846 831 u32 msi_flags; ··· 1730 1721 } 1731 1722 } 1732 1723 1724 + static void nv_get_stats(int cpu, struct fe_priv *np, 1725 + struct rtnl_link_stats64 *storage) 1726 + { 1727 + struct nv_txrx_stats *src = per_cpu_ptr(np->txrx_stats, cpu); 1728 + unsigned int syncp_start; 1729 + u64 rx_packets, rx_bytes, rx_dropped, rx_missed_errors; 1730 + u64 tx_packets, tx_bytes, tx_dropped; 1731 + 1732 + do { 1733 + syncp_start = u64_stats_fetch_begin_irq(&np->swstats_rx_syncp); 1734 + rx_packets = src->stat_rx_packets; 1735 + rx_bytes = src->stat_rx_bytes; 1736 + rx_dropped = src->stat_rx_dropped; 1737 + rx_missed_errors = src->stat_rx_missed_errors; 1738 + } while (u64_stats_fetch_retry_irq(&np->swstats_rx_syncp, syncp_start)); 1739 + 1740 + storage->rx_packets += rx_packets; 1741 + storage->rx_bytes += rx_bytes; 1742 + storage->rx_dropped += rx_dropped; 1743 + storage->rx_missed_errors += rx_missed_errors; 1744 + 1745 + do { 1746 + syncp_start = u64_stats_fetch_begin_irq(&np->swstats_tx_syncp); 1747 + tx_packets = src->stat_tx_packets; 1748 + tx_bytes = src->stat_tx_bytes; 1749 + tx_dropped = src->stat_tx_dropped; 1750 + } while (u64_stats_fetch_retry_irq(&np->swstats_tx_syncp, syncp_start)); 1751 + 1752 + storage->tx_packets += tx_packets; 1753 + storage->tx_bytes += tx_bytes; 1754 + storage->tx_dropped += tx_dropped; 1755 + } 1756 + 1733 1757 /* 1734 1758 * nv_get_stats64: dev->ndo_get_stats64 function 1735 1759 * Get latest stats value from the nic. 
··· 1775 1733 __releases(&netdev_priv(dev)->hwstats_lock) 1776 1734 { 1777 1735 struct fe_priv *np = netdev_priv(dev); 1778 - unsigned int syncp_start; 1736 + int cpu; 1779 1737 1780 1738 /* 1781 1739 * Note: because HW stats are not always available and for ··· 1788 1746 */ 1789 1747 1790 1748 /* software stats */ 1791 - do { 1792 - syncp_start = u64_stats_fetch_begin_irq(&np->swstats_rx_syncp); 1793 - storage->rx_packets = np->stat_rx_packets; 1794 - storage->rx_bytes = np->stat_rx_bytes; 1795 - storage->rx_dropped = np->stat_rx_dropped; 1796 - storage->rx_missed_errors = np->stat_rx_missed_errors; 1797 - } while (u64_stats_fetch_retry_irq(&np->swstats_rx_syncp, syncp_start)); 1798 - 1799 - do { 1800 - syncp_start = u64_stats_fetch_begin_irq(&np->swstats_tx_syncp); 1801 - storage->tx_packets = np->stat_tx_packets; 1802 - storage->tx_bytes = np->stat_tx_bytes; 1803 - storage->tx_dropped = np->stat_tx_dropped; 1804 - } while (u64_stats_fetch_retry_irq(&np->swstats_tx_syncp, syncp_start)); 1749 + for_each_online_cpu(cpu) 1750 + nv_get_stats(cpu, np, storage); 1805 1751 1806 1752 /* If the nic supports hw counters then retrieve latest values */ 1807 1753 if (np->driver_data & DEV_HAS_STATISTICS_V123) { ··· 1857 1827 } else { 1858 1828 packet_dropped: 1859 1829 u64_stats_update_begin(&np->swstats_rx_syncp); 1860 - np->stat_rx_dropped++; 1830 + nv_txrx_stats_inc(stat_rx_dropped); 1861 1831 u64_stats_update_end(&np->swstats_rx_syncp); 1862 1832 return 1; 1863 1833 } ··· 1899 1869 } else { 1900 1870 packet_dropped: 1901 1871 u64_stats_update_begin(&np->swstats_rx_syncp); 1902 - np->stat_rx_dropped++; 1872 + nv_txrx_stats_inc(stat_rx_dropped); 1903 1873 u64_stats_update_end(&np->swstats_rx_syncp); 1904 1874 return 1; 1905 1875 } ··· 2043 2013 } 2044 2014 if (nv_release_txskb(np, &np->tx_skb[i])) { 2045 2015 u64_stats_update_begin(&np->swstats_tx_syncp); 2046 - np->stat_tx_dropped++; 2016 + nv_txrx_stats_inc(stat_tx_dropped); 2047 2017 u64_stats_update_end(&np->swstats_tx_syncp); 2048 2018 } 2049 2019 np->tx_skb[i].dma = 0; ··· 2257 2227 /* on DMA mapping error - drop the packet */ 2258 2228 dev_kfree_skb_any(skb); 2259 2229 u64_stats_update_begin(&np->swstats_tx_syncp); 2260 - np->stat_tx_dropped++; 2230 + nv_txrx_stats_inc(stat_tx_dropped); 2261 2231 u64_stats_update_end(&np->swstats_tx_syncp); 2262 2232 return NETDEV_TX_OK; 2263 2233 } ··· 2303 2273 dev_kfree_skb_any(skb); 2304 2274 np->put_tx_ctx = start_tx_ctx; 2305 2275 u64_stats_update_begin(&np->swstats_tx_syncp); 2306 - np->stat_tx_dropped++; 2276 + nv_txrx_stats_inc(stat_tx_dropped); 2307 2277 u64_stats_update_end(&np->swstats_tx_syncp); 2308 2278 return NETDEV_TX_OK; 2309 2279 } ··· 2414 2384 /* on DMA mapping error - drop the packet */ 2415 2385 dev_kfree_skb_any(skb); 2416 2386 u64_stats_update_begin(&np->swstats_tx_syncp); 2417 - np->stat_tx_dropped++; 2387 + nv_txrx_stats_inc(stat_tx_dropped); 2418 2388 u64_stats_update_end(&np->swstats_tx_syncp); 2419 2389 return NETDEV_TX_OK; 2420 2390 } ··· 2461 2431 dev_kfree_skb_any(skb); 2462 2432 np->put_tx_ctx = start_tx_ctx; 2463 2433 u64_stats_update_begin(&np->swstats_tx_syncp); 2464 - np->stat_tx_dropped++; 2434 + nv_txrx_stats_inc(stat_tx_dropped); 2465 2435 u64_stats_update_end(&np->swstats_tx_syncp); 2466 2436 return NETDEV_TX_OK; 2467 2437 } ··· 2590 2560 && !(flags & NV_TX_RETRYCOUNT_MASK)) 2591 2561 nv_legacybackoff_reseed(dev); 2592 2562 } else { 2563 + unsigned int len; 2564 + 2593 2565 u64_stats_update_begin(&np->swstats_tx_syncp); 2594 - np->stat_tx_packets++; 2595 - 
np->stat_tx_bytes += np->get_tx_ctx->skb->len; 2566 + nv_txrx_stats_inc(stat_tx_packets); 2567 + len = np->get_tx_ctx->skb->len; 2568 + nv_txrx_stats_add(stat_tx_bytes, len); 2596 2569 u64_stats_update_end(&np->swstats_tx_syncp); 2597 2570 } 2598 2571 bytes_compl += np->get_tx_ctx->skb->len; ··· 2610 2577 && !(flags & NV_TX2_RETRYCOUNT_MASK)) 2611 2578 nv_legacybackoff_reseed(dev); 2612 2579 } else { 2580 + unsigned int len; 2581 + 2613 2582 u64_stats_update_begin(&np->swstats_tx_syncp); 2614 - np->stat_tx_packets++; 2615 - np->stat_tx_bytes += np->get_tx_ctx->skb->len; 2583 + nv_txrx_stats_inc(stat_tx_packets); 2584 + len = np->get_tx_ctx->skb->len; 2585 + nv_txrx_stats_add(stat_tx_bytes, len); 2616 2586 u64_stats_update_end(&np->swstats_tx_syncp); 2617 2587 } 2618 2588 bytes_compl += np->get_tx_ctx->skb->len; ··· 2663 2627 nv_legacybackoff_reseed(dev); 2664 2628 } 2665 2629 } else { 2630 + unsigned int len; 2631 + 2666 2632 u64_stats_update_begin(&np->swstats_tx_syncp); 2667 - np->stat_tx_packets++; 2668 - np->stat_tx_bytes += np->get_tx_ctx->skb->len; 2633 + nv_txrx_stats_inc(stat_tx_packets); 2634 + len = np->get_tx_ctx->skb->len; 2635 + nv_txrx_stats_add(stat_tx_bytes, len); 2669 2636 u64_stats_update_end(&np->swstats_tx_syncp); 2670 2637 } 2671 2638 ··· 2845 2806 } 2846 2807 } 2847 2808 2809 + static void rx_missing_handler(u32 flags, struct fe_priv *np) 2810 + { 2811 + if (flags & NV_RX_MISSEDFRAME) { 2812 + u64_stats_update_begin(&np->swstats_rx_syncp); 2813 + nv_txrx_stats_inc(stat_rx_missed_errors); 2814 + u64_stats_update_end(&np->swstats_rx_syncp); 2815 + } 2816 + } 2817 + 2848 2818 static int nv_rx_process(struct net_device *dev, int limit) 2849 2819 { 2850 2820 struct fe_priv *np = netdev_priv(dev); ··· 2896 2848 } 2897 2849 /* the rest are hard errors */ 2898 2850 else { 2899 - if (flags & NV_RX_MISSEDFRAME) { 2900 - u64_stats_update_begin(&np->swstats_rx_syncp); 2901 - np->stat_rx_missed_errors++; 2902 - u64_stats_update_end(&np->swstats_rx_syncp); 2903 - } 2851 + rx_missing_handler(flags, np); 2904 2852 dev_kfree_skb(skb); 2905 2853 goto next_pkt; 2906 2854 } ··· 2940 2896 skb->protocol = eth_type_trans(skb, dev); 2941 2897 napi_gro_receive(&np->napi, skb); 2942 2898 u64_stats_update_begin(&np->swstats_rx_syncp); 2943 - np->stat_rx_packets++; 2944 - np->stat_rx_bytes += len; 2899 + nv_txrx_stats_inc(stat_rx_packets); 2900 + nv_txrx_stats_add(stat_rx_bytes, len); 2945 2901 u64_stats_update_end(&np->swstats_rx_syncp); 2946 2902 next_pkt: 2947 2903 if (unlikely(np->get_rx.orig++ == np->last_rx.orig)) ··· 3026 2982 } 3027 2983 napi_gro_receive(&np->napi, skb); 3028 2984 u64_stats_update_begin(&np->swstats_rx_syncp); 3029 - np->stat_rx_packets++; 3030 - np->stat_rx_bytes += len; 2985 + nv_txrx_stats_inc(stat_rx_packets); 2986 + nv_txrx_stats_add(stat_rx_bytes, len); 3031 2987 u64_stats_update_end(&np->swstats_rx_syncp); 3032 2988 } else { 3033 2989 dev_kfree_skb(skb); ··· 5695 5651 SET_NETDEV_DEV(dev, &pci_dev->dev); 5696 5652 u64_stats_init(&np->swstats_rx_syncp); 5697 5653 u64_stats_init(&np->swstats_tx_syncp); 5654 + np->txrx_stats = alloc_percpu(struct nv_txrx_stats); 5655 + if (!np->txrx_stats) { 5656 + pr_err("np->txrx_stats, alloc memory error.\n"); 5657 + err = -ENOMEM; 5658 + goto out_alloc_percpu; 5659 + } 5698 5660 5699 5661 timer_setup(&np->oom_kick, nv_do_rx_refill, 0); 5700 5662 timer_setup(&np->nic_poll, nv_do_nic_poll, 0); ··· 6110 6060 out_disable: 6111 6061 pci_disable_device(pci_dev); 6112 6062 out_free: 6063 + free_percpu(np->txrx_stats); 6064 + 
out_alloc_percpu: 6113 6065 free_netdev(dev); 6114 6066 out: 6115 6067 return err; ··· 6157 6105 static void nv_remove(struct pci_dev *pci_dev) 6158 6106 { 6159 6107 struct net_device *dev = pci_get_drvdata(pci_dev); 6108 + struct fe_priv *np = netdev_priv(dev); 6109 + 6110 + free_percpu(np->txrx_stats); 6160 6111 6161 6112 unregister_netdev(dev); 6162 6113
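The whole forcedeth diff above is one conversion: the shared stat_* counters become a per-CPU structure, so hot paths update their own CPU's copy without contention and only the rare reader walks all CPUs, still under the existing u64_stats seqcounts. Reduced to a single counter, the pattern is (structure and function names illustrative):

struct my_stats {
        u64 rx_packets;
};

static struct my_stats __percpu *stats; /* alloc_percpu() at probe,
                                         * free_percpu() at remove */

static void hot_path(void)
{
        __this_cpu_inc(stats->rx_packets);      /* lockless, CPU-local */
}

static u64 read_total(void)
{
        u64 total = 0;
        int cpu;

        for_each_online_cpu(cpu)
                total += per_cpu_ptr(stats, cpu)->rx_packets;
        return total;
}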
+6 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
··· 873 873 int ret; 874 874 u32 reg, val; 875 875 876 - regmap_field_read(gmac->regmap_field, &val); 876 + ret = regmap_field_read(gmac->regmap_field, &val); 877 + if (ret) { 878 + dev_err(priv->device, "Failed to read from regmap field.\n"); 879 + return ret; 880 + } 881 + 877 882 reg = gmac->variant->default_syscon_value; 878 883 if (reg != val) 879 884 dev_warn(priv->device,
+2 -2
drivers/net/hamradio/6pack.c
··· 344 344 345 345 sp->dev->stats.rx_bytes += count; 346 346 347 - if ((skb = dev_alloc_skb(count)) == NULL) 347 + if ((skb = dev_alloc_skb(count + 1)) == NULL) 348 348 goto out_mem; 349 349 350 - ptr = skb_put(skb, count); 350 + ptr = skb_put(skb, count + 1); 351 351 *ptr++ = cmd; /* KISS command */ 352 352 353 353 memcpy(ptr, sp->cooked_buf + 1, count);
+3 -3
drivers/net/phy/phylink.c
··· 376 376 * Local device Link partner 377 377 * Pause AsymDir Pause AsymDir Result 378 378 * 1 X 1 X TX+RX 379 - * 0 1 1 1 RX 380 - * 1 1 0 1 TX 379 + * 0 1 1 1 TX 380 + * 1 1 0 1 RX 381 381 */ 382 382 static void phylink_resolve_flow(struct phylink *pl, 383 383 struct phylink_link_state *state) ··· 398 398 new_pause = MLO_PAUSE_TX | MLO_PAUSE_RX; 399 399 else if (pause & MLO_PAUSE_ASYM) 400 400 new_pause = state->pause & MLO_PAUSE_SYM ? 401 - MLO_PAUSE_RX : MLO_PAUSE_TX; 401 + MLO_PAUSE_TX : MLO_PAUSE_RX; 402 402 } else { 403 403 new_pause = pl->link_config.pause & MLO_PAUSE_TXRX_MASK; 404 404 }
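The phylink fix swaps two rows that had been resolved backwards: when only one side advertises symmetric Pause, the AsymDir bits determine the direction. Reading the corrected table as code (TX meaning the local end sends pause frames, RX that it honours received ones; the lcl_*/rmt_* flags are illustrative decodes of the two advertisements):

        if (lcl_pause && rmt_pause)
                pause = MLO_PAUSE_TX | MLO_PAUSE_RX;
        else if (!lcl_pause && lcl_asym && rmt_pause && rmt_asym)
                pause = MLO_PAUSE_TX;   /* we may throttle the partner */
        else if (lcl_pause && lcl_asym && !rmt_pause && rmt_asym)
                pause = MLO_PAUSE_RX;   /* the partner may throttle us */
        else
                pause = 0;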
+11 -5
drivers/net/tun.c
··· 787 787 } 788 788 789 789 static int tun_attach(struct tun_struct *tun, struct file *file, 790 - bool skip_filter, bool napi, bool napi_frags) 790 + bool skip_filter, bool napi, bool napi_frags, 791 + bool publish_tun) 791 792 { 792 793 struct tun_file *tfile = file->private_data; 793 794 struct net_device *dev = tun->dev; ··· 871 870 * initialized tfile; otherwise we risk using half-initialized 872 871 * object. 873 872 */ 874 - rcu_assign_pointer(tfile->tun, tun); 873 + if (publish_tun) 874 + rcu_assign_pointer(tfile->tun, tun); 875 875 rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile); 876 876 tun->numqueues++; 877 877 tun_set_real_num_queues(tun); ··· 2732 2730 2733 2731 err = tun_attach(tun, file, ifr->ifr_flags & IFF_NOFILTER, 2734 2732 ifr->ifr_flags & IFF_NAPI, 2735 - ifr->ifr_flags & IFF_NAPI_FRAGS); 2733 + ifr->ifr_flags & IFF_NAPI_FRAGS, true); 2736 2734 if (err < 0) 2737 2735 return err; ··· 2831 2829 2832 2830 INIT_LIST_HEAD(&tun->disabled); 2833 2831 err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI, 2834 - ifr->ifr_flags & IFF_NAPI_FRAGS); 2832 + ifr->ifr_flags & IFF_NAPI_FRAGS, false); 2835 2833 if (err < 0) 2836 2834 goto err_free_flow; 2837 2835 2838 2836 err = register_netdevice(tun->dev); 2839 2837 if (err < 0) 2840 2838 goto err_detach; 2839 + /* free_netdev() won't check refcnt, to avoid race 2840 + * with dev_put() we need to publish tun after registration. 2841 + */ 2842 + rcu_assign_pointer(tfile->tun, tun); 2841 2843 } 2842 2844 2843 2845 netif_carrier_on(tun->dev); ··· 2984 2978 if (ret < 0) 2985 2979 goto unlock; 2986 2980 ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI, 2987 - tun->flags & IFF_NAPI_FRAGS); 2981 + tun->flags & IFF_NAPI_FRAGS, true); 2988 2982 } else if (ifr->ifr_flags & IFF_DETACH_QUEUE) { 2989 2983 tun = rtnl_dereference(tfile->tun); 2990 2984 if (!tun || !(tun->flags & IFF_MULTI_QUEUE) || tfile->detached
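Deferring the rcu_assign_pointer() until after register_netdevice() succeeds is the standard RCU publish discipline: finish every initialization step that can fail before the object becomes reachable, so no lockless reader can ever see it half-built, or see it freed by the error path. Generically (all names hypothetical):

        struct obj *o = kzalloc(sizeof(*o), GFP_KERNEL);

        if (!o)
                return -ENOMEM;
        o->cfg = cfg;                   /* complete all initialization */
        err = fallible_register(o);     /* last step that can fail */
        if (err) {
                kfree(o);               /* unpublished: plain free is fine */
                return err;
        }
        rcu_assign_pointer(published, o);       /* visible only now */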
+9 -1
drivers/net/usb/cdc_ether.c
··· 206 206 goto bad_desc; 207 207 } 208 208 skip: 209 - if (rndis && header.usb_cdc_acm_descriptor && 209 + /* Communication class functions with bmCapabilities are not 210 + * RNDIS. But some Wireless class RNDIS functions use 211 + * bmCapabilities for their own purpose. The failsafe is 212 + * therefore applied only to Communication class RNDIS 213 + * functions. The rndis test is redundant, but a cheap 214 + * optimization. 215 + */ 216 + if (rndis && is_rndis(&intf->cur_altsetting->desc) && 217 + header.usb_cdc_acm_descriptor && 210 218 header.usb_cdc_acm_descriptor->bmCapabilities) { 211 219 dev_dbg(&intf->dev, 212 220 "ACM capabilities %02x, not really RNDIS?\n",
+1 -1
drivers/net/virtio_net.c
··· 1331 1331 } 1332 1332 } 1333 1333 1334 - if (rq->vq->num_free > virtqueue_get_vring_size(rq->vq) / 2) { 1334 + if (rq->vq->num_free > min((unsigned int)budget, virtqueue_get_vring_size(rq->vq)) / 2) { 1335 1335 if (!try_fill_recv(vi, rq, GFP_ATOMIC)) 1336 1336 schedule_delayed_work(&vi->refill, 0); 1337 1337 }
+1 -1
drivers/net/wan/lmc/lmc_main.c
··· 1115 1115 sc->lmc_cmdmode |= (TULIP_CMD_TXRUN | TULIP_CMD_RXRUN); 1116 1116 LMC_CSR_WRITE (sc, csr_command, sc->lmc_cmdmode); 1117 1117 1118 - lmc_trace(dev, "lmc_runnin_reset_out"); 1118 + lmc_trace(dev, "lmc_running_reset_out"); 1119 1119 } 1120 1120 1121 1121
+1
drivers/net/wimax/i2400m/op-rfkill.c
··· 127 127 "%d\n", result); 128 128 result = 0; 129 129 error_cmd: 130 + kfree(cmd); 130 131 kfree_skb(ack_skb); 131 132 error_msg_to_dev: 132 133 error_alloc:
+12 -12
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 1114 1114 1115 1115 /* same thing for QuZ... */ 1116 1116 if (iwl_trans->hw_rev == CSR_HW_REV_TYPE_QUZ) { 1117 - if (cfg == &iwl_ax101_cfg_qu_hr) 1118 - cfg = &iwl_ax101_cfg_quz_hr; 1119 - else if (cfg == &iwl_ax201_cfg_qu_hr) 1120 - cfg = &iwl_ax201_cfg_quz_hr; 1121 - else if (cfg == &iwl9461_2ac_cfg_qu_b0_jf_b0) 1122 - cfg = &iwl9461_2ac_cfg_quz_a0_jf_b0_soc; 1123 - else if (cfg == &iwl9462_2ac_cfg_qu_b0_jf_b0) 1124 - cfg = &iwl9462_2ac_cfg_quz_a0_jf_b0_soc; 1125 - else if (cfg == &iwl9560_2ac_cfg_qu_b0_jf_b0) 1126 - cfg = &iwl9560_2ac_cfg_quz_a0_jf_b0_soc; 1127 - else if (cfg == &iwl9560_2ac_160_cfg_qu_b0_jf_b0) 1128 - cfg = &iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc; 1117 + if (iwl_trans->cfg == &iwl_ax101_cfg_qu_hr) 1118 + iwl_trans->cfg = &iwl_ax101_cfg_quz_hr; 1119 + else if (iwl_trans->cfg == &iwl_ax201_cfg_qu_hr) 1120 + iwl_trans->cfg = &iwl_ax201_cfg_quz_hr; 1121 + else if (iwl_trans->cfg == &iwl9461_2ac_cfg_qu_b0_jf_b0) 1122 + iwl_trans->cfg = &iwl9461_2ac_cfg_quz_a0_jf_b0_soc; 1123 + else if (iwl_trans->cfg == &iwl9462_2ac_cfg_qu_b0_jf_b0) 1124 + iwl_trans->cfg = &iwl9462_2ac_cfg_quz_a0_jf_b0_soc; 1125 + else if (iwl_trans->cfg == &iwl9560_2ac_cfg_qu_b0_jf_b0) 1126 + iwl_trans->cfg = &iwl9560_2ac_cfg_quz_a0_jf_b0_soc; 1127 + else if (iwl_trans->cfg == &iwl9560_2ac_160_cfg_qu_b0_jf_b0) 1128 + iwl_trans->cfg = &iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc; 1129 1129 } 1130 1130 1131 1131 #endif
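The iwlwifi hunk fixes a lost update: 'cfg' was a local copy of iwl_trans->cfg, so assigning the QuZ variants to it never reached the transport that later consumers read. The bug class in miniature (illustrative):

        struct cfg *cfg = trans->cfg;   /* local copy of the pointer */

        cfg = &quz_cfg;                 /* wrong: changes only the local */
        trans->cfg = &quz_cfg;          /* right: updates the stored pointer */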
+3
drivers/net/wireless/marvell/mwifiex/ie.c
··· 241 241 } 242 242 243 243 vs_ie = (struct ieee_types_header *)vendor_ie; 244 + if (le16_to_cpu(ie->ie_length) + vs_ie->len + 2 > 245 + IEEE_MAX_IE_SIZE) 246 + return -EINVAL; 244 247 memcpy(ie->ie_buffer + le16_to_cpu(ie->ie_length), 245 248 vs_ie, vs_ie->len + 2); 246 249 le16_unaligned_add_cpu(&ie->ie_length, vs_ie->len + 2);
+8 -1
drivers/net/wireless/marvell/mwifiex/uap_cmd.c
··· 265 265 266 266 rate_ie = (void *)cfg80211_find_ie(WLAN_EID_SUPP_RATES, var_pos, len); 267 267 if (rate_ie) { 268 + if (rate_ie->len > MWIFIEX_SUPPORTED_RATES) 269 + return; 268 270 memcpy(bss_cfg->rates, rate_ie + 1, rate_ie->len); 269 271 rate_len = rate_ie->len; 270 272 } ··· 274 272 rate_ie = (void *)cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES, 275 273 params->beacon.tail, 276 274 params->beacon.tail_len); 277 - if (rate_ie) 275 + if (rate_ie) { 276 + if (rate_ie->len > MWIFIEX_SUPPORTED_RATES - rate_len) 277 + return; 278 278 memcpy(bss_cfg->rates + rate_len, rate_ie + 1, rate_ie->len); 279 + } 279 280 280 281 return; 281 282 } ··· 396 391 params->beacon.tail_len); 397 392 if (vendor_ie) { 398 393 wmm_ie = vendor_ie; 394 + if (*(wmm_ie + 1) > sizeof(struct mwifiex_types_wmm_info)) 395 + return; 399 396 memcpy(&bss_cfg->wmm_info, wmm_ie + 400 397 sizeof(struct ieee_types_header), *(wmm_ie + 1)); 401 398 priv->wmm_enabled = 1;
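Both mwifiex fixes (ie.c above and uap_cmd.c here) enforce the same rule: an information element's length byte arrives from the air, so it must be checked against the space left in the destination before the memcpy(), never after. Sketch (buffer names and MAX_RATES are illustrative):

        /* 'used' tracks bytes already consumed in the destination */
        if (ie->len > MAX_RATES - used)
                return -EINVAL;         /* over-long IE: reject, don't copy */
        memcpy(rates + used, ie->data, ie->len);
        used += ie->len;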
+5
drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c
··· 59 59 dev_dbg(dev->mt76.dev, "mask out 2GHz support\n"); 60 60 } 61 61 62 + if (is_mt7630(dev)) { 63 + dev->mt76.cap.has_5ghz = false; 64 + dev_dbg(dev->mt76.dev, "mask out 5GHz support\n"); 65 + } 66 + 62 67 if (!mt76x02_field_valid(nic_conf1 & 0xff)) 63 68 nic_conf1 &= 0xff00; 64 69
+14 -1
drivers/net/wireless/mediatek/mt76/mt76x0/pci.c
··· 51 51 mt76x0e_stop_hw(dev); 52 52 } 53 53 54 + static int 55 + mt76x0e_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, 56 + struct ieee80211_vif *vif, struct ieee80211_sta *sta, 57 + struct ieee80211_key_conf *key) 58 + { 59 + struct mt76x02_dev *dev = hw->priv; 60 + 61 + if (is_mt7630(dev)) 62 + return -EOPNOTSUPP; 63 + 64 + return mt76x02_set_key(hw, cmd, vif, sta, key); 65 + } 66 + 54 67 static void 55 68 mt76x0e_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, 56 69 u32 queues, bool drop) ··· 80 67 .configure_filter = mt76x02_configure_filter, 81 68 .bss_info_changed = mt76x02_bss_info_changed, 82 69 .sta_state = mt76_sta_state, 83 - .set_key = mt76x02_set_key, 70 + .set_key = mt76x0e_set_key, 84 71 .conf_tx = mt76x02_conf_tx, 85 72 .sw_scan_start = mt76_sw_scan, 86 73 .sw_scan_complete = mt76x02_sw_scan_complete,
+18 -19
drivers/net/wireless/ralink/rt2x00/rt2800lib.c
··· 1654 1654 1655 1655 offset = MAC_IVEIV_ENTRY(key->hw_key_idx); 1656 1656 1657 - rt2800_register_multiread(rt2x00dev, offset, 1658 - &iveiv_entry, sizeof(iveiv_entry)); 1659 - if ((crypto->cipher == CIPHER_TKIP) || 1660 - (crypto->cipher == CIPHER_TKIP_NO_MIC) || 1661 - (crypto->cipher == CIPHER_AES)) 1662 - iveiv_entry.iv[3] |= 0x20; 1663 - iveiv_entry.iv[3] |= key->keyidx << 6; 1657 + if (crypto->cmd == SET_KEY) { 1658 + rt2800_register_multiread(rt2x00dev, offset, 1659 + &iveiv_entry, sizeof(iveiv_entry)); 1660 + if ((crypto->cipher == CIPHER_TKIP) || 1661 + (crypto->cipher == CIPHER_TKIP_NO_MIC) || 1662 + (crypto->cipher == CIPHER_AES)) 1663 + iveiv_entry.iv[3] |= 0x20; 1664 + iveiv_entry.iv[3] |= key->keyidx << 6; 1665 + } else { 1666 + memset(&iveiv_entry, 0, sizeof(iveiv_entry)); 1667 + } 1668 + 1664 1669 rt2800_register_multiwrite(rt2x00dev, offset, 1665 1670 &iveiv_entry, sizeof(iveiv_entry)); 1666 1671 } ··· 4242 4237 switch (rt2x00dev->default_ant.rx_chain_num) { 4243 4238 case 3: 4244 4239 /* Turn on tertiary LNAs */ 4245 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A2_EN, 4246 - rf->channel > 14); 4247 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G2_EN, 4248 - rf->channel <= 14); 4240 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A2_EN, 1); 4241 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G2_EN, 1); 4249 4242 /* fall-through */ 4250 4243 case 2: 4251 4244 /* Turn on secondary LNAs */ 4252 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A1_EN, 4253 - rf->channel > 14); 4254 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G1_EN, 4255 - rf->channel <= 14); 4245 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A1_EN, 1); 4246 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G1_EN, 1); 4256 4247 /* fall-through */ 4257 4248 case 1: 4258 4249 /* Turn on primary LNAs */ 4259 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A0_EN, 4260 - rf->channel > 14); 4261 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G0_EN, 4262 - rf->channel <= 14); 4250 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A0_EN, 1); 4251 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G0_EN, 1); 4263 4252 break; 4264 4253 } 4265 4254
-1
drivers/net/wireless/rsi/rsi_91x_usb.c
··· 645 645 kfree(rsi_dev->tx_buffer); 646 646 647 647 fail_eps: 648 - kfree(rsi_dev); 649 648 650 649 return status; 651 650 }
+1 -1
drivers/nfc/st95hf/core.c
··· 316 316 &echo_response); 317 317 if (result) { 318 318 dev_err(&st95context->spicontext.spidev->dev, 319 - "err: echo response receieve error = 0x%x\n", result); 319 + "err: echo response receive error = 0x%x\n", result); 320 320 return result; 321 321 } 322 322
+4 -1
drivers/nvdimm/pfn_devs.c
··· 655 655 resource_size_t start, size; 656 656 struct nd_region *nd_region; 657 657 unsigned long npfns, align; 658 + u32 end_trunc; 658 659 struct nd_pfn_sb *pfn_sb; 659 660 phys_addr_t offset; 660 661 const char *sig; ··· 697 696 size = resource_size(&nsio->res); 698 697 npfns = PHYS_PFN(size - SZ_8K); 699 698 align = max(nd_pfn->align, (1UL << SUBSECTION_SHIFT)); 699 + end_trunc = start + size - ALIGN_DOWN(start + size, align); 700 700 if (nd_pfn->mode == PFN_MODE_PMEM) { 701 701 /* 702 702 * The altmap should be padded out to the block size used ··· 716 714 return -ENXIO; 717 715 } 718 716 719 - npfns = PHYS_PFN(size - offset); 717 + npfns = PHYS_PFN(size - offset - end_trunc); 720 718 pfn_sb->mode = cpu_to_le32(nd_pfn->mode); 721 719 pfn_sb->dataoff = cpu_to_le64(offset); 722 720 pfn_sb->npfns = cpu_to_le64(npfns); ··· 725 723 memcpy(pfn_sb->parent_uuid, nd_dev_to_uuid(&ndns->dev), 16); 726 724 pfn_sb->version_major = cpu_to_le16(1); 727 725 pfn_sb->version_minor = cpu_to_le16(3); 726 + pfn_sb->end_trunc = cpu_to_le32(end_trunc); 728 727 pfn_sb->align = cpu_to_le32(nd_pfn->align); 729 728 checksum = nd_sb_checksum((struct nd_gen_sb *) pfn_sb); 730 729 pfn_sb->checksum = cpu_to_le64(checksum);
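end_trunc clips the namespace so that its end falls on the mapping alignment; the truncated tail is then excluded from npfns so no pfn range extends past the aligned boundary. Worked example of the arithmetic, assuming start + size = 0x8048000 and align = 2 MiB (0x200000): ALIGN_DOWN(0x8048000, 0x200000) = 0x8000000, so

        end_trunc = start + size - ALIGN_DOWN(start + size, align);
                                        /* = 0x48000 in this example */
        npfns = PHYS_PFN(size - offset - end_trunc);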
+29 -1
drivers/pinctrl/aspeed/pinctrl-aspeed-g5.c
··· 2552 2552 if (IS_ERR(map)) 2553 2553 return map; 2554 2554 } else 2555 - map = ERR_PTR(-ENODEV); 2555 + return ERR_PTR(-ENODEV); 2556 2556 2557 2557 ctx->maps[ASPEED_IP_LPC] = map; 2558 2558 dev_dbg(ctx->dev, "Acquired LPC regmap"); ··· 2560 2560 } 2561 2561 2562 2562 return ERR_PTR(-EINVAL); 2563 + } 2564 + 2565 + static int aspeed_g5_sig_expr_eval(struct aspeed_pinmux_data *ctx, 2566 + const struct aspeed_sig_expr *expr, 2567 + bool enabled) 2568 + { 2569 + int ret; 2570 + int i; 2571 + 2572 + for (i = 0; i < expr->ndescs; i++) { 2573 + const struct aspeed_sig_desc *desc = &expr->descs[i]; 2574 + struct regmap *map; 2575 + 2576 + map = aspeed_g5_acquire_regmap(ctx, desc->ip); 2577 + if (IS_ERR(map)) { 2578 + dev_err(ctx->dev, 2579 + "Failed to acquire regmap for IP block %d\n", 2580 + desc->ip); 2581 + return PTR_ERR(map); 2582 + } 2583 + 2584 + ret = aspeed_sig_desc_eval(desc, enabled, ctx->maps[desc->ip]); 2585 + if (ret <= 0) 2586 + return ret; 2587 + } 2588 + 2589 + return 1; 2563 2590 } 2564 2591 2565 2592 /** ··· 2674 2647 } 2675 2648 2676 2649 static const struct aspeed_pinmux_ops aspeed_g5_ops = { 2650 + .eval = aspeed_g5_sig_expr_eval, 2677 2651 .set = aspeed_g5_sig_expr_set, 2678 2652 }; 2679 2653
+5 -2
drivers/pinctrl/aspeed/pinmux-aspeed.c
··· 78 78 * neither the enabled nor disabled state. Thus we must explicitly test for 79 79 * either condition as required. 80 80 */ 81 - int aspeed_sig_expr_eval(const struct aspeed_pinmux_data *ctx, 81 + int aspeed_sig_expr_eval(struct aspeed_pinmux_data *ctx, 82 82 const struct aspeed_sig_expr *expr, bool enabled) 83 83 { 84 - int i; 85 84 int ret; 85 + int i; 86 + 87 + if (ctx->ops->eval) 88 + return ctx->ops->eval(ctx, expr, enabled); 86 89 87 90 for (i = 0; i < expr->ndescs; i++) { 88 91 const struct aspeed_sig_desc *desc = &expr->descs[i];
+4 -3
drivers/pinctrl/aspeed/pinmux-aspeed.h
··· 702 702 struct aspeed_pinmux_data; 703 703 704 704 struct aspeed_pinmux_ops { 705 + int (*eval)(struct aspeed_pinmux_data *ctx, 706 + const struct aspeed_sig_expr *expr, bool enabled); 705 707 int (*set)(struct aspeed_pinmux_data *ctx, 706 708 const struct aspeed_sig_expr *expr, bool enabled); 707 709 }; ··· 724 722 int aspeed_sig_desc_eval(const struct aspeed_sig_desc *desc, bool enabled, 725 723 struct regmap *map); 726 724 727 - int aspeed_sig_expr_eval(const struct aspeed_pinmux_data *ctx, 728 - const struct aspeed_sig_expr *expr, 729 - bool enabled); 725 + int aspeed_sig_expr_eval(struct aspeed_pinmux_data *ctx, 726 + const struct aspeed_sig_expr *expr, bool enabled); 730 727 731 728 static inline int aspeed_sig_expr_set(struct aspeed_pinmux_data *ctx, 732 729 const struct aspeed_sig_expr *expr,
+4 -4
drivers/regulator/act8945a-regulator.c
··· 169 169 reg = ACT8945A_DCDC3_CTRL; 170 170 break; 171 171 case ACT8945A_ID_LDO1: 172 - reg = ACT8945A_LDO1_SUS; 172 + reg = ACT8945A_LDO1_CTRL; 173 173 break; 174 174 case ACT8945A_ID_LDO2: 175 - reg = ACT8945A_LDO2_SUS; 175 + reg = ACT8945A_LDO2_CTRL; 176 176 break; 177 177 case ACT8945A_ID_LDO3: 178 - reg = ACT8945A_LDO3_SUS; 178 + reg = ACT8945A_LDO3_CTRL; 179 179 break; 180 180 case ACT8945A_ID_LDO4: 181 - reg = ACT8945A_LDO4_SUS; 181 + reg = ACT8945A_LDO4_CTRL; 182 182 break; 183 183 default: 184 184 return -EINVAL;
+2 -2
drivers/regulator/slg51000-regulator.c
··· 205 205 ena_gpiod = devm_gpiod_get_from_of_node(chip->dev, np, 206 206 "enable-gpios", 0, 207 207 gflags, "gpio-en-ldo"); 208 - if (ena_gpiod) { 208 + if (!IS_ERR(ena_gpiod)) { 209 209 config->ena_gpiod = ena_gpiod; 210 210 devm_gpiod_unhinge(chip->dev, config->ena_gpiod); 211 211 } ··· 459 459 GPIOD_OUT_HIGH 460 460 | GPIOD_FLAGS_BIT_NONEXCLUSIVE, 461 461 "slg51000-cs"); 462 - if (cs_gpiod) { 462 + if (!IS_ERR(cs_gpiod)) { 463 463 dev_info(dev, "Found chip selector property\n"); 464 464 chip->cs_gpiod = cs_gpiod; 465 465 }
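The slg51000 fix addresses a common gpiod pitfall: devm_gpiod_get_from_of_node() returns either a valid descriptor or an ERR_PTR() such as -ENOENT or -EPROBE_DEFER, never NULL, so a plain truth test succeeds even on failure. This driver treats the GPIOs as optional and merely skips them on error; a caller that requires the GPIO would instead propagate the code (property name and label illustrative):

        desc = devm_gpiod_get_from_of_node(dev, np, "enable-gpios", 0,
                                           GPIOD_OUT_LOW, "my-enable");
        if (IS_ERR(desc))
                return PTR_ERR(desc);   /* -ENOENT, -EPROBE_DEFER, ... */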
+20 -3
drivers/regulator/twl-regulator.c
··· 359 359 2500, 2750, 360 360 }; 361 361 362 + /* 600mV to 1450mV in 12.5 mV steps */ 363 + static const struct regulator_linear_range VDD1_ranges[] = { 364 + REGULATOR_LINEAR_RANGE(600000, 0, 68, 12500) 365 + }; 366 + 367 + /* 600mV to 1450mV in 12.5 mV steps, everything above = 1500mV */ 368 + static const struct regulator_linear_range VDD2_ranges[] = { 369 + REGULATOR_LINEAR_RANGE(600000, 0, 68, 12500), 370 + REGULATOR_LINEAR_RANGE(1500000, 69, 69, 12500) 371 + }; 372 + 362 373 static int twl4030ldo_list_voltage(struct regulator_dev *rdev, unsigned index) 363 374 { 364 375 struct twlreg_info *info = rdev_get_drvdata(rdev); ··· 438 427 } 439 428 440 429 static const struct regulator_ops twl4030smps_ops = { 430 + .list_voltage = regulator_list_voltage_linear_range, 431 + 441 432 .set_voltage = twl4030smps_set_voltage, 442 433 .get_voltage = twl4030smps_get_voltage, 443 434 }; ··· 479 466 }, \ 480 467 } 481 468 482 - #define TWL4030_ADJUSTABLE_SMPS(label, offset, num, turnon_delay, remap_conf) \ 469 + #define TWL4030_ADJUSTABLE_SMPS(label, offset, num, turnon_delay, remap_conf, \ 470 + n_volt) \ 483 471 static const struct twlreg_info TWL4030_INFO_##label = { \ 484 472 .base = offset, \ 485 473 .id = num, \ ··· 493 479 .owner = THIS_MODULE, \ 494 480 .enable_time = turnon_delay, \ 495 481 .of_map_mode = twl4030reg_map_mode, \ 482 + .n_voltages = n_volt, \ 483 + .n_linear_ranges = ARRAY_SIZE(label ## _ranges), \ 484 + .linear_ranges = label ## _ranges, \ 496 485 }, \ 497 486 } 498 487 ··· 535 518 TWL4030_ADJUSTABLE_LDO(VDAC, 0x3b, 10, 100, 0x08); 536 519 TWL4030_ADJUSTABLE_LDO(VINTANA2, 0x43, 12, 100, 0x08); 537 520 TWL4030_ADJUSTABLE_LDO(VIO, 0x4b, 14, 1000, 0x08); 538 - TWL4030_ADJUSTABLE_SMPS(VDD1, 0x55, 15, 1000, 0x08); 539 - TWL4030_ADJUSTABLE_SMPS(VDD2, 0x63, 16, 1000, 0x08); 521 + TWL4030_ADJUSTABLE_SMPS(VDD1, 0x55, 15, 1000, 0x08, 68); 522 + TWL4030_ADJUSTABLE_SMPS(VDD2, 0x63, 16, 1000, 0x08, 69); 540 523 /* VUSBCP is managed *only* by the USB subchip */ 541 524 TWL4030_FIXED_LDO(VINTANA1, 0x3f, 1500, 11, 100, 0x08); 542 525 TWL4030_FIXED_LDO(VINTDIG, 0x47, 1500, 13, 100, 0x08);
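With the linear ranges wired into .list_voltage, the regulator core can enumerate the TWL4030 SMPS settings. For VDD1 the mapping is selector n -> 600000 + n * 12500 µV: n = 0 gives 0.6 V, n = 34 gives 1.025 V, and the top selector 68 gives the 1.45 V endpoint; VDD2's extra one-entry range pins selector 69 to 1.5 V. As a quick check of the arithmetic:

        static int vdd1_uv(unsigned int sel)    /* valid for sel 0..68 */
        {
                return 600000 + sel * 12500;
        }
        /* vdd1_uv(68) == 1450000, matching the range definition above */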
+1 -1
drivers/scsi/lpfc/lpfc_attr.c
··· 5715 5715 * 0 = Set nr_hw_queues by the number of CPUs or HW queues. 5716 5716 * 1,128 = Manually specify the maximum nr_hw_queue value to be set, 5717 5717 * 5718 - * Value range is [0,128]. Default value is 8. 5718 + * Value range is [0,256]. Default value is 8. 5719 5719 */ 5720 5720 LPFC_ATTR_R(fcp_mq_threshold, LPFC_FCP_MQ_THRESHOLD_DEF, 5721 5721 LPFC_FCP_MQ_THRESHOLD_MIN, LPFC_FCP_MQ_THRESHOLD_MAX,
+1 -1
drivers/scsi/lpfc/lpfc_sli4.h
··· 46 46 47 47 /* FCP MQ queue count limiting */ 48 48 #define LPFC_FCP_MQ_THRESHOLD_MIN 0 49 - #define LPFC_FCP_MQ_THRESHOLD_MAX 128 49 + #define LPFC_FCP_MQ_THRESHOLD_MAX 256 50 50 #define LPFC_FCP_MQ_THRESHOLD_DEF 8 51 51 52 52 /* Common buffer size to accommodate SCSI and NVME IO buffers */
+6
drivers/soc/qcom/qcom-geni-se.c
··· 630 630 struct geni_wrapper *wrapper = se->wrapper; 631 631 u32 val; 632 632 633 + if (!wrapper) 634 + return -EINVAL; 635 + 633 636 *iova = dma_map_single(wrapper->dev, buf, len, DMA_TO_DEVICE); 634 637 if (dma_mapping_error(wrapper->dev, *iova)) 635 638 return -EIO; ··· 665 662 { 666 663 struct geni_wrapper *wrapper = se->wrapper; 667 664 u32 val; 665 + 666 + if (!wrapper) 667 + return -EINVAL; 668 668 669 669 *iova = dma_map_single(wrapper->dev, buf, len, DMA_FROM_DEVICE); 670 670 if (dma_mapping_error(wrapper->dev, *iova))
+9 -4
drivers/vhost/test.c
··· 22 22 * Using this limit prevents one virtqueue from starving others. */ 23 23 #define VHOST_TEST_WEIGHT 0x80000 24 24 25 + /* Max number of packets transferred before requeueing the job. 26 + * Using this limit prevents one virtqueue from starving others with 27 + * pkts. 28 + */ 29 + #define VHOST_TEST_PKT_WEIGHT 256 30 + 25 31 enum { 26 32 VHOST_TEST_VQ = 0, 27 33 VHOST_TEST_VQ_MAX = 1, ··· 86 80 } 87 81 vhost_add_used_and_signal(&n->dev, vq, head, 0); 88 82 total_len += len; 89 - if (unlikely(total_len >= VHOST_TEST_WEIGHT)) { 90 - vhost_poll_queue(&vq->poll); 83 + if (unlikely(vhost_exceeds_weight(vq, 0, total_len))) 91 84 break; 92 - } 93 85 } 94 86 95 87 mutex_unlock(&vq->mutex); ··· 119 115 dev = &n->dev; 120 116 vqs[VHOST_TEST_VQ] = &n->vqs[VHOST_TEST_VQ]; 121 117 n->vqs[VHOST_TEST_VQ].handle_kick = handle_vq_kick; 122 - vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX); 118 + vhost_dev_init(dev, vqs, VHOST_TEST_VQ_MAX, UIO_MAXIOV, 119 + VHOST_TEST_PKT_WEIGHT, VHOST_TEST_WEIGHT); 123 120 124 121 f->private_data = n; 125 122
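vhost_exceeds_weight() is the shared helper the test device now uses instead of its open-coded byte check: it charges both a packet and a byte budget per kick and requeues the worker when either runs out, so one busy virtqueue cannot monopolize the thread. Its logic is roughly (per drivers/vhost/vhost.c in this merge):

bool vhost_exceeds_weight(struct vhost_virtqueue *vq, int pkts, int total_len)
{
        struct vhost_dev *dev = vq->dev;

        if ((dev->byte_weight && total_len >= dev->byte_weight) ||
            pkts >= dev->weight) {
                vhost_poll_queue(&vq->poll);    /* requeue, yield the thread */
                return true;
        }
        return false;
}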
+7 -517
drivers/vhost/vhost.c
··· 203 203 int vhost_poll_start(struct vhost_poll *poll, struct file *file) 204 204 { 205 205 __poll_t mask; 206 - int ret = 0; 207 206 208 207 if (poll->wqh) 209 208 return 0; ··· 212 213 vhost_poll_wakeup(&poll->wait, 0, 0, poll_to_key(mask)); 213 214 if (mask & EPOLLERR) { 214 215 vhost_poll_stop(poll); 215 - ret = -EINVAL; 216 + return -EINVAL; 216 217 } 217 218 218 - return ret; 219 + return 0; 219 220 } 220 221 EXPORT_SYMBOL_GPL(vhost_poll_start); 221 222 ··· 297 298 __vhost_vq_meta_reset(d->vqs[i]); 298 299 } 299 300 300 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 301 - static void vhost_map_unprefetch(struct vhost_map *map) 302 - { 303 - kfree(map->pages); 304 - map->pages = NULL; 305 - map->npages = 0; 306 - map->addr = NULL; 307 - } 308 - 309 - static void vhost_uninit_vq_maps(struct vhost_virtqueue *vq) 310 - { 311 - struct vhost_map *map[VHOST_NUM_ADDRS]; 312 - int i; 313 - 314 - spin_lock(&vq->mmu_lock); 315 - for (i = 0; i < VHOST_NUM_ADDRS; i++) { 316 - map[i] = rcu_dereference_protected(vq->maps[i], 317 - lockdep_is_held(&vq->mmu_lock)); 318 - if (map[i]) 319 - rcu_assign_pointer(vq->maps[i], NULL); 320 - } 321 - spin_unlock(&vq->mmu_lock); 322 - 323 - synchronize_rcu(); 324 - 325 - for (i = 0; i < VHOST_NUM_ADDRS; i++) 326 - if (map[i]) 327 - vhost_map_unprefetch(map[i]); 328 - 329 - } 330 - 331 - static void vhost_reset_vq_maps(struct vhost_virtqueue *vq) 332 - { 333 - int i; 334 - 335 - vhost_uninit_vq_maps(vq); 336 - for (i = 0; i < VHOST_NUM_ADDRS; i++) 337 - vq->uaddrs[i].size = 0; 338 - } 339 - 340 - static bool vhost_map_range_overlap(struct vhost_uaddr *uaddr, 341 - unsigned long start, 342 - unsigned long end) 343 - { 344 - if (unlikely(!uaddr->size)) 345 - return false; 346 - 347 - return !(end < uaddr->uaddr || start > uaddr->uaddr - 1 + uaddr->size); 348 - } 349 - 350 - static void vhost_invalidate_vq_start(struct vhost_virtqueue *vq, 351 - int index, 352 - unsigned long start, 353 - unsigned long end) 354 - { 355 - struct vhost_uaddr *uaddr = &vq->uaddrs[index]; 356 - struct vhost_map *map; 357 - int i; 358 - 359 - if (!vhost_map_range_overlap(uaddr, start, end)) 360 - return; 361 - 362 - spin_lock(&vq->mmu_lock); 363 - ++vq->invalidate_count; 364 - 365 - map = rcu_dereference_protected(vq->maps[index], 366 - lockdep_is_held(&vq->mmu_lock)); 367 - if (map) { 368 - if (uaddr->write) { 369 - for (i = 0; i < map->npages; i++) 370 - set_page_dirty(map->pages[i]); 371 - } 372 - rcu_assign_pointer(vq->maps[index], NULL); 373 - } 374 - spin_unlock(&vq->mmu_lock); 375 - 376 - if (map) { 377 - synchronize_rcu(); 378 - vhost_map_unprefetch(map); 379 - } 380 - } 381 - 382 - static void vhost_invalidate_vq_end(struct vhost_virtqueue *vq, 383 - int index, 384 - unsigned long start, 385 - unsigned long end) 386 - { 387 - if (!vhost_map_range_overlap(&vq->uaddrs[index], start, end)) 388 - return; 389 - 390 - spin_lock(&vq->mmu_lock); 391 - --vq->invalidate_count; 392 - spin_unlock(&vq->mmu_lock); 393 - } 394 - 395 - static int vhost_invalidate_range_start(struct mmu_notifier *mn, 396 - const struct mmu_notifier_range *range) 397 - { 398 - struct vhost_dev *dev = container_of(mn, struct vhost_dev, 399 - mmu_notifier); 400 - int i, j; 401 - 402 - if (!mmu_notifier_range_blockable(range)) 403 - return -EAGAIN; 404 - 405 - for (i = 0; i < dev->nvqs; i++) { 406 - struct vhost_virtqueue *vq = dev->vqs[i]; 407 - 408 - for (j = 0; j < VHOST_NUM_ADDRS; j++) 409 - vhost_invalidate_vq_start(vq, j, 410 - range->start, 411 - range->end); 412 - } 413 - 414 - return 0; 415 - } 416 - 417 - static 
void vhost_invalidate_range_end(struct mmu_notifier *mn, 418 - const struct mmu_notifier_range *range) 419 - { 420 - struct vhost_dev *dev = container_of(mn, struct vhost_dev, 421 - mmu_notifier); 422 - int i, j; 423 - 424 - for (i = 0; i < dev->nvqs; i++) { 425 - struct vhost_virtqueue *vq = dev->vqs[i]; 426 - 427 - for (j = 0; j < VHOST_NUM_ADDRS; j++) 428 - vhost_invalidate_vq_end(vq, j, 429 - range->start, 430 - range->end); 431 - } 432 - } 433 - 434 - static const struct mmu_notifier_ops vhost_mmu_notifier_ops = { 435 - .invalidate_range_start = vhost_invalidate_range_start, 436 - .invalidate_range_end = vhost_invalidate_range_end, 437 - }; 438 - 439 - static void vhost_init_maps(struct vhost_dev *dev) 440 - { 441 - struct vhost_virtqueue *vq; 442 - int i, j; 443 - 444 - dev->mmu_notifier.ops = &vhost_mmu_notifier_ops; 445 - 446 - for (i = 0; i < dev->nvqs; ++i) { 447 - vq = dev->vqs[i]; 448 - for (j = 0; j < VHOST_NUM_ADDRS; j++) 449 - RCU_INIT_POINTER(vq->maps[j], NULL); 450 - } 451 - } 452 - #endif 453 - 454 301 static void vhost_vq_reset(struct vhost_dev *dev, 455 302 struct vhost_virtqueue *vq) 456 303 { ··· 325 480 vq->busyloop_timeout = 0; 326 481 vq->umem = NULL; 327 482 vq->iotlb = NULL; 328 - vq->invalidate_count = 0; 329 483 __vhost_vq_meta_reset(vq); 330 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 331 - vhost_reset_vq_maps(vq); 332 - #endif 333 484 } 334 485 335 486 static int vhost_worker(void *data) ··· 475 634 INIT_LIST_HEAD(&dev->read_list); 476 635 INIT_LIST_HEAD(&dev->pending_list); 477 636 spin_lock_init(&dev->iotlb_lock); 478 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 479 - vhost_init_maps(dev); 480 - #endif 637 + 481 638 482 639 for (i = 0; i < dev->nvqs; ++i) { 483 640 vq = dev->vqs[i]; ··· 484 645 vq->heads = NULL; 485 646 vq->dev = dev; 486 647 mutex_init(&vq->mutex); 487 - spin_lock_init(&vq->mmu_lock); 488 648 vhost_vq_reset(dev, vq); 489 649 if (vq->handle_kick) 490 650 vhost_poll_init(&vq->poll, vq->handle_kick, ··· 563 725 if (err) 564 726 goto err_cgroup; 565 727 566 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 567 - err = mmu_notifier_register(&dev->mmu_notifier, dev->mm); 568 - if (err) 569 - goto err_mmu_notifier; 570 - #endif 571 - 572 728 return 0; 573 - 574 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 575 - err_mmu_notifier: 576 - vhost_dev_free_iovecs(dev); 577 - #endif 578 729 err_cgroup: 579 730 kthread_stop(worker); 580 731 dev->worker = NULL; ··· 654 827 spin_unlock(&dev->iotlb_lock); 655 828 } 656 829 657 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 658 - static void vhost_setup_uaddr(struct vhost_virtqueue *vq, 659 - int index, unsigned long uaddr, 660 - size_t size, bool write) 661 - { 662 - struct vhost_uaddr *addr = &vq->uaddrs[index]; 663 - 664 - addr->uaddr = uaddr; 665 - addr->size = size; 666 - addr->write = write; 667 - } 668 - 669 - static void vhost_setup_vq_uaddr(struct vhost_virtqueue *vq) 670 - { 671 - vhost_setup_uaddr(vq, VHOST_ADDR_DESC, 672 - (unsigned long)vq->desc, 673 - vhost_get_desc_size(vq, vq->num), 674 - false); 675 - vhost_setup_uaddr(vq, VHOST_ADDR_AVAIL, 676 - (unsigned long)vq->avail, 677 - vhost_get_avail_size(vq, vq->num), 678 - false); 679 - vhost_setup_uaddr(vq, VHOST_ADDR_USED, 680 - (unsigned long)vq->used, 681 - vhost_get_used_size(vq, vq->num), 682 - true); 683 - } 684 - 685 - static int vhost_map_prefetch(struct vhost_virtqueue *vq, 686 - int index) 687 - { 688 - struct vhost_map *map; 689 - struct vhost_uaddr *uaddr = &vq->uaddrs[index]; 690 - struct page **pages; 691 - int npages = DIV_ROUND_UP(uaddr->size, PAGE_SIZE); 692 - int npinned; 693 - void 
*vaddr, *v; 694 - int err; 695 - int i; 696 - 697 - spin_lock(&vq->mmu_lock); 698 - 699 - err = -EFAULT; 700 - if (vq->invalidate_count) 701 - goto err; 702 - 703 - err = -ENOMEM; 704 - map = kmalloc(sizeof(*map), GFP_ATOMIC); 705 - if (!map) 706 - goto err; 707 - 708 - pages = kmalloc_array(npages, sizeof(struct page *), GFP_ATOMIC); 709 - if (!pages) 710 - goto err_pages; 711 - 712 - err = EFAULT; 713 - npinned = __get_user_pages_fast(uaddr->uaddr, npages, 714 - uaddr->write, pages); 715 - if (npinned > 0) 716 - release_pages(pages, npinned); 717 - if (npinned != npages) 718 - goto err_gup; 719 - 720 - for (i = 0; i < npinned; i++) 721 - if (PageHighMem(pages[i])) 722 - goto err_gup; 723 - 724 - vaddr = v = page_address(pages[0]); 725 - 726 - /* For simplicity, fallback to userspace address if VA is not 727 - * contigious. 728 - */ 729 - for (i = 1; i < npinned; i++) { 730 - v += PAGE_SIZE; 731 - if (v != page_address(pages[i])) 732 - goto err_gup; 733 - } 734 - 735 - map->addr = vaddr + (uaddr->uaddr & (PAGE_SIZE - 1)); 736 - map->npages = npages; 737 - map->pages = pages; 738 - 739 - rcu_assign_pointer(vq->maps[index], map); 740 - /* No need for a synchronize_rcu(). This function should be 741 - * called by dev->worker so we are serialized with all 742 - * readers. 743 - */ 744 - spin_unlock(&vq->mmu_lock); 745 - 746 - return 0; 747 - 748 - err_gup: 749 - kfree(pages); 750 - err_pages: 751 - kfree(map); 752 - err: 753 - spin_unlock(&vq->mmu_lock); 754 - return err; 755 - } 756 - #endif 757 - 758 830 void vhost_dev_cleanup(struct vhost_dev *dev) 759 831 { 760 832 int i; ··· 683 957 kthread_stop(dev->worker); 684 958 dev->worker = NULL; 685 959 } 686 - if (dev->mm) { 687 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 688 - mmu_notifier_unregister(&dev->mmu_notifier, dev->mm); 689 - #endif 960 + if (dev->mm) 690 961 mmput(dev->mm); 691 - } 692 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 693 - for (i = 0; i < dev->nvqs; i++) 694 - vhost_uninit_vq_maps(dev->vqs[i]); 695 - #endif 696 962 dev->mm = NULL; 697 963 } 698 964 EXPORT_SYMBOL_GPL(vhost_dev_cleanup); ··· 913 1195 914 1196 static inline int vhost_put_avail_event(struct vhost_virtqueue *vq) 915 1197 { 916 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 917 - struct vhost_map *map; 918 - struct vring_used *used; 919 - 920 - if (!vq->iotlb) { 921 - rcu_read_lock(); 922 - 923 - map = rcu_dereference(vq->maps[VHOST_ADDR_USED]); 924 - if (likely(map)) { 925 - used = map->addr; 926 - *((__virtio16 *)&used->ring[vq->num]) = 927 - cpu_to_vhost16(vq, vq->avail_idx); 928 - rcu_read_unlock(); 929 - return 0; 930 - } 931 - 932 - rcu_read_unlock(); 933 - } 934 - #endif 935 - 936 1198 return vhost_put_user(vq, cpu_to_vhost16(vq, vq->avail_idx), 937 1199 vhost_avail_event(vq)); 938 1200 } ··· 921 1223 struct vring_used_elem *head, int idx, 922 1224 int count) 923 1225 { 924 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 925 - struct vhost_map *map; 926 - struct vring_used *used; 927 - size_t size; 928 - 929 - if (!vq->iotlb) { 930 - rcu_read_lock(); 931 - 932 - map = rcu_dereference(vq->maps[VHOST_ADDR_USED]); 933 - if (likely(map)) { 934 - used = map->addr; 935 - size = count * sizeof(*head); 936 - memcpy(used->ring + idx, head, size); 937 - rcu_read_unlock(); 938 - return 0; 939 - } 940 - 941 - rcu_read_unlock(); 942 - } 943 - #endif 944 - 945 1226 return vhost_copy_to_user(vq, vq->used->ring + idx, head, 946 1227 count * sizeof(*head)); 947 1228 } ··· 928 1251 static inline int vhost_put_used_flags(struct vhost_virtqueue *vq) 929 1252 930 1253 { 931 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 932 - 
struct vhost_map *map; 933 - struct vring_used *used; 934 - 935 - if (!vq->iotlb) { 936 - rcu_read_lock(); 937 - 938 - map = rcu_dereference(vq->maps[VHOST_ADDR_USED]); 939 - if (likely(map)) { 940 - used = map->addr; 941 - used->flags = cpu_to_vhost16(vq, vq->used_flags); 942 - rcu_read_unlock(); 943 - return 0; 944 - } 945 - 946 - rcu_read_unlock(); 947 - } 948 - #endif 949 - 950 1254 return vhost_put_user(vq, cpu_to_vhost16(vq, vq->used_flags), 951 1255 &vq->used->flags); 952 1256 } ··· 935 1277 static inline int vhost_put_used_idx(struct vhost_virtqueue *vq) 936 1278 937 1279 { 938 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 939 - struct vhost_map *map; 940 - struct vring_used *used; 941 - 942 - if (!vq->iotlb) { 943 - rcu_read_lock(); 944 - 945 - map = rcu_dereference(vq->maps[VHOST_ADDR_USED]); 946 - if (likely(map)) { 947 - used = map->addr; 948 - used->idx = cpu_to_vhost16(vq, vq->last_used_idx); 949 - rcu_read_unlock(); 950 - return 0; 951 - } 952 - 953 - rcu_read_unlock(); 954 - } 955 - #endif 956 - 957 1280 return vhost_put_user(vq, cpu_to_vhost16(vq, vq->last_used_idx), 958 1281 &vq->used->idx); 959 1282 } ··· 980 1341 static inline int vhost_get_avail_idx(struct vhost_virtqueue *vq, 981 1342 __virtio16 *idx) 982 1343 { 983 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 984 - struct vhost_map *map; 985 - struct vring_avail *avail; 986 - 987 - if (!vq->iotlb) { 988 - rcu_read_lock(); 989 - 990 - map = rcu_dereference(vq->maps[VHOST_ADDR_AVAIL]); 991 - if (likely(map)) { 992 - avail = map->addr; 993 - *idx = avail->idx; 994 - rcu_read_unlock(); 995 - return 0; 996 - } 997 - 998 - rcu_read_unlock(); 999 - } 1000 - #endif 1001 - 1002 1344 return vhost_get_avail(vq, *idx, &vq->avail->idx); 1003 1345 } 1004 1346 1005 1347 static inline int vhost_get_avail_head(struct vhost_virtqueue *vq, 1006 1348 __virtio16 *head, int idx) 1007 1349 { 1008 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1009 - struct vhost_map *map; 1010 - struct vring_avail *avail; 1011 - 1012 - if (!vq->iotlb) { 1013 - rcu_read_lock(); 1014 - 1015 - map = rcu_dereference(vq->maps[VHOST_ADDR_AVAIL]); 1016 - if (likely(map)) { 1017 - avail = map->addr; 1018 - *head = avail->ring[idx & (vq->num - 1)]; 1019 - rcu_read_unlock(); 1020 - return 0; 1021 - } 1022 - 1023 - rcu_read_unlock(); 1024 - } 1025 - #endif 1026 - 1027 1350 return vhost_get_avail(vq, *head, 1028 1351 &vq->avail->ring[idx & (vq->num - 1)]); 1029 1352 } ··· 993 1392 static inline int vhost_get_avail_flags(struct vhost_virtqueue *vq, 994 1393 __virtio16 *flags) 995 1394 { 996 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 997 - struct vhost_map *map; 998 - struct vring_avail *avail; 999 - 1000 - if (!vq->iotlb) { 1001 - rcu_read_lock(); 1002 - 1003 - map = rcu_dereference(vq->maps[VHOST_ADDR_AVAIL]); 1004 - if (likely(map)) { 1005 - avail = map->addr; 1006 - *flags = avail->flags; 1007 - rcu_read_unlock(); 1008 - return 0; 1009 - } 1010 - 1011 - rcu_read_unlock(); 1012 - } 1013 - #endif 1014 - 1015 1395 return vhost_get_avail(vq, *flags, &vq->avail->flags); 1016 1396 } 1017 1397 1018 1398 static inline int vhost_get_used_event(struct vhost_virtqueue *vq, 1019 1399 __virtio16 *event) 1020 1400 { 1021 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1022 - struct vhost_map *map; 1023 - struct vring_avail *avail; 1024 - 1025 - if (!vq->iotlb) { 1026 - rcu_read_lock(); 1027 - map = rcu_dereference(vq->maps[VHOST_ADDR_AVAIL]); 1028 - if (likely(map)) { 1029 - avail = map->addr; 1030 - *event = (__virtio16)avail->ring[vq->num]; 1031 - rcu_read_unlock(); 1032 - return 0; 1033 - } 1034 - rcu_read_unlock(); 1035 - } 1036 
- #endif 1037 - 1038 1401 return vhost_get_avail(vq, *event, vhost_used_event(vq)); 1039 1402 } 1040 1403 1041 1404 static inline int vhost_get_used_idx(struct vhost_virtqueue *vq, 1042 1405 __virtio16 *idx) 1043 1406 { 1044 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1045 - struct vhost_map *map; 1046 - struct vring_used *used; 1047 - 1048 - if (!vq->iotlb) { 1049 - rcu_read_lock(); 1050 - 1051 - map = rcu_dereference(vq->maps[VHOST_ADDR_USED]); 1052 - if (likely(map)) { 1053 - used = map->addr; 1054 - *idx = used->idx; 1055 - rcu_read_unlock(); 1056 - return 0; 1057 - } 1058 - 1059 - rcu_read_unlock(); 1060 - } 1061 - #endif 1062 - 1063 1407 return vhost_get_used(vq, *idx, &vq->used->idx); 1064 1408 } 1065 1409 1066 1410 static inline int vhost_get_desc(struct vhost_virtqueue *vq, 1067 1411 struct vring_desc *desc, int idx) 1068 1412 { 1069 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1070 - struct vhost_map *map; 1071 - struct vring_desc *d; 1072 - 1073 - if (!vq->iotlb) { 1074 - rcu_read_lock(); 1075 - 1076 - map = rcu_dereference(vq->maps[VHOST_ADDR_DESC]); 1077 - if (likely(map)) { 1078 - d = map->addr; 1079 - *desc = *(d + idx); 1080 - rcu_read_unlock(); 1081 - return 0; 1082 - } 1083 - 1084 - rcu_read_unlock(); 1085 - } 1086 - #endif 1087 - 1088 1413 return vhost_copy_from_user(vq, desc, vq->desc + idx, sizeof(*desc)); 1089 1414 } 1090 1415 ··· 1351 1824 return true; 1352 1825 } 1353 1826 1354 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1355 - static void vhost_vq_map_prefetch(struct vhost_virtqueue *vq) 1356 - { 1357 - struct vhost_map __rcu *map; 1358 - int i; 1359 - 1360 - for (i = 0; i < VHOST_NUM_ADDRS; i++) { 1361 - rcu_read_lock(); 1362 - map = rcu_dereference(vq->maps[i]); 1363 - rcu_read_unlock(); 1364 - if (unlikely(!map)) 1365 - vhost_map_prefetch(vq, i); 1366 - } 1367 - } 1368 - #endif 1369 - 1370 1827 int vq_meta_prefetch(struct vhost_virtqueue *vq) 1371 1828 { 1372 1829 unsigned int num = vq->num; 1373 1830 1374 - if (!vq->iotlb) { 1375 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1376 - vhost_vq_map_prefetch(vq); 1377 - #endif 1831 + if (!vq->iotlb) 1378 1832 return 1; 1379 - } 1380 1833 1381 1834 return iotlb_access_ok(vq, VHOST_ACCESS_RO, (u64)(uintptr_t)vq->desc, 1382 1835 vhost_get_desc_size(vq, num), VHOST_ADDR_DESC) && ··· 1567 2060 1568 2061 mutex_lock(&vq->mutex); 1569 2062 1570 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1571 - /* Unregister MMU notifer to allow invalidation callback 1572 - * can access vq->uaddrs[] without holding a lock. 1573 - */ 1574 - if (d->mm) 1575 - mmu_notifier_unregister(&d->mmu_notifier, d->mm); 1576 - 1577 - vhost_uninit_vq_maps(vq); 1578 - #endif 1579 - 1580 2063 switch (ioctl) { 1581 2064 case VHOST_SET_VRING_NUM: 1582 2065 r = vhost_vring_set_num(d, vq, argp); ··· 1577 2080 default: 1578 2081 BUG(); 1579 2082 } 1580 - 1581 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 1582 - vhost_setup_vq_uaddr(vq); 1583 - 1584 - if (d->mm) 1585 - mmu_notifier_register(&d->mmu_notifier, d->mm); 1586 - #endif 1587 2083 1588 2084 mutex_unlock(&vq->mutex); 1589 2085 ··· 2178 2688 /* If this is an input descriptor, increment that count. */ 2179 2689 if (access == VHOST_ACCESS_WO) { 2180 2690 *in_num += ret; 2181 - if (unlikely(log)) { 2691 + if (unlikely(log && ret)) { 2182 2692 log[*log_num].addr = vhost64_to_cpu(vq, desc.addr); 2183 2693 log[*log_num].len = vhost32_to_cpu(vq, desc.len); 2184 2694 ++*log_num; ··· 2319 2829 /* If this is an input descriptor, 2320 2830 * increment that count. 
*/ 2321 2831 *in_num += ret; 2322 - if (unlikely(log)) { 2832 + if (unlikely(log && ret)) { 2323 2833 log[*log_num].addr = vhost64_to_cpu(vq, desc.addr); 2324 2834 log[*log_num].len = vhost32_to_cpu(vq, desc.len); 2325 2835 ++*log_num;
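Note on the vhost.c hunks above: they back out the experimental metadata-acceleration path. Rather than first trying an RCU-published struct vhost_map of pinned, kernel-mapped guest pages (kept coherent via the MMU-notifier callbacks) and only then falling back to uaccess, every ring accessor now goes straight to the copy_to_user()/copy_from_user() based helpers. A minimal sketch of the fast-path shape being removed, distilled from the deleted code (illustrative only; it does not build against the post-revert tree):

	static inline int put_used_idx_sketch(struct vhost_virtqueue *vq)
	{
		struct vhost_map *map;

		rcu_read_lock();
		/* published by the prefetch path, torn down by the
		 * invalidate_range_start/end MMU-notifier callbacks
		 */
		map = rcu_dereference(vq->maps[VHOST_ADDR_USED]);
		if (map) {
			struct vring_used *used = map->addr;

			used->idx = cpu_to_vhost16(vq, vq->last_used_idx);
			rcu_read_unlock();
			return 0;
		}
		rcu_read_unlock();

		/* slow path: the plain uaccess helper, now the only path */
		return vhost_put_user(vq, cpu_to_vhost16(vq, vq->last_used_idx),
				      &vq->used->idx);
	}

Keeping that fast path correct required the mmu_lock, the invalidate_count bookkeeping, and the notifier unregister/register dance around every vring ioctl, all of which this merge deletes. The two "if (unlikely(log && ret))" changes are independent of the revert: they appear to ensure a dirty-log entry is only recorded when the descriptor actually translated to at least one iovec.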
-41
drivers/vhost/vhost.h
··· 12 12 #include <linux/virtio_config.h> 13 13 #include <linux/virtio_ring.h> 14 14 #include <linux/atomic.h> 15 - #include <linux/pagemap.h> 16 - #include <linux/mmu_notifier.h> 17 - #include <asm/cacheflush.h> 18 15 19 16 struct vhost_work; 20 17 typedef void (*vhost_work_fn_t)(struct vhost_work *work); ··· 80 83 VHOST_NUM_ADDRS = 3, 81 84 }; 82 85 83 - struct vhost_map { 84 - int npages; 85 - void *addr; 86 - struct page **pages; 87 - }; 88 - 89 - struct vhost_uaddr { 90 - unsigned long uaddr; 91 - size_t size; 92 - bool write; 93 - }; 94 - 95 - #if defined(CONFIG_MMU_NOTIFIER) && ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE == 0 96 - #define VHOST_ARCH_CAN_ACCEL_UACCESS 0 97 - #else 98 - #define VHOST_ARCH_CAN_ACCEL_UACCESS 0 99 - #endif 100 - 101 86 /* The virtqueue structure describes a queue attached to a device. */ 102 87 struct vhost_virtqueue { 103 88 struct vhost_dev *dev; ··· 90 111 struct vring_desc __user *desc; 91 112 struct vring_avail __user *avail; 92 113 struct vring_used __user *used; 93 - 94 - #if VHOST_ARCH_CAN_ACCEL_UACCESS 95 - /* Read by memory accessors, modified by meta data 96 - * prefetching, MMU notifier and vring ioctl(). 97 - * Synchonrized through mmu_lock (writers) and RCU (writers 98 - * and readers). 99 - */ 100 - struct vhost_map __rcu *maps[VHOST_NUM_ADDRS]; 101 - /* Read by MMU notifier, modified by vring ioctl(), 102 - * synchronized through MMU notifier 103 - * registering/unregistering. 104 - */ 105 - struct vhost_uaddr uaddrs[VHOST_NUM_ADDRS]; 106 - #endif 107 114 const struct vhost_umem_node *meta_iotlb[VHOST_NUM_ADDRS]; 108 - 109 115 struct file *kick; 110 116 struct eventfd_ctx *call_ctx; 111 117 struct eventfd_ctx *error_ctx; ··· 145 181 bool user_be; 146 182 #endif 147 183 u32 busyloop_timeout; 148 - spinlock_t mmu_lock; 149 - int invalidate_count; 150 184 }; 151 185 152 186 struct vhost_msg_node { ··· 158 196 159 197 struct vhost_dev { 160 198 struct mm_struct *mm; 161 - #ifdef CONFIG_MMU_NOTIFIER 162 - struct mmu_notifier mmu_notifier; 163 - #endif 164 199 struct mutex mutex; 165 200 struct vhost_virtqueue **vqs; 166 201 int nvqs;
+6 -2
drivers/virtio/virtio_ring.c
··· 566 566 567 567 unmap_release: 568 568 err_idx = i; 569 - i = head; 569 + 570 + if (indirect) 571 + i = 0; 572 + else 573 + i = head; 570 574 571 575 for (n = 0; n < total_sg; n++) { 572 576 if (i == err_idx) 573 577 break; 574 578 vring_unmap_one_split(vq, &desc[i]); 575 - i = virtio16_to_cpu(_vq->vdev, vq->split.vring.desc[i].next); 579 + i = virtio16_to_cpu(_vq->vdev, desc[i].next); 576 580 } 577 581 578 582 if (indirect)
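The unmap_release fix above matters because the two descriptor tables are threaded differently: an indirect table is private to the request and filled linearly from slot 0, while the main split ring chains through the free list starting at head. Unwinding a failed mapping therefore has to start at slot 0 and follow desc[i].next inside the same table the error occurred in, which is what the second hunk fixes. The corrected unwind loop, restated with explanatory comments:

	/* 'desc' points at the indirect table when 'indirect' is set,
	 * otherwise at the main ring; 'err_idx' is the slot whose
	 * mapping failed and must not itself be unmapped.
	 */
	if (indirect)
		i = 0;		/* indirect tables start at slot 0 */
	else
		i = head;	/* main ring chains from the free head */

	for (n = 0; n < total_sg; n++) {
		if (i == err_idx)
			break;
		vring_unmap_one_split(vq, &desc[i]);
		/* stay inside the table we actually mapped */
		i = virtio16_to_cpu(_vq->vdev, desc[i].next);
	}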
+26 -9
fs/btrfs/extent_io.c
··· 3628 3628 TASK_UNINTERRUPTIBLE); 3629 3629 } 3630 3630 3631 + static void end_extent_buffer_writeback(struct extent_buffer *eb) 3632 + { 3633 + clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags); 3634 + smp_mb__after_atomic(); 3635 + wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK); 3636 + } 3637 + 3631 3638 /* 3632 3639 * Lock eb pages and flush the bio if we can't the locks 3633 3640 * ··· 3706 3699 3707 3700 if (!trylock_page(p)) { 3708 3701 if (!flush) { 3709 - ret = flush_write_bio(epd); 3710 - if (ret < 0) { 3702 + int err; 3703 + 3704 + err = flush_write_bio(epd); 3705 + if (err < 0) { 3706 + ret = err; 3711 3707 failed_page_nr = i; 3712 3708 goto err_unlock; 3713 3709 } ··· 3725 3715 /* Unlock already locked pages */ 3726 3716 for (i = 0; i < failed_page_nr; i++) 3727 3717 unlock_page(eb->pages[i]); 3718 + /* 3719 + * Clear EXTENT_BUFFER_WRITEBACK and wake up anyone waiting on it. 3720 + * Also set back EXTENT_BUFFER_DIRTY so future attempts to this eb can 3721 + * be made and undo everything done before. 3722 + */ 3723 + btrfs_tree_lock(eb); 3724 + spin_lock(&eb->refs_lock); 3725 + set_bit(EXTENT_BUFFER_DIRTY, &eb->bflags); 3726 + end_extent_buffer_writeback(eb); 3727 + spin_unlock(&eb->refs_lock); 3728 + percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, eb->len, 3729 + fs_info->dirty_metadata_batch); 3730 + btrfs_clear_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN); 3731 + btrfs_tree_unlock(eb); 3728 3732 return ret; 3729 - } 3730 - 3731 - static void end_extent_buffer_writeback(struct extent_buffer *eb) 3732 - { 3733 - clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags); 3734 - smp_mb__after_atomic(); 3735 - wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK); 3736 3733 } 3737 3734 3738 3735 static void set_btree_ioerr(struct page *page)
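end_extent_buffer_writeback() is the wake half of a bit-wait protocol, and the reworked error path has to run it (after re-setting EXTENT_BUFFER_DIRTY and re-accounting the dirty metadata bytes) or a later waiter on EXTENT_BUFFER_WRITEBACK would sleep forever on a write that was never submitted. A minimal sketch of the two halves of that protocol, assuming the standard linux/wait_bit.h API:

	/* waker side, as in end_extent_buffer_writeback() above */
	static void writeback_done(struct extent_buffer *eb)
	{
		clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
		smp_mb__after_atomic();	/* order the clear before the
					 * waitqueue inspection inside
					 * wake_up_bit() */
		wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
	}

	/* waiter side: this is what would hang without the wake */
	static void writeback_wait(struct extent_buffer *eb)
	{
		wait_on_bit_io(&eb->bflags, EXTENT_BUFFER_WRITEBACK,
			       TASK_UNINTERRUPTIBLE);
	}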
+8 -8
fs/btrfs/tree-log.c
··· 4985 4985 BTRFS_I(inode), 4986 4986 LOG_OTHER_INODE_ALL, 4987 4987 0, LLONG_MAX, ctx); 4988 - iput(inode); 4988 + btrfs_add_delayed_iput(inode); 4989 4989 } 4990 4990 } 4991 4991 continue; ··· 5000 5000 ret = btrfs_log_inode(trans, root, BTRFS_I(inode), 5001 5001 LOG_OTHER_INODE, 0, LLONG_MAX, ctx); 5002 5002 if (ret) { 5003 - iput(inode); 5003 + btrfs_add_delayed_iput(inode); 5004 5004 continue; 5005 5005 } 5006 5006 ··· 5009 5009 key.offset = 0; 5010 5010 ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); 5011 5011 if (ret < 0) { 5012 - iput(inode); 5012 + btrfs_add_delayed_iput(inode); 5013 5013 continue; 5014 5014 } 5015 5015 ··· 5056 5056 } 5057 5057 path->slots[0]++; 5058 5058 } 5059 - iput(inode); 5059 + btrfs_add_delayed_iput(inode); 5060 5060 } 5061 5061 5062 5062 return ret; ··· 5689 5689 } 5690 5690 5691 5691 if (btrfs_inode_in_log(BTRFS_I(di_inode), trans->transid)) { 5692 - iput(di_inode); 5692 + btrfs_add_delayed_iput(di_inode); 5693 5693 break; 5694 5694 } 5695 5695 ··· 5701 5701 if (!ret && 5702 5702 btrfs_must_commit_transaction(trans, BTRFS_I(di_inode))) 5703 5703 ret = 1; 5704 - iput(di_inode); 5704 + btrfs_add_delayed_iput(di_inode); 5705 5705 if (ret) 5706 5706 goto next_dir_inode; 5707 5707 if (ctx->log_new_dentries) { ··· 5848 5848 if (!ret && ctx && ctx->log_new_dentries) 5849 5849 ret = log_new_dir_dentries(trans, root, 5850 5850 BTRFS_I(dir_inode), ctx); 5851 - iput(dir_inode); 5851 + btrfs_add_delayed_iput(dir_inode); 5852 5852 if (ret) 5853 5853 goto out; 5854 5854 } ··· 5891 5891 ret = btrfs_log_inode(trans, root, BTRFS_I(inode), 5892 5892 LOG_INODE_EXISTS, 5893 5893 0, LLONG_MAX, ctx); 5894 - iput(inode); 5894 + btrfs_add_delayed_iput(inode); 5895 5895 if (ret) 5896 5896 return ret; 5897 5897
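Swapping iput() for btrfs_add_delayed_iput() throughout tree-log.c defers the final reference drop: an inline iput() that turns out to be the last reference runs eviction right there, which may need to join or wait on a transaction from a context that already holds a log/transaction handle. A hedged sketch of the deferral idea; the names below are illustrative, and the real btrfs code embeds the list link in its own inode structure rather than allocating:

	struct deferred_ctx {			/* hypothetical */
		spinlock_t lock;
		struct list_head iputs;
	};

	struct deferred_iput {			/* hypothetical */
		struct list_head list;
		struct inode *inode;
	};

	static void defer_iput(struct deferred_ctx *ctx, struct inode *inode)
	{
		struct deferred_iput *di;

		di = kmalloc(sizeof(*di), GFP_NOFS);
		if (!di) {
			iput(inode);		/* degraded fallback */
			return;
		}
		di->inode = inode;
		spin_lock(&ctx->lock);
		list_add_tail(&di->list, &ctx->iputs);
		spin_unlock(&ctx->lock);
		/* a background worker later drains ctx->iputs and calls
		 * iput() from a context that holds no transaction
		 */
	}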
+13 -2
fs/configfs/configfs_internal.h
··· 20 20 #include <linux/list.h> 21 21 #include <linux/spinlock.h> 22 22 23 + struct configfs_fragment { 24 + atomic_t frag_count; 25 + struct rw_semaphore frag_sem; 26 + bool frag_dead; 27 + }; 28 + 29 + void put_fragment(struct configfs_fragment *); 30 + struct configfs_fragment *get_fragment(struct configfs_fragment *); 31 + 23 32 struct configfs_dirent { 24 33 atomic_t s_count; 25 34 int s_dependent_count; ··· 43 34 #ifdef CONFIG_LOCKDEP 44 35 int s_depth; 45 36 #endif 37 + struct configfs_fragment *s_frag; 46 38 }; 47 39 48 40 #define CONFIGFS_ROOT 0x0001 ··· 71 61 extern int configfs_create_file(struct config_item *, const struct configfs_attribute *); 72 62 extern int configfs_create_bin_file(struct config_item *, 73 63 const struct configfs_bin_attribute *); 74 - extern int configfs_make_dirent(struct configfs_dirent *, 75 - struct dentry *, void *, umode_t, int); 64 + extern int configfs_make_dirent(struct configfs_dirent *, struct dentry *, 65 + void *, umode_t, int, struct configfs_fragment *); 76 66 extern int configfs_dirent_is_ready(struct configfs_dirent *); 77 67 78 68 extern void configfs_hash_and_remove(struct dentry * dir, const char * name); ··· 147 137 { 148 138 if (!(sd->s_type & CONFIGFS_ROOT)) { 149 139 kfree(sd->s_iattr); 140 + put_fragment(sd->s_frag); 150 141 kmem_cache_free(configfs_dir_cachep, sd); 151 142 } 152 143 }
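struct configfs_fragment, introduced above, is a small revoke object: frag_count for lifetime, frag_sem plus frag_dead for quiescence. Every dirent created under a subtree shares the fragment of that subtree's root; attribute I/O takes frag_sem for read and bails out once frag_dead is set, while rmdir/unregister takes it for write and sets the flag, so teardown cannot proceed while any ->show()/->store() callback is still running against dying items. Condensed to its two halves (names as in this patch; the functions are a sketch, not the verbatim configfs code):

	/* reader, e.g. an attribute show path */
	static ssize_t guarded_show(struct configfs_fragment *frag,
				    struct configfs_attribute *attr,
				    struct config_item *item, char *page)
	{
		ssize_t ret = -ENOENT;

		down_read(&frag->frag_sem);
		if (!frag->frag_dead)		/* subtree still alive? */
			ret = attr->show(item, page);
		up_read(&frag->frag_sem);
		return ret;
	}

	/* teardown: after up_write(), no reader can pass the
	 * frag_dead check and none is still inside a callback
	 */
	static void kill_fragment(struct configfs_fragment *frag)
	{
		down_write(&frag->frag_sem);
		frag->frag_dead = true;
		up_write(&frag->frag_sem);
	}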
+104 -33
fs/configfs/dir.c
··· 151 151 152 152 #endif /* CONFIG_LOCKDEP */ 153 153 154 + static struct configfs_fragment *new_fragment(void) 155 + { 156 + struct configfs_fragment *p; 157 + 158 + p = kmalloc(sizeof(struct configfs_fragment), GFP_KERNEL); 159 + if (p) { 160 + atomic_set(&p->frag_count, 1); 161 + init_rwsem(&p->frag_sem); 162 + p->frag_dead = false; 163 + } 164 + return p; 165 + } 166 + 167 + void put_fragment(struct configfs_fragment *frag) 168 + { 169 + if (frag && atomic_dec_and_test(&frag->frag_count)) 170 + kfree(frag); 171 + } 172 + 173 + struct configfs_fragment *get_fragment(struct configfs_fragment *frag) 174 + { 175 + if (likely(frag)) 176 + atomic_inc(&frag->frag_count); 177 + return frag; 178 + } 179 + 154 180 /* 155 181 * Allocates a new configfs_dirent and links it to the parent configfs_dirent 156 182 */ 157 183 static struct configfs_dirent *configfs_new_dirent(struct configfs_dirent *parent_sd, 158 - void *element, int type) 184 + void *element, int type, 185 + struct configfs_fragment *frag) 159 186 { 160 187 struct configfs_dirent * sd; 161 188 ··· 202 175 kmem_cache_free(configfs_dir_cachep, sd); 203 176 return ERR_PTR(-ENOENT); 204 177 } 178 + sd->s_frag = get_fragment(frag); 205 179 list_add(&sd->s_sibling, &parent_sd->s_children); 206 180 spin_unlock(&configfs_dirent_lock); 207 181 ··· 237 209 238 210 int configfs_make_dirent(struct configfs_dirent * parent_sd, 239 211 struct dentry * dentry, void * element, 240 - umode_t mode, int type) 212 + umode_t mode, int type, struct configfs_fragment *frag) 241 213 { 242 214 struct configfs_dirent * sd; 243 215 244 - sd = configfs_new_dirent(parent_sd, element, type); 216 + sd = configfs_new_dirent(parent_sd, element, type, frag); 245 217 if (IS_ERR(sd)) 246 218 return PTR_ERR(sd); 247 219 ··· 288 260 * until it is validated by configfs_dir_set_ready() 289 261 */ 290 262 291 - static int configfs_create_dir(struct config_item *item, struct dentry *dentry) 263 + static int configfs_create_dir(struct config_item *item, struct dentry *dentry, 264 + struct configfs_fragment *frag) 292 265 { 293 266 int error; 294 267 umode_t mode = S_IFDIR| S_IRWXU | S_IRUGO | S_IXUGO; ··· 302 273 return error; 303 274 304 275 error = configfs_make_dirent(p->d_fsdata, dentry, item, mode, 305 - CONFIGFS_DIR | CONFIGFS_USET_CREATING); 276 + CONFIGFS_DIR | CONFIGFS_USET_CREATING, 277 + frag); 306 278 if (unlikely(error)) 307 279 return error; 308 280 ··· 368 338 { 369 339 int err = 0; 370 340 umode_t mode = S_IFLNK | S_IRWXUGO; 341 + struct configfs_dirent *p = parent->d_fsdata; 371 342 372 - err = configfs_make_dirent(parent->d_fsdata, dentry, sl, mode, 373 - CONFIGFS_ITEM_LINK); 343 + err = configfs_make_dirent(p, dentry, sl, mode, 344 + CONFIGFS_ITEM_LINK, p->s_frag); 374 345 if (!err) { 375 346 err = configfs_create(dentry, mode, init_symlink); 376 347 if (err) { ··· 630 599 631 600 static int configfs_attach_group(struct config_item *parent_item, 632 601 struct config_item *item, 633 - struct dentry *dentry); 602 + struct dentry *dentry, 603 + struct configfs_fragment *frag); 634 604 static void configfs_detach_group(struct config_item *item); 635 605 636 606 static void detach_groups(struct config_group *group) ··· 679 647 * try using vfs_mkdir. Just a thought. 
680 648 */ 681 649 static int create_default_group(struct config_group *parent_group, 682 - struct config_group *group) 650 + struct config_group *group, 651 + struct configfs_fragment *frag) 683 652 { 684 653 int ret; 685 654 struct configfs_dirent *sd; ··· 696 663 d_add(child, NULL); 697 664 698 665 ret = configfs_attach_group(&parent_group->cg_item, 699 - &group->cg_item, child); 666 + &group->cg_item, child, frag); 700 667 if (!ret) { 701 668 sd = child->d_fsdata; 702 669 sd->s_type |= CONFIGFS_USET_DEFAULT; ··· 710 677 return ret; 711 678 } 712 679 713 - static int populate_groups(struct config_group *group) 680 + static int populate_groups(struct config_group *group, 681 + struct configfs_fragment *frag) 714 682 { 715 683 struct config_group *new_group; 716 684 int ret = 0; 717 685 718 686 list_for_each_entry(new_group, &group->default_groups, group_entry) { 719 - ret = create_default_group(group, new_group); 687 + ret = create_default_group(group, new_group, frag); 720 688 if (ret) { 721 689 detach_groups(group); 722 690 break; ··· 831 797 */ 832 798 static int configfs_attach_item(struct config_item *parent_item, 833 799 struct config_item *item, 834 - struct dentry *dentry) 800 + struct dentry *dentry, 801 + struct configfs_fragment *frag) 835 802 { 836 803 int ret; 837 804 838 - ret = configfs_create_dir(item, dentry); 805 + ret = configfs_create_dir(item, dentry, frag); 839 806 if (!ret) { 840 807 ret = populate_attrs(item); 841 808 if (ret) { ··· 866 831 867 832 static int configfs_attach_group(struct config_item *parent_item, 868 833 struct config_item *item, 869 - struct dentry *dentry) 834 + struct dentry *dentry, 835 + struct configfs_fragment *frag) 870 836 { 871 837 int ret; 872 838 struct configfs_dirent *sd; 873 839 874 - ret = configfs_attach_item(parent_item, item, dentry); 840 + ret = configfs_attach_item(parent_item, item, dentry, frag); 875 841 if (!ret) { 876 842 sd = dentry->d_fsdata; 877 843 sd->s_type |= CONFIGFS_USET_DIR; ··· 888 852 */ 889 853 inode_lock_nested(d_inode(dentry), I_MUTEX_CHILD); 890 854 configfs_adjust_dir_dirent_depth_before_populate(sd); 891 - ret = populate_groups(to_config_group(item)); 855 + ret = populate_groups(to_config_group(item), frag); 892 856 if (ret) { 893 857 configfs_detach_item(item); 894 858 d_inode(dentry)->i_flags |= S_DEAD; ··· 1283 1247 struct configfs_dirent *sd; 1284 1248 const struct config_item_type *type; 1285 1249 struct module *subsys_owner = NULL, *new_item_owner = NULL; 1250 + struct configfs_fragment *frag; 1286 1251 char *name; 1287 1252 1288 1253 sd = dentry->d_parent->d_fsdata; ··· 1299 1262 1300 1263 if (!(sd->s_type & CONFIGFS_USET_DIR)) { 1301 1264 ret = -EPERM; 1265 + goto out; 1266 + } 1267 + 1268 + frag = new_fragment(); 1269 + if (!frag) { 1270 + ret = -ENOMEM; 1302 1271 goto out; 1303 1272 } 1304 1273 ··· 1410 1367 spin_unlock(&configfs_dirent_lock); 1411 1368 1412 1369 if (group) 1413 - ret = configfs_attach_group(parent_item, item, dentry); 1370 + ret = configfs_attach_group(parent_item, item, dentry, frag); 1414 1371 else 1415 - ret = configfs_attach_item(parent_item, item, dentry); 1372 + ret = configfs_attach_item(parent_item, item, dentry, frag); 1416 1373 1417 1374 spin_lock(&configfs_dirent_lock); 1418 1375 sd->s_type &= ~CONFIGFS_USET_IN_MKDIR; ··· 1449 1406 * reference. 
1450 1407 */ 1451 1408 config_item_put(parent_item); 1409 + put_fragment(frag); 1452 1410 1453 1411 out: 1454 1412 return ret; ··· 1461 1417 struct config_item *item; 1462 1418 struct configfs_subsystem *subsys; 1463 1419 struct configfs_dirent *sd; 1420 + struct configfs_fragment *frag; 1464 1421 struct module *subsys_owner = NULL, *dead_item_owner = NULL; 1465 1422 int ret; 1466 1423 ··· 1518 1473 dput(wait); 1519 1474 } 1520 1475 } while (ret == -EAGAIN); 1476 + 1477 + frag = sd->s_frag; 1478 + if (down_write_killable(&frag->frag_sem)) { 1479 + spin_lock(&configfs_dirent_lock); 1480 + configfs_detach_rollback(dentry); 1481 + spin_unlock(&configfs_dirent_lock); 1482 + return -EINTR; 1483 + } 1484 + frag->frag_dead = true; 1485 + up_write(&frag->frag_sem); 1521 1486 1522 1487 /* Get a working ref for the duration of this function */ 1523 1488 item = configfs_get_config_item(dentry); ··· 1629 1574 */ 1630 1575 err = -ENOENT; 1631 1576 if (configfs_dirent_is_ready(parent_sd)) { 1632 - file->private_data = configfs_new_dirent(parent_sd, NULL, 0); 1577 + file->private_data = configfs_new_dirent(parent_sd, NULL, 0, NULL); 1633 1578 if (IS_ERR(file->private_data)) 1634 1579 err = PTR_ERR(file->private_data); 1635 1580 else ··· 1787 1732 { 1788 1733 struct configfs_subsystem *subsys = parent_group->cg_subsys; 1789 1734 struct dentry *parent; 1735 + struct configfs_fragment *frag; 1790 1736 int ret; 1737 + 1738 + frag = new_fragment(); 1739 + if (!frag) 1740 + return -ENOMEM; 1791 1741 1792 1742 mutex_lock(&subsys->su_mutex); 1793 1743 link_group(parent_group, group); ··· 1801 1741 parent = parent_group->cg_item.ci_dentry; 1802 1742 1803 1743 inode_lock_nested(d_inode(parent), I_MUTEX_PARENT); 1804 - ret = create_default_group(parent_group, group); 1744 + ret = create_default_group(parent_group, group, frag); 1805 1745 if (ret) 1806 1746 goto err_out; 1807 1747 ··· 1809 1749 configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata); 1810 1750 spin_unlock(&configfs_dirent_lock); 1811 1751 inode_unlock(d_inode(parent)); 1752 + put_fragment(frag); 1812 1753 return 0; 1813 1754 err_out: 1814 1755 inode_unlock(d_inode(parent)); 1815 1756 mutex_lock(&subsys->su_mutex); 1816 1757 unlink_group(group); 1817 1758 mutex_unlock(&subsys->su_mutex); 1759 + put_fragment(frag); 1818 1760 return ret; 1819 1761 } 1820 1762 EXPORT_SYMBOL(configfs_register_group); ··· 1832 1770 struct configfs_subsystem *subsys = group->cg_subsys; 1833 1771 struct dentry *dentry = group->cg_item.ci_dentry; 1834 1772 struct dentry *parent = group->cg_item.ci_parent->ci_dentry; 1773 + struct configfs_dirent *sd = dentry->d_fsdata; 1774 + struct configfs_fragment *frag = sd->s_frag; 1835 1775 1836 - mutex_lock(&subsys->su_mutex); 1837 - if (!group->cg_item.ci_parent->ci_group) { 1838 - /* 1839 - * The parent has already been unlinked and detached 1840 - * due to a rmdir. 
1841 - */ 1842 - goto unlink_group; 1843 - } 1844 - mutex_unlock(&subsys->su_mutex); 1776 + down_write(&frag->frag_sem); 1777 + frag->frag_dead = true; 1778 + up_write(&frag->frag_sem); 1845 1779 1846 1780 inode_lock_nested(d_inode(parent), I_MUTEX_PARENT); 1847 1781 spin_lock(&configfs_dirent_lock); ··· 1854 1796 dput(dentry); 1855 1797 1856 1798 mutex_lock(&subsys->su_mutex); 1857 - unlink_group: 1858 1799 unlink_group(group); 1859 1800 mutex_unlock(&subsys->su_mutex); 1860 1801 } ··· 1910 1853 struct dentry *dentry; 1911 1854 struct dentry *root; 1912 1855 struct configfs_dirent *sd; 1856 + struct configfs_fragment *frag; 1857 + 1858 + frag = new_fragment(); 1859 + if (!frag) 1860 + return -ENOMEM; 1913 1861 1914 1862 root = configfs_pin_fs(); 1915 - if (IS_ERR(root)) 1863 + if (IS_ERR(root)) { 1864 + put_fragment(frag); 1916 1865 return PTR_ERR(root); 1866 + } 1917 1867 1918 1868 if (!group->cg_item.ci_name) 1919 1869 group->cg_item.ci_name = group->cg_item.ci_namebuf; ··· 1936 1872 d_add(dentry, NULL); 1937 1873 1938 1874 err = configfs_attach_group(sd->s_element, &group->cg_item, 1939 - dentry); 1875 + dentry, frag); 1940 1876 if (err) { 1941 1877 BUG_ON(d_inode(dentry)); 1942 1878 d_drop(dentry); ··· 1954 1890 unlink_group(group); 1955 1891 configfs_release_fs(); 1956 1892 } 1893 + put_fragment(frag); 1957 1894 1958 1895 return err; 1959 1896 } ··· 1964 1899 struct config_group *group = &subsys->su_group; 1965 1900 struct dentry *dentry = group->cg_item.ci_dentry; 1966 1901 struct dentry *root = dentry->d_sb->s_root; 1902 + struct configfs_dirent *sd = dentry->d_fsdata; 1903 + struct configfs_fragment *frag = sd->s_frag; 1967 1904 1968 1905 if (dentry->d_parent != root) { 1969 1906 pr_err("Tried to unregister non-subsystem!\n"); 1970 1907 return; 1971 1908 } 1909 + 1910 + down_write(&frag->frag_sem); 1911 + frag->frag_dead = true; 1912 + up_write(&frag->frag_sem); 1972 1913 1973 1914 inode_lock_nested(d_inode(root), 1974 1915 I_MUTEX_PARENT);
+138 -138
fs/configfs/file.c
··· 39 39 bool write_in_progress; 40 40 char *bin_buffer; 41 41 int bin_buffer_size; 42 + int cb_max_size; 43 + struct config_item *item; 44 + struct module *owner; 45 + union { 46 + struct configfs_attribute *attr; 47 + struct configfs_bin_attribute *bin_attr; 48 + }; 42 49 }; 43 50 44 - 45 - /** 46 - * fill_read_buffer - allocate and fill buffer from item. 47 - * @dentry: dentry pointer. 48 - * @buffer: data buffer for file. 49 - * 50 - * Allocate @buffer->page, if it hasn't been already, then call the 51 - * config_item's show() method to fill the buffer with this attribute's 52 - * data. 53 - * This is called only once, on the file's first read. 54 - */ 55 - static int fill_read_buffer(struct dentry * dentry, struct configfs_buffer * buffer) 51 + static inline struct configfs_fragment *to_frag(struct file *file) 56 52 { 57 - struct configfs_attribute * attr = to_attr(dentry); 58 - struct config_item * item = to_item(dentry->d_parent); 59 - int ret = 0; 60 - ssize_t count; 53 + struct configfs_dirent *sd = file->f_path.dentry->d_fsdata; 54 + 55 + return sd->s_frag; 56 + } 57 + 58 + static int fill_read_buffer(struct file *file, struct configfs_buffer *buffer) 59 + { 60 + struct configfs_fragment *frag = to_frag(file); 61 + ssize_t count = -ENOENT; 61 62 62 63 if (!buffer->page) 63 64 buffer->page = (char *) get_zeroed_page(GFP_KERNEL); 64 65 if (!buffer->page) 65 66 return -ENOMEM; 66 67 67 - count = attr->show(item, buffer->page); 68 + down_read(&frag->frag_sem); 69 + if (!frag->frag_dead) 70 + count = buffer->attr->show(buffer->item, buffer->page); 71 + up_read(&frag->frag_sem); 68 72 69 - BUG_ON(count > (ssize_t)SIMPLE_ATTR_SIZE); 70 - if (count >= 0) { 71 - buffer->needs_read_fill = 0; 72 - buffer->count = count; 73 - } else 74 - ret = count; 75 - return ret; 73 + if (count < 0) 74 + return count; 75 + if (WARN_ON_ONCE(count > (ssize_t)SIMPLE_ATTR_SIZE)) 76 + return -EIO; 77 + buffer->needs_read_fill = 0; 78 + buffer->count = count; 79 + return 0; 76 80 } 77 81 78 82 /** ··· 101 97 static ssize_t 102 98 configfs_read_file(struct file *file, char __user *buf, size_t count, loff_t *ppos) 103 99 { 104 - struct configfs_buffer * buffer = file->private_data; 100 + struct configfs_buffer *buffer = file->private_data; 105 101 ssize_t retval = 0; 106 102 107 103 mutex_lock(&buffer->mutex); 108 104 if (buffer->needs_read_fill) { 109 - if ((retval = fill_read_buffer(file->f_path.dentry,buffer))) 105 + retval = fill_read_buffer(file, buffer); 106 + if (retval) 110 107 goto out; 111 108 } 112 109 pr_debug("%s: count = %zd, ppos = %lld, buf = %s\n", ··· 143 138 configfs_read_bin_file(struct file *file, char __user *buf, 144 139 size_t count, loff_t *ppos) 145 140 { 141 + struct configfs_fragment *frag = to_frag(file); 146 142 struct configfs_buffer *buffer = file->private_data; 147 - struct dentry *dentry = file->f_path.dentry; 148 - struct config_item *item = to_item(dentry->d_parent); 149 - struct configfs_bin_attribute *bin_attr = to_bin_attr(dentry); 150 143 ssize_t retval = 0; 151 144 ssize_t len = min_t(size_t, count, PAGE_SIZE); 152 145 ··· 159 156 160 157 if (buffer->needs_read_fill) { 161 158 /* perform first read with buf == NULL to get extent */ 162 - len = bin_attr->read(item, NULL, 0); 159 + down_read(&frag->frag_sem); 160 + if (!frag->frag_dead) 161 + len = buffer->bin_attr->read(buffer->item, NULL, 0); 162 + else 163 + len = -ENOENT; 164 + up_read(&frag->frag_sem); 163 165 if (len <= 0) { 164 166 retval = len; 165 167 goto out; 166 168 } 167 169 168 170 /* do not exceed the 
maximum value */ 169 - if (bin_attr->cb_max_size && len > bin_attr->cb_max_size) { 171 + if (buffer->cb_max_size && len > buffer->cb_max_size) { 170 172 retval = -EFBIG; 171 173 goto out; 172 174 } ··· 184 176 buffer->bin_buffer_size = len; 185 177 186 178 /* perform second read to fill buffer */ 187 - len = bin_attr->read(item, buffer->bin_buffer, len); 179 + down_read(&frag->frag_sem); 180 + if (!frag->frag_dead) 181 + len = buffer->bin_attr->read(buffer->item, 182 + buffer->bin_buffer, len); 183 + else 184 + len = -ENOENT; 185 + up_read(&frag->frag_sem); 188 186 if (len < 0) { 189 187 retval = len; 190 188 vfree(buffer->bin_buffer); ··· 240 226 return error ? -EFAULT : count; 241 227 } 242 228 243 - 244 - /** 245 - * flush_write_buffer - push buffer to config_item. 246 - * @dentry: dentry to the attribute 247 - * @buffer: data buffer for file. 248 - * @count: number of bytes 249 - * 250 - * Get the correct pointers for the config_item and the attribute we're 251 - * dealing with, then call the store() method for the attribute, 252 - * passing the buffer that we acquired in fill_write_buffer(). 253 - */ 254 - 255 229 static int 256 - flush_write_buffer(struct dentry * dentry, struct configfs_buffer * buffer, size_t count) 230 + flush_write_buffer(struct file *file, struct configfs_buffer *buffer, size_t count) 257 231 { 258 - struct configfs_attribute * attr = to_attr(dentry); 259 - struct config_item * item = to_item(dentry->d_parent); 232 + struct configfs_fragment *frag = to_frag(file); 233 + int res = -ENOENT; 260 234 261 - return attr->store(item, buffer->page, count); 235 + down_read(&frag->frag_sem); 236 + if (!frag->frag_dead) 237 + res = buffer->attr->store(buffer->item, buffer->page, count); 238 + up_read(&frag->frag_sem); 239 + return res; 262 240 } 263 241 264 242 ··· 274 268 static ssize_t 275 269 configfs_write_file(struct file *file, const char __user *buf, size_t count, loff_t *ppos) 276 270 { 277 - struct configfs_buffer * buffer = file->private_data; 271 + struct configfs_buffer *buffer = file->private_data; 278 272 ssize_t len; 279 273 280 274 mutex_lock(&buffer->mutex); 281 275 len = fill_write_buffer(buffer, buf, count); 282 276 if (len > 0) 283 - len = flush_write_buffer(file->f_path.dentry, buffer, len); 277 + len = flush_write_buffer(file, buffer, len); 284 278 if (len > 0) 285 279 *ppos += len; 286 280 mutex_unlock(&buffer->mutex); ··· 305 299 size_t count, loff_t *ppos) 306 300 { 307 301 struct configfs_buffer *buffer = file->private_data; 308 - struct dentry *dentry = file->f_path.dentry; 309 - struct configfs_bin_attribute *bin_attr = to_bin_attr(dentry); 310 302 void *tbuf = NULL; 311 303 ssize_t len; 312 304 ··· 320 316 /* buffer grows? 
*/ 321 317 if (*ppos + count > buffer->bin_buffer_size) { 322 318 323 - if (bin_attr->cb_max_size && 324 - *ppos + count > bin_attr->cb_max_size) { 319 + if (buffer->cb_max_size && 320 + *ppos + count > buffer->cb_max_size) { 325 321 len = -EFBIG; 326 322 goto out; 327 323 } ··· 353 349 return len; 354 350 } 355 351 356 - static int check_perm(struct inode * inode, struct file * file, int type) 352 + static int __configfs_open_file(struct inode *inode, struct file *file, int type) 357 353 { 358 - struct config_item *item = configfs_get_config_item(file->f_path.dentry->d_parent); 359 - struct configfs_attribute * attr = to_attr(file->f_path.dentry); 360 - struct configfs_bin_attribute *bin_attr = NULL; 361 - struct configfs_buffer * buffer; 362 - struct configfs_item_operations * ops = NULL; 363 - int error = 0; 354 + struct dentry *dentry = file->f_path.dentry; 355 + struct configfs_fragment *frag = to_frag(file); 356 + struct configfs_attribute *attr; 357 + struct configfs_buffer *buffer; 358 + int error; 364 359 365 - if (!item || !attr) 366 - goto Einval; 360 + error = -ENOMEM; 361 + buffer = kzalloc(sizeof(struct configfs_buffer), GFP_KERNEL); 362 + if (!buffer) 363 + goto out; 367 364 368 - if (type & CONFIGFS_ITEM_BIN_ATTR) 369 - bin_attr = to_bin_attr(file->f_path.dentry); 365 + error = -ENOENT; 366 + down_read(&frag->frag_sem); 367 + if (unlikely(frag->frag_dead)) 368 + goto out_free_buffer; 370 369 371 - /* Grab the module reference for this attribute if we have one */ 372 - if (!try_module_get(attr->ca_owner)) { 373 - error = -ENODEV; 374 - goto Done; 370 + error = -EINVAL; 371 + buffer->item = to_item(dentry->d_parent); 372 + if (!buffer->item) 373 + goto out_free_buffer; 374 + 375 + attr = to_attr(dentry); 376 + if (!attr) 377 + goto out_put_item; 378 + 379 + if (type & CONFIGFS_ITEM_BIN_ATTR) { 380 + buffer->bin_attr = to_bin_attr(dentry); 381 + buffer->cb_max_size = buffer->bin_attr->cb_max_size; 382 + } else { 383 + buffer->attr = attr; 375 384 } 376 385 377 - if (item->ci_type) 378 - ops = item->ci_type->ct_item_ops; 379 - else 380 - goto Eaccess; 386 + buffer->owner = attr->ca_owner; 387 + /* Grab the module reference for this attribute if we have one */ 388 + error = -ENODEV; 389 + if (!try_module_get(buffer->owner)) 390 + goto out_put_item; 391 + 392 + error = -EACCES; 393 + if (!buffer->item->ci_type) 394 + goto out_put_module; 395 + 396 + buffer->ops = buffer->item->ci_type->ct_item_ops; 381 397 382 398 /* File needs write support. 383 399 * The inode's perms must say it's ok, ··· 405 381 */ 406 382 if (file->f_mode & FMODE_WRITE) { 407 383 if (!(inode->i_mode & S_IWUGO)) 408 - goto Eaccess; 409 - 384 + goto out_put_module; 410 385 if ((type & CONFIGFS_ITEM_ATTR) && !attr->store) 411 - goto Eaccess; 412 - 413 - if ((type & CONFIGFS_ITEM_BIN_ATTR) && !bin_attr->write) 414 - goto Eaccess; 386 + goto out_put_module; 387 + if ((type & CONFIGFS_ITEM_BIN_ATTR) && !buffer->bin_attr->write) 388 + goto out_put_module; 415 389 } 416 390 417 391 /* File needs read support. ··· 418 396 */ 419 397 if (file->f_mode & FMODE_READ) { 420 398 if (!(inode->i_mode & S_IRUGO)) 421 - goto Eaccess; 422 - 399 + goto out_put_module; 423 400 if ((type & CONFIGFS_ITEM_ATTR) && !attr->show) 424 - goto Eaccess; 425 - 426 - if ((type & CONFIGFS_ITEM_BIN_ATTR) && !bin_attr->read) 427 - goto Eaccess; 401 + goto out_put_module; 402 + if ((type & CONFIGFS_ITEM_BIN_ATTR) && !buffer->bin_attr->read) 403 + goto out_put_module; 428 404 } 429 405 430 - /* No error? 
Great, allocate a buffer for the file, and store it 431 - * it in file->private_data for easy access. 432 - */ 433 - buffer = kzalloc(sizeof(struct configfs_buffer),GFP_KERNEL); 434 - if (!buffer) { 435 - error = -ENOMEM; 436 - goto Enomem; 437 - } 438 406 mutex_init(&buffer->mutex); 439 407 buffer->needs_read_fill = 1; 440 408 buffer->read_in_progress = false; 441 409 buffer->write_in_progress = false; 442 - buffer->ops = ops; 443 410 file->private_data = buffer; 444 - goto Done; 411 + up_read(&frag->frag_sem); 412 + return 0; 445 413 446 - Einval: 447 - error = -EINVAL; 448 - goto Done; 449 - Eaccess: 450 - error = -EACCES; 451 - Enomem: 452 - module_put(attr->ca_owner); 453 - Done: 454 - if (error && item) 455 - config_item_put(item); 414 + out_put_module: 415 + module_put(buffer->owner); 416 + out_put_item: 417 + config_item_put(buffer->item); 418 + out_free_buffer: 419 + up_read(&frag->frag_sem); 420 + kfree(buffer); 421 + out: 456 422 return error; 457 423 } 458 424 459 425 static int configfs_release(struct inode *inode, struct file *filp) 460 426 { 461 - struct config_item * item = to_item(filp->f_path.dentry->d_parent); 462 - struct configfs_attribute * attr = to_attr(filp->f_path.dentry); 463 - struct module * owner = attr->ca_owner; 464 - struct configfs_buffer * buffer = filp->private_data; 427 + struct configfs_buffer *buffer = filp->private_data; 465 428 466 - if (item) 467 - config_item_put(item); 468 - /* After this point, attr should not be accessed. */ 469 - module_put(owner); 470 - 471 - if (buffer) { 472 - if (buffer->page) 473 - free_page((unsigned long)buffer->page); 474 - mutex_destroy(&buffer->mutex); 475 - kfree(buffer); 476 - } 429 + module_put(buffer->owner); 430 + if (buffer->page) 431 + free_page((unsigned long)buffer->page); 432 + mutex_destroy(&buffer->mutex); 433 + kfree(buffer); 477 434 return 0; 478 435 } 479 436 480 437 static int configfs_open_file(struct inode *inode, struct file *filp) 481 438 { 482 - return check_perm(inode, filp, CONFIGFS_ITEM_ATTR); 439 + return __configfs_open_file(inode, filp, CONFIGFS_ITEM_ATTR); 483 440 } 484 441 485 442 static int configfs_open_bin_file(struct inode *inode, struct file *filp) 486 443 { 487 - return check_perm(inode, filp, CONFIGFS_ITEM_BIN_ATTR); 444 + return __configfs_open_file(inode, filp, CONFIGFS_ITEM_BIN_ATTR); 488 445 } 489 446 490 - static int configfs_release_bin_file(struct inode *inode, struct file *filp) 447 + static int configfs_release_bin_file(struct inode *inode, struct file *file) 491 448 { 492 - struct configfs_buffer *buffer = filp->private_data; 493 - struct dentry *dentry = filp->f_path.dentry; 494 - struct config_item *item = to_item(dentry->d_parent); 495 - struct configfs_bin_attribute *bin_attr = to_bin_attr(dentry); 496 - ssize_t len = 0; 497 - int ret; 449 + struct configfs_buffer *buffer = file->private_data; 498 450 499 451 buffer->read_in_progress = false; 500 452 501 453 if (buffer->write_in_progress) { 454 + struct configfs_fragment *frag = to_frag(file); 502 455 buffer->write_in_progress = false; 503 456 504 - len = bin_attr->write(item, buffer->bin_buffer, 505 - buffer->bin_buffer_size); 506 - 457 + down_read(&frag->frag_sem); 458 + if (!frag->frag_dead) { 459 + /* result of ->release() is ignored */ 460 + buffer->bin_attr->write(buffer->item, 461 + buffer->bin_buffer, 462 + buffer->bin_buffer_size); 463 + } 464 + up_read(&frag->frag_sem); 507 465 /* vfree on NULL is safe */ 508 466 vfree(buffer->bin_buffer); 509 467 buffer->bin_buffer = NULL; ··· 491 489 
buffer->needs_read_fill = 1; 492 490 } 493 491 494 - ret = configfs_release(inode, filp); 495 - if (len < 0) 496 - return len; 497 - return ret; 492 + configfs_release(inode, file); 493 + return 0; 498 494 } 499 495 500 496 ··· 527 527 528 528 inode_lock_nested(d_inode(dir), I_MUTEX_NORMAL); 529 529 error = configfs_make_dirent(parent_sd, NULL, (void *) attr, mode, 530 - CONFIGFS_ITEM_ATTR); 530 + CONFIGFS_ITEM_ATTR, parent_sd->s_frag); 531 531 inode_unlock(d_inode(dir)); 532 532 533 533 return error; ··· 549 549 550 550 inode_lock_nested(dir->d_inode, I_MUTEX_NORMAL); 551 551 error = configfs_make_dirent(parent_sd, NULL, (void *) bin_attr, mode, 552 - CONFIGFS_ITEM_BIN_ATTR); 552 + CONFIGFS_ITEM_BIN_ATTR, parent_sd->s_frag); 553 553 inode_unlock(dir->d_inode); 554 554 555 555 return error;
+10 -8
fs/nfs/inode.c
··· 1403 1403 if (NFS_PROTO(inode)->have_delegation(inode, FMODE_READ)) 1404 1404 return 0; 1405 1405 1406 - /* No fileid? Just exit */ 1407 - if (!(fattr->valid & NFS_ATTR_FATTR_FILEID)) 1408 - return 0; 1406 + if (!(fattr->valid & NFS_ATTR_FATTR_FILEID)) { 1407 + /* Only a mounted-on-fileid? Just exit */ 1408 + if (fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) 1409 + return 0; 1409 1410 /* Has the inode gone and changed behind our back? */ 1410 - if (nfsi->fileid != fattr->fileid) { 1411 + } else if (nfsi->fileid != fattr->fileid) { 1411 1412 /* Is this perhaps the mounted-on fileid? */ 1412 1413 if ((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) && 1413 1414 nfsi->fileid == fattr->mounted_on_fileid) ··· 1808 1807 nfs_display_fhandle_hash(NFS_FH(inode)), 1809 1808 atomic_read(&inode->i_count), fattr->valid); 1810 1809 1811 - /* No fileid? Just exit */ 1812 - if (!(fattr->valid & NFS_ATTR_FATTR_FILEID)) 1813 - return 0; 1810 + if (!(fattr->valid & NFS_ATTR_FATTR_FILEID)) { 1811 + /* Only a mounted-on-fileid? Just exit */ 1812 + if (fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) 1813 + return 0; 1814 1814 /* Has the inode gone and changed behind our back? */ 1815 - if (nfsi->fileid != fattr->fileid) { 1815 + } else if (nfsi->fileid != fattr->fileid) { 1816 1816 /* Is this perhaps the mounted-on fileid? */ 1817 1817 if ((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) && 1818 1818 nfsi->fileid == fattr->mounted_on_fileid)
+4 -4
include/linux/compiler.h
··· 24 24 long ______r; \ 25 25 static struct ftrace_likely_data \ 26 26 __aligned(4) \ 27 - __section("_ftrace_annotated_branch") \ 27 + __section(_ftrace_annotated_branch) \ 28 28 ______f = { \ 29 29 .data.func = __func__, \ 30 30 .data.file = __FILE__, \ ··· 60 60 #define __trace_if_value(cond) ({ \ 61 61 static struct ftrace_branch_data \ 62 62 __aligned(4) \ 63 - __section("_ftrace_branch") \ 63 + __section(_ftrace_branch) \ 64 64 __if_trace = { \ 65 65 .func = __func__, \ 66 66 .file = __FILE__, \ ··· 118 118 ".popsection\n\t" 119 119 120 120 /* Annotate a C jump table to allow objtool to follow the code flow */ 121 - #define __annotate_jump_table __section(".rodata..c_jump_table") 121 + #define __annotate_jump_table __section(.rodata..c_jump_table) 122 122 123 123 #else 124 124 #define annotate_reachable() ··· 298 298 * visible to the compiler. 299 299 */ 300 300 #define __ADDRESSABLE(sym) \ 301 - static void * __section(".discard.addressable") __used \ 301 + static void * __section(.discard.addressable) __used \ 302 302 __PASTE(__addressable_##sym, __LINE__) = (void *)&sym; 303 303 304 304 /**
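The __section() annotations lose their quotes because, on this kernel, the macro stringizes its argument itself; passing an already-quoted string would produce a doubly quoted section name. Assumed shape of the definition (it lives in compiler_attributes.h) and its expansion, with a hypothetical variable for illustration:

	#define __section(S) __attribute__((__section__(#S)))

	static int boot_flag __section(.data..boot);
	/* expands to:
	 * static int boot_flag __attribute__((__section__(".data..boot")));
	 */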
+1 -1
include/linux/input/elan-i2c-ids.h
··· 48 48 { "ELAN0618", 0 }, 49 49 { "ELAN0619", 0 }, 50 50 { "ELAN061A", 0 }, 51 - { "ELAN061B", 0 }, 51 + /* { "ELAN061B", 0 }, not working on the Lenovo Legion Y7000 */ 52 52 { "ELAN061C", 0 }, 53 53 { "ELAN061D", 0 }, 54 54 { "ELAN061E", 0 },
-3
include/linux/intel-iommu.h
··· 346 346 #define QI_PC_PASID_SEL (QI_PC_TYPE | QI_PC_GRAN(1)) 347 347 348 348 #define QI_EIOTLB_ADDR(addr) ((u64)(addr) & VTD_PAGE_MASK) 349 - #define QI_EIOTLB_GL(gl) (((u64)gl) << 7) 350 349 #define QI_EIOTLB_IH(ih) (((u64)ih) << 6) 351 350 #define QI_EIOTLB_AM(am) (((u64)am)) 352 351 #define QI_EIOTLB_PASID(pasid) (((u64)pasid) << 32) ··· 377 378 #define QI_RESP_INVALID 0x1 378 379 #define QI_RESP_FAILURE 0xf 379 380 380 - #define QI_GRAN_ALL_ALL 0 381 - #define QI_GRAN_NONG_ALL 1 382 381 #define QI_GRAN_NONG_PASID 2 383 382 #define QI_GRAN_PSI_PASID 3 384 383
+1
include/linux/phy_fixed.h
··· 11 11 }; 12 12 13 13 struct device_node; 14 + struct gpio_desc; 14 15 15 16 #if IS_ENABLED(CONFIG_FIXED_PHY) 16 17 extern int fixed_phy_change_carrier(struct net_device *dev, bool new_carrier);
+19
include/linux/syscalls.h
··· 1402 1402 return old; 1403 1403 } 1404 1404 1405 + /* for __ARCH_WANT_SYS_IPC */ 1406 + long ksys_semtimedop(int semid, struct sembuf __user *tsops, 1407 + unsigned int nsops, 1408 + const struct __kernel_timespec __user *timeout); 1409 + long ksys_semget(key_t key, int nsems, int semflg); 1410 + long ksys_old_semctl(int semid, int semnum, int cmd, unsigned long arg); 1411 + long ksys_msgget(key_t key, int msgflg); 1412 + long ksys_old_msgctl(int msqid, int cmd, struct msqid_ds __user *buf); 1413 + long ksys_msgrcv(int msqid, struct msgbuf __user *msgp, size_t msgsz, 1414 + long msgtyp, int msgflg); 1415 + long ksys_msgsnd(int msqid, struct msgbuf __user *msgp, size_t msgsz, 1416 + int msgflg); 1417 + long ksys_shmget(key_t key, size_t size, int shmflg); 1418 + long ksys_shmdt(char __user *shmaddr); 1419 + long ksys_old_shmctl(int shmid, int cmd, struct shmid_ds __user *buf); 1420 + long compat_ksys_semtimedop(int semid, struct sembuf __user *tsems, 1421 + unsigned int nsops, 1422 + const struct old_timespec32 __user *timeout); 1423 + 1405 1424 #endif
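These ksys_*() prototypes migrate here (out of ipc/util.h, see that hunk below) so architectures selecting __ARCH_WANT_SYS_IPC can declare the demultiplexed entry points without reaching into ipc-internal headers. For context, a hedged sketch of how the legacy multiplexed ipc(2) entry point fans out to them; ipc_demux and its argument handling are illustrative, not the verbatim kernel code:

	long ipc_demux(unsigned int call, int first, unsigned long second,
		       unsigned long third, void __user *ptr)
	{
		switch (call) {
		case SEMGET:
			return ksys_semget((key_t)first, second, third);
		case MSGGET:
			return ksys_msgget((key_t)first, second);
		case SHMGET:
			return ksys_shmget((key_t)first, second, third);
		case SHMDT:
			return ksys_shmdt((char __user *)ptr);
		default:
			return -ENOSYS;
		}
	}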
+2 -2
include/net/ip_fib.h
··· 513 513 struct netlink_callback *cb); 514 514 515 515 int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh, 516 - unsigned char *flags, bool skip_oif); 516 + u8 rt_family, unsigned char *flags, bool skip_oif); 517 517 int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh, 518 - int nh_weight); 518 + int nh_weight, u8 rt_family); 519 519 #endif /* _NET_FIB_H */
+3 -2
include/net/nexthop.h
··· 161 161 } 162 162 163 163 static inline 164 - int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh) 164 + int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh, 165 + u8 rt_family) 165 166 { 166 167 struct nh_group *nhg = rtnl_dereference(nh->nh_grp); 167 168 int i; ··· 173 172 struct fib_nh_common *nhc = &nhi->fib_nhc; 174 173 int weight = nhg->nh_entries[i].weight; 175 174 176 - if (fib_add_nexthop(skb, nhc, weight) < 0) 175 + if (fib_add_nexthop(skb, nhc, weight, rt_family) < 0) 177 176 return -EMSGSIZE; 178 177 } 179 178
-2
include/net/xfrm.h
··· 983 983 void xfrm_dst_ifdown(struct dst_entry *dst, struct net_device *dev); 984 984 985 985 struct xfrm_if_parms { 986 - char name[IFNAMSIZ]; /* name of XFRM device */ 987 986 int link; /* ifindex of underlying L2 interface */ 988 987 u32 if_id; /* interface identifyer */ 989 988 }; ··· 990 991 struct xfrm_if { 991 992 struct xfrm_if __rcu *next; /* next interface in list */ 992 993 struct net_device *dev; /* virtual device associated with interface */ 993 - struct net_device *phydev; /* physical device */ 994 994 struct net *net; /* netns for packet i/o */ 995 995 struct xfrm_if_parms p; /* interface parms */ 996 996
+1 -1
include/uapi/asm-generic/unistd.h
··· 569 569 __SC_COMP(__NR_semctl, sys_semctl, compat_sys_semctl) 570 570 #if defined(__ARCH_WANT_TIME32_SYSCALLS) || __BITS_PER_LONG != 32 571 571 #define __NR_semtimedop 192 572 - __SC_COMP(__NR_semtimedop, sys_semtimedop, sys_semtimedop_time32) 572 + __SC_3264(__NR_semtimedop, sys_semtimedop_time32, sys_semtimedop) 573 573 #endif 574 574 #define __NR_semop 193 575 575 __SYSCALL(__NR_semop, sys_semop)
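Switching semtimedop from __SC_COMP to __SC_3264 changes what the syscall number resolves to per ABI: 32-bit userspace (32-bit time_t) gets sys_semtimedop_time32 natively, 64-bit userspace gets sys_semtimedop, and the time32 variant is no longer wired in as a "compat" entry. Assumed shape of the two helper macros, for reference:

	#if __BITS_PER_LONG == 64 && !defined(__SYSCALL_COMPAT)
	#define __SC_3264(_nr, _32, _64) __SYSCALL(_nr, _64)
	#else
	#define __SC_3264(_nr, _32, _64) __SYSCALL(_nr, _32)
	#endif

	#ifdef __SYSCALL_COMPAT
	#define __SC_COMP(_nr, _sys, _comp) __SYSCALL(_nr, _comp)
	#else
	#define __SC_COMP(_nr, _sys, _comp) __SYSCALL(_nr, _sys)
	#endif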
+1
include/uapi/linux/isdn/capicmd.h
··· 16 16 #define CAPI_MSG_BASELEN 8 17 17 #define CAPI_DATA_B3_REQ_LEN (CAPI_MSG_BASELEN+4+4+2+2+2) 18 18 #define CAPI_DATA_B3_RESP_LEN (CAPI_MSG_BASELEN+4+2) 19 + #define CAPI_DISCONNECT_B3_RESP_LEN (CAPI_MSG_BASELEN+4) 19 20 20 21 /*----- CAPI commands -----*/ 21 22 #define CAPI_ALERT 0x01
+2 -23
ipc/util.h
··· 276 276 *cmd &= ~IPC_64; 277 277 return version; 278 278 } 279 - #endif 280 279 281 - /* for __ARCH_WANT_SYS_IPC */ 282 - long ksys_semtimedop(int semid, struct sembuf __user *tsops, 283 - unsigned int nsops, 284 - const struct __kernel_timespec __user *timeout); 285 - long ksys_semget(key_t key, int nsems, int semflg); 286 - long ksys_old_semctl(int semid, int semnum, int cmd, unsigned long arg); 287 - long ksys_msgget(key_t key, int msgflg); 288 - long ksys_old_msgctl(int msqid, int cmd, struct msqid_ds __user *buf); 289 - long ksys_msgrcv(int msqid, struct msgbuf __user *msgp, size_t msgsz, 290 - long msgtyp, int msgflg); 291 - long ksys_msgsnd(int msqid, struct msgbuf __user *msgp, size_t msgsz, 292 - int msgflg); 293 - long ksys_shmget(key_t key, size_t size, int shmflg); 294 - long ksys_shmdt(char __user *shmaddr); 295 - long ksys_old_shmctl(int shmid, int cmd, struct shmid_ds __user *buf); 296 - 297 - /* for CONFIG_ARCH_WANT_OLD_COMPAT_IPC */ 298 - long compat_ksys_semtimedop(int semid, struct sembuf __user *tsems, 299 - unsigned int nsops, 300 - const struct old_timespec32 __user *timeout); 301 - #ifdef CONFIG_COMPAT 302 280 long compat_ksys_old_semctl(int semid, int semnum, int cmd, int arg); 303 281 long compat_ksys_old_msgctl(int msqid, int cmd, void __user *uptr); 304 282 long compat_ksys_msgrcv(int msqid, compat_uptr_t msgp, compat_ssize_t msgsz, ··· 284 306 long compat_ksys_msgsnd(int msqid, compat_uptr_t msgp, 285 307 compat_ssize_t msgsz, int msgflg); 286 308 long compat_ksys_old_shmctl(int shmid, int cmd, void __user *uptr); 287 - #endif /* CONFIG_COMPAT */ 309 + 310 + #endif 288 311 289 312 #endif
+14 -9
kernel/bpf/verifier.c
··· 1772 1772 bitmap_from_u64(mask, stack_mask); 1773 1773 for_each_set_bit(i, mask, 64) { 1774 1774 if (i >= func->allocated_stack / BPF_REG_SIZE) { 1775 - /* This can happen if backtracking 1776 - * is propagating stack precision where 1777 - * caller has larger stack frame 1778 - * than callee, but backtrack_insn() should 1779 - * have returned -ENOTSUPP. 1775 + /* the sequence of instructions: 1776 + * 2: (bf) r3 = r10 1777 + * 3: (7b) *(u64 *)(r3 -8) = r0 1778 + * 4: (79) r4 = *(u64 *)(r10 -8) 1779 + * doesn't contain jmps. It's backtracked 1780 + * as a single block. 1781 + * During backtracking insn 3 is not recognized as 1782 + * stack access, so at the end of backtracking 1783 + * stack slot fp-8 is still marked in stack_mask. 1784 + * However the parent state may not have accessed 1785 + * fp-8 and it's "unallocated" stack space. 1786 + * In such case fallback to conservative. 1780 1787 */ 1781 - verbose(env, "BUG spi %d stack_size %d\n", 1782 - i, func->allocated_stack); 1783 - WARN_ONCE(1, "verifier backtracking bug"); 1784 - return -EFAULT; 1788 + mark_all_scalars_precise(env, st); 1789 + return 0; 1785 1790 } 1786 1791 1787 1792 if (func->stack[i].slot_type[0] != STACK_SPILL) {
+9 -1
kernel/cgroup/cgroup.c
··· 5255 5255 * if the parent has to be frozen, the child has too. 5256 5256 */ 5257 5257 cgrp->freezer.e_freeze = parent->freezer.e_freeze; 5258 - if (cgrp->freezer.e_freeze) 5258 + if (cgrp->freezer.e_freeze) { 5259 + /* 5260 + * Set the CGRP_FREEZE flag, so when a process will be 5261 + * attached to the child cgroup, it will become frozen. 5262 + * At this point the new cgroup is unpopulated, so we can 5263 + * consider it frozen immediately. 5264 + */ 5265 + set_bit(CGRP_FREEZE, &cgrp->flags); 5259 5266 set_bit(CGRP_FROZEN, &cgrp->flags); 5267 + } 5260 5268 5261 5269 spin_lock_irq(&css_set_lock); 5262 5270 for (tcgrp = cgrp; tcgrp; tcgrp = cgroup_parent(tcgrp)) {
+2 -2
kernel/events/hw_breakpoint.c
··· 413 413 414 414 int register_perf_hw_breakpoint(struct perf_event *bp) 415 415 { 416 - struct arch_hw_breakpoint hw; 416 + struct arch_hw_breakpoint hw = { }; 417 417 int err; 418 418 419 419 err = reserve_bp_slot(bp); ··· 461 461 modify_user_hw_breakpoint_check(struct perf_event *bp, struct perf_event_attr *attr, 462 462 bool check) 463 463 { 464 - struct arch_hw_breakpoint hw; 464 + struct arch_hw_breakpoint hw = { }; 465 465 int err; 466 466 467 467 err = hw_breakpoint_parse(bp, attr, &hw);
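The "= { }" change swaps an uninitialized stack struct for an empty-initialized one: a braced initializer, even the empty GNU-style one the kernel relies on, zero-initializes every member it does not name, presumably so that any field the parse path leaves untouched reads as zero instead of stack garbage. Tiny illustration of the difference:

	struct arch_hw_breakpoint hw = { };	/* every field zeroed */
	struct arch_hw_breakpoint raw;		/* contents indeterminate */

	/* If a parse step can return early having written only some
	 * fields, later consumers of 'hw' see zeros for the rest,
	 * while 'raw' would expose whatever was on the stack.
	 */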
+10
kernel/fork.c
··· 2338 2338 * 2339 2339 * It copies the process, and if successful kick-starts 2340 2340 * it and waits for it to finish using the VM if required. 2341 + * 2342 + * args->exit_signal is expected to be checked for sanity by the caller. 2341 2343 */ 2342 2344 long _do_fork(struct kernel_clone_args *args) 2343 2345 { ··· 2563 2561 2564 2562 if (copy_from_user(&args, uargs, size)) 2565 2563 return -EFAULT; 2564 + 2565 + /* 2566 + * Verify that higher 32bits of exit_signal are unset and that 2567 + * it is a valid signal 2568 + */ 2569 + if (unlikely((args.exit_signal & ~((u64)CSIGNAL)) || 2570 + !valid_signal(args.exit_signal))) 2571 + return -EINVAL; 2566 2572 2567 2573 *kargs = (struct kernel_clone_args){ 2568 2574 .flags = args.flags,
+2
kernel/irq/resend.c
··· 36 36 irq = find_first_bit(irqs_resend, nr_irqs); 37 37 clear_bit(irq, irqs_resend); 38 38 desc = irq_to_desc(irq); 39 + if (!desc) 40 + continue; 39 41 local_irq_disable(); 40 42 desc->handle_irq(desc); 41 43 local_irq_enable();
+39 -39
kernel/sched/core.c
··· 5105 5105 return retval; 5106 5106 } 5107 5107 5108 - static int sched_read_attr(struct sched_attr __user *uattr, 5109 - struct sched_attr *attr, 5110 - unsigned int usize) 5108 + /* 5109 + * Copy the kernel size attribute structure (which might be larger 5110 + * than what user-space knows about) to user-space. 5111 + * 5112 + * Note that all cases are valid: user-space buffer can be larger or 5113 + * smaller than the kernel-space buffer. The usual case is that both 5114 + * have the same size. 5115 + */ 5116 + static int 5117 + sched_attr_copy_to_user(struct sched_attr __user *uattr, 5118 + struct sched_attr *kattr, 5119 + unsigned int usize) 5111 5120 { 5112 - int ret; 5121 + unsigned int ksize = sizeof(*kattr); 5113 5122 5114 5123 if (!access_ok(uattr, usize)) 5115 5124 return -EFAULT; 5116 5125 5117 5126 /* 5118 - * If we're handed a smaller struct than we know of, 5119 - * ensure all the unknown bits are 0 - i.e. old 5120 - * user-space does not get uncomplete information. 5127 + * sched_getattr() ABI forwards and backwards compatibility: 5128 + * 5129 + * If usize == ksize then we just copy everything to user-space and all is good. 5130 + * 5131 + * If usize < ksize then we only copy as much as user-space has space for, 5132 + * this keeps ABI compatibility as well. We skip the rest. 5133 + * 5134 + * If usize > ksize then user-space is using a newer version of the ABI, 5135 + * which part the kernel doesn't know about. Just ignore it - tooling can 5136 + * detect the kernel's knowledge of attributes from the attr->size value 5137 + * which is set to ksize in this case. 5121 5138 */ 5122 - if (usize < sizeof(*attr)) { 5123 - unsigned char *addr; 5124 - unsigned char *end; 5139 + kattr->size = min(usize, ksize); 5125 5140 5126 - addr = (void *)attr + usize; 5127 - end = (void *)attr + sizeof(*attr); 5128 - 5129 - for (; addr < end; addr++) { 5130 - if (*addr) 5131 - return -EFBIG; 5132 - } 5133 - 5134 - attr->size = usize; 5135 - } 5136 - 5137 - ret = copy_to_user(uattr, attr, attr->size); 5138 - if (ret) 5141 + if (copy_to_user(uattr, kattr, kattr->size)) 5139 5142 return -EFAULT; 5140 5143 5141 5144 return 0; ··· 5148 5145 * sys_sched_getattr - similar to sched_getparam, but with sched_attr 5149 5146 * @pid: the pid in question. 5150 5147 * @uattr: structure containing the extended parameters. 5151 - * @size: sizeof(attr) for fwd/bwd comp. 5148 + * @usize: sizeof(attr) that user-space knows about, for forwards and backwards compatibility. 5152 5149 * @flags: for future extension. 
5153 5150 */ 5154 5151 SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr, 5155 - unsigned int, size, unsigned int, flags) 5152 + unsigned int, usize, unsigned int, flags) 5156 5153 { 5157 - struct sched_attr attr = { 5158 - .size = sizeof(struct sched_attr), 5159 - }; 5154 + struct sched_attr kattr = { }; 5160 5155 struct task_struct *p; 5161 5156 int retval; 5162 5157 5163 - if (!uattr || pid < 0 || size > PAGE_SIZE || 5164 - size < SCHED_ATTR_SIZE_VER0 || flags) 5158 + if (!uattr || pid < 0 || usize > PAGE_SIZE || 5159 + usize < SCHED_ATTR_SIZE_VER0 || flags) 5165 5160 return -EINVAL; 5166 5161 5167 5162 rcu_read_lock(); ··· 5172 5171 if (retval) 5173 5172 goto out_unlock; 5174 5173 5175 - attr.sched_policy = p->policy; 5174 + kattr.sched_policy = p->policy; 5176 5175 if (p->sched_reset_on_fork) 5177 - attr.sched_flags |= SCHED_FLAG_RESET_ON_FORK; 5176 + kattr.sched_flags |= SCHED_FLAG_RESET_ON_FORK; 5178 5177 if (task_has_dl_policy(p)) 5179 - __getparam_dl(p, &attr); 5178 + __getparam_dl(p, &kattr); 5180 5179 else if (task_has_rt_policy(p)) 5181 - attr.sched_priority = p->rt_priority; 5180 + kattr.sched_priority = p->rt_priority; 5182 5181 else 5183 - attr.sched_nice = task_nice(p); 5182 + kattr.sched_nice = task_nice(p); 5184 5183 5185 5184 #ifdef CONFIG_UCLAMP_TASK 5186 - attr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value; 5187 - attr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value; 5185 + kattr.sched_util_min = p->uclamp_req[UCLAMP_MIN].value; 5186 + kattr.sched_util_max = p->uclamp_req[UCLAMP_MAX].value; 5188 5187 #endif 5189 5188 5190 5189 rcu_read_unlock(); 5191 5190 5192 - retval = sched_read_attr(uattr, &attr, size); 5193 - return retval; 5191 + return sched_attr_copy_to_user(uattr, &kattr, usize); 5194 5192 5195 5193 out_unlock: 5196 5194 rcu_read_unlock();
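sched_attr_copy_to_user() above is the canonical versioned-ABI copy-out: write back min(usize, ksize) bytes and record that size in the struct, so userspace can see where the kernel's knowledge ends. The same pattern reduced to its core (hypothetical helper name; error handling as in the patch):

	static int versioned_copy_out(struct sched_attr __user *uattr,
				      struct sched_attr *kattr,
				      unsigned int usize)
	{
		/* old userspace (usize < ksize): truncated but valid
		 * prefix; new userspace (usize > ksize): the tail beyond
		 * ksize is simply not written, and kattr->size tells it
		 * how much is meaningful
		 */
		kattr->size = min(usize, (unsigned int)sizeof(*kattr));

		if (copy_to_user(uattr, kattr, kattr->size))
			return -EFAULT;
		return 0;
	}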
+5
kernel/sched/fair.c
··· 4470 4470 if (likely(cfs_rq->runtime_remaining > 0)) 4471 4471 return; 4472 4472 4473 + if (cfs_rq->throttled) 4474 + return; 4473 4475 /* 4474 4476 * if we're unable to extend our runtime we resched so that the active 4475 4477 * hierarchy can be throttled ··· 4674 4672 rq_lock_irqsave(rq, &rf); 4675 4673 if (!cfs_rq_throttled(cfs_rq)) 4676 4674 goto next; 4675 + 4676 + /* By the above check, this should never be true */ 4677 + SCHED_WARN_ON(cfs_rq->runtime_remaining > 0); 4677 4678 4678 4679 runtime = -cfs_rq->runtime_remaining + 1; 4679 4680 if (runtime > remaining)
+3 -3
lib/Kconfig
··· 631 631 config PARMAN 632 632 tristate "parman" if COMPILE_TEST 633 633 634 + config OBJAGG 635 + tristate "objagg" if COMPILE_TEST 636 + 634 637 config STRING_SELFTEST 635 638 tristate "Test string functions" 636 639 ··· 656 653 657 654 config GENERIC_LIB_UCMPDI2 658 655 bool 659 - 660 - config OBJAGG 661 - tristate "objagg" if COMPILE_TEST
+2 -1
mm/balloon_compaction.c
··· 124 124 struct page *balloon_page_alloc(void) 125 125 { 126 126 struct page *page = alloc_page(balloon_mapping_gfp_mask() | 127 - __GFP_NOMEMALLOC | __GFP_NORETRY); 127 + __GFP_NOMEMALLOC | __GFP_NORETRY | 128 + __GFP_NOWARN); 128 129 return page; 129 130 } 130 131 EXPORT_SYMBOL_GPL(balloon_page_alloc);
-5
net/bluetooth/hci_event.c
··· 5660 5660 return send_conn_param_neg_reply(hdev, handle, 5661 5661 HCI_ERROR_UNKNOWN_CONN_ID); 5662 5662 5663 - if (min < hcon->le_conn_min_interval || 5664 - max > hcon->le_conn_max_interval) 5665 - return send_conn_param_neg_reply(hdev, handle, 5666 - HCI_ERROR_INVALID_LL_PARAMS); 5667 - 5668 5663 if (hci_check_conn_params(min, max, latency, timeout)) 5669 5664 return send_conn_param_neg_reply(hdev, handle, 5670 5665 HCI_ERROR_INVALID_LL_PARAMS);
+1 -8
net/bluetooth/l2cap_core.c
··· 5305 5305 5306 5306 memset(&rsp, 0, sizeof(rsp)); 5307 5307 5308 - if (min < hcon->le_conn_min_interval || 5309 - max > hcon->le_conn_max_interval) { 5310 - BT_DBG("requested connection interval exceeds current bounds."); 5311 - err = -EINVAL; 5312 - } else { 5313 - err = hci_check_conn_params(min, max, latency, to_multiplier); 5314 - } 5315 - 5308 + err = hci_check_conn_params(min, max, latency, to_multiplier); 5316 5309 if (err) 5317 5310 rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); 5318 5311 else
+1 -1
net/bridge/br_mdb.c
··· 466 466 struct nlmsghdr *nlh; 467 467 struct nlattr *nest; 468 468 469 - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), NLM_F_MULTI); 469 + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0); 470 470 if (!nlh) 471 471 return -EMSGSIZE; 472 472
+4
net/bridge/br_netfilter_hooks.c
··· 496 496 if (!brnet->call_ip6tables && 497 497 !br_opt_get(br, BROPT_NF_CALL_IP6TABLES)) 498 498 return NF_ACCEPT; 499 + if (!ipv6_mod_enabled()) { 500 + pr_warn_once("Module ipv6 is disabled, so call_ip6tables is not supported."); 501 + return NF_DROP; 502 + } 499 503 500 504 nf_bridge_pull_encap_header_rcsum(skb); 501 505 return br_nf_pre_routing_ipv6(priv, skb, state);
+2
net/core/dev.c
··· 8807 8807 ret = notifier_to_errno(ret); 8808 8808 if (ret) { 8809 8809 rollback_registered(dev); 8810 + rcu_barrier(); 8811 + 8810 8812 dev->reg_state = NETREG_UNREGISTERED; 8811 8813 } 8812 8814 /*
+19
net/core/skbuff.c
··· 3670 3670 int pos; 3671 3671 int dummy; 3672 3672 3673 + if (list_skb && !list_skb->head_frag && skb_headlen(list_skb) && 3674 + (skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY)) { 3675 + /* gso_size is untrusted, and we have a frag_list with a linear 3676 + * non-head_frag head. 3677 + * 3678 + * (we assume checking the first list_skb member suffices; 3679 + * i.e. if any of the list_skb members has a non-head_frag 3680 + * head, then the first one does too). 3681 + * 3682 + * If head_skb's headlen does not fit the requested gso_size, it 3683 + * means that the frag_list members do NOT terminate on exact 3684 + * gso_size boundaries. Hence we cannot perform skb_frag_t page 3685 + * sharing. Therefore we must fall back to copying the frag_list 3686 + * skbs; we do so by disabling SG. 3687 + */ 3688 + if (mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) 3689 + features &= ~NETIF_F_SG; 3690 + } 3691 + 3673 3692 __skb_push(head_skb, doffset); 3674 3693 proto = skb_network_protocol(head_skb, &dummy); 3675 3694 if (unlikely(!proto))
+3
net/core/sock_map.c
··· 656 656 struct sock *sk, u64 flags) 657 657 { 658 658 struct bpf_htab *htab = container_of(map, struct bpf_htab, map); 659 + struct inet_connection_sock *icsk = inet_csk(sk); 659 660 u32 key_size = map->key_size, hash; 660 661 struct bpf_htab_elem *elem, *elem_new; 661 662 struct bpf_htab_bucket *bucket; ··· 666 665 667 666 WARN_ON_ONCE(!rcu_read_lock_held()); 668 667 if (unlikely(flags > BPF_EXIST)) 668 + return -EINVAL; 669 + if (unlikely(icsk->icsk_ulp_data)) 669 670 return -EINVAL; 670 671 671 672 link = sk_psock_init_link();
+8 -7
net/ipv4/fib_semantics.c
··· 1582 1582 } 1583 1583 1584 1584 int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nhc, 1585 - unsigned char *flags, bool skip_oif) 1585 + u8 rt_family, unsigned char *flags, bool skip_oif) 1586 1586 { 1587 1587 if (nhc->nhc_flags & RTNH_F_DEAD) 1588 1588 *flags |= RTNH_F_DEAD; ··· 1613 1613 /* if gateway family does not match nexthop family 1614 1614 * gateway is encoded as RTA_VIA 1615 1615 */ 1616 - if (nhc->nhc_gw_family != nhc->nhc_family) { 1616 + if (rt_family != nhc->nhc_gw_family) { 1617 1617 int alen = sizeof(struct in6_addr); 1618 1618 struct nlattr *nla; 1619 1619 struct rtvia *via; ··· 1654 1654 1655 1655 #if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6) 1656 1656 int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc, 1657 - int nh_weight) 1657 + int nh_weight, u8 rt_family) 1658 1658 { 1659 1659 const struct net_device *dev = nhc->nhc_dev; 1660 1660 struct rtnexthop *rtnh; ··· 1667 1667 rtnh->rtnh_hops = nh_weight - 1; 1668 1668 rtnh->rtnh_ifindex = dev ? dev->ifindex : 0; 1669 1669 1670 - if (fib_nexthop_info(skb, nhc, &flags, true) < 0) 1670 + if (fib_nexthop_info(skb, nhc, rt_family, &flags, true) < 0) 1671 1671 goto nla_put_failure; 1672 1672 1673 1673 rtnh->rtnh_flags = flags; ··· 1693 1693 goto nla_put_failure; 1694 1694 1695 1695 if (unlikely(fi->nh)) { 1696 - if (nexthop_mpath_fill_node(skb, fi->nh) < 0) 1696 + if (nexthop_mpath_fill_node(skb, fi->nh, AF_INET) < 0) 1697 1697 goto nla_put_failure; 1698 1698 goto mp_end; 1699 1699 } 1700 1700 1701 1701 for_nexthops(fi) { 1702 - if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight) < 0) 1702 + if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight, 1703 + AF_INET) < 0) 1703 1704 goto nla_put_failure; 1704 1705 #ifdef CONFIG_IP_ROUTE_CLASSID 1705 1706 if (nh->nh_tclassid && ··· 1776 1775 const struct fib_nh_common *nhc = fib_info_nhc(fi, 0); 1777 1776 unsigned char flags = 0; 1778 1777 1779 - if (fib_nexthop_info(skb, nhc, &flags, false) < 0) 1778 + if (fib_nexthop_info(skb, nhc, AF_INET, &flags, false) < 0) 1780 1779 goto nla_put_failure; 1781 1780 1782 1781 rtm->rtm_flags = flags;
+1 -1
net/ipv4/tcp_input.c
··· 266 266 267 267 static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp) 268 268 { 269 - tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR; 269 + tp->ecn_flags &= ~TCP_ECN_QUEUE_CWR; 270 270 } 271 271 272 272 static void __tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+1 -1
net/ipv6/ping.c
··· 223 223 return 0; 224 224 } 225 225 226 - static void __net_init ping_v6_proc_exit_net(struct net *net) 226 + static void __net_exit ping_v6_proc_exit_net(struct net *net) 227 227 { 228 228 remove_proc_entry("icmp6", net->proc_net); 229 229 }
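The ping.c change above is a section-annotation fix, mirrored later in this series by the sctp_ctrlsock_exit() hunk: an exit handler tagged __net_init can be discarded after boot while still reachable from the pernet exit path. A minimal skeleton of the correct pairing, with hypothetical names:

#include <net/net_namespace.h>

/* setup path: may be discarded after boot on !CONFIG_NET_NS builds,
 * hence __net_init
 */
static int __net_init example_net_init(struct net *net)
{
	return 0;
}

/* teardown path: must stay callable for as long as a namespace (or
 * the module) can exit, hence __net_exit
 */
static void __net_exit example_net_exit(struct net *net)
{
}

static struct pernet_operations example_net_ops = {
	.init = example_net_init,
	.exit = example_net_exit,
};
/* registered via register_pernet_subsys(&example_net_ops) */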
+13 -8
net/ipv6/route.c
··· 4386 4386 struct fib6_config cfg = { 4387 4387 .fc_table = l3mdev_fib_table(idev->dev) ? : RT6_TABLE_LOCAL, 4388 4388 .fc_ifindex = idev->dev->ifindex, 4389 - .fc_flags = RTF_UP | RTF_ADDRCONF | RTF_NONEXTHOP, 4389 + .fc_flags = RTF_UP | RTF_NONEXTHOP, 4390 4390 .fc_dst = *addr, 4391 4391 .fc_dst_len = 128, 4392 4392 .fc_protocol = RTPROT_KERNEL, 4393 4393 .fc_nlinfo.nl_net = net, 4394 4394 .fc_ignore_dev_down = true, 4395 4395 }; 4396 + struct fib6_info *f6i; 4396 4397 4397 4398 if (anycast) { 4398 4399 cfg.fc_type = RTN_ANYCAST; ··· 4403 4402 cfg.fc_flags |= RTF_LOCAL; 4404 4403 } 4405 4404 4406 - return ip6_route_info_create(&cfg, gfp_flags, NULL); 4405 + f6i = ip6_route_info_create(&cfg, gfp_flags, NULL); 4406 + if (!IS_ERR(f6i)) 4407 + f6i->dst_nocount = true; 4408 + return f6i; 4407 4409 } 4408 4410 4409 4411 /* remove deleted ip from prefsrc entries */ ··· 5327 5323 if (nexthop_is_multipath(nh)) { 5328 5324 struct nlattr *mp; 5329 5325 5330 - mp = nla_nest_start(skb, RTA_MULTIPATH); 5326 + mp = nla_nest_start_noflag(skb, RTA_MULTIPATH); 5331 5327 if (!mp) 5332 5328 goto nla_put_failure; 5333 5329 5334 - if (nexthop_mpath_fill_node(skb, nh)) 5330 + if (nexthop_mpath_fill_node(skb, nh, AF_INET6)) 5335 5331 goto nla_put_failure; 5336 5332 5337 5333 nla_nest_end(skb, mp); ··· 5339 5335 struct fib6_nh *fib6_nh; 5340 5336 5341 5337 fib6_nh = nexthop_fib6_nh(nh); 5342 - if (fib_nexthop_info(skb, &fib6_nh->nh_common, 5338 + if (fib_nexthop_info(skb, &fib6_nh->nh_common, AF_INET6, 5343 5339 flags, false) < 0) 5344 5340 goto nla_put_failure; 5345 5341 } ··· 5468 5464 goto nla_put_failure; 5469 5465 5470 5466 if (fib_add_nexthop(skb, &rt->fib6_nh->nh_common, 5471 - rt->fib6_nh->fib_nh_weight) < 0) 5467 + rt->fib6_nh->fib_nh_weight, AF_INET6) < 0) 5472 5468 goto nla_put_failure; 5473 5469 5474 5470 list_for_each_entry_safe(sibling, next_sibling, 5475 5471 &rt->fib6_siblings, fib6_siblings) { 5476 5472 if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common, 5477 - sibling->fib6_nh->fib_nh_weight) < 0) 5473 + sibling->fib6_nh->fib_nh_weight, 5474 + AF_INET6) < 0) 5478 5475 goto nla_put_failure; 5479 5476 } 5480 5477 ··· 5492 5487 5493 5488 rtm->rtm_flags |= nh_flags; 5494 5489 } else { 5495 - if (fib_nexthop_info(skb, &rt->fib6_nh->nh_common, 5490 + if (fib_nexthop_info(skb, &rt->fib6_nh->nh_common, AF_INET6, 5496 5491 &nh_flags, false) < 0) 5497 5492 goto nla_put_failure; 5498 5493
+4 -10
net/mac80211/cfg.c
··· 1532 1532 struct sta_info *sta; 1533 1533 struct ieee80211_sub_if_data *sdata; 1534 1534 int err; 1535 - int layer2_update; 1536 1535 1537 1536 if (params->vlan) { 1538 1537 sdata = IEEE80211_DEV_TO_SUB_IF(params->vlan); ··· 1575 1576 test_sta_flag(sta, WLAN_STA_ASSOC)) 1576 1577 rate_control_rate_init(sta); 1577 1578 1578 - layer2_update = sdata->vif.type == NL80211_IFTYPE_AP_VLAN || 1579 - sdata->vif.type == NL80211_IFTYPE_AP; 1580 - 1581 1579 err = sta_info_insert_rcu(sta); 1582 1580 if (err) { 1583 1581 rcu_read_unlock(); 1584 1582 return err; 1585 1583 } 1586 - 1587 - if (layer2_update) 1588 - cfg80211_send_layer2_update(sta->sdata->dev, sta->sta.addr); 1589 1584 1590 1585 rcu_read_unlock(); 1591 1586 ··· 1678 1685 sta->sdata = vlansdata; 1679 1686 ieee80211_check_fast_xmit(sta); 1680 1687 1681 - if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 1688 + if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) { 1682 1689 ieee80211_vif_inc_num_mcast(sta->sdata); 1683 - 1684 - cfg80211_send_layer2_update(sta->sdata->dev, sta->sta.addr); 1690 + cfg80211_send_layer2_update(sta->sdata->dev, 1691 + sta->sta.addr); 1692 + } 1685 1693 } 1686 1694 1687 1695 err = sta_apply_parameters(local, sta, params);
+4
net/mac80211/sta_info.c
··· 1979 1979 ieee80211_check_fast_xmit(sta); 1980 1980 ieee80211_check_fast_rx(sta); 1981 1981 } 1982 + if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN || 1983 + sta->sdata->vif.type == NL80211_IFTYPE_AP) 1984 + cfg80211_send_layer2_update(sta->sdata->dev, 1985 + sta->sta.addr); 1982 1986 break; 1983 1987 default: 1984 1988 break;
+5 -2
net/netfilter/nf_conntrack_netlink.c
··· 553 553 goto nla_put_failure; 554 554 555 555 if (ctnetlink_dump_status(skb, ct) < 0 || 556 - ctnetlink_dump_timeout(skb, ct) < 0 || 557 556 ctnetlink_dump_acct(skb, ct, type) < 0 || 558 557 ctnetlink_dump_timestamp(skb, ct) < 0 || 559 - ctnetlink_dump_protoinfo(skb, ct) < 0 || 560 558 ctnetlink_dump_helpinfo(skb, ct) < 0 || 561 559 ctnetlink_dump_mark(skb, ct) < 0 || 562 560 ctnetlink_dump_secctx(skb, ct) < 0 || ··· 564 566 ctnetlink_dump_master(skb, ct) < 0 || 565 567 ctnetlink_dump_ct_seq_adj(skb, ct) < 0 || 566 568 ctnetlink_dump_ct_synproxy(skb, ct) < 0) 569 + goto nla_put_failure; 570 + 571 + if (!test_bit(IPS_OFFLOAD_BIT, &ct->status) && 572 + (ctnetlink_dump_timeout(skb, ct) < 0 || 573 + ctnetlink_dump_protoinfo(skb, ct) < 0)) 567 574 goto nla_put_failure; 568 575 569 576 nlmsg_end(skb, nlh);
+1 -1
net/netfilter/nf_flow_table_core.c
··· 218 218 return err; 219 219 } 220 220 221 - flow->timeout = (u32)jiffies; 221 + flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 222 222 return 0; 223 223 } 224 224 EXPORT_SYMBOL_GPL(flow_offload_add);
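The one-line change above matters because flow->timeout is a deadline, not a creation timestamp: storing bare jiffies makes every new flow look already expired on the first garbage-collection pass. A sketch of the deadline pattern with hypothetical names; the signed-difference test is the usual way a 32-bit jiffies deadline survives wrap-around:

#define EXAMPLE_TIMEOUT (30 * HZ)	/* assumed lifetime, 30 seconds */

struct example_flow {
	u32 timeout;	/* deadline in jiffies, not creation time */
};

static void example_flow_refresh(struct example_flow *f)
{
	f->timeout = (u32)jiffies + EXAMPLE_TIMEOUT;
}

static bool example_flow_expired(const struct example_flow *f)
{
	/* signed difference copes with u32 jiffies wrap-around */
	return (s32)(f->timeout - (u32)jiffies) <= 0;
}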
+3
net/netfilter/nft_fib_netdev.c
··· 14 14 #include <linux/netfilter/nf_tables.h> 15 15 #include <net/netfilter/nf_tables_core.h> 16 16 #include <net/netfilter/nf_tables.h> 17 + #include <net/ipv6.h> 17 18 18 19 #include <net/netfilter/nft_fib.h> 19 20 ··· 35 34 } 36 35 break; 37 36 case ETH_P_IPV6: 37 + if (!ipv6_mod_enabled()) 38 + break; 38 39 switch (priv->result) { 39 40 case NFT_FIB_RESULT_OIF: 40 41 case NFT_FIB_RESULT_OIFNAME:
+3 -3
net/netfilter/nft_socket.c
··· 47 47 return; 48 48 } 49 49 50 - /* So that subsequent socket matching not to require other lookups. */ 51 - skb->sk = sk; 52 - 53 50 switch(priv->key) { 54 51 case NFT_SOCKET_TRANSPARENT: 55 52 nft_reg_store8(dest, inet_sk_transparent(sk)); ··· 63 66 WARN_ON(1); 64 67 regs->verdict.code = NFT_BREAK; 65 68 } 69 + 70 + if (sk != skb->sk) 71 + sock_gen_put(sk); 66 72 } 67 73 68 74 static const struct nla_policy nft_socket_policy[NFTA_SOCKET_MAX + 1] = {
+4 -1
net/qrtr/tun.c
··· 84 84 if (!kbuf) 85 85 return -ENOMEM; 86 86 87 - if (!copy_from_iter_full(kbuf, len, from)) 87 + if (!copy_from_iter_full(kbuf, len, from)) { 88 + kfree(kbuf); 88 89 return -EFAULT; 90 + } 89 91 90 92 ret = qrtr_endpoint_post(&tun->ep, kbuf, len); 91 93 94 + kfree(kbuf); 92 95 return ret < 0 ? ret : len; 93 96 } 94 97
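The fix above frees kbuf on both the error path and after the post; the same leak is often closed with a single exit label instead of duplicated kfree() calls. A hypothetical sketch of that shape (example_ep and example_post() are stand-ins, and example_post() is assumed not to take ownership of the buffer):

static ssize_t example_write(struct example_ep *ep, struct iov_iter *from,
			     size_t len)
{
	void *kbuf = kmalloc(len, GFP_KERNEL);
	ssize_t ret;

	if (!kbuf)
		return -ENOMEM;

	if (!copy_from_iter_full(kbuf, len, from)) {
		ret = -EFAULT;
		goto out;
	}

	ret = example_post(ep, kbuf, len);
	if (ret >= 0)
		ret = len;
out:
	kfree(kbuf);	/* single owner, freed exactly once */
	return ret;
}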
+18 -22
net/rds/bind.c
··· 1 1 /* 2 - * Copyright (c) 2006, 2018 Oracle and/or its affiliates. All rights reserved. 2 + * Copyright (c) 2006, 2019 Oracle and/or its affiliates. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 239 239 goto out; 240 240 } 241 241 242 - sock_set_flag(sk, SOCK_RCU_FREE); 243 - ret = rds_add_bound(rs, binding_addr, &port, scope_id); 244 - if (ret) 245 - goto out; 246 - 247 - if (rs->rs_transport) { /* previously bound */ 242 + /* The transport can be set using the SO_RDS_TRANSPORT option before the 243 + * socket is bound. 244 + */ 245 + if (rs->rs_transport) { 248 246 trans = rs->rs_transport; 249 247 if (trans->laddr_check(sock_net(sock->sk), 250 248 binding_addr, scope_id) != 0) { 251 249 ret = -ENOPROTOOPT; 252 - rds_remove_bound(rs); 253 - } else { 254 - ret = 0; 250 + goto out; 255 251 } 256 - goto out; 257 - } 258 - trans = rds_trans_get_preferred(sock_net(sock->sk), binding_addr, 259 - scope_id); 260 - if (!trans) { 261 - ret = -EADDRNOTAVAIL; 262 - rds_remove_bound(rs); 263 - pr_info_ratelimited("RDS: %s could not find a transport for %pI6c, load rds_tcp or rds_rdma?\n", 264 - __func__, binding_addr); 265 - goto out; 252 + } else { 253 + trans = rds_trans_get_preferred(sock_net(sock->sk), 254 + binding_addr, scope_id); 255 + if (!trans) { 256 + ret = -EADDRNOTAVAIL; 257 + pr_info_ratelimited("RDS: %s could not find a transport for %pI6c, load rds_tcp or rds_rdma?\n", 258 + __func__, binding_addr); 259 + goto out; 260 + } 261 + rs->rs_transport = trans; 266 262 } 267 263 268 - rs->rs_transport = trans; 269 - ret = 0; 264 + sock_set_flag(sk, SOCK_RCU_FREE); 265 + ret = rds_add_bound(rs, binding_addr, &port, scope_id); 270 266 271 267 out: 272 268 release_sock(sk);
+1 -1
net/rxrpc/input.c
··· 1262 1262 1263 1263 if (nskb != skb) { 1264 1264 rxrpc_eaten_skb(skb, rxrpc_skb_received); 1265 - rxrpc_new_skb(skb, rxrpc_skb_unshared); 1266 1265 skb = nskb; 1266 + rxrpc_new_skb(skb, rxrpc_skb_unshared); 1267 1267 sp = rxrpc_skb(skb); 1268 1268 } 1269 1269 }
+2
net/sched/sch_api.c
··· 1920 1920 cl = cops->find(q, portid); 1921 1921 if (!cl) 1922 1922 return; 1923 + if (!cops->tcf_block) 1924 + return; 1923 1925 block = cops->tcf_block(q, cl, NULL); 1924 1926 if (!block) 1925 1927 return;
+7 -2
net/sched/sch_generic.c
··· 46 46 * - updates to tree and tree walking are only done under the rtnl mutex. 47 47 */ 48 48 49 + #define SKB_XOFF_MAGIC ((struct sk_buff *)1UL) 50 + 49 51 static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q) 50 52 { 51 53 const struct netdev_queue *txq = q->dev_queue; ··· 73 71 q->q.qlen--; 74 72 } 75 73 } else { 76 - skb = NULL; 74 + skb = SKB_XOFF_MAGIC; 77 75 } 78 76 } 79 77 ··· 255 253 return skb; 256 254 257 255 skb = qdisc_dequeue_skb_bad_txq(q); 258 - if (unlikely(skb)) 256 + if (unlikely(skb)) { 257 + if (skb == SKB_XOFF_MAGIC) 258 + return NULL; 259 259 goto bulk; 260 + } 260 261 skb = q->dequeue(q); 261 262 if (skb) { 262 263 bulk:
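SKB_XOFF_MAGIC above encodes a third state in the return value: NULL keeps meaning "nothing queued", while the sentinel means "stop, the queue is frozen", all without widening the function signature. A sketch of the sentinel-pointer pattern with hypothetical names:

struct item;

#define ITEM_XOFF ((struct item *)1UL)	/* never a valid kernel pointer */

static struct item *example_dequeue_bad_q(struct example_q *q)
{
	if (example_q_frozen(q))
		return ITEM_XOFF;	/* blocked: caller must not continue */

	return example_pop(q);		/* NULL when genuinely empty */
}

static struct item *example_dequeue(struct example_q *q)
{
	struct item *it = example_dequeue_bad_q(q);

	if (it == ITEM_XOFF)
		return NULL;		/* frozen: give up for now */
	if (it)
		return it;		/* drained from the bad queue first */

	return example_pop_main(q);	/* otherwise use the main queue */
}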
+1 -1
net/sched/sch_hhf.c
··· 531 531 new_hhf_non_hh_weight = nla_get_u32(tb[TCA_HHF_NON_HH_WEIGHT]); 532 532 533 533 non_hh_quantum = (u64)new_quantum * new_hhf_non_hh_weight; 534 - if (non_hh_quantum > INT_MAX) 534 + if (non_hh_quantum == 0 || non_hh_quantum > INT_MAX) 535 535 return -EINVAL; 536 536 537 537 sch_tree_lock(sch);
+1 -1
net/sctp/protocol.c
··· 1339 1339 return status; 1340 1340 } 1341 1341 1342 - static void __net_init sctp_ctrlsock_exit(struct net *net) 1342 + static void __net_exit sctp_ctrlsock_exit(struct net *net) 1343 1343 { 1344 1344 /* Free the control endpoint. */ 1345 1345 inet_ctl_sock_destroy(net->sctp.ctl_sock);
+1 -1
net/sctp/sm_sideeffect.c
··· 547 547 if (net->sctp.pf_enable && 548 548 (transport->state == SCTP_ACTIVE) && 549 549 (transport->error_count < transport->pathmaxrxt) && 550 - (transport->error_count > asoc->pf_retrans)) { 550 + (transport->error_count > transport->pf_retrans)) { 551 551 552 552 sctp_assoc_control_transport(asoc, transport, 553 553 SCTP_TRANSPORT_PF,
+13 -11
net/sctp/socket.c
··· 309 309 return retval; 310 310 } 311 311 312 - static long sctp_get_port_local(struct sock *, union sctp_addr *); 312 + static int sctp_get_port_local(struct sock *, union sctp_addr *); 313 313 314 314 /* Verify this is a valid sockaddr. */ 315 315 static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt, ··· 399 399 * detection. 400 400 */ 401 401 addr->v4.sin_port = htons(snum); 402 - if ((ret = sctp_get_port_local(sk, addr))) { 402 + if (sctp_get_port_local(sk, addr)) 403 403 return -EADDRINUSE; 404 - } 405 404 406 405 /* Refresh ephemeral port. */ 407 406 if (!bp->port) ··· 412 413 ret = sctp_add_bind_addr(bp, addr, af->sockaddr_len, 413 414 SCTP_ADDR_SRC, GFP_ATOMIC); 414 415 415 - /* Copy back into socket for getsockname() use. */ 416 - if (!ret) { 417 - inet_sk(sk)->inet_sport = htons(inet_sk(sk)->inet_num); 418 - sp->pf->to_sk_saddr(addr, sk); 416 + if (ret) { 417 + sctp_put_port(sk); 418 + return ret; 419 419 } 420 + /* Copy back into socket for getsockname() use. */ 421 + inet_sk(sk)->inet_sport = htons(inet_sk(sk)->inet_num); 422 + sp->pf->to_sk_saddr(addr, sk); 420 423 421 424 return ret; 422 425 } ··· 7193 7192 val.spt_pathmaxrxt = trans->pathmaxrxt; 7194 7193 val.spt_pathpfthld = trans->pf_retrans; 7195 7194 7196 - return 0; 7195 + goto out; 7197 7196 } 7198 7197 7199 7198 asoc = sctp_id2assoc(sk, val.spt_assoc_id); ··· 7211 7210 val.spt_pathmaxrxt = sp->pathmaxrxt; 7212 7211 } 7213 7212 7213 + out: 7214 7214 if (put_user(len, optlen) || copy_to_user(optval, &val, len)) 7215 7215 return -EFAULT; 7216 7216 ··· 8147 8145 static struct sctp_bind_bucket *sctp_bucket_create( 8148 8146 struct sctp_bind_hashbucket *head, struct net *, unsigned short snum); 8149 8147 8150 - static long sctp_get_port_local(struct sock *sk, union sctp_addr *addr) 8148 + static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr) 8151 8149 { 8152 8150 struct sctp_sock *sp = sctp_sk(sk); 8153 8151 bool reuse = (sk->sk_reuse || sp->reuse); ··· 8257 8255 8258 8256 if (sctp_bind_addr_conflict(&ep2->base.bind_addr, 8259 8257 addr, sp2, sp)) { 8260 - ret = (long)sk2; 8258 + ret = 1; 8261 8259 goto fail_unlock; 8262 8260 } 8263 8261 } ··· 8329 8327 addr.v4.sin_port = htons(snum); 8330 8328 8331 8329 /* Note: sk->sk_num gets filled in if ephemeral port request. */ 8332 - return !!sctp_get_port_local(sk, &addr); 8330 + return sctp_get_port_local(sk, &addr); 8333 8331 } 8334 8332 8335 8333 /*
+2 -1
net/tipc/name_distr.c
··· 223 223 publ->key); 224 224 } 225 225 226 - kfree_rcu(p, rcu); 226 + if (p) 227 + kfree_rcu(p, rcu); 227 228 } 228 229 229 230 /**
+25 -31
net/xfrm/xfrm_interface.c
··· 145 145 if (err < 0) 146 146 goto out; 147 147 148 - strcpy(xi->p.name, dev->name); 149 - 150 148 dev_hold(dev); 151 149 xfrmi_link(xfrmn, xi); 152 150 ··· 175 177 struct xfrmi_net *xfrmn = net_generic(xi->net, xfrmi_net_id); 176 178 177 179 xfrmi_unlink(xfrmn, xi); 178 - dev_put(xi->phydev); 179 180 dev_put(dev); 180 181 } 181 182 ··· 291 294 if (tdev == dev) { 292 295 stats->collisions++; 293 296 net_warn_ratelimited("%s: Local routing loop detected!\n", 294 - xi->p.name); 297 + dev->name); 295 298 goto tx_err_dst_release; 296 299 } 297 300 ··· 361 364 goto tx_err; 362 365 } 363 366 364 - fl.flowi_oif = xi->phydev->ifindex; 367 + fl.flowi_oif = xi->p.link; 365 368 366 369 ret = xfrmi_xmit2(skb, dev, &fl); 367 370 if (ret < 0) ··· 502 505 503 506 static int xfrmi_update(struct xfrm_if *xi, struct xfrm_if_parms *p) 504 507 { 505 - struct net *net = dev_net(xi->dev); 508 + struct net *net = xi->net; 506 509 struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id); 507 510 int err; 508 511 ··· 547 550 { 548 551 struct xfrm_if *xi = netdev_priv(dev); 549 552 550 - return xi->phydev->ifindex; 553 + return xi->p.link; 551 554 } 552 555 553 556 ··· 573 576 dev->needs_free_netdev = true; 574 577 dev->priv_destructor = xfrmi_dev_free; 575 578 netif_keep_dst(dev); 579 + 580 + eth_broadcast_addr(dev->broadcast); 576 581 } 577 582 578 583 static int xfrmi_dev_init(struct net_device *dev) 579 584 { 580 585 struct xfrm_if *xi = netdev_priv(dev); 581 - struct net_device *phydev = xi->phydev; 586 + struct net_device *phydev = __dev_get_by_index(xi->net, xi->p.link); 582 587 int err; 583 588 584 589 dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); ··· 595 596 596 597 dev->features |= NETIF_F_LLTX; 597 598 598 - dev->needed_headroom = phydev->needed_headroom; 599 - dev->needed_tailroom = phydev->needed_tailroom; 599 + if (phydev) { 600 + dev->needed_headroom = phydev->needed_headroom; 601 + dev->needed_tailroom = phydev->needed_tailroom; 600 602 601 - if (is_zero_ether_addr(dev->dev_addr)) 602 - eth_hw_addr_inherit(dev, phydev); 603 - if (is_zero_ether_addr(dev->broadcast)) 604 - memcpy(dev->broadcast, phydev->broadcast, dev->addr_len); 603 + if (is_zero_ether_addr(dev->dev_addr)) 604 + eth_hw_addr_inherit(dev, phydev); 605 + if (is_zero_ether_addr(dev->broadcast)) 606 + memcpy(dev->broadcast, phydev->broadcast, 607 + dev->addr_len); 608 + } else { 609 + eth_hw_addr_random(dev); 610 + eth_broadcast_addr(dev->broadcast); 611 + } 605 612 606 613 return 0; 607 614 } ··· 643 638 int err; 644 639 645 640 xfrmi_netlink_parms(data, &p); 646 - 647 - if (!tb[IFLA_IFNAME]) 648 - return -EINVAL; 649 - 650 - nla_strlcpy(p.name, tb[IFLA_IFNAME], IFNAMSIZ); 651 - 652 641 xi = xfrmi_locate(net, &p); 653 642 if (xi) 654 643 return -EEXIST; ··· 651 652 xi->p = p; 652 653 xi->net = net; 653 654 xi->dev = dev; 654 - xi->phydev = dev_get_by_index(net, p.link); 655 - if (!xi->phydev) 656 - return -ENODEV; 657 655 658 656 err = xfrmi_create(dev); 659 - if (err < 0) 660 - dev_put(xi->phydev); 661 657 return err; 662 658 } 663 659 ··· 666 672 struct netlink_ext_ack *extack) 667 673 { 668 674 struct xfrm_if *xi = netdev_priv(dev); 669 - struct net *net = dev_net(dev); 675 + struct net *net = xi->net; 676 + struct xfrm_if_parms p; 670 677 671 - xfrmi_netlink_parms(data, &xi->p); 672 - 673 - xi = xfrmi_locate(net, &xi->p); 678 + xfrmi_netlink_parms(data, &p); 679 + xi = xfrmi_locate(net, &p); 674 680 if (!xi) { 675 681 xi = netdev_priv(dev); 676 682 } else { ··· 678 684 return -EEXIST; 679 685 } 680 686 681 - 
return xfrmi_update(xi, &xi->p); 687 + return xfrmi_update(xi, &p); 682 688 } 683 689 684 690 static size_t xfrmi_get_size(const struct net_device *dev) ··· 709 715 { 710 716 struct xfrm_if *xi = netdev_priv(dev); 711 717 712 - return dev_net(xi->phydev); 718 + return xi->net; 713 719 } 714 720 715 721 static const struct nla_policy xfrmi_policy[IFLA_XFRM_MAX + 1] = {
+4 -2
net/xfrm/xfrm_policy.c
··· 912 912 } else if (delta > 0) { 913 913 p = &parent->rb_right; 914 914 } else { 915 + bool same_prefixlen = node->prefixlen == n->prefixlen; 915 916 struct xfrm_policy *tmp; 916 917 917 918 hlist_for_each_entry(tmp, &n->hhead, bydst) { ··· 920 919 hlist_del_rcu(&tmp->bydst); 921 920 } 922 921 922 + node->prefixlen = prefixlen; 923 + 923 924 xfrm_policy_inexact_list_reinsert(net, node, family); 924 925 925 - if (node->prefixlen == n->prefixlen) { 926 + if (same_prefixlen) { 926 927 kfree_rcu(n, rcu); 927 928 return; 928 929 } ··· 932 929 rb_erase(*p, new); 933 930 kfree_rcu(n, rcu); 934 931 n = node; 935 - n->prefixlen = prefixlen; 936 932 goto restart; 937 933 } 938 934 }
+6
security/keys/request_key_auth.c
··· 66 66 { 67 67 struct request_key_auth *rka = dereference_key_rcu(key); 68 68 69 + if (!rka) 70 + return; 71 + 69 72 seq_puts(m, "key:"); 70 73 seq_puts(m, key->description); 71 74 if (key_is_positive(key)) ··· 85 82 struct request_key_auth *rka = dereference_key_locked(key); 86 83 size_t datalen; 87 84 long ret; 85 + 86 + if (!rka) 87 + return -EKEYREVOKED; 88 88 89 89 datalen = rka->callout_len; 90 90 ret = datalen;
+2 -2
sound/pci/hda/hda_auto_parser.c
··· 824 824 while (id >= 0) { 825 825 const struct hda_fixup *fix = codec->fixup_list + id; 826 826 827 + if (++depth > 10) 828 + break; 827 829 if (fix->chained_before) 828 830 apply_fixup(codec, fix->chain_id, action, depth + 1); 829 831 ··· 864 862 break; 865 863 } 866 864 if (!fix->chained || fix->chained_before) 867 - break; 868 - if (++depth > 10) 869 865 break; 870 866 id = fix->chain_id; 871 867 }
+2 -1
sound/pci/hda/hda_generic.c
··· 6009 6009 if (spec->init_hook) 6010 6010 spec->init_hook(codec); 6011 6011 6012 - snd_hda_apply_verbs(codec); 6012 + if (!spec->skip_verbs) 6013 + snd_hda_apply_verbs(codec); 6013 6014 6014 6015 init_multi_out(codec); 6015 6016 init_extra_out(codec);
+1
sound/pci/hda/hda_generic.h
··· 243 243 unsigned int indep_hp_enabled:1; /* independent HP enabled */ 244 244 unsigned int have_aamix_ctl:1; 245 245 unsigned int hp_mic_jack_modes:1; 246 + unsigned int skip_verbs:1; /* don't apply verbs at snd_hda_gen_init() */ 246 247 247 248 /* additional mute flags (only effective with auto_mute_via_amp=1) */ 248 249 u64 mute_bits;
+17
sound/pci/hda/patch_realtek.c
··· 837 837 if (spec->init_hook) 838 838 spec->init_hook(codec); 839 839 840 + spec->gen.skip_verbs = 1; /* applied below */ 840 841 snd_hda_gen_init(codec); 841 842 alc_fix_pll(codec); 842 843 alc_auto_init_amp(codec, spec->init_amp); 844 + snd_hda_apply_verbs(codec); /* apply verbs here, after the codec's own init */ 843 845 844 846 snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT); 845 847 ··· 5799 5797 ALC286_FIXUP_ACER_AIO_HEADSET_MIC, 5800 5798 ALC256_FIXUP_ASUS_MIC_NO_PRESENCE, 5801 5799 ALC299_FIXUP_PREDATOR_SPK, 5800 + ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC, 5802 5801 }; 5803 5802 5804 5803 static const struct hda_fixup alc269_fixups[] = { ··· 6840 6837 { } 6841 6838 } 6842 6839 }, 6840 + [ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC] = { 6841 + .type = HDA_FIXUP_PINS, 6842 + .v.pins = (const struct hda_pintbl[]) { 6843 + { 0x14, 0x411111f0 }, /* disable confusing internal speaker */ 6844 + { 0x19, 0x04a11150 }, /* use as headset mic, without its own jack detect */ 6845 + { } 6846 + }, 6847 + .chained = true, 6848 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 6849 + }, 6843 6850 }; 6844 6851 6845 6852 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 6992 6979 SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE), 6993 6980 SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6994 6981 SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6982 + SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 6995 6983 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 6996 6984 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 6997 6985 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 7009 6995 SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK), 7010 6996 SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A), 7011 6997 SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC), 6998 + SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_INTSPK_HEADSET_MIC), 7012 6999 SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW), 7013 7000 SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC), 7014 7001 SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC), ··· 7087 7072 SND_PCI_QUIRK(0x17aa, 0x312a, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 7088 7073 SND_PCI_QUIRK(0x17aa, 0x312f, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 7089 7074 SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION), 7075 + SND_PCI_QUIRK(0x17aa, 0x3151, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC), 7090 7076 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 7091 7077 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), 7092 7078 SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), ··· 8962 8946 static const struct hda_device_id snd_hda_id_realtek[] = { 8963 8947 HDA_CODEC_ENTRY(0x10ec0215, "ALC215", patch_alc269), 8964 8948 HDA_CODEC_ENTRY(0x10ec0221, "ALC221", patch_alc269), 8949 + HDA_CODEC_ENTRY(0x10ec0222, "ALC222", patch_alc269), 8965 8950 HDA_CODEC_ENTRY(0x10ec0225, "ALC225", patch_alc269), 8966 8951 HDA_CODEC_ENTRY(0x10ec0231, "ALC231", patch_alc269), 8967 8952 HDA_CODEC_ENTRY(0x10ec0233, "ALC233", patch_alc269),
+54
tools/testing/selftests/cgroup/test_freezer.c
··· 448 448 } 449 449 450 450 /* 451 + * The test creates a cgroup and freezes it. Then it creates a child cgroup 452 + * and populates it with a task. After that it checks that the child cgroup 453 + * is frozen and the parent cgroup remains frozen too. 454 + */ 455 + static int test_cgfreezer_mkdir(const char *root) 456 + { 457 + int ret = KSFT_FAIL; 458 + char *parent, *child = NULL; 459 + int pid; 460 + 461 + parent = cg_name(root, "cg_test_mkdir_A"); 462 + if (!parent) 463 + goto cleanup; 464 + 465 + child = cg_name(parent, "cg_test_mkdir_B"); 466 + if (!child) 467 + goto cleanup; 468 + 469 + if (cg_create(parent)) 470 + goto cleanup; 471 + 472 + if (cg_freeze_wait(parent, true)) 473 + goto cleanup; 474 + 475 + if (cg_create(child)) 476 + goto cleanup; 477 + 478 + pid = cg_run_nowait(child, child_fn, NULL); 479 + if (pid < 0) 480 + goto cleanup; 481 + 482 + if (cg_wait_for_proc_count(child, 1)) 483 + goto cleanup; 484 + 485 + if (cg_check_frozen(child, true)) 486 + goto cleanup; 487 + 488 + if (cg_check_frozen(parent, true)) 489 + goto cleanup; 490 + 491 + ret = KSFT_PASS; 492 + 493 + cleanup: 494 + if (child) 495 + cg_destroy(child); 496 + free(child); 497 + if (parent) 498 + cg_destroy(parent); 499 + free(parent); 500 + return ret; 501 + } 502 + 503 + /* 451 504 * The test creates two nested cgroups, freezes the parent 452 505 * and removes the child. Then it checks that the parent cgroup 453 506 * remains frozen and it's possible to create a new child ··· 868 815 T(test_cgfreezer_simple), 869 816 T(test_cgfreezer_tree), 870 817 T(test_cgfreezer_forkbomb), 818 + T(test_cgfreezer_mkdir), 871 819 T(test_cgfreezer_rmdir), 872 820 T(test_cgfreezer_migrate), 873 821 T(test_cgfreezer_ptrace),
+13 -11
tools/testing/selftests/net/fib_nexthops.sh
··· 212 212 printf " ${out}\n" 213 213 printf " Expected:\n" 214 214 printf " ${expected}\n\n" 215 + else 216 + echo " WARNING: Unexpected route entry" 215 217 fi 216 218 fi 217 219 ··· 276 274 277 275 run_cmd "$IP nexthop get id 52" 278 276 log_test $? 0 "Get nexthop by id" 279 - check_nexthop "id 52" "id 52 via 2001:db8:91::2 dev veth1" 277 + check_nexthop "id 52" "id 52 via 2001:db8:91::2 dev veth1 scope link" 280 278 281 279 run_cmd "$IP nexthop del id 52" 282 280 log_test $? 0 "Delete nexthop by id" ··· 481 479 run_cmd "$IP -6 nexthop add id 85 dev veth1" 482 480 run_cmd "$IP ro replace 2001:db8:101::1/128 nhid 85" 483 481 log_test $? 0 "IPv6 route with device only nexthop" 484 - check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 dev veth1" 482 + check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 dev veth1 metric 1024 pref medium" 485 483 486 484 run_cmd "$IP nexthop add id 123 group 81/85" 487 485 run_cmd "$IP ro replace 2001:db8:101::1/128 nhid 123" 488 486 log_test $? 0 "IPv6 multipath route with nexthop mix - dev only + gw" 489 - check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 nexthop via 2001:db8:91::2 dev veth1 nexthop dev veth1" 487 + check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 123 metric 1024 nexthop via 2001:db8:91::2 dev veth1 weight 1 nexthop dev veth1 weight 1 pref medium" 490 488 491 489 # 492 490 # IPv6 route with v4 nexthop - not allowed ··· 540 538 541 539 run_cmd "$IP nexthop get id 12" 542 540 log_test $? 0 "Get nexthop by id" 543 - check_nexthop "id 12" "id 12 via 172.16.1.2 src 172.16.1.1 dev veth1 scope link" 541 + check_nexthop "id 12" "id 12 via 172.16.1.2 dev veth1 scope link" 544 542 545 543 run_cmd "$IP nexthop del id 12" 546 544 log_test $? 0 "Delete nexthop by id" ··· 687 685 set +e 688 686 run_cmd "$IP ro add 172.16.101.1/32 nhid 11" 689 687 log_test $? 0 "IPv6 nexthop with IPv4 route" 690 - check_route "172.16.101.1" "172.16.101.1 nhid 11 via ${lladdr} dev veth1" 688 + check_route "172.16.101.1" "172.16.101.1 nhid 11 via inet6 ${lladdr} dev veth1" 691 689 692 690 set -e 693 691 run_cmd "$IP nexthop add id 12 via 172.16.1.2 dev veth1" ··· 696 694 run_cmd "$IP ro replace 172.16.101.1/32 nhid 101" 697 695 log_test $? 0 "IPv6 nexthop with IPv4 route" 698 696 699 - check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1" 697 + check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via inet6 ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1" 700 698 701 699 run_cmd "$IP ro replace 172.16.101.1/32 via inet6 ${lladdr} dev veth1" 702 700 log_test $? 0 "IPv4 route with IPv6 gateway" 703 - check_route "172.16.101.1" "172.16.101.1 via ${lladdr} dev veth1" 701 + check_route "172.16.101.1" "172.16.101.1 via inet6 ${lladdr} dev veth1" 704 702 705 703 run_cmd "$IP ro replace 172.16.101.1/32 via inet6 2001:db8:50::1 dev veth1" 706 704 log_test $? 2 "IPv4 route with invalid IPv6 gateway" ··· 787 785 log_test $? 0 "IPv4 route with device only nexthop" 788 786 check_route "172.16.101.1" "172.16.101.1 nhid 85 dev veth1" 789 787 790 - run_cmd "$IP nexthop add id 122 group 21/85" 791 - run_cmd "$IP ro replace 172.16.101.1/32 nhid 122" 788 + run_cmd "$IP nexthop add id 123 group 21/85" 789 + run_cmd "$IP ro replace 172.16.101.1/32 nhid 123" 792 790 log_test $? 
0 "IPv4 multipath route with nexthop mix - dev only + gw" 793 - check_route "172.16.101.1" "172.16.101.1 nhid 85 nexthop via 172.16.1.2 dev veth1 nexthop dev veth1" 791 + check_route "172.16.101.1" "172.16.101.1 nhid 123 nexthop via 172.16.1.2 dev veth1 weight 1 nexthop dev veth1 weight 1" 794 792 795 793 # 796 794 # IPv4 with IPv6 ··· 822 820 run_cmd "$IP ro replace 172.16.101.1/32 nhid 101" 823 821 log_test $? 0 "IPv4 route with mixed v4-v6 multipath route" 824 822 825 - check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1" 823 + check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via inet6 ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1" 826 824 827 825 run_cmd "ip netns exec me ping -c1 -w1 172.16.101.1" 828 826 log_test $? 0 "IPv6 nexthop with IPv4 route"
+7
tools/testing/selftests/net/xfrm_policy.sh
··· 106 106 # 107 107 # 10.0.0.0/24 and 10.0.1.0/24 nodes have been merged as 10.0.0.0/23. 108 108 ip -net $ns xfrm policy add src 10.1.0.0/24 dst 10.0.0.0/23 dir fwd priority 200 action block 109 + 110 + # similar to above: add policies (with partially random addresses) with shrinking prefixes. 111 + for p in 29 28 27; do 112 + for k in $(seq 1 32); do 113 + ip -net $ns xfrm policy add src 10.253.1.$((RANDOM%255))/$p dst 10.254.1.$((RANDOM%255))/$p dir fwd priority $((200+k)) action block 2>/dev/null 114 + done 115 + done 109 116 } 110 117 111 118 do_esp_policy_get_check() {