Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.16-rc4 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3518 -1648
+4 -5
Documentation/arm64/pointer-authentication.rst
···
 53  53   virtual address size configured by the kernel. For example, with a
 54  54   virtual address size of 48, the PAC is 7 bits wide.
 55  55
 56      - Recent versions of GCC can compile code with APIAKey-based return
 57      - address protection when passed the -msign-return-address option. This
 58      - uses instructions in the HINT space (unless -march=armv8.3-a or higher
 59      - is also passed), and such code can run on systems without the pointer
 60      - authentication extension.
     56  + When ARM64_PTR_AUTH_KERNEL is selected, the kernel will be compiled
     57  + with HINT space pointer authentication instructions protecting
     58  + function returns. Kernels built with this option will work on hardware
     59  + with or without pointer authentication support.
 61  60
 62  61   In addition to exec(), keys can also be reinitialized to random values
 63  62   using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
+3 -3
Documentation/cpu-freq/core.rst
···
 73  73   The third argument is a struct cpufreq_freqs with the following
 74  74   values:
 75  75
 76      - ===== ===========================
 77      - cpu   number of the affected CPU
     76  + ====== ======================================
     77  + policy a pointer to the struct cpufreq_policy
 78  78   old    old frequency
 79  79   new    new frequency
 80  80   flags  flags of the cpufreq driver
 81      - ===== ===========================
     81  + ====== ======================================
 82  82
 83  83   3. CPUFreq Table Generation with Operating Performance Point (OPP)
 84  84   ==================================================================
+56 -39
Documentation/filesystems/netfs_library.rst
···
  1   1   .. SPDX-License-Identifier: GPL-2.0
  2   2
  3   3   =================================
  4       - NETWORK FILESYSTEM HELPER LIBRARY
      4   + Network Filesystem Helper Library
  5   5   =================================
  6   6
  7   7   .. Contents:
···
 37  37
 38  38   The following services are provided:
 39  39
 40      - * Handles transparent huge pages (THPs).
     40  + * Handle folios that span multiple pages.
 41  41
 42      - * Insulates the netfs from VM interface changes.
     42  + * Insulate the netfs from VM interface changes.
 43  43
 44      - * Allows the netfs to arbitrarily split reads up into pieces, even ones that
 45      -   don't match page sizes or page alignments and that may cross pages.
     44  + * Allow the netfs to arbitrarily split reads up into pieces, even ones that
     45  +   don't match folio sizes or folio alignments and that may cross folios.
 46  46
 47      - * Allows the netfs to expand a readahead request in both directions to meet
 48      -   its needs.
     47  + * Allow the netfs to expand a readahead request in both directions to meet its
     48  +   needs.
 49  49
 50      - * Allows the netfs to partially fulfil a read, which will then be resubmitted.
     50  + * Allow the netfs to partially fulfil a read, which will then be resubmitted.
 51  51
 52      - * Handles local caching, allowing cached data and server-read data to be
     52  + * Handle local caching, allowing cached data and server-read data to be
 53  53     interleaved for a single request.
 54  54
 55      - * Handles clearing of bufferage that aren't on the server.
     55  + * Handle clearing of bufferage that aren't on the server.
 56  56
 57  57   * Handle retrying of reads that failed, switching reads from the cache to the
 58  58     server as necessary.
···
 70  70
 71  71   Three read helpers are provided::
 72  72
 73      - * void netfs_readahead(struct readahead_control *ractl,
 74      -                        const struct netfs_read_request_ops *ops,
 75      -                        void *netfs_priv);``
 76      - * int netfs_readpage(struct file *file,
 77      -                      struct page *page,
 78      -                      const struct netfs_read_request_ops *ops,
 79      -                      void *netfs_priv);
 80      - * int netfs_write_begin(struct file *file,
 81      -                        struct address_space *mapping,
 82      -                        loff_t pos,
 83      -                        unsigned int len,
 84      -                        unsigned int flags,
 85      -                        struct page **_page,
 86      -                        void **_fsdata,
 87      -                        const struct netfs_read_request_ops *ops,
 88      -                        void *netfs_priv);
     73  + void netfs_readahead(struct readahead_control *ractl,
     74  +                      const struct netfs_read_request_ops *ops,
     75  +                      void *netfs_priv);
     76  + int netfs_readpage(struct file *file,
     77  +                    struct folio *folio,
     78  +                    const struct netfs_read_request_ops *ops,
     79  +                    void *netfs_priv);
     80  + int netfs_write_begin(struct file *file,
     81  +                       struct address_space *mapping,
     82  +                       loff_t pos,
     83  +                       unsigned int len,
     84  +                       unsigned int flags,
     85  +                       struct folio **_folio,
     86  +                       void **_fsdata,
     87  +                       const struct netfs_read_request_ops *ops,
     88  +                       void *netfs_priv);
 89  89
 90  90   Each corresponds to a VM operation, with the addition of a couple of parameters
 91  91   for the use of the read helpers:
···
103 103   For ->readahead() and ->readpage(), the network filesystem should just jump
104 104   into the corresponding read helper; whereas for ->write_begin(), it may be a
105 105   little more complicated as the network filesystem might want to flush
106     - conflicting writes or track dirty data and needs to put the acquired page if an
107     - error occurs after calling the helper.
    106 + conflicting writes or track dirty data and needs to put the acquired folio if
    107 + an error occurs after calling the helper.
108 108
109 109   The helpers manage the read request, calling back into the network filesystem
110 110   through the suppplied table of operations.
          Waits will be performed as
···
253 253       void (*issue_op)(struct netfs_read_subrequest *subreq);
254 254       bool (*is_still_valid)(struct netfs_read_request *rreq);
255 255       int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
256     -                              struct page *page, void **_fsdata);
    256 +                              struct folio *folio, void **_fsdata);
257 257       void (*done)(struct netfs_read_request *rreq);
258 258       void (*cleanup)(struct address_space *mapping, void *netfs_priv);
259 259   };
···
313 313
314 314     There is no return value; the netfs_subreq_terminated() function should be
315 315     called to indicate whether or not the operation succeeded and how much data
316     -   it transferred. The filesystem also should not deal with setting pages
    316 +   it transferred. The filesystem also should not deal with setting folios
317 317     uptodate, unlocking them or dropping their refs - the helpers need to deal
318 318     with this as they have to coordinate with copying to the local cache.
319 319
320     -   Note that the helpers have the pages locked, but not pinned. It is possible
321     -   to use the ITER_XARRAY iov iterator to refer to the range of the inode that
322     -   is being operated upon without the need to allocate large bvec tables.
    320 +   Note that the helpers have the folios locked, but not pinned. It is
    321 +   possible to use the ITER_XARRAY iov iterator to refer to the range of the
    322 +   inode that is being operated upon without the need to allocate large bvec
    323 +   tables.
323 324
324 325   * ``is_still_valid()``
···
331 330
332 331   * ``check_write_begin()``
333 332
334     -   [Optional] This is called from the netfs_write_begin() helper once it has
          allocated/grabbed the page to be modified to allow the filesystem to flush
    333 +   allocated/grabbed the folio to be modified to allow the filesystem to flush
335 334     conflicting state before allowing it to be modified.
336 335
337     -   It should return 0 if everything is now fine, -EAGAIN if the page should be
    336 +   It should return 0 if everything is now fine, -EAGAIN if the folio should be
338 337     regrabbed and any other error code to abort the operation.
339 338
340 339   * ``done``
341 340
342     -   [Optional] This is called after the pages in the request have all been
    341 +   [Optional] This is called after the folios in the request have all been
343 342     unlocked (and marked uptodate if applicable).
344 343
345 344   * ``cleanup``
···
391 390   * If NETFS_SREQ_CLEAR_TAIL was set, a short read will be cleared to the
392 391     end of the slice instead of reissuing.
393 392
394     - * Once the data is read, the pages that have been fully read/cleared:
    393 + * Once the data is read, the folios that have been fully read/cleared:
395 394
396 395     * Will be marked uptodate.
397 396
···
399 398
400 399     * Unlocked
401 400
402     - * Any pages that need writing to the cache will then have DIO writes issued.
    401 + * Any folios that need writing to the cache will then have DIO writes issued.
403 402
404 403   * Synchronous operations will wait for reading to be complete.
405 404
406     - * Writes to the cache will proceed asynchronously and the pages will have the
    405 + * Writes to the cache will proceed asynchronously and the folios will have the
407 406     PG_fscache mark removed when that completes.
408 407
409 408   * The request structures will be cleaned up when everything has completed.
···
452 451                bool seek_data,
453 452                netfs_io_terminated_t term_func,
454 453                void *term_func_priv);
    454 +
    455 +     int (*prepare_write)(struct netfs_cache_resources *cres,
    456 +                          loff_t *_start, size_t *_len, loff_t i_size);
455 457
456 458       int (*write)(struct netfs_cache_resources *cres,
457 459                loff_t start_pos,
···
513 509     indicating whether the termination is definitely happening in the caller's
514 510     context.
515 511
    512 + * ``prepare_write()``
    513 +
    514 +   [Required] Called to adjust a write to the cache and check that there is
    515 +   sufficient space in the cache. The start and length values indicate the
    516 +   size of the write that netfslib is proposing, and this can be adjusted by
    517 +   the cache to respect DIO boundaries. The file size is passed for
    518 +   information.
    519 +
516 520   * ``write()``
517 521
518 522     [Required] Called to write to the cache. The start file offset is given
···
537 525   there isn't a read request structure as well, such as writing dirty data to the
538 526   cache.
539 527
    528 +
    529 + API Function Reference
    530 + ======================
    531 +
540 532   .. kernel-doc:: include/linux/netfs.h
    533 + .. kernel-doc:: fs/netfs/read_helper.c
+11 -2
MAINTAINERS
···
15979 15979
15980 15980   RANDOM NUMBER DRIVER
15981 15981   M:	"Theodore Ts'o" <tytso@mit.edu>
      15982 + M:	Jason A. Donenfeld <Jason@zx2c4.com>
15982 15983   S:	Maintained
15983 15984   F:	drivers/char/random.c
15984 15985
···
16502 16501   F:	Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
16503 16502   F:	drivers/media/platform/sunxi/sun8i-rotate/
16504 16503
      16504 + RPMSG TTY DRIVER
      16505 + M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
      16506 + L:	linux-remoteproc@vger.kernel.org
      16507 + S:	Maintained
      16508 + F:	drivers/tty/rpmsg_tty.c
      16509 +
16505 16510   RTL2830 MEDIA DRIVER
16506 16511   M:	Antti Palosaari <crope@iki.fi>
16507 16512   L:	linux-media@vger.kernel.org
···
16630 16623
16631 16624   S390 IUCV NETWORK LAYER
16632 16625   M:	Julian Wiedmann <jwi@linux.ibm.com>
16633       - M:	Karsten Graul <kgraul@linux.ibm.com>
      16626 + M:	Alexandra Winter <wintera@linux.ibm.com>
      16627 + M:	Wenjia Zhang <wenjia@linux.ibm.com>
16634 16628   L:	linux-s390@vger.kernel.org
16635 16629   L:	netdev@vger.kernel.org
16636 16630   S:	Supported
···
16642 16634
16643 16635   S390 NETWORK DRIVERS
16644 16636   M:	Julian Wiedmann <jwi@linux.ibm.com>
16645       - M:	Karsten Graul <kgraul@linux.ibm.com>
      16637 + M:	Alexandra Winter <wintera@linux.ibm.com>
      16638 + M:	Wenjia Zhang <wenjia@linux.ibm.com>
16646 16639   L:	linux-s390@vger.kernel.org
16647 16640   L:	netdev@vger.kernel.org
16648 16641   S:	Supported
+1 -1
Makefile
···
 2  2   VERSION = 5
 3  3   PATCHLEVEL = 16
 4  4   SUBLEVEL = 0
 5      - EXTRAVERSION = -rc3
    5   + EXTRAVERSION = -rc4
 6  6   NAME = Gobble Gobble
 7  7
 8  8   # *DOCUMENTATION*
+2 -2
arch/arm64/include/asm/kvm_arm.h
···
 91  91   #define HCR_HOST_VHE_FLAGS	(HCR_RW | HCR_TGE | HCR_E2H)
 92  92
 93  93   /* TCR_EL2 Registers bits */
 94      - #define TCR_EL2_RES1	((1 << 31) | (1 << 23))
     94  + #define TCR_EL2_RES1	((1U << 31) | (1 << 23))
 95  95   #define TCR_EL2_TBI	(1 << 20)
 96  96   #define TCR_EL2_PS_SHIFT	16
 97  97   #define TCR_EL2_PS_MASK	(7 << TCR_EL2_PS_SHIFT)
···
276 276   #define CPTR_EL2_TFP_SHIFT 10
277 277
278 278   /* Hyp Coprocessor Trap Register */
279     - #define CPTR_EL2_TCPAC	(1 << 31)
    279 + #define CPTR_EL2_TCPAC	(1U << 31)
280 280   #define CPTR_EL2_TAM	(1 << 30)
281 281   #define CPTR_EL2_TTA	(1 << 20)
282 282   #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
+6
arch/arm64/kernel/entry-ftrace.S
···
 77  77   .endm
 78  78
 79  79   SYM_CODE_START(ftrace_regs_caller)
     80  + #ifdef BTI_C
     81  + 	BTI_C
     82  + #endif
 80  83   	ftrace_regs_entry	1
 81  84   	b	ftrace_common
 82  85   SYM_CODE_END(ftrace_regs_caller)
 83  86
 84  87   SYM_CODE_START(ftrace_caller)
     88  + #ifdef BTI_C
     89  + 	BTI_C
     90  + #endif
 85  91   	ftrace_regs_entry	0
 86  92   	b	ftrace_common
 87  93   SYM_CODE_END(ftrace_caller)
+1 -1
arch/arm64/kernel/machine_kexec.c
···
147 147   	if (rc)
148 148   		return rc;
149 149   	kimage->arch.ttbr1 = __pa(trans_pgd);
150     - 	kimage->arch.zero_page = __pa(empty_zero_page);
    150 + 	kimage->arch.zero_page = __pa_symbol(empty_zero_page);
151 151
152 152   	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
153 153   	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
+14
arch/arm64/kvm/hyp/include/hyp/switch.h
···
403 403
404 404   static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
405 405
    406 + static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
    407 +
406 408   /*
407 409    * Allow the hypervisor to handle the exit with an exit handler if it has one.
408 410    *
···
431 429    */
432 430   static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
433 431   {
    432 + 	/*
    433 + 	 * Save PSTATE early so that we can evaluate the vcpu mode
    434 + 	 * early on.
    435 + 	 */
    436 + 	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
    437 +
    438 + 	/*
    439 + 	 * Check whether we want to repaint the state one way or
    440 + 	 * another.
    441 + 	 */
    442 + 	early_exit_filter(vcpu, exit_code);
    443 +
434 444   	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
435 445   		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
436 446
+6 -1
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
···
 70  70   static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
 71  71   {
 72  72   	ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
 73      - 	ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
     73  + 	/*
     74  + 	 * Guest PSTATE gets saved at guest fixup time in all
     75  + 	 * cases. We still need to handle the nVHE host side here.
     76  + 	 */
     77  + 	if (!has_vhe() && ctxt->__hyp_running_vcpu)
     78  + 		ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
 74  79
 75  80   	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
 76  81   		ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
+1 -7
arch/arm64/kvm/hyp/nvhe/switch.c
···
233 233    * Returns false if the guest ran in AArch32 when it shouldn't have, and
234 234    * thus should exit to the host, or true if a the guest run loop can continue.
235 235    */
236     - static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
    236 + static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
237 237   {
238 238   	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
239 239
···
248 248   		vcpu->arch.target = -1;
249 249   		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
250 250   		*exit_code |= ARM_EXCEPTION_IL;
251     - 		return false;
252 251   	}
253     -
254     - 	return true;
255 252   }
256 253
257 254   /* Switch to the guest for legacy non-VHE systems */
···
312 315   	do {
313 316   		/* Jump in the fire! */
314 317   		exit_code = __guest_enter(vcpu);
315     -
316     - 		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
317     - 			break;
318 318
319 319   		/* And we're baaack! */
320 320   	} while (fixup_guest_exit(vcpu, &exit_code));
+4
arch/arm64/kvm/hyp/vhe/switch.c
···
112 112   	return hyp_exit_handlers;
113 113   }
114 114
    115 + static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
    116 + {
    117 + }
    118 +
115 119   /* Switch to the guest for VHE systems running in EL2 */
116 120   static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
117 121   {
+5
arch/parisc/Makefile
···
 15  15   # Mike Shaver, Helge Deller and Martin K. Petersen
 16  16   #
 17  17
     18  + ifdef CONFIG_PARISC_SELF_EXTRACT
     19  + boot := arch/parisc/boot
     20  + KBUILD_IMAGE := $(boot)/bzImage
     21  + else
 18  22   KBUILD_IMAGE := vmlinuz
     23  + endif
 19  24
 20  25   NM		= sh $(srctree)/arch/parisc/nm
 21  26   CHECKFLAGS	+= -D__hppa__=1
+13 -1
arch/parisc/configs/generic-64bit_defconfig
···
  1   1   CONFIG_LOCALVERSION="-64bit"
  2   2   # CONFIG_LOCALVERSION_AUTO is not set
      3   + CONFIG_KERNEL_LZ4=y
  3   4   CONFIG_SYSVIPC=y
  4   5   CONFIG_POSIX_MQUEUE=y
      6   + CONFIG_AUDIT=y
  5   7   CONFIG_BSD_PROCESS_ACCT=y
  6   8   CONFIG_BSD_PROCESS_ACCT_V3=y
  7   9   CONFIG_TASKSTATS=y
···
 37  35   CONFIG_BLK_DEV_INTEGRITY=y
 38  36   CONFIG_BINFMT_MISC=m
 39  37   # CONFIG_COMPACTION is not set
     38  + CONFIG_MEMORY_FAILURE=y
 40  39   CONFIG_NET=y
 41  40   CONFIG_PACKET=y
 42  41   CONFIG_UNIX=y
···
 68  65   CONFIG_SCSI_SRP_ATTRS=y
 69  66   CONFIG_ISCSI_BOOT_SYSFS=y
 70  67   CONFIG_SCSI_MPT2SAS=y
 71      - CONFIG_SCSI_LASI700=m
     68  + CONFIG_SCSI_LASI700=y
 72  69   CONFIG_SCSI_SYM53C8XX_2=y
 73  70   CONFIG_SCSI_ZALON=y
 74  71   CONFIG_SCSI_QLA_ISCSI=m
 75  72   CONFIG_SCSI_DH=y
 76  73   CONFIG_ATA=y
     74  + CONFIG_SATA_SIL=y
     75  + CONFIG_SATA_SIS=y
     76  + CONFIG_SATA_VIA=y
 77  77   CONFIG_PATA_NS87415=y
 78  78   CONFIG_PATA_SIL680=y
 79  79   CONFIG_ATA_GENERIC=y
···
 85  79   CONFIG_BLK_DEV_DM=m
 86  80   CONFIG_DM_RAID=m
 87  81   CONFIG_DM_UEVENT=y
     82  + CONFIG_DM_AUDIT=y
 88  83   CONFIG_FUSION=y
 89  84   CONFIG_FUSION_SPI=y
 90  85   CONFIG_FUSION_SAS=y
···
203 196   CONFIG_FB_MATROX_I2C=y
204 197   CONFIG_FB_MATROX_MAVEN=y
205 198   CONFIG_FB_RADEON=y
    199 + CONFIG_LOGO=y
    200 + # CONFIG_LOGO_LINUX_CLUT224 is not set
206 201   CONFIG_HIDRAW=y
207 202   CONFIG_HID_PID=y
208 203   CONFIG_USB_HIDDEV=y
209 204   CONFIG_USB=y
    205 + CONFIG_USB_EHCI_HCD=y
    206 + CONFIG_USB_OHCI_HCD=y
    207 + CONFIG_USB_OHCI_HCD_PLATFORM=y
210 208   CONFIG_UIO=y
211 209   CONFIG_UIO_PDRV_GENIRQ=m
212 210   CONFIG_UIO_AEC=m
+1
arch/parisc/install.sh
···
 39  39   if [ -n "${INSTALLKERNEL}" ]; then
 40  40     if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
 41  41     if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
     42  +   if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
 42  43   fi
 43  44
 44  45   # Default install
+7 -21
arch/parisc/kernel/time.c
···
249 249   static int __init init_cr16_clocksource(void)
250 250   {
251 251   	/*
252     - 	 * The cr16 interval timers are not syncronized across CPUs on
253     - 	 * different sockets, so mark them unstable and lower rating on
254     - 	 * multi-socket SMP systems.
    252 + 	 * The cr16 interval timers are not syncronized across CPUs, even if
    253 + 	 * they share the same socket.
255 254   	 */
256 255   	if (num_online_cpus() > 1 && !running_on_qemu) {
257     - 		int cpu;
258     - 		unsigned long cpu0_loc;
259     - 		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
    256 + 		/* mark sched_clock unstable */
    257 + 		clear_sched_clock_stable();
260 258
261     - 		for_each_online_cpu(cpu) {
262     - 			if (cpu == 0)
263     - 				continue;
264     - 			if ((cpu0_loc != 0) &&
265     - 			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
266     - 				continue;
267     -
268     - 			/* mark sched_clock unstable */
269     - 			clear_sched_clock_stable();
270     -
271     - 			clocksource_cr16.name = "cr16_unstable";
272     - 			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
273     - 			clocksource_cr16.rating = 0;
274     - 			break;
275     - 		}
    259 + 		clocksource_cr16.name = "cr16_unstable";
    260 + 		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
    261 + 		clocksource_cr16.rating = 0;
276 262   	}
277 263
278 264   	/* register at clocksource framework */
+3 -5
arch/riscv/include/asm/kvm_host.h
···
 12  12   #include <linux/types.h>
 13  13   #include <linux/kvm.h>
 14  14   #include <linux/kvm_types.h>
     15  + #include <asm/csr.h>
 15  16   #include <asm/kvm_vcpu_fp.h>
 16  17   #include <asm/kvm_vcpu_timer.h>
 17  18
 18      - #ifdef CONFIG_64BIT
 19      - #define KVM_MAX_VCPUS			(1U << 16)
 20      - #else
 21      - #define KVM_MAX_VCPUS			(1U << 9)
 22      - #endif
     19  + #define KVM_MAX_VCPUS \
     20  + 	((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) + 1)
 23  21
 24  22   #define KVM_HALT_POLL_NS_DEFAULT	500000
 25  23
+6
arch/riscv/kvm/mmu.c
···
453 453   void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
454 454   				   struct kvm_memory_slot *slot)
455 455   {
    456 + 	gpa_t gpa = slot->base_gfn << PAGE_SHIFT;
    457 + 	phys_addr_t size = slot->npages << PAGE_SHIFT;
    458 +
    459 + 	spin_lock(&kvm->mmu_lock);
    460 + 	stage2_unmap_range(kvm, gpa, size, false);
    461 + 	spin_unlock(&kvm->mmu_lock);
456 462   }
457 463
458 464   void kvm_arch_commit_memory_region(struct kvm *kvm,
+8 -2
arch/s390/configs/debug_defconfig
···
403 403   CONFIG_CONNECTOR=y
404 404   CONFIG_ZRAM=y
405 405   CONFIG_BLK_DEV_LOOP=m
406     - CONFIG_BLK_DEV_CRYPTOLOOP=m
407 406   CONFIG_BLK_DEV_DRBD=m
408 407   CONFIG_BLK_DEV_NBD=m
409 408   CONFIG_BLK_DEV_RAM=y
···
475 476   CONFIG_MACVTAP=m
476 477   CONFIG_VXLAN=m
477 478   CONFIG_BAREUDP=m
    479 + CONFIG_AMT=m
478 480   CONFIG_TUN=m
479 481   CONFIG_VETH=m
480 482   CONFIG_VIRTIO_NET=m
···
489 489   # CONFIG_NET_VENDOR_AMD is not set
490 490   # CONFIG_NET_VENDOR_AQUANTIA is not set
491 491   # CONFIG_NET_VENDOR_ARC is not set
    492 + # CONFIG_NET_VENDOR_ASIX is not set
492 493   # CONFIG_NET_VENDOR_ATHEROS is not set
493 494   # CONFIG_NET_VENDOR_BROADCOM is not set
494 495   # CONFIG_NET_VENDOR_BROCADE is not set
···
572 571   CONFIG_WATCHDOG_NOWAYOUT=y
573 572   CONFIG_SOFT_WATCHDOG=m
574 573   CONFIG_DIAG288_WATCHDOG=m
    574 + # CONFIG_DRM_DEBUG_MODESET_LOCK is not set
575 575   CONFIG_FB=y
576 576   CONFIG_FRAMEBUFFER_CONSOLE=y
577 577   CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
···
777 775   CONFIG_CRC7=m
778 776   CONFIG_CRC8=m
779 777   CONFIG_RANDOM32_SELFTEST=y
    778 + CONFIG_XZ_DEC_MICROLZMA=y
780 779   CONFIG_DMA_CMA=y
781 780   CONFIG_CMA_SIZE_MBYTES=0
782 781   CONFIG_PRINTK_TIME=y
783 782   CONFIG_DYNAMIC_DEBUG=y
784 783   CONFIG_DEBUG_INFO=y
785 784   CONFIG_DEBUG_INFO_DWARF4=y
    785 + CONFIG_DEBUG_INFO_BTF=y
786 786   CONFIG_GDB_SCRIPTS=y
787 787   CONFIG_HEADERS_INSTALL=y
788 788   CONFIG_DEBUG_SECTION_MISMATCH=y
···
811 807   CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
812 808   CONFIG_DEBUG_PER_CPU_MAPS=y
813 809   CONFIG_KFENCE=y
    810 + CONFIG_KFENCE_STATIC_KEYS=y
814 811   CONFIG_DEBUG_SHIRQ=y
815 812   CONFIG_PANIC_ON_OOPS=y
816 813   CONFIG_DETECT_HUNG_TASK=y
···
847 842   CONFIG_SAMPLES=y
848 843   CONFIG_SAMPLE_TRACE_PRINTK=m
849 844   CONFIG_SAMPLE_FTRACE_DIRECT=m
    845 + CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
850 846   CONFIG_DEBUG_ENTRY=y
851 847   CONFIG_CIO_INJECT=y
852 848   CONFIG_KUNIT=m
···
866 860   CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
867 861   CONFIG_LKDTM=m
868 862   CONFIG_TEST_MIN_HEAP=y
869     - CONFIG_KPROBES_SANITY_TEST=y
    863 + CONFIG_KPROBES_SANITY_TEST=m
870 864   CONFIG_RBTREE_TEST=y
871 865   CONFIG_INTERVAL_TREE_TEST=m
872 866   CONFIG_PERCPU_TEST=m
+6 -1
arch/s390/configs/defconfig
···
394 394   CONFIG_CONNECTOR=y
395 395   CONFIG_ZRAM=y
396 396   CONFIG_BLK_DEV_LOOP=m
397     - CONFIG_BLK_DEV_CRYPTOLOOP=m
398 397   CONFIG_BLK_DEV_DRBD=m
399 398   CONFIG_BLK_DEV_NBD=m
400 399   CONFIG_BLK_DEV_RAM=y
···
466 467   CONFIG_MACVTAP=m
467 468   CONFIG_VXLAN=m
468 469   CONFIG_BAREUDP=m
    470 + CONFIG_AMT=m
469 471   CONFIG_TUN=m
470 472   CONFIG_VETH=m
471 473   CONFIG_VIRTIO_NET=m
···
480 480   # CONFIG_NET_VENDOR_AMD is not set
481 481   # CONFIG_NET_VENDOR_AQUANTIA is not set
482 482   # CONFIG_NET_VENDOR_ARC is not set
    483 + # CONFIG_NET_VENDOR_ASIX is not set
483 484   # CONFIG_NET_VENDOR_ATHEROS is not set
484 485   # CONFIG_NET_VENDOR_BROADCOM is not set
485 486   # CONFIG_NET_VENDOR_BROCADE is not set
···
763 762   CONFIG_CRC4=m
764 763   CONFIG_CRC7=m
765 764   CONFIG_CRC8=m
    765 + CONFIG_XZ_DEC_MICROLZMA=y
766 766   CONFIG_DMA_CMA=y
767 767   CONFIG_CMA_SIZE_MBYTES=0
768 768   CONFIG_PRINTK_TIME=y
769 769   CONFIG_DYNAMIC_DEBUG=y
770 770   CONFIG_DEBUG_INFO=y
771 771   CONFIG_DEBUG_INFO_DWARF4=y
    772 + CONFIG_DEBUG_INFO_BTF=y
772 773   CONFIG_GDB_SCRIPTS=y
773 774   CONFIG_DEBUG_SECTION_MISMATCH=y
774 775   CONFIG_MAGIC_SYSRQ=y
···
795 792   CONFIG_SAMPLES=y
796 793   CONFIG_SAMPLE_TRACE_PRINTK=m
797 794   CONFIG_SAMPLE_FTRACE_DIRECT=m
    795 + CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
798 796   CONFIG_KUNIT=m
799 797   CONFIG_KUNIT_DEBUGFS=y
800 798   CONFIG_LKDTM=m
    799 + CONFIG_KPROBES_SANITY_TEST=m
801 800   CONFIG_PERCPU_TEST=m
802 801   CONFIG_ATOMIC64_SELFTEST=y
803 802   CONFIG_TEST_BPF=m
+2
arch/s390/configs/zfcpdump_defconfig
···
 65  65   # CONFIG_NETWORK_FILESYSTEMS is not set
 66  66   CONFIG_LSM="yama,loadpin,safesetid,integrity"
 67  67   # CONFIG_ZLIB_DFLTCC is not set
     68  + CONFIG_XZ_DEC_MICROLZMA=y
 68  69   CONFIG_PRINTK_TIME=y
 69  70   # CONFIG_SYMBOLIC_ERRNAME is not set
 70  71   CONFIG_DEBUG_INFO=y
     72  + CONFIG_DEBUG_INFO_BTF=y
 71  73   CONFIG_DEBUG_FS=y
 72  74   CONFIG_DEBUG_KERNEL=y
 73  75   CONFIG_PANIC_ON_OOPS=y
+4 -3
arch/s390/include/asm/pci_io.h
···
 14  14
 15  15   /* I/O Map */
 16  16   #define ZPCI_IOMAP_SHIFT		48
 17      - #define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
     17  + #define ZPCI_IOMAP_ADDR_SHIFT		62
     18  + #define ZPCI_IOMAP_ADDR_BASE		(1UL << ZPCI_IOMAP_ADDR_SHIFT)
 18  19   #define ZPCI_IOMAP_ADDR_OFF_MASK	((1UL << ZPCI_IOMAP_SHIFT) - 1)
 19  20   #define ZPCI_IOMAP_MAX_ENTRIES \
 20      - 	((ULONG_MAX - ZPCI_IOMAP_ADDR_BASE + 1) / (1UL << ZPCI_IOMAP_SHIFT))
     21  + 	(1UL << (ZPCI_IOMAP_ADDR_SHIFT - ZPCI_IOMAP_SHIFT))
 21  22   #define ZPCI_IOMAP_ADDR_IDX_MASK \
 22      - 	(~ZPCI_IOMAP_ADDR_OFF_MASK - ZPCI_IOMAP_ADDR_BASE)
     23  + 	((ZPCI_IOMAP_ADDR_BASE - 1) & ~ZPCI_IOMAP_ADDR_OFF_MASK)
 23  24
 24  25   struct zpci_iomap_entry {
 25  26   	u32 fh;
+3 -2
arch/s390/lib/test_unwind.c
···
173 173   	}
174 174
175 175   	/*
176     - 	 * trigger specification exception
    176 + 	 * Trigger operation exception; use insn notation to bypass
    177 + 	 * llvm's integrated assembler sanity checks.
177 178   	 */
178 179   	asm volatile(
179     - 		"	mvcl	%%r1,%%r1\n"
    180 + 		"	.insn	e,0x0000\n"	/* illegal opcode */
180 181   		"0:	nopr	%%r7\n"
181 182   		EX_TABLE(0b, 0b)
182 183   		:);
+18 -19
arch/x86/entry/entry_64.S
···
 574  574   	ud2
 575  575   1:
 576  576   #endif
      577 + #ifdef CONFIG_XEN_PV
      578 + 	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
      579 + #endif
      580 +
 577  581   	POP_REGS pop_rdi=0
 578  582
 579  583   	/*
···
 894  890   .Lparanoid_entry_checkgs:
 895  891   	/* EBX = 1 -> kernel GSBASE active, no restore required */
 896  892   	movl	$1, %ebx
      893 +
 897  894   	/*
 898  895   	 * The kernel-enforced convention is a negative GSBASE indicates
 899  896   	 * a kernel value. No SWAPGS needed on entry and exit.
···
 902  897   	movl	$MSR_GS_BASE, %ecx
 903  898   	rdmsr
 904  899   	testl	%edx, %edx
 905      - 	jns	.Lparanoid_entry_swapgs
 906      - 	ret
 907      -
 908      - .Lparanoid_entry_swapgs:
 909      - 	swapgs
 910      -
 911      - 	/*
 912      - 	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
 913      - 	 * unconditional CR3 write, even in the PTI case. So do an lfence
 914      - 	 * to prevent GS speculation, regardless of whether PTI is enabled.
 915      - 	 */
 916      - 	FENCE_SWAPGS_KERNEL_ENTRY
      900 + 	js	.Lparanoid_kernel_gsbase
 917  901
 918  902   	/* EBX = 0 -> SWAPGS required on exit */
 919  903   	xorl	%ebx, %ebx
      904 + 	swapgs
      905 + .Lparanoid_kernel_gsbase:
      906 +
      907 + 	FENCE_SWAPGS_KERNEL_ENTRY
 920  908   	ret
 921  909   SYM_CODE_END(paranoid_entry)
 922  910
···
 991  993   	pushq	%r12
 992  994   	ret
 993  995
 994      - .Lerror_entry_done_lfence:
 995      - 	FENCE_SWAPGS_KERNEL_ENTRY
 996      - .Lerror_entry_done:
 997      - 	ret
 998      -
 999  996   /*
1000  997    * There are two places in the kernel that can potentially fault with
1001  998    * usergs. Handle them here. B stepping K8s sometimes report a
···
1013 1020   	 * .Lgs_change's error handler with kernel gsbase.
1014 1021   	 */
1015 1022   	SWAPGS
1016      - 	FENCE_SWAPGS_USER_ENTRY
1017      - 	jmp .Lerror_entry_done
     1023 +
     1024 + 	/*
     1025 + 	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
     1026 + 	 * kernel or user gsbase.
     1027 + 	 */
     1028 + .Lerror_entry_done_lfence:
     1029 + 	FENCE_SWAPGS_KERNEL_ENTRY
     1030 + 	ret
1018 1031
1019 1032   .Lbstep_iret:
1020 1033   	/* Fix truncated RIP */
+1 -1
arch/x86/include/asm/intel-family.h
···
108 108   #define INTEL_FAM6_ALDERLAKE		0x97	/* Golden Cove / Gracemont */
109 109   #define INTEL_FAM6_ALDERLAKE_L		0x9A	/* Golden Cove / Gracemont */
110 110
111     - #define INTEL_FAM6_RAPTOR_LAKE		0xB7
    111 + #define INTEL_FAM6_RAPTORLAKE		0xB7
112 112
113 113   /* "Small Core" Processors (Atom) */
114 114
+1
arch/x86/include/asm/kvm_host.h
···
1036 1036   #define APICV_INHIBIT_REASON_PIT_REINJ	4
1037 1037   #define APICV_INHIBIT_REASON_X2APIC	5
1038 1038   #define APICV_INHIBIT_REASON_BLOCKIRQ	6
     1039 + #define APICV_INHIBIT_REASON_ABSENT	7
1039 1040
1040 1041   struct kvm_arch {
1041 1042   	unsigned long n_used_mmu_pages;
+11
arch/x86/include/asm/sev-common.h
···
 73  73
 74  74   #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)
 75  75
     76  + /*
     77  +  * Error codes related to GHCB input that can be communicated back to the guest
     78  +  * by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
     79  +  */
     80  + #define GHCB_ERR_NOT_REGISTERED		1
     81  + #define GHCB_ERR_INVALID_USAGE		2
     82  + #define GHCB_ERR_INVALID_SCRATCH_AREA	3
     83  + #define GHCB_ERR_MISSING_INPUT		4
     84  + #define GHCB_ERR_INVALID_INPUT		5
     85  + #define GHCB_ERR_INVALID_EVENT		6
     86  +
 76  87   #endif
+1 -1
arch/x86/kernel/fpu/signal.c
···
118 118   				 struct fpstate *fpstate)
119 119   {
120 120   	struct xregs_state __user *x = buf;
121     - 	struct _fpx_sw_bytes sw_bytes;
    121 + 	struct _fpx_sw_bytes sw_bytes = {};
122 122   	u32 xfeatures;
123 123   	int err;
124 124
+39 -18
arch/x86/kernel/sev.c
···
294 294   					  char *dst, char *buf, size_t size)
295 295   {
296 296   	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
297     - 	char __user *target = (char __user *)dst;
298     - 	u64 d8;
299     - 	u32 d4;
300     - 	u16 d2;
301     - 	u8 d1;
302 297
303 298   	/*
304 299   	 * This function uses __put_user() independent of whether kernel or user
···
315 320   	 * instructions here would cause infinite nesting.
316 321   	 */
317 322   	switch (size) {
318     - 	case 1:
    323 + 	case 1: {
    324 + 		u8 d1;
    325 + 		u8 __user *target = (u8 __user *)dst;
    326 +
319 327   		memcpy(&d1, buf, 1);
320 328   		if (__put_user(d1, target))
321 329   			goto fault;
322 330   		break;
323     - 	case 2:
    331 + 	}
    332 + 	case 2: {
    333 + 		u16 d2;
    334 + 		u16 __user *target = (u16 __user *)dst;
    335 +
324 336   		memcpy(&d2, buf, 2);
325 337   		if (__put_user(d2, target))
326 338   			goto fault;
327 339   		break;
328     - 	case 4:
    340 + 	}
    341 + 	case 4: {
    342 + 		u32 d4;
    343 + 		u32 __user *target = (u32 __user *)dst;
    344 +
329 345   		memcpy(&d4, buf, 4);
330 346   		if (__put_user(d4, target))
331 347   			goto fault;
332 348   		break;
333     - 	case 8:
    349 + 	}
    350 + 	case 8: {
    351 + 		u64 d8;
    352 + 		u64 __user *target = (u64 __user *)dst;
    353 +
334 354   		memcpy(&d8, buf, 8);
335 355   		if (__put_user(d8, target))
336 356   			goto fault;
337 357   		break;
    358 + 	}
338 359   	default:
339 360   		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
340 361   		return ES_UNSUPPORTED;
···
373 362   					 char *src, char *buf, size_t size)
374 363   {
375 364   	unsigned long error_code = X86_PF_PROT;
376     - 	char __user *s = (char __user *)src;
377     - 	u64 d8;
378     - 	u32 d4;
379     - 	u16 d2;
380     - 	u8 d1;
381 365
382 366   	/*
383 367   	 * This function uses __get_user() independent of whether kernel or user
···
394 388   	 * instructions here would cause infinite nesting.
395 389   	 */
396 390   	switch (size) {
397     - 	case 1:
    391 + 	case 1: {
    392 + 		u8 d1;
    393 + 		u8 __user *s = (u8 __user *)src;
    394 +
398 395   		if (__get_user(d1, s))
399 396   			goto fault;
400 397   		memcpy(buf, &d1, 1);
401 398   		break;
402     - 	case 2:
    399 + 	}
    400 + 	case 2: {
    401 + 		u16 d2;
    402 + 		u16 __user *s = (u16 __user *)src;
    403 +
403 404   		if (__get_user(d2, s))
404 405   			goto fault;
405 406   		memcpy(buf, &d2, 2);
406 407   		break;
407     - 	case 4:
    408 + 	}
    409 + 	case 4: {
    410 + 		u32 d4;
    411 + 		u32 __user *s = (u32 __user *)src;
    412 +
408 413   		if (__get_user(d4, s))
409 414   			goto fault;
410 415   		memcpy(buf, &d4, 4);
411 416   		break;
412     - 	case 8:
    417 + 	}
    418 + 	case 8: {
    419 + 		u64 d8;
    420 + 		u64 __user *s = (u64 __user *)src;
413 421   		if (__get_user(d8, s))
414 422   			goto fault;
415 423   		memcpy(buf, &d8, 8);
416 424   		break;
    425 + 	}
417 426   	default:
418 427   		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
419 428   		return ES_UNSUPPORTED;
+24 -4
arch/x86/kernel/tsc.c
···
1180 1180
1181 1181   EXPORT_SYMBOL_GPL(mark_tsc_unstable);
1182 1182
     1183 + static void __init tsc_disable_clocksource_watchdog(void)
     1184 + {
     1185 + 	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
     1186 + 	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
     1187 + }
     1188 +
1183 1189   static void __init check_system_tsc_reliable(void)
1184 1190   {
1185 1191   #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
···
1202 1196   #endif
1203 1197   	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
1204 1198   		tsc_clocksource_reliable = 1;
     1199 +
     1200 + 	/*
     1201 + 	 * Disable the clocksource watchdog when the system has:
     1202 + 	 *  - TSC running at constant frequency
     1203 + 	 *  - TSC which does not stop in C-States
     1204 + 	 *  - the TSC_ADJUST register which allows to detect even minimal
     1205 + 	 *    modifications
     1206 + 	 *  - not more than two sockets. As the number of sockets cannot be
     1207 + 	 *    evaluated at the early boot stage where this has to be
     1208 + 	 *    invoked, check the number of online memory nodes as a
     1209 + 	 *    fallback solution which is an reasonable estimate.
     1210 + 	 */
     1211 + 	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
     1212 + 	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
     1213 + 	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
     1214 + 	    nr_online_nodes <= 2)
     1215 + 		tsc_disable_clocksource_watchdog();
1205 1216   }
1206 1217
1207 1218   /*
···
1410 1387   	if (tsc_unstable)
1411 1388   		goto unreg;
1412 1389
1413      - 	if (tsc_clocksource_reliable || no_tsc_watchdog)
1414      - 		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
1415      -
1416 1390   	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
1417 1391   		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
1418 1392
···
1547 1527   	}
1548 1528
1549 1529   	if (tsc_clocksource_reliable || no_tsc_watchdog)
1550      - 		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
     1530 + 		tsc_disable_clocksource_watchdog();
1551 1531
1552 1532   	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
1553 1533   	detect_art();
+41
arch/x86/kernel/tsc_sync.c
··· 30 30 }; 31 31 32 32 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust); 33 + static struct timer_list tsc_sync_check_timer; 33 34 34 35 /* 35 36 * TSC's on different sockets may be reset asynchronously. ··· 77 76 adj->warned = true; 78 77 } 79 78 } 79 + 80 + /* 81 + * Normally the tsc_sync will be checked every time system enters idle 82 + * state, but there is still caveat that a system won't enter idle, 83 + * either because it's too busy or configured purposely to not enter 84 + * idle. 85 + * 86 + * So setup a periodic timer (every 10 minutes) to make sure the check 87 + * is always on. 88 + */ 89 + 90 + #define SYNC_CHECK_INTERVAL (HZ * 600) 91 + 92 + static void tsc_sync_check_timer_fn(struct timer_list *unused) 93 + { 94 + int next_cpu; 95 + 96 + tsc_verify_tsc_adjust(false); 97 + 98 + /* Run the check for all onlined CPUs in turn */ 99 + next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask); 100 + if (next_cpu >= nr_cpu_ids) 101 + next_cpu = cpumask_first(cpu_online_mask); 102 + 103 + tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL; 104 + add_timer_on(&tsc_sync_check_timer, next_cpu); 105 + } 106 + 107 + static int __init start_sync_check_timer(void) 108 + { 109 + if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable) 110 + return 0; 111 + 112 + timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0); 113 + tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL; 114 + add_timer(&tsc_sync_check_timer); 115 + 116 + return 0; 117 + } 118 + late_initcall(start_sync_check_timer); 80 119 81 120 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval, 82 121 unsigned int cpu, bool bootcpu)
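The new `tsc_sync_check_timer_fn()` rotates the TSC_ADJUST check across online CPUs, wrapping back to the first CPU when `cpumask_next()` runs past the end of the mask. A hedged sketch of that wraparound, with a plain bitmask standing in for `cpu_online_mask` (all names here are illustrative):

```c
#include <assert.h>

#define NR_CPUS 8

/* Find the next set bit strictly after 'cpu'; NR_CPUS means "none left". */
static int next_online(unsigned int mask, int cpu)
{
    for (int i = cpu + 1; i < NR_CPUS; i++)
        if (mask & (1u << i))
            return i;
    return NR_CPUS;
}

static int first_online(unsigned int mask)
{
    return next_online(mask, -1);
}

/* Mirrors the hunk: advance, and wrap to the first online CPU at the end. */
static int pick_next_cpu(unsigned int online_mask, int cur)
{
    int next = next_online(online_mask, cur);
    if (next >= NR_CPUS)
        next = first_online(online_mask);
    return next;
}
```

With the 10-minute `SYNC_CHECK_INTERVAL`, every online CPU eventually runs the check even if the system never enters idle.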
-1
arch/x86/kvm/ioapic.h
··· 81 81 unsigned long irq_states[IOAPIC_NUM_PINS]; 82 82 struct kvm_io_device dev; 83 83 struct kvm *kvm; 84 - void (*ack_notifier)(void *opaque, int irq); 85 84 spinlock_t lock; 86 85 struct rtc_status rtc_status; 87 86 struct delayed_work eoi_inject;
-1
arch/x86/kvm/irq.h
··· 56 56 struct kvm_io_device dev_master; 57 57 struct kvm_io_device dev_slave; 58 58 struct kvm_io_device dev_elcr; 59 - void (*ack_notifier)(void *opaque, int irq); 60 59 unsigned long irq_states[PIC_NUM_PINS]; 61 60 }; 62 61
+1 -1
arch/x86/kvm/lapic.c
··· 707 707 static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr) 708 708 { 709 709 int highest_irr; 710 - if (apic->vcpu->arch.apicv_active) 710 + if (kvm_x86_ops.sync_pir_to_irr) 711 711 highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu); 712 712 else 713 713 highest_irr = apic_find_highest_irr(apic);
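The lapic.c change keys the fast path on whether the vendor actually provides `sync_pir_to_irr` rather than on `apicv_active`, since SVM no longer installs that op. The underlying pattern, calling an optional hook when present and falling back otherwise, can be sketched as follows (the struct, hook, and return values are illustrative, not KVM's real API):

```c
#include <assert.h>
#include <stddef.h>

/* Optional vendor hook, modeled on kvm_x86_ops.sync_pir_to_irr. */
struct ops {
    int (*sync_pir_to_irr)(void);
};

/* Generic fallback path, standing in for apic_find_highest_irr(). */
static int fallback_highest_irr(void)
{
    return 7;
}

/* Illustrative vendor implementation used in the usage example below. */
static int apicv_sync_pir_to_irr(void)
{
    return 3;
}

/* Use the vendor hook only if one is installed; else the generic path. */
static int highest_irr(const struct ops *o)
{
    if (o->sync_pir_to_irr)
        return o->sync_pir_to_irr();
    return fallback_highest_irr();
}
```

Dispatching on the op's existence avoids calling a NULL static call when a vendor (here, SVM) drops the hook.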
+73 -47
arch/x86/kvm/mmu/mmu.c
··· 1582 1582 flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp); 1583 1583 1584 1584 if (is_tdp_mmu_enabled(kvm)) 1585 - flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); 1585 + flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush); 1586 1586 1587 1587 return flush; 1588 1588 } ··· 1936 1936 1937 1937 static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp) 1938 1938 { 1939 - return sp->role.invalid || 1939 + if (sp->role.invalid) 1940 + return true; 1941 + 1942 + /* TDP MMU pages due not use the MMU generation. */ 1943 + return !sp->tdp_mmu_page && 1940 1944 unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen); 1941 1945 } 1942 1946 ··· 2177 2173 iterator->shadow_addr = root; 2178 2174 iterator->level = vcpu->arch.mmu->shadow_root_level; 2179 2175 2180 - if (iterator->level == PT64_ROOT_4LEVEL && 2176 + if (iterator->level >= PT64_ROOT_4LEVEL && 2181 2177 vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL && 2182 2178 !vcpu->arch.mmu->direct_map) 2183 - --iterator->level; 2179 + iterator->level = PT32E_ROOT_LEVEL; 2184 2180 2185 2181 if (iterator->level == PT32E_ROOT_LEVEL) { 2186 2182 /* ··· 3980 3976 return true; 3981 3977 } 3982 3978 3979 + /* 3980 + * Returns true if the page fault is stale and needs to be retried, i.e. if the 3981 + * root was invalidated by a memslot update or a relevant mmu_notifier fired. 
3982 + */ 3983 + static bool is_page_fault_stale(struct kvm_vcpu *vcpu, 3984 + struct kvm_page_fault *fault, int mmu_seq) 3985 + { 3986 + if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa))) 3987 + return true; 3988 + 3989 + return fault->slot && 3990 + mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva); 3991 + } 3992 + 3983 3993 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) 3984 3994 { 3985 3995 bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu); ··· 4031 4013 else 4032 4014 write_lock(&vcpu->kvm->mmu_lock); 4033 4015 4034 - if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva)) 4016 + if (is_page_fault_stale(vcpu, fault, mmu_seq)) 4035 4017 goto out_unlock; 4018 + 4036 4019 r = make_mmu_pages_available(vcpu); 4037 4020 if (r) 4038 4021 goto out_unlock; ··· 4874 4855 struct kvm_mmu *context = &vcpu->arch.guest_mmu; 4875 4856 struct kvm_mmu_role_regs regs = { 4876 4857 .cr0 = cr0, 4877 - .cr4 = cr4, 4858 + .cr4 = cr4 & ~X86_CR4_PKE, 4878 4859 .efer = efer, 4879 4860 }; 4880 4861 union kvm_mmu_role new_role; ··· 4938 4919 context->direct_map = false; 4939 4920 4940 4921 update_permission_bitmask(context, true); 4941 - update_pkru_bitmask(context); 4922 + context->pkru_mask = 0; 4942 4923 reset_rsvds_bits_mask_ept(vcpu, context, execonly); 4943 4924 reset_ept_shadow_zero_bits_mask(vcpu, context, execonly); 4944 4925 } ··· 5044 5025 /* 5045 5026 * Invalidate all MMU roles to force them to reinitialize as CPUID 5046 5027 * information is factored into reserved bit calculations. 5028 + * 5029 + * Correctly handling multiple vCPU models with respect to paging and 5030 + * physical address properties) in a single VM would require tracking 5031 + * all relevant CPUID information in kvm_mmu_page_role. That is very 5032 + * undesirable as it would increase the memory requirements for 5033 + * gfn_track (see struct kvm_mmu_page_role comments). 
For now that 5034 + * problem is swept under the rug; KVM's CPUID API is horrific and 5035 + * it's all but impossible to solve it without introducing a new API. 5047 5036 */ 5048 5037 vcpu->arch.root_mmu.mmu_role.ext.valid = 0; 5049 5038 vcpu->arch.guest_mmu.mmu_role.ext.valid = 0; ··· 5059 5032 kvm_mmu_reset_context(vcpu); 5060 5033 5061 5034 /* 5062 - * KVM does not correctly handle changing guest CPUID after KVM_RUN, as 5063 - * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't 5064 - * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page 5065 - * faults due to reusing SPs/SPTEs. Alert userspace, but otherwise 5066 - * sweep the problem under the rug. 5067 - * 5068 - * KVM's horrific CPUID ABI makes the problem all but impossible to 5069 - * solve, as correctly handling multiple vCPU models (with respect to 5070 - * paging and physical address properties) in a single VM would require 5071 - * tracking all relevant CPUID information in kvm_mmu_page_role. That 5072 - * is very undesirable as it would double the memory requirements for 5073 - * gfn_track (see struct kvm_mmu_page_role comments), and in practice 5074 - * no sane VMM mucks with the core vCPU model on the fly. 5035 + * Changing guest CPUID after KVM_RUN is forbidden, see the comment in 5036 + * kvm_arch_vcpu_ioctl(). 
5075 5037 */ 5076 - if (vcpu->arch.last_vmentry_cpu != -1) { 5077 - pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest instability\n"); 5078 - pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} will fail after KVM_RUN starting with Linux 5.16\n"); 5079 - } 5038 + KVM_BUG_ON(vcpu->arch.last_vmentry_cpu != -1, vcpu->kvm); 5080 5039 } 5081 5040 5082 5041 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu) ··· 5382 5369 5383 5370 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva) 5384 5371 { 5385 - kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE); 5372 + kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE); 5386 5373 ++vcpu->stat.invlpg; 5387 5374 } 5388 5375 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg); ··· 5867 5854 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm, 5868 5855 const struct kvm_memory_slot *slot) 5869 5856 { 5870 - bool flush = false; 5871 - 5872 5857 if (kvm_memslots_have_rmaps(kvm)) { 5873 5858 write_lock(&kvm->mmu_lock); 5874 5859 /* ··· 5874 5863 * logging at a 4k granularity and never creates collapsible 5875 5864 * 2m SPTEs during dirty logging. 5876 5865 */ 5877 - flush = slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true); 5878 - if (flush) 5866 + if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true)) 5879 5867 kvm_arch_flush_remote_tlbs_memslot(kvm, slot); 5880 5868 write_unlock(&kvm->mmu_lock); 5881 5869 } 5882 5870 5883 5871 if (is_tdp_mmu_enabled(kvm)) { 5884 5872 read_lock(&kvm->mmu_lock); 5885 - flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush); 5886 - if (flush) 5887 - kvm_arch_flush_remote_tlbs_memslot(kvm, slot); 5873 + kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot); 5888 5874 read_unlock(&kvm->mmu_lock); 5889 5875 } 5890 5876 } ··· 6190 6182 mmu_audit_disable(); 6191 6183 } 6192 6184 6185 + /* 6186 + * Calculate the effective recovery period, accounting for '0' meaning "let KVM 6187 + * select a halving time of 1 hour". Returns true if recovery is enabled. 
6188 + */ 6189 + static bool calc_nx_huge_pages_recovery_period(uint *period) 6190 + { 6191 + /* 6192 + * Use READ_ONCE to get the params, this may be called outside of the 6193 + * param setters, e.g. by the kthread to compute its next timeout. 6194 + */ 6195 + bool enabled = READ_ONCE(nx_huge_pages); 6196 + uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio); 6197 + 6198 + if (!enabled || !ratio) 6199 + return false; 6200 + 6201 + *period = READ_ONCE(nx_huge_pages_recovery_period_ms); 6202 + if (!*period) { 6203 + /* Make sure the period is not less than one second. */ 6204 + ratio = min(ratio, 3600u); 6205 + *period = 60 * 60 * 1000 / ratio; 6206 + } 6207 + return true; 6208 + } 6209 + 6193 6210 static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel_param *kp) 6194 6211 { 6195 6212 bool was_recovery_enabled, is_recovery_enabled; 6196 6213 uint old_period, new_period; 6197 6214 int err; 6198 6215 6199 - was_recovery_enabled = nx_huge_pages_recovery_ratio; 6200 - old_period = nx_huge_pages_recovery_period_ms; 6216 + was_recovery_enabled = calc_nx_huge_pages_recovery_period(&old_period); 6201 6217 6202 6218 err = param_set_uint(val, kp); 6203 6219 if (err) 6204 6220 return err; 6205 6221 6206 - is_recovery_enabled = nx_huge_pages_recovery_ratio; 6207 - new_period = nx_huge_pages_recovery_period_ms; 6222 + is_recovery_enabled = calc_nx_huge_pages_recovery_period(&new_period); 6208 6223 6209 - if (READ_ONCE(nx_huge_pages) && is_recovery_enabled && 6224 + if (is_recovery_enabled && 6210 6225 (!was_recovery_enabled || old_period > new_period)) { 6211 6226 struct kvm *kvm; 6212 6227 ··· 6293 6262 6294 6263 static long get_nx_lpage_recovery_timeout(u64 start_time) 6295 6264 { 6296 - uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio); 6297 - uint period = READ_ONCE(nx_huge_pages_recovery_period_ms); 6265 + bool enabled; 6266 + uint period; 6298 6267 6299 - if (!period && ratio) { 6300 - /* Make sure the period is not less than one second. 
*/ 6301 - ratio = min(ratio, 3600u); 6302 - period = 60 * 60 * 1000 / ratio; 6303 - } 6268 + enabled = calc_nx_huge_pages_recovery_period(&period); 6304 6269 6305 - return READ_ONCE(nx_huge_pages) && ratio 6306 - ? start_time + msecs_to_jiffies(period) - get_jiffies_64() 6307 - : MAX_SCHEDULE_TIMEOUT; 6270 + return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64() 6271 + : MAX_SCHEDULE_TIMEOUT; 6308 6272 } 6309 6273 6310 6274 static int kvm_nx_lpage_recovery_worker(struct kvm *kvm, uintptr_t data)
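The mmu.c hunks fold the "ratio of 0 disables, period of 0 means split an hour by the ratio" rule into `calc_nx_huge_pages_recovery_period()`. Its arithmetic can be checked in isolation; this sketch drops the `READ_ONCE()` accessors and passes the module params as plain arguments:

```c
#include <assert.h>
#include <stdbool.h>

/* Same math as the hunk: ratio is clamped so the period is >= 1 second. */
static bool calc_recovery_period(bool enabled, unsigned int ratio,
                                 unsigned int period_ms, unsigned int *period)
{
    if (!enabled || !ratio)
        return false;

    *period = period_ms;
    if (!*period) {
        /* Make sure the period is not less than one second. */
        if (ratio > 3600u)
            ratio = 3600u;
        *period = 60 * 60 * 1000 / ratio;
    }
    return true;
}
```

With the default ratio of 4 and no explicit period, recovery runs every 15 minutes (900000 ms); an absurdly large ratio still bottoms out at 1000 ms.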
+2 -1
arch/x86/kvm/mmu/paging_tmpl.h
··· 911 911 912 912 r = RET_PF_RETRY; 913 913 write_lock(&vcpu->kvm->mmu_lock); 914 - if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva)) 914 + 915 + if (is_page_fault_stale(vcpu, fault, mmu_seq)) 915 916 goto out_unlock; 916 917 917 918 kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
+14 -24
arch/x86/kvm/mmu/tdp_mmu.c
··· 317 317 struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt)); 318 318 int level = sp->role.level; 319 319 gfn_t base_gfn = sp->gfn; 320 - u64 old_child_spte; 321 - u64 *sptep; 322 - gfn_t gfn; 323 320 int i; 324 321 325 322 trace_kvm_mmu_prepare_zap_page(sp); ··· 324 327 tdp_mmu_unlink_page(kvm, sp, shared); 325 328 326 329 for (i = 0; i < PT64_ENT_PER_PAGE; i++) { 327 - sptep = rcu_dereference(pt) + i; 328 - gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level); 330 + u64 *sptep = rcu_dereference(pt) + i; 331 + gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level); 332 + u64 old_child_spte; 329 333 330 334 if (shared) { 331 335 /* ··· 372 374 shared); 373 375 } 374 376 375 - kvm_flush_remote_tlbs_with_address(kvm, gfn, 377 + kvm_flush_remote_tlbs_with_address(kvm, base_gfn, 376 378 KVM_PAGES_PER_HPAGE(level + 1)); 377 379 378 380 call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback); ··· 1031 1033 { 1032 1034 struct kvm_mmu_page *root; 1033 1035 1034 - for_each_tdp_mmu_root(kvm, root, range->slot->as_id) 1035 - flush |= zap_gfn_range(kvm, root, range->start, range->end, 1036 - range->may_block, flush, false); 1036 + for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false) 1037 + flush = zap_gfn_range(kvm, root, range->start, range->end, 1038 + range->may_block, flush, false); 1037 1039 1038 1040 return flush; 1039 1041 } ··· 1362 1364 * Clear leaf entries which could be replaced by large mappings, for 1363 1365 * GFNs within the slot. 
1364 1366 */ 1365 - static bool zap_collapsible_spte_range(struct kvm *kvm, 1367 + static void zap_collapsible_spte_range(struct kvm *kvm, 1366 1368 struct kvm_mmu_page *root, 1367 - const struct kvm_memory_slot *slot, 1368 - bool flush) 1369 + const struct kvm_memory_slot *slot) 1369 1370 { 1370 1371 gfn_t start = slot->base_gfn; 1371 1372 gfn_t end = start + slot->npages; ··· 1375 1378 1376 1379 tdp_root_for_each_pte(iter, root, start, end) { 1377 1380 retry: 1378 - if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) { 1379 - flush = false; 1381 + if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true)) 1380 1382 continue; 1381 - } 1382 1383 1383 1384 if (!is_shadow_present_pte(iter.old_spte) || 1384 1385 !is_last_spte(iter.old_spte, iter.level)) ··· 1388 1393 pfn, PG_LEVEL_NUM)) 1389 1394 continue; 1390 1395 1396 + /* Note, a successful atomic zap also does a remote TLB flush. */ 1391 1397 if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) { 1392 1398 /* 1393 1399 * The iter must explicitly re-read the SPTE because ··· 1397 1401 iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep)); 1398 1402 goto retry; 1399 1403 } 1400 - flush = true; 1401 1404 } 1402 1405 1403 1406 rcu_read_unlock(); 1404 - 1405 - return flush; 1406 1407 } 1407 1408 1408 1409 /* 1409 1410 * Clear non-leaf entries (and free associated page tables) which could 1410 1411 * be replaced by large mappings, for GFNs within the slot. 
1411 1412 */ 1412 - bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, 1413 - const struct kvm_memory_slot *slot, 1414 - bool flush) 1413 + void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, 1414 + const struct kvm_memory_slot *slot) 1415 1415 { 1416 1416 struct kvm_mmu_page *root; 1417 1417 1418 1418 lockdep_assert_held_read(&kvm->mmu_lock); 1419 1419 1420 1420 for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true) 1421 - flush = zap_collapsible_spte_range(kvm, root, slot, flush); 1422 - 1423 - return flush; 1421 + zap_collapsible_spte_range(kvm, root, slot); 1424 1422 } 1425 1423 1426 1424 /*
+2 -3
arch/x86/kvm/mmu/tdp_mmu.h
··· 64 64 struct kvm_memory_slot *slot, 65 65 gfn_t gfn, unsigned long mask, 66 66 bool wrprot); 67 - bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, 68 - const struct kvm_memory_slot *slot, 69 - bool flush); 67 + void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm, 68 + const struct kvm_memory_slot *slot); 70 69 71 70 bool kvm_tdp_mmu_write_protect_gfn(struct kvm *kvm, 72 71 struct kvm_memory_slot *slot, gfn_t gfn,
+10 -7
arch/x86/kvm/svm/avic.c
··· 900 900 bool svm_check_apicv_inhibit_reasons(ulong bit) 901 901 { 902 902 ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) | 903 + BIT(APICV_INHIBIT_REASON_ABSENT) | 903 904 BIT(APICV_INHIBIT_REASON_HYPERV) | 904 905 BIT(APICV_INHIBIT_REASON_NESTED) | 905 906 BIT(APICV_INHIBIT_REASON_IRQWIN) | ··· 990 989 static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run) 991 990 { 992 991 struct vcpu_svm *svm = to_svm(vcpu); 992 + int cpu = get_cpu(); 993 993 994 + WARN_ON(cpu != vcpu->cpu); 994 995 svm->avic_is_running = is_run; 995 996 996 - if (!kvm_vcpu_apicv_active(vcpu)) 997 - return; 998 - 999 - if (is_run) 1000 - avic_vcpu_load(vcpu, vcpu->cpu); 1001 - else 1002 - avic_vcpu_put(vcpu); 997 + if (kvm_vcpu_apicv_active(vcpu)) { 998 + if (is_run) 999 + avic_vcpu_load(vcpu, cpu); 1000 + else 1001 + avic_vcpu_put(vcpu); 1002 + } 1003 + put_cpu(); 1003 1004 } 1004 1005 1005 1006 void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
+1 -1
arch/x86/kvm/svm/pmu.c
··· 281 281 pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS; 282 282 283 283 pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1; 284 - pmu->reserved_bits = 0xffffffff00200000ull; 284 + pmu->reserved_bits = 0xfffffff000280000ull; 285 285 pmu->version = 1; 286 286 /* not applicable to AMD; but clean them to prevent any fall out */ 287 287 pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+146 -117
arch/x86/kvm/svm/sev.c
··· 1543 1543 return false; 1544 1544 } 1545 1545 1546 - static int sev_lock_for_migration(struct kvm *kvm) 1546 + static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) 1547 1547 { 1548 - struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; 1548 + struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info; 1549 + struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info; 1550 + int r = -EBUSY; 1551 + 1552 + if (dst_kvm == src_kvm) 1553 + return -EINVAL; 1549 1554 1550 1555 /* 1551 - * Bail if this VM is already involved in a migration to avoid deadlock 1552 - * between two VMs trying to migrate to/from each other. 1556 + * Bail if these VMs are already involved in a migration to avoid 1557 + * deadlock between two VMs trying to migrate to/from each other. 1553 1558 */ 1554 - if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1)) 1559 + if (atomic_cmpxchg_acquire(&dst_sev->migration_in_progress, 0, 1)) 1555 1560 return -EBUSY; 1556 1561 1557 - mutex_lock(&kvm->lock); 1562 + if (atomic_cmpxchg_acquire(&src_sev->migration_in_progress, 0, 1)) 1563 + goto release_dst; 1558 1564 1565 + r = -EINTR; 1566 + if (mutex_lock_killable(&dst_kvm->lock)) 1567 + goto release_src; 1568 + if (mutex_lock_killable(&src_kvm->lock)) 1569 + goto unlock_dst; 1559 1570 return 0; 1571 + 1572 + unlock_dst: 1573 + mutex_unlock(&dst_kvm->lock); 1574 + release_src: 1575 + atomic_set_release(&src_sev->migration_in_progress, 0); 1576 + release_dst: 1577 + atomic_set_release(&dst_sev->migration_in_progress, 0); 1578 + return r; 1560 1579 } 1561 1580 1562 - static void sev_unlock_after_migration(struct kvm *kvm) 1581 + static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) 1563 1582 { 1564 - struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info; 1583 + struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info; 1584 + struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info; 1565 1585 1566 - mutex_unlock(&kvm->lock); 1567 - 
atomic_set_release(&sev->migration_in_progress, 0); 1586 + mutex_unlock(&dst_kvm->lock); 1587 + mutex_unlock(&src_kvm->lock); 1588 + atomic_set_release(&dst_sev->migration_in_progress, 0); 1589 + atomic_set_release(&src_sev->migration_in_progress, 0); 1568 1590 } 1569 1591 1570 1592 ··· 1629 1607 dst->asid = src->asid; 1630 1608 dst->handle = src->handle; 1631 1609 dst->pages_locked = src->pages_locked; 1610 + dst->enc_context_owner = src->enc_context_owner; 1632 1611 1633 1612 src->asid = 0; 1634 1613 src->active = false; 1635 1614 src->handle = 0; 1636 1615 src->pages_locked = 0; 1616 + src->enc_context_owner = NULL; 1637 1617 1638 - INIT_LIST_HEAD(&dst->regions_list); 1639 - list_replace_init(&src->regions_list, &dst->regions_list); 1618 + list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list); 1640 1619 } 1641 1620 1642 1621 static int sev_es_migrate_from(struct kvm *dst, struct kvm *src) ··· 1689 1666 bool charged = false; 1690 1667 int ret; 1691 1668 1692 - ret = sev_lock_for_migration(kvm); 1693 - if (ret) 1694 - return ret; 1695 - 1696 - if (sev_guest(kvm)) { 1697 - ret = -EINVAL; 1698 - goto out_unlock; 1699 - } 1700 - 1701 1669 source_kvm_file = fget(source_fd); 1702 1670 if (!file_is_kvm(source_kvm_file)) { 1703 1671 ret = -EBADF; ··· 1696 1682 } 1697 1683 1698 1684 source_kvm = source_kvm_file->private_data; 1699 - ret = sev_lock_for_migration(source_kvm); 1685 + ret = sev_lock_two_vms(kvm, source_kvm); 1700 1686 if (ret) 1701 1687 goto out_fput; 1702 1688 1703 - if (!sev_guest(source_kvm)) { 1689 + if (sev_guest(kvm) || !sev_guest(source_kvm)) { 1704 1690 ret = -EINVAL; 1705 - goto out_source; 1691 + goto out_unlock; 1706 1692 } 1707 1693 1708 1694 src_sev = &to_kvm_svm(source_kvm)->sev_info; 1695 + 1696 + /* 1697 + * VMs mirroring src's encryption context rely on it to keep the 1698 + * ASID allocated, but below we are clearing src_sev->asid. 
1699 + */ 1700 + if (src_sev->num_mirrored_vms) { 1701 + ret = -EBUSY; 1702 + goto out_unlock; 1703 + } 1704 + 1709 1705 dst_sev->misc_cg = get_current_misc_cg(); 1710 1706 cg_cleanup_sev = dst_sev; 1711 1707 if (dst_sev->misc_cg != src_sev->misc_cg) { ··· 1752 1728 sev_misc_cg_uncharge(cg_cleanup_sev); 1753 1729 put_misc_cg(cg_cleanup_sev->misc_cg); 1754 1730 cg_cleanup_sev->misc_cg = NULL; 1755 - out_source: 1756 - sev_unlock_after_migration(source_kvm); 1731 + out_unlock: 1732 + sev_unlock_two_vms(kvm, source_kvm); 1757 1733 out_fput: 1758 1734 if (source_kvm_file) 1759 1735 fput(source_kvm_file); 1760 - out_unlock: 1761 - sev_unlock_after_migration(kvm); 1762 1736 return ret; 1763 1737 } 1764 1738 ··· 1975 1953 { 1976 1954 struct file *source_kvm_file; 1977 1955 struct kvm *source_kvm; 1978 - struct kvm_sev_info source_sev, *mirror_sev; 1956 + struct kvm_sev_info *source_sev, *mirror_sev; 1979 1957 int ret; 1980 1958 1981 1959 source_kvm_file = fget(source_fd); 1982 1960 if (!file_is_kvm(source_kvm_file)) { 1983 1961 ret = -EBADF; 1984 - goto e_source_put; 1962 + goto e_source_fput; 1985 1963 } 1986 1964 1987 1965 source_kvm = source_kvm_file->private_data; 1988 - mutex_lock(&source_kvm->lock); 1966 + ret = sev_lock_two_vms(kvm, source_kvm); 1967 + if (ret) 1968 + goto e_source_fput; 1989 1969 1990 - if (!sev_guest(source_kvm)) { 1970 + /* 1971 + * Mirrors of mirrors should work, but let's not get silly. Also 1972 + * disallow out-of-band SEV/SEV-ES init if the target is already an 1973 + * SEV guest, or if vCPUs have been created. KVM relies on vCPUs being 1974 + * created after SEV/SEV-ES initialization, e.g. to init intercepts. 
1975 + */ 1976 + if (sev_guest(kvm) || !sev_guest(source_kvm) || 1977 + is_mirroring_enc_context(source_kvm) || kvm->created_vcpus) { 1991 1978 ret = -EINVAL; 1992 - goto e_source_unlock; 1979 + goto e_unlock; 1993 1980 } 1994 - 1995 - /* Mirrors of mirrors should work, but let's not get silly */ 1996 - if (is_mirroring_enc_context(source_kvm) || source_kvm == kvm) { 1997 - ret = -EINVAL; 1998 - goto e_source_unlock; 1999 - } 2000 - 2001 - memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info, 2002 - sizeof(source_sev)); 2003 1981 2004 1982 /* 2005 1983 * The mirror kvm holds an enc_context_owner ref so its asid can't 2006 1984 * disappear until we're done with it 2007 1985 */ 1986 + source_sev = &to_kvm_svm(source_kvm)->sev_info; 2008 1987 kvm_get_kvm(source_kvm); 2009 - 2010 - fput(source_kvm_file); 2011 - mutex_unlock(&source_kvm->lock); 2012 - mutex_lock(&kvm->lock); 2013 - 2014 - /* 2015 - * Disallow out-of-band SEV/SEV-ES init if the target is already an 2016 - * SEV guest, or if vCPUs have been created. KVM relies on vCPUs being 2017 - * created after SEV/SEV-ES initialization, e.g. to init intercepts. 
2018 - */ 2019 - if (sev_guest(kvm) || kvm->created_vcpus) { 2020 - ret = -EINVAL; 2021 - goto e_mirror_unlock; 2022 - } 1988 + source_sev->num_mirrored_vms++; 2023 1989 2024 1990 /* Set enc_context_owner and copy its encryption context over */ 2025 1991 mirror_sev = &to_kvm_svm(kvm)->sev_info; 2026 1992 mirror_sev->enc_context_owner = source_kvm; 2027 1993 mirror_sev->active = true; 2028 - mirror_sev->asid = source_sev.asid; 2029 - mirror_sev->fd = source_sev.fd; 2030 - mirror_sev->es_active = source_sev.es_active; 2031 - mirror_sev->handle = source_sev.handle; 1994 + mirror_sev->asid = source_sev->asid; 1995 + mirror_sev->fd = source_sev->fd; 1996 + mirror_sev->es_active = source_sev->es_active; 1997 + mirror_sev->handle = source_sev->handle; 1998 + INIT_LIST_HEAD(&mirror_sev->regions_list); 1999 + ret = 0; 2000 + 2032 2001 /* 2033 2002 * Do not copy ap_jump_table. Since the mirror does not share the same 2034 2003 * KVM contexts as the original, and they may have different 2035 2004 * memory-views. 
2036 2005 */ 2037 2006 2038 - mutex_unlock(&kvm->lock); 2039 - return 0; 2040 - 2041 - e_mirror_unlock: 2042 - mutex_unlock(&kvm->lock); 2043 - kvm_put_kvm(source_kvm); 2044 - return ret; 2045 - e_source_unlock: 2046 - mutex_unlock(&source_kvm->lock); 2047 - e_source_put: 2007 + e_unlock: 2008 + sev_unlock_two_vms(kvm, source_kvm); 2009 + e_source_fput: 2048 2010 if (source_kvm_file) 2049 2011 fput(source_kvm_file); 2050 2012 return ret; ··· 2040 2034 struct list_head *head = &sev->regions_list; 2041 2035 struct list_head *pos, *q; 2042 2036 2037 + WARN_ON(sev->num_mirrored_vms); 2038 + 2043 2039 if (!sev_guest(kvm)) 2044 2040 return; 2045 2041 2046 2042 /* If this is a mirror_kvm release the enc_context_owner and skip sev cleanup */ 2047 2043 if (is_mirroring_enc_context(kvm)) { 2048 - kvm_put_kvm(sev->enc_context_owner); 2044 + struct kvm *owner_kvm = sev->enc_context_owner; 2045 + struct kvm_sev_info *owner_sev = &to_kvm_svm(owner_kvm)->sev_info; 2046 + 2047 + mutex_lock(&owner_kvm->lock); 2048 + if (!WARN_ON(!owner_sev->num_mirrored_vms)) 2049 + owner_sev->num_mirrored_vms--; 2050 + mutex_unlock(&owner_kvm->lock); 2051 + kvm_put_kvm(owner_kvm); 2049 2052 return; 2050 2053 } 2051 - 2052 - mutex_lock(&kvm->lock); 2053 2054 2054 2055 /* 2055 2056 * Ensure that all guest tagged cache entries are flushed before ··· 2076 2063 cond_resched(); 2077 2064 } 2078 2065 } 2079 - 2080 - mutex_unlock(&kvm->lock); 2081 2066 2082 2067 sev_unbind_asid(kvm, sev->handle); 2083 2068 sev_asid_free(sev); ··· 2260 2249 __free_page(virt_to_page(svm->sev_es.vmsa)); 2261 2250 2262 2251 if (svm->sev_es.ghcb_sa_free) 2263 - kfree(svm->sev_es.ghcb_sa); 2252 + kvfree(svm->sev_es.ghcb_sa); 2264 2253 } 2265 2254 2266 2255 static void dump_ghcb(struct vcpu_svm *svm) ··· 2352 2341 memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap)); 2353 2342 } 2354 2343 2355 - static int sev_es_validate_vmgexit(struct vcpu_svm *svm) 2344 + static bool sev_es_validate_vmgexit(struct vcpu_svm 
*svm) 2356 2345 { 2357 2346 struct kvm_vcpu *vcpu; 2358 2347 struct ghcb *ghcb; 2359 - u64 exit_code = 0; 2348 + u64 exit_code; 2349 + u64 reason; 2360 2350 2361 2351 ghcb = svm->sev_es.ghcb; 2362 2352 2363 - /* Only GHCB Usage code 0 is supported */ 2364 - if (ghcb->ghcb_usage) 2365 - goto vmgexit_err; 2366 - 2367 2353 /* 2368 - * Retrieve the exit code now even though is may not be marked valid 2354 + * Retrieve the exit code now even though it may not be marked valid 2369 2355 * as it could help with debugging. 2370 2356 */ 2371 2357 exit_code = ghcb_get_sw_exit_code(ghcb); 2358 + 2359 + /* Only GHCB Usage code 0 is supported */ 2360 + if (ghcb->ghcb_usage) { 2361 + reason = GHCB_ERR_INVALID_USAGE; 2362 + goto vmgexit_err; 2363 + } 2364 + 2365 + reason = GHCB_ERR_MISSING_INPUT; 2372 2366 2373 2367 if (!ghcb_sw_exit_code_is_valid(ghcb) || 2374 2368 !ghcb_sw_exit_info_1_is_valid(ghcb) || ··· 2453 2437 case SVM_VMGEXIT_UNSUPPORTED_EVENT: 2454 2438 break; 2455 2439 default: 2440 + reason = GHCB_ERR_INVALID_EVENT; 2456 2441 goto vmgexit_err; 2457 2442 } 2458 2443 2459 - return 0; 2444 + return true; 2460 2445 2461 2446 vmgexit_err: 2462 2447 vcpu = &svm->vcpu; 2463 2448 2464 - if (ghcb->ghcb_usage) { 2449 + if (reason == GHCB_ERR_INVALID_USAGE) { 2465 2450 vcpu_unimpl(vcpu, "vmgexit: ghcb usage %#x is not valid\n", 2466 2451 ghcb->ghcb_usage); 2452 + } else if (reason == GHCB_ERR_INVALID_EVENT) { 2453 + vcpu_unimpl(vcpu, "vmgexit: exit code %#llx is not valid\n", 2454 + exit_code); 2467 2455 } else { 2468 - vcpu_unimpl(vcpu, "vmgexit: exit reason %#llx is not valid\n", 2456 + vcpu_unimpl(vcpu, "vmgexit: exit code %#llx input is not valid\n", 2469 2457 exit_code); 2470 2458 dump_ghcb(svm); 2471 2459 } 2472 2460 2473 - vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 2474 - vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON; 2475 - vcpu->run->internal.ndata = 2; 2476 - vcpu->run->internal.data[0] = exit_code; 2477 - vcpu->run->internal.data[1] = 
vcpu->arch.last_vmentry_cpu;
2461 + /* Clear the valid entries fields */
2462 + memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
2478 2463
2479 - return -EINVAL;
2464 + ghcb_set_sw_exit_info_1(ghcb, 2);
2465 + ghcb_set_sw_exit_info_2(ghcb, reason);
2466 +
2467 + return false;
2480 2468 }
2481 2469
2482 2470 void sev_es_unmap_ghcb(struct vcpu_svm *svm)
···
2502 2482 svm->sev_es.ghcb_sa_sync = false;
2503 2483 }
2504 2484
2505 - kfree(svm->sev_es.ghcb_sa);
2485 + kvfree(svm->sev_es.ghcb_sa);
2506 2486 svm->sev_es.ghcb_sa = NULL;
2507 2487 svm->sev_es.ghcb_sa_free = false;
2508 2488 }
···
2550 2530 scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
2551 2531 if (!scratch_gpa_beg) {
2552 2532 pr_err("vmgexit: scratch gpa not provided\n");
2553 - return false;
2533 + goto e_scratch;
2554 2534 }
2555 2535
2556 2536 scratch_gpa_end = scratch_gpa_beg + len;
2557 2537 if (scratch_gpa_end < scratch_gpa_beg) {
2558 2538 pr_err("vmgexit: scratch length (%#llx) not valid for scratch address (%#llx)\n",
2559 2539 len, scratch_gpa_beg);
2560 - return false;
2540 + goto e_scratch;
2561 2541 }
2562 2542
2563 2543 if ((scratch_gpa_beg & PAGE_MASK) == control->ghcb_gpa) {
···
2575 2555 scratch_gpa_end > ghcb_scratch_end) {
2576 2556 pr_err("vmgexit: scratch area is outside of GHCB shared buffer area (%#llx - %#llx)\n",
2577 2557 scratch_gpa_beg, scratch_gpa_end);
2578 - return false;
2558 + goto e_scratch;
2579 2559 }
2580 2560
2581 2561 scratch_va = (void *)svm->sev_es.ghcb;
···
2588 2568 if (len > GHCB_SCRATCH_AREA_LIMIT) {
2589 2569 pr_err("vmgexit: scratch area exceeds KVM limits (%#llx requested, %#llx limit)\n",
2590 2570 len, GHCB_SCRATCH_AREA_LIMIT);
2591 - return false;
2571 + goto e_scratch;
2592 2572 }
2593 - scratch_va = kzalloc(len, GFP_KERNEL_ACCOUNT);
2573 + scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
2594 2574 if (!scratch_va)
2595 - return false;
2575 + goto e_scratch;
2596 2576
2597 2577 if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
2598 2578 /* Unable to copy scratch area from guest */
2599 2579 pr_err("vmgexit: kvm_read_guest for scratch area failed\n");
2600 2580
2601 - kfree(scratch_va);
2602 - return false;
2581 + kvfree(scratch_va);
2582 + goto e_scratch;
2603 2583 }
2604 2584
2605 2585 /*
···
2616 2596 svm->sev_es.ghcb_sa_len = len;
2617 2597
2618 2598 return true;
2599 +
2600 + e_scratch:
2601 + ghcb_set_sw_exit_info_1(ghcb, 2);
2602 + ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
2603 +
2604 + return false;
2619 2605 }
2620 2606
2621 2607 static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
···
2672 2646
2673 2647 ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_CPUID);
2674 2648 if (!ret) {
2675 - ret = -EINVAL;
2649 + /* Error, keep GHCB MSR value as-is */
2676 2650 break;
2677 2651 }
2678 2652
···
2708 2682 GHCB_MSR_TERM_REASON_POS);
2709 2683 pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
2710 2684 reason_set, reason_code);
2711 - fallthrough;
2685 +
2686 + ret = -EINVAL;
2687 + break;
2712 2688 }
2713 2689 default:
2714 - ret = -EINVAL;
2690 + /* Error, keep GHCB MSR value as-is */
2691 + break;
2715 2692 }
2716 2693
2717 2694 trace_kvm_vmgexit_msr_protocol_exit(svm->vcpu.vcpu_id,
···
2738 2709
2739 2710 if (!ghcb_gpa) {
2740 2711 vcpu_unimpl(vcpu, "vmgexit: GHCB gpa is not set\n");
2741 - return -EINVAL;
2712 +
2713 + /* Without a GHCB, just return right back to the guest */
2714 + return 1;
2742 2715 }
2743 2716
2744 2717 if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
2745 2718 /* Unable to map GHCB from guest */
2746 2719 vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
2747 2720 ghcb_gpa);
2748 - return -EINVAL;
2721 +
2722 + /* Without a GHCB, just return right back to the guest */
2723 + return 1;
2749 2724 }
2750 2725
2751 2726 svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva;
···
2759 2726
2760 2727 exit_code = ghcb_get_sw_exit_code(ghcb);
2761 2728
2762 - ret = sev_es_validate_vmgexit(svm);
2763 - if (ret)
2764 - return ret;
2729 + if (!sev_es_validate_vmgexit(svm))
2730 + return 1;
2765 2731
2766 2732 sev_es_sync_from_ghcb(svm);
2767 2733 ghcb_set_sw_exit_info_1(ghcb, 0);
2768 2734 ghcb_set_sw_exit_info_2(ghcb, 0);
2769 2735
2770 - ret = -EINVAL;
2736 + ret = 1;
2771 2737 switch (exit_code) {
2772 2738 case SVM_VMGEXIT_MMIO_READ:
2773 2739 if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
···
2807 2775 default:
2808 2776 pr_err("svm: vmgexit: unsupported AP jump table request - exit_info_1=%#llx\n",
2809 2777 control->exit_info_1);
2810 - ghcb_set_sw_exit_info_1(ghcb, 1);
2811 - ghcb_set_sw_exit_info_2(ghcb,
2812 - X86_TRAP_UD |
2813 - SVM_EVTINJ_TYPE_EXEPT |
2814 - SVM_EVTINJ_VALID);
2778 + ghcb_set_sw_exit_info_1(ghcb, 2);
2779 + ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT);
2815 2780 }
2816 2781
2817 - ret = 1;
2818 2782 break;
2819 2783 }
2820 2784 case SVM_VMGEXIT_UNSUPPORTED_EVENT:
2821 2785 vcpu_unimpl(vcpu,
2822 2786 "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
2823 2787 control->exit_info_1, control->exit_info_2);
2788 + ret = -EINVAL;
2824 2789 break;
2825 2790 default:
2826 2791 ret = svm_invoke_exit_handler(vcpu, exit_code);
···
2839 2810 return -EINVAL;
2840 2811
2841 2812 if (!setup_vmgexit_scratch(svm, in, bytes))
2842 - return -EINVAL;
2813 + return 1;
2843 2814
2844 2815 return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
2845 2816 count, in);
-1
arch/x86/kvm/svm/svm.c
··· 4651 4651 .load_eoi_exitmap = svm_load_eoi_exitmap, 4652 4652 .hwapic_irr_update = svm_hwapic_irr_update, 4653 4653 .hwapic_isr_update = svm_hwapic_isr_update, 4654 - .sync_pir_to_irr = kvm_lapic_find_highest_irr, 4655 4654 .apicv_post_state_restore = avic_post_state_restore, 4656 4655 4657 4656 .set_tss_addr = svm_set_tss_addr,
+1
arch/x86/kvm/svm/svm.h
··· 79 79 struct list_head regions_list; /* List of registered regions */ 80 80 u64 ap_jump_table; /* SEV-ES AP Jump Table address */ 81 81 struct kvm *enc_context_owner; /* Owner of copied encryption context */ 82 + unsigned long num_mirrored_vms; /* Number of VMs sharing this ASID */ 82 83 struct misc_cg *misc_cg; /* For misc cgroup accounting */ 83 84 atomic_t migration_in_progress; 84 85 };
+25 -28
arch/x86/kvm/vmx/nested.c
···
1162 1162 WARN_ON(!enable_vpid);
1163 1163
1164 1164 /*
1165 - * If VPID is enabled and used by vmc12, but L2 does not have a unique
1166 - * TLB tag (ASID), i.e. EPT is disabled and KVM was unable to allocate
1167 - * a VPID for L2, flush the current context as the effective ASID is
1168 - * common to both L1 and L2.
1169 - *
1170 - * Defer the flush so that it runs after vmcs02.EPTP has been set by
1171 - * KVM_REQ_LOAD_MMU_PGD (if nested EPT is enabled) and to avoid
1172 - * redundant flushes further down the nested pipeline.
1173 - *
1174 - * If a TLB flush isn't required due to any of the above, and vpid12 is
1175 - * changing then the new "virtual" VPID (vpid12) will reuse the same
1176 - * "real" VPID (vpid02), and so needs to be flushed. There's no direct
1177 - * mapping between vpid02 and vpid12, vpid02 is per-vCPU and reused for
1178 - * all nested vCPUs. Remember, a flush on VM-Enter does not invalidate
1179 - * guest-physical mappings, so there is no need to sync the nEPT MMU.
1165 + * VPID is enabled and in use by vmcs12. If vpid12 is changing, then
1166 + * emulate a guest TLB flush as KVM does not track vpid12 history nor
1167 + * is the VPID incorporated into the MMU context. I.e. KVM must assume
1168 + * that the new vpid12 has never been used and thus represents a new
1169 + * guest ASID that cannot have entries in the TLB.
1180 1170 */
1181 - if (!nested_has_guest_tlb_tag(vcpu)) {
1182 - kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
1183 - } else if (is_vmenter &&
1184 - vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
1171 + if (is_vmenter && vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
1185 1172 vmx->nested.last_vpid = vmcs12->virtual_processor_id;
1186 - vpid_sync_context(nested_get_vpid02(vcpu));
1173 + kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
1174 + return;
1187 1175 }
1176 +
1177 + /*
1178 + * If VPID is enabled, used by vmc12, and vpid12 is not changing but
1179 + * does not have a unique TLB tag (ASID), i.e. EPT is disabled and
1180 + * KVM was unable to allocate a VPID for L2, flush the current context
1181 + * as the effective ASID is common to both L1 and L2.
1182 + */
1183 + if (!nested_has_guest_tlb_tag(vcpu))
1184 + kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
1188 1185 }
1189 1186
1190 1187 static bool is_bitwise_subset(u64 superset, u64 subset, u64 mask)
···
2591 2594
2592 2595 if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
2593 2596 WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
2594 - vmcs12->guest_ia32_perf_global_ctrl)))
2597 + vmcs12->guest_ia32_perf_global_ctrl))) {
2598 + *entry_failure_code = ENTRY_FAIL_DEFAULT;
2595 2599 return -EINVAL;
2600 + }
2596 2601
2597 2602 kvm_rsp_write(vcpu, vmcs12->guest_rsp);
2598 2603 kvm_rip_write(vcpu, vmcs12->guest_rip);
···
3343 3344 };
3344 3345 u32 failed_index;
3345 3346
3346 - if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
3347 - kvm_vcpu_flush_tlb_current(vcpu);
3347 + kvm_service_local_tlb_flush_requests(vcpu);
3348 3348
3349 3349 evaluate_pending_interrupts = exec_controls_get(vmx) &
3350 3350 (CPU_BASED_INTR_WINDOW_EXITING | CPU_BASED_NMI_WINDOW_EXITING);
···
4500 4502 (void)nested_get_evmcs_page(vcpu);
4501 4503 }
4502 4504
4503 - /* Service the TLB flush request for L2 before switching to L1. */
4504 - if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
4505 - kvm_vcpu_flush_tlb_current(vcpu);
4505 + /* Service pending TLB flush requests for L2 before switching to L1. */
4506 + kvm_service_local_tlb_flush_requests(vcpu);
4506 4507
4507 4508 /*
4508 4509 * VCPU_EXREG_PDPTR will be clobbered in arch/x86/kvm/vmx/vmx.h between
···
4854 4857 if (!vmx->nested.cached_vmcs12)
4855 4858 goto out_cached_vmcs12;
4856 4859
4860 + vmx->nested.shadow_vmcs12_cache.gpa = INVALID_GPA;
4857 4861 vmx->nested.cached_shadow_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL_ACCOUNT);
4858 4862 if (!vmx->nested.cached_shadow_vmcs12)
4859 4863 goto out_cached_shadow_vmcs12;
···
5287 5289 struct gfn_to_hva_cache *ghc = &vmx->nested.vmcs12_cache;
5288 5290 struct vmcs_hdr hdr;
5289 5291
5290 - if (ghc->gpa != vmptr &&
5291 - kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
5292 + if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
5292 5293 /*
5293 5294 * Reads from an unbacked page return all 1s,
5294 5295 * which means that the 32 bits located at the
+11 -9
arch/x86/kvm/vmx/posted_intr.c
··· 5 5 #include <asm/cpu.h> 6 6 7 7 #include "lapic.h" 8 + #include "irq.h" 8 9 #include "posted_intr.h" 9 10 #include "trace.h" 10 11 #include "vmx.h" ··· 78 77 pi_set_on(pi_desc); 79 78 } 80 79 80 + static bool vmx_can_use_vtd_pi(struct kvm *kvm) 81 + { 82 + return irqchip_in_kernel(kvm) && enable_apicv && 83 + kvm_arch_has_assigned_device(kvm) && 84 + irq_remapping_cap(IRQ_POSTING_CAP); 85 + } 86 + 81 87 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu) 82 88 { 83 89 struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu); 84 90 85 - if (!kvm_arch_has_assigned_device(vcpu->kvm) || 86 - !irq_remapping_cap(IRQ_POSTING_CAP) || 87 - !kvm_vcpu_apicv_active(vcpu)) 91 + if (!vmx_can_use_vtd_pi(vcpu->kvm)) 88 92 return; 89 93 90 94 /* Set SN when the vCPU is preempted */ ··· 147 141 struct pi_desc old, new; 148 142 struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu); 149 143 150 - if (!kvm_arch_has_assigned_device(vcpu->kvm) || 151 - !irq_remapping_cap(IRQ_POSTING_CAP) || 152 - !kvm_vcpu_apicv_active(vcpu)) 144 + if (!vmx_can_use_vtd_pi(vcpu->kvm)) 153 145 return 0; 154 146 155 147 WARN_ON(irqs_disabled()); ··· 274 270 struct vcpu_data vcpu_info; 275 271 int idx, ret = 0; 276 272 277 - if (!kvm_arch_has_assigned_device(kvm) || 278 - !irq_remapping_cap(IRQ_POSTING_CAP) || 279 - !kvm_vcpu_apicv_active(kvm->vcpus[0])) 273 + if (!vmx_can_use_vtd_pi(kvm)) 280 274 return 0; 281 275 282 276 idx = srcu_read_lock(&kvm->irq_srcu);
+42 -25
arch/x86/kvm/vmx/vmx.c
···
2918 2918 }
2919 2919 }
2920 2920
2921 + static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
2922 + {
2923 + if (is_guest_mode(vcpu))
2924 + return nested_get_vpid02(vcpu);
2925 + return to_vmx(vcpu)->vpid;
2926 + }
2927 +
2921 2928 static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
2922 2929 {
2923 2930 struct kvm_mmu *mmu = vcpu->arch.mmu;
···
2937 2930 if (enable_ept)
2938 2931 ept_sync_context(construct_eptp(vcpu, root_hpa,
2939 2932 mmu->shadow_root_level));
2940 - else if (!is_guest_mode(vcpu))
2941 - vpid_sync_context(to_vmx(vcpu)->vpid);
2942 2933 else
2943 - vpid_sync_context(nested_get_vpid02(vcpu));
2934 + vpid_sync_context(vmx_get_current_vpid(vcpu));
2944 2935 }
2945 2936
2946 2937 static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
2947 2938 {
2948 2939 /*
2949 - * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
2940 + * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
2950 2941 * vmx_flush_tlb_guest() for an explanation of why this is ok.
2951 2942 */
2952 - vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
2943 + vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
2953 2944 }
2954 2945
2955 2946 static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
2956 2947 {
2957 2948 /*
2958 - * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
2959 - * or a vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit
2960 - * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
2949 + * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
2950 + * vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit are
2951 + * required to flush GVA->{G,H}PA mappings from the TLB if vpid is
2961 2952 * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
2962 2953 * i.e. no explicit INVVPID is necessary.
2963 2954 */
2964 - vpid_sync_context(to_vmx(vcpu)->vpid);
2955 + vpid_sync_context(vmx_get_current_vpid(vcpu));
2965 2956 }
2966 2957
2967 2958 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)
···
6267 6262 {
6268 6263 struct vcpu_vmx *vmx = to_vmx(vcpu);
6269 6264 int max_irr;
6270 - bool max_irr_updated;
6265 + bool got_posted_interrupt;
6271 6266
6272 - if (KVM_BUG_ON(!vcpu->arch.apicv_active, vcpu->kvm))
6267 + if (KVM_BUG_ON(!enable_apicv, vcpu->kvm))
6273 6268 return -EIO;
6274 6269
6275 6270 if (pi_test_on(&vmx->pi_desc)) {
···
6279 6274 * But on x86 this is just a compiler barrier anyway.
6280 6275 */
6281 6276 smp_mb__after_atomic();
6282 - max_irr_updated =
6277 + got_posted_interrupt =
6283 6278 kvm_apic_update_irr(vcpu, vmx->pi_desc.pir, &max_irr);
6284 -
6285 - /*
6286 - * If we are running L2 and L1 has a new pending interrupt
6287 - * which can be injected, this may cause a vmexit or it may
6288 - * be injected into L2. Either way, this interrupt will be
6289 - * processed via KVM_REQ_EVENT, not RVI, because we do not use
6290 - * virtual interrupt delivery to inject L1 interrupts into L2.
6291 - */
6292 - if (is_guest_mode(vcpu) && max_irr_updated)
6293 - kvm_make_request(KVM_REQ_EVENT, vcpu);
6294 6279 } else {
6295 6280 max_irr = kvm_lapic_find_highest_irr(vcpu);
6281 + got_posted_interrupt = false;
6296 6282 }
6297 - vmx_hwapic_irr_update(vcpu, max_irr);
6283 +
6284 + /*
6285 + * Newly recognized interrupts are injected via either virtual interrupt
6286 + * delivery (RVI) or KVM_REQ_EVENT. Virtual interrupt delivery is
6287 + * disabled in two cases:
6288 + *
6289 + * 1) If L2 is running and the vCPU has a new pending interrupt. If L1
6290 + * wants to exit on interrupts, KVM_REQ_EVENT is needed to synthesize a
6291 + * VM-Exit to L1. If L1 doesn't want to exit, the interrupt is injected
6292 + * into L2, but KVM doesn't use virtual interrupt delivery to inject
6293 + * interrupts into L2, and so KVM_REQ_EVENT is again needed.
6294 + *
6295 + * 2) If APICv is disabled for this vCPU, assigned devices may still
6296 + * attempt to post interrupts. The posted interrupt vector will cause
6297 + * a VM-Exit and the subsequent entry will call sync_pir_to_irr.
6298 + */
6299 + if (!is_guest_mode(vcpu) && kvm_vcpu_apicv_active(vcpu))
6300 + vmx_set_rvi(max_irr);
6301 + else if (got_posted_interrupt)
6302 + kvm_make_request(KVM_REQ_EVENT, vcpu);
6303 +
6298 6304 return max_irr;
6299 6305 }
···
7525 7509 static bool vmx_check_apicv_inhibit_reasons(ulong bit)
7526 7510 {
7527 7511 ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
7512 + BIT(APICV_INHIBIT_REASON_ABSENT) |
7528 7513 BIT(APICV_INHIBIT_REASON_HYPERV) |
7529 7514 BIT(APICV_INHIBIT_REASON_BLOCKIRQ);
7530 7515
···
7778 7761 ple_window_shrink = 0;
7779 7762 }
7780 7763
7781 - if (!cpu_has_vmx_apicv()) {
7764 + if (!cpu_has_vmx_apicv())
7782 7765 enable_apicv = 0;
7766 + if (!enable_apicv)
7783 7767 vmx_x86_ops.sync_pir_to_irr = NULL;
7784 - }
7785 7768
7786 7769 if (cpu_has_vmx_tsc_scaling()) {
7787 7770 kvm_has_tsc_control = true;
+58 -17
arch/x86/kvm/x86.c
···
3258 3258 static_call(kvm_x86_tlb_flush_guest)(vcpu);
3259 3259 }
3260 3260
3261 +
3262 + static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
3263 + {
3264 + ++vcpu->stat.tlb_flush;
3265 + static_call(kvm_x86_tlb_flush_current)(vcpu);
3266 + }
3267 +
3268 + /*
3269 + * Service "local" TLB flush requests, which are specific to the current MMU
3270 + * context. In addition to the generic event handling in vcpu_enter_guest(),
3271 + * TLB flushes that are targeted at an MMU context also need to be serviced
3272 + * prior before nested VM-Enter/VM-Exit.
3273 + */
3274 + void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
3275 + {
3276 + if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
3277 + kvm_vcpu_flush_tlb_current(vcpu);
3278 +
3279 + if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
3280 + kvm_vcpu_flush_tlb_guest(vcpu);
3281 + }
3282 + EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
3283 +
3261 3284 static void record_steal_time(struct kvm_vcpu *vcpu)
3262 3285 {
3263 3286 struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
···
4156 4133 case KVM_CAP_SGX_ATTRIBUTE:
4157 4134 #endif
4158 4135 case KVM_CAP_VM_COPY_ENC_CONTEXT_FROM:
4136 + case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
4159 4137 case KVM_CAP_SREGS2:
4160 4138 case KVM_CAP_EXIT_ON_EMULATION_FAILURE:
4161 4139 case KVM_CAP_VCPU_ATTRIBUTES:
···
4472 4448 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
4473 4449 struct kvm_lapic_state *s)
4474 4450 {
4475 - if (vcpu->arch.apicv_active)
4476 - static_call(kvm_x86_sync_pir_to_irr)(vcpu);
4451 + static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
4477 4452
4478 4453 return kvm_apic_get_state(vcpu, s);
4479 4454 }
···
5147 5124 struct kvm_cpuid __user *cpuid_arg = argp;
5148 5125 struct kvm_cpuid cpuid;
5149 5126
5127 + /*
5128 + * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
5129 + * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
5130 + * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page
5131 + * faults due to reusing SPs/SPTEs. In practice no sane VMM mucks with
5132 + * the core vCPU model on the fly, so fail.
5133 + */
5134 + r = -EINVAL;
5135 + if (vcpu->arch.last_vmentry_cpu != -1)
5136 + goto out;
5137 +
5150 5138 r = -EFAULT;
5151 5139 if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
5152 5140 goto out;
···
5167 5133 case KVM_SET_CPUID2: {
5168 5134 struct kvm_cpuid2 __user *cpuid_arg = argp;
5169 5135 struct kvm_cpuid2 cpuid;
5136 +
5137 + /*
5138 + * KVM_SET_CPUID{,2} after KVM_RUN is forbidded, see the comment in
5139 + * KVM_SET_CPUID case above.
5140 + */
5141 + r = -EINVAL;
5142 + if (vcpu->arch.last_vmentry_cpu != -1)
5143 + goto out;
5170 5144
5171 5145 r = -EFAULT;
5172 5146 if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
···
5740 5698 smp_wmb();
5741 5699 kvm->arch.irqchip_mode = KVM_IRQCHIP_SPLIT;
5742 5700 kvm->arch.nr_reserved_ioapic_pins = cap->args[0];
5701 + kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT);
5743 5702 r = 0;
5744 5703 split_irqchip_unlock:
5745 5704 mutex_unlock(&kvm->lock);
···
6121 6078 /* Write kvm->irq_routing before enabling irqchip_in_kernel. */
6122 6079 smp_wmb();
6123 6080 kvm->arch.irqchip_mode = KVM_IRQCHIP_KERNEL;
6081 + kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT);
6124 6082 create_irqchip_unlock:
6125 6083 mutex_unlock(&kvm->lock);
6126 6084 break;
···
8820 8776 {
8821 8777 init_rwsem(&kvm->arch.apicv_update_lock);
8822 8778
8823 - if (enable_apicv)
8824 - clear_bit(APICV_INHIBIT_REASON_DISABLE,
8825 - &kvm->arch.apicv_inhibit_reasons);
8826 - else
8779 + set_bit(APICV_INHIBIT_REASON_ABSENT,
8780 + &kvm->arch.apicv_inhibit_reasons);
8781 + if (!enable_apicv)
8827 8782 set_bit(APICV_INHIBIT_REASON_DISABLE,
8828 8783 &kvm->arch.apicv_inhibit_reasons);
8829 8784 }
···
9571 9528 if (irqchip_split(vcpu->kvm))
9572 9529 kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
9573 9530 else {
9574 - if (vcpu->arch.apicv_active)
9575 - static_call(kvm_x86_sync_pir_to_irr)(vcpu);
9531 + static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
9576 9532 if (ioapic_in_kernel(vcpu->kvm))
9577 9533 kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
9578 9534 }
···
9690 9648 /* Flushing all ASIDs flushes the current ASID... */
9691 9649 kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
9692 9650 }
9693 - if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
9694 - kvm_vcpu_flush_tlb_current(vcpu);
9695 - if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
9696 - kvm_vcpu_flush_tlb_guest(vcpu);
9651 + kvm_service_local_tlb_flush_requests(vcpu);
9697 9652
9698 9653 if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
9699 9654 vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
···
9841 9802
9842 9803 /*
9843 9804 * This handles the case where a posted interrupt was
9844 - * notified with kvm_vcpu_kick.
9805 + * notified with kvm_vcpu_kick. Assigned devices can
9806 + * use the POSTED_INTR_VECTOR even if APICv is disabled,
9807 + * so do it even if APICv is disabled on this vCPU.
9845 9808 */
9846 - if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
9847 - static_call(kvm_x86_sync_pir_to_irr)(vcpu);
9809 + if (kvm_lapic_enabled(vcpu))
9810 + static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
9848 9811
9849 9812 if (kvm_vcpu_exit_request(vcpu)) {
9850 9813 vcpu->mode = OUTSIDE_GUEST_MODE;
···
9890 9849 if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
9891 9850 break;
9892 9851
9893 - if (vcpu->arch.apicv_active)
9894 - static_call(kvm_x86_sync_pir_to_irr)(vcpu);
9852 + if (kvm_lapic_enabled(vcpu))
9853 + static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
9895 9854
9896 9855 if (unlikely(kvm_vcpu_exit_request(vcpu))) {
9897 9856 exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
+1 -6
arch/x86/kvm/x86.h
··· 103 103 104 104 #define MSR_IA32_CR_PAT_DEFAULT 0x0007040600070406ULL 105 105 106 + void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu); 106 107 int kvm_check_nested_events(struct kvm_vcpu *vcpu); 107 108 108 109 static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu) ··· 184 183 static inline bool mmu_is_nested(struct kvm_vcpu *vcpu) 185 184 { 186 185 return vcpu->arch.walk_mmu == &vcpu->arch.nested_mmu; 187 - } 188 - 189 - static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu) 190 - { 191 - ++vcpu->stat.tlb_flush; 192 - static_call(kvm_x86_tlb_flush_current)(vcpu); 193 186 } 194 187 195 188 static inline int is_pae(struct kvm_vcpu *vcpu)
+11 -1
arch/x86/realmode/init.c
··· 72 72 #ifdef CONFIG_X86_64 73 73 u64 *trampoline_pgd; 74 74 u64 efer; 75 + int i; 75 76 #endif 76 77 77 78 base = (unsigned char *)real_mode_header; ··· 129 128 trampoline_header->flags = 0; 130 129 131 130 trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd); 131 + 132 + /* Map the real mode stub as virtual == physical */ 132 133 trampoline_pgd[0] = trampoline_pgd_entry.pgd; 133 - trampoline_pgd[511] = init_top_pgt[511].pgd; 134 + 135 + /* 136 + * Include the entirety of the kernel mapping into the trampoline 137 + * PGD. This way, all mappings present in the normal kernel page 138 + * tables are usable while running on trampoline_pgd. 139 + */ 140 + for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++) 141 + trampoline_pgd[i] = init_top_pgt[i].pgd; 134 142 #endif 135 143 136 144 sme_sev_setup_real_mode(trampoline_header);
+20
arch/x86/xen/xen-asm.S
··· 20 20 21 21 #include <linux/init.h> 22 22 #include <linux/linkage.h> 23 + #include <../entry/calling.h> 23 24 24 25 .pushsection .noinstr.text, "ax" 25 26 /* ··· 192 191 pushq $0 193 192 jmp hypercall_iret 194 193 SYM_CODE_END(xen_iret) 194 + 195 + /* 196 + * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is 197 + * also the kernel stack. Reusing swapgs_restore_regs_and_return_to_usermode() 198 + * in XEN pv would cause %rsp to move up to the top of the kernel stack and 199 + * leave the IRET frame below %rsp, which is dangerous to be corrupted if #NMI 200 + * interrupts. And swapgs_restore_regs_and_return_to_usermode() pushing the IRET 201 + * frame at the same address is useless. 202 + */ 203 + SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode) 204 + UNWIND_HINT_REGS 205 + POP_REGS 206 + 207 + /* stackleak_erase() can work safely on the kernel stack. */ 208 + STACKLEAK_ERASE_NOCLOBBER 209 + 210 + addq $8, %rsp /* skip regs->orig_ax */ 211 + jmp xen_iret 212 + SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode) 195 213 196 214 /* 197 215 * Xen handles syscall callbacks much like ordinary exceptions, which
+1 -1
drivers/ata/libata-sata.c
··· 827 827 if (ap->target_lpm_policy >= ARRAY_SIZE(ata_lpm_policy_names)) 828 828 return -EINVAL; 829 829 830 - return snprintf(buf, PAGE_SIZE, "%s\n", 830 + return sysfs_emit(buf, "%s\n", 831 831 ata_lpm_policy_names[ap->target_lpm_policy]); 832 832 } 833 833 DEVICE_ATTR(link_power_management_policy, S_IRUGO | S_IWUSR,
+8 -8
drivers/ata/pata_falcon.c
··· 55 55 /* Transfer multiple of 2 bytes */ 56 56 if (rw == READ) { 57 57 if (swap) 58 - raw_insw_swapw((u16 *)data_addr, (u16 *)buf, words); 58 + raw_insw_swapw(data_addr, (u16 *)buf, words); 59 59 else 60 - raw_insw((u16 *)data_addr, (u16 *)buf, words); 60 + raw_insw(data_addr, (u16 *)buf, words); 61 61 } else { 62 62 if (swap) 63 - raw_outsw_swapw((u16 *)data_addr, (u16 *)buf, words); 63 + raw_outsw_swapw(data_addr, (u16 *)buf, words); 64 64 else 65 - raw_outsw((u16 *)data_addr, (u16 *)buf, words); 65 + raw_outsw(data_addr, (u16 *)buf, words); 66 66 } 67 67 68 68 /* Transfer trailing byte, if any. */ ··· 74 74 75 75 if (rw == READ) { 76 76 if (swap) 77 - raw_insw_swapw((u16 *)data_addr, (u16 *)pad, 1); 77 + raw_insw_swapw(data_addr, (u16 *)pad, 1); 78 78 else 79 - raw_insw((u16 *)data_addr, (u16 *)pad, 1); 79 + raw_insw(data_addr, (u16 *)pad, 1); 80 80 *buf = pad[0]; 81 81 } else { 82 82 pad[0] = *buf; 83 83 if (swap) 84 - raw_outsw_swapw((u16 *)data_addr, (u16 *)pad, 1); 84 + raw_outsw_swapw(data_addr, (u16 *)pad, 1); 85 85 else 86 - raw_outsw((u16 *)data_addr, (u16 *)pad, 1); 86 + raw_outsw(data_addr, (u16 *)pad, 1); 87 87 } 88 88 words++; 89 89 }
+13 -7
drivers/ata/sata_fsl.c
··· 1394 1394 return 0; 1395 1395 } 1396 1396 1397 + static void sata_fsl_host_stop(struct ata_host *host) 1398 + { 1399 + struct sata_fsl_host_priv *host_priv = host->private_data; 1400 + 1401 + iounmap(host_priv->hcr_base); 1402 + kfree(host_priv); 1403 + } 1404 + 1397 1405 /* 1398 1406 * scsi mid-layer and libata interface structures 1399 1407 */ ··· 1433 1425 1434 1426 .port_start = sata_fsl_port_start, 1435 1427 .port_stop = sata_fsl_port_stop, 1428 + 1429 + .host_stop = sata_fsl_host_stop, 1436 1430 1437 1431 .pmp_attach = sata_fsl_pmp_attach, 1438 1432 .pmp_detach = sata_fsl_pmp_detach, ··· 1490 1480 host_priv->ssr_base = ssr_base; 1491 1481 host_priv->csr_base = csr_base; 1492 1482 1493 - irq = irq_of_parse_and_map(ofdev->dev.of_node, 0); 1494 - if (!irq) { 1495 - dev_err(&ofdev->dev, "invalid irq from platform\n"); 1483 + irq = platform_get_irq(ofdev, 0); 1484 + if (irq < 0) { 1485 + retval = irq; 1496 1486 goto error_exit_with_cleanup; 1497 1487 } 1498 1488 host_priv->irq = irq; ··· 1566 1556 device_remove_file(&ofdev->dev, &host_priv->rx_watermark); 1567 1557 1568 1558 ata_host_detach(host); 1569 - 1570 - irq_dispose_mapping(host_priv->irq); 1571 - iounmap(host_priv->hcr_base); 1572 - kfree(host_priv); 1573 1559 1574 1560 return 0; 1575 1561 }
+1 -1
drivers/block/loop.c
··· 2103 2103 int ret; 2104 2104 2105 2105 if (idx < 0) { 2106 - pr_warn("deleting an unspecified loop device is not supported.\n"); 2106 + pr_warn_once("deleting an unspecified loop device is not supported.\n"); 2107 2107 return -EINVAL; 2108 2108 } 2109 2109
+3 -3
drivers/char/agp/parisc-agp.c
··· 281 281 return 0; 282 282 } 283 283 284 - static int 284 + static int __init 285 285 lba_find_capability(int cap) 286 286 { 287 287 struct _parisc_agp_info *info = &parisc_agp_info; ··· 366 366 return error; 367 367 } 368 368 369 - static int 369 + static int __init 370 370 find_quicksilver(struct device *dev, void *data) 371 371 { 372 372 struct parisc_device **lba = data; ··· 378 378 return 0; 379 379 } 380 380 381 - static int 381 + static int __init 382 382 parisc_agp_init(void) 383 383 { 384 384 extern struct sba_device *sba_list;
+33 -8
drivers/char/ipmi/ipmi_msghandler.c
···
191 191 struct work_struct remove_work;
192 192 };
193 193
194 + static struct workqueue_struct *remove_work_wq;
195 +
194 196 static struct ipmi_user *acquire_ipmi_user(struct ipmi_user *user, int *index)
195 197 __acquires(user->release_barrier)
196 198 {
···
1299 1297 struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
1300 1298
1301 1299 /* SRCU cleanup must happen in task context. */
1302 - schedule_work(&user->remove_work);
1300 + queue_work(remove_work_wq, &user->remove_work);
1303 1301 }
1304 1302
1305 1303 static void _ipmi_destroy_user(struct ipmi_user *user)
···
3920 3918 /* We didn't find a user, deliver an error response. */
3921 3919 ipmi_inc_stat(intf, unhandled_commands);
3922 3920
3923 - msg->data[0] = ((netfn + 1) << 2) | (msg->rsp[4] & 0x3);
3924 - msg->data[1] = msg->rsp[2];
3925 - msg->data[2] = msg->rsp[4] & ~0x3;
3921 + msg->data[0] = (netfn + 1) << 2;
3922 + msg->data[0] |= msg->rsp[2] & 0x3; /* rqLUN */
3923 + msg->data[1] = msg->rsp[1]; /* Addr */
3924 + msg->data[2] = msg->rsp[2] & ~0x3; /* rqSeq */
3925 + msg->data[2] |= msg->rsp[0] & 0x3; /* rsLUN */
3926 3926 msg->data[3] = cmd;
3927 3927 msg->data[4] = IPMI_INVALID_CMD_COMPLETION_CODE;
3928 3928 msg->data_size = 5;
···
4459 4455 msg->rsp[2] = IPMI_ERR_UNSPECIFIED;
4460 4456 msg->rsp_size = 3;
4461 4457 } else if (msg->type == IPMI_SMI_MSG_TYPE_IPMB_DIRECT) {
4462 - /* commands must have at least 3 bytes, responses 4. */
4463 - if (is_cmd && (msg->rsp_size < 3)) {
4458 + /* commands must have at least 4 bytes, responses 5. */
4459 + if (is_cmd && (msg->rsp_size < 4)) {
4464 4460 ipmi_inc_stat(intf, invalid_commands);
4465 4461 goto out;
4466 4462 }
4467 - if (!is_cmd && (msg->rsp_size < 4))
4468 - goto return_unspecified;
4463 + if (!is_cmd && (msg->rsp_size < 5)) {
4464 + ipmi_inc_stat(intf, invalid_ipmb_responses);
4465 + /* Construct a valid error response. */
4466 + msg->rsp[0] = msg->data[0] & 0xfc; /* NetFN */
4467 + msg->rsp[0] |= (1 << 2); /* Make it a response */
4468 + msg->rsp[0] |= msg->data[2] & 3; /* rqLUN */
4469 + msg->rsp[1] = msg->data[1]; /* Addr */
4470 + msg->rsp[2] = msg->data[2] & 0xfc; /* rqSeq */
4471 + msg->rsp[2] |= msg->data[0] & 0x3; /* rsLUN */
4472 + msg->rsp[3] = msg->data[3]; /* Cmd */
4473 + msg->rsp[4] = IPMI_ERR_UNSPECIFIED;
4474 + msg->rsp_size = 5;
4475 + }
4469 4476 } else if ((msg->data_size >= 2)
4470 4477 && (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
4471 4478 && (msg->data[1] == IPMI_SEND_MSG_CMD)
···
5046 5031 if (rv) {
5047 5032 rv->done = free_smi_msg;
5048 5033 rv->user_data = NULL;
5034 + rv->type = IPMI_SMI_MSG_TYPE_NORMAL;
5049 5035 atomic_inc(&smi_msg_inuse_count);
5050 5036 }
5051 5037 return rv;
···
5399 5383
5400 5384 atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
5401 5385
5386 + remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
5387 + if (!remove_work_wq) {
5388 + pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
5389 + rv = -ENOMEM;
5390 + goto out;
5391 + }
5392 +
5402 5393 initialized = true;
5403 5394
5404 5395 out:
···
5431 5408 int count;
5432 5409
5433 5410 if (initialized) {
5411 + destroy_workqueue(remove_work_wq);
5412 +
5434 5413 atomic_notifier_chain_unregister(&panic_notifier_list,
5435 5414 &panic_block);
5436 5415
+7 -7
drivers/cpufreq/cpufreq.c
··· 1004 1004 .release = cpufreq_sysfs_release, 1005 1005 }; 1006 1006 1007 - static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu) 1007 + static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu, 1008 + struct device *dev) 1008 1009 { 1009 - struct device *dev = get_cpu_device(cpu); 1010 - 1011 1010 if (unlikely(!dev)) 1012 1011 return; 1013 1012 ··· 1295 1296 1296 1297 if (policy->max_freq_req) { 1297 1298 /* 1298 - * CPUFREQ_CREATE_POLICY notification is sent only after 1299 - * successfully adding max_freq_req request. 1299 + * Remove max_freq_req after sending CPUFREQ_REMOVE_POLICY 1300 + * notification, since CPUFREQ_CREATE_POLICY notification was 1301 + * sent after adding max_freq_req earlier. 1300 1302 */ 1301 1303 blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1302 1304 CPUFREQ_REMOVE_POLICY, policy); ··· 1391 1391 if (new_policy) { 1392 1392 for_each_cpu(j, policy->related_cpus) { 1393 1393 per_cpu(cpufreq_cpu_data, j) = policy; 1394 - add_cpu_dev_symlink(policy, j); 1394 + add_cpu_dev_symlink(policy, j, get_cpu_device(j)); 1395 1395 } 1396 1396 1397 1397 policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req), ··· 1565 1565 /* Create sysfs link on CPU registration */ 1566 1566 policy = per_cpu(cpufreq_cpu_data, cpu); 1567 1567 if (policy) 1568 - add_cpu_dev_symlink(policy, cpu); 1568 + add_cpu_dev_symlink(policy, cpu, dev); 1569 1569 1570 1570 return 0; 1571 1571 }
+1 -1
drivers/dma-buf/heaps/system_heap.c
··· 290 290 int i; 291 291 292 292 table = &buffer->sg_table; 293 - for_each_sg(table->sgl, sg, table->nents, i) { 293 + for_each_sgtable_sg(table, sg, i) { 294 294 struct page *page = sg_page(sg); 295 295 296 296 __free_pages(page, compound_order(page));
+5 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1396 1396 struct sg_table *sg = NULL; 1397 1397 uint64_t user_addr = 0; 1398 1398 struct amdgpu_bo *bo; 1399 - struct drm_gem_object *gobj; 1399 + struct drm_gem_object *gobj = NULL; 1400 1400 u32 domain, alloc_domain; 1401 1401 u64 alloc_flags; 1402 1402 int ret; ··· 1506 1506 remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info); 1507 1507 drm_vma_node_revoke(&gobj->vma_node, drm_priv); 1508 1508 err_node_allow: 1509 - drm_gem_object_put(gobj); 1510 1509 /* Don't unreserve system mem limit twice */ 1511 1510 goto err_reserve_limit; 1512 1511 err_bo_create: 1513 1512 unreserve_mem_limit(adev, size, alloc_domain, !!sg); 1514 1513 err_reserve_limit: 1515 1514 mutex_destroy(&(*mem)->lock); 1516 - kfree(*mem); 1515 + if (gobj) 1516 + drm_gem_object_put(gobj); 1517 + else 1518 + kfree(*mem); 1517 1519 err: 1518 1520 if (sg) { 1519 1521 sg_free_table(sg);
+10 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3833 3833 /* disable all interrupts */ 3834 3834 amdgpu_irq_disable_all(adev); 3835 3835 if (adev->mode_info.mode_config_initialized){ 3836 - if (!amdgpu_device_has_dc_support(adev)) 3836 + if (!drm_drv_uses_atomic_modeset(adev_to_drm(adev))) 3837 3837 drm_helper_force_disable_all(adev_to_drm(adev)); 3838 3838 else 3839 3839 drm_atomic_helper_shutdown(adev_to_drm(adev)); ··· 4289 4289 { 4290 4290 int r; 4291 4291 4292 + amdgpu_amdkfd_pre_reset(adev); 4293 + 4292 4294 if (from_hypervisor) 4293 4295 r = amdgpu_virt_request_full_gpu(adev, true); 4294 4296 else ··· 4318 4316 4319 4317 amdgpu_irq_gpu_reset_resume_helper(adev); 4320 4318 r = amdgpu_ib_ring_tests(adev); 4319 + amdgpu_amdkfd_post_reset(adev); 4321 4320 4322 4321 error: 4323 4322 if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) { ··· 5033 5030 5034 5031 cancel_delayed_work_sync(&tmp_adev->delayed_init_work); 5035 5032 5036 - amdgpu_amdkfd_pre_reset(tmp_adev); 5033 + if (!amdgpu_sriov_vf(tmp_adev)) 5034 + amdgpu_amdkfd_pre_reset(tmp_adev); 5037 5035 5038 5036 /* 5039 5037 * Mark these ASICs to be reseted as untracked first ··· 5133 5129 drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res); 5134 5130 } 5135 5131 5136 - if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) { 5132 + if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) { 5137 5133 drm_helper_resume_force_mode(adev_to_drm(tmp_adev)); 5138 5134 } 5139 5135 ··· 5152 5148 5153 5149 skip_sched_resume: 5154 5150 list_for_each_entry(tmp_adev, device_list_handle, reset_list) { 5155 - /* unlock kfd */ 5156 - if (!need_emergency_restart) 5157 - amdgpu_amdkfd_post_reset(tmp_adev); 5151 + /* unlock kfd: SRIOV would do it separately */ 5152 + if (!need_emergency_restart && !amdgpu_sriov_vf(tmp_adev)) 5153 + amdgpu_amdkfd_post_reset(tmp_adev); 5158 5154 5159 5155 /* kfd_post_reset will do nothing if kfd device is not initialized, 5160 5156 * need to bring up kfd here if it's not be initialized before
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 157 157 [HDP_HWIP] = HDP_HWID, 158 158 [SDMA0_HWIP] = SDMA0_HWID, 159 159 [SDMA1_HWIP] = SDMA1_HWID, 160 + [SDMA2_HWIP] = SDMA2_HWID, 161 + [SDMA3_HWIP] = SDMA3_HWID, 160 162 [MMHUB_HWIP] = MMHUB_HWID, 161 163 [ATHUB_HWIP] = ATHUB_HWID, 162 164 [NBIO_HWIP] = NBIF_HWID, ··· 920 918 case IP_VERSION(3, 0, 64): 921 919 case IP_VERSION(3, 1, 1): 922 920 case IP_VERSION(3, 0, 2): 921 + case IP_VERSION(3, 0, 192): 923 922 amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block); 924 923 if (!amdgpu_sriov_vf(adev)) 925 924 amdgpu_device_ip_block_add(adev, &jpeg_v3_0_ip_block);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 135 135 break; 136 136 case IP_VERSION(3, 0, 0): 137 137 case IP_VERSION(3, 0, 64): 138 + case IP_VERSION(3, 0, 192): 138 139 if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0)) 139 140 fw_name = FIRMWARE_SIENNA_CICHLID; 140 141 else
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
··· 504 504 int i = 0; 505 505 506 506 for (i = 0; i < adev->mode_info.num_crtc; i++) 507 - if (adev->mode_info.crtcs[i]) 508 - hrtimer_cancel(&adev->mode_info.crtcs[i]->vblank_timer); 507 + if (adev->amdgpu_vkms_output[i].vblank_hrtimer.function) 508 + hrtimer_cancel(&adev->amdgpu_vkms_output[i].vblank_hrtimer); 509 509 510 510 kfree(adev->mode_info.bios_hardcoded_edid); 511 511 kfree(adev->amdgpu_vkms_output);
+4 -3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 4060 4060 4061 4061 gfx_v9_0_cp_enable(adev, false); 4062 4062 4063 - /* Skip suspend with A+A reset */ 4064 - if (adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) { 4065 - dev_dbg(adev->dev, "Device in reset. Skipping RLC halt\n"); 4063 + /* Skip stopping RLC with A+A reset or when RLC controls GFX clock */ 4064 + if ((adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) || 4065 + (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2))) { 4066 + dev_dbg(adev->dev, "Skipping RLC halt\n"); 4066 4067 return 0; 4067 4068 } 4068 4069
+1
drivers/gpu/drm/amd/amdgpu/nv.c
··· 183 183 switch (adev->ip_versions[UVD_HWIP][0]) { 184 184 case IP_VERSION(3, 0, 0): 185 185 case IP_VERSION(3, 0, 64): 186 + case IP_VERSION(3, 0, 192): 186 187 if (amdgpu_sriov_vf(adev)) { 187 188 if (encode) 188 189 *codecs = &sriov_sc_video_codecs_encode;
+4 -9
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 1574 1574 static void svm_range_restore_work(struct work_struct *work) 1575 1575 { 1576 1576 struct delayed_work *dwork = to_delayed_work(work); 1577 - struct amdkfd_process_info *process_info; 1578 1577 struct svm_range_list *svms; 1579 1578 struct svm_range *prange; 1580 1579 struct kfd_process *p; ··· 1593 1594 * the lifetime of this thread, kfd_process and mm will be valid. 1594 1595 */ 1595 1596 p = container_of(svms, struct kfd_process, svms); 1596 - process_info = p->kgd_process_info; 1597 1597 mm = p->mm; 1598 1598 if (!mm) 1599 1599 return; 1600 1600 1601 - mutex_lock(&process_info->lock); 1602 1601 svm_range_list_lock_and_flush_work(svms, mm); 1603 1602 mutex_lock(&svms->lock); 1604 1603 ··· 1649 1652 out_reschedule: 1650 1653 mutex_unlock(&svms->lock); 1651 1654 mmap_write_unlock(mm); 1652 - mutex_unlock(&process_info->lock); 1653 1655 1654 1656 /* If validation failed, reschedule another attempt */ 1655 1657 if (evicted_ranges) { ··· 2610 2614 2611 2615 if (atomic_read(&svms->drain_pagefaults)) { 2612 2616 pr_debug("draining retry fault, drop fault 0x%llx\n", addr); 2617 + r = 0; 2613 2618 goto out; 2614 2619 } 2615 2620 ··· 2620 2623 mm = get_task_mm(p->lead_thread); 2621 2624 if (!mm) { 2622 2625 pr_debug("svms 0x%p failed to get mm\n", svms); 2626 + r = 0; 2623 2627 goto out; 2624 2628 } 2625 2629 ··· 2658 2660 2659 2661 if (svm_range_skip_recover(prange)) { 2660 2662 amdgpu_gmc_filter_faults_remove(adev, addr, pasid); 2663 + r = 0; 2661 2664 goto out_unlock_range; 2662 2665 } 2663 2666 ··· 2667 2668 if (timestamp < AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING) { 2668 2669 pr_debug("svms 0x%p [0x%lx %lx] already restored\n", 2669 2670 svms, prange->start, prange->last); 2671 + r = 0; 2670 2672 goto out_unlock_range; 2671 2673 } 2672 2674 ··· 3177 3177 svm_range_set_attr(struct kfd_process *p, uint64_t start, uint64_t size, 3178 3178 uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs) 3179 3179 { 3180 - struct amdkfd_process_info *process_info = p->kgd_process_info; 3181 3180 struct mm_struct *mm = current->mm; 3182 3181 struct list_head update_list; 3183 3182 struct list_head insert_list; ··· 3194 3195 return r; 3195 3196 3196 3197 svms = &p->svms; 3197 - 3198 - mutex_lock(&process_info->lock); 3199 3198 3200 3199 svm_range_list_lock_and_flush_work(svms, mm); 3201 3200 ··· 3270 3273 mutex_unlock(&svms->lock); 3271 3274 mmap_read_unlock(mm); 3272 3275 out: 3273 - mutex_unlock(&process_info->lock); 3274 - 3275 3276 pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] done, r=%d\n", p->pasid, 3276 3277 &p->svms, start, start + size - 1, r); 3277 3278
+8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
··· 314 314 ret = -EINVAL; 315 315 goto cleanup; 316 316 } 317 + 318 + if ((aconn->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort) && 319 + (aconn->base.connector_type != DRM_MODE_CONNECTOR_eDP)) { 320 + DRM_DEBUG_DRIVER("No DP connector available for CRC source\n"); 321 + ret = -EINVAL; 322 + goto cleanup; 323 + } 324 + 317 325 } 318 326 319 327 #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+16 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 36 36 #include "dm_helpers.h" 37 37 38 38 #include "dc_link_ddc.h" 39 + #include "ddc_service_types.h" 40 + #include "dpcd_defs.h" 39 41 40 42 #include "i2caux_interface.h" 41 43 #include "dmub_cmd.h" ··· 159 157 }; 160 158 161 159 #if defined(CONFIG_DRM_AMD_DC_DCN) 160 + static bool needs_dsc_aux_workaround(struct dc_link *link) 161 + { 162 + if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 && 163 + (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) && 164 + link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2) 165 + return true; 166 + 167 + return false; 168 + } 169 + 162 170 static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector) 163 171 { 164 172 struct dc_sink *dc_sink = aconnector->dc_sink; ··· 178 166 u8 *dsc_branch_dec_caps = NULL; 179 167 180 168 aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port); 181 - #if defined(CONFIG_HP_HOOK_WORKAROUND) 169 + 182 170 /* 183 171 * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs 184 172 * because it only check the dsc/fec caps of the "port variable" and not the dock ··· 188 176 * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux 189 177 * 190 178 */ 191 - 192 - if (!aconnector->dsc_aux && !port->parent->port_parent) 179 + if (!aconnector->dsc_aux && !port->parent->port_parent && 180 + needs_dsc_aux_workaround(aconnector->dc_link)) 193 181 aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux; 194 - #endif 182 + 195 183 if (!aconnector->dsc_aux) 196 184 return false; 197 185
+16
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 758 758 dal_ddc_service_set_transaction_type(link->ddc, 759 759 sink_caps->transaction_type); 760 760 761 + #if defined(CONFIG_DRM_AMD_DC_DCN) 762 + /* Apply work around for tunneled MST on certain USB4 docks. Always use DSC if dock 763 + * reports DSC support. 764 + */ 765 + if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA && 766 + link->type == dc_connection_mst_branch && 767 + link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 && 768 + link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_SUPPORT && 769 + !link->dc->debug.dpia_debug.bits.disable_mst_dsc_work_around) 770 + link->wa_flags.dpia_mst_dsc_always_on = true; 771 + #endif 772 + 761 773 #if defined(CONFIG_DRM_AMD_DC_HDCP) 762 774 /* In case of fallback to SST when topology discovery below fails 763 775 * HDCP caps will be querried again later by the upper layer (caller ··· 1214 1202 if (link->type == dc_connection_mst_branch) { 1215 1203 LINK_INFO("link=%d, mst branch is now Disconnected\n", 1216 1204 link->link_index); 1205 + 1206 + /* Disable work around which keeps DSC on for tunneled MST on certain USB4 docks. */ 1207 + if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) 1208 + link->wa_flags.dpia_mst_dsc_always_on = false; 1217 1209 1218 1210 dm_helpers_dp_mst_stop_top_mgr(link->ctx, link); 1219 1211
+14 -10
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 1664 1664 if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param) 1665 1665 return false; 1666 1666 1667 + // Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks 1668 + if (old_stream->audio_info.mode_count != stream->audio_info.mode_count) 1669 + return false; 1670 + 1667 1671 return true; 1668 1672 } 1669 1673 ··· 2256 2252 2257 2253 if (!new_ctx) 2258 2254 return DC_ERROR_UNEXPECTED; 2259 - #if defined(CONFIG_DRM_AMD_DC_DCN) 2260 - 2261 - /* 2262 - * Update link encoder to stream assignment. 2263 - * TODO: Split out reason allocation from validation. 2264 - */ 2265 - if (dc->res_pool->funcs->link_encs_assign && fast_validate == false) 2266 - dc->res_pool->funcs->link_encs_assign( 2267 - dc, new_ctx, new_ctx->streams, new_ctx->stream_count); 2268 - #endif 2269 2255 2270 2256 if (dc->res_pool->funcs->validate_global) { 2271 2257 result = dc->res_pool->funcs->validate_global(dc, new_ctx); ··· 2306 2312 if (result == DC_OK) 2307 2313 if (!dc->res_pool->funcs->validate_bandwidth(dc, new_ctx, fast_validate)) 2308 2314 result = DC_FAIL_BANDWIDTH_VALIDATE; 2315 + 2316 + #if defined(CONFIG_DRM_AMD_DC_DCN) 2317 + /* 2318 + * Only update link encoder to stream assignment after bandwidth validation passed. 2319 + * TODO: Split out assignment and validation. 2320 + */ 2321 + if (result == DC_OK && dc->res_pool->funcs->link_encs_assign && fast_validate == false) 2322 + dc->res_pool->funcs->link_encs_assign( 2323 + dc, new_ctx, new_ctx->streams, new_ctx->stream_count); 2324 + #endif 2309 2325 2310 2326 return result; 2311 2327 }
+2 -1
drivers/gpu/drm/amd/display/dc/dc.h
··· 508 508 uint32_t disable_dpia:1; 509 509 uint32_t force_non_lttpr:1; 510 510 uint32_t extend_aux_rd_interval:1; 511 - uint32_t reserved:29; 511 + uint32_t disable_mst_dsc_work_around:1; 512 + uint32_t reserved:28; 512 513 } bits; 513 514 uint32_t raw; 514 515 };
+2
drivers/gpu/drm/amd/display/dc/dc_link.h
··· 191 191 bool dp_skip_DID2; 192 192 bool dp_skip_reset_segment; 193 193 bool dp_mot_reset_segment; 194 + /* Some USB4 docks do not handle turning off MST DSC once it has been enabled. */ 195 + bool dpia_mst_dsc_always_on; 194 196 } wa_flags; 195 197 struct link_mst_stream_allocation_table mst_stream_alloc_table; 196 198
+1 -1
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1468 1468 dev_err(adev->dev, "Failed to disable smu features.\n"); 1469 1469 } 1470 1470 1471 - if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0) && 1471 + if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2) && 1472 1472 adev->gfx.rlc.funcs->stop) 1473 1473 adev->gfx.rlc.funcs->stop(adev); 1474 1474
+3
drivers/gpu/drm/i915/display/intel_display_types.h
··· 1640 1640 struct intel_dp_pcon_frl frl; 1641 1641 1642 1642 struct intel_psr psr; 1643 + 1644 + /* When we last wrote the OUI for eDP */ 1645 + unsigned long last_oui_write; 1643 1646 }; 1644 1647 1645 1648 enum lspcon_vendor {
+11
drivers/gpu/drm/i915/display/intel_dp.c
··· 29 29 #include <linux/i2c.h> 30 30 #include <linux/notifier.h> 31 31 #include <linux/slab.h> 32 + #include <linux/timekeeping.h> 32 33 #include <linux/types.h> 33 34 34 35 #include <asm/byteorder.h> ··· 1956 1955 1957 1956 if (drm_dp_dpcd_write(&intel_dp->aux, DP_SOURCE_OUI, oui, sizeof(oui)) < 0) 1958 1957 drm_err(&i915->drm, "Failed to write source OUI\n"); 1958 + 1959 + intel_dp->last_oui_write = jiffies; 1960 + } 1961 + 1962 + void intel_dp_wait_source_oui(struct intel_dp *intel_dp) 1963 + { 1964 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1965 + 1966 + drm_dbg_kms(&i915->drm, "Performing OUI wait\n"); 1967 + wait_remaining_ms_from_jiffies(intel_dp->last_oui_write, 30); 1959 1968 } 1960 1969 1961 1970 /* If the device supports it, try to set the power state appropriately */
+2
drivers/gpu/drm/i915/display/intel_dp.h
··· 119 119 const struct intel_crtc_state *crtc_state); 120 120 void intel_dp_phy_test(struct intel_encoder *encoder); 121 121 122 + void intel_dp_wait_source_oui(struct intel_dp *intel_dp); 123 + 122 124 #endif /* __INTEL_DP_H__ */
+26 -6
drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
··· 36 36 37 37 #include "intel_backlight.h" 38 38 #include "intel_display_types.h" 39 + #include "intel_dp.h" 39 40 #include "intel_dp_aux_backlight.h" 40 41 41 42 /* TODO: ··· 106 105 struct intel_panel *panel = &connector->panel; 107 106 int ret; 108 107 u8 tcon_cap[4]; 108 + 109 + intel_dp_wait_source_oui(intel_dp); 109 110 110 111 ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap)); 111 112 if (ret != sizeof(tcon_cap)) ··· 207 204 int ret; 208 205 u8 old_ctrl, ctrl; 209 206 207 + intel_dp_wait_source_oui(intel_dp); 208 + 210 209 ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl); 211 210 if (ret != 1) { 212 211 drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret); ··· 298 293 struct intel_panel *panel = &connector->panel; 299 294 struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); 300 295 296 + if (!panel->backlight.edp.vesa.info.aux_enable) { 297 + u32 pwm_level = intel_backlight_invert_pwm_level(connector, 298 + panel->backlight.pwm_level_max); 299 + 300 + panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level); 301 + } 302 + 301 303 drm_edp_backlight_enable(&intel_dp->aux, &panel->backlight.edp.vesa.info, level); 302 304 } 303 305 ··· 316 304 struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); 317 305 318 306 drm_edp_backlight_disable(&intel_dp->aux, &panel->backlight.edp.vesa.info); 307 + 308 + if (!panel->backlight.edp.vesa.info.aux_enable) 309 + panel->backlight.pwm_funcs->disable(old_conn_state, 310 + intel_backlight_invert_pwm_level(connector, 0)); 319 311 } 320 312 321 313 static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector, enum pipe pipe) ··· 337 321 if (ret < 0) 338 322 return ret; 339 323 324 + if (!panel->backlight.edp.vesa.info.aux_enable) { 325 + ret = panel->backlight.pwm_funcs->setup(connector, pipe); 326 + if (ret < 0) { 327 + drm_err(&i915->drm, 328 + "Failed to setup PWM backlight controls for eDP backlight: %d\n", 329 + ret); 330 + return ret; 331 + } 332 + } 340 333 panel->backlight.max = panel->backlight.edp.vesa.info.max; 341 334 panel->backlight.min = 0; 342 335 if (current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) { ··· 365 340 struct intel_dp *intel_dp = intel_attached_dp(connector); 366 341 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 367 342 368 - /* TODO: We currently only support AUX only backlight configurations, not backlights which 369 - * require a mix of PWM and AUX controls to work. In the mean time, these machines typically 370 - * work just fine using normal PWM controls anyway. 371 - */ 372 - if ((intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) && 373 - drm_edp_backlight_supported(intel_dp->edp_dpcd)) { 343 + if (drm_edp_backlight_supported(intel_dp->edp_dpcd)) { 374 344 drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n"); 375 345 return true; 376 346 }
-7
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 621 621 FF_MODE2_GS_TIMER_MASK, 622 622 FF_MODE2_GS_TIMER_224, 623 623 0, false); 624 - 625 - /* 626 - * Wa_14012131227:dg1 627 - * Wa_1508744258:tgl,rkl,dg1,adl-s,adl-p 628 - */ 629 - wa_masked_en(wal, GEN7_COMMON_SLICE_CHICKEN1, 630 - GEN9_RHWO_OPTIMIZATION_DISABLE); 631 624 } 632 625 633 626 static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
+1 -1
drivers/gpu/drm/msm/Kconfig
··· 4 4 tristate "MSM DRM" 5 5 depends on DRM 6 6 depends on ARCH_QCOM || SOC_IMX5 || COMPILE_TEST 7 + depends on COMMON_CLK 7 8 depends on IOMMU_SUPPORT 8 - depends on (OF && COMMON_CLK) || COMPILE_TEST 9 9 depends on QCOM_OCMEM || QCOM_OCMEM=n 10 10 depends on QCOM_LLCC || QCOM_LLCC=n 11 11 depends on QCOM_COMMAND_DB || QCOM_COMMAND_DB=n
+3 -3
drivers/gpu/drm/msm/Makefile
··· 23 23 hdmi/hdmi_i2c.o \ 24 24 hdmi/hdmi_phy.o \ 25 25 hdmi/hdmi_phy_8960.o \ 26 + hdmi/hdmi_phy_8996.o \ 26 27 hdmi/hdmi_phy_8x60.o \ 27 28 hdmi/hdmi_phy_8x74.o \ 29 + hdmi/hdmi_pll_8960.o \ 28 30 edp/edp.o \ 29 31 edp/edp_aux.o \ 30 32 edp/edp_bridge.o \ ··· 39 37 disp/mdp4/mdp4_dtv_encoder.o \ 40 38 disp/mdp4/mdp4_lcdc_encoder.o \ 41 39 disp/mdp4/mdp4_lvds_connector.o \ 40 + disp/mdp4/mdp4_lvds_pll.o \ 42 41 disp/mdp4/mdp4_irq.o \ 43 42 disp/mdp4/mdp4_kms.o \ 44 43 disp/mdp4/mdp4_plane.o \ ··· 119 116 dp/dp_audio.o 120 117 121 118 msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o 122 - msm-$(CONFIG_COMMON_CLK) += disp/mdp4/mdp4_lvds_pll.o 123 - msm-$(CONFIG_COMMON_CLK) += hdmi/hdmi_pll_8960.o 124 - msm-$(CONFIG_COMMON_CLK) += hdmi/hdmi_phy_8996.o 125 119 126 120 msm-$(CONFIG_DRM_MSM_HDMI_HDCP) += hdmi/hdmi_hdcp.o 127 121
+10 -10
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1424 1424 { 1425 1425 struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 1426 1426 struct msm_gpu *gpu = &adreno_gpu->base; 1427 - u32 gpu_scid, cntl1_regval = 0; 1427 + u32 cntl1_regval = 0; 1428 1428 1429 1429 if (IS_ERR(a6xx_gpu->llc_mmio)) 1430 1430 return; 1431 1431 1432 1432 if (!llcc_slice_activate(a6xx_gpu->llc_slice)) { 1433 - gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); 1433 + u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); 1434 1434 1435 1435 gpu_scid &= 0x1f; 1436 1436 cntl1_regval = (gpu_scid << 0) | (gpu_scid << 5) | (gpu_scid << 10) | 1437 1437 (gpu_scid << 15) | (gpu_scid << 20); 1438 + 1439 + /* On A660, the SCID programming for UCHE traffic is done in 1440 + * A6XX_GBIF_SCACHE_CNTL0[14:10] 1441 + */ 1442 + if (adreno_is_a660_family(adreno_gpu)) 1443 + gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, (0x1f << 10) | 1444 + (1 << 8), (gpu_scid << 10) | (1 << 8)); 1438 1445 } 1439 1446 1440 1447 /* ··· 1478 1471 } 1479 1472 1480 1473 gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0), cntl1_regval); 1481 - 1482 - /* On A660, the SCID programming for UCHE traffic is done in 1483 - * A6XX_GBIF_SCACHE_CNTL0[14:10] 1484 - */ 1485 - if (adreno_is_a660_family(adreno_gpu)) 1486 - gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, (0x1f << 10) | 1487 - (1 << 8), (gpu_scid << 10) | (1 << 8)); 1488 1474 } 1489 1475 1490 1476 static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) ··· 1640 1640 return (unsigned long)busy_time; 1641 1641 } 1642 1642 1643 - void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1643 + static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1644 1644 { 1645 1645 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1646 1646 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+2 -2
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
··· 777 777 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 778 778 779 779 a6xx_state->gmu_registers = state_kcalloc(a6xx_state, 780 - 2, sizeof(*a6xx_state->gmu_registers)); 780 + 3, sizeof(*a6xx_state->gmu_registers)); 781 781 782 782 if (!a6xx_state->gmu_registers) 783 783 return; 784 784 785 - a6xx_state->nr_gmu_registers = 2; 785 + a6xx_state->nr_gmu_registers = 3; 786 786 787 787 /* Get the CX GMU registers from AHB */ 788 788 _a6xx_get_gmu_registers(gpu, a6xx_state, &a6xx_gmu_reglist[0],
+17
drivers/gpu/drm/msm/dp/dp_aux.c
··· 33 33 bool read; 34 34 bool no_send_addr; 35 35 bool no_send_stop; 36 + bool initted; 36 37 u32 offset; 37 38 u32 segment; 38 39 ··· 332 331 } 333 332 334 333 mutex_lock(&aux->mutex); 334 + if (!aux->initted) { 335 + ret = -EIO; 336 + goto exit; 337 + } 335 338 336 339 dp_aux_update_offset_and_segment(aux, msg); 337 340 dp_aux_transfer_helper(aux, msg, true); ··· 385 380 } 386 381 387 382 aux->cmd_busy = false; 383 + 384 + exit: 388 385 mutex_unlock(&aux->mutex); 389 386 390 387 return ret; ··· 438 431 439 432 aux = container_of(dp_aux, struct dp_aux_private, dp_aux); 440 433 434 + mutex_lock(&aux->mutex); 435 + 441 436 dp_catalog_aux_enable(aux->catalog, true); 442 437 aux->retry_cnt = 0; 438 + aux->initted = true; 439 + 440 + mutex_unlock(&aux->mutex); 443 441 } 444 442 445 443 void dp_aux_deinit(struct drm_dp_aux *dp_aux) ··· 453 441 454 442 aux = container_of(dp_aux, struct dp_aux_private, dp_aux); 455 443 444 + mutex_lock(&aux->mutex); 445 + 446 + aux->initted = false; 456 447 dp_catalog_aux_enable(aux->catalog, false); 448 + 449 + mutex_unlock(&aux->mutex); 457 450 } 458 451 459 452 int dp_aux_register(struct drm_dp_aux *dp_aux)
+2
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 1658 1658 if (!prop) { 1659 1659 DRM_DEV_DEBUG(dev, 1660 1660 "failed to find data lane mapping, using default\n"); 1661 + /* Set the number of data lanes to 4 by default. */ 1662 + msm_host->num_data_lanes = 4; 1661 1663 return 0; 1662 1664 }
+1
drivers/gpu/drm/msm/msm_debugfs.c
··· 77 77 goto free_priv; 78 78 79 79 pm_runtime_get_sync(&gpu->pdev->dev); 80 + msm_gpu_hw_init(gpu); 80 81 show_priv->state = gpu->funcs->gpu_state_get(gpu); 81 82 pm_runtime_put_sync(&gpu->pdev->dev); 82 83
+32 -17
drivers/gpu/drm/msm/msm_drv.c
··· 967 967 return ret; 968 968 } 969 969 970 - static int msm_ioctl_wait_fence(struct drm_device *dev, void *data, 971 - struct drm_file *file) 970 + static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id, 971 + ktime_t timeout) 972 972 { 973 - struct msm_drm_private *priv = dev->dev_private; 974 - struct drm_msm_wait_fence *args = data; 975 - ktime_t timeout = to_ktime(args->timeout); 976 - struct msm_gpu_submitqueue *queue; 977 - struct msm_gpu *gpu = priv->gpu; 978 973 struct dma_fence *fence; 979 974 int ret; 980 975 981 - if (args->pad) { 982 - DRM_ERROR("invalid pad: %08x\n", args->pad); 976 + if (fence_id > queue->last_fence) { 977 + DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n", 978 + fence_id, queue->last_fence); 983 979 return -EINVAL; 984 980 } 985 - 986 - if (!gpu) 987 - return 0; 988 - 989 - queue = msm_submitqueue_get(file->driver_priv, args->queueid); 990 - if (!queue) 991 - return -ENOENT; 992 981 993 982 /* 994 983 * Map submitqueue scoped "seqno" (which is actually an idr key) ··· 990 1001 ret = mutex_lock_interruptible(&queue->lock); 991 1002 if (ret) 992 1003 return ret; 993 1004 fence = idr_find(&queue->fence_idr, fence_id); 994 1005 if (fence) 995 1006 fence = dma_fence_get_rcu(fence); 996 1007 mutex_unlock(&queue->lock); ··· 1006 1017 } 1007 1018 1008 1019 dma_fence_put(fence); 1020 + 1021 + return ret; 1022 + } 1023 + 1024 + static int msm_ioctl_wait_fence(struct drm_device *dev, void *data, 1025 + struct drm_file *file) 1026 + { 1027 + struct msm_drm_private *priv = dev->dev_private; 1028 + struct drm_msm_wait_fence *args = data; 1029 + struct msm_gpu_submitqueue *queue; 1030 + int ret; 1031 + 1032 + if (args->pad) { 1033 + DRM_ERROR("invalid pad: %08x\n", args->pad); 1034 + return -EINVAL; 1035 + } 1036 + 1037 + if (!priv->gpu) 1038 + return 0; 1039 + 1040 + queue = msm_submitqueue_get(file->driver_priv, args->queueid); 1041 + if (!queue) 1042 + return -ENOENT; 1043 + 1044 + ret = wait_fence(queue, args->fence, to_ktime(args->timeout)); 1045 + 1009 1046 msm_submitqueue_put(queue); 1010 1047 1011 1048 return ret;
+2 -3
drivers/gpu/drm/msm/msm_gem.c
··· 1056 1056 { 1057 1057 struct msm_gem_object *msm_obj = to_msm_bo(obj); 1058 1058 1059 - vma->vm_flags &= ~VM_PFNMAP; 1060 - vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND; 1059 + vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP; 1061 1060 vma->vm_page_prot = msm_gem_pgprot(msm_obj, vm_get_page_prot(vma->vm_flags)); 1062 1061 1063 1062 return 0; ··· 1120 1121 break; 1121 1122 fallthrough; 1122 1123 default: 1123 - DRM_DEV_ERROR(dev->dev, "invalid cache flag: %x\n", 1124 + DRM_DEV_DEBUG(dev->dev, "invalid cache flag: %x\n", 1124 1125 (flags & MSM_BO_CACHE_MASK)); 1125 1126 return -EINVAL; 1126 1127 }
+2
drivers/gpu/drm/msm/msm_gem_submit.c
··· 772 772 args->nr_cmds); 773 773 if (IS_ERR(submit)) { 774 774 ret = PTR_ERR(submit); 775 + submit = NULL; 775 776 goto out_unlock; 776 777 } 777 778 ··· 905 904 drm_sched_entity_push_job(&submit->base); 906 905 907 906 args->fence = submit->fence_id; 907 + queue->last_fence = submit->fence_id; 908 908 909 909 msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs); 910 910 msm_process_post_deps(post_deps, args->nr_out_syncobjs,
+3
drivers/gpu/drm/msm/msm_gpu.h
··· 359 359 * @ring_nr: the ringbuffer used by this submitqueue, which is determined 360 360 * by the submitqueue's priority 361 361 * @faults: the number of GPU hangs associated with this submitqueue 362 + * @last_fence: the sequence number of the last allocated fence (for error 363 + * checking) 362 364 * @ctx: the per-drm_file context associated with the submitqueue (ie. 363 365 * which set of pgtables do submits jobs associated with the 364 366 * submitqueue use) ··· 376 374 u32 flags; 377 375 u32 ring_nr; 378 376 int faults; 377 + uint32_t last_fence; 379 378 struct msm_file_private *ctx; 380 379 struct list_head node; 381 380 struct idr fence_idr;
+9 -4
drivers/gpu/drm/msm/msm_gpu_devfreq.c
··· 20 20 struct msm_gpu *gpu = dev_to_gpu(dev); 21 21 struct dev_pm_opp *opp; 22 22 23 + /* 24 + * Note that devfreq_recommended_opp() can modify the freq 25 + * to something that actually is in the opp table: 26 + */ 23 27 opp = devfreq_recommended_opp(dev, freq, flags); 24 28 25 29 /* ··· 32 28 */ 33 29 if (gpu->devfreq.idle_freq) { 34 30 gpu->devfreq.idle_freq = *freq; 31 + dev_pm_opp_put(opp); 35 32 return 0; 36 33 } 37 34 ··· 208 203 struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq); 209 204 unsigned long idle_freq, target_freq = 0; 210 205 211 - if (!df->devfreq) 212 - return; 213 - 214 206 /* 215 207 * Hold devfreq lock to synchronize with get_dev_status()/ 216 208 * target() callbacks ··· 229 227 { 230 228 struct msm_gpu_devfreq *df = &gpu->devfreq; 231 229 230 + if (!df->devfreq) 231 + return; 232 + 232 233 msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1), 233 - HRTIMER_MODE_ABS); 234 + HRTIMER_MODE_REL); 234 235 }
+19 -23
drivers/gpu/drm/vc4/vc4_kms.c
··· 337 337 struct drm_device *dev = state->dev; 338 338 struct vc4_dev *vc4 = to_vc4_dev(dev); 339 339 struct vc4_hvs *hvs = vc4->hvs; 340 - struct drm_crtc_state *old_crtc_state; 341 340 struct drm_crtc_state *new_crtc_state; 342 341 struct drm_crtc *crtc; 343 342 struct vc4_hvs_state *old_hvs_state; 343 + unsigned int channel; 344 344 int i; 345 345 346 346 for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) { ··· 353 353 vc4_hvs_mask_underrun(dev, vc4_crtc_state->assigned_channel); 354 354 } 355 355 356 - if (vc4->hvs->hvs5) 357 - clk_set_min_rate(hvs->core_clk, 500000000); 358 - 359 356 old_hvs_state = vc4_hvs_get_old_global_state(state); 360 - if (!old_hvs_state) 357 + if (IS_ERR(old_hvs_state)) 361 358 return; 362 359 363 - for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) { 364 - struct vc4_crtc_state *vc4_crtc_state = 365 - to_vc4_crtc_state(old_crtc_state); 366 - unsigned int channel = vc4_crtc_state->assigned_channel; 360 + for (channel = 0; channel < HVS_NUM_CHANNELS; channel++) { 361 + struct drm_crtc_commit *commit; 367 362 int ret; 368 - 369 - if (channel == VC4_HVS_CHANNEL_DISABLED) 370 - continue; 371 363 372 364 if (!old_hvs_state->fifo_state[channel].in_use) 373 365 continue; 374 366 375 - ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[channel].pending_commit); 367 + commit = old_hvs_state->fifo_state[channel].pending_commit; 368 + if (!commit) 369 + continue; 370 + 371 + ret = drm_crtc_commit_wait(commit); 376 372 if (ret) 377 373 drm_err(dev, "Timed out waiting for commit\n"); 374 + 375 + drm_crtc_commit_put(commit); 376 + old_hvs_state->fifo_state[channel].pending_commit = NULL; 378 377 } 378 + 379 + if (vc4->hvs->hvs5) 380 + clk_set_min_rate(hvs->core_clk, 500000000); 379 381 380 382 drm_atomic_helper_commit_modeset_disables(dev, state); 381 383 ··· 412 410 unsigned int i; 413 411 414 412 hvs_state = vc4_hvs_get_new_global_state(state); 415 - if (!hvs_state) 416 - return -EINVAL; 413 + if (WARN_ON(IS_ERR(hvs_state))) 414 + return PTR_ERR(hvs_state); 417 415 418 416 for_each_new_crtc_in_state(state, crtc, crtc_state, i) { 419 417 struct vc4_crtc_state *vc4_crtc_state = ··· 670 668 671 669 for (i = 0; i < HVS_NUM_CHANNELS; i++) { 672 670 state->fifo_state[i].in_use = old_state->fifo_state[i].in_use; 673 - 674 - if (!old_state->fifo_state[i].pending_commit) 675 - continue; 676 - 677 - state->fifo_state[i].pending_commit = 678 - drm_crtc_commit_get(old_state->fifo_state[i].pending_commit); 679 671 } 680 672 681 673 return &state->base; ··· 758 762 unsigned int i; 759 763 760 764 hvs_new_state = vc4_hvs_get_global_state(state); 761 - if (!hvs_new_state) 762 - return -EINVAL; 765 + if (IS_ERR(hvs_new_state)) 766 + return PTR_ERR(hvs_new_state); 763 767 764 768 for (i = 0; i < ARRAY_SIZE(hvs_new_state->fifo_state); i++) 765 769 if (!hvs_new_state->fifo_state[i].in_use)
+1 -41
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 157 157 schedule_work(&vgdev->config_changed_work); 158 158 } 159 159 160 - static __poll_t virtio_gpu_poll(struct file *filp, 161 - struct poll_table_struct *wait) 162 - { 163 - struct drm_file *drm_file = filp->private_data; 164 - struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv; 165 - struct drm_device *dev = drm_file->minor->dev; 166 - struct virtio_gpu_device *vgdev = dev->dev_private; 167 - struct drm_pending_event *e = NULL; 168 - __poll_t mask = 0; 169 - 170 - if (!vgdev->has_virgl_3d || !vfpriv || !vfpriv->ring_idx_mask) 171 - return drm_poll(filp, wait); 172 - 173 - poll_wait(filp, &drm_file->event_wait, wait); 174 - 175 - if (!list_empty(&drm_file->event_list)) { 176 - spin_lock_irq(&dev->event_lock); 177 - e = list_first_entry(&drm_file->event_list, 178 - struct drm_pending_event, link); 179 - drm_file->event_space += e->event->length; 180 - list_del(&e->link); 181 - spin_unlock_irq(&dev->event_lock); 182 - 183 - kfree(e); 184 - mask |= EPOLLIN | EPOLLRDNORM; 185 - } 186 - 187 - return mask; 188 - } 189 - 190 160 static struct virtio_device_id id_table[] = { 191 161 { VIRTIO_ID_GPU, VIRTIO_DEV_ANY_ID }, 192 162 { 0 }, ··· 196 226 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>"); 197 227 MODULE_AUTHOR("Alon Levy"); 198 228 199 - static const struct file_operations virtio_gpu_driver_fops = { 200 - .owner = THIS_MODULE, 201 - .open = drm_open, 202 - .release = drm_release, 203 - .unlocked_ioctl = drm_ioctl, 204 - .compat_ioctl = drm_compat_ioctl, 205 - .poll = virtio_gpu_poll, 206 - .read = drm_read, 207 - .llseek = noop_llseek, 208 - .mmap = drm_gem_mmap 209 - }; 229 + DEFINE_DRM_GEM_FOPS(virtio_gpu_driver_fops); 210 230 211 231 static const struct drm_driver driver = { 212 232 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
-1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 138 138 spinlock_t lock; 139 139 }; 140 140 141 - #define VIRTGPU_EVENT_FENCE_SIGNALED_INTERNAL 0x10000000 142 141 struct virtio_gpu_fence_event { 143 142 struct drm_pending_event base; 144 143 struct drm_event event;
+1 -1
drivers/gpu/drm/virtio/virtgpu_ioctl.c
··· 54 54 if (!e) 55 55 return -ENOMEM; 56 56 57 - e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED_INTERNAL; 57 + e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED; 58 58 e->event.length = sizeof(e->event); 59 59 60 60 ret = drm_event_reserve_init(dev, file, &e->base, &e->event);
+3 -2
drivers/i2c/busses/i2c-cbus-gpio.c
··· 195 195 } 196 196 197 197 static const struct i2c_algorithm cbus_i2c_algo = { 198 - .smbus_xfer = cbus_i2c_smbus_xfer, 199 - .functionality = cbus_i2c_func, 198 + .smbus_xfer = cbus_i2c_smbus_xfer, 199 + .smbus_xfer_atomic = cbus_i2c_smbus_xfer, 200 + .functionality = cbus_i2c_func, 200 201 }; 201 202 202 203 static int cbus_i2c_remove(struct platform_device *pdev)
+2 -2
drivers/i2c/busses/i2c-rk3x.c
··· 423 423 if (!(ipd & REG_INT_MBRF)) 424 424 return; 425 425 426 - /* ack interrupt */ 427 - i2c_writel(i2c, REG_INT_MBRF, REG_IPD); 426 + /* ack interrupt (read also produces a spurious START flag, clear it too) */ 427 + i2c_writel(i2c, REG_INT_MBRF | REG_INT_START, REG_IPD); 428 428 429 429 /* Can only handle a maximum of 32 bytes at a time */ 430 430 if (len > 32)
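The rk3x fix above acks the receive interrupt by writing both the MBRF bit and a spurious START bit to the write-1-to-clear IPD register in a single write. A rough userspace illustration of that write-1-to-clear acking pattern — the bit positions here are invented for the sketch, not the real RK3x register layout:

```c
#include <stdint.h>

/* Toy model of a write-1-to-clear (W1C) interrupt-pending register.
 * Bit positions are assumed for illustration only. */
#define INT_MBRF  (1u << 0)  /* receive-finished interrupt (assumed) */
#define INT_START (1u << 4)  /* spurious START flag (assumed) */

static uint32_t ipd;  /* simulated pending register */

void irq_raise(uint32_t bits)
{
    ipd |= bits;
}

/* W1C semantics: only the bits written as 1 are cleared. */
void irq_ack(uint32_t bits)
{
    ipd &= ~bits;
}

uint32_t irq_pending(void)
{
    return ipd;
}
```

Acking only `INT_MBRF` would leave the spurious START bit pending forever, which is why the hunk clears both flags in one write.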
+38 -7
drivers/i2c/busses/i2c-stm32f7.c
··· 1493 1493 { 1494 1494 struct stm32f7_i2c_dev *i2c_dev = data; 1495 1495 struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg; 1496 + struct stm32_i2c_dma *dma = i2c_dev->dma; 1496 1497 void __iomem *base = i2c_dev->base; 1497 1498 u32 status, mask; 1498 1499 int ret = IRQ_HANDLED; ··· 1519 1518 dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n", 1520 1519 __func__, f7_msg->addr); 1521 1520 writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR); 1521 + if (i2c_dev->use_dma) { 1522 + stm32f7_i2c_disable_dma_req(i2c_dev); 1523 + dmaengine_terminate_async(dma->chan_using); 1524 + } 1522 1525 f7_msg->result = -ENXIO; 1523 1526 } 1524 1527 ··· 1538 1533 /* Clear STOP flag */ 1539 1534 writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR); 1540 1535 1541 - if (i2c_dev->use_dma) { 1536 + if (i2c_dev->use_dma && !f7_msg->result) { 1542 1537 ret = IRQ_WAKE_THREAD; 1543 1538 } else { 1544 1539 i2c_dev->master_mode = false; ··· 1551 1546 if (f7_msg->stop) { 1552 1547 mask = STM32F7_I2C_CR2_STOP; 1553 1548 stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask); 1554 - } else if (i2c_dev->use_dma) { 1549 + } else if (i2c_dev->use_dma && !f7_msg->result) { 1555 1550 ret = IRQ_WAKE_THREAD; 1556 1551 } else if (f7_msg->smbus) { 1557 1552 stm32f7_i2c_smbus_rep_start(i2c_dev); ··· 1588 1583 if (!ret) { 1589 1584 dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__); 1590 1585 stm32f7_i2c_disable_dma_req(i2c_dev); 1591 - dmaengine_terminate_all(dma->chan_using); 1586 + dmaengine_terminate_async(dma->chan_using); 1592 1587 f7_msg->result = -ETIMEDOUT; 1593 1588 } 1594 1589 ··· 1665 1660 /* Disable dma */ 1666 1661 if (i2c_dev->use_dma) { 1667 1662 stm32f7_i2c_disable_dma_req(i2c_dev); 1668 - dmaengine_terminate_all(dma->chan_using); 1663 + dmaengine_terminate_async(dma->chan_using); 1669 1664 } 1670 1665 1671 1666 i2c_dev->master_mode = false; ··· 1701 1696 time_left = wait_for_completion_timeout(&i2c_dev->complete, 1702 1697 i2c_dev->adap.timeout); 1703 1698 ret = 
f7_msg->result; 1699 + if (ret) { 1700 + if (i2c_dev->use_dma) 1701 + dmaengine_synchronize(dma->chan_using); 1702 + 1703 + /* 1704 + * It is possible that some unsent data have already been 1705 + * written into TXDR. To avoid sending old data in a 1706 + * further transfer, flush TXDR in case of any error 1707 + */ 1708 + writel_relaxed(STM32F7_I2C_ISR_TXE, 1709 + i2c_dev->base + STM32F7_I2C_ISR); 1710 + goto pm_free; 1711 + } 1704 1712 1705 1713 if (!time_left) { 1706 1714 dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n", 1707 1715 i2c_dev->msg->addr); 1708 1716 if (i2c_dev->use_dma) 1709 - dmaengine_terminate_all(dma->chan_using); 1717 + dmaengine_terminate_sync(dma->chan_using); 1718 + stm32f7_i2c_wait_free_bus(i2c_dev); 1710 1719 ret = -ETIMEDOUT; 1711 1720 } 1712 1721 ··· 1763 1744 timeout = wait_for_completion_timeout(&i2c_dev->complete, 1764 1745 i2c_dev->adap.timeout); 1765 1746 ret = f7_msg->result; 1766 - if (ret) 1747 + if (ret) { 1748 + if (i2c_dev->use_dma) 1749 + dmaengine_synchronize(dma->chan_using); 1750 + 1751 + /* 1752 + * It is possible that some unsent data have already been 1753 + * written into TXDR. To avoid sending old data in a 1754 + * further transfer, flush TXDR in case of any error 1755 + */ 1756 + writel_relaxed(STM32F7_I2C_ISR_TXE, 1757 + i2c_dev->base + STM32F7_I2C_ISR); 1767 1758 goto pm_free; 1759 + } 1768 1760 1769 1761 if (!timeout) { 1770 1762 dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr); 1771 1763 if (i2c_dev->use_dma) 1772 - dmaengine_terminate_all(dma->chan_using); 1764 + dmaengine_terminate_sync(dma->chan_using); 1765 + stm32f7_i2c_wait_free_bus(i2c_dev); 1773 1766 ret = -ETIMEDOUT; 1774 1767 goto pm_free; 1775 1768 }
+14
drivers/net/dsa/b53/b53_spi.c
··· 349 349 }; 350 350 MODULE_DEVICE_TABLE(of, b53_spi_of_match); 351 351 352 + static const struct spi_device_id b53_spi_ids[] = { 353 + { .name = "bcm5325" }, 354 + { .name = "bcm5365" }, 355 + { .name = "bcm5395" }, 356 + { .name = "bcm5397" }, 357 + { .name = "bcm5398" }, 358 + { .name = "bcm53115" }, 359 + { .name = "bcm53125" }, 360 + { .name = "bcm53128" }, 361 + { /* sentinel */ } 362 + }; 363 + MODULE_DEVICE_TABLE(spi, b53_spi_ids); 364 + 352 365 static struct spi_driver b53_spi_driver = { 353 366 .driver = { 354 367 .name = "b53-switch", ··· 370 357 .probe = b53_spi_probe, 371 358 .remove = b53_spi_remove, 372 359 .shutdown = b53_spi_shutdown, 360 + .id_table = b53_spi_ids, 373 361 }; 374 362 375 363 module_spi_driver(b53_spi_driver);
+220 -32
drivers/net/dsa/mv88e6xxx/serdes.c
··· 50 50 } 51 51 52 52 static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, 53 - u16 status, u16 lpa, 53 + u16 ctrl, u16 status, u16 lpa, 54 54 struct phylink_link_state *state) 55 55 { 56 + state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK); 57 + 56 58 if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) { 57 - state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK); 59 + /* The Spped and Duplex Resolved register is 1 if AN is enabled 60 + * and complete, or if AN is disabled. So with disabled AN we 61 + * still get here on link up. But we want to set an_complete 62 + * only if AN was enabled, thus we look at BMCR_ANENABLE. 63 + * (According to 802.3-2008 section 22.2.4.2.10, we should be 64 + * able to get this same value from BMSR_ANEGCAPABLE, but tests 65 + * show that these Marvell PHYs don't conform to this part of 66 + * the specificaion - BMSR_ANEGCAPABLE is simply always 1.) 67 + */ 68 + state->an_complete = !!(ctrl & BMCR_ANENABLE); 58 69 state->duplex = status & 59 70 MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ? 60 71 DUPLEX_FULL : DUPLEX_HALF; ··· 92 81 dev_err(chip->dev, "invalid PHY speed\n"); 93 82 return -EINVAL; 94 83 } 84 + } else if (state->link && 85 + state->interface != PHY_INTERFACE_MODE_SGMII) { 86 + /* If Speed and Duplex Resolved register is 0 and link is up, it 87 + * means that AN was enabled, but link partner had it disabled 88 + * and the PHY invoked the Auto-Negotiation Bypass feature and 89 + * linked anyway. 
90 + */ 91 + state->duplex = DUPLEX_FULL; 92 + if (state->interface == PHY_INTERFACE_MODE_2500BASEX) 93 + state->speed = SPEED_2500; 94 + else 95 + state->speed = SPEED_1000; 95 96 } else { 96 97 state->link = false; 97 98 } ··· 191 168 int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port, 192 169 int lane, struct phylink_link_state *state) 193 170 { 194 - u16 lpa, status; 171 + u16 lpa, status, ctrl; 195 172 int err; 173 + 174 + err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl); 175 + if (err) { 176 + dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err); 177 + return err; 178 + } 196 179 197 180 err = mv88e6352_serdes_read(chip, 0x11, &status); 198 181 if (err) { ··· 212 183 return err; 213 184 } 214 185 215 - return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state); 186 + return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state); 216 187 } 217 188 218 189 int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port, ··· 912 883 static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip, 913 884 int port, int lane, struct phylink_link_state *state) 914 885 { 915 - u16 lpa, status; 886 + u16 lpa, status, ctrl; 916 887 int err; 888 + 889 + err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS, 890 + MV88E6390_SGMII_BMCR, &ctrl); 891 + if (err) { 892 + dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err); 893 + return err; 894 + } 917 895 918 896 err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS, 919 897 MV88E6390_SGMII_PHY_STATUS, &status); ··· 936 900 return err; 937 901 } 938 902 939 - return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state); 903 + return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state); 940 904 } 941 905 942 906 static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip, ··· 1307 1271 } 1308 1272 } 1309 1273 1310 - static int mv88e6393x_serdes_port_errata(struct mv88e6xxx_chip *chip, int lane) 1274 + static int 
mv88e6393x_serdes_power_lane(struct mv88e6xxx_chip *chip, int lane, 1275 + bool on) 1311 1276 { 1312 - u16 reg, pcs; 1277 + u16 reg; 1278 + int err; 1279 + 1280 + err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS, 1281 + MV88E6393X_SERDES_CTRL1, &reg); 1282 + if (err) 1283 + return err; 1284 + 1285 + if (on) 1286 + reg &= ~(MV88E6393X_SERDES_CTRL1_TX_PDOWN | 1287 + MV88E6393X_SERDES_CTRL1_RX_PDOWN); 1288 + else 1289 + reg |= MV88E6393X_SERDES_CTRL1_TX_PDOWN | 1290 + MV88E6393X_SERDES_CTRL1_RX_PDOWN; 1291 + 1292 + return mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS, 1293 + MV88E6393X_SERDES_CTRL1, reg); 1294 + } 1295 + 1296 + static int mv88e6393x_serdes_erratum_4_6(struct mv88e6xxx_chip *chip, int lane) 1297 + { 1298 + u16 reg; 1313 1299 int err; 1314 1300 1315 1301 /* mv88e6393x family errata 4.6: ··· 1342 1284 * It seems that after this workaround the SERDES is automatically 1343 1285 * powered up (the bit is cleared), so power it down. 1344 1286 */ 1345 - if (lane == MV88E6393X_PORT0_LANE || lane == MV88E6393X_PORT9_LANE || 1346 - lane == MV88E6393X_PORT10_LANE) { 1347 - err = mv88e6390_serdes_read(chip, lane, 1348 - MDIO_MMD_PHYXS, 1349 - MV88E6393X_SERDES_POC, &reg); 1350 - if (err) 1351 - return err; 1287 + err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS, 1288 + MV88E6393X_SERDES_POC, &reg); 1289 + if (err) 1290 + return err; 1352 1291 1353 - reg &= ~MV88E6393X_SERDES_POC_PDOWN; 1354 - reg |= MV88E6393X_SERDES_POC_RESET; 1292 + reg &= ~MV88E6393X_SERDES_POC_PDOWN; 1293 + reg |= MV88E6393X_SERDES_POC_RESET; 1355 1294 1356 - err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS, 1357 - MV88E6393X_SERDES_POC, reg); 1358 - if (err) 1359 - return err; 1295 + err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS, 1296 + MV88E6393X_SERDES_POC, reg); 1297 + if (err) 1298 + return err; 1360 1299 1361 - err = mv88e6390_serdes_power_sgmii(chip, lane, false); 1362 - if (err) 1363 - return err; 1364 - } 1300 + err = mv88e6390_serdes_power_sgmii(chip, 
lane, false); 1301 + if (err) 1302 + return err; 1303 + 1304 + return mv88e6393x_serdes_power_lane(chip, lane, false); 1305 + } 1306 + 1307 + int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip) 1308 + { 1309 + int err; 1310 + 1311 + err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT0_LANE); 1312 + if (err) 1313 + return err; 1314 + 1315 + err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT9_LANE); 1316 + if (err) 1317 + return err; 1318 + 1319 + return mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT10_LANE); 1320 + } 1321 + 1322 + static int mv88e6393x_serdes_erratum_4_8(struct mv88e6xxx_chip *chip, int lane) 1323 + { 1324 + u16 reg, pcs; 1325 + int err; 1365 1326 1366 1327 /* mv88e6393x family errata 4.8: 1367 1328 * When a SERDES port is operating in 1000BASE-X or SGMII mode link may ··· 1411 1334 MV88E6393X_ERRATA_4_8_REG, reg); 1412 1335 } 1413 1336 1414 - int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip) 1337 + static int mv88e6393x_serdes_erratum_5_2(struct mv88e6xxx_chip *chip, int lane, 1338 + u8 cmode) 1415 1339 { 1340 + static const struct { 1341 + u16 dev, reg, val, mask; 1342 + } fixes[] = { 1343 + { MDIO_MMD_VEND1, 0x8093, 0xcb5a, 0xffff }, 1344 + { MDIO_MMD_VEND1, 0x8171, 0x7088, 0xffff }, 1345 + { MDIO_MMD_VEND1, 0x80c9, 0x311a, 0xffff }, 1346 + { MDIO_MMD_VEND1, 0x80a2, 0x8000, 0xff7f }, 1347 + { MDIO_MMD_VEND1, 0x80a9, 0x0000, 0xfff0 }, 1348 + { MDIO_MMD_VEND1, 0x80a3, 0x0000, 0xf8ff }, 1349 + { MDIO_MMD_PHYXS, MV88E6393X_SERDES_POC, 1350 + MV88E6393X_SERDES_POC_RESET, MV88E6393X_SERDES_POC_RESET }, 1351 + }; 1352 + int err, i; 1353 + u16 reg; 1354 + 1355 + /* mv88e6393x family errata 5.2: 1356 + * For optimal signal integrity the following sequence should be applied 1357 + * to SERDES operating in 10G mode. These registers only apply to 10G 1358 + * operation and have no effect on other speeds. 
1359 + */ 1360 + if (cmode != MV88E6393X_PORT_STS_CMODE_10GBASER) 1361 + return 0; 1362 + 1363 + for (i = 0; i < ARRAY_SIZE(fixes); ++i) { 1364 + err = mv88e6390_serdes_read(chip, lane, fixes[i].dev, 1365 + fixes[i].reg, &reg); 1366 + if (err) 1367 + return err; 1368 + 1369 + reg &= ~fixes[i].mask; 1370 + reg |= fixes[i].val; 1371 + 1372 + err = mv88e6390_serdes_write(chip, lane, fixes[i].dev, 1373 + fixes[i].reg, reg); 1374 + if (err) 1375 + return err; 1376 + } 1377 + 1378 + return 0; 1379 + } 1380 + 1381 + static int mv88e6393x_serdes_fix_2500basex_an(struct mv88e6xxx_chip *chip, 1382 + int lane, u8 cmode, bool on) 1383 + { 1384 + u16 reg; 1416 1385 int err; 1417 1386 1418 - err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT0_LANE); 1387 + if (cmode != MV88E6XXX_PORT_STS_CMODE_2500BASEX) 1388 + return 0; 1389 + 1390 + /* Inband AN is broken on Amethyst in 2500base-x mode when set by 1391 + * standard mechanism (via cmode). 1392 + * We can get around this by configuring the PCS mode to 1000base-x 1393 + * and then writing value 0x58 to register 1e.8000. (This must be done 1394 + * while SerDes receiver and transmitter are disabled, which is, when 1395 + * this function is called.) 1396 + * It seem that when we do this configuration to 2500base-x mode (by 1397 + * changing PCS mode to 1000base-x and frequency to 3.125 GHz from 1398 + * 1.25 GHz) and then configure to sgmii or 1000base-x, the device 1399 + * thinks that it already has SerDes at 1.25 GHz and does not change 1400 + * the 1e.8000 register, leaving SerDes at 3.125 GHz. 1401 + * To avoid this, change PCS mode back to 2500base-x when disabling 1402 + * SerDes from 2500base-x mode. 
1403 + */ 1404 + err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS, 1405 + MV88E6393X_SERDES_POC, &reg); 1419 1406 if (err) 1420 1407 return err; 1421 1408 1422 - err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT9_LANE); 1409 + reg &= ~(MV88E6393X_SERDES_POC_PCS_MASK | MV88E6393X_SERDES_POC_AN); 1410 + if (on) 1411 + reg |= MV88E6393X_SERDES_POC_PCS_1000BASEX | 1412 + MV88E6393X_SERDES_POC_AN; 1413 + else 1414 + reg |= MV88E6393X_SERDES_POC_PCS_2500BASEX; 1415 + reg |= MV88E6393X_SERDES_POC_RESET; 1416 + 1417 + err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS, 1418 + MV88E6393X_SERDES_POC, reg); 1423 1419 if (err) 1424 1420 return err; 1425 1421 1426 - return mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT10_LANE); 1422 + err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_VEND1, 0x8000, 0x58); 1423 + if (err) 1424 + return err; 1425 + 1426 + return 0; 1427 1427 } 1428 1428 1429 1429 int mv88e6393x_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane, 1430 1430 bool on) 1431 1431 { 1432 1432 u8 cmode = chip->ports[port].cmode; 1433 + int err; 1433 1434 1434 1435 if (port != 0 && port != 9 && port != 10) 1435 1436 return -EOPNOTSUPP; 1437 + 1438 + if (on) { 1439 + err = mv88e6393x_serdes_erratum_4_8(chip, lane); 1440 + if (err) 1441 + return err; 1442 + 1443 + err = mv88e6393x_serdes_erratum_5_2(chip, lane, cmode); 1444 + if (err) 1445 + return err; 1446 + 1447 + err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode, 1448 + true); 1449 + if (err) 1450 + return err; 1451 + 1452 + err = mv88e6393x_serdes_power_lane(chip, lane, true); 1453 + if (err) 1454 + return err; 1455 + } 1436 1456 1437 1457 switch (cmode) { 1438 1458 case MV88E6XXX_PORT_STS_CMODE_SGMII: 1439 1459 case MV88E6XXX_PORT_STS_CMODE_1000BASEX: 1440 1460 case MV88E6XXX_PORT_STS_CMODE_2500BASEX: 1441 - return mv88e6390_serdes_power_sgmii(chip, lane, on); 1461 + err = mv88e6390_serdes_power_sgmii(chip, lane, on); 1462 + break; 1442 1463 case 
MV88E6393X_PORT_STS_CMODE_5GBASER: 1443 1464 case MV88E6393X_PORT_STS_CMODE_10GBASER: 1444 - return mv88e6390_serdes_power_10g(chip, lane, on); 1465 + err = mv88e6390_serdes_power_10g(chip, lane, on); 1466 + break; 1445 1467 } 1446 1468 1447 - return 0; 1469 + if (err) 1470 + return err; 1471 + 1472 + if (!on) { 1473 + err = mv88e6393x_serdes_power_lane(chip, lane, false); 1474 + if (err) 1475 + return err; 1476 + 1477 + err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode, 1478 + false); 1479 + } 1480 + 1481 + return err; 1448 1482 }
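The erratum 5.2 workaround above walks a table of {dev, reg, val, mask} tuples, doing one read-modify-write per entry. A minimal sketch of that table-driven register-fixup pattern, with the register file simulated by an array instead of the real mv88e6390_serdes_read()/write() MDIO accessors:

```c
#include <stdint.h>
#include <stddef.h>

struct reg_fix {
    unsigned int reg;   /* register index (simulated address space) */
    uint16_t val;       /* bits to set */
    uint16_t mask;      /* field to replace */
};

/* Apply each fix as: read, clear the masked field, OR in the new
 * value, write back. Returns -1 on an out-of-range register, which
 * stands in for an MDIO I/O error in the real driver. */
int apply_reg_fixes(uint16_t *regs, size_t nregs,
                    const struct reg_fix *fixes, size_t nfixes)
{
    for (size_t i = 0; i < nfixes; i++) {
        if (fixes[i].reg >= nregs)
            return -1;
        uint16_t v = regs[fixes[i].reg];
        v &= ~fixes[i].mask;
        v |= fixes[i].val;
        regs[fixes[i].reg] = v;
    }
    return 0;
}
```

Keeping the erratum sequence in a const table, as the hunk does, makes the fixup self-documenting and lets one loop handle the error path for every register.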
+4
drivers/net/dsa/mv88e6xxx/serdes.h
··· 93 93 #define MV88E6393X_SERDES_POC_PCS_MASK 0x0007 94 94 #define MV88E6393X_SERDES_POC_RESET BIT(15) 95 95 #define MV88E6393X_SERDES_POC_PDOWN BIT(5) 96 + #define MV88E6393X_SERDES_POC_AN BIT(3) 97 + #define MV88E6393X_SERDES_CTRL1 0xf003 98 + #define MV88E6393X_SERDES_CTRL1_TX_PDOWN BIT(9) 99 + #define MV88E6393X_SERDES_CTRL1_RX_PDOWN BIT(8) 96 100 97 101 #define MV88E6393X_ERRATA_4_8_REG 0xF074 98 102 #define MV88E6393X_ERRATA_4_8_BIT BIT(14)
+8 -1
drivers/net/dsa/rtl8365mb.c
··· 107 107 #define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC 2112 108 108 109 109 /* Family-specific data and limits */ 110 + #define RTL8365MB_PHYADDRMAX 7 110 111 #define RTL8365MB_NUM_PHYREGS 32 111 112 #define RTL8365MB_PHYREGMAX (RTL8365MB_NUM_PHYREGS - 1) 112 113 #define RTL8365MB_MAX_NUM_PORTS (RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1) ··· 177 176 #define RTL8365MB_INDIRECT_ACCESS_STATUS_REG 0x1F01 178 177 #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG 0x1F02 179 178 #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK GENMASK(4, 0) 180 - #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK GENMASK(6, 5) 179 + #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK GENMASK(7, 5) 181 180 #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK GENMASK(11, 8) 182 181 #define RTL8365MB_PHY_BASE 0x2000 183 182 #define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG 0x1F03 ··· 680 679 u16 val; 681 680 int ret; 682 681 682 + if (phy > RTL8365MB_PHYADDRMAX) 683 + return -EINVAL; 684 + 683 685 if (regnum > RTL8365MB_PHYREGMAX) 684 686 return -EINVAL; 685 687 ··· 707 703 { 708 704 u32 ocp_addr; 709 705 int ret; 706 + 707 + if (phy > RTL8365MB_PHYADDRMAX) 708 + return -EINVAL; 710 709 711 710 if (regnum > RTL8365MB_PHYREGMAX) 712 711 return -EINVAL;
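The one-bit mask change above matters because GENMASK(6, 5) describes only a 2-bit field, silently truncating PHY addresses 4-7, while GENMASK(7, 5) provides the 3 bits needed for addresses up to RTL8365MB_PHYADDRMAX. A simplified userspace sketch of the kernel's GENMASK()/FIELD_PREP()/FIELD_GET() helpers demonstrating the truncation:

```c
#include <stdint.h>

/* Userspace stand-ins for the kernel's bitfield helpers (32-bit only). */
#define GENMASK_U32(h, l) (((~0u) >> (31 - (h))) & ((~0u) << (l)))

/* Shift val into the field described by mask; bits that do not fit
 * inside the mask are silently dropped - that is the bug above. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
    return (val << __builtin_ctz(mask)) & mask;
}

static uint32_t field_get(uint32_t mask, uint32_t reg)
{
    return (reg & mask) >> __builtin_ctz(mask);
}
```

With the 2-bit mask, PHY address 7 round-trips as 3; with the corrected 3-bit mask it survives intact.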
+14 -13
drivers/net/ethernet/aquantia/atlantic/aq_common.h
··· 40 40 41 41 #define AQ_DEVICE_ID_AQC113DEV 0x00C0 42 42 #define AQ_DEVICE_ID_AQC113CS 0x94C0 43 + #define AQ_DEVICE_ID_AQC113CA 0x34C0 43 44 #define AQ_DEVICE_ID_AQC114CS 0x93C0 44 45 #define AQ_DEVICE_ID_AQC113 0x04C0 45 46 #define AQ_DEVICE_ID_AQC113C 0x14C0 46 47 #define AQ_DEVICE_ID_AQC115C 0x12C0 48 + #define AQ_DEVICE_ID_AQC116C 0x11C0 47 49 48 50 #define HW_ATL_NIC_NAME "Marvell (aQuantia) AQtion 10Gbit Network Adapter" 49 51 ··· 55 53 56 54 #define AQ_NIC_RATE_10G BIT(0) 57 55 #define AQ_NIC_RATE_5G BIT(1) 58 - #define AQ_NIC_RATE_5GSR BIT(2) 59 - #define AQ_NIC_RATE_2G5 BIT(3) 60 - #define AQ_NIC_RATE_1G BIT(4) 61 - #define AQ_NIC_RATE_100M BIT(5) 62 - #define AQ_NIC_RATE_10M BIT(6) 63 - #define AQ_NIC_RATE_1G_HALF BIT(7) 64 - #define AQ_NIC_RATE_100M_HALF BIT(8) 65 - #define AQ_NIC_RATE_10M_HALF BIT(9) 56 + #define AQ_NIC_RATE_2G5 BIT(2) 57 + #define AQ_NIC_RATE_1G BIT(3) 58 + #define AQ_NIC_RATE_100M BIT(4) 59 + #define AQ_NIC_RATE_10M BIT(5) 60 + #define AQ_NIC_RATE_1G_HALF BIT(6) 61 + #define AQ_NIC_RATE_100M_HALF BIT(7) 62 + #define AQ_NIC_RATE_10M_HALF BIT(8) 66 63 67 - #define AQ_NIC_RATE_EEE_10G BIT(10) 68 - #define AQ_NIC_RATE_EEE_5G BIT(11) 69 - #define AQ_NIC_RATE_EEE_2G5 BIT(12) 70 - #define AQ_NIC_RATE_EEE_1G BIT(13) 71 - #define AQ_NIC_RATE_EEE_100M BIT(14) 64 + #define AQ_NIC_RATE_EEE_10G BIT(9) 65 + #define AQ_NIC_RATE_EEE_5G BIT(10) 66 + #define AQ_NIC_RATE_EEE_2G5 BIT(11) 67 + #define AQ_NIC_RATE_EEE_1G BIT(12) 68 + #define AQ_NIC_RATE_EEE_100M BIT(13) 72 69 #define AQ_NIC_RATE_EEE_MSK (AQ_NIC_RATE_EEE_10G |\ 73 70 AQ_NIC_RATE_EEE_5G |\ 74 71 AQ_NIC_RATE_EEE_2G5 |\
+2
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 80 80 }; 81 81 82 82 struct aq_stats_s { 83 + u64 brc; 84 + u64 btc; 83 85 u64 uprc; 84 86 u64 mprc; 85 87 u64 bprc;
+22 -12
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 316 316 aq_macsec_init(self); 317 317 #endif 318 318 319 - mutex_lock(&self->fwreq_mutex); 320 - err = self->aq_fw_ops->get_mac_permanent(self->aq_hw, addr); 321 - mutex_unlock(&self->fwreq_mutex); 322 - if (err) 323 - goto err_exit; 319 + if (platform_get_ethdev_address(&self->pdev->dev, self->ndev) != 0) { 320 + // If DT has none or an invalid one, ask device for MAC address 321 + mutex_lock(&self->fwreq_mutex); 322 + err = self->aq_fw_ops->get_mac_permanent(self->aq_hw, addr); 323 + mutex_unlock(&self->fwreq_mutex); 324 324 325 - eth_hw_addr_set(self->ndev, addr); 325 + if (err) 326 + goto err_exit; 326 327 327 - if (!is_valid_ether_addr(self->ndev->dev_addr) || 328 - !aq_nic_is_valid_ether_addr(self->ndev->dev_addr)) { 329 - netdev_warn(self->ndev, "MAC is invalid, will use random."); 330 - eth_hw_addr_random(self->ndev); 328 + if (is_valid_ether_addr(addr) && 329 + aq_nic_is_valid_ether_addr(addr)) { 330 + eth_hw_addr_set(self->ndev, addr); 331 + } else { 332 + netdev_warn(self->ndev, "MAC is invalid, will use random."); 333 + eth_hw_addr_random(self->ndev); 334 + } 331 335 } 332 336 333 337 #if defined(AQ_CFG_MAC_ADDR_PERMANENT) ··· 909 905 data[++i] = stats->mbtc; 910 906 data[++i] = stats->bbrc; 911 907 data[++i] = stats->bbtc; 912 - data[++i] = stats->ubrc + stats->mbrc + stats->bbrc; 913 - data[++i] = stats->ubtc + stats->mbtc + stats->bbtc; 908 + if (stats->brc) 909 + data[++i] = stats->brc; 910 + else 911 + data[++i] = stats->ubrc + stats->mbrc + stats->bbrc; 912 + if (stats->btc) 913 + data[++i] = stats->btc; 914 + else 915 + data[++i] = stats->ubtc + stats->mbtc + stats->bbtc; 914 916 data[++i] = stats->dma_pkt_rc; 915 917 data[++i] = stats->dma_pkt_tc; 916 918 data[++i] = stats->dma_oct_rc;
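The aq_nic change above reorders MAC selection: a device-tree address wins, the firmware's permanent address is the fallback, and a random address is the last resort, with validity checked before use. A simplified sketch of that pick order — the helpers are stand-ins for platform_get_ethdev_address(), get_mac_permanent() and eth_hw_addr_random(), and the validity test mirrors is_valid_ether_addr():

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Valid = not all-zero and not multicast (I/G bit of the first octet
 * clear), mirroring the kernel's is_valid_ether_addr(). */
bool mac_is_valid(const uint8_t mac[6])
{
    static const uint8_t zero[6];
    return memcmp(mac, zero, 6) != 0 && !(mac[0] & 0x01);
}

/* Returns 0 if the DT address was used, 1 for the firmware address,
 * -1 if the caller must fall back to a random address. */
int mac_pick(const uint8_t dt[6], const uint8_t fw[6], uint8_t out[6])
{
    if (dt && mac_is_valid(dt)) {
        memcpy(out, dt, 6);
        return 0;
    }
    if (fw && mac_is_valid(fw)) {
        memcpy(out, fw, 6);
        return 1;
    }
    return -1;
}
```

Querying the firmware only when the device tree has no usable address, as the hunk does, also avoids a needless mailbox round-trip under fwreq_mutex.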
+6 -1
drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
··· 49 49 { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113), }, 50 50 { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113C), }, 51 51 { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC115C), }, 52 + { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113CA), }, 53 + { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC116C), }, 52 54 53 55 {} 54 56 }; ··· 87 85 { AQ_DEVICE_ID_AQC113CS, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, }, 88 86 { AQ_DEVICE_ID_AQC114CS, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, }, 89 87 { AQ_DEVICE_ID_AQC113C, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, }, 90 - { AQ_DEVICE_ID_AQC115C, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, }, 88 + { AQ_DEVICE_ID_AQC115C, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc115c, }, 89 + { AQ_DEVICE_ID_AQC113CA, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, }, 90 + { AQ_DEVICE_ID_AQC116C, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc116c, }, 91 + 91 92 }; 92 93 93 94 MODULE_DEVICE_TABLE(pci, aq_pci_tbl);
-3
drivers/net/ethernet/aquantia/atlantic/aq_vec.c
··· 362 362 { 363 363 unsigned int count; 364 364 365 - WARN_ONCE(!aq_vec_is_valid_tc(self, tc), 366 - "Invalid tc %u (#rx=%u, #tx=%u)\n", 367 - tc, self->rx_rings, self->tx_rings); 368 365 if (!aq_vec_is_valid_tc(self, tc)) 369 366 return 0; 370 367
+13 -2
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils.c
··· 867 867 int hw_atl_utils_update_stats(struct aq_hw_s *self) 868 868 { 869 869 struct aq_stats_s *cs = &self->curr_stats; 870 + struct aq_stats_s curr_stats = *cs; 870 871 struct hw_atl_utils_mbox mbox; 872 + bool corrupted_stats = false; 871 873 872 874 hw_atl_utils_mpi_read_stats(self, &mbox); 873 875 874 - #define AQ_SDELTA(_N_) (self->curr_stats._N_ += \ 875 - mbox.stats._N_ - self->last_stats._N_) 876 + #define AQ_SDELTA(_N_) \ 877 + do { \ 878 + if (!corrupted_stats && \ 879 + ((s64)(mbox.stats._N_ - self->last_stats._N_)) >= 0) \ 880 + curr_stats._N_ += mbox.stats._N_ - self->last_stats._N_; \ 881 + else \ 882 + corrupted_stats = true; \ 883 + } while (0) 876 884 877 885 if (self->aq_link_status.mbps) { 878 886 AQ_SDELTA(uprc); ··· 900 892 AQ_SDELTA(bbrc); 901 893 AQ_SDELTA(bbtc); 902 894 AQ_SDELTA(dpc); 895 + 896 + if (!corrupted_stats) 897 + *cs = curr_stats; 903 898 } 904 899 #undef AQ_SDELTA 905 900
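The reworked AQ_SDELTA above accumulates firmware counter deltas into a scratch copy and discards the whole update if any delta goes negative, which signals a corrupted or rolled-back mailbox snapshot. A hedged sketch of that all-or-nothing accumulation — the two-counter struct is illustrative, not the real aq_stats_s:

```c
#include <stdint.h>
#include <stdbool.h>

struct snap {
    uint64_t uprc;  /* rx unicast packets (illustrative subset) */
    uint64_t uptc;  /* tx unicast packets */
};

/* Add (now - last) into *cur, committing only if every delta is
 * non-negative; a negative delta marks the snapshot as corrupted
 * and leaves the accumulated stats untouched. */
bool stats_apply(struct snap *cur, const struct snap *last,
                 const struct snap *now)
{
    struct snap scratch = *cur;  /* work on a copy, commit at the end */
    int64_t d;

    d = (int64_t)(now->uprc - last->uprc);
    if (d < 0)
        return false;
    scratch.uprc += (uint64_t)d;

    d = (int64_t)(now->uptc - last->uptc);
    if (d < 0)
        return false;
    scratch.uptc += (uint64_t)d;

    *cur = scratch;  /* all deltas sane: commit atomically */
    return true;
}
```

Committing via a scratch copy is what keeps a half-corrupted snapshot from polluting some counters but not others.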
-3
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c
··· 132 132 if (speed & AQ_NIC_RATE_5G) 133 133 rate |= FW2X_RATE_5G; 134 134 135 - if (speed & AQ_NIC_RATE_5GSR) 136 - rate |= FW2X_RATE_5G; 137 - 138 135 if (speed & AQ_NIC_RATE_2G5) 139 136 rate |= FW2X_RATE_2G5; 140 137
+18 -4
drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.c
··· 65 65 AQ_NIC_RATE_5G | 66 66 AQ_NIC_RATE_2G5 | 67 67 AQ_NIC_RATE_1G | 68 - AQ_NIC_RATE_1G_HALF | 69 68 AQ_NIC_RATE_100M | 70 - AQ_NIC_RATE_100M_HALF | 71 - AQ_NIC_RATE_10M | 72 - AQ_NIC_RATE_10M_HALF, 69 + AQ_NIC_RATE_10M, 70 + }; 71 + 72 + const struct aq_hw_caps_s hw_atl2_caps_aqc115c = { 73 + DEFAULT_BOARD_BASIC_CAPABILITIES, 74 + .media_type = AQ_HW_MEDIA_TYPE_TP, 75 + .link_speed_msk = AQ_NIC_RATE_2G5 | 76 + AQ_NIC_RATE_1G | 77 + AQ_NIC_RATE_100M | 78 + AQ_NIC_RATE_10M, 79 + }; 80 + 81 + const struct aq_hw_caps_s hw_atl2_caps_aqc116c = { 82 + DEFAULT_BOARD_BASIC_CAPABILITIES, 83 + .media_type = AQ_HW_MEDIA_TYPE_TP, 84 + .link_speed_msk = AQ_NIC_RATE_1G | 85 + AQ_NIC_RATE_100M | 86 + AQ_NIC_RATE_10M, 73 87 }; 74 88 75 89 static u32 hw_atl2_sem_act_rslvr_get(struct aq_hw_s *self)
+2
drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2.h
··· 9 9 #include "aq_common.h" 10 10 11 11 extern const struct aq_hw_caps_s hw_atl2_caps_aqc113; 12 + extern const struct aq_hw_caps_s hw_atl2_caps_aqc115c; 13 + extern const struct aq_hw_caps_s hw_atl2_caps_aqc116c; 12 14 extern const struct aq_hw_ops hw_atl2_ops; 13 15 14 16 #endif /* HW_ATL2_H */
+34 -4
drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils.h
··· 239 239 u8 minor; 240 240 u16 build; 241 241 } phy; 242 - u32 rsvd; 242 + u32 drv_iface_ver:4; 243 + u32 rsvd:28; 243 244 }; 244 245 245 246 struct link_status_s { ··· 425 424 u16 rsvd2; 426 425 }; 427 426 428 - struct statistics_s { 427 + struct statistics_a0_s { 429 428 struct { 430 429 u32 link_up; 431 430 u32 link_down; ··· 456 455 } msm; 457 456 u32 main_loop_cycles; 458 457 u32 reserve_fw_gap; 458 + }; 459 + 460 + struct __packed statistics_b0_s { 461 + u64 rx_good_octets; 462 + u64 rx_pause_frames; 463 + u64 rx_good_frames; 464 + u64 rx_errors; 465 + u64 rx_unicast_frames; 466 + u64 rx_multicast_frames; 467 + u64 rx_broadcast_frames; 468 + 469 + u64 tx_good_octets; 470 + u64 tx_pause_frames; 471 + u64 tx_good_frames; 472 + u64 tx_errors; 473 + u64 tx_unicast_frames; 474 + u64 tx_multicast_frames; 475 + u64 tx_broadcast_frames; 476 + 477 + u32 main_loop_cycles; 478 + }; 479 + 480 + struct __packed statistics_s { 481 + union __packed { 482 + struct statistics_a0_s a0; 483 + struct statistics_b0_s b0; 484 + }; 459 485 }; 460 486 461 487 struct filter_caps_s { ··· 573 545 u32 rsvd5; 574 546 }; 575 547 576 - struct fw_interface_out { 548 + struct __packed fw_interface_out { 577 549 struct transaction_counter_s transaction_id; 578 550 struct version_s version; 579 551 struct link_status_s link_status; ··· 597 569 struct core_dump_s core_dump; 598 570 u32 rsvd11; 599 571 struct statistics_s stats; 600 - u32 rsvd12; 601 572 struct filter_caps_s filter_caps; 602 573 struct device_caps_s device_caps; 603 574 u32 rsvd13; ··· 618 591 #define AQ_HOST_MODE_SLEEP_PROXY 2U 619 592 #define AQ_HOST_MODE_LOW_POWER 3U 620 593 #define AQ_HOST_MODE_SHUTDOWN 4U 594 + 595 + #define AQ_A2_FW_INTERFACE_A0 0 596 + #define AQ_A2_FW_INTERFACE_B0 1 621 597 622 598 int hw_atl2_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops); 623 599
+86 -22
drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
··· 84 84 if (cnt > AQ_A2_FW_READ_TRY_MAX) 85 85 return -ETIME; 86 86 if (tid1.transaction_cnt_a != tid1.transaction_cnt_b) 87 - udelay(1); 87 + mdelay(1); 88 88 } while (tid1.transaction_cnt_a != tid1.transaction_cnt_b); 89 89 90 90 hw_atl2_mif_shared_buf_read(self, offset, (u32 *)data, dwords); ··· 154 154 { 155 155 link_options->rate_10G = !!(speed & AQ_NIC_RATE_10G); 156 156 link_options->rate_5G = !!(speed & AQ_NIC_RATE_5G); 157 - link_options->rate_N5G = !!(speed & AQ_NIC_RATE_5GSR); 157 + link_options->rate_N5G = link_options->rate_5G; 158 158 link_options->rate_2P5G = !!(speed & AQ_NIC_RATE_2G5); 159 159 link_options->rate_N2P5G = link_options->rate_2P5G; 160 160 link_options->rate_1G = !!(speed & AQ_NIC_RATE_1G); ··· 192 192 rate |= AQ_NIC_RATE_10G; 193 193 if (lkp_link_caps->rate_5G) 194 194 rate |= AQ_NIC_RATE_5G; 195 - if (lkp_link_caps->rate_N5G) 196 - rate |= AQ_NIC_RATE_5GSR; 197 195 if (lkp_link_caps->rate_2P5G) 198 196 rate |= AQ_NIC_RATE_2G5; 199 197 if (lkp_link_caps->rate_1G) ··· 333 335 return 0; 334 336 } 335 337 336 - static int aq_a2_fw_update_stats(struct aq_hw_s *self) 338 + static void aq_a2_fill_a0_stats(struct aq_hw_s *self, 339 + struct statistics_s *stats) 337 340 { 338 341 struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv; 339 - struct statistics_s stats; 342 + struct aq_stats_s *cs = &self->curr_stats; 343 + struct aq_stats_s curr_stats = *cs; 344 + bool corrupted_stats = false; 340 345 341 - hw_atl2_shared_buffer_read_safe(self, stats, &stats); 342 - 343 - #define AQ_SDELTA(_N_, _F_) (self->curr_stats._N_ += \ 344 - stats.msm._F_ - priv->last_stats.msm._F_) 346 + #define AQ_SDELTA(_N, _F) \ 347 + do { \ 348 + if (!corrupted_stats && \ 349 + ((s64)(stats->a0.msm._F - priv->last_stats.a0.msm._F)) >= 0) \ 350 + curr_stats._N += stats->a0.msm._F - priv->last_stats.a0.msm._F;\ 351 + else \ 352 + corrupted_stats = true; \ 353 + } while (0) 345 354 346 355 if (self->aq_link_status.mbps) { 347 356 AQ_SDELTA(uprc, 
rx_unicast_frames); ··· 367 362 AQ_SDELTA(mbtc, tx_multicast_octets); 368 363 AQ_SDELTA(bbrc, rx_broadcast_octets); 369 364 AQ_SDELTA(bbtc, tx_broadcast_octets); 365 + 366 + if (!corrupted_stats) 367 + *cs = curr_stats; 370 368 } 371 369 #undef AQ_SDELTA 372 - self->curr_stats.dma_pkt_rc = 373 - hw_atl_stats_rx_dma_good_pkt_counter_get(self); 374 - self->curr_stats.dma_pkt_tc = 375 - hw_atl_stats_tx_dma_good_pkt_counter_get(self); 376 - self->curr_stats.dma_oct_rc = 377 - hw_atl_stats_rx_dma_good_octet_counter_get(self); 378 - self->curr_stats.dma_oct_tc = 379 - hw_atl_stats_tx_dma_good_octet_counter_get(self); 380 - self->curr_stats.dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self); 370 + 371 + } 372 + 373 + static void aq_a2_fill_b0_stats(struct aq_hw_s *self, 374 + struct statistics_s *stats) 375 + { 376 + struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv; 377 + struct aq_stats_s *cs = &self->curr_stats; 378 + struct aq_stats_s curr_stats = *cs; 379 + bool corrupted_stats = false; 380 + 381 + #define AQ_SDELTA(_N, _F) \ 382 + do { \ 383 + if (!corrupted_stats && \ 384 + ((s64)(stats->b0._F - priv->last_stats.b0._F)) >= 0) \ 385 + curr_stats._N += stats->b0._F - priv->last_stats.b0._F; \ 386 + else \ 387 + corrupted_stats = true; \ 388 + } while (0) 389 + 390 + if (self->aq_link_status.mbps) { 391 + AQ_SDELTA(uprc, rx_unicast_frames); 392 + AQ_SDELTA(mprc, rx_multicast_frames); 393 + AQ_SDELTA(bprc, rx_broadcast_frames); 394 + AQ_SDELTA(erpr, rx_errors); 395 + AQ_SDELTA(brc, rx_good_octets); 396 + 397 + AQ_SDELTA(uptc, tx_unicast_frames); 398 + AQ_SDELTA(mptc, tx_multicast_frames); 399 + AQ_SDELTA(bptc, tx_broadcast_frames); 400 + AQ_SDELTA(erpt, tx_errors); 401 + AQ_SDELTA(btc, tx_good_octets); 402 + 403 + if (!corrupted_stats) 404 + *cs = curr_stats; 405 + } 406 + #undef AQ_SDELTA 407 + } 408 + 409 + static int aq_a2_fw_update_stats(struct aq_hw_s *self) 410 + { 411 + struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv; 412 + struct 
aq_stats_s *cs = &self->curr_stats; 413 + struct statistics_s stats; 414 + struct version_s version; 415 + int err; 416 + 417 + err = hw_atl2_shared_buffer_read_safe(self, version, &version); 418 + if (err) 419 + return err; 420 + 421 + err = hw_atl2_shared_buffer_read_safe(self, stats, &stats); 422 + if (err) 423 + return err; 424 + 425 + if (version.drv_iface_ver == AQ_A2_FW_INTERFACE_A0) 426 + aq_a2_fill_a0_stats(self, &stats); 427 + else 428 + aq_a2_fill_b0_stats(self, &stats); 429 + 430 + cs->dma_pkt_rc = hw_atl_stats_rx_dma_good_pkt_counter_get(self); 431 + cs->dma_pkt_tc = hw_atl_stats_tx_dma_good_pkt_counter_get(self); 432 + cs->dma_oct_rc = hw_atl_stats_rx_dma_good_octet_counter_get(self); 433 + cs->dma_oct_tc = hw_atl_stats_tx_dma_good_octet_counter_get(self); 434 + cs->dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self); 381 435 382 436 memcpy(&priv->last_stats, &stats, sizeof(stats)); 383 437 ··· 563 499 hw_atl2_shared_buffer_read_safe(self, version, &version); 564 500 565 501 /* A2 FW version is stored in reverse order */ 566 - return version.mac.major << 24 | 567 - version.mac.minor << 16 | 568 - version.mac.build; 502 + return version.bundle.major << 24 | 503 + version.bundle.minor << 16 | 504 + version.bundle.build; 569 505 } 570 506 571 507 int hw_atl2_utils_get_action_resolve_table_caps(struct aq_hw_s *self,
+2
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 4550 4550 4551 4551 fsl_mc_portal_free(priv->mc_io); 4552 4552 4553 + destroy_workqueue(priv->dpaa2_ptp_wq); 4554 + 4553 4555 dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name); 4554 4556 4555 4557 free_netdev(net_dev);
+6 -22
drivers/net/ethernet/ibm/ibmvnic.c
··· 628 628 old_buff_size = adapter->prev_rx_buf_sz; 629 629 new_buff_size = adapter->cur_rx_buf_sz; 630 630 631 - /* Require buff size to be exactly same for now */ 632 - if (old_buff_size != new_buff_size) 633 - return false; 634 - 635 - if (old_num_pools == new_num_pools && old_pool_size == new_pool_size) 636 - return true; 637 - 638 - if (old_num_pools < adapter->min_rx_queues || 639 - old_num_pools > adapter->max_rx_queues || 640 - old_pool_size < adapter->min_rx_add_entries_per_subcrq || 641 - old_pool_size > adapter->max_rx_add_entries_per_subcrq) 631 + if (old_buff_size != new_buff_size || 632 + old_num_pools != new_num_pools || 633 + old_pool_size != new_pool_size) 642 634 return false; 643 635 644 636 return true; ··· 866 874 old_mtu = adapter->prev_mtu; 867 875 new_mtu = adapter->req_mtu; 868 876 869 - /* Require MTU to be exactly same to reuse pools for now */ 870 - if (old_mtu != new_mtu) 871 - return false; 872 - 873 - if (old_num_pools == new_num_pools && old_pool_size == new_pool_size) 874 - return true; 875 - 876 - if (old_num_pools < adapter->min_tx_queues || 877 - old_num_pools > adapter->max_tx_queues || 878 - old_pool_size < adapter->min_tx_entries_per_subcrq || 879 - old_pool_size > adapter->max_tx_entries_per_subcrq) 877 + if (old_mtu != new_mtu || 878 + old_num_pools != new_num_pools || 879 + old_pool_size != new_pool_size) 880 880 return false; 881 881 882 882 return true;
+1
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 383 383 while (i--) { 384 384 dma = xsk_buff_xdp_get_dma(*xdp); 385 385 rx_desc->read.pkt_addr = cpu_to_le64(dma); 386 + rx_desc->wb.status_error0 = 0; 386 387 387 388 rx_desc++; 388 389 xdp++;
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 7458 7458 7459 7459 shared = num_present_cpus() - priv->nthreads; 7460 7460 if (shared > 0) 7461 - bitmap_fill(&priv->lock_map, 7461 + bitmap_set(&priv->lock_map, 0, 7462 7462 min_t(int, shared, MVPP2_MAX_THREADS)); 7463 7463 7464 7464 for (i = 0; i < MVPP2_MAX_THREADS; i++) {
+1 -1
drivers/net/ethernet/marvell/octeontx2/af/rvu.c
··· 2341 2341 goto free_regions; 2342 2342 break; 2343 2343 default: 2344 - return err; 2344 + goto free_regions; 2345 2345 } 2346 2346 2347 2347 mw->mbox_wq = alloc_workqueue(name,
+3 -3
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 670 670 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_T, SPEED_1000, 671 671 ETHTOOL_LINK_MODE_1000baseT_Full_BIT); 672 672 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_CX_SGMII, SPEED_1000, 673 - ETHTOOL_LINK_MODE_1000baseKX_Full_BIT); 673 + ETHTOOL_LINK_MODE_1000baseX_Full_BIT); 674 674 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_KX, SPEED_1000, 675 675 ETHTOOL_LINK_MODE_1000baseKX_Full_BIT); 676 676 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_T, SPEED_10000, ··· 682 682 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_KR, SPEED_10000, 683 683 ETHTOOL_LINK_MODE_10000baseKR_Full_BIT); 684 684 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_CR, SPEED_10000, 685 - ETHTOOL_LINK_MODE_10000baseKR_Full_BIT); 685 + ETHTOOL_LINK_MODE_10000baseCR_Full_BIT); 686 686 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_SR, SPEED_10000, 687 - ETHTOOL_LINK_MODE_10000baseKR_Full_BIT); 687 + ETHTOOL_LINK_MODE_10000baseSR_Full_BIT); 688 688 MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_20GBASE_KR2, SPEED_20000, 689 689 ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT, 690 690 ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT);
+7 -2
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2286 2286 bool carry_xdp_prog) 2287 2287 { 2288 2288 struct bpf_prog *xdp_prog; 2289 - int i, t; 2289 + int i, t, ret; 2290 2290 2291 - mlx4_en_copy_priv(tmp, priv, prof); 2291 + ret = mlx4_en_copy_priv(tmp, priv, prof); 2292 + if (ret) { 2293 + en_warn(priv, "%s: mlx4_en_copy_priv() failed, return\n", 2294 + __func__); 2295 + return ret; 2296 + } 2292 2297 2293 2298 if (mlx4_en_alloc_resources(tmp)) { 2294 2299 en_warn(priv,
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 341 341 case MLX5_CMD_OP_DEALLOC_SF: 342 342 case MLX5_CMD_OP_DESTROY_UCTX: 343 343 case MLX5_CMD_OP_DESTROY_UMEM: 344 + case MLX5_CMD_OP_MODIFY_RQT: 344 345 return MLX5_CMD_STAT_OK; 345 346 346 347 case MLX5_CMD_OP_QUERY_HCA_CAP: ··· 447 446 case MLX5_CMD_OP_MODIFY_TIS: 448 447 case MLX5_CMD_OP_QUERY_TIS: 449 448 case MLX5_CMD_OP_CREATE_RQT: 450 - case MLX5_CMD_OP_MODIFY_RQT: 451 449 case MLX5_CMD_OP_QUERY_RQT: 452 450 453 451 case MLX5_CMD_OP_CREATE_FLOW_TABLE:
+40 -1
drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c
··· 13 13 unsigned int max_nch;
14 14 u32 drop_rqn;
15 15
16 + struct mlx5e_packet_merge_param pkt_merge_param;
17 + struct rw_semaphore pkt_merge_param_sem;
18 +
16 19 struct mlx5e_rss *rss[MLX5E_MAX_NUM_RSS];
17 20 bool rss_active;
18 21 u32 rss_rqns[MLX5E_INDIR_RQT_SIZE];
··· 395 392 if (err)
396 393 goto out;
397 394
395 + /* Separated from the channels RQs, does not share pkt_merge state with them */
398 396 mlx5e_tir_builder_build_rqt(builder, res->mdev->mlx5e_res.hw_objs.td.tdn,
399 397 mlx5e_rqt_get_rqtn(&res->ptp.rqt),
400 398 inner_ft_support);
··· 450 446 res->features = features;
451 447 res->max_nch = max_nch;
452 448 res->drop_rqn = drop_rqn;
449 +
450 + res->pkt_merge_param = *init_pkt_merge_param;
451 + init_rwsem(&res->pkt_merge_param_sem);
453 452
454 453 err = mlx5e_rx_res_rss_init_def(res, init_pkt_merge_param, init_nch);
455 454 if (err)
··· 520 513 return mlx5e_tir_get_tirn(&res->ptp.tir);
521 514 }
522 515
523 - u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix)
516 + static u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix)
524 517 {
525 518 return mlx5e_rqt_get_rqtn(&res->channels[ix].direct_rqt);
526 519 }
··· 663 656 if (!builder)
664 657 return -ENOMEM;
665 658
659 + down_write(&res->pkt_merge_param_sem);
660 + res->pkt_merge_param = *pkt_merge_param;
661 +
666 662 mlx5e_tir_builder_build_packet_merge(builder, pkt_merge_param);
667 663
668 664 final_err = 0;
··· 691 681 }
692 682 }
693 683
684 + up_write(&res->pkt_merge_param_sem);
694 685 mlx5e_tir_builder_free(builder);
695 686 return final_err;
696 687 }
··· 699 688 struct mlx5e_rss_params_hash mlx5e_rx_res_get_current_hash(struct mlx5e_rx_res *res)
700 689 {
701 690 return mlx5e_rss_get_hash(res->rss[0]);
691 + }
692 +
693 + int mlx5e_rx_res_tls_tir_create(struct mlx5e_rx_res *res, unsigned int rxq,
694 + struct mlx5e_tir *tir)
695 + {
696 + bool inner_ft_support = res->features & MLX5E_RX_RES_FEATURE_INNER_FT;
697 + struct mlx5e_tir_builder *builder;
698 + u32 rqtn;
699 + int err;
700 +
701 + builder = mlx5e_tir_builder_alloc(false);
702 + if (!builder)
703 + return -ENOMEM;
704 +
705 + rqtn = mlx5e_rx_res_get_rqtn_direct(res, rxq);
706 +
707 + mlx5e_tir_builder_build_rqt(builder, res->mdev->mlx5e_res.hw_objs.td.tdn, rqtn,
708 + inner_ft_support);
709 + mlx5e_tir_builder_build_direct(builder);
710 + mlx5e_tir_builder_build_tls(builder);
711 + down_read(&res->pkt_merge_param_sem);
712 + mlx5e_tir_builder_build_packet_merge(builder, &res->pkt_merge_param);
713 + err = mlx5e_tir_init(tir, builder, res->mdev, false);
714 + up_read(&res->pkt_merge_param_sem);
715 +
716 + mlx5e_tir_builder_free(builder);
717 +
718 + return err;
702 719 }
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h
··· 37 37 u32 mlx5e_rx_res_get_tirn_rss_inner(struct mlx5e_rx_res *res, enum mlx5_traffic_types tt); 38 38 u32 mlx5e_rx_res_get_tirn_ptp(struct mlx5e_rx_res *res); 39 39 40 - /* RQTN getters for modules that create their own TIRs */ 41 - u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix); 42 - 43 40 /* Activate/deactivate API */ 44 41 void mlx5e_rx_res_channels_activate(struct mlx5e_rx_res *res, struct mlx5e_channels *chs); 45 42 void mlx5e_rx_res_channels_deactivate(struct mlx5e_rx_res *res); ··· 66 69 /* Workaround for hairpin */ 67 70 struct mlx5e_rss_params_hash mlx5e_rx_res_get_current_hash(struct mlx5e_rx_res *res); 68 71 72 + /* Accel TIRs */ 73 + int mlx5e_rx_res_tls_tir_create(struct mlx5e_rx_res *res, unsigned int rxq, 74 + struct mlx5e_tir *tir); 69 75 #endif /* __MLX5_EN_RX_RES_H__ */
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.c
··· 191 191 eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2; 192 192 eseg->swp_inner_l4_offset = 193 193 (skb->csum_start + skb->head - skb->data) / 2; 194 - if (skb->protocol == htons(ETH_P_IPV6)) 194 + if (inner_ip_hdr(skb)->version == 6) 195 195 eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6; 196 196 break; 197 197 default:
+1 -23
drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c
··· 100 100 return resp_list; 101 101 } 102 102 103 - static int mlx5e_ktls_create_tir(struct mlx5_core_dev *mdev, struct mlx5e_tir *tir, u32 rqtn) 104 - { 105 - struct mlx5e_tir_builder *builder; 106 - int err; 107 - 108 - builder = mlx5e_tir_builder_alloc(false); 109 - if (!builder) 110 - return -ENOMEM; 111 - 112 - mlx5e_tir_builder_build_rqt(builder, mdev->mlx5e_res.hw_objs.td.tdn, rqtn, false); 113 - mlx5e_tir_builder_build_direct(builder); 114 - mlx5e_tir_builder_build_tls(builder); 115 - err = mlx5e_tir_init(tir, builder, mdev, false); 116 - 117 - mlx5e_tir_builder_free(builder); 118 - 119 - return err; 120 - } 121 - 122 103 static void accel_rule_handle_work(struct work_struct *work) 123 104 { 124 105 struct mlx5e_ktls_offload_context_rx *priv_rx; ··· 590 609 struct mlx5_core_dev *mdev; 591 610 struct mlx5e_priv *priv; 592 611 int rxq, err; 593 - u32 rqtn; 594 612 595 613 tls_ctx = tls_get_ctx(sk); 596 614 priv = netdev_priv(netdev); ··· 615 635 priv_rx->sw_stats = &priv->tls->sw_stats; 616 636 mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx); 617 637 618 - rqtn = mlx5e_rx_res_get_rqtn_direct(priv->rx_res, rxq); 619 - 620 - err = mlx5e_ktls_create_tir(mdev, &priv_rx->tir, rqtn); 638 + err = mlx5e_rx_res_tls_tir_create(priv->rx_res, rxq, &priv_rx->tir); 621 639 if (err) 622 640 goto err_create_tir; 623 641
+4
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 1080 1080 &MLX5E_STATS_GRP(pme), 1081 1081 &MLX5E_STATS_GRP(channels), 1082 1082 &MLX5E_STATS_GRP(per_port_buff_congest), 1083 + #ifdef CONFIG_MLX5_EN_IPSEC 1084 + &MLX5E_STATS_GRP(ipsec_sw), 1085 + &MLX5E_STATS_GRP(ipsec_hw), 1086 + #endif 1083 1087 }; 1084 1088 1085 1089 static unsigned int mlx5e_ul_rep_stats_grps_num(struct mlx5e_priv *priv)
+3 -5
drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
··· 543 543 u16 klm_entries, u16 index) 544 544 { 545 545 struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo; 546 - u16 entries, pi, i, header_offset, err, wqe_bbs, new_entries; 546 + u16 entries, pi, header_offset, err, wqe_bbs, new_entries; 547 547 u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey; 548 548 struct page *page = shampo->last_page; 549 549 u64 addr = shampo->last_addr; 550 550 struct mlx5e_dma_info *dma_info; 551 551 struct mlx5e_umr_wqe *umr_wqe; 552 - int headroom; 552 + int headroom, i; 553 553 554 554 headroom = rq->buff.headroom; 555 555 new_entries = klm_entries - (shampo->pi & (MLX5_UMR_KLM_ALIGNMENT - 1)); ··· 601 601 602 602 err_unmap: 603 603 while (--i >= 0) { 604 - if (--index < 0) 605 - index = shampo->hd_per_wq - 1; 606 - dma_info = &shampo->info[index]; 604 + dma_info = &shampo->info[--index]; 607 605 if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) { 608 606 dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE); 609 607 mlx5e_page_release(rq, dma_info, true);
+2 -2
drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
··· 130 130 /* If vports min rate divider is 0 but their group has bw_share configured, then 131 131 * need to set bw_share for vports to minimal value. 132 132 */ 133 - if (!group_level && !max_guarantee && group->bw_share) 133 + if (!group_level && !max_guarantee && group && group->bw_share) 134 134 return 1; 135 135 return 0; 136 136 } ··· 423 423 return err; 424 424 425 425 /* Recalculate bw share weights of old and new groups */ 426 - if (vport->qos.bw_share) { 426 + if (vport->qos.bw_share || new_group->bw_share) { 427 427 esw_qos_normalize_vports_min_rate(esw, curr_group, extack); 428 428 esw_qos_normalize_vports_min_rate(esw, new_group, extack); 429 429 }
+16 -4
drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
··· 329 329 esw_is_indir_table(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr) 330 330 { 331 331 struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr; 332 + bool result = false; 332 333 int i; 333 334 334 - for (i = esw_attr->split_count; i < esw_attr->out_count; i++) 335 + /* Indirect table is supported only for flows with in_port uplink 336 + * and the destination is vport on the same eswitch as the uplink, 337 + * return false in case at least one of destinations doesn't meet 338 + * this criteria. 339 + */ 340 + for (i = esw_attr->split_count; i < esw_attr->out_count; i++) { 335 341 if (esw_attr->dests[i].rep && 336 342 mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport, 337 - esw_attr->dests[i].mdev)) 338 - return true; 339 - return false; 343 + esw_attr->dests[i].mdev)) { 344 + result = true; 345 + } else { 346 + result = false; 347 + break; 348 + } 349 + } 350 + return result; 340 351 } 341 352 342 353 static int ··· 2523 2512 struct mlx5_eswitch *esw = master->priv.eswitch; 2524 2513 struct mlx5_flow_table_attr ft_attr = { 2525 2514 .max_fte = 1, .prio = 0, .level = 0, 2515 + .flags = MLX5_FLOW_TABLE_OTHER_VPORT, 2526 2516 }; 2527 2517 struct mlx5_flow_namespace *egress_ns; 2528 2518 struct mlx5_flow_table *acl;
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/health.c
··· 835 835 836 836 health->timer.expires = jiffies + msecs_to_jiffies(poll_interval_ms); 837 837 add_timer(&health->timer); 838 + 839 + if (mlx5_core_is_pf(dev) && MLX5_CAP_MCAM_REG(dev, mrtc)) 840 + queue_delayed_work(health->wq, &health->update_fw_log_ts_work, 0); 838 841 } 839 842 840 843 void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health) ··· 905 902 INIT_WORK(&health->fatal_report_work, mlx5_fw_fatal_reporter_err_work); 906 903 INIT_WORK(&health->report_work, mlx5_fw_reporter_err_work); 907 904 INIT_DELAYED_WORK(&health->update_fw_log_ts_work, mlx5_health_log_ts_update); 908 - if (mlx5_core_is_pf(dev)) 909 - queue_delayed_work(health->wq, &health->update_fw_log_ts_work, 0); 910 905 911 906 return 0; 912 907
+1
drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
··· 608 608 if (port_sel->tunnel) 609 609 mlx5_destroy_ttc_table(port_sel->inner.ttc); 610 610 mlx5_lag_destroy_definers(ldev); 611 + memset(port_sel, 0, sizeof(*port_sel)); 611 612 }
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/lib/tout.c
··· 31 31 dev->timeouts->to[type] = val; 32 32 } 33 33 34 - static void tout_set_def_val(struct mlx5_core_dev *dev) 34 + void mlx5_tout_set_def_val(struct mlx5_core_dev *dev) 35 35 { 36 36 int i; 37 37 38 - for (i = MLX5_TO_FW_PRE_INIT_TIMEOUT_MS; i < MAX_TIMEOUT_TYPES; i++) 38 + for (i = 0; i < MAX_TIMEOUT_TYPES; i++) 39 39 tout_set(dev, tout_def_sw_val[i], i); 40 40 } 41 41 ··· 45 45 if (!dev->timeouts) 46 46 return -ENOMEM; 47 47 48 - tout_set_def_val(dev); 49 48 return 0; 50 49 } 51 50
+1
drivers/net/ethernet/mellanox/mlx5/core/lib/tout.h
··· 34 34 void mlx5_tout_cleanup(struct mlx5_core_dev *dev); 35 35 void mlx5_tout_query_iseg(struct mlx5_core_dev *dev); 36 36 int mlx5_tout_query_dtor(struct mlx5_core_dev *dev); 37 + void mlx5_tout_set_def_val(struct mlx5_core_dev *dev); 37 38 u64 _mlx5_tout_ms(struct mlx5_core_dev *dev, enum mlx5_timeouts_types type); 38 39 39 40 #define mlx5_tout_ms(dev, type) _mlx5_tout_ms(dev, MLX5_TO_##type##_MS)
+15 -15
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 992 992 if (mlx5_core_is_pf(dev))
993 993 pcie_print_link_status(dev->pdev);
994 994
995 - err = mlx5_tout_init(dev);
996 - if (err) {
997 - mlx5_core_err(dev, "Failed initializing timeouts, aborting\n");
998 - return err;
999 - }
995 + mlx5_tout_set_def_val(dev);
1000 996
1001 997 /* wait for firmware to accept initialization segments configurations
1002 998 */
··· 1001 1005 if (err) {
1002 1006 mlx5_core_err(dev, "Firmware over %llu MS in pre-initializing state, aborting\n",
1003 1007 mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
1004 - goto err_tout_cleanup;
1008 + return err;
1005 1009 }
1006 1010
1007 1011 err = mlx5_cmd_init(dev);
1008 1012 if (err) {
1009 1013 mlx5_core_err(dev, "Failed initializing command interface, aborting\n");
1010 - goto err_tout_cleanup;
1014 + return err;
1011 1015 }
1012 1016
1013 1017 mlx5_tout_query_iseg(dev);
··· 1071 1075
1072 1076 mlx5_set_driver_version(dev);
1073 1077
1074 - mlx5_start_health_poll(dev);
1075 -
1076 1078 err = mlx5_query_hca_caps(dev);
1077 1079 if (err) {
1078 1080 mlx5_core_err(dev, "query hca failed\n");
1079 - goto stop_health;
1081 + goto reclaim_boot_pages;
1080 1082 }
1083 +
1084 + mlx5_start_health_poll(dev);
1081 1085
1082 1086 return 0;
1083 1087
1084 - stop_health:
1085 - mlx5_stop_health_poll(dev, boot);
1086 1088 reclaim_boot_pages:
1087 1089 mlx5_reclaim_startup_pages(dev);
1088 1090 err_disable_hca:
··· 1088 1094 err_cmd_cleanup:
1089 1095 mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
1090 1096 mlx5_cmd_cleanup(dev);
1091 - err_tout_cleanup:
1092 - mlx5_tout_cleanup(dev);
1093 1097
1094 1098 return err;
1095 1099 }
··· 1106 1114 mlx5_core_disable_hca(dev, 0);
1107 1115 mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
1108 1116 mlx5_cmd_cleanup(dev);
1109 - mlx5_tout_cleanup(dev);
1110 1117
1111 1118 return 0;
1112 1119 }
··· 1467 1476 mlx5_debugfs_root);
1468 1477 INIT_LIST_HEAD(&priv->traps);
1469 1478
1479 + err = mlx5_tout_init(dev);
1480 + if (err) {
1481 + mlx5_core_err(dev, "Failed initializing timeouts, aborting\n");
1482 + goto err_timeout_init;
1483 + }
1484 +
1470 1485 err = mlx5_health_init(dev);
1471 1486 if (err)
1472 1487 goto err_health_init;
··· 1498 1501 err_pagealloc_init:
1499 1502 mlx5_health_cleanup(dev);
1500 1503 err_health_init:
1504 + mlx5_tout_cleanup(dev);
1505 + err_timeout_init:
1501 1506 debugfs_remove(dev->priv.dbg_root);
1502 1507 mutex_destroy(&priv->pgdir_mutex);
1503 1508 mutex_destroy(&priv->alloc_mutex);
··· 1517 1518 mlx5_adev_cleanup(dev);
1518 1519 mlx5_pagealloc_cleanup(dev);
1519 1520 mlx5_health_cleanup(dev);
1521 + mlx5_tout_cleanup(dev);
1520 1522 debugfs_remove_recursive(dev->priv.dbg_root);
1521 1523 mutex_destroy(&priv->pgdir_mutex);
1522 1524 mutex_destroy(&priv->alloc_mutex);
+3 -1
drivers/net/ethernet/mscc/ocelot.c
··· 1563 1563 } 1564 1564 1565 1565 err = ocelot_setup_ptp_traps(ocelot, port, l2, l4); 1566 - if (err) 1566 + if (err) { 1567 + mutex_unlock(&ocelot->ptp_lock); 1567 1568 return err; 1569 + } 1568 1570 1569 1571 if (l2 && l4) 1570 1572 cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+1 -1
drivers/net/ethernet/natsemi/xtsonic.c
··· 120 120 .ndo_set_mac_address = eth_mac_addr, 121 121 }; 122 122 123 - static int __init sonic_probe1(struct net_device *dev) 123 + static int sonic_probe1(struct net_device *dev) 124 124 { 125 125 unsigned int silicon_revision; 126 126 struct sonic_local *lp = netdev_priv(dev);
+8 -2
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
··· 1077 1077 sds_mbx_size = sizeof(struct qlcnic_sds_mbx); 1078 1078 context_id = recv_ctx->context_id; 1079 1079 num_sds = adapter->drv_sds_rings - QLCNIC_MAX_SDS_RINGS; 1080 - ahw->hw_ops->alloc_mbx_args(&cmd, adapter, 1081 - QLCNIC_CMD_ADD_RCV_RINGS); 1080 + err = ahw->hw_ops->alloc_mbx_args(&cmd, adapter, 1081 + QLCNIC_CMD_ADD_RCV_RINGS); 1082 + if (err) { 1083 + dev_err(&adapter->pdev->dev, 1084 + "Failed to alloc mbx args %d\n", err); 1085 + return err; 1086 + } 1087 + 1082 1088 cmd.req.arg[1] = 0 | (num_sds << 8) | (context_id << 16); 1083 1089 1084 1090 /* set up status rings, mbx 2-81 */
+6 -5
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 5540 5540 netdev_features_t features) 5541 5541 { 5542 5542 struct stmmac_priv *priv = netdev_priv(netdev); 5543 - bool sph_en; 5544 - u32 chan; 5545 5543 5546 5544 /* Keep the COE Type in case of csum is supporting */ 5547 5545 if (features & NETIF_F_RXCSUM) ··· 5551 5553 */ 5552 5554 stmmac_rx_ipc(priv, priv->hw); 5553 5555 5554 - sph_en = (priv->hw->rx_csum > 0) && priv->sph; 5556 + if (priv->sph_cap) { 5557 + bool sph_en = (priv->hw->rx_csum > 0) && priv->sph; 5558 + u32 chan; 5555 5559 5556 - for (chan = 0; chan < priv->plat->rx_queues_to_use; chan++) 5557 - stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan); 5560 + for (chan = 0; chan < priv->plat->rx_queues_to_use; chan++) 5561 + stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan); 5562 + } 5558 5563 5559 5564 return 0; 5560 5565 }
+1 -1
drivers/net/usb/lan78xx.c
··· 2228 2228 if (dev->domain_data.phyirq > 0) 2229 2229 phydev->irq = dev->domain_data.phyirq; 2230 2230 else 2231 - phydev->irq = 0; 2231 + phydev->irq = PHY_POLL; 2232 2232 netdev_dbg(dev->net, "phydev->irq = %d\n", phydev->irq); 2233 2233 2234 2234 /* set to AUTOMDIX */
+2
drivers/net/vrf.c
··· 497 497 /* strip the ethernet header added for pass through VRF device */ 498 498 __skb_pull(skb, skb_network_offset(skb)); 499 499 500 + memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 500 501 ret = vrf_ip6_local_out(net, skb->sk, skb); 501 502 if (unlikely(net_xmit_eval(ret))) 502 503 dev->stats.tx_errors++; ··· 580 579 RT_SCOPE_LINK); 581 580 } 582 581 582 + memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 583 583 ret = vrf_ip_local_out(dev_net(skb_dst(skb)->dev), skb->sk, skb); 584 584 if (unlikely(net_xmit_eval(ret))) 585 585 vrf_dev->stats.tx_errors++;
+1 -1
drivers/net/wireguard/allowedips.c
··· 163 163 return exact; 164 164 } 165 165 166 - static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node) 166 + static inline void connect_node(struct allowedips_node __rcu **parent, u8 bit, struct allowedips_node *node) 167 167 { 168 168 node->parent_bit_packed = (unsigned long)parent | bit; 169 169 rcu_assign_pointer(*parent, node);
+21 -18
drivers/net/wireguard/device.c
··· 98 98 {
99 99 struct wg_device *wg = netdev_priv(dev);
100 100 struct wg_peer *peer;
101 + struct sk_buff *skb;
101 102
102 103 mutex_lock(&wg->device_update_lock);
103 104 list_for_each_entry(peer, &wg->peer_list, peer_list) {
··· 109 108 wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
110 109 }
111 110 mutex_unlock(&wg->device_update_lock);
112 - skb_queue_purge(&wg->incoming_handshakes);
111 + while ((skb = ptr_ring_consume(&wg->handshake_queue.ring)) != NULL)
112 + kfree_skb(skb);
113 + atomic_set(&wg->handshake_queue_len, 0);
113 114 wg_socket_reinit(wg, NULL, NULL);
114 115 return 0;
115 116 }
··· 238 235 destroy_workqueue(wg->handshake_receive_wq);
239 236 destroy_workqueue(wg->handshake_send_wq);
240 237 destroy_workqueue(wg->packet_crypt_wq);
241 - wg_packet_queue_free(&wg->decrypt_queue);
242 - wg_packet_queue_free(&wg->encrypt_queue);
238 + wg_packet_queue_free(&wg->handshake_queue, true);
239 + wg_packet_queue_free(&wg->decrypt_queue, false);
240 + wg_packet_queue_free(&wg->encrypt_queue, false);
243 241 rcu_barrier(); /* Wait for all the peers to be actually freed. */
244 242 wg_ratelimiter_uninit();
245 243 memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
246 - skb_queue_purge(&wg->incoming_handshakes);
247 244 free_percpu(dev->tstats);
248 - free_percpu(wg->incoming_handshakes_worker);
249 245 kvfree(wg->index_hashtable);
250 246 kvfree(wg->peer_hashtable);
251 247 mutex_unlock(&wg->device_update_lock);
··· 300 298 init_rwsem(&wg->static_identity.lock);
301 299 mutex_init(&wg->socket_update_lock);
302 300 mutex_init(&wg->device_update_lock);
303 - skb_queue_head_init(&wg->incoming_handshakes);
304 301 wg_allowedips_init(&wg->peer_allowedips);
305 302 wg_cookie_checker_init(&wg->cookie_checker, wg);
306 303 INIT_LIST_HEAD(&wg->peer_list);
··· 317 316 if (!dev->tstats)
318 317 goto err_free_index_hashtable;
319 318
320 - wg->incoming_handshakes_worker =
321 - wg_packet_percpu_multicore_worker_alloc(
322 - wg_packet_handshake_receive_worker, wg);
323 - if (!wg->incoming_handshakes_worker)
324 - goto err_free_tstats;
325 -
326 319 wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
327 320 WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
328 321 if (!wg->handshake_receive_wq)
329 - goto err_free_incoming_handshakes;
322 + goto err_free_tstats;
330 323
331 324 wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
332 325 WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
··· 342 347 if (ret < 0)
343 348 goto err_free_encrypt_queue;
344 349
345 - ret = wg_ratelimiter_init();
350 + ret = wg_packet_queue_init(&wg->handshake_queue, wg_packet_handshake_receive_worker,
351 + MAX_QUEUED_INCOMING_HANDSHAKES);
346 352 if (ret < 0)
347 353 goto err_free_decrypt_queue;
354 +
355 + ret = wg_ratelimiter_init();
356 + if (ret < 0)
357 + goto err_free_handshake_queue;
348 358
349 359 ret = register_netdevice(dev);
350 360 if (ret < 0)
··· 367 367
368 368 err_uninit_ratelimiter:
369 369 wg_ratelimiter_uninit();
370 + err_free_handshake_queue:
371 + wg_packet_queue_free(&wg->handshake_queue, false);
370 372 err_free_decrypt_queue:
371 - wg_packet_queue_free(&wg->decrypt_queue); 373 + wg_packet_queue_free(&wg->decrypt_queue, false); 372 374 err_free_encrypt_queue: 373 - wg_packet_queue_free(&wg->encrypt_queue); 375 + wg_packet_queue_free(&wg->encrypt_queue, false); 374 376 err_destroy_packet_crypt: 375 377 destroy_workqueue(wg->packet_crypt_wq); 376 378 err_destroy_handshake_send: 377 379 destroy_workqueue(wg->handshake_send_wq); 378 380 err_destroy_handshake_receive: 379 381 destroy_workqueue(wg->handshake_receive_wq); 380 - err_free_incoming_handshakes: 381 - free_percpu(wg->incoming_handshakes_worker); 382 382 err_free_tstats: 383 383 free_percpu(dev->tstats); 384 384 err_free_index_hashtable: ··· 398 398 static void wg_netns_pre_exit(struct net *net) 399 399 { 400 400 struct wg_device *wg; 401 + struct wg_peer *peer; 401 402 402 403 rtnl_lock(); 403 404 list_for_each_entry(wg, &device_list, device_list) { ··· 408 407 mutex_lock(&wg->device_update_lock); 409 408 rcu_assign_pointer(wg->creating_net, NULL); 410 409 wg_socket_reinit(wg, NULL, NULL); 410 + list_for_each_entry(peer, &wg->peer_list, peer_list) 411 + wg_socket_clear_peer_endpoint_src(peer); 411 412 mutex_unlock(&wg->device_update_lock); 412 413 } 413 414 }
+3 -6
drivers/net/wireguard/device.h
··· 39 39 40 40 struct wg_device { 41 41 struct net_device *dev; 42 - struct crypt_queue encrypt_queue, decrypt_queue; 42 + struct crypt_queue encrypt_queue, decrypt_queue, handshake_queue; 43 43 struct sock __rcu *sock4, *sock6; 44 44 struct net __rcu *creating_net; 45 45 struct noise_static_identity static_identity; 46 - struct workqueue_struct *handshake_receive_wq, *handshake_send_wq; 47 - struct workqueue_struct *packet_crypt_wq; 48 - struct sk_buff_head incoming_handshakes; 49 - int incoming_handshake_cpu; 50 - struct multicore_worker __percpu *incoming_handshakes_worker; 46 + struct workqueue_struct *packet_crypt_wq,*handshake_receive_wq, *handshake_send_wq; 51 47 struct cookie_checker cookie_checker; 52 48 struct pubkey_hashtable *peer_hashtable; 53 49 struct index_hashtable *index_hashtable; 54 50 struct allowedips peer_allowedips; 55 51 struct mutex device_update_lock, socket_update_lock; 56 52 struct list_head device_list, peer_list; 53 + atomic_t handshake_queue_len; 57 54 unsigned int num_peers, device_update_gen; 58 55 u32 fwmark; 59 56 u16 incoming_port;
+4 -4
drivers/net/wireguard/main.c
··· 17 17 #include <linux/genetlink.h> 18 18 #include <net/rtnetlink.h> 19 19 20 - static int __init mod_init(void) 20 + static int __init wg_mod_init(void) 21 21 { 22 22 int ret; 23 23 ··· 60 60 return ret; 61 61 } 62 62 63 - static void __exit mod_exit(void) 63 + static void __exit wg_mod_exit(void) 64 64 { 65 65 wg_genetlink_uninit(); 66 66 wg_device_uninit(); ··· 68 68 wg_allowedips_slab_uninit(); 69 69 } 70 70 71 - module_init(mod_init); 72 - module_exit(mod_exit); 71 + module_init(wg_mod_init); 72 + module_exit(wg_mod_exit); 73 73 MODULE_LICENSE("GPL v2"); 74 74 MODULE_DESCRIPTION("WireGuard secure network tunnel"); 75 75 MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
+3 -3
drivers/net/wireguard/queueing.c
··· 38 38 return 0; 39 39 } 40 40 41 - void wg_packet_queue_free(struct crypt_queue *queue) 41 + void wg_packet_queue_free(struct crypt_queue *queue, bool purge) 42 42 { 43 43 free_percpu(queue->worker); 44 - WARN_ON(!__ptr_ring_empty(&queue->ring)); 45 - ptr_ring_cleanup(&queue->ring, NULL); 44 + WARN_ON(!purge && !__ptr_ring_empty(&queue->ring)); 45 + ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL); 46 46 } 47 47 48 48 #define NEXT(skb) ((skb)->prev)
+1 -1
drivers/net/wireguard/queueing.h
··· 23 23 /* queueing.c APIs: */ 24 24 int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function, 25 25 unsigned int len); 26 - void wg_packet_queue_free(struct crypt_queue *queue); 26 + void wg_packet_queue_free(struct crypt_queue *queue, bool purge); 27 27 struct multicore_worker __percpu * 28 28 wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr); 29 29
+2 -2
drivers/net/wireguard/ratelimiter.c
··· 176 176 (1U << 14) / sizeof(struct hlist_head))); 177 177 max_entries = table_size * 8; 178 178 179 - table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL); 179 + table_v4 = kvcalloc(table_size, sizeof(*table_v4), GFP_KERNEL); 180 180 if (unlikely(!table_v4)) 181 181 goto err_kmemcache; 182 182 183 183 #if IS_ENABLED(CONFIG_IPV6) 184 - table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL); 184 + table_v6 = kvcalloc(table_size, sizeof(*table_v6), GFP_KERNEL); 185 185 if (unlikely(!table_v6)) { 186 186 kvfree(table_v4); 187 187 goto err_kmemcache;
+22 -15
drivers/net/wireguard/receive.c
··· 116 116 return;
117 117 }
118 118
119 - under_load = skb_queue_len(&wg->incoming_handshakes) >=
120 - MAX_QUEUED_INCOMING_HANDSHAKES / 8;
119 + under_load = atomic_read(&wg->handshake_queue_len) >=
120 + MAX_QUEUED_INCOMING_HANDSHAKES / 8;
121 121 if (under_load) {
122 122 last_under_load = ktime_get_coarse_boottime_ns();
123 123 } else if (last_under_load) {
··· 212 212
213 213 void wg_packet_handshake_receive_worker(struct work_struct *work)
214 214 {
215 - struct wg_device *wg = container_of(work, struct multicore_worker,
216 - work)->ptr;
215 + struct crypt_queue *queue = container_of(work, struct multicore_worker, work)->ptr;
216 + struct wg_device *wg = container_of(queue, struct wg_device, handshake_queue);
217 217 struct sk_buff *skb;
218 218
219 - while ((skb = skb_dequeue(&wg->incoming_handshakes)) != NULL) {
219 + while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
220 220 wg_receive_handshake_packet(wg, skb);
221 221 dev_kfree_skb(skb);
222 + atomic_dec(&wg->handshake_queue_len);
222 223 cond_resched();
223 224 }
224 225 }
··· 554 553 case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
555 554 case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
556 555 case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
557 - int cpu;
556 + int cpu, ret = -EBUSY;
558 557
559 - if (skb_queue_len(&wg->incoming_handshakes) >
560 - MAX_QUEUED_INCOMING_HANDSHAKES ||
561 - unlikely(!rng_is_initialized())) {
558 + if (unlikely(!rng_is_initialized()))
559 + goto drop;
560 + if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
561 + if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
562 + ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
563 + spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
564 + }
565 + } else
566 + ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
567 + if (ret) {
568 + drop:
562 569 net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
563 570 wg->dev->name, skb);
564 571 goto err;
565 572 }
566 - skb_queue_tail(&wg->incoming_handshakes, skb);
567 - /* Queues up a call to packet_process_queued_handshake_
568 - * packets(skb):
569 - */
570 - cpu = wg_cpumask_next_online(&wg->incoming_handshake_cpu);
573 + atomic_inc(&wg->handshake_queue_len);
574 + cpu = wg_cpumask_next_online(&wg->handshake_queue.last_cpu);
575 + /* Queues up a call to packet_process_queued_handshake_packets(skb): */
571 576 queue_work_on(cpu, wg->handshake_receive_wq,
572 - &per_cpu_ptr(wg->incoming_handshakes_worker, cpu)->work);
577 + &per_cpu_ptr(wg->handshake_queue.worker, cpu)->work);
573 578 break;
574 579 }
575 580 case cpu_to_le32(MESSAGE_DATA):
+1 -1
drivers/net/wireguard/socket.c
··· 308 308 { 309 309 write_lock_bh(&peer->endpoint_lock); 310 310 memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6)); 311 - dst_cache_reset(&peer->endpoint_cache); 311 + dst_cache_reset_now(&peer->endpoint_cache); 312 312 write_unlock_bh(&peer->endpoint_lock); 313 313 } 314 314
+6
drivers/net/wireless/intel/iwlwifi/fw/uefi.c
··· 86 86 if (len < tlv_len) { 87 87 IWL_ERR(trans, "invalid TLV len: %zd/%u\n", 88 88 len, tlv_len); 89 + kfree(reduce_power_data); 89 90 reduce_power_data = ERR_PTR(-EINVAL); 90 91 goto out; 91 92 } ··· 106 105 IWL_DEBUG_FW(trans, 107 106 "Couldn't allocate (more) reduce_power_data\n"); 108 107 108 + kfree(reduce_power_data); 109 109 reduce_power_data = ERR_PTR(-ENOMEM); 110 110 goto out; 111 111 } ··· 136 134 done: 137 135 if (!size) { 138 136 IWL_DEBUG_FW(trans, "Empty REDUCE_POWER, skipping.\n"); 137 + /* Better safe than sorry, but 'reduce_power_data' should 138 + * always be NULL if !size. 139 + */ 140 + kfree(reduce_power_data); 139 141 reduce_power_data = ERR_PTR(-ENOENT); 140 142 goto out; 141 143 }
+15 -7
drivers/net/wireless/intel/iwlwifi/iwl-drv.c
··· 1313 1313 const struct iwl_op_mode_ops *ops = op->ops; 1314 1314 struct dentry *dbgfs_dir = NULL; 1315 1315 struct iwl_op_mode *op_mode = NULL; 1316 + int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY; 1317 + 1318 + for (retry = 0; retry <= max_retry; retry++) { 1316 1319 1317 1320 #ifdef CONFIG_IWLWIFI_DEBUGFS 1318 - drv->dbgfs_op_mode = debugfs_create_dir(op->name, 1319 - drv->dbgfs_drv); 1320 - dbgfs_dir = drv->dbgfs_op_mode; 1321 + drv->dbgfs_op_mode = debugfs_create_dir(op->name, 1322 + drv->dbgfs_drv); 1323 + dbgfs_dir = drv->dbgfs_op_mode; 1321 1324 #endif 1322 1325 1323 - op_mode = ops->start(drv->trans, drv->trans->cfg, &drv->fw, dbgfs_dir); 1326 + op_mode = ops->start(drv->trans, drv->trans->cfg, 1327 + &drv->fw, dbgfs_dir); 1328 + 1329 + if (op_mode) 1330 + return op_mode; 1331 + 1332 + IWL_ERR(drv, "retry init count %d\n", retry); 1324 1333 1325 1334 #ifdef CONFIG_IWLWIFI_DEBUGFS 1326 - if (!op_mode) { 1327 1335 debugfs_remove_recursive(drv->dbgfs_op_mode); 1328 1336 drv->dbgfs_op_mode = NULL; 1329 - } 1330 1337 #endif 1338 + } 1331 1339 1332 - return op_mode; 1340 + return NULL; 1333 1341 } 1334 1342 1335 1343 static void _iwl_op_mode_stop(struct iwl_drv *drv)
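The `iwl-drv.c` hunk above wraps op-mode start in a retry loop, and the expression `!!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY` collapses the retry budget to zero when firmware restart is disabled. A minimal sketch of that pattern, with hypothetical names (`start_fn`, `flaky_start` are illustrations, not driver API):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_INIT_RETRY 2	/* mirrors IWL_MAX_INIT_RETRY */

/* Hypothetical start callback: returns an opaque handle or NULL on failure. */
typedef void *(*start_fn)(void *ctx);

/* Retry budget is zero when restarts are disabled: !!0 * N == 0,
 * so the loop body runs exactly once in that case. */
static void *start_with_retries(start_fn start, void *ctx, int fw_restart,
				int *attempts)
{
	int retry, max_retry = !!fw_restart * MAX_INIT_RETRY;
	void *handle = NULL;

	for (retry = 0; retry <= max_retry; retry++) {
		(*attempts)++;
		handle = start(ctx);
		if (handle)
			break;	/* success: stop retrying */
	}
	return handle;
}

/* Example callback that fails a configurable number of times. */
static int flaky_fails;
static void *flaky_start(void *ctx)
{
	return flaky_fails-- > 0 ? NULL : ctx;
}
```

With `fw_restart` set, up to `1 + MAX_INIT_RETRY` attempts are made; without it, a single failure is final.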
+3
drivers/net/wireless/intel/iwlwifi/iwl-drv.h
··· 89 89 #define IWL_EXPORT_SYMBOL(sym) 90 90 #endif 91 91 92 + /* max retry for init flow */ 93 + #define IWL_MAX_INIT_RETRY 2 94 + 92 95 #endif /* __iwl_drv_h__ */
+23 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 16 16 #include <net/ieee80211_radiotap.h> 17 17 #include <net/tcp.h> 18 18 19 + #include "iwl-drv.h" 19 20 #include "iwl-op-mode.h" 20 21 #include "iwl-io.h" 21 22 #include "mvm.h" ··· 1118 1117 { 1119 1118 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw); 1120 1119 int ret; 1120 + int retry, max_retry = 0; 1121 1121 1122 1122 mutex_lock(&mvm->mutex); 1123 - ret = __iwl_mvm_mac_start(mvm); 1123 + 1124 + /* we are starting the mac not in error flow, and restart is enabled */ 1125 + if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) && 1126 + iwlwifi_mod_params.fw_restart) { 1127 + max_retry = IWL_MAX_INIT_RETRY; 1128 + /* 1129 + * This will prevent mac80211 recovery flows to trigger during 1130 + * init failures 1131 + */ 1132 + set_bit(IWL_MVM_STATUS_STARTING, &mvm->status); 1133 + } 1134 + 1135 + for (retry = 0; retry <= max_retry; retry++) { 1136 + ret = __iwl_mvm_mac_start(mvm); 1137 + if (!ret) 1138 + break; 1139 + 1140 + IWL_ERR(mvm, "mac start retry %d\n", retry); 1141 + } 1142 + clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status); 1143 + 1124 1144 mutex_unlock(&mvm->mutex); 1125 1145 1126 1146 return ret;
+3
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
··· 1123 1123 * @IWL_MVM_STATUS_FIRMWARE_RUNNING: firmware is running 1124 1124 * @IWL_MVM_STATUS_NEED_FLUSH_P2P: need to flush P2P bcast STA 1125 1125 * @IWL_MVM_STATUS_IN_D3: in D3 (or at least about to go into it) 1126 + * @IWL_MVM_STATUS_STARTING: starting mac, 1127 + * used to disable restart flow while in STARTING state 1126 1128 */ 1127 1129 enum iwl_mvm_status { 1128 1130 IWL_MVM_STATUS_HW_RFKILL, ··· 1136 1134 IWL_MVM_STATUS_FIRMWARE_RUNNING, 1137 1135 IWL_MVM_STATUS_NEED_FLUSH_P2P, 1138 1136 IWL_MVM_STATUS_IN_D3, 1137 + IWL_MVM_STATUS_STARTING, 1139 1138 }; 1140 1139 1141 1140 /* Keep track of completed init configuration */
+5
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
··· 686 686 int ret; 687 687 688 688 rtnl_lock(); 689 + wiphy_lock(mvm->hw->wiphy); 689 690 mutex_lock(&mvm->mutex); 690 691 691 692 ret = iwl_run_init_mvm_ucode(mvm); ··· 702 701 iwl_mvm_stop_device(mvm); 703 702 704 703 mutex_unlock(&mvm->mutex); 704 + wiphy_unlock(mvm->hw->wiphy); 705 705 rtnl_unlock(); 706 706 707 707 if (ret < 0) ··· 1602 1600 */ 1603 1601 if (!mvm->fw_restart && fw_error) { 1604 1602 iwl_fw_error_collect(&mvm->fwrt, false); 1603 + } else if (test_bit(IWL_MVM_STATUS_STARTING, 1604 + &mvm->status)) { 1605 + IWL_ERR(mvm, "Starting mac, retry will be triggered anyway\n"); 1605 1606 } else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) { 1606 1607 struct iwl_mvm_reprobe *reprobe; 1607 1608
+8 -2
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 1339 1339 u16 mac_type, u8 mac_step, 1340 1340 u16 rf_type, u8 cdb, u8 rf_id, u8 no_160, u8 cores) 1341 1341 { 1342 + int num_devices = ARRAY_SIZE(iwl_dev_info_table); 1342 1343 int i; 1343 1344 1344 - for (i = ARRAY_SIZE(iwl_dev_info_table) - 1; i >= 0; i--) { 1345 + if (!num_devices) 1346 + return NULL; 1347 + 1348 + for (i = num_devices - 1; i >= 0; i--) { 1345 1349 const struct iwl_dev_info *dev_info = &iwl_dev_info_table[i]; 1346 1350 1347 1351 if (dev_info->device != (u16)IWL_CFG_ANY && ··· 1446 1442 */ 1447 1443 if (iwl_trans->trans_cfg->rf_id && 1448 1444 iwl_trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_9000 && 1449 - !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans)) 1445 + !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans)) { 1446 + ret = -EINVAL; 1450 1447 goto out_free_trans; 1448 + } 1451 1449 1452 1450 dev_info = iwl_pci_find_dev_info(pdev->device, pdev->subsystem_device, 1453 1451 CSR_HW_REV_TYPE(iwl_trans->hw_rev),
+1 -2
drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
··· 143 143 if (!wcid) 144 144 wcid = &dev->mt76.global_wcid; 145 145 146 - pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb); 147 - 148 146 if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && msta) { 149 147 struct mt7615_phy *phy = &dev->phy; 150 148 ··· 162 164 if (id < 0) 163 165 return id; 164 166 167 + pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb); 165 168 mt7615_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, sta, 166 169 pid, key, false); 167 170
+15 -13
drivers/net/wireless/mediatek/mt76/mt7615/usb_sdio.c
··· 43 43 static void 44 44 mt7663_usb_sdio_write_txwi(struct mt7615_dev *dev, struct mt76_wcid *wcid, 45 45 enum mt76_txq_id qid, struct ieee80211_sta *sta, 46 + struct ieee80211_key_conf *key, int pid, 46 47 struct sk_buff *skb) 47 48 { 48 - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 49 - struct ieee80211_key_conf *key = info->control.hw_key; 50 - __le32 *txwi; 51 - int pid; 49 + __le32 *txwi = (__le32 *)(skb->data - MT_USB_TXD_SIZE); 52 50 53 - if (!wcid) 54 - wcid = &dev->mt76.global_wcid; 55 - 56 - pid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb); 57 - 58 - txwi = (__le32 *)(skb->data - MT_USB_TXD_SIZE); 59 51 memset(txwi, 0, MT_USB_TXD_SIZE); 60 52 mt7615_mac_write_txwi(dev, txwi, skb, wcid, sta, pid, key, false); 61 53 skb_push(skb, MT_USB_TXD_SIZE); ··· 186 194 struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76); 187 195 struct sk_buff *skb = tx_info->skb; 188 196 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 197 + struct ieee80211_key_conf *key = info->control.hw_key; 189 198 struct mt7615_sta *msta; 190 - int pad; 199 + int pad, err, pktid; 191 200 192 201 msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL; 202 + if (!wcid) 203 + wcid = &dev->mt76.global_wcid; 204 + 193 205 if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && 194 206 msta && !msta->rate_probe) { 195 207 /* request to configure sampling rate */ ··· 203 207 spin_unlock_bh(&dev->mt76.lock); 204 208 } 205 209 206 - mt7663_usb_sdio_write_txwi(dev, wcid, qid, sta, skb); 210 + pktid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb); 211 + mt7663_usb_sdio_write_txwi(dev, wcid, qid, sta, key, pktid, skb); 207 212 if (mt76_is_usb(mdev)) { 208 213 u32 len = skb->len; 209 214 ··· 214 217 pad = round_up(skb->len, 4) - skb->len; 215 218 } 216 219 217 - return mt76_skb_adjust_pad(skb, pad); 220 + err = mt76_skb_adjust_pad(skb, pad); 221 + if (err) 222 + /* Release pktid in case of error. 
*/ 223 + idr_remove(&wcid->pktid, pktid); 224 + 225 + return err; 218 226 } 219 227 EXPORT_SYMBOL_GPL(mt7663_usb_sdio_tx_prepare_skb); 220 228
+7 -1
drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
··· 72 72 bool ampdu = IEEE80211_SKB_CB(tx_info->skb)->flags & IEEE80211_TX_CTL_AMPDU; 73 73 enum mt76_qsel qsel; 74 74 u32 flags; 75 + int err; 75 76 76 77 mt76_insert_hdr_pad(tx_info->skb); 77 78 ··· 107 106 ewma_pktlen_add(&msta->pktlen, tx_info->skb->len); 108 107 } 109 108 110 - return mt76x02u_skb_dma_info(tx_info->skb, WLAN_PORT, flags); 109 + err = mt76x02u_skb_dma_info(tx_info->skb, WLAN_PORT, flags); 110 + if (err && wcid) 111 + /* Release pktid in case of error. */ 112 + idr_remove(&wcid->pktid, pid); 113 + 114 + return err; 111 115 } 112 116 EXPORT_SYMBOL_GPL(mt76x02u_tx_prepare_skb); 113 117
+7 -8
drivers/net/wireless/mediatek/mt76/mt7915/mac.c
··· 1151 1151 } 1152 1152 } 1153 1153 1154 - pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb); 1154 + t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size); 1155 + t->skb = tx_info->skb; 1155 1156 1157 + id = mt76_token_consume(mdev, &t); 1158 + if (id < 0) 1159 + return id; 1160 + 1161 + pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb); 1156 1162 mt7915_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, pid, key, 1157 1163 false); 1158 1164 ··· 1183 1177 1184 1178 txp->bss_idx = mvif->idx; 1185 1179 } 1186 - 1187 - t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size); 1188 - t->skb = tx_info->skb; 1189 - 1190 - id = mt76_token_consume(mdev, &t); 1191 - if (id < 0) 1192 - return id; 1193 1180 1194 1181 txp->token = cpu_to_le16(id); 1195 1182 if (test_bit(MT_WCID_FLAG_4ADDR, &wcid->flags))
+2 -2
drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
··· 176 176 if (ht_cap->ht_supported) 177 177 mode |= PHY_MODE_GN; 178 178 179 - if (he_cap->has_he) 179 + if (he_cap && he_cap->has_he) 180 180 mode |= PHY_MODE_AX_24G; 181 181 } else if (band == NL80211_BAND_5GHZ) { 182 182 mode |= PHY_MODE_A; ··· 187 187 if (vht_cap->vht_supported) 188 188 mode |= PHY_MODE_AC; 189 189 190 - if (he_cap->has_he) 190 + if (he_cap && he_cap->has_he) 191 191 mode |= PHY_MODE_AX_5G; 192 192 } 193 193
+12 -9
drivers/net/wireless/mediatek/mt76/mt7921/sdio_mac.c
··· 142 142 static void 143 143 mt7921s_write_txwi(struct mt7921_dev *dev, struct mt76_wcid *wcid, 144 144 enum mt76_txq_id qid, struct ieee80211_sta *sta, 145 + struct ieee80211_key_conf *key, int pid, 145 146 struct sk_buff *skb) 146 147 { 147 - struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); 148 - struct ieee80211_key_conf *key = info->control.hw_key; 149 - __le32 *txwi; 150 - int pid; 148 + __le32 *txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE); 151 149 152 - pid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb); 153 - txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE); 154 150 memset(txwi, 0, MT_SDIO_TXD_SIZE); 155 151 mt7921_mac_write_txwi(dev, txwi, skb, wcid, key, pid, false); 156 152 skb_push(skb, MT_SDIO_TXD_SIZE); ··· 159 163 { 160 164 struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76); 161 165 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb); 166 + struct ieee80211_key_conf *key = info->control.hw_key; 162 167 struct sk_buff *skb = tx_info->skb; 163 - int pad; 168 + int err, pad, pktid; 164 169 165 170 if (unlikely(tx_info->skb->len <= ETH_HLEN)) 166 171 return -EINVAL; ··· 178 181 } 179 182 } 180 183 181 - mt7921s_write_txwi(dev, wcid, qid, sta, skb); 184 + pktid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb); 185 + mt7921s_write_txwi(dev, wcid, qid, sta, key, pktid, skb); 182 186 183 187 mt7921_skb_add_sdio_hdr(skb, MT7921_SDIO_DATA); 184 188 pad = round_up(skb->len, 4) - skb->len; 185 189 186 - return mt76_skb_adjust_pad(skb, pad); 190 + err = mt76_skb_adjust_pad(skb, pad); 191 + if (err) 192 + /* Release pktid in case of error. */ 193 + idr_remove(&wcid->pktid, pktid); 194 + 195 + return err; 187 196 } 188 197 189 198 void mt7921s_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e)
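The mt7921s hunk above (like the mt7615 and mt76x02u ones) moves `mt76_tx_status_skb_add()` past the points that can fail and releases the allocated packet id with `idr_remove()` when the final padding step errors out, so an id is never leaked for a packet that was not queued. A toy sketch of "allocate last, release on error" with a flat array standing in for the idr (names hypothetical):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal stand-in for the idr-backed packet-id table. */
#define MAX_PKTID 8
static void *pktid_table[MAX_PKTID];

static int pktid_alloc(void *skb)
{
	int id;

	for (id = 0; id < MAX_PKTID; id++) {
		if (!pktid_table[id]) {
			pktid_table[id] = skb;
			return id;
		}
	}
	return -ENOSPC;
}

static void pktid_remove(int id)
{
	if (id >= 0 && id < MAX_PKTID)
		pktid_table[id] = NULL;
}

/* Allocate the id only after the fallible setup, and release it if the
 * last step (standing in for mt76_skb_adjust_pad()) fails. */
static int prepare_skb(void *skb, int pad_fails)
{
	int pktid = pktid_alloc(skb);

	if (pktid < 0)
		return pktid;
	if (pad_fails) {
		pktid_remove(pktid);	/* don't leak the id on error */
		return -ENOMEM;
	}
	return 0;
}
```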
+1 -1
drivers/net/wireless/mediatek/mt76/tx.c
··· 173 173 if (!(cb->flags & MT_TX_CB_DMA_DONE)) 174 174 continue; 175 175 176 - if (!time_is_after_jiffies(cb->jiffies + 176 + if (time_is_after_jiffies(cb->jiffies + 177 177 MT_TX_STATUS_SKB_TIMEOUT)) 178 178 continue; 179 179 }
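The one-character `tx.c` fix above inverts a jiffies comparison: the reclaim loop must `continue` (skip) entries whose status timeout is still in the future, not ones whose timeout has already expired. A sketch of that check using the kernel-style wraparound-safe signed-difference comparison (constants and names illustrative):

```c
#include <assert.h>

/* Wraparound-safe "a is after b", like the kernel's time_after(a, b):
 * works even when the counter has wrapped past zero. */
static int time_is_after(unsigned long a, unsigned long b)
{
	return (long)(b - a) < 0;
}

#define STATUS_TIMEOUT 10	/* hypothetical timeout, in jiffies */

/* An entry is reclaimed only once its deadline has passed; the fixed
 * check skips entries whose deadline is still ahead of "now". */
static int should_reclaim(unsigned long now, unsigned long stamp)
{
	return !time_is_after(stamp + STATUS_TIMEOUT, now);
}
```

The signed cast is what makes this safe across counter wraparound, which a plain `stamp + STATUS_TIMEOUT < now` comparison is not.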
+3
drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
··· 25 25 if (status == -ENODEV || status == -ENOENT) 26 26 return true; 27 27 28 + if (!test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags)) 29 + return false; 30 + 28 31 if (status == -EPROTO || status == -ETIMEDOUT) 29 32 rt2x00dev->num_proto_errs++; 30 33 else
+1 -1
drivers/net/wireless/realtek/rtw89/fw.c
··· 91 91 info->section_num = GET_FW_HDR_SEC_NUM(fw); 92 92 info->hdr_len = RTW89_FW_HDR_SIZE + 93 93 info->section_num * RTW89_FW_SECTION_HDR_SIZE; 94 - SET_FW_HDR_PART_SIZE(fw, FWDL_SECTION_PER_PKT_LEN); 95 94 96 95 bin = fw + info->hdr_len; 97 96 ··· 274 275 } 275 276 276 277 skb_put_data(skb, fw, len); 278 + SET_FW_HDR_PART_SIZE(skb->data, FWDL_SECTION_PER_PKT_LEN); 277 279 rtw89_h2c_pkt_set_hdr_fwdl(rtwdev, skb, FWCMD_TYPE_H2C, 278 280 H2C_CAT_MAC, H2C_CL_MAC_FWDL, 279 281 H2C_FUNC_MAC_FWHDR_DL, len);
+4 -2
drivers/net/wireless/realtek/rtw89/fw.h
··· 282 282 le32_get_bits(*((__le32 *)(fwhdr) + 6), GENMASK(15, 8)) 283 283 #define GET_FW_HDR_CMD_VERSERION(fwhdr) \ 284 284 le32_get_bits(*((__le32 *)(fwhdr) + 7), GENMASK(31, 24)) 285 - #define SET_FW_HDR_PART_SIZE(fwhdr, val) \ 286 - le32p_replace_bits((__le32 *)(fwhdr) + 7, val, GENMASK(15, 0)) 285 + static inline void SET_FW_HDR_PART_SIZE(void *fwhdr, u32 val) 286 + { 287 + le32p_replace_bits((__le32 *)fwhdr + 7, val, GENMASK(15, 0)); 288 + } 287 289 288 290 #define SET_CTRL_INFO_MACID(table, val) \ 289 291 le32p_replace_bits((__le32 *)(table) + 0, val, GENMASK(6, 0))
-5
drivers/powercap/dtpm.c
··· 463 463 464 464 static int __init init_dtpm(void) 465 465 { 466 - struct dtpm_descr *dtpm_descr; 467 - 468 466 pct = powercap_register_control_type(NULL, "dtpm", NULL); 469 467 if (IS_ERR(pct)) { 470 468 pr_err("Failed to register control type\n"); 471 469 return PTR_ERR(pct); 472 470 } 473 - 474 - for_each_dtpm_table(dtpm_descr) 475 - dtpm_descr->init(); 476 471 477 472 return 0; 478 473 }
+2 -7
drivers/scsi/lpfc/lpfc_els.c
··· 5095 5095 /* NPort Recovery mode or node is just allocated */ 5096 5096 if (!lpfc_nlp_not_used(ndlp)) { 5097 5097 /* A LOGO is completing and the node is in NPR state. 5098 - * If this a fabric node that cleared its transport 5099 - * registration, release the rpi. 5098 + * Just unregister the RPI because the node is still 5099 + * required. 5100 5100 */ 5101 - spin_lock_irq(&ndlp->lock); 5102 - ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; 5103 - if (phba->sli_rev == LPFC_SLI_REV4) 5104 - ndlp->nlp_flag |= NLP_RELEASE_RPI; 5105 - spin_unlock_irq(&ndlp->lock); 5106 5101 lpfc_unreg_rpi(vport, ndlp); 5107 5102 } else { 5108 5103 /* Indicate the node has already released, should
+18
drivers/scsi/ufs/ufshcd-pci.c
··· 421 421 return err; 422 422 } 423 423 424 + static int ufs_intel_adl_init(struct ufs_hba *hba) 425 + { 426 + hba->nop_out_timeout = 200; 427 + hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8; 428 + return ufs_intel_common_init(hba); 429 + } 430 + 424 431 static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = { 425 432 .name = "intel-pci", 426 433 .init = ufs_intel_common_init, ··· 452 445 .link_startup_notify = ufs_intel_link_startup_notify, 453 446 .pwr_change_notify = ufs_intel_lkf_pwr_change_notify, 454 447 .apply_dev_quirks = ufs_intel_lkf_apply_dev_quirks, 448 + .resume = ufs_intel_resume, 449 + .device_reset = ufs_intel_device_reset, 450 + }; 451 + 452 + static struct ufs_hba_variant_ops ufs_intel_adl_hba_vops = { 453 + .name = "intel-pci", 454 + .init = ufs_intel_adl_init, 455 + .exit = ufs_intel_common_exit, 456 + .link_startup_notify = ufs_intel_link_startup_notify, 455 457 .resume = ufs_intel_resume, 456 458 .device_reset = ufs_intel_device_reset, 457 459 }; ··· 579 563 { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_ehl_hba_vops }, 580 564 { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_ehl_hba_vops }, 581 565 { PCI_VDEVICE(INTEL, 0x98FA), (kernel_ulong_t)&ufs_intel_lkf_hba_vops }, 566 + { PCI_VDEVICE(INTEL, 0x51FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops }, 567 + { PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops }, 582 568 { } /* terminate list */ 583 569 }; 584 570
+13
drivers/tty/serial/8250/8250_bcm7271.c
··· 237 237 u32 rx_err; 238 238 u32 rx_timeout; 239 239 u32 rx_abort; 240 + u32 saved_mctrl; 240 241 }; 241 242 242 243 static struct dentry *brcmuart_debugfs_root; ··· 1134 1133 static int __maybe_unused brcmuart_suspend(struct device *dev) 1135 1134 { 1136 1135 struct brcmuart_priv *priv = dev_get_drvdata(dev); 1136 + struct uart_8250_port *up = serial8250_get_port(priv->line); 1137 + struct uart_port *port = &up->port; 1137 1138 1138 1139 serial8250_suspend_port(priv->line); 1139 1140 clk_disable_unprepare(priv->baud_mux_clk); 1141 + 1142 + /* 1143 + * This will prevent resume from enabling RTS before the 1144 + * baud rate has been resored. 1145 + */ 1146 + priv->saved_mctrl = port->mctrl; 1147 + port->mctrl = 0; 1140 1148 1141 1149 return 0; 1142 1150 } ··· 1153 1143 static int __maybe_unused brcmuart_resume(struct device *dev) 1154 1144 { 1155 1145 struct brcmuart_priv *priv = dev_get_drvdata(dev); 1146 + struct uart_8250_port *up = serial8250_get_port(priv->line); 1147 + struct uart_port *port = &up->port; 1156 1148 int ret; 1157 1149 1158 1150 ret = clk_prepare_enable(priv->baud_mux_clk); ··· 1177 1165 start_rx_dma(serial8250_get_port(priv->line)); 1178 1166 } 1179 1167 serial8250_resume_port(priv->line); 1168 + port->mctrl = priv->saved_mctrl; 1180 1169 return 0; 1181 1170 } 1182 1171
+25 -14
drivers/tty/serial/8250/8250_pci.c
··· 1324 1324 { 1325 1325 int scr; 1326 1326 int lcr; 1327 - int actual_baud; 1328 - int tolerance; 1329 1327 1330 - for (scr = 5 ; scr <= 15 ; scr++) { 1331 - actual_baud = 921600 * 16 / scr; 1332 - tolerance = actual_baud / 50; 1328 + for (scr = 16; scr > 4; scr--) { 1329 + unsigned int maxrate = port->uartclk / scr; 1330 + unsigned int divisor = max(maxrate / baud, 1U); 1331 + int delta = maxrate / divisor - baud; 1333 1332 1334 - if ((baud < actual_baud + tolerance) && 1335 - (baud > actual_baud - tolerance)) { 1333 + if (baud > maxrate + baud / 50) 1334 + continue; 1336 1335 1336 + if (delta > baud / 50) 1337 + divisor++; 1338 + 1339 + if (divisor > 0xffff) 1340 + continue; 1341 + 1342 + /* Update delta due to possible divisor change */ 1343 + delta = maxrate / divisor - baud; 1344 + if (abs(delta) < baud / 50) { 1337 1345 lcr = serial_port_in(port, UART_LCR); 1338 1346 serial_port_out(port, UART_LCR, lcr | 0x80); 1339 - 1340 - serial_port_out(port, UART_DLL, 1); 1341 - serial_port_out(port, UART_DLM, 0); 1347 + serial_port_out(port, UART_DLL, divisor & 0xff); 1348 + serial_port_out(port, UART_DLM, divisor >> 8 & 0xff); 1342 1349 serial_port_out(port, 2, 16 - scr); 1343 1350 serial_port_out(port, UART_LCR, lcr); 1344 1351 return; 1345 - } else if (baud > actual_baud) { 1346 - break; 1347 1352 } 1348 1353 } 1349 - serial8250_do_set_divisor(port, baud, quot, quot_frac); 1350 1354 } 1351 1355 static int pci_pericom_setup(struct serial_private *priv, 1352 1356 const struct pciserial_board *board, ··· 2295 2291 .setup = pci_pericom_setup_four_at_eight, 2296 2292 }, 2297 2293 { 2298 - .vendor = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, 2294 + .vendor = PCI_VENDOR_ID_ACCESIO, 2299 2295 .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, 2296 + .subvendor = PCI_ANY_ID, 2297 + .subdevice = PCI_ANY_ID, 2298 + .setup = pci_pericom_setup_four_at_eight, 2299 + }, 2300 + { 2301 + .vendor = PCI_VENDOR_ID_ACCESIO, 2302 + .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, 2300 2303 .subvendor = 
PCI_ANY_ID, 2301 2304 .subdevice = PCI_ANY_ID, 2302 2305 .setup = pci_pericom_setup_four_at_eight,
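The Pericom hunk above replaces a search that only worked for the 921600/scr family (it always programmed a divisor of 1) with one that walks the sample-clock prescaler from 16 down to 5, derives a real 16-bit divisor, and accepts the pair only when the achieved rate is within baud/50 (2%). A standalone sketch of that arithmetic (function and struct names are illustrative, not the driver's API):

```c
#include <assert.h>

/* Result of the search: sample-clock prescaler (scr) and baud divisor. */
struct pericom_div {
	int scr;
	unsigned int divisor;
};

/* Mirrors the fixed search loop: highest scr first, divisor rounded up
 * when the achieved rate overshoots by more than 2%. */
static int pericom_find_divisor(unsigned int uartclk, unsigned int baud,
				struct pericom_div *out)
{
	int scr;

	for (scr = 16; scr > 4; scr--) {
		unsigned int maxrate = uartclk / scr;
		unsigned int divisor = maxrate / baud;
		int delta;

		if (divisor < 1)
			divisor = 1;
		if (baud > maxrate + baud / 50)
			continue;	/* baud unreachable at this scr */

		delta = (int)(maxrate / divisor) - (int)baud;
		if (delta > (int)(baud / 50))
			divisor++;	/* too fast: step the divisor up */
		if (divisor > 0xffff)
			continue;	/* doesn't fit in DLL/DLM */

		/* Recompute: the divisor may have changed above. */
		delta = (int)(maxrate / divisor) - (int)baud;
		if (delta < 0)
			delta = -delta;
		if (delta < (int)(baud / 50)) {
			out->scr = scr;
			out->divisor = divisor;
			return 0;
		}
	}
	return -1;	/* no acceptable scr/divisor pair found */
}
```

With a 14.7456 MHz UART clock this reproduces the classic settings, e.g. scr 16 with divisor 1 for 921600 baud and divisor 8 for 115200 baud.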
-7
drivers/tty/serial/8250/8250_port.c
··· 2024 2024 struct uart_8250_port *up = up_to_u8250p(port); 2025 2025 unsigned char mcr; 2026 2026 2027 - if (port->rs485.flags & SER_RS485_ENABLED) { 2028 - if (serial8250_in_MCR(up) & UART_MCR_RTS) 2029 - mctrl |= TIOCM_RTS; 2030 - else 2031 - mctrl &= ~TIOCM_RTS; 2032 - } 2033 - 2034 2027 mcr = serial8250_TIOCM_to_MCR(mctrl); 2035 2028 2036 2029 mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
+1 -1
drivers/tty/serial/Kconfig
··· 1533 1533 tristate "LiteUART serial port support" 1534 1534 depends on HAS_IOMEM 1535 1535 depends on OF || COMPILE_TEST 1536 - depends on LITEX 1536 + depends on LITEX || COMPILE_TEST 1537 1537 select SERIAL_CORE 1538 1538 help 1539 1539 This driver is for the FPGA-based LiteUART serial controller from LiteX
+1
drivers/tty/serial/amba-pl011.c
··· 2947 2947 2948 2948 static const struct acpi_device_id __maybe_unused sbsa_uart_acpi_match[] = { 2949 2949 { "ARMH0011", 0 }, 2950 + { "ARMHB000", 0 }, 2950 2951 {}, 2951 2952 }; 2952 2953 MODULE_DEVICE_TABLE(acpi, sbsa_uart_acpi_match);
+1
drivers/tty/serial/fsl_lpuart.c
··· 2625 2625 OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup); 2626 2626 OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1028a-lpuart", ls1028a_early_console_setup); 2627 2627 OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup); 2628 + OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup); 2628 2629 EARLYCON_DECLARE(lpuart, lpuart_early_console_setup); 2629 2630 EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup); 2630 2631
+17 -3
drivers/tty/serial/liteuart.c
··· 270 270 271 271 /* get membase */ 272 272 port->membase = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 273 - if (IS_ERR(port->membase)) 274 - return PTR_ERR(port->membase); 273 + if (IS_ERR(port->membase)) { 274 + ret = PTR_ERR(port->membase); 275 + goto err_erase_id; 276 + } 275 277 276 278 /* values not from device tree */ 277 279 port->dev = &pdev->dev; ··· 287 285 port->line = dev_id; 288 286 spin_lock_init(&port->lock); 289 287 290 - return uart_add_one_port(&liteuart_driver, &uart->port); 288 + platform_set_drvdata(pdev, port); 289 + 290 + ret = uart_add_one_port(&liteuart_driver, &uart->port); 291 + if (ret) 292 + goto err_erase_id; 293 + 294 + return 0; 295 + 296 + err_erase_id: 297 + xa_erase(&liteuart_array, uart->id); 298 + 299 + return ret; 291 300 } 292 301 293 302 static int liteuart_remove(struct platform_device *pdev) ··· 306 293 struct uart_port *port = platform_get_drvdata(pdev); 307 294 struct liteuart_port *uart = to_liteuart_port(port); 308 295 296 + uart_remove_one_port(&liteuart_driver, port); 309 297 xa_erase(&liteuart_array, uart->id); 310 298 311 299 return 0;
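The liteuart hunk above is a classic probe-path unwind fix: every failure after the id has been taken from the xarray now routes through a single `err_erase_id` label that releases it. A toy sketch of that goto-ladder style (resources and names hypothetical):

```c
#include <assert.h>
#include <errno.h>

/* Toy resource standing in for the xarray-held port id. */
static int id_in_use;

static int id_alloc(void)
{
	id_in_use = 1;
	return 0;
}

static void id_erase(void)
{
	id_in_use = 0;
}

/* Kernel-style unwind: every failure after id_alloc() must release the
 * id, which a single err_erase_id label guarantees. */
static int probe(int ioremap_fails, int add_port_fails)
{
	int ret = id_alloc();

	if (ret)
		return ret;

	if (ioremap_fails) {		/* e.g. ioremap failing */
		ret = -ENOMEM;
		goto err_erase_id;
	}
	if (add_port_fails) {		/* e.g. uart_add_one_port() failing */
		ret = -EBUSY;
		goto err_erase_id;
	}
	return 0;

err_erase_id:
	id_erase();
	return ret;
}
```

The single exit label keeps the release in one place, so adding another fallible step later cannot silently leak the id.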
+3
drivers/tty/serial/msm_serial.c
··· 598 598 u32 val; 599 599 int ret; 600 600 601 + if (IS_ENABLED(CONFIG_CONSOLE_POLL)) 602 + return; 603 + 601 604 if (!dma->chan) 602 605 return; 603 606
+2 -2
drivers/tty/serial/serial-tegra.c
··· 1506 1506 .fifo_mode_enable_status = false, 1507 1507 .uart_max_port = 5, 1508 1508 .max_dma_burst_bytes = 4, 1509 - .error_tolerance_low_range = 0, 1509 + .error_tolerance_low_range = -4, 1510 1510 .error_tolerance_high_range = 4, 1511 1511 }; 1512 1512 ··· 1517 1517 .fifo_mode_enable_status = false, 1518 1518 .uart_max_port = 5, 1519 1519 .max_dma_burst_bytes = 4, 1520 - .error_tolerance_low_range = 0, 1520 + .error_tolerance_low_range = -4, 1521 1521 .error_tolerance_high_range = 4, 1522 1522 }; 1523 1523
+17 -1
drivers/tty/serial/serial_core.c
··· 1075 1075 goto out; 1076 1076 1077 1077 if (!tty_io_error(tty)) { 1078 + if (uport->rs485.flags & SER_RS485_ENABLED) { 1079 + set &= ~TIOCM_RTS; 1080 + clear &= ~TIOCM_RTS; 1081 + } 1082 + 1078 1083 uart_update_mctrl(uport, set, clear); 1079 1084 ret = 0; 1080 1085 } ··· 1554 1549 { 1555 1550 struct uart_state *state = container_of(port, struct uart_state, port); 1556 1551 struct uart_port *uport = uart_port_check(state); 1552 + char *buf; 1557 1553 1558 1554 /* 1559 1555 * At this point, we stop accepting input. To do this, we ··· 1576 1570 */ 1577 1571 tty_port_set_suspended(port, 0); 1578 1572 1579 - uart_change_pm(state, UART_PM_STATE_OFF); 1573 + /* 1574 + * Free the transmit buffer. 1575 + */ 1576 + spin_lock_irq(&uport->lock); 1577 + buf = state->xmit.buf; 1578 + state->xmit.buf = NULL; 1579 + spin_unlock_irq(&uport->lock); 1580 1580 1581 + if (buf) 1582 + free_page((unsigned long)buf); 1583 + 1584 + uart_change_pm(state, UART_PM_STATE_OFF); 1581 1585 } 1582 1586 1583 1587 static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
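The `serial_core.c` hunk above makes TIOCMSET leave RTS alone when RS485 is enabled: the driver's RS485 state machine owns RTS, so the line is stripped from both the set and clear masks before `uart_update_mctrl()` applies them. A small sketch of that masking (the apply step mirrors `uart_update_mctrl`'s `(mctrl | set) & ~clear`):

```c
#include <assert.h>

#define TIOCM_DTR 0x002	/* termios modem-control bits */
#define TIOCM_RTS 0x004

/* When RS485 owns RTS, drop it from both masks before applying them,
 * so userspace ioctls can neither assert nor deassert RTS. */
static unsigned int apply_mctrl(unsigned int mctrl, unsigned int set,
				unsigned int clear, int rs485_enabled)
{
	if (rs485_enabled) {
		set &= ~TIOCM_RTS;
		clear &= ~TIOCM_RTS;
	}
	return (mctrl | set) & ~clear;
}
```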
+4 -16
drivers/usb/cdns3/cdns3-gadget.c
··· 337 337 cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs); 338 338 } 339 339 340 - static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req) 341 - { 342 - struct cdns3_endpoint *priv_ep = priv_req->priv_ep; 343 - int current_trb = priv_req->start_trb; 344 - 345 - while (current_trb != priv_req->end_trb) { 346 - cdns3_ep_inc_deq(priv_ep); 347 - current_trb = priv_ep->dequeue; 348 - } 349 - 350 - cdns3_ep_inc_deq(priv_ep); 351 - } 352 - 353 340 /** 354 341 * cdns3_allow_enable_l1 - enable/disable permits to transition to L1. 355 342 * @priv_dev: Extended gadget object ··· 1504 1517 1505 1518 trb = priv_ep->trb_pool + priv_ep->dequeue; 1506 1519 1507 - /* Request was dequeued and TRB was changed to TRB_LINK. */ 1508 - if (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) { 1520 + /* The TRB was changed as link TRB, and the request was handled at ep_dequeue */ 1521 + while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) { 1509 1522 trace_cdns3_complete_trb(priv_ep, trb); 1510 - cdns3_move_deq_to_next_trb(priv_req); 1523 + cdns3_ep_inc_deq(priv_ep); 1524 + trb = priv_ep->trb_pool + priv_ep->dequeue; 1511 1525 } 1512 1526 1513 1527 if (!request->stream_id) {
+3
drivers/usb/cdns3/cdnsp-mem.c
··· 987 987 988 988 /* Set up the endpoint ring. */ 989 989 pep->ring = cdnsp_ring_alloc(pdev, 2, ring_type, max_packet, mem_flags); 990 + if (!pep->ring) 991 + return -ENOMEM; 992 + 990 993 pep->skip = false; 991 994 992 995 /* Fill the endpoint context */
+3
drivers/usb/core/quirks.c
··· 434 434 { USB_DEVICE(0x1532, 0x0116), .driver_info = 435 435 USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, 436 436 437 + /* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */ 438 + { USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM }, 439 + 437 440 /* Lenovo ThinkCenter A630Z TI024Gen3 usb-audio */ 438 441 { USB_DEVICE(0x17ef, 0xa012), .driver_info = 439 442 USB_QUIRK_DISCONNECT_SUSPEND },
+14 -7
drivers/usb/host/xhci-ring.c
··· 366 366 /* Must be called with xhci->lock held, releases and aquires lock back */ 367 367 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags) 368 368 { 369 - u32 temp_32; 369 + struct xhci_segment *new_seg = xhci->cmd_ring->deq_seg; 370 + union xhci_trb *new_deq = xhci->cmd_ring->dequeue; 371 + u64 crcr; 370 372 int ret; 371 373 372 374 xhci_dbg(xhci, "Abort command ring\n"); ··· 377 375 378 376 /* 379 377 * The control bits like command stop, abort are located in lower 380 - * dword of the command ring control register. Limit the write 381 - * to the lower dword to avoid corrupting the command ring pointer 382 - * in case if the command ring is stopped by the time upper dword 383 - * is written. 378 + * dword of the command ring control register. 379 + * Some controllers require all 64 bits to be written to abort the ring. 380 + * Make sure the upper dword is valid, pointing to the next command, 381 + * avoiding corrupting the command ring pointer in case the command ring 382 + * is stopped by the time the upper dword is written. 384 383 */ 385 - temp_32 = readl(&xhci->op_regs->cmd_ring); 386 - writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 384 + next_trb(xhci, NULL, &new_seg, &new_deq); 385 + if (trb_is_link(new_deq)) 386 + next_trb(xhci, NULL, &new_seg, &new_deq); 387 + 388 + crcr = xhci_trb_virt_to_dma(new_seg, new_deq); 389 + xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 387 390 388 391 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the 389 392 * completion of the Command Abort operation. If CRR is not negated in 5
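The xHCI hunk above stops doing a 32-bit read-modify-write of the command ring control register: some controllers latch all 64 bits on the abort write, so the fix composes the full value, with the next valid dequeue pointer in the upper bits and the abort flag in the low control bits. A sketch of composing that value (mask and bit per the xHCI CRCR layout; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define CMD_RING_ABORT (1u << 2)	/* CA bit of CRCR (xHCI 5.4.5) */
#define CRCR_PTR_MASK (~0x3fULL)	/* bits 63:6 hold the ring pointer */

/* Build the 64-bit CRCR value written in one go: the (64-byte aligned)
 * dequeue pointer plus the abort flag, so a controller that latches all
 * 64 bits still sees a sane command ring pointer. */
static uint64_t crcr_abort_value(uint64_t deq_dma)
{
	return (deq_dma & CRCR_PTR_MASK) | CMD_RING_ABORT;
}
```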
-4
drivers/usb/typec/tcpm/tcpm.c
··· 4110 4110 tcpm_try_src(port) ? SRC_TRY 4111 4111 : SNK_ATTACHED, 4112 4112 0); 4113 - else 4114 - /* Wait for VBUS, but not forever */ 4115 - tcpm_set_state(port, PORT_RESET, PD_T_PS_SOURCE_ON); 4116 4113 break; 4117 - 4118 4114 case SRC_TRY: 4119 4115 port->try_src_count++; 4120 4116 tcpm_set_cc(port, tcpm_rp_cc(port));
+3 -2
drivers/vfio/pci/vfio_pci_igd.c
··· 98 98 version = cpu_to_le16(0x0201); 99 99 100 100 if (igd_opregion_shift_copy(buf, &off, 101 - &version + (pos - OPREGION_VERSION), 101 + (u8 *)&version + 102 + (pos - OPREGION_VERSION), 102 103 &pos, &remaining, bytes)) 103 104 return -EFAULT; 104 105 } ··· 122 121 OPREGION_SIZE : 0); 123 122 124 123 if (igd_opregion_shift_copy(buf, &off, 125 - &rvda + (pos - OPREGION_RVDA), 124 + (u8 *)&rvda + (pos - OPREGION_RVDA), 126 125 &pos, &remaining, bytes)) 127 126 return -EFAULT; 128 127 }
+14 -14
drivers/vfio/vfio.c
··· 232 232 } 233 233 #endif /* CONFIG_VFIO_NOIOMMU */ 234 234 235 - /** 235 + /* 236 236 * IOMMU driver registration 237 237 */ 238 238 int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops) ··· 285 285 unsigned long action, void *data); 286 286 static void vfio_group_get(struct vfio_group *group); 287 287 288 - /** 288 + /* 289 289 * Container objects - containers are created when /dev/vfio/vfio is 290 290 * opened, but their lifecycle extends until the last user is done, so 291 291 * it's freed via kref. Must support container/group/device being ··· 309 309 kref_put(&container->kref, vfio_container_release); 310 310 } 311 311 312 - /** 312 + /* 313 313 * Group objects - create, release, get, put, search 314 314 */ 315 315 static struct vfio_group * ··· 488 488 return group; 489 489 } 490 490 491 - /** 491 + /* 492 492 * Device objects - create, release, get, put, search 493 493 */ 494 494 /* Device reference always implies a group reference */ ··· 595 595 return ret; 596 596 } 597 597 598 - /** 598 + /* 599 599 * Async device support 600 600 */ 601 601 static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev) ··· 689 689 return NOTIFY_OK; 690 690 } 691 691 692 - /** 692 + /* 693 693 * VFIO driver API 694 694 */ 695 695 void vfio_init_group_dev(struct vfio_device *device, struct device *dev, ··· 831 831 } 832 832 EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev); 833 833 834 - /** 834 + /* 835 835 * Get a reference to the vfio_device for a device. Even if the 836 836 * caller thinks they own the device, they could be racing with a 837 837 * release call path, so we can't trust drvdata for the shortcut. 
··· 965 965 } 966 966 EXPORT_SYMBOL_GPL(vfio_unregister_group_dev); 967 967 968 - /** 968 + /* 969 969 * VFIO base fd, /dev/vfio/vfio 970 970 */ 971 971 static long vfio_ioctl_check_extension(struct vfio_container *container, ··· 1183 1183 .compat_ioctl = compat_ptr_ioctl, 1184 1184 }; 1185 1185 1186 - /** 1186 + /* 1187 1187 * VFIO Group fd, /dev/vfio/$GROUP 1188 1188 */ 1189 1189 static void __vfio_group_unset_container(struct vfio_group *group) ··· 1536 1536 .release = vfio_group_fops_release, 1537 1537 }; 1538 1538 1539 - /** 1539 + /* 1540 1540 * VFIO Device fd 1541 1541 */ 1542 1542 static int vfio_device_fops_release(struct inode *inode, struct file *filep) ··· 1611 1611 .mmap = vfio_device_fops_mmap, 1612 1612 }; 1613 1613 1614 - /** 1614 + /* 1615 1615 * External user API, exported by symbols to be linked dynamically. 1616 1616 * 1617 1617 * The protocol includes: ··· 1659 1659 } 1660 1660 EXPORT_SYMBOL_GPL(vfio_group_get_external_user); 1661 1661 1662 - /** 1662 + /* 1663 1663 * External user API, exported by symbols to be linked dynamically. 1664 1664 * The external user passes in a device pointer 1665 1665 * to verify that: ··· 1725 1725 } 1726 1726 EXPORT_SYMBOL_GPL(vfio_external_check_extension); 1727 1727 1728 - /** 1728 + /* 1729 1729 * Sub-module support 1730 1730 */ 1731 1731 /* ··· 2272 2272 } 2273 2273 EXPORT_SYMBOL_GPL(vfio_group_iommu_domain); 2274 2274 2275 - /** 2275 + /* 2276 2276 * Module/class support 2277 2277 */ 2278 2278 static char *vfio_devnode(struct device *dev, umode_t *mode)
+9 -5
drivers/video/console/vgacon.c
··· 366 366 struct uni_pagedir *p; 367 367 368 368 /* 369 - * We cannot be loaded as a module, therefore init is always 1, 370 - * but vgacon_init can be called more than once, and init will 371 - * not be 1. 369 + * We cannot be loaded as a module, therefore init will be 1 370 + * if we are the default console, however if we are a fallback 371 + * console, for example if fbcon has failed registration, then 372 + * init will be 0, so we need to make sure our boot parameters 373 + * have been copied to the console structure for vgacon_resize 374 + * ultimately called by vc_resize. Any subsequent calls to 375 + * vgacon_init init will have init set to 0 too. 372 376 */ 373 377 c->vc_can_do_color = vga_can_do_color; 378 + c->vc_scan_lines = vga_scan_lines; 379 + c->vc_font.height = c->vc_cell_height = vga_video_font_height; 374 380 375 381 /* set dimensions manually if init != 0 since vc_resize() will fail */ 376 382 if (init) { ··· 385 379 } else 386 380 vc_resize(c, vga_video_num_columns, vga_video_num_lines); 387 381 388 - c->vc_scan_lines = vga_scan_lines; 389 - c->vc_font.height = c->vc_cell_height = vga_video_font_height; 390 382 c->vc_complement_mask = 0x7700; 391 383 if (vga_512_chars) 392 384 c->vc_hi_font_mask = 0x0800;
+5 -6
fs/cifs/connect.c
··· 1562 1562 /* fscache server cookies are based on primary channel only */ 1563 1563 if (!CIFS_SERVER_IS_CHAN(tcp_ses)) 1564 1564 cifs_fscache_get_client_cookie(tcp_ses); 1565 + #ifdef CONFIG_CIFS_FSCACHE 1566 + else 1567 + tcp_ses->fscache = tcp_ses->primary_server->fscache; 1568 + #endif /* CONFIG_CIFS_FSCACHE */ 1565 1569 1566 1570 /* queue echo request delayed work */ 1567 1571 queue_delayed_work(cifsiod_wq, &tcp_ses->echo, tcp_ses->echo_interval); ··· 3050 3046 cifs_dbg(VFS, "read only mount of RW share\n"); 3051 3047 /* no need to log a RW mount of a typical RW share */ 3052 3048 } 3053 - /* 3054 - * The cookie is initialized from volume info returned above. 3055 - * Inside cifs_fscache_get_super_cookie it checks 3056 - * that we do not get super cookie twice. 3057 - */ 3058 - cifs_fscache_get_super_cookie(tcon); 3059 3049 } 3060 3050 3061 3051 /* ··· 3424 3426 */ 3425 3427 mount_put_conns(mnt_ctx); 3426 3428 mount_get_dfs_conns(mnt_ctx); 3429 + set_root_ses(mnt_ctx); 3427 3430 3428 3431 full_path = build_unc_path_to_root(ctx, cifs_sb, true); 3429 3432 if (IS_ERR(full_path))
+10 -36
fs/cifs/fscache.c
··· 16 16 * Key layout of CIFS server cache index object 17 17 */ 18 18 struct cifs_server_key { 19 - struct { 20 - uint16_t family; /* address family */ 21 - __be16 port; /* IP port */ 22 - } hdr; 23 - union { 24 - struct in_addr ipv4_addr; 25 - struct in6_addr ipv6_addr; 26 - }; 19 + __u64 conn_id; 27 20 } __packed; 28 21 29 22 /* ··· 24 31 */ 25 32 void cifs_fscache_get_client_cookie(struct TCP_Server_Info *server) 26 33 { 27 - const struct sockaddr *sa = (struct sockaddr *) &server->dstaddr; 28 - const struct sockaddr_in *addr = (struct sockaddr_in *) sa; 29 - const struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *) sa; 30 34 struct cifs_server_key key; 31 - uint16_t key_len = sizeof(key.hdr); 32 - 33 - memset(&key, 0, sizeof(key)); 34 35 35 36 /* 36 - * Should not be a problem as sin_family/sin6_family overlays 37 - * sa_family field 37 + * Check if cookie was already initialized so don't reinitialize it. 38 + * In the future, as we integrate with newer fscache features, 39 + * we may want to instead add a check if cookie has changed 38 40 */ 39 - key.hdr.family = sa->sa_family; 40 - switch (sa->sa_family) { 41 - case AF_INET: 42 - key.hdr.port = addr->sin_port; 43 - key.ipv4_addr = addr->sin_addr; 44 - key_len += sizeof(key.ipv4_addr); 45 - break; 46 - 47 - case AF_INET6: 48 - key.hdr.port = addr6->sin6_port; 49 - key.ipv6_addr = addr6->sin6_addr; 50 - key_len += sizeof(key.ipv6_addr); 51 - break; 52 - 53 - default: 54 - cifs_dbg(VFS, "Unknown network family '%d'\n", sa->sa_family); 55 - server->fscache = NULL; 41 + if (server->fscache) 56 42 return; 57 - } 43 + 44 + memset(&key, 0, sizeof(key)); 45 + key.conn_id = server->conn_id; 58 46 59 47 server->fscache = 60 48 fscache_acquire_cookie(cifs_fscache_netfs.primary_index, 61 49 &cifs_fscache_server_index_def, 62 - &key, key_len, 50 + &key, sizeof(key), 63 51 NULL, 0, 64 52 server, 0, true); 65 53 cifs_dbg(FYI, "%s: (0x%p/0x%p)\n", ··· 66 92 * In the future, as we integrate with newer fscache features, 67 
93 * we may want to instead add a check if cookie has changed 68 94 */ 69 - if (tcon->fscache == NULL) 95 + if (tcon->fscache) 70 96 return; 71 97 72 98 sharename = extract_sharename(tcon->treeName);
+7
fs/cifs/inode.c
··· 1376 1376 inode = ERR_PTR(rc); 1377 1377 } 1378 1378 1379 + /* 1380 + * The cookie is initialized from volume info returned above. 1381 + * Inside cifs_fscache_get_super_cookie it checks 1382 + * that we do not get super cookie twice. 1383 + */ 1384 + cifs_fscache_get_super_cookie(tcon); 1385 + 1379 1386 out: 1380 1387 kfree(path); 1381 1388 free_xid(xid);
+4
fs/file.c
··· 858 858 file = NULL; 859 859 else if (!get_file_rcu_many(file, refs)) 860 860 goto loop; 861 + else if (files_lookup_fd_raw(files, fd) != file) { 862 + fput_many(file, refs); 863 + goto loop; 864 + } 861 865 } 862 866 rcu_read_unlock(); 863 867
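The fs/file.c change above closes a race in the lockless fd lookup: after taking a reference, the fd slot is re-read to make sure a concurrent close() did not swap the entry. A rough single-threaded sketch of that lookup-then-recheck pattern (helper names and the plain array are illustrative; the kernel uses RCU and atomic refcounts):

```c
#include <assert.h>
#include <stddef.h>

struct file { int refs; };

static struct file *table[4];

/* Sketch of the pattern added to the fd lookup loop: after bumping the
 * reference count, re-read the slot; if a concurrent close() swapped the
 * entry, drop the reference and retry. */
static struct file *get_file_safe(int fd)
{
    struct file *f;

    for (;;) {
        f = table[fd];            /* stands in for files_lookup_fd_raw() */
        if (!f)
            return NULL;
        f->refs++;                /* stands in for get_file_rcu_many() */
        if (table[fd] == f)       /* the newly added re-check */
            return f;
        f->refs--;                /* stands in for fput_many(); retry */
    }
}
```

In the real kernel the re-check matters because the reference may have been taken on a file object that was already being recycled for a different fd.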
+7 -3
fs/gfs2/glock.c
··· 1857 1857 1858 1858 void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state) 1859 1859 { 1860 - struct gfs2_holder mock_gh = { .gh_gl = gl, .gh_state = state, }; 1861 1860 unsigned long delay = 0; 1862 1861 unsigned long holdtime; 1863 1862 unsigned long now = jiffies; ··· 1889 1890 * keep the glock until the last strong holder is done with it. 1890 1891 */ 1891 1892 if (!find_first_strong_holder(gl)) { 1892 - if (state == LM_ST_UNLOCKED) 1893 - mock_gh.gh_state = LM_ST_EXCLUSIVE; 1893 + struct gfs2_holder mock_gh = { 1894 + .gh_gl = gl, 1895 + .gh_state = (state == LM_ST_UNLOCKED) ? 1896 + LM_ST_EXCLUSIVE : state, 1897 + .gh_iflags = BIT(HIF_HOLDER) 1898 + }; 1899 + 1894 1900 demote_incompat_holders(gl, &mock_gh); 1895 1901 } 1896 1902 handle_callback(gl, state, delay, true);
+45 -64
fs/gfs2/inode.c
··· 40 40 static const struct inode_operations gfs2_dir_iops; 41 41 static const struct inode_operations gfs2_symlink_iops; 42 42 43 - static int iget_test(struct inode *inode, void *opaque) 44 - { 45 - u64 no_addr = *(u64 *)opaque; 46 - 47 - return GFS2_I(inode)->i_no_addr == no_addr; 48 - } 49 - 50 - static int iget_set(struct inode *inode, void *opaque) 51 - { 52 - u64 no_addr = *(u64 *)opaque; 53 - 54 - GFS2_I(inode)->i_no_addr = no_addr; 55 - inode->i_ino = no_addr; 56 - return 0; 57 - } 58 - 59 - static struct inode *gfs2_iget(struct super_block *sb, u64 no_addr) 60 - { 61 - struct inode *inode; 62 - 63 - repeat: 64 - inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr); 65 - if (!inode) 66 - return inode; 67 - if (is_bad_inode(inode)) { 68 - iput(inode); 69 - goto repeat; 70 - } 71 - return inode; 72 - } 73 - 74 43 /** 75 44 * gfs2_set_iop - Sets inode operations 76 45 * @inode: The inode with correct i_mode filled in ··· 73 104 } 74 105 } 75 106 107 + static int iget_test(struct inode *inode, void *opaque) 108 + { 109 + u64 no_addr = *(u64 *)opaque; 110 + 111 + return GFS2_I(inode)->i_no_addr == no_addr; 112 + } 113 + 114 + static int iget_set(struct inode *inode, void *opaque) 115 + { 116 + u64 no_addr = *(u64 *)opaque; 117 + 118 + GFS2_I(inode)->i_no_addr = no_addr; 119 + inode->i_ino = no_addr; 120 + return 0; 121 + } 122 + 76 123 /** 77 124 * gfs2_inode_lookup - Lookup an inode 78 125 * @sb: The super block ··· 117 132 { 118 133 struct inode *inode; 119 134 struct gfs2_inode *ip; 120 - struct gfs2_glock *io_gl = NULL; 121 135 struct gfs2_holder i_gh; 122 136 int error; 123 137 124 138 gfs2_holder_mark_uninitialized(&i_gh); 125 - inode = gfs2_iget(sb, no_addr); 139 + inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr); 126 140 if (!inode) 127 141 return ERR_PTR(-ENOMEM); 128 142 ··· 129 145 130 146 if (inode->i_state & I_NEW) { 131 147 struct gfs2_sbd *sdp = GFS2_SB(inode); 148 + struct gfs2_glock *io_gl; 132 149 133 150 error = 
gfs2_glock_get(sdp, no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl); 134 151 if (unlikely(error)) 135 152 goto fail; 136 - flush_delayed_work(&ip->i_gl->gl_work); 137 - 138 - error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 139 - if (unlikely(error)) 140 - goto fail; 141 - if (blktype != GFS2_BLKST_UNLINKED) 142 - gfs2_cancel_delete_work(io_gl); 143 153 144 154 if (type == DT_UNKNOWN || blktype != GFS2_BLKST_FREE) { 145 155 /* 146 156 * The GL_SKIP flag indicates to skip reading the inode 147 - * block. We read the inode with gfs2_inode_refresh 157 + * block. We read the inode when instantiating it 148 158 * after possibly checking the block type. 149 159 */ 150 160 error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, ··· 159 181 } 160 182 } 161 183 162 - glock_set_object(ip->i_gl, ip); 163 184 set_bit(GLF_INSTANTIATE_NEEDED, &ip->i_gl->gl_flags); 164 - error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh); 185 + 186 + error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 165 187 if (unlikely(error)) 166 188 goto fail; 167 - glock_set_object(ip->i_iopen_gh.gh_gl, ip); 189 + if (blktype != GFS2_BLKST_UNLINKED) 190 + gfs2_cancel_delete_work(io_gl); 191 + error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh); 168 192 gfs2_glock_put(io_gl); 169 - io_gl = NULL; 193 + if (unlikely(error)) 194 + goto fail; 170 195 171 196 /* Lowest possible timestamp; will be overwritten in gfs2_dinode_in. 
*/ 172 197 inode->i_atime.tv_sec = 1LL << (8 * sizeof(inode->i_atime.tv_sec) - 1); 173 198 inode->i_atime.tv_nsec = 0; 174 199 200 + glock_set_object(ip->i_gl, ip); 201 + 175 202 if (type == DT_UNKNOWN) { 176 203 /* Inode glock must be locked already */ 177 204 error = gfs2_instantiate(&i_gh); 178 - if (error) 205 + if (error) { 206 + glock_clear_object(ip->i_gl, ip); 179 207 goto fail; 208 + } 180 209 } else { 181 210 ip->i_no_formal_ino = no_formal_ino; 182 211 inode->i_mode = DT2IF(type); ··· 191 206 192 207 if (gfs2_holder_initialized(&i_gh)) 193 208 gfs2_glock_dq_uninit(&i_gh); 209 + glock_set_object(ip->i_iopen_gh.gh_gl, ip); 194 210 195 211 gfs2_set_iop(inode); 212 + unlock_new_inode(inode); 196 213 } 197 214 198 215 if (no_formal_ino && ip->i_no_formal_ino && 199 216 no_formal_ino != ip->i_no_formal_ino) { 200 - error = -ESTALE; 201 - if (inode->i_state & I_NEW) 202 - goto fail; 203 217 iput(inode); 204 - return ERR_PTR(error); 218 + return ERR_PTR(-ESTALE); 205 219 } 206 - 207 - if (inode->i_state & I_NEW) 208 - unlock_new_inode(inode); 209 220 210 221 return inode; 211 222 212 223 fail: 213 - if (gfs2_holder_initialized(&ip->i_iopen_gh)) { 214 - glock_clear_object(ip->i_iopen_gh.gh_gl, ip); 224 + if (gfs2_holder_initialized(&ip->i_iopen_gh)) 215 225 gfs2_glock_dq_uninit(&ip->i_iopen_gh); 216 - } 217 - if (io_gl) 218 - gfs2_glock_put(io_gl); 219 226 if (gfs2_holder_initialized(&i_gh)) 220 227 gfs2_glock_dq_uninit(&i_gh); 221 228 iget_failed(inode); ··· 707 730 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl); 708 731 if (error) 709 732 goto fail_free_inode; 710 - flush_delayed_work(&ip->i_gl->gl_work); 711 733 712 734 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 713 735 if (error) 714 736 goto fail_free_inode; 715 737 gfs2_cancel_delete_work(io_gl); 716 738 739 + error = insert_inode_locked4(inode, ip->i_no_addr, iget_test, &ip->i_no_addr); 740 + BUG_ON(error); 741 + 717 742 error = 
gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, ghs + 1); 718 743 if (error) 719 744 goto fail_gunlock2; 720 745 721 - glock_set_object(ip->i_gl, ip); 722 746 error = gfs2_trans_begin(sdp, blocks, 0); 723 747 if (error) 724 748 goto fail_gunlock2; ··· 735 757 if (error) 736 758 goto fail_gunlock2; 737 759 760 + glock_set_object(ip->i_gl, ip); 738 761 glock_set_object(io_gl, ip); 739 762 gfs2_set_iop(inode); 740 - insert_inode_hash(inode); 741 763 742 764 free_vfs_inode = 0; /* After this point, the inode is no longer 743 765 considered free. Any failures need to undo ··· 779 801 gfs2_glock_dq_uninit(ghs + 1); 780 802 gfs2_glock_put(io_gl); 781 803 gfs2_qa_put(dip); 804 + unlock_new_inode(inode); 782 805 return error; 783 806 784 807 fail_gunlock3: 808 + glock_clear_object(ip->i_gl, ip); 785 809 glock_clear_object(io_gl, ip); 786 810 gfs2_glock_dq_uninit(&ip->i_iopen_gh); 787 811 fail_gunlock2: 788 - glock_clear_object(io_gl, ip); 789 812 gfs2_glock_put(io_gl); 790 813 fail_free_inode: 791 814 if (ip->i_gl) { 792 - glock_clear_object(ip->i_gl, ip); 793 815 if (free_vfs_inode) /* else evict will do the put for us */ 794 816 gfs2_glock_put(ip->i_gl); 795 817 } ··· 807 829 mark_inode_dirty(inode); 808 830 set_bit(free_vfs_inode ? GIF_FREE_VFS_INODE : GIF_ALLOC_FAILED, 809 831 &GFS2_I(inode)->i_flags); 810 - iput(inode); 832 + if (inode->i_state & I_NEW) 833 + iget_failed(inode); 834 + else 835 + iput(inode); 811 836 } 812 837 if (gfs2_holder_initialized(ghs + 1)) 813 838 gfs2_glock_dq_uninit(ghs + 1);
+7
fs/io-wq.c
··· 714 714 715 715 static inline bool io_should_retry_thread(long err) 716 716 { 717 + /* 718 + * Prevent perpetual task_work retry, if the task (or its group) is 719 + * exiting. 720 + */ 721 + if (fatal_signal_pending(current)) 722 + return false; 723 + 717 724 switch (err) { 718 725 case -EAGAIN: 719 726 case -ERESTARTSYS:
+2 -2
fs/netfs/read_helper.c
··· 1008 1008 } 1009 1009 EXPORT_SYMBOL(netfs_readpage); 1010 1010 1011 - /** 1012 - * netfs_skip_folio_read - prep a folio for writing without reading first 1011 + /* 1012 + * Prepare a folio for writing without reading first 1013 1013 * @folio: The folio being prepared 1014 1014 * @pos: starting position for the write 1015 1015 * @len: length of write
-1
fs/xfs/xfs_inode.c
··· 3122 3122 * appropriately. 3123 3123 */ 3124 3124 if (flags & RENAME_WHITEOUT) { 3125 - ASSERT(!(flags & (RENAME_NOREPLACE | RENAME_EXCHANGE))); 3126 3125 error = xfs_rename_alloc_whiteout(mnt_userns, target_dp, &wip); 3127 3126 if (error) 3128 3127 return error;
+2
include/linux/kprobes.h
··· 153 153 struct kretprobe_holder *rph; 154 154 }; 155 155 156 + #define KRETPROBE_MAX_DATA_SIZE 4096 157 + 156 158 struct kretprobe_instance { 157 159 union { 158 160 struct freelist_node freelist;
+4 -1
include/linux/mlx5/mlx5_ifc.h
··· 9698 9698 u8 regs_84_to_68[0x11]; 9699 9699 u8 tracer_registers[0x4]; 9700 9700 9701 - u8 regs_63_to_32[0x20]; 9701 + u8 regs_63_to_46[0x12]; 9702 + u8 mrtc[0x1]; 9703 + u8 regs_44_to_32[0xd]; 9704 + 9702 9705 u8 regs_31_to_0[0x20]; 9703 9706 }; 9704 9707
+13 -6
include/linux/netdevice.h
··· 4404 4404 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu) 4405 4405 { 4406 4406 spin_lock(&txq->_xmit_lock); 4407 - txq->xmit_lock_owner = cpu; 4407 + /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4408 + WRITE_ONCE(txq->xmit_lock_owner, cpu); 4408 4409 } 4409 4410 4410 4411 static inline bool __netif_tx_acquire(struct netdev_queue *txq) ··· 4422 4421 static inline void __netif_tx_lock_bh(struct netdev_queue *txq) 4423 4422 { 4424 4423 spin_lock_bh(&txq->_xmit_lock); 4425 - txq->xmit_lock_owner = smp_processor_id(); 4424 + /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4425 + WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id()); 4426 4426 } 4427 4427 4428 4428 static inline bool __netif_tx_trylock(struct netdev_queue *txq) 4429 4429 { 4430 4430 bool ok = spin_trylock(&txq->_xmit_lock); 4431 - if (likely(ok)) 4432 - txq->xmit_lock_owner = smp_processor_id(); 4431 + 4432 + if (likely(ok)) { 4433 + /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4434 + WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id()); 4435 + } 4433 4436 return ok; 4434 4437 } 4435 4438 4436 4439 static inline void __netif_tx_unlock(struct netdev_queue *txq) 4437 4440 { 4438 - txq->xmit_lock_owner = -1; 4441 + /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4442 + WRITE_ONCE(txq->xmit_lock_owner, -1); 4439 4443 spin_unlock(&txq->_xmit_lock); 4440 4444 } 4441 4445 4442 4446 static inline void __netif_tx_unlock_bh(struct netdev_queue *txq) 4443 4447 { 4444 - txq->xmit_lock_owner = -1; 4448 + /* Pairs with READ_ONCE() in __dev_queue_xmit() */ 4449 + WRITE_ONCE(txq->xmit_lock_owner, -1); 4445 4450 spin_unlock_bh(&txq->_xmit_lock); 4446 4451 } 4447 4452
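The netdevice.h hunk annotates every `xmit_lock_owner` access so the lockless ownership check in `__dev_queue_xmit()` is data-race free. A minimal user-space sketch of the same pairing (the `WRITE_ONCE`/`READ_ONCE` stand-ins below are simplified volatile versions, not the kernel macros, and the spinlock is omitted):

```c
#include <assert.h>

/* Simplified volatile stand-ins for the kernel's WRITE_ONCE()/READ_ONCE(),
 * which guarantee a single, untorn access for lockless readers. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct txq { int xmit_lock_owner; };   /* -1 when unowned */

static void txq_acquire(struct txq *q, int cpu) { WRITE_ONCE(q->xmit_lock_owner, cpu); }
static void txq_release(struct txq *q)          { WRITE_ONCE(q->xmit_lock_owner, -1); }

/* Lockless recursion check: concurrent CPUs may set the owner to -1 or to
 * their own id, but never to ours, so a racy annotated read is safe here. */
static int txq_owned_by(struct txq *q, int cpu)
{
    return READ_ONCE(q->xmit_lock_owner) == cpu;
}
```

The comment added in the hunk ("Other cpus might concurrently change txq->xmit_lock_owner to -1 or to their cpu id, but not to our id") is exactly why the racy read in the check is tolerable once it is annotated.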
+3 -2
include/linux/sched/cputime.h
··· 18 18 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */ 19 19 20 20 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN 21 - extern void task_cputime(struct task_struct *t, 21 + extern bool task_cputime(struct task_struct *t, 22 22 u64 *utime, u64 *stime); 23 23 extern u64 task_gtime(struct task_struct *t); 24 24 #else 25 - static inline void task_cputime(struct task_struct *t, 25 + static inline bool task_cputime(struct task_struct *t, 26 26 u64 *utime, u64 *stime) 27 27 { 28 28 *utime = t->utime; 29 29 *stime = t->stime; 30 + return false; 30 31 } 31 32 32 33 static inline u64 task_gtime(struct task_struct *t)
+4 -10
include/linux/siphash.h
··· 27 27 } 28 28 29 29 u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key); 30 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 31 30 u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key); 32 - #endif 33 31 34 32 u64 siphash_1u64(const u64 a, const siphash_key_t *key); 35 33 u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key); ··· 80 82 static inline u64 siphash(const void *data, size_t len, 81 83 const siphash_key_t *key) 82 84 { 83 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 84 - if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) 85 + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || 86 + !IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT)) 85 87 return __siphash_unaligned(data, len, key); 86 - #endif 87 88 return ___siphash_aligned(data, len, key); 88 89 } 89 90 ··· 93 96 94 97 u32 __hsiphash_aligned(const void *data, size_t len, 95 98 const hsiphash_key_t *key); 96 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 97 99 u32 __hsiphash_unaligned(const void *data, size_t len, 98 100 const hsiphash_key_t *key); 99 - #endif 100 101 101 102 u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key); 102 103 u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key); ··· 130 135 static inline u32 hsiphash(const void *data, size_t len, 131 136 const hsiphash_key_t *key) 132 137 { 133 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 134 - if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) 138 + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) || 139 + !IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT)) 135 140 return __hsiphash_unaligned(data, len, key); 136 - #endif 137 141 return ___hsiphash_aligned(data, len, key); 138 142 } 139 143
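The siphash.h hunk folds the `#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS` preprocessor guard into an `IS_ENABLED()` test, so one expression covers both configurations and dead code is eliminated by constant folding rather than the preprocessor. A standalone sketch of that dispatch shape (a plain constant stands in for the kernel's `IS_ENABLED(CONFIG_...)`):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Constant standing in for IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS);
 * the compiler folds the branch away either way. */
#define HAVE_EFFICIENT_UNALIGNED_ACCESS 0
#define HASH_ALIGNMENT sizeof(uint64_t)

static int took_unaligned;

static uint64_t hash_aligned(const void *data, size_t len)   { (void)data; took_unaligned = 0; return len; }
static uint64_t hash_unaligned(const void *data, size_t len) { (void)data; took_unaligned = 1; return len; }

/* Mirrors the rewritten siphash() dispatch: take the unaligned path either
 * when the platform handles unaligned loads efficiently, or when the
 * buffer really is misaligned. */
static uint64_t hash(const void *data, size_t len)
{
    if (HAVE_EFFICIENT_UNALIGNED_ACCESS ||
        ((uintptr_t)data & (HASH_ALIGNMENT - 1)) != 0)
        return hash_unaligned(data, len);
    return hash_aligned(data, len);
}
```

The matching lib/siphash.c hunk moves the `#ifndef` so that `__siphash_aligned()` is only built when it can actually be reached, which is why the guard now wraps the aligned variant instead of the unaligned one.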
+1 -1
include/net/busy_poll.h
··· 133 133 if (unlikely(READ_ONCE(sk->sk_napi_id) != skb->napi_id)) 134 134 WRITE_ONCE(sk->sk_napi_id, skb->napi_id); 135 135 #endif 136 - sk_rx_queue_set(sk, skb); 136 + sk_rx_queue_update(sk, skb); 137 137 } 138 138 139 139 static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id)
+11
include/net/dst_cache.h
··· 80 80 } 81 81 82 82 /** 83 + * dst_cache_reset_now - invalidate the cache contents immediately 84 + * @dst_cache: the cache 85 + * 86 + * The caller must be sure there are no concurrent users, as this frees 87 + * all dst_cache users immediately, rather than waiting for the next 88 + * per-cpu usage like dst_cache_reset does. Most callers should use the 89 + * higher speed lazily-freed dst_cache_reset function instead. 90 + */ 91 + void dst_cache_reset_now(struct dst_cache *dst_cache); 92 + 93 + /** 83 94 * dst_cache_init - initialize the cache, allocating the required storage 84 95 * @dst_cache: the cache 85 96 * @gfp: allocation flags
+3 -1
include/net/fib_rules.h
··· 69 69 int (*action)(struct fib_rule *, 70 70 struct flowi *, int, 71 71 struct fib_lookup_arg *); 72 - bool (*suppress)(struct fib_rule *, 72 + bool (*suppress)(struct fib_rule *, int, 73 73 struct fib_lookup_arg *); 74 74 int (*match)(struct fib_rule *, 75 75 struct flowi *, int); ··· 218 218 struct fib_lookup_arg *arg)); 219 219 220 220 INDIRECT_CALLABLE_DECLARE(bool fib6_rule_suppress(struct fib_rule *rule, 221 + int flags, 221 222 struct fib_lookup_arg *arg)); 222 223 INDIRECT_CALLABLE_DECLARE(bool fib4_rule_suppress(struct fib_rule *rule, 224 + int flags, 223 225 struct fib_lookup_arg *arg)); 224 226 #endif
+1 -1
include/net/ip_fib.h
··· 438 438 #ifdef CONFIG_IP_ROUTE_CLASSID 439 439 static inline int fib_num_tclassid_users(struct net *net) 440 440 { 441 - return net->ipv4.fib_num_tclassid_users; 441 + return atomic_read(&net->ipv4.fib_num_tclassid_users); 442 442 } 443 443 #else 444 444 static inline int fib_num_tclassid_users(struct net *net)
+1 -1
include/net/netns/ipv4.h
··· 65 65 bool fib_has_custom_local_routes; 66 66 bool fib_offload_disabled; 67 67 #ifdef CONFIG_IP_ROUTE_CLASSID 68 - int fib_num_tclassid_users; 68 + atomic_t fib_num_tclassid_users; 69 69 #endif 70 70 struct hlist_head *fib_table_hash; 71 71 struct sock *fibnl;
+23 -7
include/net/sock.h
··· 1913 1913 return -1; 1914 1914 } 1915 1915 1916 - static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb) 1916 + static inline void __sk_rx_queue_set(struct sock *sk, 1917 + const struct sk_buff *skb, 1918 + bool force_set) 1917 1919 { 1918 1920 #ifdef CONFIG_SOCK_RX_QUEUE_MAPPING 1919 1921 if (skb_rx_queue_recorded(skb)) { 1920 1922 u16 rx_queue = skb_get_rx_queue(skb); 1921 1923 1922 - if (unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue)) 1924 + if (force_set || 1925 + unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue)) 1923 1926 WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue); 1924 1927 } 1925 1928 #endif 1929 + } 1930 + 1931 + static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb) 1932 + { 1933 + __sk_rx_queue_set(sk, skb, true); 1934 + } 1935 + 1936 + static inline void sk_rx_queue_update(struct sock *sk, const struct sk_buff *skb) 1937 + { 1938 + __sk_rx_queue_set(sk, skb, false); 1926 1939 } 1927 1940 1928 1941 static inline void sk_rx_queue_clear(struct sock *sk) ··· 2443 2430 * @sk: socket 2444 2431 * 2445 2432 * Use the per task page_frag instead of the per socket one for 2446 - * optimization when we know that we're in the normal context and owns 2433 + * optimization when we know that we're in process context and own 2447 2434 * everything that's associated with %current. 2448 2435 * 2449 - * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest 2450 - * inside other socket operations and end up recursing into sk_page_frag() 2451 - * while it's already in use. 2436 + * Both direct reclaim and page faults can nest inside other 2437 + * socket operations and end up recursing into sk_page_frag() 2438 + * while it's already in use: explicitly avoid task page_frag 2439 + * usage if the caller is potentially doing any of them. 2440 + * This assumes that page fault handlers use the GFP_NOFS flags. 
2452 2441 * 2453 2442 * Return: a per task page_frag if context allows that, 2454 2443 * otherwise a per socket one. 2455 2444 */ 2456 2445 static inline struct page_frag *sk_page_frag(struct sock *sk) 2457 2446 { 2458 - if (gfpflags_normal_context(sk->sk_allocation)) 2447 + if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) == 2448 + (__GFP_DIRECT_RECLAIM | __GFP_FS)) 2459 2449 return &current->task_frag; 2460 2450 2461 2451 return &sk->sk_frag;
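The `sk_page_frag()` rewrite replaces `gfpflags_normal_context()` with an explicit three-flag test, so that both direct-reclaim and page-fault (filesystem) recursion are excluded. Roughly, as a sketch (the bit values below are illustrative placeholders; the kernel defines the real ones in gfp.h):

```c
#include <assert.h>

/* Illustrative gfp bit values, NOT the kernel's exact constants. */
#define __GFP_DIRECT_RECLAIM 0x400u
#define __GFP_FS             0x80u
#define __GFP_MEMALLOC       0x20000u

/* Mirrors the new sk_page_frag() condition: the per-task page_frag may
 * only be used when the allocation can both block (direct reclaim) and
 * enter the filesystem, and does not dip into reserves (__GFP_MEMALLOC).
 * Anything else might already be nested inside a socket operation. */
static int can_use_task_frag(unsigned int gfp)
{
    return (gfp & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
           (__GFP_DIRECT_RECLAIM | __GFP_FS);
}
```

Requiring `__GFP_FS` is what encodes the hunk's assumption that page fault handlers allocate with GFP_NOFS-style flags: such callers fail the test and fall back to the per-socket frag.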
+1 -1
include/sound/soc-acpi.h
··· 147 147 */ 148 148 /* Descriptor for SST ASoC machine driver */ 149 149 struct snd_soc_acpi_mach { 150 - const u8 id[ACPI_ID_LEN]; 150 + u8 id[ACPI_ID_LEN]; 151 151 const struct snd_soc_acpi_codecs *comp_ids; 152 152 const u32 link_mask; 153 153 const struct snd_soc_acpi_link_adr *links;
+7
include/uapi/drm/virtgpu_drm.h
··· 196 196 __u64 ctx_set_params; 197 197 }; 198 198 199 + /* 200 + * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in 201 + * effect. The event size is sizeof(drm_event), since there is no additional 202 + * payload. 203 + */ 204 + #define VIRTGPU_EVENT_FENCE_SIGNALED 0x90000000 205 + 199 206 #define DRM_IOCTL_VIRTGPU_MAP \ 200 207 DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map) 201 208
+1 -1
include/uapi/linux/if_ether.h
··· 117 117 #define ETH_P_IFE 0xED3E /* ForCES inter-FE LFB type */ 118 118 #define ETH_P_AF_IUCV 0xFBFB /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */ 119 119 120 - #define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is less than this value 120 + #define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is more than this value 121 121 * then the frame is Ethernet II. Else it is 802.3 */ 122 122 123 123 /*
+3
kernel/kprobes.c
··· 2086 2086 } 2087 2087 } 2088 2088 2089 + if (rp->data_size > KRETPROBE_MAX_DATA_SIZE) 2090 + return -E2BIG; 2091 + 2089 2092 rp->kp.pre_handler = pre_handler_kretprobe; 2090 2093 rp->kp.post_handler = NULL; 2091 2094
+3 -3
kernel/sched/core.c
··· 1918 1918 }; 1919 1919 } 1920 1920 1921 - rq->uclamp_flags = 0; 1921 + rq->uclamp_flags = UCLAMP_FLAG_IDLE; 1922 1922 } 1923 1923 1924 1924 static void __init init_uclamp(void) ··· 6617 6617 int mode = sched_dynamic_mode(str); 6618 6618 if (mode < 0) { 6619 6619 pr_warn("Dynamic Preempt: unsupported mode: %s\n", str); 6620 - return 1; 6620 + return 0; 6621 6621 } 6622 6622 6623 6623 sched_dynamic_update(mode); 6624 - return 0; 6624 + return 1; 6625 6625 } 6626 6626 __setup("preempt=", setup_preempt_mode); 6627 6627
+9 -3
kernel/sched/cputime.c
··· 615 615 .sum_exec_runtime = p->se.sum_exec_runtime, 616 616 }; 617 617 618 - task_cputime(p, &cputime.utime, &cputime.stime); 618 + if (task_cputime(p, &cputime.utime, &cputime.stime)) 619 + cputime.sum_exec_runtime = task_sched_runtime(p); 619 620 cputime_adjust(&cputime, &p->prev_cputime, ut, st); 620 621 } 621 622 EXPORT_SYMBOL_GPL(task_cputime_adjusted); ··· 829 828 * add up the pending nohz execution time since the last 830 829 * cputime snapshot. 831 830 */ 832 - void task_cputime(struct task_struct *t, u64 *utime, u64 *stime) 831 + bool task_cputime(struct task_struct *t, u64 *utime, u64 *stime) 833 832 { 834 833 struct vtime *vtime = &t->vtime; 835 834 unsigned int seq; 836 835 u64 delta; 836 + int ret; 837 837 838 838 if (!vtime_accounting_enabled()) { 839 839 *utime = t->utime; 840 840 *stime = t->stime; 841 - return; 841 + return false; 842 842 } 843 843 844 844 do { 845 + ret = false; 845 846 seq = read_seqcount_begin(&vtime->seqcount); 846 847 847 848 *utime = t->utime; ··· 853 850 if (vtime->state < VTIME_SYS) 854 851 continue; 855 852 853 + ret = true; 856 854 delta = vtime_delta(vtime); 857 855 858 856 /* ··· 865 861 else 866 862 *utime += vtime->utime + delta; 867 863 } while (read_seqcount_retry(&vtime->seqcount, seq)); 864 + 865 + return ret; 868 866 } 869 867 870 868 static int vtime_state_fetch(struct vtime *vtime, int cpu)
+2 -1
kernel/softirq.c
··· 595 595 { 596 596 __irq_enter_raw(); 597 597 598 - if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)) 598 + if (tick_nohz_full_cpu(smp_processor_id()) || 599 + (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))) 599 600 tick_irq_enter(); 600 601 601 602 account_hardirq_enter(current);
+7
kernel/time/tick-sched.c
··· 1375 1375 now = ktime_get(); 1376 1376 if (ts->idle_active) 1377 1377 tick_nohz_stop_idle(ts, now); 1378 + /* 1379 + * If all CPUs are idle. We may need to update a stale jiffies value. 1380 + * Note nohz_full is a special case: a timekeeper is guaranteed to stay 1381 + * alive but it might be busy looping with interrupts disabled in some 1382 + * rare case (typically stop machine). So we must make sure we have a 1383 + * last resort. 1384 + */ 1378 1385 if (ts->tick_stopped) 1379 1386 tick_nohz_update_jiffies(now); 1380 1387 }
+1 -1
kernel/trace/trace_events_hist.c
··· 3757 3757 3758 3758 if (strcmp(field->type, hist_field->type) != 0) { 3759 3759 if (field->size != hist_field->size || 3760 - field->is_signed != hist_field->is_signed) 3760 + (!field->is_string && field->is_signed != hist_field->is_signed)) 3761 3761 return -EINVAL; 3762 3762 } 3763 3763
+3
kernel/trace/tracing_map.c
··· 15 15 #include <linux/jhash.h> 16 16 #include <linux/slab.h> 17 17 #include <linux/sort.h> 18 + #include <linux/kmemleak.h> 18 19 19 20 #include "tracing_map.h" 20 21 #include "trace.h" ··· 308 307 for (i = 0; i < a->n_pages; i++) { 309 308 if (!a->pages[i]) 310 309 break; 310 + kmemleak_free(a->pages[i]); 311 311 free_page((unsigned long)a->pages[i]); 312 312 } 313 313 ··· 344 342 a->pages[i] = (void *)get_zeroed_page(GFP_KERNEL); 345 343 if (!a->pages[i]) 346 344 goto free; 345 + kmemleak_alloc(a->pages[i], PAGE_SIZE, 1, GFP_KERNEL); 347 346 } 348 347 out: 349 348 return a;
+6 -6
lib/siphash.c
··· 49 49 SIPROUND; \ 50 50 return (v0 ^ v1) ^ (v2 ^ v3); 51 51 52 + #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 52 53 u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key) 53 54 { 54 55 const u8 *end = data + len - (len % sizeof(u64)); ··· 81 80 POSTAMBLE 82 81 } 83 82 EXPORT_SYMBOL(__siphash_aligned); 83 + #endif 84 84 85 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 86 85 u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key) 87 86 { 88 87 const u8 *end = data + len - (len % sizeof(u64)); ··· 114 113 POSTAMBLE 115 114 } 116 115 EXPORT_SYMBOL(__siphash_unaligned); 117 - #endif 118 116 119 117 /** 120 118 * siphash_1u64 - compute 64-bit siphash PRF value of a u64 ··· 250 250 HSIPROUND; \ 251 251 return (v0 ^ v1) ^ (v2 ^ v3); 252 252 253 + #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 253 254 u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) 254 255 { 255 256 const u8 *end = data + len - (len % sizeof(u64)); ··· 281 280 HPOSTAMBLE 282 281 } 283 282 EXPORT_SYMBOL(__hsiphash_aligned); 283 + #endif 284 284 285 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 286 285 u32 __hsiphash_unaligned(const void *data, size_t len, 287 286 const hsiphash_key_t *key) 288 287 { ··· 314 313 HPOSTAMBLE 315 314 } 316 315 EXPORT_SYMBOL(__hsiphash_unaligned); 317 - #endif 318 316 319 317 /** 320 318 * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32 ··· 418 418 HSIPROUND; \ 419 419 return v1 ^ v3; 420 420 421 + #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 421 422 u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key) 422 423 { 423 424 const u8 *end = data + len - (len % sizeof(u32)); ··· 439 438 HPOSTAMBLE 440 439 } 441 440 EXPORT_SYMBOL(__hsiphash_aligned); 441 + #endif 442 442 443 - #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 444 443 u32 __hsiphash_unaligned(const void *data, size_t len, 445 444 const hsiphash_key_t *key) 446 445 { ··· 462 461 HPOSTAMBLE 463 
462 } 464 463 EXPORT_SYMBOL(__hsiphash_unaligned); 465 - #endif 466 464 467 465 /** 468 466 * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+4 -1
net/core/dev.c
··· 4210 4210 if (dev->flags & IFF_UP) { 4211 4211 int cpu = smp_processor_id(); /* ok because BHs are off */ 4212 4212 4213 - if (txq->xmit_lock_owner != cpu) { 4213 + /* Other cpus might concurrently change txq->xmit_lock_owner 4214 + * to -1 or to their cpu id, but not to our id. 4215 + */ 4216 + if (READ_ONCE(txq->xmit_lock_owner) != cpu) { 4214 4217 if (dev_xmit_recursion()) 4215 4218 goto recursion_alert; 4216 4219
+19
net/core/dst_cache.c
··· 162 162 free_percpu(dst_cache->cache); 163 163 } 164 164 EXPORT_SYMBOL_GPL(dst_cache_destroy); 165 + 166 + void dst_cache_reset_now(struct dst_cache *dst_cache) 167 + { 168 + int i; 169 + 170 + if (!dst_cache->cache) 171 + return; 172 + 173 + dst_cache->reset_ts = jiffies; 174 + for_each_possible_cpu(i) { 175 + struct dst_cache_pcpu *idst = per_cpu_ptr(dst_cache->cache, i); 176 + struct dst_entry *dst = idst->dst; 177 + 178 + idst->cookie = 0; 179 + idst->dst = NULL; 180 + dst_release(dst); 181 + } 182 + } 183 + EXPORT_SYMBOL_GPL(dst_cache_reset_now);
+1 -1
net/core/fib_rules.c
··· 323 323 if (!err && ops->suppress && INDIRECT_CALL_MT(ops->suppress, 324 324 fib6_rule_suppress, 325 325 fib4_rule_suppress, 326 - rule, arg)) 326 + rule, flags, arg)) 327 327 continue; 328 328 329 329 if (err != -EAGAIN) {
+1 -1
net/ipv4/fib_frontend.c
··· 1582 1582 int error; 1583 1583 1584 1584 #ifdef CONFIG_IP_ROUTE_CLASSID 1585 - net->ipv4.fib_num_tclassid_users = 0; 1585 + atomic_set(&net->ipv4.fib_num_tclassid_users, 0); 1586 1586 #endif 1587 1587 error = ip_fib_net_init(net); 1588 1588 if (error < 0)
+3 -2
net/ipv4/fib_rules.c
··· 141 141 } 142 142 143 143 INDIRECT_CALLABLE_SCOPE bool fib4_rule_suppress(struct fib_rule *rule, 144 + int flags, 144 145 struct fib_lookup_arg *arg) 145 146 { 146 147 struct fib_result *result = (struct fib_result *) arg->result; ··· 264 263 if (tb[FRA_FLOW]) { 265 264 rule4->tclassid = nla_get_u32(tb[FRA_FLOW]); 266 265 if (rule4->tclassid) 267 - net->ipv4.fib_num_tclassid_users++; 266 + atomic_inc(&net->ipv4.fib_num_tclassid_users); 268 267 } 269 268 #endif 270 269 ··· 296 295 297 296 #ifdef CONFIG_IP_ROUTE_CLASSID 298 297 if (((struct fib4_rule *)rule)->tclassid) 299 - net->ipv4.fib_num_tclassid_users--; 298 + atomic_dec(&net->ipv4.fib_num_tclassid_users); 300 299 #endif 301 300 net->ipv4.fib_has_custom_rules = true; 302 301
+2 -2
net/ipv4/fib_semantics.c
··· 220 220 { 221 221 #ifdef CONFIG_IP_ROUTE_CLASSID 222 222 if (fib_nh->nh_tclassid) 223 - net->ipv4.fib_num_tclassid_users--; 223 + atomic_dec(&net->ipv4.fib_num_tclassid_users); 224 224 #endif 225 225 fib_nh_common_release(&fib_nh->nh_common); 226 226 } ··· 632 632 #ifdef CONFIG_IP_ROUTE_CLASSID 633 633 nh->nh_tclassid = cfg->fc_flow; 634 634 if (nh->nh_tclassid) 635 - net->ipv4.fib_num_tclassid_users++; 635 + atomic_inc(&net->ipv4.fib_num_tclassid_users); 636 636 #endif 637 637 #ifdef CONFIG_IP_ROUTE_MULTIPATH 638 638 nh->fib_nh_weight = nh_weight;
+2 -2
net/ipv6/fib6_rules.c
··· 267 267 } 268 268 269 269 INDIRECT_CALLABLE_SCOPE bool fib6_rule_suppress(struct fib_rule *rule, 270 + int flags, 270 271 struct fib_lookup_arg *arg) 271 272 { 272 273 struct fib6_result *res = arg->result; ··· 295 294 return false; 296 295 297 296 suppress_route: 298 - if (!(arg->flags & FIB_LOOKUP_NOREF)) 299 - ip6_rt_put(rt); 297 + ip6_rt_put_flags(rt, flags); 300 298 return true; 301 299 } 302 300
+3 -3
net/ipv6/ip6_offload.c
··· 248 248 * memcmp() alone below is sufficient, right? 249 249 */ 250 250 if ((first_word & htonl(0xF00FFFFF)) || 251 - !ipv6_addr_equal(&iph->saddr, &iph2->saddr) || 252 - !ipv6_addr_equal(&iph->daddr, &iph2->daddr) || 253 - *(u16 *)&iph->nexthdr != *(u16 *)&iph2->nexthdr) { 251 + !ipv6_addr_equal(&iph->saddr, &iph2->saddr) || 252 + !ipv6_addr_equal(&iph->daddr, &iph2->daddr) || 253 + *(u16 *)&iph->nexthdr != *(u16 *)&iph2->nexthdr) { 254 254 not_same_flow: 255 255 NAPI_GRO_CB(p)->same_flow = 0; 256 256 continue;
+5 -4
net/mctp/route.c
··· 952 952 } 953 953 954 954 static int mctp_route_remove(struct mctp_dev *mdev, mctp_eid_t daddr_start, 955 - unsigned int daddr_extent) 955 + unsigned int daddr_extent, unsigned char type) 956 956 { 957 957 struct net *net = dev_net(mdev->dev); 958 958 struct mctp_route *rt, *tmp; ··· 969 969 970 970 list_for_each_entry_safe(rt, tmp, &net->mctp.routes, list) { 971 971 if (rt->dev == mdev && 972 - rt->min == daddr_start && rt->max == daddr_end) { 972 + rt->min == daddr_start && rt->max == daddr_end && 973 + rt->type == type) { 973 974 list_del_rcu(&rt->list); 974 975 /* TODO: immediate RTM_DELROUTE */ 975 976 mctp_route_release(rt); ··· 988 987 989 988 int mctp_route_remove_local(struct mctp_dev *mdev, mctp_eid_t addr) 990 989 { 991 - return mctp_route_remove(mdev, addr, 0); 990 + return mctp_route_remove(mdev, addr, 0, RTN_LOCAL); 992 991 } 993 992 994 993 /* removes all entries for a given device */ ··· 1196 1195 if (rtm->rtm_type != RTN_UNICAST) 1197 1196 return -EINVAL; 1198 1197 1199 - rc = mctp_route_remove(mdev, daddr_start, rtm->rtm_dst_len); 1198 + rc = mctp_route_remove(mdev, daddr_start, rtm->rtm_dst_len, RTN_UNICAST); 1200 1199 return rc; 1201 1200 } 1202 1201
+1 -1
net/mctp/test/utils.c
··· 12 12 static netdev_tx_t mctp_test_dev_tx(struct sk_buff *skb, 13 13 struct net_device *ndev) 14 14 { 15 - kfree(skb); 15 + kfree_skb(skb); 16 16 return NETDEV_TX_OK; 17 17 } 18 18
+62 -35
net/mpls/af_mpls.c
··· 409 409 goto err; 410 410 411 411 /* Find the output device */ 412 - out_dev = rcu_dereference(nh->nh_dev); 412 + out_dev = nh->nh_dev; 413 413 if (!mpls_output_possible(out_dev)) 414 414 goto tx_err; 415 415 ··· 698 698 (dev->addr_len != nh->nh_via_alen)) 699 699 goto errout; 700 700 701 - RCU_INIT_POINTER(nh->nh_dev, dev); 701 + nh->nh_dev = dev; 702 702 703 703 if (!(dev->flags & IFF_UP)) { 704 704 nh->nh_flags |= RTNH_F_DEAD; ··· 1491 1491 kfree(mdev); 1492 1492 } 1493 1493 1494 - static void mpls_ifdown(struct net_device *dev, int event) 1494 + static int mpls_ifdown(struct net_device *dev, int event) 1495 1495 { 1496 1496 struct mpls_route __rcu **platform_label; 1497 1497 struct net *net = dev_net(dev); 1498 - u8 alive, deleted; 1499 1498 unsigned index; 1500 1499 1501 1500 platform_label = rtnl_dereference(net->mpls.platform_label); 1502 1501 for (index = 0; index < net->mpls.platform_labels; index++) { 1503 1502 struct mpls_route *rt = rtnl_dereference(platform_label[index]); 1503 + bool nh_del = false; 1504 + u8 alive = 0; 1504 1505 1505 1506 if (!rt) 1506 1507 continue; 1507 1508 1508 - alive = 0; 1509 - deleted = 0; 1509 + if (event == NETDEV_UNREGISTER) { 1510 + u8 deleted = 0; 1511 + 1512 + for_nexthops(rt) { 1513 + if (!nh->nh_dev || nh->nh_dev == dev) 1514 + deleted++; 1515 + if (nh->nh_dev == dev) 1516 + nh_del = true; 1517 + } endfor_nexthops(rt); 1518 + 1519 + /* if there are no more nexthops, delete the route */ 1520 + if (deleted == rt->rt_nhn) { 1521 + mpls_route_update(net, index, NULL, NULL); 1522 + continue; 1523 + } 1524 + 1525 + if (nh_del) { 1526 + size_t size = sizeof(*rt) + rt->rt_nhn * 1527 + rt->rt_nh_size; 1528 + struct mpls_route *orig = rt; 1529 + 1530 + rt = kmalloc(size, GFP_KERNEL); 1531 + if (!rt) 1532 + return -ENOMEM; 1533 + memcpy(rt, orig, size); 1534 + } 1535 + } 1536 + 1510 1537 change_nexthops(rt) { 1511 1538 unsigned int nh_flags = nh->nh_flags; 1512 1539 1513 - if (rtnl_dereference(nh->nh_dev) != dev) 1540 + if 
(nh->nh_dev != dev) 1514 1541 goto next; 1515 1542 1516 1543 switch (event) { ··· 1550 1523 break; 1551 1524 } 1552 1525 if (event == NETDEV_UNREGISTER) 1553 - RCU_INIT_POINTER(nh->nh_dev, NULL); 1526 + nh->nh_dev = NULL; 1554 1527 1555 1528 if (nh->nh_flags != nh_flags) 1556 1529 WRITE_ONCE(nh->nh_flags, nh_flags); 1557 1530 next: 1558 1531 if (!(nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))) 1559 1532 alive++; 1560 - if (!rtnl_dereference(nh->nh_dev)) 1561 - deleted++; 1562 1533 } endfor_nexthops(rt); 1563 1534 1564 1535 WRITE_ONCE(rt->rt_nhn_alive, alive); 1565 1536 1566 - /* if there are no more nexthops, delete the route */ 1567 - if (event == NETDEV_UNREGISTER && deleted == rt->rt_nhn) 1568 - mpls_route_update(net, index, NULL, NULL); 1537 + if (nh_del) 1538 + mpls_route_update(net, index, rt, NULL); 1569 1539 } 1540 + 1541 + return 0; 1570 1542 } 1571 1543 1572 1544 static void mpls_ifup(struct net_device *dev, unsigned int flags) ··· 1585 1559 alive = 0; 1586 1560 change_nexthops(rt) { 1587 1561 unsigned int nh_flags = nh->nh_flags; 1588 - struct net_device *nh_dev = 1589 - rtnl_dereference(nh->nh_dev); 1590 1562 1591 1563 if (!(nh_flags & flags)) { 1592 1564 alive++; 1593 1565 continue; 1594 1566 } 1595 - if (nh_dev != dev) 1567 + if (nh->nh_dev != dev) 1596 1568 continue; 1597 1569 alive++; 1598 1570 nh_flags &= ~flags; ··· 1621 1597 return NOTIFY_OK; 1622 1598 1623 1599 switch (event) { 1600 + int err; 1601 + 1624 1602 case NETDEV_DOWN: 1625 - mpls_ifdown(dev, event); 1603 + err = mpls_ifdown(dev, event); 1604 + if (err) 1605 + return notifier_from_errno(err); 1626 1606 break; 1627 1607 case NETDEV_UP: 1628 1608 flags = dev_get_flags(dev); ··· 1637 1609 break; 1638 1610 case NETDEV_CHANGE: 1639 1611 flags = dev_get_flags(dev); 1640 - if (flags & (IFF_RUNNING | IFF_LOWER_UP)) 1612 + if (flags & (IFF_RUNNING | IFF_LOWER_UP)) { 1641 1613 mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN); 1642 - else 1643 - mpls_ifdown(dev, event); 1614 + } else { 1615 + err = 
mpls_ifdown(dev, event); 1616 + if (err) 1617 + return notifier_from_errno(err); 1618 + } 1644 1619 break; 1645 1620 case NETDEV_UNREGISTER: 1646 - mpls_ifdown(dev, event); 1621 + err = mpls_ifdown(dev, event); 1622 + if (err) 1623 + return notifier_from_errno(err); 1647 1624 mdev = mpls_dev_get(dev); 1648 1625 if (mdev) { 1649 1626 mpls_dev_sysctl_unregister(dev, mdev); ··· 1659 1626 case NETDEV_CHANGENAME: 1660 1627 mdev = mpls_dev_get(dev); 1661 1628 if (mdev) { 1662 - int err; 1663 - 1664 1629 mpls_dev_sysctl_unregister(dev, mdev); 1665 1630 err = mpls_dev_sysctl_register(dev, mdev); 1666 1631 if (err) ··· 2025 1994 nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh), 2026 1995 nh->nh_via_alen)) 2027 1996 goto nla_put_failure; 2028 - dev = rtnl_dereference(nh->nh_dev); 1997 + dev = nh->nh_dev; 2029 1998 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex)) 2030 1999 goto nla_put_failure; 2031 2000 if (nh->nh_flags & RTNH_F_LINKDOWN) ··· 2043 2012 goto nla_put_failure; 2044 2013 2045 2014 for_nexthops(rt) { 2046 - dev = rtnl_dereference(nh->nh_dev); 2015 + dev = nh->nh_dev; 2047 2016 if (!dev) 2048 2017 continue; 2049 2018 ··· 2154 2123 static bool mpls_rt_uses_dev(struct mpls_route *rt, 2155 2124 const struct net_device *dev) 2156 2125 { 2157 - struct net_device *nh_dev; 2158 - 2159 2126 if (rt->rt_nhn == 1) { 2160 2127 struct mpls_nh *nh = rt->rt_nh; 2161 2128 2162 - nh_dev = rtnl_dereference(nh->nh_dev); 2163 - if (dev == nh_dev) 2129 + if (nh->nh_dev == dev) 2164 2130 return true; 2165 2131 } else { 2166 2132 for_nexthops(rt) { 2167 - nh_dev = rtnl_dereference(nh->nh_dev); 2168 - if (nh_dev == dev) 2133 + if (nh->nh_dev == dev) 2169 2134 return true; 2170 2135 } endfor_nexthops(rt); 2171 2136 } ··· 2249 2222 size_t nhsize = 0; 2250 2223 2251 2224 for_nexthops(rt) { 2252 - if (!rtnl_dereference(nh->nh_dev)) 2225 + if (!nh->nh_dev) 2253 2226 continue; 2254 2227 nhsize += nla_total_size(sizeof(struct rtnexthop)); 2255 2228 /* RTA_VIA */ ··· 2495 2468 
nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh), 2496 2469 nh->nh_via_alen)) 2497 2470 goto nla_put_failure; 2498 - dev = rtnl_dereference(nh->nh_dev); 2471 + dev = nh->nh_dev; 2499 2472 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex)) 2500 2473 goto nla_put_failure; 2501 2474 ··· 2534 2507 rt0 = mpls_rt_alloc(1, lo->addr_len, 0); 2535 2508 if (IS_ERR(rt0)) 2536 2509 goto nort0; 2537 - RCU_INIT_POINTER(rt0->rt_nh->nh_dev, lo); 2510 + rt0->rt_nh->nh_dev = lo; 2538 2511 rt0->rt_protocol = RTPROT_KERNEL; 2539 2512 rt0->rt_payload_type = MPT_IPV4; 2540 2513 rt0->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT; ··· 2548 2521 rt2 = mpls_rt_alloc(1, lo->addr_len, 0); 2549 2522 if (IS_ERR(rt2)) 2550 2523 goto nort2; 2551 - RCU_INIT_POINTER(rt2->rt_nh->nh_dev, lo); 2524 + rt2->rt_nh->nh_dev = lo; 2552 2525 rt2->rt_protocol = RTPROT_KERNEL; 2553 2526 rt2->rt_payload_type = MPT_IPV6; 2554 2527 rt2->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;
+1 -1
net/mpls/internal.h
··· 87 87 }; 88 88 89 89 struct mpls_nh { /* next hop label forwarding entry */ 90 - struct net_device __rcu *nh_dev; 90 + struct net_device *nh_dev; 91 91 92 92 /* nh_flags is accessed under RCU in the packet path; it is 93 93 * modified handling netdev events with rtnl lock held
+5
net/netlink/af_netlink.c
··· 1852 1852 if (msg->msg_flags & MSG_OOB) 1853 1853 return -EOPNOTSUPP; 1854 1854 1855 + if (len == 0) { 1856 + pr_warn_once("Zero length message leads to an empty skb\n"); 1857 + return -ENODATA; 1858 + } 1859 + 1855 1860 err = scm_send(sock, msg, &scm, true); 1856 1861 if (err < 0) 1857 1862 return err;
+1 -1
net/rds/tcp.c
··· 500 500 sk->sk_userlocks |= SOCK_SNDBUF_LOCK; 501 501 } 502 502 if (rtn->rcvbuf_size > 0) { 503 - sk->sk_sndbuf = rtn->rcvbuf_size; 503 + sk->sk_rcvbuf = rtn->rcvbuf_size; 504 504 sk->sk_userlocks |= SOCK_RCVBUF_LOCK; 505 505 } 506 506 release_sock(sk);
+9 -5
net/rxrpc/conn_client.c
··· 135 135 return bundle; 136 136 } 137 137 138 + static void rxrpc_free_bundle(struct rxrpc_bundle *bundle) 139 + { 140 + rxrpc_put_peer(bundle->params.peer); 141 + kfree(bundle); 142 + } 143 + 138 144 void rxrpc_put_bundle(struct rxrpc_bundle *bundle) 139 145 { 140 146 unsigned int d = bundle->debug_id; 141 147 unsigned int u = atomic_dec_return(&bundle->usage); 142 148 143 149 _debug("PUT B=%x %u", d, u); 144 - if (u == 0) { 145 - rxrpc_put_peer(bundle->params.peer); 146 - kfree(bundle); 147 - } 150 + if (u == 0) 151 + rxrpc_free_bundle(bundle); 148 152 } 149 153 150 154 /* ··· 332 328 return candidate; 333 329 334 330 found_bundle_free: 335 - kfree(candidate); 331 + rxrpc_free_bundle(candidate); 336 332 found_bundle: 337 333 rxrpc_get_bundle(bundle); 338 334 spin_unlock(&local->client_bundles_lock);
+9 -5
net/rxrpc/peer_object.c
··· 299 299 return peer; 300 300 } 301 301 302 + static void rxrpc_free_peer(struct rxrpc_peer *peer) 303 + { 304 + rxrpc_put_local(peer->local); 305 + kfree_rcu(peer, rcu); 306 + } 307 + 302 308 /* 303 309 * Set up a new incoming peer. There shouldn't be any other matching peers 304 310 * since we've already done a search in the list from the non-reentrant context ··· 371 365 spin_unlock_bh(&rxnet->peer_hash_lock); 372 366 373 367 if (peer) 374 - kfree(candidate); 368 + rxrpc_free_peer(candidate); 375 369 else 376 370 peer = candidate; 377 371 } ··· 426 420 list_del_init(&peer->keepalive_link); 427 421 spin_unlock_bh(&rxnet->peer_hash_lock); 428 422 429 - rxrpc_put_local(peer->local); 430 - kfree_rcu(peer, rcu); 423 + rxrpc_free_peer(peer); 431 424 } 432 425 433 426 /* ··· 462 457 if (n == 0) { 463 458 hash_del_rcu(&peer->hash_link); 464 459 list_del_init(&peer->keepalive_link); 465 - rxrpc_put_local(peer->local); 466 - kfree_rcu(peer, rcu); 460 + rxrpc_free_peer(peer); 467 461 } 468 462 } 469 463
+6 -2
net/smc/smc_close.c
··· 195 195 int old_state; 196 196 long timeout; 197 197 int rc = 0; 198 + int rc1 = 0; 198 199 199 200 timeout = current->flags & PF_EXITING ? 200 201 0 : sock_flag(sk, SOCK_LINGER) ? ··· 233 232 /* actively shutdown clcsock before peer close it, 234 233 * prevent peer from entering TIME_WAIT state. 235 234 */ 236 - if (smc->clcsock && smc->clcsock->sk) 237 - rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR); 235 + if (smc->clcsock && smc->clcsock->sk) { 236 + rc1 = kernel_sock_shutdown(smc->clcsock, 237 + SHUT_RDWR); 238 + rc = rc ? rc : rc1; 239 + } 238 240 } else { 239 241 /* peer event has changed the state */ 240 242 goto again;
+3 -4
net/smc/smc_core.c
··· 625 625 void smc_lgr_cleanup_early(struct smc_connection *conn) 626 626 { 627 627 struct smc_link_group *lgr = conn->lgr; 628 - struct list_head *lgr_list; 629 628 spinlock_t *lgr_lock; 630 629 631 630 if (!lgr) 632 631 return; 633 632 634 633 smc_conn_free(conn); 635 - lgr_list = smc_lgr_list_head(lgr, &lgr_lock); 634 + smc_lgr_list_head(lgr, &lgr_lock); 636 635 spin_lock_bh(lgr_lock); 637 636 /* do not use this link group for new connections */ 638 - if (!list_empty(lgr_list)) 639 - list_del_init(lgr_list); 637 + if (!list_empty(&lgr->list)) 638 + list_del_init(&lgr->list); 640 639 spin_unlock_bh(lgr_lock); 641 640 __smc_lgr_terminate(lgr, true); 642 641 }
+2 -2
net/tls/tls_sw.c
··· 521 521 memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv, 522 522 prot->iv_size + prot->salt_size); 523 523 524 - xor_iv_with_seq(prot, rec->iv_data, tls_ctx->tx.rec_seq); 524 + xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq); 525 525 526 526 sge->offset += prot->prepend_size; 527 527 sge->length -= prot->prepend_size; ··· 1499 1499 else 1500 1500 memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size); 1501 1501 1502 - xor_iv_with_seq(prot, iv, tls_ctx->rx.rec_seq); 1502 + xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq); 1503 1503 1504 1504 /* Prepare AAD */ 1505 1505 tls_make_aad(aad, rxm->full_len - prot->overhead_size +
+10
sound/hda/intel-dsp-config.c
··· 252 252 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 253 253 .device = 0x02c8, 254 254 }, 255 + { 256 + .flags = FLAG_SOF, 257 + .device = 0x02c8, 258 + .codec_hid = "ESSX8336", 259 + }, 255 260 /* Cometlake-H */ 256 261 { 257 262 .flags = FLAG_SOF, ··· 280 275 { 281 276 .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE, 282 277 .device = 0x06c8, 278 + }, 279 + { 280 + .flags = FLAG_SOF, 281 + .device = 0x06c8, 282 + .codec_hid = "ESSX8336", 283 283 }, 284 284 #endif 285 285
+11 -1
sound/pci/hda/hda_intel.c
··· 335 335 ((pci)->device == 0x0c0c) || \ 336 336 ((pci)->device == 0x0d0c) || \ 337 337 ((pci)->device == 0x160c) || \ 338 - ((pci)->device == 0x490d)) 338 + ((pci)->device == 0x490d) || \ 339 + ((pci)->device == 0x4f90) || \ 340 + ((pci)->device == 0x4f91) || \ 341 + ((pci)->device == 0x4f92)) 339 342 340 343 #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98) 341 344 ··· 2475 2472 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2476 2473 /* DG1 */ 2477 2474 { PCI_DEVICE(0x8086, 0x490d), 2475 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2476 + /* DG2 */ 2477 + { PCI_DEVICE(0x8086, 0x4f90), 2478 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2479 + { PCI_DEVICE(0x8086, 0x4f91), 2480 + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2481 + { PCI_DEVICE(0x8086, 0x4f92), 2478 2482 .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, 2479 2483 /* Alderlake-S */ 2480 2484 { PCI_DEVICE(0x8086, 0x7ad0),
+9
sound/pci/hda/hda_local.h
··· 438 438 #define for_each_hda_codec_node(nid, codec) \ 439 439 for ((nid) = (codec)->core.start_nid; (nid) < (codec)->core.end_nid; (nid)++) 440 440 441 + /* Set the codec power_state flag to indicate to allow unsol event handling; 442 + * see hda_codec_unsol_event() in hda_bind.c. Calling this might confuse the 443 + * state tracking, so use with care. 444 + */ 445 + static inline void snd_hda_codec_allow_unsol_events(struct hda_codec *codec) 446 + { 447 + codec->core.dev.power.power_state = PMSG_ON; 448 + } 449 + 441 450 /* 442 451 * get widget capabilities 443 452 */
+5
sound/pci/hda/patch_cs8409.c
··· 750 750 if (cs42l42->full_scale_vol) 751 751 cs8409_i2c_write(cs42l42, 0x2001, 0x01); 752 752 753 + /* we have to explicitly allow unsol event handling even during the 754 + * resume phase so that the jack event is processed properly 755 + */ 756 + snd_hda_codec_allow_unsol_events(cs42l42->codec); 757 + 753 758 cs42l42_enable_jack_detect(cs42l42); 754 759 } 755 760
+2 -1
sound/pci/hda/patch_hdmi.c
··· 4380 4380 HDA_CODEC_ENTRY(0x80862812, "Tigerlake HDMI", patch_i915_tgl_hdmi), 4381 4381 HDA_CODEC_ENTRY(0x80862814, "DG1 HDMI", patch_i915_tgl_hdmi), 4382 4382 HDA_CODEC_ENTRY(0x80862815, "Alderlake HDMI", patch_i915_tgl_hdmi), 4383 - HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi), 4384 4383 HDA_CODEC_ENTRY(0x80862816, "Rocketlake HDMI", patch_i915_tgl_hdmi), 4384 + HDA_CODEC_ENTRY(0x80862819, "DG2 HDMI", patch_i915_tgl_hdmi), 4385 4385 HDA_CODEC_ENTRY(0x8086281a, "Jasperlake HDMI", patch_i915_icl_hdmi), 4386 4386 HDA_CODEC_ENTRY(0x8086281b, "Elkhartlake HDMI", patch_i915_icl_hdmi), 4387 + HDA_CODEC_ENTRY(0x8086281c, "Alderlake-P HDMI", patch_i915_tgl_hdmi), 4387 4388 HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi), 4388 4389 HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI", patch_i915_byt_hdmi), 4389 4390 HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI", patch_i915_byt_hdmi),
+3 -29
sound/soc/codecs/cs35l41-spi.c
··· 42 42 43 43 MODULE_DEVICE_TABLE(spi, cs35l41_id_spi); 44 44 45 - static void cs35l41_spi_otp_setup(struct cs35l41_private *cs35l41, 46 - bool is_pre_setup, unsigned int *freq) 47 - { 48 - struct spi_device *spi; 49 - u32 orig_spi_freq; 50 - 51 - spi = to_spi_device(cs35l41->dev); 52 - 53 - if (!spi) { 54 - dev_err(cs35l41->dev, "%s: No SPI device\n", __func__); 55 - return; 56 - } 57 - 58 - if (is_pre_setup) { 59 - orig_spi_freq = spi->max_speed_hz; 60 - if (orig_spi_freq > CS35L41_SPI_MAX_FREQ_OTP) { 61 - spi->max_speed_hz = CS35L41_SPI_MAX_FREQ_OTP; 62 - spi_setup(spi); 63 - } 64 - *freq = orig_spi_freq; 65 - } else { 66 - if (spi->max_speed_hz != *freq) { 67 - spi->max_speed_hz = *freq; 68 - spi_setup(spi); 69 - } 70 - } 71 - } 72 - 73 45 static int cs35l41_spi_probe(struct spi_device *spi) 74 46 { 75 47 const struct regmap_config *regmap_config = &cs35l41_regmap_spi; ··· 53 81 if (!cs35l41) 54 82 return -ENOMEM; 55 83 84 + spi->max_speed_hz = CS35L41_SPI_MAX_FREQ; 85 + spi_setup(spi); 86 + 56 87 spi_set_drvdata(spi, cs35l41); 57 88 cs35l41->regmap = devm_regmap_init_spi(spi, regmap_config); 58 89 if (IS_ERR(cs35l41->regmap)) { ··· 66 91 67 92 cs35l41->dev = &spi->dev; 68 93 cs35l41->irq = spi->irq; 69 - cs35l41->otp_setup = cs35l41_spi_otp_setup; 70 94 71 95 return cs35l41_probe(cs35l41, pdata); 72 96 }
-7
sound/soc/codecs/cs35l41.c
··· 302 302 const struct cs35l41_otp_packed_element_t *otp_map; 303 303 struct cs35l41_private *cs35l41 = data; 304 304 int bit_offset, word_offset, ret, i; 305 - unsigned int orig_spi_freq; 306 305 unsigned int bit_sum = 8; 307 306 u32 otp_val, otp_id_reg; 308 307 u32 *otp_mem; ··· 325 326 goto err_otp_unpack; 326 327 } 327 328 328 - if (cs35l41->otp_setup) 329 - cs35l41->otp_setup(cs35l41, true, &orig_spi_freq); 330 - 331 329 ret = regmap_bulk_read(cs35l41->regmap, CS35L41_OTP_MEM0, otp_mem, 332 330 CS35L41_OTP_SIZE_WORDS); 333 331 if (ret < 0) { 334 332 dev_err(cs35l41->dev, "Read OTP Mem failed: %d\n", ret); 335 333 goto err_otp_unpack; 336 334 } 337 - 338 - if (cs35l41->otp_setup) 339 - cs35l41->otp_setup(cs35l41, false, &orig_spi_freq); 340 335 341 336 otp_map = otp_map_match->map; 342 337
+1 -3
sound/soc/codecs/cs35l41.h
··· 726 726 #define CS35L41_FS2_WINDOW_MASK 0x00FFF800 727 727 #define CS35L41_FS2_WINDOW_SHIFT 12 728 728 729 - #define CS35L41_SPI_MAX_FREQ_OTP 4000000 729 + #define CS35L41_SPI_MAX_FREQ 4000000 730 730 731 731 #define CS35L41_RX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) 732 732 #define CS35L41_TX_FORMATS (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S24_LE) ··· 764 764 int irq; 765 765 /* GPIO for /RST */ 766 766 struct gpio_desc *reset_gpio; 767 - void (*otp_setup)(struct cs35l41_private *cs35l41, bool is_pre_setup, 768 - unsigned int *freq); 769 767 }; 770 768 771 769 int cs35l41_probe(struct cs35l41_private *cs35l41,
+1
sound/soc/codecs/rk817_codec.c
··· 539 539 MODULE_DESCRIPTION("ASoC RK817 codec driver"); 540 540 MODULE_AUTHOR("binyuan <kevan.lan@rock-chips.com>"); 541 541 MODULE_LICENSE("GPL v2"); 542 + MODULE_ALIAS("platform:rk817-codec");
+6
sound/soc/intel/common/soc-acpi-intel-cml-match.c
··· 81 81 .sof_fw_filename = "sof-cml.ri", 82 82 .sof_tplg_filename = "sof-cml-da7219-max98390.tplg", 83 83 }, 84 + { 85 + .id = "ESSX8336", 86 + .drv_name = "sof-essx8336", 87 + .sof_fw_filename = "sof-cml.ri", 88 + .sof_tplg_filename = "sof-cml-es8336.tplg", 89 + }, 84 90 {}, 85 91 }; 86 92 EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_cml_machines);
+3 -1
sound/soc/soc-acpi.c
··· 20 20 21 21 if (comp_ids) { 22 22 for (i = 0; i < comp_ids->num_codecs; i++) { 23 - if (acpi_dev_present(comp_ids->codecs[i], NULL, -1)) 23 + if (acpi_dev_present(comp_ids->codecs[i], NULL, -1)) { 24 + strscpy(machine->id, comp_ids->codecs[i], ACPI_ID_LEN); 24 25 return true; 26 + } 25 27 } 26 28 } 27 29
+7
sound/soc/sof/intel/hda.c
··· 58 58 return -EINVAL; 59 59 } 60 60 61 + /* DAI already configured, reset it before reconfiguring it */ 62 + if (sof_dai->configured) { 63 + ret = hda_ctrl_dai_widget_free(w); 64 + if (ret < 0) 65 + return ret; 66 + } 67 + 61 68 config = &sof_dai->dai_config[sof_dai->current_config]; 62 69 63 70 /*
+146 -33
sound/soc/tegra/tegra186_dspk.c
··· 26 26 { TEGRA186_DSPK_CODEC_CTRL, 0x03000000 }, 27 27 }; 28 28 29 - static int tegra186_dspk_get_control(struct snd_kcontrol *kcontrol, 29 + static int tegra186_dspk_get_fifo_th(struct snd_kcontrol *kcontrol, 30 30 struct snd_ctl_elem_value *ucontrol) 31 31 { 32 32 struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 33 33 struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 34 34 35 - if (strstr(kcontrol->id.name, "FIFO Threshold")) 36 - ucontrol->value.integer.value[0] = dspk->rx_fifo_th; 37 - else if (strstr(kcontrol->id.name, "OSR Value")) 38 - ucontrol->value.integer.value[0] = dspk->osr_val; 39 - else if (strstr(kcontrol->id.name, "LR Polarity Select")) 40 - ucontrol->value.integer.value[0] = dspk->lrsel; 41 - else if (strstr(kcontrol->id.name, "Channel Select")) 42 - ucontrol->value.integer.value[0] = dspk->ch_sel; 43 - else if (strstr(kcontrol->id.name, "Mono To Stereo")) 44 - ucontrol->value.integer.value[0] = dspk->mono_to_stereo; 45 - else if (strstr(kcontrol->id.name, "Stereo To Mono")) 46 - ucontrol->value.integer.value[0] = dspk->stereo_to_mono; 35 + ucontrol->value.integer.value[0] = dspk->rx_fifo_th; 47 36 48 37 return 0; 49 38 } 50 39 51 - static int tegra186_dspk_put_control(struct snd_kcontrol *kcontrol, 40 + static int tegra186_dspk_put_fifo_th(struct snd_kcontrol *kcontrol, 52 41 struct snd_ctl_elem_value *ucontrol) 53 42 { 54 43 struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 55 44 struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 56 - int val = ucontrol->value.integer.value[0]; 45 + int value = ucontrol->value.integer.value[0]; 57 46 58 - if (strstr(kcontrol->id.name, "FIFO Threshold")) 59 - dspk->rx_fifo_th = val; 60 - else if (strstr(kcontrol->id.name, "OSR Value")) 61 - dspk->osr_val = val; 62 - else if (strstr(kcontrol->id.name, "LR Polarity Select")) 63 - dspk->lrsel = val; 64 - else if (strstr(kcontrol->id.name, "Channel Select")) 65 - dspk->ch_sel = val; 66 - 
else if (strstr(kcontrol->id.name, "Mono To Stereo")) 67 - dspk->mono_to_stereo = val; 68 - else if (strstr(kcontrol->id.name, "Stereo To Mono")) 69 - dspk->stereo_to_mono = val; 47 + if (value == dspk->rx_fifo_th) 48 + return 0; 49 + 50 + dspk->rx_fifo_th = value; 51 + 52 + return 1; 53 + } 54 + 55 + static int tegra186_dspk_get_osr_val(struct snd_kcontrol *kcontrol, 56 + struct snd_ctl_elem_value *ucontrol) 57 + { 58 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 59 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 60 + 61 + ucontrol->value.enumerated.item[0] = dspk->osr_val; 70 62 71 63 return 0; 64 + } 65 + 66 + static int tegra186_dspk_put_osr_val(struct snd_kcontrol *kcontrol, 67 + struct snd_ctl_elem_value *ucontrol) 68 + { 69 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 70 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 71 + unsigned int value = ucontrol->value.enumerated.item[0]; 72 + 73 + if (value == dspk->osr_val) 74 + return 0; 75 + 76 + dspk->osr_val = value; 77 + 78 + return 1; 79 + } 80 + 81 + static int tegra186_dspk_get_pol_sel(struct snd_kcontrol *kcontrol, 82 + struct snd_ctl_elem_value *ucontrol) 83 + { 84 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 85 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 86 + 87 + ucontrol->value.enumerated.item[0] = dspk->lrsel; 88 + 89 + return 0; 90 + } 91 + 92 + static int tegra186_dspk_put_pol_sel(struct snd_kcontrol *kcontrol, 93 + struct snd_ctl_elem_value *ucontrol) 94 + { 95 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 96 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 97 + unsigned int value = ucontrol->value.enumerated.item[0]; 98 + 99 + if (value == dspk->lrsel) 100 + return 0; 101 + 102 + dspk->lrsel = value; 103 + 104 + return 1; 105 + } 106 + 107 + static int tegra186_dspk_get_ch_sel(struct snd_kcontrol 
*kcontrol, 108 + struct snd_ctl_elem_value *ucontrol) 109 + { 110 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 111 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 112 + 113 + ucontrol->value.enumerated.item[0] = dspk->ch_sel; 114 + 115 + return 0; 116 + } 117 + 118 + static int tegra186_dspk_put_ch_sel(struct snd_kcontrol *kcontrol, 119 + struct snd_ctl_elem_value *ucontrol) 120 + { 121 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 122 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 123 + unsigned int value = ucontrol->value.enumerated.item[0]; 124 + 125 + if (value == dspk->ch_sel) 126 + return 0; 127 + 128 + dspk->ch_sel = value; 129 + 130 + return 1; 131 + } 132 + 133 + static int tegra186_dspk_get_mono_to_stereo(struct snd_kcontrol *kcontrol, 134 + struct snd_ctl_elem_value *ucontrol) 135 + { 136 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 137 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 138 + 139 + ucontrol->value.enumerated.item[0] = dspk->mono_to_stereo; 140 + 141 + return 0; 142 + } 143 + 144 + static int tegra186_dspk_put_mono_to_stereo(struct snd_kcontrol *kcontrol, 145 + struct snd_ctl_elem_value *ucontrol) 146 + { 147 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 148 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 149 + unsigned int value = ucontrol->value.enumerated.item[0]; 150 + 151 + if (value == dspk->mono_to_stereo) 152 + return 0; 153 + 154 + dspk->mono_to_stereo = value; 155 + 156 + return 1; 157 + } 158 + 159 + static int tegra186_dspk_get_stereo_to_mono(struct snd_kcontrol *kcontrol, 160 + struct snd_ctl_elem_value *ucontrol) 161 + { 162 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 163 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 164 + 165 + ucontrol->value.enumerated.item[0] = 
dspk->stereo_to_mono; 166 + 167 + return 0; 168 + } 169 + 170 + static int tegra186_dspk_put_stereo_to_mono(struct snd_kcontrol *kcontrol, 171 + struct snd_ctl_elem_value *ucontrol) 172 + { 173 + struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol); 174 + struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec); 175 + unsigned int value = ucontrol->value.enumerated.item[0]; 176 + 177 + if (value == dspk->stereo_to_mono) 178 + return 0; 179 + 180 + dspk->stereo_to_mono = value; 181 + 182 + return 1; 72 183 } 73 184 74 185 static int __maybe_unused tegra186_dspk_runtime_suspend(struct device *dev) ··· 390 279 static const struct snd_kcontrol_new tegrat186_dspk_controls[] = { 391 280 SOC_SINGLE_EXT("FIFO Threshold", SND_SOC_NOPM, 0, 392 281 TEGRA186_DSPK_RX_FIFO_DEPTH - 1, 0, 393 - tegra186_dspk_get_control, tegra186_dspk_put_control), 282 + tegra186_dspk_get_fifo_th, tegra186_dspk_put_fifo_th), 394 283 SOC_ENUM_EXT("OSR Value", tegra186_dspk_osr_enum, 395 - tegra186_dspk_get_control, tegra186_dspk_put_control), 284 + tegra186_dspk_get_osr_val, tegra186_dspk_put_osr_val), 396 285 SOC_ENUM_EXT("LR Polarity Select", tegra186_dspk_lrsel_enum, 397 - tegra186_dspk_get_control, tegra186_dspk_put_control), 286 + tegra186_dspk_get_pol_sel, tegra186_dspk_put_pol_sel), 398 287 SOC_ENUM_EXT("Channel Select", tegra186_dspk_ch_sel_enum, 399 - tegra186_dspk_get_control, tegra186_dspk_put_control), 288 + tegra186_dspk_get_ch_sel, tegra186_dspk_put_ch_sel), 400 289 SOC_ENUM_EXT("Mono To Stereo", tegra186_dspk_mono_conv_enum, 401 - tegra186_dspk_get_control, tegra186_dspk_put_control), 290 + tegra186_dspk_get_mono_to_stereo, 291 + tegra186_dspk_put_mono_to_stereo), 402 292 SOC_ENUM_EXT("Stereo To Mono", tegra186_dspk_stereo_conv_enum, 403 - tegra186_dspk_get_control, tegra186_dspk_put_control), 293 + tegra186_dspk_get_stereo_to_mono, 294 + tegra186_dspk_put_stereo_to_mono), 404 295 }; 405 296 406 297 static const struct snd_soc_component_driver 
tegra186_dspk_cmpnt = {
+112 -32
sound/soc/tegra/tegra210_admaif.c
··· 424 424 .trigger = tegra_admaif_trigger,
425 425 };
426 426
427 - static int tegra_admaif_get_control(struct snd_kcontrol *kcontrol,
428 - struct snd_ctl_elem_value *ucontrol)
427 + static int tegra210_admaif_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,
428 + struct snd_ctl_elem_value *ucontrol)
429 429 {
430 430 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
431 - struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
432 431 struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
433 - long *uctl_val = &ucontrol->value.integer.value[0];
432 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
434 433
435 - if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
436 - *uctl_val = admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg];
437 - else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
438 - *uctl_val = admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg];
439 - else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
440 - *uctl_val = admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg];
441 - else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
442 - *uctl_val = admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg];
434 + ucontrol->value.enumerated.item[0] =
435 + admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg];
443 436
444 437 return 0;
445 438 }
446 439
447 - static int tegra_admaif_put_control(struct snd_kcontrol *kcontrol,
448 - struct snd_ctl_elem_value *ucontrol)
440 + static int tegra210_admaif_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,
441 + struct snd_ctl_elem_value *ucontrol)
449 442 {
450 443 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
451 - struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
452 444 struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
453 - int value = ucontrol->value.integer.value[0];
445 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
446 + unsigned int value = ucontrol->value.enumerated.item[0];
454 447
455 - if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
456 - admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg] = value;
457 - else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
458 - admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg] = value;
459 - else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
460 - admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg] = value;
461 - else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
462 - admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg] = value;
448 + if (value == admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg])
449 + return 0;
450 +
451 + admaif->mono_to_stereo[ADMAIF_TX_PATH][ec->reg] = value;
452 +
453 + return 1;
454 + }
455 +
456 + static int tegra210_admaif_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,
457 + struct snd_ctl_elem_value *ucontrol)
458 + {
459 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
460 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
461 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
462 +
463 + ucontrol->value.enumerated.item[0] =
464 + admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg];
463 465
464 466 return 0;
467 + }
468 +
469 + static int tegra210_admaif_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,
470 + struct snd_ctl_elem_value *ucontrol)
471 + {
472 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
473 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
474 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
475 + unsigned int value = ucontrol->value.enumerated.item[0];
476 +
477 + if (value == admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg])
478 + return 0;
479 +
480 + admaif->mono_to_stereo[ADMAIF_RX_PATH][ec->reg] = value;
481 +
482 + return 1;
483 + }
484 +
485 + static int tegra210_admaif_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,
486 + struct snd_ctl_elem_value *ucontrol)
487 + {
488 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
489 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
490 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
491 +
492 + ucontrol->value.enumerated.item[0] =
493 + admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg];
494 +
495 + return 0;
496 + }
497 +
498 + static int tegra210_admaif_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,
499 + struct snd_ctl_elem_value *ucontrol)
500 + {
501 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
502 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
503 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
504 + unsigned int value = ucontrol->value.enumerated.item[0];
505 +
506 + if (value == admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg])
507 + return 0;
508 +
509 + admaif->stereo_to_mono[ADMAIF_TX_PATH][ec->reg] = value;
510 +
511 + return 1;
512 + }
513 +
514 + static int tegra210_admaif_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,
515 + struct snd_ctl_elem_value *ucontrol)
516 + {
517 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
518 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
519 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
520 +
521 + ucontrol->value.enumerated.item[0] =
522 + admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg];
523 +
524 + return 0;
525 + }
526 +
527 + static int tegra210_admaif_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,
528 + struct snd_ctl_elem_value *ucontrol)
529 + {
530 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
531 + struct tegra_admaif *admaif = snd_soc_component_get_drvdata(cmpnt);
532 + struct soc_enum *ec = (struct soc_enum *)kcontrol->private_value;
533 + unsigned int value = ucontrol->value.enumerated.item[0];
534 +
535 + if (value == admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg])
536 + return 0;
537 +
538 + admaif->stereo_to_mono[ADMAIF_RX_PATH][ec->reg] = value;
539 +
540 + return 1;
465 541 }
466 542
467 543 static int tegra_admaif_dai_probe(struct snd_soc_dai *dai)
··· 635 559 }
636 560
637 561 #define TEGRA_ADMAIF_CIF_CTRL(reg) \
638 - NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Mono To Stereo", reg - 1,\
639 - tegra_admaif_get_control, tegra_admaif_put_control, \
562 + NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Mono To Stereo", reg - 1, \
563 + tegra210_admaif_pget_mono_to_stereo, \
564 + tegra210_admaif_pput_mono_to_stereo, \
640 565 tegra_admaif_mono_conv_text), \
641 - NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Stereo To Mono", reg - 1,\
642 - tegra_admaif_get_control, tegra_admaif_put_control, \
566 + NV_SOC_ENUM_EXT("ADMAIF" #reg " Playback Stereo To Mono", reg - 1, \
567 + tegra210_admaif_pget_stereo_to_mono, \
568 + tegra210_admaif_pput_stereo_to_mono, \
643 569 tegra_admaif_stereo_conv_text), \
644 - NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Mono To Stereo", reg - 1, \
645 - tegra_admaif_get_control, tegra_admaif_put_control, \
570 + NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Mono To Stereo", reg - 1, \
571 + tegra210_admaif_cget_mono_to_stereo, \
572 + tegra210_admaif_cput_mono_to_stereo, \
646 573 tegra_admaif_mono_conv_text), \
647 - NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Stereo To Mono", reg - 1, \
648 - tegra_admaif_get_control, tegra_admaif_put_control, \
574 + NV_SOC_ENUM_EXT("ADMAIF" #reg " Capture Stereo To Mono", reg - 1, \
575 + tegra210_admaif_cget_stereo_to_mono, \
576 + tegra210_admaif_cput_stereo_to_mono, \
649 577 tegra_admaif_stereo_conv_text)
650 578
651 579 static struct snd_kcontrol_new tegra210_admaif_controls[] = {
+3
sound/soc/tegra/tegra210_adx.c
··· 193 193 struct soc_mixer_control *mc =
194 194 (struct soc_mixer_control *)kcontrol->private_value;;
195 195
196 + if (value == bytes_map[mc->reg])
197 + return 0;
198 +
196 199 if (value >= 0 && value <= 255) {
197 200 /* update byte map and enable slot */
198 201 bytes_map[mc->reg] = value;
+7 -4
sound/soc/tegra/tegra210_ahub.c
··· 62 62 unsigned int *item = uctl->value.enumerated.item;
63 63 unsigned int value = e->values[item[0]];
64 64 unsigned int i, bit_pos, reg_idx = 0, reg_val = 0;
65 + int change = 0;
65 66
66 67 if (item[0] >= e->items)
67 68 return -EINVAL;
··· 87 86
88 87 /* Update widget power if state has changed */
89 88 if (snd_soc_component_test_bits(cmpnt, update[i].reg,
90 - update[i].mask, update[i].val))
91 - snd_soc_dapm_mux_update_power(dapm, kctl, item[0], e,
92 - &update[i]);
89 + update[i].mask,
90 + update[i].val))
91 + change |= snd_soc_dapm_mux_update_power(dapm, kctl,
92 + item[0], e,
93 + &update[i]);
93 94 }
94 95
95 - return 0;
96 + return change;
96 97 }
97 98
98 99 static struct snd_soc_dai_driver tegra210_ahub_dais[] = {
+3
sound/soc/tegra/tegra210_amx.c
··· 222 222 int reg = mc->reg;
223 223 int value = ucontrol->value.integer.value[0];
224 224
225 + if (value == bytes_map[reg])
226 + return 0;
227 +
225 228 if (value >= 0 && value <= 255) {
226 229 /* Update byte map and enable slot */
227 230 bytes_map[reg] = value;
+150 -36
sound/soc/tegra/tegra210_dmic.c
··· 156 156 return 0;
157 157 }
158 158
159 - static int tegra210_dmic_get_control(struct snd_kcontrol *kcontrol,
160 - struct snd_ctl_elem_value *ucontrol)
159 + static int tegra210_dmic_get_boost_gain(struct snd_kcontrol *kcontrol,
160 + struct snd_ctl_elem_value *ucontrol)
161 161 {
162 162 struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
163 163 struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
164 164
165 - if (strstr(kcontrol->id.name, "Boost Gain Volume"))
166 - ucontrol->value.integer.value[0] = dmic->boost_gain;
167 - else if (strstr(kcontrol->id.name, "Channel Select"))
168 - ucontrol->value.integer.value[0] = dmic->ch_select;
169 - else if (strstr(kcontrol->id.name, "Mono To Stereo"))
170 - ucontrol->value.integer.value[0] = dmic->mono_to_stereo;
171 - else if (strstr(kcontrol->id.name, "Stereo To Mono"))
172 - ucontrol->value.integer.value[0] = dmic->stereo_to_mono;
173 - else if (strstr(kcontrol->id.name, "OSR Value"))
174 - ucontrol->value.integer.value[0] = dmic->osr_val;
175 - else if (strstr(kcontrol->id.name, "LR Polarity Select"))
176 - ucontrol->value.integer.value[0] = dmic->lrsel;
165 + ucontrol->value.integer.value[0] = dmic->boost_gain;
177 166
178 167 return 0;
179 168 }
180 169
181 - static int tegra210_dmic_put_control(struct snd_kcontrol *kcontrol,
182 - struct snd_ctl_elem_value *ucontrol)
170 + static int tegra210_dmic_put_boost_gain(struct snd_kcontrol *kcontrol,
171 + struct snd_ctl_elem_value *ucontrol)
183 172 {
184 173 struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
185 174 struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
186 175 int value = ucontrol->value.integer.value[0];
187 176
188 - if (strstr(kcontrol->id.name, "Boost Gain Volume"))
189 - dmic->boost_gain = value;
190 - else if (strstr(kcontrol->id.name, "Channel Select"))
191 - dmic->ch_select = ucontrol->value.integer.value[0];
192 - else if (strstr(kcontrol->id.name, "Mono To Stereo"))
193 - dmic->mono_to_stereo = value;
194 - else if (strstr(kcontrol->id.name, "Stereo To Mono"))
195 - dmic->stereo_to_mono = value;
196 - else if (strstr(kcontrol->id.name, "OSR Value"))
197 - dmic->osr_val = value;
198 - else if (strstr(kcontrol->id.name, "LR Polarity Select"))
199 - dmic->lrsel = value;
177 + if (value == dmic->boost_gain)
178 + return 0;
179 +
180 + dmic->boost_gain = value;
181 +
182 + return 1;
183 + }
184 +
185 + static int tegra210_dmic_get_ch_select(struct snd_kcontrol *kcontrol,
186 + struct snd_ctl_elem_value *ucontrol)
187 + {
188 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
189 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
190 +
191 + ucontrol->value.enumerated.item[0] = dmic->ch_select;
200 192
201 193 return 0;
194 + }
195 +
196 + static int tegra210_dmic_put_ch_select(struct snd_kcontrol *kcontrol,
197 + struct snd_ctl_elem_value *ucontrol)
198 + {
199 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
200 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
201 + unsigned int value = ucontrol->value.enumerated.item[0];
202 +
203 + if (value == dmic->ch_select)
204 + return 0;
205 +
206 + dmic->ch_select = value;
207 +
208 + return 1;
209 + }
210 +
211 + static int tegra210_dmic_get_mono_to_stereo(struct snd_kcontrol *kcontrol,
212 + struct snd_ctl_elem_value *ucontrol)
213 + {
214 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
215 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
216 +
217 + ucontrol->value.enumerated.item[0] = dmic->mono_to_stereo;
218 +
219 + return 0;
220 + }
221 +
222 + static int tegra210_dmic_put_mono_to_stereo(struct snd_kcontrol *kcontrol,
223 + struct snd_ctl_elem_value *ucontrol)
224 + {
225 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
226 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
227 + unsigned int value = ucontrol->value.enumerated.item[0];
228 +
229 + if (value == dmic->mono_to_stereo)
230 + return 0;
231 +
232 + dmic->mono_to_stereo = value;
233 +
234 + return 1;
235 + }
236 +
237 + static int tegra210_dmic_get_stereo_to_mono(struct snd_kcontrol *kcontrol,
238 + struct snd_ctl_elem_value *ucontrol)
239 + {
240 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
241 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
242 +
243 + ucontrol->value.enumerated.item[0] = dmic->stereo_to_mono;
244 +
245 + return 0;
246 + }
247 +
248 + static int tegra210_dmic_put_stereo_to_mono(struct snd_kcontrol *kcontrol,
249 + struct snd_ctl_elem_value *ucontrol)
250 + {
251 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
252 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
253 + unsigned int value = ucontrol->value.enumerated.item[0];
254 +
255 + if (value == dmic->stereo_to_mono)
256 + return 0;
257 +
258 + dmic->stereo_to_mono = value;
259 +
260 + return 1;
261 + }
262 +
263 + static int tegra210_dmic_get_osr_val(struct snd_kcontrol *kcontrol,
264 + struct snd_ctl_elem_value *ucontrol)
265 + {
266 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
267 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
268 +
269 + ucontrol->value.enumerated.item[0] = dmic->osr_val;
270 +
271 + return 0;
272 + }
273 +
274 + static int tegra210_dmic_put_osr_val(struct snd_kcontrol *kcontrol,
275 + struct snd_ctl_elem_value *ucontrol)
276 + {
277 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
278 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
279 + unsigned int value = ucontrol->value.enumerated.item[0];
280 +
281 + if (value == dmic->osr_val)
282 + return 0;
283 +
284 + dmic->osr_val = value;
285 +
286 + return 1;
287 + }
288 +
289 + static int tegra210_dmic_get_pol_sel(struct snd_kcontrol *kcontrol,
290 + struct snd_ctl_elem_value *ucontrol)
291 + {
292 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
293 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
294 +
295 + ucontrol->value.enumerated.item[0] = dmic->lrsel;
296 +
297 + return 0;
298 + }
299 +
300 + static int tegra210_dmic_put_pol_sel(struct snd_kcontrol *kcontrol,
301 + struct snd_ctl_elem_value *ucontrol)
302 + {
303 + struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);
304 + struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);
305 + unsigned int value = ucontrol->value.enumerated.item[0];
306 +
307 + if (value == dmic->lrsel)
308 + return 0;
309 +
310 + dmic->lrsel = value;
311 +
312 + return 1;
202 313 }
203 314
204 315 static const struct snd_soc_dai_ops tegra210_dmic_dai_ops = {
··· 398 287
399 288 static const struct snd_kcontrol_new tegra210_dmic_controls[] = {
400 289 SOC_SINGLE_EXT("Boost Gain Volume", 0, 0, MAX_BOOST_GAIN, 0,
401 - tegra210_dmic_get_control, tegra210_dmic_put_control),
290 + tegra210_dmic_get_boost_gain,
291 + tegra210_dmic_put_boost_gain),
402 292 SOC_ENUM_EXT("Channel Select", tegra210_dmic_ch_enum,
403 - tegra210_dmic_get_control, tegra210_dmic_put_control),
293 + tegra210_dmic_get_ch_select, tegra210_dmic_put_ch_select),
404 294 SOC_ENUM_EXT("Mono To Stereo",
405 - tegra210_dmic_mono_conv_enum, tegra210_dmic_get_control,
406 - tegra210_dmic_put_control),
295 + tegra210_dmic_mono_conv_enum,
296 + tegra210_dmic_get_mono_to_stereo,
297 + tegra210_dmic_put_mono_to_stereo),
407 298 SOC_ENUM_EXT("Stereo To Mono",
408 - tegra210_dmic_stereo_conv_enum, tegra210_dmic_get_control,
409 - tegra210_dmic_put_control),
299 + tegra210_dmic_stereo_conv_enum,
300 + tegra210_dmic_get_stereo_to_mono,
301 + tegra210_dmic_put_stereo_to_mono),
410 302 SOC_ENUM_EXT("OSR Value", tegra210_dmic_osr_enum,
411 - tegra210_dmic_get_control, tegra210_dmic_put_control),
303 + tegra210_dmic_get_osr_val, tegra210_dmic_put_osr_val),
412 304 SOC_ENUM_EXT("LR Polarity Select", tegra210_dmic_lrsel_enum,
413 - tegra210_dmic_get_control, tegra210_dmic_put_control),
305 + tegra210_dmic_get_pol_sel, tegra210_dmic_put_pol_sel),
414 306 };
415 307
416 308 static const struct snd_soc_component_driver tegra210_dmic_compnt = {
+240 -84
sound/soc/tegra/tegra210_i2s.c
··· 302 302 return 0;
303 303 }
304 304
305 + static int tegra210_i2s_get_loopback(struct snd_kcontrol *kcontrol,
306 + struct snd_ctl_elem_value *ucontrol)
307 + {
308 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
309 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
310 +
311 + ucontrol->value.integer.value[0] = i2s->loopback;
312 +
313 + return 0;
314 + }
315 +
316 + static int tegra210_i2s_put_loopback(struct snd_kcontrol *kcontrol,
317 + struct snd_ctl_elem_value *ucontrol)
318 + {
319 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
320 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
321 + int value = ucontrol->value.integer.value[0];
322 +
323 + if (value == i2s->loopback)
324 + return 0;
325 +
326 + i2s->loopback = value;
327 +
328 + regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL, I2S_CTRL_LPBK_MASK,
329 + i2s->loopback << I2S_CTRL_LPBK_SHIFT);
330 +
331 + return 1;
332 + }
333 +
334 + static int tegra210_i2s_get_fsync_width(struct snd_kcontrol *kcontrol,
335 + struct snd_ctl_elem_value *ucontrol)
336 + {
337 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
338 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
339 +
340 + ucontrol->value.integer.value[0] = i2s->fsync_width;
341 +
342 + return 0;
343 + }
344 +
345 + static int tegra210_i2s_put_fsync_width(struct snd_kcontrol *kcontrol,
346 + struct snd_ctl_elem_value *ucontrol)
347 + {
348 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
349 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
350 + int value = ucontrol->value.integer.value[0];
351 +
352 + if (value == i2s->fsync_width)
353 + return 0;
354 +
355 + i2s->fsync_width = value;
356 +
357 + /*
358 + * Frame sync width is used only for FSYNC modes and not
359 + * applicable for LRCK modes. Reset value for this field is "0",
360 + * which means the width is one bit clock wide.
361 + * The width requirement may depend on the codec and in such
362 + * cases mixer control is used to update custom values. A value
363 + * of "N" here means, width is "N + 1" bit clock wide.
364 + */
365 + regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
366 + I2S_CTRL_FSYNC_WIDTH_MASK,
367 + i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
368 +
369 + return 1;
370 + }
371 +
372 + static int tegra210_i2s_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,
373 + struct snd_ctl_elem_value *ucontrol)
374 + {
375 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
376 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
377 +
378 + ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_TX_PATH];
379 +
380 + return 0;
381 + }
382 +
383 + static int tegra210_i2s_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,
384 + struct snd_ctl_elem_value *ucontrol)
385 + {
386 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
387 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
388 + unsigned int value = ucontrol->value.enumerated.item[0];
389 +
390 + if (value == i2s->stereo_to_mono[I2S_TX_PATH])
391 + return 0;
392 +
393 + i2s->stereo_to_mono[I2S_TX_PATH] = value;
394 +
395 + return 1;
396 + }
397 +
398 + static int tegra210_i2s_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,
399 + struct snd_ctl_elem_value *ucontrol)
400 + {
401 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
402 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
403 +
404 + ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_TX_PATH];
405 +
406 + return 0;
407 + }
408 +
409 + static int tegra210_i2s_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,
410 + struct snd_ctl_elem_value *ucontrol)
411 + {
412 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
413 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
414 + unsigned int value = ucontrol->value.enumerated.item[0];
415 +
416 + if (value == i2s->mono_to_stereo[I2S_TX_PATH])
417 + return 0;
418 +
419 + i2s->mono_to_stereo[I2S_TX_PATH] = value;
420 +
421 + return 1;
422 + }
423 +
424 + static int tegra210_i2s_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,
425 + struct snd_ctl_elem_value *ucontrol)
426 + {
427 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
428 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
429 +
430 + ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_RX_PATH];
431 +
432 + return 0;
433 + }
434 +
435 + static int tegra210_i2s_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,
436 + struct snd_ctl_elem_value *ucontrol)
437 + {
438 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
439 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
440 + unsigned int value = ucontrol->value.enumerated.item[0];
441 +
442 + if (value == i2s->stereo_to_mono[I2S_RX_PATH])
443 + return 0;
444 +
445 + i2s->stereo_to_mono[I2S_RX_PATH] = value;
446 +
447 + return 1;
448 + }
449 +
450 + static int tegra210_i2s_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,
451 + struct snd_ctl_elem_value *ucontrol)
452 + {
453 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
454 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
455 +
456 + ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_RX_PATH];
457 +
458 + return 0;
459 + }
460 +
461 + static int tegra210_i2s_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,
462 + struct snd_ctl_elem_value *ucontrol)
463 + {
464 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
465 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
466 + unsigned int value = ucontrol->value.enumerated.item[0];
467 +
468 + if (value == i2s->mono_to_stereo[I2S_RX_PATH])
469 + return 0;
470 +
471 + i2s->mono_to_stereo[I2S_RX_PATH] = value;
472 +
473 + return 1;
474 + }
475 +
476 + static int tegra210_i2s_pget_fifo_th(struct snd_kcontrol *kcontrol,
477 + struct snd_ctl_elem_value *ucontrol)
478 + {
479 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
480 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
481 +
482 + ucontrol->value.integer.value[0] = i2s->rx_fifo_th;
483 +
484 + return 0;
485 + }
486 +
487 + static int tegra210_i2s_pput_fifo_th(struct snd_kcontrol *kcontrol,
488 + struct snd_ctl_elem_value *ucontrol)
489 + {
490 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
491 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
492 + int value = ucontrol->value.integer.value[0];
493 +
494 + if (value == i2s->rx_fifo_th)
495 + return 0;
496 +
497 + i2s->rx_fifo_th = value;
498 +
499 + return 1;
500 + }
501 +
502 + static int tegra210_i2s_get_bclk_ratio(struct snd_kcontrol *kcontrol,
503 + struct snd_ctl_elem_value *ucontrol)
504 + {
505 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
506 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
507 +
508 + ucontrol->value.integer.value[0] = i2s->bclk_ratio;
509 +
510 + return 0;
511 + }
512 +
513 + static int tegra210_i2s_put_bclk_ratio(struct snd_kcontrol *kcontrol,
514 + struct snd_ctl_elem_value *ucontrol)
515 + {
516 + struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
517 + struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
518 + int value = ucontrol->value.integer.value[0];
519 +
520 + if (value == i2s->bclk_ratio)
521 + return 0;
522 +
523 + i2s->bclk_ratio = value;
524 +
525 + return 1;
526 + }
527 +
305 528 static int tegra210_i2s_set_dai_bclk_ratio(struct snd_soc_dai *dai,
306 529 unsigned int ratio)
307 530 {
308 531 struct tegra210_i2s *i2s = snd_soc_dai_get_drvdata(dai);
309 532
310 533 i2s->bclk_ratio = ratio;
311 -
312 - return 0;
313 - }
314 -
315 - static int tegra210_i2s_get_control(struct snd_kcontrol *kcontrol,
316 - struct snd_ctl_elem_value *ucontrol)
317 - {
318 - struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
319 - struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
320 - long *uctl_val = &ucontrol->value.integer.value[0];
321 -
322 - if (strstr(kcontrol->id.name, "Loopback"))
323 - *uctl_val = i2s->loopback;
324 - else if (strstr(kcontrol->id.name, "FSYNC Width"))
325 - *uctl_val = i2s->fsync_width;
326 - else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
327 - *uctl_val = i2s->stereo_to_mono[I2S_TX_PATH];
328 - else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
329 - *uctl_val = i2s->mono_to_stereo[I2S_TX_PATH];
330 - else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
331 - *uctl_val = i2s->stereo_to_mono[I2S_RX_PATH];
332 - else if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
333 - *uctl_val = i2s->mono_to_stereo[I2S_RX_PATH];
334 - else if (strstr(kcontrol->id.name, "Playback FIFO Threshold"))
335 - *uctl_val = i2s->rx_fifo_th;
336 - else if (strstr(kcontrol->id.name, "BCLK Ratio"))
337 - *uctl_val = i2s->bclk_ratio;
338 -
339 - return 0;
340 - }
341 -
342 - static int tegra210_i2s_put_control(struct snd_kcontrol *kcontrol,
343 - struct snd_ctl_elem_value *ucontrol)
344 - {
345 - struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
346 - struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
347 - int value = ucontrol->value.integer.value[0];
348 -
349 - if (strstr(kcontrol->id.name, "Loopback")) {
350 - i2s->loopback = value;
351 -
352 - regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
353 - I2S_CTRL_LPBK_MASK,
354 - i2s->loopback << I2S_CTRL_LPBK_SHIFT);
355 -
356 - } else if (strstr(kcontrol->id.name, "FSYNC Width")) {
357 - /*
358 - * Frame sync width is used only for FSYNC modes and not
359 - * applicable for LRCK modes. Reset value for this field is "0",
360 - * which means the width is one bit clock wide.
361 - * The width requirement may depend on the codec and in such
362 - * cases mixer control is used to update custom values. A value
363 - * of "N" here means, width is "N + 1" bit clock wide.
364 - */
365 - i2s->fsync_width = value;
366 -
367 - regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
368 - I2S_CTRL_FSYNC_WIDTH_MASK,
369 - i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
370 -
371 - } else if (strstr(kcontrol->id.name, "Capture Stereo To Mono")) {
372 - i2s->stereo_to_mono[I2S_TX_PATH] = value;
373 - } else if (strstr(kcontrol->id.name, "Capture Mono To Stereo")) {
374 - i2s->mono_to_stereo[I2S_TX_PATH] = value;
375 - } else if (strstr(kcontrol->id.name, "Playback Stereo To Mono")) {
376 - i2s->stereo_to_mono[I2S_RX_PATH] = value;
377 - } else if (strstr(kcontrol->id.name, "Playback Mono To Stereo")) {
378 - i2s->mono_to_stereo[I2S_RX_PATH] = value;
379 - } else if (strstr(kcontrol->id.name, "Playback FIFO Threshold")) {
380 - i2s->rx_fifo_th = value;
381 - } else if (strstr(kcontrol->id.name, "BCLK Ratio")) {
382 - i2s->bclk_ratio = value;
383 - }
384 534
385 535 return 0;
386 536 }
··· 748 598 tegra210_i2s_stereo_conv_text);
749 599
750 600 static const struct snd_kcontrol_new tegra210_i2s_controls[] = {
751 - SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_control,
752 - tegra210_i2s_put_control),
753 - SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0, tegra210_i2s_get_control,
754 - tegra210_i2s_put_control),
601 + SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_loopback,
602 + tegra210_i2s_put_loopback),
603 + SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0,
604 + tegra210_i2s_get_fsync_width,
605 + tegra210_i2s_put_fsync_width),
755 606 SOC_ENUM_EXT("Capture Stereo To Mono", tegra210_i2s_stereo_conv_enum,
756 - tegra210_i2s_get_control, tegra210_i2s_put_control),
607 + tegra210_i2s_cget_stereo_to_mono,
608 + tegra210_i2s_cput_stereo_to_mono),
757 609 SOC_ENUM_EXT("Capture Mono To Stereo", tegra210_i2s_mono_conv_enum,
758 - tegra210_i2s_get_control, tegra210_i2s_put_control),
610 + tegra210_i2s_cget_mono_to_stereo,
611 + tegra210_i2s_cput_mono_to_stereo),
759 612 SOC_ENUM_EXT("Playback Stereo To Mono", tegra210_i2s_stereo_conv_enum,
760 - tegra210_i2s_get_control, tegra210_i2s_put_control),
613 + tegra210_i2s_pget_mono_to_stereo,
614 + tegra210_i2s_pput_mono_to_stereo),
761 615 SOC_ENUM_EXT("Playback Mono To Stereo", tegra210_i2s_mono_conv_enum,
762 - tegra210_i2s_get_control, tegra210_i2s_put_control),
616 + tegra210_i2s_pget_stereo_to_mono,
617 + tegra210_i2s_pput_stereo_to_mono),
763 618 SOC_SINGLE_EXT("Playback FIFO Threshold", 0, 0, I2S_RX_FIFO_DEPTH - 1,
764 - 0, tegra210_i2s_get_control, tegra210_i2s_put_control),
765 - SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0, tegra210_i2s_get_control,
766 - tegra210_i2s_put_control),
619 + 0, tegra210_i2s_pget_fifo_th, tegra210_i2s_pput_fifo_th),
620 + SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0,
621 + tegra210_i2s_get_bclk_ratio,
622 + tegra210_i2s_put_bclk_ratio),
767 623 };
768 624
769 625 static const struct snd_soc_dapm_widget tegra210_i2s_widgets[] = {
+19 -7
sound/soc/tegra/tegra210_mixer.c
··· 192 192 return 0;
193 193 }
194 194
195 - static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
196 - struct snd_ctl_elem_value *ucontrol)
195 + static int tegra210_mixer_apply_gain(struct snd_kcontrol *kcontrol,
196 + struct snd_ctl_elem_value *ucontrol,
197 + bool instant_gain)
197 198 {
198 199 struct soc_mixer_control *mc =
199 200 (struct soc_mixer_control *)kcontrol->private_value;
200 201 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
201 202 struct tegra210_mixer *mixer = snd_soc_component_get_drvdata(cmpnt);
202 203 unsigned int reg = mc->reg, id;
203 - bool instant_gain = false;
204 204 int err;
205 -
206 - if (strstr(kcontrol->id.name, "Instant Gain Volume"))
207 - instant_gain = true;
208 205
209 206 /* Save gain value for specific MIXER input */
210 207 id = (reg - TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_0) /
211 208 TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_STRIDE;
209 +
210 + if (mixer->gain_value[id] == ucontrol->value.integer.value[0])
211 + return 0;
212 212
213 213 mixer->gain_value[id] = ucontrol->value.integer.value[0];
214 214
··· 219 219 }
220 220
221 221 return 1;
222 + }
223 +
224 + static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
225 + struct snd_ctl_elem_value *ucontrol)
226 + {
227 + return tegra210_mixer_apply_gain(kcontrol, ucontrol, false);
228 + }
229 +
230 + static int tegra210_mixer_put_instant_gain(struct snd_kcontrol *kcontrol,
231 + struct snd_ctl_elem_value *ucontrol)
232 + {
233 + return tegra210_mixer_apply_gain(kcontrol, ucontrol, true);
222 234 }
223 235
224 236 static int tegra210_mixer_set_audio_cif(struct tegra210_mixer *mixer,
··· 400 388 SOC_SINGLE_EXT("RX" #id " Instant Gain Volume", \
401 389 MIXER_GAIN_CFG_RAM_ADDR((id) - 1), 0, \
402 390 0x20000, 0, tegra210_mixer_get_gain, \
403 - tegra210_mixer_put_gain),
391 + tegra210_mixer_put_instant_gain),
404 392
405 393 /* Volume controls for all MIXER inputs */
406 394 static const struct snd_kcontrol_new tegra210_mixer_gain_ctls[] = {
+22 -8
sound/soc/tegra/tegra210_mvc.c
··· 136 136 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
137 137 struct tegra210_mvc *mvc = snd_soc_component_get_drvdata(cmpnt);
138 138 unsigned int value;
139 - u8 mute_mask;
139 + u8 new_mask, old_mask;
140 140 int err;
141 141
142 142 pm_runtime_get_sync(cmpnt->dev);
··· 148 148 if (err < 0)
149 149 goto end;
150 150
151 - mute_mask = ucontrol->value.integer.value[0];
151 + regmap_read(mvc->regmap, TEGRA210_MVC_CTRL, &value);
152 +
153 + old_mask = (value >> TEGRA210_MVC_MUTE_SHIFT) & TEGRA210_MUTE_MASK_EN;
154 + new_mask = ucontrol->value.integer.value[0];
155 +
156 + if (new_mask == old_mask) {
157 + err = 0;
158 + goto end;
159 + }
152 160
153 161 err = regmap_update_bits(mvc->regmap, mc->reg,
154 162 TEGRA210_MVC_MUTE_MASK,
155 - mute_mask << TEGRA210_MVC_MUTE_SHIFT);
163 + new_mask << TEGRA210_MVC_MUTE_SHIFT);
156 164 if (err < 0)
157 165 goto end;
158 166
··· 203 195 unsigned int reg = mc->reg;
204 196 unsigned int value;
205 197 u8 chan;
206 - int err;
198 + int err, old_volume;
207 199
208 200 pm_runtime_get_sync(cmpnt->dev);
209 201
··· 215 207 goto end;
216 208
217 209 chan = (reg - TEGRA210_MVC_TARGET_VOL) / REG_SIZE;
210 + old_volume = mvc->volume[chan];
218 211
219 212 tegra210_mvc_conv_vol(mvc, chan,
220 213 ucontrol->value.integer.value[0]);
214 +
215 + if (mvc->volume[chan] == old_volume) {
216 + err = 0;
217 + goto end;
218 + }
221 219
222 220 /* Configure init volume same as target volume */
223 221 regmap_write(mvc->regmap,
··· 289 275 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
290 276 struct tegra210_mvc *mvc = snd_soc_component_get_drvdata(cmpnt);
291 277
292 - ucontrol->value.integer.value[0] = mvc->curve_type;
278 + ucontrol->value.enumerated.item[0] = mvc->curve_type;
293 279
294 280 return 0;
295 281 }
··· 299 285 {
300 286 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
301 287 struct tegra210_mvc *mvc = snd_soc_component_get_drvdata(cmpnt);
302 - int value;
288 + unsigned int value;
303 289
304 290 regmap_read(mvc->regmap, TEGRA210_MVC_ENABLE, &value);
305 291 if (value & TEGRA210_MVC_EN) {
··· 308 294 return -EINVAL;
309 295 }
310 296
311 - if (mvc->curve_type == ucontrol->value.integer.value[0])
297 + if (mvc->curve_type == ucontrol->value.enumerated.item[0])
312 298 return 0;
313 299
314 - mvc->curve_type = ucontrol->value.integer.value[0];
300 + mvc->curve_type = ucontrol->value.enumerated.item[0];
315 301
316 302 tegra210_mvc_reset_vol_settings(mvc, cmpnt->dev);
317 303
+93 -28
sound/soc/tegra/tegra210_sfc.c
··· 3244 3244 return tegra210_sfc_write_coeff_ram(cmpnt); 3245 3245 } 3246 3246 3247 - static int tegra210_sfc_get_control(struct snd_kcontrol *kcontrol, 3247 + static int tegra210_sfc_iget_stereo_to_mono(struct snd_kcontrol *kcontrol, 3248 3248 struct snd_ctl_elem_value *ucontrol) 3249 3249 { 3250 3250 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3251 3251 struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3252 3252
3253 - if (strstr(kcontrol->id.name, "Input Stereo To Mono")) 3254 - ucontrol->value.integer.value[0] = 3255 - sfc->stereo_to_mono[SFC_RX_PATH]; 3256 - else if (strstr(kcontrol->id.name, "Input Mono To Stereo")) 3257 - ucontrol->value.integer.value[0] = 3258 - sfc->mono_to_stereo[SFC_RX_PATH]; 3259 - else if (strstr(kcontrol->id.name, "Output Stereo To Mono")) 3260 - ucontrol->value.integer.value[0] = 3261 - sfc->stereo_to_mono[SFC_TX_PATH]; 3262 - else if (strstr(kcontrol->id.name, "Output Mono To Stereo")) 3263 - ucontrol->value.integer.value[0] = 3264 - sfc->mono_to_stereo[SFC_TX_PATH]; 3253 + ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_RX_PATH]; 3265 3254 3266 3255 return 0; 3267 3256 } 3268 3257
3269 - static int tegra210_sfc_put_control(struct snd_kcontrol *kcontrol, 3258 + static int tegra210_sfc_iput_stereo_to_mono(struct snd_kcontrol *kcontrol, 3270 3259 struct snd_ctl_elem_value *ucontrol) 3271 3260 { 3272 3261 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3273 3262 struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3274 - int value = ucontrol->value.integer.value[0]; 3263 + unsigned int value = ucontrol->value.enumerated.item[0]; 3275 3264
3276 - if (strstr(kcontrol->id.name, "Input Stereo To Mono")) 3277 - sfc->stereo_to_mono[SFC_RX_PATH] = value; 3278 - else if (strstr(kcontrol->id.name, "Input Mono To Stereo")) 3279 - sfc->mono_to_stereo[SFC_RX_PATH] = value; 3280 - else if (strstr(kcontrol->id.name, "Output Stereo To Mono")) 3281 - sfc->stereo_to_mono[SFC_TX_PATH] = value; 3282 - else if (strstr(kcontrol->id.name, "Output Mono To Stereo")) 3283 - sfc->mono_to_stereo[SFC_TX_PATH] = value; 3284 - else 3265 + if (value == sfc->stereo_to_mono[SFC_RX_PATH]) 3285 3266 return 0; 3267 + 3268 + sfc->stereo_to_mono[SFC_RX_PATH] = value; 3269 + 3270 + return 1; 3271 + } 3272 +
3273 + static int tegra210_sfc_iget_mono_to_stereo(struct snd_kcontrol *kcontrol, 3274 + struct snd_ctl_elem_value *ucontrol) 3275 + { 3276 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3277 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3278 + 3279 + ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_RX_PATH]; 3280 + 3281 + return 0; 3282 + } 3283 +
3284 + static int tegra210_sfc_iput_mono_to_stereo(struct snd_kcontrol *kcontrol, 3285 + struct snd_ctl_elem_value *ucontrol) 3286 + { 3287 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3288 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3289 + unsigned int value = ucontrol->value.enumerated.item[0]; 3290 + 3291 + if (value == sfc->mono_to_stereo[SFC_RX_PATH]) 3292 + return 0; 3293 + 3294 + sfc->mono_to_stereo[SFC_RX_PATH] = value; 3295 + 3296 + return 1; 3297 + } 3298 +
3299 + static int tegra210_sfc_oget_stereo_to_mono(struct snd_kcontrol *kcontrol, 3300 + struct snd_ctl_elem_value *ucontrol) 3301 + { 3302 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3303 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3304 + 3305 + ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_TX_PATH]; 3306 + 3307 + return 0; 3308 + } 3309 +
3310 + static int tegra210_sfc_oput_stereo_to_mono(struct snd_kcontrol *kcontrol, 3311 + struct snd_ctl_elem_value *ucontrol) 3312 + { 3313 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3314 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3315 + unsigned int value = ucontrol->value.enumerated.item[0]; 3316 + 3317 + if (value == sfc->stereo_to_mono[SFC_TX_PATH]) 3318 + return 0; 3319 + 3320 + sfc->stereo_to_mono[SFC_TX_PATH] = value; 3321 + 3322 + return 1; 3323 + } 3324 +
3325 + static int tegra210_sfc_oget_mono_to_stereo(struct snd_kcontrol *kcontrol, 3326 + struct snd_ctl_elem_value *ucontrol) 3327 + { 3328 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3329 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3330 + 3331 + ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_TX_PATH]; 3332 + 3333 + return 0; 3334 + } 3335 +
3336 + static int tegra210_sfc_oput_mono_to_stereo(struct snd_kcontrol *kcontrol, 3337 + struct snd_ctl_elem_value *ucontrol) 3338 + { 3339 + struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol); 3340 + struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt); 3341 + unsigned int value = ucontrol->value.enumerated.item[0]; 3342 + 3343 + if (value == sfc->mono_to_stereo[SFC_TX_PATH]) 3344 + return 0; 3345 + 3346 + sfc->mono_to_stereo[SFC_TX_PATH] = value; 3286 3347 3287 3348 return 1; 3288 3349 }
··· 3445 3384 3446 3385 static const struct snd_kcontrol_new tegra210_sfc_controls[] = { 3447 3386 SOC_ENUM_EXT("Input Stereo To Mono", tegra210_sfc_stereo_conv_enum, 3448 - tegra210_sfc_get_control, tegra210_sfc_put_control), 3387 + tegra210_sfc_iget_stereo_to_mono, 3388 + tegra210_sfc_iput_stereo_to_mono), 3449 3389 SOC_ENUM_EXT("Input Mono To Stereo", tegra210_sfc_mono_conv_enum, 3450 - tegra210_sfc_get_control, tegra210_sfc_put_control), 3390 + tegra210_sfc_iget_mono_to_stereo, 3391 + tegra210_sfc_iput_mono_to_stereo), 3451 3392 SOC_ENUM_EXT("Output Stereo To Mono", tegra210_sfc_stereo_conv_enum, 3452 - tegra210_sfc_get_control, tegra210_sfc_put_control), 3393 + tegra210_sfc_oget_stereo_to_mono, 3394 + tegra210_sfc_oput_stereo_to_mono), 3453 3395 SOC_ENUM_EXT("Output Mono To Stereo", tegra210_sfc_mono_conv_enum, 3454 - tegra210_sfc_get_control, tegra210_sfc_put_control), 3396 + tegra210_sfc_oget_mono_to_stereo, 3397 + tegra210_sfc_oput_mono_to_stereo), 3455 3398 }; 3456 3399 3457 3400 static const struct snd_soc_component_driver tegra210_sfc_cmpnt = {
+1 -21
tools/include/linux/kernel.h
··· 7 7 #include <assert.h> 8 8 #include <linux/build_bug.h> 9 9 #include <linux/compiler.h> 10 + #include <linux/math.h> 10 11 #include <endian.h> 11 12 #include <byteswap.h> 12 13 13 14 #ifndef UINT_MAX 14 15 #define UINT_MAX (~0U) 15 16 #endif 16 - 17 - #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) 18 17 19 18 #define PERF_ALIGN(x, a) __PERF_ALIGN_MASK(x, (typeof(x))(a)-1) 20 19 #define __PERF_ALIGN_MASK(x, mask) (((x)+(mask))&~(mask)) ··· 49 50 typeof(y) _min2 = (y); \ 50 51 (void) (&_min1 == &_min2); \ 51 52 _min1 < _min2 ? _min1 : _min2; }) 52 - #endif 53 - 54 - #ifndef roundup 55 - #define roundup(x, y) ( \ 56 - { \ 57 - const typeof(y) __y = y; \ 58 - (((x) + (__y - 1)) / __y) * __y; \ 59 - } \ 60 - ) 61 53 #endif 62 54 63 55 #ifndef BUG_ON ··· 93 103 int scnprintf_pad(char * buf, size_t size, const char * fmt, ...); 94 104 95 105 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr)) 96 - 97 - /* 98 - * This looks more complex than it should be. But we need to 99 - * get the type for the ~ right in round_down (it needs to be 100 - * as wide as the result!), and we want to evaluate the macro 101 - * arguments just once each. 102 - */ 103 - #define __round_mask(x, y) ((__typeof__(x))((y)-1)) 104 - #define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1) 105 - #define round_down(x, y) ((x) & ~__round_mask(x, y)) 106 106 107 107 #define current_gfp_context(k) 0 108 108 #define synchronize_rcu()
+25
tools/include/linux/math.h
··· 1 + #ifndef _TOOLS_MATH_H 2 + #define _TOOLS_MATH_H 3 + 4 + /* 5 + * This looks more complex than it should be. But we need to 6 + * get the type for the ~ right in round_down (it needs to be 7 + * as wide as the result!), and we want to evaluate the macro 8 + * arguments just once each. 9 + */ 10 + #define __round_mask(x, y) ((__typeof__(x))((y)-1)) 11 + #define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1) 12 + #define round_down(x, y) ((x) & ~__round_mask(x, y)) 13 + 14 + #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) 15 + 16 + #ifndef roundup 17 + #define roundup(x, y) ( \ 18 + { \ 19 + const typeof(y) __y = y; \ 20 + (((x) + (__y - 1)) / __y) * __y; \ 21 + } \ 22 + ) 23 + #endif 24 + 25 + #endif
+1
tools/objtool/elf.c
··· 375 375 return -1; 376 376 } 377 377 memset(sym, 0, sizeof(*sym)); 378 + INIT_LIST_HEAD(&sym->pv_target); 378 379 sym->alias = sym; 379 380 380 381 sym->idx = i;
+4
tools/objtool/objtool.c
··· 153 153 !strcmp(func->name, "_paravirt_ident_64")) 154 154 return; 155 155 156 + /* already added this function */ 157 + if (!list_empty(&func->pv_target)) 158 + return; 159 + 156 160 list_add(&func->pv_target, &f->pv_ops[idx].targets); 157 161 f->pv_ops[idx].clean = false; 158 162 }
+3
tools/testing/radix-tree/linux/lockdep.h
··· 1 1 #ifndef _LINUX_LOCKDEP_H 2 2 #define _LINUX_LOCKDEP_H 3 + 4 + #include <linux/spinlock.h> 5 + 3 6 struct lock_class_key { 4 7 unsigned int a; 5 8 };
+30
tools/testing/selftests/kvm/kvm_create_max_vcpus.c
··· 12 12 #include <stdio.h> 13 13 #include <stdlib.h> 14 14 #include <string.h> 15 + #include <sys/resource.h> 15 16 16 17 #include "test_util.h" 17 18 ··· 41 40 { 42 41 int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID); 43 42 int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); 43 + /* 44 + * Number of file descriptors reqired, KVM_CAP_MAX_VCPUS for vCPU fds + 45 + * an arbitrary number for everything else. 46 + */ 47 + int nr_fds_wanted = kvm_max_vcpus + 100; 48 + struct rlimit rl; 44 49 45 50 pr_info("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id); 46 51 pr_info("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus); 52 + 53 + /* 54 + * Check that we're allowed to open nr_fds_wanted file descriptors and 55 + * try raising the limits if needed. 56 + */ 57 + TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!"); 58 + 59 + if (rl.rlim_cur < nr_fds_wanted) { 60 + rl.rlim_cur = nr_fds_wanted; 61 + if (rl.rlim_max < nr_fds_wanted) { 62 + int old_rlim_max = rl.rlim_max; 63 + rl.rlim_max = nr_fds_wanted; 64 + 65 + int r = setrlimit(RLIMIT_NOFILE, &rl); 66 + if (r < 0) { 67 + printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n", 68 + old_rlim_max, nr_fds_wanted); 69 + exit(KSFT_SKIP); 70 + } 71 + } else { 72 + TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!"); 73 + } 74 + } 47 75 48 76 /* 49 77 * Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID.
+1 -1
tools/testing/selftests/kvm/kvm_page_table_test.c
··· 280 280 #ifdef __s390x__ 281 281 alignment = max(0x100000, alignment); 282 282 #endif 283 - guest_test_phys_mem = align_down(guest_test_virt_mem, alignment); 283 + guest_test_phys_mem = align_down(guest_test_phys_mem, alignment); 284 284 285 285 /* Set up the shared data structure test_args */ 286 286 test_args.vm = vm;
+71 -69
tools/testing/selftests/kvm/x86_64/hyperv_features.c
··· 165 165 vcpu_set_cpuid(vm, VCPU_ID, cpuid); 166 166 } 167 167 168 - static void guest_test_msrs_access(struct kvm_vm *vm, struct msr_data *msr, 169 - struct kvm_cpuid2 *best) 168 + static void guest_test_msrs_access(void) 170 169 { 171 170 struct kvm_run *run; 171 + struct kvm_vm *vm; 172 172 struct ucall uc; 173 173 int stage = 0, r; 174 174 struct kvm_cpuid_entry2 feat = { ··· 180 180 struct kvm_cpuid_entry2 dbg = { 181 181 .function = HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES 182 182 }; 183 - struct kvm_enable_cap cap = {0}; 184 - 185 - run = vcpu_state(vm, VCPU_ID); 183 + struct kvm_cpuid2 *best; 184 + vm_vaddr_t msr_gva; 185 + struct kvm_enable_cap cap = { 186 + .cap = KVM_CAP_HYPERV_ENFORCE_CPUID, 187 + .args = {1} 188 + }; 189 + struct msr_data *msr; 186 190 187 191 while (true) { 192 + vm = vm_create_default(VCPU_ID, 0, guest_msr); 193 + 194 + msr_gva = vm_vaddr_alloc_page(vm); 195 + memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize()); 196 + msr = addr_gva2hva(vm, msr_gva); 197 + 198 + vcpu_args_set(vm, VCPU_ID, 1, msr_gva); 199 + vcpu_enable_cap(vm, VCPU_ID, &cap); 200 + 201 + vcpu_set_hv_cpuid(vm, VCPU_ID); 202 + 203 + best = kvm_get_supported_hv_cpuid(); 204 + 205 + vm_init_descriptor_tables(vm); 206 + vcpu_init_descriptor_tables(vm, VCPU_ID); 207 + vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler); 208 + 209 + run = vcpu_state(vm, VCPU_ID); 210 + 188 211 switch (stage) { 189 212 case 0: 190 213 /* ··· 338 315 * capability enabled and guest visible CPUID bit unset. 
339 316 */ 340 317 cap.cap = KVM_CAP_HYPERV_SYNIC2; 318 + cap.args[0] = 0; 341 319 vcpu_enable_cap(vm, VCPU_ID, &cap); 342 320 break; 343 321 case 22:
··· 485 461 486 462 switch (get_ucall(vm, VCPU_ID, &uc)) { 487 463 case UCALL_SYNC: 488 - TEST_ASSERT(uc.args[1] == stage, 489 - "Unexpected stage: %ld (%d expected)\n", 490 - uc.args[1], stage); 464 + TEST_ASSERT(uc.args[1] == 0, 465 + "Unexpected stage: %ld (0 expected)\n", 466 + uc.args[1]); 491 467 break; 492 468 case UCALL_ABORT: 493 469 TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
··· 498 474 } 499 475 500 476 stage++; 477 + kvm_vm_free(vm); 501 478 } 502 479 } 503 480 504 - static void guest_test_hcalls_access(struct kvm_vm *vm, struct hcall_data *hcall, 505 - void *input, void *output, struct kvm_cpuid2 *best) 481 + static void guest_test_hcalls_access(void) 506 482 { 507 483 struct kvm_run *run; 484 + struct kvm_vm *vm; 508 485 struct ucall uc; 509 486 int stage = 0, r; 510 487 struct kvm_cpuid_entry2 feat = {
··· 518 493 struct kvm_cpuid_entry2 dbg = { 519 494 .function = HYPERV_CPUID_SYNDBG_PLATFORM_CAPABILITIES 520 495 }; 521 - 522 - run = vcpu_state(vm, VCPU_ID); 496 + struct kvm_enable_cap cap = { 497 + .cap = KVM_CAP_HYPERV_ENFORCE_CPUID, 498 + .args = {1} 499 + }; 500 + vm_vaddr_t hcall_page, hcall_params; 501 + struct hcall_data *hcall; 502 + struct kvm_cpuid2 *best; 523 503
524 504 while (true) { 505 + vm = vm_create_default(VCPU_ID, 0, guest_hcall); 506 + 507 + vm_init_descriptor_tables(vm); 508 + vcpu_init_descriptor_tables(vm, VCPU_ID); 509 + vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler); 510 + 511 + /* Hypercall input/output */ 512 + hcall_page = vm_vaddr_alloc_pages(vm, 2); 513 + hcall = addr_gva2hva(vm, hcall_page); 514 + memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize()); 515 + 516 + hcall_params = vm_vaddr_alloc_page(vm); 517 + memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize()); 518 + 519 + vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params); 520 + vcpu_enable_cap(vm, VCPU_ID, &cap); 521 + 522 + vcpu_set_hv_cpuid(vm, VCPU_ID); 523 + 524 + best = kvm_get_supported_hv_cpuid(); 525 + 526 + run = vcpu_state(vm, VCPU_ID); 527 + 525 528 switch (stage) { 526 529 case 0: 527 530 hcall->control = 0xdeadbeef;
··· 659 606 660 607 switch (get_ucall(vm, VCPU_ID, &uc)) { 661 608 case UCALL_SYNC: 662 - TEST_ASSERT(uc.args[1] == stage, 663 - "Unexpected stage: %ld (%d expected)\n", 664 - uc.args[1], stage); 609 + TEST_ASSERT(uc.args[1] == 0, 610 + "Unexpected stage: %ld (0 expected)\n", 611 + uc.args[1]); 665 612 break; 666 613 case UCALL_ABORT: 667 614 TEST_FAIL("%s at %s:%ld", (const char *)uc.args[0],
··· 672 619 } 673 620 674 621 stage++; 622 + kvm_vm_free(vm); 675 623 } 676 624 } 677 625 678 626 int main(void) 679 627 { 680 - struct kvm_cpuid2 *best; 681 - struct kvm_vm *vm; 682 - vm_vaddr_t msr_gva, hcall_page, hcall_params; 683 - struct kvm_enable_cap cap = { 684 - .cap = KVM_CAP_HYPERV_ENFORCE_CPUID, 685 - .args = {1} 686 - }; 687 - 688 - /* Test MSRs */ 689 - vm = vm_create_default(VCPU_ID, 0, guest_msr); 690 - 691 - msr_gva = vm_vaddr_alloc_page(vm); 692 - memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize()); 693 - vcpu_args_set(vm, VCPU_ID, 1, msr_gva); 694 - vcpu_enable_cap(vm, VCPU_ID, &cap); 695 - 696 - vcpu_set_hv_cpuid(vm, VCPU_ID); 697 - 698 - best = kvm_get_supported_hv_cpuid(); 699 - 700 - vm_init_descriptor_tables(vm); 701 - vcpu_init_descriptor_tables(vm, VCPU_ID); 702 - vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler); 703 - 704 628 pr_info("Testing access to Hyper-V specific MSRs\n"); 705 - guest_test_msrs_access(vm, addr_gva2hva(vm, msr_gva), 706 - best); 707 - kvm_vm_free(vm); 708 -
709 - /* Test hypercalls */ 710 - vm = vm_create_default(VCPU_ID, 0, guest_hcall); 711 - 712 - vm_init_descriptor_tables(vm); 713 - vcpu_init_descriptor_tables(vm, VCPU_ID); 714 - vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler); 715 - 716 - /* Hypercall input/output */ 717 - hcall_page = vm_vaddr_alloc_pages(vm, 2); 718 - memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize()); 719 - 720 - hcall_params = vm_vaddr_alloc_page(vm); 721 - memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize()); 722 - 723 - vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params); 724 - vcpu_enable_cap(vm, VCPU_ID, &cap); 725 - 726 - vcpu_set_hv_cpuid(vm, VCPU_ID); 727 - 728 - best = kvm_get_supported_hv_cpuid(); 629 + guest_test_msrs_access(); 729 630 730 631 pr_info("Testing access to Hyper-V hypercalls\n"); 731 - guest_test_hcalls_access(vm, addr_gva2hva(vm, hcall_params), 732 - addr_gva2hva(vm, hcall_page), 733 - addr_gva2hva(vm, hcall_page) + getpagesize(), 734 - best); 735 - 736 - kvm_vm_free(vm); 632 + guest_test_hcalls_access(); 737 633 }
+155 -10
tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c
··· 54 54 return vm; 55 55 } 56 56 57 - static struct kvm_vm *__vm_create(void) 57 + static struct kvm_vm *aux_vm_create(bool with_vcpus) 58 58 { 59 59 struct kvm_vm *vm; 60 60 int i; 61 61 62 62 vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); 63 + if (!with_vcpus) 64 + return vm; 65 + 63 66 for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) 64 67 vm_vcpu_add(vm, i); 65 68
··· 92 89 { 93 90 struct kvm_vm *src_vm; 94 91 struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS]; 95 - int i; 92 + int i, ret; 96 93 97 94 src_vm = sev_vm_create(es); 98 95 for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i) 99 - dst_vms[i] = __vm_create(); 96 + dst_vms[i] = aux_vm_create(true); 100 97 101 98 /* Initial migration from the src to the first dst. */ 102 99 sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
··· 105 102 sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd); 106 103 107 104 /* Migrate the guest back to the original VM. */ 108 - sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd); 105 + ret = __sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd); 106 + TEST_ASSERT(ret == -1 && errno == EIO, 107 + "VM that was migrated from should be dead. ret %d, errno: %d\n", ret, 108 + errno); 109 109 110 110 kvm_vm_free(src_vm); 111 111 for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
··· 152 146 153 147 for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) 154 148 pthread_join(pt[i], NULL); 149 + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) 150 + kvm_vm_free(input[i].vm); 155 151 } 156 152 157 153 static void test_sev_migrate_parameters(void)
··· 165 157 sev_vm = sev_vm_create(/* es= */ false); 166 158 sev_es_vm = sev_vm_create(/* es= */ true); 167 159 vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); 168 - vm_no_sev = __vm_create(); 160 + vm_no_sev = aux_vm_create(true); 169 161 sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR); 170 162 sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL); 171 163 vm_vcpu_add(sev_es_vm_no_vmsa, 1); 172 - 173 164 174 165 ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd); 175 166 TEST_ASSERT(
··· 198 191 TEST_ASSERT(ret == -1 && errno == EINVAL, 199 192 "Migrations require SEV enabled. ret %d, errno: %d\n", ret, 200 193 errno); 194 + 195 + kvm_vm_free(sev_vm); 196 + kvm_vm_free(sev_es_vm); 197 + kvm_vm_free(sev_es_vm_no_vmsa); 198 + kvm_vm_free(vm_no_vcpu); 199 + kvm_vm_free(vm_no_sev); 200 + } 201 +
202 + static int __sev_mirror_create(int dst_fd, int src_fd) 203 + { 204 + struct kvm_enable_cap cap = { 205 + .cap = KVM_CAP_VM_COPY_ENC_CONTEXT_FROM, 206 + .args = { src_fd } 207 + }; 208 + 209 + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap); 210 + } 211 + 212 + 213 + static void sev_mirror_create(int dst_fd, int src_fd) 214 + { 215 + int ret; 216 + 217 + ret = __sev_mirror_create(dst_fd, src_fd); 218 + TEST_ASSERT(!ret, "Copying context failed, ret: %d, errno: %d\n", ret, errno); 219 + } 220 +
221 + static void test_sev_mirror(bool es) 222 + { 223 + struct kvm_vm *src_vm, *dst_vm; 224 + struct kvm_sev_launch_start start = { 225 + .policy = es ? SEV_POLICY_ES : 0 226 + }; 227 + int i; 228 + 229 + src_vm = sev_vm_create(es); 230 + dst_vm = aux_vm_create(false); 231 + 232 + sev_mirror_create(dst_vm->fd, src_vm->fd); 233 + 234 + /* Check that we can complete creation of the mirror VM. */ 235 + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i) 236 + vm_vcpu_add(dst_vm, i); 237 + sev_ioctl(dst_vm->fd, KVM_SEV_LAUNCH_START, &start); 238 + if (es) 239 + sev_ioctl(dst_vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL); 240 + 241 + kvm_vm_free(src_vm); 242 + kvm_vm_free(dst_vm); 243 + } 244 +
245 + static void test_sev_mirror_parameters(void) 246 + { 247 + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_with_vcpu; 248 + int ret; 249 + 250 + sev_vm = sev_vm_create(/* es= */ false); 251 + sev_es_vm = sev_vm_create(/* es= */ true); 252 + vm_with_vcpu = aux_vm_create(true); 253 + vm_no_vcpu = aux_vm_create(false); 254 + 255 + ret = __sev_mirror_create(sev_vm->fd, sev_vm->fd); 256 + TEST_ASSERT( 257 + ret == -1 && errno == EINVAL, 258 + "Should not be able copy context to self. ret: %d, errno: %d\n", 259 + ret, errno); 260 +
261 + ret = __sev_mirror_create(sev_vm->fd, sev_es_vm->fd); 262 + TEST_ASSERT( 263 + ret == -1 && errno == EINVAL, 264 + "Should not be able copy context to SEV enabled VM. ret: %d, errno: %d\n", 265 + ret, errno); 266 + 267 + ret = __sev_mirror_create(sev_es_vm->fd, sev_vm->fd); 268 + TEST_ASSERT( 269 + ret == -1 && errno == EINVAL, 270 + "Should not be able copy context to SEV-ES enabled VM. ret: %d, errno: %d\n", 271 + ret, errno); 272 +
273 + ret = __sev_mirror_create(vm_no_vcpu->fd, vm_with_vcpu->fd); 274 + TEST_ASSERT(ret == -1 && errno == EINVAL, 275 + "Copy context requires SEV enabled. ret %d, errno: %d\n", ret, 276 + errno); 277 + 278 + ret = __sev_mirror_create(vm_with_vcpu->fd, sev_vm->fd); 279 + TEST_ASSERT( 280 + ret == -1 && errno == EINVAL, 281 + "SEV copy context requires no vCPUS on the destination. ret: %d, errno: %d\n", 282 + ret, errno); 283 +
284 + kvm_vm_free(sev_vm); 285 + kvm_vm_free(sev_es_vm); 286 + kvm_vm_free(vm_with_vcpu); 287 + kvm_vm_free(vm_no_vcpu); 288 + } 289 +
290 + static void test_sev_move_copy(void) 291 + { 292 + struct kvm_vm *dst_vm, *sev_vm, *mirror_vm, *dst_mirror_vm; 293 + int ret; 294 + 295 + sev_vm = sev_vm_create(/* es= */ false); 296 + dst_vm = aux_vm_create(true); 297 + mirror_vm = aux_vm_create(false); 298 + dst_mirror_vm = aux_vm_create(false); 299 + 300 + sev_mirror_create(mirror_vm->fd, sev_vm->fd); 301 + ret = __sev_migrate_from(dst_vm->fd, sev_vm->fd); 302 + TEST_ASSERT(ret == -1 && errno == EBUSY, 303 + "Cannot migrate VM that has mirrors. ret %d, errno: %d\n", ret, 304 + errno); 305 +
306 + /* The mirror itself can be migrated. */ 307 + sev_migrate_from(dst_mirror_vm->fd, mirror_vm->fd); 308 + ret = __sev_migrate_from(dst_vm->fd, sev_vm->fd); 309 + TEST_ASSERT(ret == -1 && errno == EBUSY, 310 + "Cannot migrate VM that has mirrors. ret %d, errno: %d\n", ret, 311 + errno); 312 + 313 + /* 314 + * mirror_vm is not a mirror anymore, dst_mirror_vm is. Thus, 315 + * the owner can be copied as soon as dst_mirror_vm is gone.
316 + */ 317 + kvm_vm_free(dst_mirror_vm); 318 + sev_migrate_from(dst_vm->fd, sev_vm->fd); 319 + 320 + kvm_vm_free(mirror_vm); 321 + kvm_vm_free(dst_vm); 322 + kvm_vm_free(sev_vm); 201 323 } 202 324 203 325 int main(int argc, char *argv[]) 204 326 { 205 - test_sev_migrate_from(/* es= */ false); 206 - test_sev_migrate_from(/* es= */ true); 207 - test_sev_migrate_locking(); 208 - test_sev_migrate_parameters(); 327 + if (kvm_check_cap(KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM)) { 328 + test_sev_migrate_from(/* es= */ false); 329 + test_sev_migrate_from(/* es= */ true); 330 + test_sev_migrate_locking(); 331 + test_sev_migrate_parameters(); 332 + if (kvm_check_cap(KVM_CAP_VM_COPY_ENC_CONTEXT_FROM)) 333 + test_sev_move_copy(); 334 + } 335 + if (kvm_check_cap(KVM_CAP_VM_COPY_ENC_CONTEXT_FROM)) { 336 + test_sev_mirror(/* es= */ false); 337 + test_sev_mirror(/* es= */ true); 338 + test_sev_mirror_parameters(); 339 + } 209 340 return 0; 210 341 }
+2 -2
tools/testing/selftests/net/fcnal-test.sh
··· 4002 4002 ################################################################################ 4003 4003 # main 4004 4004 4005 - TESTS_IPV4="ipv4_ping ipv4_tcp ipv4_udp ipv4_addr_bind ipv4_runtime ipv4_netfilter" 4006 - TESTS_IPV6="ipv6_ping ipv6_tcp ipv6_udp ipv6_addr_bind ipv6_runtime ipv6_netfilter" 4005 + TESTS_IPV4="ipv4_ping ipv4_tcp ipv4_udp ipv4_bind ipv4_runtime ipv4_netfilter" 4006 + TESTS_IPV6="ipv6_ping ipv6_tcp ipv6_udp ipv6_bind ipv6_runtime ipv6_netfilter" 4007 4007 TESTS_OTHER="use_cases" 4008 4008 4009 4009 PAUSE_ON_FAIL=no
+28 -2
tools/testing/selftests/wireguard/netns.sh
··· 276 276 n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7 277 277 ip2 link del wg0 278 278 ip2 link del wg1 279 - ! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel 279 + read _ _ tx_bytes_before < <(n0 wg show wg1 transfer) 280 + ! n0 ping -W 1 -c 10 -f 192.168.241.2 || false 281 + sleep 1 282 + read _ _ tx_bytes_after < <(n0 wg show wg1 transfer) 283 + (( tx_bytes_after - tx_bytes_before < 70000 )) 280 284 281 285 ip0 link del wg1 282 286 ip1 link del wg0 ··· 613 609 kill $ncat_pid 614 610 ip0 link del wg0 615 611 612 + # Ensure that dst_cache references don't outlive netns lifetime 613 + ip1 link add dev wg0 type wireguard 614 + ip2 link add dev wg0 type wireguard 615 + configure_peers 616 + ip1 link add veth1 type veth peer name veth2 617 + ip1 link set veth2 netns $netns2 618 + ip1 addr add fd00:aa::1/64 dev veth1 619 + ip2 addr add fd00:aa::2/64 dev veth2 620 + ip1 link set veth1 up 621 + ip2 link set veth2 up 622 + waitiface $netns1 veth1 623 + waitiface $netns2 veth2 624 + ip1 -6 route add default dev veth1 via fd00:aa::2 625 + ip2 -6 route add default dev veth2 via fd00:aa::1 626 + n1 wg set wg0 peer "$pub2" endpoint [fd00:aa::2]:2 627 + n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::1]:1 628 + n1 ping6 -c 1 fd00::2 629 + pp ip netns delete $netns1 630 + pp ip netns delete $netns2 631 + pp ip netns add $netns1 632 + pp ip netns add $netns2 633 + 616 634 # Ensure there aren't circular reference loops 617 635 ip1 link add wg1 type wireguard 618 636 ip2 link add wg2 type wireguard ··· 653 627 done < /dev/kmsg 654 628 alldeleted=1 655 629 for object in "${!objects[@]}"; do 656 - if [[ ${objects["$object"]} != *createddestroyed ]]; then 630 + if [[ ${objects["$object"]} != *createddestroyed && ${objects["$object"]} != *createdcreateddestroyeddestroyed ]]; then 657 631 echo "Error: $object: merely ${objects["$object"]}" >&3 658 632 alldeleted=0 659 633 fi
+1 -1
tools/testing/selftests/wireguard/qemu/debug.config
··· 47 47 CONFIG_TRACE_IRQFLAGS=y 48 48 CONFIG_DEBUG_BUGVERBOSE=y 49 49 CONFIG_DEBUG_LIST=y 50 - CONFIG_DEBUG_PI_LIST=y 50 + CONFIG_DEBUG_PLIST=y 51 51 CONFIG_PROVE_RCU=y 52 52 CONFIG_SPARSE_RCU_POINTER=y 53 53 CONFIG_RCU_CPU_STALL_TIMEOUT=21
+1
tools/testing/selftests/wireguard/qemu/kernel.config
··· 66 66 CONFIG_SYSFS=y 67 67 CONFIG_TMPFS=y 68 68 CONFIG_CONSOLE_LOGLEVEL_DEFAULT=15 69 + CONFIG_LOG_BUF_SHIFT=18 69 70 CONFIG_PRINTK_TIME=y 70 71 CONFIG_BLK_DEV_INITRD=y 71 72 CONFIG_LEGACY_VSYSCALL_NONE=y
+37 -19
virt/kvm/kvm_main.c
··· 1531 1531 1532 1532 static int kvm_set_memslot(struct kvm *kvm, 1533 1533 const struct kvm_userspace_memory_region *mem, 1534 - struct kvm_memory_slot *old, 1535 1534 struct kvm_memory_slot *new, int as_id, 1536 1535 enum kvm_mr_change change) 1537 1536 { 1538 - struct kvm_memory_slot *slot; 1537 + struct kvm_memory_slot *slot, old; 1539 1538 struct kvm_memslots *slots; 1540 1539 int r; 1541 1540
··· 1565 1566 * Note, the INVALID flag needs to be in the appropriate entry 1566 1567 * in the freshly allocated memslots, not in @old or @new. 1567 1568 */ 1568 - slot = id_to_memslot(slots, old->id); 1569 + slot = id_to_memslot(slots, new->id); 1569 1570 slot->flags |= KVM_MEMSLOT_INVALID; 1570 1571 1571 1572 /*
··· 1596 1597 kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id)); 1597 1598 } 1598 1599 1600 + /* 1601 + * Make a full copy of the old memslot, the pointer will become stale 1602 + * when the memslots are re-sorted by update_memslots(), and the old 1603 + * memslot needs to be referenced after calling update_memslots(), e.g. 1604 + * to free its resources and for arch specific behavior. This needs to 1605 + * happen *after* (re)acquiring slots_arch_lock. 1606 + */ 1607 + slot = id_to_memslot(slots, new->id); 1608 + if (slot) { 1609 + old = *slot; 1610 + } else { 1611 + WARN_ON_ONCE(change != KVM_MR_CREATE); 1612 + memset(&old, 0, sizeof(old)); 1613 + old.id = new->id; 1614 + old.as_id = as_id; 1615 + } 1616 +
1617 + /* Copy the arch-specific data, again after (re)acquiring slots_arch_lock. */ 1618 + memcpy(&new->arch, &old.arch, sizeof(old.arch)); 1619 + 1599 1620 r = kvm_arch_prepare_memory_region(kvm, new, mem, change); 1600 1621 if (r) 1601 1622 goto out_slots;
··· 1623 1604 update_memslots(slots, new, change); 1624 1605 slots = install_new_memslots(kvm, as_id, slots); 1625 1606 1626 - kvm_arch_commit_memory_region(kvm, mem, old, new, change); 1607 + kvm_arch_commit_memory_region(kvm, mem, &old, new, change); 1608 + 1609 + /* Free the old memslot's metadata. Note, this is the full copy!!! */ 1610 + if (change == KVM_MR_DELETE) 1611 + kvm_free_memslot(kvm, &old); 1627 1612 1628 1613 kvfree(slots); 1629 1614 return 0; 1630 1615 1631 1616 out_slots: 1632 1617 if (change == KVM_MR_DELETE || change == KVM_MR_MOVE) { 1633 - slot = id_to_memslot(slots, old->id); 1618 + slot = id_to_memslot(slots, new->id); 1634 1619 slot->flags &= ~KVM_MEMSLOT_INVALID; 1635 1620 slots = install_new_memslots(kvm, as_id, slots); 1636 1621 } else {
··· 1649 1626 struct kvm_memory_slot *old, int as_id) 1650 1627 { 1651 1628 struct kvm_memory_slot new; 1652 - int r; 1653 1629 1654 1630 if (!old->npages) 1655 1631 return -EINVAL;
··· 1661 1639 */ 1662 1640 new.as_id = as_id; 1663 1641 1664 - r = kvm_set_memslot(kvm, mem, old, &new, as_id, KVM_MR_DELETE); 1665 - if (r) 1666 - return r; 1667 - 1668 - kvm_free_memslot(kvm, old); 1669 - return 0; 1642 + return kvm_set_memslot(kvm, mem, &new, as_id, KVM_MR_DELETE); 1670 1643 } 1671 1644 1672 1645 /*
··· 1689 1672 id = (u16)mem->slot; 1690 1673 1691 1674 /* General sanity checks */ 1692 - if (mem->memory_size & (PAGE_SIZE - 1)) 1675 + if ((mem->memory_size & (PAGE_SIZE - 1)) || 1676 + (mem->memory_size != (unsigned long)mem->memory_size)) 1693 1677 return -EINVAL; 1694 1678 if (mem->guest_phys_addr & (PAGE_SIZE - 1)) 1695 1679 return -EINVAL;
··· 1736 1718 if (!old.npages) { 1737 1719 change = KVM_MR_CREATE; 1738 1720 new.dirty_bitmap = NULL; 1739 - memset(&new.arch, 0, sizeof(new.arch)); 1740 1721 } else { /* Modify an existing slot. */ 1741 1722 if ((new.userspace_addr != old.userspace_addr) || 1742 1723 (new.npages != old.npages) ||
··· 1749 1732 else /* Nothing to change. */ 1750 1733 return 0; 1751 1734 1752 - /* Copy dirty_bitmap and arch from the current memslot. */ 1735 + /* Copy dirty_bitmap from the current memslot. */ 1753 1736 new.dirty_bitmap = old.dirty_bitmap; 1754 - memcpy(&new.arch, &old.arch, sizeof(new.arch)); 1755 1737 } 1756 1738 1757 1739 if ((change == KVM_MR_CREATE) || (change == KVM_MR_MOVE)) {
··· 1776 1760 bitmap_set(new.dirty_bitmap, 0, new.npages); 1777 1761 } 1778 1762 1779 - r = kvm_set_memslot(kvm, mem, &old, &new, as_id, change); 1763 + r = kvm_set_memslot(kvm, mem, &new, as_id, change); 1780 1764 if (r) 1781 1765 goto out_bitmap;
··· 2931 2915 int r; 2932 2916 gpa_t gpa = ghc->gpa + offset; 2933 2917 2934 - BUG_ON(len + offset > ghc->len); 2918 + if (WARN_ON_ONCE(len + offset > ghc->len)) 2919 + return -EINVAL; 2935 2920 2936 2921 if (slots->generation != ghc->generation) { 2937 2922 if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))
··· 2969 2952 int r; 2970 2953 gpa_t gpa = ghc->gpa + offset; 2971 2954 2972 - BUG_ON(len + offset > ghc->len); 2955 + if (WARN_ON_ONCE(len + offset > ghc->len)) 2956 + return -EINVAL; 2973 2957 2974 2958 if (slots->generation != ghc->generation) { 2975 2959 if (__kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len))