···
 virtual address size configured by the kernel. For example, with a
 virtual address size of 48, the PAC is 7 bits wide.

-Recent versions of GCC can compile code with APIAKey-based return
-address protection when passed the -msign-return-address option. This
-uses instructions in the HINT space (unless -march=armv8.3-a or higher
-is also passed), and such code can run on systems without the pointer
-authentication extension.
+When ARM64_PTR_AUTH_KERNEL is selected, the kernel will be compiled
+with HINT space pointer authentication instructions protecting
+function returns. Kernels built with this option will work on hardware
+with or without pointer authentication support.

 In addition to exec(), keys can also be reinitialized to random values
 using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
Documentation/cpu-freq/core.rst (+3, -3)
···
 The third argument is a struct cpufreq_freqs with the following
 values:

-===== ===========================
-cpu   number of the affected CPU
+====== ======================================
+policy a pointer to the struct cpufreq_policy
 old    old frequency
 new    new frequency
 flags  flags of the cpufreq driver
-===== ===========================
+====== ======================================

3. CPUFreq Table Generation with Operating Performance Point (OPP)
==================================================================
Documentation/filesystems/netfs_library.rst (+56, -39)
···
 .. SPDX-License-Identifier: GPL-2.0

 =================================
-NETWORK FILESYSTEM HELPER LIBRARY
+Network Filesystem Helper Library
 =================================

 .. Contents:
···
 The following services are provided:

- * Handles transparent huge pages (THPs).
+ * Handle folios that span multiple pages.

- * Insulates the netfs from VM interface changes.
+ * Insulate the netfs from VM interface changes.

- * Allows the netfs to arbitrarily split reads up into pieces, even ones that
-   don't match page sizes or page alignments and that may cross pages.
+ * Allow the netfs to arbitrarily split reads up into pieces, even ones that
+   don't match folio sizes or folio alignments and that may cross folios.

- * Allows the netfs to expand a readahead request in both directions to meet
-   its needs.
+ * Allow the netfs to expand a readahead request in both directions to meet its
+   needs.

- * Allows the netfs to partially fulfil a read, which will then be resubmitted.
+ * Allow the netfs to partially fulfil a read, which will then be resubmitted.

- * Handles local caching, allowing cached data and server-read data to be
+ * Handle local caching, allowing cached data and server-read data to be
   interleaved for a single request.

- * Handles clearing of bufferage that aren't on the server.
+ * Handle clearing of bufferage that aren't on the server.

 * Handle retrying of reads that failed, switching reads from the cache to the
   server as necessary.
···
 Three read helpers are provided::

- * void netfs_readahead(struct readahead_control *ractl,
-			const struct netfs_read_request_ops *ops,
-			void *netfs_priv);``
- * int netfs_readpage(struct file *file,
-		      struct page *page,
-		      const struct netfs_read_request_ops *ops,
-		      void *netfs_priv);
- * int netfs_write_begin(struct file *file,
-			 struct address_space *mapping,
-			 loff_t pos,
-			 unsigned int len,
-			 unsigned int flags,
-			 struct page **_page,
-			 void **_fsdata,
-			 const struct netfs_read_request_ops *ops,
-			 void *netfs_priv);
+	void netfs_readahead(struct readahead_control *ractl,
+			     const struct netfs_read_request_ops *ops,
+			     void *netfs_priv);
+	int netfs_readpage(struct file *file,
+			   struct folio *folio,
+			   const struct netfs_read_request_ops *ops,
+			   void *netfs_priv);
+	int netfs_write_begin(struct file *file,
+			      struct address_space *mapping,
+			      loff_t pos,
+			      unsigned int len,
+			      unsigned int flags,
+			      struct folio **_folio,
+			      void **_fsdata,
+			      const struct netfs_read_request_ops *ops,
+			      void *netfs_priv);

 Each corresponds to a VM operation, with the addition of a couple of parameters
 for the use of the read helpers:
···
 For ->readahead() and ->readpage(), the network filesystem should just jump
 into the corresponding read helper; whereas for ->write_begin(), it may be a
 little more complicated as the network filesystem might want to flush
-conflicting writes or track dirty data and needs to put the acquired page if an
-error occurs after calling the helper.
+conflicting writes or track dirty data and needs to put the acquired folio if
+an error occurs after calling the helper.

 The helpers manage the read request, calling back into the network filesystem
 through the suppplied table of operations.  Waits will be performed as
···
	void (*issue_op)(struct netfs_read_subrequest *subreq);
	bool (*is_still_valid)(struct netfs_read_request *rreq);
	int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
-				 struct page *page, void **_fsdata);
+				 struct folio *folio, void **_fsdata);
	void (*done)(struct netfs_read_request *rreq);
	void (*cleanup)(struct address_space *mapping, void *netfs_priv);
 };
···
   There is no return value; the netfs_subreq_terminated() function should be
   called to indicate whether or not the operation succeeded and how much data
-  it transferred.  The filesystem also should not deal with setting pages
+  it transferred.  The filesystem also should not deal with setting folios
   uptodate, unlocking them or dropping their refs - the helpers need to deal
   with this as they have to coordinate with copying to the local cache.

-  Note that the helpers have the pages locked, but not pinned.  It is possible
-  to use the ITER_XARRAY iov iterator to refer to the range of the inode that
-  is being operated upon without the need to allocate large bvec tables.
+  Note that the helpers have the folios locked, but not pinned.  It is
+  possible to use the ITER_XARRAY iov iterator to refer to the range of the
+  inode that is being operated upon without the need to allocate large bvec
+  tables.

 * ``is_still_valid()``
···
 * ``check_write_begin()``

   [Optional] This is called from the netfs_write_begin() helper once it has
-  allocated/grabbed the page to be modified to allow the filesystem to flush
+  allocated/grabbed the folio to be modified to allow the filesystem to flush
   conflicting state before allowing it to be modified.

-  It should return 0 if everything is now fine, -EAGAIN if the page should be
+  It should return 0 if everything is now fine, -EAGAIN if the folio should be
   regrabbed and any other error code to abort the operation.

 * ``done``

-  [Optional] This is called after the pages in the request have all been
+  [Optional] This is called after the folios in the request have all been
   unlocked (and marked uptodate if applicable).

 * ``cleanup``
···
 * If NETFS_SREQ_CLEAR_TAIL was set, a short read will be cleared to the
   end of the slice instead of reissuing.

- * Once the data is read, the pages that have been fully read/cleared:
+ * Once the data is read, the folios that have been fully read/cleared:

   * Will be marked uptodate.
···

   * Unlocked

- * Any pages that need writing to the cache will then have DIO writes issued.
+ * Any folios that need writing to the cache will then have DIO writes issued.

 * Synchronous operations will wait for reading to be complete.

- * Writes to the cache will proceed asynchronously and the pages will have the
+ * Writes to the cache will proceed asynchronously and the folios will have the
   PG_fscache mark removed when that completes.

 * The request structures will be cleaned up when everything has completed.
···
					   bool seek_data,
					   netfs_io_terminated_t term_func,
					   void *term_func_priv);
+
+	int (*prepare_write)(struct netfs_cache_resources *cres,
+			     loff_t *_start, size_t *_len, loff_t i_size);

	int (*write)(struct netfs_cache_resources *cres,
		     loff_t start_pos,
···
   indicating whether the termination is definitely happening in the caller's
   context.

+ * ``prepare_write()``
+
+   [Required] Called to adjust a write to the cache and check that there is
+   sufficient space in the cache.  The start and length values indicate the
+   size of the write that netfslib is proposing, and this can be adjusted by
+   the cache to respect DIO boundaries.  The file size is passed for
+   information.
+
 * ``write()``

   [Required] Called to write to the cache.  The start file offset is given
···
 there isn't a read request structure as well, such as writing dirty data to the
 cache.

+
+API Function Reference
+======================
+
 .. kernel-doc:: include/linux/netfs.h
+.. kernel-doc:: fs/netfs/read_helper.c
MAINTAINERS (+11, -2)
···
 RANDOM NUMBER DRIVER
 M:	"Theodore Ts'o" <tytso@mit.edu>
+M:	Jason A. Donenfeld <Jason@zx2c4.com>
 S:	Maintained
 F:	drivers/char/random.c
···
 F:	Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
 F:	drivers/media/platform/sunxi/sun8i-rotate/

+RPMSG TTY DRIVER
+M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+L:	linux-remoteproc@vger.kernel.org
+S:	Maintained
+F:	drivers/tty/rpmsg_tty.c
+
 RTL2830 MEDIA DRIVER
 M:	Antti Palosaari <crope@iki.fi>
 L:	linux-media@vger.kernel.org
···
 S390 IUCV NETWORK LAYER
 M:	Julian Wiedmann <jwi@linux.ibm.com>
-M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Alexandra Winter <wintera@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
···
 S390 NETWORK DRIVERS
 M:	Julian Wiedmann <jwi@linux.ibm.com>
-M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Alexandra Winter <wintera@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
arch/arm64/kvm/hyp/include/hyp/switch.h
···

 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);

+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
+
 /*
  * Allow the hypervisor to handle the exit with an exit handler if it has one.
  *
···
  */
 static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
+	/*
+	 * Save PSTATE early so that we can evaluate the vcpu mode
+	 * early on.
+	 */
+	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+
+	/*
+	 * Check whether we want to repaint the state one way or
+	 * another.
+	 */
+	early_exit_filter(vcpu, exit_code);
+
	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h (+6, -1)
···
 static inline void __sysreg_save_el2_return_state(struct kvm_cpu_context *ctxt)
 {
	ctxt->regs.pc = read_sysreg_el2(SYS_ELR);
-	ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);
+	/*
+	 * Guest PSTATE gets saved at guest fixup time in all
+	 * cases. We still need to handle the nVHE host side here.
+	 */
+	if (!has_vhe() && ctxt->__hyp_running_vcpu)
+		ctxt->regs.pstate = read_sysreg_el2(SYS_SPSR);

	if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN))
		ctxt_sys_reg(ctxt, DISR_EL1) = read_sysreg_s(SYS_VDISR_EL2);
arch/arm64/kvm/hyp/nvhe/switch.c (+1, -7)
···
 * Returns false if the guest ran in AArch32 when it shouldn't have, and
 * thus should exit to the host, or true if a the guest run loop can continue.
 */
-static bool handle_aarch32_guest(struct kvm_vcpu *vcpu, u64 *exit_code)
+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
···
		vcpu->arch.target = -1;
		*exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
		*exit_code |= ARM_EXCEPTION_IL;
-		return false;
	}
-
-	return true;
 }

 /* Switch to the guest for legacy non-VHE systems */
···
	do {
		/* Jump in the fire! */
		exit_code = __guest_enter(vcpu);
-
-		if (unlikely(!handle_aarch32_guest(vcpu, &exit_code)))
-			break;

		/* And we're baaack! */
	} while (fixup_guest_exit(vcpu, &exit_code));
arch/arm64/kvm/hyp/vhe/switch.c (+4)
···
	return hyp_exit_handlers;
 }

+static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+}
+
 /* Switch to the guest for VHE systems running in EL2 */
 static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
arch/parisc/Makefile (+5)
···
 #	Mike Shaver, Helge Deller and Martin K. Petersen
 #

+ifdef CONFIG_PARISC_SELF_EXTRACT
+boot := arch/parisc/boot
+KBUILD_IMAGE := $(boot)/bzImage
+else
 KBUILD_IMAGE := vmlinuz
+endif

 NM		= sh $(srctree)/arch/parisc/nm
 CHECKFLAGS	+= -D__hppa__=1
arch/parisc/configs/generic-64bit_defconfig (+13, -1)
···
 CONFIG_LOCALVERSION="-64bit"
 # CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_KERNEL_LZ4=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
 CONFIG_TASKSTATS=y
···
 CONFIG_BLK_DEV_INTEGRITY=y
 CONFIG_BINFMT_MISC=m
 # CONFIG_COMPACTION is not set
+CONFIG_MEMORY_FAILURE=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
···
 CONFIG_SCSI_SRP_ATTRS=y
 CONFIG_ISCSI_BOOT_SYSFS=y
 CONFIG_SCSI_MPT2SAS=y
-CONFIG_SCSI_LASI700=m
+CONFIG_SCSI_LASI700=y
 CONFIG_SCSI_SYM53C8XX_2=y
 CONFIG_SCSI_ZALON=y
 CONFIG_SCSI_QLA_ISCSI=m
 CONFIG_SCSI_DH=y
 CONFIG_ATA=y
+CONFIG_SATA_SIL=y
+CONFIG_SATA_SIS=y
+CONFIG_SATA_VIA=y
 CONFIG_PATA_NS87415=y
 CONFIG_PATA_SIL680=y
 CONFIG_ATA_GENERIC=y
···
 CONFIG_BLK_DEV_DM=m
 CONFIG_DM_RAID=m
 CONFIG_DM_UEVENT=y
+CONFIG_DM_AUDIT=y
 CONFIG_FUSION=y
 CONFIG_FUSION_SPI=y
 CONFIG_FUSION_SAS=y
···
 CONFIG_FB_MATROX_I2C=y
 CONFIG_FB_MATROX_MAVEN=y
 CONFIG_FB_RADEON=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_CLUT224 is not set
 CONFIG_HIDRAW=y
 CONFIG_HID_PID=y
 CONFIG_USB_HIDDEV=y
 CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
 CONFIG_UIO=y
 CONFIG_UIO_PDRV_GENIRQ=m
 CONFIG_UIO_AEC=m
arch/parisc/install.sh (+1)
···
 if [ -n "${INSTALLKERNEL}" ]; then
   if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
   if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+  if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
 fi

 # Default install
arch/parisc/kernel/time.c (+7, -21)
···
 static int __init init_cr16_clocksource(void)
 {
	/*
-	 * The cr16 interval timers are not syncronized across CPUs on
-	 * different sockets, so mark them unstable and lower rating on
-	 * multi-socket SMP systems.
+	 * The cr16 interval timers are not syncronized across CPUs, even if
+	 * they share the same socket.
	 */
	if (num_online_cpus() > 1 && !running_on_qemu) {
-		int cpu;
-		unsigned long cpu0_loc;
-		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
+		/* mark sched_clock unstable */
+		clear_sched_clock_stable();

-		for_each_online_cpu(cpu) {
-			if (cpu == 0)
-				continue;
-			if ((cpu0_loc != 0) &&
-			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
-				continue;
-
-			/* mark sched_clock unstable */
-			clear_sched_clock_stable();
-
-			clocksource_cr16.name = "cr16_unstable";
-			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
-			clocksource_cr16.rating = 0;
-			break;
-		}
+		clocksource_cr16.name = "cr16_unstable";
+		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+		clocksource_cr16.rating = 0;
	}

	/* register at clocksource framework */
arch/s390/configs/debug_defconfig
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
 CONFIG_DIAG288_WATCHDOG=m
+# CONFIG_DRM_DEBUG_MODESET_LOCK is not set
 CONFIG_FB=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
···
 CONFIG_CRC7=m
 CONFIG_CRC8=m
 CONFIG_RANDOM32_SELFTEST=y
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_HEADERS_INSTALL=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
···
 CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
 CONFIG_DEBUG_PER_CPU_MAPS=y
 CONFIG_KFENCE=y
+CONFIG_KFENCE_STATIC_KEYS=y
 CONFIG_DEBUG_SHIRQ=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_DETECT_HUNG_TASK=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_DEBUG_ENTRY=y
 CONFIG_CIO_INJECT=y
 CONFIG_KUNIT=m
···
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
 CONFIG_LKDTM=m
 CONFIG_TEST_MIN_HEAP=y
-CONFIG_KPROBES_SANITY_TEST=y
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_RBTREE_TEST=y
 CONFIG_INTERVAL_TREE_TEST=m
 CONFIG_PERCPU_TEST=m
arch/s390/configs/defconfig (+6, -1)
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_CRC4=m
 CONFIG_CRC7=m
 CONFIG_CRC8=m
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
 CONFIG_MAGIC_SYSRQ=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_KUNIT=m
 CONFIG_KUNIT_DEBUGFS=y
 CONFIG_LKDTM=m
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BPF=m
arch/s390/configs/zfcpdump_defconfig (+2)
···
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_LSM="yama,loadpin,safesetid,integrity"
 # CONFIG_ZLIB_DFLTCC is not set
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_PRINTK_TIME=y
 # CONFIG_SYMBOLIC_ERRNAME is not set
 CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_PANIC_ON_OOPS=y
arch/x86/entry/entry_64.S
···
		ud2
 1:
 #endif
+#ifdef CONFIG_XEN_PV
+	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
+#endif
+
	POP_REGS pop_rdi=0

	/*
···
 .Lparanoid_entry_checkgs:
	/* EBX = 1 -> kernel GSBASE active, no restore required */
	movl	$1, %ebx
+
	/*
	 * The kernel-enforced convention is a negative GSBASE indicates
	 * a kernel value. No SWAPGS needed on entry and exit.
	 */
	movl	$MSR_GS_BASE, %ecx
	rdmsr
	testl	%edx, %edx
-	jns	.Lparanoid_entry_swapgs
-	ret
-
-.Lparanoid_entry_swapgs:
-	swapgs
-
-	/*
-	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
-	 * unconditional CR3 write, even in the PTI case. So do an lfence
-	 * to prevent GS speculation, regardless of whether PTI is enabled.
-	 */
-	FENCE_SWAPGS_KERNEL_ENTRY
+	js	.Lparanoid_kernel_gsbase

	/* EBX = 0 -> SWAPGS required on exit */
	xorl	%ebx, %ebx
+	swapgs
+.Lparanoid_kernel_gsbase:
+
+	FENCE_SWAPGS_KERNEL_ENTRY
	ret
 SYM_CODE_END(paranoid_entry)
···
	pushq	%r12
	ret

-.Lerror_entry_done_lfence:
-	FENCE_SWAPGS_KERNEL_ENTRY
-.Lerror_entry_done:
-	ret
-
	/*
	 * There are two places in the kernel that can potentially fault with
	 * usergs. Handle them here.  B stepping K8s sometimes report a
···
	 * .Lgs_change's error handler with kernel gsbase.
	 */
	SWAPGS
-	FENCE_SWAPGS_USER_ENTRY
-	jmp .Lerror_entry_done
+
+	/*
+	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
+	 * kernel or user gsbase.
+	 */
+.Lerror_entry_done_lfence:
+	FENCE_SWAPGS_KERNEL_ENTRY
+	ret

 .Lbstep_iret:
	/* Fix truncated RIP */
arch/x86/include/asm/sev-common.h
···

 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)

+/*
+ * Error codes related to GHCB input that can be communicated back to the guest
+ * by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
+ */
+#define GHCB_ERR_NOT_REGISTERED		1
+#define GHCB_ERR_INVALID_USAGE		2
+#define GHCB_ERR_INVALID_SCRATCH_AREA	3
+#define GHCB_ERR_MISSING_INPUT		4
+#define GHCB_ERR_INVALID_INPUT		5
+#define GHCB_ERR_INVALID_EVENT		6
+
 #endif
arch/x86/kernel/sev.c
···
				   char *dst, char *buf, size_t size)
 {
	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
-	char __user *target = (char __user *)dst;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8  d1;

	/*
	 * This function uses __put_user() independent of whether kernel or user
···
	 * instructions here would cause infinite nesting.
	 */
	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *target = (u8 __user *)dst;
+
		memcpy(&d1, buf, 1);
		if (__put_user(d1, target))
			goto fault;
		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *target = (u16 __user *)dst;
+
		memcpy(&d2, buf, 2);
		if (__put_user(d2, target))
			goto fault;
		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *target = (u32 __user *)dst;
+
		memcpy(&d4, buf, 4);
		if (__put_user(d4, target))
			goto fault;
		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *target = (u64 __user *)dst;
+
		memcpy(&d8, buf, 8);
		if (__put_user(d8, target))
			goto fault;
		break;
+	}
	default:
		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
		return ES_UNSUPPORTED;
···
				  char *src, char *buf, size_t size)
 {
	unsigned long error_code = X86_PF_PROT;
-	char __user *s = (char __user *)src;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8  d1;

	/*
	 * This function uses __get_user() independent of whether kernel or user
···
	 * instructions here would cause infinite nesting.
	 */
	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *s = (u8 __user *)src;
+
		if (__get_user(d1, s))
			goto fault;
		memcpy(buf, &d1, 1);
		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *s = (u16 __user *)src;
+
		if (__get_user(d2, s))
			goto fault;
		memcpy(buf, &d2, 2);
		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *s = (u32 __user *)src;
+
		if (__get_user(d4, s))
			goto fault;
		memcpy(buf, &d4, 4);
		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *s = (u64 __user *)src;
		if (__get_user(d8, s))
			goto fault;
		memcpy(buf, &d8, 8);
		break;
+	}
	default:
		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
		return ES_UNSUPPORTED;
arch/x86/kernel/tsc.c (+24, -4)
···

 EXPORT_SYMBOL_GPL(mark_tsc_unstable);

+static void __init tsc_disable_clocksource_watchdog(void)
+{
+	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+}
+
 static void __init check_system_tsc_reliable(void)
 {
 #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
···
 #endif
	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
		tsc_clocksource_reliable = 1;
+
+	/*
+	 * Disable the clocksource watchdog when the system has:
+	 *  - TSC running at constant frequency
+	 *  - TSC which does not stop in C-States
+	 *  - the TSC_ADJUST register which allows to detect even minimal
+	 *    modifications
+	 *  - not more than two sockets. As the number of sockets cannot be
+	 *    evaluated at the early boot stage where this has to be
+	 *    invoked, check the number of online memory nodes as a
+	 *    fallback solution which is an reasonable estimate.
+	 */
+	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
+	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
+	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
+	    nr_online_nodes <= 2)
+		tsc_disable_clocksource_watchdog();
 }

 /*
···
	if (tsc_unstable)
		goto unreg;

-	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
-
	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
···
	}

	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+		tsc_disable_clocksource_watchdog();

	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
	detect_art();
arch/x86/kernel/tsc_sync.c (+41)
···
 };

 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
+static struct timer_list tsc_sync_check_timer;

 /*
  * TSC's on different sockets may be reset asynchronously.
···
		adj->warned = true;
	}
 }
+
+/*
+ * Normally the tsc_sync will be checked every time system enters idle
+ * state, but there is still caveat that a system won't enter idle,
+ * either because it's too busy or configured purposely to not enter
+ * idle.
+ *
+ * So setup a periodic timer (every 10 minutes) to make sure the check
+ * is always on.
+ */
+
+#define SYNC_CHECK_INTERVAL		(HZ * 600)
+
+static void tsc_sync_check_timer_fn(struct timer_list *unused)
+{
+	int next_cpu;
+
+	tsc_verify_tsc_adjust(false);
+
+	/* Run the check for all onlined CPUs in turn */
+	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+	if (next_cpu >= nr_cpu_ids)
+		next_cpu = cpumask_first(cpu_online_mask);
+
+	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
+	add_timer_on(&tsc_sync_check_timer, next_cpu);
+}
+
+static int __init start_sync_check_timer(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
+		return 0;
+
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
+	add_timer(&tsc_sync_check_timer);
+
+	return 0;
+}
+late_initcall(start_sync_check_timer);

 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
				    unsigned int cpu, bool bootcpu)
arch/x86/kvm/irq.h
···
	struct kvm_io_device dev_master;
	struct kvm_io_device dev_slave;
	struct kvm_io_device dev_elcr;
-	void (*ack_notifier)(void *opaque, int irq);
	unsigned long irq_states[PIC_NUM_PINS];
 };
arch/x86/kvm/lapic.c (+1, -1)
···
 static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
 {
	int highest_irr;
-	if (apic->vcpu->arch.apicv_active)
+	if (kvm_x86_ops.sync_pir_to_irr)
		highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
	else
		highest_irr = apic_find_highest_irr(apic);
arch/x86/kvm/mmu/mmu.c (+73, -47)
···
	flush = kvm_handle_gfn_range(kvm, range, kvm_unmap_rmapp);

	if (is_tdp_mmu_enabled(kvm))
-		flush |= kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);
+		flush = kvm_tdp_mmu_unmap_gfn_range(kvm, range, flush);

	return flush;
 }
···

 static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	return sp->role.invalid ||
+	if (sp->role.invalid)
+		return true;
+
+	/* TDP MMU pages due not use the MMU generation. */
+	return !sp->tdp_mmu_page &&
	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
···
	iterator->shadow_addr = root;
	iterator->level = vcpu->arch.mmu->shadow_root_level;

-	if (iterator->level == PT64_ROOT_4LEVEL &&
+	if (iterator->level >= PT64_ROOT_4LEVEL &&
	    vcpu->arch.mmu->root_level < PT64_ROOT_4LEVEL &&
	    !vcpu->arch.mmu->direct_map)
-		--iterator->level;
+		iterator->level = PT32E_ROOT_LEVEL;

	if (iterator->level == PT32E_ROOT_LEVEL) {
		/*
···
	return true;
 }

+/*
+ * Returns true if the page fault is stale and needs to be retried, i.e. if the
+ * root was invalidated by a memslot update or a relevant mmu_notifier fired.
+ */
+static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
+				struct kvm_page_fault *fault, int mmu_seq)
+{
+	if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa)))
+		return true;
+
+	return fault->slot &&
+	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+}
+
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
···
	else
		write_lock(&vcpu->kvm->mmu_lock);

-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
		goto out_unlock;
+
	r = make_mmu_pages_available(vcpu);
	if (r)
		goto out_unlock;
···
	struct kvm_mmu *context = &vcpu->arch.guest_mmu;
	struct kvm_mmu_role_regs regs = {
		.cr0 = cr0,
-		.cr4 = cr4,
+		.cr4 = cr4 & ~X86_CR4_PKE,
		.efer = efer,
	};
	union kvm_mmu_role new_role;
···
	context->direct_map = false;

	update_permission_bitmask(context, true);
-	update_pkru_bitmask(context);
+	context->pkru_mask = 0;
	reset_rsvds_bits_mask_ept(vcpu, context, execonly);
	reset_ept_shadow_zero_bits_mask(vcpu, context, execonly);
 }
···
	/*
	 * Invalidate all MMU roles to force them to reinitialize as CPUID
	 * information is factored into reserved bit calculations.
+	 *
+	 * Correctly handling multiple vCPU models with respect to paging and
+	 * physical address properties) in a single VM would require tracking
+	 * all relevant CPUID information in kvm_mmu_page_role. That is very
+	 * undesirable as it would increase the memory requirements for
+	 * gfn_track (see struct kvm_mmu_page_role comments). For now that
+	 * problem is swept under the rug; KVM's CPUID API is horrific and
+	 * it's all but impossible to solve it without introducing a new API.
	 */
	vcpu->arch.root_mmu.mmu_role.ext.valid = 0;
	vcpu->arch.guest_mmu.mmu_role.ext.valid = 0;
···
	kvm_mmu_reset_context(vcpu);

	/*
-	 * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
-	 * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc.. aren't
-	 * tracked in kvm_mmu_page_role.  As a result, KVM may miss guest page
-	 * faults due to reusing SPs/SPTEs. Alert userspace, but otherwise
-	 * sweep the problem under the rug.
-	 *
-	 * KVM's horrific CPUID ABI makes the problem all but impossible to
-	 * solve, as correctly handling multiple vCPU models (with respect to
-	 * paging and physical address properties) in a single VM would require
-	 * tracking all relevant CPUID information in kvm_mmu_page_role.  That
-	 * is very undesirable as it would double the memory requirements for
-	 * gfn_track (see struct kvm_mmu_page_role comments), and in practice
-	 * no sane VMM mucks with the core vCPU model on the fly.
+	 * Changing guest CPUID after KVM_RUN is forbidden, see the comment in
+	 * kvm_arch_vcpu_ioctl().
	 */
-	if (vcpu->arch.last_vmentry_cpu != -1) {
-		pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} after KVM_RUN may cause guest instability\n");
-		pr_warn_ratelimited("KVM: KVM_SET_CPUID{,2} will fail after KVM_RUN starting with Linux 5.16\n");
-	}
+	KVM_BUG_ON(vcpu->arch.last_vmentry_cpu != -1, vcpu->kvm);
 }

 void kvm_mmu_reset_context(struct kvm_vcpu *vcpu)
···

 void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
-	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
+	kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
	++vcpu->stat.invlpg;
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);
···
 void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
				   const struct kvm_memory_slot *slot)
 {
-	bool flush = false;
-
	if (kvm_memslots_have_rmaps(kvm)) {
		write_lock(&kvm->mmu_lock);
		/*
···
		 * logging at a 4k granularity and never creates collapsible
		 * 2m SPTEs during dirty logging.
		 */
-		flush = slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true);
-		if (flush)
+		if (slot_handle_level_4k(kvm, slot, kvm_mmu_zap_collapsible_spte, true))
			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
		write_unlock(&kvm->mmu_lock);
	}

	if (is_tdp_mmu_enabled(kvm)) {
		read_lock(&kvm->mmu_lock);
-		flush = kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot, flush);
-		if (flush)
-			kvm_arch_flush_remote_tlbs_memslot(kvm, slot);
+
kvm_tdp_mmu_zap_collapsible_sptes(kvm, slot);58885874 read_unlock(&kvm->mmu_lock);58895875 }58905876}···61906182 mmu_audit_disable();61916183}6192618461856185+/*61866186+ * Calculate the effective recovery period, accounting for '0' meaning "let KVM61876187+ * select a halving time of 1 hour". Returns true if recovery is enabled.61886188+ */61896189+static bool calc_nx_huge_pages_recovery_period(uint *period)61906190+{61916191+ /*61926192+ * Use READ_ONCE to get the params, this may be called outside of the61936193+ * param setters, e.g. by the kthread to compute its next timeout.61946194+ */61956195+ bool enabled = READ_ONCE(nx_huge_pages);61966196+ uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio);61976197+61986198+ if (!enabled || !ratio)61996199+ return false;62006200+62016201+ *period = READ_ONCE(nx_huge_pages_recovery_period_ms);62026202+ if (!*period) {62036203+ /* Make sure the period is not less than one second. */62046204+ ratio = min(ratio, 3600u);62056205+ *period = 60 * 60 * 1000 / ratio;62066206+ }62076207+ return true;62086208+}62096209+61936210static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel_param *kp)61946211{61956212 bool was_recovery_enabled, is_recovery_enabled;61966213 uint old_period, new_period;61976214 int err;6198621561996199- was_recovery_enabled = nx_huge_pages_recovery_ratio;62006200- old_period = nx_huge_pages_recovery_period_ms;62166216+ was_recovery_enabled = calc_nx_huge_pages_recovery_period(&old_period);6201621762026218 err = param_set_uint(val, kp);62036219 if (err)62046220 return err;6205622162066206- is_recovery_enabled = nx_huge_pages_recovery_ratio;62076207- new_period = nx_huge_pages_recovery_period_ms;62226222+ is_recovery_enabled = calc_nx_huge_pages_recovery_period(&new_period);6208622362096209- if (READ_ONCE(nx_huge_pages) && is_recovery_enabled &&62246224+ if (is_recovery_enabled &&62106225 (!was_recovery_enabled || old_period > new_period)) {62116226 struct kvm 
*kvm;62126227···6293626262946263static long get_nx_lpage_recovery_timeout(u64 start_time)62956264{62966296- uint ratio = READ_ONCE(nx_huge_pages_recovery_ratio);62976297- uint period = READ_ONCE(nx_huge_pages_recovery_period_ms);62656265+ bool enabled;62666266+ uint period;6298626762996299- if (!period && ratio) {63006300- /* Make sure the period is not less than one second. */63016301- ratio = min(ratio, 3600u);63026302- period = 60 * 60 * 1000 / ratio;63036303- }62686268+ enabled = calc_nx_huge_pages_recovery_period(&period);6304626963056305- return READ_ONCE(nx_huge_pages) && ratio63066306- ? start_time + msecs_to_jiffies(period) - get_jiffies_64()63076307- : MAX_SCHEDULE_TIMEOUT;62706270+ return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64()62716271+ : MAX_SCHEDULE_TIMEOUT;63086272}6309627363106274static int kvm_nx_lpage_recovery_worker(struct kvm *kvm, uintptr_t data)
+2-1
arch/x86/kvm/mmu/paging_tmpl.h
···

 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;

 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
+14-24
arch/x86/kvm/mmu/tdp_mmu.c
···
 	struct kvm_mmu_page *sp = sptep_to_sp(rcu_dereference(pt));
 	int level = sp->role.level;
 	gfn_t base_gfn = sp->gfn;
-	u64 old_child_spte;
-	u64 *sptep;
-	gfn_t gfn;
 	int i;

 	trace_kvm_mmu_prepare_zap_page(sp);
···
 	tdp_mmu_unlink_page(kvm, sp, shared);

 	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-		sptep = rcu_dereference(pt) + i;
-		gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 *sptep = rcu_dereference(pt) + i;
+		gfn_t gfn = base_gfn + i * KVM_PAGES_PER_HPAGE(level);
+		u64 old_child_spte;

 		if (shared) {
 			/*
···
 				    shared);
 	}

-	kvm_flush_remote_tlbs_with_address(kvm, gfn,
+	kvm_flush_remote_tlbs_with_address(kvm, base_gfn,
 					   KVM_PAGES_PER_HPAGE(level + 1));

 	call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
···
 {
 	struct kvm_mmu_page *root;

-	for_each_tdp_mmu_root(kvm, root, range->slot->as_id)
-		flush |= zap_gfn_range(kvm, root, range->start, range->end,
-				       range->may_block, flush, false);
+	for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false)
+		flush = zap_gfn_range(kvm, root, range->start, range->end,
+				      range->may_block, flush, false);

 	return flush;
 }
···
  * Clear leaf entries which could be replaced by large mappings, for
  * GFNs within the slot.
  */
-static bool zap_collapsible_spte_range(struct kvm *kvm,
+static void zap_collapsible_spte_range(struct kvm *kvm,
 				       struct kvm_mmu_page *root,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+				       const struct kvm_memory_slot *slot)
 {
 	gfn_t start = slot->base_gfn;
 	gfn_t end = start + slot->npages;
···

 	tdp_root_for_each_pte(iter, root, start, end) {
 retry:
-		if (tdp_mmu_iter_cond_resched(kvm, &iter, flush, true)) {
-			flush = false;
+		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
 			continue;
-		}

 		if (!is_shadow_present_pte(iter.old_spte) ||
 		    !is_last_spte(iter.old_spte, iter.level))
···
 							    pfn, PG_LEVEL_NUM))
 			continue;

+		/* Note, a successful atomic zap also does a remote TLB flush. */
 		if (!tdp_mmu_zap_spte_atomic(kvm, &iter)) {
 			/*
 			 * The iter must explicitly re-read the SPTE because
···
 			iter.old_spte = READ_ONCE(*rcu_dereference(iter.sptep));
 			goto retry;
 		}
-		flush = true;
 	}

 	rcu_read_unlock();
-
-	return flush;
 }

 /*
  * Clear non-leaf entries (and free associated page tables) which could
  * be replaced by large mappings, for GFNs within the slot.
  */
-bool kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       bool flush)
+void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot)
 {
 	struct kvm_mmu_page *root;

 	lockdep_assert_held_read(&kvm->mmu_lock);

 	for_each_tdp_mmu_root_yield_safe(kvm, root, slot->as_id, true)
-		flush = zap_collapsible_spte_range(kvm, root, slot, flush);
-
-	return flush;
+		zap_collapsible_spte_range(kvm, root, slot);
 }

 /*
arch/x86/kvm/svm/avic.c
···
 bool svm_check_apicv_inhibit_reasons(ulong bit)
 {
 	ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
+			  BIT(APICV_INHIBIT_REASON_ABSENT) |
 			  BIT(APICV_INHIBIT_REASON_HYPERV) |
 			  BIT(APICV_INHIBIT_REASON_NESTED) |
 			  BIT(APICV_INHIBIT_REASON_IRQWIN) |
···
 static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
+	int cpu = get_cpu();

+	WARN_ON(cpu != vcpu->cpu);
 	svm->avic_is_running = is_run;

-	if (!kvm_vcpu_apicv_active(vcpu))
-		return;
-
-	if (is_run)
-		avic_vcpu_load(vcpu, vcpu->cpu);
-	else
-		avic_vcpu_put(vcpu);
+	if (kvm_vcpu_apicv_active(vcpu)) {
+		if (is_run)
+			avic_vcpu_load(vcpu, cpu);
+		else
+			avic_vcpu_put(vcpu);
+	}
+	put_cpu();
 }

 void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
+1-1
arch/x86/kvm/svm/pmu.c
···
 	pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;

 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
-	pmu->reserved_bits = 0xffffffff00200000ull;
+	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+146-117
arch/x86/kvm/svm/sev.c
···
 	return false;
 }

-static int sev_lock_for_migration(struct kvm *kvm)
+static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
+	struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;
+	int r = -EBUSY;
+
+	if (dst_kvm == src_kvm)
+		return -EINVAL;

 	/*
-	 * Bail if this VM is already involved in a migration to avoid deadlock
-	 * between two VMs trying to migrate to/from each other.
+	 * Bail if these VMs are already involved in a migration to avoid
+	 * deadlock between two VMs trying to migrate to/from each other.
 	 */
-	if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
+	if (atomic_cmpxchg_acquire(&dst_sev->migration_in_progress, 0, 1))
 		return -EBUSY;

-	mutex_lock(&kvm->lock);
+	if (atomic_cmpxchg_acquire(&src_sev->migration_in_progress, 0, 1))
+		goto release_dst;

+	r = -EINTR;
+	if (mutex_lock_killable(&dst_kvm->lock))
+		goto release_src;
+	if (mutex_lock_killable(&src_kvm->lock))
+		goto unlock_dst;
 	return 0;
+
+unlock_dst:
+	mutex_unlock(&dst_kvm->lock);
+release_src:
+	atomic_set_release(&src_sev->migration_in_progress, 0);
+release_dst:
+	atomic_set_release(&dst_sev->migration_in_progress, 0);
+	return r;
 }

-static void sev_unlock_after_migration(struct kvm *kvm)
+static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
 {
-	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct kvm_sev_info *dst_sev = &to_kvm_svm(dst_kvm)->sev_info;
+	struct kvm_sev_info *src_sev = &to_kvm_svm(src_kvm)->sev_info;

-	mutex_unlock(&kvm->lock);
-	atomic_set_release(&sev->migration_in_progress, 0);
+	mutex_unlock(&dst_kvm->lock);
+	mutex_unlock(&src_kvm->lock);
+	atomic_set_release(&dst_sev->migration_in_progress, 0);
+	atomic_set_release(&src_sev->migration_in_progress, 0);
 }

···
 	dst->asid = src->asid;
 	dst->handle = src->handle;
 	dst->pages_locked = src->pages_locked;
+	dst->enc_context_owner = src->enc_context_owner;

 	src->asid = 0;
 	src->active = false;
 	src->handle = 0;
 	src->pages_locked = 0;
+	src->enc_context_owner = NULL;

-	INIT_LIST_HEAD(&dst->regions_list);
-	list_replace_init(&src->regions_list, &dst->regions_list);
+	list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list);
 }

 static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
···
 	bool charged = false;
 	int ret;

-	ret = sev_lock_for_migration(kvm);
-	if (ret)
-		return ret;
-
-	if (sev_guest(kvm)) {
-		ret = -EINVAL;
-		goto out_unlock;
-	}
-
 	source_kvm_file = fget(source_fd);
 	if (!file_is_kvm(source_kvm_file)) {
 		ret = -EBADF;
···
 	}

 	source_kvm = source_kvm_file->private_data;
-	ret = sev_lock_for_migration(source_kvm);
+	ret = sev_lock_two_vms(kvm, source_kvm);
 	if (ret)
 		goto out_fput;

-	if (!sev_guest(source_kvm)) {
+	if (sev_guest(kvm) || !sev_guest(source_kvm)) {
 		ret = -EINVAL;
-		goto out_source;
+		goto out_unlock;
 	}

 	src_sev = &to_kvm_svm(source_kvm)->sev_info;
+
+	/*
+	 * VMs mirroring src's encryption context rely on it to keep the
+	 * ASID allocated, but below we are clearing src_sev->asid.
+	 */
+	if (src_sev->num_mirrored_vms) {
+		ret = -EBUSY;
+		goto out_unlock;
+	}
+
 	dst_sev->misc_cg = get_current_misc_cg();
 	cg_cleanup_sev = dst_sev;
 	if (dst_sev->misc_cg != src_sev->misc_cg) {
···
 	sev_misc_cg_uncharge(cg_cleanup_sev);
 	put_misc_cg(cg_cleanup_sev->misc_cg);
 	cg_cleanup_sev->misc_cg = NULL;
-out_source:
-	sev_unlock_after_migration(source_kvm);
+out_unlock:
+	sev_unlock_two_vms(kvm, source_kvm);
 out_fput:
 	if (source_kvm_file)
 		fput(source_kvm_file);
-out_unlock:
-	sev_unlock_after_migration(kvm);
 	return ret;
 }

···
 {
 	struct file *source_kvm_file;
 	struct kvm *source_kvm;
-	struct kvm_sev_info source_sev, *mirror_sev;
+	struct kvm_sev_info *source_sev, *mirror_sev;
 	int ret;

 	source_kvm_file = fget(source_fd);
 	if (!file_is_kvm(source_kvm_file)) {
 		ret = -EBADF;
-		goto e_source_put;
+		goto e_source_fput;
 	}

 	source_kvm = source_kvm_file->private_data;
-	mutex_lock(&source_kvm->lock);
+	ret = sev_lock_two_vms(kvm, source_kvm);
+	if (ret)
+		goto e_source_fput;

-	if (!sev_guest(source_kvm)) {
+	/*
+	 * Mirrors of mirrors should work, but let's not get silly.  Also
+	 * disallow out-of-band SEV/SEV-ES init if the target is already an
+	 * SEV guest, or if vCPUs have been created.  KVM relies on vCPUs being
+	 * created after SEV/SEV-ES initialization, e.g. to init intercepts.
+	 */
+	if (sev_guest(kvm) || !sev_guest(source_kvm) ||
+	    is_mirroring_enc_context(source_kvm) || kvm->created_vcpus) {
 		ret = -EINVAL;
-		goto e_source_unlock;
+		goto e_unlock;
 	}
-
-	/* Mirrors of mirrors should work, but let's not get silly */
-	if (is_mirroring_enc_context(source_kvm) || source_kvm == kvm) {
-		ret = -EINVAL;
-		goto e_source_unlock;
-	}
-
-	memcpy(&source_sev, &to_kvm_svm(source_kvm)->sev_info,
-	       sizeof(source_sev));

 	/*
 	 * The mirror kvm holds an enc_context_owner ref so its asid can't
 	 * disappear until we're done with it
 	 */
+	source_sev = &to_kvm_svm(source_kvm)->sev_info;
 	kvm_get_kvm(source_kvm);
-
-	fput(source_kvm_file);
-	mutex_unlock(&source_kvm->lock);
-	mutex_lock(&kvm->lock);
-
-	/*
-	 * Disallow out-of-band SEV/SEV-ES init if the target is already an
-	 * SEV guest, or if vCPUs have been created.  KVM relies on vCPUs being
-	 * created after SEV/SEV-ES initialization, e.g. to init intercepts.
-	 */
-	if (sev_guest(kvm) || kvm->created_vcpus) {
-		ret = -EINVAL;
-		goto e_mirror_unlock;
-	}
+	source_sev->num_mirrored_vms++;

 	/* Set enc_context_owner and copy its encryption context over */
 	mirror_sev = &to_kvm_svm(kvm)->sev_info;
 	mirror_sev->enc_context_owner = source_kvm;
 	mirror_sev->active = true;
-	mirror_sev->asid = source_sev.asid;
-	mirror_sev->fd = source_sev.fd;
-	mirror_sev->es_active = source_sev.es_active;
-	mirror_sev->handle = source_sev.handle;
+	mirror_sev->asid = source_sev->asid;
+	mirror_sev->fd = source_sev->fd;
+	mirror_sev->es_active = source_sev->es_active;
+	mirror_sev->handle = source_sev->handle;
+	INIT_LIST_HEAD(&mirror_sev->regions_list);
+	ret = 0;
+
 	/*
 	 * Do not copy ap_jump_table. Since the mirror does not share the same
 	 * KVM contexts as the original, and they may have different
 	 * memory-views.
 	 */

-	mutex_unlock(&kvm->lock);
-	return 0;
-
-e_mirror_unlock:
-	mutex_unlock(&kvm->lock);
-	kvm_put_kvm(source_kvm);
-	return ret;
-e_source_unlock:
-	mutex_unlock(&source_kvm->lock);
-e_source_put:
+e_unlock:
+	sev_unlock_two_vms(kvm, source_kvm);
+e_source_fput:
 	if (source_kvm_file)
 		fput(source_kvm_file);
 	return ret;
···
 	struct list_head *head = &sev->regions_list;
 	struct list_head *pos, *q;

+	WARN_ON(sev->num_mirrored_vms);
+
 	if (!sev_guest(kvm))
 		return;

 	/* If this is a mirror_kvm release the enc_context_owner and skip sev cleanup */
 	if (is_mirroring_enc_context(kvm)) {
-		kvm_put_kvm(sev->enc_context_owner);
+		struct kvm *owner_kvm = sev->enc_context_owner;
+		struct kvm_sev_info *owner_sev = &to_kvm_svm(owner_kvm)->sev_info;
+
+		mutex_lock(&owner_kvm->lock);
+		if (!WARN_ON(!owner_sev->num_mirrored_vms))
+			owner_sev->num_mirrored_vms--;
+		mutex_unlock(&owner_kvm->lock);
+		kvm_put_kvm(owner_kvm);
 		return;
 	}
-
-	mutex_lock(&kvm->lock);

 	/*
 	 * Ensure that all guest tagged cache entries are flushed before
···
 			cond_resched();
 		}
 	}
-
-	mutex_unlock(&kvm->lock);

 	sev_unbind_asid(kvm, sev->handle);
 	sev_asid_free(sev);
···
 		__free_page(virt_to_page(svm->sev_es.vmsa));

 	if (svm->sev_es.ghcb_sa_free)
-		kfree(svm->sev_es.ghcb_sa);
+		kvfree(svm->sev_es.ghcb_sa);
 }

 static void dump_ghcb(struct vcpu_svm *svm)
···
 	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
 }

-static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu;
 	struct ghcb *ghcb;
-	u64 exit_code = 0;
+	u64 exit_code;
+	u64 reason;

 	ghcb = svm->sev_es.ghcb;

-	/* Only GHCB Usage code 0 is supported */
-	if (ghcb->ghcb_usage)
-		goto vmgexit_err;
-
 	/*
-	 * Retrieve the exit code now even though is may not be marked valid
+	 * Retrieve the exit code now even though it may not be marked valid
 	 * as it could help with debugging.
 	 */
 	exit_code = ghcb_get_sw_exit_code(ghcb);
+
+	/* Only GHCB Usage code 0 is supported */
+	if (ghcb->ghcb_usage) {
+		reason = GHCB_ERR_INVALID_USAGE;
+		goto vmgexit_err;
+	}
+
+	reason = GHCB_ERR_MISSING_INPUT;

 	if (!ghcb_sw_exit_code_is_valid(ghcb) ||
 	    !ghcb_sw_exit_info_1_is_valid(ghcb) ||
···
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		break;
 	default:
+		reason = GHCB_ERR_INVALID_EVENT;
 		goto vmgexit_err;
 	}

-	return 0;
+	return true;

 vmgexit_err:
 	vcpu = &svm->vcpu;

-	if (ghcb->ghcb_usage) {
+	if (reason == GHCB_ERR_INVALID_USAGE) {
 		vcpu_unimpl(vcpu, "vmgexit: ghcb usage %#x is not valid\n",
 			    ghcb->ghcb_usage);
+	} else if (reason == GHCB_ERR_INVALID_EVENT) {
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx is not valid\n",
+			    exit_code);
 	} else {
-		vcpu_unimpl(vcpu, "vmgexit: exit reason %#llx is not valid\n",
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx input is not valid\n",
 			    exit_code);
 		dump_ghcb(svm);
 	}

-	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
-	vcpu->run->internal.ndata = 2;
-	vcpu->run->internal.data[0] = exit_code;
-	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+	/* Clear the valid entries fields */
+	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));

-	return -EINVAL;
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, reason);
+
+	return false;
 }

 void sev_es_unmap_ghcb(struct vcpu_svm *svm)
···
 		svm->sev_es.ghcb_sa_sync = false;
 	}

-	kfree(svm->sev_es.ghcb_sa);
+	kvfree(svm->sev_es.ghcb_sa);
 	svm->sev_es.ghcb_sa = NULL;
 	svm->sev_es.ghcb_sa_free = false;
 }
···
 	scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
 	if (!scratch_gpa_beg) {
 		pr_err("vmgexit: scratch gpa not provided\n");
-		return false;
+		goto e_scratch;
 	}

 	scratch_gpa_end = scratch_gpa_beg + len;
 	if (scratch_gpa_end < scratch_gpa_beg) {
 		pr_err("vmgexit: scratch length (%#llx) not valid for scratch address (%#llx)\n",
 		       len, scratch_gpa_beg);
-		return false;
+		goto e_scratch;
 	}

 	if ((scratch_gpa_beg & PAGE_MASK) == control->ghcb_gpa) {
···
 	    scratch_gpa_end > ghcb_scratch_end) {
 		pr_err("vmgexit: scratch area is outside of GHCB shared buffer area (%#llx - %#llx)\n",
 		       scratch_gpa_beg, scratch_gpa_end);
-		return false;
+		goto e_scratch;
 	}

 	scratch_va = (void *)svm->sev_es.ghcb;
···
 	if (len > GHCB_SCRATCH_AREA_LIMIT) {
 		pr_err("vmgexit: scratch area exceeds KVM limits (%#llx requested, %#llx limit)\n",
 		       len, GHCB_SCRATCH_AREA_LIMIT);
-		return false;
+		goto e_scratch;
 	}
-	scratch_va = kzalloc(len, GFP_KERNEL_ACCOUNT);
+	scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
 	if (!scratch_va)
-		return false;
+		goto e_scratch;

 	if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
 		/* Unable to copy scratch area from guest */
 		pr_err("vmgexit: kvm_read_guest for scratch area failed\n");

-		kfree(scratch_va);
-		return false;
+		kvfree(scratch_va);
+		goto e_scratch;
 	}

 	/*
···
 	svm->sev_es.ghcb_sa_len = len;

 	return true;
+
+e_scratch:
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
+
+	return false;
 }

 static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
···

 		ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_CPUID);
 		if (!ret) {
-			ret = -EINVAL;
+			/* Error, keep GHCB MSR value as-is */
 			break;
 		}

···
 					      GHCB_MSR_TERM_REASON_POS);
 		pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
 			reason_set, reason_code);
-		fallthrough;
+
+		ret = -EINVAL;
+		break;
 	}
 	default:
-		ret = -EINVAL;
+		/* Error, keep GHCB MSR value as-is */
+		break;
 	}

 	trace_kvm_vmgexit_msr_protocol_exit(svm->vcpu.vcpu_id,
···

 	if (!ghcb_gpa) {
 		vcpu_unimpl(vcpu, "vmgexit: GHCB gpa is not set\n");
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
 		/* Unable to map GHCB from guest */
 		vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
 			    ghcb_gpa);
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva;
···

 	exit_code = ghcb_get_sw_exit_code(ghcb);

-	ret = sev_es_validate_vmgexit(svm);
-	if (ret)
-		return ret;
+	if (!sev_es_validate_vmgexit(svm))
+		return 1;

 	sev_es_sync_from_ghcb(svm);
 	ghcb_set_sw_exit_info_1(ghcb, 0);
 	ghcb_set_sw_exit_info_2(ghcb, 0);

-	ret = -EINVAL;
+	ret = 1;
 	switch (exit_code) {
 	case SVM_VMGEXIT_MMIO_READ:
 		if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
···
 		default:
 			pr_err("svm: vmgexit: unsupported AP jump table request - exit_info_1=%#llx\n",
 			       control->exit_info_1);
-			ghcb_set_sw_exit_info_1(ghcb, 1);
-			ghcb_set_sw_exit_info_2(ghcb,
-						X86_TRAP_UD |
-						SVM_EVTINJ_TYPE_EXEPT |
-						SVM_EVTINJ_VALID);
+			ghcb_set_sw_exit_info_1(ghcb, 2);
+			ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT);
 		}

-		ret = 1;
 		break;
 	}
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
 			    control->exit_info_1, control->exit_info_2);
+		ret = -EINVAL;
 		break;
 	default:
 		ret = svm_invoke_exit_handler(vcpu, exit_code);
···
 		return -EINVAL;

 	if (!setup_vmgexit_scratch(svm, in, bytes))
-		return -EINVAL;
+		return 1;

 	return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
 				    count, in);
arch/x86/kvm/svm/svm.h
···
 	struct list_head regions_list;  /* List of registered regions */
 	u64 ap_jump_table;	/* SEV-ES AP Jump Table address */
 	struct kvm *enc_context_owner; /* Owner of copied encryption context */
+	unsigned long num_mirrored_vms; /* Number of VMs sharing this ASID */
 	struct misc_cg *misc_cg; /* For misc cgroup accounting */
 	atomic_t migration_in_progress;
 };
+25-28
arch/x86/kvm/vmx/nested.c
···
 	WARN_ON(!enable_vpid);

 	/*
-	 * If VPID is enabled and used by vmcs12, but L2 does not have a unique
-	 * TLB tag (ASID), i.e. EPT is disabled and KVM was unable to allocate
-	 * a VPID for L2, flush the current context as the effective ASID is
-	 * common to both L1 and L2.
-	 *
-	 * Defer the flush so that it runs after vmcs02.EPTP has been set by
-	 * KVM_REQ_LOAD_MMU_PGD (if nested EPT is enabled) and to avoid
-	 * redundant flushes further down the nested pipeline.
-	 *
-	 * If a TLB flush isn't required due to any of the above, and vpid12 is
-	 * changing then the new "virtual" VPID (vpid12) will reuse the same
-	 * "real" VPID (vpid02), and so needs to be flushed.  There's no direct
-	 * mapping between vpid02 and vpid12, vpid02 is per-vCPU and reused for
-	 * all nested vCPUs.  Remember, a flush on VM-Enter does not invalidate
-	 * guest-physical mappings, so there is no need to sync the nEPT MMU.
+	 * VPID is enabled and in use by vmcs12.  If vpid12 is changing, then
+	 * emulate a guest TLB flush as KVM does not track vpid12 history nor
+	 * is the VPID incorporated into the MMU context.  I.e. KVM must assume
+	 * that the new vpid12 has never been used and thus represents a new
+	 * guest ASID that cannot have entries in the TLB.
 	 */
-	if (!nested_has_guest_tlb_tag(vcpu)) {
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
-	} else if (is_vmenter &&
-		   vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
+	if (is_vmenter && vmcs12->virtual_processor_id != vmx->nested.last_vpid) {
 		vmx->nested.last_vpid = vmcs12->virtual_processor_id;
-		vpid_sync_context(nested_get_vpid02(vcpu));
+		kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu);
+		return;
 	}
+
+	/*
+	 * If VPID is enabled, used by vmcs12, and vpid12 is not changing but
+	 * does not have a unique TLB tag (ASID), i.e. EPT is disabled and
+	 * KVM was unable to allocate a VPID for L2, flush the current context
+	 * as the effective ASID is common to both L1 and L2.
+	 */
+	if (!nested_has_guest_tlb_tag(vcpu))
+		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 }

 static bool is_bitwise_subset(u64 superset, u64 subset, u64 mask)
···

 	if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) &&
 	    WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL,
-				     vmcs12->guest_ia32_perf_global_ctrl)))
+				     vmcs12->guest_ia32_perf_global_ctrl))) {
+		*entry_failure_code = ENTRY_FAIL_DEFAULT;
 		return -EINVAL;
+	}

 	kvm_rsp_write(vcpu, vmcs12->guest_rsp);
 	kvm_rip_write(vcpu, vmcs12->guest_rip);
···
 	};
 	u32 failed_index;

-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	kvm_service_local_tlb_flush_requests(vcpu);

 	evaluate_pending_interrupts = exec_controls_get(vmx) &
 		(CPU_BASED_INTR_WINDOW_EXITING | CPU_BASED_NMI_WINDOW_EXITING);
···
 		(void)nested_get_evmcs_page(vcpu);
 	}

-	/* Service the TLB flush request for L2 before switching to L1. */
-	if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-		kvm_vcpu_flush_tlb_current(vcpu);
+	/* Service pending TLB flush requests for L2 before switching to L1. */
+	kvm_service_local_tlb_flush_requests(vcpu);

 	/*
 	 * VCPU_EXREG_PDPTR will be clobbered in arch/x86/kvm/vmx/vmx.h between
···
 	if (!vmx->nested.cached_vmcs12)
 		goto out_cached_vmcs12;

+	vmx->nested.shadow_vmcs12_cache.gpa = INVALID_GPA;
 	vmx->nested.cached_shadow_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL_ACCOUNT);
 	if (!vmx->nested.cached_shadow_vmcs12)
 		goto out_cached_shadow_vmcs12;
···
 	struct gfn_to_hva_cache *ghc = &vmx->nested.vmcs12_cache;
 	struct vmcs_hdr hdr;

-	if (ghc->gpa != vmptr &&
-	    kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
+	if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, vmptr, VMCS12_SIZE)) {
 		/*
 		 * Reads from an unbacked page return all 1s,
 		 * which means that the 32 bits located at the
+11-9
arch/x86/kvm/vmx/posted_intr.c
···
 #include <asm/cpu.h>

 #include "lapic.h"
+#include "irq.h"
 #include "posted_intr.h"
 #include "trace.h"
 #include "vmx.h"
···
     pi_set_on(pi_desc);
 }

+static bool vmx_can_use_vtd_pi(struct kvm *kvm)
+{
+    return irqchip_in_kernel(kvm) && enable_apicv &&
+        kvm_arch_has_assigned_device(kvm) &&
+        irq_remapping_cap(IRQ_POSTING_CAP);
+}
+
 void vmx_vcpu_pi_put(struct kvm_vcpu *vcpu)
 {
     struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

-    if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-        !irq_remapping_cap(IRQ_POSTING_CAP) ||
-        !kvm_vcpu_apicv_active(vcpu))
+    if (!vmx_can_use_vtd_pi(vcpu->kvm))
         return;

     /* Set SN when the vCPU is preempted */
···
     struct pi_desc old, new;
     struct pi_desc *pi_desc = vcpu_to_pi_desc(vcpu);

-    if (!kvm_arch_has_assigned_device(vcpu->kvm) ||
-        !irq_remapping_cap(IRQ_POSTING_CAP) ||
-        !kvm_vcpu_apicv_active(vcpu))
+    if (!vmx_can_use_vtd_pi(vcpu->kvm))
         return 0;

     WARN_ON(irqs_disabled());
···
     struct vcpu_data vcpu_info;
     int idx, ret = 0;

-    if (!kvm_arch_has_assigned_device(kvm) ||
-        !irq_remapping_cap(IRQ_POSTING_CAP) ||
-        !kvm_vcpu_apicv_active(kvm->vcpus[0]))
+    if (!vmx_can_use_vtd_pi(kvm))
         return 0;

     idx = srcu_read_lock(&kvm->irq_srcu);
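The patch above replaces three slightly divergent open-coded checks with one helper predicate, vmx_can_use_vtd_pi(), which also tightens the condition (irqchip_in_kernel() && enable_apicv instead of the per-vCPU kvm_vcpu_apicv_active()). A user-space sketch of the consolidation (plain C; the struct and field names are illustrative, not the real kvm structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the VM-wide state the helper consults. */
struct vm_state {
    bool irqchip_in_kernel;
    bool enable_apicv;
    bool has_assigned_device;
    bool irq_posting_cap;
};

/*
 * Models vmx_can_use_vtd_pi(): one predicate instead of three divergent
 * open-coded checks, so every caller applies identical conditions.
 */
static bool can_use_vtd_pi(const struct vm_state *vm)
{
    return vm->irqchip_in_kernel && vm->enable_apicv &&
           vm->has_assigned_device && vm->irq_posting_cap;
}
```

Centralizing the predicate is what makes the follow-up fixes in this series safe: a condition change lands in one place instead of three.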
+42-25
arch/x86/kvm/vmx/vmx.c
···
     }
 }

+static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
+{
+    if (is_guest_mode(vcpu))
+        return nested_get_vpid02(vcpu);
+    return to_vmx(vcpu)->vpid;
+}
+
 static void vmx_flush_tlb_current(struct kvm_vcpu *vcpu)
 {
     struct kvm_mmu *mmu = vcpu->arch.mmu;
···
     if (enable_ept)
         ept_sync_context(construct_eptp(vcpu, root_hpa,
                         mmu->shadow_root_level));
-    else if (!is_guest_mode(vcpu))
-        vpid_sync_context(to_vmx(vcpu)->vpid);
     else
-        vpid_sync_context(nested_get_vpid02(vcpu));
+        vpid_sync_context(vmx_get_current_vpid(vcpu));
 }

 static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
 {
     /*
-     * vpid_sync_vcpu_addr() is a nop if vmx->vpid==0, see the comment in
+     * vpid_sync_vcpu_addr() is a nop if vpid==0, see the comment in
      * vmx_flush_tlb_guest() for an explanation of why this is ok.
      */
-    vpid_sync_vcpu_addr(to_vmx(vcpu)->vpid, addr);
+    vpid_sync_vcpu_addr(vmx_get_current_vpid(vcpu), addr);
 }

 static void vmx_flush_tlb_guest(struct kvm_vcpu *vcpu)
 {
     /*
-     * vpid_sync_context() is a nop if vmx->vpid==0, e.g. if enable_vpid==0
-     * or a vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit
-     * are required to flush GVA->{G,H}PA mappings from the TLB if vpid is
+     * vpid_sync_context() is a nop if vpid==0, e.g. if enable_vpid==0 or a
+     * vpid couldn't be allocated for this vCPU. VM-Enter and VM-Exit are
+     * required to flush GVA->{G,H}PA mappings from the TLB if vpid is
      * disabled (VM-Enter with vpid enabled and vpid==0 is disallowed),
      * i.e. no explicit INVVPID is necessary.
      */
-    vpid_sync_context(to_vmx(vcpu)->vpid);
+    vpid_sync_context(vmx_get_current_vpid(vcpu));
 }

 void vmx_ept_load_pdptrs(struct kvm_vcpu *vcpu)
···
 {
     struct vcpu_vmx *vmx = to_vmx(vcpu);
     int max_irr;
-    bool max_irr_updated;
+    bool got_posted_interrupt;

-    if (KVM_BUG_ON(!vcpu->arch.apicv_active, vcpu->kvm))
+    if (KVM_BUG_ON(!enable_apicv, vcpu->kvm))
         return -EIO;

     if (pi_test_on(&vmx->pi_desc)) {
···
          * But on x86 this is just a compiler barrier anyway.
          */
         smp_mb__after_atomic();
-        max_irr_updated =
+        got_posted_interrupt =
             kvm_apic_update_irr(vcpu, vmx->pi_desc.pir, &max_irr);
-
-        /*
-         * If we are running L2 and L1 has a new pending interrupt
-         * which can be injected, this may cause a vmexit or it may
-         * be injected into L2. Either way, this interrupt will be
-         * processed via KVM_REQ_EVENT, not RVI, because we do not use
-         * virtual interrupt delivery to inject L1 interrupts into L2.
-         */
-        if (is_guest_mode(vcpu) && max_irr_updated)
-            kvm_make_request(KVM_REQ_EVENT, vcpu);
     } else {
         max_irr = kvm_lapic_find_highest_irr(vcpu);
+        got_posted_interrupt = false;
     }
-    vmx_hwapic_irr_update(vcpu, max_irr);
+
+    /*
+     * Newly recognized interrupts are injected via either virtual interrupt
+     * delivery (RVI) or KVM_REQ_EVENT. Virtual interrupt delivery is
+     * disabled in two cases:
+     *
+     * 1) If L2 is running and the vCPU has a new pending interrupt. If L1
+     * wants to exit on interrupts, KVM_REQ_EVENT is needed to synthesize a
+     * VM-Exit to L1. If L1 doesn't want to exit, the interrupt is injected
+     * into L2, but KVM doesn't use virtual interrupt delivery to inject
+     * interrupts into L2, and so KVM_REQ_EVENT is again needed.
+     *
+     * 2) If APICv is disabled for this vCPU, assigned devices may still
+     * attempt to post interrupts. The posted interrupt vector will cause
+     * a VM-Exit and the subsequent entry will call sync_pir_to_irr.
+     */
+    if (!is_guest_mode(vcpu) && kvm_vcpu_apicv_active(vcpu))
+        vmx_set_rvi(max_irr);
+    else if (got_posted_interrupt)
+        kvm_make_request(KVM_REQ_EVENT, vcpu);
+
     return max_irr;
 }
···
 static bool vmx_check_apicv_inhibit_reasons(ulong bit)
 {
     ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
+              BIT(APICV_INHIBIT_REASON_ABSENT) |
               BIT(APICV_INHIBIT_REASON_HYPERV) |
               BIT(APICV_INHIBIT_REASON_BLOCKIRQ);
···
         ple_window_shrink = 0;
     }

-    if (!cpu_has_vmx_apicv()) {
+    if (!cpu_has_vmx_apicv())
         enable_apicv = 0;
+    if (!enable_apicv)
         vmx_x86_ops.sync_pir_to_irr = NULL;
-    }

     if (cpu_has_vmx_tsc_scaling()) {
         kvm_has_tsc_control = true;
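The reworked tail of sync_pir_to_irr above picks one of two injection paths: virtual interrupt delivery (RVI) only when running outside guest mode with APICv active, otherwise KVM_REQ_EVENT when a posted interrupt was actually found. A user-space sketch of that selection (plain C; the enum names are illustrative, not kernel symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative labels for the two injection paths (plus "nothing to do"). */
enum inject { VIA_NONE, VIA_RVI, VIA_REQ_EVENT };

/*
 * Models the decision at the end of the hunk above: RVI is usable only
 * outside guest mode with APICv active; otherwise fall back to an event
 * request, and only when a posted interrupt was actually found.
 */
static enum inject choose_injection(bool is_guest_mode, bool apicv_active,
                                    bool got_posted_interrupt)
{
    if (!is_guest_mode && apicv_active)
        return VIA_RVI;
    if (got_posted_interrupt)
        return VIA_REQ_EVENT;
    return VIA_NONE;
}
```

The key change from the old code is that the KVM_REQ_EVENT fallback now also covers the APICv-disabled case, not just nested guests.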
+58-17
arch/x86/kvm/x86.c
···
     static_call(kvm_x86_tlb_flush_guest)(vcpu);
 }

+
+static inline void kvm_vcpu_flush_tlb_current(struct kvm_vcpu *vcpu)
+{
+    ++vcpu->stat.tlb_flush;
+    static_call(kvm_x86_tlb_flush_current)(vcpu);
+}
+
+/*
+ * Service "local" TLB flush requests, which are specific to the current MMU
+ * context. In addition to the generic event handling in vcpu_enter_guest(),
+ * TLB flushes that are targeted at an MMU context also need to be serviced
+ * prior to nested VM-Enter/VM-Exit.
+ */
+void kvm_service_local_tlb_flush_requests(struct kvm_vcpu *vcpu)
+{
+    if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
+        kvm_vcpu_flush_tlb_current(vcpu);
+
+    if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
+        kvm_vcpu_flush_tlb_guest(vcpu);
+}
+EXPORT_SYMBOL_GPL(kvm_service_local_tlb_flush_requests);
+
 static void record_steal_time(struct kvm_vcpu *vcpu)
 {
     struct gfn_to_hva_cache *ghc = &vcpu->arch.st.cache;
···
     case KVM_CAP_SGX_ATTRIBUTE:
 #endif
     case KVM_CAP_VM_COPY_ENC_CONTEXT_FROM:
+    case KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM:
     case KVM_CAP_SREGS2:
     case KVM_CAP_EXIT_ON_EMULATION_FAILURE:
     case KVM_CAP_VCPU_ATTRIBUTES:
···
 static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
                     struct kvm_lapic_state *s)
 {
-    if (vcpu->arch.apicv_active)
-        static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+    static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);

     return kvm_apic_get_state(vcpu, s);
 }
···
     struct kvm_cpuid __user *cpuid_arg = argp;
     struct kvm_cpuid cpuid;

+    /*
+     * KVM does not correctly handle changing guest CPUID after KVM_RUN, as
+     * MAXPHYADDR, GBPAGES support, AMD reserved bit behavior, etc. aren't
+     * tracked in kvm_mmu_page_role. As a result, KVM may miss guest page
+     * faults due to reusing SPs/SPTEs. In practice no sane VMM mucks with
+     * the core vCPU model on the fly, so fail.
+     */
+    r = -EINVAL;
+    if (vcpu->arch.last_vmentry_cpu != -1)
+        goto out;
+
     r = -EFAULT;
     if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
         goto out;
···
     case KVM_SET_CPUID2: {
     struct kvm_cpuid2 __user *cpuid_arg = argp;
     struct kvm_cpuid2 cpuid;
+
+    /*
+     * KVM_SET_CPUID{,2} after KVM_RUN is forbidden, see the comment in
+     * KVM_SET_CPUID case above.
+     */
+    r = -EINVAL;
+    if (vcpu->arch.last_vmentry_cpu != -1)
+        goto out;

     r = -EFAULT;
     if (copy_from_user(&cpuid, cpuid_arg, sizeof(cpuid)))
···
     smp_wmb();
     kvm->arch.irqchip_mode = KVM_IRQCHIP_SPLIT;
     kvm->arch.nr_reserved_ioapic_pins = cap->args[0];
+    kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT);
     r = 0;
 split_irqchip_unlock:
     mutex_unlock(&kvm->lock);
···
     /* Write kvm->irq_routing before enabling irqchip_in_kernel. */
     smp_wmb();
     kvm->arch.irqchip_mode = KVM_IRQCHIP_KERNEL;
+    kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT);
 create_irqchip_unlock:
     mutex_unlock(&kvm->lock);
     break;
···
 {
     init_rwsem(&kvm->arch.apicv_update_lock);

-    if (enable_apicv)
-        clear_bit(APICV_INHIBIT_REASON_DISABLE,
-              &kvm->arch.apicv_inhibit_reasons);
-    else
+    set_bit(APICV_INHIBIT_REASON_ABSENT,
+        &kvm->arch.apicv_inhibit_reasons);
+    if (!enable_apicv)
         set_bit(APICV_INHIBIT_REASON_DISABLE,
             &kvm->arch.apicv_inhibit_reasons);
 }
···
     if (irqchip_split(vcpu->kvm))
         kvm_scan_ioapic_routes(vcpu, vcpu->arch.ioapic_handled_vectors);
     else {
-        if (vcpu->arch.apicv_active)
-            static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+        static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);
         if (ioapic_in_kernel(vcpu->kvm))
             kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors);
     }
···
         /* Flushing all ASIDs flushes the current ASID... */
         kvm_clear_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
     }
-    if (kvm_check_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu))
-        kvm_vcpu_flush_tlb_current(vcpu);
-    if (kvm_check_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu))
-        kvm_vcpu_flush_tlb_guest(vcpu);
+    kvm_service_local_tlb_flush_requests(vcpu);

     if (kvm_check_request(KVM_REQ_REPORT_TPR_ACCESS, vcpu)) {
         vcpu->run->exit_reason = KVM_EXIT_TPR_ACCESS;
···
         /*
          * This handles the case where a posted interrupt was
-         * notified with kvm_vcpu_kick.
+         * notified with kvm_vcpu_kick. Assigned devices can
+         * use the POSTED_INTR_VECTOR even if APICv is disabled,
+         * so do it even if APICv is disabled on this vCPU.
          */
-        if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
-            static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+        if (kvm_lapic_enabled(vcpu))
+            static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);

         if (kvm_vcpu_exit_request(vcpu)) {
             vcpu->mode = OUTSIDE_GUEST_MODE;
···
         if (likely(exit_fastpath != EXIT_FASTPATH_REENTER_GUEST))
             break;

-        if (vcpu->arch.apicv_active)
-            static_call(kvm_x86_sync_pir_to_irr)(vcpu);
+        if (kvm_lapic_enabled(vcpu))
+            static_call_cond(kvm_x86_sync_pir_to_irr)(vcpu);

         if (unlikely(kvm_vcpu_exit_request(vcpu))) {
             exit_fastpath = EXIT_FASTPATH_EXIT_HANDLED;
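The new kvm_service_local_tlb_flush_requests() above is built on KVM's check-and-clear request idiom: each pending-request bit is tested and atomically consumed, so a flush that is serviced early (e.g. before nested VM-Enter) is not serviced a second time in vcpu_enter_guest(). A user-space sketch of that idiom (plain C with a simple bitmask; bit values and names are illustrative, and the real kvm_check_request() is atomic):

```c
#include <assert.h>
#include <stdbool.h>

#define REQ_TLB_FLUSH_CURRENT (1u << 0)  /* illustrative bit positions */
#define REQ_TLB_FLUSH_GUEST   (1u << 1)

/* Models kvm_check_request(): test a pending bit and clear it, so each
 * request is serviced at most once. */
static bool check_request(unsigned int *pending, unsigned int req)
{
    if (*pending & req) {
        *pending &= ~req;
        return true;
    }
    return false;
}

/* Models kvm_service_local_tlb_flush_requests(): drain both "local"
 * flush requests; the counters stand in for the actual flush calls. */
static void service_local_flushes(unsigned int *pending,
                                  int *current_flushes, int *guest_flushes)
{
    if (check_request(pending, REQ_TLB_FLUSH_CURRENT))
        (*current_flushes)++;
    if (check_request(pending, REQ_TLB_FLUSH_GUEST))
        (*guest_flushes)++;
}
```

Because the bits are consumed, calling the helper from both the nested transition path and vcpu_enter_guest() cannot double-flush.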
···
 #ifdef CONFIG_X86_64
     u64 *trampoline_pgd;
     u64 efer;
+    int i;
 #endif

     base = (unsigned char *)real_mode_header;
···
     trampoline_header->flags = 0;

     trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
+
+    /* Map the real mode stub as virtual == physical */
     trampoline_pgd[0] = trampoline_pgd_entry.pgd;
-    trampoline_pgd[511] = init_top_pgt[511].pgd;
+
+    /*
+     * Include the entirety of the kernel mapping into the trampoline
+     * PGD. This way, all mappings present in the normal kernel page
+     * tables are usable while running on trampoline_pgd.
+     */
+    for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
+        trampoline_pgd[i] = init_top_pgt[i].pgd;
 #endif

     sme_sev_setup_real_mode(trampoline_header);
+20
arch/x86/xen/xen-asm.S
···
 #include <linux/init.h>
 #include <linux/linkage.h>
+#include <../entry/calling.h>

 .pushsection .noinstr.text, "ax"
 /*
···
     pushq $0
     jmp hypercall_iret
 SYM_CODE_END(xen_iret)
+
+/*
+ * Xen PV doesn't use a trampoline stack; PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is
+ * also the kernel stack. Reusing swapgs_restore_regs_and_return_to_usermode()
+ * in Xen PV would move %rsp up to the top of the kernel stack and leave the
+ * IRET frame below %rsp, where an #NMI could corrupt it. And
+ * swapgs_restore_regs_and_return_to_usermode() pushing the IRET frame at the
+ * same address would be pointless anyway.
+ */
+SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode)
+    UNWIND_HINT_REGS
+    POP_REGS
+
+    /* stackleak_erase() can work safely on the kernel stack. */
+    STACKLEAK_ERASE_NOCLOBBER
+
+    addq $8, %rsp    /* skip regs->orig_ax */
+    jmp xen_iret
+SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)

 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
···
     int ret;

     if (idx < 0) {
-        pr_warn("deleting an unspecified loop device is not supported.\n");
+        pr_warn_once("deleting an unspecified loop device is not supported.\n");
         return -EINVAL;
     }
···
     /* disable all interrupts */
     amdgpu_irq_disable_all(adev);
     if (adev->mode_info.mode_config_initialized){
-        if (!amdgpu_device_has_dc_support(adev))
+        if (!drm_drv_uses_atomic_modeset(adev_to_drm(adev)))
             drm_helper_force_disable_all(adev_to_drm(adev));
         else
             drm_atomic_helper_shutdown(adev_to_drm(adev));
···
 {
     int r;

+    amdgpu_amdkfd_pre_reset(adev);
+
     if (from_hypervisor)
         r = amdgpu_virt_request_full_gpu(adev, true);
     else
···
     amdgpu_irq_gpu_reset_resume_helper(adev);
     r = amdgpu_ib_ring_tests(adev);
+    amdgpu_amdkfd_post_reset(adev);

 error:
     if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) {
···
         cancel_delayed_work_sync(&tmp_adev->delayed_init_work);

-        amdgpu_amdkfd_pre_reset(tmp_adev);
+        if (!amdgpu_sriov_vf(tmp_adev))
+            amdgpu_amdkfd_pre_reset(tmp_adev);

         /*
          * Mark these ASICs to be reseted as untracked first
···
             drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
         }

-        if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
+        if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) {
             drm_helper_resume_force_mode(adev_to_drm(tmp_adev));
         }
···
 skip_sched_resume:
     list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
-        /* unlock kfd */
-        if (!need_emergency_restart)
-            amdgpu_amdkfd_post_reset(tmp_adev);
+        /* unlock kfd: SRIOV would do it separately */
+        if (!need_emergency_restart && !amdgpu_sriov_vf(tmp_adev))
+            amdgpu_amdkfd_post_reset(tmp_adev);

         /* kfd_post_reset will do nothing if kfd device is not initialized,
          * need to bring up kfd here if it's not be initialized before
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
···
     [HDP_HWIP]    = HDP_HWID,
     [SDMA0_HWIP]    = SDMA0_HWID,
     [SDMA1_HWIP]    = SDMA1_HWID,
+    [SDMA2_HWIP]    = SDMA2_HWID,
+    [SDMA3_HWIP]    = SDMA3_HWID,
     [MMHUB_HWIP]    = MMHUB_HWID,
     [ATHUB_HWIP]    = ATHUB_HWID,
     [NBIO_HWIP]    = NBIF_HWID,
···
     case IP_VERSION(3, 0, 64):
     case IP_VERSION(3, 1, 1):
     case IP_VERSION(3, 0, 2):
+    case IP_VERSION(3, 0, 192):
         amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block);
         if (!amdgpu_sriov_vf(adev))
             amdgpu_device_ip_block_add(adev, &jpeg_v3_0_ip_block);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
···
         break;
     case IP_VERSION(3, 0, 0):
     case IP_VERSION(3, 0, 64):
+    case IP_VERSION(3, 0, 192):
         if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0))
             fw_name = FIRMWARE_SIENNA_CICHLID;
         else
+2-2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
···
     int i = 0;

     for (i = 0; i < adev->mode_info.num_crtc; i++)
-        if (adev->mode_info.crtcs[i])
-            hrtimer_cancel(&adev->mode_info.crtcs[i]->vblank_timer);
+        if (adev->amdgpu_vkms_output[i].vblank_hrtimer.function)
+            hrtimer_cancel(&adev->amdgpu_vkms_output[i].vblank_hrtimer);

     kfree(adev->mode_info.bios_hardcoded_edid);
     kfree(adev->amdgpu_vkms_output);
+4-3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
···

     gfx_v9_0_cp_enable(adev, false);

-    /* Skip suspend with A+A reset */
-    if (adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) {
-        dev_dbg(adev->dev, "Device in reset. Skipping RLC halt\n");
+    /* Skip stopping RLC with A+A reset or when RLC controls GFX clock */
+    if ((adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) ||
+        (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2))) {
+        dev_dbg(adev->dev, "Skipping RLC halt\n");
         return 0;
     }
+1
drivers/gpu/drm/amd/amdgpu/nv.c
···
     switch (adev->ip_versions[UVD_HWIP][0]) {
     case IP_VERSION(3, 0, 0):
     case IP_VERSION(3, 0, 64):
+    case IP_VERSION(3, 0, 192):
         if (amdgpu_sriov_vf(adev)) {
             if (encode)
                 *codecs = &sriov_sc_video_codecs_encode;
+4-9
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
···
 static void svm_range_restore_work(struct work_struct *work)
 {
     struct delayed_work *dwork = to_delayed_work(work);
-    struct amdkfd_process_info *process_info;
     struct svm_range_list *svms;
     struct svm_range *prange;
     struct kfd_process *p;
···
      * the lifetime of this thread, kfd_process and mm will be valid.
      */
     p = container_of(svms, struct kfd_process, svms);
-    process_info = p->kgd_process_info;
     mm = p->mm;
     if (!mm)
         return;

-    mutex_lock(&process_info->lock);
     svm_range_list_lock_and_flush_work(svms, mm);
     mutex_lock(&svms->lock);
···
 out_reschedule:
     mutex_unlock(&svms->lock);
     mmap_write_unlock(mm);
-    mutex_unlock(&process_info->lock);

     /* If validation failed, reschedule another attempt */
     if (evicted_ranges) {
···
     if (atomic_read(&svms->drain_pagefaults)) {
         pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
+        r = 0;
         goto out;
     }
···
     mm = get_task_mm(p->lead_thread);
     if (!mm) {
         pr_debug("svms 0x%p failed to get mm\n", svms);
+        r = 0;
         goto out;
     }
···

     if (svm_range_skip_recover(prange)) {
         amdgpu_gmc_filter_faults_remove(adev, addr, pasid);
+        r = 0;
         goto out_unlock_range;
     }
···
     if (timestamp < AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING) {
         pr_debug("svms 0x%p [0x%lx %lx] already restored\n",
              svms, prange->start, prange->last);
+        r = 0;
         goto out_unlock_range;
     }
···
 svm_range_set_attr(struct kfd_process *p, uint64_t start, uint64_t size,
            uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs)
 {
-    struct amdkfd_process_info *process_info = p->kgd_process_info;
     struct mm_struct *mm = current->mm;
     struct list_head update_list;
     struct list_head insert_list;
···
         return r;

     svms = &p->svms;
-
-    mutex_lock(&process_info->lock);

     svm_range_list_lock_and_flush_work(svms, mm);
···
     mutex_unlock(&svms->lock);
     mmap_read_unlock(mm);
 out:
-    mutex_unlock(&process_info->lock);
-
     pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] done, r=%d\n", p->pasid,
          &p->svms, start, start + size - 1, r);
···
 #include "dm_helpers.h"

 #include "dc_link_ddc.h"
+#include "ddc_service_types.h"
+#include "dpcd_defs.h"

 #include "i2caux_interface.h"
 #include "dmub_cmd.h"
···
 };

 #if defined(CONFIG_DRM_AMD_DC_DCN)
+static bool needs_dsc_aux_workaround(struct dc_link *link)
+{
+    if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+        (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) &&
+        link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2)
+        return true;
+
+    return false;
+}
+
 static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
 {
     struct dc_sink *dc_sink = aconnector->dc_sink;
···
     u8 *dsc_branch_dec_caps = NULL;

     aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
-#if defined(CONFIG_HP_HOOK_WORKAROUND)
+
     /*
      * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs
      * because it only check the dsc/fec caps of the "port variable" and not the dock
···
      * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux
      *
      */
-
-    if (!aconnector->dsc_aux && !port->parent->port_parent)
+    if (!aconnector->dsc_aux && !port->parent->port_parent &&
+        needs_dsc_aux_workaround(aconnector->dc_link))
         aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
-#endif
+
     if (!aconnector->dsc_aux)
         return false;
+16
drivers/gpu/drm/amd/display/dc/core/dc_link.c
···
     dal_ddc_service_set_transaction_type(link->ddc,
                          sink_caps->transaction_type);

+#if defined(CONFIG_DRM_AMD_DC_DCN)
+    /* Apply work around for tunneled MST on certain USB4 docks. Always use DSC if dock
+     * reports DSC support.
+     */
+    if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA &&
+        link->type == dc_connection_mst_branch &&
+        link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+        link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_SUPPORT &&
+        !link->dc->debug.dpia_debug.bits.disable_mst_dsc_work_around)
+        link->wa_flags.dpia_mst_dsc_always_on = true;
+#endif
+
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
     /* In case of fallback to SST when topology discovery below fails
      * HDCP caps will be querried again later by the upper layer (caller
···
     if (link->type == dc_connection_mst_branch) {
         LINK_INFO("link=%d, mst branch is now Disconnected\n",
               link->link_index);
+
+        /* Disable work around which keeps DSC on for tunneled MST on certain USB4 docks. */
+        if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+            link->wa_flags.dpia_mst_dsc_always_on = false;

         dm_helpers_dp_mst_stop_top_mgr(link->ctx, link);
+1-1
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
···
     }

     for (lane = 0; lane < (uint8_t)lt_settings->link_settings.lane_count; lane++)
-        lt_settings->dpcd_lane_settings[lane].bits.VOLTAGE_SWING_SET = VOLTAGE_SWING_LEVEL0;
+        lt_settings->dpcd_lane_settings[lane].raw = 0;
     }

     if (status == LINK_TRAINING_SUCCESS) {
+14-10
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
···
     if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
         return false;

+    // Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
+    if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
+        return false;
+
     return true;
 }
···

     if (!new_ctx)
         return DC_ERROR_UNEXPECTED;
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-
-    /*
-     * Update link encoder to stream assignment.
-     * TODO: Split out reason allocation from validation.
-     */
-    if (dc->res_pool->funcs->link_encs_assign && fast_validate == false)
-        dc->res_pool->funcs->link_encs_assign(
-            dc, new_ctx, new_ctx->streams, new_ctx->stream_count);
-#endif

     if (dc->res_pool->funcs->validate_global) {
         result = dc->res_pool->funcs->validate_global(dc, new_ctx);
···
     if (result == DC_OK)
         if (!dc->res_pool->funcs->validate_bandwidth(dc, new_ctx, fast_validate))
             result = DC_FAIL_BANDWIDTH_VALIDATE;
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+    /*
+     * Only update link encoder to stream assignment after bandwidth validation passed.
+     * TODO: Split out assignment and validation.
+     */
+    if (result == DC_OK && dc->res_pool->funcs->link_encs_assign && fast_validate == false)
+        dc->res_pool->funcs->link_encs_assign(
+            dc, new_ctx, new_ctx->streams, new_ctx->stream_count);
+#endif

     return result;
 }
···
         bool dp_skip_DID2;
         bool dp_skip_reset_segment;
         bool dp_mot_reset_segment;
+        /* Some USB4 docks do not handle turning off MST DSC once it has been enabled. */
+        bool dpia_mst_dsc_always_on;
     } wa_flags;
     struct link_mst_stream_allocation_table mst_stream_alloc_table;
+1-1
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
···
             dev_err(adev->dev, "Failed to disable smu features.\n");
     }

-    if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0) &&
+    if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2) &&
         adev->gfx.rlc.funcs->stop)
         adev->gfx.rlc.funcs->stop(adev);
···
 #include "intel_backlight.h"
 #include "intel_display_types.h"
+#include "intel_dp.h"
 #include "intel_dp_aux_backlight.h"

 /* TODO:
···
     struct intel_panel *panel = &connector->panel;
     int ret;
     u8 tcon_cap[4];
+
+    intel_dp_wait_source_oui(intel_dp);

     ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap));
     if (ret != sizeof(tcon_cap))
···
     int ret;
     u8 old_ctrl, ctrl;

+    intel_dp_wait_source_oui(intel_dp);
+
     ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl);
     if (ret != 1) {
         drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret);
···
     struct intel_panel *panel = &connector->panel;
     struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);

+    if (!panel->backlight.edp.vesa.info.aux_enable) {
+        u32 pwm_level = intel_backlight_invert_pwm_level(connector,
+                                 panel->backlight.pwm_level_max);
+
+        panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level);
+    }
+
     drm_edp_backlight_enable(&intel_dp->aux, &panel->backlight.edp.vesa.info, level);
 }
···
     struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);

     drm_edp_backlight_disable(&intel_dp->aux, &panel->backlight.edp.vesa.info);
+
+    if (!panel->backlight.edp.vesa.info.aux_enable)
+        panel->backlight.pwm_funcs->disable(old_conn_state,
+                            intel_backlight_invert_pwm_level(connector, 0));
 }

 static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector, enum pipe pipe)
···
     if (ret < 0)
         return ret;

+    if (!panel->backlight.edp.vesa.info.aux_enable) {
+        ret = panel->backlight.pwm_funcs->setup(connector, pipe);
+        if (ret < 0) {
+            drm_err(&i915->drm,
+                "Failed to setup PWM backlight controls for eDP backlight: %d\n",
+                ret);
+            return ret;
+        }
+    }
     panel->backlight.max = panel->backlight.edp.vesa.info.max;
     panel->backlight.min = 0;
     if (current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
···
     struct intel_dp *intel_dp = intel_attached_dp(connector);
     struct drm_i915_private *i915 = dp_to_i915(intel_dp);

-    /* TODO: We currently only support AUX only backlight configurations, not backlights which
-     * require a mix of PWM and AUX controls to work. In the mean time, these machines typically
-     * work just fine using normal PWM controls anyway.
-     */
-    if ((intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) &&
-        drm_edp_backlight_supported(intel_dp->edp_dpcd)) {
+    if (drm_edp_backlight_supported(intel_dp->edp_dpcd)) {
         drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n");
         return true;
     }
···
     if (!prop) {
         DRM_DEV_DEBUG(dev,
             "failed to find data lane mapping, using default\n");
+        /* Set the number of data lanes to 4 by default. */
+        msm_host->num_data_lanes = 4;
         return 0;
     }
···
  * @ring_nr:   the ringbuffer used by this submitqueue, which is determined
  *             by the submitqueue's priority
  * @faults:    the number of GPU hangs associated with this submitqueue
+ * @last_fence: the sequence number of the last allocated fence (for error
+ *             checking)
  * @ctx:       the per-drm_file context associated with the submitqueue (ie.
  *             which set of pgtables do submits jobs associated with the
  *             submitqueue use)
···
     u32 flags;
     u32 ring_nr;
     int faults;
+    uint32_t last_fence;
     struct msm_file_private *ctx;
     struct list_head node;
     struct idr fence_idr;
+9-4
drivers/gpu/drm/msm/msm_gpu_devfreq.c
···
     struct msm_gpu *gpu = dev_to_gpu(dev);
     struct dev_pm_opp *opp;

+    /*
+     * Note that devfreq_recommended_opp() can modify the freq
+     * to something that actually is in the opp table:
+     */
     opp = devfreq_recommended_opp(dev, freq, flags);

     /*
···
      */
     if (gpu->devfreq.idle_freq) {
         gpu->devfreq.idle_freq = *freq;
+        dev_pm_opp_put(opp);
         return 0;
     }
···
     struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
     unsigned long idle_freq, target_freq = 0;

-    if (!df->devfreq)
-        return;
-
     /*
      * Hold devfreq lock to synchronize with get_dev_status()/
      * target() callbacks
···
 {
     struct msm_gpu_devfreq *df = &gpu->devfreq;

+    if (!df->devfreq)
+        return;
+
     msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1),
-                   HRTIMER_MODE_ABS);
+                   HRTIMER_MODE_REL);
 }
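The HRTIMER_MODE_ABS to HRTIMER_MODE_REL change above matters because the caller passes a 1 ms *delay*: in relative mode the expiry is "now + delay", while interpreting that same small value as an absolute time would place the expiry in the distant past and fire immediately. A user-space sketch of the distinction (plain C over nanosecond counts; not the kernel hrtimer API):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t ktime_ns; /* illustrative stand-in for ktime_t */

/* Relative mode: the argument is a delay from "now". */
static ktime_ns expiry_rel(ktime_ns now, ktime_ns delay)
{
    return now + delay;
}

/* Absolute mode: the argument IS the expiry time; "now" is ignored. */
static ktime_ns expiry_abs(ktime_ns now, ktime_ns abs_time)
{
    (void)now;
    return abs_time;
}
```

With a small delay value, the absolute interpretation yields an expiry far before "now", so the timer fires at once instead of after the intended 1 ms.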
+19-23
drivers/gpu/drm/vc4/vc4_kms.c
···
     struct drm_device *dev = state->dev;
     struct vc4_dev *vc4 = to_vc4_dev(dev);
     struct vc4_hvs *hvs = vc4->hvs;
-    struct drm_crtc_state *old_crtc_state;
     struct drm_crtc_state *new_crtc_state;
     struct drm_crtc *crtc;
     struct vc4_hvs_state *old_hvs_state;
+    unsigned int channel;
     int i;

     for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
···
         vc4_hvs_mask_underrun(dev, vc4_crtc_state->assigned_channel);
     }

-    if (vc4->hvs->hvs5)
-        clk_set_min_rate(hvs->core_clk, 500000000);
-
     old_hvs_state = vc4_hvs_get_old_global_state(state);
-    if (!old_hvs_state)
+    if (IS_ERR(old_hvs_state))
         return;

-    for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) {
-        struct vc4_crtc_state *vc4_crtc_state =
-            to_vc4_crtc_state(old_crtc_state);
-        unsigned int channel = vc4_crtc_state->assigned_channel;
+    for (channel = 0; channel < HVS_NUM_CHANNELS; channel++) {
+        struct drm_crtc_commit *commit;
         int ret;
-
-        if (channel == VC4_HVS_CHANNEL_DISABLED)
-            continue;

         if (!old_hvs_state->fifo_state[channel].in_use)
             continue;

-        ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[channel].pending_commit);
+        commit = old_hvs_state->fifo_state[channel].pending_commit;
+        if (!commit)
+            continue;
+
+        ret = drm_crtc_commit_wait(commit);
         if (ret)
             drm_err(dev, "Timed out waiting for commit\n");
+
+        drm_crtc_commit_put(commit);
+        old_hvs_state->fifo_state[channel].pending_commit = NULL;
     }
+
+    if (vc4->hvs->hvs5)
+        clk_set_min_rate(hvs->core_clk, 500000000);

     drm_atomic_helper_commit_modeset_disables(dev, state);
···
     unsigned int i;

     hvs_state = vc4_hvs_get_new_global_state(state);
-    if (!hvs_state)
-        return -EINVAL;
+    if (WARN_ON(IS_ERR(hvs_state)))
+        return PTR_ERR(hvs_state);

     for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
         struct vc4_crtc_state *vc4_crtc_state =
···

     for (i = 0; i < HVS_NUM_CHANNELS; i++) {
         state->fifo_state[i].in_use = old_state->fifo_state[i].in_use;
-
-        if (!old_state->fifo_state[i].pending_commit)
-            continue;
-
-        state->fifo_state[i].pending_commit =
-            drm_crtc_commit_get(old_state->fifo_state[i].pending_commit);
     }

     return &state->base;
···
     unsigned int i;

     hvs_new_state = vc4_hvs_get_global_state(state);
-    if (!hvs_new_state)
-        return -EINVAL;
+    if (IS_ERR(hvs_new_state))
+        return PTR_ERR(hvs_new_state);

     for (i = 0; i < ARRAY_SIZE(hvs_new_state->fifo_state); i++)
         if (!hvs_new_state->fifo_state[i].in_use)
···423423 if (!(ipd & REG_INT_MBRF))424424 return;425425426426- /* ack interrupt */427427- i2c_writel(i2c, REG_INT_MBRF, REG_IPD);426426+ /* ack interrupt (read also produces a spurious START flag, clear it too) */427427+ i2c_writel(i2c, REG_INT_MBRF | REG_INT_START, REG_IPD);428428429429 /* Can only handle a maximum of 32 bytes at a time */430430 if (len > 32)
+38-7
drivers/i2c/busses/i2c-stm32f7.c
···14931493{14941494 struct stm32f7_i2c_dev *i2c_dev = data;14951495 struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg;14961496+ struct stm32_i2c_dma *dma = i2c_dev->dma;14961497 void __iomem *base = i2c_dev->base;14971498 u32 status, mask;14981499 int ret = IRQ_HANDLED;···15191518 dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n",15201519 __func__, f7_msg->addr);15211520 writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR);15211521+ if (i2c_dev->use_dma) {15221522+ stm32f7_i2c_disable_dma_req(i2c_dev);15231523+ dmaengine_terminate_async(dma->chan_using);15241524+ }15221525 f7_msg->result = -ENXIO;15231526 }15241527···15381533 /* Clear STOP flag */15391534 writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR);1540153515411541- if (i2c_dev->use_dma) {15361536+ if (i2c_dev->use_dma && !f7_msg->result) {15421537 ret = IRQ_WAKE_THREAD;15431538 } else {15441539 i2c_dev->master_mode = false;···15511546 if (f7_msg->stop) {15521547 mask = STM32F7_I2C_CR2_STOP;15531548 stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask);15541554- } else if (i2c_dev->use_dma) {15491549+ } else if (i2c_dev->use_dma && !f7_msg->result) {15551550 ret = IRQ_WAKE_THREAD;15561551 } else if (f7_msg->smbus) {15571552 stm32f7_i2c_smbus_rep_start(i2c_dev);···15881583 if (!ret) {15891584 dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__);15901585 stm32f7_i2c_disable_dma_req(i2c_dev);15911591- dmaengine_terminate_all(dma->chan_using);15861586+ dmaengine_terminate_async(dma->chan_using);15921587 f7_msg->result = -ETIMEDOUT;15931588 }15941589···16651660 /* Disable dma */16661661 if (i2c_dev->use_dma) {16671662 stm32f7_i2c_disable_dma_req(i2c_dev);16681668- dmaengine_terminate_all(dma->chan_using);16631663+ dmaengine_terminate_async(dma->chan_using);16691664 }1670166516711666 i2c_dev->master_mode = false;···17011696 time_left = wait_for_completion_timeout(&i2c_dev->complete,17021697 i2c_dev->adap.timeout);17031698 ret = f7_msg->result;16991699+ if (ret) {17001700+ if 
(i2c_dev->use_dma)17011701+ dmaengine_synchronize(dma->chan_using);17021702+17031703+ /*17041704+ * It is possible that some unsent data have already been17051705+ * written into TXDR. To avoid sending old data in a17061706+ * further transfer, flush TXDR in case of any error17071707+ */17081708+ writel_relaxed(STM32F7_I2C_ISR_TXE,17091709+ i2c_dev->base + STM32F7_I2C_ISR);17101710+ goto pm_free;17111711+ }1704171217051713 if (!time_left) {17061714 dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n",17071715 i2c_dev->msg->addr);17081716 if (i2c_dev->use_dma)17091709- dmaengine_terminate_all(dma->chan_using);17171717+ dmaengine_terminate_sync(dma->chan_using);17181718+ stm32f7_i2c_wait_free_bus(i2c_dev);17101719 ret = -ETIMEDOUT;17111720 }17121721···17631744 timeout = wait_for_completion_timeout(&i2c_dev->complete,17641745 i2c_dev->adap.timeout);17651746 ret = f7_msg->result;17661766- if (ret)17471747+ if (ret) {17481748+ if (i2c_dev->use_dma)17491749+ dmaengine_synchronize(dma->chan_using);17501750+17511751+ /*17521752+ * It is possible that some unsent data have already been17531753+ * written into TXDR. To avoid sending old data in a17541754+ * further transfer, flush TXDR in case of any error17551755+ */17561756+ writel_relaxed(STM32F7_I2C_ISR_TXE,17571757+ i2c_dev->base + STM32F7_I2C_ISR);17671758 goto pm_free;17591759+ }1768176017691761 if (!timeout) {17701762 dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr);17711763 if (i2c_dev->use_dma)17721772- dmaengine_terminate_all(dma->chan_using);17641764+ dmaengine_terminate_sync(dma->chan_using);17651765+ stm32f7_i2c_wait_free_bus(i2c_dev);17731766 ret = -ETIMEDOUT;17741767 goto pm_free;17751768 }
···5050}51515252static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,5353- u16 status, u16 lpa,5353+ u16 ctrl, u16 status, u16 lpa,5454 struct phylink_link_state *state)5555{5656+ state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);5757+5658 if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) {5757- state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);5959+ /* The Spped and Duplex Resolved register is 1 if AN is enabled6060+ * and complete, or if AN is disabled. So with disabled AN we6161+ * still get here on link up. But we want to set an_complete6262+ * only if AN was enabled, thus we look at BMCR_ANENABLE.6363+ * (According to 802.3-2008 section 22.2.4.2.10, we should be6464+ * able to get this same value from BMSR_ANEGCAPABLE, but tests6565+ * show that these Marvell PHYs don't conform to this part of6666+ * the specificaion - BMSR_ANEGCAPABLE is simply always 1.)6767+ */6868+ state->an_complete = !!(ctrl & BMCR_ANENABLE);5869 state->duplex = status &5970 MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ?6071 DUPLEX_FULL : DUPLEX_HALF;···9281 dev_err(chip->dev, "invalid PHY speed\n");9382 return -EINVAL;9483 }8484+ } else if (state->link &&8585+ state->interface != PHY_INTERFACE_MODE_SGMII) {8686+ /* If Speed and Duplex Resolved register is 0 and link is up, it8787+ * means that AN was enabled, but link partner had it disabled8888+ * and the PHY invoked the Auto-Negotiation Bypass feature and8989+ * linked anyway.9090+ */9191+ state->duplex = DUPLEX_FULL;9292+ if (state->interface == PHY_INTERFACE_MODE_2500BASEX)9393+ state->speed = SPEED_2500;9494+ else9595+ state->speed = SPEED_1000;9596 } else {9697 state->link = false;9798 }···191168int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,192169 int lane, struct phylink_link_state *state)193170{194194- u16 lpa, status;171171+ u16 lpa, status, ctrl;195172 int err;173173+174174+ err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl);175175+ if (err) {176176+ 
dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);177177+ return err;178178+ }196179197180 err = mv88e6352_serdes_read(chip, 0x11, &status);198181 if (err) {···212183 return err;213184 }214185215215- return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);186186+ return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);216187}217188218189int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port,···912883static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,913884 int port, int lane, struct phylink_link_state *state)914885{915915- u16 lpa, status;886886+ u16 lpa, status, ctrl;916887 int err;888888+889889+ err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,890890+ MV88E6390_SGMII_BMCR, &ctrl);891891+ if (err) {892892+ dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);893893+ return err;894894+ }917895918896 err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,919897 MV88E6390_SGMII_PHY_STATUS, &status);···936900 return err;937901 }938902939939- return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);903903+ return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);940904}941905942906static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip,···13071271 }13081272}1309127313101310-static int mv88e6393x_serdes_port_errata(struct mv88e6xxx_chip *chip, int lane)12741274+static int mv88e6393x_serdes_power_lane(struct mv88e6xxx_chip *chip, int lane,12751275+ bool on)13111276{13121312- u16 reg, pcs;12771277+ u16 reg;12781278+ int err;12791279+12801280+ err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,12811281+ MV88E6393X_SERDES_CTRL1, ®);12821282+ if (err)12831283+ return err;12841284+12851285+ if (on)12861286+ reg &= ~(MV88E6393X_SERDES_CTRL1_TX_PDOWN |12871287+ MV88E6393X_SERDES_CTRL1_RX_PDOWN);12881288+ else12891289+ reg |= MV88E6393X_SERDES_CTRL1_TX_PDOWN |12901290+ MV88E6393X_SERDES_CTRL1_RX_PDOWN;12911291+12921292+ return 
mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,12931293+ MV88E6393X_SERDES_CTRL1, reg);12941294+}12951295+12961296+static int mv88e6393x_serdes_erratum_4_6(struct mv88e6xxx_chip *chip, int lane)12971297+{12981298+ u16 reg;13131299 int err;1314130013151301 /* mv88e6393x family errata 4.6:···13421284 * It seems that after this workaround the SERDES is automatically13431285 * powered up (the bit is cleared), so power it down.13441286 */13451345- if (lane == MV88E6393X_PORT0_LANE || lane == MV88E6393X_PORT9_LANE ||13461346- lane == MV88E6393X_PORT10_LANE) {13471347- err = mv88e6390_serdes_read(chip, lane,13481348- MDIO_MMD_PHYXS,13491349- MV88E6393X_SERDES_POC, ®);13501350- if (err)13511351- return err;12871287+ err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,12881288+ MV88E6393X_SERDES_POC, ®);12891289+ if (err)12901290+ return err;1352129113531353- reg &= ~MV88E6393X_SERDES_POC_PDOWN;13541354- reg |= MV88E6393X_SERDES_POC_RESET;12921292+ reg &= ~MV88E6393X_SERDES_POC_PDOWN;12931293+ reg |= MV88E6393X_SERDES_POC_RESET;1355129413561356- err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,13571357- MV88E6393X_SERDES_POC, reg);13581358- if (err)13591359- return err;12951295+ err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,12961296+ MV88E6393X_SERDES_POC, reg);12971297+ if (err)12981298+ return err;1360129913611361- err = mv88e6390_serdes_power_sgmii(chip, lane, false);13621362- if (err)13631363- return err;13641364- }13001300+ err = mv88e6390_serdes_power_sgmii(chip, lane, false);13011301+ if (err)13021302+ return err;13031303+13041304+ return mv88e6393x_serdes_power_lane(chip, lane, false);13051305+}13061306+13071307+int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)13081308+{13091309+ int err;13101310+13111311+ err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT0_LANE);13121312+ if (err)13131313+ return err;13141314+13151315+ err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT9_LANE);13161316+ if (err)13171317+ return 
err;13181318+13191319+ return mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT10_LANE);13201320+}13211321+13221322+static int mv88e6393x_serdes_erratum_4_8(struct mv88e6xxx_chip *chip, int lane)13231323+{13241324+ u16 reg, pcs;13251325+ int err;1365132613661327 /* mv88e6393x family errata 4.8:13671328 * When a SERDES port is operating in 1000BASE-X or SGMII mode link may···14111334 MV88E6393X_ERRATA_4_8_REG, reg);14121335}1413133614141414-int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)13371337+static int mv88e6393x_serdes_erratum_5_2(struct mv88e6xxx_chip *chip, int lane,13381338+ u8 cmode)14151339{13401340+ static const struct {13411341+ u16 dev, reg, val, mask;13421342+ } fixes[] = {13431343+ { MDIO_MMD_VEND1, 0x8093, 0xcb5a, 0xffff },13441344+ { MDIO_MMD_VEND1, 0x8171, 0x7088, 0xffff },13451345+ { MDIO_MMD_VEND1, 0x80c9, 0x311a, 0xffff },13461346+ { MDIO_MMD_VEND1, 0x80a2, 0x8000, 0xff7f },13471347+ { MDIO_MMD_VEND1, 0x80a9, 0x0000, 0xfff0 },13481348+ { MDIO_MMD_VEND1, 0x80a3, 0x0000, 0xf8ff },13491349+ { MDIO_MMD_PHYXS, MV88E6393X_SERDES_POC,13501350+ MV88E6393X_SERDES_POC_RESET, MV88E6393X_SERDES_POC_RESET },13511351+ };13521352+ int err, i;13531353+ u16 reg;13541354+13551355+ /* mv88e6393x family errata 5.2:13561356+ * For optimal signal integrity the following sequence should be applied13571357+ * to SERDES operating in 10G mode. 
These registers only apply to 10G13581358+ * operation and have no effect on other speeds.13591359+ */13601360+ if (cmode != MV88E6393X_PORT_STS_CMODE_10GBASER)13611361+ return 0;13621362+13631363+ for (i = 0; i < ARRAY_SIZE(fixes); ++i) {13641364+ err = mv88e6390_serdes_read(chip, lane, fixes[i].dev,13651365+ fixes[i].reg, ®);13661366+ if (err)13671367+ return err;13681368+13691369+ reg &= ~fixes[i].mask;13701370+ reg |= fixes[i].val;13711371+13721372+ err = mv88e6390_serdes_write(chip, lane, fixes[i].dev,13731373+ fixes[i].reg, reg);13741374+ if (err)13751375+ return err;13761376+ }13771377+13781378+ return 0;13791379+}13801380+13811381+static int mv88e6393x_serdes_fix_2500basex_an(struct mv88e6xxx_chip *chip,13821382+ int lane, u8 cmode, bool on)13831383+{13841384+ u16 reg;14161385 int err;1417138614181418- err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT0_LANE);13871387+ if (cmode != MV88E6XXX_PORT_STS_CMODE_2500BASEX)13881388+ return 0;13891389+13901390+ /* Inband AN is broken on Amethyst in 2500base-x mode when set by13911391+ * standard mechanism (via cmode).13921392+ * We can get around this by configuring the PCS mode to 1000base-x13931393+ * and then writing value 0x58 to register 1e.8000. 
(This must be done13941394+ * while SerDes receiver and transmitter are disabled, which is, when13951395+ * this function is called.)13961396+ * It seem that when we do this configuration to 2500base-x mode (by13971397+ * changing PCS mode to 1000base-x and frequency to 3.125 GHz from13981398+ * 1.25 GHz) and then configure to sgmii or 1000base-x, the device13991399+ * thinks that it already has SerDes at 1.25 GHz and does not change14001400+ * the 1e.8000 register, leaving SerDes at 3.125 GHz.14011401+ * To avoid this, change PCS mode back to 2500base-x when disabling14021402+ * SerDes from 2500base-x mode.14031403+ */14041404+ err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,14051405+ MV88E6393X_SERDES_POC, ®);14191406 if (err)14201407 return err;1421140814221422- err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT9_LANE);14091409+ reg &= ~(MV88E6393X_SERDES_POC_PCS_MASK | MV88E6393X_SERDES_POC_AN);14101410+ if (on)14111411+ reg |= MV88E6393X_SERDES_POC_PCS_1000BASEX |14121412+ MV88E6393X_SERDES_POC_AN;14131413+ else14141414+ reg |= MV88E6393X_SERDES_POC_PCS_2500BASEX;14151415+ reg |= MV88E6393X_SERDES_POC_RESET;14161416+14171417+ err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,14181418+ MV88E6393X_SERDES_POC, reg);14231419 if (err)14241420 return err;1425142114261426- return mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT10_LANE);14221422+ err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_VEND1, 0x8000, 0x58);14231423+ if (err)14241424+ return err;14251425+14261426+ return 0;14271427}1428142814291429int mv88e6393x_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,14301430 bool on)14311431{14321432 u8 cmode = chip->ports[port].cmode;14331433+ int err;1433143414341435 if (port != 0 && port != 9 && port != 10)14351436 return -EOPNOTSUPP;14371437+14381438+ if (on) {14391439+ err = mv88e6393x_serdes_erratum_4_8(chip, lane);14401440+ if (err)14411441+ return err;14421442+14431443+ err = mv88e6393x_serdes_erratum_5_2(chip, lane, 
cmode);14441444+ if (err)14451445+ return err;14461446+14471447+ err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,14481448+ true);14491449+ if (err)14501450+ return err;14511451+14521452+ err = mv88e6393x_serdes_power_lane(chip, lane, true);14531453+ if (err)14541454+ return err;14551455+ }1436145614371457 switch (cmode) {14381458 case MV88E6XXX_PORT_STS_CMODE_SGMII:14391459 case MV88E6XXX_PORT_STS_CMODE_1000BASEX:14401460 case MV88E6XXX_PORT_STS_CMODE_2500BASEX:14411441- return mv88e6390_serdes_power_sgmii(chip, lane, on);14611461+ err = mv88e6390_serdes_power_sgmii(chip, lane, on);14621462+ break;14421463 case MV88E6393X_PORT_STS_CMODE_5GBASER:14431464 case MV88E6393X_PORT_STS_CMODE_10GBASER:14441444- return mv88e6390_serdes_power_10g(chip, lane, on);14651465+ err = mv88e6390_serdes_power_10g(chip, lane, on);14661466+ break;14451467 }1446146814471447- return 0;14691469+ if (err)14701470+ return err;14711471+14721472+ if (!on) {14731473+ err = mv88e6393x_serdes_power_lane(chip, lane, false);14741474+ if (err)14751475+ return err;14761476+14771477+ err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,14781478+ false);14791479+ }14801480+14811481+ return err;14481482}
···22862286 bool carry_xdp_prog)22872287{22882288 struct bpf_prog *xdp_prog;22892289- int i, t;22892289+ int i, t, ret;2290229022912291- mlx4_en_copy_priv(tmp, priv, prof);22912291+ ret = mlx4_en_copy_priv(tmp, priv, prof);22922292+ if (ret) {22932293+ en_warn(priv, "%s: mlx4_en_copy_priv() failed, return\n",22942294+ __func__);22952295+ return ret;22962296+ }2292229722932298 if (mlx4_en_alloc_resources(tmp)) {22942299 en_warn(priv,
+1-1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···341341 case MLX5_CMD_OP_DEALLOC_SF:342342 case MLX5_CMD_OP_DESTROY_UCTX:343343 case MLX5_CMD_OP_DESTROY_UMEM:344344+ case MLX5_CMD_OP_MODIFY_RQT:344345 return MLX5_CMD_STAT_OK;345346346347 case MLX5_CMD_OP_QUERY_HCA_CAP:···447446 case MLX5_CMD_OP_MODIFY_TIS:448447 case MLX5_CMD_OP_QUERY_TIS:449448 case MLX5_CMD_OP_CREATE_RQT:450450- case MLX5_CMD_OP_MODIFY_RQT:451449 case MLX5_CMD_OP_QUERY_RQT:452450453451 case MLX5_CMD_OP_CREATE_FLOW_TABLE:
···130130 /* If vports min rate divider is 0 but their group has bw_share configured, then131131 * need to set bw_share for vports to minimal value.132132 */133133- if (!group_level && !max_guarantee && group->bw_share)133133+ if (!group_level && !max_guarantee && group && group->bw_share)134134 return 1;135135 return 0;136136}···423423 return err;424424425425 /* Recalculate bw share weights of old and new groups */426426- if (vport->qos.bw_share) {426426+ if (vport->qos.bw_share || new_group->bw_share) {427427 esw_qos_normalize_vports_min_rate(esw, curr_group, extack);428428 esw_qos_normalize_vports_min_rate(esw, new_group, extack);429429 }
···329329esw_is_indir_table(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr)330330{331331 struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;332332+ bool result = false;332333 int i;333334334334- for (i = esw_attr->split_count; i < esw_attr->out_count; i++)335335+ /* Indirect table is supported only for flows with in_port uplink336336+ * and the destination is vport on the same eswitch as the uplink,337337+ * return false in case at least one of destinations doesn't meet338338+ * this criteria.339339+ */340340+ for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {335341 if (esw_attr->dests[i].rep &&336342 mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,337337- esw_attr->dests[i].mdev))338338- return true;339339- return false;343343+ esw_attr->dests[i].mdev)) {344344+ result = true;345345+ } else {346346+ result = false;347347+ break;348348+ }349349+ }350350+ return result;340351}341352342353static int···25232512 struct mlx5_eswitch *esw = master->priv.eswitch;25242513 struct mlx5_flow_table_attr ft_attr = {25252514 .max_fte = 1, .prio = 0, .level = 0,25152515+ .flags = MLX5_FLOW_TABLE_OTHER_VPORT,25262516 };25272517 struct mlx5_flow_namespace *egress_ns;25282518 struct mlx5_flow_table *acl;
···3131 dev->timeouts->to[type] = val;3232}33333434-static void tout_set_def_val(struct mlx5_core_dev *dev)3434+void mlx5_tout_set_def_val(struct mlx5_core_dev *dev)3535{3636 int i;37373838- for (i = MLX5_TO_FW_PRE_INIT_TIMEOUT_MS; i < MAX_TIMEOUT_TYPES; i++)3838+ for (i = 0; i < MAX_TIMEOUT_TYPES; i++)3939 tout_set(dev, tout_def_sw_val[i], i);4040}4141···4545 if (!dev->timeouts)4646 return -ENOMEM;47474848- tout_set_def_val(dev);4948 return 0;5049}5150
···8989#define IWL_EXPORT_SYMBOL(sym)9090#endif91919292+/* max retry for init flow */9393+#define IWL_MAX_INIT_RETRY 29494+9295#endif /* __iwl_drv_h__ */
+23-1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···1616#include <net/ieee80211_radiotap.h>1717#include <net/tcp.h>18181919+#include "iwl-drv.h"1920#include "iwl-op-mode.h"2021#include "iwl-io.h"2122#include "mvm.h"···11181117{11191118 struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);11201119 int ret;11201120+ int retry, max_retry = 0;1121112111221122 mutex_lock(&mvm->mutex);11231123- ret = __iwl_mvm_mac_start(mvm);11231123+11241124+ /* we are starting the mac not in error flow, and restart is enabled */11251125+ if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) &&11261126+ iwlwifi_mod_params.fw_restart) {11271127+ max_retry = IWL_MAX_INIT_RETRY;11281128+ /*11291129+ * This will prevent mac80211 recovery flows from triggering11301130+ * during init failures11311131+ */11321132+ set_bit(IWL_MVM_STATUS_STARTING, &mvm->status);11331133+ }11341134+11351135+ for (retry = 0; retry <= max_retry; retry++) {11361136+ ret = __iwl_mvm_mac_start(mvm);11371137+ if (!ret)11381138+ break;11391139+11401140+ IWL_ERR(mvm, "mac start retry %d\n", retry);11411141+ }11421142+ clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);11431143+11241144 mutex_unlock(&mvm->mutex);1125114511261146 return ret;
+3
drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
···11231123 * @IWL_MVM_STATUS_FIRMWARE_RUNNING: firmware is running11241124 * @IWL_MVM_STATUS_NEED_FLUSH_P2P: need to flush P2P bcast STA11251125 * @IWL_MVM_STATUS_IN_D3: in D3 (or at least about to go into it)11261126+ * @IWL_MVM_STATUS_STARTING: starting mac,11271127+ * used to disable restart flow while in STARTING state11261128 */11271129enum iwl_mvm_status {11281130 IWL_MVM_STATUS_HW_RFKILL,···11361134 IWL_MVM_STATUS_FIRMWARE_RUNNING,11371135 IWL_MVM_STATUS_NEED_FLUSH_P2P,11381136 IWL_MVM_STATUS_IN_D3,11371137+ IWL_MVM_STATUS_STARTING,11391138};1140113911411140/* Keep track of completed init configuration */
+5
drivers/net/wireless/intel/iwlwifi/mvm/ops.c
···686686 int ret;687687688688 rtnl_lock();689689+ wiphy_lock(mvm->hw->wiphy);689690 mutex_lock(&mvm->mutex);690691691692 ret = iwl_run_init_mvm_ucode(mvm);···702701 iwl_mvm_stop_device(mvm);703702704703 mutex_unlock(&mvm->mutex);704704+ wiphy_unlock(mvm->hw->wiphy);705705 rtnl_unlock();706706707707 if (ret < 0)···16021600 */16031601 if (!mvm->fw_restart && fw_error) {16041602 iwl_fw_error_collect(&mvm->fwrt, false);16031603+ } else if (test_bit(IWL_MVM_STATUS_STARTING,16041604+ &mvm->status)) {16051605+ IWL_ERR(mvm, "Starting mac, retry will be triggered anyway\n");16051606 } else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {16061607 struct iwl_mvm_reprobe *reprobe;16071608
+8-2
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···13391339 u16 mac_type, u8 mac_step,13401340 u16 rf_type, u8 cdb, u8 rf_id, u8 no_160, u8 cores)13411341{13421342+ int num_devices = ARRAY_SIZE(iwl_dev_info_table);13421343 int i;1343134413441344- for (i = ARRAY_SIZE(iwl_dev_info_table) - 1; i >= 0; i--) {13451345+ if (!num_devices)13461346+ return NULL;13471347+13481348+ for (i = num_devices - 1; i >= 0; i--) {13451349 const struct iwl_dev_info *dev_info = &iwl_dev_info_table[i];1346135013471351 if (dev_info->device != (u16)IWL_CFG_ANY &&···14461442 */14471443 if (iwl_trans->trans_cfg->rf_id &&14481444 iwl_trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_9000 &&14491449- !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans))14451445+ !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans)) {14461446+ ret = -EINVAL;14501447 goto out_free_trans;14481448+ }1451144914521450 dev_info = iwl_pci_find_dev_info(pdev->device, pdev->subsystem_device,14531451 CSR_HW_REV_TYPE(iwl_trans->hw_rev),
···173173 if (!(cb->flags & MT_TX_CB_DMA_DONE))174174 continue;175175176176- if (!time_is_after_jiffies(cb->jiffies +176176+ if (time_is_after_jiffies(cb->jiffies +177177 MT_TX_STATUS_SKB_TIMEOUT))178178 continue;179179 }
+3
drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
···2525 if (status == -ENODEV || status == -ENOENT)2626 return true;27272828+ if (!test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags))2929+ return false;3030+2831 if (status == -EPROTO || status == -ETIMEDOUT)2932 rt2x00dev->num_proto_errs++;3033 else
···463463464464static int __init init_dtpm(void)465465{466466- struct dtpm_descr *dtpm_descr;467467-468466 pct = powercap_register_control_type(NULL, "dtpm", NULL);469467 if (IS_ERR(pct)) {470468 pr_err("Failed to register control type\n");471469 return PTR_ERR(pct);472470 }473473-474474- for_each_dtpm_table(dtpm_descr)475475- dtpm_descr->init();476471477472 return 0;478473}
+2-7
drivers/scsi/lpfc/lpfc_els.c
···50955095 /* NPort Recovery mode or node is just allocated */50965096 if (!lpfc_nlp_not_used(ndlp)) {50975097 /* A LOGO is completing and the node is in NPR state.50985098- * If this a fabric node that cleared its transport50995099- * registration, release the rpi.50985098+ * Just unregister the RPI because the node is still50995099+ * required.51005100 */51015101- spin_lock_irq(&ndlp->lock);51025102- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;51035103- if (phba->sli_rev == LPFC_SLI_REV4)51045104- ndlp->nlp_flag |= NLP_RELEASE_RPI;51055105- spin_unlock_irq(&ndlp->lock);51065101 lpfc_unreg_rpi(vport, ndlp);51075102 } else {51085103 /* Indicate the node has already released, should
···15331533 tristate "LiteUART serial port support"15341534 depends on HAS_IOMEM15351535 depends on OF || COMPILE_TEST15361536- depends on LITEX15361536+ depends on LITEX || COMPILE_TEST15371537 select SERIAL_CORE15381538 help15391539 This driver is for the FPGA-based LiteUART serial controller from LiteX
···10751075 goto out;1076107610771077 if (!tty_io_error(tty)) {10781078+ if (uport->rs485.flags & SER_RS485_ENABLED) {10791079+ set &= ~TIOCM_RTS;10801080+ clear &= ~TIOCM_RTS;10811081+ }10821082+10781083 uart_update_mctrl(uport, set, clear);10791084 ret = 0;10801085 }···15541549{15551550 struct uart_state *state = container_of(port, struct uart_state, port);15561551 struct uart_port *uport = uart_port_check(state);15521552+ char *buf;1557155315581554 /*15591555 * At this point, we stop accepting input. To do this, we···15761570 */15771571 tty_port_set_suspended(port, 0);1578157215791579- uart_change_pm(state, UART_PM_STATE_OFF);15731573+ /*15741574+ * Free the transmit buffer.15751575+ */15761576+ spin_lock_irq(&uport->lock);15771577+ buf = state->xmit.buf;15781578+ state->xmit.buf = NULL;15791579+ spin_unlock_irq(&uport->lock);1580158015811581+ if (buf)15821582+ free_page((unsigned long)buf);15831583+15841584+ uart_change_pm(state, UART_PM_STATE_OFF);15811585}1582158615831587static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
+4-16
drivers/usb/cdns3/cdns3-gadget.c
···337337 cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs);338338}339339340340-static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req)341341-{342342- struct cdns3_endpoint *priv_ep = priv_req->priv_ep;343343- int current_trb = priv_req->start_trb;344344-345345- while (current_trb != priv_req->end_trb) {346346- cdns3_ep_inc_deq(priv_ep);347347- current_trb = priv_ep->dequeue;348348- }349349-350350- cdns3_ep_inc_deq(priv_ep);351351-}352352-353340/**354341 * cdns3_allow_enable_l1 - enable/disable permits to transition to L1.355342 * @priv_dev: Extended gadget object···1504151715051518 trb = priv_ep->trb_pool + priv_ep->dequeue;1506151915071507- /* Request was dequeued and TRB was changed to TRB_LINK. */15081508- if (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {15201520+ /* The TRB was changed as link TRB, and the request was handled at ep_dequeue */15211521+ while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {15091522 trace_cdns3_complete_trb(priv_ep, trb);15101510- cdns3_move_deq_to_next_trb(priv_req);15231523+ cdns3_ep_inc_deq(priv_ep);15241524+ trb = priv_ep->trb_pool + priv_ep->dequeue;15111525 }1512152615131527 if (!request->stream_id) {
+3
drivers/usb/cdns3/cdnsp-mem.c
···987987988988 /* Set up the endpoint ring. */989989 pep->ring = cdnsp_ring_alloc(pdev, 2, ring_type, max_packet, mem_flags);990990+ if (!pep->ring)991991+ return -ENOMEM;992992+990993 pep->skip = false;991994992995 /* Fill the endpoint context */
···366366/* Must be called with xhci->lock held, releases and acquires lock back */367367static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)368368{369369- u32 temp_32;369369+ struct xhci_segment *new_seg = xhci->cmd_ring->deq_seg;370370+ union xhci_trb *new_deq = xhci->cmd_ring->dequeue;371371+ u64 crcr;370372 int ret;371373372374 xhci_dbg(xhci, "Abort command ring\n");···377375378376 /*379377 * The control bits like command stop, abort are located in lower380380- * dword of the command ring control register. Limit the write381381- * to the lower dword to avoid corrupting the command ring pointer382382- * in case the command ring is stopped by the time the upper dword383383- * is written.378378+ * dword of the command ring control register.379379+ * Some controllers require all 64 bits to be written to abort the ring.380380+ * Make sure the upper dword is valid, pointing to the next command,381381+ * avoiding corrupting the command ring pointer in case the command ring382382+ * is stopped by the time the upper dword is written.384383 */385385- temp_32 = readl(&xhci->op_regs->cmd_ring);386386- writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);384384+ next_trb(xhci, NULL, &new_seg, &new_deq);385385+ if (trb_is_link(new_deq))386386+ next_trb(xhci, NULL, &new_seg, &new_deq);387387+388388+ crcr = xhci_trb_virt_to_dma(new_seg, new_deq);389389+ xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);387390388391 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the389392 * completion of the Command Abort operation. If CRR is not negated in 5
-4
drivers/usb/typec/tcpm/tcpm.c
···41104110 tcpm_try_src(port) ? SRC_TRY41114111 : SNK_ATTACHED,41124112 0);41134113- else41144114- /* Wait for VBUS, but not forever */41154115- tcpm_set_state(port, PORT_RESET, PD_T_PS_SOURCE_ON);41164113 break;41174117-41184114 case SRC_TRY:41194115 port->try_src_count++;41204116 tcpm_set_cc(port, tcpm_rp_cc(port));
···232232}233233#endif /* CONFIG_VFIO_NOIOMMU */234234235235-/**235235+/*236236 * IOMMU driver registration237237 */238238int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops)···285285 unsigned long action, void *data);286286static void vfio_group_get(struct vfio_group *group);287287288288-/**288288+/*289289 * Container objects - containers are created when /dev/vfio/vfio is290290 * opened, but their lifecycle extends until the last user is done, so291291 * it's freed via kref. Must support container/group/device being···309309 kref_put(&container->kref, vfio_container_release);310310}311311312312-/**312312+/*313313 * Group objects - create, release, get, put, search314314 */315315static struct vfio_group *···488488 return group;489489}490490491491-/**491491+/*492492 * Device objects - create, release, get, put, search493493 */494494/* Device reference always implies a group reference */···595595 return ret;596596}597597598598-/**598598+/*599599 * Async device support600600 */601601static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev)···689689 return NOTIFY_OK;690690}691691692692-/**692692+/*693693 * VFIO driver API694694 */695695void vfio_init_group_dev(struct vfio_device *device, struct device *dev,···831831}832832EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev);833833834834-/**834834+/*835835 * Get a reference to the vfio_device for a device. 
Even if the836836 * caller thinks they own the device, they could be racing with a837837 * release call path, so we can't trust drvdata for the shortcut.···965965}966966EXPORT_SYMBOL_GPL(vfio_unregister_group_dev);967967968968-/**968968+/*969969 * VFIO base fd, /dev/vfio/vfio970970 */971971static long vfio_ioctl_check_extension(struct vfio_container *container,···11831183 .compat_ioctl = compat_ptr_ioctl,11841184};1185118511861186-/**11861186+/*11871187 * VFIO Group fd, /dev/vfio/$GROUP11881188 */11891189static void __vfio_group_unset_container(struct vfio_group *group)···15361536 .release = vfio_group_fops_release,15371537};1538153815391539-/**15391539+/*15401540 * VFIO Device fd15411541 */15421542static int vfio_device_fops_release(struct inode *inode, struct file *filep)···16111611 .mmap = vfio_device_fops_mmap,16121612};1613161316141614-/**16141614+/*16151615 * External user API, exported by symbols to be linked dynamically.16161616 *16171617 * The protocol includes:···16591659}16601660EXPORT_SYMBOL_GPL(vfio_group_get_external_user);1661166116621662-/**16621662+/*16631663 * External user API, exported by symbols to be linked dynamically.16641664 * The external user passes in a device pointer16651665 * to verify that:···17251725}17261726EXPORT_SYMBOL_GPL(vfio_external_check_extension);1727172717281728-/**17281728+/*17291729 * Sub-module support17301730 */17311731/*···22722272}22732273EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);2274227422752275-/**22752275+/*22762276 * Module/class support22772277 */22782278static char *vfio_devnode(struct device *dev, umode_t *mode)
+9-5
drivers/video/console/vgacon.c
···366366 struct uni_pagedir *p;367367368368 /*369369- * We cannot be loaded as a module, therefore init is always 1,370370- * but vgacon_init can be called more than once, and init will371371- * not be 1.369369+ * We cannot be loaded as a module, therefore init will be 1370370+ * if we are the default console, however if we are a fallback371371+ * console, for example if fbcon has failed registration, then372372+ * init will be 0, so we need to make sure our boot parameters373373+ * have been copied to the console structure for vgacon_resize374374+ * ultimately called by vc_resize. Any subsequent calls to375375+ * vgacon_init will have init set to 0 too.372376 */373377 c->vc_can_do_color = vga_can_do_color;378378+ c->vc_scan_lines = vga_scan_lines;379379+ c->vc_font.height = c->vc_cell_height = vga_video_font_height;374380375381 /* set dimensions manually if init != 0 since vc_resize() will fail */376382 if (init) {···385379 } else386380 vc_resize(c, vga_video_num_columns, vga_video_num_lines);387381388388- c->vc_scan_lines = vga_scan_lines;389389- c->vc_font.height = c->vc_cell_height = vga_video_font_height;390382 c->vc_complement_mask = 0x7700;391383 if (vga_512_chars)392384 c->vc_hi_font_mask = 0x0800;
+5-6
fs/cifs/connect.c
···15621562 /* fscache server cookies are based on primary channel only */15631563 if (!CIFS_SERVER_IS_CHAN(tcp_ses))15641564 cifs_fscache_get_client_cookie(tcp_ses);15651565+#ifdef CONFIG_CIFS_FSCACHE15661566+ else15671567+ tcp_ses->fscache = tcp_ses->primary_server->fscache;15681568+#endif /* CONFIG_CIFS_FSCACHE */1565156915661570 /* queue echo request delayed work */15671571 queue_delayed_work(cifsiod_wq, &tcp_ses->echo, tcp_ses->echo_interval);···30503046 cifs_dbg(VFS, "read only mount of RW share\n");30513047 /* no need to log a RW mount of a typical RW share */30523048 }30533053- /*30543054- * The cookie is initialized from volume info returned above.30553055- * Inside cifs_fscache_get_super_cookie it checks30563056- * that we do not get super cookie twice.30573057- */30583058- cifs_fscache_get_super_cookie(tcon);30593049 }3060305030613051 /*···34243426 */34253427 mount_put_conns(mnt_ctx);34263428 mount_get_dfs_conns(mnt_ctx);34293429+ set_root_ses(mnt_ctx);3427343034283431 full_path = build_unc_path_to_root(ctx, cifs_sb, true);34293432 if (IS_ERR(full_path))
+10-36
fs/cifs/fscache.c
···1616 * Key layout of CIFS server cache index object1717 */1818struct cifs_server_key {1919- struct {2020- uint16_t family; /* address family */2121- __be16 port; /* IP port */2222- } hdr;2323- union {2424- struct in_addr ipv4_addr;2525- struct in6_addr ipv6_addr;2626- };1919+ __u64 conn_id;2720} __packed;28212922/*···2431 */2532void cifs_fscache_get_client_cookie(struct TCP_Server_Info *server)2633{2727- const struct sockaddr *sa = (struct sockaddr *) &server->dstaddr;2828- const struct sockaddr_in *addr = (struct sockaddr_in *) sa;2929- const struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *) sa;3034 struct cifs_server_key key;3131- uint16_t key_len = sizeof(key.hdr);3232-3333- memset(&key, 0, sizeof(key));34353536 /*3636- * Should not be a problem as sin_family/sin6_family overlays3737- * sa_family field3737+ * Check if cookie was already initialized so don't reinitialize it.3838+ * In the future, as we integrate with newer fscache features,3939+ * we may want to instead add a check if cookie has changed3840 */3939- key.hdr.family = sa->sa_family;4040- switch (sa->sa_family) {4141- case AF_INET:4242- key.hdr.port = addr->sin_port;4343- key.ipv4_addr = addr->sin_addr;4444- key_len += sizeof(key.ipv4_addr);4545- break;4646-4747- case AF_INET6:4848- key.hdr.port = addr6->sin6_port;4949- key.ipv6_addr = addr6->sin6_addr;5050- key_len += sizeof(key.ipv6_addr);5151- break;5252-5353- default:5454- cifs_dbg(VFS, "Unknown network family '%d'\n", sa->sa_family);5555- server->fscache = NULL;4141+ if (server->fscache)5642 return;5757- }4343+4444+ memset(&key, 0, sizeof(key));4545+ key.conn_id = server->conn_id;58465947 server->fscache =6048 fscache_acquire_cookie(cifs_fscache_netfs.primary_index,6149 &cifs_fscache_server_index_def,6262- &key, key_len,5050+ &key, sizeof(key),6351 NULL, 0,6452 server, 0, true);6553 cifs_dbg(FYI, "%s: (0x%p/0x%p)\n",···6692 * In the future, as we integrate with newer fscache features,6793 * we may want to instead add a check if cookie has 
changed6894 */6969- if (tcon->fscache == NULL)9595+ if (tcon->fscache)7096 return;71977298 sharename = extract_sharename(tcon->treeName);
+7
fs/cifs/inode.c
···13761376 inode = ERR_PTR(rc);13771377 }1378137813791379+ /*13801380+ * The cookie is initialized from volume info returned above.13811381+ * Inside cifs_fscache_get_super_cookie it checks13821382+ * that we do not get super cookie twice.13831383+ */13841384+ cifs_fscache_get_super_cookie(tcon);13851385+13791386out:13801387 kfree(path);13811388 free_xid(xid);
···1857185718581858void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state)18591859{18601860- struct gfs2_holder mock_gh = { .gh_gl = gl, .gh_state = state, };18611860 unsigned long delay = 0;18621861 unsigned long holdtime;18631862 unsigned long now = jiffies;···18891890 * keep the glock until the last strong holder is done with it.18901891 */18911892 if (!find_first_strong_holder(gl)) {18921892- if (state == LM_ST_UNLOCKED)18931893- mock_gh.gh_state = LM_ST_EXCLUSIVE;18931893+ struct gfs2_holder mock_gh = {18941894+ .gh_gl = gl,18951895+ .gh_state = (state == LM_ST_UNLOCKED) ?18961896+ LM_ST_EXCLUSIVE : state,18971897+ .gh_iflags = BIT(HIF_HOLDER)18981898+ };18991899+18941900 demote_incompat_holders(gl, &mock_gh);18951901 }18961902 handle_callback(gl, state, delay, true);
+45-64
fs/gfs2/inode.c
···4040static const struct inode_operations gfs2_dir_iops;4141static const struct inode_operations gfs2_symlink_iops;42424343-static int iget_test(struct inode *inode, void *opaque)4444-{4545- u64 no_addr = *(u64 *)opaque;4646-4747- return GFS2_I(inode)->i_no_addr == no_addr;4848-}4949-5050-static int iget_set(struct inode *inode, void *opaque)5151-{5252- u64 no_addr = *(u64 *)opaque;5353-5454- GFS2_I(inode)->i_no_addr = no_addr;5555- inode->i_ino = no_addr;5656- return 0;5757-}5858-5959-static struct inode *gfs2_iget(struct super_block *sb, u64 no_addr)6060-{6161- struct inode *inode;6262-6363-repeat:6464- inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr);6565- if (!inode)6666- return inode;6767- if (is_bad_inode(inode)) {6868- iput(inode);6969- goto repeat;7070- }7171- return inode;7272-}7373-7443/**7544 * gfs2_set_iop - Sets inode operations7645 * @inode: The inode with correct i_mode filled in···73104 }74105}75106107107+static int iget_test(struct inode *inode, void *opaque)108108+{109109+ u64 no_addr = *(u64 *)opaque;110110+111111+ return GFS2_I(inode)->i_no_addr == no_addr;112112+}113113+114114+static int iget_set(struct inode *inode, void *opaque)115115+{116116+ u64 no_addr = *(u64 *)opaque;117117+118118+ GFS2_I(inode)->i_no_addr = no_addr;119119+ inode->i_ino = no_addr;120120+ return 0;121121+}122122+76123/**77124 * gfs2_inode_lookup - Lookup an inode78125 * @sb: The super block···117132{118133 struct inode *inode;119134 struct gfs2_inode *ip;120120- struct gfs2_glock *io_gl = NULL;121135 struct gfs2_holder i_gh;122136 int error;123137124138 gfs2_holder_mark_uninitialized(&i_gh);125125- inode = gfs2_iget(sb, no_addr);139139+ inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr);126140 if (!inode)127141 return ERR_PTR(-ENOMEM);128142···129145130146 if (inode->i_state & I_NEW) {131147 struct gfs2_sbd *sdp = GFS2_SB(inode);148148+ struct gfs2_glock *io_gl;132149133150 error = gfs2_glock_get(sdp, no_addr, &gfs2_inode_glops, CREATE, 
&ip->i_gl);134151 if (unlikely(error))135152 goto fail;136136- flush_delayed_work(&ip->i_gl->gl_work);137137-138138- error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl);139139- if (unlikely(error))140140- goto fail;141141- if (blktype != GFS2_BLKST_UNLINKED)142142- gfs2_cancel_delete_work(io_gl);143153144154 if (type == DT_UNKNOWN || blktype != GFS2_BLKST_FREE) {145155 /*146156 * The GL_SKIP flag indicates to skip reading the inode147147- * block. We read the inode with gfs2_inode_refresh157157+ * block. We read the inode when instantiating it148158 * after possibly checking the block type.149159 */150160 error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE,···159181 }160182 }161183162162- glock_set_object(ip->i_gl, ip);163184 set_bit(GLF_INSTANTIATE_NEEDED, &ip->i_gl->gl_flags);164164- error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);185185+186186+ error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl);165187 if (unlikely(error))166188 goto fail;167167- glock_set_object(ip->i_iopen_gh.gh_gl, ip);189189+ if (blktype != GFS2_BLKST_UNLINKED)190190+ gfs2_cancel_delete_work(io_gl);191191+ error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh);168192 gfs2_glock_put(io_gl);169169- io_gl = NULL;193193+ if (unlikely(error))194194+ goto fail;170195171196 /* Lowest possible timestamp; will be overwritten in gfs2_dinode_in. 
*/172197 inode->i_atime.tv_sec = 1LL << (8 * sizeof(inode->i_atime.tv_sec) - 1);173198 inode->i_atime.tv_nsec = 0;174199200200+ glock_set_object(ip->i_gl, ip);201201+175202 if (type == DT_UNKNOWN) {176203 /* Inode glock must be locked already */177204 error = gfs2_instantiate(&i_gh);178178- if (error)205205+ if (error) {206206+ glock_clear_object(ip->i_gl, ip);179207 goto fail;208208+ }180209 } else {181210 ip->i_no_formal_ino = no_formal_ino;182211 inode->i_mode = DT2IF(type);···191206192207 if (gfs2_holder_initialized(&i_gh))193208 gfs2_glock_dq_uninit(&i_gh);209209+ glock_set_object(ip->i_iopen_gh.gh_gl, ip);194210195211 gfs2_set_iop(inode);212212+ unlock_new_inode(inode);196213 }197214198215 if (no_formal_ino && ip->i_no_formal_ino &&199216 no_formal_ino != ip->i_no_formal_ino) {200200- error = -ESTALE;201201- if (inode->i_state & I_NEW)202202- goto fail;203217 iput(inode);204204- return ERR_PTR(error);218218+ return ERR_PTR(-ESTALE);205219 }206206-207207- if (inode->i_state & I_NEW)208208- unlock_new_inode(inode);209220210221 return inode;211222212223fail:213213- if (gfs2_holder_initialized(&ip->i_iopen_gh)) {214214- glock_clear_object(ip->i_iopen_gh.gh_gl, ip);224224+ if (gfs2_holder_initialized(&ip->i_iopen_gh))215225 gfs2_glock_dq_uninit(&ip->i_iopen_gh);216216- }217217- if (io_gl)218218- gfs2_glock_put(io_gl);219226 if (gfs2_holder_initialized(&i_gh))220227 gfs2_glock_dq_uninit(&i_gh);221228 iget_failed(inode);···707730 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl);708731 if (error)709732 goto fail_free_inode;710710- flush_delayed_work(&ip->i_gl->gl_work);711733712734 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl);713735 if (error)714736 goto fail_free_inode;715737 gfs2_cancel_delete_work(io_gl);716738739739+ error = insert_inode_locked4(inode, ip->i_no_addr, iget_test, &ip->i_no_addr);740740+ BUG_ON(error);741741+717742 error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, ghs + 
1);718743 if (error)719744 goto fail_gunlock2;720745721721- glock_set_object(ip->i_gl, ip);722746 error = gfs2_trans_begin(sdp, blocks, 0);723747 if (error)724748 goto fail_gunlock2;···735757 if (error)736758 goto fail_gunlock2;737759760760+ glock_set_object(ip->i_gl, ip);738761 glock_set_object(io_gl, ip);739762 gfs2_set_iop(inode);740740- insert_inode_hash(inode);741763742764 free_vfs_inode = 0; /* After this point, the inode is no longer743765 considered free. Any failures need to undo···779801 gfs2_glock_dq_uninit(ghs + 1);780802 gfs2_glock_put(io_gl);781803 gfs2_qa_put(dip);804804+ unlock_new_inode(inode);782805 return error;783806784807fail_gunlock3:808808+ glock_clear_object(ip->i_gl, ip);785809 glock_clear_object(io_gl, ip);786810 gfs2_glock_dq_uninit(&ip->i_iopen_gh);787811fail_gunlock2:788788- glock_clear_object(io_gl, ip);789812 gfs2_glock_put(io_gl);790813fail_free_inode:791814 if (ip->i_gl) {792792- glock_clear_object(ip->i_gl, ip);793815 if (free_vfs_inode) /* else evict will do the put for us */794816 gfs2_glock_put(ip->i_gl);795817 }···807829 mark_inode_dirty(inode);808830 set_bit(free_vfs_inode ? GIF_FREE_VFS_INODE : GIF_ALLOC_FAILED,809831 &GFS2_I(inode)->i_flags);810810- iput(inode);832832+ if (inode->i_state & I_NEW)833833+ iget_failed(inode);834834+ else835835+ iput(inode);811836 }812837 if (gfs2_holder_initialized(ghs + 1))813838 gfs2_glock_dq_uninit(ghs + 1);
+7
fs/io-wq.c
···714714715715static inline bool io_should_retry_thread(long err)716716{717717+ /*718718+ * Prevent perpetual task_work retry if the task (or its group) is719719+ * exiting.720720+ */721721+ if (fatal_signal_pending(current))722722+ return false;723723+717724 switch (err) {718725 case -EAGAIN:719726 case -ERESTARTSYS:
+2-2
fs/netfs/read_helper.c
···10081008}10091009EXPORT_SYMBOL(netfs_readpage);1010101010111011-/**10121012- * netfs_skip_folio_read - prep a folio for writing without reading first10111011+/*10121012+ * Prepare a folio for writing without reading first10131013 * @folio: The folio being prepared10141014 * @pos: starting position for the write10151015 * @len: length of write
···133133 if (unlikely(READ_ONCE(sk->sk_napi_id) != skb->napi_id))134134 WRITE_ONCE(sk->sk_napi_id, skb->napi_id);135135#endif136136- sk_rx_queue_set(sk, skb);136136+ sk_rx_queue_update(sk, skb);137137}138138139139static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id)
+11
include/net/dst_cache.h
···8080}81818282/**8383+ * dst_cache_reset_now - invalidate the cache contents immediately8484+ * @dst_cache: the cache8585+ *8686+ * The caller must be sure there are no concurrent users, as this frees8787+ * all dst_cache users immediately, rather than waiting for the next8888+ * per-cpu usage like dst_cache_reset does. Most callers should use the8989+ * higher speed lazily-freed dst_cache_reset function instead.9090+ */9191+void dst_cache_reset_now(struct dst_cache *dst_cache);9292+9393+/**8394 * dst_cache_init - initialize the cache, allocating the required storage8495 * @dst_cache: the cache8596 * @gfp: allocation flags
···438438#ifdef CONFIG_IP_ROUTE_CLASSID439439static inline int fib_num_tclassid_users(struct net *net)440440{441441- return net->ipv4.fib_num_tclassid_users;441441+ return atomic_read(&net->ipv4.fib_num_tclassid_users);442442}443443#else444444static inline int fib_num_tclassid_users(struct net *net)
···19131913 return -1;19141914}1915191519161916-static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)19161916+static inline void __sk_rx_queue_set(struct sock *sk,19171917+ const struct sk_buff *skb,19181918+ bool force_set)19171919{19181920#ifdef CONFIG_SOCK_RX_QUEUE_MAPPING19191921 if (skb_rx_queue_recorded(skb)) {19201922 u16 rx_queue = skb_get_rx_queue(skb);1921192319221922- if (unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))19241924+ if (force_set ||19251925+ unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))19231926 WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue);19241927 }19251928#endif19291929+}19301930+19311931+static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)19321932+{19331933+ __sk_rx_queue_set(sk, skb, true);19341934+}19351935+19361936+static inline void sk_rx_queue_update(struct sock *sk, const struct sk_buff *skb)19371937+{19381938+ __sk_rx_queue_set(sk, skb, false);19261939}1927194019281941static inline void sk_rx_queue_clear(struct sock *sk)···24432430 * @sk: socket24442431 *24452432 * Use the per task page_frag instead of the per socket one for24462446- * optimization when we know that we're in the normal context and owns24332433+ * optimization when we know that we're in process context and own24472434 * everything that's associated with %current.24482435 *24492449- * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest24502450- * inside other socket operations and end up recursing into sk_page_frag()24512451- * while it's already in use.24362436+ * Both direct reclaim and page faults can nest inside other24372437+ * socket operations and end up recursing into sk_page_frag()24382438+ * while it's already in use: explicitly avoid task page_frag24392439+ * usage if the caller is potentially doing any of them.24402440+ * This assumes that page fault handlers use the GFP_NOFS flags.24522441 *24532442 * Return: a per task page_frag if context allows that,24542443 * 
otherwise a per socket one.24552444 */24562445static inline struct page_frag *sk_page_frag(struct sock *sk)24572446{24582458- if (gfpflags_normal_context(sk->sk_allocation))24472447+ if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==24482448+ (__GFP_DIRECT_RECLAIM | __GFP_FS))24592449 return ¤t->task_frag;2460245024612451 return &sk->sk_frag;
···196196 __u64 ctx_set_params;197197};198198199199+/*200200+ * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in201201+ * effect. The event size is sizeof(drm_event), since there is no additional202202+ * payload.203203+ */204204+#define VIRTGPU_EVENT_FENCE_SIGNALED 0x90000000205205+199206#define DRM_IOCTL_VIRTGPU_MAP \200207 DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)201208
+1-1
include/uapi/linux/if_ether.h
···117117#define ETH_P_IFE 0xED3E /* ForCES inter-FE LFB type */118118#define ETH_P_AF_IUCV 0xFBFB /* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */119119120120-#define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is less than this value120120+#define ETH_P_802_3_MIN 0x0600 /* If the value in the ethernet type is more than this value121121 * then the frame is Ethernet II. Else it is 802.3 */122122123123/*
···615615 .sum_exec_runtime = p->se.sum_exec_runtime,616616 };617617618618- task_cputime(p, &cputime.utime, &cputime.stime);618618+ if (task_cputime(p, &cputime.utime, &cputime.stime))619619+ cputime.sum_exec_runtime = task_sched_runtime(p);619620 cputime_adjust(&cputime, &p->prev_cputime, ut, st);620621}621622EXPORT_SYMBOL_GPL(task_cputime_adjusted);···829828 * add up the pending nohz execution time since the last830829 * cputime snapshot.831830 */832832-void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)831831+bool task_cputime(struct task_struct *t, u64 *utime, u64 *stime)833832{834833 struct vtime *vtime = &t->vtime;835834 unsigned int seq;836835 u64 delta;836836+ int ret;837837838838 if (!vtime_accounting_enabled()) {839839 *utime = t->utime;840840 *stime = t->stime;841841- return;841841+ return false;842842 }843843844844 do {845845+ ret = false;845846 seq = read_seqcount_begin(&vtime->seqcount);846847847848 *utime = t->utime;···853850 if (vtime->state < VTIME_SYS)854851 continue;855852853853+ ret = true;856854 delta = vtime_delta(vtime);857855858856 /*···865861 else866862 *utime += vtime->utime + delta;867863 } while (read_seqcount_retry(&vtime->seqcount, seq));864864+865865+ return ret;868866}869867870868static int vtime_state_fetch(struct vtime *vtime, int cpu)
+2-1
kernel/softirq.c
···595595{596596 __irq_enter_raw();597597598598- if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))598598+ if (tick_nohz_full_cpu(smp_processor_id()) ||599599+ (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)))599600 tick_irq_enter();600601601602 account_hardirq_enter(current);
+7
kernel/time/tick-sched.c
···13751375 now = ktime_get();13761376 if (ts->idle_active)13771377 tick_nohz_stop_idle(ts, now);13781378+ /*13791379+ * If all CPUs are idle, we may need to update a stale jiffies value.13801380+ * Note nohz_full is a special case: a timekeeper is guaranteed to stay13811381+ * alive but it might be busy looping with interrupts disabled in some13821382+ * rare case (typically stop machine). So we must make sure we have a13831383+ * last resort.13841384+ */13781385 if (ts->tick_stopped)13791386 tick_nohz_update_jiffies(now);13801387}
···1515#include <linux/jhash.h>1616#include <linux/slab.h>1717#include <linux/sort.h>1818+#include <linux/kmemleak.h>18191920#include "tracing_map.h"2021#include "trace.h"···308307 for (i = 0; i < a->n_pages; i++) {309308 if (!a->pages[i])310309 break;310310+ kmemleak_free(a->pages[i]);311311 free_page((unsigned long)a->pages[i]);312312 }313313···344342 a->pages[i] = (void *)get_zeroed_page(GFP_KERNEL);345343 if (!a->pages[i])346344 goto free;345345+ kmemleak_alloc(a->pages[i], PAGE_SIZE, 1, GFP_KERNEL);347346 }348347 out:349348 return a;
+6-6
lib/siphash.c
···4949 SIPROUND; \5050 return (v0 ^ v1) ^ (v2 ^ v3);51515252+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS5253u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)5354{5455 const u8 *end = data + len - (len % sizeof(u64));···8180 POSTAMBLE8281}8382EXPORT_SYMBOL(__siphash_aligned);8383+#endif84848585-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS8685u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)8786{8887 const u8 *end = data + len - (len % sizeof(u64));···114113 POSTAMBLE115114}116115EXPORT_SYMBOL(__siphash_unaligned);117117-#endif118116119117/**120118 * siphash_1u64 - compute 64-bit siphash PRF value of a u64···250250 HSIPROUND; \251251 return (v0 ^ v1) ^ (v2 ^ v3);252252253253+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS253254u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)254255{255256 const u8 *end = data + len - (len % sizeof(u64));···281280 HPOSTAMBLE282281}283282EXPORT_SYMBOL(__hsiphash_aligned);283283+#endif284284285285-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS286285u32 __hsiphash_unaligned(const void *data, size_t len,287286 const hsiphash_key_t *key)288287{···314313 HPOSTAMBLE315314}316315EXPORT_SYMBOL(__hsiphash_unaligned);317317-#endif318316319317/**320318 * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32···418418 HSIPROUND; \419419 return v1 ^ v3;420420421421+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS421422u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)422423{423424 const u8 *end = data + len - (len % sizeof(u32));···439438 HPOSTAMBLE440439}441440EXPORT_SYMBOL(__hsiphash_aligned);441441+#endif442442443443-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS444443u32 __hsiphash_unaligned(const void *data, size_t len,445444 const hsiphash_key_t *key)446445{···462461 HPOSTAMBLE463462}464463EXPORT_SYMBOL(__hsiphash_unaligned);465465-#endif466464467465/**468466 * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32
+4-1
net/core/dev.c
···42104210 if (dev->flags & IFF_UP) {42114211 int cpu = smp_processor_id(); /* ok because BHs are off */4212421242134213- if (txq->xmit_lock_owner != cpu) {42134213+ /* Other cpus might concurrently change txq->xmit_lock_owner42144214+ * to -1 or to their cpu id, but not to our id.42154215+ */42164216+ if (READ_ONCE(txq->xmit_lock_owner) != cpu) {42144217 if (dev_xmit_recursion())42154218 goto recursion_alert;42164219
···409409 goto err;410410411411 /* Find the output device */412412- out_dev = rcu_dereference(nh->nh_dev);412412+ out_dev = nh->nh_dev;413413 if (!mpls_output_possible(out_dev))414414 goto tx_err;415415···698698 (dev->addr_len != nh->nh_via_alen))699699 goto errout;700700701701- RCU_INIT_POINTER(nh->nh_dev, dev);701701+ nh->nh_dev = dev;702702703703 if (!(dev->flags & IFF_UP)) {704704 nh->nh_flags |= RTNH_F_DEAD;···14911491 kfree(mdev);14921492}1493149314941494-static void mpls_ifdown(struct net_device *dev, int event)14941494+static int mpls_ifdown(struct net_device *dev, int event)14951495{14961496 struct mpls_route __rcu **platform_label;14971497 struct net *net = dev_net(dev);14981498- u8 alive, deleted;14991498 unsigned index;1500149915011500 platform_label = rtnl_dereference(net->mpls.platform_label);15021501 for (index = 0; index < net->mpls.platform_labels; index++) {15031502 struct mpls_route *rt = rtnl_dereference(platform_label[index]);15031503+ bool nh_del = false;15041504+ u8 alive = 0;1504150515051506 if (!rt)15061507 continue;1507150815081508- alive = 0;15091509- deleted = 0;15091509+ if (event == NETDEV_UNREGISTER) {15101510+ u8 deleted = 0;15111511+15121512+ for_nexthops(rt) {15131513+ if (!nh->nh_dev || nh->nh_dev == dev)15141514+ deleted++;15151515+ if (nh->nh_dev == dev)15161516+ nh_del = true;15171517+ } endfor_nexthops(rt);15181518+15191519+ /* if there are no more nexthops, delete the route */15201520+ if (deleted == rt->rt_nhn) {15211521+ mpls_route_update(net, index, NULL, NULL);15221522+ continue;15231523+ }15241524+15251525+ if (nh_del) {15261526+ size_t size = sizeof(*rt) + rt->rt_nhn *15271527+ rt->rt_nh_size;15281528+ struct mpls_route *orig = rt;15291529+15301530+ rt = kmalloc(size, GFP_KERNEL);15311531+ if (!rt)15321532+ return -ENOMEM;15331533+ memcpy(rt, orig, size);15341534+ }15351535+ }15361536+15101537 change_nexthops(rt) {15111538 unsigned int nh_flags = nh->nh_flags;1512153915131513- if (rtnl_dereference(nh->nh_dev) != 
dev)15401540+ if (nh->nh_dev != dev)15141541 goto next;1515154215161543 switch (event) {···15501523 break;15511524 }15521525 if (event == NETDEV_UNREGISTER)15531553- RCU_INIT_POINTER(nh->nh_dev, NULL);15261526+ nh->nh_dev = NULL;1554152715551528 if (nh->nh_flags != nh_flags)15561529 WRITE_ONCE(nh->nh_flags, nh_flags);15571530next:15581531 if (!(nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)))15591532 alive++;15601560- if (!rtnl_dereference(nh->nh_dev))15611561- deleted++;15621533 } endfor_nexthops(rt);1563153415641535 WRITE_ONCE(rt->rt_nhn_alive, alive);1565153615661566- /* if there are no more nexthops, delete the route */15671567- if (event == NETDEV_UNREGISTER && deleted == rt->rt_nhn)15681568- mpls_route_update(net, index, NULL, NULL);15371537+ if (nh_del)15381538+ mpls_route_update(net, index, rt, NULL);15691539 }15401540+15411541+ return 0;15701542}1571154315721544static void mpls_ifup(struct net_device *dev, unsigned int flags)···15851559 alive = 0;15861560 change_nexthops(rt) {15871561 unsigned int nh_flags = nh->nh_flags;15881588- struct net_device *nh_dev =15891589- rtnl_dereference(nh->nh_dev);1590156215911563 if (!(nh_flags & flags)) {15921564 alive++;15931565 continue;15941566 }15951595- if (nh_dev != dev)15671567+ if (nh->nh_dev != dev)15961568 continue;15971569 alive++;15981570 nh_flags &= ~flags;···16211597 return NOTIFY_OK;1622159816231599 switch (event) {16001600+ int err;16011601+16241602 case NETDEV_DOWN:16251625- mpls_ifdown(dev, event);16031603+ err = mpls_ifdown(dev, event);16041604+ if (err)16051605+ return notifier_from_errno(err);16261606 break;16271607 case NETDEV_UP:16281608 flags = dev_get_flags(dev);···16371609 break;16381610 case NETDEV_CHANGE:16391611 flags = dev_get_flags(dev);16401640- if (flags & (IFF_RUNNING | IFF_LOWER_UP))16121612+ if (flags & (IFF_RUNNING | IFF_LOWER_UP)) {16411613 mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);16421642- else16431643- mpls_ifdown(dev, event);16141614+ } else {16151615+ err = mpls_ifdown(dev, 
event);16161616+ if (err)16171617+ return notifier_from_errno(err);16181618+ }16441619 break;16451620 case NETDEV_UNREGISTER:16461646- mpls_ifdown(dev, event);16211621+ err = mpls_ifdown(dev, event);16221622+ if (err)16231623+ return notifier_from_errno(err);16471624 mdev = mpls_dev_get(dev);16481625 if (mdev) {16491626 mpls_dev_sysctl_unregister(dev, mdev);···16591626 case NETDEV_CHANGENAME:16601627 mdev = mpls_dev_get(dev);16611628 if (mdev) {16621662- int err;16631663-16641629 mpls_dev_sysctl_unregister(dev, mdev);16651630 err = mpls_dev_sysctl_register(dev, mdev);16661631 if (err)···20251994 nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),20261995 nh->nh_via_alen))20271996 goto nla_put_failure;20282028- dev = rtnl_dereference(nh->nh_dev);19971997+ dev = nh->nh_dev;20291998 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))20301999 goto nla_put_failure;20312000 if (nh->nh_flags & RTNH_F_LINKDOWN)···20432012 goto nla_put_failure;2044201320452014 for_nexthops(rt) {20462046- dev = rtnl_dereference(nh->nh_dev);20152015+ dev = nh->nh_dev;20472016 if (!dev)20482017 continue;20492018···21542123static bool mpls_rt_uses_dev(struct mpls_route *rt,21552124 const struct net_device *dev)21562125{21572157- struct net_device *nh_dev;21582158-21592126 if (rt->rt_nhn == 1) {21602127 struct mpls_nh *nh = rt->rt_nh;2161212821622162- nh_dev = rtnl_dereference(nh->nh_dev);21632163- if (dev == nh_dev)21292129+ if (nh->nh_dev == dev)21642130 return true;21652131 } else {21662132 for_nexthops(rt) {21672167- nh_dev = rtnl_dereference(nh->nh_dev);21682168- if (nh_dev == dev)21332133+ if (nh->nh_dev == dev)21692134 return true;21702135 } endfor_nexthops(rt);21712136 }···22492222 size_t nhsize = 0;2250222322512224 for_nexthops(rt) {22522252- if (!rtnl_dereference(nh->nh_dev))22252225+ if (!nh->nh_dev)22532226 continue;22542227 nhsize += nla_total_size(sizeof(struct rtnexthop));22552228 /* RTA_VIA */···24952468 nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),24962469 
nh->nh_via_alen))24972470 goto nla_put_failure;24982498- dev = rtnl_dereference(nh->nh_dev);24712471+ dev = nh->nh_dev;24992472 if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))25002473 goto nla_put_failure;25012474···25342507 rt0 = mpls_rt_alloc(1, lo->addr_len, 0);25352508 if (IS_ERR(rt0))25362509 goto nort0;25372537- RCU_INIT_POINTER(rt0->rt_nh->nh_dev, lo);25102510+ rt0->rt_nh->nh_dev = lo;25382511 rt0->rt_protocol = RTPROT_KERNEL;25392512 rt0->rt_payload_type = MPT_IPV4;25402513 rt0->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;···25482521 rt2 = mpls_rt_alloc(1, lo->addr_len, 0);25492522 if (IS_ERR(rt2))25502523 goto nort2;25512551- RCU_INIT_POINTER(rt2->rt_nh->nh_dev, lo);25242524+ rt2->rt_nh->nh_dev = lo;25522525 rt2->rt_protocol = RTPROT_KERNEL;25532526 rt2->rt_payload_type = MPT_IPV6;25542527 rt2->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;
+1-1
net/mpls/internal.h
···8787};88888989struct mpls_nh { /* next hop label forwarding entry */9090- struct net_device __rcu *nh_dev;9090+ struct net_device *nh_dev;91919292 /* nh_flags is accessed under RCU in the packet path; it is9393 * modified handling netdev events with rtnl lock held
+5
net/netlink/af_netlink.c
···18521852 if (msg->msg_flags & MSG_OOB)18531853 return -EOPNOTSUPP;1854185418551855+ if (len == 0) {18561856+ pr_warn_once("Zero length message leads to an empty skb\n");18571857+ return -ENODATA;18581858+ }18591859+18551860 err = scm_send(sock, msg, &scm, true);18561861 if (err < 0)18571862 return err;
···135135 return bundle;136136}137137138138+static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)139139+{140140+ rxrpc_put_peer(bundle->params.peer);141141+ kfree(bundle);142142+}143143+138144void rxrpc_put_bundle(struct rxrpc_bundle *bundle)139145{140146 unsigned int d = bundle->debug_id;141147 unsigned int u = atomic_dec_return(&bundle->usage);142148143149 _debug("PUT B=%x %u", d, u);144144- if (u == 0) {145145- rxrpc_put_peer(bundle->params.peer);146146- kfree(bundle);147147- }150150+ if (u == 0)151151+ rxrpc_free_bundle(bundle);148152}149153150154/*···332328 return candidate;333329334330found_bundle_free:335335- kfree(candidate);331331+ rxrpc_free_bundle(candidate);336332found_bundle:337333 rxrpc_get_bundle(bundle);338334 spin_unlock(&local->client_bundles_lock);
+9-5
net/rxrpc/peer_object.c
···299299 return peer;300300}301301302302+static void rxrpc_free_peer(struct rxrpc_peer *peer)303303+{304304+ rxrpc_put_local(peer->local);305305+ kfree_rcu(peer, rcu);306306+}307307+302308/*303309 * Set up a new incoming peer. There shouldn't be any other matching peers304310 * since we've already done a search in the list from the non-reentrant context···371365 spin_unlock_bh(&rxnet->peer_hash_lock);372366373367 if (peer)374374- kfree(candidate);368368+ rxrpc_free_peer(candidate);375369 else376370 peer = candidate;377371 }···426420 list_del_init(&peer->keepalive_link);427421 spin_unlock_bh(&rxnet->peer_hash_lock);428422429429- rxrpc_put_local(peer->local);430430- kfree_rcu(peer, rcu);423423+ rxrpc_free_peer(peer);431424}432425433426/*···462457 if (n == 0) {463458 hash_del_rcu(&peer->hash_link);464459 list_del_init(&peer->keepalive_link);465465- rxrpc_put_local(peer->local);466466- kfree_rcu(peer, rcu);460460+ rxrpc_free_peer(peer);467461 }468462}469463
+6-2
net/smc/smc_close.c
···195195 int old_state;196196 long timeout;197197 int rc = 0;198198+ int rc1 = 0;198199199200 timeout = current->flags & PF_EXITING ?200201 0 : sock_flag(sk, SOCK_LINGER) ?···233232 /* actively shutdown clcsock before peer close it,234233 * prevent peer from entering TIME_WAIT state.235234 */236236- if (smc->clcsock && smc->clcsock->sk)237237- rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);235235+ if (smc->clcsock && smc->clcsock->sk) {236236+ rc1 = kernel_sock_shutdown(smc->clcsock,237237+ SHUT_RDWR);238238+ rc = rc ? rc : rc1;239239+ }238240 } else {239241 /* peer event has changed the state */240242 goto again;
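In the smc_close hunk, the clcsock shutdown result goes into a second variable and `rc = rc ? rc : rc1` keeps whichever error came first, so a late shutdown failure can no longer overwrite an earlier one. The selection reduces to:

```c
/* Keep the first error seen: a later step's result only matters if
 * nothing failed before it (mirrors "rc = rc ? rc : rc1"). */
int first_error(int rc, int rc1)
{
    return rc ? rc : rc1;
}
```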
+3-4
net/smc/smc_core.c
···625625void smc_lgr_cleanup_early(struct smc_connection *conn)626626{627627 struct smc_link_group *lgr = conn->lgr;628628- struct list_head *lgr_list;629628 spinlock_t *lgr_lock;630629631630 if (!lgr)632631 return;633632634633 smc_conn_free(conn);635635- lgr_list = smc_lgr_list_head(lgr, &lgr_lock);634634+ smc_lgr_list_head(lgr, &lgr_lock);636635 spin_lock_bh(lgr_lock);637636 /* do not use this link group for new connections */638638- if (!list_empty(lgr_list))639639- list_del_init(lgr_list);637637+ if (!list_empty(&lgr->list))638638+ list_del_init(&lgr->list);640639 spin_unlock_bh(lgr_lock);641640 __smc_lgr_terminate(lgr, true);642641}
···438438#define for_each_hda_codec_node(nid, codec) \439439 for ((nid) = (codec)->core.start_nid; (nid) < (codec)->core.end_nid; (nid)++)440440441441+/* Set the codec power_state flag to indicate to allow unsol event handling;442442+ * see hda_codec_unsol_event() in hda_bind.c. Calling this might confuse the443443+ * state tracking, so use with care.444444+ */445445+static inline void snd_hda_codec_allow_unsol_events(struct hda_codec *codec)446446+{447447+ codec->core.dev.power.power_state = PMSG_ON;448448+}449449+441450/*442451 * get widget capabilities443452 */
+5
sound/pci/hda/patch_cs8409.c
···750750 if (cs42l42->full_scale_vol)751751 cs8409_i2c_write(cs42l42, 0x2001, 0x01);752752753753+ /* we have to explicitly allow unsol event handling even during the754754+ * resume phase so that the jack event is processed properly755755+ */756756+ snd_hda_codec_allow_unsol_events(cs42l42->codec);757757+753758 cs42l42_enable_jack_detect(cs42l42);754759}755760
···20202121 if (comp_ids) {2222 for (i = 0; i < comp_ids->num_codecs; i++) {2323- if (acpi_dev_present(comp_ids->codecs[i], NULL, -1))2323+ if (acpi_dev_present(comp_ids->codecs[i], NULL, -1)) {2424+ strscpy(machine->id, comp_ids->codecs[i], ACPI_ID_LEN);2425 return true;2626+ }2527 }2628 }2729
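The ACPI hunk records which codec matched by copying its ID into the fixed-size `machine->id` buffer with `strscpy()`, which bounds the copy to the destination and always NUL-terminates. A stand-alone model of that helper's contract (`my_strscpy` is a local stand-in; the kernel's `strscpy()` is a kernel helper, not libc):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Local model of the kernel's strscpy(): copy at most size-1 bytes,
 * always NUL-terminate, and report truncation instead of silently
 * overrunning like strcpy() would. Returns the copied length, or
 * -E2BIG when the source did not fit. */
long my_strscpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size == 0)
        return -E2BIG;
    if (len >= size) {
        memcpy(dst, src, size - 1);
        dst[size - 1] = '\0';
        return -E2BIG;       /* truncated */
    }
    memcpy(dst, src, len + 1);
    return (long)len;
}

/* Demo with an 8-character ACPI-style ID ("10EC5682" is illustrative)
 * and a 9-byte destination, matching ACPI_ID_LEN-style sizing. */
long copy_id_demo(void)
{
    char id[9];

    return my_strscpy(id, "10EC5682", sizeof(id));
}

long copy_trunc_demo(void)
{
    char id[4];

    return my_strscpy(id, "10EC5682", sizeof(id));
}
```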
+7
sound/soc/sof/intel/hda.c
···5858 return -EINVAL;5959 }60606161+ /* DAI already configured, reset it before reconfiguring it */6262+ if (sof_dai->configured) {6363+ ret = hda_ctrl_dai_widget_free(w);6464+ if (ret < 0)6565+ return ret;6666+ }6767+6168 config = &sof_dai->dai_config[sof_dai->current_config];62696370 /*
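The SOF hunk frees an already-configured DAI widget before configuring it again, so reconfiguration starts from a clean slate rather than layering on stale state. The control flow, reduced to a toy state machine (the `toy_dai_*` names are illustrative):

```c
/* Toy DAI state: configuring twice without an intervening free is the
 * situation the hunk guards against. */
struct toy_dai { int configured; };

int toy_dai_free(struct toy_dai *d)
{
    d->configured = 0;
    return 0;
}

/* Reset an already-configured DAI before applying the new config, and
 * bail out if the reset itself fails. */
int toy_dai_config(struct toy_dai *d)
{
    if (d->configured) {
        int ret = toy_dai_free(d);

        if (ret < 0)
            return ret;
    }
    d->configured = 1;
    return 0;
}

/* Demo: reconfiguring a configured DAI succeeds and leaves it configured. */
int toy_dai_demo(void)
{
    struct toy_dai d = { .configured = 1 };

    return toy_dai_config(&d) == 0 && d.configured == 1;
}
```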
+146-33
sound/soc/tegra/tegra186_dspk.c
···2626 { TEGRA186_DSPK_CODEC_CTRL, 0x03000000 },2727};28282929-static int tegra186_dspk_get_control(struct snd_kcontrol *kcontrol,2929+static int tegra186_dspk_get_fifo_th(struct snd_kcontrol *kcontrol,3030 struct snd_ctl_elem_value *ucontrol)3131{3232 struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);3333 struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);34343535- if (strstr(kcontrol->id.name, "FIFO Threshold"))3636- ucontrol->value.integer.value[0] = dspk->rx_fifo_th;3737- else if (strstr(kcontrol->id.name, "OSR Value"))3838- ucontrol->value.integer.value[0] = dspk->osr_val;3939- else if (strstr(kcontrol->id.name, "LR Polarity Select"))4040- ucontrol->value.integer.value[0] = dspk->lrsel;4141- else if (strstr(kcontrol->id.name, "Channel Select"))4242- ucontrol->value.integer.value[0] = dspk->ch_sel;4343- else if (strstr(kcontrol->id.name, "Mono To Stereo"))4444- ucontrol->value.integer.value[0] = dspk->mono_to_stereo;4545- else if (strstr(kcontrol->id.name, "Stereo To Mono"))4646- ucontrol->value.integer.value[0] = dspk->stereo_to_mono;3535+ ucontrol->value.integer.value[0] = dspk->rx_fifo_th;47364837 return 0;4938}50395151-static int tegra186_dspk_put_control(struct snd_kcontrol *kcontrol,4040+static int tegra186_dspk_put_fifo_th(struct snd_kcontrol *kcontrol,5241 struct snd_ctl_elem_value *ucontrol)5342{5443 struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);5544 struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);5656- int val = ucontrol->value.integer.value[0];4545+ int value = ucontrol->value.integer.value[0];57465858- if (strstr(kcontrol->id.name, "FIFO Threshold"))5959- dspk->rx_fifo_th = val;6060- else if (strstr(kcontrol->id.name, "OSR Value"))6161- dspk->osr_val = val;6262- else if (strstr(kcontrol->id.name, "LR Polarity Select"))6363- dspk->lrsel = val;6464- else if (strstr(kcontrol->id.name, "Channel Select"))6565- dspk->ch_sel = val;6666- else if (strstr(kcontrol->id.name, "Mono 
To Stereo"))6767- dspk->mono_to_stereo = val;6868- else if (strstr(kcontrol->id.name, "Stereo To Mono"))6969- dspk->stereo_to_mono = val;4747+ if (value == dspk->rx_fifo_th)4848+ return 0;4949+5050+ dspk->rx_fifo_th = value;5151+5252+ return 1;5353+}5454+5555+static int tegra186_dspk_get_osr_val(struct snd_kcontrol *kcontrol,5656+ struct snd_ctl_elem_value *ucontrol)5757+{5858+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);5959+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);6060+6161+ ucontrol->value.enumerated.item[0] = dspk->osr_val;70627163 return 0;6464+}6565+6666+static int tegra186_dspk_put_osr_val(struct snd_kcontrol *kcontrol,6767+ struct snd_ctl_elem_value *ucontrol)6868+{6969+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);7070+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);7171+ unsigned int value = ucontrol->value.enumerated.item[0];7272+7373+ if (value == dspk->osr_val)7474+ return 0;7575+7676+ dspk->osr_val = value;7777+7878+ return 1;7979+}8080+8181+static int tegra186_dspk_get_pol_sel(struct snd_kcontrol *kcontrol,8282+ struct snd_ctl_elem_value *ucontrol)8383+{8484+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);8585+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);8686+8787+ ucontrol->value.enumerated.item[0] = dspk->lrsel;8888+8989+ return 0;9090+}9191+9292+static int tegra186_dspk_put_pol_sel(struct snd_kcontrol *kcontrol,9393+ struct snd_ctl_elem_value *ucontrol)9494+{9595+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);9696+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);9797+ unsigned int value = ucontrol->value.enumerated.item[0];9898+9999+ if (value == dspk->lrsel)100100+ return 0;101101+102102+ dspk->lrsel = value;103103+104104+ return 1;105105+}106106+107107+static int tegra186_dspk_get_ch_sel(struct snd_kcontrol *kcontrol,108108+ struct snd_ctl_elem_value 
*ucontrol)109109+{110110+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);111111+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);112112+113113+ ucontrol->value.enumerated.item[0] = dspk->ch_sel;114114+115115+ return 0;116116+}117117+118118+static int tegra186_dspk_put_ch_sel(struct snd_kcontrol *kcontrol,119119+ struct snd_ctl_elem_value *ucontrol)120120+{121121+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);122122+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);123123+ unsigned int value = ucontrol->value.enumerated.item[0];124124+125125+ if (value == dspk->ch_sel)126126+ return 0;127127+128128+ dspk->ch_sel = value;129129+130130+ return 1;131131+}132132+133133+static int tegra186_dspk_get_mono_to_stereo(struct snd_kcontrol *kcontrol,134134+ struct snd_ctl_elem_value *ucontrol)135135+{136136+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);137137+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);138138+139139+ ucontrol->value.enumerated.item[0] = dspk->mono_to_stereo;140140+141141+ return 0;142142+}143143+144144+static int tegra186_dspk_put_mono_to_stereo(struct snd_kcontrol *kcontrol,145145+ struct snd_ctl_elem_value *ucontrol)146146+{147147+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);148148+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);149149+ unsigned int value = ucontrol->value.enumerated.item[0];150150+151151+ if (value == dspk->mono_to_stereo)152152+ return 0;153153+154154+ dspk->mono_to_stereo = value;155155+156156+ return 1;157157+}158158+159159+static int tegra186_dspk_get_stereo_to_mono(struct snd_kcontrol *kcontrol,160160+ struct snd_ctl_elem_value *ucontrol)161161+{162162+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);163163+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);164164+165165+ ucontrol->value.enumerated.item[0] = 
dspk->stereo_to_mono;166166+167167+ return 0;168168+}169169+170170+static int tegra186_dspk_put_stereo_to_mono(struct snd_kcontrol *kcontrol,171171+ struct snd_ctl_elem_value *ucontrol)172172+{173173+ struct snd_soc_component *codec = snd_soc_kcontrol_component(kcontrol);174174+ struct tegra186_dspk *dspk = snd_soc_component_get_drvdata(codec);175175+ unsigned int value = ucontrol->value.enumerated.item[0];176176+177177+ if (value == dspk->stereo_to_mono)178178+ return 0;179179+180180+ dspk->stereo_to_mono = value;181181+182182+ return 1;72183}7318474185static int __maybe_unused tegra186_dspk_runtime_suspend(struct device *dev)···390279static const struct snd_kcontrol_new tegrat186_dspk_controls[] = {391280 SOC_SINGLE_EXT("FIFO Threshold", SND_SOC_NOPM, 0,392281 TEGRA186_DSPK_RX_FIFO_DEPTH - 1, 0,393393- tegra186_dspk_get_control, tegra186_dspk_put_control),282282+ tegra186_dspk_get_fifo_th, tegra186_dspk_put_fifo_th),394283 SOC_ENUM_EXT("OSR Value", tegra186_dspk_osr_enum,395395- tegra186_dspk_get_control, tegra186_dspk_put_control),284284+ tegra186_dspk_get_osr_val, tegra186_dspk_put_osr_val),396285 SOC_ENUM_EXT("LR Polarity Select", tegra186_dspk_lrsel_enum,397397- tegra186_dspk_get_control, tegra186_dspk_put_control),286286+ tegra186_dspk_get_pol_sel, tegra186_dspk_put_pol_sel),398287 SOC_ENUM_EXT("Channel Select", tegra186_dspk_ch_sel_enum,399399- tegra186_dspk_get_control, tegra186_dspk_put_control),288288+ tegra186_dspk_get_ch_sel, tegra186_dspk_put_ch_sel),400289 SOC_ENUM_EXT("Mono To Stereo", tegra186_dspk_mono_conv_enum,401401- tegra186_dspk_get_control, tegra186_dspk_put_control),290290+ tegra186_dspk_get_mono_to_stereo,291291+ tegra186_dspk_put_mono_to_stereo),402292 SOC_ENUM_EXT("Stereo To Mono", tegra186_dspk_stereo_conv_enum,403403- tegra186_dspk_get_control, tegra186_dspk_put_control),293293+ tegra186_dspk_get_stereo_to_mono,294294+ tegra186_dspk_put_stereo_to_mono),404295};405296406297static const struct snd_soc_component_driver tegra186_dspk_cmpnt 
= {
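Every tegra186_dspk put callback above now follows the ALSA convention: return 0 when the written value matches the stored one, 1 when it actually changed (the core then notifies userspace), negative on error. The skeleton each of them instantiates, with a stub in place of the driver state (`toy_ctl` stands in for e.g. `dspk->rx_fifo_th`):

```c
/* Stub control state standing in for a driver field. */
struct toy_ctl { int stored; };

/* ALSA "put" convention: 0 means no-op (suppress the control-change
 * event), 1 means the value changed and userspace should be notified. */
int toy_ctl_put(struct toy_ctl *c, int value)
{
    if (value == c->stored)
        return 0;       /* nothing changed */

    c->stored = value;

    return 1;           /* changed */
}

int toy_put_changed(void)
{
    struct toy_ctl c = { .stored = 3 };

    return toy_ctl_put(&c, 4);
}

int toy_put_same(void)
{
    struct toy_ctl c = { .stored = 3 };

    return toy_ctl_put(&c, 3);
}
```

Returning 1 unconditionally, as the old combined `*_put_control()` callbacks effectively did, generates spurious change notifications for writes that changed nothing.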
···193193 struct soc_mixer_control *mc =194194 (struct soc_mixer_control *)kcontrol->private_value;195195196196+ if (value == bytes_map[mc->reg])197197+ return 0;198198+196199 if (value >= 0 && value <= 255) {197200 /* update byte map and enable slot */198201 bytes_map[mc->reg] = value;
+7-4
sound/soc/tegra/tegra210_ahub.c
···6262 unsigned int *item = uctl->value.enumerated.item;6363 unsigned int value = e->values[item[0]];6464 unsigned int i, bit_pos, reg_idx = 0, reg_val = 0;6565+ int change = 0;65666667 if (item[0] >= e->items)6768 return -EINVAL;···87868887 /* Update widget power if state has changed */8988 if (snd_soc_component_test_bits(cmpnt, update[i].reg,9090- update[i].mask, update[i].val))9191- snd_soc_dapm_mux_update_power(dapm, kctl, item[0], e,9292- &update[i]);8989+ update[i].mask,9090+ update[i].val))9191+ change |= snd_soc_dapm_mux_update_power(dapm, kctl,9292+ item[0], e,9393+ &update[i]);9394 }94959595- return 0;9696+ return change;9697}97989899static struct snd_soc_dai_driver tegra210_ahub_dais[] = {
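The ahub hunk ORs the result of each `snd_soc_dapm_mux_update_power()` call into `change` and returns that instead of an unconditional 0, so the control reports a change if any sub-update changed state. The accumulation amounts to:

```c
/* Apply a series of sub-updates and report "changed" if any one of
 * them changed state (mirrors the `change |= ...` accumulation; each
 * entry is that sub-update's 0/1 change result). */
int apply_all(const int *updates, int n)
{
    int change = 0;
    int i;

    for (i = 0; i < n; i++)
        change |= updates[i];

    return change;
}
```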
+3
sound/soc/tegra/tegra210_amx.c
···222222 int reg = mc->reg;223223 int value = ucontrol->value.integer.value[0];224224225225+ if (value == bytes_map[reg])226226+ return 0;227227+225228 if (value >= 0 && value <= 255) {226229 /* Update byte map and enable slot */227230 bytes_map[reg] = value;
+150-36
sound/soc/tegra/tegra210_dmic.c
···156156 return 0;157157}158158159159-static int tegra210_dmic_get_control(struct snd_kcontrol *kcontrol,160160- struct snd_ctl_elem_value *ucontrol)159159+static int tegra210_dmic_get_boost_gain(struct snd_kcontrol *kcontrol,160160+ struct snd_ctl_elem_value *ucontrol)161161{162162 struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);163163 struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);164164165165- if (strstr(kcontrol->id.name, "Boost Gain Volume"))166166- ucontrol->value.integer.value[0] = dmic->boost_gain;167167- else if (strstr(kcontrol->id.name, "Channel Select"))168168- ucontrol->value.integer.value[0] = dmic->ch_select;169169- else if (strstr(kcontrol->id.name, "Mono To Stereo"))170170- ucontrol->value.integer.value[0] = dmic->mono_to_stereo;171171- else if (strstr(kcontrol->id.name, "Stereo To Mono"))172172- ucontrol->value.integer.value[0] = dmic->stereo_to_mono;173173- else if (strstr(kcontrol->id.name, "OSR Value"))174174- ucontrol->value.integer.value[0] = dmic->osr_val;175175- else if (strstr(kcontrol->id.name, "LR Polarity Select"))176176- ucontrol->value.integer.value[0] = dmic->lrsel;165165+ ucontrol->value.integer.value[0] = dmic->boost_gain;177166178167 return 0;179168}180169181181-static int tegra210_dmic_put_control(struct snd_kcontrol *kcontrol,182182- struct snd_ctl_elem_value *ucontrol)170170+static int tegra210_dmic_put_boost_gain(struct snd_kcontrol *kcontrol,171171+ struct snd_ctl_elem_value *ucontrol)183172{184173 struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);185174 struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);186175 int value = ucontrol->value.integer.value[0];187176188188- if (strstr(kcontrol->id.name, "Boost Gain Volume"))189189- dmic->boost_gain = value;190190- else if (strstr(kcontrol->id.name, "Channel Select"))191191- dmic->ch_select = ucontrol->value.integer.value[0];192192- else if (strstr(kcontrol->id.name, "Mono To Stereo"))193193- 
dmic->mono_to_stereo = value;194194- else if (strstr(kcontrol->id.name, "Stereo To Mono"))195195- dmic->stereo_to_mono = value;196196- else if (strstr(kcontrol->id.name, "OSR Value"))197197- dmic->osr_val = value;198198- else if (strstr(kcontrol->id.name, "LR Polarity Select"))199199- dmic->lrsel = value;177177+ if (value == dmic->boost_gain)178178+ return 0;179179+180180+ dmic->boost_gain = value;181181+182182+ return 1;183183+}184184+185185+static int tegra210_dmic_get_ch_select(struct snd_kcontrol *kcontrol,186186+ struct snd_ctl_elem_value *ucontrol)187187+{188188+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);189189+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);190190+191191+ ucontrol->value.enumerated.item[0] = dmic->ch_select;200192201193 return 0;194194+}195195+196196+static int tegra210_dmic_put_ch_select(struct snd_kcontrol *kcontrol,197197+ struct snd_ctl_elem_value *ucontrol)198198+{199199+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);200200+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);201201+ unsigned int value = ucontrol->value.enumerated.item[0];202202+203203+ if (value == dmic->ch_select)204204+ return 0;205205+206206+ dmic->ch_select = value;207207+208208+ return 1;209209+}210210+211211+static int tegra210_dmic_get_mono_to_stereo(struct snd_kcontrol *kcontrol,212212+ struct snd_ctl_elem_value *ucontrol)213213+{214214+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);215215+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);216216+217217+ ucontrol->value.enumerated.item[0] = dmic->mono_to_stereo;218218+219219+ return 0;220220+}221221+222222+static int tegra210_dmic_put_mono_to_stereo(struct snd_kcontrol *kcontrol,223223+ struct snd_ctl_elem_value *ucontrol)224224+{225225+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);226226+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);227227+ 
unsigned int value = ucontrol->value.enumerated.item[0];228228+229229+ if (value == dmic->mono_to_stereo)230230+ return 0;231231+232232+ dmic->mono_to_stereo = value;233233+234234+ return 1;235235+}236236+237237+static int tegra210_dmic_get_stereo_to_mono(struct snd_kcontrol *kcontrol,238238+ struct snd_ctl_elem_value *ucontrol)239239+{240240+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);241241+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);242242+243243+ ucontrol->value.enumerated.item[0] = dmic->stereo_to_mono;244244+245245+ return 0;246246+}247247+248248+static int tegra210_dmic_put_stereo_to_mono(struct snd_kcontrol *kcontrol,249249+ struct snd_ctl_elem_value *ucontrol)250250+{251251+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);252252+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);253253+ unsigned int value = ucontrol->value.enumerated.item[0];254254+255255+ if (value == dmic->stereo_to_mono)256256+ return 0;257257+258258+ dmic->stereo_to_mono = value;259259+260260+ return 1;261261+}262262+263263+static int tegra210_dmic_get_osr_val(struct snd_kcontrol *kcontrol,264264+ struct snd_ctl_elem_value *ucontrol)265265+{266266+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);267267+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);268268+269269+ ucontrol->value.enumerated.item[0] = dmic->osr_val;270270+271271+ return 0;272272+}273273+274274+static int tegra210_dmic_put_osr_val(struct snd_kcontrol *kcontrol,275275+ struct snd_ctl_elem_value *ucontrol)276276+{277277+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);278278+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);279279+ unsigned int value = ucontrol->value.enumerated.item[0];280280+281281+ if (value == dmic->osr_val)282282+ return 0;283283+284284+ dmic->osr_val = value;285285+286286+ return 1;287287+}288288+289289+static int 
tegra210_dmic_get_pol_sel(struct snd_kcontrol *kcontrol,290290+ struct snd_ctl_elem_value *ucontrol)291291+{292292+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);293293+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);294294+295295+ ucontrol->value.enumerated.item[0] = dmic->lrsel;296296+297297+ return 0;298298+}299299+300300+static int tegra210_dmic_put_pol_sel(struct snd_kcontrol *kcontrol,301301+ struct snd_ctl_elem_value *ucontrol)302302+{303303+ struct snd_soc_component *comp = snd_soc_kcontrol_component(kcontrol);304304+ struct tegra210_dmic *dmic = snd_soc_component_get_drvdata(comp);305305+ unsigned int value = ucontrol->value.enumerated.item[0];306306+307307+ if (value == dmic->lrsel)308308+ return 0;309309+310310+ dmic->lrsel = value;311311+312312+ return 1;202313}203314204315static const struct snd_soc_dai_ops tegra210_dmic_dai_ops = {···398287399288static const struct snd_kcontrol_new tegra210_dmic_controls[] = {400289 SOC_SINGLE_EXT("Boost Gain Volume", 0, 0, MAX_BOOST_GAIN, 0,401401- tegra210_dmic_get_control, tegra210_dmic_put_control),290290+ tegra210_dmic_get_boost_gain,291291+ tegra210_dmic_put_boost_gain),402292 SOC_ENUM_EXT("Channel Select", tegra210_dmic_ch_enum,403403- tegra210_dmic_get_control, tegra210_dmic_put_control),293293+ tegra210_dmic_get_ch_select, tegra210_dmic_put_ch_select),404294 SOC_ENUM_EXT("Mono To Stereo",405405- tegra210_dmic_mono_conv_enum, tegra210_dmic_get_control,406406- tegra210_dmic_put_control),295295+ tegra210_dmic_mono_conv_enum,296296+ tegra210_dmic_get_mono_to_stereo,297297+ tegra210_dmic_put_mono_to_stereo),407298 SOC_ENUM_EXT("Stereo To Mono",408408- tegra210_dmic_stereo_conv_enum, tegra210_dmic_get_control,409409- tegra210_dmic_put_control),299299+ tegra210_dmic_stereo_conv_enum,300300+ tegra210_dmic_get_stereo_to_mono,301301+ tegra210_dmic_put_stereo_to_mono),410302 SOC_ENUM_EXT("OSR Value", tegra210_dmic_osr_enum,411411- tegra210_dmic_get_control, 
tegra210_dmic_put_control),303303+ tegra210_dmic_get_osr_val, tegra210_dmic_put_osr_val),412304 SOC_ENUM_EXT("LR Polarity Select", tegra210_dmic_lrsel_enum,413413- tegra210_dmic_get_control, tegra210_dmic_put_control),305305+ tegra210_dmic_get_pol_sel, tegra210_dmic_put_pol_sel),414306};415307416308static const struct snd_soc_component_driver tegra210_dmic_compnt = {
+240-84
sound/soc/tegra/tegra210_i2s.c
···302302 return 0;303303}304304305305+static int tegra210_i2s_get_loopback(struct snd_kcontrol *kcontrol,306306+ struct snd_ctl_elem_value *ucontrol)307307+{308308+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);309309+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);310310+311311+ ucontrol->value.integer.value[0] = i2s->loopback;312312+313313+ return 0;314314+}315315+316316+static int tegra210_i2s_put_loopback(struct snd_kcontrol *kcontrol,317317+ struct snd_ctl_elem_value *ucontrol)318318+{319319+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);320320+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);321321+ int value = ucontrol->value.integer.value[0];322322+323323+ if (value == i2s->loopback)324324+ return 0;325325+326326+ i2s->loopback = value;327327+328328+ regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL, I2S_CTRL_LPBK_MASK,329329+ i2s->loopback << I2S_CTRL_LPBK_SHIFT);330330+331331+ return 1;332332+}333333+334334+static int tegra210_i2s_get_fsync_width(struct snd_kcontrol *kcontrol,335335+ struct snd_ctl_elem_value *ucontrol)336336+{337337+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);338338+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);339339+340340+ ucontrol->value.integer.value[0] = i2s->fsync_width;341341+342342+ return 0;343343+}344344+345345+static int tegra210_i2s_put_fsync_width(struct snd_kcontrol *kcontrol,346346+ struct snd_ctl_elem_value *ucontrol)347347+{348348+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);349349+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);350350+ int value = ucontrol->value.integer.value[0];351351+352352+ if (value == i2s->fsync_width)353353+ return 0;354354+355355+ i2s->fsync_width = value;356356+357357+ /*358358+ * Frame sync width is used only for FSYNC modes and not359359+ * applicable for LRCK modes. 
Reset value for this field is "0",360360+ * which means the width is one bit clock wide.361361+ * The width requirement may depend on the codec and in such362362+ * cases mixer control is used to update custom values. A value363363+ * of "N" here means, width is "N + 1" bit clock wide.364364+ */365365+ regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,366366+ I2S_CTRL_FSYNC_WIDTH_MASK,367367+ i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);368368+369369+ return 1;370370+}371371+372372+static int tegra210_i2s_cget_stereo_to_mono(struct snd_kcontrol *kcontrol,373373+ struct snd_ctl_elem_value *ucontrol)374374+{375375+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);376376+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);377377+378378+ ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_TX_PATH];379379+380380+ return 0;381381+}382382+383383+static int tegra210_i2s_cput_stereo_to_mono(struct snd_kcontrol *kcontrol,384384+ struct snd_ctl_elem_value *ucontrol)385385+{386386+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);387387+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);388388+ unsigned int value = ucontrol->value.enumerated.item[0];389389+390390+ if (value == i2s->stereo_to_mono[I2S_TX_PATH])391391+ return 0;392392+393393+ i2s->stereo_to_mono[I2S_TX_PATH] = value;394394+395395+ return 1;396396+}397397+398398+static int tegra210_i2s_cget_mono_to_stereo(struct snd_kcontrol *kcontrol,399399+ struct snd_ctl_elem_value *ucontrol)400400+{401401+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);402402+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);403403+404404+ ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_TX_PATH];405405+406406+ return 0;407407+}408408+409409+static int tegra210_i2s_cput_mono_to_stereo(struct snd_kcontrol *kcontrol,410410+ struct snd_ctl_elem_value *ucontrol)411411+{412412+ struct snd_soc_component *compnt = 
snd_soc_kcontrol_component(kcontrol);413413+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);414414+ unsigned int value = ucontrol->value.enumerated.item[0];415415+416416+ if (value == i2s->mono_to_stereo[I2S_TX_PATH])417417+ return 0;418418+419419+ i2s->mono_to_stereo[I2S_TX_PATH] = value;420420+421421+ return 1;422422+}423423+424424+static int tegra210_i2s_pget_stereo_to_mono(struct snd_kcontrol *kcontrol,425425+ struct snd_ctl_elem_value *ucontrol)426426+{427427+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);428428+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);429429+430430+ ucontrol->value.enumerated.item[0] = i2s->stereo_to_mono[I2S_RX_PATH];431431+432432+ return 0;433433+}434434+435435+static int tegra210_i2s_pput_stereo_to_mono(struct snd_kcontrol *kcontrol,436436+ struct snd_ctl_elem_value *ucontrol)437437+{438438+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);439439+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);440440+ unsigned int value = ucontrol->value.enumerated.item[0];441441+442442+ if (value == i2s->stereo_to_mono[I2S_RX_PATH])443443+ return 0;444444+445445+ i2s->stereo_to_mono[I2S_RX_PATH] = value;446446+447447+ return 1;448448+}449449+450450+static int tegra210_i2s_pget_mono_to_stereo(struct snd_kcontrol *kcontrol,451451+ struct snd_ctl_elem_value *ucontrol)452452+{453453+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);454454+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);455455+456456+ ucontrol->value.enumerated.item[0] = i2s->mono_to_stereo[I2S_RX_PATH];457457+458458+ return 0;459459+}460460+461461+static int tegra210_i2s_pput_mono_to_stereo(struct snd_kcontrol *kcontrol,462462+ struct snd_ctl_elem_value *ucontrol)463463+{464464+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);465465+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);466466+ unsigned int 
value = ucontrol->value.enumerated.item[0];467467+468468+ if (value == i2s->mono_to_stereo[I2S_RX_PATH])469469+ return 0;470470+471471+ i2s->mono_to_stereo[I2S_RX_PATH] = value;472472+473473+ return 1;474474+}475475+476476+static int tegra210_i2s_pget_fifo_th(struct snd_kcontrol *kcontrol,477477+ struct snd_ctl_elem_value *ucontrol)478478+{479479+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);480480+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);481481+482482+ ucontrol->value.integer.value[0] = i2s->rx_fifo_th;483483+484484+ return 0;485485+}486486+487487+static int tegra210_i2s_pput_fifo_th(struct snd_kcontrol *kcontrol,488488+ struct snd_ctl_elem_value *ucontrol)489489+{490490+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);491491+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);492492+ int value = ucontrol->value.integer.value[0];493493+494494+ if (value == i2s->rx_fifo_th)495495+ return 0;496496+497497+ i2s->rx_fifo_th = value;498498+499499+ return 1;500500+}501501+502502+static int tegra210_i2s_get_bclk_ratio(struct snd_kcontrol *kcontrol,503503+ struct snd_ctl_elem_value *ucontrol)504504+{505505+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);506506+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);507507+508508+ ucontrol->value.integer.value[0] = i2s->bclk_ratio;509509+510510+ return 0;511511+}512512+513513+static int tegra210_i2s_put_bclk_ratio(struct snd_kcontrol *kcontrol,514514+ struct snd_ctl_elem_value *ucontrol)515515+{516516+ struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);517517+ struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);518518+ int value = ucontrol->value.integer.value[0];519519+520520+ if (value == i2s->bclk_ratio)521521+ return 0;522522+523523+ i2s->bclk_ratio = value;524524+525525+ return 1;526526+}527527+305528static int tegra210_i2s_set_dai_bclk_ratio(struct snd_soc_dai 
*dai,
306529 unsigned int ratio)
307530{
308531 struct tegra210_i2s *i2s = snd_soc_dai_get_drvdata(dai);
309532
310533 i2s->bclk_ratio = ratio;
311311-
312312- return 0;
313313-}
314314-
315315-static int tegra210_i2s_get_control(struct snd_kcontrol *kcontrol,
316316- struct snd_ctl_elem_value *ucontrol)
317317-{
318318- struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
319319- struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
320320- long *uctl_val = &ucontrol->value.integer.value[0];
321321-
322322- if (strstr(kcontrol->id.name, "Loopback"))
323323- *uctl_val = i2s->loopback;
324324- else if (strstr(kcontrol->id.name, "FSYNC Width"))
325325- *uctl_val = i2s->fsync_width;
326326- else if (strstr(kcontrol->id.name, "Capture Stereo To Mono"))
327327- *uctl_val = i2s->stereo_to_mono[I2S_TX_PATH];
328328- else if (strstr(kcontrol->id.name, "Capture Mono To Stereo"))
329329- *uctl_val = i2s->mono_to_stereo[I2S_TX_PATH];
330330- else if (strstr(kcontrol->id.name, "Playback Stereo To Mono"))
331331- *uctl_val = i2s->stereo_to_mono[I2S_RX_PATH];
332332- else if (strstr(kcontrol->id.name, "Playback Mono To Stereo"))
333333- *uctl_val = i2s->mono_to_stereo[I2S_RX_PATH];
334334- else if (strstr(kcontrol->id.name, "Playback FIFO Threshold"))
335335- *uctl_val = i2s->rx_fifo_th;
336336- else if (strstr(kcontrol->id.name, "BCLK Ratio"))
337337- *uctl_val = i2s->bclk_ratio;
338338-
339339- return 0;
340340-}
341341-
342342-static int tegra210_i2s_put_control(struct snd_kcontrol *kcontrol,
343343- struct snd_ctl_elem_value *ucontrol)
344344-{
345345- struct snd_soc_component *compnt = snd_soc_kcontrol_component(kcontrol);
346346- struct tegra210_i2s *i2s = snd_soc_component_get_drvdata(compnt);
347347- int value = ucontrol->value.integer.value[0];
348348-
349349- if (strstr(kcontrol->id.name, "Loopback")) {
350350- i2s->loopback = value;
351351-
352352- regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
353353- I2S_CTRL_LPBK_MASK,
354354- i2s->loopback << I2S_CTRL_LPBK_SHIFT);
355355-
356356- } else if (strstr(kcontrol->id.name, "FSYNC Width")) {
357357- /*
358358- * Frame sync width is used only for FSYNC modes and not
359359- * applicable for LRCK modes. Reset value for this field is "0",
360360- * which means the width is one bit clock wide.
361361- * The width requirement may depend on the codec and in such
362362- * cases mixer control is used to update custom values. A value
363363- * of "N" here means, width is "N + 1" bit clock wide.
364364- */
365365- i2s->fsync_width = value;
366366-
367367- regmap_update_bits(i2s->regmap, TEGRA210_I2S_CTRL,
368368- I2S_CTRL_FSYNC_WIDTH_MASK,
369369- i2s->fsync_width << I2S_FSYNC_WIDTH_SHIFT);
370370-
371371- } else if (strstr(kcontrol->id.name, "Capture Stereo To Mono")) {
372372- i2s->stereo_to_mono[I2S_TX_PATH] = value;
373373- } else if (strstr(kcontrol->id.name, "Capture Mono To Stereo")) {
374374- i2s->mono_to_stereo[I2S_TX_PATH] = value;
375375- } else if (strstr(kcontrol->id.name, "Playback Stereo To Mono")) {
376376- i2s->stereo_to_mono[I2S_RX_PATH] = value;
377377- } else if (strstr(kcontrol->id.name, "Playback Mono To Stereo")) {
378378- i2s->mono_to_stereo[I2S_RX_PATH] = value;
379379- } else if (strstr(kcontrol->id.name, "Playback FIFO Threshold")) {
380380- i2s->rx_fifo_th = value;
381381- } else if (strstr(kcontrol->id.name, "BCLK Ratio")) {
382382- i2s->bclk_ratio = value;
383383- }
384534
385535 return 0;
386536}
···
748598 tegra210_i2s_stereo_conv_text);
749599
750600static const struct snd_kcontrol_new tegra210_i2s_controls[] = {
751751- SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_control,
752752- tegra210_i2s_put_control),
753753- SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0, tegra210_i2s_get_control,
754754- tegra210_i2s_put_control),
601601+ SOC_SINGLE_EXT("Loopback", 0, 0, 1, 0, tegra210_i2s_get_loopback,
602602+ tegra210_i2s_put_loopback),
603603+ SOC_SINGLE_EXT("FSYNC Width", 0, 0, 255, 0,
604604+ tegra210_i2s_get_fsync_width,
605605+ tegra210_i2s_put_fsync_width),
755606 SOC_ENUM_EXT("Capture Stereo To Mono", tegra210_i2s_stereo_conv_enum,
756756- tegra210_i2s_get_control, tegra210_i2s_put_control),
607607+ tegra210_i2s_cget_stereo_to_mono,
608608+ tegra210_i2s_cput_stereo_to_mono),
757609 SOC_ENUM_EXT("Capture Mono To Stereo", tegra210_i2s_mono_conv_enum,
758758- tegra210_i2s_get_control, tegra210_i2s_put_control),
610610+ tegra210_i2s_cget_mono_to_stereo,
611611+ tegra210_i2s_cput_mono_to_stereo),
759612 SOC_ENUM_EXT("Playback Stereo To Mono", tegra210_i2s_stereo_conv_enum,
760760- tegra210_i2s_get_control, tegra210_i2s_put_control),
613613+ tegra210_i2s_pget_stereo_to_mono,
614614+ tegra210_i2s_pput_stereo_to_mono),
761615 SOC_ENUM_EXT("Playback Mono To Stereo", tegra210_i2s_mono_conv_enum,
762762- tegra210_i2s_get_control, tegra210_i2s_put_control),
616616+ tegra210_i2s_pget_mono_to_stereo,
617617+ tegra210_i2s_pput_mono_to_stereo),
763618 SOC_SINGLE_EXT("Playback FIFO Threshold", 0, 0, I2S_RX_FIFO_DEPTH - 1,
764764- 0, tegra210_i2s_get_control, tegra210_i2s_put_control),
765765- SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0, tegra210_i2s_get_control,
766766- tegra210_i2s_put_control),
619619+ 0, tegra210_i2s_pget_fifo_th, tegra210_i2s_pput_fifo_th),
620620+ SOC_SINGLE_EXT("BCLK Ratio", 0, 0, INT_MAX, 0,
621621+ tegra210_i2s_get_bclk_ratio,
622622+ tegra210_i2s_put_bclk_ratio),
767623};
768624
769625static const struct snd_soc_dapm_widget tegra210_i2s_widgets[] = {
+19-7
sound/soc/tegra/tegra210_mixer.c
···
192192 return 0;
193193}
194194
195195-static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
196196- struct snd_ctl_elem_value *ucontrol)
195195+static int tegra210_mixer_apply_gain(struct snd_kcontrol *kcontrol,
196196+ struct snd_ctl_elem_value *ucontrol,
197197+ bool instant_gain)
197198{
198199 struct soc_mixer_control *mc =
199200 (struct soc_mixer_control *)kcontrol->private_value;
200201 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
201202 struct tegra210_mixer *mixer = snd_soc_component_get_drvdata(cmpnt);
202203 unsigned int reg = mc->reg, id;
203203- bool instant_gain = false;
204204 int err;
205205-
206206- if (strstr(kcontrol->id.name, "Instant Gain Volume"))
207207- instant_gain = true;
208205
209206 /* Save gain value for specific MIXER input */
210207 id = (reg - TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_0) /
211208 TEGRA210_MIXER_GAIN_CFG_RAM_ADDR_STRIDE;
209209+
210210+ if (mixer->gain_value[id] == ucontrol->value.integer.value[0])
211211+ return 0;
212212
213213 mixer->gain_value[id] = ucontrol->value.integer.value[0];
214214
···
219219 }
220220
221221 return 1;
222222+}
223223+
224224+static int tegra210_mixer_put_gain(struct snd_kcontrol *kcontrol,
225225+ struct snd_ctl_elem_value *ucontrol)
226226+{
227227+ return tegra210_mixer_apply_gain(kcontrol, ucontrol, false);
228228+}
229229+
230230+static int tegra210_mixer_put_instant_gain(struct snd_kcontrol *kcontrol,
231231+ struct snd_ctl_elem_value *ucontrol)
232232+{
233233+ return tegra210_mixer_apply_gain(kcontrol, ucontrol, true);
222234}
223235
224236static int tegra210_mixer_set_audio_cif(struct tegra210_mixer *mixer,
···
400388 SOC_SINGLE_EXT("RX" #id " Instant Gain Volume", \
401389 MIXER_GAIN_CFG_RAM_ADDR((id) - 1), 0, \
402390 0x20000, 0, tegra210_mixer_get_gain, \
403403- tegra210_mixer_put_gain),
391391+ tegra210_mixer_put_instant_gain),
404392
405393/* Volume controls for all MIXER inputs */
406394static const struct snd_kcontrol_new tegra210_mixer_gain_ctls[] = {
sound/soc/tegra/tegra210_sfc.c
···
32443244 return tegra210_sfc_write_coeff_ram(cmpnt);
32453245}
32463246
32473247-static int tegra210_sfc_get_control(struct snd_kcontrol *kcontrol,
32473247+static int tegra210_sfc_iget_stereo_to_mono(struct snd_kcontrol *kcontrol,
32483248 struct snd_ctl_elem_value *ucontrol)
32493249{
32503250 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
32513251 struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
32523252
32533253- if (strstr(kcontrol->id.name, "Input Stereo To Mono"))
32543254- ucontrol->value.integer.value[0] =
32553255- sfc->stereo_to_mono[SFC_RX_PATH];
32563256- else if (strstr(kcontrol->id.name, "Input Mono To Stereo"))
32573257- ucontrol->value.integer.value[0] =
32583258- sfc->mono_to_stereo[SFC_RX_PATH];
32593259- else if (strstr(kcontrol->id.name, "Output Stereo To Mono"))
32603260- ucontrol->value.integer.value[0] =
32613261- sfc->stereo_to_mono[SFC_TX_PATH];
32623262- else if (strstr(kcontrol->id.name, "Output Mono To Stereo"))
32633263- ucontrol->value.integer.value[0] =
32643264- sfc->mono_to_stereo[SFC_TX_PATH];
32533253+ ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_RX_PATH];
32653254
32663255 return 0;
32673256}
32683257
32693269-static int tegra210_sfc_put_control(struct snd_kcontrol *kcontrol,
32583258+static int tegra210_sfc_iput_stereo_to_mono(struct snd_kcontrol *kcontrol,
32703259 struct snd_ctl_elem_value *ucontrol)
32713260{
32723261 struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
32733262 struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
32743274- int value = ucontrol->value.integer.value[0];
32633263+ unsigned int value = ucontrol->value.enumerated.item[0];
32753264
32763276- if (strstr(kcontrol->id.name, "Input Stereo To Mono"))
32773277- sfc->stereo_to_mono[SFC_RX_PATH] = value;
32783278- else if (strstr(kcontrol->id.name, "Input Mono To Stereo"))
32793279- sfc->mono_to_stereo[SFC_RX_PATH] = value;
32803280- else if (strstr(kcontrol->id.name, "Output Stereo To Mono"))
32813281- sfc->stereo_to_mono[SFC_TX_PATH] = value;
32823282- else if (strstr(kcontrol->id.name, "Output Mono To Stereo"))
32833283- sfc->mono_to_stereo[SFC_TX_PATH] = value;
32843284- else
32653265+ if (value == sfc->stereo_to_mono[SFC_RX_PATH])
32853266 return 0;
32673267+
32683268+ sfc->stereo_to_mono[SFC_RX_PATH] = value;
32693269+
32703270+ return 1;
32713271+}
32723272+
32733273+static int tegra210_sfc_iget_mono_to_stereo(struct snd_kcontrol *kcontrol,
32743274+ struct snd_ctl_elem_value *ucontrol)
32753275+{
32763276+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
32773277+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
32783278+
32793279+ ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_RX_PATH];
32803280+
32813281+ return 0;
32823282+}
32833283+
32843284+static int tegra210_sfc_iput_mono_to_stereo(struct snd_kcontrol *kcontrol,
32853285+ struct snd_ctl_elem_value *ucontrol)
32863286+{
32873287+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
32883288+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
32893289+ unsigned int value = ucontrol->value.enumerated.item[0];
32903290+
32913291+ if (value == sfc->mono_to_stereo[SFC_RX_PATH])
32923292+ return 0;
32933293+
32943294+ sfc->mono_to_stereo[SFC_RX_PATH] = value;
32953295+
32963296+ return 1;
32973297+}
32983298+
32993299+static int tegra210_sfc_oget_stereo_to_mono(struct snd_kcontrol *kcontrol,
33003300+ struct snd_ctl_elem_value *ucontrol)
33013301+{
33023302+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
33033303+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
33043304+
33053305+ ucontrol->value.enumerated.item[0] = sfc->stereo_to_mono[SFC_TX_PATH];
33063306+
33073307+ return 0;
33083308+}
33093309+
33103310+static int tegra210_sfc_oput_stereo_to_mono(struct snd_kcontrol *kcontrol,
33113311+ struct snd_ctl_elem_value *ucontrol)
33123312+{
33133313+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
33143314+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
33153315+ unsigned int value = ucontrol->value.enumerated.item[0];
33163316+
33173317+ if (value == sfc->stereo_to_mono[SFC_TX_PATH])
33183318+ return 0;
33193319+
33203320+ sfc->stereo_to_mono[SFC_TX_PATH] = value;
33213321+
33223322+ return 1;
33233323+}
33243324+
33253325+static int tegra210_sfc_oget_mono_to_stereo(struct snd_kcontrol *kcontrol,
33263326+ struct snd_ctl_elem_value *ucontrol)
33273327+{
33283328+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
33293329+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
33303330+
33313331+ ucontrol->value.enumerated.item[0] = sfc->mono_to_stereo[SFC_TX_PATH];
33323332+
33333333+ return 0;
33343334+}
33353335+
33363336+static int tegra210_sfc_oput_mono_to_stereo(struct snd_kcontrol *kcontrol,
33373337+ struct snd_ctl_elem_value *ucontrol)
33383338+{
33393339+ struct snd_soc_component *cmpnt = snd_soc_kcontrol_component(kcontrol);
33403340+ struct tegra210_sfc *sfc = snd_soc_component_get_drvdata(cmpnt);
33413341+ unsigned int value = ucontrol->value.enumerated.item[0];
33423342+
33433343+ if (value == sfc->mono_to_stereo[SFC_TX_PATH])
33443344+ return 0;
33453345+
33463346+ sfc->mono_to_stereo[SFC_TX_PATH] = value;
32863347
32873348 return 1;
32883349}
···
34453384
34463385static const struct snd_kcontrol_new tegra210_sfc_controls[] = {
34473386 SOC_ENUM_EXT("Input Stereo To Mono", tegra210_sfc_stereo_conv_enum,
34483448- tegra210_sfc_get_control, tegra210_sfc_put_control),
33873387+ tegra210_sfc_iget_stereo_to_mono,
33883388+ tegra210_sfc_iput_stereo_to_mono),
34493389 SOC_ENUM_EXT("Input Mono To Stereo", tegra210_sfc_mono_conv_enum,
34503450- tegra210_sfc_get_control, tegra210_sfc_put_control),
33903390+ tegra210_sfc_iget_mono_to_stereo,
33913391+ tegra210_sfc_iput_mono_to_stereo),
34513392 SOC_ENUM_EXT("Output Stereo To Mono", tegra210_sfc_stereo_conv_enum,
34523452- tegra210_sfc_get_control, tegra210_sfc_put_control),
33933393+ tegra210_sfc_oget_stereo_to_mono,
33943394+ tegra210_sfc_oput_stereo_to_mono),
34533395 SOC_ENUM_EXT("Output Mono To Stereo", tegra210_sfc_mono_conv_enum,
34543454- tegra210_sfc_get_control, tegra210_sfc_put_control),
33963396+ tegra210_sfc_oget_mono_to_stereo,
33973397+ tegra210_sfc_oput_mono_to_stereo),
34553398};
34563399
34573400static const struct snd_soc_component_driver tegra210_sfc_cmpnt = {
+1-21
tools/include/linux/kernel.h
···
77#include <assert.h>
88#include <linux/build_bug.h>
99#include <linux/compiler.h>
1010+#include <linux/math.h>
1011#include <endian.h>
1112#include <byteswap.h>
1213
1314#ifndef UINT_MAX
1415#define UINT_MAX (~0U)
1516#endif
1616-
1717-#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
1817
1918#define PERF_ALIGN(x, a) __PERF_ALIGN_MASK(x, (typeof(x))(a)-1)
2019#define __PERF_ALIGN_MASK(x, mask) (((x)+(mask))&~(mask))
···
4950 typeof(y) _min2 = (y); \
5051 (void) (&_min1 == &_min2); \
5152 _min1 < _min2 ? _min1 : _min2; })
5252-#endif
5353-
5454-#ifndef roundup
5555-#define roundup(x, y) ( \
5656-{ \
5757- const typeof(y) __y = y; \
5858- (((x) + (__y - 1)) / __y) * __y; \
5959-} \
6060-)
6153#endif
6254
6355#ifndef BUG_ON
···
93103int scnprintf_pad(char * buf, size_t size, const char * fmt, ...);
94104
95105#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
9696-
9797-/*
9898- * This looks more complex than it should be. But we need to
9999- * get the type for the ~ right in round_down (it needs to be
100100- * as wide as the result!), and we want to evaluate the macro
101101- * arguments just once each.
102102- */
103103-#define __round_mask(x, y) ((__typeof__(x))((y)-1))
104104-#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1)
105105-#define round_down(x, y) ((x) & ~__round_mask(x, y))
106106
107107#define current_gfp_context(k) 0
108108#define synchronize_rcu()
+25
tools/include/linux/math.h
···
11+#ifndef _TOOLS_MATH_H
22+#define _TOOLS_MATH_H
33+
44+/*
55+ * This looks more complex than it should be. But we need to
66+ * get the type for the ~ right in round_down (it needs to be
77+ * as wide as the result!), and we want to evaluate the macro
88+ * arguments just once each.
99+ */
1010+#define __round_mask(x, y) ((__typeof__(x))((y)-1))
1111+#define round_up(x, y) ((((x)-1) | __round_mask(x, y))+1)
1212+#define round_down(x, y) ((x) & ~__round_mask(x, y))
1313+
1414+#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
1515+
1616+#ifndef roundup
1717+#define roundup(x, y) ( \
1818+{ \
1919+ const typeof(y) __y = y; \
2020+ (((x) + (__y - 1)) / __y) * __y; \
2121+} \
2222+)
2323+#endif
2424+
2525+#endif
tools/testing/selftests/wireguard/netns.sh
···
276276n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
277277ip2 link del wg0
278278ip2 link del wg1
279279-! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
279279+read _ _ tx_bytes_before < <(n0 wg show wg1 transfer)
280280+! n0 ping -W 1 -c 10 -f 192.168.241.2 || false
281281+sleep 1
282282+read _ _ tx_bytes_after < <(n0 wg show wg1 transfer)
283283+(( tx_bytes_after - tx_bytes_before < 70000 ))
280284
281285ip0 link del wg1
282286ip1 link del wg0
···
613609kill $ncat_pid
614610ip0 link del wg0
615611
612612+# Ensure that dst_cache references don't outlive netns lifetime
613613+ip1 link add dev wg0 type wireguard
614614+ip2 link add dev wg0 type wireguard
615615+configure_peers
616616+ip1 link add veth1 type veth peer name veth2
617617+ip1 link set veth2 netns $netns2
618618+ip1 addr add fd00:aa::1/64 dev veth1
619619+ip2 addr add fd00:aa::2/64 dev veth2
620620+ip1 link set veth1 up
621621+ip2 link set veth2 up
622622+waitiface $netns1 veth1
623623+waitiface $netns2 veth2
624624+ip1 -6 route add default dev veth1 via fd00:aa::2
625625+ip2 -6 route add default dev veth2 via fd00:aa::1
626626+n1 wg set wg0 peer "$pub2" endpoint [fd00:aa::2]:2
627627+n2 wg set wg0 peer "$pub1" endpoint [fd00:aa::1]:1
628628+n1 ping6 -c 1 fd00::2
629629+pp ip netns delete $netns1
630630+pp ip netns delete $netns2
631631+pp ip netns add $netns1
632632+pp ip netns add $netns2
633633+
616634# Ensure there aren't circular reference loops
617635ip1 link add wg1 type wireguard
618636ip2 link add wg2 type wireguard
···
653627done < /dev/kmsg
654628alldeleted=1
655629for object in "${!objects[@]}"; do
656656- if [[ ${objects["$object"]} != *createddestroyed ]]; then
630630+ if [[ ${objects["$object"]} != *createddestroyed && ${objects["$object"]} != *createdcreateddestroyeddestroyed ]]; then
657631 echo "Error: $object: merely ${objects["$object"]}" >&3
658632 alldeleted=0
659633 fi