···
 virtual address size configured by the kernel. For example, with a
 virtual address size of 48, the PAC is 7 bits wide.

-Recent versions of GCC can compile code with APIAKey-based return
-address protection when passed the -msign-return-address option. This
-uses instructions in the HINT space (unless -march=armv8.3-a or higher
-is also passed), and such code can run on systems without the pointer
-authentication extension.
+When ARM64_PTR_AUTH_KERNEL is selected, the kernel will be compiled
+with HINT space pointer authentication instructions protecting
+function returns. Kernels built with this option will work on hardware
+with or without pointer authentication support.

 In addition to exec(), keys can also be reinitialized to random values
 using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
+10 -5
Documentation/conf.py
···
 html_static_path = ['sphinx-static']

-html_context = {
-    'css_files': [
-        '_static/theme_overrides.css',
-    ],
-}
+html_css_files = [
+    'theme_overrides.css',
+]
+
+if major <= 1 and minor < 8:
+    html_context = {
+        'css_files': [
+            '_static/theme_overrides.css',
+        ],
+    }

 # Add any extra paths that contain custom files (such as robots.txt or
 # .htaccess) here, relative to this directory. These files are copied
+3 -3
Documentation/cpu-freq/core.rst
···
 The third argument is a struct cpufreq_freqs with the following
 values:

-=====	===========================
-cpu	number of the affected CPU
+======	======================================
+policy	a pointer to the struct cpufreq_policy
 old	old frequency
 new	new frequency
 flags	flags of the cpufreq driver
-=====	===========================
+======	======================================

 3. CPUFreq Table Generation with Operating Performance Point (OPP)
 ==================================================================
···
 	spin_lock(&p->lock);
 	p->count += this_cpu_read(var2);

-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-which makes the above code fully equivalent. On a PREEMPT_RT kernel
 migrate_disable() ensures that the task is pinned on the current CPU which
 in turn guarantees that the per-CPU access to var1 and var2 are staying on
-the same CPU.
+the same CPU while the task remains preemptible.

 The migrate_disable() substitution is not valid for the following
 scenario::
···
 	p = this_cpu_ptr(&var1);
 	p->val = func2();

-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-here migrate_disable() does not protect against reentrancy from a
-preempting task. A correct substitution for this case is::
+This breaks because migrate_disable() does not protect against reentrancy from
+a preempting task. A correct substitution for this case is::

 	func()
 	{
+11
Documentation/process/changes.rst
···
 binutils       2.23             ld -v
 flex           2.5.35           flex --version
 bison          2.0              bison --version
+pahole         1.16             pahole --version
 util-linux     2.10o            fdformat --version
 kmod           13               depmod -V
 e2fsprogs      1.41.4           e2fsck -V
···

 Since Linux 4.16, the build system generates parsers
 during build. This requires bison 2.0 or later.
+
+pahole:
+-------
+
+Since Linux 5.2, if CONFIG_DEBUG_INFO_BTF is selected, the build system
+generates BTF (BPF Type Format) from DWARF in vmlinux, a bit later from kernel
+modules as well. This requires pahole v1.16 or later.
+
+It is found in the 'dwarves' or 'pahole' distro packages or from
+https://fedorapeople.org/~acme/dwarves/.

 Perl
 ----
+2 -1
Documentation/process/submitting-patches.rst
···
 Documentation/process/submit-checklist.rst
 for a list of items to check before submitting code. If you are submitting
 a driver, also read Documentation/process/submitting-drivers.rst; for device
-tree binding patches, read Documentation/process/submitting-patches.rst.
+tree binding patches, read
+Documentation/devicetree/bindings/submitting-patches.rst.

 This documentation assumes that you're using ``git`` to prepare your patches.
 If you're unfamiliar with ``git``, you would be well-advised to learn how to
···
 # Mike Shaver, Helge Deller and Martin K. Petersen
 #

+ifdef CONFIG_PARISC_SELF_EXTRACT
+boot := arch/parisc/boot
+KBUILD_IMAGE := $(boot)/bzImage
+else
 KBUILD_IMAGE := vmlinuz
+endif

 NM		= sh $(srctree)/arch/parisc/nm
 CHECKFLAGS	+= -D__hppa__=1
+13 -1
arch/parisc/configs/generic-64bit_defconfig
···
 CONFIG_LOCALVERSION="-64bit"
 # CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_KERNEL_LZ4=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
 CONFIG_TASKSTATS=y
···
 CONFIG_BLK_DEV_INTEGRITY=y
 CONFIG_BINFMT_MISC=m
 # CONFIG_COMPACTION is not set
+CONFIG_MEMORY_FAILURE=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
···
 CONFIG_SCSI_SRP_ATTRS=y
 CONFIG_ISCSI_BOOT_SYSFS=y
 CONFIG_SCSI_MPT2SAS=y
-CONFIG_SCSI_LASI700=m
+CONFIG_SCSI_LASI700=y
 CONFIG_SCSI_SYM53C8XX_2=y
 CONFIG_SCSI_ZALON=y
 CONFIG_SCSI_QLA_ISCSI=m
 CONFIG_SCSI_DH=y
 CONFIG_ATA=y
+CONFIG_SATA_SIL=y
+CONFIG_SATA_SIS=y
+CONFIG_SATA_VIA=y
 CONFIG_PATA_NS87415=y
 CONFIG_PATA_SIL680=y
 CONFIG_ATA_GENERIC=y
···
 CONFIG_BLK_DEV_DM=m
 CONFIG_DM_RAID=m
 CONFIG_DM_UEVENT=y
+CONFIG_DM_AUDIT=y
 CONFIG_FUSION=y
 CONFIG_FUSION_SPI=y
 CONFIG_FUSION_SAS=y
···
 CONFIG_FB_MATROX_I2C=y
 CONFIG_FB_MATROX_MAVEN=y
 CONFIG_FB_RADEON=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_CLUT224 is not set
 CONFIG_HIDRAW=y
 CONFIG_HID_PID=y
 CONFIG_USB_HIDDEV=y
 CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
 CONFIG_UIO=y
 CONFIG_UIO_PDRV_GENIRQ=m
 CONFIG_UIO_AEC=m
+1
arch/parisc/install.sh
···
 if [ -n "${INSTALLKERNEL}" ]; then
   if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
   if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+  if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
 fi

 # Default install
+7 -21
arch/parisc/kernel/time.c
···
 static int __init init_cr16_clocksource(void)
 {
 	/*
-	 * The cr16 interval timers are not syncronized across CPUs on
-	 * different sockets, so mark them unstable and lower rating on
-	 * multi-socket SMP systems.
+	 * The cr16 interval timers are not syncronized across CPUs, even if
+	 * they share the same socket.
 	 */
 	if (num_online_cpus() > 1 && !running_on_qemu) {
-		int cpu;
-		unsigned long cpu0_loc;
-		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
+		/* mark sched_clock unstable */
+		clear_sched_clock_stable();

-		for_each_online_cpu(cpu) {
-			if (cpu == 0)
-				continue;
-			if ((cpu0_loc != 0) &&
-			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
-				continue;
-
-			/* mark sched_clock unstable */
-			clear_sched_clock_stable();
-
-			clocksource_cr16.name = "cr16_unstable";
-			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
-			clocksource_cr16.rating = 0;
-			break;
-		}
+		clocksource_cr16.name = "cr16_unstable";
+		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+		clocksource_cr16.rating = 0;
 	}

 	/* register at clocksource framework */
+8 -2
arch/s390/configs/debug_defconfig
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
 CONFIG_DIAG288_WATCHDOG=m
+# CONFIG_DRM_DEBUG_MODESET_LOCK is not set
 CONFIG_FB=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
···
 CONFIG_CRC7=m
 CONFIG_CRC8=m
 CONFIG_RANDOM32_SELFTEST=y
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_HEADERS_INSTALL=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
···
 CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
 CONFIG_DEBUG_PER_CPU_MAPS=y
 CONFIG_KFENCE=y
+CONFIG_KFENCE_STATIC_KEYS=y
 CONFIG_DEBUG_SHIRQ=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_DETECT_HUNG_TASK=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_DEBUG_ENTRY=y
 CONFIG_CIO_INJECT=y
 CONFIG_KUNIT=m
···
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
 CONFIG_LKDTM=m
 CONFIG_TEST_MIN_HEAP=y
-CONFIG_KPROBES_SANITY_TEST=y
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_RBTREE_TEST=y
 CONFIG_INTERVAL_TREE_TEST=m
 CONFIG_PERCPU_TEST=m
+6 -1
arch/s390/configs/defconfig
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_CRC4=m
 CONFIG_CRC7=m
 CONFIG_CRC8=m
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
 CONFIG_MAGIC_SYSRQ=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_KUNIT=m
 CONFIG_KUNIT_DEBUGFS=y
 CONFIG_LKDTM=m
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BPF=m
+2
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_LSM="yama,loadpin,safesetid,integrity"
 # CONFIG_ZLIB_DFLTCC is not set
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_PRINTK_TIME=y
 # CONFIG_SYMBOLIC_ERRNAME is not set
 CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_PANIC_ON_OOPS=y
···
 	depends on ACPI
 	select UCS2_STRING
 	select EFI_RUNTIME_WRAPPERS
+	select ARCH_USE_MEMREMAP_PROT
 	help
 	  This enables the kernel to use EFI runtime services that are
 	  available (such as the EFI variable services).
+18 -19
arch/x86/entry/entry_64.S
···
 	ud2
 1:
 #endif
+#ifdef CONFIG_XEN_PV
+	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
+#endif
+
 	POP_REGS pop_rdi=0

 	/*
···
 .Lparanoid_entry_checkgs:
 	/* EBX = 1 -> kernel GSBASE active, no restore required */
 	movl	$1, %ebx
+
 	/*
 	 * The kernel-enforced convention is a negative GSBASE indicates
 	 * a kernel value. No SWAPGS needed on entry and exit.
···
 	movl	$MSR_GS_BASE, %ecx
 	rdmsr
 	testl	%edx, %edx
-	jns	.Lparanoid_entry_swapgs
-	ret
-
-.Lparanoid_entry_swapgs:
-	swapgs
-
-	/*
-	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
-	 * unconditional CR3 write, even in the PTI case. So do an lfence
-	 * to prevent GS speculation, regardless of whether PTI is enabled.
-	 */
-	FENCE_SWAPGS_KERNEL_ENTRY
+	js	.Lparanoid_kernel_gsbase

 	/* EBX = 0 -> SWAPGS required on exit */
 	xorl	%ebx, %ebx
+	swapgs
+.Lparanoid_kernel_gsbase:
+
+	FENCE_SWAPGS_KERNEL_ENTRY
 	ret
 SYM_CODE_END(paranoid_entry)
···
 	pushq	%r12
 	ret

-.Lerror_entry_done_lfence:
-	FENCE_SWAPGS_KERNEL_ENTRY
-.Lerror_entry_done:
-	ret
-
 	/*
 	 * There are two places in the kernel that can potentially fault with
 	 * usergs. Handle them here. B stepping K8s sometimes report a
···
 	 * .Lgs_change's error handler with kernel gsbase.
 	 */
 	SWAPGS
-	FENCE_SWAPGS_USER_ENTRY
-	jmp .Lerror_entry_done
+
+	/*
+	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
+	 * kernel or user gsbase.
+	 */
+.Lerror_entry_done_lfence:
+	FENCE_SWAPGS_KERNEL_ENTRY
+	ret

 .Lbstep_iret:
 	/* Fix truncated RIP */
···

 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)

+/*
+ * Error codes related to GHCB input that can be communicated back to the guest
+ * by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
+ */
+#define GHCB_ERR_NOT_REGISTERED		1
+#define GHCB_ERR_INVALID_USAGE		2
+#define GHCB_ERR_INVALID_SCRATCH_AREA	3
+#define GHCB_ERR_MISSING_INPUT		4
+#define GHCB_ERR_INVALID_INPUT		5
+#define GHCB_ERR_INVALID_EVENT		6
+
 #endif
···
 			  char *dst, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
-	char __user *target = (char __user *)dst;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8 d1;

 	/*
 	 * This function uses __put_user() independent of whether kernel or user
···
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *target = (u8 __user *)dst;
+
 		memcpy(&d1, buf, 1);
 		if (__put_user(d1, target))
 			goto fault;
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *target = (u16 __user *)dst;
+
 		memcpy(&d2, buf, 2);
 		if (__put_user(d2, target))
 			goto fault;
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *target = (u32 __user *)dst;
+
 		memcpy(&d4, buf, 4);
 		if (__put_user(d4, target))
 			goto fault;
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *target = (u64 __user *)dst;
+
 		memcpy(&d8, buf, 8);
 		if (__put_user(d8, target))
 			goto fault;
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;
···
 			 char *src, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT;
-	char __user *s = (char __user *)src;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8 d1;

 	/*
 	 * This function uses __get_user() independent of whether kernel or user
···
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *s = (u8 __user *)src;
+
 		if (__get_user(d1, s))
 			goto fault;
 		memcpy(buf, &d1, 1);
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *s = (u16 __user *)src;
+
 		if (__get_user(d2, s))
 			goto fault;
 		memcpy(buf, &d2, 2);
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *s = (u32 __user *)src;
+
 		if (__get_user(d4, s))
 			goto fault;
 		memcpy(buf, &d4, 4);
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *s = (u64 __user *)src;
 		if (__get_user(d8, s))
 			goto fault;
 		memcpy(buf, &d8, 8);
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;
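The restructuring above gives each size case its own correctly-typed temporary and user pointer, so the access width is carried by the pointer type rather than by a generic `char __user *`. A userspace sketch of the same pattern (names are illustrative; plain pointer stores stand in for `__put_user()`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Per-size typed temporaries, mirroring the vc_write_mem() switch. */
static int copy_sized(void *dst, const void *buf, size_t size)
{
	switch (size) {
	case 1: {
		uint8_t d1;

		memcpy(&d1, buf, 1);
		*(uint8_t *)dst = d1;
		break;
	}
	case 2: {
		uint16_t d2;

		memcpy(&d2, buf, 2);
		*(uint16_t *)dst = d2;
		break;
	}
	case 4: {
		uint32_t d4;

		memcpy(&d4, buf, 4);
		*(uint32_t *)dst = d4;
		break;
	}
	case 8: {
		uint64_t d8;

		memcpy(&d8, buf, 8);
		*(uint64_t *)dst = d8;
		break;
	}
	default:
		return -1;	/* the kernel returns ES_UNSUPPORTED here */
	}

	return 0;
}
```

The point of the per-case scoping is that the compiler emits an access of exactly the requested width for each case, which the kernel needs so `__put_user()`/`__get_user()` pick the right instruction.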
+24 -4
arch/x86/kernel/tsc.c
···

 EXPORT_SYMBOL_GPL(mark_tsc_unstable);

+static void __init tsc_disable_clocksource_watchdog(void)
+{
+	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+}
+
 static void __init check_system_tsc_reliable(void)
 {
 #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
···
 #endif
 	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
 		tsc_clocksource_reliable = 1;
+
+	/*
+	 * Disable the clocksource watchdog when the system has:
+	 *  - TSC running at constant frequency
+	 *  - TSC which does not stop in C-States
+	 *  - the TSC_ADJUST register which allows to detect even minimal
+	 *    modifications
+	 *  - not more than two sockets. As the number of sockets cannot be
+	 *    evaluated at the early boot stage where this has to be
+	 *    invoked, check the number of online memory nodes as a
+	 *    fallback solution which is an reasonable estimate.
+	 */
+	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
+	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
+	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
+	    nr_online_nodes <= 2)
+		tsc_disable_clocksource_watchdog();
 }

 /*
···
 	if (tsc_unstable)
 		goto unreg;

-	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
-
 	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
 		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
···
 	}

 	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+		tsc_disable_clocksource_watchdog();

 	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
 	detect_art();
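The gate added to check_system_tsc_reliable() can be read as a single predicate. A userspace model (plain bools stand in for `boot_cpu_has()` and the node count; not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the new watchdog gate: the clocksource watchdog may be
 * disabled only when all three TSC invariants hold and the memory node
 * count, used as a proxy for the socket count, is at most two.
 */
static bool can_disable_tsc_watchdog(bool constant_tsc, bool nonstop_tsc,
				     bool tsc_adjust, int nr_online_nodes)
{
	return constant_tsc && nonstop_tsc && tsc_adjust &&
	       nr_online_nodes <= 2;
}
```

Any single missing invariant, or a larger node count, keeps the watchdog armed.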
+41
arch/x86/kernel/tsc_sync.c
···
 };

 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
+static struct timer_list tsc_sync_check_timer;

 /*
  * TSC's on different sockets may be reset asynchronously.
···
 		adj->warned = true;
 	}
 }
+
+/*
+ * Normally the tsc_sync will be checked every time system enters idle
+ * state, but there is still caveat that a system won't enter idle,
+ * either because it's too busy or configured purposely to not enter
+ * idle.
+ *
+ * So setup a periodic timer (every 10 minutes) to make sure the check
+ * is always on.
+ */
+
+#define SYNC_CHECK_INTERVAL	(HZ * 600)
+
+static void tsc_sync_check_timer_fn(struct timer_list *unused)
+{
+	int next_cpu;
+
+	tsc_verify_tsc_adjust(false);
+
+	/* Run the check for all onlined CPUs in turn */
+	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+	if (next_cpu >= nr_cpu_ids)
+		next_cpu = cpumask_first(cpu_online_mask);
+
+	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
+	add_timer_on(&tsc_sync_check_timer, next_cpu);
+}
+
+static int __init start_sync_check_timer(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
+		return 0;
+
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
+	add_timer(&tsc_sync_check_timer);
+
+	return 0;
+}
+late_initcall(start_sync_check_timer);

 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
 				   unsigned int cpu, bool bootcpu)
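The timer callback rotates the check across CPUs with `cpumask_next()`/`cpumask_first()`. A minimal model of that rotation, with a sorted array standing in for `cpu_online_mask` (illustrative, not kernel API):

```c
#include <assert.h>

/*
 * Pick the first online CPU numerically above `cur`, wrapping to the
 * lowest one, mirroring the cpumask_next()/cpumask_first() pair in
 * tsc_sync_check_timer_fn().
 */
static int next_check_cpu(int cur, const int *online, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (online[i] > cur)
			return online[i];
	}

	return online[0];	/* wrap around, like cpumask_first() */
}
```

Re-arming with `add_timer_on()` on the computed CPU is what makes every online CPU run the TSC_ADJUST check eventually, even on a system that never idles.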
+21 -2
arch/x86/kvm/mmu/mmu.c
···

 static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	return sp->role.invalid ||
+	if (sp->role.invalid)
+		return true;
+
+	/* TDP MMU pages due not use the MMU generation. */
+	return !sp->tdp_mmu_page &&
 	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
···
 	return true;
 }

+/*
+ * Returns true if the page fault is stale and needs to be retried, i.e. if the
+ * root was invalidated by a memslot update or a relevant mmu_notifier fired.
+ */
+static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
+				struct kvm_page_fault *fault, int mmu_seq)
+{
+	if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa)))
+		return true;
+
+	return fault->slot &&
+	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+}
+
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
···
 	else
 		write_lock(&vcpu->kvm->mmu_lock);

-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
+
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
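The combined effect of the new is_page_fault_stale()/is_obsolete_sp() pair can be modeled as one boolean predicate (flags stand in for the kvm_mmu_page and kvm_page_fault state; a sketch, not the KVM API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * A fault is stale when the root shadow page is obsolete, or when it
 * targets a memslot and a relevant mmu_notifier invalidation raced
 * with the fault.
 */
static bool fault_is_stale(bool root_invalid, bool root_is_tdp_mmu,
			   bool mmu_gen_current, bool has_slot,
			   bool notifier_retried)
{
	/* TDP MMU pages do not use the MMU generation counter. */
	bool obsolete = root_invalid ||
			(!root_is_tdp_mmu && !mmu_gen_current);

	return obsolete || (has_slot && notifier_retried);
}
```

Note the TDP MMU carve-out: a TDP MMU root is never considered obsolete by generation mismatch alone, only by being explicitly invalidated.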
+2 -1
arch/x86/kvm/mmu/paging_tmpl.h
···

 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;

 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
···
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;

 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
-	pmu->reserved_bits = 0xffffffff00200000ull;
+	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+60 -42
arch/x86/kvm/svm/sev.c
···
 		__free_page(virt_to_page(svm->sev_es.vmsa));

 	if (svm->sev_es.ghcb_sa_free)
-		kfree(svm->sev_es.ghcb_sa);
+		kvfree(svm->sev_es.ghcb_sa);
 }

 static void dump_ghcb(struct vcpu_svm *svm)
···
 	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
 }

-static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu;
 	struct ghcb *ghcb;
-	u64 exit_code = 0;
+	u64 exit_code;
+	u64 reason;

 	ghcb = svm->sev_es.ghcb;

-	/* Only GHCB Usage code 0 is supported */
-	if (ghcb->ghcb_usage)
-		goto vmgexit_err;
-
 	/*
-	 * Retrieve the exit code now even though is may not be marked valid
+	 * Retrieve the exit code now even though it may not be marked valid
 	 * as it could help with debugging.
 	 */
 	exit_code = ghcb_get_sw_exit_code(ghcb);
+
+	/* Only GHCB Usage code 0 is supported */
+	if (ghcb->ghcb_usage) {
+		reason = GHCB_ERR_INVALID_USAGE;
+		goto vmgexit_err;
+	}
+
+	reason = GHCB_ERR_MISSING_INPUT;

 	if (!ghcb_sw_exit_code_is_valid(ghcb) ||
 	    !ghcb_sw_exit_info_1_is_valid(ghcb) ||
···
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		break;
 	default:
+		reason = GHCB_ERR_INVALID_EVENT;
 		goto vmgexit_err;
 	}

-	return 0;
+	return true;

 vmgexit_err:
 	vcpu = &svm->vcpu;

-	if (ghcb->ghcb_usage) {
+	if (reason == GHCB_ERR_INVALID_USAGE) {
 		vcpu_unimpl(vcpu, "vmgexit: ghcb usage %#x is not valid\n",
 			    ghcb->ghcb_usage);
+	} else if (reason == GHCB_ERR_INVALID_EVENT) {
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx is not valid\n",
+			    exit_code);
 	} else {
-		vcpu_unimpl(vcpu, "vmgexit: exit reason %#llx is not valid\n",
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx input is not valid\n",
 			    exit_code);
 		dump_ghcb(svm);
 	}

-	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
-	vcpu->run->internal.ndata = 2;
-	vcpu->run->internal.data[0] = exit_code;
-	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+	/* Clear the valid entries fields */
+	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));

-	return -EINVAL;
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, reason);
+
+	return false;
 }

 void sev_es_unmap_ghcb(struct vcpu_svm *svm)
···
 		svm->sev_es.ghcb_sa_sync = false;
 	}

-	kfree(svm->sev_es.ghcb_sa);
+	kvfree(svm->sev_es.ghcb_sa);
 	svm->sev_es.ghcb_sa = NULL;
 	svm->sev_es.ghcb_sa_free = false;
 }
···
 	scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
 	if (!scratch_gpa_beg) {
 		pr_err("vmgexit: scratch gpa not provided\n");
-		return false;
+		goto e_scratch;
 	}

 	scratch_gpa_end = scratch_gpa_beg + len;
 	if (scratch_gpa_end < scratch_gpa_beg) {
 		pr_err("vmgexit: scratch length (%#llx) not valid for scratch address (%#llx)\n",
 		       len, scratch_gpa_beg);
-		return false;
+		goto e_scratch;
 	}

 	if ((scratch_gpa_beg & PAGE_MASK) == control->ghcb_gpa) {
···
 		    scratch_gpa_end > ghcb_scratch_end) {
 			pr_err("vmgexit: scratch area is outside of GHCB shared buffer area (%#llx - %#llx)\n",
 			       scratch_gpa_beg, scratch_gpa_end);
-			return false;
+			goto e_scratch;
 		}

 		scratch_va = (void *)svm->sev_es.ghcb;
···
 		if (len > GHCB_SCRATCH_AREA_LIMIT) {
 			pr_err("vmgexit: scratch area exceeds KVM limits (%#llx requested, %#llx limit)\n",
 			       len, GHCB_SCRATCH_AREA_LIMIT);
-			return false;
+			goto e_scratch;
 		}
-		scratch_va = kzalloc(len, GFP_KERNEL_ACCOUNT);
+		scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
 		if (!scratch_va)
-			return false;
+			goto e_scratch;

 		if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
 			/* Unable to copy scratch area from guest */
 			pr_err("vmgexit: kvm_read_guest for scratch area failed\n");

-			kfree(scratch_va);
-			return false;
+			kvfree(scratch_va);
+			goto e_scratch;
 		}

 		/*
···
 	svm->sev_es.ghcb_sa_len = len;

 	return true;
+
+e_scratch:
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
+
+	return false;
 }

 static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
···

 		ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_CPUID);
 		if (!ret) {
-			ret = -EINVAL;
+			/* Error, keep GHCB MSR value as-is */
 			break;
 		}
···
 					      GHCB_MSR_TERM_REASON_POS);
 		pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
 			reason_set, reason_code);
-		fallthrough;
+
+		ret = -EINVAL;
+		break;
 	}
 	default:
-		ret = -EINVAL;
+		/* Error, keep GHCB MSR value as-is */
+		break;
 	}

 	trace_kvm_vmgexit_msr_protocol_exit(svm->vcpu.vcpu_id,
···

 	if (!ghcb_gpa) {
 		vcpu_unimpl(vcpu, "vmgexit: GHCB gpa is not set\n");
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
 		/* Unable to map GHCB from guest */
 		vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
 			    ghcb_gpa);
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva;
···

 	exit_code = ghcb_get_sw_exit_code(ghcb);

-	ret = sev_es_validate_vmgexit(svm);
-	if (ret)
-		return ret;
+	if (!sev_es_validate_vmgexit(svm))
+		return 1;

 	sev_es_sync_from_ghcb(svm);
 	ghcb_set_sw_exit_info_1(ghcb, 0);
 	ghcb_set_sw_exit_info_2(ghcb, 0);

-	ret = -EINVAL;
+	ret = 1;
 	switch (exit_code) {
 	case SVM_VMGEXIT_MMIO_READ:
 		if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
···
 		default:
 			pr_err("svm: vmgexit: unsupported AP jump table request - exit_info_1=%#llx\n",
 			       control->exit_info_1);
-			ghcb_set_sw_exit_info_1(ghcb, 1);
-			ghcb_set_sw_exit_info_2(ghcb,
-						X86_TRAP_UD |
-						SVM_EVTINJ_TYPE_EXEPT |
-						SVM_EVTINJ_VALID);
+			ghcb_set_sw_exit_info_1(ghcb, 2);
+			ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT);
 		}

-		ret = 1;
 		break;
 	}
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		vcpu_unimpl(vcpu,
 			    "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
 			    control->exit_info_1, control->exit_info_2);
+		ret = -EINVAL;
 		break;
 	default:
 		ret = svm_invoke_exit_handler(vcpu, exit_code);
···
 		return -EINVAL;

 	if (!setup_vmgexit_scratch(svm, in, bytes))
-		return -EINVAL;
+		return 1;

 	return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa,
 				    count, in);
···
 		return;
 	}

-	new = early_memremap(data.phys_map, data.size);
+	new = early_memremap_prot(data.phys_map, data.size,
+				  pgprot_val(pgprot_encrypted(FIXMAP_PAGE_NORMAL)));
 	if (!new) {
 		pr_err("Failed to map new boot services memmap\n");
 		return;
+11 -1
arch/x86/realmode/init.c
···
 #ifdef CONFIG_X86_64
 	u64 *trampoline_pgd;
 	u64 efer;
+	int i;
 #endif

 	base = (unsigned char *)real_mode_header;
···
 	trampoline_header->flags = 0;

 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
+
+	/* Map the real mode stub as virtual == physical */
 	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
-	trampoline_pgd[511] = init_top_pgt[511].pgd;
+
+	/*
+	 * Include the entirety of the kernel mapping into the trampoline
+	 * PGD. This way, all mappings present in the normal kernel page
+	 * tables are usable while running on trampoline_pgd.
+	 */
+	for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
+		trampoline_pgd[i] = init_top_pgt[i].pgd;
 #endif

 	sme_sev_setup_real_mode(trampoline_header);
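The trampoline PGD change is easiest to see as array operations: slot 0 identity-maps the stub, and every slot covering the kernel half of the address space is copied instead of only slot 511. A sketch with plain arrays standing in for the pgd structures (`kernel_start_idx` models `pgd_index(__PAGE_OFFSET)`; index values here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PTRS_PER_PGD	512

/* Populate a trampoline PGD from the kernel's top-level page table. */
static void build_trampoline_pgd(uint64_t *tramp, const uint64_t *init_top_pgt,
				 uint64_t identity_entry, int kernel_start_idx)
{
	int i;

	/* Slot 0: virtual == physical mapping for the real mode stub. */
	tramp[0] = identity_entry;

	/* Copy the whole kernel mapping, not just the final slot. */
	for (i = kernel_start_idx; i < PTRS_PER_PGD; i++)
		tramp[i] = init_top_pgt[i];
}
```

Copying the full kernel half is what lets code running on trampoline_pgd dereference any mapping that exists in the normal kernel page tables.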
+20
arch/x86/xen/xen-asm.S
···

 #include <linux/init.h>
 #include <linux/linkage.h>
+#include <../entry/calling.h>

 .pushsection .noinstr.text, "ax"
 /*
···
 	pushq $0
 	jmp hypercall_iret
 SYM_CODE_END(xen_iret)
+
+/*
+ * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is
+ * also the kernel stack. Reusing swapgs_restore_regs_and_return_to_usermode()
+ * in XEN pv would cause %rsp to move up to the top of the kernel stack and
+ * leave the IRET frame below %rsp, which is dangerous to be corrupted if #NMI
+ * interrupts. And swapgs_restore_regs_and_return_to_usermode() pushing the IRET
+ * frame at the same address is useless.
+ */
+SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode)
+	UNWIND_HINT_REGS
+	POP_REGS
+
+	/* stackleak_erase() can work safely on the kernel stack. */
+	STACKLEAK_ERASE_NOCLOBBER
+
+	addq	$8, %rsp	/* skip regs->orig_ax */
+	jmp	xen_iret
+SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode)

 /*
  * Xen handles syscall callbacks much like ordinary exceptions, which
···
 	int ret;
 
 	if (idx < 0) {
-		pr_warn("deleting an unspecified loop device is not supported.\n");
+		pr_warn_once("deleting an unspecified loop device is not supported.\n");
 		return -EINVAL;
 	}
 
···
 	/* disable all interrupts */
 	amdgpu_irq_disable_all(adev);
 	if (adev->mode_info.mode_config_initialized){
-		if (!amdgpu_device_has_dc_support(adev))
+		if (!drm_drv_uses_atomic_modeset(adev_to_drm(adev)))
 			drm_helper_force_disable_all(adev_to_drm(adev));
 		else
 			drm_atomic_helper_shutdown(adev_to_drm(adev));
···
 {
 	int r;
 
+	amdgpu_amdkfd_pre_reset(adev);
+
 	if (from_hypervisor)
 		r = amdgpu_virt_request_full_gpu(adev, true);
 	else
···
 	amdgpu_irq_gpu_reset_resume_helper(adev);
 	r = amdgpu_ib_ring_tests(adev);
+	amdgpu_amdkfd_post_reset(adev);
 
 error:
 	if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) {
···
 		cancel_delayed_work_sync(&tmp_adev->delayed_init_work);
 
-		amdgpu_amdkfd_pre_reset(tmp_adev);
+		if (!amdgpu_sriov_vf(tmp_adev))
+			amdgpu_amdkfd_pre_reset(tmp_adev);
 
 		/*
 		 * Mark these ASICs to be reseted as untracked first
···
 			drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
 		}
 
-		if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) {
+		if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) {
 			drm_helper_resume_force_mode(adev_to_drm(tmp_adev));
 		}
 
···
 skip_sched_resume:
 	list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
-		/* unlock kfd */
-		if (!need_emergency_restart)
-			amdgpu_amdkfd_post_reset(tmp_adev);
+		/* unlock kfd: SRIOV would do it separately */
+		if (!need_emergency_restart && !amdgpu_sriov_vf(tmp_adev))
+			amdgpu_amdkfd_post_reset(tmp_adev);
 
 		/* kfd_post_reset will do nothing if kfd device is not initialized,
 		 * need to bring up kfd here if it's not be initialized before
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
···
 	[HDP_HWIP]	= HDP_HWID,
 	[SDMA0_HWIP]	= SDMA0_HWID,
 	[SDMA1_HWIP]	= SDMA1_HWID,
+	[SDMA2_HWIP]	= SDMA2_HWID,
+	[SDMA3_HWIP]	= SDMA3_HWID,
 	[MMHUB_HWIP]	= MMHUB_HWID,
 	[ATHUB_HWIP]	= ATHUB_HWID,
 	[NBIO_HWIP]	= NBIF_HWID,
···
 	case IP_VERSION(3, 0, 64):
 	case IP_VERSION(3, 1, 1):
 	case IP_VERSION(3, 0, 2):
+	case IP_VERSION(3, 0, 192):
 		amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block);
 		if (!amdgpu_sriov_vf(adev))
 			amdgpu_device_ip_block_add(adev, &jpeg_v3_0_ip_block);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
···
 		break;
 	case IP_VERSION(3, 0, 0):
 	case IP_VERSION(3, 0, 64):
+	case IP_VERSION(3, 0, 192):
 		if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0))
 			fw_name = FIRMWARE_SIENNA_CICHLID;
 		else
+2-2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
···
 	int i = 0;
 
 	for (i = 0; i < adev->mode_info.num_crtc; i++)
-		if (adev->mode_info.crtcs[i])
-			hrtimer_cancel(&adev->mode_info.crtcs[i]->vblank_timer);
+		if (adev->amdgpu_vkms_output[i].vblank_hrtimer.function)
+			hrtimer_cancel(&adev->amdgpu_vkms_output[i].vblank_hrtimer);
 
 	kfree(adev->mode_info.bios_hardcoded_edid);
 	kfree(adev->amdgpu_vkms_output);
+4-3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
···
 
 	gfx_v9_0_cp_enable(adev, false);
 
-	/* Skip suspend with A+A reset */
-	if (adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) {
-		dev_dbg(adev->dev, "Device in reset. Skipping RLC halt\n");
+	/* Skip stopping RLC with A+A reset or when RLC controls GFX clock */
+	if ((adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) ||
+	    (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2))) {
+		dev_dbg(adev->dev, "Skipping RLC halt\n");
 		return 0;
 	}
 
+1
drivers/gpu/drm/amd/amdgpu/nv.c
···
 	switch (adev->ip_versions[UVD_HWIP][0]) {
 	case IP_VERSION(3, 0, 0):
 	case IP_VERSION(3, 0, 64):
+	case IP_VERSION(3, 0, 192):
 		if (amdgpu_sriov_vf(adev)) {
 			if (encode)
 				*codecs = &sriov_sc_video_codecs_encode;
+4-9
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
···
 static void svm_range_restore_work(struct work_struct *work)
 {
 	struct delayed_work *dwork = to_delayed_work(work);
-	struct amdkfd_process_info *process_info;
 	struct svm_range_list *svms;
 	struct svm_range *prange;
 	struct kfd_process *p;
···
 	 * the lifetime of this thread, kfd_process and mm will be valid.
 	 */
 	p = container_of(svms, struct kfd_process, svms);
-	process_info = p->kgd_process_info;
 	mm = p->mm;
 	if (!mm)
 		return;
 
-	mutex_lock(&process_info->lock);
 	svm_range_list_lock_and_flush_work(svms, mm);
 	mutex_lock(&svms->lock);
···
 out_reschedule:
 	mutex_unlock(&svms->lock);
 	mmap_write_unlock(mm);
-	mutex_unlock(&process_info->lock);
 
 	/* If validation failed, reschedule another attempt */
 	if (evicted_ranges) {
···
 
 	if (atomic_read(&svms->drain_pagefaults)) {
 		pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
+		r = 0;
 		goto out;
 	}
···
 	mm = get_task_mm(p->lead_thread);
 	if (!mm) {
 		pr_debug("svms 0x%p failed to get mm\n", svms);
+		r = 0;
 		goto out;
 	}
···
 
 	if (svm_range_skip_recover(prange)) {
 		amdgpu_gmc_filter_faults_remove(adev, addr, pasid);
+		r = 0;
 		goto out_unlock_range;
 	}
···
 	if (timestamp < AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING) {
 		pr_debug("svms 0x%p [0x%lx %lx] already restored\n",
 			 svms, prange->start, prange->last);
+		r = 0;
 		goto out_unlock_range;
 	}
···
 svm_range_set_attr(struct kfd_process *p, uint64_t start, uint64_t size,
 		   uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs)
 {
-	struct amdkfd_process_info *process_info = p->kgd_process_info;
 	struct mm_struct *mm = current->mm;
 	struct list_head update_list;
 	struct list_head insert_list;
···
 		return r;
 
 	svms = &p->svms;
-
-	mutex_lock(&process_info->lock);
 
 	svm_range_list_lock_and_flush_work(svms, mm);
···
 	mutex_unlock(&svms->lock);
 	mmap_read_unlock(mm);
 out:
-	mutex_unlock(&process_info->lock);
-
 	pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] done, r=%d\n", p->pasid,
 		 &p->svms, start, start + size - 1, r);
 
···
 #include "dm_helpers.h"
 
 #include "dc_link_ddc.h"
+#include "ddc_service_types.h"
+#include "dpcd_defs.h"
 
 #include "i2caux_interface.h"
 #include "dmub_cmd.h"
···
 };
 
 #if defined(CONFIG_DRM_AMD_DC_DCN)
+static bool needs_dsc_aux_workaround(struct dc_link *link)
+{
+	if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+	    (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) &&
+	    link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2)
+		return true;
+
+	return false;
+}
+
 static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
 {
 	struct dc_sink *dc_sink = aconnector->dc_sink;
···
 	u8 *dsc_branch_dec_caps = NULL;
 
 	aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port);
-#if defined(CONFIG_HP_HOOK_WORKAROUND)
+
 	/*
 	 * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs
 	 * because it only check the dsc/fec caps of the "port variable" and not the dock
···
 	 * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux
 	 *
 	 */
-
-	if (!aconnector->dsc_aux && !port->parent->port_parent)
+	if (!aconnector->dsc_aux && !port->parent->port_parent &&
+	    needs_dsc_aux_workaround(aconnector->dc_link))
 		aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
-#endif
+
 	if (!aconnector->dsc_aux)
 		return false;
 
+16
drivers/gpu/drm/amd/display/dc/core/dc_link.c
···
 	dal_ddc_service_set_transaction_type(link->ddc,
 					     sink_caps->transaction_type);
 
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	/* Apply work around for tunneled MST on certain USB4 docks. Always use DSC if dock
+	 * reports DSC support.
+	 */
+	if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA &&
+	    link->type == dc_connection_mst_branch &&
+	    link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+	    link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_SUPPORT &&
+	    !link->dc->debug.dpia_debug.bits.disable_mst_dsc_work_around)
+		link->wa_flags.dpia_mst_dsc_always_on = true;
+#endif
+
 #if defined(CONFIG_DRM_AMD_DC_HDCP)
 	/* In case of fallback to SST when topology discovery below fails
 	 * HDCP caps will be querried again later by the upper layer (caller
···
 	if (link->type == dc_connection_mst_branch) {
 		LINK_INFO("link=%d, mst branch is now Disconnected\n",
 			  link->link_index);
+
+		/* Disable work around which keeps DSC on for tunneled MST on certain USB4 docks. */
+		if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+			link->wa_flags.dpia_mst_dsc_always_on = false;
 
 		dm_helpers_dp_mst_stop_top_mgr(link->ctx, link);
 
+1-1
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
···
 	}
 
 	for (lane = 0; lane < (uint8_t)lt_settings->link_settings.lane_count; lane++)
-		lt_settings->dpcd_lane_settings[lane].bits.VOLTAGE_SWING_SET = VOLTAGE_SWING_LEVEL0;
+		lt_settings->dpcd_lane_settings[lane].raw = 0;
 }
 
 	if (status == LINK_TRAINING_SUCCESS) {
+14-10
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
···
 	if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param)
 		return false;
 
+	// Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks
+	if (old_stream->audio_info.mode_count != stream->audio_info.mode_count)
+		return false;
+
 	return true;
 }
···
 
 	if (!new_ctx)
 		return DC_ERROR_UNEXPECTED;
-#if defined(CONFIG_DRM_AMD_DC_DCN)
-
-	/*
-	 * Update link encoder to stream assignment.
-	 * TODO: Split out reason allocation from validation.
-	 */
-	if (dc->res_pool->funcs->link_encs_assign && fast_validate == false)
-		dc->res_pool->funcs->link_encs_assign(
-			dc, new_ctx, new_ctx->streams, new_ctx->stream_count);
-#endif
 
 	if (dc->res_pool->funcs->validate_global) {
 		result = dc->res_pool->funcs->validate_global(dc, new_ctx);
···
 	if (result == DC_OK)
 		if (!dc->res_pool->funcs->validate_bandwidth(dc, new_ctx, fast_validate))
 			result = DC_FAIL_BANDWIDTH_VALIDATE;
+
+#if defined(CONFIG_DRM_AMD_DC_DCN)
+	/*
+	 * Only update link encoder to stream assignment after bandwidth validation passed.
+	 * TODO: Split out assignment and validation.
+	 */
+	if (result == DC_OK && dc->res_pool->funcs->link_encs_assign && fast_validate == false)
+		dc->res_pool->funcs->link_encs_assign(
+			dc, new_ctx, new_ctx->streams, new_ctx->stream_count);
+#endif
 
 	return result;
 }
···
 		bool dp_skip_DID2;
 		bool dp_skip_reset_segment;
 		bool dp_mot_reset_segment;
+		/* Some USB4 docks do not handle turning off MST DSC once it has been enabled. */
+		bool dpia_mst_dsc_always_on;
 	} wa_flags;
 	struct link_mst_stream_allocation_table mst_stream_alloc_table;
 
+1-1
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
···
 			dev_err(adev->dev, "Failed to disable smu features.\n");
 	}
 
-	if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0) &&
+	if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2) &&
 	    adev->gfx.rlc.funcs->stop)
 		adev->gfx.rlc.funcs->stop(adev);
 
···
 
 #include "intel_backlight.h"
 #include "intel_display_types.h"
+#include "intel_dp.h"
 #include "intel_dp_aux_backlight.h"
 
 /* TODO:
···
 	struct intel_panel *panel = &connector->panel;
 	int ret;
 	u8 tcon_cap[4];
+
+	intel_dp_wait_source_oui(intel_dp);
 
 	ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap));
 	if (ret != sizeof(tcon_cap))
···
 	int ret;
 	u8 old_ctrl, ctrl;
 
+	intel_dp_wait_source_oui(intel_dp);
+
 	ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl);
 	if (ret != 1) {
 		drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret);
···
 	struct intel_panel *panel = &connector->panel;
 	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
 
+	if (!panel->backlight.edp.vesa.info.aux_enable) {
+		u32 pwm_level = intel_backlight_invert_pwm_level(connector,
+								 panel->backlight.pwm_level_max);
+
+		panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level);
+	}
+
 	drm_edp_backlight_enable(&intel_dp->aux, &panel->backlight.edp.vesa.info, level);
 }
···
 	struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder);
 
 	drm_edp_backlight_disable(&intel_dp->aux, &panel->backlight.edp.vesa.info);
+
+	if (!panel->backlight.edp.vesa.info.aux_enable)
+		panel->backlight.pwm_funcs->disable(old_conn_state,
+						    intel_backlight_invert_pwm_level(connector, 0));
 }
 
 static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector, enum pipe pipe)
···
 	if (ret < 0)
 		return ret;
 
+	if (!panel->backlight.edp.vesa.info.aux_enable) {
+		ret = panel->backlight.pwm_funcs->setup(connector, pipe);
+		if (ret < 0) {
+			drm_err(&i915->drm,
+				"Failed to setup PWM backlight controls for eDP backlight: %d\n",
+				ret);
+			return ret;
+		}
+	}
 	panel->backlight.max = panel->backlight.edp.vesa.info.max;
 	panel->backlight.min = 0;
 	if (current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) {
···
 	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
 
-	/* TODO: We currently only support AUX only backlight configurations, not backlights which
-	 * require a mix of PWM and AUX controls to work. In the mean time, these machines typically
-	 * work just fine using normal PWM controls anyway.
-	 */
-	if ((intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) &&
-	    drm_edp_backlight_supported(intel_dp->edp_dpcd)) {
+	if (drm_edp_backlight_supported(intel_dp->edp_dpcd)) {
 		drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n");
 		return true;
 	}
+1
drivers/gpu/drm/i915/gt/intel_gtt.c
···
 #include <linux/slab.h> /* fault-inject.h is not standalone! */
 
 #include <linux/fault-inject.h>
+#include <linux/sched/mm.h>
 
 #include "gem/i915_gem_lmem.h"
 #include "i915_trace.h"
···
 	if (!prop) {
 		DRM_DEV_DEBUG(dev,
 			"failed to find data lane mapping, using default\n");
+		/* Set the number of data lanes to 4 by default. */
+		msm_host->num_data_lanes = 4;
 		return 0;
 	}
 
···
  * @ring_nr:   the ringbuffer used by this submitqueue, which is determined
  *             by the submitqueue's priority
  * @faults:    the number of GPU hangs associated with this submitqueue
+ * @last_fence: the sequence number of the last allocated fence (for error
+ *             checking)
  * @ctx:       the per-drm_file context associated with the submitqueue (ie.
  *             which set of pgtables do submits jobs associated with the
  *             submitqueue use)
···
 	u32 flags;
 	u32 ring_nr;
 	int faults;
+	uint32_t last_fence;
 	struct msm_file_private *ctx;
 	struct list_head node;
 	struct idr fence_idr;
+9-4
drivers/gpu/drm/msm/msm_gpu_devfreq.c
···
 	struct msm_gpu *gpu = dev_to_gpu(dev);
 	struct dev_pm_opp *opp;
 
+	/*
+	 * Note that devfreq_recommended_opp() can modify the freq
+	 * to something that actually is in the opp table:
+	 */
 	opp = devfreq_recommended_opp(dev, freq, flags);
 
 	/*
···
 	 */
 	if (gpu->devfreq.idle_freq) {
 		gpu->devfreq.idle_freq = *freq;
+		dev_pm_opp_put(opp);
 		return 0;
 	}
 
···
 	struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq);
 	unsigned long idle_freq, target_freq = 0;
 
-	if (!df->devfreq)
-		return;
-
 	/*
 	 * Hold devfreq lock to synchronize with get_dev_status()/
 	 * target() callbacks
···
 {
 	struct msm_gpu_devfreq *df = &gpu->devfreq;
 
+	if (!df->devfreq)
+		return;
+
 	msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1),
-			       HRTIMER_MODE_ABS);
+			       HRTIMER_MODE_REL);
 }
···
 	if (!e)
 		return -ENOMEM;
 
-	e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED_INTERNAL;
+	e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED;
 	e->event.length = sizeof(e->event);
 
 	ret = drm_event_reserve_init(dev, file, &e->base, &e->event);
+5-5
drivers/hid/Kconfig
···
 
 config HID_CHICONY
 	tristate "Chicony devices"
-	depends on HID
+	depends on USB_HID
 	default !EXPERT
 	help
 	  Support for Chicony Tactical pad and special keys on Chicony keyboards.
 
 config HID_CORSAIR
 	tristate "Corsair devices"
-	depends on HID && USB && LEDS_CLASS
+	depends on USB_HID && LEDS_CLASS
 	help
 	  Support for Corsair devices that are not fully compliant with the
 	  HID standard.
···
 
 config HID_PRODIKEYS
 	tristate "Prodikeys PC-MIDI Keyboard support"
-	depends on HID && SND
+	depends on USB_HID && SND
 	select SND_RAWMIDI
 	help
 	  Support for Prodikeys PC-MIDI Keyboard device support.
···
 
 config HID_LOGITECH
 	tristate "Logitech devices"
-	depends on HID
+	depends on USB_HID
 	depends on LEDS_CLASS
 	default !EXPERT
 	help
···
 
 config HID_SAMSUNG
 	tristate "Samsung InfraRed remote control or keyboards"
-	depends on HID
+	depends on USB_HID
 	help
 	  Support for Samsung InfraRed remote control or keyboards.
 
+2-4
drivers/hid/hid-asus.c
···
 	if (drvdata->quirks & QUIRK_IS_MULTITOUCH)
 		drvdata->tp = &asus_i2c_tp;
 
-	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) &&
-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
 
 		if (intf->altsetting->desc.bInterfaceNumber == T100_TPAD_INTF) {
···
 			drvdata->tp = &asus_t100chi_tp;
 	}
 
-	if ((drvdata->quirks & QUIRK_MEDION_E1239T) &&
-	    hid_is_using_ll_driver(hdev, &usb_hid_driver)) {
+	if ((drvdata->quirks & QUIRK_MEDION_E1239T) && hid_is_usb(hdev)) {
 		struct usb_host_interface *alt =
 			to_usb_interface(hdev->dev.parent)->altsetting;
 
+1-1
drivers/hid/hid-bigbenff.c
···
 		struct bigben_device, worker);
 	struct hid_field *report_field = bigben->report->field[0];
 
-	if (bigben->removed)
+	if (bigben->removed || !report_field)
 		return;
 
 	if (bigben->work_led) {
+3
drivers/hid/hid-chicony.c
···
 {
 	int ret;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	hdev->quirks |= HID_QUIRK_INPUT_PER_APP;
 	ret = hid_parse(hdev);
 	if (ret) {
···
 
 static int is_not_elan_touchpad(struct hid_device *hdev)
 {
-	if (hdev->bus == BUS_USB) {
+	if (hid_is_usb(hdev)) {
 		struct usb_interface *intf = to_usb_interface(hdev->dev.parent);
 
 		return (intf->altsetting->desc.bInterfaceNumber !=
+3
drivers/hid/hid-elo.c
···
 	int ret;
 	struct usb_device *udev;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
+3
drivers/hid/hid-ft260.c
···
 	struct ft260_get_chip_version_report version;
 	int ret;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);
 	if (!dev)
 		return -ENOMEM;
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-isku.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-kone.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-koneplus.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-konepure.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-kovaplus.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-lua.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-pyra.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-ryos.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-savu.c
···
 {
 	int retval;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	retval = hid_parse(hdev);
 	if (retval) {
 		hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-samsung.c
···
 	int ret;
 	unsigned int cmask = HID_CONNECT_DEFAULT;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	ret = hid_parse(hdev);
 	if (ret) {
 		hid_err(hdev, "parse failed\n");
+18-6
drivers/hid/hid-sony.c
···
 	sc->quirks = quirks;
 	hid_set_drvdata(hdev, sc);
 	sc->hdev = hdev;
-	usbdev = to_usb_device(sc->hdev->dev.parent->parent);
 
 	ret = hid_parse(hdev);
 	if (ret) {
···
 	 */
 	if (!(hdev->claimed & HID_CLAIMED_INPUT)) {
 		hid_err(hdev, "failed to claim input\n");
-		hid_hw_stop(hdev);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto err;
 	}
 
 	if (sc->quirks & (GHL_GUITAR_PS3WIIU | GHL_GUITAR_PS4)) {
+		if (!hid_is_usb(hdev)) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		usbdev = to_usb_device(sc->hdev->dev.parent->parent);
+
 		sc->ghl_urb = usb_alloc_urb(0, GFP_ATOMIC);
-		if (!sc->ghl_urb)
-			return -ENOMEM;
+		if (!sc->ghl_urb) {
+			ret = -ENOMEM;
+			goto err;
+		}
 
 		if (sc->quirks & GHL_GUITAR_PS3WIIU)
 			ret = ghl_init_urb(sc, usbdev, ghl_ps3wiiu_magic_data,
···
 				ARRAY_SIZE(ghl_ps4_magic_data));
 		if (ret) {
 			hid_err(hdev, "error preparing URB\n");
-			return ret;
+			goto err;
 		}
 
 		timer_setup(&sc->ghl_poke_timer, ghl_magic_poke, 0);
···
 			  jiffies + GHL_GUITAR_POKE_INTERVAL*HZ);
 	}
 
+	return ret;
+
+err:
+	hid_hw_stop(hdev);
 	return ret;
 }
+3
drivers/hid/hid-thrustmaster.c
···
 	int ret = 0;
 	struct tm_wheel *tm_wheel = NULL;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	ret = hid_parse(hdev);
 	if (ret) {
 		hid_err(hdev, "parse failed with error %d\n", ret);
+1-1
drivers/hid/hid-u2fzero.c
···
 	unsigned int minor;
 	int ret;
 
-	if (!hid_is_using_ll_driver(hdev, &usb_hid_driver))
+	if (!hid_is_usb(hdev))
 		return -EINVAL;
 
 	dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);
+3
drivers/hid/hid-uclogic-core.c
···
 	struct uclogic_drvdata *drvdata = NULL;
 	bool params_initialized = false;
 
+	if (!hid_is_usb(hdev))
+		return -EINVAL;
+
 	/*
 	 * libinput requires the pad interface to be on a different node
 	 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.
···
 	if (!(ipd & REG_INT_MBRF))
 		return;
 
-	/* ack interrupt */
-	i2c_writel(i2c, REG_INT_MBRF, REG_IPD);
+	/* ack interrupt (read also produces a spurious START flag, clear it too) */
+	i2c_writel(i2c, REG_INT_MBRF | REG_INT_START, REG_IPD);
 
 	/* Can only handle a maximum of 32 bytes at a time */
 	if (len > 32)
+38-7
drivers/i2c/busses/i2c-stm32f7.c
···
 {
 	struct stm32f7_i2c_dev *i2c_dev = data;
 	struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg;
+	struct stm32_i2c_dma *dma = i2c_dev->dma;
 	void __iomem *base = i2c_dev->base;
 	u32 status, mask;
 	int ret = IRQ_HANDLED;
···
 		dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n",
 			__func__, f7_msg->addr);
 		writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR);
+		if (i2c_dev->use_dma) {
+			stm32f7_i2c_disable_dma_req(i2c_dev);
+			dmaengine_terminate_async(dma->chan_using);
+		}
 		f7_msg->result = -ENXIO;
 	}
···
 		/* Clear STOP flag */
 		writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR);
 
-		if (i2c_dev->use_dma) {
+		if (i2c_dev->use_dma && !f7_msg->result) {
 			ret = IRQ_WAKE_THREAD;
 		} else {
 			i2c_dev->master_mode = false;
···
 		if (f7_msg->stop) {
 			mask = STM32F7_I2C_CR2_STOP;
 			stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask);
-		} else if (i2c_dev->use_dma) {
+		} else if (i2c_dev->use_dma && !f7_msg->result) {
 			ret = IRQ_WAKE_THREAD;
 		} else if (f7_msg->smbus) {
 			stm32f7_i2c_smbus_rep_start(i2c_dev);
···
 		if (!ret) {
 			dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__);
 			stm32f7_i2c_disable_dma_req(i2c_dev);
-			dmaengine_terminate_all(dma->chan_using);
+			dmaengine_terminate_async(dma->chan_using);
 			f7_msg->result = -ETIMEDOUT;
 		}
···
 	/* Disable dma */
 	if (i2c_dev->use_dma) {
 		stm32f7_i2c_disable_dma_req(i2c_dev);
-		dmaengine_terminate_all(dma->chan_using);
+		dmaengine_terminate_async(dma->chan_using);
 	}
 
 	i2c_dev->master_mode = false;
···
 	time_left = wait_for_completion_timeout(&i2c_dev->complete,
 						i2c_dev->adap.timeout);
 	ret = f7_msg->result;
+	if (ret) {
+		if (i2c_dev->use_dma)
+			dmaengine_synchronize(dma->chan_using);
+
+		/*
+		 * It is possible that some unsent data have already been
+		 * written into TXDR. To avoid sending old data in a
+		 * further transfer, flush TXDR in case of any error
+		 */
+		writel_relaxed(STM32F7_I2C_ISR_TXE,
+			       i2c_dev->base + STM32F7_I2C_ISR);
+		goto pm_free;
+	}
 
 	if (!time_left) {
 		dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n",
 			i2c_dev->msg->addr);
 		if (i2c_dev->use_dma)
-			dmaengine_terminate_all(dma->chan_using);
+			dmaengine_terminate_sync(dma->chan_using);
+		stm32f7_i2c_wait_free_bus(i2c_dev);
 		ret = -ETIMEDOUT;
 	}
···
 	timeout = wait_for_completion_timeout(&i2c_dev->complete,
 					      i2c_dev->adap.timeout);
 	ret = f7_msg->result;
-	if (ret)
+	if (ret) {
+		if (i2c_dev->use_dma)
+			dmaengine_synchronize(dma->chan_using);
+
+		/*
+		 * It is possible that some unsent data have already been
+		 * written into TXDR. To avoid sending old data in a
+		 * further transfer, flush TXDR in case of any error
+		 */
+		writel_relaxed(STM32F7_I2C_ISR_TXE,
+			       i2c_dev->base + STM32F7_I2C_ISR);
 		goto pm_free;
+	}
 
 	if (!timeout) {
 		dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr);
 		if (i2c_dev->use_dma)
-			dmaengine_terminate_all(dma->chan_using);
+			dmaengine_terminate_sync(dma->chan_using);
+		stm32f7_i2c_wait_free_bus(i2c_dev);
 		ret = -ETIMEDOUT;
 		goto pm_free;
 	}
···
 config MTD_NAND_DENALI_DT
 	tristate "Denali NAND controller as a DT device"
 	select MTD_NAND_DENALI
-	depends on HAS_DMA && HAVE_CLK && OF
+	depends on HAS_DMA && HAVE_CLK && OF && HAS_IOMEM
 	help
 	  Enable the driver for NAND flash on platforms using a Denali NAND
 	  controller as a DT device.
+28-8
drivers/mtd/nand/raw/fsmc_nand.c
···
 #include <linux/clk.h>
 #include <linux/completion.h>
+#include <linux/delay.h>
 #include <linux/dmaengine.h>
 #include <linux/dma-direction.h>
 #include <linux/dma-mapping.h>
···
 #define FSMC_NAND_BANK_SZ	0x20

 #define FSMC_BUSY_WAIT_TIMEOUT	(1 * HZ)
+
+/*
+ * According to SPEAr300 Reference Manual (RM0082)
+ * TOUDEL = 7ns (Output delay from the flip-flops to the board)
+ * TINDEL = 5ns (Input delay from the board to the flipflop)
+ */
+#define TOUTDEL	7000
+#define TINDEL	5000

 struct fsmc_nand_timings {
 	u8 tclr;
···
 {
 	unsigned long hclk = clk_get_rate(host->clk);
 	unsigned long hclkn = NSEC_PER_SEC / hclk;
-	u32 thiz, thold, twait, tset;
+	u32 thiz, thold, twait, tset, twait_min;

 	if (sdrt->tRC_min < 30000)
 		return -EOPNOTSUPP;
···
 	else if (tims->thold > FSMC_THOLD_MASK)
 		tims->thold = FSMC_THOLD_MASK;

-	twait = max(sdrt->tRP_min, sdrt->tWP_min);
-	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
-	if (tims->twait == 0)
-		tims->twait = 1;
-	else if (tims->twait > FSMC_TWAIT_MASK)
-		tims->twait = FSMC_TWAIT_MASK;
-
 	tset = max(sdrt->tCS_min - sdrt->tWP_min,
 		   sdrt->tCEA_max - sdrt->tREA_max);
 	tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1;
···
 		tims->tset = 1;
 	else if (tims->tset > FSMC_TSET_MASK)
 		tims->tset = FSMC_TSET_MASK;
+
+	/*
+	 * According to SPEAr300 Reference Manual (RM0082) which gives more
+	 * information related to FSMSC timings than the SPEAr600 one (RM0305),
+	 * twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL
+	 */
+	twait_min = sdrt->tCEA_max - ((tims->tset + 1) * hclkn * 1000)
+		    + TOUTDEL + TINDEL;
+	twait = max3(sdrt->tRP_min, sdrt->tWP_min, twait_min);
+
+	tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
+	if (tims->twait == 0)
+		tims->twait = 1;
+	else if (tims->twait > FSMC_TWAIT_MASK)
+		tims->twait = FSMC_TWAIT_MASK;

 	return 0;
 }
···
 				instr->ctx.waitrdy.timeout_ms);
 			break;
 		}
+
+		if (instr->delay_ns)
+			ndelay(instr->delay_ns);
 	}

 	return ret;
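The twait constraint introduced above can be exercised in isolation; a minimal userspace sketch with `DIV_ROUND_UP`/`max3` re-implemented (the kernel macros are unavailable here) and hypothetical timing values in picoseconds:

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define MAX3(a, b, c)	((a) > (b) ? ((a) > (c) ? (a) : (c)) \
				   : ((b) > (c) ? (b) : (c)))

/* Hypothetical subset of struct nand_sdr_timings, values in ps */
struct sdr_timings {
	unsigned int tRP_min;
	unsigned int tWP_min;
	unsigned int tCEA_max;
};

/*
 * Sketch of the twait computation from the patch:
 * twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL (all in ps),
 * then converted to HCLK cycles; hclkn is the HCLK period in ns.
 */
static unsigned int compute_twait_field(const struct sdr_timings *t,
					unsigned int tset_field,
					unsigned long hclkn)
{
	unsigned int toutdel = 7000;	/* ps, per RM0082 */
	unsigned int tindel = 5000;	/* ps, per RM0082 */
	unsigned int twait_min, twait, field;

	twait_min = t->tCEA_max - ((tset_field + 1) * hclkn * 1000)
		    + toutdel + tindel;
	twait = MAX3(t->tRP_min, t->tWP_min, twait_min);

	field = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
	if (field == 0)
		field = 1;
	return field;
}
```

With a 6 ns HCLK period and tCEA = 25 ns, the minimum-twait term dominates the tRP/tWP terms, which is exactly the case the patch fixes.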
···
 	u16 reg;
 	int err;

+	/* The 88e6250 family does not have the PHY detect bit. Instead,
+	 * report whether the port is internal.
+	 */
+	if (chip->info->family == MV88E6XXX_FAMILY_6250)
+		return port < chip->info->num_internal_phys;
+
 	err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
 	if (err) {
 		dev_err(chip->dev,
···
 {
 	struct mv88e6xxx_chip *chip = ds->priv;
 	struct mv88e6xxx_port *p;
-	int err;
+	int err = 0;

 	p = &chip->ports[port];

-	/* FIXME: is this the correct test? If we're in fixed mode on an
-	 * internal port, why should we process this any different from
-	 * PHY mode? On the other hand, the port may be automedia between
-	 * an internal PHY and the serdes...
-	 */
-	if ((mode == MLO_AN_PHY) && mv88e6xxx_phy_is_internal(ds, port))
-		return;
-
 	mv88e6xxx_reg_lock(chip);
-	/* In inband mode, the link may come up at any time while the link
-	 * is not forced down. Force the link down while we reconfigure the
-	 * interface mode.
-	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
-		chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN);

-	err = mv88e6xxx_port_config_interface(chip, port, state->interface);
-	if (err && err != -EOPNOTSUPP)
-		goto err_unlock;
+	if (mode != MLO_AN_PHY || !mv88e6xxx_phy_is_internal(ds, port)) {
+		/* In inband mode, the link may come up at any time while the
+		 * link is not forced down. Force the link down while we
+		 * reconfigure the interface mode.
+		 */
+		if (mode == MLO_AN_INBAND &&
+		    p->interface != state->interface &&
+		    chip->info->ops->port_set_link)
+			chip->info->ops->port_set_link(chip, port,
+						       LINK_FORCED_DOWN);

-	err = mv88e6xxx_serdes_pcs_config(chip, port, mode, state->interface,
-					  state->advertising);
-	/* FIXME: we should restart negotiation if something changed - which
-	 * is something we get if we convert to using phylinks PCS operations.
-	 */
-	if (err > 0)
-		err = 0;
+		err = mv88e6xxx_port_config_interface(chip, port,
+						      state->interface);
+		if (err && err != -EOPNOTSUPP)
+			goto err_unlock;
+
+		err = mv88e6xxx_serdes_pcs_config(chip, port, mode,
+						  state->interface,
+						  state->advertising);
+		/* FIXME: we should restart negotiation if something changed -
+		 * which is something we get if we convert to using phylinks
+		 * PCS operations.
+		 */
+		if (err > 0)
+			err = 0;
+	}

 	/* Undo the forced down state above after completing configuration
-	 * irrespective of its state on entry, which allows the link to come up.
+	 * irrespective of its state on entry, which allows the link to come
+	 * up in the in-band case where there is no separate SERDES. Also
+	 * ensure that the link can come up if the PPU is in use and we are
+	 * in PHY mode (we treat the PPU as an effective in-band mechanism.)
 	 */
-	if (mode == MLO_AN_INBAND && p->interface != state->interface &&
-	    chip->info->ops->port_set_link)
+	if (chip->info->ops->port_set_link &&
+	    ((mode == MLO_AN_INBAND && p->interface != state->interface) ||
+	     (mode == MLO_AN_PHY && mv88e6xxx_port_ppu_updates(chip, port))))
 		chip->info->ops->port_set_link(chip, port, LINK_UNFORCED);

 	p->interface = state->interface;
···
 	ops = chip->info->ops;

 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Force the link down if we know the port may not be automatically
+	 * updated by the switch or if we are using fixed-link mode.
 	 */
-	if (((!mv88e6xxx_phy_is_internal(ds, port) &&
-	      !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
 	     mode == MLO_AN_FIXED) && ops->port_sync_link)
 		err = ops->port_sync_link(chip, port, mode, false);
 	mv88e6xxx_reg_unlock(chip);
···
 	}

 	ops = chip->info->ops;

 	mv88e6xxx_reg_lock(chip);
-	/* Internal PHYs propagate their configuration directly to the MAC.
-	 * External PHYs depend on whether the PPU is enabled for this port.
+	/* Configure and force the link up if we know that the port may not
+	 * automatically updated by the switch or if we are using fixed-link
+	 * mode.
 	 */
-	if ((!mv88e6xxx_phy_is_internal(ds, port) &&
-	     !mv88e6xxx_port_ppu_updates(chip, port)) ||
+	if (!mv88e6xxx_port_ppu_updates(chip, port) ||
 	    mode == MLO_AN_FIXED) {
 		/* FIXME: for an automedia port, should we force the link
 		 * down here - what if the link comes up due to "other" media
+7-1
drivers/net/dsa/mv88e6xxx/serdes.c
···
 			     bool up)
 {
 	u8 cmode = chip->ports[port].cmode;
-	int err = 0;
+	int err;

 	switch (cmode) {
 	case MV88E6XXX_PORT_STS_CMODE_SGMII:
···
 	case MV88E6XXX_PORT_STS_CMODE_XAUI:
 	case MV88E6XXX_PORT_STS_CMODE_RXAUI:
 		err = mv88e6390_serdes_power_10g(chip, lane, up);
+		break;
+	default:
+		err = -EINVAL;
 		break;
 	}
···
 	case MV88E6393X_PORT_STS_CMODE_5GBASER:
 	case MV88E6393X_PORT_STS_CMODE_10GBASER:
 		err = mv88e6390_serdes_power_10g(chip, lane, on);
+		break;
+	default:
+		err = -EINVAL;
 		break;
 	}
···
 }

 /**
+ * i40e_sync_vf_state
+ * @vf: pointer to the VF info
+ * @state: VF state
+ *
+ * Called from a VF message to synchronize the service with a potential
+ * VF reset state
+ **/
+static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state)
+{
+	int i;
+
+	/* When handling some messages, it needs VF state to be set.
+	 * It is possible that this flag is cleared during VF reset,
+	 * so there is a need to wait until the end of the reset to
+	 * handle the request message correctly.
+	 */
+	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
+		if (test_bit(state, &vf->vf_states))
+			return true;
+		usleep_range(10000, 20000);
+	}
+
+	return test_bit(state, &vf->vf_states);
+}
+
+/**
  * i40e_vc_get_version_msg
  * @vf: pointer to the VF info
  * @msg: pointer to the msg buffer
···
 	size_t len = 0;
 	int ret;

-	if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	bool allmulti = false;
 	bool alluni = false;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}
···
 	struct i40e_vsi *vsi;
 	u16 num_qps_all = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	u8 cur_pairs = vf->num_queue_pairs;
 	struct i40e_pf *pf = vf->pf;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states))
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE))
 		return -EINVAL;

 	if (req_pairs > I40E_MAX_VF_QUEUES) {
···

 	memset(&stats, 0, sizeof(struct i40e_eth_stats));

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
 	}
···
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	i40e_status ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) {
 		ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	i40e_status aq_ret = 0;
 	int i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto error_param;
···
 	struct i40e_vsi *vsi = NULL;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) ||
-	    (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+	    vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	u16 i;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) ||
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) ||
 	    !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) ||
-	    (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+	    vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	int len = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	struct i40e_hw *hw = &pf->hw;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	struct i40e_vsi *vsi;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	int i, ret;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···
 	i40e_status aq_ret = 0;
 	int i, ret;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err_out;
 	}
···
 	i40e_status aq_ret = 0;
 	u64 speed = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
···

 	/* set this flag only after making sure all inputs are sane */
 	vf->adq_enabled = true;
-	/* num_req_queues is set when user changes number of queues via ethtool
-	 * and this causes issue for default VSI(which depends on this variable)
-	 * when ADq is enabled, hence reset it.
-	 */
-	vf->num_req_queues = 0;

 	/* reset the VF in order to allocate resources */
 	i40e_vc_reset_vf(vf, true);
···
 	struct i40e_pf *pf = vf->pf;
 	i40e_status aq_ret = 0;

-	if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+	if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) {
 		aq_ret = I40E_ERR_PARAM;
 		goto err;
 	}
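The i40e_sync_vf_state() helper above is a bounded poll: retry a predicate a fixed number of times (the driver sleeps 10-20 ms between tries with usleep_range()), then make one final check. A userspace sketch of that control flow, with all names hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

#define SYNC_WAIT_COUNT 1000	/* stand-in for I40E_VF_STATE_WAIT_COUNT */

/* demo predicate: becomes true on the 4th poll */
static bool true_after_three(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) > 3;
}

/* demo predicate: never becomes true */
static bool never_true(void *ctx)
{
	(void)ctx;
	return false;
}

/* Poll the predicate up to SYNC_WAIT_COUNT times, then check once more,
 * mirroring the i40e_sync_vf_state() shape (minus the sleep).
 */
static bool sync_state(bool (*state_set)(void *ctx), void *ctx)
{
	int i;

	for (i = 0; i < SYNC_WAIT_COUNT; i++) {
		if (state_set(ctx))
			return true;
		/* in-kernel: usleep_range(10000, 20000); */
	}

	return state_set(ctx);
}
```

The final check after the loop matters: the state may be set between the last in-loop test and the return.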
···
 	if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
 		return -EINVAL;

-	new_tx_count = clamp_t(u32, ring->tx_pending,
-			       IAVF_MIN_TXD,
-			       IAVF_MAX_TXD);
-	new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (ring->tx_pending > IAVF_MAX_TXD ||
+	    ring->tx_pending < IAVF_MIN_TXD ||
+	    ring->rx_pending > IAVF_MAX_RXD ||
+	    ring->rx_pending < IAVF_MIN_RXD) {
+		netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n",
+			   ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD,
+			   IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+		return -EINVAL;
+	}

-	new_rx_count = clamp_t(u32, ring->rx_pending,
-			       IAVF_MIN_RXD,
-			       IAVF_MAX_RXD);
-	new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_tx_count != ring->tx_pending)
+		netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n",
+			    new_tx_count);
+
+	new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE);
+	if (new_rx_count != ring->rx_pending)
+		netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n",
+			    new_rx_count);

 	/* if nothing to do return success */
 	if ((new_tx_count == adapter->tx_desc_count) &&
-	    (new_rx_count == adapter->rx_desc_count))
+	    (new_rx_count == adapter->rx_desc_count)) {
+		netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n");
 		return 0;
+	}

-	adapter->tx_desc_count = new_tx_count;
-	adapter->rx_desc_count = new_rx_count;
+	if (new_tx_count != adapter->tx_desc_count) {
+		netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n",
+			   adapter->tx_desc_count, new_tx_count);
+		adapter->tx_desc_count = new_tx_count;
+	}
+
+	if (new_rx_count != adapter->rx_desc_count) {
+		netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n",
+			   adapter->rx_desc_count, new_rx_count);
+		adapter->rx_desc_count = new_rx_count;
+	}

 	if (netif_running(netdev)) {
 		adapter->flags |= IAVF_FLAG_RESET_NEEDED;
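The new behaviour above rejects out-of-range requests instead of silently clamping them, then rounds up to the ring increment with the kernel-style `ALIGN()` (which requires a power-of-two alignment). A minimal sketch; the 32-descriptor multiple is an assumption for illustration:

```c
#include <assert.h>

/* kernel-style ALIGN(): round x up to the next multiple of power-of-two a */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

/* Validate then round a requested descriptor count, mirroring the
 * patched iavf flow (names hypothetical; the driver returns -EINVAL
 * and logs with netdev_err()).
 */
static int round_desc_count(unsigned int requested, unsigned int min,
			    unsigned int max, unsigned int multiple,
			    unsigned int *out)
{
	if (requested < min || requested > max)
		return -1;

	*out = ALIGN_UP(requested, multiple);	/* may exceed 'requested' */
	return 0;
}
```

Validating before rounding is the point of the patch: clamping first hid requests that were genuinely out of range.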
+1
drivers/net/ethernet/intel/iavf/iavf_main.c
···
 	}

 	pci_set_master(adapter->pdev);
+	pci_restore_msi_state(adapter->pdev);

 	if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) {
 		dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+12-6
drivers/net/ethernet/intel/ice/ice_dcb_nl.c
···

 	new_cfg->etscfg.maxtcs = pf->hw.func_caps.common_cap.maxtc;

+	if (!bwcfg)
+		new_cfg->etscfg.tcbwtable[0] = 100;
+
 	if (!bwrec)
 		new_cfg->etsrec.tcbwtable[0] = 100;
···
 	if (mode == pf->dcbx_cap)
 		return ICE_DCB_NO_HW_CHG;

-	pf->dcbx_cap = mode;
 	qos_cfg = &pf->hw.port_info->qos_cfg;
-	if (mode & DCB_CAP_DCBX_VER_CEE) {
-		if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
-			return ICE_DCB_NO_HW_CHG;
+
+	/* DSCP configuration is not DCBx negotiated */
+	if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP)
+		return ICE_DCB_NO_HW_CHG;
+
+	pf->dcbx_cap = mode;
+
+	if (mode & DCB_CAP_DCBX_VER_CEE)
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE;
-	} else {
+	else
 		qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE;
-	}

 	dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode);
 	return ICE_DCB_HW_CHG_RST;
+2-2
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
···
 	bool is_tun = tun == ICE_FD_HW_SEG_TUN;
 	int err;

-	if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num))
+	if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num, TNL_ALL))
 		continue;
 	err = ice_fdir_write_fltr(pf, input, add, is_tun);
 	if (err)
···
 	}

 	/* return error if not an update and no available filters */
-	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port) ? 2 : 1;
+	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port, TNL_ALL) ? 2 : 1;
 	if (!ice_fdir_find_fltr_by_idx(hw, fsp->location) &&
 	    ice_fdir_num_avail_fltr(hw, pf->vsi[vsi->idx]) < fltrs_needed) {
 		dev_err(dev, "Failed to add filter. The maximum number of flow director filters has been reached.\n");
+1-1
drivers/net/ethernet/intel/ice/ice_fdir.c
···
 		memcpy(pkt, ice_fdir_pkt[idx].pkt, ice_fdir_pkt[idx].pkt_len);
 		loc = pkt;
 	} else {
-		if (!ice_get_open_tunnel_port(hw, &tnl_port))
+		if (!ice_get_open_tunnel_port(hw, &tnl_port, TNL_ALL))
 			return ICE_ERR_DOES_NOT_EXIST;
 		if (!ice_fdir_pkt[idx].tun_pkt)
 			return ICE_ERR_PARAM;
+5-2
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
···
  * ice_get_open_tunnel_port - retrieve an open tunnel port
  * @hw: pointer to the HW structure
  * @port: returns open port
+ * @type: type of tunnel, can be TNL_LAST if it doesn't matter
  */
 bool
-ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port)
+ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port,
+			 enum ice_tunnel_type type)
 {
 	bool res = false;
 	u16 i;
···
 	mutex_lock(&hw->tnl_lock);

 	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
-		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port) {
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port &&
+		    (type == TNL_LAST || type == hw->tnl.tbl[i].type)) {
 			*port = hw->tnl.tbl[i].port;
 			res = true;
 			break;
···
 		netif_carrier_on(vsi->netdev);
 	}

+	/* clear this now, and the first stats read will be used as baseline */
+	vsi->stat_offsets_loaded = false;
+
 	ice_service_task_schedule(pf);

 	return 0;
···
 /**
  * ice_update_vsi_tx_ring_stats - Update VSI Tx ring stats counters
  * @vsi: the VSI to be updated
+ * @vsi_stats: the stats struct to be updated
  * @rings: rings to work on
  * @count: number of rings
  */
 static void
-ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, struct ice_tx_ring **rings,
-			     u16 count)
+ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi,
+			     struct rtnl_link_stats64 *vsi_stats,
+			     struct ice_tx_ring **rings, u16 count)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
 	u16 i;

 	for (i = 0; i < count; i++) {
···
  */
 static void ice_update_vsi_ring_stats(struct ice_vsi *vsi)
 {
-	struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats;
+	struct rtnl_link_stats64 *vsi_stats;
 	u64 pkts, bytes;
 	int i;

-	/* reset netdev stats */
-	vsi_stats->tx_packets = 0;
-	vsi_stats->tx_bytes = 0;
-	vsi_stats->rx_packets = 0;
-	vsi_stats->rx_bytes = 0;
+	vsi_stats = kzalloc(sizeof(*vsi_stats), GFP_ATOMIC);
+	if (!vsi_stats)
+		return;

 	/* reset non-netdev (extended) stats */
 	vsi->tx_restart = 0;
···
 	rcu_read_lock();

 	/* update Tx rings counters */
-	ice_update_vsi_tx_ring_stats(vsi, vsi->tx_rings, vsi->num_txq);
+	ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->tx_rings,
+				     vsi->num_txq);

 	/* update Rx rings counters */
 	ice_for_each_rxq(vsi, i) {
···

 	/* update XDP Tx rings counters */
 	if (ice_is_xdp_ena_vsi(vsi))
-		ice_update_vsi_tx_ring_stats(vsi, vsi->xdp_rings,
+		ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->xdp_rings,
 					     vsi->num_xdp_txq);

 	rcu_read_unlock();
+
+	vsi->net_stats.tx_packets = vsi_stats->tx_packets;
+	vsi->net_stats.tx_bytes = vsi_stats->tx_bytes;
+	vsi->net_stats.rx_packets = vsi_stats->rx_packets;
+	vsi->net_stats.rx_bytes = vsi_stats->rx_bytes;
+
+	kfree(vsi_stats);
 }

 /**
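The ice_main.c change above stops accumulating directly into vsi->net_stats (which was briefly zeroed and therefore visible to concurrent readers in an inconsistent state); instead it sums into a heap-allocated scratch struct and publishes the totals at the end. A userspace sketch of the pattern, with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

struct net_counters {
	unsigned long long tx_packets;
	unsigned long long tx_bytes;
};

/* Accumulate per-ring counters into a scratch struct and only then copy
 * into *published, so a reader of *published never observes the
 * re-zeroed intermediate state (the driver uses kzalloc/kfree).
 */
static int update_stats(struct net_counters *published,
			const struct net_counters *rings, int nrings)
{
	struct net_counters *tmp = calloc(1, sizeof(*tmp));
	int i;

	if (!tmp)
		return -1;

	for (i = 0; i < nrings; i++) {
		tmp->tx_packets += rings[i].tx_packets;
		tmp->tx_bytes += rings[i].tx_bytes;
	}

	/* publish the finished totals in one pass */
	published->tx_packets = tmp->tx_packets;
	published->tx_bytes = tmp->tx_bytes;
	free(tmp);
	return 0;
}
```

The copy at the end is still field-by-field, but each published field only ever moves from one valid total to the next, never through zero.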
+14-7
drivers/net/ethernet/intel/ice/ice_switch.c
···
  * ice_find_recp - find a recipe
  * @hw: pointer to the hardware structure
  * @lkup_exts: extension sequence to match
+ * @tun_type: type of recipe tunnel
  *
  * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
  */
-static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+static u16
+ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts,
+	      enum ice_sw_tunnel_type tun_type)
 {
 	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
···
 		}
 		/* If for "i"th recipe the found was never set to false
 		 * then it means we found our match
+		 * Also tun type of recipe needs to be checked
 		 */
-		if (found)
+		if (found && recp[i].tun_type == tun_type)
 			return i; /* Return the recipe ID */
 		}
 	}
···
 	}

 	/* Look for a recipe which matches our requested fv / mask list */
-	*rid = ice_find_recp(hw, lkup_exts);
+	*rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type);
 	if (*rid < ICE_MAX_NUM_RECIPES)
 		/* Success if found a recipe that match the existing criteria */
 		goto err_unroll;

+	rm->tun_type = rinfo->tun_type;
 	/* Recipe we need does not exist, add a recipe */
 	status = ice_add_sw_recipe(hw, rm, profiles);
 	if (status)
···

 	switch (tun_type) {
 	case ICE_SW_TUN_VXLAN:
-	case ICE_SW_TUN_GENEVE:
-		if (!ice_get_open_tunnel_port(hw, &open_port))
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_VXLAN))
 			return ICE_ERR_CFG;
 		break;
-
+	case ICE_SW_TUN_GENEVE:
+		if (!ice_get_open_tunnel_port(hw, &open_port, TNL_GENEVE))
+			return ICE_ERR_CFG;
+		break;
 	default:
 		/* Nothing needs to be done for this tunnel type */
 		return 0;
···
 	if (status)
 		return status;

-	rid = ice_find_recp(hw, &lkup_exts);
+	rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type);
 	/* If did not find a recipe that match the existing criteria */
 	if (rid == ICE_MAX_NUM_RECIPES)
 		return ICE_ERR_PARAM;
···
 	ice_vc_set_default_allowlist(vf);

 	ice_vf_fdir_exit(vf);
+	ice_vf_fdir_init(vf);
 	/* clean VF control VSI when resetting VFs since it should be
 	 * setup only when VF creates its first FDIR rule.
 	 */
···
 	}

 	ice_vf_fdir_exit(vf);
+	ice_vf_fdir_init(vf);
 	/* clean VF control VSI when resetting VF since it should be setup
 	 * only when VF creates its first FDIR rule.
 	 */
···
 	ret = ice_eswitch_configure(pf);
 	if (ret)
 		goto err_unroll_sriov;
+
+	/* rearm global interrupts */
+	if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state))
+		ice_irq_dynamic_ena(hw, NULL, NULL);

 	return 0;
···

 	spin_lock_irqsave(&qdev->hw_lock, hw_flags);

-	err = ql_wait_for_drvr_lock(qdev);
-	if (err) {
-		err = ql_adapter_initialize(qdev);
-		if (err) {
-			netdev_err(ndev, "Unable to initialize adapter\n");
-			goto err_init;
-		}
-		netdev_err(ndev, "Releasing driver lock\n");
-		ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);
-	} else {
+	if (!ql_wait_for_drvr_lock(qdev)) {
 		netdev_err(ndev, "Could not acquire driver lock\n");
+		err = -ENODEV;
 		goto err_lock;
 	}
+
+	err = ql_adapter_initialize(qdev);
+	if (err) {
+		netdev_err(ndev, "Unable to initialize adapter\n");
+		goto err_init;
+	}
+	ql_sem_unlock(qdev, QL_DRVR_SEM_MASK);

 	spin_unlock_irqrestore(&qdev->hw_lock, hw_flags);
+1
drivers/net/phy/phylink.c
···
  * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan
  *
  * Handle a network device suspend event. There are several cases:
+ *
  * - If Wake-on-Lan is not active, we can bring down the link between
  *   the MAC and PHY by calling phylink_stop().
  * - If Wake-on-Lan is active, and being handled only by the PHY, we
+2
drivers/net/usb/cdc_ncm.c
···
 	min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32);

 	max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize));
+	if (max == 0)
+		max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */

 	/* some devices set dwNtbOutMaxSize too low for the above default */
 	min = min(min, max);
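The cdc_ncm guard above clamps the device-reported dwNtbOutMaxSize to the driver cap, and falls back to the cap entirely when the device reports 0 (which would otherwise collapse the min/max range). A minimal sketch; treating 32768 as the value of CDC_NCM_NTB_MAX_SIZE_TX is an assumption here:

```c
#include <assert.h>

#define NTB_MAX_SIZE_TX 32768u	/* assumed driver cap */

/* Clamp the device-reported max, falling back to the cap when the
 * device leaves dwNtbOutMaxSize at 0, mirroring the patch above.
 */
static unsigned int tx_max_size(unsigned int dev_max)
{
	unsigned int max = dev_max < NTB_MAX_SIZE_TX ? dev_max
						     : NTB_MAX_SIZE_TX;

	if (max == 0)
		max = NTB_MAX_SIZE_TX;	/* dwNtbOutMaxSize not set */
	return max;
}
```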
+7-6
drivers/net/vmxnet3/vmxnet3_drv.c
···

 #ifdef CONFIG_PCI_MSI
 	if (adapter->intr.type == VMXNET3_IT_MSIX) {
-		int i, nvec;
+		int i, nvec, nvec_allocated;

 		nvec = adapter->share_intr == VMXNET3_INTR_TXSHARE ?
 			1 : adapter->num_tx_queues;
···
 		for (i = 0; i < nvec; i++)
 			adapter->intr.msix_entries[i].entry = i;

-		nvec = vmxnet3_acquire_msix_vectors(adapter, nvec);
-		if (nvec < 0)
+		nvec_allocated = vmxnet3_acquire_msix_vectors(adapter, nvec);
+		if (nvec_allocated < 0)
 			goto msix_err;

 		/* If we cannot allocate one MSIx vector per queue
 		 * then limit the number of rx queues to 1
 		 */
-		if (nvec == VMXNET3_LINUX_MIN_MSIX_VECT) {
+		if (nvec_allocated == VMXNET3_LINUX_MIN_MSIX_VECT &&
+		    nvec != VMXNET3_LINUX_MIN_MSIX_VECT) {
 			if (adapter->share_intr != VMXNET3_INTR_BUDDYSHARE
 			    || adapter->num_rx_queues != 1) {
 				adapter->share_intr = VMXNET3_INTR_TXSHARE;
···
 			}
 		}

-		adapter->intr.num_intrs = nvec;
+		adapter->intr.num_intrs = nvec_allocated;
 		return;

 msix_err:
 		/* If we cannot allocate MSIx vectors use only one rx queue */
 		dev_info(&adapter->pdev->dev,
 			 "Failed to enable MSI-X, error %d. "
-			 "Limiting #rx queues to 1, try MSI.\n", nvec);
+			 "Limiting #rx queues to 1, try MSI.\n", nvec_allocated);

 		adapter->intr.type = VMXNET3_IT_MSI;
 	}
···
 bool ipc_imem_ul_write_td(struct iosm_imem *ipc_imem)
 {
 	struct ipc_mem_channel *channel;
+	bool hpda_ctrl_pending = false;
 	struct sk_buff_head *ul_list;
 	bool hpda_pending = false;
-	bool forced_hpdu = false;
 	struct ipc_pipe *pipe;
 	int i;
···
 		ul_list = &channel->ul_list;

 		/* Fill the transfer descriptor with the uplink buffer info. */
-		hpda_pending |= ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+		if (!ipc_imem_check_wwan_ips(channel)) {
+			hpda_ctrl_pending |=
+				ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
 							pipe, ul_list);
-
-		/* forced HP update needed for non data channels */
-		if (hpda_pending && !ipc_imem_check_wwan_ips(channel))
-			forced_hpdu = true;
+		} else {
+			hpda_pending |=
+				ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+							pipe, ul_list);
+		}
 	}

-	if (forced_hpdu) {
+	/* forced HP update needed for non data channels */
+	if (hpda_ctrl_pending) {
 		hpda_pending = false;
 		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
 					      IPC_HP_UL_WRITE_TD);
···
 			"Modem link down. Exit run state worker.");
 		return;
 	}
+
+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
+		ipc_devlink_deinit(ipc_imem->ipc_devlink);

 	if (!ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg))
 		ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem);
···
 		ipc_port_deinit(ipc_imem->ipc_port);
 	}

-	if (ipc_imem->ipc_devlink)
+	if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag))
 		ipc_devlink_deinit(ipc_imem->ipc_devlink);

 	ipc_imem_device_ipc_uninit(ipc_imem);
···

 	ipc_imem->pci_device_id = device_id;

-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem->cp_version = 0;
 	ipc_imem->device_sleep = IPC_HOST_SLEEP_ENTER_SLEEP;
···

 		if (ipc_flash_link_establish(ipc_imem))
 			goto devlink_channel_fail;
+
+		set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag);
 	}
 	return ipc_imem;
 devlink_channel_fail:
+1-3
drivers/net/wwan/iosm/iosm_ipc_imem.h
···
 #define IOSM_CHIP_INFO_SIZE_MAX 100

 #define FULLY_FUNCTIONAL 0
+#define IOSM_DEVLINK_INIT 1

 /* List of the supported UL/DL pipes. */
 enum ipc_mem_pipes {
···
  *				process the irq actions.
  * @flag:			Flag to monitor the state of driver
  * @td_update_timer_suspended:	if true then td update timer suspend
- * @ev_cdev_write_pending:	0 means inform the IPC tasklet to pass
- *				the accumulated uplink buffers to CP.
  * @ev_mux_net_transmit_pending:0 means inform the IPC tasklet to pass
  * @reset_det_n:		Reset detect flag
  * @pcie_wake_n:		Pcie wake flag
···
 	u8 ev_irq_pending[IPC_IRQ_VECTORS];
 	unsigned long flag;
 	u8 td_update_timer_suspended:1,
-	   ev_cdev_write_pending:1,
 	   ev_mux_net_transmit_pending:1,
 	   reset_det_n:1,
 	   pcie_wake_n:1;
+1-6
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
···
 static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg,
 				  void *msg, size_t size)
 {
-	ipc_imem->ev_cdev_write_pending = false;
 	ipc_imem_ul_send(ipc_imem);

 	return 0;
···
 /* Through tasklet to do sio write. */
 static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem)
 {
-	if (ipc_imem->ev_cdev_write_pending)
-		return -1;
-
-	ipc_imem->ev_cdev_write_pending = true;
-
 	return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0,
 					NULL, 0, false);
 }
···
 	/* Release the pipe resources */
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
 	ipc_imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+	ipc_imem->nr_of_channels--;
 }

 void ipc_imem_sys_devlink_notify_rx(struct iosm_devlink *ipc_devlink,
···
 			DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Tablet Gen 2"),
 		},
 	},
+	{
+		.ident = "Microsoft Surface Go 3",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
+		},
+	},
 	{ }
 };
+12
drivers/platform/x86/lg-laptop.c
···
 	if (product && strlen(product) > 4)
 		switch (product[4]) {
 		case '5':
+			if (strlen(product) > 5)
+				switch (product[5]) {
+				case 'N':
+					year = 2021;
+					break;
+				case '0':
+					year = 2016;
+					break;
+				default:
+					year = 2022;
+				}
+			break;
 		case '6':
 			year = 2016;
 			break;
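The lg-laptop change above stops `case '5'` from falling through to the 2016 branch: the 6th character of the product string now disambiguates 2016/2021/2022 models. A standalone sketch of the decode; the 2017 fallback is an assumption, since the driver's real default lies outside this hunk:

```c
#include <assert.h>
#include <string.h>

/* Decode a model year from a product string, mirroring the patched
 * switch: product[4] selects the generation, and for '5' models
 * product[5] further distinguishes 2016 ('0'), 2021 ('N') and 2022.
 */
static int decode_year(const char *product)
{
	int year = 2017;	/* assumed default */

	if (product && strlen(product) > 4)
		switch (product[4]) {
		case '5':
			if (strlen(product) > 5)
				switch (product[5]) {
				case 'N':
					year = 2021;
					break;
				case '0':
					year = 2016;
					break;
				default:
					year = 2022;
				}
			break;
		case '6':
			year = 2016;
			break;
		}
	return year;
}
```

Note the added `break` is the functional fix: without it, every '5' model decoded as 2016 via fallthrough.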
···463463464464static int __init init_dtpm(void)465465{466466- struct dtpm_descr *dtpm_descr;467467-468466 pct = powercap_register_control_type(NULL, "dtpm", NULL);469467 if (IS_ERR(pct)) {470468 pr_err("Failed to register control type\n");471469 return PTR_ERR(pct);472470 }473473-474474- for_each_dtpm_table(dtpm_descr)475475- dtpm_descr->init();476471477472 return 0;478473}
+2-7
drivers/scsi/lpfc/lpfc_els.c
···50955095 /* NPort Recovery mode or node is just allocated */50965096 if (!lpfc_nlp_not_used(ndlp)) {50975097 /* A LOGO is completing and the node is in NPR state.50985098- * If this a fabric node that cleared its transport50995099- * registration, release the rpi.50985098+ * Just unregister the RPI because the node is still50995099+ * required.51005100 */51015101- spin_lock_irq(&ndlp->lock);51025102- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC;51035103- if (phba->sli_rev == LPFC_SLI_REV4)51045104- ndlp->nlp_flag |= NLP_RELEASE_RPI;51055105- spin_unlock_irq(&ndlp->lock);51065101 lpfc_unreg_rpi(vport, ndlp);51075102 } else {51085103 /* Indicate the node has already released, should
···15331533 tristate "LiteUART serial port support"15341534 depends on HAS_IOMEM15351535 depends on OF || COMPILE_TEST15361536- depends on LITEX15361536+ depends on LITEX || COMPILE_TEST15371537 select SERIAL_CORE15381538 help15391539 This driver is for the FPGA-based LiteUART serial controller from LiteX
···10751075 goto out;1076107610771077 if (!tty_io_error(tty)) {10781078+ if (uport->rs485.flags & SER_RS485_ENABLED) {10791079+ set &= ~TIOCM_RTS;10801080+ clear &= ~TIOCM_RTS;10811081+ }10821082+10781083 uart_update_mctrl(uport, set, clear);10791084 ret = 0;10801085 }···15541549{15551550 struct uart_state *state = container_of(port, struct uart_state, port);15561551 struct uart_port *uport = uart_port_check(state);15521552+ char *buf;1557155315581554 /*15591555 * At this point, we stop accepting input. To do this, we···15761570 */15771571 tty_port_set_suspended(port, 0);1578157215791579- uart_change_pm(state, UART_PM_STATE_OFF);15731573+ /*15741574+ * Free the transmit buffer.15751575+ */15761576+ spin_lock_irq(&uport->lock);15771577+ buf = state->xmit.buf;15781578+ state->xmit.buf = NULL;15791579+ spin_unlock_irq(&uport->lock);1580158015811581+ if (buf)15821582+ free_page((unsigned long)buf);15831583+15841584+ uart_change_pm(state, UART_PM_STATE_OFF);15811585}1582158615831587static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
+4-16
drivers/usb/cdns3/cdns3-gadget.c
···337337 cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs);338338}339339340340-static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req)341341-{342342- struct cdns3_endpoint *priv_ep = priv_req->priv_ep;343343- int current_trb = priv_req->start_trb;344344-345345- while (current_trb != priv_req->end_trb) {346346- cdns3_ep_inc_deq(priv_ep);347347- current_trb = priv_ep->dequeue;348348- }349349-350350- cdns3_ep_inc_deq(priv_ep);351351-}352352-353340/**354341 * cdns3_allow_enable_l1 - enable/disable permits to transition to L1.355342 * @priv_dev: Extended gadget object···1504151715051518 trb = priv_ep->trb_pool + priv_ep->dequeue;1506151915071507- /* Request was dequeued and TRB was changed to TRB_LINK. */15081508- if (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {15201520+ /* The TRB was changed as link TRB, and the request was handled at ep_dequeue */15211521+ while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) {15091522 trace_cdns3_complete_trb(priv_ep, trb);15101510- cdns3_move_deq_to_next_trb(priv_req);15231523+ cdns3_ep_inc_deq(priv_ep);15241524+ trb = priv_ep->trb_pool + priv_ep->dequeue;15111525 }1512152615131527 if (!request->stream_id) {
+3
drivers/usb/cdns3/cdnsp-mem.c
···987987988988 /* Set up the endpoint ring. */989989 pep->ring = cdnsp_ring_alloc(pdev, 2, ring_type, max_packet, mem_flags);990990+ if (!pep->ring)991991+ return -ENOMEM;992992+990993 pep->skip = false;991994992995 /* Fill the endpoint context */
···366366/* Must be called with xhci->lock held, releases and aquires lock back */367367static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags)368368{369369- u32 temp_32;369369+ struct xhci_segment *new_seg = xhci->cmd_ring->deq_seg;370370+ union xhci_trb *new_deq = xhci->cmd_ring->dequeue;371371+ u64 crcr;370372 int ret;371373372374 xhci_dbg(xhci, "Abort command ring\n");···377375378376 /*379377 * The control bits like command stop, abort are located in lower380380- * dword of the command ring control register. Limit the write381381- * to the lower dword to avoid corrupting the command ring pointer382382- * in case if the command ring is stopped by the time upper dword383383- * is written.378378+ * dword of the command ring control register.379379+ * Some controllers require all 64 bits to be written to abort the ring.380380+ * Make sure the upper dword is valid, pointing to the next command,381381+ * avoiding corrupting the command ring pointer in case the command ring382382+ * is stopped by the time the upper dword is written.384383 */385385- temp_32 = readl(&xhci->op_regs->cmd_ring);386386- writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);384384+ next_trb(xhci, NULL, &new_seg, &new_deq);385385+ if (trb_is_link(new_deq))386386+ next_trb(xhci, NULL, &new_seg, &new_deq);387387+388388+ crcr = xhci_trb_virt_to_dma(new_seg, new_deq);389389+ xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring);387390388391 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the389392 * completion of the Command Abort operation. If CRR is not negated in 5
-4
drivers/usb/typec/tcpm/tcpm.c
···41104110 tcpm_try_src(port) ? SRC_TRY41114111 : SNK_ATTACHED,41124112 0);41134113- else41144114- /* Wait for VBUS, but not forever */41154115- tcpm_set_state(port, PORT_RESET, PD_T_PS_SOURCE_ON);41164113 break;41174117-41184114 case SRC_TRY:41194115 port->try_src_count++;41204116 tcpm_set_cc(port, tcpm_rp_cc(port));
···232232}233233#endif /* CONFIG_VFIO_NOIOMMU */234234235235-/**235235+/*236236 * IOMMU driver registration237237 */238238int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops)···285285 unsigned long action, void *data);286286static void vfio_group_get(struct vfio_group *group);287287288288-/**288288+/*289289 * Container objects - containers are created when /dev/vfio/vfio is290290 * opened, but their lifecycle extends until the last user is done, so291291 * it's freed via kref. Must support container/group/device being···309309 kref_put(&container->kref, vfio_container_release);310310}311311312312-/**312312+/*313313 * Group objects - create, release, get, put, search314314 */315315static struct vfio_group *···488488 return group;489489}490490491491-/**491491+/*492492 * Device objects - create, release, get, put, search493493 */494494/* Device reference always implies a group reference */···595595 return ret;596596}597597598598-/**598598+/*599599 * Async device support600600 */601601static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev)···689689 return NOTIFY_OK;690690}691691692692-/**692692+/*693693 * VFIO driver API694694 */695695void vfio_init_group_dev(struct vfio_device *device, struct device *dev,···831831}832832EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev);833833834834-/**834834+/*835835 * Get a reference to the vfio_device for a device. Even if the836836 * caller thinks they own the device, they could be racing with a837837 * release call path, so we can't trust drvdata for the shortcut.···965965}966966EXPORT_SYMBOL_GPL(vfio_unregister_group_dev);967967968968-/**968968+/*969969 * VFIO base fd, /dev/vfio/vfio970970 */971971static long vfio_ioctl_check_extension(struct vfio_container *container,···11831183 .compat_ioctl = compat_ptr_ioctl,11841184};1185118511861186-/**11861186+/*11871187 * VFIO Group fd, /dev/vfio/$GROUP11881188 */11891189static void __vfio_group_unset_container(struct vfio_group *group)···15361536 .release = vfio_group_fops_release,15371537};1538153815391539-/**15391539+/*15401540 * VFIO Device fd15411541 */15421542static int vfio_device_fops_release(struct inode *inode, struct file *filep)···16111611 .mmap = vfio_device_fops_mmap,16121612};1613161316141614-/**16141614+/*16151615 * External user API, exported by symbols to be linked dynamically.16161616 *16171617 * The protocol includes:···16591659}16601660EXPORT_SYMBOL_GPL(vfio_group_get_external_user);1661166116621662-/**16621662+/*16631663 * External user API, exported by symbols to be linked dynamically.16641664 * The external user passes in a device pointer16651665 * to verify that:···17251725}17261726EXPORT_SYMBOL_GPL(vfio_external_check_extension);1727172717281728-/**17281728+/*17291729 * Sub-module support17301730 */17311731/*···22722272}22732273EXPORT_SYMBOL_GPL(vfio_group_iommu_domain);2274227422752275-/**22752275+/*22762276 * Module/class support22772277 */22782278static char *vfio_devnode(struct device *dev, umode_t *mode)
+9-5
drivers/video/console/vgacon.c
···366366 struct uni_pagedir *p;367367368368 /*369369- * We cannot be loaded as a module, therefore init is always 1,370370- * but vgacon_init can be called more than once, and init will371371- * not be 1.369369+ * We cannot be loaded as a module, therefore init will be 1370370+ * if we are the default console, however if we are a fallback371371+ * console, for example if fbcon has failed registration, then372372+ * init will be 0, so we need to make sure our boot parameters373373+ * have been copied to the console structure for vgacon_resize374374+ * ultimately called by vc_resize. Any subsequent calls to375375+ * vgacon_init init will have init set to 0 too.372376 */373377 c->vc_can_do_color = vga_can_do_color;378378+ c->vc_scan_lines = vga_scan_lines;379379+ c->vc_font.height = c->vc_cell_height = vga_video_font_height;374380375381 /* set dimensions manually if init != 0 since vc_resize() will fail */376382 if (init) {···385379 } else386380 vc_resize(c, vga_video_num_columns, vga_video_num_lines);387381388388- c->vc_scan_lines = vga_scan_lines;389389- c->vc_font.height = c->vc_cell_height = vga_video_font_height;390382 c->vc_complement_mask = 0x7700;391383 if (vga_512_chars)392384 c->vc_hi_font_mask = 0x0800;
+5-6
fs/cifs/connect.c
···15621562 /* fscache server cookies are based on primary channel only */15631563 if (!CIFS_SERVER_IS_CHAN(tcp_ses))15641564 cifs_fscache_get_client_cookie(tcp_ses);15651565+#ifdef CONFIG_CIFS_FSCACHE15661566+ else15671567+ tcp_ses->fscache = tcp_ses->primary_server->fscache;15681568+#endif /* CONFIG_CIFS_FSCACHE */1565156915661570 /* queue echo request delayed work */15671571 queue_delayed_work(cifsiod_wq, &tcp_ses->echo, tcp_ses->echo_interval);···30503046 cifs_dbg(VFS, "read only mount of RW share\n");30513047 /* no need to log a RW mount of a typical RW share */30523048 }30533053- /*30543054- * The cookie is initialized from volume info returned above.30553055- * Inside cifs_fscache_get_super_cookie it checks30563056- * that we do not get super cookie twice.30573057- */30583058- cifs_fscache_get_super_cookie(tcon);30593049 }3060305030613051 /*···34243426 */34253427 mount_put_conns(mnt_ctx);34263428 mount_get_dfs_conns(mnt_ctx);34293429+ set_root_ses(mnt_ctx);3427343034283431 full_path = build_unc_path_to_root(ctx, cifs_sb, true);34293432 if (IS_ERR(full_path))
+10-36
fs/cifs/fscache.c
···1616 * Key layout of CIFS server cache index object1717 */1818struct cifs_server_key {1919- struct {2020- uint16_t family; /* address family */2121- __be16 port; /* IP port */2222- } hdr;2323- union {2424- struct in_addr ipv4_addr;2525- struct in6_addr ipv6_addr;2626- };1919+ __u64 conn_id;2720} __packed;28212922/*···2431 */2532void cifs_fscache_get_client_cookie(struct TCP_Server_Info *server)2633{2727- const struct sockaddr *sa = (struct sockaddr *) &server->dstaddr;2828- const struct sockaddr_in *addr = (struct sockaddr_in *) sa;2929- const struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *) sa;3034 struct cifs_server_key key;3131- uint16_t key_len = sizeof(key.hdr);3232-3333- memset(&key, 0, sizeof(key));34353536 /*3636- * Should not be a problem as sin_family/sin6_family overlays3737- * sa_family field3737+ * Check if cookie was already initialized so don't reinitialize it.3838+ * In the future, as we integrate with newer fscache features,3939+ * we may want to instead add a check if cookie has changed3840 */3939- key.hdr.family = sa->sa_family;4040- switch (sa->sa_family) {4141- case AF_INET:4242- key.hdr.port = addr->sin_port;4343- key.ipv4_addr = addr->sin_addr;4444- key_len += sizeof(key.ipv4_addr);4545- break;4646-4747- case AF_INET6:4848- key.hdr.port = addr6->sin6_port;4949- key.ipv6_addr = addr6->sin6_addr;5050- key_len += sizeof(key.ipv6_addr);5151- break;5252-5353- default:5454- cifs_dbg(VFS, "Unknown network family '%d'\n", sa->sa_family);5555- server->fscache = NULL;4141+ if (server->fscache)5642 return;5757- }4343+4444+ memset(&key, 0, sizeof(key));4545+ key.conn_id = server->conn_id;58465947 server->fscache =6048 fscache_acquire_cookie(cifs_fscache_netfs.primary_index,6149 &cifs_fscache_server_index_def,6262- &key, key_len,5050+ &key, sizeof(key),6351 NULL, 0,6452 server, 0, true);6553 cifs_dbg(FYI, "%s: (0x%p/0x%p)\n",···6692 * In the future, as we integrate with newer fscache features,6793 * we may want to instead add a check if cookie has changed6894 */6969- if (tcon->fscache == NULL)9595+ if (tcon->fscache)7096 return;71977298 sharename = extract_sharename(tcon->treeName);
+7
fs/cifs/inode.c
···13761376 inode = ERR_PTR(rc);13771377 }1378137813791379+ /*13801380+ * The cookie is initialized from volume info returned above.13811381+ * Inside cifs_fscache_get_super_cookie it checks13821382+ * that we do not get super cookie twice.13831383+ */13841384+ cifs_fscache_get_super_cookie(tcon);13851385+13791386out:13801387 kfree(path);13811388 free_xid(xid);
···732732struct bpf_trampoline *bpf_trampoline_get(u64 key,733733 struct bpf_attach_target_info *tgt_info);734734void bpf_trampoline_put(struct bpf_trampoline *tr);735735+int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs);735736#define BPF_DISPATCHER_INIT(_name) { \736737 .mutex = __MUTEX_INITIALIZER(_name.mutex), \737738 .func = &_name##_func, \···13531352 * kprobes, tracepoints) to prevent deadlocks on map operations as any of13541353 * these events can happen inside a region which holds a map bucket lock13551354 * and can deadlock on it.13561356- *13571357- * Use the preemption safe inc/dec variants on RT because migrate disable13581358- * is preemptible on RT and preemption in the middle of the RMW operation13591359- * might lead to inconsistent state. Use the raw variants for non RT13601360- * kernels as migrate_disable() maps to preempt_disable() so the slightly13611361- * more expensive save operation can be avoided.13621355 */13631356static inline void bpf_disable_instrumentation(void)13641357{13651358 migrate_disable();13661366- if (IS_ENABLED(CONFIG_PREEMPT_RT))13671367- this_cpu_inc(bpf_prog_active);13681368- else13691369- __this_cpu_inc(bpf_prog_active);13591359+ this_cpu_inc(bpf_prog_active);13701360}1371136113721362static inline void bpf_enable_instrumentation(void)13731363{13741374- if (IS_ENABLED(CONFIG_PREEMPT_RT))13751375- this_cpu_dec(bpf_prog_active);13761376- else13771377- __this_cpu_dec(bpf_prog_active);13641364+ this_cpu_dec(bpf_prog_active);13781365 migrate_enable();13791366}13801367
···1818#include <linux/klist.h>1919#include <linux/pm.h>2020#include <linux/device/bus.h>2121+#include <linux/module.h>21222223/**2324 * enum probe_type - device driver probe type to try
+1-4
include/linux/filter.h
···66#define __LINUX_FILTER_H__7788#include <linux/atomic.h>99+#include <linux/bpf.h>910#include <linux/refcount.h>1011#include <linux/compat.h>1112#include <linux/skbuff.h>···27262827#include <asm/byteorder.h>2928#include <uapi/linux/filter.h>3030-#include <uapi/linux/bpf.h>31293230struct sk_buff;3331struct sock;···640640 * This uses migrate_disable/enable() explicitly to document that the641641 * invocation of a BPF program does not require reentrancy protection642642 * against a BPF program which is invoked from a preempting task.643643- *644644- * For non RT enabled kernels migrate_disable/enable() maps to645645- * preempt_disable/enable(), i.e. it disables also preemption.646643 */647644static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog,648645 const void *ctx)
···538538 * @mac_managed_pm: Set true if MAC driver takes of suspending/resuming PHY539539 * @state: State of the PHY for management purposes540540 * @dev_flags: Device-specific flags used by the PHY driver.541541- * Bits [15:0] are free to use by the PHY driver to communicate542542- * driver specific behavior.543543- * Bits [23:16] are currently reserved for future use.544544- * Bits [31:24] are reserved for defining generic545545- * PHY driver behavior.541541+ *542542+ * - Bits [15:0] are free to use by the PHY driver to communicate543543+ * driver specific behavior.544544+ * - Bits [23:16] are currently reserved for future use.545545+ * - Bits [31:24] are reserved for defining generic546546+ * PHY driver behavior.546547 * @irq: IRQ number of the PHY's interrupt (-1 if none)547548 * @phy_timer: The timer for handling the state machine548549 * @phylink: Pointer to phylink instance for this PHY
+8-6
include/linux/regulator/driver.h
···499499 * best to shut-down regulator(s) or reboot the SOC if error500500 * handling is repeatedly failing. If fatal_cnt is given the IRQ501501 * handling is aborted if it fails for fatal_cnt times and die()502502- * callback (if populated) or BUG() is called to try to prevent502502+ * callback (if populated) is called. If die() is not populated503503+ * poweroff for the system is attempted in order to prevent any503504 * further damage.504505 * @reread_ms: The time which is waited before attempting to re-read status505506 * at the worker if IC reading fails. Immediate re-read is done···517516 * @data: Driver private data pointer which will be passed as such to518517 * the renable, map_event and die callbacks in regulator_irq_data.519518 * @die: Protection callback. If IC status reading or recovery actions520520- * fail fatal_cnt times this callback or BUG() is called. This521521- * callback should implement a final protection attempt like522522- * disabling the regulator. If protection succeeded this may523523- * return 0. If anything else is returned the core assumes final524524- * protection failed and calls BUG() as a last resort.519519+ * fail fatal_cnt times this callback is called or system is520520+ * powered off. This callback should implement a final protection521521+ * attempt like disabling the regulator. If protection succeeded522522+ * die() may return 0. If anything else is returned the core523523+ * assumes final protection failed and attempts to perform a524524+ * poweroff as a last resort.525525 * @map_event: Driver callback to map IRQ status into regulator devices with526526 * events / errors. NOTE: callback MUST initialize both the527527 * errors and notifs for all rdevs which it signals having
···196196 __u64 ctx_set_params;197197};198198199199+/*200200+ * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in201201+ * effect. The event size is sizeof(drm_event), since there is no additional202202+ * payload.203203+ */204204+#define VIRTGPU_EVENT_FENCE_SIGNALED 0x90000000205205+199206#define DRM_IOCTL_VIRTGPU_MAP \200207 DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)201208
+2-9
kernel/bpf/btf.c
···6361636163626362/* BTF ID set registration API for modules */6363636363646364-struct kfunc_btf_id_list {63656365- struct list_head list;63666366- struct mutex mutex;63676367-};63686368-63696364#ifdef CONFIG_DEBUG_INFO_BTF_MODULES6370636563716366void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,···63866391{63876392 struct kfunc_btf_id_set *s;6388639363896389- if (!owner)63906390- return false;63916394 mutex_lock(&klist->mutex);63926395 list_for_each_entry(s, &klist->list, list) {63936396 if (s->owner == owner && btf_id_set_contains(s->set, kfunc_id)) {···63976404 return false;63986405}6399640664006400-#endif64016401-64026407#define DEFINE_KFUNC_BTF_ID_LIST(name) \64036408 struct kfunc_btf_id_list name = { LIST_HEAD_INIT(name.list), \64046409 __MUTEX_INITIALIZER(name.mutex) }; \···6404641364056414DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list);64066415DEFINE_KFUNC_BTF_ID_LIST(prog_test_kfunc_list);64166416+64176417+#endif
···615615 .sum_exec_runtime = p->se.sum_exec_runtime,616616 };617617618618- task_cputime(p, &cputime.utime, &cputime.stime);618618+ if (task_cputime(p, &cputime.utime, &cputime.stime))619619+ cputime.sum_exec_runtime = task_sched_runtime(p);619620 cputime_adjust(&cputime, &p->prev_cputime, ut, st);620621}621622EXPORT_SYMBOL_GPL(task_cputime_adjusted);···829828 * add up the pending nohz execution time since the last830829 * cputime snapshot.831830 */832832-void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)831831+bool task_cputime(struct task_struct *t, u64 *utime, u64 *stime)833832{834833 struct vtime *vtime = &t->vtime;835834 unsigned int seq;836835 u64 delta;836836+ int ret;837837838838 if (!vtime_accounting_enabled()) {839839 *utime = t->utime;840840 *stime = t->stime;841841- return;841841+ return false;842842 }843843844844 do {845845+ ret = false;845846 seq = read_seqcount_begin(&vtime->seqcount);846847847848 *utime = t->utime;···853850 if (vtime->state < VTIME_SYS)854851 continue;855852853853+ ret = true;856854 delta = vtime_delta(vtime);857855858856 /*···865861 else866862 *utime += vtime->utime + delta;867863 } while (read_seqcount_retry(&vtime->seqcount, seq));864864+865865+ return ret;868866}869867870868static int vtime_state_fetch(struct vtime *vtime, int cpu)
+2-1
kernel/softirq.c
···595595{596596 __irq_enter_raw();597597598598- if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))598598+ if (tick_nohz_full_cpu(smp_processor_id()) ||599599+ (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)))599600 tick_irq_enter();600601601602 account_hardirq_enter(current);
+7
kernel/time/tick-sched.c
···13751375 now = ktime_get();13761376 if (ts->idle_active)13771377 tick_nohz_stop_idle(ts, now);13781378+ /*13791379+ * If all CPUs are idle. We may need to update a stale jiffies value.13801380+ * Note nohz_full is a special case: a timekeeper is guaranteed to stay13811381+ * alive but it might be busy looping with interrupts disabled in some13821382+ * rare case (typically stop machine). So we must make sure we have a13831383+ * last resort.13841384+ */13781385 if (ts->tick_stopped)13791386 tick_nohz_update_jiffies(now);13801387}
+1
lib/Kconfig.debug
···316316 bool "Generate BTF typeinfo"317317 depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED318318 depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST319319+ depends on BPF_SYSCALL319320 help320321 Generate deduplicated BTF type information from DWARF debug info.321322 Turning this on expects presence of pahole tool, which will convert
···167167 write_lock_bh(&sk->sk_callback_lock);168168 if (strp_stop)169169 sk_psock_stop_strp(sk, psock);170170- else170170+ if (verdict_stop)171171 sk_psock_stop_verdict(sk, psock);172172+173173+ if (psock->psock_update_sk_prot)174174+ psock->psock_update_sk_prot(sk, psock, false);172175 write_unlock_bh(&sk->sk_callback_lock);173176 }174177}···285282286283 if (msg_parser)287284 psock_set_prog(&psock->progs.msg_parser, msg_parser);285285+ if (stream_parser)286286+ psock_set_prog(&psock->progs.stream_parser, stream_parser);287287+ if (stream_verdict)288288+ psock_set_prog(&psock->progs.stream_verdict, stream_verdict);289289+ if (skb_verdict)290290+ psock_set_prog(&psock->progs.skb_verdict, skb_verdict);288291289292 ret = sock_map_init_proto(sk, psock);290293 if (ret < 0)···301292 ret = sk_psock_init_strp(sk, psock);302293 if (ret)303294 goto out_unlock_drop;304304- psock_set_prog(&psock->progs.stream_verdict, stream_verdict);305305- psock_set_prog(&psock->progs.stream_parser, stream_parser);306295 sk_psock_start_strp(sk, psock);307296 } else if (!stream_parser && stream_verdict && !psock->saved_data_ready) {308308- psock_set_prog(&psock->progs.stream_verdict, stream_verdict);309297 sk_psock_start_verdict(sk,psock);310298 } else if (!stream_verdict && skb_verdict && !psock->saved_data_ready) {311311- psock_set_prog(&psock->progs.skb_verdict, skb_verdict);312299 sk_psock_start_verdict(sk, psock);313300 }314301 write_unlock_bh(&sk->sk_callback_lock);
+2-1
net/ethtool/netlink.c
···4040 if (dev->dev.parent)4141 pm_runtime_get_sync(dev->dev.parent);42424343- if (!netif_device_present(dev)) {4343+ if (!netif_device_present(dev) ||4444+ dev->reg_state == NETREG_UNREGISTERING) {4445 ret = -ENODEV;4546 goto err;4647 }
···829829 int ret = 0;830830 int state = child->sk_state;831831832832- /* record NAPI ID of child */833833- sk_mark_napi_id(child, skb);832832+ /* record sk_napi_id and sk_rx_queue_mapping of child. */833833+ sk_mark_napi_id_set(child, skb);834834835835 tcp_segs_in(tcp_sk(child), skb);836836 if (!sock_owned_by_user(child)) {
···161161 hdr->hop_limit = ip6_dst_hoplimit(skb_dst(skb));162162163163 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));164164+165165+ /* the control block has been erased, so we have to set the166166+ * iif once again.167167+ * We read the receiving interface index directly from the168168+ * skb->skb_iif as it is done in the IPv4 receiving path (i.e.:169169+ * ip_rcv_core(...)).170170+ */171171+ IP6CB(skb)->iif = skb->skb_iif;164172 }165173166174 hdr->nexthdr = NEXTHDR_ROUTING;
+3-3
net/netfilter/nf_conntrack_core.c
···684684685685 tstamp = nf_conn_tstamp_find(ct);686686 if (tstamp) {687687- s32 timeout = ct->timeout - nfct_time_stamp;687687+ s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp;688688689689 tstamp->stop = ktime_get_real_ns();690690 if (timeout < 0)···10361036 }1037103710381038 /* We want the clashing entry to go away real soon: 1 second timeout. */10391039- loser_ct->timeout = nfct_time_stamp + HZ;10391039+ WRITE_ONCE(loser_ct->timeout, nfct_time_stamp + HZ);1040104010411041 /* IPS_NAT_CLASH removes the entry automatically on the first10421042 * reply. Also prevents UDP tracker from moving the entry to···15601560 /* save hash for reusing when confirming */15611561 *(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash;15621562 ct->status = 0;15631563- ct->timeout = 0;15631563+ WRITE_ONCE(ct->timeout, 0);15641564 write_pnet(&ct->ct_net, net);15651565 memset(&ct->__nfct_init_offset, 0,15661566 offsetof(struct nf_conn, proto) -
···8383 int cnt;8484 };8585 int addr_cnt;8686+ bool is_set;8687 Elf64_Addr addr[ADDR_CNT];8788};8889···452451 * in symbol's size, together with 'cnt' field hence453452 * that - 1.454453 */455455- if (id)454454+ if (id) {456455 id->cnt = sym.st_size / sizeof(int) - 1;456456+ id->is_set = true;457457+ }457458 } else {458459 pr_err("FAILED unsupported prefix %s\n", prefix);459460 return -1;···571568 int *ptr = data->d_buf;572569 int i;573570574574- if (!id->id) {571571+ if (!id->id && !id->is_set)575572 pr_err("WARN: resolve_btfids: unresolved symbol %s\n", id->name);576576- }577573578574 for (i = 0; i < id->addr_cnt; i++) {579575 unsigned long addr = id->addr[i];
···1414# include "test-libpython.c"1515#undef main16161717-#define main main_test_libpython_version1818-# include "test-libpython-version.c"1919-#undef main2020-2117#define main main_test_libperl2218# include "test-libperl.c"2319#undef main···173177int main(int argc, char *argv[])174178{175179 main_test_libpython();176176- main_test_libpython_version();177180 main_test_libperl();178181 main_test_hello();179182 main_test_libelf();
···528528446 common landlock_restrict_self sys_landlock_restrict_self529529# 447 reserved for memfd_secret530530448 common process_mrelease sys_process_mrelease531531+449 common futex_waitv sys_futex_waitv
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
···451451446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self452452# 447 reserved for memfd_secret453453448 common process_mrelease sys_process_mrelease sys_process_mrelease454454+449 common futex_waitv sys_futex_waitv sys_futex_waitv
-4
tools/perf/bench/sched-messaging.c
···223223 snd_ctx->out_fds[i] = fds[1];224224 if (!thread_mode)225225 close(fds[0]);226226-227227- free(ctx);228226 }229227230228 /* Now we have all the fds, fork the senders */···238240 if (!thread_mode)239241 for (i = 0; i < num_fds; i++)240242 close(snd_ctx->out_fds[i]);241241-242242- free(snd_ctx);243243244244 /* Return number of children to reap */245245 return num_fds * 2;
+1-1
tools/perf/builtin-inject.c
···820820 inject->tool.ordered_events = true;821821 inject->tool.ordering_requires_timestamps = true;822822 /* Allow space in the header for new attributes */823823- output_data_offset = 4096;823823+ output_data_offset = roundup(8192 + session->header.data_offset, 4096);824824 if (inject->strip)825825 strip_init(inject);826826 }
+3-1
tools/perf/tests/expr.c
···169169 TEST_ASSERT_VAL("#num_dies", expr__parse(&num_dies, ctx, "#num_dies") == 0);170170 TEST_ASSERT_VAL("#num_cores >= #num_dies", num_cores >= num_dies);171171 TEST_ASSERT_VAL("#num_packages", expr__parse(&num_packages, ctx, "#num_packages") == 0);172172- TEST_ASSERT_VAL("#num_dies >= #num_packages", num_dies >= num_packages);172172+173173+ if (num_dies) // Some platforms do not have CPU die support, for example s390174174+ TEST_ASSERT_VAL("#num_dies >= #num_packages", num_dies >= num_packages);173175174176 /*175177 * Source count returns the number of events aggregating in a leader
···444444 setup445445446446 set -e447447+ ip netns add ns2448448+ ip netns set ns2 auto449449+450450+ ip -netns ns2 link set dev lo up451451+452452+ $IP link add name veth1 type veth peer name veth2453453+ $IP link set dev veth2 netns ns2454454+ $IP address add 192.0.2.1/24 dev veth1455455+ ip -netns ns2 address add 192.0.2.1/24 dev veth2456456+ $IP link set dev veth1 up457457+ ip -netns ns2 link set dev veth2 up458458+447459 $IP link set dev lo address 52:54:00:6a:c7:5e448448- $IP link set dummy0 address 52:54:00:6a:c7:5e449449- $IP link add dummy1 type dummy450450- $IP link set dummy1 address 52:54:00:6a:c7:5e451451- $IP link set dev dummy1 up460460+ $IP link set dev veth1 address 52:54:00:6a:c7:5e461461+ ip -netns ns2 link set dev lo address 52:54:00:6a:c7:5e462462+ ip -netns ns2 link set dev veth2 address 52:54:00:6a:c7:5e463463+464464+ # 1. (ns2) redirect lo's egress to veth2's egress465465+ ip netns exec ns2 tc qdisc add dev lo parent root handle 1: fq_codel466466+ ip netns exec ns2 tc filter add dev lo parent 1: protocol arp basic \467467+ action mirred egress redirect dev veth2468468+ ip netns exec ns2 tc filter add dev lo parent 1: protocol ip basic \469469+ action mirred egress redirect dev veth2470470+471471+ # 2. (ns1) redirect veth1's ingress to lo's ingress472472+ $NS_EXEC tc qdisc add dev veth1 ingress473473+ $NS_EXEC tc filter add dev veth1 ingress protocol arp basic \474474+ action mirred ingress redirect dev lo475475+ $NS_EXEC tc filter add dev veth1 ingress protocol ip basic \476476+ action mirred ingress redirect dev lo477477+478478+ # 3. (ns1) redirect lo's egress to veth1's egress479479+ $NS_EXEC tc qdisc add dev lo parent root handle 1: fq_codel480480+ $NS_EXEC tc filter add dev lo parent 1: protocol arp basic \481481+ action mirred egress redirect dev veth1482482+ $NS_EXEC tc filter add dev lo parent 1: protocol ip basic \483483+ action mirred egress redirect dev veth1484484+485485+ # 4. (ns2) redirect veth2's ingress to lo's ingress486486+ ip netns exec ns2 tc qdisc add dev veth2 ingress487487+ ip netns exec ns2 tc filter add dev veth2 ingress protocol arp basic \488488+ action mirred ingress redirect dev lo489489+ ip netns exec ns2 tc filter add dev veth2 ingress protocol ip basic \490490+ action mirred ingress redirect dev lo491491+452492 $NS_EXEC sysctl -qw net.ipv4.conf.all.rp_filter=1453493 $NS_EXEC sysctl -qw net.ipv4.conf.all.accept_local=1454494 $NS_EXEC sysctl -qw net.ipv4.conf.all.route_localnet=1455455-456456- $NS_EXEC tc qd add dev dummy1 parent root handle 1: fq_codel457457- $NS_EXEC tc filter add dev dummy1 parent 1: protocol arp basic action mirred egress redirect dev lo458458- $NS_EXEC tc filter add dev dummy1 parent 1: protocol ip basic action mirred egress redirect dev lo495495+ ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=1496496+ ip netns exec ns2 sysctl -qw net.ipv4.conf.all.accept_local=1497497+ ip netns exec ns2 sysctl -qw net.ipv4.conf.all.route_localnet=1459498 set +e460499461461- run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 198.51.100.1"500500+ run_cmd "ip netns exec ns2 ping -w1 -c1 192.0.2.1"462501 log_test $? 0 "rp_filter passes local packets"463502464464- run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 127.0.0.1"503503+ run_cmd "ip netns exec ns2 ping -w1 -c1 127.0.0.1"465504 log_test $? 0 "rp_filter passes loopback packets"466505467506 cleanup