Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2604 -1244
+2 -2
Documentation/admin-guide/blockdev/drbd/figures.rst
···
     :alt: disk-states-8.dot
     :align: center

-.. kernel-figure:: node-states-8.dot
-   :alt: node-states-8.dot
+.. kernel-figure:: peer-states-8.dot
+   :alt: peer-states-8.dot
    :align: center
-5
Documentation/admin-guide/blockdev/drbd/node-states-8.dot → Documentation/admin-guide/blockdev/drbd/peer-states-8.dot
···
-digraph node_states {
-	Secondary -> Primary [ label = "ioctl_set_state()" ]
-	Primary -> Secondary [ label = "ioctl_set_state()" ]
-}
-
 digraph peer_states {
 	Secondary -> Primary [ label = "recv state packet" ]
 	Primary -> Secondary [ label = "recv state packet" ]
+4 -5
Documentation/arm64/pointer-authentication.rst
···
 virtual address size configured by the kernel. For example, with a
 virtual address size of 48, the PAC is 7 bits wide.

-Recent versions of GCC can compile code with APIAKey-based return
-address protection when passed the -msign-return-address option. This
-uses instructions in the HINT space (unless -march=armv8.3-a or higher
-is also passed), and such code can run on systems without the pointer
-authentication extension.
+When ARM64_PTR_AUTH_KERNEL is selected, the kernel will be compiled
+with HINT space pointer authentication instructions protecting
+function returns. Kernels built with this option will work on hardware
+with or without pointer authentication support.

 In addition to exec(), keys can also be reinitialized to random values
 using the PR_PAC_RESET_KEYS prctl. A bitmask of PR_PAC_APIAKEY,
+10 -5
Documentation/conf.py
···

 html_static_path = ['sphinx-static']

-html_context = {
-    'css_files': [
-        '_static/theme_overrides.css',
-    ],
-}
+html_css_files = [
+    'theme_overrides.css',
+]
+
+if major <= 1 and minor < 8:
+    html_context = {
+        'css_files': [
+            '_static/theme_overrides.css',
+        ],
+    }

 # Add any extra paths that contain custom files (such as robots.txt or
 # .htaccess) here, relative to this directory. These files are copied
+3 -3
Documentation/cpu-freq/core.rst
···
 The third argument is a struct cpufreq_freqs with the following
 values:

-=====	===========================
-cpu	number of the affected CPU
+======	======================================
+policy	a pointer to the struct cpufreq_policy
 old	old frequency
 new	new frequency
 flags	flags of the cpufreq driver
-=====	===========================
+======	======================================

 3. CPUFreq Table Generation with Operating Performance Point (OPP)
 ==================================================================
+1
Documentation/devicetree/bindings/spi/spi-rockchip.yaml
···
               - rockchip,rk3328-spi
               - rockchip,rk3368-spi
               - rockchip,rk3399-spi
+              - rockchip,rk3568-spi
               - rockchip,rv1126-spi
           - const: rockchip,rk3066-spi
+3 -6
Documentation/locking/locktypes.rst
···
   spin_lock(&p->lock);
   p->count += this_cpu_read(var2);

-On a non-PREEMPT_RT kernel migrate_disable() maps to preempt_disable()
-which makes the above code fully equivalent. On a PREEMPT_RT kernel
 migrate_disable() ensures that the task is pinned on the current CPU which
 in turn guarantees that the per-CPU access to var1 and var2 are staying on
-the same CPU.
+the same CPU while the task remains preemptible.

 The migrate_disable() substitution is not valid for the following
 scenario::
···
     p = this_cpu_ptr(&var1);
     p->val = func2();

-While correct on a non-PREEMPT_RT kernel, this breaks on PREEMPT_RT because
-here migrate_disable() does not protect against reentrancy from a
-preempting task. A correct substitution for this case is::
+This breaks because migrate_disable() does not protect against reentrancy from
+a preempting task. A correct substitution for this case is::

   func()
   {
+11
Documentation/process/changes.rst
···
 binutils               2.23             ld -v
 flex                   2.5.35           flex --version
 bison                  2.0              bison --version
+pahole                 1.16             pahole --version
 util-linux             2.10o            fdformat --version
 kmod                   13               depmod -V
 e2fsprogs              1.41.4           e2fsck -V
···

 Since Linux 4.16, the build system generates parsers
 during build. This requires bison 2.0 or later.
+
+pahole:
+-------
+
+Since Linux 5.2, if CONFIG_DEBUG_INFO_BTF is selected, the build system
+generates BTF (BPF Type Format) from DWARF in vmlinux, a bit later from kernel
+modules as well. This requires pahole v1.16 or later.
+
+It is found in the 'dwarves' or 'pahole' distro packages or from
+https://fedorapeople.org/~acme/dwarves/.

 Perl
 ----
+2 -1
Documentation/process/submitting-patches.rst
···
 Documentation/process/submit-checklist.rst
 for a list of items to check before submitting code. If you are submitting
 a driver, also read Documentation/process/submitting-drivers.rst; for device
-tree binding patches, read Documentation/process/submitting-patches.rst.
+tree binding patches, read
+Documentation/devicetree/bindings/submitting-patches.rst.

 This documentation assumes that you're using ``git`` to prepare your patches.
 If you're unfamiliar with ``git``, you would be well-advised to learn how to
+7 -3
MAINTAINERS
···
 F:	include/linux/mlx5/mlx5_ifc_fpga.h

 MELLANOX ETHERNET SWITCH DRIVERS
-M:	Jiri Pirko <jiri@nvidia.com>
 M:	Ido Schimmel <idosch@nvidia.com>
+M:	Petr Machata <petrm@nvidia.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 W:	http://www.mellanox.com
···
 F:	Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
 F:	drivers/media/platform/sunxi/sun8i-rotate/

+RPMSG TTY DRIVER
+M:	Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+L:	linux-remoteproc@vger.kernel.org
+S:	Maintained
+F:	drivers/tty/rpmsg_tty.c
+
 RTL2830 MEDIA DRIVER
 M:	Antti Palosaari <crope@iki.fi>
 L:	linux-media@vger.kernel.org
···
 F:	drivers/iommu/s390-iommu.c

 S390 IUCV NETWORK LAYER
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
···
 F:	net/iucv/

 S390 NETWORK DRIVERS
-M:	Julian Wiedmann <jwi@linux.ibm.com>
 M:	Alexandra Winter <wintera@linux.ibm.com>
 M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
+2 -2
Makefile
···
 VERSION = 5
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Gobble Gobble

 # *DOCUMENTATION*
···
 KBUILD_CFLAGS += $(stackp-flags-y)

 KBUILD_CFLAGS-$(CONFIG_WERROR) += -Werror
-KBUILD_CFLAGS += $(KBUILD_CFLAGS-y) $(CONFIG_CC_IMPLICIT_FALLTHROUGH)
+KBUILD_CFLAGS += $(KBUILD_CFLAGS-y) $(CONFIG_CC_IMPLICIT_FALLTHROUGH:"%"=%)

 ifdef CONFIG_CC_IS_CLANG
 KBUILD_CPPFLAGS += -Qunused-arguments
+6
arch/arm64/kernel/entry-ftrace.S
···
 	.endm

 SYM_CODE_START(ftrace_regs_caller)
+#ifdef BTI_C
+	BTI_C
+#endif
 	ftrace_regs_entry	1
 	b	ftrace_common
 SYM_CODE_END(ftrace_regs_caller)

 SYM_CODE_START(ftrace_caller)
+#ifdef BTI_C
+	BTI_C
+#endif
 	ftrace_regs_entry	0
 	b	ftrace_common
 SYM_CODE_END(ftrace_caller)
+1 -1
arch/arm64/kernel/machine_kexec.c
···
 	if (rc)
 		return rc;
 	kimage->arch.ttbr1 = __pa(trans_pgd);
-	kimage->arch.zero_page = __pa(empty_zero_page);
+	kimage->arch.zero_page = __pa_symbol(empty_zero_page);

 	reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
 	memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
+1 -1
arch/mips/net/bpf_jit_comp.h
···
 #define emit(...) __emit(__VA_ARGS__)

 /* Workaround for R10000 ll/sc errata */
-#ifdef CONFIG_WAR_R10000
+#ifdef CONFIG_WAR_R10000_LLSC
 #define LLSC_beqz	beqzl
 #else
 #define LLSC_beqz	beqz
+5
arch/parisc/Makefile
···
 # Mike Shaver, Helge Deller and Martin K. Petersen
 #

+ifdef CONFIG_PARISC_SELF_EXTRACT
+boot := arch/parisc/boot
+KBUILD_IMAGE := $(boot)/bzImage
+else
 KBUILD_IMAGE := vmlinuz
+endif

 NM		= sh $(srctree)/arch/parisc/nm
 CHECKFLAGS	+= -D__hppa__=1
+13 -1
arch/parisc/configs/generic-64bit_defconfig
···
 CONFIG_LOCALVERSION="-64bit"
 # CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_KERNEL_LZ4=y
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
 CONFIG_BSD_PROCESS_ACCT=y
 CONFIG_BSD_PROCESS_ACCT_V3=y
 CONFIG_TASKSTATS=y
···
 CONFIG_BLK_DEV_INTEGRITY=y
 CONFIG_BINFMT_MISC=m
 # CONFIG_COMPACTION is not set
+CONFIG_MEMORY_FAILURE=y
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
···
 CONFIG_SCSI_SRP_ATTRS=y
 CONFIG_ISCSI_BOOT_SYSFS=y
 CONFIG_SCSI_MPT2SAS=y
-CONFIG_SCSI_LASI700=m
+CONFIG_SCSI_LASI700=y
 CONFIG_SCSI_SYM53C8XX_2=y
 CONFIG_SCSI_ZALON=y
 CONFIG_SCSI_QLA_ISCSI=m
 CONFIG_SCSI_DH=y
 CONFIG_ATA=y
+CONFIG_SATA_SIL=y
+CONFIG_SATA_SIS=y
+CONFIG_SATA_VIA=y
 CONFIG_PATA_NS87415=y
 CONFIG_PATA_SIL680=y
 CONFIG_ATA_GENERIC=y
···
 CONFIG_BLK_DEV_DM=m
 CONFIG_DM_RAID=m
 CONFIG_DM_UEVENT=y
+CONFIG_DM_AUDIT=y
 CONFIG_FUSION=y
 CONFIG_FUSION_SPI=y
 CONFIG_FUSION_SAS=y
···
 CONFIG_FB_MATROX_I2C=y
 CONFIG_FB_MATROX_MAVEN=y
 CONFIG_FB_RADEON=y
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_CLUT224 is not set
 CONFIG_HIDRAW=y
 CONFIG_HID_PID=y
 CONFIG_USB_HIDDEV=y
 CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
 CONFIG_UIO=y
 CONFIG_UIO_PDRV_GENIRQ=m
 CONFIG_UIO_AEC=m
+1
arch/parisc/install.sh
···
 if [ -n "${INSTALLKERNEL}" ]; then
   if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
   if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
+  if [ -x /usr/sbin/${INSTALLKERNEL} ]; then exec /usr/sbin/${INSTALLKERNEL} "$@"; fi
 fi

 # Default install
+7 -21
arch/parisc/kernel/time.c
···
 static int __init init_cr16_clocksource(void)
 {
 	/*
-	 * The cr16 interval timers are not synchronized across CPUs on
-	 * different sockets, so mark them unstable and lower rating on
-	 * multi-socket SMP systems.
+	 * The cr16 interval timers are not synchronized across CPUs, even if
+	 * they share the same socket.
 	 */
 	if (num_online_cpus() > 1 && !running_on_qemu) {
-		int cpu;
-		unsigned long cpu0_loc;
-		cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
+		/* mark sched_clock unstable */
+		clear_sched_clock_stable();

-		for_each_online_cpu(cpu) {
-			if (cpu == 0)
-				continue;
-			if ((cpu0_loc != 0) &&
-			    (cpu0_loc == per_cpu(cpu_data, cpu).cpu_loc))
-				continue;
-
-			/* mark sched_clock unstable */
-			clear_sched_clock_stable();
-
-			clocksource_cr16.name = "cr16_unstable";
-			clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
-			clocksource_cr16.rating = 0;
-			break;
-		}
+		clocksource_cr16.name = "cr16_unstable";
+		clocksource_cr16.flags = CLOCK_SOURCE_UNSTABLE;
+		clocksource_cr16.rating = 0;
 	}

 	/* register at clocksource framework */
+8 -2
arch/s390/configs/debug_defconfig
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_WATCHDOG_NOWAYOUT=y
 CONFIG_SOFT_WATCHDOG=m
 CONFIG_DIAG288_WATCHDOG=m
+# CONFIG_DRM_DEBUG_MODESET_LOCK is not set
 CONFIG_FB=y
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
···
 CONFIG_CRC7=m
 CONFIG_CRC8=m
 CONFIG_RANDOM32_SELFTEST=y
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_HEADERS_INSTALL=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
···
 CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
 CONFIG_DEBUG_PER_CPU_MAPS=y
 CONFIG_KFENCE=y
+CONFIG_KFENCE_STATIC_KEYS=y
 CONFIG_DEBUG_SHIRQ=y
 CONFIG_PANIC_ON_OOPS=y
 CONFIG_DETECT_HUNG_TASK=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_DEBUG_ENTRY=y
 CONFIG_CIO_INJECT=y
 CONFIG_KUNIT=m
···
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
 CONFIG_LKDTM=m
 CONFIG_TEST_MIN_HEAP=y
-CONFIG_KPROBES_SANITY_TEST=y
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_RBTREE_TEST=y
 CONFIG_INTERVAL_TREE_TEST=m
 CONFIG_PERCPU_TEST=m
+6 -1
arch/s390/configs/defconfig
···
 CONFIG_CONNECTOR=y
 CONFIG_ZRAM=y
 CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_DRBD=m
 CONFIG_BLK_DEV_NBD=m
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MACVTAP=m
 CONFIG_VXLAN=m
 CONFIG_BAREUDP=m
+CONFIG_AMT=m
 CONFIG_TUN=m
 CONFIG_VETH=m
 CONFIG_VIRTIO_NET=m
···
 # CONFIG_NET_VENDOR_AMD is not set
 # CONFIG_NET_VENDOR_AQUANTIA is not set
 # CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_VENDOR_ASIX is not set
 # CONFIG_NET_VENDOR_ATHEROS is not set
 # CONFIG_NET_VENDOR_BROADCOM is not set
 # CONFIG_NET_VENDOR_BROCADE is not set
···
 CONFIG_CRC4=m
 CONFIG_CRC7=m
 CONFIG_CRC8=m
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_DMA_CMA=y
 CONFIG_CMA_SIZE_MBYTES=0
 CONFIG_PRINTK_TIME=y
 CONFIG_DYNAMIC_DEBUG=y
 CONFIG_DEBUG_INFO=y
 CONFIG_DEBUG_INFO_DWARF4=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_GDB_SCRIPTS=y
 CONFIG_DEBUG_SECTION_MISMATCH=y
 CONFIG_MAGIC_SYSRQ=y
···
 CONFIG_SAMPLES=y
 CONFIG_SAMPLE_TRACE_PRINTK=m
 CONFIG_SAMPLE_FTRACE_DIRECT=m
+CONFIG_SAMPLE_FTRACE_DIRECT_MULTI=m
 CONFIG_KUNIT=m
 CONFIG_KUNIT_DEBUGFS=y
 CONFIG_LKDTM=m
+CONFIG_KPROBES_SANITY_TEST=m
 CONFIG_PERCPU_TEST=m
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BPF=m
+2
arch/s390/configs/zfcpdump_defconfig
···
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_LSM="yama,loadpin,safesetid,integrity"
 # CONFIG_ZLIB_DFLTCC is not set
+CONFIG_XZ_DEC_MICROLZMA=y
 CONFIG_PRINTK_TIME=y
 # CONFIG_SYMBOLIC_ERRNAME is not set
 CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_INFO_BTF=y
 CONFIG_DEBUG_FS=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_PANIC_ON_OOPS=y
+4 -3
arch/s390/include/asm/pci_io.h
···

 /* I/O Map */
 #define ZPCI_IOMAP_SHIFT		48
-#define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
+#define ZPCI_IOMAP_ADDR_SHIFT		62
+#define ZPCI_IOMAP_ADDR_BASE		(1UL << ZPCI_IOMAP_ADDR_SHIFT)
 #define ZPCI_IOMAP_ADDR_OFF_MASK	((1UL << ZPCI_IOMAP_SHIFT) - 1)
 #define ZPCI_IOMAP_MAX_ENTRIES		\
-	((ULONG_MAX - ZPCI_IOMAP_ADDR_BASE + 1) / (1UL << ZPCI_IOMAP_SHIFT))
+	(1UL << (ZPCI_IOMAP_ADDR_SHIFT - ZPCI_IOMAP_SHIFT))
 #define ZPCI_IOMAP_ADDR_IDX_MASK	\
-	(~ZPCI_IOMAP_ADDR_OFF_MASK - ZPCI_IOMAP_ADDR_BASE)
+	((ZPCI_IOMAP_ADDR_BASE - 1) & ~ZPCI_IOMAP_ADDR_OFF_MASK)

 struct zpci_iomap_entry {
 	u32 fh;
+3 -2
arch/s390/lib/test_unwind.c
···
 }

 /*
- * trigger specification exception
+ * Trigger operation exception; use insn notation to bypass
+ * llvm's integrated assembler sanity checks.
  */
 asm volatile(
-	"	mvcl	%%r1,%%r1\n"
+	"	.insn	e,0x0000\n"	/* illegal opcode */
 	"0:	nopr	%%r7\n"
 	EX_TABLE(0b, 0b)
 	:);
+1
arch/x86/Kconfig
···
 	depends on ACPI
 	select UCS2_STRING
 	select EFI_RUNTIME_WRAPPERS
+	select ARCH_USE_MEMREMAP_PROT
 	help
 	  This enables the kernel to use EFI runtime services that are
 	  available (such as the EFI variable services).
+18 -19
arch/x86/entry/entry_64.S
···
 	ud2
 1:
 #endif
+#ifdef CONFIG_XEN_PV
+	ALTERNATIVE "", "jmp xenpv_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
+#endif
+
 	POP_REGS pop_rdi=0

 	/*
···
 .Lparanoid_entry_checkgs:
 	/* EBX = 1 -> kernel GSBASE active, no restore required */
 	movl	$1, %ebx
+
 	/*
 	 * The kernel-enforced convention is a negative GSBASE indicates
 	 * a kernel value. No SWAPGS needed on entry and exit.
···
 	movl	$MSR_GS_BASE, %ecx
 	rdmsr
 	testl	%edx, %edx
-	jns	.Lparanoid_entry_swapgs
-	ret
-
-.Lparanoid_entry_swapgs:
-	swapgs
-
-	/*
-	 * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
-	 * unconditional CR3 write, even in the PTI case. So do an lfence
-	 * to prevent GS speculation, regardless of whether PTI is enabled.
-	 */
-	FENCE_SWAPGS_KERNEL_ENTRY
+	js	.Lparanoid_kernel_gsbase

 	/* EBX = 0 -> SWAPGS required on exit */
 	xorl	%ebx, %ebx
+	swapgs
+.Lparanoid_kernel_gsbase:
+
+	FENCE_SWAPGS_KERNEL_ENTRY
 	ret
 SYM_CODE_END(paranoid_entry)
···
 	pushq	%r12
 	ret

-.Lerror_entry_done_lfence:
-	FENCE_SWAPGS_KERNEL_ENTRY
-.Lerror_entry_done:
-	ret
-
 /*
  * There are two places in the kernel that can potentially fault with
  * usergs. Handle them here. B stepping K8s sometimes report a
···
 	 * .Lgs_change's error handler with kernel gsbase.
 	 */
 	SWAPGS
-	FENCE_SWAPGS_USER_ENTRY
-	jmp .Lerror_entry_done
+
+	/*
+	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
+	 * kernel or user gsbase.
+	 */
+.Lerror_entry_done_lfence:
+	FENCE_SWAPGS_KERNEL_ENTRY
+	ret

 .Lbstep_iret:
 	/* Fix truncated RIP */
+1 -1
arch/x86/include/asm/intel-family.h
···
 #define INTEL_FAM6_ALDERLAKE		0x97	/* Golden Cove / Gracemont */
 #define INTEL_FAM6_ALDERLAKE_L		0x9A	/* Golden Cove / Gracemont */

-#define INTEL_FAM6_RAPTOR_LAKE		0xB7
+#define INTEL_FAM6_RAPTORLAKE		0xB7

 /* "Small Core" Processors (Atom) */
+1
arch/x86/include/asm/kvm_host.h
···
 #define APICV_INHIBIT_REASON_PIT_REINJ	4
 #define APICV_INHIBIT_REASON_X2APIC	5
 #define APICV_INHIBIT_REASON_BLOCKIRQ	6
+#define APICV_INHIBIT_REASON_ABSENT	7

 struct kvm_arch {
 	unsigned long n_used_mmu_pages;
+11
arch/x86/include/asm/sev-common.h
···

 #define GHCB_RESP_CODE(v)		((v) & GHCB_MSR_INFO_MASK)

+/*
+ * Error codes related to GHCB input that can be communicated back to the guest
+ * by setting the lower 32-bits of the GHCB SW_EXITINFO1 field to 2.
+ */
+#define GHCB_ERR_NOT_REGISTERED		1
+#define GHCB_ERR_INVALID_USAGE		2
+#define GHCB_ERR_INVALID_SCRATCH_AREA	3
+#define GHCB_ERR_MISSING_INPUT		4
+#define GHCB_ERR_INVALID_INPUT		5
+#define GHCB_ERR_INVALID_EVENT		6
+
 #endif
+1 -1
arch/x86/kernel/fpu/signal.c
···
 			  struct fpstate *fpstate)
 {
 	struct xregs_state __user *x = buf;
-	struct _fpx_sw_bytes sw_bytes;
+	struct _fpx_sw_bytes sw_bytes = {};
 	u32 xfeatures;
 	int err;
+39 -18
arch/x86/kernel/sev.c
···
 			  char *dst, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT | X86_PF_WRITE;
-	char __user *target = (char __user *)dst;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8 d1;

 	/*
 	 * This function uses __put_user() independent of whether kernel or user
···
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *target = (u8 __user *)dst;
+
 		memcpy(&d1, buf, 1);
 		if (__put_user(d1, target))
 			goto fault;
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *target = (u16 __user *)dst;
+
 		memcpy(&d2, buf, 2);
 		if (__put_user(d2, target))
 			goto fault;
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *target = (u32 __user *)dst;
+
 		memcpy(&d4, buf, 4);
 		if (__put_user(d4, target))
 			goto fault;
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *target = (u64 __user *)dst;
+
 		memcpy(&d8, buf, 8);
 		if (__put_user(d8, target))
 			goto fault;
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;
···
 			 char *src, char *buf, size_t size)
 {
 	unsigned long error_code = X86_PF_PROT;
-	char __user *s = (char __user *)src;
-	u64 d8;
-	u32 d4;
-	u16 d2;
-	u8 d1;

 	/*
 	 * This function uses __get_user() independent of whether kernel or user
···
 	 * instructions here would cause infinite nesting.
 	 */
 	switch (size) {
-	case 1:
+	case 1: {
+		u8 d1;
+		u8 __user *s = (u8 __user *)src;
+
 		if (__get_user(d1, s))
 			goto fault;
 		memcpy(buf, &d1, 1);
 		break;
-	case 2:
+	}
+	case 2: {
+		u16 d2;
+		u16 __user *s = (u16 __user *)src;
+
 		if (__get_user(d2, s))
 			goto fault;
 		memcpy(buf, &d2, 2);
 		break;
-	case 4:
+	}
+	case 4: {
+		u32 d4;
+		u32 __user *s = (u32 __user *)src;
+
 		if (__get_user(d4, s))
 			goto fault;
 		memcpy(buf, &d4, 4);
 		break;
-	case 8:
+	}
+	case 8: {
+		u64 d8;
+		u64 __user *s = (u64 __user *)src;
+
 		if (__get_user(d8, s))
 			goto fault;
 		memcpy(buf, &d8, 8);
 		break;
+	}
 	default:
 		WARN_ONCE(1, "%s: Invalid size: %zu\n", __func__, size);
 		return ES_UNSUPPORTED;
+24 -4
arch/x86/kernel/tsc.c
···

 EXPORT_SYMBOL_GPL(mark_tsc_unstable);

+static void __init tsc_disable_clocksource_watchdog(void)
+{
+	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+}
+
 static void __init check_system_tsc_reliable(void)
 {
 #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
···
 #endif
 	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
 		tsc_clocksource_reliable = 1;
+
+	/*
+	 * Disable the clocksource watchdog when the system has:
+	 *  - TSC running at constant frequency
+	 *  - TSC which does not stop in C-States
+	 *  - the TSC_ADJUST register which allows to detect even minimal
+	 *    modifications
+	 *  - not more than two sockets. As the number of sockets cannot be
+	 *    evaluated at the early boot stage where this has to be
+	 *    invoked, check the number of online memory nodes as a
+	 *    fallback solution, which is a reasonable estimate.
+	 */
+	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC) &&
+	    boot_cpu_has(X86_FEATURE_NONSTOP_TSC) &&
+	    boot_cpu_has(X86_FEATURE_TSC_ADJUST) &&
+	    nr_online_nodes <= 2)
+		tsc_disable_clocksource_watchdog();
 }

 /*
···
 	if (tsc_unstable)
 		goto unreg;

-	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
-
 	if (boot_cpu_has(X86_FEATURE_NONSTOP_TSC_S3))
 		clocksource_tsc.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
···
 	}

 	if (tsc_clocksource_reliable || no_tsc_watchdog)
-		clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
+		tsc_disable_clocksource_watchdog();

 	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
 	detect_art();
+41
arch/x86/kernel/tsc_sync.c
···
 };

 static DEFINE_PER_CPU(struct tsc_adjust, tsc_adjust);
+static struct timer_list tsc_sync_check_timer;

 /*
  * TSC's on different sockets may be reset asynchronously.
···
 		adj->warned = true;
 	}
 }
+
+/*
+ * Normally the tsc_sync will be checked every time system enters idle
+ * state, but there is still a caveat that a system won't enter idle,
+ * either because it's too busy or configured purposely to not enter
+ * idle.
+ *
+ * So setup a periodic timer (every 10 minutes) to make sure the check
+ * is always on.
+ */
+
+#define SYNC_CHECK_INTERVAL		(HZ * 600)
+
+static void tsc_sync_check_timer_fn(struct timer_list *unused)
+{
+	int next_cpu;
+
+	tsc_verify_tsc_adjust(false);
+
+	/* Run the check for all onlined CPUs in turn */
+	next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
+	if (next_cpu >= nr_cpu_ids)
+		next_cpu = cpumask_first(cpu_online_mask);
+
+	tsc_sync_check_timer.expires += SYNC_CHECK_INTERVAL;
+	add_timer_on(&tsc_sync_check_timer, next_cpu);
+}
+
+static int __init start_sync_check_timer(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_TSC_ADJUST) || tsc_clocksource_reliable)
+		return 0;
+
+	timer_setup(&tsc_sync_check_timer, tsc_sync_check_timer_fn, 0);
+	tsc_sync_check_timer.expires = jiffies + SYNC_CHECK_INTERVAL;
+	add_timer(&tsc_sync_check_timer);
+
+	return 0;
+}
+late_initcall(start_sync_check_timer);

 static void tsc_sanitize_first_cpu(struct tsc_adjust *cur, s64 bootval,
 				   unsigned int cpu, bool bootcpu)
+21 -2
arch/x86/kvm/mmu/mmu.c
···

 static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
-	return sp->role.invalid ||
+	if (sp->role.invalid)
+		return true;
+
+	/* TDP MMU pages do not use the MMU generation. */
+	return !sp->tdp_mmu_page &&
 	       unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
 }
···
 	return true;
 }

+/*
+ * Returns true if the page fault is stale and needs to be retried, i.e. if the
+ * root was invalidated by a memslot update or a relevant mmu_notifier fired.
+ */
+static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
+				struct kvm_page_fault *fault, int mmu_seq)
+{
+	if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa)))
+		return true;
+
+	return fault->slot &&
+	       mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
+}
+
 static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	bool is_tdp_mmu_fault = is_tdp_mmu(vcpu->arch.mmu);
···
 	else
 		write_lock(&vcpu->kvm->mmu_lock);

-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;
+
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
+2 -1
arch/x86/kvm/mmu/paging_tmpl.h
···

 	r = RET_PF_RETRY;
 	write_lock(&vcpu->kvm->mmu_lock);
-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+
+	if (is_page_fault_stale(vcpu, fault, mmu_seq))
 		goto out_unlock;

 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
+1
arch/x86/kvm/svm/avic.c
···
 bool svm_check_apicv_inhibit_reasons(ulong bit)
 {
 	ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) |
+			  BIT(APICV_INHIBIT_REASON_ABSENT) |
 			  BIT(APICV_INHIBIT_REASON_HYPERV) |
 			  BIT(APICV_INHIBIT_REASON_NESTED) |
 			  BIT(APICV_INHIBIT_REASON_IRQWIN) |
+1 -1
arch/x86/kvm/svm/pmu.c
···
 		pmu->nr_arch_gp_counters = AMD64_NUM_COUNTERS;

 	pmu->counter_bitmask[KVM_PMC_GP] = ((u64)1 << 48) - 1;
-	pmu->reserved_bits = 0xffffffff00200000ull;
+	pmu->reserved_bits = 0xfffffff000280000ull;
 	pmu->version = 1;
 	/* not applicable to AMD; but clean them to prevent any fall out */
 	pmu->counter_bitmask[KVM_PMC_FIXED] = 0;
+60 -42
arch/x86/kvm/svm/sev.c
···
 		__free_page(virt_to_page(svm->sev_es.vmsa));

 	if (svm->sev_es.ghcb_sa_free)
-		kfree(svm->sev_es.ghcb_sa);
+		kvfree(svm->sev_es.ghcb_sa);
 }

 static void dump_ghcb(struct vcpu_svm *svm)
···
 	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));
 }

-static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
+static bool sev_es_validate_vmgexit(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu;
 	struct ghcb *ghcb;
-	u64 exit_code = 0;
+	u64 exit_code;
+	u64 reason;

 	ghcb = svm->sev_es.ghcb;

-	/* Only GHCB Usage code 0 is supported */
-	if (ghcb->ghcb_usage)
-		goto vmgexit_err;
-
 	/*
-	 * Retrieve the exit code now even though is may not be marked valid
+	 * Retrieve the exit code now even though it may not be marked valid
 	 * as it could help with debugging.
 	 */
 	exit_code = ghcb_get_sw_exit_code(ghcb);
+
+	/* Only GHCB Usage code 0 is supported */
+	if (ghcb->ghcb_usage) {
+		reason = GHCB_ERR_INVALID_USAGE;
+		goto vmgexit_err;
+	}
+
+	reason = GHCB_ERR_MISSING_INPUT;

 	if (!ghcb_sw_exit_code_is_valid(ghcb) ||
 	    !ghcb_sw_exit_info_1_is_valid(ghcb) ||
···
 	case SVM_VMGEXIT_UNSUPPORTED_EVENT:
 		break;
 	default:
+		reason = GHCB_ERR_INVALID_EVENT;
 		goto vmgexit_err;
 	}

-	return 0;
+	return true;

 vmgexit_err:
 	vcpu = &svm->vcpu;

-	if (ghcb->ghcb_usage) {
+	if (reason == GHCB_ERR_INVALID_USAGE) {
 		vcpu_unimpl(vcpu, "vmgexit: ghcb usage %#x is not valid\n",
 			    ghcb->ghcb_usage);
+	} else if (reason == GHCB_ERR_INVALID_EVENT) {
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx is not valid\n",
+			    exit_code);
 	} else {
-		vcpu_unimpl(vcpu, "vmgexit: exit reason %#llx is not valid\n",
+		vcpu_unimpl(vcpu, "vmgexit: exit code %#llx input is not valid\n",
 			    exit_code);
 		dump_ghcb(svm);
 	}

-	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
-	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
-	vcpu->run->internal.ndata = 2;
-	vcpu->run->internal.data[0] = exit_code;
-	vcpu->run->internal.data[1] = vcpu->arch.last_vmentry_cpu;
+	/* Clear the valid entries fields */
+	memset(ghcb->save.valid_bitmap, 0, sizeof(ghcb->save.valid_bitmap));

-	return -EINVAL;
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, reason);
+
+	return false;
 }

 void sev_es_unmap_ghcb(struct vcpu_svm *svm)
···
 		svm->sev_es.ghcb_sa_sync = false;
 	}

-	kfree(svm->sev_es.ghcb_sa);
+	kvfree(svm->sev_es.ghcb_sa);
 	svm->sev_es.ghcb_sa = NULL;
 	svm->sev_es.ghcb_sa_free = false;
 }
···
 	scratch_gpa_beg = ghcb_get_sw_scratch(ghcb);
 	if (!scratch_gpa_beg) {
 		pr_err("vmgexit: scratch gpa not provided\n");
-		return false;
+		goto e_scratch;
 	}

 	scratch_gpa_end = scratch_gpa_beg + len;
 	if (scratch_gpa_end < scratch_gpa_beg) {
 		pr_err("vmgexit: scratch length (%#llx) not valid for scratch address (%#llx)\n",
 		       len, scratch_gpa_beg);
-		return false;
+		goto e_scratch;
 	}

 	if ((scratch_gpa_beg & PAGE_MASK) == control->ghcb_gpa) {
···
 		    scratch_gpa_end > ghcb_scratch_end) {
 			pr_err("vmgexit: scratch area is outside of GHCB shared buffer area (%#llx - %#llx)\n",
 			       scratch_gpa_beg, scratch_gpa_end);
-			return false;
+			goto e_scratch;
 		}

 		scratch_va = (void *)svm->sev_es.ghcb;
···
 		if (len > GHCB_SCRATCH_AREA_LIMIT) {
 			pr_err("vmgexit: scratch area exceeds KVM limits (%#llx requested, %#llx limit)\n",
 			       len, GHCB_SCRATCH_AREA_LIMIT);
-			return false;
+			goto e_scratch;
 		}
-		scratch_va = kzalloc(len, GFP_KERNEL_ACCOUNT);
+		scratch_va = kvzalloc(len, GFP_KERNEL_ACCOUNT);
 		if (!scratch_va)
-			return false;
+			goto e_scratch;

 		if (kvm_read_guest(svm->vcpu.kvm, scratch_gpa_beg, scratch_va, len)) {
 			/* Unable to copy scratch area from guest */
 			pr_err("vmgexit: kvm_read_guest for scratch area failed\n");

-			kfree(scratch_va);
-			return false;
+			kvfree(scratch_va);
+			goto e_scratch;
 		}

 		/*
···
 	svm->sev_es.ghcb_sa_len = len;

 	return true;
+
+e_scratch:
+	ghcb_set_sw_exit_info_1(ghcb, 2);
+	ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_SCRATCH_AREA);
+
+	return false;
 }

 static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
···
 	ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_CPUID);
 	if (!ret) {
-		ret = -EINVAL;
+		/* Error, keep GHCB MSR value as-is */
 		break;
 	}
···
 					      GHCB_MSR_TERM_REASON_POS);
 		pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
 			reason_set, reason_code);
-		fallthrough;
+
+		ret = -EINVAL;
+		break;
 	}
 	default:
-		ret = -EINVAL;
+		/* Error, keep GHCB MSR value as-is */
+		break;
 	}

 	trace_kvm_vmgexit_msr_protocol_exit(svm->vcpu.vcpu_id,
···
 	if (!ghcb_gpa) {
 		vcpu_unimpl(vcpu, "vmgexit: GHCB gpa is not set\n");
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	if (kvm_vcpu_map(vcpu, ghcb_gpa >> PAGE_SHIFT, &svm->sev_es.ghcb_map)) {
 		/* Unable to map GHCB from guest */
 		vcpu_unimpl(vcpu, "vmgexit: error mapping GHCB [%#llx] from guest\n",
 			    ghcb_gpa);
-		return -EINVAL;
+
+		/* Without a GHCB, just return right back to the guest */
+		return 1;
 	}

 	svm->sev_es.ghcb = svm->sev_es.ghcb_map.hva;
···
 	exit_code = ghcb_get_sw_exit_code(ghcb);

-	ret = sev_es_validate_vmgexit(svm);
-	if (ret)
-		return ret;
+	if (!sev_es_validate_vmgexit(svm))
+		return 1;

 	sev_es_sync_from_ghcb(svm);
 	ghcb_set_sw_exit_info_1(ghcb, 0);
 	ghcb_set_sw_exit_info_2(ghcb, 0);

-	ret = -EINVAL;
+	ret = 1;
 	switch (exit_code) {
 	case SVM_VMGEXIT_MMIO_READ:
 		if (!setup_vmgexit_scratch(svm, true, control->exit_info_2))
···
 	default:
 		pr_err("svm: vmgexit: unsupported AP jump table
request - exit_info_1=%#llx\n", 2809 2788 control->exit_info_1); 2810 - ghcb_set_sw_exit_info_1(ghcb, 1); 2811 - ghcb_set_sw_exit_info_2(ghcb, 2812 - X86_TRAP_UD | 2813 - SVM_EVTINJ_TYPE_EXEPT | 2814 - SVM_EVTINJ_VALID); 2789 + ghcb_set_sw_exit_info_1(ghcb, 2); 2790 + ghcb_set_sw_exit_info_2(ghcb, GHCB_ERR_INVALID_INPUT); 2815 2791 } 2816 2792 2817 - ret = 1; 2818 2793 break; 2819 2794 } 2820 2795 case SVM_VMGEXIT_UNSUPPORTED_EVENT: 2821 2796 vcpu_unimpl(vcpu, 2822 2797 "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n", 2823 2798 control->exit_info_1, control->exit_info_2); 2799 + ret = -EINVAL; 2824 2800 break; 2825 2801 default: 2826 2802 ret = svm_invoke_exit_handler(vcpu, exit_code); ··· 2839 2821 return -EINVAL; 2840 2822 2841 2823 if (!setup_vmgexit_scratch(svm, in, bytes)) 2842 - return -EINVAL; 2824 + return 1; 2843 2825 2844 2826 return kvm_sev_es_string_io(&svm->vcpu, size, port, svm->sev_es.ghcb_sa, 2845 2827 count, in);
+3 -1
arch/x86/kvm/vmx/nested.c
··· 2591 2591 2592 2592 if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) && 2593 2593 WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_CORE_PERF_GLOBAL_CTRL, 2594 - vmcs12->guest_ia32_perf_global_ctrl))) 2594 + vmcs12->guest_ia32_perf_global_ctrl))) { 2595 + *entry_failure_code = ENTRY_FAIL_DEFAULT; 2595 2596 return -EINVAL; 2597 + } 2596 2598 2597 2599 kvm_rsp_write(vcpu, vmcs12->guest_rsp); 2598 2600 kvm_rip_write(vcpu, vmcs12->guest_rip);
+1
arch/x86/kvm/vmx/vmx.c
··· 7525 7525 static bool vmx_check_apicv_inhibit_reasons(ulong bit) 7526 7526 { 7527 7527 ulong supported = BIT(APICV_INHIBIT_REASON_DISABLE) | 7528 + BIT(APICV_INHIBIT_REASON_ABSENT) | 7528 7529 BIT(APICV_INHIBIT_REASON_HYPERV) | 7529 7530 BIT(APICV_INHIBIT_REASON_BLOCKIRQ); 7530 7531
+5 -4
arch/x86/kvm/x86.c
··· 5740 5740 smp_wmb(); 5741 5741 kvm->arch.irqchip_mode = KVM_IRQCHIP_SPLIT; 5742 5742 kvm->arch.nr_reserved_ioapic_pins = cap->args[0]; 5743 + kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT); 5743 5744 r = 0; 5744 5745 split_irqchip_unlock: 5745 5746 mutex_unlock(&kvm->lock); ··· 6121 6120 /* Write kvm->irq_routing before enabling irqchip_in_kernel. */ 6122 6121 smp_wmb(); 6123 6122 kvm->arch.irqchip_mode = KVM_IRQCHIP_KERNEL; 6123 + kvm_request_apicv_update(kvm, true, APICV_INHIBIT_REASON_ABSENT); 6124 6124 create_irqchip_unlock: 6125 6125 mutex_unlock(&kvm->lock); 6126 6126 break; ··· 8820 8818 { 8821 8819 init_rwsem(&kvm->arch.apicv_update_lock); 8822 8820 8823 - if (enable_apicv) 8824 - clear_bit(APICV_INHIBIT_REASON_DISABLE, 8825 - &kvm->arch.apicv_inhibit_reasons); 8826 - else 8821 + set_bit(APICV_INHIBIT_REASON_ABSENT, 8822 + &kvm->arch.apicv_inhibit_reasons); 8823 + if (!enable_apicv) 8827 8824 set_bit(APICV_INHIBIT_REASON_DISABLE, 8828 8825 &kvm->arch.apicv_inhibit_reasons); 8829 8826 }
+2 -1
arch/x86/platform/efi/quirks.c
··· 277 277 return; 278 278 } 279 279 280 - new = early_memremap(data.phys_map, data.size); 280 + new = early_memremap_prot(data.phys_map, data.size, 281 + pgprot_val(pgprot_encrypted(FIXMAP_PAGE_NORMAL))); 281 282 if (!new) { 282 283 pr_err("Failed to map new boot services memmap\n"); 283 284 return;
+11 -1
arch/x86/realmode/init.c
··· 72 72 #ifdef CONFIG_X86_64 73 73 u64 *trampoline_pgd; 74 74 u64 efer; 75 + int i; 75 76 #endif 76 77 77 78 base = (unsigned char *)real_mode_header; ··· 129 128 trampoline_header->flags = 0; 130 129 131 130 trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd); 131 + 132 + /* Map the real mode stub as virtual == physical */ 132 133 trampoline_pgd[0] = trampoline_pgd_entry.pgd; 133 - trampoline_pgd[511] = init_top_pgt[511].pgd; 134 + 135 + /* 136 + * Include the entirety of the kernel mapping into the trampoline 137 + * PGD. This way, all mappings present in the normal kernel page 138 + * tables are usable while running on trampoline_pgd. 139 + */ 140 + for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++) 141 + trampoline_pgd[i] = init_top_pgt[i].pgd; 134 142 #endif 135 143 136 144 sme_sev_setup_real_mode(trampoline_header);
+20
arch/x86/xen/xen-asm.S
··· 20 20 21 21 #include <linux/init.h> 22 22 #include <linux/linkage.h> 23 + #include <../entry/calling.h> 23 24 24 25 .pushsection .noinstr.text, "ax" 25 26 /* ··· 192 191 pushq $0 193 192 jmp hypercall_iret 194 193 SYM_CODE_END(xen_iret) 194 + 195 + /* 196 + * XEN pv doesn't use trampoline stack, PER_CPU_VAR(cpu_tss_rw + TSS_sp0) is 197 + * also the kernel stack. Reusing swapgs_restore_regs_and_return_to_usermode() 198 + * in XEN pv would cause %rsp to move up to the top of the kernel stack and 199 + * leave the IRET frame below %rsp, which is dangerous to be corrupted if #NMI 200 + * interrupts. And swapgs_restore_regs_and_return_to_usermode() pushing the IRET 201 + * frame at the same address is useless. 202 + */ 203 + SYM_CODE_START(xenpv_restore_regs_and_return_to_usermode) 204 + UNWIND_HINT_REGS 205 + POP_REGS 206 + 207 + /* stackleak_erase() can work safely on the kernel stack. */ 208 + STACKLEAK_ERASE_NOCLOBBER 209 + 210 + addq $8, %rsp /* skip regs->orig_ax */ 211 + jmp xen_iret 212 + SYM_CODE_END(xenpv_restore_regs_and_return_to_usermode) 195 213 196 214 /* 197 215 * Xen handles syscall callbacks much like ordinary exceptions, which
+1
block/fops.c
··· 15 15 #include <linux/falloc.h> 16 16 #include <linux/suspend.h> 17 17 #include <linux/fs.h> 18 + #include <linux/module.h> 18 19 #include "blk.h" 19 20 20 21 static inline struct inode *bdev_file_inode(struct file *file)
+1 -1
drivers/ata/libata-sata.c
··· 827 827 if (ap->target_lpm_policy >= ARRAY_SIZE(ata_lpm_policy_names)) 828 828 return -EINVAL; 829 829 830 - return snprintf(buf, PAGE_SIZE, "%s\n", 830 + return sysfs_emit(buf, "%s\n", 831 831 ata_lpm_policy_names[ap->target_lpm_policy]); 832 832 } 833 833 DEVICE_ATTR(link_power_management_policy, S_IRUGO | S_IWUSR,
+8 -8
drivers/ata/pata_falcon.c
··· 55 55 /* Transfer multiple of 2 bytes */ 56 56 if (rw == READ) { 57 57 if (swap) 58 - raw_insw_swapw((u16 *)data_addr, (u16 *)buf, words); 58 + raw_insw_swapw(data_addr, (u16 *)buf, words); 59 59 else 60 - raw_insw((u16 *)data_addr, (u16 *)buf, words); 60 + raw_insw(data_addr, (u16 *)buf, words); 61 61 } else { 62 62 if (swap) 63 - raw_outsw_swapw((u16 *)data_addr, (u16 *)buf, words); 63 + raw_outsw_swapw(data_addr, (u16 *)buf, words); 64 64 else 65 - raw_outsw((u16 *)data_addr, (u16 *)buf, words); 65 + raw_outsw(data_addr, (u16 *)buf, words); 66 66 } 67 67 68 68 /* Transfer trailing byte, if any. */ ··· 74 74 75 75 if (rw == READ) { 76 76 if (swap) 77 - raw_insw_swapw((u16 *)data_addr, (u16 *)pad, 1); 77 + raw_insw_swapw(data_addr, (u16 *)pad, 1); 78 78 else 79 - raw_insw((u16 *)data_addr, (u16 *)pad, 1); 79 + raw_insw(data_addr, (u16 *)pad, 1); 80 80 *buf = pad[0]; 81 81 } else { 82 82 pad[0] = *buf; 83 83 if (swap) 84 - raw_outsw_swapw((u16 *)data_addr, (u16 *)pad, 1); 84 + raw_outsw_swapw(data_addr, (u16 *)pad, 1); 85 85 else 86 - raw_outsw((u16 *)data_addr, (u16 *)pad, 1); 86 + raw_outsw(data_addr, (u16 *)pad, 1); 87 87 } 88 88 words++; 89 89 }
+13 -7
drivers/ata/sata_fsl.c
··· 1394 1394 return 0; 1395 1395 } 1396 1396 1397 + static void sata_fsl_host_stop(struct ata_host *host) 1398 + { 1399 + struct sata_fsl_host_priv *host_priv = host->private_data; 1400 + 1401 + iounmap(host_priv->hcr_base); 1402 + kfree(host_priv); 1403 + } 1404 + 1397 1405 /* 1398 1406 * scsi mid-layer and libata interface structures 1399 1407 */ ··· 1433 1425 1434 1426 .port_start = sata_fsl_port_start, 1435 1427 .port_stop = sata_fsl_port_stop, 1428 + 1429 + .host_stop = sata_fsl_host_stop, 1436 1430 1437 1431 .pmp_attach = sata_fsl_pmp_attach, 1438 1432 .pmp_detach = sata_fsl_pmp_detach, ··· 1490 1480 host_priv->ssr_base = ssr_base; 1491 1481 host_priv->csr_base = csr_base; 1492 1482 1493 - irq = irq_of_parse_and_map(ofdev->dev.of_node, 0); 1494 - if (!irq) { 1495 - dev_err(&ofdev->dev, "invalid irq from platform\n"); 1483 + irq = platform_get_irq(ofdev, 0); 1484 + if (irq < 0) { 1485 + retval = irq; 1496 1486 goto error_exit_with_cleanup; 1497 1487 } 1498 1488 host_priv->irq = irq; ··· 1566 1556 device_remove_file(&ofdev->dev, &host_priv->rx_watermark); 1567 1557 1568 1558 ata_host_detach(host); 1569 - 1570 - irq_dispose_mapping(host_priv->irq); 1571 - iounmap(host_priv->hcr_base); 1572 - kfree(host_priv); 1573 1559 1574 1560 return 0; 1575 1561 }
+1 -1
drivers/block/loop.c
··· 2103 2103 int ret; 2104 2104 2105 2105 if (idx < 0) { 2106 - pr_warn("deleting an unspecified loop device is not supported.\n"); 2106 + pr_warn_once("deleting an unspecified loop device is not supported.\n"); 2107 2107 return -EINVAL; 2108 2108 } 2109 2109
+3 -3
drivers/char/agp/parisc-agp.c
··· 281 281 return 0; 282 282 } 283 283 284 - static int 284 + static int __init 285 285 lba_find_capability(int cap) 286 286 { 287 287 struct _parisc_agp_info *info = &parisc_agp_info; ··· 366 366 return error; 367 367 } 368 368 369 - static int 369 + static int __init 370 370 find_quicksilver(struct device *dev, void *data) 371 371 { 372 372 struct parisc_device **lba = data; ··· 378 378 return 0; 379 379 } 380 380 381 - static int 381 + static int __init 382 382 parisc_agp_init(void) 383 383 { 384 384 extern struct sba_device *sba_list;
+7 -7
drivers/cpufreq/cpufreq.c
··· 1004 1004 .release = cpufreq_sysfs_release, 1005 1005 }; 1006 1006 1007 - static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu) 1007 + static void add_cpu_dev_symlink(struct cpufreq_policy *policy, unsigned int cpu, 1008 + struct device *dev) 1008 1009 { 1009 - struct device *dev = get_cpu_device(cpu); 1010 - 1011 1010 if (unlikely(!dev)) 1012 1011 return; 1013 1012 ··· 1295 1296 1296 1297 if (policy->max_freq_req) { 1297 1298 /* 1298 - * CPUFREQ_CREATE_POLICY notification is sent only after 1299 - * successfully adding max_freq_req request. 1299 + * Remove max_freq_req after sending CPUFREQ_REMOVE_POLICY 1300 + * notification, since CPUFREQ_CREATE_POLICY notification was 1301 + * sent after adding max_freq_req earlier. 1300 1302 */ 1301 1303 blocking_notifier_call_chain(&cpufreq_policy_notifier_list, 1302 1304 CPUFREQ_REMOVE_POLICY, policy); ··· 1391 1391 if (new_policy) { 1392 1392 for_each_cpu(j, policy->related_cpus) { 1393 1393 per_cpu(cpufreq_cpu_data, j) = policy; 1394 - add_cpu_dev_symlink(policy, j); 1394 + add_cpu_dev_symlink(policy, j, get_cpu_device(j)); 1395 1395 } 1396 1396 1397 1397 policy->min_freq_req = kzalloc(2 * sizeof(*policy->min_freq_req), ··· 1565 1565 /* Create sysfs link on CPU registration */ 1566 1566 policy = per_cpu(cpufreq_cpu_data, cpu); 1567 1567 if (policy) 1568 - add_cpu_dev_symlink(policy, cpu); 1568 + add_cpu_dev_symlink(policy, cpu, dev); 1569 1569 1570 1570 return 0; 1571 1571 }
+1 -1
drivers/dma-buf/heaps/system_heap.c
··· 290 290 int i; 291 291 292 292 table = &buffer->sg_table; 293 - for_each_sg(table->sgl, sg, table->nents, i) { 293 + for_each_sgtable_sg(table, sg, i) { 294 294 struct page *page = sg_page(sg); 295 295 296 296 __free_pages(page, compound_order(page));
+5 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1396 1396 struct sg_table *sg = NULL; 1397 1397 uint64_t user_addr = 0; 1398 1398 struct amdgpu_bo *bo; 1399 - struct drm_gem_object *gobj; 1399 + struct drm_gem_object *gobj = NULL; 1400 1400 u32 domain, alloc_domain; 1401 1401 u64 alloc_flags; 1402 1402 int ret; ··· 1506 1506 remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info); 1507 1507 drm_vma_node_revoke(&gobj->vma_node, drm_priv); 1508 1508 err_node_allow: 1509 - drm_gem_object_put(gobj); 1510 1509 /* Don't unreserve system mem limit twice */ 1511 1510 goto err_reserve_limit; 1512 1511 err_bo_create: 1513 1512 unreserve_mem_limit(adev, size, alloc_domain, !!sg); 1514 1513 err_reserve_limit: 1515 1514 mutex_destroy(&(*mem)->lock); 1516 - kfree(*mem); 1515 + if (gobj) 1516 + drm_gem_object_put(gobj); 1517 + else 1518 + kfree(*mem); 1517 1519 err: 1518 1520 if (sg) { 1519 1521 sg_free_table(sg);
+10 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3833 3833 /* disable all interrupts */ 3834 3834 amdgpu_irq_disable_all(adev); 3835 3835 if (adev->mode_info.mode_config_initialized){ 3836 - if (!amdgpu_device_has_dc_support(adev)) 3836 + if (!drm_drv_uses_atomic_modeset(adev_to_drm(adev))) 3837 3837 drm_helper_force_disable_all(adev_to_drm(adev)); 3838 3838 else 3839 3839 drm_atomic_helper_shutdown(adev_to_drm(adev)); ··· 4289 4289 { 4290 4290 int r; 4291 4291 4292 + amdgpu_amdkfd_pre_reset(adev); 4293 + 4292 4294 if (from_hypervisor) 4293 4295 r = amdgpu_virt_request_full_gpu(adev, true); 4294 4296 else ··· 4318 4316 4319 4317 amdgpu_irq_gpu_reset_resume_helper(adev); 4320 4318 r = amdgpu_ib_ring_tests(adev); 4319 + amdgpu_amdkfd_post_reset(adev); 4321 4320 4322 4321 error: 4323 4322 if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) { ··· 5033 5030 5034 5031 cancel_delayed_work_sync(&tmp_adev->delayed_init_work); 5035 5032 5036 - amdgpu_amdkfd_pre_reset(tmp_adev); 5033 + if (!amdgpu_sriov_vf(tmp_adev)) 5034 + amdgpu_amdkfd_pre_reset(tmp_adev); 5037 5035 5038 5036 /* 5039 5037 * Mark these ASICs to be reseted as untracked first ··· 5133 5129 drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res); 5134 5130 } 5135 5131 5136 - if (!amdgpu_device_has_dc_support(tmp_adev) && !job_signaled) { 5132 + if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) { 5137 5133 drm_helper_resume_force_mode(adev_to_drm(tmp_adev)); 5138 5134 } 5139 5135 ··· 5152 5148 5153 5149 skip_sched_resume: 5154 5150 list_for_each_entry(tmp_adev, device_list_handle, reset_list) { 5155 - /* unlock kfd */ 5156 - if (!need_emergency_restart) 5157 - amdgpu_amdkfd_post_reset(tmp_adev); 5151 + /* unlock kfd: SRIOV would do it separately */ 5152 + if (!need_emergency_restart && !amdgpu_sriov_vf(tmp_adev)) 5153 + amdgpu_amdkfd_post_reset(tmp_adev); 5158 5154 5159 5155 /* kfd_post_reset will do nothing if kfd device is not initialized, 5160 5156 * need to bring up kfd here if it's not be initialized before
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 157 157 [HDP_HWIP] = HDP_HWID, 158 158 [SDMA0_HWIP] = SDMA0_HWID, 159 159 [SDMA1_HWIP] = SDMA1_HWID, 160 + [SDMA2_HWIP] = SDMA2_HWID, 161 + [SDMA3_HWIP] = SDMA3_HWID, 160 162 [MMHUB_HWIP] = MMHUB_HWID, 161 163 [ATHUB_HWIP] = ATHUB_HWID, 162 164 [NBIO_HWIP] = NBIF_HWID, ··· 920 918 case IP_VERSION(3, 0, 64): 921 919 case IP_VERSION(3, 1, 1): 922 920 case IP_VERSION(3, 0, 2): 921 + case IP_VERSION(3, 0, 192): 923 922 amdgpu_device_ip_block_add(adev, &vcn_v3_0_ip_block); 924 923 if (!amdgpu_sriov_vf(adev)) 925 924 amdgpu_device_ip_block_add(adev, &jpeg_v3_0_ip_block);
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 135 135 break; 136 136 case IP_VERSION(3, 0, 0): 137 137 case IP_VERSION(3, 0, 64): 138 + case IP_VERSION(3, 0, 192): 138 139 if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0)) 139 140 fw_name = FIRMWARE_SIENNA_CICHLID; 140 141 else
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
··· 504 504 int i = 0; 505 505 506 506 for (i = 0; i < adev->mode_info.num_crtc; i++) 507 - if (adev->mode_info.crtcs[i]) 508 - hrtimer_cancel(&adev->mode_info.crtcs[i]->vblank_timer); 507 + if (adev->amdgpu_vkms_output[i].vblank_hrtimer.function) 508 + hrtimer_cancel(&adev->amdgpu_vkms_output[i].vblank_hrtimer); 509 509 510 510 kfree(adev->mode_info.bios_hardcoded_edid); 511 511 kfree(adev->amdgpu_vkms_output);
+4 -3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 4060 4060 4061 4061 gfx_v9_0_cp_enable(adev, false); 4062 4062 4063 - /* Skip suspend with A+A reset */ 4064 - if (adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) { 4065 - dev_dbg(adev->dev, "Device in reset. Skipping RLC halt\n"); 4063 + /* Skip stopping RLC with A+A reset or when RLC controls GFX clock */ 4064 + if ((adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) || 4065 + (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2))) { 4066 + dev_dbg(adev->dev, "Skipping RLC halt\n"); 4066 4067 return 0; 4067 4068 } 4068 4069
+1
drivers/gpu/drm/amd/amdgpu/nv.c
··· 183 183 switch (adev->ip_versions[UVD_HWIP][0]) { 184 184 case IP_VERSION(3, 0, 0): 185 185 case IP_VERSION(3, 0, 64): 186 + case IP_VERSION(3, 0, 192): 186 187 if (amdgpu_sriov_vf(adev)) { 187 188 if (encode) 188 189 *codecs = &sriov_sc_video_codecs_encode;
+4 -9
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 1574 1574 static void svm_range_restore_work(struct work_struct *work) 1575 1575 { 1576 1576 struct delayed_work *dwork = to_delayed_work(work); 1577 - struct amdkfd_process_info *process_info; 1578 1577 struct svm_range_list *svms; 1579 1578 struct svm_range *prange; 1580 1579 struct kfd_process *p; ··· 1593 1594 * the lifetime of this thread, kfd_process and mm will be valid. 1594 1595 */ 1595 1596 p = container_of(svms, struct kfd_process, svms); 1596 - process_info = p->kgd_process_info; 1597 1597 mm = p->mm; 1598 1598 if (!mm) 1599 1599 return; 1600 1600 1601 - mutex_lock(&process_info->lock); 1602 1601 svm_range_list_lock_and_flush_work(svms, mm); 1603 1602 mutex_lock(&svms->lock); 1604 1603 ··· 1649 1652 out_reschedule: 1650 1653 mutex_unlock(&svms->lock); 1651 1654 mmap_write_unlock(mm); 1652 - mutex_unlock(&process_info->lock); 1653 1655 1654 1656 /* If validation failed, reschedule another attempt */ 1655 1657 if (evicted_ranges) { ··· 2610 2614 2611 2615 if (atomic_read(&svms->drain_pagefaults)) { 2612 2616 pr_debug("draining retry fault, drop fault 0x%llx\n", addr); 2617 + r = 0; 2613 2618 goto out; 2614 2619 } 2615 2620 ··· 2620 2623 mm = get_task_mm(p->lead_thread); 2621 2624 if (!mm) { 2622 2625 pr_debug("svms 0x%p failed to get mm\n", svms); 2626 + r = 0; 2623 2627 goto out; 2624 2628 } 2625 2629 ··· 2658 2660 2659 2661 if (svm_range_skip_recover(prange)) { 2660 2662 amdgpu_gmc_filter_faults_remove(adev, addr, pasid); 2663 + r = 0; 2661 2664 goto out_unlock_range; 2662 2665 } 2663 2666 ··· 2667 2668 if (timestamp < AMDGPU_SVM_RANGE_RETRY_FAULT_PENDING) { 2668 2669 pr_debug("svms 0x%p [0x%lx %lx] already restored\n", 2669 2670 svms, prange->start, prange->last); 2671 + r = 0; 2670 2672 goto out_unlock_range; 2671 2673 } 2672 2674 ··· 3177 3177 svm_range_set_attr(struct kfd_process *p, uint64_t start, uint64_t size, 3178 3178 uint32_t nattr, struct kfd_ioctl_svm_attribute *attrs) 3179 3179 { 3180 - struct amdkfd_process_info *process_info = 
p->kgd_process_info; 3181 3180 struct mm_struct *mm = current->mm; 3182 3181 struct list_head update_list; 3183 3182 struct list_head insert_list; ··· 3194 3195 return r; 3195 3196 3196 3197 svms = &p->svms; 3197 - 3198 - mutex_lock(&process_info->lock); 3199 3198 3200 3199 svm_range_list_lock_and_flush_work(svms, mm); 3201 3200 ··· 3270 3273 mutex_unlock(&svms->lock); 3271 3274 mmap_read_unlock(mm); 3272 3275 out: 3273 - mutex_unlock(&process_info->lock); 3274 - 3275 3276 pr_debug("pasid 0x%x svms 0x%p [0x%llx 0x%llx] done, r=%d\n", p->pasid, 3276 3277 &p->svms, start, start + size - 1, r); 3277 3278
+8
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
··· 314 314 ret = -EINVAL; 315 315 goto cleanup; 316 316 } 317 + 318 + if ((aconn->base.connector_type != DRM_MODE_CONNECTOR_DisplayPort) && 319 + (aconn->base.connector_type != DRM_MODE_CONNECTOR_eDP)) { 320 + DRM_DEBUG_DRIVER("No DP connector available for CRC source\n"); 321 + ret = -EINVAL; 322 + goto cleanup; 323 + } 324 + 317 325 } 318 326 319 327 #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+16 -4
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 36 36 #include "dm_helpers.h" 37 37 38 38 #include "dc_link_ddc.h" 39 + #include "ddc_service_types.h" 40 + #include "dpcd_defs.h" 39 41 40 42 #include "i2caux_interface.h" 41 43 #include "dmub_cmd.h" ··· 159 157 }; 160 158 161 159 #if defined(CONFIG_DRM_AMD_DC_DCN) 160 + static bool needs_dsc_aux_workaround(struct dc_link *link) 161 + { 162 + if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 && 163 + (link->dpcd_caps.dpcd_rev.raw == DPCD_REV_14 || link->dpcd_caps.dpcd_rev.raw == DPCD_REV_12) && 164 + link->dpcd_caps.sink_count.bits.SINK_COUNT >= 2) 165 + return true; 166 + 167 + return false; 168 + } 169 + 162 170 static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector) 163 171 { 164 172 struct dc_sink *dc_sink = aconnector->dc_sink; ··· 178 166 u8 *dsc_branch_dec_caps = NULL; 179 167 180 168 aconnector->dsc_aux = drm_dp_mst_dsc_aux_for_port(port); 181 - #if defined(CONFIG_HP_HOOK_WORKAROUND) 169 + 182 170 /* 183 171 * drm_dp_mst_dsc_aux_for_port() will return NULL for certain configs 184 172 * because it only check the dsc/fec caps of the "port variable" and not the dock ··· 188 176 * Workaround: explicitly check the use case above and use the mst dock's aux as dsc_aux 189 177 * 190 178 */ 191 - 192 - if (!aconnector->dsc_aux && !port->parent->port_parent) 179 + if (!aconnector->dsc_aux && !port->parent->port_parent && 180 + needs_dsc_aux_workaround(aconnector->dc_link)) 193 181 aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux; 194 - #endif 182 + 195 183 if (!aconnector->dsc_aux) 196 184 return false; 197 185
+16
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 758 758 dal_ddc_service_set_transaction_type(link->ddc, 759 759 sink_caps->transaction_type); 760 760 761 + #if defined(CONFIG_DRM_AMD_DC_DCN) 762 + /* Apply work around for tunneled MST on certain USB4 docks. Always use DSC if dock 763 + * reports DSC support. 764 + */ 765 + if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA && 766 + link->type == dc_connection_mst_branch && 767 + link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 && 768 + link->dpcd_caps.dsc_caps.dsc_basic_caps.fields.dsc_support.DSC_SUPPORT && 769 + !link->dc->debug.dpia_debug.bits.disable_mst_dsc_work_around) 770 + link->wa_flags.dpia_mst_dsc_always_on = true; 771 + #endif 772 + 761 773 #if defined(CONFIG_DRM_AMD_DC_HDCP) 762 774 /* In case of fallback to SST when topology discovery below fails 763 775 * HDCP caps will be querried again later by the upper layer (caller ··· 1214 1202 if (link->type == dc_connection_mst_branch) { 1215 1203 LINK_INFO("link=%d, mst branch is now Disconnected\n", 1216 1204 link->link_index); 1205 + 1206 + /* Disable work around which keeps DSC on for tunneled MST on certain USB4 docks. */ 1207 + if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) 1208 + link->wa_flags.dpia_mst_dsc_always_on = false; 1217 1209 1218 1210 dm_helpers_dp_mst_stop_top_mgr(link->ctx, link); 1219 1211
+14 -10
drivers/gpu/drm/amd/display/dc/core/dc_resource.c
··· 1664 1664 if (old_stream->ignore_msa_timing_param != stream->ignore_msa_timing_param) 1665 1665 return false; 1666 1666 1667 + // Only Have Audio left to check whether it is same or not. This is a corner case for Tiled sinks 1668 + if (old_stream->audio_info.mode_count != stream->audio_info.mode_count) 1669 + return false; 1670 + 1667 1671 return true; 1668 1672 } 1669 1673 ··· 2256 2252 2257 2253 if (!new_ctx) 2258 2254 return DC_ERROR_UNEXPECTED; 2259 - #if defined(CONFIG_DRM_AMD_DC_DCN) 2260 - 2261 - /* 2262 - * Update link encoder to stream assignment. 2263 - * TODO: Split out reason allocation from validation. 2264 - */ 2265 - if (dc->res_pool->funcs->link_encs_assign && fast_validate == false) 2266 - dc->res_pool->funcs->link_encs_assign( 2267 - dc, new_ctx, new_ctx->streams, new_ctx->stream_count); 2268 - #endif 2269 2255 2270 2256 if (dc->res_pool->funcs->validate_global) { 2271 2257 result = dc->res_pool->funcs->validate_global(dc, new_ctx); ··· 2306 2312 if (result == DC_OK) 2307 2313 if (!dc->res_pool->funcs->validate_bandwidth(dc, new_ctx, fast_validate)) 2308 2314 result = DC_FAIL_BANDWIDTH_VALIDATE; 2315 + 2316 + #if defined(CONFIG_DRM_AMD_DC_DCN) 2317 + /* 2318 + * Only update link encoder to stream assignment after bandwidth validation passed. 2319 + * TODO: Split out assignment and validation. 2320 + */ 2321 + if (result == DC_OK && dc->res_pool->funcs->link_encs_assign && fast_validate == false) 2322 + dc->res_pool->funcs->link_encs_assign( 2323 + dc, new_ctx, new_ctx->streams, new_ctx->stream_count); 2324 + #endif 2309 2325 2310 2326 return result; 2311 2327 }
+2 -1
drivers/gpu/drm/amd/display/dc/dc.h
··· 508 508 uint32_t disable_dpia:1; 509 509 uint32_t force_non_lttpr:1; 510 510 uint32_t extend_aux_rd_interval:1; 511 - uint32_t reserved:29; 511 + uint32_t disable_mst_dsc_work_around:1; 512 + uint32_t reserved:28; 512 513 } bits; 513 514 uint32_t raw; 514 515 };
+2
drivers/gpu/drm/amd/display/dc/dc_link.h
··· 191 191 bool dp_skip_DID2; 192 192 bool dp_skip_reset_segment; 193 193 bool dp_mot_reset_segment; 194 + /* Some USB4 docks do not handle turning off MST DSC once it has been enabled. */ 195 + bool dpia_mst_dsc_always_on; 194 196 } wa_flags; 195 197 struct link_mst_stream_allocation_table mst_stream_alloc_table; 196 198
+1 -1
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1468 1468 dev_err(adev->dev, "Failed to disable smu features.\n"); 1469 1469 } 1470 1470 1471 - if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0) && 1471 + if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2) && 1472 1472 adev->gfx.rlc.funcs->stop) 1473 1473 adev->gfx.rlc.funcs->stop(adev); 1474 1474
+1
drivers/gpu/drm/drm_gem_shmem_helper.c
··· 9 9 #include <linux/shmem_fs.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/vmalloc.h> 12 + #include <linux/module.h> 12 13 13 14 #ifdef CONFIG_X86 14 15 #include <asm/set_memory.h>
+3
drivers/gpu/drm/i915/display/intel_display_types.h
··· 1640 1640 struct intel_dp_pcon_frl frl; 1641 1641 1642 1642 struct intel_psr psr; 1643 + 1644 + /* When we last wrote the OUI for eDP */ 1645 + unsigned long last_oui_write; 1643 1646 }; 1644 1647 1645 1648 enum lspcon_vendor {
+11
drivers/gpu/drm/i915/display/intel_dp.c
··· 29 29 #include <linux/i2c.h> 30 30 #include <linux/notifier.h> 31 31 #include <linux/slab.h> 32 + #include <linux/timekeeping.h> 32 33 #include <linux/types.h> 33 34 34 35 #include <asm/byteorder.h> ··· 1956 1955 1957 1956 if (drm_dp_dpcd_write(&intel_dp->aux, DP_SOURCE_OUI, oui, sizeof(oui)) < 0) 1958 1957 drm_err(&i915->drm, "Failed to write source OUI\n"); 1958 + 1959 + intel_dp->last_oui_write = jiffies; 1960 + } 1961 + 1962 + void intel_dp_wait_source_oui(struct intel_dp *intel_dp) 1963 + { 1964 + struct drm_i915_private *i915 = dp_to_i915(intel_dp); 1965 + 1966 + drm_dbg_kms(&i915->drm, "Performing OUI wait\n"); 1967 + wait_remaining_ms_from_jiffies(intel_dp->last_oui_write, 30); 1959 1968 } 1960 1969 1961 1970 /* If the device supports it, try to set the power state appropriately */
+2
drivers/gpu/drm/i915/display/intel_dp.h
··· 119 119 const struct intel_crtc_state *crtc_state); 120 120 void intel_dp_phy_test(struct intel_encoder *encoder); 121 121 122 + void intel_dp_wait_source_oui(struct intel_dp *intel_dp); 123 + 122 124 #endif /* __INTEL_DP_H__ */
+26 -6
drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c
··· 36 36 37 37 #include "intel_backlight.h" 38 38 #include "intel_display_types.h" 39 + #include "intel_dp.h" 39 40 #include "intel_dp_aux_backlight.h" 40 41 41 42 /* TODO: ··· 106 105 struct intel_panel *panel = &connector->panel; 107 106 int ret; 108 107 u8 tcon_cap[4]; 108 + 109 + intel_dp_wait_source_oui(intel_dp); 109 110 110 111 ret = drm_dp_dpcd_read(aux, INTEL_EDP_HDR_TCON_CAP0, tcon_cap, sizeof(tcon_cap)); 111 112 if (ret != sizeof(tcon_cap)) ··· 207 204 int ret; 208 205 u8 old_ctrl, ctrl; 209 206 207 + intel_dp_wait_source_oui(intel_dp); 208 + 210 209 ret = drm_dp_dpcd_readb(&intel_dp->aux, INTEL_EDP_HDR_GETSET_CTRL_PARAMS, &old_ctrl); 211 210 if (ret != 1) { 212 211 drm_err(&i915->drm, "Failed to read current backlight control mode: %d\n", ret); ··· 298 293 struct intel_panel *panel = &connector->panel; 299 294 struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); 300 295 296 + if (!panel->backlight.edp.vesa.info.aux_enable) { 297 + u32 pwm_level = intel_backlight_invert_pwm_level(connector, 298 + panel->backlight.pwm_level_max); 299 + 300 + panel->backlight.pwm_funcs->enable(crtc_state, conn_state, pwm_level); 301 + } 302 + 301 303 drm_edp_backlight_enable(&intel_dp->aux, &panel->backlight.edp.vesa.info, level); 302 304 } 303 305 ··· 316 304 struct intel_dp *intel_dp = enc_to_intel_dp(connector->encoder); 317 305 318 306 drm_edp_backlight_disable(&intel_dp->aux, &panel->backlight.edp.vesa.info); 307 + 308 + if (!panel->backlight.edp.vesa.info.aux_enable) 309 + panel->backlight.pwm_funcs->disable(old_conn_state, 310 + intel_backlight_invert_pwm_level(connector, 0)); 319 311 } 320 312 321 313 static int intel_dp_aux_vesa_setup_backlight(struct intel_connector *connector, enum pipe pipe) ··· 337 321 if (ret < 0) 338 322 return ret; 339 323 324 + if (!panel->backlight.edp.vesa.info.aux_enable) { 325 + ret = panel->backlight.pwm_funcs->setup(connector, pipe); 326 + if (ret < 0) { 327 + drm_err(&i915->drm, 328 + "Failed to setup PWM backlight controls for eDP backlight: %d\n", 329 + ret); 330 + return ret; 331 + } 332 + } 340 333 panel->backlight.max = panel->backlight.edp.vesa.info.max; 341 334 panel->backlight.min = 0; 342 335 if (current_mode == DP_EDP_BACKLIGHT_CONTROL_MODE_DPCD) { ··· 365 340 struct intel_dp *intel_dp = intel_attached_dp(connector); 366 341 struct drm_i915_private *i915 = dp_to_i915(intel_dp); 367 342 368 - /* TODO: We currently only support AUX only backlight configurations, not backlights which 369 - * require a mix of PWM and AUX controls to work. In the mean time, these machines typically 370 - * work just fine using normal PWM controls anyway. 371 - */ 372 - if ((intel_dp->edp_dpcd[1] & DP_EDP_BACKLIGHT_AUX_ENABLE_CAP) && 373 - drm_edp_backlight_supported(intel_dp->edp_dpcd)) { 343 + if (drm_edp_backlight_supported(intel_dp->edp_dpcd)) { 374 344 drm_dbg_kms(&i915->drm, "AUX Backlight Control Supported!\n"); 375 345 return true; 376 346 }
+1
drivers/gpu/drm/i915/gt/intel_gtt.c
··· 6 6 #include <linux/slab.h> /* fault-inject.h is not standalone! */ 7 7 8 8 #include <linux/fault-inject.h> 9 + #include <linux/sched/mm.h> 9 10 10 11 #include "gem/i915_gem_lmem.h" 11 12 #include "i915_trace.h"
-7
drivers/gpu/drm/i915/gt/intel_workarounds.c
··· 621 621 FF_MODE2_GS_TIMER_MASK, 622 622 FF_MODE2_GS_TIMER_224, 623 623 0, false); 624 - 625 - /* 626 - * Wa_14012131227:dg1 627 - * Wa_1508744258:tgl,rkl,dg1,adl-s,adl-p 628 - */ 629 - wa_masked_en(wal, GEN7_COMMON_SLICE_CHICKEN1, 630 - GEN9_RHWO_OPTIMIZATION_DISABLE); 631 624 } 632 625 633 626 static void dg1_ctx_workarounds_init(struct intel_engine_cs *engine,
+1
drivers/gpu/drm/i915/i915_request.c
··· 29 29 #include <linux/sched.h> 30 30 #include <linux/sched/clock.h> 31 31 #include <linux/sched/signal.h> 32 + #include <linux/sched/mm.h> 32 33 33 34 #include "gem/i915_gem_context.h" 34 35 #include "gt/intel_breadcrumbs.h"
+1
drivers/gpu/drm/lima/lima_device.c
··· 4 4 #include <linux/regulator/consumer.h> 5 5 #include <linux/reset.h> 6 6 #include <linux/clk.h> 7 + #include <linux/slab.h> 7 8 #include <linux/dma-mapping.h> 8 9 #include <linux/platform_device.h> 9 10
+1 -1
drivers/gpu/drm/msm/Kconfig
··· 4 4 tristate "MSM DRM" 5 5 depends on DRM 6 6 depends on ARCH_QCOM || SOC_IMX5 || COMPILE_TEST 7 + depends on COMMON_CLK 7 8 depends on IOMMU_SUPPORT 8 - depends on (OF && COMMON_CLK) || COMPILE_TEST 9 9 depends on QCOM_OCMEM || QCOM_OCMEM=n 10 10 depends on QCOM_LLCC || QCOM_LLCC=n 11 11 depends on QCOM_COMMAND_DB || QCOM_COMMAND_DB=n
+3 -3
drivers/gpu/drm/msm/Makefile
··· 23 23 hdmi/hdmi_i2c.o \ 24 24 hdmi/hdmi_phy.o \ 25 25 hdmi/hdmi_phy_8960.o \ 26 + hdmi/hdmi_phy_8996.o \ 26 27 hdmi/hdmi_phy_8x60.o \ 27 28 hdmi/hdmi_phy_8x74.o \ 29 + hdmi/hdmi_pll_8960.o \ 28 30 edp/edp.o \ 29 31 edp/edp_aux.o \ 30 32 edp/edp_bridge.o \ ··· 39 37 disp/mdp4/mdp4_dtv_encoder.o \ 40 38 disp/mdp4/mdp4_lcdc_encoder.o \ 41 39 disp/mdp4/mdp4_lvds_connector.o \ 40 + disp/mdp4/mdp4_lvds_pll.o \ 42 41 disp/mdp4/mdp4_irq.o \ 43 42 disp/mdp4/mdp4_kms.o \ 44 43 disp/mdp4/mdp4_plane.o \ ··· 119 116 dp/dp_audio.o 120 117 121 118 msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o 122 - msm-$(CONFIG_COMMON_CLK) += disp/mdp4/mdp4_lvds_pll.o 123 - msm-$(CONFIG_COMMON_CLK) += hdmi/hdmi_pll_8960.o 124 - msm-$(CONFIG_COMMON_CLK) += hdmi/hdmi_phy_8996.o 125 119 126 120 msm-$(CONFIG_DRM_MSM_HDMI_HDCP) += hdmi/hdmi_hdcp.o 127 121
+10 -10
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 1424 1424 { 1425 1425 struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 1426 1426 struct msm_gpu *gpu = &adreno_gpu->base; 1427 - u32 gpu_scid, cntl1_regval = 0; 1427 + u32 cntl1_regval = 0; 1428 1428 1429 1429 if (IS_ERR(a6xx_gpu->llc_mmio)) 1430 1430 return; 1431 1431 1432 1432 if (!llcc_slice_activate(a6xx_gpu->llc_slice)) { 1433 - gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); 1433 + u32 gpu_scid = llcc_get_slice_id(a6xx_gpu->llc_slice); 1434 1434 1435 1435 gpu_scid &= 0x1f; 1436 1436 cntl1_regval = (gpu_scid << 0) | (gpu_scid << 5) | (gpu_scid << 10) | 1437 1437 (gpu_scid << 15) | (gpu_scid << 20); 1438 + 1439 + /* On A660, the SCID programming for UCHE traffic is done in 1440 + * A6XX_GBIF_SCACHE_CNTL0[14:10] 1441 + */ 1442 + if (adreno_is_a660_family(adreno_gpu)) 1443 + gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, (0x1f << 10) | 1444 + (1 << 8), (gpu_scid << 10) | (1 << 8)); 1438 1445 } 1439 1446 1440 1447 /* ··· 1478 1471 } 1479 1472 1480 1473 gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL1, GENMASK(24, 0), cntl1_regval); 1481 - 1482 - /* On A660, the SCID programming for UCHE traffic is done in 1483 - * A6XX_GBIF_SCACHE_CNTL0[14:10] 1484 - */ 1485 - if (adreno_is_a660_family(adreno_gpu)) 1486 - gpu_rmw(gpu, REG_A6XX_GBIF_SCACHE_CNTL0, (0x1f << 10) | 1487 - (1 << 8), (gpu_scid << 10) | (1 << 8)); 1488 1474 } 1489 1475 1490 1476 static void a6xx_llc_slices_destroy(struct a6xx_gpu *a6xx_gpu) ··· 1640 1640 return (unsigned long)busy_time; 1641 1641 } 1642 1642 1643 - void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1643 + static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp) 1644 1644 { 1645 1645 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1646 1646 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
+2 -2
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
··· 777 777 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 778 778 779 779 a6xx_state->gmu_registers = state_kcalloc(a6xx_state, 780 - 2, sizeof(*a6xx_state->gmu_registers)); 780 + 3, sizeof(*a6xx_state->gmu_registers)); 781 781 782 782 if (!a6xx_state->gmu_registers) 783 783 return; 784 784 785 - a6xx_state->nr_gmu_registers = 2; 785 + a6xx_state->nr_gmu_registers = 3; 786 786 787 787 /* Get the CX GMU registers from AHB */ 788 788 _a6xx_get_gmu_registers(gpu, a6xx_state, &a6xx_gmu_reglist[0],
+17
drivers/gpu/drm/msm/dp/dp_aux.c
··· 33 33 bool read; 34 34 bool no_send_addr; 35 35 bool no_send_stop; 36 + bool initted; 36 37 u32 offset; 37 38 u32 segment; 38 39 ··· 332 331 } 333 332 334 333 mutex_lock(&aux->mutex); 334 + if (!aux->initted) { 335 + ret = -EIO; 336 + goto exit; 337 + } 335 338 336 339 dp_aux_update_offset_and_segment(aux, msg); 337 340 dp_aux_transfer_helper(aux, msg, true); ··· 385 380 } 386 381 387 382 aux->cmd_busy = false; 383 + 384 + exit: 388 385 mutex_unlock(&aux->mutex); 389 386 390 387 return ret; ··· 438 431 439 432 aux = container_of(dp_aux, struct dp_aux_private, dp_aux); 440 433 434 + mutex_lock(&aux->mutex); 435 + 441 436 dp_catalog_aux_enable(aux->catalog, true); 442 437 aux->retry_cnt = 0; 438 + aux->initted = true; 439 + 440 + mutex_unlock(&aux->mutex); 443 441 } 444 442 445 443 void dp_aux_deinit(struct drm_dp_aux *dp_aux) ··· 453 441 454 442 aux = container_of(dp_aux, struct dp_aux_private, dp_aux); 455 443 444 + mutex_lock(&aux->mutex); 445 + 446 + aux->initted = false; 456 447 dp_catalog_aux_enable(aux->catalog, false); 448 + 449 + mutex_unlock(&aux->mutex); 457 450 } 458 451 459 452 int dp_aux_register(struct drm_dp_aux *dp_aux)
+2
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 1658 1658 if (!prop) { 1659 1659 DRM_DEV_DEBUG(dev, 1660 1660 "failed to find data lane mapping, using default\n"); 1661 + /* Set the number of data lanes to 4 by default. */ 1662 + msm_host->num_data_lanes = 4; 1661 1663 return 0; 1662 1664 } 1663 1665
+1
drivers/gpu/drm/msm/msm_debugfs.c
··· 77 77 goto free_priv; 78 78 79 79 pm_runtime_get_sync(&gpu->pdev->dev); 80 + msm_gpu_hw_init(gpu); 80 81 show_priv->state = gpu->funcs->gpu_state_get(gpu); 81 82 pm_runtime_put_sync(&gpu->pdev->dev); 82 83
+32 -17
drivers/gpu/drm/msm/msm_drv.c
··· 967 967 return ret; 968 968 } 969 969 970 - static int msm_ioctl_wait_fence(struct drm_device *dev, void *data, 971 - struct drm_file *file) 970 + static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id, 971 + ktime_t timeout) 972 972 { 973 - struct msm_drm_private *priv = dev->dev_private; 974 - struct drm_msm_wait_fence *args = data; 975 - ktime_t timeout = to_ktime(args->timeout); 976 - struct msm_gpu_submitqueue *queue; 977 - struct msm_gpu *gpu = priv->gpu; 978 973 struct dma_fence *fence; 979 974 int ret; 980 975 981 - if (args->pad) { 982 - DRM_ERROR("invalid pad: %08x\n", args->pad); 976 + if (fence_id > queue->last_fence) { 977 + DRM_ERROR_RATELIMITED("waiting on invalid fence: %u (of %u)\n", 978 + fence_id, queue->last_fence); 983 979 return -EINVAL; 984 980 } 985 - 986 - if (!gpu) 987 - return 0; 988 - 989 - queue = msm_submitqueue_get(file->driver_priv, args->queueid); 990 - if (!queue) 991 - return -ENOENT; 992 981 993 982 /* 994 983 * Map submitqueue scoped "seqno" (which is actually an idr key) ··· 990 1001 ret = mutex_lock_interruptible(&queue->lock); 991 1002 if (ret) 992 1003 return ret; 993 - fence = idr_find(&queue->fence_idr, args->fence); 1004 + fence = idr_find(&queue->fence_idr, fence_id); 994 1005 if (fence) 995 1006 fence = dma_fence_get_rcu(fence); 996 1007 mutex_unlock(&queue->lock); ··· 1006 1017 } 1007 1018 1008 1019 dma_fence_put(fence); 1020 + 1021 + return ret; 1022 + } 1023 + 1024 + static int msm_ioctl_wait_fence(struct drm_device *dev, void *data, 1025 + struct drm_file *file) 1026 + { 1027 + struct msm_drm_private *priv = dev->dev_private; 1028 + struct drm_msm_wait_fence *args = data; 1029 + struct msm_gpu_submitqueue *queue; 1030 + int ret; 1031 + 1032 + if (args->pad) { 1033 + DRM_ERROR("invalid pad: %08x\n", args->pad); 1034 + return -EINVAL; 1035 + } 1036 + 1037 + if (!priv->gpu) 1038 + return 0; 1039 + 1040 + queue = msm_submitqueue_get(file->driver_priv, args->queueid); 1041 + if (!queue) 1042 + return -ENOENT; 1043 + 1044 + ret = wait_fence(queue, args->fence, to_ktime(args->timeout)); 1045 + 1009 1046 msm_submitqueue_put(queue); 1010 1047 1011 1048 return ret;
+2 -3
drivers/gpu/drm/msm/msm_gem.c
··· 1056 1056 { 1057 1057 struct msm_gem_object *msm_obj = to_msm_bo(obj); 1058 1058 1059 - vma->vm_flags &= ~VM_PFNMAP; 1060 - vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND; 1059 + vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP; 1061 1060 vma->vm_page_prot = msm_gem_pgprot(msm_obj, vm_get_page_prot(vma->vm_flags)); 1062 1061 1063 1062 return 0; ··· 1120 1121 break; 1121 1122 fallthrough; 1122 1123 default: 1123 - DRM_DEV_ERROR(dev->dev, "invalid cache flag: %x\n", 1124 + DRM_DEV_DEBUG(dev->dev, "invalid cache flag: %x\n", 1124 1125 (flags & MSM_BO_CACHE_MASK)); 1125 1126 return -EINVAL; 1126 1127 }
+1
drivers/gpu/drm/msm/msm_gem_shrinker.c
··· 5 5 */ 6 6 7 7 #include <linux/vmalloc.h> 8 + #include <linux/sched/mm.h> 8 9 9 10 #include "msm_drv.h" 10 11 #include "msm_gem.h"
+2
drivers/gpu/drm/msm/msm_gem_submit.c
··· 772 772 args->nr_cmds); 773 773 if (IS_ERR(submit)) { 774 774 ret = PTR_ERR(submit); 775 + submit = NULL; 775 776 goto out_unlock; 776 777 } 777 778 ··· 905 904 drm_sched_entity_push_job(&submit->base); 906 905 907 906 args->fence = submit->fence_id; 907 + queue->last_fence = submit->fence_id; 908 908 909 909 msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs); 910 910 msm_process_post_deps(post_deps, args->nr_out_syncobjs,
+3
drivers/gpu/drm/msm/msm_gpu.h
··· 359 359 * @ring_nr: the ringbuffer used by this submitqueue, which is determined 360 360 * by the submitqueue's priority 361 361 * @faults: the number of GPU hangs associated with this submitqueue 362 + * @last_fence: the sequence number of the last allocated fence (for error 363 + * checking) 362 364 * @ctx: the per-drm_file context associated with the submitqueue (ie. 363 365 * which set of pgtables do submits jobs associated with the 364 366 * submitqueue use) ··· 376 374 u32 flags; 377 375 u32 ring_nr; 378 376 int faults; 377 + uint32_t last_fence; 379 378 struct msm_file_private *ctx; 380 379 struct list_head node; 381 380 struct idr fence_idr;
+9 -4
drivers/gpu/drm/msm/msm_gpu_devfreq.c
··· 20 20 struct msm_gpu *gpu = dev_to_gpu(dev); 21 21 struct dev_pm_opp *opp; 22 22 23 + /* 24 + * Note that devfreq_recommended_opp() can modify the freq 25 + * to something that actually is in the opp table: 26 + */ 23 27 opp = devfreq_recommended_opp(dev, freq, flags); 24 28 25 29 /* ··· 32 28 */ 33 29 if (gpu->devfreq.idle_freq) { 34 30 gpu->devfreq.idle_freq = *freq; 31 + dev_pm_opp_put(opp); 35 32 return 0; 36 33 } 37 34 ··· 208 203 struct msm_gpu *gpu = container_of(df, struct msm_gpu, devfreq); 209 204 unsigned long idle_freq, target_freq = 0; 210 205 211 - if (!df->devfreq) 212 - return; 213 - 214 206 /* 215 207 * Hold devfreq lock to synchronize with get_dev_status()/ 216 208 * target() callbacks ··· 229 227 { 230 228 struct msm_gpu_devfreq *df = &gpu->devfreq; 231 229 230 + if (!df->devfreq) 231 + return; 232 + 232 233 msm_hrtimer_queue_work(&df->idle_work, ms_to_ktime(1), 233 - HRTIMER_MODE_ABS); 234 + HRTIMER_MODE_REL); 234 235 }
+1
drivers/gpu/drm/ttm/ttm_tt.c
··· 34 34 #include <linux/sched.h> 35 35 #include <linux/shmem_fs.h> 36 36 #include <linux/file.h> 37 + #include <linux/module.h> 37 38 #include <drm/drm_cache.h> 38 39 #include <drm/ttm/ttm_bo_driver.h> 39 40
+19 -23
drivers/gpu/drm/vc4/vc4_kms.c
··· 337 337 struct drm_device *dev = state->dev; 338 338 struct vc4_dev *vc4 = to_vc4_dev(dev); 339 339 struct vc4_hvs *hvs = vc4->hvs; 340 - struct drm_crtc_state *old_crtc_state; 341 340 struct drm_crtc_state *new_crtc_state; 342 341 struct drm_crtc *crtc; 343 342 struct vc4_hvs_state *old_hvs_state; 343 + unsigned int channel; 344 344 int i; 345 345 346 346 for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) { ··· 353 353 vc4_hvs_mask_underrun(dev, vc4_crtc_state->assigned_channel); 354 354 } 355 355 356 - if (vc4->hvs->hvs5) 357 - clk_set_min_rate(hvs->core_clk, 500000000); 358 - 359 356 old_hvs_state = vc4_hvs_get_old_global_state(state); 360 - if (!old_hvs_state) 357 + if (IS_ERR(old_hvs_state)) 361 358 return; 362 359 363 - for_each_old_crtc_in_state(state, crtc, old_crtc_state, i) { 364 - struct vc4_crtc_state *vc4_crtc_state = 365 - to_vc4_crtc_state(old_crtc_state); 366 - unsigned int channel = vc4_crtc_state->assigned_channel; 360 + for (channel = 0; channel < HVS_NUM_CHANNELS; channel++) { 361 + struct drm_crtc_commit *commit; 367 362 int ret; 368 - 369 - if (channel == VC4_HVS_CHANNEL_DISABLED) 370 - continue; 371 363 372 364 if (!old_hvs_state->fifo_state[channel].in_use) 373 365 continue; 374 366 375 - ret = drm_crtc_commit_wait(old_hvs_state->fifo_state[channel].pending_commit); 367 + commit = old_hvs_state->fifo_state[channel].pending_commit; 368 + if (!commit) 369 + continue; 370 + 371 + ret = drm_crtc_commit_wait(commit); 376 372 if (ret) 377 373 drm_err(dev, "Timed out waiting for commit\n"); 374 + 375 + drm_crtc_commit_put(commit); 376 + old_hvs_state->fifo_state[channel].pending_commit = NULL; 378 377 } 378 + 379 + if (vc4->hvs->hvs5) 380 + clk_set_min_rate(hvs->core_clk, 500000000); 379 381 380 382 drm_atomic_helper_commit_modeset_disables(dev, state); 381 383 ··· 412 410 unsigned int i; 413 411 414 412 hvs_state = vc4_hvs_get_new_global_state(state); 415 - if (!hvs_state) 416 - return -EINVAL; 413 + if (WARN_ON(IS_ERR(hvs_state))) 414 + return PTR_ERR(hvs_state); 417 415 418 416 for_each_new_crtc_in_state(state, crtc, crtc_state, i) { 419 417 struct vc4_crtc_state *vc4_crtc_state = ··· 670 668 671 669 for (i = 0; i < HVS_NUM_CHANNELS; i++) { 672 670 state->fifo_state[i].in_use = old_state->fifo_state[i].in_use; 673 - 674 - if (!old_state->fifo_state[i].pending_commit) 675 - continue; 676 - 677 - state->fifo_state[i].pending_commit = 678 - drm_crtc_commit_get(old_state->fifo_state[i].pending_commit); 679 671 680 672 return &state->base; 681 673 } ··· 758 762 unsigned int i; 759 763 760 764 hvs_new_state = vc4_hvs_get_global_state(state); 761 - if (!hvs_new_state) 762 - return -EINVAL; 765 + if (IS_ERR(hvs_new_state)) 766 + return PTR_ERR(hvs_new_state); 763 767 764 768 for (i = 0; i < ARRAY_SIZE(hvs_new_state->fifo_state); i++) 765 769 if (!hvs_new_state->fifo_state[i].in_use)
+1 -41
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 157 157 schedule_work(&vgdev->config_changed_work); 158 158 } 159 159 160 - static __poll_t virtio_gpu_poll(struct file *filp, 161 - struct poll_table_struct *wait) 162 - { 163 - struct drm_file *drm_file = filp->private_data; 164 - struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv; 165 - struct drm_device *dev = drm_file->minor->dev; 166 - struct virtio_gpu_device *vgdev = dev->dev_private; 167 - struct drm_pending_event *e = NULL; 168 - __poll_t mask = 0; 169 - 170 - if (!vgdev->has_virgl_3d || !vfpriv || !vfpriv->ring_idx_mask) 171 - return drm_poll(filp, wait); 172 - 173 - poll_wait(filp, &drm_file->event_wait, wait); 174 - 175 - if (!list_empty(&drm_file->event_list)) { 176 - spin_lock_irq(&dev->event_lock); 177 - e = list_first_entry(&drm_file->event_list, 178 - struct drm_pending_event, link); 179 - drm_file->event_space += e->event->length; 180 - list_del(&e->link); 181 - spin_unlock_irq(&dev->event_lock); 182 - 183 - kfree(e); 184 - mask |= EPOLLIN | EPOLLRDNORM; 185 - } 186 - 187 - return mask; 188 - } 189 - 190 160 static struct virtio_device_id id_table[] = { 191 161 { VIRTIO_ID_GPU, VIRTIO_DEV_ANY_ID }, 192 162 { 0 }, ··· 196 226 MODULE_AUTHOR("Gerd Hoffmann <kraxel@redhat.com>"); 197 227 MODULE_AUTHOR("Alon Levy"); 198 228 199 - static const struct file_operations virtio_gpu_driver_fops = { 200 - .owner = THIS_MODULE, 201 - .open = drm_open, 202 - .release = drm_release, 203 - .unlocked_ioctl = drm_ioctl, 204 - .compat_ioctl = drm_compat_ioctl, 205 - .poll = virtio_gpu_poll, 206 - .read = drm_read, 207 - .llseek = noop_llseek, 208 - .mmap = drm_gem_mmap 209 - }; 229 + DEFINE_DRM_GEM_FOPS(virtio_gpu_driver_fops); 210 230 211 231 static const struct drm_driver driver = { 212 232 .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
-1
drivers/gpu/drm/virtio/virtgpu_drv.h
··· 138 138 spinlock_t lock; 139 139 }; 140 140 141 - #define VIRTGPU_EVENT_FENCE_SIGNALED_INTERNAL 0x10000000 142 141 struct virtio_gpu_fence_event { 143 142 struct drm_pending_event base; 144 143 struct drm_event event;
+1 -1
drivers/gpu/drm/virtio/virtgpu_ioctl.c
··· 54 54 if (!e) 55 55 return -ENOMEM; 56 56 57 - e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED_INTERNAL; 57 + e->event.type = VIRTGPU_EVENT_FENCE_SIGNALED; 58 58 e->event.length = sizeof(e->event); 59 59 60 60 ret = drm_event_reserve_init(dev, file, &e->base, &e->event);
+5 -5
drivers/hid/Kconfig
··· 207 207 208 208 config HID_CHICONY 209 209 tristate "Chicony devices" 210 - depends on HID 210 + depends on USB_HID 211 211 default !EXPERT 212 212 help 213 213 Support for Chicony Tactical pad and special keys on Chicony keyboards. 214 214 215 215 config HID_CORSAIR 216 216 tristate "Corsair devices" 217 - depends on HID && USB && LEDS_CLASS 217 + depends on USB_HID && LEDS_CLASS 218 218 help 219 219 Support for Corsair devices that are not fully compliant with the 220 220 HID standard. ··· 245 245 246 246 config HID_PRODIKEYS 247 247 tristate "Prodikeys PC-MIDI Keyboard support" 248 - depends on HID && SND 248 + depends on USB_HID && SND 249 249 select SND_RAWMIDI 250 250 help 251 251 Support for Prodikeys PC-MIDI Keyboard device support. ··· 560 560 561 561 config HID_LOGITECH 562 562 tristate "Logitech devices" 563 - depends on HID 563 + depends on USB_HID 564 564 depends on LEDS_CLASS 565 565 default !EXPERT 566 566 help ··· 951 951 952 952 config HID_SAMSUNG 953 953 tristate "Samsung InfraRed remote control or keyboards" 954 - depends on HID 954 + depends on USB_HID 955 955 help 956 956 Support for Samsung InfraRed remote control or keyboards. 957 957
+2 -4
drivers/hid/hid-asus.c
··· 1028 1028 if (drvdata->quirks & QUIRK_IS_MULTITOUCH) 1029 1029 drvdata->tp = &asus_i2c_tp; 1030 1030 1031 - if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && 1032 - hid_is_using_ll_driver(hdev, &usb_hid_driver)) { 1031 + if ((drvdata->quirks & QUIRK_T100_KEYBOARD) && hid_is_usb(hdev)) { 1033 1032 struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 1034 1033 1035 1034 if (intf->altsetting->desc.bInterfaceNumber == T100_TPAD_INTF) { ··· 1056 1057 drvdata->tp = &asus_t100chi_tp; 1057 1058 } 1058 1059 1059 - if ((drvdata->quirks & QUIRK_MEDION_E1239T) && 1060 - hid_is_using_ll_driver(hdev, &usb_hid_driver)) { 1060 + if ((drvdata->quirks & QUIRK_MEDION_E1239T) && hid_is_usb(hdev)) { 1061 1061 struct usb_host_interface *alt = 1062 1062 to_usb_interface(hdev->dev.parent)->altsetting; 1063 1063
+1 -1
drivers/hid/hid-bigbenff.c
··· 191 191 struct bigben_device, worker); 192 192 struct hid_field *report_field = bigben->report->field[0]; 193 193 194 - if (bigben->removed) 194 + if (bigben->removed || !report_field) 195 195 return; 196 196 197 197 if (bigben->work_led) {
+3
drivers/hid/hid-chicony.c
··· 114 114 { 115 115 int ret; 116 116 117 + if (!hid_is_usb(hdev)) 118 + return -EINVAL; 119 + 117 120 hdev->quirks |= HID_QUIRK_INPUT_PER_APP; 118 121 ret = hid_parse(hdev); 119 122 if (ret) {
+6 -1
drivers/hid/hid-corsair.c
··· 553 553 int ret; 554 554 unsigned long quirks = id->driver_data; 555 555 struct corsair_drvdata *drvdata; 556 - struct usb_interface *usbif = to_usb_interface(dev->dev.parent); 556 + struct usb_interface *usbif; 557 + 558 + if (!hid_is_usb(dev)) 559 + return -EINVAL; 560 + 561 + usbif = to_usb_interface(dev->dev.parent); 557 562 558 563 drvdata = devm_kzalloc(&dev->dev, sizeof(struct corsair_drvdata), 559 564 GFP_KERNEL);
+1 -1
drivers/hid/hid-elan.c
··· 50 50 51 51 static int is_not_elan_touchpad(struct hid_device *hdev) 52 52 { 53 - if (hdev->bus == BUS_USB) { 53 + if (hid_is_usb(hdev)) { 54 54 struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 55 55 56 56 return (intf->altsetting->desc.bInterfaceNumber !=
+3
drivers/hid/hid-elo.c
··· 230 230 int ret; 231 231 struct usb_device *udev; 232 232 233 + if (!hid_is_usb(hdev)) 234 + return -EINVAL; 235 + 233 236 priv = kzalloc(sizeof(*priv), GFP_KERNEL); 234 237 if (!priv) 235 238 return -ENOMEM;
+3
drivers/hid/hid-ft260.c
··· 915 915 struct ft260_get_chip_version_report version; 916 916 int ret; 917 917 918 + if (!hid_is_usb(hdev)) 919 + return -EINVAL; 920 + 918 921 dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL); 919 922 if (!dev) 920 923 return -ENOMEM;
+2
drivers/hid/hid-google-hammer.c
··· 586 586 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 587 587 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_DON) }, 588 588 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 589 + USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_EEL) }, 590 + { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 589 591 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_HAMMER) }, 590 592 { HID_DEVICE(BUS_USB, HID_GROUP_GENERIC, 591 593 USB_VENDOR_ID_GOOGLE, USB_DEVICE_ID_GOOGLE_MAGNEMITE) },
+7 -2
drivers/hid/hid-holtek-kbd.c
··· 140 140 static int holtek_kbd_probe(struct hid_device *hdev, 141 141 const struct hid_device_id *id) 142 142 { 143 - struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 144 - int ret = hid_parse(hdev); 143 + struct usb_interface *intf; 144 + int ret; 145 145 146 + if (!hid_is_usb(hdev)) 147 + return -EINVAL; 148 + 149 + ret = hid_parse(hdev); 146 150 if (!ret) 147 151 ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT); 148 152 153 + intf = to_usb_interface(hdev->dev.parent); 149 154 if (!ret && intf->cur_altsetting->desc.bInterfaceNumber == 1) { 150 155 struct hid_input *hidinput; 151 156 list_for_each_entry(hidinput, &hdev->inputs, list) {
+9
drivers/hid/hid-holtek-mouse.c
··· 62 62 return rdesc; 63 63 } 64 64 65 + static int holtek_mouse_probe(struct hid_device *hdev, 66 + const struct hid_device_id *id) 67 + { 68 + if (!hid_is_usb(hdev)) 69 + return -EINVAL; 70 + return 0; 71 + } 72 + 65 73 static const struct hid_device_id holtek_mouse_devices[] = { 66 74 { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, 67 75 USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) }, ··· 91 83 .name = "holtek_mouse", 92 84 .id_table = holtek_mouse_devices, 93 85 .report_fixup = holtek_mouse_report_fixup, 86 + .probe = holtek_mouse_probe, 94 87 }; 95 88 96 89 module_hid_driver(holtek_mouse_driver);
+3
drivers/hid/hid-ids.h
··· 399 399 #define USB_DEVICE_ID_HP_X2_10_COVER 0x0755 400 400 #define I2C_DEVICE_ID_HP_ENVY_X360_15 0x2d05 401 401 #define I2C_DEVICE_ID_HP_SPECTRE_X360_15 0x2817 402 + #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 402 403 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 403 404 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A 404 405 ··· 502 501 #define USB_DEVICE_ID_GOOGLE_MAGNEMITE 0x503d 503 502 #define USB_DEVICE_ID_GOOGLE_MOONBALL 0x5044 504 503 #define USB_DEVICE_ID_GOOGLE_DON 0x5050 504 + #define USB_DEVICE_ID_GOOGLE_EEL 0x5057 505 505 506 506 #define USB_VENDOR_ID_GOTOP 0x08f2 507 507 #define USB_DEVICE_ID_SUPER_Q2 0x007f ··· 888 886 #define USB_DEVICE_ID_MS_TOUCH_COVER_2 0x07a7 889 887 #define USB_DEVICE_ID_MS_TYPE_COVER_2 0x07a9 890 888 #define USB_DEVICE_ID_MS_POWER_COVER 0x07da 889 + #define USB_DEVICE_ID_MS_SURFACE3_COVER 0x07de 891 890 #define USB_DEVICE_ID_MS_XBOX_ONE_S_CONTROLLER 0x02fd 892 891 #define USB_DEVICE_ID_MS_PIXART_MOUSE 0x00cb 893 892 #define USB_DEVICE_ID_8BITDO_SN30_PRO_PLUS 0x02e0
+2
drivers/hid/hid-input.c
··· 325 325 HID_BATTERY_QUIRK_IGNORE }, 326 326 { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), 327 327 HID_BATTERY_QUIRK_IGNORE }, 328 + { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), 329 + HID_BATTERY_QUIRK_IGNORE }, 328 330 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_ENVY_X360_15), 329 331 HID_BATTERY_QUIRK_IGNORE }, 330 332 { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15),
+8 -2
drivers/hid/hid-lg.c
··· 749 749 750 750 static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id) 751 751 { 752 - struct usb_interface *iface = to_usb_interface(hdev->dev.parent); 753 - __u8 iface_num = iface->cur_altsetting->desc.bInterfaceNumber; 752 + struct usb_interface *iface; 753 + __u8 iface_num; 754 754 unsigned int connect_mask = HID_CONNECT_DEFAULT; 755 755 struct lg_drv_data *drv_data; 756 756 int ret; 757 + 758 + if (!hid_is_usb(hdev)) 759 + return -EINVAL; 760 + 761 + iface = to_usb_interface(hdev->dev.parent); 762 + iface_num = iface->cur_altsetting->desc.bInterfaceNumber; 757 763 758 764 /* G29 only work with the 1st interface */ 759 765 if ((hdev->product == USB_DEVICE_ID_LOGITECH_G29_WHEEL) &&
+1 -1
drivers/hid/hid-logitech-dj.c
··· 1777 1777 case recvr_type_bluetooth: no_dj_interfaces = 2; break; 1778 1778 case recvr_type_dinovo: no_dj_interfaces = 2; break; 1779 1779 } 1780 - if (hid_is_using_ll_driver(hdev, &usb_hid_driver)) { 1780 + if (hid_is_usb(hdev)) { 1781 1781 intf = to_usb_interface(hdev->dev.parent); 1782 1782 if (intf && intf->altsetting->desc.bInterfaceNumber >= 1783 1783 no_dj_interfaces) {
+8 -2
drivers/hid/hid-prodikeys.c
··· 798 798 static int pk_probe(struct hid_device *hdev, const struct hid_device_id *id) 799 799 { 800 800 int ret; 801 - struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 802 - unsigned short ifnum = intf->cur_altsetting->desc.bInterfaceNumber; 801 + struct usb_interface *intf; 802 + unsigned short ifnum; 803 803 unsigned long quirks = id->driver_data; 804 804 struct pk_device *pk; 805 805 struct pcmidi_snd *pm = NULL; 806 + 807 + if (!hid_is_usb(hdev)) 808 + return -EINVAL; 809 + 810 + intf = to_usb_interface(hdev->dev.parent); 811 + ifnum = intf->cur_altsetting->desc.bInterfaceNumber; 806 812 807 813 pk = kzalloc(sizeof(*pk), GFP_KERNEL); 808 814 if (pk == NULL) {
+1
drivers/hid/hid-quirks.c
··· 124 124 { HID_USB_DEVICE(USB_VENDOR_ID_MCS, USB_DEVICE_ID_MCS_GAMEPADBLOCK), HID_QUIRK_MULTI_INPUT }, 125 125 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PIXART_MOUSE), HID_QUIRK_ALWAYS_POLL }, 126 126 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER), HID_QUIRK_NO_INIT_REPORTS }, 127 + { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE3_COVER), HID_QUIRK_NO_INIT_REPORTS }, 127 128 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_SURFACE_PRO_2), HID_QUIRK_NO_INIT_REPORTS }, 128 129 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TOUCH_COVER_2), HID_QUIRK_NO_INIT_REPORTS }, 129 130 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_2), HID_QUIRK_NO_INIT_REPORTS },
+3
drivers/hid/hid-roccat-arvo.c
··· 344 344 { 345 345 int retval; 346 346 347 + if (!hid_is_usb(hdev)) 348 + return -EINVAL; 349 + 347 350 retval = hid_parse(hdev); 348 351 if (retval) { 349 352 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-isku.c
··· 324 324 { 325 325 int retval; 326 326 327 + if (!hid_is_usb(hdev)) 328 + return -EINVAL; 329 + 327 330 retval = hid_parse(hdev); 328 331 if (retval) { 329 332 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-kone.c
··· 749 749 { 750 750 int retval; 751 751 752 + if (!hid_is_usb(hdev)) 753 + return -EINVAL; 754 + 752 755 retval = hid_parse(hdev); 753 756 if (retval) { 754 757 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-koneplus.c
··· 431 431 { 432 432 int retval; 433 433 434 + if (!hid_is_usb(hdev)) 435 + return -EINVAL; 436 + 434 437 retval = hid_parse(hdev); 435 438 if (retval) { 436 439 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-konepure.c
··· 133 133 { 134 134 int retval; 135 135 136 + if (!hid_is_usb(hdev)) 137 + return -EINVAL; 138 + 136 139 retval = hid_parse(hdev); 137 140 if (retval) { 138 141 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-kovaplus.c
··· 501 501 { 502 502 int retval; 503 503 504 + if (!hid_is_usb(hdev)) 505 + return -EINVAL; 506 + 504 507 retval = hid_parse(hdev); 505 508 if (retval) { 506 509 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-lua.c
··· 160 160 { 161 161 int retval; 162 162 163 + if (!hid_is_usb(hdev)) 164 + return -EINVAL; 165 + 163 166 retval = hid_parse(hdev); 164 167 if (retval) { 165 168 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-pyra.c
··· 449 449 { 450 450 int retval; 451 451 452 + if (!hid_is_usb(hdev)) 453 + return -EINVAL; 454 + 452 455 retval = hid_parse(hdev); 453 456 if (retval) { 454 457 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-ryos.c
··· 141 141 { 142 142 int retval; 143 143 144 + if (!hid_is_usb(hdev)) 145 + return -EINVAL; 146 + 144 147 retval = hid_parse(hdev); 145 148 if (retval) { 146 149 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-roccat-savu.c
··· 113 113 { 114 114 int retval; 115 115 116 + if (!hid_is_usb(hdev)) 117 + return -EINVAL; 118 + 116 119 retval = hid_parse(hdev); 117 120 if (retval) { 118 121 hid_err(hdev, "parse failed\n");
+3
drivers/hid/hid-samsung.c
··· 152 152 int ret; 153 153 unsigned int cmask = HID_CONNECT_DEFAULT; 154 154 155 + if (!hid_is_usb(hdev)) 156 + return -EINVAL; 157 + 155 158 ret = hid_parse(hdev); 156 159 if (ret) { 157 160 hid_err(hdev, "parse failed\n");
+18 -6
drivers/hid/hid-sony.c
··· 3000 3000 sc->quirks = quirks; 3001 3001 hid_set_drvdata(hdev, sc); 3002 3002 sc->hdev = hdev; 3003 - usbdev = to_usb_device(sc->hdev->dev.parent->parent); 3004 3003 3005 3004 ret = hid_parse(hdev); 3006 3005 if (ret) { ··· 3037 3038 */ 3038 3039 if (!(hdev->claimed & HID_CLAIMED_INPUT)) { 3039 3040 hid_err(hdev, "failed to claim input\n"); 3040 - hid_hw_stop(hdev); 3041 - return -ENODEV; 3041 + ret = -ENODEV; 3042 + goto err; 3042 3043 } 3043 3044 3044 3045 if (sc->quirks & (GHL_GUITAR_PS3WIIU | GHL_GUITAR_PS4)) { 3046 + if (!hid_is_usb(hdev)) { 3047 + ret = -EINVAL; 3048 + goto err; 3049 + } 3050 + 3051 + usbdev = to_usb_device(sc->hdev->dev.parent->parent); 3052 + 3045 3053 sc->ghl_urb = usb_alloc_urb(0, GFP_ATOMIC); 3046 - if (!sc->ghl_urb) 3047 - return -ENOMEM; 3054 + if (!sc->ghl_urb) { 3055 + ret = -ENOMEM; 3056 + goto err; 3057 + } 3048 3058 3049 3059 if (sc->quirks & GHL_GUITAR_PS3WIIU) 3050 3060 ret = ghl_init_urb(sc, usbdev, ghl_ps3wiiu_magic_data, ··· 3063 3055 ARRAY_SIZE(ghl_ps4_magic_data)); 3064 3056 if (ret) { 3065 3057 hid_err(hdev, "error preparing URB\n"); 3066 - return ret; 3058 + goto err; 3067 3059 } 3068 3060 3069 3061 timer_setup(&sc->ghl_poke_timer, ghl_magic_poke, 0); ··· 3071 3063 jiffies + GHL_GUITAR_POKE_INTERVAL*HZ); 3072 3064 } 3073 3065 3066 + return ret; 3067 + 3068 + err: 3069 + hid_hw_stop(hdev); 3074 3070 return ret; 3075 3071 } 3076 3072
+3
drivers/hid/hid-thrustmaster.c
··· 274 274 int ret = 0; 275 275 struct tm_wheel *tm_wheel = NULL; 276 276 277 + if (!hid_is_usb(hdev)) 278 + return -EINVAL; 279 + 277 280 ret = hid_parse(hdev); 278 281 if (ret) { 279 282 hid_err(hdev, "parse failed with error %d\n", ret);
+1 -1
drivers/hid/hid-u2fzero.c
··· 311 311 unsigned int minor; 312 312 int ret; 313 313 314 - if (!hid_is_using_ll_driver(hdev, &usb_hid_driver)) 314 + if (!hid_is_usb(hdev)) 315 315 return -EINVAL; 316 316 317 317 dev = devm_kzalloc(&hdev->dev, sizeof(*dev), GFP_KERNEL);
+3
drivers/hid/hid-uclogic-core.c
··· 164 164 struct uclogic_drvdata *drvdata = NULL; 165 165 bool params_initialized = false; 166 166 167 + if (!hid_is_usb(hdev)) 168 + return -EINVAL; 169 + 167 170 /* 168 171 * libinput requires the pad interface to be on a different node 169 172 * than the pen, so use QUIRK_MULTI_INPUT for all tablets.
+1 -2
drivers/hid/hid-uclogic-params.c
··· 843 843 struct uclogic_params p = {0, }; 844 844 845 845 /* Check arguments */ 846 - if (params == NULL || hdev == NULL || 847 - !hid_is_using_ll_driver(hdev, &usb_hid_driver)) { 846 + if (params == NULL || hdev == NULL || !hid_is_usb(hdev)) { 848 847 rc = -EINVAL; 849 848 goto cleanup; 850 849 }
+4 -2
drivers/hid/intel-ish-hid/ipc/pci-ish.c
··· 266 266 267 267 if (ish_should_leave_d0i3(pdev) && !dev->suspend_flag 268 268 && IPC_IS_ISH_ILUP(fwsts)) { 269 - disable_irq_wake(pdev->irq); 269 + if (device_may_wakeup(&pdev->dev)) 270 + disable_irq_wake(pdev->irq); 270 271 271 272 ish_set_host_ready(dev); 272 273 ··· 338 337 */ 339 338 pci_save_state(pdev); 340 339 341 - enable_irq_wake(pdev->irq); 340 + if (device_may_wakeup(&pdev->dev)) 341 + enable_irq_wake(pdev->irq); 342 342 } 343 343 } else { 344 344 /*
+13 -6
drivers/hid/wacom_sys.c
··· 726 726 * Skip the query for this type and modify defaults based on 727 727 * interface number. 728 728 */ 729 - if (features->type == WIRELESS) { 729 + if (features->type == WIRELESS && intf) { 730 730 if (intf->cur_altsetting->desc.bInterfaceNumber == 0) 731 731 features->device_type = WACOM_DEVICETYPE_WL_MONITOR; 732 732 else ··· 2214 2214 if ((features->type == HID_GENERIC) && !strcmp("Wacom HID", features->name)) { 2215 2215 char *product_name = wacom->hdev->name; 2216 2216 2217 - if (hid_is_using_ll_driver(wacom->hdev, &usb_hid_driver)) { 2217 + if (hid_is_usb(wacom->hdev)) { 2218 2218 struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent); 2219 2219 struct usb_device *dev = interface_to_usbdev(intf); 2220 2220 product_name = dev->product; ··· 2450 2450 */ 2451 2451 2452 2452 wacom_destroy_battery(wacom); 2453 + 2454 + if (!usbdev) 2455 + return; 2453 2456 2454 2457 /* Stylus interface */ 2455 2458 hdev1 = usb_get_intfdata(usbdev->config->interface[1]); ··· 2733 2730 static int wacom_probe(struct hid_device *hdev, 2734 2731 const struct hid_device_id *id) 2735 2732 { 2736 - struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 2737 - struct usb_device *dev = interface_to_usbdev(intf); 2738 2733 struct wacom *wacom; 2739 2734 struct wacom_wac *wacom_wac; 2740 2735 struct wacom_features *features; ··· 2767 2766 wacom_wac->hid_data.inputmode = -1; 2768 2767 wacom_wac->mode_report = -1; 2769 2768 2770 - wacom->usbdev = dev; 2771 - wacom->intf = intf; 2769 + if (hid_is_usb(hdev)) { 2770 + struct usb_interface *intf = to_usb_interface(hdev->dev.parent); 2771 + struct usb_device *dev = interface_to_usbdev(intf); 2772 + 2773 + wacom->usbdev = dev; 2774 + wacom->intf = intf; 2775 + } 2776 + 2772 2777 mutex_init(&wacom->lock); 2773 2778 INIT_DELAYED_WORK(&wacom->init_work, wacom_init_work); 2774 2779 INIT_WORK(&wacom->wireless_work, wacom_wireless_work);
+3 -2
drivers/i2c/busses/i2c-cbus-gpio.c
··· 195 195 } 196 196 197 197 static const struct i2c_algorithm cbus_i2c_algo = { 198 - .smbus_xfer = cbus_i2c_smbus_xfer, 199 - .functionality = cbus_i2c_func, 198 + .smbus_xfer = cbus_i2c_smbus_xfer, 199 + .smbus_xfer_atomic = cbus_i2c_smbus_xfer, 200 + .functionality = cbus_i2c_func, 200 201 }; 201 202 202 203 static int cbus_i2c_remove(struct platform_device *pdev)
+2 -2
drivers/i2c/busses/i2c-rk3x.c
··· 423 423 if (!(ipd & REG_INT_MBRF)) 424 424 return; 425 425 426 - /* ack interrupt */ 427 - i2c_writel(i2c, REG_INT_MBRF, REG_IPD); 426 + /* ack interrupt (read also produces a spurious START flag, clear it too) */ 427 + i2c_writel(i2c, REG_INT_MBRF | REG_INT_START, REG_IPD); 428 428 429 429 /* Can only handle a maximum of 32 bytes at a time */ 430 430 if (len > 32)
+38 -7
drivers/i2c/busses/i2c-stm32f7.c
··· 1493 1493 { 1494 1494 struct stm32f7_i2c_dev *i2c_dev = data; 1495 1495 struct stm32f7_i2c_msg *f7_msg = &i2c_dev->f7_msg; 1496 + struct stm32_i2c_dma *dma = i2c_dev->dma; 1496 1497 void __iomem *base = i2c_dev->base; 1497 1498 u32 status, mask; 1498 1499 int ret = IRQ_HANDLED; ··· 1519 1518 dev_dbg(i2c_dev->dev, "<%s>: Receive NACK (addr %x)\n", 1520 1519 __func__, f7_msg->addr); 1521 1520 writel_relaxed(STM32F7_I2C_ICR_NACKCF, base + STM32F7_I2C_ICR); 1521 + if (i2c_dev->use_dma) { 1522 + stm32f7_i2c_disable_dma_req(i2c_dev); 1523 + dmaengine_terminate_async(dma->chan_using); 1524 + } 1522 1525 f7_msg->result = -ENXIO; 1523 1526 } 1524 1527 ··· 1538 1533 /* Clear STOP flag */ 1539 1534 writel_relaxed(STM32F7_I2C_ICR_STOPCF, base + STM32F7_I2C_ICR); 1540 1535 1541 - if (i2c_dev->use_dma) { 1536 + if (i2c_dev->use_dma && !f7_msg->result) { 1542 1537 ret = IRQ_WAKE_THREAD; 1543 1538 } else { 1544 1539 i2c_dev->master_mode = false; ··· 1551 1546 if (f7_msg->stop) { 1552 1547 mask = STM32F7_I2C_CR2_STOP; 1553 1548 stm32f7_i2c_set_bits(base + STM32F7_I2C_CR2, mask); 1554 - } else if (i2c_dev->use_dma) { 1549 + } else if (i2c_dev->use_dma && !f7_msg->result) { 1555 1550 ret = IRQ_WAKE_THREAD; 1556 1551 } else if (f7_msg->smbus) { 1557 1552 stm32f7_i2c_smbus_rep_start(i2c_dev); ··· 1588 1583 if (!ret) { 1589 1584 dev_dbg(i2c_dev->dev, "<%s>: Timed out\n", __func__); 1590 1585 stm32f7_i2c_disable_dma_req(i2c_dev); 1591 - dmaengine_terminate_all(dma->chan_using); 1586 + dmaengine_terminate_async(dma->chan_using); 1592 1587 f7_msg->result = -ETIMEDOUT; 1593 1588 } ··· 1665 1660 /* Disable dma */ 1666 1661 if (i2c_dev->use_dma) { 1667 1662 stm32f7_i2c_disable_dma_req(i2c_dev); 1668 - dmaengine_terminate_all(dma->chan_using); 1663 + dmaengine_terminate_async(dma->chan_using); 1669 1664 } 1670 1665 1671 1666 i2c_dev->master_mode = false; ··· 1701 1696 time_left = wait_for_completion_timeout(&i2c_dev->complete, 1702 1697 i2c_dev->adap.timeout); 1703 1698 ret = f7_msg->result; 1699 + if (ret) { 1700 + if (i2c_dev->use_dma) 1701 + dmaengine_synchronize(dma->chan_using); 1702 + 1703 + /* 1704 + * It is possible that some unsent data have already been 1705 + * written into TXDR. To avoid sending old data in a 1706 + * further transfer, flush TXDR in case of any error 1707 + */ 1708 + writel_relaxed(STM32F7_I2C_ISR_TXE, 1709 + i2c_dev->base + STM32F7_I2C_ISR); 1710 + goto pm_free; 1711 + } 1704 1712 1705 1713 if (!time_left) { 1706 1714 dev_dbg(i2c_dev->dev, "Access to slave 0x%x timed out\n", 1707 1715 i2c_dev->msg->addr); 1708 1716 if (i2c_dev->use_dma) 1709 - dmaengine_terminate_all(dma->chan_using); 1717 + dmaengine_terminate_sync(dma->chan_using); 1718 + stm32f7_i2c_wait_free_bus(i2c_dev); 1710 1719 ret = -ETIMEDOUT; 1711 1720 } 1712 1721 ··· 1763 1744 timeout = wait_for_completion_timeout(&i2c_dev->complete, 1764 1745 i2c_dev->adap.timeout); 1765 1746 ret = f7_msg->result; 1766 - if (ret) 1747 + if (ret) { 1748 + if (i2c_dev->use_dma) 1749 + dmaengine_synchronize(dma->chan_using); 1750 + 1751 + /* 1752 + * It is possible that some unsent data have already been 1753 + * written into TXDR. To avoid sending old data in a 1754 + * further transfer, flush TXDR in case of any error 1755 + */ 1756 + writel_relaxed(STM32F7_I2C_ISR_TXE, 1757 + i2c_dev->base + STM32F7_I2C_ISR); 1767 1758 goto pm_free; 1759 + } 1768 1760 1769 1761 if (!timeout) { 1770 1762 dev_dbg(dev, "Access to slave 0x%x timed out\n", f7_msg->addr); 1771 1763 if (i2c_dev->use_dma) 1772 - dmaengine_terminate_all(dma->chan_using); 1764 + dmaengine_terminate_sync(dma->chan_using); 1765 + stm32f7_i2c_wait_free_bus(i2c_dev); 1773 1766 ret = -ETIMEDOUT; 1774 1767 goto pm_free; 1775 1768 }
+8
drivers/mtd/devices/mtd_dataflash.c
··· 96 96 struct mtd_info mtd; 97 97 }; 98 98 99 + static const struct spi_device_id dataflash_dev_ids[] = { 100 + { "at45" }, 101 + { "dataflash" }, 102 + { }, 103 + }; 104 + MODULE_DEVICE_TABLE(spi, dataflash_dev_ids); 105 + 99 106 #ifdef CONFIG_OF 100 107 static const struct of_device_id dataflash_dt_ids[] = { 101 108 { .compatible = "atmel,at45", }, ··· 934 927 .name = "mtd_dataflash", 935 928 .of_match_table = of_match_ptr(dataflash_dt_ids), 936 929 }, 930 + .id_table = dataflash_dev_ids, 937 931 938 932 .probe = dataflash_probe, 939 933 .remove = dataflash_remove,
+1 -1
drivers/mtd/nand/raw/Kconfig
··· 26 26 config MTD_NAND_DENALI_DT 27 27 tristate "Denali NAND controller as a DT device" 28 28 select MTD_NAND_DENALI 29 - depends on HAS_DMA && HAVE_CLK && OF 29 + depends on HAS_DMA && HAVE_CLK && OF && HAS_IOMEM 30 30 help 31 31 Enable the driver for NAND flash on platforms using a Denali NAND 32 32 controller as a DT device.
+28 -8
drivers/mtd/nand/raw/fsmc_nand.c
··· 15 15 16 16 #include <linux/clk.h> 17 17 #include <linux/completion.h> 18 + #include <linux/delay.h> 18 19 #include <linux/dmaengine.h> 19 20 #include <linux/dma-direction.h> 20 21 #include <linux/dma-mapping.h> ··· 93 92 #define FSMC_NAND_BANK_SZ 0x20 94 93 95 94 #define FSMC_BUSY_WAIT_TIMEOUT (1 * HZ) 95 + 96 + /* 97 + * According to SPEAr300 Reference Manual (RM0082) 98 + * TOUDEL = 7ns (Output delay from the flip-flops to the board) 99 + * TINDEL = 5ns (Input delay from the board to the flipflop) 100 + */ 101 + #define TOUTDEL 7000 102 + #define TINDEL 5000 96 103 97 104 struct fsmc_nand_timings { 98 105 u8 tclr; ··· 286 277 { 287 278 unsigned long hclk = clk_get_rate(host->clk); 288 279 unsigned long hclkn = NSEC_PER_SEC / hclk; 289 - u32 thiz, thold, twait, tset; 280 + u32 thiz, thold, twait, tset, twait_min; 290 281 291 282 if (sdrt->tRC_min < 30000) 292 283 return -EOPNOTSUPP; ··· 318 309 else if (tims->thold > FSMC_THOLD_MASK) 319 310 tims->thold = FSMC_THOLD_MASK; 320 311 321 - twait = max(sdrt->tRP_min, sdrt->tWP_min); 322 - tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1; 323 - if (tims->twait == 0) 324 - tims->twait = 1; 325 - else if (tims->twait > FSMC_TWAIT_MASK) 326 - tims->twait = FSMC_TWAIT_MASK; 327 - 328 312 tset = max(sdrt->tCS_min - sdrt->tWP_min, 329 313 sdrt->tCEA_max - sdrt->tREA_max); 330 314 tims->tset = DIV_ROUND_UP(tset / 1000, hclkn) - 1; ··· 325 323 tims->tset = 1; 326 324 else if (tims->tset > FSMC_TSET_MASK) 327 325 tims->tset = FSMC_TSET_MASK; 326 + 327 + /* 328 + * According to SPEAr300 Reference Manual (RM0082) which gives more 329 + * information related to FSMSC timings than the SPEAr600 one (RM0305), 330 + * twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL 331 + */ 332 + twait_min = sdrt->tCEA_max - ((tims->tset + 1) * hclkn * 1000) 333 + + TOUTDEL + TINDEL; 334 + twait = max3(sdrt->tRP_min, sdrt->tWP_min, twait_min); 335 + 336 + tims->twait = DIV_ROUND_UP(twait / 1000, hclkn) - 1; 337 + if (tims->twait == 0) 338 + tims->twait = 1; 339 + else if (tims->twait > FSMC_TWAIT_MASK) 340 + tims->twait = FSMC_TWAIT_MASK; 328 341 329 342 return 0; 330 343 } ··· 681 664 instr->ctx.waitrdy.timeout_ms); 682 665 break; 683 666 } 667 + 668 + if (instr->delay_ns) 669 + ndelay(instr->delay_ns); 684 670 } 685 671 686 672 return ret;
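The tWAIT derivation in the fsmc_nand hunk can be checked in isolation. Below is a minimal userspace sketch of the same arithmetic, assuming timing inputs in picoseconds and an HCLK period (`hclkn`) in nanoseconds; `FSMC_TWAIT_MASK` is assumed 8-bit here, and `max3()`/`DIV_ROUND_UP()` are re-implemented locally rather than taken from kernel headers:

```c
#include <assert.h>
#include <stdint.h>

#define TOUTDEL 7000		/* ps, per the hunk above (RM0082) */
#define TINDEL  5000		/* ps */
#define FSMC_TWAIT_MASK 0xFF	/* assumed 8-bit field for this sketch */

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define MAX3(a, b, c) \
	((a) > (b) ? ((a) > (c) ? (a) : (c)) : ((b) > (c) ? (b) : (c)))

/* Mirror of the new twait derivation: tset is the already-programmed
 * register value; returns the register value for twait. */
static uint32_t fsmc_calc_twait(uint32_t trp_min, uint32_t twp_min,
				uint32_t tcea_max, uint32_t tset,
				uint32_t hclkn)
{
	uint32_t twait_min, twait, reg;

	/* twait >= tCEA - (tset * TCLK) + TOUTDEL + TINDEL */
	twait_min = tcea_max - ((tset + 1) * hclkn * 1000)
		    + TOUTDEL + TINDEL;
	twait = MAX3(trp_min, twp_min, twait_min);

	reg = DIV_ROUND_UP(twait / 1000, hclkn) - 1;
	if (reg == 0)
		reg = 1;
	else if (reg > FSMC_TWAIT_MASK)
		reg = FSMC_TWAIT_MASK;
	return reg;
}
```

For example, with a 6 ns HCLK, tset = 0, tCEA = 25 ns and tRP = tWP = 12 ns, twait_min is 25000 - 6000 + 12000 = 31000 ps, so the register value is ceil(31/6) - 1 = 5.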
+3 -3
drivers/mtd/nand/raw/nand_base.c
··· 926 926 struct nand_sdr_timings *spec_timings) 927 927 { 928 928 const struct nand_controller_ops *ops = chip->controller->ops; 929 - int best_mode = 0, mode, ret; 929 + int best_mode = 0, mode, ret = -EOPNOTSUPP; 930 930 931 931 iface->type = NAND_SDR_IFACE; 932 932 ··· 977 977 struct nand_nvddr_timings *spec_timings) 978 978 { 979 979 const struct nand_controller_ops *ops = chip->controller->ops; 980 - int best_mode = 0, mode, ret; 980 + int best_mode = 0, mode, ret = -EOPNOTSUPP; 981 981 982 982 iface->type = NAND_NVDDR_IFACE; 983 983 ··· 1837 1837 NAND_OP_CMD(NAND_CMD_ERASE1, 0), 1838 1838 NAND_OP_ADDR(2, addrs, 0), 1839 1839 NAND_OP_CMD(NAND_CMD_ERASE2, 1840 - NAND_COMMON_TIMING_MS(conf, tWB_max)), 1840 + NAND_COMMON_TIMING_NS(conf, tWB_max)), 1841 1841 NAND_OP_WAIT_RDY(NAND_COMMON_TIMING_MS(conf, tBERS_max), 1842 1842 0), 1843 1843 };
+8 -6
drivers/net/bonding/bond_alb.c
··· 1501 1501 struct slave *slave; 1502 1502 1503 1503 if (!bond_has_slaves(bond)) { 1504 - bond_info->tx_rebalance_counter = 0; 1504 + atomic_set(&bond_info->tx_rebalance_counter, 0); 1505 1505 bond_info->lp_counter = 0; 1506 1506 goto re_arm; 1507 1507 } 1508 1508 1509 1509 rcu_read_lock(); 1510 1510 1511 - bond_info->tx_rebalance_counter++; 1511 + atomic_inc(&bond_info->tx_rebalance_counter); 1512 1512 bond_info->lp_counter++; 1513 1513 1514 1514 /* send learning packets */ ··· 1530 1530 } 1531 1531 1532 1532 /* rebalance tx traffic */ 1533 - if (bond_info->tx_rebalance_counter >= BOND_TLB_REBALANCE_TICKS) { 1533 + if (atomic_read(&bond_info->tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) { 1534 1534 bond_for_each_slave_rcu(bond, slave, iter) { 1535 1535 tlb_clear_slave(bond, slave, 1); 1536 1536 if (slave == rcu_access_pointer(bond->curr_active_slave)) { ··· 1540 1540 bond_info->unbalanced_load = 0; 1541 1541 } 1542 1542 } 1543 - bond_info->tx_rebalance_counter = 0; 1543 + atomic_set(&bond_info->tx_rebalance_counter, 0); 1544 1544 } 1545 1545 1546 1546 if (bond_info->rlb_enabled) { ··· 1610 1610 tlb_init_slave(slave); 1611 1611 1612 1612 /* order a rebalance ASAP */ 1613 - bond->alb_info.tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS; 1613 + atomic_set(&bond->alb_info.tx_rebalance_counter, 1614 + BOND_TLB_REBALANCE_TICKS); 1614 1615 1615 1616 if (bond->alb_info.rlb_enabled) 1616 1617 bond->alb_info.rlb_rebalance = 1; ··· 1648 1647 rlb_clear_slave(bond, slave); 1649 1648 } else if (link == BOND_LINK_UP) { 1650 1649 /* order a rebalance ASAP */ 1651 - bond_info->tx_rebalance_counter = BOND_TLB_REBALANCE_TICKS; 1650 + atomic_set(&bond_info->tx_rebalance_counter, 1651 + BOND_TLB_REBALANCE_TICKS); 1652 1652 if (bond->alb_info.rlb_enabled) { 1653 1653 bond->alb_info.rlb_rebalance = 1; 1654 1654 /* If the updelay module parameter is smaller than the
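The bond_alb hunk converts `tx_rebalance_counter` to an `atomic_t` because it is bumped from the monitor work but reset from other contexts, so a plain-int read-modify-write could lose updates. A C11 `stdatomic` sketch of the same tick/reset logic (the `BOND_TLB_REBALANCE_TICKS` value here is a placeholder, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>

#define BOND_TLB_REBALANCE_TICKS 4	/* placeholder value for the sketch */

static atomic_int tx_rebalance_counter;

/* One monitor tick: increment atomically, rebalance and reset once the
 * threshold is reached. Returns 1 when a rebalance happened. */
static int monitor_tick(void)
{
	atomic_fetch_add(&tx_rebalance_counter, 1);

	if (atomic_load(&tx_rebalance_counter) >= BOND_TLB_REBALANCE_TICKS) {
		/* ... rebalance tx traffic ... */
		atomic_store(&tx_rebalance_counter, 0);
		return 1;
	}
	return 0;
}
```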
+7 -1
drivers/net/can/kvaser_pciefd.c
··· 248 248 #define KVASER_PCIEFD_SPACK_EWLR BIT(23) 249 249 #define KVASER_PCIEFD_SPACK_EPLR BIT(24) 250 250 251 + /* Kvaser KCAN_EPACK second word */ 252 + #define KVASER_PCIEFD_EPACK_DIR_TX BIT(0) 253 + 251 254 struct kvaser_pciefd; 252 255 253 256 struct kvaser_pciefd_can { ··· 1288 1285 1289 1286 can->err_rep_cnt++; 1290 1287 can->can.can_stats.bus_error++; 1291 - stats->rx_errors++; 1288 + if (p->header[1] & KVASER_PCIEFD_EPACK_DIR_TX) 1289 + stats->tx_errors++; 1290 + else 1291 + stats->rx_errors++; 1292 1292 1293 1293 can->bec.txerr = bec.txerr; 1294 1294 can->bec.rxerr = bec.rxerr;
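The kvaser_pciefd change stops counting every bus error as a receive error and instead uses the direction bit in the error packet. A small sketch of the same accounting, with a stand-in stats struct in place of `struct net_device_stats`:

```c
#include <assert.h>
#include <stdint.h>

#define KVASER_PCIEFD_EPACK_DIR_TX (1u << 0)	/* from the hunk above */

struct err_stats { unsigned int tx_errors, rx_errors; };

static struct err_stats demo_stats;

/* Attribute a bus error to the TX or RX counter based on the
 * direction bit in the second error-packet word. */
static void account_bus_error(struct err_stats *stats, uint32_t header1)
{
	if (header1 & KVASER_PCIEFD_EPACK_DIR_TX)
		stats->tx_errors++;
	else
		stats->rx_errors++;
}
```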
+27 -15
drivers/net/can/m_can/m_can.c
··· 204 204 205 205 /* Interrupts for version 3.0.x */ 206 206 #define IR_ERR_LEC_30X (IR_STE | IR_FOE | IR_ACKE | IR_BE | IR_CRCE) 207 - #define IR_ERR_BUS_30X (IR_ERR_LEC_30X | IR_WDI | IR_ELO | IR_BEU | \ 208 - IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \ 209 - IR_RF1L | IR_RF0L) 207 + #define IR_ERR_BUS_30X (IR_ERR_LEC_30X | IR_WDI | IR_BEU | IR_BEC | \ 208 + IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \ 209 + IR_RF0L) 210 210 #define IR_ERR_ALL_30X (IR_ERR_STATE | IR_ERR_BUS_30X) 211 211 212 212 /* Interrupts for version >= 3.1.x */ 213 213 #define IR_ERR_LEC_31X (IR_PED | IR_PEA) 214 - #define IR_ERR_BUS_31X (IR_ERR_LEC_31X | IR_WDI | IR_ELO | IR_BEU | \ 215 - IR_BEC | IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | \ 216 - IR_RF1L | IR_RF0L) 214 + #define IR_ERR_BUS_31X (IR_ERR_LEC_31X | IR_WDI | IR_BEU | IR_BEC | \ 215 + IR_TOO | IR_MRAF | IR_TSW | IR_TEFL | IR_RF1L | \ 216 + IR_RF0L) 217 217 #define IR_ERR_ALL_31X (IR_ERR_STATE | IR_ERR_BUS_31X) 218 218 219 219 /* Interrupt Line Select (ILS) */ ··· 517 517 err = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_DATA, 518 518 cf->data, DIV_ROUND_UP(cf->len, 4)); 519 519 if (err) 520 - goto out_fail; 520 + goto out_free_skb; 521 521 } 522 522 523 523 /* acknowledge rx fifo 0 */ ··· 532 532 533 533 return 0; 534 534 535 + out_free_skb: 536 + kfree_skb(skb); 535 537 out_fail: 536 538 netdev_err(dev, "FIFO read returned %d\n", err); 537 539 return err; ··· 812 810 { 813 811 if (irqstatus & IR_WDI) 814 812 netdev_err(dev, "Message RAM Watchdog event due to missing READY\n"); 815 - if (irqstatus & IR_ELO) 816 - netdev_err(dev, "Error Logging Overflow\n"); 817 813 if (irqstatus & IR_BEU) 818 814 netdev_err(dev, "Bit Error Uncorrected\n"); 819 815 if (irqstatus & IR_BEC) ··· 1494 1494 case 30: 1495 1495 /* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.x */ 1496 1496 can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO); 1497 - cdev->can.bittiming_const = &m_can_bittiming_const_30X; 1498 - cdev->can.data_bittiming_const = &m_can_data_bittiming_const_30X; 1497 + cdev->can.bittiming_const = cdev->bit_timing ? 1498 + cdev->bit_timing : &m_can_bittiming_const_30X; 1499 + 1500 + cdev->can.data_bittiming_const = cdev->data_timing ? 1501 + cdev->data_timing : 1502 + &m_can_data_bittiming_const_30X; 1499 1503 break; 1500 1504 case 31: 1501 1505 /* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.1.x */ 1502 1506 can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO); 1503 - cdev->can.bittiming_const = &m_can_bittiming_const_31X; 1504 - cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X; 1507 + cdev->can.bittiming_const = cdev->bit_timing ? 1508 + cdev->bit_timing : &m_can_bittiming_const_31X; 1509 + 1510 + cdev->can.data_bittiming_const = cdev->data_timing ? 1511 + cdev->data_timing : 1512 + &m_can_data_bittiming_const_31X; 1505 1513 break; 1506 1514 case 32: 1507 1515 case 33: 1508 1516 /* Support both MCAN version v3.2.x and v3.3.0 */ 1509 - cdev->can.bittiming_const = &m_can_bittiming_const_31X; 1510 - cdev->can.data_bittiming_const = &m_can_data_bittiming_const_31X; 1517 + cdev->can.bittiming_const = cdev->bit_timing ? 1518 + cdev->bit_timing : &m_can_bittiming_const_31X; 1519 + 1520 + cdev->can.data_bittiming_const = cdev->data_timing ? 1521 + cdev->data_timing : 1522 + &m_can_data_bittiming_const_31X; 1511 1523 1512 1524 cdev->can.ctrlmode_supported |= 1513 1525 (m_can_niso_supported(cdev) ?
+3
drivers/net/can/m_can/m_can.h
··· 85 85 struct sk_buff *tx_skb; 86 86 struct phy *transceiver; 87 87 88 + const struct can_bittiming_const *bit_timing; 89 + const struct can_bittiming_const *data_timing; 90 + 88 91 struct m_can_ops *ops; 89 92 90 93 int version;
+56 -6
drivers/net/can/m_can/m_can_pci.c
··· 18 18 19 19 #define M_CAN_PCI_MMIO_BAR 0 20 20 21 - #define M_CAN_CLOCK_FREQ_EHL 100000000 22 21 #define CTL_CSR_INT_CTL_OFFSET 0x508 22 + 23 + struct m_can_pci_config { 24 + const struct can_bittiming_const *bit_timing; 25 + const struct can_bittiming_const *data_timing; 26 + unsigned int clock_freq; 27 + }; 23 28 24 29 struct m_can_pci_priv { 25 30 struct m_can_classdev cdev; ··· 47 42 static int iomap_read_fifo(struct m_can_classdev *cdev, int offset, void *val, size_t val_count) 48 43 { 49 44 struct m_can_pci_priv *priv = cdev_to_priv(cdev); 45 + void __iomem *src = priv->base + offset; 50 46 51 - ioread32_rep(priv->base + offset, val, val_count); 47 + while (val_count--) { 48 + *(unsigned int *)val = ioread32(src); 49 + val += 4; 50 + src += 4; 51 + } 52 52 53 53 return 0; 54 54 } ··· 71 61 const void *val, size_t val_count) 72 62 { 73 63 struct m_can_pci_priv *priv = cdev_to_priv(cdev); 64 + void __iomem *dst = priv->base + offset; 74 65 75 - iowrite32_rep(priv->base + offset, val, val_count); 66 + while (val_count--) { 67 + iowrite32(*(unsigned int *)val, dst); 68 + val += 4; 69 + dst += 4; 70 + } 76 71 77 72 return 0; 78 73 } ··· 89 74 .read_fifo = iomap_read_fifo, 90 75 }; 91 76 77 + static const struct can_bittiming_const m_can_bittiming_const_ehl = { 78 + .name = KBUILD_MODNAME, 79 + .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ 80 + .tseg1_max = 64, 81 + .tseg2_min = 1, /* Time segment 2 = phase_seg2 */ 82 + .tseg2_max = 128, 83 + .sjw_max = 128, 84 + .brp_min = 1, 85 + .brp_max = 512, 86 + .brp_inc = 1, 87 + }; 88 + 89 + static const struct can_bittiming_const m_can_data_bittiming_const_ehl = { 90 + .name = KBUILD_MODNAME, 91 + .tseg1_min = 2, /* Time segment 1 = prop_seg + phase_seg1 */ 92 + .tseg1_max = 16, 93 + .tseg2_min = 1, /* Time segment 2 = phase_seg2 */ 94 + .tseg2_max = 8, 95 + .sjw_max = 4, 96 + .brp_min = 1, 97 + .brp_max = 32, 98 + .brp_inc = 1, 99 + }; 100 + 101 + static const struct m_can_pci_config m_can_pci_ehl = { 102 + .bit_timing = &m_can_bittiming_const_ehl, 103 + .data_timing = &m_can_data_bittiming_const_ehl, 104 + .clock_freq = 200000000, 105 + }; 106 + 92 107 static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id) 93 108 { 94 109 struct device *dev = &pci->dev; 110 + const struct m_can_pci_config *cfg; 95 111 struct m_can_classdev *mcan_class; 96 112 struct m_can_pci_priv *priv; 97 113 void __iomem *base; ··· 150 104 if (!mcan_class) 151 105 return -ENOMEM; 152 106 107 + cfg = (const struct m_can_pci_config *)id->driver_data; 108 + 153 109 priv = cdev_to_priv(mcan_class); 154 110 155 111 priv->base = base; ··· 163 115 mcan_class->dev = &pci->dev; 164 116 mcan_class->net->irq = pci_irq_vector(pci, 0); 165 117 mcan_class->pm_clock_support = 1; 166 - mcan_class->can.clock.freq = id->driver_data; 118 + mcan_class->bit_timing = cfg->bit_timing; 119 + mcan_class->data_timing = cfg->data_timing; 120 + mcan_class->can.clock.freq = cfg->clock_freq; 167 121 mcan_class->ops = &m_can_pci_ops; 168 122 169 123 pci_set_drvdata(pci, mcan_class); ··· 218 168 m_can_pci_suspend, m_can_pci_resume); 219 169 220 170 static const struct pci_device_id m_can_pci_id_table[] = { 221 - { PCI_VDEVICE(INTEL, 0x4bc1), M_CAN_CLOCK_FREQ_EHL, }, 222 - { PCI_VDEVICE(INTEL, 0x4bc2), M_CAN_CLOCK_FREQ_EHL, }, 171 + { PCI_VDEVICE(INTEL, 0x4bc1), (kernel_ulong_t)&m_can_pci_ehl, }, 172 + { PCI_VDEVICE(INTEL, 0x4bc2), (kernel_ulong_t)&m_can_pci_ehl, }, 223 173 { } /* Terminating Entry */ 224 174 }; 225 175 MODULE_DEVICE_TABLE(pci, m_can_pci_id_table);
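The m_can_pci hunk switches `driver_data` from a bare clock frequency to a pointer to a per-device config struct, which the probe casts back. A sketch of that lookup-table pattern, with the kernel's `kernel_ulong_t` cast replaced by a plain pointer and stand-in structs for this userspace illustration:

```c
#include <assert.h>

/* Stand-in for struct m_can_pci_config. */
struct m_can_cfg { unsigned int clock_freq; };

/* Stand-in for struct pci_device_id; the kernel stores driver_data
 * as a kernel_ulong_t and casts in probe. */
struct fake_pci_id { const void *driver_data; };

static const struct m_can_cfg ehl_cfg = { .clock_freq = 200000000 };
static const struct fake_pci_id ehl_id = { .driver_data = &ehl_cfg };

/* Probe-side recovery of the config from the matched id entry. */
static unsigned int probe_clock_freq(const struct fake_pci_id *id)
{
	const struct m_can_cfg *cfg = id->driver_data;

	return cfg->clock_freq;
}
```

The advantage over stuffing a scalar into `driver_data` is that new per-device fields (here the EHL bit-timing tables) can be added without changing the id-table format.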
+1 -1
drivers/net/can/pch_can.c
··· 692 692 cf->data[i + 1] = data_reg >> 8; 693 693 } 694 694 695 - netif_receive_skb(skb); 696 695 rcv_pkts++; 697 696 stats->rx_packets++; 698 697 quota--; 699 698 stats->rx_bytes += cf->len; 699 + netif_receive_skb(skb); 700 700 701 701 pch_fifo_thresh(priv, obj_num); 702 702 obj_num++;
+6 -1
drivers/net/can/sja1000/ems_pcmcia.c
··· 234 234 free_sja1000dev(dev); 235 235 } 236 236 237 - err = request_irq(dev->irq, &ems_pcmcia_interrupt, IRQF_SHARED, 237 + if (!card->channels) { 238 + err = -ENODEV; 239 + goto failure_cleanup; 240 + } 241 + 242 + err = request_irq(pdev->irq, &ems_pcmcia_interrupt, IRQF_SHARED, 238 243 DRV_NAME, card); 239 244 if (!err) 240 245 return 0;
+73 -28
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
··· 28 28 29 29 #include "kvaser_usb.h" 30 30 31 - /* Forward declaration */ 32 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg; 33 - 34 - #define CAN_USB_CLOCK 8000000 35 31 #define MAX_USBCAN_NET_DEVICES 2 36 32 37 33 /* Command header size */ ··· 75 79 #define CMD_FLUSH_QUEUE_REPLY 68 76 80 77 81 #define CMD_LEAF_LOG_MESSAGE 106 82 + 83 + /* Leaf frequency options */ 84 + #define KVASER_USB_LEAF_SWOPTION_FREQ_MASK 0x60 85 + #define KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK 0 86 + #define KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK BIT(5) 87 + #define KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK BIT(6) 78 88 79 89 /* error factors */ 80 90 #define M16C_EF_ACKE BIT(0) ··· 342 340 }; 343 341 }; 344 342 343 + static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = { 344 + .name = "kvaser_usb", 345 + .tseg1_min = KVASER_USB_TSEG1_MIN, 346 + .tseg1_max = KVASER_USB_TSEG1_MAX, 347 + .tseg2_min = KVASER_USB_TSEG2_MIN, 348 + .tseg2_max = KVASER_USB_TSEG2_MAX, 349 + .sjw_max = KVASER_USB_SJW_MAX, 350 + .brp_min = KVASER_USB_BRP_MIN, 351 + .brp_max = KVASER_USB_BRP_MAX, 352 + .brp_inc = KVASER_USB_BRP_INC, 353 + }; 354 + 355 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = { 356 + .clock = { 357 + .freq = 8000000, 358 + }, 359 + .timestamp_freq = 1, 360 + .bittiming_const = &kvaser_usb_leaf_bittiming_const, 361 + }; 362 + 363 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = { 364 + .clock = { 365 + .freq = 16000000, 366 + }, 367 + .timestamp_freq = 1, 368 + .bittiming_const = &kvaser_usb_leaf_bittiming_const, 369 + }; 370 + 371 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = { 372 + .clock = { 373 + .freq = 24000000, 374 + }, 375 + .timestamp_freq = 1, 376 + .bittiming_const = &kvaser_usb_leaf_bittiming_const, 377 + }; 378 + 379 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = { 380 + .clock = { 381 + .freq = 32000000, 382 + }, 383 + .timestamp_freq = 1, 384 + .bittiming_const = &kvaser_usb_leaf_bittiming_const, 385 + }; 386 + 345 387 static void * 346 388 kvaser_usb_leaf_frame_to_cmd(const struct kvaser_usb_net_priv *priv, 347 389 const struct sk_buff *skb, int *frame_len, ··· 517 471 return rc; 518 472 } 519 473 474 + static void kvaser_usb_leaf_get_software_info_leaf(struct kvaser_usb *dev, 475 + const struct leaf_cmd_softinfo *softinfo) 476 + { 477 + u32 sw_options = le32_to_cpu(softinfo->sw_options); 478 + 479 + dev->fw_version = le32_to_cpu(softinfo->fw_version); 480 + dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx); 481 + 482 + switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { 483 + case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: 484 + dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; 485 + break; 486 + case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: 487 + dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; 488 + break; 489 + case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: 490 + dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; 491 + break; 492 + } 493 + } 494 + 520 495 static int kvaser_usb_leaf_get_software_info_inner(struct kvaser_usb *dev) 521 496 { 522 497 struct kvaser_cmd cmd; ··· 553 486 554 487 switch (dev->card_data.leaf.family) { 555 488 case KVASER_LEAF: 556 - dev->fw_version = le32_to_cpu(cmd.u.leaf.softinfo.fw_version); 557 - dev->max_tx_urbs = 558 - le16_to_cpu(cmd.u.leaf.softinfo.max_outstanding_tx); 489 + kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo); 559 490 break; 560 491 case KVASER_USBCAN: 561 492 dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version); 562 493 dev->max_tx_urbs = 563 494 le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx); 495 + dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz; 564 496 break; 565 497 } 566 498 ··· 1291 1225 { 1292 1226 struct kvaser_usb_dev_card_data *card_data = &dev->card_data; 1293 1227 1294 - dev->cfg = &kvaser_usb_leaf_dev_cfg; 1295 1228 card_data->ctrlmode_supported |= CAN_CTRLMODE_3_SAMPLES; 1296 1229 1297 1230 return 0; 1298 1231 } 1299 - 1300 - static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = { 1301 - .name = "kvaser_usb", 1302 - .tseg1_min = KVASER_USB_TSEG1_MIN, 1303 - .tseg1_max = KVASER_USB_TSEG1_MAX, 1304 - .tseg2_min = KVASER_USB_TSEG2_MIN, 1305 - .tseg2_max = KVASER_USB_TSEG2_MAX, 1306 - .sjw_max = KVASER_USB_SJW_MAX, 1307 - .brp_min = KVASER_USB_BRP_MIN, 1308 - .brp_max = KVASER_USB_BRP_MAX, 1309 - .brp_inc = KVASER_USB_BRP_INC, 1310 - }; 1311 1232 1312 1233 static int kvaser_usb_leaf_set_bittiming(struct net_device *netdev) 1313 1234 { ··· 1400 1347 .dev_flush_queue = kvaser_usb_leaf_flush_queue, 1401 1348 .dev_read_bulk_callback = kvaser_usb_leaf_read_bulk_callback, 1402 1349 .dev_frame_to_cmd = kvaser_usb_leaf_frame_to_cmd, 1403 - }; 1404 - 1405 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg = { 1406 - .clock = { 1407 - .freq = CAN_USB_CLOCK, 1408 - }, 1409 - .timestamp_freq = 1, 1410 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 1411 1350 };
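The Leaf software-info change selects the device config from two frequency bits in `sw_options` instead of assuming a fixed 8 MHz clock. A sketch of the decode, with the mask values copied from the hunk and the function returning the clock in Hz (rather than a config pointer) to keep it self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* Bit values from the hunk above: BIT(5) and BIT(6) form the mask. */
#define FREQ_MASK	0x60
#define FREQ_16MHZ_CLK	0x00
#define FREQ_32MHZ_CLK	(1u << 5)
#define FREQ_24MHZ_CLK	(1u << 6)

/* Decode modeled on kvaser_usb_leaf_get_software_info_leaf();
 * returns 0 for the reserved encoding (both bits set). */
static uint32_t leaf_clock_from_sw_options(uint32_t sw_options)
{
	switch (sw_options & FREQ_MASK) {
	case FREQ_16MHZ_CLK:
		return 16000000;
	case FREQ_24MHZ_CLK:
		return 24000000;
	case FREQ_32MHZ_CLK:
		return 32000000;
	}
	return 0;
}
```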
+46 -37
drivers/net/dsa/mv88e6xxx/chip.c
··· 471 471 u16 reg; 472 472 int err; 473 473 474 + /* The 88e6250 family does not have the PHY detect bit. Instead, 475 + * report whether the port is internal. 476 + */ 477 + if (chip->info->family == MV88E6XXX_FAMILY_6250) 478 + return port < chip->info->num_internal_phys; 479 + 474 480 err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg); 475 481 if (err) { 476 482 dev_err(chip->dev, ··· 698 692 { 699 693 struct mv88e6xxx_chip *chip = ds->priv; 700 694 struct mv88e6xxx_port *p; 701 - int err; 695 + int err = 0; 702 696 703 697 p = &chip->ports[port]; 704 698 705 - /* FIXME: is this the correct test? If we're in fixed mode on an 706 - * internal port, why should we process this any different from 707 - * PHY mode? On the other hand, the port may be automedia between 708 - * an internal PHY and the serdes... 709 - */ 710 - if ((mode == MLO_AN_PHY) && mv88e6xxx_phy_is_internal(ds, port)) 711 - return; 712 - 713 699 mv88e6xxx_reg_lock(chip); 714 - /* In inband mode, the link may come up at any time while the link 715 - * is not forced down. Force the link down while we reconfigure the 716 - * interface mode. 717 - */ 718 - if (mode == MLO_AN_INBAND && p->interface != state->interface && 719 - chip->info->ops->port_set_link) 720 - chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN); 721 700 722 - err = mv88e6xxx_port_config_interface(chip, port, state->interface); 723 - if (err && err != -EOPNOTSUPP) 724 - goto err_unlock; 701 + if (mode != MLO_AN_PHY || !mv88e6xxx_phy_is_internal(ds, port)) { 702 + /* In inband mode, the link may come up at any time while the 703 + * link is not forced down. Force the link down while we 704 + * reconfigure the interface mode. 705 + */ 706 + if (mode == MLO_AN_INBAND && 707 + p->interface != state->interface && 708 + chip->info->ops->port_set_link) 709 + chip->info->ops->port_set_link(chip, port, 710 + LINK_FORCED_DOWN); 725 711 726 - err = mv88e6xxx_serdes_pcs_config(chip, port, mode, state->interface, 727 - state->advertising); 728 - /* FIXME: we should restart negotiation if something changed - which 729 - * is something we get if we convert to using phylinks PCS operations. 730 - */ 731 - if (err > 0) 732 - err = 0; 712 + err = mv88e6xxx_port_config_interface(chip, port, 713 + state->interface); 714 + if (err && err != -EOPNOTSUPP) 715 + goto err_unlock; 716 + 717 + err = mv88e6xxx_serdes_pcs_config(chip, port, mode, 718 + state->interface, 719 + state->advertising); 720 + /* FIXME: we should restart negotiation if something changed - 721 + * which is something we get if we convert to using phylinks 722 + * PCS operations. 723 + */ 724 + if (err > 0) 725 + err = 0; 726 + } 733 727 734 728 /* Undo the forced down state above after completing configuration 735 - * irrespective of its state on entry, which allows the link to come up. 729 + * irrespective of its state on entry, which allows the link to come 730 + * up in the in-band case where there is no separate SERDES. Also 731 + * ensure that the link can come up if the PPU is in use and we are 732 + * in PHY mode (we treat the PPU as an effective in-band mechanism.) 736 733 */ 737 - if (mode == MLO_AN_INBAND && p->interface != state->interface && 738 - chip->info->ops->port_set_link) 734 + if (chip->info->ops->port_set_link && 735 + ((mode == MLO_AN_INBAND && p->interface != state->interface) || 736 + (mode == MLO_AN_PHY && mv88e6xxx_port_ppu_updates(chip, port)))) 739 737 chip->info->ops->port_set_link(chip, port, LINK_UNFORCED); 740 738 741 739 p->interface = state->interface; ··· 762 752 ops = chip->info->ops; 763 753 764 754 mv88e6xxx_reg_lock(chip); 765 - /* Internal PHYs propagate their configuration directly to the MAC. 766 - * External PHYs depend on whether the PPU is enabled for this port. 755 + * Force the link down if we know the port may not be automatically 756 + * updated by the switch or if we are using fixed-link mode. 767 757 */ 768 - if (((!mv88e6xxx_phy_is_internal(ds, port) && 769 - !mv88e6xxx_port_ppu_updates(chip, port)) || 758 + if ((!mv88e6xxx_port_ppu_updates(chip, port) || 770 759 mode == MLO_AN_FIXED) && ops->port_sync_link) 771 760 err = ops->port_sync_link(chip, port, mode, false); 772 761 mv88e6xxx_reg_unlock(chip); ··· 788 779 ops = chip->info->ops; 789 780 790 781 mv88e6xxx_reg_lock(chip); 791 - /* Internal PHYs propagate their configuration directly to the MAC. 792 - * External PHYs depend on whether the PPU is enabled for this port. 782 + /* Configure and force the link up if we know that the port may not 783 + * automatically updated by the switch or if we are using fixed-link 784 + * mode. 793 785 */ 794 - if ((!mv88e6xxx_phy_is_internal(ds, port) && 795 - !mv88e6xxx_port_ppu_updates(chip, port)) || 786 + if (!mv88e6xxx_port_ppu_updates(chip, port) || 796 787 mode == MLO_AN_FIXED) { 797 788 /* FIXME: for an automedia port, should we force the link 798 789 * down here - what if the link comes up due to "other" media
+7 -1
drivers/net/dsa/mv88e6xxx/serdes.c
··· 830 830 bool up) 831 831 { 832 832 u8 cmode = chip->ports[port].cmode; 833 - int err = 0; 833 + int err; 834 834 835 835 switch (cmode) { 836 836 case MV88E6XXX_PORT_STS_CMODE_SGMII: ··· 841 841 case MV88E6XXX_PORT_STS_CMODE_XAUI: 842 842 case MV88E6XXX_PORT_STS_CMODE_RXAUI: 843 843 err = mv88e6390_serdes_power_10g(chip, lane, up); 844 + break; 845 + default: 846 + err = -EINVAL; 844 847 break; 845 848 } 846 849 ··· 1543 1540 case MV88E6393X_PORT_STS_CMODE_5GBASER: 1544 1541 case MV88E6393X_PORT_STS_CMODE_10GBASER: 1545 1542 err = mv88e6390_serdes_power_10g(chip, lane, on); 1543 + break; 1544 + default: 1545 + err = -EINVAL; 1546 1546 break; 1547 1547 } 1548 1548
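The serdes.c hunk above drops the `int err = 0;` pre-initialisation and adds `default:` arms returning `-EINVAL`, so a cmode the switch does not handle now fails loudly instead of reporting success. A minimal standalone sketch of that pattern (the enum values are stand-ins, not the real `MV88E6XXX_PORT_STS_CMODE_*` constants):

```c
#include <errno.h>

/* Stand-in cmode values; the dispatch mirrors the serdes.c hunk above. */
enum cmode { CMODE_SGMII, CMODE_XAUI, CMODE_FUTURE };

static int serdes_power(enum cmode cmode)
{
	int err;			/* deliberately not pre-set to 0 */

	switch (cmode) {
	case CMODE_SGMII:
		err = 0;		/* the SGMII power helper would run here */
		break;
	case CMODE_XAUI:
		err = 0;		/* the 10G power helper would run here */
		break;
	default:
		err = -EINVAL;		/* unknown cmode: explicit failure */
		break;
	}

	return err;
}
```

Leaving `err` uninitialised also lets the compiler flag any future case that forgets to set it.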
+4 -1
drivers/net/dsa/ocelot/felix.c
··· 298 298 } 299 299 } 300 300 301 - if (cpu < 0) 301 + if (cpu < 0) { 302 + kfree(tagging_rule); 303 + kfree(redirect_rule); 302 304 return -EINVAL; 305 + } 303 306 304 307 tagging_rule->key_type = OCELOT_VCAP_KEY_ETYPE; 305 308 *(__be16 *)tagging_rule->key.etype.etype.value = htons(ETH_P_1588);
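The felix.c hunk above plugs a leak: once both filter rules are allocated, the early `cpu < 0` error exit must free them before returning. A minimal model of the fix, with `struct rule` as an illustrative stand-in for the ocelot VCAP filter type:

```c
#include <errno.h>
#include <stdlib.h>

/* Illustrative stand-in for the ocelot VCAP filter type. */
struct rule { int key_type; };

static int setup_rules(int cpu, struct rule **tag_out, struct rule **redir_out)
{
	struct rule *tagging_rule = calloc(1, sizeof(*tagging_rule));
	struct rule *redirect_rule = calloc(1, sizeof(*redirect_rule));

	if (!tagging_rule || !redirect_rule) {
		free(tagging_rule);
		free(redirect_rule);
		return -ENOMEM;
	}

	if (cpu < 0) {
		/* the fix: this path used to leak both allocations */
		free(tagging_rule);
		free(redirect_rule);
		return -EINVAL;
	}

	*tag_out = tagging_rule;
	*redir_out = redirect_rule;
	return 0;
}
```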
+6 -3
drivers/net/ethernet/altera/altera_tse_main.c
··· 1430 1430 priv->rxdescmem_busaddr = dma_res->start; 1431 1431 1432 1432 } else { 1433 + ret = -ENODEV; 1433 1434 goto err_free_netdev; 1434 1435 } 1435 1436 1436 - if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) 1437 + if (!dma_set_mask(priv->device, DMA_BIT_MASK(priv->dmaops->dmamask))) { 1437 1438 dma_set_coherent_mask(priv->device, 1438 1439 DMA_BIT_MASK(priv->dmaops->dmamask)); 1439 - else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) 1440 + } else if (!dma_set_mask(priv->device, DMA_BIT_MASK(32))) { 1440 1441 dma_set_coherent_mask(priv->device, DMA_BIT_MASK(32)); 1441 - else 1442 + } else { 1443 + ret = -EIO; 1442 1444 goto err_free_netdev; 1445 + } 1443 1446 1444 1447 /* MAC address space */ 1445 1448 ret = request_and_map(pdev, "control_port", &control_port,
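The altera_tse hunk above makes every failure path that jumps to the shared cleanup label set a specific error code first, so probe can no longer fall through and return a stale (possibly zero) value. A sketch of that shape, with two flags modelling "DMA resource present" and "DMA mask accepted":

```c
#include <errno.h>

/* Sketch of the altera_tse error-path fix; the flags are illustrative. */
static int probe_like(int have_dma_res, int dma_mask_ok)
{
	int ret = 0;

	if (!have_dma_res) {
		ret = -ENODEV;		/* added by the patch */
		goto err_free_netdev;
	}

	if (!dma_mask_ok) {
		ret = -EIO;		/* added by the patch */
		goto err_free_netdev;
	}

	return 0;

err_free_netdev:
	/* free_netdev()-style cleanup would run here */
	return ret;
}
```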
+3 -1
drivers/net/ethernet/broadcom/bcm4908_enet.c
··· 708 708 709 709 enet->irq_tx = platform_get_irq_byname(pdev, "tx"); 710 710 711 - dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 711 + err = dma_set_coherent_mask(dev, DMA_BIT_MASK(32)); 712 + if (err) 713 + return err; 712 714 713 715 err = bcm4908_enet_dma_alloc(enet); 714 716 if (err)
+3
drivers/net/ethernet/freescale/fec.h
··· 377 377 #define FEC_ENET_WAKEUP ((uint)0x00020000) /* Wakeup request */ 378 378 #define FEC_ENET_TXF (FEC_ENET_TXF_0 | FEC_ENET_TXF_1 | FEC_ENET_TXF_2) 379 379 #define FEC_ENET_RXF (FEC_ENET_RXF_0 | FEC_ENET_RXF_1 | FEC_ENET_RXF_2) 380 + #define FEC_ENET_RXF_GET(X) (((X) == 0) ? FEC_ENET_RXF_0 : \ 381 + (((X) == 1) ? FEC_ENET_RXF_1 : \ 382 + FEC_ENET_RXF_2)) 380 383 #define FEC_ENET_TS_AVAIL ((uint)0x00010000) 381 384 #define FEC_ENET_TS_TIMER ((uint)0x00008000) 382 385
+1 -1
drivers/net/ethernet/freescale/fec_main.c
··· 1480 1480 break; 1481 1481 pkt_received++; 1482 1482 1483 - writel(FEC_ENET_RXF, fep->hwp + FEC_IEVENT); 1483 + writel(FEC_ENET_RXF_GET(queue_id), fep->hwp + FEC_IEVENT); 1484 1484 1485 1485 /* Check for errors. */ 1486 1486 status ^= BD_ENET_RX_LAST;
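The fec.h/fec_main.c pair above makes the RX path ack only the RXF interrupt bit of the queue it just serviced, selected via the new `FEC_ENET_RXF_GET()` macro, instead of clearing all three queues' bits at once. The macro can be exercised standalone; the bit values below are stand-ins, since the real `FEC_ENET_RXF_0/1/2` definitions live earlier in fec.h:

```c
/* Illustrative bit values; the real ones are defined earlier in fec.h. */
#define FEC_ENET_RXF_0	(1u << 25)
#define FEC_ENET_RXF_1	(1u << 6)
#define FEC_ENET_RXF_2	(1u << 7)

/* Per-queue RXF selector, as added by the patch. */
#define FEC_ENET_RXF_GET(X)	(((X) == 0) ? FEC_ENET_RXF_0 :	\
				(((X) == 1) ? FEC_ENET_RXF_1 :	\
				FEC_ENET_RXF_2))
```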
+3
drivers/net/ethernet/google/gve/gve_utils.c
··· 68 68 set_protocol = ctx->curr_frag_cnt == ctx->expected_frag_cnt - 1; 69 69 } else { 70 70 skb = napi_alloc_skb(napi, len); 71 + 72 + if (unlikely(!skb)) 73 + return NULL; 71 74 set_protocol = true; 72 75 } 73 76 __skb_put(skb, len);
+1
drivers/net/ethernet/huawei/hinic/hinic_sriov.c
··· 8 8 #include <linux/interrupt.h> 9 9 #include <linux/etherdevice.h> 10 10 #include <linux/netdevice.h> 11 + #include <linux/module.h> 11 12 12 13 #include "hinic_hw_dev.h" 13 14 #include "hinic_dev.h"
+8
drivers/net/ethernet/intel/i40e/i40e_debugfs.c
··· 553 553 dev_info(&pf->pdev->dev, "vsi %d not found\n", vsi_seid); 554 554 return; 555 555 } 556 + if (vsi->type != I40E_VSI_MAIN && 557 + vsi->type != I40E_VSI_FDIR && 558 + vsi->type != I40E_VSI_VMDQ2) { 559 + dev_info(&pf->pdev->dev, 560 + "vsi %d type %d descriptor rings not available\n", 561 + vsi_seid, vsi->type); 562 + return; 563 + } 556 564 if (type == RING_TYPE_XDP && !i40e_enabled_xdp_vsi(vsi)) { 557 565 dev_info(&pf->pdev->dev, "XDP not enabled on VSI %d\n", vsi_seid); 558 566 return;
+48 -27
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 1949 1949 } 1950 1950 1951 1951 /** 1952 + * i40e_sync_vf_state 1953 + * @vf: pointer to the VF info 1954 + * @state: VF state 1955 + * 1956 + * Called from a VF message to synchronize the service with a potential 1957 + * VF reset state 1958 + **/ 1959 + static bool i40e_sync_vf_state(struct i40e_vf *vf, enum i40e_vf_states state) 1960 + { 1961 + int i; 1962 + 1963 + /* When handling some messages, it needs VF state to be set. 1964 + * It is possible that this flag is cleared during VF reset, 1965 + * so there is a need to wait until the end of the reset to 1966 + * handle the request message correctly. 1967 + */ 1968 + for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) { 1969 + if (test_bit(state, &vf->vf_states)) 1970 + return true; 1971 + usleep_range(10000, 20000); 1972 + } 1973 + 1974 + return test_bit(state, &vf->vf_states); 1975 + } 1976 + 1977 + /** 1952 1978 * i40e_vc_get_version_msg 1953 1979 * @vf: pointer to the VF info 1954 1980 * @msg: pointer to the msg buffer ··· 2034 2008 size_t len = 0; 2035 2009 int ret; 2036 2010 2037 - if (!test_bit(I40E_VF_STATE_INIT, &vf->vf_states)) { 2011 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_INIT)) { 2038 2012 aq_ret = I40E_ERR_PARAM; 2039 2013 goto err; 2040 2014 } ··· 2157 2131 bool allmulti = false; 2158 2132 bool alluni = false; 2159 2133 2160 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 2134 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 2161 2135 aq_ret = I40E_ERR_PARAM; 2162 2136 goto err_out; 2163 2137 } ··· 2245 2219 struct i40e_vsi *vsi; 2246 2220 u16 num_qps_all = 0; 2247 2221 2248 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 2222 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 2249 2223 aq_ret = I40E_ERR_PARAM; 2250 2224 goto error_param; 2251 2225 } ··· 2394 2368 i40e_status aq_ret = 0; 2395 2369 int i; 2396 2370 2397 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 2371 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 2398 2372 aq_ret = 
I40E_ERR_PARAM; 2399 2373 goto error_param; 2400 2374 } ··· 2566 2540 struct i40e_pf *pf = vf->pf; 2567 2541 i40e_status aq_ret = 0; 2568 2542 2569 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 2543 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 2570 2544 aq_ret = I40E_ERR_PARAM; 2571 2545 goto error_param; 2572 2546 } ··· 2616 2590 u8 cur_pairs = vf->num_queue_pairs; 2617 2591 struct i40e_pf *pf = vf->pf; 2618 2592 2619 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) 2593 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) 2620 2594 return -EINVAL; 2621 2595 2622 2596 if (req_pairs > I40E_MAX_VF_QUEUES) { ··· 2661 2635 2662 2636 memset(&stats, 0, sizeof(struct i40e_eth_stats)); 2663 2637 2664 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 2638 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 2665 2639 aq_ret = I40E_ERR_PARAM; 2666 2640 goto error_param; 2667 2641 } ··· 2778 2752 i40e_status ret = 0; 2779 2753 int i; 2780 2754 2781 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || 2755 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || 2782 2756 !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) { 2783 2757 ret = I40E_ERR_PARAM; 2784 2758 goto error_param; ··· 2850 2824 i40e_status ret = 0; 2851 2825 int i; 2852 2826 2853 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || 2827 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || 2854 2828 !i40e_vc_isvalid_vsi_id(vf, al->vsi_id)) { 2855 2829 ret = I40E_ERR_PARAM; 2856 2830 goto error_param; ··· 2994 2968 i40e_status aq_ret = 0; 2995 2969 int i; 2996 2970 2997 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || 2971 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || 2998 2972 !i40e_vc_isvalid_vsi_id(vf, vfl->vsi_id)) { 2999 2973 aq_ret = I40E_ERR_PARAM; 3000 2974 goto error_param; ··· 3114 3088 struct i40e_vsi *vsi = NULL; 3115 3089 i40e_status aq_ret = 0; 3116 3090 3117 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || 3091 + if 
(!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || 3118 3092 !i40e_vc_isvalid_vsi_id(vf, vrk->vsi_id) || 3119 - (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) { 3093 + vrk->key_len != I40E_HKEY_ARRAY_SIZE) { 3120 3094 aq_ret = I40E_ERR_PARAM; 3121 3095 goto err; 3122 3096 } ··· 3145 3119 i40e_status aq_ret = 0; 3146 3120 u16 i; 3147 3121 3148 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states) || 3122 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE) || 3149 3123 !i40e_vc_isvalid_vsi_id(vf, vrl->vsi_id) || 3150 - (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) { 3124 + vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) { 3151 3125 aq_ret = I40E_ERR_PARAM; 3152 3126 goto err; 3153 3127 } ··· 3180 3154 i40e_status aq_ret = 0; 3181 3155 int len = 0; 3182 3156 3183 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3157 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3184 3158 aq_ret = I40E_ERR_PARAM; 3185 3159 goto err; 3186 3160 } ··· 3216 3190 struct i40e_hw *hw = &pf->hw; 3217 3191 i40e_status aq_ret = 0; 3218 3192 3219 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3193 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3220 3194 aq_ret = I40E_ERR_PARAM; 3221 3195 goto err; 3222 3196 } ··· 3241 3215 i40e_status aq_ret = 0; 3242 3216 struct i40e_vsi *vsi; 3243 3217 3244 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3218 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3245 3219 aq_ret = I40E_ERR_PARAM; 3246 3220 goto err; 3247 3221 } ··· 3267 3241 i40e_status aq_ret = 0; 3268 3242 struct i40e_vsi *vsi; 3269 3243 3270 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3244 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3271 3245 aq_ret = I40E_ERR_PARAM; 3272 3246 goto err; 3273 3247 } ··· 3494 3468 i40e_status aq_ret = 0; 3495 3469 int i, ret; 3496 3470 3497 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3471 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3498 3472 aq_ret = I40E_ERR_PARAM; 3499 3473 
goto err; 3500 3474 } ··· 3625 3599 i40e_status aq_ret = 0; 3626 3600 int i, ret; 3627 3601 3628 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3602 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3629 3603 aq_ret = I40E_ERR_PARAM; 3630 3604 goto err_out; 3631 3605 } ··· 3734 3708 i40e_status aq_ret = 0; 3735 3709 u64 speed = 0; 3736 3710 3737 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3711 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3738 3712 aq_ret = I40E_ERR_PARAM; 3739 3713 goto err; 3740 3714 } ··· 3823 3797 3824 3798 /* set this flag only after making sure all inputs are sane */ 3825 3799 vf->adq_enabled = true; 3826 - /* num_req_queues is set when user changes number of queues via ethtool 3827 - * and this causes issue for default VSI(which depends on this variable) 3828 - * when ADq is enabled, hence reset it. 3829 - */ 3830 - vf->num_req_queues = 0; 3831 3800 3832 3801 /* reset the VF in order to allocate resources */ 3833 3802 i40e_vc_reset_vf(vf, true); ··· 3845 3824 struct i40e_pf *pf = vf->pf; 3846 3825 i40e_status aq_ret = 0; 3847 3826 3848 - if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) { 3827 + if (!i40e_sync_vf_state(vf, I40E_VF_STATE_ACTIVE)) { 3849 3828 aq_ret = I40E_ERR_PARAM; 3850 3829 goto err; 3851 3830 }
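The new `i40e_sync_vf_state()` above replaces bare `test_bit()` checks with a bounded poll, so a VF message that races with a VF reset is handled once the reset finishes. A userspace model of that loop, with a callback standing in for `test_bit()` (the driver sleeps 10-20 ms between polls via `usleep_range()`):

```c
#include <stdbool.h>

#define I40E_VF_STATE_WAIT_COUNT 20

/* Poll a state predicate a bounded number of times before giving up. */
static bool sync_vf_state(bool (*state_set)(void *ctx), void *ctx)
{
	int i;

	for (i = 0; i < I40E_VF_STATE_WAIT_COUNT; i++) {
		if (state_set(ctx))
			return true;
		/* usleep_range(10000, 20000) in the real driver */
	}

	return state_set(ctx);
}

/* test helper: the "reset" completes after *ctx more polls */
static bool set_after(void *ctx)
{
	int *remaining = ctx;

	return (*remaining)-- <= 0;
}
```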
+2
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
··· 18 18 19 19 #define I40E_MAX_VF_PROMISC_FLAGS 3 20 20 21 + #define I40E_VF_STATE_WAIT_COUNT 20 22 + 21 23 /* Various queue ctrls */ 22 24 enum i40e_queue_ctrl { 23 25 I40E_QUEUE_CTRL_UNKNOWN = 0,
+32 -11
drivers/net/ethernet/intel/iavf/iavf_ethtool.c
··· 631 631 if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending)) 632 632 return -EINVAL; 633 633 634 - new_tx_count = clamp_t(u32, ring->tx_pending, 635 - IAVF_MIN_TXD, 636 - IAVF_MAX_TXD); 637 - new_tx_count = ALIGN(new_tx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE); 634 + if (ring->tx_pending > IAVF_MAX_TXD || 635 + ring->tx_pending < IAVF_MIN_TXD || 636 + ring->rx_pending > IAVF_MAX_RXD || 637 + ring->rx_pending < IAVF_MIN_RXD) { 638 + netdev_err(netdev, "Descriptors requested (Tx: %d / Rx: %d) out of range [%d-%d] (increment %d)\n", 639 + ring->tx_pending, ring->rx_pending, IAVF_MIN_TXD, 640 + IAVF_MAX_RXD, IAVF_REQ_DESCRIPTOR_MULTIPLE); 641 + return -EINVAL; 642 + } 638 643 639 - new_rx_count = clamp_t(u32, ring->rx_pending, 640 - IAVF_MIN_RXD, 641 - IAVF_MAX_RXD); 642 - new_rx_count = ALIGN(new_rx_count, IAVF_REQ_DESCRIPTOR_MULTIPLE); 644 + new_tx_count = ALIGN(ring->tx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE); 645 + if (new_tx_count != ring->tx_pending) 646 + netdev_info(netdev, "Requested Tx descriptor count rounded up to %d\n", 647 + new_tx_count); 648 + 649 + new_rx_count = ALIGN(ring->rx_pending, IAVF_REQ_DESCRIPTOR_MULTIPLE); 650 + if (new_rx_count != ring->rx_pending) 651 + netdev_info(netdev, "Requested Rx descriptor count rounded up to %d\n", 652 + new_rx_count); 643 653 644 654 /* if nothing to do return success */ 645 655 if ((new_tx_count == adapter->tx_desc_count) && 646 - (new_rx_count == adapter->rx_desc_count)) 656 + (new_rx_count == adapter->rx_desc_count)) { 657 + netdev_dbg(netdev, "Nothing to change, descriptor count is same as requested\n"); 647 658 return 0; 659 + } 648 660 649 - adapter->tx_desc_count = new_tx_count; 650 - adapter->rx_desc_count = new_rx_count; 661 + if (new_tx_count != adapter->tx_desc_count) { 662 + netdev_dbg(netdev, "Changing Tx descriptor count from %d to %d\n", 663 + adapter->tx_desc_count, new_tx_count); 664 + adapter->tx_desc_count = new_tx_count; 665 + } 666 + 667 + if (new_rx_count != adapter->rx_desc_count) { 668 
+ netdev_dbg(netdev, "Changing Rx descriptor count from %d to %d\n", 669 + adapter->rx_desc_count, new_rx_count); 670 + adapter->rx_desc_count = new_rx_count; 671 + } 651 672 652 673 if (netif_running(netdev)) { 653 674 adapter->flags |= IAVF_FLAG_RESET_NEEDED;
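The iavf_ethtool.c hunk above changes behaviour: an out-of-range descriptor request is now rejected with `-EINVAL` rather than silently clamped, while an in-range value is still rounded up to the required multiple. A sketch of that check (the numeric limits are illustrative stand-ins for the `IAVF_*` constants):

```c
#include <errno.h>

/* Illustrative stand-ins for the driver's Tx descriptor limits. */
#define IAVF_MIN_TXD			64
#define IAVF_MAX_TXD			4096
#define IAVF_REQ_DESCRIPTOR_MULTIPLE	32

/* ALIGN()-style round-up for a power-of-two multiple. */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static int check_tx_ring(unsigned int requested, unsigned int *out)
{
	if (requested > IAVF_MAX_TXD || requested < IAVF_MIN_TXD)
		return -EINVAL;		/* was: clamp_t() into range */

	*out = ALIGN_UP(requested, IAVF_REQ_DESCRIPTOR_MULTIPLE);
	return 0;
}
```

Rejecting rather than clamping means ethtool users see an error instead of discovering later that a different ring size was applied.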
+1
drivers/net/ethernet/intel/iavf/iavf_main.c
··· 2248 2248 } 2249 2249 2250 2250 pci_set_master(adapter->pdev); 2251 + pci_restore_msi_state(adapter->pdev); 2251 2252 2252 2253 if (i == IAVF_RESET_WAIT_COMPLETE_COUNT) { 2253 2254 dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
+12 -6
drivers/net/ethernet/intel/ice/ice_dcb_nl.c
··· 97 97 98 98 new_cfg->etscfg.maxtcs = pf->hw.func_caps.common_cap.maxtc; 99 99 100 + if (!bwcfg) 101 + new_cfg->etscfg.tcbwtable[0] = 100; 102 + 100 103 if (!bwrec) 101 104 new_cfg->etsrec.tcbwtable[0] = 100; 102 105 ··· 170 167 if (mode == pf->dcbx_cap) 171 168 return ICE_DCB_NO_HW_CHG; 172 169 173 - pf->dcbx_cap = mode; 174 170 qos_cfg = &pf->hw.port_info->qos_cfg; 175 - if (mode & DCB_CAP_DCBX_VER_CEE) { 176 - if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP) 177 - return ICE_DCB_NO_HW_CHG; 171 + 172 + /* DSCP configuration is not DCBx negotiated */ 173 + if (qos_cfg->local_dcbx_cfg.pfc_mode == ICE_QOS_MODE_DSCP) 174 + return ICE_DCB_NO_HW_CHG; 175 + 176 + pf->dcbx_cap = mode; 177 + 178 + if (mode & DCB_CAP_DCBX_VER_CEE) 178 179 qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_CEE; 179 - } else { 180 + else 180 181 qos_cfg->local_dcbx_cfg.dcbx_mode = ICE_DCBX_MODE_IEEE; 181 - } 182 182 183 183 dev_info(ice_pf_to_dev(pf), "DCBx mode = 0x%x\n", mode); 184 184 return ICE_DCB_HW_CHG_RST;
+2 -2
drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
··· 1268 1268 bool is_tun = tun == ICE_FD_HW_SEG_TUN; 1269 1269 int err; 1270 1270 1271 - if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num)) 1271 + if (is_tun && !ice_get_open_tunnel_port(&pf->hw, &port_num, TNL_ALL)) 1272 1272 continue; 1273 1273 err = ice_fdir_write_fltr(pf, input, add, is_tun); 1274 1274 if (err) ··· 1652 1652 } 1653 1653 1654 1654 /* return error if not an update and no available filters */ 1655 - fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port) ? 2 : 1; 1655 + fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port, TNL_ALL) ? 2 : 1; 1656 1656 if (!ice_fdir_find_fltr_by_idx(hw, fsp->location) && 1657 1657 ice_fdir_num_avail_fltr(hw, pf->vsi[vsi->idx]) < fltrs_needed) { 1658 1658 dev_err(dev, "Failed to add filter. The maximum number of flow director filters has been reached.\n");
+1 -1
drivers/net/ethernet/intel/ice/ice_fdir.c
··· 924 924 memcpy(pkt, ice_fdir_pkt[idx].pkt, ice_fdir_pkt[idx].pkt_len); 925 925 loc = pkt; 926 926 } else { 927 - if (!ice_get_open_tunnel_port(hw, &tnl_port)) 927 + if (!ice_get_open_tunnel_port(hw, &tnl_port, TNL_ALL)) 928 928 return ICE_ERR_DOES_NOT_EXIST; 929 929 if (!ice_fdir_pkt[idx].tun_pkt) 930 930 return ICE_ERR_PARAM;
+5 -2
drivers/net/ethernet/intel/ice/ice_flex_pipe.c
··· 1899 1899 * ice_get_open_tunnel_port - retrieve an open tunnel port 1900 1900 * @hw: pointer to the HW structure 1901 1901 * @port: returns open port 1902 + * @type: type of tunnel, can be TNL_LAST if it doesn't matter 1902 1903 */ 1903 1904 bool 1904 - ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port) 1905 + ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, 1906 + enum ice_tunnel_type type) 1905 1907 { 1906 1908 bool res = false; 1907 1909 u16 i; ··· 1911 1909 mutex_lock(&hw->tnl_lock); 1912 1910 1913 1911 for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) 1914 - if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port) { 1912 + if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].port && 1913 + (type == TNL_LAST || type == hw->tnl.tbl[i].type)) { 1915 1914 *port = hw->tnl.tbl[i].port; 1916 1915 res = true; 1917 1916 break;
+2 -1
drivers/net/ethernet/intel/ice/ice_flex_pipe.h
··· 33 33 ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, 34 34 unsigned long *bm, struct list_head *fv_list); 35 35 bool 36 - ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port); 36 + ice_get_open_tunnel_port(struct ice_hw *hw, u16 *port, 37 + enum ice_tunnel_type type); 37 38 int ice_udp_tunnel_set_port(struct net_device *netdev, unsigned int table, 38 39 unsigned int idx, struct udp_tunnel_info *ti); 39 40 int ice_udp_tunnel_unset_port(struct net_device *netdev, unsigned int table,
+21 -11
drivers/net/ethernet/intel/ice/ice_main.c
··· 5888 5888 netif_carrier_on(vsi->netdev); 5889 5889 } 5890 5890 5891 + /* clear this now, and the first stats read will be used as baseline */ 5892 + vsi->stat_offsets_loaded = false; 5893 + 5891 5894 ice_service_task_schedule(pf); 5892 5895 5893 5896 return 0; ··· 5937 5934 /** 5938 5935 * ice_update_vsi_tx_ring_stats - Update VSI Tx ring stats counters 5939 5936 * @vsi: the VSI to be updated 5937 + * @vsi_stats: the stats struct to be updated 5940 5938 * @rings: rings to work on 5941 5939 * @count: number of rings 5942 5940 */ 5943 5941 static void 5944 - ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, struct ice_tx_ring **rings, 5945 - u16 count) 5942 + ice_update_vsi_tx_ring_stats(struct ice_vsi *vsi, 5943 + struct rtnl_link_stats64 *vsi_stats, 5944 + struct ice_tx_ring **rings, u16 count) 5946 5945 { 5947 - struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats; 5948 5946 u16 i; 5949 5947 5950 5948 for (i = 0; i < count; i++) { ··· 5969 5965 */ 5970 5966 static void ice_update_vsi_ring_stats(struct ice_vsi *vsi) 5971 5967 { 5972 - struct rtnl_link_stats64 *vsi_stats = &vsi->net_stats; 5968 + struct rtnl_link_stats64 *vsi_stats; 5973 5969 u64 pkts, bytes; 5974 5970 int i; 5975 5971 5976 - /* reset netdev stats */ 5977 - vsi_stats->tx_packets = 0; 5978 - vsi_stats->tx_bytes = 0; 5979 - vsi_stats->rx_packets = 0; 5980 - vsi_stats->rx_bytes = 0; 5972 + vsi_stats = kzalloc(sizeof(*vsi_stats), GFP_ATOMIC); 5973 + if (!vsi_stats) 5974 + return; 5981 5975 5982 5976 /* reset non-netdev (extended) stats */ 5983 5977 vsi->tx_restart = 0; ··· 5987 5985 rcu_read_lock(); 5988 5986 5989 5987 /* update Tx rings counters */ 5990 - ice_update_vsi_tx_ring_stats(vsi, vsi->tx_rings, vsi->num_txq); 5988 + ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->tx_rings, 5989 + vsi->num_txq); 5991 5990 5992 5991 /* update Rx rings counters */ 5993 5992 ice_for_each_rxq(vsi, i) { ··· 6003 6000 6004 6001 /* update XDP Tx rings counters */ 6005 6002 if (ice_is_xdp_ena_vsi(vsi)) 6006 - 
ice_update_vsi_tx_ring_stats(vsi, vsi->xdp_rings, 6003 + ice_update_vsi_tx_ring_stats(vsi, vsi_stats, vsi->xdp_rings, 6007 6004 vsi->num_xdp_txq); 6008 6005 6009 6006 rcu_read_unlock(); 6007 + 6008 + vsi->net_stats.tx_packets = vsi_stats->tx_packets; 6009 + vsi->net_stats.tx_bytes = vsi_stats->tx_bytes; 6010 + vsi->net_stats.rx_packets = vsi_stats->rx_packets; 6011 + vsi->net_stats.rx_bytes = vsi_stats->rx_bytes; 6012 + 6013 + kfree(vsi_stats); 6010 6014 } 6011 6015 6012 6016 /**
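The ice_main.c hunk above stops zeroing the live stats struct and accumulating into it in place, which let concurrent readers observe partially summed counters; totals are now built in a freshly allocated scratch struct and copied over in one step. A minimal model of that pattern:

```c
#include <errno.h>
#include <stdlib.h>

/* Simplified stand-in for rtnl_link_stats64. */
struct ring_stats { unsigned long long pkts, bytes; };

static int update_vsi_stats(struct ring_stats *live,
			    const struct ring_stats *rings, int nrings)
{
	struct ring_stats *tmp = calloc(1, sizeof(*tmp));
	int i;

	if (!tmp)
		return -ENOMEM;

	for (i = 0; i < nrings; i++) {
		tmp->pkts += rings[i].pkts;
		tmp->bytes += rings[i].bytes;
	}

	*live = *tmp;	/* publish the finished totals in one step */
	free(tmp);
	return 0;
}
```

In the kernel the copy is still field-by-field rather than a single atomic store, but readers no longer see the counters reset to zero mid-update.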
+14 -7
drivers/net/ethernet/intel/ice/ice_switch.c
··· 3796 3796 * ice_find_recp - find a recipe 3797 3797 * @hw: pointer to the hardware structure 3798 3798 * @lkup_exts: extension sequence to match 3799 + * @tun_type: type of recipe tunnel 3799 3800 * 3800 3801 * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found. 3801 3802 */ 3802 - static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts) 3803 + static u16 3804 + ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts, 3805 + enum ice_sw_tunnel_type tun_type) 3803 3806 { 3804 3807 bool refresh_required = true; 3805 3808 struct ice_sw_recipe *recp; ··· 3863 3860 } 3864 3861 /* If for "i"th recipe the found was never set to false 3865 3862 * then it means we found our match 3863 + * Also tun type of recipe needs to be checked 3866 3864 */ 3867 - if (found) 3865 + if (found && recp[i].tun_type == tun_type) 3868 3866 return i; /* Return the recipe ID */ 3869 3867 } 3870 3868 } ··· 4655 4651 } 4656 4652 4657 4653 /* Look for a recipe which matches our requested fv / mask list */ 4658 - *rid = ice_find_recp(hw, lkup_exts); 4654 + *rid = ice_find_recp(hw, lkup_exts, rinfo->tun_type); 4659 4655 if (*rid < ICE_MAX_NUM_RECIPES) 4660 4656 /* Success if found a recipe that match the existing criteria */ 4661 4657 goto err_unroll; 4662 4658 4659 + rm->tun_type = rinfo->tun_type; 4663 4660 /* Recipe we need does not exist, add a recipe */ 4664 4661 status = ice_add_sw_recipe(hw, rm, profiles); 4665 4662 if (status) ··· 4963 4958 4964 4959 switch (tun_type) { 4965 4960 case ICE_SW_TUN_VXLAN: 4966 - case ICE_SW_TUN_GENEVE: 4967 - if (!ice_get_open_tunnel_port(hw, &open_port)) 4961 + if (!ice_get_open_tunnel_port(hw, &open_port, TNL_VXLAN)) 4968 4962 return ICE_ERR_CFG; 4969 4963 break; 4970 - 4964 + case ICE_SW_TUN_GENEVE: 4965 + if (!ice_get_open_tunnel_port(hw, &open_port, TNL_GENEVE)) 4966 + return ICE_ERR_CFG; 4967 + break; 4971 4968 default: 4972 4969 /* Nothing needs to be done for this tunnel type */ 4973 4970 
return 0; ··· 5562 5555 if (status) 5563 5556 return status; 5564 5557 5565 - rid = ice_find_recp(hw, &lkup_exts); 5558 + rid = ice_find_recp(hw, &lkup_exts, rinfo->tun_type); 5566 5559 /* If did not find a recipe that match the existing criteria */ 5567 5560 if (rid == ICE_MAX_NUM_RECIPES) 5568 5561 return ICE_ERR_PARAM;
+12 -18
drivers/net/ethernet/intel/ice/ice_tc_lib.c
··· 74 74 return inner ? ICE_IPV6_IL : ICE_IPV6_OFOS; 75 75 } 76 76 77 - static enum ice_protocol_type 78 - ice_proto_type_from_l4_port(bool inner, u16 ip_proto) 77 + static enum ice_protocol_type ice_proto_type_from_l4_port(u16 ip_proto) 79 78 { 80 - if (inner) { 81 - switch (ip_proto) { 82 - case IPPROTO_UDP: 83 - return ICE_UDP_ILOS; 84 - } 85 - } else { 86 - switch (ip_proto) { 87 - case IPPROTO_TCP: 88 - return ICE_TCP_IL; 89 - case IPPROTO_UDP: 90 - return ICE_UDP_OF; 91 - } 79 + switch (ip_proto) { 80 + case IPPROTO_TCP: 81 + return ICE_TCP_IL; 82 + case IPPROTO_UDP: 83 + return ICE_UDP_ILOS; 92 84 } 93 85 94 86 return 0; ··· 183 191 i++; 184 192 } 185 193 186 - if (flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) { 187 - list[i].type = ice_proto_type_from_l4_port(false, hdr->l3_key.ip_proto); 194 + if ((flags & ICE_TC_FLWR_FIELD_ENC_DEST_L4_PORT) && 195 + hdr->l3_key.ip_proto == IPPROTO_UDP) { 196 + list[i].type = ICE_UDP_OF; 188 197 list[i].h_u.l4_hdr.dst_port = hdr->l4_key.dst_port; 189 198 list[i].m_u.l4_hdr.dst_port = hdr->l4_mask.dst_port; 190 199 i++; ··· 310 317 ICE_TC_FLWR_FIELD_SRC_L4_PORT)) { 311 318 struct ice_tc_l4_hdr *l4_key, *l4_mask; 312 319 313 - list[i].type = ice_proto_type_from_l4_port(inner, headers->l3_key.ip_proto); 320 + list[i].type = ice_proto_type_from_l4_port(headers->l3_key.ip_proto); 314 321 l4_key = &headers->l4_key; 315 322 l4_mask = &headers->l4_mask; 316 323 ··· 795 802 headers->l3_mask.ttl = match.mask->ttl; 796 803 } 797 804 798 - if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS)) { 805 + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ENC_PORTS) && 806 + fltr->tunnel_type != TNL_VXLAN && fltr->tunnel_type != TNL_GENEVE) { 799 807 struct flow_match_ports match; 800 808 801 809 flow_rule_match_enc_ports(rule, &match);
+6
drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
··· 1617 1617 ice_vc_set_default_allowlist(vf); 1618 1618 1619 1619 ice_vf_fdir_exit(vf); 1620 + ice_vf_fdir_init(vf); 1620 1621 /* clean VF control VSI when resetting VFs since it should be 1621 1622 * setup only when VF creates its first FDIR rule. 1622 1623 */ ··· 1748 1747 } 1749 1748 1750 1749 ice_vf_fdir_exit(vf); 1750 + ice_vf_fdir_init(vf); 1751 1751 /* clean VF control VSI when resetting VF since it should be setup 1752 1752 * only when VF creates its first FDIR rule. 1753 1753 */ ··· 2022 2020 ret = ice_eswitch_configure(pf); 2023 2021 if (ret) 2024 2022 goto err_unroll_sriov; 2023 + 2024 + /* rearm global interrupts */ 2025 + if (test_and_clear_bit(ICE_OICR_INTR_DIS, pf->state)) 2026 + ice_irq_dynamic_ena(hw, NULL, NULL); 2025 2027 2026 2028 return 0; 2027 2029
+2 -2
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 2963 2963 mvpp2_rxq_status_update(port, rxq->id, 0, rxq->size); 2964 2964 2965 2965 if (priv->percpu_pools) { 2966 - err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->id, 0); 2966 + err = xdp_rxq_info_reg(&rxq->xdp_rxq_short, port->dev, rxq->logic_rxq, 0); 2967 2967 if (err < 0) 2968 2968 goto err_free_dma; 2969 2969 2970 - err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->id, 0); 2970 + err = xdp_rxq_info_reg(&rxq->xdp_rxq_long, port->dev, rxq->logic_rxq, 0); 2971 2971 if (err < 0) 2972 2972 goto err_unregister_rxq_short; 2973 2973
+2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
··· 5 5 * 6 6 */ 7 7 8 + #include <linux/module.h> 9 + 8 10 #include "otx2_common.h" 9 11 #include "otx2_ptp.h" 10 12
+5 -5
drivers/net/ethernet/microsoft/mana/hw_channel.c
··· 480 480 if (err) 481 481 goto out; 482 482 483 - err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size, 484 - &hwc_wq->msg_buf); 485 - if (err) 486 - goto out; 487 - 488 483 hwc_wq->hwc = hwc; 489 484 hwc_wq->gdma_wq = queue; 490 485 hwc_wq->queue_depth = q_depth; 491 486 hwc_wq->hwc_cq = hwc_cq; 487 + 488 + err = mana_hwc_alloc_dma_buf(hwc, q_depth, max_msg_size, 489 + &hwc_wq->msg_buf); 490 + if (err) 491 + goto out; 492 492 493 493 *hwc_wq_ptr = hwc_wq; 494 494 return 0;
+3 -1
drivers/net/ethernet/netronome/nfp/nfpcore/nfp_cppcore.c
··· 803 803 return -ENOMEM; 804 804 805 805 cache = kzalloc(sizeof(*cache), GFP_KERNEL); 806 - if (!cache) 806 + if (!cache) { 807 + nfp_cpp_area_free(area); 807 808 return -ENOMEM; 809 + } 808 810 809 811 cache->id = 0; 810 812 cache->addr = 0;
+7
drivers/net/ethernet/qlogic/qede/qede_fp.c
··· 1644 1644 data_split = true; 1645 1645 } 1646 1646 } else { 1647 + if (unlikely(skb->len > ETH_TX_MAX_NON_LSO_PKT_LEN)) { 1648 + DP_ERR(edev, "Unexpected non LSO skb length = 0x%x\n", skb->len); 1649 + qede_free_failed_tx_pkt(txq, first_bd, 0, false); 1650 + qede_update_tx_producer(txq); 1651 + return NETDEV_TX_OK; 1652 + } 1653 + 1647 1654 val |= ((skb->len & ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) << 1648 1655 ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT); 1649 1656 }
+9 -10
drivers/net/ethernet/qlogic/qla3xxx.c
··· 3480 3480 3481 3481 spin_lock_irqsave(&qdev->hw_lock, hw_flags); 3482 3482 3483 - err = ql_wait_for_drvr_lock(qdev); 3484 - if (err) { 3485 - err = ql_adapter_initialize(qdev); 3486 - if (err) { 3487 - netdev_err(ndev, "Unable to initialize adapter\n"); 3488 - goto err_init; 3489 - } 3490 - netdev_err(ndev, "Releasing driver lock\n"); 3491 - ql_sem_unlock(qdev, QL_DRVR_SEM_MASK); 3492 - } else { 3483 + if (!ql_wait_for_drvr_lock(qdev)) { 3493 3484 netdev_err(ndev, "Could not acquire driver lock\n"); 3485 + err = -ENODEV; 3494 3486 goto err_lock; 3495 3487 } 3488 + 3489 + err = ql_adapter_initialize(qdev); 3490 + if (err) { 3491 + netdev_err(ndev, "Unable to initialize adapter\n"); 3492 + goto err_init; 3493 + } 3494 + ql_sem_unlock(qdev, QL_DRVR_SEM_MASK); 3496 3495 3497 3496 spin_unlock_irqrestore(&qdev->hw_lock, hw_flags); 3498 3497
+1
drivers/net/phy/phylink.c
··· 1653 1653 * @mac_wol: true if the MAC needs to receive packets for Wake-on-Lan 1654 1654 * 1655 1655 * Handle a network device suspend event. There are several cases: 1656 + * 1656 1657 * - If Wake-on-Lan is not active, we can bring down the link between 1657 1658 * the MAC and PHY by calling phylink_stop(). 1658 1659 * - If Wake-on-Lan is active, and being handled only by the PHY, we
+2
drivers/net/usb/cdc_ncm.c
··· 181 181 min = ctx->max_datagram_size + ctx->max_ndp_size + sizeof(struct usb_cdc_ncm_nth32); 182 182 183 183 max = min_t(u32, CDC_NCM_NTB_MAX_SIZE_TX, le32_to_cpu(ctx->ncm_parm.dwNtbOutMaxSize)); 184 + if (max == 0) 185 + max = CDC_NCM_NTB_MAX_SIZE_TX; /* dwNtbOutMaxSize not set */ 184 186 185 187 /* some devices set dwNtbOutMaxSize too low for the above default */ 186 188 min = min(min, max);
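The cdc_ncm.c hunk above handles a device that leaves `dwNtbOutMaxSize` at 0: `min_t()` would otherwise pin `max` to 0 and the later min/clamp arithmetic collapses, so the driver's own ceiling is used as a fallback. A sketch of that selection (32768 matches `CDC_NCM_NTB_MAX_SIZE_TX` in the driver header, but treat it as illustrative here):

```c
#define CDC_NCM_NTB_MAX_SIZE_TX 32768u	/* driver ceiling, illustrative */

static unsigned int pick_tx_max(unsigned int dwNtbOutMaxSize)
{
	unsigned int max = dwNtbOutMaxSize < CDC_NCM_NTB_MAX_SIZE_TX ?
			   dwNtbOutMaxSize : CDC_NCM_NTB_MAX_SIZE_TX;

	if (max == 0)
		max = CDC_NCM_NTB_MAX_SIZE_TX;	/* dwNtbOutMaxSize not set */

	return max;
}
```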
+7 -6
drivers/net/vmxnet3/vmxnet3_drv.c
··· 3261 3261 3262 3262 #ifdef CONFIG_PCI_MSI 3263 3263 if (adapter->intr.type == VMXNET3_IT_MSIX) { 3264 - int i, nvec; 3264 + int i, nvec, nvec_allocated; 3265 3265 3266 3266 nvec = adapter->share_intr == VMXNET3_INTR_TXSHARE ? 3267 3267 1 : adapter->num_tx_queues; ··· 3274 3274 for (i = 0; i < nvec; i++) 3275 3275 adapter->intr.msix_entries[i].entry = i; 3276 3276 3277 - nvec = vmxnet3_acquire_msix_vectors(adapter, nvec); 3278 - if (nvec < 0) 3277 + nvec_allocated = vmxnet3_acquire_msix_vectors(adapter, nvec); 3278 + if (nvec_allocated < 0) 3279 3279 goto msix_err; 3280 3280 3281 3281 /* If we cannot allocate one MSIx vector per queue 3282 3282 * then limit the number of rx queues to 1 3283 3283 */ 3284 - if (nvec == VMXNET3_LINUX_MIN_MSIX_VECT) { 3284 + if (nvec_allocated == VMXNET3_LINUX_MIN_MSIX_VECT && 3285 + nvec != VMXNET3_LINUX_MIN_MSIX_VECT) { 3285 3286 if (adapter->share_intr != VMXNET3_INTR_BUDDYSHARE 3286 3287 || adapter->num_rx_queues != 1) { 3287 3288 adapter->share_intr = VMXNET3_INTR_TXSHARE; ··· 3292 3291 } 3293 3292 } 3294 3293 3295 - adapter->intr.num_intrs = nvec; 3294 + adapter->intr.num_intrs = nvec_allocated; 3296 3295 return; 3297 3296 3298 3297 msix_err: 3299 3298 /* If we cannot allocate MSIx vectors use only one rx queue */ 3300 3299 dev_info(&adapter->pdev->dev, 3301 3300 "Failed to enable MSI-X, error %d. " 3302 - "Limiting #rx queues to 1, try MSI.\n", nvec); 3301 + "Limiting #rx queues to 1, try MSI.\n", nvec_allocated); 3303 3302 3304 3303 adapter->intr.type = VMXNET3_IT_MSI; 3305 3304 }
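The vmxnet3 hunk above keeps the vector count *requested* (`nvec`) and the count *allocated* (`nvec_allocated`) in separate variables, so getting the minimum back is only treated as a shortfall when more than the minimum was actually asked for. The decision can be modelled as:

```c
#include <stdbool.h>

#define VMXNET3_LINUX_MIN_MSIX_VECT 2	/* illustrative value */

static bool msix_shortfall(int nvec, int nvec_allocated)
{
	/* before the fix nvec was overwritten with the allocation result,
	 * making a deliberate minimum-sized request look like a failure */
	return nvec_allocated == VMXNET3_LINUX_MIN_MSIX_VECT &&
	       nvec != VMXNET3_LINUX_MIN_MSIX_VECT;
}
```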
+4 -4
drivers/net/vrf.c
··· 770 770 771 771 skb->dev = vrf_dev; 772 772 773 - vrf_nf_set_untracked(skb); 774 - 775 773 err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, 776 774 skb, NULL, vrf_dev, vrf_ip6_out_direct_finish); 777 775 ··· 789 791 /* don't divert link scope packets */ 790 792 if (rt6_need_strict(&ipv6_hdr(skb)->daddr)) 791 793 return skb; 794 + 795 + vrf_nf_set_untracked(skb); 792 796 793 797 if (qdisc_tx_is_default(vrf_dev) || 794 798 IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED) ··· 1000 1000 1001 1001 skb->dev = vrf_dev; 1002 1002 1003 - vrf_nf_set_untracked(skb); 1004 - 1005 1003 err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk, 1006 1004 skb, NULL, vrf_dev, vrf_ip_out_direct_finish); 1007 1005 ··· 1020 1022 if (ipv4_is_multicast(ip_hdr(skb)->daddr) || 1021 1023 ipv4_is_lbcast(ip_hdr(skb)->daddr)) 1022 1024 return skb; 1025 + 1026 + vrf_nf_set_untracked(skb); 1023 1027 1024 1028 if (qdisc_tx_is_default(vrf_dev) || 1025 1029 IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+17 -9
drivers/net/wwan/iosm/iosm_ipc_imem.c
··· 183 183 bool ipc_imem_ul_write_td(struct iosm_imem *ipc_imem) 184 184 { 185 185 struct ipc_mem_channel *channel; 186 + bool hpda_ctrl_pending = false; 186 187 struct sk_buff_head *ul_list; 187 188 bool hpda_pending = false; 188 - bool forced_hpdu = false; 189 189 struct ipc_pipe *pipe; 190 190 int i; 191 191 ··· 202 202 ul_list = &channel->ul_list; 203 203 204 204 /* Fill the transfer descriptor with the uplink buffer info. */ 205 - hpda_pending |= ipc_protocol_ul_td_send(ipc_imem->ipc_protocol, 205 + if (!ipc_imem_check_wwan_ips(channel)) { 206 + hpda_ctrl_pending |= 207 + ipc_protocol_ul_td_send(ipc_imem->ipc_protocol, 206 208 pipe, ul_list); 207 - 208 - /* forced HP update needed for non data channels */ 209 - if (hpda_pending && !ipc_imem_check_wwan_ips(channel)) 210 - forced_hpdu = true; 209 + } else { 210 + hpda_pending |= 211 + ipc_protocol_ul_td_send(ipc_imem->ipc_protocol, 212 + pipe, ul_list); 213 + } 211 214 } 212 215 213 - if (forced_hpdu) { 216 + /* forced HP update needed for non data channels */ 217 + if (hpda_ctrl_pending) { 214 218 hpda_pending = false; 215 219 ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol, 216 220 IPC_HP_UL_WRITE_TD); ··· 537 533 "Modem link down. Exit run state worker."); 538 534 return; 539 535 } 536 + 537 + if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag)) 538 + ipc_devlink_deinit(ipc_imem->ipc_devlink); 540 539 541 540 if (!ipc_imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg)) 542 541 ipc_imem->mux = ipc_mux_init(&mux_cfg, ipc_imem); ··· 1184 1177 ipc_port_deinit(ipc_imem->ipc_port); 1185 1178 } 1186 1179 1187 - if (ipc_imem->ipc_devlink) 1180 + if (test_and_clear_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag)) 1188 1181 ipc_devlink_deinit(ipc_imem->ipc_devlink); 1189 1182 1190 1183 ipc_imem_device_ipc_uninit(ipc_imem); ··· 1280 1273 1281 1274 ipc_imem->pci_device_id = device_id; 1282 1275 1283 - ipc_imem->ev_cdev_write_pending = false; 1284 1276 ipc_imem->cp_version = 0; 1285 1277 ipc_imem->device_sleep = IPC_HOST_SLEEP_ENTER_SLEEP; 1286 1278 ··· 1347 1341 1348 1342 if (ipc_flash_link_establish(ipc_imem)) 1349 1343 goto devlink_channel_fail; 1344 + 1345 + set_bit(IOSM_DEVLINK_INIT, &ipc_imem->flag); 1350 1346 } 1351 1347 return ipc_imem; 1352 1348 devlink_channel_fail:
+1 -3
drivers/net/wwan/iosm/iosm_ipc_imem.h
··· 101 101 #define IOSM_CHIP_INFO_SIZE_MAX 100 102 102 103 103 #define FULLY_FUNCTIONAL 0 104 + #define IOSM_DEVLINK_INIT 1 104 105 105 106 /* List of the supported UL/DL pipes. */ 106 107 enum ipc_mem_pipes { ··· 337 336 * process the irq actions. 338 337 * @flag: Flag to monitor the state of driver 339 338 * @td_update_timer_suspended: if true then td update timer suspend 340 - * @ev_cdev_write_pending: 0 means inform the IPC tasklet to pass 341 - * the accumulated uplink buffers to CP. 342 339 * @ev_mux_net_transmit_pending:0 means inform the IPC tasklet to pass 343 340 * @reset_det_n: Reset detect flag 344 341 * @pcie_wake_n: Pcie wake flag ··· 378 379 u8 ev_irq_pending[IPC_IRQ_VECTORS]; 379 380 unsigned long flag; 380 381 u8 td_update_timer_suspended:1, 381 - ev_cdev_write_pending:1, 382 382 ev_mux_net_transmit_pending:1, 383 383 reset_det_n:1, 384 384 pcie_wake_n:1;
+1 -6
drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
··· 41 41 static int ipc_imem_tq_cdev_write(struct iosm_imem *ipc_imem, int arg, 42 42 void *msg, size_t size) 43 43 { 44 - ipc_imem->ev_cdev_write_pending = false; 45 44 ipc_imem_ul_send(ipc_imem); 46 45 47 46 return 0; ··· 49 50 /* Through tasklet to do sio write. */ 50 51 static int ipc_imem_call_cdev_write(struct iosm_imem *ipc_imem) 51 52 { 52 - if (ipc_imem->ev_cdev_write_pending) 53 - return -1; 54 - 55 - ipc_imem->ev_cdev_write_pending = true; 56 - 57 53 return ipc_task_queue_send_task(ipc_imem, ipc_imem_tq_cdev_write, 0, 58 54 NULL, 0, false); 59 55 } ··· 447 453 /* Release the pipe resources */ 448 454 ipc_imem_pipe_cleanup(ipc_imem, &channel->ul_pipe); 449 455 ipc_imem_pipe_cleanup(ipc_imem, &channel->dl_pipe); 456 + ipc_imem->nr_of_channels--; 450 457 } 451 458 452 459 void ipc_imem_sys_devlink_notify_rx(struct iosm_devlink *ipc_devlink,
+1
drivers/pci/controller/dwc/pci-exynos.c
··· 19 19 #include <linux/platform_device.h> 20 20 #include <linux/phy/phy.h> 21 21 #include <linux/regulator/consumer.h> 22 + #include <linux/module.h> 22 23 23 24 #include "pcie-designware.h" 24 25
+1
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 18 18 #include <linux/pm_domain.h> 19 19 #include <linux/regmap.h> 20 20 #include <linux/reset.h> 21 + #include <linux/module.h> 21 22 22 23 #include "pcie-designware.h" 23 24
+1 -1
drivers/platform/x86/amd-pmc.c
··· 76 76 #define AMD_CPU_ID_CZN AMD_CPU_ID_RN 77 77 #define AMD_CPU_ID_YC 0x14B5 78 78 79 - #define PMC_MSG_DELAY_MIN_US 100 79 + #define PMC_MSG_DELAY_MIN_US 50 80 80 #define RESPONSE_REGISTER_LOOP_MAX 20000 81 81 82 82 #define SOC_SUBSYSTEM_IP_MAX 12
+7
drivers/platform/x86/intel/hid.c
··· 99 99 DMI_MATCH(DMI_PRODUCT_FAMILY, "ThinkPad X1 Tablet Gen 2"), 100 100 }, 101 101 }, 102 + { 103 + .ident = "Microsoft Surface Go 3", 104 + .matches = { 105 + DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"), 106 + DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"), 107 + }, 108 + }, 102 109 { } 103 110 }; 104 111
+12
drivers/platform/x86/lg-laptop.c
··· 657 657 if (product && strlen(product) > 4) 658 658 switch (product[4]) { 659 659 case '5': 660 + if (strlen(product) > 5) 661 + switch (product[5]) { 662 + case 'N': 663 + year = 2021; 664 + break; 665 + case '0': 666 + year = 2016; 667 + break; 668 + default: 669 + year = 2022; 670 + } 671 + break; 660 672 case '6': 661 673 year = 2016; 662 674 break;
+4 -2
drivers/platform/x86/thinkpad_acpi.c
··· 3015 3015 &dev_attr_hotkey_all_mask.attr, 3016 3016 &dev_attr_hotkey_adaptive_all_mask.attr, 3017 3017 &dev_attr_hotkey_recommended_mask.attr, 3018 + &dev_attr_hotkey_tablet_mode.attr, 3019 + &dev_attr_hotkey_radio_sw.attr, 3018 3020 #ifdef CONFIG_THINKPAD_ACPI_HOTKEY_POLL 3019 3021 &dev_attr_hotkey_source_mask.attr, 3020 3022 &dev_attr_hotkey_poll_freq.attr, ··· 5728 5726 "tpacpi::standby", 5729 5727 "tpacpi::dock_status1", 5730 5728 "tpacpi::dock_status2", 5731 - "tpacpi::unknown_led2", 5729 + "tpacpi::lid_logo_dot", 5732 5730 "tpacpi::unknown_led3", 5733 5731 "tpacpi::thinkvantage", 5734 5732 }; 5735 - #define TPACPI_SAFE_LEDS 0x1081U 5733 + #define TPACPI_SAFE_LEDS 0x1481U 5736 5734 5737 5735 static inline bool tpacpi_is_led_restricted(const unsigned int led) 5738 5736 {
+18
drivers/platform/x86/touchscreen_dmi.c
··· 905 905 .properties = trekstor_primetab_t13b_props, 906 906 }; 907 907 908 + static const struct property_entry trekstor_surftab_duo_w1_props[] = { 909 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-x"), 910 + { } 911 + }; 912 + 913 + static const struct ts_dmi_data trekstor_surftab_duo_w1_data = { 914 + .acpi_name = "GDIX1001:00", 915 + .properties = trekstor_surftab_duo_w1_props, 916 + }; 917 + 908 918 static const struct property_entry trekstor_surftab_twin_10_1_props[] = { 909 919 PROPERTY_ENTRY_U32("touchscreen-min-x", 20), 910 920 PROPERTY_ENTRY_U32("touchscreen-min-y", 0), ··· 1510 1500 .matches = { 1511 1501 DMI_MATCH(DMI_SYS_VENDOR, "TREKSTOR"), 1512 1502 DMI_MATCH(DMI_PRODUCT_NAME, "Primetab T13B"), 1503 + }, 1504 + }, 1505 + { 1506 + /* TrekStor SurfTab duo W1 10.1 ST10432-10b */ 1507 + .driver_data = (void *)&trekstor_surftab_duo_w1_data, 1508 + .matches = { 1509 + DMI_MATCH(DMI_SYS_VENDOR, "TrekStor"), 1510 + DMI_MATCH(DMI_PRODUCT_NAME, "SurfTab duo W1 10.1 (VT4)"), 1513 1511 }, 1514 1512 }, 1515 1513 {
-5
drivers/powercap/dtpm.c
··· 463 463 464 464 static int __init init_dtpm(void) 465 465 { 466 - struct dtpm_descr *dtpm_descr; 467 - 468 466 pct = powercap_register_control_type(NULL, "dtpm", NULL); 469 467 if (IS_ERR(pct)) { 470 468 pr_err("Failed to register control type\n"); 471 469 return PTR_ERR(pct); 472 470 } 473 - 474 - for_each_dtpm_table(dtpm_descr) 475 - dtpm_descr->init(); 476 471 477 472 return 0; 478 473 }
+2 -7
drivers/scsi/lpfc/lpfc_els.c
··· 5095 5095 /* NPort Recovery mode or node is just allocated */ 5096 5096 if (!lpfc_nlp_not_used(ndlp)) { 5097 5097 /* A LOGO is completing and the node is in NPR state. 5098 - * If this a fabric node that cleared its transport 5099 - * registration, release the rpi. 5098 + * Just unregister the RPI because the node is still 5099 + * required. 5100 5100 */ 5101 - spin_lock_irq(&ndlp->lock); 5102 - ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; 5103 - if (phba->sli_rev == LPFC_SLI_REV4) 5104 - ndlp->nlp_flag |= NLP_RELEASE_RPI; 5105 - spin_unlock_irq(&ndlp->lock); 5106 5101 lpfc_unreg_rpi(vport, ndlp); 5107 5102 } else { 5108 5103 /* Indicate the node has already released, should
+18
drivers/scsi/ufs/ufshcd-pci.c
··· 421 421 return err; 422 422 } 423 423 424 + static int ufs_intel_adl_init(struct ufs_hba *hba) 425 + { 426 + hba->nop_out_timeout = 200; 427 + hba->quirks |= UFSHCD_QUIRK_BROKEN_AUTO_HIBERN8; 428 + return ufs_intel_common_init(hba); 429 + } 430 + 424 431 static struct ufs_hba_variant_ops ufs_intel_cnl_hba_vops = { 425 432 .name = "intel-pci", 426 433 .init = ufs_intel_common_init, ··· 452 445 .link_startup_notify = ufs_intel_link_startup_notify, 453 446 .pwr_change_notify = ufs_intel_lkf_pwr_change_notify, 454 447 .apply_dev_quirks = ufs_intel_lkf_apply_dev_quirks, 448 + .resume = ufs_intel_resume, 449 + .device_reset = ufs_intel_device_reset, 450 + }; 451 + 452 + static struct ufs_hba_variant_ops ufs_intel_adl_hba_vops = { 453 + .name = "intel-pci", 454 + .init = ufs_intel_adl_init, 455 + .exit = ufs_intel_common_exit, 456 + .link_startup_notify = ufs_intel_link_startup_notify, 455 457 .resume = ufs_intel_resume, 456 458 .device_reset = ufs_intel_device_reset, 457 459 }; ··· 579 563 { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_ehl_hba_vops }, 580 564 { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_ehl_hba_vops }, 581 565 { PCI_VDEVICE(INTEL, 0x98FA), (kernel_ulong_t)&ufs_intel_lkf_hba_vops }, 566 + { PCI_VDEVICE(INTEL, 0x51FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops }, 567 + { PCI_VDEVICE(INTEL, 0x54FF), (kernel_ulong_t)&ufs_intel_adl_hba_vops }, 582 568 { } /* terminate list */ 583 569 }; 584 570
+13
drivers/tty/serial/8250/8250_bcm7271.c
··· 237 237 u32 rx_err; 238 238 u32 rx_timeout; 239 239 u32 rx_abort; 240 + u32 saved_mctrl; 240 241 }; 241 242 242 243 static struct dentry *brcmuart_debugfs_root; ··· 1134 1133 static int __maybe_unused brcmuart_suspend(struct device *dev) 1135 1134 { 1136 1135 struct brcmuart_priv *priv = dev_get_drvdata(dev); 1136 + struct uart_8250_port *up = serial8250_get_port(priv->line); 1137 + struct uart_port *port = &up->port; 1137 1138 1138 1139 serial8250_suspend_port(priv->line); 1139 1140 clk_disable_unprepare(priv->baud_mux_clk); 1141 + 1142 + /* 1143 + * This will prevent resume from enabling RTS before the 1144 + * baud rate has been restored. 1145 + */ 1146 + priv->saved_mctrl = port->mctrl; 1147 + port->mctrl = 0; 1140 1148 1141 1149 return 0; 1142 1150 } ··· 1153 1143 static int __maybe_unused brcmuart_resume(struct device *dev) 1154 1144 { 1155 1145 struct brcmuart_priv *priv = dev_get_drvdata(dev); 1146 + struct uart_8250_port *up = serial8250_get_port(priv->line); 1147 + struct uart_port *port = &up->port; 1156 1148 int ret; 1157 1149 1158 1150 ret = clk_prepare_enable(priv->baud_mux_clk); ··· 1177 1165 start_rx_dma(serial8250_get_port(priv->line)); 1178 1166 } 1179 1167 serial8250_resume_port(priv->line); 1168 + port->mctrl = priv->saved_mctrl; 1180 1169 return 0; 1181 1170 } 1182 1171
+25 -14
drivers/tty/serial/8250/8250_pci.c
··· 1324 1324 { 1325 1325 int scr; 1326 1326 int lcr; 1327 - int actual_baud; 1328 - int tolerance; 1329 1327 1330 - for (scr = 5 ; scr <= 15 ; scr++) { 1331 - actual_baud = 921600 * 16 / scr; 1332 - tolerance = actual_baud / 50; 1328 + for (scr = 16; scr > 4; scr--) { 1329 + unsigned int maxrate = port->uartclk / scr; 1330 + unsigned int divisor = max(maxrate / baud, 1U); 1331 + int delta = maxrate / divisor - baud; 1333 1332 1334 - if ((baud < actual_baud + tolerance) && 1335 - (baud > actual_baud - tolerance)) { 1333 + if (baud > maxrate + baud / 50) 1334 + continue; 1336 1335 1336 + if (delta > baud / 50) 1337 + divisor++; 1338 + 1339 + if (divisor > 0xffff) 1340 + continue; 1341 + 1342 + /* Update delta due to possible divisor change */ 1343 + delta = maxrate / divisor - baud; 1344 + if (abs(delta) < baud / 50) { 1337 1345 lcr = serial_port_in(port, UART_LCR); 1338 1346 serial_port_out(port, UART_LCR, lcr | 0x80); 1339 - 1340 - serial_port_out(port, UART_DLL, 1); 1341 - serial_port_out(port, UART_DLM, 0); 1347 + serial_port_out(port, UART_DLL, divisor & 0xff); 1348 + serial_port_out(port, UART_DLM, divisor >> 8 & 0xff); 1342 1349 serial_port_out(port, 2, 16 - scr); 1343 1350 serial_port_out(port, UART_LCR, lcr); 1344 1351 return; 1345 - } else if (baud > actual_baud) { 1346 - break; 1347 1352 } 1348 1353 } 1349 - serial8250_do_set_divisor(port, baud, quot, quot_frac); 1350 1354 } 1351 1355 static int pci_pericom_setup(struct serial_private *priv, 1352 1356 const struct pciserial_board *board, ··· 2295 2291 .setup = pci_pericom_setup_four_at_eight, 2296 2292 }, 2297 2293 { 2298 - .vendor = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, 2294 + .vendor = PCI_VENDOR_ID_ACCESIO, 2299 2295 .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, 2296 + .subvendor = PCI_ANY_ID, 2297 + .subdevice = PCI_ANY_ID, 2298 + .setup = pci_pericom_setup_four_at_eight, 2299 + }, 2300 + { 2301 + .vendor = PCI_VENDOR_ID_ACCESIO, 2302 + .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, 2300 2303 .subvendor = PCI_ANY_ID, 2301 2304 .subdevice = PCI_ANY_ID, 2302 2305 .setup = pci_pericom_setup_four_at_eight,
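The rewritten Pericom divisor search above no longer hard-codes a divisor of 1: it walks the sample-clock prescaler (scr) from 16 down and accepts the first prescaler/divisor pair whose rate error is under baud/50 (2%). A rough Python model of that loop, using the same integer math (register writes omitted):

```python
def pericom_find_divisor(uartclk, baud):
    """Model the scr/divisor search in the diff: for each sample-clock
    prescaler from 16 down to 5, derive a 16-bit divisor and accept
    the first pair whose rate error is within baud/50 (2%)."""
    for scr in range(16, 4, -1):
        maxrate = uartclk // scr
        divisor = max(maxrate // baud, 1)
        delta = maxrate // divisor - baud
        if baud > maxrate + baud // 50:
            continue                # even divisor 1 is too slow here
        if delta > baud // 50:
            divisor += 1            # round up toward the target rate
        if divisor > 0xffff:
            continue                # does not fit the DLL/DLM registers
        delta = maxrate // divisor - baud   # recompute after the bump
        if abs(delta) < baud // 50:
            return scr, divisor
    return None                     # no pair close enough
```

For a typical 14.7456 MHz Pericom clock and 115200 baud this yields scr=16 with divisor 8, rather than the old fixed divisor of 1.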
-7
drivers/tty/serial/8250/8250_port.c
··· 2024 2024 struct uart_8250_port *up = up_to_u8250p(port); 2025 2025 unsigned char mcr; 2026 2026 2027 - if (port->rs485.flags & SER_RS485_ENABLED) { 2028 - if (serial8250_in_MCR(up) & UART_MCR_RTS) 2029 - mctrl |= TIOCM_RTS; 2030 - else 2031 - mctrl &= ~TIOCM_RTS; 2032 - } 2033 - 2034 2027 mcr = serial8250_TIOCM_to_MCR(mctrl); 2035 2028 2036 2029 mcr = (mcr & up->mcr_mask) | up->mcr_force | up->mcr;
+1 -1
drivers/tty/serial/Kconfig
··· 1533 1533 tristate "LiteUART serial port support" 1534 1534 depends on HAS_IOMEM 1535 1535 depends on OF || COMPILE_TEST 1536 - depends on LITEX 1536 + depends on LITEX || COMPILE_TEST 1537 1537 select SERIAL_CORE 1538 1538 help 1539 1539 This driver is for the FPGA-based LiteUART serial controller from LiteX
+1
drivers/tty/serial/amba-pl011.c
··· 2947 2947 2948 2948 static const struct acpi_device_id __maybe_unused sbsa_uart_acpi_match[] = { 2949 2949 { "ARMH0011", 0 }, 2950 + { "ARMHB000", 0 }, 2950 2951 {}, 2951 2952 }; 2952 2953 MODULE_DEVICE_TABLE(acpi, sbsa_uart_acpi_match);
+1
drivers/tty/serial/fsl_lpuart.c
··· 2625 2625 OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1021a-lpuart", lpuart32_early_console_setup); 2626 2626 OF_EARLYCON_DECLARE(lpuart32, "fsl,ls1028a-lpuart", ls1028a_early_console_setup); 2627 2627 OF_EARLYCON_DECLARE(lpuart32, "fsl,imx7ulp-lpuart", lpuart32_imx_early_console_setup); 2628 + OF_EARLYCON_DECLARE(lpuart32, "fsl,imx8qxp-lpuart", lpuart32_imx_early_console_setup); 2628 2629 EARLYCON_DECLARE(lpuart, lpuart_early_console_setup); 2629 2630 EARLYCON_DECLARE(lpuart32, lpuart32_early_console_setup); 2630 2631
+17 -3
drivers/tty/serial/liteuart.c
··· 270 270 271 271 /* get membase */ 272 272 port->membase = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 273 - if (IS_ERR(port->membase)) 274 - return PTR_ERR(port->membase); 273 + if (IS_ERR(port->membase)) { 274 + ret = PTR_ERR(port->membase); 275 + goto err_erase_id; 276 + } 275 277 276 278 /* values not from device tree */ 277 279 port->dev = &pdev->dev; ··· 287 285 port->line = dev_id; 288 286 spin_lock_init(&port->lock); 289 287 290 - return uart_add_one_port(&liteuart_driver, &uart->port); 288 + platform_set_drvdata(pdev, port); 289 + 290 + ret = uart_add_one_port(&liteuart_driver, &uart->port); 291 + if (ret) 292 + goto err_erase_id; 293 + 294 + return 0; 295 + 296 + err_erase_id: 297 + xa_erase(&liteuart_array, uart->id); 298 + 299 + return ret; 291 300 } 292 301 293 302 static int liteuart_remove(struct platform_device *pdev) ··· 306 293 struct uart_port *port = platform_get_drvdata(pdev); 307 294 struct liteuart_port *uart = to_liteuart_port(port); 308 295 296 + uart_remove_one_port(&liteuart_driver, port); 309 297 xa_erase(&liteuart_array, uart->id); 310 298 311 299 return 0;
+3
drivers/tty/serial/msm_serial.c
··· 598 598 u32 val; 599 599 int ret; 600 600 601 + if (IS_ENABLED(CONFIG_CONSOLE_POLL)) 602 + return; 603 + 601 604 if (!dma->chan) 602 605 return; 603 606
+2 -2
drivers/tty/serial/serial-tegra.c
··· 1506 1506 .fifo_mode_enable_status = false, 1507 1507 .uart_max_port = 5, 1508 1508 .max_dma_burst_bytes = 4, 1509 - .error_tolerance_low_range = 0, 1509 + .error_tolerance_low_range = -4, 1510 1510 .error_tolerance_high_range = 4, 1511 1511 }; 1512 1512 ··· 1517 1517 .fifo_mode_enable_status = false, 1518 1518 .uart_max_port = 5, 1519 1519 .max_dma_burst_bytes = 4, 1520 - .error_tolerance_low_range = 0, 1520 + .error_tolerance_low_range = -4, 1521 1521 .error_tolerance_high_range = 4, 1522 1522 }; 1523 1523
+17 -1
drivers/tty/serial/serial_core.c
··· 1075 1075 goto out; 1076 1076 1077 1077 if (!tty_io_error(tty)) { 1078 + if (uport->rs485.flags & SER_RS485_ENABLED) { 1079 + set &= ~TIOCM_RTS; 1080 + clear &= ~TIOCM_RTS; 1081 + } 1082 + 1078 1083 uart_update_mctrl(uport, set, clear); 1079 1084 ret = 0; 1080 1085 } ··· 1554 1549 { 1555 1550 struct uart_state *state = container_of(port, struct uart_state, port); 1556 1551 struct uart_port *uport = uart_port_check(state); 1552 + char *buf; 1557 1553 1558 1554 /* 1559 1555 * At this point, we stop accepting input. To do this, we ··· 1576 1570 */ 1577 1571 tty_port_set_suspended(port, 0); 1578 1572 1579 - uart_change_pm(state, UART_PM_STATE_OFF); 1573 + /* 1574 + * Free the transmit buffer. 1575 + */ 1576 + spin_lock_irq(&uport->lock); 1577 + buf = state->xmit.buf; 1578 + state->xmit.buf = NULL; 1579 + spin_unlock_irq(&uport->lock); 1580 1580 1581 + if (buf) 1582 + free_page((unsigned long)buf); 1583 + 1584 + uart_change_pm(state, UART_PM_STATE_OFF); 1581 1585 } 1582 1586 1583 1587 static void uart_wait_until_sent(struct tty_struct *tty, int timeout)
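The serial_core hunk above stops userspace modem-control ioctls from toggling RTS while RS-485 is enabled, since the core drives RTS for direction control in that mode. The masking step can be sketched as:

```python
TIOCM_RTS = 0x004  # standard Linux termios modem-control bit

def filter_rs485_mctrl(set_bits, clear_bits, rs485_enabled):
    """When RS-485 is enabled, strip TIOCM_RTS from both the set and
    clear masks so userspace cannot disturb direction control."""
    if rs485_enabled:
        set_bits &= ~TIOCM_RTS
        clear_bits &= ~TIOCM_RTS
    return set_bits, clear_bits
```

Other modem-control bits (DTR, loopback, etc.) pass through untouched; only RTS is reserved while RS-485 is active.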
+4 -16
drivers/usb/cdns3/cdns3-gadget.c
··· 337 337 cdns3_ep_inc_trb(&priv_ep->dequeue, &priv_ep->ccs, priv_ep->num_trbs); 338 338 } 339 339 340 - static void cdns3_move_deq_to_next_trb(struct cdns3_request *priv_req) 341 - { 342 - struct cdns3_endpoint *priv_ep = priv_req->priv_ep; 343 - int current_trb = priv_req->start_trb; 344 - 345 - while (current_trb != priv_req->end_trb) { 346 - cdns3_ep_inc_deq(priv_ep); 347 - current_trb = priv_ep->dequeue; 348 - } 349 - 350 - cdns3_ep_inc_deq(priv_ep); 351 - } 352 - 353 340 /** 354 341 * cdns3_allow_enable_l1 - enable/disable permits to transition to L1. 355 342 * @priv_dev: Extended gadget object ··· 1504 1517 1505 1518 trb = priv_ep->trb_pool + priv_ep->dequeue; 1506 1519 1507 - /* Request was dequeued and TRB was changed to TRB_LINK. */ 1508 - if (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) { 1520 + /* The TRB was changed as link TRB, and the request was handled at ep_dequeue */ 1521 + while (TRB_FIELD_TO_TYPE(le32_to_cpu(trb->control)) == TRB_LINK) { 1509 1522 trace_cdns3_complete_trb(priv_ep, trb); 1510 - cdns3_move_deq_to_next_trb(priv_req); 1523 + cdns3_ep_inc_deq(priv_ep); 1524 + trb = priv_ep->trb_pool + priv_ep->dequeue; 1511 1525 } 1512 1526 1513 1527 if (!request->stream_id) {
+3
drivers/usb/cdns3/cdnsp-mem.c
··· 987 987 988 988 /* Set up the endpoint ring. */ 989 989 pep->ring = cdnsp_ring_alloc(pdev, 2, ring_type, max_packet, mem_flags); 990 + if (!pep->ring) 991 + return -ENOMEM; 992 + 990 993 pep->skip = false; 991 994 992 995 /* Fill the endpoint context */
+1
drivers/usb/cdns3/host.c
··· 10 10 */ 11 11 12 12 #include <linux/platform_device.h> 13 + #include <linux/slab.h> 13 14 #include "core.h" 14 15 #include "drd.h" 15 16 #include "host-export.h"
+14 -7
drivers/usb/host/xhci-ring.c
··· 366 366 /* Must be called with xhci->lock held, releases and aquires lock back */ 367 367 static int xhci_abort_cmd_ring(struct xhci_hcd *xhci, unsigned long flags) 368 368 { 369 - u32 temp_32; 369 + struct xhci_segment *new_seg = xhci->cmd_ring->deq_seg; 370 + union xhci_trb *new_deq = xhci->cmd_ring->dequeue; 371 + u64 crcr; 370 372 int ret; 371 373 372 374 xhci_dbg(xhci, "Abort command ring\n"); ··· 377 375 378 376 /* 379 377 * The control bits like command stop, abort are located in lower 380 - * dword of the command ring control register. Limit the write 381 - * to the lower dword to avoid corrupting the command ring pointer 382 - * in case if the command ring is stopped by the time upper dword 383 - * is written. 378 + * dword of the command ring control register. 379 + * Some controllers require all 64 bits to be written to abort the ring. 380 + * Make sure the upper dword is valid, pointing to the next command, 381 + * avoiding corrupting the command ring pointer in case the command ring 382 + * is stopped by the time the upper dword is written. 384 383 */ 385 - temp_32 = readl(&xhci->op_regs->cmd_ring); 386 - writel(temp_32 | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 384 + next_trb(xhci, NULL, &new_seg, &new_deq); 385 + if (trb_is_link(new_deq)) 386 + next_trb(xhci, NULL, &new_seg, &new_deq); 387 + 388 + crcr = xhci_trb_virt_to_dma(new_seg, new_deq); 389 + xhci_write_64(xhci, crcr | CMD_RING_ABORT, &xhci->op_regs->cmd_ring); 387 390 388 391 /* Section 4.6.1.2 of xHCI 1.0 spec says software should also time the 389 392 * completion of the Command Abort operation. If CRR is not negated in 5
-4
drivers/usb/typec/tcpm/tcpm.c
··· 4110 4110 tcpm_try_src(port) ? SRC_TRY 4111 4111 : SNK_ATTACHED, 4112 4112 0); 4113 - else 4114 - /* Wait for VBUS, but not forever */ 4115 - tcpm_set_state(port, PORT_RESET, PD_T_PS_SOURCE_ON); 4116 4113 break; 4117 - 4118 4114 case SRC_TRY: 4119 4115 port->try_src_count++; 4120 4116 tcpm_set_cc(port, tcpm_rp_cc(port));
+3 -2
drivers/vfio/pci/vfio_pci_igd.c
··· 98 98 version = cpu_to_le16(0x0201); 99 99 100 100 if (igd_opregion_shift_copy(buf, &off, 101 - &version + (pos - OPREGION_VERSION), 101 + (u8 *)&version + 102 + (pos - OPREGION_VERSION), 102 103 &pos, &remaining, bytes)) 103 104 return -EFAULT; 104 105 } ··· 122 121 OPREGION_SIZE : 0); 123 122 124 123 if (igd_opregion_shift_copy(buf, &off, 125 - &rvda + (pos - OPREGION_RVDA), 124 + (u8 *)&rvda + (pos - OPREGION_RVDA), 126 125 &pos, &remaining, bytes)) 127 126 return -EFAULT; 128 127 }
+14 -14
drivers/vfio/vfio.c
··· 232 232 } 233 233 #endif /* CONFIG_VFIO_NOIOMMU */ 234 234 235 - /** 235 + /* 236 236 * IOMMU driver registration 237 237 */ 238 238 int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops) ··· 285 285 unsigned long action, void *data); 286 286 static void vfio_group_get(struct vfio_group *group); 287 287 288 - /** 288 + /* 289 289 * Container objects - containers are created when /dev/vfio/vfio is 290 290 * opened, but their lifecycle extends until the last user is done, so 291 291 * it's freed via kref. Must support container/group/device being ··· 309 309 kref_put(&container->kref, vfio_container_release); 310 310 } 311 311 312 - /** 312 + /* 313 313 * Group objects - create, release, get, put, search 314 314 */ 315 315 static struct vfio_group * ··· 488 488 return group; 489 489 } 490 490 491 - /** 491 + /* 492 492 * Device objects - create, release, get, put, search 493 493 */ 494 494 /* Device reference always implies a group reference */ ··· 595 595 return ret; 596 596 } 597 597 598 - /** 598 + /* 599 599 * Async device support 600 600 */ 601 601 static int vfio_group_nb_add_dev(struct vfio_group *group, struct device *dev) ··· 689 689 return NOTIFY_OK; 690 690 } 691 691 692 - /** 692 + /* 693 693 * VFIO driver API 694 694 */ 695 695 void vfio_init_group_dev(struct vfio_device *device, struct device *dev, ··· 831 831 } 832 832 EXPORT_SYMBOL_GPL(vfio_register_emulated_iommu_dev); 833 833 834 - /** 834 + /* 835 835 * Get a reference to the vfio_device for a device. Even if the 836 836 * caller thinks they own the device, they could be racing with a 837 837 * release call path, so we can't trust drvdata for the shortcut. ··· 965 965 } 966 966 EXPORT_SYMBOL_GPL(vfio_unregister_group_dev); 967 967 968 - /** 968 + /* 969 969 * VFIO base fd, /dev/vfio/vfio 970 970 */ 971 971 static long vfio_ioctl_check_extension(struct vfio_container *container, ··· 1183 1183 .compat_ioctl = compat_ptr_ioctl, 1184 1184 }; 1185 1185 1186 - /** 1186 + /* 1187 1187 * VFIO Group fd, /dev/vfio/$GROUP 1188 1188 */ 1189 1189 static void __vfio_group_unset_container(struct vfio_group *group) ··· 1536 1536 .release = vfio_group_fops_release, 1537 1537 }; 1538 1538 1539 - /** 1539 + /* 1540 1540 * VFIO Device fd 1541 1541 */ 1542 1542 static int vfio_device_fops_release(struct inode *inode, struct file *filep) ··· 1611 1611 .mmap = vfio_device_fops_mmap, 1612 1612 }; 1613 1613 1614 - /** 1614 + /* 1615 1615 * External user API, exported by symbols to be linked dynamically. 1616 1616 * 1617 1617 * The protocol includes: ··· 1659 1659 } 1660 1660 EXPORT_SYMBOL_GPL(vfio_group_get_external_user); 1661 1661 1662 - /** 1662 + /* 1663 1663 * External user API, exported by symbols to be linked dynamically. 1664 1664 * The external user passes in a device pointer 1665 1665 * to verify that: ··· 1725 1725 } 1726 1726 EXPORT_SYMBOL_GPL(vfio_external_check_extension); 1727 1727 1728 - /** 1728 + /* 1729 1729 * Sub-module support 1730 1730 */ 1731 1731 /* ··· 2272 2272 } 2273 2273 EXPORT_SYMBOL_GPL(vfio_group_iommu_domain); 2274 2274 2275 - /** 2275 + /* 2276 2276 * Module/class support 2277 2277 */ 2278 2278 static char *vfio_devnode(struct device *dev, umode_t *mode)
+9 -5
drivers/video/console/vgacon.c
··· 366 366 struct uni_pagedir *p; 367 367 368 368 /* 369 - * We cannot be loaded as a module, therefore init is always 1, 370 - * but vgacon_init can be called more than once, and init will 371 - * not be 1. 369 + * We cannot be loaded as a module, therefore init will be 1 370 + * if we are the default console, however if we are a fallback 371 + * console, for example if fbcon has failed registration, then 372 + * init will be 0, so we need to make sure our boot parameters 373 + * have been copied to the console structure for vgacon_resize 374 + * ultimately called by vc_resize. Any subsequent calls to 375 + * vgacon_init will have init set to 0 too. 372 376 */ 373 377 c->vc_can_do_color = vga_can_do_color; 378 + c->vc_scan_lines = vga_scan_lines; 379 + c->vc_font.height = c->vc_cell_height = vga_video_font_height; 374 380 375 381 /* set dimensions manually if init != 0 since vc_resize() will fail */ 376 382 if (init) { ··· 385 379 } else 386 380 vc_resize(c, vga_video_num_columns, vga_video_num_lines); 387 381 388 - c->vc_scan_lines = vga_scan_lines; 389 - c->vc_font.height = c->vc_cell_height = vga_video_font_height; 390 382 c->vc_complement_mask = 0x7700; 391 383 if (vga_512_chars) 392 384 c->vc_hi_font_mask = 0x0800;
+5 -6
fs/cifs/connect.c
··· 1562 1562 /* fscache server cookies are based on primary channel only */ 1563 1563 if (!CIFS_SERVER_IS_CHAN(tcp_ses)) 1564 1564 cifs_fscache_get_client_cookie(tcp_ses); 1565 + #ifdef CONFIG_CIFS_FSCACHE 1566 + else 1567 + tcp_ses->fscache = tcp_ses->primary_server->fscache; 1568 + #endif /* CONFIG_CIFS_FSCACHE */ 1565 1569 1566 1570 /* queue echo request delayed work */ 1567 1571 queue_delayed_work(cifsiod_wq, &tcp_ses->echo, tcp_ses->echo_interval); ··· 3050 3046 cifs_dbg(VFS, "read only mount of RW share\n"); 3051 3047 /* no need to log a RW mount of a typical RW share */ 3052 3048 } 3053 - /* 3054 - * The cookie is initialized from volume info returned above. 3055 - * Inside cifs_fscache_get_super_cookie it checks 3056 - * that we do not get super cookie twice. 3057 - */ 3058 - cifs_fscache_get_super_cookie(tcon); 3059 3049 } 3060 3050 3061 3051 /* ··· 3424 3426 */ 3425 3427 mount_put_conns(mnt_ctx); 3426 3428 mount_get_dfs_conns(mnt_ctx); 3429 + set_root_ses(mnt_ctx); 3427 3430 3428 3431 full_path = build_unc_path_to_root(ctx, cifs_sb, true); 3429 3432 if (IS_ERR(full_path))
+10 -36
fs/cifs/fscache.c
··· 16 16 * Key layout of CIFS server cache index object 17 17 */ 18 18 struct cifs_server_key { 19 - struct { 20 - uint16_t family; /* address family */ 21 - __be16 port; /* IP port */ 22 - } hdr; 23 - union { 24 - struct in_addr ipv4_addr; 25 - struct in6_addr ipv6_addr; 26 - }; 19 + __u64 conn_id; 27 20 } __packed; 28 21 29 22 /* ··· 24 31 */ 25 32 void cifs_fscache_get_client_cookie(struct TCP_Server_Info *server) 26 33 { 27 - const struct sockaddr *sa = (struct sockaddr *) &server->dstaddr; 28 - const struct sockaddr_in *addr = (struct sockaddr_in *) sa; 29 - const struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *) sa; 30 34 struct cifs_server_key key; 31 - uint16_t key_len = sizeof(key.hdr); 32 - 33 - memset(&key, 0, sizeof(key)); 34 35 35 36 /* 36 - * Should not be a problem as sin_family/sin6_family overlays 37 - * sa_family field 37 + * Check if cookie was already initialized so don't reinitialize it. 38 + * In the future, as we integrate with newer fscache features, 39 + * we may want to instead add a check if cookie has changed 38 40 */ 39 - key.hdr.family = sa->sa_family; 40 - switch (sa->sa_family) { 41 - case AF_INET: 42 - key.hdr.port = addr->sin_port; 43 - key.ipv4_addr = addr->sin_addr; 44 - key_len += sizeof(key.ipv4_addr); 45 - break; 46 - 47 - case AF_INET6: 48 - key.hdr.port = addr6->sin6_port; 49 - key.ipv6_addr = addr6->sin6_addr; 50 - key_len += sizeof(key.ipv6_addr); 51 - break; 52 - 53 - default: 54 - cifs_dbg(VFS, "Unknown network family '%d'\n", sa->sa_family); 55 - server->fscache = NULL; 41 + if (server->fscache) 56 42 return; 57 - } 43 + 44 + memset(&key, 0, sizeof(key)); 45 + key.conn_id = server->conn_id; 58 46 59 47 server->fscache = 60 48 fscache_acquire_cookie(cifs_fscache_netfs.primary_index, 61 49 &cifs_fscache_server_index_def, 62 - &key, key_len, 50 + &key, sizeof(key), 63 51 NULL, 0, 64 52 server, 0, true); 65 53 cifs_dbg(FYI, "%s: (0x%p/0x%p)\n", ··· 66 92 * In the future, as we integrate with newer fscache features, 67 93 * we may want to instead add a check if cookie has changed 68 94 */ 69 - if (tcon->fscache == NULL) 95 + if (tcon->fscache) 70 96 return; 71 97 72 98 sharename = extract_sharename(tcon->treeName);
+7
fs/cifs/inode.c
··· 1376 1376 inode = ERR_PTR(rc); 1377 1377 } 1378 1378 1379 + /* 1380 + * The cookie is initialized from volume info returned above. 1381 + * Inside cifs_fscache_get_super_cookie it checks 1382 + * that we do not get super cookie twice. 1383 + */ 1384 + cifs_fscache_get_super_cookie(tcon); 1385 + 1379 1386 out: 1380 1387 kfree(path); 1381 1388 free_xid(xid);
+4
fs/file.c
··· 858 858 file = NULL; 859 859 else if (!get_file_rcu_many(file, refs)) 860 860 goto loop; 861 + else if (files_lookup_fd_raw(files, fd) != file) { 862 + fput_many(file, refs); 863 + goto loop; 864 + } 861 865 } 862 866 rcu_read_unlock(); 863 867
+7 -3
fs/gfs2/glock.c
··· 1857 1857 1858 1858 void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state) 1859 1859 { 1860 - struct gfs2_holder mock_gh = { .gh_gl = gl, .gh_state = state, }; 1861 1860 unsigned long delay = 0; 1862 1861 unsigned long holdtime; 1863 1862 unsigned long now = jiffies; ··· 1889 1890 * keep the glock until the last strong holder is done with it. 1890 1891 */ 1891 1892 if (!find_first_strong_holder(gl)) { 1892 - if (state == LM_ST_UNLOCKED) 1893 - mock_gh.gh_state = LM_ST_EXCLUSIVE; 1893 + struct gfs2_holder mock_gh = { 1894 + .gh_gl = gl, 1895 + .gh_state = (state == LM_ST_UNLOCKED) ? 1896 + LM_ST_EXCLUSIVE : state, 1897 + .gh_iflags = BIT(HIF_HOLDER) 1898 + }; 1899 + 1894 1900 demote_incompat_holders(gl, &mock_gh); 1895 1901 } 1896 1902 handle_callback(gl, state, delay, true);
+45 -64
fs/gfs2/inode.c
··· 40 40 static const struct inode_operations gfs2_dir_iops; 41 41 static const struct inode_operations gfs2_symlink_iops; 42 42 43 - static int iget_test(struct inode *inode, void *opaque) 44 - { 45 - u64 no_addr = *(u64 *)opaque; 46 - 47 - return GFS2_I(inode)->i_no_addr == no_addr; 48 - } 49 - 50 - static int iget_set(struct inode *inode, void *opaque) 51 - { 52 - u64 no_addr = *(u64 *)opaque; 53 - 54 - GFS2_I(inode)->i_no_addr = no_addr; 55 - inode->i_ino = no_addr; 56 - return 0; 57 - } 58 - 59 - static struct inode *gfs2_iget(struct super_block *sb, u64 no_addr) 60 - { 61 - struct inode *inode; 62 - 63 - repeat: 64 - inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr); 65 - if (!inode) 66 - return inode; 67 - if (is_bad_inode(inode)) { 68 - iput(inode); 69 - goto repeat; 70 - } 71 - return inode; 72 - } 73 - 74 43 /** 75 44 * gfs2_set_iop - Sets inode operations 76 45 * @inode: The inode with correct i_mode filled in ··· 73 104 } 74 105 } 75 106 107 + static int iget_test(struct inode *inode, void *opaque) 108 + { 109 + u64 no_addr = *(u64 *)opaque; 110 + 111 + return GFS2_I(inode)->i_no_addr == no_addr; 112 + } 113 + 114 + static int iget_set(struct inode *inode, void *opaque) 115 + { 116 + u64 no_addr = *(u64 *)opaque; 117 + 118 + GFS2_I(inode)->i_no_addr = no_addr; 119 + inode->i_ino = no_addr; 120 + return 0; 121 + } 122 + 76 123 /** 77 124 * gfs2_inode_lookup - Lookup an inode 78 125 * @sb: The super block ··· 117 132 { 118 133 struct inode *inode; 119 134 struct gfs2_inode *ip; 120 - struct gfs2_glock *io_gl = NULL; 121 135 struct gfs2_holder i_gh; 122 136 int error; 123 137 124 138 gfs2_holder_mark_uninitialized(&i_gh); 125 - inode = gfs2_iget(sb, no_addr); 139 + inode = iget5_locked(sb, no_addr, iget_test, iget_set, &no_addr); 126 140 if (!inode) 127 141 return ERR_PTR(-ENOMEM); 128 142 ··· 129 145 130 146 if (inode->i_state & I_NEW) { 131 147 struct gfs2_sbd *sdp = GFS2_SB(inode); 148 + struct gfs2_glock *io_gl; 132 149 133 150 error = gfs2_glock_get(sdp, no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl); 134 151 if (unlikely(error)) 135 152 goto fail; 136 - flush_delayed_work(&ip->i_gl->gl_work); 137 - 138 - error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 139 - if (unlikely(error)) 140 - goto fail; 141 - if (blktype != GFS2_BLKST_UNLINKED) 142 - gfs2_cancel_delete_work(io_gl); 143 153 144 154 if (type == DT_UNKNOWN || blktype != GFS2_BLKST_FREE) { 145 155 /* 146 156 * The GL_SKIP flag indicates to skip reading the inode 147 - * block. We read the inode with gfs2_inode_refresh 157 + * block. We read the inode when instantiating it 148 158 * after possibly checking the block type. 149 159 */ 150 160 error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, ··· 159 181 } 160 182 } 161 183 162 - glock_set_object(ip->i_gl, ip); 163 184 set_bit(GLF_INSTANTIATE_NEEDED, &ip->i_gl->gl_flags); 164 185 186 + error = gfs2_glock_get(sdp, no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 165 187 if (unlikely(error)) 166 188 goto fail; 167 - glock_set_object(ip->i_iopen_gh.gh_gl, ip); 189 + if (blktype != GFS2_BLKST_UNLINKED) 190 + gfs2_cancel_delete_work(io_gl); 191 + error = gfs2_glock_nq_init(io_gl, LM_ST_SHARED, GL_EXACT, &ip->i_iopen_gh); 168 192 gfs2_glock_put(io_gl); 169 - io_gl = NULL; 193 + if (unlikely(error)) 194 + goto fail; 170 195 171 196 /* Lowest possible timestamp; will be overwritten in gfs2_dinode_in. */ 172 197 inode->i_atime.tv_sec = 1LL << (8 * sizeof(inode->i_atime.tv_sec) - 1); 173 198 inode->i_atime.tv_nsec = 0; 174 199 200 + glock_set_object(ip->i_gl, ip); 201 + 175 202 if (type == DT_UNKNOWN) { 176 203 /* Inode glock must be locked already */ 177 204 error = gfs2_instantiate(&i_gh); 178 - if (error) 205 + if (error) { 206 + glock_clear_object(ip->i_gl, ip); 179 207 goto fail; 208 + } 180 209 } else { 181 210 ip->i_no_formal_ino = no_formal_ino; 182 211 inode->i_mode = DT2IF(type); ··· 191 206 192 207 if (gfs2_holder_initialized(&i_gh)) 193 208 gfs2_glock_dq_uninit(&i_gh); 209 + glock_set_object(ip->i_iopen_gh.gh_gl, ip); 194 210 195 211 gfs2_set_iop(inode); 212 + unlock_new_inode(inode); 196 213 } 197 214 198 215 if (no_formal_ino && ip->i_no_formal_ino && 199 216 no_formal_ino != ip->i_no_formal_ino) { 200 - error = -ESTALE; 201 - if (inode->i_state & I_NEW) 202 - goto fail; 203 217 iput(inode); 204 218 return ERR_PTR(-ESTALE); 205 219 } 206 - 207 - if (inode->i_state & I_NEW) 208 - unlock_new_inode(inode); 209 220 210 221 return inode; 211 222 212 223 fail: 213 - if (gfs2_holder_initialized(&ip->i_iopen_gh)) { 214 - glock_clear_object(ip->i_iopen_gh.gh_gl, ip); 224 + if (gfs2_holder_initialized(&ip->i_iopen_gh)) 215 225 gfs2_glock_dq_uninit(&ip->i_iopen_gh); 216 - } 217 - if (io_gl) 218 - gfs2_glock_put(io_gl); 219 226 if (gfs2_holder_initialized(&i_gh)) 220 227 gfs2_glock_dq_uninit(&i_gh); 221 228 iget_failed(inode); ··· 707 730 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl); 708 731 if (error) 709 732 goto fail_free_inode; 710 - flush_delayed_work(&ip->i_gl->gl_work); 711 733 712 734 error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl); 713 735 if (error) 714 736 goto fail_free_inode; 715 737 gfs2_cancel_delete_work(io_gl); 716 738 739 + error = insert_inode_locked4(inode, ip->i_no_addr, iget_test, &ip->i_no_addr); 740 + BUG_ON(error); 741 + 717 742 error =
gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, ghs + 1); 718 743 if (error) 719 744 goto fail_gunlock2; 720 745 721 - glock_set_object(ip->i_gl, ip); 722 746 error = gfs2_trans_begin(sdp, blocks, 0); 723 747 if (error) 724 748 goto fail_gunlock2; ··· 735 757 if (error) 736 758 goto fail_gunlock2; 737 759 760 + glock_set_object(ip->i_gl, ip); 738 761 glock_set_object(io_gl, ip); 739 762 gfs2_set_iop(inode); 740 - insert_inode_hash(inode); 741 763 742 764 free_vfs_inode = 0; /* After this point, the inode is no longer 743 765 considered free. Any failures need to undo ··· 779 801 gfs2_glock_dq_uninit(ghs + 1); 780 802 gfs2_glock_put(io_gl); 781 803 gfs2_qa_put(dip); 804 + unlock_new_inode(inode); 782 805 return error; 783 806 784 807 fail_gunlock3: 808 + glock_clear_object(ip->i_gl, ip); 785 809 glock_clear_object(io_gl, ip); 786 810 gfs2_glock_dq_uninit(&ip->i_iopen_gh); 787 811 fail_gunlock2: 788 - glock_clear_object(io_gl, ip); 789 812 gfs2_glock_put(io_gl); 790 813 fail_free_inode: 791 814 if (ip->i_gl) { 792 - glock_clear_object(ip->i_gl, ip); 793 815 if (free_vfs_inode) /* else evict will do the put for us */ 794 816 gfs2_glock_put(ip->i_gl); 795 817 } ··· 807 829 mark_inode_dirty(inode); 808 830 set_bit(free_vfs_inode ? GIF_FREE_VFS_INODE : GIF_ALLOC_FAILED, 809 831 &GFS2_I(inode)->i_flags); 810 - iput(inode); 832 + if (inode->i_state & I_NEW) 833 + iget_failed(inode); 834 + else 835 + iput(inode); 811 836 } 812 837 if (gfs2_holder_initialized(ghs + 1)) 813 838 gfs2_glock_dq_uninit(ghs + 1);
+7
fs/io-wq.c
··· 714 714 715 715 static inline bool io_should_retry_thread(long err) 716 716 { 717 + /* 718 + * Prevent perpetual task_work retry, if the task (or its group) is 719 + * exiting. 720 + */ 721 + if (fatal_signal_pending(current)) 722 + return false; 723 + 717 724 switch (err) { 718 725 case -EAGAIN: 719 726 case -ERESTARTSYS:
+8 -13
fs/netfs/read_helper.c
··· 354 354 netfs_rreq_do_write_to_cache(rreq); 355 355 } 356 356 357 - static void netfs_rreq_write_to_cache(struct netfs_read_request *rreq, 358 - bool was_async) 357 + static void netfs_rreq_write_to_cache(struct netfs_read_request *rreq) 359 358 { 360 - if (was_async) { 361 - rreq->work.func = netfs_rreq_write_to_cache_work; 362 - if (!queue_work(system_unbound_wq, &rreq->work)) 363 - BUG(); 364 - } else { 365 - netfs_rreq_do_write_to_cache(rreq); 366 - } 359 + rreq->work.func = netfs_rreq_write_to_cache_work; 360 + if (!queue_work(system_unbound_wq, &rreq->work)) 361 + BUG(); 367 362 } 368 363 369 364 /* ··· 553 558 wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS); 554 559 555 560 if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags)) 556 - return netfs_rreq_write_to_cache(rreq, was_async); 561 + return netfs_rreq_write_to_cache(rreq); 557 562 558 563 netfs_rreq_completed(rreq, was_async); 559 564 } ··· 955 960 rreq = netfs_alloc_read_request(ops, netfs_priv, file); 956 961 if (!rreq) { 957 962 if (netfs_priv) 958 - ops->cleanup(netfs_priv, folio_file_mapping(folio)); 963 + ops->cleanup(folio_file_mapping(folio), netfs_priv); 959 964 folio_unlock(folio); 960 965 return -ENOMEM; 961 966 } ··· 1186 1191 goto error; 1187 1192 have_folio_no_wait: 1188 1193 if (netfs_priv) 1189 - ops->cleanup(netfs_priv, mapping); 1194 + ops->cleanup(mapping, netfs_priv); 1190 1195 *_folio = folio; 1191 1196 _leave(" = 0"); 1192 1197 return 0; ··· 1197 1202 folio_unlock(folio); 1198 1203 folio_put(folio); 1199 1204 if (netfs_priv) 1200 - ops->cleanup(netfs_priv, mapping); 1205 + ops->cleanup(mapping, netfs_priv); 1201 1206 _leave(" = %d", ret); 1202 1207 return ret; 1203 1208 }
-1
fs/xfs/xfs_inode.c
··· 3122 3122 * appropriately. 3123 3123 */ 3124 3124 if (flags & RENAME_WHITEOUT) { 3125 - ASSERT(!(flags & (RENAME_NOREPLACE | RENAME_EXCHANGE))); 3126 3125 error = xfs_rename_alloc_whiteout(mnt_userns, target_dp, &wip); 3127 3126 if (error) 3128 3127 return error;
+3 -14
include/linux/bpf.h
··· 732 732 struct bpf_trampoline *bpf_trampoline_get(u64 key, 733 733 struct bpf_attach_target_info *tgt_info); 734 734 void bpf_trampoline_put(struct bpf_trampoline *tr); 735 + int arch_prepare_bpf_dispatcher(void *image, s64 *funcs, int num_funcs); 735 736 #define BPF_DISPATCHER_INIT(_name) { \ 736 737 .mutex = __MUTEX_INITIALIZER(_name.mutex), \ 737 738 .func = &_name##_func, \ ··· 1353 1352 * kprobes, tracepoints) to prevent deadlocks on map operations as any of 1354 1353 * these events can happen inside a region which holds a map bucket lock 1355 1354 * and can deadlock on it. 1356 - * 1357 - * Use the preemption safe inc/dec variants on RT because migrate disable 1358 - * is preemptible on RT and preemption in the middle of the RMW operation 1359 - * might lead to inconsistent state. Use the raw variants for non RT 1360 - * kernels as migrate_disable() maps to preempt_disable() so the slightly 1361 - * more expensive save operation can be avoided. 1362 1355 */ 1363 1356 static inline void bpf_disable_instrumentation(void) 1364 1357 { 1365 1358 migrate_disable(); 1366 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 1367 - this_cpu_inc(bpf_prog_active); 1368 - else 1369 - __this_cpu_inc(bpf_prog_active); 1359 + this_cpu_inc(bpf_prog_active); 1370 1360 } 1371 1361 1372 1362 static inline void bpf_enable_instrumentation(void) 1373 1363 { 1374 - if (IS_ENABLED(CONFIG_PREEMPT_RT)) 1375 - this_cpu_dec(bpf_prog_active); 1376 - else 1377 - __this_cpu_dec(bpf_prog_active); 1364 + this_cpu_dec(bpf_prog_active); 1378 1365 migrate_enable(); 1379 1366 } 1380 1367
+10 -4
include/linux/btf.h
··· 245 245 struct module *owner; 246 246 }; 247 247 248 - struct kfunc_btf_id_list; 248 + struct kfunc_btf_id_list { 249 + struct list_head list; 250 + struct mutex mutex; 251 + }; 249 252 250 253 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES 251 254 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, ··· 257 254 struct kfunc_btf_id_set *s); 258 255 bool bpf_check_mod_kfunc_call(struct kfunc_btf_id_list *klist, u32 kfunc_id, 259 256 struct module *owner); 257 + 258 + extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list; 259 + extern struct kfunc_btf_id_list prog_test_kfunc_list; 260 260 #else 261 261 static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, 262 262 struct kfunc_btf_id_set *s) ··· 274 268 { 275 269 return false; 276 270 } 271 + 272 + static struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list __maybe_unused; 273 + static struct kfunc_btf_id_list prog_test_kfunc_list __maybe_unused; 277 274 #endif 278 275 279 276 #define DEFINE_KFUNC_BTF_ID_SET(set, name) \ 280 277 struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set), \ 281 278 THIS_MODULE } 282 - 283 - extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list; 284 - extern struct kfunc_btf_id_list prog_test_kfunc_list; 285 279 286 280 #endif
-1
include/linux/cacheinfo.h
··· 3 3 #define _LINUX_CACHEINFO_H 4 4 5 5 #include <linux/bitops.h> 6 - #include <linux/cpu.h> 7 6 #include <linux/cpumask.h> 8 7 #include <linux/smp.h> 9 8
+1
include/linux/device/driver.h
··· 18 18 #include <linux/klist.h> 19 19 #include <linux/pm.h> 20 20 #include <linux/device/bus.h> 21 + #include <linux/module.h> 21 22 22 23 /** 23 24 * enum probe_type - device driver probe type to try
+1 -4
include/linux/filter.h
··· 6 6 #define __LINUX_FILTER_H__ 7 7 8 8 #include <linux/atomic.h> 9 + #include <linux/bpf.h> 9 10 #include <linux/refcount.h> 10 11 #include <linux/compat.h> 11 12 #include <linux/skbuff.h> ··· 27 26 28 27 #include <asm/byteorder.h> 29 28 #include <uapi/linux/filter.h> 30 - #include <uapi/linux/bpf.h> 31 29 32 30 struct sk_buff; 33 31 struct sock; ··· 640 640 * This uses migrate_disable/enable() explicitly to document that the 641 641 * invocation of a BPF program does not require reentrancy protection 642 642 * against a BPF program which is invoked from a preempting task. 643 - * 644 - * For non RT enabled kernels migrate_disable/enable() maps to 645 - * preempt_disable/enable(), i.e. it disables also preemption. 646 643 */ 647 644 static inline u32 bpf_prog_run_pin_on_cpu(const struct bpf_prog *prog, 648 645 const void *ctx)
+5
include/linux/hid.h
··· 840 840 return hdev->ll_driver == driver; 841 841 } 842 842 843 + static inline bool hid_is_usb(struct hid_device *hdev) 844 + { 845 + return hid_is_using_ll_driver(hdev, &usb_hid_driver); 846 + } 847 + 843 848 #define PM_HINT_FULLON 1<<5 844 849 #define PM_HINT_NORMAL 1<<1 845 850
+6 -5
include/linux/phy.h
··· 538 538 * @mac_managed_pm: Set true if MAC driver takes of suspending/resuming PHY 539 539 * @state: State of the PHY for management purposes 540 540 * @dev_flags: Device-specific flags used by the PHY driver. 541 - * Bits [15:0] are free to use by the PHY driver to communicate 542 - * driver specific behavior. 543 - * Bits [23:16] are currently reserved for future use. 544 - * Bits [31:24] are reserved for defining generic 545 - * PHY driver behavior. 541 + * 542 + * - Bits [15:0] are free to use by the PHY driver to communicate 543 + * driver specific behavior. 544 + * - Bits [23:16] are currently reserved for future use. 545 + * - Bits [31:24] are reserved for defining generic 546 + * PHY driver behavior. 546 547 * @irq: IRQ number of the PHY's interrupt (-1 if none) 547 548 * @phy_timer: The timer for handling the state machine 548 549 * @phylink: Pointer to phylink instance for this PHY
+8 -6
include/linux/regulator/driver.h
··· 499 499 * best to shut-down regulator(s) or reboot the SOC if error 500 500 * handling is repeatedly failing. If fatal_cnt is given the IRQ 501 501 * handling is aborted if it fails for fatal_cnt times and die() 502 - * callback (if populated) or BUG() is called to try to prevent 502 + * callback (if populated) is called. If die() is not populated 503 + * poweroff for the system is attempted in order to prevent any 503 504 * further damage. 504 505 * @reread_ms: The time which is waited before attempting to re-read status 505 506 * at the worker if IC reading fails. Immediate re-read is done ··· 517 516 * @data: Driver private data pointer which will be passed as such to 518 517 * the renable, map_event and die callbacks in regulator_irq_data. 519 518 * @die: Protection callback. If IC status reading or recovery actions 520 - * fail fatal_cnt times this callback or BUG() is called. This 521 - * callback should implement a final protection attempt like 522 - * disabling the regulator. If protection succeeded this may 523 - * return 0. If anything else is returned the core assumes final 524 - * protection failed and calls BUG() as a last resort. 519 + * fail fatal_cnt times this callback is called or system is 520 + * powered off. This callback should implement a final protection 521 + * attempt like disabling the regulator. If protection succeeded 522 + * die() may return 0. If anything else is returned the core 523 + * assumes final protection failed and attempts to perform a 524 + * poweroff as a last resort. 525 525 * @map_event: Driver callback to map IRQ status into regulator devices with 526 526 * events / errors. NOTE: callback MUST initialize both the 527 527 * errors and notifs for all rdevs which it signals having
+3 -2
include/linux/sched/cputime.h
··· 18 18 #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */ 19 19 20 20 #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN 21 - extern void task_cputime(struct task_struct *t, 21 + extern bool task_cputime(struct task_struct *t, 22 22 u64 *utime, u64 *stime); 23 23 extern u64 task_gtime(struct task_struct *t); 24 24 #else 25 - static inline void task_cputime(struct task_struct *t, 25 + static inline bool task_cputime(struct task_struct *t, 26 26 u64 *utime, u64 *stime) 27 27 { 28 28 *utime = t->utime; 29 29 *stime = t->stime; 30 + return false; 30 31 } 31 32 32 33 static inline u64 task_gtime(struct task_struct *t)
+1 -1
include/net/bond_alb.h
··· 126 126 struct alb_bond_info { 127 127 struct tlb_client_info *tx_hashtbl; /* Dynamically allocated */ 128 128 u32 unbalanced_load; 129 - int tx_rebalance_counter; 129 + atomic_t tx_rebalance_counter; 130 130 int lp_counter; 131 131 /* -------- rlb parameters -------- */ 132 132 int rlb_enabled;
+13
include/net/busy_poll.h
··· 136 136 sk_rx_queue_update(sk, skb); 137 137 } 138 138 139 + /* Variant of sk_mark_napi_id() for passive flow setup, 140 + * as sk->sk_napi_id and sk->sk_rx_queue_mapping content 141 + * needs to be set. 142 + */ 143 + static inline void sk_mark_napi_id_set(struct sock *sk, 144 + const struct sk_buff *skb) 145 + { 146 + #ifdef CONFIG_NET_RX_BUSY_POLL 147 + WRITE_ONCE(sk->sk_napi_id, skb->napi_id); 148 + #endif 149 + sk_rx_queue_set(sk, skb); 150 + } 151 + 139 152 static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id) 140 153 { 141 154 #ifdef CONFIG_NET_RX_BUSY_POLL
+3 -3
include/net/netfilter/nf_conntrack.h
··· 276 276 /* jiffies until ct expires, 0 if already expired */ 277 277 static inline unsigned long nf_ct_expires(const struct nf_conn *ct) 278 278 { 279 - s32 timeout = ct->timeout - nfct_time_stamp; 279 + s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp; 280 280 281 281 return timeout > 0 ? timeout : 0; 282 282 } 283 283 284 284 static inline bool nf_ct_is_expired(const struct nf_conn *ct) 285 285 { 286 - return (__s32)(ct->timeout - nfct_time_stamp) <= 0; 286 + return (__s32)(READ_ONCE(ct->timeout) - nfct_time_stamp) <= 0; 287 287 } 288 288 289 289 /* use after obtaining a reference count */ ··· 302 302 static inline void nf_ct_offload_timeout(struct nf_conn *ct) 303 303 { 304 304 if (nf_ct_expires(ct) < NF_CT_DAY / 2) 305 - ct->timeout = nfct_time_stamp + NF_CT_DAY; 305 + WRITE_ONCE(ct->timeout, nfct_time_stamp + NF_CT_DAY); 306 306 } 307 307 308 308 struct kernel_param;
+7
include/uapi/drm/virtgpu_drm.h
··· 196 196 __u64 ctx_set_params; 197 197 }; 198 198 199 + /* 200 + * Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in 201 + * effect. The event size is sizeof(drm_event), since there is no additional 202 + * payload. 203 + */ 204 + #define VIRTGPU_EVENT_FENCE_SIGNALED 0x90000000 205 + 199 206 #define DRM_IOCTL_VIRTGPU_MAP \ 200 207 DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map) 201 208
+2 -9
kernel/bpf/btf.c
··· 6361 6361 6362 6362 /* BTF ID set registration API for modules */ 6363 6363 6364 - struct kfunc_btf_id_list { 6365 - struct list_head list; 6366 - struct mutex mutex; 6367 - }; 6368 - 6369 6364 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES 6370 6365 6371 6366 void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l, ··· 6386 6391 { 6387 6392 struct kfunc_btf_id_set *s; 6388 6393 6389 - if (!owner) 6390 - return false; 6391 6394 mutex_lock(&klist->mutex); 6392 6395 list_for_each_entry(s, &klist->list, list) { 6393 6396 if (s->owner == owner && btf_id_set_contains(s->set, kfunc_id)) { ··· 6397 6404 return false; 6398 6405 } 6399 6406 6400 - #endif 6401 - 6402 6407 #define DEFINE_KFUNC_BTF_ID_LIST(name) \ 6403 6408 struct kfunc_btf_id_list name = { LIST_HEAD_INIT(name.list), \ 6404 6409 __MUTEX_INITIALIZER(name.mutex) }; \ ··· 6404 6413 6405 6414 DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list); 6406 6415 DEFINE_KFUNC_BTF_ID_LIST(prog_test_kfunc_list); 6416 + 6417 + #endif
+1 -1
kernel/bpf/verifier.c
··· 8456 8456 8457 8457 new_range = dst_reg->off; 8458 8458 if (range_right_open) 8459 - new_range--; 8459 + new_range++; 8460 8460 8461 8461 /* Examples for register markings: 8462 8462 *
+3 -3
kernel/sched/core.c
··· 1918 1918 }; 1919 1919 } 1920 1920 1921 - rq->uclamp_flags = 0; 1921 + rq->uclamp_flags = UCLAMP_FLAG_IDLE; 1922 1922 } 1923 1923 1924 1924 static void __init init_uclamp(void) ··· 6617 6617 int mode = sched_dynamic_mode(str); 6618 6618 if (mode < 0) { 6619 6619 pr_warn("Dynamic Preempt: unsupported mode: %s\n", str); 6620 - return 1; 6620 + return 0; 6621 6621 } 6622 6622 6623 6623 sched_dynamic_update(mode); 6624 - return 0; 6624 + return 1; 6625 6625 } 6626 6626 __setup("preempt=", setup_preempt_mode); 6627 6627
+9 -3
kernel/sched/cputime.c
··· 615 615 .sum_exec_runtime = p->se.sum_exec_runtime, 616 616 }; 617 617 618 - task_cputime(p, &cputime.utime, &cputime.stime); 618 + if (task_cputime(p, &cputime.utime, &cputime.stime)) 619 + cputime.sum_exec_runtime = task_sched_runtime(p); 619 620 cputime_adjust(&cputime, &p->prev_cputime, ut, st); 620 621 } 621 622 EXPORT_SYMBOL_GPL(task_cputime_adjusted); ··· 829 828 * add up the pending nohz execution time since the last 830 829 * cputime snapshot. 831 830 */ 832 - void task_cputime(struct task_struct *t, u64 *utime, u64 *stime) 831 + bool task_cputime(struct task_struct *t, u64 *utime, u64 *stime) 833 832 { 834 833 struct vtime *vtime = &t->vtime; 835 834 unsigned int seq; 836 835 u64 delta; 836 + int ret; 837 837 838 838 if (!vtime_accounting_enabled()) { 839 839 *utime = t->utime; 840 840 *stime = t->stime; 841 - return; 841 + return false; 842 842 } 843 843 844 844 do { 845 + ret = false; 845 846 seq = read_seqcount_begin(&vtime->seqcount); 846 847 847 848 *utime = t->utime; ··· 853 850 if (vtime->state < VTIME_SYS) 854 851 continue; 855 852 853 + ret = true; 856 854 delta = vtime_delta(vtime); 857 855 858 856 /* ··· 865 861 else 866 862 *utime += vtime->utime + delta; 867 863 } while (read_seqcount_retry(&vtime->seqcount, seq)); 864 + 865 + return ret; 868 866 } 869 867 870 868 static int vtime_state_fetch(struct vtime *vtime, int cpu)
+2 -1
kernel/softirq.c
··· 595 595 { 596 596 __irq_enter_raw(); 597 597 598 - if (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)) 598 + if (tick_nohz_full_cpu(smp_processor_id()) || 599 + (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET))) 599 600 tick_irq_enter(); 600 601 601 602 account_hardirq_enter(current);
+7
kernel/time/tick-sched.c
··· 1375 1375 now = ktime_get(); 1376 1376 if (ts->idle_active) 1377 1377 tick_nohz_stop_idle(ts, now); 1378 + /* 1379 + * If all CPUs are idle. We may need to update a stale jiffies value. 1380 + * Note nohz_full is a special case: a timekeeper is guaranteed to stay 1381 + * alive but it might be busy looping with interrupts disabled in some 1382 + * rare case (typically stop machine). So we must make sure we have a 1383 + * last resort. 1384 + */ 1378 1385 if (ts->tick_stopped) 1379 1386 tick_nohz_update_jiffies(now); 1380 1387 }
+1
lib/Kconfig.debug
··· 316 316 bool "Generate BTF typeinfo" 317 317 depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED 318 318 depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST 319 + depends on BPF_SYSCALL 319 320 help 320 321 Generate deduplicated BTF type information from DWARF debug info. 321 322 Turning this on expects presence of pahole tool, which will convert
+1
mm/damon/vaddr.c
··· 13 13 #include <linux/mmu_notifier.h> 14 14 #include <linux/page_idle.h> 15 15 #include <linux/pagewalk.h> 16 + #include <linux/sched/mm.h> 16 17 17 18 #include "prmtv-common.h" 18 19
+1
mm/memory_hotplug.c
··· 35 35 #include <linux/memblock.h> 36 36 #include <linux/compaction.h> 37 37 #include <linux/rmap.h> 38 + #include <linux/module.h> 38 39 39 40 #include <asm/tlbflush.h> 40 41
+1
mm/swap_slots.c
··· 30 30 #include <linux/swap_slots.h> 31 31 #include <linux/cpu.h> 32 32 #include <linux/cpumask.h> 33 + #include <linux/slab.h> 33 34 #include <linux/vmalloc.h> 34 35 #include <linux/mutex.h> 35 36 #include <linux/mm.h>
+8 -8
net/core/devlink.c
··· 4139 4139 return err; 4140 4140 } 4141 4141 4142 - if (info->attrs[DEVLINK_ATTR_NETNS_PID] || 4143 - info->attrs[DEVLINK_ATTR_NETNS_FD] || 4144 - info->attrs[DEVLINK_ATTR_NETNS_ID]) { 4145 - dest_net = devlink_netns_get(skb, info); 4146 - if (IS_ERR(dest_net)) 4147 - return PTR_ERR(dest_net); 4148 - } 4149 - 4150 4142 if (info->attrs[DEVLINK_ATTR_RELOAD_ACTION]) 4151 4143 action = nla_get_u8(info->attrs[DEVLINK_ATTR_RELOAD_ACTION]); 4152 4144 else ··· 4181 4189 return -EINVAL; 4182 4190 } 4183 4191 } 4192 + if (info->attrs[DEVLINK_ATTR_NETNS_PID] || 4193 + info->attrs[DEVLINK_ATTR_NETNS_FD] || 4194 + info->attrs[DEVLINK_ATTR_NETNS_ID]) { 4195 + dest_net = devlink_netns_get(skb, info); 4196 + if (IS_ERR(dest_net)) 4197 + return PTR_ERR(dest_net); 4198 + } 4199 + 4184 4200 err = devlink_reload(devlink, dest_net, action, limit, &actions_performed, info->extack); 4185 4201 4186 4202 if (dest_net)
+1 -2
net/core/neighbour.c
··· 763 763 764 764 ASSERT_RTNL(); 765 765 766 - n = kmalloc(sizeof(*n) + key_len, GFP_KERNEL); 766 + n = kzalloc(sizeof(*n) + key_len, GFP_KERNEL); 767 767 if (!n) 768 768 goto out; 769 769 770 - n->protocol = 0; 771 770 write_pnet(&n->net, net); 772 771 memcpy(n->key, pkey, key_len); 773 772 n->dev = dev;
+5
net/core/skmsg.c
··· 1124 1124 1125 1125 void sk_psock_stop_strp(struct sock *sk, struct sk_psock *psock) 1126 1126 { 1127 + psock_set_prog(&psock->progs.stream_parser, NULL); 1128 + 1127 1129 if (!psock->saved_data_ready) 1128 1130 return; 1129 1131 ··· 1214 1212 1215 1213 void sk_psock_stop_verdict(struct sock *sk, struct sk_psock *psock) 1216 1214 { 1215 + psock_set_prog(&psock->progs.stream_verdict, NULL); 1216 + psock_set_prog(&psock->progs.skb_verdict, NULL); 1217 + 1217 1218 if (!psock->saved_data_ready) 1218 1219 return; 1219 1220
+10 -5
net/core/sock_map.c
··· 167 167 write_lock_bh(&sk->sk_callback_lock); 168 168 if (strp_stop) 169 169 sk_psock_stop_strp(sk, psock); 170 - else 170 + if (verdict_stop) 171 171 sk_psock_stop_verdict(sk, psock); 172 + 173 + if (psock->psock_update_sk_prot) 174 + psock->psock_update_sk_prot(sk, psock, false); 172 175 write_unlock_bh(&sk->sk_callback_lock); 173 176 } 174 177 } ··· 285 282 286 283 if (msg_parser) 287 284 psock_set_prog(&psock->progs.msg_parser, msg_parser); 285 + if (stream_parser) 286 + psock_set_prog(&psock->progs.stream_parser, stream_parser); 287 + if (stream_verdict) 288 + psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 289 + if (skb_verdict) 290 + psock_set_prog(&psock->progs.skb_verdict, skb_verdict); 288 291 289 292 ret = sock_map_init_proto(sk, psock); 290 293 if (ret < 0) ··· 301 292 ret = sk_psock_init_strp(sk, psock); 302 293 if (ret) 303 294 goto out_unlock_drop; 304 - psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 305 - psock_set_prog(&psock->progs.stream_parser, stream_parser); 306 295 sk_psock_start_strp(sk, psock); 307 296 } else if (!stream_parser && stream_verdict && !psock->saved_data_ready) { 308 - psock_set_prog(&psock->progs.stream_verdict, stream_verdict); 309 297 sk_psock_start_verdict(sk,psock); 310 298 } else if (!stream_verdict && skb_verdict && !psock->saved_data_ready) { 311 - psock_set_prog(&psock->progs.skb_verdict, skb_verdict); 312 299 sk_psock_start_verdict(sk, psock); 313 300 } 314 301 write_unlock_bh(&sk->sk_callback_lock);
+2 -1
net/ethtool/netlink.c
··· 40 40 if (dev->dev.parent) 41 41 pm_runtime_get_sync(dev->dev.parent); 42 42 43 - if (!netif_device_present(dev)) { 43 + if (!netif_device_present(dev) || 44 + dev->reg_state == NETREG_UNREGISTERING) { 44 45 ret = -ENODEV; 45 46 goto err; 46 47 }
+1 -1
net/ipv4/inet_connection_sock.c
··· 721 721 722 722 sk_node_init(&nreq_sk->sk_node); 723 723 nreq_sk->sk_tx_queue_mapping = req_sk->sk_tx_queue_mapping; 724 - #ifdef CONFIG_XPS 724 + #ifdef CONFIG_SOCK_RX_QUEUE_MAPPING 725 725 nreq_sk->sk_rx_queue_mapping = req_sk->sk_rx_queue_mapping; 726 726 #endif 727 727 nreq_sk->sk_incoming_cpu = req_sk->sk_incoming_cpu;
+2 -2
net/ipv4/tcp_minisocks.c
··· 829 829 int ret = 0; 830 830 int state = child->sk_state; 831 831 832 - /* record NAPI ID of child */ 833 - sk_mark_napi_id(child, skb); 832 + /* record sk_napi_id and sk_rx_queue_mapping of child. */ 833 + sk_mark_napi_id_set(child, skb); 834 834 835 835 tcp_segs_in(tcp_sk(child), skb); 836 836 if (!sock_owned_by_user(child)) {
+1 -1
net/ipv4/udp.c
··· 916 916 kfree_skb(skb); 917 917 return -EINVAL; 918 918 } 919 - if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) { 919 + if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) { 920 920 kfree_skb(skb); 921 921 return -EINVAL; 922 922 }
+8
net/ipv6/seg6_iptunnel.c
··· 161 161 hdr->hop_limit = ip6_dst_hoplimit(skb_dst(skb)); 162 162 163 163 memset(IP6CB(skb), 0, sizeof(*IP6CB(skb))); 164 + 165 + /* the control block has been erased, so we have to set the 166 + * iif once again. 167 + * We read the receiving interface index directly from the 168 + * skb->skb_iif as it is done in the IPv4 receiving path (i.e.: 169 + * ip_rcv_core(...)). 170 + */ 171 + IP6CB(skb)->iif = skb->skb_iif; 164 172 } 165 173 166 174 hdr->nexthdr = NEXTHDR_ROUTING;
+3 -3
net/netfilter/nf_conntrack_core.c
··· 684 684 685 685 tstamp = nf_conn_tstamp_find(ct); 686 686 if (tstamp) { 687 - s32 timeout = ct->timeout - nfct_time_stamp; 687 + s32 timeout = READ_ONCE(ct->timeout) - nfct_time_stamp; 688 688 689 689 tstamp->stop = ktime_get_real_ns(); 690 690 if (timeout < 0) ··· 1036 1036 } 1037 1037 1038 1038 /* We want the clashing entry to go away real soon: 1 second timeout. */ 1039 - loser_ct->timeout = nfct_time_stamp + HZ; 1039 + WRITE_ONCE(loser_ct->timeout, nfct_time_stamp + HZ); 1040 1040 1041 1041 /* IPS_NAT_CLASH removes the entry automatically on the first 1042 1042 * reply. Also prevents UDP tracker from moving the entry to ··· 1560 1560 /* save hash for reusing when confirming */ 1561 1561 *(unsigned long *)(&ct->tuplehash[IP_CT_DIR_REPLY].hnnode.pprev) = hash; 1562 1562 ct->status = 0; 1563 - ct->timeout = 0; 1563 + WRITE_ONCE(ct->timeout, 0); 1564 1564 write_pnet(&ct->ct_net, net); 1565 1565 memset(&ct->__nfct_init_offset, 0, 1566 1566 offsetof(struct nf_conn, proto) -
+1 -1
net/netfilter/nf_conntrack_netlink.c
··· 1998 1998 1999 1999 if (timeout > INT_MAX) 2000 2000 timeout = INT_MAX; 2001 - ct->timeout = nfct_time_stamp + (u32)timeout; 2001 + WRITE_ONCE(ct->timeout, nfct_time_stamp + (u32)timeout); 2002 2002 2003 2003 if (test_bit(IPS_DYING_BIT, &ct->status)) 2004 2004 return -ETIME;
+2 -2
net/netfilter/nf_flow_table_core.c
··· 201 201 if (timeout < 0) 202 202 timeout = 0; 203 203 204 - if (nf_flow_timeout_delta(ct->timeout) > (__s32)timeout) 205 - ct->timeout = nfct_time_stamp + timeout; 204 + if (nf_flow_timeout_delta(READ_ONCE(ct->timeout)) > (__s32)timeout) 205 + WRITE_ONCE(ct->timeout, nfct_time_stamp + timeout); 206 206 } 207 207 208 208 static void flow_offload_fixup_ct_state(struct nf_conn *ct)
+7 -4
net/netfilter/nft_exthdr.c
··· 236 236 237 237 tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, &tcphdr_len); 238 238 if (!tcph) 239 - return; 239 + goto err; 240 240 241 241 opt = (u8 *)tcph; 242 242 for (i = sizeof(*tcph); i < tcphdr_len - 1; i += optl) { ··· 251 251 continue; 252 252 253 253 if (i + optl > tcphdr_len || priv->len + priv->offset > optl) 254 - return; 254 + goto err; 255 255 256 256 if (skb_ensure_writable(pkt->skb, 257 257 nft_thoff(pkt) + i + priv->len)) 258 - return; 258 + goto err; 259 259 260 260 tcph = nft_tcp_header_pointer(pkt, sizeof(buff), buff, 261 261 &tcphdr_len); 262 262 if (!tcph) 263 - return; 263 + goto err; 264 264 265 265 offset = i + priv->offset; 266 266 ··· 303 303 304 304 return; 305 305 } 306 + return; 307 + err: 308 + regs->verdict.code = NFT_BREAK; 306 309 } 307 310 308 311 static void nft_exthdr_sctp_eval(const struct nft_expr *expr,
+1 -1
net/netfilter/nft_set_pipapo_avx2.c
··· 886 886 NFT_PIPAPO_AVX2_BUCKET_LOAD8(4, lt, 4, pkt[4], bsize); 887 887 888 888 NFT_PIPAPO_AVX2_AND(5, 0, 1); 889 - NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 6, pkt[5], bsize); 889 + NFT_PIPAPO_AVX2_BUCKET_LOAD8(6, lt, 5, pkt[5], bsize); 890 890 NFT_PIPAPO_AVX2_AND(7, 2, 3); 891 891 892 892 /* Stall */
+8 -4
net/nfc/netlink.c
··· 636 636 { 637 637 struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0]; 638 638 639 - nfc_device_iter_exit(iter); 640 - kfree(iter); 639 + if (iter) { 640 + nfc_device_iter_exit(iter); 641 + kfree(iter); 642 + } 641 643 642 644 return 0; 643 645 } ··· 1394 1392 { 1395 1393 struct class_dev_iter *iter = (struct class_dev_iter *) cb->args[0]; 1396 1394 1397 - nfc_device_iter_exit(iter); 1398 - kfree(iter); 1395 + if (iter) { 1396 + nfc_device_iter_exit(iter); 1397 + kfree(iter); 1398 + } 1399 1399 1400 1400 return 0; 1401 1401 }
+1
net/sched/sch_fq_pie.c
··· 531 531 struct fq_pie_sched_data *q = qdisc_priv(sch); 532 532 533 533 tcf_block_put(q->block); 534 + q->p_params.tupdate = 0; 534 535 del_timer_sync(&q->adapt_timer); 535 536 kvfree(q->flows); 536 537 }
+5 -3
tools/bpf/resolve_btfids/main.c
··· 83 83 int cnt; 84 84 }; 85 85 int addr_cnt; 86 + bool is_set; 86 87 Elf64_Addr addr[ADDR_CNT]; 87 88 }; 88 89 ··· 452 451 * in symbol's size, together with 'cnt' field hence 453 452 * that - 1. 454 453 */ 455 - if (id) 454 + if (id) { 456 455 id->cnt = sym.st_size / sizeof(int) - 1; 456 + id->is_set = true; 457 + } 457 458 } else { 458 459 pr_err("FAILED unsupported prefix %s\n", prefix); 459 460 return -1; ··· 571 568 int *ptr = data->d_buf; 572 569 int i; 573 570 574 - if (!id->id) { 571 + if (!id->id && !id->is_set) 575 572 pr_err("WARN: resolve_btfids: unresolved symbol %s\n", id->name); 576 - } 577 573 578 574 for (i = 0; i < id->addr_cnt; i++) { 579 575 unsigned long addr = id->addr[i];
-1
tools/build/Makefile.feature
··· 48 48 numa_num_possible_cpus \ 49 49 libperl \ 50 50 libpython \ 51 - libpython-version \ 52 51 libslang \ 53 52 libslang-include-subdir \ 54 53 libtraceevent \
-4
tools/build/feature/Makefile
··· 32 32 test-numa_num_possible_cpus.bin \ 33 33 test-libperl.bin \ 34 34 test-libpython.bin \ 35 - test-libpython-version.bin \ 36 35 test-libslang.bin \ 37 36 test-libslang-include-subdir.bin \ 38 37 test-libtraceevent.bin \ ··· 225 226 226 227 $(OUTPUT)test-libpython.bin: 227 228 $(BUILD) $(FLAGS_PYTHON_EMBED) 228 - 229 - $(OUTPUT)test-libpython-version.bin: 230 - $(BUILD) 231 229 232 230 $(OUTPUT)test-libbfd.bin: 233 231 $(BUILD) -DPACKAGE='"perf"' -lbfd -ldl
-5
tools/build/feature/test-all.c
··· 14 14 # include "test-libpython.c" 15 15 #undef main 16 16 17 - #define main main_test_libpython_version 18 - # include "test-libpython-version.c" 19 - #undef main 20 - 21 17 #define main main_test_libperl 22 18 # include "test-libperl.c" 23 19 #undef main ··· 173 177 int main(int argc, char *argv[]) 174 178 { 175 179 main_test_libpython(); 176 - main_test_libpython_version(); 177 180 main_test_libperl(); 178 181 main_test_hello(); 179 182 main_test_libelf();
-11
tools/build/feature/test-libpython-version.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - #include <Python.h> 3 - 4 - #if PY_VERSION_HEX >= 0x03000000 5 - #error 6 - #endif 7 - 8 - int main(void) 9 - { 10 - return 0; 11 - }
-14
tools/include/linux/debug_locks.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LIBLOCKDEP_DEBUG_LOCKS_H_ 3 - #define _LIBLOCKDEP_DEBUG_LOCKS_H_ 4 - 5 - #include <stddef.h> 6 - #include <linux/compiler.h> 7 - #include <asm/bug.h> 8 - 9 - #define DEBUG_LOCKS_WARN_ON(x) WARN_ON(x) 10 - 11 - extern bool debug_locks; 12 - extern bool debug_locks_silent; 13 - 14 - #endif
-12
tools/include/linux/hardirq.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LIBLOCKDEP_LINUX_HARDIRQ_H_ 3 - #define _LIBLOCKDEP_LINUX_HARDIRQ_H_ 4 - 5 - #define SOFTIRQ_BITS 0UL 6 - #define HARDIRQ_BITS 0UL 7 - #define SOFTIRQ_SHIFT 0UL 8 - #define HARDIRQ_SHIFT 0UL 9 - #define hardirq_count() 0UL 10 - #define softirq_count() 0UL 11 - 12 - #endif
-39
tools/include/linux/irqflags.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LIBLOCKDEP_LINUX_TRACE_IRQFLAGS_H_ 3 - #define _LIBLOCKDEP_LINUX_TRACE_IRQFLAGS_H_ 4 - 5 - # define lockdep_hardirq_context() 0 6 - # define lockdep_softirq_context(p) 0 7 - # define lockdep_hardirqs_enabled() 0 8 - # define lockdep_softirqs_enabled(p) 0 9 - # define lockdep_hardirq_enter() do { } while (0) 10 - # define lockdep_hardirq_exit() do { } while (0) 11 - # define lockdep_softirq_enter() do { } while (0) 12 - # define lockdep_softirq_exit() do { } while (0) 13 - # define INIT_TRACE_IRQFLAGS 14 - 15 - # define stop_critical_timings() do { } while (0) 16 - # define start_critical_timings() do { } while (0) 17 - 18 - #define raw_local_irq_disable() do { } while (0) 19 - #define raw_local_irq_enable() do { } while (0) 20 - #define raw_local_irq_save(flags) ((flags) = 0) 21 - #define raw_local_irq_restore(flags) ((void)(flags)) 22 - #define raw_local_save_flags(flags) ((flags) = 0) 23 - #define raw_irqs_disabled_flags(flags) ((void)(flags)) 24 - #define raw_irqs_disabled() 0 25 - #define raw_safe_halt() 26 - 27 - #define local_irq_enable() do { } while (0) 28 - #define local_irq_disable() do { } while (0) 29 - #define local_irq_save(flags) ((flags) = 0) 30 - #define local_irq_restore(flags) ((void)(flags)) 31 - #define local_save_flags(flags) ((flags) = 0) 32 - #define irqs_disabled() (1) 33 - #define irqs_disabled_flags(flags) ((void)(flags), 0) 34 - #define safe_halt() do { } while (0) 35 - 36 - #define trace_lock_release(x, y) 37 - #define trace_lock_acquire(a, b, c, d, e, f, g) 38 - 39 - #endif
-72
tools/include/linux/lockdep.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LIBLOCKDEP_LOCKDEP_H_ 3 - #define _LIBLOCKDEP_LOCKDEP_H_ 4 - 5 - #include <sys/prctl.h> 6 - #include <sys/syscall.h> 7 - #include <string.h> 8 - #include <limits.h> 9 - #include <linux/utsname.h> 10 - #include <linux/compiler.h> 11 - #include <linux/export.h> 12 - #include <linux/kern_levels.h> 13 - #include <linux/err.h> 14 - #include <linux/rcu.h> 15 - #include <linux/list.h> 16 - #include <linux/hardirq.h> 17 - #include <unistd.h> 18 - 19 - #define MAX_LOCK_DEPTH 63UL 20 - 21 - #define asmlinkage 22 - #define __visible 23 - 24 - #include "../../../include/linux/lockdep.h" 25 - 26 - struct task_struct { 27 - u64 curr_chain_key; 28 - int lockdep_depth; 29 - unsigned int lockdep_recursion; 30 - struct held_lock held_locks[MAX_LOCK_DEPTH]; 31 - gfp_t lockdep_reclaim_gfp; 32 - int pid; 33 - int state; 34 - char comm[17]; 35 - }; 36 - 37 - #define TASK_RUNNING 0 38 - 39 - extern struct task_struct *__curr(void); 40 - 41 - #define current (__curr()) 42 - 43 - static inline int debug_locks_off(void) 44 - { 45 - return 1; 46 - } 47 - 48 - #define task_pid_nr(tsk) ((tsk)->pid) 49 - 50 - #define KSYM_NAME_LEN 128 51 - #define printk(...) dprintf(STDOUT_FILENO, __VA_ARGS__) 52 - #define pr_err(format, ...) fprintf (stderr, format, ## __VA_ARGS__) 53 - #define pr_warn pr_err 54 - #define pr_cont pr_err 55 - 56 - #define list_del_rcu list_del 57 - 58 - #define atomic_t unsigned long 59 - #define atomic_inc(x) ((*(x))++) 60 - 61 - #define print_tainted() "" 62 - #define static_obj(x) 1 63 - 64 - #define debug_show_all_locks() 65 - extern void debug_check_no_locks_held(void); 66 - 67 - static __used bool __is_kernel_percpu_address(unsigned long addr, void *can_addr) 68 - { 69 - return false; 70 - } 71 - 72 - #endif
-4
tools/include/linux/proc_fs.h
··· 1 - #ifndef _TOOLS_INCLUDE_LINUX_PROC_FS_H 2 - #define _TOOLS_INCLUDE_LINUX_PROC_FS_H 3 - 4 - #endif /* _TOOLS_INCLUDE_LINUX_PROC_FS_H */
-2
tools/include/linux/spinlock.h
··· 37 37 return true; 38 38 } 39 39 40 - #include <linux/lockdep.h> 41 - 42 40 #endif
-33
tools/include/linux/stacktrace.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _LIBLOCKDEP_LINUX_STACKTRACE_H_ 3 - #define _LIBLOCKDEP_LINUX_STACKTRACE_H_ 4 - 5 - #include <execinfo.h> 6 - 7 - struct stack_trace { 8 - unsigned int nr_entries, max_entries; 9 - unsigned long *entries; 10 - int skip; 11 - }; 12 - 13 - static inline void print_stack_trace(struct stack_trace *trace, int spaces) 14 - { 15 - backtrace_symbols_fd((void **)trace->entries, trace->nr_entries, 1); 16 - } 17 - 18 - #define save_stack_trace(trace) \ 19 - ((trace)->nr_entries = \ 20 - backtrace((void **)(trace)->entries, (trace)->max_entries)) 21 - 22 - static inline int dump_stack(void) 23 - { 24 - void *array[64]; 25 - size_t size; 26 - 27 - size = backtrace(array, 64); 28 - backtrace_symbols_fd(array, size, 1); 29 - 30 - return 0; 31 - } 32 - 33 - #endif
+1
tools/objtool/elf.c
··· 375 375 return -1; 376 376 } 377 377 memset(sym, 0, sizeof(*sym)); 378 + INIT_LIST_HEAD(&sym->pv_target); 378 379 sym->alias = sym; 379 380 380 381 sym->idx = i;
+4
tools/objtool/objtool.c
··· 153 153 !strcmp(func->name, "_paravirt_ident_64")) 154 154 return; 155 155 156 + /* already added this function */ 157 + if (!list_empty(&func->pv_target)) 158 + return; 159 + 156 160 list_add(&func->pv_target, &f->pv_ops[idx].targets); 157 161 f->pv_ops[idx].clean = false; 158 162 }
-2
tools/perf/Makefile.config
··· 271 271 272 272 FEATURE_CHECK_CFLAGS-libpython := $(PYTHON_EMBED_CCOPTS) 273 273 FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS) 274 - FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS) 275 - FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS) 276 274 277 275 FEATURE_CHECK_LDFLAGS-libaio = -lrt 278 276
+1
tools/perf/arch/powerpc/entry/syscalls/syscall.tbl
··· 528 528 446 common landlock_restrict_self sys_landlock_restrict_self 529 529 # 447 reserved for memfd_secret 530 530 448 common process_mrelease sys_process_mrelease 531 + 449 common futex_waitv sys_futex_waitv
+1
tools/perf/arch/s390/entry/syscalls/syscall.tbl
··· 451 451 446 common landlock_restrict_self sys_landlock_restrict_self sys_landlock_restrict_self 452 452 # 447 reserved for memfd_secret 453 453 448 common process_mrelease sys_process_mrelease sys_process_mrelease 454 + 449 common futex_waitv sys_futex_waitv sys_futex_waitv
-4
tools/perf/bench/sched-messaging.c
··· 223 223 snd_ctx->out_fds[i] = fds[1]; 224 224 if (!thread_mode) 225 225 close(fds[0]); 226 - 227 - free(ctx); 228 226 } 229 227 230 228 /* Now we have all the fds, fork the senders */ ··· 238 240 if (!thread_mode) 239 241 for (i = 0; i < num_fds; i++) 240 242 close(snd_ctx->out_fds[i]); 241 - 242 - free(snd_ctx); 243 243 244 244 /* Return number of children to reap */ 245 245 return num_fds * 2;
+1 -1
tools/perf/builtin-inject.c
··· 820 820 inject->tool.ordered_events = true; 821 821 inject->tool.ordering_requires_timestamps = true; 822 822 /* Allow space in the header for new attributes */ 823 - output_data_offset = 4096; 823 + output_data_offset = roundup(8192 + session->header.data_offset, 4096); 824 824 if (inject->strip) 825 825 strip_init(inject); 826 826 }
+3 -1
tools/perf/tests/expr.c
··· 169 169 TEST_ASSERT_VAL("#num_dies", expr__parse(&num_dies, ctx, "#num_dies") == 0); 170 170 TEST_ASSERT_VAL("#num_cores >= #num_dies", num_cores >= num_dies); 171 171 TEST_ASSERT_VAL("#num_packages", expr__parse(&num_packages, ctx, "#num_packages") == 0); 172 - TEST_ASSERT_VAL("#num_dies >= #num_packages", num_dies >= num_packages); 172 + 173 + if (num_dies) // Some platforms do not have CPU die support, for example s390 174 + TEST_ASSERT_VAL("#num_dies >= #num_packages", num_dies >= num_packages); 173 175 174 176 /* 175 177 * Source count returns the number of events aggregating in a leader
+1
tools/perf/tests/parse-metric.c
··· 109 109 struct evsel *evsel; 110 110 u64 count; 111 111 112 + perf_stat__reset_shadow_stats(); 112 113 evlist__for_each_entry(evlist, evsel) { 113 114 count = find_value(evsel->name, vals); 114 115 perf_stat__update_shadow_stats(evsel, count, 0, st);
-14
tools/perf/util/bpf_skel/bperf.h
··· 1 - // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - // Copyright (c) 2021 Facebook 3 - 4 - #ifndef __BPERF_STAT_H 5 - #define __BPERF_STAT_H 6 - 7 - typedef struct { 8 - __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 9 - __uint(key_size, sizeof(__u32)); 10 - __uint(value_size, sizeof(struct bpf_perf_event_value)); 11 - __uint(max_entries, 1); 12 - } reading_map; 13 - 14 - #endif /* __BPERF_STAT_H */
+14 -5
tools/perf/util/bpf_skel/bperf_follower.bpf.c
··· 1 1 // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 // Copyright (c) 2021 Facebook 3 - #include <linux/bpf.h> 4 - #include <linux/perf_event.h> 3 + #include "vmlinux.h" 5 4 #include <bpf/bpf_helpers.h> 6 5 #include <bpf/bpf_tracing.h> 7 - #include "bperf.h" 8 6 #include "bperf_u.h" 9 7 10 - reading_map diff_readings SEC(".maps"); 11 - reading_map accum_readings SEC(".maps"); 8 + struct { 9 + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 10 + __uint(key_size, sizeof(__u32)); 11 + __uint(value_size, sizeof(struct bpf_perf_event_value)); 12 + __uint(max_entries, 1); 13 + } diff_readings SEC(".maps"); 14 + 15 + struct { 16 + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 17 + __uint(key_size, sizeof(__u32)); 18 + __uint(value_size, sizeof(struct bpf_perf_event_value)); 19 + __uint(max_entries, 1); 20 + } accum_readings SEC(".maps"); 12 21 13 22 struct { 14 23 __uint(type, BPF_MAP_TYPE_HASH);
+14 -5
tools/perf/util/bpf_skel/bperf_leader.bpf.c
··· 1 1 // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 // Copyright (c) 2021 Facebook 3 - #include <linux/bpf.h> 4 - #include <linux/perf_event.h> 3 + #include "vmlinux.h" 5 4 #include <bpf/bpf_helpers.h> 6 5 #include <bpf/bpf_tracing.h> 7 - #include "bperf.h" 8 6 9 7 struct { 10 8 __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY); ··· 11 13 __uint(map_flags, BPF_F_PRESERVE_ELEMS); 12 14 } events SEC(".maps"); 13 15 14 - reading_map prev_readings SEC(".maps"); 15 - reading_map diff_readings SEC(".maps"); 16 + struct { 17 + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 18 + __uint(key_size, sizeof(__u32)); 19 + __uint(value_size, sizeof(struct bpf_perf_event_value)); 20 + __uint(max_entries, 1); 21 + } prev_readings SEC(".maps"); 22 + 23 + struct { 24 + __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); 25 + __uint(key_size, sizeof(__u32)); 26 + __uint(value_size, sizeof(struct bpf_perf_event_value)); 27 + __uint(max_entries, 1); 28 + } diff_readings SEC(".maps"); 16 29 17 30 SEC("raw_tp/sched_switch") 18 31 int BPF_PROG(on_switch)
+1 -1
tools/perf/util/bpf_skel/bpf_prog_profiler.bpf.c
··· 1 1 // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 // Copyright (c) 2020 Facebook 3 - #include <linux/bpf.h> 3 + #include "vmlinux.h" 4 4 #include <bpf/bpf_helpers.h> 5 5 #include <bpf/bpf_tracing.h> 6 6
+10 -5
tools/perf/util/header.c
··· 2321 2321 #define FEAT_PROCESS_STR_FUN(__feat, __feat_env) \ 2322 2322 static int process_##__feat(struct feat_fd *ff, void *data __maybe_unused) \ 2323 2323 {\ 2324 + free(ff->ph->env.__feat_env); \ 2324 2325 ff->ph->env.__feat_env = do_read_string(ff); \ 2325 2326 return ff->ph->env.__feat_env ? 0 : -ENOMEM; \ 2326 2327 } ··· 4125 4124 struct perf_record_header_feature *fe = (struct perf_record_header_feature *)event; 4126 4125 int type = fe->header.type; 4127 4126 u64 feat = fe->feat_id; 4127 + int ret = 0; 4128 4128 4129 4129 if (type < 0 || type >= PERF_RECORD_HEADER_MAX) { 4130 4130 pr_warning("invalid record type %d in pipe-mode\n", type); ··· 4143 4141 ff.size = event->header.size - sizeof(*fe); 4144 4142 ff.ph = &session->header; 4145 4143 4146 - if (feat_ops[feat].process(&ff, NULL)) 4147 - return -1; 4144 + if (feat_ops[feat].process(&ff, NULL)) { 4145 + ret = -1; 4146 + goto out; 4147 + } 4148 4148 4149 4149 if (!feat_ops[feat].print || !tool->show_feat_hdr) 4150 - return 0; 4150 + goto out; 4151 4151 4152 4152 if (!feat_ops[feat].full_only || 4153 4153 tool->show_feat_hdr >= SHOW_FEAT_HEADER_FULL_INFO) { ··· 4158 4154 fprintf(stdout, "# %s info available, use -I to display\n", 4159 4155 feat_ops[feat].name); 4160 4156 } 4161 - 4162 - return 0; 4157 + out: 4158 + free_event_desc(ff.events); 4159 + return ret; 4163 4160 } 4164 4161 4165 4162 size_t perf_event__fprintf_event_update(union perf_event *event, FILE *fp)
+1 -1
tools/perf/util/smt.c
··· 15 15 if (cached) 16 16 return cached_result; 17 17 18 - if (sysfs__read_int("devices/system/cpu/smt/active", &cached_result) > 0) 18 + if (sysfs__read_int("devices/system/cpu/smt/active", &cached_result) >= 0) 19 19 goto done; 20 20 21 21 ncpu = sysconf(_SC_NPROCESSORS_CONF);
+600 -32
tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
··· 35 35 .prog_type = BPF_PROG_TYPE_XDP, 36 36 }, 37 37 { 38 - "XDP pkt read, pkt_data' > pkt_end, good access", 38 + "XDP pkt read, pkt_data' > pkt_end, corner case, good access", 39 39 .insns = { 40 40 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 41 41 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 88 88 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 89 89 }, 90 90 { 91 + "XDP pkt read, pkt_data' > pkt_end, corner case +1, good access", 92 + .insns = { 93 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 94 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 95 + offsetof(struct xdp_md, data_end)), 96 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 97 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 98 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 99 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 100 + BPF_MOV64_IMM(BPF_REG_0, 0), 101 + BPF_EXIT_INSN(), 102 + }, 103 + .result = ACCEPT, 104 + .prog_type = BPF_PROG_TYPE_XDP, 105 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 106 + }, 107 + { 108 + "XDP pkt read, pkt_data' > pkt_end, corner case -1, bad access", 109 + .insns = { 110 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 111 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 112 + offsetof(struct xdp_md, data_end)), 113 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 114 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 115 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1), 116 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 117 + BPF_MOV64_IMM(BPF_REG_0, 0), 118 + BPF_EXIT_INSN(), 119 + }, 120 + .errstr = "R1 offset is outside of the packet", 121 + .result = REJECT, 122 + .prog_type = BPF_PROG_TYPE_XDP, 123 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 124 + }, 125 + { 91 126 "XDP pkt read, pkt_end > pkt_data', good access", 92 127 .insns = { 93 128 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 141 106 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 142 107 }, 143 108 { 144 - "XDP pkt read, pkt_end > 
pkt_data', bad access 1", 109 + "XDP pkt read, pkt_end > pkt_data', corner case -1, bad access", 145 110 .insns = { 146 111 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 147 112 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 148 113 offsetof(struct xdp_md, data_end)), 149 114 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 150 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 115 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 151 116 BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 152 117 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 153 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 118 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 154 119 BPF_MOV64_IMM(BPF_REG_0, 0), 155 120 BPF_EXIT_INSN(), 156 121 }, ··· 178 143 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 179 144 }, 180 145 { 146 + "XDP pkt read, pkt_end > pkt_data', corner case, good access", 147 + .insns = { 148 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 149 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 150 + offsetof(struct xdp_md, data_end)), 151 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 152 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 153 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 154 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 155 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 156 + BPF_MOV64_IMM(BPF_REG_0, 0), 157 + BPF_EXIT_INSN(), 158 + }, 159 + .result = ACCEPT, 160 + .prog_type = BPF_PROG_TYPE_XDP, 161 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 162 + }, 163 + { 164 + "XDP pkt read, pkt_end > pkt_data', corner case +1, good access", 165 + .insns = { 166 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 167 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 168 + offsetof(struct xdp_md, data_end)), 169 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 170 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 171 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1), 172 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 173 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 174 + BPF_MOV64_IMM(BPF_REG_0, 0), 175 + BPF_EXIT_INSN(), 
176 + }, 177 + .result = ACCEPT, 178 + .prog_type = BPF_PROG_TYPE_XDP, 179 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 180 + }, 181 + { 181 182 "XDP pkt read, pkt_data' < pkt_end, good access", 182 183 .insns = { 183 184 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 232 161 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 233 162 }, 234 163 { 235 - "XDP pkt read, pkt_data' < pkt_end, bad access 1", 164 + "XDP pkt read, pkt_data' < pkt_end, corner case -1, bad access", 236 165 .insns = { 237 166 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 238 167 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 239 168 offsetof(struct xdp_md, data_end)), 240 169 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 241 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 170 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 242 171 BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 243 172 BPF_JMP_IMM(BPF_JA, 0, 0, 1), 244 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 173 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 245 174 BPF_MOV64_IMM(BPF_REG_0, 0), 246 175 BPF_EXIT_INSN(), 247 176 }, ··· 269 198 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 270 199 }, 271 200 { 272 - "XDP pkt read, pkt_end < pkt_data', good access", 201 + "XDP pkt read, pkt_data' < pkt_end, corner case, good access", 202 + .insns = { 203 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 204 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 205 + offsetof(struct xdp_md, data_end)), 206 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 207 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 208 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 209 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 210 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 211 + BPF_MOV64_IMM(BPF_REG_0, 0), 212 + BPF_EXIT_INSN(), 213 + }, 214 + .result = ACCEPT, 215 + .prog_type = BPF_PROG_TYPE_XDP, 216 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 217 + }, 218 + { 219 + "XDP pkt read, pkt_data' < pkt_end, corner case +1, good access", 220 + .insns 
= { 221 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 222 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 223 + offsetof(struct xdp_md, data_end)), 224 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 225 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 226 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1), 227 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 228 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 229 + BPF_MOV64_IMM(BPF_REG_0, 0), 230 + BPF_EXIT_INSN(), 231 + }, 232 + .result = ACCEPT, 233 + .prog_type = BPF_PROG_TYPE_XDP, 234 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 235 + }, 236 + { 237 + "XDP pkt read, pkt_end < pkt_data', corner case, good access", 273 238 .insns = { 274 239 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 275 240 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 358 251 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 359 252 }, 360 253 { 254 + "XDP pkt read, pkt_end < pkt_data', corner case +1, good access", 255 + .insns = { 256 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 257 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 258 + offsetof(struct xdp_md, data_end)), 259 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 260 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 261 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 262 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 263 + BPF_MOV64_IMM(BPF_REG_0, 0), 264 + BPF_EXIT_INSN(), 265 + }, 266 + .result = ACCEPT, 267 + .prog_type = BPF_PROG_TYPE_XDP, 268 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 269 + }, 270 + { 271 + "XDP pkt read, pkt_end < pkt_data', corner case -1, bad access", 272 + .insns = { 273 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 274 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 275 + offsetof(struct xdp_md, data_end)), 276 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 277 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 278 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1), 279 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 280 + 
BPF_MOV64_IMM(BPF_REG_0, 0), 281 + BPF_EXIT_INSN(), 282 + }, 283 + .errstr = "R1 offset is outside of the packet", 284 + .result = REJECT, 285 + .prog_type = BPF_PROG_TYPE_XDP, 286 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 287 + }, 288 + { 361 289 "XDP pkt read, pkt_data' >= pkt_end, good access", 362 290 .insns = { 363 291 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), ··· 410 268 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 411 269 }, 412 270 { 413 - "XDP pkt read, pkt_data' >= pkt_end, bad access 1", 271 + "XDP pkt read, pkt_data' >= pkt_end, corner case -1, bad access", 414 272 .insns = { 415 273 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 416 274 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 417 275 offsetof(struct xdp_md, data_end)), 418 276 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 419 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 277 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6), 420 278 BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 421 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 279 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6), 422 280 BPF_MOV64_IMM(BPF_REG_0, 0), 423 281 BPF_EXIT_INSN(), 424 282 }, ··· 446 304 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 447 305 }, 448 306 { 449 - "XDP pkt read, pkt_end >= pkt_data', good access", 307 + "XDP pkt read, pkt_data' >= pkt_end, corner case, good access", 308 + .insns = { 309 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 310 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 311 + offsetof(struct xdp_md, data_end)), 312 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 313 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 314 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 315 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 316 + BPF_MOV64_IMM(BPF_REG_0, 0), 317 + BPF_EXIT_INSN(), 318 + }, 319 + .result = ACCEPT, 320 + .prog_type = BPF_PROG_TYPE_XDP, 321 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 322 + }, 323 + { 324 + "XDP pkt read, pkt_data' >= pkt_end, 
corner case +1, good access", 325 + .insns = { 326 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 327 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 328 + offsetof(struct xdp_md, data_end)), 329 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 330 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8), 331 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1), 332 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8), 333 + BPF_MOV64_IMM(BPF_REG_0, 0), 334 + BPF_EXIT_INSN(), 335 + }, 336 + .result = ACCEPT, 337 + .prog_type = BPF_PROG_TYPE_XDP, 338 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 339 + }, 340 + { 341 + "XDP pkt read, pkt_end >= pkt_data', corner case, good access", 450 342 .insns = { 451 343 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 452 344 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 535 359 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 536 360 }, 537 361 { 538 - "XDP pkt read, pkt_data' <= pkt_end, good access", 362 + "XDP pkt read, pkt_end >= pkt_data', corner case +1, good access", 363 + .insns = { 364 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 365 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 366 + offsetof(struct xdp_md, data_end)), 367 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 368 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 369 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), 370 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 371 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 372 + BPF_MOV64_IMM(BPF_REG_0, 0), 373 + BPF_EXIT_INSN(), 374 + }, 375 + .result = ACCEPT, 376 + .prog_type = BPF_PROG_TYPE_XDP, 377 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 378 + }, 379 + { 380 + "XDP pkt read, pkt_end >= pkt_data', corner case -1, bad access", 381 + .insns = { 382 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 383 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 384 + offsetof(struct xdp_md, data_end)), 385 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 386 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 387 + 
BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1), 388 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 389 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 390 + BPF_MOV64_IMM(BPF_REG_0, 0), 391 + BPF_EXIT_INSN(), 392 + }, 393 + .errstr = "R1 offset is outside of the packet", 394 + .result = REJECT, 395 + .prog_type = BPF_PROG_TYPE_XDP, 396 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 397 + }, 398 + { 399 + "XDP pkt read, pkt_data' <= pkt_end, corner case, good access", 539 400 .insns = { 540 401 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 541 402 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, ··· 627 414 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 628 415 }, 629 416 { 417 + "XDP pkt read, pkt_data' <= pkt_end, corner case +1, good access", 418 + .insns = { 419 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 420 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 421 + offsetof(struct xdp_md, data_end)), 422 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 423 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9), 424 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), 425 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 426 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9), 427 + BPF_MOV64_IMM(BPF_REG_0, 0), 428 + BPF_EXIT_INSN(), 429 + }, 430 + .result = ACCEPT, 431 + .prog_type = BPF_PROG_TYPE_XDP, 432 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 433 + }, 434 + { 435 + "XDP pkt read, pkt_data' <= pkt_end, corner case -1, bad access", 436 + .insns = { 437 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)), 438 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, 439 + offsetof(struct xdp_md, data_end)), 440 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2), 441 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7), 442 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1), 443 + BPF_JMP_IMM(BPF_JA, 0, 0, 1), 444 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7), 445 + BPF_MOV64_IMM(BPF_REG_0, 0), 446 + BPF_EXIT_INSN(), 447 + }, 448 + .errstr = "R1 offset is outside of the packet", 449 + .result = REJECT, 
450 + .prog_type = BPF_PROG_TYPE_XDP,
451 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
452 + },
453 + {
630 454 "XDP pkt read, pkt_end <= pkt_data', good access",
631 455 .insns = {
632 456 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
··· 681 431 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
682 432 },
683 433 {
684 - "XDP pkt read, pkt_end <= pkt_data', bad access 1",
434 + "XDP pkt read, pkt_end <= pkt_data', corner case -1, bad access",
685 435 .insns = {
686 436 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
687 437 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
688 438 offsetof(struct xdp_md, data_end)),
689 439 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
690 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
440 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
691 441 BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
692 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
442 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
693 443 BPF_MOV64_IMM(BPF_REG_0, 0),
694 444 BPF_EXIT_INSN(),
695 445 },
··· 717 467 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
718 468 },
719 469 {
720 - "XDP pkt read, pkt_meta' > pkt_data, good access",
470 + "XDP pkt read, pkt_end <= pkt_data', corner case, good access",
471 + .insns = {
472 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
473 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
474 + offsetof(struct xdp_md, data_end)),
475 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
476 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
477 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
478 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
479 + BPF_MOV64_IMM(BPF_REG_0, 0),
480 + BPF_EXIT_INSN(),
481 + },
482 + .result = ACCEPT,
483 + .prog_type = BPF_PROG_TYPE_XDP,
484 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
485 + },
486 + {
487 + "XDP pkt read, pkt_end <= pkt_data', corner case +1, good access",
488 + .insns = {
489 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
490 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
491 + offsetof(struct xdp_md, data_end)),
492 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
493 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
494 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
495 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
496 + BPF_MOV64_IMM(BPF_REG_0, 0),
497 + BPF_EXIT_INSN(),
498 + },
499 + .result = ACCEPT,
500 + .prog_type = BPF_PROG_TYPE_XDP,
501 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
502 + },
503 + {
504 + "XDP pkt read, pkt_meta' > pkt_data, corner case, good access",
721 505 .insns = {
722 506 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
723 507 offsetof(struct xdp_md, data_meta)),
··· 804 520 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
805 521 },
806 522 {
523 + "XDP pkt read, pkt_meta' > pkt_data, corner case +1, good access",
524 + .insns = {
525 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
526 + offsetof(struct xdp_md, data_meta)),
527 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
528 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
529 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
530 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
531 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
532 + BPF_MOV64_IMM(BPF_REG_0, 0),
533 + BPF_EXIT_INSN(),
534 + },
535 + .result = ACCEPT,
536 + .prog_type = BPF_PROG_TYPE_XDP,
537 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
538 + },
539 + {
540 + "XDP pkt read, pkt_meta' > pkt_data, corner case -1, bad access",
541 + .insns = {
542 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
543 + offsetof(struct xdp_md, data_meta)),
544 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
545 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
546 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
547 + BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
548 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
549 + BPF_MOV64_IMM(BPF_REG_0, 0),
550 + BPF_EXIT_INSN(),
551 + },
552 + .errstr = "R1 offset is outside of the packet",
553 + .result = REJECT,
554 + .prog_type = BPF_PROG_TYPE_XDP,
555 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
556 + },
557 + {
807 558 "XDP pkt read, pkt_data > pkt_meta', good access",
808 559 .insns = {
809 560 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
··· 857 538 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
858 539 },
859 540 {
860 - "XDP pkt read, pkt_data > pkt_meta', bad access 1",
541 + "XDP pkt read, pkt_data > pkt_meta', corner case -1, bad access",
861 542 .insns = {
862 543 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
863 544 offsetof(struct xdp_md, data_meta)),
864 545 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
865 546 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
866 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
547 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
867 548 BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
868 549 BPF_JMP_IMM(BPF_JA, 0, 0, 1),
869 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
550 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
870 551 BPF_MOV64_IMM(BPF_REG_0, 0),
871 552 BPF_EXIT_INSN(),
872 553 },
··· 894 575 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
895 576 },
896 577 {
578 + "XDP pkt read, pkt_data > pkt_meta', corner case, good access",
579 + .insns = {
580 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
581 + offsetof(struct xdp_md, data_meta)),
582 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
583 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
584 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
585 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
586 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
587 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
588 + BPF_MOV64_IMM(BPF_REG_0, 0),
589 + BPF_EXIT_INSN(),
590 + },
591 + .result = ACCEPT,
592 + .prog_type = BPF_PROG_TYPE_XDP,
593 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
594 + },
595 + {
596 + "XDP pkt read, pkt_data > pkt_meta', corner case +1, good access",
597 + .insns = {
598 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
599 + offsetof(struct xdp_md, data_meta)),
600 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
601 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
602 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
603 + BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
604 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
605 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
606 + BPF_MOV64_IMM(BPF_REG_0, 0),
607 + BPF_EXIT_INSN(),
608 + },
609 + .result = ACCEPT,
610 + .prog_type = BPF_PROG_TYPE_XDP,
611 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
612 + },
613 + {
897 614 "XDP pkt read, pkt_meta' < pkt_data, good access",
898 615 .insns = {
899 616 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
··· 948 593 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
949 594 },
950 595 {
951 - "XDP pkt read, pkt_meta' < pkt_data, bad access 1",
596 + "XDP pkt read, pkt_meta' < pkt_data, corner case -1, bad access",
952 597 .insns = {
953 598 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
954 599 offsetof(struct xdp_md, data_meta)),
955 600 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
956 601 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
957 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
602 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
958 603 BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
959 604 BPF_JMP_IMM(BPF_JA, 0, 0, 1),
960 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
605 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
961 606 BPF_MOV64_IMM(BPF_REG_0, 0),
962 607 BPF_EXIT_INSN(),
963 608 },
··· 985 630 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
986 631 },
987 632 {
988 - "XDP pkt read, pkt_data < pkt_meta', good access",
633 + "XDP pkt read, pkt_meta' < pkt_data, corner case, good access",
634 + .insns = {
635 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
636 + offsetof(struct xdp_md, data_meta)),
637 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
638 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
639 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
640 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
641 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
642 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
643 + BPF_MOV64_IMM(BPF_REG_0, 0),
644 + BPF_EXIT_INSN(),
645 + },
646 + .result = ACCEPT,
647 + .prog_type = BPF_PROG_TYPE_XDP,
648 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
649 + },
650 + {
651 + "XDP pkt read, pkt_meta' < pkt_data, corner case +1, good access",
652 + .insns = {
653 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
654 + offsetof(struct xdp_md, data_meta)),
655 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
656 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
657 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
658 + BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
659 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
660 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
661 + BPF_MOV64_IMM(BPF_REG_0, 0),
662 + BPF_EXIT_INSN(),
663 + },
664 + .result = ACCEPT,
665 + .prog_type = BPF_PROG_TYPE_XDP,
666 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
667 + },
668 + {
669 + "XDP pkt read, pkt_data < pkt_meta', corner case, good access",
989 670 .insns = {
990 671 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
991 672 offsetof(struct xdp_md, data_meta)),
··· 1074 683 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1075 684 },
1076 685 {
686 + "XDP pkt read, pkt_data < pkt_meta', corner case +1, good access",
687 + .insns = {
688 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
689 + offsetof(struct xdp_md, data_meta)),
690 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
691 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
692 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
693 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
694 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
695 + BPF_MOV64_IMM(BPF_REG_0, 0),
696 + BPF_EXIT_INSN(),
697 + },
698 + .result = ACCEPT,
699 + .prog_type = BPF_PROG_TYPE_XDP,
700 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
701 + },
702 + {
703 + "XDP pkt read, pkt_data < pkt_meta', corner case -1, bad access",
704 + .insns = {
705 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
706 + offsetof(struct xdp_md, data_meta)),
707 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
708 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
709 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
710 + BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
711 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
712 + BPF_MOV64_IMM(BPF_REG_0, 0),
713 + BPF_EXIT_INSN(),
714 + },
715 + .errstr = "R1 offset is outside of the packet",
716 + .result = REJECT,
717 + .prog_type = BPF_PROG_TYPE_XDP,
718 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
719 + },
720 + {
1077 721 "XDP pkt read, pkt_meta' >= pkt_data, good access",
1078 722 .insns = {
1079 723 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
··· 1126 700 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1127 701 },
1128 702 {
1129 - "XDP pkt read, pkt_meta' >= pkt_data, bad access 1",
703 + "XDP pkt read, pkt_meta' >= pkt_data, corner case -1, bad access",
1130 704 .insns = {
1131 705 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
1132 706 offsetof(struct xdp_md, data_meta)),
1133 707 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
1134 708 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
1135 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
709 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
1136 710 BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
1137 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
711 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
1138 712 BPF_MOV64_IMM(BPF_REG_0, 0),
1139 713 BPF_EXIT_INSN(),
1140 714 },
··· 1162 736 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1163 737 },
1164 738 {
1165 - "XDP pkt read, pkt_data >= pkt_meta', good access",
739 + "XDP pkt read, pkt_meta' >= pkt_data, corner case, good access",
740 + .insns = {
741 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
742 + offsetof(struct xdp_md, data_meta)),
743 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
744 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
745 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
746 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
747 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
748 + BPF_MOV64_IMM(BPF_REG_0, 0),
749 + BPF_EXIT_INSN(),
750 + },
751 + .result = ACCEPT,
752 + .prog_type = BPF_PROG_TYPE_XDP,
753 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
754 + },
755 + {
756 + "XDP pkt read, pkt_meta' >= pkt_data, corner case +1, good access",
757 + .insns = {
758 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
759 + offsetof(struct xdp_md, data_meta)),
760 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
761 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
762 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
763 + BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
764 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
765 + BPF_MOV64_IMM(BPF_REG_0, 0),
766 + BPF_EXIT_INSN(),
767 + },
768 + .result = ACCEPT,
769 + .prog_type = BPF_PROG_TYPE_XDP,
770 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
771 + },
772 + {
773 + "XDP pkt read, pkt_data >= pkt_meta', corner case, good access",
1166 774 .insns = {
1167 775 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
1168 776 offsetof(struct xdp_md, data_meta)),
··· 1251 791 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1252 792 },
1253 793 {
1254 - "XDP pkt read, pkt_meta' <= pkt_data, good access",
794 + "XDP pkt read, pkt_data >= pkt_meta', corner case +1, good access",
795 + .insns = {
796 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
797 + offsetof(struct xdp_md, data_meta)),
798 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
799 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
800 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
801 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
802 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
803 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
804 + BPF_MOV64_IMM(BPF_REG_0, 0),
805 + BPF_EXIT_INSN(),
806 + },
807 + .result = ACCEPT,
808 + .prog_type = BPF_PROG_TYPE_XDP,
809 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
810 + },
811 + {
812 + "XDP pkt read, pkt_data >= pkt_meta', corner case -1, bad access",
813 + .insns = {
814 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
815 + offsetof(struct xdp_md, data_meta)),
816 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
817 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
818 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
819 + BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
820 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
821 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
822 + BPF_MOV64_IMM(BPF_REG_0, 0),
823 + BPF_EXIT_INSN(),
824 + },
825 + .errstr = "R1 offset is outside of the packet",
826 + .result = REJECT,
827 + .prog_type = BPF_PROG_TYPE_XDP,
828 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
829 + },
830 + {
831 + "XDP pkt read, pkt_meta' <= pkt_data, corner case, good access",
1255 832 .insns = {
1256 833 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
1257 834 offsetof(struct xdp_md, data_meta)),
··· 1343 846 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1344 847 },
1345 848 {
849 + "XDP pkt read, pkt_meta' <= pkt_data, corner case +1, good access",
850 + .insns = {
851 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
852 + offsetof(struct xdp_md, data_meta)),
853 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
854 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
855 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
856 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
857 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
858 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
859 + BPF_MOV64_IMM(BPF_REG_0, 0),
860 + BPF_EXIT_INSN(),
861 + },
862 + .result = ACCEPT,
863 + .prog_type = BPF_PROG_TYPE_XDP,
864 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
865 + },
866 + {
867 + "XDP pkt read, pkt_meta' <= pkt_data, corner case -1, bad access",
868 + .insns = {
869 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
870 + offsetof(struct xdp_md, data_meta)),
871 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
872 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
873 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
874 + BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
875 + BPF_JMP_IMM(BPF_JA, 0, 0, 1),
876 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
877 + BPF_MOV64_IMM(BPF_REG_0, 0),
878 + BPF_EXIT_INSN(),
879 + },
880 + .errstr = "R1 offset is outside of the packet",
881 + .result = REJECT,
882 + .prog_type = BPF_PROG_TYPE_XDP,
883 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
884 + },
885 + {
1346 886 "XDP pkt read, pkt_data <= pkt_meta', good access",
1347 887 .insns = {
1348 888 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
··· 1397 863 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1398 864 },
1399 865 {
1400 - "XDP pkt read, pkt_data <= pkt_meta', bad access 1",
866 + "XDP pkt read, pkt_data <= pkt_meta', corner case -1, bad access",
1401 867 .insns = {
1402 868 BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
1403 869 offsetof(struct xdp_md, data_meta)),
1404 870 BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
1405 871 BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
1406 - BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
872 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
1407 873 BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
1408 - BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
874 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
1409 875 BPF_MOV64_IMM(BPF_REG_0, 0),
1410 876 BPF_EXIT_INSN(),
1411 877 },
··· 1429 895 },
1430 896 .errstr = "R1 offset is outside of the packet",
1431 897 .result = REJECT,
898 + .prog_type = BPF_PROG_TYPE_XDP,
899 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
900 + },
901 + {
902 + "XDP pkt read, pkt_data <= pkt_meta', corner case, good access",
903 + .insns = {
904 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
905 + offsetof(struct xdp_md, data_meta)),
906 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
907 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
908 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
909 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
910 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
911 + BPF_MOV64_IMM(BPF_REG_0, 0),
912 + BPF_EXIT_INSN(),
913 + },
914 + .result = ACCEPT,
915 + .prog_type = BPF_PROG_TYPE_XDP,
916 + .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
917 + },
918 + {
919 + "XDP pkt read, pkt_data <= pkt_meta', corner case +1, good access",
920 + .insns = {
921 + BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
922 + offsetof(struct xdp_md, data_meta)),
923 + BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
924 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
925 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
926 + BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
927 + BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
928 + BPF_MOV64_IMM(BPF_REG_0, 0),
929 + BPF_EXIT_INSN(),
930 + },
931 + .result = ACCEPT,
1432 932 .prog_type = BPF_PROG_TYPE_XDP,
1433 933 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
1434 934 },
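The "corner case" naming in the renamed tests above follows one piece of arithmetic: after `r1 = pkt_data + n`, an 8-byte (`BPF_DW`) load at `r1 - n` touches packet bytes [0, 8), so it is provably safe only when the range check guarantees at least 8 valid bytes. For the variants built on strict comparisons (as in the JGT/JLT cases), a branch on `pkt_data + n` proves `n + 1` bytes, making `n == 7` the exact corner. A small Python sketch (not kernel code; the helper name is ours) of that bookkeeping:

```python
def dw_load_ok(proven_bytes):
    """Can the verifier allow an 8-byte load at (pkt_data + n) - n,
    given the branch proved `proven_bytes` bytes past pkt_data?"""
    return proven_bytes >= 8

# A strict comparison on pkt_data + n proves n + 1 bytes:
assert dw_load_ok(7 + 1)        # "corner case" (n == 7) -> ACCEPT
assert dw_load_ok(8 + 1)        # "corner case +1" (n == 8) -> ACCEPT
assert not dw_load_ok(6 + 1)    # "corner case -1" (n == 6) -> REJECT
```

This is why each renamed group carries exactly three variants: the corner itself, one byte above it, and one byte below it.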
+8
tools/testing/selftests/net/fcnal-test.sh
··· 4115 4115 
4116 4116 printf "\nTests passed: %3d\n" ${nsuccess}
4117 4117 printf "Tests failed: %3d\n" ${nfail}
4118 + 
4119 + if [ $nfail -ne 0 ]; then
4120 + exit 1 # KSFT_FAIL
4121 + elif [ $nsuccess -eq 0 ]; then
4122 + exit $ksft_skip
4123 + fi
4124 + 
4125 + exit 0 # KSFT_PASS
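The fcnal-test.sh hunk above makes the script report a kselftest-style exit code instead of always exiting 0. A sketch of the same policy in Python (the value 4 for `$ksft_skip` is the usual kselftest convention, assumed here rather than taken from this diff):

```python
def ksft_exit_code(nsuccess, nfail, ksft_skip=4):
    """Mirror of the shell logic added above: fail wins, then skip, then pass."""
    if nfail != 0:
        return 1          # KSFT_FAIL: any failing subtest fails the run
    if nsuccess == 0:
        return ksft_skip  # nothing ran at all, report a skip
    return 0              # KSFT_PASS

assert ksft_exit_code(12, 0) == 0
assert ksft_exit_code(11, 1) == 1
assert ksft_exit_code(0, 0) == 4
```

Reporting a nonzero code on failure is what lets the kselftest runner (and CI) notice broken runs that previously looked green.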
+49 -10
tools/testing/selftests/net/fib_tests.sh
··· 444 444 setup
445 445 
446 446 set -e
447 + ip netns add ns2
448 + ip netns set ns2 auto
449 + 
450 + ip -netns ns2 link set dev lo up
451 + 
452 + $IP link add name veth1 type veth peer name veth2
453 + $IP link set dev veth2 netns ns2
454 + $IP address add 192.0.2.1/24 dev veth1
455 + ip -netns ns2 address add 192.0.2.1/24 dev veth2
456 + $IP link set dev veth1 up
457 + ip -netns ns2 link set dev veth2 up
458 + 
447 459 $IP link set dev lo address 52:54:00:6a:c7:5e
448 - $IP link set dummy0 address 52:54:00:6a:c7:5e
449 - $IP link add dummy1 type dummy
450 - $IP link set dummy1 address 52:54:00:6a:c7:5e
451 - $IP link set dev dummy1 up
460 + $IP link set dev veth1 address 52:54:00:6a:c7:5e
461 + ip -netns ns2 link set dev lo address 52:54:00:6a:c7:5e
462 + ip -netns ns2 link set dev veth2 address 52:54:00:6a:c7:5e
463 + 
464 + # 1. (ns2) redirect lo's egress to veth2's egress
465 + ip netns exec ns2 tc qdisc add dev lo parent root handle 1: fq_codel
466 + ip netns exec ns2 tc filter add dev lo parent 1: protocol arp basic \
467 + action mirred egress redirect dev veth2
468 + ip netns exec ns2 tc filter add dev lo parent 1: protocol ip basic \
469 + action mirred egress redirect dev veth2
470 + 
471 + # 2. (ns1) redirect veth1's ingress to lo's ingress
472 + $NS_EXEC tc qdisc add dev veth1 ingress
473 + $NS_EXEC tc filter add dev veth1 ingress protocol arp basic \
474 + action mirred ingress redirect dev lo
475 + $NS_EXEC tc filter add dev veth1 ingress protocol ip basic \
476 + action mirred ingress redirect dev lo
477 + 
478 + # 3. (ns1) redirect lo's egress to veth1's egress
479 + $NS_EXEC tc qdisc add dev lo parent root handle 1: fq_codel
480 + $NS_EXEC tc filter add dev lo parent 1: protocol arp basic \
481 + action mirred egress redirect dev veth1
482 + $NS_EXEC tc filter add dev lo parent 1: protocol ip basic \
483 + action mirred egress redirect dev veth1
484 + 
485 + # 4. (ns2) redirect veth2's ingress to lo's ingress
486 + ip netns exec ns2 tc qdisc add dev veth2 ingress
487 + ip netns exec ns2 tc filter add dev veth2 ingress protocol arp basic \
488 + action mirred ingress redirect dev lo
489 + ip netns exec ns2 tc filter add dev veth2 ingress protocol ip basic \
490 + action mirred ingress redirect dev lo
491 + 
452 492 $NS_EXEC sysctl -qw net.ipv4.conf.all.rp_filter=1
453 493 $NS_EXEC sysctl -qw net.ipv4.conf.all.accept_local=1
454 494 $NS_EXEC sysctl -qw net.ipv4.conf.all.route_localnet=1
455 - 
456 - $NS_EXEC tc qd add dev dummy1 parent root handle 1: fq_codel
457 - $NS_EXEC tc filter add dev dummy1 parent 1: protocol arp basic action mirred egress redirect dev lo
458 - $NS_EXEC tc filter add dev dummy1 parent 1: protocol ip basic action mirred egress redirect dev lo
495 + ip netns exec ns2 sysctl -qw net.ipv4.conf.all.rp_filter=1
496 + ip netns exec ns2 sysctl -qw net.ipv4.conf.all.accept_local=1
497 + ip netns exec ns2 sysctl -qw net.ipv4.conf.all.route_localnet=1
459 498 set +e
460 499 
461 - run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 198.51.100.1"
500 + run_cmd "ip netns exec ns2 ping -w1 -c1 192.0.2.1"
462 501 log_test $? 0 "rp_filter passes local packets"
463 502 
464 - run_cmd "ip netns exec ns1 ping -I dummy1 -w1 -c1 127.0.0.1"
503 + run_cmd "ip netns exec ns2 ping -w1 -c1 127.0.0.1"
465 504 log_test $? 0 "rp_filter passes loopback packets"
466 505 
467 506 cleanup
+36
tools/testing/selftests/net/tls.c
··· 31 31 struct tls12_crypto_info_chacha20_poly1305 chacha20;
32 32 struct tls12_crypto_info_sm4_gcm sm4gcm;
33 33 struct tls12_crypto_info_sm4_ccm sm4ccm;
34 + struct tls12_crypto_info_aes_ccm_128 aesccm128;
35 + struct tls12_crypto_info_aes_gcm_256 aesgcm256;
34 36 };
35 37 size_t len;
36 38 };
··· 62 60 tls12->len = sizeof(struct tls12_crypto_info_sm4_ccm);
63 61 tls12->sm4ccm.info.version = tls_version;
64 62 tls12->sm4ccm.info.cipher_type = cipher_type;
63 + break;
64 + case TLS_CIPHER_AES_CCM_128:
65 + tls12->len = sizeof(struct tls12_crypto_info_aes_ccm_128);
66 + tls12->aesccm128.info.version = tls_version;
67 + tls12->aesccm128.info.cipher_type = cipher_type;
68 + break;
69 + case TLS_CIPHER_AES_GCM_256:
70 + tls12->len = sizeof(struct tls12_crypto_info_aes_gcm_256);
71 + tls12->aesgcm256.info.version = tls_version;
72 + tls12->aesgcm256.info.cipher_type = cipher_type;
65 73 break;
66 74 default:
67 75 break;
··· 271 259 {
272 260 .tls_version = TLS_1_3_VERSION,
273 261 .cipher_type = TLS_CIPHER_SM4_CCM,
262 + };
263 + 
264 + FIXTURE_VARIANT_ADD(tls, 12_aes_ccm)
265 + {
266 + .tls_version = TLS_1_2_VERSION,
267 + .cipher_type = TLS_CIPHER_AES_CCM_128,
268 + };
269 + 
270 + FIXTURE_VARIANT_ADD(tls, 13_aes_ccm)
271 + {
272 + .tls_version = TLS_1_3_VERSION,
273 + .cipher_type = TLS_CIPHER_AES_CCM_128,
274 + };
275 + 
276 + FIXTURE_VARIANT_ADD(tls, 12_aes_gcm_256)
277 + {
278 + .tls_version = TLS_1_2_VERSION,
279 + .cipher_type = TLS_CIPHER_AES_GCM_256,
280 + };
281 + 
282 + FIXTURE_VARIANT_ADD(tls, 13_aes_gcm_256)
283 + {
284 + .tls_version = TLS_1_3_VERSION,
285 + .cipher_type = TLS_CIPHER_AES_GCM_256,
274 286 };
275 287 
276 288 FIXTURE_SETUP(tls)
+26 -4
tools/testing/selftests/netfilter/conntrack_vrf.sh
··· 150 150 # oifname is the vrf device.
151 151 test_masquerade_vrf()
152 152 {
153 + local qdisc=$1
154 + 
155 + if [ "$qdisc" != "default" ]; then
156 + tc -net $ns0 qdisc add dev tvrf root $qdisc
157 + fi
158 + 
153 159 ip netns exec $ns0 conntrack -F 2>/dev/null
154 160 
155 161 ip netns exec $ns0 nft -f - <<EOF
156 162 flush ruleset
157 163 table ip nat {
164 + chain rawout {
165 + type filter hook output priority raw;
166 + 
167 + oif tvrf ct state untracked counter
168 + }
169 + chain postrouting2 {
170 + type filter hook postrouting priority mangle;
171 + 
172 + oif tvrf ct state untracked counter
173 + }
158 174 chain postrouting {
159 175 type nat hook postrouting priority 0;
160 176 # NB: masquerade should always be combined with 'oif(name) bla',
··· 187 171 fi
188 172 
189 173 # must also check that nat table was evaluated on second (lower device) iteration.
190 - ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2'
174 + ip netns exec $ns0 nft list table ip nat |grep -q 'counter packets 2' &&
175 + ip netns exec $ns0 nft list table ip nat |grep -q 'untracked counter packets [1-9]'
191 176 if [ $? -eq 0 ]; then
192 - echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device"
177 + echo "PASS: iperf3 connect with masquerade + sport rewrite on vrf device ($qdisc qdisc)"
193 178 else
194 - echo "FAIL: vrf masq rule has unexpected counter value"
179 + echo "FAIL: vrf rules have unexpected counter value"
195 180 ret=1
181 + fi
182 + 
183 + if [ "$qdisc" != "default" ]; then
184 + tc -net $ns0 qdisc del dev tvrf root
196 185 fi
197 186 }
··· 234 213 }
235 214 
236 215 test_ct_zone_in
237 - test_masquerade_vrf
216 + test_masquerade_vrf "default"
217 + test_masquerade_vrf "pfifo"
238 218 test_masquerade_veth
239 219 
240 220 exit $ret
+21 -3
tools/testing/selftests/netfilter/nft_concat_range.sh
··· 23 23 
24 24 # Set types, defined by TYPE_ variables below
25 25 TYPES="net_port port_net net6_port port_proto net6_port_mac net6_port_mac_proto
26 - net_port_net net_mac net_mac_icmp net6_mac_icmp net6_port_net6_port
27 - net_port_mac_proto_net"
26 + net_port_net net_mac mac_net net_mac_icmp net6_mac_icmp
27 + net6_port_net6_port net_port_mac_proto_net"
28 28 
29 29 # Reported bugs, also described by TYPE_ variables below
30 30 BUGS="flush_remove_add"
··· 275 275 perf_src
276 276 perf_entries 1000
277 277 perf_proto ipv4
278 + "
279 + 
280 + TYPE_mac_net="
281 + display mac,net
282 + type_spec ether_addr . ipv4_addr
283 + chain_spec ether saddr . ip saddr
284 + dst 
285 + src mac addr4
286 + start 1
287 + count 5
288 + src_delta 2000
289 + tools sendip nc bash
290 + proto udp
291 + 
292 + race_repeat 0
293 + 
294 + perf_duration 0
278 295 "
279 296 
280 297 TYPE_net_mac_icmp="
··· 1001 984 fi
1002 985 done
1003 986 for f in ${src}; do
1004 - __expr="${__expr} . "
987 + [ "${__expr}" != "{ " ] && __expr="${__expr} . "
988 + 
1005 989 __start="$(eval format_"${f}" "${srcstart}")"
1006 990 __end="$(eval format_"${f}" "${srcend}")"
1007 991 
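The one-line nft_concat_range.sh fix above guards the concatenation separator: the shell only appends " . " once the set expression already holds a first field, so a leading separator no longer corrupts the generated nft expression. An equivalent sketch of that incremental build (helper name is ours, not from the script):

```python
def build_expr(fields):
    """Build an nft concatenated-set expression like "{ a . b }"
    using the same guard as the shell test: no separator before the
    first field."""
    expr = "{ "
    for f in fields:
        if expr != "{ ":      # same check as `[ "${__expr}" != "{ " ]`
            expr += " . "
        expr += f
    return expr + " }"

assert build_expr(["ether saddr", "ip saddr"]) == "{ ether saddr . ip saddr }"
assert build_expr(["ip saddr"]) == "{ ip saddr }"
```

Without the guard, a single-field source would render as "{  . ip saddr }", which nft rejects; the new mac_net type is the first case that exercises this path.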
+13 -6
tools/testing/selftests/netfilter/nft_zones_many.sh
··· 18 18 ip netns del $ns
19 19 }
20 20 
21 - ip netns add $ns
22 - if [ $? -ne 0 ];then
23 - echo "SKIP: Could not create net namespace $gw"
24 - exit $ksft_skip
25 - fi
21 + checktool (){
22 + if ! $1 > /dev/null 2>&1; then
23 + echo "SKIP: Could not $2"
24 + exit $ksft_skip
25 + fi
26 + }
27 + 
28 + checktool "nft --version" "run test without nft tool"
29 + checktool "ip -Version" "run test without ip tool"
30 + checktool "socat -V" "run test without socat tool"
31 + checktool "ip netns add $ns" "create net namespace"
26 32 
27 33 trap cleanup EXIT
28 34 
··· 77 71 local start=$(date +%s%3N)
78 72 i=$((i + 10000))
79 73 j=$((j + 1))
80 - dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" nc -w 1 -q 1 -u -p 12345 127.0.0.1 12345 > /dev/null
74 + # nft rule in output places each packet in a different zone.
75 + dd if=/dev/zero of=/dev/stdout bs=8k count=10000 2>/dev/null | ip netns exec "$ns" socat STDIN UDP:127.0.0.1:12345,sourceport=12345
81 76 if [ $? -ne 0 ] ;then
82 77 ret=1
83 78 break
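The `checktool` helper the hunk above introduces is a general pattern in these selftests: probe every required tool up front and skip (rather than fail) when one is missing. A Python sketch of the same idea (the value 4 for the skip code is the usual kselftest convention, assumed here):

```python
import shutil

KSFT_SKIP = 4  # conventional kselftest skip exit code ($ksft_skip)

def checktool(binary, why):
    """Return None if `binary` is available, otherwise a (message, code)
    pair the caller can use to skip the whole test, mirroring the shell
    helper above."""
    if shutil.which(binary) is None:
        return (f"SKIP: Could not {why}", KSFT_SKIP)
    return None

# "sh" is present on any POSIX system; a nonsense name is not.
assert checktool("sh", "run test without a shell") is None
assert checktool("definitely-not-a-real-tool", "run test")[1] == KSFT_SKIP
```

Skipping instead of failing keeps CI results honest: a missing `socat` on the test host is an environment problem, not a kernel regression.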
+2
tools/testing/selftests/tc-testing/config
··· 60 60 CONFIG_NET_SCH_FIFO=y
61 61 CONFIG_NET_SCH_ETS=m
62 62 CONFIG_NET_SCH_RED=m
63 + CONFIG_NET_SCH_FQ_PIE=m
64 + CONFIG_NETDEVSIM=m
63 65 
64 66 #
65 67 ## Network testing
+5 -3
tools/testing/selftests/tc-testing/tdc.py
··· 716 716 list_test_cases(alltests)
717 717 exit(0)
718 718 
719 + exit_code = 0 # KSFT_PASS
719 720 if len(alltests):
720 721 req_plugins = pm.get_required_plugins(alltests)
721 722 try:
··· 725 724 print('The following plugins were not found:')
726 725 print('{}'.format(pde.missing_pg))
727 726 catresults = test_runner(pm, args, alltests)
727 + if catresults.count_failures() != 0:
728 + exit_code = 1 # KSFT_FAIL
728 729 if args.format == 'none':
729 730 print('Test results output suppression requested\n')
730 731 else:
··· 751 748 gid=int(os.getenv('SUDO_GID')))
752 749 else:
753 750 print('No tests found\n')
751 + exit_code = 4 # KSFT_SKIP
752 + exit(exit_code)
754 753 
755 754 def main():
756 755 """
··· 771 766 print('args is {}'.format(args))
772 767 
773 768 set_operation_mode(pm, parser, args, remaining)
774 - 
775 - exit(0)
776 - 
777 769 
778 770 if __name__ == "__main__":
779 771 main()
+1
tools/testing/selftests/tc-testing/tdc.sh
··· 1 1 #!/bin/sh
2 2 # SPDX-License-Identifier: GPL-2.0
3 3 
4 + modprobe netdevsim
4 5 ./tdc.py -c actions --nobuildebpf
5 6 ./tdc.py -c qdisc