Linux kernel mirror (for testing): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.16-rc8 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+2545 -1281
+8 -2
Documentation/admin-guide/kernel-parameters.txt
···
     architectures force reset to be always executed
     i8042.unlock  [HW] Unlock (ignore) the keylock
     i8042.kbdreset  [HW] Reset device connected to KBD port
+    i8042.probe_defer
+            [HW] Allow deferred probing upon i8042 probe errors

     i810=  [HW,DRM]
···
     Default is 1 (enabled)

     kvm-intel.emulate_invalid_guest_state=
-            [KVM,Intel] Enable emulation of invalid guest states
-            Default is 0 (disabled)
+            [KVM,Intel] Disable emulation of invalid guest state.
+            Ignored if kvm-intel.enable_unrestricted_guest=1, as
+            guest state is never invalid for unrestricted guests.
+            This param doesn't apply to nested guests (L2), as KVM
+            never emulates invalid L2 guest state.
+            Default is 1 (enabled)

     kvm-intel.flexpriority=
             [KVM,Intel] Disable FlexPriority feature (TPR shadow).
+25
Documentation/devicetree/bindings/regulator/samsung,s5m8767.yaml
···
     description:
       Properties for single BUCK regulator.

+    properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+          0 - always off mode
+          1 - on in normal mode
+          2 - low power mode
+          3 - suspend mode
+
     required:
       - regulator-name

···
       Properties for single BUCK regulator.

     properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+          0 - always off mode
+          1 - on in normal mode
+          2 - low power mode
+          3 - suspend mode
+
       s5m8767,pmic-ext-control-gpios:
         maxItems: 1
         description: |
+5 -3
Documentation/i2c/summary.rst
···
 and so are not advertised as being I2C but come under different names,
 e.g. TWI (Two Wire Interface), IIC.

-The official I2C specification is the `"I2C-bus specification and user
-manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
-published by NXP Semiconductors.
+The latest official I2C specification is the `"I2C-bus specification and user
+manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
+published by NXP Semiconductors. However, you need to log-in to the site to
+access the PDF. An older version of the specification (revision 6) is archived
+`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.

 SMBus (System Management Bus) is based on the I2C protocol, and is mostly
 a subset of I2C protocols and signaling. Many I2C devices will work on an
+6 -5
Documentation/networking/bonding.rst
···
 ad_actor_system

     In an AD system, this specifies the mac-address for the actor in
-    protocol packet exchanges (LACPDUs). The value cannot be NULL or
-    multicast. It is preferred to have the local-admin bit set for this
-    mac but driver does not enforce it. If the value is not given then
-    system defaults to using the masters' mac address as actors' system
-    address.
+    protocol packet exchanges (LACPDUs). The value cannot be a multicast
+    address. If the all-zeroes MAC is specified, bonding will internally
+    use the MAC of the bond itself. It is preferred to have the
+    local-admin bit set for this mac but driver does not enforce it. If
+    the value is not given then system defaults to using the masters'
+    mac address as actors' system address.

     This parameter has effect only in 802.3ad mode and is available through
     SysFs interface.
+1
Documentation/networking/device_drivers/ethernet/freescale/dpaa2/overview.rst
···
     IRQ config, enable, reset

 DPNI (Datapath Network Interface)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Contains TX/RX queues, network interface configuration, and RX buffer pool
 configuration mechanisms. The TX/RX queues are in memory and are identified
 by queue number.
+4 -2
Documentation/networking/ip-sysctl.rst
···
 ip_no_pmtu_disc - INTEGER
     Disable Path MTU Discovery. If enabled in mode 1 and a
     fragmentation-required ICMP is received, the PMTU to this
-    destination will be set to min_pmtu (see below). You will need
+    destination will be set to the smallest of the old MTU to
+    this destination and min_pmtu (see below). You will need
     to raise min_pmtu to the smallest interface MTU on your system
     manually if you want to avoid locally generated fragments.
···
     Default: FALSE

 min_pmtu - INTEGER
-    default 552 - minimum discovered Path MTU
+    default 552 - minimum Path MTU. Unless this is changed mannually,
+    each cached pmtu will never be lower than this setting.

 ip_forward_use_pmtu - BOOLEAN
     By default we don't trust protocol path MTUs while forwarding
+2 -2
Documentation/networking/timestamping.rst
···
   and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
 - As soon as the driver has sent the packet and/or obtained a
   hardware time stamp for it, it passes the time stamp back by
-  calling skb_hwtstamp_tx() with the original skb, the raw
-  hardware time stamp. skb_hwtstamp_tx() clones the original skb and
+  calling skb_tstamp_tx() with the original skb, the raw
+  hardware time stamp. skb_tstamp_tx() clones the original skb and
   adds the timestamps, therefore the original skb has to be freed now.
   If obtaining the hardware time stamp somehow fails, then the driver
   should not fall back to software time stamping. The rationale is that
+2
Documentation/sound/hd-audio/models.rst
···
     Headset support on USI machines
 dual-codecs
     Lenovo laptops with dual codecs
+alc285-hp-amp-init
+    HP laptops which require speaker amplifier initialization (ALC285)

 ALC680
 ======
+2 -2
MAINTAINERS
···
 M:  Ryder Lee <ryder.lee@mediatek.com>
 M:  Jianjun Wang <jianjun.wang@mediatek.com>
 L:  linux-pci@vger.kernel.org
-L:  linux-mediatek@lists.infradead.org
+L:  linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:  Supported
 F:  Documentation/devicetree/bindings/pci/mediatek*
 F:  drivers/pci/controller/*mediatek*
···
 SILVACO I3C DUAL-ROLE MASTER
 M:  Miquel Raynal <miquel.raynal@bootlin.com>
 M:  Conor Culhane <conor.culhane@silvaco.com>
-L:  linux-i3c@lists.infradead.org
+L:  linux-i3c@lists.infradead.org (moderated for non-subscribers)
 S:  Maintained
 F:  Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
 F:  drivers/i3c/master/svc-i3c-master.c
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 16
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc8
 NAME = Gobble Gobble

 # *DOCUMENTATION*
+1
arch/arm/boot/dts/imx6qdl-wandboard.dtsi
···

     ethphy: ethernet-phy@1 {
         reg = <1>;
+        qca,clk-out-frequency = <125000000>;
     };
 };
-1
arch/arm/include/asm/efi.h
···

 #ifdef CONFIG_EFI
 void efi_init(void);
-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);

 int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
 int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
+3 -5
arch/arm/kernel/entry-armv.S
···
     tstne   r0, #0x04000000         @ bit 26 set on both ARM and Thumb-2
     reteq   lr
     and     r8, r0, #0x00000f00     @ mask out CP number
- THUMB( lsr     r8, r8, #8 )
     mov     r7, #1
-    add     r6, r10, #TI_USED_CP
- ARM(   strb    r7, [r6, r8, lsr #8] )  @ set appropriate used_cp[]
- THUMB( strb    r7, [r6, r8] )          @ set appropriate used_cp[]
+    add     r6, r10, r8, lsr #8     @ add used_cp[] array offset first
+    strb    r7, [r6, #TI_USED_CP]   @ set appropriate used_cp[]
 #ifdef CONFIG_IWMMXT
     @ Test if we need to give access to iWMMXt coprocessors
     ldr     r5, [r10, #TI_FLAGS]
···
     bcs     iwmmxt_task_enable
 #endif
 ARM(   add     pc, pc, r8, lsr #6 )
-THUMB( lsl     r8, r8, #2 )
+THUMB( lsr     r8, r8, #6 )
 THUMB( add     pc, r8 )
     nop
+1
arch/arm/kernel/head-nommu.S
···
     add     r12, r12, r10
     ret     r12
 1:  bl      __after_proc_init
+    ldr     r7, __secondary_data        @ reload r7
     ldr     sp, [r7, #12]               @ set up the stack pointer
     ldr     r0, [r7, #16]               @ set up task pointer
     mov     fp, #0
+1 -1
arch/arm64/boot/dts/allwinner/sun50i-h5-orangepi-zero-plus.dts
···
     pinctrl-0 = <&emac_rgmii_pins>;
     phy-supply = <&reg_gmac_3v3>;
     phy-handle = <&ext_rgmii_phy>;
-    phy-mode = "rgmii";
+    phy-mode = "rgmii-id";
     status = "okay";
 };
+2 -2
arch/arm64/boot/dts/freescale/fsl-lx2160a.dtsi
···
     clock-names = "i2c";
     clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
                 QORIQ_CLK_PLL_DIV(16)>;
-    scl-gpio = <&gpio2 15 GPIO_ACTIVE_HIGH>;
+    scl-gpios = <&gpio2 15 GPIO_ACTIVE_HIGH>;
     status = "disabled";
 };
···
     clock-names = "i2c";
     clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
                 QORIQ_CLK_PLL_DIV(16)>;
-    scl-gpio = <&gpio2 16 GPIO_ACTIVE_HIGH>;
+    scl-gpios = <&gpio2 16 GPIO_ACTIVE_HIGH>;
     status = "disabled";
 };
-1
arch/arm64/include/asm/efi.h
···

 #ifdef CONFIG_EFI
 extern void efi_init(void);
-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
 #else
 #define efi_init()
 #endif
-5
arch/parisc/Kconfig
···
 config STACK_GROWSUP
     def_bool y

-config ARCH_DEFCONFIG
-    string
-    default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
-    default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
-
 config GENERIC_LOCKBREAK
     bool
     default y
+2 -2
arch/parisc/include/asm/futex.h
···
 _futex_spin_lock(u32 __user *uaddr)
 {
     extern u32 lws_lock_start[];
-    long index = ((long)uaddr & 0x3f8) >> 1;
+    long index = ((long)uaddr & 0x7f8) >> 1;
     arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
     preempt_disable();
     arch_spin_lock(s);
···
 _futex_spin_unlock(u32 __user *uaddr)
 {
     extern u32 lws_lock_start[];
-    long index = ((long)uaddr & 0x3f8) >> 1;
+    long index = ((long)uaddr & 0x7f8) >> 1;
     arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
     arch_spin_unlock(s);
     preempt_enable();
+1 -1
arch/parisc/kernel/syscall.S
···
     extrd,u %r1,PSW_W_BIT,1,%r1
     /* sp must be aligned on 4, so deposit the W bit setting into
      * the bottom of sp temporarily */
-    or,ev   %r1,%r30,%r30
+    or,od   %r1,%r30,%r30

     /* Clip LWS number to a 32-bit value for 32-bit processes */
     depdi   0, 31, 32, %r20
+2
arch/parisc/kernel/traps.c
···
         }
         mmap_read_unlock(current->mm);
     }
+    /* CPU could not fetch instruction, so clear stale IIR value. */
+    regs->iir = 0xbaadf00d;
     fallthrough;
     case 27:
         /* Data memory protection ID trap */
+1 -1
arch/powerpc/mm/ptdump/ptdump.c
···
 {
     pte_t pte = __pte(st->current_flags);

-    if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
+    if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
         return;

     if (!pte_write(pte) || !pte_exec(pte))
-1
arch/riscv/include/asm/efi.h
···

 #ifdef CONFIG_EFI
 extern void efi_init(void);
-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
 #else
 #define efi_init()
 #endif
-2
arch/x86/include/asm/efi.h
···

 extern void parse_efi_setup(u64 phys_addr, u32 data_len);

-extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
-
 extern void efi_thunk_runtime_setup(void);
 efi_status_t efi_set_virtual_address_map(unsigned long memory_map_size,
                                          unsigned long descriptor_size,
+1
arch/x86/include/asm/kvm-x86-ops.h
···
 KVM_X86_OP(cache_reg)
 KVM_X86_OP(get_rflags)
 KVM_X86_OP(set_rflags)
+KVM_X86_OP(get_if_flag)
 KVM_X86_OP(tlb_flush_all)
 KVM_X86_OP(tlb_flush_current)
 KVM_X86_OP_NULL(tlb_remote_flush)
+1
arch/x86/include/asm/kvm_host.h
···
     void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
     unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
     void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
+    bool (*get_if_flag)(struct kvm_vcpu *vcpu);

     void (*tlb_flush_all)(struct kvm_vcpu *vcpu);
     void (*tlb_flush_current)(struct kvm_vcpu *vcpu);
+2 -2
arch/x86/include/asm/pkru.h
···

 #include <asm/cpufeature.h>

-#define PKRU_AD_BIT 0x1
-#define PKRU_WD_BIT 0x2
+#define PKRU_AD_BIT 0x1u
+#define PKRU_WD_BIT 0x2u
 #define PKRU_BITS_PER_PKEY 2

 #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+30 -42
arch/x86/kernel/setup.c
···

     early_reserve_initrd();

-    if (efi_enabled(EFI_BOOT))
-        efi_memblock_x86_reserve_range();
-
     memblock_x86_reserve_range_setup_data();

     reserve_ibft_region();
···
     }

     return 0;
-}
-
-static char * __init prepare_command_line(void)
-{
-#ifdef CONFIG_CMDLINE_BOOL
-#ifdef CONFIG_CMDLINE_OVERRIDE
-    strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-#else
-    if (builtin_cmdline[0]) {
-        /* append boot loader cmdline to builtin */
-        strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
-        strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
-        strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-    }
-#endif
-#endif
-
-    strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
-
-    parse_early_param();
-
-    return command_line;
 }

 /*
···
     x86_init.oem.arch_setup();

     /*
-     * x86_configure_nx() is called before parse_early_param() (called by
-     * prepare_command_line()) to detect whether hardware doesn't support
-     * NX (so that the early EHCI debug console setup can safely call
-     * set_fixmap()). It may then be called again from within noexec_setup()
-     * during parsing early parameters to honor the respective command line
-     * option.
-     */
-    x86_configure_nx();
-
-    /*
-     * This parses early params and it needs to run before
-     * early_reserve_memory() because latter relies on such settings
-     * supplied as early params.
-     */
-    *cmdline_p = prepare_command_line();
-
-    /*
      * Do some memory reservations *before* memory is added to memblock, so
      * memblock allocations won't overwrite it.
      *
···
     data_resource.end = __pa_symbol(_edata)-1;
     bss_resource.start = __pa_symbol(__bss_start);
     bss_resource.end = __pa_symbol(__bss_stop)-1;
+
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+    strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+    if (builtin_cmdline[0]) {
+        /* append boot loader cmdline to builtin */
+        strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+        strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+        strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+    }
+#endif
+#endif
+
+    strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+    *cmdline_p = command_line;
+
+    /*
+     * x86_configure_nx() is called before parse_early_param() to detect
+     * whether hardware doesn't support NX (so that the early EHCI debug
+     * console setup can safely call set_fixmap()). It may then be called
+     * again from within noexec_setup() during parsing early parameters
+     * to honor the respective command line option.
+     */
+    x86_configure_nx();
+
+    parse_early_param();
+
+    if (efi_enabled(EFI_BOOT))
+        efi_memblock_x86_reserve_range();

 #ifdef CONFIG_MEMORY_HOTPLUG
     /*
+6
arch/x86/kvm/mmu/tdp_iter.c
···
  */
 void tdp_iter_restart(struct tdp_iter *iter)
 {
+    iter->yielded = false;
     iter->yielded_gfn = iter->next_last_level_gfn;
     iter->level = iter->root_level;

···
  */
 void tdp_iter_next(struct tdp_iter *iter)
 {
+    if (iter->yielded) {
+        tdp_iter_restart(iter);
+        return;
+    }
+
     if (try_step_down(iter))
         return;
+6
arch/x86/kvm/mmu/tdp_iter.h
···
      * iterator walks off the end of the paging structure.
      */
     bool valid;
+    /*
+     * True if KVM dropped mmu_lock and yielded in the middle of a walk, in
+     * which case tdp_iter_next() needs to restart the walk at the root
+     * level instead of advancing to the next entry.
+     */
+    bool yielded;
 };

 /*
+16 -13
arch/x86/kvm/mmu/tdp_mmu.c
···
                       struct tdp_iter *iter,
                       u64 new_spte)
 {
+    WARN_ON_ONCE(iter->yielded);
+
     lockdep_assert_held_read(&kvm->mmu_lock);

     /*
···
                u64 new_spte, bool record_acc_track,
                bool record_dirty_log)
 {
+    WARN_ON_ONCE(iter->yielded);
+
     lockdep_assert_held_write(&kvm->mmu_lock);

     /*
···
  * If this function should yield and flush is set, it will perform a remote
  * TLB flush before yielding.
  *
- * If this function yields, it will also reset the tdp_iter's walk over the
- * paging structure and the calling function should skip to the next
- * iteration to allow the iterator to continue its traversal from the
- * paging structure root.
+ * If this function yields, iter->yielded is set and the caller must skip to
+ * the next iteration, where tdp_iter_next() will reset the tdp_iter's walk
+ * over the paging structures to allow the iterator to continue its traversal
+ * from the paging structure root.
  *
- * Return true if this function yielded and the iterator's traversal was reset.
- * Return false if a yield was not needed.
+ * Returns true if this function yielded.
  */
-static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
-                                             struct tdp_iter *iter, bool flush,
-                                             bool shared)
+static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
+                                                          struct tdp_iter *iter,
+                                                          bool flush, bool shared)
 {
+    WARN_ON(iter->yielded);
+
     /* Ensure forward progress has been made before yielding. */
     if (iter->next_last_level_gfn == iter->yielded_gfn)
         return false;
···

         WARN_ON(iter->gfn > iter->next_last_level_gfn);

-        tdp_iter_restart(iter);
-
-        return true;
+        iter->yielded = true;
     }

-    return false;
+    return iter->yielded;
 }

 /*
+12 -9
arch/x86/kvm/svm/svm.c
···
     to_svm(vcpu)->vmcb->save.rflags = rflags;
 }

+static bool svm_get_if_flag(struct kvm_vcpu *vcpu)
+{
+    struct vmcb *vmcb = to_svm(vcpu)->vmcb;
+
+    return sev_es_guest(vcpu->kvm)
+           ? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK
+           : kvm_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
     switch (reg) {
···
     if (!gif_set(svm))
         return true;

-    if (sev_es_guest(vcpu->kvm)) {
-        /*
-         * SEV-ES guests to not expose RFLAGS. Use the VMCB interrupt mask
-         * bit to determine the state of the IF flag.
-         */
-        if (!(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK))
-            return true;
-    } else if (is_guest_mode(vcpu)) {
+    if (is_guest_mode(vcpu)) {
         /* As long as interrupts are being delivered... */
         if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)
             ? !(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF)
···
         if (nested_exit_on_intr(svm))
             return false;
     } else {
-        if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
+        if (!svm_get_if_flag(vcpu))
             return true;
     }

···
     .cache_reg = svm_cache_reg,
     .get_rflags = svm_get_rflags,
     .set_rflags = svm_set_rflags,
+    .get_if_flag = svm_get_if_flag,

     .tlb_flush_all = svm_flush_tlb,
     .tlb_flush_current = svm_flush_tlb,
+32 -13
arch/x86/kvm/vmx/vmx.c
···
     vmx->emulation_required = vmx_emulation_required(vcpu);
 }

+static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
+{
+    return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
     u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
···
     if (pi_test_and_set_on(&vmx->pi_desc))
         return 0;

-    if (vcpu != kvm_get_running_vcpu() &&
-        !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+    if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
         kvm_vcpu_kick(vcpu);

     return 0;
···
         vmx_flush_pml_buffer(vcpu);

     /*
-     * We should never reach this point with a pending nested VM-Enter, and
-     * more specifically emulation of L2 due to invalid guest state (see
-     * below) should never happen as that means we incorrectly allowed a
-     * nested VM-Enter with an invalid vmcs12.
+     * KVM should never reach this point with a pending nested VM-Enter.
+     * More specifically, short-circuiting VM-Entry to emulate L2 due to
+     * invalid guest state should never happen as that means KVM knowingly
+     * allowed a nested VM-Enter with an invalid vmcs12. More below.
      */
     if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm))
         return -EIO;
-
-    /* If guest state is invalid, start emulating */
-    if (vmx->emulation_required)
-        return handle_invalid_guest_state(vcpu);

     if (is_guest_mode(vcpu)) {
         /*
···
          */
         nested_mark_vmcs12_pages_dirty(vcpu);

+        /*
+         * Synthesize a triple fault if L2 state is invalid. In normal
+         * operation, nested VM-Enter rejects any attempt to enter L2
+         * with invalid state. However, those checks are skipped if
+         * state is being stuffed via RSM or KVM_SET_NESTED_STATE. If
+         * L2 state is invalid, it means either L1 modified SMRAM state
+         * or userspace provided bad state. Synthesize TRIPLE_FAULT as
+         * doing so is architecturally allowed in the RSM case, and is
+         * the least awful solution for the userspace case without
+         * risking false positives.
+         */
+        if (vmx->emulation_required) {
+            nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
+            return 1;
+        }
+
         if (nested_vmx_reflect_vmexit(vcpu))
             return 1;
     }
+
+    /* If guest state is invalid, start emulating. L2 is handled above. */
+    if (vmx->emulation_required)
+        return handle_invalid_guest_state(vcpu);

     if (exit_reason.failed_vmentry) {
         dump_vmcs(vcpu);
···
      * consistency check VM-Exit due to invalid guest state and bail.
      */
     if (unlikely(vmx->emulation_required)) {
-
-        /* We don't emulate invalid state of a nested guest */
-        vmx->fail = is_guest_mode(vcpu);
+        vmx->fail = 0;

         vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
         vmx->exit_reason.failed_vmentry = 1;
···
     .cache_reg = vmx_cache_reg,
     .get_rflags = vmx_get_rflags,
     .set_rflags = vmx_set_rflags,
+    .get_if_flag = vmx_get_if_flag,

     .tlb_flush_all = vmx_flush_tlb_all,
     .tlb_flush_current = vmx_flush_tlb_current,
+2 -9
arch/x86/kvm/x86.c
···
     MSR_IA32_UMWAIT_CONTROL,

     MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-    MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
+    MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
     MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
     MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
     MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
···
 {
     struct kvm_run *kvm_run = vcpu->run;

-    /*
-     * if_flag is obsolete and useless, so do not bother
-     * setting it for SEV-ES guests. Userspace can just
-     * use kvm_run->ready_for_interrupt_injection.
-     */
-    kvm_run->if_flag = !vcpu->arch.guest_state_protected
-        && (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
-
+    kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
     kvm_run->cr8 = kvm_get_cr8(vcpu);
     kvm_run->apic_base = kvm_get_apic_base(vcpu);
+1 -1
arch/x86/tools/relocs.c
···
     "(__parainstructions|__alt_instructions)(_end)?|"
     "(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
     "__(start|end)_pci_.*|"
-#if CONFIG_FW_LOADER_BUILTIN
+#if CONFIG_FW_LOADER
     "__(start|end)_builtin_fw|"
 #endif
     "__(start|stop)___ksymtab(_gpl)?|"
+1 -1
drivers/android/binder_alloc.c
···
     BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);

     if (buffer->async_transaction) {
-        alloc->free_async_space += size + sizeof(struct binder_buffer);
+        alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);

         binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
                  "%d: binder_free_buf size %zd async free %zd\n",
+4 -1
drivers/auxdisplay/charlcd.c
···
     bool must_clear;

     /* contains the LCD config state */
-    unsigned long int flags;
+    unsigned long flags;

     /* Current escape sequence and it's length or -1 if outside */
     struct {
···
      * Since charlcd_init_display() needs to write data, we have to
      * enable mark the LCD initialized just before.
      */
+    if (WARN_ON(!lcd->ops->init_display))
+        return -EINVAL;
+
     ret = lcd->ops->init_display(lcd);
     if (ret)
         return ret;
+1 -1
drivers/base/power/main.c
···
     device_block_probing();

     mutex_lock(&dpm_list_mtx);
-    while (!list_empty(&dpm_list)) {
+    while (!list_empty(&dpm_list) && !error) {
         struct device *dev = to_device(dpm_list.next);

         get_device(dev);
+12 -3
drivers/block/xen-blkfront.c
···
     unsigned long flags;
     struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
     struct blkfront_info *info = rinfo->dev_info;
+    unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;

-    if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+    if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+        xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
         return IRQ_HANDLED;
+    }

     spin_lock_irqsave(&rinfo->ring_lock, flags);
 again:
···
     for (i = rinfo->ring.rsp_cons; i != rp; i++) {
         unsigned long id;
         unsigned int op;
+
+        eoiflag = 0;

         RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
         id = bret.id;
···

     spin_unlock_irqrestore(&rinfo->ring_lock, flags);

+    xen_irq_lateeoi(irq, eoiflag);
+
     return IRQ_HANDLED;

 err:
     info->connected = BLKIF_STATE_ERROR;

     spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+
+    /* No EOI in order to avoid further interrupts. */

     pr_alert("%s disabled for further use\n", info->gd->disk_name);
     return IRQ_HANDLED;
···
     if (err)
         goto fail;

-    err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
-                                    "blkif", rinfo);
+    err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
+                                            0, "blkif", rinfo);
     if (err <= 0) {
         xenbus_dev_fatal(dev, err,
                  "bind_evtchn_to_irqhandler failed");
+4 -4
drivers/bus/sunxi-rsb.c
···

 static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)
 {
-    /* Keep the clock and PM reference counts consistent. */
-    if (pm_runtime_status_suspended(rsb->dev))
-        pm_runtime_resume(rsb->dev);
     reset_control_assert(rsb->rstc);
-    clk_disable_unprepare(rsb->clk);
+
+    /* Keep the clock and PM reference counts consistent. */
+    if (!pm_runtime_status_suspended(rsb->dev))
+        clk_disable_unprepare(rsb->clk);
 }

 static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)
+14 -9
drivers/char/ipmi/ipmi_msghandler.c
···
      * with removing the device attributes while reading a device
      * attribute.
      */
-    schedule_work(&bmc->remove_work);
+    queue_work(remove_work_wq, &bmc->remove_work);
 }

 /*
···
     if (initialized)
         goto out;

-    init_srcu_struct(&ipmi_interfaces_srcu);
+    rv = init_srcu_struct(&ipmi_interfaces_srcu);
+    if (rv)
+        goto out;
+
+    remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
+    if (!remove_work_wq) {
+        pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
+        rv = -ENOMEM;
+        goto out_wq;
+    }

     timer_setup(&ipmi_timer, ipmi_timeout, 0);
     mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);

     atomic_notifier_chain_register(&panic_notifier_list, &panic_block);

-    remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
-    if (!remove_work_wq) {
-        pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
-        rv = -ENOMEM;
-        goto out;
-    }
-
     initialized = true;

+out_wq:
+    if (rv)
+        cleanup_srcu_struct(&ipmi_interfaces_srcu);
 out:
     mutex_unlock(&ipmi_interfaces_mutex);
     return rv;
+4 -3
drivers/char/ipmi/ipmi_ssif.c
···
         }
     }

+    ssif_info->client = client;
+    i2c_set_clientdata(client, ssif_info);
+
     rv = ssif_check_and_remove(client, ssif_info);
     /* If rv is 0 and addr source is not SI_ACPI, continue probing */
     if (!rv && ssif_info->addr_source == SI_ACPI) {
···
          "Trying %s-specified SSIF interface at i2c address 0x%x, adapter %s, slave address 0x%x\n",
          ipmi_addr_src_to_str(ssif_info->addr_source),
          client->addr, client->adapter->name, slave_addr);
-
-    ssif_info->client = client;
-    i2c_set_clientdata(client, ssif_info);

     /* Now check for system interface capabilities */
     msg[0] = IPMI_NETFN_APP_REQUEST << 2;
···

         dev_err(&ssif_info->client->dev,
             "Unable to start IPMI SSIF: %d\n", rv);
+        i2c_set_clientdata(client, NULL);
         kfree(ssif_info);
     }
     kfree(resp);
+7
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
··· 211 211 return adf_4xxx_fw_config[obj_num].ae_mask; 212 212 } 213 213 214 + static u32 get_vf2pf_sources(void __iomem *pmisc_addr) 215 + { 216 + /* For the moment do not report vf2pf sources */ 217 + return 0; 218 + } 219 + 214 220 void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data) 215 221 { 216 222 hw_data->dev_class = &adf_4xxx_class; ··· 260 254 hw_data->set_msix_rttable = set_msix_default_rttable; 261 255 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer; 262 256 hw_data->enable_pfvf_comms = pfvf_comms_disabled; 257 + hw_data->get_vf2pf_sources = get_vf2pf_sources; 263 258 hw_data->disable_iov = adf_disable_sriov; 264 259 hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION; 265 260
+9 -10
drivers/gpio/gpio-dln2.c
··· 46 46 struct dln2_gpio { 47 47 struct platform_device *pdev; 48 48 struct gpio_chip gpio; 49 + struct irq_chip irqchip; 49 50 50 51 /* 51 52 * Cache pin direction to save us one transfer, since the hardware has ··· 384 383 mutex_unlock(&dln2->irq_lock); 385 384 } 386 385 387 - static struct irq_chip dln2_gpio_irqchip = { 388 - .name = "dln2-irq", 389 - .irq_mask = dln2_irq_mask, 390 - .irq_unmask = dln2_irq_unmask, 391 - .irq_set_type = dln2_irq_set_type, 392 - .irq_bus_lock = dln2_irq_bus_lock, 393 - .irq_bus_sync_unlock = dln2_irq_bus_unlock, 394 - }; 395 - 396 386 static void dln2_gpio_event(struct platform_device *pdev, u16 echo, 397 387 const void *data, int len) 398 388 { ··· 465 473 dln2->gpio.direction_output = dln2_gpio_direction_output; 466 474 dln2->gpio.set_config = dln2_gpio_set_config; 467 475 476 + dln2->irqchip.name = "dln2-irq"; 477 + dln2->irqchip.irq_mask = dln2_irq_mask; 478 + dln2->irqchip.irq_unmask = dln2_irq_unmask; 479 + dln2->irqchip.irq_set_type = dln2_irq_set_type; 480 + dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock; 481 + dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock; 482 + 468 483 girq = &dln2->gpio.irq; 469 484 girq->chip = &dln2->irqchip; 470 485 /* The event comes from the outside so no parent handler */ 471 486 girq->parent_handler = NULL; 472 487 girq->num_parents = 0;
+1 -5
drivers/gpio/gpio-virtio.c
··· 100 100 virtqueue_kick(vgpio->request_vq); 101 101 mutex_unlock(&vgpio->lock); 102 102 103 - if (!wait_for_completion_timeout(&line->completion, HZ)) { 104 - dev_err(dev, "GPIO operation timed out\n"); 105 - ret = -ETIMEDOUT; 106 - goto out; 107 - } 103 + wait_for_completion(&line->completion); 108 104 109 105 if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) { 110 106 dev_err(dev, "GPIO request failed: %d\n", gpio);
+8 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 3166 3166 bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type) 3167 3167 { 3168 3168 switch (asic_type) { 3169 + #ifdef CONFIG_DRM_AMDGPU_SI 3170 + case CHIP_HAINAN: 3171 + #endif 3172 + case CHIP_TOPAZ: 3173 + /* chips with no display hardware */ 3174 + return false; 3169 3175 #if defined(CONFIG_DRM_AMD_DC) 3170 3176 case CHIP_TAHITI: 3171 3177 case CHIP_PITCAIRN: ··· 4467 4461 int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, 4468 4462 struct amdgpu_reset_context *reset_context) 4469 4463 { 4470 - int i, j, r = 0; 4464 + int i, r = 0; 4471 4465 struct amdgpu_job *job = NULL; 4472 4466 bool need_full_reset = 4473 4467 test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags); ··· 4489 4483 4490 4484 /*clear job fence from fence drv to avoid force_completion 4491 4485 *leave NULL and vm flush fence in fence drv */ 4492 - for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) { 4493 - struct dma_fence *old, **ptr; 4486 + amdgpu_fence_driver_clear_job_fences(ring); 4494 4487 4495 - ptr = &ring->fence_drv.fences[j]; 4496 - old = rcu_dereference_protected(*ptr, 1); 4497 - if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) { 4498 - RCU_INIT_POINTER(*ptr, NULL); 4499 - } 4500 - } 4501 4488 /* after all hw jobs are reset, hw fence is meaningless, so force_completion */ 4502 4489 amdgpu_fence_driver_force_completion(ring); 4503 4490 }
+54 -22
drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
··· 526 526 } 527 527 } 528 528 529 + union gc_info { 530 + struct gc_info_v1_0 v1; 531 + struct gc_info_v2_0 v2; 532 + }; 533 + 529 534 int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev) 530 535 { 531 536 struct binary_header *bhdr; 532 - struct gc_info_v1_0 *gc_info; 537 + union gc_info *gc_info; 533 538 534 539 if (!adev->mman.discovery_bin) { 535 540 DRM_ERROR("ip discovery uninitialized\n"); ··· 542 537 } 543 538 544 539 bhdr = (struct binary_header *)adev->mman.discovery_bin; 545 - gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin + 540 + gc_info = (union gc_info *)(adev->mman.discovery_bin + 546 541 le16_to_cpu(bhdr->table_list[GC].offset)); 547 - 548 - adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se); 549 - adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->gc_num_wgp0_per_sa) + 550 - le32_to_cpu(gc_info->gc_num_wgp1_per_sa)); 551 - adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->gc_num_sa_per_se); 552 - adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->gc_num_rb_per_se); 553 - adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->gc_num_gl2c); 554 - adev->gfx.config.max_gprs = le32_to_cpu(gc_info->gc_num_gprs); 555 - adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->gc_num_max_gs_thds); 556 - adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->gc_gs_table_depth); 557 - adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->gc_gsprim_buff_depth); 558 - adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->gc_double_offchip_lds_buffer); 559 - adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->gc_wave_size); 560 - adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->gc_max_waves_per_simd); 561 - adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->gc_max_scratch_slots_per_cu); 562 - adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->gc_lds_size); 563 - adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->gc_num_sc_per_se) / 564 - 
le32_to_cpu(gc_info->gc_num_sa_per_se); 565 - adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->gc_num_packer_per_sc); 566 - 542 + switch (gc_info->v1.header.version_major) { 543 + case 1: 544 + adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v1.gc_num_se); 545 + adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->v1.gc_num_wgp0_per_sa) + 546 + le32_to_cpu(gc_info->v1.gc_num_wgp1_per_sa)); 547 + adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v1.gc_num_sa_per_se); 548 + adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v1.gc_num_rb_per_se); 549 + adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v1.gc_num_gl2c); 550 + adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v1.gc_num_gprs); 551 + adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v1.gc_num_max_gs_thds); 552 + adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v1.gc_gs_table_depth); 553 + adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v1.gc_gsprim_buff_depth); 554 + adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v1.gc_double_offchip_lds_buffer); 555 + adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v1.gc_wave_size); 556 + adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v1.gc_max_waves_per_simd); 557 + adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v1.gc_max_scratch_slots_per_cu); 558 + adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v1.gc_lds_size); 559 + adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) / 560 + le32_to_cpu(gc_info->v1.gc_num_sa_per_se); 561 + adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc); 562 + break; 563 + case 2: 564 + adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v2.gc_num_se); 565 + adev->gfx.config.max_cu_per_sh = le32_to_cpu(gc_info->v2.gc_num_cu_per_sh); 566 + adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v2.gc_num_sh_per_se); 567 + 
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v2.gc_num_rb_per_se); 568 + adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v2.gc_num_tccs); 569 + adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v2.gc_num_gprs); 570 + adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v2.gc_num_max_gs_thds); 571 + adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v2.gc_gs_table_depth); 572 + adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v2.gc_gsprim_buff_depth); 573 + adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v2.gc_double_offchip_lds_buffer); 574 + adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v2.gc_wave_size); 575 + adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v2.gc_max_waves_per_simd); 576 + adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v2.gc_max_scratch_slots_per_cu); 577 + adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v2.gc_lds_size); 578 + adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) / 579 + le32_to_cpu(gc_info->v2.gc_num_sh_per_se); 580 + adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc); 581 + break; 582 + default: 583 + dev_err(adev->dev, 584 + "Unhandled GC info table %d.%d\n", 585 + gc_info->v1.header.version_major, 586 + gc_info->v1.header.version_minor); 587 + return -EINVAL; 588 + } 567 589 return 0; 568 590 } 569 591
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
··· 384 384 struct amdgpu_vm_bo_base *bo_base; 385 385 int r; 386 386 387 - if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM) 387 + if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM) 388 388 return; 389 389 390 390 r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
+23 -4
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 328 328 329 329 /** 330 330 * DOC: runpm (int) 331 - * Override for runtime power management control for dGPUs in PX/HG laptops. The amdgpu driver can dynamically power down 332 - * the dGPU on PX/HG laptops when it is idle. The default is -1 (auto enable). Setting the value to 0 disables this functionality. 331 + * Override for runtime power management control for dGPUs. The amdgpu driver can dynamically power down 332 + * the dGPUs when they are idle if supported. The default is -1 (auto enable). 333 + * Setting the value to 0 disables this functionality. 333 334 */ 334 - MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = PX only default)"); 335 + MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto)"); 335 336 module_param_named(runpm, amdgpu_runtime_pm, int, 0444); 336 337 337 338 /** ··· 2154 2153 adev->in_s3 = true; 2155 2154 r = amdgpu_device_suspend(drm_dev, true); 2156 2155 adev->in_s3 = false; 2157 - 2156 + if (r) 2157 + return r; 2158 + if (!adev->in_s0ix) 2159 + r = amdgpu_asic_reset(adev); 2158 2160 return r; 2159 2161 } 2160 2162 ··· 2238 2234 if (amdgpu_device_supports_px(drm_dev)) 2239 2235 drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING; 2240 2236 2237 + /* 2238 + * By setting mp1_state as PP_MP1_STATE_UNLOAD, MP1 will do some 2239 + * proper cleanups and put itself into a state ready for PNP. That 2240 + * can address some random resuming failure observed on BOCO capable 2241 + * platforms. 2242 + * TODO: this may be also needed for PX capable platform. 
2243 + */ 2244 + if (amdgpu_device_supports_boco(drm_dev)) 2245 + adev->mp1_state = PP_MP1_STATE_UNLOAD; 2246 + 2241 2247 ret = amdgpu_device_suspend(drm_dev, false); 2242 2248 if (ret) { 2243 2249 adev->in_runpm = false; 2250 + if (amdgpu_device_supports_boco(drm_dev)) 2251 + adev->mp1_state = PP_MP1_STATE_NONE; 2244 2252 return ret; 2245 2253 } 2254 + 2255 + if (amdgpu_device_supports_boco(drm_dev)) 2256 + adev->mp1_state = PP_MP1_STATE_NONE; 2246 2257 2247 2258 if (amdgpu_device_supports_px(drm_dev)) { 2248 2259 /* Only need to handle PCI state in the driver for ATPX
+87 -39
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
··· 77 77 * Cast helper 78 78 */ 79 79 static const struct dma_fence_ops amdgpu_fence_ops; 80 + static const struct dma_fence_ops amdgpu_job_fence_ops; 80 81 static inline struct amdgpu_fence *to_amdgpu_fence(struct dma_fence *f) 81 82 { 82 83 struct amdgpu_fence *__f = container_of(f, struct amdgpu_fence, base); 83 84 84 - if (__f->base.ops == &amdgpu_fence_ops) 85 + if (__f->base.ops == &amdgpu_fence_ops || 86 + __f->base.ops == &amdgpu_job_fence_ops) 85 87 return __f; 86 88 87 89 return NULL; ··· 160 158 } 161 159 162 160 seq = ++ring->fence_drv.sync_seq; 163 - if (job != NULL && job->job_run_counter) { 161 + if (job && job->job_run_counter) { 164 162 /* reinit seq for resubmitted jobs */ 165 163 fence->seqno = seq; 166 164 } else { 167 - dma_fence_init(fence, &amdgpu_fence_ops, 168 - &ring->fence_drv.lock, 169 - adev->fence_context + ring->idx, 170 - seq); 171 - } 172 - 173 - if (job != NULL) { 174 - /* mark this fence has a parent job */ 175 - set_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &fence->flags); 165 + if (job) 166 + dma_fence_init(fence, &amdgpu_job_fence_ops, 167 + &ring->fence_drv.lock, 168 + adev->fence_context + ring->idx, seq); 169 + else 170 + dma_fence_init(fence, &amdgpu_fence_ops, 171 + &ring->fence_drv.lock, 172 + adev->fence_context + ring->idx, seq); 176 173 } 177 174 178 175 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, ··· 622 621 } 623 622 624 623 /** 624 + * amdgpu_fence_driver_clear_job_fences - clear job embedded fences of ring 625 + * 626 + * @ring: fence of the ring to be cleared 627 + * 628 + */ 629 + void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring) 630 + { 631 + int i; 632 + struct dma_fence *old, **ptr; 633 + 634 + for (i = 0; i <= ring->fence_drv.num_fences_mask; i++) { 635 + ptr = &ring->fence_drv.fences[i]; 636 + old = rcu_dereference_protected(*ptr, 1); 637 + if (old && old->ops == &amdgpu_job_fence_ops) 638 + RCU_INIT_POINTER(*ptr, NULL); 639 + } 640 + } 641 + 642 + /** 625 643 * 
amdgpu_fence_driver_force_completion - force signal latest fence of ring 626 644 * ··· 663 643 664 644 static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f) 665 645 { 666 - struct amdgpu_ring *ring; 646 + return (const char *)to_amdgpu_fence(f)->ring->name; 647 + } 667 648 668 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 669 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 649 + static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f) 650 + { 651 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 670 652 671 - ring = to_amdgpu_ring(job->base.sched); 672 - } else { 673 - ring = to_amdgpu_fence(f)->ring; 674 - } 675 - return (const char *)ring->name; 653 + return (const char *)to_amdgpu_ring(job->base.sched)->name; 676 654 } 677 655 678 656 /** ··· 683 665 */ 684 666 static bool amdgpu_fence_enable_signaling(struct dma_fence *f) 685 667 { 686 - struct amdgpu_ring *ring; 668 + if (!timer_pending(&to_amdgpu_fence(f)->ring->fence_drv.fallback_timer)) 669 + amdgpu_fence_schedule_fallback(to_amdgpu_fence(f)->ring); 687 670 688 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 689 - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 671 + return true; 672 + } 690 673 691 - ring = to_amdgpu_ring(job->base.sched); 692 - } else { 693 - ring = to_amdgpu_fence(f)->ring; 694 - } 674 + /** 675 + * amdgpu_job_fence_enable_signaling - enable signalling on job fence 676 + * @f: fence 677 + * 678 + * This is similar to amdgpu_fence_enable_signaling above, but it 679 + * only handles the job embedded fence.
680 + */ 681 + static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f) 682 + { 683 + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence); 695 684 696 - if (!timer_pending(&ring->fence_drv.fallback_timer)) 697 - amdgpu_fence_schedule_fallback(ring); 685 + if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer)) 686 + amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched)); 698 687 699 688 return true; 700 689 } ··· 717 692 { 718 693 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 719 694 720 - if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) { 721 - /* free job if fence has a parent job */ 722 - struct amdgpu_job *job; 723 - 724 - job = container_of(f, struct amdgpu_job, hw_fence); 725 - kfree(job); 726 - } else { 727 695 /* free fence_slab if it's separated fence*/ 728 - struct amdgpu_fence *fence; 696 + kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f)); 697 + } 729 698 730 - fence = to_amdgpu_fence(f); 731 - kmem_cache_free(amdgpu_fence_slab, fence); 732 - } 699 + /** 700 + * amdgpu_job_fence_free - free up the job with embedded fence 701 + * 702 + * @rcu: RCU callback head 703 + * 704 + * Free up the job with embedded fence after the RCU grace period. 705 + */ 706 + static void amdgpu_job_fence_free(struct rcu_head *rcu) 707 + { 708 + struct dma_fence *f = container_of(rcu, struct dma_fence, rcu); 709 + 710 + /* free job if fence has a parent job */ 711 + kfree(container_of(f, struct amdgpu_job, hw_fence)); 733 712 } 734 713 735 714 /** ··· 749 720 call_rcu(&f->rcu, amdgpu_fence_free); 750 721 } 751 722 723 + /** 724 + * amdgpu_job_fence_release - callback that job embedded fence can be freed 725 + * 726 + * @f: fence 727 + * 728 + * This is similar to amdgpu_fence_release above, but it 729 + * only handles the job embedded fence. 
730 + */ 731 + static void amdgpu_job_fence_release(struct dma_fence *f) 732 + { 733 + call_rcu(&f->rcu, amdgpu_job_fence_free); 734 + } 735 + 752 736 static const struct dma_fence_ops amdgpu_fence_ops = { 753 737 .get_driver_name = amdgpu_fence_get_driver_name, 754 738 .get_timeline_name = amdgpu_fence_get_timeline_name, ··· 769 727 .release = amdgpu_fence_release, 770 728 }; 771 729 730 + static const struct dma_fence_ops amdgpu_job_fence_ops = { 731 + .get_driver_name = amdgpu_fence_get_driver_name, 732 + .get_timeline_name = amdgpu_job_fence_get_timeline_name, 733 + .enable_signaling = amdgpu_job_fence_enable_signaling, 734 + .release = amdgpu_job_fence_release, 735 + }; 772 736 773 737 /* 774 738 * Fence debugfs
+1 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
··· 53 53 #define AMDGPU_FENCE_FLAG_INT (1 << 1) 54 54 #define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2) 55 55 56 - /* fence flag bit to indicate the face is embedded in job*/ 57 - #define AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT (DMA_FENCE_FLAG_USER_BITS + 1) 58 - 59 56 #define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched) 60 57 61 58 #define AMDGPU_IB_POOL_SIZE (1024 * 1024) ··· 111 114 struct dma_fence **fences; 112 115 }; 113 116 117 + void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring); 114 118 void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring); 115 119 116 120 int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+7
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
··· 246 246 { 247 247 int r; 248 248 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 249 + bool idle_work_unexecuted; 250 + 251 + idle_work_unexecuted = cancel_delayed_work_sync(&adev->vcn.idle_work); 252 + if (idle_work_unexecuted) { 253 + if (adev->pm.dpm_enabled) 254 + amdgpu_dpm_enable_uvd(adev, false); 255 + } 249 256 250 257 r = vcn_v1_0_hw_fini(adev); 251 258 if (r)
+1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
··· 158 158 union display_idle_optimization_u idle_info = { 0 }; 159 159 idle_info.idle_info.df_request_disabled = 1; 160 160 idle_info.idle_info.phy_ref_clk_off = 1; 161 + idle_info.idle_info.s0i2_rdy = 1; 161 162 dcn31_smu_set_display_idle_optimization(clk_mgr, idle_info.data); 162 163 /* update power state */ 163 164 clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER;
+1 -4
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 3945 3945 config.dig_be = pipe_ctx->stream->link->link_enc_hw_inst; 3946 3946 #if defined(CONFIG_DRM_AMD_DC_DCN) 3947 3947 config.stream_enc_idx = pipe_ctx->stream_res.stream_enc->id - ENGINE_ID_DIGA; 3948 - 3948 + 3949 3949 if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY || 3950 3950 pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) { 3951 - link_enc = pipe_ctx->stream->link->link_enc; 3952 - config.dio_output_type = pipe_ctx->stream->link->ep_type; 3953 - config.dio_output_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A; 3954 3951 if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY) 3955 3952 link_enc = pipe_ctx->stream->link->link_enc; 3956 3953 else if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
+1
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
··· 78 78 .get_clock = dcn10_get_clock, 79 79 .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, 80 80 .calc_vupdate_position = dcn10_calc_vupdate_position, 81 + .power_down = dce110_power_down, 81 82 .set_backlight_level = dce110_set_backlight_level, 82 83 .set_abm_immediate_disable = dce110_set_abm_immediate_disable, 83 84 .set_pipe = dce110_set_pipe,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 1069 1069 .timing_trace = false, 1070 1070 .clock_trace = true, 1071 1071 .disable_pplib_clock_request = true, 1072 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 1072 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 1073 1073 .force_single_disp_pipe_split = false, 1074 1074 .disable_dcc = DCC_ENABLE, 1075 1075 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn201/dcn201_resource.c
··· 603 603 .timing_trace = false, 604 604 .clock_trace = true, 605 605 .disable_pplib_clock_request = true, 606 - .pipe_split_policy = MPC_SPLIT_AVOID, 606 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 607 607 .force_single_disp_pipe_split = false, 608 608 .disable_dcc = DCC_ENABLE, 609 609 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
··· 874 874 .clock_trace = true, 875 875 .disable_pplib_clock_request = true, 876 876 .min_disp_clk_khz = 100000, 877 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 877 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 878 878 .force_single_disp_pipe_split = false, 879 879 .disable_dcc = DCC_ENABLE, 880 880 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_resource.c
··· 840 840 .timing_trace = false, 841 841 .clock_trace = true, 842 842 .disable_pplib_clock_request = true, 843 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 843 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 844 844 .force_single_disp_pipe_split = false, 845 845 .disable_dcc = DCC_ENABLE, 846 846 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
··· 686 686 .disable_clock_gate = true, 687 687 .disable_pplib_clock_request = true, 688 688 .disable_pplib_wm_range = true, 689 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 689 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 690 690 .force_single_disp_pipe_split = false, 691 691 .disable_dcc = DCC_ENABLE, 692 692 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn302/dcn302_resource.c
··· 211 211 .timing_trace = false, 212 212 .clock_trace = true, 213 213 .disable_pplib_clock_request = true, 214 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 214 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 215 215 .force_single_disp_pipe_split = false, 216 216 .disable_dcc = DCC_ENABLE, 217 217 .vsr_support = true,
+1 -1
drivers/gpu/drm/amd/display/dc/dcn303/dcn303_resource.c
··· 193 193 .timing_trace = false, 194 194 .clock_trace = true, 195 195 .disable_pplib_clock_request = true, 196 - .pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP, 196 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 197 197 .force_single_disp_pipe_split = false, 198 198 .disable_dcc = DCC_ENABLE, 199 199 .vsr_support = true,
+1
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_init.c
··· 101 101 .z10_restore = dcn31_z10_restore, 102 102 .z10_save_init = dcn31_z10_save_init, 103 103 .set_disp_pattern_generator = dcn30_set_disp_pattern_generator, 104 + .optimize_pwr_state = dcn21_optimize_pwr_state, 104 105 .exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state, 105 106 .update_visual_confirm_color = dcn20_update_visual_confirm_color, 106 107 };
+24 -3
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.c
··· 355 355 clk_src_regs(3, D), 356 356 clk_src_regs(4, E) 357 357 }; 358 + /*pll_id being rempped in dmub, in driver it is logical instance*/ 359 + static const struct dce110_clk_src_regs clk_src_regs_b0[] = { 360 + clk_src_regs(0, A), 361 + clk_src_regs(1, B), 362 + clk_src_regs(2, F), 363 + clk_src_regs(3, G), 364 + clk_src_regs(4, E) 365 + }; 358 366 359 367 static const struct dce110_clk_src_shift cs_shift = { 360 368 CS_COMMON_MASK_SH_LIST_DCN2_0(__SHIFT) ··· 1002 994 .timing_trace = false, 1003 995 .clock_trace = true, 1004 996 .disable_pplib_clock_request = false, 1005 - .pipe_split_policy = MPC_SPLIT_AVOID, 997 + .pipe_split_policy = MPC_SPLIT_DYNAMIC, 1006 998 .force_single_disp_pipe_split = false, 1007 999 .disable_dcc = DCC_ENABLE, 1008 1000 .vsr_support = true, ··· 2284 2276 dcn30_clock_source_create(ctx, ctx->dc_bios, 2285 2277 CLOCK_SOURCE_COMBO_PHY_PLL1, 2286 2278 &clk_src_regs[1], false); 2287 - pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2279 + /*move phypllx_pixclk_resync to dmub next*/ 2280 + if (dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) { 2281 + pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2282 + dcn30_clock_source_create(ctx, ctx->dc_bios, 2283 + CLOCK_SOURCE_COMBO_PHY_PLL2, 2284 + &clk_src_regs_b0[2], false); 2285 + pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2286 + dcn30_clock_source_create(ctx, ctx->dc_bios, 2287 + CLOCK_SOURCE_COMBO_PHY_PLL3, 2288 + &clk_src_regs_b0[3], false); 2289 + } else { 2290 + pool->base.clock_sources[DCN31_CLK_SRC_PLL2] = 2288 2291 dcn30_clock_source_create(ctx, ctx->dc_bios, 2289 2292 CLOCK_SOURCE_COMBO_PHY_PLL2, 2290 2293 &clk_src_regs[2], false); 2291 - pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2294 + pool->base.clock_sources[DCN31_CLK_SRC_PLL3] = 2292 2295 dcn30_clock_source_create(ctx, ctx->dc_bios, 2293 2296 CLOCK_SOURCE_COMBO_PHY_PLL3, 2294 2297 &clk_src_regs[3], false); 2298 + } 2299 + 2295 2300 pool->base.clock_sources[DCN31_CLK_SRC_PLL4] = 2296 2301 dcn30_clock_source_create(ctx, 
ctx->dc_bios, 2297 2302 CLOCK_SOURCE_COMBO_PHY_PLL4,
+31
drivers/gpu/drm/amd/display/dc/dcn31/dcn31_resource.h
··· 49 49 const struct dc_init_data *init_data, 50 50 struct dc *dc); 51 51 52 + /*temp: B0 specific before switch to dcn313 headers*/ 53 + #ifndef regPHYPLLF_PIXCLK_RESYNC_CNTL 54 + #define regPHYPLLF_PIXCLK_RESYNC_CNTL 0x007e 55 + #define regPHYPLLF_PIXCLK_RESYNC_CNTL_BASE_IDX 1 56 + #define regPHYPLLG_PIXCLK_RESYNC_CNTL 0x005f 57 + #define regPHYPLLG_PIXCLK_RESYNC_CNTL_BASE_IDX 1 58 + 59 + //PHYPLLF_PIXCLK_RESYNC_CNTL 60 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE__SHIFT 0x0 61 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1 62 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4 63 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE__SHIFT 0x8 64 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9 65 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L 66 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L 67 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L 68 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE_MASK 0x00000100L 69 + #define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L 70 + 71 + //PHYPLLG_PIXCLK_RESYNC_CNTL 72 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE__SHIFT 0x0 73 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1 74 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4 75 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE__SHIFT 0x8 76 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9 77 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L 78 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L 79 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L 80 
+ #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE_MASK 0x00000100L 81 + #define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L 82 + #endif 52 83 #endif /* _DCN31_RESOURCE_H_ */
+49
drivers/gpu/drm/amd/include/discovery.h
··· 143 143 uint32_t gc_num_gl2a; 144 144 }; 145 145 146 + struct gc_info_v1_1 { 147 + struct gpu_info_header header; 148 + 149 + uint32_t gc_num_se; 150 + uint32_t gc_num_wgp0_per_sa; 151 + uint32_t gc_num_wgp1_per_sa; 152 + uint32_t gc_num_rb_per_se; 153 + uint32_t gc_num_gl2c; 154 + uint32_t gc_num_gprs; 155 + uint32_t gc_num_max_gs_thds; 156 + uint32_t gc_gs_table_depth; 157 + uint32_t gc_gsprim_buff_depth; 158 + uint32_t gc_parameter_cache_depth; 159 + uint32_t gc_double_offchip_lds_buffer; 160 + uint32_t gc_wave_size; 161 + uint32_t gc_max_waves_per_simd; 162 + uint32_t gc_max_scratch_slots_per_cu; 163 + uint32_t gc_lds_size; 164 + uint32_t gc_num_sc_per_se; 165 + uint32_t gc_num_sa_per_se; 166 + uint32_t gc_num_packer_per_sc; 167 + uint32_t gc_num_gl2a; 168 + uint32_t gc_num_tcp_per_sa; 169 + uint32_t gc_num_sdp_interface; 170 + uint32_t gc_num_tcps; 171 + }; 172 + 173 + struct gc_info_v2_0 { 174 + struct gpu_info_header header; 175 + 176 + uint32_t gc_num_se; 177 + uint32_t gc_num_cu_per_sh; 178 + uint32_t gc_num_sh_per_se; 179 + uint32_t gc_num_rb_per_se; 180 + uint32_t gc_num_tccs; 181 + uint32_t gc_num_gprs; 182 + uint32_t gc_num_max_gs_thds; 183 + uint32_t gc_gs_table_depth; 184 + uint32_t gc_gsprim_buff_depth; 185 + uint32_t gc_parameter_cache_depth; 186 + uint32_t gc_double_offchip_lds_buffer; 187 + uint32_t gc_wave_size; 188 + uint32_t gc_max_waves_per_simd; 189 + uint32_t gc_max_scratch_slots_per_cu; 190 + uint32_t gc_lds_size; 191 + uint32_t gc_num_sc_per_se; 192 + uint32_t gc_num_packer_per_sc; 193 + }; 194 + 146 195 typedef struct harvest_info_header { 147 196 uint32_t signature; /* Table Signature */ 148 197 uint32_t version; /* Table Version */
+2 -5
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1568 1568 1569 1569 smu->watermarks_bitmap &= ~(WATERMARKS_LOADED); 1570 1570 1571 - /* skip CGPG when in S0ix */ 1572 - if (smu->is_apu && !adev->in_s0ix) 1573 - smu_set_gfx_cgpg(&adev->smu, false); 1571 + smu_set_gfx_cgpg(&adev->smu, false); 1574 1572 1575 1573 return 0; 1576 1574 } ··· 1599 1601 return ret; 1600 1602 } 1601 1603 1602 - if (smu->is_apu) 1603 - smu_set_gfx_cgpg(&adev->smu, true); 1604 + smu_set_gfx_cgpg(&adev->smu, true); 1604 1605 1605 1606 smu->disable_uclk_switch = 0; 1606 1607
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu12/smu_v12_0.c
··· 120 120 121 121 int smu_v12_0_set_gfx_cgpg(struct smu_context *smu, bool enable) 122 122 { 123 - if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) 123 + /* SMU12 is only implemented for the Renoir series so far, so no APU check is needed here. */ 124 + if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) || smu->adev->in_s0ix) 124 125 return 0; 125 126 126 127 return smu_cmn_send_smc_msg_with_param(smu,
+1 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
··· 1621 1621 { 1622 1622 return smu_cmn_send_smc_msg_with_param(smu, 1623 1623 SMU_MSG_GmiPwrDnControl, 1624 - en ? 1 : 0, 1624 + en ? 0 : 1, 1625 1625 NULL); 1626 1626 } 1627 1627
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 564 564 container_of_user(base, typeof(*ext), base); 565 565 const struct set_proto_ctx_engines *set = data; 566 566 struct drm_i915_private *i915 = set->i915; 567 + struct i915_engine_class_instance prev_engine; 567 568 u64 flags; 568 569 int err = 0, n, i, j; 569 570 u16 slot, width, num_siblings; ··· 630 629 /* Create contexts / engines */ 631 630 for (i = 0; i < width; ++i) { 632 631 intel_engine_mask_t current_mask = 0; 633 - struct i915_engine_class_instance prev_engine; 634 632 635 633 for (j = 0; j < num_siblings; ++j) { 636 634 struct i915_engine_class_instance ci;
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 3017 3017 fence_array = dma_fence_array_create(eb->num_batches, 3018 3018 fences, 3019 3019 eb->context->parallel.fence_context, 3020 - eb->context->parallel.seqno, 3020 + eb->context->parallel.seqno++, 3021 3021 false); 3022 3022 if (!fence_array) { 3023 3023 kfree(fences);
+3 -3
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
··· 1662 1662 GEM_BUG_ON(intel_context_is_parent(cn)); 1663 1663 1664 1664 list_del_init(&cn->guc_id.link); 1665 - ce->guc_id = cn->guc_id; 1665 + ce->guc_id.id = cn->guc_id.id; 1666 1666 1667 - spin_lock(&ce->guc_state.lock); 1667 + spin_lock(&cn->guc_state.lock); 1668 1668 clr_context_registered(cn); 1669 - spin_unlock(&ce->guc_state.lock); 1669 + spin_unlock(&cn->guc_state.lock); 1670 1670 1671 1671 set_context_guc_id_invalid(cn); 1672 1672
+7 -5
drivers/gpu/drm/mediatek/mtk_hdmi.c
··· 1224 1224 return MODE_BAD; 1225 1225 } 1226 1226 1227 - if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode)) 1228 - return MODE_BAD; 1227 + if (hdmi->conf) { 1228 + if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode)) 1229 + return MODE_BAD; 1229 1230 1230 - if (hdmi->conf->max_mode_clock && 1231 - mode->clock > hdmi->conf->max_mode_clock) 1232 - return MODE_CLOCK_HIGH; 1231 + if (hdmi->conf->max_mode_clock && 1232 + mode->clock > hdmi->conf->max_mode_clock) 1233 + return MODE_CLOCK_HIGH; 1234 + } 1233 1235 1234 1236 if (mode->clock < 27000) 1235 1237 return MODE_CLOCK_LOW;
+28 -26
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 353 353 354 354 if (ret) 355 355 return ret; 356 + 357 + fobj = NULL; 358 + } else { 359 + fobj = dma_resv_shared_list(resv); 356 360 } 357 361 358 - fobj = dma_resv_shared_list(resv); 359 - fence = dma_resv_excl_fence(resv); 360 - 361 - if (fence) { 362 - struct nouveau_channel *prev = NULL; 363 - bool must_wait = true; 364 - 365 - f = nouveau_local_fence(fence, chan->drm); 366 - if (f) { 367 - rcu_read_lock(); 368 - prev = rcu_dereference(f->channel); 369 - if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0)) 370 - must_wait = false; 371 - rcu_read_unlock(); 372 - } 373 - 374 - if (must_wait) 375 - ret = dma_fence_wait(fence, intr); 376 - 377 - return ret; 378 - } 379 - 380 - if (!exclusive || !fobj) 381 - return ret; 382 - 383 - for (i = 0; i < fobj->shared_count && !ret; ++i) { 362 + /* Waiting for the exclusive fence first causes performance regressions 363 + * under some circumstances. So manually wait for the shared ones first. 364 + */ 365 + for (i = 0; i < (fobj ? fobj->shared_count : 0) && !ret; ++i) { 384 366 struct nouveau_channel *prev = NULL; 385 367 bool must_wait = true; 386 368 ··· 380 398 381 399 if (must_wait) 382 400 ret = dma_fence_wait(fence, intr); 401 + } 402 + 403 + fence = dma_resv_excl_fence(resv); 404 + if (fence) { 405 + struct nouveau_channel *prev = NULL; 406 + bool must_wait = true; 407 + 408 + f = nouveau_local_fence(fence, chan->drm); 409 + if (f) { 410 + rcu_read_lock(); 411 + prev = rcu_dereference(f->channel); 412 + if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0)) 413 + must_wait = false; 414 + rcu_read_unlock(); 415 + } 416 + 417 + if (must_wait) 418 + ret = dma_fence_wait(fence, intr); 419 + 420 + return ret; 383 421 } 384 422 385 423 return ret;
+15
drivers/hid/hid-holtek-mouse.c
··· 65 65 static int holtek_mouse_probe(struct hid_device *hdev, 66 66 const struct hid_device_id *id) 67 67 { 68 + int ret; 69 + 68 70 if (!hid_is_usb(hdev)) 69 71 return -EINVAL; 72 + 73 + ret = hid_parse(hdev); 74 + if (ret) { 75 + hid_err(hdev, "hid parse failed: %d\n", ret); 76 + return ret; 77 + } 78 + 79 + ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT); 80 + if (ret) { 81 + hid_err(hdev, "hw start failed: %d\n", ret); 82 + return ret; 83 + } 84 + 70 85 return 0; 71 86 } 72 87
+3
drivers/hid/hid-vivaldi.c
··· 57 57 int ret; 58 58 59 59 drvdata = devm_kzalloc(&hdev->dev, sizeof(*drvdata), GFP_KERNEL); 60 + if (!drvdata) 61 + return -ENOMEM; 62 + 60 63 hid_set_drvdata(hdev, drvdata); 61 64 62 65 ret = hid_parse(hdev);
+62 -44
drivers/hwmon/lm90.c
··· 35 35 * explicitly as max6659, or if its address is not 0x4c. 
 36 36 * These chips lack the remote temperature offset feature. 
 37 37 * 
 38 - * This driver also supports the MAX6654 chip made by Maxim. This chip can 
 39 - * be at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is 
 40 - * otherwise similar to MAX6657/MAX6658/MAX6659. Extended range is available 
 41 - * by setting the configuration register accordingly, and is done during 
 42 - * initialization. Extended precision is only available at conversion rates 
 43 - * of 1 Hz and slower. Note that extended precision is not enabled by 
 44 - * default, as this driver initializes all chips to 2 Hz by design. 
 38 + * This driver also supports the MAX6654 chip made by Maxim. This chip can be 
 39 + * at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is similar 
 40 + * to MAX6657/MAX6658/MAX6659, but does not support critical temperature 
 41 + * limits. Extended range is available by setting the configuration register 
 42 + * accordingly, and is done during initialization. Extended precision is only 
 43 + * available at conversion rates of 1 Hz and slower. Note that extended 
 44 + * precision is not enabled by default, as this driver initializes all chips 
 45 + * to 2 Hz by design. 
 45 46 * 
 46 47 * This driver also supports the MAX6646, MAX6647, MAX6648, MAX6649 and 
 47 48 * MAX6692 chips made by Maxim.
These are again similar to the LM86, ··· 189 188 #define LM90_HAVE_BROKEN_ALERT (1 << 7) /* Broken alert */ 190 189 #define LM90_HAVE_EXTENDED_TEMP (1 << 8) /* extended temperature support*/ 191 190 #define LM90_PAUSE_FOR_CONFIG (1 << 9) /* Pause conversion for config */ 191 + #define LM90_HAVE_CRIT (1 << 10)/* Chip supports CRIT/OVERT register */ 192 + #define LM90_HAVE_CRIT_ALRM_SWP (1 << 11)/* critical alarm bits swapped */ 192 193 193 194 /* LM90 status */ 194 195 #define LM90_STATUS_LTHRM (1 << 0) /* local THERM limit tripped */ ··· 200 197 #define LM90_STATUS_RHIGH (1 << 4) /* remote high temp limit tripped */ 201 198 #define LM90_STATUS_LLOW (1 << 5) /* local low temp limit tripped */ 202 199 #define LM90_STATUS_LHIGH (1 << 6) /* local high temp limit tripped */ 200 + #define LM90_STATUS_BUSY (1 << 7) /* conversion is ongoing */ 203 201 204 202 #define MAX6696_STATUS2_R2THRM (1 << 1) /* remote2 THERM limit tripped */ 205 203 #define MAX6696_STATUS2_R2OPEN (1 << 2) /* remote2 is an open circuit */ ··· 358 354 static const struct lm90_params lm90_params[] = { 359 355 [adm1032] = { 360 356 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 361 - | LM90_HAVE_BROKEN_ALERT, 357 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT, 362 358 .alert_alarms = 0x7c, 363 359 .max_convrate = 10, 364 360 }, 365 361 [adt7461] = { 366 362 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 367 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 363 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP 364 + | LM90_HAVE_CRIT, 368 365 .alert_alarms = 0x7c, 369 366 .max_convrate = 10, 370 367 }, 371 368 [g781] = { 372 369 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 373 - | LM90_HAVE_BROKEN_ALERT, 370 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT, 374 371 .alert_alarms = 0x7c, 375 372 .max_convrate = 8, 376 373 }, 377 374 [lm86] = { 378 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 375 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 376 + | LM90_HAVE_CRIT, 379 377 
.alert_alarms = 0x7b, 380 378 .max_convrate = 9, 381 379 }, 382 380 [lm90] = { 383 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 381 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 382 + | LM90_HAVE_CRIT, 384 383 .alert_alarms = 0x7b, 385 384 .max_convrate = 9, 386 385 }, 387 386 [lm99] = { 388 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 387 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 388 + | LM90_HAVE_CRIT, 389 389 .alert_alarms = 0x7b, 390 390 .max_convrate = 9, 391 391 }, 392 392 [max6646] = { 393 + .flags = LM90_HAVE_CRIT, 393 394 .alert_alarms = 0x7c, 394 395 .max_convrate = 6, 395 396 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, ··· 405 396 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 406 397 }, 407 398 [max6657] = { 408 - .flags = LM90_PAUSE_FOR_CONFIG, 399 + .flags = LM90_PAUSE_FOR_CONFIG | LM90_HAVE_CRIT, 409 400 .alert_alarms = 0x7c, 410 401 .max_convrate = 8, 411 402 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 412 403 }, 413 404 [max6659] = { 414 - .flags = LM90_HAVE_EMERGENCY, 405 + .flags = LM90_HAVE_EMERGENCY | LM90_HAVE_CRIT, 415 406 .alert_alarms = 0x7c, 416 407 .max_convrate = 8, 417 408 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 418 409 }, 419 410 [max6680] = { 420 - .flags = LM90_HAVE_OFFSET, 411 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT 412 + | LM90_HAVE_CRIT_ALRM_SWP, 421 413 .alert_alarms = 0x7c, 422 414 .max_convrate = 7, 423 415 }, 424 416 [max6696] = { 425 417 .flags = LM90_HAVE_EMERGENCY 426 - | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3, 418 + | LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3 | LM90_HAVE_CRIT, 427 419 .alert_alarms = 0x1c7c, 428 420 .max_convrate = 6, 429 421 .reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL, 430 422 }, 431 423 [w83l771] = { 432 - .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT, 424 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT, 433 425 .alert_alarms = 0x7c, 434 426 .max_convrate = 8, 435 427 }, 436 428 [sa56004] = { 437 - .flags = LM90_HAVE_OFFSET 
| LM90_HAVE_REM_LIMIT_EXT, 429 + .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT, 438 430 .alert_alarms = 0x7b, 439 431 .max_convrate = 9, 440 432 .reg_local_ext = SA56004_REG_R_LOCAL_TEMPL, 441 433 }, 442 434 [tmp451] = { 443 435 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 444 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 436 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT, 445 437 .alert_alarms = 0x7c, 446 438 .max_convrate = 9, 447 439 .reg_local_ext = TMP451_REG_R_LOCAL_TEMPL, 448 440 }, 449 441 [tmp461] = { 450 442 .flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT 451 - | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP, 443 + | LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT, 452 444 .alert_alarms = 0x7c, 453 445 .max_convrate = 9, 454 446 .reg_local_ext = TMP451_REG_R_LOCAL_TEMPL, ··· 678 668 struct i2c_client *client = data->client; 679 669 int val; 680 670 681 - val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT); 682 - if (val < 0) 683 - return val; 684 - data->temp8[LOCAL_CRIT] = val; 671 + if (data->flags & LM90_HAVE_CRIT) { 672 + val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT); 673 + if (val < 0) 674 + return val; 675 + data->temp8[LOCAL_CRIT] = val; 685 676 686 - val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT); 687 - if (val < 0) 688 - return val; 689 - data->temp8[REMOTE_CRIT] = val; 677 + val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT); 678 + if (val < 0) 679 + return val; 680 + data->temp8[REMOTE_CRIT] = val; 690 681 691 - val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST); 692 - if (val < 0) 693 - return val; 694 - data->temp_hyst = val; 682 + val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST); 683 + if (val < 0) 684 + return val; 685 + data->temp_hyst = val; 686 + } 695 687 696 688 val = lm90_read_reg(client, LM90_REG_R_REMOTE_LOWH); 697 689 if (val < 0) ··· 821 809 val = lm90_read_reg(client, LM90_REG_R_STATUS); 822 810 if (val < 0) 823 811 return val; 824 - 
data->alarms = val; /* lower 8 bit of alarms */ 812 + data->alarms = val & ~LM90_STATUS_BUSY; 825 813 826 814 if (data->kind == max6696) { 827 815 val = lm90_select_remote_channel(data, 1); ··· 1172 1160 else 1173 1161 temp = temp_from_s8(data->temp8[LOCAL_CRIT]); 1174 1162 1175 - /* prevent integer underflow */ 1176 - val = max(val, -128000l); 1163 + /* prevent integer overflow/underflow */ 1164 + val = clamp_val(val, -128000l, 255000l); 1177 1165 1178 1166 data->temp_hyst = hyst_to_reg(temp - val); 1179 1167 err = i2c_smbus_write_byte_data(client, LM90_REG_W_TCRIT_HYST, ··· 1204 1192 static const u8 lm90_min_alarm_bits[3] = { 5, 3, 11 }; 1205 1193 static const u8 lm90_max_alarm_bits[3] = { 6, 4, 12 }; 1206 1194 static const u8 lm90_crit_alarm_bits[3] = { 0, 1, 9 }; 1195 + static const u8 lm90_crit_alarm_bits_swapped[3] = { 1, 0, 9 }; 1207 1196 static const u8 lm90_emergency_alarm_bits[3] = { 15, 13, 14 }; 1208 1197 static const u8 lm90_fault_bits[3] = { 0, 2, 10 }; 1209 1198 ··· 1230 1217 *val = (data->alarms >> lm90_max_alarm_bits[channel]) & 1; 1231 1218 break; 1232 1219 case hwmon_temp_crit_alarm: 1233 - *val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1; 1220 + if (data->flags & LM90_HAVE_CRIT_ALRM_SWP) 1221 + *val = (data->alarms >> lm90_crit_alarm_bits_swapped[channel]) & 1; 1222 + else 1223 + *val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1; 1234 1224 break; 1235 1225 case hwmon_temp_emergency_alarm: 1236 1226 *val = (data->alarms >> lm90_emergency_alarm_bits[channel]) & 1; ··· 1481 1465 if (man_id < 0 || chip_id < 0 || config1 < 0 || convrate < 0) 1482 1466 return -ENODEV; 1483 1467 1484 - if (man_id == 0x01 || man_id == 0x5C || man_id == 0x41) { 1468 + if (man_id == 0x01 || man_id == 0x5C || man_id == 0xA1) { 1485 1469 config2 = i2c_smbus_read_byte_data(client, LM90_REG_R_CONFIG2); 1486 1470 if (config2 < 0) 1487 1471 return -ENODEV; 1488 - } else 1489 - config2 = 0; /* Make compiler happy */ 1472 + } 1490 1473 1491 1474 if ((address == 
0x4C || address == 0x4D) 1492 1475 && man_id == 0x01) { /* National Semiconductor */ ··· 1918 1903 info->config = data->channel_config; 1919 1904 1920 1905 data->channel_config[0] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX | 1921 - HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM | 1922 - HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM; 1906 + HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM; 1923 1907 data->channel_config[1] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX | 1924 - HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM | 1925 - HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM | HWMON_T_FAULT; 1908 + HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM | HWMON_T_FAULT; 1909 + 1910 + if (data->flags & LM90_HAVE_CRIT) { 1911 + data->channel_config[0] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST; 1912 + data->channel_config[1] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST; 1913 + } 1926 1914 1927 1915 if (data->flags & LM90_HAVE_OFFSET) 1928 1916 data->channel_config[1] |= HWMON_T_OFFSET;
+3
drivers/i2c/i2c-dev.c
··· 535 535 sizeof(rdwr_arg))) 536 536 return -EFAULT; 537 537 538 + if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0) 539 + return -EINVAL; 540 + 538 541 if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS) 539 542 return -EINVAL; 540 543
+57 -7
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 1594 1594 { 1595 1595 struct hns_roce_cmq_desc desc; 1596 1596 struct hns_roce_cmq_req *req = (struct hns_roce_cmq_req *)desc.data; 1597 + u32 clock_cycles_of_1us; 1597 1598 1598 1599 hns_roce_cmq_setup_basic_desc(&desc, HNS_ROCE_OPC_CFG_GLOBAL_PARAM, 1599 1600 false); 1600 1601 1601 - hr_reg_write(req, CFG_GLOBAL_PARAM_1US_CYCLES, 0x3e8); 1602 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) 1603 + clock_cycles_of_1us = HNS_ROCE_1NS_CFG; 1604 + else 1605 + clock_cycles_of_1us = HNS_ROCE_1US_CFG; 1606 + 1607 + hr_reg_write(req, CFG_GLOBAL_PARAM_1US_CYCLES, clock_cycles_of_1us); 1602 1608 hr_reg_write(req, CFG_GLOBAL_PARAM_UDP_PORT, ROCE_V2_UDP_DPORT); 1603 1609 1604 1610 return hns_roce_cmq_send(hr_dev, &desc, 1); ··· 4808 4802 return ret; 4809 4803 } 4810 4804 4805 + static bool check_qp_timeout_cfg_range(struct hns_roce_dev *hr_dev, u8 *timeout) 4806 + { 4807 + #define QP_ACK_TIMEOUT_MAX_HIP08 20 4808 + #define QP_ACK_TIMEOUT_OFFSET 10 4809 + #define QP_ACK_TIMEOUT_MAX 31 4810 + 4811 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 4812 + if (*timeout > QP_ACK_TIMEOUT_MAX_HIP08) { 4813 + ibdev_warn(&hr_dev->ib_dev, 4814 + "Local ACK timeout shall be 0 to 20.\n"); 4815 + return false; 4816 + } 4817 + *timeout += QP_ACK_TIMEOUT_OFFSET; 4818 + } else if (hr_dev->pci_dev->revision > PCI_REVISION_ID_HIP08) { 4819 + if (*timeout > QP_ACK_TIMEOUT_MAX) { 4820 + ibdev_warn(&hr_dev->ib_dev, 4821 + "Local ACK timeout shall be 0 to 31.\n"); 4822 + return false; 4823 + } 4824 + } 4825 + 4826 + return true; 4827 + } 4828 + 4811 4829 static int hns_roce_v2_set_opt_fields(struct ib_qp *ibqp, 4812 4830 const struct ib_qp_attr *attr, 4813 4831 int attr_mask, ··· 4841 4811 struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device); 4842 4812 struct hns_roce_qp *hr_qp = to_hr_qp(ibqp); 4843 4813 int ret = 0; 4814 + u8 timeout; 4844 4815 4845 4816 if (attr_mask & IB_QP_AV) { 4846 4817 ret = hns_roce_v2_set_path(ibqp, attr, attr_mask, context, ··· 4851 4820 } 4852 
4821 4853 4822 if (attr_mask & IB_QP_TIMEOUT) { 4854 - if (attr->timeout < 31) { 4855 - hr_reg_write(context, QPC_AT, attr->timeout); 4823 + timeout = attr->timeout; 4824 + if (check_qp_timeout_cfg_range(hr_dev, &timeout)) { 4825 + hr_reg_write(context, QPC_AT, timeout); 4856 4826 hr_reg_clear(qpc_mask, QPC_AT); 4857 - } else { 4858 - ibdev_warn(&hr_dev->ib_dev, 4859 - "Local ACK timeout shall be 0 to 30.\n"); 4860 4827 } 4861 4828 } 4862 4829 ··· 4911 4882 set_access_flags(hr_qp, context, qpc_mask, attr, attr_mask); 4912 4883 4913 4884 if (attr_mask & IB_QP_MIN_RNR_TIMER) { 4914 - hr_reg_write(context, QPC_MIN_RNR_TIME, attr->min_rnr_timer); 4885 + hr_reg_write(context, QPC_MIN_RNR_TIME, 4886 + hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 ? 4887 + HNS_ROCE_RNR_TIMER_10NS : attr->min_rnr_timer); 4915 4888 hr_reg_clear(qpc_mask, QPC_MIN_RNR_TIME); 4916 4889 } 4917 4890 ··· 5530 5499 5531 5500 hr_reg_write(cq_context, CQC_CQ_MAX_CNT, cq_count); 5532 5501 hr_reg_clear(cqc_mask, CQC_CQ_MAX_CNT); 5502 + 5503 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 5504 + if (cq_period * HNS_ROCE_CLOCK_ADJUST > USHRT_MAX) { 5505 + dev_info(hr_dev->dev, 5506 + "cq_period(%u) reached the upper limit, adjusted to 65.\n", 5507 + cq_period); 5508 + cq_period = HNS_ROCE_MAX_CQ_PERIOD; 5509 + } 5510 + cq_period *= HNS_ROCE_CLOCK_ADJUST; 5511 + } 5533 5512 hr_reg_write(cq_context, CQC_CQ_PERIOD, cq_period); 5534 5513 hr_reg_clear(cqc_mask, CQC_CQ_PERIOD); 5535 5514 ··· 5934 5893 to_hr_hw_page_shift(eq->mtr.hem_cfg.buf_pg_shift)); 5935 5894 hr_reg_write(eqc, EQC_EQ_PROD_INDX, HNS_ROCE_EQ_INIT_PROD_IDX); 5936 5895 hr_reg_write(eqc, EQC_EQ_MAX_CNT, eq->eq_max_cnt); 5896 + 5897 + if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08) { 5898 + if (eq->eq_period * HNS_ROCE_CLOCK_ADJUST > USHRT_MAX) { 5899 + dev_info(hr_dev->dev, "eq_period(%u) reached the upper limit, adjusted to 65.\n", 5900 + eq->eq_period); 5901 + eq->eq_period = HNS_ROCE_MAX_EQ_PERIOD; 5902 + } 5903 + 
eq->eq_period *= HNS_ROCE_CLOCK_ADJUST; 5904 + } 5937 5905 5938 5906 hr_reg_write(eqc, EQC_EQ_PERIOD, eq->eq_period); 5939 5907 hr_reg_write(eqc, EQC_EQE_REPORT_TIMER, HNS_ROCE_EQ_INIT_REPORT_TIMER);
+8
drivers/infiniband/hw/hns/hns_roce_hw_v2.h
··· 1444 1444 struct list_head node; /* all dips are on a list */ 1445 1445 }; 1446 1446 1447 + /* only for RNR timeout issue of HIP08 */ 1448 + #define HNS_ROCE_CLOCK_ADJUST 1000 1449 + #define HNS_ROCE_MAX_CQ_PERIOD 65 1450 + #define HNS_ROCE_MAX_EQ_PERIOD 65 1451 + #define HNS_ROCE_RNR_TIMER_10NS 1 1452 + #define HNS_ROCE_1US_CFG 999 1453 + #define HNS_ROCE_1NS_CFG 0 1454 + 1447 1455 #define HNS_ROCE_AEQ_DEFAULT_BURST_NUM 0x0 1448 1456 #define HNS_ROCE_AEQ_DEFAULT_INTERVAL 0x0 1449 1457 #define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x0
+1 -1
drivers/infiniband/hw/hns/hns_roce_srq.c
··· 259 259 260 260 static void free_srq_wrid(struct hns_roce_srq *srq) 261 261 { 262 - kfree(srq->wrid); 262 + kvfree(srq->wrid); 263 263 srq->wrid = NULL; 264 264 } 265 265
+1 -1
drivers/infiniband/hw/qib/qib_user_sdma.c
··· 941 941 &addrlimit) || 942 942 addrlimit > type_max(typeof(pkt->addrlimit))) { 943 943 ret = -EINVAL; 944 - goto free_pbc; 944 + goto free_pkt; 945 945 } 946 946 pkt->addrlimit = addrlimit; 947 947
+9 -2
drivers/input/joystick/spaceball.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/input.h> 21 21 #include <linux/serio.h> 22 + #include <asm/unaligned.h> 22 23 23 24 #define DRIVER_DESC "SpaceTec SpaceBall 2003/3003/4000 FLX driver" 24 25 ··· 76 75 77 76 case 'D': /* Ball data */ 78 77 if (spaceball->idx != 15) return; 79 - for (i = 0; i < 6; i++) 78 + /* 79 + * Skip first three bytes; read six axes worth of data. 80 + * Axis values are signed 16-bit big-endian. 81 + */ 82 + data += 3; 83 + for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) { 80 84 input_report_abs(dev, spaceball_axes[i], 81 - (__s16)((data[2 * i + 3] << 8) | data[2 * i + 2])); 85 + (__s16)get_unaligned_be16(&data[i * 2])); 86 + } 82 87 break; 83 88 84 89 case 'K': /* Button data */
+12 -9
drivers/input/misc/iqs626a.c
··· 456 456 unsigned int suspend_mode; 457 457 }; 458 458 459 - static int iqs626_parse_events(struct iqs626_private *iqs626, 460 - const struct fwnode_handle *ch_node, 461 - enum iqs626_ch_id ch_id) 459 + static noinline_for_stack int 460 + iqs626_parse_events(struct iqs626_private *iqs626, 461 + const struct fwnode_handle *ch_node, 462 + enum iqs626_ch_id ch_id) 462 463 { 463 464 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 464 465 struct i2c_client *client = iqs626->client; ··· 605 604 return 0; 606 605 } 607 606 608 - static int iqs626_parse_ati_target(struct iqs626_private *iqs626, 609 - const struct fwnode_handle *ch_node, 610 - enum iqs626_ch_id ch_id) 607 + static noinline_for_stack int 608 + iqs626_parse_ati_target(struct iqs626_private *iqs626, 609 + const struct fwnode_handle *ch_node, 610 + enum iqs626_ch_id ch_id) 611 611 { 612 612 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 613 613 struct i2c_client *client = iqs626->client; ··· 887 885 return 0; 888 886 } 889 887 890 - static int iqs626_parse_channel(struct iqs626_private *iqs626, 891 - const struct fwnode_handle *ch_node, 892 - enum iqs626_ch_id ch_id) 888 + static noinline_for_stack int 889 + iqs626_parse_channel(struct iqs626_private *iqs626, 890 + const struct fwnode_handle *ch_node, 891 + enum iqs626_ch_id ch_id) 893 892 { 894 893 struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg; 895 894 struct i2c_client *client = iqs626->client;
+2 -2
drivers/input/mouse/appletouch.c
··· 916 916 set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit); 917 917 set_bit(BTN_LEFT, input_dev->keybit); 918 918 919 + INIT_WORK(&dev->work, atp_reinit); 920 + 919 921 error = input_register_device(dev->input); 920 922 if (error) 921 923 goto err_free_buffer; 922 924 923 925 /* save our data pointer in this interface device */ 924 926 usb_set_intfdata(iface, dev); 925 - 926 - INIT_WORK(&dev->work, atp_reinit); 927 927 928 928 return 0; 929 929
+7 -1
drivers/input/mouse/elantech.c
··· 1588 1588 */ 1589 1589 static int elantech_change_report_id(struct psmouse *psmouse) 1590 1590 { 1591 - unsigned char param[2] = { 0x10, 0x03 }; 1591 + /* 1592 + * NOTE: the code is expecting to receive param[] as an array of 3 1593 + * items (see __ps2_command()), even if in this case only 2 are 1594 + * actually needed. Make sure the array size is 3 to avoid potential 1595 + * stack out-of-bound accesses. 1596 + */ 1597 + unsigned char param[3] = { 0x10, 0x03 }; 1592 1598 1593 1599 if (elantech_write_reg_params(psmouse, 0x7, param) || 1594 1600 elantech_read_reg_params(psmouse, 0x7, param) ||
+21
drivers/input/serio/i8042-x86ia64io.h
··· 995 995 { } 996 996 }; 997 997 998 + static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = { 999 + { 1000 + /* ASUS ZenBook UX425UA */ 1001 + .matches = { 1002 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1003 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"), 1004 + }, 1005 + }, 1006 + { 1007 + /* ASUS ZenBook UM325UA */ 1008 + .matches = { 1009 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 1010 + DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"), 1011 + }, 1012 + }, 1013 + { } 1014 + }; 1015 + 998 1016 #endif /* CONFIG_X86 */ 999 1017 1000 1018 #ifdef CONFIG_PNP ··· 1332 1314 1333 1315 if (dmi_check_system(i8042_dmi_kbdreset_table)) 1334 1316 i8042_kbdreset = true; 1317 + 1318 + if (dmi_check_system(i8042_dmi_probe_defer_table)) 1319 + i8042_probe_defer = true; 1335 1320 1336 1321 /* 1337 1322 * A20 was already enabled during early kernel init. But some buggy
+35 -19
drivers/input/serio/i8042.c
··· 45 45 module_param_named(unlock, i8042_unlock, bool, 0); 
 46 46 MODULE_PARM_DESC(unlock, "Ignore keyboard lock."); 
 47 47 
 48 + static bool i8042_probe_defer; 
 49 + module_param_named(probe_defer, i8042_probe_defer, bool, 0); 
 50 + MODULE_PARM_DESC(probe_defer, "Allow deferred probing."); 
 51 + 
 48 52 enum i8042_controller_reset_mode { 
 49 53 I8042_RESET_NEVER, 
 50 54 I8042_RESET_ALWAYS, 
 ··· 715 711 * LCS/Telegraphics. 
 716 712 */ 
 717 713 
 718 - static int __init i8042_check_mux(void) 
 714 + static int i8042_check_mux(void) 
 719 715 { 
 720 716 unsigned char mux_version; 
 721 717 
 ··· 744 740 /* 
 745 741 * The following is used to test AUX IRQ delivery. 
 746 742 */ 
 747 - static struct completion i8042_aux_irq_delivered __initdata; 
 748 - static bool i8042_irq_being_tested __initdata; 
 743 + static struct completion i8042_aux_irq_delivered; 
 744 + static bool i8042_irq_being_tested; 
 749 745 
 750 - static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id) 
 746 + static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id) 
 751 747 { 
 752 748 unsigned long flags; 
 753 749 unsigned char str, data; 
 ··· 774 770 * verifies success by reading CTR. Used when testing for presence of AUX 
 775 771 * port. 
 776 772 */ 
 777 - static int __init i8042_toggle_aux(bool on) 
 773 + static int i8042_toggle_aux(bool on) 
 778 774 { 
 779 775 unsigned char param; 
 780 776 int i; 
 ··· 802 798 * the presence of an AUX interface. 
 803 799 */ 
 804 800 
 805 - static int __init i8042_check_aux(void) 
 801 + static int i8042_check_aux(void) 
 806 802 { 
 807 803 int retval = -1; 
 808 804 bool irq_registered = false; 
 ··· 1009 1005 
 1010 1006 if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) { 
 1011 1007 pr_err("Can't read CTR while initializing i8042\n"); 
 1012 - return -EIO; 
 1008 + return i8042_probe_defer ?
-EPROBE_DEFER : -EIO; 1013 1009 } 1014 1010 1015 1011 } while (n < 2 || ctr[0] != ctr[1]); ··· 1324 1320 i8042_controller_reset(false); 1325 1321 } 1326 1322 1327 - static int __init i8042_create_kbd_port(void) 1323 + static int i8042_create_kbd_port(void) 1328 1324 { 1329 1325 struct serio *serio; 1330 1326 struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO]; ··· 1353 1349 return 0; 1354 1350 } 1355 1351 1356 - static int __init i8042_create_aux_port(int idx) 1352 + static int i8042_create_aux_port(int idx) 1357 1353 { 1358 1354 struct serio *serio; 1359 1355 int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx; ··· 1390 1386 return 0; 1391 1387 } 1392 1388 1393 - static void __init i8042_free_kbd_port(void) 1389 + static void i8042_free_kbd_port(void) 1394 1390 { 1395 1391 kfree(i8042_ports[I8042_KBD_PORT_NO].serio); 1396 1392 i8042_ports[I8042_KBD_PORT_NO].serio = NULL; 1397 1393 } 1398 1394 1399 - static void __init i8042_free_aux_ports(void) 1395 + static void i8042_free_aux_ports(void) 1400 1396 { 1401 1397 int i; 1402 1398 ··· 1406 1402 } 1407 1403 } 1408 1404 1409 - static void __init i8042_register_ports(void) 1405 + static void i8042_register_ports(void) 1410 1406 { 1411 1407 int i; 1412 1408 ··· 1447 1443 i8042_aux_irq_registered = i8042_kbd_irq_registered = false; 1448 1444 } 1449 1445 1450 - static int __init i8042_setup_aux(void) 1446 + static int i8042_setup_aux(void) 1451 1447 { 1452 1448 int (*aux_enable)(void); 1453 1449 int error; ··· 1489 1485 return error; 1490 1486 } 1491 1487 1492 - static int __init i8042_setup_kbd(void) 1488 + static int i8042_setup_kbd(void) 1493 1489 { 1494 1490 int error; 1495 1491 ··· 1539 1535 return 0; 1540 1536 } 1541 1537 1542 - static int __init i8042_probe(struct platform_device *dev) 1538 + static int i8042_probe(struct platform_device *dev) 1543 1539 { 1544 1540 int error; 1545 1541 ··· 1604 1600 .pm = &i8042_pm_ops, 1605 1601 #endif 1606 1602 }, 1603 + .probe = i8042_probe, 1607 1604 
.remove = i8042_remove, 1608 1605 .shutdown = i8042_shutdown, 1609 1606 }; ··· 1615 1610 1616 1611 static int __init i8042_init(void) 1617 1612 { 1618 - struct platform_device *pdev; 1619 1613 int err; 1620 1614 1621 1615 dbg_init(); ··· 1630 1626 /* Set this before creating the dev to allow i8042_command to work right away */ 1631 1627 i8042_present = true; 1632 1628 1633 - pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0); 1634 - if (IS_ERR(pdev)) { 1635 - err = PTR_ERR(pdev); 1629 + err = platform_driver_register(&i8042_driver); 1630 + if (err) 1636 1631 goto err_platform_exit; 1632 + 1633 + i8042_platform_device = platform_device_alloc("i8042", -1); 1634 + if (!i8042_platform_device) { 1635 + err = -ENOMEM; 1636 + goto err_unregister_driver; 1637 1637 } 1638 + 1639 + err = platform_device_add(i8042_platform_device); 1640 + if (err) 1641 + goto err_free_device; 1638 1642 1639 1643 bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block); 1640 1644 panic_blink = i8042_panic_blink; 1641 1645 1642 1646 return 0; 1643 1647 1648 + err_free_device: 1649 + platform_device_put(i8042_platform_device); 1650 + err_unregister_driver: 1651 + platform_driver_unregister(&i8042_driver); 1644 1652 err_platform_exit: 1645 1653 i8042_platform_exit(); 1646 1654 return err;
+1 -1
drivers/input/touchscreen/atmel_mxt_ts.c
··· 1882 1882 if (error) { 1883 1883 dev_err(&client->dev, "Error %d parsing object table\n", error); 1884 1884 mxt_free_object_table(data); 1885 - goto err_free_mem; 1885 + return error; 1886 1886 } 1887 1887 1888 1888 data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);
+45 -1
drivers/input/touchscreen/elants_i2c.c
··· 117 117 #define ELAN_POWERON_DELAY_USEC 500 
 118 118 #define ELAN_RESET_DELAY_MSEC 20 
 119 119 
 120 + /* FW boot code version */ 
 121 + #define BC_VER_H_BYTE_FOR_EKTH3900x1_I2C 0x72 
 122 + #define BC_VER_H_BYTE_FOR_EKTH3900x2_I2C 0x82 
 123 + #define BC_VER_H_BYTE_FOR_EKTH3900x3_I2C 0x92 
 124 + #define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C 0x6D 
 125 + #define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C 0x6E 
 126 + #define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C 0x77 
 127 + #define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C 0x78 
 128 + #define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB 0x67 
 129 + #define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB 0x68 
 130 + #define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB 0x74 
 131 + #define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB 0x75 
 132 + 
 120 133 enum elants_chip_id { 
 121 134 EKTH3500, 
 122 135 EKTF3624, 
 ··· 749 736 return 0; 
 750 737 } 
 751 738 
 739 + static bool elants_i2c_should_check_remark_id(struct elants_data *ts) 
 740 + { 
 741 + struct i2c_client *client = ts->client; 
 742 + const u8 bootcode_version = ts->iap_version; 
 743 + bool check; 
 744 + 
 745 + /* The I2C eKTH3900 and eKTH5312 do NOT support Remark ID */ 
 746 + if ((bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x1_I2C) || 
 747 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x2_I2C) || 
 748 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x3_I2C) || 
 749 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C) || 
 750 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C) || 
 751 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C) || 
 752 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C) || 
 753 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB) || 
 754 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB) || 
 755 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB) || 
 756 + (bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB)) { 
 757 + dev_dbg(&client->dev, 
 758 + "eKTH3900/eKTH5312 (0x%02x) does not support remark ID\n", 
 759 + bootcode_version); 
 760 + check = false; 
 761 + } else if
(bootcode_version >= 0x60) { 762 + check = true; 763 + } else { 764 + check = false; 765 + } 766 + 767 + return check; 768 + } 769 + 752 770 static int elants_i2c_do_update_firmware(struct i2c_client *client, 753 771 const struct firmware *fw, 754 772 bool force) ··· 793 749 u16 send_id; 794 750 int page, n_fw_pages; 795 751 int error; 796 - bool check_remark_id = ts->iap_version >= 0x60; 752 + bool check_remark_id = elants_i2c_should_check_remark_id(ts); 797 753 798 754 /* Recovery mode detection! */ 799 755 if (force) {
+26 -5
drivers/input/touchscreen/goodix.c
··· 102 102 { .id = "911", .data = &gt911_chip_data }, 103 103 { .id = "9271", .data = &gt911_chip_data }, 104 104 { .id = "9110", .data = &gt911_chip_data }, 105 + { .id = "9111", .data = &gt911_chip_data }, 105 106 { .id = "927", .data = &gt911_chip_data }, 106 107 { .id = "928", .data = &gt911_chip_data }, 107 108 ··· 651 650 652 651 usleep_range(6000, 10000); /* T4: > 5ms */ 653 652 654 - /* end select I2C slave addr */ 655 - error = gpiod_direction_input(ts->gpiod_rst); 656 - if (error) 657 - goto error; 653 + /* 654 + * Put the reset pin back into input / high-impedance mode to save 655 + * power. Only do this in the non-ACPI case since some ACPI boards 656 + * don't have a pull-up, so there the reset pin must stay active-high. 657 + */ 658 + if (ts->irq_pin_access_method == IRQ_PIN_ACCESS_GPIO) { 659 + error = gpiod_direction_input(ts->gpiod_rst); 660 + if (error) 661 + goto error; 662 + } 658 663 659 664 return 0; 660 665 ··· 794 787 return -EINVAL; 795 788 } 796 789 790 + /* 791 + * Normally we put the reset pin in input / high-impedance mode to save 792 + * power. But some x86/ACPI boards don't have a pull-up, so for the ACPI 793 + * case, leave the pin as is. This results in the pin not being touched 794 + * at all on x86/ACPI boards, except when needed for error recovery. 795 + */ 796 + ts->gpiod_rst_flags = GPIOD_ASIS; 797 + 797 798 return devm_acpi_dev_add_driver_gpios(dev, gpio_mapping); 798 799 } 799 800 #else ··· 826 811 if (!ts->client) 827 812 return -EINVAL; 828 813 dev = &ts->client->dev; 814 + 815 + /* 816 + * By default we request the reset pin as input, leaving it in 817 + * high-impedance when not resetting the controller to save power. 
818 + */ 819 + ts->gpiod_rst_flags = GPIOD_IN; 829 820 830 821 ts->avdd28 = devm_regulator_get(dev, "AVDD28"); 831 822 if (IS_ERR(ts->avdd28)) { ··· 870 849 ts->gpiod_int = gpiod; 871 850 872 851 /* Get the reset line GPIO pin number */ 873 - gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, GPIOD_IN); 852 + gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, ts->gpiod_rst_flags); 874 853 if (IS_ERR(gpiod)) { 875 854 error = PTR_ERR(gpiod); 876 855 if (error != -EPROBE_DEFER)
+1
drivers/input/touchscreen/goodix.h
··· 87 87 struct gpio_desc *gpiod_rst; 88 88 int gpio_count; 89 89 int gpio_int_idx; 90 + enum gpiod_flags gpiod_rst_flags; 90 91 char id[GOODIX_ID_MAX_LEN + 1]; 91 92 char cfg_name[64]; 92 93 u16 version;
+1 -1
drivers/input/touchscreen/goodix_fwupload.c
··· 207 207 208 208 error = goodix_reset_no_int_sync(ts); 209 209 if (error) 210 - return error; 210 + goto release; 211 211 212 212 error = goodix_enter_upload_mode(ts->client); 213 213 if (error)
+3 -3
drivers/isdn/mISDN/core.c
··· 381 381 err = mISDN_inittimer(&debug); 382 382 if (err) 383 383 goto error2; 384 - err = l1_init(&debug); 384 + err = Isdnl1_Init(&debug); 385 385 if (err) 386 386 goto error3; 387 387 err = Isdnl2_Init(&debug); ··· 395 395 error5: 396 396 Isdnl2_cleanup(); 397 397 error4: 398 - l1_cleanup(); 398 + Isdnl1_cleanup(); 399 399 error3: 400 400 mISDN_timer_cleanup(); 401 401 error2: ··· 408 408 { 409 409 misdn_sock_cleanup(); 410 410 Isdnl2_cleanup(); 411 - l1_cleanup(); 411 + Isdnl1_cleanup(); 412 412 mISDN_timer_cleanup(); 413 413 class_unregister(&mISDN_class); 414 414
+2 -2
drivers/isdn/mISDN/core.h
··· 60 60 extern int mISDN_inittimer(u_int *); 61 61 extern void mISDN_timer_cleanup(void); 62 62 63 - extern int l1_init(u_int *); 64 - extern void l1_cleanup(void); 63 + extern int Isdnl1_Init(u_int *); 64 + extern void Isdnl1_cleanup(void); 65 65 extern int Isdnl2_Init(u_int *); 66 66 extern void Isdnl2_cleanup(void); 67 67
+2 -2
drivers/isdn/mISDN/layer1.c
··· 398 398 EXPORT_SYMBOL(create_l1); 399 399 400 400 int 401 - l1_init(u_int *deb) 401 + Isdnl1_Init(u_int *deb) 402 402 { 403 403 debug = deb; 404 404 l1fsm_s.state_count = L1S_STATE_COUNT; ··· 409 409 } 410 410 411 411 void 412 - l1_cleanup(void) 412 + Isdnl1_cleanup(void) 413 413 { 414 414 mISDN_FsmFree(&l1fsm_s); 415 415 }
+6 -1
drivers/mmc/core/core.c
··· 2264 2264 _mmc_detect_change(host, 0, false); 2265 2265 } 2266 2266 2267 - void mmc_stop_host(struct mmc_host *host) 2267 + void __mmc_stop_host(struct mmc_host *host) 2268 2268 { 2269 2269 if (host->slot.cd_irq >= 0) { 2270 2270 mmc_gpio_set_cd_wake(host, false); ··· 2273 2273 2274 2274 host->rescan_disable = 1; 2275 2275 cancel_delayed_work_sync(&host->detect); 2276 + } 2277 + 2278 + void mmc_stop_host(struct mmc_host *host) 2279 + { 2280 + __mmc_stop_host(host); 2276 2281 2277 2282 /* clear pm flags now and let card drivers set them as needed */ 2278 2283 host->pm_flags = 0;
+1
drivers/mmc/core/core.h
··· 70 70 71 71 void mmc_rescan(struct work_struct *work); 72 72 void mmc_start_host(struct mmc_host *host); 73 + void __mmc_stop_host(struct mmc_host *host); 73 74 void mmc_stop_host(struct mmc_host *host); 74 75 75 76 void _mmc_detect_change(struct mmc_host *host, unsigned long delay,
+9
drivers/mmc/core/host.c
··· 80 80 kfree(host); 81 81 } 82 82 83 + static int mmc_host_classdev_shutdown(struct device *dev) 84 + { 85 + struct mmc_host *host = cls_dev_to_mmc_host(dev); 86 + 87 + __mmc_stop_host(host); 88 + return 0; 89 + } 90 + 83 91 static struct class mmc_host_class = { 84 92 .name = "mmc_host", 85 93 .dev_release = mmc_host_classdev_release, 94 + .shutdown_pre = mmc_host_classdev_shutdown, 86 95 .pm = MMC_HOST_CLASS_DEV_PM_OPS, 87 96 }; 88 97
+16
drivers/mmc/host/meson-mx-sdhc-mmc.c
··· 135 135 struct mmc_command *cmd) 136 136 { 137 137 struct meson_mx_sdhc_host *host = mmc_priv(mmc); 138 + bool manual_stop = false; 138 139 u32 ictl, send; 139 140 int pack_len; 140 141 ··· 173 172 else 174 173 /* software flush: */ 175 174 ictl |= MESON_SDHC_ICTL_DATA_XFER_OK; 175 + 176 + /* 177 + * Mimic the logic from the vendor driver where (only) 178 + * SD_IO_RW_EXTENDED commands with more than one block set the 179 + * MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware 180 + * download in the brcmfmac driver for a BCM43362/1 card. 181 + * Without this sdio_memcpy_toio() (with a size of 219557 182 + * bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set. 183 + */ 184 + manual_stop = cmd->data->blocks > 1 && 185 + cmd->opcode == SD_IO_RW_EXTENDED; 176 186 } else { 177 187 pack_len = 0; 178 188 179 189 ictl |= MESON_SDHC_ICTL_RESP_OK; 180 190 } 191 + 192 + regmap_update_bits(host->regmap, MESON_SDHC_MISC, 193 + MESON_SDHC_MISC_MANUAL_STOP, 194 + manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0); 181 195 182 196 if (cmd->opcode == MMC_STOP_TRANSMISSION) 183 197 send |= MESON_SDHC_SEND_DATA_STOP;
+2
drivers/mmc/host/mmci_stm32_sdmmc.c
··· 441 441 return -EINVAL; 442 442 } 443 443 444 + writel_relaxed(0, dlyb->base + DLYB_CR); 445 + 444 446 phase = end_of_len - max_len / 2; 445 447 sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false); 446 448
+26 -17
drivers/mmc/host/sdhci-tegra.c
··· 356 356 } 357 357 } 358 358 359 - static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc, 360 - struct mmc_ios *ios) 361 - { 362 - struct sdhci_host *host = mmc_priv(mmc); 363 - u32 val; 364 - 365 - val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 366 - 367 - if (ios->enhanced_strobe) 368 - val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 369 - else 370 - val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 371 - 372 - sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 373 - 374 - } 375 - 376 359 static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask) 377 360 { 378 361 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ··· 774 791 tegra_sdhci_pad_autocalib(host); 775 792 tegra_host->pad_calib_required = false; 776 793 } 794 + } 795 + 796 + static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc, 797 + struct mmc_ios *ios) 798 + { 799 + struct sdhci_host *host = mmc_priv(mmc); 800 + u32 val; 801 + 802 + val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 803 + 804 + if (ios->enhanced_strobe) { 805 + val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 806 + /* 807 + * When CMD13 is sent from mmc_select_hs400es() after 808 + * switching to HS400ES mode, the bus is operating at 809 + * either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR. 810 + * To meet Tegra SDHCI requirement at HS400ES mode, force SDHCI 811 + * interface clock to MMC_HS200_MAX_DTR (200 MHz) so that host 812 + * controller CAR clock and the interface clock are rate matched. 813 + */ 814 + tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR); 815 + } else { 816 + val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE; 817 + } 818 + 819 + sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL); 777 820 } 778 821 779 822 static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
+1 -1
drivers/net/bonding/bond_options.c
··· 1526 1526 mac = (u8 *)&newval->value; 1527 1527 } 1528 1528 1529 - if (!is_valid_ether_addr(mac)) 1529 + if (is_multicast_ether_addr(mac)) 1530 1530 goto err; 1531 1531 1532 1532 netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
+8
drivers/net/ethernet/aquantia/atlantic/aq_ring.c
··· 366 366 if (!buff->is_eop) { 367 367 buff_ = buff; 368 368 do { 369 + if (buff_->next >= self->size) { 370 + err = -EIO; 371 + goto err_exit; 372 + } 369 373 next_ = buff_->next, 370 374 buff_ = &self->buff_ring[next_]; 371 375 is_rsc_completed = ··· 393 389 (buff->is_lro && buff->is_cso_err)) { 394 390 buff_ = buff; 395 391 do { 392 + if (buff_->next >= self->size) { 393 + err = -EIO; 394 + goto err_exit; 395 + } 396 396 next_ = buff_->next, 397 397 buff_ = &self->buff_ring[next_]; 398 398
+8 -15
drivers/net/ethernet/atheros/ag71xx.c
··· 1913 1913 ag->mac_reset = devm_reset_control_get(&pdev->dev, "mac"); 1914 1914 if (IS_ERR(ag->mac_reset)) { 1915 1915 netif_err(ag, probe, ndev, "missing mac reset\n"); 1916 - err = PTR_ERR(ag->mac_reset); 1917 - goto err_free; 1916 + return PTR_ERR(ag->mac_reset); 1918 1917 } 1919 1918 1920 1919 ag->mac_base = devm_ioremap(&pdev->dev, res->start, resource_size(res)); 1921 - if (!ag->mac_base) { 1922 - err = -ENOMEM; 1923 - goto err_free; 1924 - } 1920 + if (!ag->mac_base) 1921 + return -ENOMEM; 1925 1922 1926 1923 ndev->irq = platform_get_irq(pdev, 0); 1927 1924 err = devm_request_irq(&pdev->dev, ndev->irq, ag71xx_interrupt, ··· 1926 1929 if (err) { 1927 1930 netif_err(ag, probe, ndev, "unable to request IRQ %d\n", 1928 1931 ndev->irq); 1929 - goto err_free; 1932 + return err; 1930 1933 } 1931 1934 1932 1935 ndev->netdev_ops = &ag71xx_netdev_ops; ··· 1954 1957 ag->stop_desc = dmam_alloc_coherent(&pdev->dev, 1955 1958 sizeof(struct ag71xx_desc), 1956 1959 &ag->stop_desc_dma, GFP_KERNEL); 1957 - if (!ag->stop_desc) { 1958 - err = -ENOMEM; 1959 - goto err_free; 1960 - } 1960 + if (!ag->stop_desc) 1961 + return -ENOMEM; 1961 1962 1962 1963 ag->stop_desc->data = 0; 1963 1964 ag->stop_desc->ctrl = 0; ··· 1970 1975 err = of_get_phy_mode(np, &ag->phy_if_mode); 1971 1976 if (err) { 1972 1977 netif_err(ag, probe, ndev, "missing phy-mode property in DT\n"); 1973 - goto err_free; 1978 + return err; 1974 1979 } 1975 1980 1976 1981 netif_napi_add(ndev, &ag->napi, ag71xx_poll, AG71XX_NAPI_WEIGHT); ··· 1978 1983 err = clk_prepare_enable(ag->clk_eth); 1979 1984 if (err) { 1980 1985 netif_err(ag, probe, ndev, "Failed to enable eth clk.\n"); 1981 - goto err_free; 1986 + return err; 1982 1987 } 1983 1988 1984 1989 ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, 0); ··· 2014 2019 ag71xx_mdio_remove(ag); 2015 2020 err_put_clk: 2016 2021 clk_disable_unprepare(ag->clk_eth); 2017 - err_free: 2018 - free_netdev(ndev); 2019 2022 return err; 2020 2023 } 2021 2024
+7 -5
drivers/net/ethernet/freescale/fman/fman_port.c
··· 1805 1805 fman = dev_get_drvdata(&fm_pdev->dev); 1806 1806 if (!fman) { 1807 1807 err = -EINVAL; 1808 - goto return_err; 1808 + goto put_device; 1809 1809 } 1810 1810 1811 1811 err = of_property_read_u32(port_node, "cell-index", &val); ··· 1813 1813 dev_err(port->dev, "%s: reading cell-index for %pOF failed\n", 1814 1814 __func__, port_node); 1815 1815 err = -EINVAL; 1816 - goto return_err; 1816 + goto put_device; 1817 1817 } 1818 1818 port_id = (u8)val; 1819 1819 port->dts_params.id = port_id; ··· 1847 1847 } else { 1848 1848 dev_err(port->dev, "%s: Illegal port type\n", __func__); 1849 1849 err = -EINVAL; 1850 - goto return_err; 1850 + goto put_device; 1851 1851 } 1852 1852 1853 1853 port->dts_params.type = port_type; ··· 1861 1861 dev_err(port->dev, "%s: incorrect qman-channel-id\n", 1862 1862 __func__); 1863 1863 err = -EINVAL; 1864 - goto return_err; 1864 + goto put_device; 1865 1865 } 1866 1866 port->dts_params.qman_channel_id = qman_channel_id; 1867 1867 } ··· 1871 1871 dev_err(port->dev, "%s: of_address_to_resource() failed\n", 1872 1872 __func__); 1873 1873 err = -ENOMEM; 1874 - goto return_err; 1874 + goto put_device; 1875 1875 } 1876 1876 1877 1877 port->dts_params.fman = fman; ··· 1896 1896 1897 1897 return 0; 1898 1898 1899 + put_device: 1900 + put_device(&fm_pdev->dev); 1899 1901 return_err: 1900 1902 of_node_put(port_node); 1901 1903 free_port:
+4 -4
drivers/net/ethernet/google/gve/gve_adminq.c
··· 738 738 * is not set to GqiRda, choose the queue format in a priority order: 739 739 * DqoRda, GqiRda, GqiQpl. Use GqiQpl as default. 740 740 */ 741 - if (priv->queue_format == GVE_GQI_RDA_FORMAT) { 742 - dev_info(&priv->pdev->dev, 743 - "Driver is running with GQI RDA queue format.\n"); 744 - } else if (dev_op_dqo_rda) { 741 + if (dev_op_dqo_rda) { 745 742 priv->queue_format = GVE_DQO_RDA_FORMAT; 746 743 dev_info(&priv->pdev->dev, 747 744 "Driver is running with DQO RDA queue format.\n"); ··· 750 753 "Driver is running with GQI RDA queue format.\n"); 751 754 supported_features_mask = 752 755 be32_to_cpu(dev_op_gqi_rda->supported_features_mask); 756 + } else if (priv->queue_format == GVE_GQI_RDA_FORMAT) { 757 + dev_info(&priv->pdev->dev, 758 + "Driver is running with GQI RDA queue format.\n"); 753 759 } else { 754 760 priv->queue_format = GVE_GQI_QPL_FORMAT; 755 761 if (dev_op_gqi_qpl)
+17
drivers/net/ethernet/intel/ice/ice_base.c
··· 6 6 #include "ice_lib.h" 7 7 #include "ice_dcb_lib.h" 8 8 9 + static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring) 10 + { 11 + rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL); 12 + return !!rx_ring->xdp_buf; 13 + } 14 + 15 + static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring) 16 + { 17 + rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL); 18 + return !!rx_ring->rx_buf; 19 + } 20 + 9 21 /** 10 22 * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI 11 23 * @qs_cfg: gathered variables needed for PF->VSI queues assignment ··· 504 492 xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, 505 493 ring->q_index, ring->q_vector->napi.napi_id); 506 494 495 + kfree(ring->rx_buf); 507 496 ring->xsk_pool = ice_xsk_pool(ring); 508 497 if (ring->xsk_pool) { 498 + if (!ice_alloc_rx_buf_zc(ring)) 499 + return -ENOMEM; 509 500 xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq); 510 501 511 502 ring->rx_buf_len = ··· 523 508 dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n", 524 509 ring->q_index); 525 510 } else { 511 + if (!ice_alloc_rx_buf(ring)) 512 + return -ENOMEM; 526 513 if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) 527 514 /* coverity[check_return] */ 528 515 xdp_rxq_info_reg(&ring->xdp_rxq,
+13 -6
drivers/net/ethernet/intel/ice/ice_txrx.c
··· 419 419 } 420 420 421 421 rx_skip_free: 422 - memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count); 422 + if (rx_ring->xsk_pool) 423 + memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf))); 424 + else 425 + memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf))); 423 426 424 427 /* Zero out the descriptor ring */ 425 428 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc), ··· 449 446 if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq)) 450 447 xdp_rxq_info_unreg(&rx_ring->xdp_rxq); 451 448 rx_ring->xdp_prog = NULL; 452 - devm_kfree(rx_ring->dev, rx_ring->rx_buf); 453 - rx_ring->rx_buf = NULL; 449 + if (rx_ring->xsk_pool) { 450 + kfree(rx_ring->xdp_buf); 451 + rx_ring->xdp_buf = NULL; 452 + } else { 453 + kfree(rx_ring->rx_buf); 454 + rx_ring->rx_buf = NULL; 455 + } 454 456 455 457 if (rx_ring->desc) { 456 458 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc), ··· 483 475 /* warn if we are about to overwrite the pointer */ 484 476 WARN_ON(rx_ring->rx_buf); 485 477 rx_ring->rx_buf = 486 - devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count, 487 - GFP_KERNEL); 478 + kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL); 488 479 if (!rx_ring->rx_buf) 489 480 return -ENOMEM; 490 481 ··· 512 505 return 0; 513 506 514 507 err: 515 - devm_kfree(dev, rx_ring->rx_buf); 508 + kfree(rx_ring->rx_buf); 516 509 rx_ring->rx_buf = NULL; 517 510 return -ENOMEM; 518 511 }
-1
drivers/net/ethernet/intel/ice/ice_txrx.h
··· 24 24 #define ICE_MAX_DATA_PER_TXD_ALIGNED \ 25 25 (~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD) 26 26 27 - #define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */ 28 27 #define ICE_MAX_TXQ_PER_TXQG 128 29 28 30 29 /* Attempt to maximize the headroom available for incoming frames. We use a 2K
+32 -34
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 12 12 #include "ice_txrx_lib.h" 13 13 #include "ice_lib.h" 14 14 15 + static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx) 16 + { 17 + return &rx_ring->xdp_buf[idx]; 18 + } 19 + 15 20 /** 16 21 * ice_qp_reset_stats - Resets all stats for rings of given index 17 22 * @vsi: VSI that contains rings of interest ··· 377 372 dma_addr_t dma; 378 373 379 374 rx_desc = ICE_RX_DESC(rx_ring, ntu); 380 - xdp = &rx_ring->xdp_buf[ntu]; 375 + xdp = ice_xdp_buf(rx_ring, ntu); 381 376 382 377 nb_buffs = min_t(u16, count, rx_ring->count - ntu); 383 378 nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs); ··· 395 390 } 396 391 397 392 ntu += nb_buffs; 398 - if (ntu == rx_ring->count) { 399 - rx_desc = ICE_RX_DESC(rx_ring, 0); 400 - xdp = rx_ring->xdp_buf; 393 + if (ntu == rx_ring->count) 401 394 ntu = 0; 402 - } 403 395 404 - /* clear the status bits for the next_to_use descriptor */ 405 - rx_desc->wb.status_error0 = 0; 406 396 ice_release_rx_desc(rx_ring, ntu); 407 397 408 398 return count == nb_buffs; ··· 419 419 /** 420 420 * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer 421 421 * @rx_ring: Rx ring 422 - * @xdp_arr: Pointer to the SW ring of xdp_buff pointers 422 + * @xdp: Pointer to XDP buffer 423 423 * 424 424 * This function allocates a new skb from a zero-copy Rx buffer. 425 425 * 426 426 * Returns the skb on success, NULL on failure. 
427 427 */ 428 428 static struct sk_buff * 429 - ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr) 429 + ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) 430 430 { 431 - struct xdp_buff *xdp = *xdp_arr; 431 + unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start; 432 432 unsigned int metasize = xdp->data - xdp->data_meta; 433 433 unsigned int datasize = xdp->data_end - xdp->data; 434 - unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start; 435 434 struct sk_buff *skb; 436 435 437 436 skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard, ··· 444 445 skb_metadata_set(skb, metasize); 445 446 446 447 xsk_buff_free(xdp); 447 - *xdp_arr = NULL; 448 448 return skb; 449 449 } 450 450 ··· 505 507 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget) 506 508 { 507 509 unsigned int total_rx_bytes = 0, total_rx_packets = 0; 508 - u16 cleaned_count = ICE_DESC_UNUSED(rx_ring); 509 510 struct ice_tx_ring *xdp_ring; 510 511 unsigned int xdp_xmit = 0; 511 512 struct bpf_prog *xdp_prog; ··· 519 522 while (likely(total_rx_packets < (unsigned int)budget)) { 520 523 union ice_32b_rx_flex_desc *rx_desc; 521 524 unsigned int size, xdp_res = 0; 522 - struct xdp_buff **xdp; 525 + struct xdp_buff *xdp; 523 526 struct sk_buff *skb; 524 527 u16 stat_err_bits; 525 528 u16 vlan_tag = 0; ··· 537 540 */ 538 541 dma_rmb(); 539 542 543 + xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean); 544 + 540 545 size = le16_to_cpu(rx_desc->wb.pkt_len) & 541 546 ICE_RX_FLX_DESC_PKT_LEN_M; 542 - if (!size) 543 - break; 547 + if (!size) { 548 + xdp->data = NULL; 549 + xdp->data_end = NULL; 550 + xdp->data_hard_start = NULL; 551 + xdp->data_meta = NULL; 552 + goto construct_skb; 553 + } 544 554 545 - xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean]; 546 - xsk_buff_set_size(*xdp, size); 547 - xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool); 555 + xsk_buff_set_size(xdp, size); 556 + xsk_buff_dma_sync_for_cpu(xdp, 
rx_ring->xsk_pool); 548 557 549 - xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring); 558 + xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring); 550 559 if (xdp_res) { 551 560 if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) 552 561 xdp_xmit |= xdp_res; 553 562 else 554 - xsk_buff_free(*xdp); 563 + xsk_buff_free(xdp); 555 564 556 - *xdp = NULL; 557 565 total_rx_bytes += size; 558 566 total_rx_packets++; 559 - cleaned_count++; 560 567 561 568 ice_bump_ntc(rx_ring); 562 569 continue; 563 570 } 564 - 571 + construct_skb: 565 572 /* XDP_PASS path */ 566 573 skb = ice_construct_skb_zc(rx_ring, xdp); 567 574 if (!skb) { ··· 573 572 break; 574 573 } 575 574 576 - cleaned_count++; 577 575 ice_bump_ntc(rx_ring); 578 576 579 577 if (eth_skb_pad(skb)) { ··· 594 594 ice_receive_skb(rx_ring, skb, vlan_tag); 595 595 } 596 596 597 - if (cleaned_count >= ICE_RX_BUF_WRITE) 598 - failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count); 597 + failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring)); 599 598 600 599 ice_finalize_xdp_rx(xdp_ring, xdp_xmit); 601 600 ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes); ··· 810 811 */ 811 812 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring) 812 813 { 813 - u16 i; 814 + u16 count_mask = rx_ring->count - 1; 815 + u16 ntc = rx_ring->next_to_clean; 816 + u16 ntu = rx_ring->next_to_use; 814 817 815 - for (i = 0; i < rx_ring->count; i++) { 816 - struct xdp_buff **xdp = &rx_ring->xdp_buf[i]; 818 + for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) { 819 + struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc); 817 820 818 - if (!xdp) 819 - continue; 820 - 821 - *xdp = NULL; 821 + xsk_buff_free(xdp); 822 822 } 823 823 } 824 824
+13 -6
drivers/net/ethernet/intel/igb/igb_main.c
··· 9254 9254 return __igb_shutdown(to_pci_dev(dev), NULL, 0); 9255 9255 } 9256 9256 9257 - static int __maybe_unused igb_resume(struct device *dev) 9257 + static int __maybe_unused __igb_resume(struct device *dev, bool rpm) 9258 9258 { 9259 9259 struct pci_dev *pdev = to_pci_dev(dev); 9260 9260 struct net_device *netdev = pci_get_drvdata(pdev); ··· 9297 9297 9298 9298 wr32(E1000_WUS, ~0); 9299 9299 9300 - rtnl_lock(); 9300 + if (!rpm) 9301 + rtnl_lock(); 9301 9302 if (!err && netif_running(netdev)) 9302 9303 err = __igb_open(netdev, true); 9303 9304 9304 9305 if (!err) 9305 9306 netif_device_attach(netdev); 9306 - rtnl_unlock(); 9307 + if (!rpm) 9308 + rtnl_unlock(); 9307 9309 9308 9310 return err; 9311 + } 9312 + 9313 + static int __maybe_unused igb_resume(struct device *dev) 9314 + { 9315 + return __igb_resume(dev, false); 9309 9316 } 9310 9317 9311 9318 static int __maybe_unused igb_runtime_idle(struct device *dev) ··· 9333 9326 9334 9327 static int __maybe_unused igb_runtime_resume(struct device *dev) 9335 9328 { 9336 - return igb_resume(dev); 9329 + return __igb_resume(dev, true); 9337 9330 } 9338 9331 9339 9332 static void igb_shutdown(struct pci_dev *pdev) ··· 9449 9442 * @pdev: Pointer to PCI device 9450 9443 * 9451 9444 * Restart the card from scratch, as if from a cold-boot. Implementation 9452 - * resembles the first-half of the igb_resume routine. 9445 + * resembles the first-half of the __igb_resume routine. 9453 9446 **/ 9454 9447 static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev) 9455 9448 { ··· 9489 9482 * 9490 9483 * This callback is called when the error recovery driver tells us that 9491 9484 * its OK to resume normal operation. Implementation resembles the 9492 - * second-half of the igb_resume routine. 9485 + * second-half of the __igb_resume routine. 9493 9486 */ 9494 9487 static void igb_io_resume(struct pci_dev *pdev) 9495 9488 {
+6
drivers/net/ethernet/intel/igc/igc_main.c
··· 5467 5467 mod_timer(&adapter->watchdog_timer, jiffies + 1); 5468 5468 } 5469 5469 5470 + if (icr & IGC_ICR_TS) 5471 + igc_tsync_interrupt(adapter); 5472 + 5470 5473 napi_schedule(&q_vector->napi); 5471 5474 5472 5475 return IRQ_HANDLED; ··· 5512 5509 if (!test_bit(__IGC_DOWN, &adapter->state)) 5513 5510 mod_timer(&adapter->watchdog_timer, jiffies + 1); 5514 5511 } 5512 + 5513 + if (icr & IGC_ICR_TS) 5514 + igc_tsync_interrupt(adapter); 5515 5515 5516 5516 napi_schedule(&q_vector->napi); 5517 5517
+14 -1
drivers/net/ethernet/intel/igc/igc_ptp.c
··· 768 768 */ 769 769 static bool igc_is_crosststamp_supported(struct igc_adapter *adapter) 770 770 { 771 - return IS_ENABLED(CONFIG_X86_TSC) ? pcie_ptm_enabled(adapter->pdev) : false; 771 + if (!IS_ENABLED(CONFIG_X86_TSC)) 772 + return false; 773 + 774 + /* FIXME: it was noticed that enabling support for PCIe PTM in 775 + * some i225-V models could cause lockups when bringing the 776 + * interface up/down. There should be no downsides to 777 + * disabling crosstimestamping support for i225-V, as it 778 + * doesn't have any PTP support. That way we gain some time 779 + * while root causing the issue. 780 + */ 781 + if (adapter->pdev->device == IGC_DEV_ID_I225_V) 782 + return false; 783 + 784 + return pcie_ptm_enabled(adapter->pdev); 772 785 } 773 786 774 787 static struct system_counterval_t igc_device_tstamp_to_system(u64 tstamp)
+25 -11
drivers/net/ethernet/lantiq_xrx200.c
··· 71 71 struct xrx200_chan chan_tx; 72 72 struct xrx200_chan chan_rx; 73 73 74 + u16 rx_buf_size; 75 + 74 76 struct net_device *net_dev; 75 77 struct device *dev; 76 78 ··· 99 97 xrx200_pmac_w32(priv, val, offset); 100 98 } 101 99 100 + static int xrx200_max_frame_len(int mtu) 101 + { 102 + return VLAN_ETH_HLEN + mtu; 103 + } 104 + 105 + static int xrx200_buffer_size(int mtu) 106 + { 107 + return round_up(xrx200_max_frame_len(mtu), 4 * XRX200_DMA_BURST_LEN); 108 + } 109 + 102 110 /* drop all the packets from the DMA ring */ 103 111 static void xrx200_flush_dma(struct xrx200_chan *ch) 104 112 { ··· 121 109 break; 122 110 123 111 desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | 124 - (ch->priv->net_dev->mtu + VLAN_ETH_HLEN + 125 - ETH_FCS_LEN); 112 + ch->priv->rx_buf_size; 126 113 ch->dma.desc++; 127 114 ch->dma.desc %= LTQ_DESC_NUM; 128 115 } ··· 169 158 170 159 static int xrx200_alloc_skb(struct xrx200_chan *ch) 171 160 { 172 - int len = ch->priv->net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN; 173 161 struct sk_buff *skb = ch->skb[ch->dma.desc]; 162 + struct xrx200_priv *priv = ch->priv; 174 163 dma_addr_t mapping; 175 164 int ret = 0; 176 165 177 - ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev, 178 - len); 166 + ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(priv->net_dev, 167 + priv->rx_buf_size); 179 168 if (!ch->skb[ch->dma.desc]) { 180 169 ret = -ENOMEM; 181 170 goto skip; 182 171 } 183 172 184 - mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data, 185 - len, DMA_FROM_DEVICE); 186 - if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) { 173 + mapping = dma_map_single(priv->dev, ch->skb[ch->dma.desc]->data, 174 + priv->rx_buf_size, DMA_FROM_DEVICE); 175 + if (unlikely(dma_mapping_error(priv->dev, mapping))) { 187 176 dev_kfree_skb_any(ch->skb[ch->dma.desc]); 188 177 ch->skb[ch->dma.desc] = skb; 189 178 ret = -ENOMEM; ··· 195 184 wmb(); 196 185 skip: 197 186 ch->dma.desc_base[ch->dma.desc].ctl = 198 - 
LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | len; 187 + LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | priv->rx_buf_size; 199 188 200 189 return ret; 201 190 } ··· 224 213 skb->protocol = eth_type_trans(skb, net_dev); 225 214 netif_receive_skb(skb); 226 215 net_dev->stats.rx_packets++; 227 - net_dev->stats.rx_bytes += len - ETH_FCS_LEN; 216 + net_dev->stats.rx_bytes += len; 228 217 229 218 return 0; 230 219 } ··· 367 356 int ret = 0; 368 357 369 358 net_dev->mtu = new_mtu; 359 + priv->rx_buf_size = xrx200_buffer_size(new_mtu); 370 360 371 361 if (new_mtu <= old_mtu) 372 362 return ret; ··· 387 375 ret = xrx200_alloc_skb(ch_rx); 388 376 if (ret) { 389 377 net_dev->mtu = old_mtu; 378 + priv->rx_buf_size = xrx200_buffer_size(old_mtu); 390 379 break; 391 380 } 392 381 dev_kfree_skb_any(skb); ··· 518 505 net_dev->netdev_ops = &xrx200_netdev_ops; 519 506 SET_NETDEV_DEV(net_dev, dev); 520 507 net_dev->min_mtu = ETH_ZLEN; 521 - net_dev->max_mtu = XRX200_DMA_DATA_LEN - VLAN_ETH_HLEN - ETH_FCS_LEN; 508 + net_dev->max_mtu = XRX200_DMA_DATA_LEN - xrx200_max_frame_len(0); 509 + priv->rx_buf_size = xrx200_buffer_size(ETH_DATA_LEN); 522 510 523 511 /* load the memory ranges */ 524 512 priv->pmac_reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
+22 -13
drivers/net/ethernet/marvell/prestera/prestera_main.c
··· 54 54 struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw, 55 55 u32 dev_id, u32 hw_id) 56 56 { 57 - struct prestera_port *port = NULL; 57 + struct prestera_port *port = NULL, *tmp; 58 58 59 59 read_lock(&sw->port_list_lock); 60 - list_for_each_entry(port, &sw->port_list, list) { 61 - if (port->dev_id == dev_id && port->hw_id == hw_id) 60 + list_for_each_entry(tmp, &sw->port_list, list) { 61 + if (tmp->dev_id == dev_id && tmp->hw_id == hw_id) { 62 + port = tmp; 62 63 break; 64 + } 63 65 } 64 66 read_unlock(&sw->port_list_lock); 65 67 ··· 70 68 71 69 struct prestera_port *prestera_find_port(struct prestera_switch *sw, u32 id) 72 70 { 73 - struct prestera_port *port = NULL; 71 + struct prestera_port *port = NULL, *tmp; 74 72 75 73 read_lock(&sw->port_list_lock); 76 - list_for_each_entry(port, &sw->port_list, list) { 77 - if (port->id == id) 74 + list_for_each_entry(tmp, &sw->port_list, list) { 75 + if (tmp->id == id) { 76 + port = tmp; 78 77 break; 78 + } 79 79 } 80 80 read_unlock(&sw->port_list_lock); 81 81 ··· 768 764 struct net_device *dev, 769 765 unsigned long event, void *ptr) 770 766 { 771 - struct netdev_notifier_changeupper_info *info = ptr; 767 + struct netdev_notifier_info *info = ptr; 768 + struct netdev_notifier_changeupper_info *cu_info; 772 769 struct prestera_port *port = netdev_priv(dev); 773 770 struct netlink_ext_ack *extack; 774 771 struct net_device *upper; 775 772 776 - extack = netdev_notifier_info_to_extack(&info->info); 777 - upper = info->upper_dev; 773 + extack = netdev_notifier_info_to_extack(info); 774 + cu_info = container_of(info, 775 + struct netdev_notifier_changeupper_info, 776 + info); 778 777 779 778 switch (event) { 780 779 case NETDEV_PRECHANGEUPPER: 780 + upper = cu_info->upper_dev; 781 781 if (!netif_is_bridge_master(upper) && 782 782 !netif_is_lag_master(upper)) { 783 783 NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type"); 784 784 return -EINVAL; 785 785 } 786 786 787 - if (!info->linking) 787 
+ if (!cu_info->linking) 788 788 break; 789 789 790 790 if (netdev_has_any_upper_dev(upper)) { ··· 797 789 } 798 790 799 791 if (netif_is_lag_master(upper) && 800 - !prestera_lag_master_check(upper, info->upper_info, extack)) 792 + !prestera_lag_master_check(upper, cu_info->upper_info, extack)) 801 793 return -EOPNOTSUPP; 802 794 if (netif_is_lag_master(upper) && vlan_uses_dev(dev)) { 803 795 NL_SET_ERR_MSG_MOD(extack, ··· 813 805 break; 814 806 815 807 case NETDEV_CHANGEUPPER: 808 + upper = cu_info->upper_dev; 816 809 if (netif_is_bridge_master(upper)) { 817 - if (info->linking) 810 + if (cu_info->linking) 818 811 return prestera_bridge_port_join(upper, port, 819 812 extack); 820 813 else 821 814 prestera_bridge_port_leave(upper, port); 822 815 } else if (netif_is_lag_master(upper)) { 823 - if (info->linking) 816 + if (cu_info->linking) 824 817 return prestera_lag_port_add(port, upper); 825 818 else 826 819 prestera_lag_port_del(port);
+2 -3
drivers/net/ethernet/mellanox/mlx5/core/en.h
··· 783 783 DECLARE_BITMAP(state, MLX5E_CHANNEL_NUM_STATES);
784 784 int ix;
785 785 int cpu;
786 + /* Sync between icosq recovery and XSK enable/disable. */
787 + struct mutex icosq_recovery_lock;
786 788 };
787 789 
788 790 struct mlx5e_ptp;
··· 1016 1014 void mlx5e_destroy_rq(struct mlx5e_rq *rq);
1017 1015 
1018 1016 struct mlx5e_sq_param;
1019 - int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
1020 - struct mlx5e_sq_param *param, struct mlx5e_icosq *sq);
1021 - void mlx5e_close_icosq(struct mlx5e_icosq *sq);
1022 1017 int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
1023 1018 struct mlx5e_sq_param *param, struct xsk_buff_pool *xsk_pool,
1024 1019 struct mlx5e_xdpsq *sq, bool is_redirect);
+2
drivers/net/ethernet/mellanox/mlx5/core/en/health.h
··· 30 30 void mlx5e_reporter_icosq_cqe_err(struct mlx5e_icosq *icosq);
31 31 void mlx5e_reporter_rq_cqe_err(struct mlx5e_rq *rq);
32 32 void mlx5e_reporter_rx_timeout(struct mlx5e_rq *rq);
33 + void mlx5e_reporter_icosq_suspend_recovery(struct mlx5e_channel *c);
34 + void mlx5e_reporter_icosq_resume_recovery(struct mlx5e_channel *c);
33 35 
34 36 #define MLX5E_REPORTER_PER_Q_MAX_LEN 256
35 37 
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.h
··· 66 66 
67 67 static inline void
68 68 mlx5e_rep_tc_receive(struct mlx5_cqe64 *cqe, struct mlx5e_rq *rq,
69 - struct sk_buff *skb) {}
69 + struct sk_buff *skb) { napi_gro_receive(rq->cq.napi, skb); }
70 70 
71 71 #endif /* CONFIG_MLX5_CLS_ACT */
72 72 
+34 -1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
··· 62 62 
63 63 static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx)
64 64 {
65 + struct mlx5e_rq *xskrq = NULL;
65 66 struct mlx5_core_dev *mdev;
66 67 struct mlx5e_icosq *icosq;
67 68 struct net_device *dev;
··· 71 70 int err;
72 71 
73 72 icosq = ctx;
73 + 
74 + mutex_lock(&icosq->channel->icosq_recovery_lock);
75 + 
76 + /* mlx5e_close_rq cancels this work before RQ and ICOSQ are killed. */
74 77 rq = &icosq->channel->rq;
78 + if (test_bit(MLX5E_RQ_STATE_ENABLED, &icosq->channel->xskrq.state))
79 + xskrq = &icosq->channel->xskrq;
75 80 mdev = icosq->channel->mdev;
76 81 dev = icosq->channel->netdev;
77 82 err = mlx5_core_query_sq_state(mdev, icosq->sqn, &state);
··· 91 84 goto out;
92 85 
93 86 mlx5e_deactivate_rq(rq);
87 + if (xskrq)
88 + mlx5e_deactivate_rq(xskrq);
89 + 
94 90 err = mlx5e_wait_for_icosq_flush(icosq);
95 91 if (err)
96 92 goto out;
··· 107 97 goto out;
108 98 
109 99 mlx5e_reset_icosq_cc_pc(icosq);
100 + 
110 101 mlx5e_free_rx_in_progress_descs(rq);
102 + if (xskrq)
103 + mlx5e_free_rx_in_progress_descs(xskrq);
104 + 
111 105 clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
112 106 mlx5e_activate_icosq(icosq);
113 - mlx5e_activate_rq(rq);
114 107 
108 + mlx5e_activate_rq(rq);
115 109 rq->stats->recover++;
110 + 
111 + if (xskrq) {
112 + mlx5e_activate_rq(xskrq);
113 + xskrq->stats->recover++;
114 + }
115 + 
116 + mutex_unlock(&icosq->channel->icosq_recovery_lock);
117 + 
116 118 return 0;
117 119 out:
118 120 clear_bit(MLX5E_SQ_STATE_RECOVERING, &icosq->state);
121 + mutex_unlock(&icosq->channel->icosq_recovery_lock);
119 122 return err;
120 123 }
··· 727 704 snprintf(err_str, sizeof(err_str), "ERR CQE on ICOSQ: 0x%x", icosq->sqn);
728 705 
729 706 mlx5e_health_report(priv, priv->rx_reporter, err_str, &err_ctx);
707 + }
708 + 
709 + void mlx5e_reporter_icosq_suspend_recovery(struct mlx5e_channel *c)
710 + {
711 + mutex_lock(&c->icosq_recovery_lock);
712 + }
713 + 
714 + void mlx5e_reporter_icosq_resume_recovery(struct mlx5e_channel *c)
715 + {
716 + mutex_unlock(&c->icosq_recovery_lock);
730 717 }
731 718 
732 719 static const struct devlink_health_reporter_ops mlx5_rx_reporter_ops = {
+9 -1
drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
··· 466 466 return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
467 467 }
468 468 
469 + static int mlx5e_tx_reporter_timeout_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
470 + void *ctx)
471 + {
472 + struct mlx5e_tx_timeout_ctx *to_ctx = ctx;
473 + 
474 + return mlx5e_tx_reporter_dump_sq(priv, fmsg, to_ctx->sq);
475 + }
476 + 
469 477 static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
470 478 struct devlink_fmsg *fmsg)
471 479 {
··· 569 561 to_ctx.sq = sq;
570 562 err_ctx.ctx = &to_ctx;
571 563 err_ctx.recover = mlx5e_tx_reporter_timeout_recover;
572 - err_ctx.dump = mlx5e_tx_reporter_dump_sq;
564 + err_ctx.dump = mlx5e_tx_reporter_timeout_dump;
573 565 snprintf(err_str, sizeof(err_str),
574 566 "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u",
575 567 sq->ch_ix, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc,
+15 -1
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
··· 4 4 #include "setup.h"
5 5 #include "en/params.h"
6 6 #include "en/txrx.h"
7 + #include "en/health.h"
7 8 
8 9 /* It matches XDP_UMEM_MIN_CHUNK_SIZE, but as this constant is private and may
9 10 * change unexpectedly, and mlx5e has a minimum valid stride size for striding
··· 171 170 
172 171 void mlx5e_activate_xsk(struct mlx5e_channel *c)
173 172 {
173 + /* ICOSQ recovery deactivates RQs. Suspend the recovery to avoid
174 + * activating XSKRQ in the middle of recovery.
175 + */
176 + mlx5e_reporter_icosq_suspend_recovery(c);
174 177 set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
178 + mlx5e_reporter_icosq_resume_recovery(c);
179 + 
175 180 /* TX queue is created active. */
176 181 
177 182 spin_lock_bh(&c->async_icosq_lock);
··· 187 180 
188 181 void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
189 182 {
190 - mlx5e_deactivate_rq(&c->xskrq);
183 + /* ICOSQ recovery may reactivate XSKRQ if clear_bit is called in the
184 + * middle of recovery. Suspend the recovery to avoid it.
185 + */
186 + mlx5e_reporter_icosq_suspend_recovery(c);
187 + clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
188 + mlx5e_reporter_icosq_resume_recovery(c);
189 + synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
190 + 
191 191 /* TX queue is disabled on close. */
192 192 }
+32 -16
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 1087 1087 void mlx5e_close_rq(struct mlx5e_rq *rq)
1088 1088 {
1089 1089 cancel_work_sync(&rq->dim.work);
1090 - if (rq->icosq)
1091 - cancel_work_sync(&rq->icosq->recover_work);
1092 1090 cancel_work_sync(&rq->recover_work);
1093 1091 mlx5e_destroy_rq(rq);
1094 1092 mlx5e_free_rx_descs(rq);
··· 1214 1216 mlx5e_reporter_icosq_cqe_err(sq);
1215 1217 }
1216 1218 
1219 + static void mlx5e_async_icosq_err_cqe_work(struct work_struct *recover_work)
1220 + {
1221 + struct mlx5e_icosq *sq = container_of(recover_work, struct mlx5e_icosq,
1222 + recover_work);
1223 + 
1224 + /* Not implemented yet. */
1225 + 
1226 + netdev_warn(sq->channel->netdev, "async_icosq recovery is not implemented\n");
1227 + }
1228 + 
1217 1229 static int mlx5e_alloc_icosq(struct mlx5e_channel *c,
1218 1230 struct mlx5e_sq_param *param,
1219 - struct mlx5e_icosq *sq)
1231 + struct mlx5e_icosq *sq,
1232 + work_func_t recover_work_func)
1220 1233 {
1221 1234 void *sqc_wq = MLX5_ADDR_OF(sqc, param->sqc, wq);
1222 1235 struct mlx5_core_dev *mdev = c->mdev;
··· 1248 1239 if (err)
1249 1240 goto err_sq_wq_destroy;
1250 1241 
1251 - INIT_WORK(&sq->recover_work, mlx5e_icosq_err_cqe_work);
1242 + INIT_WORK(&sq->recover_work, recover_work_func);
1252 1243 
1253 1244 return 0;
1254 1245 
··· 1584 1575 mlx5e_reporter_tx_err_cqe(sq);
1585 1576 }
1586 1577 
1587 - int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
1588 - struct mlx5e_sq_param *param, struct mlx5e_icosq *sq)
1578 + static int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
1579 + struct mlx5e_sq_param *param, struct mlx5e_icosq *sq,
1580 + work_func_t recover_work_func)
1589 1581 {
1590 1582 struct mlx5e_create_sq_param csp = {};
1591 1583 int err;
1592 1584 
1593 - err = mlx5e_alloc_icosq(c, param, sq);
1585 + err = mlx5e_alloc_icosq(c, param, sq, recover_work_func);
1594 1586 if (err)
1595 1587 return err;
1596 1588 
··· 1630 1620 synchronize_net(); /* Sync with NAPI. */
1631 1621 }
1632 1622 
1633 - void mlx5e_close_icosq(struct mlx5e_icosq *sq)
1623 + static void mlx5e_close_icosq(struct mlx5e_icosq *sq)
1634 1624 {
1635 1625 struct mlx5e_channel *c = sq->channel;
1636 1626 
··· 2094 2084 
2095 2085 spin_lock_init(&c->async_icosq_lock);
2096 2086 
2097 - err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq);
2087 + err = mlx5e_open_icosq(c, params, &cparam->async_icosq, &c->async_icosq,
2088 + mlx5e_async_icosq_err_cqe_work);
2098 2089 if (err)
2099 2090 goto err_close_xdpsq_cq;
2100 2091 
2101 - err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq);
2092 + mutex_init(&c->icosq_recovery_lock);
2093 + 
2094 + err = mlx5e_open_icosq(c, params, &cparam->icosq, &c->icosq,
2095 + mlx5e_icosq_err_cqe_work);
2102 2096 if (err)
2103 2097 goto err_close_async_icosq;
2104 2098 
··· 2170 2156 mlx5e_close_xdpsq(&c->xdpsq);
2171 2157 if (c->xdp)
2172 2158 mlx5e_close_xdpsq(&c->rq_xdpsq);
2159 + /* The same ICOSQ is used for UMRs for both RQ and XSKRQ. */
2160 + cancel_work_sync(&c->icosq.recover_work);
2173 2161 mlx5e_close_rq(&c->rq);
2174 2162 mlx5e_close_sqs(c);
2175 2163 mlx5e_close_icosq(&c->icosq);
2164 + mutex_destroy(&c->icosq_recovery_lock);
2176 2165 mlx5e_close_icosq(&c->async_icosq);
2177 2166 if (c->xdp)
2178 2167 mlx5e_close_cq(&c->rq_xdpsq.cq);
··· 3741 3724 
3742 3725 static int mlx5e_handle_feature(struct net_device *netdev,
3743 3726 netdev_features_t *features,
3744 - netdev_features_t wanted_features,
3745 3727 netdev_features_t feature,
3746 3728 mlx5e_feature_handler feature_handler)
3747 3729 {
3748 - netdev_features_t changes = wanted_features ^ netdev->features;
3749 - bool enable = !!(wanted_features & feature);
3730 + netdev_features_t changes = *features ^ netdev->features;
3731 + bool enable = !!(*features & feature);
3750 3732 int err;
3751 3733 
3752 3734 if (!(changes & feature))
··· 3753 3737 
3754 3738 err = feature_handler(netdev, enable);
3755 3739 if (err) {
3740 + MLX5E_SET_FEATURE(features, feature, !enable);
3756 3741 netdev_err(netdev, "%s feature %pNF failed, err %d\n",
3757 3742 enable ? "Enable" : "Disable", &feature, err);
3758 3743 return err;
3759 3744 }
3760 3745 
3761 - MLX5E_SET_FEATURE(features, feature, enable);
3762 3746 return 0;
3763 3747 }
3764 3748 
3765 3749 int mlx5e_set_features(struct net_device *netdev, netdev_features_t features)
3766 3750 {
3767 - netdev_features_t oper_features = netdev->features;
3751 + netdev_features_t oper_features = features;
3768 3752 int err = 0;
3769 3753 
3770 3754 #define MLX5E_HANDLE_FEATURE(feature, handler) \
3771 - mlx5e_handle_feature(netdev, &oper_features, features, feature, handler)
3755 + mlx5e_handle_feature(netdev, &oper_features, feature, handler)
3772 3756 
3773 3757 err |= MLX5E_HANDLE_FEATURE(NETIF_F_LRO, set_feature_lro);
3774 3758 err |= MLX5E_HANDLE_FEATURE(NETIF_F_GRO_HW, set_feature_hw_gro);
+17 -16
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1196 1196 if (attr->flags & MLX5_ESW_ATTR_FLAG_SLOW_PATH)
1197 1197 goto offload_rule_0;
1198 1198 
1199 - if (flow_flag_test(flow, CT)) {
1200 - mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
1201 - return;
1202 - }
1203 - 
1204 - if (flow_flag_test(flow, SAMPLE)) {
1205 - mlx5e_tc_sample_unoffload(get_sample_priv(flow->priv), flow->rule[0], attr);
1206 - return;
1207 - }
1208 - 
1209 1199 if (attr->esw_attr->split_count)
1210 1200 mlx5_eswitch_del_fwd_rule(esw, flow->rule[1], attr);
1211 1201 
1202 + if (flow_flag_test(flow, CT))
1203 + mlx5_tc_ct_delete_flow(get_ct_priv(flow->priv), flow, attr);
1204 + else if (flow_flag_test(flow, SAMPLE))
1205 + mlx5e_tc_sample_unoffload(get_sample_priv(flow->priv), flow->rule[0], attr);
1206 + else
1212 1207 offload_rule_0:
1213 - mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
1208 + mlx5_eswitch_del_offloaded_rule(esw, flow->rule[0], attr);
1214 1209 }
1215 1210 
1216 1211 struct mlx5_flow_handle *
··· 1440 1445 MLX5_FLOW_NAMESPACE_FDB, VPORT_TO_REG,
1441 1446 metadata);
1442 1447 if (err)
1443 - return err;
1448 + goto err_out;
1449 + 
1450 + attr->action |= MLX5_FLOW_CONTEXT_ACTION_MOD_HDR;
1444 1451 }
1445 1452 }
1446 1453 
··· 1458 1461 if (attr->chain) {
1459 1462 NL_SET_ERR_MSG_MOD(extack,
1460 1463 "Internal port rule is only supported on chain 0");
1461 - return -EOPNOTSUPP;
1464 + err = -EOPNOTSUPP;
1465 + goto err_out;
1462 1466 }
1463 1467 
1464 1468 if (attr->dest_chain) {
1465 1469 NL_SET_ERR_MSG_MOD(extack,
1466 1470 "Internal port rule offload doesn't support goto action");
1467 - return -EOPNOTSUPP;
1471 + err = -EOPNOTSUPP;
1472 + goto err_out;
1468 1473 }
1469 1474 
1470 1475 int_port = mlx5e_tc_int_port_get(mlx5e_get_int_port_priv(priv),
··· 1474 1475 flow_flag_test(flow, EGRESS) ?
1475 1476 MLX5E_TC_INT_PORT_EGRESS :
1476 1477 MLX5E_TC_INT_PORT_INGRESS);
1477 - if (IS_ERR(int_port))
1478 - return PTR_ERR(int_port);
1478 + if (IS_ERR(int_port)) {
1479 + err = PTR_ERR(int_port);
1480 + goto err_out;
1481 + }
1479 1482 
1480 1483 esw_attr->int_port = int_port;
1481 1484 }
+3
drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
··· 121 121 
122 122 u32 mlx5_chains_get_prio_range(struct mlx5_fs_chains *chains)
123 123 {
124 + if (!mlx5_chains_prios_supported(chains))
125 + return 1;
126 + 
124 127 if (mlx5_chains_ignore_flow_level_supported(chains))
125 128 return UINT_MAX;
126 129 
+6 -5
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1809 1809 
1810 1810 int mlx5_recover_device(struct mlx5_core_dev *dev)
1811 1811 {
1812 - int ret = -EIO;
1812 + if (!mlx5_core_is_sf(dev)) {
1813 + mlx5_pci_disable_device(dev);
1814 + if (mlx5_pci_slot_reset(dev->pdev) != PCI_ERS_RESULT_RECOVERED)
1815 + return -EIO;
1816 + }
1813 1817 
1814 - mlx5_pci_disable_device(dev);
1815 - if (mlx5_pci_slot_reset(dev->pdev) == PCI_ERS_RESULT_RECOVERED)
1816 - ret = mlx5_load_one(dev);
1817 - return ret;
1818 + return mlx5_load_one(dev);
1818 1819 }
1819 1820 
1820 1821 static struct pci_driver mlx5_core_driver = {
+3 -3
drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
··· 356 356 new_irq = irq_pool_create_irq(pool, affinity);
357 357 if (IS_ERR(new_irq)) {
358 358 if (!least_loaded_irq) {
359 - mlx5_core_err(pool->dev, "Didn't find IRQ for cpu = %u\n",
360 - cpumask_first(affinity));
359 + mlx5_core_err(pool->dev, "Didn't find a matching IRQ. err = %ld\n",
360 + PTR_ERR(new_irq));
361 361 mutex_unlock(&pool->lock);
362 362 return new_irq;
363 363 }
··· 398 398 cpumask_copy(irq->mask, affinity);
399 399 if (!irq_pool_is_sf_pool(pool) && !pool->xa_num_irqs.max &&
400 400 cpumask_empty(irq->mask))
401 - cpumask_set_cpu(0, irq->mask);
401 + cpumask_set_cpu(cpumask_first(cpu_online_mask), irq->mask);
402 402 irq_set_affinity_hint(irq->irqn, irq->mask);
403 403 unlock:
404 404 mutex_unlock(&pool->lock);
+4 -5
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
··· 2 2 /* Copyright (c) 2019 Mellanox Technologies. */
3 3 
4 4 #include <linux/mlx5/eswitch.h>
5 + #include <linux/err.h>
5 6 #include "dr_types.h"
6 7 
7 8 #define DR_DOMAIN_SW_STEERING_SUPPORTED(dmn, dmn_type) \
··· 73 72 }
74 73 
75 74 dmn->uar = mlx5_get_uars_page(dmn->mdev);
76 - if (!dmn->uar) {
75 + if (IS_ERR(dmn->uar)) {
77 76 mlx5dr_err(dmn, "Couldn't allocate UAR\n");
78 - ret = -ENOMEM;
77 + ret = PTR_ERR(dmn->uar);
79 78 goto clean_pd;
80 79 }
81 80 
··· 164 163 
165 164 static int dr_domain_query_esw_mngr(struct mlx5dr_domain *dmn)
166 165 {
167 - return dr_domain_query_vport(dmn,
168 - dmn->info.caps.is_ecpf ? MLX5_VPORT_ECPF : 0,
169 - false,
166 + return dr_domain_query_vport(dmn, 0, false,
170 167 &dmn->info.caps.vports.esw_manager_caps);
171 168 }
172 169 
+2
drivers/net/ethernet/micrel/ks8851_par.c
··· 321 321 return ret;
322 322 
323 323 netdev->irq = platform_get_irq(pdev, 0);
324 + if (netdev->irq < 0)
325 + return netdev->irq;
324 326 
325 327 return ks8851_probe_common(netdev, dev, msg_enable);
326 328 }
+1 -1
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 3135 3135 return -EINVAL;
3136 3136 }
3137 3137 
3138 - lif->dbid_inuse = bitmap_alloc(lif->dbid_count, GFP_KERNEL);
3138 + lif->dbid_inuse = bitmap_zalloc(lif->dbid_count, GFP_KERNEL);
3139 3139 if (!lif->dbid_inuse) {
3140 3140 dev_err(dev, "Failed alloc doorbell id bitmap, aborting\n");
3141 3141 return -ENOMEM;
+1 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov.h
··· 201 201 struct qlcnic_info *, u16);
202 202 int qlcnic_sriov_cfg_vf_guest_vlan(struct qlcnic_adapter *, u16, u8);
203 203 void qlcnic_sriov_free_vlans(struct qlcnic_adapter *);
204 - void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *);
204 + int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *);
205 205 bool qlcnic_sriov_check_any_vlan(struct qlcnic_vf_info *);
206 206 void qlcnic_sriov_del_vlan_id(struct qlcnic_sriov *,
207 207 struct qlcnic_vf_info *, u16);
+9 -3
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
··· 432 432 struct qlcnic_cmd_args *cmd)
433 433 {
434 434 struct qlcnic_sriov *sriov = adapter->ahw->sriov;
435 - int i, num_vlans;
435 + int i, num_vlans, ret;
436 436 u16 *vlans;
437 437 
438 438 if (sriov->allowed_vlans)
··· 443 443 dev_info(&adapter->pdev->dev, "Number of allowed Guest VLANs = %d\n",
444 444 sriov->num_allowed_vlans);
445 445 
446 - qlcnic_sriov_alloc_vlans(adapter);
446 + ret = qlcnic_sriov_alloc_vlans(adapter);
447 + if (ret)
448 + return ret;
447 449 
448 450 if (!sriov->any_vlan)
449 451 return 0;
··· 2156 2154 return err;
2157 2155 }
2158 2156 
2159 - void qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
2157 + int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
2160 2158 {
2161 2159 struct qlcnic_sriov *sriov = adapter->ahw->sriov;
2162 2160 struct qlcnic_vf_info *vf;
··· 2166 2164 vf = &sriov->vf_info[i];
2167 2165 vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans,
2168 2166 sizeof(*vf->sriov_vlans), GFP_KERNEL);
2167 + if (!vf->sriov_vlans)
2168 + return -ENOMEM;
2169 2169 }
2170 + 
2171 + return 0;
2170 2172 }
2171 2173 
2172 2174 void qlcnic_sriov_free_vlans(struct qlcnic_adapter *adapter)
+3 -1
drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_pf.c
··· 597 597 if (err)
598 598 goto del_flr_queue;
599 599 
600 - qlcnic_sriov_alloc_vlans(adapter);
600 + err = qlcnic_sriov_alloc_vlans(adapter);
601 + if (err)
602 + goto del_flr_queue;
601 603 
602 604 return err;
603 605 
+4 -1
drivers/net/ethernet/sfc/falcon/rx.c
··· 728 728 efx->rx_bufs_per_page);
729 729 rx_queue->page_ring = kcalloc(page_ring_size,
730 730 sizeof(*rx_queue->page_ring), GFP_KERNEL);
731 - rx_queue->page_ptr_mask = page_ring_size - 1;
731 + if (!rx_queue->page_ring)
732 + rx_queue->page_ptr_mask = 0;
733 + else
734 + rx_queue->page_ptr_mask = page_ring_size - 1;
732 735 }
733 736 
734 737 void ef4_init_rx_queue(struct ef4_rx_queue *rx_queue)
+4 -1
drivers/net/ethernet/sfc/rx_common.c
··· 150 150 efx->rx_bufs_per_page);
151 151 rx_queue->page_ring = kcalloc(page_ring_size,
152 152 sizeof(*rx_queue->page_ring), GFP_KERNEL);
153 - rx_queue->page_ptr_mask = page_ring_size - 1;
153 + if (!rx_queue->page_ring)
154 + rx_queue->page_ptr_mask = 0;
155 + else
156 + rx_queue->page_ptr_mask = page_ring_size - 1;
154 157 }
155 158 
156 159 static void efx_fini_rx_recycle_ring(struct efx_rx_queue *rx_queue)
+5
drivers/net/ethernet/smsc/smc911x.c
··· 2072 2072 
2073 2073 ndev->dma = (unsigned char)-1;
2074 2074 ndev->irq = platform_get_irq(pdev, 0);
2075 + if (ndev->irq < 0) {
2076 + ret = ndev->irq;
2077 + goto release_both;
2078 + }
2079 + 
2075 2080 lp = netdev_priv(ndev);
2076 2081 lp->netdev = ndev;
2077 2082 #ifdef SMC_DYNAMIC_BUS_CONFIG
+1 -1
drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
··· 26 26 #define ETHER_CLK_SEL_FREQ_SEL_125M (BIT(9) | BIT(8))
27 27 #define ETHER_CLK_SEL_FREQ_SEL_50M BIT(9)
28 28 #define ETHER_CLK_SEL_FREQ_SEL_25M BIT(8)
29 - #define ETHER_CLK_SEL_FREQ_SEL_2P5M BIT(0)
29 + #define ETHER_CLK_SEL_FREQ_SEL_2P5M 0
30 30 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_IN BIT(0)
31 31 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_TXC BIT(10)
32 32 #define ETHER_CLK_SEL_TX_CLK_EXT_SEL_DIV BIT(11)
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
··· 102 102 time.tv_nsec = priv->plat->est->btr_reserve[0];
103 103 time.tv_sec = priv->plat->est->btr_reserve[1];
104 104 basetime = timespec64_to_ktime(time);
105 - cycle_time = priv->plat->est->ctr[1] * NSEC_PER_SEC +
105 + cycle_time = (u64)priv->plat->est->ctr[1] * NSEC_PER_SEC +
106 106 priv->plat->est->ctr[0];
107 107 time = stmmac_calc_tas_basetime(basetime,
108 108 current_time_ns,
+5
drivers/net/fjes/fjes_main.c
··· 1262 1262 hw->hw_res.start = res->start;
1263 1263 hw->hw_res.size = resource_size(res);
1264 1264 hw->hw_res.irq = platform_get_irq(plat_dev, 0);
1265 + if (hw->hw_res.irq < 0) {
1266 + err = hw->hw_res.irq;
1267 + goto err_free_control_wq;
1268 + }
1269 + 
1265 1270 err = fjes_hw_init(&adapter->hw);
1266 1271 if (err)
1267 1272 goto err_free_control_wq;
+2 -2
drivers/net/hamradio/mkiss.c
··· 794 794 */
795 795 netif_stop_queue(ax->dev);
796 796 
797 - ax->tty = NULL;
798 - 
799 797 unregister_netdev(ax->dev);
800 798 
801 799 /* Free all AX25 frame buffers after unreg. */
802 800 kfree(ax->rbuff);
803 801 kfree(ax->xbuff);
802 + 
803 + ax->tty = NULL;
804 804 
805 805 free_netdev(ax->dev);
806 806 }
+2 -2
drivers/net/phy/fixed_phy.c
··· 239 239 /* Check if we have a GPIO associated with this fixed phy */
240 240 if (!gpiod) {
241 241 gpiod = fixed_phy_get_gpiod(np);
242 - if (IS_ERR(gpiod))
243 - return ERR_CAST(gpiod);
242 + if (!gpiod)
243 + return ERR_PTR(-EINVAL);
244 244 }
245 245 
246 246 /* Get the next available PHY address, up to PHY_MAX_ADDR */
+59 -56
drivers/net/tun.c
··· 209 209 struct tun_prog __rcu *steering_prog;
210 210 struct tun_prog __rcu *filter_prog;
211 211 struct ethtool_link_ksettings link_ksettings;
212 + /* init args */
213 + struct file *file;
214 + struct ifreq *ifr;
212 215 };
213 216 
214 217 struct veth {
215 218 __be16 h_vlan_proto;
216 219 __be16 h_vlan_TCI;
217 220 };
221 + 
222 + static void tun_flow_init(struct tun_struct *tun);
223 + static void tun_flow_uninit(struct tun_struct *tun);
218 224 
219 225 static int tun_napi_receive(struct napi_struct *napi, int budget)
220 226 {
··· 959 953 
960 954 static const struct ethtool_ops tun_ethtool_ops;
961 955 
956 + static int tun_net_init(struct net_device *dev)
957 + {
958 + struct tun_struct *tun = netdev_priv(dev);
959 + struct ifreq *ifr = tun->ifr;
960 + int err;
961 + 
962 + dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
963 + if (!dev->tstats)
964 + return -ENOMEM;
965 + 
966 + spin_lock_init(&tun->lock);
967 + 
968 + err = security_tun_dev_alloc_security(&tun->security);
969 + if (err < 0) {
970 + free_percpu(dev->tstats);
971 + return err;
972 + }
973 + 
974 + tun_flow_init(tun);
975 + 
976 + dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
977 + TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
978 + NETIF_F_HW_VLAN_STAG_TX;
979 + dev->features = dev->hw_features | NETIF_F_LLTX;
980 + dev->vlan_features = dev->features &
981 + ~(NETIF_F_HW_VLAN_CTAG_TX |
982 + NETIF_F_HW_VLAN_STAG_TX);
983 + 
984 + tun->flags = (tun->flags & ~TUN_FEATURES) |
985 + (ifr->ifr_flags & TUN_FEATURES);
986 + 
987 + INIT_LIST_HEAD(&tun->disabled);
988 + err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI,
989 + ifr->ifr_flags & IFF_NAPI_FRAGS, false);
990 + if (err < 0) {
991 + tun_flow_uninit(tun);
992 + security_tun_dev_free_security(tun->security);
993 + free_percpu(dev->tstats);
994 + return err;
995 + }
996 + return 0;
997 + }
998 + 
962 999 /* Net device detach from fd. */
963 1000 static void tun_net_uninit(struct net_device *dev)
964 1001 {
··· 1218 1169 }
1219 1170 
1220 1171 static const struct net_device_ops tun_netdev_ops = {
1172 + .ndo_init = tun_net_init,
1221 1173 .ndo_uninit = tun_net_uninit,
1222 1174 .ndo_open = tun_net_open,
1223 1175 .ndo_stop = tun_net_close,
··· 1302 1252 }
1303 1253 
1304 1254 static const struct net_device_ops tap_netdev_ops = {
1255 + .ndo_init = tun_net_init,
1305 1256 .ndo_uninit = tun_net_uninit,
1306 1257 .ndo_open = tun_net_open,
1307 1258 .ndo_stop = tun_net_close,
··· 1343 1292 #define MAX_MTU 65535
1344 1293 
1345 1294 /* Initialize net device. */
1346 - static void tun_net_init(struct net_device *dev)
1295 + static void tun_net_initialize(struct net_device *dev)
1347 1296 {
1348 1297 struct tun_struct *tun = netdev_priv(dev);
1349 1298 
··· 2257 2206 BUG_ON(!(list_empty(&tun->disabled)));
2258 2207 
2259 2208 free_percpu(dev->tstats);
2260 - /* We clear tstats so that tun_set_iff() can tell if
2261 - * tun_free_netdev() has been called from register_netdevice().
2262 - */
2263 - dev->tstats = NULL;
2264 - 
2265 2209 tun_flow_uninit(tun);
2266 2210 security_tun_dev_free_security(tun->security);
2267 2211 __tun_set_ebpf(tun, &tun->steering_prog, NULL);
··· 2762 2716 tun->rx_batched = 0;
2763 2717 RCU_INIT_POINTER(tun->steering_prog, NULL);
2764 2718 
2765 - dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
2766 - if (!dev->tstats) {
2767 - err = -ENOMEM;
2768 - goto err_free_dev;
2769 - }
2719 + tun->ifr = ifr;
2720 + tun->file = file;
2770 2721 
2771 - spin_lock_init(&tun->lock);
2772 - 
2773 - err = security_tun_dev_alloc_security(&tun->security);
2774 - if (err < 0)
2775 - goto err_free_stat;
2776 - 
2777 - tun_net_init(dev);
2778 - tun_flow_init(tun);
2779 - 
2780 - dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
2781 - TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
2782 - NETIF_F_HW_VLAN_STAG_TX;
2783 - dev->features = dev->hw_features | NETIF_F_LLTX;
2784 - dev->vlan_features = dev->features &
2785 - ~(NETIF_F_HW_VLAN_CTAG_TX |
2786 - NETIF_F_HW_VLAN_STAG_TX);
2787 - 
2788 - tun->flags = (tun->flags & ~TUN_FEATURES) |
2789 - (ifr->ifr_flags & TUN_FEATURES);
2790 - 
2791 - INIT_LIST_HEAD(&tun->disabled);
2792 - err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
2793 - ifr->ifr_flags & IFF_NAPI_FRAGS, false);
2794 - if (err < 0)
2795 - goto err_free_flow;
2722 + tun_net_initialize(dev);
2796 2723 
2797 2724 err = register_netdevice(tun->dev);
2798 - if (err < 0)
2799 - goto err_detach;
2725 + if (err < 0) {
2726 + free_netdev(dev);
2727 + return err;
2728 + }
2800 2729 /* free_netdev() won't check refcnt, to avoid race
2801 2730 * with dev_put() we need publish tun after registration.
2802 2731 */
··· 2788 2767 
2789 2768 strcpy(ifr->ifr_name, tun->dev->name);
2790 2769 return 0;
2791 - 
2792 - err_detach:
2793 - tun_detach_all(dev);
2794 - /* We are here because register_netdevice() has failed.
2795 - * If register_netdevice() already called tun_free_netdev()
2796 - * while dealing with the error, dev->stats has been cleared.
2797 - */
2798 - if (!dev->tstats)
2799 - goto err_free_dev;
2800 - 
2801 - err_free_flow:
2802 - tun_flow_uninit(tun);
2803 - security_tun_dev_free_security(tun->security);
2804 - err_free_stat:
2805 - free_percpu(dev->tstats);
2806 - err_free_dev:
2807 - free_netdev(dev);
2808 - return err;
2809 2770 }
2810 2771 
2811 2772 static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr)
+5 -3
drivers/net/usb/asix_common.c
··· 9 9 
10 10 #include "asix.h"
11 11 
12 + #define AX_HOST_EN_RETRIES 30
13 + 
12 14 int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
13 15 u16 size, void *data, int in_pm)
14 16 {
··· 70 68 int i, ret;
71 69 u8 smsr;
72 70 
73 - for (i = 0; i < 30; ++i) {
71 + for (i = 0; i < AX_HOST_EN_RETRIES; ++i) {
74 72 ret = asix_set_sw_mii(dev, in_pm);
75 73 if (ret == -ENODEV || ret == -ETIMEDOUT)
76 74 break;
··· 79 77 0, 0, 1, &smsr, in_pm);
80 78 if (ret == -ENODEV)
81 79 break;
82 - else if (ret < 0)
80 + else if (ret < sizeof(smsr))
83 81 continue;
84 82 else if (smsr & AX_HOST_EN)
85 83 break;
86 84 }
87 85 
88 - return ret;
86 + return i >= AX_HOST_EN_RETRIES ? -ETIMEDOUT : ret;
89 87 }
90 88 
91 89 static void reset_asix_rx_fixup_info(struct asix_rx_fixup_info *rx)
+2 -2
drivers/net/usb/pegasus.c
··· 493 493 goto goon;
494 494 
495 495 rx_status = buf[count - 2];
496 - if (rx_status & 0x1e) {
496 + if (rx_status & 0x1c) {
497 497 netif_dbg(pegasus, rx_err, net,
498 498 "RX packet error %x\n", rx_status);
499 499 net->stats.rx_errors++;
500 - if (rx_status & 0x06) /* long or runt */
500 + if (rx_status & 0x04) /* runt */
501 501 net->stats.rx_length_errors++;
502 502 if (rx_status & 0x08)
503 503 net->stats.rx_crc_errors++;
+39 -4
drivers/net/usb/r8152.c
··· 32 32 #define NETNEXT_VERSION "12"
33 33 
34 34 /* Information for net */
35 - #define NET_VERSION "11"
35 + #define NET_VERSION "12"
36 36 
37 37 #define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION
38 38 #define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
··· 4016 4016 ocp_write_word(tp, type, PLA_BP_BA, 0);
4017 4017 }
4018 4018 
4019 + static inline void rtl_reset_ocp_base(struct r8152 *tp)
4020 + {
4021 + tp->ocp_base = -1;
4022 + }
4023 + 
4019 4024 static int rtl_phy_patch_request(struct r8152 *tp, bool request, bool wait)
4020 4025 {
4021 4026 u16 data, check;
··· 4091 4086 rtl_patch_key_set(tp, key_addr, 0);
4092 4087 
4093 4088 rtl_phy_patch_request(tp, false, wait);
4094 4089 
4095 - ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);
4096 4090 
4097 4091 return 0;
4098 4092 }
··· 4803 4800 u32 len;
4804 4801 u8 *data;
4805 4802 
4803 + rtl_reset_ocp_base(tp);
4804 + 
4806 4805 if (sram_read(tp, SRAM_GPHY_FW_VER) >= __le16_to_cpu(phy->version)) {
4807 4806 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");
4808 4807 return;
··· 4850 4845 }
4851 4846 }
4852 4847 
4853 - ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);
4848 + rtl_reset_ocp_base(tp);
4849 + 
4854 4850 rtl_phy_patch_request(tp, false, wait);
4855 4851 
4856 4852 if (sram_read(tp, SRAM_GPHY_FW_VER) == __le16_to_cpu(phy->version))
··· 4866 4860 
4867 4861 ver_addr = __le16_to_cpu(phy_ver->ver.addr);
4868 4862 ver = __le16_to_cpu(phy_ver->ver.data);
4863 + 
4864 + rtl_reset_ocp_base(tp);
4869 4865 
4870 4866 if (sram_read(tp, ver_addr) >= ver) {
4871 4867 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");
··· 4884 4876 static void rtl8152_fw_phy_fixup(struct r8152 *tp, struct fw_phy_fixup *fix)
4885 4877 {
4886 4878 u16 addr, data;
4879 + 
4880 + rtl_reset_ocp_base(tp);
4887 4881 
4888 4882 addr = __le16_to_cpu(fix->setting.addr);
4889 4883 data = ocp_reg_read(tp, addr);
··· 4918 4908 u32 length;
4919 4909 int i, num;
4920 4910 
4911 + rtl_reset_ocp_base(tp);
4912 + 
4921 4913 num = phy->pre_num;
4922 4914 for (i = 0; i < num; i++)
4923 4915 sram_write(tp, __le16_to_cpu(phy->pre_set[i].addr),
··· 4949 4937 u16 mode_reg, bp_index;
4950 4938 u32 length, i, num;
4951 4939 __le16 *data;
4940 + 
4941 + rtl_reset_ocp_base(tp);
4952 4942 
4953 4943 mode_reg = __le16_to_cpu(phy->mode_reg);
4954 4944 sram_write(tp, mode_reg, __le16_to_cpu(phy->mode_pre));
··· 5121 5107 if (rtl_fw->post_fw)
5122 5108 rtl_fw->post_fw(tp);
5123 5109 
5110 + rtl_reset_ocp_base(tp);
5124 5111 strscpy(rtl_fw->version, fw_hdr->version, RTL_VER_SIZE);
5125 5112 dev_info(&tp->intf->dev, "load %s successfully\n", rtl_fw->version);
5126 5113 }
··· 6599 6584 return true;
6600 6585 }
6601 6586 
6587 + static void r8156_mdio_force_mode(struct r8152 *tp)
6588 + {
6589 + u16 data;
6590 + 
6591 + /* Select force mode through 0xa5b4 bit 15
6592 + * 0: MDIO force mode
6593 + * 1: MMD force mode
6594 + */
6595 + data = ocp_reg_read(tp, 0xa5b4);
6596 + if (data & BIT(15)) {
6597 + data &= ~BIT(15);
6598 + ocp_reg_write(tp, 0xa5b4, data);
6599 + }
6600 + }
6601 + 
6602 6602 static void set_carrier(struct r8152 *tp)
6603 6603 {
6604 6604 struct net_device *netdev = tp->netdev;
··· 8046 8016 ocp_data |= ACT_ODMA;
8047 8017 ocp_write_byte(tp, MCU_TYPE_USB, USB_BMU_CONFIG, ocp_data);
8048 8018 
8019 + r8156_mdio_force_mode(tp);
8049 8020 rtl_tally_reset(tp);
8050 8021 
8051 8022 tp->coalesce = 15000; /* 15 us */
··· 8176 8145 ocp_data &= ~(RX_AGG_DISABLE | RX_ZERO_EN);
8177 8146 ocp_write_word(tp, MCU_TYPE_USB, USB_USB_CTRL, ocp_data);
8178 8147 
8148 + r8156_mdio_force_mode(tp);
8179 8149 rtl_tally_reset(tp);
8180 8150 
8181 8151 tp->coalesce = 15000; /* 15 us */
··· 8499 8467 
8500 8468 mutex_lock(&tp->control);
8501 8469 
8470 + rtl_reset_ocp_base(tp);
8471 + 
8502 8472 if (test_bit(SELECTIVE_SUSPEND, &tp->flags))
8503 8473 ret = rtl8152_runtime_resume(tp);
8504 8474 else
··· 8516 8482 struct r8152 *tp = usb_get_intfdata(intf);
8517 8483 
8518 8484 clear_bit(SELECTIVE_SUSPEND, &tp->flags);
8485 + rtl_reset_ocp_base(tp);
8519 8486 tp->rtl_ops.init(tp);
8520 8487 queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
8521 8488 set_ethernet_addr(tp, true);
+6 -2
drivers/net/veth.c
··· 879 879 880 880 stats->xdp_bytes += skb->len; 881 881 skb = veth_xdp_rcv_skb(rq, skb, bq, stats); 882 - if (skb) 883 - napi_gro_receive(&rq->xdp_napi, skb); 882 + if (skb) { 883 + if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC)) 884 + netif_receive_skb(skb); 885 + else 886 + napi_gro_receive(&rq->xdp_napi, skb); 887 + } 884 888 } 885 889 done++; 886 890 }
+1
drivers/net/xen-netback/common.h
··· 203 203 unsigned int rx_queue_max; 204 204 unsigned int rx_queue_len; 205 205 unsigned long last_rx_time; 206 + unsigned int rx_slots_needed; 206 207 bool stalled; 207 208 208 209 struct xenvif_copy_state rx_copy;
+49 -28
drivers/net/xen-netback/rx.c
··· 33 33 #include <xen/xen.h> 34 34 #include <xen/events.h> 35 35 36 + /* 37 + * Update the needed ring page slots for the first SKB queued. 38 + * Note that any call sequence outside the RX thread calling this function 39 + * needs to wake up the RX thread via a call of xenvif_kick_thread() 40 + * afterwards in order to avoid a race with putting the thread to sleep. 41 + */ 42 + static void xenvif_update_needed_slots(struct xenvif_queue *queue, 43 + const struct sk_buff *skb) 44 + { 45 + unsigned int needed = 0; 46 + 47 + if (skb) { 48 + needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); 49 + if (skb_is_gso(skb)) 50 + needed++; 51 + if (skb->sw_hash) 52 + needed++; 53 + } 54 + 55 + WRITE_ONCE(queue->rx_slots_needed, needed); 56 + } 57 + 36 58 static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue) 37 59 { 38 60 RING_IDX prod, cons; 39 - struct sk_buff *skb; 40 - int needed; 41 - unsigned long flags; 61 + unsigned int needed; 42 62 43 - spin_lock_irqsave(&queue->rx_queue.lock, flags); 44 - 45 - skb = skb_peek(&queue->rx_queue); 46 - if (!skb) { 47 - spin_unlock_irqrestore(&queue->rx_queue.lock, flags); 63 + needed = READ_ONCE(queue->rx_slots_needed); 64 + if (!needed) 48 65 return false; 49 - } 50 - 51 - needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); 52 - if (skb_is_gso(skb)) 53 - needed++; 54 - if (skb->sw_hash) 55 - needed++; 56 - 57 - spin_unlock_irqrestore(&queue->rx_queue.lock, flags); 58 66 59 67 do { 60 68 prod = queue->rx.sring->req_prod; ··· 88 80 89 81 spin_lock_irqsave(&queue->rx_queue.lock, flags); 90 82 91 - __skb_queue_tail(&queue->rx_queue, skb); 92 - 93 - queue->rx_queue_len += skb->len; 94 - if (queue->rx_queue_len > queue->rx_queue_max) { 83 + if (queue->rx_queue_len >= queue->rx_queue_max) { 95 84 struct net_device *dev = queue->vif->dev; 96 85 97 86 netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id)); 87 + kfree_skb(skb); 88 + queue->vif->dev->stats.rx_dropped++; 89 + } else { 90 + if (skb_queue_empty(&queue->rx_queue)) 91 + 
xenvif_update_needed_slots(queue, skb); 92 + 93 + __skb_queue_tail(&queue->rx_queue, skb); 94 + 95 + queue->rx_queue_len += skb->len; 98 96 } 99 97 100 98 spin_unlock_irqrestore(&queue->rx_queue.lock, flags); ··· 114 100 115 101 skb = __skb_dequeue(&queue->rx_queue); 116 102 if (skb) { 103 + xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue)); 104 + 117 105 queue->rx_queue_len -= skb->len; 118 106 if (queue->rx_queue_len < queue->rx_queue_max) { 119 107 struct netdev_queue *txq; ··· 150 134 break; 151 135 xenvif_rx_dequeue(queue); 152 136 kfree_skb(skb); 137 + queue->vif->dev->stats.rx_dropped++; 153 138 } 154 139 } 155 140 ··· 504 487 xenvif_rx_copy_flush(queue); 505 488 } 506 489 507 - static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue) 490 + static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue) 508 491 { 509 492 RING_IDX prod, cons; 510 493 511 494 prod = queue->rx.sring->req_prod; 512 495 cons = queue->rx.req_cons; 513 496 497 + return prod - cons; 498 + } 499 + 500 + static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue) 501 + { 502 + unsigned int needed = READ_ONCE(queue->rx_slots_needed); 503 + 514 504 return !queue->stalled && 515 - prod - cons < 1 && 505 + xenvif_rx_queue_slots(queue) < needed && 516 506 time_after(jiffies, 517 507 queue->last_rx_time + queue->vif->stall_timeout); 518 508 } 519 509 520 510 static bool xenvif_rx_queue_ready(struct xenvif_queue *queue) 521 511 { 522 - RING_IDX prod, cons; 512 + unsigned int needed = READ_ONCE(queue->rx_slots_needed); 523 513 524 - prod = queue->rx.sring->req_prod; 525 - cons = queue->rx.req_cons; 526 - 527 - return queue->stalled && prod - cons >= 1; 514 + return queue->stalled && xenvif_rx_queue_slots(queue) >= needed; 528 515 } 529 516 530 517 bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
+95 -32
drivers/net/xen-netfront.c
··· 148 148 grant_ref_t gref_rx_head; 149 149 grant_ref_t grant_rx_ref[NET_RX_RING_SIZE]; 150 150 151 + unsigned int rx_rsp_unconsumed; 152 + spinlock_t rx_cons_lock; 153 + 151 154 struct page_pool *page_pool; 152 155 struct xdp_rxq_info xdp_rxq; 153 156 }; ··· 379 376 return 0; 380 377 } 381 378 382 - static void xennet_tx_buf_gc(struct netfront_queue *queue) 379 + static bool xennet_tx_buf_gc(struct netfront_queue *queue) 383 380 { 384 381 RING_IDX cons, prod; 385 382 unsigned short id; 386 383 struct sk_buff *skb; 387 384 bool more_to_do; 385 + bool work_done = false; 388 386 const struct device *dev = &queue->info->netdev->dev; 389 387 390 388 BUG_ON(!netif_carrier_ok(queue->info->netdev)); ··· 401 397 402 398 for (cons = queue->tx.rsp_cons; cons != prod; cons++) { 403 399 struct xen_netif_tx_response txrsp; 400 + 401 + work_done = true; 404 402 405 403 RING_COPY_RESPONSE(&queue->tx, cons, &txrsp); 406 404 if (txrsp.status == XEN_NETIF_RSP_NULL) ··· 447 441 448 442 xennet_maybe_wake_tx(queue); 449 443 450 - return; 444 + return work_done; 451 445 452 446 err: 453 447 queue->info->broken = true; 454 448 dev_alert(dev, "Disabled for further use\n"); 449 + 450 + return work_done; 455 451 } 456 452 457 453 struct xennet_gnttab_make_txreq { ··· 842 834 return 0; 843 835 } 844 836 837 + static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val) 838 + { 839 + unsigned long flags; 840 + 841 + spin_lock_irqsave(&queue->rx_cons_lock, flags); 842 + queue->rx.rsp_cons = val; 843 + queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); 844 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 845 + } 846 + 845 847 static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb, 846 848 grant_ref_t ref) 847 849 { ··· 903 885 xennet_move_rx_slot(queue, skb, ref); 904 886 } while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE); 905 887 906 - queue->rx.rsp_cons = cons; 888 + xennet_set_rx_rsp_cons(queue, cons); 907 889 return err; 
908 890 } 909 891 ··· 1057 1039 } 1058 1040 1059 1041 if (unlikely(err)) 1060 - queue->rx.rsp_cons = cons + slots; 1042 + xennet_set_rx_rsp_cons(queue, cons + slots); 1061 1043 1062 1044 return err; 1063 1045 } ··· 1111 1093 __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); 1112 1094 } 1113 1095 if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) { 1114 - queue->rx.rsp_cons = ++cons + skb_queue_len(list); 1096 + xennet_set_rx_rsp_cons(queue, 1097 + ++cons + skb_queue_len(list)); 1115 1098 kfree_skb(nskb); 1116 1099 return -ENOENT; 1117 1100 } ··· 1125 1106 kfree_skb(nskb); 1126 1107 } 1127 1108 1128 - queue->rx.rsp_cons = cons; 1109 + xennet_set_rx_rsp_cons(queue, cons); 1129 1110 1130 1111 return 0; 1131 1112 } ··· 1248 1229 1249 1230 if (unlikely(xennet_set_skb_gso(skb, gso))) { 1250 1231 __skb_queue_head(&tmpq, skb); 1251 - queue->rx.rsp_cons += skb_queue_len(&tmpq); 1232 + xennet_set_rx_rsp_cons(queue, 1233 + queue->rx.rsp_cons + 1234 + skb_queue_len(&tmpq)); 1252 1235 goto err; 1253 1236 } 1254 1237 } ··· 1274 1253 1275 1254 __skb_queue_tail(&rxq, skb); 1276 1255 1277 - i = ++queue->rx.rsp_cons; 1256 + i = queue->rx.rsp_cons + 1; 1257 + xennet_set_rx_rsp_cons(queue, i); 1278 1258 work_done++; 1279 1259 } 1280 1260 if (need_xdp_flush) ··· 1439 1417 return 0; 1440 1418 } 1441 1419 1442 - static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id) 1420 + static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi) 1443 1421 { 1444 - struct netfront_queue *queue = dev_id; 1445 1422 unsigned long flags; 1446 1423 1447 - if (queue->info->broken) 1448 - return IRQ_HANDLED; 1424 + if (unlikely(queue->info->broken)) 1425 + return false; 1449 1426 1450 1427 spin_lock_irqsave(&queue->tx_lock, flags); 1451 - xennet_tx_buf_gc(queue); 1428 + if (xennet_tx_buf_gc(queue)) 1429 + *eoi = 0; 1452 1430 spin_unlock_irqrestore(&queue->tx_lock, flags); 1431 + 1432 + return true; 1433 + } 1434 + 1435 + static irqreturn_t xennet_tx_interrupt(int irq, void 
*dev_id) 1436 + { 1437 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1438 + 1439 + if (likely(xennet_handle_tx(dev_id, &eoiflag))) 1440 + xen_irq_lateeoi(irq, eoiflag); 1453 1441 1454 1442 return IRQ_HANDLED; 1455 1443 } 1456 1444 1445 + static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi) 1446 + { 1447 + unsigned int work_queued; 1448 + unsigned long flags; 1449 + 1450 + if (unlikely(queue->info->broken)) 1451 + return false; 1452 + 1453 + spin_lock_irqsave(&queue->rx_cons_lock, flags); 1454 + work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); 1455 + if (work_queued > queue->rx_rsp_unconsumed) { 1456 + queue->rx_rsp_unconsumed = work_queued; 1457 + *eoi = 0; 1458 + } else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) { 1459 + const struct device *dev = &queue->info->netdev->dev; 1460 + 1461 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 1462 + dev_alert(dev, "RX producer index going backwards\n"); 1463 + dev_alert(dev, "Disabled for further use\n"); 1464 + queue->info->broken = true; 1465 + return false; 1466 + } 1467 + spin_unlock_irqrestore(&queue->rx_cons_lock, flags); 1468 + 1469 + if (likely(netif_carrier_ok(queue->info->netdev) && work_queued)) 1470 + napi_schedule(&queue->napi); 1471 + 1472 + return true; 1473 + } 1474 + 1457 1475 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) 1458 1476 { 1459 - struct netfront_queue *queue = dev_id; 1460 - struct net_device *dev = queue->info->netdev; 1477 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1461 1478 1462 - if (queue->info->broken) 1463 - return IRQ_HANDLED; 1464 - 1465 - if (likely(netif_carrier_ok(dev) && 1466 - RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))) 1467 - napi_schedule(&queue->napi); 1479 + if (likely(xennet_handle_rx(dev_id, &eoiflag))) 1480 + xen_irq_lateeoi(irq, eoiflag); 1468 1481 1469 1482 return IRQ_HANDLED; 1470 1483 } 1471 1484 1472 1485 static irqreturn_t xennet_interrupt(int irq, void *dev_id) 1473 1486 { 1474 - 
xennet_tx_interrupt(irq, dev_id); 1475 - xennet_rx_interrupt(irq, dev_id); 1487 + unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; 1488 + 1489 + if (xennet_handle_tx(dev_id, &eoiflag) && 1490 + xennet_handle_rx(dev_id, &eoiflag)) 1491 + xen_irq_lateeoi(irq, eoiflag); 1492 + 1476 1493 return IRQ_HANDLED; 1477 1494 } 1478 1495 ··· 1829 1768 if (err < 0) 1830 1769 goto fail; 1831 1770 1832 - err = bind_evtchn_to_irqhandler(queue->tx_evtchn, 1833 - xennet_interrupt, 1834 - 0, queue->info->netdev->name, queue); 1771 + err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, 1772 + xennet_interrupt, 0, 1773 + queue->info->netdev->name, 1774 + queue); 1835 1775 if (err < 0) 1836 1776 goto bind_fail; 1837 1777 queue->rx_evtchn = queue->tx_evtchn; ··· 1860 1798 1861 1799 snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name), 1862 1800 "%s-tx", queue->name); 1863 - err = bind_evtchn_to_irqhandler(queue->tx_evtchn, 1864 - xennet_tx_interrupt, 1865 - 0, queue->tx_irq_name, queue); 1801 + err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, 1802 + xennet_tx_interrupt, 0, 1803 + queue->tx_irq_name, queue); 1866 1804 if (err < 0) 1867 1805 goto bind_tx_fail; 1868 1806 queue->tx_irq = err; 1869 1807 1870 1808 snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name), 1871 1809 "%s-rx", queue->name); 1872 - err = bind_evtchn_to_irqhandler(queue->rx_evtchn, 1873 - xennet_rx_interrupt, 1874 - 0, queue->rx_irq_name, queue); 1810 + err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn, 1811 + xennet_rx_interrupt, 0, 1812 + queue->rx_irq_name, queue); 1875 1813 if (err < 0) 1876 1814 goto bind_rx_fail; 1877 1815 queue->rx_irq = err; ··· 1973 1911 1974 1912 spin_lock_init(&queue->tx_lock); 1975 1913 spin_lock_init(&queue->rx_lock); 1914 + spin_lock_init(&queue->rx_cons_lock); 1976 1915 1977 1916 timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0); 1978 1917
+20 -9
drivers/nfc/st21nfca/i2c.c
··· 524 524 phy->gpiod_ena = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW); 525 525 if (IS_ERR(phy->gpiod_ena)) { 526 526 nfc_err(dev, "Unable to get ENABLE GPIO\n"); 527 - return PTR_ERR(phy->gpiod_ena); 527 + r = PTR_ERR(phy->gpiod_ena); 528 + goto out_free; 528 529 } 529 530 530 531 phy->se_status.is_ese_present = ··· 536 535 r = st21nfca_hci_platform_init(phy); 537 536 if (r < 0) { 538 537 nfc_err(&client->dev, "Unable to reboot st21nfca\n"); 539 - return r; 538 + goto out_free; 540 539 } 541 540 542 541 r = devm_request_threaded_irq(&client->dev, client->irq, NULL, ··· 545 544 ST21NFCA_HCI_DRIVER_NAME, phy); 546 545 if (r < 0) { 547 546 nfc_err(&client->dev, "Unable to register IRQ handler\n"); 548 - return r; 547 + goto out_free; 549 548 } 550 549 551 - return st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME, 552 - ST21NFCA_FRAME_HEADROOM, 553 - ST21NFCA_FRAME_TAILROOM, 554 - ST21NFCA_HCI_LLC_MAX_PAYLOAD, 555 - &phy->hdev, 556 - &phy->se_status); 550 + r = st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME, 551 + ST21NFCA_FRAME_HEADROOM, 552 + ST21NFCA_FRAME_TAILROOM, 553 + ST21NFCA_HCI_LLC_MAX_PAYLOAD, 554 + &phy->hdev, 555 + &phy->se_status); 556 + if (r) 557 + goto out_free; 558 + 559 + return 0; 560 + 561 + out_free: 562 + kfree_skb(phy->pending_skb); 563 + return r; 557 564 } 558 565 559 566 static int st21nfca_hci_i2c_remove(struct i2c_client *client) ··· 572 563 573 564 if (phy->powered) 574 565 st21nfca_hci_i2c_disable(phy); 566 + if (phy->pending_skb) 567 + kfree_skb(phy->pending_skb); 575 568 576 569 return 0; 577 570 }
+16 -13
drivers/pinctrl/bcm/pinctrl-bcm2835.c
··· 1244 1244 raw_spin_lock_init(&pc->irq_lock[i]); 1245 1245 } 1246 1246 1247 + pc->pctl_desc = *pdata->pctl_desc; 1248 + pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc); 1249 + if (IS_ERR(pc->pctl_dev)) { 1250 + gpiochip_remove(&pc->gpio_chip); 1251 + return PTR_ERR(pc->pctl_dev); 1252 + } 1253 + 1254 + pc->gpio_range = *pdata->gpio_range; 1255 + pc->gpio_range.base = pc->gpio_chip.base; 1256 + pc->gpio_range.gc = &pc->gpio_chip; 1257 + pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range); 1258 + 1247 1259 girq = &pc->gpio_chip.irq; 1248 1260 girq->chip = &bcm2835_gpio_irq_chip; 1249 1261 girq->parent_handler = bcm2835_gpio_irq_handler; ··· 1263 1251 girq->parents = devm_kcalloc(dev, BCM2835_NUM_IRQS, 1264 1252 sizeof(*girq->parents), 1265 1253 GFP_KERNEL); 1266 - if (!girq->parents) 1254 + if (!girq->parents) { 1255 + pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1267 1256 return -ENOMEM; 1257 + } 1268 1258 1269 1259 if (is_7211) { 1270 1260 pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS, ··· 1321 1307 err = gpiochip_add_data(&pc->gpio_chip, pc); 1322 1308 if (err) { 1323 1309 dev_err(dev, "could not add GPIO chip\n"); 1310 + pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range); 1324 1311 return err; 1325 1312 } 1326 - 1327 - pc->pctl_desc = *pdata->pctl_desc; 1328 - pc->pctl_dev = devm_pinctrl_register(dev, &pc->pctl_desc, pc); 1329 - if (IS_ERR(pc->pctl_dev)) { 1330 - gpiochip_remove(&pc->gpio_chip); 1331 - return PTR_ERR(pc->pctl_dev); 1332 - } 1333 - 1334 - pc->gpio_range = *pdata->gpio_range; 1335 - pc->gpio_range.base = pc->gpio_chip.base; 1336 - pc->gpio_range.gc = &pc->gpio_chip; 1337 - pinctrl_add_gpio_range(pc->pctl_dev, &pc->gpio_range); 1338 1313 1339 1314 return 0; 1340 1315 }
+6 -2
drivers/pinctrl/mediatek/pinctrl-mtk-common-v2.c
··· 285 285 desc = (const struct mtk_pin_desc *)hw->soc->pins; 286 286 *gpio_chip = &hw->chip; 287 287 288 - /* Be greedy to guess first gpio_n is equal to eint_n */ 289 - if (desc[eint_n].eint.eint_n == eint_n) 288 + /* 289 + * Be greedy to guess first gpio_n is equal to eint_n. 290 + * Only eint virtual eint number is greater than gpio number. 291 + */ 292 + if (hw->soc->npins > eint_n && 293 + desc[eint_n].eint.eint_n == eint_n) 290 294 *gpio_n = eint_n; 291 295 else 292 296 *gpio_n = mtk_xt_find_eint_num(hw, eint_n);
+4 -4
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1251 1251 bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK; 1252 1252 bank->gpio_chip.base = args.args[1]; 1253 1253 1254 - npins = args.args[2]; 1255 - while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 1256 - ++i, &args)) 1257 - npins += args.args[2]; 1254 + /* get the last defined gpio line (offset + nb of pins) */ 1255 + npins = args.args[0] + args.args[2]; 1256 + while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args)) 1257 + npins = max(npins, (int)(args.args[0] + args.args[2])); 1258 1258 } else { 1259 1259 bank_nr = pctl->nbanks; 1260 1260 bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
+2 -2
drivers/platform/mellanox/mlxbf-pmc.c
··· 1374 1374 pmc->block[i].counters = info[2]; 1375 1375 pmc->block[i].type = info[3]; 1376 1376 1377 - if (IS_ERR(pmc->block[i].mmio_base)) 1378 - return PTR_ERR(pmc->block[i].mmio_base); 1377 + if (!pmc->block[i].mmio_base) 1378 + return -ENOMEM; 1379 1379 1380 1380 ret = mlxbf_pmc_create_groups(dev, i); 1381 1381 if (ret)
+1 -1
drivers/platform/x86/Makefile
··· 68 68 obj-$(CONFIG_THINKPAD_LMI) += think-lmi.o 69 69 70 70 # Intel 71 - obj-$(CONFIG_X86_PLATFORM_DRIVERS_INTEL) += intel/ 71 + obj-y += intel/ 72 72 73 73 # MSI 74 74 obj-$(CONFIG_MSI_LAPTOP) += msi-laptop.o
+2 -1
drivers/platform/x86/amd-pmc.c
··· 508 508 } 509 509 510 510 static const struct dev_pm_ops amd_pmc_pm_ops = { 511 - SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(amd_pmc_suspend, amd_pmc_resume) 511 + .suspend_noirq = amd_pmc_suspend, 512 + .resume_noirq = amd_pmc_resume, 512 513 }; 513 514 514 515 static const struct pci_device_id pmc_pci_ids[] = {
+1 -1
drivers/platform/x86/apple-gmux.c
··· 625 625 } 626 626 627 627 gmux_data->iostart = res->start; 628 - gmux_data->iolen = res->end - res->start; 628 + gmux_data->iolen = resource_size(res); 629 629 630 630 if (gmux_data->iolen < GMUX_MIN_IO_LEN) { 631 631 pr_err("gmux I/O region too small (%lu < %u)\n",
-15
drivers/platform/x86/intel/Kconfig
··· 3 3 # Intel x86 Platform Specific Drivers 4 4 # 5 5 6 - menuconfig X86_PLATFORM_DRIVERS_INTEL 7 - bool "Intel x86 Platform Specific Device Drivers" 8 - default y 9 - help 10 - Say Y here to get to see options for device drivers for 11 - various Intel x86 platforms, including vendor-specific 12 - drivers. This option alone does not add any kernel code. 13 - 14 - If you say N, all options in this submenu will be skipped 15 - and disabled. 16 - 17 - if X86_PLATFORM_DRIVERS_INTEL 18 - 19 6 source "drivers/platform/x86/intel/atomisp2/Kconfig" 20 7 source "drivers/platform/x86/intel/int1092/Kconfig" 21 8 source "drivers/platform/x86/intel/int33fe/Kconfig" ··· 170 183 171 184 To compile this driver as a module, choose M here: the module 172 185 will be called intel-uncore-frequency. 173 - 174 - endif # X86_PLATFORM_DRIVERS_INTEL
+1 -1
drivers/platform/x86/intel/pmc/pltdrv.c
··· 65 65 66 66 retval = platform_device_register(pmc_core_device); 67 67 if (retval) 68 - kfree(pmc_core_device); 68 + platform_device_put(pmc_core_device); 69 69 70 70 return retval; 71 71 }
+30 -28
drivers/platform/x86/system76_acpi.c
··· 35 35 union acpi_object *nfan; 36 36 union acpi_object *ntmp; 37 37 struct input_dev *input; 38 + bool has_open_ec; 38 39 }; 39 40 40 41 static const struct acpi_device_id device_ids[] = { ··· 280 279 281 280 static void system76_battery_init(void) 282 281 { 283 - acpi_handle handle; 284 - 285 - handle = ec_get_handle(); 286 - if (handle && acpi_has_method(handle, "GBCT")) 287 - battery_hook_register(&system76_battery_hook); 282 + battery_hook_register(&system76_battery_hook); 288 283 } 289 284 290 285 static void system76_battery_exit(void) 291 286 { 292 - acpi_handle handle; 293 - 294 - handle = ec_get_handle(); 295 - if (handle && acpi_has_method(handle, "GBCT")) 296 - battery_hook_unregister(&system76_battery_hook); 287 + battery_hook_unregister(&system76_battery_hook); 297 288 } 298 289 299 290 // Get the airplane mode LED brightness ··· 666 673 acpi_dev->driver_data = data; 667 674 data->acpi_dev = acpi_dev; 668 675 676 + // Some models do not run open EC firmware. Check for an ACPI method 677 + // that only exists on open EC to guard functionality specific to it. 
678 + data->has_open_ec = acpi_has_method(acpi_device_handle(data->acpi_dev), "NFAN"); 679 + 669 680 err = system76_get(data, "INIT"); 670 681 if (err) 671 682 return err; ··· 715 718 if (err) 716 719 goto error; 717 720 718 - err = system76_get_object(data, "NFAN", &data->nfan); 719 - if (err) 720 - goto error; 721 + if (data->has_open_ec) { 722 + err = system76_get_object(data, "NFAN", &data->nfan); 723 + if (err) 724 + goto error; 721 725 722 - err = system76_get_object(data, "NTMP", &data->ntmp); 723 - if (err) 724 - goto error; 726 + err = system76_get_object(data, "NTMP", &data->ntmp); 727 + if (err) 728 + goto error; 725 729 726 - data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev, 727 - "system76_acpi", data, &thermal_chip_info, NULL); 728 - err = PTR_ERR_OR_ZERO(data->therm); 729 - if (err) 730 - goto error; 730 + data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev, 731 + "system76_acpi", data, &thermal_chip_info, NULL); 732 + err = PTR_ERR_OR_ZERO(data->therm); 733 + if (err) 734 + goto error; 731 735 732 - system76_battery_init(); 736 + system76_battery_init(); 737 + } 733 738 734 739 return 0; 735 740 736 741 error: 737 - kfree(data->ntmp); 738 - kfree(data->nfan); 742 + if (data->has_open_ec) { 743 + kfree(data->ntmp); 744 + kfree(data->nfan); 745 + } 739 746 return err; 740 747 } 741 748 ··· 750 749 751 750 data = acpi_driver_data(acpi_dev); 752 751 753 - system76_battery_exit(); 752 + if (data->has_open_ec) { 753 + system76_battery_exit(); 754 + kfree(data->nfan); 755 + kfree(data->ntmp); 756 + } 754 757 755 758 devm_led_classdev_unregister(&acpi_dev->dev, &data->ap_led); 756 759 devm_led_classdev_unregister(&acpi_dev->dev, &data->kb_led); 757 - 758 - kfree(data->nfan); 759 - kfree(data->ntmp); 760 760 761 761 system76_get(data, "FINI"); 762 762
+4 -2
drivers/scsi/libiscsi.c
··· 3100 3100 { 3101 3101 struct iscsi_conn *conn = cls_conn->dd_data; 3102 3102 struct iscsi_session *session = conn->session; 3103 + char *tmp_persistent_address = conn->persistent_address; 3104 + char *tmp_local_ipaddr = conn->local_ipaddr; 3103 3105 3104 3106 del_timer_sync(&conn->transport_timer); 3105 3107 ··· 3123 3121 spin_lock_bh(&session->frwd_lock); 3124 3122 free_pages((unsigned long) conn->data, 3125 3123 get_order(ISCSI_DEF_MAX_RECV_SEG_LEN)); 3126 - kfree(conn->persistent_address); 3127 - kfree(conn->local_ipaddr); 3128 3124 /* regular RX path uses back_lock */ 3129 3125 spin_lock_bh(&session->back_lock); 3130 3126 kfifo_in(&session->cmdpool.queue, (void*)&conn->login_task, ··· 3134 3134 mutex_unlock(&session->eh_mutex); 3135 3135 3136 3136 iscsi_destroy_conn(cls_conn); 3137 + kfree(tmp_persistent_address); 3138 + kfree(tmp_local_ipaddr); 3137 3139 } 3138 3140 EXPORT_SYMBOL_GPL(iscsi_conn_teardown); 3139 3141
+2 -2
drivers/scsi/lpfc/lpfc_debugfs.c
··· 2954 2954 char mybuf[64]; 2955 2955 char *pbuf; 2956 2956 2957 - if (nbytes > 64) 2958 - nbytes = 64; 2957 + if (nbytes > 63) 2958 + nbytes = 63; 2959 2959 2960 2960 memset(mybuf, 0, sizeof(mybuf)); 2961 2961
+5 -2
drivers/scsi/vmw_pvscsi.c
··· 586 586 * Commands like INQUIRY may transfer less data than 587 587 * requested by the initiator via bufflen. Set residual 588 588 * count to make upper layer aware of the actual amount 589 - * of data returned. 589 + * of data returned. There are cases when controller 590 + * returns zero dataLen with non zero data - do not set 591 + * residual count in that case. 590 592 */ 591 - scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen); 593 + if (e->dataLen && (e->dataLen < scsi_bufflen(cmd))) 594 + scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen); 592 595 cmd->result = (DID_OK << 16); 593 596 break; 594 597
+1 -1
drivers/spi/spi-armada-3700.c
··· 901 901 return 0; 902 902 903 903 error_clk: 904 - clk_disable_unprepare(spi->clk); 904 + clk_unprepare(spi->clk); 905 905 error: 906 906 spi_master_put(master); 907 907 out:
+2 -4
drivers/tee/optee/core.c
··· 48 48 goto err; 49 49 } 50 50 51 - for (i = 0; i < nr_pages; i++) { 52 - pages[i] = page; 53 - page++; 54 - } 51 + for (i = 0; i < nr_pages; i++) 52 + pages[i] = page + i; 55 53 56 54 shm->flags |= TEE_SHM_REGISTER; 57 55 rc = shm_register(shm->ctx, shm, pages, nr_pages,
+2
drivers/tee/optee/smc_abi.c
··· 23 23 #include "optee_private.h" 24 24 #include "optee_smc.h" 25 25 #include "optee_rpc_cmd.h" 26 + #include <linux/kmemleak.h> 26 27 #define CREATE_TRACE_POINTS 27 28 #include "optee_trace.h" 28 29 ··· 784 783 param->a4 = 0; 785 784 param->a5 = 0; 786 785 } 786 + kmemleak_not_leak(shm); 787 787 break; 788 788 case OPTEE_SMC_RPC_FUNC_FREE: 789 789 shm = reg_pair_to_ptr(param->a1, param->a2);
+66 -108
drivers/tee/tee_shm.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 - * Copyright (c) 2015-2016, Linaro Limited 3 + * Copyright (c) 2015-2017, 2019-2021 Linaro Limited 4 4 */ 5 + #include <linux/anon_inodes.h> 5 6 #include <linux/device.h> 6 - #include <linux/dma-buf.h> 7 - #include <linux/fdtable.h> 8 7 #include <linux/idr.h> 8 + #include <linux/mm.h> 9 9 #include <linux/sched.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/tee_drv.h> 12 12 #include <linux/uio.h> 13 - #include <linux/module.h> 14 13 #include "tee_private.h" 15 - 16 - MODULE_IMPORT_NS(DMA_BUF); 17 14 18 15 static void release_registered_pages(struct tee_shm *shm) 19 16 { ··· 28 31 } 29 32 } 30 33 31 - static void tee_shm_release(struct tee_shm *shm) 34 + static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm) 32 35 { 33 - struct tee_device *teedev = shm->ctx->teedev; 34 - 35 - if (shm->flags & TEE_SHM_DMA_BUF) { 36 - mutex_lock(&teedev->mutex); 37 - idr_remove(&teedev->idr, shm->id); 38 - mutex_unlock(&teedev->mutex); 39 - } 40 - 41 36 if (shm->flags & TEE_SHM_POOL) { 42 37 struct tee_shm_pool_mgr *poolm; 43 38 ··· 55 66 56 67 tee_device_put(teedev); 57 68 } 58 - 59 - static struct sg_table *tee_shm_op_map_dma_buf(struct dma_buf_attachment 60 - *attach, enum dma_data_direction dir) 61 - { 62 - return NULL; 63 - } 64 - 65 - static void tee_shm_op_unmap_dma_buf(struct dma_buf_attachment *attach, 66 - struct sg_table *table, 67 - enum dma_data_direction dir) 68 - { 69 - } 70 - 71 - static void tee_shm_op_release(struct dma_buf *dmabuf) 72 - { 73 - struct tee_shm *shm = dmabuf->priv; 74 - 75 - tee_shm_release(shm); 76 - } 77 - 78 - static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma) 79 - { 80 - struct tee_shm *shm = dmabuf->priv; 81 - size_t size = vma->vm_end - vma->vm_start; 82 - 83 - /* Refuse sharing shared memory provided by application */ 84 - if (shm->flags & TEE_SHM_USER_MAPPED) 85 - return -EINVAL; 86 - 87 - return remap_pfn_range(vma, vma->vm_start, 
shm->paddr >> PAGE_SHIFT, 88 - size, vma->vm_page_prot); 89 - } 90 - 91 - static const struct dma_buf_ops tee_shm_dma_buf_ops = { 92 - .map_dma_buf = tee_shm_op_map_dma_buf, 93 - .unmap_dma_buf = tee_shm_op_unmap_dma_buf, 94 - .release = tee_shm_op_release, 95 - .mmap = tee_shm_op_mmap, 96 - }; 97 69 98 70 struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags) 99 71 { ··· 90 140 goto err_dev_put; 91 141 } 92 142 143 + refcount_set(&shm->refcount, 1); 93 144 shm->flags = flags | TEE_SHM_POOL; 94 145 shm->ctx = ctx; 95 146 if (flags & TEE_SHM_DMA_BUF) ··· 104 153 goto err_kfree; 105 154 } 106 155 107 - 108 156 if (flags & TEE_SHM_DMA_BUF) { 109 - DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 110 - 111 157 mutex_lock(&teedev->mutex); 112 158 shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL); 113 159 mutex_unlock(&teedev->mutex); ··· 112 164 ret = ERR_PTR(shm->id); 113 165 goto err_pool_free; 114 166 } 115 - 116 - exp_info.ops = &tee_shm_dma_buf_ops; 117 - exp_info.size = shm->size; 118 - exp_info.flags = O_RDWR; 119 - exp_info.priv = shm; 120 - 121 - shm->dmabuf = dma_buf_export(&exp_info); 122 - if (IS_ERR(shm->dmabuf)) { 123 - ret = ERR_CAST(shm->dmabuf); 124 - goto err_rem; 125 - } 126 167 } 127 168 128 169 teedev_ctx_get(ctx); 129 170 130 171 return shm; 131 - err_rem: 132 - if (flags & TEE_SHM_DMA_BUF) { 133 - mutex_lock(&teedev->mutex); 134 - idr_remove(&teedev->idr, shm->id); 135 - mutex_unlock(&teedev->mutex); 136 - } 137 172 err_pool_free: 138 173 poolm->ops->free(poolm, shm); 139 174 err_kfree: ··· 177 246 goto err; 178 247 } 179 248 249 + refcount_set(&shm->refcount, 1); 180 250 shm->flags = flags | TEE_SHM_REGISTER; 181 251 shm->ctx = ctx; 182 252 shm->id = -1; ··· 238 306 goto err; 239 307 } 240 308 241 - if (flags & TEE_SHM_DMA_BUF) { 242 - DEFINE_DMA_BUF_EXPORT_INFO(exp_info); 243 - 244 - exp_info.ops = &tee_shm_dma_buf_ops; 245 - exp_info.size = shm->size; 246 - exp_info.flags = O_RDWR; 247 - exp_info.priv = shm; 248 - 249 - 
shm->dmabuf = dma_buf_export(&exp_info); 250 - if (IS_ERR(shm->dmabuf)) { 251 - ret = ERR_CAST(shm->dmabuf); 252 - teedev->desc->ops->shm_unregister(ctx, shm); 253 - goto err; 254 - } 255 - } 256 - 257 309 return shm; 258 310 err: 259 311 if (shm) { ··· 255 339 } 256 340 EXPORT_SYMBOL_GPL(tee_shm_register); 257 341 342 + static int tee_shm_fop_release(struct inode *inode, struct file *filp) 343 + { 344 + tee_shm_put(filp->private_data); 345 + return 0; 346 + } 347 + 348 + static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma) 349 + { 350 + struct tee_shm *shm = filp->private_data; 351 + size_t size = vma->vm_end - vma->vm_start; 352 + 353 + /* Refuse sharing shared memory provided by application */ 354 + if (shm->flags & TEE_SHM_USER_MAPPED) 355 + return -EINVAL; 356 + 357 + /* check for overflowing the buffer's size */ 358 + if (vma->vm_pgoff + vma_pages(vma) > shm->size >> PAGE_SHIFT) 359 + return -EINVAL; 360 + 361 + return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT, 362 + size, vma->vm_page_prot); 363 + } 364 + 365 + static const struct file_operations tee_shm_fops = { 366 + .owner = THIS_MODULE, 367 + .release = tee_shm_fop_release, 368 + .mmap = tee_shm_fop_mmap, 369 + }; 370 + 258 371 /** 259 372 * tee_shm_get_fd() - Increase reference count and return file descriptor 260 373 * @shm: Shared memory handle ··· 296 351 if (!(shm->flags & TEE_SHM_DMA_BUF)) 297 352 return -EINVAL; 298 353 299 - get_dma_buf(shm->dmabuf); 300 - fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC); 354 + /* matched by tee_shm_put() in tee_shm_op_release() */ 355 + refcount_inc(&shm->refcount); 356 + fd = anon_inode_getfd("tee_shm", &tee_shm_fops, shm, O_RDWR); 301 357 if (fd < 0) 302 - dma_buf_put(shm->dmabuf); 358 + tee_shm_put(shm); 303 359 return fd; 304 360 } 305 361 ··· 310 364 */ 311 365 void tee_shm_free(struct tee_shm *shm) 312 366 { 313 - /* 314 - * dma_buf_put() decreases the dmabuf reference counter and will 315 - * call tee_shm_release() when the 
last reference is gone. 316 - * 317 - * In the case of driver private memory we call tee_shm_release 318 - * directly instead as it doesn't have a reference counter. 319 - */ 320 - if (shm->flags & TEE_SHM_DMA_BUF) 321 - dma_buf_put(shm->dmabuf); 322 - else 323 - tee_shm_release(shm); 367 + tee_shm_put(shm); 324 368 } 325 369 EXPORT_SYMBOL_GPL(tee_shm_free); 326 370 ··· 417 481 teedev = ctx->teedev; 418 482 mutex_lock(&teedev->mutex); 419 483 shm = idr_find(&teedev->idr, id); 484 + /* 485 + * If the tee_shm was found in the IDR it must have a refcount 486 + * larger than 0 due to the guarantee in tee_shm_put() below. So 487 + * it's safe to use refcount_inc(). 488 + */ 420 489 if (!shm || shm->ctx != ctx) 421 490 shm = ERR_PTR(-EINVAL); 422 - else if (shm->flags & TEE_SHM_DMA_BUF) 423 - get_dma_buf(shm->dmabuf); 491 + else 492 + refcount_inc(&shm->refcount); 424 493 mutex_unlock(&teedev->mutex); 425 494 return shm; 426 495 } ··· 437 496 */ 438 497 void tee_shm_put(struct tee_shm *shm) 439 498 { 440 - if (shm->flags & TEE_SHM_DMA_BUF) 441 - dma_buf_put(shm->dmabuf); 499 + struct tee_device *teedev = shm->ctx->teedev; 500 + bool do_release = false; 501 + 502 + mutex_lock(&teedev->mutex); 503 + if (refcount_dec_and_test(&shm->refcount)) { 504 + /* 505 + * refcount has reached 0, we must now remove it from the 506 + * IDR before releasing the mutex. This will guarantee that 507 + * the refcount_inc() in tee_shm_get_from_id() never starts 508 + * from 0. 509 + */ 510 + if (shm->flags & TEE_SHM_DMA_BUF) 511 + idr_remove(&teedev->idr, shm->id); 512 + do_release = true; 513 + } 514 + mutex_unlock(&teedev->mutex); 515 + 516 + if (do_release) 517 + tee_shm_release(teedev, shm); 442 518 } 443 519 EXPORT_SYMBOL_GPL(tee_shm_put);
+27 -3
drivers/tty/hvc/hvc_xen.c
··· 37 37 struct xenbus_device *xbdev; 38 38 struct xencons_interface *intf; 39 39 unsigned int evtchn; 40 + XENCONS_RING_IDX out_cons; 41 + unsigned int out_cons_same; 40 42 struct hvc_struct *hvc; 41 43 int irq; 42 44 int vtermno; ··· 140 138 XENCONS_RING_IDX cons, prod; 141 139 int recv = 0; 142 140 struct xencons_info *xencons = vtermno_to_xencons(vtermno); 141 + unsigned int eoiflag = 0; 142 + 143 143 if (xencons == NULL) 144 144 return -EINVAL; 145 145 intf = xencons->intf; ··· 161 157 mb(); /* read ring before consuming */ 162 158 intf->in_cons = cons; 163 159 164 - notify_daemon(xencons); 160 + /* 161 + * When to mark interrupt having been spurious: 162 + * - there was no new data to be read, and 163 + * - the backend did not consume some output bytes, and 164 + * - the previous round with no read data didn't see consumed bytes 165 + * (we might have a race with an interrupt being in flight while 166 + * updating xencons->out_cons, so account for that by allowing one 167 + * round without any visible reason) 168 + */ 169 + if (intf->out_cons != xencons->out_cons) { 170 + xencons->out_cons = intf->out_cons; 171 + xencons->out_cons_same = 0; 172 + } 173 + if (recv) { 174 + notify_daemon(xencons); 175 + } else if (xencons->out_cons_same++ > 1) { 176 + eoiflag = XEN_EOI_FLAG_SPURIOUS; 177 + } 178 + 179 + xen_irq_lateeoi(xencons->irq, eoiflag); 180 + 165 181 return recv; 166 182 } 167 183 ··· 410 386 if (ret) 411 387 return ret; 412 388 info->evtchn = evtchn; 413 - irq = bind_evtchn_to_irq(evtchn); 389 + irq = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn); 414 390 if (irq < 0) 415 391 return irq; 416 392 info->irq = irq; ··· 575 551 return r; 576 552 577 553 info = vtermno_to_xencons(HVC_COOKIE); 578 - info->irq = bind_evtchn_to_irq(info->evtchn); 554 + info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn); 579 555 } 580 556 if (info->irq < 0) 581 557 info->irq = 0; /* NO_IRQ */
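The hvc_xen change above only marks an interrupt spurious after more than one consecutive round with no data received and no output consumed, tolerating one idle round for an interrupt that was already in flight. The counting logic can be sketched in isolation (hypothetical names; `ring_out_cons` stands in for the shared ring's consumer index):

```c
#include <assert.h>

struct cons_state {
	unsigned int out_cons;		/* last ring consumer index observed */
	unsigned int out_cons_same;	/* consecutive rounds with no progress */
};

/* Returns nonzero when the interrupt should be EOI'd as spurious: no data
 * was received and the backend made no visible progress for more than one
 * round in a row (mirrors the out_cons_same++ > 1 check above). */
static int irq_was_spurious(struct cons_state *s, unsigned int ring_out_cons,
			    int recv)
{
	if (ring_out_cons != s->out_cons) {
		s->out_cons = ring_out_cons;
		s->out_cons_same = 0;
	}
	if (recv)
		return 0;
	return s->out_cons_same++ > 1;
}
```
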
+6 -3
drivers/usb/gadget/function/f_fs.c
··· 1773 1773 1774 1774 BUG_ON(ffs->gadget); 1775 1775 1776 - if (ffs->epfiles) 1776 + if (ffs->epfiles) { 1777 1777 ffs_epfiles_destroy(ffs->epfiles, ffs->eps_count); 1778 + ffs->epfiles = NULL; 1779 + } 1778 1780 1779 - if (ffs->ffs_eventfd) 1781 + if (ffs->ffs_eventfd) { 1780 1782 eventfd_ctx_put(ffs->ffs_eventfd); 1783 + ffs->ffs_eventfd = NULL; 1784 + } 1781 1785 1782 1786 kfree(ffs->raw_descs_data); 1783 1787 kfree(ffs->raw_strings); ··· 1794 1790 1795 1791 ffs_data_clear(ffs); 1796 1792 1797 - ffs->epfiles = NULL; 1798 1793 ffs->raw_descs_data = NULL; 1799 1794 ffs->raw_descs = NULL; 1800 1795 ffs->raw_strings = NULL;
+4 -1
drivers/usb/host/xhci-pci.c
··· 123 123 /* Look for vendor-specific quirks */ 124 124 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 125 125 (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK || 126 - pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 || 127 126 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1400)) { 128 127 if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK && 129 128 pdev->revision == 0x0) { ··· 156 157 if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 157 158 pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1009) 158 159 xhci->quirks |= XHCI_BROKEN_STREAMS; 160 + 161 + if (pdev->vendor == PCI_VENDOR_ID_FRESCO_LOGIC && 162 + pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_FL1100) 163 + xhci->quirks |= XHCI_TRUST_TX_LENGTH; 159 164 160 165 if (pdev->vendor == PCI_VENDOR_ID_NEC) 161 166 xhci->quirks |= XHCI_NEC_HOST;
+10 -2
drivers/usb/mtu3/mtu3_gadget.c
··· 77 77 if (usb_endpoint_xfer_int(desc) || 78 78 usb_endpoint_xfer_isoc(desc)) { 79 79 interval = desc->bInterval; 80 - interval = clamp_val(interval, 1, 16) - 1; 80 + interval = clamp_val(interval, 1, 16); 81 81 if (usb_endpoint_xfer_isoc(desc) && comp_desc) 82 82 mult = comp_desc->bmAttributes; 83 83 } ··· 89 89 if (usb_endpoint_xfer_isoc(desc) || 90 90 usb_endpoint_xfer_int(desc)) { 91 91 interval = desc->bInterval; 92 - interval = clamp_val(interval, 1, 16) - 1; 92 + interval = clamp_val(interval, 1, 16); 93 93 mult = usb_endpoint_maxp_mult(desc) - 1; 94 94 } 95 + break; 96 + case USB_SPEED_FULL: 97 + if (usb_endpoint_xfer_isoc(desc)) 98 + interval = clamp_val(desc->bInterval, 1, 16); 99 + else if (usb_endpoint_xfer_int(desc)) 100 + interval = clamp_val(desc->bInterval, 1, 255); 101 + 95 102 break; 96 103 default: 97 104 break; /*others are ignored */ ··· 242 235 mreq->request.dma = DMA_ADDR_INVALID; 243 236 mreq->epnum = mep->epnum; 244 237 mreq->mep = mep; 238 + INIT_LIST_HEAD(&mreq->list); 245 239 trace_mtu3_alloc_request(mreq); 246 240 247 241 return &mreq->request;
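The mtu3 hunk above drops the stray `- 1` from the interval calculation and adds full-speed handling, where isochronous endpoints use the 1..16 exponent range but interrupt endpoints use the raw 1..255 frame count. A standalone sketch of that per-speed selection (hypothetical names; `clamp_val` reimplemented for the example):

```c
#include <assert.h>

/* Stand-in for the kernel's clamp_val() */
#define clamp_val(v, lo, hi) ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))

enum speed { FS, HS, SS };

/* Periodic-endpoint interval selection as in the hunk above: high/super
 * speed use the 1..16 exponent range directly (no "- 1"); full-speed isoc
 * uses 1..16, full-speed interrupt the raw 1..255 bInterval. */
static unsigned int ep_interval(enum speed spd, int is_isoc,
				unsigned int bInterval)
{
	switch (spd) {
	case SS:
	case HS:
		return clamp_val(bInterval, 1u, 16u);
	case FS:
		return is_isoc ? clamp_val(bInterval, 1u, 16u)
			       : clamp_val(bInterval, 1u, 255u);
	}
	return 0;
}
```
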
+6 -1
drivers/usb/mtu3/mtu3_qmu.c
··· 273 273 gpd->dw3_info |= cpu_to_le32(GPD_EXT_FLAG_ZLP); 274 274 } 275 275 276 + /* prevent reorder, make sure GPD's HWO is set last */ 277 + mb(); 276 278 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO); 277 279 278 280 mreq->gpd = gpd; ··· 308 306 gpd->next_gpd = cpu_to_le32(lower_32_bits(enq_dma)); 309 307 ext_addr |= GPD_EXT_NGP(mtu, upper_32_bits(enq_dma)); 310 308 gpd->dw3_info = cpu_to_le32(ext_addr); 309 + /* prevent reorder, make sure GPD's HWO is set last */ 310 + mb(); 311 311 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO); 312 312 313 313 mreq->gpd = gpd; ··· 449 445 return; 450 446 } 451 447 mtu3_setbits(mbase, MU3D_EP_TXCR0(mep->epnum), TX_TXPKTRDY); 452 - 448 + /* prevent reorder, make sure GPD's HWO is set last */ 449 + mb(); 453 450 /* by pass the current GDP */ 454 451 gpd_current->dw0_info |= cpu_to_le32(GPD_FLAGS_BPS | GPD_FLAGS_HWO); 455 452
+3 -1
drivers/usb/typec/ucsi/ucsi.c
··· 1164 1164 ret = 0; 1165 1165 } 1166 1166 1167 - if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == UCSI_CONSTAT_PWR_OPMODE_PD) { 1167 + if (con->partner && 1168 + UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == 1169 + UCSI_CONSTAT_PWR_OPMODE_PD) { 1168 1170 ucsi_get_src_pdos(con); 1169 1171 ucsi_check_altmodes(con); 1170 1172 }
+3 -2
drivers/virt/nitro_enclaves/ne_misc_dev.c
··· 886 886 goto put_pages; 887 887 } 888 888 889 - gup_rc = get_user_pages(mem_region.userspace_addr + memory_size, 1, FOLL_GET, 890 - ne_mem_region->pages + i, NULL); 889 + gup_rc = get_user_pages_unlocked(mem_region.userspace_addr + memory_size, 1, 890 + ne_mem_region->pages + i, FOLL_GET); 891 + 891 892 if (gup_rc < 0) { 892 893 rc = gup_rc; 893 894
+6
drivers/xen/events/events_base.c
··· 1251 1251 } 1252 1252 EXPORT_SYMBOL_GPL(bind_evtchn_to_irq); 1253 1253 1254 + int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn) 1255 + { 1256 + return bind_evtchn_to_irq_chip(evtchn, &xen_lateeoi_chip, NULL); 1257 + } 1258 + EXPORT_SYMBOL_GPL(bind_evtchn_to_irq_lateeoi); 1259 + 1254 1260 static int bind_ipi_to_irq(unsigned int ipi, unsigned int cpu) 1255 1261 { 1256 1262 struct evtchn_bind_ipi bind_ipi;
+7 -3
fs/io_uring.c
··· 2891 2891 req->flags |= io_file_get_flags(file) << REQ_F_SUPPORT_NOWAIT_BIT; 2892 2892 2893 2893 kiocb->ki_pos = READ_ONCE(sqe->off); 2894 - if (kiocb->ki_pos == -1 && !(file->f_mode & FMODE_STREAM)) { 2895 - req->flags |= REQ_F_CUR_POS; 2896 - kiocb->ki_pos = file->f_pos; 2894 + if (kiocb->ki_pos == -1) { 2895 + if (!(file->f_mode & FMODE_STREAM)) { 2896 + req->flags |= REQ_F_CUR_POS; 2897 + kiocb->ki_pos = file->f_pos; 2898 + } else { 2899 + kiocb->ki_pos = 0; 2900 + } 2897 2901 } 2898 2902 kiocb->ki_flags = iocb_flags(file); 2899 2903 ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
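The io_uring hunk above changes the meaning of an offset of -1: for seekable files it still means "use the current file position", but for stream-like files (which have no position) it now falls back to 0 instead of leaving -1 in `ki_pos`. A small model of that decision (hypothetical toy types):

```c
#include <assert.h>

/* Toy stand-in for struct file / FMODE_STREAM. */
struct toyfile {
	int is_stream;
	long long f_pos;
};

/* Mirrors the kiocb->ki_pos logic above: -1 resolves to the file position
 * for seekable files and to 0 for streams; any other offset is used as-is. */
static long long resolve_pos(const struct toyfile *f, long long requested)
{
	if (requested != -1)
		return requested;
	return f->is_stream ? 0 : f->f_pos;
}
```
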
+1 -1
fs/ksmbd/ndr.c
··· 148 148 static int ndr_read_int32(struct ndr *n, __u32 *value) 149 149 { 150 150 if (n->offset + sizeof(__u32) > n->length) 151 - return 0; 151 + return -EINVAL; 152 152 153 153 if (value) 154 154 *value = le32_to_cpu(*(__le32 *)ndr_get_field(n));
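The one-line ksmbd fix above makes a short buffer an error (`-EINVAL`) rather than a silent success in the NDR reader. The bounds-checked read pattern can be sketched on its own (hypothetical names; assumes a little-endian host for simplicity, unlike the kernel's `le32_to_cpu()`):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

struct ndr_buf {
	const unsigned char *data;
	size_t length;
	size_t offset;
};

/* Minimal model of ndr_read_int32() after the fix: refuse to read past
 * the end of the buffer and report it, instead of returning success. */
static int ndr_read_u32(struct ndr_buf *n, uint32_t *value)
{
	uint32_t v;

	if (n->offset + sizeof(uint32_t) > n->length)
		return -EINVAL;

	memcpy(&v, n->data + n->offset, sizeof(v));
	n->offset += sizeof(v);
	if (value)
		*value = v;	/* little-endian host assumed */
	return 0;
}
```
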
-3
fs/ksmbd/smb2ops.c
··· 271 271 if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB2_LEASES) 272 272 conn->vals->capabilities |= SMB2_GLOBAL_CAP_LEASING; 273 273 274 - if (conn->cipher_type) 275 - conn->vals->capabilities |= SMB2_GLOBAL_CAP_ENCRYPTION; 276 - 277 274 if (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) 278 275 conn->vals->capabilities |= SMB2_GLOBAL_CAP_MULTI_CHANNEL; 279 276
+25 -4
fs/ksmbd/smb2pdu.c
··· 915 915 } 916 916 } 917 917 918 + /** 919 + * smb3_encryption_negotiated() - checks if server and client agreed on enabling encryption 920 + * @conn: smb connection 921 + * 922 + * Return: true if connection should be encrypted, else false 923 + */ 924 + static bool smb3_encryption_negotiated(struct ksmbd_conn *conn) 925 + { 926 + if (!conn->ops->generate_encryptionkey) 927 + return false; 928 + 929 + /* 930 + * SMB 3.0 and 3.0.2 dialects use the SMB2_GLOBAL_CAP_ENCRYPTION flag. 931 + * SMB 3.1.1 uses the cipher_type field. 932 + */ 933 + return (conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) || 934 + conn->cipher_type; 935 + } 936 + 918 937 static void decode_compress_ctxt(struct ksmbd_conn *conn, 919 938 struct smb2_compression_capabilities_context *pneg_ctxt) 920 939 { ··· 1488 1469 (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED)) 1489 1470 sess->sign = true; 1490 1471 1491 - if (conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION && 1492 - conn->ops->generate_encryptionkey && 1472 + if (smb3_encryption_negotiated(conn) && 1493 1473 !(req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) { 1494 1474 rc = conn->ops->generate_encryptionkey(sess); 1495 1475 if (rc) { ··· 1577 1559 (req->SecurityMode & SMB2_NEGOTIATE_SIGNING_REQUIRED)) 1578 1560 sess->sign = true; 1579 1561 1580 - if ((conn->vals->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) && 1581 - conn->ops->generate_encryptionkey) { 1562 + if (smb3_encryption_negotiated(conn)) { 1582 1563 retval = conn->ops->generate_encryptionkey(sess); 1583 1564 if (retval) { 1584 1565 ksmbd_debug(SMB, ··· 2979 2962 &pntsd_size, &fattr); 2980 2963 posix_acl_release(fattr.cf_acls); 2981 2964 posix_acl_release(fattr.cf_dacls); 2965 + if (rc) { 2966 + kfree(pntsd); 2967 + goto err_out; 2968 + } 2982 2969 2983 2970 rc = ksmbd_vfs_set_sd_xattr(conn, 2984 2971 user_ns,
+4 -5
fs/namespace.c
··· 4263 4263 return err; 4264 4264 4265 4265 err = user_path_at(dfd, path, kattr.lookup_flags, &target); 4266 - if (err) 4267 - return err; 4268 - 4269 - err = do_mount_setattr(&target, &kattr); 4266 + if (!err) { 4267 + err = do_mount_setattr(&target, &kattr); 4268 + path_put(&target); 4269 + } 4270 4270 finish_mount_kattr(&kattr); 4271 - path_put(&target); 4272 4271 return err; 4273 4272 } 4274 4273
+4 -7
fs/nfsd/nfs3proc.c
··· 438 438 439 439 static void nfsd3_init_dirlist_pages(struct svc_rqst *rqstp, 440 440 struct nfsd3_readdirres *resp, 441 - int count) 441 + u32 count) 442 442 { 443 443 struct xdr_buf *buf = &resp->dirlist; 444 444 struct xdr_stream *xdr = &resp->xdr; 445 445 446 - count = min_t(u32, count, svc_max_payload(rqstp)); 446 + count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp)); 447 447 448 448 memset(buf, 0, sizeof(*buf)); 449 449 450 450 /* Reserve room for the NULL ptr & eof flag (-2 words) */ 451 451 buf->buflen = count - XDR_UNIT * 2; 452 452 buf->pages = rqstp->rq_next_page; 453 - while (count > 0) { 454 - rqstp->rq_next_page++; 455 - count -= PAGE_SIZE; 456 - } 453 + rqstp->rq_next_page += (buf->buflen + PAGE_SIZE - 1) >> PAGE_SHIFT; 457 454 458 455 /* This is xdr_init_encode(), but it assumes that 459 456 * the head kvec has already been consumed. */ ··· 459 462 xdr->page_ptr = buf->pages; 460 463 xdr->iov = NULL; 461 464 xdr->p = page_address(*buf->pages); 462 - xdr->end = xdr->p + (PAGE_SIZE >> 2); 465 + xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE); 463 466 xdr->rqst = NULL; 464 467 } 465 468
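The nfsd hunk above clamps the client-supplied count to at least two XDR words, which keeps `count - XDR_UNIT * 2` from underflowing on a tiny count, and replaces the page-reservation loop with a round-up. The arithmetic can be checked in isolation (hypothetical names; `max_payload` stands in for `svc_max_payload()`):

```c
#include <assert.h>

#define XDR_UNIT 4u
#define PAGE_SIZE 4096u

static unsigned int clamp_u32(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : v > hi ? hi : v;
}

/* Models nfsd3_init_dirlist_pages(): clamp, reserve two words for the
 * NULL ptr and eof flag, and round the buffer length up to whole pages. */
static unsigned int dirlist_buflen(unsigned int count, unsigned int max_payload,
				   unsigned int *npages)
{
	unsigned int buflen;

	count = clamp_u32(count, XDR_UNIT * 2, max_payload);
	buflen = count - XDR_UNIT * 2;
	*npages = (buflen + PAGE_SIZE - 1) / PAGE_SIZE;
	return buflen;
}
```
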
+4 -4
fs/nfsd/nfsproc.c
··· 556 556 557 557 static void nfsd_init_dirlist_pages(struct svc_rqst *rqstp, 558 558 struct nfsd_readdirres *resp, 559 - int count) 559 + u32 count) 560 560 { 561 561 struct xdr_buf *buf = &resp->dirlist; 562 562 struct xdr_stream *xdr = &resp->xdr; 563 563 564 - count = min_t(u32, count, PAGE_SIZE); 564 + count = clamp(count, (u32)(XDR_UNIT * 2), svc_max_payload(rqstp)); 565 565 566 566 memset(buf, 0, sizeof(*buf)); 567 567 568 568 /* Reserve room for the NULL ptr & eof flag (-2 words) */ 569 - buf->buflen = count - sizeof(__be32) * 2; 569 + buf->buflen = count - XDR_UNIT * 2; 570 570 buf->pages = rqstp->rq_next_page; 571 571 rqstp->rq_next_page++; 572 572 ··· 577 577 xdr->page_ptr = buf->pages; 578 578 xdr->iov = NULL; 579 579 xdr->p = page_address(*buf->pages); 580 - xdr->end = xdr->p + (PAGE_SIZE >> 2); 580 + xdr->end = (void *)xdr->p + min_t(u32, buf->buflen, PAGE_SIZE); 581 581 xdr->rqst = NULL; 582 582 } 583 583
+2 -2
include/linux/compiler.h
··· 121 121 asm volatile(__stringify_label(c) ":\n\t" \ 122 122 ".pushsection .discard.reachable\n\t" \ 123 123 ".long " __stringify_label(c) "b - .\n\t" \ 124 - ".popsection\n\t"); \ 124 + ".popsection\n\t" : : "i" (c)); \ 125 125 }) 126 126 #define annotate_reachable() __annotate_reachable(__COUNTER__) 127 127 ··· 129 129 asm volatile(__stringify_label(c) ":\n\t" \ 130 130 ".pushsection .discard.unreachable\n\t" \ 131 131 ".long " __stringify_label(c) "b - .\n\t" \ 132 - ".popsection\n\t"); \ 132 + ".popsection\n\t" : : "i" (c)); \ 133 133 }) 134 134 #define annotate_unreachable() __annotate_unreachable(__COUNTER__) 135 135
+6
include/linux/efi.h
··· 1283 1283 } 1284 1284 #endif 1285 1285 1286 + #ifdef CONFIG_SYSFB 1287 + extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt); 1288 + #else 1289 + static inline void efifb_setup_from_dmi(struct screen_info *si, const char *opt) { } 1290 + #endif 1291 + 1286 1292 #endif /* _LINUX_EFI_H */
+1 -1
include/linux/gfp.h
··· 624 624 625 625 void *alloc_pages_exact(size_t size, gfp_t gfp_mask) __alloc_size(1); 626 626 void free_pages_exact(void *virt, size_t size); 627 - __meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(1); 627 + __meminit void *alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) __alloc_size(2); 628 628 629 629 #define __get_free_page(gfp_mask) \ 630 630 __get_free_pages((gfp_mask), 0)
+2 -2
include/linux/instrumentation.h
··· 11 11 asm volatile(__stringify(c) ": nop\n\t" \ 12 12 ".pushsection .discard.instr_begin\n\t" \ 13 13 ".long " __stringify(c) "b - .\n\t" \ 14 - ".popsection\n\t"); \ 14 + ".popsection\n\t" : : "i" (c)); \ 15 15 }) 16 16 #define instrumentation_begin() __instrumentation_begin(__COUNTER__) 17 17 ··· 50 50 asm volatile(__stringify(c) ": nop\n\t" \ 51 51 ".pushsection .discard.instr_end\n\t" \ 52 52 ".long " __stringify(c) "b - .\n\t" \ 53 - ".popsection\n\t"); \ 53 + ".popsection\n\t" : : "i" (c)); \ 54 54 }) 55 55 #define instrumentation_end() __instrumentation_end(__COUNTER__) 56 56 #else
+2 -2
include/linux/memblock.h
··· 405 405 phys_addr_t end, int nid, bool exact_nid); 406 406 phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid); 407 407 408 - static inline phys_addr_t memblock_phys_alloc(phys_addr_t size, 409 - phys_addr_t align) 408 + static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size, 409 + phys_addr_t align) 410 410 { 411 411 return memblock_phys_alloc_range(size, align, 0, 412 412 MEMBLOCK_ALLOC_ACCESSIBLE);
+1
include/linux/mmzone.h
··· 277 277 VMSCAN_THROTTLE_WRITEBACK, 278 278 VMSCAN_THROTTLE_ISOLATED, 279 279 VMSCAN_THROTTLE_NOPROGRESS, 280 + VMSCAN_THROTTLE_CONGESTED, 280 281 NR_VMSCAN_THROTTLE, 281 282 }; 282 283
+1 -1
include/linux/netdevice.h
··· 1937 1937 * @udp_tunnel_nic: UDP tunnel offload state 1938 1938 * @xdp_state: stores info on attached XDP BPF programs 1939 1939 * 1940 - * @nested_level: Used as as a parameter of spin_lock_nested() of 1940 + * @nested_level: Used as a parameter of spin_lock_nested() of 1941 1941 * dev->addr_list_lock. 1942 1942 * @unlink_list: As netif_addr_lock() can be called recursively, 1943 1943 * keep a list of interfaces to be deleted.
-1
include/linux/pagemap.h
··· 285 285 286 286 static inline bool page_cache_add_speculative(struct page *page, int count) 287 287 { 288 - VM_BUG_ON_PAGE(PageTail(page), page); 289 288 return folio_ref_try_add_rcu((struct folio *)page, count); 290 289 } 291 290
+2 -1
include/linux/skbuff.h
··· 286 286 struct tc_skb_ext { 287 287 __u32 chain; 288 288 __u16 mru; 289 + __u16 zone; 289 290 bool post_ct; 290 291 }; 291 292 #endif ··· 1381 1380 struct flow_dissector *flow_dissector, 1382 1381 void *target_container, 1383 1382 u16 *ctinfo_map, size_t mapsize, 1384 - bool post_ct); 1383 + bool post_ct, u16 zone); 1385 1384 void 1386 1385 skb_flow_dissect_tunnel_info(const struct sk_buff *skb, 1387 1386 struct flow_dissector *flow_dissector,
+2 -2
include/linux/tee_drv.h
··· 195 195 * @offset: offset of buffer in user space 196 196 * @pages: locked pages from userspace 197 197 * @num_pages: number of locked pages 198 - * @dmabuf: dmabuf used to for exporting to user space 198 + * @refcount: reference counter 199 199 * @flags: defined by TEE_SHM_* in tee_drv.h 200 200 * @id: unique id of a shared memory object on this device, shared 201 201 * with user space ··· 214 214 unsigned int offset; 215 215 struct page **pages; 216 216 size_t num_pages; 217 - struct dma_buf *dmabuf; 217 + refcount_t refcount; 218 218 u32 flags; 219 219 int id; 220 220 u64 sec_world_id;
+23 -2
include/linux/virtio_net.h
··· 7 7 #include <uapi/linux/udp.h> 8 8 #include <uapi/linux/virtio_net.h> 9 9 10 + static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type) 11 + { 12 + switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { 13 + case VIRTIO_NET_HDR_GSO_TCPV4: 14 + return protocol == cpu_to_be16(ETH_P_IP); 15 + case VIRTIO_NET_HDR_GSO_TCPV6: 16 + return protocol == cpu_to_be16(ETH_P_IPV6); 17 + case VIRTIO_NET_HDR_GSO_UDP: 18 + return protocol == cpu_to_be16(ETH_P_IP) || 19 + protocol == cpu_to_be16(ETH_P_IPV6); 20 + default: 21 + return false; 22 + } 23 + } 24 + 10 25 static inline int virtio_net_hdr_set_proto(struct sk_buff *skb, 11 26 const struct virtio_net_hdr *hdr) 12 27 { 28 + if (skb->protocol) 29 + return 0; 30 + 13 31 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { 14 32 case VIRTIO_NET_HDR_GSO_TCPV4: 15 33 case VIRTIO_NET_HDR_GSO_UDP: ··· 106 88 if (!skb->protocol) { 107 89 __be16 protocol = dev_parse_header_protocol(skb); 108 90 109 - virtio_net_hdr_set_proto(skb, hdr); 110 - if (protocol && protocol != skb->protocol) 91 + if (!protocol) 92 + virtio_net_hdr_set_proto(skb, hdr); 93 + else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type)) 111 94 return -EINVAL; 95 + else 96 + skb->protocol = protocol; 112 97 } 113 98 retry: 114 99 if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
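The virtio_net change above only trusts a protocol derived from the link-layer header when it is consistent with the GSO type, closing the hole where a mismatching protocol was accepted. The consistency check itself is a pure function and easy to model (constants taken from the uapi headers):

```c
#include <assert.h>
#include <stdint.h>

/* Values from include/uapi/linux/virtio_net.h and if_ether.h. */
#define GSO_TCPV4 1
#define GSO_UDP   3
#define GSO_TCPV6 4
#define GSO_ECN   0x80
#define ETH_P_IP   0x0800
#define ETH_P_IPV6 0x86DD

/* Models virtio_net_hdr_match_proto(): TCPv4 requires IPv4, TCPv6 requires
 * IPv6, legacy UDP GSO accepts either, anything else is rejected. */
static int gso_matches_proto(uint16_t proto, uint8_t gso_type)
{
	switch (gso_type & ~GSO_ECN) {
	case GSO_TCPV4:
		return proto == ETH_P_IP;
	case GSO_TCPV6:
		return proto == ETH_P_IPV6;
	case GSO_UDP:
		return proto == ETH_P_IP || proto == ETH_P_IPV6;
	default:
		return 0;
	}
}
```
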
+16
include/net/pkt_sched.h
··· 193 193 skb->tstamp = ktime_set(0, 0); 194 194 } 195 195 196 + struct tc_skb_cb { 197 + struct qdisc_skb_cb qdisc_cb; 198 + 199 + u16 mru; 200 + bool post_ct; 201 + u16 zone; /* Only valid if post_ct = true */ 202 + }; 203 + 204 + static inline struct tc_skb_cb *tc_skb_cb(const struct sk_buff *skb) 205 + { 206 + struct tc_skb_cb *cb = (struct tc_skb_cb *)skb->cb; 207 + 208 + BUILD_BUG_ON(sizeof(*cb) > sizeof_field(struct sk_buff, cb)); 209 + return cb; 210 + } 211 + 196 212 #endif
-2
include/net/sch_generic.h
··· 447 447 }; 448 448 #define QDISC_CB_PRIV_LEN 20 449 449 unsigned char data[QDISC_CB_PRIV_LEN]; 450 - u16 mru; 451 - bool post_ct; 452 450 }; 453 451 454 452 typedef void tcf_chain_head_change_t(struct tcf_proto *tp_head, void *priv);
+3 -3
include/net/sctp/sctp.h
··· 105 105 int sctp_asconf_mgmt(struct sctp_sock *, struct sctp_sockaddr_entry *); 106 106 struct sk_buff *sctp_skb_recv_datagram(struct sock *, int, int, int *); 107 107 108 + typedef int (*sctp_callback_t)(struct sctp_endpoint *, struct sctp_transport *, void *); 108 109 void sctp_transport_walk_start(struct rhashtable_iter *iter); 109 110 void sctp_transport_walk_stop(struct rhashtable_iter *iter); 110 111 struct sctp_transport *sctp_transport_get_next(struct net *net, ··· 116 115 struct net *net, 117 116 const union sctp_addr *laddr, 118 117 const union sctp_addr *paddr, void *p); 119 - int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *), 120 - int (*cb_done)(struct sctp_transport *, void *), 121 - struct net *net, int *pos, void *p); 118 + int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done, 119 + struct net *net, int *pos, void *p); 122 120 int sctp_for_each_endpoint(int (*cb)(struct sctp_endpoint *, void *), void *p); 123 121 int sctp_get_sctp_info(struct sock *sk, struct sctp_association *asoc, 124 122 struct sctp_info *info);
+2 -1
include/net/sctp/structs.h
··· 1355 1355 reconf_enable:1; 1356 1356 1357 1357 __u8 strreset_enable; 1358 + struct rcu_head rcu; 1358 1359 }; 1359 1360 1360 1361 /* Recover the outter endpoint structure. */ ··· 1371 1370 struct sctp_endpoint *sctp_endpoint_new(struct sock *, gfp_t); 1372 1371 void sctp_endpoint_free(struct sctp_endpoint *); 1373 1372 void sctp_endpoint_put(struct sctp_endpoint *); 1374 - void sctp_endpoint_hold(struct sctp_endpoint *); 1373 + int sctp_endpoint_hold(struct sctp_endpoint *ep); 1375 1374 void sctp_endpoint_add_asoc(struct sctp_endpoint *, struct sctp_association *); 1376 1375 struct sctp_association *sctp_endpoint_lookup_assoc( 1377 1376 const struct sctp_endpoint *ep,
+1 -1
include/net/sock.h
··· 431 431 #ifdef CONFIG_XFRM 432 432 struct xfrm_policy __rcu *sk_policy[2]; 433 433 #endif 434 - struct dst_entry *sk_rx_dst; 434 + struct dst_entry __rcu *sk_rx_dst; 435 435 int sk_rx_dst_ifindex; 436 436 u32 sk_rx_dst_cookie; 437 437
+3 -1
include/trace/events/vmscan.h
··· 30 30 #define _VMSCAN_THROTTLE_WRITEBACK (1 << VMSCAN_THROTTLE_WRITEBACK) 31 31 #define _VMSCAN_THROTTLE_ISOLATED (1 << VMSCAN_THROTTLE_ISOLATED) 32 32 #define _VMSCAN_THROTTLE_NOPROGRESS (1 << VMSCAN_THROTTLE_NOPROGRESS) 33 + #define _VMSCAN_THROTTLE_CONGESTED (1 << VMSCAN_THROTTLE_CONGESTED) 33 34 34 35 #define show_throttle_flags(flags) \ 35 36 (flags) ? __print_flags(flags, "|", \ 36 37 {_VMSCAN_THROTTLE_WRITEBACK, "VMSCAN_THROTTLE_WRITEBACK"}, \ 37 38 {_VMSCAN_THROTTLE_ISOLATED, "VMSCAN_THROTTLE_ISOLATED"}, \ 38 - {_VMSCAN_THROTTLE_NOPROGRESS, "VMSCAN_THROTTLE_NOPROGRESS"} \ 39 + {_VMSCAN_THROTTLE_NOPROGRESS, "VMSCAN_THROTTLE_NOPROGRESS"}, \ 40 + {_VMSCAN_THROTTLE_CONGESTED, "VMSCAN_THROTTLE_CONGESTED"} \ 39 41 ) : "VMSCAN_THROTTLE_NONE" 40 42 41 43
+1
include/uapi/linux/byteorder/big_endian.h
··· 9 9 #define __BIG_ENDIAN_BITFIELD 10 10 #endif 11 11 12 + #include <linux/stddef.h> 12 13 #include <linux/types.h> 13 14 #include <linux/swab.h> 14 15
+1
include/uapi/linux/byteorder/little_endian.h
··· 9 9 #define __LITTLE_ENDIAN_BITFIELD 10 10 #endif 11 11 12 + #include <linux/stddef.h> 12 13 #include <linux/types.h> 13 14 #include <linux/swab.h> 14 15
+3 -3
include/uapi/linux/nfc.h
··· 263 263 #define NFC_SE_ENABLED 0x1 264 264 265 265 struct sockaddr_nfc { 266 - sa_family_t sa_family; 266 + __kernel_sa_family_t sa_family; 267 267 __u32 dev_idx; 268 268 __u32 target_idx; 269 269 __u32 nfc_protocol; ··· 271 271 272 272 #define NFC_LLCP_MAX_SERVICE_NAME 63 273 273 struct sockaddr_nfc_llcp { 274 - sa_family_t sa_family; 274 + __kernel_sa_family_t sa_family; 275 275 __u32 dev_idx; 276 276 __u32 target_idx; 277 277 __u32 nfc_protocol; 278 278 __u8 dsap; /* Destination SAP, if known */ 279 279 __u8 ssap; /* Source SAP to be bound to */ 280 280 char service_name[NFC_LLCP_MAX_SERVICE_NAME]; /* Service name URI */; 281 - size_t service_name_len; 281 + __kernel_size_t service_name_len; 282 282 }; 283 283 284 284 /* NFC socket protocols */
+1
include/xen/events.h
··· 17 17 unsigned xen_evtchn_nr_channels(void); 18 18 19 19 int bind_evtchn_to_irq(evtchn_port_t evtchn); 20 + int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn); 20 21 int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, 21 22 irq_handler_t handler, 22 23 unsigned long irqflags, const char *devname,
+11
kernel/crash_core.c
··· 6 6 7 7 #include <linux/buildid.h> 8 8 #include <linux/crash_core.h> 9 + #include <linux/init.h> 9 10 #include <linux/utsname.h> 10 11 #include <linux/vmalloc.h> 11 12 ··· 295 294 return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base, 296 295 "crashkernel=", suffix_tbl[SUFFIX_LOW]); 297 296 } 297 + 298 + /* 299 + * Add a dummy early_param handler to mark crashkernel= as a known command line 300 + * parameter and suppress incorrect warnings in init/main.c. 301 + */ 302 + static int __init parse_crashkernel_dummy(char *arg) 303 + { 304 + return 0; 305 + } 306 + early_param("crashkernel", parse_crashkernel_dummy); 298 307 299 308 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type, 300 309 void *data, size_t data_len)
+9 -6
kernel/ucount.c
··· 264 264 long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v) 265 265 { 266 266 struct ucounts *iter; 267 + long max = LONG_MAX; 267 268 long ret = 0; 268 269 269 270 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 270 - long max = READ_ONCE(iter->ns->ucount_max[type]); 271 271 long new = atomic_long_add_return(v, &iter->ucount[type]); 272 272 if (new < 0 || new > max) 273 273 ret = LONG_MAX; 274 274 else if (iter == ucounts) 275 275 ret = new; 276 + max = READ_ONCE(iter->ns->ucount_max[type]); 276 277 } 277 278 return ret; 278 279 } ··· 313 312 { 314 313 /* Caller must hold a reference to ucounts */ 315 314 struct ucounts *iter; 315 + long max = LONG_MAX; 316 316 long dec, ret = 0; 317 317 318 318 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 319 - long max = READ_ONCE(iter->ns->ucount_max[type]); 320 319 long new = atomic_long_add_return(1, &iter->ucount[type]); 321 320 if (new < 0 || new > max) 322 321 goto unwind; 323 322 if (iter == ucounts) 324 323 ret = new; 324 + max = READ_ONCE(iter->ns->ucount_max[type]); 325 325 /* 326 326 * Grab an extra ucount reference for the caller when 327 327 * the rlimit count was previously 0. ··· 341 339 return 0; 342 340 } 343 341 344 - bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max) 342 + bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long rlimit) 345 343 { 346 344 struct ucounts *iter; 347 - if (get_ucounts_value(ucounts, type) > max) 348 - return true; 345 + long max = rlimit; 346 + if (rlimit > LONG_MAX) 347 + max = LONG_MAX; 349 348 for (iter = ucounts; iter; iter = iter->ns->ucounts) { 350 - max = READ_ONCE(iter->ns->ucount_max[type]); 351 349 if (get_ucounts_value(iter, type) > max) 352 350 return true; 351 + max = READ_ONCE(iter->ns->ucount_max[type]); 353 352 } 354 353 return false; 355 354 }
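The ucounts changes above move the limit check to a "lagging max" during the walk up the namespace tree: the leaf level is checked against the caller's limit (or `LONG_MAX`), and each ancestor against the limit read from the level below it. A standalone sketch of that traversal (hypothetical names; a linked `struct level` stands in for the namespace chain):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

struct level {
	long count;		/* counter at this namespace level */
	long max;		/* limit this namespace imposes */
	struct level *parent;
};

/* Models is_ucounts_overlimit(): note max is updated *after* the check,
 * so each level's count is compared against the previous level's limit. */
static int over_limit(const struct level *leaf, unsigned long rlimit)
{
	long max = rlimit > LONG_MAX ? LONG_MAX : (long)rlimit;
	const struct level *iter;

	for (iter = leaf; iter; iter = iter->parent) {
		if (iter->count > max)
			return 1;
		max = iter->max;	/* lagging max for the next level */
	}
	return 0;
}
```
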
+9 -2
mm/damon/dbgfs.c
··· 353 353 const char __user *buf, size_t count, loff_t *ppos) 354 354 { 355 355 struct damon_ctx *ctx = file->private_data; 356 + struct damon_target *t, *next_t; 356 357 bool id_is_pid = true; 357 358 char *kbuf, *nrs; 358 359 unsigned long *targets; ··· 398 397 goto unlock_out; 399 398 } 400 399 401 - /* remove targets with previously-set primitive */ 402 - damon_set_targets(ctx, NULL, 0); 400 + /* remove previously set targets */ 401 + damon_for_each_target_safe(t, next_t, ctx) { 402 + if (targetid_is_pid(ctx)) 403 + put_pid((struct pid *)t->id); 404 + damon_destroy_target(t); 405 + } 403 406 404 407 /* Configure the context for the address space type */ 405 408 if (id_is_pid) ··· 655 650 if (!targetid_is_pid(ctx)) 656 651 return; 657 652 653 + mutex_lock(&ctx->kdamond_lock); 658 654 damon_for_each_target_safe(t, next, ctx) { 659 655 put_pid((struct pid *)t->id); 660 656 damon_destroy_target(t); 661 657 } 658 + mutex_unlock(&ctx->kdamond_lock); 662 659 } 663 660 664 661 static struct damon_ctx *dbgfs_new_ctx(void)
+1
mm/kfence/core.c
··· 683 683 .open = open_objects, 684 684 .read = seq_read, 685 685 .llseek = seq_lseek, 686 + .release = seq_release, 686 687 }; 687 688 688 689 static int __init kfence_debugfs_init(void)
+5 -9
mm/memory-failure.c
··· 1470 1470 if (!(flags & MF_COUNT_INCREASED)) { 1471 1471 res = get_hwpoison_page(p, flags); 1472 1472 if (!res) { 1473 - /* 1474 - * Check "filter hit" and "race with other subpage." 1475 - */ 1476 1473 lock_page(head); 1477 - if (PageHWPoison(head)) { 1478 - if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) 1479 - || (p != head && TestSetPageHWPoison(head))) { 1474 + if (hwpoison_filter(p)) { 1475 + if (TestClearPageHWPoison(head)) 1480 1476 num_poisoned_pages_dec(); 1481 - unlock_page(head); 1482 - return 0; 1483 - } 1477 + unlock_page(head); 1478 + return 0; 1484 1479 } 1485 1480 unlock_page(head); 1486 1481 res = MF_FAILED; ··· 2234 2239 } else if (ret == 0) { 2235 2240 if (soft_offline_free_page(page) && try_again) { 2236 2241 try_again = false; 2242 + flags &= ~MF_COUNT_INCREASED; 2237 2243 goto retry; 2238 2244 } 2239 2245 }
+1 -2
mm/mempolicy.c
··· 2140 2140 * memory with both reclaim and compact as well. 2141 2141 */ 2142 2142 if (!page && (gfp & __GFP_DIRECT_RECLAIM)) 2143 - page = __alloc_pages_node(hpage_node, 2144 - gfp, order); 2143 + page = __alloc_pages(gfp, order, hpage_node, nmask); 2145 2144 2146 2145 goto out; 2147 2146 }
+56 -9
mm/vmscan.c
··· 1021 1021 unlock_page(page); 1022 1022 } 1023 1023 1024 + static bool skip_throttle_noprogress(pg_data_t *pgdat) 1025 + { 1026 + int reclaimable = 0, write_pending = 0; 1027 + int i; 1028 + 1029 + /* 1030 + * If kswapd is disabled, reschedule if necessary but do not 1031 + * throttle as the system is likely near OOM. 1032 + */ 1033 + if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES) 1034 + return true; 1035 + 1036 + /* 1037 + * If there are a lot of dirty/writeback pages then do not 1038 + * throttle as throttling will occur when the pages cycle 1039 + * towards the end of the LRU if still under writeback. 1040 + */ 1041 + for (i = 0; i < MAX_NR_ZONES; i++) { 1042 + struct zone *zone = pgdat->node_zones + i; 1043 + 1044 + if (!populated_zone(zone)) 1045 + continue; 1046 + 1047 + reclaimable += zone_reclaimable_pages(zone); 1048 + write_pending += zone_page_state_snapshot(zone, 1049 + NR_ZONE_WRITE_PENDING); 1050 + } 1051 + if (2 * write_pending <= reclaimable) 1052 + return true; 1053 + 1054 + return false; 1055 + } 1056 + 1024 1057 void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason) 1025 1058 { 1026 1059 wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason]; ··· 1089 1056 } 1090 1057 1091 1058 break; 1059 + case VMSCAN_THROTTLE_CONGESTED: 1060 + fallthrough; 1092 1061 case VMSCAN_THROTTLE_NOPROGRESS: 1093 - timeout = HZ/2; 1062 + if (skip_throttle_noprogress(pgdat)) { 1063 + cond_resched(); 1064 + return; 1065 + } 1066 + 1067 + timeout = 1; 1068 + 1094 1069 break; 1095 1070 case VMSCAN_THROTTLE_ISOLATED: 1096 1071 timeout = HZ/50; ··· 3362 3321 if (!current_is_kswapd() && current_may_throttle() && 3363 3322 !sc->hibernation_mode && 3364 3323 test_bit(LRUVEC_CONGESTED, &target_lruvec->flags)) 3365 - reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK); 3324 + reclaim_throttle(pgdat, VMSCAN_THROTTLE_CONGESTED); 3366 3325 3367 3326 if (should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed, 3368 3327 sc)) ··· 3427 3386 } 3428 3387 
3429 3388 /* 3430 - * Do not throttle kswapd on NOPROGRESS as it will throttle on 3431 - * VMSCAN_THROTTLE_WRITEBACK if there are too many pages under 3432 - * writeback and marked for immediate reclaim at the tail of 3433 - * the LRU. 3389 + * Do not throttle kswapd or cgroup reclaim on NOPROGRESS as it will 3390 + * throttle on VMSCAN_THROTTLE_WRITEBACK if there are too many pages 3391 + * under writeback and marked for immediate reclaim at the tail of the 3392 + * LRU. 3434 3393 */ 3435 - if (current_is_kswapd()) 3394 + if (current_is_kswapd() || cgroup_reclaim(sc)) 3436 3395 return; 3437 3396 3438 3397 /* Throttle if making no progress at high prioities. */ 3439 - if (sc->priority < DEF_PRIORITY - 2) 3398 + if (sc->priority == 1 && !sc->nr_reclaimed) 3440 3399 reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS); 3441 3400 } 3442 3401 ··· 3456 3415 unsigned long nr_soft_scanned; 3457 3416 gfp_t orig_mask; 3458 3417 pg_data_t *last_pgdat = NULL; 3418 + pg_data_t *first_pgdat = NULL; 3459 3419 3460 3420 /* 3461 3421 * If the number of buffer_heads in the machine exceeds the maximum ··· 3520 3478 /* need some check for avoid more shrink_zone() */ 3521 3479 } 3522 3480 3481 + if (!first_pgdat) 3482 + first_pgdat = zone->zone_pgdat; 3483 + 3523 3484 /* See comment about same check for global reclaim above */ 3524 3485 if (zone->zone_pgdat == last_pgdat) 3525 3486 continue; 3526 3487 last_pgdat = zone->zone_pgdat; 3527 3488 shrink_node(zone->zone_pgdat, sc); 3528 - consider_reclaim_throttle(zone->zone_pgdat, sc); 3529 3489 } 3490 + 3491 + if (first_pgdat) 3492 + consider_reclaim_throttle(first_pgdat, sc); 3530 3493 3531 3494 /* 3532 3495 * Restore to original mask to avoid the impact on the caller if we
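The `skip_throttle_noprogress()` heuristic added above boils down to two conditions: never throttle when kswapd has already failed repeatedly (the system is likely near OOM), and skip throttling when less than half of the reclaimable pages are dirty or under writeback, since those pages will self-throttle at the LRU tail. That decision is easy to state as a pure function:

```c
#include <assert.h>

#define MAX_RECLAIM_RETRIES 16

/* Models skip_throttle_noprogress(): returns nonzero when NOPROGRESS
 * throttling should be skipped. reclaimable/write_pending stand in for
 * the per-zone sums computed in the hunk above. */
static int skip_throttle(int kswapd_failures, unsigned long reclaimable,
			 unsigned long write_pending)
{
	if (kswapd_failures >= MAX_RECLAIM_RETRIES)
		return 1;
	return 2 * write_pending <= reclaimable;
}
```
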
+3 -1
net/ax25/af_ax25.c
··· 85 85 again: 86 86 ax25_for_each(s, &ax25_list) { 87 87 if (s->ax25_dev == ax25_dev) { 88 - s->ax25_dev = NULL; 89 88 spin_unlock_bh(&ax25_list_lock); 89 + lock_sock(s->sk); 90 + s->ax25_dev = NULL; 91 + release_sock(s->sk); 90 92 ax25_disconnect(s, ENETUNREACH); 91 93 spin_lock_bh(&ax25_list_lock); 92 94
+1 -1
net/bridge/br_ioctl.c
··· 337 337 338 338 args[2] = get_bridge_ifindices(net, indices, args[2]); 339 339 340 - ret = copy_to_user(uarg, indices, 340 + ret = copy_to_user((void __user *)args[1], indices, 341 341 array_size(args[2], sizeof(int))) 342 342 ? -EFAULT : args[2]; 343 343
+32
net/bridge/br_multicast.c
··· 4522 4522 } 4523 4523 #endif 4524 4524 4525 + void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx, 4526 + unsigned long val) 4527 + { 4528 + unsigned long intvl_jiffies = clock_t_to_jiffies(val); 4529 + 4530 + if (intvl_jiffies < BR_MULTICAST_QUERY_INTVL_MIN) { 4531 + br_info(brmctx->br, 4532 + "trying to set multicast query interval below minimum, setting to %lu (%ums)\n", 4533 + jiffies_to_clock_t(BR_MULTICAST_QUERY_INTVL_MIN), 4534 + jiffies_to_msecs(BR_MULTICAST_QUERY_INTVL_MIN)); 4535 + intvl_jiffies = BR_MULTICAST_QUERY_INTVL_MIN; 4536 + } 4537 + 4538 + brmctx->multicast_query_interval = intvl_jiffies; 4539 + } 4540 + 4541 + void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx, 4542 + unsigned long val) 4543 + { 4544 + unsigned long intvl_jiffies = clock_t_to_jiffies(val); 4545 + 4546 + if (intvl_jiffies < BR_MULTICAST_STARTUP_QUERY_INTVL_MIN) { 4547 + br_info(brmctx->br, 4548 + "trying to set multicast startup query interval below minimum, setting to %lu (%ums)\n", 4549 + jiffies_to_clock_t(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN), 4550 + jiffies_to_msecs(BR_MULTICAST_STARTUP_QUERY_INTVL_MIN)); 4551 + intvl_jiffies = BR_MULTICAST_STARTUP_QUERY_INTVL_MIN; 4552 + } 4553 + 4554 + brmctx->multicast_startup_query_interval = intvl_jiffies; 4555 + } 4556 + 4525 4557 /** 4526 4558 * br_multicast_list_adjacent - Returns snooped multicast addresses 4527 4559 * @dev: The bridge port adjacent to which to retrieve addresses
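The two new helpers above share one behaviour: a user-supplied interval below the floor is silently raised to the floor (plus a `br_info()` log line). A minimal sketch of that clamping, with plain numbers standing in for jiffies and the helper name ours rather than the kernel's:

```c
#include <assert.h>

/* Illustrative clamp mirroring br_multicast_set_query_intvl():
 * values below the minimum are raised to it. */
static unsigned long set_query_intvl(unsigned long val, unsigned long min)
{
	return val < min ? min : val;
}
```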
+2 -2
net/bridge/br_netlink.c
··· 1357 1357 if (data[IFLA_BR_MCAST_QUERY_INTVL]) { 1358 1358 u64 val = nla_get_u64(data[IFLA_BR_MCAST_QUERY_INTVL]); 1359 1359 1360 - br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 1360 + br_multicast_set_query_intvl(&br->multicast_ctx, val); 1361 1361 } 1362 1362 1363 1363 if (data[IFLA_BR_MCAST_QUERY_RESPONSE_INTVL]) { ··· 1369 1369 if (data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]) { 1370 1370 u64 val = nla_get_u64(data[IFLA_BR_MCAST_STARTUP_QUERY_INTVL]); 1371 1371 1372 - br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 1372 + br_multicast_set_startup_query_intvl(&br->multicast_ctx, val); 1373 1373 } 1374 1374 1375 1375 if (data[IFLA_BR_MCAST_STATS_ENABLED]) {
+9 -3
net/bridge/br_private.h
··· 28 28 #define BR_MAX_PORTS (1<<BR_PORT_BITS) 29 29 30 30 #define BR_MULTICAST_DEFAULT_HASH_MAX 4096 31 + #define BR_MULTICAST_QUERY_INTVL_MIN msecs_to_jiffies(1000) 32 + #define BR_MULTICAST_STARTUP_QUERY_INTVL_MIN BR_MULTICAST_QUERY_INTVL_MIN 31 33 32 34 #define BR_HWDOM_MAX BITS_PER_LONG 33 35 ··· 965 963 int nest_attr); 966 964 size_t br_multicast_querier_state_size(void); 967 965 size_t br_rports_size(const struct net_bridge_mcast *brmctx); 966 + void br_multicast_set_query_intvl(struct net_bridge_mcast *brmctx, 967 + unsigned long val); 968 + void br_multicast_set_startup_query_intvl(struct net_bridge_mcast *brmctx, 969 + unsigned long val); 968 970 969 971 static inline bool br_group_is_l2(const struct br_ip *group) 970 972 { ··· 1153 1147 static inline bool 1154 1148 br_multicast_ctx_vlan_global_disabled(const struct net_bridge_mcast *brmctx) 1155 1149 { 1156 - return br_opt_get(brmctx->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) && 1157 - br_multicast_ctx_is_vlan(brmctx) && 1158 - !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED); 1150 + return br_multicast_ctx_is_vlan(brmctx) && 1151 + (!br_opt_get(brmctx->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED) || 1152 + !(brmctx->vlan->priv_flags & BR_VLFLAG_GLOBAL_MCAST_ENABLED)); 1159 1153 } 1160 1154 1161 1155 static inline bool
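The rewritten `br_multicast_ctx_vlan_global_disabled()` above changes the boolean structure: a per-VLAN context now counts as globally disabled when either bridge-wide VLAN snooping is off or the VLAN's own multicast flag is off (previously, snooping being off made the function return false). A truth-table-style sketch, with plain booleans standing in for the option and flag reads:

```c
#include <assert.h>
#include <stdbool.h>

/* Boolean shape of the fixed check: vlan ctx AND
 * (snooping disabled OR vlan mcast disabled). */
static bool vlan_ctx_global_disabled(bool is_vlan_ctx, bool snooping_on,
				     bool vlan_mcast_on)
{
	return is_vlan_ctx && (!snooping_on || !vlan_mcast_on);
}
```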
+2 -2
net/bridge/br_sysfs_br.c
··· 658 658 static int set_query_interval(struct net_bridge *br, unsigned long val, 659 659 struct netlink_ext_ack *extack) 660 660 { 661 - br->multicast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 661 + br_multicast_set_query_intvl(&br->multicast_ctx, val); 662 662 return 0; 663 663 } 664 664 ··· 706 706 static int set_startup_query_interval(struct net_bridge *br, unsigned long val, 707 707 struct netlink_ext_ack *extack) 708 708 { 709 - br->multicast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 709 + br_multicast_set_startup_query_intvl(&br->multicast_ctx, val); 710 710 return 0; 711 711 } 712 712
+2 -2
net/bridge/br_vlan_options.c
··· 521 521 u64 val; 522 522 523 523 val = nla_get_u64(tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_INTVL]); 524 - v->br_mcast_ctx.multicast_query_interval = clock_t_to_jiffies(val); 524 + br_multicast_set_query_intvl(&v->br_mcast_ctx, val); 525 525 *changed = true; 526 526 } 527 527 if (tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERY_RESPONSE_INTVL]) { ··· 535 535 u64 val; 536 536 537 537 val = nla_get_u64(tb[BRIDGE_VLANDB_GOPTS_MCAST_STARTUP_QUERY_INTVL]); 538 - v->br_mcast_ctx.multicast_startup_query_interval = clock_t_to_jiffies(val); 538 + br_multicast_set_startup_query_intvl(&v->br_mcast_ctx, val); 539 539 *changed = true; 540 540 } 541 541 if (tb[BRIDGE_VLANDB_GOPTS_MCAST_QUERIER]) {
+4 -4
net/core/dev.c
··· 3941 3941 return skb; 3942 3942 3943 3943 /* qdisc_skb_cb(skb)->pkt_len was already set by the caller. */ 3944 - qdisc_skb_cb(skb)->mru = 0; 3945 - qdisc_skb_cb(skb)->post_ct = false; 3944 + tc_skb_cb(skb)->mru = 0; 3945 + tc_skb_cb(skb)->post_ct = false; 3946 3946 mini_qdisc_bstats_cpu_update(miniq, skb); 3947 3947 3948 3948 switch (tcf_classify(skb, miniq->block, miniq->filter_list, &cl_res, false)) { ··· 5103 5103 } 5104 5104 5105 5105 qdisc_skb_cb(skb)->pkt_len = skb->len; 5106 - qdisc_skb_cb(skb)->mru = 0; 5107 - qdisc_skb_cb(skb)->post_ct = false; 5106 + tc_skb_cb(skb)->mru = 0; 5107 + tc_skb_cb(skb)->post_ct = false; 5108 5108 skb->tc_at_ingress = 1; 5109 5109 mini_qdisc_bstats_cpu_update(miniq, skb); 5110 5110
+2 -1
net/core/flow_dissector.c
··· 238 238 skb_flow_dissect_ct(const struct sk_buff *skb, 239 239 struct flow_dissector *flow_dissector, 240 240 void *target_container, u16 *ctinfo_map, 241 - size_t mapsize, bool post_ct) 241 + size_t mapsize, bool post_ct, u16 zone) 242 242 { 243 243 #if IS_ENABLED(CONFIG_NF_CONNTRACK) 244 244 struct flow_dissector_key_ct *key; ··· 260 260 if (!ct) { 261 261 key->ct_state = TCA_FLOWER_KEY_CT_FLAGS_TRACKED | 262 262 TCA_FLOWER_KEY_CT_FLAGS_INVALID; 263 + key->ct_zone = zone; 263 264 return; 264 265 } 265 266
+5 -1
net/dsa/tag_ocelot.c
··· 47 47 void *injection; 48 48 __be32 *prefix; 49 49 u32 rew_op = 0; 50 + u64 qos_class; 50 51 51 52 ocelot_xmit_get_vlan_info(skb, dp, &vlan_tci, &tag_type); 53 + 54 + qos_class = netdev_get_num_tc(netdev) ? 55 + netdev_get_prio_tc_map(netdev, skb->priority) : skb->priority; 52 56 53 57 injection = skb_push(skb, OCELOT_TAG_LEN); 54 58 prefix = skb_push(skb, OCELOT_SHORT_PREFIX_LEN); ··· 61 57 memset(injection, 0, OCELOT_TAG_LEN); 62 58 ocelot_ifh_set_bypass(injection, 1); 63 59 ocelot_ifh_set_src(injection, ds->num_ports); 64 - ocelot_ifh_set_qos_class(injection, skb->priority); 60 + ocelot_ifh_set_qos_class(injection, qos_class); 65 61 ocelot_ifh_set_vlan_tci(injection, vlan_tci); 66 62 ocelot_ifh_set_tag_type(injection, tag_type); 67 63
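The `qos_class` selection added to `ocelot_xmit()` above falls back to `skb->priority` only when the netdev has no traffic classes configured. A small sketch of that selection, with a fixed array standing in for `netdev_get_prio_tc_map()` and both names ours:

```c
#include <assert.h>

/* Hypothetical prio->tc map for illustration only. */
static const unsigned char demo_prio_tc_map[8] = {0, 1, 1, 2, 2, 3, 3, 3};

/* If traffic classes exist (num_tc != 0), inject the mapped class;
 * otherwise use the raw skb priority. */
static unsigned long demo_qos_class(int num_tc, unsigned long priority)
{
	return num_tc ? demo_prio_tc_map[priority] : priority;
}
```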
+5 -7
net/ipv4/af_inet.c
··· 154 154 155 155 kfree(rcu_dereference_protected(inet->inet_opt, 1)); 156 156 dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1)); 157 - dst_release(sk->sk_rx_dst); 157 + dst_release(rcu_dereference_protected(sk->sk_rx_dst, 1)); 158 158 sk_refcnt_debug_dec(sk); 159 159 } 160 160 EXPORT_SYMBOL(inet_sock_destruct); ··· 1994 1994 1995 1995 ip_init(); 1996 1996 1997 + /* Initialise per-cpu ipv4 mibs */ 1998 + if (init_ipv4_mibs()) 1999 + panic("%s: Cannot init ipv4 mibs\n", __func__); 2000 + 1997 2001 /* Setup TCP slab cache for open requests. */ 1998 2002 tcp_init(); 1999 2003 ··· 2028 2024 2029 2025 if (init_inet_pernet_ops()) 2030 2026 pr_crit("%s: Cannot init ipv4 inet pernet ops\n", __func__); 2031 - /* 2032 - * Initialise per-cpu ipv4 mibs 2033 - */ 2034 - 2035 - if (init_ipv4_mibs()) 2036 - pr_crit("%s: Cannot init ipv4 mibs\n", __func__); 2037 2027 2038 2028 ipv4_proc_init(); 2039 2029
+1 -2
net/ipv4/tcp.c
··· 3012 3012 icsk->icsk_ack.rcv_mss = TCP_MIN_MSS; 3013 3013 memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); 3014 3014 __sk_dst_reset(sk); 3015 - dst_release(sk->sk_rx_dst); 3016 - sk->sk_rx_dst = NULL; 3015 + dst_release(xchg((__force struct dst_entry **)&sk->sk_rx_dst, NULL)); 3017 3016 tcp_saved_syn_free(tp); 3018 3017 tp->compressed_ack = 0; 3019 3018 tp->segs_in = 0;
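The `dst_release(xchg(..., NULL))` line above replaces a racy two-step clear with one atomic swap: whoever performs the exchange owns the old pointer exclusively and is the only path that releases it. A self-contained C11 sketch of the idiom (an `int` stands in for `struct dst_entry`, and the demo merely checks ownership rather than dropping a refcount):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the detach-and-release idiom: atomically take the cached
 * pointer out of the slot so exactly one caller owns the old value. */
static bool demo_detach(void)
{
	static int dst;			/* stands in for a struct dst_entry */
	_Atomic(int *) sk_rx_dst = &dst;

	int *old = atomic_exchange(&sk_rx_dst, (int *)NULL);
	/* "old" is now exclusively owned; a real caller would release it */
	return old == &dst && atomic_load(&sk_rx_dst) == NULL;
}
```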
+1 -1
net/ipv4/tcp_input.c
··· 5787 5787 trace_tcp_probe(sk, skb); 5788 5788 5789 5789 tcp_mstamp_refresh(tp); 5790 - if (unlikely(!sk->sk_rx_dst)) 5790 + if (unlikely(!rcu_access_pointer(sk->sk_rx_dst))) 5791 5791 inet_csk(sk)->icsk_af_ops->sk_rx_dst_set(sk, skb); 5792 5792 /* 5793 5793 * Header prediction.
+7 -4
net/ipv4/tcp_ipv4.c
··· 1701 1701 struct sock *rsk; 1702 1702 1703 1703 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ 1704 - struct dst_entry *dst = sk->sk_rx_dst; 1704 + struct dst_entry *dst; 1705 + 1706 + dst = rcu_dereference_protected(sk->sk_rx_dst, 1707 + lockdep_sock_is_held(sk)); 1705 1708 1706 1709 sock_rps_save_rxhash(sk, skb); 1707 1710 sk_mark_napi_id(sk, skb); ··· 1712 1709 if (sk->sk_rx_dst_ifindex != skb->skb_iif || 1713 1710 !INDIRECT_CALL_1(dst->ops->check, ipv4_dst_check, 1714 1711 dst, 0)) { 1712 + RCU_INIT_POINTER(sk->sk_rx_dst, NULL); 1715 1713 dst_release(dst); 1716 - sk->sk_rx_dst = NULL; 1717 1714 } 1718 1715 } 1719 1716 tcp_rcv_established(sk, skb); ··· 1789 1786 skb->sk = sk; 1790 1787 skb->destructor = sock_edemux; 1791 1788 if (sk_fullsock(sk)) { 1792 - struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1789 + struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst); 1793 1790 1794 1791 if (dst) 1795 1792 dst = dst_check(dst, 0); ··· 2204 2201 struct dst_entry *dst = skb_dst(skb); 2205 2202 2206 2203 if (dst && dst_hold_safe(dst)) { 2207 - sk->sk_rx_dst = dst; 2204 + rcu_assign_pointer(sk->sk_rx_dst, dst); 2208 2205 sk->sk_rx_dst_ifindex = skb->skb_iif; 2209 2206 } 2210 2207 }
+4 -4
net/ipv4/udp.c
··· 2250 2250 struct dst_entry *old; 2251 2251 2252 2252 if (dst_hold_safe(dst)) { 2253 - old = xchg(&sk->sk_rx_dst, dst); 2253 + old = xchg((__force struct dst_entry **)&sk->sk_rx_dst, dst); 2254 2254 dst_release(old); 2255 2255 return old != dst; 2256 2256 } ··· 2440 2440 struct dst_entry *dst = skb_dst(skb); 2441 2441 int ret; 2442 2442 2443 - if (unlikely(sk->sk_rx_dst != dst)) 2443 + if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst)) 2444 2444 udp_sk_rx_dst_set(sk, dst); 2445 2445 2446 2446 ret = udp_unicast_rcv_skb(sk, skb, uh); ··· 2599 2599 2600 2600 skb->sk = sk; 2601 2601 skb->destructor = sock_efree; 2602 - dst = READ_ONCE(sk->sk_rx_dst); 2602 + dst = rcu_dereference(sk->sk_rx_dst); 2603 2603 2604 2604 if (dst) 2605 2605 dst = dst_check(dst, 0); ··· 3075 3075 { 3076 3076 seq_setwidth(seq, 127); 3077 3077 if (v == SEQ_START_TOKEN) 3078 - seq_puts(seq, " sl local_address rem_address st tx_queue " 3078 + seq_puts(seq, " sl local_address rem_address st tx_queue " 3079 3079 "rx_queue tr tm->when retrnsmt uid timeout " 3080 3080 "inode ref pointer drops"); 3081 3081 else {
+2
net/ipv6/ip6_vti.c
··· 808 808 struct net *net = dev_net(dev); 809 809 struct vti6_net *ip6n = net_generic(net, vti6_net_id); 810 810 811 + memset(&p1, 0, sizeof(p1)); 812 + 811 813 switch (cmd) { 812 814 case SIOCGETTUNNEL: 813 815 if (dev == ip6n->fb_tnl_dev) {
+3
net/ipv6/raw.c
··· 1020 1020 struct raw6_sock *rp = raw6_sk(sk); 1021 1021 int val; 1022 1022 1023 + if (optlen < sizeof(val)) 1024 + return -EINVAL; 1025 + 1023 1026 if (copy_from_sockptr(&val, optval, sizeof(val))) 1024 1027 return -EFAULT; 1025 1028
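The rawv6 fix above is a classic length-validation pattern: check `optlen` against the size of the value before copying, instead of reading past a short user buffer. A minimal model, with `memcpy()` standing in for `copy_from_sockptr()` and the function name ours:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Reject option buffers shorter than the integer being read,
 * mirroring the optlen < sizeof(val) check in the hunk above. */
static int read_int_opt(int *val, const void *optval, size_t optlen)
{
	if (optlen < sizeof(*val))
		return -EINVAL;
	memcpy(val, optval, sizeof(*val));
	return 0;
}
```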
+7 -4
net/ipv6/tcp_ipv6.c
··· 107 107 if (dst && dst_hold_safe(dst)) { 108 108 const struct rt6_info *rt = (const struct rt6_info *)dst; 109 109 110 - sk->sk_rx_dst = dst; 110 + rcu_assign_pointer(sk->sk_rx_dst, dst); 111 111 sk->sk_rx_dst_ifindex = skb->skb_iif; 112 112 sk->sk_rx_dst_cookie = rt6_get_cookie(rt); 113 113 } ··· 1505 1505 opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC)); 1506 1506 1507 1507 if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */ 1508 - struct dst_entry *dst = sk->sk_rx_dst; 1508 + struct dst_entry *dst; 1509 + 1510 + dst = rcu_dereference_protected(sk->sk_rx_dst, 1511 + lockdep_sock_is_held(sk)); 1509 1512 1510 1513 sock_rps_save_rxhash(sk, skb); 1511 1514 sk_mark_napi_id(sk, skb); ··· 1516 1513 if (sk->sk_rx_dst_ifindex != skb->skb_iif || 1517 1514 INDIRECT_CALL_1(dst->ops->check, ip6_dst_check, 1518 1515 dst, sk->sk_rx_dst_cookie) == NULL) { 1516 + RCU_INIT_POINTER(sk->sk_rx_dst, NULL); 1519 1517 dst_release(dst); 1520 - sk->sk_rx_dst = NULL; 1521 1518 } 1522 1519 } 1523 1520 ··· 1877 1874 skb->sk = sk; 1878 1875 skb->destructor = sock_edemux; 1879 1876 if (sk_fullsock(sk)) { 1880 - struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1877 + struct dst_entry *dst = rcu_dereference(sk->sk_rx_dst); 1881 1878 1882 1879 if (dst) 1883 1880 dst = dst_check(dst, sk->sk_rx_dst_cookie);
+3 -3
net/ipv6/udp.c
··· 956 956 struct dst_entry *dst = skb_dst(skb); 957 957 int ret; 958 958 959 - if (unlikely(sk->sk_rx_dst != dst)) 959 + if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst)) 960 960 udp6_sk_rx_dst_set(sk, dst); 961 961 962 962 if (!uh->check && !udp_sk(sk)->no_check6_rx) { ··· 1070 1070 1071 1071 skb->sk = sk; 1072 1072 skb->destructor = sock_efree; 1073 - dst = READ_ONCE(sk->sk_rx_dst); 1073 + dst = rcu_dereference(sk->sk_rx_dst); 1074 1074 1075 1075 if (dst) 1076 1076 dst = dst_check(dst, sk->sk_rx_dst_cookie); ··· 1204 1204 kfree_skb(skb); 1205 1205 return -EINVAL; 1206 1206 } 1207 - if (skb->len > cork->gso_size * UDP_MAX_SEGMENTS) { 1207 + if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) { 1208 1208 kfree_skb(skb); 1209 1209 return -EINVAL; 1210 1210 }
+3
net/mac80211/cfg.c
··· 1264 1264 return 0; 1265 1265 1266 1266 error: 1267 + mutex_lock(&local->mtx); 1267 1268 ieee80211_vif_release_channel(sdata); 1269 + mutex_unlock(&local->mtx); 1270 + 1268 1271 return err; 1269 1272 } 1270 1273
+5 -1
net/ncsi/ncsi-netlink.c
··· 112 112 pnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR); 113 113 if (!pnest) 114 114 return -ENOMEM; 115 - nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id); 115 + rc = nla_put_u32(skb, NCSI_PKG_ATTR_ID, np->id); 116 + if (rc) { 117 + nla_nest_cancel(skb, pnest); 118 + return rc; 119 + } 116 120 if ((0x1 << np->id) == ndp->package_whitelist) 117 121 nla_put_flag(skb, NCSI_PKG_ATTR_FORCED); 118 122 cnest = nla_nest_start_noflag(skb, NCSI_PKG_ATTR_CHANNEL_LIST);
+3 -2
net/netfilter/nf_conntrack_netlink.c
··· 1195 1195 } 1196 1196 hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[cb->args[0]], 1197 1197 hnnode) { 1198 - if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL) 1199 - continue; 1200 1198 ct = nf_ct_tuplehash_to_ctrack(h); 1201 1199 if (nf_ct_is_expired(ct)) { 1202 1200 if (i < ARRAY_SIZE(nf_ct_evict) && ··· 1204 1206 } 1205 1207 1206 1208 if (!net_eq(net, nf_ct_net(ct))) 1209 + continue; 1210 + 1211 + if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL) 1207 1212 continue; 1208 1213 1209 1214 if (cb->args[1]) {
+2 -2
net/netfilter/nf_tables_api.c
··· 4481 4481 static void nft_set_catchall_destroy(const struct nft_ctx *ctx, 4482 4482 struct nft_set *set) 4483 4483 { 4484 - struct nft_set_elem_catchall *catchall; 4484 + struct nft_set_elem_catchall *next, *catchall; 4485 4485 4486 - list_for_each_entry_rcu(catchall, &set->catchall_list, list) { 4486 + list_for_each_entry_safe(catchall, next, &set->catchall_list, list) { 4487 4487 list_del_rcu(&catchall->list); 4488 4488 nft_set_elem_destroy(set, catchall->elem, true); 4489 4489 kfree_rcu(catchall);
+7 -1
net/openvswitch/flow.c
··· 34 34 #include <net/mpls.h> 35 35 #include <net/ndisc.h> 36 36 #include <net/nsh.h> 37 + #include <net/netfilter/nf_conntrack_zones.h> 37 38 38 39 #include "conntrack.h" 39 40 #include "datapath.h" ··· 861 860 #endif 862 861 bool post_ct = false; 863 862 int res, err; 863 + u16 zone = 0; 864 864 865 865 /* Extract metadata from packet. */ 866 866 if (tun_info) { ··· 900 898 key->recirc_id = tc_ext ? tc_ext->chain : 0; 901 899 OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0; 902 900 post_ct = tc_ext ? tc_ext->post_ct : false; 901 + zone = post_ct ? tc_ext->zone : 0; 903 902 } else { 904 903 key->recirc_id = 0; 905 904 } ··· 909 906 #endif 910 907 911 908 err = key_extract(skb, key); 912 - if (!err) 909 + if (!err) { 913 910 ovs_ct_fill_key(skb, key, post_ct); /* Must be after key_extract(). */ 911 + if (post_ct && !skb_get_nfct(skb)) 912 + key->ct_zone = zone; 913 + } 914 914 return err; 915 915 } 916 916
+2
net/phonet/pep.c
··· 947 947 ret = -EBUSY; 948 948 else if (sk->sk_state == TCP_ESTABLISHED) 949 949 ret = -EISCONN; 950 + else if (!pn->pn_sk.sobject) 951 + ret = -EADDRNOTAVAIL; 950 952 else 951 953 ret = pep_sock_enable(sk, NULL, 0); 952 954 release_sock(sk);
+8 -7
net/sched/act_ct.c
··· 690 690 u8 family, u16 zone, bool *defrag) 691 691 { 692 692 enum ip_conntrack_info ctinfo; 693 - struct qdisc_skb_cb cb; 694 693 struct nf_conn *ct; 695 694 int err = 0; 696 695 bool frag; 696 + u16 mru; 697 697 698 698 /* Previously seen (loopback)? Ignore. */ 699 699 ct = nf_ct_get(skb, &ctinfo); ··· 708 708 return err; 709 709 710 710 skb_get(skb); 711 - cb = *qdisc_skb_cb(skb); 711 + mru = tc_skb_cb(skb)->mru; 712 712 713 713 if (family == NFPROTO_IPV4) { 714 714 enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; ··· 722 722 723 723 if (!err) { 724 724 *defrag = true; 725 - cb.mru = IPCB(skb)->frag_max_size; 725 + mru = IPCB(skb)->frag_max_size; 726 726 } 727 727 } else { /* NFPROTO_IPV6 */ 728 728 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) ··· 735 735 736 736 if (!err) { 737 737 *defrag = true; 738 - cb.mru = IP6CB(skb)->frag_max_size; 738 + mru = IP6CB(skb)->frag_max_size; 739 739 } 740 740 #else 741 741 err = -EOPNOTSUPP; ··· 744 744 } 745 745 746 746 if (err != -EINPROGRESS) 747 - *qdisc_skb_cb(skb) = cb; 747 + tc_skb_cb(skb)->mru = mru; 748 748 skb_clear_hash(skb); 749 749 skb->ignore_df = 1; 750 750 return err; ··· 963 963 tcf_action_update_bstats(&c->common, skb); 964 964 965 965 if (clear) { 966 - qdisc_skb_cb(skb)->post_ct = false; 966 + tc_skb_cb(skb)->post_ct = false; 967 967 ct = nf_ct_get(skb, &ctinfo); 968 968 if (ct) { 969 969 nf_conntrack_put(&ct->ct_general); ··· 1048 1048 out_push: 1049 1049 skb_push_rcsum(skb, nh_ofs); 1050 1050 1051 - qdisc_skb_cb(skb)->post_ct = true; 1051 + tc_skb_cb(skb)->post_ct = true; 1052 + tc_skb_cb(skb)->zone = p->zone; 1052 1053 out_clear: 1053 1054 if (defrag) 1054 1055 qdisc_skb_cb(skb)->pkt_len = skb->len;
+5 -2
net/sched/cls_api.c
··· 1617 1617 1618 1618 /* If we missed on some chain */ 1619 1619 if (ret == TC_ACT_UNSPEC && last_executed_chain) { 1620 + struct tc_skb_cb *cb = tc_skb_cb(skb); 1621 + 1620 1622 ext = tc_skb_ext_alloc(skb); 1621 1623 if (WARN_ON_ONCE(!ext)) 1622 1624 return TC_ACT_SHOT; 1623 1625 ext->chain = last_executed_chain; 1624 - ext->mru = qdisc_skb_cb(skb)->mru; 1625 - ext->post_ct = qdisc_skb_cb(skb)->post_ct; 1626 + ext->mru = cb->mru; 1627 + ext->post_ct = cb->post_ct; 1628 + ext->zone = cb->zone; 1626 1629 } 1627 1630 1628 1631 return ret;
+4 -2
net/sched/cls_flower.c
··· 19 19 20 20 #include <net/sch_generic.h> 21 21 #include <net/pkt_cls.h> 22 + #include <net/pkt_sched.h> 22 23 #include <net/ip.h> 23 24 #include <net/flow_dissector.h> 24 25 #include <net/geneve.h> ··· 310 309 struct tcf_result *res) 311 310 { 312 311 struct cls_fl_head *head = rcu_dereference_bh(tp->root); 313 - bool post_ct = qdisc_skb_cb(skb)->post_ct; 312 + bool post_ct = tc_skb_cb(skb)->post_ct; 313 + u16 zone = tc_skb_cb(skb)->zone; 314 314 struct fl_flow_key skb_key; 315 315 struct fl_flow_mask *mask; 316 316 struct cls_fl_filter *f; ··· 329 327 skb_flow_dissect_ct(skb, &mask->dissector, &skb_key, 330 328 fl_ct_info_to_flower_map, 331 329 ARRAY_SIZE(fl_ct_info_to_flower_map), 332 - post_ct); 330 + post_ct, zone); 333 331 skb_flow_dissect_hash(skb, &mask->dissector, &skb_key); 334 332 skb_flow_dissect(skb, &mask->dissector, &skb_key, 335 333 FLOW_DISSECTOR_F_STOP_BEFORE_ENCAP);
+2 -1
net/sched/sch_frag.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB 2 2 #include <net/netlink.h> 3 3 #include <net/sch_generic.h> 4 + #include <net/pkt_sched.h> 4 5 #include <net/dst.h> 5 6 #include <net/ip.h> 6 7 #include <net/ip6_fib.h> ··· 138 137 139 138 int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb)) 140 139 { 141 - u16 mru = qdisc_skb_cb(skb)->mru; 140 + u16 mru = tc_skb_cb(skb)->mru; 142 141 int err; 143 142 144 143 if (mru && skb->len > mru + skb->dev->hard_header_len)
+6 -6
net/sctp/diag.c
··· 290 290 return err; 291 291 } 292 292 293 - static int sctp_sock_dump(struct sctp_transport *tsp, void *p) 293 + static int sctp_sock_dump(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p) 294 294 { 295 - struct sctp_endpoint *ep = tsp->asoc->ep; 296 295 struct sctp_comm_param *commp = p; 297 296 struct sock *sk = ep->base.sk; 298 297 struct sk_buff *skb = commp->skb; ··· 301 302 int err = 0; 302 303 303 304 lock_sock(sk); 305 + if (ep != tsp->asoc->ep) 306 + goto release; 304 307 list_for_each_entry(assoc, &ep->asocs, asocs) { 305 308 if (cb->args[4] < cb->args[1]) 306 309 goto next; ··· 345 344 return err; 346 345 } 347 346 348 - static int sctp_sock_filter(struct sctp_transport *tsp, void *p) 347 + static int sctp_sock_filter(struct sctp_endpoint *ep, struct sctp_transport *tsp, void *p) 349 348 { 350 - struct sctp_endpoint *ep = tsp->asoc->ep; 351 349 struct sctp_comm_param *commp = p; 352 350 struct sock *sk = ep->base.sk; 353 351 const struct inet_diag_req_v2 *r = commp->r; ··· 505 505 if (!(idiag_states & ~(TCPF_LISTEN | TCPF_CLOSE))) 506 506 goto done; 507 507 508 - sctp_for_each_transport(sctp_sock_filter, sctp_sock_dump, 509 - net, &pos, &commp); 508 + sctp_transport_traverse_process(sctp_sock_filter, sctp_sock_dump, 509 + net, &pos, &commp); 510 510 cb->args[2] = pos; 511 511 512 512 done:
+15 -8
net/sctp/endpointola.c
··· 184 184 } 185 185 186 186 /* Final destructor for endpoint. */ 187 + static void sctp_endpoint_destroy_rcu(struct rcu_head *head) 188 + { 189 + struct sctp_endpoint *ep = container_of(head, struct sctp_endpoint, rcu); 190 + struct sock *sk = ep->base.sk; 191 + 192 + sctp_sk(sk)->ep = NULL; 193 + sock_put(sk); 194 + 195 + kfree(ep); 196 + SCTP_DBG_OBJCNT_DEC(ep); 197 + } 198 + 187 199 static void sctp_endpoint_destroy(struct sctp_endpoint *ep) 188 200 { 189 201 struct sock *sk; ··· 225 213 if (sctp_sk(sk)->bind_hash) 226 214 sctp_put_port(sk); 227 215 228 - sctp_sk(sk)->ep = NULL; 229 - /* Give up our hold on the sock */ 230 - sock_put(sk); 231 - 232 - kfree(ep); 233 - SCTP_DBG_OBJCNT_DEC(ep); 216 + call_rcu(&ep->rcu, sctp_endpoint_destroy_rcu); 234 217 } 235 218 236 219 /* Hold a reference to an endpoint. */ 237 - void sctp_endpoint_hold(struct sctp_endpoint *ep) 220 + int sctp_endpoint_hold(struct sctp_endpoint *ep) 238 221 { 239 - refcount_inc(&ep->base.refcnt); 222 + return refcount_inc_not_zero(&ep->base.refcnt); 240 223 } 241 224 242 225 /* Release a reference to an endpoint and clean up if there are
+15 -8
net/sctp/socket.c
··· 5338 5338 } 5339 5339 EXPORT_SYMBOL_GPL(sctp_transport_lookup_process); 5340 5340 5341 - int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *), 5342 - int (*cb_done)(struct sctp_transport *, void *), 5343 - struct net *net, int *pos, void *p) { 5341 + int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done, 5342 + struct net *net, int *pos, void *p) 5343 + { 5344 5344 struct rhashtable_iter hti; 5345 5345 struct sctp_transport *tsp; 5346 + struct sctp_endpoint *ep; 5346 5347 int ret; 5347 5348 5348 5349 again: ··· 5352 5351 5353 5352 tsp = sctp_transport_get_idx(net, &hti, *pos + 1); 5354 5353 for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) { 5355 - ret = cb(tsp, p); 5356 - if (ret) 5357 - break; 5354 + ep = tsp->asoc->ep; 5355 + if (sctp_endpoint_hold(ep)) { /* asoc can be peeled off */ 5356 + ret = cb(ep, tsp, p); 5357 + if (ret) 5358 + break; 5359 + sctp_endpoint_put(ep); 5360 + } 5358 5361 (*pos)++; 5359 5362 sctp_transport_put(tsp); 5360 5363 } 5361 5364 sctp_transport_walk_stop(&hti); 5362 5365 5363 5366 if (ret) { 5364 - if (cb_done && !cb_done(tsp, p)) { 5367 + if (cb_done && !cb_done(ep, tsp, p)) { 5365 5368 (*pos)++; 5369 + sctp_endpoint_put(ep); 5366 5370 sctp_transport_put(tsp); 5367 5371 goto again; 5368 5372 } 5373 + sctp_endpoint_put(ep); 5369 5374 sctp_transport_put(tsp); 5370 5375 } 5371 5376 5372 5377 return ret; 5373 5378 } 5374 - EXPORT_SYMBOL_GPL(sctp_for_each_transport); 5379 + EXPORT_SYMBOL_GPL(sctp_transport_traverse_process); 5375 5380 5376 5381 /* 7.2.1 Association Status (SCTP_STATUS) 5377 5382
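The reason `sctp_endpoint_hold()` now returns a value is visible in the traversal loop above: it is built on `refcount_inc_not_zero()`, which takes a reference only while the count is still nonzero, so a walker racing with endpoint destruction can detect that the object is already going away. A CAS-loop sketch of that primitive in C11 atomics (not the kernel's implementation, which also has saturation semantics):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the count has not already hit zero. */
static bool ref_get_not_zero(atomic_int *ref)
{
	int old = atomic_load(ref);

	while (old != 0)
		if (atomic_compare_exchange_weak(ref, &old, old + 1))
			return true;
	return false;	/* object already being destroyed */
}
```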
+5
net/smc/smc.h
··· 180 180 u16 tx_cdc_seq; /* sequence # for CDC send */ 181 181 u16 tx_cdc_seq_fin; /* sequence # - tx completed */ 182 182 spinlock_t send_lock; /* protect wr_sends */ 183 + atomic_t cdc_pend_tx_wr; /* number of pending tx CDC wqe 184 + * - inc when post wqe, 185 + * - dec on polled tx cqe 186 + */ 187 + wait_queue_head_t cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/ 183 188 struct delayed_work tx_work; /* retry of smc_cdc_msg_send */ 184 189 u32 tx_off; /* base offset in peer rmb */ 185 190
+24 -28
net/smc/smc_cdc.c
··· 31 31 struct smc_sock *smc; 32 32 int diff; 33 33 34 - if (!conn) 35 - /* already dismissed */ 36 - return; 37 - 38 34 smc = container_of(conn, struct smc_sock, conn); 39 35 bh_lock_sock(&smc->sk); 40 36 if (!wc_status) { ··· 47 51 conn); 48 52 conn->tx_cdc_seq_fin = cdcpend->ctrl_seq; 49 53 } 54 + 55 + if (atomic_dec_and_test(&conn->cdc_pend_tx_wr) && 56 + unlikely(wq_has_sleeper(&conn->cdc_pend_tx_wq))) 57 + wake_up(&conn->cdc_pend_tx_wq); 58 + WARN_ON(atomic_read(&conn->cdc_pend_tx_wr) < 0); 59 + 50 60 smc_tx_sndbuf_nonfull(smc); 51 61 bh_unlock_sock(&smc->sk); 52 62 } ··· 109 107 conn->tx_cdc_seq++; 110 108 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq; 111 109 smc_host_msg_to_cdc((struct smc_cdc_msg *)wr_buf, conn, &cfed); 110 + 111 + atomic_inc(&conn->cdc_pend_tx_wr); 112 + smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */ 113 + 112 114 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend); 113 115 if (!rc) { 114 116 smc_curs_copy(&conn->rx_curs_confirmed, &cfed, conn); ··· 120 114 } else { 121 115 conn->tx_cdc_seq--; 122 116 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq; 117 + atomic_dec(&conn->cdc_pend_tx_wr); 123 118 } 124 119 125 120 return rc; ··· 143 136 peer->token = htonl(local->token); 144 137 peer->prod_flags.failover_validation = 1; 145 138 139 + /* We need to set pend->conn here to make sure smc_cdc_tx_handler() 140 + * can handle properly 141 + */ 142 + smc_cdc_add_pending_send(conn, pend); 143 + 144 + atomic_inc(&conn->cdc_pend_tx_wr); 145 + smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */ 146 + 146 147 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend); 148 + if (unlikely(rc)) 149 + atomic_dec(&conn->cdc_pend_tx_wr); 150 + 147 151 return rc; 148 152 } 149 153 ··· 211 193 return rc; 212 194 } 213 195 214 - static bool smc_cdc_tx_filter(struct smc_wr_tx_pend_priv *tx_pend, 215 - unsigned long data) 196 + void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn) 216 197 { 217 - 
struct smc_connection *conn = (struct smc_connection *)data; 218 - struct smc_cdc_tx_pend *cdc_pend = 219 - (struct smc_cdc_tx_pend *)tx_pend; 220 - 221 - return cdc_pend->conn == conn; 222 - } 223 - 224 - static void smc_cdc_tx_dismisser(struct smc_wr_tx_pend_priv *tx_pend) 225 - { 226 - struct smc_cdc_tx_pend *cdc_pend = 227 - (struct smc_cdc_tx_pend *)tx_pend; 228 - 229 - cdc_pend->conn = NULL; 230 - } 231 - 232 - void smc_cdc_tx_dismiss_slots(struct smc_connection *conn) 233 - { 234 - struct smc_link *link = conn->lnk; 235 - 236 - smc_wr_tx_dismiss_slots(link, SMC_CDC_MSG_TYPE, 237 - smc_cdc_tx_filter, smc_cdc_tx_dismisser, 238 - (unsigned long)conn); 198 + wait_event(conn->cdc_pend_tx_wq, !atomic_read(&conn->cdc_pend_tx_wr)); 239 199 } 240 200 241 201 /* Send a SMC-D CDC header.
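The `cdc_pend_tx_wr` accounting that replaces the dismissed tx-slot scan above follows a common pattern: bump an atomic counter before posting a work request, drop it from the completion handler, and let the completion that brings it to zero wake the waiters. A sketch of just the counting logic (the waitqueue itself is elided; names mirror but are not the kernel's fields):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int cdc_pend_tx_wr;

/* bump before posting the CDC work request */
static void post_cdc_wr(void)
{
	atomic_fetch_add(&cdc_pend_tx_wr, 1);
}

/* returns true when this completion is the last one pending,
 * i.e. when waiters on the (elided) waitqueue should be woken */
static bool complete_cdc_wr(void)
{
	return atomic_fetch_sub(&cdc_pend_tx_wr, 1) == 1;
}
```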
+1 -1
net/smc/smc_cdc.h
··· 291 291 struct smc_wr_buf **wr_buf, 292 292 struct smc_rdma_wr **wr_rdma_buf, 293 293 struct smc_cdc_tx_pend **pend); 294 - void smc_cdc_tx_dismiss_slots(struct smc_connection *conn); 294 + void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn); 295 295 int smc_cdc_msg_send(struct smc_connection *conn, struct smc_wr_buf *wr_buf, 296 296 struct smc_cdc_tx_pend *pend); 297 297 int smc_cdc_get_slot_and_msg_send(struct smc_connection *conn);
+21 -6
net/smc/smc_core.c
··· 647 647 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 648 648 struct smc_link *lnk = &lgr->lnk[i]; 649 649 650 - if (smc_link_usable(lnk)) 650 + if (smc_link_sendable(lnk)) 651 651 lnk->state = SMC_LNK_INACTIVE; 652 652 } 653 653 wake_up_all(&lgr->llc_msg_waiter); ··· 1127 1127 smc_ism_unset_conn(conn); 1128 1128 tasklet_kill(&conn->rx_tsklet); 1129 1129 } else { 1130 - smc_cdc_tx_dismiss_slots(conn); 1130 + smc_cdc_wait_pend_tx_wr(conn); 1131 1131 if (current_work() != &conn->abort_work) 1132 1132 cancel_work_sync(&conn->abort_work); 1133 1133 } ··· 1204 1204 smc_llc_link_clear(lnk, log); 1205 1205 smcr_buf_unmap_lgr(lnk); 1206 1206 smcr_rtoken_clear_link(lnk); 1207 - smc_ib_modify_qp_reset(lnk); 1207 + smc_ib_modify_qp_error(lnk); 1208 1208 smc_wr_free_link(lnk); 1209 1209 smc_ib_destroy_queue_pair(lnk); 1210 1210 smc_ib_dealloc_protection_domain(lnk); ··· 1336 1336 else 1337 1337 tasklet_unlock_wait(&conn->rx_tsklet); 1338 1338 } else { 1339 - smc_cdc_tx_dismiss_slots(conn); 1339 + smc_cdc_wait_pend_tx_wr(conn); 1340 1340 } 1341 1341 smc_lgr_unregister_conn(conn); 1342 1342 smc_close_active_abort(smc); ··· 1459 1459 /* Called when an SMCR device is removed or the smc module is unloaded. 1460 1460 * If smcibdev is given, all SMCR link groups using this device are terminated. 1461 1461 * If smcibdev is NULL, all SMCR link groups are terminated. 1462 + * 1463 + * We must wait here for QPs been destroyed before we destroy the CQs, 1464 + * or we won't received any CQEs and cdc_pend_tx_wr cannot reach 0 thus 1465 + * smc_sock cannot be released. 
1462 1466 */ 1463 1467 void smc_smcr_terminate_all(struct smc_ib_device *smcibdev) 1464 1468 { 1465 1469 struct smc_link_group *lgr, *lg; 1466 1470 LIST_HEAD(lgr_free_list); 1471 + LIST_HEAD(lgr_linkdown_list); 1467 1472 int i; 1468 1473 1469 1474 spin_lock_bh(&smc_lgr_list.lock); ··· 1480 1475 list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) { 1481 1476 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1482 1477 if (lgr->lnk[i].smcibdev == smcibdev) 1483 - smcr_link_down_cond_sched(&lgr->lnk[i]); 1478 + list_move_tail(&lgr->list, &lgr_linkdown_list); 1484 1479 } 1485 1480 } 1486 1481 } ··· 1490 1485 list_del_init(&lgr->list); 1491 1486 smc_llc_set_termination_rsn(lgr, SMC_LLC_DEL_OP_INIT_TERM); 1492 1487 __smc_lgr_terminate(lgr, false); 1488 + } 1489 + 1490 + list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) { 1491 + for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1492 + if (lgr->lnk[i].smcibdev == smcibdev) { 1493 + mutex_lock(&lgr->llc_conf_mutex); 1494 + smcr_link_down_cond(&lgr->lnk[i]); 1495 + mutex_unlock(&lgr->llc_conf_mutex); 1496 + } 1497 + } 1493 1498 } 1494 1499 1495 1500 if (smcibdev) { ··· 1601 1586 if (!lgr || lnk->state == SMC_LNK_UNUSED || list_empty(&lgr->list)) 1602 1587 return; 1603 1588 1604 - smc_ib_modify_qp_reset(lnk); 1605 1589 to_lnk = smc_switch_conns(lgr, lnk, true); 1606 1590 if (!to_lnk) { /* no backup link available */ 1607 1591 smcr_link_clear(lnk, true); ··· 1838 1824 conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE; 1839 1825 conn->local_tx_ctrl.len = SMC_WR_TX_SIZE; 1840 1826 conn->urg_state = SMC_URG_READ; 1827 + init_waitqueue_head(&conn->cdc_pend_tx_wq); 1841 1828 INIT_WORK(&smc->conn.abort_work, smc_conn_abort_work); 1842 1829 if (ini->is_smcd) { 1843 1830 conn->rx_off = sizeof(struct smcd_cdc_msg);
+6
net/smc/smc_core.h
··· 415 415 return true; 416 416 } 417 417 418 + static inline bool smc_link_sendable(struct smc_link *lnk) 419 + { 420 + return smc_link_usable(lnk) && 421 + lnk->qp_attr.cur_qp_state == IB_QPS_RTS; 422 + } 423 + 418 424 static inline bool smc_link_active(struct smc_link *lnk) 419 425 { 420 426 return lnk->state == SMC_LNK_ACTIVE;
+2 -2
net/smc/smc_ib.c
··· 109 109 IB_QP_MAX_QP_RD_ATOMIC); 110 110 } 111 111 112 - int smc_ib_modify_qp_reset(struct smc_link *lnk) 112 + int smc_ib_modify_qp_error(struct smc_link *lnk) 113 113 { 114 114 struct ib_qp_attr qp_attr; 115 115 116 116 memset(&qp_attr, 0, sizeof(qp_attr)); 117 - qp_attr.qp_state = IB_QPS_RESET; 117 + qp_attr.qp_state = IB_QPS_ERR; 118 118 return ib_modify_qp(lnk->roce_qp, &qp_attr, IB_QP_STATE); 119 119 } 120 120
+1
net/smc/smc_ib.h
··· 90 90 int smc_ib_ready_link(struct smc_link *lnk); 91 91 int smc_ib_modify_qp_rts(struct smc_link *lnk); 92 92 int smc_ib_modify_qp_reset(struct smc_link *lnk); 93 + int smc_ib_modify_qp_error(struct smc_link *lnk); 93 94 long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev); 94 95 int smc_ib_get_memory_region(struct ib_pd *pd, int access_flags, 95 96 struct smc_buf_desc *buf_slot, u8 link_idx);
+1 -1
net/smc/smc_llc.c
··· 1630 1630 delllc.reason = htonl(rsn); 1631 1631 1632 1632 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1633 - if (!smc_link_usable(&lgr->lnk[i])) 1633 + if (!smc_link_sendable(&lgr->lnk[i])) 1634 1634 continue; 1635 1635 if (!smc_llc_send_message_wait(&lgr->lnk[i], &delllc)) 1636 1636 break;
+9 -42
net/smc/smc_wr.c
··· 62 62 } 63 63 64 64 /* wait till all pending tx work requests on the given link are completed */ 65 - int smc_wr_tx_wait_no_pending_sends(struct smc_link *link) 65 + void smc_wr_tx_wait_no_pending_sends(struct smc_link *link) 66 66 { 67 - if (wait_event_timeout(link->wr_tx_wait, !smc_wr_is_tx_pend(link), 68 - SMC_WR_TX_WAIT_PENDING_TIME)) 69 - return 0; 70 - else /* timeout */ 71 - return -EPIPE; 67 + wait_event(link->wr_tx_wait, !smc_wr_is_tx_pend(link)); 72 68 } 73 69 74 70 static inline int smc_wr_tx_find_pending_index(struct smc_link *link, u64 wr_id) ··· 83 87 struct smc_wr_tx_pend pnd_snd; 84 88 struct smc_link *link; 85 89 u32 pnd_snd_idx; 86 - int i; 87 90 88 91 link = wc->qp->qp_context; 89 92 ··· 123 128 } 124 129 125 130 if (wc->status) { 126 - for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) { 127 - /* clear full struct smc_wr_tx_pend including .priv */ 128 - memset(&link->wr_tx_pends[i], 0, 129 - sizeof(link->wr_tx_pends[i])); 130 - memset(&link->wr_tx_bufs[i], 0, 131 - sizeof(link->wr_tx_bufs[i])); 132 - clear_bit(i, link->wr_tx_mask); 133 - } 134 131 if (link->lgr->smc_version == SMC_V2) { 135 132 memset(link->wr_tx_v2_pend, 0, 136 133 sizeof(*link->wr_tx_v2_pend)); ··· 175 188 static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx) 176 189 { 177 190 *idx = link->wr_tx_cnt; 178 - if (!smc_link_usable(link)) 191 + if (!smc_link_sendable(link)) 179 192 return -ENOLINK; 180 193 for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) { 181 194 if (!test_and_set_bit(*idx, link->wr_tx_mask)) ··· 218 231 } else { 219 232 rc = wait_event_interruptible_timeout( 220 233 link->wr_tx_wait, 221 - !smc_link_usable(link) || 234 + !smc_link_sendable(link) || 222 235 lgr->terminating || 223 236 (smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY), 224 237 SMC_WR_TX_WAIT_FREE_SLOT_TIME); ··· 345 358 unsigned long timeout) 346 359 { 347 360 struct smc_wr_tx_pend *pend; 361 + u32 pnd_idx; 348 362 int rc; 349 363 350 364 pend = 
container_of(priv, struct smc_wr_tx_pend, priv); 351 365 pend->compl_requested = 1; 352 - init_completion(&link->wr_tx_compl[pend->idx]); 366 + pnd_idx = pend->idx; 367 + init_completion(&link->wr_tx_compl[pnd_idx]); 353 368 354 369 rc = smc_wr_tx_send(link, priv); 355 370 if (rc) 356 371 return rc; 357 372 /* wait for completion by smc_wr_tx_process_cqe() */ 358 373 rc = wait_for_completion_interruptible_timeout( 359 - &link->wr_tx_compl[pend->idx], timeout); 374 + &link->wr_tx_compl[pnd_idx], timeout); 360 375 if (rc <= 0) 361 376 rc = -ENODATA; 362 377 if (rc > 0) ··· 406 417 break; 407 418 } 408 419 return rc; 409 - } 410 - 411 - void smc_wr_tx_dismiss_slots(struct smc_link *link, u8 wr_tx_hdr_type, 412 - smc_wr_tx_filter filter, 413 - smc_wr_tx_dismisser dismisser, 414 - unsigned long data) 415 - { 416 - struct smc_wr_tx_pend_priv *tx_pend; 417 - struct smc_wr_rx_hdr *wr_tx; 418 - int i; 419 - 420 - for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) { 421 - wr_tx = (struct smc_wr_rx_hdr *)&link->wr_tx_bufs[i]; 422 - if (wr_tx->type != wr_tx_hdr_type) 423 - continue; 424 - tx_pend = &link->wr_tx_pends[i].priv; 425 - if (filter(tx_pend, data)) 426 - dismisser(tx_pend); 427 - } 428 420 } 429 421 430 422 /****************************** receive queue ********************************/ ··· 643 673 smc_wr_wakeup_reg_wait(lnk); 644 674 smc_wr_wakeup_tx_wait(lnk); 645 675 646 - if (smc_wr_tx_wait_no_pending_sends(lnk)) 647 - memset(lnk->wr_tx_mask, 0, 648 - BITS_TO_LONGS(SMC_WR_BUF_CNT) * 649 - sizeof(*lnk->wr_tx_mask)); 676 + smc_wr_tx_wait_no_pending_sends(lnk); 650 677 wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt))); 651 678 wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt))); 652 679
+2 -3
net/smc/smc_wr.h
··· 22 22 #define SMC_WR_BUF_CNT 16 /* # of ctrl buffers per link */ 23 23 24 24 #define SMC_WR_TX_WAIT_FREE_SLOT_TIME (10 * HZ) 25 - #define SMC_WR_TX_WAIT_PENDING_TIME (5 * HZ) 26 25 27 26 #define SMC_WR_TX_SIZE 44 /* actual size of wr_send data (<=SMC_WR_BUF_SIZE) */ 28 27 ··· 61 62 62 63 static inline bool smc_wr_tx_link_hold(struct smc_link *link) 63 64 { 64 - if (!smc_link_usable(link)) 65 + if (!smc_link_sendable(link)) 65 66 return false; 66 67 atomic_inc(&link->wr_tx_refcnt); 67 68 return true; ··· 129 130 smc_wr_tx_filter filter, 130 131 smc_wr_tx_dismisser dismisser, 131 132 unsigned long data); 132 - int smc_wr_tx_wait_no_pending_sends(struct smc_link *link); 133 + void smc_wr_tx_wait_no_pending_sends(struct smc_link *link); 133 134 134 135 int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler); 135 136 int smc_wr_rx_post_init(struct smc_link *link);
+4 -4
net/tipc/crypto.c
··· 524 524 return -EEXIST; 525 525 526 526 /* Allocate a new AEAD */ 527 - tmp = kzalloc(sizeof(*tmp), GFP_KERNEL); 527 + tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC); 528 528 if (unlikely(!tmp)) 529 529 return -ENOMEM; 530 530 ··· 1474 1474 return -EEXIST; 1475 1475 1476 1476 /* Allocate crypto */ 1477 - c = kzalloc(sizeof(*c), GFP_KERNEL); 1477 + c = kzalloc(sizeof(*c), GFP_ATOMIC); 1478 1478 if (!c) 1479 1479 return -ENOMEM; 1480 1480 ··· 1488 1488 } 1489 1489 1490 1490 /* Allocate statistic structure */ 1491 - c->stats = alloc_percpu(struct tipc_crypto_stats); 1491 + c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC); 1492 1492 if (!c->stats) { 1493 1493 if (c->wq) 1494 1494 destroy_workqueue(c->wq); ··· 2461 2461 } 2462 2462 2463 2463 /* Lets duplicate it first */ 2464 - skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_KERNEL); 2464 + skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_ATOMIC); 2465 2465 rcu_read_unlock(); 2466 2466 2467 2467 /* Now, generate new key, initiate & distribute it */
+1
net/xdp/xsk_buff_pool.c
··· 83 83 xskb = &pool->heads[i]; 84 84 xskb->pool = pool; 85 85 xskb->xdp.frame_sz = umem->chunk_size - umem->headroom; 86 + INIT_LIST_HEAD(&xskb->free_list_node); 86 87 if (pool->unaligned) 87 88 pool->free_heads[i] = xskb; 88 89 else
+1 -1
scripts/recordmcount.pl
··· 219 219 220 220 } elsif ($arch eq "s390" && $bits == 64) { 221 221 if ($cc =~ /-DCC_USING_HOTPATCH/) { 222 - $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$"; 222 + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(brcl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$"; 223 223 $mcount_adjust = 0; 224 224 } 225 225 $alignment = 8;
+1 -1
security/selinux/hooks.c
··· 5785 5785 struct sk_security_struct *sksec; 5786 5786 struct common_audit_data ad; 5787 5787 struct lsm_network_audit net = {0,}; 5788 - u8 proto; 5788 + u8 proto = 0; 5789 5789 5790 5790 sk = skb_to_full_sk(skb); 5791 5791 if (sk == NULL)
+14 -17
security/tomoyo/util.c
··· 1051 1051 return false; 1052 1052 if (!domain) 1053 1053 return true; 1054 + if (READ_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED])) 1055 + return false; 1054 1056 list_for_each_entry_rcu(ptr, &domain->acl_info_list, list, 1055 1057 srcu_read_lock_held(&tomoyo_ss)) { 1056 1058 u16 perm; 1057 - u8 i; 1058 1059 1059 1060 if (ptr->is_deleted) 1060 1061 continue; ··· 1066 1065 */ 1067 1066 switch (ptr->type) { 1068 1067 case TOMOYO_TYPE_PATH_ACL: 1069 - data_race(perm = container_of(ptr, struct tomoyo_path_acl, head)->perm); 1068 + perm = data_race(container_of(ptr, struct tomoyo_path_acl, head)->perm); 1070 1069 break; 1071 1070 case TOMOYO_TYPE_PATH2_ACL: 1072 - data_race(perm = container_of(ptr, struct tomoyo_path2_acl, head)->perm); 1071 + perm = data_race(container_of(ptr, struct tomoyo_path2_acl, head)->perm); 1073 1072 break; 1074 1073 case TOMOYO_TYPE_PATH_NUMBER_ACL: 1075 - data_race(perm = container_of(ptr, struct tomoyo_path_number_acl, head) 1074 + perm = data_race(container_of(ptr, struct tomoyo_path_number_acl, head) 1076 1075 ->perm); 1077 1076 break; 1078 1077 case TOMOYO_TYPE_MKDEV_ACL: 1079 - data_race(perm = container_of(ptr, struct tomoyo_mkdev_acl, head)->perm); 1078 + perm = data_race(container_of(ptr, struct tomoyo_mkdev_acl, head)->perm); 1080 1079 break; 1081 1080 case TOMOYO_TYPE_INET_ACL: 1082 - data_race(perm = container_of(ptr, struct tomoyo_inet_acl, head)->perm); 1081 + perm = data_race(container_of(ptr, struct tomoyo_inet_acl, head)->perm); 1083 1082 break; 1084 1083 case TOMOYO_TYPE_UNIX_ACL: 1085 - data_race(perm = container_of(ptr, struct tomoyo_unix_acl, head)->perm); 1084 + perm = data_race(container_of(ptr, struct tomoyo_unix_acl, head)->perm); 1086 1085 break; 1087 1086 case TOMOYO_TYPE_MANUAL_TASK_ACL: 1088 1087 perm = 0; ··· 1090 1089 default: 1091 1090 perm = 1; 1092 1091 } 1093 - for (i = 0; i < 16; i++) 1094 - if (perm & (1 << i)) 1095 - count++; 1092 + count += hweight16(perm); 1096 1093 } 1097 1094 if (count < 
tomoyo_profile(domain->ns, domain->profile)-> 1098 1095 pref[TOMOYO_PREF_MAX_LEARNING_ENTRY]) 1099 1096 return true; 1100 - if (!domain->flags[TOMOYO_DIF_QUOTA_WARNED]) { 1101 - domain->flags[TOMOYO_DIF_QUOTA_WARNED] = true; 1102 - /* r->granted = false; */ 1103 - tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]); 1097 + WRITE_ONCE(domain->flags[TOMOYO_DIF_QUOTA_WARNED], true); 1098 + /* r->granted = false; */ 1099 + tomoyo_write_log(r, "%s", tomoyo_dif[TOMOYO_DIF_QUOTA_WARNED]); 1104 1100 #ifndef CONFIG_SECURITY_TOMOYO_INSECURE_BUILTIN_SETTING 1105 - pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n", 1106 - domain->domainname->name); 1101 + pr_warn("WARNING: Domain '%s' has too many ACLs to hold. Stopped learning mode.\n", 1102 + domain->domainname->name); 1107 1103 #endif 1108 - } 1109 1104 return false; 1110 1105 }
+4
sound/core/jack.c
··· 509 509 return -ENOMEM; 510 510 511 511 jack->id = kstrdup(id, GFP_KERNEL); 512 + if (jack->id == NULL) { 513 + kfree(jack); 514 + return -ENOMEM; 515 + } 512 516 513 517 /* don't create input device for phantom jack */ 514 518 if (!phantom_jack) {
+1
sound/core/rawmidi.c
··· 447 447 err = -ENOMEM; 448 448 goto __error; 449 449 } 450 + rawmidi_file->user_pversion = 0; 450 451 init_waitqueue_entry(&wait, current); 451 452 add_wait_queue(&rmidi->open_wait, &wait); 452 453 while (1) {
+1 -1
sound/drivers/opl3/opl3_midi.c
··· 397 397 } 398 398 if (instr_4op) { 399 399 vp2 = &opl3->voices[voice + 3]; 400 - if (vp->state > 0) { 400 + if (vp2->state > 0) { 401 401 opl3_reg = reg_side | (OPL3_REG_KEYON_BLOCK + 402 402 voice_offset + 3); 403 403 reg_val = vp->keyon_reg & ~OPL3_KEYON_BIT;
+10 -3
sound/hda/intel-sdw-acpi.c
··· 132 132 return AE_NOT_FOUND; 133 133 } 134 134 135 - info->handle = handle; 136 - 137 135 /* 138 136 * On some Intel platforms, multiple children of the HDAS 139 137 * device can be found, but only one of them is the SoundWire ··· 141 143 */ 142 144 if (FIELD_GET(GENMASK(31, 28), adr) != SDW_LINK_TYPE) 143 145 return AE_OK; /* keep going */ 146 + 147 + /* found the correct SoundWire controller */ 148 + info->handle = handle; 144 149 145 150 /* device found, stop namespace walk */ 146 151 return AE_CTRL_TERMINATE; ··· 165 164 acpi_status status; 166 165 167 166 info->handle = NULL; 167 + /* 168 + * In the HDAS ACPI scope, 'SNDW' may be either a child or a 169 + * grandchild of 'HDAS'. So walk the ACPI namespace from 170 + * 'HDAS' at a max depth of 2 to find the 'SNDW' device. 171 + */ 168 173 status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 169 - parent_handle, 1, 174 + parent_handle, 2, 170 175 sdw_intel_acpi_cb, 171 176 NULL, info, NULL); 172 177 if (ACPI_FAILURE(status) || info->handle == NULL)
+15 -6
sound/pci/hda/patch_hdmi.c
··· 2947 2947 2948 2948 /* Intel Haswell and onwards; audio component with eld notifier */ 2949 2949 static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid, 2950 - const int *port_map, int port_num, int dev_num) 2950 + const int *port_map, int port_num, int dev_num, 2951 + bool send_silent_stream) 2951 2952 { 2952 2953 struct hdmi_spec *spec; 2953 2954 int err; ··· 2981 2980 * Enable silent stream feature, if it is enabled via 2982 2981 * module param or Kconfig option 2983 2982 */ 2984 - if (enable_silent_stream) 2983 + if (send_silent_stream) 2985 2984 spec->send_silent_stream = true; 2986 2985 2987 2986 return parse_intel_hdmi(codec); ··· 2989 2988 2990 2989 static int patch_i915_hsw_hdmi(struct hda_codec *codec) 2991 2990 { 2992 - return intel_hsw_common_init(codec, 0x08, NULL, 0, 3); 2991 + return intel_hsw_common_init(codec, 0x08, NULL, 0, 3, 2992 + enable_silent_stream); 2993 2993 } 2994 2994 2995 2995 static int patch_i915_glk_hdmi(struct hda_codec *codec) 2996 2996 { 2997 - return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3); 2997 + /* 2998 + * Silent stream calls audio component .get_power() from 2999 + * .pin_eld_notify(). On GLK this will deadlock in i915 due 3000 + * to the audio vs. CDCLK workaround. 
3001 + */ 3002 + return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3, false); 2998 3003 } 2999 3004 3000 3005 static int patch_i915_icl_hdmi(struct hda_codec *codec) ··· 3011 3004 */ 3012 3005 static const int map[] = {0x0, 0x4, 0x6, 0x8, 0xa, 0xb}; 3013 3006 3014 - return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3); 3007 + return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3, 3008 + enable_silent_stream); 3015 3009 } 3016 3010 3017 3011 static int patch_i915_tgl_hdmi(struct hda_codec *codec) ··· 3024 3016 static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf}; 3025 3017 int ret; 3026 3018 3027 - ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4); 3019 + ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4, 3020 + enable_silent_stream); 3028 3021 if (!ret) { 3029 3022 struct hdmi_spec *spec = codec->spec; 3030 3023
+28 -1
sound/pci/hda/patch_realtek.c
··· 6546 6546 alc_process_coef_fw(codec, alc233_fixup_no_audio_jack_coefs); 6547 6547 } 6548 6548 6549 + static void alc256_fixup_mic_no_presence_and_resume(struct hda_codec *codec, 6550 + const struct hda_fixup *fix, 6551 + int action) 6552 + { 6553 + /* 6554 + * The Clevo NJ51CU comes either with the ALC293 or the ALC256 codec, 6555 + * but uses the 0x8686 subproduct id in both cases. The ALC256 codec 6556 + * needs an additional quirk for sound working after suspend and resume. 6557 + */ 6558 + if (codec->core.vendor_id == 0x10ec0256) { 6559 + alc_update_coef_idx(codec, 0x10, 1<<9, 0); 6560 + snd_hda_codec_set_pincfg(codec, 0x19, 0x04a11120); 6561 + } else { 6562 + snd_hda_codec_set_pincfg(codec, 0x1a, 0x04a1113c); 6563 + } 6564 + } 6565 + 6549 6566 enum { 6550 6567 ALC269_FIXUP_GPIO2, 6551 6568 ALC269_FIXUP_SONY_VAIO, ··· 6783 6766 ALC256_FIXUP_SET_COEF_DEFAULTS, 6784 6767 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 6785 6768 ALC233_FIXUP_NO_AUDIO_JACK, 6769 + ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME, 6786 6770 }; 6787 6771 6788 6772 static const struct hda_fixup alc269_fixups[] = { ··· 8508 8490 .type = HDA_FIXUP_FUNC, 8509 8491 .v.func = alc233_fixup_no_audio_jack, 8510 8492 }, 8493 + [ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME] = { 8494 + .type = HDA_FIXUP_FUNC, 8495 + .v.func = alc256_fixup_mic_no_presence_and_resume, 8496 + .chained = true, 8497 + .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC 8498 + }, 8511 8499 }; 8512 8500 8513 8501 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 8684 8660 SND_PCI_QUIRK(0x103c, 0x84da, "HP OMEN dc0019-ur", ALC295_FIXUP_HP_OMEN), 8685 8661 SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3), 8686 8662 SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360), 8663 + SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8687 8664 SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT), 8688 8665 
SND_PCI_QUIRK(0x103c, 0x869d, "HP", ALC236_FIXUP_HP_MUTE_LED), 8689 8666 SND_PCI_QUIRK(0x103c, 0x86c7, "HP Envy AiO 32", ALC274_FIXUP_HP_ENVY_GPIO), ··· 8730 8705 SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED), 8731 8706 SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST), 8732 8707 SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED), 8708 + SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF), 8733 8709 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), 8734 8710 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), 8735 8711 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 8855 8829 SND_PCI_QUIRK(0x1558, 0x8562, "Clevo NH[57][0-9]RZ[Q]", ALC269_FIXUP_DMIC), 8856 8830 SND_PCI_QUIRK(0x1558, 0x8668, "Clevo NP50B[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8857 8831 SND_PCI_QUIRK(0x1558, 0x8680, "Clevo NJ50LU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8858 - SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8832 + SND_PCI_QUIRK(0x1558, 0x8686, "Clevo NH50[CZ]U", ALC256_FIXUP_MIC_NO_PRESENCE_AND_RESUME), 8859 8833 SND_PCI_QUIRK(0x1558, 0x8a20, "Clevo NH55DCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8860 8834 SND_PCI_QUIRK(0x1558, 0x8a51, "Clevo NH70RCQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 8861 8835 SND_PCI_QUIRK(0x1558, 0x8d50, "Clevo NH55RCQ-M", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9149 9123 {.id = ALC287_FIXUP_IDEAPAD_BASS_SPK_AMP, .name = "alc287-ideapad-bass-spk-amp"}, 9150 9124 {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, 9151 9125 {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"}, 9126 + {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"}, 9152 9127 {} 9153 9128 }; 9154 9129 #define 
ALC225_STANDARD_PINS \
+4
sound/soc/codecs/rt5682.c
··· 929 929 unsigned int val, count; 930 930 931 931 if (jack_insert) { 932 + snd_soc_dapm_mutex_lock(dapm); 933 + 932 934 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 933 935 RT5682_PWR_VREF2 | RT5682_PWR_MB, 934 936 RT5682_PWR_VREF2 | RT5682_PWR_MB); ··· 981 979 snd_soc_component_update_bits(component, RT5682_MICBIAS_2, 982 980 RT5682_PWR_CLK25M_MASK | RT5682_PWR_CLK1M_MASK, 983 981 RT5682_PWR_CLK25M_PU | RT5682_PWR_CLK1M_PU); 982 + 983 + snd_soc_dapm_mutex_unlock(dapm); 984 984 } else { 985 985 rt5682_enable_push_button_irq(component, false); 986 986 snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+2 -2
sound/soc/codecs/tas2770.c
··· 291 291 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ | 292 292 TAS2770_TDM_CFG_REG0_31_88_2_96KHZ; 293 293 break; 294 - case 19200: 294 + case 192000: 295 295 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_48KHZ | 296 296 TAS2770_TDM_CFG_REG0_31_176_4_192KHZ; 297 297 break; 298 - case 17640: 298 + case 176400: 299 299 ramp_rate_val = TAS2770_TDM_CFG_REG0_SMP_44_1KHZ | 300 300 TAS2770_TDM_CFG_REG0_31_176_4_192KHZ; 301 301 break;
-33
sound/soc/meson/aiu-encoder-i2s.c
··· 18 18 #define AIU_RST_SOFT_I2S_FAST BIT(0) 19 19 20 20 #define AIU_I2S_DAC_CFG_MSB_FIRST BIT(2) 21 - #define AIU_I2S_MISC_HOLD_EN BIT(2) 22 21 #define AIU_CLK_CTRL_I2S_DIV_EN BIT(0) 23 22 #define AIU_CLK_CTRL_I2S_DIV GENMASK(3, 2) 24 23 #define AIU_CLK_CTRL_AOCLK_INVERT BIT(6) ··· 33 34 snd_soc_component_update_bits(component, AIU_CLK_CTRL, 34 35 AIU_CLK_CTRL_I2S_DIV_EN, 35 36 enable ? AIU_CLK_CTRL_I2S_DIV_EN : 0); 36 - } 37 - 38 - static void aiu_encoder_i2s_hold(struct snd_soc_component *component, 39 - bool enable) 40 - { 41 - snd_soc_component_update_bits(component, AIU_I2S_MISC, 42 - AIU_I2S_MISC_HOLD_EN, 43 - enable ? AIU_I2S_MISC_HOLD_EN : 0); 44 - } 45 - 46 - static int aiu_encoder_i2s_trigger(struct snd_pcm_substream *substream, int cmd, 47 - struct snd_soc_dai *dai) 48 - { 49 - struct snd_soc_component *component = dai->component; 50 - 51 - switch (cmd) { 52 - case SNDRV_PCM_TRIGGER_START: 53 - case SNDRV_PCM_TRIGGER_RESUME: 54 - case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 55 - aiu_encoder_i2s_hold(component, false); 56 - return 0; 57 - 58 - case SNDRV_PCM_TRIGGER_STOP: 59 - case SNDRV_PCM_TRIGGER_SUSPEND: 60 - case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 61 - aiu_encoder_i2s_hold(component, true); 62 - return 0; 63 - 64 - default: 65 - return -EINVAL; 66 - } 67 37 } 68 38 69 39 static int aiu_encoder_i2s_setup_desc(struct snd_soc_component *component, ··· 321 353 } 322 354 323 355 const struct snd_soc_dai_ops aiu_encoder_i2s_dai_ops = { 324 - .trigger = aiu_encoder_i2s_trigger, 325 356 .hw_params = aiu_encoder_i2s_hw_params, 326 357 .hw_free = aiu_encoder_i2s_hw_free, 327 358 .set_fmt = aiu_encoder_i2s_set_fmt,
+19
sound/soc/meson/aiu-fifo-i2s.c
··· 20 20 #define AIU_MEM_I2S_CONTROL_MODE_16BIT BIT(6) 21 21 #define AIU_MEM_I2S_BUF_CNTL_INIT BIT(0) 22 22 #define AIU_RST_SOFT_I2S_FAST BIT(0) 23 + #define AIU_I2S_MISC_HOLD_EN BIT(2) 24 + #define AIU_I2S_MISC_FORCE_LEFT_RIGHT BIT(4) 23 25 24 26 #define AIU_FIFO_I2S_BLOCK 256 25 27 ··· 92 90 unsigned int val; 93 91 int ret; 94 92 93 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 94 + AIU_I2S_MISC_HOLD_EN, 95 + AIU_I2S_MISC_HOLD_EN); 96 + 95 97 ret = aiu_fifo_hw_params(substream, params, dai); 96 98 if (ret) 97 99 return ret; ··· 122 116 val = FIELD_PREP(AIU_MEM_I2S_MASKS_IRQ_BLOCK, val); 123 117 snd_soc_component_update_bits(component, AIU_MEM_I2S_MASKS, 124 118 AIU_MEM_I2S_MASKS_IRQ_BLOCK, val); 119 + 120 + /* 121 + * Most (all?) supported SoCs have this bit set by default. The vendor 122 + * driver however sets it manually (depending on the version either 123 + * while un-setting AIU_I2S_MISC_HOLD_EN or right before that). Follow 124 + * the same approach for consistency with the vendor driver. 125 + */ 126 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 127 + AIU_I2S_MISC_FORCE_LEFT_RIGHT, 128 + AIU_I2S_MISC_FORCE_LEFT_RIGHT); 129 + 130 + snd_soc_component_update_bits(component, AIU_I2S_MISC, 131 + AIU_I2S_MISC_HOLD_EN, 0); 125 132 126 133 return 0; 127 134 }
+6
sound/soc/meson/aiu-fifo.c
··· 5 5 6 6 #include <linux/bitfield.h> 7 7 #include <linux/clk.h> 8 + #include <linux/dma-mapping.h> 8 9 #include <sound/pcm_params.h> 9 10 #include <sound/soc.h> 10 11 #include <sound/soc-dai.h> ··· 180 179 struct snd_card *card = rtd->card->snd_card; 181 180 struct aiu_fifo *fifo = dai->playback_dma_data; 182 181 size_t size = fifo->pcm->buffer_bytes_max; 182 + int ret; 183 + 184 + ret = dma_coerce_mask_and_coherent(card->dev, DMA_BIT_MASK(32)); 185 + if (ret) 186 + return ret; 183 187 184 188 snd_pcm_set_managed_buffer_all(rtd->pcm, SNDRV_DMA_TYPE_DEV, 185 189 card->dev, size, size);
+4
sound/soc/sof/intel/pci-tgl.c
··· 112 112 .driver_data = (unsigned long)&adls_desc}, 113 113 { PCI_DEVICE(0x8086, 0x51c8), /* ADL-P */ 114 114 .driver_data = (unsigned long)&adl_desc}, 115 + { PCI_DEVICE(0x8086, 0x51cd), /* ADL-P */ 116 + .driver_data = (unsigned long)&adl_desc}, 115 117 { PCI_DEVICE(0x8086, 0x51cc), /* ADL-M */ 118 + .driver_data = (unsigned long)&adl_desc}, 119 + { PCI_DEVICE(0x8086, 0x54c8), /* ADL-N */ 116 120 .driver_data = (unsigned long)&adl_desc}, 117 121 { 0, } 118 122 };
+10 -1
sound/soc/tegra/tegra_asoc_machine.c
··· 116 116 SOC_DAPM_PIN_SWITCH("Headset Mic"), 117 117 SOC_DAPM_PIN_SWITCH("Internal Mic 1"), 118 118 SOC_DAPM_PIN_SWITCH("Internal Mic 2"), 119 + SOC_DAPM_PIN_SWITCH("Headphones"), 120 + SOC_DAPM_PIN_SWITCH("Mic Jack"), 119 121 }; 120 122 121 123 int tegra_asoc_machine_init(struct snd_soc_pcm_runtime *rtd) 122 124 { 123 125 struct snd_soc_card *card = rtd->card; 124 126 struct tegra_machine *machine = snd_soc_card_get_drvdata(card); 127 + const char *jack_name; 125 128 int err; 126 129 127 130 if (machine->gpiod_hp_det && machine->asoc->add_hp_jack) { 128 - err = snd_soc_card_jack_new(card, "Headphones Jack", 131 + if (machine->asoc->hp_jack_name) 132 + jack_name = machine->asoc->hp_jack_name; 133 + else 134 + jack_name = "Headphones Jack"; 135 + 136 + err = snd_soc_card_jack_new(card, jack_name, 129 137 SND_JACK_HEADPHONE, 130 138 &tegra_machine_hp_jack, 131 139 tegra_machine_hp_jack_pins, ··· 666 658 static const struct tegra_asoc_data tegra_max98090_data = { 667 659 .mclk_rate = tegra_machine_mclk_rate_12mhz, 668 660 .card = &snd_soc_tegra_max98090, 661 + .hp_jack_name = "Headphones", 669 662 .add_common_dapm_widgets = true, 670 663 .add_common_controls = true, 671 664 .add_common_snd_ops = true,
+1
sound/soc/tegra/tegra_asoc_machine.h
··· 14 14 struct tegra_asoc_data { 15 15 unsigned int (*mclk_rate)(unsigned int srate); 16 16 const char *codec_dev_name; 17 + const char *hp_jack_name; 17 18 struct snd_soc_card *card; 18 19 unsigned int mclk_id; 19 20 bool hp_jack_gpio_active_low;
+1 -1
tools/perf/builtin-script.c
··· 2473 2473 if (perf_event__process_switch(tool, event, sample, machine) < 0) 2474 2474 return -1; 2475 2475 2476 - if (scripting_ops && scripting_ops->process_switch) 2476 + if (scripting_ops && scripting_ops->process_switch && !filter_cpu(sample)) 2477 2477 scripting_ops->process_switch(event, sample, machine); 2478 2478 2479 2479 if (!script->show_switch_events)
+13 -10
tools/perf/scripts/python/intel-pt-events.py
··· 32 32 except: 33 33 broken_pipe_exception = IOError 34 34 35 - glb_switch_str = None 36 - glb_switch_printed = True 35 + glb_switch_str = {} 37 36 glb_insn = False 38 37 glb_disassembler = None 39 38 glb_src = False ··· 69 70 ap = argparse.ArgumentParser(usage = "", add_help = False) 70 71 ap.add_argument("--insn-trace", action='store_true') 71 72 ap.add_argument("--src-trace", action='store_true') 73 + ap.add_argument("--all-switch-events", action='store_true') 72 74 global glb_args 73 75 global glb_insn 74 76 global glb_src ··· 256 256 print(start_str, src_str) 257 257 258 258 def do_process_event(param_dict): 259 - global glb_switch_printed 260 - if not glb_switch_printed: 261 - print(glb_switch_str) 262 - glb_switch_printed = True 263 259 event_attr = param_dict["attr"] 264 260 sample = param_dict["sample"] 265 261 raw_buf = param_dict["raw_buf"] ··· 269 273 # Symbol and dso info are not always resolved 270 274 dso = get_optional(param_dict, "dso") 271 275 symbol = get_optional(param_dict, "symbol") 276 + 277 + cpu = sample["cpu"] 278 + if cpu in glb_switch_str: 279 + print(glb_switch_str[cpu]) 280 + del glb_switch_str[cpu] 272 281 273 282 if name[0:12] == "instructions": 274 283 if glb_src: ··· 337 336 sys.exit(1) 338 337 339 338 def context_switch(ts, cpu, pid, tid, np_pid, np_tid, machine_pid, out, out_preempt, *x): 340 - global glb_switch_printed 341 - global glb_switch_str 342 339 if out: 343 340 out_str = "Switch out " 344 341 else: ··· 349 350 machine_str = "" 350 351 else: 351 352 machine_str = "machine PID %d" % machine_pid 352 - glb_switch_str = "%16s %5d/%-5d [%03u] %9u.%09u %5d/%-5d %s %s" % \ 353 + switch_str = "%16s %5d/%-5d [%03u] %9u.%09u %5d/%-5d %s %s" % \ 353 354 (out_str, pid, tid, cpu, ts / 1000000000, ts %1000000000, np_pid, np_tid, machine_str, preempt_str) 354 - glb_switch_printed = False 355 + if glb_args.all_switch_events: 356 + print(switch_str); 357 + else: 358 + global glb_switch_str 359 + glb_switch_str[cpu] = switch_str
+5 -3
tools/perf/ui/tui/setup.c
··· 170 170 "Press any key...", 0); 171 171 172 172 SLtt_set_cursor_visibility(1); 173 - SLsmg_refresh(); 174 - SLsmg_reset_smg(); 173 + if (!pthread_mutex_trylock(&ui__lock)) { 174 + SLsmg_refresh(); 175 + SLsmg_reset_smg(); 176 + pthread_mutex_unlock(&ui__lock); 177 + } 175 178 SLang_reset_tty(); 176 - 177 179 perf_error__unregister(&perf_tui_eops); 178 180 }
+6 -1
tools/perf/util/expr.c
··· 66 66 67 67 struct hashmap *ids__new(void) 68 68 { 69 - return hashmap__new(key_hash, key_equal, NULL); 69 + struct hashmap *hash; 70 + 71 + hash = hashmap__new(key_hash, key_equal, NULL); 72 + if (IS_ERR(hash)) 73 + return NULL; 74 + return hash; 70 75 } 71 76 72 77 void ids__free(struct hashmap *ids)
+1
tools/perf/util/intel-pt.c
··· 3625 3625 *args = p; 3626 3626 return 0; 3627 3627 } 3628 + p += 1; 3628 3629 while (1) { 3629 3630 vmcs = strtoull(p, &p, 0); 3630 3631 if (errno)
+17 -6
tools/perf/util/pmu.c
··· 1659 1659 return !strcmp(name, "cpu") || is_arm_pmu_core(name); 1660 1660 } 1661 1661 1662 + static bool pmu_alias_is_duplicate(struct sevent *alias_a, 1663 + struct sevent *alias_b) 1664 + { 1665 + /* Different names -> never duplicates */ 1666 + if (strcmp(alias_a->name, alias_b->name)) 1667 + return false; 1668 + 1669 + /* Don't remove duplicates for hybrid PMUs */ 1670 + if (perf_pmu__is_hybrid(alias_a->pmu) && 1671 + perf_pmu__is_hybrid(alias_b->pmu)) 1672 + return false; 1673 + 1674 + return true; 1675 + } 1676 + 1662 1677 void print_pmu_events(const char *event_glob, bool name_only, bool quiet_flag, 1663 1678 bool long_desc, bool details_flag, bool deprecated, 1664 1679 const char *pmu_name) ··· 1759 1744 qsort(aliases, len, sizeof(struct sevent), cmp_sevent); 1760 1745 for (j = 0; j < len; j++) { 1761 1746 /* Skip duplicates */ 1762 - if (j > 0 && !strcmp(aliases[j].name, aliases[j - 1].name)) { 1763 - if (!aliases[j].pmu || !aliases[j - 1].pmu || 1764 - !strcmp(aliases[j].pmu, aliases[j - 1].pmu)) { 1765 - continue; 1766 - } 1767 - } 1747 + if (j > 0 && pmu_alias_is_duplicate(&aliases[j], &aliases[j - 1])) 1748 + continue; 1768 1749 1769 1750 if (name_only) { 1770 1751 printf("%s ", aliases[j].name);
+1
tools/testing/selftests/kvm/.gitignore
··· 35 35 /x86_64/vmx_apic_access_test 36 36 /x86_64/vmx_close_while_nested_test 37 37 /x86_64/vmx_dirty_log_test 38 + /x86_64/vmx_invalid_nested_guest_state 38 39 /x86_64/vmx_preemption_timer_test 39 40 /x86_64/vmx_set_nested_state_test 40 41 /x86_64/vmx_tsc_adjust_test
+1
tools/testing/selftests/kvm/Makefile
··· 64 64 TEST_GEN_PROGS_x86_64 += x86_64/vmx_apic_access_test 65 65 TEST_GEN_PROGS_x86_64 += x86_64/vmx_close_while_nested_test 66 66 TEST_GEN_PROGS_x86_64 += x86_64/vmx_dirty_log_test 67 + TEST_GEN_PROGS_x86_64 += x86_64/vmx_invalid_nested_guest_state 67 68 TEST_GEN_PROGS_x86_64 += x86_64/vmx_set_nested_state_test 68 69 TEST_GEN_PROGS_x86_64 += x86_64/vmx_tsc_adjust_test 69 70 TEST_GEN_PROGS_x86_64 += x86_64/vmx_nested_tsc_scaling_test
+1 -9
tools/testing/selftests/kvm/include/kvm_util.h
··· 71 71 72 72 #endif 73 73 74 - #if defined(__x86_64__) 75 - unsigned long vm_compute_max_gfn(struct kvm_vm *vm); 76 - #else 77 - static inline unsigned long vm_compute_max_gfn(struct kvm_vm *vm) 78 - { 79 - return ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 80 - } 81 - #endif 82 - 83 74 #define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT) 84 75 #define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE) 85 76 ··· 321 330 322 331 unsigned int vm_get_page_size(struct kvm_vm *vm); 323 332 unsigned int vm_get_page_shift(struct kvm_vm *vm); 333 + unsigned long vm_compute_max_gfn(struct kvm_vm *vm); 324 334 uint64_t vm_get_max_gfn(struct kvm_vm *vm); 325 335 int vm_get_fd(struct kvm_vm *vm); 326 336
+5
tools/testing/selftests/kvm/lib/kvm_util.c
··· 2328 2328 return vm->page_shift; 2329 2329 } 2330 2330 2331 + unsigned long __attribute__((weak)) vm_compute_max_gfn(struct kvm_vm *vm) 2332 + { 2333 + return ((1ULL << vm->pa_bits) >> vm->page_shift) - 1; 2334 + } 2335 + 2331 2336 uint64_t vm_get_max_gfn(struct kvm_vm *vm) 2332 2337 { 2333 2338 return vm->max_gfn;
+105
tools/testing/selftests/kvm/x86_64/vmx_invalid_nested_guest_state.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + #include "test_util.h" 3 + #include "kvm_util.h" 4 + #include "processor.h" 5 + #include "vmx.h" 6 + 7 + #include <string.h> 8 + #include <sys/ioctl.h> 9 + 10 + #include "kselftest.h" 11 + 12 + #define VCPU_ID 0 13 + #define ARBITRARY_IO_PORT 0x2000 14 + 15 + static struct kvm_vm *vm; 16 + 17 + static void l2_guest_code(void) 18 + { 19 + /* 20 + * Generate an exit to L0 userspace, i.e. main(), via I/O to an 21 + * arbitrary port. 22 + */ 23 + asm volatile("inb %%dx, %%al" 24 + : : [port] "d" (ARBITRARY_IO_PORT) : "rax"); 25 + } 26 + 27 + static void l1_guest_code(struct vmx_pages *vmx_pages) 28 + { 29 + #define L2_GUEST_STACK_SIZE 64 30 + unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE]; 31 + 32 + GUEST_ASSERT(prepare_for_vmx_operation(vmx_pages)); 33 + GUEST_ASSERT(load_vmcs(vmx_pages)); 34 + 35 + /* Prepare the VMCS for L2 execution. */ 36 + prepare_vmcs(vmx_pages, l2_guest_code, 37 + &l2_guest_stack[L2_GUEST_STACK_SIZE]); 38 + 39 + /* 40 + * L2 must be run without unrestricted guest, verify that the selftests 41 + * library hasn't enabled it. Because KVM selftests jump directly to 42 + * 64-bit mode, unrestricted guest support isn't required. 43 + */ 44 + GUEST_ASSERT(!(vmreadz(CPU_BASED_VM_EXEC_CONTROL) & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) || 45 + !(vmreadz(SECONDARY_VM_EXEC_CONTROL) & SECONDARY_EXEC_UNRESTRICTED_GUEST)); 46 + 47 + GUEST_ASSERT(!vmlaunch()); 48 + 49 + /* L2 should triple fault after main() stuffs invalid guest state. */ 50 + GUEST_ASSERT(vmreadz(VM_EXIT_REASON) == EXIT_REASON_TRIPLE_FAULT); 51 + GUEST_DONE(); 52 + } 53 + 54 + int main(int argc, char *argv[]) 55 + { 56 + vm_vaddr_t vmx_pages_gva; 57 + struct kvm_sregs sregs; 58 + struct kvm_run *run; 59 + struct ucall uc; 60 + 61 + nested_vmx_check_supported(); 62 + 63 + vm = vm_create_default(VCPU_ID, 0, (void *) l1_guest_code); 64 + 65 + /* Allocate VMX pages and shared descriptors (vmx_pages). */ 66 + vcpu_alloc_vmx(vm, &vmx_pages_gva); 67 + vcpu_args_set(vm, VCPU_ID, 1, vmx_pages_gva); 68 + 69 + vcpu_run(vm, VCPU_ID); 70 + 71 + run = vcpu_state(vm, VCPU_ID); 72 + 73 + /* 74 + * The first exit to L0 userspace should be an I/O access from L2. 75 + * Running L1 should launch L2 without triggering an exit to userspace. 76 + */ 77 + TEST_ASSERT(run->exit_reason == KVM_EXIT_IO, 78 + "Expected KVM_EXIT_IO, got: %u (%s)\n", 79 + run->exit_reason, exit_reason_str(run->exit_reason)); 80 + 81 + TEST_ASSERT(run->io.port == ARBITRARY_IO_PORT, 82 + "Expected IN from port %d from L2, got port %d", 83 + ARBITRARY_IO_PORT, run->io.port); 84 + 85 + /* 86 + * Stuff invalid guest state for L2 by making TR unusable. The next 87 + * KVM_RUN should induce a TRIPLE_FAULT in L2 as KVM doesn't support 88 + * emulating invalid guest state for L2. 89 + */ 90 + memset(&sregs, 0, sizeof(sregs)); 91 + vcpu_sregs_get(vm, VCPU_ID, &sregs); 92 + sregs.tr.unusable = 1; 93 + vcpu_sregs_set(vm, VCPU_ID, &sregs); 94 + 95 + vcpu_run(vm, VCPU_ID); 96 + 97 + switch (get_ucall(vm, VCPU_ID, &uc)) { 98 + case UCALL_DONE: 99 + break; 100 + case UCALL_ABORT: 101 + TEST_FAIL("%s", (const char *)uc.args[0]); 102 + default: 103 + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); 104 + } 105 + }
-1
tools/testing/selftests/net/mptcp/config
··· 9 9 CONFIG_NETFILTER_ADVANCED=y 10 10 CONFIG_NETFILTER_NETLINK=m 11 11 CONFIG_NF_TABLES=m 12 - CONFIG_NFT_COUNTER=m 13 12 CONFIG_NFT_COMPAT=m 14 13 CONFIG_NETFILTER_XTABLES=m 15 14 CONFIG_NETFILTER_XT_MATCH_BPF=m
+4 -2
tools/testing/selftests/net/udpgro_fwd.sh
··· 132 132 local rcv=`ip netns exec $NS_DST $ipt"-save" -c | grep 'dport 8000' | \ 133 133 sed -e 's/\[//' -e 's/:.*//'` 134 134 if [ $rcv != $pkts ]; then 135 - echo " fail - received $rvs packets, expected $pkts" 135 + echo " fail - received $rcv packets, expected $pkts" 136 136 ret=1 137 137 return 138 138 fi ··· 185 185 IPT=iptables 186 186 SUFFIX=24 187 187 VXDEV=vxlan 188 + PING=ping 188 189 189 190 if [ $family = 6 ]; then 190 191 BM_NET=$BM_NET_V6 ··· 193 192 SUFFIX="64 nodad" 194 193 VXDEV=vxlan6 195 194 IPT=ip6tables 195 + PING="ping6" 196 196 fi 197 197 198 198 echo "IPv$family" ··· 239 237 240 238 # load arp cache before running the test to reduce the amount of 241 239 # stray traffic on top of the UDP tunnel 242 - ip netns exec $NS_SRC ping -q -c 1 $OL_NET$DST_NAT >/dev/null 240 + ip netns exec $NS_SRC $PING -q -c 1 $OL_NET$DST_NAT >/dev/null 243 241 run_test "GRO fwd over UDP tunnel" $OL_NET$DST_NAT 1 1 $OL_NET$DST 244 242 cleanup 245 243
+6 -6
tools/testing/selftests/net/udpgso.c
··· 156 156 }, 157 157 { 158 158 /* send max number of min sized segments */ 159 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4, 159 + .tlen = UDP_MAX_SEGMENTS, 160 160 .gso_len = 1, 161 - .r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4, 161 + .r_num_mss = UDP_MAX_SEGMENTS, 162 162 }, 163 163 { 164 164 /* send max number + 1 of min sized segments: fail */ 165 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V4 + 1, 165 + .tlen = UDP_MAX_SEGMENTS + 1, 166 166 .gso_len = 1, 167 167 .tfail = true, 168 168 }, ··· 259 259 }, 260 260 { 261 261 /* send max number of min sized segments */ 262 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6, 262 + .tlen = UDP_MAX_SEGMENTS, 263 263 .gso_len = 1, 264 - .r_num_mss = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6, 264 + .r_num_mss = UDP_MAX_SEGMENTS, 265 265 }, 266 266 { 267 267 /* send max number + 1 of min sized segments: fail */ 268 - .tlen = UDP_MAX_SEGMENTS - CONST_HDRLEN_V6 + 1, 268 + .tlen = UDP_MAX_SEGMENTS + 1, 269 269 .gso_len = 1, 270 270 .tfail = true, 271 271 },
+7 -1
tools/testing/selftests/net/udpgso_bench_tx.c
··· 419 419 420 420 static void parse_opts(int argc, char **argv) 421 421 { 422 + const char *bind_addr = NULL; 422 423 int max_len, hdrlen; 423 424 int c; 424 425 ··· 447 446 cfg_cpu = strtol(optarg, NULL, 0); 448 447 break; 449 448 case 'D': 450 - setup_sockaddr(cfg_family, optarg, &cfg_dst_addr); 449 + bind_addr = optarg; 451 450 break; 452 451 case 'l': 453 452 cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000; ··· 492 491 break; 493 492 } 494 493 } 494 + 495 + if (!bind_addr) 496 + bind_addr = cfg_family == PF_INET6 ? "::" : "0.0.0.0"; 497 + 498 + setup_sockaddr(cfg_family, bind_addr, &cfg_dst_addr); 495 499 496 500 if (optind != argc) 497 501 usage(argv[0]);
+10 -6
tools/testing/selftests/vm/userfaultfd.c
··· 87 87 88 88 static bool map_shared; 89 89 static int shm_fd; 90 - static int huge_fd; 90 + static int huge_fd = -1; /* only used for hugetlb_shared test */ 91 91 static char *huge_fd_off0; 92 92 static unsigned long long *count_verify; 93 93 static int uffd = -1; ··· 223 223 224 224 static void hugetlb_release_pages(char *rel_area) 225 225 { 226 + if (huge_fd == -1) 227 + return; 228 + 226 229 if (fallocate(huge_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 227 230 rel_area == huge_fd_off0 ? 0 : nr_pages * page_size, 228 231 nr_pages * page_size)) ··· 238 235 char **alloc_area_alias; 239 236 240 237 *alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE, 241 - (map_shared ? MAP_SHARED : MAP_PRIVATE) | 242 - MAP_HUGETLB, 243 - huge_fd, *alloc_area == area_src ? 0 : 244 - nr_pages * page_size); 238 + map_shared ? MAP_SHARED : 239 + MAP_PRIVATE | MAP_HUGETLB | 240 + (*alloc_area == area_src ? 0 : MAP_NORESERVE), 241 + huge_fd, 242 + *alloc_area == area_src ? 0 : nr_pages * page_size); 245 243 if (*alloc_area == MAP_FAILED) 246 244 err("mmap of hugetlbfs file failed"); 247 245 248 246 if (map_shared) { 249 247 area_alias = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE, 250 - MAP_SHARED | MAP_HUGETLB, 248 + MAP_SHARED, 251 249 huge_fd, *alloc_area == area_src ? 0 : 252 250 nr_pages * page_size); 253 251 if (area_alias == MAP_FAILED)