Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.19-rc6 into usb-next

We want the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4145 -2280
+1 -1
Documentation/devicetree/bindings/input/gpio-keys.txt
@@ -1,4 +1,4 @@
-Device-Tree bindings for input/gpio_keys.c keyboard driver
+Device-Tree bindings for input/keyboard/gpio_keys.c keyboard driver
 
 Required properties:
 - compatible = "gpio-keys";
+1
Documentation/devicetree/bindings/net/macb.txt
@@ -10,6 +10,7 @@
 Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
 the Cadence GEM, or the generic form: "cdns,gem".
 Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
+Use "atmel,sama5d3-macb" for the 10/100Mbit IP available on Atmel sama5d3 SoCs.
 Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
 Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
 Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
-1
Documentation/media/uapi/dvb/video_function_calls.rst
@@ -33,4 +33,3 @@
     video-clear-buffer
     video-set-streamtype
     video-set-format
-    video-set-attributes
+11 -1
Documentation/virtual/kvm/api.txt
@@ -4510,7 +4510,8 @@
 Architectures: s390
 Parameters: none
 Returns: 0 on success, -EINVAL if hpage module parameter was not set
-         or cmma is enabled
+         or cmma is enabled, or the VM has the KVM_VM_S390_UCONTROL
+         flag set
 
 With this capability the KVM support for memory backing with 1m pages
 through hugetlbfs can be enabled for a VM. After the capability is
@@ -4521,6 +4520,15 @@
 
 While it is generally possible to create a huge page backed VM without
 this capability, the VM will not be able to run.
+
+7.14 KVM_CAP_MSR_PLATFORM_INFO
+
+Architectures: x86
+Parameters: args[0] whether feature should be enabled or not
+
+With this capability, a guest may read the MSR_PLATFORM_INFO MSR. Otherwise,
+a #GP would be raised when the guest tries to access. Currently, this
+capability does not enable write permissions of this MSR for the guest.
 
 8. Other capabilities.
 ----------------------
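The KVM_CAP_MSR_PLATFORM_INFO text added to api.txt describes a per-VM capability whose args[0] carries the enable/disable flag. As a hedged illustration (not part of this merge), a userspace VMM might toggle it roughly like this; the helper name is hypothetical, the VM fd is assumed to come from KVM_CREATE_VM, and the fallback constant value is taken from the 4.19 uapi headers:

```c
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Fallback for pre-4.19 uapi headers; value as defined in the 4.19 tree. */
#ifndef KVM_CAP_MSR_PLATFORM_INFO
#define KVM_CAP_MSR_PLATFORM_INFO 159
#endif

/*
 * Hypothetical helper: allow (enable = 1) or forbid (enable = 0) guest reads
 * of MSR_PLATFORM_INFO, per the api.txt text above. Writes to the MSR still
 * #GP regardless of this setting. Returns the raw ioctl() result.
 */
static int set_msr_platform_info(int vm_fd, int enable)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MSR_PLATFORM_INFO,
		.args = { enable },
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```

This follows the same KVM_ENABLE_CAP-on-the-VM-fd pattern used by other per-VM capabilities; it needs a real /dev/kvm VM to actually succeed.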
+26 -10
MAINTAINERS
@@ -9716,13 +9716,6 @@
 S: Maintained
 F: drivers/media/dvb-frontends/mn88473*
 
-PCI DRIVER FOR MOBIVEIL PCIE IP
-M: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
-L: linux-pci@vger.kernel.org
-S: Supported
-F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
-F: drivers/pci/controller/pcie-mobiveil.c
-
 MODULE SUPPORT
 M: Jessica Yu <jeyu@kernel.org>
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
@@ -10942,7 +10949,7 @@
 M: Ksenija Stanojevic <ksenija.stanojevic@gmail.com>
 S: Odd Fixes
 F: Documentation/auxdisplay/lcd-panel-cgram.txt
-F: drivers/misc/panel.c
+F: drivers/auxdisplay/panel.c
 
 PARALLEL PORT SUBSYSTEM
 M: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
@@ -11130,6 +11137,13 @@
 F: include/linux/switchtec.h
 F: drivers/ntb/hw/mscc/
 
+PCI DRIVER FOR MOBIVEIL PCIE IP
+M: Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
+L: linux-pci@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
+F: drivers/pci/controller/pcie-mobiveil.c
+
 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
 M: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
 M: Jason Cooper <jason@lakedaemon.net>
@@ -11203,8 +11203,14 @@
 
 PCI ENHANCED ERROR HANDLING (EEH) FOR POWERPC
 M: Russell Currey <ruscur@russell.cc>
+M: Sam Bobroff <sbobroff@linux.ibm.com>
+M: Oliver O'Halloran <oohall@gmail.com>
 L: linuxppc-dev@lists.ozlabs.org
 S: Supported
+F: Documentation/PCI/pci-error-recovery.txt
+F: drivers/pci/pcie/aer.c
+F: drivers/pci/pcie/dpc.c
+F: drivers/pci/pcie/err.c
 F: Documentation/powerpc/eeh-pci-error-recovery.txt
 F: arch/powerpc/kernel/eeh*.c
 F: arch/powerpc/platforms/*/eeh*.c
@@ -12266,6 +12260,7 @@
 
 RDT - RESOURCE ALLOCATION
 M: Fenghua Yu <fenghua.yu@intel.com>
+M: Reinette Chatre <reinette.chatre@intel.com>
 L: linux-kernel@vger.kernel.org
 S: Supported
 F: arch/x86/kernel/cpu/intel_rdt*
@@ -13456,9 +13449,8 @@
 F: Documentation/devicetree/bindings/i2c/i2c-synquacer.txt
 
 SOCIONEXT UNIPHIER SOUND DRIVER
-M: Katsuhiro Suzuki <suzuki.katsuhiro@socionext.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
-S: Maintained
+S: Orphan
 F: sound/soc/uniphier/
 
 SOEKRIS NET48XX LED SUPPORT
@@ -15925,6 +15919,7 @@
 X86 ARCHITECTURE (32-BIT AND 64-BIT)
 M: Thomas Gleixner <tglx@linutronix.de>
 M: Ingo Molnar <mingo@redhat.com>
+M: Borislav Petkov <bp@alien8.de>
 R: "H. Peter Anvin" <hpa@zytor.com>
 M: x86@kernel.org
 L: linux-kernel@vger.kernel.org
@@ -15953,6 +15946,15 @@
 M: Borislav Petkov <bp@alien8.de>
 S: Maintained
 F: arch/x86/kernel/cpu/microcode/*
+
+X86 MM
+M: Dave Hansen <dave.hansen@linux.intel.com>
+M: Andy Lutomirski <luto@kernel.org>
+M: Peter Zijlstra <peterz@infradead.org>
+L: linux-kernel@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/mm
+S: Maintained
+F: arch/x86/mm/
 
 X86 PLATFORM DRIVERS
 M: Darren Hart <dvhart@infradead.org>
+2 -14
Makefile
@@ -2,7 +2,7 @@
 VERSION = 4
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Merciless Moray
 
 # *DOCUMENTATION*
@@ -299,19 +299,7 @@
 KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
 export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION
 
-# SUBARCH tells the usermode build what the underlying arch is. That is set
-# first, and if a usermode build is happening, the "ARCH=um" on the command
-# line overrides the setting of ARCH below. If a native build is happening,
-# then ARCH is assigned, getting whatever value it gets normally, and
-# SUBARCH is subsequently ignored.
-
-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
-                                  -e s/sun4u/sparc64/ \
-                                  -e s/arm.*/arm/ -e s/sa110/arm/ \
-                                  -e s/s390x/s390/ -e s/parisc64/parisc/ \
-                                  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-                                  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
-                                  -e s/riscv.*/riscv/)
+include scripts/subarch.include
 
 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
+1 -1
arch/arm/boot/dts/sama5d3_emac.dtsi
@@ -41,7 +41,7 @@
 };
 
 macb1: ethernet@f802c000 {
-    compatible = "cdns,at91sam9260-macb", "cdns,macb";
+    compatible = "atmel,sama5d3-macb", "cdns,at91sam9260-macb", "cdns,macb";
     reg = <0xf802c000 0x100>;
     interrupts = <35 IRQ_TYPE_LEVEL_HIGH 3>;
     pinctrl-names = "default";
-1
arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1051,7 +1051,6 @@
     return hash__vmemmap_remove_mapping(start, page_size);
 }
 #endif
-struct page *realmode_pfn_to_page(unsigned long pfn);
 
 static inline pte_t pmd_pte(pmd_t pmd)
 {
-2
arch/powerpc/include/asm/iommu.h
@@ -220,8 +220,6 @@
 extern int __init tce_iommu_bus_notifier_init(void);
 extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
         unsigned long *hpa, enum dma_data_direction *direction);
-extern long iommu_tce_xchg_rm(struct iommu_table *tbl, unsigned long entry,
-        unsigned long *hpa, enum dma_data_direction *direction);
 #else
 static inline void iommu_register_group(struct iommu_table_group *table_group,
         int pci_domain_number,
+1
arch/powerpc/include/asm/mmu_context.h
@@ -38,6 +38,7 @@
         unsigned long ua, unsigned int pageshift, unsigned long *hpa);
 extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
         unsigned long ua, unsigned int pageshift, unsigned long *hpa);
+extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
+1
arch/powerpc/include/asm/setup.h
@@ -9,6 +9,7 @@
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
+2 -2
arch/powerpc/kernel/exceptions-64s.S
@@ -1314,9 +1314,7 @@
 
 #ifdef CONFIG_PPC_DENORMALISATION
     mfspr   r10,SPRN_HSRR1
-    mfspr   r11,SPRN_HSRR0      /* save HSRR0 */
     andis.  r10,r10,(HSRR1_DENORM)@h /* denorm? */
-    addi    r11,r11,-4          /* HSRR0 is next instruction */
     bne+    denorm_assist
 #endif
 
@@ -1380,6 +1382,8 @@
  */
     XVCPSGNDP32(32)
 denorm_done:
+    mfspr   r11,SPRN_HSRR0
+    subi    r11,r11,4
     mtspr   SPRN_HSRR0,r11
     mtcrf   0x80,r9
     ld      r9,PACA_EXGEN+EX_R9(r13)
-25
arch/powerpc/kernel/iommu.c
@@ -1013,31 +1013,6 @@
 }
 EXPORT_SYMBOL_GPL(iommu_tce_xchg);
 
-#ifdef CONFIG_PPC_BOOK3S_64
-long iommu_tce_xchg_rm(struct iommu_table *tbl, unsigned long entry,
-        unsigned long *hpa, enum dma_data_direction *direction)
-{
-    long ret;
-
-    ret = tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
-
-    if (!ret && ((*direction == DMA_FROM_DEVICE) ||
-            (*direction == DMA_BIDIRECTIONAL))) {
-        struct page *pg = realmode_pfn_to_page(*hpa >> PAGE_SHIFT);
-
-        if (likely(pg)) {
-            SetPageDirty(pg);
-        } else {
-            tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
-            ret = -EFAULT;
-        }
-    }
-
-    return ret;
-}
-EXPORT_SYMBOL_GPL(iommu_tce_xchg_rm);
-#endif
-
 int iommu_take_ownership(struct iommu_table *tbl)
 {
     unsigned long flags, i, sz = (tbl->it_size + 7) >> 3;
+17 -3
arch/powerpc/kernel/tm.S
@@ -176,13 +176,27 @@
     std     r1, PACATMSCRATCH(r13)
     ld      r1, PACAR1(r13)
 
-    /* Store the PPR in r11 and reset to decent value */
     std     r11, GPR11(r1)          /* Temporary stash */
+
+    /*
+     * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is
+     * clobbered by an exception once we turn on MSR_RI below.
+     */
+    ld      r11, PACATMSCRATCH(r13)
+    std     r11, GPR1(r1)
+
+    /*
+     * Store r13 away so we can free up the scratch SPR for the SLB fault
+     * handler (needed once we start accessing the thread_struct).
+     */
+    GET_SCRATCH0(r11)
+    std     r11, GPR13(r1)
 
     /* Reset MSR RI so we can take SLB faults again */
     li      r11, MSR_RI
     mtmsrd  r11, 1
 
+    /* Store the PPR in r11 and reset to decent value */
     mfspr   r11, SPRN_PPR
     HMT_MEDIUM
 
@@ -221,11 +207,11 @@
     SAVE_GPR(8, r7)                 /* user r8 */
     SAVE_GPR(9, r7)                 /* user r9 */
     SAVE_GPR(10, r7)                /* user r10 */
-    ld      r3, PACATMSCRATCH(r13)  /* user r1 */
+    ld      r3, GPR1(r1)            /* user r1 */
     ld      r4, GPR7(r1)            /* user r7 */
     ld      r5, GPR11(r1)           /* user r11 */
     ld      r6, GPR12(r1)           /* user r12 */
-    GET_SCRATCH0(r8)                /* user r13 */
+    ld      r8, GPR13(r1)           /* user r13 */
     std     r3, GPR1(r7)
     std     r4, GPR7(r7)
     std     r5, GPR11(r7)
+37 -54
arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -525,8 +525,8 @@
                    unsigned long ea, unsigned long dsisr)
 {
     struct kvm *kvm = vcpu->kvm;
-    unsigned long mmu_seq, pte_size;
-    unsigned long gpa, gfn, hva, pfn;
+    unsigned long mmu_seq;
+    unsigned long gpa, gfn, hva;
     struct kvm_memory_slot *memslot;
     struct page *page = NULL;
     long ret;
@@ -623,9 +623,10 @@
      */
     hva = gfn_to_hva_memslot(memslot, gfn);
     if (upgrade_p && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
-        pfn = page_to_pfn(page);
         upgrade_write = true;
     } else {
+        unsigned long pfn;
+
         /* Call KVM generic code to do the slow-path check */
         pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
                        writing, upgrade_p);
@@ -640,61 +639,43 @@
         }
     }
 
-    /* See if we can insert a 1GB or 2MB large PTE here */
-    level = 0;
-    if (page && PageCompound(page)) {
-        pte_size = PAGE_SIZE << compound_order(compound_head(page));
-        if (pte_size >= PUD_SIZE &&
-            (gpa & (PUD_SIZE - PAGE_SIZE)) ==
-            (hva & (PUD_SIZE - PAGE_SIZE))) {
-            level = 2;
-            pfn &= ~((PUD_SIZE >> PAGE_SHIFT) - 1);
-        } else if (pte_size >= PMD_SIZE &&
-               (gpa & (PMD_SIZE - PAGE_SIZE)) ==
-               (hva & (PMD_SIZE - PAGE_SIZE))) {
-            level = 1;
-            pfn &= ~((PMD_SIZE >> PAGE_SHIFT) - 1);
+    /*
+     * Read the PTE from the process' radix tree and use that
+     * so we get the shift and attribute bits.
+     */
+    local_irq_disable();
+    ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
+    pte = *ptep;
+    local_irq_enable();
+
+    /* Get pte level from shift/size */
+    if (shift == PUD_SHIFT &&
+        (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+        (hva & (PUD_SIZE - PAGE_SIZE))) {
+        level = 2;
+    } else if (shift == PMD_SHIFT &&
+           (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+           (hva & (PMD_SIZE - PAGE_SIZE))) {
+        level = 1;
+    } else {
+        level = 0;
+        if (shift > PAGE_SHIFT) {
+            /*
+             * If the pte maps more than one page, bring over
+             * bits from the virtual address to get the real
+             * address of the specific single page we want.
+             */
+            unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
+            pte = __pte(pte_val(pte) | (hva & rpnmask));
         }
     }
 
-    /*
-     * Compute the PTE value that we need to insert.
-     */
-    if (page) {
-        pgflags = _PAGE_READ | _PAGE_EXEC | _PAGE_PRESENT | _PAGE_PTE |
-            _PAGE_ACCESSED;
-        if (writing || upgrade_write)
-            pgflags |= _PAGE_WRITE | _PAGE_DIRTY;
-        pte = pfn_pte(pfn, __pgprot(pgflags));
+    pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
+    if (writing || upgrade_write) {
+        if (pte_val(pte) & _PAGE_WRITE)
+            pte = __pte(pte_val(pte) | _PAGE_DIRTY);
     } else {
-        /*
-         * Read the PTE from the process' radix tree and use that
-         * so we get the attribute bits.
-         */
-        local_irq_disable();
-        ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
-        pte = *ptep;
-        local_irq_enable();
-        if (shift == PUD_SHIFT &&
-            (gpa & (PUD_SIZE - PAGE_SIZE)) ==
-            (hva & (PUD_SIZE - PAGE_SIZE))) {
-            level = 2;
-        } else if (shift == PMD_SHIFT &&
-               (gpa & (PMD_SIZE - PAGE_SIZE)) ==
-               (hva & (PMD_SIZE - PAGE_SIZE))) {
-            level = 1;
-        } else if (shift && shift != PAGE_SHIFT) {
-            /* Adjust PFN */
-            unsigned long mask = (1ul << shift) - PAGE_SIZE;
-            pte = __pte(pte_val(pte) | (hva & mask));
-        }
-        pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
-        if (writing || upgrade_write) {
-            if (pte_val(pte) & _PAGE_WRITE)
-                pte = __pte(pte_val(pte) | _PAGE_DIRTY);
-        } else {
-            pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
-        }
+        pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
     }
 
     /* Allocate space in the tree and write the PTE */
+31 -8
arch/powerpc/kvm/book3s_64_vio_hv.c
@@ -187,12 +187,35 @@
 EXPORT_SYMBOL_GPL(kvmppc_gpa_to_ua);
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-static void kvmppc_rm_clear_tce(struct iommu_table *tbl, unsigned long entry)
+static long iommu_tce_xchg_rm(struct mm_struct *mm, struct iommu_table *tbl,
+        unsigned long entry, unsigned long *hpa,
+        enum dma_data_direction *direction)
+{
+    long ret;
+
+    ret = tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
+
+    if (!ret && ((*direction == DMA_FROM_DEVICE) ||
+            (*direction == DMA_BIDIRECTIONAL))) {
+        __be64 *pua = IOMMU_TABLE_USERSPACE_ENTRY_RM(tbl, entry);
+        /*
+         * kvmppc_rm_tce_iommu_do_map() updates the UA cache after
+         * calling this so we still get here a valid UA.
+         */
+        if (pua && *pua)
+            mm_iommu_ua_mark_dirty_rm(mm, be64_to_cpu(*pua));
+    }
+
+    return ret;
+}
+
+static void kvmppc_rm_clear_tce(struct kvm *kvm, struct iommu_table *tbl,
+        unsigned long entry)
 {
     unsigned long hpa = 0;
     enum dma_data_direction dir = DMA_NONE;
 
-    iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+    iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
 }
 
 static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
@@ -247,7 +224,7 @@
     unsigned long hpa = 0;
     long ret;
 
-    if (iommu_tce_xchg_rm(tbl, entry, &hpa, &dir))
+    if (iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir))
         /*
          * real mode xchg can fail if struct page crosses
          * a page boundary
@@ -259,7 +236,7 @@
 
     ret = kvmppc_rm_tce_iommu_mapped_dec(kvm, tbl, entry);
     if (ret)
-        iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+        iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
 
     return ret;
 }
@@ -305,7 +282,7 @@
     if (WARN_ON_ONCE_RM(mm_iommu_mapped_inc(mem)))
         return H_CLOSED;
 
-    ret = iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+    ret = iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
     if (ret) {
         mm_iommu_mapped_dec(mem);
         /*
@@ -394,7 +371,7 @@
             return ret;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
 
     kvmppc_tce_put(stt, entry, tce);
@@ -543,7 +520,7 @@
             goto unlock_exit;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
 
     kvmppc_tce_put(stt, entry + i, tce);
@@ -594,7 +571,7 @@
             return ret;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
 }
 
+3
arch/powerpc/lib/checksum_64.S
@@ -443,6 +443,9 @@
     addc    r0, r8, r9
     ld      r10, 0(r4)
     ld      r11, 8(r4)
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+    rotldi  r5, r5, 8
+#endif
     adde    r0, r0, r10
     add     r5, r5, r7
     adde    r0, r0, r11
+6
arch/powerpc/lib/code-patching.c
@@ -28,6 +28,12 @@
 {
     int err;
 
+    /* Make sure we aren't patching a freed init section */
+    if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+        pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+        return 0;
+    }
+
     __put_user_size(instr, patch_addr, 4, err);
     if (err)
         return err;
-49
arch/powerpc/mm/init_64.c
@@ -308,55 +308,6 @@
 {
 }
 
-/*
- * We do not have access to the sparsemem vmemmap, so we fallback to
- * walking the list of sparsemem blocks which we already maintain for
- * the sake of crashdump. In the long run, we might want to maintain
- * a tree if performance of that linear walk becomes a problem.
- *
- * realmode_pfn_to_page functions can fail due to:
- * 1) As real sparsemem blocks do not lay in RAM continously (they
- * are in virtual address space which is not available in the real mode),
- * the requested page struct can be split between blocks so get_page/put_page
- * may fail.
- * 2) When huge pages are used, the get_page/put_page API will fail
- * in real mode as the linked addresses in the page struct are virtual
- * too.
- */
-struct page *realmode_pfn_to_page(unsigned long pfn)
-{
-    struct vmemmap_backing *vmem_back;
-    struct page *page;
-    unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
-    unsigned long pg_va = (unsigned long) pfn_to_page(pfn);
-
-    for (vmem_back = vmemmap_list; vmem_back; vmem_back = vmem_back->list) {
-        if (pg_va < vmem_back->virt_addr)
-            continue;
-
-        /* After vmemmap_list entry free is possible, need check all */
-        if ((pg_va + sizeof(struct page)) <=
-                (vmem_back->virt_addr + page_size)) {
-            page = (struct page *) (vmem_back->phys + pg_va -
-                vmem_back->virt_addr);
-            return page;
-        }
-    }
-
-    /* Probably that page struct is split between real pages */
-    return NULL;
-}
-EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
-
-#else
-
-struct page *realmode_pfn_to_page(unsigned long pfn)
-{
-    struct page *page = pfn_to_page(pfn);
-    return page;
-}
-EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
-
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
 #ifdef CONFIG_PPC_BOOK3S_64
+2
arch/powerpc/mm/mem.c
@@ -63,6 +63,7 @@
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
@@ -397,6 +396,7 @@
 {
     ppc_md.progress = ppc_printk_progress;
     mark_initmem_nx();
+    init_mem_is_free = true;
     free_initmem_default(POISON_FREE_INITMEM);
 }
 
+30 -4
arch/powerpc/mm/mmu_context_iommu.c
@@ -18,10 +18,14 @@
 #include <linux/migrate.h>
 #include <linux/hugetlb.h>
 #include <linux/swap.h>
+#include <linux/sizes.h>
 #include <asm/mmu_context.h>
 #include <asm/pte-walk.h>
 
 static DEFINE_MUTEX(mem_list_mutex);
+
+#define MM_IOMMU_TABLE_GROUP_PAGE_DIRTY 0x1
+#define MM_IOMMU_TABLE_GROUP_PAGE_MASK  ~(SZ_4K - 1)
 
 struct mm_iommu_table_group_mem_t {
     struct list_head next;
@@ -267,6 +263,9 @@
         if (!page)
             continue;
 
+        if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
+            SetPageDirty(page);
+
         put_page(page);
         mem->hpas[i] = 0;
     }
@@ -367,7 +360,6 @@
 
     return ret;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_lookup_rm);
 
 struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
         unsigned long ua, unsigned long entries)
@@ -396,7 +390,7 @@
     if (pageshift > mem->pageshift)
         return -EFAULT;
 
-    *hpa = *va | (ua & ~PAGE_MASK);
+    *hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
     return 0;
 }
@@ -419,10 +413,31 @@
     if (!pa)
         return -EFAULT;
 
-    *hpa = *pa | (ua & ~PAGE_MASK);
+    *hpa = (*pa & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
     return 0;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_ua_to_hpa_rm);
+
+extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
+{
+    struct mm_iommu_table_group_mem_t *mem;
+    long entry;
+    void *va;
+    unsigned long *pa;
+
+    mem = mm_iommu_lookup_rm(mm, ua, PAGE_SIZE);
+    if (!mem)
+        return;
+
+    entry = (ua - mem->ua) >> PAGE_SHIFT;
+    va = &mem->hpas[entry];
+
+    pa = (void *) vmalloc_to_phys(va);
+    if (!pa)
+        return;
+
+    *pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
+}
 
 long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem)
 {
+5 -2
arch/powerpc/mm/numa.c
@@ -1204,7 +1204,9 @@
     int new_nid;
 
     /* Use associativity from first thread for all siblings */
-    vphn_get_associativity(cpu, associativity);
+    if (vphn_get_associativity(cpu, associativity))
+        return cpu_to_node(cpu);
+
     new_nid = associativity_to_nid(associativity);
     if (new_nid < 0 || !node_possible(new_nid))
         new_nid = first_online_node;
@@ -1454,7 +1452,8 @@
 
 static void reset_topology_timer(void)
 {
-    mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+    if (vphn_enabled)
+        mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
 }
 
 #ifdef CONFIG_SMP
+1 -1
arch/powerpc/mm/pkeys.c
@@ -45,7 +45,7 @@
      * Since any pkey can be used for data or execute, we will just treat
      * all keys as equal and track them as one entity.
      */
-    pkeys_total = be32_to_cpu(vals[0]);
+    pkeys_total = vals[0];
     pkeys_devtree_defined = true;
 }
 
+1 -1
arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -276,7 +276,7 @@
     level_shift = entries_shift + 3;
     level_shift = max_t(unsigned int, level_shift, PAGE_SHIFT);
 
-    if ((level_shift - 3) * levels + page_shift >= 60)
+    if ((level_shift - 3) * levels + page_shift >= 55)
         return -EINVAL;
 
     /* Allocate TCE table */
+7
arch/riscv/include/asm/asm-prototypes.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_RISCV_PROTOTYPES_H
+
+#include <linux/ftrace.h>
+#include <asm-generic/asm-prototypes.h>
+
+#endif /* _ASM_RISCV_PROTOTYPES_H */
+2 -2
arch/s390/kvm/kvm-s390.c
@@ -481,7 +481,7 @@
         break;
     case KVM_CAP_S390_HPAGE_1M:
         r = 0;
-        if (hpage)
+        if (hpage && !kvm_is_ucontrol(kvm))
             r = 1;
         break;
     case KVM_CAP_S390_MEM_OP:
@@ -691,7 +691,7 @@
         mutex_lock(&kvm->lock);
         if (kvm->created_vcpus)
             r = -EBUSY;
-        else if (!hpage || kvm->arch.use_cmma)
+        else if (!hpage || kvm->arch.use_cmma || kvm_is_ucontrol(kvm))
             r = -EINVAL;
         else {
             r = 0;
+3 -1
arch/s390/mm/gmap.c
@@ -708,11 +708,13 @@
         vmaddr |= gaddr & ~PMD_MASK;
         /* Find vma in the parent mm */
         vma = find_vma(gmap->mm, vmaddr);
+        if (!vma)
+            continue;
         /*
          * We do not discard pages that are backed by
          * hugetlbfs, so we don't have to refault them.
          */
-        if (vma && is_vm_hugetlb_page(vma))
+        if (is_vm_hugetlb_page(vma))
             continue;
         size = min(to - gaddr, PMD_SIZE - (gaddr & ~PMD_MASK));
         zap_page_range(vma, vmaddr, size);
-19
arch/x86/boot/compressed/mem_encrypt.S
@@ -25,20 +25,6 @@
     push    %ebx
     push    %ecx
     push    %edx
-    push    %edi
-
-    /*
-     * RIP-relative addressing is needed to access the encryption bit
-     * variable. Since we are running in 32-bit mode we need this call/pop
-     * sequence to get the proper relative addressing.
-     */
-    call    1f
-1:  popl    %edi
-    subl    $1b, %edi
-
-    movl    enc_bit(%edi), %eax
-    cmpl    $0, %eax
-    jge     .Lsev_exit
 
     /* Check if running under a hypervisor */
     movl    $1, %eax
@@ -69,15 +55,12 @@
 
     movl    %ebx, %eax
     andl    $0x3f, %eax     /* Return the encryption bit location */
-    movl    %eax, enc_bit(%edi)
     jmp     .Lsev_exit
 
 .Lno_sev:
     xor     %eax, %eax
-    movl    %eax, enc_bit(%edi)
 
 .Lsev_exit:
-    pop     %edi
     pop     %edx
     pop     %ecx
     pop     %ebx
@@ -113,8 +96,6 @@
 ENDPROC(set_sev_encryption_mask)
 
     .data
-enc_bit:
-    .int    0xffffffff
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
     .balign 8
-1
arch/x86/crypto/aegis128-aesni-glue.c
@@ -379,7 +379,6 @@
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
         !boot_cpu_has(X86_FEATURE_AES) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
-1
arch/x86/crypto/aegis128l-aesni-glue.c
@@ -379,7 +379,6 @@
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
         !boot_cpu_has(X86_FEATURE_AES) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
-1
arch/x86/crypto/aegis256-aesni-glue.c
@@ -379,7 +379,6 @@
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
         !boot_cpu_has(X86_FEATURE_AES) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
-1
arch/x86/crypto/morus1280-sse2-glue.c
@@ -40,7 +40,6 @@
 static int __init crypto_morus1280_sse2_module_init(void)
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
-1
arch/x86/crypto/morus640-sse2-glue.c
@@ -40,7 +40,6 @@
 static int __init crypto_morus640_sse2_module_init(void)
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
+4 -4
arch/x86/hyperv/hv_apic.c
@@ -95,8 +95,8 @@
  */
 static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector)
 {
-    struct ipi_arg_ex **arg;
-    struct ipi_arg_ex *ipi_arg;
+    struct hv_send_ipi_ex **arg;
+    struct hv_send_ipi_ex *ipi_arg;
     unsigned long flags;
     int nr_bank = 0;
     int ret = 1;
@@ -105,7 +105,7 @@
         return false;
 
     local_irq_save(flags);
-    arg = (struct ipi_arg_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
+    arg = (struct hv_send_ipi_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
 
     ipi_arg = *arg;
     if (unlikely(!ipi_arg))
@@ -135,7 +135,7 @@
 static bool __send_ipi_mask(const struct cpumask *mask, int vector)
 {
     int cur_cpu, vcpu;
-    struct ipi_arg_non_ex ipi_arg;
+    struct hv_send_ipi ipi_arg;
     int ret = 1;
 
     trace_hyperv_send_ipi_mask(mask, vector);
+10
arch/x86/include/asm/fixmap.h
@@ -14,6 +14,16 @@
 #ifndef _ASM_X86_FIXMAP_H
 #define _ASM_X86_FIXMAP_H
 
+/*
+ * Exposed to assembly code for setting up initial page tables. Cannot be
+ * calculated in assembly code (fixmap entries are an enum), but is sanity
+ * checked in the actual fixmap C code to make sure that the fixmap is
+ * covered fully.
+ */
+#define FIXMAP_PMD_NUM  2
+/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
+#define FIXMAP_PMD_TOP  507
+
 #ifndef __ASSEMBLY__
 #include <linux/kernel.h>
 #include <asm/acpi.h>
+9 -7
arch/x86/include/asm/hyperv-tlfs.h
@@ -726,19 +726,21 @@
 #define HV_STIMER_AUTOENABLE        (1ULL << 3)
 #define HV_STIMER_SINT(config)      (__u8)(((config) >> 16) & 0x0F)
 
-struct ipi_arg_non_ex {
-    u32 vector;
-    u32 reserved;
-    u64 cpu_mask;
-};
-
 struct hv_vpset {
     u64 format;
     u64 valid_bank_mask;
     u64 bank_contents[];
 };
 
-struct ipi_arg_ex {
+/* HvCallSendSyntheticClusterIpi hypercall */
+struct hv_send_ipi {
+    u32 vector;
+    u32 reserved;
+    u64 cpu_mask;
+};
+
+/* HvCallSendSyntheticClusterIpiEx hypercall */
+struct hv_send_ipi_ex {
     u32 vector;
     u32 reserved;
     struct hv_vpset vp_set;
+5
arch/x86/include/asm/kvm_host.h
@@ -869,5 +869,7 @@
 
     bool x2apic_format;
     bool x2apic_broadcast_quirk_disabled;
+
+    bool guest_can_read_msr_platform_info;
 
 struct kvm_vm_stat {
@@ -1024,6 +1022,7 @@
     void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
     void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
     void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
+    bool (*guest_apic_has_interrupt)(struct kvm_vcpu *vcpu);
     void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
     void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
     void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu, hpa_t hpa);
@@ -1058,6 +1055,7 @@
     bool (*umip_emulated)(void);
 
     int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
+    void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
 
     void (*sched_in)(struct kvm_vcpu *kvm, int cpu);
 
@@ -1486,6 +1482,7 @@
 
 int kvm_skip_emulated_instruction(struct kvm_vcpu *vcpu);
 int kvm_complete_insn_gp(struct kvm_vcpu *vcpu, int err);
+void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu);
 
 int kvm_is_in_guest(void);
 
+7
arch/x86/include/asm/mem_encrypt.h
··· 48 48 49 49 /* Architecture __weak replacement functions */ 50 50 void __init mem_encrypt_init(void); 51 + void __init mem_encrypt_free_decrypted_mem(void); 51 52 52 53 bool sme_active(void); 53 54 bool sev_active(void); 55 + 56 + #define __bss_decrypted __attribute__((__section__(".bss..decrypted"))) 54 57 55 58 #else /* !CONFIG_AMD_MEM_ENCRYPT */ 56 59 ··· 80 77 static inline int __init 81 78 early_set_memory_encrypted(unsigned long vaddr, unsigned long size) { return 0; } 82 79 80 + #define __bss_decrypted 81 + 83 82 #endif /* CONFIG_AMD_MEM_ENCRYPT */ 84 83 85 84 /* ··· 92 87 */ 93 88 #define __sme_pa(x) (__pa(x) | sme_me_mask) 94 89 #define __sme_pa_nodebug(x) (__pa_nodebug(x) | sme_me_mask) 90 + 91 + extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[]; 95 92 96 93 #endif /* __ASSEMBLY__ */ 97 94
+2 -1
arch/x86/include/asm/pgtable_64.h
··· 14 14 #include <asm/processor.h> 15 15 #include <linux/bitops.h> 16 16 #include <linux/threads.h> 17 + #include <asm/fixmap.h> 17 18 18 19 extern p4d_t level4_kernel_pgt[512]; 19 20 extern p4d_t level4_ident_pgt[512]; ··· 23 22 extern pmd_t level2_kernel_pgt[512]; 24 23 extern pmd_t level2_fixmap_pgt[512]; 25 24 extern pmd_t level2_ident_pgt[512]; 26 - extern pte_t level1_fixmap_pgt[512]; 25 + extern pte_t level1_fixmap_pgt[512 * FIXMAP_PMD_NUM]; 27 26 extern pgd_t init_top_pgt[]; 28 27 29 28 #define swapper_pg_dir init_top_pgt
+1
arch/x86/include/uapi/asm/kvm.h
··· 377 377 378 378 #define KVM_X86_QUIRK_LINT0_REENABLED (1 << 0) 379 379 #define KVM_X86_QUIRK_CD_NW_CLEARED (1 << 1) 380 + #define KVM_X86_QUIRK_LAPIC_MMIO_HOLE (1 << 2) 380 381 381 382 #define KVM_STATE_NESTED_GUEST_MODE 0x00000001 382 383 #define KVM_STATE_NESTED_RUN_PENDING 0x00000002
+13 -4
arch/x86/kernel/cpu/intel_rdt.h
··· 382 382 e <= QOS_L3_MBM_LOCAL_EVENT_ID); 383 383 } 384 384 385 + struct rdt_parse_data { 386 + struct rdtgroup *rdtgrp; 387 + char *buf; 388 + }; 389 + 385 390 /** 386 391 * struct rdt_resource - attributes of an RDT resource 387 392 * @rid: The index of the resource ··· 428 423 struct rdt_cache cache; 429 424 struct rdt_membw membw; 430 425 const char *format_str; 431 - int (*parse_ctrlval) (void *data, struct rdt_resource *r, 432 - struct rdt_domain *d); 426 + int (*parse_ctrlval)(struct rdt_parse_data *data, 427 + struct rdt_resource *r, 428 + struct rdt_domain *d); 433 429 struct list_head evt_list; 434 430 int num_rmid; 435 431 unsigned int mon_scale; 436 432 unsigned long fflags; 437 433 }; 438 434 439 - int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d); 440 - int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d); 435 + int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r, 436 + struct rdt_domain *d); 437 + int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r, 438 + struct rdt_domain *d); 441 439 442 440 extern struct mutex rdtgroup_mutex; 443 441 ··· 544 536 void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp); 545 537 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r); 546 538 int update_domains(struct rdt_resource *r, int closid); 539 + int closids_supported(void); 547 540 void closid_free(int closid); 548 541 int alloc_rmid(void); 549 542 void free_rmid(u32 rmid);
+14 -13
arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c
··· 64 64 return true; 65 65 } 66 66 67 - int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d) 67 + int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r, 68 + struct rdt_domain *d) 68 69 { 69 - unsigned long data; 70 - char *buf = _buf; 70 + unsigned long bw_val; 71 71 72 72 if (d->have_new_ctrl) { 73 73 rdt_last_cmd_printf("duplicate domain %d\n", d->id); 74 74 return -EINVAL; 75 75 } 76 76 77 - if (!bw_validate(buf, &data, r)) 77 + if (!bw_validate(data->buf, &bw_val, r)) 78 78 return -EINVAL; 79 - d->new_ctrl = data; 79 + d->new_ctrl = bw_val; 80 80 d->have_new_ctrl = true; 81 81 82 82 return 0; ··· 123 123 return true; 124 124 } 125 125 126 - struct rdt_cbm_parse_data { 127 - struct rdtgroup *rdtgrp; 128 - char *buf; 129 - }; 130 - 131 126 /* 132 127 * Read one cache bit mask (hex). Check that it is valid for the current 133 128 * resource type. 134 129 */ 135 - int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d) 130 + int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r, 131 + struct rdt_domain *d) 136 132 { 137 - struct rdt_cbm_parse_data *data = _data; 138 133 struct rdtgroup *rdtgrp = data->rdtgrp; 139 134 u32 cbm_val; 140 135 ··· 190 195 static int parse_line(char *line, struct rdt_resource *r, 191 196 struct rdtgroup *rdtgrp) 192 197 { 193 - struct rdt_cbm_parse_data data; 198 + struct rdt_parse_data data; 194 199 char *dom = NULL, *id; 195 200 struct rdt_domain *d; 196 201 unsigned long dom_id; 202 + 203 + if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP && 204 + r->rid == RDT_RESOURCE_MBA) { 205 + rdt_last_cmd_puts("Cannot pseudo-lock MBA resource\n"); 206 + return -EINVAL; 207 + } 197 208 198 209 next: 199 210 if (!line || line[0] == '\0')
+44 -9
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
··· 97 97 * limited as the number of resources grows. 98 98 */ 99 99 static int closid_free_map; 100 + static int closid_free_map_len; 101 + 102 + int closids_supported(void) 103 + { 104 + return closid_free_map_len; 105 + } 100 106 101 107 static void closid_init(void) 102 108 { ··· 117 111 118 112 /* CLOSID 0 is always reserved for the default group */ 119 113 closid_free_map &= ~1; 114 + closid_free_map_len = rdt_min_closid; 120 115 } 121 116 122 117 static int closid_alloc(void) ··· 809 802 sw_shareable = 0; 810 803 exclusive = 0; 811 804 seq_printf(seq, "%d=", dom->id); 812 - for (i = 0; i < r->num_closid; i++, ctrl++) { 805 + for (i = 0; i < closids_supported(); i++, ctrl++) { 813 806 if (!closid_allocated(i)) 814 807 continue; 815 808 mode = rdtgroup_mode_by_closid(i); ··· 996 989 997 990 /* Check for overlap with other resource groups */ 998 991 ctrl = d->ctrl_val; 999 - for (i = 0; i < r->num_closid; i++, ctrl++) { 992 + for (i = 0; i < closids_supported(); i++, ctrl++) { 1000 993 ctrl_b = (unsigned long *)ctrl; 1001 994 mode = rdtgroup_mode_by_closid(i); 1002 995 if (closid_allocated(i) && i != closid && ··· 1031 1024 { 1032 1025 int closid = rdtgrp->closid; 1033 1026 struct rdt_resource *r; 1027 + bool has_cache = false; 1034 1028 struct rdt_domain *d; 1035 1029 1036 1030 for_each_alloc_enabled_rdt_resource(r) { 1031 + if (r->rid == RDT_RESOURCE_MBA) 1032 + continue; 1033 + has_cache = true; 1037 1034 list_for_each_entry(d, &r->domains, list) { 1038 1035 if (rdtgroup_cbm_overlaps(r, d, d->ctrl_val[closid], 1039 - rdtgrp->closid, false)) 1036 + rdtgrp->closid, false)) { 1037 + rdt_last_cmd_puts("schemata overlaps\n"); 1040 1038 return false; 1039 + } 1041 1040 } 1041 + } 1042 + 1043 + if (!has_cache) { 1044 + rdt_last_cmd_puts("cannot be exclusive without CAT/CDP\n"); 1045 + return false; 1042 1046 } 1043 1047 1044 1048 return true; ··· 1103 1085 rdtgrp->mode = RDT_MODE_SHAREABLE; 1104 1086 } else if (!strcmp(buf, "exclusive")) { 1105 1087 if 
(!rdtgroup_mode_test_exclusive(rdtgrp)) { 1106 - rdt_last_cmd_printf("schemata overlaps\n"); 1107 1088 ret = -EINVAL; 1108 1089 goto out; 1109 1090 } ··· 1172 1155 struct rdt_resource *r; 1173 1156 struct rdt_domain *d; 1174 1157 unsigned int size; 1175 - bool sep = false; 1176 - u32 cbm; 1158 + bool sep; 1159 + u32 ctrl; 1177 1160 1178 1161 rdtgrp = rdtgroup_kn_lock_live(of->kn); 1179 1162 if (!rdtgrp) { ··· 1191 1174 } 1192 1175 1193 1176 for_each_alloc_enabled_rdt_resource(r) { 1177 + sep = false; 1194 1178 seq_printf(s, "%*s:", max_name_width, r->name); 1195 1179 list_for_each_entry(d, &r->domains, list) { 1196 1180 if (sep) ··· 1199 1181 if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) { 1200 1182 size = 0; 1201 1183 } else { 1202 - cbm = d->ctrl_val[rdtgrp->closid]; 1203 - size = rdtgroup_cbm_to_size(r, d, cbm); 1184 + ctrl = (!is_mba_sc(r) ? 1185 + d->ctrl_val[rdtgrp->closid] : 1186 + d->mbps_val[rdtgrp->closid]); 1187 + if (r->rid == RDT_RESOURCE_MBA) 1188 + size = ctrl; 1189 + else 1190 + size = rdtgroup_cbm_to_size(r, d, ctrl); 1204 1191 } 1205 1192 seq_printf(s, "%d=%u", d->id, size); 1206 1193 sep = true; ··· 2359 2336 u32 *ctrl; 2360 2337 2361 2338 for_each_alloc_enabled_rdt_resource(r) { 2339 + /* 2340 + * Only initialize default allocations for CBM cache 2341 + * resources 2342 + */ 2343 + if (r->rid == RDT_RESOURCE_MBA) 2344 + continue; 2362 2345 list_for_each_entry(d, &r->domains, list) { 2363 2346 d->have_new_ctrl = false; 2364 2347 d->new_ctrl = r->cache.shareable_bits; 2365 2348 used_b = r->cache.shareable_bits; 2366 2349 ctrl = d->ctrl_val; 2367 - for (i = 0; i < r->num_closid; i++, ctrl++) { 2350 + for (i = 0; i < closids_supported(); i++, ctrl++) { 2368 2351 if (closid_allocated(i) && i != closid) { 2369 2352 mode = rdtgroup_mode_by_closid(i); 2370 2353 if (mode == RDT_MODE_PSEUDO_LOCKSETUP) ··· 2402 2373 } 2403 2374 2404 2375 for_each_alloc_enabled_rdt_resource(r) { 2376 + /* 2377 + * Only initialize default allocations for CBM cache 2378 + * 
resources 2379 + */ 2380 + if (r->rid == RDT_RESOURCE_MBA) 2381 + continue; 2405 2382 ret = update_domains(r, rdtgrp->closid); 2406 2383 if (ret < 0) { 2407 2384 rdt_last_cmd_puts("failed to initialize allocations\n");
+19 -1
arch/x86/kernel/head64.c
··· 35 35 #include <asm/bootparam_utils.h> 36 36 #include <asm/microcode.h> 37 37 #include <asm/kasan.h> 38 + #include <asm/fixmap.h> 38 39 39 40 /* 40 41 * Manage page tables very early on. ··· 113 112 unsigned long __head __startup_64(unsigned long physaddr, 114 113 struct boot_params *bp) 115 114 { 115 + unsigned long vaddr, vaddr_end; 116 116 unsigned long load_delta, *p; 117 117 unsigned long pgtable_flags; 118 118 pgdval_t *pgd; ··· 167 165 pud[511] += load_delta; 168 166 169 167 pmd = fixup_pointer(level2_fixmap_pgt, physaddr); 170 - pmd[506] += load_delta; 168 + for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--) 169 + pmd[i] += load_delta; 171 170 172 171 /* 173 172 * Set up the identity mapping for the switchover. These ··· 236 233 237 234 /* Encrypt the kernel and related (if SME is active) */ 238 235 sme_encrypt_kernel(bp); 236 + 237 + /* 238 + * Clear the memory encryption mask from the .bss..decrypted section. 239 + * The bss section will be memset to zero later in the initialization so 240 + * there is no need to zero it after changing the memory encryption 241 + * attribute. 242 + */ 243 + if (mem_encrypt_active()) { 244 + vaddr = (unsigned long)__start_bss_decrypted; 245 + vaddr_end = (unsigned long)__end_bss_decrypted; 246 + for (; vaddr < vaddr_end; vaddr += PMD_SIZE) { 247 + i = pmd_index(vaddr); 248 + pmd[i] -= sme_get_me_mask(); 249 + } 250 + } 239 251 240 252 /* 241 253 * Return the SME encryption mask (if SME is active) to be used as a
+12 -4
arch/x86/kernel/head_64.S
··· 24 24 #include "../entry/calling.h" 25 25 #include <asm/export.h> 26 26 #include <asm/nospec-branch.h> 27 + #include <asm/fixmap.h> 27 28 28 29 #ifdef CONFIG_PARAVIRT 29 30 #include <asm/asm-offsets.h> ··· 446 445 KERNEL_IMAGE_SIZE/PMD_SIZE) 447 446 448 447 NEXT_PAGE(level2_fixmap_pgt) 449 - .fill 506,8,0 450 - .quad level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC 451 - /* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */ 452 - .fill 5,8,0 448 + .fill (512 - 4 - FIXMAP_PMD_NUM),8,0 449 + pgtno = 0 450 + .rept (FIXMAP_PMD_NUM) 451 + .quad level1_fixmap_pgt + (pgtno << PAGE_SHIFT) - __START_KERNEL_map \ 452 + + _PAGE_TABLE_NOENC; 453 + pgtno = pgtno + 1 454 + .endr 455 + /* 6 MB reserved space + a 2MB hole */ 456 + .fill 4,8,0 453 457 454 458 NEXT_PAGE(level1_fixmap_pgt) 459 + .rept (FIXMAP_PMD_NUM) 455 460 .fill 512,8,0 461 + .endr 456 462 457 463 #undef PMDS 458 464
+49 -3
arch/x86/kernel/kvmclock.c
··· 28 28 #include <linux/sched/clock.h> 29 29 #include <linux/mm.h> 30 30 #include <linux/slab.h> 31 + #include <linux/set_memory.h> 31 32 32 33 #include <asm/hypervisor.h> 33 34 #include <asm/mem_encrypt.h> ··· 62 61 (PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info)) 63 62 64 63 static struct pvclock_vsyscall_time_info 65 - hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __aligned(PAGE_SIZE); 66 - static struct pvclock_wall_clock wall_clock; 64 + hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE); 65 + static struct pvclock_wall_clock wall_clock __bss_decrypted; 67 66 static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu); 67 + static struct pvclock_vsyscall_time_info *hvclock_mem; 68 68 69 69 static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void) 70 70 { ··· 238 236 native_machine_shutdown(); 239 237 } 240 238 239 + static void __init kvmclock_init_mem(void) 240 + { 241 + unsigned long ncpus; 242 + unsigned int order; 243 + struct page *p; 244 + int r; 245 + 246 + if (HVC_BOOT_ARRAY_SIZE >= num_possible_cpus()) 247 + return; 248 + 249 + ncpus = num_possible_cpus() - HVC_BOOT_ARRAY_SIZE; 250 + order = get_order(ncpus * sizeof(*hvclock_mem)); 251 + 252 + p = alloc_pages(GFP_KERNEL, order); 253 + if (!p) { 254 + pr_warn("%s: failed to alloc %d pages", __func__, (1U << order)); 255 + return; 256 + } 257 + 258 + hvclock_mem = page_address(p); 259 + 260 + /* 261 + * hvclock is shared between the guest and the hypervisor, must 262 + * be mapped decrypted. 263 + */ 264 + if (sev_active()) { 265 + r = set_memory_decrypted((unsigned long) hvclock_mem, 266 + 1UL << order); 267 + if (r) { 268 + __free_pages(p, order); 269 + hvclock_mem = NULL; 270 + pr_warn("kvmclock: set_memory_decrypted() failed. 
Disabling\n"); 271 + return; 272 + } 273 + } 274 + 275 + memset(hvclock_mem, 0, PAGE_SIZE << order); 276 + } 277 + 241 278 static int __init kvm_setup_vsyscall_timeinfo(void) 242 279 { 243 280 #ifdef CONFIG_X86_64 ··· 291 250 292 251 kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK; 293 252 #endif 253 + 254 + kvmclock_init_mem(); 255 + 294 256 return 0; 295 257 } 296 258 early_initcall(kvm_setup_vsyscall_timeinfo); ··· 313 269 /* Use the static page for the first CPUs, allocate otherwise */ 314 270 if (cpu < HVC_BOOT_ARRAY_SIZE) 315 271 p = &hv_clock_boot[cpu]; 272 + else if (hvclock_mem) 273 + p = hvclock_mem + cpu - HVC_BOOT_ARRAY_SIZE; 316 274 else 317 - p = kzalloc(sizeof(*p), GFP_KERNEL); 275 + return -ENOMEM; 318 276 319 277 per_cpu(hv_clock_per_cpu, cpu) = p; 320 278 return p ? 0 : -ENOMEM;
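The kvmclock change above sizes a one-shot overflow array for CPUs beyond the static boot array, rounding the allocation up to a power-of-two number of pages with `get_order()`. A minimal userspace sketch of that sizing rule (the `_model` names and 4K page size are illustrative, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE_MODEL 4096UL

/* get_order(): smallest n such that (PAGE_SIZE << n) >= size,
 * i.e. the page-order of the smallest buddy allocation that fits. */
static unsigned int get_order_model(size_t size)
{
    unsigned int order = 0;

    while ((PAGE_SIZE_MODEL << order) < size)
        order++;
    return order;
}
```

Note the patch then clears the whole `PAGE_SIZE << order` region, not just `ncpus * sizeof(*hvclock_mem)`, since the rounded-up tail is also handed to guests once the region is mapped decrypted.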
+2 -2
arch/x86/kernel/paravirt.c
··· 91 91 92 92 if (len < 5) { 93 93 #ifdef CONFIG_RETPOLINE 94 - WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr); 94 + WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr); 95 95 #endif 96 96 return len; /* call too long for patch site */ 97 97 } ··· 111 111 112 112 if (len < 5) { 113 113 #ifdef CONFIG_RETPOLINE 114 - WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr); 114 + WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr); 115 115 #endif 116 116 return len; /* call too long for patch site */ 117 117 }
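The paravirt.c fix above matters because `WARN_ONCE()` takes the trigger condition as its first argument and the printf-style message after it. A simplified userspace model (assumption: this mimics only the argument ordering, not the kernel's real once-bookkeeping) shows why passing the format string first is wrong — a string literal is always "true", and the varargs shift so `%ps` would consume the wrong argument:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

static int warn_once_fired;

/* Model of WARN_ONCE(cond, fmt, ...): warn (at most once) and return
 * whether the condition was true. */
static int warn_once_model(int condition, const char *fmt, ...)
{
    va_list ap;

    if (!condition)
        return 0;
    if (!warn_once_fired) {
        warn_once_fired = 1;
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
    }
    return 1;
}
```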
+19
arch/x86/kernel/vmlinux.lds.S
··· 65 65 #define ALIGN_ENTRY_TEXT_BEGIN . = ALIGN(PMD_SIZE); 66 66 #define ALIGN_ENTRY_TEXT_END . = ALIGN(PMD_SIZE); 67 67 68 + /* 69 + * This section contains data which will be mapped as decrypted. Memory 70 + * encryption operates on a page basis. Make this section PMD-aligned 71 + * to avoid splitting the pages while mapping the section early. 72 + * 73 + * Note: We use a separate section so that only this section gets 74 + * decrypted to avoid exposing more than we wish. 75 + */ 76 + #define BSS_DECRYPTED \ 77 + . = ALIGN(PMD_SIZE); \ 78 + __start_bss_decrypted = .; \ 79 + *(.bss..decrypted); \ 80 + . = ALIGN(PAGE_SIZE); \ 81 + __start_bss_decrypted_unused = .; \ 82 + . = ALIGN(PMD_SIZE); \ 83 + __end_bss_decrypted = .; \ 84 + 68 85 #else 69 86 70 87 #define X86_ALIGN_RODATA_BEGIN ··· 91 74 92 75 #define ALIGN_ENTRY_TEXT_BEGIN 93 76 #define ALIGN_ENTRY_TEXT_END 77 + #define BSS_DECRYPTED 94 78 95 79 #endif 96 80 ··· 373 355 __bss_start = .; 374 356 *(.bss..page_aligned) 375 357 *(.bss) 358 + BSS_DECRYPTED 376 359 . = ALIGN(PAGE_SIZE); 377 360 __bss_stop = .; 378 361 }
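The linker-script hunk PMD-aligns `.bss..decrypted` so the early encryption-mask clearing in `__startup_64()` can flip whole 2MB mappings without splitting pages. A sketch of the alignment rule being relied on (2MB PMD size assumed, as on x86-64 with 4K pages; `_model` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define PMD_SIZE_MODEL (2UL * 1024 * 1024)

/* Round addr up to the next multiple of align (align must be a
 * power of two) -- what ". = ALIGN(PMD_SIZE);" does in the script. */
static uint64_t align_up_model(uint64_t addr, uint64_t align)
{
    return (addr + align - 1) & ~(align - 1);
}
```

Because both `__start_bss_decrypted` and `__end_bss_decrypted` land on such boundaries, every byte in between is covered by full PMD entries that can be stepped over with `vaddr += PMD_SIZE`, as the head64.c hunk above does.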
+19 -3
arch/x86/kvm/lapic.c
··· 1344 1344 1345 1345 static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr) 1346 1346 { 1347 - return kvm_apic_hw_enabled(apic) && 1348 - addr >= apic->base_address && 1349 - addr < apic->base_address + LAPIC_MMIO_LENGTH; 1347 + return addr >= apic->base_address && 1348 + addr < apic->base_address + LAPIC_MMIO_LENGTH; 1350 1349 } 1351 1350 1352 1351 static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this, ··· 1356 1357 1357 1358 if (!apic_mmio_in_range(apic, address)) 1358 1359 return -EOPNOTSUPP; 1360 + 1361 + if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) { 1362 + if (!kvm_check_has_quirk(vcpu->kvm, 1363 + KVM_X86_QUIRK_LAPIC_MMIO_HOLE)) 1364 + return -EOPNOTSUPP; 1365 + 1366 + memset(data, 0xff, len); 1367 + return 0; 1368 + } 1359 1369 1360 1370 kvm_lapic_reg_read(apic, offset, len, data); 1361 1371 ··· 1924 1916 1925 1917 if (!apic_mmio_in_range(apic, address)) 1926 1918 return -EOPNOTSUPP; 1919 + 1920 + if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) { 1921 + if (!kvm_check_has_quirk(vcpu->kvm, 1922 + KVM_X86_QUIRK_LAPIC_MMIO_HOLE)) 1923 + return -EOPNOTSUPP; 1924 + 1925 + return 0; 1926 + } 1927 1927 1928 1928 /* 1929 1929 * APIC register must be aligned on 128-bits boundary.
+7 -2
arch/x86/kvm/mmu.c
··· 899 899 { 900 900 /* 901 901 * Make sure the write to vcpu->mode is not reordered in front of 902 - * reads to sptes. If it does, kvm_commit_zap_page() can see us 902 + * reads to sptes. If it does, kvm_mmu_commit_zap_page() can see us 903 903 * OUTSIDE_GUEST_MODE and proceed to free the shadow page table. 904 904 */ 905 905 smp_store_release(&vcpu->mode, OUTSIDE_GUEST_MODE); ··· 5417 5417 { 5418 5418 MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu.root_hpa)); 5419 5419 5420 - kvm_init_mmu(vcpu, true); 5420 + /* 5421 + * kvm_mmu_setup() is called only on vCPU initialization. 5422 + * Therefore, no need to reset mmu roots as they are not yet 5423 + * initialized. 5424 + */ 5425 + kvm_init_mmu(vcpu, false); 5421 5426 } 5422 5427 5423 5428 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
+4 -3
arch/x86/kvm/svm.c
··· 1226 1226 min_sev_asid = cpuid_edx(0x8000001F); 1227 1227 1228 1228 /* Initialize SEV ASID bitmap */ 1229 - sev_asid_bitmap = kcalloc(BITS_TO_LONGS(max_sev_asid), 1230 - sizeof(unsigned long), GFP_KERNEL); 1229 + sev_asid_bitmap = bitmap_zalloc(max_sev_asid, GFP_KERNEL); 1231 1230 if (!sev_asid_bitmap) 1232 1231 return 1; 1233 1232 ··· 1404 1405 int cpu; 1405 1406 1406 1407 if (svm_sev_enabled()) 1407 - kfree(sev_asid_bitmap); 1408 + bitmap_free(sev_asid_bitmap); 1408 1409 1409 1410 for_each_possible_cpu(cpu) 1410 1411 svm_cpu_uninit(cpu); ··· 7147 7148 7148 7149 .check_intercept = svm_check_intercept, 7149 7150 .handle_external_intr = svm_handle_external_intr, 7151 + 7152 + .request_immediate_exit = __kvm_request_immediate_exit, 7150 7153 7151 7154 .sched_in = svm_sched_in, 7152 7155
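The svm.c change swaps an open-coded `kcalloc(BITS_TO_LONGS(n), sizeof(unsigned long), ...)` for `bitmap_zalloc()`, which takes the bit count directly and pairs with `bitmap_free()`. A userspace sketch of the equivalence (assumption: `calloc` stands in for the zeroing allocator; this is a model, not the kernel API):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

#define BITS_PER_LONG_MODEL (CHAR_BIT * sizeof(unsigned long))
/* Number of longs needed to hold nbits bits, rounding up. */
#define BITS_TO_LONGS_MODEL(nbits) \
    (((nbits) + BITS_PER_LONG_MODEL - 1) / BITS_PER_LONG_MODEL)

/* bitmap_zalloc(nbits): zeroed storage for an nbits-wide bitmap. */
static unsigned long *bitmap_zalloc_model(unsigned int nbits)
{
    return calloc(BITS_TO_LONGS_MODEL(nbits), sizeof(unsigned long));
}
```

The helper also documents intent: the caller reasons in ASIDs (bits), not in backing longs.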
+100 -38
arch/x86/kvm/vmx.c
··· 397 397 int cpu; 398 398 bool launched; 399 399 bool nmi_known_unmasked; 400 + bool hv_timer_armed; 400 401 /* Support for vnmi-less CPUs */ 401 402 int soft_vnmi_blocked; 402 403 ktime_t entry_time; ··· 1019 1018 /* Dynamic PLE window. */ 1020 1019 int ple_window; 1021 1020 bool ple_window_dirty; 1021 + 1022 + bool req_immediate_exit; 1022 1023 1023 1024 /* Support for PML */ 1024 1025 #define PML_ENTITY_NUM 512 ··· 2866 2863 unsigned long fs_base, gs_base; 2867 2864 u16 fs_sel, gs_sel; 2868 2865 int i; 2866 + 2867 + vmx->req_immediate_exit = false; 2869 2868 2870 2869 if (vmx->loaded_cpu_state) 2871 2870 return; ··· 5398 5393 * To use VMXON (and later other VMX instructions), a guest 5399 5394 * must first be able to turn on cr4.VMXE (see handle_vmon()). 5400 5395 * So basically the check on whether to allow nested VMX 5401 - * is here. 5396 + * is here. We operate under the default treatment of SMM, 5397 + * so VMX cannot be enabled under SMM. 5402 5398 */ 5403 - if (!nested_vmx_allowed(vcpu)) 5399 + if (!nested_vmx_allowed(vcpu) || is_smm(vcpu)) 5404 5400 return 1; 5405 5401 } 5406 5402 ··· 6187 6181 } 6188 6182 6189 6183 nested_mark_vmcs12_pages_dirty(vcpu); 6184 + } 6185 + 6186 + static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) 6187 + { 6188 + struct vcpu_vmx *vmx = to_vmx(vcpu); 6189 + void *vapic_page; 6190 + u32 vppr; 6191 + int rvi; 6192 + 6193 + if (WARN_ON_ONCE(!is_guest_mode(vcpu)) || 6194 + !nested_cpu_has_vid(get_vmcs12(vcpu)) || 6195 + WARN_ON_ONCE(!vmx->nested.virtual_apic_page)) 6196 + return false; 6197 + 6198 + rvi = vmcs_read16(GUEST_INTR_STATUS) & 0xff; 6199 + 6200 + vapic_page = kmap(vmx->nested.virtual_apic_page); 6201 + vppr = *((u32 *)(vapic_page + APIC_PROCPRI)); 6202 + kunmap(vmx->nested.virtual_apic_page); 6203 + 6204 + return ((rvi & 0xf0) > (vppr & 0xf0)); 6190 6205 } 6191 6206 6192 6207 static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu, ··· 7993 7966 kvm_x86_ops->enable_log_dirty_pt_masked 
= NULL; 7994 7967 } 7995 7968 7969 + if (!cpu_has_vmx_preemption_timer()) 7970 + kvm_x86_ops->request_immediate_exit = __kvm_request_immediate_exit; 7971 + 7996 7972 if (cpu_has_vmx_preemption_timer() && enable_preemption_timer) { 7997 7973 u64 vmx_msr; 7998 7974 ··· 9238 9208 9239 9209 static int handle_preemption_timer(struct kvm_vcpu *vcpu) 9240 9210 { 9241 - kvm_lapic_expired_hv_timer(vcpu); 9211 + if (!to_vmx(vcpu)->req_immediate_exit) 9212 + kvm_lapic_expired_hv_timer(vcpu); 9242 9213 return 1; 9243 9214 } 9244 9215 ··· 10626 10595 msrs[i].host, false); 10627 10596 } 10628 10597 10629 - static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu) 10598 + static void vmx_arm_hv_timer(struct vcpu_vmx *vmx, u32 val) 10599 + { 10600 + vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, val); 10601 + if (!vmx->loaded_vmcs->hv_timer_armed) 10602 + vmcs_set_bits(PIN_BASED_VM_EXEC_CONTROL, 10603 + PIN_BASED_VMX_PREEMPTION_TIMER); 10604 + vmx->loaded_vmcs->hv_timer_armed = true; 10605 + } 10606 + 10607 + static void vmx_update_hv_timer(struct kvm_vcpu *vcpu) 10630 10608 { 10631 10609 struct vcpu_vmx *vmx = to_vmx(vcpu); 10632 10610 u64 tscl; 10633 10611 u32 delta_tsc; 10634 10612 10635 - if (vmx->hv_deadline_tsc == -1) 10613 + if (vmx->req_immediate_exit) { 10614 + vmx_arm_hv_timer(vmx, 0); 10636 10615 return; 10616 + } 10637 10617 10638 - tscl = rdtsc(); 10639 - if (vmx->hv_deadline_tsc > tscl) 10640 - /* sure to be 32 bit only because checked on set_hv_timer */ 10641 - delta_tsc = (u32)((vmx->hv_deadline_tsc - tscl) >> 10642 - cpu_preemption_timer_multi); 10643 - else 10644 - delta_tsc = 0; 10618 + if (vmx->hv_deadline_tsc != -1) { 10619 + tscl = rdtsc(); 10620 + if (vmx->hv_deadline_tsc > tscl) 10621 + /* set_hv_timer ensures the delta fits in 32-bits */ 10622 + delta_tsc = (u32)((vmx->hv_deadline_tsc - tscl) >> 10623 + cpu_preemption_timer_multi); 10624 + else 10625 + delta_tsc = 0; 10645 10626 10646 - vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, delta_tsc); 10627 + vmx_arm_hv_timer(vmx, 
delta_tsc); 10628 + return; 10629 + } 10630 + 10631 + if (vmx->loaded_vmcs->hv_timer_armed) 10632 + vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL, 10633 + PIN_BASED_VMX_PREEMPTION_TIMER); 10634 + vmx->loaded_vmcs->hv_timer_armed = false; 10647 10635 } 10648 10636 10649 10637 static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) ··· 10722 10672 10723 10673 atomic_switch_perf_msrs(vmx); 10724 10674 10725 - vmx_arm_hv_timer(vcpu); 10675 + vmx_update_hv_timer(vcpu); 10726 10676 10727 10677 /* 10728 10678 * If this vCPU has touched SPEC_CTRL, restore the guest's value if ··· 11477 11427 u64 preemption_timeout = get_vmcs12(vcpu)->vmx_preemption_timer_value; 11478 11428 struct vcpu_vmx *vmx = to_vmx(vcpu); 11479 11429 11480 - if (vcpu->arch.virtual_tsc_khz == 0) 11481 - return; 11482 - 11483 - /* Make sure short timeouts reliably trigger an immediate vmexit. 11484 - * hrtimer_start does not guarantee this. */ 11485 - if (preemption_timeout <= 1) { 11430 + /* 11431 + * A timer value of zero is architecturally guaranteed to cause 11432 + * a VMExit prior to executing any instructions in the guest. 11433 + */ 11434 + if (preemption_timeout == 0) { 11486 11435 vmx_preemption_timer_fn(&vmx->nested.preemption_timer); 11487 11436 return; 11488 11437 } 11438 + 11439 + if (vcpu->arch.virtual_tsc_khz == 0) 11440 + return; 11489 11441 11490 11442 preemption_timeout <<= VMX_MISC_EMULATED_PREEMPTION_TIMER_RATE; 11491 11443 preemption_timeout *= 1000000; ··· 11698 11646 * bits 15:8 should be zero in posted_intr_nv, 11699 11647 * the descriptor address has been already checked 11700 11648 * in nested_get_vmcs12_pages. 11649 + * 11650 + * bits 5:0 of posted_intr_desc_addr should be zero. 
11701 11651 */ 11702 11652 if (nested_cpu_has_posted_intr(vmcs12) && 11703 11653 (!nested_cpu_has_vid(vmcs12) || 11704 11654 !nested_exit_intr_ack_set(vcpu) || 11705 - vmcs12->posted_intr_nv & 0xff00)) 11655 + (vmcs12->posted_intr_nv & 0xff00) || 11656 + (vmcs12->posted_intr_desc_addr & 0x3f) || 11657 + (!page_address_valid(vcpu, vmcs12->posted_intr_desc_addr)))) 11706 11658 return -EINVAL; 11707 11659 11708 11660 /* tpr shadow is needed by all apicv features. */ ··· 12132 12076 12133 12077 exec_control = vmcs12->pin_based_vm_exec_control; 12134 12078 12135 - /* Preemption timer setting is only taken from vmcs01. */ 12136 - exec_control &= ~PIN_BASED_VMX_PREEMPTION_TIMER; 12079 + /* Preemption timer setting is computed directly in vmx_vcpu_run. */ 12137 12080 exec_control |= vmcs_config.pin_based_exec_ctrl; 12138 - if (vmx->hv_deadline_tsc == -1) 12139 - exec_control &= ~PIN_BASED_VMX_PREEMPTION_TIMER; 12081 + exec_control &= ~PIN_BASED_VMX_PREEMPTION_TIMER; 12082 + vmx->loaded_vmcs->hv_timer_armed = false; 12140 12083 12141 12084 /* Posted interrupts setting is only taken from vmcs12. 
*/ 12142 12085 if (nested_cpu_has_posted_intr(vmcs12)) { ··· 12371 12316 12372 12317 if (vmcs12->guest_activity_state != GUEST_ACTIVITY_ACTIVE && 12373 12318 vmcs12->guest_activity_state != GUEST_ACTIVITY_HLT) 12319 + return VMXERR_ENTRY_INVALID_CONTROL_FIELD; 12320 + 12321 + if (nested_cpu_has_vpid(vmcs12) && !vmcs12->virtual_processor_id) 12374 12322 return VMXERR_ENTRY_INVALID_CONTROL_FIELD; 12375 12323 12376 12324 if (nested_vmx_check_io_bitmap_controls(vcpu, vmcs12)) ··· 12921 12863 return 0; 12922 12864 } 12923 12865 12866 + static void vmx_request_immediate_exit(struct kvm_vcpu *vcpu) 12867 + { 12868 + to_vmx(vcpu)->req_immediate_exit = true; 12869 + } 12870 + 12924 12871 static u32 vmx_get_preemption_timer_value(struct kvm_vcpu *vcpu) 12925 12872 { 12926 12873 ktime_t remaining = ··· 13316 13253 vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr); 13317 13254 vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr); 13318 13255 vmcs_write64(TSC_OFFSET, vcpu->arch.tsc_offset); 13319 - if (vmx->hv_deadline_tsc == -1) 13320 - vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL, 13321 - PIN_BASED_VMX_PREEMPTION_TIMER); 13322 - else 13323 - vmcs_set_bits(PIN_BASED_VM_EXEC_CONTROL, 13324 - PIN_BASED_VMX_PREEMPTION_TIMER); 13256 + 13325 13257 if (kvm_has_tsc_control) 13326 13258 decache_tsc_multiplier(vmx); 13327 13259 ··· 13520 13462 return -ERANGE; 13521 13463 13522 13464 vmx->hv_deadline_tsc = tscl + delta_tsc; 13523 - vmcs_set_bits(PIN_BASED_VM_EXEC_CONTROL, 13524 - PIN_BASED_VMX_PREEMPTION_TIMER); 13525 - 13526 13465 return delta_tsc == 0; 13527 13466 } 13528 13467 13529 13468 static void vmx_cancel_hv_timer(struct kvm_vcpu *vcpu) 13530 13469 { 13531 - struct vcpu_vmx *vmx = to_vmx(vcpu); 13532 - vmx->hv_deadline_tsc = -1; 13533 - vmcs_clear_bits(PIN_BASED_VM_EXEC_CONTROL, 13534 - PIN_BASED_VMX_PREEMPTION_TIMER); 13470 + to_vmx(vcpu)->hv_deadline_tsc = -1; 13535 13471 } 13536 13472 #endif 13537 13473 ··· 14006 13954 
~(KVM_STATE_NESTED_SMM_GUEST_MODE | KVM_STATE_NESTED_SMM_VMXON)) 14007 13955 return -EINVAL; 14008 13956 13957 + /* 13958 + * SMM temporarily disables VMX, so we cannot be in guest mode, 13959 + * nor can VMLAUNCH/VMRESUME be pending. Outside SMM, SMM flags 13960 + * must be zero. 13961 + */ 13962 + if (is_smm(vcpu) ? kvm_state->flags : kvm_state->vmx.smm.flags) 13963 + return -EINVAL; 13964 + 14009 13965 if ((kvm_state->vmx.smm.flags & KVM_STATE_NESTED_SMM_GUEST_MODE) && 14010 13966 !(kvm_state->vmx.smm.flags & KVM_STATE_NESTED_SMM_VMXON)) 14011 13967 return -EINVAL; ··· 14157 14097 .apicv_post_state_restore = vmx_apicv_post_state_restore, 14158 14098 .hwapic_irr_update = vmx_hwapic_irr_update, 14159 14099 .hwapic_isr_update = vmx_hwapic_isr_update, 14100 + .guest_apic_has_interrupt = vmx_guest_apic_has_interrupt, 14160 14101 .sync_pir_to_irr = vmx_sync_pir_to_irr, 14161 14102 .deliver_posted_interrupt = vmx_deliver_posted_interrupt, 14162 14103 ··· 14191 14130 .umip_emulated = vmx_umip_emulated, 14192 14131 14193 14132 .check_nested_events = vmx_check_nested_events, 14133 + .request_immediate_exit = vmx_request_immediate_exit, 14194 14134 14195 14135 .sched_in = vmx_sched_in, 14196 14136
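The vmx.c rework above routes "request immediate exit" through the VMX preemption timer: an explicit request arms the timer with 0 (architecturally guaranteed to VM-exit before any guest instruction), otherwise the remaining TSC delta is programmed, clamped at 0 if the deadline already passed. A sketch of that value computation (the shift constant stands in for `cpu_preemption_timer_multi`; `_model` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TIMER_MULTI_MODEL 5  /* hypothetical TSC-to-timer shift */

/* Value written to VMX_PREEMPTION_TIMER_VALUE in vmx_update_hv_timer(). */
static uint32_t hv_timer_value_model(int req_immediate_exit,
                                     uint64_t deadline_tsc, uint64_t tscl)
{
    if (req_immediate_exit)
        return 0;            /* fire before the first guest instruction */
    if (deadline_tsc > tscl) /* set_hv_timer ensures this fits 32 bits */
        return (uint32_t)((deadline_tsc - tscl) >> TIMER_MULTI_MODEL);
    return 0;                /* deadline already passed */
}
```

This replaces the old `smp_send_reschedule()`-based approach on CPUs that have a preemption timer, while `__kvm_request_immediate_exit()` remains the fallback.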
+65 -36
arch/x86/kvm/x86.c
··· 628 628 gfn_t gfn; 629 629 int r; 630 630 631 - if (is_long_mode(vcpu) || !is_pae(vcpu)) 631 + if (is_long_mode(vcpu) || !is_pae(vcpu) || !is_paging(vcpu)) 632 632 return false; 633 633 634 634 if (!test_bit(VCPU_EXREG_PDPTR, ··· 2537 2537 break; 2538 2538 case MSR_PLATFORM_INFO: 2539 2539 if (!msr_info->host_initiated || 2540 - data & ~MSR_PLATFORM_INFO_CPUID_FAULT || 2541 2540 (!(data & MSR_PLATFORM_INFO_CPUID_FAULT) && 2542 2541 cpuid_fault_enabled(vcpu))) 2543 2542 return 1; ··· 2779 2780 msr_info->data = vcpu->arch.osvw.status; 2780 2781 break; 2781 2782 case MSR_PLATFORM_INFO: 2783 + if (!msr_info->host_initiated && 2784 + !vcpu->kvm->arch.guest_can_read_msr_platform_info) 2785 + return 1; 2782 2786 msr_info->data = vcpu->arch.msr_platform_info; 2783 2787 break; 2784 2788 case MSR_MISC_FEATURES_ENABLES: ··· 2929 2927 case KVM_CAP_SPLIT_IRQCHIP: 2930 2928 case KVM_CAP_IMMEDIATE_EXIT: 2931 2929 case KVM_CAP_GET_MSR_FEATURES: 2930 + case KVM_CAP_MSR_PLATFORM_INFO: 2932 2931 r = 1; 2933 2932 break; 2934 2933 case KVM_CAP_SYNC_REGS: ··· 4010 4007 break; 4011 4008 4012 4009 BUILD_BUG_ON(sizeof(user_data_size) != sizeof(user_kvm_nested_state->size)); 4010 + r = -EFAULT; 4013 4011 if (get_user(user_data_size, &user_kvm_nested_state->size)) 4014 - return -EFAULT; 4012 + break; 4015 4013 4016 4014 r = kvm_x86_ops->get_nested_state(vcpu, user_kvm_nested_state, 4017 4015 user_data_size); 4018 4016 if (r < 0) 4019 - return r; 4017 + break; 4020 4018 4021 4019 if (r > user_data_size) { 4022 4020 if (put_user(r, &user_kvm_nested_state->size)) 4023 - return -EFAULT; 4024 - return -E2BIG; 4021 + r = -EFAULT; 4022 + else 4023 + r = -E2BIG; 4024 + break; 4025 4025 } 4026 + 4026 4027 r = 0; 4027 4028 break; 4028 4029 } ··· 4038 4031 if (!kvm_x86_ops->set_nested_state) 4039 4032 break; 4040 4033 4034 + r = -EFAULT; 4041 4035 if (copy_from_user(&kvm_state, user_kvm_nested_state, sizeof(kvm_state))) 4042 - return -EFAULT; 4036 + break; 4043 4037 4038 + r = -EINVAL; 4044 4039 if 
(kvm_state.size < sizeof(kvm_state)) 4045 - return -EINVAL; 4040 + break; 4046 4041 4047 4042 if (kvm_state.flags & 4048 4043 ~(KVM_STATE_NESTED_RUN_PENDING | KVM_STATE_NESTED_GUEST_MODE)) 4049 - return -EINVAL; 4044 + break; 4050 4045 4051 4046 /* nested_run_pending implies guest_mode. */ 4052 4047 if (kvm_state.flags == KVM_STATE_NESTED_RUN_PENDING) 4053 - return -EINVAL; 4048 + break; 4054 4049 4055 4050 r = kvm_x86_ops->set_nested_state(vcpu, user_kvm_nested_state, &kvm_state); 4056 4051 break; ··· 4357 4348 kvm->arch.hlt_in_guest = true; 4358 4349 if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE) 4359 4350 kvm->arch.pause_in_guest = true; 4351 + r = 0; 4352 + break; 4353 + case KVM_CAP_MSR_PLATFORM_INFO: 4354 + kvm->arch.guest_can_read_msr_platform_info = cap->args[0]; 4360 4355 r = 0; 4361 4356 break; 4362 4357 default: ··· 7374 7361 } 7375 7362 EXPORT_SYMBOL_GPL(kvm_vcpu_reload_apic_access_page); 7376 7363 7364 + void __kvm_request_immediate_exit(struct kvm_vcpu *vcpu) 7365 + { 7366 + smp_send_reschedule(vcpu->cpu); 7367 + } 7368 + EXPORT_SYMBOL_GPL(__kvm_request_immediate_exit); 7369 + 7377 7370 /* 7378 7371 * Returns 1 to let vcpu_run() continue the guest execution loop without 7379 7372 * exiting to the userspace. Otherwise, the value will be returned to the ··· 7584 7565 7585 7566 if (req_immediate_exit) { 7586 7567 kvm_make_request(KVM_REQ_EVENT, vcpu); 7587 - smp_send_reschedule(vcpu->cpu); 7568 + kvm_x86_ops->request_immediate_exit(vcpu); 7588 7569 } 7589 7570 7590 7571 trace_kvm_entry(vcpu->vcpu_id); ··· 7846 7827 run->mmio.is_write = vcpu->mmio_is_write; 7847 7828 vcpu->arch.complete_userspace_io = complete_emulated_mmio; 7848 7829 return 0; 7830 + } 7831 + 7832 + /* Swap (qemu) user FPU context for the guest FPU context. */ 7833 + static void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) 7834 + { 7835 + preempt_disable(); 7836 + copy_fpregs_to_fpstate(&vcpu->arch.user_fpu); 7837 + /* PKRU is separately restored in kvm_x86_ops->run. 
*/ 7838 + __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state, 7839 + ~XFEATURE_MASK_PKRU); 7840 + preempt_enable(); 7841 + trace_kvm_fpu(1); 7842 + } 7843 + 7844 + /* When vcpu_run ends, restore user space FPU context. */ 7845 + static void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) 7846 + { 7847 + preempt_disable(); 7848 + copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu); 7849 + copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state); 7850 + preempt_enable(); 7851 + ++vcpu->stat.fpu_reload; 7852 + trace_kvm_fpu(0); 7849 7853 } 7850 7854 7851 7855 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run) ··· 8219 8177 kvm_update_cpuid(vcpu); 8220 8178 8221 8179 idx = srcu_read_lock(&vcpu->kvm->srcu); 8222 - if (!is_long_mode(vcpu) && is_pae(vcpu)) { 8180 + if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu)) { 8223 8181 load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)); 8224 8182 mmu_reset_needed = 1; 8225 8183 } ··· 8446 8404 vcpu->arch.xcr0 = XFEATURE_MASK_FP; 8447 8405 8448 8406 vcpu->arch.cr0 |= X86_CR0_ET; 8449 - } 8450 - 8451 - /* Swap (qemu) user FPU context for the guest FPU context. */ 8452 - void kvm_load_guest_fpu(struct kvm_vcpu *vcpu) 8453 - { 8454 - preempt_disable(); 8455 - copy_fpregs_to_fpstate(&vcpu->arch.user_fpu); 8456 - /* PKRU is separately restored in kvm_x86_ops->run. */ 8457 - __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state, 8458 - ~XFEATURE_MASK_PKRU); 8459 - preempt_enable(); 8460 - trace_kvm_fpu(1); 8461 - } 8462 - 8463 - /* When vcpu_run ends, restore user space FPU context. 
*/ 8464 - void kvm_put_guest_fpu(struct kvm_vcpu *vcpu) 8465 - { 8466 - preempt_disable(); 8467 - copy_fpregs_to_fpstate(&vcpu->arch.guest_fpu); 8468 - copy_kernel_to_fpregs(&vcpu->arch.user_fpu.state); 8469 - preempt_enable(); 8470 - ++vcpu->stat.fpu_reload; 8471 - trace_kvm_fpu(0); 8472 8407 } 8473 8408 8474 8409 void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu) ··· 8871 8852 kvm->arch.kvmclock_offset = -ktime_get_boot_ns(); 8872 8853 pvclock_update_vm_gtod_copy(kvm); 8873 8854 8855 + kvm->arch.guest_can_read_msr_platform_info = true; 8856 + 8874 8857 INIT_DELAYED_WORK(&kvm->arch.kvmclock_update_work, kvmclock_update_fn); 8875 8858 INIT_DELAYED_WORK(&kvm->arch.kvmclock_sync_work, kvmclock_sync_fn); 8876 8859 ··· 9221 9200 kvm_page_track_flush_slot(kvm, slot); 9222 9201 } 9223 9202 9203 + static inline bool kvm_guest_apic_has_interrupt(struct kvm_vcpu *vcpu) 9204 + { 9205 + return (is_guest_mode(vcpu) && 9206 + kvm_x86_ops->guest_apic_has_interrupt && 9207 + kvm_x86_ops->guest_apic_has_interrupt(vcpu)); 9208 + } 9209 + 9224 9210 static inline bool kvm_vcpu_has_events(struct kvm_vcpu *vcpu) 9225 9211 { 9226 9212 if (!list_empty_careful(&vcpu->async_pf.done)) ··· 9252 9224 return true; 9253 9225 9254 9226 if (kvm_arch_interrupt_allowed(vcpu) && 9255 - kvm_cpu_has_interrupt(vcpu)) 9227 + (kvm_cpu_has_interrupt(vcpu) || 9228 + kvm_guest_apic_has_interrupt(vcpu))) 9256 9229 return true; 9257 9230 9258 9231 if (kvm_hv_has_stimer_pending(vcpu))
+4
arch/x86/mm/init.c
··· 815 815 set_memory_np_noalias(begin_ul, len_pages);
816 816 }
817 817
818 + void __weak mem_encrypt_free_decrypted_mem(void) { }
819 +
818 820 void __ref free_initmem(void)
819 821 {
820 822 e820__reallocate_tables();
823 +
824 + mem_encrypt_free_decrypted_mem();
821 825
822 826 free_kernel_image_pages(&__init_begin, &__init_end);
823 827 }
+24
arch/x86/mm/mem_encrypt.c
··· 348 348 EXPORT_SYMBOL(sev_active); 349 349 350 350 /* Architecture __weak replacement functions */ 351 + void __init mem_encrypt_free_decrypted_mem(void) 352 + { 353 + unsigned long vaddr, vaddr_end, npages; 354 + int r; 355 + 356 + vaddr = (unsigned long)__start_bss_decrypted_unused; 357 + vaddr_end = (unsigned long)__end_bss_decrypted; 358 + npages = (vaddr_end - vaddr) >> PAGE_SHIFT; 359 + 360 + /* 361 + * The unused memory range was mapped decrypted, change the encryption 362 + * attribute from decrypted to encrypted before freeing it. 363 + */ 364 + if (mem_encrypt_active()) { 365 + r = set_memory_encrypted(vaddr, npages); 366 + if (r) { 367 + pr_warn("failed to free unused decrypted pages\n"); 368 + return; 369 + } 370 + } 371 + 372 + free_init_pages("unused decrypted", vaddr, vaddr_end); 373 + } 374 + 351 375 void __init mem_encrypt_init(void) 352 376 { 353 377 if (!sme_me_mask)
+9
arch/x86/mm/pgtable.c
··· 637 637 {
638 638 unsigned long address = __fix_to_virt(idx);
639 639
640 + #ifdef CONFIG_X86_64
641 + /*
642 + * Ensure that the static initial page tables are covering the
643 + * fixmap completely.
644 + */
645 + BUILD_BUG_ON(__end_of_permanent_fixed_addresses >
646 + (FIXMAP_PMD_NUM * PTRS_PER_PTE));
647 + #endif
648 +
640 649 if (idx >= __end_of_fixed_addresses) {
641 650 BUG();
642 651 return;
+6 -2
arch/x86/xen/mmu_pv.c
··· 1907 1907 /* L3_k[511] -> level2_fixmap_pgt */ 1908 1908 convert_pfn_mfn(level3_kernel_pgt); 1909 1909 1910 - /* L3_k[511][506] -> level1_fixmap_pgt */ 1910 + /* L3_k[511][508-FIXMAP_PMD_NUM ... 507] -> level1_fixmap_pgt */ 1911 1911 convert_pfn_mfn(level2_fixmap_pgt); 1912 1912 1913 1913 /* We get [511][511] and have Xen's version of level2_kernel_pgt */ ··· 1952 1952 set_page_prot(level2_ident_pgt, PAGE_KERNEL_RO); 1953 1953 set_page_prot(level2_kernel_pgt, PAGE_KERNEL_RO); 1954 1954 set_page_prot(level2_fixmap_pgt, PAGE_KERNEL_RO); 1955 - set_page_prot(level1_fixmap_pgt, PAGE_KERNEL_RO); 1955 + 1956 + for (i = 0; i < FIXMAP_PMD_NUM; i++) { 1957 + set_page_prot(level1_fixmap_pgt + i * PTRS_PER_PTE, 1958 + PAGE_KERNEL_RO); 1959 + } 1956 1960 1957 1961 /* Pin down new L4 */ 1958 1962 pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
+1 -1
arch/x86/xen/pmu.c
··· 478 478 irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id)
479 479 {
480 480 int err, ret = IRQ_NONE;
481 - struct pt_regs regs;
481 + struct pt_regs regs = {0};
482 482 const struct xen_pmu_data *xenpmu_data = get_xenpmu_data();
483 483 uint8_t xenpmu_flags = get_xenpmu_flags();
484 484
+1 -1
block/bio.c
··· 1684 1684 const int sgrp = op_stat_group(req_op);
1685 1685 int cpu = part_stat_lock();
1686 1686
1687 - part_stat_add(cpu, part, ticks[sgrp], duration);
1687 + part_stat_add(cpu, part, nsecs[sgrp], jiffies_to_nsecs(duration));
1688 1688
1689 1689 part_round_stats(q, cpu, part);
1690 1690 part_dec_in_flight(q, part, op_is_write(req_op));
+1 -3
block/blk-core.c
··· 2733 2733 * containing request is enough.
2734 2734 */
2735 2735 if (blk_do_io_stat(req) && !(req->rq_flags & RQF_FLUSH_SEQ)) {
2736 - unsigned long duration;
2737 2736 const int sgrp = op_stat_group(req_op(req));
2738 2737 struct hd_struct *part;
2739 2738 int cpu;
2740 2739
2741 - duration = nsecs_to_jiffies(now - req->start_time_ns);
2742 2740 cpu = part_stat_lock();
2743 2741 part = req->part;
2744 2742
2745 2743 part_stat_inc(cpu, part, ios[sgrp]);
2746 - part_stat_add(cpu, part, ticks[sgrp], duration);
2744 + part_stat_add(cpu, part, nsecs[sgrp], now - req->start_time_ns);
2747 2745 part_round_stats(req->q, cpu, part);
2748 2746 part_dec_in_flight(req->q, part, rq_data_dir(req));
2749 2747
+4 -9
block/blk-mq-tag.c
··· 322 322 323 323 /* 324 324 * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and 325 - * queue_hw_ctx after freeze the queue. So we could use q_usage_counter 326 - * to avoid race with it. __blk_mq_update_nr_hw_queues will users 327 - * synchronize_rcu to ensure all of the users go out of the critical 328 - * section below and see zeroed q_usage_counter. 325 + * queue_hw_ctx after freeze the queue, so we use q_usage_counter 326 + * to avoid race with it. 329 327 */ 330 - rcu_read_lock(); 331 - if (percpu_ref_is_zero(&q->q_usage_counter)) { 332 - rcu_read_unlock(); 328 + if (!percpu_ref_tryget(&q->q_usage_counter)) 333 329 return; 334 - } 335 330 336 331 queue_for_each_hw_ctx(q, hctx, i) { 337 332 struct blk_mq_tags *tags = hctx->tags; ··· 342 347 bt_for_each(hctx, &tags->breserved_tags, fn, priv, true); 343 348 bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false); 344 349 } 345 - rcu_read_unlock(); 350 + blk_queue_exit(q); 346 351 } 347 352 348 353 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
+2 -2
block/blk-mq.c
··· 1628 1628 BUG_ON(!rq->q);
1629 1629 if (rq->mq_ctx != this_ctx) {
1630 1630 if (this_ctx) {
1631 - trace_block_unplug(this_q, depth, from_schedule);
1631 + trace_block_unplug(this_q, depth, !from_schedule);
1632 1632 blk_mq_sched_insert_requests(this_q, this_ctx,
1633 1633 &ctx_list,
1634 1634 from_schedule);
··· 1648 1648 * on 'ctx_list'. Do those.
1649 1649 */
1650 1650 if (this_ctx) {
1651 - trace_block_unplug(this_q, depth, from_schedule);
1651 + trace_block_unplug(this_q, depth, !from_schedule);
1652 1652 blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list,
1653 1653 from_schedule);
1654 1654 }
+1 -1
block/elevator.c
··· 609 609
610 610 while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
611 611 ;
612 - if (q->nr_sorted && printed++ < 10) {
612 + if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
613 613 printk(KERN_ERR "%s: forced dispatching is broken "
614 614 "(nr_sorted=%u), please report this\n",
615 615 q->elevator->type->elevator_name, q->nr_sorted);
+3 -3
block/genhd.c
··· 1343 1343 part_stat_read(hd, ios[STAT_READ]), 1344 1344 part_stat_read(hd, merges[STAT_READ]), 1345 1345 part_stat_read(hd, sectors[STAT_READ]), 1346 - jiffies_to_msecs(part_stat_read(hd, ticks[STAT_READ])), 1346 + (unsigned int)part_stat_read_msecs(hd, STAT_READ), 1347 1347 part_stat_read(hd, ios[STAT_WRITE]), 1348 1348 part_stat_read(hd, merges[STAT_WRITE]), 1349 1349 part_stat_read(hd, sectors[STAT_WRITE]), 1350 - jiffies_to_msecs(part_stat_read(hd, ticks[STAT_WRITE])), 1350 + (unsigned int)part_stat_read_msecs(hd, STAT_WRITE), 1351 1351 inflight[0], 1352 1352 jiffies_to_msecs(part_stat_read(hd, io_ticks)), 1353 1353 jiffies_to_msecs(part_stat_read(hd, time_in_queue)), 1354 1354 part_stat_read(hd, ios[STAT_DISCARD]), 1355 1355 part_stat_read(hd, merges[STAT_DISCARD]), 1356 1356 part_stat_read(hd, sectors[STAT_DISCARD]), 1357 - jiffies_to_msecs(part_stat_read(hd, ticks[STAT_DISCARD])) 1357 + (unsigned int)part_stat_read_msecs(hd, STAT_DISCARD) 1358 1358 ); 1359 1359 } 1360 1360 disk_part_iter_exit(&piter);
+3 -3
block/partition-generic.c
··· 136 136 part_stat_read(p, ios[STAT_READ]),
137 137 part_stat_read(p, merges[STAT_READ]),
138 138 (unsigned long long)part_stat_read(p, sectors[STAT_READ]),
139 - jiffies_to_msecs(part_stat_read(p, ticks[STAT_READ])),
139 + (unsigned int)part_stat_read_msecs(p, STAT_READ),
140 140 part_stat_read(p, ios[STAT_WRITE]),
141 141 part_stat_read(p, merges[STAT_WRITE]),
142 142 (unsigned long long)part_stat_read(p, sectors[STAT_WRITE]),
143 - jiffies_to_msecs(part_stat_read(p, ticks[STAT_WRITE])),
143 + (unsigned int)part_stat_read_msecs(p, STAT_WRITE),
144 144 inflight[0],
145 145 jiffies_to_msecs(part_stat_read(p, io_ticks)),
146 146 jiffies_to_msecs(part_stat_read(p, time_in_queue)),
147 147 part_stat_read(p, ios[STAT_DISCARD]),
148 148 part_stat_read(p, merges[STAT_DISCARD]),
149 149 (unsigned long long)part_stat_read(p, sectors[STAT_DISCARD]),
150 - jiffies_to_msecs(part_stat_read(p, ticks[STAT_DISCARD])));
150 + (unsigned int)part_stat_read_msecs(p, STAT_DISCARD));
151 151 }
152 152
153 153 ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
+12 -2
drivers/ata/libata-core.c
··· 5359 5359 */ 5360 5360 int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active) 5361 5361 { 5362 + u64 done_mask, ap_qc_active = ap->qc_active; 5362 5363 int nr_done = 0; 5363 - u64 done_mask; 5364 5364 5365 - done_mask = ap->qc_active ^ qc_active; 5365 + /* 5366 + * If the internal tag is set on ap->qc_active, then we care about 5367 + * bit0 on the passed in qc_active mask. Move that bit up to match 5368 + * the internal tag. 5369 + */ 5370 + if (ap_qc_active & (1ULL << ATA_TAG_INTERNAL)) { 5371 + qc_active |= (qc_active & 0x01) << ATA_TAG_INTERNAL; 5372 + qc_active ^= qc_active & 0x01; 5373 + } 5374 + 5375 + done_mask = ap_qc_active ^ qc_active; 5366 5376 5367 5377 if (unlikely(done_mask & qc_active)) { 5368 5378 ata_port_err(ap, "illegal qc_active transition (%08llx->%08llx)\n",
+3
drivers/block/floppy.c
··· 3467 3467 (struct floppy_struct **)&outparam);
3468 3468 if (ret)
3469 3469 return ret;
3470 + memcpy(&inparam.g, outparam,
3471 + offsetof(struct floppy_struct, name));
3472 + outparam = &inparam.g;
3470 3473 break;
3471 3474 case FDMSGON:
3472 3475 UDP->flags |= FTD_MSG;
+2 -2
drivers/block/xen-blkfront.c
··· 2670 2670 list_del(&gnt_list_entry->node);
2671 2671 gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
2672 2672 rinfo->persistent_gnts_c--;
2673 - __free_page(gnt_list_entry->page);
2674 - kfree(gnt_list_entry);
2673 + gnt_list_entry->gref = GRANT_INVALID_REF;
2674 + list_add_tail(&gnt_list_entry->node, &rinfo->grants);
2675 2675 }
2676 2676
2677 2677 spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+2
drivers/bluetooth/hci_ldisc.c
··· 543 543 }
544 544 clear_bit(HCI_UART_PROTO_SET, &hu->flags);
545 545
546 + percpu_free_rwsem(&hu->proto_lock);
547 +
546 548 kfree(hu);
547 549 }
548 550
+11 -7
drivers/clk/x86/clk-pmc-atom.c
··· 55 55 u8 nparents; 56 56 struct clk_plt *clks[PMC_CLK_NUM]; 57 57 struct clk_lookup *mclk_lookup; 58 + struct clk_lookup *ether_clk_lookup; 58 59 }; 59 60 60 61 /* Return an index in parent table */ ··· 186 185 pclk->hw.init = &init; 187 186 pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE; 188 187 spin_lock_init(&pclk->lock); 189 - 190 - /* 191 - * If the clock was already enabled by the firmware mark it as critical 192 - * to avoid it being gated by the clock framework if no driver owns it. 193 - */ 194 - if (plt_clk_is_enabled(&pclk->hw)) 195 - init.flags |= CLK_IS_CRITICAL; 196 188 197 189 ret = devm_clk_hw_register(&pdev->dev, &pclk->hw); 198 190 if (ret) { ··· 345 351 goto err_unreg_clk_plt; 346 352 } 347 353 354 + data->ether_clk_lookup = clkdev_hw_create(&data->clks[4]->hw, 355 + "ether_clk", NULL); 356 + if (!data->ether_clk_lookup) { 357 + err = -ENOMEM; 358 + goto err_drop_mclk; 359 + } 360 + 348 361 plt_clk_free_parent_names_loop(parent_names, data->nparents); 349 362 350 363 platform_set_drvdata(pdev, data); 351 364 return 0; 352 365 366 + err_drop_mclk: 367 + clkdev_drop(data->mclk_lookup); 353 368 err_unreg_clk_plt: 354 369 plt_clk_unregister_loop(data, i); 355 370 plt_clk_unregister_parents(data); ··· 372 369 373 370 data = platform_get_drvdata(pdev); 374 371 372 + clkdev_drop(data->ether_clk_lookup); 375 373 clkdev_drop(data->mclk_lookup); 376 374 plt_clk_unregister_loop(data, PMC_CLK_NUM); 377 375 plt_clk_unregister_parents(data);
+14 -6
drivers/clocksource/timer-atmel-pit.c
··· 180 180 data->base = of_iomap(node, 0); 181 181 if (!data->base) { 182 182 pr_err("Could not map PIT address\n"); 183 - return -ENXIO; 183 + ret = -ENXIO; 184 + goto exit; 184 185 } 185 186 186 187 data->mck = of_clk_get(node, 0); 187 188 if (IS_ERR(data->mck)) { 188 189 pr_err("Unable to get mck clk\n"); 189 - return PTR_ERR(data->mck); 190 + ret = PTR_ERR(data->mck); 191 + goto exit; 190 192 } 191 193 192 194 ret = clk_prepare_enable(data->mck); 193 195 if (ret) { 194 196 pr_err("Unable to enable mck\n"); 195 - return ret; 197 + goto exit; 196 198 } 197 199 198 200 /* Get the interrupts property */ 199 201 data->irq = irq_of_parse_and_map(node, 0); 200 202 if (!data->irq) { 201 203 pr_err("Unable to get IRQ from DT\n"); 202 - return -EINVAL; 204 + ret = -EINVAL; 205 + goto exit; 203 206 } 204 207 205 208 /* ··· 230 227 ret = clocksource_register_hz(&data->clksrc, pit_rate); 231 228 if (ret) { 232 229 pr_err("Failed to register clocksource\n"); 233 - return ret; 230 + goto exit; 234 231 } 235 232 236 233 /* Set up irq handler */ ··· 239 236 "at91_tick", data); 240 237 if (ret) { 241 238 pr_err("Unable to setup IRQ\n"); 242 - return ret; 239 + clocksource_unregister(&data->clksrc); 240 + goto exit; 243 241 } 244 242 245 243 /* Set up and register clockevents */ ··· 258 254 clockevents_register_device(&data->clkevt); 259 255 260 256 return 0; 257 + 258 + exit: 259 + kfree(data); 260 + return ret; 261 261 } 262 262 TIMER_OF_DECLARE(at91sam926x_pit, "atmel,at91sam9260-pit", 263 263 at91sam926x_pit_dt_init);
+11 -7
drivers/clocksource/timer-fttmr010.c
··· 130 130 cr &= ~fttmr010->t1_enable_val; 131 131 writel(cr, fttmr010->base + TIMER_CR); 132 132 133 - /* Setup the match register forward/backward in time */ 134 - cr = readl(fttmr010->base + TIMER1_COUNT); 135 - if (fttmr010->count_down) 136 - cr -= cycles; 137 - else 138 - cr += cycles; 139 - writel(cr, fttmr010->base + TIMER1_MATCH1); 133 + if (fttmr010->count_down) { 134 + /* 135 + * ASPEED Timer Controller will load TIMER1_LOAD register 136 + * into TIMER1_COUNT register when the timer is re-enabled. 137 + */ 138 + writel(cycles, fttmr010->base + TIMER1_LOAD); 139 + } else { 140 + /* Setup the match register forward in time */ 141 + cr = readl(fttmr010->base + TIMER1_COUNT); 142 + writel(cr + cycles, fttmr010->base + TIMER1_MATCH1); 143 + } 140 144 141 145 /* Start */ 142 146 cr = readl(fttmr010->base + TIMER_CR);
+3
drivers/clocksource/timer-ti-32k.c
··· 97 97 return -ENXIO;
98 98 }
99 99
100 + if (!of_machine_is_compatible("ti,am43"))
101 + ti_32k_timer.cs.flags |= CLOCK_SOURCE_SUSPEND_NONSTOP;
102 +
100 103 ti_32k_timer.counter = ti_32k_timer.base;
101 104
102 105 /*
+2 -2
drivers/cpufreq/qcom-cpufreq-kryo.c
··· 44 44 45 45 struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev; 46 46 47 - static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void) 47 + static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void) 48 48 { 49 49 size_t len; 50 50 u32 *msm_id; ··· 222 222 } 223 223 module_init(qcom_cpufreq_kryo_init); 224 224 225 - static void __init qcom_cpufreq_kryo_exit(void) 225 + static void __exit qcom_cpufreq_kryo_exit(void) 226 226 { 227 227 platform_device_unregister(kryo_cpufreq_pdev); 228 228 platform_driver_unregister(&qcom_cpufreq_kryo_driver);
+41 -5
drivers/crypto/ccp/psp-dev.c
··· 38 38 static struct sev_misc_dev *misc_dev; 39 39 static struct psp_device *psp_master; 40 40 41 + static int psp_cmd_timeout = 100; 42 + module_param(psp_cmd_timeout, int, 0644); 43 + MODULE_PARM_DESC(psp_cmd_timeout, " default timeout value, in seconds, for PSP commands"); 44 + 45 + static int psp_probe_timeout = 5; 46 + module_param(psp_probe_timeout, int, 0644); 47 + MODULE_PARM_DESC(psp_probe_timeout, " default timeout value, in seconds, during PSP device probe"); 48 + 49 + static bool psp_dead; 50 + static int psp_timeout; 51 + 41 52 static struct psp_device *psp_alloc_struct(struct sp_device *sp) 42 53 { 43 54 struct device *dev = sp->dev; ··· 93 82 return IRQ_HANDLED; 94 83 } 95 84 96 - static void sev_wait_cmd_ioc(struct psp_device *psp, unsigned int *reg) 85 + static int sev_wait_cmd_ioc(struct psp_device *psp, 86 + unsigned int *reg, unsigned int timeout) 97 87 { 98 - wait_event(psp->sev_int_queue, psp->sev_int_rcvd); 88 + int ret; 89 + 90 + ret = wait_event_timeout(psp->sev_int_queue, 91 + psp->sev_int_rcvd, timeout * HZ); 92 + if (!ret) 93 + return -ETIMEDOUT; 94 + 99 95 *reg = ioread32(psp->io_regs + psp->vdata->cmdresp_reg); 96 + 97 + return 0; 100 98 } 101 99 102 100 static int sev_cmd_buffer_len(int cmd) ··· 153 133 if (!psp) 154 134 return -ENODEV; 155 135 136 + if (psp_dead) 137 + return -EBUSY; 138 + 156 139 /* Get the physical address of the command buffer */ 157 140 phys_lsb = data ? lower_32_bits(__psp_pa(data)) : 0; 158 141 phys_msb = data ? 
upper_32_bits(__psp_pa(data)) : 0; 159 142 160 - dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x\n", 161 - cmd, phys_msb, phys_lsb); 143 + dev_dbg(psp->dev, "sev command id %#x buffer 0x%08x%08x timeout %us\n", 144 + cmd, phys_msb, phys_lsb, psp_timeout); 162 145 163 146 print_hex_dump_debug("(in): ", DUMP_PREFIX_OFFSET, 16, 2, data, 164 147 sev_cmd_buffer_len(cmd), false); ··· 177 154 iowrite32(reg, psp->io_regs + psp->vdata->cmdresp_reg); 178 155 179 156 /* wait for command completion */ 180 - sev_wait_cmd_ioc(psp, &reg); 157 + ret = sev_wait_cmd_ioc(psp, &reg, psp_timeout); 158 + if (ret) { 159 + if (psp_ret) 160 + *psp_ret = 0; 161 + 162 + dev_err(psp->dev, "sev command %#x timed out, disabling PSP \n", cmd); 163 + psp_dead = true; 164 + 165 + return ret; 166 + } 167 + 168 + psp_timeout = psp_cmd_timeout; 181 169 182 170 if (psp_ret) 183 171 *psp_ret = reg & PSP_CMDRESP_ERR_MASK; ··· 921 887 return; 922 888 923 889 psp_master = sp->psp_data; 890 + 891 + psp_timeout = psp_probe_timeout; 924 892 925 893 if (sev_get_api_version()) 926 894 goto err;
+6
drivers/dax/device.c
··· 535 535 return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
536 536 }
537 537
538 + static const struct address_space_operations dev_dax_aops = {
539 + .set_page_dirty = noop_set_page_dirty,
540 + .invalidatepage = noop_invalidatepage,
541 + };
542 +
538 543 static int dax_open(struct inode *inode, struct file *filp)
539 544 {
540 545 struct dax_device *dax_dev = inode_dax(inode);
··· 549 544 dev_dbg(&dev_dax->dev, "trace\n");
550 545 inode->i_mapping = __dax_inode->i_mapping;
551 546 inode->i_mapping->host = __dax_inode;
547 + inode->i_mapping->a_ops = &dev_dax_aops;
552 548 filp->f_mapping = inode->i_mapping;
553 549 filp->f_wb_err = filemap_sample_wb_err(filp->f_mapping);
554 550 filp->private_data = dev_dax;
+6 -3
drivers/firmware/efi/Kconfig
··· 90 90 config EFI_ARMSTUB_DTB_LOADER 91 91 bool "Enable the DTB loader" 92 92 depends on EFI_ARMSTUB 93 + default y 93 94 help 94 95 Select this config option to add support for the dtb= command 95 96 line parameter, allowing a device tree blob to be loaded into 96 97 memory from the EFI System Partition by the stub. 97 98 98 - The device tree is typically provided by the platform or by 99 - the bootloader, so this option is mostly for development 100 - purposes only. 99 + If the device tree is provided by the platform or by 100 + the bootloader this option may not be needed. 101 + But, for various development reasons and to maintain existing 102 + functionality for bootloaders that do not have such support 103 + this option is necessary. 101 104 102 105 config EFI_BOOTLOADER_CONTROL 103 106 tristate "EFI Bootloader Control"
+5 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 272 272 273 273 int alloc_gtt_mem(struct kgd_dev *kgd, size_t size, 274 274 void **mem_obj, uint64_t *gpu_addr, 275 - void **cpu_ptr) 275 + void **cpu_ptr, bool mqd_gfx9) 276 276 { 277 277 struct amdgpu_device *adev = (struct amdgpu_device *)kgd; 278 278 struct amdgpu_bo *bo = NULL; ··· 287 287 bp.flags = AMDGPU_GEM_CREATE_CPU_GTT_USWC; 288 288 bp.type = ttm_bo_type_kernel; 289 289 bp.resv = NULL; 290 + 291 + if (mqd_gfx9) 292 + bp.flags |= AMDGPU_GEM_CREATE_MQD_GFX9; 293 + 290 294 r = amdgpu_bo_create(adev, &bp, &bo); 291 295 if (r) { 292 296 dev_err(adev->dev,
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
··· 136 136 /* Shared API */
137 137 int alloc_gtt_mem(struct kgd_dev *kgd, size_t size,
138 138 void **mem_obj, uint64_t *gpu_addr,
139 - void **cpu_ptr);
139 + void **cpu_ptr, bool mqd_gfx9);
140 140 void free_gtt_mem(struct kgd_dev *kgd, void *mem_obj);
141 141 void get_local_mem_info(struct kgd_dev *kgd,
142 142 struct kfd_local_mem_info *mem_info);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v7.c
··· 685 685
686 686 while (true) {
687 687 temp = RREG32(sdma_base_addr + mmSDMA0_RLC0_CONTEXT_STATUS);
688 - if (temp & SDMA0_STATUS_REG__RB_CMD_IDLE__SHIFT)
688 + if (temp & SDMA0_RLC0_CONTEXT_STATUS__IDLE_MASK)
689 689 break;
690 690 if (time_after(jiffies, end_jiffies))
691 691 return -ETIME;
+8 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
··· 367 367 break; 368 368 case CHIP_POLARIS10: 369 369 if (type == CGS_UCODE_ID_SMU) { 370 - if ((adev->pdev->device == 0x67df) && 371 - ((adev->pdev->revision == 0xe0) || 372 - (adev->pdev->revision == 0xe3) || 373 - (adev->pdev->revision == 0xe4) || 374 - (adev->pdev->revision == 0xe5) || 375 - (adev->pdev->revision == 0xe7) || 370 + if (((adev->pdev->device == 0x67df) && 371 + ((adev->pdev->revision == 0xe0) || 372 + (adev->pdev->revision == 0xe3) || 373 + (adev->pdev->revision == 0xe4) || 374 + (adev->pdev->revision == 0xe5) || 375 + (adev->pdev->revision == 0xe7) || 376 + (adev->pdev->revision == 0xef))) || 377 + ((adev->pdev->device == 0x6fdf) && 376 378 (adev->pdev->revision == 0xef))) { 377 379 info->is_kicker = true; 378 380 strcpy(fw_name, "amdgpu/polaris10_k_smc.bin");
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
··· 740 740 {0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
741 741 {0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
742 742 {0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
743 + {0x1002, 0x6FDF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
743 744 /* Polaris12 */
744 745 {0x1002, 0x6980, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
745 746 {0x1002, 0x6981, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
··· 258 258 {
259 259 int i;
260 260
261 + cancel_delayed_work_sync(&adev->vce.idle_work);
262 +
261 263 if (adev->vce.vcpu_bo == NULL)
262 264 return 0;
263 265
··· 270 268 if (i == AMDGPU_MAX_VCE_HANDLES)
271 269 return 0;
272 270
273 - cancel_delayed_work_sync(&adev->vce.idle_work);
274 271 /* TODO: suspending running encoding sessions isn't supported */
275 272 return -EINVAL;
276 273 }
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
··· 153 153 unsigned size;
154 154 void *ptr;
155 155
156 + cancel_delayed_work_sync(&adev->vcn.idle_work);
157 +
156 158 if (adev->vcn.vcpu_bo == NULL)
157 159 return 0;
158 -
159 - cancel_delayed_work_sync(&adev->vcn.idle_work);
160 160
161 161 size = amdgpu_bo_size(adev->vcn.vcpu_bo);
162 162 ptr = adev->vcn.cpu_addr;
+2 -1
drivers/gpu/drm/amd/amdkfd/kfd_device.c
··· 457 457 458 458 if (kfd->kfd2kgd->init_gtt_mem_allocation( 459 459 kfd->kgd, size, &kfd->gtt_mem, 460 - &kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr)){ 460 + &kfd->gtt_start_gpu_addr, &kfd->gtt_start_cpu_ptr, 461 + false)) { 461 462 dev_err(kfd_device, "Could not allocate %d bytes\n", size); 462 463 goto out; 463 464 }
+12 -1
drivers/gpu/drm/amd/amdkfd/kfd_iommu.c
··· 62 62 struct amd_iommu_device_info iommu_info; 63 63 unsigned int pasid_limit; 64 64 int err; 65 + struct kfd_topology_device *top_dev; 65 66 66 - if (!kfd->device_info->needs_iommu_device) 67 + top_dev = kfd_topology_device_by_id(kfd->id); 68 + 69 + /* 70 + * Overwrite ATS capability according to needs_iommu_device to fix 71 + * potential missing corresponding bit in CRAT of BIOS. 72 + */ 73 + if (!kfd->device_info->needs_iommu_device) { 74 + top_dev->node_props.capability &= ~HSA_CAP_ATS_PRESENT; 67 75 return 0; 76 + } 77 + 78 + top_dev->node_props.capability |= HSA_CAP_ATS_PRESENT; 68 79 69 80 iommu_info.flags = 0; 70 81 err = amd_iommu_device_info(kfd->pdev, &iommu_info);
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
··· 88 88 ALIGN(sizeof(struct v9_mqd), PAGE_SIZE),
89 89 &((*mqd_mem_obj)->gtt_mem),
90 90 &((*mqd_mem_obj)->gpu_addr),
91 - (void *)&((*mqd_mem_obj)->cpu_ptr));
91 + (void *)&((*mqd_mem_obj)->cpu_ptr), true);
92 92 } else
93 93 retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct v9_mqd),
94 94 mqd_mem_obj);
+1
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
··· 806 806 int kfd_topology_remove_device(struct kfd_dev *gpu);
807 807 struct kfd_topology_device *kfd_topology_device_by_proximity_domain(
808 808 uint32_t proximity_domain);
809 + struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id);
809 810 struct kfd_dev *kfd_device_by_id(uint32_t gpu_id);
810 811 struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev);
811 812 int kfd_topology_enum_kfd_devices(uint8_t idx, struct kfd_dev **kdev);
+16 -5
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 63 63 return device; 64 64 } 65 65 66 - struct kfd_dev *kfd_device_by_id(uint32_t gpu_id) 66 + struct kfd_topology_device *kfd_topology_device_by_id(uint32_t gpu_id) 67 67 { 68 - struct kfd_topology_device *top_dev; 69 - struct kfd_dev *device = NULL; 68 + struct kfd_topology_device *top_dev = NULL; 69 + struct kfd_topology_device *ret = NULL; 70 70 71 71 down_read(&topology_lock); 72 72 73 73 list_for_each_entry(top_dev, &topology_device_list, list) 74 74 if (top_dev->gpu_id == gpu_id) { 75 - device = top_dev->gpu; 75 + ret = top_dev; 76 76 break; 77 77 } 78 78 79 79 up_read(&topology_lock); 80 80 81 - return device; 81 + return ret; 82 + } 83 + 84 + struct kfd_dev *kfd_device_by_id(uint32_t gpu_id) 85 + { 86 + struct kfd_topology_device *top_dev; 87 + 88 + top_dev = kfd_topology_device_by_id(gpu_id); 89 + if (!top_dev) 90 + return NULL; 91 + 92 + return top_dev->gpu; 82 93 } 83 94 84 95 struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev)
+134 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 641 641 return NULL; 642 642 } 643 643 644 + static void emulated_link_detect(struct dc_link *link) 645 + { 646 + struct dc_sink_init_data sink_init_data = { 0 }; 647 + struct display_sink_capability sink_caps = { 0 }; 648 + enum dc_edid_status edid_status; 649 + struct dc_context *dc_ctx = link->ctx; 650 + struct dc_sink *sink = NULL; 651 + struct dc_sink *prev_sink = NULL; 652 + 653 + link->type = dc_connection_none; 654 + prev_sink = link->local_sink; 655 + 656 + if (prev_sink != NULL) 657 + dc_sink_retain(prev_sink); 658 + 659 + switch (link->connector_signal) { 660 + case SIGNAL_TYPE_HDMI_TYPE_A: { 661 + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C; 662 + sink_caps.signal = SIGNAL_TYPE_HDMI_TYPE_A; 663 + break; 664 + } 665 + 666 + case SIGNAL_TYPE_DVI_SINGLE_LINK: { 667 + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C; 668 + sink_caps.signal = SIGNAL_TYPE_DVI_SINGLE_LINK; 669 + break; 670 + } 671 + 672 + case SIGNAL_TYPE_DVI_DUAL_LINK: { 673 + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C; 674 + sink_caps.signal = SIGNAL_TYPE_DVI_DUAL_LINK; 675 + break; 676 + } 677 + 678 + case SIGNAL_TYPE_LVDS: { 679 + sink_caps.transaction_type = DDC_TRANSACTION_TYPE_I2C; 680 + sink_caps.signal = SIGNAL_TYPE_LVDS; 681 + break; 682 + } 683 + 684 + case SIGNAL_TYPE_EDP: { 685 + sink_caps.transaction_type = 686 + DDC_TRANSACTION_TYPE_I2C_OVER_AUX; 687 + sink_caps.signal = SIGNAL_TYPE_EDP; 688 + break; 689 + } 690 + 691 + case SIGNAL_TYPE_DISPLAY_PORT: { 692 + sink_caps.transaction_type = 693 + DDC_TRANSACTION_TYPE_I2C_OVER_AUX; 694 + sink_caps.signal = SIGNAL_TYPE_VIRTUAL; 695 + break; 696 + } 697 + 698 + default: 699 + DC_ERROR("Invalid connector type! 
signal:%d\n", 700 + link->connector_signal); 701 + return; 702 + } 703 + 704 + sink_init_data.link = link; 705 + sink_init_data.sink_signal = sink_caps.signal; 706 + 707 + sink = dc_sink_create(&sink_init_data); 708 + if (!sink) { 709 + DC_ERROR("Failed to create sink!\n"); 710 + return; 711 + } 712 + 713 + link->local_sink = sink; 714 + 715 + edid_status = dm_helpers_read_local_edid( 716 + link->ctx, 717 + link, 718 + sink); 719 + 720 + if (edid_status != EDID_OK) 721 + DC_ERROR("Failed to read EDID"); 722 + 723 + } 724 + 644 725 static int dm_resume(void *handle) 645 726 { 646 727 struct amdgpu_device *adev = handle; ··· 735 654 struct drm_plane *plane; 736 655 struct drm_plane_state *new_plane_state; 737 656 struct dm_plane_state *dm_new_plane_state; 657 + enum dc_connection_type new_connection_type = dc_connection_none; 738 658 int ret; 739 659 int i; 740 660 ··· 766 684 continue; 767 685 768 686 mutex_lock(&aconnector->hpd_lock); 769 - dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD); 687 + if (!dc_link_detect_sink(aconnector->dc_link, &new_connection_type)) 688 + DRM_ERROR("KMS: Failed to detect connector\n"); 689 + 690 + if (aconnector->base.force && new_connection_type == dc_connection_none) 691 + emulated_link_detect(aconnector->dc_link); 692 + else 693 + dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD); 770 694 771 695 if (aconnector->fake_enable && aconnector->dc_link->local_sink) 772 696 aconnector->fake_enable = false; ··· 1010 922 struct amdgpu_dm_connector *aconnector = (struct amdgpu_dm_connector *)param; 1011 923 struct drm_connector *connector = &aconnector->base; 1012 924 struct drm_device *dev = connector->dev; 925 + enum dc_connection_type new_connection_type = dc_connection_none; 1013 926 1014 927 /* In case of failure or MST no need to update connector status or notify the OS 1015 928 * since (for MST case) MST does this in it's own context. 
··· 1020 931 if (aconnector->fake_enable)
1021 932 aconnector->fake_enable = false;
1022 933 
1023 - if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
934 + if (!dc_link_detect_sink(aconnector->dc_link, &new_connection_type))
935 + DRM_ERROR("KMS: Failed to detect connector\n");
936 + 
937 + if (aconnector->base.force && new_connection_type == dc_connection_none) {
938 + emulated_link_detect(aconnector->dc_link);
939 + 
940 + 
941 + drm_modeset_lock_all(dev);
942 + dm_restore_drm_connector_state(dev, connector);
943 + drm_modeset_unlock_all(dev);
944 + 
945 + if (aconnector->base.force == DRM_FORCE_UNSPECIFIED)
946 + drm_kms_helper_hotplug_event(dev);
947 + 
948 + } else if (dc_link_detect(aconnector->dc_link, DETECT_REASON_HPD)) {
1024 949 amdgpu_dm_update_connector_after_detect(aconnector);
1025 950 
1026 951 
··· 1134 1031 struct drm_device *dev = connector->dev;
1135 1032 struct dc_link *dc_link = aconnector->dc_link;
1136 1033 bool is_mst_root_connector = aconnector->mst_mgr.mst_state;
1034 + enum dc_connection_type new_connection_type = dc_connection_none;
1137 1035 
1138 1036 /* TODO:Temporary add mutex to protect hpd interrupt not have a gpio
1139 1037 * conflict, after implement i2c helper, this mutex should be
··· 1146 1042 if (dc_link_handle_hpd_rx_irq(dc_link, NULL, NULL) &&
1147 1043 !is_mst_root_connector) {
1148 1044 /* Downstream Port status changed. */
1149 - if (dc_link_detect(dc_link, DETECT_REASON_HPDRX)) {
1045 + if (!dc_link_detect_sink(dc_link, &new_connection_type))
1046 + DRM_ERROR("KMS: Failed to detect connector\n");
1047 + 
1048 + if (aconnector->base.force && new_connection_type == dc_connection_none) {
1049 + emulated_link_detect(dc_link);
1050 + 
1051 + if (aconnector->fake_enable)
1052 + aconnector->fake_enable = false;
1053 + 
1054 + amdgpu_dm_update_connector_after_detect(aconnector);
1055 + 
1056 + 
1057 + drm_modeset_lock_all(dev);
1058 + dm_restore_drm_connector_state(dev, connector);
1059 + drm_modeset_unlock_all(dev);
1060 + 
1061 + drm_kms_helper_hotplug_event(dev);
1062 + } else if (dc_link_detect(dc_link, DETECT_REASON_HPDRX)) {
1150 1063 
1151 1064 if (aconnector->fake_enable)
1152 1065 aconnector->fake_enable = false;
··· 1554 1433 struct amdgpu_mode_info *mode_info = &adev->mode_info;
1555 1434 uint32_t link_cnt;
1556 1435 int32_t total_overlay_planes, total_primary_planes;
1436 + enum dc_connection_type new_connection_type = dc_connection_none;
1557 1437 
1558 1438 link_cnt = dm->dc->caps.max_links;
1559 1439 if (amdgpu_dm_mode_config_init(dm->adev)) {
··· 1621 1499 
1622 1500 link = dc_get_link_at_index(dm->dc, i);
1623 1501 
1624 - if (dc_link_detect(link, DETECT_REASON_BOOT)) {
1502 + if (!dc_link_detect_sink(link, &new_connection_type))
1503 + DRM_ERROR("KMS: Failed to detect connector\n");
1504 + 
1505 + if (aconnector->base.force && new_connection_type == dc_connection_none) {
1506 + emulated_link_detect(link);
1507 + amdgpu_dm_update_connector_after_detect(aconnector);
1508 + 
1509 + } else if (dc_link_detect(link, DETECT_REASON_BOOT)) {
1625 1510 amdgpu_dm_update_connector_after_detect(aconnector);
1626 1511 register_backlight_device(dm, link);
1627 1512 }
··· 2623 2494 if (dm_state && dm_state->freesync_capable)
2624 2495 stream->ignore_msa_timing_param = true;
2625 2496 finish:
2626 - if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL)
2497 + if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL && aconnector->base.force != DRM_FORCE_ON)
2627 2498 dc_sink_release(sink);
2628 2499 
2629 2500 return stream;
+2 -2
drivers/gpu/drm/amd/display/dc/core/dc_link.c
··· 195 195 return result; 196 196 } 197 197 198 - static bool detect_sink(struct dc_link *link, enum dc_connection_type *type) 198 + bool dc_link_detect_sink(struct dc_link *link, enum dc_connection_type *type) 199 199 { 200 200 uint32_t is_hpd_high = 0; 201 201 struct gpio *hpd_pin; ··· 604 604 if (link->connector_signal == SIGNAL_TYPE_VIRTUAL) 605 605 return false; 606 606 607 - if (false == detect_sink(link, &new_connection_type)) { 607 + if (false == dc_link_detect_sink(link, &new_connection_type)) { 608 608 BREAK_TO_DEBUGGER(); 609 609 return false; 610 610 }
+1
drivers/gpu/drm/amd/display/dc/dc_link.h
··· 215 215 216 216 bool dc_link_is_dp_sink_present(struct dc_link *link); 217 217 218 + bool dc_link_detect_sink(struct dc_link *link, enum dc_connection_type *type); 218 219 /* 219 220 * DPCD access interfaces 220 221 */
+1 -1
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 2560 2560 dc->prev_display_config = *pp_display_cfg; 2561 2561 } 2562 2562 2563 - void dce110_set_bandwidth( 2563 + static void dce110_set_bandwidth( 2564 2564 struct dc *dc, 2565 2565 struct dc_state *context, 2566 2566 bool decrease_allowed)
-5
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.h
··· 68 68 const struct dc_state *context, 69 69 struct dm_pp_display_configuration *pp_display_cfg); 70 70 71 - void dce110_set_bandwidth( 72 - struct dc *dc, 73 - struct dc_state *context, 74 - bool decrease_allowed); 75 - 76 71 uint32_t dce110_get_min_vblank_time_us(const struct dc_state *context); 77 72 78 73 void dp_receiver_power_ctrl(struct dc_link *link, bool on);
-12
drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
··· 244 244 dh_data->dchub_info_valid = false; 245 245 } 246 246 247 - static void dce120_set_bandwidth( 248 - struct dc *dc, 249 - struct dc_state *context, 250 - bool decrease_allowed) 251 - { 252 - if (context->stream_count <= 0) 253 - return; 254 - 255 - dce110_set_bandwidth(dc, context, decrease_allowed); 256 - } 257 - 258 247 void dce120_hw_sequencer_construct(struct dc *dc) 259 248 { 260 249 /* All registers used by dce11.2 match those in dce11 in offset and ··· 252 263 dce110_hw_sequencer_construct(dc); 253 264 dc->hwss.enable_display_power_gating = dce120_enable_display_power_gating; 254 265 dc->hwss.update_dchub = dce120_update_dchub; 255 - dc->hwss.set_bandwidth = dce120_set_bandwidth; 256 266 } 257 267
+1 -1
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
··· 292 292 struct kfd2kgd_calls { 293 293 int (*init_gtt_mem_allocation)(struct kgd_dev *kgd, size_t size, 294 294 void **mem_obj, uint64_t *gpu_addr, 295 - void **cpu_ptr); 295 + void **cpu_ptr, bool mqd_gfx9); 296 296 297 297 void (*free_gtt_mem)(struct kgd_dev *kgd, void *mem_obj); 298 298
+1
drivers/gpu/drm/arm/malidp_drv.c
··· 754 754 drm->irq_enabled = true; 755 755 756 756 ret = drm_vblank_init(drm, drm->mode_config.num_crtc); 757 + drm_crtc_vblank_reset(&malidp->crtc); 757 758 if (ret < 0) { 758 759 DRM_ERROR("failed to initialise vblank\n"); 759 760 goto vblank_fail;
+23 -2
drivers/gpu/drm/arm/malidp_hw.c
··· 384 384 385 385 static int malidp500_enable_memwrite(struct malidp_hw_device *hwdev, 386 386 dma_addr_t *addrs, s32 *pitches, 387 - int num_planes, u16 w, u16 h, u32 fmt_id) 387 + int num_planes, u16 w, u16 h, u32 fmt_id, 388 + const s16 *rgb2yuv_coeffs) 388 389 { 389 390 u32 base = MALIDP500_SE_MEMWRITE_BASE; 390 391 u32 de_base = malidp_get_block_base(hwdev, MALIDP_DE_BLOCK); ··· 417 416 418 417 malidp_hw_write(hwdev, MALIDP_DE_H_ACTIVE(w) | MALIDP_DE_V_ACTIVE(h), 419 418 MALIDP500_SE_MEMWRITE_OUT_SIZE); 419 + 420 + if (rgb2yuv_coeffs) { 421 + int i; 422 + 423 + for (i = 0; i < MALIDP_COLORADJ_NUM_COEFFS; i++) { 424 + malidp_hw_write(hwdev, rgb2yuv_coeffs[i], 425 + MALIDP500_SE_RGB_YUV_COEFFS + i * 4); 426 + } 427 + } 428 + 420 429 malidp_hw_setbits(hwdev, MALIDP_SE_MEMWRITE_EN, MALIDP500_SE_CONTROL); 421 430 422 431 return 0; ··· 669 658 670 659 static int malidp550_enable_memwrite(struct malidp_hw_device *hwdev, 671 660 dma_addr_t *addrs, s32 *pitches, 672 - int num_planes, u16 w, u16 h, u32 fmt_id) 661 + int num_planes, u16 w, u16 h, u32 fmt_id, 662 + const s16 *rgb2yuv_coeffs) 673 663 { 674 664 u32 base = MALIDP550_SE_MEMWRITE_BASE; 675 665 u32 de_base = malidp_get_block_base(hwdev, MALIDP_DE_BLOCK); ··· 700 688 MALIDP550_SE_MEMWRITE_OUT_SIZE); 701 689 malidp_hw_setbits(hwdev, MALIDP550_SE_MEMWRITE_ONESHOT | MALIDP_SE_MEMWRITE_EN, 702 690 MALIDP550_SE_CONTROL); 691 + 692 + if (rgb2yuv_coeffs) { 693 + int i; 694 + 695 + for (i = 0; i < MALIDP_COLORADJ_NUM_COEFFS; i++) { 696 + malidp_hw_write(hwdev, rgb2yuv_coeffs[i], 697 + MALIDP550_SE_RGB_YUV_COEFFS + i * 4); 698 + } 699 + } 703 700 704 701 return 0; 705 702 }
+2 -1
drivers/gpu/drm/arm/malidp_hw.h
··· 191 191 * @param fmt_id - internal format ID of output buffer 192 192 */ 193 193 int (*enable_memwrite)(struct malidp_hw_device *hwdev, dma_addr_t *addrs, 194 - s32 *pitches, int num_planes, u16 w, u16 h, u32 fmt_id); 194 + s32 *pitches, int num_planes, u16 w, u16 h, u32 fmt_id, 195 + const s16 *rgb2yuv_coeffs); 195 196 196 197 /* 197 198 * Disable the writing to memory of the next frame's content.
+21 -4
drivers/gpu/drm/arm/malidp_mw.c
··· 26 26 s32 pitches[2];
27 27 u8 format;
28 28 u8 n_planes;
29 + bool rgb2yuv_initialized;
30 + const s16 *rgb2yuv_coeffs;
29 31 };
30 32 
31 33 static int malidp_mw_connector_get_modes(struct drm_connector *connector)
··· 86 84 static struct drm_connector_state *
87 85 malidp_mw_connector_duplicate_state(struct drm_connector *connector)
88 86 {
89 - struct malidp_mw_connector_state *mw_state;
87 + struct malidp_mw_connector_state *mw_state, *mw_current_state;
90 88 
91 89 if (WARN_ON(!connector->state))
92 90 return NULL;
··· 95 93 if (!mw_state)
96 94 return NULL;
97 95 
98 - /* No need to preserve any of our driver-local data */
96 + mw_current_state = to_mw_state(connector->state);
97 + mw_state->rgb2yuv_coeffs = mw_current_state->rgb2yuv_coeffs;
98 + mw_state->rgb2yuv_initialized = mw_current_state->rgb2yuv_initialized;
99 + 
99 100 __drm_atomic_helper_connector_duplicate_state(connector, &mw_state->base);
100 101 
101 102 return &mw_state->base;
··· 111 106 .destroy = malidp_mw_connector_destroy,
112 107 .atomic_duplicate_state = malidp_mw_connector_duplicate_state,
113 108 .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
109 + };
110 + 
111 + static const s16 rgb2yuv_coeffs_bt709_limited[MALIDP_COLORADJ_NUM_COEFFS] = {
112 + 47, 157, 16,
113 + -26, -87, 112,
114 + 112, -102, -10,
115 + 16, 128, 128
114 116 };
115 117 
116 118 static int
··· 168 156 mw_state->addrs[i] = obj->paddr + fb->offsets[i];
169 157 }
170 158 mw_state->n_planes = n_planes;
159 + 
160 + if (fb->format->is_yuv)
161 + mw_state->rgb2yuv_coeffs = rgb2yuv_coeffs_bt709_limited;
171 162 
172 163 return 0;
173 164 }
··· 254 239 
255 240 drm_writeback_queue_job(mw_conn, conn_state->writeback_job);
256 241 conn_state->writeback_job = NULL;
257 - 
258 242 hwdev->hw->enable_memwrite(hwdev, mw_state->addrs,
259 243 mw_state->pitches, mw_state->n_planes,
260 - fb->width, fb->height, mw_state->format);
244 + fb->width, fb->height, mw_state->format,
245 + !mw_state->rgb2yuv_initialized ?
246 + mw_state->rgb2yuv_coeffs : NULL); 247 + mw_state->rgb2yuv_initialized = !!mw_state->rgb2yuv_coeffs; 261 248 } else { 262 249 DRM_DEV_DEBUG_DRIVER(drm->dev, "Disable memwrite\n"); 263 250 hwdev->hw->disable_memwrite(hwdev);
+2
drivers/gpu/drm/arm/malidp_regs.h
··· 205 205 #define MALIDP500_SE_BASE 0x00c00 206 206 #define MALIDP500_SE_CONTROL 0x00c0c 207 207 #define MALIDP500_SE_MEMWRITE_OUT_SIZE 0x00c2c 208 + #define MALIDP500_SE_RGB_YUV_COEFFS 0x00C74 208 209 #define MALIDP500_SE_MEMWRITE_BASE 0x00e00 209 210 #define MALIDP500_DC_IRQ_BASE 0x00f00 210 211 #define MALIDP500_CONFIG_VALID 0x00f00 ··· 239 238 #define MALIDP550_SE_CONTROL 0x08010 240 239 #define MALIDP550_SE_MEMWRITE_ONESHOT (1 << 7) 241 240 #define MALIDP550_SE_MEMWRITE_OUT_SIZE 0x08030 241 + #define MALIDP550_SE_RGB_YUV_COEFFS 0x08078 242 242 #define MALIDP550_SE_MEMWRITE_BASE 0x08100 243 243 #define MALIDP550_DC_BASE 0x0c000 244 244 #define MALIDP550_DC_CONTROL 0x0c010
+1 -1
drivers/gpu/drm/drm_atomic.c
··· 2067 2067 struct drm_connector *connector; 2068 2068 struct drm_connector_list_iter conn_iter; 2069 2069 2070 - if (!drm_core_check_feature(dev, DRIVER_ATOMIC)) 2070 + if (!drm_drv_uses_atomic_modeset(dev)) 2071 2071 return; 2072 2072 2073 2073 list_for_each_entry(plane, &config->plane_list, head) {
+1 -1
drivers/gpu/drm/drm_debugfs.c
··· 151 151 return ret; 152 152 } 153 153 154 - if (drm_core_check_feature(dev, DRIVER_ATOMIC)) { 154 + if (drm_drv_uses_atomic_modeset(dev)) { 155 155 ret = drm_atomic_debugfs_init(minor); 156 156 if (ret) { 157 157 DRM_ERROR("Failed to create atomic debugfs files\n");
-3
drivers/gpu/drm/drm_fb_helper.c
··· 2370 2370 { 2371 2371 int c, o; 2372 2372 struct drm_connector *connector; 2373 - const struct drm_connector_helper_funcs *connector_funcs; 2374 2373 int my_score, best_score, score; 2375 2374 struct drm_fb_helper_crtc **crtcs, *crtc; 2376 2375 struct drm_fb_helper_connector *fb_helper_conn; ··· 2397 2398 my_score++; 2398 2399 if (drm_has_preferred_mode(fb_helper_conn, width, height)) 2399 2400 my_score++; 2400 - 2401 - connector_funcs = connector->helper_private; 2402 2401 2403 2402 /* 2404 2403 * select a crtc for this connector and then attempt to configure
-10
drivers/gpu/drm/drm_panel.c
··· 24 24 #include <linux/err.h> 25 25 #include <linux/module.h> 26 26 27 - #include <drm/drm_device.h> 28 27 #include <drm/drm_crtc.h> 29 28 #include <drm/drm_panel.h> 30 29 ··· 104 105 if (panel->connector) 105 106 return -EBUSY; 106 107 107 - panel->link = device_link_add(connector->dev->dev, panel->dev, 0); 108 - if (!panel->link) { 109 - dev_err(panel->dev, "failed to link panel to %s\n", 110 - dev_name(connector->dev->dev)); 111 - return -EINVAL; 112 - } 113 - 114 108 panel->connector = connector; 115 109 panel->drm = connector->dev; 116 110 ··· 125 133 */ 126 134 int drm_panel_detach(struct drm_panel *panel) 127 135 { 128 - device_link_del(panel->link); 129 - 130 136 panel->connector = NULL; 131 137 panel->drm = NULL; 132 138
+5
drivers/gpu/drm/drm_syncobj.c
··· 97 97 { 98 98 int ret; 99 99 100 + WARN_ON(*fence); 101 + 100 102 *fence = drm_syncobj_fence_get(syncobj); 101 103 if (*fence) 102 104 return 1; ··· 745 743 746 744 if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) { 747 745 for (i = 0; i < count; ++i) { 746 + if (entries[i].fence) 747 + continue; 748 + 748 749 drm_syncobj_fence_get_or_add_callback(syncobjs[i], 749 750 &entries[i].fence, 750 751 &entries[i].syncobj_cb,
+21 -6
drivers/gpu/drm/etnaviv/etnaviv_drv.c
··· 592 592 struct device *dev = &pdev->dev; 593 593 struct component_match *match = NULL; 594 594 595 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 596 - 597 595 if (!dev->platform_data) { 598 596 struct device_node *core_node; 599 597 ··· 653 655 for_each_compatible_node(np, NULL, "vivante,gc") { 654 656 if (!of_device_is_available(np)) 655 657 continue; 656 - pdev = platform_device_register_simple("etnaviv", -1, 657 - NULL, 0); 658 - if (IS_ERR(pdev)) { 659 - ret = PTR_ERR(pdev); 658 + 659 + pdev = platform_device_alloc("etnaviv", -1); 660 + if (!pdev) { 661 + ret = -ENOMEM; 660 662 of_node_put(np); 661 663 goto unregister_platform_driver; 662 664 } 665 + pdev->dev.coherent_dma_mask = DMA_BIT_MASK(40); 666 + pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; 667 + 668 + /* 669 + * Apply the same DMA configuration to the virtual etnaviv 670 + * device as the GPU we found. This assumes that all Vivante 671 + * GPUs in the system share the same DMA constraints. 672 + */ 673 + of_dma_configure(&pdev->dev, np, true); 674 + 675 + ret = platform_device_add(pdev); 676 + if (ret) { 677 + platform_device_put(pdev); 678 + of_node_put(np); 679 + goto unregister_platform_driver; 680 + } 681 + 663 682 etnaviv_drm = pdev; 664 683 of_node_put(np); 665 684 break;
+1
drivers/gpu/drm/i915/gvt/handlers.c
··· 3210 3210 MMIO_D(BXT_DSI_PLL_ENABLE, D_BXT); 3211 3211 3212 3212 MMIO_D(GEN9_CLKGATE_DIS_0, D_BXT); 3213 + MMIO_D(GEN9_CLKGATE_DIS_4, D_BXT); 3213 3214 3214 3215 MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_A), D_BXT); 3215 3216 MMIO_D(HSW_TVIDEO_DIP_GCP(TRANSCODER_B), D_BXT);
+6 -1
drivers/gpu/drm/i915/gvt/kvmgt.c
··· 1833 1833 { 1834 1834 struct kvmgt_guest_info *info; 1835 1835 struct kvm *kvm; 1836 + int idx; 1837 + bool ret; 1836 1838 1837 1839 if (!handle_valid(handle)) 1838 1840 return false; ··· 1842 1840 info = (struct kvmgt_guest_info *)handle; 1843 1841 kvm = info->kvm; 1844 1842 1845 - return kvm_is_visible_gfn(kvm, gfn); 1843 + idx = srcu_read_lock(&kvm->srcu); 1844 + ret = kvm_is_visible_gfn(kvm, gfn); 1845 + srcu_read_unlock(&kvm->srcu, idx); 1846 1846 1847 + return ret; 1847 1848 } 1848 1849 1849 1850 struct intel_gvt_mpt kvmgt_mpt = {
+28
drivers/gpu/drm/i915/gvt/mmio.c
··· 244 244 245 245 /* set the bit 0:2(Core C-State ) to C0 */ 246 246 vgpu_vreg_t(vgpu, GEN6_GT_CORE_STATUS) = 0; 247 + 248 + if (IS_BROXTON(vgpu->gvt->dev_priv)) { 249 + vgpu_vreg_t(vgpu, BXT_P_CR_GT_DISP_PWRON) &= 250 + ~(BIT(0) | BIT(1)); 251 + vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY0)) &= 252 + ~PHY_POWER_GOOD; 253 + vgpu_vreg_t(vgpu, BXT_PORT_CL1CM_DW0(DPIO_PHY1)) &= 254 + ~PHY_POWER_GOOD; 255 + vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY0)) &= 256 + ~BIT(30); 257 + vgpu_vreg_t(vgpu, BXT_PHY_CTL_FAMILY(DPIO_PHY1)) &= 258 + ~BIT(30); 259 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) &= 260 + ~BXT_PHY_LANE_ENABLED; 261 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_A)) |= 262 + BXT_PHY_CMNLANE_POWERDOWN_ACK | 263 + BXT_PHY_LANE_POWERDOWN_ACK; 264 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) &= 265 + ~BXT_PHY_LANE_ENABLED; 266 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_B)) |= 267 + BXT_PHY_CMNLANE_POWERDOWN_ACK | 268 + BXT_PHY_LANE_POWERDOWN_ACK; 269 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) &= 270 + ~BXT_PHY_LANE_ENABLED; 271 + vgpu_vreg_t(vgpu, BXT_PHY_CTL(PORT_C)) |= 272 + BXT_PHY_CMNLANE_POWERDOWN_ACK | 273 + BXT_PHY_LANE_POWERDOWN_ACK; 274 + } 247 275 } else { 248 276 #define GVT_GEN8_MMIO_RESET_OFFSET (0x44200) 249 277 /* only reset the engine related, so starting with 0x44200
+1
drivers/gpu/drm/i915/gvt/vgpu.c
··· 281 281 intel_vgpu_clean_submission(vgpu); 282 282 intel_vgpu_clean_display(vgpu); 283 283 intel_vgpu_clean_opregion(vgpu); 284 + intel_vgpu_reset_ggtt(vgpu, true); 284 285 intel_vgpu_clean_gtt(vgpu); 285 286 intel_gvt_hypervisor_detach_vgpu(vgpu); 286 287 intel_vgpu_free_resource(vgpu);
+2 -1
drivers/gpu/drm/pl111/pl111_vexpress.c
··· 111 111 } 112 112 113 113 static const struct of_device_id vexpress_muxfpga_match[] = { 114 - { .compatible = "arm,vexpress-muxfpga", } 114 + { .compatible = "arm,vexpress-muxfpga", }, 115 + {} 115 116 }; 116 117 117 118 static struct platform_driver vexpress_muxfpga_driver = {
-1
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 418 418 { .compatible = "allwinner,sun8i-a33-display-engine" }, 419 419 { .compatible = "allwinner,sun8i-a83t-display-engine" }, 420 420 { .compatible = "allwinner,sun8i-h3-display-engine" }, 421 - { .compatible = "allwinner,sun8i-r40-display-engine" }, 422 421 { .compatible = "allwinner,sun8i-v3s-display-engine" }, 423 422 { .compatible = "allwinner,sun9i-a80-display-engine" }, 424 423 { }
-1
drivers/gpu/drm/sun4i/sun8i_hdmi_phy.c
··· 398 398 399 399 static const struct sun8i_hdmi_phy_variant sun50i_a64_hdmi_phy = { 400 400 .has_phy_clk = true, 401 - .has_second_pll = true, 402 401 .phy_init = &sun8i_hdmi_phy_init_h3, 403 402 .phy_disable = &sun8i_hdmi_phy_disable_h3, 404 403 .phy_config = &sun8i_hdmi_phy_config_h3,
-24
drivers/gpu/drm/sun4i/sun8i_mixer.c
··· 545 545 .vi_num = 1, 546 546 }; 547 547 548 - static const struct sun8i_mixer_cfg sun8i_r40_mixer0_cfg = { 549 - .ccsc = 0, 550 - .mod_rate = 297000000, 551 - .scaler_mask = 0xf, 552 - .ui_num = 3, 553 - .vi_num = 1, 554 - }; 555 - 556 - static const struct sun8i_mixer_cfg sun8i_r40_mixer1_cfg = { 557 - .ccsc = 1, 558 - .mod_rate = 297000000, 559 - .scaler_mask = 0x3, 560 - .ui_num = 1, 561 - .vi_num = 1, 562 - }; 563 - 564 548 static const struct sun8i_mixer_cfg sun8i_v3s_mixer_cfg = { 565 549 .vi_num = 2, 566 550 .ui_num = 1, ··· 565 581 { 566 582 .compatible = "allwinner,sun8i-h3-de2-mixer-0", 567 583 .data = &sun8i_h3_mixer0_cfg, 568 - }, 569 - { 570 - .compatible = "allwinner,sun8i-r40-de2-mixer-0", 571 - .data = &sun8i_r40_mixer0_cfg, 572 - }, 573 - { 574 - .compatible = "allwinner,sun8i-r40-de2-mixer-1", 575 - .data = &sun8i_r40_mixer1_cfg, 576 584 }, 577 585 { 578 586 .compatible = "allwinner,sun8i-v3s-de2-mixer",
-1
drivers/gpu/drm/sun4i/sun8i_tcon_top.c
··· 253 253 254 254 /* sun4i_drv uses this list to check if a device node is a TCON TOP */ 255 255 const struct of_device_id sun8i_tcon_top_of_table[] = { 256 - { .compatible = "allwinner,sun8i-r40-tcon-top" }, 257 256 { /* sentinel */ } 258 257 }; 259 258 MODULE_DEVICE_TABLE(of, sun8i_tcon_top_of_table);
+5 -3
drivers/gpu/drm/udl/udl_fb.c
··· 432 432 { 433 433 drm_fb_helper_unregister_fbi(&ufbdev->helper); 434 434 drm_fb_helper_fini(&ufbdev->helper); 435 - drm_framebuffer_unregister_private(&ufbdev->ufb.base); 436 - drm_framebuffer_cleanup(&ufbdev->ufb.base); 437 - drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base); 435 + if (ufbdev->ufb.obj) { 436 + drm_framebuffer_unregister_private(&ufbdev->ufb.base); 437 + drm_framebuffer_cleanup(&ufbdev->ufb.base); 438 + drm_gem_object_put_unlocked(&ufbdev->ufb.obj->base); 439 + } 438 440 } 439 441 440 442 int udl_fbdev_init(struct drm_device *dev)
+12 -13
drivers/gpu/drm/vc4/vc4_plane.c
··· 297 297 vc4_state->y_scaling[0] = vc4_get_scaling_mode(vc4_state->src_h[0],
298 298 vc4_state->crtc_h);
299 299 
300 + vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
301 + vc4_state->y_scaling[0] == VC4_SCALING_NONE);
302 + 
300 303 if (num_planes > 1) {
301 304 vc4_state->is_yuv = true;
302 305 
··· 315 312 vc4_get_scaling_mode(vc4_state->src_h[1],
316 313 vc4_state->crtc_h);
317 314 
318 - /* YUV conversion requires that scaling be enabled,
319 - * even on a plane that's otherwise 1:1. Choose TPZ
320 - * for simplicity.
315 + /* YUV conversion requires that horizontal scaling be enabled,
316 + * even on a plane that's otherwise 1:1. Looks like only PPF
317 + * works in that case, so let's pick that one.
321 318 */
322 - if (vc4_state->x_scaling[0] == VC4_SCALING_NONE)
323 - vc4_state->x_scaling[0] = VC4_SCALING_TPZ;
324 - if (vc4_state->y_scaling[0] == VC4_SCALING_NONE)
325 - vc4_state->y_scaling[0] = VC4_SCALING_TPZ;
319 + if (vc4_state->is_unity)
320 + vc4_state->x_scaling[0] = VC4_SCALING_PPF;
326 321 } else {
327 322 vc4_state->x_scaling[1] = VC4_SCALING_NONE;
328 323 vc4_state->y_scaling[1] = VC4_SCALING_NONE;
329 324 }
330 - 
331 - vc4_state->is_unity = (vc4_state->x_scaling[0] == VC4_SCALING_NONE &&
332 - vc4_state->y_scaling[0] == VC4_SCALING_NONE &&
333 - vc4_state->x_scaling[1] == VC4_SCALING_NONE &&
334 - vc4_state->y_scaling[1] == VC4_SCALING_NONE);
335 325 
336 326 /* No configuring scaling on the cursor plane, since it gets
337 327 non-vblank-synced updates, and scaling requires requires
··· 668 672 vc4_dlist_write(vc4_state, SCALER_CSC2_ITR_R_601_5);
669 673 }
670 674 
671 - if (!vc4_state->is_unity) {
675 + if (vc4_state->x_scaling[0] != VC4_SCALING_NONE ||
676 + vc4_state->x_scaling[1] != VC4_SCALING_NONE ||
677 + vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
678 + vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
672 679 /* LBM Base Address. */
673 680 if (vc4_state->y_scaling[0] != VC4_SCALING_NONE ||
674 681 vc4_state->y_scaling[1] != VC4_SCALING_NONE) {
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 3729 3729 { 3730 3730 struct vmw_buffer_object *vbo = 3731 3731 container_of(bo, struct vmw_buffer_object, base); 3732 - struct ttm_operation_ctx ctx = { interruptible, true }; 3732 + struct ttm_operation_ctx ctx = { interruptible, false }; 3733 3733 int ret; 3734 3734 3735 3735 if (vbo->pin_count > 0)
+30 -12
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 1512 1512 struct drm_rect *rects)
1513 1513 {
1514 1514 struct vmw_private *dev_priv = vmw_priv(dev);
1515 - struct drm_mode_config *mode_config = &dev->mode_config;
1516 1515 struct drm_rect bounding_box = {0};
1517 1516 u64 total_pixels = 0, pixel_mem, bb_mem;
1518 1517 int i;
1519 1518 
1520 1519 for (i = 0; i < num_rects; i++) {
1521 1520 /*
1522 - * Currently this check is limiting the topology within max
1523 - * texture/screentarget size. This should change in future when
1524 - * user-space support multiple fb with topology.
1521 + * For STDU only individual screen (screen target) is limited by
1522 + * SCREENTARGET_MAX_WIDTH/HEIGHT registers.
1525 1523 */
1526 - if (rects[i].x1 < 0 || rects[i].y1 < 0 ||
1527 - rects[i].x2 > mode_config->max_width ||
1528 - rects[i].y2 > mode_config->max_height) {
1529 - DRM_ERROR("Invalid GUI layout.\n");
1524 + if (dev_priv->active_display_unit == vmw_du_screen_target &&
1525 + (drm_rect_width(&rects[i]) > dev_priv->stdu_max_width ||
1526 + drm_rect_height(&rects[i]) > dev_priv->stdu_max_height)) {
1527 + DRM_ERROR("Screen size not supported.\n");
1530 1528 return -EINVAL;
1531 1529 }
1532 1530 
··· 1613 1615 struct drm_connector_state *conn_state;
1614 1616 struct vmw_connector_state *vmw_conn_state;
1615 1617 
1616 - if (!new_crtc_state->enable && old_crtc_state->enable) {
1618 + if (!new_crtc_state->enable) {
1617 1619 rects[i].x1 = 0;
1618 1620 rects[i].y1 = 0;
1619 1621 rects[i].x2 = 0;
··· 2214 2216 if (dev_priv->assume_16bpp)
2215 2217 assumed_bpp = 2;
2216 2218 
2219 + max_width = min(max_width, dev_priv->texture_max_width);
2220 + max_height = min(max_height, dev_priv->texture_max_height);
2221 + 
2222 + /*
2223 + * For STDU extra limit for a mode on SVGA_REG_SCREENTARGET_MAX_WIDTH/
2224 + * HEIGHT registers.
2225 + */ 2217 2226 if (dev_priv->active_display_unit == vmw_du_screen_target) { 2218 2227 max_width = min(max_width, dev_priv->stdu_max_width); 2219 - max_width = min(max_width, dev_priv->texture_max_width); 2220 - 2221 2228 max_height = min(max_height, dev_priv->stdu_max_height); 2222 - max_height = min(max_height, dev_priv->texture_max_height); 2223 2229 } 2224 2230 2225 2231 /* Add preferred mode */ ··· 2378 2376 struct drm_file *file_priv) 2379 2377 { 2380 2378 struct vmw_private *dev_priv = vmw_priv(dev); 2379 + struct drm_mode_config *mode_config = &dev->mode_config; 2381 2380 struct drm_vmw_update_layout_arg *arg = 2382 2381 (struct drm_vmw_update_layout_arg *)data; 2383 2382 void __user *user_rects; ··· 2424 2421 drm_rects[i].y1 = curr_rect.y; 2425 2422 drm_rects[i].x2 = curr_rect.x + curr_rect.w; 2426 2423 drm_rects[i].y2 = curr_rect.y + curr_rect.h; 2424 + 2425 + /* 2426 + * Currently this check is limiting the topology within 2427 + * mode_config->max (which actually is max texture size 2428 + * supported by virtual device). This limit is here to address 2429 + * window managers that create a big framebuffer for whole 2430 + * topology. 2431 + */ 2432 + if (drm_rects[i].x1 < 0 || drm_rects[i].y1 < 0 || 2433 + drm_rects[i].x2 > mode_config->max_width || 2434 + drm_rects[i].y2 > mode_config->max_height) { 2435 + DRM_ERROR("Invalid GUI layout.\n"); 2436 + ret = -EINVAL; 2437 + goto out_free; 2438 + } 2427 2439 } 2428 2440 2429 2441 ret = vmw_kms_check_display_memory(dev, arg->num_outputs, drm_rects);
-25
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
··· 1600 1600 1601 1601 dev_priv->active_display_unit = vmw_du_screen_target; 1602 1602 1603 - if (dev_priv->capabilities & SVGA_CAP_3D) { 1604 - /* 1605 - * For 3D VMs, display (scanout) buffer size is the smaller of 1606 - * max texture and max STDU 1607 - */ 1608 - uint32_t max_width, max_height; 1609 - 1610 - max_width = min(dev_priv->texture_max_width, 1611 - dev_priv->stdu_max_width); 1612 - max_height = min(dev_priv->texture_max_height, 1613 - dev_priv->stdu_max_height); 1614 - 1615 - dev->mode_config.max_width = max_width; 1616 - dev->mode_config.max_height = max_height; 1617 - } else { 1618 - /* 1619 - * Given various display aspect ratios, there's no way to 1620 - * estimate these using prim_bb_mem. So just set these to 1621 - * something arbitrarily large and we will reject any layout 1622 - * that doesn't fit prim_bb_mem later 1623 - */ 1624 - dev->mode_config.max_width = 8192; 1625 - dev->mode_config.max_height = 8192; 1626 - } 1627 - 1628 1603 vmw_kms_create_implicit_placement_property(dev_priv, false); 1629 1604 1630 1605 for (i = 0; i < VMWGFX_NUM_DISPLAY_UNITS; ++i) {
+14 -10
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 1404 1404 *srf_out = NULL; 1405 1405 1406 1406 if (for_scanout) { 1407 - uint32_t max_width, max_height; 1408 - 1409 1407 if (!svga3dsurface_is_screen_target_format(format)) { 1410 1408 DRM_ERROR("Invalid Screen Target surface format."); 1411 1409 return -EINVAL; 1412 1410 } 1413 1411 1414 - max_width = min(dev_priv->texture_max_width, 1415 - dev_priv->stdu_max_width); 1416 - max_height = min(dev_priv->texture_max_height, 1417 - dev_priv->stdu_max_height); 1418 - 1419 - if (size.width > max_width || size.height > max_height) { 1412 + if (size.width > dev_priv->texture_max_width || 1413 + size.height > dev_priv->texture_max_height) { 1420 1414 DRM_ERROR("%ux%u\n, exceeds max surface size %ux%u", 1421 1415 size.width, size.height, 1422 - max_width, max_height); 1416 + dev_priv->texture_max_width, 1417 + dev_priv->texture_max_height); 1423 1418 return -EINVAL; 1424 1419 } 1425 1420 } else { ··· 1490 1495 if (srf->flags & SVGA3D_SURFACE_BIND_STREAM_OUTPUT) 1491 1496 srf->res.backup_size += sizeof(SVGA3dDXSOState); 1492 1497 1498 + /* 1499 + * Don't set SVGA3D_SURFACE_SCREENTARGET flag for a scanout surface with 1500 + * size greater than STDU max width/height. This is really a workaround 1501 + * to support creation of big framebuffer requested by some user-space 1502 + * for whole topology. That big framebuffer won't really be used for 1503 + * binding with screen target as during prepare_fb a separate surface is 1504 + * created so it's safe to ignore SVGA3D_SURFACE_SCREENTARGET flag. 1505 + */ 1493 1506 if (dev_priv->active_display_unit == vmw_du_screen_target && 1494 - for_scanout) 1507 + for_scanout && size.width <= dev_priv->stdu_max_width && 1508 + size.height <= dev_priv->stdu_max_height) 1495 1509 srf->flags |= SVGA3D_SURFACE_SCREENTARGET; 1496 1510 1497 1511 /*
+2
drivers/gpu/vga/vga_switcheroo.c
··· 215 215 return; 216 216 217 217 client->id = ret | ID_BIT_AUDIO; 218 + if (client->ops->gpu_bound) 219 + client->ops->gpu_bound(client->pdev, ret); 218 220 } 219 221 220 222 vga_switcheroo_debugfs_init(&vgasr_priv);
+49 -23
drivers/hwmon/nct6775.c
··· 207 207 
208 208 #define NUM_FAN 7
209 209 
210 - #define TEMP_SOURCE_VIRTUAL 0x1f
211 - 
212 210 /* Common and NCT6775 specific data */
213 211 
214 212 /* Voltage min/max registers for nr=7..14 are in bank 5 */
··· 297 299 
298 300 static const u16 NCT6775_REG_FAN[] = { 0x630, 0x632, 0x634, 0x636, 0x638 };
299 301 static const u16 NCT6775_REG_FAN_MIN[] = { 0x3b, 0x3c, 0x3d };
300 - static const u16 NCT6775_REG_FAN_PULSES[] = { 0x641, 0x642, 0x643, 0x644, 0 };
301 - static const u16 NCT6775_FAN_PULSE_SHIFT[] = { 0, 0, 0, 0, 0, 0 };
302 + static const u16 NCT6775_REG_FAN_PULSES[NUM_FAN] = {
303 + 0x641, 0x642, 0x643, 0x644 };
304 + static const u16 NCT6775_FAN_PULSE_SHIFT[NUM_FAN] = { };
302 305 
303 306 static const u16 NCT6775_REG_TEMP[] = {
304 307 0x27, 0x150, 0x250, 0x62b, 0x62c, 0x62d };
··· 372 373 };
373 374 
374 375 #define NCT6775_TEMP_MASK 0x001ffffe
376 + #define NCT6775_VIRT_TEMP_MASK 0x00000000
375 377 
376 378 static const u16 NCT6775_REG_TEMP_ALTERNATE[32] = {
377 379 [13] = 0x661,
··· 425 425 
426 426 static const u16 NCT6776_REG_FAN_MIN[] = {
427 427 0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
428 - static const u16 NCT6776_REG_FAN_PULSES[] = {
429 - 0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
428 + static const u16 NCT6776_REG_FAN_PULSES[NUM_FAN] = {
429 + 0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
430 430 
431 431 static const u16 NCT6776_REG_WEIGHT_DUTY_BASE[] = {
432 432 0x13e, 0x23e, 0x33e, 0x83e, 0x93e, 0xa3e };
··· 461 461 };
462 462 
463 463 #define NCT6776_TEMP_MASK 0x007ffffe
464 + #define NCT6776_VIRT_TEMP_MASK 0x00000000
464 465 
465 466 static const u16 NCT6776_REG_TEMP_ALTERNATE[32] = {
466 467 [14] = 0x401,
··· 502 501 30, 31 }; /* intrusion0, intrusion1 */
503 502 
504 503 static const u16 NCT6779_REG_FAN[] = {
505 - 0x4b0, 0x4b2, 0x4b4, 0x4b6, 0x4b8, 0x4ba, 0x660 };
506 - static const u16 NCT6779_REG_FAN_PULSES[] = {
507 - 0x644, 0x645, 0x646, 0x647, 0x648, 0x649, 0 };
504 + 0x4c0, 0x4c2, 0x4c4, 0x4c6, 0x4c8, 0x4ca, 0x4ce };
505 + static const u16 NCT6779_REG_FAN_PULSES[NUM_FAN] = {
506 + 0x644, 0x645, 0x646, 0x647, 0x648, 0x649 };
508 507 
509 508 static const u16 NCT6779_REG_CRITICAL_PWM_ENABLE[] = {
510 509 0x136, 0x236, 0x336, 0x836, 0x936, 0xa36, 0xb36 };
··· 560 559 };
561 560 
562 561 #define NCT6779_TEMP_MASK 0x07ffff7e
562 + #define NCT6779_VIRT_TEMP_MASK 0x00000000
563 563 #define NCT6791_TEMP_MASK 0x87ffff7e
564 + #define NCT6791_VIRT_TEMP_MASK 0x80000000
564 565 
565 566 static const u16 NCT6779_REG_TEMP_ALTERNATE[32]
566 567 = { 0x490, 0x491, 0x492, 0x493, 0x494, 0x495, 0, 0,
··· 641 638 };
642 639 
643 640 #define NCT6792_TEMP_MASK 0x9fffff7e
641 + #define NCT6792_VIRT_TEMP_MASK 0x80000000
644 642 
645 643 static const char *const nct6793_temp_label[] = {
646 644 "",
··· 679 675 };
680 676 
681 677 #define NCT6793_TEMP_MASK 0xbfff037e
678 + #define NCT6793_VIRT_TEMP_MASK 0x80000000
682 679 
683 680 static const char *const nct6795_temp_label[] = {
684 681 "",
··· 717 712 };
718 713 
719 714 #define NCT6795_TEMP_MASK 0xbfffff7e
715 + #define NCT6795_VIRT_TEMP_MASK 0x80000000
720 716 
721 717 static const char *const nct6796_temp_label[] = {
722 718 "",
··· 730 724 "AUXTIN4",
731 725 "SMBUSMASTER 0",
732 726 "SMBUSMASTER 1",
733 - "",
734 - "",
727 + "Virtual_TEMP",
728 + "Virtual_TEMP",
735 729 "",
736 730 "",
737 731 "",
··· 754 748 "Virtual_TEMP"
755 749 };
756 750 
757 - #define NCT6796_TEMP_MASK 0xbfff03fe
751 + #define NCT6796_TEMP_MASK 0xbfff0ffe
752 + #define NCT6796_VIRT_TEMP_MASK 0x80000c00
758 753 
759 754 /* NCT6102D/NCT6106D specific data */
760 755 
··· 786 779 
787 780 static const u16 NCT6106_REG_FAN[] = { 0x20, 0x22, 0x24 };
788 781 static const u16 NCT6106_REG_FAN_MIN[] = { 0xe0, 0xe2, 0xe4 };
789 - static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6, 0, 0 };
790 - static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4, 0, 0 };
782 + static const u16 NCT6106_REG_FAN_PULSES[] = { 0xf6, 0xf6, 0xf6 };
783 + static const u16 NCT6106_FAN_PULSE_SHIFT[] = { 0, 2, 4 };
791 784 792 785 static const u8 NCT6106_REG_PWM_MODE[] = { 0xf3, 0xf3, 0xf3 }; 793 786 static const u8 NCT6106_PWM_MODE_MASK[] = { 0x01, 0x02, 0x04 }; ··· 924 917 return 1350000U / (reg << divreg); 925 918 } 926 919 920 + static unsigned int fan_from_reg_rpm(u16 reg, unsigned int divreg) 921 + { 922 + return reg; 923 + } 924 + 927 925 static u16 fan_to_reg(u32 fan, unsigned int divreg) 928 926 { 929 927 if (!fan) ··· 981 969 u16 reg_temp_config[NUM_TEMP]; 982 970 const char * const *temp_label; 983 971 u32 temp_mask; 972 + u32 virt_temp_mask; 984 973 985 974 u16 REG_CONFIG; 986 975 u16 REG_VBAT; ··· 1289 1276 case nct6795: 1290 1277 case nct6796: 1291 1278 return reg == 0x150 || reg == 0x153 || reg == 0x155 || 1292 - ((reg & 0xfff0) == 0x4b0 && (reg & 0x000f) < 0x0b) || 1279 + (reg & 0xfff0) == 0x4c0 || 1293 1280 reg == 0x402 || 1294 1281 reg == 0x63a || reg == 0x63c || reg == 0x63e || 1295 1282 reg == 0x640 || reg == 0x642 || reg == 0x64a || 1296 - reg == 0x64c || reg == 0x660 || 1283 + reg == 0x64c || 1297 1284 reg == 0x73 || reg == 0x75 || reg == 0x77 || reg == 0x79 || 1298 1285 reg == 0x7b || reg == 0x7d; 1299 1286 } ··· 1571 1558 reg = nct6775_read_value(data, data->REG_WEIGHT_TEMP_SEL[i]); 1572 1559 data->pwm_weight_temp_sel[i] = reg & 0x1f; 1573 1560 /* If weight is disabled, report weight source as 0 */ 1574 - if (j == 1 && !(reg & 0x80)) 1561 + if (!(reg & 0x80)) 1575 1562 data->pwm_weight_temp_sel[i] = 0; 1576 1563 1577 1564 /* Weight temp data */ ··· 1695 1682 if (data->has_fan_min & BIT(i)) 1696 1683 data->fan_min[i] = nct6775_read_value(data, 1697 1684 data->REG_FAN_MIN[i]); 1698 - data->fan_pulses[i] = 1699 - (nct6775_read_value(data, data->REG_FAN_PULSES[i]) 1700 - >> data->FAN_PULSE_SHIFT[i]) & 0x03; 1685 + 1686 + if (data->REG_FAN_PULSES[i]) { 1687 + data->fan_pulses[i] = 1688 + (nct6775_read_value(data, 1689 + data->REG_FAN_PULSES[i]) 1690 + >> data->FAN_PULSE_SHIFT[i]) & 0x03; 1691 + } 1701 1692 1702 1693 nct6775_select_fan_div(dev, data, i, 
reg); 1703 1694 } ··· 3656 3639 3657 3640 data->temp_label = nct6776_temp_label; 3658 3641 data->temp_mask = NCT6776_TEMP_MASK; 3642 + data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK; 3659 3643 3660 3644 data->REG_VBAT = NCT6106_REG_VBAT; 3661 3645 data->REG_DIODE = NCT6106_REG_DIODE; ··· 3735 3717 3736 3718 data->temp_label = nct6775_temp_label; 3737 3719 data->temp_mask = NCT6775_TEMP_MASK; 3720 + data->virt_temp_mask = NCT6775_VIRT_TEMP_MASK; 3738 3721 3739 3722 data->REG_CONFIG = NCT6775_REG_CONFIG; 3740 3723 data->REG_VBAT = NCT6775_REG_VBAT; ··· 3808 3789 3809 3790 data->temp_label = nct6776_temp_label; 3810 3791 data->temp_mask = NCT6776_TEMP_MASK; 3792 + data->virt_temp_mask = NCT6776_VIRT_TEMP_MASK; 3811 3793 3812 3794 data->REG_CONFIG = NCT6775_REG_CONFIG; 3813 3795 data->REG_VBAT = NCT6775_REG_VBAT; ··· 3873 3853 data->ALARM_BITS = NCT6779_ALARM_BITS; 3874 3854 data->BEEP_BITS = NCT6779_BEEP_BITS; 3875 3855 3876 - data->fan_from_reg = fan_from_reg13; 3856 + data->fan_from_reg = fan_from_reg_rpm; 3877 3857 data->fan_from_reg_min = fan_from_reg13; 3878 3858 data->target_temp_mask = 0xff; 3879 3859 data->tolerance_mask = 0x07; ··· 3881 3861 3882 3862 data->temp_label = nct6779_temp_label; 3883 3863 data->temp_mask = NCT6779_TEMP_MASK; 3864 + data->virt_temp_mask = NCT6779_VIRT_TEMP_MASK; 3884 3865 3885 3866 data->REG_CONFIG = NCT6775_REG_CONFIG; 3886 3867 data->REG_VBAT = NCT6775_REG_VBAT; ··· 3954 3933 data->ALARM_BITS = NCT6791_ALARM_BITS; 3955 3934 data->BEEP_BITS = NCT6779_BEEP_BITS; 3956 3935 3957 - data->fan_from_reg = fan_from_reg13; 3936 + data->fan_from_reg = fan_from_reg_rpm; 3958 3937 data->fan_from_reg_min = fan_from_reg13; 3959 3938 data->target_temp_mask = 0xff; 3960 3939 data->tolerance_mask = 0x07; ··· 3965 3944 case nct6791: 3966 3945 data->temp_label = nct6779_temp_label; 3967 3946 data->temp_mask = NCT6791_TEMP_MASK; 3947 + data->virt_temp_mask = NCT6791_VIRT_TEMP_MASK; 3968 3948 break; 3969 3949 case nct6792: 3970 3950 data->temp_label = 
nct6792_temp_label; 3971 3951 data->temp_mask = NCT6792_TEMP_MASK; 3952 + data->virt_temp_mask = NCT6792_VIRT_TEMP_MASK; 3972 3953 break; 3973 3954 case nct6793: 3974 3955 data->temp_label = nct6793_temp_label; 3975 3956 data->temp_mask = NCT6793_TEMP_MASK; 3957 + data->virt_temp_mask = NCT6793_VIRT_TEMP_MASK; 3976 3958 break; 3977 3959 case nct6795: 3978 3960 data->temp_label = nct6795_temp_label; 3979 3961 data->temp_mask = NCT6795_TEMP_MASK; 3962 + data->virt_temp_mask = NCT6795_VIRT_TEMP_MASK; 3980 3963 break; 3981 3964 case nct6796: 3982 3965 data->temp_label = nct6796_temp_label; 3983 3966 data->temp_mask = NCT6796_TEMP_MASK; 3967 + data->virt_temp_mask = NCT6796_VIRT_TEMP_MASK; 3984 3968 break; 3985 3969 } 3986 3970 ··· 4169 4143 * for each fan reflects a different temperature, and there 4170 4144 * are no duplicates. 4171 4145 */ 4172 - if (src != TEMP_SOURCE_VIRTUAL) { 4146 + if (!(data->virt_temp_mask & BIT(src))) { 4173 4147 if (mask & BIT(src)) 4174 4148 continue; 4175 4149 mask |= BIT(src);
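The last hunk above replaces the fixed `TEMP_SOURCE_VIRTUAL` sentinel with the new per-chip `virt_temp_mask`, so any number of source indices can be flagged as virtual temperatures, which are allowed to repeat across fans. A minimal sketch of that check, mirroring the NCT6796D mask above (bits 10, 11 and 31 of `0x80000c00`); the helper name is made up for illustration:

```c
#include <stdint.h>

#define BIT(n) (1UL << (n))

/* Per-chip mask: bits 10, 11 and 31 mark virtual temperature
 * sources; everything else is a physical sensor. */
static const uint32_t demo_virt_temp_mask = BIT(10) | BIT(11) | BIT(31);

/* A source must be deduplicated only if it is not virtual. */
static int src_needs_dedup(uint32_t virt_temp_mask, unsigned int src)
{
    return !(virt_temp_mask & BIT(src));
}
```

With one mask per chip variant, the NCT6796D's extra `Virtual_TEMP` inputs fall out of the same test that older chips used for a single sentinel index.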
+13 -3
drivers/hwtracing/intel_th/core.c
··· 139 139 th->thdev[i] = NULL; 140 140 } 141 141 142 - th->num_thdevs = lowest; 142 + if (lowest >= 0) 143 + th->num_thdevs = lowest; 143 144 } 144 145 145 146 if (thdrv->attr_group) ··· 488 487 .flags = IORESOURCE_MEM, 489 488 }, 490 489 { 491 - .start = TH_MMIO_SW, 490 + .start = 1, /* use resource[1] */ 492 491 .end = 0, 493 492 .flags = IORESOURCE_MEM, 494 493 }, ··· 581 580 struct intel_th_device *thdev; 582 581 struct resource res[3]; 583 582 unsigned int req = 0; 583 + bool is64bit = false; 584 584 int r, err; 585 585 586 586 thdev = intel_th_device_alloc(th, subdev->type, subdev->name, ··· 591 589 592 590 thdev->drvdata = th->drvdata; 593 591 592 + for (r = 0; r < th->num_resources; r++) 593 + if (th->resource[r].flags & IORESOURCE_MEM_64) { 594 + is64bit = true; 595 + break; 596 + } 597 + 594 598 memcpy(res, subdev->res, 595 599 sizeof(struct resource) * subdev->nres); 596 600 597 601 for (r = 0; r < subdev->nres; r++) { 598 602 struct resource *devres = th->resource; 599 - int bar = TH_MMIO_CONFIG; 603 + int bar = 0; /* cut subdevices' MMIO from resource[0] */ 600 604 601 605 /* 602 606 * Take .end == 0 to mean 'take the whole bar', ··· 611 603 */ 612 604 if (!res[r].end && res[r].flags == IORESOURCE_MEM) { 613 605 bar = res[r].start; 606 + if (is64bit) 607 + bar *= 2; 614 608 res[r].start = 0; 615 609 res[r].end = resource_size(&devres[bar]) - 1; 616 610 }
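The `bar *= 2` in the hunk above accounts for 64-bit MMIO: a 64-bit MEM BAR consumes two consecutive entries in the PCI resource table, so logical BAR n sits at resource index 2n. A tiny sketch of that index translation (hypothetical helper name, not part of the driver):

```c
/* Map a logical BAR number to its index in a PCI resource array.
 * 64-bit MEM BARs take two slots each, so the index doubles. */
static int bar_to_resource(int bar, int is64bit)
{
    return is64bit ? bar * 2 : bar;
}
```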
+5
drivers/hwtracing/intel_th/pci.c
··· 160 160 PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x18e1), 161 161 .driver_data = (kernel_ulong_t)&intel_th_2x, 162 162 }, 163 + { 164 + /* Ice Lake PCH */ 165 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6), 166 + .driver_data = (kernel_ulong_t)&intel_th_2x, 167 + }, 163 168 { 0 }, 164 169 }; 165 170
+49 -49
drivers/infiniband/core/cache.c
··· 338 338 } 339 339 340 340 /** 341 - * add_modify_gid - Add or modify GID table entry 342 - * 343 - * @table: GID table in which GID to be added or modified 344 - * @attr: Attributes of the GID 345 - * 346 - * Returns 0 on success or appropriate error code. It accepts zero 347 - * GID addition for non RoCE ports for HCA's who report them as valid 348 - * GID. However such zero GIDs are not added to the cache. 349 - */ 350 - static int add_modify_gid(struct ib_gid_table *table, 351 - const struct ib_gid_attr *attr) 352 - { 353 - struct ib_gid_table_entry *entry; 354 - int ret = 0; 355 - 356 - /* 357 - * Invalidate any old entry in the table to make it safe to write to 358 - * this index. 359 - */ 360 - if (is_gid_entry_valid(table->data_vec[attr->index])) 361 - put_gid_entry(table->data_vec[attr->index]); 362 - 363 - /* 364 - * Some HCA's report multiple GID entries with only one valid GID, and 365 - * leave other unused entries as the zero GID. Convert zero GIDs to 366 - * empty table entries instead of storing them. 367 - */ 368 - if (rdma_is_zero_gid(&attr->gid)) 369 - return 0; 370 - 371 - entry = alloc_gid_entry(attr); 372 - if (!entry) 373 - return -ENOMEM; 374 - 375 - if (rdma_protocol_roce(attr->device, attr->port_num)) { 376 - ret = add_roce_gid(entry); 377 - if (ret) 378 - goto done; 379 - } 380 - 381 - store_gid_entry(table, entry); 382 - return 0; 383 - 384 - done: 385 - put_gid_entry(entry); 386 - return ret; 387 - } 388 - 389 - /** 390 341 * del_gid - Delete GID table entry 391 342 * 392 343 * @ib_dev: IB device whose GID entry to be deleted ··· 368 417 write_unlock_irq(&table->rwlock); 369 418 370 419 put_gid_entry_locked(entry); 420 + } 421 + 422 + /** 423 + * add_modify_gid - Add or modify GID table entry 424 + * 425 + * @table: GID table in which GID to be added or modified 426 + * @attr: Attributes of the GID 427 + * 428 + * Returns 0 on success or appropriate error code. 
It accepts zero 429 + * GID addition for non RoCE ports for HCA's who report them as valid 430 + * GID. However such zero GIDs are not added to the cache. 431 + */ 432 + static int add_modify_gid(struct ib_gid_table *table, 433 + const struct ib_gid_attr *attr) 434 + { 435 + struct ib_gid_table_entry *entry; 436 + int ret = 0; 437 + 438 + /* 439 + * Invalidate any old entry in the table to make it safe to write to 440 + * this index. 441 + */ 442 + if (is_gid_entry_valid(table->data_vec[attr->index])) 443 + del_gid(attr->device, attr->port_num, table, attr->index); 444 + 445 + /* 446 + * Some HCA's report multiple GID entries with only one valid GID, and 447 + * leave other unused entries as the zero GID. Convert zero GIDs to 448 + * empty table entries instead of storing them. 449 + */ 450 + if (rdma_is_zero_gid(&attr->gid)) 451 + return 0; 452 + 453 + entry = alloc_gid_entry(attr); 454 + if (!entry) 455 + return -ENOMEM; 456 + 457 + if (rdma_protocol_roce(attr->device, attr->port_num)) { 458 + ret = add_roce_gid(entry); 459 + if (ret) 460 + goto done; 461 + } 462 + 463 + store_gid_entry(table, entry); 464 + return 0; 465 + 466 + done: 467 + put_gid_entry(entry); 468 + return ret; 371 469 } 372 470 373 471 /* rwlock should be read locked, or lock should be held */
+2
drivers/infiniband/core/ucma.c
··· 1759 1759 mutex_lock(&mut); 1760 1760 if (!ctx->closing) { 1761 1761 mutex_unlock(&mut); 1762 + ucma_put_ctx(ctx); 1763 + wait_for_completion(&ctx->comp); 1762 1764 /* rdma_destroy_id ensures that no event handlers are 1763 1765 * inflight for that id before releasing it. 1764 1766 */
+45 -23
drivers/infiniband/core/uverbs_cmd.c
··· 2027 2027 2028 2028 if ((cmd->base.attr_mask & IB_QP_CUR_STATE && 2029 2029 cmd->base.cur_qp_state > IB_QPS_ERR) || 2030 - cmd->base.qp_state > IB_QPS_ERR) { 2030 + (cmd->base.attr_mask & IB_QP_STATE && 2031 + cmd->base.qp_state > IB_QPS_ERR)) { 2031 2032 ret = -EINVAL; 2032 2033 goto release_qp; 2033 2034 } 2034 2035 2035 - attr->qp_state = cmd->base.qp_state; 2036 - attr->cur_qp_state = cmd->base.cur_qp_state; 2037 - attr->path_mtu = cmd->base.path_mtu; 2038 - attr->path_mig_state = cmd->base.path_mig_state; 2039 - attr->qkey = cmd->base.qkey; 2040 - attr->rq_psn = cmd->base.rq_psn; 2041 - attr->sq_psn = cmd->base.sq_psn; 2042 - attr->dest_qp_num = cmd->base.dest_qp_num; 2043 - attr->qp_access_flags = cmd->base.qp_access_flags; 2044 - attr->pkey_index = cmd->base.pkey_index; 2045 - attr->alt_pkey_index = cmd->base.alt_pkey_index; 2046 - attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify; 2047 - attr->max_rd_atomic = cmd->base.max_rd_atomic; 2048 - attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic; 2049 - attr->min_rnr_timer = cmd->base.min_rnr_timer; 2050 - attr->port_num = cmd->base.port_num; 2051 - attr->timeout = cmd->base.timeout; 2052 - attr->retry_cnt = cmd->base.retry_cnt; 2053 - attr->rnr_retry = cmd->base.rnr_retry; 2054 - attr->alt_port_num = cmd->base.alt_port_num; 2055 - attr->alt_timeout = cmd->base.alt_timeout; 2056 - attr->rate_limit = cmd->rate_limit; 2036 + if (cmd->base.attr_mask & IB_QP_STATE) 2037 + attr->qp_state = cmd->base.qp_state; 2038 + if (cmd->base.attr_mask & IB_QP_CUR_STATE) 2039 + attr->cur_qp_state = cmd->base.cur_qp_state; 2040 + if (cmd->base.attr_mask & IB_QP_PATH_MTU) 2041 + attr->path_mtu = cmd->base.path_mtu; 2042 + if (cmd->base.attr_mask & IB_QP_PATH_MIG_STATE) 2043 + attr->path_mig_state = cmd->base.path_mig_state; 2044 + if (cmd->base.attr_mask & IB_QP_QKEY) 2045 + attr->qkey = cmd->base.qkey; 2046 + if (cmd->base.attr_mask & IB_QP_RQ_PSN) 2047 + attr->rq_psn = cmd->base.rq_psn; 2048 + if 
(cmd->base.attr_mask & IB_QP_SQ_PSN) 2049 + attr->sq_psn = cmd->base.sq_psn; 2050 + if (cmd->base.attr_mask & IB_QP_DEST_QPN) 2051 + attr->dest_qp_num = cmd->base.dest_qp_num; 2052 + if (cmd->base.attr_mask & IB_QP_ACCESS_FLAGS) 2053 + attr->qp_access_flags = cmd->base.qp_access_flags; 2054 + if (cmd->base.attr_mask & IB_QP_PKEY_INDEX) 2055 + attr->pkey_index = cmd->base.pkey_index; 2056 + if (cmd->base.attr_mask & IB_QP_EN_SQD_ASYNC_NOTIFY) 2057 + attr->en_sqd_async_notify = cmd->base.en_sqd_async_notify; 2058 + if (cmd->base.attr_mask & IB_QP_MAX_QP_RD_ATOMIC) 2059 + attr->max_rd_atomic = cmd->base.max_rd_atomic; 2060 + if (cmd->base.attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) 2061 + attr->max_dest_rd_atomic = cmd->base.max_dest_rd_atomic; 2062 + if (cmd->base.attr_mask & IB_QP_MIN_RNR_TIMER) 2063 + attr->min_rnr_timer = cmd->base.min_rnr_timer; 2064 + if (cmd->base.attr_mask & IB_QP_PORT) 2065 + attr->port_num = cmd->base.port_num; 2066 + if (cmd->base.attr_mask & IB_QP_TIMEOUT) 2067 + attr->timeout = cmd->base.timeout; 2068 + if (cmd->base.attr_mask & IB_QP_RETRY_CNT) 2069 + attr->retry_cnt = cmd->base.retry_cnt; 2070 + if (cmd->base.attr_mask & IB_QP_RNR_RETRY) 2071 + attr->rnr_retry = cmd->base.rnr_retry; 2072 + if (cmd->base.attr_mask & IB_QP_ALT_PATH) { 2073 + attr->alt_port_num = cmd->base.alt_port_num; 2074 + attr->alt_timeout = cmd->base.alt_timeout; 2075 + attr->alt_pkey_index = cmd->base.alt_pkey_index; 2076 + } 2077 + if (cmd->base.attr_mask & IB_QP_RATE_LIMIT) 2078 + attr->rate_limit = cmd->rate_limit; 2057 2079 2058 2080 if (cmd->base.attr_mask & IB_QP_AV) 2059 2081 copy_ah_attr_from_uverbs(qp->device, &attr->ah_attr,
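The rewritten block above copies an attribute only when its flag is set in `attr_mask`; before the fix, stale or uninitialized values from the user command could overwrite fields the caller never asked to modify. The pattern, reduced to two hypothetical fields (not the real `ib_qp_attr` layout):

```c
#include <stdint.h>

#define DEMO_ATTR_STATE   (1u << 0)
#define DEMO_ATTR_TIMEOUT (1u << 1)

struct demo_cmd  { uint32_t attr_mask; int state; int timeout; };
struct demo_attr { int state; int timeout; };

/* Copy only the fields flagged in attr_mask; the rest of *attr is
 * left exactly as the caller initialized it. */
static void apply_masked(const struct demo_cmd *cmd, struct demo_attr *attr)
{
    if (cmd->attr_mask & DEMO_ATTR_STATE)
        attr->state = cmd->state;
    if (cmd->attr_mask & DEMO_ATTR_TIMEOUT)
        attr->timeout = cmd->timeout;
}
```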
+1
drivers/infiniband/core/uverbs_main.c
··· 440 440 list_del(&entry->obj_list); 441 441 kfree(entry); 442 442 } 443 + file->ev_queue.is_closed = 1; 443 444 spin_unlock_irq(&file->ev_queue.lock); 444 445 445 446 uverbs_close_fd(filp);
+1
drivers/infiniband/core/uverbs_uapi.c
··· 248 248 kfree(rcu_dereference_protected(*slot, true)); 249 249 radix_tree_iter_delete(&uapi->radix, &iter, slot); 250 250 } 251 + kfree(uapi); 251 252 } 252 253 253 254 struct uverbs_api *uverbs_alloc_api(
+38 -55
drivers/infiniband/hw/bnxt_re/main.c
··· 78 78 /* Mutex to protect the list of bnxt_re devices added */ 79 79 static DEFINE_MUTEX(bnxt_re_dev_lock); 80 80 static struct workqueue_struct *bnxt_re_wq; 81 - static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait); 81 + static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev); 82 82 83 83 /* SR-IOV helper functions */ 84 84 ··· 182 182 if (!rdev) 183 183 return; 184 184 185 - bnxt_re_ib_unreg(rdev, false); 185 + bnxt_re_ib_unreg(rdev); 186 186 } 187 187 188 188 static void bnxt_re_stop_irq(void *handle) ··· 251 251 /* Driver registration routines used to let the networking driver (bnxt_en) 252 252 * to know that the RoCE driver is now installed 253 253 */ 254 - static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait) 254 + static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev) 255 255 { 256 256 struct bnxt_en_dev *en_dev; 257 257 int rc; ··· 260 260 return -EINVAL; 261 261 262 262 en_dev = rdev->en_dev; 263 - /* Acquire rtnl lock if it is not invokded from netdev event */ 264 - if (lock_wait) 265 - rtnl_lock(); 266 263 267 264 rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev, 268 265 BNXT_ROCE_ULP); 269 - if (lock_wait) 270 - rtnl_unlock(); 271 266 return rc; 272 267 } 273 268 ··· 276 281 277 282 en_dev = rdev->en_dev; 278 283 279 - rtnl_lock(); 280 284 rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP, 281 285 &bnxt_re_ulp_ops, rdev); 282 - rtnl_unlock(); 283 286 return rc; 284 287 } 285 288 286 - static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait) 289 + static int bnxt_re_free_msix(struct bnxt_re_dev *rdev) 287 290 { 288 291 struct bnxt_en_dev *en_dev; 289 292 int rc; ··· 291 298 292 299 en_dev = rdev->en_dev; 293 300 294 - if (lock_wait) 295 - rtnl_lock(); 296 301 297 302 rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP); 298 303 299 - if (lock_wait) 300 - rtnl_unlock(); 301 304 return rc; 302 305 } 303 306 ··· 309 320 310 321 num_msix_want = min_t(u32, 
BNXT_RE_MAX_MSIX, num_online_cpus()); 311 322 312 - rtnl_lock(); 313 323 num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP, 314 324 rdev->msix_entries, 315 325 num_msix_want); ··· 323 335 } 324 336 rdev->num_msix = num_msix_got; 325 337 done: 326 - rtnl_unlock(); 327 338 return rc; 328 339 } 329 340 ··· 345 358 fw_msg->timeout = timeout; 346 359 } 347 360 348 - static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id, 349 - bool lock_wait) 361 + static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id) 350 362 { 351 363 struct bnxt_en_dev *en_dev = rdev->en_dev; 352 364 struct hwrm_ring_free_input req = {0}; 353 365 struct hwrm_ring_free_output resp; 354 366 struct bnxt_fw_msg fw_msg; 355 - bool do_unlock = false; 356 367 int rc = -EINVAL; 357 368 358 369 if (!en_dev) 359 370 return rc; 360 371 361 372 memset(&fw_msg, 0, sizeof(fw_msg)); 362 - if (lock_wait) { 363 - rtnl_lock(); 364 - do_unlock = true; 365 - } 366 373 367 374 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1); 368 375 req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL; ··· 367 386 if (rc) 368 387 dev_err(rdev_to_dev(rdev), 369 388 "Failed to free HW ring:%d :%#x", req.ring_id, rc); 370 - if (do_unlock) 371 - rtnl_unlock(); 372 389 return rc; 373 390 } 374 391 ··· 384 405 return rc; 385 406 386 407 memset(&fw_msg, 0, sizeof(fw_msg)); 387 - rtnl_lock(); 388 408 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1); 389 409 req.enables = 0; 390 410 req.page_tbl_addr = cpu_to_le64(dma_arr[0]); ··· 404 426 if (!rc) 405 427 *fw_ring_id = le16_to_cpu(resp.ring_id); 406 428 407 - rtnl_unlock(); 408 429 return rc; 409 430 } 410 431 411 432 static int bnxt_re_net_stats_ctx_free(struct bnxt_re_dev *rdev, 412 - u32 fw_stats_ctx_id, bool lock_wait) 433 + u32 fw_stats_ctx_id) 413 434 { 414 435 struct bnxt_en_dev *en_dev = rdev->en_dev; 415 436 struct hwrm_stat_ctx_free_input req = {0}; 416 437 struct bnxt_fw_msg fw_msg; 417 - bool 
do_unlock = false; 418 438 int rc = -EINVAL; 419 439 420 440 if (!en_dev) 421 441 return rc; 422 442 423 443 memset(&fw_msg, 0, sizeof(fw_msg)); 424 - if (lock_wait) { 425 - rtnl_lock(); 426 - do_unlock = true; 427 - } 428 444 429 445 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1); 430 446 req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id); ··· 429 457 dev_err(rdev_to_dev(rdev), 430 458 "Failed to free HW stats context %#x", rc); 431 459 432 - if (do_unlock) 433 - rtnl_unlock(); 434 460 return rc; 435 461 } 436 462 ··· 448 478 return rc; 449 479 450 480 memset(&fw_msg, 0, sizeof(fw_msg)); 451 - rtnl_lock(); 452 481 453 482 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1); 454 483 req.update_period_ms = cpu_to_le32(1000); ··· 459 490 if (!rc) 460 491 *fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id); 461 492 462 - rtnl_unlock(); 463 493 return rc; 464 494 } 465 495 ··· 897 929 return rc; 898 930 } 899 931 900 - static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait) 932 + static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev) 901 933 { 902 934 int i; 903 935 904 936 for (i = 0; i < rdev->num_msix - 1; i++) { 905 - bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait); 937 + bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id); 906 938 bnxt_qplib_free_nq(&rdev->nq[i]); 907 939 } 908 940 } 909 941 910 - static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait) 942 + static void bnxt_re_free_res(struct bnxt_re_dev *rdev) 911 943 { 912 - bnxt_re_free_nq_res(rdev, lock_wait); 944 + bnxt_re_free_nq_res(rdev); 913 945 914 946 if (rdev->qplib_res.dpi_tbl.max) { 915 947 bnxt_qplib_dealloc_dpi(&rdev->qplib_res, ··· 1187 1219 return 0; 1188 1220 } 1189 1221 1190 - static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait) 1222 + static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev) 1191 1223 { 1192 1224 int i, rc; 1193 1225 ··· 1202 1234 cancel_delayed_work(&rdev->worker); 1203 1235 1204 
1236 bnxt_re_cleanup_res(rdev); 1205 - bnxt_re_free_res(rdev, lock_wait); 1237 + bnxt_re_free_res(rdev); 1206 1238 1207 1239 if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) { 1208 1240 rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw); 1209 1241 if (rc) 1210 1242 dev_warn(rdev_to_dev(rdev), 1211 1243 "Failed to deinitialize RCFW: %#x", rc); 1212 - bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, 1213 - lock_wait); 1244 + bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id); 1214 1245 bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx); 1215 1246 bnxt_qplib_disable_rcfw_channel(&rdev->rcfw); 1216 - bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait); 1247 + bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id); 1217 1248 bnxt_qplib_free_rcfw_channel(&rdev->rcfw); 1218 1249 } 1219 1250 if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) { 1220 - rc = bnxt_re_free_msix(rdev, lock_wait); 1251 + rc = bnxt_re_free_msix(rdev); 1221 1252 if (rc) 1222 1253 dev_warn(rdev_to_dev(rdev), 1223 1254 "Failed to free MSI-X vectors: %#x", rc); 1224 1255 } 1225 1256 if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) { 1226 - rc = bnxt_re_unregister_netdev(rdev, lock_wait); 1257 + rc = bnxt_re_unregister_netdev(rdev); 1227 1258 if (rc) 1228 1259 dev_warn(rdev_to_dev(rdev), 1229 1260 "Failed to unregister with netdev: %#x", rc); ··· 1242 1275 static int bnxt_re_ib_reg(struct bnxt_re_dev *rdev) 1243 1276 { 1244 1277 int i, j, rc; 1278 + 1279 + bool locked; 1280 + 1281 + /* Acquire rtnl lock throughout this function */ 1282 + rtnl_lock(); 1283 + locked = true; 1245 1284 1246 1285 /* Registered a new RoCE device instance to netdev */ 1247 1286 rc = bnxt_re_register_netdev(rdev); ··· 1347 1374 schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000)); 1348 1375 } 1349 1376 1377 + rtnl_unlock(); 1378 + locked = false; 1379 + 1350 1380 /* Register ib dev */ 1351 1381 rc = bnxt_re_register_ib(rdev); 1352 1382 
if (rc) { 1353 1383 pr_err("Failed to register with IB: %#x\n", rc); 1354 1384 goto fail; 1355 1385 } 1386 + set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags); 1356 1387 dev_info(rdev_to_dev(rdev), "Device registered successfully"); 1357 1388 for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) { 1358 1389 rc = device_create_file(&rdev->ibdev.dev, ··· 1372 1395 goto fail; 1373 1396 } 1374 1397 } 1375 - set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags); 1376 1398 ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed, 1377 1399 &rdev->active_width); 1378 1400 set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags); ··· 1380 1404 1381 1405 return 0; 1382 1406 free_sctx: 1383 - bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true); 1407 + bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id); 1384 1408 free_ctx: 1385 1409 bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx); 1386 1410 disable_rcfw: 1387 1411 bnxt_qplib_disable_rcfw_channel(&rdev->rcfw); 1388 1412 free_ring: 1389 - bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true); 1413 + bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id); 1390 1414 free_rcfw: 1391 1415 bnxt_qplib_free_rcfw_channel(&rdev->rcfw); 1392 1416 fail: 1393 - bnxt_re_ib_unreg(rdev, true); 1417 + if (!locked) 1418 + rtnl_lock(); 1419 + bnxt_re_ib_unreg(rdev); 1420 + rtnl_unlock(); 1421 + 1394 1422 return rc; 1395 1423 } 1396 1424 ··· 1547 1567 */ 1548 1568 if (atomic_read(&rdev->sched_count) > 0) 1549 1569 goto exit; 1550 - bnxt_re_ib_unreg(rdev, false); 1570 + bnxt_re_ib_unreg(rdev); 1551 1571 bnxt_re_remove_one(rdev); 1552 1572 bnxt_re_dev_unreg(rdev); 1553 1573 break; ··· 1626 1646 */ 1627 1647 flush_workqueue(bnxt_re_wq); 1628 1648 bnxt_re_dev_stop(rdev); 1629 - bnxt_re_ib_unreg(rdev, true); 1649 + /* Acquire the rtnl_lock as the L2 resources are freed here */ 1650 + rtnl_lock(); 1651 + bnxt_re_ib_unreg(rdev); 1652 + rtnl_unlock(); 1630 1653 bnxt_re_remove_one(rdev); 1631 1654 bnxt_re_dev_unreg(rdev); 
1632 1655 }
+5 -1
drivers/infiniband/hw/hfi1/chip.c
··· 6733 6733 struct hfi1_devdata *dd = ppd->dd; 6734 6734 struct send_context *sc; 6735 6735 int i; 6736 + int sc_flags; 6736 6737 6737 6738 if (flags & FREEZE_SELF) 6738 6739 write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK); ··· 6744 6743 /* notify all SDMA engines that they are going into a freeze */ 6745 6744 sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN)); 6746 6745 6746 + sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ? 6747 + SCF_LINK_DOWN : 0); 6747 6748 /* do halt pre-handling on all enabled send contexts */ 6748 6749 for (i = 0; i < dd->num_send_contexts; i++) { 6749 6750 sc = dd->send_contexts[i].sc; 6750 6751 if (sc && (sc->flags & SCF_ENABLED)) 6751 - sc_stop(sc, SCF_FROZEN | SCF_HALTED); 6752 + sc_stop(sc, sc_flags); 6752 6753 } 6753 6754 6754 6755 /* Send context are frozen. Notify user space */ ··· 10677 10674 add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK); 10678 10675 10679 10676 handle_linkup_change(dd, 1); 10677 + pio_kernel_linkup(dd); 10680 10678 10681 10679 /* 10682 10680 * After link up, a new link width will have been set.
+41 -10
drivers/infiniband/hw/hfi1/pio.c
··· 86 86 unsigned long flags; 87 87 int write = 1; /* write sendctrl back */ 88 88 int flush = 0; /* re-read sendctrl to make sure it is flushed */ 89 + int i; 89 90 90 91 spin_lock_irqsave(&dd->sendctrl_lock, flags); 91 92 ··· 96 95 reg |= SEND_CTRL_SEND_ENABLE_SMASK; 97 96 /* Fall through */ 98 97 case PSC_DATA_VL_ENABLE: 98 + mask = 0; 99 + for (i = 0; i < ARRAY_SIZE(dd->vld); i++) 100 + if (!dd->vld[i].mtu) 101 + mask |= BIT_ULL(i); 99 102 /* Disallow sending on VLs not enabled */ 100 - mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) << 101 - SEND_CTRL_UNSUPPORTED_VL_SHIFT; 103 + mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) << 104 + SEND_CTRL_UNSUPPORTED_VL_SHIFT; 102 105 reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask; 103 106 break; 104 107 case PSC_GLOBAL_DISABLE: ··· 926 921 void sc_disable(struct send_context *sc) 927 922 { 928 923 u64 reg; 929 - unsigned long flags; 930 924 struct pio_buf *pbuf; 931 925 932 926 if (!sc) 933 927 return; 934 928 935 929 /* do all steps, even if already disabled */ 936 - spin_lock_irqsave(&sc->alloc_lock, flags); 930 + spin_lock_irq(&sc->alloc_lock); 937 931 reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL)); 938 932 reg &= ~SC(CTRL_CTXT_ENABLE_SMASK); 939 933 sc->flags &= ~SCF_ENABLED; 940 934 sc_wait_for_packet_egress(sc, 1); 941 935 write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg); 942 - spin_unlock_irqrestore(&sc->alloc_lock, flags); 943 936 944 937 /* 945 938 * Flush any waiters. Once the context is disabled, ··· 947 944 * proceed with the flush. 
948 945 */ 949 946 udelay(1); 950 - spin_lock_irqsave(&sc->release_lock, flags); 947 + spin_lock(&sc->release_lock); 951 948 if (sc->sr) { /* this context has a shadow ring */ 952 949 while (sc->sr_tail != sc->sr_head) { 953 950 pbuf = &sc->sr[sc->sr_tail].pbuf; ··· 958 955 sc->sr_tail = 0; 959 956 } 960 957 } 961 - spin_unlock_irqrestore(&sc->release_lock, flags); 958 + spin_unlock(&sc->release_lock); 959 + spin_unlock_irq(&sc->alloc_lock); 962 960 } 963 961 964 962 /* return SendEgressCtxtStatus.PacketOccupancy */ ··· 1182 1178 sc = dd->send_contexts[i].sc; 1183 1179 if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER) 1184 1180 continue; 1181 + if (sc->flags & SCF_LINK_DOWN) 1182 + continue; 1185 1183 1186 1184 sc_enable(sc); /* will clear the sc frozen flag */ 1185 + } 1186 + } 1187 + 1188 + /** 1189 + * pio_kernel_linkup() - Re-enable send contexts after linkup event 1190 + * @dd: valid device data 1191 + * 1192 + * When the link goes down, the freeze path is taken. However, a link down 1193 + * event is different from a freeze because if the send context is re-enabled 1194 + * whoever is sending data will start sending data again, which will hang 1195 + * any QP that is sending data. 1196 + * 1197 + * The freeze path now looks at the type of event that occurs and takes this 1198 + * path for a link down event. 
1199 + */ 1200 + void pio_kernel_linkup(struct hfi1_devdata *dd) 1201 + { 1202 + struct send_context *sc; 1203 + int i; 1204 + 1205 + for (i = 0; i < dd->num_send_contexts; i++) { 1206 + sc = dd->send_contexts[i].sc; 1207 + if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER) 1208 + continue; 1209 + 1210 + sc_enable(sc); /* will clear the sc link down flag */ 1187 1211 } 1188 1212 } 1189 1213 ··· 1414 1382 { 1415 1383 unsigned long flags; 1416 1384 1417 - /* mark the context */ 1418 - sc->flags |= flag; 1419 - 1420 1385 /* stop buffer allocations */ 1421 1386 spin_lock_irqsave(&sc->alloc_lock, flags); 1387 + /* mark the context */ 1388 + sc->flags |= flag; 1422 1389 sc->flags &= ~SCF_ENABLED; 1423 1390 spin_unlock_irqrestore(&sc->alloc_lock, flags); 1424 1391 wake_up(&sc->halt_wait);
+2
drivers/infiniband/hw/hfi1/pio.h
··· 139 139 #define SCF_IN_FREE 0x02 140 140 #define SCF_HALTED 0x04 141 141 #define SCF_FROZEN 0x08 142 + #define SCF_LINK_DOWN 0x10 142 143 143 144 struct send_context_info { 144 145 struct send_context *sc; /* allocated working context */ ··· 307 306 void pio_reset_all(struct hfi1_devdata *dd); 308 307 void pio_freeze(struct hfi1_devdata *dd); 309 308 void pio_kernel_unfreeze(struct hfi1_devdata *dd); 309 + void pio_kernel_linkup(struct hfi1_devdata *dd); 310 310 311 311 /* global PIO send control operations */ 312 312 #define PSC_GLOBAL_ENABLE 0
+1 -1
drivers/infiniband/hw/hfi1/user_sdma.c
··· 828 828 if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) { 829 829 if (++req->iov_idx == req->data_iovs) { 830 830 ret = -EFAULT; 831 - goto free_txreq; 831 + goto free_tx; 832 832 } 833 833 iovec = &req->iovs[req->iov_idx]; 834 834 WARN_ON(iovec->offset);
+7 -1
drivers/infiniband/hw/hfi1/verbs.c
··· 1582 1582 struct hfi1_pportdata *ppd; 1583 1583 struct hfi1_devdata *dd; 1584 1584 u8 sc5; 1585 + u8 sl; 1585 1586 1586 1587 if (hfi1_check_mcast(rdma_ah_get_dlid(ah_attr)) && 1587 1588 !(rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH)) ··· 1591 1590 /* test the mapping for validity */ 1592 1591 ibp = to_iport(ibdev, rdma_ah_get_port_num(ah_attr)); 1593 1592 ppd = ppd_from_ibp(ibp); 1594 - sc5 = ibp->sl_to_sc[rdma_ah_get_sl(ah_attr)]; 1595 1593 dd = dd_from_ppd(ppd); 1594 + 1595 + sl = rdma_ah_get_sl(ah_attr); 1596 + if (sl >= ARRAY_SIZE(ibp->sl_to_sc)) 1597 + return -EINVAL; 1598 + 1599 + sc5 = ibp->sl_to_sc[sl]; 1596 1600 if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf) 1597 1601 return -EINVAL; 1598 1602 return 0;
+4 -1
drivers/infiniband/hw/mlx5/devx.c
··· 723 723 attrs, MLX5_IB_ATTR_DEVX_OBJ_CREATE_HANDLE); 724 724 struct mlx5_ib_ucontext *c = to_mucontext(uobj->context); 725 725 struct mlx5_ib_dev *dev = to_mdev(c->ibucontext.device); 726 + u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)]; 726 727 struct devx_obj *obj; 727 728 int err; 728 729 ··· 755 754 756 755 err = uverbs_copy_to(attrs, MLX5_IB_ATTR_DEVX_OBJ_CREATE_CMD_OUT, cmd_out, cmd_out_len); 757 756 if (err) 758 - goto obj_free; 757 + goto obj_destroy; 759 758 760 759 return 0; 761 760 761 + obj_destroy: 762 + mlx5_cmd_exec(obj->mdev, obj->dinbox, obj->dinlen, out, sizeof(out)); 762 763 obj_free: 763 764 kfree(obj); 764 765 return err;
+3 -3
drivers/infiniband/ulp/srp/ib_srp.c
··· 2951 2951 { 2952 2952 struct srp_target_port *target = host_to_target(scmnd->device->host); 2953 2953 struct srp_rdma_ch *ch; 2954 - int i; 2954 + int i, j; 2955 2955 u8 status; 2956 2956 2957 2957 shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n"); ··· 2965 2965 2966 2966 for (i = 0; i < target->ch_count; i++) { 2967 2967 ch = &target->ch[i]; 2968 - for (i = 0; i < target->req_ring_size; ++i) { 2969 - struct srp_request *req = &ch->req_ring[i]; 2968 + for (j = 0; j < target->req_ring_size; ++j) { 2969 + struct srp_request *req = &ch->req_ring[j]; 2970 2970 2971 2971 srp_finish_req(ch, req, scmnd->device, DID_RESET << 16); 2972 2972 }
+28 -46
drivers/input/keyboard/atakbd.c
··· 75 75 */ 76 76 77 77 78 - static unsigned char atakbd_keycode[0x72] = { /* American layout */ 79 - [0] = KEY_GRAVE, 78 + static unsigned char atakbd_keycode[0x73] = { /* American layout */ 80 79 [1] = KEY_ESC, 81 80 [2] = KEY_1, 82 81 [3] = KEY_2, ··· 116 117 [38] = KEY_L, 117 118 [39] = KEY_SEMICOLON, 118 119 [40] = KEY_APOSTROPHE, 119 - [41] = KEY_BACKSLASH, /* FIXME, '#' */ 120 + [41] = KEY_GRAVE, 120 121 [42] = KEY_LEFTSHIFT, 121 - [43] = KEY_GRAVE, /* FIXME: '~' */ 122 + [43] = KEY_BACKSLASH, 122 123 [44] = KEY_Z, 123 124 [45] = KEY_X, 124 125 [46] = KEY_C, ··· 144 145 [66] = KEY_F8, 145 146 [67] = KEY_F9, 146 147 [68] = KEY_F10, 147 - [69] = KEY_ESC, 148 - [70] = KEY_DELETE, 149 - [71] = KEY_KP7, 150 - [72] = KEY_KP8, 151 - [73] = KEY_KP9, 148 + [71] = KEY_HOME, 149 + [72] = KEY_UP, 152 150 [74] = KEY_KPMINUS, 153 - [75] = KEY_KP4, 154 - [76] = KEY_KP5, 155 - [77] = KEY_KP6, 151 + [75] = KEY_LEFT, 152 + [77] = KEY_RIGHT, 156 153 [78] = KEY_KPPLUS, 157 - [79] = KEY_KP1, 158 - [80] = KEY_KP2, 159 - [81] = KEY_KP3, 160 - [82] = KEY_KP0, 161 - [83] = KEY_KPDOT, 162 - [90] = KEY_KPLEFTPAREN, 163 - [91] = KEY_KPRIGHTPAREN, 164 - [92] = KEY_KPASTERISK, /* FIXME */ 165 - [93] = KEY_KPASTERISK, 166 - [94] = KEY_KPPLUS, 167 - [95] = KEY_HELP, 154 + [80] = KEY_DOWN, 155 + [82] = KEY_INSERT, 156 + [83] = KEY_DELETE, 168 157 [96] = KEY_102ND, 169 - [97] = KEY_KPASTERISK, /* FIXME */ 170 - [98] = KEY_KPSLASH, 158 + [97] = KEY_UNDO, 159 + [98] = KEY_HELP, 171 160 [99] = KEY_KPLEFTPAREN, 172 161 [100] = KEY_KPRIGHTPAREN, 173 162 [101] = KEY_KPSLASH, 174 163 [102] = KEY_KPASTERISK, 175 - [103] = KEY_UP, 176 - [104] = KEY_KPASTERISK, /* FIXME */ 177 - [105] = KEY_LEFT, 178 - [106] = KEY_RIGHT, 179 - [107] = KEY_KPASTERISK, /* FIXME */ 180 - [108] = KEY_DOWN, 181 - [109] = KEY_KPASTERISK, /* FIXME */ 182 - [110] = KEY_KPASTERISK, /* FIXME */ 183 - [111] = KEY_KPASTERISK, /* FIXME */ 184 - [112] = KEY_KPASTERISK, /* FIXME */ 185 - [113] = KEY_KPASTERISK /* FIXME */ 164 + 
[103] = KEY_KP7, 165 + [104] = KEY_KP8, 166 + [105] = KEY_KP9, 167 + [106] = KEY_KP4, 168 + [107] = KEY_KP5, 169 + [108] = KEY_KP6, 170 + [109] = KEY_KP1, 171 + [110] = KEY_KP2, 172 + [111] = KEY_KP3, 173 + [112] = KEY_KP0, 174 + [113] = KEY_KPDOT, 175 + [114] = KEY_KPENTER, 186 176 }; 187 177 188 178 static struct input_dev *atakbd_dev; ··· 179 191 static void atakbd_interrupt(unsigned char scancode, char down) 180 192 { 181 193 182 - if (scancode < 0x72) { /* scancodes < 0xf2 are keys */ 194 + if (scancode < 0x73) { /* scancodes < 0xf3 are keys */ 183 195 184 196 // report raw events here? 185 197 186 198 scancode = atakbd_keycode[scancode]; 187 199 188 - if (scancode == KEY_CAPSLOCK) { /* CapsLock is a toggle switch key on Amiga */ 189 - input_report_key(atakbd_dev, scancode, 1); 190 - input_report_key(atakbd_dev, scancode, 0); 191 - input_sync(atakbd_dev); 192 - } else { 193 - input_report_key(atakbd_dev, scancode, down); 194 - input_sync(atakbd_dev); 195 - } 196 - } else /* scancodes >= 0xf2 are mouse data, most likely */ 200 + input_report_key(atakbd_dev, scancode, down); 201 + input_sync(atakbd_dev); 202 + } else /* scancodes >= 0xf3 are mouse data, most likely */ 197 203 printk(KERN_INFO "atakbd: unhandled scancode %x\n", scancode); 198 204 199 205 return;
+1 -1
drivers/input/misc/uinput.c
··· 410 410 min = abs->minimum; 411 411 max = abs->maximum; 412 412 413 - if ((min != 0 || max != 0) && max <= min) { 413 + if ((min != 0 || max != 0) && max < min) { 414 414 printk(KERN_DEBUG 415 415 "%s: invalid abs[%02x] min:%d max:%d\n", 416 416 UINPUT_NAME, code, min, max);
+2
drivers/input/mouse/elantech.c
··· 1178 1178 static const char * const middle_button_pnp_ids[] = { 1179 1179 "LEN2131", /* ThinkPad P52 w/ NFC */ 1180 1180 "LEN2132", /* ThinkPad P52 */ 1181 + "LEN2133", /* ThinkPad P72 w/ NFC */ 1182 + "LEN2134", /* ThinkPad P72 */ 1181 1183 NULL 1182 1184 }; 1183 1185
+6
drivers/input/touchscreen/egalax_ts.c
··· 241 241 struct i2c_client *client = to_i2c_client(dev); 242 242 int ret; 243 243 244 + if (device_may_wakeup(dev)) 245 + return enable_irq_wake(client->irq); 246 + 244 247 ret = i2c_master_send(client, suspend_cmd, MAX_I2C_DATA_LEN); 245 248 return ret > 0 ? 0 : ret; 246 249 } ··· 251 248 static int __maybe_unused egalax_ts_resume(struct device *dev) 252 249 { 253 250 struct i2c_client *client = to_i2c_client(dev); 251 + 252 + if (device_may_wakeup(dev)) 253 + return disable_irq_wake(client->irq); 254 254 255 255 return egalax_wake_up_device(client); 256 256 }
+6
drivers/iommu/amd_iommu.c
··· 246 246 247 247 /* The callers make sure that get_device_id() does not fail here */ 248 248 devid = get_device_id(dev); 249 + 250 + /* For ACPI HID devices, we simply return the devid as such */ 251 + if (!dev_is_pci(dev)) 252 + return devid; 253 + 249 254 ivrs_alias = amd_iommu_alias_table[devid]; 255 + 250 256 pci_for_each_dma_alias(pdev, __last_alias, &pci_alias); 251 257 252 258 if (ivrs_alias == pci_alias)
+3 -3
drivers/iommu/intel-iommu.c
··· 2540 2540 if (dev && dev_is_pci(dev) && info->pasid_supported) { 2541 2541 ret = intel_pasid_alloc_table(dev); 2542 2542 if (ret) { 2543 - __dmar_remove_one_dev_info(info); 2544 - spin_unlock_irqrestore(&device_domain_lock, flags); 2545 - return NULL; 2543 + pr_warn("No pasid table for %s, pasid disabled\n", 2544 + dev_name(dev)); 2545 + info->pasid_supported = 0; 2546 2546 } 2547 2547 } 2548 2548 spin_unlock_irqrestore(&device_domain_lock, flags);
+1 -1
drivers/iommu/intel-pasid.h
··· 11 11 #define __INTEL_PASID_H 12 12 13 13 #define PASID_MIN 0x1 14 - #define PASID_MAX 0x100000 14 + #define PASID_MAX 0x20000 15 15 16 16 struct pasid_entry { 17 17 u64 val;
+6
drivers/iommu/rockchip-iommu.c
··· 1241 1241 1242 1242 static void rk_iommu_shutdown(struct platform_device *pdev) 1243 1243 { 1244 + struct rk_iommu *iommu = platform_get_drvdata(pdev); 1245 + int i = 0, irq; 1246 + 1247 + while ((irq = platform_get_irq(pdev, i++)) != -ENXIO) 1248 + devm_free_irq(iommu->dev, irq, iommu); 1249 + 1244 1250 pm_runtime_force_suspend(&pdev->dev); 1245 1251 } 1246 1252
+1
drivers/md/bcache/bcache.h
··· 965 965 void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent); 966 966 967 967 extern struct workqueue_struct *bcache_wq; 968 + extern struct workqueue_struct *bch_journal_wq; 968 969 extern struct mutex bch_register_lock; 969 970 extern struct list_head bch_cache_sets; 970 971
+3 -3
drivers/md/bcache/journal.c
··· 485 485 486 486 closure_get(&ca->set->cl); 487 487 INIT_WORK(&ja->discard_work, journal_discard_work); 488 - schedule_work(&ja->discard_work); 488 + queue_work(bch_journal_wq, &ja->discard_work); 489 489 } 490 490 } 491 491 ··· 592 592 : &j->w[0]; 593 593 594 594 __closure_wake_up(&w->wait); 595 - continue_at_nobarrier(cl, journal_write, system_wq); 595 + continue_at_nobarrier(cl, journal_write, bch_journal_wq); 596 596 } 597 597 598 598 static void journal_write_unlock(struct closure *cl) ··· 627 627 spin_unlock(&c->journal.lock); 628 628 629 629 btree_flush_write(c); 630 - continue_at(cl, journal_write, system_wq); 630 + continue_at(cl, journal_write, bch_journal_wq); 631 631 return; 632 632 } 633 633
+8
drivers/md/bcache/super.c
··· 47 47 static DEFINE_IDA(bcache_device_idx); 48 48 static wait_queue_head_t unregister_wait; 49 49 struct workqueue_struct *bcache_wq; 50 + struct workqueue_struct *bch_journal_wq; 50 51 51 52 #define BTREE_MAX_PAGES (256 * 1024 / PAGE_SIZE) 52 53 /* limitation of partitions number on single bcache device */ ··· 2342 2341 kobject_put(bcache_kobj); 2343 2342 if (bcache_wq) 2344 2343 destroy_workqueue(bcache_wq); 2344 + if (bch_journal_wq) 2345 + destroy_workqueue(bch_journal_wq); 2346 + 2345 2347 if (bcache_major) 2346 2348 unregister_blkdev(bcache_major, "bcache"); 2347 2349 unregister_reboot_notifier(&reboot); ··· 2372 2368 2373 2369 bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0); 2374 2370 if (!bcache_wq) 2371 + goto err; 2372 + 2373 + bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0); 2374 + if (!bch_journal_wq) 2375 2375 goto err; 2376 2376 2377 2377 bcache_kobj = kobject_create_and_add("bcache", fs_kobj);
+13 -28
drivers/media/i2c/mt9v111.c
··· 1159 1159 V4L2_CID_AUTO_WHITE_BALANCE, 1160 1160 0, 1, 1, 1161 1161 V4L2_WHITE_BALANCE_AUTO); 1162 - if (IS_ERR_OR_NULL(mt9v111->auto_awb)) { 1163 - ret = PTR_ERR(mt9v111->auto_awb); 1164 - goto error_free_ctrls; 1165 - } 1166 - 1167 1162 mt9v111->auto_exp = v4l2_ctrl_new_std_menu(&mt9v111->ctrls, 1168 1163 &mt9v111_ctrl_ops, 1169 1164 V4L2_CID_EXPOSURE_AUTO, 1170 1165 V4L2_EXPOSURE_MANUAL, 1171 1166 0, V4L2_EXPOSURE_AUTO); 1172 - if (IS_ERR_OR_NULL(mt9v111->auto_exp)) { 1173 - ret = PTR_ERR(mt9v111->auto_exp); 1174 - goto error_free_ctrls; 1175 - } 1176 - 1177 - /* Initialize timings */ 1178 1167 mt9v111->hblank = v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops, 1179 1168 V4L2_CID_HBLANK, 1180 1169 MT9V111_CORE_R05_MIN_HBLANK, 1181 1170 MT9V111_CORE_R05_MAX_HBLANK, 1, 1182 1171 MT9V111_CORE_R05_DEF_HBLANK); 1183 - if (IS_ERR_OR_NULL(mt9v111->hblank)) { 1184 - ret = PTR_ERR(mt9v111->hblank); 1185 - goto error_free_ctrls; 1186 - } 1187 - 1188 1172 mt9v111->vblank = v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops, 1189 1173 V4L2_CID_VBLANK, 1190 1174 MT9V111_CORE_R06_MIN_VBLANK, 1191 1175 MT9V111_CORE_R06_MAX_VBLANK, 1, 1192 1176 MT9V111_CORE_R06_DEF_VBLANK); 1193 - if (IS_ERR_OR_NULL(mt9v111->vblank)) { 1194 - ret = PTR_ERR(mt9v111->vblank); 1195 - goto error_free_ctrls; 1196 - } 1197 1177 1198 1178 /* PIXEL_RATE is fixed: just expose it to user space. */ 1199 1179 v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops, ··· 1181 1201 DIV_ROUND_CLOSEST(mt9v111->sysclk, 2), 1, 1182 1202 DIV_ROUND_CLOSEST(mt9v111->sysclk, 2)); 1183 1203 1204 + if (mt9v111->ctrls.error) { 1205 + ret = mt9v111->ctrls.error; 1206 + goto error_free_ctrls; 1207 + } 1184 1208 mt9v111->sd.ctrl_handler = &mt9v111->ctrls; 1185 1209 1186 1210 /* Start with default configuration: 640x480 UYVY. 
*/ ··· 1210 1226 mt9v111->pad.flags = MEDIA_PAD_FL_SOURCE; 1211 1227 ret = media_entity_pads_init(&mt9v111->sd.entity, 1, &mt9v111->pad); 1212 1228 if (ret) 1213 - goto error_free_ctrls; 1229 + goto error_free_entity; 1214 1230 #endif 1215 1231 1216 1232 ret = mt9v111_chip_probe(mt9v111); 1217 1233 if (ret) 1218 - goto error_free_ctrls; 1234 + goto error_free_entity; 1219 1235 1220 1236 ret = v4l2_async_register_subdev(&mt9v111->sd); 1221 1237 if (ret) 1222 - goto error_free_ctrls; 1238 + goto error_free_entity; 1223 1239 1224 1240 return 0; 1225 1241 1226 - error_free_ctrls: 1227 - v4l2_ctrl_handler_free(&mt9v111->ctrls); 1228 - 1242 + error_free_entity: 1229 1243 #if IS_ENABLED(CONFIG_MEDIA_CONTROLLER) 1230 1244 media_entity_cleanup(&mt9v111->sd.entity); 1231 1245 #endif 1246 + 1247 + error_free_ctrls: 1248 + v4l2_ctrl_handler_free(&mt9v111->ctrls); 1232 1249 1233 1250 mutex_destroy(&mt9v111->pwr_mutex); 1234 1251 mutex_destroy(&mt9v111->stream_mutex); ··· 1244 1259 1245 1260 v4l2_async_unregister_subdev(sd); 1246 1261 1247 - v4l2_ctrl_handler_free(&mt9v111->ctrls); 1248 - 1249 1262 #if IS_ENABLED(CONFIG_MEDIA_CONTROLLER) 1250 1263 media_entity_cleanup(&sd->entity); 1251 1264 #endif 1265 + 1266 + v4l2_ctrl_handler_free(&mt9v111->ctrls); 1252 1267 1253 1268 mutex_destroy(&mt9v111->pwr_mutex); 1254 1269 mutex_destroy(&mt9v111->stream_mutex);
+2
drivers/media/platform/Kconfig
··· 541 541 depends on MFD_CROS_EC 542 542 select CEC_CORE 543 543 select CEC_NOTIFIER 544 + select CHROME_PLATFORMS 545 + select CROS_EC_PROTO 544 546 ---help--- 545 547 If you say yes here you will get support for the 546 548 ChromeOS Embedded Controller's CEC.
+1
drivers/media/platform/qcom/camss/camss-csid.c
··· 10 10 #include <linux/clk.h> 11 11 #include <linux/completion.h> 12 12 #include <linux/interrupt.h> 13 + #include <linux/io.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/of.h> 15 16 #include <linux/platform_device.h>
+1
drivers/media/platform/qcom/camss/camss-csiphy-2ph-1-0.c
··· 12 12 13 13 #include <linux/delay.h> 14 14 #include <linux/interrupt.h> 15 + #include <linux/io.h> 15 16 16 17 #define CAMSS_CSI_PHY_LNn_CFG2(n) (0x004 + 0x40 * (n)) 17 18 #define CAMSS_CSI_PHY_LNn_CFG3(n) (0x008 + 0x40 * (n))
+1
drivers/media/platform/qcom/camss/camss-csiphy-3ph-1-0.c
··· 12 12 13 13 #include <linux/delay.h> 14 14 #include <linux/interrupt.h> 15 + #include <linux/io.h> 15 16 16 17 #define CSIPHY_3PH_LNn_CFG1(n) (0x000 + 0x100 * (n)) 17 18 #define CSIPHY_3PH_LNn_CFG1_SWI_REC_DLY_PRG (BIT(7) | BIT(6))
+1
drivers/media/platform/qcom/camss/camss-csiphy.c
··· 10 10 #include <linux/clk.h> 11 11 #include <linux/delay.h> 12 12 #include <linux/interrupt.h> 13 + #include <linux/io.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/of.h> 15 16 #include <linux/platform_device.h>
+3 -2
drivers/media/platform/qcom/camss/camss-ispif.c
··· 10 10 #include <linux/clk.h> 11 11 #include <linux/completion.h> 12 12 #include <linux/interrupt.h> 13 + #include <linux/io.h> 13 14 #include <linux/iopoll.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/mutex.h> ··· 1077 1076 else 1078 1077 return -EINVAL; 1079 1078 1080 - ispif->line = kcalloc(ispif->line_num, sizeof(*ispif->line), 1081 - GFP_KERNEL); 1079 + ispif->line = devm_kcalloc(dev, ispif->line_num, sizeof(*ispif->line), 1080 + GFP_KERNEL); 1082 1081 if (!ispif->line) 1083 1082 return -ENOMEM; 1084 1083
+1
drivers/media/platform/qcom/camss/camss-vfe-4-1.c
··· 9 9 */ 10 10 11 11 #include <linux/interrupt.h> 12 + #include <linux/io.h> 12 13 #include <linux/iopoll.h> 13 14 14 15 #include "camss-vfe.h"
+1
drivers/media/platform/qcom/camss/camss-vfe-4-7.c
··· 9 9 */ 10 10 11 11 #include <linux/interrupt.h> 12 + #include <linux/io.h> 12 13 #include <linux/iopoll.h> 13 14 14 15 #include "camss-vfe.h"
+8 -7
drivers/media/platform/qcom/camss/camss.c
··· 848 848 return -EINVAL; 849 849 } 850 850 851 - camss->csiphy = kcalloc(camss->csiphy_num, sizeof(*camss->csiphy), 852 - GFP_KERNEL); 851 + camss->csiphy = devm_kcalloc(dev, camss->csiphy_num, 852 + sizeof(*camss->csiphy), GFP_KERNEL); 853 853 if (!camss->csiphy) 854 854 return -ENOMEM; 855 855 856 - camss->csid = kcalloc(camss->csid_num, sizeof(*camss->csid), 857 - GFP_KERNEL); 856 + camss->csid = devm_kcalloc(dev, camss->csid_num, sizeof(*camss->csid), 857 + GFP_KERNEL); 858 858 if (!camss->csid) 859 859 return -ENOMEM; 860 860 861 - camss->vfe = kcalloc(camss->vfe_num, sizeof(*camss->vfe), GFP_KERNEL); 861 + camss->vfe = devm_kcalloc(dev, camss->vfe_num, sizeof(*camss->vfe), 862 + GFP_KERNEL); 862 863 if (!camss->vfe) 863 864 return -ENOMEM; 864 865 ··· 994 993 995 994 MODULE_DEVICE_TABLE(of, camss_dt_match); 996 995 997 - static int camss_runtime_suspend(struct device *dev) 996 + static int __maybe_unused camss_runtime_suspend(struct device *dev) 998 997 { 999 998 return 0; 1000 999 } 1001 1000 1002 - static int camss_runtime_resume(struct device *dev) 1001 + static int __maybe_unused camss_runtime_resume(struct device *dev) 1003 1002 { 1004 1003 return 0; 1005 1004 }
+4 -2
drivers/media/usb/dvb-usb-v2/af9035.c
··· 402 402 if (msg[0].addr == state->af9033_i2c_addr[1]) 403 403 reg |= 0x100000; 404 404 405 - ret = af9035_wr_regs(d, reg, &msg[0].buf[3], 406 - msg[0].len - 3); 405 + ret = (msg[0].len >= 3) ? af9035_wr_regs(d, reg, 406 + &msg[0].buf[3], 407 + msg[0].len - 3) 408 + : -EOPNOTSUPP; 407 409 } else { 408 410 /* I2C write */ 409 411 u8 buf[MAX_XFER_SIZE];
+6 -5
drivers/mfd/omap-usb-host.c
··· 528 528 } 529 529 530 530 static const struct of_device_id usbhs_child_match_table[] = { 531 - { .compatible = "ti,omap-ehci", }, 532 - { .compatible = "ti,omap-ohci", }, 531 + { .compatible = "ti,ehci-omap", }, 532 + { .compatible = "ti,ohci-omap3", }, 533 533 { } 534 534 }; 535 535 ··· 855 855 .pm = &usbhsomap_dev_pm_ops, 856 856 .of_match_table = usbhs_omap_dt_ids, 857 857 }, 858 + .probe = usbhs_omap_probe, 858 859 .remove = usbhs_omap_remove, 859 860 }; 860 861 ··· 865 864 MODULE_LICENSE("GPL v2"); 866 865 MODULE_DESCRIPTION("usb host common core driver for omap EHCI and OHCI"); 867 866 868 - static int __init omap_usbhs_drvinit(void) 867 + static int omap_usbhs_drvinit(void) 869 868 { 870 - return platform_driver_probe(&usbhs_omap_driver, usbhs_omap_probe); 869 + return platform_driver_register(&usbhs_omap_driver); 871 870 } 872 871 873 872 /* ··· 879 878 */ 880 879 fs_initcall_sync(omap_usbhs_drvinit); 881 880 882 - static void __exit omap_usbhs_drvexit(void) 881 + static void omap_usbhs_drvexit(void) 883 882 { 884 883 platform_driver_unregister(&usbhs_omap_driver); 885 884 }
+23 -3
drivers/mtd/devices/m25p80.c
··· 39 39 struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1), 40 40 SPI_MEM_OP_NO_ADDR, 41 41 SPI_MEM_OP_NO_DUMMY, 42 - SPI_MEM_OP_DATA_IN(len, val, 1)); 42 + SPI_MEM_OP_DATA_IN(len, NULL, 1)); 43 + void *scratchbuf; 43 44 int ret; 44 45 46 + scratchbuf = kmalloc(len, GFP_KERNEL); 47 + if (!scratchbuf) 48 + return -ENOMEM; 49 + 50 + op.data.buf.in = scratchbuf; 45 51 ret = spi_mem_exec_op(flash->spimem, &op); 46 52 if (ret < 0) 47 53 dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret, 48 54 code); 55 + else 56 + memcpy(val, scratchbuf, len); 57 + 58 + kfree(scratchbuf); 49 59 50 60 return ret; 51 61 } ··· 66 56 struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1), 67 57 SPI_MEM_OP_NO_ADDR, 68 58 SPI_MEM_OP_NO_DUMMY, 69 - SPI_MEM_OP_DATA_OUT(len, buf, 1)); 59 + SPI_MEM_OP_DATA_OUT(len, NULL, 1)); 60 + void *scratchbuf; 61 + int ret; 70 62 71 - return spi_mem_exec_op(flash->spimem, &op); 63 + scratchbuf = kmemdup(buf, len, GFP_KERNEL); 64 + if (!scratchbuf) 65 + return -ENOMEM; 66 + 67 + op.data.buf.out = scratchbuf; 68 + ret = spi_mem_exec_op(flash->spimem, &op); 69 + kfree(scratchbuf); 70 + 71 + return ret; 72 72 } 73 73 74 74 static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
+4 -1
drivers/mtd/mtdpart.c
··· 873 873 int ret, err = 0; 874 874 875 875 np = mtd_get_of_node(master); 876 - if (!mtd_is_partition(master)) 876 + if (mtd_is_partition(master)) 877 + of_node_get(np); 878 + else 877 879 np = of_get_child_by_name(np, "partitions"); 880 + 878 881 of_property_for_each_string(np, "compatible", prop, compat) { 879 882 parser = mtd_part_get_compatible_parser(compat); 880 883 if (!parser)
+6
drivers/mtd/nand/raw/denali.c
··· 596 596 } 597 597 598 598 iowrite32(DMA_ENABLE__FLAG, denali->reg + DMA_ENABLE); 599 + /* 600 + * The ->setup_dma() hook kicks DMA by using the data/command 601 + * interface, which belongs to a different AXI port from the 602 + * register interface. Read back the register to avoid a race. 603 + */ 604 + ioread32(denali->reg + DMA_ENABLE); 599 605 600 606 denali_reset_irq(denali); 601 607 denali->setup_dma(denali, dma_addr, page, write);
+3 -1
drivers/mtd/nand/raw/marvell_nand.c
··· 1547 1547 for (op_id = 0; op_id < subop->ninstrs; op_id++) { 1548 1548 unsigned int offset, naddrs; 1549 1549 const u8 *addrs; 1550 - int len = nand_subop_get_data_len(subop, op_id); 1550 + int len; 1551 1551 1552 1552 instr = &subop->instrs[op_id]; 1553 1553 ··· 1593 1593 nfc_op->ndcb[0] |= 1594 1594 NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) | 1595 1595 NDCB0_LEN_OVRD; 1596 + len = nand_subop_get_data_len(subop, op_id); 1596 1597 nfc_op->ndcb[3] |= round_up(len, FIFO_DEPTH); 1597 1598 } 1598 1599 nfc_op->data_delay_ns = instr->delay_ns; ··· 1607 1606 nfc_op->ndcb[0] |= 1608 1607 NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) | 1609 1608 NDCB0_LEN_OVRD; 1609 + len = nand_subop_get_data_len(subop, op_id); 1610 1610 nfc_op->ndcb[3] |= round_up(len, FIFO_DEPTH); 1611 1611 } 1612 1612 nfc_op->data_delay_ns = instr->delay_ns;
+6 -2
drivers/net/appletalk/ipddp.c
··· 283 283 case SIOCFINDIPDDPRT: 284 284 spin_lock_bh(&ipddp_route_lock); 285 285 rp = __ipddp_find_route(&rcp); 286 - if (rp) 287 - memcpy(&rcp2, rp, sizeof(rcp2)); 286 + if (rp) { 287 + memset(&rcp2, 0, sizeof(rcp2)); 288 + rcp2.ip = rp->ip; 289 + rcp2.at = rp->at; 290 + rcp2.flags = rp->flags; 291 + } 288 292 spin_unlock_bh(&ipddp_route_lock); 289 293 290 294 if (rp) {
+2 -9
drivers/net/bonding/bond_main.c
··· 971 971 struct slave *slave = NULL; 972 972 struct list_head *iter; 973 973 struct ad_info ad_info; 974 - struct netpoll_info *ni; 975 - const struct net_device_ops *ops; 976 974 977 975 if (BOND_MODE(bond) == BOND_MODE_8023AD) 978 976 if (bond_3ad_get_active_agg_info(bond, &ad_info)) 979 977 return; 980 978 981 979 bond_for_each_slave_rcu(bond, slave, iter) { 982 - ops = slave->dev->netdev_ops; 983 - if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller) 980 + if (!bond_slave_is_up(slave)) 984 981 continue; 985 982 986 983 if (BOND_MODE(bond) == BOND_MODE_8023AD) { ··· 989 992 continue; 990 993 } 991 994 992 - ni = rcu_dereference_bh(slave->dev->npinfo); 993 - if (down_trylock(&ni->dev_lock)) 994 - continue; 995 - ops->ndo_poll_controller(slave->dev); 996 - up(&ni->dev_lock); 995 + netpoll_poll_dev(slave->dev); 997 996 } 998 997 } 999 998
+1 -1
drivers/net/dsa/mv88e6xxx/global1.h
··· 128 128 #define MV88E6XXX_G1_ATU_OP_GET_CLR_VIOLATION 0x7000 129 129 #define MV88E6XXX_G1_ATU_OP_AGE_OUT_VIOLATION BIT(7) 130 130 #define MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION BIT(6) 131 - #define MV88E6XXX_G1_ATU_OP_MISS_VIOLTATION BIT(5) 131 + #define MV88E6XXX_G1_ATU_OP_MISS_VIOLATION BIT(5) 132 132 #define MV88E6XXX_G1_ATU_OP_FULL_VIOLATION BIT(4) 133 133 134 134 /* Offset 0x0C: ATU Data Register */
+1 -1
drivers/net/dsa/mv88e6xxx/global1_atu.c
··· 349 349 chip->ports[entry.portvec].atu_member_violation++; 350 350 } 351 351 352 - if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) { 352 + if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) { 353 353 dev_err_ratelimited(chip->dev, 354 354 "ATU miss violation for %pM portvec %x\n", 355 355 entry.mac, entry.portvec);
+2 -2
drivers/net/ethernet/apple/bmac.c
··· 154 154 static irqreturn_t bmac_rxdma_intr(int irq, void *dev_id); 155 155 static void bmac_set_timeout(struct net_device *dev); 156 156 static void bmac_tx_timeout(struct timer_list *t); 157 - static int bmac_output(struct sk_buff *skb, struct net_device *dev); 157 + static netdev_tx_t bmac_output(struct sk_buff *skb, struct net_device *dev); 158 158 static void bmac_start(struct net_device *dev); 159 159 160 160 #define DBDMA_SET(x) ( ((x) | (x) << 16) ) ··· 1456 1456 spin_unlock_irqrestore(&bp->lock, flags); 1457 1457 } 1458 1458 1459 - static int 1459 + static netdev_tx_t 1460 1460 bmac_output(struct sk_buff *skb, struct net_device *dev) 1461 1461 { 1462 1462 struct bmac_data *bp = netdev_priv(dev);
+2 -2
drivers/net/ethernet/apple/mace.c
··· 78 78 79 79 static int mace_open(struct net_device *dev); 80 80 static int mace_close(struct net_device *dev); 81 - static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev); 81 + static netdev_tx_t mace_xmit_start(struct sk_buff *skb, struct net_device *dev); 82 82 static void mace_set_multicast(struct net_device *dev); 83 83 static void mace_reset(struct net_device *dev); 84 84 static int mace_set_address(struct net_device *dev, void *addr); ··· 525 525 mp->timeout_active = 1; 526 526 } 527 527 528 - static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev) 528 + static netdev_tx_t mace_xmit_start(struct sk_buff *skb, struct net_device *dev) 529 529 { 530 530 struct mace_data *mp = netdev_priv(dev); 531 531 volatile struct dbdma_regs __iomem *td = mp->tx_dma;
+2 -2
drivers/net/ethernet/apple/macmace.c
··· 89 89 90 90 static int mace_open(struct net_device *dev); 91 91 static int mace_close(struct net_device *dev); 92 - static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev); 92 + static netdev_tx_t mace_xmit_start(struct sk_buff *skb, struct net_device *dev); 93 93 static void mace_set_multicast(struct net_device *dev); 94 94 static int mace_set_address(struct net_device *dev, void *addr); 95 95 static void mace_reset(struct net_device *dev); ··· 444 444 * Transmit a frame 445 445 */ 446 446 447 - static int mace_xmit_start(struct sk_buff *skb, struct net_device *dev) 447 + static netdev_tx_t mace_xmit_start(struct sk_buff *skb, struct net_device *dev) 448 448 { 449 449 struct mace_data *mp = netdev_priv(dev); 450 450 unsigned long flags;
+17 -13
drivers/net/ethernet/aquantia/atlantic/aq_ring.c
··· 225 225 } 226 226 227 227 /* for single fragment packets use build_skb() */ 228 - if (buff->is_eop) { 228 + if (buff->is_eop && 229 + buff->len <= AQ_CFG_RX_FRAME_MAX - AQ_SKB_ALIGN) { 229 230 skb = build_skb(page_address(buff->page), 230 - buff->len + AQ_SKB_ALIGN); 231 + AQ_CFG_RX_FRAME_MAX); 231 232 if (unlikely(!skb)) { 232 233 err = -ENOMEM; 233 234 goto err_exit; ··· 248 247 buff->len - ETH_HLEN, 249 248 SKB_TRUESIZE(buff->len - ETH_HLEN)); 250 249 251 - for (i = 1U, next_ = buff->next, 252 - buff_ = &self->buff_ring[next_]; true; 253 - next_ = buff_->next, 254 - buff_ = &self->buff_ring[next_], ++i) { 255 - skb_add_rx_frag(skb, i, buff_->page, 0, 256 - buff_->len, 257 - SKB_TRUESIZE(buff->len - 258 - ETH_HLEN)); 259 - buff_->is_cleaned = 1; 250 + if (!buff->is_eop) { 251 + for (i = 1U, next_ = buff->next, 252 + buff_ = &self->buff_ring[next_]; 253 + true; next_ = buff_->next, 254 + buff_ = &self->buff_ring[next_], ++i) { 255 + skb_add_rx_frag(skb, i, 256 + buff_->page, 0, 257 + buff_->len, 258 + SKB_TRUESIZE(buff->len - 259 + ETH_HLEN)); 260 + buff_->is_cleaned = 1; 260 261 261 - if (buff_->is_eop) 262 - break; 262 + if (buff_->is_eop) 263 + break; 264 + } 263 265 } 264 266 } 265 267
-16
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 12894 12894 } 12895 12895 } 12896 12896 12897 - #ifdef CONFIG_NET_POLL_CONTROLLER 12898 - static void poll_bnx2x(struct net_device *dev) 12899 - { 12900 - struct bnx2x *bp = netdev_priv(dev); 12901 - int i; 12902 - 12903 - for_each_eth_queue(bp, i) { 12904 - struct bnx2x_fastpath *fp = &bp->fp[i]; 12905 - napi_schedule(&bnx2x_fp(bp, fp->index, napi)); 12906 - } 12907 - } 12908 - #endif 12909 - 12910 12897 static int bnx2x_validate_addr(struct net_device *dev) 12911 12898 { 12912 12899 struct bnx2x *bp = netdev_priv(dev); ··· 13100 13113 .ndo_tx_timeout = bnx2x_tx_timeout, 13101 13114 .ndo_vlan_rx_add_vid = bnx2x_vlan_rx_add_vid, 13102 13115 .ndo_vlan_rx_kill_vid = bnx2x_vlan_rx_kill_vid, 13103 - #ifdef CONFIG_NET_POLL_CONTROLLER 13104 - .ndo_poll_controller = poll_bnx2x, 13105 - #endif 13106 13116 .ndo_setup_tc = __bnx2x_setup_tc, 13107 13117 #ifdef CONFIG_BNX2X_SRIOV 13108 13118 .ndo_set_vf_mac = bnx2x_set_vf_mac,
+7 -20
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 7672 7672 bnxt_queue_sp_work(bp); 7673 7673 } 7674 7674 7675 - #ifdef CONFIG_NET_POLL_CONTROLLER 7676 - static void bnxt_poll_controller(struct net_device *dev) 7677 - { 7678 - struct bnxt *bp = netdev_priv(dev); 7679 - int i; 7680 - 7681 - /* Only process tx rings/combined rings in netpoll mode. */ 7682 - for (i = 0; i < bp->tx_nr_rings; i++) { 7683 - struct bnxt_tx_ring_info *txr = &bp->tx_ring[i]; 7684 - 7685 - napi_schedule(&txr->bnapi->napi); 7686 - } 7687 - } 7688 - #endif 7689 - 7690 7675 static void bnxt_timer(struct timer_list *t) 7691 7676 { 7692 7677 struct bnxt *bp = from_timer(bp, t, timer); ··· 8012 8027 if (ether_addr_equal(addr->sa_data, dev->dev_addr)) 8013 8028 return 0; 8014 8029 8015 - rc = bnxt_approve_mac(bp, addr->sa_data); 8030 + rc = bnxt_approve_mac(bp, addr->sa_data, true); 8016 8031 if (rc) 8017 8032 return rc; 8018 8033 ··· 8505 8520 .ndo_set_vf_spoofchk = bnxt_set_vf_spoofchk, 8506 8521 .ndo_set_vf_trust = bnxt_set_vf_trust, 8507 8522 #endif 8508 - #ifdef CONFIG_NET_POLL_CONTROLLER 8509 - .ndo_poll_controller = bnxt_poll_controller, 8510 - #endif 8511 8523 .ndo_setup_tc = bnxt_setup_tc, 8512 8524 #ifdef CONFIG_RFS_ACCEL 8513 8525 .ndo_rx_flow_steer = bnxt_rx_flow_steer, ··· 8809 8827 } else { 8810 8828 #ifdef CONFIG_BNXT_SRIOV 8811 8829 struct bnxt_vf_info *vf = &bp->vf; 8830 + bool strict_approval = true; 8812 8831 8813 8832 if (is_valid_ether_addr(vf->mac_addr)) { 8814 8833 /* overwrite netdev dev_addr with admin VF MAC */ 8815 8834 memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN); 8835 + /* Older PF driver or firmware may not approve this 8836 + * correctly. 8837 + */ 8838 + strict_approval = false; 8816 8839 } else { 8817 8840 eth_hw_addr_random(bp->dev); 8818 8841 } 8819 - rc = bnxt_approve_mac(bp, bp->dev->dev_addr); 8842 + rc = bnxt_approve_mac(bp, bp->dev->dev_addr, strict_approval); 8820 8843 #endif 8821 8844 } 8822 8845 return rc;
+3
drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
··· 46 46 } 47 47 } 48 48 49 + if (i == ARRAY_SIZE(nvm_params)) 50 + return -EOPNOTSUPP; 51 + 49 52 if (nvm_param.dir_type == BNXT_NVM_PORT_CFG) 50 53 idx = bp->pf.port_id; 51 54 else if (nvm_param.dir_type == BNXT_NVM_FUNC_CFG)
+5 -4
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
··· 1104 1104 mutex_unlock(&bp->hwrm_cmd_lock); 1105 1105 } 1106 1106 1107 - int bnxt_approve_mac(struct bnxt *bp, u8 *mac) 1107 + int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict) 1108 1108 { 1109 1109 struct hwrm_func_vf_cfg_input req = {0}; 1110 1110 int rc = 0; ··· 1122 1122 memcpy(req.dflt_mac_addr, mac, ETH_ALEN); 1123 1123 rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT); 1124 1124 mac_done: 1125 - if (rc) { 1125 + if (rc && strict) { 1126 1126 rc = -EADDRNOTAVAIL; 1127 1127 netdev_warn(bp->dev, "VF MAC address %pM not approved by the PF\n", 1128 1128 mac); 1129 + return rc; 1129 1130 } 1130 - return rc; 1131 + return 0; 1131 1132 } 1132 1133 #else 1133 1134 ··· 1145 1144 { 1146 1145 } 1147 1146 1148 - int bnxt_approve_mac(struct bnxt *bp, u8 *mac) 1147 + int bnxt_approve_mac(struct bnxt *bp, u8 *mac, bool strict) 1149 1148 { 1150 1149 return 0; 1151 1150 }
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
··· 39 39 void bnxt_sriov_disable(struct bnxt *); 40 40 void bnxt_hwrm_exec_fwd_req(struct bnxt *); 41 41 void bnxt_update_vf_mac(struct bnxt *); 42 - int bnxt_approve_mac(struct bnxt *, u8 *); 42 + int bnxt_approve_mac(struct bnxt *, u8 *, bool); 43 43 #endif
+14 -6
drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
··· 75 75 return 0; 76 76 } 77 77 78 - static void bnxt_tc_parse_vlan(struct bnxt *bp, 79 - struct bnxt_tc_actions *actions, 80 - const struct tc_action *tc_act) 78 + static int bnxt_tc_parse_vlan(struct bnxt *bp, 79 + struct bnxt_tc_actions *actions, 80 + const struct tc_action *tc_act) 81 81 { 82 - if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_POP) { 82 + switch (tcf_vlan_action(tc_act)) { 83 + case TCA_VLAN_ACT_POP: 83 84 actions->flags |= BNXT_TC_ACTION_FLAG_POP_VLAN; 84 - } else if (tcf_vlan_action(tc_act) == TCA_VLAN_ACT_PUSH) { 85 + break; 86 + case TCA_VLAN_ACT_PUSH: 85 87 actions->flags |= BNXT_TC_ACTION_FLAG_PUSH_VLAN; 86 88 actions->push_vlan_tci = htons(tcf_vlan_push_vid(tc_act)); 87 89 actions->push_vlan_tpid = tcf_vlan_push_proto(tc_act); 90 + break; 91 + default: 92 + return -EOPNOTSUPP; 88 93 } 94 + return 0; 89 95 } 90 96 91 97 static int bnxt_tc_parse_tunnel_set(struct bnxt *bp, ··· 140 134 141 135 /* Push/pop VLAN */ 142 136 if (is_tcf_vlan(tc_act)) { 143 - bnxt_tc_parse_vlan(bp, actions, tc_act); 137 + rc = bnxt_tc_parse_vlan(bp, actions, tc_act); 138 + if (rc) 139 + return rc; 144 140 continue; 145 141 } 146 142
+8
drivers/net/ethernet/cadence/macb_main.c
··· 3837 3837 .init = macb_init, 3838 3838 }; 3839 3839 3840 + static const struct macb_config sama5d3macb_config = { 3841 + .caps = MACB_CAPS_SG_DISABLED 3842 + | MACB_CAPS_USRIO_HAS_CLKEN | MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII, 3843 + .clk_init = macb_clk_init, 3844 + .init = macb_init, 3845 + }; 3846 + 3840 3847 static const struct macb_config pc302gem_config = { 3841 3848 .caps = MACB_CAPS_SG_DISABLED | MACB_CAPS_GIGABIT_MODE_AVAILABLE, 3842 3849 .dma_burst_length = 16, ··· 3911 3904 { .compatible = "cdns,gem", .data = &pc302gem_config }, 3912 3905 { .compatible = "atmel,sama5d2-gem", .data = &sama5d2_config }, 3913 3906 { .compatible = "atmel,sama5d3-gem", .data = &sama5d3_config }, 3907 + { .compatible = "atmel,sama5d3-macb", .data = &sama5d3macb_config }, 3914 3908 { .compatible = "atmel,sama5d4-gem", .data = &sama5d4_config }, 3915 3909 { .compatible = "cdns,at91rm9200-emac", .data = &emac_config }, 3916 3910 { .compatible = "cdns,emac", .data = &emac_config },
-1
drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
··· 753 753 }; 754 754 755 755 struct cpl_abort_req_rss6 { 756 - WR_HDR; 757 756 union opcode_tid ot; 758 757 __be32 srqidx_status; 759 758 };
+1 -1
drivers/net/ethernet/cirrus/ep93xx_eth.c
··· 332 332 return rx; 333 333 } 334 334 335 - static int ep93xx_xmit(struct sk_buff *skb, struct net_device *dev) 335 + static netdev_tx_t ep93xx_xmit(struct sk_buff *skb, struct net_device *dev) 336 336 { 337 337 struct ep93xx_priv *ep = netdev_priv(dev); 338 338 struct ep93xx_tdesc *txd;
+2 -2
drivers/net/ethernet/cirrus/mac89x0.c
··· 113 113 114 114 /* Index to functions, as function prototypes. */ 115 115 static int net_open(struct net_device *dev); 116 - static int net_send_packet(struct sk_buff *skb, struct net_device *dev); 116 + static netdev_tx_t net_send_packet(struct sk_buff *skb, struct net_device *dev); 117 117 static irqreturn_t net_interrupt(int irq, void *dev_id); 118 118 static void set_multicast_list(struct net_device *dev); 119 119 static void net_rx(struct net_device *dev); ··· 324 324 return 0; 325 325 } 326 326 327 - static int 327 + static netdev_tx_t 328 328 net_send_packet(struct sk_buff *skb, struct net_device *dev) 329 329 { 330 330 struct net_local *lp = netdev_priv(dev);
+1 -1
drivers/net/ethernet/hp/hp100.c
··· 2634 2634 /* Wait for link to drop */ 2635 2635 time = jiffies + (HZ / 10); 2636 2636 do { 2637 - if (~(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST)) 2637 + if (!(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST)) 2638 2638 break; 2639 2639 if (!in_interrupt()) 2640 2640 schedule_timeout_interruptible(1);
+3 -2
drivers/net/ethernet/i825xx/ether1.c
··· 64 64 #define RX_AREA_END 0x0fc00 65 65 66 66 static int ether1_open(struct net_device *dev); 67 - static int ether1_sendpacket(struct sk_buff *skb, struct net_device *dev); 67 + static netdev_tx_t ether1_sendpacket(struct sk_buff *skb, 68 + struct net_device *dev); 68 69 static irqreturn_t ether1_interrupt(int irq, void *dev_id); 69 70 static int ether1_close(struct net_device *dev); 70 71 static void ether1_setmulticastlist(struct net_device *dev); ··· 668 667 netif_wake_queue(dev); 669 668 } 670 669 671 - static int 670 + static netdev_tx_t 672 671 ether1_sendpacket (struct sk_buff *skb, struct net_device *dev) 673 672 { 674 673 int tmp, tst, nopaddr, txaddr, tbdaddr, dataddr;
+2 -2
drivers/net/ethernet/i825xx/lib82596.c
··· 347 347 0x7f /* *multi IA */ }; 348 348 349 349 static int i596_open(struct net_device *dev); 350 - static int i596_start_xmit(struct sk_buff *skb, struct net_device *dev); 350 + static netdev_tx_t i596_start_xmit(struct sk_buff *skb, struct net_device *dev); 351 351 static irqreturn_t i596_interrupt(int irq, void *dev_id); 352 352 static int i596_close(struct net_device *dev); 353 353 static void i596_add_cmd(struct net_device *dev, struct i596_cmd *cmd); ··· 966 966 } 967 967 968 968 969 - static int i596_start_xmit(struct sk_buff *skb, struct net_device *dev) 969 + static netdev_tx_t i596_start_xmit(struct sk_buff *skb, struct net_device *dev) 970 970 { 971 971 struct i596_private *lp = netdev_priv(dev); 972 972 struct tx_cmd *tx_cmd;
+4 -2
drivers/net/ethernet/i825xx/sun3_82586.c
··· 121 121 static irqreturn_t sun3_82586_interrupt(int irq,void *dev_id); 122 122 static int sun3_82586_open(struct net_device *dev); 123 123 static int sun3_82586_close(struct net_device *dev); 124 - static int sun3_82586_send_packet(struct sk_buff *,struct net_device *); 124 + static netdev_tx_t sun3_82586_send_packet(struct sk_buff *, 125 + struct net_device *); 125 126 static struct net_device_stats *sun3_82586_get_stats(struct net_device *dev); 126 127 static void set_multicast_list(struct net_device *dev); 127 128 static void sun3_82586_timeout(struct net_device *dev); ··· 1003 1002 * send frame 1004 1003 */ 1005 1004 1006 - static int sun3_82586_send_packet(struct sk_buff *skb, struct net_device *dev) 1005 + static netdev_tx_t 1006 + sun3_82586_send_packet(struct sk_buff *skb, struct net_device *dev) 1007 1007 { 1008 1008 int len,i; 1009 1009 #ifndef NO_NOPCOMMANDS
+10 -5
drivers/net/ethernet/ibm/emac/core.c
··· 2677 2677 if (of_phy_is_fixed_link(np)) { 2678 2678 int res = emac_dt_mdio_probe(dev); 2679 2679 2680 - if (!res) { 2681 - res = of_phy_register_fixed_link(np); 2682 - if (res) 2683 - mdiobus_unregister(dev->mii_bus); 2680 + if (res) 2681 + return res; 2682 + 2683 + res = of_phy_register_fixed_link(np); 2684 + dev->phy_dev = of_phy_find_device(np); 2685 + if (res || !dev->phy_dev) { 2686 + mdiobus_unregister(dev->mii_bus); 2687 + return res ? res : -EINVAL; 2684 2688 } 2685 - return res; 2689 + emac_adjust_link(dev->ndev); 2690 + put_device(&dev->phy_dev->mdio.dev); 2686 2691 } 2687 2692 return 0; 2688 2693 }
-3
drivers/net/ethernet/intel/fm10k/fm10k.h
··· 504 504 void fm10k_service_event_schedule(struct fm10k_intfc *interface); 505 505 void fm10k_macvlan_schedule(struct fm10k_intfc *interface); 506 506 void fm10k_update_rx_drop_en(struct fm10k_intfc *interface); 507 - #ifdef CONFIG_NET_POLL_CONTROLLER 508 - void fm10k_netpoll(struct net_device *netdev); 509 - #endif 510 507 511 508 /* Netdev */ 512 509 struct net_device *fm10k_alloc_netdev(const struct fm10k_info *info);
-3
drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
··· 1648 1648 .ndo_udp_tunnel_del = fm10k_udp_tunnel_del, 1649 1649 .ndo_dfwd_add_station = fm10k_dfwd_add_station, 1650 1650 .ndo_dfwd_del_station = fm10k_dfwd_del_station, 1651 - #ifdef CONFIG_NET_POLL_CONTROLLER 1652 - .ndo_poll_controller = fm10k_netpoll, 1653 - #endif 1654 1651 .ndo_features_check = fm10k_features_check, 1655 1652 }; 1656 1653
-22
drivers/net/ethernet/intel/fm10k/fm10k_pci.c
··· 1210 1210 return IRQ_HANDLED; 1211 1211 } 1212 1212 1213 - #ifdef CONFIG_NET_POLL_CONTROLLER 1214 - /** 1215 - * fm10k_netpoll - A Polling 'interrupt' handler 1216 - * @netdev: network interface device structure 1217 - * 1218 - * This is used by netconsole to send skbs without having to re-enable 1219 - * interrupts. It's not called while the normal interrupt routine is executing. 1220 - **/ 1221 - void fm10k_netpoll(struct net_device *netdev) 1222 - { 1223 - struct fm10k_intfc *interface = netdev_priv(netdev); 1224 - int i; 1225 - 1226 - /* if interface is down do nothing */ 1227 - if (test_bit(__FM10K_DOWN, interface->state)) 1228 - return; 1229 - 1230 - for (i = 0; i < interface->num_q_vectors; i++) 1231 - fm10k_msix_clean_rings(0, interface->q_vector[i]); 1232 - } 1233 - 1234 - #endif 1235 1213 #define FM10K_ERR_MSG(type) case (type): error = #type; break 1236 1214 static void fm10k_handle_fault(struct fm10k_intfc *interface, int type, 1237 1215 struct fm10k_fault *fault)
-26
drivers/net/ethernet/intel/i40evf/i40evf_main.c
··· 396 396 adapter->aq_required |= I40EVF_FLAG_AQ_MAP_VECTORS; 397 397 } 398 398 399 - #ifdef CONFIG_NET_POLL_CONTROLLER 400 - /** 401 - * i40evf_netpoll - A Polling 'interrupt' handler 402 - * @netdev: network interface device structure 403 - * 404 - * This is used by netconsole to send skbs without having to re-enable 405 - * interrupts. It's not called while the normal interrupt routine is executing. 406 - **/ 407 - static void i40evf_netpoll(struct net_device *netdev) 408 - { 409 - struct i40evf_adapter *adapter = netdev_priv(netdev); 410 - int q_vectors = adapter->num_msix_vectors - NONQ_VECS; 411 - int i; 412 - 413 - /* if interface is down do nothing */ 414 - if (test_bit(__I40E_VSI_DOWN, adapter->vsi.state)) 415 - return; 416 - 417 - for (i = 0; i < q_vectors; i++) 418 - i40evf_msix_clean_rings(0, &adapter->q_vectors[i]); 419 - } 420 - 421 - #endif 422 399 /** 423 400 * i40evf_irq_affinity_notify - Callback for affinity changes 424 401 * @notify: context as to what irq was changed ··· 3206 3229 .ndo_features_check = i40evf_features_check, 3207 3230 .ndo_fix_features = i40evf_fix_features, 3208 3231 .ndo_set_features = i40evf_set_features, 3209 - #ifdef CONFIG_NET_POLL_CONTROLLER 3210 - .ndo_poll_controller = i40evf_netpoll, 3211 - #endif 3212 3232 .ndo_setup_tc = i40evf_setup_tc, 3213 3233 }; 3214 3234
-27
drivers/net/ethernet/intel/ice/ice_main.c
··· 4806 4806 stats->rx_length_errors = vsi_stats->rx_length_errors; 4807 4807 } 4808 4808 4809 - #ifdef CONFIG_NET_POLL_CONTROLLER 4810 - /** 4811 - * ice_netpoll - polling "interrupt" handler 4812 - * @netdev: network interface device structure 4813 - * 4814 - * Used by netconsole to send skbs without having to re-enable interrupts. 4815 - * This is not called in the normal interrupt path. 4816 - */ 4817 - static void ice_netpoll(struct net_device *netdev) 4818 - { 4819 - struct ice_netdev_priv *np = netdev_priv(netdev); 4820 - struct ice_vsi *vsi = np->vsi; 4821 - struct ice_pf *pf = vsi->back; 4822 - int i; 4823 - 4824 - if (test_bit(__ICE_DOWN, vsi->state) || 4825 - !test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) 4826 - return; 4827 - 4828 - for (i = 0; i < vsi->num_q_vectors; i++) 4829 - ice_msix_clean_rings(0, vsi->q_vectors[i]); 4830 - } 4831 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 4832 - 4833 4809 /** 4834 4810 * ice_napi_disable_all - Disable NAPI for all q_vectors in the VSI 4835 4811 * @vsi: VSI having NAPI disabled ··· 5473 5497 .ndo_validate_addr = eth_validate_addr, 5474 5498 .ndo_change_mtu = ice_change_mtu, 5475 5499 .ndo_get_stats64 = ice_get_stats64, 5476 - #ifdef CONFIG_NET_POLL_CONTROLLER 5477 - .ndo_poll_controller = ice_netpoll, 5478 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 5479 5500 .ndo_vlan_rx_add_vid = ice_vlan_rx_add_vid, 5480 5501 .ndo_vlan_rx_kill_vid = ice_vlan_rx_kill_vid, 5481 5502 .ndo_set_features = ice_set_features,
-30
drivers/net/ethernet/intel/igb/igb_main.c
··· 205 205 .priority = 0 206 206 }; 207 207 #endif 208 - #ifdef CONFIG_NET_POLL_CONTROLLER 209 - /* for netdump / net console */ 210 - static void igb_netpoll(struct net_device *); 211 - #endif 212 208 #ifdef CONFIG_PCI_IOV 213 209 static unsigned int max_vfs; 214 210 module_param(max_vfs, uint, 0); ··· 2877 2881 .ndo_set_vf_spoofchk = igb_ndo_set_vf_spoofchk, 2878 2882 .ndo_set_vf_trust = igb_ndo_set_vf_trust, 2879 2883 .ndo_get_vf_config = igb_ndo_get_vf_config, 2880 - #ifdef CONFIG_NET_POLL_CONTROLLER 2881 - .ndo_poll_controller = igb_netpoll, 2882 - #endif 2883 2884 .ndo_fix_features = igb_fix_features, 2884 2885 .ndo_set_features = igb_set_features, 2885 2886 .ndo_fdb_add = igb_ndo_fdb_add, ··· 9045 9052 #endif 9046 9053 return 0; 9047 9054 } 9048 - 9049 - #ifdef CONFIG_NET_POLL_CONTROLLER 9050 - /* Polling 'interrupt' - used by things like netconsole to send skbs 9051 - * without having to re-enable interrupts. It's not called while 9052 - * the interrupt routine is executing. 9053 - */ 9054 - static void igb_netpoll(struct net_device *netdev) 9055 - { 9056 - struct igb_adapter *adapter = netdev_priv(netdev); 9057 - struct e1000_hw *hw = &adapter->hw; 9058 - struct igb_q_vector *q_vector; 9059 - int i; 9060 - 9061 - for (i = 0; i < adapter->num_q_vectors; i++) { 9062 - q_vector = adapter->q_vector[i]; 9063 - if (adapter->flags & IGB_FLAG_HAS_MSIX) 9064 - wr32(E1000_EIMC, q_vector->eims_value); 9065 - else 9066 - igb_irq_disable(adapter); 9067 - napi_schedule(&q_vector->napi); 9068 - } 9069 - } 9070 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 9071 9055 9072 9056 /** 9073 9057 * igb_io_error_detected - called when PCI error is detected
-25
drivers/net/ethernet/intel/ixgb/ixgb_main.c
··· 81 81 __be16 proto, u16 vid); 82 82 static void ixgb_restore_vlan(struct ixgb_adapter *adapter); 83 83 84 - #ifdef CONFIG_NET_POLL_CONTROLLER 85 - /* for netdump / net console */ 86 - static void ixgb_netpoll(struct net_device *dev); 87 - #endif 88 - 89 84 static pci_ers_result_t ixgb_io_error_detected (struct pci_dev *pdev, 90 85 enum pci_channel_state state); 91 86 static pci_ers_result_t ixgb_io_slot_reset (struct pci_dev *pdev); ··· 343 348 .ndo_tx_timeout = ixgb_tx_timeout, 344 349 .ndo_vlan_rx_add_vid = ixgb_vlan_rx_add_vid, 345 350 .ndo_vlan_rx_kill_vid = ixgb_vlan_rx_kill_vid, 346 - #ifdef CONFIG_NET_POLL_CONTROLLER 347 - .ndo_poll_controller = ixgb_netpoll, 348 - #endif 349 351 .ndo_fix_features = ixgb_fix_features, 350 352 .ndo_set_features = ixgb_set_features, 351 353 }; ··· 2186 2194 for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID) 2187 2195 ixgb_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid); 2188 2196 } 2189 - 2190 - #ifdef CONFIG_NET_POLL_CONTROLLER 2191 - /* 2192 - * Polling 'interrupt' - used by things like netconsole to send skbs 2193 - * without having to re-enable interrupts. It's not called while 2194 - * the interrupt routine is executing. 2195 - */ 2196 - 2197 - static void ixgb_netpoll(struct net_device *dev) 2198 - { 2199 - struct ixgb_adapter *adapter = netdev_priv(dev); 2200 - 2201 - disable_irq(adapter->pdev->irq); 2202 - ixgb_intr(adapter->pdev->irq, dev); 2203 - enable_irq(adapter->pdev->irq); 2204 - } 2205 - #endif 2206 2197 2207 2198 /** 2208 2199 * ixgb_io_error_detected - called when PCI error is detected
-25
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 8768 8768 return err; 8769 8769 } 8770 8770 8771 - #ifdef CONFIG_NET_POLL_CONTROLLER 8772 - /* 8773 - * Polling 'interrupt' - used by things like netconsole to send skbs 8774 - * without having to re-enable interrupts. It's not called while 8775 - * the interrupt routine is executing. 8776 - */ 8777 - static void ixgbe_netpoll(struct net_device *netdev) 8778 - { 8779 - struct ixgbe_adapter *adapter = netdev_priv(netdev); 8780 - int i; 8781 - 8782 - /* if interface is down do nothing */ 8783 - if (test_bit(__IXGBE_DOWN, &adapter->state)) 8784 - return; 8785 - 8786 - /* loop through and schedule all active queues */ 8787 - for (i = 0; i < adapter->num_q_vectors; i++) 8788 - ixgbe_msix_clean_rings(0, adapter->q_vector[i]); 8789 - } 8790 - 8791 - #endif 8792 - 8793 8771 static void ixgbe_get_ring_stats64(struct rtnl_link_stats64 *stats, 8794 8772 struct ixgbe_ring *ring) 8795 8773 { ··· 10229 10251 .ndo_get_vf_config = ixgbe_ndo_get_vf_config, 10230 10252 .ndo_get_stats64 = ixgbe_get_stats64, 10231 10253 .ndo_setup_tc = __ixgbe_setup_tc, 10232 - #ifdef CONFIG_NET_POLL_CONTROLLER 10233 - .ndo_poll_controller = ixgbe_netpoll, 10234 - #endif 10235 10254 #ifdef IXGBE_FCOE 10236 10255 .ndo_select_queue = ixgbe_select_queue, 10237 10256 .ndo_fcoe_ddp_setup = ixgbe_fcoe_ddp_get,
-21
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 4233 4233 return 0; 4234 4234 } 4235 4235 4236 - #ifdef CONFIG_NET_POLL_CONTROLLER 4237 - /* Polling 'interrupt' - used by things like netconsole to send skbs 4238 - * without having to re-enable interrupts. It's not called while 4239 - * the interrupt routine is executing. 4240 - */ 4241 - static void ixgbevf_netpoll(struct net_device *netdev) 4242 - { 4243 - struct ixgbevf_adapter *adapter = netdev_priv(netdev); 4244 - int i; 4245 - 4246 - /* if interface is down do nothing */ 4247 - if (test_bit(__IXGBEVF_DOWN, &adapter->state)) 4248 - return; 4249 - for (i = 0; i < adapter->num_rx_queues; i++) 4250 - ixgbevf_msix_clean_rings(0, adapter->q_vector[i]); 4251 - } 4252 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 4253 - 4254 4236 static int ixgbevf_suspend(struct pci_dev *pdev, pm_message_t state) 4255 4237 { 4256 4238 struct net_device *netdev = pci_get_drvdata(pdev); ··· 4464 4482 .ndo_tx_timeout = ixgbevf_tx_timeout, 4465 4483 .ndo_vlan_rx_add_vid = ixgbevf_vlan_rx_add_vid, 4466 4484 .ndo_vlan_rx_kill_vid = ixgbevf_vlan_rx_kill_vid, 4467 - #ifdef CONFIG_NET_POLL_CONTROLLER 4468 - .ndo_poll_controller = ixgbevf_netpoll, 4469 - #endif 4470 4485 .ndo_features_check = ixgbevf_features_check, 4471 4486 .ndo_bpf = ixgbevf_xdp, 4472 4487 };
+6 -7
drivers/net/ethernet/marvell/mvneta.c
··· 1890 1890 if (!data || !(rx_desc->buf_phys_addr)) 1891 1891 continue; 1892 1892 1893 - dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, 1894 - MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); 1893 + dma_unmap_page(pp->dev->dev.parent, rx_desc->buf_phys_addr, 1894 + PAGE_SIZE, DMA_FROM_DEVICE); 1895 1895 __free_page(data); 1896 1896 } 1897 1897 } ··· 2008 2008 skb_add_rx_frag(rxq->skb, frag_num, page, 2009 2009 frag_offset, frag_size, 2010 2010 PAGE_SIZE); 2011 - dma_unmap_single(dev->dev.parent, phys_addr, 2012 - PAGE_SIZE, DMA_FROM_DEVICE); 2011 + dma_unmap_page(dev->dev.parent, phys_addr, 2012 + PAGE_SIZE, DMA_FROM_DEVICE); 2013 2013 rxq->left_size -= frag_size; 2014 2014 } 2015 2015 } else { ··· 2039 2039 frag_offset, frag_size, 2040 2040 PAGE_SIZE); 2041 2041 2042 - dma_unmap_single(dev->dev.parent, phys_addr, 2043 - PAGE_SIZE, 2044 - DMA_FROM_DEVICE); 2042 + dma_unmap_page(dev->dev.parent, phys_addr, 2043 + PAGE_SIZE, DMA_FROM_DEVICE); 2045 2044 2046 2045 rxq->left_size -= frag_size; 2047 2046 }
+12 -19
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
··· 58 58 */ 59 59 static void mvpp2_mac_config(struct net_device *dev, unsigned int mode, 60 60 const struct phylink_link_state *state); 61 + static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode, 62 + phy_interface_t interface, struct phy_device *phy); 61 63 62 64 /* Queue modes */ 63 65 #define MVPP2_QDIST_SINGLE_MODE 0 ··· 3055 3053 cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK); 3056 3054 } 3057 3055 3058 - cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK; 3059 - if (cause_tx) { 3060 - cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET; 3061 - mvpp2_tx_done(port, cause_tx, qv->sw_thread_id); 3056 + if (port->has_tx_irqs) { 3057 + cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK; 3058 + if (cause_tx) { 3059 + cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET; 3060 + mvpp2_tx_done(port, cause_tx, qv->sw_thread_id); 3061 + } 3062 3062 } 3063 3063 3064 3064 /* Process RX packets */ ··· 3146 3142 mvpp22_mode_reconfigure(port); 3147 3143 3148 3144 if (port->phylink) { 3145 + netif_carrier_off(port->dev); 3149 3146 phylink_start(port->phylink); 3150 3147 } else { 3151 3148 /* Phylink isn't used as of now for ACPI, so the MAC has to be ··· 3155 3150 */ 3156 3151 struct phylink_link_state state = { 3157 3152 .interface = port->phy_interface, 3158 - .link = 1, 3159 3153 }; 3160 3154 mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state); 3155 + mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface, 3156 + NULL); 3161 3157 } 3162 3158 3163 3159 netif_tx_start_all_queues(port->dev); ··· 4501 4495 return; 4502 4496 } 4503 4497 4504 - netif_tx_stop_all_queues(port->dev); 4505 - if (!port->has_phy) 4506 - netif_carrier_off(port->dev); 4507 - 4508 4498 /* Make sure the port is disabled when reconfiguring the mode */ 4509 4499 mvpp2_port_disable(port); 4510 4500 ··· 4525 4523 if (port->priv->hw_version == MVPP21 && port->flags & MVPP2_F_LOOPBACK) 4526 4524 mvpp2_port_loopback_set(port, state); 4527 4525 4528 - /* If the port already was up, make sure it's still in the same state */ 4529 - if (state->link || !port->has_phy) { 4530 - mvpp2_port_enable(port); 4531 - 4532 - mvpp2_egress_enable(port); 4533 - mvpp2_ingress_enable(port); 4534 - if (!port->has_phy) 4535 - netif_carrier_on(dev); 4536 - netif_tx_wake_all_queues(dev); 4537 - } 4526 + mvpp2_port_enable(port); 4538 4527 } 4539 4528 4540 4529 static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
-20
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1286 1286 mutex_unlock(&mdev->state_lock); 1287 1287 } 1288 1288 1289 - #ifdef CONFIG_NET_POLL_CONTROLLER 1290 - static void mlx4_en_netpoll(struct net_device *dev) 1291 - { 1292 - struct mlx4_en_priv *priv = netdev_priv(dev); 1293 - struct mlx4_en_cq *cq; 1294 - int i; 1295 - 1296 - for (i = 0; i < priv->tx_ring_num[TX]; i++) { 1297 - cq = priv->tx_cq[TX][i]; 1298 - napi_schedule(&cq->napi); 1299 - } 1300 - } 1301 - #endif 1302 - 1303 1289 static int mlx4_en_set_rss_steer_rules(struct mlx4_en_priv *priv) 1304 1290 { 1305 1291 u64 reg_id; ··· 2932 2946 .ndo_tx_timeout = mlx4_en_tx_timeout, 2933 2947 .ndo_vlan_rx_add_vid = mlx4_en_vlan_rx_add_vid, 2934 2948 .ndo_vlan_rx_kill_vid = mlx4_en_vlan_rx_kill_vid, 2935 - #ifdef CONFIG_NET_POLL_CONTROLLER 2936 - .ndo_poll_controller = mlx4_en_netpoll, 2937 - #endif 2938 2949 .ndo_set_features = mlx4_en_set_features, 2939 2950 .ndo_fix_features = mlx4_en_fix_features, 2940 2951 .ndo_setup_tc = __mlx4_en_setup_tc, ··· 2966 2983 .ndo_set_vf_link_state = mlx4_en_set_vf_link_state, 2967 2984 .ndo_get_vf_stats = mlx4_en_get_vf_stats, 2968 2985 .ndo_get_vf_config = mlx4_en_get_vf_config, 2969 - #ifdef CONFIG_NET_POLL_CONTROLLER 2970 - .ndo_poll_controller = mlx4_en_netpoll, 2971 - #endif 2972 2986 .ndo_set_features = mlx4_en_set_features, 2973 2987 .ndo_fix_features = mlx4_en_fix_features, 2974 2988 .ndo_setup_tc = __mlx4_en_setup_tc,
+2 -1
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 240 240 struct mlx4_dev *dev = &priv->dev; 241 241 struct mlx4_eq *eq = &priv->eq_table.eq[vec]; 242 242 243 - if (!eq->affinity_mask || cpumask_empty(eq->affinity_mask)) 243 + if (!cpumask_available(eq->affinity_mask) || 244 + cpumask_empty(eq->affinity_mask)) 244 245 return; 245 246 246 247 hint_err = irq_set_affinity_hint(eq->irq, eq->affinity_mask);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
··· 206 206 u8 own; 207 207 208 208 do { 209 - own = ent->lay->status_own; 209 + own = READ_ONCE(ent->lay->status_own); 210 210 if (!(own & CMD_OWNER_HW)) { 211 211 ent->ret = 0; 212 212 return;
+2 -1
drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
··· 183 183 184 184 void mlx5e_tls_build_netdev(struct mlx5e_priv *priv) 185 185 { 186 - u32 caps = mlx5_accel_tls_device_caps(priv->mdev); 187 186 struct net_device *netdev = priv->netdev; 187 + u32 caps; 188 188 189 189 if (!mlx5_accel_is_tls_device(priv->mdev)) 190 190 return; 191 191 192 + caps = mlx5_accel_tls_device_caps(priv->mdev); 192 193 if (caps & MLX5_ACCEL_TLS_TX) { 193 194 netdev->features |= NETIF_F_HW_TLS_TX; 194 195 netdev->hw_features |= NETIF_F_HW_TLS_TX;
-19
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
··· 4315 4315 } 4316 4316 } 4317 4317 4318 - #ifdef CONFIG_NET_POLL_CONTROLLER 4319 - /* Fake "interrupt" called by netpoll (eg netconsole) to send skbs without 4320 - * reenabling interrupts. 4321 - */ 4322 - static void mlx5e_netpoll(struct net_device *dev) 4323 - { 4324 - struct mlx5e_priv *priv = netdev_priv(dev); 4325 - struct mlx5e_channels *chs = &priv->channels; 4326 - 4327 - int i; 4328 - 4329 - for (i = 0; i < chs->num; i++) 4330 - napi_schedule(&chs->c[i]->napi); 4331 - } 4332 - #endif 4333 - 4334 4318 static const struct net_device_ops mlx5e_netdev_ops = { 4335 4319 .ndo_open = mlx5e_open, 4336 4320 .ndo_stop = mlx5e_close, ··· 4339 4355 .ndo_xdp_xmit = mlx5e_xdp_xmit, 4340 4356 #ifdef CONFIG_MLX5_EN_ARFS 4341 4357 .ndo_rx_flow_steer = mlx5e_rx_flow_steer, 4342 - #endif 4343 - #ifdef CONFIG_NET_POLL_CONTROLLER 4344 - .ndo_poll_controller = mlx5e_netpoll, 4345 4358 #endif 4346 4359 #ifdef CONFIG_MLX5_ESWITCH 4347 4360 /* SRIOV E-Switch NDOs */
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/transobj.c
··· 509 509 510 510 sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx); 511 511 512 - if (next_state == MLX5_RQC_STATE_RDY) { 512 + if (next_state == MLX5_SQC_STATE_RDY) { 513 513 MLX5_SET(sqc, sqc, hairpin_peer_rq, peer_rq); 514 514 MLX5_SET(sqc, sqc, hairpin_peer_vhca, peer_vhca); 515 515 }
+2 -2
drivers/net/ethernet/mellanox/mlxsw/spectrum.c
··· 44 44 #define MLXSW_SP_FWREV_MINOR_TO_BRANCH(minor) ((minor) / 100) 45 45 46 46 #define MLXSW_SP1_FWREV_MAJOR 13 47 - #define MLXSW_SP1_FWREV_MINOR 1702 48 - #define MLXSW_SP1_FWREV_SUBMINOR 6 47 + #define MLXSW_SP1_FWREV_MINOR 1703 48 + #define MLXSW_SP1_FWREV_SUBMINOR 4 49 49 #define MLXSW_SP1_FWREV_CAN_RESET_MINOR 1702 50 50 51 51 static const struct mlxsw_fw_rev mlxsw_sp1_fw_rev = {
+3 -3
drivers/net/ethernet/microchip/lan743x_main.c
··· 2850 2850 lan743x_hardware_cleanup(adapter); 2851 2851 } 2852 2852 2853 - #ifdef CONFIG_PM 2853 + #ifdef CONFIG_PM_SLEEP 2854 2854 static u16 lan743x_pm_wakeframe_crc16(const u8 *buf, int len) 2855 2855 { 2856 2856 return bitrev16(crc16(0xFFFF, buf, len)); ··· 3016 3016 static const struct dev_pm_ops lan743x_pm_ops = { 3017 3017 SET_SYSTEM_SLEEP_PM_OPS(lan743x_pm_suspend, lan743x_pm_resume) 3018 3018 }; 3019 - #endif /*CONFIG_PM */ 3019 + #endif /* CONFIG_PM_SLEEP */ 3020 3020 3021 3021 static const struct pci_device_id lan743x_pcidev_tbl[] = { 3022 3022 { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7430) }, ··· 3028 3028 .id_table = lan743x_pcidev_tbl, 3029 3029 .probe = lan743x_pcidev_probe, 3030 3030 .remove = lan743x_pcidev_remove, 3031 - #ifdef CONFIG_PM 3031 + #ifdef CONFIG_PM_SLEEP 3032 3032 .driver.pm = &lan743x_pm_ops, 3033 3033 #endif 3034 3034 .shutdown = lan743x_pcidev_shutdown,
+9 -3
drivers/net/ethernet/mscc/ocelot_board.c
··· 91 91 struct sk_buff *skb; 92 92 struct net_device *dev; 93 93 u32 *buf; 94 - int sz, len; 94 + int sz, len, buf_len; 95 95 u32 ifh[4]; 96 96 u32 val; 97 97 struct frame_info info; ··· 116 116 err = -ENOMEM; 117 117 break; 118 118 } 119 - buf = (u32 *)skb_put(skb, info.len); 119 + buf_len = info.len - ETH_FCS_LEN; 120 + buf = (u32 *)skb_put(skb, buf_len); 120 121 121 122 len = 0; 122 123 do { 123 124 sz = ocelot_rx_frame_word(ocelot, grp, false, &val); 124 125 *buf++ = val; 125 126 len += sz; 126 - } while ((sz == 4) && (len < info.len)); 127 + } while (len < buf_len); 128 + 129 + /* Read the FCS and discard it */ 130 + sz = ocelot_rx_frame_word(ocelot, grp, false, &val); 131 + /* Update the statistics if part of the FCS was read before */ 132 + len -= ETH_FCS_LEN - sz; 127 133 128 134 if (sz < 0) { 129 135 err = sz;
-18
drivers/net/ethernet/netronome/nfp/nfp_net_common.c
··· 3146 3146 return nfp_net_reconfig_mbox(nn, NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL); 3147 3147 } 3148 3148 3149 - #ifdef CONFIG_NET_POLL_CONTROLLER 3150 - static void nfp_net_netpoll(struct net_device *netdev) 3151 - { 3152 - struct nfp_net *nn = netdev_priv(netdev); 3153 - int i; 3154 - 3155 - /* nfp_net's NAPIs are statically allocated so even if there is a race 3156 - * with reconfig path this will simply try to schedule some disabled 3157 - * NAPI instances. 3158 - */ 3159 - for (i = 0; i < nn->dp.num_stack_tx_rings; i++) 3160 - napi_schedule_irqoff(&nn->r_vecs[i].napi); 3161 - } 3162 - #endif 3163 - 3164 3149 static void nfp_net_stat64(struct net_device *netdev, 3165 3150 struct rtnl_link_stats64 *stats) 3166 3151 { ··· 3504 3519 .ndo_get_stats64 = nfp_net_stat64, 3505 3520 .ndo_vlan_rx_add_vid = nfp_net_vlan_rx_add_vid, 3506 3521 .ndo_vlan_rx_kill_vid = nfp_net_vlan_rx_kill_vid, 3507 - #ifdef CONFIG_NET_POLL_CONTROLLER 3508 - .ndo_poll_controller = nfp_net_netpoll, 3509 - #endif 3510 3522 .ndo_set_vf_mac = nfp_app_set_vf_mac, 3511 3523 .ndo_set_vf_vlan = nfp_app_set_vf_vlan, 3512 3524 .ndo_set_vf_spoofchk = nfp_app_set_vf_spoofchk,
+28 -17
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
··· 190 190 191 191 static void 192 192 qed_dcbx_set_params(struct qed_dcbx_results *p_data, 193 - struct qed_hw_info *p_info, 194 - bool enable, 195 - u8 prio, 196 - u8 tc, 193 + struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 194 + bool enable, u8 prio, u8 tc, 197 195 enum dcbx_protocol_type type, 198 196 enum qed_pci_personality personality) 199 197 { ··· 204 206 else 205 207 p_data->arr[type].update = DONT_UPDATE_DCB_DSCP; 206 208 209 + /* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */ 210 + if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) || 211 + test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits))) 212 + p_data->arr[type].dont_add_vlan0 = true; 213 + 207 214 /* QM reconf data */ 208 - if (p_info->personality == personality) 209 - qed_hw_info_set_offload_tc(p_info, tc); 215 + if (p_hwfn->hw_info.personality == personality) 216 + qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc); 217 + 218 + /* Configure dcbx vlan priority in doorbell block for roce EDPM */ 219 + if (test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits) && 220 + type == DCBX_PROTOCOL_ROCE) { 221 + qed_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1); 222 + qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1); 223 + } 210 224 } 211 225 212 226 /* Update app protocol data and hw_info fields with the TLV info */ 213 227 static void 214 228 qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, 215 - struct qed_hwfn *p_hwfn, 216 - bool enable, 217 - u8 prio, u8 tc, enum dcbx_protocol_type type) 229 + struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 230 + bool enable, u8 prio, u8 tc, 231 + enum dcbx_protocol_type type) 218 232 { 219 - struct qed_hw_info *p_info = &p_hwfn->hw_info; 220 233 enum qed_pci_personality personality; 221 234 enum dcbx_protocol_type id; 222 235 int i; ··· 240 231 241 232 personality = qed_dcbx_app_update[i].personality; 242 233 243 - qed_dcbx_set_params(p_data, p_info, enable, 234 + qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, 244 235 prio, tc, type, personality); 245 236 } 246 237 } ··· 274 265 * reconfiguring QM. Get protocol specific data for PF update ramrod command. 275 266 */ 276 267 static int 277 - qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, 268 + qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 278 269 struct qed_dcbx_results *p_data, 279 270 struct dcbx_app_priority_entry *p_tbl, 280 271 u32 pri_tc_tbl, int count, u8 dcbx_version) ··· 318 309 enable = true; 319 310 } 320 311 321 - qed_dcbx_update_app_info(p_data, p_hwfn, enable, 312 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 322 313 priority, tc, type); 323 314 } 324 315 } ··· 340 331 continue; 341 332 342 333 enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version; 343 - qed_dcbx_update_app_info(p_data, p_hwfn, enable, 334 + qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, 344 335 priority, tc, type); 345 336 } 346 337 ··· 350 341 /* Parse app TLV's to update TC information in hw_info structure for 351 342 * reconfiguring QM. Get protocol specific data for PF update ramrod command. 352 343 */ 353 - static int qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn) 344 + static int 345 + qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) 354 346 { 355 347 struct dcbx_app_priority_feature *p_app; 356 348 struct dcbx_app_priority_entry *p_tbl; ··· 375 365 p_info = &p_hwfn->hw_info; 376 366 num_entries = QED_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES); 377 367 378 - rc = qed_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl, 368 + rc = qed_dcbx_process_tlv(p_hwfn, p_ptt, &data, p_tbl, pri_tc_tbl, 379 369 num_entries, dcbx_version); 380 370 if (rc) 381 371 return rc; ··· 901 891 return rc; 902 892 903 893 if (type == QED_DCBX_OPERATIONAL_MIB) { 904 - rc = qed_dcbx_process_mib_info(p_hwfn); 894 + rc = qed_dcbx_process_mib_info(p_hwfn, p_ptt); 905 895 if (!rc) { 906 896 /* reconfigure tcs of QM queues according 907 897 * to negotiation results ··· 964 954 p_data->dcb_enable_flag = p_src->arr[type].enable; 965 955 p_data->dcb_priority = p_src->arr[type].priority; 966 956 p_data->dcb_tc = p_src->arr[type].tc; 957 + p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0; 967 958 } 968 959 969 960 /* Set pf update ramrod command params */
+1
drivers/net/ethernet/qlogic/qed/qed_dcbx.h
··· 55 55 u8 update; /* Update indication */ 56 56 u8 priority; /* Priority */ 57 57 u8 tc; /* Traffic Class */ 58 + bool dont_add_vlan0; /* Do not insert a vlan tag with id 0 */ 58 59 }; 59 60 60 61 #define QED_DCBX_VERSION_DISABLED 0
+14 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 1706 1706 int qed_hw_init(struct qed_dev *cdev, struct qed_hw_init_params *p_params) 1707 1707 { 1708 1708 struct qed_load_req_params load_req_params; 1709 - u32 load_code, param, drv_mb_param; 1709 + u32 load_code, resp, param, drv_mb_param; 1710 1710 bool b_default_mtu = true; 1711 1711 struct qed_hwfn *p_hwfn; 1712 1712 int rc = 0, mfw_rc, i; ··· 1852 1852 1853 1853 if (IS_PF(cdev)) { 1854 1854 p_hwfn = QED_LEADING_HWFN(cdev); 1855 + 1856 + /* Get pre-negotiated values for stag, bandwidth etc. */ 1857 + DP_VERBOSE(p_hwfn, 1858 + QED_MSG_SPQ, 1859 + "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n"); 1860 + drv_mb_param = 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET; 1861 + rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, 1862 + DRV_MSG_CODE_GET_OEM_UPDATES, 1863 + drv_mb_param, &resp, &param); 1864 + if (rc) 1865 + DP_NOTICE(p_hwfn, 1866 + "Failed to send GET_OEM_UPDATES attention request\n"); 1867 + 1855 1868 drv_mb_param = STORM_FW_VERSION; 1856 1869 rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt, 1857 1870 DRV_MSG_CODE_OV_UPDATE_STORM_FW_VER,
+4
drivers/net/ethernet/qlogic/qed/qed_hsi.h
··· 12414 12414 #define DRV_MSG_SET_RESOURCE_VALUE_MSG 0x35000000 12415 12415 #define DRV_MSG_CODE_OV_UPDATE_WOL 0x38000000 12416 12416 #define DRV_MSG_CODE_OV_UPDATE_ESWITCH_MODE 0x39000000 12417 + #define DRV_MSG_CODE_GET_OEM_UPDATES 0x41000000 12417 12418 12418 12419 #define DRV_MSG_CODE_BW_UPDATE_ACK 0x32000000 12419 12420 #define DRV_MSG_CODE_NIG_DRAIN 0x30000000 ··· 12541 12540 #define DRV_MB_PARAM_ESWITCH_MODE_NONE 0x0 12542 12541 #define DRV_MB_PARAM_ESWITCH_MODE_VEB 0x1 12543 12542 #define DRV_MB_PARAM_ESWITCH_MODE_VEPA 0x2 12543 + 12544 + #define DRV_MB_PARAM_DUMMY_OEM_UPDATES_MASK 0x1 12545 + #define DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET 0 12544 12546 12545 12547 #define DRV_MB_PARAM_SET_LED_MODE_OPER 0x0 12546 12548 #define DRV_MB_PARAM_SET_LED_MODE_ON 0x1
+20 -4
drivers/net/ethernet/qlogic/qed/qed_mcp.c
··· 1581 1581 p_hwfn->mcp_info->func_info.ovlan = (u16)shmem_info.ovlan_stag & 1582 1582 FUNC_MF_CFG_OV_STAG_MASK; 1583 1583 p_hwfn->hw_info.ovlan = p_hwfn->mcp_info->func_info.ovlan; 1584 - if ((p_hwfn->hw_info.hw_mode & BIT(MODE_MF_SD)) && 1585 - (p_hwfn->hw_info.ovlan != QED_MCP_VLAN_UNSET)) { 1586 - qed_wr(p_hwfn, p_ptt, 1587 - NIG_REG_LLH_FUNC_TAG_VALUE, p_hwfn->hw_info.ovlan); 1584 + if (test_bit(QED_MF_OVLAN_CLSS, &p_hwfn->cdev->mf_bits)) { 1585 + if (p_hwfn->hw_info.ovlan != QED_MCP_VLAN_UNSET) { 1586 + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_VALUE, 1587 + p_hwfn->hw_info.ovlan); 1588 + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_EN, 1); 1589 + 1590 + /* Configure DB to add external vlan to EDPM packets */ 1591 + qed_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1); 1592 + qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2, 1593 + p_hwfn->hw_info.ovlan); 1594 + } else { 1595 + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_EN, 0); 1596 + qed_wr(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_TAG_VALUE, 0); 1597 + qed_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 0); 1598 + qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_EXT_VID_BB_K2, 0); 1599 + } 1600 + 1588 1601 qed_sp_pf_update_stag(p_hwfn); 1589 1602 } 1603 + 1604 + DP_VERBOSE(p_hwfn, QED_MSG_SP, "ovlan = %d hw_mode = 0x%x\n", 1605 + p_hwfn->mcp_info->func_info.ovlan, p_hwfn->hw_info.hw_mode); 1590 1606 1591 1607 /* Acknowledge the MFW */ 1592 1608 qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_S_TAG_UPDATE_ACK, 0,
+6
drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
··· 216 216 0x00c000UL 217 217 #define DORQ_REG_IFEN \ 218 218 0x100040UL 219 + #define DORQ_REG_TAG1_OVRD_MODE \ 220 + 0x1008b4UL 221 + #define DORQ_REG_PF_PCP_BB_K2 \ 222 + 0x1008c4UL 223 + #define DORQ_REG_PF_EXT_VID_BB_K2 \ 224 + 0x1008c8UL 219 225 #define DORQ_REG_DB_DROP_REASON \ 220 226 0x100a2cUL 221 227 #define DORQ_REG_DB_DROP_DETAILS \
+48 -1
drivers/net/ethernet/realtek/r8169.c
··· 13 13 #include <linux/pci.h> 14 14 #include <linux/netdevice.h> 15 15 #include <linux/etherdevice.h> 16 + #include <linux/clk.h> 16 17 #include <linux/delay.h> 17 18 #include <linux/ethtool.h> 18 19 #include <linux/phy.h> ··· 666 665 667 666 u16 event_slow; 668 667 const struct rtl_coalesce_info *coalesce_info; 668 + struct clk *clk; 669 669 670 670 struct mdio_ops { 671 671 void (*write)(struct rtl8169_private *, int, int); ··· 4071 4069 phy_speed_up(dev->phydev); 4072 4070 4073 4071 genphy_soft_reset(dev->phydev); 4072 + 4073 + /* It was reported that chip version 33 ends up with 10MBit/Half on a 4074 + * 1GBit link after resuming from S3. For whatever reason the PHY on 4075 + * this chip doesn't properly start a renegotiation when soft-reset. 4076 + * Explicitly requesting a renegotiation fixes this. 4077 + */ 4078 + if (tp->mac_version == RTL_GIGA_MAC_VER_33 && 4079 + dev->phydev->autoneg == AUTONEG_ENABLE) 4080 + phy_restart_aneg(dev->phydev); 4074 4081 } 4075 4082 4076 4083 static void rtl_rar_set(struct rtl8169_private *tp, u8 *addr) ··· 4786 4775 static void rtl_hw_aspm_clkreq_enable(struct rtl8169_private *tp, bool enable) 4787 4776 { 4788 4777 if (enable) { 4789 - RTL_W8(tp, Config2, RTL_R8(tp, Config2) | ClkReqEn); 4790 4778 RTL_W8(tp, Config5, RTL_R8(tp, Config5) | ASPM_en); 4779 + RTL_W8(tp, Config2, RTL_R8(tp, Config2) | ClkReqEn); 4791 4780 } else { 4792 4781 RTL_W8(tp, Config2, RTL_R8(tp, Config2) & ~ClkReqEn); 4793 4782 RTL_W8(tp, Config5, RTL_R8(tp, Config5) & ~ASPM_en); 4794 4783 } 4784 + 4785 + udelay(10); 4795 4786 } 4796 4787 4797 4788 static void rtl_hw_start_8168bb(struct rtl8169_private *tp) ··· 5638 5625 5639 5626 static void rtl_hw_start_8106(struct rtl8169_private *tp) 5640 5627 { 5628 + rtl_hw_aspm_clkreq_enable(tp, false); 5629 + 5641 5630 /* Force LAN exit from ASPM if Rx/Tx are not idle */ 5642 5631 RTL_W32(tp, FuncEvent, RTL_R32(tp, FuncEvent) | 0x002800); 5643 5632 ··· 5648 5633 RTL_W8(tp, DLLPR, RTL_R8(tp, DLLPR) & ~PFM_EN); 
5649 5634 5650 5635 rtl_pcie_state_l2l3_enable(tp, false); 5636 + rtl_hw_aspm_clkreq_enable(tp, true); 5651 5637 } 5652 5638 5653 5639 static void rtl_hw_start_8101(struct rtl8169_private *tp) ··· 7273 7257 } 7274 7258 } 7275 7259 7260 + static void rtl_disable_clk(void *data) 7261 + { 7262 + clk_disable_unprepare(data); 7263 + } 7264 + 7276 7265 static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) 7277 7266 { 7278 7267 const struct rtl_cfg_info *cfg = rtl_cfg_infos + ent->driver_data; ··· 7297 7276 tp->pci_dev = pdev; 7298 7277 tp->msg_enable = netif_msg_init(debug.msg_enable, R8169_MSG_DEFAULT); 7299 7278 tp->supports_gmii = cfg->has_gmii; 7279 + 7280 + /* Get the *optional* external "ether_clk" used on some boards */ 7281 + tp->clk = devm_clk_get(&pdev->dev, "ether_clk"); 7282 + if (IS_ERR(tp->clk)) { 7283 + rc = PTR_ERR(tp->clk); 7284 + if (rc == -ENOENT) { 7285 + /* clk-core allows NULL (for suspend / resume) */ 7286 + tp->clk = NULL; 7287 + } else if (rc == -EPROBE_DEFER) { 7288 + return rc; 7289 + } else { 7290 + dev_err(&pdev->dev, "failed to get clk: %d\n", rc); 7291 + return rc; 7292 + } 7293 + } else { 7294 + rc = clk_prepare_enable(tp->clk); 7295 + if (rc) { 7296 + dev_err(&pdev->dev, "failed to enable clk: %d\n", rc); 7297 + return rc; 7298 + } 7299 + 7300 + rc = devm_add_action_or_reset(&pdev->dev, rtl_disable_clk, 7301 + tp->clk); 7302 + if (rc) 7303 + return rc; 7304 + } 7300 7305 7301 7306 /* enable device (incl. PCI PM wakeup and hotplug setup) */ 7302 7307 rc = pcim_enable_device(pdev);
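Editor's note: the optional-clock handling added to `rtl_init_one()` above follows a common probe pattern: `-ENOENT` means the clock simply is not described and the driver continues with a NULL `clk`, `-EPROBE_DEFER` is propagated so the probe is retried later, and any other error is fatal. A minimal userspace sketch of that decision logic (the `handle_optional_clk()` helper is illustrative, not the kernel API; `EPROBE_DEFER` is a kernel-internal errno):

```c
#include <assert.h>
#include <errno.h>

#define EPROBE_DEFER 517  /* kernel-internal errno: retry probe later */

/* Classify the result of an optional resource lookup:
 *  0  -> resource present, or legitimately absent (continue with NULL)
 *  <0 -> propagate (fatal error, or -EPROBE_DEFER to retry probing)
 */
static int handle_optional_clk(int err)
{
	if (err == 0 || err == -ENOENT)
		return 0;       /* absent clock is fine: treat clk as NULL */
	return err;             /* -EPROBE_DEFER and real errors propagate */
}
```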
+5
drivers/net/ethernet/renesas/ravb.h
··· 428 428 EIS_CULF1 = 0x00000080, 429 429 EIS_TFFF = 0x00000100, 430 430 EIS_QFS = 0x00010000, 431 + EIS_RESERVED = (GENMASK(31, 17) | GENMASK(15, 11)), 431 432 }; 432 433 433 434 /* RIC0 */ ··· 473 472 RIS0_FRF15 = 0x00008000, 474 473 RIS0_FRF16 = 0x00010000, 475 474 RIS0_FRF17 = 0x00020000, 475 + RIS0_RESERVED = GENMASK(31, 18), 476 476 }; 477 477 478 478 /* RIC1 */ ··· 530 528 RIS2_QFF16 = 0x00010000, 531 529 RIS2_QFF17 = 0x00020000, 532 530 RIS2_RFFF = 0x80000000, 531 + RIS2_RESERVED = GENMASK(30, 18), 533 532 }; 534 533 535 534 /* TIC */ ··· 547 544 TIS_FTF1 = 0x00000002, /* Undocumented? */ 548 545 TIS_TFUF = 0x00000100, 549 546 TIS_TFWF = 0x00000200, 547 + TIS_RESERVED = (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4)) 550 548 }; 551 549 552 550 /* ISS */ ··· 621 617 enum GIS_BIT { 622 618 GIS_PTCF = 0x00000001, /* Undocumented? */ 623 619 GIS_PTMF = 0x00000004, 620 + GIS_RESERVED = GENMASK(15, 10), 624 621 }; 625 622 626 623 /* GIE (R-Car Gen3 only) */
+6 -5
drivers/net/ethernet/renesas/ravb_main.c
··· 739 739 u32 eis, ris2; 740 740 741 741 eis = ravb_read(ndev, EIS); 742 - ravb_write(ndev, ~EIS_QFS, EIS); 742 + ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS); 743 743 if (eis & EIS_QFS) { 744 744 ris2 = ravb_read(ndev, RIS2); 745 - ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF), RIS2); 745 + ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED), 746 + RIS2); 746 747 747 748 /* Receive Descriptor Empty int */ 748 749 if (ris2 & RIS2_QFF0) ··· 796 795 u32 tis = ravb_read(ndev, TIS); 797 796 798 797 if (tis & TIS_TFUF) { 799 - ravb_write(ndev, ~TIS_TFUF, TIS); 798 + ravb_write(ndev, ~(TIS_TFUF | TIS_RESERVED), TIS); 800 799 ravb_get_tx_tstamp(ndev); 801 800 return true; 802 801 } ··· 931 930 /* Processing RX Descriptor Ring */ 932 931 if (ris0 & mask) { 933 932 /* Clear RX interrupt */ 934 - ravb_write(ndev, ~mask, RIS0); 933 + ravb_write(ndev, ~(mask | RIS0_RESERVED), RIS0); 935 934 if (ravb_rx(ndev, &quota, q)) 936 935 goto out; 937 936 } ··· 939 938 if (tis & mask) { 940 939 spin_lock_irqsave(&priv->lock, flags); 941 940 /* Clear TX interrupt */ 942 - ravb_write(ndev, ~mask, TIS); 941 + ravb_write(ndev, ~(mask | TIS_RESERVED), TIS); 943 942 ravb_tx_free(ndev, q, true); 944 943 netif_wake_subqueue(ndev, q); 945 944 mmiowb();
+1 -1
drivers/net/ethernet/renesas/ravb_ptp.c
··· 315 315 } 316 316 } 317 317 318 - ravb_write(ndev, ~gis, GIS); 318 + ravb_write(ndev, ~(gis | GIS_RESERVED), GIS); 319 319 } 320 320 321 321 void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev)
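Editor's note: the ravb changes above all apply one pattern: the EIS/RIS0/RIS2/TIS/GIS status registers are write-0-to-clear, and the hardware documentation requires reserved bits to be written as 0 as well. OR-ing the new `*_RESERVED` constants into the complemented mask writes 0 to both the acknowledged bits and the reserved bits while leaving every other pending bit at 1 (untouched). A standalone sketch of the mask arithmetic, using the `TIS` values from the diff (the `GENMASK` macro here is a 32-bit userspace stand-in for the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* 32-bit stand-in for the kernel's GENMASK(h, l): bits h..l inclusive */
#define GENMASK(h, l) (((~0u) << (l)) & ((~0u) >> (31 - (h))))

#define TIS_TFUF     0x00000100u
#define TIS_RESERVED (GENMASK(31, 20) | GENMASK(15, 12) | GENMASK(7, 4))

/* Value to write to a write-0-to-clear status register: the bits being
 * acknowledged and all reserved bits are 0; everything else stays 1 so
 * other pending interrupt flags are not accidentally cleared. */
static uint32_t ack_value(uint32_t bits_to_clear, uint32_t reserved)
{
	return ~(bits_to_clear | reserved);
}
```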
+3 -2
drivers/net/ethernet/seeq/ether3.c
··· 77 77 static int ether3_rx(struct net_device *dev, unsigned int maxcnt); 78 78 static void ether3_tx(struct net_device *dev); 79 79 static int ether3_open (struct net_device *dev); 80 - static int ether3_sendpacket (struct sk_buff *skb, struct net_device *dev); 80 + static netdev_tx_t ether3_sendpacket(struct sk_buff *skb, 81 + struct net_device *dev); 81 82 static irqreturn_t ether3_interrupt (int irq, void *dev_id); 82 83 static int ether3_close (struct net_device *dev); 83 84 static void ether3_setmulticastlist (struct net_device *dev); ··· 482 481 /* 483 482 * Transmit a packet 484 483 */ 485 - static int 484 + static netdev_tx_t 486 485 ether3_sendpacket(struct sk_buff *skb, struct net_device *dev) 487 486 { 488 487 unsigned long flags;
+2 -1
drivers/net/ethernet/seeq/sgiseeq.c
··· 578 578 return 0; 579 579 } 580 580 581 - static int sgiseeq_start_xmit(struct sk_buff *skb, struct net_device *dev) 581 + static netdev_tx_t 582 + sgiseeq_start_xmit(struct sk_buff *skb, struct net_device *dev) 582 583 { 583 584 struct sgiseeq_private *sp = netdev_priv(dev); 584 585 struct hpc3_ethregs *hregs = sp->hregs;
+2 -2
drivers/net/ethernet/sgi/ioc3-eth.c
··· 99 99 100 100 static int ioc3_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); 101 101 static void ioc3_set_multicast_list(struct net_device *dev); 102 - static int ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev); 102 + static netdev_tx_t ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev); 103 103 static void ioc3_timeout(struct net_device *dev); 104 104 static inline unsigned int ioc3_hash(const unsigned char *addr); 105 105 static inline void ioc3_stop(struct ioc3_private *ip); ··· 1390 1390 .remove = ioc3_remove_one, 1391 1391 }; 1392 1392 1393 - static int ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev) 1393 + static netdev_tx_t ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev) 1394 1394 { 1395 1395 unsigned long data; 1396 1396 struct ioc3_private *ip = netdev_priv(dev);
+1 -1
drivers/net/ethernet/sgi/meth.c
··· 697 697 /* 698 698 * Transmit a packet (called by the kernel) 699 699 */ 700 - static int meth_tx(struct sk_buff *skb, struct net_device *dev) 700 + static netdev_tx_t meth_tx(struct sk_buff *skb, struct net_device *dev) 701 701 { 702 702 struct meth_private *priv = netdev_priv(dev); 703 703 unsigned long flags;
+2 -2
drivers/net/ethernet/stmicro/stmmac/common.h
··· 258 258 #define MAX_DMA_RIWT 0xff 259 259 #define MIN_DMA_RIWT 0x20 260 260 /* Tx coalesce parameters */ 261 - #define STMMAC_COAL_TX_TIMER 40000 261 + #define STMMAC_COAL_TX_TIMER 1000 262 262 #define STMMAC_MAX_COAL_TX_TICK 100000 263 263 #define STMMAC_TX_MAX_FRAMES 256 264 - #define STMMAC_TX_FRAMES 64 264 + #define STMMAC_TX_FRAMES 25 265 265 266 266 /* Packets types */ 267 267 enum packets_types {
+12 -2
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 48 48 49 49 /* Frequently used values are kept adjacent for cache effect */ 50 50 struct stmmac_tx_queue { 51 + u32 tx_count_frames; 52 + struct timer_list txtimer; 51 53 u32 queue_index; 52 54 struct stmmac_priv *priv_data; 53 55 struct dma_extended_desc *dma_etx ____cacheline_aligned_in_smp; ··· 75 73 u32 rx_zeroc_thresh; 76 74 dma_addr_t dma_rx_phy; 77 75 u32 rx_tail_addr; 76 + }; 77 + 78 + struct stmmac_channel { 78 79 struct napi_struct napi ____cacheline_aligned_in_smp; 80 + struct stmmac_priv *priv_data; 81 + u32 index; 82 + int has_rx; 83 + int has_tx; 79 84 }; 80 85 81 86 struct stmmac_tc_entry { ··· 118 109 119 110 struct stmmac_priv { 120 111 /* Frequently used values are kept adjacent for cache effect */ 121 - u32 tx_count_frames; 122 112 u32 tx_coal_frames; 123 113 u32 tx_coal_timer; 124 114 125 115 int tx_coalesce; 126 116 int hwts_tx_en; 127 117 bool tx_path_in_lpi_mode; 128 - struct timer_list txtimer; 129 118 bool tso; 130 119 131 120 unsigned int dma_buf_sz; ··· 143 136 144 137 /* TX Queue */ 145 138 struct stmmac_tx_queue tx_queue[MTL_MAX_TX_QUEUES]; 139 + 140 + /* Generic channel for NAPI */ 141 + struct stmmac_channel channel[STMMAC_CH_MAX]; 146 142 147 143 bool oldlink; 148 144 int speed;
+134 -104
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 148 148 static void stmmac_disable_all_queues(struct stmmac_priv *priv) 149 149 { 150 150 u32 rx_queues_cnt = priv->plat->rx_queues_to_use; 151 + u32 tx_queues_cnt = priv->plat->tx_queues_to_use; 152 + u32 maxq = max(rx_queues_cnt, tx_queues_cnt); 151 153 u32 queue; 152 154 153 - for (queue = 0; queue < rx_queues_cnt; queue++) { 154 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 155 + for (queue = 0; queue < maxq; queue++) { 156 + struct stmmac_channel *ch = &priv->channel[queue]; 155 157 156 - napi_disable(&rx_q->napi); 158 + napi_disable(&ch->napi); 157 159 } 158 160 } 159 161 ··· 166 164 static void stmmac_enable_all_queues(struct stmmac_priv *priv) 167 165 { 168 166 u32 rx_queues_cnt = priv->plat->rx_queues_to_use; 167 + u32 tx_queues_cnt = priv->plat->tx_queues_to_use; 168 + u32 maxq = max(rx_queues_cnt, tx_queues_cnt); 169 169 u32 queue; 170 170 171 - for (queue = 0; queue < rx_queues_cnt; queue++) { 172 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 171 + for (queue = 0; queue < maxq; queue++) { 172 + struct stmmac_channel *ch = &priv->channel[queue]; 173 173 174 - napi_enable(&rx_q->napi); 174 + napi_enable(&ch->napi); 175 175 } 176 176 } 177 177 ··· 1847 1843 * @queue: TX queue index 1848 1844 * Description: it reclaims the transmit resources after transmission completes. 
1849 1845 */ 1850 - static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue) 1846 + static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue) 1851 1847 { 1852 1848 struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; 1853 1849 unsigned int bytes_compl = 0, pkts_compl = 0; 1854 - unsigned int entry; 1850 + unsigned int entry, count = 0; 1855 1851 1856 - netif_tx_lock(priv->dev); 1852 + __netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue)); 1857 1853 1858 1854 priv->xstats.tx_clean++; 1859 1855 1860 1856 entry = tx_q->dirty_tx; 1861 - while (entry != tx_q->cur_tx) { 1857 + while ((entry != tx_q->cur_tx) && (count < budget)) { 1862 1858 struct sk_buff *skb = tx_q->tx_skbuff[entry]; 1863 1859 struct dma_desc *p; 1864 1860 int status; ··· 1873 1869 /* Check if the descriptor is owned by the DMA */ 1874 1870 if (unlikely(status & tx_dma_own)) 1875 1871 break; 1872 + 1873 + count++; 1876 1874 1877 1875 /* Make sure descriptor fields are read after reading 1878 1876 * the own bit. 
··· 1943 1937 stmmac_enable_eee_mode(priv); 1944 1938 mod_timer(&priv->eee_ctrl_timer, STMMAC_LPI_T(eee_timer)); 1945 1939 } 1946 - netif_tx_unlock(priv->dev); 1940 + 1941 + __netif_tx_unlock_bh(netdev_get_tx_queue(priv->dev, queue)); 1942 + 1943 + return count; 1947 1944 } 1948 1945 1949 1946 /** ··· 2029 2020 return false; 2030 2021 } 2031 2022 2023 + static int stmmac_napi_check(struct stmmac_priv *priv, u32 chan) 2024 + { 2025 + int status = stmmac_dma_interrupt_status(priv, priv->ioaddr, 2026 + &priv->xstats, chan); 2027 + struct stmmac_channel *ch = &priv->channel[chan]; 2028 + bool needs_work = false; 2029 + 2030 + if ((status & handle_rx) && ch->has_rx) { 2031 + needs_work = true; 2032 + } else { 2033 + status &= ~handle_rx; 2034 + } 2035 + 2036 + if ((status & handle_tx) && ch->has_tx) { 2037 + needs_work = true; 2038 + } else { 2039 + status &= ~handle_tx; 2040 + } 2041 + 2042 + if (needs_work && napi_schedule_prep(&ch->napi)) { 2043 + stmmac_disable_dma_irq(priv, priv->ioaddr, chan); 2044 + __napi_schedule(&ch->napi); 2045 + } 2046 + 2047 + return status; 2048 + } 2049 + 2032 2050 /** 2033 2051 * stmmac_dma_interrupt - DMA ISR 2034 2052 * @priv: driver private structure ··· 2070 2034 u32 channels_to_check = tx_channel_count > rx_channel_count ? 2071 2035 tx_channel_count : rx_channel_count; 2072 2036 u32 chan; 2073 - bool poll_scheduled = false; 2074 2037 int status[max_t(u32, MTL_MAX_TX_QUEUES, MTL_MAX_RX_QUEUES)]; 2075 2038 2076 2039 /* Make sure we never check beyond our status buffer. */ 2077 2040 if (WARN_ON_ONCE(channels_to_check > ARRAY_SIZE(status))) 2078 2041 channels_to_check = ARRAY_SIZE(status); 2079 2042 2080 - /* Each DMA channel can be used for rx and tx simultaneously, yet 2081 - * napi_struct is embedded in struct stmmac_rx_queue rather than in a 2082 - * stmmac_channel struct. 2083 - * Because of this, stmmac_poll currently checks (and possibly wakes) 2084 - * all tx queues rather than just a single tx queue. 
2085 - */ 2086 2043 for (chan = 0; chan < channels_to_check; chan++) 2087 - status[chan] = stmmac_dma_interrupt_status(priv, priv->ioaddr, 2088 - &priv->xstats, chan); 2089 - 2090 - for (chan = 0; chan < rx_channel_count; chan++) { 2091 - if (likely(status[chan] & handle_rx)) { 2092 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[chan]; 2093 - 2094 - if (likely(napi_schedule_prep(&rx_q->napi))) { 2095 - stmmac_disable_dma_irq(priv, priv->ioaddr, chan); 2096 - __napi_schedule(&rx_q->napi); 2097 - poll_scheduled = true; 2098 - } 2099 - } 2100 - } 2101 - 2102 - /* If we scheduled poll, we already know that tx queues will be checked. 2103 - * If we didn't schedule poll, see if any DMA channel (used by tx) has a 2104 - * completed transmission, if so, call stmmac_poll (once). 2105 - */ 2106 - if (!poll_scheduled) { 2107 - for (chan = 0; chan < tx_channel_count; chan++) { 2108 - if (status[chan] & handle_tx) { 2109 - /* It doesn't matter what rx queue we choose 2110 - * here. We use 0 since it always exists. 
2111 - */ 2112 - struct stmmac_rx_queue *rx_q = 2113 - &priv->rx_queue[0]; 2114 - 2115 - if (likely(napi_schedule_prep(&rx_q->napi))) { 2116 - stmmac_disable_dma_irq(priv, 2117 - priv->ioaddr, chan); 2118 - __napi_schedule(&rx_q->napi); 2119 - } 2120 - break; 2121 - } 2122 - } 2123 - } 2044 + status[chan] = stmmac_napi_check(priv, chan); 2124 2045 2125 2046 for (chan = 0; chan < tx_channel_count; chan++) { 2126 2047 if (unlikely(status[chan] & tx_hard_error_bump_tc)) { ··· 2213 2220 stmmac_init_tx_chan(priv, priv->ioaddr, priv->plat->dma_cfg, 2214 2221 tx_q->dma_tx_phy, chan); 2215 2222 2216 - tx_q->tx_tail_addr = tx_q->dma_tx_phy + 2217 - (DMA_TX_SIZE * sizeof(struct dma_desc)); 2223 + tx_q->tx_tail_addr = tx_q->dma_tx_phy; 2218 2224 stmmac_set_tx_tail_ptr(priv, priv->ioaddr, 2219 2225 tx_q->tx_tail_addr, chan); 2220 2226 } ··· 2225 2233 return ret; 2226 2234 } 2227 2235 2236 + static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue) 2237 + { 2238 + struct stmmac_tx_queue *tx_q = &priv->tx_queue[queue]; 2239 + 2240 + mod_timer(&tx_q->txtimer, STMMAC_COAL_TIMER(priv->tx_coal_timer)); 2241 + } 2242 + 2228 2243 /** 2229 2244 * stmmac_tx_timer - mitigation sw timer for tx. 
2230 2245 * @data: data pointer ··· 2240 2241 */ 2241 2242 static void stmmac_tx_timer(struct timer_list *t) 2242 2243 { 2243 - struct stmmac_priv *priv = from_timer(priv, t, txtimer); 2244 - u32 tx_queues_count = priv->plat->tx_queues_to_use; 2245 - u32 queue; 2244 + struct stmmac_tx_queue *tx_q = from_timer(tx_q, t, txtimer); 2245 + struct stmmac_priv *priv = tx_q->priv_data; 2246 + struct stmmac_channel *ch; 2246 2247 2247 - /* let's scan all the tx queues */ 2248 - for (queue = 0; queue < tx_queues_count; queue++) 2249 - stmmac_tx_clean(priv, queue); 2248 + ch = &priv->channel[tx_q->queue_index]; 2249 + 2250 + if (likely(napi_schedule_prep(&ch->napi))) 2251 + __napi_schedule(&ch->napi); 2250 2252 } 2251 2253 2252 2254 /** ··· 2260 2260 */ 2261 2261 static void stmmac_init_tx_coalesce(struct stmmac_priv *priv) 2262 2262 { 2263 + u32 tx_channel_count = priv->plat->tx_queues_to_use; 2264 + u32 chan; 2265 + 2263 2266 priv->tx_coal_frames = STMMAC_TX_FRAMES; 2264 2267 priv->tx_coal_timer = STMMAC_COAL_TX_TIMER; 2265 - timer_setup(&priv->txtimer, stmmac_tx_timer, 0); 2266 - priv->txtimer.expires = STMMAC_COAL_TIMER(priv->tx_coal_timer); 2267 - add_timer(&priv->txtimer); 2268 + 2269 + for (chan = 0; chan < tx_channel_count; chan++) { 2270 + struct stmmac_tx_queue *tx_q = &priv->tx_queue[chan]; 2271 + 2272 + timer_setup(&tx_q->txtimer, stmmac_tx_timer, 0); 2273 + } 2268 2274 } 2269 2275 2270 2276 static void stmmac_set_rings_length(struct stmmac_priv *priv) ··· 2598 2592 static int stmmac_open(struct net_device *dev) 2599 2593 { 2600 2594 struct stmmac_priv *priv = netdev_priv(dev); 2595 + u32 chan; 2601 2596 int ret; 2602 2597 2603 2598 stmmac_check_ether_addr(priv); ··· 2695 2688 if (dev->phydev) 2696 2689 phy_stop(dev->phydev); 2697 2690 2698 - del_timer_sync(&priv->txtimer); 2691 + for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) 2692 + del_timer_sync(&priv->tx_queue[chan].txtimer); 2693 + 2699 2694 stmmac_hw_teardown(dev); 2700 2695 init_error: 2701 
2696 free_dma_desc_resources(priv); ··· 2717 2708 static int stmmac_release(struct net_device *dev) 2718 2709 { 2719 2710 struct stmmac_priv *priv = netdev_priv(dev); 2711 + u32 chan; 2720 2712 2721 2713 if (priv->eee_enabled) 2722 2714 del_timer_sync(&priv->eee_ctrl_timer); ··· 2732 2722 2733 2723 stmmac_disable_all_queues(priv); 2734 2724 2735 - del_timer_sync(&priv->txtimer); 2725 + for (chan = 0; chan < priv->plat->tx_queues_to_use; chan++) 2726 + del_timer_sync(&priv->tx_queue[chan].txtimer); 2736 2727 2737 2728 /* Free the IRQ lines */ 2738 2729 free_irq(dev->irq, dev); ··· 2947 2936 priv->xstats.tx_tso_nfrags += nfrags; 2948 2937 2949 2938 /* Manage tx mitigation */ 2950 - priv->tx_count_frames += nfrags + 1; 2951 - if (likely(priv->tx_coal_frames > priv->tx_count_frames)) { 2952 - mod_timer(&priv->txtimer, 2953 - STMMAC_COAL_TIMER(priv->tx_coal_timer)); 2954 - } else { 2955 - priv->tx_count_frames = 0; 2939 + tx_q->tx_count_frames += nfrags + 1; 2940 + if (priv->tx_coal_frames <= tx_q->tx_count_frames) { 2956 2941 stmmac_set_tx_ic(priv, desc); 2957 2942 priv->xstats.tx_set_ic_bit++; 2943 + tx_q->tx_count_frames = 0; 2944 + } else { 2945 + stmmac_tx_timer_arm(priv, queue); 2958 2946 } 2959 2947 2960 2948 skb_tx_timestamp(skb); ··· 3002 2992 3003 2993 netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len); 3004 2994 2995 + tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc)); 3005 2996 stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue); 3006 2997 3007 2998 return NETDEV_TX_OK; ··· 3157 3146 * This approach takes care about the fragments: desc is the first 3158 3147 * element in case of no SG. 
3159 3148 */ 3160 - priv->tx_count_frames += nfrags + 1; 3161 - if (likely(priv->tx_coal_frames > priv->tx_count_frames)) { 3162 - mod_timer(&priv->txtimer, 3163 - STMMAC_COAL_TIMER(priv->tx_coal_timer)); 3164 - } else { 3165 - priv->tx_count_frames = 0; 3149 + tx_q->tx_count_frames += nfrags + 1; 3150 + if (priv->tx_coal_frames <= tx_q->tx_count_frames) { 3166 3151 stmmac_set_tx_ic(priv, desc); 3167 3152 priv->xstats.tx_set_ic_bit++; 3153 + tx_q->tx_count_frames = 0; 3154 + } else { 3155 + stmmac_tx_timer_arm(priv, queue); 3168 3156 } 3169 3157 3170 3158 skb_tx_timestamp(skb); ··· 3209 3199 netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len); 3210 3200 3211 3201 stmmac_enable_dma_transmission(priv, priv->ioaddr); 3202 + 3203 + tx_q->tx_tail_addr = tx_q->dma_tx_phy + (tx_q->cur_tx * sizeof(*desc)); 3212 3204 stmmac_set_tx_tail_ptr(priv, priv->ioaddr, tx_q->tx_tail_addr, queue); 3213 3205 3214 3206 return NETDEV_TX_OK; ··· 3331 3319 static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue) 3332 3320 { 3333 3321 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 3322 + struct stmmac_channel *ch = &priv->channel[queue]; 3334 3323 unsigned int entry = rx_q->cur_rx; 3335 3324 int coe = priv->hw->rx_csum; 3336 3325 unsigned int next_entry; ··· 3504 3491 else 3505 3492 skb->ip_summed = CHECKSUM_UNNECESSARY; 3506 3493 3507 - napi_gro_receive(&rx_q->napi, skb); 3494 + napi_gro_receive(&ch->napi, skb); 3508 3495 3509 3496 priv->dev->stats.rx_packets++; 3510 3497 priv->dev->stats.rx_bytes += frame_len; ··· 3527 3514 * Description : 3528 3515 * To look at the incoming frames and clear the tx resources. 
3529 3516 */ 3530 - static int stmmac_poll(struct napi_struct *napi, int budget) 3517 + static int stmmac_napi_poll(struct napi_struct *napi, int budget) 3531 3518 { 3532 - struct stmmac_rx_queue *rx_q = 3533 - container_of(napi, struct stmmac_rx_queue, napi); 3534 - struct stmmac_priv *priv = rx_q->priv_data; 3535 - u32 tx_count = priv->plat->tx_queues_to_use; 3536 - u32 chan = rx_q->queue_index; 3537 - int work_done = 0; 3538 - u32 queue; 3519 + struct stmmac_channel *ch = 3520 + container_of(napi, struct stmmac_channel, napi); 3521 + struct stmmac_priv *priv = ch->priv_data; 3522 + int work_done = 0, work_rem = budget; 3523 + u32 chan = ch->index; 3539 3524 3540 3525 priv->xstats.napi_poll++; 3541 3526 3542 - /* check all the queues */ 3543 - for (queue = 0; queue < tx_count; queue++) 3544 - stmmac_tx_clean(priv, queue); 3527 + if (ch->has_tx) { 3528 + int done = stmmac_tx_clean(priv, work_rem, chan); 3545 3529 3546 - work_done = stmmac_rx(priv, budget, rx_q->queue_index); 3547 - if (work_done < budget) { 3548 - napi_complete_done(napi, work_done); 3549 - stmmac_enable_dma_irq(priv, priv->ioaddr, chan); 3530 + work_done += done; 3531 + work_rem -= done; 3550 3532 } 3533 + 3534 + if (ch->has_rx) { 3535 + int done = stmmac_rx(priv, work_rem, chan); 3536 + 3537 + work_done += done; 3538 + work_rem -= done; 3539 + } 3540 + 3541 + if (work_done < budget && napi_complete_done(napi, work_done)) 3542 + stmmac_enable_dma_irq(priv, priv->ioaddr, chan); 3543 + 3551 3544 return work_done; 3552 3545 } 3553 3546 ··· 4217 4198 { 4218 4199 struct net_device *ndev = NULL; 4219 4200 struct stmmac_priv *priv; 4201 + u32 queue, maxq; 4220 4202 int ret = 0; 4221 - u32 queue; 4222 4203 4223 4204 ndev = alloc_etherdev_mqs(sizeof(struct stmmac_priv), 4224 4205 MTL_MAX_TX_QUEUES, ··· 4341 4322 "Enable RX Mitigation via HW Watchdog Timer\n"); 4342 4323 } 4343 4324 4344 - for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) { 4345 - struct stmmac_rx_queue *rx_q = 
&priv->rx_queue[queue]; 4325 + /* Setup channels NAPI */ 4326 + maxq = max(priv->plat->rx_queues_to_use, priv->plat->tx_queues_to_use); 4346 4327 4347 - netif_napi_add(ndev, &rx_q->napi, stmmac_poll, 4348 - (8 * priv->plat->rx_queues_to_use)); 4328 + for (queue = 0; queue < maxq; queue++) { 4329 + struct stmmac_channel *ch = &priv->channel[queue]; 4330 + 4331 + ch->priv_data = priv; 4332 + ch->index = queue; 4333 + 4334 + if (queue < priv->plat->rx_queues_to_use) 4335 + ch->has_rx = true; 4336 + if (queue < priv->plat->tx_queues_to_use) 4337 + ch->has_tx = true; 4338 + 4339 + netif_napi_add(ndev, &ch->napi, stmmac_napi_poll, 4340 + NAPI_POLL_WEIGHT); 4349 4341 } 4350 4342 4351 4343 mutex_init(&priv->lock); ··· 4402 4372 priv->hw->pcs != STMMAC_PCS_RTBI) 4403 4373 stmmac_mdio_unregister(ndev); 4404 4374 error_mdio_register: 4405 - for (queue = 0; queue < priv->plat->rx_queues_to_use; queue++) { 4406 - struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue]; 4375 + for (queue = 0; queue < maxq; queue++) { 4376 + struct stmmac_channel *ch = &priv->channel[queue]; 4407 4377 4408 - netif_napi_del(&rx_q->napi); 4378 + netif_napi_del(&ch->napi); 4409 4379 } 4410 4380 error_hw_init: 4411 4381 destroy_workqueue(priv->wq);
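Editor's note: the stmmac rework above replaces the per-RX-queue NAPI instances with one `stmmac_channel` per DMA channel, and `stmmac_napi_poll()` now spends the budget on TX completion first, then hands the remainder to RX. A self-contained sketch of that budget-splitting shape (`do_work()` is a stand-in for `stmmac_tx_clean()`/`stmmac_rx()`, not driver code):

```c
#include <assert.h>

/* Stand-in for stmmac_tx_clean()/stmmac_rx(): consumes up to 'budget'
 * units of pending work and reports how much it actually did. */
static int do_work(int *pending, int budget)
{
	int done = *pending < budget ? *pending : budget;

	*pending -= done;
	return done;
}

/* Shape of the new stmmac_napi_poll(): TX first, RX with what remains.
 * Returning less than the budget signals NAPI that polling can stop. */
static int napi_poll(int *tx_pending, int *rx_pending, int budget)
{
	int work_done = 0, work_rem = budget;
	int done;

	done = do_work(tx_pending, work_rem);
	work_done += done;
	work_rem -= done;

	done = do_work(rx_pending, work_rem);
	work_done += done;

	return work_done;
}
```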
+2 -3
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 67 67 * Description: 68 68 * This function validates the number of Unicast address entries supported 69 69 * by a particular Synopsys 10/100/1000 controller. The Synopsys controller 70 - * supports 1, 32, 64, or 128 Unicast filter entries for it's Unicast filter 70 + * supports 1..32, 64, or 128 Unicast filter entries for it's Unicast filter 71 71 * logic. This function validates a valid, supported configuration is 72 72 * selected, and defaults to 1 Unicast address if an unsupported 73 73 * configuration is selected. ··· 77 77 int x = ucast_entries; 78 78 79 79 switch (x) { 80 - case 1: 81 - case 32: 80 + case 1 ... 32: 82 81 case 64: 83 82 case 128: 84 83 break;
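Editor's note: the `switch` above uses the GCC/Clang case-range extension (`case 1 ... 32:`), which the kernel relies on throughout; it matches any value in the inclusive range, replacing the old pair of single `case` labels. A userspace sketch of the same validation (constants mirror the driver's supported filter-table sizes):

```c
#include <assert.h>

/* The Synopsys core supports 1..32, 64 or 128 unicast filter entries;
 * any unsupported count falls back to the safe default of 1. */
static int verify_ucast_entries(int n)
{
	switch (n) {
	case 1 ... 32:	/* GCC/Clang case-range extension */
	case 64:
	case 128:
		return n;
	default:
		return 1;
	}
}
```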
+1
drivers/net/ethernet/ti/Kconfig
··· 41 41 config TI_DAVINCI_CPDMA 42 42 tristate "TI DaVinci CPDMA Support" 43 43 depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST 44 + select GENERIC_ALLOCATOR 44 45 ---help--- 45 46 This driver supports TI's DaVinci CPDMA dma engine. 46 47
+1 -1
drivers/net/ethernet/wiznet/w5100.c
··· 835 835 w5100_tx_skb(priv->ndev, skb); 836 836 } 837 837 838 - static int w5100_start_tx(struct sk_buff *skb, struct net_device *ndev) 838 + static netdev_tx_t w5100_start_tx(struct sk_buff *skb, struct net_device *ndev) 839 839 { 840 840 struct w5100_priv *priv = netdev_priv(ndev); 841 841
+1 -1
drivers/net/ethernet/wiznet/w5300.c
··· 365 365 netif_wake_queue(ndev); 366 366 } 367 367 368 - static int w5300_start_tx(struct sk_buff *skb, struct net_device *ndev) 368 + static netdev_tx_t w5300_start_tx(struct sk_buff *skb, struct net_device *ndev) 369 369 { 370 370 struct w5300_priv *priv = netdev_priv(ndev); 371 371
+3
drivers/net/hyperv/netvsc.c
··· 1203 1203 1204 1204 net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated; 1205 1205 net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial; 1206 + netdev_info(ndev, "VF slot %u %s\n", 1207 + net_device_ctx->vf_serial, 1208 + net_device_ctx->vf_alloc ? "added" : "removed"); 1206 1209 } 1207 1210 1208 1211 static void netvsc_receive_inband(struct net_device *ndev,
+36 -31
drivers/net/hyperv/netvsc_drv.c
··· 1894 1894 rtnl_unlock(); 1895 1895 } 1896 1896 1897 - static struct net_device *get_netvsc_bymac(const u8 *mac) 1898 - { 1899 - struct net_device_context *ndev_ctx; 1900 - 1901 - list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) { 1902 - struct net_device *dev = hv_get_drvdata(ndev_ctx->device_ctx); 1903 - 1904 - if (ether_addr_equal(mac, dev->perm_addr)) 1905 - return dev; 1906 - } 1907 - 1908 - return NULL; 1909 - } 1910 - 1911 1897 static struct net_device *get_netvsc_byref(struct net_device *vf_netdev) 1912 1898 { 1913 1899 struct net_device_context *net_device_ctx; ··· 2022 2036 rtnl_unlock(); 2023 2037 } 2024 2038 2039 + /* Find netvsc by VMBus serial number. 2040 + * The PCI hyperv controller records the serial number as the slot. 2041 + */ 2042 + static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev) 2043 + { 2044 + struct device *parent = vf_netdev->dev.parent; 2045 + struct net_device_context *ndev_ctx; 2046 + struct pci_dev *pdev; 2047 + 2048 + if (!parent || !dev_is_pci(parent)) 2049 + return NULL; /* not a PCI device */ 2050 + 2051 + pdev = to_pci_dev(parent); 2052 + if (!pdev->slot) { 2053 + netdev_notice(vf_netdev, "no PCI slot information\n"); 2054 + return NULL; 2055 + } 2056 + 2057 + list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) { 2058 + if (!ndev_ctx->vf_alloc) 2059 + continue; 2060 + 2061 + if (ndev_ctx->vf_serial == pdev->slot->number) 2062 + return hv_get_drvdata(ndev_ctx->device_ctx); 2063 + } 2064 + 2065 + netdev_notice(vf_netdev, 2066 + "no netdev found for slot %u\n", pdev->slot->number); 2067 + return NULL; 2068 + } 2069 + 2025 2070 static int netvsc_register_vf(struct net_device *vf_netdev) 2026 2071 { 2027 - struct net_device *ndev; 2028 2072 struct net_device_context *net_device_ctx; 2029 - struct device *pdev = vf_netdev->dev.parent; 2030 2073 struct netvsc_device *netvsc_dev; 2074 + struct net_device *ndev; 2031 2075 int ret; 2032 2076 2033 2077 if (vf_netdev->addr_len != ETH_ALEN) 2034 2078 
return NOTIFY_DONE; 2035 2079 2036 - if (!pdev || !dev_is_pci(pdev) || dev_is_pf(pdev)) 2037 - return NOTIFY_DONE; 2038 - 2039 - /* 2040 - * We will use the MAC address to locate the synthetic interface to 2041 - * associate with the VF interface. If we don't find a matching 2042 - * synthetic interface, move on. 2043 - */ 2044 - ndev = get_netvsc_bymac(vf_netdev->perm_addr); 2080 + ndev = get_netvsc_byslot(vf_netdev); 2045 2081 if (!ndev) 2046 2082 return NOTIFY_DONE; 2047 2083 ··· 2280 2272 2281 2273 cancel_delayed_work_sync(&ndev_ctx->dwork); 2282 2274 2283 - rcu_read_lock(); 2284 - nvdev = rcu_dereference(ndev_ctx->nvdev); 2285 - 2286 - if (nvdev) 2275 + rtnl_lock(); 2276 + nvdev = rtnl_dereference(ndev_ctx->nvdev); 2277 + if (nvdev) 2287 2278 cancel_work_sync(&nvdev->subchan_work); 2288 2279 2289 2280 /* 2290 2281 * Call to the vsc driver to let it know that the device is being 2291 2282 * removed. Also blocks mtu and channel changes. 2292 2283 */ 2293 - rtnl_lock(); 2294 2284 vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev); 2295 2285 if (vf_netdev) 2296 2286 netvsc_unregister_vf(vf_netdev); ··· 2300 2294 list_del(&ndev_ctx->list); 2301 2295 2302 2296 rtnl_unlock(); 2303 - rcu_read_unlock(); 2304 2297 2305 2298 hv_set_drvdata(dev, NULL); 2306 2299
+2 -2
drivers/net/phy/sfp-bus.c
··· 349 349 } 350 350 if (bus->started) 351 351 bus->socket_ops->start(bus->sfp); 352 + bus->netdev->sfp_bus = bus; 352 353 bus->registered = true; 353 354 return 0; 354 355 } ··· 358 357 { 359 358 const struct sfp_upstream_ops *ops = bus->upstream_ops; 360 359 360 + bus->netdev->sfp_bus = NULL; 361 361 if (bus->registered) { 362 362 if (bus->started) 363 363 bus->socket_ops->stop(bus->sfp); ··· 440 438 { 441 439 bus->upstream_ops = NULL; 442 440 bus->upstream = NULL; 443 - bus->netdev->sfp_bus = NULL; 444 441 bus->netdev = NULL; 445 442 } 446 443 ··· 468 467 bus->upstream_ops = ops; 469 468 bus->upstream = upstream; 470 469 bus->netdev = ndev; 471 - ndev->sfp_bus = bus; 472 470 473 471 if (bus->sfp) { 474 472 ret = sfp_register_bus(bus);
+3
drivers/net/ppp/pppoe.c
··· 429 429 if (!skb) 430 430 goto out; 431 431 432 + if (skb_mac_header_len(skb) < ETH_HLEN) 433 + goto drop; 434 + 432 435 if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr))) 433 436 goto drop; 434 437
-43
drivers/net/tun.c
··· 1153 1153 1154 1154 return (features & tun->set_features) | (features & ~TUN_USER_FEATURES); 1155 1155 } 1156 - #ifdef CONFIG_NET_POLL_CONTROLLER 1157 - static void tun_poll_controller(struct net_device *dev) 1158 - { 1159 - /* 1160 - * Tun only receives frames when: 1161 - * 1) the char device endpoint gets data from user space 1162 - * 2) the tun socket gets a sendmsg call from user space 1163 - * If NAPI is not enabled, since both of those are synchronous 1164 - * operations, we are guaranteed never to have pending data when we poll 1165 - * for it so there is nothing to do here but return. 1166 - * We need this though so netpoll recognizes us as an interface that 1167 - * supports polling, which enables bridge devices in virt setups to 1168 - * still use netconsole 1169 - * If NAPI is enabled, however, we need to schedule polling for all 1170 - * queues unless we are using napi_gro_frags(), which we call in 1171 - * process context and not in NAPI context. 1172 - */ 1173 - struct tun_struct *tun = netdev_priv(dev); 1174 - 1175 - if (tun->flags & IFF_NAPI) { 1176 - struct tun_file *tfile; 1177 - int i; 1178 - 1179 - if (tun_napi_frags_enabled(tun)) 1180 - return; 1181 - 1182 - rcu_read_lock(); 1183 - for (i = 0; i < tun->numqueues; i++) { 1184 - tfile = rcu_dereference(tun->tfiles[i]); 1185 - if (tfile->napi_enabled) 1186 - napi_schedule(&tfile->napi); 1187 - } 1188 - rcu_read_unlock(); 1189 - } 1190 - return; 1191 - } 1192 - #endif 1193 1156 1194 1157 static void tun_set_headroom(struct net_device *dev, int new_hr) 1195 1158 { ··· 1246 1283 .ndo_start_xmit = tun_net_xmit, 1247 1284 .ndo_fix_features = tun_net_fix_features, 1248 1285 .ndo_select_queue = tun_select_queue, 1249 - #ifdef CONFIG_NET_POLL_CONTROLLER 1250 - .ndo_poll_controller = tun_poll_controller, 1251 - #endif 1252 1286 .ndo_set_rx_headroom = tun_set_headroom, 1253 1287 .ndo_get_stats64 = tun_net_get_stats64, 1254 1288 }; ··· 1325 1365 .ndo_set_mac_address = eth_mac_addr, 1326 1366 
.ndo_validate_addr = eth_validate_addr, 1327 1367 .ndo_select_queue = tun_select_queue, 1328 - #ifdef CONFIG_NET_POLL_CONTROLLER 1329 - .ndo_poll_controller = tun_poll_controller, 1330 - #endif 1331 1368 .ndo_features_check = passthru_features_check, 1332 1369 .ndo_set_rx_headroom = tun_set_headroom, 1333 1370 .ndo_get_stats64 = tun_net_get_stats64,
+7 -7
drivers/net/usb/qmi_wwan.c
··· 1213 1213 {QMI_FIXED_INTF(0x1199, 0x9061, 8)}, /* Sierra Wireless Modem */ 1214 1214 {QMI_FIXED_INTF(0x1199, 0x9063, 8)}, /* Sierra Wireless EM7305 */ 1215 1215 {QMI_FIXED_INTF(0x1199, 0x9063, 10)}, /* Sierra Wireless EM7305 */ 1216 - {QMI_FIXED_INTF(0x1199, 0x9071, 8)}, /* Sierra Wireless MC74xx */ 1217 - {QMI_FIXED_INTF(0x1199, 0x9071, 10)}, /* Sierra Wireless MC74xx */ 1218 - {QMI_FIXED_INTF(0x1199, 0x9079, 8)}, /* Sierra Wireless EM74xx */ 1219 - {QMI_FIXED_INTF(0x1199, 0x9079, 10)}, /* Sierra Wireless EM74xx */ 1220 - {QMI_FIXED_INTF(0x1199, 0x907b, 8)}, /* Sierra Wireless EM74xx */ 1221 - {QMI_FIXED_INTF(0x1199, 0x907b, 10)}, /* Sierra Wireless EM74xx */ 1222 - {QMI_FIXED_INTF(0x1199, 0x9091, 8)}, /* Sierra Wireless EM7565 */ 1216 + {QMI_QUIRK_SET_DTR(0x1199, 0x9071, 8)}, /* Sierra Wireless MC74xx */ 1217 + {QMI_QUIRK_SET_DTR(0x1199, 0x9071, 10)},/* Sierra Wireless MC74xx */ 1218 + {QMI_QUIRK_SET_DTR(0x1199, 0x9079, 8)}, /* Sierra Wireless EM74xx */ 1219 + {QMI_QUIRK_SET_DTR(0x1199, 0x9079, 10)},/* Sierra Wireless EM74xx */ 1220 + {QMI_QUIRK_SET_DTR(0x1199, 0x907b, 8)}, /* Sierra Wireless EM74xx */ 1221 + {QMI_QUIRK_SET_DTR(0x1199, 0x907b, 10)},/* Sierra Wireless EM74xx */ 1222 + {QMI_QUIRK_SET_DTR(0x1199, 0x9091, 8)}, /* Sierra Wireless EM7565 */ 1223 1223 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */ 1224 1224 {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */ 1225 1225 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
+2 -2
drivers/net/veth.c
··· 463 463 int mac_len, delta, off; 464 464 struct xdp_buff xdp; 465 465 466 + skb_orphan(skb); 467 + 466 468 rcu_read_lock(); 467 469 xdp_prog = rcu_dereference(rq->xdp_prog); 468 470 if (unlikely(!xdp_prog)) { ··· 510 508 skb_copy_header(nskb, skb); 511 509 head_off = skb_headroom(nskb) - skb_headroom(skb); 512 510 skb_headers_offset_update(nskb, head_off); 513 - if (skb->sk) 514 - skb_set_owner_w(nskb, skb->sk); 515 511 consume_skb(skb); 516 512 skb = nskb; 517 513 }
+7 -1
drivers/net/xen-netfront.c
··· 908 908 BUG_ON(pull_to <= skb_headlen(skb)); 909 909 __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); 910 910 } 911 - BUG_ON(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS); 911 + if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) { 912 + queue->rx.rsp_cons = ++cons; 913 + kfree_skb(nskb); 914 + return ~0U; 915 + } 912 916 913 917 skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, 914 918 skb_frag_page(nfrag), ··· 1049 1045 skb->len += rx->status; 1050 1046 1051 1047 i = xennet_fill_frags(queue, skb, &tmpq); 1048 + if (unlikely(i == ~0U)) 1049 + goto err; 1052 1050 1053 1051 if (rx->flags & XEN_NETRXF_csum_blank) 1054 1052 skb->ip_summed = CHECKSUM_PARTIAL;
+4 -2
drivers/nvme/host/multipath.c
··· 537 537 538 538 INIT_WORK(&ctrl->ana_work, nvme_ana_work); 539 539 ctrl->ana_log_buf = kmalloc(ctrl->ana_log_size, GFP_KERNEL); 540 - if (!ctrl->ana_log_buf) 540 + if (!ctrl->ana_log_buf) { 541 + error = -ENOMEM; 541 542 goto out; 543 + } 542 544 543 545 error = nvme_read_ana_log(ctrl, true); 544 546 if (error) ··· 549 547 out_free_ana_log_buf: 550 548 kfree(ctrl->ana_log_buf); 551 549 out: 552 - return -ENOMEM; 550 + return error; 553 551 } 554 552 555 553 void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
+4
drivers/nvme/target/admin-cmd.c
··· 245 245 offset += len; 246 246 ngrps++; 247 247 } 248 + for ( ; grpid <= NVMET_MAX_ANAGRPS; grpid++) { 249 + if (nvmet_ana_group_enabled[grpid]) 250 + ngrps++; 251 + } 248 252 249 253 hdr.chgcnt = cpu_to_le64(nvmet_ana_chgcnt); 250 254 hdr.ngrps = cpu_to_le16(ngrps);
+4 -4
drivers/pci/controller/dwc/pcie-designware.c
··· 135 135 if (val & PCIE_ATU_ENABLE) 136 136 return; 137 137 138 - usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 138 + mdelay(LINK_WAIT_IATU); 139 139 } 140 140 dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 141 141 } ··· 178 178 if (val & PCIE_ATU_ENABLE) 179 179 return; 180 180 181 - usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 181 + mdelay(LINK_WAIT_IATU); 182 182 } 183 183 dev_err(pci->dev, "Outbound iATU is not being enabled\n"); 184 184 } ··· 236 236 if (val & PCIE_ATU_ENABLE) 237 237 return 0; 238 238 239 - usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 239 + mdelay(LINK_WAIT_IATU); 240 240 } 241 241 dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 242 242 ··· 282 282 if (val & PCIE_ATU_ENABLE) 283 283 return 0; 284 284 285 - usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX); 285 + mdelay(LINK_WAIT_IATU); 286 286 } 287 287 dev_err(pci->dev, "Inbound iATU is not being enabled\n"); 288 288
+1 -2
drivers/pci/controller/dwc/pcie-designware.h
··· 26 26 27 27 /* Parameters for the waiting for iATU enabled routine */ 28 28 #define LINK_WAIT_MAX_IATU_RETRIES 5 29 - #define LINK_WAIT_IATU_MIN 9000 30 - #define LINK_WAIT_IATU_MAX 10000 29 + #define LINK_WAIT_IATU 9 31 30 32 31 /* Synopsys-specific PCIe configuration registers */ 33 32 #define PCIE_PORT_LINK_CONTROL 0x710
+39
drivers/pci/controller/pci-hyperv.c
··· 89 89 90 90 #define STATUS_REVISION_MISMATCH 0xC0000059 91 91 92 + /* space for 32bit serial number as string */ 93 + #define SLOT_NAME_SIZE 11 94 + 92 95 /* 93 96 * Message Types 94 97 */ ··· 497 494 struct list_head list_entry; 498 495 refcount_t refs; 499 496 enum hv_pcichild_state state; 497 + struct pci_slot *pci_slot; 500 498 struct pci_function_description desc; 501 499 bool reported_missing; 502 500 struct hv_pcibus_device *hbus; ··· 1461 1457 spin_unlock_irqrestore(&hbus->device_list_lock, flags); 1462 1458 } 1463 1459 1460 + /* 1461 + * Assign entries in sysfs pci slot directory. 1462 + * 1463 + * Note that this function does not need to lock the children list 1464 + * because it is called from pci_devices_present_work which 1465 + * is serialized with hv_eject_device_work because they are on the 1466 + * same ordered workqueue. Therefore hbus->children list will not change 1467 + * even when pci_create_slot sleeps. 1468 + */ 1469 + static void hv_pci_assign_slots(struct hv_pcibus_device *hbus) 1470 + { 1471 + struct hv_pci_dev *hpdev; 1472 + char name[SLOT_NAME_SIZE]; 1473 + int slot_nr; 1474 + 1475 + list_for_each_entry(hpdev, &hbus->children, list_entry) { 1476 + if (hpdev->pci_slot) 1477 + continue; 1478 + 1479 + slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot)); 1480 + snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser); 1481 + hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr, 1482 + name, NULL); 1483 + if (IS_ERR(hpdev->pci_slot)) { 1484 + pr_warn("pci_create slot %s failed\n", name); 1485 + hpdev->pci_slot = NULL; 1486 + } 1487 + } 1488 + } 1489 + 1464 1490 /** 1465 1491 * create_root_hv_pci_bus() - Expose a new root PCI bus 1466 1492 * @hbus: Root PCI bus, as understood by this driver ··· 1514 1480 pci_lock_rescan_remove(); 1515 1481 pci_scan_child_bus(hbus->pci_bus); 1516 1482 pci_bus_assign_resources(hbus->pci_bus); 1483 + hv_pci_assign_slots(hbus); 1517 1484 pci_bus_add_devices(hbus->pci_bus); 1518 1485 
pci_unlock_rescan_remove(); 1519 1486 hbus->state = hv_pcibus_installed; ··· 1777 1742 */ 1778 1743 pci_lock_rescan_remove(); 1779 1744 pci_scan_child_bus(hbus->pci_bus); 1745 + hv_pci_assign_slots(hbus); 1780 1746 pci_unlock_rescan_remove(); 1781 1747 break; 1782 1748 ··· 1893 1857 spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags); 1894 1858 list_del(&hpdev->list_entry); 1895 1859 spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags); 1860 + 1861 + if (hpdev->pci_slot) 1862 + pci_destroy_slot(hpdev->pci_slot); 1896 1863 1897 1864 memset(&ctxt, 0, sizeof(ctxt)); 1898 1865 ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+6 -5
drivers/pci/hotplug/acpiphp_glue.c
··· 457 457 /** 458 458 * enable_slot - enable, configure a slot 459 459 * @slot: slot to be enabled 460 + * @bridge: true if enable is for the whole bridge (not a single slot) 460 461 * 461 462 * This function should be called per *physical slot*, 462 463 * not per each slot object in ACPI namespace. 463 464 */ 464 - static void enable_slot(struct acpiphp_slot *slot) 465 + static void enable_slot(struct acpiphp_slot *slot, bool bridge) 465 466 { 466 467 struct pci_dev *dev; 467 468 struct pci_bus *bus = slot->bus; 468 469 struct acpiphp_func *func; 469 470 470 - if (bus->self && hotplug_is_native(bus->self)) { 471 + if (bridge && bus->self && hotplug_is_native(bus->self)) { 471 472 /* 472 473 * If native hotplug is used, it will take care of hotplug 473 474 * slot management and resource allocation for hotplug ··· 702 701 trim_stale_devices(dev); 703 702 704 703 /* configure all functions */ 705 - enable_slot(slot); 704 + enable_slot(slot, true); 706 705 } else { 707 706 disable_slot(slot); 708 707 } ··· 786 785 if (bridge) 787 786 acpiphp_check_bridge(bridge); 788 787 else if (!(slot->flags & SLOT_IS_GOING_AWAY)) 789 - enable_slot(slot); 788 + enable_slot(slot, false); 790 789 791 790 break; 792 791 ··· 974 973 975 974 /* configure all functions */ 976 975 if (!(slot->flags & SLOT_ENABLED)) 977 - enable_slot(slot); 976 + enable_slot(slot, false); 978 977 979 978 pci_unlock_rescan_remove(); 980 979 return 0;
+21 -14
drivers/pinctrl/intel/pinctrl-cannonlake.c
··· 15 15 16 16 #include "pinctrl-intel.h" 17 17 18 - #define CNL_PAD_OWN 0x020 19 - #define CNL_PADCFGLOCK 0x080 20 - #define CNL_HOSTSW_OWN 0x0b0 21 - #define CNL_GPI_IE 0x120 18 + #define CNL_PAD_OWN 0x020 19 + #define CNL_PADCFGLOCK 0x080 20 + #define CNL_LP_HOSTSW_OWN 0x0b0 21 + #define CNL_H_HOSTSW_OWN 0x0c0 22 + #define CNL_GPI_IE 0x120 22 23 23 24 #define CNL_GPP(r, s, e, g) \ 24 25 { \ ··· 31 30 32 31 #define CNL_NO_GPIO -1 33 32 34 - #define CNL_COMMUNITY(b, s, e, g) \ 33 + #define CNL_COMMUNITY(b, s, e, o, g) \ 35 34 { \ 36 35 .barno = (b), \ 37 36 .padown_offset = CNL_PAD_OWN, \ 38 37 .padcfglock_offset = CNL_PADCFGLOCK, \ 39 - .hostown_offset = CNL_HOSTSW_OWN, \ 38 + .hostown_offset = (o), \ 40 39 .ie_offset = CNL_GPI_IE, \ 41 40 .pin_base = (s), \ 42 41 .npins = ((e) - (s) + 1), \ 43 42 .gpps = (g), \ 44 43 .ngpps = ARRAY_SIZE(g), \ 45 44 } 45 + 46 + #define CNLLP_COMMUNITY(b, s, e, g) \ 47 + CNL_COMMUNITY(b, s, e, CNL_LP_HOSTSW_OWN, g) 48 + 49 + #define CNLH_COMMUNITY(b, s, e, g) \ 50 + CNL_COMMUNITY(b, s, e, CNL_H_HOSTSW_OWN, g) 46 51 47 52 /* Cannon Lake-H */ 48 53 static const struct pinctrl_pin_desc cnlh_pins[] = { ··· 386 379 static const struct intel_padgroup cnlh_community3_gpps[] = { 387 380 CNL_GPP(0, 155, 178, 192), /* GPP_K */ 388 381 CNL_GPP(1, 179, 202, 224), /* GPP_H */ 389 - CNL_GPP(2, 203, 215, 258), /* GPP_E */ 382 + CNL_GPP(2, 203, 215, 256), /* GPP_E */ 390 383 CNL_GPP(3, 216, 239, 288), /* GPP_F */ 391 384 CNL_GPP(4, 240, 248, CNL_NO_GPIO), /* SPI */ 392 385 }; ··· 449 442 }; 450 443 451 444 static const struct intel_community cnlh_communities[] = { 452 - CNL_COMMUNITY(0, 0, 50, cnlh_community0_gpps), 453 - CNL_COMMUNITY(1, 51, 154, cnlh_community1_gpps), 454 - CNL_COMMUNITY(2, 155, 248, cnlh_community3_gpps), 455 - CNL_COMMUNITY(3, 249, 298, cnlh_community4_gpps), 445 + CNLH_COMMUNITY(0, 0, 50, cnlh_community0_gpps), 446 + CNLH_COMMUNITY(1, 51, 154, cnlh_community1_gpps), 447 + CNLH_COMMUNITY(2, 155, 248, cnlh_community3_gpps), 
448 + CNLH_COMMUNITY(3, 249, 298, cnlh_community4_gpps), 456 449 }; 457 450 458 451 static const struct intel_pinctrl_soc_data cnlh_soc_data = { ··· 810 803 }; 811 804 812 805 static const struct intel_community cnllp_communities[] = { 813 - CNL_COMMUNITY(0, 0, 67, cnllp_community0_gpps), 814 - CNL_COMMUNITY(1, 68, 180, cnllp_community1_gpps), 815 - CNL_COMMUNITY(2, 181, 243, cnllp_community4_gpps), 806 + CNLLP_COMMUNITY(0, 0, 67, cnllp_community0_gpps), 807 + CNLLP_COMMUNITY(1, 68, 180, cnllp_community1_gpps), 808 + CNLLP_COMMUNITY(2, 181, 243, cnllp_community4_gpps), 816 809 }; 817 810 818 811 static const struct intel_pinctrl_soc_data cnllp_soc_data = {
+90 -107
drivers/pinctrl/intel/pinctrl-intel.c
··· 747 747 .owner = THIS_MODULE, 748 748 }; 749 749 750 - static int intel_gpio_get(struct gpio_chip *chip, unsigned offset) 751 - { 752 - struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 753 - void __iomem *reg; 754 - u32 padcfg0; 755 - 756 - reg = intel_get_padcfg(pctrl, offset, PADCFG0); 757 - if (!reg) 758 - return -EINVAL; 759 - 760 - padcfg0 = readl(reg); 761 - if (!(padcfg0 & PADCFG0_GPIOTXDIS)) 762 - return !!(padcfg0 & PADCFG0_GPIOTXSTATE); 763 - 764 - return !!(padcfg0 & PADCFG0_GPIORXSTATE); 765 - } 766 - 767 - static void intel_gpio_set(struct gpio_chip *chip, unsigned offset, int value) 768 - { 769 - struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 770 - unsigned long flags; 771 - void __iomem *reg; 772 - u32 padcfg0; 773 - 774 - reg = intel_get_padcfg(pctrl, offset, PADCFG0); 775 - if (!reg) 776 - return; 777 - 778 - raw_spin_lock_irqsave(&pctrl->lock, flags); 779 - padcfg0 = readl(reg); 780 - if (value) 781 - padcfg0 |= PADCFG0_GPIOTXSTATE; 782 - else 783 - padcfg0 &= ~PADCFG0_GPIOTXSTATE; 784 - writel(padcfg0, reg); 785 - raw_spin_unlock_irqrestore(&pctrl->lock, flags); 786 - } 787 - 788 - static int intel_gpio_get_direction(struct gpio_chip *chip, unsigned int offset) 789 - { 790 - struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 791 - void __iomem *reg; 792 - u32 padcfg0; 793 - 794 - reg = intel_get_padcfg(pctrl, offset, PADCFG0); 795 - if (!reg) 796 - return -EINVAL; 797 - 798 - padcfg0 = readl(reg); 799 - 800 - if (padcfg0 & PADCFG0_PMODE_MASK) 801 - return -EINVAL; 802 - 803 - return !!(padcfg0 & PADCFG0_GPIOTXDIS); 804 - } 805 - 806 - static int intel_gpio_direction_input(struct gpio_chip *chip, unsigned offset) 807 - { 808 - return pinctrl_gpio_direction_input(chip->base + offset); 809 - } 810 - 811 - static int intel_gpio_direction_output(struct gpio_chip *chip, unsigned offset, 812 - int value) 813 - { 814 - intel_gpio_set(chip, offset, value); 815 - return pinctrl_gpio_direction_output(chip->base + offset); 816 - } 817 - 
818 - static const struct gpio_chip intel_gpio_chip = { 819 - .owner = THIS_MODULE, 820 - .request = gpiochip_generic_request, 821 - .free = gpiochip_generic_free, 822 - .get_direction = intel_gpio_get_direction, 823 - .direction_input = intel_gpio_direction_input, 824 - .direction_output = intel_gpio_direction_output, 825 - .get = intel_gpio_get, 826 - .set = intel_gpio_set, 827 - .set_config = gpiochip_generic_config, 828 - }; 829 - 830 750 /** 831 751 * intel_gpio_to_pin() - Translate from GPIO offset to pin number 832 752 * @pctrl: Pinctrl structure ··· 792 872 return -EINVAL; 793 873 } 794 874 795 - static int intel_gpio_irq_reqres(struct irq_data *d) 875 + static int intel_gpio_get(struct gpio_chip *chip, unsigned offset) 796 876 { 797 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 798 - struct intel_pinctrl *pctrl = gpiochip_get_data(gc); 799 - int pin; 800 - int ret; 801 - 802 - pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL); 803 - if (pin >= 0) { 804 - ret = gpiochip_lock_as_irq(gc, pin); 805 - if (ret) { 806 - dev_err(pctrl->dev, "unable to lock HW IRQ %d for IRQ\n", 807 - pin); 808 - return ret; 809 - } 810 - } 811 - return 0; 812 - } 813 - 814 - static void intel_gpio_irq_relres(struct irq_data *d) 815 - { 816 - struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 817 - struct intel_pinctrl *pctrl = gpiochip_get_data(gc); 877 + struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 878 + void __iomem *reg; 879 + u32 padcfg0; 818 880 int pin; 819 881 820 - pin = intel_gpio_to_pin(pctrl, irqd_to_hwirq(d), NULL, NULL); 821 - if (pin >= 0) 822 - gpiochip_unlock_as_irq(gc, pin); 882 + pin = intel_gpio_to_pin(pctrl, offset, NULL, NULL); 883 + if (pin < 0) 884 + return -EINVAL; 885 + 886 + reg = intel_get_padcfg(pctrl, pin, PADCFG0); 887 + if (!reg) 888 + return -EINVAL; 889 + 890 + padcfg0 = readl(reg); 891 + if (!(padcfg0 & PADCFG0_GPIOTXDIS)) 892 + return !!(padcfg0 & PADCFG0_GPIOTXSTATE); 893 + 894 + return !!(padcfg0 & 
PADCFG0_GPIORXSTATE); 823 895 } 896 + 897 + static void intel_gpio_set(struct gpio_chip *chip, unsigned offset, int value) 898 + { 899 + struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 900 + unsigned long flags; 901 + void __iomem *reg; 902 + u32 padcfg0; 903 + int pin; 904 + 905 + pin = intel_gpio_to_pin(pctrl, offset, NULL, NULL); 906 + if (pin < 0) 907 + return; 908 + 909 + reg = intel_get_padcfg(pctrl, pin, PADCFG0); 910 + if (!reg) 911 + return; 912 + 913 + raw_spin_lock_irqsave(&pctrl->lock, flags); 914 + padcfg0 = readl(reg); 915 + if (value) 916 + padcfg0 |= PADCFG0_GPIOTXSTATE; 917 + else 918 + padcfg0 &= ~PADCFG0_GPIOTXSTATE; 919 + writel(padcfg0, reg); 920 + raw_spin_unlock_irqrestore(&pctrl->lock, flags); 921 + } 922 + 923 + static int intel_gpio_get_direction(struct gpio_chip *chip, unsigned int offset) 924 + { 925 + struct intel_pinctrl *pctrl = gpiochip_get_data(chip); 926 + void __iomem *reg; 927 + u32 padcfg0; 928 + int pin; 929 + 930 + pin = intel_gpio_to_pin(pctrl, offset, NULL, NULL); 931 + if (pin < 0) 932 + return -EINVAL; 933 + 934 + reg = intel_get_padcfg(pctrl, pin, PADCFG0); 935 + if (!reg) 936 + return -EINVAL; 937 + 938 + padcfg0 = readl(reg); 939 + 940 + if (padcfg0 & PADCFG0_PMODE_MASK) 941 + return -EINVAL; 942 + 943 + return !!(padcfg0 & PADCFG0_GPIOTXDIS); 944 + } 945 + 946 + static int intel_gpio_direction_input(struct gpio_chip *chip, unsigned offset) 947 + { 948 + return pinctrl_gpio_direction_input(chip->base + offset); 949 + } 950 + 951 + static int intel_gpio_direction_output(struct gpio_chip *chip, unsigned offset, 952 + int value) 953 + { 954 + intel_gpio_set(chip, offset, value); 955 + return pinctrl_gpio_direction_output(chip->base + offset); 956 + } 957 + 958 + static const struct gpio_chip intel_gpio_chip = { 959 + .owner = THIS_MODULE, 960 + .request = gpiochip_generic_request, 961 + .free = gpiochip_generic_free, 962 + .get_direction = intel_gpio_get_direction, 963 + .direction_input = 
intel_gpio_direction_input, 964 + .direction_output = intel_gpio_direction_output, 965 + .get = intel_gpio_get, 966 + .set = intel_gpio_set, 967 + .set_config = gpiochip_generic_config, 968 + }; 824 969 825 970 static void intel_gpio_irq_ack(struct irq_data *d) 826 971 { ··· 1102 1117 1103 1118 static struct irq_chip intel_gpio_irqchip = { 1104 1119 .name = "intel-gpio", 1105 - .irq_request_resources = intel_gpio_irq_reqres, 1106 - .irq_release_resources = intel_gpio_irq_relres, 1107 1120 .irq_enable = intel_gpio_irq_enable, 1108 1121 .irq_ack = intel_gpio_irq_ack, 1109 1122 .irq_mask = intel_gpio_irq_mask,
+23 -10
drivers/pinctrl/pinctrl-amd.c
··· 348 348 unsigned long flags; 349 349 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 350 350 struct amd_gpio *gpio_dev = gpiochip_get_data(gc); 351 - u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF); 352 351 353 352 raw_spin_lock_irqsave(&gpio_dev->lock, flags); 354 353 pin_reg = readl(gpio_dev->base + (d->hwirq)*4); 355 354 pin_reg |= BIT(INTERRUPT_ENABLE_OFF); 356 355 pin_reg |= BIT(INTERRUPT_MASK_OFF); 357 356 writel(pin_reg, gpio_dev->base + (d->hwirq)*4); 358 - /* 359 - * When debounce logic is enabled it takes ~900 us before interrupts 360 - * can be enabled. During this "debounce warm up" period the 361 - * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it 362 - * reads back as 1, signaling that interrupts are now enabled. 363 - */ 364 - while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask) 365 - continue; 366 357 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 367 358 } 368 359 ··· 417 426 static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type) 418 427 { 419 428 int ret = 0; 420 - u32 pin_reg; 429 + u32 pin_reg, pin_reg_irq_en, mask; 421 430 unsigned long flags, irq_flags; 422 431 struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 423 432 struct amd_gpio *gpio_dev = gpiochip_get_data(gc); ··· 486 495 } 487 496 488 497 pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF; 498 + /* 499 + * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the 500 + * debounce registers of any GPIO will block wake/interrupt status 501 + * generation for *all* GPIOs for a length of time that depends on 502 + * WAKE_INT_MASTER_REG.MaskStsLength[11:0]. During this period the 503 + * INTERRUPT_ENABLE bit will read as 0. 504 + * 505 + * We temporarily enable irq for the GPIO whose configuration is 506 + * changing, and then wait for it to read back as 1 to know when 507 + * debounce has settled and then disable the irq again. 
508 + * We do this polling with the spinlock held to ensure other GPIO 509 + * access routines do not read an incorrect value for the irq enable 510 + * bit of other GPIOs. We keep the GPIO masked while polling to avoid 511 + * spurious irqs, and disable the irq again after polling. 512 + */ 513 + mask = BIT(INTERRUPT_ENABLE_OFF); 514 + pin_reg_irq_en = pin_reg; 515 + pin_reg_irq_en |= mask; 516 + pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF); 517 + writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4); 518 + while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask) 519 + continue; 489 520 writel(pin_reg, gpio_dev->base + (d->hwirq)*4); 490 521 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags); 491 522
+1
drivers/platform/x86/alienware-wmi.c
··· 536 536 if (obj && obj->type == ACPI_TYPE_INTEGER) 537 537 *out_data = (u32) obj->integer.value; 538 538 } 539 + kfree(output.pointer); 539 540 return status; 540 541 541 542 }
+1
drivers/platform/x86/dell-smbios-wmi.c
··· 78 78 dev_dbg(&wdev->dev, "result: [%08x,%08x,%08x,%08x]\n", 79 79 priv->buf->std.output[0], priv->buf->std.output[1], 80 80 priv->buf->std.output[2], priv->buf->std.output[3]); 81 + kfree(output.pointer); 81 82 82 83 return 0; 83 84 }
+19
drivers/regulator/bd71837-regulator.c
··· 569 569 BD71837_REG_REGLOCK); 570 570 } 571 571 572 + /* 573 + * There is a HW quirk in BD71837. The shutdown sequence timings for 574 + * bucks/LDOs which are controlled via register interface are changed. 575 + * At PMIC poweroff the voltage for BUCK6/7 is cut immediately at the 576 + * beginning of shut-down sequence. As bucks 6 and 7 are parent 577 + * supplies for LDO5 and LDO6 - this causes LDO5/6 voltage 578 + * monitoring to erroneously detect under voltage and force PMIC to 579 + * emergency state instead of poweroff. In order to avoid this we 580 + * disable voltage monitoring for LDO5 and LDO6 581 + */ 582 + err = regmap_update_bits(pmic->mfd->regmap, BD718XX_REG_MVRFLTMASK2, 583 + BD718XX_LDO5_VRMON80 | BD718XX_LDO6_VRMON80, 584 + BD718XX_LDO5_VRMON80 | BD718XX_LDO6_VRMON80); 585 + if (err) { 586 + dev_err(&pmic->pdev->dev, 587 + "Failed to disable voltage monitoring\n"); 588 + goto err; 589 + } 590 + 572 591 for (i = 0; i < ARRAY_SIZE(pmic_regulator_inits); i++) { 573 592 574 593 struct regulator_desc *desc;
+2 -2
drivers/regulator/core.c
··· 3161 3161 if (!rstate->changeable) 3162 3162 return -EPERM; 3163 3163 3164 - rstate->enabled = en; 3164 + rstate->enabled = (en) ? ENABLE_IN_SUSPEND : DISABLE_IN_SUSPEND; 3165 3165 3166 3166 return 0; 3167 3167 } ··· 4395 4395 !rdev->desc->fixed_uV) 4396 4396 rdev->is_switch = true; 4397 4397 4398 + dev_set_drvdata(&rdev->dev, rdev); 4398 4399 ret = device_register(&rdev->dev); 4399 4400 if (ret != 0) { 4400 4401 put_device(&rdev->dev); 4401 4402 goto unset_supplies; 4402 4403 } 4403 4404 4404 - dev_set_drvdata(&rdev->dev, rdev); 4405 4405 rdev_init_debugfs(rdev); 4406 4406 4407 4407 /* try to resolve regulators supply since a new one was registered */
-2
drivers/regulator/of_regulator.c
··· 213 213 else if (of_property_read_bool(suspend_np, 214 214 "regulator-off-in-suspend")) 215 215 suspend_state->enabled = DISABLE_IN_SUSPEND; 216 - else 217 - suspend_state->enabled = DO_NOTHING_IN_SUSPEND; 218 216 219 217 if (!of_property_read_u32(np, "regulator-suspend-min-microvolt", 220 218 &pval))
+2 -3
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
··· 3474 3474 vscsi->dds.window[LOCAL].liobn, 3475 3475 vscsi->dds.window[REMOTE].liobn); 3476 3476 3477 - strcpy(vscsi->eye, "VSCSI "); 3478 - strncat(vscsi->eye, vdev->name, MAX_EYE); 3477 + snprintf(vscsi->eye, sizeof(vscsi->eye), "VSCSI %s", vdev->name); 3479 3478 3480 3479 vscsi->dds.unit_id = vdev->unit_address; 3481 - strncpy(vscsi->dds.partition_name, partition_name, 3480 + strscpy(vscsi->dds.partition_name, partition_name, 3482 3481 sizeof(vscsi->dds.partition_name)); 3483 3482 vscsi->dds.partition_num = partition_number; 3484 3483
+63 -47
drivers/scsi/ipr.c
··· 3335 3335 LEAVE; 3336 3336 } 3337 3337 3338 - /** 3339 - * ipr_worker_thread - Worker thread 3340 - * @work: ioa config struct 3341 - * 3342 - * Called at task level from a work thread. This function takes care 3343 - * of adding and removing device from the mid-layer as configuration 3344 - * changes are detected by the adapter. 3345 - * 3346 - * Return value: 3347 - * nothing 3348 - **/ 3349 - static void ipr_worker_thread(struct work_struct *work) 3338 + static void ipr_add_remove_thread(struct work_struct *work) 3350 3339 { 3351 3340 unsigned long lock_flags; 3352 3341 struct ipr_resource_entry *res; 3353 3342 struct scsi_device *sdev; 3354 - struct ipr_dump *dump; 3355 3343 struct ipr_ioa_cfg *ioa_cfg = 3356 - container_of(work, struct ipr_ioa_cfg, work_q); 3344 + container_of(work, struct ipr_ioa_cfg, scsi_add_work_q); 3357 3345 u8 bus, target, lun; 3358 3346 int did_work; 3359 3347 3360 3348 ENTER; 3361 3349 spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3362 - 3363 - if (ioa_cfg->sdt_state == READ_DUMP) { 3364 - dump = ioa_cfg->dump; 3365 - if (!dump) { 3366 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3367 - return; 3368 - } 3369 - kref_get(&dump->kref); 3370 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3371 - ipr_get_ioa_dump(ioa_cfg, dump); 3372 - kref_put(&dump->kref, ipr_release_dump); 3373 - 3374 - spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3375 - if (ioa_cfg->sdt_state == DUMP_OBTAINED && !ioa_cfg->dump_timeout) 3376 - ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE); 3377 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3378 - return; 3379 - } 3380 - 3381 - if (ioa_cfg->scsi_unblock) { 3382 - ioa_cfg->scsi_unblock = 0; 3383 - ioa_cfg->scsi_blocked = 0; 3384 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3385 - scsi_unblock_requests(ioa_cfg->host); 3386 - spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3387 - if (ioa_cfg->scsi_blocked) 3388 - 
scsi_block_requests(ioa_cfg->host); 3389 - } 3390 - 3391 - if (!ioa_cfg->scan_enabled) { 3392 - spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3393 - return; 3394 - } 3395 3350 3396 3351 restart: 3397 3352 do { ··· 3391 3436 ioa_cfg->scan_done = 1; 3392 3437 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3393 3438 kobject_uevent(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE); 3439 + LEAVE; 3440 + } 3441 + 3442 + /** 3443 + * ipr_worker_thread - Worker thread 3444 + * @work: ioa config struct 3445 + * 3446 + * Called at task level from a work thread. This function takes care 3447 + * of adding and removing device from the mid-layer as configuration 3448 + * changes are detected by the adapter. 3449 + * 3450 + * Return value: 3451 + * nothing 3452 + **/ 3453 + static void ipr_worker_thread(struct work_struct *work) 3454 + { 3455 + unsigned long lock_flags; 3456 + struct ipr_dump *dump; 3457 + struct ipr_ioa_cfg *ioa_cfg = 3458 + container_of(work, struct ipr_ioa_cfg, work_q); 3459 + 3460 + ENTER; 3461 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3462 + 3463 + if (ioa_cfg->sdt_state == READ_DUMP) { 3464 + dump = ioa_cfg->dump; 3465 + if (!dump) { 3466 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3467 + return; 3468 + } 3469 + kref_get(&dump->kref); 3470 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3471 + ipr_get_ioa_dump(ioa_cfg, dump); 3472 + kref_put(&dump->kref, ipr_release_dump); 3473 + 3474 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3475 + if (ioa_cfg->sdt_state == DUMP_OBTAINED && !ioa_cfg->dump_timeout) 3476 + ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE); 3477 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3478 + return; 3479 + } 3480 + 3481 + if (ioa_cfg->scsi_unblock) { 3482 + ioa_cfg->scsi_unblock = 0; 3483 + ioa_cfg->scsi_blocked = 0; 3484 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3485 + 
scsi_unblock_requests(ioa_cfg->host); 3486 + spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags); 3487 + if (ioa_cfg->scsi_blocked) 3488 + scsi_block_requests(ioa_cfg->host); 3489 + } 3490 + 3491 + if (!ioa_cfg->scan_enabled) { 3492 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3493 + return; 3494 + } 3495 + 3496 + schedule_work(&ioa_cfg->scsi_add_work_q); 3497 + 3498 + spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 3394 3499 LEAVE; 3395 3500 } 3396 3501 ··· 9948 9933 INIT_LIST_HEAD(&ioa_cfg->free_res_q); 9949 9934 INIT_LIST_HEAD(&ioa_cfg->used_res_q); 9950 9935 INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread); 9936 + INIT_WORK(&ioa_cfg->scsi_add_work_q, ipr_add_remove_thread); 9951 9937 init_waitqueue_head(&ioa_cfg->reset_wait_q); 9952 9938 init_waitqueue_head(&ioa_cfg->msi_wait_q); 9953 9939 init_waitqueue_head(&ioa_cfg->eeh_wait_q);
+1
drivers/scsi/ipr.h
··· 1575 1575 u8 saved_mode_page_len; 1576 1576 1577 1577 struct work_struct work_q; 1578 + struct work_struct scsi_add_work_q; 1578 1579 struct workqueue_struct *reset_work_q; 1579 1580 1580 1581 wait_queue_head_t reset_wait_q;
+10 -5
drivers/scsi/lpfc/lpfc_attr.c
··· 360 360 goto buffer_done; 361 361 362 362 list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { 363 + nrport = NULL; 364 + spin_lock(&vport->phba->hbalock); 363 365 rport = lpfc_ndlp_get_nrport(ndlp); 364 - if (!rport) 365 - continue; 366 - 367 - /* local short-hand pointer. */ 368 - nrport = rport->remoteport; 366 + if (rport) 367 + nrport = rport->remoteport; 368 + spin_unlock(&vport->phba->hbalock); 369 369 if (!nrport) 370 370 continue; 371 371 ··· 3386 3386 struct lpfc_nodelist *ndlp; 3387 3387 #if (IS_ENABLED(CONFIG_NVME_FC)) 3388 3388 struct lpfc_nvme_rport *rport; 3389 + struct nvme_fc_remote_port *remoteport = NULL; 3389 3390 #endif 3390 3391 3391 3392 shost = lpfc_shost_from_vport(vport); ··· 3397 3396 if (ndlp->rport) 3398 3397 ndlp->rport->dev_loss_tmo = vport->cfg_devloss_tmo; 3399 3398 #if (IS_ENABLED(CONFIG_NVME_FC)) 3399 + spin_lock(&vport->phba->hbalock); 3400 3400 rport = lpfc_ndlp_get_nrport(ndlp); 3401 3401 if (rport) 3402 + remoteport = rport->remoteport; 3403 + spin_unlock(&vport->phba->hbalock); 3404 + if (remoteport) 3402 3405 nvme_fc_set_remoteport_devloss(rport->remoteport, 3403 3406 vport->cfg_devloss_tmo); 3404 3407 #endif
+5 -5
drivers/scsi/lpfc/lpfc_debugfs.c
··· 551 551 unsigned char *statep; 552 552 struct nvme_fc_local_port *localport; 553 553 struct lpfc_nvmet_tgtport *tgtp; 554 - struct nvme_fc_remote_port *nrport; 554 + struct nvme_fc_remote_port *nrport = NULL; 555 555 struct lpfc_nvme_rport *rport; 556 556 557 557 cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE); ··· 696 696 len += snprintf(buf + len, size - len, "\tRport List:\n"); 697 697 list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { 698 698 /* local short-hand pointer. */ 699 + spin_lock(&phba->hbalock); 699 700 rport = lpfc_ndlp_get_nrport(ndlp); 700 - if (!rport) 701 - continue; 702 - 703 - nrport = rport->remoteport; 701 + if (rport) 702 + nrport = rport->remoteport; 703 + spin_unlock(&phba->hbalock); 704 704 if (!nrport) 705 705 continue; 706 706
+8 -3
drivers/scsi/lpfc/lpfc_nvme.c
··· 2725 2725 rpinfo.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn); 2726 2726 rpinfo.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn); 2727 2727 2728 + spin_lock_irq(&vport->phba->hbalock); 2728 2729 oldrport = lpfc_ndlp_get_nrport(ndlp); 2730 + spin_unlock_irq(&vport->phba->hbalock); 2729 2731 if (!oldrport) 2730 2732 lpfc_nlp_get(ndlp); 2731 2733 ··· 2842 2840 struct nvme_fc_local_port *localport; 2843 2841 struct lpfc_nvme_lport *lport; 2844 2842 struct lpfc_nvme_rport *rport; 2845 - struct nvme_fc_remote_port *remoteport; 2843 + struct nvme_fc_remote_port *remoteport = NULL; 2846 2844 2847 2845 localport = vport->localport; 2848 2846 ··· 2856 2854 if (!lport) 2857 2855 goto input_err; 2858 2856 2857 + spin_lock_irq(&vport->phba->hbalock); 2859 2858 rport = lpfc_ndlp_get_nrport(ndlp); 2860 - if (!rport) 2859 + if (rport) 2860 + remoteport = rport->remoteport; 2861 + spin_unlock_irq(&vport->phba->hbalock); 2862 + if (!remoteport) 2861 2863 goto input_err; 2862 2864 2863 - remoteport = rport->remoteport; 2864 2865 lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, 2865 2866 "6033 Unreg nvme remoteport %p, portname x%llx, " 2866 2867 "port_id x%06x, portstate x%x port type x%x\n",
+2 -2
drivers/scsi/qla2xxx/qla_target.h
··· 374 374 static inline int fcpcmd_is_corrupted(struct atio *atio) 375 375 { 376 376 if (atio->entry_type == ATIO_TYPE7 && 377 - (le16_to_cpu(atio->attr_n_length & FCP_CMD_LENGTH_MASK) < 378 - FCP_CMD_LENGTH_MIN)) 377 + ((le16_to_cpu(atio->attr_n_length) & FCP_CMD_LENGTH_MASK) < 378 + FCP_CMD_LENGTH_MIN)) 379 379 return 1; 380 380 else 381 381 return 0;
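The qla2xxx hunk above is a byte-order precedence fix: `le16_to_cpu(atio->attr_n_length & FCP_CMD_LENGTH_MASK)` masked the raw little-endian bytes before swapping, so on big-endian hosts the corrupted-command test examined the wrong bits. A standalone sketch of the difference (the swap models a big-endian host; the mask value here is illustrative, not the real FCP_CMD_LENGTH_MASK):

```c
#include <stdint.h>

/* Illustrative mask; the real FCP_CMD_LENGTH_MASK lives in the
 * qla2xxx headers. */
#define CMD_LENGTH_MASK 0x0fff

/* le16_to_cpu() as seen on a big-endian host: a byte swap. */
static uint16_t le16_to_cpu_be(uint16_t le)
{
	return (uint16_t)((le << 8) | (le >> 8));
}

/* Old code: mask the raw little-endian value, then swap -- wrong bits. */
static uint16_t cmd_len_old(uint16_t raw)
{
	return le16_to_cpu_be(raw & CMD_LENGTH_MASK);
}

/* Fixed code: swap to CPU order first, then mask. */
static uint16_t cmd_len_fixed(uint16_t raw)
{
	return le16_to_cpu_be(raw) & CMD_LENGTH_MASK;
}
```

With CPU-order value 0x1234 stored little-endian (read back as raw 0x3412 on a big-endian machine), the fixed form recovers length 0x0234 while the old form yields 0x1204, so the `< FCP_CMD_LENGTH_MIN` test could pass or fail on the wrong field.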
+5 -1
drivers/scsi/sd.c
··· 1276 1276 case REQ_OP_ZONE_RESET: 1277 1277 return sd_zbc_setup_reset_cmnd(cmd); 1278 1278 default: 1279 - BUG(); 1279 + WARN_ON_ONCE(1); 1280 + return BLKPREP_KILL; 1280 1281 } 1281 1282 } 1282 1283 ··· 2960 2959 if (rot == 1) { 2961 2960 blk_queue_flag_set(QUEUE_FLAG_NONROT, q); 2962 2961 blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); 2962 + } else { 2963 + blk_queue_flag_clear(QUEUE_FLAG_NONROT, q); 2964 + blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q); 2963 2965 } 2964 2966 2965 2967 if (sdkp->device->type == TYPE_ZBC) {
+7
drivers/scsi/ufs/ufshcd.c
··· 7940 7940 err = -ENOMEM; 7941 7941 goto out_error; 7942 7942 } 7943 + 7944 + /* 7945 + * Do not use blk-mq at this time because blk-mq does not support 7946 + * runtime pm. 7947 + */ 7948 + host->use_blk_mq = false; 7949 + 7943 7950 hba = shost_priv(host); 7944 7951 hba->host = host; 7945 7952 hba->dev = dev;
+17 -6
drivers/soundwire/stream.c
··· 899 899 struct sdw_master_runtime *m_rt = stream->m_rt; 900 900 struct sdw_slave_runtime *s_rt, *_s_rt; 901 901 902 - list_for_each_entry_safe(s_rt, _s_rt, 903 - &m_rt->slave_rt_list, m_rt_node) 904 - sdw_stream_remove_slave(s_rt->slave, stream); 902 + list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) { 903 + sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream); 904 + sdw_release_slave_stream(s_rt->slave, stream); 905 + } 905 906 906 907 list_del(&m_rt->bus_node); 907 908 } ··· 1113 1112 "Master runtime config failed for stream:%s", 1114 1113 stream->name); 1115 1114 ret = -ENOMEM; 1116 - goto error; 1115 + goto unlock; 1117 1116 } 1118 1117 1119 1118 ret = sdw_config_stream(bus->dev, stream, stream_config, false); ··· 1124 1123 if (ret) 1125 1124 goto stream_error; 1126 1125 1127 - stream->state = SDW_STREAM_CONFIGURED; 1126 + goto unlock; 1128 1127 1129 1128 stream_error: 1130 1129 sdw_release_master_stream(stream); 1131 - error: 1130 + unlock: 1132 1131 mutex_unlock(&bus->bus_lock); 1133 1132 return ret; 1134 1133 } ··· 1142 1141 * @stream: SoundWire stream 1143 1142 * @port_config: Port configuration for audio stream 1144 1143 * @num_ports: Number of ports 1144 + * 1145 + * It is expected that Slave is added before adding Master 1146 + * to the Stream. 1147 + * 1145 1148 */ 1146 1149 int sdw_stream_add_slave(struct sdw_slave *slave, 1147 1150 struct sdw_stream_config *stream_config, ··· 1191 1186 if (ret) 1192 1187 goto stream_error; 1193 1188 1189 + /* 1190 + * Change stream state to CONFIGURED on first Slave add. 1191 + * Bus is not aware of number of Slave(s) in a stream at this 1192 + * point so cannot depend on all Slave(s) to be added in order to 1193 + * change stream state to CONFIGURED. 1194 + */ 1194 1195 stream->state = SDW_STREAM_CONFIGURED; 1195 1196 goto error; 1196 1197
+6
drivers/spi/spi-fsl-dspi.c
··· 30 30 31 31 #define DRIVER_NAME "fsl-dspi" 32 32 33 + #ifdef CONFIG_M5441x 34 + #define DSPI_FIFO_SIZE 16 35 + #else 33 36 #define DSPI_FIFO_SIZE 4 37 + #endif 34 38 #define DSPI_DMA_BUFSIZE (DSPI_FIFO_SIZE * 1024) 35 39 36 40 #define SPI_MCR 0x00 ··· 627 623 static void dspi_eoq_write(struct fsl_dspi *dspi) 628 624 { 629 625 int fifo_size = DSPI_FIFO_SIZE; 626 + u16 xfer_cmd = dspi->tx_cmd; 630 627 631 628 /* Fill TX FIFO with as many transfers as possible */ 632 629 while (dspi->len && fifo_size--) { 630 + dspi->tx_cmd = xfer_cmd; 633 631 /* Request EOQF for last transfer in FIFO */ 634 632 if (dspi->len == dspi->bytes_per_word || fifo_size == 0) 635 633 dspi->tx_cmd |= SPI_PUSHR_CMD_EOQ;
+2 -2
drivers/spi/spi-gpio.c
··· 300 300 *mflags |= SPI_MASTER_NO_RX; 301 301 302 302 spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW); 303 - if (IS_ERR(spi_gpio->mosi)) 304 - return PTR_ERR(spi_gpio->mosi); 303 + if (IS_ERR(spi_gpio->sck)) 304 + return PTR_ERR(spi_gpio->sck); 305 305 306 306 for (i = 0; i < num_chipselects; i++) { 307 307 spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs",
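The spi-gpio hunk is a copy-and-paste fix: after fetching the SCK descriptor, the code tested IS_ERR() on the MOSI pointer, so an SCK lookup failure went unnoticed. Kernel getters like devm_gpiod_get() report failure through error pointers; a simplified user-space model of that convention (the packing of errno values into a pointer is implementation-defined, which the kernel relies on for its address-space layout):

```c
#include <errno.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Simplified stand-ins for the kernel's ERR_PTR()/PTR_ERR()/IS_ERR():
 * small negative errno values packed into the top of the address space. */
static void *ERR_PTR(long err)
{
	return (void *)err;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

The fixed pattern always tests the pointer that was just assigned, e.g. `sck = get(...); if (IS_ERR(sck)) return PTR_ERR(sck);` — the original tested a different, previously valid pointer and masked the failure.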
+30 -4
drivers/spi/spi-rspi.c
··· 598 598 599 599 ret = wait_event_interruptible_timeout(rspi->wait, 600 600 rspi->dma_callbacked, HZ); 601 - if (ret > 0 && rspi->dma_callbacked) 601 + if (ret > 0 && rspi->dma_callbacked) { 602 602 ret = 0; 603 - else if (!ret) { 604 - dev_err(&rspi->master->dev, "DMA timeout\n"); 605 - ret = -ETIMEDOUT; 603 + } else { 604 + if (!ret) { 605 + dev_err(&rspi->master->dev, "DMA timeout\n"); 606 + ret = -ETIMEDOUT; 607 + } 606 608 if (tx) 607 609 dmaengine_terminate_all(rspi->master->dma_tx); 608 610 if (rx) ··· 1352 1350 1353 1351 MODULE_DEVICE_TABLE(platform, spi_driver_ids); 1354 1352 1353 + #ifdef CONFIG_PM_SLEEP 1354 + static int rspi_suspend(struct device *dev) 1355 + { 1356 + struct platform_device *pdev = to_platform_device(dev); 1357 + struct rspi_data *rspi = platform_get_drvdata(pdev); 1358 + 1359 + return spi_master_suspend(rspi->master); 1360 + } 1361 + 1362 + static int rspi_resume(struct device *dev) 1363 + { 1364 + struct platform_device *pdev = to_platform_device(dev); 1365 + struct rspi_data *rspi = platform_get_drvdata(pdev); 1366 + 1367 + return spi_master_resume(rspi->master); 1368 + } 1369 + 1370 + static SIMPLE_DEV_PM_OPS(rspi_pm_ops, rspi_suspend, rspi_resume); 1371 + #define DEV_PM_OPS &rspi_pm_ops 1372 + #else 1373 + #define DEV_PM_OPS NULL 1374 + #endif /* CONFIG_PM_SLEEP */ 1375 + 1355 1376 static struct platform_driver rspi_driver = { 1356 1377 .probe = rspi_probe, 1357 1378 .remove = rspi_remove, 1358 1379 .id_table = spi_driver_ids, 1359 1380 .driver = { 1360 1381 .name = "renesas_spi", 1382 + .pm = DEV_PM_OPS, 1361 1383 .of_match_table = of_match_ptr(rspi_of_match), 1362 1384 }, 1363 1385 };
+27 -1
drivers/spi/spi-sh-msiof.c
··· 397 397 398 398 static void sh_msiof_reset_str(struct sh_msiof_spi_priv *p) 399 399 { 400 - sh_msiof_write(p, STR, sh_msiof_read(p, STR)); 400 + sh_msiof_write(p, STR, 401 + sh_msiof_read(p, STR) & ~(STR_TDREQ | STR_RDREQ)); 401 402 } 402 403 403 404 static void sh_msiof_spi_write_fifo_8(struct sh_msiof_spi_priv *p, ··· 1427 1426 }; 1428 1427 MODULE_DEVICE_TABLE(platform, spi_driver_ids); 1429 1428 1429 + #ifdef CONFIG_PM_SLEEP 1430 + static int sh_msiof_spi_suspend(struct device *dev) 1431 + { 1432 + struct platform_device *pdev = to_platform_device(dev); 1433 + struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev); 1434 + 1435 + return spi_master_suspend(p->master); 1436 + } 1437 + 1438 + static int sh_msiof_spi_resume(struct device *dev) 1439 + { 1440 + struct platform_device *pdev = to_platform_device(dev); 1441 + struct sh_msiof_spi_priv *p = platform_get_drvdata(pdev); 1442 + 1443 + return spi_master_resume(p->master); 1444 + } 1445 + 1446 + static SIMPLE_DEV_PM_OPS(sh_msiof_spi_pm_ops, sh_msiof_spi_suspend, 1447 + sh_msiof_spi_resume); 1448 + #define DEV_PM_OPS &sh_msiof_spi_pm_ops 1449 + #else 1450 + #define DEV_PM_OPS NULL 1451 + #endif /* CONFIG_PM_SLEEP */ 1452 + 1430 1453 static struct platform_driver sh_msiof_spi_drv = { 1431 1454 .probe = sh_msiof_spi_probe, 1432 1455 .remove = sh_msiof_spi_remove, 1433 1456 .id_table = spi_driver_ids, 1434 1457 .driver = { 1435 1458 .name = "spi_sh_msiof", 1459 + .pm = DEV_PM_OPS, 1436 1460 .of_match_table = of_match_ptr(sh_msiof_match), 1437 1461 }, 1438 1462 };
+23 -8
drivers/spi/spi-tegra20-slink.c
··· 1063 1063 goto exit_free_master; 1064 1064 } 1065 1065 1066 + /* disabled clock may cause interrupt storm upon request */ 1067 + tspi->clk = devm_clk_get(&pdev->dev, NULL); 1068 + if (IS_ERR(tspi->clk)) { 1069 + ret = PTR_ERR(tspi->clk); 1070 + dev_err(&pdev->dev, "Can not get clock %d\n", ret); 1071 + goto exit_free_master; 1072 + } 1073 + ret = clk_prepare(tspi->clk); 1074 + if (ret < 0) { 1075 + dev_err(&pdev->dev, "Clock prepare failed %d\n", ret); 1076 + goto exit_free_master; 1077 + } 1078 + ret = clk_enable(tspi->clk); 1079 + if (ret < 0) { 1080 + dev_err(&pdev->dev, "Clock enable failed %d\n", ret); 1081 + goto exit_free_master; 1082 + } 1083 + 1066 1084 spi_irq = platform_get_irq(pdev, 0); 1067 1085 tspi->irq = spi_irq; 1068 1086 ret = request_threaded_irq(tspi->irq, tegra_slink_isr, ··· 1089 1071 if (ret < 0) { 1090 1072 dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n", 1091 1073 tspi->irq); 1092 - goto exit_free_master; 1093 - } 1094 - 1095 - tspi->clk = devm_clk_get(&pdev->dev, NULL); 1096 - if (IS_ERR(tspi->clk)) { 1097 - dev_err(&pdev->dev, "can not get clock\n"); 1098 - ret = PTR_ERR(tspi->clk); 1099 - goto exit_free_irq; 1074 + goto exit_clk_disable; 1100 1075 } 1101 1076 1102 1077 tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi"); ··· 1149 1138 tegra_slink_deinit_dma_param(tspi, true); 1150 1139 exit_free_irq: 1151 1140 free_irq(spi_irq, tspi); 1141 + exit_clk_disable: 1142 + clk_disable(tspi->clk); 1152 1143 exit_free_master: 1153 1144 spi_master_put(master); 1154 1145 return ret; ··· 1162 1149 struct tegra_slink_data *tspi = spi_master_get_devdata(master); 1163 1150 1164 1151 free_irq(tspi->irq, tspi); 1152 + 1153 + clk_disable(tspi->clk); 1165 1154 1166 1155 if (tspi->tx_dma_chan) 1167 1156 tegra_slink_deinit_dma_param(tspi, false);
+11 -2
drivers/spi/spi.c
··· 2143 2143 */ 2144 2144 if (ctlr->num_chipselect == 0) 2145 2145 return -EINVAL; 2146 - /* allocate dynamic bus number using Linux idr */ 2147 - if ((ctlr->bus_num < 0) && ctlr->dev.of_node) { 2146 + if (ctlr->bus_num >= 0) { 2147 + /* devices with a fixed bus num must check-in with the num */ 2148 + mutex_lock(&board_lock); 2149 + id = idr_alloc(&spi_master_idr, ctlr, ctlr->bus_num, 2150 + ctlr->bus_num + 1, GFP_KERNEL); 2151 + mutex_unlock(&board_lock); 2152 + if (WARN(id < 0, "couldn't get idr")) 2153 + return id == -ENOSPC ? -EBUSY : id; 2154 + ctlr->bus_num = id; 2155 + } else if (ctlr->dev.of_node) { 2156 + /* allocate dynamic bus number using Linux idr */ 2148 2157 id = of_alias_get_id(ctlr->dev.of_node, "spi"); 2149 2158 if (id >= 0) { 2150 2159 ctlr->bus_num = id;
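The spi.c change makes controllers with a fixed bus_num register that exact number through the idr, converting idr_alloc()'s -ENOSPC (number already claimed) into -EBUSY for the caller. A toy model of that claim-or-busy semantics (a flat table standing in for the idr; bounds and error codes are illustrative):

```c
#include <errno.h>
#include <stddef.h>

#define NBUS 8

static void *bus_table[NBUS];

/* Claim a fixed bus number; a second claim of the same number fails the
 * way the patch maps idr_alloc()'s -ENOSPC onto -EBUSY. */
static int claim_bus_num(int num, void *ctlr)
{
	if (num < 0 || num >= NBUS)
		return -EINVAL;
	if (bus_table[num])
		return -EBUSY;
	bus_table[num] = ctlr;
	return num;
}
```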
-6
drivers/staging/media/mt9t031/Kconfig
··· 1 - config SOC_CAMERA_IMX074 2 - tristate "imx074 support (DEPRECATED)" 3 - depends on SOC_CAMERA && I2C 4 - help 5 - This driver supports IMX074 cameras from Sony 6 - 7 1 config SOC_CAMERA_MT9T031 8 2 tristate "mt9t031 support (DEPRECATED)" 9 3 depends on SOC_CAMERA && I2C
+14 -8
drivers/target/iscsi/iscsi_target.c
··· 1416 1416 1417 1417 sg_init_table(sg, ARRAY_SIZE(sg)); 1418 1418 sg_set_buf(sg, buf, payload_length); 1419 - sg_set_buf(sg + 1, pad_bytes, padding); 1419 + if (padding) 1420 + sg_set_buf(sg + 1, pad_bytes, padding); 1420 1421 1421 1422 ahash_request_set_crypt(hash, sg, data_crc, payload_length + padding); 1422 1423 ··· 3911 3910 static void iscsit_get_rx_pdu(struct iscsi_conn *conn) 3912 3911 { 3913 3912 int ret; 3914 - u8 buffer[ISCSI_HDR_LEN], opcode; 3913 + u8 *buffer, opcode; 3915 3914 u32 checksum = 0, digest = 0; 3916 3915 struct kvec iov; 3916 + 3917 + buffer = kcalloc(ISCSI_HDR_LEN, sizeof(*buffer), GFP_KERNEL); 3918 + if (!buffer) 3919 + return; 3917 3920 3918 3921 while (!kthread_should_stop()) { 3919 3922 /* ··· 3926 3921 */ 3927 3922 iscsit_thread_check_cpumask(conn, current, 0); 3928 3923 3929 - memset(buffer, 0, ISCSI_HDR_LEN); 3930 3924 memset(&iov, 0, sizeof(struct kvec)); 3931 3925 3932 3926 iov.iov_base = buffer; ··· 3934 3930 ret = rx_data(conn, &iov, 1, ISCSI_HDR_LEN); 3935 3931 if (ret != ISCSI_HDR_LEN) { 3936 3932 iscsit_rx_thread_wait_for_tcp(conn); 3937 - return; 3933 + break; 3938 3934 } 3939 3935 3940 3936 if (conn->conn_ops->HeaderDigest) { ··· 3944 3940 ret = rx_data(conn, &iov, 1, ISCSI_CRC_LEN); 3945 3941 if (ret != ISCSI_CRC_LEN) { 3946 3942 iscsit_rx_thread_wait_for_tcp(conn); 3947 - return; 3943 + break; 3948 3944 } 3949 3945 3950 3946 iscsit_do_crypto_hash_buf(conn->conn_rx_hash, buffer, ··· 3968 3964 } 3969 3965 3970 3966 if (conn->conn_state == TARG_CONN_STATE_IN_LOGOUT) 3971 - return; 3967 + break; 3972 3968 3973 3969 opcode = buffer[0] & ISCSI_OPCODE_MASK; 3974 3970 ··· 3979 3975 " while in Discovery Session, rejecting.\n", opcode); 3980 3976 iscsit_add_reject(conn, ISCSI_REASON_PROTOCOL_ERROR, 3981 3977 buffer); 3982 - return; 3978 + break; 3983 3979 } 3984 3980 3985 3981 ret = iscsi_target_rx_opcode(conn, buffer); 3986 3982 if (ret < 0) 3987 - return; 3983 + break; 3988 3984 } 3985 + 3986 + kfree(buffer); 3989 3987 } 3990 
3988 3991 3989 int iscsi_target_rx_thread(void *arg)
+17 -28
drivers/target/iscsi/iscsi_target_auth.c
··· 26 26 #include "iscsi_target_nego.h" 27 27 #include "iscsi_target_auth.h" 28 28 29 - static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len) 30 - { 31 - int j = DIV_ROUND_UP(len, 2), rc; 32 - 33 - rc = hex2bin(dst, src, j); 34 - if (rc < 0) 35 - pr_debug("CHAP string contains non hex digit symbols\n"); 36 - 37 - dst[j] = '\0'; 38 - return j; 39 - } 40 - 41 - static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len) 42 - { 43 - int i; 44 - 45 - for (i = 0; i < src_len; i++) { 46 - sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff); 47 - } 48 - } 49 - 50 29 static int chap_gen_challenge( 51 30 struct iscsi_conn *conn, 52 31 int caller, ··· 41 62 ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH); 42 63 if (unlikely(ret)) 43 64 return ret; 44 - chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge, 65 + bin2hex(challenge_asciihex, chap->challenge, 45 66 CHAP_CHALLENGE_LENGTH); 46 67 /* 47 68 * Set CHAP_C, and copy the generated challenge into c_str. 
··· 227 248 pr_err("Could not find CHAP_R.\n"); 228 249 goto out; 229 250 } 251 + if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) { 252 + pr_err("Malformed CHAP_R\n"); 253 + goto out; 254 + } 255 + if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) { 256 + pr_err("Malformed CHAP_R\n"); 257 + goto out; 258 + } 230 259 231 260 pr_debug("[server] Got CHAP_R=%s\n", chap_r); 232 - chap_string_to_hex(client_digest, chap_r, strlen(chap_r)); 233 261 234 262 tfm = crypto_alloc_shash("md5", 0, 0); 235 263 if (IS_ERR(tfm)) { ··· 280 294 goto out; 281 295 } 282 296 283 - chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE); 297 + bin2hex(response, server_digest, MD5_SIGNATURE_SIZE); 284 298 pr_debug("[server] MD5 Server Digest: %s\n", response); 285 299 286 300 if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) { ··· 335 349 pr_err("Could not find CHAP_C.\n"); 336 350 goto out; 337 351 } 338 - pr_debug("[server] Got CHAP_C=%s\n", challenge); 339 - challenge_len = chap_string_to_hex(challenge_binhex, challenge, 340 - strlen(challenge)); 352 + challenge_len = DIV_ROUND_UP(strlen(challenge), 2); 341 353 if (!challenge_len) { 342 354 pr_err("Unable to convert incoming challenge\n"); 343 355 goto out; ··· 344 360 pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n"); 345 361 goto out; 346 362 } 363 + if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) { 364 + pr_err("Malformed CHAP_C\n"); 365 + goto out; 366 + } 367 + pr_debug("[server] Got CHAP_C=%s\n", challenge); 347 368 /* 348 369 * During mutual authentication, the CHAP_C generated by the 349 370 * initiator must not match the original CHAP_C generated by ··· 402 413 /* 403 414 * Convert response from binary hex to ascii hext. 
404 415 */ 405 - chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE); 416 + bin2hex(response, digest, MD5_SIGNATURE_SIZE); 406 417 *nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s", 407 418 response); 408 419 *nr_out_len += 1;
+7 -3
drivers/tty/serial/cpm_uart/cpm_uart_core.c
··· 1054 1054 /* Get the address of the host memory buffer. 1055 1055 */ 1056 1056 bdp = pinfo->rx_cur; 1057 - while (bdp->cbd_sc & BD_SC_EMPTY) 1058 - ; 1057 + if (bdp->cbd_sc & BD_SC_EMPTY) 1058 + return NO_POLL_CHAR; 1059 1059 1060 1060 /* If the buffer address is in the CPM DPRAM, don't 1061 1061 * convert it. ··· 1090 1090 poll_chars = 0; 1091 1091 } 1092 1092 if (poll_chars <= 0) { 1093 - poll_chars = poll_wait_key(poll_buf, pinfo); 1093 + int ret = poll_wait_key(poll_buf, pinfo); 1094 + 1095 + if (ret == NO_POLL_CHAR) 1096 + return ret; 1097 + poll_chars = ret; 1094 1098 pollp = poll_buf; 1095 1099 } 1096 1100 poll_chars--;
+2 -1
drivers/tty/serial/fsl_lpuart.c
··· 979 979 struct circ_buf *ring = &sport->rx_ring; 980 980 int ret, nent; 981 981 int bits, baud; 982 - struct tty_struct *tty = tty_port_tty_get(&sport->port.state->port); 982 + struct tty_port *port = &sport->port.state->port; 983 + struct tty_struct *tty = port->tty; 983 984 struct ktermios *termios = &tty->termios; 984 985 985 986 baud = tty_get_baud_rate(tty);
+8
drivers/tty/serial/imx.c
··· 2351 2351 ret); 2352 2352 return ret; 2353 2353 } 2354 + 2355 + ret = devm_request_irq(&pdev->dev, rtsirq, imx_uart_rtsint, 0, 2356 + dev_name(&pdev->dev), sport); 2357 + if (ret) { 2358 + dev_err(&pdev->dev, "failed to request rts irq: %d\n", 2359 + ret); 2360 + return ret; 2361 + } 2354 2362 } else { 2355 2363 ret = devm_request_irq(&pdev->dev, rxirq, imx_uart_int, 0, 2356 2364 dev_name(&pdev->dev), sport);
+1
drivers/tty/serial/mvebu-uart.c
··· 511 511 termios->c_iflag |= old->c_iflag & ~(INPCK | IGNPAR); 512 512 termios->c_cflag &= CREAD | CBAUD; 513 513 termios->c_cflag |= old->c_cflag & ~(CREAD | CBAUD); 514 + termios->c_cflag |= CS8; 514 515 } 515 516 516 517 spin_unlock_irqrestore(&port->lock, flags);
+8 -3
drivers/tty/tty_io.c
··· 1255 1255 static int tty_reopen(struct tty_struct *tty) 1256 1256 { 1257 1257 struct tty_driver *driver = tty->driver; 1258 + int retval; 1258 1259 1259 1260 if (driver->type == TTY_DRIVER_TYPE_PTY && 1260 1261 driver->subtype == PTY_TYPE_MASTER) ··· 1269 1268 1270 1269 tty->count++; 1271 1270 1272 - if (!tty->ldisc) 1273 - return tty_ldisc_reinit(tty, tty->termios.c_line); 1271 + if (tty->ldisc) 1272 + return 0; 1274 1273 1275 - return 0; 1274 + retval = tty_ldisc_reinit(tty, tty->termios.c_line); 1275 + if (retval) 1276 + tty->count--; 1277 + 1278 + return retval; 1276 1279 } 1277 1280 1278 1281 /**
+4
drivers/tty/vt/vt_ioctl.c
··· 32 32 #include <asm/io.h> 33 33 #include <linux/uaccess.h> 34 34 35 + #include <linux/nospec.h> 36 + 35 37 #include <linux/kbd_kern.h> 36 38 #include <linux/vt_kern.h> 37 39 #include <linux/kbd_diacr.h> ··· 702 700 if (vsa.console == 0 || vsa.console > MAX_NR_CONSOLES) 703 701 ret = -ENXIO; 704 702 else { 703 + vsa.console = array_index_nospec(vsa.console, 704 + MAX_NR_CONSOLES + 1); 705 705 vsa.console--; 706 706 console_lock(); 707 707 ret = vc_allocate(vsa.console);
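The vt_ioctl hunk applies the Spectre-v1 hardening helper: after the bounds check, array_index_nospec() clamps the index so a mispredicted branch cannot be used to read out of bounds speculatively. A rough model of the masking idea (the real helper uses arch-specific code so the compiler cannot optimise the mask back into a branch):

```c
#include <stddef.h>

/* All-ones mask when index < size, all-zeroes otherwise. */
static size_t index_mask_nospec(size_t index, size_t size)
{
	return (size_t)0 - (size_t)(index < size);
}

/* Clamp: an in-range index passes through unchanged, an out-of-range
 * one becomes 0, so even speculative execution stays inside the array. */
static size_t index_nospec(size_t index, size_t size)
{
	return index & index_mask_nospec(index, size);
}
```

In the patch the array size is `MAX_NR_CONSOLES + 1` because `vsa.console` is validated as `1..MAX_NR_CONSOLES` before being decremented.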
+1 -1
drivers/usb/class/cdc-wdm.c
··· 460 460 461 461 set_bit(WDM_RESPONDING, &desc->flags); 462 462 spin_unlock_irq(&desc->iuspin); 463 - rv = usb_submit_urb(desc->response, GFP_ATOMIC); 463 + rv = usb_submit_urb(desc->response, GFP_KERNEL); 464 464 spin_lock_irq(&desc->iuspin); 465 465 if (rv) { 466 466 dev_err(&desc->intf->dev,
+12 -3
drivers/usb/common/roles.c
··· 109 109 */ 110 110 struct usb_role_switch *usb_role_switch_get(struct device *dev) 111 111 { 112 - return device_connection_find_match(dev, "usb-role-switch", NULL, 113 - usb_role_switch_match); 112 + struct usb_role_switch *sw; 113 + 114 + sw = device_connection_find_match(dev, "usb-role-switch", NULL, 115 + usb_role_switch_match); 116 + 117 + if (!IS_ERR_OR_NULL(sw)) 118 + WARN_ON(!try_module_get(sw->dev.parent->driver->owner)); 119 + 120 + return sw; 114 121 } 115 122 EXPORT_SYMBOL_GPL(usb_role_switch_get); 116 123 ··· 129 122 */ 130 123 void usb_role_switch_put(struct usb_role_switch *sw) 131 124 { 132 - if (!IS_ERR_OR_NULL(sw)) 125 + if (!IS_ERR_OR_NULL(sw)) { 133 126 put_device(&sw->dev); 127 + module_put(sw->dev.parent->driver->owner); 128 + } 134 129 } 135 130 EXPORT_SYMBOL_GPL(usb_role_switch_put); 136 131
+21 -3
drivers/usb/core/devio.c
··· 1434 1434 struct async *as = NULL; 1435 1435 struct usb_ctrlrequest *dr = NULL; 1436 1436 unsigned int u, totlen, isofrmlen; 1437 - int i, ret, is_in, num_sgs = 0, ifnum = -1; 1437 + int i, ret, num_sgs = 0, ifnum = -1; 1438 1438 int number_of_packets = 0; 1439 1439 unsigned int stream_id = 0; 1440 1440 void *buf; 1441 + bool is_in; 1442 + bool allow_short = false; 1443 + bool allow_zero = false; 1441 1444 unsigned long mask = USBDEVFS_URB_SHORT_NOT_OK | 1442 1445 USBDEVFS_URB_BULK_CONTINUATION | 1443 1446 USBDEVFS_URB_NO_FSBR | ··· 1474 1471 u = 0; 1475 1472 switch (uurb->type) { 1476 1473 case USBDEVFS_URB_TYPE_CONTROL: 1474 + if (is_in) 1475 + allow_short = true; 1477 1476 if (!usb_endpoint_xfer_control(&ep->desc)) 1478 1477 return -EINVAL; 1479 1478 /* min 8 byte setup packet */ ··· 1516 1511 break; 1517 1512 1518 1513 case USBDEVFS_URB_TYPE_BULK: 1514 + if (!is_in) 1515 + allow_zero = true; 1516 + else 1517 + allow_short = true; 1519 1518 switch (usb_endpoint_type(&ep->desc)) { 1520 1519 case USB_ENDPOINT_XFER_CONTROL: 1521 1520 case USB_ENDPOINT_XFER_ISOC: ··· 1540 1531 if (!usb_endpoint_xfer_int(&ep->desc)) 1541 1532 return -EINVAL; 1542 1533 interrupt_urb: 1534 + if (!is_in) 1535 + allow_zero = true; 1536 + else 1537 + allow_short = true; 1543 1538 break; 1544 1539 1545 1540 case USBDEVFS_URB_TYPE_ISO: ··· 1689 1676 u = (is_in ? 
URB_DIR_IN : URB_DIR_OUT); 1690 1677 if (uurb->flags & USBDEVFS_URB_ISO_ASAP) 1691 1678 u |= URB_ISO_ASAP; 1692 - if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in) 1679 + if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK) 1693 1680 u |= URB_SHORT_NOT_OK; 1694 - if (uurb->flags & USBDEVFS_URB_ZERO_PACKET) 1681 + if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET) 1695 1682 u |= URB_ZERO_PACKET; 1696 1683 if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT) 1697 1684 u |= URB_NO_INTERRUPT; 1698 1685 as->urb->transfer_flags = u; 1686 + 1687 + if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK) 1688 + dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n"); 1689 + if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET) 1690 + dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n"); 1699 1691 1700 1692 as->urb->transfer_buffer_length = uurb->buffer_length; 1701 1693 as->urb->setup_packet = (unsigned char *)dr;
+14 -14
drivers/usb/core/driver.c
··· 512 512 struct device *dev; 513 513 struct usb_device *udev; 514 514 int retval = 0; 515 - int lpm_disable_error = -ENODEV; 516 515 517 516 if (!iface) 518 517 return -ENODEV; ··· 532 533 533 534 iface->condition = USB_INTERFACE_BOUND; 534 535 535 - /* See the comment about disabling LPM in usb_probe_interface(). */ 536 - if (driver->disable_hub_initiated_lpm) { 537 - lpm_disable_error = usb_unlocked_disable_lpm(udev); 538 - if (lpm_disable_error) { 539 - dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n", 540 - __func__, driver->name); 541 - return -ENOMEM; 542 - } 543 - } 544 - 545 536 /* Claimed interfaces are initially inactive (suspended) and 546 537 * runtime-PM-enabled, but only if the driver has autosuspend 547 538 * support. Otherwise they are marked active, to prevent the ··· 550 561 if (device_is_registered(dev)) 551 562 retval = device_bind_driver(dev); 552 563 553 - /* Attempt to re-enable USB3 LPM, if the disable was successful. */ 554 - if (!lpm_disable_error) 555 - usb_unlocked_enable_lpm(udev); 564 + if (retval) { 565 + dev->driver = NULL; 566 + usb_set_intfdata(iface, NULL); 567 + iface->needs_remote_wakeup = 0; 568 + iface->condition = USB_INTERFACE_UNBOUND; 569 + 570 + /* 571 + * Unbound interfaces are always runtime-PM-disabled 572 + * and runtime-PM-suspended 573 + */ 574 + if (driver->supports_autosuspend) 575 + pm_runtime_disable(dev); 576 + pm_runtime_set_suspended(dev); 577 + } 556 578 557 579 return retval; 558 580 }
+2 -1
drivers/usb/core/quirks.c
··· 58 58 quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry), 59 59 GFP_KERNEL); 60 60 if (!quirk_list) { 61 + quirk_count = 0; 61 62 mutex_unlock(&quirk_mutex); 62 63 return -ENOMEM; 63 64 } ··· 155 154 .string = quirks_param, 156 155 }; 157 156 158 - module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644); 157 + device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644); 159 158 MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks"); 160 159 161 160 /* Lists of quirky USB devices, split in device quirks and interface quirks.
+2
drivers/usb/core/usb.c
··· 228 228 struct usb_interface_cache *intf_cache = NULL; 229 229 int i; 230 230 231 + if (!config) 232 + return NULL; 231 233 for (i = 0; i < config->desc.bNumInterfaces; i++) { 232 234 if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber 233 235 == iface_num) {
+1 -11
drivers/usb/musb/musb_dsps.c
··· 658 658 return controller; 659 659 } 660 660 661 - static void dsps_dma_controller_destroy(struct dma_controller *c) 662 - { 663 - struct musb *musb = c->musb; 664 - struct dsps_glue *glue = dev_get_drvdata(musb->controller->parent); 665 - void __iomem *usbss_base = glue->usbss_base; 666 - 667 - musb_writel(usbss_base, USBSS_IRQ_CLEARR, USBSS_IRQ_PD_COMP); 668 - cppi41_dma_controller_destroy(c); 669 - } 670 - 671 661 #ifdef CONFIG_PM_SLEEP 672 662 static void dsps_dma_controller_suspend(struct dsps_glue *glue) 673 663 { ··· 687 697 688 698 #ifdef CONFIG_USB_TI_CPPI41_DMA 689 699 .dma_init = dsps_dma_controller_create, 690 - .dma_exit = dsps_dma_controller_destroy, 700 + .dma_exit = cppi41_dma_controller_destroy, 691 701 #endif 692 702 .enable = dsps_musb_enable, 693 703 .disable = dsps_musb_disable,
+13 -4
drivers/usb/typec/mux.c
··· 9 9 10 10 #include <linux/device.h> 11 11 #include <linux/list.h> 12 + #include <linux/module.h> 12 13 #include <linux/mutex.h> 13 14 #include <linux/usb/typec_mux.h> 14 15 ··· 50 49 mutex_lock(&switch_lock); 51 50 sw = device_connection_find_match(dev, "typec-switch", NULL, 52 51 typec_switch_match); 53 - if (!IS_ERR_OR_NULL(sw)) 52 + if (!IS_ERR_OR_NULL(sw)) { 53 + WARN_ON(!try_module_get(sw->dev->driver->owner)); 54 54 get_device(sw->dev); 55 + } 55 56 mutex_unlock(&switch_lock); 56 57 57 58 return sw; ··· 68 65 */ 69 66 void typec_switch_put(struct typec_switch *sw) 70 67 { 71 - if (!IS_ERR_OR_NULL(sw)) 68 + if (!IS_ERR_OR_NULL(sw)) { 69 + module_put(sw->dev->driver->owner); 72 70 put_device(sw->dev); 71 + } 73 72 } 74 73 EXPORT_SYMBOL_GPL(typec_switch_put); 75 74 ··· 141 136 142 137 mutex_lock(&mux_lock); 143 138 mux = device_connection_find_match(dev, name, NULL, typec_mux_match); 144 - if (!IS_ERR_OR_NULL(mux)) 139 + if (!IS_ERR_OR_NULL(mux)) { 140 + WARN_ON(!try_module_get(mux->dev->driver->owner)); 145 141 get_device(mux->dev); 142 + } 146 143 mutex_unlock(&mux_lock); 147 144 148 145 return mux; ··· 159 152 */ 160 153 void typec_mux_put(struct typec_mux *mux) 161 154 { 162 - if (!IS_ERR_OR_NULL(mux)) 155 + if (!IS_ERR_OR_NULL(mux)) { 156 + module_put(mux->dev->driver->owner); 163 157 put_device(mux->dev); 158 + } 164 159 } 165 160 EXPORT_SYMBOL_GPL(typec_mux_put); 166 161
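The pairing this hunk adds — taking a module reference together with the device reference in the get path, and dropping both in the put path — can be imitated in plain C. The `provider` type and its counters below are hypothetical stand-ins for the kernel's module/device refcounting, not the typec API itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace analog of the patch's pattern: pin the provider's module
 * (as try_module_get() would) alongside its device (get_device()),
 * and release both in the put path. */
struct provider {
	int module_refs;
	int device_refs;
	bool unloading;
};

static bool provider_get(struct provider *p)
{
	if (p->unloading)
		return false;	/* try_module_get() fails while unloading */
	p->module_refs++;
	p->device_refs++;
	return true;
}

static void provider_put(struct provider *p)
{
	p->module_refs--;
	p->device_refs--;
}
```

Without the module reference, the device reference alone would not stop the driver's module from being unloaded while a consumer still holds the switch or mux.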
+21 -6
drivers/xen/grant-table.c
··· 1040 1040 return ret; 1041 1041 1042 1042 for (i = 0; i < count; i++) { 1043 - /* Retry eagain maps */ 1044 - if (map_ops[i].status == GNTST_eagain) 1045 - gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i, 1046 - &map_ops[i].status, __func__); 1047 - 1048 - if (map_ops[i].status == GNTST_okay) { 1043 + switch (map_ops[i].status) { 1044 + case GNTST_okay: 1045 + { 1049 1046 struct xen_page_foreign *foreign; 1050 1047 1051 1048 SetPageForeign(pages[i]); 1052 1049 foreign = xen_page_foreign(pages[i]); 1053 1050 foreign->domid = map_ops[i].dom; 1054 1051 foreign->gref = map_ops[i].ref; 1052 + break; 1053 + } 1054 + 1055 + case GNTST_no_device_space: 1056 + pr_warn_ratelimited("maptrack limit reached, can't map all guest pages\n"); 1057 + break; 1058 + 1059 + case GNTST_eagain: 1060 + /* Retry eagain maps */ 1061 + gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, 1062 + map_ops + i, 1063 + &map_ops[i].status, __func__); 1064 + /* Test status in next loop iteration. */ 1065 + i--; 1066 + break; 1067 + 1068 + default: 1069 + break; 1055 1070 } 1056 1071 } 1057 1072
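The converted loop handles GNTST_eagain by retrying the slot and stepping the index back, so the refreshed status is examined on the next pass. A minimal userspace sketch of that loop shape (the `map_status` values and `retry_slot()` helper are made up for illustration; the real code retries the map against the hypervisor):

```c
#include <assert.h>

/* Hypothetical status codes standing in for GNTST_* values. */
enum map_status { ST_OKAY = 0, ST_NO_SPACE = 1, ST_EAGAIN = 2 };

/* Pretend retry: flip an EAGAIN slot to OKAY, as a successful
 * hypervisor retry would. */
static void retry_slot(enum map_status *st)
{
	*st = ST_OKAY;
}

/* Walk the array the way the patched loop does: on EAGAIN, retry the
 * slot and step the index back so the new status is tested next pass. */
static int count_mapped(enum map_status *st, int count)
{
	int mapped = 0;

	for (int i = 0; i < count; i++) {
		switch (st[i]) {
		case ST_OKAY:
			mapped++;
			break;
		case ST_EAGAIN:
			retry_slot(&st[i]);
			i--;	/* test status in next loop iteration */
			break;
		default:
			break;
		}
	}
	return mapped;
}
```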
+3 -11
fs/dax.c
··· 447 447 xa_unlock_irq(&mapping->i_pages); 448 448 break; 449 449 } else if (IS_ERR(entry)) { 450 + xa_unlock_irq(&mapping->i_pages); 450 451 WARN_ON_ONCE(PTR_ERR(entry) != -EAGAIN); 451 452 continue; 452 453 } ··· 1121 1120 { 1122 1121 struct inode *inode = mapping->host; 1123 1122 unsigned long vaddr = vmf->address; 1124 - vm_fault_t ret = VM_FAULT_NOPAGE; 1125 - struct page *zero_page; 1126 - pfn_t pfn; 1123 + pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr)); 1124 + vm_fault_t ret; 1127 1125 1128 - zero_page = ZERO_PAGE(0); 1129 - if (unlikely(!zero_page)) { 1130 - ret = VM_FAULT_OOM; 1131 - goto out; 1132 - } 1133 - 1134 - pfn = page_to_pfn_t(zero_page); 1135 1126 dax_insert_mapping_entry(mapping, vmf, entry, pfn, RADIX_DAX_ZERO_PAGE, 1136 1127 false); 1137 1128 ret = vmf_insert_mixed(vmf->vma, vaddr, pfn); 1138 - out: 1139 1129 trace_dax_load_hole(inode, vmf, ret); 1140 1130 return ret; 1141 1131 }
+1 -1
fs/ext2/inode.c
··· 1448 1448 } 1449 1449 inode->i_blocks = le32_to_cpu(raw_inode->i_blocks); 1450 1450 ei->i_flags = le32_to_cpu(raw_inode->i_flags); 1451 + ext2_set_inode_flags(inode); 1451 1452 ei->i_faddr = le32_to_cpu(raw_inode->i_faddr); 1452 1453 ei->i_frag_no = raw_inode->i_frag; 1453 1454 ei->i_frag_size = raw_inode->i_fsize; ··· 1518 1517 new_decode_dev(le32_to_cpu(raw_inode->i_block[1]))); 1519 1518 } 1520 1519 brelse (bh); 1521 - ext2_set_inode_flags(inode); 1522 1520 unlock_new_inode(inode); 1523 1521 return inode; 1524 1522
+9 -11
fs/ext4/dir.c
··· 76 76 else if (unlikely(rlen < EXT4_DIR_REC_LEN(de->name_len))) 77 77 error_msg = "rec_len is too small for name_len"; 78 78 else if (unlikely(((char *) de - buf) + rlen > size)) 79 - error_msg = "directory entry across range"; 79 + error_msg = "directory entry overrun"; 80 80 else if (unlikely(le32_to_cpu(de->inode) > 81 81 le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count))) 82 82 error_msg = "inode out of bounds"; ··· 85 85 86 86 if (filp) 87 87 ext4_error_file(filp, function, line, bh->b_blocknr, 88 - "bad entry in directory: %s - offset=%u(%u), " 89 - "inode=%u, rec_len=%d, name_len=%d", 90 - error_msg, (unsigned) (offset % size), 91 - offset, le32_to_cpu(de->inode), 92 - rlen, de->name_len); 88 + "bad entry in directory: %s - offset=%u, " 89 + "inode=%u, rec_len=%d, name_len=%d, size=%d", 90 + error_msg, offset, le32_to_cpu(de->inode), 91 + rlen, de->name_len, size); 93 92 else 94 93 ext4_error_inode(dir, function, line, bh->b_blocknr, 95 - "bad entry in directory: %s - offset=%u(%u), " 96 - "inode=%u, rec_len=%d, name_len=%d", 97 - error_msg, (unsigned) (offset % size), 98 - offset, le32_to_cpu(de->inode), 99 - rlen, de->name_len); 94 + "bad entry in directory: %s - offset=%u, " 95 + "inode=%u, rec_len=%d, name_len=%d, size=%d", 96 + error_msg, offset, le32_to_cpu(de->inode), 97 + rlen, de->name_len, size); 100 98 101 99 return 1; 102 100 }
+17 -3
fs/ext4/ext4.h
··· 43 43 #define __FS_HAS_ENCRYPTION IS_ENABLED(CONFIG_EXT4_FS_ENCRYPTION) 44 44 #include <linux/fscrypt.h> 45 45 46 + #include <linux/compiler.h> 47 + 48 + /* Until this gets included into linux/compiler-gcc.h */ 49 + #ifndef __nonstring 50 + #if defined(GCC_VERSION) && (GCC_VERSION >= 80000) 51 + #define __nonstring __attribute__((nonstring)) 52 + #else 53 + #define __nonstring 54 + #endif 55 + #endif 56 + 46 57 /* 47 58 * The fourth extended filesystem constants/structures 48 59 */ ··· 686 675 /* Max physical block we can address w/o extents */ 687 676 #define EXT4_MAX_BLOCK_FILE_PHYS 0xFFFFFFFF 688 677 678 + /* Max logical block we can support */ 679 + #define EXT4_MAX_LOGICAL_BLOCK 0xFFFFFFFF 680 + 689 681 /* 690 682 * Structure of an inode on the disk 691 683 */ ··· 1240 1226 __le32 s_feature_ro_compat; /* readonly-compatible feature set */ 1241 1227 /*68*/ __u8 s_uuid[16]; /* 128-bit uuid for volume */ 1242 1228 /*78*/ char s_volume_name[16]; /* volume name */ 1243 - /*88*/ char s_last_mounted[64]; /* directory where last mounted */ 1229 + /*88*/ char s_last_mounted[64] __nonstring; /* directory where last mounted */ 1244 1230 /*C8*/ __le32 s_algorithm_usage_bitmap; /* For compression */ 1245 1231 /* 1246 1232 * Performance hints.
Directory preallocation should only ··· 1291 1277 __le32 s_first_error_time; /* first time an error happened */ 1292 1278 __le32 s_first_error_ino; /* inode involved in first error */ 1293 1279 __le64 s_first_error_block; /* block involved of first error */ 1294 - __u8 s_first_error_func[32]; /* function where the error happened */ 1280 + __u8 s_first_error_func[32] __nonstring; /* function where the error happened */ 1295 1281 __le32 s_first_error_line; /* line number where error happened */ 1296 1282 __le32 s_last_error_time; /* most recent time of an error */ 1297 1283 __le32 s_last_error_ino; /* inode involved in last error */ 1298 1284 __le32 s_last_error_line; /* line number where error happened */ 1299 1285 __le64 s_last_error_block; /* block involved of last error */ 1300 - __u8 s_last_error_func[32]; /* function where the error happened */ 1286 + __u8 s_last_error_func[32] __nonstring; /* function where the error happened */ 1301 1287 #define EXT4_S_ERR_END offsetof(struct ext4_super_block, s_mount_opts) 1302 1288 __u8 s_mount_opts[64]; 1303 1289 __le32 s_usr_quota_inum; /* inode for tracking user quota */
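The `__nonstring` fallback added above follows a common version-gated attribute pattern. A self-contained sketch of the same guard, assuming only that `GCC_VERSION` encodes the version as major*10000 the way linux/compiler-gcc.h does (the `sb_like` struct and `set_last_mounted()` helper are illustrative, not kernel code):

```c
#include <assert.h>
#include <string.h>

/* Reconstruct GCC_VERSION as linux/compiler-gcc.h does; on a non-GCC
 * (or pre-8) compiler the attribute simply compiles away. */
#if defined(__GNUC__) && !defined(GCC_VERSION)
#define GCC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)
#endif

#ifndef __nonstring
#if defined(GCC_VERSION) && (GCC_VERSION >= 80000)
#define __nonstring __attribute__((nonstring))
#else
#define __nonstring
#endif
#endif

/* A fixed-size field that is deliberately not NUL-terminated when full,
 * mirroring s_last_mounted in the ext4 superblock. */
struct sb_like {
	char last_mounted[8] __nonstring;
};

/* strncpy() may drop the terminator here by design; the attribute tells
 * GCC 8+ not to warn about the truncation. */
static void set_last_mounted(struct sb_like *sb, const char *path)
{
	strncpy(sb->last_mounted, path, sizeof(sb->last_mounted));
}
```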
+3 -1
fs/ext4/inline.c
··· 1753 1753 { 1754 1754 int err, inline_size; 1755 1755 struct ext4_iloc iloc; 1756 + size_t inline_len; 1756 1757 void *inline_pos; 1757 1758 unsigned int offset; 1758 1759 struct ext4_dir_entry_2 *de; ··· 1781 1780 goto out; 1782 1781 } 1783 1782 1783 + inline_len = ext4_get_inline_size(dir); 1784 1784 offset = EXT4_INLINE_DOTDOT_SIZE; 1785 - while (offset < dir->i_size) { 1785 + while (offset < inline_len) { 1786 1786 de = ext4_get_inline_entry(dir, &iloc, offset, 1787 1787 &inline_pos, &inline_size); 1788 1788 if (ext4_check_dir_entry(dir, NULL, de,
+11 -9
fs/ext4/inode.c
··· 3413 3413 { 3414 3414 struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); 3415 3415 unsigned int blkbits = inode->i_blkbits; 3416 - unsigned long first_block = offset >> blkbits; 3417 - unsigned long last_block = (offset + length - 1) >> blkbits; 3416 + unsigned long first_block, last_block; 3418 3417 struct ext4_map_blocks map; 3419 3418 bool delalloc = false; 3420 3419 int ret; 3421 3420 3421 + if ((offset >> blkbits) > EXT4_MAX_LOGICAL_BLOCK) 3422 + return -EINVAL; 3423 + first_block = offset >> blkbits; 3424 + last_block = min_t(loff_t, (offset + length - 1) >> blkbits, 3425 + EXT4_MAX_LOGICAL_BLOCK); 3422 3426 3423 3427 if (flags & IOMAP_REPORT) { 3424 3428 if (ext4_has_inline_data(inode)) { ··· 3952 3948 .writepages = ext4_dax_writepages, 3953 3949 .direct_IO = noop_direct_IO, 3954 3950 .set_page_dirty = noop_set_page_dirty, 3951 + .bmap = ext4_bmap, 3955 3952 .invalidatepage = noop_invalidatepage, 3956 3953 }; 3957 3954 ··· 4197 4192 return 0; 4198 4193 } 4199 4194 4200 - static void ext4_wait_dax_page(struct ext4_inode_info *ei, bool *did_unlock) 4195 + static void ext4_wait_dax_page(struct ext4_inode_info *ei) 4201 4196 { 4202 - *did_unlock = true; 4203 4197 up_write(&ei->i_mmap_sem); 4204 4198 schedule(); 4205 4199 down_write(&ei->i_mmap_sem); ··· 4208 4204 { 4209 4205 struct ext4_inode_info *ei = EXT4_I(inode); 4210 4206 struct page *page; 4211 - bool retry; 4212 4207 int error; 4213 4208 4214 4209 if (WARN_ON_ONCE(!rwsem_is_locked(&ei->i_mmap_sem))) 4215 4210 return -EINVAL; 4216 4211 4217 4212 do { 4218 - retry = false; 4219 4213 page = dax_layout_busy_page(inode->i_mapping); 4220 4214 if (!page) 4221 4215 return 0; ··· 4221 4219 error = ___wait_var_event(&page->_refcount, 4222 4220 atomic_read(&page->_refcount) == 1, 4223 4221 TASK_INTERRUPTIBLE, 0, 0, 4224 - ext4_wait_dax_page(ei, &retry)); 4225 - } while (error == 0 && retry); 4222 + ext4_wait_dax_page(ei)); 4223 + } while (error == 0); 4226 4224 4227 4225 return error; 4228 4226 } ··· 4897 4895 *
not initialized on a new filesystem. */ 4898 4896 } 4899 4897 ei->i_flags = le32_to_cpu(raw_inode->i_flags); 4898 + ext4_set_inode_flags(inode); 4900 4899 inode->i_blocks = ext4_inode_blocks(raw_inode, ei); 4901 4900 ei->i_file_acl = le32_to_cpu(raw_inode->i_file_acl_lo); 4902 4901 if (ext4_has_feature_64bit(sb)) ··· 5044 5041 goto bad_inode; 5045 5042 } 5046 5043 brelse(iloc.bh); 5047 - ext4_set_inode_flags(inode); 5048 5044 5049 5045 unlock_new_inode(inode); 5050 5046 return inode;
-1
fs/ext4/mmp.c
··· 49 49 */ 50 50 sb_start_write(sb); 51 51 ext4_mmp_csum_set(sb, mmp); 52 - mark_buffer_dirty(bh); 53 52 lock_buffer(bh); 54 53 bh->b_end_io = end_buffer_write_sync; 55 54 get_bh(bh);
+6
fs/ext4/namei.c
··· 3478 3478 int credits; 3479 3479 u8 old_file_type; 3480 3480 3481 + if (new.inode && new.inode->i_nlink == 0) { 3482 + EXT4_ERROR_INODE(new.inode, 3483 + "target of rename is already freed"); 3484 + return -EFSCORRUPTED; 3485 + } 3486 + 3481 3487 if ((ext4_test_inode_flag(new_dir, EXT4_INODE_PROJINHERIT)) && 3482 3488 (!projid_eq(EXT4_I(new_dir)->i_projid, 3483 3489 EXT4_I(old_dentry->d_inode)->i_projid)))
+22 -1
fs/ext4/resize.c
··· 19 19 20 20 int ext4_resize_begin(struct super_block *sb) 21 21 { 22 + struct ext4_sb_info *sbi = EXT4_SB(sb); 22 23 int ret = 0; 23 24 24 25 if (!capable(CAP_SYS_RESOURCE)) ··· 30 29 * because the user tools have no way of handling this. Probably a 31 30 * bad time to do it anyways. 32 31 */ 33 - if (EXT4_SB(sb)->s_sbh->b_blocknr != 32 + if (EXT4_B2C(sbi, sbi->s_sbh->b_blocknr) != 34 33 le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) { 35 34 ext4_warning(sb, "won't resize using backup superblock at %llu", 36 35 (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr); ··· 1985 1984 n_blocks_count_retry = 0; 1986 1985 goto retry; 1987 1986 } 1987 + } 1988 + 1989 + /* 1990 + * Make sure the last group has enough space so that it's 1991 + * guaranteed to have enough space for all metadata blocks 1992 + * that it might need to hold. (We might not need to store 1993 + * the inode table blocks in the last block group, but there 1994 + * will be cases where this might be needed.) 1995 + */ 1996 + if ((ext4_group_first_block_no(sb, n_group) + 1997 + ext4_group_overhead_blocks(sb, n_group) + 2 + 1998 + sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) { 1999 + n_blocks_count = ext4_group_first_block_no(sb, n_group); 2000 + n_group--; 2001 + n_blocks_count_retry = 0; 2002 + if (resize_inode) { 2003 + iput(resize_inode); 2004 + resize_inode = NULL; 2005 + } 2006 + goto retry; 1988 2007 } 1989 2008 1990 2009 /* extend the last group */
+4
fs/ext4/super.c
··· 2145 2145 SEQ_OPTS_PRINT("max_dir_size_kb=%u", sbi->s_max_dir_size_kb); 2146 2146 if (test_opt(sb, DATA_ERR_ABORT)) 2147 2147 SEQ_OPTS_PUTS("data_err=abort"); 2148 + if (DUMMY_ENCRYPTION_ENABLED(sbi)) 2149 + SEQ_OPTS_PUTS("test_dummy_encryption"); 2148 2150 2149 2151 ext4_show_quota_options(seq, sb); 2150 2152 return 0; ··· 4380 4378 block = ext4_count_free_clusters(sb); 4381 4379 ext4_free_blocks_count_set(sbi->s_es, 4382 4380 EXT4_C2B(sbi, block)); 4381 + ext4_superblock_csum_set(sb); 4383 4382 err = percpu_counter_init(&sbi->s_freeclusters_counter, block, 4384 4383 GFP_KERNEL); 4385 4384 if (!err) { 4386 4385 unsigned long freei = ext4_count_free_inodes(sb); 4387 4386 sbi->s_es->s_free_inodes_count = cpu_to_le32(freei); 4387 + ext4_superblock_csum_set(sb); 4388 4388 err = percpu_counter_init(&sbi->s_freeinodes_counter, freei, 4389 4389 GFP_KERNEL); 4390 4390 }
+1
fs/ocfs2/buffer_head_io.c
··· 342 342 * for this bh as it's not marked locally 343 343 * uptodate. */ 344 344 status = -EIO; 345 + clear_buffer_needs_validate(bh); 345 346 put_bh(bh); 346 347 bhs[i] = NULL; 347 348 continue;
+1
fs/proc/kcore.c
··· 464 464 ret = -EFAULT; 465 465 goto out; 466 466 } 467 + m = NULL; /* skip the list anchor */ 467 468 } else if (m->type == KCORE_VMALLOC) { 468 469 vread(buf, (char *)start, tsz); 469 470 /* we have to zero-fill user buffer even if no read */
+6 -1
fs/ubifs/super.c
··· 1912 1912 mutex_unlock(&c->bu_mutex); 1913 1913 } 1914 1914 1915 - ubifs_assert(c, c->lst.taken_empty_lebs > 0); 1915 + if (!c->need_recovery) 1916 + ubifs_assert(c, c->lst.taken_empty_lebs > 0); 1917 + 1916 1918 return 0; 1917 1919 } 1918 1920 ··· 1955 1953 struct ubi_volume_desc *ubi; 1956 1954 int dev, vol; 1957 1955 char *endptr; 1956 + 1957 + if (!name || !*name) 1958 + return ERR_PTR(-EINVAL); 1958 1959 1959 1960 /* First, try to open using the device node path method */ 1960 1961 ubi = ubi_open_volume_path(name, mode);
-24
fs/ubifs/xattr.c
··· 152 152 ui->data_len = size; 153 153 154 154 mutex_lock(&host_ui->ui_mutex); 155 - 156 - if (!host->i_nlink) { 157 - err = -ENOENT; 158 - goto out_noent; 159 - } 160 - 161 155 host->i_ctime = current_time(host); 162 156 host_ui->xattr_cnt += 1; 163 157 host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm)); ··· 184 190 host_ui->xattr_size -= CALC_XATTR_BYTES(size); 185 191 host_ui->xattr_names -= fname_len(nm); 186 192 host_ui->flags &= ~UBIFS_CRYPT_FL; 187 - out_noent: 188 193 mutex_unlock(&host_ui->ui_mutex); 189 194 out_free: 190 195 make_bad_inode(inode); ··· 235 242 mutex_unlock(&ui->ui_mutex); 236 243 237 244 mutex_lock(&host_ui->ui_mutex); 238 - 239 - if (!host->i_nlink) { 240 - err = -ENOENT; 241 - goto out_noent; 242 - } 243 - 244 245 host->i_ctime = current_time(host); 245 246 host_ui->xattr_size -= CALC_XATTR_BYTES(old_size); 246 247 host_ui->xattr_size += CALC_XATTR_BYTES(size); ··· 256 269 out_cancel: 257 270 host_ui->xattr_size -= CALC_XATTR_BYTES(size); 258 271 host_ui->xattr_size += CALC_XATTR_BYTES(old_size); 259 - out_noent: 260 272 mutex_unlock(&host_ui->ui_mutex); 261 273 make_bad_inode(inode); 262 274 out_free: ··· 482 496 return err; 483 497 484 498 mutex_lock(&host_ui->ui_mutex); 485 - 486 - if (!host->i_nlink) { 487 - err = -ENOENT; 488 - goto out_noent; 489 - } 490 - 491 499 host->i_ctime = current_time(host); 492 500 host_ui->xattr_cnt -= 1; 493 501 host_ui->xattr_size -= CALC_DENT_SIZE(fname_len(nm)); ··· 501 521 host_ui->xattr_size += CALC_DENT_SIZE(fname_len(nm)); 502 522 host_ui->xattr_size += CALC_XATTR_BYTES(ui->data_len); 503 523 host_ui->xattr_names += fname_len(nm); 504 - out_noent: 505 524 mutex_unlock(&host_ui->ui_mutex); 506 525 ubifs_release_budget(c, &req); 507 526 make_bad_inode(inode); ··· 539 560 int err; 540 561 541 562 ubifs_assert(c, inode_is_locked(host)); 542 - 543 - if (!host->i_nlink) 544 - return -ENOENT; 545 563 546 564 if (fname_len(&nm) > UBIFS_MAX_NLEN) 547 565 return -ENAMETOOLONG;
+1 -1
include/drm/drm_drv.h
··· 675 675 static inline bool drm_drv_uses_atomic_modeset(struct drm_device *dev) 676 676 { 677 677 return drm_core_check_feature(dev, DRIVER_ATOMIC) || 678 - dev->mode_config.funcs->atomic_commit != NULL; 678 + (dev->mode_config.funcs && dev->mode_config.funcs->atomic_commit != NULL); 679 679 } 680 680 681 681
-1
include/drm/drm_panel.h
··· 89 89 struct drm_device *drm; 90 90 struct drm_connector *connector; 91 91 struct device *dev; 92 - struct device_link *link; 93 92 94 93 const struct drm_panel_funcs *funcs; 95 94
-14
include/linux/compiler-gcc.h
··· 79 79 #define __noretpoline __attribute__((indirect_branch("keep"))) 80 80 #endif 81 81 82 - /* 83 - * it doesn't make sense on ARM (currently the only user of __naked) 84 - * to trace naked functions because then mcount is called without 85 - * stack and frame pointer being set up and there is no chance to 86 - * restore the lr register to the value before mcount was called. 87 - * 88 - * The asm() bodies of naked functions often depend on standard calling 89 - * conventions, therefore they must be noinline and noclone. 90 - * 91 - * GCC 4.[56] currently fail to enforce this, so we must do so ourselves. 92 - * See GCC PR44290. 93 - */ 94 - #define __naked __attribute__((naked)) noinline __noclone notrace 95 - 96 82 #define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__) 97 83 98 84 #define __optimize(level) __attribute__((__optimize__(level)))
+8
include/linux/compiler_types.h
··· 226 226 #define notrace __attribute__((no_instrument_function)) 227 227 #endif 228 228 229 + /* 230 + * it doesn't make sense on ARM (currently the only user of __naked) 231 + * to trace naked functions because then mcount is called without 232 + * stack and frame pointer being set up and there is no chance to 233 + * restore the lr register to the value before mcount was called. 234 + */ 235 + #define __naked __attribute__((naked)) notrace 236 + 229 237 #define __compiler_offsetof(a, b) __builtin_offsetof(a, b) 230 238 231 239 /*
+4 -1
include/linux/genhd.h
··· 83 83 } __attribute__((packed)); 84 84 85 85 struct disk_stats { 86 + u64 nsecs[NR_STAT_GROUPS]; 86 87 unsigned long sectors[NR_STAT_GROUPS]; 87 88 unsigned long ios[NR_STAT_GROUPS]; 88 89 unsigned long merges[NR_STAT_GROUPS]; 89 - unsigned long ticks[NR_STAT_GROUPS]; 90 90 unsigned long io_ticks; 91 91 unsigned long time_in_queue; 92 92 }; ··· 353 353 } 354 354 355 355 #endif /* CONFIG_SMP */ 356 + 357 + #define part_stat_read_msecs(part, which) \ 358 + div_u64(part_stat_read(part, nsecs[which]), NSEC_PER_MSEC) 356 359 357 360 #define part_stat_read_accum(part, field) \ 358 361 (part_stat_read(part, field[STAT_READ]) + \
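With this change the I/O time counters hold nanoseconds and are scaled down on read, as the new `part_stat_read_msecs()` does with `div_u64()`. A userspace sketch of that conversion (the struct and function names here are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL

/* Stand-in for the per-partition counters: times now kept in ns. */
struct disk_stats_like {
	uint64_t nsecs[2];	/* [0] = read, [1] = write */
};

/* Mirror part_stat_read_msecs(): convert the ns counter to ms on read,
 * so userspace-visible units stay milliseconds. */
static uint64_t stat_read_msecs(const struct disk_stats_like *s, int which)
{
	return s->nsecs[which] / NSEC_PER_MSEC;
}
```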
-2
include/linux/kvm_host.h
··· 733 733 void kvm_vcpu_kick(struct kvm_vcpu *vcpu); 734 734 int kvm_vcpu_yield_to(struct kvm_vcpu *target); 735 735 void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu, bool usermode_vcpu_not_eligible); 736 - void kvm_load_guest_fpu(struct kvm_vcpu *vcpu); 737 - void kvm_put_guest_fpu(struct kvm_vcpu *vcpu); 738 736 739 737 void kvm_flush_remote_tlbs(struct kvm *kvm); 740 738 void kvm_reload_remote_mmus(struct kvm *kvm);
+10 -8
include/linux/mfd/da9063/pdata.h
··· 21 21 /* 22 22 * Regulator configuration 23 23 */ 24 - /* DA9063 regulator IDs */ 24 + /* DA9063 and DA9063L regulator IDs */ 25 25 enum { 26 26 /* BUCKs */ 27 27 DA9063_ID_BCORE1, ··· 37 37 DA9063_ID_BMEM_BIO_MERGED, 38 38 /* When two BUCKs are merged, they cannot be reused separately */ 39 39 40 - /* LDOs */ 41 - DA9063_ID_LDO1, 42 - DA9063_ID_LDO2, 40 + /* LDOs on both DA9063 and DA9063L */ 43 41 DA9063_ID_LDO3, 44 - DA9063_ID_LDO4, 45 - DA9063_ID_LDO5, 46 - DA9063_ID_LDO6, 47 42 DA9063_ID_LDO7, 48 43 DA9063_ID_LDO8, 49 44 DA9063_ID_LDO9, 50 - DA9063_ID_LDO10, 51 45 DA9063_ID_LDO11, 46 + 47 + /* DA9063-only LDOs */ 48 + DA9063_ID_LDO1, 49 + DA9063_ID_LDO2, 50 + DA9063_ID_LDO4, 51 + DA9063_ID_LDO5, 52 + DA9063_ID_LDO6, 53 + DA9063_ID_LDO10, 52 54 }; 53 55 54 56 /* Regulators platform data */
+30 -3
include/linux/mfd/rohm-bd718x7.h
··· 78 78 BD71837_REG_TRANS_COND0 = 0x1F, 79 79 BD71837_REG_TRANS_COND1 = 0x20, 80 80 BD71837_REG_VRFAULTEN = 0x21, 81 - BD71837_REG_MVRFLTMASK0 = 0x22, 82 - BD71837_REG_MVRFLTMASK1 = 0x23, 83 - BD71837_REG_MVRFLTMASK2 = 0x24, 81 + BD718XX_REG_MVRFLTMASK0 = 0x22, 82 + BD718XX_REG_MVRFLTMASK1 = 0x23, 83 + BD718XX_REG_MVRFLTMASK2 = 0x24, 84 84 BD71837_REG_RCVCFG = 0x25, 85 85 BD71837_REG_RCVNUM = 0x26, 86 86 BD71837_REG_PWRONCONFIG0 = 0x27, ··· 158 158 /* BD71837_REG_BUCK8_VOLT bits */ 159 159 #define BUCK8_MASK 0x3F 160 160 #define BUCK8_DEFAULT 0x1E 161 + 162 + /* BD718XX Voltage monitoring masks */ 163 + #define BD718XX_BUCK1_VRMON80 0x1 164 + #define BD718XX_BUCK1_VRMON130 0x2 165 + #define BD718XX_BUCK2_VRMON80 0x4 166 + #define BD718XX_BUCK2_VRMON130 0x8 167 + #define BD718XX_1ST_NODVS_BUCK_VRMON80 0x1 168 + #define BD718XX_1ST_NODVS_BUCK_VRMON130 0x2 169 + #define BD718XX_2ND_NODVS_BUCK_VRMON80 0x4 170 + #define BD718XX_2ND_NODVS_BUCK_VRMON130 0x8 171 + #define BD718XX_3RD_NODVS_BUCK_VRMON80 0x10 172 + #define BD718XX_3RD_NODVS_BUCK_VRMON130 0x20 173 + #define BD718XX_4TH_NODVS_BUCK_VRMON80 0x40 174 + #define BD718XX_4TH_NODVS_BUCK_VRMON130 0x80 175 + #define BD718XX_LDO1_VRMON80 0x1 176 + #define BD718XX_LDO2_VRMON80 0x2 177 + #define BD718XX_LDO3_VRMON80 0x4 178 + #define BD718XX_LDO4_VRMON80 0x8 179 + #define BD718XX_LDO5_VRMON80 0x10 180 + #define BD718XX_LDO6_VRMON80 0x20 181 + 182 + /* BD71837 specific voltage monitoring masks */ 183 + #define BD71837_BUCK3_VRMON80 0x10 184 + #define BD71837_BUCK3_VRMON130 0x20 185 + #define BD71837_BUCK4_VRMON80 0x40 186 + #define BD71837_BUCK4_VRMON130 0x80 187 + #define BD71837_LDO7_VRMON80 0x40 161 188 162 189 /* BD71837_REG_IRQ bits */ 163 190 #define IRQ_SWRST 0x40
+3 -2
include/linux/netpoll.h
··· 49 49 }; 50 50 51 51 #ifdef CONFIG_NETPOLL 52 - extern void netpoll_poll_disable(struct net_device *dev); 53 - extern void netpoll_poll_enable(struct net_device *dev); 52 + void netpoll_poll_dev(struct net_device *dev); 53 + void netpoll_poll_disable(struct net_device *dev); 54 + void netpoll_poll_enable(struct net_device *dev); 54 55 #else 55 56 static inline void netpoll_poll_disable(struct net_device *dev) { return; } 56 57 static inline void netpoll_poll_enable(struct net_device *dev) { return; }
+3 -3
include/linux/regulator/machine.h
··· 48 48 * DISABLE_IN_SUSPEND - turn off regulator in suspend states 49 49 * ENABLE_IN_SUSPEND - keep regulator on in suspend states 50 50 */ 51 - #define DO_NOTHING_IN_SUSPEND (-1) 52 - #define DISABLE_IN_SUSPEND 0 53 - #define ENABLE_IN_SUSPEND 1 51 + #define DO_NOTHING_IN_SUSPEND 0 52 + #define DISABLE_IN_SUSPEND 1 53 + #define ENABLE_IN_SUSPEND 2 54 54 55 55 /* Regulator active discharge flags */ 56 56 enum regulator_active_discharge {
+4 -3
include/linux/spi/spi-mem.h
··· 81 81 * @dummy.buswidth: number of IO lanes used to transmit the dummy bytes 82 82 * @data.buswidth: number of IO lanes used to send/receive the data 83 83 * @data.dir: direction of the transfer 84 - * @data.buf.in: input buffer 85 - * @data.buf.out: output buffer 84 + * @data.nbytes: number of data bytes to send/receive. Can be zero if the 85 + * operation does not involve transferring data 86 + * @data.buf.in: input buffer (must be DMA-able) 87 + * @data.buf.out: output buffer (must be DMA-able) 86 88 */ 87 89 struct spi_mem_op { 88 90 struct { ··· 107 105 u8 buswidth; 108 106 enum spi_mem_data_dir dir; 109 107 unsigned int nbytes; 110 - /* buf.{in,out} must be DMA-able. */ 111 108 union { 112 109 void *in; 113 110 const void *out;
+1
include/linux/stmmac.h
··· 30 30 31 31 #define MTL_MAX_RX_QUEUES 8 32 32 #define MTL_MAX_TX_QUEUES 8 33 + #define STMMAC_CH_MAX 8 33 34 34 35 #define STMMAC_RX_COE_NONE 0 35 36 #define STMMAC_RX_COE_TYPE1 1
+1 -1
include/linux/uio.h
··· 172 172 static __always_inline __must_check 173 173 size_t copy_to_iter_mcsafe(void *addr, size_t bytes, struct iov_iter *i) 174 174 { 175 - if (unlikely(!check_copy_size(addr, bytes, false))) 175 + if (unlikely(!check_copy_size(addr, bytes, true))) 176 176 return 0; 177 177 else 178 178 return _copy_to_iter_mcsafe(addr, bytes, i);
+3
include/linux/vga_switcheroo.h
··· 133 133 * @can_switch: check if the device is in a position to switch now. 134 134 * Mandatory. The client should return false if a user space process 135 135 * has one of its device files open 136 + * @gpu_bound: notify the client id to audio client when the GPU is bound. 136 137 * 137 138 * Client callbacks. A client can be either a GPU or an audio device on a GPU. 138 139 * The @set_gpu_state and @can_switch methods are mandatory, @reprobe may be 139 140 * set to NULL. For audio clients, the @reprobe member is bogus. 141 + * OTOH, @gpu_bound is only for audio clients, and not used for GPU clients. 140 142 */ 141 143 struct vga_switcheroo_client_ops { 142 144 void (*set_gpu_state)(struct pci_dev *dev, enum vga_switcheroo_state); 143 145 void (*reprobe)(struct pci_dev *dev); 144 146 bool (*can_switch)(struct pci_dev *dev); 147 + void (*gpu_bound)(struct pci_dev *dev, enum vga_switcheroo_client_id); 145 148 }; 146 149 147 150 #if defined(CONFIG_VGA_SWITCHEROO)
+1 -1
include/net/nfc/hci.h
··· 87 87 * According to specification 102 622 chapter 4.4 Pipes, 88 88 * the pipe identifier is 7 bits long. 89 89 */ 90 - #define NFC_HCI_MAX_PIPES 127 90 + #define NFC_HCI_MAX_PIPES 128 91 91 struct nfc_hci_init_data { 92 92 u8 gate_count; 93 93 struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES];
+9 -10
include/net/tls.h
··· 171 171 char *rec_seq; 172 172 }; 173 173 174 + union tls_crypto_context { 175 + struct tls_crypto_info info; 176 + struct tls12_crypto_info_aes_gcm_128 aes_gcm_128; 177 + }; 178 + 174 179 struct tls_context { 175 - union { 176 - struct tls_crypto_info crypto_send; 177 - struct tls12_crypto_info_aes_gcm_128 crypto_send_aes_gcm_128; 178 - }; 179 - union { 180 - struct tls_crypto_info crypto_recv; 181 - struct tls12_crypto_info_aes_gcm_128 crypto_recv_aes_gcm_128; 182 - }; 180 + union tls_crypto_context crypto_send; 181 + union tls_crypto_context crypto_recv; 183 182 184 183 struct list_head list; 185 184 struct net_device *netdev; ··· 366 367 * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE 367 368 */ 368 369 buf[0] = record_type; 369 - buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.version); 370 - buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.version); 370 + buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.info.version); 371 + buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.info.version); 371 372 /* we can use IV for nonce explicit according to spec */ 372 373 buf[3] = pkt_len >> 8; 373 374 buf[4] = pkt_len & 0xFF;
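Naming the union lets generic code reach the common header as `.info` regardless of which cipher-specific view filled it in. A simplified sketch of the layout trick (the `*_like` types are stand-ins, not the real uapi structures):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the TLS crypto_info structures. */
struct tls_crypto_info_like {
	uint16_t version;
	uint16_t cipher_type;
};

struct tls12_aes_gcm_128_like {
	struct tls_crypto_info_like info;	/* common header comes first */
	uint8_t iv[8];
};

/* The patch's pattern: one named union type instead of two anonymous
 * unions, so both send and recv contexts share the same accessor. */
union tls_crypto_context_like {
	struct tls_crypto_info_like info;
	struct tls12_aes_gcm_128_like aes_gcm_128;
};

static uint16_t ctx_version(const union tls_crypto_context_like *c)
{
	/* Valid: info is the common initial member of both union views. */
	return c->info.version;
}
```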
+1
include/sound/hdaudio.h
··· 412 412 void snd_hdac_bus_stop_cmd_io(struct hdac_bus *bus); 413 413 void snd_hdac_bus_enter_link_reset(struct hdac_bus *bus); 414 414 void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus); 415 + int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset); 415 416 416 417 void snd_hdac_bus_update_rirb(struct hdac_bus *bus); 417 418 int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+1
include/sound/soc-dapm.h
··· 407 407 int snd_soc_dapm_link_dai_widgets(struct snd_soc_card *card); 408 408 void snd_soc_dapm_connect_dai_link_widgets(struct snd_soc_card *card); 409 409 int snd_soc_dapm_new_pcm(struct snd_soc_card *card, 410 + struct snd_soc_pcm_runtime *rtd, 410 411 const struct snd_soc_pcm_stream *params, 411 412 unsigned int num_params, 412 413 struct snd_soc_dapm_widget *source,
+1 -1
include/uapi/linux/keyctl.h
··· 65 65 66 66 /* keyctl structures */ 67 67 struct keyctl_dh_params { 68 - __s32 dh_private; 68 + __s32 private; 69 69 __s32 prime; 70 70 __s32 base; 71 71 };
+1
include/uapi/linux/kvm.h
··· 952 952 #define KVM_CAP_S390_HPAGE_1M 156 953 953 #define KVM_CAP_NESTED_STATE 157 954 954 #define KVM_CAP_ARM_INJECT_SERROR_ESR 158 955 + #define KVM_CAP_MSR_PLATFORM_INFO 159 955 956 956 957 #ifdef KVM_CAP_IRQ_ROUTING 957 958
+51 -49
include/uapi/sound/skl-tplg-interface.h
··· 10 10 #ifndef __HDA_TPLG_INTERFACE_H__ 11 11 #define __HDA_TPLG_INTERFACE_H__ 12 12 13 + #include <linux/types.h> 14 + 13 15 /* 14 16 * Default types range from 0~12. type can range from 0 to 0xff 15 17 * SST types start at higher to avoid any overlapping in future ··· 145 143 }; 146 144 147 145 struct skl_dfw_algo_data { 148 - u32 set_params:2; 149 - u32 rsvd:30; 150 - u32 param_id; 151 - u32 max; 146 + __u32 set_params:2; 147 + __u32 rsvd:30; 148 + __u32 param_id; 149 + __u32 max; 152 150 char params[0]; 153 151 } __packed; 154 152 ··· 165 163 /* v4 configuration data */ 166 164 167 165 struct skl_dfw_v4_module_pin { 168 - u16 module_id; 169 - u16 instance_id; 166 + __u16 module_id; 167 + __u16 instance_id; 170 168 } __packed; 171 169 172 170 struct skl_dfw_v4_module_fmt { 173 - u32 channels; 174 - u32 freq; 175 - u32 bit_depth; 176 - u32 valid_bit_depth; 177 - u32 ch_cfg; 178 - u32 interleaving_style; 179 - u32 sample_type; 180 - u32 ch_map; 171 + __u32 channels; 172 + __u32 freq; 173 + __u32 bit_depth; 174 + __u32 valid_bit_depth; 175 + __u32 ch_cfg; 176 + __u32 interleaving_style; 177 + __u32 sample_type; 178 + __u32 ch_map; 181 179 } __packed; 182 180 183 181 struct skl_dfw_v4_module_caps { 184 - u32 set_params:2; 185 - u32 rsvd:30; 186 - u32 param_id; 187 - u32 caps_size; 188 - u32 caps[HDA_SST_CFG_MAX]; 182 + __u32 set_params:2; 183 + __u32 rsvd:30; 184 + __u32 param_id; 185 + __u32 caps_size; 186 + __u32 caps[HDA_SST_CFG_MAX]; 189 187 } __packed; 190 188 191 189 struct skl_dfw_v4_pipe { 192 - u8 pipe_id; 193 - u8 pipe_priority; 194 - u16 conn_type:4; 195 - u16 rsvd:4; 196 - u16 memory_pages:8; 190 + __u8 pipe_id; 191 + __u8 pipe_priority; 192 + __u16 conn_type:4; 193 + __u16 rsvd:4; 194 + __u16 memory_pages:8; 197 195 } __packed; 198 196 199 197 struct skl_dfw_v4_module { 200 198 char uuid[SKL_UUID_STR_SZ]; 201 199 202 - u16 module_id; 203 - u16 instance_id; 204 - u32 max_mcps; 205 - u32 mem_pages; 206 - u32 obs; 207 - u32 ibs; 208 - u32 vbus_id; 200 +
__u16 module_id; 201 + __u16 instance_id; 202 + __u32 max_mcps; 203 + __u32 mem_pages; 204 + __u32 obs; 205 + __u32 ibs; 206 + __u32 vbus_id; 209 207 210 - u32 max_in_queue:8; 211 - u32 max_out_queue:8; 212 - u32 time_slot:8; 213 - u32 core_id:4; 214 - u32 rsvd1:4; 208 + __u32 max_in_queue:8; 209 + __u32 max_out_queue:8; 210 + __u32 time_slot:8; 211 + __u32 core_id:4; 212 + __u32 rsvd1:4; 215 213 216 - u32 module_type:8; 217 - u32 conn_type:4; 218 - u32 dev_type:4; 219 - u32 hw_conn_type:4; 220 - u32 rsvd2:12; 214 + __u32 module_type:8; 215 + __u32 conn_type:4; 216 + __u32 dev_type:4; 217 + __u32 hw_conn_type:4; 218 + __u32 rsvd2:12; 221 219 222 - u32 params_fixup:8; 223 - u32 converter:8; 224 - u32 input_pin_type:1; 225 - u32 output_pin_type:1; 226 - u32 is_dynamic_in_pin:1; 227 - u32 is_dynamic_out_pin:1; 228 - u32 is_loadable:1; 229 - u32 rsvd3:11; 220 + __u32 params_fixup:8; 221 + __u32 converter:8; 222 + __u32 input_pin_type:1; 223 + __u32 output_pin_type:1; 224 + __u32 is_dynamic_in_pin:1; 225 + __u32 is_dynamic_out_pin:1; 226 + __u32 is_loadable:1; 227 + __u32 rsvd3:11; 230 228 231 229 struct skl_dfw_v4_pipe pipe; 232 230 struct skl_dfw_v4_module_fmt in_fmt[MAX_IN_QUEUE];
+1 -1
kernel/bpf/btf.c
··· 1844 1844 1845 1845 hdr = &btf->hdr; 1846 1846 cur = btf->nohdr_data + hdr->type_off; 1847 - end = btf->nohdr_data + hdr->type_len; 1847 + end = cur + hdr->type_len; 1848 1848 1849 1849 env->log_type_id = 1; 1850 1850 while (cur < end) {
+71 -20
kernel/bpf/sockmap.c
··· 132 132 struct work_struct gc_work; 133 133 134 134 struct proto *sk_proto; 135 + void (*save_unhash)(struct sock *sk); 135 136 void (*save_close)(struct sock *sk, long timeout); 136 137 void (*save_data_ready)(struct sock *sk); 137 138 void (*save_write_space)(struct sock *sk); ··· 144 143 static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size); 145 144 static int bpf_tcp_sendpage(struct sock *sk, struct page *page, 146 145 int offset, size_t size, int flags); 146 + static void bpf_tcp_unhash(struct sock *sk); 147 147 static void bpf_tcp_close(struct sock *sk, long timeout); 148 148 149 149 static inline struct smap_psock *smap_psock_sk(const struct sock *sk) ··· 186 184 struct proto *base) 187 185 { 188 186 prot[SOCKMAP_BASE] = *base; 187 + prot[SOCKMAP_BASE].unhash = bpf_tcp_unhash; 189 188 prot[SOCKMAP_BASE].close = bpf_tcp_close; 190 189 prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg; 191 190 prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read; ··· 220 217 return -EBUSY; 221 218 } 222 219 220 + psock->save_unhash = sk->sk_prot->unhash; 223 221 psock->save_close = sk->sk_prot->close; 224 222 psock->sk_proto = sk->sk_prot; 225 223 ··· 309 305 return e; 310 306 } 311 307 312 - static void bpf_tcp_close(struct sock *sk, long timeout) 308 + static void bpf_tcp_remove(struct sock *sk, struct smap_psock *psock) 313 309 { 314 - void (*close_fun)(struct sock *sk, long timeout); 315 310 struct smap_psock_map_entry *e; 316 311 struct sk_msg_buff *md, *mtmp; 317 - struct smap_psock *psock; 318 312 struct sock *osk; 319 - 320 - lock_sock(sk); 321 - rcu_read_lock(); 322 - psock = smap_psock_sk(sk); 323 - if (unlikely(!psock)) { 324 - rcu_read_unlock(); 325 - release_sock(sk); 326 - return sk->sk_prot->close(sk, timeout); 327 - } 328 - 329 - /* The psock may be destroyed anytime after exiting the RCU critical 330 - * section so by the time we use close_fun the psock may no longer 331 - * be valid. However, bpf_tcp_close is called with the sock lock 332 - * held so the close hook and sk are still valid. 333 - */ 334 - close_fun = psock->save_close; 335 313 336 314 if (psock->cork) { 337 315 free_start_sg(psock->sock, psock->cork, true); ··· 365 379 kfree(e); 366 380 e = psock_map_pop(sk, psock); 367 381 } 382 + } 383 + 384 + static void bpf_tcp_unhash(struct sock *sk) 385 + { 386 + void (*unhash_fun)(struct sock *sk); 387 + struct smap_psock *psock; 388 + 389 + rcu_read_lock(); 390 + psock = smap_psock_sk(sk); 391 + if (unlikely(!psock)) { 392 + rcu_read_unlock(); 393 + if (sk->sk_prot->unhash) 394 + sk->sk_prot->unhash(sk); 395 + return; 396 + } 397 + unhash_fun = psock->save_unhash; 398 + bpf_tcp_remove(sk, psock); 399 + rcu_read_unlock(); 400 + unhash_fun(sk); 401 + } 402 + 403 + static void bpf_tcp_close(struct sock *sk, long timeout) 404 + { 405 + void (*close_fun)(struct sock *sk, long timeout); 406 + struct smap_psock *psock; 407 + 408 + lock_sock(sk); 409 + rcu_read_lock(); 410 + psock = smap_psock_sk(sk); 411 + if (unlikely(!psock)) { 412 + rcu_read_unlock(); 413 + release_sock(sk); 414 + return sk->sk_prot->close(sk, timeout); 415 + } 416 + close_fun = psock->save_close; 417 + bpf_tcp_remove(sk, psock); 368 418 rcu_read_unlock(); 369 419 release_sock(sk); 370 420 close_fun(sk, timeout); ··· 2119 2097 return -EINVAL; 2120 2098 } 2121 2099 2100 + /* ULPs are currently supported only for TCP sockets in ESTABLISHED 2101 + * state. 2102 + */ 2122 2103 if (skops.sk->sk_type != SOCK_STREAM || 2123 - skops.sk->sk_protocol != IPPROTO_TCP) { 2104 + skops.sk->sk_protocol != IPPROTO_TCP || 2105 + skops.sk->sk_state != TCP_ESTABLISHED) { 2124 2106 fput(socket->file); 2125 2107 return -EOPNOTSUPP; 2126 2108 } ··· 2479 2453 return -EINVAL; 2480 2454 } 2481 2455 2456 + /* ULPs are currently supported only for TCP sockets in ESTABLISHED 2457 + * state. 2458 + */ 2459 + if (skops.sk->sk_type != SOCK_STREAM || 2460 + skops.sk->sk_protocol != IPPROTO_TCP || 2461 + skops.sk->sk_state != TCP_ESTABLISHED) { 2462 + fput(socket->file); 2463 + return -EOPNOTSUPP; 2464 + } 2465 + 2482 2466 lock_sock(skops.sk); 2483 2467 preempt_disable(); 2484 2468 rcu_read_lock(); ··· 2579 2543 .map_check_btf = map_check_no_btf, 2580 2544 }; 2581 2545 2546 + static bool bpf_is_valid_sock_op(struct bpf_sock_ops_kern *ops) 2547 + { 2548 + return ops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB || 2549 + ops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB; 2550 + } 2582 2551 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock, 2583 2552 struct bpf_map *, map, void *, key, u64, flags) 2584 2553 { 2585 2554 WARN_ON_ONCE(!rcu_read_lock_held()); 2555 + 2556 + /* ULPs are currently supported only for TCP sockets in ESTABLISHED 2557 + * state. This checks that the sock ops triggering the update is 2558 + * one indicating we are (or will soon be) in an ESTABLISHED state. 2559 + */ 2560 + if (!bpf_is_valid_sock_op(bpf_sock)) 2561 + return -EOPNOTSUPP; 2586 2562 return sock_map_ctx_update_elem(bpf_sock, map, key, flags); 2587 2563 } 2588 2564 ··· 2613 2565 struct bpf_map *, map, void *, key, u64, flags) 2614 2566 { 2615 2567 WARN_ON_ONCE(!rcu_read_lock_held()); 2568 + 2569 + if (!bpf_is_valid_sock_op(bpf_sock)) 2570 + return -EOPNOTSUPP; 2616 2571 return sock_hash_ctx_update_elem(bpf_sock, map, key, flags); 2617 2572 2618 2573
+1 -1
kernel/bpf/verifier.c
··· 3163 3163 * an arbitrary scalar. Disallow all math except 3164 3164 * pointer subtraction 3165 3165 */ 3166 - if (opcode == BPF_SUB){ 3166 + if (opcode == BPF_SUB && env->allow_ptr_leaks) { 3167 3167 mark_reg_unknown(env, regs, insn->dst_reg); 3168 3168 return 0; 3169 3169 }
+3
kernel/dma/Kconfig
··· 23 23 bool 24 24 select NEED_DMA_MAP_STATE 25 25 26 + config ARCH_HAS_SYNC_DMA_FOR_CPU_ALL 27 + bool 28 + 26 29 config DMA_DIRECT_OPS 27 30 bool 28 31 depends on HAS_DMA
+6
kernel/events/core.c
··· 3935 3935 goto out; 3936 3936 } 3937 3937 3938 + /* If this is a pinned event it must be running on this CPU */ 3939 + if (event->attr.pinned && event->oncpu != smp_processor_id()) { 3940 + ret = -EBUSY; 3941 + goto out; 3942 + } 3943 + 3938 3944 /* 3939 3945 * If the event is currently on this CPU, its either a per-task event, 3940 3946 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
+1 -1
kernel/pid.c
··· 195 195 idr_preload_end(); 196 196 197 197 if (nr < 0) { 198 - retval = nr; 198 + retval = (nr == -ENOSPC) ? -EAGAIN : nr; 199 199 goto out_free; 200 200 } 201 201
-3
kernel/sys.c
··· 71 71 #include <asm/io.h> 72 72 #include <asm/unistd.h> 73 73 74 - /* Hardening for Spectre-v1 */ 75 - #include <linux/nospec.h> 76 - 77 74 #include "uid16.h" 78 75 79 76 #ifndef SET_UNALIGN_CTL
+2
kernel/trace/ring_buffer.c
··· 1546 1546 tmp_iter_page = first_page; 1547 1547 1548 1548 do { 1549 + cond_resched(); 1550 + 1549 1551 to_remove_page = tmp_iter_page; 1550 1552 rb_inc_page(cpu_buffer, &tmp_iter_page); 1551 1553
+1
mm/Kconfig
··· 637 637 depends on NO_BOOTMEM 638 638 depends on SPARSEMEM 639 639 depends on !NEED_PER_CPU_KM 640 + depends on 64BIT 640 641 help 641 642 Ordinarily all struct pages are initialised during early boot in a 642 643 single thread. On very large machines this can take a considerable
+2
mm/shmem.c
··· 2227 2227 mpol_shared_policy_init(&info->policy, NULL); 2228 2228 break; 2229 2229 } 2230 + 2231 + lockdep_annotate_inode_mutex_key(inode); 2230 2232 } else 2231 2233 shmem_free_inode(sb); 2232 2234 return inode;
+11
mm/vmscan.c
··· 476 476 delta = freeable >> priority; 477 477 delta *= 4; 478 478 do_div(delta, shrinker->seeks); 479 + 480 + /* 481 + * Make sure we apply some minimal pressure on default priority 482 + * even on small cgroups. Stale objects are not only consuming memory 483 + * by themselves, but can also hold a reference to a dying cgroup, 484 + * preventing it from being reclaimed. A dying cgroup with all 485 + * corresponding structures like per-cpu stats and kmem caches 486 + * can be really big, so it may lead to a significant waste of memory. 487 + */ 488 + delta = max_t(unsigned long long, delta, min(freeable, batch_size)); 489 + 479 490 total_scan += delta; 480 491 if (total_scan < 0) { 481 492 pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+7 -3
net/batman-adv/bat_v_elp.c
··· 241 241 * the packet to be exactly of that size to make the link 242 242 * throughput estimation effective. 243 243 */ 244 - skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len); 244 + skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len); 245 245 246 246 batadv_dbg(BATADV_DBG_BATMAN, bat_priv, 247 247 "Sending unicast (probe) ELP packet on interface %s to %pM\n", ··· 268 268 struct batadv_priv *bat_priv; 269 269 struct sk_buff *skb; 270 270 u32 elp_interval; 271 + bool ret; 271 272 272 273 bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work); 273 274 hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v); ··· 330 329 * may sleep and that is not allowed in an rcu protected 331 330 * context. Therefore schedule a task for that. 332 331 */ 333 - queue_work(batadv_event_workqueue, 334 - &hardif_neigh->bat_v.metric_work); 332 + ret = queue_work(batadv_event_workqueue, 333 + &hardif_neigh->bat_v.metric_work); 334 + 335 + if (!ret) 336 + batadv_hardif_neigh_put(hardif_neigh); 335 337 } 336 338 rcu_read_unlock(); 337 339
+8 -2
net/batman-adv/bridge_loop_avoidance.c
··· 1772 1772 { 1773 1773 struct batadv_bla_backbone_gw *backbone_gw; 1774 1774 struct ethhdr *ethhdr; 1775 + bool ret; 1775 1776 1776 1777 ethhdr = eth_hdr(skb); 1777 1778 ··· 1796 1795 if (unlikely(!backbone_gw)) 1797 1796 return true; 1798 1797 1799 - queue_work(batadv_event_workqueue, &backbone_gw->report_work); 1800 - /* backbone_gw is unreferenced in the report work function */ 1798 + ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work); 1799 + 1800 + /* backbone_gw is unreferenced in the report work function 1801 + * if queue_work() call was successful 1802 + */ 1803 + if (!ret) 1804 + batadv_backbone_gw_put(backbone_gw); 1801 1805 1802 1806 return true; 1803 1807 }
+9 -2
net/batman-adv/gateway_client.c
··· 32 32 #include <linux/kernel.h> 33 33 #include <linux/kref.h> 34 34 #include <linux/list.h> 35 + #include <linux/lockdep.h> 35 36 #include <linux/netdevice.h> 36 37 #include <linux/netlink.h> 37 38 #include <linux/rculist.h> ··· 349 348 * @bat_priv: the bat priv with all the soft interface information 350 349 * @orig_node: originator announcing gateway capabilities 351 350 * @gateway: announced bandwidth information 351 + * 352 + * Has to be called with the appropriate locks being acquired 353 + * (gw.list_lock). 352 354 */ 353 355 static void batadv_gw_node_add(struct batadv_priv *bat_priv, 354 356 struct batadv_orig_node *orig_node, 355 357 struct batadv_tvlv_gateway_data *gateway) 356 358 { 357 359 struct batadv_gw_node *gw_node; 360 + 361 + lockdep_assert_held(&bat_priv->gw.list_lock); 358 362 359 363 if (gateway->bandwidth_down == 0) 360 364 return; ··· 375 369 gw_node->bandwidth_down = ntohl(gateway->bandwidth_down); 376 370 gw_node->bandwidth_up = ntohl(gateway->bandwidth_up); 377 371 378 - spin_lock_bh(&bat_priv->gw.list_lock); 379 372 kref_get(&gw_node->refcount); 380 373 hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list); 381 - spin_unlock_bh(&bat_priv->gw.list_lock); 382 374 383 375 batadv_dbg(BATADV_DBG_BATMAN, bat_priv, 384 376 "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n", ··· 432 428 { 433 429 struct batadv_gw_node *gw_node, *curr_gw = NULL; 434 430 431 + spin_lock_bh(&bat_priv->gw.list_lock); 435 432 gw_node = batadv_gw_node_get(bat_priv, orig_node); 436 433 if (!gw_node) { 437 434 batadv_gw_node_add(bat_priv, orig_node, gateway); 435 + spin_unlock_bh(&bat_priv->gw.list_lock); 438 436 goto out; 439 437 } 438 + spin_unlock_bh(&bat_priv->gw.list_lock); 440 439 441 440 if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) && 442 441 gw_node->bandwidth_up == ntohl(gateway->bandwidth_up))
+1 -1
net/batman-adv/main.h
··· 25 25 #define BATADV_DRIVER_DEVICE "batman-adv" 26 26 27 27 #ifndef BATADV_SOURCE_VERSION 28 - #define BATADV_SOURCE_VERSION "2018.2" 28 + #define BATADV_SOURCE_VERSION "2018.3" 29 29 #endif 30 30 31 31 /* B.A.T.M.A.N. parameters */
+22 -19
net/batman-adv/network-coding.c
··· 854 854 spinlock_t *lock; /* Used to lock list selected by "int in_coding" */ 855 855 struct list_head *list; 856 856 857 - /* Check if nc_node is already added */ 858 - nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding); 859 - 860 - /* Node found */ 861 - if (nc_node) 862 - return nc_node; 863 - 864 - nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC); 865 - if (!nc_node) 866 - return NULL; 867 - 868 - /* Initialize nc_node */ 869 - INIT_LIST_HEAD(&nc_node->list); 870 - kref_init(&nc_node->refcount); 871 - ether_addr_copy(nc_node->addr, orig_node->orig); 872 - kref_get(&orig_neigh_node->refcount); 873 - nc_node->orig_node = orig_neigh_node; 874 - 875 857 /* Select ingoing or outgoing coding node */ 876 858 if (in_coding) { 877 859 lock = &orig_neigh_node->in_coding_list_lock; ··· 863 881 list = &orig_neigh_node->out_coding_list; 864 882 } 865 883 884 + spin_lock_bh(lock); 885 + 886 + /* Check if nc_node is already added */ 887 + nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding); 888 + 889 + /* Node found */ 890 + if (nc_node) 891 + goto unlock; 892 + 893 + nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC); 894 + if (!nc_node) 895 + goto unlock; 896 + 897 + /* Initialize nc_node */ 898 + INIT_LIST_HEAD(&nc_node->list); 899 + kref_init(&nc_node->refcount); 900 + ether_addr_copy(nc_node->addr, orig_node->orig); 901 + kref_get(&orig_neigh_node->refcount); 902 + nc_node->orig_node = orig_neigh_node; 903 + 866 904 batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n", 867 905 nc_node->addr, nc_node->orig_node->orig); 868 906 869 907 /* Add nc_node to orig_node */ 870 - spin_lock_bh(lock); 871 908 kref_get(&nc_node->refcount); 872 909 list_add_tail_rcu(&nc_node->list, list); 910 + 911 + unlock: 873 912 spin_unlock_bh(lock); 874 913 875 914 return nc_node;
+19 -8
net/batman-adv/soft-interface.c
··· 574 574 struct batadv_softif_vlan *vlan; 575 575 int err; 576 576 577 + spin_lock_bh(&bat_priv->softif_vlan_list_lock); 578 + 577 579 vlan = batadv_softif_vlan_get(bat_priv, vid); 578 580 if (vlan) { 579 581 batadv_softif_vlan_put(vlan); 582 + spin_unlock_bh(&bat_priv->softif_vlan_list_lock); 580 583 return -EEXIST; 581 584 } 582 585 583 586 vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC); 584 - if (!vlan) 587 + if (!vlan) { 588 + spin_unlock_bh(&bat_priv->softif_vlan_list_lock); 585 589 return -ENOMEM; 590 + } 586 591 587 592 vlan->bat_priv = bat_priv; 588 593 vlan->vid = vid; ··· 595 590 596 591 atomic_set(&vlan->ap_isolation, 0); 597 592 598 - err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan); 599 - if (err) { 600 - kfree(vlan); 601 - return err; 602 - } 603 - 604 - spin_lock_bh(&bat_priv->softif_vlan_list_lock); 605 593 kref_get(&vlan->refcount); 606 594 hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list); 607 595 spin_unlock_bh(&bat_priv->softif_vlan_list_lock); 596 + 597 + /* batadv_sysfs_add_vlan cannot be in the spinlock section due to the 598 + * sleeping behavior of the sysfs functions and the fs_reclaim lock 599 + */ 600 + err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan); 601 + if (err) { 602 + /* ref for the function */ 603 + batadv_softif_vlan_put(vlan); 604 + 605 + /* ref for the list */ 606 + batadv_softif_vlan_put(vlan); 607 + return err; 608 + } 608 609 609 610 /* add a new TT local entry. This one will be marked with the NOPURGE 610 611 * flag
+20 -10
net/batman-adv/sysfs.c
··· 188 188 \ 189 189 return __batadv_store_uint_attr(buff, count, _min, _max, \ 190 190 _post_func, attr, \ 191 - &bat_priv->_var, net_dev); \ 191 + &bat_priv->_var, net_dev, \ 192 + NULL); \ 192 193 } 193 194 194 195 #define BATADV_ATTR_SIF_SHOW_UINT(_name, _var) \ ··· 263 262 \ 264 263 length = __batadv_store_uint_attr(buff, count, _min, _max, \ 265 264 _post_func, attr, \ 266 - &hard_iface->_var, net_dev); \ 265 + &hard_iface->_var, \ 266 + hard_iface->soft_iface, \ 267 + net_dev); \ 267 268 \ 268 269 batadv_hardif_put(hard_iface); \ 269 270 return length; \ ··· 359 356 360 357 static int batadv_store_uint_attr(const char *buff, size_t count, 361 358 struct net_device *net_dev, 359 + struct net_device *slave_dev, 362 360 const char *attr_name, 363 361 unsigned int min, unsigned int max, 364 362 atomic_t *attr) 365 363 { 364 + char ifname[IFNAMSIZ + 3] = ""; 366 365 unsigned long uint_val; 367 366 int ret; 368 367 ··· 390 385 if (atomic_read(attr) == uint_val) 391 386 return count; 392 387 393 - batadv_info(net_dev, "%s: Changing from: %i to: %lu\n", 394 - attr_name, atomic_read(attr), uint_val); 388 + if (slave_dev) 389 + snprintf(ifname, sizeof(ifname), "%s: ", slave_dev->name); 390 + 391 + batadv_info(net_dev, "%s: %sChanging from: %i to: %lu\n", 392 + attr_name, ifname, atomic_read(attr), uint_val); 395 393 396 394 atomic_set(attr, uint_val); 397 395 return count; ··· 405 397 void (*post_func)(struct net_device *), 406 398 const struct attribute *attr, 407 399 atomic_t *attr_store, 408 - struct net_device *net_dev) 400 + struct net_device *net_dev, 401 + struct net_device *slave_dev) 409 402 { 410 403 int ret; 411 404 412 - ret = batadv_store_uint_attr(buff, count, net_dev, attr->name, min, max, 413 - attr_store); 405 + ret = batadv_store_uint_attr(buff, count, net_dev, slave_dev, 406 + attr->name, min, max, attr_store); 414 407 if (post_func && ret) 415 408 post_func(net_dev); 416 409 ··· 580 571 return __batadv_store_uint_attr(buff, count, 1, BATADV_TQ_MAX_VALUE, 581 572 batadv_post_gw_reselect, attr, 582 573 &bat_priv->gw.sel_class, 583 - bat_priv->soft_iface); 574 + bat_priv->soft_iface, NULL); 584 575 } 585 576 586 577 static ssize_t batadv_show_gw_bwidth(struct kobject *kobj, ··· 1099 1090 if (old_tp_override == tp_override) 1100 1091 goto out; 1101 1092 1102 - batadv_info(net_dev, "%s: Changing from: %u.%u MBit to: %u.%u MBit\n", 1103 - "throughput_override", 1093 + batadv_info(hard_iface->soft_iface, 1094 + "%s: %s: Changing from: %u.%u MBit to: %u.%u MBit\n", 1095 + "throughput_override", net_dev->name, 1104 1096 old_tp_override / 10, old_tp_override % 10, 1105 1097 tp_override / 10, tp_override % 10); 1106 1098
+4 -2
net/batman-adv/translation-table.c
··· 1613 1613 { 1614 1614 struct batadv_tt_orig_list_entry *orig_entry; 1615 1615 1616 + spin_lock_bh(&tt_global->list_lock); 1617 + 1616 1618 orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node); 1617 1619 if (orig_entry) { 1618 1620 /* refresh the ttvn: the current value could be a bogus one that ··· 1637 1635 orig_entry->flags = flags; 1638 1636 kref_init(&orig_entry->refcount); 1639 1637 1640 - spin_lock_bh(&tt_global->list_lock); 1641 1638 kref_get(&orig_entry->refcount); 1642 1639 hlist_add_head_rcu(&orig_entry->list, 1643 1640 &tt_global->orig_list); 1644 - spin_unlock_bh(&tt_global->list_lock); 1645 1641 atomic_inc(&tt_global->orig_list_count); 1646 1642 1647 1643 sync_flags: ··· 1647 1647 out: 1648 1648 if (orig_entry) 1649 1649 batadv_tt_orig_list_entry_put(orig_entry); 1650 + 1651 + spin_unlock_bh(&tt_global->list_lock); 1650 1652 } 1651 1653 1652 1654 /**
+6 -2
net/batman-adv/tvlv.c
··· 529 529 { 530 530 struct batadv_tvlv_handler *tvlv_handler; 531 531 532 + spin_lock_bh(&bat_priv->tvlv.handler_list_lock); 533 + 532 534 tvlv_handler = batadv_tvlv_handler_get(bat_priv, type, version); 533 535 if (tvlv_handler) { 536 + spin_unlock_bh(&bat_priv->tvlv.handler_list_lock); 534 537 batadv_tvlv_handler_put(tvlv_handler); 535 538 return; 536 539 } 537 540 538 541 tvlv_handler = kzalloc(sizeof(*tvlv_handler), GFP_ATOMIC); 539 - if (!tvlv_handler) 542 + if (!tvlv_handler) { 543 + spin_unlock_bh(&bat_priv->tvlv.handler_list_lock); 540 544 return; 545 + } 541 546 542 547 tvlv_handler->ogm_handler = optr; 543 548 tvlv_handler->unicast_handler = uptr; ··· 552 547 kref_init(&tvlv_handler->refcount); 553 548 INIT_HLIST_NODE(&tvlv_handler->list); 554 549 555 - spin_lock_bh(&bat_priv->tvlv.handler_list_lock); 556 550 kref_get(&tvlv_handler->refcount); 557 551 hlist_add_head_rcu(&tvlv_handler->list, &bat_priv->tvlv.handler_list); 558 552 spin_unlock_bh(&bat_priv->tvlv.handler_list_lock);
+13 -3
net/bluetooth/smp.c
··· 83 83 84 84 struct smp_dev { 85 85 /* Secure Connections OOB data */ 86 + bool local_oob; 86 87 u8 local_pk[64]; 87 88 u8 local_rand[16]; 88 89 bool debug_key; ··· 599 598 return err; 600 599 601 600 memcpy(rand, smp->local_rand, 16); 601 + 602 + smp->local_oob = true; 602 603 603 604 return 0; 604 605 } ··· 1788 1785 * successfully received our local OOB data - therefore set the 1789 1786 * flag to indicate that local OOB is in use. 1790 1787 */ 1791 - if (req->oob_flag == SMP_OOB_PRESENT) 1788 + if (req->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob) 1792 1789 set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags); 1793 1790 1794 1791 /* SMP over BR/EDR requires special treatment */ ··· 1970 1967 * successfully received our local OOB data - therefore set the 1971 1968 * flag to indicate that local OOB is in use. 1972 1969 */ 1973 - if (rsp->oob_flag == SMP_OOB_PRESENT) 1970 + if (rsp->oob_flag == SMP_OOB_PRESENT && SMP_DEV(hdev)->local_oob) 1974 1971 set_bit(SMP_FLAG_LOCAL_OOB, &smp->flags); 1975 1972 1976 1973 smp->prsp[0] = SMP_CMD_PAIRING_RSP; ··· 2700 2697 * key was set/generated. 2701 2698 */ 2702 2699 if (test_bit(SMP_FLAG_LOCAL_OOB, &smp->flags)) { 2703 - struct smp_dev *smp_dev = chan->data; 2700 + struct l2cap_chan *hchan = hdev->smp_data; 2701 + struct smp_dev *smp_dev; 2702 + 2703 + if (!hchan || !hchan->data) 2704 + return SMP_UNSPECIFIED; 2705 + 2706 + smp_dev = hchan->data; 2704 2707 2705 2708 tfm_ecdh = smp_dev->tfm_ecdh; 2706 2709 } else { ··· 3239 3230 return ERR_CAST(tfm_ecdh); 3240 3231 } 3241 3232 3233 + smp->local_oob = false; 3242 3234 smp->tfm_aes = tfm_aes; 3243 3235 smp->tfm_cmac = tfm_cmac; 3244 3236 smp->tfm_ecdh = tfm_ecdh;
+1 -2
net/core/devlink.c
··· 2592 2592 if (!nlh) { 2593 2593 err = devlink_dpipe_send_and_alloc_skb(&skb, info); 2594 2594 if (err) 2595 - goto err_skb_send_alloc; 2595 + return err; 2596 2596 goto send_done; 2597 2597 } 2598 2598 return genlmsg_reply(skb, info); ··· 2600 2600 nla_put_failure: 2601 2601 err = -EMSGSIZE; 2602 2602 err_resource_put: 2603 - err_skb_send_alloc: 2604 2603 nlmsg_free(skb); 2605 2604 return err; 2606 2605 }
+1
net/core/ethtool.c
··· 2624 2624 case ETHTOOL_GPHYSTATS: 2625 2625 case ETHTOOL_GTSO: 2626 2626 case ETHTOOL_GPERMADDR: 2627 + case ETHTOOL_GUFO: 2627 2628 case ETHTOOL_GGSO: 2628 2629 case ETHTOOL_GGRO: 2629 2630 case ETHTOOL_GFLAGS:
+2 -1
net/core/filter.c
··· 2344 2344 if (unlikely(bytes_sg_total > copy)) 2345 2345 return -EINVAL; 2346 2346 2347 - page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy)); 2347 + page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC | __GFP_COMP, 2348 + get_order(copy)); 2348 2349 if (unlikely(!page)) 2349 2350 return -ENOMEM; 2350 2351 p = page_address(page);
+8 -5
net/core/neighbour.c
··· 1180 1180 lladdr = neigh->ha; 1181 1181 } 1182 1182 1183 + /* Update confirmed timestamp for neighbour entry after we 1184 + * received ARP packet even if it doesn't change IP to MAC binding. 1185 + */ 1186 + if (new & NUD_CONNECTED) 1187 + neigh->confirmed = jiffies; 1188 + 1183 1189 /* If entry was valid and address is not changed, 1184 1190 do not change entry state, if new one is STALE. 1185 1191 */ ··· 1207 1201 } 1208 1202 } 1209 1203 1210 - /* Update timestamps only once we know we will make a change to the 1204 + /* Update timestamp only once we know we will make a change to the 1211 1205 * neighbour entry. Otherwise we risk to move the locktime window with 1212 1206 * noop updates and ignore relevant ARP updates. 1213 1207 */ 1214 - if (new != old || lladdr != neigh->ha) { 1215 - if (new & NUD_CONNECTED) 1216 - neigh->confirmed = jiffies; 1208 + if (new != old || lladdr != neigh->ha) 1217 1209 neigh->updated = jiffies; 1218 - } 1219 1210 1220 1211 if (new != old) { 1221 1212 neigh_del_timer(neigh);
+7 -12
net/core/netpoll.c
··· 187 187 } 188 188 } 189 189 190 - static void netpoll_poll_dev(struct net_device *dev) 190 + void netpoll_poll_dev(struct net_device *dev) 191 191 { 192 - const struct net_device_ops *ops; 193 192 struct netpoll_info *ni = rcu_dereference_bh(dev->npinfo); 193 + const struct net_device_ops *ops; 194 194 195 195 /* Don't do any rx activity if the dev_lock mutex is held 196 196 * the dev_open/close paths use this to block netpoll activity 197 197 * while changing device state 198 198 */ 199 - if (down_trylock(&ni->dev_lock)) 199 + if (!ni || down_trylock(&ni->dev_lock)) 200 200 return; 201 201 202 202 if (!netif_running(dev)) { ··· 205 205 } 206 206 207 207 ops = dev->netdev_ops; 208 - if (!ops->ndo_poll_controller) { 209 - up(&ni->dev_lock); 210 - return; 211 - } 212 - 213 - /* Process pending work on NIC */ 214 - ops->ndo_poll_controller(dev); 208 + if (ops->ndo_poll_controller) 209 + ops->ndo_poll_controller(dev); 215 210 216 211 poll_napi(dev); 217 212 ··· 214 219 215 220 zap_completion_queue(); 216 221 } 222 + EXPORT_SYMBOL(netpoll_poll_dev); 217 223 218 224 void netpoll_poll_disable(struct net_device *dev) 219 225 { ··· 609 613 strlcpy(np->dev_name, ndev->name, IFNAMSIZ); 610 614 INIT_WORK(&np->cleanup_work, netpoll_async_cleanup); 611 615 612 - if ((ndev->priv_flags & IFF_DISABLE_NETPOLL) || 613 - !ndev->netdev_ops->ndo_poll_controller) { 616 + if (ndev->priv_flags & IFF_DISABLE_NETPOLL) { 614 617 np_err(np, "%s doesn't support polling, aborting\n", 615 618 np->dev_name); 616 619 err = -ENOTSUPP;
+1 -1
net/core/rtnetlink.c
··· 2810 2810 } 2811 2811 2812 2812 if (dev->rtnl_link_state == RTNL_LINK_INITIALIZED) { 2813 - __dev_notify_flags(dev, old_flags, 0U); 2813 + __dev_notify_flags(dev, old_flags, (old_flags ^ dev->flags)); 2814 2814 } else { 2815 2815 dev->rtnl_link_state = RTNL_LINK_INITIALIZED; 2816 2816 __dev_notify_flags(dev, old_flags, ~0U);
+1
net/ipv4/af_inet.c
··· 1377 1377 if (encap) 1378 1378 skb_reset_inner_headers(skb); 1379 1379 skb->network_header = (u8 *)iph - skb->head; 1380 + skb_reset_mac_len(skb); 1380 1381 } while ((skb = skb->next)); 1381 1382 1382 1383 out:
+9
net/ipv4/ip_tunnel.c
··· 627 627 const struct iphdr *tnl_params, u8 protocol) 628 628 { 629 629 struct ip_tunnel *tunnel = netdev_priv(dev); 630 + unsigned int inner_nhdr_len = 0; 630 631 const struct iphdr *inner_iph; 631 632 struct flowi4 fl4; 632 633 u8 tos, ttl; ··· 636 635 unsigned int max_headroom; /* The extra header space needed */ 637 636 __be32 dst; 638 637 bool connected; 638 + 639 + /* ensure we can access the inner net header, for several users below */ 640 + if (skb->protocol == htons(ETH_P_IP)) 641 + inner_nhdr_len = sizeof(struct iphdr); 642 + else if (skb->protocol == htons(ETH_P_IPV6)) 643 + inner_nhdr_len = sizeof(struct ipv6hdr); 644 + if (unlikely(!pskb_may_pull(skb, inner_nhdr_len))) 645 + goto tx_error; 639 646 640 647 inner_iph = (const struct iphdr *)skb_inner_network_header(skb); 641 648 connected = (tunnel->parms.iph.daddr != 0);
+26 -23
net/ipv4/udp.c
··· 2124 2124 inet_compute_pseudo); 2125 2125 } 2126 2126 2127 + /* wrapper for udp_queue_rcv_skb taking care of csum conversion and 2128 + * return code conversion for ip layer consumption 2129 + */ 2130 + static int udp_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb, 2131 + struct udphdr *uh) 2132 + { 2133 + int ret; 2134 + 2135 + if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk)) 2136 + skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check, 2137 + inet_compute_pseudo); 2138 + 2139 + ret = udp_queue_rcv_skb(sk, skb); 2140 + 2141 + /* a return value > 0 means to resubmit the input, but 2142 + * it wants the return to be -protocol, or 0 2143 + */ 2144 + if (ret > 0) 2145 + return -ret; 2146 + return 0; 2147 + } 2148 + 2127 2149 /* 2128 2150 * All we need to do is get the socket, and then do a checksum. 2129 2151 */ ··· 2192 2170 if (unlikely(sk->sk_rx_dst != dst)) 2193 2171 udp_sk_rx_dst_set(sk, dst); 2194 2172 2195 - ret = udp_queue_rcv_skb(sk, skb); 2173 + ret = udp_unicast_rcv_skb(sk, skb, uh); 2196 2174 sock_put(sk); 2197 - /* a return value > 0 means to resubmit the input, but 2198 - * it wants the return to be -protocol, or 0 2199 - */ 2200 - if (ret > 0) 2201 - return -ret; 2202 - return 0; 2175 + return ret; 2203 2176 } 2204 2177 2205 2178 if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST)) ··· 2202 2185 saddr, daddr, udptable, proto); 2203 2186 2204 2187 sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable); 2205 - if (sk) { 2206 - int ret; 2207 - 2208 - if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk)) 2209 - skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check, 2210 - inet_compute_pseudo); 2211 - 2212 - ret = udp_queue_rcv_skb(sk, skb); 2213 - 2214 - /* a return value > 0 means to resubmit the input, but 2215 - * it wants the return to be -protocol, or 0 2216 - */ 2217 - if (ret > 0) 2218 - return -ret; 2219 - return 0; 2220 - } 2188 + if (sk) 2189 + return udp_unicast_rcv_skb(sk, skb, uh); 2221 2190 2222 2191 if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) 2223 2192 goto drop;
+1 -3
net/ipv6/addrconf.c
··· 4201 4201 p++; 4202 4202 continue; 4203 4203 } 4204 - state->offset++; 4205 4204 return ifa; 4206 4205 } 4207 4206 ··· 4224 4225 return ifa; 4225 4226 } 4226 4227 4228 + state->offset = 0; 4227 4229 while (++state->bucket < IN6_ADDR_HSIZE) { 4228 - state->offset = 0; 4229 4230 hlist_for_each_entry_rcu(ifa, 4230 4231 &inet6_addr_lst[state->bucket], addr_lst) { 4231 4232 if (!net_eq(dev_net(ifa->idev->dev), net)) 4232 4233 continue; 4233 - state->offset++; 4234 4234 return ifa; 4235 4235 } 4236 4236 }
+1
net/ipv6/ip6_offload.c
··· 115 115 payload_len = skb->len - nhoff - sizeof(*ipv6h); 116 116 ipv6h->payload_len = htons(payload_len); 117 117 skb->network_header = (u8 *)ipv6h - skb->head; 118 + skb_reset_mac_len(skb); 118 119 119 120 if (udpfrag) { 120 121 int err = ip6_find_1stfragopt(skb, &prevhdr);
+2 -4
net/ipv6/ip6_output.c
··· 219 219 kfree_skb(skb); 220 220 return -ENOBUFS; 221 221 } 222 + if (skb->sk) 223 + skb_set_owner_w(skb2, skb->sk); 222 224 consume_skb(skb); 223 225 skb = skb2; 224 - /* skb_set_owner_w() changes sk->sk_wmem_alloc atomically, 225 - * it is safe to call in our context (socket lock not held) 226 - */ 227 - skb_set_owner_w(skb, (struct sock *)sk); 228 226 } 229 227 if (opt->opt_flen) 230 228 ipv6_push_frag_opts(skb, opt, &proto);
+11 -2
net/ipv6/ip6_tunnel.c
··· 1234 1234 ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) 1235 1235 { 1236 1236 struct ip6_tnl *t = netdev_priv(dev); 1237 - const struct iphdr *iph = ip_hdr(skb); 1237 + const struct iphdr *iph; 1238 1238 int encap_limit = -1; 1239 1239 struct flowi6 fl6; 1240 1240 __u8 dsfield; ··· 1242 1242 u8 tproto; 1243 1243 int err; 1244 1244 1245 + /* ensure we can access the full inner ip header */ 1246 + if (!pskb_may_pull(skb, sizeof(struct iphdr))) 1247 + return -1; 1248 + 1249 + iph = ip_hdr(skb); 1245 1250 memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 1246 1251 1247 1252 tproto = READ_ONCE(t->parms.proto); ··· 1311 1306 ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev) 1312 1307 { 1313 1308 struct ip6_tnl *t = netdev_priv(dev); 1314 - struct ipv6hdr *ipv6h = ipv6_hdr(skb); 1309 + struct ipv6hdr *ipv6h; 1315 1310 int encap_limit = -1; 1316 1311 __u16 offset; 1317 1312 struct flowi6 fl6; ··· 1320 1315 u8 tproto; 1321 1316 int err; 1322 1317 1318 + if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h)))) 1319 + return -1; 1320 + 1321 + ipv6h = ipv6_hdr(skb); 1323 1322 tproto = READ_ONCE(t->parms.proto); 1324 1323 if ((tproto != IPPROTO_IPV6 && tproto != 0) || 1325 1324 ip6_tnl_addr_conflict(t, ipv6h))
+38 -15
net/ipv6/route.c
··· 364 364 365 365 static void ip6_dst_destroy(struct dst_entry *dst) 366 366 { 367 + struct dst_metrics *p = (struct dst_metrics *)DST_METRICS_PTR(dst); 367 368 struct rt6_info *rt = (struct rt6_info *)dst; 368 369 struct fib6_info *from; 369 370 struct inet6_dev *idev; 370 371 371 - dst_destroy_metrics_generic(dst); 372 + if (p != &dst_default_metrics && refcount_dec_and_test(&p->refcnt)) 373 + kfree(p); 374 + 372 375 rt6_uncached_list_del(rt); 373 376 374 377 idev = rt->rt6i_idev; ··· 949 946 950 947 static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort) 951 948 { 952 - rt->dst.flags |= fib6_info_dst_flags(ort); 953 - 954 949 if (ort->fib6_flags & RTF_REJECT) { 955 950 ip6_rt_init_dst_reject(rt, ort); 956 951 return; ··· 979 978 rt->rt6i_flags &= ~RTF_EXPIRES; 980 979 rcu_assign_pointer(rt->from, from); 981 980 dst_init_metrics(&rt->dst, from->fib6_metrics->metrics, true); 981 + if (from->fib6_metrics != &dst_default_metrics) { 982 + rt->dst._metrics |= DST_METRICS_REFCOUNTED; 983 + refcount_inc(&from->fib6_metrics->refcnt); 984 + } 982 985 } 983 986 984 987 /* Caller must already hold reference to @ort */ ··· 4675 4670 int iif, int type, u32 portid, u32 seq, 4676 4671 unsigned int flags) 4677 4672 { 4678 - struct rtmsg *rtm; 4673 + struct rt6_info *rt6 = (struct rt6_info *)dst; 4674 + struct rt6key *rt6_dst, *rt6_src; 4675 + u32 *pmetrics, table, rt6_flags; 4679 4676 struct nlmsghdr *nlh; 4677 + struct rtmsg *rtm; 4680 4678 long expires = 0; 4681 - u32 *pmetrics; 4682 - u32 table; 4683 4679 4684 4680 nlh = nlmsg_put(skb, portid, seq, type, sizeof(*rtm), flags); 4685 4681 if (!nlh) 4686 4682 return -EMSGSIZE; 4687 4683 4684 + if (rt6) { 4685 + rt6_dst = &rt6->rt6i_dst; 4686 + rt6_src = &rt6->rt6i_src; 4687 + rt6_flags = rt6->rt6i_flags; 4688 + } else { 4689 + rt6_dst = &rt->fib6_dst; 4690 + rt6_src = &rt->fib6_src; 4691 + rt6_flags = rt->fib6_flags; 4692 + } 4693 + 4688 4694 rtm = nlmsg_data(nlh); 4689 4695 rtm->rtm_family = AF_INET6; 4690 - 
rtm->rtm_dst_len = rt->fib6_dst.plen; 4691 - rtm->rtm_src_len = rt->fib6_src.plen; 4696 + rtm->rtm_dst_len = rt6_dst->plen; 4697 + rtm->rtm_src_len = rt6_src->plen; 4692 4698 rtm->rtm_tos = 0; 4693 4699 if (rt->fib6_table) 4694 4700 table = rt->fib6_table->tb6_id; ··· 4714 4698 rtm->rtm_scope = RT_SCOPE_UNIVERSE; 4715 4699 rtm->rtm_protocol = rt->fib6_protocol; 4716 4700 4717 - if (rt->fib6_flags & RTF_CACHE) 4701 + if (rt6_flags & RTF_CACHE) 4718 4702 rtm->rtm_flags |= RTM_F_CLONED; 4719 4703 4720 4704 if (dest) { ··· 4722 4706 goto nla_put_failure; 4723 4707 rtm->rtm_dst_len = 128; 4724 4708 } else if (rtm->rtm_dst_len) 4725 - if (nla_put_in6_addr(skb, RTA_DST, &rt->fib6_dst.addr)) 4709 + if (nla_put_in6_addr(skb, RTA_DST, &rt6_dst->addr)) 4726 4710 goto nla_put_failure; 4727 4711 #ifdef CONFIG_IPV6_SUBTREES 4728 4712 if (src) { ··· 4730 4714 goto nla_put_failure; 4731 4715 rtm->rtm_src_len = 128; 4732 4716 } else if (rtm->rtm_src_len && 4733 - nla_put_in6_addr(skb, RTA_SRC, &rt->fib6_src.addr)) 4717 + nla_put_in6_addr(skb, RTA_SRC, &rt6_src->addr)) 4734 4718 goto nla_put_failure; 4735 4719 #endif 4736 4720 if (iif) { 4737 4721 #ifdef CONFIG_IPV6_MROUTE 4738 - if (ipv6_addr_is_multicast(&rt->fib6_dst.addr)) { 4722 + if (ipv6_addr_is_multicast(&rt6_dst->addr)) { 4739 4723 int err = ip6mr_get_route(net, skb, rtm, portid); 4740 4724 4741 4725 if (err == 0) ··· 4770 4754 /* For multipath routes, walk the siblings list and add 4771 4755 * each as a nexthop within RTA_MULTIPATH. 
4772 4756 */ 4773 - if (rt->fib6_nsiblings) { 4757 + if (rt6) { 4758 + if (rt6_flags & RTF_GATEWAY && 4759 + nla_put_in6_addr(skb, RTA_GATEWAY, &rt6->rt6i_gateway)) 4760 + goto nla_put_failure; 4761 + 4762 + if (dst->dev && nla_put_u32(skb, RTA_OIF, dst->dev->ifindex)) 4763 + goto nla_put_failure; 4764 + } else if (rt->fib6_nsiblings) { 4774 4765 struct fib6_info *sibling, *next_sibling; 4775 4766 struct nlattr *mp; 4776 4767 ··· 4800 4777 goto nla_put_failure; 4801 4778 } 4802 4779 4803 - if (rt->fib6_flags & RTF_EXPIRES) { 4780 + if (rt6_flags & RTF_EXPIRES) { 4804 4781 expires = dst ? dst->expires : rt->expires; 4805 4782 expires -= jiffies; 4806 4783 } ··· 4808 4785 if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0) 4809 4786 goto nla_put_failure; 4810 4787 4811 - if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->fib6_flags))) 4788 + if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt6_flags))) 4812 4789 goto nla_put_failure; 4813 4790 4814 4791
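Editorial note: the route.c hunk replaces generic metrics destruction with an explicit refcount on the shared dst_metrics block, with a refcount_inc() when a cached route borrows the parent's metrics and a refcount_dec_and_test() plus kfree() on destroy. A hedged userspace sketch of that ownership pattern using C11 atomics (names illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Shared metrics block, analogous to struct dst_metrics. */
struct metrics {
    atomic_int refcnt;
    unsigned int mtu;
};

static struct metrics *metrics_new(unsigned int mtu)
{
    struct metrics *m = malloc(sizeof(*m));
    if (!m)
        return NULL;
    atomic_init(&m->refcnt, 1);
    m->mtu = mtu;
    return m;
}

/* A cached route sharing the metrics takes a reference. */
static struct metrics *metrics_get(struct metrics *m)
{
    atomic_fetch_add(&m->refcnt, 1);
    return m;
}

/* Returns 1 when this call dropped the last reference and freed
 * the block, mirroring refcount_dec_and_test() + kfree(). */
static int metrics_put(struct metrics *m)
{
    if (atomic_fetch_sub(&m->refcnt, 1) == 1) {
        free(m);
        return 1;
    }
    return 0;
}
```

Whichever side drops last frees; neither the parent fib6_info nor the cached rt6_info needs to know about the other's lifetime.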
+37 -28
net/ipv6/udp.c
··· 752 752 } 753 753 } 754 754 755 + /* wrapper for udp_queue_rcv_skb taking care of csum conversion and 756 + * return code conversion for ip layer consumption 757 + */ 758 + static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb, 759 + struct udphdr *uh) 760 + { 761 + int ret; 762 + 763 + if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk)) 764 + skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check, 765 + ip6_compute_pseudo); 766 + 767 + ret = udpv6_queue_rcv_skb(sk, skb); 768 + 769 + /* a return value > 0 means to resubmit the input, but 770 + * it wants the return to be -protocol, or 0 771 + */ 772 + if (ret > 0) 773 + return -ret; 774 + return 0; 775 + } 776 + 755 777 int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable, 756 778 int proto) 757 779 { ··· 825 803 if (unlikely(sk->sk_rx_dst != dst)) 826 804 udp6_sk_rx_dst_set(sk, dst); 827 805 828 - ret = udpv6_queue_rcv_skb(sk, skb); 829 - sock_put(sk); 806 + if (!uh->check && !udp_sk(sk)->no_check6_rx) { 807 + sock_put(sk); 808 + goto report_csum_error; 809 + } 830 810 831 - /* a return value > 0 means to resubmit the input */ 832 - if (ret > 0) 833 - return ret; 834 - return 0; 811 + ret = udp6_unicast_rcv_skb(sk, skb, uh); 812 + sock_put(sk); 813 + return ret; 835 814 } 836 815 837 816 /* ··· 845 822 /* Unicast */ 846 823 sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable); 847 824 if (sk) { 848 - int ret; 849 - 850 - if (!uh->check && !udp_sk(sk)->no_check6_rx) { 851 - udp6_csum_zero_error(skb); 852 - goto csum_error; 853 - } 854 - 855 - if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk)) 856 - skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check, 857 - ip6_compute_pseudo); 858 - 859 - ret = udpv6_queue_rcv_skb(sk, skb); 860 - 861 - /* a return value > 0 means to resubmit the input */ 862 - if (ret > 0) 863 - return ret; 864 - 865 - return 0; 825 + if (!uh->check && !udp_sk(sk)->no_check6_rx) 826 + goto report_csum_error; 827 + return udp6_unicast_rcv_skb(sk, skb, uh); 866 828 } 867 829 868 - if (!uh->check) { 869 - udp6_csum_zero_error(skb); 870 - goto csum_error; 871 - } 830 + if (!uh->check) 831 + goto report_csum_error; 872 832 873 833 if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) 874 834 goto discard; ··· 872 866 ulen, skb->len, 873 867 daddr, ntohs(uh->dest)); 874 868 goto discard; 869 + 870 + report_csum_error: 871 + udp6_csum_zero_error(skb); 875 872 csum_error: 876 873 __UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE); 877 874 discard:
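Editorial note: the udp.c rewrite funnels every zero-checksum failure through a new `report_csum_error` label that falls through into `csum_error` and then `discard`, so error reporting, MIB accounting, and cleanup each live in exactly one place. A hedged sketch of that stacked-label idiom (counters stand in for the UDP MIB statistics):

```c
#include <assert.h>

/* Counters standing in for the UDP MIB statistics. */
static int csum_errors, discards;

/* Stacked error labels: each label does its own bookkeeping and
 * falls through to the next, so every failure exit shares the same
 * accounting and cleanup code. */
static int rcv(int has_csum, int csum_valid, int payload_ok)
{
    if (!has_csum)
        goto report_csum_error;   /* zero checksum not allowed here */
    if (!csum_valid)
        goto csum_error;          /* checksum present but wrong */
    if (!payload_ok)
        goto discard;
    return 0;                     /* delivered */

report_csum_error:
    /* here the kernel calls udp6_csum_zero_error(skb) */
    /* fall through */
csum_error:
    csum_errors++;
    /* fall through */
discard:
    discards++;
    return -1;
}
```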
+5 -1
net/mpls/af_mpls.c
··· 1533 1533 unsigned int flags; 1534 1534 1535 1535 if (event == NETDEV_REGISTER) { 1536 - /* For now just support Ethernet, IPGRE, SIT and IPIP devices */ 1536 + 1537 + /* For now just support Ethernet, IPGRE, IP6GRE, SIT and 1538 + * IPIP devices 1539 + */ 1537 1540 if (dev->type == ARPHRD_ETHER || 1538 1541 dev->type == ARPHRD_LOOPBACK || 1539 1542 dev->type == ARPHRD_IPGRE || 1543 + dev->type == ARPHRD_IP6GRE || 1540 1544 dev->type == ARPHRD_SIT || 1541 1545 dev->type == ARPHRD_TUNNEL) { 1542 1546 mdev = mpls_add_dev(dev);
+2 -1
net/netlabel/netlabel_unlabeled.c
··· 781 781 { 782 782 u32 addr_len; 783 783 784 - if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR]) { 784 + if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR] && 785 + info->attrs[NLBL_UNLABEL_A_IPV4MASK]) { 785 786 addr_len = nla_len(info->attrs[NLBL_UNLABEL_A_IPV4ADDR]); 786 787 if (addr_len != sizeof(struct in_addr) && 787 788 addr_len != nla_len(info->attrs[NLBL_UNLABEL_A_IPV4MASK]))
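Editorial note: the netlabel fix requires *both* the IPv4 address and the mask attribute to be present before either is dereferenced; netlink attributes come from userspace, and an absent one is a NULL pointer. A minimal sketch of the pattern (types illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a parsed netlink attribute: NULL means "absent". */
struct attr { int len; };

/* Only proceed when both related attributes were supplied; touching
 * a missing one would be a NULL dereference, which is the bug the
 * hunk above closes. */
static int addr_len_ok(const struct attr *addr, const struct attr *mask)
{
    if (addr && mask)
        return addr->len == mask->len;
    return 0;
}
```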
+10
net/nfc/hci/core.c
··· 209 209 } 210 210 create_info = (struct hci_create_pipe_resp *)skb->data; 211 211 212 + if (create_info->pipe >= NFC_HCI_MAX_PIPES) { 213 + status = NFC_HCI_ANY_E_NOK; 214 + goto exit; 215 + } 216 + 212 217 /* Save the new created pipe and bind with local gate, 213 218 * the description for skb->data[3] is destination gate id 214 219 * but since we received this cmd from host controller, we ··· 236 231 goto exit; 237 232 } 238 233 delete_info = (struct hci_delete_pipe_noti *)skb->data; 234 + 235 + if (delete_info->pipe >= NFC_HCI_MAX_PIPES) { 236 + status = NFC_HCI_ANY_E_NOK; 237 + goto exit; 238 + } 239 239 240 240 hdev->pipes[delete_info->pipe].gate = NFC_HCI_INVALID_GATE; 241 241 hdev->pipes[delete_info->pipe].dest_host = NFC_HCI_INVALID_HOST;
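Editorial note: `create_info->pipe` and `delete_info->pipe` arrive off the wire from the host controller, so they must be range-checked before being used to index `hdev->pipes[]`. A minimal sketch of validating an untrusted index (the array size is illustrative, not the real NFC_HCI_MAX_PIPES):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PIPES 128      /* stand-in for NFC_HCI_MAX_PIPES */

static int gates[MAX_PIPES];

/* Reject out-of-range identifiers from the peer before they become
 * an array index; -1 stands in for the NFC_HCI_ANY_E_NOK path. */
static int bind_pipe(uint8_t pipe, int gate)
{
    if (pipe >= MAX_PIPES)
        return -1;
    gates[pipe] = gate;
    return 0;
}
```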
+1 -1
net/rds/ib.h
··· 443 443 int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op); 444 444 445 445 /* ib_stats.c */ 446 - DECLARE_PER_CPU(struct rds_ib_statistics, rds_ib_stats); 446 + DECLARE_PER_CPU_SHARED_ALIGNED(struct rds_ib_statistics, rds_ib_stats); 447 447 #define rds_ib_stats_inc(member) rds_stats_inc_which(rds_ib_stats, member) 448 448 #define rds_ib_stats_add(member, count) \ 449 449 rds_stats_add_which(rds_ib_stats, member, count)
+1 -1
net/sched/act_sample.c
··· 69 69 70 70 if (!exists) { 71 71 ret = tcf_idr_create(tn, parm->index, est, a, 72 - &act_sample_ops, bind, false); 72 + &act_sample_ops, bind, true); 73 73 if (ret) { 74 74 tcf_idr_cleanup(tn, parm->index); 75 75 return ret;
+2
net/sched/cls_api.c
··· 1902 1902 RTM_NEWCHAIN, false); 1903 1903 break; 1904 1904 case RTM_DELCHAIN: 1905 + tfilter_notify_chain(net, skb, block, q, parent, n, 1906 + chain, RTM_DELTFILTER); 1905 1907 /* Flush the chain first as the user requested chain removal. */ 1906 1908 tcf_chain_flush(chain); 1907 1909 /* In case the chain was successfully deleted, put a reference
+10 -2
net/sctp/transport.c
··· 260 260 bool sctp_transport_update_pmtu(struct sctp_transport *t, u32 pmtu) 261 261 { 262 262 struct dst_entry *dst = sctp_transport_dst_check(t); 263 + struct sock *sk = t->asoc->base.sk; 263 264 bool change = true; 264 265 265 266 if (unlikely(pmtu < SCTP_DEFAULT_MINSEGMENT)) { ··· 272 271 pmtu = SCTP_TRUNC4(pmtu); 273 272 274 273 if (dst) { 275 - dst->ops->update_pmtu(dst, t->asoc->base.sk, NULL, pmtu); 274 + struct sctp_pf *pf = sctp_get_pf_specific(dst->ops->family); 275 + union sctp_addr addr; 276 + 277 + pf->af->from_sk(&addr, sk); 278 + pf->to_sk_daddr(&t->ipaddr, sk); 279 + dst->ops->update_pmtu(dst, sk, NULL, pmtu); 280 + pf->to_sk_daddr(&addr, sk); 281 + 276 282 dst = sctp_transport_dst_check(t); 277 283 } 278 284 279 285 if (!dst) { 280 - t->af_specific->get_dst(t, &t->saddr, &t->fl, t->asoc->base.sk); 286 + t->af_specific->get_dst(t, &t->saddr, &t->fl, sk); 281 287 dst = t->dst; 282 288 } 283 289
+16 -10
net/smc/af_smc.c
··· 742 742 smc->sk.sk_err = -rc; 743 743 744 744 out: 745 - smc->sk.sk_state_change(&smc->sk); 745 + if (smc->sk.sk_err) 746 + smc->sk.sk_state_change(&smc->sk); 747 + else 748 + smc->sk.sk_write_space(&smc->sk); 746 749 kfree(smc->connect_info); 747 750 smc->connect_info = NULL; 748 751 release_sock(&smc->sk); ··· 1153 1150 } 1154 1151 1155 1152 /* listen worker: finish RDMA setup */ 1156 - static void smc_listen_rdma_finish(struct smc_sock *new_smc, 1157 - struct smc_clc_msg_accept_confirm *cclc, 1158 - int local_contact) 1153 + static int smc_listen_rdma_finish(struct smc_sock *new_smc, 1154 + struct smc_clc_msg_accept_confirm *cclc, 1155 + int local_contact) 1159 1156 { 1160 1157 struct smc_link *link = &new_smc->conn.lgr->lnk[SMC_SINGLE_LINK]; 1161 1158 int reason_code = 0; ··· 1178 1175 if (reason_code) 1179 1176 goto decline; 1180 1177 } 1181 - return; 1178 + return 0; 1182 1179 1183 1180 decline: 1184 1181 mutex_unlock(&smc_create_lgr_pending); 1185 1182 smc_listen_decline(new_smc, reason_code, local_contact); 1183 + return reason_code; 1186 1184 } 1187 1185 1188 1186 /* setup for RDMA connection of server */ ··· 1280 1276 } 1281 1277 1282 1278 /* finish worker */ 1283 - if (!ism_supported) 1284 - smc_listen_rdma_finish(new_smc, &cclc, local_contact); 1279 + if (!ism_supported) { 1280 + if (smc_listen_rdma_finish(new_smc, &cclc, local_contact)) 1281 + return; 1282 + } 1285 1283 smc_conn_save_peer_info(new_smc, &cclc); 1286 1284 mutex_unlock(&smc_create_lgr_pending); 1287 1285 smc_listen_out_connected(new_smc); ··· 1535 1529 return EPOLLNVAL; 1536 1530 1537 1531 smc = smc_sk(sock->sk); 1538 - if ((sk->sk_state == SMC_INIT) || smc->use_fallback) { 1532 + if (smc->use_fallback) { 1539 1533 /* delegate to CLC child sock */ 1540 1534 mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1541 1535 sk->sk_err = smc->clcsock->sk->sk_err; ··· 1566 1560 mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP; 1567 1561 if (sk->sk_state == SMC_APPCLOSEWAIT1) 1568 1562 mask |= 
EPOLLIN; 1563 + if (smc->conn.urg_state == SMC_URG_VALID) 1564 + mask |= EPOLLPRI; 1569 1565 } 1570 - if (smc->conn.urg_state == SMC_URG_VALID) 1571 - mask |= EPOLLPRI; 1572 1566 } 1573 1567 1574 1568 return mask;
+6 -8
net/smc/smc_clc.c
··· 446 446 vec[i++].iov_len = sizeof(trl); 447 447 /* due to the few bytes needed for clc-handshake this cannot block */ 448 448 len = kernel_sendmsg(smc->clcsock, &msg, vec, i, plen); 449 - if (len < sizeof(pclc)) { 450 - if (len >= 0) { 451 - reason_code = -ENETUNREACH; 452 - smc->sk.sk_err = -reason_code; 453 - } else { 454 - smc->sk.sk_err = smc->clcsock->sk->sk_err; 455 - reason_code = -smc->sk.sk_err; 456 - } 449 + if (len < 0) { 450 + smc->sk.sk_err = smc->clcsock->sk->sk_err; 451 + reason_code = -smc->sk.sk_err; 452 + } else if (len < (int)sizeof(pclc)) { 453 + reason_code = -ENETUNREACH; 454 + smc->sk.sk_err = -reason_code; 457 455 } 458 456 459 457 return reason_code;
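Editorial note: the smc_clc hunk reorders the checks so a negative `len` (transport error) is tested before the short-write comparison, and casts `sizeof(pclc)` to int. In the old shape, `len < sizeof(pclc)` compared a signed result against an unsigned size, promoting a negative error code to a huge unsigned value. A hedged sketch of both shapes:

```c
#include <assert.h>
#include <stddef.h>

/* Classify a sendmsg()-style result: test the error branch first,
 * or cast the unsigned size, as the fix does. */
static int classify(long len, size_t want)
{
    if (len < 0)
        return -1;             /* transport error */
    if (len < (long)want)
        return 1;              /* short write */
    return 0;                  /* all bytes sent */
}

/* The buggy shape: `len < want` is an unsigned comparison, so a
 * negative error code wraps to a huge value and reads as success. */
static int classify_buggy(long len, size_t want)
{
    if ((size_t)len < want)    /* what `len < sizeof(x)` really does */
        return 1;
    return 0;
}
```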
+7 -7
net/smc/smc_close.c
··· 100 100 struct smc_cdc_conn_state_flags *txflags = 101 101 &smc->conn.local_tx_ctrl.conn_state_flags; 102 102 103 - sk->sk_err = ECONNABORTED; 104 - if (smc->clcsock && smc->clcsock->sk) { 105 - smc->clcsock->sk->sk_err = ECONNABORTED; 106 - smc->clcsock->sk->sk_state_change(smc->clcsock->sk); 103 + if (sk->sk_state != SMC_INIT && smc->clcsock && smc->clcsock->sk) { 104 + sk->sk_err = ECONNABORTED; 105 + if (smc->clcsock && smc->clcsock->sk) { 106 + smc->clcsock->sk->sk_err = ECONNABORTED; 107 + smc->clcsock->sk->sk_state_change(smc->clcsock->sk); 108 + } 107 109 } 108 110 switch (sk->sk_state) { 109 - case SMC_INIT: 110 - sk->sk_state = SMC_PEERABORTWAIT; 111 - break; 112 111 case SMC_ACTIVE: 113 112 sk->sk_state = SMC_PEERABORTWAIT; 114 113 release_sock(sk); ··· 142 143 case SMC_PEERFINCLOSEWAIT: 143 144 sock_put(sk); /* passive closing */ 144 145 break; 146 + case SMC_INIT: 145 147 case SMC_PEERABORTWAIT: 146 148 case SMC_CLOSED: 147 149 break;
+1 -1
net/smc/smc_pnet.c
··· 461 461 }; 462 462 463 463 /* SMC_PNETID family definition */ 464 - static struct genl_family smc_pnet_nl_family = { 464 + static struct genl_family smc_pnet_nl_family __ro_after_init = { 465 465 .hdrsize = 0, 466 466 .name = SMCR_GENL_FAMILY_NAME, 467 467 .version = SMCR_GENL_FAMILY_VERSION,
+14 -8
net/socket.c
··· 941 941 EXPORT_SYMBOL(dlci_ioctl_set); 942 942 943 943 static long sock_do_ioctl(struct net *net, struct socket *sock, 944 - unsigned int cmd, unsigned long arg) 944 + unsigned int cmd, unsigned long arg, 945 + unsigned int ifreq_size) 945 946 { 946 947 int err; 947 948 void __user *argp = (void __user *)arg; ··· 968 967 } else { 969 968 struct ifreq ifr; 970 969 bool need_copyout; 971 - if (copy_from_user(&ifr, argp, sizeof(struct ifreq))) 970 + if (copy_from_user(&ifr, argp, ifreq_size)) 972 971 return -EFAULT; 973 972 err = dev_ioctl(net, cmd, &ifr, &need_copyout); 974 973 if (!err && need_copyout) 975 - if (copy_to_user(argp, &ifr, sizeof(struct ifreq))) 974 + if (copy_to_user(argp, &ifr, ifreq_size)) 976 975 return -EFAULT; 977 976 } 978 977 return err; ··· 1071 1070 err = open_related_ns(&net->ns, get_net_ns); 1072 1071 break; 1073 1072 default: 1074 - err = sock_do_ioctl(net, sock, cmd, arg); 1073 + err = sock_do_ioctl(net, sock, cmd, arg, 1074 + sizeof(struct ifreq)); 1075 1075 break; 1076 1076 } 1077 1077 return err; ··· 2752 2750 int err; 2753 2751 2754 2752 set_fs(KERNEL_DS); 2755 - err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv); 2753 + err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv, 2754 + sizeof(struct compat_ifreq)); 2756 2755 set_fs(old_fs); 2757 2756 if (!err) 2758 2757 err = compat_put_timeval(&ktv, up); ··· 2769 2766 int err; 2770 2767 2771 2768 set_fs(KERNEL_DS); 2772 - err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts); 2769 + err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts, 2770 + sizeof(struct compat_ifreq)); 2773 2771 set_fs(old_fs); 2774 2772 if (!err) 2775 2773 err = compat_put_timespec(&kts, up); ··· 3076 3072 } 3077 3073 3078 3074 set_fs(KERNEL_DS); 3079 - ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r); 3075 + ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r, 3076 + sizeof(struct compat_ifreq)); 3080 3077 set_fs(old_fs); 3081 3078 3082 3079 out: ··· 3190 3185 case SIOCBONDSETHWADDR: 3191 
3186 case SIOCBONDCHANGEACTIVE: 3192 3187 case SIOCGIFNAME: 3193 - return sock_do_ioctl(net, sock, cmd, arg); 3188 + return sock_do_ioctl(net, sock, cmd, arg, 3189 + sizeof(struct compat_ifreq)); 3194 3190 } 3195 3191 3196 3192 return -ENOIOCTLCMD;
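Editorial note: sock_do_ioctl() now takes an explicit `ifreq_size` because a 32-bit compat caller supplies a `struct compat_ifreq` that is smaller than the native `struct ifreq` (pointers shrink from 8 to 4 bytes); copying `sizeof(struct ifreq)` from such a buffer over-reads user memory. A sketch with purely illustrative layouts, not the real ifreq definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: the size difference comes from the pointer-
 * sized member, just as with ifreq vs compat_ifreq. */
struct native_req { char name[16]; uint64_t ptr; };   /* 24 bytes */
struct compat_req { char name[16]; uint32_t ptr; };   /* 20 bytes */

/* Copy helper parameterized by the caller's ABI size, like the
 * ifreq_size argument added to sock_do_ioctl(); it never reads
 * more than the caller actually provided. */
static size_t fetch_req(void *dst, const void *user_buf, size_t user_size)
{
    memcpy(dst, user_buf, user_size);
    return user_size;
}
```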
+3 -3
net/tls/tls_device.c
··· 686 686 goto free_marker_record; 687 687 } 688 688 689 - crypto_info = &ctx->crypto_send; 689 + crypto_info = &ctx->crypto_send.info; 690 690 switch (crypto_info->cipher_type) { 691 691 case TLS_CIPHER_AES_GCM_128: 692 692 nonce_size = TLS_CIPHER_AES_GCM_128_IV_SIZE; ··· 780 780 781 781 ctx->priv_ctx_tx = offload_ctx; 782 782 rc = netdev->tlsdev_ops->tls_dev_add(netdev, sk, TLS_OFFLOAD_CTX_DIR_TX, 783 - &ctx->crypto_send, 783 + &ctx->crypto_send.info, 784 784 tcp_sk(sk)->write_seq); 785 785 if (rc) 786 786 goto release_netdev; ··· 862 862 goto release_ctx; 863 863 864 864 rc = netdev->tlsdev_ops->tls_dev_add(netdev, sk, TLS_OFFLOAD_CTX_DIR_RX, 865 - &ctx->crypto_recv, 865 + &ctx->crypto_recv.info, 866 866 tcp_sk(sk)->copied_seq); 867 867 if (rc) { 868 868 pr_err_ratelimited("%s: The netdev has refused to offload this socket\n",
+1 -1
net/tls/tls_device_fallback.c
··· 320 320 goto free_req; 321 321 322 322 iv = buf; 323 - memcpy(iv, tls_ctx->crypto_send_aes_gcm_128.salt, 323 + memcpy(iv, tls_ctx->crypto_send.aes_gcm_128.salt, 324 324 TLS_CIPHER_AES_GCM_128_SALT_SIZE); 325 325 aad = buf + TLS_CIPHER_AES_GCM_128_SALT_SIZE + 326 326 TLS_CIPHER_AES_GCM_128_IV_SIZE;
+16 -6
net/tls/tls_main.c
··· 241 241 ctx->sk_write_space(sk); 242 242 } 243 243 244 + static void tls_ctx_free(struct tls_context *ctx) 245 + { 246 + if (!ctx) 247 + return; 248 + 249 + memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send)); 250 + memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv)); 251 + kfree(ctx); 252 + } 253 + 244 254 static void tls_sk_proto_close(struct sock *sk, long timeout) 245 255 { 246 256 struct tls_context *ctx = tls_get_ctx(sk); ··· 304 294 #else 305 295 { 306 296 #endif 307 - kfree(ctx); 297 + tls_ctx_free(ctx); 308 298 ctx = NULL; 309 299 } 310 300 ··· 315 305 * for sk->sk_prot->unhash [tls_hw_unhash] 316 306 */ 317 307 if (free_ctx) 318 - kfree(ctx); 308 + tls_ctx_free(ctx); 319 309 } 320 310 321 311 static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval, ··· 340 330 } 341 331 342 332 /* get user crypto info */ 343 - crypto_info = &ctx->crypto_send; 333 + crypto_info = &ctx->crypto_send.info; 344 334 345 335 if (!TLS_CRYPTO_INFO_READY(crypto_info)) { 346 336 rc = -EBUSY; ··· 427 417 } 428 418 429 419 if (tx) 430 - crypto_info = &ctx->crypto_send; 420 + crypto_info = &ctx->crypto_send.info; 431 421 else 432 - crypto_info = &ctx->crypto_recv; 422 + crypto_info = &ctx->crypto_recv.info; 433 423 434 424 /* Currently we don't support set crypto info more than one time */ 435 425 if (TLS_CRYPTO_INFO_READY(crypto_info)) { ··· 509 499 goto out; 510 500 511 501 err_crypto_info: 512 - memset(crypto_info, 0, sizeof(*crypto_info)); 502 + memzero_explicit(crypto_info, sizeof(union tls_crypto_context)); 513 503 out: 514 504 return rc; 515 505 }
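Editorial note: the tls_main.c hunk wipes keying material with memzero_explicit() rather than memset(), because a plain memset of a buffer that is about to be kfree()d is a dead store the compiler may eliminate. A userspace sketch of one common portable equivalent, calling memset through a volatile function pointer (an assumption for illustration; C11 Annex K offers memset_s where available):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Calling through a volatile function pointer keeps the optimizer
 * from proving the store is dead, so the wipe really happens --
 * the same intent as the kernel's memzero_explicit(). */
static void *(*volatile memset_fn)(void *, int, size_t) = memset;

static void secure_zero(void *p, size_t n)
{
    memset_fn(p, 0, n);
}

struct key_ctx { unsigned char key[16]; };

static void ctx_free(struct key_ctx *ctx)
{
    if (!ctx)
        return;
    secure_zero(ctx->key, sizeof(ctx->key));  /* wipe before release */
    free(ctx);
}
```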
+13 -8
net/tls/tls_sw.c
··· 931 931 if (control != TLS_RECORD_TYPE_DATA) 932 932 goto recv_end; 933 933 } 934 + } else { 935 + /* MSG_PEEK right now cannot look beyond current skb 936 + * from strparser, meaning we cannot advance skb here 937 + * and thus unpause strparser since we'd lose original 938 + * one. 939 + */ 940 + break; 934 941 } 942 + 935 943 /* If we have a new message from strparser, continue now. */ 936 944 if (copied >= target && !ctx->recv_pkt) 937 945 break; ··· 1063 1055 goto read_failure; 1064 1056 } 1065 1057 1066 - if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.version) || 1067 - header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.version)) { 1058 + if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.info.version) || 1059 + header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.info.version)) { 1068 1060 ret = -EINVAL; 1069 1061 goto read_failure; 1070 1062 } ··· 1144 1136 1145 1137 int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx) 1146 1138 { 1147 - char keyval[TLS_CIPHER_AES_GCM_128_KEY_SIZE]; 1148 1139 struct tls_crypto_info *crypto_info; 1149 1140 struct tls12_crypto_info_aes_gcm_128 *gcm_128_info; 1150 1141 struct tls_sw_context_tx *sw_ctx_tx = NULL; ··· 1188 1181 1189 1182 if (tx) { 1190 1183 crypto_init_wait(&sw_ctx_tx->async_wait); 1191 - crypto_info = &ctx->crypto_send; 1184 + crypto_info = &ctx->crypto_send.info; 1192 1185 cctx = &ctx->tx; 1193 1186 aead = &sw_ctx_tx->aead_send; 1194 1187 } else { 1195 1188 crypto_init_wait(&sw_ctx_rx->async_wait); 1196 - crypto_info = &ctx->crypto_recv; 1189 + crypto_info = &ctx->crypto_recv.info; 1197 1190 cctx = &ctx->rx; 1198 1191 aead = &sw_ctx_rx->aead_recv; 1199 1192 } ··· 1272 1265 1273 1266 ctx->push_pending_record = tls_sw_push_pending_record; 1274 1267 1275 - memcpy(keyval, gcm_128_info->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE); 1276 - 1277 - rc = crypto_aead_setkey(*aead, keyval, 1268 + rc = crypto_aead_setkey(*aead, gcm_128_info->key, 1278 1269 TLS_CIPHER_AES_GCM_128_KEY_SIZE); 1279 1270 if (rc) 1280 1271 goto free_aead;
+13
scripts/subarch.include
··· 1 + # SUBARCH tells the usermode build what the underlying arch is. That is set 2 + # first, and if a usermode build is happening, the "ARCH=um" on the command 3 + # line overrides the setting of ARCH below. If a native build is happening, 4 + # then ARCH is assigned, getting whatever value it gets normally, and 5 + # SUBARCH is subsequently ignored. 6 + 7 + SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \ 8 + -e s/sun4u/sparc64/ \ 9 + -e s/arm.*/arm/ -e s/sa110/arm/ \ 10 + -e s/s390x/s390/ -e s/parisc64/parisc/ \ 11 + -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \ 12 + -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \ 13 + -e s/riscv.*/riscv/)
+1 -1
security/keys/dh.c
··· 300 300 } 301 301 dh_inputs.g_size = dlen; 302 302 303 - dlen = dh_data_from_key(pcopy.dh_private, &dh_inputs.key); 303 + dlen = dh_data_from_key(pcopy.private, &dh_inputs.key); 304 304 if (dlen < 0) { 305 305 ret = dlen; 306 306 goto out2;
+2
sound/firewire/bebob/bebob.c
··· 263 263 error: 264 264 mutex_unlock(&devices_mutex); 265 265 snd_bebob_stream_destroy_duplex(bebob); 266 + kfree(bebob->maudio_special_quirk); 267 + bebob->maudio_special_quirk = NULL; 266 268 snd_card_free(bebob->card); 267 269 dev_info(&bebob->unit->device, 268 270 "Sound card registration failed: %d\n", err);
+14 -14
sound/firewire/bebob/bebob_maudio.c
··· 96 96 struct fw_device *device = fw_parent_device(unit); 97 97 int err, rcode; 98 98 u64 date; 99 - __le32 cues[3] = { 100 - cpu_to_le32(MAUDIO_BOOTLOADER_CUE1), 101 - cpu_to_le32(MAUDIO_BOOTLOADER_CUE2), 102 - cpu_to_le32(MAUDIO_BOOTLOADER_CUE3) 103 - }; 99 + __le32 *cues; 104 100 105 101 /* check date of software used to build */ 106 102 err = snd_bebob_read_block(unit, INFO_OFFSET_SW_DATE, 107 103 &date, sizeof(u64)); 108 104 if (err < 0) 109 - goto end; 105 + return err; 110 106 /* 111 107 * firmware version 5058 or later has date later than "20070401", but 112 108 * 'date' is not null-terminated. ··· 110 114 if (date < 0x3230303730343031LL) { 111 115 dev_err(&unit->device, 112 116 "Use firmware version 5058 or later\n"); 113 - err = -ENOSYS; 114 - goto end; 117 + return -ENXIO; 115 118 } 119 + 120 + cues = kmalloc_array(3, sizeof(*cues), GFP_KERNEL); 121 + if (!cues) 122 + return -ENOMEM; 123 + 124 + cues[0] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE1); 125 + cues[1] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE2); 126 + cues[2] = cpu_to_le32(MAUDIO_BOOTLOADER_CUE3); 116 127 117 128 rcode = fw_run_transaction(device->card, TCODE_WRITE_BLOCK_REQUEST, 118 129 device->node_id, device->generation, 119 130 device->max_speed, BEBOB_ADDR_REG_REQ, 120 - cues, sizeof(cues)); 131 + cues, 3 * sizeof(*cues)); 132 + kfree(cues); 121 133 if (rcode != RCODE_COMPLETE) { 122 134 dev_err(&unit->device, 123 135 "Failed to send a cue to load firmware\n"); 124 136 err = -EIO; 125 137 } 126 - end: 138 + 127 139 return err; 128 140 } 129 141 ··· 294 290 bebob->midi_output_ports = 2; 295 291 } 296 292 end: 297 - if (err < 0) { 298 - kfree(params); 299 - bebob->maudio_special_quirk = NULL; 300 - } 301 293 mutex_unlock(&bebob->mutex); 302 294 return err; 303 295 }
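Editorial note: the bebob_maudio hunk moves the bootloader cue words from an on-stack array to kmalloc_array(), because buffers handed to fw_run_transaction() may be used for DMA and on-stack memory is not DMA-safe (with CONFIG_VMAP_STACK it may not even be physically contiguous). A hedged userspace sketch of building the little-endian transfer buffer on the heap (the cue values and the put_le32 helper are illustrative, not the real constants or cpu_to_le32):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define CUE1 0x00000001u   /* illustrative, not the real bootloader cues */
#define CUE2 0x00000002u
#define CUE3 0x00000003u

/* Portable stand-in for cpu_to_le32(): serialize into wire order. */
static void put_le32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)(v);
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
}

/* Build the transfer buffer on the heap, as the fix does with
 * kmalloc_array(); heap memory is DMA-safe where the stack is not.
 * Caller frees after the transaction completes. */
static uint8_t *build_cues(void)
{
    uint8_t *buf = malloc(3 * 4);
    if (!buf)
        return NULL;
    put_le32(buf + 0, CUE1);
    put_le32(buf + 4, CUE2);
    put_le32(buf + 8, CUE3);
    return buf;
}
```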
+1
sound/firewire/digi00x/digi00x.c
··· 49 49 fw_unit_put(dg00x->unit); 50 50 51 51 mutex_destroy(&dg00x->mutex); 52 + kfree(dg00x); 52 53 } 53 54 54 55 static void dg00x_card_free(struct snd_card *card)
+6 -3
sound/firewire/fireface/ff-protocol-ff400.c
··· 146 146 { 147 147 __le32 *reg; 148 148 int i; 149 + int err; 149 150 150 151 reg = kcalloc(18, sizeof(__le32), GFP_KERNEL); 151 152 if (reg == NULL) ··· 164 163 reg[i] = cpu_to_le32(0x00000001); 165 164 } 166 165 167 - return snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST, 168 - FF400_FETCH_PCM_FRAMES, reg, 169 - sizeof(__le32) * 18, 0); 166 + err = snd_fw_transaction(ff->unit, TCODE_WRITE_BLOCK_REQUEST, 167 + FF400_FETCH_PCM_FRAMES, reg, 168 + sizeof(__le32) * 18, 0); 169 + kfree(reg); 170 + return err; 170 171 } 171 172 172 173 static void ff400_dump_sync_status(struct snd_ff *ff,
+2
sound/firewire/fireworks/fireworks.c
··· 301 301 snd_efw_transaction_remove_instance(efw); 302 302 snd_efw_stream_destroy_duplex(efw); 303 303 snd_card_free(efw->card); 304 + kfree(efw->resp_buf); 305 + efw->resp_buf = NULL; 304 306 dev_info(&efw->unit->device, 305 307 "Sound card registration failed: %d\n", err); 306 308 }
+10
sound/firewire/oxfw/oxfw.c
··· 130 130 131 131 kfree(oxfw->spec); 132 132 mutex_destroy(&oxfw->mutex); 133 + kfree(oxfw); 133 134 } 134 135 135 136 /* ··· 208 207 static void do_registration(struct work_struct *work) 209 208 { 210 209 struct snd_oxfw *oxfw = container_of(work, struct snd_oxfw, dwork.work); 210 + int i; 211 211 int err; 212 212 213 213 if (oxfw->registered) ··· 271 269 snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->rx_stream); 272 270 if (oxfw->has_output) 273 271 snd_oxfw_stream_destroy_simplex(oxfw, &oxfw->tx_stream); 272 + for (i = 0; i < SND_OXFW_STREAM_FORMAT_ENTRIES; ++i) { 273 + kfree(oxfw->tx_stream_formats[i]); 274 + oxfw->tx_stream_formats[i] = NULL; 275 + kfree(oxfw->rx_stream_formats[i]); 276 + oxfw->rx_stream_formats[i] = NULL; 277 + } 274 278 snd_card_free(oxfw->card); 279 + kfree(oxfw->spec); 280 + oxfw->spec = NULL; 275 281 dev_info(&oxfw->unit->device, 276 282 "Sound card registration failed: %d\n", err); 277 283 }
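Editorial note: the oxfw registration-failure path now frees every stream-format buffer and the spec, and NULLs each pointer so that a later teardown running over the same struct cannot double-free. A minimal sketch of that free-and-NULL cleanup pattern (struct and count illustrative):

```c
#include <assert.h>
#include <stdlib.h>

#define NFORMATS 4     /* stand-in for SND_OXFW_STREAM_FORMAT_ENTRIES */

struct dev {
    char *formats[NFORMATS];
    char *spec;
};

/* Free-and-NULL on the failure path: NULLing the pointers makes the
 * cleanup idempotent, so a later .remove-style teardown can walk the
 * same struct without double-freeing (free(NULL) is a no-op). */
static void registration_failed(struct dev *d)
{
    for (int i = 0; i < NFORMATS; i++) {
        free(d->formats[i]);
        d->formats[i] = NULL;
    }
    free(d->spec);
    d->spec = NULL;
}
```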
+1
sound/firewire/tascam/tascam.c
··· 93 93 fw_unit_put(tscm->unit); 94 94 95 95 mutex_destroy(&tscm->mutex); 96 + kfree(tscm); 96 97 } 97 98 98 99 static void tscm_card_free(struct snd_card *card)
+10 -5
sound/hda/hdac_controller.c
··· 40 40 */ 41 41 void snd_hdac_bus_init_cmd_io(struct hdac_bus *bus) 42 42 { 43 + WARN_ON_ONCE(!bus->rb.area); 44 + 43 45 spin_lock_irq(&bus->reg_lock); 44 46 /* CORB set up */ 45 47 bus->corb.addr = bus->rb.addr; ··· 385 383 EXPORT_SYMBOL_GPL(snd_hdac_bus_exit_link_reset); 386 384 387 385 /* reset codec link */ 388 - static int azx_reset(struct hdac_bus *bus, bool full_reset) 386 + int snd_hdac_bus_reset_link(struct hdac_bus *bus, bool full_reset) 389 387 { 390 388 if (!full_reset) 391 389 goto skip_reset; ··· 410 408 skip_reset: 411 409 /* check to see if controller is ready */ 412 410 if (!snd_hdac_chip_readb(bus, GCTL)) { 413 - dev_dbg(bus->dev, "azx_reset: controller not ready!\n"); 411 + dev_dbg(bus->dev, "controller not ready!\n"); 414 412 return -EBUSY; 415 413 } 416 414 ··· 425 423 426 424 return 0; 427 425 } 426 + EXPORT_SYMBOL_GPL(snd_hdac_bus_reset_link); 428 427 429 428 /* enable interrupts */ 430 429 static void azx_int_enable(struct hdac_bus *bus) ··· 480 477 return false; 481 478 482 479 /* reset controller */ 483 - azx_reset(bus, full_reset); 480 + snd_hdac_bus_reset_link(bus, full_reset); 484 481 485 - /* initialize interrupts */ 482 + /* clear interrupts */ 486 483 azx_int_clear(bus); 487 - azx_int_enable(bus); 488 484 489 485 /* initialize the codec command I/O */ 490 486 snd_hdac_bus_init_cmd_io(bus); 487 + 488 + /* enable interrupts after CORB/RIRB buffers are initialized above */ 489 + azx_int_enable(bus); 491 490 492 491 /* program the position buffer */ 493 492 if (bus->use_posbuf && bus->posbuf.addr) {
+1 -1
sound/pci/emu10k1/emufx.c
··· 2540 2540 emu->support_tlv = 1; 2541 2541 return put_user(SNDRV_EMU10K1_VERSION, (int __user *)argp); 2542 2542 case SNDRV_EMU10K1_IOCTL_INFO: 2543 - info = kmalloc(sizeof(*info), GFP_KERNEL); 2543 + info = kzalloc(sizeof(*info), GFP_KERNEL); 2544 2544 if (!info) 2545 2545 return -ENOMEM; 2546 2546 snd_emu10k1_fx8010_info(emu, info);
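Editorial note: the one-word kmalloc-to-kzalloc change matters because the info struct is later copied to userspace; any field or inter-field padding the fill routine does not touch would otherwise leak stale kernel heap bytes. The calloc analogue, with an illustrative struct (not the real emu10k1 layout):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct info {
    int version;
    char name[13];     /* odd size leaves padding after it */
    long internal;     /* suppose the fill routine forgets this one */
};

/* Zero-allocating (calloc here, kzalloc in the kernel) guarantees
 * that fields the fill function misses -- and the padding between
 * fields -- read back as zero instead of stale heap contents when
 * the struct is copied out to userspace. */
static struct info *make_info(void)
{
    struct info *p = calloc(1, sizeof(*p));
    if (!p)
        return NULL;
    p->version = 4;
    strcpy(p->name, "emu10k1");
    /* p->internal deliberately left untouched */
    return p;
}
```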
+63 -23
sound/pci/hda/hda_intel.c
··· 365 365 */ 366 366 #ifdef SUPPORT_VGA_SWITCHEROO 367 367 #define use_vga_switcheroo(chip) ((chip)->use_vga_switcheroo) 368 + #define needs_eld_notify_link(chip) ((chip)->need_eld_notify_link) 368 369 #else 369 370 #define use_vga_switcheroo(chip) 0 371 + #define needs_eld_notify_link(chip) false 370 372 #endif 371 373 372 374 #define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \ ··· 455 453 #endif 456 454 457 455 static int azx_acquire_irq(struct azx *chip, int do_disconnect); 456 + static void set_default_power_save(struct azx *chip); 458 457 459 458 /* 460 459 * initialize the PCI registers ··· 1204 1201 azx_bus(chip)->codec_powered || !chip->running) 1205 1202 return -EBUSY; 1206 1203 1204 + /* ELD notification gets broken when HD-audio bus is off */ 1205 + if (needs_eld_notify_link(hda)) 1206 + return -EBUSY; 1207 + 1207 1208 return 0; 1208 1209 } 1209 1210 ··· 1305 1298 return true; 1306 1299 } 1307 1300 1301 + /* 1302 + * The discrete GPU cannot power down unless the HDA controller runtime 1303 + * suspends, so activate runtime PM on codecs even if power_save == 0. 
1304 + */ 1305 + static void setup_vga_switcheroo_runtime_pm(struct azx *chip) 1306 + { 1307 + struct hda_intel *hda = container_of(chip, struct hda_intel, chip); 1308 + struct hda_codec *codec; 1309 + 1310 + if (hda->use_vga_switcheroo && !hda->need_eld_notify_link) { 1311 + list_for_each_codec(codec, &chip->bus) 1312 + codec->auto_runtime_pm = 1; 1313 + /* reset the power save setup */ 1314 + if (chip->running) 1315 + set_default_power_save(chip); 1316 + } 1317 + } 1318 + 1319 + static void azx_vs_gpu_bound(struct pci_dev *pci, 1320 + enum vga_switcheroo_client_id client_id) 1321 + { 1322 + struct snd_card *card = pci_get_drvdata(pci); 1323 + struct azx *chip = card->private_data; 1324 + struct hda_intel *hda = container_of(chip, struct hda_intel, chip); 1325 + 1326 + if (client_id == VGA_SWITCHEROO_DIS) 1327 + hda->need_eld_notify_link = 0; 1328 + setup_vga_switcheroo_runtime_pm(chip); 1329 + } 1330 + 1308 1331 static void init_vga_switcheroo(struct azx *chip) 1309 1332 { 1310 1333 struct hda_intel *hda = container_of(chip, struct hda_intel, chip); ··· 1343 1306 dev_info(chip->card->dev, 1344 1307 "Handle vga_switcheroo audio client\n"); 1345 1308 hda->use_vga_switcheroo = 1; 1309 + hda->need_eld_notify_link = 1; /* cleared in gpu_bound op */ 1346 1310 chip->driver_caps |= AZX_DCAPS_PM_RUNTIME; 1347 1311 pci_dev_put(p); 1348 1312 } ··· 1352 1314 static const struct vga_switcheroo_client_ops azx_vs_ops = { 1353 1315 .set_gpu_state = azx_vs_set_state, 1354 1316 .can_switch = azx_vs_can_switch, 1317 + .gpu_bound = azx_vs_gpu_bound, 1355 1318 }; 1356 1319 1357 1320 static int register_vga_switcheroo(struct azx *chip) ··· 1378 1339 #define init_vga_switcheroo(chip) /* NOP */ 1379 1340 #define register_vga_switcheroo(chip) 0 1380 1341 #define check_hdmi_disabled(pci) false 1342 + #define setup_vga_switcheroo_runtime_pm(chip) /* NOP */ 1381 1343 #endif /* SUPPORT_VGA_SWITCHER */ 1382 1344 1383 1345 /* ··· 1392 1352 1393 1353 if (azx_has_pm_runtime(chip) && 
chip->running) 1394 1354 pm_runtime_get_noresume(&pci->dev); 1355 + chip->running = 0; 1395 1356 1396 1357 azx_del_card_list(chip); 1397 1358 ··· 2271 2230 }; 2272 2231 #endif /* CONFIG_PM */ 2273 2232 2233 + static void set_default_power_save(struct azx *chip) 2234 + { 2235 + int val = power_save; 2236 + 2237 + #ifdef CONFIG_PM 2238 + if (pm_blacklist) { 2239 + const struct snd_pci_quirk *q; 2240 + 2241 + q = snd_pci_quirk_lookup(chip->pci, power_save_blacklist); 2242 + if (q && val) { 2243 + dev_info(chip->card->dev, "device %04x:%04x is on the power_save blacklist, forcing power_save to 0\n", 2244 + q->subvendor, q->subdevice); 2245 + val = 0; 2246 + } 2247 + } 2248 + #endif /* CONFIG_PM */ 2249 + snd_hda_set_power_save(&chip->bus, val * 1000); 2250 + } 2251 + 2274 2252 /* number of codec slots for each chipset: 0 = default slots (i.e. 4) */ 2275 2253 static unsigned int azx_max_codecs[AZX_NUM_DRIVERS] = { 2276 2254 [AZX_DRIVER_NVIDIA] = 8, ··· 2301 2241 struct hda_intel *hda = container_of(chip, struct hda_intel, chip); 2302 2242 struct hdac_bus *bus = azx_bus(chip); 2303 2243 struct pci_dev *pci = chip->pci; 2304 - struct hda_codec *codec; 2305 2244 int dev = chip->dev_index; 2306 - int val; 2307 2245 int err; 2308 2246 2309 2247 hda->probe_continued = 1; ··· 2380 2322 if (err < 0) 2381 2323 goto out_free; 2382 2324 2325 + setup_vga_switcheroo_runtime_pm(chip); 2326 + 2383 2327 chip->running = 1; 2384 2328 azx_add_card_list(chip); 2385 2329 2386 - val = power_save; 2387 - #ifdef CONFIG_PM 2388 - if (pm_blacklist) { 2389 - const struct snd_pci_quirk *q; 2330 + set_default_power_save(chip); 2390 2331 2391 - q = snd_pci_quirk_lookup(chip->pci, power_save_blacklist); 2392 - if (q && val) { 2393 - dev_info(chip->card->dev, "device %04x:%04x is on the power_save blacklist, forcing power_save to 0\n", 2394 - q->subvendor, q->subdevice); 2395 - val = 0; 2396 - } 2397 - } 2398 - #endif /* CONFIG_PM */ 2399 - /* 2400 - * The discrete GPU cannot power down unless the HDA 
controller runtime 2401 - * suspends, so activate runtime PM on codecs even if power_save == 0. 2402 - */ 2403 - if (use_vga_switcheroo(hda)) 2404 - list_for_each_codec(codec, &chip->bus) 2405 - codec->auto_runtime_pm = 1; 2406 - 2407 - snd_hda_set_power_save(&chip->bus, val * 1000); 2408 2332 if (azx_has_pm_runtime(chip)) 2409 2333 pm_runtime_put_autosuspend(&pci->dev); 2410 2334
+1
sound/pci/hda/hda_intel.h
··· 37 37 38 38 /* vga_switcheroo setup */ 39 39 unsigned int use_vga_switcheroo:1; 40 + unsigned int need_eld_notify_link:1; 40 41 unsigned int vga_switcheroo_registered:1; 41 42 unsigned int init_failed:1; /* delayed init failed */ 42 43
+21
sound/soc/amd/acp-pcm-dma.c
··· 16 16 #include <linux/module.h> 17 17 #include <linux/delay.h> 18 18 #include <linux/io.h> 19 + #include <linux/iopoll.h> 19 20 #include <linux/sizes.h> 20 21 #include <linux/pm_runtime.h> 21 22 ··· 185 184 acp_reg_write(descr_info->xfer_val, acp_mmio, mmACP_SRBM_Targ_Idx_Data); 186 185 } 187 186 187 + static void pre_config_reset(void __iomem *acp_mmio, u16 ch_num) 188 + { 189 + u32 dma_ctrl; 190 + int ret; 191 + 192 + /* clear the reset bit */ 193 + dma_ctrl = acp_reg_read(acp_mmio, mmACP_DMA_CNTL_0 + ch_num); 194 + dma_ctrl &= ~ACP_DMA_CNTL_0__DMAChRst_MASK; 195 + acp_reg_write(dma_ctrl, acp_mmio, mmACP_DMA_CNTL_0 + ch_num); 196 + /* check the reset bit before programming configuration registers */ 197 + ret = readl_poll_timeout(acp_mmio + ((mmACP_DMA_CNTL_0 + ch_num) * 4), 198 + dma_ctrl, 199 + !(dma_ctrl & ACP_DMA_CNTL_0__DMAChRst_MASK), 200 + 100, ACP_DMA_RESET_TIME); 201 + if (ret < 0) 202 + pr_err("Failed to clear reset of channel : %d\n", ch_num); 203 + } 204 + 188 205 /* 189 206 * Initialize the DMA descriptor information for transfer between 190 207 * system memory <-> ACP SRAM ··· 255 236 config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, 256 237 &dmadscr[i]); 257 238 } 239 + pre_config_reset(acp_mmio, ch); 258 240 config_acp_dma_channel(acp_mmio, ch, 259 241 dma_dscr_idx - 1, 260 242 NUM_DSCRS_PER_CHANNEL, ··· 295 275 config_dma_descriptor_in_sram(acp_mmio, dma_dscr_idx, 296 276 &dmadscr[i]); 297 277 } 278 + pre_config_reset(acp_mmio, ch); 298 279 /* Configure the DMA channel with the above descriptore */ 299 280 config_acp_dma_channel(acp_mmio, ch, dma_dscr_idx - 1, 300 281 NUM_DSCRS_PER_CHANNEL,
+2 -2
sound/soc/codecs/cs4265.c
··· 157 157 SOC_SINGLE("Validity Bit Control Switch", CS4265_SPDIF_CTL2, 158 158 3, 1, 0), 159 159 SOC_ENUM("SPDIF Mono/Stereo", spdif_mono_stereo_enum), 160 - SOC_SINGLE("MMTLR Data Switch", 0, 161 - 1, 1, 0), 160 + SOC_SINGLE("MMTLR Data Switch", CS4265_SPDIF_CTL2, 161 + 0, 1, 0), 162 162 SOC_ENUM("Mono Channel Select", spdif_mono_select_enum), 163 163 SND_SOC_BYTES("C Data Buffer", CS4265_C_DATA_BUFF, 24), 164 164 };
+3
sound/soc/codecs/max98373.c
··· 520 520 { 521 521 switch (reg) { 522 522 case MAX98373_R2000_SW_RESET ... MAX98373_R2009_INT_FLAG3: 523 + case MAX98373_R203E_AMP_PATH_GAIN: 523 524 case MAX98373_R2054_MEAS_ADC_PVDD_CH_READBACK: 524 525 case MAX98373_R2055_MEAS_ADC_THERM_CH_READBACK: 525 526 case MAX98373_R20B6_BDE_CUR_STATE_READBACK: ··· 730 729 /* Software Reset */ 731 730 regmap_write(max98373->regmap, 732 731 MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET); 732 + usleep_range(10000, 11000); 733 733 734 734 /* IV default slot configuration */ 735 735 regmap_write(max98373->regmap, ··· 819 817 820 818 regmap_write(max98373->regmap, 821 819 MAX98373_R2000_SW_RESET, MAX98373_SOFT_RESET); 820 + usleep_range(10000, 11000); 822 821 regcache_cache_only(max98373->regmap, false); 823 822 regcache_sync(max98373->regmap); 824 823 return 0;
+4 -4
sound/soc/codecs/rt5514.c
··· 64 64 {RT5514_ANA_CTRL_LDO10, 0x00028604}, 65 65 {RT5514_ANA_CTRL_ADCFED, 0x00000800}, 66 66 {RT5514_ASRC_IN_CTRL1, 0x00000003}, 67 - {RT5514_DOWNFILTER0_CTRL3, 0x10000352}, 68 - {RT5514_DOWNFILTER1_CTRL3, 0x10000352}, 67 + {RT5514_DOWNFILTER0_CTRL3, 0x10000342}, 68 + {RT5514_DOWNFILTER1_CTRL3, 0x10000342}, 69 69 }; 70 70 71 71 static const struct reg_default rt5514_reg[] = { ··· 92 92 {RT5514_ASRC_IN_CTRL1, 0x00000003}, 93 93 {RT5514_DOWNFILTER0_CTRL1, 0x00020c2f}, 94 94 {RT5514_DOWNFILTER0_CTRL2, 0x00020c2f}, 95 - {RT5514_DOWNFILTER0_CTRL3, 0x10000352}, 95 + {RT5514_DOWNFILTER0_CTRL3, 0x10000342}, 96 96 {RT5514_DOWNFILTER1_CTRL1, 0x00020c2f}, 97 97 {RT5514_DOWNFILTER1_CTRL2, 0x00020c2f}, 98 - {RT5514_DOWNFILTER1_CTRL3, 0x10000352}, 98 + {RT5514_DOWNFILTER1_CTRL3, 0x10000342}, 99 99 {RT5514_ANA_CTRL_LDO10, 0x00028604}, 100 100 {RT5514_ANA_CTRL_LDO18_16, 0x02000345}, 101 101 {RT5514_ANA_CTRL_ADC12, 0x0000a2a8},
+4 -4
sound/soc/codecs/rt5682.c
··· 750 750 } 751 751 752 752 static const DECLARE_TLV_DB_SCALE(hp_vol_tlv, -2250, 150, 0); 753 - static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -65625, 375, 0); 754 - static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -17625, 375, 0); 753 + static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -6525, 75, 0); 754 + static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -1725, 75, 0); 755 755 static const DECLARE_TLV_DB_SCALE(adc_bst_tlv, 0, 1200, 0); 756 756 757 757 /* {0, +20, +24, +30, +35, +40, +44, +50, +52} dB */ ··· 1114 1114 1115 1115 /* DAC Digital Volume */ 1116 1116 SOC_DOUBLE_TLV("DAC1 Playback Volume", RT5682_DAC1_DIG_VOL, 1117 - RT5682_L_VOL_SFT, RT5682_R_VOL_SFT, 175, 0, dac_vol_tlv), 1117 + RT5682_L_VOL_SFT + 1, RT5682_R_VOL_SFT + 1, 86, 0, dac_vol_tlv), 1118 1118 1119 1119 /* IN Boost Volume */ 1120 1120 SOC_SINGLE_TLV("CBJ Boost Volume", RT5682_CBJ_BST_CTRL, ··· 1124 1124 SOC_DOUBLE("STO1 ADC Capture Switch", RT5682_STO1_ADC_DIG_VOL, 1125 1125 RT5682_L_MUTE_SFT, RT5682_R_MUTE_SFT, 1, 1), 1126 1126 SOC_DOUBLE_TLV("STO1 ADC Capture Volume", RT5682_STO1_ADC_DIG_VOL, 1127 - RT5682_L_VOL_SFT, RT5682_R_VOL_SFT, 127, 0, adc_vol_tlv), 1127 + RT5682_L_VOL_SFT + 1, RT5682_R_VOL_SFT + 1, 63, 0, adc_vol_tlv), 1128 1128 1129 1129 /* ADC Boost Volume Control */ 1130 1130 SOC_DOUBLE_TLV("STO1 ADC Boost Gain Volume", RT5682_STO1_ADC_BOOST,
+1 -2
sound/soc/codecs/sigmadsp.c
··· 117 117 struct sigmadsp_control *ctrl, void *data) 118 118 { 119 119 /* safeload loads up to 20 bytes in a atomic operation */ 120 - if (ctrl->num_bytes > 4 && ctrl->num_bytes <= 20 && sigmadsp->ops && 121 - sigmadsp->ops->safeload) 120 + if (ctrl->num_bytes <= 20 && sigmadsp->ops && sigmadsp->ops->safeload) 122 121 return sigmadsp->ops->safeload(sigmadsp, ctrl->addr, data, 123 122 ctrl->num_bytes); 124 123 else
+9 -3
sound/soc/codecs/tas6424.c
··· 424 424 TAS6424_FAULT_PVDD_UV | 425 425 TAS6424_FAULT_VBAT_UV; 426 426 427 - if (reg) 427 + if (!reg) { 428 + tas6424->last_fault1 = reg; 428 429 goto check_global_fault2_reg; 430 + } 429 431 430 432 /* 431 433 * Only flag errors once for a given occurrence. This is needed as ··· 463 461 TAS6424_FAULT_OTSD_CH3 | 464 462 TAS6424_FAULT_OTSD_CH4; 465 463 466 - if (!reg) 464 + if (!reg) { 465 + tas6424->last_fault2 = reg; 467 466 goto check_warn_reg; 467 + } 468 468 469 469 if ((reg & TAS6424_FAULT_OTSD) && !(tas6424->last_fault2 & TAS6424_FAULT_OTSD)) 470 470 dev_crit(dev, "experienced a global overtemp shutdown\n"); ··· 501 497 TAS6424_WARN_VDD_OTW_CH3 | 502 498 TAS6424_WARN_VDD_OTW_CH4; 503 499 504 - if (!reg) 500 + if (!reg) { 501 + tas6424->last_warn = reg; 505 502 goto out; 503 + } 506 504 507 505 if ((reg & TAS6424_WARN_VDD_UV) && !(tas6424->last_warn & TAS6424_WARN_VDD_UV)) 508 506 dev_warn(dev, "experienced a VDD under voltage condition\n");
+14 -1
sound/soc/codecs/wm8804-i2c.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/module.h> 15 15 #include <linux/i2c.h> 16 + #include <linux/acpi.h> 16 17 17 18 #include "wm8804.h" 18 19 ··· 41 40 }; 42 41 MODULE_DEVICE_TABLE(i2c, wm8804_i2c_id); 43 42 43 + #if defined(CONFIG_OF) 44 44 static const struct of_device_id wm8804_of_match[] = { 45 45 { .compatible = "wlf,wm8804", }, 46 46 { } 47 47 }; 48 48 MODULE_DEVICE_TABLE(of, wm8804_of_match); 49 + #endif 50 + 51 + #ifdef CONFIG_ACPI 52 + static const struct acpi_device_id wm8804_acpi_match[] = { 53 + { "1AEC8804", 0 }, /* Wolfson PCI ID + part ID */ 54 + { "10138804", 0 }, /* Cirrus Logic PCI ID + part ID */ 55 + { }, 56 + }; 57 + MODULE_DEVICE_TABLE(acpi, wm8804_acpi_match); 58 + #endif 49 59 50 60 static struct i2c_driver wm8804_i2c_driver = { 51 61 .driver = { 52 62 .name = "wm8804", 53 63 .pm = &wm8804_pm, 54 - .of_match_table = wm8804_of_match, 64 + .of_match_table = of_match_ptr(wm8804_of_match), 65 + .acpi_match_table = ACPI_PTR(wm8804_acpi_match), 55 66 }, 56 67 .probe = wm8804_i2c_probe, 57 68 .remove = wm8804_i2c_remove,
+1 -1
sound/soc/codecs/wm9712.c
··· 719 719 720 720 static struct platform_driver wm9712_component_driver = { 721 721 .driver = { 722 - .name = "wm9712-component", 722 + .name = "wm9712-codec", 723 723 }, 724 724 725 725 .probe = wm9712_probe,
+26
sound/soc/intel/boards/bytcr_rt5640.c
··· 575 575 BYT_RT5640_MONO_SPEAKER | 576 576 BYT_RT5640_MCLK_EN), 577 577 }, 578 + { /* Linx Linx7 tablet */ 579 + .matches = { 580 + DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LINX"), 581 + DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "LINX7"), 582 + }, 583 + .driver_data = (void *)(BYTCR_INPUT_DEFAULTS | 584 + BYT_RT5640_MONO_SPEAKER | 585 + BYT_RT5640_JD_NOT_INV | 586 + BYT_RT5640_SSP0_AIF1 | 587 + BYT_RT5640_MCLK_EN), 588 + }, 578 589 { /* MSI S100 tablet */ 579 590 .matches = { 580 591 DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Micro-Star International Co., Ltd."), ··· 611 600 BYT_RT5640_JD_NOT_INV | 612 601 BYT_RT5640_DIFF_MIC | 613 602 BYT_RT5640_SSP0_AIF1 | 603 + BYT_RT5640_MCLK_EN), 604 + }, 605 + { /* Onda v975w */ 606 + .matches = { 607 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"), 608 + DMI_EXACT_MATCH(DMI_BOARD_NAME, "Aptio CRB"), 609 + /* The above are too generic, also match BIOS info */ 610 + DMI_EXACT_MATCH(DMI_BIOS_VERSION, "5.6.5"), 611 + DMI_EXACT_MATCH(DMI_BIOS_DATE, "07/25/2014"), 612 + }, 613 + .driver_data = (void *)(BYT_RT5640_IN1_MAP | 614 + BYT_RT5640_JD_SRC_JD2_IN4N | 615 + BYT_RT5640_OVCD_TH_2000UA | 616 + BYT_RT5640_OVCD_SF_0P75 | 617 + BYT_RT5640_DIFF_MIC | 614 618 BYT_RT5640_MCLK_EN), 615 619 }, 616 620 { /* Pipo W4 */
+1 -1
sound/soc/intel/skylake/skl.c
··· 834 834 return -ENXIO; 835 835 } 836 836 837 - skl_init_chip(bus, true); 837 + snd_hdac_bus_reset_link(bus, true); 838 838 839 839 snd_hdac_bus_parse_capabilities(bus); 840 840
+3 -1
sound/soc/qcom/qdsp6/q6routing.c
··· 960 960 { 961 961 int i; 962 962 963 - for (i = 0; i < MAX_SESSIONS; i++) 963 + for (i = 0; i < MAX_SESSIONS; i++) { 964 964 routing_data->sessions[i].port_id = -1; 965 + routing_data->sessions[i].fedai_id = -1; 966 + } 965 967 966 968 return 0; 967 969 }
+5
sound/soc/sh/rcar/adg.c
··· 462 462 goto rsnd_adg_get_clkout_end; 463 463 464 464 req_size = prop->length / sizeof(u32); 465 + if (req_size > REQ_SIZE) { 466 + dev_err(dev, 467 + "too many clock-frequency, use top %d\n", REQ_SIZE); 468 + req_size = REQ_SIZE; 469 + } 465 470 466 471 of_property_read_u32_array(np, "clock-frequency", req_rate, req_size); 467 472 req_48kHz_rate = 0;
+20 -1
sound/soc/sh/rcar/core.c
··· 478 478 (func_call && (mod)->ops->fn) ? #fn : ""); \ 479 479 if (func_call && (mod)->ops->fn) \ 480 480 tmp = (mod)->ops->fn(mod, io, param); \ 481 - if (tmp) \ 481 + if (tmp && (tmp != -EPROBE_DEFER)) \ 482 482 dev_err(dev, "%s[%d] : %s error %d\n", \ 483 483 rsnd_mod_name(mod), rsnd_mod_id(mod), \ 484 484 #fn, tmp); \ ··· 958 958 rsnd_dai_stream_quit(io); 959 959 } 960 960 961 + static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream, 962 + struct snd_soc_dai *dai) 963 + { 964 + struct rsnd_priv *priv = rsnd_dai_to_priv(dai); 965 + struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai); 966 + struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream); 967 + 968 + return rsnd_dai_call(prepare, io, priv); 969 + } 970 + 961 971 static const struct snd_soc_dai_ops rsnd_soc_dai_ops = { 962 972 .startup = rsnd_soc_dai_startup, 963 973 .shutdown = rsnd_soc_dai_shutdown, 964 974 .trigger = rsnd_soc_dai_trigger, 965 975 .set_fmt = rsnd_soc_dai_set_fmt, 966 976 .set_tdm_slot = rsnd_soc_set_dai_tdm_slot, 977 + .prepare = rsnd_soc_dai_prepare, 967 978 }; 968 979 969 980 void rsnd_parse_connect_common(struct rsnd_dai *rdai, ··· 1560 1549 rsnd_dai_call(remove, &rdai->playback, priv); 1561 1550 rsnd_dai_call(remove, &rdai->capture, priv); 1562 1551 } 1552 + 1553 + /* 1554 + * adg is very special mod which can't use rsnd_dai_call(remove), 1555 + * and it registers ADG clock on probe. 1556 + * It should be unregister if probe failed. 1557 + * Mainly it is assuming -EPROBE_DEFER case 1558 + */ 1559 + rsnd_adg_remove(priv); 1563 1560 1564 1561 return ret; 1565 1562 }
+4
sound/soc/sh/rcar/dma.c
··· 241 241 /* try to get DMAEngine channel */ 242 242 chan = rsnd_dmaen_request_channel(io, mod_from, mod_to); 243 243 if (IS_ERR_OR_NULL(chan)) { 244 + /* Let's follow when -EPROBE_DEFER case */ 245 + if (PTR_ERR(chan) == -EPROBE_DEFER) 246 + return PTR_ERR(chan); 247 + 244 248 /* 245 249 * DMA failed. try to PIO mode 246 250 * see
+7
sound/soc/sh/rcar/rsnd.h
··· 280 280 int (*nolock_stop)(struct rsnd_mod *mod, 281 281 struct rsnd_dai_stream *io, 282 282 struct rsnd_priv *priv); 283 + int (*prepare)(struct rsnd_mod *mod, 284 + struct rsnd_dai_stream *io, 285 + struct rsnd_priv *priv); 283 286 }; 284 287 285 288 struct rsnd_dai_stream; ··· 312 309 * H 0: fallback 313 310 * H 0: hw_params 314 311 * H 0: pointer 312 + * H 0: prepare 315 313 */ 316 314 #define __rsnd_mod_shift_nolock_start 0 317 315 #define __rsnd_mod_shift_nolock_stop 0 ··· 327 323 #define __rsnd_mod_shift_fallback 28 /* always called */ 328 324 #define __rsnd_mod_shift_hw_params 28 /* always called */ 329 325 #define __rsnd_mod_shift_pointer 28 /* always called */ 326 + #define __rsnd_mod_shift_prepare 28 /* always called */ 330 327 331 328 #define __rsnd_mod_add_probe 0 332 329 #define __rsnd_mod_add_remove 0 ··· 342 337 #define __rsnd_mod_add_fallback 0 343 338 #define __rsnd_mod_add_hw_params 0 344 339 #define __rsnd_mod_add_pointer 0 340 + #define __rsnd_mod_add_prepare 0 345 341 346 342 #define __rsnd_mod_call_probe 0 347 343 #define __rsnd_mod_call_remove 0 ··· 357 351 #define __rsnd_mod_call_pointer 0 358 352 #define __rsnd_mod_call_nolock_start 0 359 353 #define __rsnd_mod_call_nolock_stop 1 354 + #define __rsnd_mod_call_prepare 0 360 355 361 356 #define rsnd_mod_to_priv(mod) ((mod)->priv) 362 357 #define rsnd_mod_name(mod) ((mod)->ops->name)
+10 -6
sound/soc/sh/rcar/ssi.c
··· 283 283 if (rsnd_ssi_is_multi_slave(mod, io)) 284 284 return 0; 285 285 286 - if (ssi->usrcnt > 1) { 286 + if (ssi->rate) { 287 287 if (ssi->rate != rate) { 288 288 dev_err(dev, "SSI parent/child should use same rate\n"); 289 289 return -EINVAL; ··· 434 434 struct rsnd_priv *priv) 435 435 { 436 436 struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod); 437 - int ret; 438 437 439 438 if (!rsnd_ssi_is_run_mods(mod, io)) 440 439 return 0; ··· 441 442 ssi->usrcnt++; 442 443 443 444 rsnd_mod_power_on(mod); 444 - 445 - ret = rsnd_ssi_master_clk_start(mod, io); 446 - if (ret < 0) 447 - return ret; 448 445 449 446 rsnd_ssi_config_init(mod, io); 450 447 ··· 847 852 return 0; 848 853 } 849 854 855 + static int rsnd_ssi_prepare(struct rsnd_mod *mod, 856 + struct rsnd_dai_stream *io, 857 + struct rsnd_priv *priv) 858 + { 859 + return rsnd_ssi_master_clk_start(mod, io); 860 + } 861 + 850 862 static struct rsnd_mod_ops rsnd_ssi_pio_ops = { 851 863 .name = SSI_NAME, 852 864 .probe = rsnd_ssi_common_probe, ··· 866 864 .pointer = rsnd_ssi_pio_pointer, 867 865 .pcm_new = rsnd_ssi_pcm_new, 868 866 .hw_params = rsnd_ssi_hw_params, 867 + .prepare = rsnd_ssi_prepare, 869 868 }; 870 869 871 870 static int rsnd_ssi_dma_probe(struct rsnd_mod *mod, ··· 943 940 .pcm_new = rsnd_ssi_pcm_new, 944 941 .fallback = rsnd_ssi_fallback, 945 942 .hw_params = rsnd_ssi_hw_params, 943 + .prepare = rsnd_ssi_prepare, 946 944 }; 947 945 948 946 int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)
+2 -2
sound/soc/soc-core.c
··· 1447 1447 sink = codec_dai->playback_widget; 1448 1448 source = cpu_dai->capture_widget; 1449 1449 if (sink && source) { 1450 - ret = snd_soc_dapm_new_pcm(card, dai_link->params, 1450 + ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params, 1451 1451 dai_link->num_params, 1452 1452 source, sink); 1453 1453 if (ret != 0) { ··· 1460 1460 sink = cpu_dai->playback_widget; 1461 1461 source = codec_dai->capture_widget; 1462 1462 if (sink && source) { 1463 - ret = snd_soc_dapm_new_pcm(card, dai_link->params, 1463 + ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params, 1464 1464 dai_link->num_params, 1465 1465 source, sink); 1466 1466 if (ret != 0) {
+4
sound/soc/soc-dapm.c
··· 3652 3652 { 3653 3653 struct snd_soc_dapm_path *source_p, *sink_p; 3654 3654 struct snd_soc_dai *source, *sink; 3655 + struct snd_soc_pcm_runtime *rtd = w->priv; 3655 3656 const struct snd_soc_pcm_stream *config = w->params + w->params_select; 3656 3657 struct snd_pcm_substream substream; 3657 3658 struct snd_pcm_hw_params *params = NULL; ··· 3712 3711 goto out; 3713 3712 } 3714 3713 substream.runtime = runtime; 3714 + substream.private_data = rtd; 3715 3715 3716 3716 switch (event) { 3717 3717 case SND_SOC_DAPM_PRE_PMU: ··· 3897 3895 } 3898 3896 3899 3897 int snd_soc_dapm_new_pcm(struct snd_soc_card *card, 3898 + struct snd_soc_pcm_runtime *rtd, 3900 3899 const struct snd_soc_pcm_stream *params, 3901 3900 unsigned int num_params, 3902 3901 struct snd_soc_dapm_widget *source, ··· 3966 3963 3967 3964 w->params = params; 3968 3965 w->num_params = num_params; 3966 + w->priv = rtd; 3969 3967 3970 3968 ret = snd_soc_dapm_add_path(&card->dapm, source, w, NULL, NULL); 3971 3969 if (ret)
+1 -1
tools/include/tools/libc_compat.h
··· 1 - // SPDX-License-Identifier: GPL-2.0+ 1 + // SPDX-License-Identifier: (LGPL-2.0+ OR BSD-2-Clause) 2 2 /* Copyright (C) 2018 Netronome Systems, Inc. */ 3 3 4 4 #ifndef __TOOLS_LIBC_COMPAT_H
+1 -1
tools/lib/bpf/Build
··· 1 - libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o 1 + libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o
+10 -10
tools/lib/bpf/libbpf.c
··· 50 50 #include "libbpf.h" 51 51 #include "bpf.h" 52 52 #include "btf.h" 53 + #include "str_error.h" 53 54 54 55 #ifndef EM_BPF 55 56 #define EM_BPF 247 ··· 470 469 obj->efile.fd = open(obj->path, O_RDONLY); 471 470 if (obj->efile.fd < 0) { 472 471 char errmsg[STRERR_BUFSIZE]; 473 - char *cp = strerror_r(errno, errmsg, sizeof(errmsg)); 472 + char *cp = str_error(errno, errmsg, sizeof(errmsg)); 474 473 475 474 pr_warning("failed to open %s: %s\n", obj->path, cp); 476 475 return -errno; ··· 811 810 data->d_size, name, idx); 812 811 if (err) { 813 812 char errmsg[STRERR_BUFSIZE]; 814 - char *cp = strerror_r(-err, errmsg, 815 - sizeof(errmsg)); 813 + char *cp = str_error(-err, errmsg, sizeof(errmsg)); 816 814 817 815 pr_warning("failed to alloc program %s (%s): %s", 818 816 name, obj->path, cp); ··· 1140 1140 1141 1141 *pfd = bpf_create_map_xattr(&create_attr); 1142 1142 if (*pfd < 0 && create_attr.btf_key_type_id) { 1143 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1143 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1144 1144 pr_warning("Error in bpf_create_map_xattr(%s):%s(%d). 
Retrying without BTF.\n", 1145 1145 map->name, cp, errno); 1146 1146 create_attr.btf_fd = 0; ··· 1155 1155 size_t j; 1156 1156 1157 1157 err = *pfd; 1158 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1158 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1159 1159 pr_warning("failed to create map (name: '%s'): %s\n", 1160 1160 map->name, cp); 1161 1161 for (j = 0; j < i; j++) ··· 1339 1339 } 1340 1340 1341 1341 ret = -LIBBPF_ERRNO__LOAD; 1342 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1342 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1343 1343 pr_warning("load bpf program failed: %s\n", cp); 1344 1344 1345 1345 if (log_buf && log_buf[0] != '\0') { ··· 1654 1654 1655 1655 dir = dirname(dname); 1656 1656 if (statfs(dir, &st_fs)) { 1657 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1657 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1658 1658 pr_warning("failed to statfs %s: %s\n", dir, cp); 1659 1659 err = -errno; 1660 1660 } ··· 1690 1690 } 1691 1691 1692 1692 if (bpf_obj_pin(prog->instances.fds[instance], path)) { 1693 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1693 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1694 1694 pr_warning("failed to pin program: %s\n", cp); 1695 1695 return -errno; 1696 1696 } ··· 1708 1708 err = -errno; 1709 1709 1710 1710 if (err) { 1711 - cp = strerror_r(-err, errmsg, sizeof(errmsg)); 1711 + cp = str_error(-err, errmsg, sizeof(errmsg)); 1712 1712 pr_warning("failed to mkdir %s: %s\n", path, cp); 1713 1713 } 1714 1714 return err; ··· 1770 1770 } 1771 1771 1772 1772 if (bpf_obj_pin(map->fd, path)) { 1773 - cp = strerror_r(errno, errmsg, sizeof(errmsg)); 1773 + cp = str_error(errno, errmsg, sizeof(errmsg)); 1774 1774 pr_warning("failed to pin map: %s\n", cp); 1775 1775 return -errno; 1776 1776 }
+18
tools/lib/bpf/str_error.c
··· 1 + // SPDX-License-Identifier: LGPL-2.1 2 + #undef _GNU_SOURCE 3 + #include <string.h> 4 + #include <stdio.h> 5 + #include "str_error.h" 6 + 7 + /* 8 + * Wrapper to allow for building in non-GNU systems such as Alpine Linux's musl 9 + * libc, while checking strerror_r() return to avoid having to check this in 10 + * all places calling it. 11 + */ 12 + char *str_error(int err, char *dst, int len) 13 + { 14 + int ret = strerror_r(err, dst, len); 15 + if (ret) 16 + snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret); 17 + return dst; 18 + }
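The new str_error.c exists because `strerror_r()` has two incompatible variants: the XSI one returns an `int` status, while the GNU one (glibc with `_GNU_SOURCE`) returns a `char *` that may not even point into the caller's buffer. The `#undef _GNU_SOURCE` pins the XSI form, which is also what musl provides, so libbpf builds on Alpine. A user-space copy of the wrapper, assuming the XSI variant is in effect:

```c
#undef _GNU_SOURCE	/* select the XSI strerror_r() (int return) */
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Same shape as libbpf's str_error(): always hand back a printable
 * string, folding any strerror_r() failure into the buffer itself. */
static char *str_error(int err, char *dst, int len)
{
	int ret = strerror_r(err, dst, len);

	if (ret)
		snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);
	return dst;
}
```

Callers can then pass the result straight to `pr_warning()`-style format strings without checking for failure at every call site.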
+6
tools/lib/bpf/str_error.h
··· 1 + // SPDX-License-Identifier: LGPL-2.1 2 + #ifndef BPF_STR_ERROR 3 + #define BPF_STR_ERROR 4 + 5 + char *str_error(int err, char *dst, int len); 6 + #endif // BPF_STR_ERROR
+1 -1
tools/perf/Documentation/Makefile
··· 280 280 mv $@+ $@ 281 281 282 282 ifdef USE_ASCIIDOCTOR 283 - $(OUTPUT)%.1 $(OUTPUT)%.5 $(OUTPUT)%.7 : $(OUTPUT)%.txt 283 + $(OUTPUT)%.1 $(OUTPUT)%.5 $(OUTPUT)%.7 : %.txt 284 284 $(QUIET_ASCIIDOC)$(RM) $@+ $@ && \ 285 285 $(ASCIIDOC) -b manpage -d manpage \ 286 286 $(ASCIIDOC_EXTRA) -aperf_version=$(PERF_VERSION) -o $@+ $< && \
+1 -1
tools/testing/selftests/android/Makefile
··· 6 6 7 7 include ../lib.mk 8 8 9 - all: 9 + all: khdr 10 10 @for DIR in $(SUBDIRS); do \ 11 11 BUILD_TARGET=$(OUTPUT)/$$DIR; \ 12 12 mkdir $$BUILD_TARGET -p; \
+2
tools/testing/selftests/android/ion/Makefile
··· 10 10 11 11 TEST_PROGS := ion_test.sh 12 12 13 + KSFT_KHDR_INSTALL := 1 14 + top_srcdir = ../../../../.. 13 15 include ../../lib.mk 14 16 15 17 $(OUTPUT)/ionapp_export: ionapp_export.c ipcsocket.c ionutils.c
tools/testing/selftests/android/ion/config → tools/testing/selftests/android/config
+7 -3
tools/testing/selftests/bpf/test_maps.c
··· 580 580 /* Test update without programs */ 581 581 for (i = 0; i < 6; i++) { 582 582 err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY); 583 - if (err) { 583 + if (i < 2 && !err) { 584 + printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n", 585 + i, sfd[i]); 586 + goto out_sockmap; 587 + } else if (i >= 2 && err) { 584 588 printf("Failed noprog update sockmap '%i:%i'\n", 585 589 i, sfd[i]); 586 590 goto out_sockmap; ··· 745 741 } 746 742 747 743 /* Test map update elem afterwards fd lives in fd and map_fd */ 748 - for (i = 0; i < 6; i++) { 744 + for (i = 2; i < 6; i++) { 749 745 err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY); 750 746 if (err) { 751 747 printf("Failed map_fd_rx update sockmap %i '%i:%i'\n", ··· 849 845 } 850 846 851 847 /* Delete the elems without programs */ 852 - for (i = 0; i < 6; i++) { 848 + for (i = 2; i < 6; i++) { 853 849 err = bpf_map_delete_elem(fd, &i); 854 850 if (err) { 855 851 printf("Failed delete sockmap %i '%i:%i'\n",
+1
tools/testing/selftests/cgroup/.gitignore
··· 1 1 test_memcontrol 2 + test_core
+35 -3
tools/testing/selftests/cgroup/cgroup_util.c
··· 89 89 int cg_read_strcmp(const char *cgroup, const char *control, 90 90 const char *expected) 91 91 { 92 - size_t size = strlen(expected) + 1; 92 + size_t size; 93 93 char *buf; 94 + int ret; 95 + 96 + /* Handle the case of comparing against empty string */ 97 + if (!expected) 98 + size = 32; 99 + else 100 + size = strlen(expected) + 1; 94 101 95 102 buf = malloc(size); 96 103 if (!buf) 97 104 return -1; 98 105 99 - if (cg_read(cgroup, control, buf, size)) 106 + if (cg_read(cgroup, control, buf, size)) { 107 + free(buf); 100 108 return -1; 109 + } 101 110 102 - return strcmp(expected, buf); 111 + ret = strcmp(expected, buf); 112 + free(buf); 113 + return ret; 103 114 } 104 115 105 116 int cg_read_strstr(const char *cgroup, const char *control, const char *needle) ··· 347 336 cnt++; 348 337 349 338 return cnt > 1; 339 + } 340 + 341 + int set_oom_adj_score(int pid, int score) 342 + { 343 + char path[PATH_MAX]; 344 + int fd, len; 345 + 346 + sprintf(path, "/proc/%d/oom_score_adj", pid); 347 + 348 + fd = open(path, O_WRONLY | O_APPEND); 349 + if (fd < 0) 350 + return fd; 351 + 352 + len = dprintf(fd, "%d", score); 353 + if (len < 0) { 354 + close(fd); 355 + return len; 356 + } 357 + 358 + close(fd); 359 + return 0; 350 360 }
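The `cg_read_strcmp()` hunk above fixes two independent problems: sizing the buffer from `strlen(expected)` broke comparisons against an empty string (a 1-byte buffer can never reveal that the file actually has content), and the early error return leaked `buf`. The corrected shape, sketched stand-alone with the cgroup file read stubbed out (`fake_read()` and the 32-byte floor are illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for cg_read(): copy a canned file value, failing when the
 * destination is too small. Purely illustrative. */
static int fake_read(const char *value, char *buf, size_t size)
{
	if (strlen(value) + 1 > size)
		return -1;
	strcpy(buf, value);
	return 0;
}

/* Corrected compare: a roomy buffer when expected is empty (so extra
 * file content is noticed), and free() on every exit path. */
static int read_strcmp(const char *value, const char *expected)
{
	size_t size;
	char *buf;
	int ret;

	if (!expected)
		expected = "";	/* treat NULL as "expect empty content" */
	size = *expected ? strlen(expected) + 1 : 32;
	buf = malloc(size);
	if (!buf)
		return -1;
	if (fake_read(value, buf, size)) {
		free(buf);	/* the leak fixed above */
		return -1;
	}
	ret = strcmp(expected, buf);
	free(buf);
	return ret;
}
```

The empty-string case matters below: the new OOM-group tests poll `cgroup.procs` against `""` to detect that every process in the cgroup has been killed.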
+1
tools/testing/selftests/cgroup/cgroup_util.h
··· 40 40 extern int alloc_pagecache(int fd, size_t size); 41 41 extern int alloc_anon(const char *cgroup, void *arg); 42 42 extern int is_swap_enabled(void); 43 + extern int set_oom_adj_score(int pid, int score);
+205
tools/testing/selftests/cgroup/test_memcontrol.c
··· 2 2 #define _GNU_SOURCE 3 3 4 4 #include <linux/limits.h> 5 + #include <linux/oom.h> 5 6 #include <fcntl.h> 6 7 #include <stdio.h> 7 8 #include <stdlib.h> ··· 201 200 sleep(1); 202 201 203 202 return 0; 203 + } 204 + 205 + static int alloc_anon_noexit(const char *cgroup, void *arg) 206 + { 207 + int ppid = getppid(); 208 + 209 + if (alloc_anon(cgroup, arg)) 210 + return -1; 211 + 212 + while (getppid() == ppid) 213 + sleep(1); 214 + 215 + return 0; 216 + } 217 + 218 + /* 219 + * Wait until processes are killed asynchronously by the OOM killer 220 + * If we exceed a timeout, fail. 221 + */ 222 + static int cg_test_proc_killed(const char *cgroup) 223 + { 224 + int limit; 225 + 226 + for (limit = 10; limit > 0; limit--) { 227 + if (cg_read_strcmp(cgroup, "cgroup.procs", "") == 0) 228 + return 0; 229 + 230 + usleep(100000); 231 + } 232 + return -1; 204 233 } 205 234 206 235 /* ··· 995 964 return ret; 996 965 } 997 966 967 + /* 968 + * This test disables swapping and tries to allocate anonymous memory 969 + * up to OOM with memory.group.oom set. Then it checks that all 970 + * processes in the leaf (but not the parent) were killed. 
971 + */ 972 + static int test_memcg_oom_group_leaf_events(const char *root) 973 + { 974 + int ret = KSFT_FAIL; 975 + char *parent, *child; 976 + 977 + parent = cg_name(root, "memcg_test_0"); 978 + child = cg_name(root, "memcg_test_0/memcg_test_1"); 979 + 980 + if (!parent || !child) 981 + goto cleanup; 982 + 983 + if (cg_create(parent)) 984 + goto cleanup; 985 + 986 + if (cg_create(child)) 987 + goto cleanup; 988 + 989 + if (cg_write(parent, "cgroup.subtree_control", "+memory")) 990 + goto cleanup; 991 + 992 + if (cg_write(child, "memory.max", "50M")) 993 + goto cleanup; 994 + 995 + if (cg_write(child, "memory.swap.max", "0")) 996 + goto cleanup; 997 + 998 + if (cg_write(child, "memory.oom.group", "1")) 999 + goto cleanup; 1000 + 1001 + cg_run_nowait(parent, alloc_anon_noexit, (void *) MB(60)); 1002 + cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1)); 1003 + cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1)); 1004 + if (!cg_run(child, alloc_anon, (void *)MB(100))) 1005 + goto cleanup; 1006 + 1007 + if (cg_test_proc_killed(child)) 1008 + goto cleanup; 1009 + 1010 + if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0) 1011 + goto cleanup; 1012 + 1013 + if (cg_read_key_long(parent, "memory.events", "oom_kill ") != 0) 1014 + goto cleanup; 1015 + 1016 + ret = KSFT_PASS; 1017 + 1018 + cleanup: 1019 + if (child) 1020 + cg_destroy(child); 1021 + if (parent) 1022 + cg_destroy(parent); 1023 + free(child); 1024 + free(parent); 1025 + 1026 + return ret; 1027 + } 1028 + 1029 + /* 1030 + * This test disables swapping and tries to allocate anonymous memory 1031 + * up to OOM with memory.group.oom set. Then it checks that all 1032 + * processes in the parent and leaf were killed. 
1033 + */ 1034 + static int test_memcg_oom_group_parent_events(const char *root) 1035 + { 1036 + int ret = KSFT_FAIL; 1037 + char *parent, *child; 1038 + 1039 + parent = cg_name(root, "memcg_test_0"); 1040 + child = cg_name(root, "memcg_test_0/memcg_test_1"); 1041 + 1042 + if (!parent || !child) 1043 + goto cleanup; 1044 + 1045 + if (cg_create(parent)) 1046 + goto cleanup; 1047 + 1048 + if (cg_create(child)) 1049 + goto cleanup; 1050 + 1051 + if (cg_write(parent, "memory.max", "80M")) 1052 + goto cleanup; 1053 + 1054 + if (cg_write(parent, "memory.swap.max", "0")) 1055 + goto cleanup; 1056 + 1057 + if (cg_write(parent, "memory.oom.group", "1")) 1058 + goto cleanup; 1059 + 1060 + cg_run_nowait(parent, alloc_anon_noexit, (void *) MB(60)); 1061 + cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1)); 1062 + cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1)); 1063 + 1064 + if (!cg_run(child, alloc_anon, (void *)MB(100))) 1065 + goto cleanup; 1066 + 1067 + if (cg_test_proc_killed(child)) 1068 + goto cleanup; 1069 + if (cg_test_proc_killed(parent)) 1070 + goto cleanup; 1071 + 1072 + ret = KSFT_PASS; 1073 + 1074 + cleanup: 1075 + if (child) 1076 + cg_destroy(child); 1077 + if (parent) 1078 + cg_destroy(parent); 1079 + free(child); 1080 + free(parent); 1081 + 1082 + return ret; 1083 + } 1084 + 1085 + /* 1086 + * This test disables swapping and tries to allocate anonymous memory 1087 + * up to OOM with memory.group.oom set. 
Then it checks that all 1088 + * processes were killed except those set with OOM_SCORE_ADJ_MIN 1089 + */ 1090 + static int test_memcg_oom_group_score_events(const char *root) 1091 + { 1092 + int ret = KSFT_FAIL; 1093 + char *memcg; 1094 + int safe_pid; 1095 + 1096 + memcg = cg_name(root, "memcg_test_0"); 1097 + 1098 + if (!memcg) 1099 + goto cleanup; 1100 + 1101 + if (cg_create(memcg)) 1102 + goto cleanup; 1103 + 1104 + if (cg_write(memcg, "memory.max", "50M")) 1105 + goto cleanup; 1106 + 1107 + if (cg_write(memcg, "memory.swap.max", "0")) 1108 + goto cleanup; 1109 + 1110 + if (cg_write(memcg, "memory.oom.group", "1")) 1111 + goto cleanup; 1112 + 1113 + safe_pid = cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(1)); 1114 + if (set_oom_adj_score(safe_pid, OOM_SCORE_ADJ_MIN)) 1115 + goto cleanup; 1116 + 1117 + cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(1)); 1118 + if (!cg_run(memcg, alloc_anon, (void *)MB(100))) 1119 + goto cleanup; 1120 + 1121 + if (cg_read_key_long(memcg, "memory.events", "oom_kill ") != 3) 1122 + goto cleanup; 1123 + 1124 + if (kill(safe_pid, SIGKILL)) 1125 + goto cleanup; 1126 + 1127 + ret = KSFT_PASS; 1128 + 1129 + cleanup: 1130 + if (memcg) 1131 + cg_destroy(memcg); 1132 + free(memcg); 1133 + 1134 + return ret; 1135 + } 1136 + 1137 + 998 1138 #define T(x) { x, #x } 999 1139 struct memcg_test { 1000 1140 int (*fn)(const char *root); ··· 1180 978 T(test_memcg_oom_events), 1181 979 T(test_memcg_swap_max), 1182 980 T(test_memcg_sock), 981 + T(test_memcg_oom_group_leaf_events), 982 + T(test_memcg_oom_group_parent_events), 983 + T(test_memcg_oom_group_score_events), 1183 984 }; 1184 985 #undef T 1185 986
+1
tools/testing/selftests/efivarfs/config
··· 1 + CONFIG_EFIVAR_FS=y
+1
tools/testing/selftests/futex/functional/Makefile
··· 18 18
19 19 TEST_PROGS := run.sh
20 20
21 + top_srcdir = ../../../../..
21 22 include ../../lib.mk
22 23
23 24 $(TEST_GEN_FILES): $(HEADERS)
+2 -5
tools/testing/selftests/gpio/Makefile
··· 21 21 CFLAGS += -O2 -g -std=gnu99 -Wall -I../../../../usr/include/
22 22 LDLIBS += -lmount -I/usr/include/libmount
23 23
24 - $(BINARIES): ../../../gpio/gpio-utils.o ../../../../usr/include/linux/gpio.h
24 + $(BINARIES):| khdr
25 + $(BINARIES): ../../../gpio/gpio-utils.o
25 26
26 27 ../../../gpio/gpio-utils.o:
27 28     make ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) -C ../../../gpio
28 -
29 - ../../../../usr/include/linux/gpio.h:
30 -     make -C ../../../.. headers_install INSTALL_HDR_PATH=$(shell pwd)/../../../../usr/
31 -
-1
tools/testing/selftests/kselftest.h
··· 19 19 #define KSFT_FAIL  1
20 20 #define KSFT_XFAIL 2
21 21 #define KSFT_XPASS 3
22 - /* Treat skip as pass */
23 22 #define KSFT_SKIP  4
24 23
25 24 /* counters */
+1
tools/testing/selftests/kvm/.gitignore
··· 1 1 cr4_cpuid_sync_test
2 + platform_info_test
2 3 set_sregs_test
3 4 sync_regs_test
4 5 vmx_tsc_adjust_test
+5 -7
tools/testing/selftests/kvm/Makefile
··· 6 6 LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c
7 7 LIBKVM_x86_64 = lib/x86.c lib/vmx.c
8 8
9 - TEST_GEN_PROGS_x86_64 = set_sregs_test
9 + TEST_GEN_PROGS_x86_64 = platform_info_test
10 + TEST_GEN_PROGS_x86_64 += set_sregs_test
10 11 TEST_GEN_PROGS_x86_64 += sync_regs_test
11 12 TEST_GEN_PROGS_x86_64 += vmx_tsc_adjust_test
12 13 TEST_GEN_PROGS_x86_64 += cr4_cpuid_sync_test
··· 21 20 LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
22 21 LINUX_TOOL_INCLUDE = $(top_srcdir)tools/include
23 22 CFLAGS += -O2 -g -std=gnu99 -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -I..
24 - LDFLAGS += -lpthread
23 + LDFLAGS += -pthread
25 24
26 25 # After inclusion, $(OUTPUT) is defined and
27 26 # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
··· 38 37 $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
39 38     $(AR) crs $@ $^
40 39
41 - $(LINUX_HDR_PATH):
42 -     make -C $(top_srcdir) headers_install
43 -
44 - all: $(STATIC_LIBS) $(LINUX_HDR_PATH)
40 + all: $(STATIC_LIBS)
45 41 $(TEST_GEN_PROGS): $(STATIC_LIBS)
46 - $(TEST_GEN_PROGS) $(LIBKVM_OBJ): | $(LINUX_HDR_PATH)
42 + $(STATIC_LIBS):| khdr
+4
tools/testing/selftests/kvm/include/kvm_util.h
··· 50 50 };
51 51
52 52 int kvm_check_cap(long cap);
53 + int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
53 54
54 55 struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
55 56 void kvm_vm_free(struct kvm_vm *vmp);
··· 109 108         struct kvm_vcpu_events *events);
110 109 void vcpu_events_set(struct kvm_vm *vm, uint32_t vcpuid,
111 110         struct kvm_vcpu_events *events);
111 + uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index);
112 + void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
113 +     uint64_t msr_value);
112 114
113 115 const char *exit_reason_str(unsigned int exit_reason);
114 116
+89
tools/testing/selftests/kvm/lib/kvm_util.c
··· 63 63     return ret;
64 64 }
65 65
66 + /* VM Enable Capability
67 +  *
68 +  * Input Args:
69 +  *   vm - Virtual Machine
70 +  *   cap - Capability
71 +  *
72 +  * Output Args: None
73 +  *
74 +  * Return: On success, 0. On failure a TEST_ASSERT failure is produced.
75 +  *
76 +  * Enables a capability (KVM_CAP_*) on the VM.
77 +  */
78 + int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap)
79 + {
80 +     int ret;
81 +
82 +     ret = ioctl(vm->fd, KVM_ENABLE_CAP, cap);
83 +     TEST_ASSERT(ret == 0, "KVM_ENABLE_CAP IOCTL failed,\n"
84 +         "  rc: %i errno: %i", ret, errno);
85 +
86 +     return ret;
87 + }
88 +
66 89 static void vm_open(struct kvm_vm *vm, int perm)
67 90 {
68 91     vm->kvm_fd = open(KVM_DEV_PATH, perm);
··· 1241 1218     ret = ioctl(vcpu->fd, KVM_SET_VCPU_EVENTS, events);
1242 1219     TEST_ASSERT(ret == 0, "KVM_SET_VCPU_EVENTS, failed, rc: %i errno: %i",
1243 1220         ret, errno);
1221 + }
1222 +
1223 + /* VCPU Get MSR
1224 +  *
1225 +  * Input Args:
1226 +  *   vm - Virtual Machine
1227 +  *   vcpuid - VCPU ID
1228 +  *   msr_index - Index of MSR
1229 +  *
1230 +  * Output Args: None
1231 +  *
1232 +  * Return: On success, value of the MSR. On failure a TEST_ASSERT is produced.
1233 +  *
1234 +  * Get value of MSR for VCPU.
1235 +  */
1236 + uint64_t vcpu_get_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index)
1237 + {
1238 +     struct vcpu *vcpu = vcpu_find(vm, vcpuid);
1239 +     struct {
1240 +         struct kvm_msrs header;
1241 +         struct kvm_msr_entry entry;
1242 +     } buffer = {};
1243 +     int r;
1244 +
1245 +     TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
1246 +     buffer.header.nmsrs = 1;
1247 +     buffer.entry.index = msr_index;
1248 +     r = ioctl(vcpu->fd, KVM_GET_MSRS, &buffer.header);
1249 +     TEST_ASSERT(r == 1, "KVM_GET_MSRS IOCTL failed,\n"
1250 +         "  rc: %i errno: %i", r, errno);
1251 +
1252 +     return buffer.entry.data;
1253 + }
1254 +
1255 + /* VCPU Set MSR
1256 +  *
1257 +  * Input Args:
1258 +  *   vm - Virtual Machine
1259 +  *   vcpuid - VCPU ID
1260 +  *   msr_index - Index of MSR
1261 +  *   msr_value - New value of MSR
1262 +  *
1263 +  * Output Args: None
1264 +  *
1265 +  * Return: On success, nothing. On failure a TEST_ASSERT is produced.
1266 +  *
1267 +  * Set value of MSR for VCPU.
1268 +  */
1269 + void vcpu_set_msr(struct kvm_vm *vm, uint32_t vcpuid, uint64_t msr_index,
1270 +     uint64_t msr_value)
1271 + {
1272 +     struct vcpu *vcpu = vcpu_find(vm, vcpuid);
1273 +     struct {
1274 +         struct kvm_msrs header;
1275 +         struct kvm_msr_entry entry;
1276 +     } buffer = {};
1277 +     int r;
1278 +
1279 +     TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
1280 +     memset(&buffer, 0, sizeof(buffer));
1281 +     buffer.header.nmsrs = 1;
1282 +     buffer.entry.index = msr_index;
1283 +     buffer.entry.data = msr_value;
1284 +     r = ioctl(vcpu->fd, KVM_SET_MSRS, &buffer.header);
1285 +     TEST_ASSERT(r == 1, "KVM_SET_MSRS IOCTL failed,\n"
1286 +         "  rc: %i errno: %i", r, errno);
1244 1287 }
1245 1288
1246 1289 /* VM VCPU Args Set
+110
tools/testing/selftests/kvm/platform_info_test.c
··· 1 + // SPDX-License-Identifier: GPL-2.0
2 + /*
3 +  * Test for x86 KVM_CAP_MSR_PLATFORM_INFO
4 +  *
5 +  * Copyright (C) 2018, Google LLC.
6 +  *
7 +  * This work is licensed under the terms of the GNU GPL, version 2.
8 +  *
9 +  * Verifies expected behavior of controlling guest access to
10 +  * MSR_PLATFORM_INFO.
11 +  */
12 +
13 + #define _GNU_SOURCE /* for program_invocation_short_name */
14 + #include <fcntl.h>
15 + #include <stdio.h>
16 + #include <stdlib.h>
17 + #include <string.h>
18 + #include <sys/ioctl.h>
19 +
20 + #include "test_util.h"
21 + #include "kvm_util.h"
22 + #include "x86.h"
23 +
24 + #define VCPU_ID 0
25 + #define MSR_PLATFORM_INFO_MAX_TURBO_RATIO 0xff00
26 +
27 + static void guest_code(void)
28 + {
29 +     uint64_t msr_platform_info;
30 +
31 +     for (;;) {
32 +         msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
33 +         GUEST_SYNC(msr_platform_info);
34 +         asm volatile ("inc %r11");
35 +     }
36 + }
37 +
38 + static void set_msr_platform_info_enabled(struct kvm_vm *vm, bool enable)
39 + {
40 +     struct kvm_enable_cap cap = {};
41 +
42 +     cap.cap = KVM_CAP_MSR_PLATFORM_INFO;
43 +     cap.flags = 0;
44 +     cap.args[0] = (int)enable;
45 +     vm_enable_cap(vm, &cap);
46 + }
47 +
48 + static void test_msr_platform_info_enabled(struct kvm_vm *vm)
49 + {
50 +     struct kvm_run *run = vcpu_state(vm, VCPU_ID);
51 +     struct guest_args args;
52 +
53 +     set_msr_platform_info_enabled(vm, true);
54 +     vcpu_run(vm, VCPU_ID);
55 +     TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
56 +         "Exit_reason other than KVM_EXIT_IO: %u (%s),\n",
57 +         run->exit_reason,
58 +         exit_reason_str(run->exit_reason));
59 +     guest_args_read(vm, VCPU_ID, &args);
60 +     TEST_ASSERT(args.port == GUEST_PORT_SYNC,
61 +         "Received IO from port other than PORT_HOST_SYNC: %u\n",
62 +         run->io.port);
63 +     TEST_ASSERT((args.arg1 & MSR_PLATFORM_INFO_MAX_TURBO_RATIO) ==
64 +         MSR_PLATFORM_INFO_MAX_TURBO_RATIO,
65 +         "Expected MSR_PLATFORM_INFO to have max turbo ratio mask: %i.",
66 +         MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
67 + }
68 +
69 + static void test_msr_platform_info_disabled(struct kvm_vm *vm)
70 + {
71 +     struct kvm_run *run = vcpu_state(vm, VCPU_ID);
72 +
73 +     set_msr_platform_info_enabled(vm, false);
74 +     vcpu_run(vm, VCPU_ID);
75 +     TEST_ASSERT(run->exit_reason == KVM_EXIT_SHUTDOWN,
76 +         "Exit_reason other than KVM_EXIT_SHUTDOWN: %u (%s)\n",
77 +         run->exit_reason,
78 +         exit_reason_str(run->exit_reason));
79 + }
80 +
81 + int main(int argc, char *argv[])
82 + {
83 +     struct kvm_vm *vm;
84 +     struct kvm_run *state;
85 +     int rv;
86 +     uint64_t msr_platform_info;
87 +
88 +     /* Tell stdout not to buffer its content */
89 +     setbuf(stdout, NULL);
90 +
91 +     rv = kvm_check_cap(KVM_CAP_MSR_PLATFORM_INFO);
92 +     if (!rv) {
93 +         fprintf(stderr,
94 +             "KVM_CAP_MSR_PLATFORM_INFO not supported, skip test\n");
95 +         exit(KSFT_SKIP);
96 +     }
97 +
98 +     vm = vm_create_default(VCPU_ID, 0, guest_code);
99 +
100 +     msr_platform_info = vcpu_get_msr(vm, VCPU_ID, MSR_PLATFORM_INFO);
101 +     vcpu_set_msr(vm, VCPU_ID, MSR_PLATFORM_INFO,
102 +         msr_platform_info | MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
103 +     test_msr_platform_info_disabled(vm);
104 +     test_msr_platform_info_enabled(vm);
105 +     vcpu_set_msr(vm, VCPU_ID, MSR_PLATFORM_INFO, msr_platform_info);
106 +
107 +     kvm_vm_free(vm);
108 +
109 +     return 0;
110 + }
+12
tools/testing/selftests/lib.mk
··· 16 16 TEST_GEN_PROGS_EXTENDED := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_PROGS_EXTENDED))
17 17 TEST_GEN_FILES := $(patsubst %,$(OUTPUT)/%,$(TEST_GEN_FILES))
18 18
19 + top_srcdir ?= ../../../..
20 + include $(top_srcdir)/scripts/subarch.include
21 + ARCH ?= $(SUBARCH)
22 +
19 23 all: $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES)
24 +
25 + .PHONY: khdr
26 + khdr:
27 +     make ARCH=$(ARCH) -C $(top_srcdir) headers_install
28 +
29 + ifdef KSFT_KHDR_INSTALL
30 + $(TEST_GEN_PROGS) $(TEST_GEN_PROGS_EXTENDED) $(TEST_GEN_FILES):| khdr
31 + endif
20 32
21 33 .ONESHELL:
22 34 define RUN_TEST_PRINT_RESULT
+1
tools/testing/selftests/memory-hotplug/config
··· 2 2 CONFIG_MEMORY_HOTPLUG_SPARSE=y
3 3 CONFIG_NOTIFIER_ERROR_INJECTION=y
4 4 CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
5 + CONFIG_MEMORY_HOTREMOVE=y
+1
tools/testing/selftests/net/Makefile
··· 15 15 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
16 16 TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls
17 17
18 + KSFT_KHDR_INSTALL := 1
18 19 include ../lib.mk
19 20
20 21 $(OUTPUT)/reuseport_bpf_numa: LDFLAGS += -lnuma
+2 -2
tools/testing/selftests/net/pmtu.sh
··· 178 178
179 179 cleanup() {
180 180     [ ${cleanup_done} -eq 1 ] && return
181 -     ip netns del ${NS_A} 2 > /dev/null
182 -     ip netns del ${NS_B} 2 > /dev/null
181 +     ip netns del ${NS_A} 2> /dev/null
182 +     ip netns del ${NS_B} 2> /dev/null
183 183     cleanup_done=1
184 184 }
185 185
+49
tools/testing/selftests/net/tls.c
··· 502 502     EXPECT_EQ(memcmp(test_str, buf, send_len), 0);
503 503 }
504 504
505 + TEST_F(tls, recv_peek_multiple_records)
506 + {
507 +     char const *test_str = "test_read_peek_mult_recs";
508 +     char const *test_str_first = "test_read_peek";
509 +     char const *test_str_second = "_mult_recs";
510 +     int len;
511 +     char buf[64];
512 +
513 +     len = strlen(test_str_first);
514 +     EXPECT_EQ(send(self->fd, test_str_first, len, 0), len);
515 +
516 +     len = strlen(test_str_second) + 1;
517 +     EXPECT_EQ(send(self->fd, test_str_second, len, 0), len);
518 +
519 +     len = sizeof(buf);
520 +     memset(buf, 0, len);
521 +     EXPECT_NE(recv(self->cfd, buf, len, MSG_PEEK), -1);
522 +
523 +     /* MSG_PEEK can only peek into the current record. */
524 +     len = strlen(test_str_first) + 1;
525 +     EXPECT_EQ(memcmp(test_str_first, buf, len), 0);
526 +
527 +     len = sizeof(buf);
528 +     memset(buf, 0, len);
529 +     EXPECT_NE(recv(self->cfd, buf, len, 0), -1);
530 +
531 +     /* Non-MSG_PEEK will advance strparser (and therefore record)
532 +      * however.
533 +      */
534 +     len = strlen(test_str) + 1;
535 +     EXPECT_EQ(memcmp(test_str, buf, len), 0);
536 +
537 +     /* MSG_MORE will hold current record open, so later MSG_PEEK
538 +      * will see everything.
539 +      */
540 +     len = strlen(test_str_first);
541 +     EXPECT_EQ(send(self->fd, test_str_first, len, MSG_MORE), len);
542 +
543 +     len = strlen(test_str_second) + 1;
544 +     EXPECT_EQ(send(self->fd, test_str_second, len, 0), len);
545 +
546 +     len = sizeof(buf);
547 +     memset(buf, 0, len);
548 +     EXPECT_NE(recv(self->cfd, buf, len, MSG_PEEK), -1);
549 +
550 +     len = strlen(test_str) + 1;
551 +     EXPECT_EQ(memcmp(test_str, buf, len), 0);
552 + }
553 +
505 554 TEST_F(tls, pollin)
506 555 {
507 556     char const *test_str = "test_poll";
+1
tools/testing/selftests/networking/timestamping/Makefile
··· 5 5
6 6 all: $(TEST_PROGS)
7 7
8 + top_srcdir = ../../../../..
8 9 include ../../lib.mk
9 10
10 11 clean:
+1
tools/testing/selftests/powerpc/alignment/Makefile
··· 1 1 TEST_GEN_PROGS := copy_first_unaligned alignment_handler
2 2
3 + top_srcdir = ../../../../..
3 4 include ../../lib.mk
4 5
5 6 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+1
tools/testing/selftests/powerpc/benchmarks/Makefile
··· 4 4
5 5 CFLAGS += -O2
6 6
7 + top_srcdir = ../../../../..
7 8 include ../../lib.mk
8 9
9 10 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/cache_shape/Makefile
··· 5 5
6 6 $(TEST_PROGS): ../harness.c ../utils.c
7 7
8 + top_srcdir = ../../../../..
8 9 include ../../lib.mk
9 10
10 11 clean:
+1
tools/testing/selftests/powerpc/copyloops/Makefile
··· 17 17
18 18 EXTRA_SOURCES := validate.c ../harness.c stubs.S
19 19
20 + top_srcdir = ../../../../..
20 21 include ../../lib.mk
21 22
22 23 $(OUTPUT)/copyuser_64_t%: copyuser_64.S $(EXTRA_SOURCES)
+1
tools/testing/selftests/powerpc/dscr/Makefile
··· 3 3     dscr_inherit_test dscr_inherit_exec_test dscr_sysfs_test \
4 4     dscr_sysfs_thread_test
5 5
6 + top_srcdir = ../../../../..
6 7 include ../../lib.mk
7 8
8 9 $(OUTPUT)/dscr_default_test: LDLIBS += -lpthread
+1
tools/testing/selftests/powerpc/math/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0
2 2 TEST_GEN_PROGS := fpu_syscall fpu_preempt fpu_signal vmx_syscall vmx_preempt vmx_signal vsx_preempt
3 3
4 + top_srcdir = ../../../../..
4 5 include ../../lib.mk
5 6
6 7 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/mm/Makefile
··· 5 5 TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors
6 6 TEST_GEN_FILES := tempfile
7 7
8 + top_srcdir = ../../../../..
8 9 include ../../lib.mk
9 10
10 11 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/pmu/Makefile
··· 5 5 TEST_GEN_PROGS := count_instructions l3_bank_test per_event_excludes
6 6 EXTRA_SOURCES := ../harness.c event.c lib.c ../utils.c
7 7
8 + top_srcdir = ../../../../..
8 9 include ../../lib.mk
9 10
10 11 all: $(TEST_GEN_PROGS) ebb
+1
tools/testing/selftests/powerpc/pmu/ebb/Makefile
··· 17 17     lost_exception_test no_handler_test \
18 18     cycles_with_mmcr2_test
19 19
20 + top_srcdir = ../../../../../..
20 21 include ../../../lib.mk
21 22
22 23 $(TEST_GEN_PROGS): ../../harness.c ../../utils.c ../event.c ../lib.c \
+1
tools/testing/selftests/powerpc/primitives/Makefile
··· 2 2
3 3 TEST_GEN_PROGS := load_unaligned_zeropad
4 4
5 + top_srcdir = ../../../../..
5 6 include ../../lib.mk
6 7
7 8 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/ptrace/Makefile
··· 4 4     ptrace-tm-spd-vsx ptrace-tm-spr ptrace-hwbreak ptrace-pkey core-pkey \
5 5     perf-hwbreak
6 6
7 + top_srcdir = ../../../../..
7 8 include ../../lib.mk
8 9
9 10 all: $(TEST_PROGS)
+1
tools/testing/selftests/powerpc/signal/Makefile
··· 8 8 CFLAGS += -maltivec
9 9 signal_tm: CFLAGS += -mhtm
10 10
11 + top_srcdir = ../../../../..
11 12 include ../../lib.mk
12 13
13 14 clean:
+1
tools/testing/selftests/powerpc/stringloops/Makefile
··· 29 29
30 30 ASFLAGS = $(CFLAGS)
31 31
32 + top_srcdir = ../../../../..
32 33 include ../../lib.mk
33 34
34 35 $(TEST_GEN_PROGS): $(EXTRA_SOURCES)
+1
tools/testing/selftests/powerpc/switch_endian/Makefile
··· 5 5
6 6 EXTRA_CLEAN = $(OUTPUT)/*.o $(OUTPUT)/check-reversed.S
7 7
8 + top_srcdir = ../../../../..
8 9 include ../../lib.mk
9 10
10 11 $(OUTPUT)/switch_endian_test: $(OUTPUT)/check-reversed.S
+1
tools/testing/selftests/powerpc/syscalls/Makefile
··· 2 2
3 3 CFLAGS += -I../../../../../usr/include
4 4
5 + top_srcdir = ../../../../..
5 6 include ../../lib.mk
6 7
7 8 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/tm/Makefile
··· 6 6     tm-vmxcopy tm-fork tm-tar tm-tmspr tm-vmx-unavail tm-unavailable tm-trap \
7 7     $(SIGNAL_CONTEXT_CHK_TESTS) tm-sigreturn
8 8
9 + top_srcdir = ../../../../..
9 10 include ../../lib.mk
10 11
11 12 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+1
tools/testing/selftests/powerpc/vphn/Makefile
··· 2 2
3 3 CFLAGS += -m64
4 4
5 + top_srcdir = ../../../../..
5 6 include ../../lib.mk
6 7
7 8 $(TEST_GEN_PROGS): ../harness.c
-4
tools/testing/selftests/vm/Makefile
··· 26 26
27 27 include ../lib.mk
28 28
29 - $(OUTPUT)/userfaultfd: ../../../../usr/include/linux/kernel.h
30 29 $(OUTPUT)/userfaultfd: LDLIBS += -lpthread
31 30
32 31 $(OUTPUT)/mlock-random-test: LDLIBS += -lcap
33 -
34 - ../../../../usr/include/linux/kernel.h:
35 -     make -C ../../../.. headers_install