···
-Device-Tree bindings for input/gpio_keys.c keyboard driver
+Device-Tree bindings for input/keyboard/gpio_keys.c keyboard driver
 
 Required properties:
     - compatible = "gpio-keys";
+1
Documentation/devicetree/bindings/net/macb.txt
···
  Use "cdns,pc302-gem" for Picochip picoXcell pc302 and later devices based on
  the Cadence GEM, or the generic form: "cdns,gem".
  Use "atmel,sama5d2-gem" for the GEM IP (10/100) available on Atmel sama5d2 SoCs.
+ Use "atmel,sama5d3-macb" for the 10/100Mbit IP available on Atmel sama5d3 SoCs.
  Use "atmel,sama5d3-gem" for the Gigabit IP available on Atmel sama5d3 SoCs.
  Use "atmel,sama5d4-gem" for the GEM IP (10/100) available on Atmel sama5d4 SoCs.
  Use "cdns,zynq-gem" Xilinx Zynq-7xxx SoC.
···
 Architectures: s390
 Parameters: none
 Returns: 0 on success, -EINVAL if hpage module parameter was not set
-         or cmma is enabled
+         or cmma is enabled, or the VM has the KVM_VM_S390_UCONTROL
+         flag set
 
 With this capability the KVM support for memory backing with 1m pages
 through hugetlbfs can be enabled for a VM. After the capability is
···
 
 While it is generally possible to create a huge page backed VM without
 this capability, the VM will not be able to run.
+
+7.14 KVM_CAP_MSR_PLATFORM_INFO
+
+Architectures: x86
+Parameters: args[0] whether feature should be enabled or not
+
+With this capability, a guest may read the MSR_PLATFORM_INFO MSR. Otherwise,
+a #GP would be raised when the guest tries to access. Currently, this
+capability does not enable write permissions of this MSR for the guest.
 
 8. Other capabilities.
 ----------------------
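For reference, KVM_CAP_MSR_PLATFORM_INFO documented above is a per-VM capability that userspace turns on through the usual KVM_ENABLE_CAP ioctl with args[0] set to 1. The sketch below is illustrative only and is not part of the patch; vm_fd is assumed to be a VM file descriptor obtained from KVM_CREATE_VM.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Illustrative only: vm_fd is assumed to come from ioctl(kvm_fd, KVM_CREATE_VM, 0). */
static int enable_msr_platform_info(int vm_fd)
{
    struct kvm_enable_cap cap = {
        .cap  = KVM_CAP_MSR_PLATFORM_INFO,
        .args = { 1 },  /* args[0]: non-zero allows the guest to read MSR_PLATFORM_INFO */
    };

    /* Returns 0 on success, -1 with errno set otherwise. */
    return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}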
+26-10
MAINTAINERS
···
 S:	Maintained
 F:	drivers/media/dvb-frontends/mn88473*
 
-PCI DRIVER FOR MOBIVEIL PCIE IP
-M:	Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
-L:	linux-pci@vger.kernel.org
-S:	Supported
-F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
-F:	drivers/pci/controller/pcie-mobiveil.c
-
 MODULE SUPPORT
 M:	Jessica Yu <jeyu@kernel.org>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jeyu/linux.git modules-next
···
 M:	Ksenija Stanojevic <ksenija.stanojevic@gmail.com>
 S:	Odd Fixes
 F:	Documentation/auxdisplay/lcd-panel-cgram.txt
-F:	drivers/misc/panel.c
+F:	drivers/auxdisplay/panel.c
 
 PARALLEL PORT SUBSYSTEM
 M:	Sudip Mukherjee <sudipm.mukherjee@gmail.com>
···
 F:	include/linux/switchtec.h
 F:	drivers/ntb/hw/mscc/
 
+PCI DRIVER FOR MOBIVEIL PCIE IP
+M:	Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
+L:	linux-pci@vger.kernel.org
+S:	Supported
+F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
+F:	drivers/pci/controller/pcie-mobiveil.c
+
 PCI DRIVER FOR MVEBU (Marvell Armada 370 and Armada XP SOC support)
 M:	Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
 M:	Jason Cooper <jason@lakedaemon.net>
···
 
 PCI ENHANCED ERROR HANDLING (EEH) FOR POWERPC
 M:	Russell Currey <ruscur@russell.cc>
+M:	Sam Bobroff <sbobroff@linux.ibm.com>
+M:	Oliver O'Halloran <oohall@gmail.com>
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Supported
+F:	Documentation/PCI/pci-error-recovery.txt
+F:	drivers/pci/pcie/aer.c
+F:	drivers/pci/pcie/dpc.c
+F:	drivers/pci/pcie/err.c
 F:	Documentation/powerpc/eeh-pci-error-recovery.txt
 F:	arch/powerpc/kernel/eeh*.c
 F:	arch/powerpc/platforms/*/eeh*.c
···
 
 RDT - RESOURCE ALLOCATION
 M:	Fenghua Yu <fenghua.yu@intel.com>
+M:	Reinette Chatre <reinette.chatre@intel.com>
 L:	linux-kernel@vger.kernel.org
 S:	Supported
 F:	arch/x86/kernel/cpu/intel_rdt*
···
 F:	Documentation/devicetree/bindings/i2c/i2c-synquacer.txt
 
 SOCIONEXT UNIPHIER SOUND DRIVER
-M:	Katsuhiro Suzuki <suzuki.katsuhiro@socionext.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
-S:	Maintained
+S:	Orphan
 F:	sound/soc/uniphier/
 
 SOEKRIS NET48XX LED SUPPORT
···
 X86 ARCHITECTURE (32-BIT AND 64-BIT)
 M:	Thomas Gleixner <tglx@linutronix.de>
 M:	Ingo Molnar <mingo@redhat.com>
+M:	Borislav Petkov <bp@alien8.de>
 R:	"H. Peter Anvin" <hpa@zytor.com>
 M:	x86@kernel.org
 L:	linux-kernel@vger.kernel.org
···
 M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/*
+
+X86 MM
+M:	Dave Hansen <dave.hansen@linux.intel.com>
+M:	Andy Lutomirski <luto@kernel.org>
+M:	Peter Zijlstra <peterz@infradead.org>
+L:	linux-kernel@vger.kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/mm
+S:	Maintained
+F:	arch/x86/mm/
 
 X86 PLATFORM DRIVERS
 M:	Darren Hart <dvhart@infradead.org>
+2-14
Makefile
···
 VERSION = 4
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Merciless Moray
 
 # *DOCUMENTATION*
···
 KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
 export VERSION PATCHLEVEL SUBLEVEL KERNELRELEASE KERNELVERSION
 
-# SUBARCH tells the usermode build what the underlying arch is. That is set
-# first, and if a usermode build is happening, the "ARCH=um" on the command
-# line overrides the setting of ARCH below. If a native build is happening,
-# then ARCH is assigned, getting whatever value it gets normally, and
-# SUBARCH is subsequently ignored.
-
-SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
-				  -e s/sun4u/sparc64/ \
-				  -e s/arm.*/arm/ -e s/sa110/arm/ \
-				  -e s/s390x/s390/ -e s/parisc64/parisc/ \
-				  -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
-				  -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
-				  -e s/riscv.*/riscv/)
+include scripts/subarch.include
 
 # Cross compiling and selecting different set of gcc/bin-utils
 # ---------------------------------------------------------------------------
···
 extern int __init tce_iommu_bus_notifier_init(void);
 extern long iommu_tce_xchg(struct iommu_table *tbl, unsigned long entry,
         unsigned long *hpa, enum dma_data_direction *direction);
-extern long iommu_tce_xchg_rm(struct iommu_table *tbl, unsigned long entry,
-        unsigned long *hpa, enum dma_data_direction *direction);
 #else
 static inline void iommu_register_group(struct iommu_table_group *table_group,
         int pci_domain_number,
+1
arch/powerpc/include/asm/mmu_context.h
···
         unsigned long ua, unsigned int pageshift, unsigned long *hpa);
 extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem,
         unsigned long ua, unsigned int pageshift, unsigned long *hpa);
+extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua);
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
+1
arch/powerpc/include/asm/setup.h
···
 
 extern unsigned int rtas_data;
 extern unsigned long long memory_limit;
+extern bool init_mem_is_free;
 extern unsigned long klimit;
 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
 
···
 }
 EXPORT_SYMBOL_GPL(iommu_tce_xchg);
 
-#ifdef CONFIG_PPC_BOOK3S_64
-long iommu_tce_xchg_rm(struct iommu_table *tbl, unsigned long entry,
-        unsigned long *hpa, enum dma_data_direction *direction)
-{
-    long ret;
-
-    ret = tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
-
-    if (!ret && ((*direction == DMA_FROM_DEVICE) ||
-            (*direction == DMA_BIDIRECTIONAL))) {
-        struct page *pg = realmode_pfn_to_page(*hpa >> PAGE_SHIFT);
-
-        if (likely(pg)) {
-            SetPageDirty(pg);
-        } else {
-            tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
-            ret = -EFAULT;
-        }
-    }
-
-    return ret;
-}
-EXPORT_SYMBOL_GPL(iommu_tce_xchg_rm);
-#endif
-
 int iommu_take_ownership(struct iommu_table *tbl)
 {
     unsigned long flags, i, sz = (tbl->it_size + 7) >> 3;
+17-3
arch/powerpc/kernel/tm.S
···
     std	r1, PACATMSCRATCH(r13)
     ld	r1, PACAR1(r13)
 
-    /* Store the PPR in r11 and reset to decent value */
     std	r11, GPR11(r1)			/* Temporary stash */
+
+    /*
+     * Move the saved user r1 to the kernel stack in case PACATMSCRATCH is
+     * clobbered by an exception once we turn on MSR_RI below.
+     */
+    ld	r11, PACATMSCRATCH(r13)
+    std	r11, GPR1(r1)
+
+    /*
+     * Store r13 away so we can free up the scratch SPR for the SLB fault
+     * handler (needed once we start accessing the thread_struct).
+     */
+    GET_SCRATCH0(r11)
+    std	r11, GPR13(r1)
 
     /* Reset MSR RI so we can take SLB faults again */
     li	r11, MSR_RI
     mtmsrd	r11, 1
 
+    /* Store the PPR in r11 and reset to decent value */
     mfspr	r11, SPRN_PPR
     HMT_MEDIUM
 
···
     SAVE_GPR(8, r7)				/* user r8 */
     SAVE_GPR(9, r7)				/* user r9 */
     SAVE_GPR(10, r7)			/* user r10 */
-    ld	r3, PACATMSCRATCH(r13)		/* user r1 */
+    ld	r3, GPR1(r1)			/* user r1 */
     ld	r4, GPR7(r1)			/* user r7 */
     ld	r5, GPR11(r1)			/* user r11 */
     ld	r6, GPR12(r1)			/* user r12 */
-    GET_SCRATCH0(8)				/* user r13 */
+    ld	r8, GPR13(r1)			/* user r13 */
     std	r3, GPR1(r7)
     std	r4, GPR7(r7)
     std	r5, GPR11(r7)
+37-54
arch/powerpc/kvm/book3s_64_mmu_radix.c
···
            unsigned long ea, unsigned long dsisr)
 {
     struct kvm *kvm = vcpu->kvm;
-    unsigned long mmu_seq, pte_size;
-    unsigned long gpa, gfn, hva, pfn;
+    unsigned long mmu_seq;
+    unsigned long gpa, gfn, hva;
     struct kvm_memory_slot *memslot;
     struct page *page = NULL;
     long ret;
···
      */
     hva = gfn_to_hva_memslot(memslot, gfn);
     if (upgrade_p && __get_user_pages_fast(hva, 1, 1, &page) == 1) {
-        pfn = page_to_pfn(page);
         upgrade_write = true;
     } else {
+        unsigned long pfn;
+
         /* Call KVM generic code to do the slow-path check */
         pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
                        writing, upgrade_p);
···
         }
     }
 
-    /* See if we can insert a 1GB or 2MB large PTE here */
-    level = 0;
-    if (page && PageCompound(page)) {
-        pte_size = PAGE_SIZE << compound_order(compound_head(page));
-        if (pte_size >= PUD_SIZE &&
-            (gpa & (PUD_SIZE - PAGE_SIZE)) ==
-            (hva & (PUD_SIZE - PAGE_SIZE))) {
-            level = 2;
-            pfn &= ~((PUD_SIZE >> PAGE_SHIFT) - 1);
-        } else if (pte_size >= PMD_SIZE &&
-               (gpa & (PMD_SIZE - PAGE_SIZE)) ==
-               (hva & (PMD_SIZE - PAGE_SIZE))) {
-            level = 1;
-            pfn &= ~((PMD_SIZE >> PAGE_SHIFT) - 1);
+    /*
+     * Read the PTE from the process' radix tree and use that
+     * so we get the shift and attribute bits.
+     */
+    local_irq_disable();
+    ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
+    pte = *ptep;
+    local_irq_enable();
+
+    /* Get pte level from shift/size */
+    if (shift == PUD_SHIFT &&
+        (gpa & (PUD_SIZE - PAGE_SIZE)) ==
+        (hva & (PUD_SIZE - PAGE_SIZE))) {
+        level = 2;
+    } else if (shift == PMD_SHIFT &&
+           (gpa & (PMD_SIZE - PAGE_SIZE)) ==
+           (hva & (PMD_SIZE - PAGE_SIZE))) {
+        level = 1;
+    } else {
+        level = 0;
+        if (shift > PAGE_SHIFT) {
+            /*
+             * If the pte maps more than one page, bring over
+             * bits from the virtual address to get the real
+             * address of the specific single page we want.
+             */
+            unsigned long rpnmask = (1ul << shift) - PAGE_SIZE;
+            pte = __pte(pte_val(pte) | (hva & rpnmask));
         }
     }
 
-    /*
-     * Compute the PTE value that we need to insert.
-     */
-    if (page) {
-        pgflags = _PAGE_READ | _PAGE_EXEC | _PAGE_PRESENT | _PAGE_PTE |
-            _PAGE_ACCESSED;
-        if (writing || upgrade_write)
-            pgflags |= _PAGE_WRITE | _PAGE_DIRTY;
-        pte = pfn_pte(pfn, __pgprot(pgflags));
+    pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
+    if (writing || upgrade_write) {
+        if (pte_val(pte) & _PAGE_WRITE)
+            pte = __pte(pte_val(pte) | _PAGE_DIRTY);
     } else {
-        /*
-         * Read the PTE from the process' radix tree and use that
-         * so we get the attribute bits.
-         */
-        local_irq_disable();
-        ptep = __find_linux_pte(vcpu->arch.pgdir, hva, NULL, &shift);
-        pte = *ptep;
-        local_irq_enable();
-        if (shift == PUD_SHIFT &&
-            (gpa & (PUD_SIZE - PAGE_SIZE)) ==
-            (hva & (PUD_SIZE - PAGE_SIZE))) {
-            level = 2;
-        } else if (shift == PMD_SHIFT &&
-               (gpa & (PMD_SIZE - PAGE_SIZE)) ==
-               (hva & (PMD_SIZE - PAGE_SIZE))) {
-            level = 1;
-        } else if (shift && shift != PAGE_SHIFT) {
-            /* Adjust PFN */
-            unsigned long mask = (1ul << shift) - PAGE_SIZE;
-            pte = __pte(pte_val(pte) | (hva & mask));
-        }
-        pte = __pte(pte_val(pte) | _PAGE_EXEC | _PAGE_ACCESSED);
-        if (writing || upgrade_write) {
-            if (pte_val(pte) & _PAGE_WRITE)
-                pte = __pte(pte_val(pte) | _PAGE_DIRTY);
-        } else {
-            pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
-        }
+        pte = __pte(pte_val(pte) & ~(_PAGE_WRITE | _PAGE_DIRTY));
     }
 
     /* Allocate space in the tree and write the PTE */
+31-8
arch/powerpc/kvm/book3s_64_vio_hv.c
···
 EXPORT_SYMBOL_GPL(kvmppc_gpa_to_ua);
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-static void kvmppc_rm_clear_tce(struct iommu_table *tbl, unsigned long entry)
+static long iommu_tce_xchg_rm(struct mm_struct *mm, struct iommu_table *tbl,
+        unsigned long entry, unsigned long *hpa,
+        enum dma_data_direction *direction)
+{
+    long ret;
+
+    ret = tbl->it_ops->exchange_rm(tbl, entry, hpa, direction);
+
+    if (!ret && ((*direction == DMA_FROM_DEVICE) ||
+            (*direction == DMA_BIDIRECTIONAL))) {
+        __be64 *pua = IOMMU_TABLE_USERSPACE_ENTRY_RM(tbl, entry);
+        /*
+         * kvmppc_rm_tce_iommu_do_map() updates the UA cache after
+         * calling this so we still get here a valid UA.
+         */
+        if (pua && *pua)
+            mm_iommu_ua_mark_dirty_rm(mm, be64_to_cpu(*pua));
+    }
+
+    return ret;
+}
+
+static void kvmppc_rm_clear_tce(struct kvm *kvm, struct iommu_table *tbl,
+        unsigned long entry)
 {
     unsigned long hpa = 0;
     enum dma_data_direction dir = DMA_NONE;
 
-    iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+    iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
 }
 
 static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
···
     unsigned long hpa = 0;
     long ret;
 
-    if (iommu_tce_xchg_rm(tbl, entry, &hpa, &dir))
+    if (iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir))
         /*
          * real mode xchg can fail if struct page crosses
          * a page boundary
···
 
     ret = kvmppc_rm_tce_iommu_mapped_dec(kvm, tbl, entry);
     if (ret)
-        iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+        iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
 
     return ret;
 }
···
     if (WARN_ON_ONCE_RM(mm_iommu_mapped_inc(mem)))
         return H_CLOSED;
 
-    ret = iommu_tce_xchg_rm(tbl, entry, &hpa, &dir);
+    ret = iommu_tce_xchg_rm(kvm->mm, tbl, entry, &hpa, &dir);
     if (ret) {
         mm_iommu_mapped_dec(mem);
         /*
···
             return ret;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
 
     kvmppc_tce_put(stt, entry, tce);
···
             goto unlock_exit;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
 
     kvmppc_tce_put(stt, entry + i, tce);
···
             return ret;
 
         WARN_ON_ONCE_RM(1);
-        kvmppc_rm_clear_tce(stit->tbl, entry);
+        kvmppc_rm_clear_tce(vcpu->kvm, stit->tbl, entry);
     }
     }
 
···
 {
     int err;
 
+    /* Make sure we aren't patching a freed init section */
+    if (init_mem_is_free && init_section_contains(exec_addr, 4)) {
+        pr_debug("Skipping init section patching addr: 0x%px\n", exec_addr);
+        return 0;
+    }
+
     __put_user_size(instr, patch_addr, 4, err);
     if (err)
         return err;
-49
arch/powerpc/mm/init_64.c
···
 {
 }
 
-/*
- * We do not have access to the sparsemem vmemmap, so we fallback to
- * walking the list of sparsemem blocks which we already maintain for
- * the sake of crashdump. In the long run, we might want to maintain
- * a tree if performance of that linear walk becomes a problem.
- *
- * realmode_pfn_to_page functions can fail due to:
- * 1) As real sparsemem blocks do not lay in RAM continously (they
- * are in virtual address space which is not available in the real mode),
- * the requested page struct can be split between blocks so get_page/put_page
- * may fail.
- * 2) When huge pages are used, the get_page/put_page API will fail
- * in real mode as the linked addresses in the page struct are virtual
- * too.
- */
-struct page *realmode_pfn_to_page(unsigned long pfn)
-{
-    struct vmemmap_backing *vmem_back;
-    struct page *page;
-    unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
-    unsigned long pg_va = (unsigned long) pfn_to_page(pfn);
-
-    for (vmem_back = vmemmap_list; vmem_back; vmem_back = vmem_back->list) {
-        if (pg_va < vmem_back->virt_addr)
-            continue;
-
-        /* After vmemmap_list entry free is possible, need check all */
-        if ((pg_va + sizeof(struct page)) <=
-                (vmem_back->virt_addr + page_size)) {
-            page = (struct page *) (vmem_back->phys + pg_va -
-                vmem_back->virt_addr);
-            return page;
-        }
-    }
-
-    /* Probably that page struct is split between real pages */
-    return NULL;
-}
-EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
-
-#else
-
-struct page *realmode_pfn_to_page(unsigned long pfn)
-{
-    struct page *page = pfn_to_page(pfn);
-    return page;
-}
-EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
-
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
 #ifdef CONFIG_PPC_BOOK3S_64
+2
arch/powerpc/mm/mem.c
···
 #endif
 
 unsigned long long memory_limit;
+bool init_mem_is_free;
 
 #ifdef CONFIG_HIGHMEM
 pte_t *kmap_pte;
···
 {
     ppc_md.progress = ppc_printk_progress;
     mark_initmem_nx();
+    init_mem_is_free = true;
     free_initmem_default(POISON_FREE_INITMEM);
 }
 
+30-4
arch/powerpc/mm/mmu_context_iommu.c
···
 #include <linux/migrate.h>
 #include <linux/hugetlb.h>
 #include <linux/swap.h>
+#include <linux/sizes.h>
 #include <asm/mmu_context.h>
 #include <asm/pte-walk.h>
 
 static DEFINE_MUTEX(mem_list_mutex);
+
+#define MM_IOMMU_TABLE_GROUP_PAGE_DIRTY	0x1
+#define MM_IOMMU_TABLE_GROUP_PAGE_MASK	~(SZ_4K - 1)
 
 struct mm_iommu_table_group_mem_t {
     struct list_head next;
···
         if (!page)
             continue;
 
+        if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
+            SetPageDirty(page);
+
         put_page(page);
         mem->hpas[i] = 0;
     }
···
 
     return ret;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_lookup_rm);
 
 struct mm_iommu_table_group_mem_t *mm_iommu_find(struct mm_struct *mm,
         unsigned long ua, unsigned long entries)
···
     if (pageshift > mem->pageshift)
         return -EFAULT;
 
-    *hpa = *va | (ua & ~PAGE_MASK);
+    *hpa = (*va & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
     return 0;
 }
···
     if (!pa)
         return -EFAULT;
 
-    *hpa = *pa | (ua & ~PAGE_MASK);
+    *hpa = (*pa & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK);
 
     return 0;
 }
-EXPORT_SYMBOL_GPL(mm_iommu_ua_to_hpa_rm);
+
+extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua)
+{
+    struct mm_iommu_table_group_mem_t *mem;
+    long entry;
+    void *va;
+    unsigned long *pa;
+
+    mem = mm_iommu_lookup_rm(mm, ua, PAGE_SIZE);
+    if (!mem)
+        return;
+
+    entry = (ua - mem->ua) >> PAGE_SHIFT;
+    va = &mem->hpas[entry];
+
+    pa = (void *) vmalloc_to_phys(va);
+    if (!pa)
+        return;
+
+    *pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY;
+}
 
 long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem)
 {
+5-2
arch/powerpc/mm/numa.c
···
     int new_nid;
 
     /* Use associativity from first thread for all siblings */
-    vphn_get_associativity(cpu, associativity);
+    if (vphn_get_associativity(cpu, associativity))
+        return cpu_to_node(cpu);
+
     new_nid = associativity_to_nid(associativity);
     if (new_nid < 0 || !node_possible(new_nid))
         new_nid = first_online_node;
···
 
 static void reset_topology_timer(void)
 {
-    mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
+    if (vphn_enabled)
+        mod_timer(&topology_timer, jiffies + topology_timer_secs * HZ);
 }
 
 #ifdef CONFIG_SMP
+1-1
arch/powerpc/mm/pkeys.c
···
      * Since any pkey can be used for data or execute, we will just treat
      * all keys as equal and track them as one entity.
      */
-    pkeys_total = be32_to_cpu(vals[0]);
+    pkeys_total = vals[0];
     pkeys_devtree_defined = true;
 }
 
···
         break;
     case KVM_CAP_S390_HPAGE_1M:
         r = 0;
-        if (hpage)
+        if (hpage && !kvm_is_ucontrol(kvm))
             r = 1;
         break;
     case KVM_CAP_S390_MEM_OP:
···
     mutex_lock(&kvm->lock);
     if (kvm->created_vcpus)
         r = -EBUSY;
-    else if (!hpage || kvm->arch.use_cmma)
+    else if (!hpage || kvm->arch.use_cmma || kvm_is_ucontrol(kvm))
         r = -EINVAL;
     else {
         r = 0;
+3-1
arch/s390/mm/gmap.c
···
         vmaddr |= gaddr & ~PMD_MASK;
         /* Find vma in the parent mm */
         vma = find_vma(gmap->mm, vmaddr);
+        if (!vma)
+            continue;
         /*
          * We do not discard pages that are backed by
          * hugetlbfs, so we don't have to refault them.
          */
-        if (vma && is_vm_hugetlb_page(vma))
+        if (is_vm_hugetlb_page(vma))
             continue;
         size = min(to - gaddr, PMD_SIZE - (gaddr & ~PMD_MASK));
         zap_page_range(vma, vmaddr, size);
-19
arch/x86/boot/compressed/mem_encrypt.S
···
     push	%ebx
     push	%ecx
     push	%edx
-    push	%edi
-
-    /*
-     * RIP-relative addressing is needed to access the encryption bit
-     * variable. Since we are running in 32-bit mode we need this call/pop
-     * sequence to get the proper relative addressing.
-     */
-    call	1f
-1:  popl	%edi
-    subl	$1b, %edi
-
-    movl	enc_bit(%edi), %eax
-    cmpl	$0, %eax
-    jge	.Lsev_exit
 
     /* Check if running under a hypervisor */
     movl	$1, %eax
···
 
     movl	%ebx, %eax
     andl	$0x3f, %eax	/* Return the encryption bit location */
-    movl	%eax, enc_bit(%edi)
     jmp	.Lsev_exit
 
 .Lno_sev:
     xor	%eax, %eax
-    movl	%eax, enc_bit(%edi)
 
 .Lsev_exit:
-    pop	%edi
     pop	%edx
     pop	%ecx
     pop	%ebx
···
 ENDPROC(set_sev_encryption_mask)
 
     .data
-enc_bit:
-    .int	0xffffffff
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
     .balign	8
···
 static int __init crypto_morus1280_sse2_module_init(void)
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
-1
arch/x86/crypto/morus640-sse2-glue.c
···
 static int __init crypto_morus640_sse2_module_init(void)
 {
     if (!boot_cpu_has(X86_FEATURE_XMM2) ||
-        !boot_cpu_has(X86_FEATURE_OSXSAVE) ||
         !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL))
         return -ENODEV;
 
+4-4
arch/x86/hyperv/hv_apic.c
···
  */
 static bool __send_ipi_mask_ex(const struct cpumask *mask, int vector)
 {
-    struct ipi_arg_ex **arg;
-    struct ipi_arg_ex *ipi_arg;
+    struct hv_send_ipi_ex **arg;
+    struct hv_send_ipi_ex *ipi_arg;
     unsigned long flags;
     int nr_bank = 0;
     int ret = 1;
···
         return false;
 
     local_irq_save(flags);
-    arg = (struct ipi_arg_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
+    arg = (struct hv_send_ipi_ex **)this_cpu_ptr(hyperv_pcpu_input_arg);
 
     ipi_arg = *arg;
     if (unlikely(!ipi_arg))
···
 static bool __send_ipi_mask(const struct cpumask *mask, int vector)
 {
     int cur_cpu, vcpu;
-    struct ipi_arg_non_ex ipi_arg;
+    struct hv_send_ipi ipi_arg;
     int ret = 1;
 
     trace_hyperv_send_ipi_mask(mask, vector);
+10
arch/x86/include/asm/fixmap.h
···
 #ifndef _ASM_X86_FIXMAP_H
 #define _ASM_X86_FIXMAP_H
 
+/*
+ * Exposed to assembly code for setting up initial page tables. Cannot be
+ * calculated in assembly code (fixmap entries are an enum), but is sanity
+ * checked in the actual fixmap C code to make sure that the fixmap is
+ * covered fully.
+ */
+#define FIXMAP_PMD_NUM	2
+/* fixmap starts downwards from the 507th entry in level2_fixmap_pgt */
+#define FIXMAP_PMD_TOP	507
+
 #ifndef __ASSEMBLY__
 #include <linux/kernel.h>
 #include <asm/acpi.h>
···
         e <= QOS_L3_MBM_LOCAL_EVENT_ID);
 }
 
+struct rdt_parse_data {
+    struct rdtgroup		*rdtgrp;
+    char			*buf;
+};
+
 /**
  * struct rdt_resource - attributes of an RDT resource
  * @rid:		The index of the resource
···
     struct rdt_cache	cache;
     struct rdt_membw	membw;
     const char		*format_str;
-    int (*parse_ctrlval) (void *data, struct rdt_resource *r,
-                  struct rdt_domain *d);
+    int (*parse_ctrlval)(struct rdt_parse_data *data,
+                 struct rdt_resource *r,
+                 struct rdt_domain *d);
     struct list_head	evt_list;
     int			num_rmid;
     unsigned int		mon_scale;
     unsigned long		fflags;
 };
 
-int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d);
-int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d);
+int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
+          struct rdt_domain *d);
+int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r,
+         struct rdt_domain *d);
 
 extern struct mutex rdtgroup_mutex;
 
···
 void rdtgroup_pseudo_lock_remove(struct rdtgroup *rdtgrp);
 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
 int update_domains(struct rdt_resource *r, int closid);
+int closids_supported(void);
 void closid_free(int closid);
 int alloc_rmid(void);
 void free_rmid(u32 rmid);
+14-13
arch/x86/kernel/cpu/intel_rdt_ctrlmondata.c
···
     return true;
 }
 
-int parse_bw(void *_buf, struct rdt_resource *r, struct rdt_domain *d)
+int parse_bw(struct rdt_parse_data *data, struct rdt_resource *r,
+         struct rdt_domain *d)
 {
-    unsigned long data;
-    char *buf = _buf;
+    unsigned long bw_val;
 
     if (d->have_new_ctrl) {
         rdt_last_cmd_printf("duplicate domain %d\n", d->id);
         return -EINVAL;
     }
 
-    if (!bw_validate(buf, &data, r))
+    if (!bw_validate(data->buf, &bw_val, r))
         return -EINVAL;
-    d->new_ctrl = data;
+    d->new_ctrl = bw_val;
     d->have_new_ctrl = true;
 
     return 0;
···
     return true;
 }
 
-struct rdt_cbm_parse_data {
-    struct rdtgroup		*rdtgrp;
-    char			*buf;
-};
-
 /*
  * Read one cache bit mask (hex). Check that it is valid for the current
  * resource type.
  */
-int parse_cbm(void *_data, struct rdt_resource *r, struct rdt_domain *d)
+int parse_cbm(struct rdt_parse_data *data, struct rdt_resource *r,
+          struct rdt_domain *d)
 {
-    struct rdt_cbm_parse_data *data = _data;
     struct rdtgroup *rdtgrp = data->rdtgrp;
     u32 cbm_val;
 
···
 static int parse_line(char *line, struct rdt_resource *r,
               struct rdtgroup *rdtgrp)
 {
-    struct rdt_cbm_parse_data data;
+    struct rdt_parse_data data;
     char *dom = NULL, *id;
     struct rdt_domain *d;
     unsigned long dom_id;
+
+    if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP &&
+        r->rid == RDT_RESOURCE_MBA) {
+        rdt_last_cmd_puts("Cannot pseudo-lock MBA resource\n");
+        return -EINVAL;
+    }
 
 next:
     if (!line || line[0] == '\0')
+44-9
arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
···
  * limited as the number of resources grows.
  */
 static int closid_free_map;
+static int closid_free_map_len;
+
+int closids_supported(void)
+{
+    return closid_free_map_len;
+}
 
 static void closid_init(void)
 {
···
 
     /* CLOSID 0 is always reserved for the default group */
     closid_free_map &= ~1;
+    closid_free_map_len = rdt_min_closid;
 }
 
 static int closid_alloc(void)
···
     sw_shareable = 0;
     exclusive = 0;
     seq_printf(seq, "%d=", dom->id);
-    for (i = 0; i < r->num_closid; i++, ctrl++) {
+    for (i = 0; i < closids_supported(); i++, ctrl++) {
         if (!closid_allocated(i))
             continue;
         mode = rdtgroup_mode_by_closid(i);
···
 
     /* Check for overlap with other resource groups */
     ctrl = d->ctrl_val;
-    for (i = 0; i < r->num_closid; i++, ctrl++) {
+    for (i = 0; i < closids_supported(); i++, ctrl++) {
         ctrl_b = (unsigned long *)ctrl;
         mode = rdtgroup_mode_by_closid(i);
         if (closid_allocated(i) && i != closid &&
···
 {
     int closid = rdtgrp->closid;
     struct rdt_resource *r;
+    bool has_cache = false;
     struct rdt_domain *d;
 
     for_each_alloc_enabled_rdt_resource(r) {
+        if (r->rid == RDT_RESOURCE_MBA)
+            continue;
+        has_cache = true;
         list_for_each_entry(d, &r->domains, list) {
             if (rdtgroup_cbm_overlaps(r, d, d->ctrl_val[closid],
-                          rdtgrp->closid, false))
+                          rdtgrp->closid, false)) {
+                rdt_last_cmd_puts("schemata overlaps\n");
                 return false;
+            }
         }
+    }
+
+    if (!has_cache) {
+        rdt_last_cmd_puts("cannot be exclusive without CAT/CDP\n");
+        return false;
     }
 
     return true;
···
         rdtgrp->mode = RDT_MODE_SHAREABLE;
     } else if (!strcmp(buf, "exclusive")) {
         if (!rdtgroup_mode_test_exclusive(rdtgrp)) {
-            rdt_last_cmd_printf("schemata overlaps\n");
             ret = -EINVAL;
             goto out;
         }
···
     struct rdt_resource *r;
     struct rdt_domain *d;
     unsigned int size;
-    bool sep = false;
-    u32 cbm;
+    bool sep;
+    u32 ctrl;
 
     rdtgrp = rdtgroup_kn_lock_live(of->kn);
     if (!rdtgrp) {
···
     }
 
     for_each_alloc_enabled_rdt_resource(r) {
+        sep = false;
         seq_printf(s, "%*s:", max_name_width, r->name);
         list_for_each_entry(d, &r->domains, list) {
             if (sep)
···
             if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP) {
                 size = 0;
             } else {
-                cbm = d->ctrl_val[rdtgrp->closid];
-                size = rdtgroup_cbm_to_size(r, d, cbm);
+                ctrl = (!is_mba_sc(r) ?
+                        d->ctrl_val[rdtgrp->closid] :
+                        d->mbps_val[rdtgrp->closid]);
+                if (r->rid == RDT_RESOURCE_MBA)
+                    size = ctrl;
+                else
+                    size = rdtgroup_cbm_to_size(r, d, ctrl);
             }
             seq_printf(s, "%d=%u", d->id, size);
             sep = true;
···
     u32 *ctrl;
 
     for_each_alloc_enabled_rdt_resource(r) {
+        /*
+         * Only initialize default allocations for CBM cache
+         * resources
+         */
+        if (r->rid == RDT_RESOURCE_MBA)
+            continue;
         list_for_each_entry(d, &r->domains, list) {
             d->have_new_ctrl = false;
             d->new_ctrl = r->cache.shareable_bits;
             used_b = r->cache.shareable_bits;
             ctrl = d->ctrl_val;
-            for (i = 0; i < r->num_closid; i++, ctrl++) {
+            for (i = 0; i < closids_supported(); i++, ctrl++) {
                 if (closid_allocated(i) && i != closid) {
                     mode = rdtgroup_mode_by_closid(i);
                     if (mode == RDT_MODE_PSEUDO_LOCKSETUP)
···
     }
 
     for_each_alloc_enabled_rdt_resource(r) {
+        /*
+         * Only initialize default allocations for CBM cache
+         * resources
+         */
+        if (r->rid == RDT_RESOURCE_MBA)
+            continue;
         ret = update_domains(r, rdtgrp->closid);
         if (ret < 0) {
             rdt_last_cmd_puts("failed to initialize allocations\n");
+19-1
arch/x86/kernel/head64.c
···
 #include <asm/bootparam_utils.h>
 #include <asm/microcode.h>
 #include <asm/kasan.h>
+#include <asm/fixmap.h>
 
 /*
  * Manage page tables very early on.
···
 unsigned long __head __startup_64(unsigned long physaddr,
                   struct boot_params *bp)
 {
+    unsigned long vaddr, vaddr_end;
     unsigned long load_delta, *p;
     unsigned long pgtable_flags;
     pgdval_t *pgd;
···
     pud[511] += load_delta;
 
     pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
-    pmd[506] += load_delta;
+    for (i = FIXMAP_PMD_TOP; i > FIXMAP_PMD_TOP - FIXMAP_PMD_NUM; i--)
+        pmd[i] += load_delta;
 
     /*
      * Set up the identity mapping for the switchover. These
···
 
     /* Encrypt the kernel and related (if SME is active) */
     sme_encrypt_kernel(bp);
+
+    /*
+     * Clear the memory encryption mask from the .bss..decrypted section.
+     * The bss section will be memset to zero later in the initialization so
+     * there is no need to zero it after changing the memory encryption
+     * attribute.
+     */
+    if (mem_encrypt_active()) {
+        vaddr = (unsigned long)__start_bss_decrypted;
+        vaddr_end = (unsigned long)__end_bss_decrypted;
+        for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
+            i = pmd_index(vaddr);
+            pmd[i] -= sme_get_me_mask();
+        }
+    }
 
     /*
      * Return the SME encryption mask (if SME is active) to be used as a
···
 #include <linux/sched/clock.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <linux/set_memory.h>
 
 #include <asm/hypervisor.h>
 #include <asm/mem_encrypt.h>
···
     (PAGE_SIZE / sizeof(struct pvclock_vsyscall_time_info))
 
 static struct pvclock_vsyscall_time_info
-            hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __aligned(PAGE_SIZE);
-static struct pvclock_wall_clock wall_clock;
+            hv_clock_boot[HVC_BOOT_ARRAY_SIZE] __bss_decrypted __aligned(PAGE_SIZE);
+static struct pvclock_wall_clock wall_clock __bss_decrypted;
 static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
+static struct pvclock_vsyscall_time_info *hvclock_mem;
 
 static inline struct pvclock_vcpu_time_info *this_cpu_pvti(void)
 {
···
     native_machine_shutdown();
 }
 
+static void __init kvmclock_init_mem(void)
+{
+    unsigned long ncpus;
+    unsigned int order;
+    struct page *p;
+    int r;
+
+    if (HVC_BOOT_ARRAY_SIZE >= num_possible_cpus())
+        return;
+
+    ncpus = num_possible_cpus() - HVC_BOOT_ARRAY_SIZE;
+    order = get_order(ncpus * sizeof(*hvclock_mem));
+
+    p = alloc_pages(GFP_KERNEL, order);
+    if (!p) {
+        pr_warn("%s: failed to alloc %d pages", __func__, (1U << order));
+        return;
+    }
+
+    hvclock_mem = page_address(p);
+
+    /*
+     * hvclock is shared between the guest and the hypervisor, must
+     * be mapped decrypted.
+     */
+    if (sev_active()) {
+        r = set_memory_decrypted((unsigned long) hvclock_mem,
+                     1UL << order);
+        if (r) {
+            __free_pages(p, order);
+            hvclock_mem = NULL;
+            pr_warn("kvmclock: set_memory_decrypted() failed. Disabling\n");
+            return;
+        }
+    }
+
+    memset(hvclock_mem, 0, PAGE_SIZE << order);
+}
+
 static int __init kvm_setup_vsyscall_timeinfo(void)
 {
 #ifdef CONFIG_X86_64
···
 
     kvm_clock.archdata.vclock_mode = VCLOCK_PVCLOCK;
 #endif
+
+    kvmclock_init_mem();
+
     return 0;
 }
 early_initcall(kvm_setup_vsyscall_timeinfo);
···
     /* Use the static page for the first CPUs, allocate otherwise */
     if (cpu < HVC_BOOT_ARRAY_SIZE)
         p = &hv_clock_boot[cpu];
+    else if (hvclock_mem)
+        p = hvclock_mem + cpu - HVC_BOOT_ARRAY_SIZE;
     else
-        p = kzalloc(sizeof(*p), GFP_KERNEL);
+        return -ENOMEM;
 
     per_cpu(hv_clock_per_cpu, cpu) = p;
     return p ? 0 : -ENOMEM;
+2-2
arch/x86/kernel/paravirt.c
···
 
     if (len < 5) {
 #ifdef CONFIG_RETPOLINE
-        WARN_ONCE("Failing to patch indirect CALL in %ps\n", (void *)addr);
+        WARN_ONCE(1, "Failing to patch indirect CALL in %ps\n", (void *)addr);
 #endif
         return len;	/* call too long for patch site */
     }
···
 
     if (len < 5) {
 #ifdef CONFIG_RETPOLINE
-        WARN_ONCE("Failing to patch indirect JMP in %ps\n", (void *)addr);
+        WARN_ONCE(1, "Failing to patch indirect JMP in %ps\n", (void *)addr);
 #endif
         return len;	/* call too long for patch site */
     }
+19
arch/x86/kernel/vmlinux.lds.S
···
 #define ALIGN_ENTRY_TEXT_BEGIN	. = ALIGN(PMD_SIZE);
 #define ALIGN_ENTRY_TEXT_END	. = ALIGN(PMD_SIZE);
 
+/*
+ * This section contains data which will be mapped as decrypted. Memory
+ * encryption operates on a page basis. Make this section PMD-aligned
+ * to avoid splitting the pages while mapping the section early.
+ *
+ * Note: We use a separate section so that only this section gets
+ * decrypted to avoid exposing more than we wish.
+ */
+#define BSS_DECRYPTED						\
+	. = ALIGN(PMD_SIZE);					\
+	__start_bss_decrypted = .;				\
+	*(.bss..decrypted);					\
+	. = ALIGN(PAGE_SIZE);					\
+	__start_bss_decrypted_unused = .;			\
+	. = ALIGN(PMD_SIZE);					\
+	__end_bss_decrypted = .;				\
+
 #else
 
 #define X86_ALIGN_RODATA_BEGIN
···
 
 #define ALIGN_ENTRY_TEXT_BEGIN
 #define ALIGN_ENTRY_TEXT_END
+#define BSS_DECRYPTED
 
 #endif
 
···
         __bss_start = .;
         *(.bss..page_aligned)
         *(.bss)
+        BSS_DECRYPTED
         . = ALIGN(PAGE_SIZE);
         __bss_stop = .;
     }
+19-3
arch/x86/kvm/lapic.c
···
 
 static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
 {
-    return kvm_apic_hw_enabled(apic) &&
-        addr >= apic->base_address &&
-        addr < apic->base_address + LAPIC_MMIO_LENGTH;
+    return addr >= apic->base_address &&
+        addr < apic->base_address + LAPIC_MMIO_LENGTH;
 }
 
 static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
···
 
     if (!apic_mmio_in_range(apic, address))
         return -EOPNOTSUPP;
+
+    if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
+        if (!kvm_check_has_quirk(vcpu->kvm,
+                     KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
+            return -EOPNOTSUPP;
+
+        memset(data, 0xff, len);
+        return 0;
+    }
 
     kvm_lapic_reg_read(apic, offset, len, data);
 
···
 
     if (!apic_mmio_in_range(apic, address))
         return -EOPNOTSUPP;
+
+    if (!kvm_apic_hw_enabled(apic) || apic_x2apic_mode(apic)) {
+        if (!kvm_check_has_quirk(vcpu->kvm,
+                     KVM_X86_QUIRK_LAPIC_MMIO_HOLE))
+            return -EOPNOTSUPP;
+
+        return 0;
+    }
 
     /*
      * APIC register must be aligned on 128-bits boundary.
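The new code above is gated on the KVM_X86_QUIRK_LAPIC_MMIO_HOLE quirk: by default reads of a disabled/x2APIC LAPIC MMIO range now return all-ones and writes are dropped, while userspace that wants the accesses forwarded to it instead can disable the quirk via KVM_CAP_DISABLE_QUIRKS. A hedged userspace sketch, not part of the patch; vm_fd is assumed to be an existing VM file descriptor.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Illustrative only: disabling the quirk asks KVM to return -EOPNOTSUPP for
 * accesses to a disabled/x2APIC LAPIC MMIO range (forwarding them to
 * userspace) instead of treating the range as a hole that reads as 0xff.
 */
static int disable_lapic_mmio_hole_quirk(int vm_fd)
{
    struct kvm_enable_cap cap = {
        .cap  = KVM_CAP_DISABLE_QUIRKS,
        .args = { KVM_X86_QUIRK_LAPIC_MMIO_HOLE },  /* args[0]: bitmask of quirks to disable */
    };

    return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}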
+7-2
arch/x86/kvm/mmu.c
···
 {
     /*
      * Make sure the write to vcpu->mode is not reordered in front of
-     * reads to sptes.  If it does, kvm_commit_zap_page() can see us
+     * reads to sptes.  If it does, kvm_mmu_commit_zap_page() can see us
      * OUTSIDE_GUEST_MODE and proceed to free the shadow page table.
      */
     smp_store_release(&vcpu->mode, OUTSIDE_GUEST_MODE);
···
 {
     MMU_WARN_ON(VALID_PAGE(vcpu->arch.mmu.root_hpa));
 
-    kvm_init_mmu(vcpu, true);
+    /*
+     * kvm_mmu_setup() is called only on vCPU initialization.
+     * Therefore, no need to reset mmu roots as they are not yet
+     * initialized.
+     */
+    kvm_init_mmu(vcpu, false);
 }
 
 static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
···
     const int sgrp = op_stat_group(req_op);
     int cpu = part_stat_lock();
 
-    part_stat_add(cpu, part, ticks[sgrp], duration);
+    part_stat_add(cpu, part, nsecs[sgrp], jiffies_to_nsecs(duration));
     part_round_stats(q, cpu, part);
     part_dec_in_flight(q, part, op_is_write(req_op));
 
+1-3
block/blk-core.c
···
      * containing request is enough.
      */
     if (blk_do_io_stat(req) && !(req->rq_flags & RQF_FLUSH_SEQ)) {
-        unsigned long duration;
         const int sgrp = op_stat_group(req_op(req));
         struct hd_struct *part;
         int cpu;
 
-        duration = nsecs_to_jiffies(now - req->start_time_ns);
         cpu = part_stat_lock();
         part = req->part;
 
         part_stat_inc(cpu, part, ios[sgrp]);
-        part_stat_add(cpu, part, ticks[sgrp], duration);
+        part_stat_add(cpu, part, nsecs[sgrp], now - req->start_time_ns);
         part_round_stats(req->q, cpu, part);
         part_dec_in_flight(req->q, part, rq_data_dir(req));
 
+4-9
block/blk-mq-tag.c
···
 
     /*
      * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
-     * queue_hw_ctx after freeze the queue. So we could use q_usage_counter
-     * to avoid race with it. __blk_mq_update_nr_hw_queues will users
-     * synchronize_rcu to ensure all of the users go out of the critical
-     * section below and see zeroed q_usage_counter.
+     * queue_hw_ctx after freeze the queue, so we use q_usage_counter
+     * to avoid race with it.
      */
-    rcu_read_lock();
-    if (percpu_ref_is_zero(&q->q_usage_counter)) {
-        rcu_read_unlock();
+    if (!percpu_ref_tryget(&q->q_usage_counter))
         return;
-    }
 
     queue_for_each_hw_ctx(q, hctx, i) {
         struct blk_mq_tags *tags = hctx->tags;
···
             bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
         bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
     }
-    rcu_read_unlock();
+    blk_queue_exit(q);
 }
 
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
+2-2
block/blk-mq.c
···
         BUG_ON(!rq->q);
         if (rq->mq_ctx != this_ctx) {
             if (this_ctx) {
-                trace_block_unplug(this_q, depth, from_schedule);
+                trace_block_unplug(this_q, depth, !from_schedule);
                 blk_mq_sched_insert_requests(this_q, this_ctx,
                                 &ctx_list,
                                 from_schedule);
···
      * on 'ctx_list'. Do those.
      */
     if (this_ctx) {
-        trace_block_unplug(this_q, depth, from_schedule);
+        trace_block_unplug(this_q, depth, !from_schedule);
         blk_mq_sched_insert_requests(this_q, this_ctx, &ctx_list,
                         from_schedule);
     }
+1-1
block/elevator.c
···
 
     while (e->type->ops.sq.elevator_dispatch_fn(q, 1))
         ;
-    if (q->nr_sorted && printed++ < 10) {
+    if (q->nr_sorted && !blk_queue_is_zoned(q) && printed++ < 10 ) {
         printk(KERN_ERR "%s: forced dispatching is broken "
                "(nr_sorted=%u), please report this\n",
                q->elevator->type->elevator_name, q->nr_sorted);
···
  */
 int ata_qc_complete_multiple(struct ata_port *ap, u64 qc_active)
 {
+    u64 done_mask, ap_qc_active = ap->qc_active;
     int nr_done = 0;
-    u64 done_mask;
 
-    done_mask = ap->qc_active ^ qc_active;
+    /*
+     * If the internal tag is set on ap->qc_active, then we care about
+     * bit0 on the passed in qc_active mask. Move that bit up to match
+     * the internal tag.
+     */
+    if (ap_qc_active & (1ULL << ATA_TAG_INTERNAL)) {
+        qc_active |= (qc_active & 0x01) << ATA_TAG_INTERNAL;
+        qc_active ^= qc_active & 0x01;
+    }
+
+    done_mask = ap_qc_active ^ qc_active;
 
     if (unlikely(done_mask & qc_active)) {
         ata_port_err(ap, "illegal qc_active transition (%08llx->%08llx)\n",
···
     u8 nparents;
     struct clk_plt *clks[PMC_CLK_NUM];
     struct clk_lookup *mclk_lookup;
+    struct clk_lookup *ether_clk_lookup;
 };
 
 /* Return an index in parent table */
···
     pclk->hw.init = &init;
     pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
     spin_lock_init(&pclk->lock);
-
-    /*
-     * If the clock was already enabled by the firmware mark it as critical
-     * to avoid it being gated by the clock framework if no driver owns it.
-     */
-    if (plt_clk_is_enabled(&pclk->hw))
-        init.flags |= CLK_IS_CRITICAL;
 
     ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
     if (ret) {
···
         goto err_unreg_clk_plt;
     }
 
+    data->ether_clk_lookup = clkdev_hw_create(&data->clks[4]->hw,
+                          "ether_clk", NULL);
+    if (!data->ether_clk_lookup) {
+        err = -ENOMEM;
+        goto err_drop_mclk;
+    }
+
     plt_clk_free_parent_names_loop(parent_names, data->nparents);
 
     platform_set_drvdata(pdev, data);
     return 0;
 
+err_drop_mclk:
+    clkdev_drop(data->mclk_lookup);
 err_unreg_clk_plt:
     plt_clk_unregister_loop(data, i);
     plt_clk_unregister_parents(data);
···
 
     data = platform_get_drvdata(pdev);
 
+    clkdev_drop(data->ether_clk_lookup);
     clkdev_drop(data->mclk_lookup);
     plt_clk_unregister_loop(data, PMC_CLK_NUM);
     plt_clk_unregister_parents(data);
+14-6
drivers/clocksource/timer-atmel-pit.c
···
     data->base = of_iomap(node, 0);
     if (!data->base) {
         pr_err("Could not map PIT address\n");
-        return -ENXIO;
+        ret = -ENXIO;
+        goto exit;
     }
 
     data->mck = of_clk_get(node, 0);
     if (IS_ERR(data->mck)) {
         pr_err("Unable to get mck clk\n");
-        return PTR_ERR(data->mck);
+        ret = PTR_ERR(data->mck);
+        goto exit;
     }
 
     ret = clk_prepare_enable(data->mck);
     if (ret) {
         pr_err("Unable to enable mck\n");
-        return ret;
+        goto exit;
     }
 
     /* Get the interrupts property */
     data->irq = irq_of_parse_and_map(node, 0);
     if (!data->irq) {
         pr_err("Unable to get IRQ from DT\n");
-        return -EINVAL;
+        ret = -EINVAL;
+        goto exit;
     }
 
     /*
···
     ret = clocksource_register_hz(&data->clksrc, pit_rate);
     if (ret) {
         pr_err("Failed to register clocksource\n");
-        return ret;
+        goto exit;
     }
 
     /* Set up irq handler */
···
               "at91_tick", data);
     if (ret) {
         pr_err("Unable to setup IRQ\n");
-        return ret;
+        clocksource_unregister(&data->clksrc);
+        goto exit;
     }
 
     /* Set up and register clockevents */
···
     clockevents_register_device(&data->clkevt);
 
     return 0;
+
+exit:
+    kfree(data);
+    return ret;
 }
 TIMER_OF_DECLARE(at91sam926x_pit, "atmel,at91sam9260-pit",
          at91sam926x_pit_dt_init);
+11-7
drivers/clocksource/timer-fttmr010.c
···
     cr &= ~fttmr010->t1_enable_val;
     writel(cr, fttmr010->base + TIMER_CR);
 
-    /* Setup the match register forward/backward in time */
-    cr = readl(fttmr010->base + TIMER1_COUNT);
-    if (fttmr010->count_down)
-        cr -= cycles;
-    else
-        cr += cycles;
-    writel(cr, fttmr010->base + TIMER1_MATCH1);
+    if (fttmr010->count_down) {
+        /*
+         * ASPEED Timer Controller will load TIMER1_LOAD register
+         * into TIMER1_COUNT register when the timer is re-enabled.
+         */
+        writel(cycles, fttmr010->base + TIMER1_LOAD);
+    } else {
+        /* Setup the match register forward in time */
+        cr = readl(fttmr010->base + TIMER1_COUNT);
+        writel(cr + cycles, fttmr010->base + TIMER1_MATCH1);
+    }
 
     /* Start */
     cr = readl(fttmr010->base + TIMER_CR);
···
 config EFI_ARMSTUB_DTB_LOADER
 	bool "Enable the DTB loader"
 	depends on EFI_ARMSTUB
+	default y
 	help
 	  Select this config option to add support for the dtb= command
 	  line parameter, allowing a device tree blob to be loaded into
 	  memory from the EFI System Partition by the stub.
 
-	  The device tree is typically provided by the platform or by
-	  the bootloader, so this option is mostly for development
-	  purposes only.
+	  If the device tree is provided by the platform or by
+	  the bootloader this option may not be needed.
+	  But, for various development reasons and to maintain existing
+	  functionality for bootloaders that do not have such support
+	  this option is necessary.
 
 config EFI_BOOTLOADER_CONTROL
 	tristate "EFI Bootloader Control"
···
     dh_data->dchub_info_valid = false;
 }
 
-static void dce120_set_bandwidth(
-        struct dc *dc,
-        struct dc_state *context,
-        bool decrease_allowed)
-{
-    if (context->stream_count <= 0)
-        return;
-
-    dce110_set_bandwidth(dc, context, decrease_allowed);
-}
-
 void dce120_hw_sequencer_construct(struct dc *dc)
 {
     /* All registers used by dce11.2 match those in dce11 in offset and
···
     dce110_hw_sequencer_construct(dc);
     dc->hwss.enable_display_power_gating = dce120_enable_display_power_gating;
     dc->hwss.update_dchub = dce120_update_dchub;
-    dc->hwss.set_bandwidth = dce120_set_bandwidth;
 }
 
···
     drm->irq_enabled = true;
 
     ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
+    drm_crtc_vblank_reset(&malidp->crtc);
     if (ret < 0) {
         DRM_ERROR("failed to initialise vblank\n");
         goto vblank_fail;
+23-2
drivers/gpu/drm/arm/malidp_hw.c
···
 
 static int malidp500_enable_memwrite(struct malidp_hw_device *hwdev,
                      dma_addr_t *addrs, s32 *pitches,
-                     int num_planes, u16 w, u16 h, u32 fmt_id)
+                     int num_planes, u16 w, u16 h, u32 fmt_id,
+                     const s16 *rgb2yuv_coeffs)
 {
     u32 base = MALIDP500_SE_MEMWRITE_BASE;
     u32 de_base = malidp_get_block_base(hwdev, MALIDP_DE_BLOCK);
···
 
     malidp_hw_write(hwdev, MALIDP_DE_H_ACTIVE(w) | MALIDP_DE_V_ACTIVE(h),
             MALIDP500_SE_MEMWRITE_OUT_SIZE);
+
+    if (rgb2yuv_coeffs) {
+        int i;
+
+        for (i = 0; i < MALIDP_COLORADJ_NUM_COEFFS; i++) {
+            malidp_hw_write(hwdev, rgb2yuv_coeffs[i],
+                    MALIDP500_SE_RGB_YUV_COEFFS + i * 4);
+        }
+    }
+
     malidp_hw_setbits(hwdev, MALIDP_SE_MEMWRITE_EN, MALIDP500_SE_CONTROL);
 
     return 0;
···
 
 static int malidp550_enable_memwrite(struct malidp_hw_device *hwdev,
                      dma_addr_t *addrs, s32 *pitches,
-                     int num_planes, u16 w, u16 h, u32 fmt_id)
+                     int num_planes, u16 w, u16 h, u32 fmt_id,
+                     const s16 *rgb2yuv_coeffs)
 {
     u32 base = MALIDP550_SE_MEMWRITE_BASE;
     u32 de_base = malidp_get_block_base(hwdev, MALIDP_DE_BLOCK);
···
             MALIDP550_SE_MEMWRITE_OUT_SIZE);
     malidp_hw_setbits(hwdev, MALIDP550_SE_MEMWRITE_ONESHOT | MALIDP_SE_MEMWRITE_EN,
               MALIDP550_SE_CONTROL);
+
+    if (rgb2yuv_coeffs) {
+        int i;
+
+        for (i = 0; i < MALIDP_COLORADJ_NUM_COEFFS; i++) {
+            malidp_hw_write(hwdev, rgb2yuv_coeffs[i],
+                    MALIDP550_SE_RGB_YUV_COEFFS + i * 4);
+        }
+    }
 
     return 0;
 }
+2-1
drivers/gpu/drm/arm/malidp_hw.h
···
      * @param fmt_id - internal format ID of output buffer
      */
     int (*enable_memwrite)(struct malidp_hw_device *hwdev, dma_addr_t *addrs,
-                   s32 *pitches, int num_planes, u16 w, u16 h, u32 fmt_id);
+                   s32 *pitches, int num_planes, u16 w, u16 h, u32 fmt_id,
+                   const s16 *rgb2yuv_coeffs);
 
     /*
      * Disable the writing to memory of the next frame's content.
···
     struct drm_connector *connector;
     struct drm_connector_list_iter conn_iter;
 
-    if (!drm_core_check_feature(dev, DRIVER_ATOMIC))
+    if (!drm_drv_uses_atomic_modeset(dev))
         return;
 
     list_for_each_entry(plane, &config->plane_list, head) {
+1-1
drivers/gpu/drm/drm_debugfs.c
···
         return ret;
     }
 
-    if (drm_core_check_feature(dev, DRIVER_ATOMIC)) {
+    if (drm_drv_uses_atomic_modeset(dev)) {
         ret = drm_atomic_debugfs_init(minor);
         if (ret) {
             DRM_ERROR("Failed to create atomic debugfs files\n");
-3
drivers/gpu/drm/drm_fb_helper.c
···
 {
     int c, o;
     struct drm_connector *connector;
-    const struct drm_connector_helper_funcs *connector_funcs;
     int my_score, best_score, score;
     struct drm_fb_helper_crtc **crtcs, *crtc;
     struct drm_fb_helper_connector *fb_helper_conn;
···
         my_score++;
     if (drm_has_preferred_mode(fb_helper_conn, width, height))
         my_score++;
-
-    connector_funcs = connector->helper_private;
 
     /*
      * select a crtc for this connector and then attempt to configure
-10
drivers/gpu/drm/drm_panel.c
···
 #include <linux/err.h>
 #include <linux/module.h>
 
-#include <drm/drm_device.h>
 #include <drm/drm_crtc.h>
 #include <drm/drm_panel.h>
 
···
     if (panel->connector)
         return -EBUSY;
 
-    panel->link = device_link_add(connector->dev->dev, panel->dev, 0);
-    if (!panel->link) {
-        dev_err(panel->dev, "failed to link panel to %s\n",
-            dev_name(connector->dev->dev));
-        return -EINVAL;
-    }
-
     panel->connector = connector;
     panel->drm = connector->dev;
 
···
  */
 int drm_panel_detach(struct drm_panel *panel)
 {
-    device_link_del(panel->link);
-
     panel->connector = NULL;
     panel->drm = NULL;
 
+5
drivers/gpu/drm/drm_syncobj.c
···
 {
     int ret;
 
+    WARN_ON(*fence);
+
     *fence = drm_syncobj_fence_get(syncobj);
     if (*fence)
         return 1;
···
 
     if (flags & DRM_SYNCOBJ_WAIT_FLAGS_WAIT_FOR_SUBMIT) {
         for (i = 0; i < count; ++i) {
+            if (entries[i].fence)
+                continue;
+
             drm_syncobj_fence_get_or_add_callback(syncobjs[i],
                                   &entries[i].fence,
                                   &entries[i].syncobj_cb,
+21-6
drivers/gpu/drm/etnaviv/etnaviv_drv.c
···
     struct device *dev = &pdev->dev;
     struct component_match *match = NULL;
 
-    dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
-
     if (!dev->platform_data) {
         struct device_node *core_node;
 
···
     for_each_compatible_node(np, NULL, "vivante,gc") {
         if (!of_device_is_available(np))
             continue;
-        pdev = platform_device_register_simple("etnaviv", -1,
-                               NULL, 0);
-        if (IS_ERR(pdev)) {
-            ret = PTR_ERR(pdev);
+
+        pdev = platform_device_alloc("etnaviv", -1);
+        if (!pdev) {
+            ret = -ENOMEM;
             of_node_put(np);
             goto unregister_platform_driver;
         }
+        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(40);
+        pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
+
+        /*
+         * Apply the same DMA configuration to the virtual etnaviv
+         * device as the GPU we found. This assumes that all Vivante
+         * GPUs in the system share the same DMA constraints.
+         */
+        of_dma_configure(&pdev->dev, np, true);
+
+        ret = platform_device_add(pdev);
+        if (ret) {
+            platform_device_put(pdev);
+            of_node_put(np);
+            goto unregister_platform_driver;
+        }
+
         etnaviv_drm = pdev;
         of_node_put(np);
         break;
···
 
 /* sun4i_drv uses this list to check if a device node is a TCON TOP */
 const struct of_device_id sun8i_tcon_top_of_table[] = {
-    { .compatible = "allwinner,sun8i-r40-tcon-top" },
     { /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, sun8i_tcon_top_of_table);
···
                 struct drm_rect *rects)
 {
     struct vmw_private *dev_priv = vmw_priv(dev);
-    struct drm_mode_config *mode_config = &dev->mode_config;
     struct drm_rect bounding_box = {0};
     u64 total_pixels = 0, pixel_mem, bb_mem;
     int i;
 
     for (i = 0; i < num_rects; i++) {
         /*
-         * Currently this check is limiting the topology within max
-         * texture/screentarget size. This should change in future when
-         * user-space support multiple fb with topology.
+         * For STDU only individual screen (screen target) is limited by
+         * SCREENTARGET_MAX_WIDTH/HEIGHT registers.
          */
-        if (rects[i].x1 < 0 ||  rects[i].y1 < 0 ||
-            rects[i].x2 > mode_config->max_width ||
-            rects[i].y2 > mode_config->max_height) {
-            DRM_ERROR("Invalid GUI layout.\n");
+        if (dev_priv->active_display_unit == vmw_du_screen_target &&
+            (drm_rect_width(&rects[i]) > dev_priv->stdu_max_width ||
+             drm_rect_height(&rects[i]) > dev_priv->stdu_max_height)) {
+            DRM_ERROR("Screen size not supported.\n");
             return -EINVAL;
         }
 
···
         struct drm_connector_state *conn_state;
         struct vmw_connector_state *vmw_conn_state;
 
-        if (!new_crtc_state->enable && old_crtc_state->enable) {
+        if (!new_crtc_state->enable) {
             rects[i].x1 = 0;
             rects[i].y1 = 0;
             rects[i].x2 = 0;
···
     if (dev_priv->assume_16bpp)
         assumed_bpp = 2;
 
+    max_width  = min(max_width,  dev_priv->texture_max_width);
+    max_height = min(max_height, dev_priv->texture_max_height);
+
+    /*
+     * For STDU extra limit for a mode on SVGA_REG_SCREENTARGET_MAX_WIDTH/
+     * HEIGHT registers.
+     */
     if (dev_priv->active_display_unit == vmw_du_screen_target) {
         max_width  = min(max_width,  dev_priv->stdu_max_width);
-        max_width  = min(max_width,  dev_priv->texture_max_width);
-
         max_height = min(max_height, dev_priv->stdu_max_height);
-        max_height = min(max_height, dev_priv->texture_max_height);
     }
 
     /* Add preferred mode */
···
                 struct drm_file *file_priv)
 {
     struct vmw_private *dev_priv = vmw_priv(dev);
+    struct drm_mode_config *mode_config = &dev->mode_config;
     struct drm_vmw_update_layout_arg *arg =
         (struct drm_vmw_update_layout_arg *)data;
     void __user *user_rects;
···
         drm_rects[i].y1 = curr_rect.y;
         drm_rects[i].x2 = curr_rect.x + curr_rect.w;
         drm_rects[i].y2 = curr_rect.y + curr_rect.h;
+
+        /*
+         * Currently this check is limiting the topology within
+         * mode_config->max (which actually is max texture size
+         * supported by virtual device). This limit is here to address
+         * window managers that create a big framebuffer for whole
+         * topology.
+         */
+        if (drm_rects[i].x1 < 0 ||  drm_rects[i].y1 < 0 ||
+            drm_rects[i].x2 > mode_config->max_width ||
+            drm_rects[i].y2 > mode_config->max_height) {
+            DRM_ERROR("Invalid GUI layout.\n");
+            ret = -EINVAL;
+            goto out_free;
+        }
     }
 
     ret = vmw_kms_check_display_memory(dev, arg->num_outputs, drm_rects);
-25
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
···1600160016011601 dev_priv->active_display_unit = vmw_du_screen_target;1602160216031603- if (dev_priv->capabilities & SVGA_CAP_3D) {16041604- /*16051605- * For 3D VMs, display (scanout) buffer size is the smaller of16061606- * max texture and max STDU16071607- */16081608- uint32_t max_width, max_height;16091609-16101610- max_width = min(dev_priv->texture_max_width,16111611- dev_priv->stdu_max_width);16121612- max_height = min(dev_priv->texture_max_height,16131613- dev_priv->stdu_max_height);16141614-16151615- dev->mode_config.max_width = max_width;16161616- dev->mode_config.max_height = max_height;16171617- } else {16181618- /*16191619- * Given various display aspect ratios, there's no way to16201620- * estimate these using prim_bb_mem. So just set these to16211621- * something arbitrarily large and we will reject any layout16221622- * that doesn't fit prim_bb_mem later16231623- */16241624- dev->mode_config.max_width = 8192;16251625- dev->mode_config.max_height = 8192;16261626- }16271627-16281603 vmw_kms_create_implicit_placement_property(dev_priv, false);1629160416301605 for (i = 0; i < VMWGFX_NUM_DISPLAY_UNITS; ++i) {
+14-10
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
···14041404 *srf_out = NULL;1405140514061406 if (for_scanout) {14071407- uint32_t max_width, max_height;14081408-14091407 if (!svga3dsurface_is_screen_target_format(format)) {14101408 DRM_ERROR("Invalid Screen Target surface format.");14111409 return -EINVAL;14121410 }1413141114141414- max_width = min(dev_priv->texture_max_width,14151415- dev_priv->stdu_max_width);14161416- max_height = min(dev_priv->texture_max_height,14171417- dev_priv->stdu_max_height);14181418-14191419- if (size.width > max_width || size.height > max_height) {14121412+ if (size.width > dev_priv->texture_max_width ||14131413+ size.height > dev_priv->texture_max_height) {14201414 DRM_ERROR("%ux%u\n, exceeds max surface size %ux%u",14211415 size.width, size.height,14221422- max_width, max_height);14161416+ dev_priv->texture_max_width,14171417+ dev_priv->texture_max_height);14231418 return -EINVAL;14241419 }14251420 } else {···14901495 if (srf->flags & SVGA3D_SURFACE_BIND_STREAM_OUTPUT)14911496 srf->res.backup_size += sizeof(SVGA3dDXSOState);1492149714981498+ /*14991499+ * Don't set SVGA3D_SURFACE_SCREENTARGET flag for a scanout surface with15001500+ * size greater than STDU max width/height. This is really a workaround15011501+ * to support creation of big framebuffer requested by some user-space15021502+ * for whole topology. That big framebuffer won't really be used for15031503+ * binding with screen target as during prepare_fb a separate surface is15041504+ * created so it's safe to ignore SVGA3D_SURFACE_SCREENTARGET flag.15051505+ */14931506 if (dev_priv->active_display_unit == vmw_du_screen_target &&14941494- for_scanout)15071507+ for_scanout && size.width <= dev_priv->stdu_max_width &&15081508+ size.height <= dev_priv->stdu_max_height)14951509 srf->flags |= SVGA3D_SURFACE_SCREENTARGET;1496151014971511 /*
+2
drivers/gpu/vga/vga_switcheroo.c
···
 			return;
 
 		client->id = ret | ID_BIT_AUDIO;
+		if (client->ops->gpu_bound)
+			client->ops->gpu_bound(client->pdev, ret);
 	}
 
 	vga_switcheroo_debugfs_init(&vgasr_priv);
···338338}339339340340/**341341- * add_modify_gid - Add or modify GID table entry342342- *343343- * @table: GID table in which GID to be added or modified344344- * @attr: Attributes of the GID345345- *346346- * Returns 0 on success or appropriate error code. It accepts zero347347- * GID addition for non RoCE ports for HCA's who report them as valid348348- * GID. However such zero GIDs are not added to the cache.349349- */350350-static int add_modify_gid(struct ib_gid_table *table,351351- const struct ib_gid_attr *attr)352352-{353353- struct ib_gid_table_entry *entry;354354- int ret = 0;355355-356356- /*357357- * Invalidate any old entry in the table to make it safe to write to358358- * this index.359359- */360360- if (is_gid_entry_valid(table->data_vec[attr->index]))361361- put_gid_entry(table->data_vec[attr->index]);362362-363363- /*364364- * Some HCA's report multiple GID entries with only one valid GID, and365365- * leave other unused entries as the zero GID. Convert zero GIDs to366366- * empty table entries instead of storing them.367367- */368368- if (rdma_is_zero_gid(&attr->gid))369369- return 0;370370-371371- entry = alloc_gid_entry(attr);372372- if (!entry)373373- return -ENOMEM;374374-375375- if (rdma_protocol_roce(attr->device, attr->port_num)) {376376- ret = add_roce_gid(entry);377377- if (ret)378378- goto done;379379- }380380-381381- store_gid_entry(table, entry);382382- return 0;383383-384384-done:385385- put_gid_entry(entry);386386- return ret;387387-}388388-389389-/**390341 * del_gid - Delete GID table entry391342 *392343 * @ib_dev: IB device whose GID entry to be deleted···368417 write_unlock_irq(&table->rwlock);369418370419 put_gid_entry_locked(entry);420420+}421421+422422+/**423423+ * add_modify_gid - Add or modify GID table entry424424+ *425425+ * @table: GID table in which GID to be added or modified426426+ * @attr: Attributes of the GID427427+ *428428+ * Returns 0 on success or appropriate error code. It accepts zero429429+ * GID addition for non RoCE ports for HCA's who report them as valid430430+ * GID. However such zero GIDs are not added to the cache.431431+ */432432+static int add_modify_gid(struct ib_gid_table *table,433433+ const struct ib_gid_attr *attr)434434+{435435+ struct ib_gid_table_entry *entry;436436+ int ret = 0;437437+438438+ /*439439+ * Invalidate any old entry in the table to make it safe to write to440440+ * this index.441441+ */442442+ if (is_gid_entry_valid(table->data_vec[attr->index]))443443+ del_gid(attr->device, attr->port_num, table, attr->index);444444+445445+ /*446446+ * Some HCA's report multiple GID entries with only one valid GID, and447447+ * leave other unused entries as the zero GID. Convert zero GIDs to448448+ * empty table entries instead of storing them.449449+ */450450+ if (rdma_is_zero_gid(&attr->gid))451451+ return 0;452452+453453+ entry = alloc_gid_entry(attr);454454+ if (!entry)455455+ return -ENOMEM;456456+457457+ if (rdma_protocol_roce(attr->device, attr->port_num)) {458458+ ret = add_roce_gid(entry);459459+ if (ret)460460+ goto done;461461+ }462462+463463+ store_gid_entry(table, entry);464464+ return 0;465465+466466+done:467467+ put_gid_entry(entry);468468+ return ret;371469}372470373471/* rwlock should be read locked, or lock should be held */
+2
drivers/infiniband/core/ucma.c
···
 	mutex_lock(&mut);
 	if (!ctx->closing) {
 		mutex_unlock(&mut);
+		ucma_put_ctx(ctx);
+		wait_for_completion(&ctx->comp);
 		/* rdma_destroy_id ensures that no event handlers are
 		 * inflight for that id before releasing it.
 		 */
···7878/* Mutex to protect the list of bnxt_re devices added */7979static DEFINE_MUTEX(bnxt_re_dev_lock);8080static struct workqueue_struct *bnxt_re_wq;8181-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait);8181+static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev);82828383/* SR-IOV helper functions */8484···182182 if (!rdev)183183 return;184184185185- bnxt_re_ib_unreg(rdev, false);185185+ bnxt_re_ib_unreg(rdev);186186}187187188188static void bnxt_re_stop_irq(void *handle)···251251/* Driver registration routines used to let the networking driver (bnxt_en)252252 * to know that the RoCE driver is now installed253253 */254254-static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev, bool lock_wait)254254+static int bnxt_re_unregister_netdev(struct bnxt_re_dev *rdev)255255{256256 struct bnxt_en_dev *en_dev;257257 int rc;···260260 return -EINVAL;261261262262 en_dev = rdev->en_dev;263263- /* Acquire rtnl lock if it is not invokded from netdev event */264264- if (lock_wait)265265- rtnl_lock();266263267264 rc = en_dev->en_ops->bnxt_unregister_device(rdev->en_dev,268265 BNXT_ROCE_ULP);269269- if (lock_wait)270270- rtnl_unlock();271266 return rc;272267}273268···276281277282 en_dev = rdev->en_dev;278283279279- rtnl_lock();280284 rc = en_dev->en_ops->bnxt_register_device(en_dev, BNXT_ROCE_ULP,281285 &bnxt_re_ulp_ops, rdev);282282- rtnl_unlock();283286 return rc;284287}285288286286-static int bnxt_re_free_msix(struct bnxt_re_dev *rdev, bool lock_wait)289289+static int bnxt_re_free_msix(struct bnxt_re_dev *rdev)287290{288291 struct bnxt_en_dev *en_dev;289292 int rc;···291298292299 en_dev = rdev->en_dev;293300294294- if (lock_wait)295295- rtnl_lock();296301297302 rc = en_dev->en_ops->bnxt_free_msix(rdev->en_dev, BNXT_ROCE_ULP);298303299299- if (lock_wait)300300- rtnl_unlock();301304 return rc;302305}303306···309320310321 num_msix_want = min_t(u32, BNXT_RE_MAX_MSIX, num_online_cpus());311322312312- rtnl_lock();313323 num_msix_got = en_dev->en_ops->bnxt_request_msix(en_dev, BNXT_ROCE_ULP,314324 rdev->msix_entries,315325 num_msix_want);···323335 }324336 rdev->num_msix = num_msix_got;325337done:326326- rtnl_unlock();327338 return rc;328339}329340···345358 fw_msg->timeout = timeout;346359}347360348348-static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id,349349- bool lock_wait)361361+static int bnxt_re_net_ring_free(struct bnxt_re_dev *rdev, u16 fw_ring_id)350362{351363 struct bnxt_en_dev *en_dev = rdev->en_dev;352364 struct hwrm_ring_free_input req = {0};353365 struct hwrm_ring_free_output resp;354366 struct bnxt_fw_msg fw_msg;355355- bool do_unlock = false;356367 int rc = -EINVAL;357368358369 if (!en_dev)359370 return rc;360371361372 memset(&fw_msg, 0, sizeof(fw_msg));362362- if (lock_wait) {363363- rtnl_lock();364364- do_unlock = true;365365- }366373367374 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_FREE, -1, -1);368375 req.ring_type = RING_ALLOC_REQ_RING_TYPE_L2_CMPL;···367386 if (rc)368387 dev_err(rdev_to_dev(rdev),369388 "Failed to free HW ring:%d :%#x", req.ring_id, rc);370370- if (do_unlock)371371- rtnl_unlock();372389 return rc;373390}374391···384405 return rc;385406386407 memset(&fw_msg, 0, sizeof(fw_msg));387387- rtnl_lock();388408 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_RING_ALLOC, -1, -1);389409 req.enables = 0;390410 req.page_tbl_addr = cpu_to_le64(dma_arr[0]);···404426 if (!rc)405427 *fw_ring_id = le16_to_cpu(resp.ring_id);406428407407- rtnl_unlock();408429 return rc;409430}410431411432static int bnxt_re_net_stats_ctx_free(struct 
bnxt_re_dev *rdev,412412- u32 fw_stats_ctx_id, bool lock_wait)433433+ u32 fw_stats_ctx_id)413434{414435 struct bnxt_en_dev *en_dev = rdev->en_dev;415436 struct hwrm_stat_ctx_free_input req = {0};416437 struct bnxt_fw_msg fw_msg;417417- bool do_unlock = false;418438 int rc = -EINVAL;419439420440 if (!en_dev)421441 return rc;422442423443 memset(&fw_msg, 0, sizeof(fw_msg));424424- if (lock_wait) {425425- rtnl_lock();426426- do_unlock = true;427427- }428444429445 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_FREE, -1, -1);430446 req.stat_ctx_id = cpu_to_le32(fw_stats_ctx_id);···429457 dev_err(rdev_to_dev(rdev),430458 "Failed to free HW stats context %#x", rc);431459432432- if (do_unlock)433433- rtnl_unlock();434460 return rc;435461}436462···448478 return rc;449479450480 memset(&fw_msg, 0, sizeof(fw_msg));451451- rtnl_lock();452481453482 bnxt_re_init_hwrm_hdr(rdev, (void *)&req, HWRM_STAT_CTX_ALLOC, -1, -1);454483 req.update_period_ms = cpu_to_le32(1000);···459490 if (!rc)460491 *fw_stats_ctx_id = le32_to_cpu(resp.stat_ctx_id);461492462462- rtnl_unlock();463493 return rc;464494}465495···897929 return rc;898930}899931900900-static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev, bool lock_wait)932932+static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)901933{902934 int i;903935904936 for (i = 0; i < rdev->num_msix - 1; i++) {905905- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, lock_wait);937937+ bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id);906938 bnxt_qplib_free_nq(&rdev->nq[i]);907939 }908940}909941910910-static void bnxt_re_free_res(struct bnxt_re_dev *rdev, bool lock_wait)942942+static void bnxt_re_free_res(struct bnxt_re_dev *rdev)911943{912912- bnxt_re_free_nq_res(rdev, lock_wait);944944+ bnxt_re_free_nq_res(rdev);913945914946 if (rdev->qplib_res.dpi_tbl.max) {915947 bnxt_qplib_dealloc_dpi(&rdev->qplib_res,···11871219 return 0;11881220}1189122111901190-static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev, bool lock_wait)12221222+static void bnxt_re_ib_unreg(struct bnxt_re_dev *rdev)11911223{11921224 int i, rc;11931225···12021234 cancel_delayed_work(&rdev->worker);1203123512041236 bnxt_re_cleanup_res(rdev);12051205- bnxt_re_free_res(rdev, lock_wait);12371237+ bnxt_re_free_res(rdev);1206123812071239 if (test_and_clear_bit(BNXT_RE_FLAG_RCFW_CHANNEL_EN, &rdev->flags)) {12081240 rc = bnxt_qplib_deinit_rcfw(&rdev->rcfw);12091241 if (rc)12101242 dev_warn(rdev_to_dev(rdev),12111243 "Failed to deinitialize RCFW: %#x", rc);12121212- bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id,12131213- lock_wait);12441244+ bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);12141245 bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);12151246 bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);12161216- bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, lock_wait);12471247+ bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);12171248 bnxt_qplib_free_rcfw_channel(&rdev->rcfw);12181249 }12191250 if (test_and_clear_bit(BNXT_RE_FLAG_GOT_MSIX, &rdev->flags)) {12201220- rc = bnxt_re_free_msix(rdev, lock_wait);12511251+ rc = bnxt_re_free_msix(rdev);12211252 if (rc)12221253 dev_warn(rdev_to_dev(rdev),12231254 "Failed to free MSI-X vectors: %#x", rc);12241255 }12251256 if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags)) {12261226- rc = bnxt_re_unregister_netdev(rdev, lock_wait);12571257+ rc = bnxt_re_unregister_netdev(rdev);12271258 if (rc)12281259 dev_warn(rdev_to_dev(rdev),12291260 "Failed to unregister with netdev: %#x", rc);···12421275static int 
bnxt_re_ib_reg(struct bnxt_re_dev *rdev)12431276{12441277 int i, j, rc;12781278+12791279+ bool locked;12801280+12811281+ /* Acquire rtnl lock through out this function */12821282+ rtnl_lock();12831283+ locked = true;1245128412461285 /* Registered a new RoCE device instance to netdev */12471286 rc = bnxt_re_register_netdev(rdev);···13471374 schedule_delayed_work(&rdev->worker, msecs_to_jiffies(30000));13481375 }1349137613771377+ rtnl_unlock();13781378+ locked = false;13791379+13501380 /* Register ib dev */13511381 rc = bnxt_re_register_ib(rdev);13521382 if (rc) {13531383 pr_err("Failed to register with IB: %#x\n", rc);13541384 goto fail;13551385 }13861386+ set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);13561387 dev_info(rdev_to_dev(rdev), "Device registered successfully");13571388 for (i = 0; i < ARRAY_SIZE(bnxt_re_attributes); i++) {13581389 rc = device_create_file(&rdev->ibdev.dev,···13721395 goto fail;13731396 }13741397 }13751375- set_bit(BNXT_RE_FLAG_IBDEV_REGISTERED, &rdev->flags);13761398 ib_get_eth_speed(&rdev->ibdev, 1, &rdev->active_speed,13771399 &rdev->active_width);13781400 set_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS, &rdev->flags);···1380140413811405 return 0;13821406free_sctx:13831383- bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id, true);14071407+ bnxt_re_net_stats_ctx_free(rdev, rdev->qplib_ctx.stats.fw_id);13841408free_ctx:13851409 bnxt_qplib_free_ctx(rdev->en_dev->pdev, &rdev->qplib_ctx);13861410disable_rcfw:13871411 bnxt_qplib_disable_rcfw_channel(&rdev->rcfw);13881412free_ring:13891389- bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id, true);14131413+ bnxt_re_net_ring_free(rdev, rdev->rcfw.creq_ring_id);13901414free_rcfw:13911415 bnxt_qplib_free_rcfw_channel(&rdev->rcfw);13921416fail:13931393- bnxt_re_ib_unreg(rdev, true);14171417+ if (!locked)14181418+ rtnl_lock();14191419+ bnxt_re_ib_unreg(rdev);14201420+ rtnl_unlock();14211421+13941422 return rc;13951423}13961424···15471567 */15481568 if (atomic_read(&rdev->sched_count) > 0)15491569 goto exit;15501550- bnxt_re_ib_unreg(rdev, false);15701570+ bnxt_re_ib_unreg(rdev);15511571 bnxt_re_remove_one(rdev);15521572 bnxt_re_dev_unreg(rdev);15531573 break;···16261646 */16271647 flush_workqueue(bnxt_re_wq);16281648 bnxt_re_dev_stop(rdev);16291629- bnxt_re_ib_unreg(rdev, true);16491649+ /* Acquire the rtnl_lock as the L2 resources are freed here */16501650+ rtnl_lock();16511651+ bnxt_re_ib_unreg(rdev);16521652+ rtnl_unlock();16301653 bnxt_re_remove_one(rdev);16311654 bnxt_re_dev_unreg(rdev);16321655 }
+5-1
drivers/infiniband/hw/hfi1/chip.c
···67336733 struct hfi1_devdata *dd = ppd->dd;67346734 struct send_context *sc;67356735 int i;67366736+ int sc_flags;6736673767376738 if (flags & FREEZE_SELF)67386739 write_csr(dd, CCE_CTRL, CCE_CTRL_SPC_FREEZE_SMASK);···67446743 /* notify all SDMA engines that they are going into a freeze */67456744 sdma_freeze_notify(dd, !!(flags & FREEZE_LINK_DOWN));6746674567466746+ sc_flags = SCF_FROZEN | SCF_HALTED | (flags & FREEZE_LINK_DOWN ?67476747+ SCF_LINK_DOWN : 0);67476748 /* do halt pre-handling on all enabled send contexts */67486749 for (i = 0; i < dd->num_send_contexts; i++) {67496750 sc = dd->send_contexts[i].sc;67506751 if (sc && (sc->flags & SCF_ENABLED))67516751- sc_stop(sc, SCF_FROZEN | SCF_HALTED);67526752+ sc_stop(sc, sc_flags);67526753 }6753675467546755 /* Send context are frozen. Notify user space */···1067710674 add_rcvctrl(dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);10678106751067910676 handle_linkup_change(dd, 1);1067710677+ pio_kernel_linkup(dd);10680106781068110679 /*1068210680 * After link up, a new link width will have been set.
+41-10
drivers/infiniband/hw/hfi1/pio.c
···8686 unsigned long flags;8787 int write = 1; /* write sendctrl back */8888 int flush = 0; /* re-read sendctrl to make sure it is flushed */8989+ int i;89909091 spin_lock_irqsave(&dd->sendctrl_lock, flags);9192···9695 reg |= SEND_CTRL_SEND_ENABLE_SMASK;9796 /* Fall through */9897 case PSC_DATA_VL_ENABLE:9898+ mask = 0;9999+ for (i = 0; i < ARRAY_SIZE(dd->vld); i++)100100+ if (!dd->vld[i].mtu)101101+ mask |= BIT_ULL(i);99102 /* Disallow sending on VLs not enabled */100100- mask = (((~0ull) << num_vls) & SEND_CTRL_UNSUPPORTED_VL_MASK) <<101101- SEND_CTRL_UNSUPPORTED_VL_SHIFT;103103+ mask = (mask & SEND_CTRL_UNSUPPORTED_VL_MASK) <<104104+ SEND_CTRL_UNSUPPORTED_VL_SHIFT;102105 reg = (reg & ~SEND_CTRL_UNSUPPORTED_VL_SMASK) | mask;103106 break;104107 case PSC_GLOBAL_DISABLE:···926921void sc_disable(struct send_context *sc)927922{928923 u64 reg;929929- unsigned long flags;930924 struct pio_buf *pbuf;931925932926 if (!sc)933927 return;934928935929 /* do all steps, even if already disabled */936936- spin_lock_irqsave(&sc->alloc_lock, flags);930930+ spin_lock_irq(&sc->alloc_lock);937931 reg = read_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL));938932 reg &= ~SC(CTRL_CTXT_ENABLE_SMASK);939933 sc->flags &= ~SCF_ENABLED;940934 sc_wait_for_packet_egress(sc, 1);941935 write_kctxt_csr(sc->dd, sc->hw_context, SC(CTRL), reg);942942- spin_unlock_irqrestore(&sc->alloc_lock, flags);943936944937 /*945938 * Flush any waiters. Once the context is disabled,···947944 * proceed with the flush.948945 */949946 udelay(1);950950- spin_lock_irqsave(&sc->release_lock, flags);947947+ spin_lock(&sc->release_lock);951948 if (sc->sr) { /* this context has a shadow ring */952949 while (sc->sr_tail != sc->sr_head) {953950 pbuf = &sc->sr[sc->sr_tail].pbuf;···958955 sc->sr_tail = 0;959956 }960957 }961961- spin_unlock_irqrestore(&sc->release_lock, flags);958958+ spin_unlock(&sc->release_lock);959959+ spin_unlock_irq(&sc->alloc_lock);962960}963961964962/* return SendEgressCtxtStatus.PacketOccupancy */···11821178 sc = dd->send_contexts[i].sc;11831179 if (!sc || !(sc->flags & SCF_FROZEN) || sc->type == SC_USER)11841180 continue;11811181+ if (sc->flags & SCF_LINK_DOWN)11821182+ continue;1185118311861184 sc_enable(sc); /* will clear the sc frozen flag */11851185+ }11861186+}11871187+11881188+/**11891189+ * pio_kernel_linkup() - Re-enable send contexts after linkup event11901190+ * @dd: valid devive data11911191+ *11921192+ * When the link goes down, the freeze path is taken. 
However, a link down11931193+ * event is different from a freeze because if the send context is re-enabled11941194+ * whowever is sending data will start sending data again, which will hang11951195+ * any QP that is sending data.11961196+ *11971197+ * The freeze path now looks at the type of event that occurs and takes this11981198+ * path for link down event.11991199+ */12001200+void pio_kernel_linkup(struct hfi1_devdata *dd)12011201+{12021202+ struct send_context *sc;12031203+ int i;12041204+12051205+ for (i = 0; i < dd->num_send_contexts; i++) {12061206+ sc = dd->send_contexts[i].sc;12071207+ if (!sc || !(sc->flags & SCF_LINK_DOWN) || sc->type == SC_USER)12081208+ continue;12091209+12101210+ sc_enable(sc); /* will clear the sc link down flag */11871211 }11881212}11891213···14141382{14151383 unsigned long flags;1416138414171417- /* mark the context */14181418- sc->flags |= flag;14191419-14201385 /* stop buffer allocations */14211386 spin_lock_irqsave(&sc->alloc_lock, flags);13871387+ /* mark the context */13881388+ sc->flags |= flag;14221389 sc->flags &= ~SCF_ENABLED;14231390 spin_unlock_irqrestore(&sc->alloc_lock, flags);14241391 wake_up(&sc->halt_wait);
+2
drivers/infiniband/hw/hfi1/pio.h
···
 #define SCF_IN_FREE 0x02
 #define SCF_HALTED 0x04
 #define SCF_FROZEN 0x08
+#define SCF_LINK_DOWN 0x10
 
 struct send_context_info {
 	struct send_context *sc;	/* allocated working context */
···
 void pio_reset_all(struct hfi1_devdata *dd);
 void pio_freeze(struct hfi1_devdata *dd);
 void pio_kernel_unfreeze(struct hfi1_devdata *dd);
+void pio_kernel_linkup(struct hfi1_devdata *dd);
 
 /* global PIO send control operations */
 #define PSC_GLOBAL_ENABLE 0
+1-1
drivers/infiniband/hw/hfi1/user_sdma.c
···
 	if (READ_ONCE(iovec->offset) == iovec->iov.iov_len) {
 		if (++req->iov_idx == req->data_iovs) {
 			ret = -EFAULT;
-			goto free_txreq;
+			goto free_tx;
 		}
 		iovec = &req->iovs[req->iov_idx];
 		WARN_ON(iovec->offset);
···
 	struct i2c_client *client = to_i2c_client(dev);
 	int ret;
 
+	if (device_may_wakeup(dev))
+		return enable_irq_wake(client->irq);
+
 	ret = i2c_master_send(client, suspend_cmd, MAX_I2C_DATA_LEN);
 	return ret > 0 ? 0 : ret;
 }
···
 static int __maybe_unused egalax_ts_resume(struct device *dev)
 {
 	struct i2c_client *client = to_i2c_client(dev);
+
+	if (device_may_wakeup(dev))
+		return disable_irq_wake(client->irq);
 
 	return egalax_wake_up_device(client);
 }
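The egalax hunk above is the standard wakeup-IRQ dance: arm the interrupt as a wake source on suspend only when the device is allowed to wake the system, and disarm it again on resume. A minimal sketch of the same pattern for a hypothetical I2C input driver (all names are illustrative, not from the patch):

static int __maybe_unused example_ts_suspend(struct device *dev)
{
	struct i2c_client *client = to_i2c_client(dev);

	/* Keep the touch IRQ armed so it can wake the system. */
	if (device_may_wakeup(dev))
		return enable_irq_wake(client->irq);

	/* Otherwise the controller could be put into deep sleep here. */
	return 0;
}

static int __maybe_unused example_ts_resume(struct device *dev)
{
	struct i2c_client *client = to_i2c_client(dev);

	if (device_may_wakeup(dev))
		return disable_irq_wake(client->irq);

	return 0;
}

static SIMPLE_DEV_PM_OPS(example_ts_pm_ops, example_ts_suspend, example_ts_resume);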
+6
drivers/iommu/amd_iommu.c
···
 
 	/* The callers make sure that get_device_id() does not fail here */
 	devid = get_device_id(dev);
+
+	/* For ACPI HID devices, we simply return the devid as such */
+	if (!dev_is_pci(dev))
+		return devid;
+
 	ivrs_alias = amd_iommu_alias_table[devid];
+
 	pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
 
 	if (ivrs_alias == pci_alias)
+3-3
drivers/iommu/intel-iommu.c
···
 	if (dev && dev_is_pci(dev) && info->pasid_supported) {
 		ret = intel_pasid_alloc_table(dev);
 		if (ret) {
-			__dmar_remove_one_dev_info(info);
-			spin_unlock_irqrestore(&device_domain_lock, flags);
-			return NULL;
+			pr_warn("No pasid table for %s, pasid disabled\n",
+				dev_name(dev));
+			info->pasid_supported = 0;
 		}
 	}
 	spin_unlock_irqrestore(&device_domain_lock, flags);
···4747static DEFINE_IDA(bcache_device_idx);4848static wait_queue_head_t unregister_wait;4949struct workqueue_struct *bcache_wq;5050+struct workqueue_struct *bch_journal_wq;50515152#define BTREE_MAX_PAGES (256 * 1024 / PAGE_SIZE)5253/* limitation of partitions number on single bcache device */···23422341 kobject_put(bcache_kobj);23432342 if (bcache_wq)23442343 destroy_workqueue(bcache_wq);23442344+ if (bch_journal_wq)23452345+ destroy_workqueue(bch_journal_wq);23462346+23452347 if (bcache_major)23462348 unregister_blkdev(bcache_major, "bcache");23472349 unregister_reboot_notifier(&reboot);···2372236823732369 bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0);23742370 if (!bcache_wq)23712371+ goto err;23722372+23732373+ bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0);23742374+ if (!bch_journal_wq)23752375 goto err;2376237623772377 bcache_kobj = kobject_create_and_add("bcache", fs_kobj);
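The bcache change gives journalling its own WQ_MEM_RECLAIM workqueue rather than sharing bcache_wq, so journal work always has a rescuer thread available. A hedged sketch of the allocate/teardown pairing (module and queue names invented):

static struct workqueue_struct *example_journal_wq;

static int __init example_init(void)
{
	/* WQ_MEM_RECLAIM guarantees forward progress under memory pressure. */
	example_journal_wq = alloc_workqueue("example_journal", WQ_MEM_RECLAIM, 0);
	if (!example_journal_wq)
		return -ENOMEM;

	return 0;
}

static void __exit example_exit(void)
{
	if (example_journal_wq)
		destroy_workqueue(example_journal_wq);
}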
+13-28
drivers/media/i2c/mt9v111.c
···11591159 V4L2_CID_AUTO_WHITE_BALANCE,11601160 0, 1, 1,11611161 V4L2_WHITE_BALANCE_AUTO);11621162- if (IS_ERR_OR_NULL(mt9v111->auto_awb)) {11631163- ret = PTR_ERR(mt9v111->auto_awb);11641164- goto error_free_ctrls;11651165- }11661166-11671162 mt9v111->auto_exp = v4l2_ctrl_new_std_menu(&mt9v111->ctrls,11681163 &mt9v111_ctrl_ops,11691164 V4L2_CID_EXPOSURE_AUTO,11701165 V4L2_EXPOSURE_MANUAL,11711166 0, V4L2_EXPOSURE_AUTO);11721172- if (IS_ERR_OR_NULL(mt9v111->auto_exp)) {11731173- ret = PTR_ERR(mt9v111->auto_exp);11741174- goto error_free_ctrls;11751175- }11761176-11771177- /* Initialize timings */11781167 mt9v111->hblank = v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops,11791168 V4L2_CID_HBLANK,11801169 MT9V111_CORE_R05_MIN_HBLANK,11811170 MT9V111_CORE_R05_MAX_HBLANK, 1,11821171 MT9V111_CORE_R05_DEF_HBLANK);11831183- if (IS_ERR_OR_NULL(mt9v111->hblank)) {11841184- ret = PTR_ERR(mt9v111->hblank);11851185- goto error_free_ctrls;11861186- }11871187-11881172 mt9v111->vblank = v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops,11891173 V4L2_CID_VBLANK,11901174 MT9V111_CORE_R06_MIN_VBLANK,11911175 MT9V111_CORE_R06_MAX_VBLANK, 1,11921176 MT9V111_CORE_R06_DEF_VBLANK);11931193- if (IS_ERR_OR_NULL(mt9v111->vblank)) {11941194- ret = PTR_ERR(mt9v111->vblank);11951195- goto error_free_ctrls;11961196- }1197117711981178 /* PIXEL_RATE is fixed: just expose it to user space. */11991179 v4l2_ctrl_new_std(&mt9v111->ctrls, &mt9v111_ctrl_ops,···11811201 DIV_ROUND_CLOSEST(mt9v111->sysclk, 2), 1,11821202 DIV_ROUND_CLOSEST(mt9v111->sysclk, 2));1183120312041204+ if (mt9v111->ctrls.error) {12051205+ ret = mt9v111->ctrls.error;12061206+ goto error_free_ctrls;12071207+ }11841208 mt9v111->sd.ctrl_handler = &mt9v111->ctrls;1185120911861210 /* Start with default configuration: 640x480 UYVY. */···12101226 mt9v111->pad.flags = MEDIA_PAD_FL_SOURCE;12111227 ret = media_entity_pads_init(&mt9v111->sd.entity, 1, &mt9v111->pad);12121228 if (ret)12131213- goto error_free_ctrls;12291229+ goto error_free_entity;12141230#endif1215123112161232 ret = mt9v111_chip_probe(mt9v111);12171233 if (ret)12181218- goto error_free_ctrls;12341234+ goto error_free_entity;1219123512201236 ret = v4l2_async_register_subdev(&mt9v111->sd);12211237 if (ret)12221222- goto error_free_ctrls;12381238+ goto error_free_entity;1223123912241240 return 0;1225124112261226-error_free_ctrls:12271227- v4l2_ctrl_handler_free(&mt9v111->ctrls);12281228-12421242+error_free_entity:12291243#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)12301244 media_entity_cleanup(&mt9v111->sd.entity);12311245#endif12461246+12471247+error_free_ctrls:12481248+ v4l2_ctrl_handler_free(&mt9v111->ctrls);1232124912331250 mutex_destroy(&mt9v111->pwr_mutex);12341251 mutex_destroy(&mt9v111->stream_mutex);···1244125912451260 v4l2_async_unregister_subdev(sd);1246126112471247- v4l2_ctrl_handler_free(&mt9v111->ctrls);12481248-12491262#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)12501263 media_entity_cleanup(&sd->entity);12511264#endif12651265+12661266+ v4l2_ctrl_handler_free(&mt9v111->ctrls);1252126712531268 mutex_destroy(&mt9v111->pwr_mutex);12541269 mutex_destroy(&mt9v111->stream_mutex);
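The mt9v111 rework leans on the fact that a v4l2 control handler latches the first allocation error in its ->error field, so a single check after all the v4l2_ctrl_new_std() calls replaces the per-control checks that were removed. A sketch of that idiom, assuming a made-up example_sensor that carries a ctrl handler and a subdev:

static int example_init_controls(struct example_sensor *sensor)
{
	int ret;

	v4l2_ctrl_handler_init(&sensor->ctrls, 2);

	/* Each helper returns NULL on failure and records the error in
	 * sensor->ctrls.error, so the return values can be ignored here. */
	v4l2_ctrl_new_std(&sensor->ctrls, &example_ctrl_ops,
			  V4L2_CID_HBLANK, 10, 1000, 1, 100);
	v4l2_ctrl_new_std(&sensor->ctrls, &example_ctrl_ops,
			  V4L2_CID_VBLANK, 10, 1000, 1, 100);

	if (sensor->ctrls.error) {
		ret = sensor->ctrls.error;
		v4l2_ctrl_handler_free(&sensor->ctrls);
		return ret;
	}

	sensor->sd.ctrl_handler = &sensor->ctrls;
	return 0;
}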
+2
drivers/media/platform/Kconfig
···
 	depends on MFD_CROS_EC
 	select CEC_CORE
 	select CEC_NOTIFIER
+	select CHROME_PLATFORMS
+	select CROS_EC_PROTO
 	---help---
 	  If you say yes here you will get support for the
 	  ChromeOS Embedded Controller's CEC.
···3939 struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(code, 1),4040 SPI_MEM_OP_NO_ADDR,4141 SPI_MEM_OP_NO_DUMMY,4242- SPI_MEM_OP_DATA_IN(len, val, 1));4242+ SPI_MEM_OP_DATA_IN(len, NULL, 1));4343+ void *scratchbuf;4344 int ret;44454646+ scratchbuf = kmalloc(len, GFP_KERNEL);4747+ if (!scratchbuf)4848+ return -ENOMEM;4949+5050+ op.data.buf.in = scratchbuf;4551 ret = spi_mem_exec_op(flash->spimem, &op);4652 if (ret < 0)4753 dev_err(&flash->spimem->spi->dev, "error %d reading %x\n", ret,4854 code);5555+ else5656+ memcpy(val, scratchbuf, len);5757+5858+ kfree(scratchbuf);49595060 return ret;5161}···6656 struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1),6757 SPI_MEM_OP_NO_ADDR,6858 SPI_MEM_OP_NO_DUMMY,6969- SPI_MEM_OP_DATA_OUT(len, buf, 1));5959+ SPI_MEM_OP_DATA_OUT(len, NULL, 1));6060+ void *scratchbuf;6161+ int ret;70627171- return spi_mem_exec_op(flash->spimem, &op);6363+ scratchbuf = kmemdup(buf, len, GFP_KERNEL);6464+ if (!scratchbuf)6565+ return -ENOMEM;6666+6767+ op.data.buf.out = scratchbuf;6868+ ret = spi_mem_exec_op(flash->spimem, &op);6969+ kfree(scratchbuf);7070+7171+ return ret;7272}73737474static ssize_t m25p80_write(struct spi_nor *nor, loff_t to, size_t len,
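The m25p80 fix exists because spi-mem data buffers can end up in DMA, so they must come from kmalloc'd memory rather than the caller's stack. A hedged sketch of the read side of that bounce-buffer pattern (function name invented, macros as used in the hunk):

static int example_read_reg(struct spi_mem *mem, u8 opcode, u8 *val, size_t len)
{
	struct spi_mem_op op = SPI_MEM_OP(SPI_MEM_OP_CMD(opcode, 1),
					  SPI_MEM_OP_NO_ADDR,
					  SPI_MEM_OP_NO_DUMMY,
					  SPI_MEM_OP_DATA_IN(len, NULL, 1));
	void *scratch;
	int ret;

	/* DMA-safe scratch buffer; 'val' may live on the caller's stack. */
	scratch = kmalloc(len, GFP_KERNEL);
	if (!scratch)
		return -ENOMEM;

	op.data.buf.in = scratch;
	ret = spi_mem_exec_op(mem, &op);
	if (!ret)
		memcpy(val, scratch, len);

	kfree(scratch);
	return ret;
}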
+4-1
drivers/mtd/mtdpart.c
···
 	int ret, err = 0;
 
 	np = mtd_get_of_node(master);
-	if (!mtd_is_partition(master))
+	if (mtd_is_partition(master))
+		of_node_get(np);
+	else
 		np = of_get_child_by_name(np, "partitions");
+
 	of_property_for_each_string(np, "compatible", prop, compat) {
 		parser = mtd_part_get_compatible_parser(compat);
 		if (!parser)
+6
drivers/mtd/nand/raw/denali.c
···
 	}
 
 	iowrite32(DMA_ENABLE__FLAG, denali->reg + DMA_ENABLE);
+	/*
+	 * The ->setup_dma() hook kicks DMA by using the data/command
+	 * interface, which belongs to a different AXI port from the
+	 * register interface. Read back the register to avoid a race.
+	 */
+	ioread32(denali->reg + DMA_ENABLE);
 
 	denali_reset_irq(denali);
 	denali->setup_dma(denali, dma_addr, page, write);
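Reading the register straight back is how a driver makes sure a posted MMIO write has actually reached the device before the hardware is poked through another path. A minimal sketch of that write-then-readback ordering (the flag name is borrowed from the hunk, the rest is invented):

static void example_enable_dma(void __iomem *regs, unsigned int reg_off)
{
	iowrite32(DMA_ENABLE__FLAG, regs + reg_off);

	/* Flush the posted write: the value must be in the device before
	 * DMA is kicked off via a different bus port. */
	ioread32(regs + reg_off);
}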
+3-1
drivers/mtd/nand/raw/marvell_nand.c
···15471547 for (op_id = 0; op_id < subop->ninstrs; op_id++) {15481548 unsigned int offset, naddrs;15491549 const u8 *addrs;15501550- int len = nand_subop_get_data_len(subop, op_id);15501550+ int len;1551155115521552 instr = &subop->instrs[op_id];15531553···15931593 nfc_op->ndcb[0] |=15941594 NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) |15951595 NDCB0_LEN_OVRD;15961596+ len = nand_subop_get_data_len(subop, op_id);15961597 nfc_op->ndcb[3] |= round_up(len, FIFO_DEPTH);15971598 }15981599 nfc_op->data_delay_ns = instr->delay_ns;···16071606 nfc_op->ndcb[0] |=16081607 NDCB0_CMD_XTYPE(XTYPE_MONOLITHIC_RW) |16091608 NDCB0_LEN_OVRD;16091609+ len = nand_subop_get_data_len(subop, op_id);16101610 nfc_op->ndcb[3] |= round_up(len, FIFO_DEPTH);16111611 }16121612 nfc_op->data_delay_ns = instr->delay_ns;
+6-2
drivers/net/appletalk/ipddp.c
···
 	case SIOCFINDIPDDPRT:
 		spin_lock_bh(&ipddp_route_lock);
 		rp = __ipddp_find_route(&rcp);
-		if (rp)
-			memcpy(&rcp2, rp, sizeof(rcp2));
+		if (rp) {
+			memset(&rcp2, 0, sizeof(rcp2));
+			rcp2.ip = rp->ip;
+			rcp2.at = rp->at;
+			rcp2.flags = rp->flags;
+		}
 		spin_unlock_bh(&ipddp_route_lock);
 
 		if (rp) {
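Copying a whole kernel structure out to user space can leak struct padding and fields the ioctl never meant to expose; the ipddp fix zeroes a stack copy and fills only the documented members. A generic sketch of that pattern (struct and field names invented):

struct example_route {
	u32 ip;
	u32 at;
	u32 flags;
};

static int example_copy_route(struct example_route __user *uarg,
			      const struct example_route *rt)
{
	struct example_route out;

	/* Zero first so padding bytes never reach user space. */
	memset(&out, 0, sizeof(out));
	out.ip = rt->ip;
	out.at = rt->at;
	out.flags = rt->flags;

	return copy_to_user(uarg, &out, sizeof(out)) ? -EFAULT : 0;
}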
+2-9
drivers/net/bonding/bond_main.c
···
 	struct slave *slave = NULL;
 	struct list_head *iter;
 	struct ad_info ad_info;
-	struct netpoll_info *ni;
-	const struct net_device_ops *ops;
 
 	if (BOND_MODE(bond) == BOND_MODE_8023AD)
 		if (bond_3ad_get_active_agg_info(bond, &ad_info))
 			return;
 
 	bond_for_each_slave_rcu(bond, slave, iter) {
-		ops = slave->dev->netdev_ops;
-		if (!bond_slave_is_up(slave) || !ops->ndo_poll_controller)
+		if (!bond_slave_is_up(slave))
 			continue;
 
 		if (BOND_MODE(bond) == BOND_MODE_8023AD) {
···
 			continue;
 		}
 
-		ni = rcu_dereference_bh(slave->dev->npinfo);
-		if (down_trylock(&ni->dev_lock))
-			continue;
-		ops->ndo_poll_controller(slave->dev);
-		up(&ni->dev_lock);
+		netpoll_poll_dev(slave->dev);
 	}
 }
···
 
 /* Index to functions, as function prototypes. */
 static int net_open(struct net_device *dev);
-static int net_send_packet(struct sk_buff *skb, struct net_device *dev);
+static netdev_tx_t net_send_packet(struct sk_buff *skb, struct net_device *dev);
 static irqreturn_t net_interrupt(int irq, void *dev_id);
 static void set_multicast_list(struct net_device *dev);
 static void net_rx(struct net_device *dev);
···
 	return 0;
 }
 
-static int
+static netdev_tx_t
 net_send_packet(struct sk_buff *skb, struct net_device *dev)
 {
 	struct net_local *lp = netdev_priv(dev);
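The prototype change above matters because .ndo_start_xmit is declared to return netdev_tx_t, so callers and function-pointer type checks expect exactly that signature. A minimal sketch of a conforming transmit handler (driver details invented):

static netdev_tx_t example_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* Queue the frame to hardware here; on success the skb is consumed. */
	dev_kfree_skb_any(skb);		/* placeholder for the real TX path */

	return NETDEV_TX_OK;		/* or NETDEV_TX_BUSY to make the core retry */
}

static const struct net_device_ops example_netdev_ops = {
	.ndo_start_xmit	= example_start_xmit,
};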
+1-1
drivers/net/ethernet/hp/hp100.c
···
 	/* Wait for link to drop */
 	time = jiffies + (HZ / 10);
 	do {
-		if (~(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
+		if (!(hp100_inb(VG_LAN_CFG_1) & HP100_LINK_UP_ST))
 			break;
 		if (!in_interrupt())
 			schedule_timeout_interruptible(1);
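The hp100 one-character fix is the classic bitwise-vs-logical negation bug: ~(reg & BIT) is non-zero for every possible register value, so the wait loop bailed out immediately instead of polling for link-down. A tiny illustration (helper name invented, the mask taken from the hunk):

/* Returns true only while the link-up status bit is clear. */
static bool example_link_is_down(u8 cfg)
{
	/* Wrong: ~(cfg & HP100_LINK_UP_ST) flips all bits of the promoted
	 * int and is therefore always non-zero ("always true"). */
	/* return ~(cfg & HP100_LINK_UP_ST); */

	/* Right: logical negation tests the masked value against zero. */
	return !(cfg & HP100_LINK_UP_ST);
}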
···26772677 if (of_phy_is_fixed_link(np)) {26782678 int res = emac_dt_mdio_probe(dev);2679267926802680- if (!res) {26812681- res = of_phy_register_fixed_link(np);26822682- if (res)26832683- mdiobus_unregister(dev->mii_bus);26802680+ if (res)26812681+ return res;26822682+26832683+ res = of_phy_register_fixed_link(np);26842684+ dev->phy_dev = of_phy_find_device(np);26852685+ if (res || !dev->phy_dev) {26862686+ mdiobus_unregister(dev->mii_bus);26872687+ return res ? res : -EINVAL;26842688 }26852685- return res;26892689+ emac_adjust_link(dev->ndev);26902690+ put_device(&dev->phy_dev->mdio.dev);26862691 }26872692 return 0;26882693 }
···12101210 return IRQ_HANDLED;12111211}1212121212131213-#ifdef CONFIG_NET_POLL_CONTROLLER12141214-/**12151215- * fm10k_netpoll - A Polling 'interrupt' handler12161216- * @netdev: network interface device structure12171217- *12181218- * This is used by netconsole to send skbs without having to re-enable12191219- * interrupts. It's not called while the normal interrupt routine is executing.12201220- **/12211221-void fm10k_netpoll(struct net_device *netdev)12221222-{12231223- struct fm10k_intfc *interface = netdev_priv(netdev);12241224- int i;12251225-12261226- /* if interface is down do nothing */12271227- if (test_bit(__FM10K_DOWN, interface->state))12281228- return;12291229-12301230- for (i = 0; i < interface->num_q_vectors; i++)12311231- fm10k_msix_clean_rings(0, interface->q_vector[i]);12321232-}12331233-12341234-#endif12351213#define FM10K_ERR_MSG(type) case (type): error = #type; break12361214static void fm10k_handle_fault(struct fm10k_intfc *interface, int type,12371215 struct fm10k_fault *fault)
-26
drivers/net/ethernet/intel/i40evf/i40evf_main.c
···396396 adapter->aq_required |= I40EVF_FLAG_AQ_MAP_VECTORS;397397}398398399399-#ifdef CONFIG_NET_POLL_CONTROLLER400400-/**401401- * i40evf_netpoll - A Polling 'interrupt' handler402402- * @netdev: network interface device structure403403- *404404- * This is used by netconsole to send skbs without having to re-enable405405- * interrupts. It's not called while the normal interrupt routine is executing.406406- **/407407-static void i40evf_netpoll(struct net_device *netdev)408408-{409409- struct i40evf_adapter *adapter = netdev_priv(netdev);410410- int q_vectors = adapter->num_msix_vectors - NONQ_VECS;411411- int i;412412-413413- /* if interface is down do nothing */414414- if (test_bit(__I40E_VSI_DOWN, adapter->vsi.state))415415- return;416416-417417- for (i = 0; i < q_vectors; i++)418418- i40evf_msix_clean_rings(0, &adapter->q_vectors[i]);419419-}420420-421421-#endif422399/**423400 * i40evf_irq_affinity_notify - Callback for affinity changes424401 * @notify: context as to what irq was changed···32063229 .ndo_features_check = i40evf_features_check,32073230 .ndo_fix_features = i40evf_fix_features,32083231 .ndo_set_features = i40evf_set_features,32093209-#ifdef CONFIG_NET_POLL_CONTROLLER32103210- .ndo_poll_controller = i40evf_netpoll,32113211-#endif32123232 .ndo_setup_tc = i40evf_setup_tc,32133233};32143234
-27
drivers/net/ethernet/intel/ice/ice_main.c
···48064806 stats->rx_length_errors = vsi_stats->rx_length_errors;48074807}4808480848094809-#ifdef CONFIG_NET_POLL_CONTROLLER48104810-/**48114811- * ice_netpoll - polling "interrupt" handler48124812- * @netdev: network interface device structure48134813- *48144814- * Used by netconsole to send skbs without having to re-enable interrupts.48154815- * This is not called in the normal interrupt path.48164816- */48174817-static void ice_netpoll(struct net_device *netdev)48184818-{48194819- struct ice_netdev_priv *np = netdev_priv(netdev);48204820- struct ice_vsi *vsi = np->vsi;48214821- struct ice_pf *pf = vsi->back;48224822- int i;48234823-48244824- if (test_bit(__ICE_DOWN, vsi->state) ||48254825- !test_bit(ICE_FLAG_MSIX_ENA, pf->flags))48264826- return;48274827-48284828- for (i = 0; i < vsi->num_q_vectors; i++)48294829- ice_msix_clean_rings(0, vsi->q_vectors[i]);48304830-}48314831-#endif /* CONFIG_NET_POLL_CONTROLLER */48324832-48334809/**48344810 * ice_napi_disable_all - Disable NAPI for all q_vectors in the VSI48354811 * @vsi: VSI having NAPI disabled···54735497 .ndo_validate_addr = eth_validate_addr,54745498 .ndo_change_mtu = ice_change_mtu,54755499 .ndo_get_stats64 = ice_get_stats64,54765476-#ifdef CONFIG_NET_POLL_CONTROLLER54775477- .ndo_poll_controller = ice_netpoll,54785478-#endif /* CONFIG_NET_POLL_CONTROLLER */54795500 .ndo_vlan_rx_add_vid = ice_vlan_rx_add_vid,54805501 .ndo_vlan_rx_kill_vid = ice_vlan_rx_kill_vid,54815502 .ndo_set_features = ice_set_features,
-30
drivers/net/ethernet/intel/igb/igb_main.c
···205205 .priority = 0206206};207207#endif208208-#ifdef CONFIG_NET_POLL_CONTROLLER209209-/* for netdump / net console */210210-static void igb_netpoll(struct net_device *);211211-#endif212208#ifdef CONFIG_PCI_IOV213209static unsigned int max_vfs;214210module_param(max_vfs, uint, 0);···28772881 .ndo_set_vf_spoofchk = igb_ndo_set_vf_spoofchk,28782882 .ndo_set_vf_trust = igb_ndo_set_vf_trust,28792883 .ndo_get_vf_config = igb_ndo_get_vf_config,28802880-#ifdef CONFIG_NET_POLL_CONTROLLER28812881- .ndo_poll_controller = igb_netpoll,28822882-#endif28832884 .ndo_fix_features = igb_fix_features,28842885 .ndo_set_features = igb_set_features,28852886 .ndo_fdb_add = igb_ndo_fdb_add,···90459052#endif90469053 return 0;90479054}90489048-90499049-#ifdef CONFIG_NET_POLL_CONTROLLER90509050-/* Polling 'interrupt' - used by things like netconsole to send skbs90519051- * without having to re-enable interrupts. It's not called while90529052- * the interrupt routine is executing.90539053- */90549054-static void igb_netpoll(struct net_device *netdev)90559055-{90569056- struct igb_adapter *adapter = netdev_priv(netdev);90579057- struct e1000_hw *hw = &adapter->hw;90589058- struct igb_q_vector *q_vector;90599059- int i;90609060-90619061- for (i = 0; i < adapter->num_q_vectors; i++) {90629062- q_vector = adapter->q_vector[i];90639063- if (adapter->flags & IGB_FLAG_HAS_MSIX)90649064- wr32(E1000_EIMC, q_vector->eims_value);90659065- else90669066- igb_irq_disable(adapter);90679067- napi_schedule(&q_vector->napi);90689068- }90699069-}90709070-#endif /* CONFIG_NET_POLL_CONTROLLER */9071905590729056/**90739057 * igb_io_error_detected - called when PCI error is detected
-25
drivers/net/ethernet/intel/ixgb/ixgb_main.c
···8181 __be16 proto, u16 vid);8282static void ixgb_restore_vlan(struct ixgb_adapter *adapter);83838484-#ifdef CONFIG_NET_POLL_CONTROLLER8585-/* for netdump / net console */8686-static void ixgb_netpoll(struct net_device *dev);8787-#endif8888-8984static pci_ers_result_t ixgb_io_error_detected (struct pci_dev *pdev,9085 enum pci_channel_state state);9186static pci_ers_result_t ixgb_io_slot_reset (struct pci_dev *pdev);···343348 .ndo_tx_timeout = ixgb_tx_timeout,344349 .ndo_vlan_rx_add_vid = ixgb_vlan_rx_add_vid,345350 .ndo_vlan_rx_kill_vid = ixgb_vlan_rx_kill_vid,346346-#ifdef CONFIG_NET_POLL_CONTROLLER347347- .ndo_poll_controller = ixgb_netpoll,348348-#endif349351 .ndo_fix_features = ixgb_fix_features,350352 .ndo_set_features = ixgb_set_features,351353};···21862194 for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)21872195 ixgb_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid);21882196}21892189-21902190-#ifdef CONFIG_NET_POLL_CONTROLLER21912191-/*21922192- * Polling 'interrupt' - used by things like netconsole to send skbs21932193- * without having to re-enable interrupts. It's not called while21942194- * the interrupt routine is executing.21952195- */21962196-21972197-static void ixgb_netpoll(struct net_device *dev)21982198-{21992199- struct ixgb_adapter *adapter = netdev_priv(dev);22002200-22012201- disable_irq(adapter->pdev->irq);22022202- ixgb_intr(adapter->pdev->irq, dev);22032203- enable_irq(adapter->pdev->irq);22042204-}22052205-#endif2206219722072198/**22082199 * ixgb_io_error_detected - called when PCI error is detected
-25
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
···87688768 return err;87698769}8770877087718771-#ifdef CONFIG_NET_POLL_CONTROLLER87728772-/*87738773- * Polling 'interrupt' - used by things like netconsole to send skbs87748774- * without having to re-enable interrupts. It's not called while87758775- * the interrupt routine is executing.87768776- */87778777-static void ixgbe_netpoll(struct net_device *netdev)87788778-{87798779- struct ixgbe_adapter *adapter = netdev_priv(netdev);87808780- int i;87818781-87828782- /* if interface is down do nothing */87838783- if (test_bit(__IXGBE_DOWN, &adapter->state))87848784- return;87858785-87868786- /* loop through and schedule all active queues */87878787- for (i = 0; i < adapter->num_q_vectors; i++)87888788- ixgbe_msix_clean_rings(0, adapter->q_vector[i]);87898789-}87908790-87918791-#endif87928792-87938771static void ixgbe_get_ring_stats64(struct rtnl_link_stats64 *stats,87948772 struct ixgbe_ring *ring)87958773{···1022910251 .ndo_get_vf_config = ixgbe_ndo_get_vf_config,1023010252 .ndo_get_stats64 = ixgbe_get_stats64,1023110253 .ndo_setup_tc = __ixgbe_setup_tc,1023210232-#ifdef CONFIG_NET_POLL_CONTROLLER1023310233- .ndo_poll_controller = ixgbe_netpoll,1023410234-#endif1023510254#ifdef IXGBE_FCOE1023610255 .ndo_select_queue = ixgbe_select_queue,1023710256 .ndo_fcoe_ddp_setup = ixgbe_fcoe_ddp_get,
-21
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
···42334233 return 0;42344234}4235423542364236-#ifdef CONFIG_NET_POLL_CONTROLLER42374237-/* Polling 'interrupt' - used by things like netconsole to send skbs42384238- * without having to re-enable interrupts. It's not called while42394239- * the interrupt routine is executing.42404240- */42414241-static void ixgbevf_netpoll(struct net_device *netdev)42424242-{42434243- struct ixgbevf_adapter *adapter = netdev_priv(netdev);42444244- int i;42454245-42464246- /* if interface is down do nothing */42474247- if (test_bit(__IXGBEVF_DOWN, &adapter->state))42484248- return;42494249- for (i = 0; i < adapter->num_rx_queues; i++)42504250- ixgbevf_msix_clean_rings(0, adapter->q_vector[i]);42514251-}42524252-#endif /* CONFIG_NET_POLL_CONTROLLER */42534253-42544236static int ixgbevf_suspend(struct pci_dev *pdev, pm_message_t state)42554237{42564238 struct net_device *netdev = pci_get_drvdata(pdev);···44644482 .ndo_tx_timeout = ixgbevf_tx_timeout,44654483 .ndo_vlan_rx_add_vid = ixgbevf_vlan_rx_add_vid,44664484 .ndo_vlan_rx_kill_vid = ixgbevf_vlan_rx_kill_vid,44674467-#ifdef CONFIG_NET_POLL_CONTROLLER44684468- .ndo_poll_controller = ixgbevf_netpoll,44694469-#endif44704485 .ndo_features_check = ixgbevf_features_check,44714486 .ndo_bpf = ixgbevf_xdp,44724487};
···5858 */5959static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,6060 const struct phylink_link_state *state);6161+static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,6262+ phy_interface_t interface, struct phy_device *phy);61636264/* Queue modes */6365#define MVPP2_QDIST_SINGLE_MODE 0···30553053 cause_rx_tx & ~MVPP2_CAUSE_MISC_SUM_MASK);30563054 }3057305530583058- cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;30593059- if (cause_tx) {30603060- cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;30613061- mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);30563056+ if (port->has_tx_irqs) {30573057+ cause_tx = cause_rx_tx & MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_MASK;30583058+ if (cause_tx) {30593059+ cause_tx >>= MVPP2_CAUSE_TXQ_OCCUP_DESC_ALL_OFFSET;30603060+ mvpp2_tx_done(port, cause_tx, qv->sw_thread_id);30613061+ }30623062 }3063306330643064 /* Process RX packets */···31463142 mvpp22_mode_reconfigure(port);3147314331483144 if (port->phylink) {31453145+ netif_carrier_off(port->dev);31493146 phylink_start(port->phylink);31503147 } else {31513148 /* Phylink isn't used as of now for ACPI, so the MAC has to be···31553150 */31563151 struct phylink_link_state state = {31573152 .interface = port->phy_interface,31583158- .link = 1,31593153 };31603154 mvpp2_mac_config(port->dev, MLO_AN_INBAND, &state);31553155+ mvpp2_mac_link_up(port->dev, MLO_AN_INBAND, port->phy_interface,31563156+ NULL);31613157 }3162315831633159 netif_tx_start_all_queues(port->dev);···45014495 return;45024496 }4503449745044504- netif_tx_stop_all_queues(port->dev);45054505- if (!port->has_phy)45064506- netif_carrier_off(port->dev);45074507-45084498 /* Make sure the port is disabled when reconfiguring the mode */45094499 mvpp2_port_disable(port);45104500···45254523 if (port->priv->hw_version == MVPP21 && port->flags & MVPP2_F_LOOPBACK)45264524 mvpp2_port_loopback_set(port, state);4527452545284528- /* If the port already was up, make sure it's still in the same state */45294529- if (state->link || !port->has_phy) {45304530- mvpp2_port_enable(port);45314531-45324532- mvpp2_egress_enable(port);45334533- mvpp2_ingress_enable(port);45344534- if (!port->has_phy)45354535- netif_carrier_on(dev);45364536- netif_tx_wake_all_queues(dev);45374537- }45264526+ mvpp2_port_enable(port);45384527}4539452845404529static void mvpp2_mac_link_up(struct net_device *dev, unsigned int mode,
···
 	u8 own;
 
 	do {
-		own = ent->lay->status_own;
+		own = READ_ONCE(ent->lay->status_own);
 		if (!(own & CMD_OWNER_HW)) {
 			ent->ret = 0;
 			return;
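READ_ONCE() in the polling loop forces a fresh load of the ownership byte on every pass; without it the compiler is free to hoist the read out of the loop, since nothing in the loop body writes it. A hedged sketch of the same polling shape (the struct layout is assumed to mirror the hunk):

static int example_poll_owner(struct example_cmd_ent *ent, unsigned long timeout)
{
	unsigned long end = jiffies + timeout;
	u8 own;

	do {
		/* The device flips this bit; re-read it from memory each pass. */
		own = READ_ONCE(ent->lay->status_own);
		if (!(own & CMD_OWNER_HW))
			return 0;

		cond_resched();
	} while (time_before(jiffies, end));

	return -ETIMEDOUT;
}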
···31463146 return nfp_net_reconfig_mbox(nn, NFP_NET_CFG_MBOX_CMD_CTAG_FILTER_KILL);31473147}3148314831493149-#ifdef CONFIG_NET_POLL_CONTROLLER31503150-static void nfp_net_netpoll(struct net_device *netdev)31513151-{31523152- struct nfp_net *nn = netdev_priv(netdev);31533153- int i;31543154-31553155- /* nfp_net's NAPIs are statically allocated so even if there is a race31563156- * with reconfig path this will simply try to schedule some disabled31573157- * NAPI instances.31583158- */31593159- for (i = 0; i < nn->dp.num_stack_tx_rings; i++)31603160- napi_schedule_irqoff(&nn->r_vecs[i].napi);31613161-}31623162-#endif31633163-31643149static void nfp_net_stat64(struct net_device *netdev,31653150 struct rtnl_link_stats64 *stats)31663151{···35043519 .ndo_get_stats64 = nfp_net_stat64,35053520 .ndo_vlan_rx_add_vid = nfp_net_vlan_rx_add_vid,35063521 .ndo_vlan_rx_kill_vid = nfp_net_vlan_rx_kill_vid,35073507-#ifdef CONFIG_NET_POLL_CONTROLLER35083508- .ndo_poll_controller = nfp_net_netpoll,35093509-#endif35103522 .ndo_set_vf_mac = nfp_app_set_vf_mac,35113523 .ndo_set_vf_vlan = nfp_app_set_vf_vlan,35123524 .ndo_set_vf_spoofchk = nfp_app_set_vf_spoofchk,
+28-17
drivers/net/ethernet/qlogic/qed/qed_dcbx.c
···190190191191static void192192qed_dcbx_set_params(struct qed_dcbx_results *p_data,193193- struct qed_hw_info *p_info,194194- bool enable,195195- u8 prio,196196- u8 tc,193193+ struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,194194+ bool enable, u8 prio, u8 tc,197195 enum dcbx_protocol_type type,198196 enum qed_pci_personality personality)199197{···204206 else205207 p_data->arr[type].update = DONT_UPDATE_DCB_DSCP;206208209209+ /* Do not add vlan tag 0 when DCB is enabled and port in UFP/OV mode */210210+ if ((test_bit(QED_MF_8021Q_TAGGING, &p_hwfn->cdev->mf_bits) ||211211+ test_bit(QED_MF_8021AD_TAGGING, &p_hwfn->cdev->mf_bits)))212212+ p_data->arr[type].dont_add_vlan0 = true;213213+207214 /* QM reconf data */208208- if (p_info->personality == personality)209209- qed_hw_info_set_offload_tc(p_info, tc);215215+ if (p_hwfn->hw_info.personality == personality)216216+ qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc);217217+218218+ /* Configure dcbx vlan priority in doorbell block for roce EDPM */219219+ if (test_bit(QED_MF_UFP_SPECIFIC, &p_hwfn->cdev->mf_bits) &&220220+ type == DCBX_PROTOCOL_ROCE) {221221+ qed_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);222222+ qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1);223223+ }210224}211225212226/* Update app protocol data and hw_info fields with the TLV info */213227static void214228qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,215215- struct qed_hwfn *p_hwfn,216216- bool enable,217217- u8 prio, u8 tc, enum dcbx_protocol_type type)229229+ struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,230230+ bool enable, u8 prio, u8 tc,231231+ enum dcbx_protocol_type type)218232{219219- struct qed_hw_info *p_info = &p_hwfn->hw_info;220233 enum qed_pci_personality personality;221234 enum dcbx_protocol_type id;222235 int i;···240231241232 personality = qed_dcbx_app_update[i].personality;242233243243- qed_dcbx_set_params(p_data, p_info, enable,234234+ qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable,244235 prio, tc, type, personality);245236 }246237}···274265 * reconfiguring QM. Get protocol specific data for PF update ramrod command.275266 */276267static int277277-qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn,268268+qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,278269 struct qed_dcbx_results *p_data,279270 struct dcbx_app_priority_entry *p_tbl,280271 u32 pri_tc_tbl, int count, u8 dcbx_version)···318309 enable = true;319310 }320311321321- qed_dcbx_update_app_info(p_data, p_hwfn, enable,312312+ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,322313 priority, tc, type);323314 }324315 }···340331 continue;341332342333 enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version;343343- qed_dcbx_update_app_info(p_data, p_hwfn, enable,334334+ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,344335 priority, tc, type);345336 }346337···350341/* Parse app TLV's to update TC information in hw_info structure for351342 * reconfiguring QM. 
Get protocol specific data for PF update ramrod command.352343 */353353-static int qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn)344344+static int345345+qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)354346{355347 struct dcbx_app_priority_feature *p_app;356348 struct dcbx_app_priority_entry *p_tbl;···375365 p_info = &p_hwfn->hw_info;376366 num_entries = QED_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);377367378378- rc = qed_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,368368+ rc = qed_dcbx_process_tlv(p_hwfn, p_ptt, &data, p_tbl, pri_tc_tbl,379369 num_entries, dcbx_version);380370 if (rc)381371 return rc;···901891 return rc;902892903893 if (type == QED_DCBX_OPERATIONAL_MIB) {904904- rc = qed_dcbx_process_mib_info(p_hwfn);894894+ rc = qed_dcbx_process_mib_info(p_hwfn, p_ptt);905895 if (!rc) {906896 /* reconfigure tcs of QM queues according907897 * to negotiation results···964954 p_data->dcb_enable_flag = p_src->arr[type].enable;965955 p_data->dcb_priority = p_src->arr[type].priority;966956 p_data->dcb_tc = p_src->arr[type].tc;957957+ p_data->dcb_dont_add_vlan0 = p_src->arr[type].dont_add_vlan0;967958}968959969960/* Set pf update ramrod command params */
+1
drivers/net/ethernet/qlogic/qed/qed_dcbx.h
···
 	u8 update;		/* Update indication */
 	u8 priority;		/* Priority */
 	u8 tc;			/* Traffic Class */
+	bool dont_add_vlan0;	/* Do not insert a vlan tag with id 0 */
 };
 
 #define QED_DCBX_VERSION_DISABLED 0
···
  * Description:
  * This function validates the number of Unicast address entries supported
  * by a particular Synopsys 10/100/1000 controller. The Synopsys controller
- * supports 1, 32, 64, or 128 Unicast filter entries for it's Unicast filter
+ * supports 1..32, 64, or 128 Unicast filter entries for it's Unicast filter
  * logic. This function validates a valid, supported configuration is
  * selected, and defaults to 1 Unicast address if an unsupported
  * configuration is selected.
···
 	int x = ucast_entries;
 
 	switch (x) {
-	case 1:
-	case 32:
+	case 1 ... 32:
 	case 64:
 	case 128:
 		break;
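case 1 ... 32: uses the GCC/Clang case-range extension the kernel relies on throughout; the rewrite accepts the whole 1-32 span where the old code matched only the single value 1. A small sketch of validating a count with it (function name invented):

static int example_validate_ucast_entries(int entries)
{
	switch (entries) {
	case 1 ... 32:		/* any value from 1 to 32 inclusive */
	case 64:
	case 128:
		return entries;
	default:
		pr_warn("unsupported count %d, using 1\n", entries);
		return 1;
	}
}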
···18941894 rtnl_unlock();18951895}1896189618971897-static struct net_device *get_netvsc_bymac(const u8 *mac)18981898-{18991899- struct net_device_context *ndev_ctx;19001900-19011901- list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {19021902- struct net_device *dev = hv_get_drvdata(ndev_ctx->device_ctx);19031903-19041904- if (ether_addr_equal(mac, dev->perm_addr))19051905- return dev;19061906- }19071907-19081908- return NULL;19091909-}19101910-19111897static struct net_device *get_netvsc_byref(struct net_device *vf_netdev)19121898{19131899 struct net_device_context *net_device_ctx;···20222036 rtnl_unlock();20232037}2024203820392039+/* Find netvsc by VMBus serial number.20402040+ * The PCI hyperv controller records the serial number as the slot.20412041+ */20422042+static struct net_device *get_netvsc_byslot(const struct net_device *vf_netdev)20432043+{20442044+ struct device *parent = vf_netdev->dev.parent;20452045+ struct net_device_context *ndev_ctx;20462046+ struct pci_dev *pdev;20472047+20482048+ if (!parent || !dev_is_pci(parent))20492049+ return NULL; /* not a PCI device */20502050+20512051+ pdev = to_pci_dev(parent);20522052+ if (!pdev->slot) {20532053+ netdev_notice(vf_netdev, "no PCI slot information\n");20542054+ return NULL;20552055+ }20562056+20572057+ list_for_each_entry(ndev_ctx, &netvsc_dev_list, list) {20582058+ if (!ndev_ctx->vf_alloc)20592059+ continue;20602060+20612061+ if (ndev_ctx->vf_serial == pdev->slot->number)20622062+ return hv_get_drvdata(ndev_ctx->device_ctx);20632063+ }20642064+20652065+ netdev_notice(vf_netdev,20662066+ "no netdev found for slot %u\n", pdev->slot->number);20672067+ return NULL;20682068+}20692069+20252070static int netvsc_register_vf(struct net_device *vf_netdev)20262071{20272027- struct net_device *ndev;20282072 struct net_device_context *net_device_ctx;20292029- struct device *pdev = vf_netdev->dev.parent;20302073 struct netvsc_device *netvsc_dev;20742074+ struct net_device *ndev;20312075 int ret;2032207620332077 if (vf_netdev->addr_len != ETH_ALEN)20342078 return NOTIFY_DONE;2035207920362036- if (!pdev || !dev_is_pci(pdev) || dev_is_pf(pdev))20372037- return NOTIFY_DONE;20382038-20392039- /*20402040- * We will use the MAC address to locate the synthetic interface to20412041- * associate with the VF interface. If we don't find a matching20422042- * synthetic interface, move on.20432043- */20442044- ndev = get_netvsc_bymac(vf_netdev->perm_addr);20802080+ ndev = get_netvsc_byslot(vf_netdev);20452081 if (!ndev)20462082 return NOTIFY_DONE;20472083···2280227222812273 cancel_delayed_work_sync(&ndev_ctx->dwork);2282227422832283- rcu_read_lock();22842284- nvdev = rcu_dereference(ndev_ctx->nvdev);22852285-22862286- if (nvdev)22752275+ rtnl_lock();22762276+ nvdev = rtnl_dereference(ndev_ctx->nvdev);22772277+ if (nvdev)22872278 cancel_work_sync(&nvdev->subchan_work);2288227922892280 /*22902281 * Call to the vsc driver to let it know that the device is being22912282 * removed. Also blocks mtu and channel changes.22922283 */22932293- rtnl_lock();22942284 vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);22952285 if (vf_netdev)22962286 netvsc_unregister_vf(vf_netdev);···23002294 list_del(&ndev_ctx->list);2301229523022296 rtnl_unlock();23032303- rcu_read_unlock();2304229723052298 hv_set_drvdata(dev, NULL);23062299
···
 	if (!skb)
 		goto out;
 
+	if (skb_mac_header_len(skb) < ETH_HLEN)
+		goto drop;
+
 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
 		goto drop;
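The pppoe guard follows the usual receive-path rule: verify that the frame really carries the headers you are about to parse before touching them. A hedged sketch of the same two checks in a generic rx handler (names invented):

static int example_rcv(struct sk_buff *skb)
{
	/* The frame must at least carry a full Ethernet header ... */
	if (skb_mac_header_len(skb) < ETH_HLEN)
		goto drop;

	/* ... and enough linear data for the protocol header we parse next. */
	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
		goto drop;

	return NET_RX_SUCCESS;	/* continue normal processing */

drop:
	kfree_skb(skb);
	return NET_RX_DROP;
}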
-43
drivers/net/tun.c
···1153115311541154 return (features & tun->set_features) | (features & ~TUN_USER_FEATURES);11551155}11561156-#ifdef CONFIG_NET_POLL_CONTROLLER11571157-static void tun_poll_controller(struct net_device *dev)11581158-{11591159- /*11601160- * Tun only receives frames when:11611161- * 1) the char device endpoint gets data from user space11621162- * 2) the tun socket gets a sendmsg call from user space11631163- * If NAPI is not enabled, since both of those are synchronous11641164- * operations, we are guaranteed never to have pending data when we poll11651165- * for it so there is nothing to do here but return.11661166- * We need this though so netpoll recognizes us as an interface that11671167- * supports polling, which enables bridge devices in virt setups to11681168- * still use netconsole11691169- * If NAPI is enabled, however, we need to schedule polling for all11701170- * queues unless we are using napi_gro_frags(), which we call in11711171- * process context and not in NAPI context.11721172- */11731173- struct tun_struct *tun = netdev_priv(dev);11741174-11751175- if (tun->flags & IFF_NAPI) {11761176- struct tun_file *tfile;11771177- int i;11781178-11791179- if (tun_napi_frags_enabled(tun))11801180- return;11811181-11821182- rcu_read_lock();11831183- for (i = 0; i < tun->numqueues; i++) {11841184- tfile = rcu_dereference(tun->tfiles[i]);11851185- if (tfile->napi_enabled)11861186- napi_schedule(&tfile->napi);11871187- }11881188- rcu_read_unlock();11891189- }11901190- return;11911191-}11921192-#endif1193115611941157static void tun_set_headroom(struct net_device *dev, int new_hr)11951158{···12461283 .ndo_start_xmit = tun_net_xmit,12471284 .ndo_fix_features = tun_net_fix_features,12481285 .ndo_select_queue = tun_select_queue,12491249-#ifdef CONFIG_NET_POLL_CONTROLLER12501250- .ndo_poll_controller = tun_poll_controller,12511251-#endif12521286 .ndo_set_rx_headroom = tun_set_headroom,12531287 .ndo_get_stats64 = tun_net_get_stats64,12541288};···13251365 .ndo_set_mac_address = eth_mac_addr,13261366 .ndo_validate_addr = eth_validate_addr,13271367 .ndo_select_queue = tun_select_queue,13281328-#ifdef CONFIG_NET_POLL_CONTROLLER13291329- .ndo_poll_controller = tun_poll_controller,13301330-#endif13311368 .ndo_features_check = passthru_features_check,13321369 .ndo_set_rx_headroom = tun_set_headroom,13331370 .ndo_get_stats64 = tun_net_get_stats64,
+7-7
drivers/net/usb/qmi_wwan.c
···12131213 {QMI_FIXED_INTF(0x1199, 0x9061, 8)}, /* Sierra Wireless Modem */12141214 {QMI_FIXED_INTF(0x1199, 0x9063, 8)}, /* Sierra Wireless EM7305 */12151215 {QMI_FIXED_INTF(0x1199, 0x9063, 10)}, /* Sierra Wireless EM7305 */12161216- {QMI_FIXED_INTF(0x1199, 0x9071, 8)}, /* Sierra Wireless MC74xx */12171217- {QMI_FIXED_INTF(0x1199, 0x9071, 10)}, /* Sierra Wireless MC74xx */12181218- {QMI_FIXED_INTF(0x1199, 0x9079, 8)}, /* Sierra Wireless EM74xx */12191219- {QMI_FIXED_INTF(0x1199, 0x9079, 10)}, /* Sierra Wireless EM74xx */12201220- {QMI_FIXED_INTF(0x1199, 0x907b, 8)}, /* Sierra Wireless EM74xx */12211221- {QMI_FIXED_INTF(0x1199, 0x907b, 10)}, /* Sierra Wireless EM74xx */12221222- {QMI_FIXED_INTF(0x1199, 0x9091, 8)}, /* Sierra Wireless EM7565 */12161216+ {QMI_QUIRK_SET_DTR(0x1199, 0x9071, 8)}, /* Sierra Wireless MC74xx */12171217+ {QMI_QUIRK_SET_DTR(0x1199, 0x9071, 10)},/* Sierra Wireless MC74xx */12181218+ {QMI_QUIRK_SET_DTR(0x1199, 0x9079, 8)}, /* Sierra Wireless EM74xx */12191219+ {QMI_QUIRK_SET_DTR(0x1199, 0x9079, 10)},/* Sierra Wireless EM74xx */12201220+ {QMI_QUIRK_SET_DTR(0x1199, 0x907b, 8)}, /* Sierra Wireless EM74xx */12211221+ {QMI_QUIRK_SET_DTR(0x1199, 0x907b, 10)},/* Sierra Wireless EM74xx */12221222+ {QMI_QUIRK_SET_DTR(0x1199, 0x9091, 8)}, /* Sierra Wireless EM7565 */12231223 {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */12241224 {QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */12251225 {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
···135135 if (val & PCIE_ATU_ENABLE)136136 return;137137138138- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);138138+ mdelay(LINK_WAIT_IATU);139139 }140140 dev_err(pci->dev, "Outbound iATU is not being enabled\n");141141}···178178 if (val & PCIE_ATU_ENABLE)179179 return;180180181181- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);181181+ mdelay(LINK_WAIT_IATU);182182 }183183 dev_err(pci->dev, "Outbound iATU is not being enabled\n");184184}···236236 if (val & PCIE_ATU_ENABLE)237237 return 0;238238239239- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);239239+ mdelay(LINK_WAIT_IATU);240240 }241241 dev_err(pci->dev, "Inbound iATU is not being enabled\n");242242···282282 if (val & PCIE_ATU_ENABLE)283283 return 0;284284285285- usleep_range(LINK_WAIT_IATU_MIN, LINK_WAIT_IATU_MAX);285285+ mdelay(LINK_WAIT_IATU);286286 }287287 dev_err(pci->dev, "Inbound iATU is not being enabled\n");288288
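For context on the hunk above: these iATU poll helpers can be reached from atomic context, where sleeping is not allowed, which is the apparent motivation for replacing usleep_range() with the busy-waiting mdelay(). A minimal sketch of the resulting pattern follows; the register offset and status bit are invented for illustration, while the retry count and 9 ms delay mirror the LINK_WAIT_* constants in the header hunk below.

    #include <linux/bits.h>
    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/io.h>

    #define MY_CTRL_REG    0x0        /* illustrative register offset */
    #define MY_ENABLE_BIT  BIT(31)    /* illustrative status bit */

    /* Busy-wait poll: safe even if the caller holds a spinlock, because
     * mdelay() never sleeps (unlike usleep_range()).
     */
    static int my_poll_enabled(void __iomem *base)
    {
            unsigned int retries;

            for (retries = 0; retries < 5; retries++) {
                    if (readl(base + MY_CTRL_REG) & MY_ENABLE_BIT)
                            return 0;
                    mdelay(9);
            }

            return -ETIMEDOUT;
    }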
+1-2
drivers/pci/controller/dwc/pcie-designware.h
···26262727/* Parameters for the waiting for iATU enabled routine */2828#define LINK_WAIT_MAX_IATU_RETRIES 52929-#define LINK_WAIT_IATU_MIN 90003030-#define LINK_WAIT_IATU_MAX 100002929+#define LINK_WAIT_IATU 931303231/* Synopsys-specific PCIe configuration registers */3332#define PCIE_PORT_LINK_CONTROL 0x710
+39
drivers/pci/controller/pci-hyperv.c
···89899090#define STATUS_REVISION_MISMATCH 0xC000005991919292+/* space for 32bit serial number as string */9393+#define SLOT_NAME_SIZE 119494+9295/*9396 * Message Types9497 */···497494 struct list_head list_entry;498495 refcount_t refs;499496 enum hv_pcichild_state state;497497+ struct pci_slot *pci_slot;500498 struct pci_function_description desc;501499 bool reported_missing;502500 struct hv_pcibus_device *hbus;···14611457 spin_unlock_irqrestore(&hbus->device_list_lock, flags);14621458}1463145914601460+/*14611461+ * Assign entries in sysfs pci slot directory.14621462+ *14631463+ * Note that this function does not need to lock the children list14641464+ * because it is called from pci_devices_present_work which14651465+ * is serialized with hv_eject_device_work because they are on the14661466+ * same ordered workqueue. Therefore hbus->children list will not change14671467+ * even when pci_create_slot sleeps.14681468+ */14691469+static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)14701470+{14711471+ struct hv_pci_dev *hpdev;14721472+ char name[SLOT_NAME_SIZE];14731473+ int slot_nr;14741474+14751475+ list_for_each_entry(hpdev, &hbus->children, list_entry) {14761476+ if (hpdev->pci_slot)14771477+ continue;14781478+14791479+ slot_nr = PCI_SLOT(wslot_to_devfn(hpdev->desc.win_slot.slot));14801480+ snprintf(name, SLOT_NAME_SIZE, "%u", hpdev->desc.ser);14811481+ hpdev->pci_slot = pci_create_slot(hbus->pci_bus, slot_nr,14821482+ name, NULL);14831483+ if (IS_ERR(hpdev->pci_slot)) {14841484+ pr_warn("pci_create slot %s failed\n", name);14851485+ hpdev->pci_slot = NULL;14861486+ }14871487+ }14881488+}14891489+14641490/**14651491 * create_root_hv_pci_bus() - Expose a new root PCI bus14661492 * @hbus: Root PCI bus, as understood by this driver···15141480 pci_lock_rescan_remove();15151481 pci_scan_child_bus(hbus->pci_bus);15161482 pci_bus_assign_resources(hbus->pci_bus);14831483+ hv_pci_assign_slots(hbus);15171484 pci_bus_add_devices(hbus->pci_bus);15181485 pci_unlock_rescan_remove();15191486 hbus->state = hv_pcibus_installed;···17771742 */17781743 pci_lock_rescan_remove();17791744 pci_scan_child_bus(hbus->pci_bus);17451745+ hv_pci_assign_slots(hbus);17801746 pci_unlock_rescan_remove();17811747 break;17821748···18931857 spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags);18941858 list_del(&hpdev->list_entry);18951859 spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags);18601860+18611861+ if (hpdev->pci_slot)18621862+ pci_destroy_slot(hpdev->pci_slot);1896186318971864 memset(&ctxt, 0, sizeof(ctxt));18981865 ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message;
+6-5
drivers/pci/hotplug/acpiphp_glue.c
···457457/**458458 * enable_slot - enable, configure a slot459459 * @slot: slot to be enabled460460+ * @bridge: true if enable is for the whole bridge (not a single slot)460461 *461462 * This function should be called per *physical slot*,462463 * not per each slot object in ACPI namespace.463464 */464464-static void enable_slot(struct acpiphp_slot *slot)465465+static void enable_slot(struct acpiphp_slot *slot, bool bridge)465466{466467 struct pci_dev *dev;467468 struct pci_bus *bus = slot->bus;468469 struct acpiphp_func *func;469470470470- if (bus->self && hotplug_is_native(bus->self)) {471471+ if (bridge && bus->self && hotplug_is_native(bus->self)) {471472 /*472473 * If native hotplug is used, it will take care of hotplug473474 * slot management and resource allocation for hotplug···702701 trim_stale_devices(dev);703702704703 /* configure all functions */705705- enable_slot(slot);704704+ enable_slot(slot, true);706705 } else {707706 disable_slot(slot);708707 }···786785 if (bridge)787786 acpiphp_check_bridge(bridge);788787 else if (!(slot->flags & SLOT_IS_GOING_AWAY))789789- enable_slot(slot);788788+ enable_slot(slot, false);790789791790 break;792791···974973975974 /* configure all functions */976975 if (!(slot->flags & SLOT_ENABLED))977977- enable_slot(slot);976976+ enable_slot(slot, false);978977979978 pci_unlock_rescan_remove();980979 return 0;
···348348 unsigned long flags;349349 struct gpio_chip *gc = irq_data_get_irq_chip_data(d);350350 struct amd_gpio *gpio_dev = gpiochip_get_data(gc);351351- u32 mask = BIT(INTERRUPT_ENABLE_OFF) | BIT(INTERRUPT_MASK_OFF);352351353352 raw_spin_lock_irqsave(&gpio_dev->lock, flags);354353 pin_reg = readl(gpio_dev->base + (d->hwirq)*4);355354 pin_reg |= BIT(INTERRUPT_ENABLE_OFF);356355 pin_reg |= BIT(INTERRUPT_MASK_OFF);357356 writel(pin_reg, gpio_dev->base + (d->hwirq)*4);358358- /*359359- * When debounce logic is enabled it takes ~900 us before interrupts360360- * can be enabled. During this "debounce warm up" period the361361- * "INTERRUPT_ENABLE" bit will read as 0. Poll the bit here until it362362- * reads back as 1, signaling that interrupts are now enabled.363363- */364364- while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)365365- continue;366357 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);367358}368359···417426static int amd_gpio_irq_set_type(struct irq_data *d, unsigned int type)418427{419428 int ret = 0;420420- u32 pin_reg;429429+ u32 pin_reg, pin_reg_irq_en, mask;421430 unsigned long flags, irq_flags;422431 struct gpio_chip *gc = irq_data_get_irq_chip_data(d);423432 struct amd_gpio *gpio_dev = gpiochip_get_data(gc);···486495 }487496488497 pin_reg |= CLR_INTR_STAT << INTERRUPT_STS_OFF;498498+ /*499499+ * If WAKE_INT_MASTER_REG.MaskStsEn is set, a software write to the500500+ * debounce registers of any GPIO will block wake/interrupt status501501+ * generation for *all* GPIOs for a lenght of time that depends on502502+ * WAKE_INT_MASTER_REG.MaskStsLength[11:0]. During this period the503503+ * INTERRUPT_ENABLE bit will read as 0.504504+ *505505+ * We temporarily enable irq for the GPIO whose configuration is506506+ * changing, and then wait for it to read back as 1 to know when507507+ * debounce has settled and then disable the irq again.508508+ * We do this polling with the spinlock held to ensure other GPIO509509+ * access routines do not read an incorrect value for the irq enable510510+ * bit of other GPIOs. We keep the GPIO masked while polling to avoid511511+ * spurious irqs, and disable the irq again after polling.512512+ */513513+ mask = BIT(INTERRUPT_ENABLE_OFF);514514+ pin_reg_irq_en = pin_reg;515515+ pin_reg_irq_en |= mask;516516+ pin_reg_irq_en &= ~BIT(INTERRUPT_MASK_OFF);517517+ writel(pin_reg_irq_en, gpio_dev->base + (d->hwirq)*4);518518+ while ((readl(gpio_dev->base + (d->hwirq)*4) & mask) != mask)519519+ continue;489520 writel(pin_reg, gpio_dev->base + (d->hwirq)*4);490521 raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);491522
···569569 BD71837_REG_REGLOCK);570570 }571571572572+ /*573573+ * There is a HW quirk in BD71837. The shutdown sequence timings for574574+ * bucks/LDOs which are controlled via register interface are changed.575575+ * At PMIC poweroff the voltage for BUCK6/7 is cut immediately at the576576+ * beginning of shut-down sequence. As bucks 6 and 7 are parent577577+ * supplies for LDO5 and LDO6 - this causes LDO5/6 voltage578578+ * monitoring to errorneously detect under voltage and force PMIC to579579+ * emergency state instead of poweroff. In order to avoid this we580580+ * disable voltage monitoring for LDO5 and LDO6581581+ */582582+ err = regmap_update_bits(pmic->mfd->regmap, BD718XX_REG_MVRFLTMASK2,583583+ BD718XX_LDO5_VRMON80 | BD718XX_LDO6_VRMON80,584584+ BD718XX_LDO5_VRMON80 | BD718XX_LDO6_VRMON80);585585+ if (err) {586586+ dev_err(&pmic->pdev->dev,587587+ "Failed to disable voltage monitoring\n");588588+ goto err;589589+ }590590+572591 for (i = 0; i < ARRAY_SIZE(pmic_regulator_inits); i++) {573592574593 struct regulator_desc *desc;
···79407940 err = -ENOMEM;79417941 goto out_error;79427942 }79437943+79447944+ /*79457945+ * Do not use blk-mq at this time because blk-mq does not support79467946+ * runtime pm.79477947+ */79487948+ host->use_blk_mq = false;79497949+79437950 hba = shost_priv(host);79447951 hba->host = host;79457952 hba->dev = dev;
+17-6
drivers/soundwire/stream.c
···899899 struct sdw_master_runtime *m_rt = stream->m_rt;900900 struct sdw_slave_runtime *s_rt, *_s_rt;901901902902- list_for_each_entry_safe(s_rt, _s_rt,903903- &m_rt->slave_rt_list, m_rt_node)904904- sdw_stream_remove_slave(s_rt->slave, stream);902902+ list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) {903903+ sdw_slave_port_release(s_rt->slave->bus, s_rt->slave, stream);904904+ sdw_release_slave_stream(s_rt->slave, stream);905905+ }905906906907 list_del(&m_rt->bus_node);907908}···11131112 "Master runtime config failed for stream:%s",11141113 stream->name);11151114 ret = -ENOMEM;11161116- goto error;11151115+ goto unlock;11171116 }1118111711191118 ret = sdw_config_stream(bus->dev, stream, stream_config, false);···11241123 if (ret)11251124 goto stream_error;1126112511271127- stream->state = SDW_STREAM_CONFIGURED;11261126+ goto unlock;1128112711291128stream_error:11301129 sdw_release_master_stream(stream);11311131-error:11301130+unlock:11321131 mutex_unlock(&bus->bus_lock);11331132 return ret;11341133}···11421141 * @stream: SoundWire stream11431142 * @port_config: Port configuration for audio stream11441143 * @num_ports: Number of ports11441144+ *11451145+ * It is expected that Slave is added before adding Master11461146+ * to the Stream.11471147+ *11451148 */11461149int sdw_stream_add_slave(struct sdw_slave *slave,11471150 struct sdw_stream_config *stream_config,···11911186 if (ret)11921187 goto stream_error;1193118811891189+ /*11901190+ * Change stream state to CONFIGURED on first Slave add.11911191+ * Bus is not aware of number of Slave(s) in a stream at this11921192+ * point so cannot depend on all Slave(s) to be added in order to11931193+ * change stream state to CONFIGURED.11941194+ */11941195 stream->state = SDW_STREAM_CONFIGURED;11951196 goto error;11961197
+6
drivers/spi/spi-fsl-dspi.c
···30303131#define DRIVER_NAME "fsl-dspi"32323333+#ifdef CONFIG_M5441x3434+#define DSPI_FIFO_SIZE 163535+#else3336#define DSPI_FIFO_SIZE 43737+#endif3438#define DSPI_DMA_BUFSIZE (DSPI_FIFO_SIZE * 1024)35393640#define SPI_MCR 0x00···627623static void dspi_eoq_write(struct fsl_dspi *dspi)628624{629625 int fifo_size = DSPI_FIFO_SIZE;626626+ u16 xfer_cmd = dspi->tx_cmd;630627631628 /* Fill TX FIFO with as many transfers as possible */632629 while (dspi->len && fifo_size--) {630630+ dspi->tx_cmd = xfer_cmd;633631 /* Request EOQF for last transfer in FIFO */634632 if (dspi->len == dspi->bytes_per_word || fifo_size == 0)635633 dspi->tx_cmd |= SPI_PUSHR_CMD_EOQ;
+2-2
drivers/spi/spi-gpio.c
···300300 *mflags |= SPI_MASTER_NO_RX;301301302302 spi_gpio->sck = devm_gpiod_get(dev, "sck", GPIOD_OUT_LOW);303303- if (IS_ERR(spi_gpio->mosi))304304- return PTR_ERR(spi_gpio->mosi);303303+ if (IS_ERR(spi_gpio->sck))304304+ return PTR_ERR(spi_gpio->sck);305305306306 for (i = 0; i < num_chipselects; i++) {307307 spi_gpio->cs_gpios[i] = devm_gpiod_get_index(dev, "cs",
···10631063 goto exit_free_master;10641064 }1065106510661066+ /* disabled clock may cause interrupt storm upon request */10671067+ tspi->clk = devm_clk_get(&pdev->dev, NULL);10681068+ if (IS_ERR(tspi->clk)) {10691069+ ret = PTR_ERR(tspi->clk);10701070+ dev_err(&pdev->dev, "Can not get clock %d\n", ret);10711071+ goto exit_free_master;10721072+ }10731073+ ret = clk_prepare(tspi->clk);10741074+ if (ret < 0) {10751075+ dev_err(&pdev->dev, "Clock prepare failed %d\n", ret);10761076+ goto exit_free_master;10771077+ }10781078+ ret = clk_enable(tspi->clk);10791079+ if (ret < 0) {10801080+ dev_err(&pdev->dev, "Clock enable failed %d\n", ret);10811081+ goto exit_free_master;10821082+ }10831083+10661084 spi_irq = platform_get_irq(pdev, 0);10671085 tspi->irq = spi_irq;10681086 ret = request_threaded_irq(tspi->irq, tegra_slink_isr,···10891071 if (ret < 0) {10901072 dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",10911073 tspi->irq);10921092- goto exit_free_master;10931093- }10941094-10951095- tspi->clk = devm_clk_get(&pdev->dev, NULL);10961096- if (IS_ERR(tspi->clk)) {10971097- dev_err(&pdev->dev, "can not get clock\n");10981098- ret = PTR_ERR(tspi->clk);10991099- goto exit_free_irq;10741074+ goto exit_clk_disable;11001075 }1101107611021077 tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");···11491138 tegra_slink_deinit_dma_param(tspi, true);11501139exit_free_irq:11511140 free_irq(spi_irq, tspi);11411141+exit_clk_disable:11421142+ clk_disable(tspi->clk);11521143exit_free_master:11531144 spi_master_put(master);11541145 return ret;···11621149 struct tegra_slink_data *tspi = spi_master_get_devdata(master);1163115011641151 free_irq(tspi->irq, tspi);11521152+11531153+ clk_disable(tspi->clk);1165115411661155 if (tspi->tx_dma_chan)11671156 tegra_slink_deinit_dma_param(tspi, false);
+11-2
drivers/spi/spi.c
···21432143 */21442144 if (ctlr->num_chipselect == 0)21452145 return -EINVAL;21462146- /* allocate dynamic bus number using Linux idr */21472147- if ((ctlr->bus_num < 0) && ctlr->dev.of_node) {21462146+ if (ctlr->bus_num >= 0) {21472147+ /* devices with a fixed bus num must check-in with the num */21482148+ mutex_lock(&board_lock);21492149+ id = idr_alloc(&spi_master_idr, ctlr, ctlr->bus_num,21502150+ ctlr->bus_num + 1, GFP_KERNEL);21512151+ mutex_unlock(&board_lock);21522152+ if (WARN(id < 0, "couldn't get idr"))21532153+ return id == -ENOSPC ? -EBUSY : id;21542154+ ctlr->bus_num = id;21552155+ } else if (ctlr->dev.of_node) {21562156+ /* allocate dynamic bus number using Linux idr */21482157 id = of_alias_get_id(ctlr->dev.of_node, "spi");21492158 if (id >= 0) {21502159 ctlr->bus_num = id;
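The spi.c change above leans on a property of idr_alloc() worth spelling out: requesting an id in the one-element range [bus_num, bus_num + 1) either reserves exactly that number or fails with -ENOSPC when it is already taken, which the patch maps to -EBUSY. A self-contained sketch of the idiom (all names invented):

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/idr.h>

    static DEFINE_IDR(my_bus_idr);

    /* Reserve a fixed bus number, or report that it is already in use. */
    static int my_claim_bus_num(void *owner, int bus_num)
    {
            int id;

            id = idr_alloc(&my_bus_idr, owner, bus_num, bus_num + 1,
                           GFP_KERNEL);
            if (id < 0)
                    return id == -ENOSPC ? -EBUSY : id;

            return id;    /* equals bus_num on success */
    }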
-6
drivers/staging/media/mt9t031/Kconfig
···11-config SOC_CAMERA_IMX07422- tristate "imx074 support (DEPRECATED)"33- depends on SOC_CAMERA && I2C44- help55- This driver supports IMX074 cameras from Sony66-71config SOC_CAMERA_MT9T03182 tristate "mt9t031 support (DEPRECATED)"93 depends on SOC_CAMERA && I2C
···2626#include "iscsi_target_nego.h"2727#include "iscsi_target_auth.h"28282929-static int chap_string_to_hex(unsigned char *dst, unsigned char *src, int len)3030-{3131- int j = DIV_ROUND_UP(len, 2), rc;3232-3333- rc = hex2bin(dst, src, j);3434- if (rc < 0)3535- pr_debug("CHAP string contains non hex digit symbols\n");3636-3737- dst[j] = '\0';3838- return j;3939-}4040-4141-static void chap_binaryhex_to_asciihex(char *dst, char *src, int src_len)4242-{4343- int i;4444-4545- for (i = 0; i < src_len; i++) {4646- sprintf(&dst[i*2], "%02x", (int) src[i] & 0xff);4747- }4848-}4949-5029static int chap_gen_challenge(5130 struct iscsi_conn *conn,5231 int caller,···4162 ret = get_random_bytes_wait(chap->challenge, CHAP_CHALLENGE_LENGTH);4263 if (unlikely(ret))4364 return ret;4444- chap_binaryhex_to_asciihex(challenge_asciihex, chap->challenge,6565+ bin2hex(challenge_asciihex, chap->challenge,4566 CHAP_CHALLENGE_LENGTH);4667 /*4768 * Set CHAP_C, and copy the generated challenge into c_str.···227248 pr_err("Could not find CHAP_R.\n");228249 goto out;229250 }251251+ if (strlen(chap_r) != MD5_SIGNATURE_SIZE * 2) {252252+ pr_err("Malformed CHAP_R\n");253253+ goto out;254254+ }255255+ if (hex2bin(client_digest, chap_r, MD5_SIGNATURE_SIZE) < 0) {256256+ pr_err("Malformed CHAP_R\n");257257+ goto out;258258+ }230259231260 pr_debug("[server] Got CHAP_R=%s\n", chap_r);232232- chap_string_to_hex(client_digest, chap_r, strlen(chap_r));233261234262 tfm = crypto_alloc_shash("md5", 0, 0);235263 if (IS_ERR(tfm)) {···280294 goto out;281295 }282296283283- chap_binaryhex_to_asciihex(response, server_digest, MD5_SIGNATURE_SIZE);297297+ bin2hex(response, server_digest, MD5_SIGNATURE_SIZE);284298 pr_debug("[server] MD5 Server Digest: %s\n", response);285299286300 if (memcmp(server_digest, client_digest, MD5_SIGNATURE_SIZE) != 0) {···335349 pr_err("Could not find CHAP_C.\n");336350 goto out;337351 }338338- pr_debug("[server] Got CHAP_C=%s\n", challenge);339339- challenge_len = chap_string_to_hex(challenge_binhex, challenge,340340- strlen(challenge));352352+ challenge_len = DIV_ROUND_UP(strlen(challenge), 2);341353 if (!challenge_len) {342354 pr_err("Unable to convert incoming challenge\n");343355 goto out;···344360 pr_err("CHAP_C exceeds maximum binary size of 1024 bytes\n");345361 goto out;346362 }363363+ if (hex2bin(challenge_binhex, challenge, challenge_len) < 0) {364364+ pr_err("Malformed CHAP_C\n");365365+ goto out;366366+ }367367+ pr_debug("[server] Got CHAP_C=%s\n", challenge);347368 /*348369 * During mutual authentication, the CHAP_C generated by the349370 * initiator must not match the original CHAP_C generated by···402413 /*403414 * Convert response from binary hex to ascii hext.404415 */405405- chap_binaryhex_to_asciihex(response, digest, MD5_SIGNATURE_SIZE);416416+ bin2hex(response, digest, MD5_SIGNATURE_SIZE);406417 *nr_out_len += sprintf(nr_out_ptr + *nr_out_len, "CHAP_R=0x%s",407418 response);408419 *nr_out_len += 1;
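The CHAP hunk above replaces the driver-local converters with the kernel's hex2bin()/bin2hex() helpers and, just as importantly, rejects a CHAP_R that is not exactly 2 * MD5_SIGNATURE_SIZE hex characters. A hedged sketch of that validation pattern, with invented names and a hard-coded digest size:

    #include <linux/errno.h>
    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define MY_DIGEST_SIZE 16    /* e.g. an MD5 digest, for illustration */

    /* Accept only a hex string of the exact expected length; hex2bin()
     * returns a negative value if any character is not a hex digit.
     */
    static int my_parse_digest(u8 *digest, const char *hex)
    {
            if (strlen(hex) != MY_DIGEST_SIZE * 2)
                    return -EINVAL;
            if (hex2bin(digest, hex, MY_DIGEST_SIZE) < 0)
                    return -EINVAL;

            return 0;
    }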
+7-3
drivers/tty/serial/cpm_uart/cpm_uart_core.c
···10541054 /* Get the address of the host memory buffer.10551055 */10561056 bdp = pinfo->rx_cur;10571057- while (bdp->cbd_sc & BD_SC_EMPTY)10581058- ;10571057+ if (bdp->cbd_sc & BD_SC_EMPTY)10581058+ return NO_POLL_CHAR;1059105910601060 /* If the buffer address is in the CPM DPRAM, don't10611061 * convert it.···10901090 poll_chars = 0;10911091 }10921092 if (poll_chars <= 0) {10931093- poll_chars = poll_wait_key(poll_buf, pinfo);10931093+ int ret = poll_wait_key(poll_buf, pinfo);10941094+10951095+ if (ret == NO_POLL_CHAR)10961096+ return ret;10971097+ poll_chars = ret;10941098 pollp = poll_buf;10951099 }10961100 poll_chars--;
···14341434 struct async *as = NULL;14351435 struct usb_ctrlrequest *dr = NULL;14361436 unsigned int u, totlen, isofrmlen;14371437- int i, ret, is_in, num_sgs = 0, ifnum = -1;14371437+ int i, ret, num_sgs = 0, ifnum = -1;14381438 int number_of_packets = 0;14391439 unsigned int stream_id = 0;14401440 void *buf;14411441+ bool is_in;14421442+ bool allow_short = false;14431443+ bool allow_zero = false;14411444 unsigned long mask = USBDEVFS_URB_SHORT_NOT_OK |14421445 USBDEVFS_URB_BULK_CONTINUATION |14431446 USBDEVFS_URB_NO_FSBR |···14741471 u = 0;14751472 switch (uurb->type) {14761473 case USBDEVFS_URB_TYPE_CONTROL:14741474+ if (is_in)14751475+ allow_short = true;14771476 if (!usb_endpoint_xfer_control(&ep->desc))14781477 return -EINVAL;14791478 /* min 8 byte setup packet */···15161511 break;1517151215181513 case USBDEVFS_URB_TYPE_BULK:15141514+ if (!is_in)15151515+ allow_zero = true;15161516+ else15171517+ allow_short = true;15191518 switch (usb_endpoint_type(&ep->desc)) {15201519 case USB_ENDPOINT_XFER_CONTROL:15211520 case USB_ENDPOINT_XFER_ISOC:···15401531 if (!usb_endpoint_xfer_int(&ep->desc))15411532 return -EINVAL;15421533 interrupt_urb:15341534+ if (!is_in)15351535+ allow_zero = true;15361536+ else15371537+ allow_short = true;15431538 break;1544153915451540 case USBDEVFS_URB_TYPE_ISO:···16891676 u = (is_in ? URB_DIR_IN : URB_DIR_OUT);16901677 if (uurb->flags & USBDEVFS_URB_ISO_ASAP)16911678 u |= URB_ISO_ASAP;16921692- if (uurb->flags & USBDEVFS_URB_SHORT_NOT_OK && is_in)16791679+ if (allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)16931680 u |= URB_SHORT_NOT_OK;16941694- if (uurb->flags & USBDEVFS_URB_ZERO_PACKET)16811681+ if (allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)16951682 u |= URB_ZERO_PACKET;16961683 if (uurb->flags & USBDEVFS_URB_NO_INTERRUPT)16971684 u |= URB_NO_INTERRUPT;16981685 as->urb->transfer_flags = u;16861686+16871687+ if (!allow_short && uurb->flags & USBDEVFS_URB_SHORT_NOT_OK)16881688+ dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_SHORT_NOT_OK.\n");16891689+ if (!allow_zero && uurb->flags & USBDEVFS_URB_ZERO_PACKET)16901690+ dev_warn(&ps->dev->dev, "Requested nonsensical USBDEVFS_URB_ZERO_PACKET.\n");1699169117001692 as->urb->transfer_buffer_length = uurb->buffer_length;17011693 as->urb->setup_packet = (unsigned char *)dr;
+14-14
drivers/usb/core/driver.c
···512512 struct device *dev;513513 struct usb_device *udev;514514 int retval = 0;515515- int lpm_disable_error = -ENODEV;516515517516 if (!iface)518517 return -ENODEV;···532533533534 iface->condition = USB_INTERFACE_BOUND;534535535535- /* See the comment about disabling LPM in usb_probe_interface(). */536536- if (driver->disable_hub_initiated_lpm) {537537- lpm_disable_error = usb_unlocked_disable_lpm(udev);538538- if (lpm_disable_error) {539539- dev_err(&iface->dev, "%s Failed to disable LPM for driver %s\n",540540- __func__, driver->name);541541- return -ENOMEM;542542- }543543- }544544-545536 /* Claimed interfaces are initially inactive (suspended) and546537 * runtime-PM-enabled, but only if the driver has autosuspend547538 * support. Otherwise they are marked active, to prevent the···550561 if (device_is_registered(dev))551562 retval = device_bind_driver(dev);552563553553- /* Attempt to re-enable USB3 LPM, if the disable was successful. */554554- if (!lpm_disable_error)555555- usb_unlocked_enable_lpm(udev);564564+ if (retval) {565565+ dev->driver = NULL;566566+ usb_set_intfdata(iface, NULL);567567+ iface->needs_remote_wakeup = 0;568568+ iface->condition = USB_INTERFACE_UNBOUND;569569+570570+ /*571571+ * Unbound interfaces are always runtime-PM-disabled572572+ * and runtime-PM-suspended573573+ */574574+ if (driver->supports_autosuspend)575575+ pm_runtime_disable(dev);576576+ pm_runtime_set_suspended(dev);577577+ }556578557579 return retval;558580}
+2-1
drivers/usb/core/quirks.c
···5858 quirk_list = kcalloc(quirk_count, sizeof(struct quirk_entry),5959 GFP_KERNEL);6060 if (!quirk_list) {6161+ quirk_count = 0;6162 mutex_unlock(&quirk_mutex);6263 return -ENOMEM;6364 }···155154 .string = quirks_param,156155};157156158158-module_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);157157+device_param_cb(quirks, &quirks_param_ops, &quirks_param_string, 0644);159158MODULE_PARM_DESC(quirks, "Add/modify USB quirks by specifying quirks=vendorID:productID:quirks");160159161160/* Lists of quirky USB devices, split in device quirks and interface quirks.
+2
drivers/usb/core/usb.c
···228228 struct usb_interface_cache *intf_cache = NULL;229229 int i;230230231231+ if (!config)232232+ return NULL;231233 for (i = 0; i < config->desc.bNumInterfaces; i++) {232234 if (config->intf_cache[i]->altsetting[0].desc.bInterfaceNumber233235 == iface_num) {
···7676 else if (unlikely(rlen < EXT4_DIR_REC_LEN(de->name_len)))7777 error_msg = "rec_len is too small for name_len";7878 else if (unlikely(((char *) de - buf) + rlen > size))7979- error_msg = "directory entry across range";7979+ error_msg = "directory entry overrun";8080 else if (unlikely(le32_to_cpu(de->inode) >8181 le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))8282 error_msg = "inode out of bounds";···85858686 if (filp)8787 ext4_error_file(filp, function, line, bh->b_blocknr,8888- "bad entry in directory: %s - offset=%u(%u), "8989- "inode=%u, rec_len=%d, name_len=%d",9090- error_msg, (unsigned) (offset % size),9191- offset, le32_to_cpu(de->inode),9292- rlen, de->name_len);8888+ "bad entry in directory: %s - offset=%u, "8989+ "inode=%u, rec_len=%d, name_len=%d, size=%d",9090+ error_msg, offset, le32_to_cpu(de->inode),9191+ rlen, de->name_len, size);9392 else9493 ext4_error_inode(dir, function, line, bh->b_blocknr,9595- "bad entry in directory: %s - offset=%u(%u), "9696- "inode=%u, rec_len=%d, name_len=%d",9797- error_msg, (unsigned) (offset % size),9898- offset, le32_to_cpu(de->inode),9999- rlen, de->name_len);9494+ "bad entry in directory: %s - offset=%u, "9595+ "inode=%u, rec_len=%d, name_len=%d, size=%d",9696+ error_msg, offset, le32_to_cpu(de->inode),9797+ rlen, de->name_len, size);1009810199 return 1;102100}
+17-3
fs/ext4/ext4.h
···4343#define __FS_HAS_ENCRYPTION IS_ENABLED(CONFIG_EXT4_FS_ENCRYPTION)4444#include <linux/fscrypt.h>45454646+#include <linux/compiler.h>4747+4848+/* Until this gets included into linux/compiler-gcc.h */4949+#ifndef __nonstring5050+#if defined(GCC_VERSION) && (GCC_VERSION >= 80000)5151+#define __nonstring __attribute__((nonstring))5252+#else5353+#define __nonstring5454+#endif5555+#endif5656+4657/*4758 * The fourth extended filesystem constants/structures4859 */···686675/* Max physical block we can address w/o extents */687676#define EXT4_MAX_BLOCK_FILE_PHYS 0xFFFFFFFF688677678678+/* Max logical block we can support */679679+#define EXT4_MAX_LOGICAL_BLOCK 0xFFFFFFFF680680+689681/*690682 * Structure of an inode on the disk691683 */···12401226 __le32 s_feature_ro_compat; /* readonly-compatible feature set */12411227/*68*/ __u8 s_uuid[16]; /* 128-bit uuid for volume */12421228/*78*/ char s_volume_name[16]; /* volume name */12431243-/*88*/ char s_last_mounted[64]; /* directory where last mounted */12291229+/*88*/ char s_last_mounted[64] __nonstring; /* directory where last mounted */12441230/*C8*/ __le32 s_algorithm_usage_bitmap; /* For compression */12451231 /*12461232 * Performance hints. Directory preallocation should only···12911277 __le32 s_first_error_time; /* first time an error happened */12921278 __le32 s_first_error_ino; /* inode involved in first error */12931279 __le64 s_first_error_block; /* block involved of first error */12941294- __u8 s_first_error_func[32]; /* function where the error happened */12801280+ __u8 s_first_error_func[32] __nonstring; /* function where the error happened */12951281 __le32 s_first_error_line; /* line number where error happened */12961282 __le32 s_last_error_time; /* most recent time of an error */12971283 __le32 s_last_error_ino; /* inode involved in last error */12981284 __le32 s_last_error_line; /* line number where error happened */12991285 __le64 s_last_error_block; /* block involved of last error */13001300- __u8 s_last_error_func[32]; /* function where the error happened */12861286+ __u8 s_last_error_func[32] __nonstring; /* function where the error happened */13011287#define EXT4_S_ERR_END offsetof(struct ext4_super_block, s_mount_opts)13021288 __u8 s_mount_opts[64];13031289 __le32 s_usr_quota_inum; /* inode for tracking user quota */
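The __nonstring wrapper introduced above exists so that fields such as s_last_mounted and s_first_error_func/s_last_error_func, which are fixed-size byte arrays that need not be NUL-terminated, stop triggering GCC 8's -Wstringop-truncation on strncpy(). A stand-alone illustration of the attribute (the struct, field and caller are made up):

    #include <string.h>

    /* GCC >= 8: marks an array that may legitimately be filled to the brim
     * without a trailing NUL, so truncating strncpy() calls into it are not
     * flagged by -Wstringop-truncation.
     */
    struct my_record {
            char tag[8] __attribute__((nonstring));
    };

    static void my_set_tag(struct my_record *r, const char *label)
    {
            strncpy(r->tag, label, sizeof(r->tag));    /* may drop the NUL */
    }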
+3-1
fs/ext4/inline.c
···17531753{17541754 int err, inline_size;17551755 struct ext4_iloc iloc;17561756+ size_t inline_len;17561757 void *inline_pos;17571758 unsigned int offset;17581759 struct ext4_dir_entry_2 *de;···17811780 goto out;17821781 }1783178217831783+ inline_len = ext4_get_inline_size(dir);17841784 offset = EXT4_INLINE_DOTDOT_SIZE;17851785- while (offset < dir->i_size) {17851785+ while (offset < inline_len) {17861786 de = ext4_get_inline_entry(dir, &iloc, offset,17871787 &inline_pos, &inline_size);17881788 if (ext4_check_dir_entry(dir, NULL, de,
···34783478 int credits;34793479 u8 old_file_type;3480348034813481+ if (new.inode && new.inode->i_nlink == 0) {34823482+ EXT4_ERROR_INODE(new.inode,34833483+ "target of rename is already freed");34843484+ return -EFSCORRUPTED;34853485+ }34863486+34813487 if ((ext4_test_inode_flag(new_dir, EXT4_INODE_PROJINHERIT)) &&34823488 (!projid_eq(EXT4_I(new_dir)->i_projid,34833489 EXT4_I(old_dentry->d_inode)->i_projid)))
+22-1
fs/ext4/resize.c
···19192020int ext4_resize_begin(struct super_block *sb)2121{2222+ struct ext4_sb_info *sbi = EXT4_SB(sb);2223 int ret = 0;23242425 if (!capable(CAP_SYS_RESOURCE))···3029 * because the user tools have no way of handling this. Probably a3130 * bad time to do it anyways.3231 */3333- if (EXT4_SB(sb)->s_sbh->b_blocknr !=3232+ if (EXT4_B2C(sbi, sbi->s_sbh->b_blocknr) !=3433 le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {3534 ext4_warning(sb, "won't resize using backup superblock at %llu",3635 (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);···19851984 n_blocks_count_retry = 0;19861985 goto retry;19871986 }19871987+ }19881988+19891989+ /*19901990+ * Make sure the last group has enough space so that it's19911991+ * guaranteed to have enough space for all metadata blocks19921992+ * that it might need to hold. (We might not need to store19931993+ * the inode table blocks in the last block group, but there19941994+ * will be cases where this might be needed.)19951995+ */19961996+ if ((ext4_group_first_block_no(sb, n_group) +19971997+ ext4_group_overhead_blocks(sb, n_group) + 2 +19981998+ sbi->s_itb_per_group + sbi->s_cluster_ratio) >= n_blocks_count) {19991999+ n_blocks_count = ext4_group_first_block_no(sb, n_group);20002000+ n_group--;20012001+ n_blocks_count_retry = 0;20022002+ if (resize_inode) {20032003+ iput(resize_inode);20042004+ resize_inode = NULL;20052005+ }20062006+ goto retry;19882007 }1989200819902009 /* extend the last group */
···342342 * for this bh as it's not marked locally343343 * uptodate. */344344 status = -EIO;345345+ clear_buffer_needs_validate(bh);345346 put_bh(bh);346347 bhs[i] = NULL;347348 continue;
+1
fs/proc/kcore.c
···464464 ret = -EFAULT;465465 goto out;466466 }467467+ m = NULL; /* skip the list anchor */467468 } else if (m->type == KCORE_VMALLOC) {468469 vread(buf, (char *)start, tsz);469470 /* we have to zero-fill user buffer even if no read */
+6-1
fs/ubifs/super.c
···19121912 mutex_unlock(&c->bu_mutex);19131913 }1914191419151915- ubifs_assert(c, c->lst.taken_empty_lebs > 0);19151915+ if (!c->need_recovery)19161916+ ubifs_assert(c, c->lst.taken_empty_lebs > 0);19171917+19161918 return 0;19171919}19181920···19551953 struct ubi_volume_desc *ubi;19561954 int dev, vol;19571955 char *endptr;19561956+19571957+ if (!name || !*name)19581958+ return ERR_PTR(-EINVAL);1958195919591960 /* First, try to open using the device node path method */19601961 ubi = ubi_open_volume_path(name, mode);
···7979#define __noretpoline __attribute__((indirect_branch("keep")))8080#endif81818282-/*8383- * it doesn't make sense on ARM (currently the only user of __naked)8484- * to trace naked functions because then mcount is called without8585- * stack and frame pointer being set up and there is no chance to8686- * restore the lr register to the value before mcount was called.8787- *8888- * The asm() bodies of naked functions often depend on standard calling8989- * conventions, therefore they must be noinline and noclone.9090- *9191- * GCC 4.[56] currently fail to enforce this, so we must do so ourselves.9292- * See GCC PR44290.9393- */9494-#define __naked __attribute__((naked)) noinline __noclone notrace9595-9682#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)97839884#define __optimize(level) __attribute__((__optimize__(level)))
+8
include/linux/compiler_types.h
···226226#define notrace __attribute__((no_instrument_function))227227#endif228228229229+/*230230+ * it doesn't make sense on ARM (currently the only user of __naked)231231+ * to trace naked functions because then mcount is called without232232+ * stack and frame pointer being set up and there is no chance to233233+ * restore the lr register to the value before mcount was called.234234+ */235235+#define __naked __attribute__((naked)) notrace236236+229237#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)230238231239/*
+4-1
include/linux/genhd.h
···8383} __attribute__((packed));84848585struct disk_stats {8686+ u64 nsecs[NR_STAT_GROUPS];8687 unsigned long sectors[NR_STAT_GROUPS];8788 unsigned long ios[NR_STAT_GROUPS];8889 unsigned long merges[NR_STAT_GROUPS];8989- unsigned long ticks[NR_STAT_GROUPS];9090 unsigned long io_ticks;9191 unsigned long time_in_queue;9292};···353353}354354355355#endif /* CONFIG_SMP */356356+357357+#define part_stat_read_msecs(part, which) \358358+ div_u64(part_stat_read(part, nsecs[which]), NSEC_PER_MSEC)356359357360#define part_stat_read_accum(part, field) \358361 (part_stat_read(part, field[STAT_READ]) + \
···4848 * DISABLE_IN_SUSPEND - turn off regulator in suspend states4949 * ENABLE_IN_SUSPEND - keep regulator on in suspend states5050 */5151-#define DO_NOTHING_IN_SUSPEND (-1)5252-#define DISABLE_IN_SUSPEND 05353-#define ENABLE_IN_SUSPEND 15151+#define DO_NOTHING_IN_SUSPEND 05252+#define DISABLE_IN_SUSPEND 15353+#define ENABLE_IN_SUSPEND 254545555/* Regulator active discharge flags */5656enum regulator_active_discharge {
+4-3
include/linux/spi/spi-mem.h
···8181 * @dummy.buswidth: number of IO lanes used to transmit the dummy bytes8282 * @data.buswidth: number of IO lanes used to send/receive the data8383 * @data.dir: direction of the transfer8484- * @data.buf.in: input buffer8585- * @data.buf.out: output buffer8484+ * @data.nbytes: number of data bytes to send/receive. Can be zero if the8585+ * operation does not involve transferring data8686+ * @data.buf.in: input buffer (must be DMA-able)8787+ * @data.buf.out: output buffer (must be DMA-able)8688 */8789struct spi_mem_op {8890 struct {···107105 u8 buswidth;108106 enum spi_mem_data_dir dir;109107 unsigned int nbytes;110110- /* buf.{in,out} must be DMA-able. */111108 union {112109 void *in;113110 const void *out;
···133133 * @can_switch: check if the device is in a position to switch now.134134 * Mandatory. The client should return false if a user space process135135 * has one of its device files open136136+ * @gpu_bound: notify the client id to audio client when the GPU is bound.136137 *137138 * Client callbacks. A client can be either a GPU or an audio device on a GPU.138139 * The @set_gpu_state and @can_switch methods are mandatory, @reprobe may be139140 * set to NULL. For audio clients, the @reprobe member is bogus.141141+ * OTOH, @gpu_bound is only for audio clients, and not used for GPU clients.140142 */141143struct vga_switcheroo_client_ops {142144 void (*set_gpu_state)(struct pci_dev *dev, enum vga_switcheroo_state);143145 void (*reprobe)(struct pci_dev *dev);144146 bool (*can_switch)(struct pci_dev *dev);147147+ void (*gpu_bound)(struct pci_dev *dev, enum vga_switcheroo_client_id);145148};146149147150#if defined(CONFIG_VGA_SWITCHEROO)
+1-1
include/net/nfc/hci.h
···8787 * According to specification 102 622 chapter 4.4 Pipes,8888 * the pipe identifier is 7 bits long.8989 */9090-#define NFC_HCI_MAX_PIPES 1279090+#define NFC_HCI_MAX_PIPES 1289191struct nfc_hci_init_data {9292 u8 gate_count;9393 struct nfc_hci_gate gates[NFC_HCI_MAX_CUSTOM_GATES];
+9-10
include/net/tls.h
···171171 char *rec_seq;172172};173173174174+union tls_crypto_context {175175+ struct tls_crypto_info info;176176+ struct tls12_crypto_info_aes_gcm_128 aes_gcm_128;177177+};178178+174179struct tls_context {175175- union {176176- struct tls_crypto_info crypto_send;177177- struct tls12_crypto_info_aes_gcm_128 crypto_send_aes_gcm_128;178178- };179179- union {180180- struct tls_crypto_info crypto_recv;181181- struct tls12_crypto_info_aes_gcm_128 crypto_recv_aes_gcm_128;182182- };180180+ union tls_crypto_context crypto_send;181181+ union tls_crypto_context crypto_recv;183182184183 struct list_head list;185184 struct net_device *netdev;···366367 * size KTLS_DTLS_HEADER_SIZE + KTLS_DTLS_NONCE_EXPLICIT_SIZE367368 */368369 buf[0] = record_type;369369- buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.version);370370- buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.version);370370+ buf[1] = TLS_VERSION_MINOR(ctx->crypto_send.info.version);371371+ buf[2] = TLS_VERSION_MAJOR(ctx->crypto_send.info.version);371372 /* we can use IV for nonce explicit according to spec */372373 buf[3] = pkt_len >> 8;373374 buf[4] = pkt_len & 0xFF;
···39353935 goto out;39363936 }3937393739383938+ /* If this is a pinned event it must be running on this CPU */39393939+ if (event->attr.pinned && event->oncpu != smp_processor_id()) {39403940+ ret = -EBUSY;39413941+ goto out;39423942+ }39433943+39383944 /*39393945 * If the event is currently on this CPU, its either a per-task event,39403946 * or local to this CPU. Furthermore it means its ACTIVE (otherwise
···637637 depends on NO_BOOTMEM638638 depends on SPARSEMEM639639 depends on !NEED_PER_CPU_KM640640+ depends on 64BIT640641 help641642 Ordinarily all struct pages are initialised during early boot in a642643 single thread. On very large machines this can take a considerable
···476476 delta = freeable >> priority;477477 delta *= 4;478478 do_div(delta, shrinker->seeks);479479+480480+ /*481481+ * Make sure we apply some minimal pressure on default priority482482+ * even on small cgroups. Stale objects are not only consuming memory483483+ * by themselves, but can also hold a reference to a dying cgroup,484484+ * preventing it from being reclaimed. A dying cgroup with all485485+ * corresponding structures like per-cpu stats and kmem caches486486+ * can be really big, so it may lead to a significant waste of memory.487487+ */488488+ delta = max_t(unsigned long long, delta, min(freeable, batch_size));489489+479490 total_scan += delta;480491 if (total_scan < 0) {481492 pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
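A quick worked example of the clamp added above, using made-up numbers: with freeable = 50 objects, priority = 12 and shrinker->seeks = 2, the shifted value 50 >> 12 is 0, so delta previously stayed 0 and a small cgroup's stale cache was never scanned; with the new max_t(..., min(freeable, batch_size)) term and the default batch size of 128, delta becomes min(50, 128) = 50, so those objects still receive some reclaim pressure.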
+7-3
net/batman-adv/bat_v_elp.c
···241241 * the packet to be exactly of that size to make the link242242 * throughput estimation effective.243243 */244244- skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);244244+ skb_put_zero(skb, probe_len - hard_iface->bat_v.elp_skb->len);245245246246 batadv_dbg(BATADV_DBG_BATMAN, bat_priv,247247 "Sending unicast (probe) ELP packet on interface %s to %pM\n",···268268 struct batadv_priv *bat_priv;269269 struct sk_buff *skb;270270 u32 elp_interval;271271+ bool ret;271272272273 bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);273274 hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);···330329 * may sleep and that is not allowed in an rcu protected331330 * context. Therefore schedule a task for that.332331 */333333- queue_work(batadv_event_workqueue,334334- &hardif_neigh->bat_v.metric_work);332332+ ret = queue_work(batadv_event_workqueue,333333+ &hardif_neigh->bat_v.metric_work);334334+335335+ if (!ret)336336+ batadv_hardif_neigh_put(hardif_neigh);335337 }336338 rcu_read_unlock();337339
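This hunk, like the bridge-loop-avoidance one below, fixes the same reference-leak pattern: a reference is held on behalf of the queued work, but queue_work() returns false when the item was already pending, in which case the work function will run (and drop its reference) only once, so the caller must drop the extra reference itself. A hedged sketch of the pattern with invented types:

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct my_obj {
            struct kref refcount;
            struct work_struct work;
    };

    static void my_obj_release(struct kref *kref)
    {
            kfree(container_of(kref, struct my_obj, refcount));
    }

    /* Take a reference for the work item; if queue_work() reports the work
     * was already queued, the callback will put only one reference, so drop
     * the one taken here.
     */
    static void my_schedule(struct workqueue_struct *wq, struct my_obj *obj)
    {
            kref_get(&obj->refcount);
            if (!queue_work(wq, &obj->work))
                    kref_put(&obj->refcount, my_obj_release);
    }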
+8-2
net/batman-adv/bridge_loop_avoidance.c
···17721772{17731773 struct batadv_bla_backbone_gw *backbone_gw;17741774 struct ethhdr *ethhdr;17751775+ bool ret;1775177617761777 ethhdr = eth_hdr(skb);17771778···17961795 if (unlikely(!backbone_gw))17971796 return true;1798179717991799- queue_work(batadv_event_workqueue, &backbone_gw->report_work);18001800- /* backbone_gw is unreferenced in the report work function function */17981798+ ret = queue_work(batadv_event_workqueue, &backbone_gw->report_work);17991799+18001800+ /* backbone_gw is unreferenced in the report work function function18011801+ * if queue_work() call was successful18021802+ */18031803+ if (!ret)18041804+ batadv_backbone_gw_put(backbone_gw);1801180518021806 return true;18031807}
+9-2
net/batman-adv/gateway_client.c
···3232#include <linux/kernel.h>3333#include <linux/kref.h>3434#include <linux/list.h>3535+#include <linux/lockdep.h>3536#include <linux/netdevice.h>3637#include <linux/netlink.h>3738#include <linux/rculist.h>···349348 * @bat_priv: the bat priv with all the soft interface information350349 * @orig_node: originator announcing gateway capabilities351350 * @gateway: announced bandwidth information351351+ *352352+ * Has to be called with the appropriate locks being acquired353353+ * (gw.list_lock).352354 */353355static void batadv_gw_node_add(struct batadv_priv *bat_priv,354356 struct batadv_orig_node *orig_node,355357 struct batadv_tvlv_gateway_data *gateway)356358{357359 struct batadv_gw_node *gw_node;360360+361361+ lockdep_assert_held(&bat_priv->gw.list_lock);358362359363 if (gateway->bandwidth_down == 0)360364 return;···375369 gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);376370 gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);377371378378- spin_lock_bh(&bat_priv->gw.list_lock);379372 kref_get(&gw_node->refcount);380373 hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.gateway_list);381381- spin_unlock_bh(&bat_priv->gw.list_lock);382374383375 batadv_dbg(BATADV_DBG_BATMAN, bat_priv,384376 "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",···432428{433429 struct batadv_gw_node *gw_node, *curr_gw = NULL;434430431431+ spin_lock_bh(&bat_priv->gw.list_lock);435432 gw_node = batadv_gw_node_get(bat_priv, orig_node);436433 if (!gw_node) {437434 batadv_gw_node_add(bat_priv, orig_node, gateway);435435+ spin_unlock_bh(&bat_priv->gw.list_lock);438436 goto out;439437 }438438+ spin_unlock_bh(&bat_priv->gw.list_lock);440439441440 if (gw_node->bandwidth_down == ntohl(gateway->bandwidth_down) &&442441 gw_node->bandwidth_up == ntohl(gateway->bandwidth_up))
···854854 spinlock_t *lock; /* Used to lock list selected by "int in_coding" */855855 struct list_head *list;856856857857- /* Check if nc_node is already added */858858- nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding);859859-860860- /* Node found */861861- if (nc_node)862862- return nc_node;863863-864864- nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC);865865- if (!nc_node)866866- return NULL;867867-868868- /* Initialize nc_node */869869- INIT_LIST_HEAD(&nc_node->list);870870- kref_init(&nc_node->refcount);871871- ether_addr_copy(nc_node->addr, orig_node->orig);872872- kref_get(&orig_neigh_node->refcount);873873- nc_node->orig_node = orig_neigh_node;874874-875857 /* Select ingoing or outgoing coding node */876858 if (in_coding) {877859 lock = &orig_neigh_node->in_coding_list_lock;···863881 list = &orig_neigh_node->out_coding_list;864882 }865883884884+ spin_lock_bh(lock);885885+886886+ /* Check if nc_node is already added */887887+ nc_node = batadv_nc_find_nc_node(orig_node, orig_neigh_node, in_coding);888888+889889+ /* Node found */890890+ if (nc_node)891891+ goto unlock;892892+893893+ nc_node = kzalloc(sizeof(*nc_node), GFP_ATOMIC);894894+ if (!nc_node)895895+ goto unlock;896896+897897+ /* Initialize nc_node */898898+ INIT_LIST_HEAD(&nc_node->list);899899+ kref_init(&nc_node->refcount);900900+ ether_addr_copy(nc_node->addr, orig_node->orig);901901+ kref_get(&orig_neigh_node->refcount);902902+ nc_node->orig_node = orig_neigh_node;903903+866904 batadv_dbg(BATADV_DBG_NC, bat_priv, "Adding nc_node %pM -> %pM\n",867905 nc_node->addr, nc_node->orig_node->orig);868906869907 /* Add nc_node to orig_node */870870- spin_lock_bh(lock);871908 kref_get(&nc_node->refcount);872909 list_add_tail_rcu(&nc_node->list, list);910910+911911+unlock:873912 spin_unlock_bh(lock);874913875914 return nc_node;
+19-8
net/batman-adv/soft-interface.c
···574574 struct batadv_softif_vlan *vlan;575575 int err;576576577577+ spin_lock_bh(&bat_priv->softif_vlan_list_lock);578578+577579 vlan = batadv_softif_vlan_get(bat_priv, vid);578580 if (vlan) {579581 batadv_softif_vlan_put(vlan);582582+ spin_unlock_bh(&bat_priv->softif_vlan_list_lock);580583 return -EEXIST;581584 }582585583586 vlan = kzalloc(sizeof(*vlan), GFP_ATOMIC);584584- if (!vlan)587587+ if (!vlan) {588588+ spin_unlock_bh(&bat_priv->softif_vlan_list_lock);585589 return -ENOMEM;590590+ }586591587592 vlan->bat_priv = bat_priv;588593 vlan->vid = vid;···595590596591 atomic_set(&vlan->ap_isolation, 0);597592598598- err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan);599599- if (err) {600600- kfree(vlan);601601- return err;602602- }603603-604604- spin_lock_bh(&bat_priv->softif_vlan_list_lock);605593 kref_get(&vlan->refcount);606594 hlist_add_head_rcu(&vlan->list, &bat_priv->softif_vlan_list);607595 spin_unlock_bh(&bat_priv->softif_vlan_list_lock);596596+597597+ /* batadv_sysfs_add_vlan cannot be in the spinlock section due to the598598+ * sleeping behavior of the sysfs functions and the fs_reclaim lock599599+ */600600+ err = batadv_sysfs_add_vlan(bat_priv->soft_iface, vlan);601601+ if (err) {602602+ /* ref for the function */603603+ batadv_softif_vlan_put(vlan);604604+605605+ /* ref for the list */606606+ batadv_softif_vlan_put(vlan);607607+ return err;608608+ }608609609610 /* add a new TT local entry. This one will be marked with the NOPURGE610611 * flag
···16131613{16141614 struct batadv_tt_orig_list_entry *orig_entry;1615161516161616+ spin_lock_bh(&tt_global->list_lock);16171617+16161618 orig_entry = batadv_tt_global_orig_entry_find(tt_global, orig_node);16171619 if (orig_entry) {16181620 /* refresh the ttvn: the current value could be a bogus one that···16371635 orig_entry->flags = flags;16381636 kref_init(&orig_entry->refcount);1639163716401640- spin_lock_bh(&tt_global->list_lock);16411638 kref_get(&orig_entry->refcount);16421639 hlist_add_head_rcu(&orig_entry->list,16431640 &tt_global->orig_list);16441644- spin_unlock_bh(&tt_global->list_lock);16451641 atomic_inc(&tt_global->orig_list_count);1646164216471643sync_flags:···16471647out:16481648 if (orig_entry)16491649 batadv_tt_orig_list_entry_put(orig_entry);16501650+16511651+ spin_unlock_bh(&tt_global->list_lock);16501652}1651165316521654/**
···23442344 if (unlikely(bytes_sg_total > copy))23452345 return -EINVAL;2346234623472347- page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));23472347+ page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC | __GFP_COMP,23482348+ get_order(copy));23482349 if (unlikely(!page))23492350 return -ENOMEM;23502351 p = page_address(page);
+8-5
net/core/neighbour.c
···11801180 lladdr = neigh->ha;11811181 }1182118211831183+ /* Update confirmed timestamp for neighbour entry after we11841184+ * received ARP packet even if it doesn't change IP to MAC binding.11851185+ */11861186+ if (new & NUD_CONNECTED)11871187+ neigh->confirmed = jiffies;11881188+11831189 /* If entry was valid and address is not changed,11841190 do not change entry state, if new one is STALE.11851191 */···12071201 }12081202 }1209120312101210- /* Update timestamps only once we know we will make a change to the12041204+ /* Update timestamp only once we know we will make a change to the12111205 * neighbour entry. Otherwise we risk to move the locktime window with12121206 * noop updates and ignore relevant ARP updates.12131207 */12141214- if (new != old || lladdr != neigh->ha) {12151215- if (new & NUD_CONNECTED)12161216- neigh->confirmed = jiffies;12081208+ if (new != old || lladdr != neigh->ha)12171209 neigh->updated = jiffies;12181218- }1219121012201211 if (new != old) {12211212 neigh_del_timer(neigh);
+7-12
net/core/netpoll.c
···187187 }188188}189189190190-static void netpoll_poll_dev(struct net_device *dev)190190+void netpoll_poll_dev(struct net_device *dev)191191{192192- const struct net_device_ops *ops;193192 struct netpoll_info *ni = rcu_dereference_bh(dev->npinfo);193193+ const struct net_device_ops *ops;194194195195 /* Don't do any rx activity if the dev_lock mutex is held196196 * the dev_open/close paths use this to block netpoll activity197197 * while changing device state198198 */199199- if (down_trylock(&ni->dev_lock))199199+ if (!ni || down_trylock(&ni->dev_lock))200200 return;201201202202 if (!netif_running(dev)) {···205205 }206206207207 ops = dev->netdev_ops;208208- if (!ops->ndo_poll_controller) {209209- up(&ni->dev_lock);210210- return;211211- }212212-213213- /* Process pending work on NIC */214214- ops->ndo_poll_controller(dev);208208+ if (ops->ndo_poll_controller)209209+ ops->ndo_poll_controller(dev);215210216211 poll_napi(dev);217212···214219215220 zap_completion_queue();216221}222222+EXPORT_SYMBOL(netpoll_poll_dev);217223218224void netpoll_poll_disable(struct net_device *dev)219225{···609613 strlcpy(np->dev_name, ndev->name, IFNAMSIZ);610614 INIT_WORK(&np->cleanup_work, netpoll_async_cleanup);611615612612- if ((ndev->priv_flags & IFF_DISABLE_NETPOLL) ||613613- !ndev->netdev_ops->ndo_poll_controller) {616616+ if (ndev->priv_flags & IFF_DISABLE_NETPOLL) {614617 np_err(np, "%s doesn't support polling, aborting\n",615618 np->dev_name);616619 err = -ENOTSUPP;
···627627 const struct iphdr *tnl_params, u8 protocol)628628{629629 struct ip_tunnel *tunnel = netdev_priv(dev);630630+ unsigned int inner_nhdr_len = 0;630631 const struct iphdr *inner_iph;631632 struct flowi4 fl4;632633 u8 tos, ttl;···636635 unsigned int max_headroom; /* The extra header space needed */637636 __be32 dst;638637 bool connected;638638+639639+ /* ensure we can access the inner net header, for several users below */640640+ if (skb->protocol == htons(ETH_P_IP))641641+ inner_nhdr_len = sizeof(struct iphdr);642642+ else if (skb->protocol == htons(ETH_P_IPV6))643643+ inner_nhdr_len = sizeof(struct ipv6hdr);644644+ if (unlikely(!pskb_may_pull(skb, inner_nhdr_len)))645645+ goto tx_error;639646640647 inner_iph = (const struct iphdr *)skb_inner_network_header(skb);641648 connected = (tunnel->parms.iph.daddr != 0);
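The ip_tunnel hunk above (and the ip6_tunnel one further down) guards against reading an inner header that is not yet in the skb's linear data area. pskb_may_pull() is the canonical check: it returns false if the packet is shorter than requested or the bytes cannot be made directly addressable, and only after it succeeds is it safe to dereference header pointers. A hedged sketch, simplified to a plain IPv4 header:

    #include <linux/ip.h>
    #include <linux/skbuff.h>

    /* Return the IPv4 header only once it is known to be linear. */
    static const struct iphdr *my_ipv4_header(struct sk_buff *skb)
    {
            if (!pskb_may_pull(skb, sizeof(struct iphdr)))
                    return NULL;    /* runt or non-linear packet */

            return ip_hdr(skb);
    }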
+26-23
net/ipv4/udp.c
···21242124 inet_compute_pseudo);21252125}2126212621272127+/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and21282128+ * return code conversion for ip layer consumption21292129+ */21302130+static int udp_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,21312131+ struct udphdr *uh)21322132+{21332133+ int ret;21342134+21352135+ if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))21362136+ skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,21372137+ inet_compute_pseudo);21382138+21392139+ ret = udp_queue_rcv_skb(sk, skb);21402140+21412141+ /* a return value > 0 means to resubmit the input, but21422142+ * it wants the return to be -protocol, or 021432143+ */21442144+ if (ret > 0)21452145+ return -ret;21462146+ return 0;21472147+}21482148+21272149/*21282150 * All we need to do is get the socket, and then do a checksum.21292151 */···21922170 if (unlikely(sk->sk_rx_dst != dst))21932171 udp_sk_rx_dst_set(sk, dst);2194217221952195- ret = udp_queue_rcv_skb(sk, skb);21732173+ ret = udp_unicast_rcv_skb(sk, skb, uh);21962174 sock_put(sk);21972197- /* a return value > 0 means to resubmit the input, but21982198- * it wants the return to be -protocol, or 021992199- */22002200- if (ret > 0)22012201- return -ret;22022202- return 0;21752175+ return ret;22032176 }2204217722052178 if (rt->rt_flags & (RTCF_BROADCAST|RTCF_MULTICAST))···22022185 saddr, daddr, udptable, proto);2203218622042187 sk = __udp4_lib_lookup_skb(skb, uh->source, uh->dest, udptable);22052205- if (sk) {22062206- int ret;22072207-22082208- if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))22092209- skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,22102210- inet_compute_pseudo);22112211-22122212- ret = udp_queue_rcv_skb(sk, skb);22132213-22142214- /* a return value > 0 means to resubmit the input, but22152215- * it wants the return to be -protocol, or 022162216- */22172217- if (ret > 0)22182218- return -ret;22192219- return 0;22202220- }21882188+ if (sk)21892189+ return udp_unicast_rcv_skb(sk, skb, uh);2221219022222191 if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))22232192 goto drop;
···219219 kfree_skb(skb);220220 return -ENOBUFS;221221 }222222+ if (skb->sk)223223+ skb_set_owner_w(skb2, skb->sk);222224 consume_skb(skb);223225 skb = skb2;224224- /* skb_set_owner_w() changes sk->sk_wmem_alloc atomically,225225- * it is safe to call in our context (socket lock not held)226226- */227227- skb_set_owner_w(skb, (struct sock *)sk);228226 }229227 if (opt->opt_flen)230228 ipv6_push_frag_opts(skb, opt, &proto);
+11-2
net/ipv6/ip6_tunnel.c
···12341234ip4ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)12351235{12361236 struct ip6_tnl *t = netdev_priv(dev);12371237- const struct iphdr *iph = ip_hdr(skb);12371237+ const struct iphdr *iph;12381238 int encap_limit = -1;12391239 struct flowi6 fl6;12401240 __u8 dsfield;···12421242 u8 tproto;12431243 int err;1244124412451245+ /* ensure we can access the full inner ip header */12461246+ if (!pskb_may_pull(skb, sizeof(struct iphdr)))12471247+ return -1;12481248+12491249+ iph = ip_hdr(skb);12451250 memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));1246125112471252 tproto = READ_ONCE(t->parms.proto);···13111306ip6ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev)13121307{13131308 struct ip6_tnl *t = netdev_priv(dev);13141314- struct ipv6hdr *ipv6h = ipv6_hdr(skb);13091309+ struct ipv6hdr *ipv6h;13151310 int encap_limit = -1;13161311 __u16 offset;13171312 struct flowi6 fl6;···13201315 u8 tproto;13211316 int err;1322131713181318+ if (unlikely(!pskb_may_pull(skb, sizeof(*ipv6h))))13191319+ return -1;13201320+13211321+ ipv6h = ipv6_hdr(skb);13231322 tproto = READ_ONCE(t->parms.proto);13241323 if ((tproto != IPPROTO_IPV6 && tproto != 0) ||13251324 ip6_tnl_addr_conflict(t, ipv6h))
+38-15
net/ipv6/route.c
···364364365365static void ip6_dst_destroy(struct dst_entry *dst)366366{367367+ struct dst_metrics *p = (struct dst_metrics *)DST_METRICS_PTR(dst);367368 struct rt6_info *rt = (struct rt6_info *)dst;368369 struct fib6_info *from;369370 struct inet6_dev *idev;370371371371- dst_destroy_metrics_generic(dst);372372+ if (p != &dst_default_metrics && refcount_dec_and_test(&p->refcnt))373373+ kfree(p);374374+372375 rt6_uncached_list_del(rt);373376374377 idev = rt->rt6i_idev;···949946950947static void ip6_rt_init_dst(struct rt6_info *rt, struct fib6_info *ort)951948{952952- rt->dst.flags |= fib6_info_dst_flags(ort);953953-954949 if (ort->fib6_flags & RTF_REJECT) {955950 ip6_rt_init_dst_reject(rt, ort);956951 return;···979978 rt->rt6i_flags &= ~RTF_EXPIRES;980979 rcu_assign_pointer(rt->from, from);981980 dst_init_metrics(&rt->dst, from->fib6_metrics->metrics, true);981981+ if (from->fib6_metrics != &dst_default_metrics) {982982+ rt->dst._metrics |= DST_METRICS_REFCOUNTED;983983+ refcount_inc(&from->fib6_metrics->refcnt);984984+ }982985}983986984987/* Caller must already hold reference to @ort */···46754670 int iif, int type, u32 portid, u32 seq,46764671 unsigned int flags)46774672{46784678- struct rtmsg *rtm;46734673+ struct rt6_info *rt6 = (struct rt6_info *)dst;46744674+ struct rt6key *rt6_dst, *rt6_src;46754675+ u32 *pmetrics, table, rt6_flags;46794676 struct nlmsghdr *nlh;46774677+ struct rtmsg *rtm;46804678 long expires = 0;46814681- u32 *pmetrics;46824682- u32 table;4683467946844680 nlh = nlmsg_put(skb, portid, seq, type, sizeof(*rtm), flags);46854681 if (!nlh)46864682 return -EMSGSIZE;4687468346844684+ if (rt6) {46854685+ rt6_dst = &rt6->rt6i_dst;46864686+ rt6_src = &rt6->rt6i_src;46874687+ rt6_flags = rt6->rt6i_flags;46884688+ } else {46894689+ rt6_dst = &rt->fib6_dst;46904690+ rt6_src = &rt->fib6_src;46914691+ rt6_flags = rt->fib6_flags;46924692+ }46934693+46884694 rtm = nlmsg_data(nlh);46894695 rtm->rtm_family = AF_INET6;46904690- rtm->rtm_dst_len = rt->fib6_dst.plen;46914691- rtm->rtm_src_len = rt->fib6_src.plen;46964696+ rtm->rtm_dst_len = rt6_dst->plen;46974697+ rtm->rtm_src_len = rt6_src->plen;46924698 rtm->rtm_tos = 0;46934699 if (rt->fib6_table)46944700 table = rt->fib6_table->tb6_id;···47144698 rtm->rtm_scope = RT_SCOPE_UNIVERSE;47154699 rtm->rtm_protocol = rt->fib6_protocol;4716470047174717- if (rt->fib6_flags & RTF_CACHE)47014701+ if (rt6_flags & RTF_CACHE)47184702 rtm->rtm_flags |= RTM_F_CLONED;4719470347204704 if (dest) {···47224706 goto nla_put_failure;47234707 rtm->rtm_dst_len = 128;47244708 } else if (rtm->rtm_dst_len)47254725- if (nla_put_in6_addr(skb, RTA_DST, &rt->fib6_dst.addr))47094709+ if (nla_put_in6_addr(skb, RTA_DST, &rt6_dst->addr))47264710 goto nla_put_failure;47274711#ifdef CONFIG_IPV6_SUBTREES47284712 if (src) {···47304714 goto nla_put_failure;47314715 rtm->rtm_src_len = 128;47324716 } else if (rtm->rtm_src_len &&47334733- nla_put_in6_addr(skb, RTA_SRC, &rt->fib6_src.addr))47174717+ nla_put_in6_addr(skb, RTA_SRC, &rt6_src->addr))47344718 goto nla_put_failure;47354719#endif47364720 if (iif) {47374721#ifdef CONFIG_IPV6_MROUTE47384738- if (ipv6_addr_is_multicast(&rt->fib6_dst.addr)) {47224722+ if (ipv6_addr_is_multicast(&rt6_dst->addr)) {47394723 int err = ip6mr_get_route(net, skb, rtm, portid);4740472447414725 if (err == 0)···47704754 /* For multipath routes, walk the siblings list and add47714755 * each as a nexthop within RTA_MULTIPATH.47724756 */47734773- if (rt->fib6_nsiblings) {47574757+ if (rt6) {47584758+ if (rt6_flags & RTF_GATEWAY &&47594759+ 
nla_put_in6_addr(skb, RTA_GATEWAY, &rt6->rt6i_gateway))47604760+ goto nla_put_failure;47614761+47624762+ if (dst->dev && nla_put_u32(skb, RTA_OIF, dst->dev->ifindex))47634763+ goto nla_put_failure;47644764+ } else if (rt->fib6_nsiblings) {47744765 struct fib6_info *sibling, *next_sibling;47754766 struct nlattr *mp;47764767···48004777 goto nla_put_failure;48014778 }4802477948034803- if (rt->fib6_flags & RTF_EXPIRES) {47804780+ if (rt6_flags & RTF_EXPIRES) {48044781 expires = dst ? dst->expires : rt->expires;48054782 expires -= jiffies;48064783 }···48084785 if (rtnl_put_cacheinfo(skb, dst, 0, expires, dst ? dst->error : 0) < 0)48094786 goto nla_put_failure;4810478748114811- if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt->fib6_flags)))47884788+ if (nla_put_u8(skb, RTA_PREF, IPV6_EXTRACT_PREF(rt6_flags)))48124789 goto nla_put_failure;4813479048144791
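The route.c hunk above stops calling dst_destroy_metrics_generic() and instead takes and drops an explicit reference on the fib6_info's metrics block, so a cached rt6_info can keep using the metrics after its fib6_info is gone and the block is only freed by whoever drops the last reference (the static dst_default_metrics block is never freed). Below is a minimal userspace sketch of that share/refcount/free pattern; the struct and function names are illustrative stand-ins, not kernel API.

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

/* Illustrative stand-in for struct dst_metrics: a refcounted block that
 * several route entries can point at.  The "default" instance is static
 * and must never be freed, mirroring dst_default_metrics above. */
struct metrics {
	atomic_int refcnt;
	unsigned int mtu;
};

static struct metrics default_metrics = { .mtu = 1500 };

/* Share an existing block: bump the count unless it is the static default. */
static struct metrics *metrics_get(struct metrics *m)
{
	if (m != &default_metrics)
		atomic_fetch_add(&m->refcnt, 1);
	return m;
}

/* Drop a reference; free only when the last non-default user goes away. */
static void metrics_put(struct metrics *m)
{
	if (m != &default_metrics && atomic_fetch_sub(&m->refcnt, 1) == 1)
		free(m);
}

int main(void)
{
	struct metrics *m = malloc(sizeof(*m));

	atomic_init(&m->refcnt, 1);		/* owner's reference */
	m->mtu = 1400;

	struct metrics *clone = metrics_get(m);	/* cached route shares it */

	metrics_put(m);				/* owner goes away, block survives */
	printf("mtu still readable: %u\n", clone->mtu);
	metrics_put(clone);			/* last user, block is freed */

	metrics_put(&default_metrics);		/* always a no-op */
	return 0;
}

The point carried over from the hunk is that the static default block is special-cased on both the get and the put path, so it can be shared by any number of users without ever being counted or freed.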
+37-28
net/ipv6/udp.c
···752752 }753753}754754755755+/* wrapper for udp_queue_rcv_skb tacking care of csum conversion and756756+ * return code conversion for ip layer consumption757757+ */758758+static int udp6_unicast_rcv_skb(struct sock *sk, struct sk_buff *skb,759759+ struct udphdr *uh)760760+{761761+ int ret;762762+763763+ if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))764764+ skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,765765+ ip6_compute_pseudo);766766+767767+ ret = udpv6_queue_rcv_skb(sk, skb);768768+769769+ /* a return value > 0 means to resubmit the input, but770770+ * it wants the return to be -protocol, or 0771771+ */772772+ if (ret > 0)773773+ return -ret;774774+ return 0;775775+}776776+755777int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,756778 int proto)757779{···825803 if (unlikely(sk->sk_rx_dst != dst))826804 udp6_sk_rx_dst_set(sk, dst);827805828828- ret = udpv6_queue_rcv_skb(sk, skb);829829- sock_put(sk);806806+ if (!uh->check && !udp_sk(sk)->no_check6_rx) {807807+ sock_put(sk);808808+ goto report_csum_error;809809+ }830810831831- /* a return value > 0 means to resubmit the input */832832- if (ret > 0)833833- return ret;834834- return 0;811811+ ret = udp6_unicast_rcv_skb(sk, skb, uh);812812+ sock_put(sk);813813+ return ret;835814 }836815837816 /*···845822 /* Unicast */846823 sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);847824 if (sk) {848848- int ret;849849-850850- if (!uh->check && !udp_sk(sk)->no_check6_rx) {851851- udp6_csum_zero_error(skb);852852- goto csum_error;853853- }854854-855855- if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))856856- skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,857857- ip6_compute_pseudo);858858-859859- ret = udpv6_queue_rcv_skb(sk, skb);860860-861861- /* a return value > 0 means to resubmit the input */862862- if (ret > 0)863863- return ret;864864-865865- return 0;825825+ if (!uh->check && !udp_sk(sk)->no_check6_rx)826826+ goto report_csum_error;827827+ return udp6_unicast_rcv_skb(sk, skb, uh);866828 }867829868868- if (!uh->check) {869869- udp6_csum_zero_error(skb);870870- goto csum_error;871871- }830830+ if (!uh->check)831831+ goto report_csum_error;872832873833 if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))874834 goto discard;···872866 ulen, skb->len,873867 daddr, ntohs(uh->dest));874868 goto discard;869869+870870+report_csum_error:871871+ udp6_csum_zero_error(skb);875872csum_error:876873 __UDP6_INC_STATS(net, UDP_MIB_CSUMERRORS, proto == IPPROTO_UDPLITE);877874discard:
+5-1
net/mpls/af_mpls.c
···15331533 unsigned int flags;1534153415351535 if (event == NETDEV_REGISTER) {15361536- /* For now just support Ethernet, IPGRE, SIT and IPIP devices */15361536+15371537+ /* For now just support Ethernet, IPGRE, IP6GRE, SIT and15381538+ * IPIP devices15391539+ */15371540 if (dev->type == ARPHRD_ETHER ||15381541 dev->type == ARPHRD_LOOPBACK ||15391542 dev->type == ARPHRD_IPGRE ||15431543+ dev->type == ARPHRD_IP6GRE ||15401544 dev->type == ARPHRD_SIT ||15411545 dev->type == ARPHRD_TUNNEL) {15421546 mdev = mpls_add_dev(dev);
+2-1
net/netlabel/netlabel_unlabeled.c
···781781{782782 u32 addr_len;783783784784- if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR]) {784784+ if (info->attrs[NLBL_UNLABEL_A_IPV4ADDR] &&785785+ info->attrs[NLBL_UNLABEL_A_IPV4MASK]) {785786 addr_len = nla_len(info->attrs[NLBL_UNLABEL_A_IPV4ADDR]);786787 if (addr_len != sizeof(struct in_addr) &&787788 addr_len != nla_len(info->attrs[NLBL_UNLABEL_A_IPV4MASK]))
+10
net/nfc/hci/core.c
···209209 }210210 create_info = (struct hci_create_pipe_resp *)skb->data;211211212212+ if (create_info->pipe >= NFC_HCI_MAX_PIPES) {213213+ status = NFC_HCI_ANY_E_NOK;214214+ goto exit;215215+ }216216+212217 /* Save the new created pipe and bind with local gate,213218 * the description for skb->data[3] is destination gate id214219 * but since we received this cmd from host controller, we···236231 goto exit;237232 }238233 delete_info = (struct hci_delete_pipe_noti *)skb->data;234234+235235+ if (delete_info->pipe >= NFC_HCI_MAX_PIPES) {236236+ status = NFC_HCI_ANY_E_NOK;237237+ goto exit;238238+ }239239240240 hdev->pipes[delete_info->pipe].gate = NFC_HCI_INVALID_GATE;241241 hdev->pipes[delete_info->pipe].dest_host = NFC_HCI_INVALID_HOST;
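The two checks added above treat the pipe identifier as untrusted input from the host controller: it is range-checked against NFC_HCI_MAX_PIPES before being used as an index into hdev->pipes[]. A small, self-contained sketch of the same pattern (MAX_PIPES and the state array below are made-up stand-ins, not the NFC code):

#include <stdio.h>

#define MAX_PIPES 128

static int pipe_state[MAX_PIPES];

/* Only index the array after the externally supplied id is range-checked. */
static int set_pipe_state(unsigned int pipe, int state)
{
	if (pipe >= MAX_PIPES)
		return -1;		/* out of range: refuse, don't index */
	pipe_state[pipe] = state;
	return 0;
}

int main(void)
{
	printf("pipe 3:   %d\n", set_pipe_state(3, 1));		/* 0, accepted */
	printf("pipe 500: %d\n", set_pipe_state(500, 1));	/* -1, rejected */
	return 0;
}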
···69697070 if (!exists) {7171 ret = tcf_idr_create(tn, parm->index, est, a,7272- &act_sample_ops, bind, false);7272+ &act_sample_ops, bind, true);7373 if (ret) {7474 tcf_idr_cleanup(tn, parm->index);7575 return ret;
+2
net/sched/cls_api.c
···19021902 RTM_NEWCHAIN, false);19031903 break;19041904 case RTM_DELCHAIN:19051905+ tfilter_notify_chain(net, skb, block, q, parent, n,19061906+ chain, RTM_DELTFILTER);19051907 /* Flush the chain first as the user requested chain removal. */19061908 tcf_chain_flush(chain);19071909 /* In case the chain was successfully deleted, put a reference
···241241 ctx->sk_write_space(sk);242242}243243244244+static void tls_ctx_free(struct tls_context *ctx)245245+{246246+ if (!ctx)247247+ return;248248+249249+ memzero_explicit(&ctx->crypto_send, sizeof(ctx->crypto_send));250250+ memzero_explicit(&ctx->crypto_recv, sizeof(ctx->crypto_recv));251251+ kfree(ctx);252252+}253253+244254static void tls_sk_proto_close(struct sock *sk, long timeout)245255{246256 struct tls_context *ctx = tls_get_ctx(sk);···304294#else305295 {306296#endif307307- kfree(ctx);297297+ tls_ctx_free(ctx);308298 ctx = NULL;309299 }310300···315305 * for sk->sk_prot->unhash [tls_hw_unhash]316306 */317307 if (free_ctx)318318- kfree(ctx);308308+ tls_ctx_free(ctx);319309}320310321311static int do_tls_getsockopt_tx(struct sock *sk, char __user *optval,···340330 }341331342332 /* get user crypto info */343343- crypto_info = &ctx->crypto_send;333333+ crypto_info = &ctx->crypto_send.info;344334345335 if (!TLS_CRYPTO_INFO_READY(crypto_info)) {346336 rc = -EBUSY;···427417 }428418429419 if (tx)430430- crypto_info = &ctx->crypto_send;420420+ crypto_info = &ctx->crypto_send.info;431421 else432432- crypto_info = &ctx->crypto_recv;422422+ crypto_info = &ctx->crypto_recv.info;433423434424 /* Currently we don't support set crypto info more than one time */435425 if (TLS_CRYPTO_INFO_READY(crypto_info)) {···509499 goto out;510500511501err_crypto_info:512512- memset(crypto_info, 0, sizeof(*crypto_info));502502+ memzero_explicit(crypto_info, sizeof(union tls_crypto_context));513503out:514504 return rc;515505}
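The tls_ctx_free() helper above wipes the cached cipher state with memzero_explicit() before kfree(), and the setsockopt error path now clears the whole tls_crypto_context the same way. The reason for not using a plain memset() is that a store to memory which is about to be freed is a dead store the compiler may remove. A userspace sketch of the same idea, assuming a hand-rolled secure_memzero() (explicit_bzero() from glibc would do the same job):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Writing through a volatile pointer keeps the wipe in the generated code
 * even when the buffer is freed immediately afterwards. */
static void secure_memzero(void *p, size_t len)
{
	volatile unsigned char *v = p;

	while (len--)
		*v++ = 0;
}

struct session {
	unsigned char key[16];	/* stand-in for the TLS crypto_send state */
};

static void session_free(struct session *s)
{
	if (!s)
		return;
	secure_memzero(s->key, sizeof(s->key));	/* wipe before the free */
	free(s);
}

int main(void)
{
	struct session *s = calloc(1, sizeof(*s));

	if (!s)
		return 1;
	memcpy(s->key, "0123456789abcdef", sizeof(s->key));
	session_free(s);
	puts("key material wiped before free");
	return 0;
}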
+13-8
net/tls/tls_sw.c
···931931 if (control != TLS_RECORD_TYPE_DATA)932932 goto recv_end;933933 }934934+ } else {935935+ /* MSG_PEEK right now cannot look beyond current skb936936+ * from strparser, meaning we cannot advance skb here937937+ * and thus unpause strparser since we'd loose original938938+ * one.939939+ */940940+ break;934941 }942942+935943 /* If we have a new message from strparser, continue now. */936944 if (copied >= target && !ctx->recv_pkt)937945 break;···10631055 goto read_failure;10641056 }1065105710661066- if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.version) ||10671067- header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.version)) {10581058+ if (header[1] != TLS_VERSION_MINOR(tls_ctx->crypto_recv.info.version) ||10591059+ header[2] != TLS_VERSION_MAJOR(tls_ctx->crypto_recv.info.version)) {10681060 ret = -EINVAL;10691061 goto read_failure;10701062 }···1144113611451137int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)11461138{11471147- char keyval[TLS_CIPHER_AES_GCM_128_KEY_SIZE];11481139 struct tls_crypto_info *crypto_info;11491140 struct tls12_crypto_info_aes_gcm_128 *gcm_128_info;11501141 struct tls_sw_context_tx *sw_ctx_tx = NULL;···1188118111891182 if (tx) {11901183 crypto_init_wait(&sw_ctx_tx->async_wait);11911191- crypto_info = &ctx->crypto_send;11841184+ crypto_info = &ctx->crypto_send.info;11921185 cctx = &ctx->tx;11931186 aead = &sw_ctx_tx->aead_send;11941187 } else {11951188 crypto_init_wait(&sw_ctx_rx->async_wait);11961196- crypto_info = &ctx->crypto_recv;11891189+ crypto_info = &ctx->crypto_recv.info;11971190 cctx = &ctx->rx;11981191 aead = &sw_ctx_rx->aead_recv;11991192 }···1272126512731266 ctx->push_pending_record = tls_sw_push_pending_record;1274126712751275- memcpy(keyval, gcm_128_info->key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);12761276-12771277- rc = crypto_aead_setkey(*aead, keyval,12681268+ rc = crypto_aead_setkey(*aead, gcm_128_info->key,12781269 TLS_CIPHER_AES_GCM_128_KEY_SIZE);12791270 if (rc)12801271 goto free_aead;
+13
scripts/subarch.include
···11+# SUBARCH tells the usermode build what the underlying arch is. That is set22+# first, and if a usermode build is happening, the "ARCH=um" on the command33+# line overrides the setting of ARCH below. If a native build is happening,44+# then ARCH is assigned, getting whatever value it gets normally, and55+# SUBARCH is subsequently ignored.66+77+SUBARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \88+ -e s/sun4u/sparc64/ \99+ -e s/arm.*/arm/ -e s/sa110/arm/ \1010+ -e s/s390x/s390/ -e s/parisc64/parisc/ \1111+ -e s/ppc.*/powerpc/ -e s/mips.*/mips/ \1212+ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \1313+ -e s/riscv.*/riscv/)
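The sed pipeline above normalises uname -m output into the architecture names the kernel build uses for ARCH/SUBARCH. Purely as an illustration of what that mapping does, here is a small C program applying a subset of the same rules with POSIX regular expressions; the table is abridged, the names are taken from the script above, and this program is not part of the build system:

#include <regex.h>
#include <stdio.h>
#include <stddef.h>
#include <sys/utsname.h>

struct rule { const char *pattern; const char *subarch; };

/* Abridged version of the sed substitutions above. */
static const struct rule rules[] = {
	{ "^i.86$",     "x86"     },
	{ "^x86_64$",   "x86"     },
	{ "^sun4u$",    "sparc64" },
	{ "^arm.*",     "arm"     },
	{ "^aarch64.*", "arm64"   },
	{ "^ppc.*",     "powerpc" },
	{ "^riscv.*",   "riscv"   },
};

int main(void)
{
	struct utsname u;
	regex_t re;

	if (uname(&u))
		return 1;

	for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
		if (regcomp(&re, rules[i].pattern, REG_EXTENDED | REG_NOSUB))
			continue;
		int hit = regexec(&re, u.machine, 0, NULL, 0) == 0;
		regfree(&re);
		if (hit) {
			printf("SUBARCH=%s\n", rules[i].subarch);
			return 0;
		}
	}
	printf("SUBARCH=%s\n", u.machine);	/* no rule matched: keep as-is */
	return 0;
}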
···117117 struct sigmadsp_control *ctrl, void *data)118118{119119 /* safeload loads up to 20 bytes in a atomic operation */120120- if (ctrl->num_bytes > 4 && ctrl->num_bytes <= 20 && sigmadsp->ops &&121121- sigmadsp->ops->safeload)120120+ if (ctrl->num_bytes <= 20 && sigmadsp->ops && sigmadsp->ops->safeload)122121 return sigmadsp->ops->safeload(sigmadsp, ctrl->addr, data,123122 ctrl->num_bytes);124123 else
+9-3
sound/soc/codecs/tas6424.c
···424424 TAS6424_FAULT_PVDD_UV |425425 TAS6424_FAULT_VBAT_UV;426426427427- if (reg)427427+ if (!reg) {428428+ tas6424->last_fault1 = reg;428429 goto check_global_fault2_reg;430430+ }429431430432 /*431433 * Only flag errors once for a given occurrence. This is needed as···463461 TAS6424_FAULT_OTSD_CH3 |464462 TAS6424_FAULT_OTSD_CH4;465463466466- if (!reg)464464+ if (!reg) {465465+ tas6424->last_fault2 = reg;467466 goto check_warn_reg;467467+ }468468469469 if ((reg & TAS6424_FAULT_OTSD) && !(tas6424->last_fault2 & TAS6424_FAULT_OTSD))470470 dev_crit(dev, "experienced a global overtemp shutdown\n");···501497 TAS6424_WARN_VDD_OTW_CH3 |502498 TAS6424_WARN_VDD_OTW_CH4;503499504504- if (!reg)500500+ if (!reg) {501501+ tas6424->last_warn = reg;505502 goto out;503503+ }506504507505 if ((reg & TAS6424_WARN_VDD_UV) && !(tas6424->last_warn & TAS6424_WARN_VDD_UV))508506 dev_warn(dev, "experienced a VDD under voltage condition\n");
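Each of the three hunks above makes the driver cache the register value even when it reads back clean; otherwise a fault that clears and later reappears is compared against a stale last_fault*/last_warn copy that still has the bit set, and the new occurrence is silently suppressed. The underlying pattern is edge-triggered reporting: warn when a bit newly appears, and always remember the latest snapshot. A tiny stand-alone illustration (FAULT_OVERTEMP and poll_faults() are invented names):

#include <stdio.h>

#define FAULT_OVERTEMP 0x01

static unsigned int last_fault;

static void poll_faults(unsigned int reg)
{
	if ((reg & FAULT_OVERTEMP) && !(last_fault & FAULT_OVERTEMP))
		printf("overtemp fault\n");	/* newly raised */

	last_fault = reg;	/* always remember, even when reg == 0 */
}

int main(void)
{
	poll_faults(FAULT_OVERTEMP);	/* reported */
	poll_faults(FAULT_OVERTEMP);	/* still set: stays quiet */
	poll_faults(0);			/* cleared: cache must follow */
	poll_faults(FAULT_OVERTEMP);	/* raised again: reported again */
	return 0;
}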
···960960{961961 int i;962962963963- for (i = 0; i < MAX_SESSIONS; i++)963963+ for (i = 0; i < MAX_SESSIONS; i++) {964964 routing_data->sessions[i].port_id = -1;965965+ routing_data->sessions[i].fedai_id = -1;966966+ }965967966968 return 0;967969}
+5
sound/soc/sh/rcar/adg.c
···462462 goto rsnd_adg_get_clkout_end;463463464464 req_size = prop->length / sizeof(u32);465465+ if (req_size > REQ_SIZE) {466466+ dev_err(dev,467467+ "too many clock-frequency, use top %d\n", REQ_SIZE);468468+ req_size = REQ_SIZE;469469+ }465470466471 of_property_read_u32_array(np, "clock-frequency", req_rate, req_size);467472 req_48kHz_rate = 0;
+20-1
sound/soc/sh/rcar/core.c
···478478 (func_call && (mod)->ops->fn) ? #fn : ""); \479479 if (func_call && (mod)->ops->fn) \480480 tmp = (mod)->ops->fn(mod, io, param); \481481- if (tmp) \481481+ if (tmp && (tmp != -EPROBE_DEFER)) \482482 dev_err(dev, "%s[%d] : %s error %d\n", \483483 rsnd_mod_name(mod), rsnd_mod_id(mod), \484484 #fn, tmp); \···958958 rsnd_dai_stream_quit(io);959959}960960961961+static int rsnd_soc_dai_prepare(struct snd_pcm_substream *substream,962962+ struct snd_soc_dai *dai)963963+{964964+ struct rsnd_priv *priv = rsnd_dai_to_priv(dai);965965+ struct rsnd_dai *rdai = rsnd_dai_to_rdai(dai);966966+ struct rsnd_dai_stream *io = rsnd_rdai_to_io(rdai, substream);967967+968968+ return rsnd_dai_call(prepare, io, priv);969969+}970970+961971static const struct snd_soc_dai_ops rsnd_soc_dai_ops = {962972 .startup = rsnd_soc_dai_startup,963973 .shutdown = rsnd_soc_dai_shutdown,964974 .trigger = rsnd_soc_dai_trigger,965975 .set_fmt = rsnd_soc_dai_set_fmt,966976 .set_tdm_slot = rsnd_soc_set_dai_tdm_slot,977977+ .prepare = rsnd_soc_dai_prepare,967978};968979969980void rsnd_parse_connect_common(struct rsnd_dai *rdai,···15601549 rsnd_dai_call(remove, &rdai->playback, priv);15611550 rsnd_dai_call(remove, &rdai->capture, priv);15621551 }15521552+15531553+ /*15541554+ * adg is very special mod which can't use rsnd_dai_call(remove),15551555+ * and it registers ADG clock on probe.15561556+ * It should be unregister if probe failed.15571557+ * Mainly it is assuming -EPROBE_DEFER case15581558+ */15591559+ rsnd_adg_remove(priv);1563156015641561 return ret;15651562}
+4
sound/soc/sh/rcar/dma.c
···241241 /* try to get DMAEngine channel */242242 chan = rsnd_dmaen_request_channel(io, mod_from, mod_to);243243 if (IS_ERR_OR_NULL(chan)) {244244+ /* Let's follow when -EPROBE_DEFER case */245245+ if (PTR_ERR(chan) == -EPROBE_DEFER)246246+ return PTR_ERR(chan);247247+244248 /*245249 * DMA failed. try to PIO mode246250 * see
+7
sound/soc/sh/rcar/rsnd.h
···280280 int (*nolock_stop)(struct rsnd_mod *mod,281281 struct rsnd_dai_stream *io,282282 struct rsnd_priv *priv);283283+ int (*prepare)(struct rsnd_mod *mod,284284+ struct rsnd_dai_stream *io,285285+ struct rsnd_priv *priv);283286};284287285288struct rsnd_dai_stream;···312309 * H 0: fallback313310 * H 0: hw_params314311 * H 0: pointer312312+ * H 0: prepare315313 */316314#define __rsnd_mod_shift_nolock_start 0317315#define __rsnd_mod_shift_nolock_stop 0···327323#define __rsnd_mod_shift_fallback 28 /* always called */328324#define __rsnd_mod_shift_hw_params 28 /* always called */329325#define __rsnd_mod_shift_pointer 28 /* always called */326326+#define __rsnd_mod_shift_prepare 28 /* always called */330327331328#define __rsnd_mod_add_probe 0332329#define __rsnd_mod_add_remove 0···342337#define __rsnd_mod_add_fallback 0343338#define __rsnd_mod_add_hw_params 0344339#define __rsnd_mod_add_pointer 0340340+#define __rsnd_mod_add_prepare 0345341346342#define __rsnd_mod_call_probe 0347343#define __rsnd_mod_call_remove 0···357351#define __rsnd_mod_call_pointer 0358352#define __rsnd_mod_call_nolock_start 0359353#define __rsnd_mod_call_nolock_stop 1354354+#define __rsnd_mod_call_prepare 0360355361356#define rsnd_mod_to_priv(mod) ((mod)->priv)362357#define rsnd_mod_name(mod) ((mod)->ops->name)
+10-6
sound/soc/sh/rcar/ssi.c
···283283 if (rsnd_ssi_is_multi_slave(mod, io))284284 return 0;285285286286- if (ssi->usrcnt > 1) {286286+ if (ssi->rate) {287287 if (ssi->rate != rate) {288288 dev_err(dev, "SSI parent/child should use same rate\n");289289 return -EINVAL;···434434 struct rsnd_priv *priv)435435{436436 struct rsnd_ssi *ssi = rsnd_mod_to_ssi(mod);437437- int ret;438437439438 if (!rsnd_ssi_is_run_mods(mod, io))440439 return 0;···441442 ssi->usrcnt++;442443443444 rsnd_mod_power_on(mod);444444-445445- ret = rsnd_ssi_master_clk_start(mod, io);446446- if (ret < 0)447447- return ret;448445449446 rsnd_ssi_config_init(mod, io);450447···847852 return 0;848853}849854855855+static int rsnd_ssi_prepare(struct rsnd_mod *mod,856856+ struct rsnd_dai_stream *io,857857+ struct rsnd_priv *priv)858858+{859859+ return rsnd_ssi_master_clk_start(mod, io);860860+}861861+850862static struct rsnd_mod_ops rsnd_ssi_pio_ops = {851863 .name = SSI_NAME,852864 .probe = rsnd_ssi_common_probe,···866864 .pointer = rsnd_ssi_pio_pointer,867865 .pcm_new = rsnd_ssi_pcm_new,868866 .hw_params = rsnd_ssi_hw_params,867867+ .prepare = rsnd_ssi_prepare,869868};870869871870static int rsnd_ssi_dma_probe(struct rsnd_mod *mod,···943940 .pcm_new = rsnd_ssi_pcm_new,944941 .fallback = rsnd_ssi_fallback,945942 .hw_params = rsnd_ssi_hw_params,943943+ .prepare = rsnd_ssi_prepare,946944};947945948946int rsnd_ssi_is_dma_mode(struct rsnd_mod *mod)
+2-2
sound/soc/soc-core.c
···14471447 sink = codec_dai->playback_widget;14481448 source = cpu_dai->capture_widget;14491449 if (sink && source) {14501450- ret = snd_soc_dapm_new_pcm(card, dai_link->params,14501450+ ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,14511451 dai_link->num_params,14521452 source, sink);14531453 if (ret != 0) {···14601460 sink = cpu_dai->playback_widget;14611461 source = codec_dai->capture_widget;14621462 if (sink && source) {14631463- ret = snd_soc_dapm_new_pcm(card, dai_link->params,14631463+ ret = snd_soc_dapm_new_pcm(card, rtd, dai_link->params,14641464 dai_link->num_params,14651465 source, sink);14661466 if (ret != 0) {
···11+// SPDX-License-Identifier: LGPL-2.122+#undef _GNU_SOURCE33+#include <string.h>44+#include <stdio.h>55+#include "str_error.h"66+77+/*88+ * Wrapper to allow for building in non-GNU systems such as Alpine Linux's musl99+ * libc, while checking strerror_r() return to avoid having to check this in1010+ * all places calling it.1111+ */1212+char *str_error(int err, char *dst, int len)1313+{1414+ int ret = strerror_r(err, dst, len);1515+ if (ret)1616+ snprintf(dst, len, "ERROR: strerror_r(%d)=%d", err, ret);1717+ return dst;1818+}
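str_error() above hides the difference between the GNU and XSI strerror_r() variants behind one call that always fills the caller's buffer, which keeps the tools buildable against musl libc. A typical call site looks like the following sketch; it assumes the file above is compiled in, and declares the prototype inline instead of including str_error.h:

#include <errno.h>
#include <stdio.h>

/* Prototype of the wrapper added above. */
char *str_error(int err, char *dst, int len);

int main(void)
{
	char buf[128];

	if (fopen("/nonexistent", "r") == NULL)
		fprintf(stderr, "open failed: %s\n",
			str_error(errno, buf, sizeof(buf)));
	return 0;
}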
···580580 /* Test update without programs */581581 for (i = 0; i < 6; i++) {582582 err = bpf_map_update_elem(fd, &i, &sfd[i], BPF_ANY);583583- if (err) {583583+ if (i < 2 && !err) {584584+ printf("Allowed update sockmap '%i:%i' not in ESTABLISHED\n",585585+ i, sfd[i]);586586+ goto out_sockmap;587587+ } else if (i >= 2 && err) {584588 printf("Failed noprog update sockmap '%i:%i'\n",585589 i, sfd[i]);586590 goto out_sockmap;···745741 }746742747743 /* Test map update elem afterwards fd lives in fd and map_fd */748748- for (i = 0; i < 6; i++) {744744+ for (i = 2; i < 6; i++) {749745 err = bpf_map_update_elem(map_fd_rx, &i, &sfd[i], BPF_ANY);750746 if (err) {751747 printf("Failed map_fd_rx update sockmap %i '%i:%i'\n",···849845 }850846851847 /* Delete the elems without programs */852852- for (i = 0; i < 6; i++) {848848+ for (i = 2; i < 6; i++) {853849 err = bpf_map_delete_elem(fd, &i);854850 if (err) {855851 printf("Failed delete sockmap %i '%i:%i'\n",
···8989int cg_read_strcmp(const char *cgroup, const char *control,9090 const char *expected)9191{9292- size_t size = strlen(expected) + 1;9292+ size_t size;9393 char *buf;9494+ int ret;9595+9696+ /* Handle the case of comparing against empty string */9797+ if (!expected)9898+ size = 32;9999+ else100100+ size = strlen(expected) + 1;9410195102 buf = malloc(size);96103 if (!buf)97104 return -1;981059999- if (cg_read(cgroup, control, buf, size))106106+ if (cg_read(cgroup, control, buf, size)) {107107+ free(buf);100108 return -1;109109+ }101110102102- return strcmp(expected, buf);111111+ ret = strcmp(expected, buf);112112+ free(buf);113113+ return ret;103114}104115105116int cg_read_strstr(const char *cgroup, const char *control, const char *needle)···347336 cnt++;348337349338 return cnt > 1;339339+}340340+341341+int set_oom_adj_score(int pid, int score)342342+{343343+ char path[PATH_MAX];344344+ int fd, len;345345+346346+ sprintf(path, "/proc/%d/oom_score_adj", pid);347347+348348+ fd = open(path, O_WRONLY | O_APPEND);349349+ if (fd < 0)350350+ return fd;351351+352352+ len = dprintf(fd, "%d", score);353353+ if (len < 0) {354354+ close(fd);355355+ return len;356356+ }357357+358358+ close(fd);359359+ return 0;350360}
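set_oom_adj_score() above writes the requested score to /proc/<pid>/oom_score_adj so a test process can be made effectively immune to the OOM killer. A minimal usage sketch, assuming it is linked together with the helper above and run with the same privileges as the selftests (root, since lowering the score needs CAP_SYS_RESOURCE); OOM_SCORE_ADJ_MIN comes from linux/oom.h:

#include <linux/oom.h>		/* OOM_SCORE_ADJ_MIN */
#include <stdio.h>
#include <unistd.h>

/* Prototype of the helper added above (also exported via cgroup_util.h). */
int set_oom_adj_score(int pid, int score);

int main(void)
{
	/* Ask the OOM killer to leave this process alone. */
	if (set_oom_adj_score(getpid(), OOM_SCORE_ADJ_MIN))
		perror("set_oom_adj_score");
	else
		printf("pid %d now has oom_score_adj %d\n",
		       getpid(), OOM_SCORE_ADJ_MIN);
	return 0;
}

This is the same move the group-OOM score test added later in this series uses to shield its "safe" child before driving the rest of the cgroup into OOM.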
+1
tools/testing/selftests/cgroup/cgroup_util.h
···4040extern int alloc_pagecache(int fd, size_t size);4141extern int alloc_anon(const char *cgroup, void *arg);4242extern int is_swap_enabled(void);4343+extern int set_oom_adj_score(int pid, int score);
+205
tools/testing/selftests/cgroup/test_memcontrol.c
···22#define _GNU_SOURCE3344#include <linux/limits.h>55+#include <linux/oom.h>56#include <fcntl.h>67#include <stdio.h>78#include <stdlib.h>···201200 sleep(1);202201203202 return 0;203203+}204204+205205+static int alloc_anon_noexit(const char *cgroup, void *arg)206206+{207207+ int ppid = getppid();208208+209209+ if (alloc_anon(cgroup, arg))210210+ return -1;211211+212212+ while (getppid() == ppid)213213+ sleep(1);214214+215215+ return 0;216216+}217217+218218+/*219219+ * Wait until processes are killed asynchronously by the OOM killer220220+ * If we exceed a timeout, fail.221221+ */222222+static int cg_test_proc_killed(const char *cgroup)223223+{224224+ int limit;225225+226226+ for (limit = 10; limit > 0; limit--) {227227+ if (cg_read_strcmp(cgroup, "cgroup.procs", "") == 0)228228+ return 0;229229+230230+ usleep(100000);231231+ }232232+ return -1;204233}205234206235/*···995964 return ret;996965}997966967967+/*968968+ * This test disables swapping and tries to allocate anonymous memory969969+ * up to OOM with memory.group.oom set. Then it checks that all970970+ * processes in the leaf (but not the parent) were killed.971971+ */972972+static int test_memcg_oom_group_leaf_events(const char *root)973973+{974974+ int ret = KSFT_FAIL;975975+ char *parent, *child;976976+977977+ parent = cg_name(root, "memcg_test_0");978978+ child = cg_name(root, "memcg_test_0/memcg_test_1");979979+980980+ if (!parent || !child)981981+ goto cleanup;982982+983983+ if (cg_create(parent))984984+ goto cleanup;985985+986986+ if (cg_create(child))987987+ goto cleanup;988988+989989+ if (cg_write(parent, "cgroup.subtree_control", "+memory"))990990+ goto cleanup;991991+992992+ if (cg_write(child, "memory.max", "50M"))993993+ goto cleanup;994994+995995+ if (cg_write(child, "memory.swap.max", "0"))996996+ goto cleanup;997997+998998+ if (cg_write(child, "memory.oom.group", "1"))999999+ goto cleanup;10001000+10011001+ cg_run_nowait(parent, alloc_anon_noexit, (void *) MB(60));10021002+ cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1));10031003+ cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1));10041004+ if (!cg_run(child, alloc_anon, (void *)MB(100)))10051005+ goto cleanup;10061006+10071007+ if (cg_test_proc_killed(child))10081008+ goto cleanup;10091009+10101010+ if (cg_read_key_long(child, "memory.events", "oom_kill ") <= 0)10111011+ goto cleanup;10121012+10131013+ if (cg_read_key_long(parent, "memory.events", "oom_kill ") != 0)10141014+ goto cleanup;10151015+10161016+ ret = KSFT_PASS;10171017+10181018+cleanup:10191019+ if (child)10201020+ cg_destroy(child);10211021+ if (parent)10221022+ cg_destroy(parent);10231023+ free(child);10241024+ free(parent);10251025+10261026+ return ret;10271027+}10281028+10291029+/*10301030+ * This test disables swapping and tries to allocate anonymous memory10311031+ * up to OOM with memory.group.oom set. 
Then it checks that all10321032+ * processes in the parent and leaf were killed.10331033+ */10341034+static int test_memcg_oom_group_parent_events(const char *root)10351035+{10361036+ int ret = KSFT_FAIL;10371037+ char *parent, *child;10381038+10391039+ parent = cg_name(root, "memcg_test_0");10401040+ child = cg_name(root, "memcg_test_0/memcg_test_1");10411041+10421042+ if (!parent || !child)10431043+ goto cleanup;10441044+10451045+ if (cg_create(parent))10461046+ goto cleanup;10471047+10481048+ if (cg_create(child))10491049+ goto cleanup;10501050+10511051+ if (cg_write(parent, "memory.max", "80M"))10521052+ goto cleanup;10531053+10541054+ if (cg_write(parent, "memory.swap.max", "0"))10551055+ goto cleanup;10561056+10571057+ if (cg_write(parent, "memory.oom.group", "1"))10581058+ goto cleanup;10591059+10601060+ cg_run_nowait(parent, alloc_anon_noexit, (void *) MB(60));10611061+ cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1));10621062+ cg_run_nowait(child, alloc_anon_noexit, (void *) MB(1));10631063+10641064+ if (!cg_run(child, alloc_anon, (void *)MB(100)))10651065+ goto cleanup;10661066+10671067+ if (cg_test_proc_killed(child))10681068+ goto cleanup;10691069+ if (cg_test_proc_killed(parent))10701070+ goto cleanup;10711071+10721072+ ret = KSFT_PASS;10731073+10741074+cleanup:10751075+ if (child)10761076+ cg_destroy(child);10771077+ if (parent)10781078+ cg_destroy(parent);10791079+ free(child);10801080+ free(parent);10811081+10821082+ return ret;10831083+}10841084+10851085+/*10861086+ * This test disables swapping and tries to allocate anonymous memory10871087+ * up to OOM with memory.group.oom set. Then it checks that all10881088+ * processes were killed except those set with OOM_SCORE_ADJ_MIN10891089+ */10901090+static int test_memcg_oom_group_score_events(const char *root)10911091+{10921092+ int ret = KSFT_FAIL;10931093+ char *memcg;10941094+ int safe_pid;10951095+10961096+ memcg = cg_name(root, "memcg_test_0");10971097+10981098+ if (!memcg)10991099+ goto cleanup;11001100+11011101+ if (cg_create(memcg))11021102+ goto cleanup;11031103+11041104+ if (cg_write(memcg, "memory.max", "50M"))11051105+ goto cleanup;11061106+11071107+ if (cg_write(memcg, "memory.swap.max", "0"))11081108+ goto cleanup;11091109+11101110+ if (cg_write(memcg, "memory.oom.group", "1"))11111111+ goto cleanup;11121112+11131113+ safe_pid = cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(1));11141114+ if (set_oom_adj_score(safe_pid, OOM_SCORE_ADJ_MIN))11151115+ goto cleanup;11161116+11171117+ cg_run_nowait(memcg, alloc_anon_noexit, (void *) MB(1));11181118+ if (!cg_run(memcg, alloc_anon, (void *)MB(100)))11191119+ goto cleanup;11201120+11211121+ if (cg_read_key_long(memcg, "memory.events", "oom_kill ") != 3)11221122+ goto cleanup;11231123+11241124+ if (kill(safe_pid, SIGKILL))11251125+ goto cleanup;11261126+11271127+ ret = KSFT_PASS;11281128+11291129+cleanup:11301130+ if (memcg)11311131+ cg_destroy(memcg);11321132+ free(memcg);11331133+11341134+ return ret;11351135+}11361136+11371137+9981138#define T(x) { x, #x }9991139struct memcg_test {10001140 int (*fn)(const char *root);···1180978 T(test_memcg_oom_events),1181979 T(test_memcg_swap_max),1182980 T(test_memcg_sock),981981+ T(test_memcg_oom_group_leaf_events),982982+ T(test_memcg_oom_group_parent_events),983983+ T(test_memcg_oom_group_score_events),1183984};1184985#undef T1185986
···178178179179cleanup() {180180 [ ${cleanup_done} -eq 1 ] && return181181- ip netns del ${NS_A} 2 > /dev/null182182- ip netns del ${NS_B} 2 > /dev/null181181+ ip netns del ${NS_A} 2> /dev/null182182+ ip netns del ${NS_B} 2> /dev/null183183 cleanup_done=1184184}185185
+49
tools/testing/selftests/net/tls.c
···502502 EXPECT_EQ(memcmp(test_str, buf, send_len), 0);503503}504504505505+TEST_F(tls, recv_peek_multiple_records)506506+{507507+ char const *test_str = "test_read_peek_mult_recs";508508+ char const *test_str_first = "test_read_peek";509509+ char const *test_str_second = "_mult_recs";510510+ int len;511511+ char buf[64];512512+513513+ len = strlen(test_str_first);514514+ EXPECT_EQ(send(self->fd, test_str_first, len, 0), len);515515+516516+ len = strlen(test_str_second) + 1;517517+ EXPECT_EQ(send(self->fd, test_str_second, len, 0), len);518518+519519+ len = sizeof(buf);520520+ memset(buf, 0, len);521521+ EXPECT_NE(recv(self->cfd, buf, len, MSG_PEEK), -1);522522+523523+ /* MSG_PEEK can only peek into the current record. */524524+ len = strlen(test_str_first) + 1;525525+ EXPECT_EQ(memcmp(test_str_first, buf, len), 0);526526+527527+ len = sizeof(buf);528528+ memset(buf, 0, len);529529+ EXPECT_NE(recv(self->cfd, buf, len, 0), -1);530530+531531+ /* Non-MSG_PEEK will advance strparser (and therefore record)532532+ * however.533533+ */534534+ len = strlen(test_str) + 1;535535+ EXPECT_EQ(memcmp(test_str, buf, len), 0);536536+537537+ /* MSG_MORE will hold current record open, so later MSG_PEEK538538+ * will see everything.539539+ */540540+ len = strlen(test_str_first);541541+ EXPECT_EQ(send(self->fd, test_str_first, len, MSG_MORE), len);542542+543543+ len = strlen(test_str_second) + 1;544544+ EXPECT_EQ(send(self->fd, test_str_second, len, 0), len);545545+546546+ len = sizeof(buf);547547+ memset(buf, 0, len);548548+ EXPECT_NE(recv(self->cfd, buf, len, MSG_PEEK), -1);549549+550550+ len = strlen(test_str) + 1;551551+ EXPECT_EQ(memcmp(test_str, buf, len), 0);552552+}553553+505554TEST_F(tls, pollin)506555{507556 char const *test_str = "test_poll";
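The new recv_peek_multiple_records test pins down that, on a kTLS socket, MSG_PEEK can only look into the record the strparser currently holds, while a normal recv() consumes it and moves on to the next record. The generic half of that behaviour (peeking does not consume data) can be seen on any stream socket; a small stand-alone sketch with no TLS involved:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
	int sv[2];
	char buf[32];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;

	send(sv[0], "hello", 5, 0);

	memset(buf, 0, sizeof(buf));
	recv(sv[1], buf, sizeof(buf), MSG_PEEK);
	printf("peek: %s\n", buf);		/* "hello", not consumed */

	memset(buf, 0, sizeof(buf));
	recv(sv[1], buf, sizeof(buf), 0);
	printf("read: %s\n", buf);		/* "hello" again, now consumed */
	return 0;
}

The record-boundary restriction itself needs a kernel-TLS socket, which is exactly what the selftest above sets up and exercises.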