@@ -4846,3 +4846,8 @@
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
+
+	xhci-hcd.quirks	[USB,KNL]
+			A hex value specifying bitmask with supplemental xhci
+			host controller quirks. Meaning of each bit can be
+			consulted in header drivers/usb/host/xhci.h.
+7-10
Documentation/kbuild/kbuild.txt
@@ -50,6 +50,11 @@
 --------------------------------------------------
 Additional options used for $(LD) when linking modules.
 
+KBUILD_KCONFIG
+--------------------------------------------------
+Set the top-level Kconfig file to the value of this environment
+variable. The default name is "Kconfig".
+
 KBUILD_VERBOSE
 --------------------------------------------------
 Set the kbuild verbosity. Can be assigned same values as "V=...".
@@ -93,7 +88,8 @@
 directory name found in the arch/ directory.
 But some architectures such as x86 and sparc have aliases.
 x86: i386 for 32 bit, x86_64 for 64 bit
-sparc: sparc for 32 bit, sparc64 for 64 bit
+sh: sh for 32 bit, sh64 for 64 bit
+sparc: sparc32 for 32 bit, sparc64 for 64 bit
 
 CROSS_COMPILE
 --------------------------------------------------
@@ -153,15 +147,6 @@
 stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
 the default option --strip-debug will be used. Otherwise,
 INSTALL_MOD_STRIP value will be used as the options to the strip command.
-
-INSTALL_FW_PATH
---------------------------------------------------
-INSTALL_FW_PATH specifies where to install the firmware blobs.
-The default value is:
-
-    $(INSTALL_MOD_PATH)/lib/firmware
-
-The value can be overridden in which case the default value is ignored.
 
 INSTALL_HDR_PATH
 --------------------------------------------------
+43-8
Documentation/kbuild/kconfig.txt
@@ -2,9 +2,9 @@
 
 Use "make help" to list all of the possible configuration targets.
 
-The xconfig ('qconf') and menuconfig ('mconf') programs also
-have embedded help text. Be sure to check it for navigation,
-search, and other general help text.
+The xconfig ('qconf'), menuconfig ('mconf'), and nconfig ('nconf')
+programs also have embedded help text. Be sure to check that for
+navigation, search, and other general help text.
 
 ======================================================================
 General
@@ -17,13 +17,16 @@
 for you, so you may find that you need to see what NEW kernel
 symbols have been introduced.
 
-To see a list of new config symbols when using "make oldconfig", use
+To see a list of new config symbols, use
 
 	cp user/some/old.config .config
 	make listnewconfig
 
 and the config program will list any new symbols, one per line.
 
+Alternatively, you can use the brute force method:
+
+	make oldconfig
 	scripts/diffconfig .config.old .config | less
 
 ______________________________________________________________________
@@ -163,7 +160,7 @@
 	This lists all config symbols that contain "hotplug",
 	e.g., HOTPLUG_CPU, MEMORY_HOTPLUG.
 
-	For search help, enter / followed TAB-TAB-TAB (to highlight
+	For search help, enter / followed by TAB-TAB (to highlight
 	<Help>) and Enter. This will tell you that you can also use
 	regular expressions (regexes) in the search string, so if you
 	are not interested in MEMORY_HOTPLUG, you could try
@@ -206,6 +203,39 @@
 
 
 ======================================================================
+nconfig
+--------------------------------------------------
+
+nconfig is an alternate text-based configurator.  It lists function
+keys across the bottom of the terminal (window) that execute commands.
+You can also just use the corresponding numeric key to execute the
+commands unless you are in a data entry window.  E.g., instead of F6
+for Save, you can just press 6.
+
+Use F1 for Global help or F3 for the Short help menu.
+
+Searching in nconfig:
+
+	You can search either in the menu entry "prompt" strings
+	or in the configuration symbols.
+
+	Use / to begin a search through the menu entries.  This does
+	not support regular expressions.  Use <Down> or <Up> for
+	Next hit and Previous hit, respectively.  Use <Esc> to
+	terminate the search mode.
+
+	F8 (SymSearch) searches the configuration symbols for the
+	given string or regular expression (regex).
+
+NCONFIG_MODE
+--------------------------------------------------
+This mode shows all sub-menus in one large tree.
+
+Example:
+	make NCONFIG_MODE=single_menu nconfig
+
+
+======================================================================
 xconfig
 --------------------------------------------------
 
@@ -266,8 +230,7 @@
 
 Searching in gconfig:
 
-	None (gconfig isn't maintained as well as xconfig or menuconfig);
-	however, gconfig does have a few more viewing choices than
-	xconfig does.
+	There is no search command in gconfig.  However, gconfig does
+	have several different viewing choices, modes, and options.
 
 ###
+7-4
MAINTAINERS
@@ -581,7 +581,7 @@
 
 AGPGART DRIVER
 M:	David Airlie <airlied@linux.ie>
-T:	git git://people.freedesktop.org/~airlied/linux (part of drm maint)
+T:	git git://anongit.freedesktop.org/drm/drm
 S:	Maintained
 F:	drivers/char/agp/
 F:	include/linux/agp*
@@ -4468,6 +4468,7 @@
 
 DRIVER CORE, KOBJECTS, DEBUGFS AND SYSFS
 M:	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+R:	"Rafael J. Wysocki" <rafael@kernel.org>
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
 S:	Supported
 F:	Documentation/kobject.txt
@@ -4639,7 +4638,7 @@
 DRM DRIVERS
 M:	David Airlie <airlied@linux.ie>
 L:	dri-devel@lists.freedesktop.org
-T:	git git://people.freedesktop.org/~airlied/linux
+T:	git git://anongit.freedesktop.org/drm/drm
 B:	https://bugs.freedesktop.org/
 C:	irc://chat.freenode.net/dri-devel
 S:	Maintained
@@ -10233,11 +10232,13 @@
 
 NXP TDA998X DRM DRIVER
 M:	Russell King <linux@armlinux.org.uk>
-S:	Supported
+S:	Maintained
 T:	git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-devel
 T:	git git://git.armlinux.org.uk/~rmk/linux-arm.git drm-tda998x-fixes
 F:	drivers/gpu/drm/i2c/tda998x_drv.c
 F:	include/drm/i2c/tda998x.h
+F:	include/dt-bindings/display/tda998x.h
+K:	"nxp,tda998x"
 
 NXP TFA9879 DRIVER
 M:	Peter Rosin <peda@axentia.se>
@@ -11857,7 +11854,7 @@
 F:	arch/hexagon/
 
 QUALCOMM HIDMA DRIVER
-M:	Sinan Kaya <okaya@codeaurora.org>
+M:	Sinan Kaya <okaya@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org
 L:	linux-arm-msm@vger.kernel.org
 L:	dmaengine@vger.kernel.org
+5-10
Makefile
@@ -2,7 +2,7 @@
 VERSION = 4
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc5
 NAME = Merciless Moray
 
 # *DOCUMENTATION*
@@ -353,9 +353,9 @@
 	  else if [ -x /bin/bash ]; then echo /bin/bash; \
 	  else echo sh; fi ; fi)
 
-HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS)
-HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS)
-HOST_LFS_LIBS := $(shell getconf LFS_LIBS)
+HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
 
 HOSTCC       = gcc
 HOSTCXX      = g++
@@ -505,11 +505,6 @@
   CC_HAVE_ASM_GOTO := 1
   KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
   KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO
-endif
-
-ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/cc-can-link.sh $(CC)), y)
-  CC_CAN_LINK := y
-  export CC_CAN_LINK
 endif
 
 # The expansion should be delayed until arch/$(SRCARCH)/Makefile is included.
@@ -1712,6 +1717,6 @@
 PHONY += FORCE
 FORCE:
 
-# Declare the contents of the .PHONY variable as phony. We keep that
+# Declare the contents of the PHONY variable as phony. We keep that
 # information in a variable so we can use it in if_changed and friends.
 .PHONY: $(PHONY)
@@ -272,9 +272,11 @@
 	 * Allocate stack space to store 128 bytes worth of tweaks.  For
 	 * performance, this space is aligned to a 16-byte boundary so that we
 	 * can use the load/store instructions that declare 16-byte alignment.
+	 * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'.
 	 */
-	sub	sp, #128
-	bic	sp, #0xf
+	sub	r12, sp, #128
+	bic	r12, #0xf
+	mov	sp, r12
 
 .if \n == 64
 	// Load first tweak
+3
arch/arm/firmware/Makefile
@@ -1 +1,4 @@
 obj-$(CONFIG_TRUSTED_FOUNDATIONS)	+= trusted_foundations.o
+
+# tf_generic_smc() fails to build with -fsanitize-coverage=trace-pc
+KCOV_INSTRUMENT		:= n
@@ -109,6 +109,45 @@
 static inline void omap5_erratum_workaround_801819(void) { }
 #endif
 
+#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
+/*
+ * Configure ACR and enable ACTLR[0] (Enable invalidates of BTB with
+ * ICIALLU) to activate the workaround for secondary Core.
+ * NOTE: it is assumed that the primary core's configuration is done
+ * by the boot loader (kernel will detect a misconfiguration and complain
+ * if this is not done).
+ *
+ * In General Purpose(GP) devices, ACR bit settings can only be done
+ * by ROM code in "secure world" using the smc call and there is no
+ * option to update the "firmware" on such devices. This also works for
+ * High security(HS) devices, as a backup option in case the
+ * "update" is not done in the "security firmware".
+ */
+static void omap5_secondary_harden_predictor(void)
+{
+	u32 acr, acr_mask;
+
+	asm volatile ("mrc p15, 0, %0, c1, c0, 1" : "=r" (acr));
+
+	/*
+	 * ACTLR[0] (Enable invalidates of BTB with ICIALLU)
+	 */
+	acr_mask = BIT(0);
+
+	/* Do we already have it done.. if yes, skip expensive smc */
+	if ((acr & acr_mask) == acr_mask)
+		return;
+
+	acr |= acr_mask;
+	omap_smc1(OMAP5_DRA7_MON_SET_ACR_INDEX, acr);
+
+	pr_debug("%s: ARM ACR setup for CVE_2017_5715 applied on CPU%d\n",
+		 __func__, smp_processor_id());
+}
+#else
+static inline void omap5_secondary_harden_predictor(void) { }
+#endif
+
 static void omap4_secondary_init(unsigned int cpu)
 {
 	/*
@@ -170,6 +131,8 @@
 		set_cntfreq();
 		/* Configure ACR to disable streaming WA for 801819 */
 		omap5_erratum_workaround_801819();
+		/* Enable ACR to allow for ICUALLU workaround */
+		omap5_secondary_harden_predictor();
 	}
 
 	/*
+2-2
arch/arm/mach-pxa/irq.c
@@ -185,7 +185,7 @@
 {
 	int i;
 
-	for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
+	for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
 		void __iomem *base = irq_base(i);
 
 		saved_icmr[i] = __raw_readl(base + ICMR);
@@ -204,7 +204,7 @@
 {
 	int i;
 
-	for (i = 0; i < pxa_internal_irq_nr / 32; i++) {
+	for (i = 0; i < DIV_ROUND_UP(pxa_internal_irq_nr, 32); i++) {
 		void __iomem *base = irq_base(i);
 
 		__raw_writel(saved_icmr[i], base + ICMR);
+9
arch/arm/mm/init.c
@@ -736,20 +736,29 @@
 	return 0;
 }
 
+static int kernel_set_to_readonly __read_mostly;
+
 void mark_rodata_ro(void)
 {
+	kernel_set_to_readonly = 1;
 	stop_machine(__mark_rodata_ro, NULL, NULL);
 	debug_checkwx();
 }
 
 void set_kernel_text_rw(void)
 {
+	if (!kernel_set_to_readonly)
+		return;
+
 	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), false,
 				current->active_mm);
 }
 
 void set_kernel_text_ro(void)
 {
+	if (!kernel_set_to_readonly)
+		return;
+
 	set_section_perms(ro_perms, ARRAY_SIZE(ro_perms), true,
 				current->active_mm);
 }
+1-1
arch/arm/net/bpf_jit_32.c
@@ -1844,7 +1844,7 @@
 		/* there are 2 passes here */
 		bpf_jit_dump(prog->len, image_size, 2, ctx.target);
 
-	set_memory_ro((unsigned long)header, header->pages);
+	bpf_jit_binary_lock_ro(header);
 	prog->bpf_func = (void *)ctx.target;
 	prog->jited = 1;
 	prog->jited_len = image_size;
+5-5
arch/arm64/Makefile
@@ -10,7 +10,7 @@
 #
 # Copyright (C) 1995-2001 by Russell King
 
-LDFLAGS_vmlinux	:=-p --no-undefined -X
+LDFLAGS_vmlinux	:=--no-undefined -X
 CPPFLAGS_vmlinux.lds = -DTEXT_OFFSET=$(TEXT_OFFSET)
 GZFLAGS		:=-9
 
@@ -60,15 +60,15 @@
 KBUILD_CPPFLAGS	+= -mbig-endian
 CHECKFLAGS	+= -D__AARCH64EB__
 AS		+= -EB
-LD		+= -EB
-LDFLAGS		+= -maarch64linuxb
+# We must use the linux target here, since distributions don't tend to package
+# the ELF linker scripts with binutils, and this results in a build failure.
+LDFLAGS		+= -EB -maarch64linuxb
 UTS_MACHINE	:= aarch64_be
 else
 KBUILD_CPPFLAGS	+= -mlittle-endian
 CHECKFLAGS	+= -D__AARCH64EL__
 AS		+= -EL
-LD		+= -EL
-LDFLAGS		+= -maarch64linux
+LDFLAGS		+= -EL -maarch64linux # See comment above
 UTS_MACHINE	:= aarch64
 endif
 
+7-12
arch/arm64/include/asm/simd.h
@@ -29,20 +29,15 @@
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * kernel_neon_busy is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled. Since
+	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
+	 * cannot change under our feet -- if it's set we cannot be
+	 * migrated, and if it's clear we cannot be migrated to a CPU
+	 * where it is set.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-		!raw_cpu_read(kernel_neon_busy);
+		!this_cpu_read(kernel_neon_busy);
 }
 
 #else /* ! CONFIG_KERNEL_MODE_NEON */
@@ -9,6 +9,7 @@
 #include <linux/export.h>
 #include <asm/addrspace.h>
 #include <asm/byteorder.h>
+#include <linux/ioport.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -99,6 +98,20 @@
 	return error;
 }
 
+static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
+			       void *arg)
+{
+	unsigned long i;
+
+	for (i = 0; i < nr_pages; i++) {
+		if (pfn_valid(start_pfn + i) &&
+		    !PageReserved(pfn_to_page(start_pfn + i)))
+			return 1;
+	}
+
+	return 0;
+}
+
 /*
  * Generic mapping function (not visible outside):
  */
@@ -131,8 +116,8 @@
 
 void __iomem * __ioremap(phys_addr_t phys_addr, phys_addr_t size, unsigned long flags)
 {
+	unsigned long offset, pfn, last_pfn;
 	struct vm_struct * area;
-	unsigned long offset;
	phys_addr_t last_addr;
 	void * addr;
 
@@ -152,18 +137,16 @@
 		return (void __iomem *) CKSEG1ADDR(phys_addr);
 
 	/*
-	 * Don't allow anybody to remap normal RAM that we're using..
+	 * Don't allow anybody to remap RAM that may be allocated by the page
+	 * allocator, since that could lead to races & data clobbering.
 	 */
-	if (phys_addr < virt_to_phys(high_memory)) {
-		char *t_addr, *t_end;
-		struct page *page;
-
-		t_addr = __va(phys_addr);
-		t_end = t_addr + (size - 1);
-
-		for(page = virt_to_page(t_addr); page <= virt_to_page(t_end); page++)
-			if(!PageReserved(page))
-				return NULL;
+	pfn = PFN_DOWN(phys_addr);
+	last_pfn = PFN_DOWN(last_addr);
+	if (walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
+				  __ioremap_check_ram) == 1) {
+		WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n",
+			  &phys_addr, &last_addr);
+		return NULL;
 	}
 
 	/*
+5-1
arch/openrisc/include/asm/pgalloc.h
@@ -98,8 +98,12 @@
 	__free_page(pte);
 }
 
+#define __pte_free_tlb(tlb, pte, addr)	\
+do {					\
+	pgtable_page_dtor(pte);		\
+	tlb_remove_page((tlb), (pte));	\
+} while (0)
 
-#define __pte_free_tlb(tlb, pte, addr) tlb_remove_page((tlb), (pte))
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
 #define check_pgt_cache()          do { } while (0)
+1-7
arch/openrisc/kernel/entry.S
@@ -277,12 +277,6 @@
 	l.addi	r3,r1,0                    // pt_regs
 	/* r4 set be EXCEPTION_HANDLE */   // effective address of fault
 
-	/*
-	 * __PHX__: TODO
-	 *
-	 * all this can be written much simpler. look at
-	 * DTLB miss handler in the CONFIG_GUARD_PROTECTED_CORE part
-	 */
 #ifdef CONFIG_OPENRISC_NO_SPR_SR_DSX
 	l.lwz	r6,PT_PC(r3)               // address of an offending insn
 	l.lwz	r6,0(r6)                   // instruction that caused pf
@@ -308,7 +314,7 @@
 
 #else
 
-	l.lwz	r6,PT_SR(r3)               // SR
+	l.mfspr	r6,r0,SPR_SR               // SR
 	l.andi	r6,r6,SPR_SR_DSX           // check for delay slot exception
 	l.sfne	r6,r0                      // exception happened in delay slot
 	l.bnf	7f
+6-3
arch/openrisc/kernel/head.S
@@ -210,8 +210,7 @@
  *	 r4  - EEAR     exception EA
  *	 r10 - current	pointing to current_thread_info struct
  *	 r12 - syscall  0, since we didn't come from syscall
- *	 r13 - temp	it actually contains new SR, not needed anymore
- *	 r31 - handler	address of the handler we'll jump to
+ *	 r30 - handler	address of the handler we'll jump to
  *
  *	 handler has to save remaining registers to the exception
  *	 ksp frame *before* tainting them!
@@ -243,6 +244,7 @@
 	/* r1 is KSP, r30 is __pa(KSP) */			;\
 	tophys  (r30,r1)					;\
 	l.sw    PT_GPR12(r30),r12				;\
+	/* r4 use for tmp before EA */				;\
 	l.mfspr r12,r0,SPR_EPCR_BASE				;\
 	l.sw    PT_PC(r30),r12					;\
 	l.mfspr r12,r0,SPR_ESR_BASE				;\
@@ -263,7 +263,10 @@
 	/* r12 == 1 if we come from syscall */			;\
 	CLEAR_GPR(r12)						;\
 	/* ----- turn on MMU ----- */				;\
-	l.ori	r30,r0,(EXCEPTION_SR)				;\
+	/* Carry DSX into exception SR */			;\
+	l.mfspr	r30,r0,SPR_SR					;\
+	l.andi	r30,r30,SPR_SR_DSX				;\
+	l.ori	r30,r30,(EXCEPTION_SR)				;\
 	l.mtspr	r0,r30,SPR_ESR_BASE				;\
 	/* r30:	EA address of handler */			;\
 	LOAD_SYMBOL_2_GPR(r30,handler)				;\
@@ -498,7 +498,7 @@
 	}
 	/* No longer in a system call */
 	clear_pt_regs_flag(regs, PIF_SYSCALL);
-
+	rseq_signal_deliver(&ksig, regs);
 	if (is_compat_task())
 		handle_signal32(&ksig, oldset, regs);
 	else
@@ -537,4 +537,5 @@
 {
 	clear_thread_flag(TIF_NOTIFY_RESUME);
 	tracehook_notify_resume(regs);
+	rseq_handle_notify_resume(NULL, regs);
 }
+2
arch/s390/kernel/syscalls/syscall.tbl
@@ -389,3 +389,5 @@
 379  common	statx			sys_statx			compat_sys_statx
 380  common	s390_sthyi		sys_s390_sthyi			compat_sys_s390_sthyi
 381  common	kexec_file_load		sys_kexec_file_load		compat_sys_kexec_file_load
+382  common	io_pgetevents		sys_io_pgetevents		compat_sys_io_pgetevents
+383  common	rseq			sys_rseq			compat_sys_rseq
+4
arch/s390/mm/pgalloc.c
@@ -252,6 +252,8 @@
 		spin_unlock_bh(&mm->context.lock);
 		if (mask != 0)
 			return;
+	} else {
+		atomic_xor_bits(&page->_refcount, 3U << 24);
 	}
 
 	pgtable_page_dtor(page);
@@ -306,6 +304,8 @@
 			break;
 		/* fallthrough */
 	case 3:		/* 4K page table with pgstes */
+		if (mask & 3)
+			atomic_xor_bits(&page->_refcount, 3 << 24);
 		pgtable_page_dtor(page);
 		__free_page(page);
 		break;
@@ -114,18 +114,12 @@
 	struct pci_setup_rom *rom = NULL;
 	efi_status_t status;
 	unsigned long size;
-	uint64_t attributes, romsize;
+	uint64_t romsize;
 	void *romimage;
 
-	status = efi_call_proto(efi_pci_io_protocol, attributes, pci,
-				EfiPciIoAttributeOperationGet, 0ULL,
-				&attributes);
-	if (status != EFI_SUCCESS)
-		return status;
-
 	/*
-	 * Some firmware images contain EFI function pointers at the place where the
-	 * romimage and romsize fields are supposed to be. Typically the EFI
+	 * Some firmware images contain EFI function pointers at the place where
+	 * the romimage and romsize fields are supposed to be. Typically the EFI
 	 * code is mapped at high addresses, translating to an unrealistically
 	 * large romsize. The UEFI spec limits the size of option ROMs to 16
 	 * MiB so we reject any ROMs over 16 MiB in size to catch this.
+1
arch/x86/crypto/aegis128-aesni-asm.S
@@ -535,6 +535,7 @@
 	movdqu STATE3, 0x40(STATEP)
 
 	FRAME_END
+	ret
 ENDPROC(crypto_aegis128_aesni_enc_tail)
 
 .macro decrypt_block a s0 s1 s2 s3 s4 i
@@ -114,6 +114,8 @@
 		ipi_arg->vp_set.format = HV_GENERIC_SET_SPARSE_4K;
 		nr_bank = cpumask_to_vpset(&(ipi_arg->vp_set), mask);
 	}
+	if (nr_bank < 0)
+		goto ipi_mask_ex_done;
 	if (!nr_bank)
 		ipi_arg->vp_set.format = HV_GENERIC_SET_ALL;
 
@@ -160,6 +158,9 @@
 
 	for_each_cpu(cur_cpu, mask) {
 		vcpu = hv_cpu_number_to_vp_number(cur_cpu);
+		if (vcpu == VP_INVAL)
+			goto ipi_mask_done;
+
 		/*
 		 * This particular version of the IPI hypercall can
 		 * only target upto 64 CPUs.
+4-1
arch/x86/hyperv/hv_init.c
@@ -265,7 +265,7 @@
 {
 	u64 guest_id, required_msrs;
 	union hv_x64_msr_hypercall_contents hypercall_msr;
-	int cpuhp;
+	int cpuhp, i;
 
 	if (x86_hyper_type != X86_HYPER_MS_HYPERV)
 		return;
@@ -292,6 +292,9 @@
 				GFP_KERNEL);
 	if (!hv_vp_index)
 		return;
+
+	for (i = 0; i < num_possible_cpus(); i++)
+		hv_vp_index[i] = VP_INVAL;
 
 	hv_vp_assist_page = kcalloc(num_possible_cpus(),
 				    sizeof(*hv_vp_assist_page), GFP_KERNEL);
@@ -155,7 +155,8 @@
 		guestval |= guest_spec_ctrl & x86_spec_ctrl_mask;
 
 		/* SSBD controlled in MSR_SPEC_CTRL */
-		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD))
+		if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+		    static_cpu_has(X86_FEATURE_AMD_SSBD))
 			hostval |= ssbd_tif_to_spec_ctrl(ti->flags);
 
 	if (hostval != guestval) {
@@ -534,9 +533,10 @@
 	 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD may
	 * use a completely different MSR and bit dependent on family.
 	 */
-	if (!static_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+	if (!static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) &&
+	    !static_cpu_has(X86_FEATURE_AMD_SSBD)) {
 		x86_amd_ssb_disable();
-	else {
+	} else {
 		x86_spec_ctrl_base |= SPEC_CTRL_SSBD;
 		x86_spec_ctrl_mask |= SPEC_CTRL_SSBD;
 		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
@@ -1207,12 +1207,20 @@
 
 	xen_setup_features();
 
-	xen_setup_machphys_mapping();
-
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops.patch = paravirt_patch_default;
 	pv_cpu_ops = xen_cpu_ops;
+	xen_init_irq_ops();
+
+	/*
+	 * Setup xen_vcpu early because it is needed for
+	 * local_irq_disable(), irqs_disabled(), e.g. in printk().
+	 *
+	 * Don't do the full vcpu_info placement stuff until we have
+	 * the cpu_possible_mask and a non-dummy shared_info.
+	 */
+	xen_vcpu_info_reset(0);
 
 	x86_platform.get_nmi_reason = xen_get_nmi_reason;
 
@@ -1233,10 +1225,12 @@
 	 * Set up some pagetable state before starting to set any ptes.
 	 */
 
+	xen_setup_machphys_mapping();
 	xen_init_mmu_ops();
 
 	/* Prevent unwanted bits from being set in PTEs. */
 	__supported_pte_mask &= ~_PAGE_GLOBAL;
+	__default_kernel_pte_mask &= ~_PAGE_GLOBAL;
 
 	/*
 	 * Prevent page tables from being allocated in highmem, even
@@ -1259,19 +1249,8 @@
 	get_cpu_cap(&boot_cpu_data);
 	x86_configure_nx();
 
-	xen_init_irq_ops();
-
 	/* Let's presume PV guests always boot on vCPU with id 0. */
 	per_cpu(xen_vcpu_id, 0) = 0;
-
-	/*
-	 * Setup xen_vcpu early because idt_setup_early_handler needs it for
-	 * local_irq_disable(), irqs_disabled().
-	 *
-	 * Don't do the full vcpu_info placement stuff until we have
-	 * the cpu_possible_mask and a non-dummy shared_info.
-	 */
-	xen_vcpu_info_reset(0);
 
 	idt_setup_early_handler();
 
+1-3
arch/x86/xen/irq.c
@@ -128,8 +128,6 @@
 
 void __init xen_init_irq_ops(void)
 {
-	/* For PVH we use default pv_irq_ops settings. */
-	if (!xen_feature(XENFEAT_hvm_callback_vector))
-		pv_irq_ops = xen_irq_ops;
+	pv_irq_ops = xen_irq_ops;
 	x86_init.irqs.intr_init = xen_init_IRQ;
 }
-2
block/bsg.c
@@ -267,8 +267,6 @@
 	} else if (hdr->din_xfer_len) {
 		ret = blk_rq_map_user(q, rq, NULL, uptr64(hdr->din_xferp),
 				hdr->din_xfer_len, GFP_KERNEL);
-	} else {
-		ret = blk_rq_map_user(q, rq, NULL, NULL, 0, GFP_KERNEL);
 	}
 
 	if (ret)
+11-4
drivers/acpi/acpica/hwsleep.c
@@ -51,16 +51,23 @@
 		return_ACPI_STATUS(status);
 	}
 
-	/*
-	 * 1) Disable all GPEs
-	 * 2) Enable all wakeup GPEs
-	 */
+	/* Disable all GPEs */
 	status = acpi_hw_disable_all_gpes();
 	if (ACPI_FAILURE(status)) {
 		return_ACPI_STATUS(status);
 	}
+	/*
+	 * If the target sleep state is S5, clear all GPEs and fixed events too
+	 */
+	if (sleep_state == ACPI_STATE_S5) {
+		status = acpi_hw_clear_acpi_status();
+		if (ACPI_FAILURE(status)) {
+			return_ACPI_STATUS(status);
+		}
+	}
 	acpi_gbl_system_awake_and_running = FALSE;
 
+	/* Enable all wakeup GPEs */
 	status = acpi_hw_enable_all_wakeup_gpes();
 	if (ACPI_FAILURE(status)) {
 		return_ACPI_STATUS(status);
@@ -717,10 +717,11 @@
 			 */
 			pr_err("extension failed to load: %s", hook->name);
 			__battery_hook_unregister(hook, 0);
-			return;
+			goto end;
 		}
 	}
 	pr_info("new extension: %s\n", hook->name);
+end:
 	mutex_unlock(&hook_mutex);
 }
 EXPORT_SYMBOL_GPL(battery_hook_register);
@@ -733,7 +732,7 @@
 */
 static void battery_hook_add_battery(struct acpi_battery *battery)
 {
-	struct acpi_battery_hook *hook_node;
+	struct acpi_battery_hook *hook_node, *tmp;
 
 	mutex_lock(&hook_mutex);
 	INIT_LIST_HEAD(&battery->list);
@@ -745,15 +744,15 @@
 	 * when a battery gets hotplugged or initialized
 	 * during the battery module initialization.
 	 */
-	list_for_each_entry(hook_node, &battery_hook_list, list) {
+	list_for_each_entry_safe(hook_node, tmp, &battery_hook_list, list) {
 		if (hook_node->add_battery(battery->bat)) {
 			/*
 			 * The notification of the extensions has failed, to
 			 * prevent further errors we will unload the extension.
 			 */
-			__battery_hook_unregister(hook_node, 0);
 			pr_err("error in extension, unloading: %s",
 					hook_node->name);
+			__battery_hook_unregister(hook_node, 0);
 		}
 	}
 	mutex_unlock(&hook_mutex);
+37-11
drivers/acpi/nfit/core.c
@@ -408,6 +408,8 @@
 	const guid_t *guid;
 	int rc, i;
 
+	if (cmd_rc)
+		*cmd_rc = -EINVAL;
 	func = cmd;
 	if (cmd == ND_CMD_CALL) {
 		call_pkg = buf;
@@ -520,6 +518,8 @@
 		 * If we return an error (like elsewhere) then caller wouldn't
 		 * be able to rely upon data returned to make calculation.
 		 */
+		if (cmd_rc)
+			*cmd_rc = 0;
 		return 0;
 	}
 
@@ -1277,7 +1273,7 @@
 
 		mutex_lock(&acpi_desc->init_mutex);
 		rc = sprintf(buf, "%d%s", acpi_desc->scrub_count,
-				work_busy(&acpi_desc->dwork.work)
+				acpi_desc->scrub_busy
 				&& !acpi_desc->cancel ? "+\n" : "\n");
 		mutex_unlock(&acpi_desc->init_mutex);
 	}
@@ -2943,6 +2939,32 @@
 	return 0;
 }
 
+static void __sched_ars(struct acpi_nfit_desc *acpi_desc, unsigned int tmo)
+{
+	lockdep_assert_held(&acpi_desc->init_mutex);
+
+	acpi_desc->scrub_busy = 1;
+	/* note this should only be set from within the workqueue */
+	if (tmo)
+		acpi_desc->scrub_tmo = tmo;
+	queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ);
+}
+
+static void sched_ars(struct acpi_nfit_desc *acpi_desc)
+{
+	__sched_ars(acpi_desc, 0);
+}
+
+static void notify_ars_done(struct acpi_nfit_desc *acpi_desc)
+{
+	lockdep_assert_held(&acpi_desc->init_mutex);
+
+	acpi_desc->scrub_busy = 0;
+	acpi_desc->scrub_count++;
+	if (acpi_desc->scrub_count_state)
+		sysfs_notify_dirent(acpi_desc->scrub_count_state);
+}
+
 static void acpi_nfit_scrub(struct work_struct *work)
 {
 	struct acpi_nfit_desc *acpi_desc;
@@ -2979,14 +2949,10 @@
 	mutex_lock(&acpi_desc->init_mutex);
 	query_rc = acpi_nfit_query_poison(acpi_desc);
 	tmo = __acpi_nfit_scrub(acpi_desc, query_rc);
-	if (tmo) {
-		queue_delayed_work(nfit_wq, &acpi_desc->dwork, tmo * HZ);
-		acpi_desc->scrub_tmo = tmo;
-	} else {
-		acpi_desc->scrub_count++;
-		if (acpi_desc->scrub_count_state)
-			sysfs_notify_dirent(acpi_desc->scrub_count_state);
-	}
+	if (tmo)
+		__sched_ars(acpi_desc, tmo);
+	else
+		notify_ars_done(acpi_desc);
 	memset(acpi_desc->ars_status, 0, acpi_desc->max_ars);
 	mutex_unlock(&acpi_desc->init_mutex);
 }
@@ -3063,7 +3037,7 @@
 		break;
 	}
 
-	queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0);
+	sched_ars(acpi_desc);
 	return 0;
 }
 
@@ -3265,7 +3239,7 @@
 		}
 	}
 	if (scheduled) {
-		queue_delayed_work(nfit_wq, &acpi_desc->dwork, 0);
+		sched_ars(acpi_desc);
 		dev_dbg(dev, "ars_scan triggered\n");
 	}
 	mutex_unlock(&acpi_desc->init_mutex);
+1
drivers/acpi/nfit/nfit.h
@@ -203,6 +203,7 @@
 	unsigned int max_ars;
 	unsigned int scrub_count;
 	unsigned int scrub_mode;
+	unsigned int scrub_busy:1;
 	unsigned int cancel:1;
 	unsigned long dimm_cmd_force_en;
 	unsigned long bus_cmd_force_en;
+8-2
drivers/acpi/pptt.c
@@ -481,8 +481,14 @@
 	if (cpu_node) {
 		cpu_node = acpi_find_processor_package_id(table, cpu_node,
 							  level, flag);
-		/* Only the first level has a guaranteed id */
-		if (level == 0)
+		/*
+		 * As per specification if the processor structure represents
+		 * an actual processor, then ACPI processor ID must be valid.
+		 * For processor containers ACPI_PPTT_ACPI_PROCESSOR_ID_VALID
+		 * should be set if the UID is valid
+		 */
+		if (level == 0 ||
+		    cpu_node->flags & ACPI_PPTT_ACPI_PROCESSOR_ID_VALID)
			return cpu_node->acpi_processor_id;
 		return ACPI_PTR_DIFF(cpu_node, table);
 	}
-2
drivers/ata/Kconfig
@@ -398,7 +398,6 @@
 
 config SATA_HIGHBANK
 	tristate "Calxeda Highbank SATA support"
-	depends on HAS_DMA
 	depends on ARCH_HIGHBANK || COMPILE_TEST
 	help
 	  This option enables support for the Calxeda Highbank SoC's
@@ -407,7 +408,6 @@
 
 config SATA_MV
 	tristate "Marvell SATA support"
-	depends on HAS_DMA
 	depends on PCI || ARCH_DOVE || ARCH_MV78XX0 || \
 		   ARCH_MVEBU || ARCH_ORION5X || COMPILE_TEST
 	select GENERIC_PHY
+60
drivers/ata/ahci.c
···400400 { PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */401401 { PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */402402 { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci_mobile }, /* ApolloLake AHCI */403403+ { PCI_VDEVICE(INTEL, 0x34d3), board_ahci_mobile }, /* Ice Lake LP AHCI */403404404405 /* JMicron 360/1/3/5/6, match class to avoid IDE function */405406 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,···12811280 return strcmp(buf, dmi->driver_data) < 0;12821281}1283128212831283+static bool ahci_broken_lpm(struct pci_dev *pdev)12841284+{12851285+ static const struct dmi_system_id sysids[] = {12861286+ /* Various Lenovo 50 series have LPM issues with older BIOSen */12871287+ {12881288+ .matches = {12891289+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),12901290+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad X250"),12911291+ },12921292+ .driver_data = "20180406", /* 1.31 */12931293+ },12941294+ {12951295+ .matches = {12961296+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),12971297+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L450"),12981298+ },12991299+ .driver_data = "20180420", /* 1.28 */13001300+ },13011301+ {13021302+ .matches = {13031303+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),13041304+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T450s"),13051305+ },13061306+ .driver_data = "20180315", /* 1.33 */13071307+ },13081308+ {13091309+ .matches = {13101310+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),13111311+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W541"),13121312+ },13131313+ /*13141314+ * Note date based on release notes, 2.35 has been13151315+ * reported to be good, but I've been unable to get13161316+ * a hold of the reporter to get the DMI BIOS date.13171317+ * TODO: fix this.13181318+ */13191319+ .driver_data = "20180310", /* 2.35 */13201320+ },13211321+ { } /* terminate list */13221322+ };13231323+ const struct dmi_system_id *dmi = dmi_first_match(sysids);13241324+ int year, month, date;13251325+ char buf[9];13261326+13271327+ if (!dmi)13281328+ 
return false;13291329+13301330+ dmi_get_date(DMI_BIOS_DATE, &year, &month, &date);13311331+ snprintf(buf, sizeof(buf), "%04d%02d%02d", year, month, date);13321332+13331333+ return strcmp(buf, dmi->driver_data) < 0;13341334+}13351335+12841336static bool ahci_broken_online(struct pci_dev *pdev)12851337{12861338#define ENCODE_BUSDEVFN(bus, slot, func) \···17461692 pi.flags |= ATA_FLAG_NO_POWEROFF_SPINDOWN;17471693 dev_info(&pdev->dev,17481694 "quirky BIOS, skipping spindown on poweroff\n");16951695+ }16961696+16971697+ if (ahci_broken_lpm(pdev)) {16981698+ pi.flags |= ATA_FLAG_NO_LPM;16991699+ dev_warn(&pdev->dev,17001700+ "BIOS update required for Link Power Management support\n");17491701 }1750170217511703 if (ahci_broken_suspend(pdev)) {
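The ahci_broken_lpm() quirk above decides "BIOS too old" by formatting the DMI BIOS date as a zero-padded "YYYYMMDD" string and comparing it against the cutoff with strcmp(), which works because zero-padded fixed-width dates sort lexicographically in chronological order. A minimal userspace sketch of that idea (the kernel uses dmi_get_date() and snprintf(); bios_date_older_than() is a hypothetical helper for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Format a date as "YYYYMMDD"; zero padding makes lexicographic
 * comparison equivalent to chronological comparison. */
static void format_date(char buf[9], int year, int month, int day)
{
	snprintf(buf, 9, "%04d%02d%02d", year, month, day);
}

/* true if (year, month, day) is strictly before the cutoff string */
static bool bios_date_older_than(const char *cutoff,
				 int year, int month, int day)
{
	char buf[9];

	format_date(buf, year, month, day);
	return strcmp(buf, cutoff) < 0;
}
```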
···3535#include <linux/kernel.h>3636#include <linux/gfp.h>3737#include <linux/module.h>3838+#include <linux/nospec.h>3839#include <linux/blkdev.h>3940#include <linux/delay.h>4041#include <linux/interrupt.h>···1147114611481147 /* get the slot number from the message */11491148 pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8;11501150- if (pmp < EM_MAX_SLOTS)11491149+ if (pmp < EM_MAX_SLOTS) {11501150+ pmp = array_index_nospec(pmp, EM_MAX_SLOTS);11511151 emp = &pp->em_priv[pmp];11521152- else11521152+ } else {11531153 return -EINVAL;11541154+ }1154115511551156 /* mask off the activity bits if we are in sw_activity11561157 * mode, user should turn off sw_activity before setting
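The array_index_nospec() call added above clamps an already-bounds-checked index so the CPU cannot use it speculatively past the check (Spectre v1). The kernel derives a branch-free all-ones/all-zero mask from the comparison; a simplified userspace sketch of the same masking trick (not the kernel's exact implementation, which has per-arch variants):

```c
#include <assert.h>
#include <stddef.h>

/* Branch-free mask: all ones when idx < size, zero otherwise.
 * When idx < size, both idx and (size - 1 - idx) have the sign bit
 * clear, so negating their OR and arithmetic-shifting it down yields
 * all ones; when idx >= size, (size - 1 - idx) wraps and sets the
 * sign bit, producing a zero mask. */
static size_t index_mask(size_t idx, size_t size)
{
	long v = (long)(idx | (size - 1 - idx));

	return (size_t)(~v >> (sizeof(long) * 8 - 1));
}

/* Clamp idx to 0 when out of range, with no branch the CPU could
 * mispredict. */
static size_t index_nospec(size_t idx, size_t size)
{
	return idx & index_mask(idx, size);
}
```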
+3
drivers/ata/libata-core.c
···24932493 (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2)24942494 dev->horkage |= ATA_HORKAGE_NOLPM;2495249524962496+ if (ap->flags & ATA_FLAG_NO_LPM)24972497+ dev->horkage |= ATA_HORKAGE_NOLPM;24982498+24962499 if (dev->horkage & ATA_HORKAGE_NOLPM) {24972500 ata_dev_warn(dev, "LPM support broken, forcing max_power\n");24982501 dev->link->ap->target_lpm_policy = ATA_LPM_MAX_POWER;
+16-25
drivers/ata/libata-eh.c
···614614 list_for_each_entry_safe(scmd, tmp, eh_work_q, eh_entry) {615615 struct ata_queued_cmd *qc;616616617617- for (i = 0; i < ATA_MAX_QUEUE; i++) {618618- qc = __ata_qc_from_tag(ap, i);617617+ ata_qc_for_each_raw(ap, qc, i) {619618 if (qc->flags & ATA_QCFLAG_ACTIVE &&620619 qc->scsicmd == scmd)621620 break;···817818818819static int ata_eh_nr_in_flight(struct ata_port *ap)819820{821821+ struct ata_queued_cmd *qc;820822 unsigned int tag;821823 int nr = 0;822824823825 /* count only non-internal commands */824824- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {825825- if (ata_tag_internal(tag))826826- continue;827827- if (ata_qc_from_tag(ap, tag))826826+ ata_qc_for_each(ap, qc, tag) {827827+ if (qc)828828 nr++;829829 }830830···845847 goto out_unlock;846848847849 if (cnt == ap->fastdrain_cnt) {850850+ struct ata_queued_cmd *qc;848851 unsigned int tag;849852850853 /* No progress during the last interval, tag all851854 * in-flight qcs as timed out and freeze the port.852855 */853853- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {854854- struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag);856856+ ata_qc_for_each(ap, qc, tag) {855857 if (qc)856858 qc->err_mask |= AC_ERR_TIMEOUT;857859 }···9979999981000static int ata_do_link_abort(struct ata_port *ap, struct ata_link *link)9991001{10021002+ struct ata_queued_cmd *qc;10001003 int tag, nr_aborted = 0;1001100410021005 WARN_ON(!ap->ops->error_handler);···10061007 ata_eh_set_pending(ap, 0);1007100810081009 /* include internal tag in iteration */10091009- for (tag = 0; tag <= ATA_MAX_QUEUE; tag++) {10101010- struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag);10111011-10101010+ ata_qc_for_each_with_internal(ap, qc, tag) {10121011 if (qc && (!link || qc->dev->link == link)) {10131012 qc->flags |= ATA_QCFLAG_FAILED;10141013 ata_qc_complete(qc);···17091712 return;1710171317111714 /* has LLDD analyzed already? 
*/17121712- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {17131713- qc = __ata_qc_from_tag(ap, tag);17141714-17151715+ ata_qc_for_each_raw(ap, qc, tag) {17151716 if (!(qc->flags & ATA_QCFLAG_FAILED))17161717 continue;17171718···21312136{21322137 struct ata_port *ap = link->ap;21332138 struct ata_eh_context *ehc = &link->eh_context;21392139+ struct ata_queued_cmd *qc;21342140 struct ata_device *dev;21352141 unsigned int all_err_mask = 0, eflags = 0;21362142 int tag, nr_failed = 0, nr_quiet = 0;···2164216821652169 all_err_mask |= ehc->i.err_mask;2166217021672167- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {21682168- struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag);21692169-21712171+ ata_qc_for_each_raw(ap, qc, tag) {21702172 if (!(qc->flags & ATA_QCFLAG_FAILED) ||21712173 ata_dev_phys_link(qc->dev) != link)21722174 continue;···24302436{24312437 struct ata_port *ap = link->ap;24322438 struct ata_eh_context *ehc = &link->eh_context;24392439+ struct ata_queued_cmd *qc;24332440 const char *frozen, *desc;24342441 char tries_buf[6] = "";24352442 int tag, nr_failed = 0;···24422447 if (ehc->i.desc[0] != '\0')24432448 desc = ehc->i.desc;2444244924452445- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {24462446- struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag);24472447-24502450+ ata_qc_for_each_raw(ap, qc, tag) {24482451 if (!(qc->flags & ATA_QCFLAG_FAILED) ||24492452 ata_dev_phys_link(qc->dev) != link ||24502453 ((qc->flags & ATA_QCFLAG_QUIET) &&···25042511 ehc->i.serror & SERR_DEV_XCHG ? 
"DevExch " : "");25052512#endif2506251325072507- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {25082508- struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag);25142514+ ata_qc_for_each_raw(ap, qc, tag) {25092515 struct ata_taskfile *cmd = &qc->tf, *res = &qc->result_tf;25102516 char data_buf[20] = "";25112517 char cdb_buf[70] = "";···39843992 */39853993void ata_eh_finish(struct ata_port *ap)39863994{39953995+ struct ata_queued_cmd *qc;39873996 int tag;3988399739893998 /* retry or finish qcs */39903990- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {39913991- struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag);39923992-39993999+ ata_qc_for_each_raw(ap, qc, tag) {39934000 if (!(qc->flags & ATA_QCFLAG_FAILED))39944001 continue;39954002
+12-6
drivers/ata/libata-scsi.c
···38053805 */38063806 goto invalid_param_len;38073807 }38083808- if (block > dev->n_sectors)38093809- goto out_of_range;3810380838113809 all = cdb[14] & 0x1;38103810+ if (all) {38113811+ /*38123812+ * Ignore the block address (zone ID) as defined by ZBC.38133813+ */38143814+ block = 0;38153815+ } else if (block >= dev->n_sectors) {38163816+ /*38173817+ * Block must be a valid zone ID (a zone start LBA).38183818+ */38193819+ fp = 2;38203820+ goto invalid_fld;38213821+ }3812382238133823 if (ata_ncq_enabled(qc->dev) &&38143824 ata_fpdma_zac_mgmt_out_supported(qc->dev)) {···3846383638473837 invalid_fld:38483838 ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff);38493849- return 1;38503850- out_of_range:38513851- /* "Logical Block Address out of range" */38523852- ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x00);38533839 return 1;38543840invalid_param_len:38553841 /* "Parameter list length error" */
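The ZBC MANAGEMENT OUT hunk above reorders validation: when the ALL bit is set the zone ID is ignored (forced to 0), otherwise the zone ID must be a valid zone start LBA inside the device, reported as an invalid CDB field rather than "LBA out of range". A hedged sketch of that decision with hypothetical names:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Validate a ZBC zone-management request: if 'all' is set the zone
 * ID is ignored; otherwise it must fall inside the device.  Returns
 * 0 on success, -EINVAL (with *fp set to the offending CDB field)
 * otherwise. */
static int check_zone_req(uint64_t *block, int all, uint64_t n_sectors,
			  int *fp)
{
	if (all) {
		*block = 0;	/* zone ID is ignored per ZBC */
		return 0;
	}
	if (*block >= n_sectors) {
		*fp = 2;	/* zone ID field in the CDB */
		return -EINVAL;
	}
	return 0;
}
```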
+1-8
drivers/ata/sata_fsl.c
···395395{396396 /* We let libATA core do actual (queue) tag allocation */397397398398- /* all non NCQ/queued commands should have tag#0 */399399- if (ata_tag_internal(tag)) {400400- DPRINTK("mapping internal cmds to tag#0\n");401401- return 0;402402- }403403-404398 if (unlikely(tag >= SATA_FSL_QUEUE_DEPTH)) {405399 DPRINTK("tag %d invalid : out of range\n", tag);406400 return 0;···1223122912241230 /* Workaround for data length mismatch errata */12251231 if (unlikely(hstatus & INT_ON_DATA_LENGTH_MISMATCH)) {12261226- for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {12271227- qc = ata_qc_from_tag(ap, tag);12321232+ ata_qc_for_each_with_internal(ap, qc, tag) {12281233 if (qc && ata_is_atapi(qc->tf.protocol)) {12291234 u32 hcontrol;12301235 /* Set HControl[27] to clear error registers */
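The sata_fsl and libata-eh conversions above replace open-coded `for (tag = 0; tag < ATA_MAX_QUEUE; ...)` loops with `ata_qc_for_each()`-style iterator macros, centralizing the bounds and lookup bookkeeping. A hedged sketch of such a for-macro over a toy command-slot table (names are illustrative, not libata's):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SLOTS 8

struct cmd {
	int active;
};

struct port {
	struct cmd *slots[MAX_SLOTS];
};

/* Walk every slot, yielding the (possibly NULL) entry - the same
 * shape as ata_qc_for_each(): callers still check for NULL, but the
 * bounds logic lives in one place.  Short-circuit evaluation keeps
 * the final slots[] access in range. */
#define cmd_for_each(port, c, tag)					\
	for ((tag) = 0;							\
	     (tag) < MAX_SLOTS && (((c) = (port)->slots[(tag)]), 1);	\
	     (tag)++)

static int count_active(struct port *p)
{
	struct cmd *c;
	int tag, nr = 0;

	cmd_for_each(p, c, tag) {
		if (c && c->active)
			nr++;
	}
	return nr;
}
```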
···14831483 return -EFAULT;14841484 if (pool < 0 || pool > ZATM_LAST_POOL)14851485 return -EINVAL;14861486+ pool = array_index_nospec(pool,14871487+ ZATM_LAST_POOL + 1);14861488 if (copy_from_user(&info,14871489 &((struct zatm_pool_req __user *) arg)->info,14881490 sizeof(info))) return -EFAULT;
+9-7
drivers/base/power/domain.c
···22352235}2236223622372237static int __genpd_dev_pm_attach(struct device *dev, struct device_node *np,22382238- unsigned int index)22382238+ unsigned int index, bool power_on)22392239{22402240 struct of_phandle_args pd_args;22412241 struct generic_pm_domain *pd;···22712271 dev->pm_domain->detach = genpd_dev_pm_detach;22722272 dev->pm_domain->sync = genpd_dev_pm_sync;2273227322742274- genpd_lock(pd);22752275- ret = genpd_power_on(pd, 0);22762276- genpd_unlock(pd);22742274+ if (power_on) {22752275+ genpd_lock(pd);22762276+ ret = genpd_power_on(pd, 0);22772277+ genpd_unlock(pd);22782278+ }2277227922782280 if (ret)22792281 genpd_remove_device(pd, dev);···23092307 "#power-domain-cells") != 1)23102308 return 0;2311230923122312- return __genpd_dev_pm_attach(dev, dev->of_node, 0);23102310+ return __genpd_dev_pm_attach(dev, dev->of_node, 0, true);23132311}23142312EXPORT_SYMBOL_GPL(genpd_dev_pm_attach);23152313···23612359 }2362236023632361 /* Try to attach the device to the PM domain at the specified index. */23642364- ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index);23622362+ ret = __genpd_dev_pm_attach(genpd_dev, dev->of_node, index, false);23652363 if (ret < 1) {23662364 device_unregister(genpd_dev);23672365 return ret ? ERR_PTR(ret) : NULL;23682366 }2369236723702370- pm_runtime_set_active(genpd_dev);23712368 pm_runtime_enable(genpd_dev);23692369+ genpd_queue_power_off_work(dev_to_genpd(genpd_dev));2372237023732371 return genpd_dev;23742372}
+1-1
drivers/block/drbd/drbd_worker.c
···282282 what = COMPLETED_OK;283283 }284284285285- bio_put(req->private_bio);286285 req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status));286286+ bio_put(bio);287287288288 /* not req_mod(), we need irqsave here! */289289 spin_lock_irqsave(&device->resource->req_lock, flags);
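The drbd hunk above is a use-after-free fix by reordering: the completion status must be read out of the bio before bio_put() drops the last reference. The general pattern, sketched in userspace with a plain struct and free() standing in for the kernel's bio and bio_put():

```c
#include <assert.h>
#include <stdlib.h>

struct bio {
	int bi_status;
};

/* Capture everything needed from the object *before* dropping the
 * last reference, exactly as the drbd fix reorders it. */
static int complete_and_free(struct bio *bio)
{
	int status = bio->bi_status;	/* read first... */

	free(bio);			/* ...then release */
	return status;
}
```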
+1
drivers/block/loop.c
···16131613 arg = (unsigned long) compat_ptr(arg);16141614 case LOOP_SET_FD:16151615 case LOOP_CHANGE_FD:16161616+ case LOOP_SET_BLOCK_SIZE:16161617 err = lo_ioctl(bdev, mode, cmd, arg);16171618 break;16181619 default:
···11# SPDX-License-Identifier: GPL-2.022# Common objects33-lib-$(CONFIG_SUNXI_CCU) += ccu_common.o44-lib-$(CONFIG_SUNXI_CCU) += ccu_mmc_timing.o55-lib-$(CONFIG_SUNXI_CCU) += ccu_reset.o33+obj-y += ccu_common.o44+obj-y += ccu_mmc_timing.o55+obj-y += ccu_reset.o6677# Base clock types88-lib-$(CONFIG_SUNXI_CCU) += ccu_div.o99-lib-$(CONFIG_SUNXI_CCU) += ccu_frac.o1010-lib-$(CONFIG_SUNXI_CCU) += ccu_gate.o1111-lib-$(CONFIG_SUNXI_CCU) += ccu_mux.o1212-lib-$(CONFIG_SUNXI_CCU) += ccu_mult.o1313-lib-$(CONFIG_SUNXI_CCU) += ccu_phase.o1414-lib-$(CONFIG_SUNXI_CCU) += ccu_sdm.o88+obj-y += ccu_div.o99+obj-y += ccu_frac.o1010+obj-y += ccu_gate.o1111+obj-y += ccu_mux.o1212+obj-y += ccu_mult.o1313+obj-y += ccu_phase.o1414+obj-y += ccu_sdm.o15151616# Multi-factor clocks1717-lib-$(CONFIG_SUNXI_CCU) += ccu_nk.o1818-lib-$(CONFIG_SUNXI_CCU) += ccu_nkm.o1919-lib-$(CONFIG_SUNXI_CCU) += ccu_nkmp.o2020-lib-$(CONFIG_SUNXI_CCU) += ccu_nm.o2121-lib-$(CONFIG_SUNXI_CCU) += ccu_mp.o1717+obj-y += ccu_nk.o1818+obj-y += ccu_nkm.o1919+obj-y += ccu_nkmp.o2020+obj-y += ccu_nm.o2121+obj-y += ccu_mp.o22222323# SoC support2424obj-$(CONFIG_SUN50I_A64_CCU) += ccu-sun50i-a64.o···3838obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80.o3939obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-de.o4040obj-$(CONFIG_SUN9I_A80_CCU) += ccu-sun9i-a80-usb.o4141-4242-# The lib-y file goals is supposed to work only in arch/*/lib or lib/. In our4343-# case, we want to use that goal, but even though lib.a will be properly4444-# generated, it will not be linked in, eventually resulting in a linker error4545-# for missing symbols.4646-#4747-# We can work around that by explicitly adding lib.a to the obj-y goal. This is4848-# an undocumented behaviour, but works well for now.4949-obj-$(CONFIG_SUNXI_CCU) += lib.a
···231231 if (ib->flags & AMDGPU_IB_FLAG_TC_WB_NOT_INVALIDATE)232232 fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY;233233234234+ /* wrap the last IB with fence */235235+ if (job && job->uf_addr) {236236+ amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,237237+ fence_flags | AMDGPU_FENCE_FLAG_64BIT);238238+ }239239+234240 r = amdgpu_fence_emit(ring, f, fence_flags);235241 if (r) {236242 dev_err(adev->dev, "failed to emit fence (%d)\n", r);···248242249243 if (ring->funcs->insert_end)250244 ring->funcs->insert_end(ring);251251-252252- /* wrap the last IB with fence */253253- if (job && job->uf_addr) {254254- amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,255255- fence_flags | AMDGPU_FENCE_FLAG_64BIT);256256- }257245258246 if (patch_offset != ~0 && ring->funcs->patch_cond_exec)259247 amdgpu_ring_patch_cond_exec(ring, patch_offset);
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
···18821882 if (!amdgpu_device_has_dc_support(adev)) {18831883 mutex_lock(&adev->pm.mutex);18841884 amdgpu_dpm_get_active_displays(adev);18851885- adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtcs;18851885+ adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;18861886 adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);18871887 adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev);18881888 /* we have issues with mclk switching with refresh rates over 120 hz on the non-DC code. */
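The amdgpu fix above swaps `new_active_crtcs` (a bitmask of active CRTCs) for `new_active_crtc_count` (how many there are): the display-configuration field wants a count, and a mask like 0x5 would otherwise report "5 displays" instead of 2. The relationship between the two fields is just a population count, as this small illustration shows (the field names come from the hunk; crtc_count() is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Number of active CRTCs encoded in a bitmask: count the set bits. */
static int crtc_count(uint32_t active_crtc_mask)
{
	int n = 0;

	while (active_crtc_mask) {
		/* clear the lowest set bit */
		active_crtc_mask &= active_crtc_mask - 1;
		n++;
	}
	return n;
}
```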
···631631 },632632};633633634634+static struct platform_device *etnaviv_drm;635635+634636static int __init etnaviv_init(void)635637{638638+ struct platform_device *pdev;636639 int ret;637640 struct device_node *np;638641···647644648645 ret = platform_driver_register(&etnaviv_platform_driver);649646 if (ret != 0)650650- platform_driver_unregister(&etnaviv_gpu_driver);647647+ goto unregister_gpu_driver;651648652649 /*653650 * If the DT contains at least one available GPU device, instantiate···656653 for_each_compatible_node(np, NULL, "vivante,gc") {657654 if (!of_device_is_available(np))658655 continue;659659-660660- platform_device_register_simple("etnaviv", -1, NULL, 0);656656+ pdev = platform_device_register_simple("etnaviv", -1,657657+ NULL, 0);658658+ if (IS_ERR(pdev)) {659659+ ret = PTR_ERR(pdev);660660+ of_node_put(np);661661+ goto unregister_platform_driver;662662+ }663663+ etnaviv_drm = pdev;661664 of_node_put(np);662665 break;663666 }664667668668+ return 0;669669+670670+unregister_platform_driver:671671+ platform_driver_unregister(&etnaviv_platform_driver);672672+unregister_gpu_driver:673673+ platform_driver_unregister(&etnaviv_gpu_driver);665674 return ret;666675}667676module_init(etnaviv_init);668677669678static void __exit etnaviv_exit(void)670679{671671- platform_driver_unregister(&etnaviv_gpu_driver);680680+ platform_device_unregister(etnaviv_drm);672681 platform_driver_unregister(&etnaviv_platform_driver);682682+ platform_driver_unregister(&etnaviv_gpu_driver);673683}674684module_exit(etnaviv_exit);675685
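The etnaviv init rework above adds proper error unwinding: each registration that can fail jumps to a label that undoes everything registered so far, in reverse order. A self-contained sketch of that goto-unwind idiom, using stub registration functions with counters so the unwind can be observed (all names here are illustrative):

```c
#include <assert.h>

static int a_registered, b_registered;
static int b_should_fail;

static int register_a(void) { a_registered = 1; return 0; }
static void unregister_a(void) { a_registered = 0; }

static int register_b(void)
{
	if (b_should_fail)
		return -1;
	b_registered = 1;
	return 0;
}

/* Register a then b; on failure unwind in reverse order via gotos,
 * the same shape as the reworked etnaviv_init(). */
static int init_all(void)
{
	int ret;

	ret = register_a();
	if (ret)
		return ret;

	ret = register_b();
	if (ret)
		goto err_unregister_a;

	return 0;

err_unregister_a:
	unregister_a();
	return ret;
}
```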
+3
drivers/gpu/drm/etnaviv/etnaviv_gpu.h
···131131 struct work_struct sync_point_work;132132 int sync_point_event;133133134134+ /* hang detection */135135+ u32 hangcheck_dma_addr;136136+134137 void __iomem *mmio;135138 int irq;136139
+24
drivers/gpu/drm/etnaviv/etnaviv_sched.c
···1010#include "etnaviv_gem.h"1111#include "etnaviv_gpu.h"1212#include "etnaviv_sched.h"1313+#include "state.xml.h"13141415static int etnaviv_job_hang_limit = 0;1516module_param_named(job_hang_limit, etnaviv_job_hang_limit, int , 0444);···8685{8786 struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);8887 struct etnaviv_gpu *gpu = submit->gpu;8888+ u32 dma_addr;8989+ int change;9090+9191+ /*9292+ * If the GPU managed to complete this job's fence, the timeout is9393+ * spurious. Bail out.9494+ */9595+ if (fence_completed(gpu, submit->out_fence->seqno))9696+ return;9797+9898+ /*9999+ * If the GPU is still making forward progress on the front-end (which100100+ should never loop) we shift out the timeout to give it a chance to101101+ finish the job.102102+ */103103+ dma_addr = gpu_read(gpu, VIVS_FE_DMA_ADDRESS);104104+ change = dma_addr - gpu->hangcheck_dma_addr;105105+ if (change < 0 || change > 16) {106106+ gpu->hangcheck_dma_addr = dma_addr;107107+ schedule_delayed_work(&sched_job->work_tdr,108108+ sched_job->sched->timeout);109109+ return;110110+ }8911190112 /* block scheduler */91113 kthread_park(gpu->sched.thread);
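The etnaviv timeout handler above distinguishes a hung GPU from a slow one by sampling the front-end DMA address: any backward jump (wrap) or a forward move of more than 16 bytes counts as progress, and the timeout is re-armed. A hedged sketch of that heuristic, with the threshold taken from the hunk wrapped in a hypothetical helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The front-end should never loop in place: treat any backward jump
 * or a forward move of more than 16 bytes as forward progress. */
static bool fe_made_progress(uint32_t prev, uint32_t cur)
{
	int change = (int)(cur - prev);

	return change < 0 || change > 16;
}
```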
+3-3
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
···265265 unsigned long val;266266267267 val = readl(ctx->addr + DECON_WINCONx(win));268268- val &= ~WINCONx_BPPMODE_MASK;268268+ val &= WINCONx_ENWIN_F;269269270270 switch (fb->format->format) {271271 case DRM_FORMAT_XRGB1555:···356356 writel(val, ctx->addr + DECON_VIDOSDxB(win));357357 }358358359359- val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) |360360- VIDOSD_Wx_ALPHA_B_F(0x0);359359+ val = VIDOSD_Wx_ALPHA_R_F(0xff) | VIDOSD_Wx_ALPHA_G_F(0xff) |360360+ VIDOSD_Wx_ALPHA_B_F(0xff);361361 writel(val, ctx->addr + DECON_VIDOSDxC(win));362362363363 val = VIDOSD_Wx_ALPHA_R_F(0x0) | VIDOSD_Wx_ALPHA_G_F(0x0) |
···345345 int ret = 0;346346 int i;347347348348- /* basic checks */349349- if (buf->buf.width == 0 || buf->buf.height == 0)350350- return -EINVAL;351351- buf->format = drm_format_info(buf->buf.fourcc);352352- for (i = 0; i < buf->format->num_planes; i++) {353353- unsigned int width = (i == 0) ? buf->buf.width :354354- DIV_ROUND_UP(buf->buf.width, buf->format->hsub);355355-356356- if (buf->buf.pitch[i] == 0)357357- buf->buf.pitch[i] = width * buf->format->cpp[i];358358- if (buf->buf.pitch[i] < width * buf->format->cpp[i])359359- return -EINVAL;360360- if (!buf->buf.gem_id[i])361361- return -ENOENT;362362- }363363-364364- /* pitch for additional planes must match */365365- if (buf->format->num_planes > 2 &&366366- buf->buf.pitch[1] != buf->buf.pitch[2])367367- return -EINVAL;368368-369348 /* get GEM buffers and check their size */370349 for (i = 0; i < buf->format->num_planes; i++) {371350 unsigned int height = (i == 0) ? buf->buf.height :···407428 IPP_LIMIT_BUFFER, IPP_LIMIT_AREA, IPP_LIMIT_ROTATED, IPP_LIMIT_MAX408429};409430410410-static const enum drm_ipp_size_id limit_id_fallback[IPP_LIMIT_MAX][4] = {431431+static const enum drm_exynos_ipp_limit_type limit_id_fallback[IPP_LIMIT_MAX][4] = {411432 [IPP_LIMIT_BUFFER] = { DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER },412433 [IPP_LIMIT_AREA] = { DRM_EXYNOS_IPP_LIMIT_SIZE_AREA,413434 DRM_EXYNOS_IPP_LIMIT_SIZE_BUFFER },···474495 enum drm_ipp_size_id id = rotate ? 
IPP_LIMIT_ROTATED : IPP_LIMIT_AREA;475496 struct drm_ipp_limit l;476497 struct drm_exynos_ipp_limit_val *lh = &l.h, *lv = &l.v;498498+ int real_width = buf->buf.pitch[0] / buf->format->cpp[0];477499478500 if (!limits)479501 return 0;480502481503 __get_size_limit(limits, num_limits, IPP_LIMIT_BUFFER, &l);482482- if (!__size_limit_check(buf->buf.width, &l.h) ||504504+ if (!__size_limit_check(real_width, &l.h) ||483505 !__size_limit_check(buf->buf.height, &l.v))484506 return -EINVAL;485507···540560 return 0;541561}542562563563+static int exynos_drm_ipp_check_format(struct exynos_drm_ipp_task *task,564564+ struct exynos_drm_ipp_buffer *buf,565565+ struct exynos_drm_ipp_buffer *src,566566+ struct exynos_drm_ipp_buffer *dst,567567+ bool rotate, bool swap)568568+{569569+ const struct exynos_drm_ipp_formats *fmt;570570+ int ret, i;571571+572572+ fmt = __ipp_format_get(task->ipp, buf->buf.fourcc, buf->buf.modifier,573573+ buf == src ? DRM_EXYNOS_IPP_FORMAT_SOURCE :574574+ DRM_EXYNOS_IPP_FORMAT_DESTINATION);575575+ if (!fmt) {576576+ DRM_DEBUG_DRIVER("Task %pK: %s format not supported\n", task,577577+ buf == src ? "src" : "dst");578578+ return -EINVAL;579579+ }580580+581581+ /* basic checks */582582+ if (buf->buf.width == 0 || buf->buf.height == 0)583583+ return -EINVAL;584584+585585+ buf->format = drm_format_info(buf->buf.fourcc);586586+ for (i = 0; i < buf->format->num_planes; i++) {587587+ unsigned int width = (i == 0) ? 
buf->buf.width :588588+ DIV_ROUND_UP(buf->buf.width, buf->format->hsub);589589+590590+ if (buf->buf.pitch[i] == 0)591591+ buf->buf.pitch[i] = width * buf->format->cpp[i];592592+ if (buf->buf.pitch[i] < width * buf->format->cpp[i])593593+ return -EINVAL;594594+ if (!buf->buf.gem_id[i])595595+ return -ENOENT;596596+ }597597+598598+ /* pitch for additional planes must match */599599+ if (buf->format->num_planes > 2 &&600600+ buf->buf.pitch[1] != buf->buf.pitch[2])601601+ return -EINVAL;602602+603603+ /* check driver limits */604604+ ret = exynos_drm_ipp_check_size_limits(buf, fmt->limits,605605+ fmt->num_limits,606606+ rotate,607607+ buf == dst ? swap : false);608608+ if (ret)609609+ return ret;610610+ ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect,611611+ fmt->limits,612612+ fmt->num_limits, swap);613613+ return ret;614614+}615615+543616static int exynos_drm_ipp_task_check(struct exynos_drm_ipp_task *task)544617{545618 struct exynos_drm_ipp *ipp = task->ipp;546546- const struct exynos_drm_ipp_formats *src_fmt, *dst_fmt;547619 struct exynos_drm_ipp_buffer *src = &task->src, *dst = &task->dst;548620 unsigned int rotation = task->transform.rotation;549621 int ret = 0;···639607 return -EINVAL;640608 }641609642642- src_fmt = __ipp_format_get(ipp, src->buf.fourcc, src->buf.modifier,643643- DRM_EXYNOS_IPP_FORMAT_SOURCE);644644- if (!src_fmt) {645645- DRM_DEBUG_DRIVER("Task %pK: src format not supported\n", task);646646- return -EINVAL;647647- }648648- ret = exynos_drm_ipp_check_size_limits(src, src_fmt->limits,649649- src_fmt->num_limits,650650- rotate, false);651651- if (ret)652652- return ret;653653- ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect,654654- src_fmt->limits,655655- src_fmt->num_limits, swap);610610+ ret = exynos_drm_ipp_check_format(task, src, src, dst, rotate, swap);656611 if (ret)657612 return ret;658613659659- dst_fmt = __ipp_format_get(ipp, dst->buf.fourcc, dst->buf.modifier,660660- 
DRM_EXYNOS_IPP_FORMAT_DESTINATION);661661- if (!dst_fmt) {662662- DRM_DEBUG_DRIVER("Task %pK: dst format not supported\n", task);663663- return -EINVAL;664664- }665665- ret = exynos_drm_ipp_check_size_limits(dst, dst_fmt->limits,666666- dst_fmt->num_limits,667667- false, swap);668668- if (ret)669669- return ret;670670- ret = exynos_drm_ipp_check_scale_limits(&src->rect, &dst->rect,671671- dst_fmt->limits,672672- dst_fmt->num_limits, swap);614614+ ret = exynos_drm_ipp_check_format(task, dst, src, dst, false, swap);673615 if (ret)674616 return ret;675617
+1-1
drivers/gpu/drm/exynos/exynos_drm_plane.c
···132132 if (plane->state) {133133 exynos_state = to_exynos_plane_state(plane->state);134134 if (exynos_state->base.fb)135135- drm_framebuffer_unreference(exynos_state->base.fb);135135+ drm_framebuffer_put(exynos_state->base.fb);136136 kfree(exynos_state);137137 plane->state = NULL;138138 }
+2-2
drivers/gpu/drm/exynos/exynos_drm_rotator.c
···168168 val &= ~ROT_CONTROL_FLIP_MASK;169169170170 if (rotation & DRM_MODE_REFLECT_X)171171- val |= ROT_CONTROL_FLIP_HORIZONTAL;172172- if (rotation & DRM_MODE_REFLECT_Y)173171 val |= ROT_CONTROL_FLIP_VERTICAL;172172+ if (rotation & DRM_MODE_REFLECT_Y)173173+ val |= ROT_CONTROL_FLIP_HORIZONTAL;174174175175 val &= ~ROT_CONTROL_ROT_MASK;176176
+35-9
drivers/gpu/drm/exynos/exynos_drm_scaler.c
···3030#define scaler_write(cfg, offset) writel(cfg, scaler->regs + (offset))3131#define SCALER_MAX_CLK 43232#define SCALER_AUTOSUSPEND_DELAY 20003333+#define SCALER_RESET_WAIT_RETRIES 10033343435struct scaler_data {3536 const char *clk_name[SCALER_MAX_CLK];···5251static u32 scaler_get_format(u32 drm_fmt)5352{5453 switch (drm_fmt) {5555- case DRM_FORMAT_NV21:5656- return SCALER_YUV420_2P_UV;5754 case DRM_FORMAT_NV12:5555+ return SCALER_YUV420_2P_UV;5656+ case DRM_FORMAT_NV21:5857 return SCALER_YUV420_2P_VU;5958 case DRM_FORMAT_YUV420:6059 return SCALER_YUV420_3P;···6463 return SCALER_YUV422_1P_UYVY;6564 case DRM_FORMAT_YVYU:6665 return SCALER_YUV422_1P_YVYU;6767- case DRM_FORMAT_NV61:6868- return SCALER_YUV422_2P_UV;6966 case DRM_FORMAT_NV16:6767+ return SCALER_YUV422_2P_UV;6868+ case DRM_FORMAT_NV61:7069 return SCALER_YUV422_2P_VU;7170 case DRM_FORMAT_YUV422:7271 return SCALER_YUV422_3P;7373- case DRM_FORMAT_NV42:7474- return SCALER_YUV444_2P_UV;7572 case DRM_FORMAT_NV24:7373+ return SCALER_YUV444_2P_UV;7474+ case DRM_FORMAT_NV42:7675 return SCALER_YUV444_2P_VU;7776 case DRM_FORMAT_YUV444:7877 return SCALER_YUV444_3P;···9998 }10099101100 return 0;101101+}102102+103103+static inline int scaler_reset(struct scaler_context *scaler)104104+{105105+ int retry = SCALER_RESET_WAIT_RETRIES;106106+107107+ scaler_write(SCALER_CFG_SOFT_RESET, SCALER_CFG);108108+ do {109109+ cpu_relax();110110+ } while (retry > 1 &&111111+ scaler_read(SCALER_CFG) & SCALER_CFG_SOFT_RESET);112112+ do {113113+ cpu_relax();114114+ scaler_write(1, SCALER_INT_EN);115115+ } while (retry > 0 && scaler_read(SCALER_INT_EN) != 1);116116+117117+ return retry ? 
0 : -EIO;102118}103119104120static inline void scaler_enable_int(struct scaler_context *scaler)···372354 u32 dst_fmt = scaler_get_format(task->dst.buf.fourcc);373355 struct drm_exynos_ipp_task_rect *dst_pos = &task->dst.rect;374356375375- scaler->task = task;376376-377357 pm_runtime_get_sync(scaler->dev);358358+ if (scaler_reset(scaler)) {359359+ pm_runtime_put(scaler->dev);360360+ return -EIO;361361+ }362362+363363+ scaler->task = task;378364379365 scaler_set_src_fmt(scaler, src_fmt);380366 scaler_set_src_base(scaler, &task->src);···416394417395static inline u32 scaler_get_int_status(struct scaler_context *scaler)418396{419419- return scaler_read(SCALER_INT_STATUS);397397+ u32 val = scaler_read(SCALER_INT_STATUS);398398+399399+ scaler_write(val, SCALER_INT_STATUS);400400+401401+ return val;420402}421403422404static inline int scaler_task_done(u32 val)
···20022002 bool write = !!(vmf->flags & FAULT_FLAG_WRITE);20032003 struct i915_vma *vma;20042004 pgoff_t page_offset;20052005- unsigned int flags;20062005 int ret;2007200620082007 /* We don't use vmf->pgoff since that has the fake offset */···20372038 goto err_unlock;20382039 }2039204020402040- /* If the object is smaller than a couple of partial vma, it is20412041- * not worth only creating a single partial vma - we may as well20422042- * clear enough space for the full object.20432043- */20442044- flags = PIN_MAPPABLE;20452045- if (obj->base.size > 2 * MIN_CHUNK_PAGES << PAGE_SHIFT)20462046- flags |= PIN_NONBLOCK | PIN_NONFAULT;2047204120482042 /* Now pin it into the GTT as needed */20492049- vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, flags);20432043+ vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,20442044+ PIN_MAPPABLE |20452045+ PIN_NONBLOCK |20462046+ PIN_NONFAULT);20502047 if (IS_ERR(vma)) {20512048 /* Use a partial view if it is bigger than available space */20522049 struct i915_ggtt_view view =20532050 compute_partial_view(obj, page_offset, MIN_CHUNK_PAGES);20512051+ unsigned int flags;2054205220552055- /* Userspace is now writing through an untracked VMA, abandon20532053+ flags = PIN_MAPPABLE;20542054+ if (view.type == I915_GGTT_VIEW_NORMAL)20552055+ flags |= PIN_NONBLOCK; /* avoid warnings for pinned */20562056+20572057+ /*20582058+ * Userspace is now writing through an untracked VMA, abandon20562059 * all hope that the hardware is able to track future writes.20572060 */20582061 obj->frontbuffer_ggtt_origin = ORIGIN_CPU;2059206220602060- vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE);20632063+ vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags);20642064+ if (IS_ERR(vma) && !view.type) {20652065+ flags = PIN_MAPPABLE;20662066+ view.type = I915_GGTT_VIEW_PARTIAL;20672067+ vma = i915_gem_object_ggtt_pin(obj, &view, 0, 0, flags);20682068+ }20612069 }20622070 if (IS_ERR(vma)) {20632071 ret = PTR_ERR(vma);
···127127128128/*129129 * The number of address send athemps tried before giving up.130130- * If the first one failes it seems like 5 to 8 attempts are required.130130+ * If the first one fails it seems like 5 to 8 attempts are required.131131 */132132#define NUM_ADDR_RESEND_ATTEMPTS 12133133
+8-9
drivers/i2c/busses/i2c-tegra.c
···545545{546546 u32 cnfg;547547548548+ /*549549+ * NACK interrupt is generated before the I2C controller generates550550+ * the STOP condition on the bus. So wait for 2 clock periods551551+ * before disabling the controller so that the STOP condition has552552+ * been delivered properly.553553+ */554554+ udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate));555555+548556 cnfg = i2c_readl(i2c_dev, I2C_CNFG);549557 if (cnfg & I2C_CNFG_PACKET_MODE_EN)550558 i2c_writel(i2c_dev, cnfg & ~I2C_CNFG_PACKET_MODE_EN, I2C_CNFG);···713705714706 if (likely(i2c_dev->msg_err == I2C_ERR_NONE))715707 return 0;716716-717717- /*718718- * NACK interrupt is generated before the I2C controller generates719719- * the STOP condition on the bus. So wait for 2 clock periods720720- * before resetting the controller so that the STOP condition has721721- * been delivered properly.722722- */723723- if (i2c_dev->msg_err == I2C_ERR_NO_ACK)724724- udelay(DIV_ROUND_UP(2 * 1000000, i2c_dev->bus_clk_rate));725708726709 tegra_i2c_init(i2c_dev);727710 if (i2c_dev->msg_err == I2C_ERR_NO_ACK) {
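The Tegra change above moves the settle delay into controller disable; the delay is two I2C clock periods, computed as ceil(2 * 10^6 / bus_clk_rate) microseconds so the wait is never rounded short. The arithmetic in isolation, with DIV_ROUND_UP defined as the kernel does:

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Microseconds to wait for two I2C clock periods at the given bus
 * rate, rounded up. */
static uint32_t two_clock_periods_us(uint32_t bus_clk_rate)
{
	return DIV_ROUND_UP(2 * 1000000u, bus_clk_rate);
}
```

At standard-mode 100 kHz this yields 20 us, at fast-mode 400 kHz it yields 5 us.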
+10-1
drivers/i2c/i2c-core-base.c
···198198199199 val = !val;200200 bri->set_scl(adap, val);201201- ndelay(RECOVERY_NDELAY);201201+202202+ /*203203+ * If we can set SDA, we will always create STOP here to ensure204204+ * the additional pulses will do no harm. This is achieved by205205+ * letting SDA follow SCL half a cycle later.206206+ */207207+ ndelay(RECOVERY_NDELAY / 2);208208+ if (bri->set_sda)209209+ bri->set_sda(adap, val);210210+ ndelay(RECOVERY_NDELAY / 2);202211 }203212204213 /* check if recovery actually succeeded */
···774774{775775 struct c4iw_mr *mhp = to_c4iw_mr(ibmr);776776777777- if (unlikely(mhp->mpl_len == mhp->max_mpl_len))777777+ if (unlikely(mhp->mpl_len == mhp->attr.pbl_size))778778 return -ENOMEM;779779780780 mhp->mpl[mhp->mpl_len++] = addr;
+1-1
drivers/infiniband/hw/hfi1/rc.c
···271271272272 lockdep_assert_held(&qp->s_lock);273273 ps->s_txreq = get_txreq(ps->dev, qp);274274- if (IS_ERR(ps->s_txreq))274274+ if (!ps->s_txreq)275275 goto bail_no_tx;276276277277 if (priv->hdr_type == HFI1_PKT_TYPE_9B) {
+2-2
drivers/infiniband/hw/hfi1/uc.c
···11/*22- * Copyright(c) 2015, 2016 Intel Corporation.22+ * Copyright(c) 2015 - 2018 Intel Corporation.33 *44 * This file is provided under a dual BSD/GPLv2 license. When using or55 * redistributing this file, you may do so under either license.···7272 int middle = 0;73737474 ps->s_txreq = get_txreq(ps->dev, qp);7575- if (IS_ERR(ps->s_txreq))7575+ if (!ps->s_txreq)7676 goto bail_no_tx;77777878 if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_SEND_OK)) {
+2-2
drivers/infiniband/hw/hfi1/ud.c
···11/*22- * Copyright(c) 2015, 2016 Intel Corporation.22+ * Copyright(c) 2015 - 2018 Intel Corporation.33 *44 * This file is provided under a dual BSD/GPLv2 license. When using or55 * redistributing this file, you may do so under either license.···503503 u32 lid;504504505505 ps->s_txreq = get_txreq(ps->dev, qp);506506- if (IS_ERR(ps->s_txreq))506506+ if (!ps->s_txreq)507507 goto bail_no_tx;508508509509 if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_NEXT_SEND_OK)) {
+2-2
drivers/infiniband/hw/hfi1/verbs_txreq.c
···11/*22- * Copyright(c) 2016 - 2017 Intel Corporation.22+ * Copyright(c) 2016 - 2018 Intel Corporation.33 *44 * This file is provided under a dual BSD/GPLv2 license. When using or55 * redistributing this file, you may do so under either license.···9494 struct rvt_qp *qp)9595 __must_hold(&qp->s_lock)9696{9797- struct verbs_txreq *tx = ERR_PTR(-EBUSY);9797+ struct verbs_txreq *tx = NULL;98989999 write_seqlock(&dev->txwait_lock);100100 if (ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) {
+2-2
drivers/infiniband/hw/hfi1/verbs_txreq.h
···11/*22- * Copyright(c) 2016 Intel Corporation.22+ * Copyright(c) 2016 - 2018 Intel Corporation.33 *44 * This file is provided under a dual BSD/GPLv2 license. When using or55 * redistributing this file, you may do so under either license.···8383 if (unlikely(!tx)) {8484 /* call slow path to get the lock */8585 tx = __get_txreq(dev, qp);8686- if (IS_ERR(tx))8686+ if (!tx)8787 return tx;8888 }8989 tx->qp = qp;
···207207 bpf_prog_array_free(rcdev->raw->progs);208208}209209210210-int lirc_prog_attach(const union bpf_attr *attr)210210+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)211211{212212- struct bpf_prog *prog;213212 struct rc_dev *rcdev;214213 int ret;215214216215 if (attr->attach_flags)217216 return -EINVAL;218217219219- prog = bpf_prog_get_type(attr->attach_bpf_fd,220220- BPF_PROG_TYPE_LIRC_MODE2);221221- if (IS_ERR(prog))222222- return PTR_ERR(prog);223223-224218 rcdev = rc_dev_get_from_fd(attr->target_fd);225225- if (IS_ERR(rcdev)) {226226- bpf_prog_put(prog);219219+ if (IS_ERR(rcdev))227220 return PTR_ERR(rcdev);228228- }229221230222 ret = lirc_bpf_attach(rcdev, prog);231231- if (ret)232232- bpf_prog_put(prog);233223234224 put_device(&rcdev->dev);235225
+3-24
drivers/misc/ibmasm/ibmasmfs.c
···507507static ssize_t remote_settings_file_read(struct file *file, char __user *buf, size_t count, loff_t *offset)508508{509509 void __iomem *address = (void __iomem *)file->private_data;510510- unsigned char *page;511511- int retval;512510 int len = 0;513511 unsigned int value;514514-515515- if (*offset < 0)516516- return -EINVAL;517517- if (count == 0 || count > 1024)518518- return 0;519519- if (*offset != 0)520520- return 0;521521-522522- page = (unsigned char *)__get_free_page(GFP_KERNEL);523523- if (!page)524524- return -ENOMEM;512512+ char lbuf[20];525513526514 value = readl(address);527527- len = sprintf(page, "%d\n", value);515515+ len = snprintf(lbuf, sizeof(lbuf), "%d\n", value);528516529529- if (copy_to_user(buf, page, len)) {530530- retval = -EFAULT;531531- goto exit;532532- }533533- *offset += len;534534- retval = len;535535-536536-exit:537537- free_page((unsigned long)page);538538- return retval;517517+ return simple_read_from_buffer(buf, count, offset, lbuf, len);539518}540519541520static ssize_t remote_settings_file_write(struct file *file, const char __user *ubuff, size_t count, loff_t *offset)
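The ibmasmfs hunk above replaces an open-coded page allocation, bounds checks, and `copy_to_user()` with `simple_read_from_buffer()`, which centralizes the offset and length handling. A userspace sketch of its semantics (with `memcpy` standing in for `copy_to_user`, and a local `loff_t` typedef, both assumptions of this sketch):

```c
#include <string.h>
#include <sys/types.h>

typedef long long loff_t;	/* stand-in for the kernel type */

/* Copy at most @count bytes from @from (of size @available),
 * starting at *@ppos, and advance *@ppos past what was copied.
 * Returns the number of bytes copied, or 0 at end-of-buffer. */
static ssize_t simple_read_from_buffer(void *to, size_t count, loff_t *ppos,
				       const void *from, size_t available)
{
	loff_t pos = *ppos;

	if (pos < 0)
		return -1;			/* kernel returns -EINVAL */
	if (pos >= (loff_t)available || count == 0)
		return 0;			/* EOF */
	if (count > available - (size_t)pos)
		count = available - (size_t)pos;
	memcpy(to, (const char *)from + pos, count); /* copy_to_user() in-kernel */
	*ppos = pos + count;
	return (ssize_t)count;
}
```

The helper also makes the earlier `*offset != 0 → return 0` short-circuit unnecessary, since repeated reads naturally hit the EOF case once the offset passes the formatted length.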
+4-1
drivers/misc/mei/interrupt.c
···310310 if (&cl->link == &dev->file_list) {311311 /* A message for not connected fixed address clients312312 * should be silently discarded313313+ * On power down client may be force cleaned,314314+ * silently discard such messages313315 */314314- if (hdr_is_fixed(mei_hdr)) {316316+ if (hdr_is_fixed(mei_hdr) ||317317+ dev->dev_state == MEI_DEV_POWER_DOWN) {315318 mei_irq_discard_msg(dev, mei_hdr);316319 ret = 0;317320 goto reset_slots;
+2-2
drivers/misc/vmw_balloon.c
···467467 unsigned int num_pages, bool is_2m_pages, unsigned int *target)468468{469469 unsigned long status;470470- unsigned long pfn = page_to_pfn(b->page);470470+ unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page));471471472472 STATS_INC(b->stats.lock[is_2m_pages]);473473···515515 unsigned int num_pages, bool is_2m_pages, unsigned int *target)516516{517517 unsigned long status;518518- unsigned long pfn = page_to_pfn(b->page);518518+ unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page));519519520520 STATS_INC(b->stats.unlock[is_2m_pages]);521521
···10651065 * It's used when HS400 mode is enabled.10661066 */10671067 if (data->flags & MMC_DATA_WRITE &&10681068- !(host->timing != MMC_TIMING_MMC_HS400))10691069- return;10681068+ host->timing != MMC_TIMING_MMC_HS400)10691069+ goto disable;1070107010711071 if (data->flags & MMC_DATA_WRITE)10721072 enable = SDMMC_CARD_WR_THR_EN;···10741074 enable = SDMMC_CARD_RD_THR_EN;1075107510761076 if (host->timing != MMC_TIMING_MMC_HS200 &&10771077- host->timing != MMC_TIMING_UHS_SDR104)10771077+ host->timing != MMC_TIMING_UHS_SDR104 &&10781078+ host->timing != MMC_TIMING_MMC_HS400)10781079 goto disable;1079108010801081 blksz_depth = blksz / (1 << host->data_shift);
+7-8
drivers/mmc/host/renesas_sdhi_internal_dmac.c
···139139 renesas_sdhi_internal_dmac_dm_write(host, DM_CM_RST,140140 RST_RESERVED_BITS | val);141141142142- if (host->data && host->data->flags & MMC_DATA_READ)143143- clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);142142+ clear_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags);144143145144 renesas_sdhi_internal_dmac_enable_dma(host, true);146145}···163164 goto force_pio;164165165166 /* This DMAC cannot handle if buffer is not 8-bytes alignment */166166- if (!IS_ALIGNED(sg_dma_address(sg), 8)) {167167- dma_unmap_sg(&host->pdev->dev, sg, host->sg_len,168168- mmc_get_dma_dir(data));169169- goto force_pio;170170- }167167+ if (!IS_ALIGNED(sg_dma_address(sg), 8))168168+ goto force_pio_with_unmap;171169172170 if (data->flags & MMC_DATA_READ) {173171 dtran_mode |= DTRAN_MODE_CH_NUM_CH1;174172 if (test_bit(SDHI_INTERNAL_DMAC_ONE_RX_ONLY, &global_flags) &&175173 test_and_set_bit(SDHI_INTERNAL_DMAC_RX_IN_USE, &global_flags))176176- goto force_pio;174174+ goto force_pio_with_unmap;177175 } else {178176 dtran_mode |= DTRAN_MODE_CH_NUM_CH0;179177 }···184188 sg_dma_address(sg));185189186190 return;191191+192192+force_pio_with_unmap:193193+ dma_unmap_sg(&host->pdev->dev, sg, host->sg_len, mmc_get_dma_dir(data));187194188195force_pio:189196 host->force_pio = true;
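The renesas_sdhi hunk above deduplicates the error paths by routing the "unmap then fall back to PIO" exits through a single `force_pio_with_unmap:` label. A generic sketch of that layered goto-unwind idiom, using hypothetical `malloc`-backed resources rather than the driver's DMA mappings:

```c
#include <stdlib.h>

/* Two-resource setup showing the goto-unwind shape the hunk applies:
 * each failure jumps to a label that releases only what was already
 * acquired, in reverse order, so no exit path duplicates cleanup. */
static int setup(void)
{
	void *buf_a, *buf_b;

	buf_a = malloc(64);
	if (!buf_a)
		goto err;

	buf_b = malloc(64);
	if (!buf_b)
		goto err_free_a;

	/* ... use both resources ... */
	free(buf_b);
	free(buf_a);
	return 0;

err_free_a:
	free(buf_a);
err:
	return -1;
}
```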
+9-12
drivers/mmc/host/sdhci-esdhc-imx.c
···312312313313 if (imx_data->socdata->flags & ESDHC_FLAG_HS400)314314 val |= SDHCI_SUPPORT_HS400;315315+316316+ /*317317+ * Do not advertise faster UHS modes if there are no318318+ * pinctrl states for 100MHz/200MHz.319319+ */320320+ if (IS_ERR_OR_NULL(imx_data->pins_100mhz) ||321321+ IS_ERR_OR_NULL(imx_data->pins_200mhz))322322+ val &= ~(SDHCI_SUPPORT_SDR50 | SDHCI_SUPPORT_DDR50323323+ | SDHCI_SUPPORT_SDR104 | SDHCI_SUPPORT_HS400);315324 }316325 }317326···11671158 ESDHC_PINCTRL_STATE_100MHZ);11681159 imx_data->pins_200mhz = pinctrl_lookup_state(imx_data->pinctrl,11691160 ESDHC_PINCTRL_STATE_200MHZ);11701170- if (IS_ERR(imx_data->pins_100mhz) ||11711171- IS_ERR(imx_data->pins_200mhz)) {11721172- dev_warn(mmc_dev(host->mmc),11731173- "could not get ultra high speed state, work on normal mode\n");11741174- /*11751175- * fall back to not supporting uhs by specifying no11761176- * 1.8v quirk11771177- */11781178- host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;11791179- }11801180- } else {11811181- host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;11821161 }1183116211841163 /* call to generic mmc_of_parse to support additional capabilities */
+7
drivers/mmc/host/sunxi-mmc.c
···14461446 sunxi_mmc_init_host(host);14471447 sunxi_mmc_set_bus_width(host, mmc->ios.bus_width);14481448 sunxi_mmc_set_clk(host, &mmc->ios);14491449+ enable_irq(host->irq);1449145014501451 return 0;14511452}···14561455 struct mmc_host *mmc = dev_get_drvdata(dev);14571456 struct sunxi_mmc_host *host = mmc_priv(mmc);1458145714581458+ /*14591459+ * When clocks are off, it's possible receiving14601460+ * fake interrupts, which will stall the system.14611461+ * Disabling the irq will prevent this.14621462+ */14631463+ disable_irq(host->irq);14591464 sunxi_mmc_reset_host(host);14601465 sunxi_mmc_disable(host);14611466
+4-2
drivers/mtd/spi-nor/cadence-quadspi.c
···926926 if (ret)927927 return ret;928928929929- if (f_pdata->use_direct_mode)929929+ if (f_pdata->use_direct_mode) {930930 memcpy_toio(cqspi->ahb_base + to, buf, len);931931- else931931+ ret = cqspi_wait_idle(cqspi);932932+ } else {932933 ret = cqspi_indirect_write_execute(nor, to, buf, len);934934+ }933935 if (ret)934936 return ret;935937
···1027910279 bp->sp_rtnl_state = 0;1028010280 smp_mb();10281102811028210282+ /* Immediately indicate link as down */1028310283+ bp->link_vars.link_up = 0;1028410284+ bp->force_link_down = true;1028510285+ netif_carrier_off(bp->dev);1028610286+ BNX2X_ERR("Indicating link is down due to Tx-timeout\n");1028710287+1028210288 bnx2x_nic_unload(bp, UNLOAD_NORMAL, true);1028310289 /* When ret value shows failure of allocation failure,1028410290 * the nic is rebooted again. If open still fails, a error
···21862186 return skb;21872187}2188218821892189-#define IXGBE_XDP_PASS 021902190-#define IXGBE_XDP_CONSUMED 121912191-#define IXGBE_XDP_TX 221892189+#define IXGBE_XDP_PASS 021902190+#define IXGBE_XDP_CONSUMED BIT(0)21912191+#define IXGBE_XDP_TX BIT(1)21922192+#define IXGBE_XDP_REDIR BIT(2)2192219321932194static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,21942195 struct xdp_frame *xdpf);···22262225 case XDP_REDIRECT:22272226 err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);22282227 if (!err)22292229- result = IXGBE_XDP_TX;22282228+ result = IXGBE_XDP_REDIR;22302229 else22312230 result = IXGBE_XDP_CONSUMED;22322231 break;···22862285 unsigned int mss = 0;22872286#endif /* IXGBE_FCOE */22882287 u16 cleaned_count = ixgbe_desc_unused(rx_ring);22892289- bool xdp_xmit = false;22882288+ unsigned int xdp_xmit = 0;22902289 struct xdp_buff xdp;2291229022922291 xdp.rxq = &rx_ring->xdp_rxq;···23292328 }2330232923312330 if (IS_ERR(skb)) {23322332- if (PTR_ERR(skb) == -IXGBE_XDP_TX) {23332333- xdp_xmit = true;23312331+ unsigned int xdp_res = -PTR_ERR(skb);23322332+23332333+ if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR)) {23342334+ xdp_xmit |= xdp_res;23342335 ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size);23352336 } else {23362337 rx_buffer->pagecnt_bias++;···24042401 total_rx_packets++;24052402 }2406240324072407- if (xdp_xmit) {24042404+ if (xdp_xmit & IXGBE_XDP_REDIR)24052405+ xdp_do_flush_map();24062406+24072407+ if (xdp_xmit & IXGBE_XDP_TX) {24082408 struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()];2409240924102410 /* Force memory writes to complete before letting h/w···24152409 */24162410 wmb();24172411 writel(ring->next_to_use, ring->tail);24182418-24192419- xdp_do_flush_map();24202412 }2421241324222414 u64_stats_update_begin(&rx_ring->syncp);
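The ixgbe hunk above converts mutually exclusive result codes (0, 1, 2) into single-bit flags so that per-packet XDP outcomes can be OR-accumulated across the receive loop and tested independently afterwards, letting `xdp_do_flush_map()` run only when a redirect actually happened. A sketch of that accumulate-then-test pattern (names modeled on, but not identical to, the hunk):

```c
#define BIT(n) (1u << (n))

#define XDP_CONSUMED BIT(0)
#define XDP_TX       BIT(1)
#define XDP_REDIR    BIT(2)

/* Accumulate per-packet results; the caller flushes or rings the TX
 * doorbell once per batch based on which bits ended up set. */
static unsigned int accumulate(const unsigned int *res, int n)
{
	unsigned int xdp_xmit = 0;
	int i;

	for (i = 0; i < n; i++)
		if (res[i] & (XDP_TX | XDP_REDIR))
			xdp_xmit |= res[i];
	return xdp_xmit;
}
```

With the old exclusive encoding, a batch containing both a TX and a redirect could only record one of the two, which is the bug the bitmask fixes.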
+4-4
drivers/net/ethernet/mellanox/mlx5/core/cmd.c
···807807 unsigned long flags;808808 bool poll_cmd = ent->polling;809809 int alloc_ret;810810+ int cmd_mode;810811811812 sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;812813 down(sem);···854853 set_signature(ent, !cmd->checksum_disabled);855854 dump_command(dev, ent, 1);856855 ent->ts1 = ktime_get_ns();856856+ cmd_mode = cmd->mode;857857858858 if (ent->callback)859859 schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);···879877 iowrite32be(1 << ent->idx, &dev->iseg->cmd_dbell);880878 mmiowb();881879 /* if not in polling don't use ent after this point */882882- if (cmd->mode == CMD_MODE_POLLING || poll_cmd) {880880+ if (cmd_mode == CMD_MODE_POLLING || poll_cmd) {883881 poll_timeout(ent);884882 /* make sure we read the descriptor after ownership is SW */885883 rmb();···12781276{12791277 struct mlx5_core_dev *dev = filp->private_data;12801278 struct mlx5_cmd_debug *dbg = &dev->cmd.dbg;12811281- char outlen_str[8];12791279+ char outlen_str[8] = {0};12821280 int outlen;12831281 void *ptr;12841282 int err;···1292129012931291 if (copy_from_user(outlen_str, buf, count))12941292 return -EFAULT;12951295-12961296- outlen_str[7] = 0;1297129312981294 err = sscanf(outlen_str, "%d", &outlen);12991295 if (err < 0)
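The second mlx5 hunk above replaces a post-hoc `outlen_str[7] = 0` with `char outlen_str[8] = {0}`, which NUL-fills the whole array up front, so a short `copy_from_user()` cannot leave uninitialized bytes between the copied data and the forced terminator before `sscanf()` runs. A userspace sketch of the same defensive pattern (hypothetical helper name; `memcpy` stands in for `copy_from_user`):

```c
#include <stdio.h>
#include <string.h>

/* Parse an integer from an untrusted, possibly-unterminated byte
 * buffer. Zero-initializing lbuf guarantees NUL termination no
 * matter how few bytes the copy writes. */
static int parse_outlen(const char *user_buf, size_t count, int *outlen)
{
	char lbuf[8] = {0};

	if (count >= sizeof(lbuf))
		count = sizeof(lbuf) - 1;	/* keep the terminator */
	memcpy(lbuf, user_buf, count);
	return sscanf(lbuf, "%d", outlen) == 1 ? 0 : -1;
}
```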
+6-6
drivers/net/ethernet/mellanox/mlx5/core/en_main.c
···28462846 mlx5e_activate_channels(&priv->channels);28472847 netif_tx_start_all_queues(priv->netdev);2848284828492849- if (MLX5_VPORT_MANAGER(priv->mdev))28492849+ if (MLX5_ESWITCH_MANAGER(priv->mdev))28502850 mlx5e_add_sqs_fwd_rules(priv);2851285128522852 mlx5e_wait_channels_min_rx_wqes(&priv->channels);···28572857{28582858 mlx5e_redirect_rqts_to_drop(priv);2859285928602860- if (MLX5_VPORT_MANAGER(priv->mdev))28602860+ if (MLX5_ESWITCH_MANAGER(priv->mdev))28612861 mlx5e_remove_sqs_fwd_rules(priv);2862286228632863 /* FIXME: This is a W/A only for tx timeout watch dog false alarm when···45974597 mlx5e_set_netdev_dev_addr(netdev);4598459845994599#if IS_ENABLED(CONFIG_MLX5_ESWITCH)46004600- if (MLX5_VPORT_MANAGER(mdev))46004600+ if (MLX5_ESWITCH_MANAGER(mdev))46014601 netdev->switchdev_ops = &mlx5e_switchdev_ops;46024602#endif46034603···4753475347544754 mlx5e_enable_async_events(priv);4755475547564756- if (MLX5_VPORT_MANAGER(priv->mdev))47564756+ if (MLX5_ESWITCH_MANAGER(priv->mdev))47574757 mlx5e_register_vport_reps(priv);4758475847594759 if (netdev->reg_state != NETREG_REGISTERED)···4788478847894789 queue_work(priv->wq, &priv->set_rx_mode_work);4790479047914791- if (MLX5_VPORT_MANAGER(priv->mdev))47914791+ if (MLX5_ESWITCH_MANAGER(priv->mdev))47924792 mlx5e_unregister_vport_reps(priv);4793479347944794 mlx5e_disable_async_events(priv);···49724972 return NULL;4973497349744974#ifdef CONFIG_MLX5_ESWITCH49754975- if (MLX5_VPORT_MANAGER(mdev)) {49754975+ if (MLX5_ESWITCH_MANAGER(mdev)) {49764976 rpriv = mlx5e_alloc_nic_rep_priv(mdev);49774977 if (!rpriv) {49784978 mlx5_core_warn(mdev, "Failed to alloc NIC rep priv data\n");
···15941594}1595159515961596/* Public E-Switch API */15971597-#define ESW_ALLOWED(esw) ((esw) && MLX5_VPORT_MANAGER((esw)->dev))15971597+#define ESW_ALLOWED(esw) ((esw) && MLX5_ESWITCH_MANAGER((esw)->dev))15981598+1598159915991600int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs, int mode)16001601{16011602 int err;16021603 int i, enabled_events;1603160416041604- if (!ESW_ALLOWED(esw))16051605- return 0;16061606-16071607- if (!MLX5_CAP_GEN(esw->dev, eswitch_flow_table) ||16051605+ if (!ESW_ALLOWED(esw) ||16081606 !MLX5_CAP_ESW_FLOWTABLE_FDB(esw->dev, ft_support)) {16091607 esw_warn(esw->dev, "E-Switch FDB is not supported, aborting ...\n");16101608 return -EOPNOTSUPP;···18041806 u64 node_guid;18051807 int err = 0;1806180818071807- if (!ESW_ALLOWED(esw))18091809+ if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))18081810 return -EPERM;18091811 if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))18101812 return -EINVAL;···18811883{18821884 struct mlx5_vport *evport;1883188518841884- if (!ESW_ALLOWED(esw))18861886+ if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))18851887 return -EPERM;18861888 if (!LEGAL_VPORT(esw, vport))18871889 return -EINVAL;
···3333#include <linux/etherdevice.h>3434#include <linux/mlx5/driver.h>3535#include <linux/mlx5/mlx5_ifc.h>3636+#include <linux/mlx5/eswitch.h>3637#include "mlx5_core.h"3738#include "lib/mpfs.h"3839···9998 int l2table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table);10099 struct mlx5_mpfs *mpfs;101100102102- if (!MLX5_VPORT_MANAGER(dev))101101+ if (!MLX5_ESWITCH_MANAGER(dev))103102 return 0;104103105104 mpfs = kzalloc(sizeof(*mpfs), GFP_KERNEL);···123122{124123 struct mlx5_mpfs *mpfs = dev->priv.mpfs;125124126126- if (!MLX5_VPORT_MANAGER(dev))125125+ if (!MLX5_ESWITCH_MANAGER(dev))127126 return;128127129128 WARN_ON(!hlist_empty(mpfs->hash));···138137 u32 index;139138 int err;140139141141- if (!MLX5_VPORT_MANAGER(dev))140140+ if (!MLX5_ESWITCH_MANAGER(dev))142141 return 0;143142144143 mutex_lock(&mpfs->lock);···180179 int err = 0;181180 u32 index;182181183183- if (!MLX5_VPORT_MANAGER(dev))182182+ if (!MLX5_ESWITCH_MANAGER(dev))184183 return 0;185184186185 mutex_lock(&mpfs->lock);
+2-2
drivers/net/ethernet/mellanox/mlx5/core/port.c
···701701static int mlx5_set_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *in,702702 int inlen)703703{704704- u32 out[MLX5_ST_SZ_DW(qtct_reg)];704704+ u32 out[MLX5_ST_SZ_DW(qetc_reg)];705705706706 if (!MLX5_CAP_GEN(mdev, ets))707707 return -EOPNOTSUPP;···713713static int mlx5_query_port_qetcr_reg(struct mlx5_core_dev *mdev, u32 *out,714714 int outlen)715715{716716- u32 in[MLX5_ST_SZ_DW(qtct_reg)];716716+ u32 in[MLX5_ST_SZ_DW(qetc_reg)];717717718718 if (!MLX5_CAP_GEN(mdev, ets))719719 return -EOPNOTSUPP;
+6-1
drivers/net/ethernet/mellanox/mlx5/core/sriov.c
···8888 return -EBUSY;8989 }90909191+ if (!MLX5_ESWITCH_MANAGER(dev))9292+ goto enable_vfs_hca;9393+9194 err = mlx5_eswitch_enable_sriov(dev->priv.eswitch, num_vfs, SRIOV_LEGACY);9295 if (err) {9396 mlx5_core_warn(dev,···9895 return err;9996 }100979898+enable_vfs_hca:10199 for (vf = 0; vf < num_vfs; vf++) {102100 err = mlx5_core_enable_hca(dev, vf + 1);103101 if (err) {···144140 }145141146142out:147147- mlx5_eswitch_disable_sriov(dev->priv.eswitch);143143+ if (MLX5_ESWITCH_MANAGER(dev))144144+ mlx5_eswitch_disable_sriov(dev->priv.eswitch);148145149146 if (mlx5_wait_for_vf_pages(dev))150147 mlx5_core_warn(dev, "timeout reclaiming VFs pages\n");
-2
drivers/net/ethernet/mellanox/mlx5/core/vport.c
···549549 return -EINVAL;550550 if (!MLX5_CAP_GEN(mdev, vport_group_manager))551551 return -EACCES;552552- if (!MLX5_CAP_ESW(mdev, nic_vport_node_guid_modify))553553- return -EOPNOTSUPP;554552555553 in = kvzalloc(inlen, GFP_KERNEL);556554 if (!in)
+6-3
drivers/net/ethernet/netronome/nfp/bpf/main.c
···81818282 ret = nfp_net_bpf_offload(nn, prog, running, extack);8383 /* Stop offload if replace not possible */8484- if (ret && prog)8585- nfp_bpf_xdp_offload(app, nn, NULL, extack);8484+ if (ret)8585+ return ret;86868787- nn->dp.bpf_offload_xdp = prog && !ret;8787+ nn->dp.bpf_offload_xdp = !!prog;8888 return ret;8989}9090···200200 struct nfp_net *nn = netdev_priv(netdev);201201202202 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)203203+ return -EOPNOTSUPP;204204+205205+ if (tcf_block_shared(f->block))203206 return -EOPNOTSUPP;204207205208 switch (f->command) {
+14
drivers/net/ethernet/netronome/nfp/flower/match.c
···123123 NFP_FLOWER_MASK_MPLS_Q;124124125125 frame->mpls_lse = cpu_to_be32(t_mpls);126126+ } else if (dissector_uses_key(flow->dissector,127127+ FLOW_DISSECTOR_KEY_BASIC)) {128128+ /* Check for mpls ether type and set NFP_FLOWER_MASK_MPLS_Q129129+ * bit, which indicates an mpls ether type but without any130130+ * mpls fields.131131+ */132132+ struct flow_dissector_key_basic *key_basic;133133+134134+ key_basic = skb_flow_dissector_target(flow->dissector,135135+ FLOW_DISSECTOR_KEY_BASIC,136136+ flow->key);137137+ if (key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_UC) ||138138+ key_basic->n_proto == cpu_to_be16(ETH_P_MPLS_MC))139139+ frame->mpls_lse = cpu_to_be32(NFP_FLOWER_MASK_MPLS_Q);126140 }127141}128142
···264264 case cpu_to_be16(ETH_P_ARP):265265 return -EOPNOTSUPP;266266267267+ case cpu_to_be16(ETH_P_MPLS_UC):268268+ case cpu_to_be16(ETH_P_MPLS_MC):269269+ if (!(key_layer & NFP_FLOWER_LAYER_MAC)) {270270+ key_layer |= NFP_FLOWER_LAYER_MAC;271271+ key_size += sizeof(struct nfp_flower_mac_mpls);272272+ }273273+ break;274274+267275 /* Will be included in layer 2. */268276 case cpu_to_be16(ETH_P_8021Q):269277 break;···629621 struct nfp_repr *repr = netdev_priv(netdev);630622631623 if (f->binder_type != TCF_BLOCK_BINDER_TYPE_CLSACT_INGRESS)624624+ return -EOPNOTSUPP;625625+626626+ if (tcf_block_shared(f->block))632627 return -EOPNOTSUPP;633628634629 switch (f->command) {
+1-5
drivers/net/ethernet/netronome/nfp/nfp_main.c
···240240 return pci_sriov_set_totalvfs(pf->pdev, pf->limit_vfs);241241242242 pf->limit_vfs = ~0;243243- pci_sriov_set_totalvfs(pf->pdev, 0); /* 0 is unset */244243 /* Allow any setting for backwards compatibility if symbol not found */245244 if (err == -ENOENT)246245 return 0;···667668668669 err = nfp_net_pci_probe(pf);669670 if (err)670670- goto err_sriov_unlimit;671671+ goto err_fw_unload;671672672673 err = nfp_hwmon_register(pf);673674 if (err) {···679680680681err_net_remove:681682 nfp_net_pci_remove(pf);682682-err_sriov_unlimit:683683- pci_sriov_set_totalvfs(pf->pdev, 0);684683err_fw_unload:685684 kfree(pf->rtbl);686685 nfp_mip_close(pf->mip);···712715 nfp_hwmon_unregister(pf);713716714717 nfp_pcie_sriov_disable(pdev);715715- pci_sriov_set_totalvfs(pf->pdev, 0);716718717719 nfp_net_pci_remove(pf);718720
···18041804 DP_INFO(p_hwfn, "Failed to update driver state\n");1805180518061806 rc = qed_mcp_ov_update_eswitch(p_hwfn, p_hwfn->p_main_ptt,18071807- QED_OV_ESWITCH_VEB);18071807+ QED_OV_ESWITCH_NONE);18081808 if (rc)18091809 DP_INFO(p_hwfn, "Failed to update eswitch mode\n");18101810 }
+8
drivers/net/ethernet/qlogic/qed/qed_main.c
···789789 /* We want a minimum of one slowpath and one fastpath vector per hwfn */790790 cdev->int_params.in.min_msix_cnt = cdev->num_hwfns * 2;791791792792+ if (is_kdump_kernel()) {793793+ DP_INFO(cdev,794794+ "Kdump kernel: Limit the max number of requested MSI-X vectors to %hd\n",795795+ cdev->int_params.in.min_msix_cnt);796796+ cdev->int_params.in.num_vectors =797797+ cdev->int_params.in.min_msix_cnt;798798+ }799799+792800 rc = qed_set_int_mode(cdev, false);793801 if (rc) {794802 DP_ERR(cdev, "qed_slowpath_setup_int ERR\n");
+17-2
drivers/net/ethernet/qlogic/qed/qed_sriov.c
···45134513static int qed_sriov_enable(struct qed_dev *cdev, int num)45144514{45154515 struct qed_iov_vf_init_params params;45164516+ struct qed_hwfn *hwfn;45174517+ struct qed_ptt *ptt;45164518 int i, j, rc;4517451945184520 if (num >= RESC_NUM(&cdev->hwfns[0], QED_VPORT)) {···4527452545284526 /* Initialize HW for VF access */45294527 for_each_hwfn(cdev, j) {45304530- struct qed_hwfn *hwfn = &cdev->hwfns[j];45314531- struct qed_ptt *ptt = qed_ptt_acquire(hwfn);45284528+ hwfn = &cdev->hwfns[j];45294529+ ptt = qed_ptt_acquire(hwfn);4532453045334531 /* Make sure not to use more than 16 queues per VF */45344532 params.num_queues = min_t(int,···45634561 DP_ERR(cdev, "Failed to enable sriov [%d]\n", rc);45644562 goto err;45654563 }45644564+45654565+ hwfn = QED_LEADING_HWFN(cdev);45664566+ ptt = qed_ptt_acquire(hwfn);45674567+ if (!ptt) {45684568+ DP_ERR(hwfn, "Failed to acquire ptt\n");45694569+ rc = -EBUSY;45704570+ goto err;45714571+ }45724572+45734573+ rc = qed_mcp_ov_update_eswitch(hwfn, ptt, QED_OV_ESWITCH_VEB);45744574+ if (rc)45754575+ DP_INFO(cdev, "Failed to update eswitch mode\n");45764576+ qed_ptt_release(hwfn, ptt);4566457745674578 return num;45684579
···6565 VM_PKT_DATA_INBAND, 0);6666}67676868+/* Worker to setup sub channels on initial setup6969+ * Initial hotplug event occurs in softirq context7070+ * and can't wait for channels.7171+ */7272+static void netvsc_subchan_work(struct work_struct *w)7373+{7474+ struct netvsc_device *nvdev =7575+ container_of(w, struct netvsc_device, subchan_work);7676+ struct rndis_device *rdev;7777+ int i, ret;7878+7979+ /* Avoid deadlock with device removal already under RTNL */8080+ if (!rtnl_trylock()) {8181+ schedule_work(w);8282+ return;8383+ }8484+8585+ rdev = nvdev->extension;8686+ if (rdev) {8787+ ret = rndis_set_subchannel(rdev->ndev, nvdev);8888+ if (ret == 0) {8989+ netif_device_attach(rdev->ndev);9090+ } else {9191+ /* fallback to only primary channel */9292+ for (i = 1; i < nvdev->num_chn; i++)9393+ netif_napi_del(&nvdev->chan_table[i].napi);9494+9595+ nvdev->max_chn = 1;9696+ nvdev->num_chn = 1;9797+ }9898+ }9999+100100+ rtnl_unlock();101101+}102102+68103static struct netvsc_device *alloc_net_device(void)69104{70105 struct netvsc_device *net_device;···1168111782 init_completion(&net_device->channel_init_wait);11883 init_waitqueue_head(&net_device->subchan_open);119119- INIT_WORK(&net_device->subchan_work, rndis_set_subchannel);8484+ INIT_WORK(&net_device->subchan_work, netvsc_subchan_work);1208512186 return net_device;12287}
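The new `netvsc_subchan_work()` above avoids deadlocking against device removal (which already holds RTNL) by using `rtnl_trylock()` and rescheduling itself when the lock is contended. A pthread-based sketch of that trylock-or-requeue shape, with the requeue modeled as a return code rather than a real workqueue:

```c
#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns 1 if the work ran, 0 if it must be requeued because the
 * lock is currently held (mirroring the rtnl_trylock() /
 * schedule_work(w) dance in the hunk above). */
static int try_run_work(void)
{
	if (pthread_mutex_trylock(&big_lock) != 0)
		return 0;	/* would block or deadlock: requeue instead */

	/* ... perform the subchannel setup under the lock ... */

	pthread_mutex_unlock(&big_lock);
	return 1;
}
```

The trade-off is that the work may spin through several requeues under contention, but it can never sleep on a lock held by the path that is waiting to cancel it.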
+16-1
drivers/net/hyperv/netvsc_drv.c
···905905 if (IS_ERR(nvdev))906906 return PTR_ERR(nvdev);907907908908- /* Note: enable and attach happen when sub-channels setup */908908+ if (nvdev->num_chn > 1) {909909+ ret = rndis_set_subchannel(ndev, nvdev);909910911911+ /* if unavailable, just proceed with one queue */912912+ if (ret) {913913+ nvdev->max_chn = 1;914914+ nvdev->num_chn = 1;915915+ }916916+ }917917+918918+ /* In any case device is now ready */919919+ netif_device_attach(ndev);920920+921921+ /* Note: enable and attach happen when sub-channels setup */910922 netif_carrier_off(ndev);911923912924 if (netif_running(ndev)) {···21002088 }2101208921022090 memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);20912091+20922092+ if (nvdev->num_chn > 1)20932093+ schedule_work(&nvdev->subchan_work);2103209421042095 /* hw_features computed in rndis_netdev_set_hwcaps() */21052096 net->features = net->hw_features |
+12-49
drivers/net/hyperv/rndis_filter.c

···10621062 * This breaks overlap of processing the host message for the10631063 * new primary channel with the initialization of sub-channels.10641064 */10651065-void rndis_set_subchannel(struct work_struct *w)10651065+int rndis_set_subchannel(struct net_device *ndev, struct netvsc_device *nvdev)10661066{10671067- struct netvsc_device *nvdev10681068- = container_of(w, struct netvsc_device, subchan_work);10691067 struct nvsp_message *init_packet = &nvdev->channel_init_pkt;10701070- struct net_device_context *ndev_ctx;10711071- struct rndis_device *rdev;10721072- struct net_device *ndev;10731073- struct hv_device *hv_dev;10681068+ struct net_device_context *ndev_ctx = netdev_priv(ndev);10691069+ struct hv_device *hv_dev = ndev_ctx->device_ctx;10701070+ struct rndis_device *rdev = nvdev->extension;10741071 int i, ret;1075107210761076- if (!rtnl_trylock()) {10771077- schedule_work(w);10781078- return;10791079- }10801080-10811081- rdev = nvdev->extension;10821082- if (!rdev)10831083- goto unlock; /* device was removed */10841084-10851085- ndev = rdev->ndev;10861086- ndev_ctx = netdev_priv(ndev);10871087- hv_dev = ndev_ctx->device_ctx;10731073+ ASSERT_RTNL();1088107410891075 memset(init_packet, 0, sizeof(struct nvsp_message));10901076 init_packet->hdr.msg_type = NVSP_MSG5_TYPE_SUBCHANNEL;···10861100 VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED);10871101 if (ret) {10881102 netdev_err(ndev, "sub channel allocate send failed: %d\n", ret);10891089- goto failed;11031103+ return ret;10901104 }1091110510921106 wait_for_completion(&nvdev->channel_init_wait);10931107 if (init_packet->msg.v5_msg.subchn_comp.status != NVSP_STAT_SUCCESS) {10941108 netdev_err(ndev, "sub channel request failed\n");10951095- goto failed;11091109+ return -EIO;10961110 }1097111110981112 nvdev->num_chn = 1 +···11111125 for (i = 0; i < VRSS_SEND_TAB_SIZE; i++)11121126 ndev_ctx->tx_table[i] = i % nvdev->num_chn;1113112711141114- netif_device_attach(ndev);11151115- rtnl_unlock();11161116- return;11171117-11181118-failed:11191119- /* fallback to only primary channel */11201120- for (i = 1; i < nvdev->num_chn; i++)11211121- netif_napi_del(&nvdev->chan_table[i].napi);11221122-11231123- nvdev->max_chn = 1;11241124- nvdev->num_chn = 1;11251125-11261126- netif_device_attach(ndev);11271127-unlock:11281128- rtnl_unlock();11281128+ return 0;11291129}1130113011311131static int rndis_netdev_set_hwcaps(struct rndis_device *rndis_device,···13321360 netif_napi_add(net, &net_device->chan_table[i].napi,13331361 netvsc_poll, NAPI_POLL_WEIGHT);1334136213351335- if (net_device->num_chn > 1)13361336- schedule_work(&net_device->subchan_work);13631363+ return net_device;1337136413381365out:13391339- /* if unavailable, just proceed with one queue */13401340- if (ret) {13411341- net_device->max_chn = 1;13421342- net_device->num_chn = 1;13431343- }13441344-13451345- /* No sub channels, device is ready */13461346- if (net_device->num_chn == 1)13471347- netif_device_attach(net);13481348-13491349- return net_device;13661366+ /* setting up multiple channels failed */13671367+ net_device->max_chn = 1;13681368+ net_device->num_chn = 1;1350136913511370err_dev_remv:13521371 rndis_filter_device_remove(dev, net_device);
+28-8
drivers/net/ipvlan/ipvlan_main.c
···7575{7676 struct ipvl_dev *ipvlan;7777 struct net_device *mdev = port->dev;7878- int err = 0;7878+ unsigned int flags;7979+ int err;79808081 ASSERT_RTNL();8182 if (port->mode != nval) {8383+ list_for_each_entry(ipvlan, &port->ipvlans, pnode) {8484+ flags = ipvlan->dev->flags;8585+ if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S) {8686+ err = dev_change_flags(ipvlan->dev,8787+ flags | IFF_NOARP);8888+ } else {8989+ err = dev_change_flags(ipvlan->dev,9090+ flags & ~IFF_NOARP);9191+ }9292+ if (unlikely(err))9393+ goto fail;9494+ }8295 if (nval == IPVLAN_MODE_L3S) {8396 /* New mode is L3S */8497 err = ipvlan_register_nf_hook(read_pnet(&port->pnet));···9986 mdev->l3mdev_ops = &ipvl_l3mdev_ops;10087 mdev->priv_flags |= IFF_L3MDEV_MASTER;10188 } else102102- return err;8989+ goto fail;10390 } else if (port->mode == IPVLAN_MODE_L3S) {10491 /* Old mode was L3S */10592 mdev->priv_flags &= ~IFF_L3MDEV_MASTER;10693 ipvlan_unregister_nf_hook(read_pnet(&port->pnet));10794 mdev->l3mdev_ops = NULL;10895 }109109- list_for_each_entry(ipvlan, &port->ipvlans, pnode) {110110- if (nval == IPVLAN_MODE_L3 || nval == IPVLAN_MODE_L3S)111111- ipvlan->dev->flags |= IFF_NOARP;112112- else113113- ipvlan->dev->flags &= ~IFF_NOARP;114114- }11596 port->mode = nval;11697 }9898+ return 0;9999+100100+fail:101101+ /* Undo the flags changes that have been done so far. */102102+ list_for_each_entry_continue_reverse(ipvlan, &port->ipvlans, pnode) {103103+ flags = ipvlan->dev->flags;104104+ if (port->mode == IPVLAN_MODE_L3 ||105105+ port->mode == IPVLAN_MODE_L3S)106106+ dev_change_flags(ipvlan->dev, flags | IFF_NOARP);107107+ else108108+ dev_change_flags(ipvlan->dev, flags & ~IFF_NOARP);109109+ }110110+117111 return err;118112}119113
···936936 return cell;937937 }938938939939+ /* NULL cell_id only allowed for device tree; invalid otherwise */940940+ if (!cell_id)941941+ return ERR_PTR(-EINVAL);942942+939943 return nvmem_cell_get_from_list(cell_id);940944}941945EXPORT_SYMBOL_GPL(nvmem_cell_get);
-1
drivers/pci/controller/dwc/Kconfig
···5858 depends on PCI && PCI_MSI_IRQ_DOMAIN5959 select PCIE_DW_HOST6060 select PCIE_DW_PLAT6161- default y6261 help6362 Enables support for the PCIe controller in the Designware IP to6463 work in host mode. There are two instances of PCIe controller in
+2
drivers/pci/controller/pci-ftpci100.c
···355355 irq = of_irq_get(intc, 0);356356 if (irq <= 0) {357357 dev_err(p->dev, "failed to get parent IRQ\n");358358+ of_node_put(intc);358359 return irq ?: -EINVAL;359360 }360361361362 p->irqdomain = irq_domain_add_linear(intc, PCI_NUM_INTX,362363 &faraday_pci_irqdomain_ops, p);364364+ of_node_put(intc);363365 if (!p->irqdomain) {364366 dev_err(p->dev, "failed to create Gemini PCI IRQ domain\n");365367 return -EINVAL;
+13-3
drivers/pci/controller/pcie-rcar.c
···680680 if (err)681681 return err;682682683683- return phy_power_on(pcie->phy);683683+ err = phy_power_on(pcie->phy);684684+ if (err)685685+ phy_exit(pcie->phy);686686+687687+ return err;684688}685689686690static int rcar_msi_alloc(struct rcar_msi *chip)···11691165 if (rcar_pcie_hw_init(pcie)) {11701166 dev_info(dev, "PCIe link down\n");11711167 err = -ENODEV;11721172- goto err_clk_disable;11681168+ goto err_phy_shutdown;11731169 }1174117011751171 data = rcar_pci_read_reg(pcie, MACSR);···11811177 dev_err(dev,11821178 "failed to enable MSI support: %d\n",11831179 err);11841184- goto err_clk_disable;11801180+ goto err_phy_shutdown;11851181 }11861182 }11871183···11941190err_msi_teardown:11951191 if (IS_ENABLED(CONFIG_PCI_MSI))11961192 rcar_pcie_teardown_msi(pcie);11931193+11941194+err_phy_shutdown:11951195+ if (pcie->phy) {11961196+ phy_power_off(pcie->phy);11971197+ phy_exit(pcie->phy);11981198+ }1197119911981200err_clk_disable:11991201 clk_disable_unprepare(pcie->bus_clk);
+1-1
drivers/pci/controller/pcie-xilinx-nwl.c
···559559 PCI_NUM_INTX,560560 &legacy_domain_ops,561561 pcie);562562-562562+ of_node_put(legacy_intc_node);563563 if (!pcie->legacy_irq_domain) {564564 dev_err(dev, "failed to create IRQ domain\n");565565 return -ENOMEM;
+1
drivers/pci/controller/pcie-xilinx.c
···509509 port->leg_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,510510 &intx_domain_ops,511511 port);512512+ of_node_put(pcie_intc_node);512513 if (!port->leg_domain) {513514 dev_err(dev, "Failed to get a INTx IRQ domain\n");514515 return -ENODEV;
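The faraday and Xilinx hunks above all add `of_node_put()` calls: the device-tree child node obtained earlier carries a reference that `irq_domain_add_linear()` does not consume, so it must be dropped on every exit path once the node is no longer needed. A hedged userspace sketch of that get/put balancing, with a toy refcount in place of the kobject-backed one:

```c
/* Minimal get/put refcount model: every *_get() must be matched by
 * a put on every exit path, which is what the added of_node_put()
 * calls restore in the hunks above. */
struct node {
	int refcount;
};

static struct node *node_get(struct node *n)
{
	n->refcount++;
	return n;
}

static void node_put(struct node *n)
{
	n->refcount--;
}

/* Use the node to build something, then drop our reference whether
 * or not the build succeeded (build_ok is a test hook). */
static int setup_domain(struct node *parent, int build_ok)
{
	struct node *intc = node_get(parent);
	int ret = build_ok ? 0 : -1;

	node_put(intc);		/* the fix: put on success and error alike */
	return ret;
}
```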
···575575}576576577577/**578578+ * pci_iov_remove - clean up SR-IOV state after PF driver is detached579579+ * @dev: the PCI device580580+ */581581+void pci_iov_remove(struct pci_dev *dev)582582+{583583+ struct pci_sriov *iov = dev->sriov;584584+585585+ if (!dev->is_physfn)586586+ return;587587+588588+ iov->driver_max_VFs = iov->total_VFs;589589+ if (iov->num_VFs)590590+ pci_warn(dev, "driver left SR-IOV enabled after remove\n");591591+}592592+593593+/**578594 * pci_iov_update_resource - update a VF BAR579595 * @dev: the PCI device580596 * @resno: the resource number
+12
drivers/pci/pci-acpi.c
···629629{630630 struct acpi_device *adev = ACPI_COMPANION(&dev->dev);631631632632+ /*633633+ * In some cases (eg. Samsung 305V4A) leaving a bridge in suspend over634634+ * system-wide suspend/resume confuses the platform firmware, so avoid635635+ * doing that, unless the bridge has a driver that should take care of636636+ * the PM handling. According to Section 16.1.6 of ACPI 6.2, endpoint637637+ * devices are expected to be in D3 before invoking the S3 entry path638638+ * from the firmware, so they should not be affected by this issue.639639+ */640640+ if (pci_is_bridge(dev) && !dev->driver &&641641+ acpi_target_system_state() != ACPI_STATE_S0)642642+ return true;643643+632644 if (!adev || !acpi_device_power_manageable(adev))633645 return false;634646
+1
drivers/pci/pci-driver.c
···445445 }446446 pcibios_free_irq(pci_dev);447447 pci_dev->driver = NULL;448448+ pci_iov_remove(pci_dev);448449 }449450450451 /* Undo the runtime PM settings in local_pci_probe() */
···265265 return err;266266267267 /* full-function RTCs won't have such missing fields */268268- if (rtc_valid_tm(&alarm->time) == 0)268268+ if (rtc_valid_tm(&alarm->time) == 0) {269269+ rtc_add_offset(rtc, &alarm->time);269270 return 0;271271+ }270272271273 /* get the "after" timestamp, to detect wrapped fields */272274 err = rtc_read_time(rtc, &now);···411409 if (err)412410 return err;413411414414- rtc_subtract_offset(rtc, &alarm->time);415412 scheduled = rtc_tm_to_time64(&alarm->time);416413417414 /* Make sure we're not setting alarms in the past */···426425 * the is alarm set for the next second and the second ticks427426 * over right here, before we set the alarm.428427 */428428+429429+ rtc_subtract_offset(rtc, &alarm->time);429430430431 if (!rtc->ops)431432 err = -ENODEV;···470467471468 mutex_unlock(&rtc->ops_lock);472469473473- rtc_add_offset(rtc, &alarm->time);474470 return err;475471}476472EXPORT_SYMBOL_GPL(rtc_set_alarm);
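The rtc-interface change above is about ordering: a valid hardware alarm must have the offset added back before it is returned, and the set path must keep working in wall time (so the "alarm in the past" check is correct) and subtract the offset only just before handing the value to the driver. A simplified sketch of that contract, with illustrative names rather than the kernel's rtc_add_offset()/rtc_subtract_offset():

```c
#include <assert.h>

/* Hypothetical reduction: hardware stores time minus a per-device offset. */
struct rtc {
    long long offset;
    long long hw_time;
};

static long long rtc_read_alarm(const struct rtc *rtc)
{
    /* valid hardware alarm: translate back to wall time on the way out */
    return rtc->hw_time + rtc->offset;       /* rtc_add_offset() */
}

static void rtc_set_alarm(struct rtc *rtc, long long wall_time)
{
    /* callers validate wall_time first; convert at the last moment */
    rtc->hw_time = wall_time - rtc->offset;  /* rtc_subtract_offset() */
}
```

With this ordering, a set followed by a read round-trips to the same wall-clock value.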
+1-3
drivers/rtc/rtc-mrst.c
···367367 }368368369369 retval = rtc_register_device(mrst_rtc.rtc);370370- if (retval) {371371- retval = PTR_ERR(mrst_rtc.rtc);370370+ if (retval)372371 goto cleanup0;373373- }374372375373 dev_dbg(dev, "initialised\n");376374 return 0;
+11-2
drivers/s390/block/dasd.c
···41414242#define DASD_DIAG_MOD "dasd_diag_mod"43434444+static unsigned int queue_depth = 32;4545+static unsigned int nr_hw_queues = 4;4646+4747+module_param(queue_depth, uint, 0444);4848+MODULE_PARM_DESC(queue_depth, "Default queue depth for new DASD devices");4949+5050+module_param(nr_hw_queues, uint, 0444);5151+MODULE_PARM_DESC(nr_hw_queues, "Default number of hardware queues for new DASD devices");5252+4453/*4554 * SECTION: exported variables of dasd.c4655 */···3124311531253116 block->tag_set.ops = &dasd_mq_ops;31263117 block->tag_set.cmd_size = sizeof(struct dasd_ccw_req);31273127- block->tag_set.nr_hw_queues = DASD_NR_HW_QUEUES;31283128- block->tag_set.queue_depth = DASD_MAX_LCU_DEV * DASD_REQ_PER_DEV;31183118+ block->tag_set.nr_hw_queues = nr_hw_queues;31193119+ block->tag_set.queue_depth = queue_depth;31293120 block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;3130312131313122 rc = blk_mq_alloc_tag_set(&block->tag_set);
-8
drivers/s390/block/dasd_int.h
···228228#define DASD_CQR_SUPPRESS_IL 6 /* Suppress 'Incorrect Length' error */229229#define DASD_CQR_SUPPRESS_CR 7 /* Suppress 'Command Reject' error */230230231231-/*232232- * There is no reliable way to determine the number of available CPUs on233233- * LPAR but there is no big performance difference between 1 and the234234- * maximum CPU number.235235- * 64 is a good trade off performance wise.236236- */237237-#define DASD_NR_HW_QUEUES 64238238-#define DASD_MAX_LCU_DEV 256239231#define DASD_REQ_PER_DEV 4240232241233/* Signature for error recovery functions. */
+12-1
drivers/s390/net/qeth_core.h
···829829/*some helper functions*/830830#define QETH_CARD_IFNAME(card) (((card)->dev)? (card)->dev->name : "")831831832832+static inline void qeth_scrub_qdio_buffer(struct qdio_buffer *buf,833833+ unsigned int elements)834834+{835835+ unsigned int i;836836+837837+ for (i = 0; i < elements; i++)838838+ memset(&buf->element[i], 0, sizeof(struct qdio_buffer_element));839839+ buf->element[14].sflags = 0;840840+ buf->element[15].sflags = 0;841841+}842842+832843/**833844 * qeth_get_elements_for_range() - find number of SBALEs to cover range.834845 * @start: Start of the address range.···10401029 __u16, __u16,10411030 enum qeth_prot_versions);10421031int qeth_set_features(struct net_device *, netdev_features_t);10431043-void qeth_recover_features(struct net_device *dev);10321032+void qeth_enable_hw_features(struct net_device *dev);10441033netdev_features_t qeth_fix_features(struct net_device *, netdev_features_t);10451034netdev_features_t qeth_features_check(struct sk_buff *skb,10461035 struct net_device *dev,
+28-19
drivers/s390/net/qeth_core_main.c
···7373 struct qeth_qdio_out_buffer *buf,7474 enum iucv_tx_notify notification);7575static void qeth_release_skbs(struct qeth_qdio_out_buffer *buf);7676-static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,7777- struct qeth_qdio_out_buffer *buf,7878- enum qeth_qdio_buffer_states newbufstate);7976static int qeth_init_qdio_out_buf(struct qeth_qdio_out_q *, int);80778178struct workqueue_struct *qeth_wq;···486489 struct qaob *aob;487490 struct qeth_qdio_out_buffer *buffer;488491 enum iucv_tx_notify notification;492492+ unsigned int i;489493490494 aob = (struct qaob *) phys_to_virt(phys_aob_addr);491495 QETH_CARD_TEXT(card, 5, "haob");···511513 qeth_notify_skbs(buffer->q, buffer, notification);512514513515 buffer->aob = NULL;514514- qeth_clear_output_buffer(buffer->q, buffer,515515- QETH_QDIO_BUF_HANDLED_DELAYED);516516+ /* Free dangling allocations. The attached skbs are handled by517517+ * qeth_cleanup_handled_pending().518518+ */519519+ for (i = 0;520520+ i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);521521+ i++) {522522+ if (aob->sba[i] && buffer->is_header[i])523523+ kmem_cache_free(qeth_core_header_cache,524524+ (void *) aob->sba[i]);525525+ }526526+ atomic_set(&buffer->state, QETH_QDIO_BUF_HANDLED_DELAYED);516527517517- /* from here on: do not touch buffer anymore */518528 qdio_release_aob(aob);519529}520530···37653759 QETH_CARD_TEXT(queue->card, 5, "aob");37663760 QETH_CARD_TEXT_(queue->card, 5, "%lx",37673761 virt_to_phys(buffer->aob));37623762+37633763+ /* prepare the queue slot for re-use: */37643764+ qeth_scrub_qdio_buffer(buffer->buffer,37653765+ QETH_MAX_BUFFER_ELEMENTS(card));37683766 if (qeth_init_qdio_out_buf(queue, bidx)) {37693767 QETH_CARD_TEXT(card, 2, "outofbuf");37703768 qeth_schedule_recovery(card);···48444834 goto out;48454835 }4846483648474847- ccw_device_get_id(CARD_RDEV(card), &id);48374837+ ccw_device_get_id(CARD_DDEV(card), &id);48484838 request->resp_buf_len = sizeof(*response);48494839 request->resp_version = 
DIAG26C_VERSION2;48504840 request->op_code = DIAG26C_GET_MAC;···64696459#define QETH_HW_FEATURES (NETIF_F_RXCSUM | NETIF_F_IP_CSUM | NETIF_F_TSO | \64706460 NETIF_F_IPV6_CSUM)64716461/**64726472- * qeth_recover_features() - Restore device features after recovery64736473- * @dev: the recovering net_device64746474- *64756475- * Caller must hold rtnl lock.64626462+ * qeth_enable_hw_features() - (Re-)Enable HW functions for device features64636463+ * @dev: a net_device64766464 */64776477-void qeth_recover_features(struct net_device *dev)64656465+void qeth_enable_hw_features(struct net_device *dev)64786466{64796479- netdev_features_t features = dev->features;64806467 struct qeth_card *card = dev->ml_priv;64686468+ netdev_features_t features;6481646964706470+ rtnl_lock();64716471+ features = dev->features;64826472 /* force-off any feature that needs an IPA sequence.64836473 * netdev_update_features() will restart them.64846474 */64856475 dev->features &= ~QETH_HW_FEATURES;64866476 netdev_update_features(dev);64876487-64886488- if (features == dev->features)64896489- return;64906490- dev_warn(&card->gdev->dev,64916491- "Device recovery failed to restore all offload features\n");64776477+ if (features != dev->features)64786478+ dev_warn(&card->gdev->dev,64796479+ "Device recovery failed to restore all offload features\n");64806480+ rtnl_unlock();64926481}64936493-EXPORT_SYMBOL_GPL(qeth_recover_features);64826482+EXPORT_SYMBOL_GPL(qeth_enable_hw_features);6494648364956484int qeth_set_features(struct net_device *dev, netdev_features_t features)64966485{
+15-9
drivers/s390/net/qeth_l2_main.c
···140140141141static int qeth_l2_write_mac(struct qeth_card *card, u8 *mac)142142{143143- enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ?143143+ enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ?144144 IPA_CMD_SETGMAC : IPA_CMD_SETVMAC;145145 int rc;146146···157157158158static int qeth_l2_remove_mac(struct qeth_card *card, u8 *mac)159159{160160- enum qeth_ipa_cmds cmd = is_multicast_ether_addr_64bits(mac) ?160160+ enum qeth_ipa_cmds cmd = is_multicast_ether_addr(mac) ?161161 IPA_CMD_DELGMAC : IPA_CMD_DELVMAC;162162 int rc;163163···501501 return -ERESTARTSYS;502502 }503503504504+ /* avoid racing against concurrent state change: */505505+ if (!mutex_trylock(&card->conf_mutex))506506+ return -EAGAIN;507507+504508 if (!qeth_card_hw_is_reachable(card)) {505509 ether_addr_copy(dev->dev_addr, addr->sa_data);506506- return 0;510510+ goto out_unlock;507511 }508512509513 /* don't register the same address twice */510514 if (ether_addr_equal_64bits(dev->dev_addr, addr->sa_data) &&511515 (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED))512512- return 0;516516+ goto out_unlock;513517514518 /* add the new address, switch over, drop the old */515519 rc = qeth_l2_send_setmac(card, addr->sa_data);516520 if (rc)517517- return rc;521521+ goto out_unlock;518522 ether_addr_copy(old_addr, dev->dev_addr);519523 ether_addr_copy(dev->dev_addr, addr->sa_data);520524521525 if (card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)522526 qeth_l2_remove_mac(card, old_addr);523527 card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED;524524- return 0;528528+529529+out_unlock:530530+ mutex_unlock(&card->conf_mutex);531531+ return rc;525532}526533527534static void qeth_promisc_to_bridge(struct qeth_card *card)···11191112 netif_carrier_off(card->dev);1120111311211114 qeth_set_allowed_threads(card, 0xffffffff, 0);11151115+11161116+ qeth_enable_hw_features(card->dev);11221117 if (recover_flag == CARD_STATE_RECOVER) {11231118 if (recovery_mode &&11241119 card->info.type != 
QETH_CARD_TYPE_OSN) {···11321123 }11331124 /* this also sets saved unicast addresses */11341125 qeth_l2_set_rx_mode(card->dev);11351135- rtnl_lock();11361136- qeth_recover_features(card->dev);11371137- rtnl_unlock();11381126 }11391127 /* let user_space know that device is online */11401128 kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
+2-1
drivers/s390/net/qeth_l3_main.c
···26622662 netif_carrier_on(card->dev);26632663 else26642664 netif_carrier_off(card->dev);26652665+26662666+ qeth_enable_hw_features(card->dev);26652667 if (recover_flag == CARD_STATE_RECOVER) {26662668 rtnl_lock();26672669 if (recovery_mode)···26712669 else26722670 dev_open(card->dev);26732671 qeth_l3_set_rx_mode(card->dev);26742674- qeth_recover_features(card->dev);26752672 rtnl_unlock();26762673 }26772674 qeth_trace_features(card);
···5151#include <linux/atomic.h>5252#include <linux/ratelimit.h>5353#include <linux/uio.h>5454+#include <linux/cred.h> /* for sg_check_file_access() */54555556#include "scsi.h"5657#include <scsi/scsi_dbg.h>···209208#define sg_printk(prefix, sdp, fmt, a...) \210209 sdev_prefix_printk(prefix, (sdp)->device, \211210 (sdp)->disk->disk_name, fmt, ##a)211211+212212+/*213213+ * The SCSI interfaces that use read() and write() as an asynchronous variant of214214+ * ioctl(..., SG_IO, ...) are fundamentally unsafe, since there are lots of ways215215+ * to trigger read() and write() calls from various contexts with elevated216216+ * privileges. This can lead to kernel memory corruption (e.g. if these217217+ * interfaces are called through splice()) and privilege escalation inside218218+ * userspace (e.g. if a process with access to such a device passes a file219219+ * descriptor to a SUID binary as stdin/stdout/stderr).220220+ *221221+ * This function provides protection for the legacy API by restricting the222222+ * calling context.223223+ */224224+static int sg_check_file_access(struct file *filp, const char *caller)225225+{226226+ if (filp->f_cred != current_real_cred()) {227227+ pr_err_once("%s: process %d (%s) changed security contexts after opening file descriptor, this is not allowed.\n",228228+ caller, task_tgid_vnr(current), current->comm);229229+ return -EPERM;230230+ }231231+ if (uaccess_kernel()) {232232+ pr_err_once("%s: process %d (%s) called from kernel context, this is not allowed.\n",233233+ caller, task_tgid_vnr(current), current->comm);234234+ return -EACCES;235235+ }236236+ return 0;237237+}212238213239static int sg_allow_access(struct file *filp, unsigned char *cmd)214240{···420392 sg_io_hdr_t *hp;421393 struct sg_header *old_hdr = NULL;422394 int retval = 0;395395+396396+ /*397397+ * This could cause a response to be stranded. 
Close the associated398398+ * file descriptor to free up any resources being held.399399+ */400400+ retval = sg_check_file_access(filp, __func__);401401+ if (retval)402402+ return retval;423403424404 if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))425405 return -ENXIO;···616580 struct sg_header old_hdr;617581 sg_io_hdr_t *hp;618582 unsigned char cmnd[SG_MAX_CDB_SIZE];583583+ int retval;619584620620- if (unlikely(uaccess_kernel()))621621- return -EINVAL;585585+ retval = sg_check_file_access(filp, __func__);586586+ if (retval)587587+ return retval;622588623589 if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))624590 return -ENXIO;
···37273727 * Check for overflow of 8byte PRI READ_KEYS payload and37283728 * next reservation key list descriptor.37293729 */37303730- if ((add_len + 8) > (cmd->data_length - 8))37313731- break;37323732-37333733- put_unaligned_be64(pr_reg->pr_res_key, &buf[off]);37343734- off += 8;37303730+ if (off + 8 <= cmd->data_length) {37313731+ put_unaligned_be64(pr_reg->pr_res_key, &buf[off]);37323732+ off += 8;37333733+ }37343734+ /*37353735+ * SPC5r17: 6.16.2 READ KEYS service action37363736+ * The ADDITIONAL LENGTH field indicates the number of bytes in37373737+ * the Reservation key list. The contents of the ADDITIONAL37383738+ * LENGTH field are not altered based on the allocation length37393739+ */37353740 add_len += 8;37363741 }37373742 spin_unlock(&dev->t10_pr.registration_lock);
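The READ KEYS fix above separates two concerns the old code conflated: keys are copied only while they fit in the allocation length, but ADDITIONAL LENGTH keeps counting the full list, as SPC5r17 6.16.2 requires. A simplified sketch of the fixed loop, assuming an 8-byte header before the key list as in the target code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Returns the ADDITIONAL LENGTH value; copies at most buf_len - 8 key
 * bytes starting at offset 8. Truncation no longer stops the count. */
static uint32_t read_keys(const uint64_t *keys, unsigned int nkeys,
                          unsigned char *buf, unsigned int buf_len)
{
    uint32_t add_len = 0;
    unsigned int off = 8, i;

    for (i = 0; i < nkeys; i++) {
        if (off + 8 <= buf_len) {   /* overflow-safe bound check */
            memcpy(&buf[off], &keys[i], 8);
            off += 8;
        }
        add_len += 8;               /* counted even when truncated */
    }
    return add_len;
}
```

The initiator can then compare ADDITIONAL LENGTH against its allocation length and retry with a larger buffer.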
+4
drivers/thunderbolt/domain.c
···213213 goto err_free_acl;214214 }215215 ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl);216216+ if (!ret) {217217+ /* Notify userspace about the change */218218+ kobject_uevent(&tb->dev.kobj, KOBJ_CHANGE);219219+ }216220 mutex_unlock(&tb->lock);217221218222err_free_acl:
+103-36
drivers/uio/uio.c
···215215 struct device_attribute *attr, char *buf)216216{217217 struct uio_device *idev = dev_get_drvdata(dev);218218- return sprintf(buf, "%s\n", idev->info->name);218218+ int ret;219219+220220+ mutex_lock(&idev->info_lock);221221+ if (!idev->info) {222222+ ret = -EINVAL;223223+ dev_err(dev, "the device has been unregistered\n");224224+ goto out;225225+ }226226+227227+ ret = sprintf(buf, "%s\n", idev->info->name);228228+229229+out:230230+ mutex_unlock(&idev->info_lock);231231+ return ret;219232}220233static DEVICE_ATTR_RO(name);221234···236223 struct device_attribute *attr, char *buf)237224{238225 struct uio_device *idev = dev_get_drvdata(dev);239239- return sprintf(buf, "%s\n", idev->info->version);226226+ int ret;227227+228228+ mutex_lock(&idev->info_lock);229229+ if (!idev->info) {230230+ ret = -EINVAL;231231+ dev_err(dev, "the device has been unregistered\n");232232+ goto out;233233+ }234234+235235+ ret = sprintf(buf, "%s\n", idev->info->version);236236+237237+out:238238+ mutex_unlock(&idev->info_lock);239239+ return ret;240240}241241static DEVICE_ATTR_RO(version);242242···441415static irqreturn_t uio_interrupt(int irq, void *dev_id)442416{443417 struct uio_device *idev = (struct uio_device *)dev_id;444444- irqreturn_t ret = idev->info->handler(irq, idev->info);418418+ irqreturn_t ret;445419420420+ mutex_lock(&idev->info_lock);421421+422422+ ret = idev->info->handler(irq, idev->info);446423 if (ret == IRQ_HANDLED)447424 uio_event_notify(idev->info);448425426426+ mutex_unlock(&idev->info_lock);449427 return ret;450428}451429···463433 struct uio_device *idev;464434 struct uio_listener *listener;465435 int ret = 0;466466- unsigned long flags;467436468437 mutex_lock(&minor_lock);469438 idev = idr_find(&uio_idr, iminor(inode));···489460 listener->event_count = atomic_read(&idev->event);490461 filep->private_data = listener;491462492492- spin_lock_irqsave(&idev->info_lock, flags);463463+ mutex_lock(&idev->info_lock);464464+ if (!idev->info) {465465+ 
mutex_unlock(&idev->info_lock);466466+ ret = -EINVAL;467467+ goto err_alloc_listener;468468+ }469469+493470 if (idev->info && idev->info->open)494471 ret = idev->info->open(idev->info, inode);495495- spin_unlock_irqrestore(&idev->info_lock, flags);472472+ mutex_unlock(&idev->info_lock);496473 if (ret)497474 goto err_infoopen;498475···530495 int ret = 0;531496 struct uio_listener *listener = filep->private_data;532497 struct uio_device *idev = listener->dev;533533- unsigned long flags;534498535535- spin_lock_irqsave(&idev->info_lock, flags);499499+ mutex_lock(&idev->info_lock);536500 if (idev->info && idev->info->release)537501 ret = idev->info->release(idev->info, inode);538538- spin_unlock_irqrestore(&idev->info_lock, flags);502502+ mutex_unlock(&idev->info_lock);539503540504 module_put(idev->owner);541505 kfree(listener);···547513 struct uio_listener *listener = filep->private_data;548514 struct uio_device *idev = listener->dev;549515 __poll_t ret = 0;550550- unsigned long flags;551516552552- spin_lock_irqsave(&idev->info_lock, flags);517517+ mutex_lock(&idev->info_lock);553518 if (!idev->info || !idev->info->irq)554519 ret = -EIO;555555- spin_unlock_irqrestore(&idev->info_lock, flags);520520+ mutex_unlock(&idev->info_lock);556521557522 if (ret)558523 return ret;···570537 DECLARE_WAITQUEUE(wait, current);571538 ssize_t retval = 0;572539 s32 event_count;573573- unsigned long flags;574540575575- spin_lock_irqsave(&idev->info_lock, flags);541541+ mutex_lock(&idev->info_lock);576542 if (!idev->info || !idev->info->irq)577543 retval = -EIO;578578- spin_unlock_irqrestore(&idev->info_lock, flags);544544+ mutex_unlock(&idev->info_lock);579545580546 if (retval)581547 return retval;···624592 struct uio_device *idev = listener->dev;625593 ssize_t retval;626594 s32 irq_on;627627- unsigned long flags;628595629629- spin_lock_irqsave(&idev->info_lock, flags);596596+ mutex_lock(&idev->info_lock);597597+ if (!idev->info) {598598+ retval = -EINVAL;599599+ goto out;600600+ 
}601601+630602 if (!idev->info || !idev->info->irq) {631603 retval = -EIO;632604 goto out;···654618 retval = idev->info->irqcontrol(idev->info, irq_on);655619656620out:657657- spin_unlock_irqrestore(&idev->info_lock, flags);621621+ mutex_unlock(&idev->info_lock);658622 return retval ? retval : sizeof(s32);659623}660624···676640 struct page *page;677641 unsigned long offset;678642 void *addr;643643+ int ret = 0;644644+ int mi;679645680680- int mi = uio_find_mem_index(vmf->vma);681681- if (mi < 0)682682- return VM_FAULT_SIGBUS;646646+ mutex_lock(&idev->info_lock);647647+ if (!idev->info) {648648+ ret = VM_FAULT_SIGBUS;649649+ goto out;650650+ }651651+652652+ mi = uio_find_mem_index(vmf->vma);653653+ if (mi < 0) {654654+ ret = VM_FAULT_SIGBUS;655655+ goto out;656656+ }683657684658 /*685659 * We need to subtract mi because userspace uses offset = N*PAGE_SIZE···704658 page = vmalloc_to_page(addr);705659 get_page(page);706660 vmf->page = page;707707- return 0;661661+662662+out:663663+ mutex_unlock(&idev->info_lock);664664+665665+ return ret;708666}709667710668static const struct vm_operations_struct uio_logical_vm_ops = {···733683 struct uio_device *idev = vma->vm_private_data;734684 int mi = uio_find_mem_index(vma);735685 struct uio_mem *mem;686686+736687 if (mi < 0)737688 return -EINVAL;738689 mem = idev->info->mem + mi;···775724776725 vma->vm_private_data = idev;777726727727+ mutex_lock(&idev->info_lock);728728+ if (!idev->info) {729729+ ret = -EINVAL;730730+ goto out;731731+ }732732+778733 mi = uio_find_mem_index(vma);779779- if (mi < 0)780780- return -EINVAL;734734+ if (mi < 0) {735735+ ret = -EINVAL;736736+ goto out;737737+ }781738782739 requested_pages = vma_pages(vma);783740 actual_pages = ((idev->info->mem[mi].addr & ~PAGE_MASK)784741 + idev->info->mem[mi].size + PAGE_SIZE -1) >> PAGE_SHIFT;785785- if (requested_pages > actual_pages)786786- return -EINVAL;742742+ if (requested_pages > actual_pages) {743743+ ret = -EINVAL;744744+ goto out;745745+ }787746788747 if 
(idev->info->mmap) {789748 ret = idev->info->mmap(idev->info, vma);790790- return ret;749749+ goto out;791750 }792751793752 switch (idev->info->mem[mi].memtype) {794753 case UIO_MEM_PHYS:795795- return uio_mmap_physical(vma);754754+ ret = uio_mmap_physical(vma);755755+ break;796756 case UIO_MEM_LOGICAL:797757 case UIO_MEM_VIRTUAL:798798- return uio_mmap_logical(vma);758758+ ret = uio_mmap_logical(vma);759759+ break;799760 default:800800- return -EINVAL;761761+ ret = -EINVAL;801762 }763763+764764+out:765765+ mutex_unlock(&idev->info_lock);766766+ return 0;802767}803768804769static const struct file_operations uio_fops = {···932865933866 idev->owner = owner;934867 idev->info = info;935935- spin_lock_init(&idev->info_lock);868868+ mutex_init(&idev->info_lock);936869 init_waitqueue_head(&idev->wait);937870 atomic_set(&idev->event, 0);938871···969902 * FDs at the time of unregister and therefore may not be970903 * freed until they are released.971904 */972972- ret = request_irq(info->irq, uio_interrupt,973973- info->irq_flags, info->name, idev);905905+ ret = request_threaded_irq(info->irq, NULL, uio_interrupt,906906+ info->irq_flags, info->name, idev);907907+974908 if (ret)975909 goto err_request_irq;976910 }···996928void uio_unregister_device(struct uio_info *info)997929{998930 struct uio_device *idev;999999- unsigned long flags;10009311001932 if (!info || !info->uio_dev)1002933 return;···10049371005938 uio_free_minor(idev);1006939940940+ mutex_lock(&idev->info_lock);1007941 uio_dev_del_attributes(idev);10089421009943 if (info->irq && info->irq != UIO_IRQ_CUSTOM)1010944 free_irq(info->irq, idev);101194510121012- spin_lock_irqsave(&idev->info_lock, flags);1013946 idev->info = NULL;10141014- spin_unlock_irqrestore(&idev->info_lock, flags);947947+ mutex_unlock(&idev->info_lock);10159481016949 device_unregister(&idev->dev);1017950
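The uio rework above converts the info_lock from a spinlock to a mutex (the open/release/mmap paths may sleep) and makes every entry point re-check `idev->info` under the lock, since uio_unregister_device() now NULLs it. A minimal user-space sketch of that check-under-lock pattern, with pthread primitives standing in for the kernel mutex API and a hypothetical reduction of uio_device:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct dev {
    pthread_mutex_t info_lock;
    const char *info;            /* NULL once unregistered */
};

/* Every accessor validates info under the lock, as the sysfs
 * show functions in the patch now do. */
static int dev_name(struct dev *d, const char **out)
{
    int ret = 0;

    pthread_mutex_lock(&d->info_lock);
    if (!d->info)
        ret = -1;                /* -EINVAL in the driver */
    else
        *out = d->info;
    pthread_mutex_unlock(&d->info_lock);
    return ret;
}

static void dev_unregister(struct dev *d)
{
    pthread_mutex_lock(&d->info_lock);
    d->info = NULL;              /* later calls fail instead of crashing */
    pthread_mutex_unlock(&d->info_lock);
}
```

Because a mutex cannot be taken from hard-IRQ context, the patch also switches to request_threaded_irq(), so the handler runs where sleeping is allowed.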
···22config USB_ASPEED_VHUB33 tristate "Aspeed vHub UDC driver"44 depends on ARCH_ASPEED || COMPILE_TEST55+ depends on USB_LIBCOMPOSITE56 help67 USB peripheral controller for the Aspeed AST2500 family78 SoCs supporting the "vHub" functionality and USB2.0
+8-4
drivers/usb/host/xhci-dbgcap.c
···508508 return 0;509509}510510511511-static void xhci_do_dbc_stop(struct xhci_hcd *xhci)511511+static int xhci_do_dbc_stop(struct xhci_hcd *xhci)512512{513513 struct xhci_dbc *dbc = xhci->dbc;514514515515 if (dbc->state == DS_DISABLED)516516- return;516516+ return -1;517517518518 writel(0, &dbc->regs->control);519519 xhci_dbc_mem_cleanup(xhci);520520 dbc->state = DS_DISABLED;521521+522522+ return 0;521523}522524523525static int xhci_dbc_start(struct xhci_hcd *xhci)···546544547545static void xhci_dbc_stop(struct xhci_hcd *xhci)548546{547547+ int ret;549548 unsigned long flags;550549 struct xhci_dbc *dbc = xhci->dbc;551550 struct dbc_port *port = &dbc->port;···559556 xhci_dbc_tty_unregister_device(xhci);560557561558 spin_lock_irqsave(&dbc->lock, flags);562562- xhci_do_dbc_stop(xhci);559559+ ret = xhci_do_dbc_stop(xhci);563560 spin_unlock_irqrestore(&dbc->lock, flags);564561565565- pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);562562+ if (!ret)563563+ pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);566564}567565568566static void
+1-1
drivers/usb/host/xhci-mem.c
···595595 if (!ep->stream_info)596596 return NULL;597597598598- if (stream_id > ep->stream_info->num_streams)598598+ if (stream_id >= ep->stream_info->num_streams)599599 return NULL;600600 return ep->stream_info->stream_rings[stream_id];601601}
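The one-character xhci-mem change above fixes a classic off-by-one: for an array of `num_streams` rings the valid indices are `0 .. num_streams - 1`, so the reject test must be `>=`, not `>`. A minimal illustration (hypothetical lookup helper, not the xhci function):

```c
#include <assert.h>
#include <stddef.h>

/* Returns NULL for any out-of-range id, including id == num,
 * which the old `id > num` test wrongly let through. */
static const int *lookup(const int *rings, size_t num, size_t id)
{
    if (id >= num)
        return NULL;
    return &rings[id];
}
```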
···2828 def_bool y if !S39029293030config VFIO_PCI_IGD3131- depends on VFIO_PCI3232- def_bool y if X863131+ bool "VFIO PCI extensions for Intel graphics (GVT-d)"3232+ depends on VFIO_PCI && X863333+ default y3434+ help3535+ Support for Intel IGD specific extensions to enable direct3636+ assignment to virtual machines. This includes exposing an IGD3737+ specific firmware table and read-only copies of the host bridge3838+ and LPC bridge config space.3939+4040+ To enable Intel IGD assignment through vfio-pci, say Y.
+7-9
drivers/vfio/vfio_iommu_type1.c
···343343 struct page *page[1];344344 struct vm_area_struct *vma;345345 struct vm_area_struct *vmas[1];346346+ unsigned int flags = 0;346347 int ret;347348349349+ if (prot & IOMMU_WRITE)350350+ flags |= FOLL_WRITE;351351+352352+ down_read(&mm->mmap_sem);348353 if (mm == current->mm) {349349- ret = get_user_pages_longterm(vaddr, 1, !!(prot & IOMMU_WRITE),350350- page, vmas);354354+ ret = get_user_pages_longterm(vaddr, 1, flags, page, vmas);351355 } else {352352- unsigned int flags = 0;353353-354354- if (prot & IOMMU_WRITE)355355- flags |= FOLL_WRITE;356356-357357- down_read(&mm->mmap_sem);358356 ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,359357 vmas, NULL);360358 /*···366368 ret = -EOPNOTSUPP;367369 put_page(page[0]);368370 }369369- up_read(&mm->mmap_sem);370371 }372372+ up_read(&mm->mmap_sem);371373372374 if (ret == 1) {373375 *pfn = page_to_pfn(page[0]);
+2-2
fs/autofs/Makefile
···22# Makefile for the linux autofs-filesystem routines.33#4455-obj-$(CONFIG_AUTOFS_FS) += autofs.o55+obj-$(CONFIG_AUTOFS_FS) += autofs4.o6677-autofs-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o77+autofs4-objs := init.o inode.o root.o symlink.o waitq.o expire.o dev-ioctl.o
+13-9
fs/autofs/dev-ioctl.c
···135135 cmd);136136 goto out;137137 }138138+ } else {139139+ unsigned int inr = _IOC_NR(cmd);140140+141141+ if (inr == AUTOFS_DEV_IOCTL_OPENMOUNT_CMD ||142142+ inr == AUTOFS_DEV_IOCTL_REQUESTER_CMD ||143143+ inr == AUTOFS_DEV_IOCTL_ISMOUNTPOINT_CMD) {144144+ err = -EINVAL;145145+ goto out;146146+ }138147 }139148140149 err = 0;···280271 dev_t devid;281272 int err, fd;282273283283- /* param->path has already been checked */274274+ /* param->path has been checked in validate_dev_ioctl() */275275+284276 if (!param->openmount.devid)285277 return -EINVAL;286278···443433 dev_t devid;444434 int err = -ENOENT;445435446446- if (param->size <= AUTOFS_DEV_IOCTL_SIZE) {447447- err = -EINVAL;448448- goto out;449449- }436436+ /* param->path has been checked in validate_dev_ioctl() */450437451438 devid = sbi->sb->s_dev;452439···528521 unsigned int devid, magic;529522 int err = -ENOENT;530523531531- if (param->size <= AUTOFS_DEV_IOCTL_SIZE) {532532- err = -EINVAL;533533- goto out;534534- }524524+ /* param->path has been checked in validate_dev_ioctl() */535525536526 name = param->path;537527 type = param->ismountpoint.in.type;
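The autofs change above moves validation to the front door: the three ioctls that dereference `param->path` are rejected with -EINVAL in validate_dev_ioctl() when no path was supplied, so the individual handlers no longer need (or carry) their own size checks. A sketch of that dispatch-time deny-list, with illustrative command values rather than the real _IOC_NR numbers:

```c
#include <assert.h>

enum cmd {
    CMD_VERSION,       /* no path needed */
    CMD_OPENMOUNT,     /* these three dereference the path... */
    CMD_REQUESTER,
    CMD_ISMOUNTPOINT,
};

/* Reject path-requiring commands before any handler runs, as the
 * added else-branch in validate_dev_ioctl() does. */
static int validate(enum cmd cmd, int has_path)
{
    if (!has_path &&
        (cmd == CMD_OPENMOUNT || cmd == CMD_REQUESTER ||
         cmd == CMD_ISMOUNTPOINT))
        return -1;     /* -EINVAL */
    return 0;
}
```

Centralizing the check also lets the handlers replace duplicated size tests with a one-line comment, as the hunks above show.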
+1-1
fs/autofs/init.c
···2323 .kill_sb = autofs_kill_sb,2424};2525MODULE_ALIAS_FS("autofs");2626-MODULE_ALIAS("autofs4");2626+MODULE_ALIAS("autofs");27272828static int __init init_autofs_fs(void)2929{
···423423 void (*set_oplock_level)(struct cifsInodeInfo *, __u32, unsigned int,424424 bool *);425425 /* create lease context buffer for CREATE request */426426- char * (*create_lease_buf)(u8 *, u8);426426+ char * (*create_lease_buf)(u8 *lease_key, u8 oplock);427427 /* parse lease context buffer and return oplock/epoch info */428428 __u8 (*parse_lease_buf)(void *buf, unsigned int *epoch, char *lkey);429429 ssize_t (*copychunk_range)(const unsigned int,···14161416/* one of these for every pending CIFS request to the server */14171417struct mid_q_entry {14181418 struct list_head qhead; /* mids waiting on reply from this server */14191419+ struct kref refcount;14191420 struct TCP_Server_Info *server; /* server corresponding to this mid */14201421 __u64 mid; /* multiplex id */14211422 __u32 pid; /* process id */
···157157 * greater than cifs socket timeout which is 7 seconds158158 */159159 while (server->tcpStatus == CifsNeedReconnect) {160160- wait_event_interruptible_timeout(server->response_q,161161- (server->tcpStatus != CifsNeedReconnect), 10 * HZ);160160+ rc = wait_event_interruptible_timeout(server->response_q,161161+ (server->tcpStatus != CifsNeedReconnect),162162+ 10 * HZ);163163+ if (rc < 0) {164164+ cifs_dbg(FYI, "%s: aborting reconnect due to a received"165165+ " signal by the process\n", __func__);166166+ return -ERESTARTSYS;167167+ }162168163169 /* are we still trying to reconnect? */164170 if (server->tcpStatus != CifsNeedReconnect)
+7-1
fs/cifs/connect.c
···924924 server->pdu_size = next_offset;925925 }926926927927+ mid_entry = NULL;927928 if (server->ops->is_transform_hdr &&928929 server->ops->receive_transform &&929930 server->ops->is_transform_hdr(buf)) {···939938 length = mid_entry->receive(server, mid_entry);940939 }941940942942- if (length < 0)941941+ if (length < 0) {942942+ if (mid_entry)943943+ cifs_mid_q_entry_release(mid_entry);943944 continue;945945+ }944946945947 if (server->large_buf)946948 buf = server->bigbuf;···960956961957 if (!mid_entry->multiRsp || mid_entry->multiEnd)962958 mid_entry->callback(mid_entry);959959+960960+ cifs_mid_q_entry_release(mid_entry);963961 } else if (server->ops->is_oplock_break &&964962 server->ops->is_oplock_break(buf, server)) {965963 cifs_dbg(FYI, "Received oplock break\n");
···869869870870 eh = ext_inode_hdr(inode);871871 depth = ext_depth(inode);872872+ if (depth < 0 || depth > EXT4_MAX_EXTENT_DEPTH) {873873+ EXT4_ERROR_INODE(inode, "inode has invalid extent depth: %d",874874+ depth);875875+ ret = -EFSCORRUPTED;876876+ goto err;877877+ }872878873879 if (path) {874880 ext4_ext_drop_refs(path);
+12-2
fs/ext4/ialloc.c
···150150 }151151152152 ext4_lock_group(sb, block_group);153153- if (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)) {153153+ if (ext4_has_group_desc_csum(sb) &&154154+ (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT))) {155155+ if (block_group == 0) {156156+ ext4_unlock_group(sb, block_group);157157+ unlock_buffer(bh);158158+ ext4_error(sb, "Inode bitmap for bg 0 marked "159159+ "uninitialized");160160+ err = -EFSCORRUPTED;161161+ goto out;162162+ }154163 memset(bh->b_data, 0, (EXT4_INODES_PER_GROUP(sb) + 7) / 8);155164 ext4_mark_bitmap_end(EXT4_INODES_PER_GROUP(sb),156165 sb->s_blocksize * 8, bh->b_data);···10039941004995 /* recheck and clear flag under lock if we still need to */1005996 ext4_lock_group(sb, group);10061006- if (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {997997+ if (ext4_has_group_desc_csum(sb) &&998998+ (gdp->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT))) {1007999 gdp->bg_flags &= cpu_to_le16(~EXT4_BG_BLOCK_UNINIT);10081000 ext4_free_group_clusters_set(sb, gdp,10091001 ext4_free_clusters_after_init(sb, group, gdp));
+2-37
fs/ext4/inline.c
···437437438438 memset((void *)ext4_raw_inode(&is.iloc)->i_block,439439 0, EXT4_MIN_INLINE_DATA_SIZE);440440+ memset(ei->i_data, 0, EXT4_MIN_INLINE_DATA_SIZE);440441441442 if (ext4_has_feature_extents(inode->i_sb)) {442443 if (S_ISDIR(inode->i_mode) ||···887886 flags |= AOP_FLAG_NOFS;888887889888 if (ret == -ENOSPC) {889889+ ext4_journal_stop(handle);890890 ret = ext4_da_convert_inline_data_to_extent(mapping,891891 inode,892892 flags,893893 fsdata);894894- ext4_journal_stop(handle);895894 if (ret == -ENOSPC &&896895 ext4_should_retry_alloc(inode->i_sb, &retries))897896 goto retry_journal;···18891888out:18901889 up_read(&EXT4_I(inode)->xattr_sem);18911890 return (error < 0 ? error : 0);18921892-}18931893-18941894-/*18951895- * Called during xattr set, and if we can sparse space 'needed',18961896- * just create the extent tree evict the data to the outer block.18971897- *18981898- * We use jbd2 instead of page cache to move data to the 1st block18991899- * so that the whole transaction can be committed as a whole and19001900- * the data isn't lost because of the delayed page cache write.19011901- */19021902-int ext4_try_to_evict_inline_data(handle_t *handle,19031903- struct inode *inode,19041904- int needed)19051905-{19061906- int error;19071907- struct ext4_xattr_entry *entry;19081908- struct ext4_inode *raw_inode;19091909- struct ext4_iloc iloc;19101910-19111911- error = ext4_get_inode_loc(inode, &iloc);19121912- if (error)19131913- return error;19141914-19151915- raw_inode = ext4_raw_inode(&iloc);19161916- entry = (struct ext4_xattr_entry *)((void *)raw_inode +19171917- EXT4_I(inode)->i_inline_off);19181918- if (EXT4_XATTR_LEN(entry->e_name_len) +19191919- EXT4_XATTR_SIZE(le32_to_cpu(entry->e_value_size)) < needed) {19201920- error = -ENOSPC;19211921- goto out;19221922- }19231923-19241924- error = ext4_convert_inline_data_nolock(handle, inode, &iloc);19251925-out:19261926- brelse(iloc.bh);19271927- return error;19281891}1929189219301893int 
ext4_inline_data_truncate(struct inode *inode, int *has_inline)
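One of the inline.c hunks moves `ext4_journal_stop()` ahead of the ENOSPC fallback: the fallback path may start and wait on a transaction of its own, so the running handle must be released first. A toy sketch of that ordering (the counters and `-28`/ENOSPC stand-in are hypothetical, not the jbd2 API):

```c
#include <assert.h>

static int handles_held;                 /* stand-in for journal state */

static void journal_start(void) { handles_held++; }
static void journal_stop(void)  { handles_held--; }

/* The fallback starts its own transaction; doing so while the old
 * handle is still outstanding is the hazard the patch removes. */
static int convert_fallback(void)
{
    if (handles_held)
        return -1;                       /* would deadlock / misnest */
    journal_start();
    journal_stop();
    return 0;
}

static int write_begin_fixed(void)
{
    int ret = -28;                       /* pretend the first try hit ENOSPC */

    journal_start();
    if (ret == -28) {
        journal_stop();                  /* stop BEFORE the fallback */
        ret = convert_fallback();
    }
    return ret;
}
```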
···222222 unsigned long reason)223223{224224 struct mm_struct *mm = ctx->mm;225225- pte_t *pte;225225+ pte_t *ptep, pte;226226 bool ret = true;227227228228 VM_BUG_ON(!rwsem_is_locked(&mm->mmap_sem));229229230230- pte = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));231231- if (!pte)230230+ ptep = huge_pte_offset(mm, address, vma_mmu_pagesize(vma));231231+232232+ if (!ptep)232233 goto out;233234234235 ret = false;236236+ pte = huge_ptep_get(ptep);235237236238 /*237239 * Lockless access: we're in a wait_event so it's ok if it238240 * changes under us.239241 */240240- if (huge_pte_none(*pte))242242+ if (huge_pte_none(pte))241243 ret = true;242242- if (!huge_pte_write(*pte) && (reason & VM_UFFD_WP))244244+ if (!huge_pte_write(pte) && (reason & VM_UFFD_WP))243245 ret = true;244246out:245247 return ret;
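The userfaultfd hunk above replaces two dereferences of a live page-table slot with one `huge_ptep_get()` snapshot, so both lockless tests see the same value even if the entry changes concurrently. A user-space sketch of that pattern (the `pte_t`, `classify`, and `PTE_WRITE` names are simplified stand-ins):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t pte_t;

#define PTE_WRITE 0x2

/* All decisions are made against one snapshot value. */
static int classify(pte_t pte, int wp_fault)
{
    if (pte == 0)                        /* huge_pte_none() */
        return 1;
    if (wp_fault && !(pte & PTE_WRITE))  /* !huge_pte_write() */
        return 1;
    return 0;
}

/* One single-copy read of the live slot, then every test on the copy,
 * instead of dereferencing *ptep once per test. */
static int should_wake(const volatile pte_t *ptep, int wp_fault)
{
    pte_t pte = *ptep;                   /* huge_ptep_get() stand-in */

    return classify(pte, wp_fault);
}

static pte_t slot = PTE_WRITE;           /* a present, writable "mapping" */
```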
+8
include/asm-generic/tlb.h
···265265 * For now w.r.t page table cache, mark the range_size as PAGE_SIZE266266 */267267268268+#ifndef pte_free_tlb268269#define pte_free_tlb(tlb, ptep, address) \269270 do { \270271 __tlb_adjust_range(tlb, address, PAGE_SIZE); \271272 __pte_free_tlb(tlb, ptep, address); \272273 } while (0)274274+#endif273275276276+#ifndef pmd_free_tlb274277#define pmd_free_tlb(tlb, pmdp, address) \275278 do { \276279 __tlb_adjust_range(tlb, address, PAGE_SIZE); \277280 __pmd_free_tlb(tlb, pmdp, address); \278281 } while (0)282282+#endif279283280284#ifndef __ARCH_HAS_4LEVEL_HACK285285+#ifndef pud_free_tlb281286#define pud_free_tlb(tlb, pudp, address) \282287 do { \283288 __tlb_adjust_range(tlb, address, PAGE_SIZE); \284289 __pud_free_tlb(tlb, pudp, address); \285290 } while (0)286291#endif292292+#endif287293288294#ifndef __ARCH_HAS_5LEVEL_HACK295295+#ifndef p4d_free_tlb289296#define p4d_free_tlb(tlb, pudp, address) \290297 do { \291298 __tlb_adjust_range(tlb, address, PAGE_SIZE); \292299 __p4d_free_tlb(tlb, pudp, address); \293300 } while (0)301301+#endif294302#endif295303296304#define tlb_migrate_finish(mm) do {} while (0)
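The tlb.h hunk wraps each `*_free_tlb()` default in `#ifndef` so an architecture can supply its own macro before including the generic header. A minimal sketch of that override-guard idiom (the `arch_page_*` names are hypothetical):

```c
#include <assert.h>

/* An arch header would define its override before the generic one: */
#define arch_page_shift() 14            /* pretend 16K pages */

/* --- generic header ---------------------------------------------- */
/* Each default is guarded so a prior definition wins, exactly like
 * the pte_free_tlb()/pmd_free_tlb() guards added above. */
#ifndef arch_page_shift
#define arch_page_shift() 12            /* generic default: 4K pages */
#endif

#ifndef arch_page_size
#define arch_page_size() (1UL << arch_page_shift())
#endif
```

The arch override of `arch_page_shift` sticks, while the unguarded-by-the-arch `arch_page_size` falls back to the generic definition built on top of it.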
···55#include <uapi/linux/bpf.h>6677#ifdef CONFIG_BPF_LIRC_MODE288-int lirc_prog_attach(const union bpf_attr *attr);88+int lirc_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);99int lirc_prog_detach(const union bpf_attr *attr);1010int lirc_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);1111#else1212-static inline int lirc_prog_attach(const union bpf_attr *attr)1212+static inline int lirc_prog_attach(const union bpf_attr *attr,1313+ struct bpf_prog *prog)1314{1415 return -EINVAL;1516}
+22-7
include/linux/compiler-gcc.h
···6666#endif67676868/*6969+ * Feature detection for gnu_inline (gnu89 extern inline semantics). Either7070+ * __GNUC_STDC_INLINE__ is defined (not using gnu89 extern inline semantics,7171+ * and we opt in to the gnu89 semantics), or __GNUC_STDC_INLINE__ is not7272+ * defined so the gnu89 semantics are the default.7373+ */7474+#ifdef __GNUC_STDC_INLINE__7575+# define __gnu_inline __attribute__((gnu_inline))7676+#else7777+# define __gnu_inline7878+#endif7979+8080+/*6981 * Force always-inline if the user requests it so via the .config,7082 * or if gcc is too old.7183 * GCC does not warn about unused static inline functions for7284 * -Wunused-function. This turns out to avoid the need for complex #ifdef7385 * directives. Suppress the warning in clang as well by using "unused"7486 * function attribute, which is redundant but not harmful for gcc.8787+ * Prefer gnu_inline, so that extern inline functions do not emit an8888+ * externally visible function. This makes extern inline behave as per gnu898989+ * semantics rather than c99. 
This prevents multiple symbol definition errors9090+ * of extern inline functions at link time.9191+ * A lot of inline functions can cause havoc with function tracing.7592 */7693#if !defined(CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING) || \7794 !defined(CONFIG_OPTIMIZE_INLINING) || (__GNUC__ < 4)7878-#define inline inline __attribute__((always_inline,unused)) notrace7979-#define __inline__ __inline__ __attribute__((always_inline,unused)) notrace8080-#define __inline __inline __attribute__((always_inline,unused)) notrace9595+#define inline \9696+ inline __attribute__((always_inline, unused)) notrace __gnu_inline8197#else8282-/* A lot of inline functions can cause havoc with function tracing */8383-#define inline inline __attribute__((unused)) notrace8484-#define __inline__ __inline__ __attribute__((unused)) notrace8585-#define __inline __inline __attribute__((unused)) notrace9898+#define inline inline __attribute__((unused)) notrace __gnu_inline8699#endif87100101101+#define __inline__ inline102102+#define __inline inline88103#define __always_inline inline __attribute__((always_inline))89104#define noinline __attribute__((noinline))90105
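The feature test added to compiler-gcc.h can be mirrored in ordinary code: `__GNUC_STDC_INLINE__` is predefined when the compiler uses C99 inline semantics, in which case gnu89 semantics must be requested explicitly via the `gnu_inline` attribute; when it is absent, gnu89 semantics are already the default and the attribute expands to nothing. A sketch with a hypothetical `my_gnu_inline` macro:

```c
#include <assert.h>

/* Mirror of the kernel's __gnu_inline detection. */
#ifdef __GNUC_STDC_INLINE__
# define my_gnu_inline __attribute__((gnu_inline))
#else
# define my_gnu_inline
#endif

/* Under gnu89 semantics an inline body never becomes the sole
 * externally visible definition, which is what avoids the duplicate
 * symbol definition errors mentioned in the comment above. */
static inline my_gnu_inline int add_one(int x)
{
    return x + 1;
}
```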
+8-48
include/linux/filter.h
···470470};471471472472struct bpf_binary_header {473473- u16 pages;474474- u16 locked:1;475475-473473+ u32 pages;476474 /* Some arches need word alignment for their instructions */477475 u8 image[] __aligned(4);478476};···479481 u16 pages; /* Number of allocated pages */480482 u16 jited:1, /* Is our filter JIT'ed? */481483 jit_requested:1,/* archs need to JIT the prog */482482- locked:1, /* Program image locked? */484484+ undo_set_mem:1, /* Passed set_memory_ro() checkpoint */483485 gpl_compatible:1, /* Is filter GPL compatible? */484486 cb_access:1, /* Is control block accessed? */485487 dst_needed:1, /* Do we need dst entry? */···675677676678static inline void bpf_prog_lock_ro(struct bpf_prog *fp)677679{678678-#ifdef CONFIG_ARCH_HAS_SET_MEMORY679679- fp->locked = 1;680680- if (set_memory_ro((unsigned long)fp, fp->pages))681681- fp->locked = 0;682682-#endif680680+ fp->undo_set_mem = 1;681681+ set_memory_ro((unsigned long)fp, fp->pages);683682}684683685684static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)686685{687687-#ifdef CONFIG_ARCH_HAS_SET_MEMORY688688- if (fp->locked) {689689- WARN_ON_ONCE(set_memory_rw((unsigned long)fp, fp->pages));690690- /* In case set_memory_rw() fails, we want to be the first691691- * to crash here instead of some random place later on.692692- */693693- fp->locked = 0;694694- }695695-#endif686686+ if (fp->undo_set_mem)687687+ set_memory_rw((unsigned long)fp, fp->pages);696688}697689698690static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)699691{700700-#ifdef CONFIG_ARCH_HAS_SET_MEMORY701701- hdr->locked = 1;702702- if (set_memory_ro((unsigned long)hdr, hdr->pages))703703- hdr->locked = 0;704704-#endif692692+ set_memory_ro((unsigned long)hdr, hdr->pages);705693}706694707695static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)708696{709709-#ifdef CONFIG_ARCH_HAS_SET_MEMORY710710- if (hdr->locked) {711711- WARN_ON_ONCE(set_memory_rw((unsigned long)hdr, hdr->pages));712712- /* In case 
set_memory_rw() fails, we want to be the first713713- * to crash here instead of some random place later on.714714- */715715- hdr->locked = 0;716716- }717717-#endif697697+ set_memory_rw((unsigned long)hdr, hdr->pages);718698}719699720700static inline struct bpf_binary_header *···703727704728 return (void *)addr;705729}706706-707707-#ifdef CONFIG_ARCH_HAS_SET_MEMORY708708-static inline int bpf_prog_check_pages_ro_single(const struct bpf_prog *fp)709709-{710710- if (!fp->locked)711711- return -ENOLCK;712712- if (fp->jited) {713713- const struct bpf_binary_header *hdr = bpf_jit_binary_hdr(fp);714714-715715- if (!hdr->locked)716716- return -ENOLCK;717717- }718718-719719- return 0;720720-}721721-#endif722730723731int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap);724732static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
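The filter.h hunks replace the error-prone `locked` bookkeeping with a single `undo_set_mem` bit: the lock path records unconditionally that the read-only transition was attempted, and the unlock path only flips protections back when that bit is set. A toy sketch of the pairing (struct and function names are simplified stand-ins; the bitfields stand in for `set_memory_ro()`/`set_memory_rw()`):

```c
#include <assert.h>

struct prog {
    unsigned undo_set_mem:1;    /* "we went through the lock path" */
    unsigned ro:1;              /* stand-in for page protections */
};

static void prog_lock_ro(struct prog *p)
{
    p->undo_set_mem = 1;        /* unconditional, like bpf_prog_lock_ro() */
    p->ro = 1;                  /* set_memory_ro() stand-in */
}

static void prog_unlock_ro(struct prog *p)
{
    if (p->undo_set_mem)
        p->ro = 0;              /* set_memory_rw() stand-in */
}

static int lock_unlock_roundtrip(void)
{
    struct prog p = { 0, 0 };
    int was_ro;

    prog_lock_ro(&p);
    was_ro = p.ro;
    prog_unlock_ro(&p);
    return was_ro * 10 + p.ro;  /* 10: locked, then unlocked */
}

static int unlock_without_lock(void)
{
    struct prog p = { 0, 1 };   /* ro set by someone else */

    prog_unlock_ro(&p);         /* no-op: nothing of ours to undo */
    return p.ro;
}
```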
···511511#define HID_STAT_ADDED BIT(0)512512#define HID_STAT_PARSED BIT(1)513513#define HID_STAT_DUP_DETECTED BIT(2)514514+#define HID_STAT_REPROBED BIT(3)514515515516struct hid_input {516517 struct list_head list;···580579 bool battery_avoid_query;581580#endif582581583583- unsigned int status; /* see STAT flags above */582582+ unsigned long status; /* see STAT flags above */584583 unsigned claimed; /* Claimed by hidinput, hiddev? */585584 unsigned quirks; /* Various quirks the device can pull on us */586585 bool io_started; /* If IO has started */
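Widening `hid_device.status` from `unsigned int` to `unsigned long` matters because the kernel's `set_bit()`/`test_bit()` family treats its target as an array of `unsigned long` words, so the backing field must have exactly that type. A portable user-space imitation of the idiom (the `my_*` helpers and `STAT_*` bit numbers are illustrative, not the kernel API):

```c
#include <assert.h>

#define MY_BIT(n) (1UL << (n))
#define BITS_PER_LONG_WORD (8 * sizeof(unsigned long))

static void my_set_bit(unsigned n, unsigned long *addr)
{
    addr[n / BITS_PER_LONG_WORD] |= MY_BIT(n % BITS_PER_LONG_WORD);
}

static int my_test_bit(unsigned n, const unsigned long *addr)
{
    return !!(addr[n / BITS_PER_LONG_WORD] & MY_BIT(n % BITS_PER_LONG_WORD));
}

/* Illustrative status bit numbers. */
#define STAT_ADDED    0
#define STAT_PARSED   1
#define STAT_REPROBED 3

static int demo(void)
{
    unsigned long status = 0;   /* must be unsigned long, as in the hunk */

    my_set_bit(STAT_ADDED, &status);
    my_set_bit(STAT_REPROBED, &status);
    return my_test_bit(STAT_ADDED, &status) &&
           !my_test_bit(STAT_PARSED, &status) &&
           my_test_bit(STAT_REPROBED, &status);
}
```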
···18571857 * is resolved), the nexthop address is returned in ipv4_dst18581858 * or ipv6_dst based on family, smac is set to mac address of18591859 * egress device, dmac is set to nexthop mac address, rt_metric18601860- * is set to metric from route (IPv4/IPv6 only).18601860+ * is set to metric from route (IPv4/IPv6 only), and ifindex18611861+ * is set to the device index of the nexthop from the FIB lookup.18611862 *18621863 * *plen* argument is the size of the passed in struct.18631864 * *flags* argument can be a combination of one or more of the···18741873 * *ctx* is either **struct xdp_md** for XDP programs or18751874 * **struct sk_buff** tc cls_act programs.18761875 * Return18771877- * Egress device index on success, 0 if packet needs to continue18781878- * up the stack for further processing or a negative error in case18791879- * of failure.18761876+ * * < 0 if any input argument is invalid18771877+ * * 0 on success (packet is forwarded, nexthop neighbor exists)18781878+ * * > 0 one of **BPF_FIB_LKUP_RET_** codes explaining why the18791879+ * * packet is not forwarded or needs assist from full stack18801880 *18811881 * int bpf_sock_hash_update(struct bpf_sock_ops_kern *skops, struct bpf_map *map, void *key, u64 flags)18821882 * Description···26142612#define BPF_FIB_LOOKUP_DIRECT BIT(0)26152613#define BPF_FIB_LOOKUP_OUTPUT BIT(1)2616261426152615+enum {26162616+ BPF_FIB_LKUP_RET_SUCCESS, /* lookup successful */26172617+ BPF_FIB_LKUP_RET_BLACKHOLE, /* dest is blackholed; can be dropped */26182618+ BPF_FIB_LKUP_RET_UNREACHABLE, /* dest is unreachable; can be dropped */26192619+ BPF_FIB_LKUP_RET_PROHIBIT, /* dest not allowed; can be dropped */26202620+ BPF_FIB_LKUP_RET_NOT_FWDED, /* packet is not forwarded */26212621+ BPF_FIB_LKUP_RET_FWD_DISABLED, /* fwding is not enabled on ingress */26222622+ BPF_FIB_LKUP_RET_UNSUPP_LWT, /* fwd requires encapsulation */26232623+ BPF_FIB_LKUP_RET_NO_NEIGH, /* no neighbor entry for nh */26242624+ BPF_FIB_LKUP_RET_FRAG_NEEDED, /* 
fragmentation required to fwd */26252625+};26262626+26172627struct bpf_fib_lookup {26182628 /* input: network family for lookup (AF_INET, AF_INET6)26192629 * output: network family of egress nexthop···2639262526402626 /* total length of packet from network header - used for MTU check */26412627 __u16 tot_len;26422642- __u32 ifindex; /* L3 device index for lookup */26282628+26292629+ /* input: L3 device index for lookup26302630+ * output: device index from FIB lookup26312631+ */26322632+ __u32 ifindex;2643263326442634 union {26452635 /* inputs to lookup */
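The new `BPF_FIB_LKUP_RET_*` codes split the old tri-state `bpf_fib_lookup()` result into explicit reasons, which a program then folds back into three actions. A user-space sketch of the usual policy (enum values copied from the hunk above; the `FIB_RET_`/`ACT_` names and the mapping are illustrative, not mandated by the ABI):

```c
#include <assert.h>

enum {
    FIB_RET_SUCCESS,        /* lookup successful */
    FIB_RET_BLACKHOLE,      /* dest is blackholed; can be dropped */
    FIB_RET_UNREACHABLE,    /* dest is unreachable; can be dropped */
    FIB_RET_PROHIBIT,       /* dest not allowed; can be dropped */
    FIB_RET_NOT_FWDED,      /* packet is not forwarded */
    FIB_RET_FWD_DISABLED,   /* fwding is not enabled on ingress */
    FIB_RET_UNSUPP_LWT,     /* fwd requires encapsulation */
    FIB_RET_NO_NEIGH,       /* no neighbor entry for nh */
    FIB_RET_FRAG_NEEDED,    /* fragmentation required to fwd */
};

enum action { ACT_REDIRECT, ACT_DROP, ACT_PASS };

static enum action fib_action(int rc)
{
    switch (rc) {
    case FIB_RET_SUCCESS:
        return ACT_REDIRECT;    /* nexthop resolved: forward */
    case FIB_RET_BLACKHOLE:
    case FIB_RET_UNREACHABLE:
    case FIB_RET_PROHIBIT:
        return ACT_DROP;        /* the route itself says drop */
    default:
        return ACT_PASS;        /* needs assist from the full stack */
    }
}
```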
+58-44
include/uapi/linux/rseq.h
···1010 * Copyright (c) 2015-2018 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>1111 */12121313-#ifdef __KERNEL__1414-# include <linux/types.h>1515-#else1616-# include <stdint.h>1717-#endif1818-1919-#include <linux/types_32_64.h>1313+#include <linux/types.h>1414+#include <asm/byteorder.h>20152116enum rseq_cpu_id_state {2217 RSEQ_CPU_ID_UNINITIALIZED = -1,···4752 __u32 version;4853 /* enum rseq_cs_flags */4954 __u32 flags;5050- LINUX_FIELD_u32_u64(start_ip);5555+ __u64 start_ip;5156 /* Offset from start_ip. */5252- LINUX_FIELD_u32_u64(post_commit_offset);5353- LINUX_FIELD_u32_u64(abort_ip);5757+ __u64 post_commit_offset;5858+ __u64 abort_ip;5459} __attribute__((aligned(4 * sizeof(__u64))));55605661/*···6267struct rseq {6368 /*6469 * Restartable sequences cpu_id_start field. Updated by the6565- * kernel, and read by user-space with single-copy atomicity6666- * semantics. Aligned on 32-bit. Always contains a value in the6767- * range of possible CPUs, although the value may not be the6868- * actual current CPU (e.g. if rseq is not initialized). This6969- * CPU number value should always be compared against the value7070- * of the cpu_id field before performing a rseq commit or7171- * returning a value read from a data structure indexed using7272- * the cpu_id_start value.7070+ * kernel. Read by user-space with single-copy atomicity7171+ * semantics. This field should only be read by the thread which7272+ * registered this data structure. Aligned on 32-bit. Always7373+ * contains a value in the range of possible CPUs, although the7474+ * value may not be the actual current CPU (e.g. if rseq is not7575+ * initialized). This CPU number value should always be compared7676+ * against the value of the cpu_id field before performing a rseq7777+ * commit or returning a value read from a data structure indexed7878+ * using the cpu_id_start value.7379 */7480 __u32 cpu_id_start;7581 /*7676- * Restartable sequences cpu_id field. 
Updated by the kernel,7777- * and read by user-space with single-copy atomicity semantics.7878- * Aligned on 32-bit. Values RSEQ_CPU_ID_UNINITIALIZED and7979- * RSEQ_CPU_ID_REGISTRATION_FAILED have a special semantic: the8080- * former means "rseq uninitialized", and latter means "rseq8181- * initialization failed". This value is meant to be read within8282- * rseq critical sections and compared with the cpu_id_start8383- * value previously read, before performing the commit instruction,8484- * or read and compared with the cpu_id_start value before returning8585- * a value loaded from a data structure indexed using the8686- * cpu_id_start value.8282+ * Restartable sequences cpu_id field. Updated by the kernel.8383+ * Read by user-space with single-copy atomicity semantics. This8484+ * field should only be read by the thread which registered this8585+ * data structure. Aligned on 32-bit. Values8686+ * RSEQ_CPU_ID_UNINITIALIZED and RSEQ_CPU_ID_REGISTRATION_FAILED8787+ * have a special semantic: the former means "rseq uninitialized",8888+ * and latter means "rseq initialization failed". This value is8989+ * meant to be read within rseq critical sections and compared9090+ * with the cpu_id_start value previously read, before performing9191+ * the commit instruction, or read and compared with the9292+ * cpu_id_start value before returning a value loaded from a data9393+ * structure indexed using the cpu_id_start value.8794 */8895 __u32 cpu_id;8996 /*···102105 * targeted by the rseq_cs. Also needs to be set to NULL by user-space103106 * before reclaiming memory that contains the targeted struct rseq_cs.104107 *105105- * Read and set by the kernel with single-copy atomicity semantics.106106- * Set by user-space with single-copy atomicity semantics. Aligned107107- * on 64-bit.108108+ * Read and set by the kernel. Set by user-space with single-copy109109+ * atomicity semantics. This field should only be updated by the110110+ * thread which registered this data structure. 
Aligned on 64-bit.108111 */109109- LINUX_FIELD_u32_u64(rseq_cs);112112+ union {113113+ __u64 ptr64;114114+#ifdef __LP64__115115+ __u64 ptr;116116+#else117117+ struct {118118+#if (defined(__BYTE_ORDER) && (__BYTE_ORDER == __BIG_ENDIAN)) || defined(__BIG_ENDIAN)119119+ __u32 padding; /* Initialized to zero. */120120+ __u32 ptr32;121121+#else /* LITTLE */122122+ __u32 ptr32;123123+ __u32 padding; /* Initialized to zero. */124124+#endif /* ENDIAN */125125+ } ptr;126126+#endif127127+ } rseq_cs;128128+110129 /*111111- * - RSEQ_DISABLE flag:130130+ * Restartable sequences flags field.112131 *113113- * Fallback fast-track flag for single-stepping.114114- * Set by user-space if lack of progress is detected.115115- * Cleared by user-space after rseq finish.116116- * Read by the kernel.132132+ * This field should only be updated by the thread which133133+ * registered this data structure. Read by the kernel.134134+ * Mainly used for single-stepping through rseq critical sections135135+ * with debuggers.136136+ *117137 * - RSEQ_CS_FLAG_NO_RESTART_ON_PREEMPT118118- * Inhibit instruction sequence block restart and event119119- * counter increment on preemption for this thread.138138+ * Inhibit instruction sequence block restart on preemption139139+ * for this thread.120140 * - RSEQ_CS_FLAG_NO_RESTART_ON_SIGNAL121121- * Inhibit instruction sequence block restart and event122122- * counter increment on signal delivery for this thread.141141+ * Inhibit instruction sequence block restart on signal142142+ * delivery for this thread.123143 * - RSEQ_CS_FLAG_NO_RESTART_ON_MIGRATE124124- * Inhibit instruction sequence block restart and event125125- * counter increment on migration for this thread.144144+ * Inhibit instruction sequence block restart on migration for145145+ * this thread.126146 */127147 __u32 flags;128148} __attribute__((aligned(4 * sizeof(__u64))));
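The `rseq_cs` union above replaces the `LINUX_FIELD_u32_u64()` macro with an explicit layout: a 64-bit slot that 32-bit user space fills through a `ptr32` plus explicitly zeroed padding, ordered by endianness so the kernel can always read the full `__u64`. A simplified stand-in (no `__LP64__` branch, gcc/clang `__BYTE_ORDER__` macros assumed):

```c
#include <assert.h>
#include <stdint.h>

union cs_ptr {
    uint64_t ptr64;
    struct {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        uint32_t padding;       /* must be zeroed by user space */
        uint32_t ptr32;
#else
        uint32_t ptr32;
        uint32_t padding;       /* must be zeroed by user space */
#endif
    } ptr;
};

/* What a 32-bit task would do: store through ptr32 and clear the
 * padding, so a 64-bit reader of ptr64 sees a zero-extended value
 * regardless of host endianness. */
static uint64_t store_and_read(uint32_t user_ptr)
{
    union cs_ptr p;

    p.ptr.padding = 0;
    p.ptr.ptr32 = user_ptr;
    return p.ptr64;
}
```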
-50
include/uapi/linux/types_32_64.h
···11-/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */22-#ifndef _UAPI_LINUX_TYPES_32_64_H33-#define _UAPI_LINUX_TYPES_32_64_H44-55-/*66- * linux/types_32_64.h77- *88- * Integer type declaration for pointers across 32-bit and 64-bit systems.99- *1010- * Copyright (c) 2015-2018 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>1111- */1212-1313-#ifdef __KERNEL__1414-# include <linux/types.h>1515-#else1616-# include <stdint.h>1717-#endif1818-1919-#include <asm/byteorder.h>2020-2121-#ifdef __BYTE_ORDER2222-# if (__BYTE_ORDER == __BIG_ENDIAN)2323-# define LINUX_BYTE_ORDER_BIG_ENDIAN2424-# else2525-# define LINUX_BYTE_ORDER_LITTLE_ENDIAN2626-# endif2727-#else2828-# ifdef __BIG_ENDIAN2929-# define LINUX_BYTE_ORDER_BIG_ENDIAN3030-# else3131-# define LINUX_BYTE_ORDER_LITTLE_ENDIAN3232-# endif3333-#endif3434-3535-#ifdef __LP64__3636-# define LINUX_FIELD_u32_u64(field) __u64 field3737-# define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) field = (intptr_t)v3838-#else3939-# ifdef LINUX_BYTE_ORDER_BIG_ENDIAN4040-# define LINUX_FIELD_u32_u64(field) __u32 field ## _padding, field4141-# define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) \4242- field ## _padding = 0, field = (intptr_t)v4343-# else4444-# define LINUX_FIELD_u32_u64(field) __u32 field, field ## _padding4545-# define LINUX_FIELD_u32_u64_INIT_ONSTACK(field, v) \4646- field = (intptr_t)v, field ## _padding = 04747-# endif4848-#endif4949-5050-#endif /* _UAPI_LINUX_TYPES_32_64_H */
+54
kernel/bpf/cgroup.c
···428428 return ret;429429}430430431431+int cgroup_bpf_prog_attach(const union bpf_attr *attr,432432+ enum bpf_prog_type ptype, struct bpf_prog *prog)433433+{434434+ struct cgroup *cgrp;435435+ int ret;436436+437437+ cgrp = cgroup_get_from_fd(attr->target_fd);438438+ if (IS_ERR(cgrp))439439+ return PTR_ERR(cgrp);440440+441441+ ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,442442+ attr->attach_flags);443443+ cgroup_put(cgrp);444444+ return ret;445445+}446446+447447+int cgroup_bpf_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype)448448+{449449+ struct bpf_prog *prog;450450+ struct cgroup *cgrp;451451+ int ret;452452+453453+ cgrp = cgroup_get_from_fd(attr->target_fd);454454+ if (IS_ERR(cgrp))455455+ return PTR_ERR(cgrp);456456+457457+ prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);458458+ if (IS_ERR(prog))459459+ prog = NULL;460460+461461+ ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);462462+ if (prog)463463+ bpf_prog_put(prog);464464+465465+ cgroup_put(cgrp);466466+ return ret;467467+}468468+469469+int cgroup_bpf_prog_query(const union bpf_attr *attr,470470+ union bpf_attr __user *uattr)471471+{472472+ struct cgroup *cgrp;473473+ int ret;474474+475475+ cgrp = cgroup_get_from_fd(attr->query.target_fd);476476+ if (IS_ERR(cgrp))477477+ return PTR_ERR(cgrp);478478+479479+ ret = cgroup_bpf_query(cgrp, attr, uattr);480480+481481+ cgroup_put(cgrp);482482+ return ret;483483+}484484+431485/**432486 * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering433487 * @sk: The socket sending or receiving traffic
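Note the deliberate semantics in `cgroup_bpf_prog_detach()` above: if the program fd fails to resolve, `prog` is set to NULL and the detach proceeds anyway, removing whatever is attached of that type. A single-slot sketch of that contract (the slot and names are hypothetical, not the cgroup-bpf data model):

```c
#include <assert.h>
#include <stddef.h>

static const void *attached;            /* one attach point */

static int slot_attach(const void *prog)
{
    if (!prog)
        return -1;                      /* attach requires a real program */
    attached = prog;
    return 0;
}

static int slot_detach(const void *prog)
{
    if (prog && prog != attached)
        return -1;                      /* fd resolved to a different prog */
    attached = NULL;                    /* NULL prog: unconditional detach */
    return 0;
}

static int demo(void)
{
    static int prog_obj;

    if (slot_attach(&prog_obj))
        return -1;
    if (slot_detach(NULL))              /* lookup failed: still detaches */
        return -2;
    return attached == NULL ? 0 : -3;
}
```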
-28
kernel/bpf/core.c
···598598 bpf_fill_ill_insns(hdr, size);599599600600 hdr->pages = size / PAGE_SIZE;601601- hdr->locked = 0;602602-603601 hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),604602 PAGE_SIZE - sizeof(*hdr));605603 start = (get_random_int() % hole) & ~(alignment - 1);···14481450 return 0;14491451}1450145214511451-static int bpf_prog_check_pages_ro_locked(const struct bpf_prog *fp)14521452-{14531453-#ifdef CONFIG_ARCH_HAS_SET_MEMORY14541454- int i, err;14551455-14561456- for (i = 0; i < fp->aux->func_cnt; i++) {14571457- err = bpf_prog_check_pages_ro_single(fp->aux->func[i]);14581458- if (err)14591459- return err;14601460- }14611461-14621462- return bpf_prog_check_pages_ro_single(fp);14631463-#endif14641464- return 0;14651465-}14661466-14671453static void bpf_prog_select_func(struct bpf_prog *fp)14681454{14691455#ifndef CONFIG_BPF_JIT_ALWAYS_ON···15061524 * all eBPF JITs might immediately support all features.15071525 */15081526 *err = bpf_check_tail_call(fp);15091509- if (*err)15101510- return fp;1511152715121512- /* Checkpoint: at this point onwards any cBPF -> eBPF or15131513- * native eBPF program is read-only. If we failed to change15141514- * the page attributes (e.g. allocation failure from15151515- * splitting large pages), then reject the whole program15161516- * in order to guarantee not ending up with any W+X pages15171517- * from BPF side in kernel.15181518- */15191519- *err = bpf_prog_check_pages_ro_locked(fp);15201528 return fp;15211529}15221530EXPORT_SYMBOL_GPL(bpf_prog_select_runtime);
+184-70
kernel/bpf/sockmap.c
···7272 u32 n_buckets;7373 u32 elem_size;7474 struct bpf_sock_progs progs;7575+ struct rcu_head rcu;7576};76777778struct htab_elem {···9089struct smap_psock_map_entry {9190 struct list_head list;9291 struct sock **entry;9393- struct htab_elem *hash_link;9494- struct bpf_htab *htab;9292+ struct htab_elem __rcu *hash_link;9393+ struct bpf_htab __rcu *htab;9594};96959796struct smap_psock {···121120 struct bpf_prog *bpf_parse;122121 struct bpf_prog *bpf_verdict;123122 struct list_head maps;123123+ spinlock_t maps_lock;124124125125 /* Back reference used when sock callback trigger sockmap operations */126126 struct sock *sock;···142140static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);143141static int bpf_tcp_sendpage(struct sock *sk, struct page *page,144142 int offset, size_t size, int flags);143143+static void bpf_tcp_close(struct sock *sk, long timeout);145144146145static inline struct smap_psock *smap_psock_sk(const struct sock *sk)147146{···164161 return !empty;165162}166163167167-static struct proto tcp_bpf_proto;164164+enum {165165+ SOCKMAP_IPV4,166166+ SOCKMAP_IPV6,167167+ SOCKMAP_NUM_PROTS,168168+};169169+170170+enum {171171+ SOCKMAP_BASE,172172+ SOCKMAP_TX,173173+ SOCKMAP_NUM_CONFIGS,174174+};175175+176176+static struct proto *saved_tcpv6_prot __read_mostly;177177+static DEFINE_SPINLOCK(tcpv6_prot_lock);178178+static struct proto bpf_tcp_prots[SOCKMAP_NUM_PROTS][SOCKMAP_NUM_CONFIGS];179179+static void build_protos(struct proto prot[SOCKMAP_NUM_CONFIGS],180180+ struct proto *base)181181+{182182+ prot[SOCKMAP_BASE] = *base;183183+ prot[SOCKMAP_BASE].close = bpf_tcp_close;184184+ prot[SOCKMAP_BASE].recvmsg = bpf_tcp_recvmsg;185185+ prot[SOCKMAP_BASE].stream_memory_read = bpf_tcp_stream_read;186186+187187+ prot[SOCKMAP_TX] = prot[SOCKMAP_BASE];188188+ prot[SOCKMAP_TX].sendmsg = bpf_tcp_sendmsg;189189+ prot[SOCKMAP_TX].sendpage = bpf_tcp_sendpage;190190+}191191+192192+static void update_sk_prot(struct sock *sk, struct smap_psock 
*psock)193193+{194194+ int family = sk->sk_family == AF_INET6 ? SOCKMAP_IPV6 : SOCKMAP_IPV4;195195+ int conf = psock->bpf_tx_msg ? SOCKMAP_TX : SOCKMAP_BASE;196196+197197+ sk->sk_prot = &bpf_tcp_prots[family][conf];198198+}199199+168200static int bpf_tcp_init(struct sock *sk)169201{170202 struct smap_psock *psock;···219181 psock->save_close = sk->sk_prot->close;220182 psock->sk_proto = sk->sk_prot;221183222222- if (psock->bpf_tx_msg) {223223- tcp_bpf_proto.sendmsg = bpf_tcp_sendmsg;224224- tcp_bpf_proto.sendpage = bpf_tcp_sendpage;225225- tcp_bpf_proto.recvmsg = bpf_tcp_recvmsg;226226- tcp_bpf_proto.stream_memory_read = bpf_tcp_stream_read;184184+ /* Build IPv6 sockmap whenever the address of tcpv6_prot changes */185185+ if (sk->sk_family == AF_INET6 &&186186+ unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {187187+ spin_lock_bh(&tcpv6_prot_lock);188188+ if (likely(sk->sk_prot != saved_tcpv6_prot)) {189189+ build_protos(bpf_tcp_prots[SOCKMAP_IPV6], sk->sk_prot);190190+ smp_store_release(&saved_tcpv6_prot, sk->sk_prot);191191+ }192192+ spin_unlock_bh(&tcpv6_prot_lock);227193 }228228-229229- sk->sk_prot = &tcp_bpf_proto;194194+ update_sk_prot(sk, psock);230195 rcu_read_unlock();231196 return 0;232197}···260219 rcu_read_unlock();261220}262221222222+static struct htab_elem *lookup_elem_raw(struct hlist_head *head,223223+ u32 hash, void *key, u32 key_size)224224+{225225+ struct htab_elem *l;226226+227227+ hlist_for_each_entry_rcu(l, head, hash_node) {228228+ if (l->hash == hash && !memcmp(&l->key, key, key_size))229229+ return l;230230+ }231231+232232+ return NULL;233233+}234234+235235+static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash)236236+{237237+ return &htab->buckets[hash & (htab->n_buckets - 1)];238238+}239239+240240+static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash)241241+{242242+ return &__select_bucket(htab, hash)->head;243243+}244244+263245static void free_htab_elem(struct bpf_htab *htab, 
struct htab_elem *l)264246{265247 atomic_dec(&htab->count);266248 kfree_rcu(l, rcu);267249}268250251251+static struct smap_psock_map_entry *psock_map_pop(struct sock *sk,252252+ struct smap_psock *psock)253253+{254254+ struct smap_psock_map_entry *e;255255+256256+ spin_lock_bh(&psock->maps_lock);257257+ e = list_first_entry_or_null(&psock->maps,258258+ struct smap_psock_map_entry,259259+ list);260260+ if (e)261261+ list_del(&e->list);262262+ spin_unlock_bh(&psock->maps_lock);263263+ return e;264264+}265265+269266static void bpf_tcp_close(struct sock *sk, long timeout)270267{271268 void (*close_fun)(struct sock *sk, long timeout);272272- struct smap_psock_map_entry *e, *tmp;269269+ struct smap_psock_map_entry *e;273270 struct sk_msg_buff *md, *mtmp;274271 struct smap_psock *psock;275272 struct sock *osk;···326247 */327248 close_fun = psock->save_close;328249329329- write_lock_bh(&sk->sk_callback_lock);330250 if (psock->cork) {331251 free_start_sg(psock->sock, psock->cork);332252 kfree(psock->cork);···338260 kfree(md);339261 }340262341341- list_for_each_entry_safe(e, tmp, &psock->maps, list) {263263+ e = psock_map_pop(sk, psock);264264+ while (e) {342265 if (e->entry) {343266 osk = cmpxchg(e->entry, sk, NULL);344267 if (osk == sk) {345345- list_del(&e->list);346268 smap_release_sock(psock, sk);347269 }348270 } else {349349- hlist_del_rcu(&e->hash_link->hash_node);350350- smap_release_sock(psock, e->hash_link->sk);351351- free_htab_elem(e->htab, e->hash_link);271271+ struct htab_elem *link = rcu_dereference(e->hash_link);272272+ struct bpf_htab *htab = rcu_dereference(e->htab);273273+ struct hlist_head *head;274274+ struct htab_elem *l;275275+ struct bucket *b;276276+277277+ b = __select_bucket(htab, link->hash);278278+ head = &b->head;279279+ raw_spin_lock_bh(&b->lock);280280+ l = lookup_elem_raw(head,281281+ link->hash, link->key,282282+ htab->map.key_size);283283+ /* If another thread deleted this object skip deletion.284284+ * The refcnt on psock may or may not be 
zero.285285+ */286286+ if (l) {287287+ hlist_del_rcu(&link->hash_node);288288+ smap_release_sock(psock, link->sk);289289+ free_htab_elem(htab, link);290290+ }291291+ raw_spin_unlock_bh(&b->lock);352292 }293293+ e = psock_map_pop(sk, psock);353294 }354354- write_unlock_bh(&sk->sk_callback_lock);355295 rcu_read_unlock();356296 close_fun(sk, timeout);357297}···1207111112081112static int bpf_tcp_ulp_register(void)12091113{12101210- tcp_bpf_proto = tcp_prot;12111211- tcp_bpf_proto.close = bpf_tcp_close;11141114+ build_protos(bpf_tcp_prots[SOCKMAP_IPV4], &tcp_prot);12121115 /* Once BPF TX ULP is registered it is never unregistered. It12131116 * will be in the ULP list for the lifetime of the system. Doing12141117 * duplicate registers is not a problem.···14521357{14531358 if (refcount_dec_and_test(&psock->refcnt)) {14541359 tcp_cleanup_ulp(sock);13601360+ write_lock_bh(&sock->sk_callback_lock);14551361 smap_stop_sock(psock, sock);13621362+ write_unlock_bh(&sock->sk_callback_lock);14561363 clear_bit(SMAP_TX_RUNNING, &psock->state);14571364 rcu_assign_sk_user_data(sock, NULL);14581365 call_rcu_sched(&psock->rcu, smap_destroy_psock);···16051508 INIT_LIST_HEAD(&psock->maps);16061509 INIT_LIST_HEAD(&psock->ingress);16071510 refcount_set(&psock->refcnt, 1);15111511+ spin_lock_init(&psock->maps_lock);1608151216091513 rcu_assign_sk_user_data(sock, psock);16101514 sock_hold(sock);···16621564 return ERR_PTR(err);16631565}1664156616651665-static void smap_list_remove(struct smap_psock *psock,16661666- struct sock **entry,16671667- struct htab_elem *hash_link)15671567+static void smap_list_map_remove(struct smap_psock *psock,15681568+ struct sock **entry)16681569{16691570 struct smap_psock_map_entry *e, *tmp;1670157115721572+ spin_lock_bh(&psock->maps_lock);16711573 list_for_each_entry_safe(e, tmp, &psock->maps, list) {16721672- if (e->entry == entry || e->hash_link == hash_link) {15741574+ if (e->entry == entry)16731575 list_del(&e->list);16741674- break;16751675- }16761576 
}15771577+ spin_unlock_bh(&psock->maps_lock);15781578+}15791579+15801580+static void smap_list_hash_remove(struct smap_psock *psock,15811581+ struct htab_elem *hash_link)15821582+{15831583+ struct smap_psock_map_entry *e, *tmp;15841584+15851585+ spin_lock_bh(&psock->maps_lock);15861586+ list_for_each_entry_safe(e, tmp, &psock->maps, list) {15871587+ struct htab_elem *c = rcu_dereference(e->hash_link);15881588+15891589+ if (c == hash_link)15901590+ list_del(&e->list);15911591+ }15921592+ spin_unlock_bh(&psock->maps_lock);16771593}1678159416791595static void sock_map_free(struct bpf_map *map)···17131601 if (!sock)17141602 continue;1715160317161716- write_lock_bh(&sock->sk_callback_lock);17171604 psock = smap_psock_sk(sock);17181605 /* This check handles a racing sock event that can get the17191606 * sk_callback_lock before this case but after xchg happens···17201609 * to be null and queued for garbage collection.17211610 */17221611 if (likely(psock)) {17231723- smap_list_remove(psock, &stab->sock_map[i], NULL);16121612+ smap_list_map_remove(psock, &stab->sock_map[i]);17241613 smap_release_sock(psock, sock);17251614 }17261726- write_unlock_bh(&sock->sk_callback_lock);17271615 }17281616 rcu_read_unlock();17291617···17711661 if (!sock)17721662 return -EINVAL;1773166317741774- write_lock_bh(&sock->sk_callback_lock);17751664 psock = smap_psock_sk(sock);17761665 if (!psock)17771666 goto out;1778166717791668 if (psock->bpf_parse)17801669 smap_stop_sock(psock, sock);17811781- smap_list_remove(psock, &stab->sock_map[k], NULL);16701670+ smap_list_map_remove(psock, &stab->sock_map[k]);17821671 smap_release_sock(psock, sock);17831672out:17841784- write_unlock_bh(&sock->sk_callback_lock);17851673 return 0;17861674}17871675···18601752 }18611753 }1862175418631863- write_lock_bh(&sock->sk_callback_lock);18641755 psock = smap_psock_sk(sock);1865175618661757 /* 2. Do not allow inheriting programs if psock exists and has···19161809 if (err)19171810 goto out_free;19181811 smap_init_progs(psock, verdict, parse);18121812+ write_lock_bh(&sock->sk_callback_lock);19191813 smap_start_sock(psock, sock);18141814+ write_unlock_bh(&sock->sk_callback_lock);19201815 }1921181619221817 /* 4. Place psock in sockmap for use and stop any programs on···19281819 */19291820 if (map_link) {19301821 e->entry = map_link;18221822+ spin_lock_bh(&psock->maps_lock);19311823 list_add_tail(&e->list, &psock->maps);18241824+ spin_unlock_bh(&psock->maps_lock);19321825 }19331933- write_unlock_bh(&sock->sk_callback_lock);19341826 return err;19351827out_free:19361828 smap_release_sock(psock, sock);···19421832 }19431833 if (tx_msg)19441834 bpf_prog_put(tx_msg);19451945- write_unlock_bh(&sock->sk_callback_lock);19461835 kfree(e);19471836 return err;19481837}···19781869 if (osock) {19791870 struct smap_psock *opsock = smap_psock_sk(osock);1980187119811981- write_lock_bh(&osock->sk_callback_lock);19821982- smap_list_remove(opsock, &stab->sock_map[i], NULL);18721872+ smap_list_map_remove(opsock, &stab->sock_map[i]);19831873 smap_release_sock(opsock, osock);19841984- write_unlock_bh(&osock->sk_callback_lock);19851874 }19861875out:19871876 return err;···20201913 bpf_prog_put(orig);2021191420221915 return 0;19161916+}19171917+19181918+int sockmap_get_from_fd(const union bpf_attr *attr, int type,19191919+ struct bpf_prog *prog)19201920+{19211921+ int ufd = attr->target_fd;19221922+ struct bpf_map *map;19231923+ struct fd f;19241924+ int err;19251925+19261926+ f = fdget(ufd);19271927+ map = __bpf_map_get(f);19281928+ if (IS_ERR(map))19291929+ return PTR_ERR(map);19301930+19311931+ err = sock_map_prog(map, prog, attr->attach_type);19321932+ fdput(f);19331933+ return err;20231934}2024193520251936static void *sock_map_lookup(struct bpf_map *map, void *key)···21682043 return ERR_PTR(err);21692044}2170204521712171-static inline struct bucket *__select_bucket(struct bpf_htab *htab, u32 hash)20462046+static void __bpf_htab_free(struct rcu_head *rcu)21722047{21732173- return &htab->buckets[hash & (htab->n_buckets - 1)];21742174-}20482048+ struct bpf_htab *htab;2175204921762176-static inline struct hlist_head *select_bucket(struct bpf_htab *htab, u32 hash)21772177-{21782178- return &__select_bucket(htab, hash)->head;20502050+ htab = container_of(rcu, struct bpf_htab, rcu);20512051+ bpf_map_area_free(htab->buckets);20522052+ kfree(htab);21792053}2180205421812055static void sock_hash_free(struct bpf_map *map)···21932069 */21942070 rcu_read_lock();21952071 for (i = 0; i < htab->n_buckets; i++) {21962196- struct hlist_head *head = select_bucket(htab, i);20722072+ struct bucket *b = __select_bucket(htab, i);20732073+ struct hlist_head *head;21972074 struct hlist_node *n;21982075 struct htab_elem *l;2199207620772077+ raw_spin_lock_bh(&b->lock);20782078+ head = &b->head;22002079 hlist_for_each_entry_safe(l, n, head, hash_node) {22012080 struct sock *sock = l->sk;22022081 struct smap_psock *psock;2203208222042083 hlist_del_rcu(&l->hash_node);22052205- write_lock_bh(&sock->sk_callback_lock);22062084 psock = smap_psock_sk(sock);22072085 /* This check handles a racing sock event that can get22082086 * the sk_callback_lock before this case but after xchg···22122086 * (psock) to be null and queued for garbage collection.22132087 */22142088 if (likely(psock)) {22152215- smap_list_remove(psock, NULL, l);20892089+ smap_list_hash_remove(psock, l);22162090 smap_release_sock(psock, sock);22172091 }22182218- write_unlock_bh(&sock->sk_callback_lock);22192219- kfree(l);20922092+ free_htab_elem(htab, l);22202093 }20942094+ raw_spin_unlock_bh(&b->lock);22212095 }22222096 rcu_read_unlock();22232223- bpf_map_area_free(htab->buckets);22242224- kfree(htab);20972097+ call_rcu(&htab->rcu, __bpf_htab_free);22252098}2226209922272100static struct htab_elem *alloc_sock_hash_elem(struct bpf_htab *htab,···22452120 l_new->sk = sk;22462121 l_new->hash = hash;22472122 return l_new;22482248-}22492249-22502250-static struct htab_elem *lookup_elem_raw(struct hlist_head *head,22512251- u32 hash, void *key, u32 key_size)22522252-{22532253- struct htab_elem *l;22542254-22552255- hlist_for_each_entry_rcu(l, head, hash_node) {22562256- if (l->hash == hash && !memcmp(&l->key, key, key_size))22572257- return l;22582258- }22592259-22602260- return NULL;22612123}2262212422632125static inline u32 htab_map_hash(const void *key, u32 key_len)···23662254 goto bucket_err;23672255 }2368225623692369- e->hash_link = l_new;23702370- e->htab = container_of(map, struct bpf_htab, map);22572257+ rcu_assign_pointer(e->hash_link, l_new);22582258+ rcu_assign_pointer(e->htab,22592259+ container_of(map, struct bpf_htab, map));22602260+ spin_lock_bh(&psock->maps_lock);23712261 list_add_tail(&e->list, &psock->maps);22622262+ spin_unlock_bh(&psock->maps_lock);2372226323732264 /* add new element to the head of the list, so that23742265 * concurrent search will find it before old elem···23812266 psock = smap_psock_sk(l_old->sk);2382226723832268 hlist_del_rcu(&l_old->hash_node);23842384- smap_list_remove(psock, NULL, l_old);22692269+ smap_list_hash_remove(psock, l_old);23852270 smap_release_sock(psock, l_old->sk);23862271 free_htab_elem(htab, l_old);23872272 }···24412326 struct smap_psock *psock;2442232724432328 hlist_del_rcu(&l->hash_node);24442444- write_lock_bh(&sock->sk_callback_lock);24452329 psock = smap_psock_sk(sock);24462330 /* This check handles a racing sock event that can get the24472331 * sk_callback_lock before this case but after xchg happens···24482334 * to be null and queued for garbage collection.24492335 */24502336 if (likely(psock)) {24512451- smap_list_remove(psock, NULL, l);23372337+ smap_list_hash_remove(psock, l);24522338 smap_release_sock(psock, sock);24532339 }24542454- write_unlock_bh(&sock->sk_callback_lock);24552340 free_htab_elem(htab, l);24562341 ret = 0;24572342 }···24962383 .map_get_next_key = sock_hash_get_next_key,24972384 .map_update_elem = sock_hash_update_elem,24982385 .map_delete_elem = sock_hash_delete_elem,23862386+ .map_release_uref = sock_map_release,24992387};2500238825012389BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+21-78
kernel/bpf/syscall.c
···14831483 return err;14841484}1485148514861486-#ifdef CONFIG_CGROUP_BPF14871487-14881486static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,14891487 enum bpf_attach_type attach_type)14901488{···1497149914981500#define BPF_PROG_ATTACH_LAST_FIELD attach_flags1499150115001500-static int sockmap_get_from_fd(const union bpf_attr *attr,15011501- int type, bool attach)15021502-{15031503- struct bpf_prog *prog = NULL;15041504- int ufd = attr->target_fd;15051505- struct bpf_map *map;15061506- struct fd f;15071507- int err;15081508-15091509- f = fdget(ufd);15101510- map = __bpf_map_get(f);15111511- if (IS_ERR(map))15121512- return PTR_ERR(map);15131513-15141514- if (attach) {15151515- prog = bpf_prog_get_type(attr->attach_bpf_fd, type);15161516- if (IS_ERR(prog)) {15171517- fdput(f);15181518- return PTR_ERR(prog);15191519- }15201520- }15211521-15221522- err = sock_map_prog(map, prog, attr->attach_type);15231523- if (err) {15241524- fdput(f);15251525- if (prog)15261526- bpf_prog_put(prog);15271527- return err;15281528- }15291529-15301530- fdput(f);15311531- return 0;15321532-}15331533-15341502#define BPF_F_ATTACH_MASK \15351503 (BPF_F_ALLOW_OVERRIDE | BPF_F_ALLOW_MULTI)15361504···15041540{15051541 enum bpf_prog_type ptype;15061542 struct bpf_prog *prog;15071507- struct cgroup *cgrp;15081543 int ret;1509154415101545 if (!capable(CAP_NET_ADMIN))···15401577 ptype = BPF_PROG_TYPE_CGROUP_DEVICE;15411578 break;15421579 case BPF_SK_MSG_VERDICT:15431543- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, true);15801580+ ptype = BPF_PROG_TYPE_SK_MSG;15811581+ break;15441582 case BPF_SK_SKB_STREAM_PARSER:15451583 case BPF_SK_SKB_STREAM_VERDICT:15461546- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, true);15841584+ ptype = BPF_PROG_TYPE_SK_SKB;15851585+ break;15471586 case BPF_LIRC_MODE2:15481548- return lirc_prog_attach(attr);15871587+ ptype = BPF_PROG_TYPE_LIRC_MODE2;15881588+ break;15491589 default:15501590 return -EINVAL;15511591 }···15621596 return -EINVAL;15631597 }1564159815651565- cgrp = cgroup_get_from_fd(attr->target_fd);15661566- if (IS_ERR(cgrp)) {15671567- bpf_prog_put(prog);15681568- return PTR_ERR(cgrp);15991599+ switch (ptype) {16001600+ case BPF_PROG_TYPE_SK_SKB:16011601+ case BPF_PROG_TYPE_SK_MSG:16021602+ ret = sockmap_get_from_fd(attr, ptype, prog);16031603+ break;16041604+ case BPF_PROG_TYPE_LIRC_MODE2:16051605+ ret = lirc_prog_attach(attr, prog);16061606+ break;16071607+ default:16081608+ ret = cgroup_bpf_prog_attach(attr, ptype, prog);15691609 }1570161015711571- ret = cgroup_bpf_attach(cgrp, prog, attr->attach_type,15721572- attr->attach_flags);15731611 if (ret)15741612 bpf_prog_put(prog);15751575- cgroup_put(cgrp);15761576-15771613 return ret;15781614}15791615···15841616static int bpf_prog_detach(const union bpf_attr *attr)15851617{15861618 enum bpf_prog_type ptype;15871587- struct bpf_prog *prog;15881588- struct cgroup *cgrp;15891589- int ret;1590161915911620 if (!capable(CAP_NET_ADMIN))15921621 return -EPERM;···16161651 ptype = BPF_PROG_TYPE_CGROUP_DEVICE;16171652 break;16181653 case BPF_SK_MSG_VERDICT:16191619- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, false);16541654+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_MSG, NULL);16201655 case BPF_SK_SKB_STREAM_PARSER:16211656 case BPF_SK_SKB_STREAM_VERDICT:16221622- return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, false);16571657+ return sockmap_get_from_fd(attr, BPF_PROG_TYPE_SK_SKB, NULL);16231658 case BPF_LIRC_MODE2:16241659 return lirc_prog_detach(attr);16251660 default:16261661 return -EINVAL;16271662 }1628166316291629- cgrp = cgroup_get_from_fd(attr->target_fd);16301630- if (IS_ERR(cgrp))16311631- return PTR_ERR(cgrp);16321632-16331633- prog = bpf_prog_get_type(attr->attach_bpf_fd, ptype);16341634- if (IS_ERR(prog))16351635- prog = NULL;16361636-16371637- ret = cgroup_bpf_detach(cgrp, prog, attr->attach_type, 0);16381638- if (prog)16391639- bpf_prog_put(prog);16401640- cgroup_put(cgrp);16411641- return ret;16641664+ return cgroup_bpf_prog_detach(attr, ptype);16421665}1643166616441667#define BPF_PROG_QUERY_LAST_FIELD query.prog_cnt···16341681static int bpf_prog_query(const union bpf_attr *attr,16351682 union bpf_attr __user *uattr)16361683{16371637- struct cgroup *cgrp;16381638- int ret;16391639-16401684 if (!capable(CAP_NET_ADMIN))16411685 return -EPERM;16421686 if (CHECK_ATTR(BPF_PROG_QUERY))···16611711 default:16621712 return -EINVAL;16631713 }16641664- cgrp = cgroup_get_from_fd(attr->query.target_fd);16651665- if (IS_ERR(cgrp))16661666- return PTR_ERR(cgrp);16671667- ret = cgroup_bpf_query(cgrp, attr, uattr);16681668- cgroup_put(cgrp);16691669- return ret;17141714+17151715+ return cgroup_bpf_prog_query(attr, uattr);16701716}16711671-#endif /* CONFIG_CGROUP_BPF */1672171716731718#define BPF_PROG_TEST_RUN_LAST_FIELD test.duration16741719···23102365 case BPF_OBJ_GET:23112366 err = bpf_obj_get(&attr);23122367 break;23132313-#ifdef CONFIG_CGROUP_BPF23142368 case BPF_PROG_ATTACH:23152369 err = bpf_prog_attach(&attr);23162370 break;···23192375 case BPF_PROG_QUERY:23202376 err = bpf_prog_query(&attr, uattr);23212377 break;23222322-#endif23232378 case BPF_PROG_TEST_RUN:23242379 err = bpf_prog_test_run(&attr, uattr);23252380 break;
+24-6
kernel/kthread.c
···177177static void __kthread_parkme(struct kthread *self)178178{179179 for (;;) {180180- set_current_state(TASK_PARKED);180180+ /*181181+ * TASK_PARKED is a special state; we must serialize against182182+ * possible pending wakeups to avoid store-store collisions on183183+ * task->state.184184+ *185185+ * Such a collision might possibly result in the task state186186+ * changing from TASK_PARKED and us failing the187187+ * wait_task_inactive() in kthread_park().188188+ */189189+ set_special_state(TASK_PARKED);181190 if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))182191 break;192192+193193+ complete_all(&self->parked);183194 schedule();184195 }185196 __set_current_state(TASK_RUNNING);···201190 __kthread_parkme(to_kthread(current));202191}203192EXPORT_SYMBOL_GPL(kthread_parkme);204204-205205-void kthread_park_complete(struct task_struct *k)206206-{207207- complete_all(&to_kthread(k)->parked);208208-}209193210194static int kthread(void *_create)211195{···467461468462 reinit_completion(&kthread->parked);469463 clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);464464+ /*465465+ * __kthread_parkme() will either see !SHOULD_PARK or get the wakeup.466466+ */470467 wake_up_state(k, TASK_PARKED);471468}472469EXPORT_SYMBOL_GPL(kthread_unpark);···496487 set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);497488 if (k != current) {498489 wake_up_process(k);490490+ /*491491+ * Wait for __kthread_parkme() to complete(), this means we492492+ * _will_ have TASK_PARKED and are about to call schedule().493493+ */499494 wait_for_completion(&kthread->parked);495495+ /*496496+ * Now wait for that schedule() to complete and the task to497497+ * get scheduled out.498498+ */499499+ WARN_ON_ONCE(!wait_task_inactive(k, TASK_PARKED));500500 }501501502502 return 0;
+25-16
kernel/rseq.c
···8585{8686 u32 cpu_id = raw_smp_processor_id();87878888- if (__put_user(cpu_id, &t->rseq->cpu_id_start))8888+ if (put_user(cpu_id, &t->rseq->cpu_id_start))8989 return -EFAULT;9090- if (__put_user(cpu_id, &t->rseq->cpu_id))9090+ if (put_user(cpu_id, &t->rseq->cpu_id))9191 return -EFAULT;9292 trace_rseq_update(t);9393 return 0;···100100 /*101101 * Reset cpu_id_start to its initial state (0).102102 */103103- if (__put_user(cpu_id_start, &t->rseq->cpu_id_start))103103+ if (put_user(cpu_id_start, &t->rseq->cpu_id_start))104104 return -EFAULT;105105 /*106106 * Reset cpu_id to RSEQ_CPU_ID_UNINITIALIZED, so any user coming107107 * in after unregistration can figure out that rseq needs to be108108 * registered again.109109 */110110- if (__put_user(cpu_id, &t->rseq->cpu_id))110110+ if (put_user(cpu_id, &t->rseq->cpu_id))111111 return -EFAULT;112112 return 0;113113}···115115static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)116116{117117 struct rseq_cs __user *urseq_cs;118118- unsigned long ptr;118118+ u64 ptr;119119 u32 __user *usig;120120 u32 sig;121121 int ret;122122123123- ret = __get_user(ptr, &t->rseq->rseq_cs);124124- if (ret)125125- return ret;123123+ if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr)))124124+ return -EFAULT;126125 if (!ptr) {127126 memset(rseq_cs, 0, sizeof(*rseq_cs));128127 return 0;129128 }130130- urseq_cs = (struct rseq_cs __user *)ptr;129129+ if (ptr >= TASK_SIZE)130130+ return -EINVAL;131131+ urseq_cs = (struct rseq_cs __user *)(unsigned long)ptr;131132 if (copy_from_user(rseq_cs, urseq_cs, sizeof(*rseq_cs)))132133 return -EFAULT;133133- if (rseq_cs->version > 0)134134- return -EINVAL;135134135135+ if (rseq_cs->start_ip >= TASK_SIZE ||136136+ rseq_cs->start_ip + rseq_cs->post_commit_offset >= TASK_SIZE ||137137+ rseq_cs->abort_ip >= TASK_SIZE ||138138+ rseq_cs->version > 0)139139+ return -EINVAL;140140+ /* Check for overflow. */141141+ if (rseq_cs->start_ip + rseq_cs->post_commit_offset < rseq_cs->start_ip)142142+ return -EINVAL;136143 /* Ensure that abort_ip is not in the critical section. */137144 if (rseq_cs->abort_ip - rseq_cs->start_ip < rseq_cs->post_commit_offset)138145 return -EINVAL;139146140140- usig = (u32 __user *)(rseq_cs->abort_ip - sizeof(u32));147147+ usig = (u32 __user *)(unsigned long)(rseq_cs->abort_ip - sizeof(u32));141148 ret = get_user(sig, usig);142149 if (ret)143150 return ret;···153146 printk_ratelimited(KERN_WARNING154147 "Possible attack attempt. Unexpected rseq signature 0x%x, expecting 0x%x (pid=%d, addr=%p).\n",155148 sig, current->rseq_sig, current->pid, usig);156156- return -EPERM;149149+ return -EINVAL;157150 }158151 return 0;159152}···164157 int ret;165158166159 /* Get thread flags. */167167- ret = __get_user(flags, &t->rseq->flags);160160+ ret = get_user(flags, &t->rseq->flags);168161 if (ret)169162 return ret;170163···202195 * of code outside of the rseq assembly block. This performs203196 * a lazy clear of the rseq_cs field.204197 *205205- * Set rseq_cs to NULL with single-copy atomicity.198198+ * Set rseq_cs to NULL.206199 */207207- return __put_user(0UL, &t->rseq->rseq_cs);200200+ if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64)))201201+ return -EFAULT;202202+ return 0;208203}209204210205/*
+32-35
kernel/sched/core.c
···77 */88#include "sched.h"991010-#include <linux/kthread.h>1110#include <linux/nospec.h>12111312#include <linux/kcov.h>···27232724 membarrier_mm_sync_core_before_usermode(mm);27242725 mmdrop(mm);27252726 }27262726- if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) {27272727- switch (prev_state) {27282728- case TASK_DEAD:27292729- if (prev->sched_class->task_dead)27302730- prev->sched_class->task_dead(prev);27272727+ if (unlikely(prev_state == TASK_DEAD)) {27282728+ if (prev->sched_class->task_dead)27292729+ prev->sched_class->task_dead(prev);2731273027322732- /*27332733- * Remove function-return probe instances associated with this27342734- * task and put them back on the free list.27352735- */27362736- kprobe_flush_task(prev);27312731+ /*27322732+ * Remove function-return probe instances associated with this27332733+ * task and put them back on the free list.27342734+ */27352735+ kprobe_flush_task(prev);2737273627382738- /* Task is done with its stack. */27392739- put_task_stack(prev);27372737+ /* Task is done with its stack. */27382738+ put_task_stack(prev);2740273927412741- put_task_struct(prev);27422742- break;27432743-27442744- case TASK_PARKED:27452745- kthread_park_complete(prev);27462746- break;27472747- }27402740+ put_task_struct(prev);27482741 }2749274227502743 tick_nohz_task_switch();···31043113 struct tick_work *twork = container_of(dwork, struct tick_work, work);31053114 int cpu = twork->cpu;31063115 struct rq *rq = cpu_rq(cpu);31163116+ struct task_struct *curr;31073117 struct rq_flags rf;31183118+ u64 delta;3108311931093120 /*31103121 * Handle the tick only if it appears the remote CPU is running in full···31153122 * statistics and checks timeslices in a time-independent way, regardless31163123 * of when exactly it is running.31173124 */31183118- if (!idle_cpu(cpu) && tick_nohz_tick_stopped_cpu(cpu)) {31193119- struct task_struct *curr;31203120- u64 delta;31253125+ if (idle_cpu(cpu) || !tick_nohz_tick_stopped_cpu(cpu))31263126+ goto out_requeue;3121312731223122- rq_lock_irq(rq, &rf);31233123- update_rq_clock(rq);31243124- curr = rq->curr;31253125- delta = rq_clock_task(rq) - curr->se.exec_start;31283128+ rq_lock_irq(rq, &rf);31293129+ curr = rq->curr;31303130+ if (is_idle_task(curr))31313131+ goto out_unlock;3126313231273127- /*31283128- * Make sure the next tick runs within a reasonable31293129- * amount of time.31303130- */31313131- WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);31323132- curr->sched_class->task_tick(rq, curr, 0);31333133- rq_unlock_irq(rq, &rf);31343134- }31333133+ update_rq_clock(rq);31343134+ delta = rq_clock_task(rq) - curr->se.exec_start;3135313531363136+ /*31373137+ * Make sure the next tick runs within a reasonable31383138+ * amount of time.31393139+ */31403140+ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);31413141+ curr->sched_class->task_tick(rq, curr, 0);31423142+31433143+out_unlock:31443144+ rq_unlock_irq(rq, &rf);31453145+31463146+out_requeue:31363147 /*31373148 * Run the remote tick once per second (1Hz). This arbitrary31383149 * frequency is large enough to avoid overload but short enough
+1-1
kernel/sched/cpufreq_schedutil.c
···192192{193193 struct rq *rq = cpu_rq(sg_cpu->cpu);194194195195- if (rq->rt.rt_nr_running)195195+ if (rt_rq_is_runnable(&rq->rt))196196 return sg_cpu->max;197197198198 /*
+22-23
kernel/sched/fair.c
···39823982 if (!sched_feat(UTIL_EST))39833983 return;3984398439853985- /*39863986- * Update root cfs_rq's estimated utilization39873987- *39883988- * If *p is the last task then the root cfs_rq's estimated utilization39893989- * of a CPU is 0 by definition.39903990- */39913991- ue.enqueued = 0;39923992- if (cfs_rq->nr_running) {39933993- ue.enqueued = cfs_rq->avg.util_est.enqueued;39943994- ue.enqueued -= min_t(unsigned int, ue.enqueued,39953995- (_task_util_est(p) | UTIL_AVG_UNCHANGED));39963996- }39853985+ /* Update root cfs_rq's estimated utilization */39863986+ ue.enqueued = cfs_rq->avg.util_est.enqueued;39873987+ ue.enqueued -= min_t(unsigned int, ue.enqueued,39883988+ (_task_util_est(p) | UTIL_AVG_UNCHANGED));39973989 WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued);3998399039993991 /*···45824590 now = sched_clock_cpu(smp_processor_id());45834591 cfs_b->runtime = cfs_b->quota;45844592 cfs_b->runtime_expires = now + ktime_to_ns(cfs_b->period);45934593+ cfs_b->expires_seq++;45854594}4586459545874596static inline struct cfs_bandwidth *tg_cfs_bandwidth(struct task_group *tg)···46054612 struct task_group *tg = cfs_rq->tg;46064613 struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(tg);46074614 u64 amount = 0, min_amount, expires;46154615+ int expires_seq;4608461646094617 /* note: this is a positive sum as runtime_remaining <= 0 */46104618 min_amount = sched_cfs_bandwidth_slice() - cfs_rq->runtime_remaining;···46224628 cfs_b->idle = 0;46234629 }46244630 }46314631+ expires_seq = cfs_b->expires_seq;46254632 expires = cfs_b->runtime_expires;46264633 raw_spin_unlock(&cfs_b->lock);46274634···46324637 * spread between our sched_clock and the one on which runtime was46334638 * issued.46344639 */46354635- if ((s64)(expires - cfs_rq->runtime_expires) > 0)46404640+ if (cfs_rq->expires_seq != expires_seq) {46414641+ cfs_rq->expires_seq = expires_seq;46364642 cfs_rq->runtime_expires = expires;46434643+ }4637464446384645 return cfs_rq->runtime_remaining > 0;46394646}···46614664 * has not truly expired.46624665 *46634666 * Fortunately we can check determine whether this the case by checking46644664- * whether the global deadline has advanced. It is valid to compare46654665- * cfs_b->runtime_expires without any locks since we only care about46664666- * exact equality, so a partial write will still work.46674667+ * whether the global deadline(cfs_b->expires_seq) has advanced.46674668 */46684668-46694669- if (cfs_rq->runtime_expires != cfs_b->runtime_expires) {46694669+ if (cfs_rq->expires_seq == cfs_b->expires_seq) {46704670 /* extend local deadline, drift is bounded above by 2 ticks */46714671 cfs_rq->runtime_expires += TICK_NSEC;46724672 } else {···5196520251975203void start_cfs_bandwidth(struct cfs_bandwidth *cfs_b)51985204{52055205+ u64 overrun;52065206+51995207 lockdep_assert_held(&cfs_b->lock);5200520852015201- if (!cfs_b->period_active) {52025202- cfs_b->period_active = 1;52035203- hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);52045204- hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);52055205- }52095209+ if (cfs_b->period_active)52105210+ return;52115211+52125212+ cfs_b->period_active = 1;52135213+ overrun = hrtimer_forward_now(&cfs_b->period_timer, cfs_b->period);52145214+ cfs_b->runtime_expires += (overrun + 1) * ktime_to_ns(cfs_b->period);52155215+ cfs_b->expires_seq++;52165216+ hrtimer_start_expires(&cfs_b->period_timer, HRTIMER_MODE_ABS_PINNED);52065217}5207521852085219static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
+10-6
kernel/sched/rt.c
···508508509509 rt_se = rt_rq->tg->rt_se[cpu];510510511511- if (!rt_se)511511+ if (!rt_se) {512512 dequeue_top_rt_rq(rt_rq);513513+ /* Kick cpufreq (see the comment in kernel/sched/sched.h). */514514+ cpufreq_update_util(rq_of_rt_rq(rt_rq), 0);515515+ }513516 else if (on_rt_rq(rt_se))514517 dequeue_rt_entity(rt_se, 0);515518}···10041001 sub_nr_running(rq, rt_rq->rt_nr_running);10051002 rt_rq->rt_queued = 0;1006100310071007- /* Kick cpufreq (see the comment in kernel/sched/sched.h). */10081008- cpufreq_update_util(rq, 0);10091004}1010100510111006static void···1015101410161015 if (rt_rq->rt_queued)10171016 return;10181018- if (rt_rq_throttled(rt_rq) || !rt_rq->rt_nr_running)10171017+10181018+ if (rt_rq_throttled(rt_rq))10191019 return;1020102010211021- add_nr_running(rq, rt_rq->rt_nr_running);10221022- rt_rq->rt_queued = 1;10211021+ if (rt_rq->rt_nr_running) {10221022+ add_nr_running(rq, rt_rq->rt_nr_running);10231023+ rt_rq->rt_queued = 1;10241024+ }1023102510241026 /* Kick cpufreq (see the comment in kernel/sched/sched.h). */10251027 cpufreq_update_util(rq, 0);
+9-2
kernel/sched/sched.h
···334334 u64 runtime;335335 s64 hierarchical_quota;336336 u64 runtime_expires;337337+ int expires_seq;337338338338- int idle;339339- int period_active;339339+ short idle;340340+ short period_active;340341 struct hrtimer period_timer;341342 struct hrtimer slack_timer;342343 struct list_head throttled_cfs_rq;···552551553552#ifdef CONFIG_CFS_BANDWIDTH554553 int runtime_enabled;554554+ int expires_seq;555555 u64 runtime_expires;556556 s64 runtime_remaining;557557···610608 struct task_group *tg;611609#endif612610};611611+612612+static inline bool rt_rq_is_runnable(struct rt_rq *rt_rq)613613+{614614+ return rt_rq->rt_queued && rt_rq->rt_nr_running;615615+}613616614617/* Deadline class' related fields in a runqueue */615618struct dl_rq {
kernel/trace/trace_events_filter.c
···17011701 * @filter_str: filter string17021702 * @set_str: remember @filter_str and enable detailed error in filter17031703 * @filterp: out param for created filter (always updated on return)17041704+ * Must be a pointer that references a NULL pointer.17041705 *17051706 * Creates a filter for @call with @filter_str. If @set_str is %true,17061707 * @filter_str is copied and recorded in the new filter.···17181717{17191718 struct filter_parse_error *pe = NULL;17201719 int err;17201720+17211721+ /* filterp must point to NULL */17221722+ if (WARN_ON(*filterp))17231723+ *filterp = NULL;1721172417221725 err = create_filter_start(filter_string, set_str, &pe, filterp);17231726 if (err)
+1-1
kernel/trace/trace_events_hist.c
···393393 else if (system)394394 snprintf(err, MAX_FILTER_STR_VAL, "%s.%s", system, event);395395 else396396- strncpy(err, var, MAX_FILTER_STR_VAL);396396+ strscpy(err, var, MAX_FILTER_STR_VAL);397397398398 hist_err(str, err);399399}
+4-1
kernel/trace/trace_functions_graph.c
···831831 struct ftrace_graph_ret *graph_ret;832832 struct ftrace_graph_ent *call;833833 unsigned long long duration;834834+ int cpu = iter->cpu;834835 int i;835836836837 graph_ret = &ret_entry->ret;···840839841840 if (data) {842841 struct fgraph_cpu_data *cpu_data;843843- int cpu = iter->cpu;844842845843 cpu_data = per_cpu_ptr(data->cpu_data, cpu);846844···868868 trace_seq_putc(s, ' ');869869870870 trace_seq_printf(s, "%ps();\n", (void *)call->func);871871+872872+ print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET,873873+ cpu, iter->ent->pid, flags);871874872875 return trace_handle_return(s);873876}
mm/memblock.c
···227227 * so we use WARN_ONCE() here to see the stack trace if228228 * fail happens.229229 */230230- WARN_ONCE(1, "memblock: bottom-up allocation failed, memory hotunplug may be affected\n");230230+ WARN_ONCE(IS_ENABLED(CONFIG_MEMORY_HOTREMOVE),231231+ "memblock: bottom-up allocation failed, memory hotremove may be affected\n");231232 }232233233234 return __memblock_find_range_top_down(start, end, size, align, nid,
+12-17
mm/mmap.c
···186186 return next;187187}188188189189-static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf);190190-189189+static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags,190190+ struct list_head *uf);191191SYSCALL_DEFINE1(brk, unsigned long, brk)192192{193193 unsigned long retval;···245245 goto out;246246247247 /* Ok, looks good - let it rip. */248248- if (do_brk(oldbrk, newbrk-oldbrk, &uf) < 0)248248+ if (do_brk_flags(oldbrk, newbrk-oldbrk, 0, &uf) < 0)249249 goto out;250250251251set_brk:···29292929 * anonymous maps. eventually we may be able to do some29302930 * brk-specific accounting here.29312931 */29322932-static int do_brk_flags(unsigned long addr, unsigned long request, unsigned long flags, struct list_head *uf)29322932+static int do_brk_flags(unsigned long addr, unsigned long len, unsigned long flags, struct list_head *uf)29332933{29342934 struct mm_struct *mm = current->mm;29352935 struct vm_area_struct *vma, *prev;29362936- unsigned long len;29372936 struct rb_node **rb_link, *rb_parent;29382937 pgoff_t pgoff = addr >> PAGE_SHIFT;29392938 int error;29402940-29412941- len = PAGE_ALIGN(request);29422942- if (len < request)29432943- return -ENOMEM;29442944- if (!len)29452945- return 0;2946293929472940 /* Until we need other flags, refuse anything except VM_EXEC. */29482941 if ((flags & (~VM_EXEC)) != 0)···30083015 return 0;30093016}3010301730113011-static int do_brk(unsigned long addr, unsigned long len, struct list_head *uf)30123012-{30133013- return do_brk_flags(addr, len, 0, uf);30143014-}30153015-30163016-int vm_brk_flags(unsigned long addr, unsigned long len, unsigned long flags)30183018+int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)30173019{30183020 struct mm_struct *mm = current->mm;30213021+ unsigned long len;30193022 int ret;30203023 bool populate;30213024 LIST_HEAD(uf);30253025+30263026+ len = PAGE_ALIGN(request);30273027+ if (len < request)30283028+ return -ENOMEM;30293029+ if (!len)30303030+ return 0;3022303130233032 if (down_write_killable(&mm->mmap_sem))30243033 return -EINTR;
+2-2
mm/page_alloc.c
···68476847 /* Initialise every node */68486848 mminit_verify_pageflags_layout();68496849 setup_nr_node_ids();68506850+ zero_resv_unavail();68506851 for_each_online_node(nid) {68516852 pg_data_t *pgdat = NODE_DATA(nid);68526853 free_area_init_node(nid, NULL,···68586857 node_set_state(nid, N_MEMORY);68596858 check_for_memory(pgdat, nid);68606859 }68616861- zero_resv_unavail();68626860}6863686168646862static int __init cmdline_parse_core(char *p, unsigned long *core,···7033703370347034void __init free_area_init(unsigned long *zones_size)70357035{70367036+ zero_resv_unavail();70367037 free_area_init_node(0, zones_size,70377038 __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);70387038- zero_resv_unavail();70397039}7040704070417041static int page_alloc_cpu_dead(unsigned int cpu)
+7-1
mm/rmap.c
···6464#include <linux/backing-dev.h>6565#include <linux/page_idle.h>6666#include <linux/memremap.h>6767+#include <linux/userfaultfd_k.h>67686869#include <asm/tlbflush.h>6970···14821481 set_pte_at(mm, address, pvmw.pte, pteval);14831482 }1484148314851485- } else if (pte_unused(pteval)) {14841484+ } else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {14861485 /*14871486 * The guest indicated that the page content is of no14881487 * interest anymore. Simply discard the pte, vmscan14891488 * will take care of the rest.14891489+ * A future reference will then fault in a new zero14901490+ * page. When userfaultfd is active, we must not drop14911491+ * this page though, as its main user (postcopy14921492+ * migration) will not expect userfaults on already14931493+ * copied pages.14901494 */14911495 dec_mm_counter(mm, mm_counter(page));14921496 /* We have to invalidate as we cleared the pte */
net/bpfilter/Kconfig
···11menuconfig BPFILTER22 bool "BPF based packet filtering framework (BPFILTER)"33- default n43 depends on NET && BPF && INET54 help65 This builds experimental bpfilter framework that is aiming to···89if BPFILTER910config BPFILTER_UMH1011 tristate "bpfilter kernel module with user mode helper"1212+ depends on $(success,$(srctree)/scripts/cc-can-link.sh $(CC))1113 default m1214 help1315 This builds bpfilter kernel module with embedded user mode helper
+2-15
net/bpfilter/Makefile
···1515HOSTLDFLAGS += -static1616endif17171818-# a bit of elf magic to convert bpfilter_umh binary into a binary blob1919-# inside bpfilter_umh.o elf file referenced by2020-# _binary_net_bpfilter_bpfilter_umh_start symbol2121-# which bpfilter_kern.c passes further into umh blob loader at run-time2222-quiet_cmd_copy_umh = GEN $@2323- cmd_copy_umh = echo ':' > $(obj)/.bpfilter_umh.o.cmd; \2424- $(OBJCOPY) -I binary \2525- `LC_ALL=C $(OBJDUMP) -f net/bpfilter/bpfilter_umh \2626- |awk -F' |,' '/file format/{print "-O",$$NF} \2727- /^architecture:/{print "-B",$$2}'` \2828- --rename-section .data=.init.rodata $< $@2929-3030-$(obj)/bpfilter_umh.o: $(obj)/bpfilter_umh3131- $(call cmd,copy_umh)1818+$(obj)/bpfilter_umh_blob.o: $(obj)/bpfilter_umh32193320obj-$(CONFIG_BPFILTER_UMH) += bpfilter.o3434-bpfilter-objs += bpfilter_kern.o bpfilter_umh.o2121+bpfilter-objs += bpfilter_kern.o bpfilter_umh_blob.o
+5-6
net/bpfilter/bpfilter_kern.c
···1010#include <linux/file.h>1111#include "msgfmt.h"12121313-#define UMH_start _binary_net_bpfilter_bpfilter_umh_start1414-#define UMH_end _binary_net_bpfilter_bpfilter_umh_end1515-1616-extern char UMH_start;1717-extern char UMH_end;1313+extern char bpfilter_umh_start;1414+extern char bpfilter_umh_end;18151916static struct umh_info info;2017/* since ip_getsockopt() can run in parallel, serialize access to umh */···9093 int err;91949295 /* fork usermode process */9393- err = fork_usermode_blob(&UMH_start, &UMH_end - &UMH_start, &info);9696+ err = fork_usermode_blob(&bpfilter_umh_start,9797+ &bpfilter_umh_end - &bpfilter_umh_start,9898+ &info);9499 if (err)95100 return err;96101 pr_info("Loaded bpfilter_umh pid %d\n", info.pid);
net/ipv4/tcp_input.c
···265265 * it is probably a retransmit.266266 */267267 if (tp->ecn_flags & TCP_ECN_SEEN)268268- tcp_enter_quickack_mode(sk, 1);268268+ tcp_enter_quickack_mode(sk, 2);269269 break;270270 case INET_ECN_CE:271271 if (tcp_ca_needs_ecn(sk))···273273274274 if (!(tp->ecn_flags & TCP_ECN_DEMAND_CWR)) {275275 /* Better not delay acks, sender can have a very low cwnd */276276- tcp_enter_quickack_mode(sk, 1);276276+ tcp_enter_quickack_mode(sk, 2);277277 tp->ecn_flags |= TCP_ECN_DEMAND_CWR;278278 }279279 tp->ecn_flags |= TCP_ECN_SEEN;···3181318131823182 if (tcp_is_reno(tp)) {31833183 tcp_remove_reno_sacks(sk, pkts_acked);31843184+31853185+ /* If any of the cumulatively ACKed segments was31863186+ * retransmitted, non-SACK case cannot confirm that31873187+ * progress was due to original transmission due to31883188+ * lack of TCPCB_SACKED_ACKED bits even if some of31893189+ * the packets may have been never retransmitted.31903190+ */31913191+ if (flag & FLAG_RETRANS_DATA_ACKED)31923192+ flag &= ~FLAG_ORIG_SACK_ACKED;31843193 } else {31853194 int delta;31863195
net/netfilter/nf_conncount.c
net/netfilter/nf_conncount.c

···
 	struct hlist_node node;
 	struct nf_conntrack_tuple tuple;
 	struct nf_conntrack_zone zone;
+	int cpu;
+	u32 jiffies32;
 };

 struct nf_conncount_rb {
···
 		return false;
 	conn->tuple = *tuple;
 	conn->zone = *zone;
+	conn->cpu = raw_smp_processor_id();
+	conn->jiffies32 = (u32)jiffies;
 	hlist_add_head(&conn->node, head);
 	return true;
 }
 EXPORT_SYMBOL_GPL(nf_conncount_add);
+
+static const struct nf_conntrack_tuple_hash *
+find_or_evict(struct net *net, struct nf_conncount_tuple *conn)
+{
+	const struct nf_conntrack_tuple_hash *found;
+	unsigned long a, b;
+	int cpu = raw_smp_processor_id();
+	__s32 age;
+
+	found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple);
+	if (found)
+		return found;
+	b = conn->jiffies32;
+	a = (u32)jiffies;
+
+	/* conn might have been added just before by another cpu and
+	 * might still be unconfirmed. In this case, nf_conntrack_find()
+	 * returns no result. Thus only evict if this cpu added the
+	 * stale entry or if the entry is older than two jiffies.
+	 */
+	age = a - b;
+	if (conn->cpu == cpu || age >= 2) {
+		hlist_del(&conn->node);
+		kmem_cache_free(conncount_conn_cachep, conn);
+		return ERR_PTR(-ENOENT);
+	}
+
+	return ERR_PTR(-EAGAIN);
+}

 unsigned int nf_conncount_lookup(struct net *net, struct hlist_head *head,
 				 const struct nf_conntrack_tuple *tuple,
···
 {
 	const struct nf_conntrack_tuple_hash *found;
 	struct nf_conncount_tuple *conn;
-	struct hlist_node *n;
 	struct nf_conn *found_ct;
+	struct hlist_node *n;
 	unsigned int length = 0;

 	*addit = tuple ? true : false;

 	/* check the saved connections */
 	hlist_for_each_entry_safe(conn, n, head, node) {
-		found = nf_conntrack_find_get(net, &conn->zone, &conn->tuple);
-		if (found == NULL) {
-			hlist_del(&conn->node);
-			kmem_cache_free(conncount_conn_cachep, conn);
+		found = find_or_evict(net, conn);
+		if (IS_ERR(found)) {
+			/* Not found, but might be about to be confirmed */
+			if (PTR_ERR(found) == -EAGAIN) {
+				length++;
+				if (!tuple)
+					continue;
+
+				if (nf_ct_tuple_equal(&conn->tuple, tuple) &&
+				    nf_ct_zone_id(&conn->zone, conn->zone.dir) ==
+				    nf_ct_zone_id(zone, zone->dir))
+					*addit = false;
+			}
 			continue;
 		}

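The eviction logic above subtracts two 32-bit jiffies snapshots and interprets the result as a signed 32-bit value, which stays correct even when the jiffies counter wraps around. A minimal Python sketch of that arithmetic (the helper name is illustrative, not kernel code):

```python
def u32_age(now, stored):
    """Compute (s32)((u32)now - (u32)stored), as the kernel's age check does."""
    diff = (now - stored) & 0xFFFFFFFF      # unsigned 32-bit wraparound
    if diff >= 0x80000000:                  # reinterpret as signed 32-bit
        diff -= 0x100000000
    return diff

# five ticks elapsed, no wraparound
assert u32_age(1005, 1000) == 5
# the counter wrapped past 2**32 between the two samples; age is still 5
assert u32_age(3, 0xFFFFFFFE) == 5
```

This is why the patch stores `(u32)jiffies` per entry: the signed difference gives a sane small age across the wrap instead of a huge unsigned value.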
+5
net/netfilter/nf_conntrack_helper.c

···

 	nf_ct_expect_iterate_destroy(expect_iter_me, NULL);
 	nf_ct_iterate_destroy(unhelp, me);
+
+	/* Maybe someone has gotten the helper already when unhelp above.
+	 * So need to wait it.
+	 */
+	synchronize_rcu();
 }
 EXPORT_SYMBOL_GPL(nf_conntrack_helper_unregister);

+10-3
net/netfilter/nf_log.c

···
 	if (write) {
 		struct ctl_table tmp = *table;

+		/* proc_dostring() can append to existing strings, so we need to
+		 * initialize it as an empty string.
+		 */
+		buf[0] = '\0';
 		tmp.data = buf;
 		r = proc_dostring(&tmp, write, buffer, lenp, ppos);
 		if (r)
···
 		rcu_assign_pointer(net->nf.nf_loggers[tindex], logger);
 		mutex_unlock(&nf_log_mutex);
 	} else {
+		struct ctl_table tmp = *table;
+
+		tmp.data = buf;
 		mutex_lock(&nf_log_mutex);
 		logger = nft_log_dereference(net->nf.nf_loggers[tindex]);
 		if (!logger)
-			table->data = "NONE";
+			strlcpy(buf, "NONE", sizeof(buf));
 		else
-			table->data = logger->name;
-		r = proc_dostring(table, write, buffer, lenp, ppos);
+			strlcpy(buf, logger->name, sizeof(buf));
 		mutex_unlock(&nf_log_mutex);
+		r = proc_dostring(&tmp, write, buffer, lenp, ppos);
 	}

 	return r;
net/strparser/strparser.c

···
 	 */
 	struct strp_msg strp;
 	int accum_len;
-	int early_eaten;
 };

 static inline struct _strp_msg *_strp_msg(struct sk_buff *skb)
···
 	head = strp->skb_head;
 	if (head) {
 		/* Message already in progress */
-
-		stm = _strp_msg(head);
-		if (unlikely(stm->early_eaten)) {
-			/* Already some number of bytes on the receive sock
-			 * data saved in skb_head, just indicate they
-			 * are consumed.
-			 */
-			eaten = orig_len <= stm->early_eaten ?
-				orig_len : stm->early_eaten;
-			stm->early_eaten -= eaten;
-
-			return eaten;
-		}
-
 		if (unlikely(orig_offset)) {
 			/* Getting data with a non-zero offset when a message is
 			 * in progress is not expected. If it does happen, we
···
 		}

 		stm->accum_len += cand_len;
+		eaten += cand_len;
 		strp->need_bytes = stm->strp.full_len -
 				   stm->accum_len;
-		stm->early_eaten = cand_len;
 		STRP_STATS_ADD(strp->stats.bytes, cand_len);
 		desc->count = 0; /* Stop reading socket */
 		break;
+14-21
net/wireless/nl80211.c

···
 				  nl80211_check_s32);
 	/*
 	 * Check HT operation mode based on
-	 * IEEE 802.11 2012 8.4.2.59 HT Operation element.
+	 * IEEE 802.11-2016 9.4.2.57 HT Operation element.
 	 */
 	if (tb[NL80211_MESHCONF_HT_OPMODE]) {
 		ht_opmode = nla_get_u16(tb[NL80211_MESHCONF_HT_OPMODE]);
···
 				  IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
 			return -EINVAL;

-		if ((ht_opmode & IEEE80211_HT_OP_MODE_NON_GF_STA_PRSNT) &&
-		    (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
-			return -EINVAL;
+		/* NON_HT_STA bit is reserved, but some programs set it */
+		ht_opmode &= ~IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT;

-		switch (ht_opmode & IEEE80211_HT_OP_MODE_PROTECTION) {
-		case IEEE80211_HT_OP_MODE_PROTECTION_NONE:
-		case IEEE80211_HT_OP_MODE_PROTECTION_20MHZ:
-			if (ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT)
-				return -EINVAL;
-			break;
-		case IEEE80211_HT_OP_MODE_PROTECTION_NONMEMBER:
-		case IEEE80211_HT_OP_MODE_PROTECTION_NONHT_MIXED:
-			if (!(ht_opmode & IEEE80211_HT_OP_MODE_NON_HT_STA_PRSNT))
-				return -EINVAL;
-			break;
-		}
 		cfg->ht_opmode = ht_opmode;
 		mask |= (1 << (NL80211_MESHCONF_HT_OPMODE - 1));
 	}
···
 			    rem) {
 			u8 *mask_pat;

-			nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
-					 nl80211_packet_pattern_policy,
-					 info->extack);
+			err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
+					       nl80211_packet_pattern_policy,
+					       info->extack);
+			if (err)
+				goto error;
+
 			err = -EINVAL;
 			if (!pat_tb[NL80211_PKTPAT_MASK] ||
 			    !pat_tb[NL80211_PKTPAT_PATTERN])
···
 			    rem) {
 			u8 *mask_pat;

-			nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
-					 nl80211_packet_pattern_policy, NULL);
+			err = nla_parse_nested(pat_tb, MAX_NL80211_PKTPAT, pat,
+					       nl80211_packet_pattern_policy, NULL);
+			if (err)
+				return err;
+
 			if (!pat_tb[NL80211_PKTPAT_MASK] ||
 			    !pat_tb[NL80211_PKTPAT_PATTERN])
 				return -EINVAL;
+4-4
samples/bpf/xdp_fwd_kern.c

···
 	struct ethhdr *eth = data;
 	struct ipv6hdr *ip6h;
 	struct iphdr *iph;
-	int out_index;
 	u16 h_proto;
 	u64 nh_off;
+	int rc;

 	nh_off = sizeof(*eth);
 	if (data + nh_off > data_end)
···

 	fib_params.ifindex = ctx->ingress_ifindex;

-	out_index = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags);
+	rc = bpf_fib_lookup(ctx, &fib_params, sizeof(fib_params), flags);

 	/* verify egress index has xdp support
	 * TO-DO bpf_map_lookup_elem(&tx_port, &key) fails with
···
 	 * NOTE: without verification that egress index supports XDP
 	 * forwarding packets are dropped.
 	 */
-	if (out_index > 0) {
+	if (rc == 0) {
 		if (h_proto == htons(ETH_P_IP))
 			ip_decrease_ttl(iph);
 		else if (h_proto == htons(ETH_P_IPV6))
···

 		memcpy(eth->h_dest, fib_params.dmac, ETH_ALEN);
 		memcpy(eth->h_source, fib_params.smac, ETH_ALEN);
-		return bpf_redirect_map(&tx_port, out_index, 0);
+		return bpf_redirect_map(&tx_port, fib_params.ifindex, 0);
 	}

 	return XDP_PASS;
scripts/Kbuild.include

···
 # Prefix -I with $(srctree) if it is not an absolute path.
 # skip if -I has no parameter
 addtree = $(if $(patsubst -I%,%,$(1)), \
-$(if $(filter-out -I/% -I./% -I../%,$(1)),$(patsubst -I%,-I$(srctree)/%,$(1)),$(1)))
+$(if $(filter-out -I/% -I./% -I../%,$(1)),$(patsubst -I%,-I$(srctree)/%,$(1)),$(1)),$(1))

 # Find all -I options and call addtree
 flags = $(foreach o,$($(1)),$(if $(filter -I%,$(o)),$(call addtree,$(o)),$(o)))
-3
scripts/Makefile.build

···
 # We never want them to be removed automatically.
 .SECONDARY: $(targets)

-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable se we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
-3
scripts/Makefile.clean

···
 $(subdir-ymn):
 	$(Q)$(MAKE) $(clean)=$@

-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable se we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
-4
scripts/Makefile.modbuiltin

···
 $(subdir-ym):
 	$(Q)$(MAKE) $(modbuiltin)=$@

-
-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable se we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
-4
scripts/Makefile.modinst

···
 $(modules):
 	$(call cmd,modules_install,$(MODLIB)/$(modinst_dir))

-
-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable so we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
-4
scripts/Makefile.modpost

···
 	include $(cmd_files)
 endif

-
-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable se we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
-3
scripts/Makefile.modsign

···
 $(modules):
 	$(call cmd,sign_ko,$(MODLIB)/$(modinst_dir))

-# Declare the contents of the .PHONY variable as phony. We keep that
-# information in a variable se we can use it in if_changed and friends.
-
 .PHONY: $(PHONY)
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Core.py

···
     string = ""

     if flag_fields[event_name][field_name]:
-	print_delim = 0
-	keys = flag_fields[event_name][field_name]['values'].keys()
-	keys.sort()
-	for idx in keys:
+        print_delim = 0
+        for idx in sorted(flag_fields[event_name][field_name]['values']):
 	    if not value and not idx:
 		string += flag_fields[event_name][field_name]['values'][idx]
 		break
···
     string = ""

     if symbolic_fields[event_name][field_name]:
-	keys = symbolic_fields[event_name][field_name]['values'].keys()
-	keys.sort()
-	for idx in keys:
+        for idx in sorted(symbolic_fields[event_name][field_name]['values']):
 	    if not value and not idx:
-		string = symbolic_fields[event_name][field_name]['values'][idx]
+	        string = symbolic_fields[event_name][field_name]['values'][idx]
 		break
-	    if (value == idx):
-		string = symbolic_fields[event_name][field_name]['values'][idx]
+	    if (value == idx):
+	        string = symbolic_fields[event_name][field_name]['values'][idx]
 		break

     return string
···
     string = ""
     print_delim = 0

-    keys = trace_flags.keys()
+    for idx in trace_flags:
+        if not value and not idx:
+            string += "NONE"
+            break

-    for idx in keys:
-	if not value and not idx:
-	    string += "NONE"
-	    break
-
-	if idx and (value & idx) == idx:
-	    if print_delim:
-		string += " | ";
-	    string += trace_flags[idx]
-	    print_delim = 1
-	    value &= ~idx
+        if idx and (value & idx) == idx:
+            if print_delim:
+                string += " | ";
+            string += trace_flags[idx]
+            print_delim = 1
+            value &= ~idx

     return string

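The change above replaces the Python-2-only idiom `keys = d.keys(); keys.sort()` with `sorted(d)`: in Python 3, `dict.keys()` returns a view object that has no `.sort()` method, while `sorted()` accepts any iterable, including the dict itself. A quick illustration:

```python
values = {4: "d", 1: "a", 3: "c"}

# Python 2 only:  keys = values.keys(); keys.sort()
# Portable:       iterate the dict through sorted()
ordered = [values[k] for k in sorted(values)]

assert sorted(values) == [1, 3, 4]
assert ordered == ["a", "c", "d"]
```

The same replacement works on Python 2, so the script stays compatible with both interpreters.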
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/EventClass.py

···
 # PerfEvent is the base class for all perf event sample, PebsEvent
 # is a HW base Intel x86 PEBS event, and user could add more SW/HW
 # event classes based on requirements.
+from __future__ import print_function

 import struct

···
 		PerfEvent.event_num += 1

 	def show(self):
-		print "PMU event: name=%12s, symbol=%24s, comm=%8s, dso=%12s" % (self.name, self.symbol, self.comm, self.dso)
+		print("PMU event: name=%12s, symbol=%24s, comm=%8s, dso=%12s" %
+		      (self.name, self.symbol, self.comm, self.dso))

 #
 # Basic Intel PEBS (Precise Event-based Sampling) event, whose raw buffer
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/SchedGui.py

···
 try:
 	import wx
 except ImportError:
-	raise ImportError, "You need to install the wxpython lib for this script"
+	raise ImportError("You need to install the wxpython lib for this script")


 class RootFrame(wx.Frame):
tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py

···
 # This software may be distributed under the terms of the GNU General
 # Public License ("GPL") version 2 as published by the Free Software
 # Foundation.
+from __future__ import print_function

 import errno, os

···
 	return str

 def add_stats(dict, key, value):
-	if not dict.has_key(key):
+	if key not in dict:
 		dict[key] = (value, value, value, 1)
 	else:
 		min, max, avg, count = dict[key]
···
 except:
 	if not audit_package_warned:
 		audit_package_warned = True
-		print "Install the audit-libs-python package to get syscall names.\n" \
-		      "For example:\n # apt-get install python-audit (Ubuntu)" \
-		      "\n # yum install audit-libs-python (Fedora)" \
-		      "\n etc.\n"
+		print("Install the audit-libs-python package to get syscall names.\n"
+		      "For example:\n # apt-get install python-audit (Ubuntu)"
+		      "\n # yum install audit-libs-python (Fedora)"
+		      "\n etc.\n")

 def syscall_name(id):
 	try:
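`dict.has_key()` was removed in Python 3, so the membership operator (`in` / `not in`) is the portable spelling, and it works identically on Python 2. A simplified, self-contained sketch of an `add_stats`-style accumulator using it (the min/max/count bookkeeping here is illustrative, not the exact script logic):

```python
stats = {}

def add_stats(table, key, value):
    # dict.has_key(key) is gone in Python 3; "key not in table" replaces it
    if key not in table:
        table[key] = (value, value, value, 1)
    else:
        mn, mx, avg, count = table[key]
        table[key] = (min(mn, value), max(mx, value), avg, count + 1)

add_stats(stats, "read", 10)
add_stats(stats, "read", 4)
assert stats["read"] == (4, 10, 10, 2)
```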
+9-5
tools/perf/scripts/python/sched-migration.py

···
 # This software is distributed under the terms of the GNU General
 # Public License ("GPL") version 2 as published by the Free Software
 # Foundation.
-
+from __future__ import print_function

 import os
 import sys

 from collections import defaultdict
-from UserList import UserList
+try:
+	from UserList import UserList
+except ImportError:
+	# Python 3: UserList moved to the collections package
+	from collections import UserList

 sys.path.append(os.environ['PERF_EXEC_PATH'] + \
 	'/scripts/python/Perf-Trace-Util/lib/Perf/Trace')
···
 		if i == -1:
 			return

-		for i in xrange(i, len(self.data)):
+		for i in range(i, len(self.data)):
 			timeslice = self.data[i]
 			if timeslice.start > end:
 				return
···
 		on_cpu_task = self.current_tsk[headers.cpu]

 		if on_cpu_task != -1 and on_cpu_task != prev_pid:
-			print "Sched switch event rejected ts: %s cpu: %d prev: %s(%d) next: %s(%d)" % \
-				(headers.ts_format(), headers.cpu, prev_comm, prev_pid, next_comm, next_pid)
+			print("Sched switch event rejected ts: %s cpu: %d prev: %s(%d) next: %s(%d)" % \
+				headers.ts_format(), headers.cpu, prev_comm, prev_pid, next_comm, next_pid)

 		threads[prev_pid] = prev_comm
 		threads[next_pid] = next_comm
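The try/except import above is the standard pattern for modules that moved between Python 2 and 3: attempt the old location first and fall back to the new one on `ImportError`. A self-contained sketch of the same fallback (the `Squares` class is an illustrative stand-in for the script's `UserList` subclasses):

```python
# Import from the old Python 2 location first, falling back to the
# Python 3 location, as the patch does above.
try:
    from UserList import UserList          # Python 2
except ImportError:
    from collections import UserList       # Python 3

class Squares(UserList):
    def __init__(self, n):
        UserList.__init__(self, [i * i for i in range(n)])

assert list(Squares(4)) == [0, 1, 4, 9]
```

Because both branches bind the same name, the rest of the script is unchanged regardless of which interpreter runs it.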
+1-1
tools/perf/tests/builtin-test.c

···

 #define for_each_shell_test(dir, base, ent)	\
 	while ((ent = readdir(dir)) != NULL)	\
-		if (!is_directory(base, ent))
+		if (!is_directory(base, ent) && ent->d_name[0] != '.')

 static const char *shell_tests__dir(char *path, size_t size)
 {
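The added `ent->d_name[0] != '.'` check keeps hidden directory entries, including `.`, `..`, and editor swap files, from being picked up as shell tests. The same filter, sketched in Python with illustrative entry names:

```python
# Drop ".", "..", and hidden files (e.g. editor swap files) before
# treating directory entries as runnable shell tests.
entries = [".", "..", ".record.sh.swp", "record+probe.sh", "trace+probe.sh"]

shell_tests = [name for name in entries if not name.startswith(".")]

assert shell_tests == ["record+probe.sh", "trace+probe.sh"]
```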
tools/testing/selftests/bpf/test_kmod.sh

···
 #!/bin/sh
 # SPDX-License-Identifier: GPL-2.0

+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+msg="skip all tests:"
+if [ "$(id -u)" != "0" ]; then
+	echo $msg please run this as root >&2
+	exit $ksft_skip
+fi
+
 SRC_TREE=../../../../

 test_run()
+9
tools/testing/selftests/bpf/test_lirc_mode2.sh

···
 #!/bin/bash
 # SPDX-License-Identifier: GPL-2.0

+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+msg="skip all tests:"
+if [ $UID != 0 ]; then
+	echo $msg please run this as root >&2
+	exit $ksft_skip
+fi
+
 GREEN='\033[0;92m'
 RED='\033[0;31m'
 NC='\033[0m' # No Color
+9
tools/testing/selftests/bpf/test_lwt_seg6local.sh

···
 # An UDP datagram is sent from fb00::1 to fb00::6. The test succeeds if this
 # datagram can be read on NS6 when binding to fb00::6.

+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+msg="skip all tests:"
+if [ $UID != 0 ]; then
+	echo $msg please run this as root >&2
+	exit $ksft_skip
+fi
+
 TMP_FILE="/tmp/selftest_lwt_seg6local.txt"

 cleanup()
-6
tools/testing/selftests/bpf/test_sockmap.c

···

 int main(int argc, char **argv)
 {
-	struct rlimit r = {10 * 1024 * 1024, RLIM_INFINITY};
 	int iov_count = 1, length = 1024, rate = 1;
 	struct sockmap_options options = {0};
 	int opt, longindex, err, cg_fd = 0;
 	char *bpf_file = BPF_SOCKMAP_FILENAME;
 	int test = PING_PONG;
-
-	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
-		perror("setrlimit(RLIMIT_MEMLOCK)");
-		return 1;
-	}

 	if (argc < 2)
 		return test_suite();
tools/testing/selftests/net/fib_tests.sh
+17-7
tools/testing/selftests/rseq/rseq.h

···
 	return cpu;
 }

+static inline void rseq_clear_rseq_cs(void)
+{
+#ifdef __LP64__
+	__rseq_abi.rseq_cs.ptr = 0;
+#else
+	__rseq_abi.rseq_cs.ptr.ptr32 = 0;
+#endif
+}
+
 /*
- * rseq_prepare_unload() should be invoked by each thread using rseq_finish*()
- * at least once between their last rseq_finish*() and library unload of the
- * library defining the rseq critical section (struct rseq_cs). This also
- * applies to use of rseq in code generated by JIT: rseq_prepare_unload()
- * should be invoked at least once by each thread using rseq_finish*() before
- * reclaim of the memory holding the struct rseq_cs.
+ * rseq_prepare_unload() should be invoked by each thread executing a rseq
+ * critical section at least once between their last critical section and
+ * library unload of the library defining the rseq critical section
+ * (struct rseq_cs). This also applies to use of rseq in code generated by
+ * JIT: rseq_prepare_unload() should be invoked at least once by each
+ * thread executing a rseq critical section before reclaim of the memory
+ * holding the struct rseq_cs.
 */
 static inline void rseq_prepare_unload(void)
 {
-	__rseq_abi.rseq_cs = 0;
+	rseq_clear_rseq_cs();
 }

 #endif /* RSEQ_H_ */