···
 
 In this mode ``intel_pstate`` registers utilization update callbacks with the
 CPU scheduler in order to run a P-state selection algorithm, either
-``powersave`` or ``performance``, depending on the ``scaling_cur_freq`` policy
+``powersave`` or ``performance``, depending on the ``scaling_governor`` policy
 setting in ``sysfs``. The current CPU frequency information to be made
 available from the ``scaling_cur_freq`` policy attribute in ``sysfs`` is
 periodically updated by those utilization update callbacks too.
+1-1
Documentation/admin-guide/pm/sleep-states.rst
···
 ==================================
 
 Depending on its configuration and the capabilities of the platform it runs on,
-the Linux kernel can support up to four system sleep states, includig
+the Linux kernel can support up to four system sleep states, including
 hibernation and up to three variants of system suspend. The sleep states that
 can be supported by the kernel are listed below.
+9-1
Documentation/bpf/bpf_devel_QA.txt
···
   pulls in some header files containing file scope host assembly codes.
   - You can add "-fno-jump-tables" to work around the switch table issue.
 
-  Otherwise, you can use bpf target.
+  Otherwise, you can use bpf target. Additionally, you _must_ use bpf target
+  when:
+
+  - Your program uses data structures with pointer or long / unsigned long
+    types that interface with BPF helpers or context data structures. Access
+    into these structures is verified by the BPF verifier and may result
+    in verification failures if the native architecture is not aligned with
+    the BPF architecture, e.g. 64-bit. An example of this is
+    BPF_PROG_TYPE_SK_MSG require '-target bpf'
 
 Happy BPF hacking!
+4-1
Documentation/device-mapper/thin-provisioning.txt
···
     data device, but just remove the mapping.
 
     read_only: Don't allow any changes to be made to the pool
-               metadata.
+               metadata. This mode is only available after the
+               thin-pool has been created and first used in full
+               read/write mode. It cannot be specified on initial
+               thin-pool creation.
 
     error_if_no_space: Error IOs, instead of queueing, if no space.
···
 Optional properties:
 - dma-coherent      : Present if dma operations are coherent
 - clocks            : a list of phandle + clock specifier pairs
-- resets            : a list of phandle + reset specifier pairs
 - target-supply     : regulator for SATA target power
 - phys              : reference to the SATA PHY node
 - phy-names         : must be "sata-phy"
···
   require specific display timings. The panel-timing subnode expresses those
   timings as specified in the timing subnode section of the display timing
   bindings defined in
-  Documentation/devicetree/bindings/display/display-timing.txt.
+  Documentation/devicetree/bindings/display/panel/display-timing.txt.
 
 Connectivity
···
 - compatible:
   atmel,maxtouch
 
+  The following compatibles have been used in various products but are
+  deprecated:
+  atmel,qt602240_ts
+  atmel,atmel_mxt_ts
+  atmel,atmel_mxt_tp
+  atmel,mXT224
+
 - reg: The I2C address of the device
 
 - interrupts: The sink for the touchpad's IRQ output
···
 - compatible: Must contain one or more of the following:
   - "renesas,rcar-gen3-canfd" for R-Car Gen3 compatible controller.
   - "renesas,r8a7795-canfd" for R8A7795 (R-Car H3) compatible controller.
-  - "renesas,r8a7796-canfd" for R8A7796 (R-Car M3) compatible controller.
+  - "renesas,r8a7796-canfd" for R8A7796 (R-Car M3-W) compatible controller.
+  - "renesas,r8a77970-canfd" for R8A77970 (R-Car V3M) compatible controller.
+  - "renesas,r8a77980-canfd" for R8A77980 (R-Car V3H) compatible controller.
 
   When compatible with the generic version, nodes must list the
   SoC-specific version corresponding to the platform first, followed by the
···
 
   - "renesas,etheravb-r8a7795" for the R8A7795 SoC.
   - "renesas,etheravb-r8a7796" for the R8A7796 SoC.
+  - "renesas,etheravb-r8a77965" for the R8A77965 SoC.
   - "renesas,etheravb-r8a77970" for the R8A77970 SoC.
   - "renesas,etheravb-r8a77980" for the R8A77980 SoC.
   - "renesas,etheravb-r8a77995" for the R8A77995 SoC.
···
 configuration, drive strength and pullups. If one of these options is
 not set, its actual value will be unspecified.
 
-This driver supports the generic pin multiplexing and configuration
-bindings. For details on each properties, you can refer to
-./pinctrl-bindings.txt.
+Allwinner A1X Pin Controller supports the generic pin multiplexing and
+configuration bindings. For details on each properties, you can refer to
+./pinctrl-bindings.txt.
 
 Required sub-node properties:
   - pins
···
 of_overlay_remove_all() which will remove every single one in the correct
 order.
 
+In addition, there is the option to register notifiers that get called on
+overlay operations. See of_overlay_notifier_register/unregister and
+enum of_overlay_notify_action for details.
+
+Note that a notifier callback is not supposed to store pointers to a device
+tree node or its content beyond OF_OVERLAY_POST_REMOVE corresponding to the
+respective node it received.
+
 Overlay DTS Format
 ------------------
+2-2
Documentation/doc-guide/parse-headers.rst
···
 ****
 
 
-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>
 
 
 COPYRIGHT
 *********
 
 
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.
 
 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
+1-1
Documentation/media/uapi/rc/keytable.c.rst
···
 
  /* keytable.c - This program allows checking/replacing keys at IR
 
-    Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+    Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@kernel.org>
 
     This program is free software; you can redistribute it and/or modify
     it under the terms of the GNU General Public License as published by
+1-1
Documentation/media/uapi/v4l/v4l2grab.c.rst
···
 .. code-block:: c
 
  /* V4L2 video picture grabber
-    Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+    Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org>
 
     This program is free software; you can redistribute it and/or modify
     it under the terms of the GNU General Public License as published by
+2-2
Documentation/sphinx/parse-headers.pl
···
 
 =head1 BUGS
 
-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>
 
 =head1 COPYRIGHT
 
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.
 
 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
···
 help. Contact the Chinese maintainer if this translation is outdated
 or if there is a problem with the translation.
 
-Maintainer: Mauro Carvalho Chehab <mchehab@infradead.org>
+Maintainer: Mauro Carvalho Chehab <mchehab@kernel.org>
 Chinese maintainer: Fu Wei <tekkamanninja@gmail.com>
 ---------------------------------------------------------------------
 Chinese translation of Documentation/video4linux/v4l2-framework.txt
···
 If you want to comment on or update the content of this document, please
 contact the maintainer of the original document directly. If you have
 difficulty communicating in English, you can also ask the Chinese
 maintainer for help. Contact the Chinese maintainer if this translation
 is outdated or has problems.
-English maintainer: Mauro Carvalho Chehab <mchehab@infradead.org>
+English maintainer: Mauro Carvalho Chehab <mchehab@kernel.org>
 Chinese maintainer: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 Chinese translator: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 Chinese proofreader: 傅炜 Fu Wei <tekkamanninja@gmail.com>
···
 VERSION = 4
 PATCHLEVEL = 17
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
-NAME = Fearless Coyote
+EXTRAVERSION = -rc5
+NAME = Merciless Moray
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+4
arch/Kconfig
···
 config GCC_PLUGIN_STRUCTLEAK
 	bool "Force initialization of variables containing userspace addresses"
 	depends on GCC_PLUGINS
+	# Currently STRUCTLEAK inserts initialization out of live scope of
+	# variables from KASAN point of view. This leads to KASAN false
+	# positive reports. Prohibit this combination for now.
+	depends on !KASAN_EXTRA
 	help
 	  This plugin zero-initializes any structures containing a
 	  __user attribute. This can prevent some classes of information
···
 #include <linux/compiler.h>
 #include <linux/irqchip/arm-gic.h>
 #include <linux/kvm_host.h>
+#include <linux/swab.h>
 
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
+
+static bool __hyp_text __is_be(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_mode_is_32bit(vcpu))
+		return !!(read_sysreg_el2(spsr) & COMPAT_PSR_E_BIT);
+
+	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
+}
 
 /*
  * __vgic_v2_perform_cpuif_access -- perform a GICV access on behalf of the
···
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-		u32 data = vcpu_data_guest_to_host(vcpu,
-						   vcpu_get_reg(vcpu, rd),
-						   sizeof(u32));
+		u32 data = vcpu_get_reg(vcpu, rd);
+		if (__is_be(vcpu)) {
+			/* guest pre-swabbed data, undo this for writel() */
+			data = swab32(data);
+		}
 		writel_relaxed(data, addr);
 	} else {
 		u32 data = readl_relaxed(addr);
-		vcpu_set_reg(vcpu, rd, vcpu_data_host_to_guest(vcpu, data,
-							       sizeof(u32)));
+		if (__is_be(vcpu)) {
+			/* guest expects swabbed data */
+			data = swab32(data);
+		}
+		vcpu_set_reg(vcpu, rd, data);
 	}
 
 	return 1;
+3-1
arch/arm64/mm/init.c
···
 
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (!keep_initrd)
+	if (!keep_initrd) {
 		free_reserved_area((void *)start, (void *)end, 0, "initrd");
+		memblock_free(__virt_to_phys(start), end - start);
+	}
 }
 
 static int __init keepinitrd_setup(char *__unused)
···
  * Checks all the children of @parent for a matching @id. If none
  * found, it allocates a new device and returns it.
  */
-static struct parisc_device * alloc_tree_node(struct device *parent, char id)
+static struct parisc_device * __init alloc_tree_node(
+			struct device *parent, char id)
 {
 	struct match_id_data d = {
 		.id = id,
···
  * devices which are not physically connected (such as extra serial &
  * keyboard ports). This problem is not yet solved.
  */
-static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
-			    struct device *parent)
+static void __init walk_native_bus(unsigned long io_io_low,
+	unsigned long io_io_high, struct device *parent)
 {
 	int i, devices_found = 0;
 	unsigned long hpa = io_io_low;
+1-1
arch/parisc/kernel/pci.c
···
  * pcibios_init_bridge() initializes cache line and default latency
  * for pci controllers and pci-pci bridges
  */
-void __init pcibios_init_bridge(struct pci_dev *dev)
+void __ref pcibios_init_bridge(struct pci_dev *dev)
 {
 	unsigned short bridge_ctl, bridge_ctl_new;
···
 	if (pdc_instr(&instr) == PDC_OK)
 		ivap[0] = instr;
 
+	/*
+	 * Rules for the checksum of the HPMC handler:
+	 * 1. The IVA does not point to PDC/PDH space (ie: the OS has installed
+	 *    its own IVA).
+	 * 2. The word at IVA + 32 is nonzero.
+	 * 3. If Length (IVA + 60) is not zero, then Length (IVA + 60) and
+	 *    Address (IVA + 56) are word-aligned.
+	 * 4. The checksum of the 8 words starting at IVA + 32 plus the sum of
+	 *    the Length/4 words starting at Address is zero.
+	 */
+
 	/* Compute Checksum for HPMC handler */
 	length = os_hpmc_size;
 	ivap[7] = length;
+1-1
arch/parisc/mm/init.c
···
 	}
 }
 
-void free_initmem(void)
+void __ref free_initmem(void)
 {
 	unsigned long init_begin = (unsigned long)__init_begin;
 	unsigned long init_end = (unsigned long)__init_end;
+21-8
arch/powerpc/include/asm/ftrace.h
···
 #endif
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && !defined(__ASSEMBLY__)
-#ifdef PPC64_ELF_ABI_v1
+/*
+ * Some syscall entry functions on powerpc start with "ppc_" (fork and clone,
+ * for instance) or ppc32_/ppc64_. We should also match the sys_ variant with
+ * those.
+ */
 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+#ifdef PPC64_ELF_ABI_v1
 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
 {
-	/*
-	 * Compare the symbol name with the system call name. Skip the .sys or .SyS
-	 * prefix from the symbol name and the sys prefix from the system call name and
-	 * just match the rest. This is only needed on ppc64 since symbol names on
-	 * 32bit do not start with a period so the generic function will work.
-	 */
-	return !strcmp(sym + 4, name + 3);
+	/* We need to skip past the initial dot, and the __se_sys alias */
+	return !strcmp(sym + 1, name) ||
+		(!strncmp(sym, ".__se_sys", 9) && !strcmp(sym + 6, name)) ||
+		(!strncmp(sym, ".ppc_", 5) && !strcmp(sym + 5, name + 4)) ||
+		(!strncmp(sym, ".ppc32_", 7) && !strcmp(sym + 7, name + 4)) ||
+		(!strncmp(sym, ".ppc64_", 7) && !strcmp(sym + 7, name + 4));
+}
+#else
+static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
+{
+	return !strcmp(sym, name) ||
+		(!strncmp(sym, "__se_sys", 8) && !strcmp(sym + 5, name)) ||
+		(!strncmp(sym, "ppc_", 4) && !strcmp(sym + 4, name + 4)) ||
+		(!strncmp(sym, "ppc32_", 6) && !strcmp(sym + 6, name + 4)) ||
+		(!strncmp(sym, "ppc64_", 6) && !strcmp(sym + 6, name + 4));
 }
 #endif
 #endif /* CONFIG_FTRACE_SYSCALLS && !__ASSEMBLY__ */
-1
arch/powerpc/include/asm/paca.h
···
 	u64 saved_msr;			/* MSR saved here by enter_rtas */
 	u16 trap_save;			/* Used when bad stack is encountered */
 	u8 irq_soft_mask;		/* mask for irq soft masking */
-	u8 soft_enabled;		/* irq soft-enable flag */
 	u8 irq_happened;		/* irq happened while soft-disabled */
 	u8 io_sync;			/* writel() needs spin_unlock sync */
 	u8 irq_work_pending;		/* IRQ_WORK interrupt while soft-disable */
+5-8
arch/powerpc/include/asm/topology.h
···
 extern int stop_topology_update(void);
 extern int prrn_is_enabled(void);
 extern int find_and_online_cpu_nid(int cpu);
+extern int timed_topology_update(int nsecs);
 #else
 static inline int start_topology_update(void)
 {
···
 {
 	return 0;
 }
+static inline int timed_topology_update(int nsecs)
+{
+	return 0;
+}
 #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
-
-#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_NEED_MULTIPLE_NODES)
-#if defined(CONFIG_PPC_SPLPAR)
-extern int timed_topology_update(int nsecs);
-#else
-#define timed_topology_update(nsecs)
-#endif /* CONFIG_PPC_SPLPAR */
-#endif /* CONFIG_HOTPLUG_CPU || CONFIG_NEED_MULTIPLE_NODES */
 
 #include <asm-generic/topology.h>
···
 
 	split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);
 
-	*dma_handle = virt_to_phys(ret) - PFN_PHYS(dev->dma_pfn_offset);
+	*dma_handle = virt_to_phys(ret);
+	if (!WARN_ON(!dev))
+		*dma_handle -= PFN_PHYS(dev->dma_pfn_offset);
 
 	return ret_nocache;
 }
···
 			 unsigned long attrs)
 {
 	int order = get_order(size);
-	unsigned long pfn = (dma_handle >> PAGE_SHIFT) + dev->dma_pfn_offset;
+	unsigned long pfn = dma_handle >> PAGE_SHIFT;
 	int k;
+
+	if (!WARN_ON(!dev))
+		pfn += dev->dma_pfn_offset;
 
 	for (k = 0; k < (1 << order); k++)
 		__free_pages(pfn_to_page(pfn + k), 0);
···
 	if (!memsize)
 		return 0;
 
-	buf = dma_alloc_coherent(NULL, memsize, &dma_handle, GFP_KERNEL);
+	buf = dma_alloc_coherent(&pdev->dev, memsize, &dma_handle, GFP_KERNEL);
 	if (!buf) {
 		pr_warning("%s: unable to allocate memory\n", name);
 		return -ENOMEM;
+6-62
arch/sh/mm/init.c
···
 
 	NODE_DATA(nid) = __va(phys);
 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
-
-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
 #endif
 
 	NODE_DATA(nid)->node_start_pfn = start_pfn;
 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
 }
 
-static void __init bootmem_init_one_node(unsigned int nid)
-{
-	unsigned long total_pages, paddr;
-	unsigned long end_pfn;
-	struct pglist_data *p;
-
-	p = NODE_DATA(nid);
-
-	/* Nothing to do.. */
-	if (!p->node_spanned_pages)
-		return;
-
-	end_pfn = pgdat_end_pfn(p);
-
-	total_pages = bootmem_bootmap_pages(p->node_spanned_pages);
-
-	paddr = memblock_alloc(total_pages << PAGE_SHIFT, PAGE_SIZE);
-	if (!paddr)
-		panic("Can't allocate bootmap for nid[%d]\n", nid);
-
-	init_bootmem_node(p, paddr >> PAGE_SHIFT, p->node_start_pfn, end_pfn);
-
-	free_bootmem_with_active_regions(nid, end_pfn);
-
-	/*
-	 * XXX Handle initial reservations for the system memory node
-	 * only for the moment, we'll refactor this later for handling
-	 * reservations in other nodes.
-	 */
-	if (nid == 0) {
-		struct memblock_region *reg;
-
-		/* Reserve the sections we're already using. */
-		for_each_memblock(reserved, reg) {
-			reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
-		}
-	}
-
-	sparse_memory_present_with_active_regions(nid);
-}
-
 static void __init do_init_bootmem(void)
 {
 	struct memblock_region *reg;
-	int i;
 
 	/* Add active regions with valid PFNs. */
 	for_each_memblock(memory, reg) {
···
 
 	plat_mem_setup();
 
-	for_each_online_node(i)
-		bootmem_init_one_node(i);
+	for_each_memblock(memory, reg) {
+		int nid = memblock_get_region_node(reg);
 
+		memory_present(nid, memblock_region_memory_base_pfn(reg),
+			       memblock_region_memory_end_pfn(reg));
+	}
 	sparse_init();
 }
···
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
 	unsigned long vaddr, end;
-	int nid;
 
 	sh_mv.mv_mem_init();
···
 	kmap_coherent_init();
 
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-
-	for_each_online_node(nid) {
-		pg_data_t *pgdat = NODE_DATA(nid);
-		unsigned long low, start_pfn;
-
-		start_pfn = pgdat->bdata->node_min_pfn;
-		low = pgdat->bdata->node_low_pfn;
-
-		if (max_zone_pfns[ZONE_NORMAL] < low)
-			max_zone_pfns[ZONE_NORMAL] = low;
-
-		printk("Node %u: start_pfn = 0x%lx, low = 0x%lx\n",
-		       nid, start_pfn, low);
-	}
-
+	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 	free_area_init_nodes(max_zone_pfns);
 }
-19
arch/sh/mm/numa.c
···
  * for more details.
  */
 #include <linux/module.h>
-#include <linux/bootmem.h>
 #include <linux/memblock.h>
 #include <linux/mm.h>
 #include <linux/numa.h>
···
  */
 void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
 {
-	unsigned long bootmap_pages;
 	unsigned long start_pfn, end_pfn;
-	unsigned long bootmem_paddr;
 
 	/* Don't allow bogus node assignment */
 	BUG_ON(nid >= MAX_NUMNODES || nid <= 0);
···
 					     SMP_CACHE_BYTES, end));
 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
 
-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
 	NODE_DATA(nid)->node_start_pfn = start_pfn;
 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
-
-	/* Node-local bootmap */
-	bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn);
-	bootmem_paddr = memblock_alloc_base(bootmap_pages << PAGE_SHIFT,
-					    PAGE_SIZE, end);
-	init_bootmem_node(NODE_DATA(nid), bootmem_paddr >> PAGE_SHIFT,
-			  start_pfn, end_pfn);
-
-	free_bootmem_with_active_regions(nid, end_pfn);
-
-	/* Reserve the pgdat and bootmap space with the bootmem allocator */
-	reserve_bootmem_node(NODE_DATA(nid), start_pfn << PAGE_SHIFT,
-			     sizeof(struct pglist_data), BOOTMEM_DEFAULT);
-	reserve_bootmem_node(NODE_DATA(nid), bootmem_paddr,
-			     bootmap_pages << PAGE_SHIFT, BOOTMEM_DEFAULT);
 
 	/* It's up */
 	node_set_online(nid);
+1-1
arch/sparc/include/uapi/asm/oradax.h
···
  *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
+ * the Free Software Foundation, either version 2 of the License, or
  * (at your option) any later version.
  *
  * This program is distributed in the hope that it will be useful,
+1-1
arch/sparc/kernel/vio.c
···
 	if (err) {
 		printk(KERN_ERR "VIO: Could not register device %s, err=%d\n",
 		       dev_name(&vdev->dev), err);
-		kfree(vdev);
+		put_device(&vdev->dev);
 		return NULL;
 	}
 	if (vdev->dp)
···
 			break;
 
 		case BPF_JMP | BPF_JA:
-			jmp_offset = addrs[i + insn->off] - addrs[i];
+			if (insn->off == -1)
+				/* -1 jmp instructions will always jump
+				 * backwards two bytes. Explicitly handling
+				 * this case avoids wasting too many passes
+				 * when there are long sequences of replaced
+				 * dead code.
+				 */
+				jmp_offset = -2;
+			else
+				jmp_offset = addrs[i + insn->off] - addrs[i];
+
 			if (!jmp_offset)
 				/* optimize out nop jumps */
 				break;
···
 	for (pass = 0; pass < 20 || image; pass++) {
 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
 		if (proglen <= 0) {
+out_image:
 			image = NULL;
 			if (header)
 				bpf_jit_binary_free(header);
···
 			if (proglen != oldproglen) {
 				pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
 				       proglen, oldproglen);
-				prog = orig_prog;
-				goto out_addrs;
+				goto out_image;
 			}
 			break;
 		}
···
 		prog = orig_prog;
 	}
 
-	if (!prog->is_func || extra_pass) {
+	if (!image || !prog->is_func || extra_pass) {
 out_addrs:
 		kfree(addrs);
 		kfree(jit_data);
+13
arch/x86/xen/enlighten_hvm.c
···
 {
 	early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
 	HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+
+	/*
+	 * The virtual address of the shared_info page has changed, so
+	 * the vcpu_info pointer for VCPU 0 is now stale.
+	 *
+	 * The prepare_boot_cpu callback will re-initialize it via
+	 * xen_vcpu_setup, but we can't rely on that to be called for
+	 * old Xen versions (xen_have_vector_callback == 0).
+	 *
+	 * It is, in any case, bad to have a stale vcpu_info pointer
+	 * so reset it now.
+	 */
+	xen_vcpu_info_reset(0);
 }
 
 static void __init init_hvm_pv_info(void)
+31-55
arch/x86/xen/enlighten_pv.c
···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	int level;
+	pte_t *ptep;
+	void *virt;
 
-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);
 
-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		int level;
-		pte_t *ptep;
-		unsigned long pfn, mfn;
-		void *virt;
+	/*
+	 * The GDT is per-cpu and is in the percpu data area.
+	 * That can be virtually mapped, so we need to do a
+	 * page-walk to get the underlying MFN for the
+	 * hypercall.  The page can also be in the kernel's
+	 * linear range, so we need to RO that mapping too.
+	 */
+	ptep = lookup_address(va, &level);
+	BUG_ON(ptep == NULL);
 
-		/*
-		 * The GDT is per-cpu and is in the percpu data area.
-		 * That can be virtually mapped, so we need to do a
-		 * page-walk to get the underlying MFN for the
-		 * hypercall.  The page can also be in the kernel's
-		 * linear range, so we need to RO that mapping too.
-		 */
-		ptep = lookup_address(va, &level);
-		BUG_ON(ptep == NULL);
+	pfn = pte_pfn(*ptep);
+	mfn = pfn_to_mfn(pfn);
+	virt = __va(PFN_PHYS(pfn));
 
-		pfn = pte_pfn(*ptep);
-		mfn = pfn_to_mfn(pfn);
-		virt = __va(PFN_PHYS(pfn));
+	make_lowmem_page_readonly((void *)va);
+	make_lowmem_page_readonly(virt);
 
-		frames[f] = mfn;
-
-		make_lowmem_page_readonly((void *)va);
-		make_lowmem_page_readonly(virt);
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }
···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	pte_t pte;
 
-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);
 
-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		pte_t pte;
-		unsigned long pfn, mfn;
+	pfn = virt_to_pfn(va);
+	mfn = pfn_to_mfn(pfn);
 
-		pfn = virt_to_pfn(va);
-		mfn = pfn_to_mfn(pfn);
+	pte = pfn_pte(pfn, PAGE_KERNEL_RO);
 
-		pte = pfn_pte(pfn, PAGE_KERNEL_RO);
+	if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
+		BUG();
 
-		if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
-			BUG();
-
-		frames[f] = mfn;
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }
+28-12
block/blk-mq.c
···
 {
 	struct mq_inflight *mi = priv;
 
-	if (blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) {
-		/*
-		 * index[0] counts the specific partition that was asked
-		 * for. index[1] counts the ones that are active on the
-		 * whole device, so increment that if mi->part is indeed
-		 * a partition, and not a whole device.
-		 */
-		if (rq->part == mi->part)
-			mi->inflight[0]++;
-		if (mi->part->partno)
-			mi->inflight[1]++;
-	}
+	/*
+	 * index[0] counts the specific partition that was asked for. index[1]
+	 * counts the ones that are active on the whole device, so increment
+	 * that if mi->part is indeed a partition, and not a whole device.
+	 */
+	if (rq->part == mi->part)
+		mi->inflight[0]++;
+	if (mi->part->partno)
+		mi->inflight[1]++;
 }
 
 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
···
 
 	inflight[0] = inflight[1] = 0;
 	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+}
+
+static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
+				     struct request *rq, void *priv,
+				     bool reserved)
+{
+	struct mq_inflight *mi = priv;
+
+	if (rq->part == mi->part)
+		mi->inflight[rq_data_dir(rq)]++;
+}
+
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2])
+{
+	struct mq_inflight mi = { .part = part, .inflight = inflight, };
+
+	inflight[0] = inflight[1] = 0;
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
 }
 
 void blk_freeze_queue_start(struct request_queue *q)
+3-1
block/blk-mq.h
···
 }
 
 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
-		unsigned int inflight[2]);
+		      unsigned int inflight[2]);
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2]);
 
 static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
 {
···
 
 	DPRINTK("ENTER\n");
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	rc = sata_link_hardreset(link, sata_ehc_deb_timing(&link->eh_context),
 				 deadline, &online, NULL);
···
 	bool online;
 	int rc;
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	/* clear D2H reception area to properly wait for D2H FIS */
 	ata_tf_init(link->device, &tf);
···
 
 	DPRINTK("ENTER\n");
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	for (i = 0; i < 2; i++) {
 		u16 val;
+7-1
drivers/ata/ahci.h
···
 	u32			em_msg_type;	/* EM message type */
 	bool			got_runtime_pm; /* Did we do pm_runtime_get? */
 	struct clk		*clks[AHCI_MAX_CLKS]; /* Optional */
-	struct reset_control	*rsts;		/* Optional */
 	struct regulator	**target_pwrs;	/* Optional */
 	/*
 	 * If platform uses PHYs. There is a 1:1 relation between the port number and
···
 	 * be overridden anytime before the host is activated.
 	 */
 	void			(*start_engine)(struct ata_port *ap);
+	/*
+	 * Optional ahci_stop_engine override, if not set this gets set to the
+	 * default ahci_stop_engine during ahci_save_initial_config, this can
+	 * be overridden anytime before the host is activated.
+	 */
+	int			(*stop_engine)(struct ata_port *ap);
+
 	irqreturn_t		(*irq_handler)(int irq, void *dev_instance);
 
 	/* only required for per-port MSI(-X) support */
+56
drivers/ata/ahci_mvebu.c
···6262 writel(0x80, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA);6363}64646565+/**6666+ * ahci_mvebu_stop_engine6767+ *6868+ * @ap: Target ata port6969+ *7070+ * Errata Ref#226 - SATA Disk HOT swap issue when connected through7171+ * Port Multiplier in FIS-based Switching mode.7272+ *7373+ * To avoid the issue, according to design, the bits[11:8, 0] of7474+ * register PxFBS are cleared when Port Command and Status (0x18) bit[0]7575+ * changes its value from 1 to 0, i.e. falling edge of Port7676+ * Command and Status bit[0] sends PULSE that resets PxFBS7777+ * bits[11:8; 0].7878+ *7979+ * This function is used to override function of "ahci_stop_engine"8080+ * from libahci.c by adding the mvebu work around(WA) to save PxFBS8181+ * value before the PxCMD ST write of 0, then restore PxFBS value.8282+ *8383+ * Return: 0 on success; Error code otherwise.8484+ */8585+int ahci_mvebu_stop_engine(struct ata_port *ap)8686+{8787+ void __iomem *port_mmio = ahci_port_base(ap);8888+ u32 tmp, port_fbs;8989+9090+ tmp = readl(port_mmio + PORT_CMD);9191+9292+ /* check if the HBA is idle */9393+ if ((tmp & (PORT_CMD_START | PORT_CMD_LIST_ON)) == 0)9494+ return 0;9595+9696+ /* save the port PxFBS register for later restore */9797+ port_fbs = readl(port_mmio + PORT_FBS);9898+9999+ /* setting HBA to idle */100100+ tmp &= ~PORT_CMD_START;101101+ writel(tmp, port_mmio + PORT_CMD);102102+103103+ /*104104+ * bit #15 PxCMD signal doesn't clear PxFBS,105105+ * restore the PxFBS register right after clearing the PxCMD ST,106106+ * no need to wait for the PxCMD bit #15.107107+ */108108+ writel(port_fbs, port_mmio + PORT_FBS);109109+110110+ /* wait for engine to stop. This could be as long as 500 msec */111111+ tmp = ata_wait_register(ap, port_mmio + PORT_CMD,112112+ PORT_CMD_LIST_ON, PORT_CMD_LIST_ON, 1, 500);113113+ if (tmp & PORT_CMD_LIST_ON)114114+ return -EIO;115115+116116+ return 0;117117+}118118+65119#ifdef CONFIG_PM_SLEEP66120static int ahci_mvebu_suspend(struct platform_device *pdev, pm_message_t state)67121{···165111 rc = ahci_platform_enable_resources(hpriv);166112 if (rc)167113 return rc;114114+115115+ hpriv->stop_engine = ahci_mvebu_stop_engine;168116169117 if (of_device_is_compatible(pdev->dev.of_node,170118 "marvell,armada-380-ahci")) {
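The `ata_wait_register()` call above is a bounded poll on a hardware register: spin while the masked bits still equal the given value, up to a timeout. A minimal userspace analogue of that loop, with a fake register standing in for the hardware (all names here are illustrative, and the iteration count replaces the kernel's interval/timeout in msec):

```c
#include <assert.h>
#include <stdint.h>

#define CMD_LIST_ON (1u << 15)

/* Fake "register": clears CMD_LIST_ON after a few reads, standing in
 * for the DMA engine that ata_wait_register() watches. */
static uint32_t fake_reg = CMD_LIST_ON;
static int polls_left = 3;

static uint32_t read_reg(void)
{
	if (polls_left > 0 && --polls_left == 0)
		fake_reg &= ~CMD_LIST_ON;
	return fake_reg;
}

/*
 * Minimal analogue of ata_wait_register(): keep reading while
 * (val & mask) == cond and the poll budget is not exhausted;
 * return the last value read so the caller can check for timeout.
 */
static uint32_t wait_register(uint32_t mask, uint32_t cond, int max_polls)
{
	uint32_t val = read_reg();

	while ((val & mask) == cond && --max_polls > 0)
		val = read_reg();
	return val;
}
```

As in the mvebu code, the caller decides success by re-testing the bit in the returned value rather than relying on a separate status code.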
+1-1
drivers/ata/ahci_qoriq.c
···96969797 DPRINTK("ENTER\n");98989999- ahci_stop_engine(ap);9999+ hpriv->stop_engine(ap);100100101101 /*102102 * There is a errata on ls1021a Rev1.0 and Rev2.0 which is:
···2525#include <linux/phy/phy.h>2626#include <linux/pm_runtime.h>2727#include <linux/of_platform.h>2828-#include <linux/reset.h>2928#include "ahci.h"30293130static void ahci_host_stop(struct ata_host *host);···195196 * following order:196197 * 1) Regulator197198 * 2) Clocks (through ahci_platform_enable_clks)198198- * 3) Resets199199- * 4) Phys199199+ * 3) Phys200200 *201201 * If resource enabling fails at any point the previous enabled resources202202 * are disabled in reverse order.···215217 if (rc)216218 goto disable_regulator;217219218218- rc = reset_control_deassert(hpriv->rsts);220220+ rc = ahci_platform_enable_phys(hpriv);219221 if (rc)220222 goto disable_clks;221223222222- rc = ahci_platform_enable_phys(hpriv);223223- if (rc)224224- goto disable_resets;225225-226224 return 0;227227-228228-disable_resets:229229- reset_control_assert(hpriv->rsts);230225231226disable_clks:232227 ahci_platform_disable_clks(hpriv);···239248 * following order:240249 * 1) Phys241250 * 2) Clocks (through ahci_platform_disable_clks)242242- * 3) Resets243243- * 4) Regulator251251+ * 3) Regulator244252 */245253void ahci_platform_disable_resources(struct ahci_host_priv *hpriv)246254{247255 ahci_platform_disable_phys(hpriv);248248-249249- reset_control_assert(hpriv->rsts);250256251257 ahci_platform_disable_clks(hpriv);252258···391403 break;392404 }393405 hpriv->clks[i] = clk;394394- }395395-396396- hpriv->rsts = devm_reset_control_array_get_optional_shared(dev);397397- if (IS_ERR(hpriv->rsts)) {398398- rc = PTR_ERR(hpriv->rsts);399399- goto err_out;400406 }401407402408 hpriv->nports = child_nodes = of_get_child_count(dev->of_node);
+6
drivers/ata/libata-core.c
···45494549 ATA_HORKAGE_ZERO_AFTER_TRIM |45504550 ATA_HORKAGE_NOLPM, },4551455145524552+ /* This specific Samsung model/firmware-rev does not handle LPM well */45534553+ { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, },45544554+45554555+ /* Sandisk devices which are known to not handle LPM well */45564556+ { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, },45574557+45524558 /* devices that don't properly handle queued TRIM commands */45534559 { "Micron_M500_*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |45544560 ATA_HORKAGE_ZERO_AFTER_TRIM, },
···410410 int rc;411411 int retry = 100;412412413413- ahci_stop_engine(ap);413413+ hpriv->stop_engine(ap);414414415415 /* clear D2H reception area to properly wait for D2H FIS */416416 ata_tf_init(link->device, &tf);
+2-2
drivers/ata/sata_sil24.c
···285285 [PORT_CERR_INCONSISTENT] = { AC_ERR_HSM, ATA_EH_RESET,286286 "protocol mismatch" },287287 [PORT_CERR_DIRECTION] = { AC_ERR_HSM, ATA_EH_RESET,288288- "data directon mismatch" },288288+ "data direction mismatch" },289289 [PORT_CERR_UNDERRUN] = { AC_ERR_HSM, ATA_EH_RESET,290290 "ran out of SGEs while writing" },291291 [PORT_CERR_OVERRUN] = { AC_ERR_HSM, ATA_EH_RESET,292292 "ran out of SGEs while reading" },293293 [PORT_CERR_PKT_PROT] = { AC_ERR_HSM, ATA_EH_RESET,294294- "invalid data directon for ATAPI CDB" },294294+ "invalid data direction for ATAPI CDB" },295295 [PORT_CERR_SGT_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_RESET,296296 "SGT not on qword boundary" },297297 [PORT_CERR_SGT_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_RESET,
···126126 cpu->perf_caps.lowest_perf, cpu_num, ret);127127}128128129129+/*130130+ * The PCC subspace describes the rate at which platform can accept commands131131+ * on the shared PCC channel (including READs which do not count towards freq132132+ * trasition requests), so ideally we need to use the PCC values as a fallback133133+ * if we don't have a platform specific transition_delay_us134134+ */135135+#ifdef CONFIG_ARM64136136+#include <asm/cputype.h>137137+138138+static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)139139+{140140+ unsigned long implementor = read_cpuid_implementor();141141+ unsigned long part_num = read_cpuid_part_number();142142+ unsigned int delay_us = 0;143143+144144+ switch (implementor) {145145+ case ARM_CPU_IMP_QCOM:146146+ switch (part_num) {147147+ case QCOM_CPU_PART_FALKOR_V1:148148+ case QCOM_CPU_PART_FALKOR:149149+ delay_us = 10000;150150+ break;151151+ default:152152+ delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC;153153+ break;154154+ }155155+ break;156156+ default:157157+ delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC;158158+ break;159159+ }160160+161161+ return delay_us;162162+}163163+164164+#else165165+166166+static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu)167167+{168168+ return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;169169+}170170+#endif171171+129172static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)130173{131174 struct cppc_cpudata *cpu;···205162 cpu->perf_caps.highest_perf;206163 policy->cpuinfo.max_freq = cppc_dmi_max_khz;207164208208- policy->transition_delay_us = cppc_get_transition_latency(cpu_num) /209209- NSEC_PER_USEC;165165+ policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num);210166 policy->shared_type = cpu->shared_type;211167212168 if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
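The fallback logic above boils down to: known quirky parts get a fixed 10 ms transition delay, everything else derives the delay from the PCC-reported latency by converting ns to us. A hedged sketch of that selection (the per-CPU lookup is a stand-in for `cppc_get_transition_latency()`, and the quirk test replaces the implementor/part-number switch):

```c
#include <assert.h>

#define NSEC_PER_USEC 1000

/* Hypothetical per-CPU lookup standing in for
 * cppc_get_transition_latency(); real code derives this from the
 * PCC subspace timing values. */
static unsigned int transition_latency_ns(int cpu)
{
	(void)cpu;
	return 500 * NSEC_PER_USEC;	/* pretend firmware reports 500 us */
}

/*
 * Mirrors cppc_cpufreq_get_transition_delay_us(): quirky parts get a
 * fixed 10 ms delay, everything else converts the reported latency.
 */
static unsigned int transition_delay_us(int cpu, int is_quirky_part)
{
	if (is_quirky_part)
		return 10000;
	return transition_latency_ns(cpu) / NSEC_PER_USEC;
}
```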
···214214 INIT_LIST_HEAD(&nvbo->entry);215215 INIT_LIST_HEAD(&nvbo->vma_list);216216 nvbo->bo.bdev = &drm->ttm.bdev;217217- nvbo->cli = cli;218217219218 /* This is confusing, and doesn't actually mean we want an uncached220219 * mapping, but is what NOUVEAU_GEM_DOMAIN_COHERENT gets translated
···462462 select NEW_LEDS463463 select LEDS_CLASS464464 ---help---465465- Support for Lenovo devices that are not fully compliant with HID standard.465465+ Support for IBM/Lenovo devices that are not fully compliant with HID standard.466466467467- Say Y if you want support for the non-compliant features of the Lenovo468468- Thinkpad standalone keyboards, e.g:467467+ Say Y if you want support for horizontal scrolling of the IBM/Lenovo468468+ Scrollpoint mice or the non-compliant features of the Lenovo Thinkpad469469+ standalone keyboards, e.g:469470 - ThinkPad USB Keyboard with TrackPoint (supports extra LEDs and trackpoint470471 configuration)471472 - ThinkPad Compact Bluetooth Keyboard with TrackPoint (supports Fn keys)
···6161 pages on demand instead.62626363config INFINIBAND_ADDR_TRANS6464- bool6464+ bool "RDMA/CM"6565 depends on INFINIBAND6666 default y6767+ ---help---6868+ Support for RDMA communication manager (CM).6969+ This allows for a generic connection abstraction over RDMA.67706871config INFINIBAND_ADDR_TRANS_CONFIGFS6972 bool
+35-20
drivers/infiniband/core/cache.c
···291291 * so lookup free slot only if requested.292292 */293293 if (pempty && empty < 0) {294294- if (data->props & GID_TABLE_ENTRY_INVALID) {295295- /* Found an invalid (free) entry; allocate it */296296- if (data->props & GID_TABLE_ENTRY_DEFAULT) {297297- if (default_gid)298298- empty = curr_index;299299- } else {300300- empty = curr_index;301301- }294294+ if (data->props & GID_TABLE_ENTRY_INVALID &&295295+ (default_gid ==296296+ !!(data->props & GID_TABLE_ENTRY_DEFAULT))) {297297+ /*298298+ * Found an invalid (free) entry; allocate it.299299+ * If default GID is requested, then our300300+ * found slot must be one of the DEFAULT301301+ * reserved slots or we fail.302302+ * This ensures that only DEFAULT reserved303303+ * slots are used for default property GIDs.304304+ */305305+ empty = curr_index;302306 }303307 }304308···424420 return ret;425421}426422427427-int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,428428- union ib_gid *gid, struct ib_gid_attr *attr)423423+static int424424+_ib_cache_gid_del(struct ib_device *ib_dev, u8 port,425425+ union ib_gid *gid, struct ib_gid_attr *attr,426426+ unsigned long mask, bool default_gid)429427{430428 struct ib_gid_table *table;431429 int ret = 0;···437431438432 mutex_lock(&table->lock);439433440440- ix = find_gid(table, gid, attr, false,441441- GID_ATTR_FIND_MASK_GID |442442- GID_ATTR_FIND_MASK_GID_TYPE |443443- GID_ATTR_FIND_MASK_NETDEV,444444- NULL);434434+ ix = find_gid(table, gid, attr, default_gid, mask, NULL);445435 if (ix < 0) {446436 ret = -EINVAL;447437 goto out_unlock;···452450 pr_debug("%s: can't delete gid %pI6 error=%d\n",453451 __func__, gid->raw, ret);454452 return ret;453453+}454454+455455+int ib_cache_gid_del(struct ib_device *ib_dev, u8 port,456456+ union ib_gid *gid, struct ib_gid_attr *attr)457457+{458458+ unsigned long mask = GID_ATTR_FIND_MASK_GID |459459+ GID_ATTR_FIND_MASK_GID_TYPE |460460+ GID_ATTR_FIND_MASK_DEFAULT |461461+ GID_ATTR_FIND_MASK_NETDEV;462462+463463+ return _ib_cache_gid_del(ib_dev, port, gid, attr, mask, false);455464}456465457466int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u8 port,···741728 unsigned long gid_type_mask,742729 enum ib_cache_gid_default_mode mode)743730{744744- union ib_gid gid;731731+ union ib_gid gid = { };745732 struct ib_gid_attr gid_attr;746733 struct ib_gid_table *table;747734 unsigned int gid_type;···749736750737 table = ib_dev->cache.ports[port - rdma_start_port(ib_dev)].gid;751738752752- make_default_gid(ndev, &gid);739739+ mask = GID_ATTR_FIND_MASK_GID_TYPE |740740+ GID_ATTR_FIND_MASK_DEFAULT |741741+ GID_ATTR_FIND_MASK_NETDEV;753742 memset(&gid_attr, 0, sizeof(gid_attr));754743 gid_attr.ndev = ndev;755744···762747 gid_attr.gid_type = gid_type;763748764749 if (mode == IB_CACHE_GID_DEFAULT_MODE_SET) {765765- mask = GID_ATTR_FIND_MASK_GID_TYPE |766766- GID_ATTR_FIND_MASK_DEFAULT;750750+ make_default_gid(ndev, &gid);767751 __ib_cache_gid_add(ib_dev, port, &gid,768752 &gid_attr, mask, true);769753 } else if (mode == IB_CACHE_GID_DEFAULT_MODE_DELETE) {770770- ib_cache_gid_del(ib_dev, port, &gid, &gid_attr);754754+ _ib_cache_gid_del(ib_dev, port, &gid,755755+ &gid_attr, mask, true);771756 }772757 }773758}
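The reworked `find_gid()` test above relies on `!!` to normalize the masked `GID_TABLE_ENTRY_DEFAULT` flag to 0/1 before comparing it with the `default_gid` bool, so default GIDs can only land in default-reserved slots and vice versa. A small sketch of that slot-matching rule (the flag values here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

#define ENTRY_INVALID  0x1	/* slot is free */
#define ENTRY_DEFAULT  0x2	/* slot reserved for default GIDs */

/*
 * Mirrors the fixed find_gid() condition: a free slot is usable only
 * when its DEFAULT property matches the kind of GID being placed.
 * The !! collapses the masked flag to 0/1 so it compares cleanly
 * against a bool.
 */
static bool slot_usable(unsigned int props, bool want_default)
{
	return (props & ENTRY_INVALID) &&
	       (want_default == !!(props & ENTRY_DEFAULT));
}
```

Without the `!!`, comparing `want_default` (0 or 1) against the raw masked value (0 or 0x2) would reject every default slot.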
+43-17
drivers/infiniband/core/cma.c
···382382#define CMA_VERSION 0x00383383384384struct cma_req_info {385385+ struct sockaddr_storage listen_addr_storage;386386+ struct sockaddr_storage src_addr_storage;385387 struct ib_device *device;386388 int port;387389 union ib_gid local_gid;···868866{869867 struct ib_qp_attr qp_attr;870868 int qp_attr_mask, ret;871871- union ib_gid sgid;872869873870 mutex_lock(&id_priv->qp_mutex);874871 if (!id_priv->id.qp) {···887886888887 qp_attr.qp_state = IB_QPS_RTR;889888 ret = rdma_init_qp_attr(&id_priv->id, &qp_attr, &qp_attr_mask);890890- if (ret)891891- goto out;892892-893893- ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num,894894- rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index,895895- &sgid, NULL);896889 if (ret)897890 goto out;898891···13351340}1336134113371342static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event,13381338- const struct cma_req_info *req)13431343+ struct cma_req_info *req)13391344{13401340- struct sockaddr_storage listen_addr_storage, src_addr_storage;13411341- struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage,13421342- *src_addr = (struct sockaddr *)&src_addr_storage;13451345+ struct sockaddr *listen_addr =13461346+ (struct sockaddr *)&req->listen_addr_storage;13471347+ struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage;13431348 struct net_device *net_dev;13441349 const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL;13451350 int err;···13531358 gid, listen_addr);13541359 if (!net_dev)13551360 return ERR_PTR(-ENODEV);13561356-13571357- if (!validate_net_dev(net_dev, listen_addr, src_addr)) {13581358- dev_put(net_dev);13591359- return ERR_PTR(-EHOSTUNREACH);13601360- }1361136113621362 return net_dev;13631363}···14801490 }14811491 }1482149214931493+ /*14941494+ * Net namespace might be getting deleted while route lookup,14951495+ * cm_id lookup is in progress. Therefore, perform netdevice14961496+ * validation, cm_id lookup under rcu lock.14971497+ * RCU lock along with netdevice state check, synchronizes with14981498+ * netdevice migrating to different net namespace and also avoids14991499+ * case where net namespace doesn't get deleted while lookup is in15001500+ * progress.15011501+ * If the device state is not IFF_UP, its properties such as ifindex15021502+ * and nd_net cannot be trusted to remain valid without rcu lock.15031503+ * net/core/dev.c change_net_namespace() ensures to synchronize with15041504+ * ongoing operations on net device after device is closed using15051505+ * synchronize_net().15061506+ */15071507+ rcu_read_lock();15081508+ if (*net_dev) {15091509+ /*15101510+ * If netdevice is down, it is likely that it is administratively15111511+ * down or it might be migrating to different namespace.15121512+ * In that case avoid further processing, as the net namespace15131513+ * or ifindex may change.15141514+ */15151515+ if (((*net_dev)->flags & IFF_UP) == 0) {15161516+ id_priv = ERR_PTR(-EHOSTUNREACH);15171517+ goto err;15181518+ }15191519+15201520+ if (!validate_net_dev(*net_dev,15211521+ (struct sockaddr *)&req.listen_addr_storage,15221522+ (struct sockaddr *)&req.src_addr_storage)) {15231523+ id_priv = ERR_PTR(-EHOSTUNREACH);15241524+ goto err;15251525+ }15261526+ }15271527+14831528 bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net,14841529 rdma_ps_from_service_id(req.service_id),14851530 cma_port_from_service_id(req.service_id));14861531 id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev);15321532+err:15331533+ rcu_read_unlock();14871534 if (IS_ERR(id_priv) && *net_dev) {14881535 dev_put(*net_dev);14891536 *net_dev = NULL;14901537 }14911491-14921538 return id_priv;14931539}14941540
+4-1
drivers/infiniband/core/iwpm_util.c
···114114 struct sockaddr_storage *mapped_sockaddr,115115 u8 nl_client)116116{117117- struct hlist_head *hash_bucket_head;117117+ struct hlist_head *hash_bucket_head = NULL;118118 struct iwpm_mapping_info *map_info;119119 unsigned long flags;120120 int ret = -EINVAL;···142142 }143143 }144144 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);145145+146146+ if (!hash_bucket_head)147147+ kfree(map_info);145148 return ret;146149}147150
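The iwpm fix above is the classic insert-or-free ownership rule: the allocation is only handed off to the list when a bucket is found; otherwise ownership stays with the caller, which must free it to avoid the leak the patch plugs. A simplified sketch without the locking (names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

struct map_info {
	int key;
	struct map_info *next;
};

/*
 * Mirrors the iwpm pattern: the new node is linked into the bucket
 * only when one was resolved; otherwise we still own it and must
 * free it before returning. Returns 0 on insert, -1 otherwise.
 */
static int add_mapping(struct map_info **bucket, int key)
{
	struct map_info *info = malloc(sizeof(*info));

	if (!info)
		return -1;
	info->key = key;

	if (!bucket) {		/* no hash bucket resolved: don't leak */
		free(info);
		return -1;
	}
	info->next = *bucket;
	*bucket = info;
	return 0;
}
```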
+2-2
drivers/infiniband/core/mad.c
···5959MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests");60606161static struct list_head ib_mad_port_list;6262-static u32 ib_mad_client_id = 0;6262+static atomic_t ib_mad_client_id = ATOMIC_INIT(0);63636464/* Port list lock */6565static DEFINE_SPINLOCK(ib_mad_port_list_lock);···377377 }378378379379 spin_lock_irqsave(&port_priv->reg_lock, flags);380380- mad_agent_priv->agent.hi_tid = ++ib_mad_client_id;380380+ mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id);381381382382 /*383383 * Make sure MAD registration (if supplied)
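Switching `ib_mad_client_id` to `atomic_t` makes TID allocation safe even if registrations race, where the plain `++ib_mad_client_id` read-modify-write could hand out duplicates. A userspace analogue using C11 atomics (`alloc_client_id` is an illustrative name):

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace analogue of atomic_inc_return(): fetch-add then add one,
 * so every caller sees a unique, monotonically increasing ID. */
static atomic_uint client_id;

static unsigned int alloc_client_id(void)
{
	return atomic_fetch_add(&client_id, 1) + 1;
}
```

`atomic_fetch_add()` returns the previous value, so adding 1 reproduces the inc-and-return semantics of the kernel helper.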
···8888 * pio buffers per ctxt, etc.) Zero means use one user context per CPU.8989 */9090int num_user_contexts = -1;9191-module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO);9191+module_param_named(num_user_contexts, num_user_contexts, int, 0444);9292MODULE_PARM_DESC(9393- num_user_contexts, "Set max number of user contexts to use");9393+ num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)");94949595uint krcvqs[RXE_NUM_DATA_VL];9696int krcvqsset;···12091209 kfree(ad);12101210}1211121112121212-static void __hfi1_free_devdata(struct kobject *kobj)12121212+/**12131213+ * hfi1_clean_devdata - cleans up per-unit data structure12141214+ * @dd: pointer to a valid devdata structure12151215+ *12161216+ * It cleans up all data structures set up by12171217+ * by hfi1_alloc_devdata().12181218+ */12191219+static void hfi1_clean_devdata(struct hfi1_devdata *dd)12131220{12141214- struct hfi1_devdata *dd =12151215- container_of(kobj, struct hfi1_devdata, kobj);12161221 struct hfi1_asic_data *ad;12171222 unsigned long flags;1218122312191224 spin_lock_irqsave(&hfi1_devs_lock, flags);12201220- idr_remove(&hfi1_unit_table, dd->unit);12211221- list_del(&dd->list);12251225+ if (!list_empty(&dd->list)) {12261226+ idr_remove(&hfi1_unit_table, dd->unit);12271227+ list_del_init(&dd->list);12281228+ }12221229 ad = release_asic_data(dd);12231230 spin_unlock_irqrestore(&hfi1_devs_lock, flags);12241224- if (ad)12251225- finalize_asic_data(dd, ad);12311231+12321232+ finalize_asic_data(dd, ad);12261233 free_platform_config(dd);12271234 rcu_barrier(); /* wait for rcu callbacks to complete */12281235 free_percpu(dd->int_counter);12291236 free_percpu(dd->rcv_limit);12301237 free_percpu(dd->send_schedule);12311238 free_percpu(dd->tx_opstats);12391239+ dd->int_counter = NULL;12401240+ dd->rcv_limit = NULL;12411241+ dd->send_schedule = NULL;12421242+ dd->tx_opstats = NULL;12321243 sdma_clean(dd, dd->num_sdma);12331244 rvt_dealloc_device(&dd->verbs_dev.rdi);12451245+}12461246+12471247+static void __hfi1_free_devdata(struct kobject *kobj)12481248+{12491249+ struct hfi1_devdata *dd =12501250+ container_of(kobj, struct hfi1_devdata, kobj);12511251+12521252+ hfi1_clean_devdata(dd);12341253}1235125412361255static struct kobj_type hfi1_devdata_type = {···12841265 return ERR_PTR(-ENOMEM);12851266 dd->num_pports = nports;12861267 dd->pport = (struct hfi1_pportdata *)(dd + 1);12681268+ dd->pcidev = pdev;12691269+ pci_set_drvdata(pdev, dd);1287127012881271 INIT_LIST_HEAD(&dd->list);12891272 idr_preload(GFP_KERNEL);···13521331 return dd;1353133213541333bail:13551355- if (!list_empty(&dd->list))13561356- list_del_init(&dd->list);13571357- rvt_dealloc_device(&dd->verbs_dev.rdi);13341334+ hfi1_clean_devdata(dd);13581335 return ERR_PTR(ret);13591336}13601337
-3
drivers/infiniband/hw/hfi1/pcie.c
···163163 resource_size_t addr;164164 int ret = 0;165165166166- dd->pcidev = pdev;167167- pci_set_drvdata(pdev, dd);168168-169166 addr = pci_resource_start(pdev, 0);170167 len = pci_resource_len(pdev, 0);171168
+1
drivers/infiniband/hw/hfi1/platform.c
···199199{200200 /* Release memory allocated for eprom or fallback file read. */201201 kfree(dd->platform_config.data);202202+ dd->platform_config.data = NULL;202203}203204204205void get_port_type(struct hfi1_pportdata *ppd)
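The hfi1 changes above consistently free each resource and then NULL the pointer, so the shared cleanup helper can run from both the init error path and the final teardown without a double free. A minimal sketch of that idempotent-cleanup idiom (`devdata`/`clean_devdata` are illustrative names):

```c
#include <assert.h>
#include <stdlib.h>

struct devdata {
	int *counter;	/* stands in for the per-cpu counters etc. */
};

/*
 * Free and NULL each resource: free(NULL) is a no-op, so calling this
 * twice (error path, then final unwind) is safe.
 */
static void clean_devdata(struct devdata *dd)
{
	free(dd->counter);
	dd->counter = NULL;
}
```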
···733733 ohdr->bth[2] = cpu_to_be32(bth2);734734}735735736736+/**737737+ * hfi1_make_ruc_header_16B - build a 16B header738738+ * @qp: the queue pair739739+ * @ohdr: a pointer to the destination header memory740740+ * @bth0: bth0 passed in from the RC/UC builder741741+ * @bth2: bth2 passed in from the RC/UC builder742742+ * @middle: non zero implies indicates ahg "could" be used743743+ * @ps: the current packet state744744+ *745745+ * This routine may disarm ahg under these situations:746746+ * - packet needs a GRH747747+ * - BECN needed748748+ * - migration state not IB_MIG_MIGRATED749749+ */736750static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp,737751 struct ib_other_headers *ohdr,738752 u32 bth0, u32 bth2, int middle,···791777 else792778 middle = 0;793779780780+ if (qp->s_flags & RVT_S_ECN) {781781+ qp->s_flags &= ~RVT_S_ECN;782782+ /* we recently received a FECN, so return a BECN */783783+ becn = true;784784+ middle = 0;785785+ }794786 if (middle)795787 build_ahg(qp, bth2);796788 else···804784805785 bth0 |= pkey;806786 bth0 |= extra_bytes << 20;807807- if (qp->s_flags & RVT_S_ECN) {808808- qp->s_flags &= ~RVT_S_ECN;809809- /* we recently received a FECN, so return a BECN */810810- becn = true;811811- }812787 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);813788814789 if (!ppd->lid)···821806 pkey, becn, 0, l4, priv->s_sc);822807}823808809809+/**810810+ * hfi1_make_ruc_header_9B - build a 9B header811811+ * @qp: the queue pair812812+ * @ohdr: a pointer to the destination header memory813813+ * @bth0: bth0 passed in from the RC/UC builder814814+ * @bth2: bth2 passed in from the RC/UC builder815815+ * @middle: non zero implies indicates ahg "could" be used816816+ * @ps: the current packet state817817+ *818818+ * This routine may disarm ahg under these situations:819819+ * - packet needs a GRH820820+ * - BECN needed821821+ * - migration state not IB_MIG_MIGRATED822822+ */824823static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp,825824 struct ib_other_headers *ohdr,826825 u32 bth0, u32 bth2, int middle,···868839 else869840 middle = 0;870841842842+ if (qp->s_flags & RVT_S_ECN) {843843+ qp->s_flags &= ~RVT_S_ECN;844844+ /* we recently received a FECN, so return a BECN */845845+ bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);846846+ middle = 0;847847+ }871848 if (middle)872849 build_ahg(qp, bth2);873850 else···881846882847 bth0 |= pkey;883848 bth0 |= extra_bytes << 20;884884- if (qp->s_flags & RVT_S_ECN) {885885- qp->s_flags &= ~RVT_S_ECN;886886- /* we recently received a FECN, so return a BECN */887887- bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT);888888- }889849 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2);890850 hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh,891851 lrh0,
···11config INFINIBAND_SRP22 tristate "InfiniBand SCSI RDMA Protocol"33- depends on SCSI33+ depends on SCSI && INFINIBAND_ADDR_TRANS44 select SCSI_SRP_ATTRS55 ---help---66 Support for the SCSI RDMA Protocol over InfiniBand. This
+1-1
drivers/infiniband/ulp/srpt/Kconfig
···11config INFINIBAND_SRPT22 tristate "InfiniBand SCSI RDMA Protocol target support"33- depends on INFINIBAND && TARGET_CORE33+ depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE44 ---help---5566 Support for the SCSI RDMA Protocol (SRP) Target driver. The
···583583584584 x = (s8)(((packet[0] & 0x20) << 2) | (packet[1] & 0x7f));585585 y = (s8)(((packet[0] & 0x10) << 3) | (packet[2] & 0x7f));586586- z = packet[4] & 0x7c;586586+ z = packet[4] & 0x7f;587587588588 /*589589 * The x and y values tend to be quite large, and when used
+5-2
drivers/input/rmi4/rmi_spi.c
···147147 if (len > RMI_SPI_XFER_SIZE_LIMIT)148148 return -EINVAL;149149150150- if (rmi_spi->xfer_buf_size < len)151151- rmi_spi_manage_pools(rmi_spi, len);150150+ if (rmi_spi->xfer_buf_size < len) {151151+ ret = rmi_spi_manage_pools(rmi_spi, len);152152+ if (ret < 0)153153+ return ret;154154+ }152155153156 if (addr == 0)154157 /*
+1-1
drivers/input/touchscreen/Kconfig
···362362363363 If unsure, say N.364364365365- To compile this driver as a moudle, choose M here : the365365+ To compile this driver as a module, choose M here : the366366 module will be called hideep_ts.367367368368config TOUCHSCREEN_ILI210X
+124-76
drivers/input/touchscreen/atmel_mxt_ts.c
···280280 struct input_dev *input_dev;281281 char phys[64]; /* device physical location */282282 struct mxt_object *object_table;283283- struct mxt_info info;283283+ struct mxt_info *info;284284+ void *raw_info_block;284285 unsigned int irq;285286 unsigned int max_x;286287 unsigned int max_y;···461460{462461 u8 appmode = data->client->addr;463462 u8 bootloader;463463+ u8 family_id = data->info ? data->info->family_id : 0;464464465465 switch (appmode) {466466 case 0x4a:467467 case 0x4b:468468 /* Chips after 1664S use different scheme */469469- if (retry || data->info.family_id >= 0xa2) {469469+ if (retry || family_id >= 0xa2) {470470 bootloader = appmode - 0x24;471471 break;472472 }···694692 struct mxt_object *object;695693 int i;696694697697- for (i = 0; i < data->info.object_num; i++) {695695+ for (i = 0; i < data->info->object_num; i++) {698696 object = data->object_table + i;699697 if (object->type == type)700698 return object;···14641462 data_pos += offset;14651463 }1466146414671467- if (cfg_info.family_id != data->info.family_id) {14651465+ if (cfg_info.family_id != data->info->family_id) {14681466 dev_err(dev, "Family ID mismatch!\n");14691467 return -EINVAL;14701468 }1471146914721472- if (cfg_info.variant_id != data->info.variant_id) {14701470+ if (cfg_info.variant_id != data->info->variant_id) {14731471 dev_err(dev, "Variant ID mismatch!\n");14741472 return -EINVAL;14751473 }···1514151215151513 /* Malloc memory to store configuration */15161514 cfg_start_ofs = MXT_OBJECT_START +15171517- data->info.object_num * sizeof(struct mxt_object) +15151515+ data->info->object_num * sizeof(struct mxt_object) +15181516 MXT_INFO_CHECKSUM_SIZE;15191517 config_mem_size = data->mem_size - cfg_start_ofs;15201518 config_mem = kzalloc(config_mem_size, GFP_KERNEL);···15651563 return ret;15661564}1567156515681568-static int mxt_get_info(struct mxt_data *data)15691569-{15701570- struct i2c_client *client = data->client;15711571- struct mxt_info *info = &data->info;15721572- int error;15731573-15741574- /* Read 7-byte info block starting at address 0 */15751575- error = __mxt_read_reg(client, 0, sizeof(*info), info);15761576- if (error)15771577- return error;15781578-15791579- return 0;15801580-}15811581-15821566static void mxt_free_input_device(struct mxt_data *data)15831567{15841568 if (data->input_dev) {···15791591 video_unregister_device(&data->dbg.vdev);15801592 v4l2_device_unregister(&data->dbg.v4l2);15811593#endif15821582-15831583- kfree(data->object_table);15841594 data->object_table = NULL;15951595+ data->info = NULL;15961596+ kfree(data->raw_info_block);15971597+ data->raw_info_block = NULL;15851598 kfree(data->msg_buf);15861599 data->msg_buf = NULL;15871600 data->T5_address = 0;···15981609 data->max_reportid = 0;15991610}1600161116011601-static int mxt_get_object_table(struct mxt_data *data)16121612+static int mxt_parse_object_table(struct mxt_data *data,16131613+ struct mxt_object *object_table)16021614{16031615 struct i2c_client *client = data->client;16041604- size_t table_size;16051605- struct mxt_object *object_table;16061606- int error;16071616 int i;16081617 u8 reportid;16091618 u16 end_address;1610161916111611- table_size = data->info.object_num * sizeof(struct mxt_object);16121612- object_table = kzalloc(table_size, GFP_KERNEL);16131613- if (!object_table) {16141614- dev_err(&data->client->dev, "Failed to allocate memory\n");16151615- return -ENOMEM;16161616- }16171617-16181618- error = __mxt_read_reg(client, MXT_OBJECT_START, table_size,16191619- object_table);16201620- if (error) {16211621- kfree(object_table);16221622- return error;16231623- }16241624-16251620 /* Valid Report IDs start counting from 1 */16261621 reportid = 1;16271622 data->mem_size = 0;16281628- for (i = 0; i < data->info.object_num; i++) {16231623+ for (i = 0; i < data->info->object_num; i++) {16291624 struct mxt_object *object = object_table + i;16301625 u8 min_id, max_id;16311626···1633166016341661 switch (object->type) {16351662 case MXT_GEN_MESSAGE_T5:16361636- if (data->info.family_id == 0x80 &&16371637- data->info.version < 0x20) {16631663+ if (data->info->family_id == 0x80 &&16641664+ data->info->version < 0x20) {16381665 /*16391666 * On mXT224 firmware versions prior to V2.016401667 * read and discard unused CRC byte otherwise···16891716 /* If T44 exists, T5 position has to be directly after */16901717 if (data->T44_address && (data->T5_address != data->T44_address + 1)) {16911718 dev_err(&client->dev, "Invalid T44 position\n");16921692- error = -EINVAL;16931693- goto free_object_table;17191719+ return -EINVAL;16941720 }1695172116961722 data->msg_buf = kcalloc(data->max_reportid,16971723 data->T5_msg_size, GFP_KERNEL);16981698- if (!data->msg_buf) {16991699- dev_err(&client->dev, "Failed to allocate message buffer\n");17241724+ if (!data->msg_buf)17251725+ return -ENOMEM;17261726+17271727+ return 0;17281728+}17291729+17301730+static int mxt_read_info_block(struct mxt_data *data)17311731+{17321732+ struct i2c_client *client = data->client;17331733+ int error;17341734+ size_t size;17351735+ void *id_buf, *buf;17361736+ uint8_t num_objects;17371737+ u32 calculated_crc;17381738+ u8 *crc_ptr;17391739+17401740+ /* If info block already allocated, free it */17411741+ if (data->raw_info_block)17421742+ mxt_free_object_table(data);17431743+17441744+ /* Read 7-byte ID information block starting at address 0 */17451745+ size = sizeof(struct mxt_info);17461746+ id_buf = kzalloc(size, GFP_KERNEL);17471747+ if (!id_buf)17481748+ return -ENOMEM;17491749+17501750+ error = __mxt_read_reg(client, 0, size, id_buf);17511751+ if (error)17521752+ goto err_free_mem;17531753+17541754+ /* Resize buffer to give space for rest of info block */17551755+ num_objects = ((struct mxt_info *)id_buf)->object_num;17561756+ size += (num_objects * sizeof(struct mxt_object))17571757+ + MXT_INFO_CHECKSUM_SIZE;17581758+17591759+ buf = krealloc(id_buf, size, GFP_KERNEL);17601760+ if (!buf) {17001761 error = -ENOMEM;17011701- goto free_object_table;17621762+ goto err_free_mem;17631763+ }17641764+ id_buf = buf;17651765+17661766+ /* Read rest of info block */17671767+ error = __mxt_read_reg(client, MXT_OBJECT_START,17681768+ size - MXT_OBJECT_START,17691769+ id_buf + MXT_OBJECT_START);17701770+ if (error)17711771+ goto err_free_mem;17721772+17731773+ /* Extract & calculate checksum */17741774+ crc_ptr = id_buf + size - MXT_INFO_CHECKSUM_SIZE;17751775+ data->info_crc = crc_ptr[0] | (crc_ptr[1] << 8) | (crc_ptr[2] << 16);17761776+17771777+ calculated_crc = mxt_calculate_crc(id_buf, 0,17781778+ size - MXT_INFO_CHECKSUM_SIZE);17791779+17801780+ /*17811781+ * CRC mismatch can be caused by data corruption due to I2C comms17821782+ * issue or else device is not using Object Based Protocol (eg i2c-hid)17831783+ */17841784+ if ((data->info_crc == 0) || (data->info_crc != calculated_crc)) {17851785+ dev_err(&client->dev,17861786+ "Info Block CRC error calculated=0x%06X read=0x%06X\n",17871787+ calculated_crc, data->info_crc);17881788+ error = -EIO;17891789+ goto err_free_mem;17021790 }1703179117041704- data->object_table = object_table;17921792+ data->raw_info_block = id_buf;17931793+ data->info = (struct mxt_info *)id_buf;17941794+17951795+ dev_info(&client->dev,17961796+ "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",17971797+ data->info->family_id, data->info->variant_id,17981798+ data->info->version >> 4, data->info->version & 0xf,17991799+ data->info->build, data->info->object_num);18001800+18011801+ /* Parse object table information */18021802+ error = mxt_parse_object_table(data, id_buf + MXT_OBJECT_START);18031803+ if (error) {18041804+ dev_err(&client->dev, "Error %d parsing object table\n", error);18051805+ mxt_free_object_table(data);18061806+ goto err_free_mem;18071807+ }18081808+18091809+ data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);1705181017061811 return 0;1707181217081708-free_object_table:17091709- mxt_free_object_table(data);18131813+err_free_mem:18141814+ kfree(id_buf);17101815 return error;17111816}17121817···20972046 int error;2098204720992048 while (1) {21002100- error = mxt_get_info(data);20492049+ error = mxt_read_info_block(data);21012050 if (!error)21022051 break;21032052···21282077 msleep(MXT_FW_RESET_TIME);21292078 }2130207921312131- /* Get object table information */21322132- error = mxt_get_object_table(data);21332133- if (error) {21342134- dev_err(&client->dev, "Error %d reading object table\n", error);21352135- return error;21362136- }21372137-21382080 error = mxt_acquire_irq(data);21392081 if (error)21402140- goto err_free_object_table;20822082+ return error;2141208321422084 error = request_firmware_nowait(THIS_MODULE, true, MXT_CFG_NAME,21432085 &client->dev, GFP_KERNEL, data,···21382094 if (error) {21392095 dev_err(&client->dev, "Failed to invoke firmware loader: %d\n",21402096 error);21412141- goto err_free_object_table;20972097+ return error;21422098 }2143209921442100 return 0;21452145-21462146-err_free_object_table:21472147- mxt_free_object_table(data);21482148- return error;21492101}2150210221512103static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep)···22022162static u16 mxt_get_debug_value(struct mxt_data *data, unsigned int x,22032163 unsigned int y)22042164{22052205- struct mxt_info *info = &data->info;21652165+ struct mxt_info *info = data->info;22062166 struct mxt_dbg *dbg = &data->dbg;22072167 unsigned int ofs, page;22082168 unsigned int col = 0;···2530249025312491static void mxt_debug_init(struct mxt_data *data)25322492{25332533- struct mxt_info *info = &data->info;24932493+ struct mxt_info *info = data->info;25342494 struct mxt_dbg *dbg = &data->dbg;25352495 struct mxt_object *object;25362496 int error;···26162576 const struct firmware *cfg)26172577{26182578 struct device *dev = &data->client->dev;26192619- struct mxt_info *info = &data->info;26202579 int error;2621258026222581 error = mxt_init_t7_power_cfg(data);···2640260126412602 mxt_debug_init(data);2642260326432643- dev_info(dev,26442644- "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n",26452645- info->family_id, info->variant_id, info->version >> 4,26462646- info->version & 0xf, info->build, info->object_num);26472647-26482604 return 0;26492605}26502606···26482614 struct device_attribute *attr, char *buf)26492615{26502616 struct mxt_data *data = dev_get_drvdata(dev);26512651- struct mxt_info *info = &data->info;26172617+ struct mxt_info *info = data->info;26522618 return scnprintf(buf, PAGE_SIZE, "%u.%u.%02X\n",26532619 info->version >> 4, info->version & 0xf, info->build);26542620}···26582624 struct device_attribute *attr, char *buf)26592625{26602626 struct mxt_data *data = dev_get_drvdata(dev);26612661- struct mxt_info *info = &data->info;26272627+ struct mxt_info *info = data->info;26622628 return scnprintf(buf, PAGE_SIZE, "%u.%u\n",26632629 info->family_id, info->variant_id);26642630}···26972663 return -ENOMEM;2698266426992665 error = 0;27002700- for (i = 0; i < data->info.object_num; i++) {26662666+ for (i = 0; i < data->info->object_num; i++) {27012667 object = data->object_table + i;2702266827032669 if (!mxt_object_readable(object->type))···30693035 .driver_data = samus_platform_data,30703036 },30713037 {30383038+ /* Samsung Chromebook Pro */30393039+ .ident = "Samsung Chromebook Pro",30403040+ .matches = {30413041+ DMI_MATCH(DMI_SYS_VENDOR, "Google"),30423042+ DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"),30433043+ },30443044+ .driver_data = samus_platform_data,30453045+ },30463046+ {30723047 /* Other Google Chromebooks */30733048 .ident = "Chromebook",30743049 .matches = {···3297325432983255static const struct of_device_id mxt_of_match[] = {32993256 { .compatible = "atmel,maxtouch", },32573257+ /* Compatibles listed below are deprecated */32583258+ { .compatible = "atmel,qt602240_ts", },32593259+ { .compatible = "atmel,atmel_mxt_ts", },32603260+ { .compatible = "atmel,atmel_mxt_tp", },32613261+ { .compatible = "atmel,mXT224", },33003262 {},33013263};33023264MODULE_DEVICE_TABLE(of, mxt_of_match);
+1-1
drivers/iommu/amd_iommu.c
···83838484static DEFINE_SPINLOCK(amd_iommu_devtable_lock);8585static DEFINE_SPINLOCK(pd_bitmap_lock);8686-static DEFINE_SPINLOCK(iommu_table_lock);87868887/* List of all available dev_data structures */8988static LLIST_HEAD(dev_data_list);···35613562 *****************************************************************************/3562356335633564static struct irq_chip amd_ir_chip;35653565+static DEFINE_SPINLOCK(iommu_table_lock);3564356635653567static void set_dte_irq_entry(u16 devid, struct irq_remap_table *table)35663568{
+25-29
drivers/iommu/dma-iommu.c
···167167 * @list: Reserved region list from iommu_get_resv_regions()168168 *169169 * IOMMU drivers can use this to implement their .get_resv_regions callback170170- * for general non-IOMMU-specific reservations. Currently, this covers host171171- * bridge windows for PCI devices and GICv3 ITS region reservation on ACPI172172- * based ARM platforms that may require HW MSI reservation.170170+ * for general non-IOMMU-specific reservations. Currently, this covers GICv3171171+ * ITS region reservation on ACPI based ARM platforms that may require HW MSI172172+ * reservation.173173 */174174void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)175175{176176- struct pci_host_bridge *bridge;177177- struct resource_entry *window;178176179179- if (!is_of_node(dev->iommu_fwspec->iommu_fwnode) &&180180- iort_iommu_msi_get_resv_regions(dev, list) < 0)181181- return;177177+ if (!is_of_node(dev->iommu_fwspec->iommu_fwnode))178178+ iort_iommu_msi_get_resv_regions(dev, list);182179183183- if (!dev_is_pci(dev))184184- return;185185-186186- bridge = pci_find_host_bridge(to_pci_dev(dev)->bus);187187- resource_list_for_each_entry(window, &bridge->windows) {188188- struct iommu_resv_region *region;189189- phys_addr_t start;190190- size_t length;191191-192192- if (resource_type(window->res) != IORESOURCE_MEM)193193- continue;194194-195195- start = window->res->start - window->offset;196196- length = window->res->end - window->res->start + 1;197197- region = iommu_alloc_resv_region(start, length, 0,198198- IOMMU_RESV_RESERVED);199199- if (!region)200200- return;201201-202202- list_add_tail(®ion->list, list);203203- }204180}205181EXPORT_SYMBOL(iommu_dma_get_resv_regions);206182···205229 return 0;206230}207231232232+static void iova_reserve_pci_windows(struct pci_dev *dev,233233+ struct iova_domain *iovad)234234+{235235+ struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);236236+ struct resource_entry *window;237237+ unsigned long lo, hi;238238+239239+ 
resource_list_for_each_entry(window, &bridge->windows) {240240+ if (resource_type(window->res) != IORESOURCE_MEM)241241+ continue;242242+243243+ lo = iova_pfn(iovad, window->res->start - window->offset);244244+ hi = iova_pfn(iovad, window->res->end - window->offset);245245+ reserve_iova(iovad, lo, hi);246246+ }247247+}248248+208249static int iova_reserve_iommu_regions(struct device *dev,209250 struct iommu_domain *domain)210251{···230237 struct iommu_resv_region *region;231238 LIST_HEAD(resv_regions);232239 int ret = 0;240240+241241+ if (dev_is_pci(dev))242242+ iova_reserve_pci_windows(to_pci_dev(dev), iovad);233243234244 iommu_get_resv_regions(dev, &resv_regions);235245 list_for_each_entry(region, &resv_regions, list) {
···11361136 irte->dest_id = IRTE_DEST(cfg->dest_apicid);1137113711381138 /* Update the hardware only if the interrupt is in remapped mode. */11391139- if (!force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)11391139+ if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)11401140 modify_irte(&ir_data->irq_2_iommu, irte);11411141}11421142
+9-2
drivers/iommu/rockchip-iommu.c
···10981098 data->iommu = platform_get_drvdata(iommu_dev);10991099 dev->archdata.iommu = data;1100110011011101- of_dev_put(iommu_dev);11011101+ platform_device_put(iommu_dev);1102110211031103 return 0;11041104}···11751175 for (i = 0; i < iommu->num_clocks; ++i)11761176 iommu->clocks[i].id = rk_iommu_clocks[i];1177117711781178+ /*11791179+ * iommu clocks should be present for all new devices and devicetrees11801180+ * but there are older devicetrees without clocks out in the wild.11811181+ * So treat clocks as optional for the time being.11821182+ */11781183 err = devm_clk_bulk_get(iommu->dev, iommu->num_clocks, iommu->clocks);11791179- if (err)11841184+ if (err == -ENOENT)11851185+ iommu->num_clocks = 0;11861186+ else if (err)11801187 return err;1181118811821189 err = clk_bulk_prepare(iommu->num_clocks, iommu->clocks);
+2-2
drivers/irqchip/qcom-irq-combiner.c
···11-/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.11+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.22 *33 * This program is free software; you can redistribute it and/or modify44 * it under the terms of the GNU General Public License version 2 and···68686969 bit = readl_relaxed(combiner->regs[reg].addr);7070 status = bit & combiner->regs[reg].enabled;7171- if (!status)7171+ if (bit && !status)7272 pr_warn_ratelimited("Unexpected IRQ on CPU%d: (%08x %08lx %p)\n",7373 smp_processor_id(), bit,7474 combiner->regs[reg].enabled,
···106106107107void bch_data_verify(struct cached_dev *dc, struct bio *bio)108108{109109- char name[BDEVNAME_SIZE];110109 struct bio *check;111110 struct bio_vec bv, cbv;112111 struct bvec_iter iter, citer = { 0 };···133134 bv.bv_len),134135 dc->disk.c,135136 "verify failed at dev %s sector %llu",136136- bdevname(dc->bdev, name),137137+ dc->backing_dev_name,137138 (uint64_t) bio->bi_iter.bi_sector);138139139140 kunmap_atomic(p1);
+3-5
drivers/md/bcache/io.c
···5252/* IO errors */5353void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)5454{5555- char buf[BDEVNAME_SIZE];5655 unsigned errors;57565857 WARN_ONCE(!dc, "NULL pointer of struct cached_dev");···5960 errors = atomic_add_return(1, &dc->io_errors);6061 if (errors < dc->error_limit)6162 pr_err("%s: IO error on backing device, unrecoverable",6262- bio_devname(bio, buf));6363+ dc->backing_dev_name);6364 else6465 bch_cached_dev_error(dc);6566}···104105 }105106106107 if (error) {107107- char buf[BDEVNAME_SIZE];108108 unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT,109109 &ca->io_errors);110110 errors >>= IO_ERROR_SHIFT;111111112112 if (errors < ca->set->error_limit)113113 pr_err("%s: IO error on %s%s",114114- bdevname(ca->bdev, buf), m,114114+ ca->cache_dev_name, m,115115 is_read ? ", recovering." : ".");116116 else117117 bch_cache_set_error(ca->set,118118 "%s: too many IO errors %s",119119- bdevname(ca->bdev, buf), m);119119+ ca->cache_dev_name, m);120120 }121121}122122
+1-4
drivers/md/bcache/request.c
···649649 */650650 if (unlikely(s->iop.writeback &&651651 bio->bi_opf & REQ_PREFLUSH)) {652652- char buf[BDEVNAME_SIZE];653653-654654- bio_devname(bio, buf);655652 pr_err("Can't flush %s: returned bi_status %i",656656- buf, bio->bi_status);653653+ dc->backing_dev_name, bio->bi_status);657654 } else {658655 /* set to orig_bio->bi_status in bio_complete() */659656 s->iop.status = bio->bi_status;
+52-23
drivers/md/bcache/super.c
···936936static void cached_dev_detach_finish(struct work_struct *w)937937{938938 struct cached_dev *dc = container_of(w, struct cached_dev, detach);939939- char buf[BDEVNAME_SIZE];940939 struct closure cl;941940 closure_init_stack(&cl);942941···966967967968 mutex_unlock(&bch_register_lock);968969969969- pr_info("Caching disabled for %s", bdevname(dc->bdev, buf));970970+ pr_info("Caching disabled for %s", dc->backing_dev_name);970971971972 /* Drop ref we took in cached_dev_detach() */972973 closure_put(&dc->disk.cl);···998999{9991000 uint32_t rtime = cpu_to_le32(get_seconds());10001001 struct uuid_entry *u;10011001- char buf[BDEVNAME_SIZE];10021002 struct cached_dev *exist_dc, *t;10031003-10041004- bdevname(dc->bdev, buf);1005100310061004 if ((set_uuid && memcmp(set_uuid, c->sb.set_uuid, 16)) ||10071005 (!set_uuid && memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16)))10081006 return -ENOENT;1009100710101008 if (dc->disk.c) {10111011- pr_err("Can't attach %s: already attached", buf);10091009+ pr_err("Can't attach %s: already attached",10101010+ dc->backing_dev_name);10121011 return -EINVAL;10131012 }1014101310151014 if (test_bit(CACHE_SET_STOPPING, &c->flags)) {10161016- pr_err("Can't attach %s: shutting down", buf);10151015+ pr_err("Can't attach %s: shutting down",10161016+ dc->backing_dev_name);10171017 return -EINVAL;10181018 }1019101910201020 if (dc->sb.block_size < c->sb.block_size) {10211021 /* Will die */10221022 pr_err("Couldn't attach %s: block size less than set's block size",10231023- buf);10231023+ dc->backing_dev_name);10241024 return -EINVAL;10251025 }10261026···10271029 list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) {10281030 if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) {10291031 pr_err("Tried to attach %s but duplicate UUID already attached",10301030- buf);10321032+ dc->backing_dev_name);1031103310321034 return -EINVAL;10331035 }···1045104710461048 if (!u) {10471049 if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {10481048- pr_err("Couldn't 
find uuid for %s in set", buf);10501050+ pr_err("Couldn't find uuid for %s in set",10511051+ dc->backing_dev_name);10491052 return -ENOENT;10501053 }1051105410521055 u = uuid_find_empty(c);10531056 if (!u) {10541054- pr_err("Not caching %s, no room for UUID", buf);10571057+ pr_err("Not caching %s, no room for UUID",10581058+ dc->backing_dev_name);10551059 return -EINVAL;10561060 }10571061 }···11121112 up_write(&dc->writeback_lock);1113111311141114 pr_info("Caching %s as %s on set %pU",11151115- bdevname(dc->bdev, buf), dc->disk.disk->disk_name,11151115+ dc->backing_dev_name,11161116+ dc->disk.disk->disk_name,11161117 dc->disk.c->sb.set_uuid);11171118 return 0;11181119}···12261225 struct block_device *bdev,12271226 struct cached_dev *dc)12281227{12291229- char name[BDEVNAME_SIZE];12301228 const char *err = "cannot allocate memory";12311229 struct cache_set *c;1232123012311231+ bdevname(bdev, dc->backing_dev_name);12331232 memcpy(&dc->sb, sb, sizeof(struct cache_sb));12341233 dc->bdev = bdev;12351234 dc->bdev->bd_holder = dc;···12371236 bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1);12381237 bio_first_bvec_all(&dc->sb_bio)->bv_page = sb_page;12391238 get_page(sb_page);12391239+1240124012411241 if (cached_dev_init(dc, sb->block_size << 9))12421242 goto err;···12491247 if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj))12501248 goto err;1251124912521252- pr_info("registered backing device %s", bdevname(bdev, name));12501250+ pr_info("registered backing device %s", dc->backing_dev_name);1253125112541252 list_add(&dc->list, &uncached_devices);12551253 list_for_each_entry(c, &bch_cache_sets, list)···1261125912621260 return;12631261err:12641264- pr_notice("error %s: %s", bdevname(bdev, name), err);12621262+ pr_notice("error %s: %s", dc->backing_dev_name, err);12651263 bcache_device_stop(&dc->disk);12661264}12671265···1369136713701368bool bch_cached_dev_error(struct cached_dev *dc)13711369{13721372- char name[BDEVNAME_SIZE];13701370+ struct cache_set 
*c;1373137113741372 if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags))13751373 return false;···13791377 smp_mb();1380137813811379 pr_err("stop %s: too many IO errors on backing device %s\n",13821382- dc->disk.disk->disk_name, bdevname(dc->bdev, name));13801380+ dc->disk.disk->disk_name, dc->backing_dev_name);13811381+13821382+ /*13831383+ * If the cached device is still attached to a cache set,13841384+ * even if dc->io_disable is true and no more I/O requests13851385+ * are accepted, cache device internal I/O (writeback scan or13861386+ * garbage collection) may still prevent the bcache device from13871387+ * being stopped. So here CACHE_SET_IO_DISABLE should be13881388+ * set in c->flags too, so that internal I/O to the cache13891389+ * device is rejected and stopped immediately.13901390+ * If c is NULL, the bcache device is not attached to any13911391+ * cache set, so there is no CACHE_SET_IO_DISABLE bit to set.13921392+ */13931393+ c = dc->disk.c;13941394+ if (c && test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags))13951395+ pr_info("CACHE_SET_IO_DISABLE already set");1383139613841397 bcache_device_stop(&dc->disk);13851398 return true;···14121395 return false;1413139614141397 if (test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags))14151415- pr_warn("CACHE_SET_IO_DISABLE already set");13981398+ pr_info("CACHE_SET_IO_DISABLE already set");1416139914171400 /* XXX: we can be called from atomic context14181401 acquire_console_sem();···15561539 */15571540 pr_warn("stop_when_cache_set_failed of %s is \"auto\" and cache is dirty, stop it to avoid potential data corruption.",15581541 d->disk->disk_name);15421542+ /*15431543+ * There might be a small time gap in which the cache set is15441544+ * released but the bcache device is not. Inside this time15451545+ * gap, regular I/O requests will go directly into the15461546+ * backing device, as no cache set is attached. This
This15471547+ * behavior may also introduce potential inconsistence15481548+ * data in writeback mode while cache is dirty.15491549+ * Therefore before calling bcache_device_stop() due15501550+ * to a broken cache device, dc->io_disable should be15511551+ * explicitly set to true.15521552+ */15531553+ dc->io_disable = true;15541554+ /* make others know io_disable is true earlier */15551555+ smp_mb();15591556 bcache_device_stop(d);15601557 } else {15611558 /*···20342003static int register_cache(struct cache_sb *sb, struct page *sb_page,20352004 struct block_device *bdev, struct cache *ca)20362005{20372037- char name[BDEVNAME_SIZE];20382006 const char *err = NULL; /* must be set for any error case */20392007 int ret = 0;2040200820412041- bdevname(bdev, name);20422042-20092009+ bdevname(bdev, ca->cache_dev_name);20432010 memcpy(&ca->sb, sb, sizeof(struct cache_sb));20442011 ca->bdev = bdev;20452012 ca->bdev->bd_holder = ca;···20742045 goto out;20752046 }2076204720772077- pr_info("registered cache device %s", name);20482048+ pr_info("registered cache device %s", ca->cache_dev_name);2078204920792050out:20802051 kobject_put(&ca->kobj);2081205220822053err:20832054 if (err)20842084- pr_notice("error %s: %s", name, err);20552055+ pr_notice("error %s: %s", ca->cache_dev_name, err);2085205620862057 return ret;20872058}
···88 * Muting and tone control by Jonathan Isom <jisom@ematic.com>99 *1010 * Copyright (c) 2000 Eric Sandeen <eric_sandeen@bigfoot.com>1111- * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org>1111+ * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org>1212 * This code is placed under the terms of the GNU General Public License1313 * Based on tda9855.c by Steve VanDeBogart (vandebo@uclink.berkeley.edu)1414 * Which was based on tda8425.c by Greg Alexander (c) 1998
···55 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com>66 *77 * This code is partially based upon the TVP5150 driver88- * written by Mauro Carvalho Chehab (mchehab@infradead.org),88+ * written by Mauro Carvalho Chehab <mchehab@kernel.org>,99 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com>1010 * and the TVP7002 driver in the TI LSP 2.10.00.14. Revisions by1111 * Muralidharan Karicheri and Snehaprabha Narnakaje (TI).
+1-1
drivers/media/i2c/tvp7002_reg.h
···55 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com>66 *77 * This code is partially based upon the TVP5150 driver88- * written by Mauro Carvalho Chehab (mchehab@infradead.org),88+ * written by Mauro Carvalho Chehab <mchehab@kernel.org>,99 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com>1010 * and the TVP7002 driver in the TI LSP 2.10.00.141111 *
···11/*22 * Handlers for board audio hooks, splitted from bttv-cards33 *44- * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org)44+ * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org>55 * This code is placed under the terms of the GNU General Public License66 */77
+1-1
drivers/media/pci/bt8xx/bttv-audio-hook.h
···11/*22 * Handlers for board audio hooks, splitted from bttv-cards33 *44- * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org)44+ * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org>55 * This code is placed under the terms of the GNU General Public License66 */77
···1313 (c) 2005-2006 Nickolay V. Shmyrev <nshmyrev@yandex.ru>14141515 Fixes to be fully V4L2 compliant by1616- (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org>1616+ (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org>17171818 Cropping and overscan support1919 Copyright (C) 2005, 2006 Michael H. Schimek <mschimek@gmx.at>
+1-1
drivers/media/pci/bt8xx/bttv-i2c.c
···88 & Marcus Metzler (mocm@thp.uni-koeln.de)99 (c) 1999-2003 Gerd Knorr <kraxel@bytesex.org>10101111- (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org>1111+ (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org>1212 - Multituner support and i2c address binding13131414 This program is free software; you can redistribute it and/or modify
+1-1
drivers/media/pci/cx23885/cx23885-input.c
···1313 * Copyright (C) 2008 <srinivasa.deevi at conexant dot com>1414 * Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it>1515 * Markus Rechberger <mrechberger@gmail.com>1616- * Mauro Carvalho Chehab <mchehab@infradead.org>1616+ * Mauro Carvalho Chehab <mchehab@kernel.org>1717 * Sascha Sommer <saschasommer@freenet.de>1818 * Copyright (C) 2004, 2005 Chris Pascoe1919 * Copyright (C) 2003, 2004 Gerd Knorr
+2-2
drivers/media/pci/cx88/cx88-alsa.c
···44 *55 * (c) 2007 Trent Piepho <xyzzy@speakeasy.org>66 * (c) 2005,2006 Ricardo Cerqueira <v4l@cerqueira.org>77- * (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org>77+ * (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org>88 * Based on a dummy cx88 module by Gerd Knorr <kraxel@bytesex.org>99 * Based on dummy.c by Jaroslav Kysela <perex@perex.cz>1010 *···103103104104MODULE_DESCRIPTION("ALSA driver module for cx2388x based TV cards");105105MODULE_AUTHOR("Ricardo Cerqueira");106106-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");106106+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");107107MODULE_LICENSE("GPL");108108MODULE_VERSION(CX88_VERSION);109109
···44 * Copyright 1997 M. Kirkwood55 *66 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com>77- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>77+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>88 * Converted to new API by Alan Cox <alan@lxorguk.ukuu.org.uk>99 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org>1010 *
+1-1
drivers/media/radio/radio-aztech.c
···22 * radio-aztech.c - Aztech radio card driver33 *44 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@xs4all.nl>55- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>55+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>66 * Adapted to support the Video for Linux API by77 * Russell Kroll <rkroll@exploits.org>. Based on original tuner code by:88 *
+1-1
drivers/media/radio/radio-gemtek.c
···1515 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org>1616 *1717 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com>1818- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>1818+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>1919 *2020 * Note: this card seems to swap the left and right audio channels!2121 *
+1-1
drivers/media/radio/radio-maxiradio.c
···2727 * BUGS:2828 * - card unmutes if you change frequency2929 *3030- * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@infradead.org>:3030+ * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@kernel.org>:3131 * - Conversion to V4L2 API3232 * - Uses video_ioctl2 for parsing and to add debug support3333 */
+1-1
drivers/media/radio/radio-rtrack2.c
···77 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org>88 *99 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com>1010- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>1010+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>1111 *1212 * Fully tested with actual hardware and the v4l2-compliance tool.1313 */
+1-1
drivers/media/radio/radio-sf16fmi.c
···1313 * No volume control - only mute/unmute - you have to use line volume1414 * control on SB-part of SF16-FMI/SF16-FMP/SF16-FMD1515 *1616- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>1616+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>1717 */18181919#include <linux/kernel.h> /* __setup */
+1-1
drivers/media/radio/radio-terratec.c
···1717 * Volume Control is done digitally1818 *1919 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com>2020- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>2020+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>2121 */22222323#include <linux/module.h> /* Modules */
+1-1
drivers/media/radio/radio-trust.c
···1212 * Scott McGrath (smcgrath@twilight.vtc.vsc.edu)1313 * William McGrath (wmcgrath@twilight.vtc.vsc.edu)1414 *1515- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>1515+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>1616 */17171818#include <stdarg.h>
+1-1
drivers/media/radio/radio-typhoon.c
···2525 * The frequency change is necessary since the card never seems to be2626 * completely silent.2727 *2828- * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org>2828+ * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org>2929 */30303131#include <linux/module.h> /* Modules */
+1-1
drivers/media/radio/radio-zoltrix.c
···2727 * 2002-07-15 - Fix Stereo typo2828 *2929 * 2006-07-24 - Converted to V4L2 API3030- * by Mauro Carvalho Chehab <mchehab@infradead.org>3030+ * by Mauro Carvalho Chehab <mchehab@kernel.org>3131 *3232 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com>3333 *
+1-1
drivers/media/rc/keymaps/rc-avermedia-m135a.c
···1212 *1313 * On Avermedia M135A with IR model RM-JX, the same codes exist on both1414 * Positivo (BR) and original IR, initial version and remote control codes1515- * added by Mauro Carvalho Chehab <mchehab@infradead.org>1515+ * added by Mauro Carvalho Chehab <mchehab@kernel.org>1616 *1717 * Positivo also ships Avermedia M135A with model RM-K6, extra control1818 * codes added by Herton Ronaldo Krzesinski <herton@mandriva.com.br>
···22// For Philips TEA5761 FM Chip33// I2C address is always 0x20 (0x10 at 7-bit mode).44//55-// Copyright (c) 2005-2007 Mauro Carvalho Chehab (mchehab@infradead.org)55+// Copyright (c) 2005-2007 Mauro Carvalho Chehab <mchehab@kernel.org>6677#include <linux/i2c.h>88#include <linux/slab.h>···337337EXPORT_SYMBOL_GPL(tea5761_autodetection);338338339339MODULE_DESCRIPTION("Philips TEA5761 FM tuner driver");340340-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");340340+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");341341MODULE_LICENSE("GPL v2");
+2-2
drivers/media/tuners/tea5767.c
···22// For Philips TEA5767 FM Chip used on some TV Cards like Prolink Pixelview33// I2C address is always 0xC0.44//55-// Copyright (c) 2005 Mauro Carvalho Chehab (mchehab@infradead.org)55+// Copyright (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org>66//77// tea5767 autodetection thanks to Torsten Seeboth and Atsushi Nakagawa88// from their contributions on DScaler.···469469EXPORT_SYMBOL_GPL(tea5767_autodetection);470470471471MODULE_DESCRIPTION("Philips TEA5767 FM tuner driver");472472-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");472472+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");473473MODULE_LICENSE("GPL v2");
+1-1
drivers/media/tuners/tuner-xc2028-types.h
···55 * This file includes internal tipes to be used inside tuner-xc2028.66 * Shouldn't be included outside tuner-xc202877 *88- * Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org)88+ * Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org>99 */10101111/* xc3028 firmware types */
···22//33// em28xx-camera.c - driver for Empia EM25xx/27xx/28xx USB video capture devices44//55-// Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org>55+// Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org>66// Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com>77//88// This program is free software; you can redistribute it and/or modify
+1-1
drivers/media/usb/em28xx/em28xx-cards.c
···55//66// Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it>77// Markus Rechberger <mrechberger@gmail.com>88-// Mauro Carvalho Chehab <mchehab@infradead.org>88+// Mauro Carvalho Chehab <mchehab@kernel.org>99// Sascha Sommer <saschasommer@freenet.de>1010// Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com>1111//
···22//33// DVB device driver for em28xx44//55-// (c) 2008-2011 Mauro Carvalho Chehab <mchehab@infradead.org>55+// (c) 2008-2011 Mauro Carvalho Chehab <mchehab@kernel.org>66//77// (c) 2008 Devin Heitmueller <devin.heitmueller@gmail.com>88// - Fixes for the driver to properly work with HVR-950···6363#include "tc90522.h"6464#include "qm1d1c0042.h"65656666-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");6666+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");6767MODULE_LICENSE("GPL v2");6868MODULE_DESCRIPTION(DRIVER_DESC " - digital TV interface");6969MODULE_VERSION(EM28XX_VERSION);
+1-1
drivers/media/usb/em28xx/em28xx-i2c.c
···44//55// Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it>66// Markus Rechberger <mrechberger@gmail.com>77-// Mauro Carvalho Chehab <mchehab@infradead.org>77+// Mauro Carvalho Chehab <mchehab@kernel.org>88// Sascha Sommer <saschasommer@freenet.de>99// Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com>1010//
+1-1
drivers/media/usb/em28xx/em28xx-input.c
···44//55// Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it>66// Markus Rechberger <mrechberger@gmail.com>77-// Mauro Carvalho Chehab <mchehab@infradead.org>77+// Mauro Carvalho Chehab <mchehab@kernel.org>88// Sascha Sommer <saschasommer@freenet.de>99//1010// This program is free software; you can redistribute it and/or modify
+2-2
drivers/media/usb/em28xx/em28xx-video.c
···55//66// Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it>77// Markus Rechberger <mrechberger@gmail.com>88-// Mauro Carvalho Chehab <mchehab@infradead.org>88+// Mauro Carvalho Chehab <mchehab@kernel.org>99// Sascha Sommer <saschasommer@freenet.de>1010// Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com>1111//···44444545#define DRIVER_AUTHOR "Ludovico Cavedon <cavedon@sssup.it>, " \4646 "Markus Rechberger <mrechberger@gmail.com>, " \4747- "Mauro Carvalho Chehab <mchehab@infradead.org>, " \4747+ "Mauro Carvalho Chehab <mchehab@kernel.org>, " \4848 "Sascha Sommer <saschasommer@freenet.de>"49495050static unsigned int isoc_debug;
+1-1
drivers/media/usb/em28xx/em28xx.h
···44 *55 * Copyright (C) 2005 Markus Rechberger <mrechberger@gmail.com>66 * Ludovico Cavedon <cavedon@sssup.it>77- * Mauro Carvalho Chehab <mchehab@infradead.org>77+ * Mauro Carvalho Chehab <mchehab@kernel.org>88 * Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com>99 *1010 * Based on the em2800 driver from Sascha Sommer <saschasommer@freenet.de>
+1-1
drivers/media/usb/gspca/zc3xx-reg.h
···11/*22 * zc030x registers33 *44- * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@infradead.org>44+ * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@kernel.org>55 *66 * The register aliases used here came from this driver:77 * http://zc0302.sourceforge.net/zc0302.php
+1-1
drivers/media/usb/tm6000/tm6000-cards.c
···11// SPDX-License-Identifier: GPL-2.022// tm6000-cards.c - driver for TM5600/TM6000/TM6010 USB video capture devices33//44-// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org>44+// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org>5566#include <linux/init.h>77#include <linux/module.h>
+1-1
drivers/media/usb/tm6000/tm6000-core.c
···11// SPDX-License-Identifier: GPL-2.022// tm6000-core.c - driver for TM5600/TM6000/TM6010 USB video capture devices33//44-// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org>44+// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org>55//66// Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com>77// - DVB-T support
+1-1
drivers/media/usb/tm6000/tm6000-i2c.c
···11// SPDX-License-Identifier: GPL-2.022// tm6000-i2c.c - driver for TM5600/TM6000/TM6010 USB video capture devices33//44-// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org>44+// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org>55//66// Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com>77// - Fix SMBus Read Byte command
···11// SPDX-License-Identifier: GPL-2.022// tm6000-video.c - driver for TM5600/TM6000/TM6010 USB video capture devices33//44-// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org>44+// Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org>55//66// Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com>77// - Fixed module load/unload
+1-1
drivers/media/usb/tm6000/tm6000.h
···22 * SPDX-License-Identifier: GPL-2.033 * tm6000.h - driver for TM5600/TM6000/TM6010 USB video capture devices44 *55- * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org>55+ * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org>66 *77 * Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com>88 * - DVB-T support
+2-2
drivers/media/v4l2-core/v4l2-dev.c
···1010 * 2 of the License, or (at your option) any later version.1111 *1212 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1)1313- * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2)1313+ * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2)1414 *1515 * Fixes: 20000516 Claudio Matsuoka <claudio@conectiva.com>1616 * - Added procfs support···10721072subsys_initcall(videodev_init);10731073module_exit(videodev_exit)1074107410751075-MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@infradead.org>");10751075+MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@kernel.org>");10761076MODULE_DESCRIPTION("Device registrar for Video4Linux drivers v2");10771077MODULE_LICENSE("GPL");10781078MODULE_ALIAS_CHARDEV_MAJOR(VIDEO_MAJOR);
+1-1
drivers/media/v4l2-core/v4l2-ioctl.c
···99 * 2 of the License, or (at your option) any later version.1010 *1111 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1)1212- * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2)1212+ * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2)1313 */14141515#include <linux/mm.h>
+3-3
drivers/media/v4l2-core/videobuf-core.c
···11/*22 * generic helper functions for handling video4linux capture buffers33 *44- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>44+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>55 *66 * Highly based on video-buf written originally by:77 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org>88- * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org>88+ * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org>99 * (c) 2006 Ted Walther and John Sokol1010 *1111 * This program is free software; you can redistribute it and/or modify···3838module_param(debug, int, 0644);39394040MODULE_DESCRIPTION("helper module to manage video4linux buffers");4141-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");4141+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");4242MODULE_LICENSE("GPL");43434444#define dprintk(level, fmt, arg...) \
+1-1
drivers/media/v4l2-core/videobuf-dma-contig.c
···77 * Copyright (c) 2008 Magnus Damm88 *99 * Based on videobuf-vmalloc.c,1010- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>1010+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>1111 *1212 * This program is free software; you can redistribute it and/or modify1313 * it under the terms of the GNU General Public License as published by
+3-3
drivers/media/v4l2-core/videobuf-dma-sg.c
···66 * into PAGE_SIZE chunks). They also assume the driver does not need77 * to touch the video data.88 *99- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>99+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>1010 *1111 * Highly based on video-buf written originally by:1212 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org>1313- * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org>1313+ * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org>1414 * (c) 2006 Ted Walther and John Sokol1515 *1616 * This program is free software; you can redistribute it and/or modify···4848module_param(debug, int, 0644);49495050MODULE_DESCRIPTION("helper module to manage video4linux dma sg buffers");5151-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");5151+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");5252MODULE_LICENSE("GPL");53535454#define dprintk(level, fmt, arg...) \
+2-2
drivers/media/v4l2-core/videobuf-vmalloc.c
···66 * into PAGE_SIZE chunks). They also assume the driver does not need77 * to touch the video data.88 *99- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>99+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>1010 *1111 * This program is free software; you can redistribute it and/or modify1212 * it under the terms of the GNU General Public License as published by···4141module_param(debug, int, 0644);42424343MODULE_DESCRIPTION("helper module to manage video4linux vmalloc buffers");4444-MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");4444+MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>");4545MODULE_LICENSE("GPL");46464747#define dprintk(level, fmt, arg...) \
+38-67
drivers/mtd/nand/onenand/omap2.c
···375375{376376	struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);377377	struct onenand_chip *this = mtd->priv;378378-	dma_addr_t dma_src, dma_dst;379379-	int bram_offset;378378+	struct device *dev = &c->pdev->dev;380379	void *buf = (void *)buffer;380380+	dma_addr_t dma_src, dma_dst;381381+	int bram_offset, err;381382	size_t xtra;382382-	int ret;383383384384	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;385385-	if (bram_offset & 3 || (size_t)buf & 3 || count < 384)385385+	/*386386+	 * If the buffer address is not DMA-able, len is not long enough to make387387+	 * DMA transfers profitable, or panic_write() may be in an interrupt388388+	 * context, fall back to PIO mode.389389+	 */390390+	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||391391+	    count < 384 || in_interrupt() || oops_in_progress)386392		goto out_copy;387387-388388-	/* panic_write() may be in an interrupt context */389389-	if (in_interrupt() || oops_in_progress)390390-		goto out_copy;391391-392392-	if (buf >= high_memory) {393393-		struct page *p1;394394-395395-		if (((size_t)buf & PAGE_MASK) !=396396-		    ((size_t)(buf + count - 1) & PAGE_MASK))397397-			goto out_copy;398398-		p1 = vmalloc_to_page(buf);399399-		if (!p1)400400-			goto out_copy;401401-		buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);402402-	}403393404394	xtra = count & 3;405395	if (xtra) {···397407		memcpy(buf + count, this->base + bram_offset + count, xtra);398408	}399409410410+	dma_dst = dma_map_single(dev, buf, count, DMA_FROM_DEVICE);400411	dma_src = c->phys_base + bram_offset;401401-	dma_dst = dma_map_single(&c->pdev->dev, buf, count, DMA_FROM_DEVICE);402402-	if (dma_mapping_error(&c->pdev->dev, dma_dst)) {403403-		dev_err(&c->pdev->dev,404404-			"Couldn't DMA map a %d byte buffer\n",405405-			count);412412+413413+	if (dma_mapping_error(dev, dma_dst)) {414414+		dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count);406415		goto out_copy;407416	}408417409409-	ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);410410-	dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);418418+	err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);419419+	dma_unmap_single(dev, dma_dst, count, DMA_FROM_DEVICE);420420+	if (!err)421421+		return 0;411422412412-	if (ret) {413413-		dev_err(&c->pdev->dev, "timeout waiting for DMA\n");414414-		goto out_copy;415415-	}416416-417417-	return 0;423423+	dev_err(dev, "timeout waiting for DMA\n");418424419425out_copy:420426	memcpy(buf, this->base + bram_offset, count);···423437{424438	struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);425439	struct onenand_chip *this = mtd->priv;426426-	dma_addr_t dma_src, dma_dst;427427-	int bram_offset;440440+	struct device *dev = &c->pdev->dev;428441	void *buf = (void *)buffer;429429-	int ret;442442+	dma_addr_t dma_src, dma_dst;443443+	int bram_offset, err;430444431445	bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;432432-	if (bram_offset & 3 || (size_t)buf & 3 || count < 384)446446+	/*447447+	 * If the buffer address is not DMA-able, len is not long enough to make448448+	 * DMA transfers profitable, or panic_write() may be in an interrupt449449+	 * context, fall back to PIO mode.450450+	 */451451+	if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||452452+	    count < 384 || in_interrupt() || oops_in_progress)433453		goto out_copy;434454435435-	/* panic_write() may be in an interrupt context */436436-	if (in_interrupt() || oops_in_progress)437437-		goto out_copy;438438-439439-	if (buf >= high_memory) {440440-		struct page *p1;441441-442442-		if (((size_t)buf & PAGE_MASK) !=443443-		    ((size_t)(buf + count - 1) & PAGE_MASK))444444-			goto out_copy;445445-		p1 = vmalloc_to_page(buf);446446-		if (!p1)447447-			goto out_copy;448448-		buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);449449-	}450450-451451-	dma_src = dma_map_single(&c->pdev->dev, buf, 455455+	dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE);452456	dma_dst = c->phys_base + bram_offset;453453-	if (dma_mapping_error(&c->pdev->dev, dma_src)) {454454-		dev_err(&c->pdev->dev,455455-			"Couldn't DMA map a %d byte buffer\n",456456-			count);457457-		return -1;458458-	}459459-460460-	ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);461461-	dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE);462462-463463-	if (ret) {464464-		dev_err(&c->pdev->dev, "timeout waiting for DMA\n");457457+	if (dma_mapping_error(dev, dma_src)) {458458+		dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count);465459		goto out_copy;466460	}467461468468-	return 0;462462+	err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count);463463+	dma_unmap_single(dev, dma_src, count, DMA_TO_DEVICE);464464+	if (!err)465465+		return 0;466466+467467+	dev_err(dev, "timeout waiting for DMA\n");469468470469out_copy:471470	memcpy(this->base + bram_offset, buf, count);
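The reworked omap2 OneNAND paths above collapse several bail-out conditions into one check that decides between DMA and PIO. A minimal standalone model of that predicate, with names local to this sketch rather than the driver's (`use_dma`, `buf_is_lowmem`, and `in_atomic_ctx` stand in for `virt_addr_valid()` and `in_interrupt() || oops_in_progress`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Fall back to PIO unless the buffer is DMA-able (lowmem), both the
 * buffer and the BufferRAM offset are 4-byte aligned, the transfer is
 * large enough to amortize the DMA setup cost, and we are not in an
 * atomic context (panic_write() may run from an interrupt).
 */
static bool use_dma(uintptr_t buf, int bram_offset, size_t count,
		    bool buf_is_lowmem, bool in_atomic_ctx)
{
	if (!buf_is_lowmem)
		return false;	/* vmalloc/highmem buffer: not DMA-able here */
	if ((bram_offset & 3) || (buf & 3))
		return false;	/* unaligned: PIO */
	if (count < 384)
		return false;	/* too small for DMA to be profitable */
	if (in_atomic_ctx)
		return false;	/* cannot sleep waiting for DMA completion */
	return true;
}
```

The single predicate replaces the old per-case handling, including the removed `vmalloc_to_page()` fixup for high memory buffers, which now simply takes the PIO path.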
···706706 */707707int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms)708708{709709+ const struct nand_sdr_timings *timings;709710 u8 status = 0;710711 int ret;711712712713 if (!chip->exec_op)713714 return -ENOTSUPP;715715+716716+ /* Wait tWB before polling the STATUS reg. */717717+ timings = nand_get_sdr_timings(&chip->data_interface);718718+ ndelay(PSEC_TO_NSEC(timings->tWB_max));714719715720 ret = nand_status_op(chip, NULL);716721 if (ret)
+9-6
drivers/net/bonding/bond_alb.c
···450450{451451 int i;452452453453- if (!client_info->slave)453453+ if (!client_info->slave || !is_valid_ether_addr(client_info->mac_dst))454454 return;455455456456 for (i = 0; i < RLB_ARP_BURST_SIZE; i++) {···943943 skb->priority = TC_PRIO_CONTROL;944944 skb->dev = slave->dev;945945946946+ netdev_dbg(slave->bond->dev,947947+ "Send learning packet: dev %s mac %pM vlan %d\n",948948+ slave->dev->name, mac_addr, vid);949949+946950 if (vid)947951 __vlan_hwaccel_put_tag(skb, vlan_proto, vid);948952···969965 u8 *mac_addr = data->mac_addr;970966 struct bond_vlan_tag *tags;971967972972- if (is_vlan_dev(upper) && vlan_get_encap_level(upper) == 0) {973973- if (strict_match &&974974- ether_addr_equal_64bits(mac_addr,975975- upper->dev_addr)) {968968+ if (is_vlan_dev(upper) &&969969+ bond->nest_level == vlan_get_encap_level(upper) - 1) {970970+ if (upper->addr_assign_type == NET_ADDR_STOLEN) {976971 alb_send_lp_vid(slave, mac_addr,977972 vlan_dev_vlan_proto(upper),978973 vlan_dev_vlan_id(upper));979979- } else if (!strict_match) {974974+ } else {980975 alb_send_lp_vid(slave, upper->dev_addr,981976 vlan_dev_vlan_proto(upper),982977 vlan_dev_vlan_id(upper));
+2
drivers/net/bonding/bond_main.c
···17381738 if (bond_mode_uses_xmit_hash(bond))17391739 bond_update_slave_arr(bond, NULL);1740174017411741+ bond->nest_level = dev_get_nest_level(bond_dev);17421742+17411743 netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n",17421744 slave_dev->name,17431745 bond_is_active_slave(new_slave) ? "an active" : "a backup",
···114114 unsigned int num_gpio;115115 unsigned int max_vid;116116 unsigned int port_base_addr;117117+ unsigned int phy_base_addr;117118 unsigned int global1_addr;118119 unsigned int global2_addr;119120 unsigned int age_time_coeff;
···943943 kfree(ipsec->ip_tbl);944944 kfree(ipsec->rx_tbl);945945 kfree(ipsec->tx_tbl);946946+ kfree(ipsec);946947err1:947947- kfree(adapter->ipsec);948948 netdev_err(adapter->netdev, "Unable to allocate memory for SA tables");949949}950950
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
···34273427 hw->phy.sfp_setup_needed = false;34283428 }3429342934303430+ if (status == IXGBE_ERR_SFP_NOT_SUPPORTED)34313431+ return status;34323432+34303433 /* Reset PHY */34313434 if (!hw->phy.reset_disable && hw->phy.ops.reset)34323435 hw->phy.ops.reset(hw);
···1007100710081008 mutex_lock(&priv->state_lock);1009100910101010- if (!test_bit(MLX5E_STATE_OPENED, &priv->state))10111011- goto out;10121012-10131010 new_channels.params = priv->channels.params;10141011 mlx5e_trust_update_tx_min_inline_mode(priv, &new_channels.params);10121012+10131013+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {10141014+ priv->channels.params = new_channels.params;10151015+ goto out;10161016+ }1015101710161018 /* Skip if tx_min_inline is the same */10171019 if (new_channels.params.tx_min_inline_mode ==
···290290291291 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {292292 netdev_err(priv->netdev,293293- "\tCan't perform loobpack test while device is down\n");293293+ "\tCan't perform loopback test while device is down\n");294294 return -ENODEV;295295 }296296
+6-1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
···12611261 f->mask);12621262 addr_type = key->addr_type;1263126312641264+ /* the HW doesn't support frag first/later */12651265+ if (mask->flags & FLOW_DIS_FIRST_FRAG)12661266+ return -EOPNOTSUPP;12671267+12641268 if (mask->flags & FLOW_DIS_IS_FRAGMENT) {12651269 MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1);12661270 MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,···18681864 }1869186518701866 ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);18711871- if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) {18671867+ if (modify_ip_header && ip_proto != IPPROTO_TCP &&18681868+ ip_proto != IPPROTO_UDP && ip_proto != IPPROTO_ICMP) {18721869 pr_info("can't offload re-write of ip proto %d\n", ip_proto);18731870 return false;18741871 }
···1587158715881588	mlx5_enter_error_state(dev, true);1589158915901590+	/* Some platforms require freeing the IRQs in the shutdown15911591+	 * flow. If they aren't freed they can't be allocated after15921592+	 * kexec. There is no need to clean up the mlx5_core software15931593+	 * contexts.15941594+	 */15951595+	mlx5_irq_clear_affinity_hints(dev);15961596+	mlx5_core_eq_free_irqs(dev);15971597+15901598	return 0;15911599}15921600
···128128 u32 *out, int outlen);129129int mlx5_start_eqs(struct mlx5_core_dev *dev);130130void mlx5_stop_eqs(struct mlx5_core_dev *dev);131131+/* This function should only be called after mlx5_cmd_force_teardown_hca */132132+void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev);131133struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn);132134u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq *eq);133135void mlx5_cq_tasklet_cb(unsigned long data);
+2-2
drivers/net/ethernet/mellanox/mlxsw/core.c
···11001100err_alloc_lag_mapping:11011101 mlxsw_ports_fini(mlxsw_core);11021102err_ports_init:11031103- mlxsw_bus->fini(bus_priv);11041104-err_bus_init:11051103 if (!reload)11061104 devlink_resources_unregister(devlink, NULL);11071105err_register_resources:11061106+ mlxsw_bus->fini(bus_priv);11071107+err_bus_init:11081108 if (!reload)11091109 devlink_free(devlink);11101110err_devlink_alloc:
···17181718 struct net_device *dev = mlxsw_sp_port->dev;17191719 int err;1720172017211721- if (bridge_port->bridge_device->multicast_enabled) {17221722- if (bridge_port->bridge_device->multicast_enabled) {17231723- err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid,17241724- false);17251725- if (err)17261726- netdev_err(dev, "Unable to remove port from SMID\n");17271727- }17211721+ if (bridge_port->bridge_device->multicast_enabled &&17221722+ !bridge_port->mrouter) {17231723+ err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false);17241724+ if (err)17251725+ netdev_err(dev, "Unable to remove port from SMID\n");17281726 }1729172717301728 err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
···115115116116void qed_l2_setup(struct qed_hwfn *p_hwfn)117117{118118- if (p_hwfn->hw_info.personality != QED_PCI_ETH &&119119- p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)118118+ if (!QED_IS_L2_PERSONALITY(p_hwfn))120119 return;121120122121 mutex_init(&p_hwfn->p_l2_info->lock);···125126{126127 u32 i;127128128128- if (p_hwfn->hw_info.personality != QED_PCI_ETH &&129129- p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE)129129+ if (!QED_IS_L2_PERSONALITY(p_hwfn))130130 return;131131132132 if (!p_hwfn->p_l2_info)
+1-1
drivers/net/ethernet/qlogic/qed/qed_ll2.c
···23702370 u8 flags = 0;2371237123722372 if (unlikely(skb->ip_summed != CHECKSUM_NONE)) {23732373- DP_INFO(cdev, "Cannot transmit a checksumed packet\n");23732373+ DP_INFO(cdev, "Cannot transmit a checksummed packet\n");23742374 return -EINVAL;23752375 }23762376
+1-1
drivers/net/ethernet/qlogic/qed/qed_main.c
···680680 tasklet_disable(p_hwfn->sp_dpc);681681 p_hwfn->b_sp_dpc_enabled = false;682682 DP_VERBOSE(cdev, NETIF_MSG_IFDOWN,683683- "Disabled sp taskelt [hwfn %d] at %p\n",683683+ "Disabled sp tasklet [hwfn %d] at %p\n",684684 i, p_hwfn->sp_dpc);685685 }686686 }
+1-1
drivers/net/ethernet/qlogic/qed/qed_roce.c
···848848849849 if (!(qp->resp_offloaded)) {850850 DP_NOTICE(p_hwfn,851851- "The responder's qp should be offloded before requester's\n");851851+ "The responder's qp should be offloaded before requester's\n");852852 return -EINVAL;853853 }854854
+1-1
drivers/net/ethernet/qlogic/qede/qede_rdma.c
···238238 }239239240240 if (!found) {241241- event_node = kzalloc(sizeof(*event_node), GFP_KERNEL);241241+ event_node = kzalloc(sizeof(*event_node), GFP_ATOMIC);242242 if (!event_node) {243243 DP_NOTICE(edev,244244 "qedr: Could not allocate memory for rdma work\n");
···49814981static void rtl_pll_power_up(struct rtl8169_private *tp)49824982{49834983 rtl_generic_op(tp, tp->pll_power_ops.up);49844984+49854985+ /* give MAC/PHY some time to resume */49864986+ msleep(20);49844987}4985498849864989static void rtl_init_pll_power_ops(struct rtl8169_private *tp)
+3-2
drivers/net/ethernet/sfc/ef10.c
···47844784 * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that47854785 * the rule is not removed by efx_rps_hash_del() below.47864786 */47874787- ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority,47884788- filter_idx, true) == 0;47874787+ if (ret)47884788+ ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority,47894789+ filter_idx, true) == 0;47894790 /* While we can't safely dereference rule (we dropped the lock), we can47904791 * still test it for NULL.47914792 */
+2
drivers/net/ethernet/sfc/rx.c
···839839 int rc;840840841841 rc = efx->type->filter_insert(efx, &req->spec, true);842842+ if (rc >= 0)843843+ rc %= efx->type->max_rx_ip_filters;842844 if (efx->rps_hash_table) {843845 spin_lock_bh(&efx->rps_hash_lock);844846 rule = efx_rps_hash_find(efx, &req->spec);
+2-3
drivers/net/ethernet/sun/niu.c
···3443344334443444 len = (val & RCR_ENTRY_L2_LEN) >>34453445 RCR_ENTRY_L2_LEN_SHIFT;34463446- len -= ETH_FCS_LEN;34463446+ append_size = len + ETH_HLEN + ETH_FCS_LEN;3447344734483448 addr = (val & RCR_ENTRY_PKT_BUF_ADDR) <<34493449 RCR_ENTRY_PKT_BUF_ADDR_SHIFT;···34533453 RCR_ENTRY_PKTBUFSZ_SHIFT];3454345434553455 off = addr & ~PAGE_MASK;34563456- append_size = rcr_size;34573456 if (num_rcr == 1) {34583457 int ptype;34593458···34653466 else34663467 skb_checksum_none_assert(skb);34673468 } else if (!(val & RCR_ENTRY_MULTI))34683468- append_size = len - skb->len;34693469+ append_size = append_size - skb->len;3469347034703471 niu_rx_skb_append(skb, page, off, append_size, rcr_size);34713472 if ((page->index + rp->rbr_block_size) - rcr_size == addr) {
···535535536536 /* Grab the bits from PHYIR1, and put them in the upper half */537537 phy_reg = mdiobus_read(bus, addr, MII_PHYSID1);538538- if (phy_reg < 0)538538+ if (phy_reg < 0) {539539+ /* if there is no device, return without an error so scanning540540+ * the bus works properly541541+ */542542+ if (phy_reg == -EIO || phy_reg == -ENODEV) {543543+ *phy_id = 0xffffffff;544544+ return 0;545545+ }546546+539547 return -EIO;548548+ }540549541550 *phy_id = (phy_reg & 0xffff) << 16;542551
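The `get_phy_id()` change above distinguishes "no PHY at this address" from a real bus error. A small standalone model of the fixed logic (the function name and return convention here are illustrative, not the kernel's):

```c
#include <assert.h>

#define EIO	5
#define ENODEV	19

/*
 * Model of the fixed scan behaviour: a PHYSID1 read failing with
 * -EIO/-ENODEV means there is no device at this address, so report the
 * all-ones ID with success and let the bus scan continue; any other
 * error still aborts the scan with -EIO.
 */
static long long scan_phy_id(int phy_reg)
{
	if (phy_reg < 0) {
		if (phy_reg == -EIO || phy_reg == -ENODEV)
			return 0xffffffffLL;	/* no device: not an error */
		return -EIO;			/* real bus failure */
	}
	return ((long long)phy_reg & 0xffff) << 16; /* PHYSID1 -> upper half */
}
```

Returning `0xffffffff` mirrors what an empty MDIO address reads back as, which the caller already treats as "no PHY here".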
···10981098 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)},10991099 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)},11001100 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)},11011101+ {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */11011102 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)},11021103 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)},11031104 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */···13421341 if (!id->driver_info) {13431342 dev_dbg(&intf->dev, "setting defaults for dynamic device id\n");13441343 id->driver_info = (unsigned long)&qmi_wwan_info;13441344+ }13451345+13461346+ /* There are devices where the same interface number can be13471347+ * configured as different functions. We should only bind to13481348+ * vendor specific functions when matching on interface number13491349+ */13501350+ if (id->match_flags & USB_DEVICE_ID_MATCH_INT_NUMBER &&13511351+ desc->bInterfaceClass != USB_CLASS_VENDOR_SPEC) {13521352+ dev_dbg(&intf->dev,13531353+ "Rejecting interface number match for class %02x\n",13541354+ desc->bInterfaceClass);13551355+ return -ENODEV;13451356 }1346135713471358 /* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */
···101101 *102102 * This function parses the regulatory channel data received as a103103 * MCC_UPDATE_CMD command. It returns a newly allocated regulatory domain,104104- * to be fed into the regulatory core. An ERR_PTR is returned on error.104104+ * to be fed into the regulatory core. In case geo_info is set, it is105105+ * handled accordingly. An ERR_PTR is returned on error.105106 * If not given to the regulatory core, the user is responsible for freeing106107 * the regdomain returned here with kfree.107108 */108109struct ieee80211_regdomain *109110iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg,110110-		       int num_of_ch, __le32 *channels, u16 fw_mcc);111111+		       int num_of_ch, __le32 *channels, u16 fw_mcc,112112+		       u16 geo_info);111113112114#endif /* __iwl_nvm_parse_h__ */
+2-1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
···311311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg,312312 __le32_to_cpu(resp->n_channels),313313 resp->channels,314314- __le16_to_cpu(resp->mcc));314314+ __le16_to_cpu(resp->mcc),315315+ __le16_to_cpu(resp->geo_info));315316 /* Store the return source id */316317 src_id = resp->source_id;317318 kfree(resp);
+1
drivers/net/wireless/mac80211_hwsim.c
···32363236 GENL_SET_ERR_MSG(info,"MAC is no valid source addr");32373237 NL_SET_BAD_ATTR(info->extack,32383238 info->attrs[HWSIM_ATTR_PERM_ADDR]);32393239+ kfree(hwname);32393240 return -EINVAL;32403241 }32413242
···27272828config NVME_RDMA2929 tristate "NVM Express over Fabrics RDMA host driver"3030- depends on INFINIBAND && BLOCK3030+ depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK3131 select NVME_CORE3232 select NVME_FABRICS3333 select SG_POOL
+9-26
drivers/nvme/host/core.c
···9999100100static void nvme_ns_remove(struct nvme_ns *ns);101101static int nvme_revalidate_disk(struct gendisk *disk);102102+static void nvme_put_subsystem(struct nvme_subsystem *subsys);102103103104int nvme_reset_ctrl(struct nvme_ctrl *ctrl)104105{···118117	ret = nvme_reset_ctrl(ctrl);119118	if (!ret) {120119		flush_work(&ctrl->reset_work);121121-		if (ctrl->state != NVME_CTRL_LIVE)120120+		if (ctrl->state != NVME_CTRL_LIVE &&121121+		    ctrl->state != NVME_CTRL_ADMIN_ONLY)122122			ret = -ENETRESET;123123	}124124···352350	ida_simple_remove(&head->subsys->ns_ida, head->instance);353351	list_del_init(&head->entry);354352	cleanup_srcu_struct(&head->srcu);353353+	nvme_put_subsystem(head->subsys);355354	kfree(head);356355}357356···767764			ret = PTR_ERR(meta);768765			goto out_unmap;769766		}767767+		req->cmd_flags |= REQ_INTEGRITY;770768	}771769	}772770···28642860		goto out_cleanup_srcu;2865286128662862	list_add_tail(&head->entry, &ctrl->subsys->nsheads);28632863+28642864+	kref_get(&ctrl->subsys->ref);28652865+28672866	return head;28682867out_cleanup_srcu:28692868	cleanup_srcu_struct(&head->srcu);···30042997	if (nvme_init_ns_head(ns, nsid, id))30052998		goto out_free_id;30062999	nvme_setup_streams_ns(ctrl, ns);30073007-30083008-#ifdef CONFIG_NVME_MULTIPATH30093009-	/*30103010-	 * If multipathing is enabled we need to always use the subsystem30113011-	 * instance number for numbering our devices to avoid conflicts30123012-	 * between subsystems that have multiple controllers and thus use30133013-	 * the multipath-aware subsystem node and those that have a single30143014-	 * controller and use the controller node directly.30153015-	 */30163016-	if (ns->head->disk) {30173017-		sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance,30183018-			ctrl->cntlid, ns->head->instance);30193019-		flags = GENHD_FL_HIDDEN;30203020-	} else {30213021-		sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance,30223022-			ns->head->instance);30233023-	}30243024-#else30253025-	/*30263026-	 * But without the multipath code enabled, multiple controller per30273027-	 * subsystems are visible as devices and thus we cannot use the30283028-	 * subsystem instance.30293029-	 */30303030-	sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);30313031-#endif30003000+	nvme_set_disk_name(disk_name, ns, ctrl, &flags);3032300130333002	if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) {30343003		if (nvme_nvm_register(ns, disk_name, node)) {
···1515#include "nvme.h"16161717static bool multipath = true;1818-module_param(multipath, bool, 0644);1818+module_param(multipath, bool, 0444);1919MODULE_PARM_DESC(multipath,2020 "turn on native support for multiple controllers per subsystem");2121+2222+/*2323+ * If multipathing is enabled we need to always use the subsystem instance2424+ * number for numbering our devices to avoid conflicts between subsystems that2525+ * have multiple controllers and thus use the multipath-aware subsystem node2626+ * and those that have a single controller and use the controller node2727+ * directly.2828+ */2929+void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,3030+ struct nvme_ctrl *ctrl, int *flags)3131+{3232+ if (!multipath) {3333+ sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);3434+ } else if (ns->head->disk) {3535+ sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance,3636+ ctrl->cntlid, ns->head->instance);3737+ *flags = GENHD_FL_HIDDEN;3838+ } else {3939+ sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance,4040+ ns->head->instance);4141+ }4242+}21432244void nvme_failover_req(struct request *req)2345{
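The naming logic that moves into `nvme_set_disk_name()` has three cases. A standalone sketch of just those cases, with the driver structs flattened into plain ints for illustration (the function below is a model, not the kernel helper):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Model of the three naming cases: without multipath the controller
 * instance is used; with multipath, a per-controller (hidden) node is
 * named nvme<subsys>c<cntlid>n<ns> while the subsystem-level node gets
 * nvme<subsys>n<ns>, avoiding clashes between single- and
 * multi-controller subsystems.
 */
static const char *set_disk_name(int multipath, int has_mpath_disk,
				 int subsys_inst, int ctrl_inst,
				 int cntlid, int ns_inst)
{
	static char name[32];

	if (!multipath)
		snprintf(name, sizeof(name), "nvme%dn%d",
			 ctrl_inst, ns_inst);
	else if (has_mpath_disk)
		snprintf(name, sizeof(name), "nvme%dc%dn%d",
			 subsys_inst, cntlid, ns_inst);
	else
		snprintf(name, sizeof(name), "nvme%dn%d",
			 subsys_inst, ns_inst);
	return name;
}
```

Factoring this into the multipath file is also why the `multipath` module parameter becomes read-only (0444): the chosen scheme must stay stable for the lifetime of the module.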
+17
drivers/nvme/host/nvme.h
···8484 * Supports the LighNVM command set if indicated in vs[1].8585 */8686 NVME_QUIRK_LIGHTNVM = (1 << 6),8787+8888+ /*8989+ * Set MEDIUM priority on SQ creation9090+ */9191+ NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7),8792};88938994/*···441436extern const struct block_device_operations nvme_ns_head_ops;442437443438#ifdef CONFIG_NVME_MULTIPATH439439+void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,440440+ struct nvme_ctrl *ctrl, int *flags);444441void nvme_failover_req(struct request *req);445442bool nvme_req_needs_failover(struct request *req, blk_status_t error);446443void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);···468461}469462470463#else464464+/*465465+ * Without the multipath code enabled, multiple controller per subsystems are466466+ * visible as devices and thus we cannot use the subsystem instance.467467+ */468468+static inline void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,469469+ struct nvme_ctrl *ctrl, int *flags)470470+{471471+ sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance);472472+}473473+471474static inline void nvme_failover_req(struct request *req)472475{473476}
+11-1
drivers/nvme/host/pci.c
···10931093static int adapter_alloc_sq(struct nvme_dev *dev, u16 qid,10941094		struct nvme_queue *nvmeq)10951095{10961096+	struct nvme_ctrl *ctrl = &dev->ctrl;10961097	struct nvme_command c;10971098	int flags = NVME_QUEUE_PHYS_CONTIG;10991099+11001100+	/*11011101+	 * Some drives have a bug that auto-enables WRRU if MEDIUM isn't11021102+	 * set. Since URGENT priority is zero, it makes all queues11031103+	 * URGENT.11041104+	 */11051105+	if (ctrl->quirks & NVME_QUIRK_MEDIUM_PRIO_SQ)11061106+		flags |= NVME_SQ_PRIO_MEDIUM;1098110710991108	/*11001109	 * Note: we (ab)use the fact that the prp fields survive if no data···27102701		.driver_data = NVME_QUIRK_STRIPE_SIZE |27112702				NVME_QUIRK_DEALLOCATE_ZEROES, },27122703	{ PCI_VDEVICE(INTEL, 0xf1a5),	/* Intel 600P/P3100 */27132713-		.driver_data = NVME_QUIRK_NO_DEEPEST_PS },27042704+		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |27052705+				NVME_QUIRK_MEDIUM_PRIO_SQ },27142706	{ PCI_VDEVICE(INTEL, 0x5845),	/* Qemu emulated controller */27152707		.driver_data = NVME_QUIRK_IDENTIFY_CNS, },27162708	{ PCI_DEVICE(0x1c58, 0x0003),	/* HGST adapter */
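The quirk works because of how the NVMe spec encodes queue priority: a two-bit QPRIO field where URGENT is 0, so a zeroed field silently means URGENT once weighted round robin kicks in. A sketch of the flag computation, with the constants spelled out locally to mirror the two-bit encoding (values here are this sketch's assumption of the kernel definitions):

```c
#include <assert.h>

/* QPRIO is a 2-bit field at bits 2:1 of Create I/O SQ CDW11;
 * urgent == 0, high == 1, medium == 2, low == 3. */
#define SQ_PRIO_URGENT		(0 << 1)
#define SQ_PRIO_MEDIUM		(2 << 1)
#define QUEUE_PHYS_CONTIG	(1 << 0)

/* Always encode MEDIUM on affected drives so a buggy auto-enable of
 * WRRU does not leave every queue at the (zero-encoded) URGENT level. */
static int sq_flags(int has_medium_prio_quirk)
{
	int flags = QUEUE_PHYS_CONTIG;

	if (has_medium_prio_quirk)
		flags |= SQ_PRIO_MEDIUM;
	return flags;
}
```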
+1-1
drivers/nvme/target/Kconfig
···27272828config NVME_TARGET_RDMA2929 tristate "NVMe over Fabrics RDMA target support"3030- depends on INFINIBAND3030+ depends on INFINIBAND && INFINIBAND_ADDR_TRANS3131 depends on NVME_TARGET3232 select SGL_ALLOC3333 help
+6
drivers/nvme/target/loop.c
···469469 nvme_stop_ctrl(&ctrl->ctrl);470470 nvme_loop_shutdown_ctrl(ctrl);471471472472+ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {473473+ /* state change failure should never happen */474474+ WARN_ON_ONCE(1);475475+ return;476476+ }477477+472478 ret = nvme_loop_configure_admin_queue(ctrl);473479 if (ret)474480 goto out_disable;
+21-9
drivers/of/overlay.c
···102102103103static BLOCKING_NOTIFIER_HEAD(overlay_notify_chain);104104105105+/**106106+ * of_overlay_notifier_register() - Register notifier for overlay operations107107+ * @nb:		Notifier block to register108108+ *109109+ * Register for notification on overlay operations on device tree nodes. The110110+ * reported actions are defined by @of_reconfig_change. The notifier callback111111+ * furthermore receives a pointer to the affected device tree node.112112+ *113113+ * Note that a notifier callback is not supposed to store pointers to a device114114+ * tree node or its content beyond @OF_OVERLAY_POST_REMOVE corresponding to the115115+ * respective node it received.116116+ */105117int of_overlay_notifier_register(struct notifier_block *nb)106118{107119	return blocking_notifier_chain_register(&overlay_notify_chain, nb);108120}109121EXPORT_SYMBOL_GPL(of_overlay_notifier_register);110122123123+/**124124+ * of_overlay_notifier_unregister() - Unregister notifier for overlay operations125125+ * @nb:		Notifier block to unregister126126+ */111127int of_overlay_notifier_unregister(struct notifier_block *nb)112128{113129	return blocking_notifier_chain_unregister(&overlay_notify_chain, nb);···687671		of_node_put(ovcs->fragments[i].overlay);688672	}689673	kfree(ovcs->fragments);690690-691674	/*692692-	 * TODO693693-	 *694694-	 * would like to: kfree(ovcs->overlay_tree);695695-	 * but can not since drivers may have pointers into this data696696-	 *697697-	 * would like to: kfree(ovcs->fdt);698698-	 * but can not since drivers may have pointers into this data675675+	 * There should be no live pointers into ovcs->overlay_tree and676676+	 * ovcs->fdt due to the policy that overlay notifiers are not allowed677677+	 * to retain pointers into the overlay devicetree.699678	 */700700-679679+	kfree(ovcs->overlay_tree);680680+	kfree(ovcs->fdt);701681	kfree(ovcs);702682}703683
+1-1
drivers/parisc/ccio-dma.c
···12631263 * I/O Page Directory, the resource map, and initalizing the12641264 * U2/Uturn chip into virtual mode.12651265 */12661266-static void12661266+static void __init12671267ccio_ioc_init(struct ioc *ioc)12681268{12691269 int i;
+27-10
drivers/pci/pci.c
···19101910EXPORT_SYMBOL(pci_pme_active);1911191119121912/**19131913- * pci_enable_wake - enable PCI device as wakeup event source19131913+ * __pci_enable_wake - enable PCI device as wakeup event source19141914 * @dev: PCI device affected19151915 * @state: PCI state from which device will issue wakeup events19161916 * @enable: True to enable event generation; false to disable···19281928 * Error code depending on the platform is returned if both the platform and19291929 * the native mechanism fail to enable the generation of wake-up events19301930 */19311931-int pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable)19311931+static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable)19321932{19331933	int ret = 0;19341934···1969196919701970	return ret;19711971}19721972+19731973+/**19741974+ * pci_enable_wake - change wakeup settings for a PCI device19751975+ * @pci_dev: Target device19761976+ * @state: PCI state from which device will issue wakeup events19771977+ * @enable: Whether or not to enable event generation19781978+ *19791979+ * If @enable is set, check device_may_wakeup() for the device before calling19801980+ * __pci_enable_wake() for it.19811981+ */19821982+int pci_enable_wake(struct pci_dev *pci_dev, pci_power_t state, bool enable)19831983+{19841984+	if (enable && !device_may_wakeup(&pci_dev->dev))19851985+		return -EINVAL;19861986+19871987+	return __pci_enable_wake(pci_dev, state, enable);19881988+}19721989EXPORT_SYMBOL(pci_enable_wake);1973199019741991/**···19981981 * should not be called twice in a row to enable wake-up due to PCI PM vs ACPI19991982 * ordering constraints.20001983 *20012001- * This function only returns error code if the device is not capable of20022002- * generating PME# from both D3_hot and D3_cold, and the platform is unable to20032003- * enable wake-up power for it.19841984+ * This function only returns an error code if the device is not allowed to wake19851985+ * up the system from sleep or it is not capable of generating PME# from both19861986+ * D3_hot and D3_cold and the platform is unable to enable wake-up power for it.20041987 */20051988int pci_wake_from_d3(struct pci_dev *dev, bool enable)20061989{···2131211421322115	dev->runtime_d3cold = target_state == PCI_D3cold;2133211621342134-	pci_enable_wake(dev, target_state, pci_dev_run_wake(dev));21172117+	__pci_enable_wake(dev, target_state, pci_dev_run_wake(dev));2135211821362119	error = pci_set_power_state(dev, target_state);21372120···21552138{21562139	struct pci_bus *bus = dev->bus;2157214021582158-	if (device_can_wakeup(&dev->dev))21592159-		return true;21602160-21612141	if (!dev->pme_support)21622142		return false;2163214321642144	/* PME-capable in principle, but not from the target power state */21652165-	if (!pci_pme_capable(dev, pci_target_state(dev, false)))21452145+	if (!pci_pme_capable(dev, pci_target_state(dev, true)))21662146		return false;21472147+21482148+	if (device_can_wakeup(&dev->dev))21492149+		return true;2167215021682151	while (bus->parent) {21692152		struct pci_dev *bridge = bus->self;
+12-4
drivers/pinctrl/intel/pinctrl-cherryview.c
···1622162216231623 if (!need_valid_mask) {16241624 irq_base = devm_irq_alloc_descs(pctrl->dev, -1, 0,16251625- chip->ngpio, NUMA_NO_NODE);16251625+ community->npins, NUMA_NO_NODE);16261626 if (irq_base < 0) {16271627 dev_err(pctrl->dev, "Failed to allocate IRQ numbers\n");16281628 return irq_base;16291629 }16301630- } else {16311631- irq_base = 0;16321630 }1633163116341634- ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, irq_base,16321632+ ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, 0,16351633 handle_bad_irq, IRQ_TYPE_NONE);16361634 if (ret) {16371635 dev_err(pctrl->dev, "failed to add IRQ chip\n");16381636 return ret;16371637+ }16381638+16391639+ if (!need_valid_mask) {16401640+ for (i = 0; i < community->ngpio_ranges; i++) {16411641+ range = &community->gpio_ranges[i];16421642+16431643+ irq_domain_associate_many(chip->irq.domain, irq_base,16441644+ range->base, range->npins);16451645+ irq_base += range->npins;16461646+ }16391647 }1640164816411649 gpiochip_set_chained_irqchip(chip, &chv_gpio_irqchip, irq,
···33 *44 * This program is free software: you can redistribute it and/or modify55 * it under the terms of the GNU General Public License as published by66- * the Free Software Foundation, either version 3 of the License, or66+ * the Free Software Foundation, either version 2 of the License, or77 * (at your option) any later version.88 *99 * This program is distributed in the hope that it will be useful,
+1-2
drivers/scsi/isci/port_config.c
···291291 * Note: We have not moved the current phy_index so we will actually292292 * compare the starting phy with itself.293293 * This is expected and required to add the phy to the port. */294294- while (phy_index < SCI_MAX_PHYS) {294294+ for (; phy_index < SCI_MAX_PHYS; phy_index++) {295295 if ((phy_mask & (1 << phy_index)) == 0)296296 continue;297297 sci_phy_get_sas_address(&ihost->phys[phy_index],···311311 &ihost->phys[phy_index]);312312313313 assigned_phy_mask |= (1 << phy_index);314314- phy_index++;315314316315317316 }
+5-2
drivers/scsi/storvsc_drv.c
···17221722 max_targets = STORVSC_MAX_TARGETS;17231723 max_channels = STORVSC_MAX_CHANNELS;17241724 /*17251725- * On Windows8 and above, we support sub-channels for storage.17251725+ * On Windows8 and above, we support sub-channels for storage17261726+ * on SCSI and FC controllers.17261727 * The number of sub-channels offered is based on the number of17271728 * VCPUs in the guest.17281729 */17291729- max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel);17301730+ if (!dev_is_ide)17311731+ max_sub_channels =17321732+ (num_cpus - 1) / storvsc_vcpus_per_sub_channel;17301733 }1731173417321735 scsi_driver.can_queue = (max_outstanding_req_per_channel *
+1-1
drivers/staging/media/imx/imx-media-csi.c
···17991799 priv->dev->of_node = pdata->of_node;18001800 pinctrl = devm_pinctrl_get_select_default(priv->dev);18011801 if (IS_ERR(pinctrl)) {18021802- ret = PTR_ERR(priv->vdev);18021802+ ret = PTR_ERR(pinctrl);18031803 dev_dbg(priv->dev,18041804 "devm_pinctrl_get_select_default() failed: %d\n", ret);18051805 if (ret != -ENODEV)
+4-4
drivers/target/target_core_iblock.c
···427427{428428 struct se_device *dev = cmd->se_dev;429429 struct scatterlist *sg = &cmd->t_data_sg[0];430430- unsigned char *buf, zero = 0x00, *p = &zero;431431- int rc, ret;430430+ unsigned char *buf, *not_zero;431431+ int ret;432432433433 buf = kmap(sg_page(sg)) + sg->offset;434434 if (!buf)···437437 * Fall back to block_execute_write_same() slow-path if438438 * incoming WRITE_SAME payload does not contain zeros.439439 */440440- rc = memcmp(buf, p, cmd->data_length);440440+ not_zero = memchr_inv(buf, 0x00, cmd->data_length);441441 kunmap(sg_page(sg));442442443443- if (rc)443443+ if (not_zero)444444 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;445445446446 ret = blkdev_issue_zeroout(bdev,
···185185 * @regulator: pointer to the TMU regulator structure.186186 * @reg_conf: pointer to structure to register with core thermal.187187 * @ntrip: number of supported trip points.188188+ * @enabled: current status of TMU device188189 * @tmu_initialize: SoC specific TMU initialization method189190 * @tmu_control: SoC specific TMU control method190191 * @tmu_read: SoC specific TMU temperature read method···206205 struct regulator *regulator;207206 struct thermal_zone_device *tzd;208207 unsigned int ntrip;208208+ bool enabled;209209210210 int (*tmu_initialize)(struct platform_device *pdev);211211 void (*tmu_control)(struct platform_device *pdev, bool on);···400398 mutex_lock(&data->lock);401399 clk_enable(data->clk);402400 data->tmu_control(pdev, on);401401+ data->enabled = on;403402 clk_disable(data->clk);404403 mutex_unlock(&data->lock);405404}···892889static int exynos_get_temp(void *p, int *temp)893890{894891 struct exynos_tmu_data *data = p;892892+ int value, ret = 0;895893896896- if (!data || !data->tmu_read)894894+ if (!data || !data->tmu_read || !data->enabled)897895 return -EINVAL;898896899897 mutex_lock(&data->lock);900898 clk_enable(data->clk);901899902902- *temp = code_to_temp(data, data->tmu_read(data)) * MCELSIUS;900900+ value = data->tmu_read(data);901901+ if (value < 0)902902+ ret = value;903903+ else904904+ *temp = code_to_temp(data, value) * MCELSIUS;903905904906 clk_disable(data->clk);905907 mutex_unlock(&data->lock);906908907907- return 0;909909+ return ret;908910}909911910912#ifdef CONFIG_THERMAL_EMULATION
+3-1
drivers/usb/core/config.c
···191191static const unsigned short high_speed_maxpacket_maxes[4] = {192192 [USB_ENDPOINT_XFER_CONTROL] = 64,193193 [USB_ENDPOINT_XFER_ISOC] = 1024,194194- [USB_ENDPOINT_XFER_BULK] = 512,194194+195195+ /* Bulk should be 512, but some devices use 1024: we will warn below */196196+ [USB_ENDPOINT_XFER_BULK] = 1024,195197 [USB_ENDPOINT_XFER_INT] = 1024,196198};197199static const unsigned short super_speed_maxpacket_maxes[4] = {
···39283928 if (index && !hs_ep->isochronous)39293929 epctrl |= DXEPCTL_SETD0PID;3930393039313931+ /* WA for Full speed ISOC IN in DDMA mode.39323932+ * By clearing the NAK status of the EP, the core39333933+ * will send a ZLP to the IN token and assert the39343934+ * NAK interrupt relying on TxFIFO status only39353935+ */39363936+39373937+ if (hsotg->gadget.speed == USB_SPEED_FULL &&39383938+ hs_ep->isochronous && dir_in) {39393939+ /* The WA applies only to core versions from 2.72a39403940+ * to 4.00a (including both). Also for FS_IOT_1.00a39413941+ * and HS_IOT_1.00a.39423942+ */39433943+ u32 gsnpsid = dwc2_readl(hsotg->regs + GSNPSID);39443944+39453945+ if ((gsnpsid >= DWC2_CORE_REV_2_72a &&39463946+ gsnpsid <= DWC2_CORE_REV_4_00a) ||39473947+ gsnpsid == DWC2_FS_IOT_REV_1_00a ||39483948+ gsnpsid == DWC2_HS_IOT_REV_1_00a)39493949+ epctrl |= DXEPCTL_CNAK;39503950+ }39513951+39313952 dev_dbg(hsotg->dev, "%s: write DxEPCTL=0x%08x\n",39323953 __func__, epctrl);39333954
+8-5
drivers/usb/dwc2/hcd.c
···358358359359static int dwc2_vbus_supply_init(struct dwc2_hsotg *hsotg)360360{361361+ int ret;362362+361363 hsotg->vbus_supply = devm_regulator_get_optional(hsotg->dev, "vbus");362362- if (IS_ERR(hsotg->vbus_supply))363363- return 0;364364+ if (IS_ERR(hsotg->vbus_supply)) {365365+ ret = PTR_ERR(hsotg->vbus_supply);366366+ hsotg->vbus_supply = NULL;367367+ return ret == -ENODEV ? 0 : ret;368368+ }364369365370 return regulator_enable(hsotg->vbus_supply);366371}···4347434243484343 spin_unlock_irqrestore(&hsotg->lock, flags);4349434443504350- dwc2_vbus_supply_init(hsotg);43514351-43524352- return 0;43454345+ return dwc2_vbus_supply_init(hsotg);43534346}4354434743554348/*
+3-1
drivers/usb/dwc2/pci.c
···141141 goto err;142142143143 glue = devm_kzalloc(dev, sizeof(*glue), GFP_KERNEL);144144- if (!glue)144144+ if (!glue) {145145+ ret = -ENOMEM;145146 goto err;147147+ }146148147149 ret = platform_device_add(dwc2);148150 if (ret) {
···990990 /* set tx_reinit and schedule the next qh */991991 ep->tx_reinit = 1;992992 }993993- musb_start_urb(musb, is_in, next_qh);993993+994994+ if (next_qh)995995+ musb_start_urb(musb, is_in, next_qh);994996 }995997}996998
+5
drivers/usb/serial/option.c
···233233/* These Quectel products use Qualcomm's vendor ID */234234#define QUECTEL_PRODUCT_UC20 0x9003235235#define QUECTEL_PRODUCT_UC15 0x9090236236+/* These u-blox products use Qualcomm's vendor ID */237237+#define UBLOX_PRODUCT_R410M 0x90b2236238/* These Yuga products use Qualcomm's vendor ID */237239#define YUGA_PRODUCT_CLM920_NC5 0x9625238240···10671065 /* Yuga products use Qualcomm vendor ID */10681066 { USB_DEVICE(QUALCOMM_VENDOR_ID, YUGA_PRODUCT_CLM920_NC5),10691067 .driver_info = RSVD(1) | RSVD(4) },10681068+ /* u-blox products using Qualcomm vendor ID */10691069+ { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M),10701070+ .driver_info = RSVD(1) | RSVD(3) },10701071 /* Quectel products using Quectel vendor ID */10711072 { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21),10721073 .driver_info = RSVD(4) },
+35-34
drivers/usb/serial/visor.c
···335335 goto exit;336336 }337337338338- if (retval == sizeof(*connection_info)) {339339- connection_info = (struct visor_connection_info *)340340- transfer_buffer;341341-342342- num_ports = le16_to_cpu(connection_info->num_ports);343343- for (i = 0; i < num_ports; ++i) {344344- switch (345345- connection_info->connections[i].port_function_id) {346346- case VISOR_FUNCTION_GENERIC:347347- string = "Generic";348348- break;349349- case VISOR_FUNCTION_DEBUGGER:350350- string = "Debugger";351351- break;352352- case VISOR_FUNCTION_HOTSYNC:353353- string = "HotSync";354354- break;355355- case VISOR_FUNCTION_CONSOLE:356356- string = "Console";357357- break;358358- case VISOR_FUNCTION_REMOTE_FILE_SYS:359359- string = "Remote File System";360360- break;361361- default:362362- string = "unknown";363363- break;364364- }365365- dev_info(dev, "%s: port %d, is for %s use\n",366366- serial->type->description,367367- connection_info->connections[i].port, string);368368- }338338+ if (retval != sizeof(*connection_info)) {339339+ dev_err(dev, "Invalid connection information received from device\n");340340+ retval = -ENODEV;341341+ goto exit;369342 }370370- /*371371- * Handle devices that report invalid stuff here.372372- */343343+344344+ connection_info = (struct visor_connection_info *)transfer_buffer;345345+346346+ num_ports = le16_to_cpu(connection_info->num_ports);347347+348348+ /* Handle devices that report invalid stuff here. 
*/373349 if (num_ports == 0 || num_ports > 2) {374350 dev_warn(dev, "%s: No valid connect info available\n",375351 serial->type->description);376352 num_ports = 2;377353 }378354355355+ for (i = 0; i < num_ports; ++i) {356356+ switch (connection_info->connections[i].port_function_id) {357357+ case VISOR_FUNCTION_GENERIC:358358+ string = "Generic";359359+ break;360360+ case VISOR_FUNCTION_DEBUGGER:361361+ string = "Debugger";362362+ break;363363+ case VISOR_FUNCTION_HOTSYNC:364364+ string = "HotSync";365365+ break;366366+ case VISOR_FUNCTION_CONSOLE:367367+ string = "Console";368368+ break;369369+ case VISOR_FUNCTION_REMOTE_FILE_SYS:370370+ string = "Remote File System";371371+ break;372372+ default:373373+ string = "unknown";374374+ break;375375+ }376376+ dev_info(dev, "%s: port %d, is for %s use\n",377377+ serial->type->description,378378+ connection_info->connections[i].port, string);379379+ }379380 dev_info(dev, "%s: Number of ports: %d\n", serial->type->description,380381 num_ports);381382
+1
drivers/usb/typec/tcpm.c
···37253725 for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++)37263726 typec_unregister_altmode(port->port_altmode[i]);37273727 typec_unregister_port(port->typec_port);37283728+ usb_role_switch_put(port->role_sw);37283729 tcpm_debugfs_exit(port);37293730 destroy_workqueue(port->wq);37303731}
+39-8
drivers/usb/typec/tps6598x.c
···7373 struct device *dev;7474 struct regmap *regmap;7575 struct mutex lock; /* device lock */7676+ u8 i2c_protocol:1;76777778 struct typec_port *port;7879 struct typec_partner *partner;···8180 struct typec_capability typec_cap;8281};83828383+static int8484+tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)8585+{8686+ u8 data[len + 1];8787+ int ret;8888+8989+ if (!tps->i2c_protocol)9090+ return regmap_raw_read(tps->regmap, reg, val, len);9191+9292+ ret = regmap_raw_read(tps->regmap, reg, data, sizeof(data));9393+ if (ret)9494+ return ret;9595+9696+ if (data[0] < len)9797+ return -EIO;9898+9999+ memcpy(val, &data[1], len);100100+ return 0;101101+}102102+84103static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)85104{8686- return regmap_raw_read(tps->regmap, reg, val, sizeof(u16));105105+ return tps6598x_block_read(tps, reg, val, sizeof(u16));87106}8810789108static inline int tps6598x_read32(struct tps6598x *tps, u8 reg, u32 *val)90109{9191- return regmap_raw_read(tps->regmap, reg, val, sizeof(u32));110110+ return tps6598x_block_read(tps, reg, val, sizeof(u32));92111}9311294113static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)95114{9696- return regmap_raw_read(tps->regmap, reg, val, sizeof(u64));115115+ return tps6598x_block_read(tps, reg, val, sizeof(u64));97116}9811799118static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)···142121 struct tps6598x_rx_identity_reg id;143122 int ret;144123145145- ret = regmap_raw_read(tps->regmap, TPS_REG_RX_IDENTITY_SOP,146146- &id, sizeof(id));124124+ ret = tps6598x_block_read(tps, TPS_REG_RX_IDENTITY_SOP,125125+ &id, sizeof(id));147126 if (ret)148127 return ret;149128···245224 } while (val);246225247226 if (out_len) {248248- ret = regmap_raw_read(tps->regmap, TPS_REG_DATA1,249249- out_data, out_len);227227+ ret = tps6598x_block_read(tps, TPS_REG_DATA1,228228+ out_data, out_len);250229 if (ret)251230 return ret;252231 val = out_data[0];253232 } 
else {254254- ret = regmap_read(tps->regmap, TPS_REG_DATA1, &val);233233+ ret = tps6598x_block_read(tps, TPS_REG_DATA1, &val, sizeof(u8));255234 if (ret)256235 return ret;257236 }···405384 return ret;406385 if (!vid)407386 return -ENODEV;387387+388388+ /*389389+ * Check whether the adapter can handle the SMBus protocol. If it390390+ * cannot, the driver needs to take care of block reads separately.391391+ *392392+ * FIXME: Testing with I2C_FUNC_I2C. regmap-i2c uses I2C protocol393393+ * unconditionally if the adapter has I2C_FUNC_I2C set.394394+ */395395+ if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C))396396+ tps->i2c_protocol = true;408397409398 ret = tps6598x_read32(tps, TPS_REG_STATUS, &status);410399 if (ret < 0)
+7
fs/btrfs/extent-tree.c
···31423142 struct rb_node *node;31433143 int ret = 0;3144314431453145+ spin_lock(&root->fs_info->trans_lock);31453146 cur_trans = root->fs_info->running_transaction;31473147+ if (cur_trans)31483148+ refcount_inc(&cur_trans->use_count);31493149+ spin_unlock(&root->fs_info->trans_lock);31463150 if (!cur_trans)31473151 return 0;31483152···31553151 head = btrfs_find_delayed_ref_head(delayed_refs, bytenr);31563152 if (!head) {31573153 spin_unlock(&delayed_refs->lock);31543154+ btrfs_put_transaction(cur_trans);31583155 return 0;31593156 }31603157···31723167 mutex_lock(&head->mutex);31733168 mutex_unlock(&head->mutex);31743169 btrfs_put_delayed_ref_head(head);31703170+ btrfs_put_transaction(cur_trans);31753171 return -EAGAIN;31763172 }31773173 spin_unlock(&delayed_refs->lock);···32053199 }32063200 spin_unlock(&head->lock);32073201 mutex_unlock(&head->mutex);32023202+ btrfs_put_transaction(cur_trans);32083203 return ret;32093204}32103205
···52365236 len = btrfs_file_extent_num_bytes(path->nodes[0], ei);52375237 }5238523852395239+ if (offset >= sctx->cur_inode_size) {52405240+ ret = 0;52415241+ goto out;52425242+ }52395243 if (offset + len > sctx->cur_inode_size)52405244 len = sctx->cur_inode_size - offset;52415245 if (len == 0) {
+122-83
fs/ceph/file.c
···7070 */71717272/*7373- * Calculate the length sum of direct io vectors that can7474- * be combined into one page vector.7373+ * How many pages to get in one call to iov_iter_get_pages(). This7474+ * determines the size of the on-stack array used as a buffer.7575 */7676-static size_t dio_get_pagev_size(const struct iov_iter *it)7777-{7878- const struct iovec *iov = it->iov;7979- const struct iovec *iovend = iov + it->nr_segs;8080- size_t size;7676+#define ITER_GET_BVECS_PAGES 6481778282- size = iov->iov_len - it->iov_offset;8383- /*8484- * An iov can be page vectored when both the current tail8585- * and the next base are page aligned.8686- */8787- while (PAGE_ALIGNED((iov->iov_base + iov->iov_len)) &&8888- (++iov < iovend && PAGE_ALIGNED((iov->iov_base)))) {8989- size += iov->iov_len;9090- }9191- dout("dio_get_pagevlen len = %zu\n", size);9292- return size;7878+static ssize_t __iter_get_bvecs(struct iov_iter *iter, size_t maxsize,7979+ struct bio_vec *bvecs)8080+{8181+ size_t size = 0;8282+ int bvec_idx = 0;8383+8484+ if (maxsize > iov_iter_count(iter))8585+ maxsize = iov_iter_count(iter);8686+8787+ while (size < maxsize) {8888+ struct page *pages[ITER_GET_BVECS_PAGES];8989+ ssize_t bytes;9090+ size_t start;9191+ int idx = 0;9292+9393+ bytes = iov_iter_get_pages(iter, pages, maxsize - size,9494+ ITER_GET_BVECS_PAGES, &start);9595+ if (bytes < 0)9696+ return size ?: bytes;9797+9898+ iov_iter_advance(iter, bytes);9999+ size += bytes;100100+101101+ for ( ; bytes; idx++, bvec_idx++) {102102+ struct bio_vec bv = {103103+ .bv_page = pages[idx],104104+ .bv_len = min_t(int, bytes, PAGE_SIZE - start),105105+ .bv_offset = start,106106+ };107107+108108+ bvecs[bvec_idx] = bv;109109+ bytes -= bv.bv_len;110110+ start = 0;111111+ }112112+ }113113+114114+ return size;93115}9411695117/*9696- * Allocate a page vector based on (@it, @nbytes).9797- * The return value is the tuple describing a page vector,9898- * that is (@pages, @page_align, @num_pages).118118+ * 
iov_iter_get_pages() only considers one iov_iter segment, no matter119119+ * what maxsize or maxpages are given. For ITER_BVEC that is a single120120+ * page.121121+ *122122+ * Attempt to get up to @maxsize bytes worth of pages from @iter.123123+ * Return the number of bytes in the created bio_vec array, or an error.99124 */100100-static struct page **101101-dio_get_pages_alloc(const struct iov_iter *it, size_t nbytes,102102- size_t *page_align, int *num_pages)125125+static ssize_t iter_get_bvecs_alloc(struct iov_iter *iter, size_t maxsize,126126+ struct bio_vec **bvecs, int *num_bvecs)103127{104104- struct iov_iter tmp_it = *it;105105- size_t align;106106- struct page **pages;107107- int ret = 0, idx, npages;128128+ struct bio_vec *bv;129129+ size_t orig_count = iov_iter_count(iter);130130+ ssize_t bytes;131131+ int npages;108132109109- align = (unsigned long)(it->iov->iov_base + it->iov_offset) &110110- (PAGE_SIZE - 1);111111- npages = calc_pages_for(align, nbytes);112112- pages = kvmalloc(sizeof(*pages) * npages, GFP_KERNEL);113113- if (!pages)114114- return ERR_PTR(-ENOMEM);133133+ iov_iter_truncate(iter, maxsize);134134+ npages = iov_iter_npages(iter, INT_MAX);135135+ iov_iter_reexpand(iter, orig_count);115136116116- for (idx = 0; idx < npages; ) {117117- size_t start;118118- ret = iov_iter_get_pages(&tmp_it, pages + idx, nbytes,119119- npages - idx, &start);120120- if (ret < 0)121121- goto fail;137137+ /*138138+ * __iter_get_bvecs() may populate only part of the array -- zero it139139+ * out.140140+ */141141+ bv = kvmalloc_array(npages, sizeof(*bv), GFP_KERNEL | __GFP_ZERO);142142+ if (!bv)143143+ return -ENOMEM;122144123123- iov_iter_advance(&tmp_it, ret);124124- nbytes -= ret;125125- idx += (ret + start + PAGE_SIZE - 1) / PAGE_SIZE;145145+ bytes = __iter_get_bvecs(iter, maxsize, bv);146146+ if (bytes < 0) {147147+ /*148148+ * No pages were pinned -- just free the array.149149+ */150150+ kvfree(bv);151151+ return bytes;126152 }127153128128- BUG_ON(nbytes != 
0);129129- *num_pages = npages;130130- *page_align = align;131131- dout("dio_get_pages_alloc: got %d pages align %zu\n", npages, align);132132- return pages;133133-fail:134134- ceph_put_page_vector(pages, idx, false);135135- return ERR_PTR(ret);154154+ *bvecs = bv;155155+ *num_bvecs = npages;156156+ return bytes;157157+}158158+159159+static void put_bvecs(struct bio_vec *bvecs, int num_bvecs, bool should_dirty)160160+{161161+ int i;162162+163163+ for (i = 0; i < num_bvecs; i++) {164164+ if (bvecs[i].bv_page) {165165+ if (should_dirty)166166+ set_page_dirty_lock(bvecs[i].bv_page);167167+ put_page(bvecs[i].bv_page);168168+ }169169+ }170170+ kvfree(bvecs);136171}137172138173/*···781746 struct inode *inode = req->r_inode;782747 struct ceph_aio_request *aio_req = req->r_priv;783748 struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);784784- int num_pages = calc_pages_for((u64)osd_data->alignment,785785- osd_data->length);786749787787- dout("ceph_aio_complete_req %p rc %d bytes %llu\n",788788- inode, rc, osd_data->length);750750+ BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS);751751+ BUG_ON(!osd_data->num_bvecs);752752+753753+ dout("ceph_aio_complete_req %p rc %d bytes %u\n",754754+ inode, rc, osd_data->bvec_pos.iter.bi_size);789755790756 if (rc == -EOLDSNAPC) {791757 struct ceph_aio_work *aio_work;···804768 } else if (!aio_req->write) {805769 if (rc == -ENOENT)806770 rc = 0;807807- if (rc >= 0 && osd_data->length > rc) {808808- int zoff = osd_data->alignment + rc;809809- int zlen = osd_data->length - rc;771771+ if (rc >= 0 && osd_data->bvec_pos.iter.bi_size > rc) {772772+ struct iov_iter i;773773+ int zlen = osd_data->bvec_pos.iter.bi_size - rc;774774+810775 /*811776 * If read is satisfied by single OSD request,812777 * it can pass EOF. 
Otherwise read is within···822785 aio_req->total_len = rc + zlen;823786 }824787825825- if (zlen > 0)826826- ceph_zero_page_vector_range(zoff, zlen,827827- osd_data->pages);788788+ iov_iter_bvec(&i, ITER_BVEC, osd_data->bvec_pos.bvecs,789789+ osd_data->num_bvecs,790790+ osd_data->bvec_pos.iter.bi_size);791791+ iov_iter_advance(&i, rc);792792+ iov_iter_zero(zlen, &i);828793 }829794 }830795831831- ceph_put_page_vector(osd_data->pages, num_pages, aio_req->should_dirty);796796+ put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,797797+ aio_req->should_dirty);832798 ceph_osdc_put_request(req);833799834800 if (rc < 0)···919879 struct ceph_fs_client *fsc = ceph_inode_to_client(inode);920880 struct ceph_vino vino;921881 struct ceph_osd_request *req;922922- struct page **pages;882882+ struct bio_vec *bvecs;923883 struct ceph_aio_request *aio_req = NULL;924884 int num_pages = 0;925885 int flags;···954914 }955915956916 while (iov_iter_count(iter) > 0) {957957- u64 size = dio_get_pagev_size(iter);958958- size_t start = 0;917917+ u64 size = iov_iter_count(iter);959918 ssize_t len;919919+920920+ if (write)921921+ size = min_t(u64, size, fsc->mount_options->wsize);922922+ else923923+ size = min_t(u64, size, fsc->mount_options->rsize);960924961925 vino = ceph_vino(inode);962926 req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,···977933 break;978934 }979935980980- if (write)981981- size = min_t(u64, size, fsc->mount_options->wsize);982982- else983983- size = min_t(u64, size, fsc->mount_options->rsize);984984-985985- len = size;986986- pages = dio_get_pages_alloc(iter, len, &start, &num_pages);987987- if (IS_ERR(pages)) {936936+ len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages);937937+ if (len < 0) {988938 ceph_osdc_put_request(req);989989- ret = PTR_ERR(pages);939939+ ret = len;990940 break;991941 }942942+ if (len != size)943943+ osd_req_op_extent_update(req, 0, len);992944993945 /*994946 * To simplify error handling, allow AIO when IO within 
i_size···1017977 req->r_mtime = mtime;1018978 }101997910201020- osd_req_op_extent_osd_data_pages(req, 0, pages, len, start,10211021- false, false);980980+ osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len);10229811023982 if (aio_req) {1024983 aio_req->total_len += len;···1030991 list_add_tail(&req->r_unsafe_item, &aio_req->osd_reqs);10319921032993 pos += len;10331033- iov_iter_advance(iter, len);1034994 continue;1035995 }1036996···10421004 if (ret == -ENOENT)10431005 ret = 0;10441006 if (ret >= 0 && ret < len && pos + ret < size) {10071007+ struct iov_iter i;10451008 int zlen = min_t(size_t, len - ret,10461009 size - pos - ret);10471047- ceph_zero_page_vector_range(start + ret, zlen,10481048- pages);10101010+10111011+ iov_iter_bvec(&i, ITER_BVEC, bvecs, num_pages,10121012+ len);10131013+ iov_iter_advance(&i, ret);10141014+ iov_iter_zero(zlen, &i);10491015 ret += zlen;10501016 }10511017 if (ret >= 0)10521018 len = ret;10531019 }1054102010551055- ceph_put_page_vector(pages, num_pages, should_dirty);10561056-10211021+ put_bvecs(bvecs, num_pages, should_dirty);10571022 ceph_osdc_put_request(req);10581023 if (ret < 0)10591024 break;1060102510611026 pos += len;10621062- iov_iter_advance(iter, len);10631063-10641027 if (!write && pos >= size)10651028 break;10661029
+1-1
fs/cifs/Kconfig
···197197198198config CIFS_SMB_DIRECT199199 bool "SMB Direct support (Experimental)"200200- depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y200200+ depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y201201 help202202 Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1.203203 SMB Direct allows transferring SMB packets over RDMA. If unsure,
+13
fs/cifs/cifsfs.c
···10471047 return rc;10481048}1049104910501050+/*10511051+ * Directory operations under CIFS/SMB2/SMB3 are synchronous, so fsync()10521052+ * is a dummy operation.10531053+ */10541054+static int cifs_dir_fsync(struct file *file, loff_t start, loff_t end, int datasync)10551055+{10561056+ cifs_dbg(FYI, "Sync directory - name: %pD datasync: 0x%x\n",10571057+ file, datasync);10581058+10591059+ return 0;10601060+}10611061+10501062static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off,10511063 struct file *dst_file, loff_t destoff,10521064 size_t len, unsigned int flags)···11931181 .copy_file_range = cifs_copy_file_range,11941182 .clone_file_range = cifs_clone_file_range,11951183 .llseek = generic_file_llseek,11841184+ .fsync = cifs_dir_fsync,11961185};1197118611981187static void
-8
fs/cifs/connect.c
···19771977 goto cifs_parse_mount_err;19781978 }1979197919801980-#ifdef CONFIG_CIFS_SMB_DIRECT19811981- if (vol->rdma && vol->sign) {19821982- cifs_dbg(VFS, "Currently SMB direct doesn't support signing."19831983- " This is being fixed\n");19841984- goto cifs_parse_mount_err;19851985- }19861986-#endif19871987-19881980#ifndef CONFIG_KEYS19891981 /* Multiuser mounts require CONFIG_KEYS support */19901982 if (vol->multiuser) {
+6
fs/cifs/smb2ops.c
···589589590590 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);591591592592+ /*593593+ * If ea_name is NULL (listxattr) and there are no EAs, return 0 as it's594594+ * not an error. Otherwise, the specified ea_name was not found.595595+ */592596 if (!rc)593597 rc = move_smb2_ea_to_cifs(ea_data, buf_size, smb2_data,594598 SMB2_MAX_EA_BUF, ea_name);599599+ else if (!ea_name && rc == -ENODATA)600600+ rc = 0;595601596602 kfree(smb2_data);597603 return rc;
+38-35
fs/cifs/smb2pdu.c
···730730731731int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)732732{733733- int rc = 0;734734- struct validate_negotiate_info_req vneg_inbuf;733733+ int rc;734734+ struct validate_negotiate_info_req *pneg_inbuf;735735 struct validate_negotiate_info_rsp *pneg_rsp = NULL;736736 u32 rsplen;737737 u32 inbuflen; /* max of 4 dialects */738738739739 cifs_dbg(FYI, "validate negotiate\n");740740-741741-#ifdef CONFIG_CIFS_SMB_DIRECT742742- if (tcon->ses->server->rdma)743743- return 0;744744-#endif745740746741 /* In SMB3.11 preauth integrity supersedes validate negotiate */747742 if (tcon->ses->server->dialect == SMB311_PROT_ID)···760765 if (tcon->ses->session_flags & SMB2_SESSION_FLAG_IS_NULL)761766 cifs_dbg(VFS, "Unexpected null user (anonymous) auth flag sent by server\n");762767763763- vneg_inbuf.Capabilities =768768+ pneg_inbuf = kmalloc(sizeof(*pneg_inbuf), GFP_NOFS);769769+ if (!pneg_inbuf)770770+ return -ENOMEM;771771+772772+ pneg_inbuf->Capabilities =764773 cpu_to_le32(tcon->ses->server->vals->req_capabilities);765765- memcpy(vneg_inbuf.Guid, tcon->ses->server->client_guid,774774+ memcpy(pneg_inbuf->Guid, tcon->ses->server->client_guid,766775 SMB2_CLIENT_GUID_SIZE);767776768777 if (tcon->ses->sign)769769- vneg_inbuf.SecurityMode =778778+ pneg_inbuf->SecurityMode =770779 cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED);771780 else if (global_secflags & CIFSSEC_MAY_SIGN)772772- vneg_inbuf.SecurityMode =781781+ pneg_inbuf->SecurityMode =773782 cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED);774783 else775775- vneg_inbuf.SecurityMode = 0;784784+ pneg_inbuf->SecurityMode = 0;776785777786778787 if (strcmp(tcon->ses->server->vals->version_string,779788 SMB3ANY_VERSION_STRING) == 0) {780780- vneg_inbuf.Dialects[0] = cpu_to_le16(SMB30_PROT_ID);781781- vneg_inbuf.Dialects[1] = cpu_to_le16(SMB302_PROT_ID);782782- vneg_inbuf.DialectCount = cpu_to_le16(2);789789+ pneg_inbuf->Dialects[0] = cpu_to_le16(SMB30_PROT_ID);790790+ pneg_inbuf->Dialects[1] = 
cpu_to_le16(SMB302_PROT_ID);791791+ pneg_inbuf->DialectCount = cpu_to_le16(2);783792 /* structure is big enough for 3 dialects, sending only 2 */784784- inbuflen = sizeof(struct validate_negotiate_info_req) - 2;793793+ inbuflen = sizeof(*pneg_inbuf) -794794+ sizeof(pneg_inbuf->Dialects[0]);785795 } else if (strcmp(tcon->ses->server->vals->version_string,786796 SMBDEFAULT_VERSION_STRING) == 0) {787787- vneg_inbuf.Dialects[0] = cpu_to_le16(SMB21_PROT_ID);788788- vneg_inbuf.Dialects[1] = cpu_to_le16(SMB30_PROT_ID);789789- vneg_inbuf.Dialects[2] = cpu_to_le16(SMB302_PROT_ID);790790- vneg_inbuf.DialectCount = cpu_to_le16(3);797797+ pneg_inbuf->Dialects[0] = cpu_to_le16(SMB21_PROT_ID);798798+ pneg_inbuf->Dialects[1] = cpu_to_le16(SMB30_PROT_ID);799799+ pneg_inbuf->Dialects[2] = cpu_to_le16(SMB302_PROT_ID);800800+ pneg_inbuf->DialectCount = cpu_to_le16(3);791801 /* structure is big enough for 3 dialects */792792- inbuflen = sizeof(struct validate_negotiate_info_req);802802+ inbuflen = sizeof(*pneg_inbuf);793803 } else {794804 /* otherwise specific dialect was requested */795795- vneg_inbuf.Dialects[0] =805805+ pneg_inbuf->Dialects[0] =796806 cpu_to_le16(tcon->ses->server->vals->protocol_id);797797- vneg_inbuf.DialectCount = cpu_to_le16(1);807807+ pneg_inbuf->DialectCount = cpu_to_le16(1);798808 /* structure is big enough for 3 dialects, sending only 1 */799799- inbuflen = sizeof(struct validate_negotiate_info_req) - 4;809809+ inbuflen = sizeof(*pneg_inbuf) -810810+ sizeof(pneg_inbuf->Dialects[0]) * 2;800811 }801812802813 rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,803814 FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,804804- (char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req),805805- (char **)&pneg_rsp, &rsplen);815815+ (char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);806816807817 if (rc != 0) {808818 cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);809809- return -EIO;819819+ rc = -EIO;820820+ goto out_free_inbuf;810821 
}811822812812- if (rsplen != sizeof(struct validate_negotiate_info_rsp)) {823823+ rc = -EIO;824824+ if (rsplen != sizeof(*pneg_rsp)) {813825 cifs_dbg(VFS, "invalid protocol negotiate response size: %d\n",814826 rsplen);815827816828 /* relax check since Mac returns max bufsize allowed on ioctl */817817- if ((rsplen > CIFSMaxBufSize)818818- || (rsplen < sizeof(struct validate_negotiate_info_rsp)))819819- goto err_rsp_free;829829+ if (rsplen > CIFSMaxBufSize || rsplen < sizeof(*pneg_rsp))830830+ goto out_free_rsp;820831 }821832822833 /* check validate negotiate info response matches what we got earlier */···839838 goto vneg_out;840839841840 /* validate negotiate successful */841841+ rc = 0;842842 cifs_dbg(FYI, "validate negotiate info successful\n");843843- kfree(pneg_rsp);844844- return 0;843843+ goto out_free_rsp;845844846845vneg_out:847846 cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n");848848-err_rsp_free:847847+out_free_rsp:849848 kfree(pneg_rsp);850850- return -EIO;849849+out_free_inbuf:850850+ kfree(pneg_inbuf);851851+ return rc;851852}852853853854enum securityEnum
+1-1
fs/fs-writeback.c
···19611961 }1962196219631963 if (!list_empty(&wb->work_list))19641964- mod_delayed_work(bdi_wq, &wb->dwork, 0);19641964+ wb_wakeup(wb);19651965 else if (wb_has_dirty_io(wb) && dirty_writeback_interval)19661966 wb_wakeup_delayed(wb);19671967
+12-2
fs/ocfs2/refcounttree.c
···42504250static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir,42514251 struct dentry *new_dentry, bool preserve)42524252{42534253- int error;42534253+ int error, had_lock;42544254 struct inode *inode = d_inode(old_dentry);42554255 struct buffer_head *old_bh = NULL;42564256 struct inode *new_orphan_inode = NULL;42574257+ struct ocfs2_lock_holder oh;4257425842584259 if (!ocfs2_refcount_tree(OCFS2_SB(inode->i_sb)))42594260 return -EOPNOTSUPP;···42964295 goto out;42974296 }4298429742984298+ had_lock = ocfs2_inode_lock_tracker(new_orphan_inode, NULL, 1,42994299+ &oh);43004300+ if (had_lock < 0) {43014301+ error = had_lock;43024302+ mlog_errno(error);43034303+ goto out;43044304+ }43054305+42994306 /* If the security isn't preserved, we need to re-initialize them. */43004307 if (!preserve) {43014308 error = ocfs2_init_security_and_acl(dir, new_orphan_inode,···43114302 if (error)43124303 mlog_errno(error);43134304 }43144314-out:43154305 if (!error) {43164306 error = ocfs2_mv_orphaned_inode_to_new(dir, new_orphan_inode,43174307 new_dentry);43184308 if (error)43194309 mlog_errno(error);43204310 }43114311+ ocfs2_inode_unlock_tracker(new_orphan_inode, 1, &oh, had_lock);4321431243134313+out:43224314 if (new_orphan_inode) {43234315 /*43244316 * We need to open_unlock the inode no matter whether we
+16-7
fs/proc/kcore.c
···209209{210210 struct list_head *head = (struct list_head *)arg;211211 struct kcore_list *ent;212212+ struct page *p;213213+214214+ if (!pfn_valid(pfn))215215+ return 1;216216+217217+ p = pfn_to_page(pfn);218218+ if (!memmap_valid_within(pfn, p, page_zone(p)))219219+ return 1;212220213221 ent = kmalloc(sizeof(*ent), GFP_KERNEL);214222 if (!ent)215223 return -ENOMEM;216216- ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT));224224+ ent->addr = (unsigned long)page_to_virt(p);217225 ent->size = nr_pages << PAGE_SHIFT;218226219219- /* Sanity check: Can happen in 32bit arch...maybe */220220- if (ent->addr < (unsigned long) __va(0))227227+ if (!virt_addr_valid(ent->addr))221228 goto free_out;222229223230 /* cut not-mapped area. ....from ppc-32 code. */224231 if (ULONG_MAX - ent->addr < ent->size)225232 ent->size = ULONG_MAX - ent->addr;226233227227- /* cut when vmalloc() area is higher than direct-map area */228228- if (VMALLOC_START > (unsigned long)__va(0)) {229229- if (ent->addr > VMALLOC_START)230230- goto free_out;234234+ /*235235+ * We've already checked virt_addr_valid so we know this address236236+ * is a valid pointer, therefore we can check against it to determine237237+ * if we need to trim238238+ */239239+ if (VMALLOC_START > ent->addr) {231240 if (VMALLOC_START - ent->addr < ent->size)232241 ent->size = VMALLOC_START - ent->addr;233242 }
+8-1
fs/xfs/libxfs/xfs_attr.c
···511511 if (args->flags & ATTR_CREATE)512512 return retval;513513 retval = xfs_attr_shortform_remove(args);514514- ASSERT(retval == 0);514514+ if (retval)515515+ return retval;516516+ /*517517+ * Since we have removed the old attr, clear ATTR_REPLACE so518518+ * that the leaf format add routine won't trip over the attr519519+ * not being around.520520+ */521521+ args->flags &= ~ATTR_REPLACE;515522 }516523517524 if (args->namelen >= XFS_ATTR_SF_ENTSIZE_MAX ||
···466466 return __this_address;467467 if (di_size > XFS_DFORK_DSIZE(dip, mp))468468 return __this_address;469469+ if (dip->di_nextents)470470+ return __this_address;469471 /* fall through */470472 case XFS_DINODE_FMT_EXTENTS:471473 case XFS_DINODE_FMT_BTREE:···486484 if (XFS_DFORK_Q(dip)) {487485 switch (dip->di_aformat) {488486 case XFS_DINODE_FMT_LOCAL:487487+ if (dip->di_anextents)488488+ return __this_address;489489+ /* fall through */489490 case XFS_DINODE_FMT_EXTENTS:490491 case XFS_DINODE_FMT_BTREE:491492 break;492493 default:493494 return __this_address;494495 }496496+ } else {497497+ /*498498+ * If there is no fork offset, this may be a freshly-made inode499499+ * in a new disk cluster, in which case di_aformat is zeroed.500500+ * Otherwise, such an inode must be in EXTENTS format; this goes501501+ * for freed inodes as well.502502+ */503503+ switch (dip->di_aformat) {504504+ case 0:505505+ case XFS_DINODE_FMT_EXTENTS:506506+ break;507507+ default:508508+ return __this_address;509509+ }510510+ if (dip->di_anextents)511511+ return __this_address;495512 }496513497514 /* only version 3 or greater inodes are extensively verified here */
+19-5
fs/xfs/xfs_file.c
···778778 if (error)779779 goto out_unlock;780780 } else if (mode & FALLOC_FL_INSERT_RANGE) {781781- unsigned int blksize_mask = i_blocksize(inode) - 1;781781+ unsigned int blksize_mask = i_blocksize(inode) - 1;782782+ loff_t isize = i_size_read(inode);782783783783- new_size = i_size_read(inode) + len;784784 if (offset & blksize_mask || len & blksize_mask) {785785 error = -EINVAL;786786 goto out_unlock;787787 }788788789789- /* check the new inode size does not wrap through zero */790790- if (new_size > inode->i_sb->s_maxbytes) {789789+ /*790790+ * New inode size must not exceed ->s_maxbytes, accounting for791791+ * possible signed overflow.792792+ */793793+ if (inode->i_sb->s_maxbytes - isize < len) {791794 error = -EFBIG;792795 goto out_unlock;793796 }797797+ new_size = isize + len;794798795799 /* Offset should be less than i_size */796796- if (offset >= i_size_read(inode)) {800800+ if (offset >= isize) {797801 error = -EINVAL;798802 goto out_unlock;799803 }···880876 struct file *dst_file,881877 u64 dst_loff)882878{879879+ struct inode *srci = file_inode(src_file);880880+ u64 max_dedupe;883881 int error;884882883883+ /*884884+ * Since we have to read all these pages in to compare them, cut885885+ * it off at MAX_RW_COUNT/2 rounded down to the nearest block.886886+ * That means we won't do more than MAX_RW_COUNT IO per request.887887+ */888888+ max_dedupe = (MAX_RW_COUNT >> 1) & ~(i_blocksize(srci) - 1);889889+ if (len > max_dedupe)890890+ len = max_dedupe;885891 error = xfs_reflink_remap_range(src_file, loff, dst_file, dst_loff,886892 len, true);887893 if (error)
···9595 return 0;9696}97979898+void __oom_reap_task_mm(struct mm_struct *mm);9999+98100extern unsigned long oom_badness(struct task_struct *p,99101 struct mem_cgroup *memcg, const nodemask_t *nodemask,100102 unsigned long totalpages);
+1
include/linux/rbtree_augmented.h
···26262727#include <linux/compiler.h>2828#include <linux/rbtree.h>2929+#include <linux/rcupdate.h>29303031/*3132 * Please note - only struct rb_augment_callbacks and the prototypes for
···112112113113#ifdef CONFIG_DEBUG_ATOMIC_SLEEP114114115115+/*116116+ * Special states are those that do not use the normal wait-loop pattern. See117117+ * the comment with set_special_state().118118+ */119119+#define is_special_task_state(state) \120120+ ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD))121121+115122#define __set_current_state(state_value) \116123 do { \124124+ WARN_ON_ONCE(is_special_task_state(state_value));\117125 current->task_state_change = _THIS_IP_; \118126 current->state = (state_value); \119127 } while (0)128128+120129#define set_current_state(state_value) \121130 do { \131131+ WARN_ON_ONCE(is_special_task_state(state_value));\122132 current->task_state_change = _THIS_IP_; \123133 smp_store_mb(current->state, (state_value)); \124134 } while (0)125135136136+#define set_special_state(state_value) \137137+ do { \138138+ unsigned long flags; /* may shadow */ \139139+ WARN_ON_ONCE(!is_special_task_state(state_value)); \140140+ raw_spin_lock_irqsave(&current->pi_lock, flags); \141141+ current->task_state_change = _THIS_IP_; \142142+ current->state = (state_value); \143143+ raw_spin_unlock_irqrestore(&current->pi_lock, flags); \144144+ } while (0)126145#else127146/*128147 * set_current_state() includes a barrier so that the write of current->state···163144 *164145 * The above is typically ordered against the wakeup, which does:165146 *166166- * need_sleep = false;167167- * wake_up_state(p, TASK_UNINTERRUPTIBLE);147147+ * need_sleep = false;148148+ * wake_up_state(p, TASK_UNINTERRUPTIBLE);168149 *169150 * Where wake_up_state() (and all other wakeup primitives) imply enough170151 * barriers to order the store of the variable against wakeup.···173154 * once it observes the TASK_UNINTERRUPTIBLE store the waking CPU can issue a174155 * TASK_RUNNING store which can collide with __set_current_state(TASK_RUNNING).175156 *176176- * This is obviously fine, since they both store the exact same value.157157+ * However, with slightly different timing the wakeup 
TASK_RUNNING store can158158+ * also collide with the TASK_UNINTERRUPTIBLE store. Losing that store is not159159+ * a problem either because that will result in one extra go around the loop160160+ * and our @cond test will save the day.177161 *178162 * Also see the comments of try_to_wake_up().179163 */180180-#define __set_current_state(state_value) do { current->state = (state_value); } while (0)181181-#define set_current_state(state_value) smp_store_mb(current->state, (state_value))164164+#define __set_current_state(state_value) \165165+ current->state = (state_value)166166+167167+#define set_current_state(state_value) \168168+ smp_store_mb(current->state, (state_value))169169+170170+/*171171+ * set_special_state() should be used for those states when the blocking task172172+ * can not use the regular condition based wait-loop. In that case we must173173+ * serialize against wakeups such that any possible in-flight TASK_RUNNING stores174174+ * will not collide with our state change.175175+ */176176+#define set_special_state(state_value) \177177+ do { \178178+ unsigned long flags; /* may shadow */ \179179+ raw_spin_lock_irqsave(&current->pi_lock, flags); \180180+ current->state = (state_value); \181181+ raw_spin_unlock_irqrestore(&current->pi_lock, flags); \182182+ } while (0)183183+182184#endif183185184186/* Task command name length: */
+1-1
include/linux/sched/signal.h
···280280{281281 spin_lock_irq(&current->sighand->siglock);282282 if (current->jobctl & JOBCTL_STOP_DEQUEUED)283283- __set_current_state(TASK_STOPPED);283283+ set_special_state(TASK_STOPPED);284284 spin_unlock_irq(&current->sighand->siglock);285285286286 schedule();
+1-1
include/linux/usb/composite.h
···5252#define USB_GADGET_DELAYED_STATUS 0x7fff /* Impossibly large value */53535454/* big enough to hold our biggest descriptor */5555-#define USB_COMP_EP0_BUFSIZ 10245555+#define USB_COMP_EP0_BUFSIZ 409656565757/* OS feature descriptor length <= 4kB */5858#define USB_COMP_EP0_OS_DESC_BUFSIZ 4096
+17
include/linux/wait_bit.h
···305305 __ret; \306306})307307308308+/**309309+ * clear_and_wake_up_bit - clear a bit and wake up anyone waiting on that bit310310+ *311311+ * @bit: the bit of the word being waited on312312+ * @word: the word being waited on, a kernel virtual address313313+ *314314+ * You can use this helper if bitflags are manipulated atomically rather than315315+ * non-atomically under a lock.316316+ */317317+static inline void clear_and_wake_up_bit(int bit, void *word)318318+{319319+ clear_bit_unlock(bit, word);320320+ /* See wake_up_bit() for which memory barrier you need to use. */321321+ smp_mb__after_atomic();322322+ wake_up_bit(word, bit);323323+}324324+308325#endif /* _LINUX_WAIT_BIT_H */
+1-1
include/media/i2c/tvp7002.h
···55 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com>66 *77 * This code is partially based upon the TVP5150 driver88- * written by Mauro Carvalho Chehab (mchehab@infradead.org),88+ * written by Mauro Carvalho Chehab <mchehab@kernel.org>,99 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com>1010 * and the TVP7002 driver in the TI LSP 2.10.00.141111 *
+2-2
include/media/videobuf-core.h
···11/*22 * generic helper functions for handling video4linux capture buffers33 *44- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>44+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>55 *66 * Highly based on video-buf written originally by:77 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org>88- * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org>88+ * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org>99 * (c) 2006 Ted Walther and John Sokol1010 *1111 * This program is free software; you can redistribute it and/or modify
+2-2
include/media/videobuf-dma-sg.h
···66 * into PAGE_SIZE chunks). They also assume the driver does not need77 * to touch the video data.88 *99- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>99+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>1010 *1111 * Highly based on video-buf written originally by:1212 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org>1313- * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org>1313+ * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org>1414 * (c) 2006 Ted Walther and John Sokol1515 *1616 * This program is free software; you can redistribute it and/or modify
+1-1
include/media/videobuf-vmalloc.h
···66 * into PAGE_SIZE chunks). They also assume the driver does not need77 * to touch the video data.88 *99- * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org>99+ * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org>1010 *1111 * This program is free software; you can redistribute it and/or modify1212 * it under the terms of the GNU General Public License as published by
+1
include/net/bonding.h
···198198 struct slave __rcu *primary_slave;199199 struct bond_up_slave __rcu *slave_arr; /* Array of usable slaves */200200 bool force_primary;201201+ u32 nest_level;201202 s32 slave_cnt; /* never change this value outside the attach/detach wrappers */202203 int (*recv_probe)(const struct sk_buff *, struct bonding *,203204 struct slave *);
+1-1
include/net/flow_dissector.h
···251251 * This structure is used to hold a digest of the full flow keys. This is a252252 * larger "hash" of a flow to allow definitively matching specific flows where253253 * the 32 bit skb->hash is not large enough. The size is limited to 16 bytes so254254- * that it can by used in CB of skb (see sch_choke for an example).254254+ * that it can be used in CB of skb (see sch_choke for an example).255255 */256256#define FLOW_KEYS_DIGEST_LEN 16257257struct flow_keys_digest {
+1-1
include/net/mac80211.h
···20802080 * virtual interface might not be given air time for the transmission of20812081 * the frame, as it is not synced with the AP/P2P GO yet, and thus the20822082 * deauthentication frame might not be transmitted.20832083- >20832083+ *20842084 * @IEEE80211_HW_DOESNT_SUPPORT_QOS_NDP: The driver (or firmware) doesn't20852085 * support QoS NDP for AP probing - that's most likely a driver bug.20862086 *
+1
include/net/tls.h
···148148 struct scatterlist *partially_sent_record;149149 u16 partially_sent_offset;150150 unsigned long flags;151151+ bool in_tcp_sendpages;151152152153 u16 pending_open_record_frags;153154 int (*push_pending_record)(struct sock *sk, int flags);
···3131 TP_ARGS(func),32323333 TP_STRUCT__entry(3434- __field(initcall_t, func)3434+ /*3535+ * Use field_struct to avoid is_signed_type()3636+ * comparison of a function pointer3737+ */3838+ __field_struct(initcall_t, func)3539 ),36403741 TP_fast_assign(···5248 TP_ARGS(func, ret),53495450 TP_STRUCT__entry(5555- __field(initcall_t, func)5656- __field(int, ret)5151+ /*5252+ * Use field_struct to avoid is_signed_type()5353+ * comparison of a function pointer5454+ */5555+ __field_struct(initcall_t, func)5656+ __field(int, ret)5757 ),58585959 TP_fast_assign(
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */22/*33 * This software is available to you under a choice of one of two44 * licenses. You may choose to be licensed under the terms of the GNU
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2008 Oracle. All rights reserved.44 *
+1-1
include/uapi/linux/tls.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved.44 *
+1-1
include/uapi/rdma/cxgb3-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2006 Chelsio, Inc. All rights reserved.44 *
+1-1
include/uapi/rdma/cxgb4-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2009-2010 Chelsio, Inc. All rights reserved.44 *
+1-1
include/uapi/rdma/hns-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2016 Hisilicon Limited.44 *
+1-1
include/uapi/rdma/ib_user_cm.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2005 Topspin Communications. All rights reserved.44 * Copyright (c) 2005 Intel Corporation. All rights reserved.
+1-1
include/uapi/rdma/ib_user_ioctl_verbs.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2017-2018, Mellanox Technologies inc. All rights reserved.44 *
+1-1
include/uapi/rdma/ib_user_mad.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2004 Topspin Communications. All rights reserved.44 * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+1-1
include/uapi/rdma/ib_user_sa.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2005 Intel Corporation. All rights reserved.44 *
+1-1
include/uapi/rdma/ib_user_verbs.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2005 Topspin Communications. All rights reserved.44 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1-1
include/uapi/rdma/mlx4-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved.44 * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
+1-1
include/uapi/rdma/mlx5-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved.44 *
+1-1
include/uapi/rdma/mthca-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2005 Topspin Communications. All rights reserved.44 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1-1
include/uapi/rdma/nes-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved.44 * Copyright (c) 2005 Topspin Communications. All rights reserved.
+1-1
include/uapi/rdma/qedr-abi.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/* QLogic qedr NIC Driver33 * Copyright (c) 2015-2016 QLogic Corporation44 *
+1-1
include/uapi/rdma/rdma_user_cm.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2005-2006 Intel Corporation. All rights reserved.44 *
+1-1
include/uapi/rdma/rdma_user_ioctl.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2016 Mellanox Technologies, LTD. All rights reserved.44 *
+1-1
include/uapi/rdma/rdma_user_rxe.h
···11-/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */11+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */22/*33 * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved.44 *
+8-1
init/main.c
···423423424424 /*425425 * Enable might_sleep() and smp_processor_id() checks.426426- * They cannot be enabled earlier because with CONFIG_PRREMPT=y426426+ * They cannot be enabled earlier because with CONFIG_PREEMPT=y427427 * kernel_thread() would trigger might_sleep() splats. With428428 * CONFIG_PREEMPT_VOLUNTARY=y the init task might have scheduled429429 * already, but it's stuck on the kthreadd_done completion.···10341034static void mark_readonly(void)10351035{10361036 if (rodata_enabled) {10371037+ /*10381038+ * load_module() results in W+X mappings, which are cleaned up10391039+ * with call_rcu_sched(). Let's make sure that queued work is10401040+ * flushed so that we don't hit false positives looking for10411041+ * insecure pages which are W+X.10421042+ */10431043+ rcu_barrier_sched();10371044 mark_rodata_ro();10381045 rodata_test();10391046 } else
+2-1
kernel/bpf/arraymap.c
···476476}477477478478/* decrement refcnt of all bpf_progs that are stored in this map */479479-void bpf_fd_array_map_clear(struct bpf_map *map)479479+static void bpf_fd_array_map_clear(struct bpf_map *map)480480{481481 struct bpf_array *array = container_of(map, struct bpf_array, map);482482 int i;···495495 .map_fd_get_ptr = prog_fd_array_get_ptr,496496 .map_fd_put_ptr = prog_fd_array_put_ptr,497497 .map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem,498498+ .map_release_uref = bpf_fd_array_map_clear,498499};499500500501static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
+73-26
kernel/bpf/sockmap.c
···4343#include <net/tcp.h>4444#include <linux/ptr_ring.h>4545#include <net/inet_common.h>4646+#include <linux/sched/signal.h>46474748#define SOCK_CREATE_FLAG_MASK \4849 (BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY)···326325 if (ret > 0) {327326 if (apply)328327 apply_bytes -= ret;328328+329329+ sg->offset += ret;330330+ sg->length -= ret;329331 size -= ret;330332 offset += ret;331333 if (uncharge)···336332 goto retry;337333 }338334339339- sg->length = size;340340- sg->offset = offset;341335 return ret;342336 }343337···393391 } while (i != md->sg_end);394392}395393396396-static void free_bytes_sg(struct sock *sk, int bytes, struct sk_msg_buff *md)394394+static void free_bytes_sg(struct sock *sk, int bytes,395395+ struct sk_msg_buff *md, bool charge)397396{398397 struct scatterlist *sg = md->sg_data;399398 int i = md->sg_start, free;···404401 if (bytes < free) {405402 sg[i].length -= bytes;406403 sg[i].offset += bytes;407407- sk_mem_uncharge(sk, bytes);404404+ if (charge)405405+ sk_mem_uncharge(sk, bytes);408406 break;409407 }410408411411- sk_mem_uncharge(sk, sg[i].length);409409+ if (charge)410410+ sk_mem_uncharge(sk, sg[i].length);412411 put_page(sg_page(&sg[i]));413412 bytes -= sg[i].length;414413 sg[i].length = 0;···421416 if (i == MAX_SKB_FRAGS)422417 i = 0;423418 }419419+ md->sg_start = i;424420}425421426422static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)···529523 i = md->sg_start;530524531525 do {532532- r->sg_data[i] = md->sg_data[i];533533-534526 size = (apply && apply_bytes < md->sg_data[i].length) ?535527 apply_bytes : md->sg_data[i].length;536528···539535 }540536541537 sk_mem_charge(sk, size);538538+ r->sg_data[i] = md->sg_data[i];542539 r->sg_data[i].length = size;543540 md->sg_data[i].length -= size;544541 md->sg_data[i].offset += size;···580575 struct sk_msg_buff *md,581576 int flags)582577{578578+ bool ingress = !!(md->flags & BPF_F_INGRESS);583579 struct smap_psock *psock;584580 struct scatterlist *sg;585585- int i, err, free 
= 0;586586- bool ingress = !!(md->flags & BPF_F_INGRESS);581581+ int err = 0;587582588583 sg = md->sg_data;589584···611606out_rcu:612607 rcu_read_unlock();613608out:614614- i = md->sg_start;615615- while (sg[i].length) {616616- free += sg[i].length;617617- put_page(sg_page(&sg[i]));618618- sg[i].length = 0;619619- i++;620620- if (i == MAX_SKB_FRAGS)621621- i = 0;622622- }623623- return free;609609+ free_bytes_sg(NULL, send, md, false);610610+ return err;624611}625612626613static inline void bpf_md_init(struct smap_psock *psock)···697700 err = bpf_tcp_sendmsg_do_redirect(redir, send, m, flags);698701 lock_sock(sk);699702703703+ if (unlikely(err < 0)) {704704+ free_start_sg(sk, m);705705+ psock->sg_size = 0;706706+ if (!cork)707707+ *copied -= send;708708+ } else {709709+ psock->sg_size -= send;710710+ }711711+700712 if (cork) {701713 free_start_sg(sk, m);714714+ psock->sg_size = 0;702715 kfree(m);703716 m = NULL;717717+ err = 0;704718 }705705- if (unlikely(err))706706- *copied -= err;707707- else708708- psock->sg_size -= send;709719 break;710720 case __SK_DROP:711721 default:712712- free_bytes_sg(sk, send, m);722722+ free_bytes_sg(sk, send, m, true);713723 apply_bytes_dec(psock, send);714724 *copied -= send;715725 psock->sg_size -= send;···734730735731out_err:736732 return err;733733+}734734+735735+static int bpf_wait_data(struct sock *sk,736736+ struct smap_psock *psk, int flags,737737+ long timeo, int *err)738738+{739739+ int rc;740740+741741+ DEFINE_WAIT_FUNC(wait, woken_wake_function);742742+743743+ add_wait_queue(sk_sleep(sk), &wait);744744+ sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);745745+ rc = sk_wait_event(sk, &timeo,746746+ !list_empty(&psk->ingress) ||747747+ !skb_queue_empty(&sk->sk_receive_queue),748748+ &wait);749749+ sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);750750+ remove_wait_queue(sk_sleep(sk), &wait);751751+752752+ return rc;737753}738754739755static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,···779755 return tcp_recvmsg(sk, msg, 
len, nonblock, flags, addr_len);780756781757 lock_sock(sk);758758+bytes_ready:782759 while (copied != len) {783760 struct scatterlist *sg;784761 struct sk_msg_buff *md;···832807 consume_skb(md->skb);833808 kfree(md);834809 }810810+ }811811+812812+ if (!copied) {813813+ long timeo;814814+ int data;815815+ int err = 0;816816+817817+ timeo = sock_rcvtimeo(sk, nonblock);818818+ data = bpf_wait_data(sk, psock, flags, timeo, &err);819819+820820+ if (data) {821821+ if (!skb_queue_empty(&sk->sk_receive_queue)) {822822+ release_sock(sk);823823+ smap_release_sock(psock, sk);824824+ copied = tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);825825+ return copied;826826+ }827827+ goto bytes_ready;828828+ }829829+830830+ if (err)831831+ copied = err;835832 }836833837834 release_sock(sk);···18781831 return err;18791832}1880183318811881-static void sock_map_release(struct bpf_map *map, struct file *map_file)18341834+static void sock_map_release(struct bpf_map *map)18821835{18831836 struct bpf_stab *stab = container_of(map, struct bpf_stab, map);18841837 struct bpf_prog *orig;···19021855 .map_get_next_key = sock_map_get_next_key,19031856 .map_update_elem = sock_map_update_elem,19041857 .map_delete_elem = sock_map_delete_elem,19051905- .map_release = sock_map_release,18581858+ .map_release_uref = sock_map_release,19061859};1907186019081861BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
+16-7
kernel/bpf/syscall.c
···2626#include <linux/cred.h>2727#include <linux/timekeeping.h>2828#include <linux/ctype.h>2929+#include <linux/nospec.h>29303031#define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PROG_ARRAY || \3132 (map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \···103102static struct bpf_map *find_and_alloc_map(union bpf_attr *attr)104103{105104 const struct bpf_map_ops *ops;105105+ u32 type = attr->map_type;106106 struct bpf_map *map;107107 int err;108108109109- if (attr->map_type >= ARRAY_SIZE(bpf_map_types))109109+ if (type >= ARRAY_SIZE(bpf_map_types))110110 return ERR_PTR(-EINVAL);111111- ops = bpf_map_types[attr->map_type];111111+ type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types));112112+ ops = bpf_map_types[type];112113 if (!ops)113114 return ERR_PTR(-EINVAL);114115···125122 if (IS_ERR(map))126123 return map;127124 map->ops = ops;128128- map->map_type = attr->map_type;125125+ map->map_type = type;129126 return map;130127}131128···260257static void bpf_map_put_uref(struct bpf_map *map)261258{262259 if (atomic_dec_and_test(&map->usercnt)) {263263- if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY)264264- bpf_fd_array_map_clear(map);260260+ if (map->ops->map_release_uref)261261+ map->ops->map_release_uref(map);265262 }266263}267264···874871875872static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog)876873{877877- if (type >= ARRAY_SIZE(bpf_prog_types) || !bpf_prog_types[type])874874+ const struct bpf_prog_ops *ops;875875+876876+ if (type >= ARRAY_SIZE(bpf_prog_types))877877+ return -EINVAL;878878+ type = array_index_nospec(type, ARRAY_SIZE(bpf_prog_types));879879+ ops = bpf_prog_types[type];880880+ if (!ops)878881 return -EINVAL;879882880883 if (!bpf_prog_is_dev_bound(prog->aux))881881- prog->aux->ops = bpf_prog_types[type];884884+ prog->aux->ops = ops;882885 else883886 prog->aux->ops = &bpf_offload_prog_ops;884887 prog->type = type;
···1414#include <linux/slab.h>1515#include <linux/circ_buf.h>1616#include <linux/poll.h>1717+#include <linux/nospec.h>17181819#include "internal.h"1920···868867 return NULL;869868870869 /* AUX space */871871- if (pgoff >= rb->aux_pgoff)872872- return virt_to_page(rb->aux_pages[pgoff - rb->aux_pgoff]);870870+ if (pgoff >= rb->aux_pgoff) {871871+ int aux_pgoff = array_index_nospec(pgoff - rb->aux_pgoff, rb->aux_nr_pages);872872+ return virt_to_page(rb->aux_pages[aux_pgoff]);873873+ }873874 }874875875876 return __perf_mmap_to_page(rb, pgoff);
+3-4
kernel/events/uprobes.c
···491491 if (!uprobe)492492 return NULL;493493494494- uprobe->inode = igrab(inode);494494+ uprobe->inode = inode;495495 uprobe->offset = offset;496496 init_rwsem(&uprobe->register_rwsem);497497 init_rwsem(&uprobe->consumer_rwsem);···502502 if (cur_uprobe) {503503 kfree(uprobe);504504 uprobe = cur_uprobe;505505- iput(inode);506505 }507506508507 return uprobe;···700701 rb_erase(&uprobe->rb_node, &uprobes_tree);701702 spin_unlock(&uprobes_treelock);702703 RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */703703- iput(uprobe->inode);704704 put_uprobe(uprobe);705705}706706···871873 * tuple). Creation refcount stops uprobe_unregister from freeing the872874 * @uprobe even before the register operation is complete. Creation873875 * refcount is released when the last @uc for the @uprobe874874- * unregisters.876876+ * unregisters. Caller of uprobe_register() is required to keep @inode877877+ * (and the containing mount) referenced.875878 *876879 * Return errno if it cannot successfully install probes877880 * else return 0 (success)
+23-27
kernel/kthread.c
···5555 KTHREAD_IS_PER_CPU = 0,5656 KTHREAD_SHOULD_STOP,5757 KTHREAD_SHOULD_PARK,5858- KTHREAD_IS_PARKED,5958};60596160static inline void set_kthread_struct(void *kthread)···176177177178static void __kthread_parkme(struct kthread *self)178179{179179- __set_current_state(TASK_PARKED);180180- while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) {181181- if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags))182182- complete(&self->parked);180180+ for (;;) {181181+ set_current_state(TASK_PARKED);182182+ if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags))183183+ break;183184 schedule();184184- __set_current_state(TASK_PARKED);185185 }186186- clear_bit(KTHREAD_IS_PARKED, &self->flags);187186 __set_current_state(TASK_RUNNING);188187}189188···190193 __kthread_parkme(to_kthread(current));191194}192195EXPORT_SYMBOL_GPL(kthread_parkme);196196+197197+void kthread_park_complete(struct task_struct *k)198198+{199199+ complete(&to_kthread(k)->parked);200200+}193201194202static int kthread(void *_create)195203{···452450{453451 struct kthread *kthread = to_kthread(k);454452455455- clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);456453 /*457457- * We clear the IS_PARKED bit here as we don't wait458458- * until the task has left the park code. So if we'd459459- park before that happens we'd see the IS_PARKED bit460460- which might be about to be cleared.454454+ * Newly created kthread was parked when the CPU was offline.455455+ * The binding was lost and we need to set it again.461456 */462462- if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) {463463- /*464464- * Newly created kthread was parked when the CPU was offline.465465- * The binding was lost and we need to set it again.466466- */467467- if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))468468- __kthread_bind(k, kthread->cpu, TASK_PARKED);469469- wake_up_state(k, TASK_PARKED);470470- }457457+ if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags))458458+ __kthread_bind(k, kthread->cpu, TASK_PARKED);459459+460460+ clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);461461+ wake_up_state(k, TASK_PARKED);471462}472463EXPORT_SYMBOL_GPL(kthread_unpark);473464···483488 if (WARN_ON(k->flags & PF_EXITING))484489 return -ENOSYS;485490486486- if (!test_bit(KTHREAD_IS_PARKED, &kthread->flags)) {487487- set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);488488- if (k != current) {489489- wake_up_process(k);490490- wait_for_completion(&kthread->parked);491491- }491491+ if (WARN_ON_ONCE(test_bit(KTHREAD_SHOULD_PARK, &kthread->flags)))492492+ return -EBUSY;493493+494494+ set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);495495+ if (k != current) {496496+ wake_up_process(k);497497+ wait_for_completion(&kthread->parked);492498 }493499494500 return 0;
+5
kernel/module.c
···35173517 * walking this with preempt disabled. In all the failure paths, we35183518 * call synchronize_sched(), but we don't want to slow down the success35193519 * path, so use actual RCU here.35203520+ * Note that module_alloc() on most architectures creates W+X page35213521+ * mappings which won't be cleaned up until do_free_init() runs. Any35223522+ * code such as mark_rodata_ro() which depends on those mappings to35233523+ * be cleaned up needs to sync with the queued work - ie35243524+ * rcu_barrier_sched()35203525 */35213526 call_rcu_sched(&freeinit->rcu, do_free_init);35223527 mutex_unlock(&module_mutex);
+5-2
kernel/sched/autogroup.c
···22/*33 * Auto-group scheduling implementation:44 */55+#include <linux/nospec.h>56#include "sched.h"6778unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1;···210209 static unsigned long next = INITIAL_JIFFIES;211210 struct autogroup *ag;212211 unsigned long shares;213213- int err;212212+ int err, idx;214213215214 if (nice < MIN_NICE || nice > MAX_NICE)216215 return -EINVAL;···228227229228 next = HZ / 10 + jiffies;230229 ag = autogroup_task_get(p);231231- shares = scale_load(sched_prio_to_weight[nice + 20]);230230+231231+ idx = array_index_nospec(nice + 20, 40);232232+ shares = scale_load(sched_prio_to_weight[idx]);232233233234 down_write(&ag->lock);234235 err = sched_group_set_shares(ag->tg, shares);
+28-28
kernel/sched/core.c
···77 */88#include "sched.h"991010+#include <linux/kthread.h>1111+#include <linux/nospec.h>1212+1013#include <asm/switch_to.h>1114#include <asm/tlb.h>1215···27212718 membarrier_mm_sync_core_before_usermode(mm);27222719 mmdrop(mm);27232720 }27242724- if (unlikely(prev_state == TASK_DEAD)) {27252725- if (prev->sched_class->task_dead)27262726- prev->sched_class->task_dead(prev);27212721+ if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) {27222722+ switch (prev_state) {27232723+ case TASK_DEAD:27242724+ if (prev->sched_class->task_dead)27252725+ prev->sched_class->task_dead(prev);2727272627282728- /*27292729- * Remove function-return probe instances associated with this27302730- * task and put them back on the free list.27312731- */27322732- kprobe_flush_task(prev);27272727+ /*27282728+ * Remove function-return probe instances associated with this27292729+ * task and put them back on the free list.27302730+ */27312731+ kprobe_flush_task(prev);2733273227342734- /* Task is done with its stack. */27352735- put_task_stack(prev);27332733+ /* Task is done with its stack. */27342734+ put_task_stack(prev);2736273527372737- put_task_struct(prev);27362736+ put_task_struct(prev);27372737+ break;27382738+27392739+ case TASK_PARKED:27402740+ kthread_park_complete(prev);27412741+ break;27422742+ }27382743 }2739274427402745 tick_nohz_task_switch();···3509349835103499void __noreturn do_task_dead(void)35113500{35123512- /*35133513- * The setting of TASK_RUNNING by try_to_wake_up() may be delayed35143514- * when the following two conditions become true.35153515- * - There is race condition of mmap_sem (It is acquired by35163516- * exit_mm()), and35173517- * - SMI occurs before setting TASK_RUNINNG.35183518- * (or hypervisor of virtual machine switches to other guest)35193519- * As a result, we may become TASK_RUNNING after becoming TASK_DEAD35203520- *35213521- * To avoid it, we have to wait for releasing tsk->pi_lock which35223522- * is held by try_to_wake_up()35233523- */35243524- raw_spin_lock_irq(&current->pi_lock);35253525- raw_spin_unlock_irq(&current->pi_lock);35263526-35273501 /* Causes final put_task_struct in finish_task_switch(): */35283528- __set_current_state(TASK_DEAD);35023502+ set_special_state(TASK_DEAD);3529350335303504 /* Tell freezer to ignore us: */35313505 current->flags |= PF_NOFREEZE;···69246928 struct cftype *cft, s64 nice)69256929{69266930 unsigned long weight;69316931+ int idx;6927693269286933 if (nice < MIN_NICE || nice > MAX_NICE)69296934 return -ERANGE;6930693569316931- weight = sched_prio_to_weight[NICE_TO_PRIO(nice) - MAX_RT_PRIO];69366936+ idx = NICE_TO_PRIO(nice) - MAX_RT_PRIO;69376937+ idx = array_index_nospec(idx, 40);69386938+ weight = sched_prio_to_weight[idx];69396939+69326940 return sched_group_set_shares(css_tg(css), scale_load(weight));69336941}69346942#endif
+2-14
kernel/sched/cpufreq_schedutil.c
···305305 * Do not reduce the frequency if the CPU has not been idle306306 * recently, as the reduction is likely to be premature then.307307 */308308- if (busy && next_f < sg_policy->next_freq) {308308+ if (busy && next_f < sg_policy->next_freq &&309309+ sg_policy->next_freq != UINT_MAX) {309310 next_f = sg_policy->next_freq;310311311312 /* Reset cached freq as next_freq has changed */···397396398397 sg_policy = container_of(irq_work, struct sugov_policy, irq_work);399398400400- /*401401- * For RT tasks, the schedutil governor shoots the frequency to maximum.402402- * Special care must be taken to ensure that this kthread doesn't result403403- * in the same behavior.404404- *405405- * This is (mostly) guaranteed by the work_in_progress flag. The flag is406406- * updated only at the end of the sugov_work() function and before that407407- * the schedutil governor rejects all other frequency scaling requests.408408- *409409- * There is a very rare case though, where the RT thread yields right410410- * after the work_in_progress flag is cleared. The effects of that are411411- * neglected for now.412412- */413399 kthread_queue_work(&sg_policy->worker, &sg_policy->work);414400}415401
+2-57
kernel/sched/fair.c
···18541854static void numa_migrate_preferred(struct task_struct *p)18551855{18561856 unsigned long interval = HZ;18571857- unsigned long numa_migrate_retry;1858185718591858 /* This task has no NUMA fault statistics yet */18601859 if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults))···1861186218621863 /* Periodically retry migrating the task to the preferred node */18631864 interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);18641864- numa_migrate_retry = jiffies + interval;18651865-18661866- /*18671867- * Check that the new retry threshold is after the current one. If18681868- * the retry is in the future, it implies that wake_affine has18691869- * temporarily asked NUMA balancing to backoff from placement.18701870- */18711871- if (numa_migrate_retry > p->numa_migrate_retry)18721872- return;18731873-18741874- /* Safe to try placing the task on the preferred node */18751875- p->numa_migrate_retry = numa_migrate_retry;18651865+ p->numa_migrate_retry = jiffies + interval;1876186618771867 /* Success if task is already running on preferred CPU */18781868 if (task_node(p) == p->numa_preferred_nid)···59105922 return this_eff_load < prev_eff_load ? 
this_cpu : nr_cpumask_bits;59115923}5912592459135913-#ifdef CONFIG_NUMA_BALANCING59145914-static void59155915-update_wa_numa_placement(struct task_struct *p, int prev_cpu, int target)59165916-{59175917- unsigned long interval;59185918-59195919- if (!static_branch_likely(&sched_numa_balancing))59205920- return;59215921-59225922- /* If balancing has no preference then continue gathering data */59235923- if (p->numa_preferred_nid == -1)59245924- return;59255925-59265926- /*59275927- * If the wakeup is not affecting locality then it is neutral from59285928- * the perspective of NUMA balacing so continue gathering data.59295929- */59305930- if (cpu_to_node(prev_cpu) == cpu_to_node(target))59315931- return;59325932-59335933- /*59345934- * Temporarily prevent NUMA balancing trying to place waker/wakee after59355935- * wakee has been moved by wake_affine. This will potentially allow59365936- * related tasks to converge and update their data placement. The59375937- * 4 * numa_scan_period is to allow the two-pass filter to migrate59385938- * hot data to the wakers node.59395939- */59405940- interval = max(sysctl_numa_balancing_scan_delay,59415941- p->numa_scan_period << 2);59425942- p->numa_migrate_retry = jiffies + msecs_to_jiffies(interval);59435943-59445944- interval = max(sysctl_numa_balancing_scan_delay,59455945- current->numa_scan_period << 2);59465946- current->numa_migrate_retry = jiffies + msecs_to_jiffies(interval);59475947-}59485948-#else59495949-static void59505950-update_wa_numa_placement(struct task_struct *p, int prev_cpu, int target)59515951-{59525952-}59535953-#endif59545954-59555925static int wake_affine(struct sched_domain *sd, struct task_struct *p,59565926 int this_cpu, int prev_cpu, int sync)59575927{···59255979 if (target == nr_cpumask_bits)59265980 return prev_cpu;5927598159285928- update_wa_numa_placement(p, prev_cpu, target);59295982 schedstat_inc(sd->ttwu_move_affine);59305983 schedstat_inc(p->se.statistics.nr_wakeups_affine);59315984 return 
target;···97929847 if (curr_cost > this_rq->max_idle_balance_cost)97939848 this_rq->max_idle_balance_cost = curr_cost;9794984998509850+out:97959851 /*97969852 * While browsing the domains, we released the rq lock, a task could97979853 * have been enqueued in the meantime. Since we're not going idle,···98019855 if (this_rq->cfs.h_nr_running && !pulled_task)98029856 pulled_task = 1;9803985798049804-out:98059858 /* Move the next balance forward */98069859 if (time_after(this_rq->next_balance, next_balance))98079860 this_rq->next_balance = next_balance;
+15-2
kernel/signal.c
···19611961 return;19621962 }1963196319641964+ set_special_state(TASK_TRACED);19651965+19641966 /*19651967 * We're committing to trapping. TRACED should be visible before19661968 * TRAPPING is cleared; otherwise, the tracer might fail do_wait().19671969 * Also, transition to TRACED and updates to ->jobctl should be19681970 * atomic with respect to siglock and should be done after the arch19691971 * hook as siglock is released and regrabbed across it.19721972+ *19731973+ * TRACER TRACEE19741974+ *19751975+ * ptrace_attach()19761976+ * [L] wait_on_bit(JOBCTL_TRAPPING) [S] set_special_state(TRACED)19771977+ * do_wait()19781978+ * set_current_state() smp_wmb();19791979+ * ptrace_do_wait()19801980+ * wait_task_stopped()19811981+ * task_stopped_code()19821982+ * [L] task_is_traced() [S] task_clear_jobctl_trapping();19701983 */19711971- set_current_state(TASK_TRACED);19841984+ smp_wmb();1972198519731986 current->last_siginfo = info;19741987 current->exit_code = exit_code;···21892176 if (task_participate_group_stop(current))21902177 notify = CLD_STOPPED;2191217821922192- __set_current_state(TASK_STOPPED);21792179+ set_special_state(TASK_STOPPED);21932180 spin_unlock_irq(&current->sighand->siglock);2194218121952182 /*
···119119static int watchdog_running;120120static atomic_t watchdog_reset_pending;121121122122+static void inline clocksource_watchdog_lock(unsigned long *flags)123123+{124124+ spin_lock_irqsave(&watchdog_lock, *flags);125125+}126126+127127+static void inline clocksource_watchdog_unlock(unsigned long *flags)128128+{129129+ spin_unlock_irqrestore(&watchdog_lock, *flags);130130+}131131+122132static int clocksource_watchdog_kthread(void *data);123133static void __clocksource_change_rating(struct clocksource *cs, int rating);124134···152142 cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG);153143 cs->flags |= CLOCK_SOURCE_UNSTABLE;154144145145+ /*146146+ * If the clocksource is registered clocksource_watchdog_kthread() will147147+ * re-rate and re-select.148148+ */149149+ if (list_empty(&cs->list)) {150150+ cs->rating = 0;151151+ return;152152+ }153153+155154 if (cs->mark_unstable)156155 cs->mark_unstable(cs);157156157157+ /* kick clocksource_watchdog_kthread() */158158 if (finished_booting)159159 schedule_work(&watchdog_work);160160}···173153 * clocksource_mark_unstable - mark clocksource unstable via watchdog174154 * @cs: clocksource to be marked unstable175155 *176176- * This function is called instead of clocksource_change_rating from177177- * cpu hotplug code to avoid a deadlock between the clocksource mutex178178- * and the cpu hotplug mutex. 
It defers the update of the clocksource179179- * to the watchdog thread.156156+ * This function is called by the x86 TSC code to mark clocksources as unstable;157157+ * it defers demotion and re-selection to a kthread.180158 */181159void clocksource_mark_unstable(struct clocksource *cs)182160{···182164183165 spin_lock_irqsave(&watchdog_lock, flags);184166 if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) {185185- if (list_empty(&cs->wd_list))167167+ if (!list_empty(&cs->list) && list_empty(&cs->wd_list))186168 list_add(&cs->wd_list, &watchdog_list);187169 __clocksource_unstable(cs);188170 }···337319338320static void clocksource_enqueue_watchdog(struct clocksource *cs)339321{340340- unsigned long flags;322322+ INIT_LIST_HEAD(&cs->wd_list);341323342342- spin_lock_irqsave(&watchdog_lock, flags);343324 if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {344325 /* cs is a clocksource to be watched. */345326 list_add(&cs->wd_list, &watchdog_list);···348331 if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS)349332 cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES;350333 }351351- spin_unlock_irqrestore(&watchdog_lock, flags);352334}353335354336static void clocksource_select_watchdog(bool fallback)···389373390374static void clocksource_dequeue_watchdog(struct clocksource *cs)391375{392392- unsigned long flags;393393-394394- spin_lock_irqsave(&watchdog_lock, flags);395376 if (cs != watchdog) {396377 if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) {397378 /* cs is a watched clocksource. 
*/···397384 clocksource_stop_watchdog();398385 }399386 }400400- spin_unlock_irqrestore(&watchdog_lock, flags);401387}402388403389static int __clocksource_watchdog_kthread(void)404390{405391 struct clocksource *cs, *tmp;406392 unsigned long flags;407407- LIST_HEAD(unstable);408393 int select = 0;409394410395 spin_lock_irqsave(&watchdog_lock, flags);411396 list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) {412397 if (cs->flags & CLOCK_SOURCE_UNSTABLE) {413398 list_del_init(&cs->wd_list);414414- list_add(&cs->wd_list, &unstable);399399+ __clocksource_change_rating(cs, 0);415400 select = 1;416401 }417402 if (cs->flags & CLOCK_SOURCE_RESELECT) {···421410 clocksource_stop_watchdog();422411 spin_unlock_irqrestore(&watchdog_lock, flags);423412424424- /* Needs to be done outside of watchdog lock */425425- list_for_each_entry_safe(cs, tmp, &unstable, wd_list) {426426- list_del_init(&cs->wd_list);427427- __clocksource_change_rating(cs, 0);428428- }429413 return select;430414}431415···452446static inline int __clocksource_watchdog_kthread(void) { return 0; }453447static bool clocksource_is_watchdog(struct clocksource *cs) { return false; }454448void clocksource_mark_unstable(struct clocksource *cs) { }449449+450450+static void inline clocksource_watchdog_lock(unsigned long *flags) { }451451+static void inline clocksource_watchdog_unlock(unsigned long *flags) { }455452456453#endif /* CONFIG_CLOCKSOURCE_WATCHDOG */457454···788779 */789780int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq)790781{782782+ unsigned long flags;791783792784 /* Initialize mult/shift and max_idle_ns */793785 __clocksource_update_freq_scale(cs, scale, freq);794786795787 /* Add clocksource to the clocksource list */796788 mutex_lock(&clocksource_mutex);789789+790790+ clocksource_watchdog_lock(&flags);797791 clocksource_enqueue(cs);798792 clocksource_enqueue_watchdog(cs);793793+ clocksource_watchdog_unlock(&flags);794794+799795 clocksource_select();800796 
clocksource_select_watchdog(false);801797 mutex_unlock(&clocksource_mutex);···822808 */823809void clocksource_change_rating(struct clocksource *cs, int rating)824810{811811+ unsigned long flags;812812+825813 mutex_lock(&clocksource_mutex);814814+ clocksource_watchdog_lock(&flags);826815 __clocksource_change_rating(cs, rating);816816+ clocksource_watchdog_unlock(&flags);817817+827818 clocksource_select();828819 clocksource_select_watchdog(false);829820 mutex_unlock(&clocksource_mutex);···840821 */841822static int clocksource_unbind(struct clocksource *cs)842823{824824+ unsigned long flags;825825+843826 if (clocksource_is_watchdog(cs)) {844827 /* Select and try to install a replacement watchdog. */845828 clocksource_select_watchdog(true);···855834 if (curr_clocksource == cs)856835 return -EBUSY;857836 }837837+838838+ clocksource_watchdog_lock(&flags);858839 clocksource_dequeue_watchdog(cs);859840 list_del_init(&cs->list);841841+ clocksource_watchdog_unlock(&flags);842842+860843 return 0;861844}862845
···5555 struct list_head list;5656 struct trace_uprobe_filter filter;5757 struct uprobe_consumer consumer;5858+ struct path path;5859 struct inode *inode;5960 char *filename;6061 unsigned long offset;···290289 for (i = 0; i < tu->tp.nr_args; i++)291290 traceprobe_free_probe_arg(&tu->tp.args[i]);292291293293- iput(tu->inode);292292+ path_put(&tu->path);294293 kfree(tu->tp.call.class->system);295294 kfree(tu->tp.call.name);296295 kfree(tu->filename);···364363static int create_trace_uprobe(int argc, char **argv)365364{366365 struct trace_uprobe *tu;367367- struct inode *inode;368366 char *arg, *event, *group, *filename;369367 char buf[MAX_EVENT_NAME_LEN];370368 struct path path;···371371 bool is_delete, is_return;372372 int i, ret;373373374374- inode = NULL;375374 ret = 0;376375 is_delete = false;377376 is_return = false;···436437 }437438 /* Find the last occurrence, in case the path contains ':' too. */438439 arg = strrchr(argv[1], ':');439439- if (!arg) {440440- ret = -EINVAL;441441- goto fail_address_parse;442442- }440440+ if (!arg)441441+ return -EINVAL;443442444443 *arg++ = '\0';445444 filename = argv[1];446445 ret = kern_path(filename, LOOKUP_FOLLOW, &path);447446 if (ret)448448- goto fail_address_parse;447447+ return ret;449448450450- inode = igrab(d_real_inode(path.dentry));451451- path_put(&path);452452-453453- if (!inode || !S_ISREG(inode->i_mode)) {449449+ if (!d_is_reg(path.dentry)) {454450 ret = -EINVAL;455451 goto fail_address_parse;456452 }···484490 goto fail_address_parse;485491 }486492 tu->offset = offset;487487- tu->inode = inode;493493+ tu->path = path;488494 tu->filename = kstrdup(filename, GFP_KERNEL);489495490496 if (!tu->filename) {···552558 return ret;553559554560fail_address_parse:555555- iput(inode);561561+ path_put(&path);556562557563 pr_info("Failed to parse address or file.\n");558564···916922 goto err_flags;917923918924 tu->consumer.filter = filter;925925+ tu->inode = d_real_inode(tu->path.dentry);919926 ret = uprobe_register(tu->inode, 
tu->offset, &tu->consumer);920927 if (ret)921928 goto err_buffer;···962967 WARN_ON(!uprobe_filter_is_empty(&tu->filter));963968964969 uprobe_unregister(tu->inode, tu->offset, &tu->consumer);970970+ tu->inode = NULL;965971 tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE;966972967973 uprobe_buffer_disable();···13331337create_local_trace_uprobe(char *name, unsigned long offs, bool is_return)13341338{13351339 struct trace_uprobe *tu;13361336- struct inode *inode;13371340 struct path path;13381341 int ret;13391342···13401345 if (ret)13411346 return ERR_PTR(ret);1342134713431343- inode = igrab(d_inode(path.dentry));13441344- path_put(&path);13451345-13461346- if (!inode || !S_ISREG(inode->i_mode)) {13471347- iput(inode);13481348+ if (!d_is_reg(path.dentry)) {13491349+ path_put(&path);13481350 return ERR_PTR(-EINVAL);13491351 }13501352···13561364 if (IS_ERR(tu)) {13571365 pr_info("Failed to allocate trace_uprobe.(%d)\n",13581366 (int)PTR_ERR(tu));13671367+ path_put(&path);13591368 return ERR_CAST(tu);13601369 }1361137013621371 tu->offset = offs;13631363- tu->inode = inode;13721372+ tu->path = path;13641373 tu->filename = kstrdup(name, GFP_KERNEL);13651374 init_trace_event_call(tu, &tu->tp.call);13661375
+2-2
kernel/tracepoint.c
···207207 lockdep_is_held(&tracepoints_mutex));208208 old = func_add(&tp_funcs, func, prio);209209 if (IS_ERR(old)) {210210- WARN_ON_ONCE(1);210210+ WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);211211 return PTR_ERR(old);212212 }213213···239239 lockdep_is_held(&tracepoints_mutex));240240 old = func_remove(&tp_funcs, func);241241 if (IS_ERR(old)) {242242- WARN_ON_ONCE(1);242242+ WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM);243243 return PTR_ERR(old);244244 }245245
+9-14
lib/errseq.c
···111111 * errseq_sample() - Grab current errseq_t value.112112 * @eseq: Pointer to errseq_t to be sampled.113113 *114114- * This function allows callers to sample an errseq_t value, marking it as115115- * "seen" if required.114114+ * This function allows callers to initialise their errseq_t variable.115115+ * If the error has been "seen", new callers will not see an old error.116116+ * If there is an unseen error in @eseq, the caller of this function will117117+ * see it the next time it checks for an error.116118 *119119+ * Context: Any context.117120 * Return: The current errseq value.118121 */119122errseq_t errseq_sample(errseq_t *eseq)120123{121124 errseq_t old = READ_ONCE(*eseq);122122- errseq_t new = old;123125124124- /*125125- * For the common case of no errors ever having been set, we can skip126126- * marking the SEEN bit. Once an error has been set, the value will127127- * never go back to zero.128128- */129129- if (old != 0) {130130- new |= ERRSEQ_SEEN;131131- if (old != new)132132- cmpxchg(eseq, old, new);133133- }134134- return new;126126+ /* If nobody has seen this error yet, then we can be the first. */127127+ if (!(old & ERRSEQ_SEEN))128128+ old = 0;129129+ return old;135130}136131EXPORT_SYMBOL(errseq_sample);137132
+6-1
lib/find_bit_benchmark.c
···132132 test_find_next_bit(bitmap, BITMAP_LEN);133133 test_find_next_zero_bit(bitmap, BITMAP_LEN);134134 test_find_last_bit(bitmap, BITMAP_LEN);135135- test_find_first_bit(bitmap, BITMAP_LEN);135135+136136+ /*137137+ * test_find_first_bit() may take some time, so138138+ * traverse only part of bitmap to avoid soft lockup.139139+ */140140+ test_find_first_bit(bitmap, BITMAP_LEN / 10);136141 test_find_next_and_bit(bitmap, bitmap2, BITMAP_LEN);137142138143 pr_err("\nStart testing find_bit() with sparse bitmap\n");
···115115 bdi, &bdi_debug_stats_fops);116116 if (!bdi->debug_stats) {117117 debugfs_remove(bdi->debug_dir);118118+ bdi->debug_dir = NULL;118119 return -ENOMEM;119120 }120121···384383 * the barrier provided by test_and_clear_bit() above.385384 */386385 smp_wmb();387387- clear_bit(WB_shutting_down, &wb->state);386386+ clear_and_wake_up_bit(WB_shutting_down, &wb->state);388387}389388390389static void wb_exit(struct bdi_writeback *wb)
+1-3
mm/migrate.c
···528528 int i;529529 int index = page_index(page);530530531531- for (i = 0; i < HPAGE_PMD_NR; i++) {531531+ for (i = 1; i < HPAGE_PMD_NR; i++) {532532 pslot = radix_tree_lookup_slot(&mapping->i_pages,533533 index + i);534534 radix_tree_replace_slot(&mapping->i_pages, pslot,535535 newpage + i);536536 }537537- } else {538538- radix_tree_replace_slot(&mapping->i_pages, pslot, newpage);539537 }540538541539 /*
+58-18
mm/mmap.c
···13241324 return 0;13251325}1326132613271327+static inline u64 file_mmap_size_max(struct file *file, struct inode *inode)13281328+{13291329+ if (S_ISREG(inode->i_mode))13301330+ return inode->i_sb->s_maxbytes;13311331+13321332+ if (S_ISBLK(inode->i_mode))13331333+ return MAX_LFS_FILESIZE;13341334+13351335+ /* Special "we do even unsigned file positions" case */13361336+ if (file->f_mode & FMODE_UNSIGNED_OFFSET)13371337+ return 0;13381338+13391339+ /* Yes, random drivers might want more. But I'm tired of buggy drivers */13401340+ return ULONG_MAX;13411341+}13421342+13431343+static inline bool file_mmap_ok(struct file *file, struct inode *inode,13441344+ unsigned long pgoff, unsigned long len)13451345+{13461346+ u64 maxsize = file_mmap_size_max(file, inode);13471347+13481348+ if (maxsize && len > maxsize)13491349+ return false;13501350+ maxsize -= len;13511351+ if (pgoff > maxsize >> PAGE_SHIFT)13521352+ return false;13531353+ return true;13541354+}13551355+13271356/*13281357 * The caller must hold down_write(&current->mm->mmap_sem).13291358 */···14371408 if (file) {14381409 struct inode *inode = file_inode(file);14391410 unsigned long flags_mask;14111411+14121412+ if (!file_mmap_ok(file, inode, pgoff, len))14131413+ return -EOVERFLOW;1440141414411415 flags_mask = LEGACY_MAP_MASK | file->f_op->mmap_supported_flags;14421416···30563024 /* mm's last user has gone, and its about to be pulled down */30573025 mmu_notifier_release(mm);3058302630273027+ if (unlikely(mm_is_oom_victim(mm))) {30283028+ /*30293029+ * Manually reap the mm to free as much memory as possible.30303030+ * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard30313031+ * this mm from further consideration. Taking mm->mmap_sem for30323032+ * write after setting MMF_OOM_SKIP will guarantee that the oom30333033+ * reaper will not run on this mm again after mmap_sem is30343034+ * dropped.30353035+ *30363036+ * Nothing can be holding mm->mmap_sem here and the above call30373037+ * to mmu_notifier_release(mm) ensures mmu notifier callbacks in30383038+ * __oom_reap_task_mm() will not block.30393039+ *30403040+ * This needs to be done before calling munlock_vma_pages_all(),30413041+ * which clears VM_LOCKED, otherwise the oom reaper cannot30423042+ * reliably test it.30433043+ */30443044+ mutex_lock(&oom_lock);30453045+ __oom_reap_task_mm(mm);30463046+ mutex_unlock(&oom_lock);30473047+30483048+ set_bit(MMF_OOM_SKIP, &mm->flags);30493049+ down_write(&mm->mmap_sem);30503050+ up_write(&mm->mmap_sem);30513051+ }30523052+30593053 if (mm->locked_vm) {30603054 vma = mm->mmap;30613055 while (vma) {···31033045 /* update_hiwater_rss(mm) here? but nobody should be looking */31043046 /* Use -1 here to ensure all VMAs in the mm are unmapped */31053047 unmap_vmas(&tlb, vma, 0, -1);31063106-31073107- if (unlikely(mm_is_oom_victim(mm))) {31083108- /*31093109- * Wait for oom_reap_task() to stop working on this31103110- * mm. Because MMF_OOM_SKIP is already set before31113111- * calling down_read(), oom_reap_task() will not run31123112- * on this "mm" post up_write().31133113- *31143114- * mm_is_oom_victim() cannot be set from under us31153115- * either because victim->mm is already set to NULL31163116- * under task_lock before calling mmput and oom_mm is31173117- * set not NULL by the OOM killer only if victim->mm31183118- * is found not NULL while holding the task_lock.31193119- */31203120- set_bit(MMF_OOM_SKIP, &mm->flags);31213121- down_write(&mm->mmap_sem);31223122- up_write(&mm->mmap_sem);31233123- }31243048 free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);31253049 tlb_finish_mmu(&tlb, 0, -1);31263050
+43-38
mm/oom_kill.c
···469469 return false;470470}471471472472-473472#ifdef CONFIG_MMU474473/*475474 * OOM Reaper kernel thread which tries to reap the memory used by the OOM···479480static struct task_struct *oom_reaper_list;480481static DEFINE_SPINLOCK(oom_reaper_lock);481482482482-static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)483483+void __oom_reap_task_mm(struct mm_struct *mm)483484{484484- struct mmu_gather tlb;485485 struct vm_area_struct *vma;486486+487487+ /*488488+ * Tell all users of get_user/copy_from_user etc... that the content489489+ * is no longer stable. No barriers really needed because unmapping490490+ * should imply barriers already and the reader would hit a page fault491491+ * if it stumbled over a reaped memory.492492+ */493493+ set_bit(MMF_UNSTABLE, &mm->flags);494494+495495+ for (vma = mm->mmap ; vma; vma = vma->vm_next) {496496+ if (!can_madv_dontneed_vma(vma))497497+ continue;498498+499499+ /*500500+ * Only anonymous pages have a good chance to be dropped501501+ * without additional steps which we cannot afford as we502502+ * are OOM already.503503+ *504504+ * We do not even care about fs backed pages because all505505+ * which are reclaimable have already been reclaimed and506506+ * we do not want to block exit_mmap by keeping mm ref507507+ * count elevated without a good reason.508508+ */509509+ if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {510510+ const unsigned long start = vma->vm_start;511511+ const unsigned long end = vma->vm_end;512512+ struct mmu_gather tlb;513513+514514+ tlb_gather_mmu(&tlb, mm, start, end);515515+ mmu_notifier_invalidate_range_start(mm, start, end);516516+ unmap_page_range(&tlb, vma, start, end, NULL);517517+ mmu_notifier_invalidate_range_end(mm, start, end);518518+ tlb_finish_mmu(&tlb, start, end);519519+ }520520+ }521521+}522522+523523+static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)524524+{486525 bool ret = true;487526488527 /*489528 * We have to make sure 
to not race with the victim exit path490529 * and cause premature new oom victim selection:491491- * __oom_reap_task_mm exit_mm530530+ * oom_reap_task_mm exit_mm492531 * mmget_not_zero493532 * mmput494533 * atomic_dec_and_test···571534572535 trace_start_task_reaping(tsk->pid);573536574574- /*575575- * Tell all users of get_user/copy_from_user etc... that the content576576- * is no longer stable. No barriers really needed because unmapping577577- * should imply barriers already and the reader would hit a page fault578578- * if it stumbled over a reaped memory.579579- */580580- set_bit(MMF_UNSTABLE, &mm->flags);537537+ __oom_reap_task_mm(mm);581538582582- for (vma = mm->mmap ; vma; vma = vma->vm_next) {583583- if (!can_madv_dontneed_vma(vma))584584- continue;585585-586586- /*587587- * Only anonymous pages have a good chance to be dropped588588- * without additional steps which we cannot afford as we589589- * are OOM already.590590- *591591- * We do not even care about fs backed pages because all592592- * which are reclaimable have already been reclaimed and593593- * we do not want to block exit_mmap by keeping mm ref594594- * count elevated without a good reason.595595- */596596- if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {597597- const unsigned long start = vma->vm_start;598598- const unsigned long end = vma->vm_end;599599-600600- tlb_gather_mmu(&tlb, mm, start, end);601601- mmu_notifier_invalidate_range_start(mm, start, end);602602- unmap_page_range(&tlb, vma, start, end, NULL);603603- mmu_notifier_invalidate_range_end(mm, start, end);604604- tlb_finish_mmu(&tlb, start, end);605605- }606606- }607539 pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n",608540 task_pid_nr(tsk), tsk->comm,609541 K(get_mm_counter(mm, MM_ANONPAGES)),···593587 struct mm_struct *mm = tsk->signal->oom_mm;594588595589 /* Retry the down_read_trylock(mmap_sem) a few times */596596- while (attempts++ < MAX_OOM_REAP_RETRIES && 
!__oom_reap_task_mm(tsk, mm))590590+ while (attempts++ < MAX_OOM_REAP_RETRIES && !oom_reap_task_mm(tsk, mm))597591 schedule_timeout_idle(HZ/10);598592599593 if (attempts <= MAX_OOM_REAP_RETRIES ||600594 test_bit(MMF_OOM_SKIP, &mm->flags))601595 goto done;602602-603596604597 pr_info("oom_reaper: unable to reap pid:%d (%s)\n",605598 task_pid_nr(tsk), tsk->comm);
+1-1
mm/sparse.c
···629629 unsigned long pfn;630630631631 for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {632632- unsigned long section_nr = pfn_to_section_nr(start_pfn);632632+ unsigned long section_nr = pfn_to_section_nr(pfn);633633 struct mem_section *ms;634634635635 /*
+5-1
mm/vmstat.c
···11611161 "nr_vmscan_immediate_reclaim",11621162 "nr_dirtied",11631163 "nr_written",11641164- "nr_indirectly_reclaimable",11641164+ "", /* nr_indirectly_reclaimable */1165116511661166 /* enum writeback_stat_item counters */11671167 "nr_dirty_threshold",···17391739{17401740 unsigned long *l = arg;17411741 unsigned long off = l - (unsigned long *)m->private;17421742+17431743+ /* Skip hidden vmstat items. */17441744+ if (*vmstat_text[off] == '\0')17451745+ return 0;1742174617431747 seq_puts(m, vmstat_text[off]);17441748 seq_put_decimal_ull(m, " ", *l);
+30-12
mm/z3fold.c
···144144 PAGE_HEADLESS = 0,145145 MIDDLE_CHUNK_MAPPED,146146 NEEDS_COMPACTING,147147- PAGE_STALE147147+ PAGE_STALE,148148+ UNDER_RECLAIM148149};149150150151/*****************···174173 clear_bit(MIDDLE_CHUNK_MAPPED, &page->private);175174 clear_bit(NEEDS_COMPACTING, &page->private);176175 clear_bit(PAGE_STALE, &page->private);176176+ clear_bit(UNDER_RECLAIM, &page->private);177177178178 spin_lock_init(&zhdr->page_lock);179179 kref_init(&zhdr->refcount);···758756 atomic64_dec(&pool->pages_nr);759757 return;760758 }759759+ if (test_bit(UNDER_RECLAIM, &page->private)) {760760+ z3fold_page_unlock(zhdr);761761+ return;762762+ }761763 if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) {762764 z3fold_page_unlock(zhdr);763765 return;···846840 kref_get(&zhdr->refcount);847841 list_del_init(&zhdr->buddy);848842 zhdr->cpu = -1;843843+ set_bit(UNDER_RECLAIM, &page->private);844844+ break;849845 }850846851847 list_del_init(&page->lru);···895887 goto next;896888 }897889next:898898- spin_lock(&pool->lock);899890 if (test_bit(PAGE_HEADLESS, &page->private)) {900891 if (ret == 0) {901901- spin_unlock(&pool->lock);902892 free_z3fold_page(page);903893 return 0;904894 }905905- } else if (kref_put(&zhdr->refcount, release_z3fold_page)) {906906- atomic64_dec(&pool->pages_nr);895895+ spin_lock(&pool->lock);896896+ list_add(&page->lru, &pool->lru);907897 spin_unlock(&pool->lock);908908- return 0;898898+ } else {899899+ z3fold_page_lock(zhdr);900900+ clear_bit(UNDER_RECLAIM, &page->private);901901+ if (kref_put(&zhdr->refcount,902902+ release_z3fold_page_locked)) {903903+ atomic64_dec(&pool->pages_nr);904904+ return 0;905905+ }906906+ /*907907+ * if we are here, the page is still not completely908908+ * free. 
Take the global pool lock then to be able909909+	 * to add it back to the lru list910910+	 */911911+	spin_lock(&pool->lock);912912+	list_add(&page->lru, &pool->lru);913913+	spin_unlock(&pool->lock);914914+	z3fold_page_unlock(zhdr);909915		}910916911911-		/*912912-		 * Add to the beginning of LRU.913913-		 * Pool lock has to be kept here to ensure the page has914914-		 * not already been released915915-		 */916916-		list_add(&page->lru, &pool->lru);917917+		/* We started off locked so we need to lock the pool back */918918+		spin_lock(&pool->lock);917919	}918920	spin_unlock(&pool->lock);919921	return -EAGAIN;
+1-1
net/9p/trans_common.c
···1616#include <linux/module.h>17171818/**1919- * p9_release_req_pages - Release pages after the transaction.1919+ * p9_release_pages - Release pages after the transaction.2020 */2121void p9_release_pages(struct page **pages, int nr_pages)2222{
+2-2
net/9p/trans_fd.c
···10921092};1093109310941094/**10951095- * p9_poll_proc - poll worker thread10961096- * @a: thread state and arguments10951095+ * p9_poll_workfn - poll worker thread10961096+ * @work: work queue10971097 *10981098 * polls all v9fs transports for new events and queues the appropriate10991099 * work to the work queue
+1-3
net/9p/trans_rdma.c
···6868 * @pd: Protection Domain pointer6969 * @qp: Queue Pair pointer7070 * @cq: Completion Queue pointer7171- * @dm_mr: DMA Memory Region pointer7272- * @lkey: The local access only memory region key7371 * @timeout: Number of uSecs to wait for connection management events7472 * @privport: Whether a privileged port may be used7573 * @port: The port to use···630632}631633632634/**633633- * trans_create_rdma - Transport method for creating atransport instance635635+ * rdma_create_trans - Transport method for creating a transport instance634636 * @client: client instance635637 * @addr: IP address string636638 * @args: Mount options string
+2-3
net/9p/trans_virtio.c
···60606161/**6262 * struct virtio_chan - per-instance transport information6363- * @initialized: whether the channel is initialized6463 * @inuse: whether the channel is in use6564 * @lock: protects multiple elements within this structure6665 * @client: client instance···384385 * @uidata: user buffer that should be used for zero copy read385386 * @uodata: user buffer that should be used for zero copy write386387 * @inlen: read buffer size387387- * @olen: write buffer size388388- * @hdrlen: reader header size, This is the size of response protocol data388388+ * @outlen: write buffer size389389+ * @in_hdr_len: reader header size, This is the size of response protocol data389390 *390391 */391392static int
···4141#include <linux/module.h>4242#include <linux/init.h>43434444+/* Hardening for Spectre-v1 */4545+#include <linux/nospec.h>4646+4447#include "lec.h"4548#include "lec_arpc.h"4649#include "resources.h"···690687 bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc));691688 if (bytes_left != 0)692689 pr_info("copy from user failed for %d bytes\n", bytes_left);693693- if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF ||694694- !dev_lec[ioc_data.dev_num])690690+ if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF)691691+ return -EINVAL;692692+ ioc_data.dev_num = array_index_nospec(ioc_data.dev_num, MAX_LEC_ITF);693693+ if (!dev_lec[ioc_data.dev_num])695694 return -EINVAL;696695 vpriv = kmalloc(sizeof(struct lec_vcc_priv), GFP_KERNEL);697696 if (!vpriv)
+2-2
net/bridge/br_if.c
···518518 return -ELOOP;519519 }520520521521- /* Device is already being bridged */522522- if (br_port_exists(dev))521521+ /* Device has master upper dev */522522+ if (netdev_master_upper_dev_get(dev))523523 return -EBUSY;524524525525 /* No bridging devices that dislike that (e.g. wireless) */
···10321032 info_size = sizeof(info);10331033 if (copy_from_user(&info, useraddr, info_size))10341034 return -EFAULT;10351035+ /* Since malicious users may modify the original data,10361036+ * we need to check whether FLOW_RSS is still requested.10371037+ */10381038+ if (!(info.flow_type & FLOW_RSS))10391039+ return -EINVAL;10351040 }1036104110371042 if (info.cmd == ETHTOOL_GRXCLSRLALL) {
···806806 }807807 }808808 }809809- bbr->idle_restart = 0;809809+ /* Restart after idle ends only once we process a new S/ACK for data */810810+ if (rs->delivered > 0)811811+ bbr->idle_restart = 0;810812}811813812814static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
+7-4
net/ipv4/udp.c
···401401 bool dev_match = (sk->sk_bound_dev_if == dif ||402402 sk->sk_bound_dev_if == sdif);403403404404- if (exact_dif && !dev_match)404404+ if (!dev_match)405405 return -1;406406- if (sk->sk_bound_dev_if && dev_match)406406+ if (sk->sk_bound_dev_if)407407 score += 4;408408 }409409···952952 sock_tx_timestamp(sk, ipc.sockc.tsflags, &ipc.tx_flags);953953954954 if (ipc.opt && ipc.opt->opt.srr) {955955- if (!daddr)956956- return -EINVAL;955955+ if (!daddr) {956956+ err = -EINVAL;957957+ goto out_free;958958+ }957959 faddr = ipc.opt->opt.faddr;958960 connected = 0;959961 }···1076107410771075out:10781076 ip_rt_put(rt);10771077+out_free:10791078 if (free)10801079 kfree(ipc.opt);10811080 if (!err)
+4-5
net/ipv6/Kconfig
···3434 bool "IPv6: Route Information (RFC 4191) support"3535 depends on IPV6_ROUTER_PREF3636 ---help---3737- This is experimental support of Route Information.3737+ Support of Route Information.38383939 If unsure, say N.40404141config IPV6_OPTIMISTIC_DAD4242 bool "IPv6: Enable RFC 4429 Optimistic DAD"4343 ---help---4444- This is experimental support for optimistic Duplicate4545- Address Detection. It allows for autoconfigured addresses4646- to be used more quickly.4444+ Support for optimistic Duplicate Address Detection. It allows for4545+ autoconfigured addresses to be used more quickly.47464847 If unsure, say N.4948···279280 depends on IPV6280281 select IP_MROUTE_COMMON281282 ---help---282282- Experimental support for IPv6 multicast forwarding.283283+ Support for IPv6 multicast forwarding.283284 If unsure, say N.284285285286config IPV6_MROUTE_MULTIPLE_TABLES
···88 * Copyright 2007, Michael Wu <flamingice@sourmilk.net>99 * Copyright 2007-2010, Intel Corporation1010 * Copyright(c) 2015-2017 Intel Deutschland GmbH1111+ * Copyright (C) 2018 Intel Corporation1112 *1213 * This program is free software; you can redistribute it and/or modify1314 * it under the terms of the GNU General Public License version 2 as···970969 ieee80211_agg_tx_operational(local, sta, tid);971970972971 sta->ampdu_mlme.addba_req_num[tid] = 0;972972+973973+ tid_tx->timeout =974974+ le16_to_cpu(mgmt->u.action.u.addba_resp.timeout);973975974976 if (tid_tx->timeout) {975977 mod_timer(&tid_tx->session_timer,
···44 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>55 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net>66 * Copyright 2013-2014 Intel Mobile Communications GmbH77+ * Copyright (C) 2018 Intel Corporation78 *89 * This program is free software; you can redistribute it and/or modify910 * it under the terms of the GNU General Public License version 2 as···11361135 }1137113611381137 /* reset session timer */11391139- if (reset_agg_timer && tid_tx->timeout)11381138+ if (reset_agg_timer)11401139 tid_tx->last_tx = jiffies;1141114011421141 return queued;
···558558 struct rds_cmsg_rx_trace t;559559 int i, j;560560561561+ memset(&t, 0, sizeof(t));561562 inc->i_rx_lat_trace[RDS_MSG_RX_CMSG] = local_clock();562563 t.rx_traces = rs->rs_rx_traces;563564 for (i = 0; i < rs->rs_rx_traces; i++) {
+6-1
net/rfkill/rfkill-gpio.c
···137137138138 ret = rfkill_register(rfkill->rfkill_dev);139139 if (ret < 0)140140- return ret;140140+ goto err_destroy;141141142142 platform_set_drvdata(pdev, rfkill);143143144144 dev_info(&pdev->dev, "%s device registered.\n", rfkill->name);145145146146 return 0;147147+148148+err_destroy:149149+ rfkill_destroy(rfkill->rfkill_dev);150150+151151+ return ret;147152}148153149154static int rfkill_gpio_remove(struct platform_device *pdev)
···476476 RXRPC_CALL_SEND_PING, /* A ping will need to be sent */477477 RXRPC_CALL_PINGING, /* Ping in process */478478 RXRPC_CALL_RETRANS_TIMEOUT, /* Retransmission due to timeout occurred */479479+ RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */479480};480481481482/*
···131131 if (exists && bind)132132 return 0;133133134134- if (!lflags)134134+ if (!lflags) {135135+ if (exists)136136+ tcf_idr_release(*a, bind);135137 return -EINVAL;138138+ }136139137140 if (!exists) {138141 ret = tcf_idr_create(tn, parm->index, est, a,
···10241024 struct sctp_endpoint *ep;10251025 struct sctp_chunk *chunk;10261026 struct sctp_inq *inqueue;10271027- int state;10271027+ int first_time = 1; /* is this the first time through the loop */10281028 int error = 0;10291029+ int state;1029103010301031 /* The association should be held so we should be safe. */10311032 ep = asoc->ep;···10371036 state = asoc->state;10381037 subtype = SCTP_ST_CHUNK(chunk->chunk_hdr->type);1039103810391039+ /* If the first chunk in the packet is AUTH, do special10401040+ * processing specified in Section 6.3 of SCTP-AUTH spec10411041+ */10421042+ if (first_time && subtype.chunk == SCTP_CID_AUTH) {10431043+ struct sctp_chunkhdr *next_hdr;10441044+10451045+ next_hdr = sctp_inq_peek(inqueue);10461046+ if (!next_hdr)10471047+ goto normal;10481048+10491049+ /* If the next chunk is COOKIE-ECHO, skip the AUTH10501050+ * chunk while saving a pointer to it so we can do10511051+ * Authentication later (during cookie-echo10521052+ * processing).10531053+ */10541054+ if (next_hdr->type == SCTP_CID_COOKIE_ECHO) {10551055+ chunk->auth_chunk = skb_clone(chunk->skb,10561056+ GFP_ATOMIC);10571057+ chunk->auth = 1;10581058+ continue;10591059+ }10601060+ }10611061+10621062+normal:10401063 /* SCTP-AUTH, Section 6.3:10411064 * The receiver has a list of chunk types which it expects10421065 * to be received only after an AUTH-chunk. This list has···10991074 /* If there is an error on chunk, discard this packet. */11001075 if (error && chunk)11011076 chunk->pdiscard = 1;10771077+10781078+ if (first_time)10791079+ first_time = 0;11021080 }11031081 sctp_association_put(asoc);11041082}
+1-1
net/sctp/inqueue.c
···217217 skb_pull(chunk->skb, sizeof(*ch));218218 chunk->subh.v = NULL; /* Subheader is no longer valid. */219219220220- if (chunk->chunk_end + sizeof(*ch) < skb_tail_pointer(chunk->skb)) {220220+ if (chunk->chunk_end + sizeof(*ch) <= skb_tail_pointer(chunk->skb)) {221221 /* This is not a singleton */222222 chunk->singleton = 0;223223 } else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) {
···153153 struct sctp_cmd_seq *commands);154154155155static enum sctp_ierror sctp_sf_authenticate(156156- struct net *net,157157- const struct sctp_endpoint *ep,158156 const struct sctp_association *asoc,159159- const union sctp_subtype type,160157 struct sctp_chunk *chunk);161158162159static enum sctp_disposition __sctp_sf_do_9_1_abort(···623626 return SCTP_DISPOSITION_CONSUME;624627}625628629629+static bool sctp_auth_chunk_verify(struct net *net, struct sctp_chunk *chunk,630630+ const struct sctp_association *asoc)631631+{632632+ struct sctp_chunk auth;633633+634634+ if (!chunk->auth_chunk)635635+ return true;636636+637637+ /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo638638+ * is supposed to be authenticated and we have to do delayed639639+ * authentication. We've just recreated the association using640640+ * the information in the cookie and now it's much easier to641641+ * do the authentication.642642+ */643643+644644+ /* Make sure that we and the peer are AUTH capable */645645+ if (!net->sctp.auth_enable || !asoc->peer.auth_capable)646646+ return false;647647+648648+ /* set-up our fake chunk so that we can process it */649649+ auth.skb = chunk->auth_chunk;650650+ auth.asoc = chunk->asoc;651651+ auth.sctp_hdr = chunk->sctp_hdr;652652+ auth.chunk_hdr = (struct sctp_chunkhdr *)653653+ skb_push(chunk->auth_chunk,654654+ sizeof(struct sctp_chunkhdr));655655+ skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr));656656+ auth.transport = chunk->transport;657657+658658+ return sctp_sf_authenticate(asoc, &auth) == SCTP_IERROR_NO_ERROR;659659+}660660+626661/*627662 * Respond to a normal COOKIE ECHO chunk.628663 * We are the side that is being asked for an association.···792763 if (error)793764 goto nomem_init;794765795795- /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo796796- * is supposed to be authenticated and we have to do delayed797797- * authentication. 
We've just recreated the association using798798- * the information in the cookie and now it's much easier to799799- * do the authentication.800800- */801801- if (chunk->auth_chunk) {802802- struct sctp_chunk auth;803803- enum sctp_ierror ret;804804-805805- /* Make sure that we and the peer are AUTH capable */806806- if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) {807807- sctp_association_free(new_asoc);808808- return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);809809- }810810-811811- /* set-up our fake chunk so that we can process it */812812- auth.skb = chunk->auth_chunk;813813- auth.asoc = chunk->asoc;814814- auth.sctp_hdr = chunk->sctp_hdr;815815- auth.chunk_hdr = (struct sctp_chunkhdr *)816816- skb_push(chunk->auth_chunk,817817- sizeof(struct sctp_chunkhdr));818818- skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr));819819- auth.transport = chunk->transport;820820-821821- ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth);822822- if (ret != SCTP_IERROR_NO_ERROR) {823823- sctp_association_free(new_asoc);824824- return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);825825- }766766+ if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) {767767+ sctp_association_free(new_asoc);768768+ return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);826769 }827770828771 repl = sctp_make_cookie_ack(new_asoc, chunk);···17951794 GFP_ATOMIC))17961795 goto nomem;1797179617971797+ if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC))17981798+ goto nomem;17991799+18001800+ if (!sctp_auth_chunk_verify(net, chunk, new_asoc))18011801+ return SCTP_DISPOSITION_DISCARD;18021802+17981803 /* Make sure no new addresses are being added during the17991804 * restart. 
Though this is a pretty complicated attack18001805 * since you'd have to get inside the cookie.18011806 */18021802- if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands)) {18071807+ if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands))18031808 return SCTP_DISPOSITION_CONSUME;18041804- }1805180918061810 /* If the endpoint is in the SHUTDOWN-ACK-SENT state and recognizes18071811 * the peer has restarted (Action A), it MUST NOT setup a new···19121906 GFP_ATOMIC))19131907 goto nomem;1914190819091909+ if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC))19101910+ goto nomem;19111911+19121912+ if (!sctp_auth_chunk_verify(net, chunk, new_asoc))19131913+ return SCTP_DISPOSITION_DISCARD;19141914+19151915 /* Update the content of current association. */19161916 sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));19171917 sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,···20152003 * a COOKIE ACK.20162004 */2017200520062006+ if (!sctp_auth_chunk_verify(net, chunk, asoc))20072007+ return SCTP_DISPOSITION_DISCARD;20082008+20182009 /* Don't accidentally move back into established state. 
*/20192010 if (asoc->state < SCTP_STATE_ESTABLISHED) {20202011 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,···20652050 }20662051 }2067205220682068- repl = sctp_make_cookie_ack(new_asoc, chunk);20532053+ repl = sctp_make_cookie_ack(asoc, chunk);20692054 if (!repl)20702055 goto nomem;20712056···41804165 * The return value is the disposition of the chunk.41814166 */41824167static enum sctp_ierror sctp_sf_authenticate(41834183- struct net *net,41844184- const struct sctp_endpoint *ep,41854168 const struct sctp_association *asoc,41864186- const union sctp_subtype type,41874169 struct sctp_chunk *chunk)41884170{41894171 struct sctp_shared_key *sh_key = NULL;···42814269 commands);4282427042834271 auth_hdr = (struct sctp_authhdr *)chunk->skb->data;42844284- error = sctp_sf_authenticate(net, ep, asoc, type, chunk);42724272+ error = sctp_sf_authenticate(asoc, chunk);42854273 switch (error) {42864274 case SCTP_IERROR_AUTH_BAD_HMAC:42874275 /* Generate the ERROR chunk and discard the rest
···292292 smc_copy_sock_settings(&smc->sk, smc->clcsock->sk, SK_FLAGS_CLC_TO_SMC);293293}294294295295+/* register a new rmb */296296+static int smc_reg_rmb(struct smc_link *link, struct smc_buf_desc *rmb_desc)297297+{298298+ /* register memory region for new rmb */299299+ if (smc_wr_reg_send(link, rmb_desc->mr_rx[SMC_SINGLE_LINK])) {300300+ rmb_desc->regerr = 1;301301+ return -EFAULT;302302+ }303303+ return 0;304304+}305305+295306static int smc_clnt_conf_first_link(struct smc_sock *smc)296307{297308 struct smc_link_group *lgr = smc->conn.lgr;···332321333322 smc_wr_remember_qp_attr(link);334323335335- rc = smc_wr_reg_send(link,336336- smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]);337337- if (rc)324324+ if (smc_reg_rmb(link, smc->conn.rmb_desc))338325 return SMC_CLC_DECL_INTERR;339326340327 /* send CONFIRM LINK response over RoCE fabric */···482473 goto decline_rdma_unlock;483474 }484475 } else {485485- struct smc_buf_desc *buf_desc = smc->conn.rmb_desc;486486-487487- if (!buf_desc->reused) {488488- /* register memory region for new rmb */489489- rc = smc_wr_reg_send(link,490490- buf_desc->mr_rx[SMC_SINGLE_LINK]);491491- if (rc) {476476+ if (!smc->conn.rmb_desc->reused) {477477+ if (smc_reg_rmb(link, smc->conn.rmb_desc)) {492478 reason_code = SMC_CLC_DECL_INTERR;493479 goto decline_rdma_unlock;494480 }···723719724720 link = &lgr->lnk[SMC_SINGLE_LINK];725721726726- rc = smc_wr_reg_send(link,727727- smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]);728728- if (rc)722722+ if (smc_reg_rmb(link, smc->conn.rmb_desc))729723 return SMC_CLC_DECL_INTERR;730724731725 /* send CONFIRM LINK request to client over the RoCE fabric */···856854 smc_rx_init(new_smc);857855858856 if (local_contact != SMC_FIRST_CONTACT) {859859- struct smc_buf_desc *buf_desc = new_smc->conn.rmb_desc;860860-861861- if (!buf_desc->reused) {862862- /* register memory region for new rmb */863863- rc = smc_wr_reg_send(link,864864- buf_desc->mr_rx[SMC_SINGLE_LINK]);865865- if (rc) {857857+ if 
(!new_smc->conn.rmb_desc->reused) {858858+ if (smc_reg_rmb(link, new_smc->conn.rmb_desc)) {866859 reason_code = SMC_CLC_DECL_INTERR;867860 goto decline_rdma_unlock;868861 }···975978 }976979977980out:978978- if (lsmc->clcsock) {979979- sock_release(lsmc->clcsock);980980- lsmc->clcsock = NULL;981981- }982981 release_sock(lsk);983982 sock_put(&lsmc->sk); /* sock_hold in smc_listen */984983}···11631170 /* delegate to CLC child sock */11641171 release_sock(sk);11651172 mask = smc->clcsock->ops->poll(file, smc->clcsock, wait);11661166- /* if non-blocking connect finished ... */11671173 lock_sock(sk);11681168- if ((sk->sk_state == SMC_INIT) && (mask & EPOLLOUT)) {11691169- sk->sk_err = smc->clcsock->sk->sk_err;11701170- if (sk->sk_err) {11711171- mask |= EPOLLERR;11721172- } else {11741174+ sk->sk_err = smc->clcsock->sk->sk_err;11751175+ if (sk->sk_err) {11761176+ mask |= EPOLLERR;11771177+ } else {11781178+ /* if non-blocking connect finished ... */11791179+ if (sk->sk_state == SMC_INIT &&11801180+ mask & EPOLLOUT &&11811181+ smc->clcsock->sk->sk_state != TCP_CLOSE) {11731182 rc = smc_connect_rdma(smc);11741183 if (rc < 0)11751184 mask |= EPOLLERR;···1315132013161321 smc = smc_sk(sk);13171322 lock_sock(sk);13181318- if (sk->sk_state != SMC_ACTIVE)13231323+ if (sk->sk_state != SMC_ACTIVE) {13241324+ release_sock(sk);13191325 goto out;13261326+ }13271327+ release_sock(sk);13201328 if (smc->use_fallback)13211329 rc = kernel_sendpage(smc->clcsock, page, offset,13221330 size, flags);···13271329 rc = sock_no_sendpage(sock, page, offset, size, flags);1328133013291331out:13301330- release_sock(sk);13311332 return rc;13321333}13331334
+19-3
net/smc/smc_core.c
···32323333static u32 smc_lgr_num; /* unique link group number */34343535+static void smc_buf_free(struct smc_buf_desc *buf_desc, struct smc_link *lnk,3636+ bool is_rmb);3737+3538static void smc_lgr_schedule_free_work(struct smc_link_group *lgr)3639{3740 /* client link group creation always follows the server link group···237234 conn->sndbuf_size = 0;238235 }239236 if (conn->rmb_desc) {240240- conn->rmb_desc->reused = true;241241- conn->rmb_desc->used = 0;242242- conn->rmbe_size = 0;237237+ if (!conn->rmb_desc->regerr) {238238+ conn->rmb_desc->reused = 1;239239+ conn->rmb_desc->used = 0;240240+ conn->rmbe_size = 0;241241+ } else {242242+ /* buf registration failed, reuse not possible */243243+ struct smc_link_group *lgr = conn->lgr;244244+ struct smc_link *lnk;245245+246246+ write_lock_bh(&lgr->rmbs_lock);247247+ list_del(&conn->rmb_desc->list);248248+ write_unlock_bh(&lgr->rmbs_lock);249249+250250+ lnk = &lgr->lnk[SMC_SINGLE_LINK];251251+ smc_buf_free(conn->rmb_desc, lnk, true);252252+ }243253 }244254}245255
+2-1
net/smc/smc_core.h
···123123 */124124 u32 order; /* allocation order */125125 u32 used; /* currently used / unused */126126- bool reused; /* new created / reused */126126+ u8 reused : 1; /* new created / reused */127127+ u8 regerr : 1; /* err during registration */127128};128129129130struct smc_rtoken { /* address/key of remote RMB */
+1-4
net/sunrpc/xprtrdma/fmr_ops.c
···7272 if (IS_ERR(mr->fmr.fm_mr))7373 goto out_fmr_err;74747575+ INIT_LIST_HEAD(&mr->mr_list);7576 return 0;76777778out_fmr_err:···102101{103102 LIST_HEAD(unmap_list);104103 int rc;105105-106106- /* Ensure MW is not on any rl_registered list */107107- if (!list_empty(&mr->mr_list))108108- list_del(&mr->mr_list);109104110105 kfree(mr->fmr.fm_physaddrs);111106 kfree(mr->mr_sg);
+3-6
net/sunrpc/xprtrdma/frwr_ops.c
···110110 if (!mr->mr_sg)111111 goto out_list_err;112112113113+ INIT_LIST_HEAD(&mr->mr_list);113114 sg_init_table(mr->mr_sg, depth);114115 init_completion(&frwr->fr_linv_done);115116 return 0;···133132frwr_op_release_mr(struct rpcrdma_mr *mr)134133{135134 int rc;136136-137137- /* Ensure MR is not on any rl_registered list */138138- if (!list_empty(&mr->mr_list))139139- list_del(&mr->mr_list);140135141136 rc = ib_dereg_mr(mr->frwr.fr_mr);142137 if (rc)···192195 return;193196194197out_release:195195- pr_err("rpcrdma: FRWR reset failed %d, %p release\n", rc, mr);198198+ pr_err("rpcrdma: FRWR reset failed %d, %p released\n", rc, mr);196199 r_xprt->rx_stats.mrs_orphaned++;197200198201 spin_lock(&r_xprt->rx_buf.rb_mrlock);···473476474477 list_for_each_entry(mr, mrs, mr_list)475478 if (mr->mr_handle == rep->rr_inv_rkey) {476476- list_del(&mr->mr_list);479479+ list_del_init(&mr->mr_list);477480 trace_xprtrdma_remoteinv(mr);478481 mr->frwr.fr_state = FRWR_IS_INVALID;479482 rpcrdma_mr_unmap_and_put(mr);
+5
net/sunrpc/xprtrdma/verbs.c
···12541254 list_del(&mr->mr_all);1255125512561256 spin_unlock(&buf->rb_mrlock);12571257+12581258+ /* Ensure MW is not on any rl_registered list */12591259+ if (!list_empty(&mr->mr_list))12601260+ list_del(&mr->mr_list);12611261+12571262 ia->ri_ops->ro_release_mr(mr);12581263 count++;12591264 spin_lock(&buf->rb_mrlock);
···19501950int tipc_nl_node_get_link(struct sk_buff *skb, struct genl_info *info)19511951{19521952 struct net *net = genl_info_net(info);19531953+ struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];19531954 struct tipc_nl_msg msg;19541955 char *name;19551956 int err;···19581957 msg.portid = info->snd_portid;19591958 msg.seq = info->snd_seq;1960195919611961- if (!info->attrs[TIPC_NLA_LINK_NAME])19601960+ if (!info->attrs[TIPC_NLA_LINK])19621961 return -EINVAL;19631963- name = nla_data(info->attrs[TIPC_NLA_LINK_NAME]);19621962+19631963+ err = nla_parse_nested(attrs, TIPC_NLA_LINK_MAX,19641964+ info->attrs[TIPC_NLA_LINK],19651965+ tipc_nl_link_policy, info->extack);19661966+ if (err)19671967+ return err;19681968+19691969+ if (!attrs[TIPC_NLA_LINK_NAME])19701970+ return -EINVAL;19711971+19721972+ name = nla_data(attrs[TIPC_NLA_LINK_NAME]);1964197319651974 msg.skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);19661975 if (!msg.skb)···2255224422562245 rtnl_lock();22572246 for (bearer_id = prev_bearer; bearer_id < MAX_BEARERS; bearer_id++) {22582258- err = __tipc_nl_add_monitor(net, &msg, prev_bearer);22472247+ err = __tipc_nl_add_monitor(net, &msg, bearer_id);22592248 if (err)22602249 break;22612250 }
+2-1
net/tipc/socket.c
···1516151615171517 srcaddr->sock.family = AF_TIPC;15181518 srcaddr->sock.addrtype = TIPC_ADDR_ID;15191519+ srcaddr->sock.scope = 0;15191520 srcaddr->sock.addr.id.ref = msg_origport(hdr);15201521 srcaddr->sock.addr.id.node = msg_orignode(hdr);15211522 srcaddr->sock.addr.name.domain = 0;15221522- srcaddr->sock.scope = 0;15231523 m->msg_namelen = sizeof(struct sockaddr_tipc);1524152415251525 if (!msg_in_group(hdr))···15281528 /* Group message users may also want to know sending member's id */15291529 srcaddr->member.family = AF_TIPC;15301530 srcaddr->member.addrtype = TIPC_ADDR_NAME;15311531+ srcaddr->member.scope = 0;15311532 srcaddr->member.addr.name.name.type = msg_nametype(hdr);15321533 srcaddr->member.addr.name.name.instance = TIPC_SKB_CB(skb)->orig_member;15331534 srcaddr->member.addr.name.domain = 0;
+12-7
net/tls/tls_main.c
···114114 size = sg->length - offset;115115 offset += sg->offset;116116117117+ ctx->in_tcp_sendpages = true;117118 while (1) {118119 if (sg_is_last(sg))119120 sendpage_flags = flags;···135134 offset -= sg->offset;136135 ctx->partially_sent_offset = offset;137136 ctx->partially_sent_record = (void *)sg;137137+ ctx->in_tcp_sendpages = false;138138 return ret;139139 }140140···150148 }151149152150 clear_bit(TLS_PENDING_CLOSED_RECORD, &ctx->flags);151151+ ctx->in_tcp_sendpages = false;152152+ ctx->sk_write_space(sk);153153154154 return 0;155155}···221217{222218 struct tls_context *ctx = tls_get_ctx(sk);223219220220+ /* We are already sending pages, ignore notification */221221+ if (ctx->in_tcp_sendpages)222222+ return;223223+224224 if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) {225225 gfp_t sk_allocation = sk->sk_allocation;226226 int rc;···249241 struct tls_context *ctx = tls_get_ctx(sk);250242 long timeo = sock_sndtimeo(sk, 0);251243 void (*sk_proto_close)(struct sock *sk, long timeout);244244+ bool free_ctx = false;252245253246 lock_sock(sk);254247 sk_proto_close = ctx->sk_proto_close;255248256256- if (ctx->conf == TLS_HW_RECORD)257257- goto skip_tx_cleanup;258258-259259- if (ctx->conf == TLS_BASE) {260260- kfree(ctx);261261- ctx = NULL;249249+ if (ctx->conf == TLS_BASE || ctx->conf == TLS_HW_RECORD) {250250+ free_ctx = true;262251 goto skip_tx_cleanup;263252 }264253···292287 /* free ctx for TLS_HW_RECORD, used by tcp_set_state293288 * for sk->sk_prot->unhash [tls_hw_unhash]294289 */295295- if (ctx && ctx->conf == TLS_HW_RECORD)290290+ if (free_ctx)296291 kfree(ctx);297292}298293
+3
net/wireless/core.c
···95959696 ASSERT_RTNL();97979898+ if (strlen(newname) > NL80211_WIPHY_NAME_MAXLEN)9999+ return -EINVAL;100100+98101 /* prohibit calling the thing phy%d when %d is not its number */99102 sscanf(newname, PHY_NAME "%d%n", &wiphy_idx, &taken);100103 if (taken == strlen(newname) && wiphy_idx != rdev->wiphy_idx) {
+1
net/wireless/nl80211.c
···9214921492159215 if (nla_get_flag(info->attrs[NL80211_ATTR_EXTERNAL_AUTH_SUPPORT])) {92169216 if (!info->attrs[NL80211_ATTR_SOCKET_OWNER]) {92179217+ kzfree(connkeys);92179218 GENL_SET_ERR_MSG(info,92189219 "external auth requires connection ownership");92199220 return -EINVAL;
···21752175 return afinfo;21762176}2177217721782178+void xfrm_flush_gc(void)21792179+{21802180+ flush_work(&xfrm_state_gc_work);21812181+}21822182+EXPORT_SYMBOL(xfrm_flush_gc);21832183+21782184/* Temporarily located here until net/xfrm/xfrm_tunnel.c is created */21792185void xfrm_state_delete_tunnel(struct xfrm_state *x)21802186{
+5-2
samples/sockmap/Makefile
···6565# asm/sysreg.h - inline assembly used by it is incompatible with llvm.6666# But, there is no easy way to fix it, so just exclude it since it is6767# useless for BPF samples.6868+#6969+# -target bpf option required with SK_MSG programs, this is to ensure7070+# reading 'void *' data types for data and data_end are __u64 reads.6871$(obj)/%.o: $(src)/%.c6972 $(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) -I$(obj) \7073 -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \7174 -Wno-compare-distinct-pointer-types \7275 -Wno-gnu-variable-sized-type-not-at-end \7376 -Wno-address-of-packed-member -Wno-tautological-compare \7474- -Wno-unknown-warning-option \7575- -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@7777+ -Wno-unknown-warning-option -O2 -target bpf \7878+ -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+1-1
scripts/Makefile.gcc-plugins
···1414 endif15151616 ifdef CONFIG_GCC_PLUGIN_SANCOV1717- ifeq ($(CFLAGS_KCOV),)1717+ ifeq ($(strip $(CFLAGS_KCOV)),)1818 # It is needed because of the gcc-plugin.sh and gcc version checks.1919 gcc-plugin-$(CONFIG_GCC_PLUGIN_SANCOV) += sancov_plugin.so2020
···787787 FAIL(c, dti, node, "incorrect #size-cells for PCI bridge");788788789789 prop = get_property(node, "bus-range");790790- if (!prop) {791791- FAIL(c, dti, node, "missing bus-range for PCI bridge");790790+ if (!prop)792791 return;793793- }792792+794793 if (prop->val.len != (sizeof(cell_t) * 2)) {795794 FAIL_PROP(c, dti, node, prop, "value must be 2 cells");796795 return;
+1-1
scripts/extract_xc3028.pl
···11#!/usr/bin/env perl2233-# Copyright (c) Mauro Carvalho Chehab <mchehab@infradead.org>33+# Copyright (c) Mauro Carvalho Chehab <mchehab@kernel.org>44# Released under GPLv255#66# In order to use, you need to:
···1414# so that 'bison: not found' will be displayed if it is missing.1515ifeq ($(findstring 1,$(KBUILD_ENABLE_EXTRA_GCC_CHECKS)),)16161717-quiet_cmd_bison_no_warn = $(quet_cmd_bison)1717+quiet_cmd_bison_no_warn = $(quiet_cmd_bison)1818 cmd_bison_no_warn = $(YACC) --version >/dev/null; \1919 $(cmd_bison) 2>/dev/null20202121$(obj)/parse.tab.c: $(src)/parse.y FORCE2222 $(call if_changed,bison_no_warn)23232424-quiet_cmd_bison_h_no_warn = $(quet_cmd_bison_h)2424+quiet_cmd_bison_h_no_warn = $(quiet_cmd_bison_h)2525 cmd_bison_h_no_warn = $(YACC) --version >/dev/null; \2626 $(cmd_bison_h) 2>/dev/null2727
+1-8
scripts/mod/sumversion.c
···330330 goto out;331331 }332332333333- /* There will be a line like so:334334- deps_drivers/net/dummy.o := \335335- drivers/net/dummy.c \336336- $(wildcard include/config/net/fastroute.h) \337337- include/linux/module.h \338338-339339- Sum all files in the same dir or subdirs.340340- */333333+ /* Sum all files in the same dir or subdirs. */341334 while ((line = get_next_line(&pos, file, flen)) != NULL) {342335 char* p = line;343336
+1-1
scripts/split-man.pl
···11#!/usr/bin/perl22# SPDX-License-Identifier: GPL-2.033#44-# Author: Mauro Carvalho Chehab <mchehab@s-opensource.com>44+# Author: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>55#66# Produce manpages from kernel-doc.77# See Documentation/doc-guide/kernel-doc.rst for instructions
+2
sound/core/pcm_compat.c
···423423 return -ENOTTY;424424 if (substream->stream != dir)425425 return -EINVAL;426426+ if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN)427427+ return -EBADFD;426428427429 if ((ch = substream->runtime->channels) > 128)428430 return -EINVAL;
+2-2
sound/core/seq/seq_virmidi.c
···174174 }175175 return;176176 }177177+ spin_lock_irqsave(&substream->runtime->lock, flags);177178 if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) {178179 if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0)179179- return;180180+ goto out;180181 vmidi->event.type = SNDRV_SEQ_EVENT_NONE;181182 }182182- spin_lock_irqsave(&substream->runtime->lock, flags);183183 while (1) {184184 count = __snd_rawmidi_transmit_peek(substream, buf, sizeof(buf));185185 if (count <= 0)
···773773 u32 cycle;774774 unsigned int packets;775775776776- s->max_payload_length = amdtp_stream_get_max_payload(s);777777-778776 /*779777 * For in-stream, first packet has come.780778 * For out-stream, prepared to transmit first packet···876878 }877879878880 amdtp_stream_update(s);881881+882882+ if (s->direction == AMDTP_IN_STREAM)883883+ s->max_payload_length = amdtp_stream_get_max_payload(s);879884880885 if (s->flags & CIP_NO_HEADER)881886 s->tag = TAG_NO_CIP_HEADER;
···175175 OPT_UINTEGER('s', "nr_secs" , &p0.nr_secs, "max number of seconds to run (default: 5 secs)"),176176 OPT_UINTEGER('u', "usleep" , &p0.sleep_usecs, "usecs to sleep per loop iteration"),177177178178- OPT_BOOLEAN('R', "data_reads" , &p0.data_reads, "access the data via writes (can be mixed with -W)"),178178+ OPT_BOOLEAN('R', "data_reads" , &p0.data_reads, "access the data via reads (can be mixed with -W)"),179179 OPT_BOOLEAN('W', "data_writes" , &p0.data_writes, "access the data via writes (can be mixed with -R)"),180180 OPT_BOOLEAN('B', "data_backwards", &p0.data_backwards, "access the data backwards as well"),181181 OPT_BOOLEAN('Z', "data_zero_memset", &p0.data_zero_memset,"access the data via glibc bzero only"),
···224224 event_bpf_file225225226226event_pmu:227227-PE_NAME '/' event_config '/'227227+PE_NAME opt_event_config228228{229229 struct list_head *list, *orig_terms, *terms;230230231231- if (parse_events_copy_term_list($3, &orig_terms))231231+ if (parse_events_copy_term_list($2, &orig_terms))232232 YYABORT;233233234234 ALLOC_LIST(list);235235- if (parse_events_add_pmu(_parse_state, list, $1, $3, false)) {235235+ if (parse_events_add_pmu(_parse_state, list, $1, $2, false)) {236236 struct perf_pmu *pmu = NULL;237237 int ok = 0;238238 char *pattern;···262262 if (!ok)263263 YYABORT;264264 }265265- parse_events_terms__delete($3);265265+ parse_events_terms__delete($2);266266 parse_events_terms__delete(orig_terms);267267 $$ = list;268268}
+1
tools/power/acpi/Makefile.config
···5656# to compile vs uClibc, that can be done here as well.5757CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc-5858CROSS_COMPILE ?= $(CROSS)5959+LD = $(CC)5960HOSTCC = gcc60616162# check if compiler option is supported
+2-2
tools/testing/selftests/bpf/test_progs.c
···1108110811091109 assert(system("dd if=/dev/urandom of=/dev/zero count=4 2> /dev/null")11101110 == 0);11111111- assert(system("./urandom_read if=/dev/urandom of=/dev/zero count=4 2> /dev/null") == 0);11111111+ assert(system("./urandom_read") == 0);11121112 /* disable stack trace collection */11131113 key = 0;11141114 val = 1;···11581158 } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0);1159115911601160 CHECK(build_id_matches < 1, "build id match",11611161- "Didn't find expected build ID from the map");11611161+ "Didn't find expected build ID from the map\n");1162116211631163disable_pmu:11641164 ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
+4-4
tools/testing/selftests/lib.mk
···20202121.ONESHELL:2222define RUN_TESTS2323- @export KSFT_TAP_LEVEL=`echo 1`;2424- @test_num=`echo 0`;2525- @echo "TAP version 13";2626- @for TEST in $(1); do \2323+ @export KSFT_TAP_LEVEL=`echo 1`; \2424+ test_num=`echo 0`; \2525+ echo "TAP version 13"; \2626+ for TEST in $(1); do \2727 BASENAME_TEST=`basename $$TEST`; \2828 test_num=`echo $$test_num+1 | bc`; \2929 echo "selftests: $$BASENAME_TEST"; \
···6666 "cmdUnderTest": "$TC action add action bpf object-file _b.o index 667",6767 "expExitCode": "0",6868 "verifyCmd": "$TC action get action bpf index 667",6969- "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9]* tag 3b185187f1855c4c default-action pipe.*index 667 ref",6969+ "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9]* tag 3b185187f1855c4c( jited)? default-action pipe.*index 667 ref",7070 "matchCount": "1",7171 "teardown": [7272 "$TC action flush action bpf",···9292 "cmdUnderTest": "$TC action add action bpf object-file _c.o index 667",9393 "expExitCode": "255",9494 "verifyCmd": "$TC action get action bpf index 667",9595- "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9].*index 667 ref",9595+ "matchPattern": "action order [0-9]*: bpf _c.o:\\[action\\] id [0-9].*index 667 ref",9696 "matchCount": "0",9797 "teardown": [9898- "$TC action flush action bpf",9898+ [9999+ "$TC action flush action bpf",100100+ 0,101101+ 1,102102+ 255103103+ ],99104 "rm -f _c.o"100105 ]101106 },
+1-1
virt/kvm/arm/vgic/vgic-init.c
···423423 * We cannot rely on the vgic maintenance interrupt to be424424 * delivered synchronously. This means we can only use it to425425 * exit the VM, and we perform the handling of EOIed426426- * interrupts on the exit path (see vgic_process_maintenance).426426+ * interrupts on the exit path (see vgic_fold_lr_state).427427 */428428 return IRQ_HANDLED;429429}
+8-2
virt/kvm/arm/vgic/vgic-mmio.c
···289289 irq->vcpu->cpu != -1) /* VCPU thread is running */290290 cond_resched_lock(&irq->irq_lock);291291292292- if (irq->hw)292292+ if (irq->hw) {293293 vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu);294294- else294294+ } else {295295+ u32 model = vcpu->kvm->arch.vgic.vgic_model;296296+295297 irq->active = active;298298+ if (model == KVM_DEV_TYPE_ARM_VGIC_V2 &&299299+ active && vgic_irq_is_sgi(irq->intid))300300+ irq->active_source = requester_vcpu->vcpu_id;301301+ }296302297303 if (irq->active)298304 vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+22-16
virt/kvm/arm/vgic/vgic-v2.c
···3737 vgic_v2_write_lr(i, 0);3838}39394040-void vgic_v2_set_npie(struct kvm_vcpu *vcpu)4141-{4242- struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;4343-4444- cpuif->vgic_hcr |= GICH_HCR_NPIE;4545-}4646-4740void vgic_v2_set_underflow(struct kvm_vcpu *vcpu)4841{4942 struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;···6471 int lr;6572 unsigned long flags;66736767- cpuif->vgic_hcr &= ~(GICH_HCR_UIE | GICH_HCR_NPIE);7474+ cpuif->vgic_hcr &= ~GICH_HCR_UIE;68756976 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {7077 u32 val = cpuif->vgic_lr[lr];7171- u32 intid = val & GICH_LR_VIRTUALID;7878+ u32 cpuid, intid = val & GICH_LR_VIRTUALID;7279 struct vgic_irq *irq;8080+8181+ /* Extract the source vCPU id from the LR */8282+ cpuid = val & GICH_LR_PHYSID_CPUID;8383+ cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;8484+ cpuid &= 7;73857486 /* Notify fds when the guest EOI'ed a level-triggered SPI */7587 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid))···8890 /* Always preserve the active bit */8991 irq->active = !!(val & GICH_LR_ACTIVE_BIT);90929393+ if (irq->active && vgic_irq_is_sgi(intid))9494+ irq->active_source = cpuid;9595+9196 /* Edge is the only case where we preserve the pending bit */9297 if (irq->config == VGIC_CONFIG_EDGE &&9398 (val & GICH_LR_PENDING_BIT)) {9499 irq->pending_latch = true;951009696- if (vgic_irq_is_sgi(intid)) {9797- u32 cpuid = val & GICH_LR_PHYSID_CPUID;9898-9999- cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;101101+ if (vgic_irq_is_sgi(intid))100102 irq->source |= (1 << cpuid);101101- }102103 }103104104105 /*···149152 u32 val = irq->intid;150153 bool allow_pending = true;151154152152- if (irq->active)155155+ if (irq->active) {153156 val |= GICH_LR_ACTIVE_BIT;157157+ if (vgic_irq_is_sgi(irq->intid))158158+ val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT;159159+ if (vgic_irq_is_multi_sgi(irq)) {160160+ allow_pending = false;161161+ val |= GICH_LR_EOI;162162+ }163163+ }154164155165 if (irq->hw) {156166 val |= GICH_LR_HW;···194190 BUG_ON(!src);195191 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT;196192 irq->source &= ~(1 << (src - 1));197197- if (irq->source)193193+ if (irq->source) {198194 irq->pending_latch = true;195195+ val |= GICH_LR_EOI;196196+ }199197 }200198 }201199
+29-20
virt/kvm/arm/vgic/vgic-v3.c
···2727static bool common_trap;2828static bool gicv4_enable;29293030-void vgic_v3_set_npie(struct kvm_vcpu *vcpu)3131-{3232- struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;3333-3434- cpuif->vgic_hcr |= ICH_HCR_NPIE;3535-}3636-3730void vgic_v3_set_underflow(struct kvm_vcpu *vcpu)3831{3932 struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;···4855 int lr;4956 unsigned long flags;50575151- cpuif->vgic_hcr &= ~(ICH_HCR_UIE | ICH_HCR_NPIE);5858+ cpuif->vgic_hcr &= ~ICH_HCR_UIE;52595360 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {5461 u64 val = cpuif->vgic_lr[lr];5555- u32 intid;6262+ u32 intid, cpuid;5663 struct vgic_irq *irq;6464+ bool is_v2_sgi = false;57655858- if (model == KVM_DEV_TYPE_ARM_VGIC_V3)6666+ cpuid = val & GICH_LR_PHYSID_CPUID;6767+ cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;6868+6969+ if (model == KVM_DEV_TYPE_ARM_VGIC_V3) {5970 intid = val & ICH_LR_VIRTUAL_ID_MASK;6060- else7171+ } else {6172 intid = val & GICH_LR_VIRTUALID;7373+ is_v2_sgi = vgic_irq_is_sgi(intid);7474+ }62756376 /* Notify fds when the guest EOI'ed a level-triggered IRQ */6477 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid))···8081 /* Always preserve the active bit */8182 irq->active = !!(val & ICH_LR_ACTIVE_BIT);82838484+ if (irq->active && is_v2_sgi)8585+ irq->active_source = cpuid;8686+8387 /* Edge is the only case where we preserve the pending bit */8488 if (irq->config == VGIC_CONFIG_EDGE &&8589 (val & ICH_LR_PENDING_BIT)) {8690 irq->pending_latch = true;87918888- if (vgic_irq_is_sgi(intid) &&8989- model == KVM_DEV_TYPE_ARM_VGIC_V2) {9090- u32 cpuid = val & GICH_LR_PHYSID_CPUID;9191-9292- cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;9292+ if (is_v2_sgi)9393 irq->source |= (1 << cpuid);9494- }9594 }96959796 /*···130133{131134 u32 model = vcpu->kvm->arch.vgic.vgic_model;132135 u64 val = irq->intid;133133- bool allow_pending = true;136136+ bool allow_pending = true, is_v2_sgi;134137135135- if (irq->active)138138+ is_v2_sgi = (vgic_irq_is_sgi(irq->intid) &&139139+ model == KVM_DEV_TYPE_ARM_VGIC_V2);140140+141141+ if (irq->active) {136142 val |= ICH_LR_ACTIVE_BIT;143143+ if (is_v2_sgi)144144+ val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT;145145+ if (vgic_irq_is_multi_sgi(irq)) {146146+ allow_pending = false;147147+ val |= ICH_LR_EOI;148148+ }149149+ }137150138151 if (irq->hw) {139152 val |= ICH_LR_HW;···181174 BUG_ON(!src);182175 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT;183176 irq->source &= ~(1 << (src - 1));184184- if (irq->source)177177+ if (irq->source) {185178 irq->pending_latch = true;179179+ val |= ICH_LR_EOI;180180+ }186181 }187182 }188183
+7-23
virt/kvm/arm/vgic/vgic.c
···725725 vgic_v3_set_underflow(vcpu);726726}727727728728-static inline void vgic_set_npie(struct kvm_vcpu *vcpu)729729-{730730- if (kvm_vgic_global_state.type == VGIC_V2)731731- vgic_v2_set_npie(vcpu);732732- else733733- vgic_v3_set_npie(vcpu);734734-}735735-736728/* Requires the ap_list_lock to be held. */737729static int compute_ap_list_depth(struct kvm_vcpu *vcpu,738730 bool *multi_sgi)···738746 DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));739747740748 list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {749749+ int w;750750+741751 spin_lock(&irq->irq_lock);742752 /* GICv2 SGIs can count for more than one... */743743- if (vgic_irq_is_sgi(irq->intid) && irq->source) {744744- int w = hweight8(irq->source);745745-746746- count += w;747747- *multi_sgi |= (w > 1);748748- } else {749749- count++;750750- }753753+ w = vgic_irq_get_lr_count(irq);751754 spin_unlock(&irq->irq_lock);755755+756756+ count += w;757757+ *multi_sgi |= (w > 1);752758 }753759 return count;754760}···757767 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;758768 struct vgic_irq *irq;759769 int count;760760- bool npie = false;761770 bool multi_sgi;762771 u8 prio = 0xff;763772···786797 if (likely(vgic_target_oracle(irq) == vcpu)) {787798 vgic_populate_lr(vcpu, irq, count++);788799789789- if (irq->source) {790790- npie = true;800800+ if (irq->source)791801 prio = irq->priority;792792- }793802 }794803795804 spin_unlock(&irq->irq_lock);···799812 break;800813 }801814 }802802-803803- if (npie)804804- vgic_set_npie(vcpu);805815806816 vcpu->arch.vgic_cpu.used_lrs = count;807817
+14
virt/kvm/arm/vgic/vgic.h
···110110 return irq->config == VGIC_CONFIG_LEVEL && irq->hw;111111}112112113113+static inline int vgic_irq_get_lr_count(struct vgic_irq *irq)114114+{115115+ /* Account for the active state as an interrupt */116116+ if (vgic_irq_is_sgi(irq->intid) && irq->source)117117+ return hweight8(irq->source) + irq->active;118118+119119+ return irq_is_pending(irq) || irq->active;120120+}121121+122122+static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq)123123+{124124+ return vgic_irq_get_lr_count(irq) > 1;125125+}126126+113127/*114128 * This struct provides an intermediate representation of the fields contained115129 * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC