···
 For more information on this wrapper (also called kunit_tool) checkout the
 :doc:`kunit-tool` page.
 
-Creating a kunitconfig
-======================
+Creating a .kunitconfig
+=======================
 The Python script is a thin wrapper around Kbuild. As such, it needs to be
-configured with a ``kunitconfig`` file. This file essentially contains the
+configured with a ``.kunitconfig`` file. This file essentially contains the
 regular Kernel config, with the specific test targets as well.
 
 .. code-block:: bash
 
-	git clone -b master https://kunit.googlesource.com/kunitconfig $PATH_TO_KUNITCONFIG_REPO
 	cd $PATH_TO_LINUX_REPO
-	ln -s $PATH_TO_KUNIT_CONFIG_REPO/kunitconfig kunitconfig
-
-You may want to add kunitconfig to your local gitignore.
+	cp arch/um/configs/kunit_defconfig .kunitconfig
 
 Verifying KUnit Works
 ---------------------
···
 	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
 
-Now add it to your ``kunitconfig``:
+Now add it to your ``.kunitconfig``:
 
 .. code-block:: none
···
 - dma-names: should contain "tx" and "rx".
 - atmel,fifo-size: maximum number of data the RX and TX FIFOs can store for FIFO
   capable I2C controllers.
-- i2c-sda-hold-time-ns: TWD hold time, only available for "atmel,sama5d4-i2c"
-  and "atmel,sama5d2-i2c".
+- i2c-sda-hold-time-ns: TWD hold time, only available for:
+	"atmel,sama5d4-i2c",
+	"atmel,sama5d2-i2c",
+	"microchip,sam9x60-i2c".
 - Child nodes conforming to i2c bus binding
 
 Examples :
···
     |    openrisc: | TODO |
     |      parisc: | TODO |
     |     powerpc: |  ok  |
-    |       riscv: | TODO |
+    |       riscv: |  ok  |
     |        s390: |  ok  |
     |          sh: |  ok  |
     |       sparc: | TODO |
Documentation/networking/dsa/sja1105.rst (-6)
···
 against this restriction and errors out when appropriate. Schedule analysis is
 needed to avoid this, which is outside the scope of the document.
 
-At the moment, the time-aware scheduler can only be triggered based on a
-standalone clock and not based on PTP time. This means the base-time argument
-from tc-taprio is ignored and the schedule starts right away. It also means it
-is more difficult to phase-align the scheduler with the other devices in the
-network.
-
 Device Tree bindings and board design
 =====================================
Documentation/networking/ip-sysctl.txt (+1 -1)
···
 	with the current initial RTO of 1second. With this the final timeout
 	for a passive TCP connection will happen after 63seconds.
 
-tcp_syncookies - BOOLEAN
+tcp_syncookies - INTEGER
 	Only valid when the kernel was compiled with CONFIG_SYN_COOKIES
 	Send out syncookies when the syn backlog queue of a socket
 	overflows. This is to prevent against the common 'SYN flood attack'
Documentation/networking/netdev-FAQ.rst (+2 -2)
···
 mainline tree from Linus, and ``net-next`` is where the new code goes
 for the future release. You can find the trees here:
 
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git
-- https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
+- https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
 
 Q: How often do changes from these trees make it to the mainline Linus tree?
 ----------------------------------------------------------------------------
Documentation/process/index.rst (+1)
···
    volatile-considered-harmful
    botching-up-ioctls
    clang-format
+   ../riscv/patch-acceptance
 
 .. only:: subproject and html
Documentation/riscv/index.rst (+1)
···
     boot-image-header
     pmu
+    patch-acceptance
 
 .. only:: subproject and html
Documentation/riscv/patch-acceptance.rst (+35)
···
+.. SPDX-License-Identifier: GPL-2.0
+
+arch/riscv maintenance guidelines for developers
+================================================
+
+Overview
+--------
+The RISC-V instruction set architecture is developed in the open:
+in-progress drafts are available for all to review and to experiment
+with implementations. New module or extension drafts can change
+during the development process - sometimes in ways that are
+incompatible with previous drafts. This flexibility can present a
+challenge for RISC-V Linux maintenance. Linux maintainers disapprove
+of churn, and the Linux development process prefers well-reviewed and
+tested code over experimental code. We wish to extend these same
+principles to the RISC-V-related code that will be accepted for
+inclusion in the kernel.
+
+Submit Checklist Addendum
+-------------------------
+We'll only accept patches for new modules or extensions if the
+specifications for those modules or extensions are listed as being
+"Frozen" or "Ratified" by the RISC-V Foundation. (Developers may, of
+course, maintain their own Linux kernel trees that contain code for
+any draft extensions that they wish.)
+
+Additionally, the RISC-V specification allows implementors to create
+their own custom extensions. These custom extensions aren't required
+to go through any review or ratification process by the RISC-V
+Foundation. To avoid the maintenance complexity and potential
+performance impact of adding kernel code for implementor-specific
+RISC-V extensions, we'll only accept patches for extensions that
+have been officially frozen or ratified by the RISC-V Foundation.
+(Implementors may, of course, maintain their own Linux kernel trees
+containing code for any custom extensions that they wish.)
···
 #define pte_dirty(pte)		(pte_sw_dirty(pte) || pte_hw_dirty(pte))
 
 #define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
-/*
- * Execute-only user mappings do not have the PTE_USER bit set. All valid
- * kernel mappings have the PTE_UXN bit set.
- */
 #define pte_valid_not_user(pte) \
-	((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
+	((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID)
 #define pte_valid_young(pte) \
 	((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
 #define pte_valid_user(pte) \
···
 
 /*
  * p??_access_permitted() is true for valid user mappings (subject to the
- * write permission check) other than user execute-only which do not have the
- * PTE_USER bit set. PROT_NONE mappings do not have the PTE_VALID bit set.
+ * write permission check). PROT_NONE mappings do not have the PTE_VALID bit
+ * set.
  */
 #define pte_access_permitted(pte, write) \
 	(pte_valid_user(pte) && (!(write) || pte_write(pte)))
···
 
 asmlinkage void ret_from_fork(void) asm("ret_from_fork");
 
-int copy_thread(unsigned long clone_flags, unsigned long stack_start,
-		unsigned long stk_sz, struct task_struct *p)
+int copy_thread_tls(unsigned long clone_flags, unsigned long stack_start,
+		unsigned long stk_sz, struct task_struct *p, unsigned long tls)
 {
 	struct pt_regs *childregs = task_pt_regs(p);
···
 		}
 
 		/*
-		 * If a TLS pointer was passed to clone (4th argument), use it
-		 * for the new thread.
+		 * If a TLS pointer was passed to clone, use it for the new
+		 * thread.
 		 */
 		if (clone_flags & CLONE_SETTLS)
-			p->thread.uw.tp_value = childregs->regs[3];
+			p->thread.uw.tp_value = tls;
 	} else {
 		memset(childregs, 0, sizeof(struct pt_regs));
 		childregs->pstate = PSR_MODE_EL1h;
arch/arm64/mm/fault.c (+1 -1)
···
 	const struct fault_info *inf;
 	struct mm_struct *mm = current->mm;
 	vm_fault_t fault, major = 0;
-	unsigned long vm_flags = VM_READ | VM_WRITE;
+	unsigned long vm_flags = VM_READ | VM_WRITE | VM_EXEC;
 	unsigned int mm_flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
 
 	if (kprobe_page_fault(regs, esr))
arch/arm64/mm/mmu.c (+1 -3)
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
 	/*
 	 * FIXME: Cleanup page tables (also in arch_add_memory() in case
···
 	 * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be
 	 * unlocked yet.
 	 */
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
arch/mips/Kconfig (+1 -1)
···
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_EBPF_JIT if (!CPU_MICROMIPS)
+	select HAVE_EBPF_JIT if 64BIT && !CPU_MICROMIPS && TARGET_ISA_REV >= 2
 	select HAVE_CONTEXT_TRACKING
 	select HAVE_COPY_THREAD_TLS
 	select HAVE_C_RECORDMCOUNT
arch/mips/boot/compressed/Makefile (+3)
···
 	-DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) \
 	-DKERNEL_ENTRY=$(VMLINUX_ENTRY_ADDRESS)
 
+# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
+KCOV_INSTRUMENT := n
+
 # decompressor objects (linked with vmlinuz)
 vmlinuzobjs-y := $(obj)/head.o $(obj)/decompress.o $(obj)/string.o
arch/mips/include/asm/cpu-type.h (+2 -1)
···
 static inline int __pure __get_cpu_type(const int cpu_type)
 {
 	switch (cpu_type) {
-#if defined(CONFIG_SYS_HAS_CPU_LOONGSON2EF)
+#if defined(CONFIG_SYS_HAS_CPU_LOONGSON2E) || \
+    defined(CONFIG_SYS_HAS_CPU_LOONGSON2F)
 	case CPU_LOONGSON2EF:
 #endif
arch/mips/include/asm/thread_info.h (+19 -1)
···
 	.addr_limit	= KERNEL_DS,		\
 }
 
-/* How to get the thread information struct from C. */
+/*
+ * A pointer to the struct thread_info for the currently executing thread is
+ * held in register $28/$gp.
+ *
+ * We declare __current_thread_info as a global register variable rather than a
+ * local register variable within current_thread_info() because clang doesn't
+ * support explicit local register variables.
+ *
+ * When building the VDSO we take care not to declare the global register
+ * variable because this causes GCC to not preserve the value of $28/$gp in
+ * functions that change its value (which is common in the PIC VDSO when
+ * accessing the GOT). Since the VDSO shouldn't be accessing
+ * __current_thread_info anyway we declare it extern in order to cause a link
+ * failure if it's referenced.
+ */
+#ifdef __VDSO__
+extern struct thread_info *__current_thread_info;
+#else
 register struct thread_info *__current_thread_info __asm__("$28");
+#endif
 
 static inline struct thread_info *current_thread_info(void)
 {
···
 	return 0;
 }
 
+static void fill_cpumask_siblings(int cpu, cpumask_t *cpu_map)
+{
+	int cpu1;
+
+	for_each_possible_cpu(cpu1)
+		if (cpus_are_siblings(cpu, cpu1))
+			cpumask_set_cpu(cpu1, cpu_map);
+}
+
+static void fill_cpumask_cluster(int cpu, cpumask_t *cpu_map)
+{
+	int cpu1;
+	int cluster = cpu_cluster(&cpu_data[cpu]);
+
+	for_each_possible_cpu(cpu1)
+		if (cpu_cluster(&cpu_data[cpu1]) == cluster)
+			cpumask_set_cpu(cpu1, cpu_map);
+}
+
 static int __populate_cache_leaves(unsigned int cpu)
 {
 	struct cpuinfo_mips *c = &current_cpu_data;
···
 	struct cacheinfo *this_leaf = this_cpu_ci->info_list;
 
 	if (c->icache.waysize) {
+		/* L1 caches are per core */
+		fill_cpumask_siblings(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(dcache, this_leaf, 1, CACHE_TYPE_DATA);
+		fill_cpumask_siblings(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(icache, this_leaf, 1, CACHE_TYPE_INST);
 	} else {
 		populate_cache(dcache, this_leaf, 1, CACHE_TYPE_UNIFIED);
 	}
 
-	if (c->scache.waysize)
+	if (c->scache.waysize) {
+		/* L2 cache is per cluster */
+		fill_cpumask_cluster(cpu, &this_leaf->shared_cpu_map);
 		populate_cache(scache, this_leaf, 2, CACHE_TYPE_UNIFIED);
+	}
 
 	if (c->tcache.waysize)
 		populate_cache(tcache, this_leaf, 3, CACHE_TYPE_UNIFIED);
arch/mips/net/ebpf_jit.c (+1 -1)
···
 	unsigned int image_size;
 	u8 *image_ptr;
 
-	if (!prog->jit_requested || MIPS_ISA_REV < 2)
+	if (!prog->jit_requested)
 		return prog;
 
 	tmp = bpf_jit_blind_constants(prog);
arch/mips/vdso/vgettimeofday.c (+20)
···
 	return __cvdso_clock_gettime32(clock, ts);
 }
 
+#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
+
+/*
+ * This is behind the ifdef so that we don't provide the symbol when there's no
+ * possibility of there being a usable clocksource, because there's nothing we
+ * can do without it. When libc fails the symbol lookup it should fall back on
+ * the standard syscall path.
+ */
 int __vdso_gettimeofday(struct __kernel_old_timeval *tv,
 			struct timezone *tz)
 {
 	return __cvdso_gettimeofday(tv, tz);
 }
+
+#endif /* CONFIG_MIPS_CLOCK_VSYSCALL */
 
 int __vdso_clock_getres(clockid_t clock_id,
 			struct old_timespec32 *res)
···
 	return __cvdso_clock_gettime(clock, ts);
 }
 
+#ifdef CONFIG_MIPS_CLOCK_VSYSCALL
+
+/*
+ * This is behind the ifdef so that we don't provide the symbol when there's no
+ * possibility of there being a usable clocksource, because there's nothing we
+ * can do without it. When libc fails the symbol lookup it should fall back on
+ * the standard syscall path.
+ */
 int __vdso_gettimeofday(struct __kernel_old_timeval *tv,
 			struct timezone *tz)
 {
 	return __cvdso_gettimeofday(tv, tz);
 }
+
+#endif /* CONFIG_MIPS_CLOCK_VSYSCALL */
 
 int __vdso_clock_getres(clockid_t clock_id,
 			struct __kernel_timespec *res)
arch/parisc/Kconfig (+1)
···
 	select HAVE_FTRACE_MCOUNT_RECORD if HAVE_DYNAMIC_FTRACE
 	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_COPY_THREAD_TLS
 
 	help
 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
arch/parisc/kernel/process.c (+4 -4)
···
  * Copy architecture-specific thread state
  */
 int
-copy_thread(unsigned long clone_flags, unsigned long usp,
-	    unsigned long kthread_arg, struct task_struct *p)
+copy_thread_tls(unsigned long clone_flags, unsigned long usp,
+	    unsigned long kthread_arg, struct task_struct *p, unsigned long tls)
 {
 	struct pt_regs *cregs = &(p->thread.regs);
 	void *stack = task_stack_page(p);
···
 		cregs->ksp = (unsigned long)stack + THREAD_SZ_ALGN + FRAME_SIZE;
 		cregs->kpc = (unsigned long) &child_return;
 
-		/* Setup thread TLS area from the 4th parameter in clone */
+		/* Setup thread TLS area */
 		if (clone_flags & CLONE_SETTLS)
-			cregs->cr27 = cregs->gr[23];
+			cregs->cr27 = tls;
 	}
 
 	return 0;
arch/powerpc/include/asm/spinlock.h (+1)
···
  *
  * (the type definitions are in asm/spinlock_types.h)
  */
+#include <linux/jump_label.h>
 #include <linux/irqflags.h>
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
arch/powerpc/mm/mem.c (+1 -2)
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
 	int ret;
 
-	__remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 
 	/* Remove htab bolted mappings for this section of memory */
 	start = (unsigned long)__va(start);
arch/powerpc/mm/slice.c (+2 -2)
···
 
 #endif
 
-static inline bool slice_addr_is_low(unsigned long addr)
+static inline notrace bool slice_addr_is_low(unsigned long addr)
 {
 	u64 tmp = (u64)addr;
 
···
 				 mm_ctx_user_psize(&current->mm->context), 1);
 }
 
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
 {
 	unsigned char *psizes;
 	int index, mask_index;
arch/riscv/Kconfig (+2)
···
 	select SPARSEMEM_STATIC if 32BIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
+	select ARCH_HAS_GCOV_PROFILE_ALL
+	select HAVE_COPY_THREAD_TLS
 
 config ARCH_MMAP_RND_BITS_MIN
 	default 18 if 64BIT
···
 
 	irq_enter();
 	switch (regs->cause & ~CAUSE_IRQ_FLAG) {
-	case IRQ_TIMER:
+	case RV_IRQ_TIMER:
 		riscv_timer_interrupt();
 		break;
 #ifdef CONFIG_SMP
-	case IRQ_SOFT:
+	case RV_IRQ_SOFT:
 		/*
 		 * We only use software interrupts to pass IPIs, so if a non-SMP
 		 * system gets one, then we don't know what to do.
···
 		riscv_software_interrupt();
 		break;
 #endif
-	case IRQ_EXT:
+	case RV_IRQ_EXT:
 		handle_arch_irq(regs);
 		break;
 	default:
arch/riscv/kernel/process.c (+3 -3)
···
 	return 0;
 }
 
-int copy_thread(unsigned long clone_flags, unsigned long usp,
-	unsigned long arg, struct task_struct *p)
+int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
+	unsigned long arg, struct task_struct *p, unsigned long tls)
 {
 	struct pt_regs *childregs = task_pt_regs(p);
 
···
 		if (usp) /* User fork */
 			childregs->sp = usp;
 		if (clone_flags & CLONE_SETTLS)
-			childregs->tp = childregs->a5;
+			childregs->tp = tls;
 		childregs->a0 = 0; /* Return value of fork() */
 		p->thread.ra = (unsigned long)ret_from_fork;
 	}
arch/riscv/kernel/riscv_ksyms.c (-3)
···
 /*
  * Assembly functions that may be used (directly or indirectly) by modules
  */
-EXPORT_SYMBOL(__clear_user);
-EXPORT_SYMBOL(__asm_copy_to_user);
-EXPORT_SYMBOL(__asm_copy_from_user);
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);
···
 	else
 		on_each_cpu(ipi_remote_fence_i, NULL, 1);
 }
+EXPORT_SYMBOL(flush_icache_all);
 
 /*
  * Performs an icache flush for the given MM context. RISC-V has no direct
arch/riscv/mm/init.c (+6 -6)
···
 		pr_info("initrd not found or empty");
 		goto disable;
 	}
-	if (__pa(initrd_end) > PFN_PHYS(max_low_pfn)) {
+	if (__pa_symbol(initrd_end) > PFN_PHYS(max_low_pfn)) {
 		pr_err("initrd extends beyond end of memory");
 		goto disable;
 	}
 
 	size = initrd_end - initrd_start;
-	memblock_reserve(__pa(initrd_start), size);
+	memblock_reserve(__pa_symbol(initrd_start), size);
 	initrd_below_start_ok = 1;
 
 	pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
···
 {
 	struct memblock_region *reg;
 	phys_addr_t mem_size = 0;
-	phys_addr_t vmlinux_end = __pa(&_end);
-	phys_addr_t vmlinux_start = __pa(&_start);
+	phys_addr_t vmlinux_end = __pa_symbol(&_end);
+	phys_addr_t vmlinux_start = __pa_symbol(&_start);
 
 	/* Find the memory region containing the kernel */
 	for_each_memblock(memory, reg) {
···
 
 	/* Setup swapper PGD for fixmap */
 	create_pgd_mapping(swapper_pg_dir, FIXADDR_START,
-			   __pa(fixmap_pgd_next),
+			   __pa_symbol(fixmap_pgd_next),
 			   PGDIR_SIZE, PAGE_TABLE);
 
 	/* Map all memory banks */
···
 	clear_fixmap(FIX_PMD);
 
 	/* Move to swapper page table */
-	csr_write(CSR_SATP, PFN_DOWN(__pa(swapper_pg_dir)) | SATP_MODE);
+	csr_write(CSR_SATP, PFN_DOWN(__pa_symbol(swapper_pg_dir)) | SATP_MODE);
 	local_flush_tlb_all();
 }
 #else
arch/s390/mm/init.c (+1 -3)
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 	vmem_remove_mapping(start, size);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
arch/sh/mm/init.c (+1 -3)
···
 {
 	unsigned long start_pfn = PFN_DOWN(start);
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
···
 extern unsigned long getreg(struct task_struct *child, int regno);
 extern int putreg(struct task_struct *child, int regno, unsigned long value);
 
-extern int arch_copy_tls(struct task_struct *new);
+extern int arch_set_tls(struct task_struct *new, unsigned long tls);
 extern void clear_flushed_tls(struct task_struct *task);
 extern int syscall_trace_enter(struct pt_regs *regs);
 extern void syscall_trace_leave(struct pt_regs *regs);
arch/um/kernel/process.c (+3 -3)
···
 	userspace(&current->thread.regs.regs, current_thread_info()->aux_fp_regs);
 }
 
-int copy_thread(unsigned long clone_flags, unsigned long sp,
-		unsigned long arg, struct task_struct * p)
+int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+		unsigned long arg, struct task_struct * p, unsigned long tls)
 {
 	void (*handler)(void);
 	int kthread = current->flags & PF_KTHREAD;
···
 		 * Set a new TLS for the child thread?
 		 */
 		if (clone_flags & CLONE_SETTLS)
-			ret = arch_copy_tls(p);
+			ret = arch_set_tls(p, tls);
 	}
 
 	return ret;
arch/x86/mm/init_32.c (+1 -3)
···
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct zone *zone;
 
-	zone = page_zone(pfn_to_page(start_pfn));
-	__remove_pages(zone, start_pfn, nr_pages, altmap);
+	__remove_pages(start_pfn, nr_pages, altmap);
 }
 #endif
···
 	return 0;
 }
 
-int arch_copy_tls(struct task_struct *new)
+int arch_set_tls(struct task_struct *new, unsigned long tls)
 {
 	struct user_desc info;
 	int idx, ret = -EFAULT;
 
-	if (copy_from_user(&info,
-			   (void __user *) UPT_SI(&new->thread.regs.regs),
-			   sizeof(info)))
+	if (copy_from_user(&info, (void __user *) tls, sizeof(info)))
 		goto out;
 
 	ret = -EINVAL;
arch/x86/um/tls_64.c (+3 -4)
···
 {
 }
 
-int arch_copy_tls(struct task_struct *t)
+int arch_set_tls(struct task_struct *t, unsigned long tls)
 {
 	/*
 	 * If CLONE_SETTLS is set, we need to save the thread id
-	 * (which is argument 5, child_tid, of clone) so it can be set
-	 * during context switches.
+	 * so it can be set during context switches.
 	 */
-	t->thread.arch.fs = t->thread.regs.regs.gp[R8 / sizeof(long)];
+	t->thread.arch.fs = tls;
 
 	return 0;
 }
···
  * involved. Much simpler to just not copy those live frames across.
  */
 
-int copy_thread(unsigned long clone_flags, unsigned long usp_thread_fn,
-		unsigned long thread_fn_arg, struct task_struct *p)
+int copy_thread_tls(unsigned long clone_flags, unsigned long usp_thread_fn,
+		unsigned long thread_fn_arg, struct task_struct *p,
+		unsigned long tls)
 {
 	struct pt_regs *childregs = task_pt_regs(p);
 
···
 
 		childregs->syscall = regs->syscall;
 
-		/* The thread pointer is passed in the '4th argument' (= a5) */
 		if (clone_flags & CLONE_SETTLS)
-			childregs->threadptr = childregs->areg[5];
+			childregs->threadptr = tls;
 	} else {
 		p->thread.ra = MAKE_RA_FOR_CALL(
 				(unsigned long)ret_from_kernel_thread, 1);
block/bio.c (+49)
···
 EXPORT_SYMBOL(zero_fill_bio_iter);
 
 /**
+ * bio_truncate - truncate the bio to small size of @new_size
+ * @bio:	the bio to be truncated
+ * @new_size:	new size for truncating the bio
+ *
+ * Description:
+ *   Truncate the bio to new size of @new_size. If bio_op(bio) is
+ *   REQ_OP_READ, zero the truncated part. This function should only
+ *   be used for handling corner cases, such as bio eod.
+ */
+void bio_truncate(struct bio *bio, unsigned new_size)
+{
+	struct bio_vec bv;
+	struct bvec_iter iter;
+	unsigned int done = 0;
+	bool truncated = false;
+
+	if (new_size >= bio->bi_iter.bi_size)
+		return;
+
+	if (bio_op(bio) != REQ_OP_READ)
+		goto exit;
+
+	bio_for_each_segment(bv, bio, iter) {
+		if (done + bv.bv_len > new_size) {
+			unsigned offset;
+
+			if (!truncated)
+				offset = new_size - done;
+			else
+				offset = 0;
+			zero_user(bv.bv_page, offset, bv.bv_len - offset);
+			truncated = true;
+		}
+		done += bv.bv_len;
+	}
+
+ exit:
+	/*
+	 * Don't touch bvec table here and make it really immutable, since
+	 * fs bio user has to retrieve all pages via bio_for_each_segment_all
+	 * in its .end_bio() callback.
+	 *
+	 * It is enough to truncate bio by updating .bi_size since we can make
+	 * correct bvec with the updated .bi_size for drivers.
+	 */
+	bio->bi_iter.bi_size = new_size;
+}
+
+/**
  * bio_put - release a reference to a bio
  * @bio:	bio to release reference to
  *
···
 #include <linux/compat.h>
 #include <linux/elevator.h>
 #include <linux/hdreg.h>
+#include <linux/pr.h>
 #include <linux/slab.h>
 #include <linux/syscalls.h>
 #include <linux/types.h>
···
 	 * but we call blkdev_ioctl, which gets the lock for us
 	 */
 	case BLKRRPART:
+	case BLKREPORTZONE:
+	case BLKRESETZONE:
+	case BLKOPENZONE:
+	case BLKCLOSEZONE:
+	case BLKFINISHZONE:
+	case BLKGETZONESZ:
+	case BLKGETNRZONES:
 		return blkdev_ioctl(bdev, mode, cmd,
 				(unsigned long)compat_ptr(arg));
 	case BLKBSZSET_32:
···
 	case BLKTRACETEARDOWN: /* compatible */
 		ret = blk_trace_ioctl(bdev, cmd, compat_ptr(arg));
 		return ret;
+	case IOC_PR_REGISTER:
+	case IOC_PR_RESERVE:
+	case IOC_PR_RELEASE:
+	case IOC_PR_PREEMPT:
+	case IOC_PR_PREEMPT_ABORT:
+	case IOC_PR_CLEAR:
+		return blkdev_ioctl(bdev, mode, cmd,
+				(unsigned long)compat_ptr(arg));
 	default:
 		if (disk->fops->compat_ioctl)
 			ret = disk->fops->compat_ioctl(bdev, mode, cmd, arg);
drivers/ata/ahci_brcm.c (+95 -40)
···
 };
 
 enum brcm_ahci_quirks {
-	BRCM_AHCI_QUIRK_NO_NCQ		= BIT(0),
-	BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE	= BIT(1),
+	BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE	= BIT(0),
 };
 
 struct brcm_ahci_priv {
···
 		brcm_sata_phy_disable(priv, i);
 }
 
-static u32 brcm_ahci_get_portmask(struct platform_device *pdev,
+static u32 brcm_ahci_get_portmask(struct ahci_host_priv *hpriv,
 				  struct brcm_ahci_priv *priv)
 {
-	void __iomem *ahci;
-	struct resource *res;
 	u32 impl;
 
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ahci");
-	ahci = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(ahci))
-		return 0;
-
-	impl = readl(ahci + HOST_PORTS_IMPL);
+	impl = readl(hpriv->mmio + HOST_PORTS_IMPL);
 
 	if (fls(impl) > SATA_TOP_MAX_PHYS)
 		dev_warn(priv->dev, "warning: more ports than PHYs (%#x)\n",
 			 impl);
 	else if (!impl)
 		dev_info(priv->dev, "no ports found\n");
-
-	devm_iounmap(&pdev->dev, ahci);
-	devm_release_mem_region(&pdev->dev, res->start, resource_size(res));
 
 	return impl;
 }
···
 	/* Perform the SATA PHY reset sequence */
 	brcm_sata_phy_disable(priv, ap->port_no);
+
+	/* Reset the SATA clock */
+	ahci_platform_disable_clks(hpriv);
+	msleep(10);
+
+	ahci_platform_enable_clks(hpriv);
+	msleep(10);
 
 	/* Bring the PHY back on */
 	brcm_sata_phy_enable(priv, ap->port_no);
···
 	struct ata_host *host = dev_get_drvdata(dev);
 	struct ahci_host_priv *hpriv = host->private_data;
 	struct brcm_ahci_priv *priv = hpriv->plat_data;
-	int ret;
 
-	ret = ahci_platform_suspend(dev);
 	brcm_sata_phys_disable(priv);
-	return ret;
+
+	return ahci_platform_suspend(dev);
 }
 
 static int brcm_ahci_resume(struct device *dev)
···
 	struct ata_host *host = dev_get_drvdata(dev);
 	struct ahci_host_priv *hpriv = host->private_data;
 	struct brcm_ahci_priv *priv = hpriv->plat_data;
+	int ret;
+
+	/* Make sure clocks are turned on before re-configuration */
+	ret = ahci_platform_enable_clks(hpriv);
+	if (ret)
+		return ret;
 
 	brcm_sata_init(priv);
 	brcm_sata_phys_enable(priv);
 	brcm_sata_alpm_init(hpriv);
-	return ahci_platform_resume(dev);
+
+	/* Since we had to enable clocks earlier on, we cannot use
+	 * ahci_platform_resume() as-is since a second call to
+	 * ahci_platform_enable_resources() would bump up the resources
+	 * (regulators, clocks, PHYs) count artificially so we copy the part
+	 * after ahci_platform_enable_resources().
+	 */
+	ret = ahci_platform_enable_phys(hpriv);
+	if (ret)
+		goto out_disable_phys;
+
+	ret = ahci_platform_resume_host(dev);
+	if (ret)
+		goto out_disable_platform_phys;
+
+	/* We resumed so update PM runtime state */
+	pm_runtime_disable(dev);
+	pm_runtime_set_active(dev);
+	pm_runtime_enable(dev);
+
+	return 0;
+
+out_disable_platform_phys:
+	ahci_platform_disable_phys(hpriv);
+out_disable_phys:
+	brcm_sata_phys_disable(priv);
+	ahci_platform_disable_clks(hpriv);
+	return ret;
 }
 #endif
···
 	if (!IS_ERR_OR_NULL(priv->rcdev))
 		reset_control_deassert(priv->rcdev);
 
-	if ((priv->version == BRCM_SATA_BCM7425) ||
-		(priv->version == BRCM_SATA_NSP)) {
-		priv->quirks |= BRCM_AHCI_QUIRK_NO_NCQ;
-		priv->quirks |= BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE;
+	hpriv = ahci_platform_get_resources(pdev, 0);
+	if (IS_ERR(hpriv)) {
+		ret = PTR_ERR(hpriv);
+		goto out_reset;
 	}
 
+	hpriv->plat_data = priv;
+	hpriv->flags = AHCI_HFLAG_WAKE_BEFORE_STOP | AHCI_HFLAG_NO_WRITE_TO_RO;
+
+	switch (priv->version) {
+	case BRCM_SATA_BCM7425:
+		hpriv->flags |= AHCI_HFLAG_DELAY_ENGINE;
+		/* fall through */
+	case BRCM_SATA_NSP:
+		hpriv->flags |= AHCI_HFLAG_NO_NCQ;
+		priv->quirks |= BRCM_AHCI_QUIRK_SKIP_PHY_ENABLE;
+		break;
+	default:
+		break;
+	}
+
+	ret = ahci_platform_enable_clks(hpriv);
+	if (ret)
+		goto out_reset;
+
+	/* Must be first so as to configure endianness including that
+	 * of the standard AHCI register space.
+	 */
 	brcm_sata_init(priv);
 
-	priv->port_mask = brcm_ahci_get_portmask(pdev, priv);
-	if (!priv->port_mask)
-		return -ENODEV;
+	/* Initializes priv->port_mask which is used below */
+	priv->port_mask = brcm_ahci_get_portmask(hpriv, priv);
+	if (!priv->port_mask) {
+		ret = -ENODEV;
+		goto out_disable_clks;
+	}
 
+	/* Must be done before ahci_platform_enable_phys() */
 	brcm_sata_phys_enable(priv);
-
-	hpriv = ahci_platform_get_resources(pdev, 0);
-	if (IS_ERR(hpriv))
-		return PTR_ERR(hpriv);
-	hpriv->plat_data = priv;
-	hpriv->flags = AHCI_HFLAG_WAKE_BEFORE_STOP;
 
 	brcm_sata_alpm_init(hpriv);
 
-	ret = ahci_platform_enable_resources(hpriv);
+	ret = ahci_platform_enable_phys(hpriv);
 	if (ret)
-		return ret;
-
-	if (priv->quirks & BRCM_AHCI_QUIRK_NO_NCQ)
-		hpriv->flags |= AHCI_HFLAG_NO_NCQ;
-	hpriv->flags |= AHCI_HFLAG_NO_WRITE_TO_RO;
+		goto out_disable_phys;
 
 	ret = ahci_platform_init_host(pdev, hpriv, &ahci_brcm_port_info,
 				      &ahci_platform_sht);
 	if (ret)
-		return ret;
+		goto out_disable_platform_phys;
 
 	dev_info(dev, "Broadcom AHCI SATA3 registered\n");
 
 	return 0;
+
+out_disable_platform_phys:
+	ahci_platform_disable_phys(hpriv);
+out_disable_phys:
+	brcm_sata_phys_disable(priv);
+out_disable_clks:
+	ahci_platform_disable_clks(hpriv);
+out_reset:
+	if (!IS_ERR_OR_NULL(priv->rcdev))
+		reset_control_assert(priv->rcdev);
+	return ret;
 }
 
 static int brcm_ahci_remove(struct platform_device *pdev)
···
 	struct brcm_ahci_priv *priv = hpriv->plat_data;
 	int ret;
 
+	brcm_sata_phys_disable(priv);
+
 	ret = ata_platform_remove_one(pdev);
 	if (ret)
 		return ret;
-
-	brcm_sata_phys_disable(priv);
 
 	return 0;
 }
+4-2
drivers/ata/libahci_platform.c
···
  * RETURNS:
  * 0 on success otherwise a negative error code
  */
-static int ahci_platform_enable_phys(struct ahci_host_priv *hpriv)
+int ahci_platform_enable_phys(struct ahci_host_priv *hpriv)
 {
     int rc, i;
 
···
     }
     return rc;
 }
+EXPORT_SYMBOL_GPL(ahci_platform_enable_phys);
 
 /**
  * ahci_platform_disable_phys - Disable PHYs
···
  *
  * This function disables all PHYs found in hpriv->phys.
  */
-static void ahci_platform_disable_phys(struct ahci_host_priv *hpriv)
+void ahci_platform_disable_phys(struct ahci_host_priv *hpriv)
 {
     int i;
 
···
         phy_exit(hpriv->phys[i]);
     }
 }
+EXPORT_SYMBOL_GPL(ahci_platform_disable_phys);
 
 /**
  * ahci_platform_enable_clks - Enable platform clocks
+24
drivers/ata/libata-core.c
···
 }
 
 /**
+ * ata_qc_get_active - get bitmask of active qcs
+ * @ap: port in question
+ *
+ * LOCKING:
+ * spin_lock_irqsave(host lock)
+ *
+ * RETURNS:
+ * Bitmask of active qcs
+ */
+u64 ata_qc_get_active(struct ata_port *ap)
+{
+    u64 qc_active = ap->qc_active;
+
+    /* ATA_TAG_INTERNAL is sent to hw as tag 0 */
+    if (qc_active & (1ULL << ATA_TAG_INTERNAL)) {
+        qc_active |= (1 << 0);
+        qc_active &= ~(1ULL << ATA_TAG_INTERNAL);
+    }
+
+    return qc_active;
+}
+EXPORT_SYMBOL_GPL(ata_qc_get_active);
+
+/**
  * ata_qc_complete_multiple - Complete multiple qcs successfully
  * @ap: port in question
  * @qc_active: new qc_active mask
···
     select DEVFREQ_GOV_PASSIVE
     select DEVFREQ_EVENT_EXYNOS_PPMU
     select PM_DEVFREQ_EVENT
-    select PM_OPP
     help
       This adds the common DEVFREQ driver for Exynos Memory bus. Exynos
       Memory bus has one more group of memory bus (e.g, MIF and INT block).
···
            ARCH_TEGRA_132_SOC || ARCH_TEGRA_124_SOC || \
            ARCH_TEGRA_210_SOC || \
            COMPILE_TEST
-    select PM_OPP
+    depends on COMMON_CLK
     help
       This adds the DEVFREQ driver for the Tegra family of SoCs.
       It reads ACTMON counters of memory controllers and adjusts the
···
     depends on (TEGRA_MC && TEGRA20_EMC) || COMPILE_TEST
     depends on COMMON_CLK
     select DEVFREQ_GOV_SIMPLE_ONDEMAND
-    select PM_OPP
     help
       This adds the DEVFREQ driver for the Tegra20 family of SoCs.
       It reads Memory Controller counters and adjusts the operating
···
     select DEVFREQ_EVENT_ROCKCHIP_DFI
     select DEVFREQ_GOV_SIMPLE_ONDEMAND
     select PM_DEVFREQ_EVENT
-    select PM_OPP
     help
       This adds the DEVFREQ driver for the RK3399 DMC(Dynamic Memory Controller).
       It sets the frequency for the memory controller and reads the usage counts
···
 
 config GPIO_TEGRA186
     tristate "NVIDIA Tegra186 GPIO support"
-    default ARCH_TEGRA_186_SOC
-    depends on ARCH_TEGRA_186_SOC || COMPILE_TEST
+    default ARCH_TEGRA_186_SOC || ARCH_TEGRA_194_SOC
+    depends on ARCH_TEGRA_186_SOC || ARCH_TEGRA_194_SOC || COMPILE_TEST
     depends on OF_GPIO
     select GPIOLIB_IRQCHIP
     select IRQ_DOMAIN_HIERARCHY
···
 config GPIO_MAX77620
     tristate "GPIO support for PMIC MAX77620 and MAX20024"
     depends on MFD_MAX77620
+    select GPIOLIB_IRQCHIP
     help
       GPIO driver for MAX77620 and MAX20024 PMIC from Maxim Semiconductor.
       MAX77620 PMIC has 8 pins that can be configured as GPIOs. The
+1-1
drivers/gpio/gpio-aspeed-sgpio.c
···
         return gpio->base + bank->irq_regs + GPIO_IRQ_STATUS;
     default:
         /* acturally if code runs to here, it's an error case */
-        BUG_ON(1);
+        BUG();
     }
 }
 
+7-4
drivers/gpio/gpio-mockup.c
···
     mutex_lock(&chip->lock);
 
     if (test_bit(FLAG_REQUESTED, &desc->flags) &&
-        !test_bit(FLAG_IS_OUT, &desc->flags)) {
+       !test_bit(FLAG_IS_OUT, &desc->flags)) {
         curr = __gpio_mockup_get(chip, offset);
         if (curr == value)
             goto out;
···
         irq_type = irq_get_trigger_type(irq);
 
         if ((value == 1 && (irq_type & IRQ_TYPE_EDGE_RISING)) ||
-            (value == 0 && (irq_type & IRQ_TYPE_EDGE_FALLING)))
+           (value == 0 && (irq_type & IRQ_TYPE_EDGE_FALLING)))
             irq_sim_fire(sim, offset);
     }
···
     int direction;
 
     mutex_lock(&chip->lock);
-    direction = !chip->lines[offset].dir;
+    direction = chip->lines[offset].dir;
     mutex_unlock(&chip->lock);
 
     return direction;
···
     struct gpio_chip *gc;
     struct device *dev;
     const char *name;
-    int rv, base;
+    int rv, base, i;
     u16 ngpio;
 
     dev = &pdev->dev;
···
                    sizeof(*chip->lines), GFP_KERNEL);
     if (!chip->lines)
         return -ENOMEM;
+
+    for (i = 0; i < gc->ngpio; i++)
+        chip->lines[i].dir = GPIO_LINE_DIRECTION_IN;
 
     if (device_property_read_bool(dev, "named-gpio-lines")) {
         rv = gpio_mockup_name_lines(dev, chip);
+1
drivers/gpio/gpio-mpc8xxx.c
···
         return -ENOMEM;
 
     gc = &mpc8xxx_gc->gc;
+    gc->parent = &pdev->dev;
 
     if (of_property_read_bool(np, "little-endian")) {
         ret = bgpio_init(gc, &pdev->dev, 4,
···2121#include "gpiolib.h"2222#include "gpiolib-acpi.h"23232424+#define QUIRK_NO_EDGE_EVENTS_ON_BOOT 0x01l2525+#define QUIRK_NO_WAKEUP 0x02l2626+2427static int run_edge_events_on_boot = -1;2528module_param(run_edge_events_on_boot, int, 0444);2629MODULE_PARM_DESC(run_edge_events_on_boot,2730 "Run edge _AEI event-handlers at boot: 0=no, 1=yes, -1=auto");3131+3232+static int honor_wakeup = -1;3333+module_param(honor_wakeup, int, 0444);3434+MODULE_PARM_DESC(honor_wakeup,3535+ "Honor the ACPI wake-capable flag: 0=no, 1=yes, -1=auto");28362937/**3038 * struct acpi_gpio_event - ACPI GPIO event handler data···289281 event->handle = evt_handle;290282 event->handler = handler;291283 event->irq = irq;292292- event->irq_is_wake = agpio->wake_capable == ACPI_WAKE_CAPABLE;284284+ event->irq_is_wake = honor_wakeup && agpio->wake_capable == ACPI_WAKE_CAPABLE;293285 event->pin = pin;294286 event->desc = desc;295287···13171309/* We must use _sync so that this runs after the first deferred_probe run */13181310late_initcall_sync(acpi_gpio_handle_deferred_request_irqs);1319131113201320-static const struct dmi_system_id run_edge_events_on_boot_blacklist[] = {13121312+static const struct dmi_system_id gpiolib_acpi_quirks[] = {13211313 {13221314 /*13231315 * The Minix Neo Z83-4 has a micro-USB-B id-pin handler for···13271319 .matches = {13281320 DMI_MATCH(DMI_SYS_VENDOR, "MINIX"),13291321 DMI_MATCH(DMI_PRODUCT_NAME, "Z83-4"),13301330- }13221322+ },13231323+ .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT,13311324 },13321325 {13331326 /*···13401331 .matches = {13411332 DMI_MATCH(DMI_SYS_VENDOR, "Wortmann_AG"),13421333 DMI_MATCH(DMI_PRODUCT_NAME, "TERRA_PAD_1061"),13431343- }13341334+ },13351335+ .driver_data = (void *)QUIRK_NO_EDGE_EVENTS_ON_BOOT,13361336+ },13371337+ {13381338+ /*13391339+ * Various HP X2 10 Cherry Trail models use an external13401340+ * embedded-controller connected via I2C + an ACPI GPIO13411341+ * event handler. 
The embedded controller generates various13421342+ * spurious wakeup events when suspended. So disable wakeup13431343+ * for its handler (it uses the only ACPI GPIO event handler).13441344+ * This breaks wakeup when opening the lid, the user needs13451345+ * to press the power-button to wakeup the system. The13461346+ * alternative is suspend simply not working, which is worse.13471347+ */13481348+ .matches = {13491349+ DMI_MATCH(DMI_SYS_VENDOR, "HP"),13501350+ DMI_MATCH(DMI_PRODUCT_NAME, "HP x2 Detachable 10-p0XX"),13511351+ },13521352+ .driver_data = (void *)QUIRK_NO_WAKEUP,13441353 },13451354 {} /* Terminating entry */13461355};1347135613481357static int acpi_gpio_setup_params(void)13491358{13591359+ const struct dmi_system_id *id;13601360+ long quirks = 0;13611361+13621362+ id = dmi_first_match(gpiolib_acpi_quirks);13631363+ if (id)13641364+ quirks = (long)id->driver_data;13651365+13501366 if (run_edge_events_on_boot < 0) {13511351- if (dmi_check_system(run_edge_events_on_boot_blacklist))13671367+ if (quirks & QUIRK_NO_EDGE_EVENTS_ON_BOOT)13521368 run_edge_events_on_boot = 0;13531369 else13541370 run_edge_events_on_boot = 1;13711371+ }13721372+13731373+ if (honor_wakeup < 0) {13741374+ if (quirks & QUIRK_NO_WAKEUP)13751375+ honor_wakeup = 0;13761376+ else13771377+ honor_wakeup = 1;13551378 }1356137913571380 return 0;
+11-2
drivers/gpio/gpiolib.c
···
     chip = gpiod_to_chip(desc);
     offset = gpio_chip_hwgpio(desc);
 
+    /*
+     * Open drain emulation using input mode may incorrectly report
+     * input here, fix that up.
+     */
+    if (test_bit(FLAG_OPEN_DRAIN, &desc->flags) &&
+        test_bit(FLAG_IS_OUT, &desc->flags))
+        return 0;
+
     if (!chip->get_direction)
         return -ENOTSUPP;
···
 
     if (chip->ngpio <= p->chip_hwnum) {
         dev_err(dev,
-            "requested GPIO %d is out of range [0..%d] for chip %s\n",
-            idx, chip->ngpio, chip->label);
+            "requested GPIO %u (%u) is out of range [0..%u] for chip %s\n",
+            idx, p->chip_hwnum, chip->ngpio - 1,
+            chip->label);
         return ERR_PTR(-EINVAL);
     }
 
···
      * Changes struct fb_var_screeninfo are currently not pushed back
      * to KMS, hence fail if different settings are requested.
      */
-    if (var->bits_per_pixel != fb->format->cpp[0] * 8 ||
+    if (var->bits_per_pixel > fb->format->cpp[0] * 8 ||
         var->xres > fb->width || var->yres > fb->height ||
         var->xres_virtual > fb->width || var->yres_virtual > fb->height) {
         DRM_DEBUG("fb requested width/height/bpp can't fit in current fb "
···
         !var->blue.msb_right && !var->transp.msb_right) {
         drm_fb_helper_fill_pixel_fmt(var, fb->format->depth);
     }
+
+    /*
+     * Likewise, bits_per_pixel should be rounded up to a supported value.
+     */
+    var->bits_per_pixel = fb->format->cpp[0] * 8;
 
     /*
      * drm fbdev emulation doesn't support changing the pixel format at all,
+2-2
drivers/gpu/drm/i915/display/intel_audio.c
···
     }
 
     /* Force CDCLK to 2*BCLK as long as we need audio powered. */
-    if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+    if (IS_GEMINILAKE(dev_priv))
         glk_force_audio_cdclk(dev_priv, true);
 
     if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
···
 
     /* Stop forcing CDCLK to 2*BCLK if no need for audio to be powered. */
     if (--dev_priv->audio_power_refcount == 0)
-        if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
+        if (IS_GEMINILAKE(dev_priv))
             glk_force_audio_cdclk(dev_priv, false);
 
     intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO, cookie);
···
      */
     u16 write_domain;
 
-    struct intel_frontbuffer *frontbuffer;
+    struct intel_frontbuffer __rcu *frontbuffer;
 
     /** Current tiling stride for the object, if it's tiled. */
     unsigned int tiling_and_stride;
+2-1
drivers/gpu/drm/i915/gt/intel_gt_pm.c
···
         intel_uncore_forcewake_put(&i915->uncore, FORCEWAKE_ALL);
     }
 
+    /* Defer dropping the display power well for 100ms, it's slow! */
     GEM_BUG_ON(!wakeref);
-    intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref);
+    intel_display_power_put_async(i915, POWER_DOMAIN_GT_IRQ, wakeref);
 
     i915_globals_park();
+2
drivers/gpu/drm/i915/gt/intel_lrc.c
···
     ve->base.gt = siblings[0]->gt;
     ve->base.uncore = siblings[0]->uncore;
     ve->base.id = -1;
+
     ve->base.class = OTHER_CLASS;
     ve->base.uabi_class = I915_ENGINE_CLASS_INVALID;
     ve->base.instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
+    ve->base.uabi_instance = I915_ENGINE_CLASS_INVALID_VIRTUAL;
 
     /*
      * The decision on whether to submit a request using semaphores
+11-20
drivers/gpu/drm/i915/gt/intel_ring_submission.c
···14131413 int len;14141414 u32 *cs;1415141514161416- flags |= MI_MM_SPACE_GTT;14171417- if (IS_HASWELL(i915))14181418- /* These flags are for resource streamer on HSW+ */14191419- flags |= HSW_MI_RS_SAVE_STATE_EN | HSW_MI_RS_RESTORE_STATE_EN;14201420- else14211421- /* We need to save the extended state for powersaving modes */14221422- flags |= MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN;14231423-14241416 len = 4;14251417 if (IS_GEN(i915, 7))14261418 len += 2 + (num_engines ? 4 * num_engines + 6 : 0);···15811589 }1582159015831591 if (ce->state) {15841584- u32 hw_flags;15921592+ u32 flags;1585159315861594 GEM_BUG_ON(rq->engine->id != RCS0);1587159515881588- /*15891589- * The kernel context(s) is treated as pure scratch and is not15901590- * expected to retain any state (as we sacrifice it during15911591- * suspend and on resume it may be corrupted). This is ok,15921592- * as nothing actually executes using the kernel context; it15931593- * is purely used for flushing user contexts.15941594- */15951595- hw_flags = 0;15961596- if (i915_gem_context_is_kernel(rq->gem_context))15971597- hw_flags = MI_RESTORE_INHIBIT;15961596+ /* For resource streamer on HSW+ and power context elsewhere */15971597+ BUILD_BUG_ON(HSW_MI_RS_SAVE_STATE_EN != MI_SAVE_EXT_STATE_EN);15981598+ BUILD_BUG_ON(HSW_MI_RS_RESTORE_STATE_EN != MI_RESTORE_EXT_STATE_EN);1598159915991599- ret = mi_set_context(rq, hw_flags);16001600+ flags = MI_SAVE_EXT_STATE_EN | MI_MM_SPACE_GTT;16011601+ if (!i915_gem_context_is_kernel(rq->gem_context))16021602+ flags |= MI_RESTORE_EXT_STATE_EN;16031603+ else16041604+ flags |= MI_RESTORE_INHIBIT;16051605+16061606+ ret = mi_set_context(rq, flags);16001607 if (ret)16011608 return ret;16021609 }
···
      * We manually control the domain here and pretend that it
      * remains coherent i.e. in the GTT domain, like shmem_pwrite.
      */
-    intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
+    i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
 
     if (copy_from_user(vaddr, user_data, args->size))
         return -EFAULT;
···
     drm_clflush_virt_range(vaddr, args->size);
     intel_gt_chipset_flush(&to_i915(obj->base.dev)->gt);
 
-    intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
+    i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
     return 0;
 }
···
         goto out_unpin;
     }
 
-    intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
+    i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
 
     user_data = u64_to_user_ptr(args->data_ptr);
     offset = args->offset;
···
         user_data += page_length;
         offset += page_length;
     }
-    intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
+    i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
 
     i915_gem_object_unlock_fence(obj, fence);
 out_unpin:
···
         offset = 0;
     }
 
-    intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
+    i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
     i915_gem_object_unlock_fence(obj, fence);
 
     return ret;
+20-53
drivers/gpu/drm/i915/i915_pmu.c
···144144 return ktime_to_ns(ktime_sub(ktime_get(), kt));145145}146146147147-static u64 __pmu_estimate_rc6(struct i915_pmu *pmu)148148-{149149- u64 val;150150-151151- /*152152- * We think we are runtime suspended.153153- *154154- * Report the delta from when the device was suspended to now,155155- * on top of the last known real value, as the approximated RC6156156- * counter value.157157- */158158- val = ktime_since(pmu->sleep_last);159159- val += pmu->sample[__I915_SAMPLE_RC6].cur;160160-161161- pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = val;162162-163163- return val;164164-}165165-166166-static u64 __pmu_update_rc6(struct i915_pmu *pmu, u64 val)167167-{168168- /*169169- * If we are coming back from being runtime suspended we must170170- * be careful not to report a larger value than returned171171- * previously.172172- */173173- if (val >= pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur) {174174- pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur = 0;175175- pmu->sample[__I915_SAMPLE_RC6].cur = val;176176- } else {177177- val = pmu->sample[__I915_SAMPLE_RC6_ESTIMATED].cur;178178- }179179-180180- return val;181181-}182182-183147static u64 get_rc6(struct intel_gt *gt)184148{185149 struct drm_i915_private *i915 = gt->i915;186150 struct i915_pmu *pmu = &i915->pmu;187151 unsigned long flags;152152+ bool awake = false;188153 u64 val;189154190190- val = 0;191155 if (intel_gt_pm_get_if_awake(gt)) {192156 val = __get_rc6(gt);193157 intel_gt_pm_put_async(gt);158158+ awake = true;194159 }195160196161 spin_lock_irqsave(&pmu->lock, flags);197162198198- if (val)199199- val = __pmu_update_rc6(pmu, val);163163+ if (awake) {164164+ pmu->sample[__I915_SAMPLE_RC6].cur = val;165165+ } else {166166+ /*167167+ * We think we are runtime suspended.168168+ *169169+ * Report the delta from when the device was suspended to now,170170+ * on top of the last known real value, as the approximated RC6171171+ * counter value.172172+ */173173+ val = ktime_since(pmu->sleep_last);174174+ val += 
pmu->sample[__I915_SAMPLE_RC6].cur;175175+ }176176+177177+ if (val < pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur)178178+ val = pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur;200179 else201201- val = __pmu_estimate_rc6(pmu);180180+ pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = val;202181203182 spin_unlock_irqrestore(&pmu->lock, flags);204183···189210 struct i915_pmu *pmu = &i915->pmu;190211191212 if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY))192192- __pmu_update_rc6(pmu, __get_rc6(&i915->gt));213213+ pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(&i915->gt);193214194215 pmu->sleep_last = ktime_get();195195-}196196-197197-static void unpark_rc6(struct drm_i915_private *i915)198198-{199199- struct i915_pmu *pmu = &i915->pmu;200200-201201- /* Estimate how long we slept and accumulate that into rc6 counters */202202- if (pmu->enable & config_enabled_mask(I915_PMU_RC6_RESIDENCY))203203- __pmu_estimate_rc6(pmu);204216}205217206218#else···202232}203233204234static void park_rc6(struct drm_i915_private *i915) {}205205-static void unpark_rc6(struct drm_i915_private *i915) {}206235207236#endif208237···249280 * Re-enable sampling timer when GPU goes active.250281 */251282 __i915_pmu_maybe_start_timer(pmu);252252-253253- unpark_rc6(i915);254283255284 spin_unlock_irq(&pmu->lock);256285}
···
 
     if (steam->quirks & STEAM_QUIRK_WIRELESS) {
         hid_info(hdev, "Steam wireless receiver connected");
+        /* If using a wireless adaptor ask for connection status */
+        steam->connected = false;
         steam_request_conn_status(steam);
     } else {
+        /* A wired connection is always present */
+        steam->connected = true;
         ret = steam_register(steam);
         if (ret) {
             hid_err(hdev,
···
     struct i2c_adapter adapter;
     struct completion completion;
     struct i2c_msg *curr_msg;
+    struct clk *bus_clk;
     int num_msgs;
     u32 msg_err;
     u8 *msg_buf;
···
     struct resource *mem, *irq;
     int ret;
     struct i2c_adapter *adap;
-    struct clk *bus_clk;
     struct clk *mclk;
     u32 bus_clk_rate;
 
···
         return PTR_ERR(mclk);
     }
 
-    bus_clk = bcm2835_i2c_register_div(&pdev->dev, mclk, i2c_dev);
+    i2c_dev->bus_clk = bcm2835_i2c_register_div(&pdev->dev, mclk, i2c_dev);
 
-    if (IS_ERR(bus_clk)) {
+    if (IS_ERR(i2c_dev->bus_clk)) {
         dev_err(&pdev->dev, "Could not register clock\n");
-        return PTR_ERR(bus_clk);
+        return PTR_ERR(i2c_dev->bus_clk);
     }
 
     ret = of_property_read_u32(pdev->dev.of_node, "clock-frequency",
···
         bus_clk_rate = 100000;
     }
 
-    ret = clk_set_rate_exclusive(bus_clk, bus_clk_rate);
+    ret = clk_set_rate_exclusive(i2c_dev->bus_clk, bus_clk_rate);
     if (ret < 0) {
         dev_err(&pdev->dev, "Could not set clock frequency\n");
         return ret;
     }
 
-    ret = clk_prepare_enable(bus_clk);
+    ret = clk_prepare_enable(i2c_dev->bus_clk);
     if (ret) {
         dev_err(&pdev->dev, "Couldn't prepare clock");
         return ret;
···
 static int bcm2835_i2c_remove(struct platform_device *pdev)
 {
     struct bcm2835_i2c_dev *i2c_dev = platform_get_drvdata(pdev);
-    struct clk *bus_clk = devm_clk_get(i2c_dev->dev, "div");
 
-    clk_rate_exclusive_put(bus_clk);
-    clk_disable_unprepare(bus_clk);
+    clk_rate_exclusive_put(i2c_dev->bus_clk);
+    clk_disable_unprepare(i2c_dev->bus_clk);
 
     free_irq(i2c_dev->irq, i2c_dev);
     i2c_del_adapter(&i2c_dev->adapter);
+10-3
drivers/i2c/i2c-core-base.c
···
      * If we can set SDA, we will always create a STOP to ensure additional
      * pulses will do no harm. This is achieved by letting SDA follow SCL
      * half a cycle later. Check the 'incomplete_write_byte' fault injector
-     * for details.
+     * for details. Note that we must honour tsu:sto, 4us, but lets use 5us
+     * here for simplicity.
      */
     bri->set_scl(adap, scl);
-    ndelay(RECOVERY_NDELAY / 2);
+    ndelay(RECOVERY_NDELAY);
     if (bri->set_sda)
         bri->set_sda(adap, scl);
     ndelay(RECOVERY_NDELAY / 2);
···
         scl = !scl;
         bri->set_scl(adap, scl);
         /* Creating STOP again, see above */
-        ndelay(RECOVERY_NDELAY / 2);
+        if (scl) {
+            /* Honour minimum tsu:sto */
+            ndelay(RECOVERY_NDELAY);
+        } else {
+            /* Honour minimum tf and thd:dat */
+            ndelay(RECOVERY_NDELAY / 2);
+        }
         if (bri->set_sda)
             bri->set_sda(adap, scl);
         ndelay(RECOVERY_NDELAY / 2);
+3-1
drivers/infiniband/hw/bnxt_re/ib_verbs.c
···
     int rc;
 
     rc = bnxt_qplib_free_mrw(&rdev->qplib_res, &mr->qplib_mr);
-    if (rc)
+    if (rc) {
         dev_err(rdev_to_dev(rdev), "Dereg MR failed: %#x\n", rc);
+        return rc;
+    }
 
     if (mr->pages) {
         rc = bnxt_qplib_free_fast_reg_page_list(&rdev->qplib_res,
+6-6
drivers/infiniband/hw/bnxt_re/qplib_fp.c
···
         /* Add qp to flush list of the CQ */
         bnxt_qplib_add_flush_qp(qp);
     } else {
+        /* Before we complete, do WA 9060 */
+        if (do_wa9060(qp, cq, cq_cons, sw_sq_cons,
+                      cqe_sq_cons)) {
+            *lib_qp = qp;
+            goto out;
+        }
         if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) {
-            /* Before we complete, do WA 9060 */
-            if (do_wa9060(qp, cq, cq_cons, sw_sq_cons,
-                          cqe_sq_cons)) {
-                *lib_qp = qp;
-                goto out;
-            }
             cqe->status = CQ_REQ_STATUS_OK;
             cqe++;
             (*budget)--;
+3-1
drivers/infiniband/hw/hfi1/iowait.c
···
 void iowait_cancel_work(struct iowait *w)
 {
     cancel_work_sync(&iowait_get_ib_work(w)->iowork);
-    cancel_work_sync(&iowait_get_tid_work(w)->iowork);
+    /* Make sure that the iowork for TID RDMA is used */
+    if (iowait_get_tid_work(w)->iowork.func)
+        cancel_work_sync(&iowait_get_tid_work(w)->iowork);
 }
 
 /**
+9
drivers/infiniband/hw/hfi1/tid_rdma.c
···
              */
             fpsn = full_flow_psn(flow, flow->flow_state.spsn);
             req->r_ack_psn = psn;
+            /*
+             * If resync_psn points to the last flow PSN for a
+             * segment and the new segment (likely from a new
+             * request) starts with a new generation number, we
+             * need to adjust resync_psn accordingly.
+             */
+            if (flow->flow_state.generation !=
+                (resync_psn >> HFI1_KDETH_BTH_SEQ_SHIFT))
+                resync_psn = mask_psn(fpsn - 1);
             flow->resync_npkts +=
                 delta_psn(mask_psn(resync_psn + 1), fpsn);
             /*
···
         }
     }
 
-    __clear_bit(*old_keycode, dev->keybit);
-    __set_bit(ke->keycode, dev->keybit);
-
-    for (i = 0; i < dev->keycodemax; i++) {
-        if (input_fetch_keycode(dev, i) == *old_keycode) {
-            __set_bit(*old_keycode, dev->keybit);
-            break; /* Setting the bit twice is useless, so break */
+    if (*old_keycode <= KEY_MAX) {
+        __clear_bit(*old_keycode, dev->keybit);
+        for (i = 0; i < dev->keycodemax; i++) {
+            if (input_fetch_keycode(dev, i) == *old_keycode) {
+                __set_bit(*old_keycode, dev->keybit);
+                /* Setting the bit twice is useless, so break */
+                break;
+            }
         }
     }
 
+    __set_bit(ke->keycode, dev->keybit);
     return 0;
 }
···
     * Simulate keyup event if keycode is not present
     * in the keymap anymore
     */
-    if (test_bit(EV_KEY, dev->evbit) &&
-        !is_event_supported(old_keycode, dev->keybit, KEY_MAX) &&
-        __test_and_clear_bit(old_keycode, dev->key)) {
+    if (old_keycode > KEY_MAX) {
+        dev_warn(dev->dev.parent ?: &dev->dev,
+                 "%s: got too big old keycode %#x\n",
+                 __func__, old_keycode);
+    } else if (test_bit(EV_KEY, dev->evbit) &&
+               !is_event_supported(old_keycode, dev->keybit, KEY_MAX) &&
+               __test_and_clear_bit(old_keycode, dev->key)) {
         struct input_value vals[] = {
             { EV_KEY, old_keycode, 0 },
             input_value_sync
+7-1
drivers/input/keyboard/imx_sc_key.c
···
         return;
     }
 
-    state = (bool)msg.state;
+    /*
+     * The response data from SCU firmware is 4 bytes,
+     * but ONLY the first byte is the key state, other
+     * 3 bytes could be some dirty data, so we should
+     * ONLY take the first byte as key state.
+     */
+    state = (bool)(msg.state & 0xff);
 
     if (state ^ priv->keystate) {
         priv->keystate = state;
···
         * Skip contexts other than external interrupts for our
         * privilege level.
         */
-        if (parent.args[0] != IRQ_EXT)
+        if (parent.args[0] != RV_IRQ_EXT)
             continue;
 
         hartid = plic_find_hart_id(parent.np);
+27-13
drivers/media/cec/cec-adap.c
···380380 } else {381381 list_del_init(&data->list);382382 if (!(data->msg.tx_status & CEC_TX_STATUS_OK))383383- data->adap->transmit_queue_sz--;383383+ if (!WARN_ON(!data->adap->transmit_queue_sz))384384+ data->adap->transmit_queue_sz--;384385 }385386386387 if (data->msg.tx_status & CEC_TX_STATUS_OK) {···433432 * need to do anything special in that case.434433 */435434 }435435+ /*436436+ * If something went wrong and this counter isn't what it should437437+ * be, then this will reset it back to 0. Warn if it is not 0,438438+ * since it indicates a bug, either in this framework or in a439439+ * CEC driver.440440+ */441441+ if (WARN_ON(adap->transmit_queue_sz))442442+ adap->transmit_queue_sz = 0;436443}437444438445/*···465456 bool timeout = false;466457 u8 attempts;467458468468- if (adap->transmitting) {459459+ if (adap->transmit_in_progress) {469460 int err;470461471462 /*···500491 goto unlock;501492 }502493503503- if (adap->transmitting && timeout) {494494+ if (adap->transmit_in_progress && timeout) {504495 /*505496 * If we timeout, then log that. Normally this does506497 * not happen and it is an indication of a faulty CEC···509500 * so much traffic on the bus that the adapter was510501 * unable to transmit for CEC_XFER_TIMEOUT_MS (2.1s).511502 */512512- pr_warn("cec-%s: message %*ph timed out\n", adap->name,513513- adap->transmitting->msg.len,514514- adap->transmitting->msg.msg);503503+ if (adap->transmitting) {504504+ pr_warn("cec-%s: message %*ph timed out\n", adap->name,505505+ adap->transmitting->msg.len,506506+ adap->transmitting->msg.msg);507507+ /* Just give up on this. */508508+ cec_data_cancel(adap->transmitting,509509+ CEC_TX_STATUS_TIMEOUT);510510+ } else {511511+ pr_warn("cec-%s: transmit timed out\n", adap->name);512512+ }515513 adap->transmit_in_progress = false;516514 adap->tx_timeouts++;517517- /* Just give up on this. 
*/518518- cec_data_cancel(adap->transmitting,519519- CEC_TX_STATUS_TIMEOUT);520515 goto unlock;521516 }522517···535522 data = list_first_entry(&adap->transmit_queue,536523 struct cec_data, list);537524 list_del_init(&data->list);538538- adap->transmit_queue_sz--;525525+ if (!WARN_ON(!data->adap->transmit_queue_sz))526526+ adap->transmit_queue_sz--;539527540528 /* Make this the current transmitting message */541529 adap->transmitting = data;···10991085 valid_la = false;11001086 else if (!cec_msg_is_broadcast(msg) && !(dir_fl & DIRECTED))11011087 valid_la = false;11021102- else if (cec_msg_is_broadcast(msg) && !(dir_fl & BCAST1_4))10881088+ else if (cec_msg_is_broadcast(msg) && !(dir_fl & BCAST))11031089 valid_la = false;11041090 else if (cec_msg_is_broadcast(msg) &&11051105- adap->log_addrs.cec_version >= CEC_OP_CEC_VERSION_2_0 &&11061106- !(dir_fl & BCAST2_0))10911091+ adap->log_addrs.cec_version < CEC_OP_CEC_VERSION_2_0 &&10921092+ !(dir_fl & BCAST1_4))11071093 valid_la = false;11081094 }11091095 if (valid_la && min_len) {
+13-4
drivers/media/usb/pulse8-cec/pulse8-cec.c
···116116 unsigned int vers;117117 struct completion cmd_done;118118 struct work_struct work;119119+ u8 work_result;119120 struct delayed_work ping_eeprom_work;120121 struct cec_msg rx_msg;121122 u8 data[DATA_SIZE];···138137{139138 struct pulse8 *pulse8 =140139 container_of(work, struct pulse8, work);140140+ u8 result = pulse8->work_result;141141142142- switch (pulse8->data[0] & 0x3f) {142142+ pulse8->work_result = 0;143143+ switch (result & 0x3f) {143144 case MSGCODE_FRAME_DATA:144145 cec_received_msg(pulse8->adap, &pulse8->rx_msg);145146 break;···175172 pulse8->escape = false;176173 } else if (data == MSGEND) {177174 struct cec_msg *msg = &pulse8->rx_msg;175175+ u8 msgcode = pulse8->buf[0];178176179177 if (debug)180178 dev_info(pulse8->dev, "received: %*ph\n",181179 pulse8->idx, pulse8->buf);182182- pulse8->data[0] = pulse8->buf[0];183183- switch (pulse8->buf[0] & 0x3f) {180180+ switch (msgcode & 0x3f) {184181 case MSGCODE_FRAME_START:185182 msg->len = 1;186183 msg->msg[0] = pulse8->buf[1];···189186 if (msg->len == CEC_MAX_MSG_SIZE)190187 break;191188 msg->msg[msg->len++] = pulse8->buf[1];192192- if (pulse8->buf[0] & MSGCODE_FRAME_EOM)189189+ if (msgcode & MSGCODE_FRAME_EOM) {190190+ WARN_ON(pulse8->work_result);191191+ pulse8->work_result = msgcode;193192 schedule_work(&pulse8->work);193193+ break;194194+ }194195 break;195196 case MSGCODE_TRANSMIT_SUCCEEDED:196197 case MSGCODE_TRANSMIT_FAILED_LINE:197198 case MSGCODE_TRANSMIT_FAILED_ACK:198199 case MSGCODE_TRANSMIT_FAILED_TIMEOUT_DATA:199200 case MSGCODE_TRANSMIT_FAILED_TIMEOUT_LINE:201201+ WARN_ON(pulse8->work_result);202202+ pulse8->work_result = msgcode;200203 schedule_work(&pulse8->work);201204 break;202205 case MSGCODE_HIGH_ERROR:
+8-6
drivers/mtd/nand/onenand/omap2.c
···
     unsigned long timeout;
     u32 syscfg;
 
-    if (state == FL_RESETING || state == FL_PREPARING_ERASE ||
+    if (state == FL_RESETTING || state == FL_PREPARING_ERASE ||
         state == FL_VERIFYING_ERASE) {
         int i = 21;
         unsigned int intr_flags = ONENAND_INT_MASTER;
 
         switch (state) {
-        case FL_RESETING:
+        case FL_RESETTING:
             intr_flags |= ONENAND_INT_RESET;
             break;
         case FL_PREPARING_ERASE:
···
     struct dma_async_tx_descriptor *tx;
     dma_cookie_t cookie;
 
-    tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count, 0);
+    tx = dmaengine_prep_dma_memcpy(c->dma_chan, dst, src, count,
+                                   DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
     if (!tx) {
         dev_err(&c->pdev->dev, "Failed to prepare DMA memcpy\n");
         return -EIO;
···
      * context fallback to PIO mode.
      */
     if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
-        count < 384 || in_interrupt() || oops_in_progress )
+        count < 384 || in_interrupt() || oops_in_progress)
         goto out_copy;
 
     xtra = count & 3;
···
      * context fallback to PIO mode.
      */
     if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 ||
-        count < 384 || in_interrupt() || oops_in_progress )
+        count < 384 || in_interrupt() || oops_in_progress)
         goto out_copy;
 
     dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE);
···
         c->gpmc_cs, c->phys_base, c->onenand.base,
         c->dma_chan ? "DMA" : "PIO");
 
-    if ((r = onenand_scan(&c->mtd, 1)) < 0)
+    r = onenand_scan(&c->mtd, 1);
+    if (r < 0)
         goto err_release_dma;
 
     freq = omap2_onenand_get_freq(c->onenand.version_id);
···3737/* Max ECC buffer length */3838#define FMC2_MAX_ECC_BUF_LEN (FMC2_BCHDSRS_LEN * FMC2_MAX_SG)39394040+#define FMC2_TIMEOUT_US 10004041#define FMC2_TIMEOUT_MS 100041424243/* Timings */···5453#define FMC2_PMEM 0x885554#define FMC2_PATT 0x8c5655#define FMC2_HECCR 0x945656+#define FMC2_ISR 0x1845757+#define FMC2_ICR 0x1885758#define FMC2_CSQCR 0x2005859#define FMC2_CSQCFGR1 0x2045960#define FMC2_CSQCFGR2 0x208···120117#define FMC2_PATT_ATTHOLD(x) (((x) & 0xff) << 16)121118#define FMC2_PATT_ATTHIZ(x) (((x) & 0xff) << 24)122119#define FMC2_PATT_DEFAULT 0x0a0a0a0a120120+121121+/* Register: FMC2_ISR */122122+#define FMC2_ISR_IHLF BIT(1)123123+124124+/* Register: FMC2_ICR */125125+#define FMC2_ICR_CIHLF BIT(1)123126124127/* Register: FMC2_CSQCR */125128#define FMC2_CSQCR_CSQSTART BIT(0)···13311322 stm32_fmc2_set_buswidth_16(fmc2, true);13321323}1333132413251325+static int stm32_fmc2_waitrdy(struct nand_chip *chip, unsigned long timeout_ms)13261326+{13271327+ struct stm32_fmc2_nfc *fmc2 = to_stm32_nfc(chip->controller);13281328+ const struct nand_sdr_timings *timings;13291329+ u32 isr, sr;13301330+13311331+ /* Check if there is no pending requests to the NAND flash */13321332+ if (readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_SR, sr,13331333+ sr & FMC2_SR_NWRF, 1,13341334+ FMC2_TIMEOUT_US))13351335+ dev_warn(fmc2->dev, "Waitrdy timeout\n");13361336+13371337+ /* Wait tWB before R/B# signal is low */13381338+ timings = nand_get_sdr_timings(&chip->data_interface);13391339+ ndelay(PSEC_TO_NSEC(timings->tWB_max));13401340+13411341+ /* R/B# signal is low, clear high level flag */13421342+ writel_relaxed(FMC2_ICR_CIHLF, fmc2->io_base + FMC2_ICR);13431343+13441344+ /* Wait R/B# signal is high */13451345+ return readl_relaxed_poll_timeout_atomic(fmc2->io_base + FMC2_ISR,13461346+ isr, isr & FMC2_ISR_IHLF,13471347+ 5, 1000 * timeout_ms);13481348+}13491349+13341350static int stm32_fmc2_exec_op(struct nand_chip *chip,13351351 const struct nand_operation *op,13361352 bool 
check_only)···14001366 break;1401136714021368 case NAND_OP_WAITRDY_INSTR:14031403- ret = nand_soft_waitrdy(chip,14041404- instr->ctx.waitrdy.timeout_ms);13691369+ ret = stm32_fmc2_waitrdy(chip,13701370+ instr->ctx.waitrdy.timeout_ms);14051371 break;14061372 }14071373 }
+2-1
drivers/mtd/sm_ftl.c
···247247248248 /* FTL can contain -1 entries that are by default filled with bits */249249 if (block == -1) {250250- memset(buffer, 0xFF, SM_SECTOR_SIZE);250250+ if (buffer)251251+ memset(buffer, 0xFF, SM_SECTOR_SIZE);251252 return 0;252253 }253254
···15901590 struct usb_endpoint_descriptor *ep;15911591 int i;1592159215931593- iface_desc = &dev->intf->altsetting[0];15931593+ iface_desc = dev->intf->cur_altsetting;1594159415951595 for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {15961596 ep = &iface_desc->endpoint[i].desc;
+1-1
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
···13101310 struct usb_endpoint_descriptor *endpoint;13111311 int i;1312131213131313- iface_desc = &dev->intf->altsetting[0];13131313+ iface_desc = dev->intf->cur_altsetting;1314131413151315 for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {13161316 endpoint = &iface_desc->endpoint[i].desc;
···360360{361361 u16 ptr = MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST;362362363363+ /* Use the default high priority for management frames sent to364364+ * the CPU.365365+ */366366+ port |= MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI;367367+363368 return mv88e6390_g1_monitor_write(chip, ptr, port);364369}365370
+1
drivers/net/dsa/mv88e6xxx/global1.h
···211211#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_INGRESS_DEST 0x2000212212#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_EGRESS_DEST 0x2100213213#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST 0x3000214214+#define MV88E6390_G1_MONITOR_MGMT_CTL_PTR_CPU_DEST_MGMTPRI 0x00e0214215#define MV88E6390_G1_MONITOR_MGMT_CTL_DATA_MASK 0x00ff215216216217/* Offset 0x1C: Global Control 2 */
+6-6
drivers/net/dsa/mv88e6xxx/port.c
···393393}394394395395static int mv88e6xxx_port_set_cmode(struct mv88e6xxx_chip *chip, int port,396396- phy_interface_t mode)396396+ phy_interface_t mode, bool force)397397{398398 u8 lane;399399 u16 cmode;···427427 cmode = 0;428428 }429429430430- /* cmode doesn't change, nothing to do for us */431431- if (cmode == chip->ports[port].cmode)430430+ /* cmode doesn't change, nothing to do for us unless forced */431431+ if (cmode == chip->ports[port].cmode && !force)432432 return 0;433433434434 lane = mv88e6xxx_serdes_get_lane(chip, port);···484484 if (port != 9 && port != 10)485485 return -EOPNOTSUPP;486486487487- return mv88e6xxx_port_set_cmode(chip, port, mode);487487+ return mv88e6xxx_port_set_cmode(chip, port, mode, false);488488}489489490490int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,···504504 break;505505 }506506507507- return mv88e6xxx_port_set_cmode(chip, port, mode);507507+ return mv88e6xxx_port_set_cmode(chip, port, mode, false);508508}509509510510static int mv88e6341_port_set_cmode_writable(struct mv88e6xxx_chip *chip,···555555 if (err)556556 return err;557557558558- return mv88e6xxx_port_set_cmode(chip, port, mode);558558+ return mv88e6xxx_port_set_cmode(chip, port, mode, true);559559}560560561561int mv88e6185_port_get_cmode(struct mv88e6xxx_chip *chip, int port, u8 *cmode)
+5-5
drivers/net/dsa/sja1105/sja1105_main.c
···1569156915701570 if (enabled) {15711571 /* Enable VLAN filtering. */15721572- tpid = ETH_P_8021AD;15731573- tpid2 = ETH_P_8021Q;15721572+ tpid = ETH_P_8021Q;15731573+ tpid2 = ETH_P_8021AD;15741574 } else {15751575 /* Disable VLAN filtering. */15761576 tpid = ETH_P_SJA1105;···1579157915801580 table = &priv->static_config.tables[BLK_IDX_GENERAL_PARAMS];15811581 general_params = table->entries;15821582- /* EtherType used to identify outer tagged (S-tag) VLAN traffic */15831583- general_params->tpid = tpid;15841582 /* EtherType used to identify inner tagged (C-tag) VLAN traffic */15831583+ general_params->tpid = tpid;15841584+ /* EtherType used to identify outer tagged (S-tag) VLAN traffic */15851585 general_params->tpid2 = tpid2;15861586 /* When VLAN filtering is on, we need to at least be able to15871587 * decode management traffic through the "backup plan".···18551855 if (!clone)18561856 goto out;1857185718581858- sja1105_ptp_txtstamp_skb(ds, slot, clone);18581858+ sja1105_ptp_txtstamp_skb(ds, port, clone);1859185918601860out:18611861 mutex_unlock(&priv->mgmt_lock);
···477477 if (admin->cycle_time_extension)478478 return -ENOTSUPP;479479480480- if (!ns_to_sja1105_delta(admin->base_time)) {481481- dev_err(ds->dev, "A base time of zero is not hardware-allowed\n");482482- return -ERANGE;483483- }484484-485480 for (i = 0; i < admin->num_entries; i++) {486481 s64 delta_ns = admin->entries[i].interval;487482 s64 delta_cycles = ns_to_sja1105_delta(delta_ns);
···504504505505 enum cc_pause requested_fc; /* flow control user has requested */506506 enum cc_pause fc; /* actual link flow control */507507+ enum cc_pause advertised_fc; /* actual advertised flow control */507508508509 enum cc_fec requested_fec; /* Forward Error Correction: */509510 enum cc_fec fec; /* requested and actual in use */
···209209 /* We need to copy the refcount since this ste210210 * may have been traversed several times211211 */212212- refcount_set(&new_ste->refcount, refcount_read(&cur_ste->refcount));212212+ new_ste->refcount = cur_ste->refcount;213213214214 /* Link old STEs rule_mem list to the new ste */215215 mlx5dr_rule_update_rule_member(cur_ste, new_ste);···637637 rule_mem = kvzalloc(sizeof(*rule_mem), GFP_KERNEL);638638 if (!rule_mem)639639 return -ENOMEM;640640+641641+ INIT_LIST_HEAD(&rule_mem->list);642642+ INIT_LIST_HEAD(&rule_mem->use_ste_list);640643641644 rule_mem->ste = ste;642645 list_add_tail(&rule_mem->list, &nic_rule->rule_members_list);
···651651 mlxsw_sp_port->tclass_qdiscs[tclass_num].handle == p->child_handle)652652 return 0;653653654654+ if (!p->child_handle) {655655+ /* This is an invisible FIFO replacing the original Qdisc.656656+ * Ignore it--the original Qdisc's destroy will follow.657657+ */658658+ return 0;659659+ }660660+654661 /* See if the grafted qdisc is already offloaded on any tclass. If so,655662 * unoffload it.656663 */
···957957 /* default */958958 break;959959 case PHY_INTERFACE_MODE_RGMII:960960+ case PHY_INTERFACE_MODE_RGMII_ID:961961+ case PHY_INTERFACE_MODE_RGMII_RXID:962962+ case PHY_INTERFACE_MODE_RGMII_TXID:960963 reg |= SYSCON_EPIT | SYSCON_ETCS_INT_GMII;961964 break;962965 case PHY_INTERFACE_MODE_RMII:
+1-1
drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
···4444 * rate, which then uses the auto-reparenting feature of the4545 * clock driver, and enabling/disabling the clock.4646 */4747- if (gmac->interface == PHY_INTERFACE_MODE_RGMII) {4747+ if (phy_interface_mode_is_rgmii(gmac->interface)) {4848 clk_set_rate(gmac->tx_clk, SUN7I_GMAC_GMII_RGMII_RATE);4949 clk_prepare_enable(gmac->tx_clk);5050 gmac->clk_enabled = 1;
···566566 struct sfp_bus *bus;567567 int ret;568568569569+ if (!fwnode)570570+ return 0;571571+569572 bus = sfp_bus_find_fwnode(fwnode);570573 if (IS_ERR(bus)) {571574 ret = PTR_ERR(bus);
···222222 case NVME_SC_CAP_EXCEEDED:223223 return BLK_STS_NOSPC;224224 case NVME_SC_LBA_RANGE:225225+ case NVME_SC_CMD_INTERRUPTED:226226+ case NVME_SC_NS_NOT_READY:225227 return BLK_STS_TARGET;226228 case NVME_SC_BAD_ATTRIBUTES:227229 case NVME_SC_ONCS_NOT_SUPPORTED:
···441441 return ret;442442443443 meson_calc_reg_and_bit(bank, pin, REG_DS, ®, &bit);444444+ bit = bit << 1;444445445446 ret = regmap_read(pc->reg_ds, reg, &val);446447 if (ret)
+1-1
drivers/platform/mips/Kconfig
···18181919config CPU_HWMON2020 tristate "Loongson-3 CPU HWMon Driver"2121- depends on CONFIG_MACH_LOONGSON642121+ depends on MACH_LOONGSON642222 select HWMON2323 default y2424 help
+3
drivers/powercap/intel_rapl_common.c
···12951295 struct cpuinfo_x86 *c = &cpu_data(cpu);12961296 int ret;1297129712981298+ if (!rapl_defaults)12991299+ return ERR_PTR(-ENODEV);13001300+12981301 rp = kzalloc(sizeof(struct rapl_package), GFP_KERNEL);12991302 if (!rp)13001303 return ERR_PTR(-ENOMEM);
+14-17
drivers/ptp/ptp_clock.c
···166166 .read = ptp_read,167167};168168169169-static void delete_ptp_clock(struct posix_clock *pc)169169+static void ptp_clock_release(struct device *dev)170170{171171- struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);171171+ struct ptp_clock *ptp = container_of(dev, struct ptp_clock, dev);172172173173 mutex_destroy(&ptp->tsevq_mux);174174 mutex_destroy(&ptp->pincfg_mux);···213213 }214214215215 ptp->clock.ops = ptp_clock_ops;216216- ptp->clock.release = delete_ptp_clock;217216 ptp->info = info;218217 ptp->devid = MKDEV(major, index);219218 ptp->index = index;···235236 if (err)236237 goto no_pin_groups;237238238238- /* Create a new device in our class. */239239- ptp->dev = device_create_with_groups(ptp_class, parent, ptp->devid,240240- ptp, ptp->pin_attr_groups,241241- "ptp%d", ptp->index);242242- if (IS_ERR(ptp->dev)) {243243- err = PTR_ERR(ptp->dev);244244- goto no_device;245245- }246246-247239 /* Register a new PPS source. */248240 if (info->pps) {249241 struct pps_source_info pps;···250260 }251261 }252262253253- /* Create a posix clock. */254254- err = posix_clock_register(&ptp->clock, ptp->devid);263263+ /* Initialize a new device of our class in our clock structure. */264264+ device_initialize(&ptp->dev);265265+ ptp->dev.devt = ptp->devid;266266+ ptp->dev.class = ptp_class;267267+ ptp->dev.parent = parent;268268+ ptp->dev.groups = ptp->pin_attr_groups;269269+ ptp->dev.release = ptp_clock_release;270270+ dev_set_drvdata(&ptp->dev, ptp);271271+ dev_set_name(&ptp->dev, "ptp%d", ptp->index);272272+273273+ /* Create a posix clock and link it to the device. 
*/274274+ err = posix_clock_register(&ptp->clock, &ptp->dev);255275 if (err) {256276 pr_err("failed to create posix clock\n");257277 goto no_clock;···273273 if (ptp->pps_source)274274 pps_unregister_source(ptp->pps_source);275275no_pps:276276- device_destroy(ptp_class, ptp->devid);277277-no_device:278276 ptp_cleanup_pin_groups(ptp);279277no_pin_groups:280278 if (ptp->kworker)···302304 if (ptp->pps_source)303305 pps_unregister_source(ptp->pps_source);304306305305- device_destroy(ptp_class, ptp->devid);306307 ptp_cleanup_pin_groups(ptp);307308308309 posix_clock_unregister(&ptp->clock);
+1-1
drivers/ptp/ptp_private.h
···28282929struct ptp_clock {3030 struct posix_clock clock;3131- struct device *dev;3131+ struct device dev;3232 struct ptp_clock_info *info;3333 dev_t devid;3434 int index; /* index into clocks.map */
+7-4
drivers/regulator/axp20x-regulator.c
···413413 int i;414414415415 for (i = 0; i < rate_count; i++) {416416- if (ramp <= slew_rates[i])417417- cfg = AXP20X_DCDC2_LDO3_V_RAMP_LDO3_RATE(i);418418- else416416+ if (ramp > slew_rates[i])419417 break;418418+419419+ if (id == AXP20X_DCDC2)420420+ cfg = AXP20X_DCDC2_LDO3_V_RAMP_DCDC2_RATE(i);421421+ else422422+ cfg = AXP20X_DCDC2_LDO3_V_RAMP_LDO3_RATE(i);420423 }421424422425 if (cfg == 0xff) {···608605 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO1_MASK),609606 AXP_DESC(AXP22X, ELDO2, "eldo2", "eldoin", 700, 3300, 100,610607 AXP22X_ELDO2_V_OUT, AXP22X_ELDO2_V_OUT_MASK,611611- AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO1_MASK),608608+ AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO2_MASK),612609 AXP_DESC(AXP22X, ELDO3, "eldo3", "eldoin", 700, 3300, 100,613610 AXP22X_ELDO3_V_OUT, AXP22X_ELDO3_V_OUT_MASK,614611 AXP22X_PWR_OUT_CTRL2, AXP22X_PWR_OUT_ELDO3_MASK),
···379379CLK_OF_DECLARE_DRIVER(sun50i_h6_rtc_clk, "allwinner,sun50i-h6-rtc",380380 sun50i_h6_rtc_clk_init);381381382382+/*383383+ * The R40 user manual is self-conflicting on whether the prescaler is384384+ * fixed or configurable. The clock diagram shows it as fixed, but there385385+ * is also a configurable divider in the RTC block.386386+ */387387+static const struct sun6i_rtc_clk_data sun8i_r40_rtc_data = {388388+ .rc_osc_rate = 16000000,389389+ .fixed_prescaler = 512,390390+};391391+static void __init sun8i_r40_rtc_clk_init(struct device_node *node)392392+{393393+ sun6i_rtc_clk_init(node, &sun8i_r40_rtc_data);394394+}395395+CLK_OF_DECLARE_DRIVER(sun8i_r40_rtc_clk, "allwinner,sun8i-r40-rtc",396396+ sun8i_r40_rtc_clk_init);397397+382398static const struct sun6i_rtc_clk_data sun8i_v3_rtc_data = {383399 .rc_osc_rate = 32000,384400 .has_out_clk = 1,
···287287 card->state = CARD_STATE_HARDSETUP;288288 }289289 if (card->state == CARD_STATE_HARDSETUP) {290290- qeth_qdio_clear_card(card, 0);291290 qeth_drain_output_queues(card);292291 qeth_clear_working_pool_list(card);293292 card->state = CARD_STATE_DOWN;294293 }295294295295+ qeth_qdio_clear_card(card, 0);296296 flush_workqueue(card->event_wq);297297 card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED;298298 card->info.promisc_mode = 0;···19521952/* check if VNICC is currently enabled */19531953bool qeth_l2_vnicc_is_in_use(struct qeth_card *card)19541954{19551955- /* if everything is turned off, VNICC is not active */19561956- if (!card->options.vnicc.cur_chars)19551955+ if (!card->options.vnicc.sup_chars)19571956 return false;19581957 /* default values are only OK if rx_bcast was not enabled by user19591958 * or the card is offline.···20392040 /* enforce assumed default values and recover settings, if changed */20402041 error |= qeth_l2_vnicc_recover_timeout(card, QETH_VNICC_LEARNING,20412042 timeout);20422042- chars_tmp = card->options.vnicc.wanted_chars ^ QETH_VNICC_DEFAULT;20432043- chars_tmp |= QETH_VNICC_BRIDGE_INVISIBLE;20432043+ /* Change chars, if necessary */20442044+ chars_tmp = card->options.vnicc.wanted_chars ^20452045+ card->options.vnicc.cur_chars;20442046 chars_len = sizeof(card->options.vnicc.wanted_chars) * BITS_PER_BYTE;20452047 for_each_set_bit(i, &chars_tmp, chars_len) {20462048 vnicc = BIT(i);
···85558555 psli->sli_flag &= ~LPFC_SLI_ASYNC_MBX_BLK;85568556 spin_unlock_irq(&phba->hbalock);8557855785588558- /* wake up worker thread to post asynchronlous mailbox command */85588558+ /* wake up worker thread to post asynchronous mailbox command */85598559 lpfc_worker_wake_up(phba);85608560}85618561···88238823 return rc;88248824 }8825882588268826- /* Now, interrupt mode asynchrous mailbox command */88268826+ /* Now, interrupt mode asynchronous mailbox command */88278827 rc = lpfc_mbox_cmd_check(phba, mboxq);88288828 if (rc) {88298829 lpfc_printf_log(phba, KERN_ERR, LOG_MBOX | LOG_SLI,···1311213112}13113131131311413114/**1311513115- * lpfc_sli4_sp_handle_async_event - Handle an asynchroous event1311513115+ * lpfc_sli4_sp_handle_async_event - Handle an asynchronous event1311613116 * @phba: Pointer to HBA context object.1311713117 * @cqe: Pointer to mailbox completion queue entry.1311813118 *1311913119- * This routine process a mailbox completion queue entry with asynchrous1311913119+ * This routine process a mailbox completion queue entry with asynchronous1312013120 * event.1312113121 *1312213122 * Return: true if work posted to worker thread, otherwise false.···1327013270 * @cqe: Pointer to mailbox completion queue entry.1327113271 *1327213272 * This routine process a mailbox completion queue entry, it invokes the1327313273- * proper mailbox complete handling or asynchrous event handling routine1327313273+ * proper mailbox complete handling or asynchronous event handling routine1327413274 * according to the MCQE's async bit.1327513275 *1327613276 * Return: true if work posted to worker thread, otherwise false.
···185185 struct spi_transfer *cur_transfer;186186 struct spi_message *cur_msg;187187 struct chip_data *cur_chip;188188+ size_t progress;188189 size_t len;189190 const void *tx;190191 void *rx;···587586 dspi->tx_cmd |= SPI_PUSHR_CMD_CTCNT;588587589588 if (dspi->devtype_data->xspi_mode && dspi->bits_per_word > 16) {590590- /* Write two TX FIFO entries first, and then the corresponding591591- * CMD FIFO entry.589589+ /* Write the CMD FIFO entry first, and then the two590590+ * corresponding TX FIFO entries.592591 */593592 u32 data = dspi_pop_tx(dspi);594593595595- if (dspi->cur_chip->ctar_val & SPI_CTAR_LSBFE) {596596- /* LSB */597597- tx_fifo_write(dspi, data & 0xFFFF);598598- tx_fifo_write(dspi, data >> 16);599599- } else {600600- /* MSB */601601- tx_fifo_write(dspi, data >> 16);602602- tx_fifo_write(dspi, data & 0xFFFF);603603- }604594 cmd_fifo_write(dspi);595595+ tx_fifo_write(dspi, data & 0xFFFF);596596+ tx_fifo_write(dspi, data >> 16);605597 } else {606598 /* Write one entry to both TX FIFO and CMD FIFO607599 * simultaneously.···652658 u32 spi_tcr;653659654660 spi_take_timestamp_post(dspi->ctlr, dspi->cur_transfer,655655- dspi->tx - dspi->bytes_per_word, !dspi->irq);661661+ dspi->progress, !dspi->irq);656662657663 /* Get transfer counter (in number of SPI transfers). 
It was658664 * reset to 0 when transfer(s) were started.···661667 spi_tcnt = SPI_TCR_GET_TCNT(spi_tcr);662668 /* Update total number of bytes that were transferred */663669 msg->actual_length += spi_tcnt * dspi->bytes_per_word;670670+ dspi->progress += spi_tcnt;664671665672 trans_mode = dspi->devtype_data->trans_mode;666673 if (trans_mode == DSPI_EOQ_MODE)···674679 return 0;675680676681 spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer,677677- dspi->tx, !dspi->irq);682682+ dspi->progress, !dspi->irq);678683679684 if (trans_mode == DSPI_EOQ_MODE)680685 dspi_eoq_write(dspi);···763768 dspi->rx = transfer->rx_buf;764769 dspi->rx_end = dspi->rx + transfer->len;765770 dspi->len = transfer->len;771771+ dspi->progress = 0;766772 /* Validated transfer specific frame size (defaults applied) */767773 dspi->bits_per_word = transfer->bits_per_word;768774 if (transfer->bits_per_word <= 8)···785789 SPI_CTARE_DTCP(1));786790787791 spi_take_timestamp_pre(dspi->ctlr, dspi->cur_transfer,788788- dspi->tx, !dspi->irq);792792+ dspi->progress, !dspi->irq);789793790794 trans_mode = dspi->devtype_data->trans_mode;791795 switch (trans_mode) {
+19-12
drivers/spi/spi-uniphier.c
···290290 }291291}292292293293-static void uniphier_spi_fill_tx_fifo(struct uniphier_spi_priv *priv)293293+static void uniphier_spi_set_fifo_threshold(struct uniphier_spi_priv *priv,294294+ unsigned int threshold)294295{295295- unsigned int fifo_threshold, fill_bytes;296296 u32 val;297297298298- fifo_threshold = DIV_ROUND_UP(priv->rx_bytes,299299- bytes_per_word(priv->bits_per_word));300300- fifo_threshold = min(fifo_threshold, SSI_FIFO_DEPTH);301301-302302- fill_bytes = fifo_threshold - (priv->rx_bytes - priv->tx_bytes);303303-304304- /* set fifo threshold */305298 val = readl(priv->base + SSI_FC);306299 val &= ~(SSI_FC_TXFTH_MASK | SSI_FC_RXFTH_MASK);307307- val |= FIELD_PREP(SSI_FC_TXFTH_MASK, fifo_threshold);308308- val |= FIELD_PREP(SSI_FC_RXFTH_MASK, fifo_threshold);300300+ val |= FIELD_PREP(SSI_FC_TXFTH_MASK, SSI_FIFO_DEPTH - threshold);301301+ val |= FIELD_PREP(SSI_FC_RXFTH_MASK, threshold);309302 writel(val, priv->base + SSI_FC);303303+}310304311311- while (fill_bytes--)305305+static void uniphier_spi_fill_tx_fifo(struct uniphier_spi_priv *priv)306306+{307307+ unsigned int fifo_threshold, fill_words;308308+ unsigned int bpw = bytes_per_word(priv->bits_per_word);309309+310310+ fifo_threshold = DIV_ROUND_UP(priv->rx_bytes, bpw);311311+ fifo_threshold = min(fifo_threshold, SSI_FIFO_DEPTH);312312+313313+ uniphier_spi_set_fifo_threshold(priv, fifo_threshold);314314+315315+ fill_words = fifo_threshold -316316+ DIV_ROUND_UP(priv->rx_bytes - priv->tx_bytes, bpw);317317+318318+ while (fill_words--)312319 uniphier_spi_send(priv);313320}314321
+8-14
drivers/spi/spi.c
···14991499 * advances its @tx buffer pointer monotonically.15001500 * @ctlr: Pointer to the spi_controller structure of the driver15011501 * @xfer: Pointer to the transfer being timestamped15021502- * @tx: Pointer to the current word within the xfer->tx_buf that the driver is15031503- * preparing to transmit right now.15021502+ * @progress: How many words (not bytes) have been transferred so far15041503 * @irqs_off: If true, will disable IRQs and preemption for the duration of the15051504 * transfer, for less jitter in time measurement. Only compatible15061505 * with PIO drivers. If true, must follow up with···15091510 */15101511void spi_take_timestamp_pre(struct spi_controller *ctlr,15111512 struct spi_transfer *xfer,15121512- const void *tx, bool irqs_off)15131513+ size_t progress, bool irqs_off)15131514{15141514- u8 bytes_per_word = DIV_ROUND_UP(xfer->bits_per_word, 8);15151515-15161515 if (!xfer->ptp_sts)15171516 return;1518151715191518 if (xfer->timestamped_pre)15201519 return;1521152015221522- if (tx < (xfer->tx_buf + xfer->ptp_sts_word_pre * bytes_per_word))15211521+ if (progress < xfer->ptp_sts_word_pre)15231522 return;1524152315251524 /* Capture the resolution of the timestamp */15261526- xfer->ptp_sts_word_pre = (tx - xfer->tx_buf) / bytes_per_word;15251525+ xfer->ptp_sts_word_pre = progress;1527152615281527 xfer->timestamped_pre = true;15291528···15431546 * timestamped.15441547 * @ctlr: Pointer to the spi_controller structure of the driver15451548 * @xfer: Pointer to the transfer being timestamped15461546- * @tx: Pointer to the current word within the xfer->tx_buf that the driver has15471547- * just transmitted.15491549+ * @progress: How many words (not bytes) have been transferred so far15481550 * @irqs_off: If true, will re-enable IRQs and preemption for the local CPU.15491551 */15501552void spi_take_timestamp_post(struct spi_controller *ctlr,15511553 struct spi_transfer *xfer,15521552- const void *tx, bool irqs_off)15541554+ size_t progress, bool 
irqs_off)15531555{15541554- u8 bytes_per_word = DIV_ROUND_UP(xfer->bits_per_word, 8);15551555-15561556 if (!xfer->ptp_sts)15571557 return;1558155815591559 if (xfer->timestamped_post)15601560 return;1561156115621562- if (tx < (xfer->tx_buf + xfer->ptp_sts_word_post * bytes_per_word))15621562+ if (progress < xfer->ptp_sts_word_post)15631563 return;1564156415651565 ptp_read_system_postts(xfer->ptp_sts);···15671573 }1568157415691575 /* Capture the resolution of the timestamp */15701570- xfer->ptp_sts_word_post = (tx - xfer->tx_buf) / bytes_per_word;15761576+ xfer->ptp_sts_word_post = progress;1571157715721578 xfer->timestamped_post = true;15731579}
···59596060 kfree(usb_buffer);61616262- if (ret >= 0 && ret < (int)length)6262+ if (ret == (int)length)6363+ ret = 0;6464+ else6365 ret = -EIO;64666567end_unlock:···7472{7573 return vnt_control_out(priv, MESSAGE_TYPE_WRITE,7674 reg_off, reg, sizeof(u8), &data);7575+}7676+7777+int vnt_control_out_blocks(struct vnt_private *priv,7878+ u16 block, u8 reg, u16 length, u8 *data)7979+{8080+ int ret = 0, i;8181+8282+ for (i = 0; i < length; i += block) {8383+ u16 len = min_t(int, length - i, block);8484+8585+ ret = vnt_control_out(priv, MESSAGE_TYPE_WRITE,8686+ i, reg, len, data + i);8787+ if (ret)8888+ goto end;8989+ }9090+end:9191+ return ret;7792}78937994int vnt_control_in(struct vnt_private *priv, u8 request, u16 value,···122103123104 kfree(usb_buffer);124105125125- if (ret >= 0 && ret < (int)length)106106+ if (ret == (int)length)107107+ ret = 0;108108+ else126109 ret = -EIO;127110128111end_unlock:
···9999 if (vnt_init(priv)) {100100 /* If fail all ends TODO retry */101101 dev_err(&priv->usb->dev, "failed to start\n");102102+ usb_set_intfdata(priv->intf, NULL);102103 ieee80211_free_hw(priv->hw);103104 return;104105 }
+3-1
drivers/target/target_core_iblock.c
···646646 }647647648648 bip->bip_iter.bi_size = bio_integrity_bytes(bi, bio_sectors(bio));649649- bip_set_seed(bip, bio->bi_iter.bi_sector);649649+ /* virtual start sector must be in integrity interval units */650650+ bip_set_seed(bip, bio->bi_iter.bi_sector >>651651+ (bi->interval_exp - SECTOR_SHIFT));650652651653 pr_debug("IBLOCK BIP Size: %u Sector: %llu\n", bip->bip_iter.bi_size,652654 (unsigned long long)bip->bip_iter.bi_sector);
+3
drivers/thermal/qcom/tsens.c
···110110 irq = platform_get_irq_byname(pdev, "uplow");111111 if (irq < 0) {112112 ret = irq;113113+ /* For old DTs with no IRQ defined */114114+ if (irq == -ENXIO)115115+ ret = 0;113116 goto err_put_device;114117 }115118
+10
drivers/tty/serdev/core.c
···663663 return AE_OK;664664}665665666666+static const struct acpi_device_id serdev_acpi_devices_blacklist[] = {667667+ { "INT3511", 0 },668668+ { "INT3512", 0 },669669+ { },670670+};671671+666672static acpi_status acpi_serdev_add_device(acpi_handle handle, u32 level,667673 void *data, void **return_value)668674{···679673 return AE_OK;680674681675 if (acpi_device_enumerated(adev))676676+ return AE_OK;677677+678678+ /* Skip if black listed */679679+ if (!acpi_match_device_ids(adev, serdev_acpi_devices_blacklist))682680 return AE_OK;683681684682 if (acpi_serdev_check_resources(ctrl, adev))
+1-2
drivers/tty/tty_port.c
···8989{9090 if (WARN_ON(index >= driver->num))9191 return;9292- if (!driver->ports[index])9393- driver->ports[index] = port;9292+ driver->ports[index] = port;9493}9594EXPORT_SYMBOL_GPL(tty_port_link_device);9695
···2467246724682468static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)24692469{24702470+ /*24712471+ * For OUT direction, host may send less than the setup24722472+ * length. Return true for all OUT requests.24732473+ */24742474+ if (!req->direction)24752475+ return true;24762476+24702477 return req->request.actual == req->request.length;24712478}24722479
+1
drivers/usb/gadget/udc/Kconfig
···445445 tristate "NVIDIA Tegra Superspeed USB 3.0 Device Controller"446446 depends on ARCH_TEGRA || COMPILE_TEST447447 depends on PHY_TEGRA_XUSB448448+ select USB_ROLE_SWITCH448449 help449450 Enables NVIDIA Tegra USB 3.0 device mode controller driver.450451
+6-2
drivers/usb/host/ohci-da8xx.c
···415415 }416416417417 da8xx_ohci->oc_gpio = devm_gpiod_get_optional(dev, "oc", GPIOD_IN);418418- if (IS_ERR(da8xx_ohci->oc_gpio))418418+ if (IS_ERR(da8xx_ohci->oc_gpio)) {419419+ error = PTR_ERR(da8xx_ohci->oc_gpio);419420 goto err;421421+ }420422421423 if (da8xx_ohci->oc_gpio) {422424 oc_irq = gpiod_to_irq(da8xx_ohci->oc_gpio);423423- if (oc_irq < 0)425425+ if (oc_irq < 0) {426426+ error = oc_irq;424427 goto err;428428+ }425429426430 error = devm_request_threaded_irq(dev, oc_irq, NULL,427431 ohci_da8xx_oc_thread, IRQF_TRIGGER_RISING |
+5-2
drivers/usb/musb/jz4740.c
···7575static int jz4740_musb_init(struct musb *musb)7676{7777 struct device *dev = musb->controller->parent;7878+ int err;78797980 if (dev->of_node)8081 musb->xceiv = devm_usb_get_phy_by_phandle(dev, "phys", 0);8182 else8283 musb->xceiv = devm_usb_get_phy(dev, USB_PHY_TYPE_USB2);8384 if (IS_ERR(musb->xceiv)) {8484- dev_err(dev, "No transceiver configured\n");8585- return PTR_ERR(musb->xceiv);8585+ err = PTR_ERR(musb->xceiv);8686+ if (err != -EPROBE_DEFER)8787+ dev_err(dev, "No transceiver configured: %d", err);8888+ return err;8689 }87908891 /* Silicon does not implement ConfigData register.
+11
drivers/usb/musb/musb_core.c
···18401840#define MUSB_QUIRK_B_INVALID_VBUS_91 (MUSB_DEVCTL_BDEVICE | \18411841 (2 << MUSB_DEVCTL_VBUS_SHIFT) | \18421842 MUSB_DEVCTL_SESSION)18431843+#define MUSB_QUIRK_B_DISCONNECT_99 (MUSB_DEVCTL_BDEVICE | \18441844+ (3 << MUSB_DEVCTL_VBUS_SHIFT) | \18451845+ MUSB_DEVCTL_SESSION)18431846#define MUSB_QUIRK_A_DISCONNECT_19 ((3 << MUSB_DEVCTL_VBUS_SHIFT) | \18441847 MUSB_DEVCTL_SESSION)18451848···18651862 s = MUSB_DEVCTL_FSDEV | MUSB_DEVCTL_LSDEV |18661863 MUSB_DEVCTL_HR;18671864 switch (devctl & ~s) {18651865+ case MUSB_QUIRK_B_DISCONNECT_99:18661866+ musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");18671867+ schedule_delayed_work(&musb->irq_work,18681868+ msecs_to_jiffies(1000));18691869+ break;18681870 case MUSB_QUIRK_B_INVALID_VBUS_91:18691871 if (musb->quirk_retries && !musb->flush_irq_work) {18701872 musb_dbg(musb,···23172309 musb_platform_disable(musb);23182310 musb_disable_interrupts(musb);23192311 musb_writeb(musb->mregs, MUSB_DEVCTL, 0);23122312+23132313+ /* MUSB_POWER_SOFTCONN might be already set, JZ4740 does this. */23142314+ musb_writeb(musb->mregs, MUSB_POWER, 0);2320231523212316 /* Init IRQ workqueue before request_irq */23222317 INIT_DELAYED_WORK(&musb->irq_work, musb_irq_work);
···3838 spinlock_t susp_lock;3939 unsigned int suspended:1;4040 unsigned int use_send_setup:1;4141+ unsigned int use_zlp:1;4142 int in_flight;4243 unsigned int open_ports;4344 void *private;
···432432433433 if (status & TCPC_ALERT_RX_STATUS) {434434 struct pd_message msg;435435- unsigned int cnt;435435+ unsigned int cnt, payload_cnt;436436 u16 header;437437438438 regmap_read(tcpci->regmap, TCPC_RX_BYTE_CNT, &cnt);439439+ /*440440+ * 'cnt' corresponds to READABLE_BYTE_COUNT in section 4.4.14441441+ * of the TCPCI spec [Rev 2.0 Ver 1.0 October 2017] and is442442+ * defined in table 4-36 as one greater than the number of443443+ * bytes received. And that number includes the header. So:444444+ */445445+ if (cnt > 3)446446+ payload_cnt = cnt - (1 + sizeof(msg.header));447447+ else448448+ payload_cnt = 0;439449440450 tcpci_read16(tcpci, TCPC_RX_HDR, &header);441451 msg.header = cpu_to_le16(header);442452443443- if (WARN_ON(cnt > sizeof(msg.payload)))444444- cnt = sizeof(msg.payload);453453+ if (WARN_ON(payload_cnt > sizeof(msg.payload)))454454+ payload_cnt = sizeof(msg.payload);445455446446- if (cnt > 0)456456+ if (payload_cnt > 0)447457 regmap_raw_read(tcpci->regmap, TCPC_RX_DATA,448448- &msg.payload, cnt);458458+ &msg.payload, payload_cnt);449459450460 /* Read complete, clear RX status alert bit */451461 tcpci_write16(tcpci, TCPC_ALERT, TCPC_ALERT_RX_STATUS);
···687687config MAX77620_WATCHDOG688688 tristate "Maxim Max77620 Watchdog Timer"689689 depends on MFD_MAX77620 || COMPILE_TEST690690+ select WATCHDOG_CORE690691 help691692 This is the driver for the Max77620 watchdog timer.692693 Say 'Y' here to enable the watchdog timer support for···14451444config TQMX86_WDT14461445 tristate "TQ-Systems TQMX86 Watchdog Timer"14471446 depends on X8614471447+ select WATCHDOG_CORE14481448 help14491449 This is the driver for the hardware watchdog timer in the TQMX86 IO14501450 controller found on some of their ComExpress Modules.
+1-1
drivers/watchdog/imx7ulp_wdt.c
···112112{113113 struct imx7ulp_wdt_device *wdt = watchdog_get_drvdata(wdog);114114115115- imx7ulp_wdt_enable(wdt->base, true);115115+ imx7ulp_wdt_enable(wdog, true);116116 imx7ulp_wdt_set_timeout(&wdt->wdd, 1);117117118118 /* wait for wdog to fire */
+2-2
drivers/watchdog/orion_wdt.c
···602602 set_bit(WDOG_HW_RUNNING, &dev->wdt.status);603603604604 /* Request the IRQ only after the watchdog is disabled */605605- irq = platform_get_irq(pdev, 0);605605+ irq = platform_get_irq_optional(pdev, 0);606606 if (irq > 0) {607607 /*608608 * Not all supported platforms specify an interrupt for the···617617 }618618619619 /* Optional 2nd interrupt for pretimeout */620620- irq = platform_get_irq(pdev, 1);620620+ irq = platform_get_irq_optional(pdev, 1);621621 if (irq > 0) {622622 orion_wdt_info.options |= WDIOF_PRETIMEOUT;623623 ret = devm_request_irq(&pdev->dev, irq, orion_wdt_pre_irq,
···14791479 disk_num_bytes =14801480 btrfs_file_extent_disk_num_bytes(leaf, fi);14811481 /*14821482- * If extent we got ends before our range starts, skip14831483- * to next extent14821482+ * If the extent we got ends before our current offset,14831483+ * skip to the next extent.14841484 */14851485- if (extent_end <= start) {14851485+ if (extent_end <= cur_offset) {14861486 path->slots[0]++;14871487 goto next_slot;14881488 }
+5-28
fs/buffer.c
···30313031 * errors, this only handles the "we need to be able to30323032 * do IO at the final sector" case.30333033 */30343034-void guard_bio_eod(int op, struct bio *bio)30343034+void guard_bio_eod(struct bio *bio)30353035{30363036 sector_t maxsector;30373037- struct bio_vec *bvec = bio_last_bvec_all(bio);30383038- unsigned truncated_bytes;30393037 struct hd_struct *part;3040303830413039 rcu_read_lock();···30593061 if (likely((bio->bi_iter.bi_size >> 9) <= maxsector))30603062 return;3061306330623062- /* Uhhuh. We've got a bio that straddles the device size! */30633063- truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9);30643064-30653065- /*30663066- * The bio contains more than one segment which spans EOD, just return30673067- * and let IO layer turn it into an EIO30683068- */30693069- if (truncated_bytes > bvec->bv_len)30703070- return;30713071-30723072- /* Truncate the bio.. */30733073- bio->bi_iter.bi_size -= truncated_bytes;30743074- bvec->bv_len -= truncated_bytes;30753075-30763076- /* ..and clear the end of the buffer for reads */30773077- if (op == REQ_OP_READ) {30783078- struct bio_vec bv;30793079-30803080- mp_bvec_last_segment(bvec, &bv);30813081- zero_user(bv.bv_page, bv.bv_offset + bv.bv_len,30823082- truncated_bytes);30833083- }30643064+ bio_truncate(bio, maxsector << 9);30843065}3085306630863067static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,···30953118 bio->bi_end_io = end_bio_bh_io_sync;30963119 bio->bi_private = bh;3097312030983098- /* Take care of bh's that straddle the end of the device */30993099- guard_bio_eod(op, bio);31003100-31013121 if (buffer_meta(bh))31023122 op_flags |= REQ_META;31033123 if (buffer_prio(bh))31043124 op_flags |= REQ_PRIO;31053125 bio_set_op_attrs(bio, op, op_flags);31263126+31273127+ /* Take care of bh's that straddle the end of the device */31283128+ guard_bio_eod(bio);3106312931073130 if (wbc) {31083131 wbc_init_bio(wbc, bio);
+1-1
fs/char_dev.c
···352352353353 if (owner && !try_module_get(owner))354354 return NULL;355355- kobj = kobject_get(&p->kobj);355355+ kobj = kobject_get_unless_zero(&p->kobj);356356 if (!kobj)357357 module_put(owner);358358 return kobj;
···139139 dput(dentry);140140}141141142142+static bool reparse_file_needs_reval(const struct cifs_fattr *fattr)143143+{144144+ if (!(fattr->cf_cifsattrs & ATTR_REPARSE))145145+ return false;146146+ /*147147+ * The DFS tags should be only interpreted by server side as per148148+ * MS-FSCC 2.1.2.1, but let's include them anyway.149149+ *150150+ * Besides, if cf_cifstag is unset (0), then we still need it to be151151+ * revalidated to know exactly what reparse point it is.152152+ */153153+ switch (fattr->cf_cifstag) {154154+ case IO_REPARSE_TAG_DFS:155155+ case IO_REPARSE_TAG_DFSR:156156+ case IO_REPARSE_TAG_SYMLINK:157157+ case IO_REPARSE_TAG_NFS:158158+ case 0:159159+ return true;160160+ }161161+ return false;162162+}163163+142164static void143165cifs_fill_common_info(struct cifs_fattr *fattr, struct cifs_sb_info *cifs_sb)144166{···180158 * is a symbolic link, DFS referral or a reparse point with a direct181159 * access like junctions, deduplicated files, NFS symlinks.182160 */183183- if (fattr->cf_cifsattrs & ATTR_REPARSE)161161+ if (reparse_file_needs_reval(fattr))184162 fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;185163186164 /* non-unix readdir doesn't provide nlink */···216194 }217195}218196197197+static void __dir_info_to_fattr(struct cifs_fattr *fattr, const void *info)198198+{199199+ const FILE_DIRECTORY_INFO *fi = info;200200+201201+ memset(fattr, 0, sizeof(*fattr));202202+ fattr->cf_cifsattrs = le32_to_cpu(fi->ExtFileAttributes);203203+ fattr->cf_eof = le64_to_cpu(fi->EndOfFile);204204+ fattr->cf_bytes = le64_to_cpu(fi->AllocationSize);205205+ fattr->cf_createtime = le64_to_cpu(fi->CreationTime);206206+ fattr->cf_atime = cifs_NTtimeToUnix(fi->LastAccessTime);207207+ fattr->cf_ctime = cifs_NTtimeToUnix(fi->ChangeTime);208208+ fattr->cf_mtime = cifs_NTtimeToUnix(fi->LastWriteTime);209209+}210210+219211void220212cifs_dir_info_to_fattr(struct cifs_fattr *fattr, FILE_DIRECTORY_INFO *info,221213 struct cifs_sb_info *cifs_sb)222214{223223- memset(fattr, 0, 
sizeof(*fattr));224224- fattr->cf_cifsattrs = le32_to_cpu(info->ExtFileAttributes);225225- fattr->cf_eof = le64_to_cpu(info->EndOfFile);226226- fattr->cf_bytes = le64_to_cpu(info->AllocationSize);227227- fattr->cf_createtime = le64_to_cpu(info->CreationTime);228228- fattr->cf_atime = cifs_NTtimeToUnix(info->LastAccessTime);229229- fattr->cf_ctime = cifs_NTtimeToUnix(info->ChangeTime);230230- fattr->cf_mtime = cifs_NTtimeToUnix(info->LastWriteTime);215215+ __dir_info_to_fattr(fattr, info);216216+ cifs_fill_common_info(fattr, cifs_sb);217217+}231218219219+static void cifs_fulldir_info_to_fattr(struct cifs_fattr *fattr,220220+ SEARCH_ID_FULL_DIR_INFO *info,221221+ struct cifs_sb_info *cifs_sb)222222+{223223+ __dir_info_to_fattr(fattr, info);224224+225225+ /* See MS-FSCC 2.4.18 FileIdFullDirectoryInformation */226226+ if (fattr->cf_cifsattrs & ATTR_REPARSE)227227+ fattr->cf_cifstag = le32_to_cpu(info->EaSize);232228 cifs_fill_common_info(fattr, cifs_sb);233229}234230···794754 cifs_std_info_to_fattr(&fattr,795755 (FIND_FILE_STANDARD_INFO *)find_entry,796756 cifs_sb);757757+ break;758758+ case SMB_FIND_FILE_ID_FULL_DIR_INFO:759759+ cifs_fulldir_info_to_fattr(&fattr,760760+ (SEARCH_ID_FULL_DIR_INFO *)find_entry,761761+ cifs_sb);797762 break;798763 default:799764 cifs_dir_info_to_fattr(&fattr,
+1-1
fs/cifs/smb2file.c
···6767 goto out;686869697070- if (oparms->tcon->use_resilient) {7070+ if (oparms->tcon->use_resilient) {7171 /* default timeout is 0, servers pick default (120 seconds) */7272 nr_ioctl_req.Timeout =7373 cpu_to_le32(oparms->tcon->handle_timeout);
+2
fs/direct-io.c
···3939#include <linux/atomic.h>4040#include <linux/prefetch.h>41414242+#include "internal.h"4343+4244/*4345 * How many user pages to map in one call to get_user_pages(). This determines4446 * the size of a structure in the slab cache
+6-1
fs/file.c
···960960 return ksys_dup3(oldfd, newfd, 0);961961}962962963963-SYSCALL_DEFINE1(dup, unsigned int, fildes)963963+int ksys_dup(unsigned int fildes)964964{965965 int ret = -EBADF;966966 struct file *file = fget_raw(fildes);···973973 fput(file);974974 }975975 return ret;976976+}977977+978978+SYSCALL_DEFINE1(dup, unsigned int, fildes)979979+{980980+ return ksys_dup(fildes);976981}977982978983int f_dupfd(unsigned int from, struct file *file, unsigned flags)
···1066106610671067 ocfs2_clear_journal_error(osb->sb, journal->j_journal, osb->slot_num);1068106810691069+ if (replayed) {10701070+ jbd2_journal_lock_updates(journal->j_journal);10711071+ status = jbd2_journal_flush(journal->j_journal);10721072+ jbd2_journal_unlock_updates(journal->j_journal);10731073+ if (status < 0)10741074+ mlog_errno(status);10751075+ }10761076+10691077 status = ocfs2_journal_toggle_dirty(osb, 1, replayed);10701078 if (status < 0) {10711079 mlog_errno(status);
+5-2
fs/posix_acl.c
···631631632632/**633633 * posix_acl_update_mode - update mode in set_acl634634+ * @inode: target inode635635+ * @mode_p: mode (pointer) for update636636+ * @acl: acl pointer634637 *635638 * Update the file mode when setting an ACL: compute the new file permission636639 * bits based on the ACL. In addition, if the ACL is equivalent to the new637637- * file mode, set *acl to NULL to indicate that no ACL should be set.640640+ * file mode, set *@acl to NULL to indicate that no ACL should be set.638641 *639639- * As with chmod, clear the setgit bit if the caller is not in the owning group642642+ * As with chmod, clear the setgid bit if the caller is not in the owning group640643 * or capable of CAP_FSETID (see inode_change_ok).641644 *642645 * Called from set_acl inode operations.
+13
fs/pstore/ram.c
···407407408408 prz = cxt->dprzs[cxt->dump_write_cnt];409409410410+ /*411411+ * Since this is a new crash dump, we need to reset the buffer in412412+ * case it still has an old dump present. Without this, the new dump413413+ * will get appended, which would seriously confuse anything trying414414+ * to check dump file contents. Specifically, ramoops_read_kmsg_hdr()415415+ * expects to find a dump header in the beginning of buffer data, so416416+ * we must reset the buffer values, in order to ensure that the417417+ * header will be written to the beginning of the buffer.418418+ */419419+ persistent_ram_zap(prz);420420+410421 /* Build header and append record contents. */411422 hlen = ramoops_write_kmsg_hdr(prz, record);412423 if (!hlen)···583572 prz_ar[i] = persistent_ram_new(*paddr, zone_sz, sig,584573 &cxt->ecc_info,585574 cxt->memtype, flags, label);575575+ kfree(label);586576 if (IS_ERR(prz_ar[i])) {587577 err = PTR_ERR(prz_ar[i]);588578 dev_err(dev, "failed to request %s mem region (0x%zx@0x%llx): %d\n",···629617 label = kasprintf(GFP_KERNEL, "ramoops:%s", name);630618 *prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info,631619 cxt->memtype, PRZ_FLAG_ZAP_OLD, label);620620+ kfree(label);632621 if (IS_ERR(*prz)) {633622 int err = PTR_ERR(*prz);634623
+1-1
fs/pstore/ram_core.c
···574574 /* Initialize general buffer state. */575575 raw_spin_lock_init(&prz->buffer_lock);576576 prz->flags = flags;577577- prz->label = label;577577+ prz->label = kstrdup(label, GFP_KERNEL);578578579579 ret = persistent_ram_buffer_map(start, size, prz, memtype);580580 if (ret)
···470470 gfp_t);471471extern int bio_uncopy_user(struct bio *);472472void zero_fill_bio_iter(struct bio *bio, struct bvec_iter iter);473473+void bio_truncate(struct bio *bio, unsigned new_size);473474474475static inline void zero_fill_bio(struct bio *bio)475476{
-22
include/linux/bvec.h
···153153 }154154}155155156156-/*157157- * Get the last single-page segment from the multi-page bvec and store it158158- * in @seg159159- */160160-static inline void mp_bvec_last_segment(const struct bio_vec *bvec,161161- struct bio_vec *seg)162162-{163163- unsigned total = bvec->bv_offset + bvec->bv_len;164164- unsigned last_page = (total - 1) / PAGE_SIZE;165165-166166- seg->bv_page = bvec->bv_page + last_page;167167-168168- /* the whole segment is inside the last page */169169- if (bvec->bv_offset >= last_page * PAGE_SIZE) {170170- seg->bv_offset = bvec->bv_offset % PAGE_SIZE;171171- seg->bv_len = bvec->bv_len;172172- } else {173173- seg->bv_offset = 0;174174- seg->bv_len = total - last_page * PAGE_SIZE;175175- }176176-}177177-178156#endif /* __LINUX_BVEC_ITER_H */
+34
include/linux/can/dev.h
···1818#include <linux/can/error.h>1919#include <linux/can/led.h>2020#include <linux/can/netlink.h>2121+#include <linux/can/skb.h>2122#include <linux/netdevice.h>22232324/*···9291#define get_can_dlc(i) (min_t(__u8, (i), CAN_MAX_DLC))9392#define get_canfd_dlc(i) (min_t(__u8, (i), CANFD_MAX_DLC))94939494+/* Check for outgoing skbs that have not been created by the CAN subsystem */9595+static inline bool can_skb_headroom_valid(struct net_device *dev,9696+ struct sk_buff *skb)9797+{9898+ /* af_packet creates a headroom of HH_DATA_MOD bytes which is fine */9999+ if (WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct can_skb_priv)))100100+ return false;101101+102102+ /* af_packet does not apply CAN skb specific settings */103103+ if (skb->ip_summed == CHECKSUM_NONE) {104104+ /* init headroom */105105+ can_skb_prv(skb)->ifindex = dev->ifindex;106106+ can_skb_prv(skb)->skbcnt = 0;107107+108108+ skb->ip_summed = CHECKSUM_UNNECESSARY;109109+110110+ /* perform proper loopback on capable devices */111111+ if (dev->flags & IFF_ECHO)112112+ skb->pkt_type = PACKET_LOOPBACK;113113+ else114114+ skb->pkt_type = PACKET_HOST;115115+116116+ skb_reset_mac_header(skb);117117+ skb_reset_network_header(skb);118118+ skb_reset_transport_header(skb);119119+ }120120+121121+ return true;122122+}123123+95124/* Drop a given socketbuffer if it does not contain a valid CAN frame. */96125static inline bool can_dropped_invalid_skb(struct net_device *dev,97126 struct sk_buff *skb)···137106 cfd->len > CANFD_MAX_DLEN))138107 goto inval_skb;139108 } else109109+ goto inval_skb;110110+111111+ if (!can_skb_headroom_valid(dev, skb))140112 goto inval_skb;141113142114 return false;
+4-1
include/linux/dmaengine.h
···13641364static inline int dmaengine_desc_set_reuse(struct dma_async_tx_descriptor *tx)13651365{13661366 struct dma_slave_caps caps;13671367+ int ret;1367136813681368- dma_get_slave_caps(tx->chan, &caps);13691369+ ret = dma_get_slave_caps(tx->chan, &caps);13701370+ if (ret)13711371+ return ret;1369137213701373 if (caps.descriptor_reuse) {13711374 tx->flags |= DMA_CTRL_REUSE;
···7979 */8080#define round_down(x, y) ((x) & ~__round_mask(x, y))81818282-/**8383- * FIELD_SIZEOF - get the size of a struct's field8484- * @t: the target struct8585- * @f: the target struct's field8686- * Return: the size of @f in the struct definition without having a8787- * declared instance of @t.8888- */8989-#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))9090-9182#define typeof_member(T, m) typeof(((T*)0)->m)92839384#define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP
···122122123123extern void arch_remove_memory(int nid, u64 start, u64 size,124124 struct vmem_altmap *altmap);125125-extern void __remove_pages(struct zone *zone, unsigned long start_pfn,126126- unsigned long nr_pages, struct vmem_altmap *altmap);125125+extern void __remove_pages(unsigned long start_pfn, unsigned long nr_pages,126126+ struct vmem_altmap *altmap);127127128128/* reasonably generic interface to expand the physical pages */129129extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,···342342extern int add_memory_resource(int nid, struct resource *resource);343343extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,344344 unsigned long nr_pages, struct vmem_altmap *altmap);345345+extern void remove_pfn_range_from_zone(struct zone *zone,346346+ unsigned long start_pfn,347347+ unsigned long nr_pages);345348extern bool is_memblock_offlined(struct memory_block *mem);346349extern int sparse_add_section(int nid, unsigned long pfn,347350 unsigned long nr_pages, struct vmem_altmap *altmap);
···4040 FL_READING,4141 FL_CACHEDPRG,4242 /* These 4 come from onenand_state_t, which has been unified here */4343- FL_RESETING,4343+ FL_RESETTING,4444 FL_OTPING,4545 FL_PREPARING_ERASE,4646 FL_VERIFYING_ERASE,
···6969 *7070 * @ops: Functional interface to the clock7171 * @cdev: Character device instance for this clock7272- * @kref: Reference count.7272+ * @dev: Pointer to the clock's device.7373 * @rwsem: Protects the 'zombie' field from concurrent access.7474 * @zombie: If 'zombie' is true, then the hardware has disappeared.7575- * @release: A function to free the structure when the reference count reaches7676- * zero. May be NULL if structure is statically allocated.7775 *7876 * Drivers should embed their struct posix_clock within a private7977 * structure, obtaining a reference to it during callbacks using8078 * container_of().7979+ *8080+ * Drivers should supply an initialized but not exposed struct device8181+ * to posix_clock_register(). It is used to manage lifetime of the8282+ * driver's private structure. Its 'release' field should be set to8383+ * a release function for this private structure.8184 */8285struct posix_clock {8386 struct posix_clock_operations ops;8487 struct cdev cdev;8585- struct kref kref;8888+ struct device *dev;8689 struct rw_semaphore rwsem;8790 bool zombie;8888- void (*release)(struct posix_clock *clk);8991};90929193/**9294 * posix_clock_register() - register a new clock9393- * @clk: Pointer to the clock. Caller must provide 'ops' and 'release'9494- * @devid: Allocated device id9595+ * @clk: Pointer to the clock. Caller must provide 'ops' field9696+ * @dev: Pointer to the initialized device. Caller must provide9797+ * 'release' field9598 *9699 * A clock driver calls this function to register itself with the97100 * clock device subsystem. If 'clk' points to dynamically allocated···103100 *104101 * Returns zero on success, non-zero otherwise.105102 */106106-int posix_clock_register(struct posix_clock *clk, dev_t devid);103103+int posix_clock_register(struct posix_clock *clk, struct device *dev);107104108105/**109106 * posix_clock_unregister() - unregister a clock
+2-2
include/linux/spi/spi.h
···689689/* Helper calls for driver to timestamp transfer */690690void spi_take_timestamp_pre(struct spi_controller *ctlr,691691 struct spi_transfer *xfer,692692- const void *tx, bool irqs_off);692692+ size_t progress, bool irqs_off);693693void spi_take_timestamp_post(struct spi_controller *ctlr,694694 struct spi_transfer *xfer,695695- const void *tx, bool irqs_off);695695+ size_t progress, bool irqs_off);696696697697/* the spi driver core manages memory for the spi_controller classdev */698698extern struct spi_controller *__spi_alloc_controller(struct device *host,
···3434 __kernel_ulong_t __sec;3535#if defined(__sparc__) && defined(__arch64__)3636 unsigned int __usec;3737+ unsigned int __pad;3738#else3839 __kernel_ulong_t __usec;3940#endif
+5-5
include/uapi/linux/kcov.h
···99 * and the comment before kcov_remote_start() for usage details.1010 */1111struct kcov_remote_arg {1212- unsigned int trace_mode; /* KCOV_TRACE_PC or KCOV_TRACE_CMP */1313- unsigned int area_size; /* Length of coverage buffer in words */1414- unsigned int num_handles; /* Size of handles array */1515- __u64 common_handle;1616- __u64 handles[0];1212+ __u32 trace_mode; /* KCOV_TRACE_PC or KCOV_TRACE_CMP */1313+ __u32 area_size; /* Length of coverage buffer in words */1414+ __u32 num_handles; /* Size of handles array */1515+ __aligned_u64 common_handle;1616+ __aligned_u64 handles[0];1717};18181919#define KCOV_REMOTE_MAX_HANDLES 0x100
+6-20
init/main.c
···9393#include <linux/rodata_test.h>9494#include <linux/jump_label.h>9595#include <linux/mem_encrypt.h>9696-#include <linux/file.h>97969897#include <asm/io.h>9998#include <asm/bugs.h>···1157115811581159void console_on_rootfs(void)11591160{11601160- struct file *file;11611161- unsigned int i;11611161+ /* Open the /dev/console as stdin, this should never fail */11621162+ if (ksys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0)11631163+ pr_err("Warning: unable to open an initial console.\n");1162116411631163- /* Open /dev/console in kernelspace, this should never fail */11641164- file = filp_open("/dev/console", O_RDWR, 0);11651165- if (IS_ERR(file))11661166- goto err_out;11671167-11681168- /* create stdin/stdout/stderr, this should never fail */11691169- for (i = 0; i < 3; i++) {11701170- if (f_dupfd(i, file, 0) != i)11711171- goto err_out;11721172- }11731173-11741174- return;11751175-11761176-err_out:11771177- /* no panic -- this might not be fatal */11781178- pr_err("Warning: unable to open an initial console.\n");11791179- return;11651165+ /* create stdout/stderr */11661166+ (void) ksys_dup(0);11671167+ (void) ksys_dup(0);11801168}1181116911821170static noinline void __init kernel_init_freeable(void)
+9-2
kernel/bpf/cgroup.c
···3535 */3636static void cgroup_bpf_release(struct work_struct *work)3737{3838- struct cgroup *cgrp = container_of(work, struct cgroup,3939- bpf.release_work);3838+ struct cgroup *p, *cgrp = container_of(work, struct cgroup,3939+ bpf.release_work);4040 enum bpf_cgroup_storage_type stype;4141 struct bpf_prog_array *old_array;4242 unsigned int type;···6464 }65656666 mutex_unlock(&cgroup_mutex);6767+6868+ for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))6969+ cgroup_bpf_put(p);67706871 percpu_ref_exit(&cgrp->bpf.refcnt);6972 cgroup_put(cgrp);···202199 */203200#define NR ARRAY_SIZE(cgrp->bpf.effective)204201 struct bpf_prog_array *arrays[NR] = {};202202+ struct cgroup *p;205203 int ret, i;206204207205 ret = percpu_ref_init(&cgrp->bpf.refcnt, cgroup_bpf_release_fn, 0,208206 GFP_KERNEL);209207 if (ret)210208 return ret;209209+210210+ for (p = cgroup_parent(cgrp); p; p = cgroup_parent(p))211211+ cgroup_bpf_get(p);211212212213 for (i = 0; i < NR; i++)213214 INIT_LIST_HEAD(&cgrp->bpf.progs[i]);
+29-23
kernel/bpf/verifier.c
···907907 BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5908908};909909910910-static void __mark_reg_not_init(struct bpf_reg_state *reg);910910+static void __mark_reg_not_init(const struct bpf_verifier_env *env,911911+ struct bpf_reg_state *reg);911912912913/* Mark the unknown part of a register (variable offset or scalar value) as913914 * known to have the value @imm.···946945 verbose(env, "mark_reg_known_zero(regs, %u)\n", regno);947946 /* Something bad happened, let's kill all regs */948947 for (regno = 0; regno < MAX_BPF_REG; regno++)949949- __mark_reg_not_init(regs + regno);948948+ __mark_reg_not_init(env, regs + regno);950949 return;951950 }952951 __mark_reg_known_zero(regs + regno);···10551054}1056105510571056/* Mark a register as having a completely unknown (scalar) value. */10581058-static void __mark_reg_unknown(struct bpf_reg_state *reg)10571057+static void __mark_reg_unknown(const struct bpf_verifier_env *env,10581058+ struct bpf_reg_state *reg)10591059{10601060 /*10611061 * Clear type, id, off, and union(map_ptr, range) and···10661064 reg->type = SCALAR_VALUE;10671065 reg->var_off = tnum_unknown;10681066 reg->frameno = 0;10671067+ reg->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ?10681068+ true : false;10691069 __mark_reg_unbounded(reg);10701070}10711071···10781074 verbose(env, "mark_reg_unknown(regs, %u)\n", regno);10791075 /* Something bad happened, let's kill all regs except FP */10801076 for (regno = 0; regno < BPF_REG_FP; regno++)10811081- __mark_reg_not_init(regs + regno);10771077+ __mark_reg_not_init(env, regs + regno);10821078 return;10831079 }10841084- regs += regno;10851085- __mark_reg_unknown(regs);10861086- /* constant backtracking is enabled for root without bpf2bpf calls */10871087- regs->precise = env->subprog_cnt > 1 || !env->allow_ptr_leaks ?10881088- true : false;10801080+ __mark_reg_unknown(env, regs + regno);10891081}1090108210911091-static void __mark_reg_not_init(struct bpf_reg_state 
*reg)10831083+static void __mark_reg_not_init(const struct bpf_verifier_env *env,10841084+ struct bpf_reg_state *reg)10921085{10931093- __mark_reg_unknown(reg);10861086+ __mark_reg_unknown(env, reg);10941087 reg->type = NOT_INIT;10951088}10961089···10981097 verbose(env, "mark_reg_not_init(regs, %u)\n", regno);10991098 /* Something bad happened, let's kill all regs except FP */11001099 for (regno = 0; regno < BPF_REG_FP; regno++)11011101- __mark_reg_not_init(regs + regno);11001100+ __mark_reg_not_init(env, regs + regno);11021101 return;11031102 }11041104- __mark_reg_not_init(regs + regno);11031103+ __mark_reg_not_init(env, regs + regno);11051104}1106110511071106#define DEF_NOT_SUBREG (0)···32353234 }32363235 if (state->stack[spi].slot_type[0] == STACK_SPILL &&32373236 state->stack[spi].spilled_ptr.type == SCALAR_VALUE) {32383238- __mark_reg_unknown(&state->stack[spi].spilled_ptr);32373237+ __mark_reg_unknown(env, &state->stack[spi].spilled_ptr);32393238 for (j = 0; j < BPF_REG_SIZE; j++)32403239 state->stack[spi].slot_type[j] = STACK_MISC;32413240 goto mark;···38933892 if (!reg)38943893 continue;38953894 if (reg_is_pkt_pointer_any(reg))38963896- __mark_reg_unknown(reg);38953895+ __mark_reg_unknown(env, reg);38973896 }38983897}38993898···39213920 if (!reg)39223921 continue;39233922 if (reg->ref_obj_id == ref_obj_id)39243924- __mark_reg_unknown(reg);39233923+ __mark_reg_unknown(env, reg);39253924 }39263925}39273926···45834582 /* Taint dst register if offset had invalid bounds derived from45844583 * e.g. dead branches.45854584 */45864586- __mark_reg_unknown(dst_reg);45854585+ __mark_reg_unknown(env, dst_reg);45874586 return 0;45884587 }45894588···48354834 /* Taint dst register if offset had invalid bounds derived from48364835 * e.g. 
dead branches.48374836 */48384838- __mark_reg_unknown(dst_reg);48374837+ __mark_reg_unknown(env, dst_reg);48394838 return 0;48404839 }4841484048424841 if (!src_known &&48434842 opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) {48444844- __mark_reg_unknown(dst_reg);48434843+ __mark_reg_unknown(env, dst_reg);48454844 return 0;48464845 }48474846···62646263static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)62656264{62666265 struct bpf_reg_state *regs = cur_regs(env);62666266+ static const int ctx_reg = BPF_REG_6;62676267 u8 mode = BPF_MODE(insn->code);62686268 int i, err;62696269···62986296 }6299629763006298 /* check whether implicit source operand (register R6) is readable */63016301- err = check_reg_arg(env, BPF_REG_6, SRC_OP);62996299+ err = check_reg_arg(env, ctx_reg, SRC_OP);63026300 if (err)63036301 return err;63046302···63176315 return -EINVAL;63186316 }6319631763206320- if (regs[BPF_REG_6].type != PTR_TO_CTX) {63186318+ if (regs[ctx_reg].type != PTR_TO_CTX) {63216319 verbose(env,63226320 "at the time of BPF_LD_ABS|IND R6 != pointer to skb\n");63236321 return -EINVAL;···63296327 if (err)63306328 return err;63316329 }63306330+63316331+ err = check_ctx_reg(env, ®s[ctx_reg], ctx_reg);63326332+ if (err < 0)63336333+ return err;6332633463336335 /* reset caller saved regs to unreadable */63346336 for (i = 0; i < CALLER_SAVED_REGS; i++) {···69886982 /* since the register is unused, clear its state69896983 * to make further comparison simpler69906984 */69916991- __mark_reg_not_init(&st->regs[i]);69856985+ __mark_reg_not_init(env, &st->regs[i]);69926986 }6993698769946988 for (i = 0; i < st->allocated_stack / BPF_REG_SIZE; i++) {···69966990 /* liveness must not touch this stack slot anymore */69976991 st->stack[i].spilled_ptr.live |= REG_LIVE_DONE;69986992 if (!(live & REG_LIVE_READ)) {69996999- __mark_reg_not_init(&st->stack[i].spilled_ptr);69936993+ __mark_reg_not_init(env, &st->stack[i].spilled_ptr);70006994 for (j = 0; j < 
BPF_REG_SIZE; j++)70016995 st->stack[i].slot_type[j] = STACK_INVALID;70026996 }
+3-3
kernel/cred.c
···223223 new->magic = CRED_MAGIC;224224#endif225225226226- if (security_cred_alloc_blank(new, GFP_KERNEL) < 0)226226+ if (security_cred_alloc_blank(new, GFP_KERNEL_ACCOUNT) < 0)227227 goto error;228228229229 return new;···282282 new->security = NULL;283283#endif284284285285- if (security_prepare_creds(new, old, GFP_KERNEL) < 0)285285+ if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)286286 goto error;287287 validate_creds(new);288288 return new;···715715#ifdef CONFIG_SECURITY716716 new->security = NULL;717717#endif718718- if (security_prepare_creds(new, old, GFP_KERNEL) < 0)718718+ if (security_prepare_creds(new, old, GFP_KERNEL_ACCOUNT) < 0)719719 goto error;720720721721 put_cred(old);
+8-4
kernel/exit.c
···517517 }518518519519 write_unlock_irq(&tasklist_lock);520520- if (unlikely(pid_ns == &init_pid_ns)) {521521- panic("Attempted to kill init! exitcode=0x%08x\n",522522- father->signal->group_exit_code ?: father->exit_code);523523- }524520525521 list_for_each_entry_safe(p, n, dead, ptrace_entry) {526522 list_del_init(&p->ptrace_entry);···762766 acct_update_integrals(tsk);763767 group_dead = atomic_dec_and_test(&tsk->signal->live);764768 if (group_dead) {769769+ /*770770+ * If the last thread of global init has exited, panic771771+ * immediately to get a useable coredump.772772+ */773773+ if (unlikely(is_global_init(tsk)))774774+ panic("Attempted to kill init! exitcode=0x%08x\n",775775+ tsk->signal->group_exit_code ?: (int)code);776776+765777#ifdef CONFIG_POSIX_TIMERS766778 hrtimer_cancel(&tsk->signal->real_timer);767779 exit_itimers(tsk->signal);
+10
kernel/fork.c
···25782578#endif2579257925802580#ifdef __ARCH_WANT_SYS_CLONE325812581+25822582+/*25832583+ * copy_thread implementations handle CLONE_SETTLS by reading the TLS value from25842584+ * the registers containing the syscall arguments for clone. This doesn't work25852585+ * with clone3 since the TLS value is passed in clone_args instead.25862586+ */25872587+#ifndef CONFIG_HAVE_COPY_THREAD_TLS25882588+#error clone3 requires copy_thread_tls support in arch25892589+#endif25902590+25812591noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,25822592 struct clone_args __user *uargs,25832593 size_t usize)
+7
kernel/seccomp.c
···10261026 struct seccomp_notif unotif;10271027 ssize_t ret;1028102810291029+ /* Verify that we're not given garbage to keep struct extensible. */10301030+ ret = check_zeroed_user(buf, sizeof(unotif));10311031+ if (ret < 0)10321032+ return ret;10331033+ if (!ret)10341034+ return -EINVAL;10351035+10291036 memset(&unotif, 0, sizeof(unotif));1030103710311038 ret = down_interruptible(&filter->notif->request);
+19-11
kernel/taskstats.c
···554554static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)555555{556556 struct signal_struct *sig = tsk->signal;557557- struct taskstats *stats;557557+ struct taskstats *stats_new, *stats;558558559559- if (sig->stats || thread_group_empty(tsk))560560- goto ret;559559+ /* Pairs with smp_store_release() below. */560560+ stats = smp_load_acquire(&sig->stats);561561+ if (stats || thread_group_empty(tsk))562562+ return stats;561563562564 /* No problem if kmem_cache_zalloc() fails */563563- stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);565565+ stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);564566565567 spin_lock_irq(&tsk->sighand->siglock);566566- if (!sig->stats) {567567- sig->stats = stats;568568- stats = NULL;568568+ stats = sig->stats;569569+ if (!stats) {570570+ /*571571+ * Pairs with smp_store_release() above and order the572572+ * kmem_cache_zalloc().573573+ */574574+ smp_store_release(&sig->stats, stats_new);575575+ stats = stats_new;576576+ stats_new = NULL;569577 }570578 spin_unlock_irq(&tsk->sighand->siglock);571579572572- if (stats)573573- kmem_cache_free(taskstats_cache, stats);574574-ret:575575- return sig->stats;580580+ if (stats_new)581581+ kmem_cache_free(taskstats_cache, stats_new);582582+583583+ return stats;576584}577585578586/* Send pid data out on exit */
···9696 return 0;9797}98989999+/*100100+ * Not all archs define MCOUNT_INSN_SIZE which is used to look for direct101101+ * functions. But those archs currently don't support direct functions102102+ * anyway, and ftrace_find_rec_direct() is just a stub for them.103103+ * Define MCOUNT_INSN_SIZE to keep those archs compiling.104104+ */105105+#ifndef MCOUNT_INSN_SIZE106106+/* Make sure this only works without direct calls */107107+# ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS108108+# error MCOUNT_INSN_SIZE not defined with direct calls enabled109109+# endif110110+# define MCOUNT_INSN_SIZE 0111111+#endif112112+99113int function_graph_enter(unsigned long ret, unsigned long func,100114 unsigned long frame_pointer, unsigned long *retp)101115{
···195195 unsigned long irq_flags;196196 void *entry = NULL;197197 int entry_size;198198- u64 val;198198+ u64 val = 0;199199 int len;200200201201 entry = trace_alloc_entry(call, &entry_size);
+3-1
kernel/trace/trace_sched_wakeup.c
···630630 if (ret) {631631 pr_info("wakeup trace: Couldn't activate tracepoint"632632 " probe to kernel_sched_migrate_task\n");633633- return;633633+ goto fail_deprobe_sched_switch;634634 }635635636636 wakeup_reset(tr);···648648 printk(KERN_ERR "failed to start wakeup tracer\n");649649650650 return;651651+fail_deprobe_sched_switch:652652+ unregister_trace_sched_switch(probe_wakeup_sched_switch, NULL);651653fail_deprobe_wake_new:652654 unregister_trace_sched_wakeup_new(probe_wakeup, NULL);653655fail_deprobe:
+1-1
kernel/trace/trace_seq.c
···
 		  int prefix_type, int rowsize, int groupsize,
 		  const void *buf, size_t len, bool ascii)
 {
-	unsigned int save_len = s->seq.len;
+	unsigned int save_len = s->seq.len;
 
 	if (s->full)
 		return 0;
+5
kernel/trace/trace_stack.c
···
 	local_irq_restore(flags);
 }
 
+/* Some archs may not define MCOUNT_INSN_SIZE */
+#ifndef MCOUNT_INSN_SIZE
+# define MCOUNT_INSN_SIZE 0
+#endif
+
 static void
 stack_trace_call(unsigned long ip, unsigned long parent_ip,
 		 struct ftrace_ops *op, struct pt_regs *pt_regs)
+6-2
mm/gup_benchmark.c
···
 	unsigned long i, nr_pages, addr, next;
 	int nr;
 	struct page **pages;
+	int ret = 0;
 
 	if (gup->size > ULONG_MAX)
 		return -EINVAL;
···
 				    NULL);
 			break;
 		default:
-			return -1;
+			kvfree(pages);
+			ret = -EINVAL;
+			goto out;
 		}
 
 		if (nr <= 0)
···
 	gup->put_delta_usec = ktime_us_delta(end_time, start_time);
 
 	kvfree(pages);
-	return 0;
+out:
+	return ret;
 }
 
 static long gup_benchmark_ioctl(struct file *filep, unsigned int cmd,
+50-1
mm/hugetlb.c
···
 #include <linux/swapops.h>
 #include <linux/jhash.h>
 #include <linux/numa.h>
+#include <linux/llist.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
···
 	page[2].mapping = NULL;
 }
 
-void free_huge_page(struct page *page)
+static void __free_huge_page(struct page *page)
 {
 	/*
 	 * Can't pass hstate in here because it is called from the
···
 		enqueue_huge_page(h, page);
 	}
 	spin_unlock(&hugetlb_lock);
+}
+
+/*
+ * As free_huge_page() can be called from a non-task context, we have
+ * to defer the actual freeing in a workqueue to prevent potential
+ * hugetlb_lock deadlock.
+ *
+ * free_hpage_workfn() locklessly retrieves the linked list of pages to
+ * be freed and frees them one-by-one. As the page->mapping pointer is
+ * going to be cleared in __free_huge_page() anyway, it is reused as the
+ * llist_node structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_freelist);
+
+static void free_hpage_workfn(struct work_struct *work)
+{
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		__free_huge_page(page);
+	}
+}
+static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
+
+void free_huge_page(struct page *page)
+{
+	/*
+	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
+	 */
+	if (!in_task()) {
+		/*
+		 * Only call schedule_work() if hpage_freelist is previously
+		 * empty. Otherwise, schedule_work() had been called but the
+		 * workfn hasn't retrieved the list yet.
+		 */
+		if (llist_add((struct llist_node *)&page->mapping,
+			      &hpage_freelist))
+			schedule_work(&free_hpage_work);
+		return;
+	}
+
+	__free_huge_page(page);
 }
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
+16-15
mm/memory_hotplug.c
···
 	pgdat->node_spanned_pages = node_end_pfn - node_start_pfn;
 }
 
-static void __remove_zone(struct zone *zone, unsigned long start_pfn,
-			  unsigned long nr_pages)
+void __ref remove_pfn_range_from_zone(struct zone *zone,
+				      unsigned long start_pfn,
+				      unsigned long nr_pages)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	unsigned long flags;
···
 		return;
 #endif
 
+	clear_zone_contiguous(zone);
+
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	update_pgdat_span(pgdat);
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+
+	set_zone_contiguous(zone);
 }
 
-static void __remove_section(struct zone *zone, unsigned long pfn,
-			     unsigned long nr_pages, unsigned long map_offset,
-			     struct vmem_altmap *altmap)
+static void __remove_section(unsigned long pfn, unsigned long nr_pages,
+			     unsigned long map_offset,
+			     struct vmem_altmap *altmap)
 {
 	struct mem_section *ms = __nr_to_section(pfn_to_section_nr(pfn));
 
 	if (WARN_ON_ONCE(!valid_section(ms)))
 		return;
 
-	__remove_zone(zone, pfn, nr_pages);
 	sparse_remove_section(ms, pfn, nr_pages, map_offset, altmap);
 }
 
 /**
- * __remove_pages() - remove sections of pages from a zone
- * @zone: zone from which pages need to be removed
+ * __remove_pages() - remove sections of pages
  * @pfn: starting pageframe (must be aligned to start of a section)
  * @nr_pages: number of pages to remove (must be multiple of section size)
  * @altmap: alternative device page map or %NULL if default memmap is used
···
  * sure that pages are marked reserved and zones are adjust properly by
  * calling offline_pages().
  */
-void __remove_pages(struct zone *zone, unsigned long pfn,
-		    unsigned long nr_pages, struct vmem_altmap *altmap)
+void __remove_pages(unsigned long pfn, unsigned long nr_pages,
+		    struct vmem_altmap *altmap)
 {
 	unsigned long map_offset = 0;
 	unsigned long nr, start_sec, end_sec;
 
 	map_offset = vmem_altmap_offset(altmap);
-
-	clear_zone_contiguous(zone);
 
 	if (check_pfn_span(pfn, nr_pages, "remove"))
 		return;
···
 		cond_resched();
 		pfns = min(nr_pages, PAGES_PER_SECTION
 				- (pfn & ~PAGE_SECTION_MASK));
-		__remove_section(zone, pfn, pfns, map_offset, altmap);
+		__remove_section(pfn, pfns, map_offset, altmap);
 		pfn += pfns;
 		nr_pages -= pfns;
 		map_offset = 0;
 	}
-
-	set_zone_contiguous(zone);
 }
 
 int set_online_page_callback(online_page_callback_t callback)
···
 		 (unsigned long long) pfn << PAGE_SHIFT,
 		 (((unsigned long long) pfn + nr_pages) << PAGE_SHIFT) - 1);
 	memory_notify(MEM_CANCEL_ONLINE, &arg);
+	remove_pfn_range_from_zone(zone, pfn, nr_pages);
 	mem_hotplug_done();
 	return ret;
 }
···
 	writeback_set_ratelimit();
 
 	memory_notify(MEM_OFFLINE, &arg);
+	remove_pfn_range_from_zone(zone, start_pfn, nr_pages);
 	mem_hotplug_done();
 	return 0;
 
···
 /*
  * Resolves the given address to a struct page, isolates it from the LRU and
  * puts it to the given pagelist.
- * Returns -errno if the page cannot be found/isolated or 0 when it has been
- * queued or the page doesn't need to be migrated because it is already on
- * the target node
+ * Returns:
+ *     errno - if the page cannot be found/isolated
+ *     0 - when it doesn't have to be migrated because it is already on
+ *         the target node
+ *     1 - when it has been queued
  */
 static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
 		int node, struct list_head *pagelist, bool migrate_all)
···
 	if (PageHuge(page)) {
 		if (PageHead(page)) {
 			isolate_huge_page(page, pagelist);
-			err = 0;
+			err = 1;
 		}
 	} else {
 		struct page *head;
···
 		if (err)
 			goto out_putpage;
 
-		err = 0;
+		err = 1;
 		list_add_tail(&head->lru, pagelist);
 		mod_node_page_state(page_pgdat(head),
 			NR_ISOLATED_ANON + page_is_file_cache(head),
···
 		 */
 		err = add_page_for_migration(mm, addr, current_node,
 				&pagelist, flags & MPOL_MF_MOVE_ALL);
-		if (!err)
+
+		if (!err) {
+			/* The page is already on the target node */
+			err = store_status(status, i, current_node, 1);
+			if (err)
+				goto out_flush;
 			continue;
+		} else if (err > 0) {
+			/* The page is successfully queued for migration */
+			continue;
+		}
 
 		err = store_status(status, i, err, 1);
 		if (err)
-6
mm/mmap.c
···
  * MAP_PRIVATE	r: (no) no	r: (yes) yes	r: (no) yes	r: (no) yes
  *		w: (no) no	w: (no) no	w: (copy) copy	w: (no) no
  *		x: (no) no	x: (no) yes	x: (no) yes	x: (yes) yes
- *
- * On arm64, PROT_EXEC has the following behaviour for both MAP_SHARED and
- * MAP_PRIVATE:
- *								r: (no) no
- *								w: (no) no
- *								x: (yes) yes
  */
 pgprot_t protection_map[16] __ro_after_init = {
 	__P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
···
 	return 0;
 }
 
-static void vlan_dev_uninit(struct net_device *dev)
+/* Note: this function might be called multiple times for the same device. */
+void vlan_dev_uninit(struct net_device *dev)
 {
 	struct vlan_priority_tci_mapping *pm;
 	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+12-7
net/8021q/vlan_netlink.c
···
 	struct ifla_vlan_flags *flags;
 	struct ifla_vlan_qos_mapping *m;
 	struct nlattr *attr;
-	int rem;
+	int rem, err;
 
 	if (data[IFLA_VLAN_FLAGS]) {
 		flags = nla_data(data[IFLA_VLAN_FLAGS]);
-		vlan_dev_change_flags(dev, flags->flags, flags->mask);
+		err = vlan_dev_change_flags(dev, flags->flags, flags->mask);
+		if (err)
+			return err;
 	}
 	if (data[IFLA_VLAN_INGRESS_QOS]) {
 		nla_for_each_nested(attr, data[IFLA_VLAN_INGRESS_QOS], rem) {
···
 	if (data[IFLA_VLAN_EGRESS_QOS]) {
 		nla_for_each_nested(attr, data[IFLA_VLAN_EGRESS_QOS], rem) {
 			m = nla_data(attr);
-			vlan_dev_set_egress_priority(dev, m->from, m->to);
+			err = vlan_dev_set_egress_priority(dev, m->from, m->to);
+			if (err)
+				return err;
 		}
 	}
 	return 0;
···
 		return -EINVAL;
 
 	err = vlan_changelink(dev, tb, data, extack);
-	if (err < 0)
-		return err;
-
-	return register_vlan_dev(dev, extack);
+	if (!err)
+		err = register_vlan_dev(dev, extack);
+	if (err)
+		vlan_dev_uninit(dev);
+	return err;
 }
 
 static inline size_t vlan_qos_map_size(unsigned int n)
···
 	case NETDEV_CHANGE:	/* Link (carrier) state changes */
 		hsr_check_carrier_and_operstate(hsr);
 		break;
+	case NETDEV_CHANGENAME:
+		if (is_hsr_master(dev))
+			hsr_debugfs_rename(dev);
+		break;
 	case NETDEV_CHANGEADDR:
 		if (port->type == HSR_PT_MASTER) {
 			/* This should not happen since there's no
···
 	/* Make sure we recognize frames from ourselves in hsr_rcv() */
 	port = hsr_port_get_hsr(hsr, HSR_PT_SLAVE_B);
-	res = hsr_create_self_node(&hsr->self_node_db,
+	res = hsr_create_self_node(hsr,
 				   master->dev->dev_addr,
 				   port ?
 				   port->dev->dev_addr :
···
 {
 	unregister_netdevice_notifier(&hsr_nb);
 	hsr_netlink_exit();
+	hsr_debugfs_remove_root();
 }
 
 module_init(hsr_init);
+15-7
net/hsr/hsr_main.h
···
 	int announce_count;
 	u16 sequence_nr;
 	u16 sup_sequence_nr;	/* For HSRv1 separate seq_nr for supervision */
-	u8 prot_version;	/* Indicate if HSRv0 or HSRv1. */
-	spinlock_t seqnr_lock;	/* locking for sequence_nr */
+	u8 prot_version;	/* Indicate if HSRv0 or HSRv1. */
+	spinlock_t seqnr_lock;	/* locking for sequence_nr */
+	spinlock_t list_lock;	/* locking for node list */
 	unsigned char sup_multicast_addr[ETH_ALEN];
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *node_tbl_root;
···
 }
 
 #if IS_ENABLED(CONFIG_DEBUG_FS)
-int hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev);
+void hsr_debugfs_rename(struct net_device *dev);
+void hsr_debugfs_init(struct hsr_priv *priv, struct net_device *hsr_dev);
 void hsr_debugfs_term(struct hsr_priv *priv);
+void hsr_debugfs_create_root(void);
+void hsr_debugfs_remove_root(void);
 #else
-static inline int hsr_debugfs_init(struct hsr_priv *priv,
-				   struct net_device *hsr_dev)
+static inline void hsr_debugfs_rename(struct net_device *dev)
 {
-	return 0;
 }
-
+static inline void hsr_debugfs_init(struct hsr_priv *priv,
+				    struct net_device *hsr_dev)
+{}
 static inline void hsr_debugfs_term(struct hsr_priv *priv)
+{}
+static inline void hsr_debugfs_create_root(void)
+{}
+static inline void hsr_debugfs_remove_root(void)
 {}
 #endif
···
 		}
 
 		/* Ignore very old stuff early */
-		if (!after(sp[used_sacks].end_seq, prior_snd_una))
+		if (!after(sp[used_sacks].end_seq, prior_snd_una)) {
+			if (i == 0)
+				first_sack_index = -1;
 			continue;
+		}
 
 		used_sacks++;
 	}
···
 }
 
 /*
- * Ping the other end to fill our RTT cache and to retrieve the rwind
- * and MTU parameters.
- */
-static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb)
-{
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	ktime_t now = skb->tstamp;
-
-	if (call->peer->rtt_usage < 3 ||
-	    ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now))
-		rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
-				  true, true,
-				  rxrpc_propose_ack_ping_for_params);
-}
-
-/*
  * Apply a hard ACK by advancing the Tx window.
  */
 static bool rxrpc_rotate_tx_window(struct rxrpc_call *call, rxrpc_seq_t to,
···
 		call = rxrpc_new_incoming_call(local, rx, skb);
 		if (!call)
 			goto reject_packet;
-		rxrpc_send_ping(call, skb);
-		mutex_unlock(&call->user_mutex);
 	}
 
 	/* Process a call packet; this either discards or passes on the ref
···
 	return err;
 }
 
-static bool u32_hnode_empty(struct tc_u_hnode *ht, bool *non_root_ht)
-{
-	int i;
-
-	if (!ht)
-		return true;
-	if (!ht->is_root) {
-		*non_root_ht = true;
-		return false;
-	}
-	if (*non_root_ht)
-		return false;
-	if (ht->refcnt < 2)
-		return true;
-
-	for (i = 0; i <= ht->divisor; i++) {
-		if (rtnl_dereference(ht->ht[i]))
-			return false;
-	}
-	return true;
-}
-
 static void u32_walk(struct tcf_proto *tp, struct tcf_walker *arg,
 		     bool rtnl_held)
 {
 	struct tc_u_common *tp_c = tp->data;
-	bool non_root_ht = false;
 	struct tc_u_hnode *ht;
 	struct tc_u_knode *n;
 	unsigned int h;
···
 	     ht = rtnl_dereference(ht->next)) {
 		if (ht->prio != tp->prio)
 			continue;
-		if (u32_hnode_empty(ht, &non_root_ht))
-			return;
 		if (arg->count >= arg->skip) {
 			if (arg->fn(tp, ht, arg) < 0) {
 				arg->stop = 1;
+1-1
net/sched/sch_cake.c
···
 					       q->avg_window_begin));
 		u64 b = q->avg_window_bytes * (u64)NSEC_PER_SEC;
 
-		do_div(b, window_interval);
+		b = div64_u64(b, window_interval);
 		q->avg_peak_bandwidth =
 			cake_ewma(q->avg_peak_bandwidth, b,
 				  b > q->avg_peak_bandwidth ? 2 : 8);
+12-11
net/sched/sch_fq.c
···
 			     f->socket_hash != sk->sk_hash)) {
 			f->credit = q->initial_quantum;
 			f->socket_hash = sk->sk_hash;
+			if (q->rate_enable)
+				smp_store_release(&sk->sk_pacing_status,
+						  SK_PACING_FQ);
 			if (fq_flow_is_throttled(f))
 				fq_flow_unset_throttled(q, f);
 			f->time_next_packet = 0ULL;
···
 
 	fq_flow_set_detached(f);
 	f->sk = sk;
-	if (skb->sk == sk)
+	if (skb->sk == sk) {
 		f->socket_hash = sk->sk_hash;
+		if (q->rate_enable)
+			smp_store_release(&sk->sk_pacing_status,
+					  SK_PACING_FQ);
+	}
 	f->credit = q->initial_quantum;
 
 	rb_link_node(&f->fq_node, parent, p);
···
 	f->qlen++;
 	qdisc_qstats_backlog_inc(sch, skb);
 	if (fq_flow_is_detached(f)) {
-		struct sock *sk = skb->sk;
-
 		fq_flow_add_tail(&q->new_flows, f);
 		if (time_after(jiffies, f->age + q->flow_refill_delay))
 			f->credit = max_t(u32, f->credit, q->quantum);
-		if (sk && q->rate_enable) {
-			if (unlikely(smp_load_acquire(&sk->sk_pacing_status) !=
-				     SK_PACING_FQ))
-				smp_store_release(&sk->sk_pacing_status,
-						  SK_PACING_FQ);
-		}
 		q->inactive_flows--;
 	}
···
 	if (tb[TCA_FQ_QUANTUM]) {
 		u32 quantum = nla_get_u32(tb[TCA_FQ_QUANTUM]);
 
-		if (quantum > 0)
+		if (quantum > 0 && quantum <= (1 << 20)) {
 			q->quantum = quantum;
-		else
+		} else {
+			NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
 			err = -EINVAL;
+		}
 	}
 
 	if (tb[TCA_FQ_INITIAL_QUANTUM])
+8-2
net/sched/sch_prio.c
···
 	struct tc_prio_qopt_offload graft_offload;
 	unsigned long band = arg - 1;
 
-	if (new == NULL)
-		new = &noop_qdisc;
+	if (!new) {
+		new = qdisc_create_dflt(sch->dev_queue, &pfifo_qdisc_ops,
+					TC_H_MAKE(sch->handle, arg), extack);
+		if (!new)
+			new = &noop_qdisc;
+		else
+			qdisc_hash_add(new, true);
+	}
 
 	*old = qdisc_replace(sch, new, &q->queues[band]);
 
+18-10
net/sctp/sm_sideeffect.c
···
 		/* Generate an INIT ACK chunk. */
 		new_obj = sctp_make_init_ack(asoc, chunk, GFP_ATOMIC,
 					     0);
-		if (!new_obj)
-			goto nomem;
+		if (!new_obj) {
+			error = -ENOMEM;
+			break;
+		}
 
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 				SCTP_CHUNK(new_obj));
···
 		if (!new_obj) {
 			if (cmd->obj.chunk)
 				sctp_chunk_free(cmd->obj.chunk);
-			goto nomem;
+			error = -ENOMEM;
+			break;
 		}
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 				SCTP_CHUNK(new_obj));
···
 
 		/* Generate a SHUTDOWN chunk. */
 		new_obj = sctp_make_shutdown(asoc, chunk);
-		if (!new_obj)
-			goto nomem;
+		if (!new_obj) {
+			error = -ENOMEM;
+			break;
+		}
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY,
 				SCTP_CHUNK(new_obj));
 		break;
···
 			break;
 		}
 
-		if (error)
+		if (error) {
+			cmd = sctp_next_cmd(commands);
+			while (cmd) {
+				if (cmd->verb == SCTP_CMD_REPLY)
+					sctp_chunk_free(cmd->obj.chunk);
+				cmd = sctp_next_cmd(commands);
+			}
 			break;
+		}
 	}
 
-out:
 	/* If this is in response to a received chunk, wait until
 	 * we are done with the packet to open the queue so that we don't
 	 * send multiple packets in response to a single request.
···
 		sp->data_ready_signalled = 0;
 
 	return error;
-nomem:
-	error = -ENOMEM;
-	goto out;
 }
+15-15
net/sctp/stream.c
···
 		return 0;
 
 	ret = genradix_prealloc(&stream->out, outcnt, gfp);
-	if (ret) {
-		genradix_free(&stream->out);
+	if (ret)
 		return ret;
-	}
 
 	stream->outcnt = outcnt;
 	return 0;
···
 		return 0;
 
 	ret = genradix_prealloc(&stream->in, incnt, gfp);
-	if (ret) {
-		genradix_free(&stream->in);
+	if (ret)
 		return ret;
-	}
 
 	stream->incnt = incnt;
 	return 0;
···
 	 * a new one with new outcnt to save memory if needed.
 	 */
 	if (outcnt == stream->outcnt)
-		goto in;
+		goto handle_in;
 
 	/* Filter out chunks queued on streams that won't exist anymore */
 	sched->unsched_all(stream);
···
 
 	ret = sctp_stream_alloc_out(stream, outcnt, gfp);
 	if (ret)
-		goto out;
+		goto out_err;
 
 	for (i = 0; i < stream->outcnt; i++)
 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
 
-in:
+handle_in:
 	sctp_stream_interleave_init(stream);
 	if (!incnt)
 		goto out;
 
 	ret = sctp_stream_alloc_in(stream, incnt, gfp);
-	if (ret) {
-		sched->free(stream);
-		genradix_free(&stream->out);
-		stream->outcnt = 0;
-		goto out;
-	}
+	if (ret)
+		goto in_err;
 
+	goto out;
+
+in_err:
+	sched->free(stream);
+	genradix_free(&stream->in);
+out_err:
+	genradix_free(&stream->out);
+	stream->outcnt = 0;
 out:
 	return ret;
 }
···
  *
  * Caller must hold socket lock
  */
-static void tsk_rej_rx_queue(struct sock *sk)
+static void tsk_rej_rx_queue(struct sock *sk, int error)
 {
 	struct sk_buff *skb;
 
 	while ((skb = __skb_dequeue(&sk->sk_receive_queue)))
-		tipc_sk_respond(sk, skb, TIPC_ERR_NO_PORT);
+		tipc_sk_respond(sk, skb, error);
 }
 
 static bool tipc_sk_connected(struct sock *sk)
···
 	/* Remove pending SYN */
 	__skb_queue_purge(&sk->sk_write_queue);
 
-	/* Reject all unreceived messages, except on an active connection
-	 * (which disconnects locally & sends a 'FIN+' to peer).
-	 */
-	while ((skb = __skb_dequeue(&sk->sk_receive_queue)) != NULL) {
-		if (TIPC_SKB_CB(skb)->bytes_read) {
-			kfree_skb(skb);
-			continue;
-		}
-		if (!tipc_sk_type_connectionless(sk) &&
-		    sk->sk_state != TIPC_DISCONNECTING) {
-			tipc_set_sk_state(sk, TIPC_DISCONNECTING);
-			tipc_node_remove_conn(net, dnode, tsk->portid);
-		}
-		tipc_sk_respond(sk, skb, error);
+	/* Remove partially received buffer if any */
+	skb = skb_peek(&sk->sk_receive_queue);
+	if (skb && TIPC_SKB_CB(skb)->bytes_read) {
+		__skb_unlink(skb, &sk->sk_receive_queue);
+		kfree_skb(skb);
 	}
 
-	if (tipc_sk_type_connectionless(sk))
+	/* Reject all unreceived messages if connectionless */
+	if (tipc_sk_type_connectionless(sk)) {
+		tsk_rej_rx_queue(sk, error);
 		return;
+	}
 
-	if (sk->sk_state != TIPC_DISCONNECTING) {
+	switch (sk->sk_state) {
+	case TIPC_CONNECTING:
+	case TIPC_ESTABLISHED:
+		tipc_set_sk_state(sk, TIPC_DISCONNECTING);
+		tipc_node_remove_conn(net, dnode, tsk->portid);
+		/* Send a FIN+/- to its peer */
+		skb = __skb_dequeue(&sk->sk_receive_queue);
+		if (skb) {
+			__skb_queue_purge(&sk->sk_receive_queue);
+			tipc_sk_respond(sk, skb, error);
+			break;
+		}
 		skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE,
 				      TIPC_CONN_MSG, SHORT_H_SIZE, 0, dnode,
 				      tsk_own_node(tsk), tsk_peer_port(tsk),
 				      tsk->portid, error);
 		if (skb)
 			tipc_node_xmit_skb(net, skb, dnode, tsk->portid);
-		tipc_node_remove_conn(net, dnode, tsk->portid);
-		tipc_set_sk_state(sk, TIPC_DISCONNECTING);
+		break;
+	case TIPC_LISTEN:
+		/* Reject all SYN messages */
+		tsk_rej_rx_queue(sk, error);
+		break;
+	default:
+		__skb_queue_purge(&sk->sk_receive_queue);
+		break;
 	}
 }
···
 		return sock_intr_errno(*timeo_p);
 
 		add_wait_queue(sk_sleep(sk), &wait);
-		done = sk_wait_event(sk, timeo_p,
-				     sk->sk_state != TIPC_CONNECTING, &wait);
+		done = sk_wait_event(sk, timeo_p, tipc_sk_connected(sk),
+				     &wait);
 		remove_wait_queue(sk_sleep(sk), &wait);
 	} while (!done);
 	return 0;
···
 	 * Reject any stray messages received by new socket
 	 * before the socket lock was taken (very, very unlikely)
 	 */
-	tsk_rej_rx_queue(new_sk);
+	tsk_rej_rx_queue(new_sk, TIPC_ERR_NO_PORT);
 
 	/* Connect new socket to it's peer */
 	tipc_sk_finish_conn(new_tsock, msg_origport(msg), msg_orignode(msg));
+2-2
samples/seccomp/user-trap.c
···
 	req = malloc(sizes.seccomp_notif);
 	if (!req)
 		goto out_close;
-	memset(req, 0, sizeof(*req));
 
 	resp = malloc(sizes.seccomp_notif_resp);
 	if (!resp)
 		goto out_req;
-	memset(resp, 0, sizeof(*resp));
+	memset(resp, 0, sizes.seccomp_notif_resp);
 
 	while (1) {
+		memset(req, 0, sizes.seccomp_notif);
 		if (ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, req)) {
 			perror("ioctl recv");
 			goto out_resp;
+4-5
scripts/gcc-plugins/Kconfig
···
 	  An arch should select this symbol if it supports building with
 	  GCC plugins.
 
-config GCC_PLUGINS
-	bool
+menuconfig GCC_PLUGINS
+	bool "GCC plugins"
 	depends on HAVE_GCC_PLUGINS
 	depends on PLUGIN_HOSTCC != ""
 	default y
···
 
 	  See Documentation/core-api/gcc-plugins.rst for details.
 
-menu "GCC plugins"
-	depends on GCC_PLUGINS
+if GCC_PLUGINS
 
 config GCC_PLUGIN_CYC_COMPLEXITY
 	bool "Compute the cyclomatic complexity of a function" if EXPERT
···
 	bool
 	depends on GCC_PLUGINS && ARM
 
-endmenu
+endif
···
 
 	if (!bprm || !profile->xattr_count)
 		return 0;
+	might_sleep();
 
 	/* transition from exec match to xattr set */
 	state = aa_dfa_null_transition(profile->xmatch, state);
···
 }
 
 /**
- * __attach_match_ - find an attachment match
+ * find_attach - do attachment search for unconfined processes
  * @bprm - binprm structure of transitioning task
- * @name - to match against  (NOT NULL)
+ * @ns: the current namespace  (NOT NULL)
  * @head - profile list to walk  (NOT NULL)
+ * @name - to match against  (NOT NULL)
  * @info - info message if there was an error  (NOT NULL)
  *
  * Do a linear search on the profiles in the list.  There is a matching
···
  *
  * Requires: @head not be shared or have appropriate locks held
  *
- * Returns: profile or NULL if no match found
+ * Returns: label or NULL if no match found
  */
-static struct aa_profile *__attach_match(const struct linux_binprm *bprm,
-					 const char *name,
-					 struct list_head *head,
-					 const char **info)
+static struct aa_label *find_attach(const struct linux_binprm *bprm,
+				    struct aa_ns *ns, struct list_head *head,
+				    const char *name, const char **info)
 {
 	int candidate_len = 0, candidate_xattrs = 0;
 	bool conflict = false;
···
 	AA_BUG(!name);
 	AA_BUG(!head);
 
+	rcu_read_lock();
+restart:
 	list_for_each_entry_rcu(profile, head, base.list) {
 		if (profile->label.flags & FLAG_NULL &&
 		    &profile->label == ns_unconfined(profile->ns))
···
 			perm = dfa_user_allow(profile->xmatch, state);
 			/* any accepting state means a valid match. */
 			if (perm & MAY_EXEC) {
-				int ret;
+				int ret = 0;
 
 				if (count < candidate_len)
 					continue;
 
-				ret = aa_xattrs_match(bprm, profile, state);
-				/* Fail matching if the xattrs don't match */
-				if (ret < 0)
-					continue;
+				if (bprm && profile->xattr_count) {
+					long rev = READ_ONCE(ns->revision);
 
+					if (!aa_get_profile_not0(profile))
+						goto restart;
+					rcu_read_unlock();
+					ret = aa_xattrs_match(bprm, profile,
+							      state);
+					rcu_read_lock();
+					aa_put_profile(profile);
+					if (rev !=
+					    READ_ONCE(ns->revision))
+						/* policy changed */
+						goto restart;
+					/*
+					 * Fail matching if the xattrs don't
+					 * match
+					 */
+					if (ret < 0)
+						continue;
+				}
 				/*
 				 * TODO: allow for more flexible best match
 				 *
···
 				candidate_xattrs = ret;
 				conflict = false;
 			}
-		} else if (!strcmp(profile->base.name, name))
+		} else if (!strcmp(profile->base.name, name)) {
 			/*
 			 * old exact non-re match, without conditionals such
 			 * as xattrs. no more searching required
 			 */
-			return profile;
+			candidate = profile;
+			goto out;
+		}
 	}
 
-	if (conflict) {
-		*info = "conflicting profile attachments";
+	if (!candidate || conflict) {
+		if (conflict)
+			*info = "conflicting profile attachments";
+		rcu_read_unlock();
 		return NULL;
 	}
 
-	return candidate;
-}
-
-/**
- * find_attach - do attachment search for unconfined processes
- * @bprm - binprm structure of transitioning task
- * @ns: the current namespace  (NOT NULL)
- * @list: list to search  (NOT NULL)
- * @name: the executable name to match against  (NOT NULL)
- * @info: info message if there was an error
- *
- * Returns: label or NULL if no match found
- */
-static struct aa_label *find_attach(const struct linux_binprm *bprm,
-				    struct aa_ns *ns, struct list_head *list,
-				    const char *name, const char **info)
-{
-	struct aa_profile *profile;
-
-	rcu_read_lock();
-	profile = aa_get_profile(__attach_match(bprm, name, list, info));
+out:
+	candidate = aa_get_newest_profile(candidate);
 	rcu_read_unlock();
 
-	return profile ? &profile->label : NULL;
+	return &candidate->label;
 }
 
 static const char *next_name(int xtype, const char *name)
···
 	return snd_usb_create_mixer(chip, quirk->ifnum, 0);
 }
 
+
+static int setup_fmt_after_resume_quirk(struct snd_usb_audio *chip,
+					struct usb_interface *iface,
+					struct usb_driver *driver,
+					const struct snd_usb_audio_quirk *quirk)
+{
+	chip->setup_fmt_after_resume_quirk = 1;
+	return 1;	/* Continue with creating streams and mixer */
+}
+
 /*
  * audio-interface quirks
  *
···
 		[QUIRK_AUDIO_EDIROL_UAXX] = create_uaxx_quirk,
 		[QUIRK_AUDIO_ALIGN_TRANSFER] = create_align_transfer_quirk,
 		[QUIRK_AUDIO_STANDARD_MIXER] = create_standard_mixer_quirk,
+		[QUIRK_SETUP_FMT_AFTER_RESUME] = setup_fmt_after_resume_quirk,
 	};
 
 	if (quirk->type < QUIRK_TYPE_COUNT) {
···
 	case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */
 	case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */
 	case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */
+	case USB_ID(0x05a7, 0x1020): /* Bose Companion 5 */
 	case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */
 	case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */
 	case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */
+2-1
sound/usb/usbaudio.h
···
 	wait_queue_head_t shutdown_wait;
 	unsigned int txfr_quirk:1; /* Subframe boundaries on transfers */
 	unsigned int tx_length_quirk:1; /* Put length specifier in transfers */
-
+	unsigned int setup_fmt_after_resume_quirk:1; /* setup the format to interface after resume */
 	int num_interfaces;
 	int num_suspended_intf;
 	int sample_rate_read_error;
···
 	QUIRK_AUDIO_EDIROL_UAXX,
 	QUIRK_AUDIO_ALIGN_TRANSFER,
 	QUIRK_AUDIO_STANDARD_MIXER,
+	QUIRK_SETUP_FMT_AFTER_RESUME,
 
 	QUIRK_TYPE_COUNT
 };
···
 	TEST_FAILURE = auto()
 
 def create_default_kunitconfig():
-	if not os.path.exists(kunit_kernel.KUNITCONFIG_PATH):
+	if not os.path.exists(kunit_kernel.kunitconfig_path):
 		shutil.copyfile('arch/um/configs/kunit_defconfig',
-				kunit_kernel.KUNITCONFIG_PATH)
+				kunit_kernel.kunitconfig_path)
 
 def run_tests(linux: kunit_kernel.LinuxSourceTree,
 	      request: KunitRequest) -> KunitResult:
-	if request.defconfig:
-		create_default_kunitconfig()
-
 	config_start = time.time()
 	success = linux.build_reconfig(request.build_dir)
 	config_end = time.time()
···
 	run_parser.add_argument('--build_dir',
 				help='As in the make command, it specifies the build '
 				'directory.',
-				type=str, default=None, metavar='build_dir')
+				type=str, default='', metavar='build_dir')
 
 	run_parser.add_argument('--defconfig',
-				help='Uses a default kunitconfig.',
+				help='Uses a default .kunitconfig.',
 				action='store_true')
 
 	cli_args = parser.parse_args(argv)
 
 	if cli_args.subcommand == 'run':
+		if cli_args.build_dir:
+			if not os.path.exists(cli_args.build_dir):
+				os.mkdir(cli_args.build_dir)
+			kunit_kernel.kunitconfig_path = os.path.join(
+				cli_args.build_dir,
+				kunit_kernel.kunitconfig_path)
+
 		if cli_args.defconfig:
 			create_default_kunitconfig()
 
+5-5
tools/testing/kunit/kunit_kernel.py
···1414import kunit_config15151616KCONFIG_PATH = '.config'1717-KUNITCONFIG_PATH = 'kunitconfig'1717+kunitconfig_path = '.kunitconfig'18181919class ConfigError(Exception):2020 """Represents an error trying to configure the Linux kernel."""···82828383 def __init__(self):8484 self._kconfig = kunit_config.Kconfig()8585- self._kconfig.read_from_file(KUNITCONFIG_PATH)8585+ self._kconfig.read_from_file(kunitconfig_path)8686 self._ops = LinuxSourceTreeOperations()87878888 def clean(self):···111111 return True112112113113 def build_reconfig(self, build_dir):114114- """Creates a new .config if it is not a subset of the kunitconfig."""114114+ """Creates a new .config if it is not a subset of the .kunitconfig."""115115 kconfig_path = get_kconfig_path(build_dir)116116 if os.path.exists(kconfig_path):117117 existing_kconfig = kunit_config.Kconfig()···140140 return False141141 return True142142143143- def run_kernel(self, args=[], timeout=None, build_dir=None):143143+ def run_kernel(self, args=[], timeout=None, build_dir=''):144144 args.extend(['mem=256M'])145145 process = self._ops.linux_bin(args, timeout, build_dir)146146- with open('test.log', 'w') as f:146146+ with open(os.path.join(build_dir, 'test.log'), 'w') as f:147147 for line in process.stdout:148148 f.write(line.rstrip().decode('ascii') + '\n')149149 yield line.rstrip().decode('ascii')
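The two kunit hunks above rename `kunitconfig` to `.kunitconfig` and make both the config file and `test.log` resolve relative to `--build_dir`. As an illustrative sketch (the helper name `resolve_paths` is hypothetical, not code from the patch), the resulting path behaviour looks like:

```python
import os

# Default mirrored from the patched kunit_kernel.py module-level variable.
KUNITCONFIG = '.kunitconfig'

def resolve_paths(build_dir=''):
    """Hypothetical helper: return (.kunitconfig path, test.log path)
    for a given --build_dir, mirroring the patched kunit.py/kunit_kernel.py
    logic (os.path.join against the build directory)."""
    if build_dir:
        kunitconfig = os.path.join(build_dir, KUNITCONFIG)
    else:
        kunitconfig = KUNITCONFIG
    # run_kernel() now joins build_dir too; '' keeps the old CWD behaviour.
    log = os.path.join(build_dir, 'test.log')
    return kunitconfig, log

print(resolve_paths())          # paths in the current directory
print(resolve_paths('my_out'))  # paths under the build directory
```

With an empty `build_dir` the behaviour is unchanged from before the patch; with `--build_dir my_out` both files land under `my_out/`.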
···120120$(BPFOBJ): force121121 $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/122122123123-BPF_HELPERS := $(BPFDIR)/bpf_helper_defs.h $(wildcard $(BPFDIR)/bpf_*.h)124124-$(BPFDIR)/bpf_helper_defs.h:125125- $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/ bpf_helper_defs.h123123+BPF_HELPERS := $(OUTPUT)/bpf_helper_defs.h $(wildcard $(BPFDIR)/bpf_*.h)124124+$(OUTPUT)/bpf_helper_defs.h:125125+ $(MAKE) -C $(BPFDIR) OUTPUT=$(OUTPUT)/ $(OUTPUT)/bpf_helper_defs.h126126127127# Get Clang's default includes on this system, as opposed to those seen by128128# '-target bpf'. This fixes "missing" files on some architectures/distros,
···34343535check_mods()3636{3737+ local uid=$(id -u)3838+ if [ $uid -ne 0 ]; then3939+ echo "skip all tests: must be run as root" >&24040+ exit $ksft_skip4141+ fi4242+3743 trap "test_modprobe" EXIT3844 if [ ! -d $DIR ]; then3945 modprobe test_firmware
+14-1
tools/testing/selftests/livepatch/functions.sh
···77MAX_RETRIES=60088RETRY_INTERVAL=".1" # seconds991010+# Kselftest framework requirement - SKIP code is 41111+ksft_skip=41212+1013# log(msg) - write message to kernel log1114# msg - insightful words1215function log() {···2118function skip() {2219 log "SKIP: $1"2320 echo "SKIP: $1" >&22424- exit 42121+ exit $ksft_skip2222+}2323+2424+# root test2525+function is_root() {2626+ uid=$(id -u)2727+ if [ $uid -ne 0 ]; then2828+ echo "skip all tests: must be run as root" >&22929+ exit $ksft_skip3030+ fi2531}26322733# die(msg) - game over, man···7462# for verbose livepatching output and turn on7563# the ftrace_enabled sysctl.7664function setup_config() {6565+ is_root7766 push_config7867 set_dynamic_debug7968 set_ftrace_enabled 1
+1-2
tools/testing/selftests/livepatch/test-state.sh
···88MOD_LIVEPATCH2=test_klp_state299MOD_LIVEPATCH3=test_klp_state310101111-set_dynamic_debug1212-1111+setup_config13121413# TEST: Loading and removing a module that modifies the system state1514
···226226 return 0227227}228228229229-test_tcp_forwarding()229229+test_tcp_forwarding_ip()230230{231231 local nsa=$1232232 local nsb=$2233233+ local dstip=$3234234+ local dstport=$4233235 local lret=0234236235237 ip netns exec $nsb nc -w 5 -l -p 12345 < "$ns2in" > "$ns2out" &236238 lpid=$!237239238240 sleep 1239239- ip netns exec $nsa nc -w 4 10.0.2.99 12345 < "$ns1in" > "$ns1out" &241241+ ip netns exec $nsa nc -w 4 "$dstip" "$dstport" < "$ns1in" > "$ns1out" &240242 cpid=$!241243242244 sleep 3···255253 check_transfer "$ns2in" "$ns1out" "ns1 <- ns2"256254 if [ $? -ne 0 ];then257255 lret=1256256+ fi257257+258258+ return $lret259259+}260260+261261+test_tcp_forwarding()262262+{263263+ test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345264264+265265+ return $?266266+}267267+268268+test_tcp_forwarding_nat()269269+{270270+ local lret271271+272272+ test_tcp_forwarding_ip "$1" "$2" 10.0.2.99 12345273273+ lret=$?274274+275275+ if [ $lret -eq 0 ] ; then276276+ test_tcp_forwarding_ip "$1" "$2" 10.6.6.6 1666277277+ lret=$?258278 fi259279260280 return $lret···307283# Same, but with NAT enabled.308284ip netns exec nsr1 nft -f - <<EOF309285table ip nat {286286+ chain prerouting {287287+ type nat hook prerouting priority 0; policy accept;288288+ meta iif "veth0" ip daddr 10.6.6.6 tcp dport 1666 counter dnat ip to 10.0.2.99:12345289289+ }290290+310291 chain postrouting {311292 type nat hook postrouting priority 0; policy accept;312312- meta oifname "veth1" masquerade293293+ meta oifname "veth1" counter masquerade313294 }314295}315296EOF316297317317-test_tcp_forwarding ns1 ns2298298+test_tcp_forwarding_nat ns1 ns2318299319300if [ $? -eq 0 ] ;then320301 echo "PASS: flow offloaded for ns1/ns2 with NAT"···342313ip netns exec ns1 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null343314ip netns exec ns2 sysctl net.ipv4.ip_no_pmtu_disc=0 > /dev/null344315345345-test_tcp_forwarding ns1 ns2316316+test_tcp_forwarding_nat ns1 ns2346317if [ $? -eq 0 ] ;then347318 echo "PASS: flow offloaded for ns1/ns2 with NAT and pmtu discovery"348319else
+10-8
tools/testing/selftests/rseq/param_test.c
···1515#include <errno.h>1616#include <stddef.h>17171818-static inline pid_t gettid(void)1818+static inline pid_t rseq_gettid(void)1919{2020 return syscall(__NR_gettid);2121}···373373 rseq_percpu_unlock(&data->lock, cpu);374374#ifndef BENCHMARK375375 if (i != 0 && !(i % (reps / 10)))376376- printf_verbose("tid %d: count %lld\n", (int) gettid(), i);376376+ printf_verbose("tid %d: count %lld\n",377377+ (int) rseq_gettid(), i);377378#endif378379 }379380 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n",380380- (int) gettid(), nr_abort, signals_delivered);381381+ (int) rseq_gettid(), nr_abort, signals_delivered);381382 if (!opt_disable_rseq && thread_data->reg &&382383 rseq_unregister_current_thread())383384 abort();···455454 } while (rseq_unlikely(ret));456455#ifndef BENCHMARK457456 if (i != 0 && !(i % (reps / 10)))458458- printf_verbose("tid %d: count %lld\n", (int) gettid(), i);457457+ printf_verbose("tid %d: count %lld\n",458458+ (int) rseq_gettid(), i);459459#endif460460 }461461 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n",462462- (int) gettid(), nr_abort, signals_delivered);462462+ (int) rseq_gettid(), nr_abort, signals_delivered);463463 if (!opt_disable_rseq && thread_data->reg &&464464 rseq_unregister_current_thread())465465 abort();···607605 }608606609607 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n",610610- (int) gettid(), nr_abort, signals_delivered);608608+ (int) rseq_gettid(), nr_abort, signals_delivered);611609 if (!opt_disable_rseq && rseq_unregister_current_thread())612610 abort();613611···798796 }799797800798 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n",801801- (int) gettid(), nr_abort, signals_delivered);799799+ (int) rseq_gettid(), nr_abort, signals_delivered);802800 if (!opt_disable_rseq && rseq_unregister_current_thread())803801 abort();804802···10131011 }1014101210151013 printf_verbose("tid %d: number of rseq abort: %d, signals delivered: %u\n",10161016- (int) gettid(), nr_abort, signals_delivered);10141014+ (int) rseq_gettid(), nr_abort, signals_delivered);10171015 if (!opt_disable_rseq && rseq_unregister_current_thread())10181016 abort();10191017
+7-5
tools/testing/selftests/rseq/rseq.h
···149149/*150150 * rseq_prepare_unload() should be invoked by each thread executing a rseq151151 * critical section at least once between their last critical section and152152- * library unload of the library defining the rseq critical section153153- * (struct rseq_cs). This also applies to use of rseq in code generated by154154- * JIT: rseq_prepare_unload() should be invoked at least once by each155155- * thread executing a rseq critical section before reclaim of the memory156156- * holding the struct rseq_cs.152152+ * library unload of the library defining the rseq critical section (struct153153+ * rseq_cs) or the code referred to by the struct rseq_cs start_ip and154154+ * post_commit_offset fields. This also applies to use of rseq in code155155+ * generated by JIT: rseq_prepare_unload() should be invoked at least once by156156+ * each thread executing a rseq critical section before reclaim of the memory157157+ * holding the struct rseq_cs or reclaim of the code pointed to by struct158158+ * rseq_cs start_ip and post_commit_offset fields.157159 */158160static inline void rseq_prepare_unload(void)159161{
···128128 str="${ftype} ${name} ${location} ${str}"129129 ;;130130 "nod")131131- local dev=`LC_ALL=C ls -l "${location}"`131131+ local dev="`LC_ALL=C ls -l "${location}"`"132132 local maj=`field 5 ${dev}`133133 local min=`field 6 ${dev}`134134 maj=${maj%,}