···
 	space-efficient. If this option is not present, large padding is
 	used - that is for compatibility with older kernels.
 
+allow_discards
+	Allow block discard requests (a.k.a. TRIM) for the integrity device.
+	Discards are only allowed to devices using internal hash.
 
-The journal mode (D/J), buffer_sectors, journal_watermark, commit_time can
-be changed when reloading the target (load an inactive table and swap the
-tables with suspend and resume). The other arguments should not be changed
-when reloading the target because the layout of disk data depend on them
-and the reloaded target would be non-functional.
+The journal mode (D/J), buffer_sectors, journal_watermark, commit_time and
+allow_discards can be changed when reloading the target (load an inactive
+table and swap the tables with suspend and resume). The other arguments
+should not be changed when reloading the target because the layout of disk
+data depend on them and the reloaded target would be non-functional.
 
 
 The layout of the formatted block device:
+1 -2
Documentation/admin-guide/kernel-parameters.txt
···
 	usbcore.old_scheme_first=
 			[USB] Start with the old device initialization
-			scheme, applies only to low and full-speed devices
-			(default 0 = off).
+			scheme (default 0 = off).
 
 	usbcore.usbfs_memory_mb=
 			[USB] Memory limit (in MB) for buffers allocated by
···
       description: |
         disables over voltage protection of this buck
 
-    additionalProperties: false
+    unevaluatedProperties: false
+
 additionalProperties: false
 
 required:
···
   - "renesas,xhci-r8a7791" for r8a7791 SoC
   - "renesas,xhci-r8a7793" for r8a7793 SoC
   - "renesas,xhci-r8a7795" for r8a7795 SoC
-  - "renesas,xhci-r8a7796" for r8a7796 SoC
+  - "renesas,xhci-r8a7796" for r8a77960 SoC
+  - "renesas,xhci-r8a77961" for r8a77961 SoC
   - "renesas,xhci-r8a77965" for r8a77965 SoC
   - "renesas,xhci-r8a77990" for r8a77990 SoC
   - "renesas,rcar-gen2-xhci" for a generic R-Car Gen2 or RZ/G1 compatible
···
     - running
     - ICE OS Default Package
     - The name of the DDP package that is active in the device. The DDP
-      package is loaded by the driver during initialization. Each varation
-      of DDP package shall have a unique name.
+      package is loaded by the driver during initialization. Each
+      variation of the DDP package has a unique name.
   * - ``fw.app``
     - running
     - 1.3.1.0
+9 -10
MAINTAINERS
···
 F:	drivers/input/misc/adxl34x.c
 
 ADXL372 THREE-AXIS DIGITAL ACCELEROMETER DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <michael.hennerich@analog.com>
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
 F:	Documentation/devicetree/bindings/iio/accel/adi,adxl372.yaml
···
 F:	drivers/net/ethernet/amd/xgbe/
 
 ANALOG DEVICES INC AD5686 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
···
 F:	drivers/iio/dac/ad5696*
 
 ANALOG DEVICES INC AD5758 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 L:	linux-iio@vger.kernel.org
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
···
 F:	drivers/iio/adc/ad7091r5.c
 
 ANALOG DEVICES INC AD7124 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 L:	linux-iio@vger.kernel.org
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
···
 F:	drivers/iio/adc/ad7292.c
 
 ANALOG DEVICES INC AD7606 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 M:	Beniamin Bia <beniamin.bia@analog.com>
 L:	linux-iio@vger.kernel.org
 S:	Supported
···
 F:	drivers/iio/adc/ad7606.c
 
 ANALOG DEVICES INC AD7768-1 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 L:	linux-iio@vger.kernel.org
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
···
 F:	drivers/hwmon/adm1177.c
 
 ANALOG DEVICES INC ADP5061 DRIVER
-M:	Stefan Popa <stefan.popa@analog.com>
+M:	Michael Hennerich <Michael.Hennerich@analog.com>
 L:	linux-pm@vger.kernel.org
 S:	Supported
 W:	http://ez.analog.com/community/linux-device-drivers
···
 ANALOG DEVICES INC IIO DRIVERS
 M:	Lars-Peter Clausen <lars@metafoo.de>
 M:	Michael Hennerich <Michael.Hennerich@analog.com>
-M:	Stefan Popa <stefan.popa@analog.com>
 S:	Supported
 W:	http://wiki.analog.com/
 W:	http://ez.analog.com/community/linux-device-drivers
···
 S:	Maintained
 W:	http://btrfs.wiki.kernel.org/
 Q:	http://patchwork.kernel.org/project/linux-btrfs/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
 F:	Documentation/filesystems/btrfs.rst
 F:	fs/btrfs/
 F:	include/linux/btrfs*
···
 DYNAMIC INTERRUPT MODERATION
 M:	Tal Gilboa <talgi@mellanox.com>
 S:	Maintained
+F:	Documentation/networking/net_dim.rst
 F:	include/linux/dim.h
 F:	lib/dim/
-F:	Documentation/networking/net_dim.rst
 
 DZ DECSTATION DZ11 SERIAL DRIVER
 M:	"Maciej W. Rozycki" <macro@linux-mips.org>
···
 		return;
 	}
 
-	kernel_neon_begin();
-	chacha_doneon(state, dst, src, bytes, nrounds);
-	kernel_neon_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_neon_begin();
+		chacha_doneon(state, dst, src, todo, nrounds);
+		kernel_neon_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
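The same chunking idea recurs in every SIMD glue file below: never keep the FPU/NEON region open for more than 4 KiB of input, so preemption is only disabled for a bounded stretch. The walk can be sketched in plain userspace C; `fake_simd_begin`/`fake_simd_end`/`fake_doneon` are stand-ins for the kernel primitives, not the real API:

```c
#include <assert.h>
#include <stddef.h>

#define SZ_4K 4096u

/* Counters standing in for kernel_neon_begin()/kernel_neon_end(). */
static unsigned int begin_calls, end_calls, max_chunk;

static void fake_simd_begin(void) { begin_calls++; }
static void fake_simd_end(void)   { end_calls++; }

/* Stand-in for chacha_doneon(): records the largest chunk it was fed. */
static void fake_doneon(const unsigned char *src, unsigned char *dst,
			unsigned int todo)
{
	unsigned int i;

	if (todo > max_chunk)
		max_chunk = todo;
	for (i = 0; i < todo; i++)
		dst[i] = src[i];	/* pretend to encrypt */
}

/*
 * Mirror of the do/while loop in the patch. As in the kernel code,
 * callers must pass a non-zero byte count.
 */
static void crypt_chunked(const unsigned char *src, unsigned char *dst,
			  unsigned int bytes)
{
	do {
		unsigned int todo = bytes < SZ_4K ? bytes : SZ_4K;

		fake_simd_begin();
		fake_doneon(src, dst, todo);
		fake_simd_end();

		bytes -= todo;
		src += todo;
		dst += todo;
	} while (bytes);
}

/* Run the walk over 'total' bytes; return how many SIMD regions it opened. */
static unsigned int run_demo(unsigned int total)
{
	static unsigned char src[10000], dst[10000];

	begin_calls = end_calls = max_chunk = 0;
	crypt_chunked(src, dst, total);
	return begin_calls;
}
```

A 10000-byte buffer is processed as three chunks (4096 + 4096 + 1808), so the SIMD context is opened and closed three times instead of once for the whole buffer.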
+1 -1
arch/arm/crypto/nhpoly1305-neon-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);
 
 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+11 -4
arch/arm/crypto/poly1305-glue.c
···
 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
 
 		if (static_branch_likely(&have_neon) && do_neon) {
-			kernel_neon_begin();
-			poly1305_blocks_neon(&dctx->h, src, len, 1);
-			kernel_neon_end();
+			do {
+				unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+				kernel_neon_begin();
+				poly1305_blocks_neon(&dctx->h, src, todo, 1);
+				kernel_neon_end();
+
+				len -= todo;
+				src += todo;
+			} while (len);
 		} else {
 			poly1305_blocks_arm(&dctx->h, src, len, 1);
+			src += len;
 		}
-		src += len;
 		nbytes %= POLY1305_BLOCK_SIZE;
 	}
+11 -3
arch/arm64/crypto/chacha-neon-glue.c
···
 	    !crypto_simd_usable())
 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);
 
-	kernel_neon_begin();
-	chacha_doneon(state, dst, src, bytes, nrounds);
-	kernel_neon_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_neon_begin();
+		chacha_doneon(state, dst, src, todo, nrounds);
+		kernel_neon_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
+1 -1
arch/arm64/crypto/nhpoly1305-neon-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);
 
 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_neon_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+11 -4
arch/arm64/crypto/poly1305-glue.c
···
 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
 
 		if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
-			kernel_neon_begin();
-			poly1305_blocks_neon(&dctx->h, src, len, 1);
-			kernel_neon_end();
+			do {
+				unsigned int todo = min_t(unsigned int, len, SZ_4K);
+
+				kernel_neon_begin();
+				poly1305_blocks_neon(&dctx->h, src, todo, 1);
+				kernel_neon_end();
+
+				len -= todo;
+				src += todo;
+			} while (len);
 		} else {
 			poly1305_blocks(&dctx->h, src, len, 1);
+			src += len;
 		}
-		src += len;
 		nbytes %= POLY1305_BLOCK_SIZE;
 	}
···
 	stw	r10,_CCR(r1)
 	stw	r1,KSP(r3)	/* Set old stack pointer */
 
-	kuap_check r2, r4
+	kuap_check r2, r0
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
+2
arch/powerpc/kernel/setup_64.c
···
 	lsizep = of_get_property(np, propnames[3], NULL);
 	if (bsizep == NULL)
 		bsizep = lsizep;
+	if (lsizep == NULL)
+		lsizep = bsizep;
 	if (lsizep != NULL)
 		lsize = be32_to_cpu(*lsizep);
 	if (bsizep != NULL)
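The hunk above makes the device-tree fallback symmetric: previously only a missing block size inherited the line size, while a missing line size stayed unset. A minimal sketch of the resolved logic, with NULL pointers modelling absent `of_get_property()` results (helper names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * If only one of the two properties (cache block size / cache line size)
 * is present in the device tree, use it for both values.
 */
static void resolve_cache_sizes(const uint32_t *bsizep, const uint32_t *lsizep,
				uint32_t *bsize_out, uint32_t *lsize_out)
{
	if (bsizep == NULL)
		bsizep = lsizep;
	if (lsizep == NULL)
		lsizep = bsizep;	/* the added fallback */
	if (lsizep != NULL)
		*lsize_out = *lsizep;
	if (bsizep != NULL)
		*bsize_out = *bsizep;
}

/* Common case exercised by the fix: only the block size property exists. */
static uint32_t lsize_with_only_bsize(uint32_t blk)
{
	uint32_t b = 0, l = 0;

	resolve_cache_sizes(&blk, NULL, &b, &l);
	return l;
}
```

With only a 128-byte block size present, the line size now resolves to 128 as well instead of staying zero.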
···
 
 config PPC_KUAP_DEBUG
 	bool "Extra debugging for Kernel Userspace Access Protection"
-	depends on PPC_KUAP && (PPC_RADIX_MMU || PPC_32)
+	depends on PPC_KUAP && (PPC_RADIX_MMU || PPC32)
 	help
 	  Add extra debugging for Kernel Userspace Access Protection (KUAP)
 	  If you're unsure, say N.
+1 -1
arch/riscv/Kconfig
···
 	select ARCH_HAS_GIGANTIC_PAGE
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
-	select ARCH_HAS_STRICT_KERNEL_RWX
+	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
 	select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
 	select SPARSEMEM_STATIC if 32BIT
 	select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT if MMU
+10 -7
arch/riscv/kernel/sbi.c
···
 {
 	sbi_ecall(SBI_EXT_0_1_SHUTDOWN, 0, 0, 0, 0, 0, 0, 0);
 }
-EXPORT_SYMBOL(sbi_set_timer);
+EXPORT_SYMBOL(sbi_shutdown);
 
 /**
  * sbi_clear_ipi() - Clear any pending IPIs for the calling hart.
···
 {
 	sbi_ecall(SBI_EXT_0_1_CLEAR_IPI, 0, 0, 0, 0, 0, 0, 0);
 }
-EXPORT_SYMBOL(sbi_shutdown);
+EXPORT_SYMBOL(sbi_clear_ipi);
 
 /**
  * sbi_set_timer_v01() - Program the timer for next timer event.
···
 
 	return result;
 }
+
+static void sbi_set_power_off(void)
+{
+	pm_power_off = sbi_shutdown;
+}
 #else
 static void __sbi_set_timer_v01(uint64_t stime_value)
 {
···
 
 	return 0;
 }
+
+static void sbi_set_power_off(void) {}
 #endif /* CONFIG_RISCV_SBI_V01 */
 
 static void __sbi_set_timer_v02(uint64_t stime_value)
···
 	return __sbi_base_ecall(SBI_EXT_BASE_GET_IMP_VERSION);
 }
 
-static void sbi_power_off(void)
-{
-	sbi_shutdown();
-}
 
 int __init sbi_init(void)
 {
 	int ret;
 
-	pm_power_off = sbi_power_off;
+	sbi_set_power_off();
 	ret = sbi_get_spec_version();
 	if (ret > 0)
 		sbi_spec_version = ret;
+2 -2
arch/riscv/kernel/stacktrace.c
···
 #include <linux/stacktrace.h>
 #include <linux/ftrace.h>
 
+register unsigned long sp_in_global __asm__("sp");
+
 #ifdef CONFIG_FRAME_POINTER
 
 struct stackframe {
 	unsigned long fp;
 	unsigned long ra;
 };
-
-register unsigned long sp_in_global __asm__("sp");
 
 void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
 			     bool (*fn)(unsigned long, void *), void *arg)
+3 -3
arch/riscv/kernel/vdso/Makefile
···
 	$(call if_changed,vdsold)
 
 # We also create a special relocatable object that should mirror the symbol
-# table and layout of the linked DSO. With ld -R we can then refer to
-# these symbols in the kernel code rather than hand-coded addresses.
+# table and layout of the linked DSO. With ld --just-symbols we can then
+# refer to these symbols in the kernel code rather than hand-coded addresses.
 
 SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
 	-Wl,--build-id -Wl,--hash-style=both
 $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
 	$(call if_changed,vdsold)
 
-LDFLAGS_vdso-syms.o := -r -R
+LDFLAGS_vdso-syms.o := -r --just-symbols
 $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
 	$(call if_changed,ld)
···
 {
 	mm_segment_t old_fs;
 	unsigned long asce, cr;
+	unsigned long flags;
 
 	old_fs = current->thread.mm_segment;
 	if (old_fs & 1)
 		return old_fs;
+	/* protect against a concurrent page table upgrade */
+	local_irq_save(flags);
 	current->thread.mm_segment |= 1;
 	asce = S390_lowcore.kernel_asce;
 	if (likely(old_fs == USER_DS)) {
···
 		__ctl_load(asce, 7, 7);
 		set_cpu_flag(CIF_ASCE_SECONDARY);
 	}
+	local_irq_restore(flags);
 	return old_fs;
 }
 EXPORT_SYMBOL(enable_sacf_uaccess);
+14 -2
arch/s390/mm/pgalloc.c
···
 {
 	struct mm_struct *mm = arg;
 
-	if (current->active_mm == mm)
-		set_user_asce(mm);
+	/* we must change all active ASCEs to avoid the creation of new TLBs */
+	if (current->active_mm == mm) {
+		S390_lowcore.user_asce = mm->context.asce;
+		if (current->thread.mm_segment == USER_DS) {
+			__ctl_load(S390_lowcore.user_asce, 1, 1);
+			/* Mark user-ASCE present in CR1 */
+			clear_cpu_flag(CIF_ASCE_PRIMARY);
+		}
+		if (current->thread.mm_segment == USER_DS_SACF) {
+			__ctl_load(S390_lowcore.user_asce, 7, 7);
+			/* enable_sacf_uaccess does all or nothing */
+			WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
+		}
+	}
 	__tlb_flush_local();
 }
···
 			       const u32 inc)
 {
 	/* SIMD disables preemption, so relax after processing each page. */
-	BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
+	BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
 
 	if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
 		blake2s_compress_generic(state, block, nblocks, inc);
 		return;
 	}
 
-	for (;;) {
+	do {
 		const size_t blocks = min_t(size_t, nblocks,
-					    PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
+					    SZ_4K / BLAKE2S_BLOCK_SIZE);
 
 		kernel_fpu_begin();
 		if (IS_ENABLED(CONFIG_AS_AVX512) &&
···
 		kernel_fpu_end();
 
 		nblocks -= blocks;
-		if (!nblocks)
-			break;
 		block += blocks * BLAKE2S_BLOCK_SIZE;
-	}
+	} while (nblocks);
 }
 EXPORT_SYMBOL(blake2s_compress_arch);
+11 -3
arch/x86/crypto/chacha_glue.c
···
 	    bytes <= CHACHA_BLOCK_SIZE)
 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);
 
-	kernel_fpu_begin();
-	chacha_dosimd(state, dst, src, bytes, nrounds);
-	kernel_fpu_end();
+	do {
+		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
+
+		kernel_fpu_begin();
+		chacha_dosimd(state, dst, src, todo, nrounds);
+		kernel_fpu_end();
+
+		bytes -= todo;
+		src += todo;
+		dst += todo;
+	} while (bytes);
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
+1 -1
arch/x86/crypto/nhpoly1305-avx2-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);
 
 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
+1 -1
arch/x86/crypto/nhpoly1305-sse2-glue.c
···
 		return crypto_nhpoly1305_update(desc, src, srclen);
 
 	do {
-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
+		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
 
 		kernel_fpu_begin();
 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
+6 -7
arch/x86/crypto/poly1305_glue.c
···
 	struct poly1305_arch_internal *state = ctx;
 
 	/* SIMD disables preemption, so relax after processing each page. */
-	BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
-		     PAGE_SIZE % POLY1305_BLOCK_SIZE);
+	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
+		     SZ_4K % POLY1305_BLOCK_SIZE);
 
 	if (!static_branch_likely(&poly1305_use_avx) ||
 	    (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
···
 		return;
 	}
 
-	for (;;) {
-		const size_t bytes = min_t(size_t, len, PAGE_SIZE);
+	do {
+		const size_t bytes = min_t(size_t, len, SZ_4K);
 
 		kernel_fpu_begin();
 		if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
···
 		else
 			poly1305_blocks_avx(ctx, inp, bytes, padbit);
 		kernel_fpu_end();
+
 		len -= bytes;
-		if (!len)
-			break;
 		inp += bytes;
-	}
+	} while (len);
 }
 
 static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+10 -2
arch/x86/hyperv/hv_init.c
···
 	struct page *pg;
 
 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
-	pg = alloc_page(GFP_KERNEL);
+	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
+	pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
 	if (unlikely(!pg))
 		return -ENOMEM;
 	*input_arg = page_address(pg);
···
 static int hv_suspend(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;
 
 	/*
 	 * Reset the hypercall page as it is going to be invalidated
···
 	hypercall_msr.enable = 0;
 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
 
-	return 0;
+	ret = hv_cpu_die(0);
+	return ret;
 }
 
 static void hv_resume(void)
 {
 	union hv_x64_msr_hypercall_contents hypercall_msr;
+	int ret;
+
+	ret = hv_cpu_init(0);
+	WARN_ON(ret);
 
 	/* Re-enable the hypercall page */
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
···
 	hv_hypercall_pg_saved = NULL;
 }
 
+/* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
 static struct syscore_ops hv_syscore_ops = {
 	.suspend	= hv_suspend,
 	.resume		= hv_resume,
···
 
 	if (!disk_part_scan_enabled(disk))
 		return 0;
-	if (bdev->bd_part_count || bdev->bd_openers > 1)
+	if (bdev->bd_part_count)
 		return -EBUSY;
 	res = invalidate_partition(disk, 0);
 	if (res)
+2 -2
drivers/acpi/device_pm.c
···
  end:
 	if (result) {
 		dev_warn(&device->dev, "Failed to change power state to %s\n",
-			 acpi_power_state_string(state));
+			 acpi_power_state_string(target_state));
 	} else {
 		device->power.state = target_state;
 		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 				  "Device [%s] transitioned to %s\n",
 				  device->pnp.bus_id,
-				  acpi_power_state_string(state)));
+				  acpi_power_state_string(target_state)));
 	}
 
 	return result;
···
 } ____cacheline_aligned_in_smp;
 
 struct virtio_blk {
+	/*
+	 * This mutex must be held by anything that may run after
+	 * virtblk_remove() sets vblk->vdev to NULL.
+	 *
+	 * blk-mq, virtqueue processing, and sysfs attribute code paths are
+	 * shut down before vblk->vdev is set to NULL and therefore do not need
+	 * to hold this mutex.
+	 */
+	struct mutex vdev_mutex;
 	struct virtio_device *vdev;
 
 	/* The disk structure for the kernel. */
···
 
 	/* Process context for config space updates */
 	struct work_struct config_work;
+
+	/*
+	 * Tracks references from block_device_operations open/release and
+	 * virtio_driver probe/remove so this object can be freed once no
+	 * longer in use.
+	 */
+	refcount_t refs;
 
 	/* What host tells us, plus 2 for header & tailer. */
 	unsigned int sg_elems;
···
 	return err;
 }
 
+static void virtblk_get(struct virtio_blk *vblk)
+{
+	refcount_inc(&vblk->refs);
+}
+
+static void virtblk_put(struct virtio_blk *vblk)
+{
+	if (refcount_dec_and_test(&vblk->refs)) {
+		ida_simple_remove(&vd_index_ida, vblk->index);
+		mutex_destroy(&vblk->vdev_mutex);
+		kfree(vblk);
+	}
+}
+
+static int virtblk_open(struct block_device *bd, fmode_t mode)
+{
+	struct virtio_blk *vblk = bd->bd_disk->private_data;
+	int ret = 0;
+
+	mutex_lock(&vblk->vdev_mutex);
+
+	if (vblk->vdev)
+		virtblk_get(vblk);
+	else
+		ret = -ENXIO;
+
+	mutex_unlock(&vblk->vdev_mutex);
+	return ret;
+}
+
+static void virtblk_release(struct gendisk *disk, fmode_t mode)
+{
+	struct virtio_blk *vblk = disk->private_data;
+
+	virtblk_put(vblk);
+}
+
 /* We provide getgeo only to please some old bootloader/partitioning tools */
 static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
 {
 	struct virtio_blk *vblk = bd->bd_disk->private_data;
+	int ret = 0;
+
+	mutex_lock(&vblk->vdev_mutex);
+
+	if (!vblk->vdev) {
+		ret = -ENXIO;
+		goto out;
+	}
 
 	/* see if the host passed in geometry config */
 	if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
···
 		geo->sectors = 1 << 5;
 		geo->cylinders = get_capacity(bd->bd_disk) >> 11;
 	}
-	return 0;
+out:
+	mutex_unlock(&vblk->vdev_mutex);
+	return ret;
 }
 
 static const struct block_device_operations virtblk_fops = {
 	.owner = THIS_MODULE,
+	.open = virtblk_open,
+	.release = virtblk_release,
 	.getgeo = virtblk_getgeo,
 };
···
 		goto out_free_index;
 	}
 
+	/* This reference is dropped in virtblk_remove(). */
+	refcount_set(&vblk->refs, 1);
+	mutex_init(&vblk->vdev_mutex);
+
 	vblk->vdev = vdev;
 	vblk->sg_elems = sg_elems;
···
 static void virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
-	int index = vblk->index;
-	int refc;
 
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vblk->config_work);
···
 
 	blk_mq_free_tag_set(&vblk->tag_set);
 
+	mutex_lock(&vblk->vdev_mutex);
+
 	/* Stop all the virtqueues. */
 	vdev->config->reset(vdev);
 
-	refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
+	/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
+	vblk->vdev = NULL;
+
 	put_disk(vblk->disk);
 	vdev->config->del_vqs(vdev);
 	kfree(vblk->vqs);
-	kfree(vblk);
 
-	/* Only free device id if we don't have any users */
-	if (refc == 1)
-		ida_simple_remove(&vd_index_ida, index);
+	mutex_unlock(&vblk->vdev_mutex);
+
+	virtblk_put(vblk);
 }
 
 #ifdef CONFIG_PM_SLEEP
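The lifetime scheme introduced above (one driver-owned reference plus one per open, with removal clearing the device pointer under a lock so later opens fail instead of touching freed state) can be sketched in userspace C. All names here are illustrative stand-ins, not the kernel API, and the single-threaded sketch omits the real mutex:

```c
#include <assert.h>
#include <stddef.h>

#define ENXIO 6

/* Userspace stand-in for struct virtio_blk's new fields. */
struct fake_vblk {
	int refs;	/* stands in for refcount_t */
	void *vdev;	/* non-NULL while the device is bound */
	int freed;	/* observable side effect for the checks below */
};

static void vblk_put(struct fake_vblk *v)
{
	if (--v->refs == 0)
		v->freed = 1;	/* real code would kfree() here */
}

static int vblk_open(struct fake_vblk *v)
{
	if (!v->vdev)
		return -ENXIO;	/* device already removed */
	v->refs++;
	return 0;
}

static void vblk_release(struct fake_vblk *v)
{
	vblk_put(v);
}

static void vblk_remove(struct fake_vblk *v)
{
	v->vdev = NULL;	/* opens after this point fail */
	vblk_put(v);	/* drop the probe-time reference */
}

/* Exercise open -> remove -> open -> release; 0 means all checks pass. */
static int vblk_demo(void)
{
	struct fake_vblk v = { 1, (void *)&v, 0 };

	if (vblk_open(&v) != 0)
		return 1;
	vblk_remove(&v);
	if (v.freed)			/* opener still holds a reference */
		return 2;
	if (vblk_open(&v) != -ENXIO)	/* opens after remove are refused */
		return 3;
	vblk_release(&v);
	if (!v.freed)			/* last put frees the object */
		return 4;
	return 0;
}
```

The point of the design is that the object outlives remove() for as long as any opener holds it, while new openers are turned away the moment the device pointer is cleared.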
+160 -34
drivers/counter/104-quad-8.c
···
  * @base:	base port address of the IIO device
  */
 struct quad8_iio {
+	struct mutex lock;
 	struct counter_device counter;
 	unsigned int fck_prescaler[QUAD8_NUM_COUNTERS];
 	unsigned int preset[QUAD8_NUM_COUNTERS];
···
 		/* Borrow XOR Carry effectively doubles count range */
 		*val = (borrow ^ carry) << 24;
 
+		mutex_lock(&priv->lock);
+
 		/* Reset Byte Pointer; transfer Counter to Output Latch */
 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
 		     base_offset + 1);
 
 		for (i = 0; i < 3; i++)
 			*val |= (unsigned int)inb(base_offset) << (8 * i);
+
+		mutex_unlock(&priv->lock);
 
 		return IIO_VAL_INT;
 	case IIO_CHAN_INFO_ENABLE:
···
 		if ((unsigned int)val > 0xFFFFFF)
 			return -EINVAL;
 
+		mutex_lock(&priv->lock);
+
 		/* Reset Byte Pointer */
 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
···
 		/* Reset Error flag */
 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
 
+		mutex_unlock(&priv->lock);
+
 		return 0;
 	case IIO_CHAN_INFO_ENABLE:
 		/* only boolean values accepted */
 		if (val < 0 || val > 1)
 			return -EINVAL;
+
+		mutex_lock(&priv->lock);
 
 		priv->ab_enable[chan->channel] = val;
···
 		/* Load I/O control configuration */
 		outb(QUAD8_CTR_IOR | ior_cfg, base_offset + 1);
 
+		mutex_unlock(&priv->lock);
+
 		return 0;
 	case IIO_CHAN_INFO_SCALE:
+		mutex_lock(&priv->lock);
+
 		/* Quadrature scaling only available in quadrature mode */
-		if (!priv->quadrature_mode[chan->channel] && (val2 || val != 1))
+		if (!priv->quadrature_mode[chan->channel] &&
+		    (val2 || val != 1)) {
+			mutex_unlock(&priv->lock);
 			return -EINVAL;
+		}
 
 		/* Only three gain states (1, 0.5, 0.25) */
 		if (val == 1 && !val2)
···
 				priv->quadrature_scale[chan->channel] = 2;
 				break;
 			default:
+				mutex_unlock(&priv->lock);
 				return -EINVAL;
 			}
-		else
+		else {
+			mutex_unlock(&priv->lock);
 			return -EINVAL;
+		}
 
+		mutex_unlock(&priv->lock);
 		return 0;
 	}
···
 	if (preset > 0xFFFFFF)
 		return -EINVAL;
 
+	mutex_lock(&priv->lock);
+
 	priv->preset[chan->channel] = preset;
 
 	/* Reset Byte Pointer */
···
 	/* Set Preset Register */
 	for (i = 0; i < 3; i++)
 		outb(preset >> (8 * i), base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return len;
 }
···
 	/* Preset enable is active low in Input/Output Control register */
 	preset_enable = !preset_enable;
 
+	mutex_lock(&priv->lock);
+
 	priv->preset_enable[chan->channel] = preset_enable;
 
 	ior_cfg = priv->ab_enable[chan->channel] |
···
 
 	/* Load I/O control configuration to Input / Output Control Register */
 	outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return len;
 }
···
 	unsigned int mode_cfg = cnt_mode << 1;
 	const int base_offset = priv->base + 2 * chan->channel + 1;
 
+	mutex_lock(&priv->lock);
+
 	priv->count_mode[chan->channel] = cnt_mode;
 
 	/* Add quadrature mode configuration */
···
 
 	/* Load mode configuration to Counter Mode Register */
 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 	const struct iio_chan_spec *chan, unsigned int synchronous_mode)
 {
 	struct quad8_iio *const priv = iio_priv(indio_dev);
-	const unsigned int idr_cfg = synchronous_mode |
-		priv->index_polarity[chan->channel] << 1;
 	const int base_offset = priv->base + 2 * chan->channel + 1;
+	unsigned int idr_cfg = synchronous_mode;
+
+	mutex_lock(&priv->lock);
+
+	idr_cfg |= priv->index_polarity[chan->channel] << 1;
 
 	/* Index function must be non-synchronous in non-quadrature mode */
-	if (synchronous_mode && !priv->quadrature_mode[chan->channel])
+	if (synchronous_mode && !priv->quadrature_mode[chan->channel]) {
+		mutex_unlock(&priv->lock);
 		return -EINVAL;
+	}
 
 	priv->synchronous_mode[chan->channel] = synchronous_mode;
 
 	/* Load Index Control configuration to Index Control Register */
 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 	const struct iio_chan_spec *chan, unsigned int quadrature_mode)
 {
 	struct quad8_iio *const priv = iio_priv(indio_dev);
-	unsigned int mode_cfg = priv->count_mode[chan->channel] << 1;
 	const int base_offset = priv->base + 2 * chan->channel + 1;
+	unsigned int mode_cfg;
+
+	mutex_lock(&priv->lock);
+
+	mode_cfg = priv->count_mode[chan->channel] << 1;
 
 	if (quadrature_mode)
 		mode_cfg |= (priv->quadrature_scale[chan->channel] + 1) << 3;
···
 
 	/* Load mode configuration to Counter Mode Register */
 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 	const struct iio_chan_spec *chan, unsigned int index_polarity)
 {
 	struct quad8_iio *const priv = iio_priv(indio_dev);
-	const unsigned int idr_cfg = priv->synchronous_mode[chan->channel] |
-		index_polarity << 1;
 	const int base_offset = priv->base + 2 * chan->channel + 1;
+	unsigned int idr_cfg = index_polarity << 1;
+
+	mutex_lock(&priv->lock);
+
+	idr_cfg |= priv->synchronous_mode[chan->channel];
 
 	priv->index_polarity[chan->channel] = index_polarity;
 
 	/* Load Index Control configuration to Index Control Register */
 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 static int quad8_count_read(struct counter_device *counter,
 	struct counter_count *count, unsigned long *val)
 {
-	const struct quad8_iio *const priv = counter->priv;
+	struct quad8_iio *const priv = counter->priv;
 	const int base_offset = priv->base + 2 * count->id;
 	unsigned int flags;
 	unsigned int borrow;
···
 	/* Borrow XOR Carry effectively doubles count range */
 	*val = (unsigned long)(borrow ^ carry) << 24;
 
+	mutex_lock(&priv->lock);
+
 	/* Reset Byte Pointer; transfer Counter to Output Latch */
 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
 	     base_offset + 1);
···
 	for (i = 0; i < 3; i++)
 		*val |= (unsigned long)inb(base_offset) << (8 * i);
 
+	mutex_unlock(&priv->lock);
+
 	return 0;
 }
 
 static int quad8_count_write(struct counter_device *counter,
 	struct counter_count *count, unsigned long val)
 {
-	const struct quad8_iio *const priv = counter->priv;
+	struct quad8_iio *const priv = counter->priv;
 	const int base_offset = priv->base + 2 * count->id;
 	int i;
 
 	/* Only 24-bit values are supported */
 	if (val > 0xFFFFFF)
 		return -EINVAL;
+
+	mutex_lock(&priv->lock);
 
 	/* Reset Byte Pointer */
 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
···
 	/* Reset Error flag */
 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
 
+	mutex_unlock(&priv->lock);
+
 	return 0;
 }
···
 static int quad8_function_get(struct counter_device *counter,
 	struct counter_count *count, size_t *function)
 {
-	const struct quad8_iio *const priv = counter->priv;
+	struct quad8_iio *const priv = counter->priv;
 	const int id = count->id;
-	const unsigned int quadrature_mode = priv->quadrature_mode[id];
-	const unsigned int scale = priv->quadrature_scale[id];
 
-	if (quadrature_mode)
-		switch (scale) {
+	mutex_lock(&priv->lock);
+
+	if (priv->quadrature_mode[id])
+		switch (priv->quadrature_scale[id]) {
 		case 0:
 			*function = QUAD8_COUNT_FUNCTION_QUADRATURE_X1;
 			break;
···
 	else
 		*function = QUAD8_COUNT_FUNCTION_PULSE_DIRECTION;
 
+	mutex_unlock(&priv->lock);
+
 	return 0;
 }
···
 	const int id = count->id;
 	unsigned int *const quadrature_mode = priv->quadrature_mode + id;
 	unsigned int *const scale = priv->quadrature_scale + id;
-	unsigned int mode_cfg = priv->count_mode[id] << 1;
 	unsigned int *const synchronous_mode = priv->synchronous_mode + id;
-	const unsigned int idr_cfg = priv->index_polarity[id] << 1;
 	const int base_offset = priv->base + 2 * id + 1;
+	unsigned int mode_cfg;
+	unsigned int idr_cfg;
+
+	mutex_lock(&priv->lock);
+
+	mode_cfg = priv->count_mode[id] << 1;
+	idr_cfg = priv->index_polarity[id] << 1;
 
 	if (function == QUAD8_COUNT_FUNCTION_PULSE_DIRECTION) {
 		*quadrature_mode = 0;
···
 
 	/* Load mode configuration to Counter Mode Register */
 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 {
 	struct quad8_iio *const priv = counter->priv;
 	const size_t channel_id = signal->id - 16;
-	const unsigned int idr_cfg = priv->synchronous_mode[channel_id] |
-		index_polarity << 1;
 	const int base_offset = priv->base + 2 * channel_id + 1;
+	unsigned int idr_cfg = index_polarity << 1;
+
+	mutex_lock(&priv->lock);
+
+	idr_cfg |= priv->synchronous_mode[channel_id];
 
 	priv->index_polarity[channel_id] = index_polarity;
 
 	/* Load Index Control configuration to Index Control Register */
 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 {
 	struct quad8_iio *const priv = counter->priv;
 	const size_t channel_id = signal->id - 16;
-	const unsigned int idr_cfg = synchronous_mode |
-		priv->index_polarity[channel_id] << 1;
 	const int base_offset = priv->base + 2 * channel_id + 1;
+	unsigned int idr_cfg = synchronous_mode;
+
+	mutex_lock(&priv->lock);
+
+	idr_cfg |= priv->index_polarity[channel_id] << 1;
 
 	/* Index function must be non-synchronous in non-quadrature mode */
-	if (synchronous_mode && !priv->quadrature_mode[channel_id])
+	if (synchronous_mode && !priv->quadrature_mode[channel_id]) {
+		mutex_unlock(&priv->lock);
 		return -EINVAL;
+	}
 
 	priv->synchronous_mode[channel_id] = synchronous_mode;
 
 	/* Load Index Control configuration to Index Control Register */
 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 		break;
 	}
 
+	mutex_lock(&priv->lock);
+
 	priv->count_mode[count->id] = cnt_mode;
 
 	/* Set count mode configuration value */
···
 
 	/* Load mode configuration to Counter Mode Register */
 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+
+	mutex_unlock(&priv->lock);
 
 	return 0;
 }
···
 	if (err)
 		return err;
 
+	mutex_lock(&priv->lock);
+
 	priv->ab_enable[count->id] = ab_enable;
 
 	ior_cfg = ab_enable | priv->preset_enable[count->id] << 1;
 
 	/* Load I/O control configuration */
 	outb(QUAD8_CTR_IOR | ior_cfg, base_offset + 1);
+
+	mutex_unlock(&priv->lock);
 
 	return len;
 }
···
 	return sprintf(buf, "%u\n", priv->preset[count->id]);
 }
 
+static void quad8_preset_register_set(struct quad8_iio *quad8iio, int id,
+				      unsigned int preset)
+{
+	const unsigned int base_offset = quad8iio->base + 2 * id;
+	int i;
+
+	quad8iio->preset[id] = preset;
+
+	/* Reset Byte Pointer */
+	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+
+	/* Set Preset Register */
+	for (i = 0; i < 3; i++)
+		outb(preset >> (8 * i), base_offset);
+}
+
 static ssize_t quad8_count_preset_write(struct counter_device *counter,
 	struct counter_count *count, void *private, const char *buf, size_t len)
 {
 	struct quad8_iio *const priv = counter->priv;
-	const int base_offset = priv->base + 2 * count->id;
 	unsigned int preset;
 	int ret;
-	int i;
 
 	ret = kstrtouint(buf, 0, &preset);
 	if (ret)
···
 	if (preset > 0xFFFFFF)
 		return -EINVAL;
 
-	priv->preset[count->id] = preset;
+	mutex_lock(&priv->lock);
 
-	/* Reset Byte Pointer */
-	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+	quad8_preset_register_set(priv, count->id, preset);
 
-	/* Set Preset Register */
-	for (i = 0; i < 3; i++)
-		outb(preset >> (8 * i), base_offset);
+	mutex_unlock(&priv->lock);
 
 	return len;
 }
···
 static ssize_t quad8_count_ceiling_read(struct counter_device *counter,
 	struct counter_count *count, void *private, char *buf)
 {
-	const struct quad8_iio *const priv = counter->priv;
+	struct quad8_iio *const priv = counter->priv;
+
+	mutex_lock(&priv->lock);
 
 	/* Range Limit and Modulo-N count modes use preset value as ceiling */
 	switch (priv->count_mode[count->id]) {
 	case 1:
 	case 3:
-		return quad8_count_preset_read(counter, count, private, buf);
+		mutex_unlock(&priv->lock);
+		return sprintf(buf, "%u\n", priv->preset[count->id]);
 	}
+
+	mutex_unlock(&priv->lock);
 
 	/* By default 0x1FFFFFF (25 bits unsigned) is
maximum count */11971102 return sprintf(buf, "33554431\n");···12061101 struct counter_count *count, void *private, const char *buf, size_t len)12071102{12081103 struct quad8_iio *const priv = counter->priv;11041104+ unsigned int ceiling;11051105+ int ret;11061106+11071107+ ret = kstrtouint(buf, 0, &ceiling);11081108+ if (ret)11091109+ return ret;11101110+11111111+ /* Only 24-bit values are supported */11121112+ if (ceiling > 0xFFFFFF)11131113+ return -EINVAL;11141114+11151115+ mutex_lock(&priv->lock);1209111612101117 /* Range Limit and Modulo-N count modes use preset value as ceiling */12111118 switch (priv->count_mode[count->id]) {12121119 case 1:12131120 case 3:12141214- return quad8_count_preset_write(counter, count, private, buf,12151215- len);11211121+ quad8_preset_register_set(priv, count->id, ceiling);11221122+ break;12161123 }11241124+11251125+ mutex_unlock(&priv->lock);1217112612181127 return len;12191128}···12561137 /* Preset enable is active low in Input/Output Control register */12571138 preset_enable = !preset_enable;1258113911401140+ mutex_lock(&priv->lock);11411141+12591142 priv->preset_enable[count->id] = preset_enable;1260114312611144 ior_cfg = priv->ab_enable[count->id] | (unsigned int)preset_enable << 1;1262114512631146 /* Load I/O control configuration to Input / Output Control Register */12641147 outb(QUAD8_CTR_IOR | ior_cfg, base_offset);11481148+11491149+ mutex_unlock(&priv->lock);1265115012661151 return len;12671152}···15511428 quad8iio->counter.num_signals = ARRAY_SIZE(quad8_signals);15521429 quad8iio->counter.priv = quad8iio;15531430 quad8iio->base = base[id];14311431+14321432+ /* Initialize mutex */14331433+ mutex_init(&quad8iio->lock);1554143415551435 /* Reset all counters and disable interrupt function */15561436 outb(QUAD8_CHAN_OP_RESET_COUNTERS, base[id] + QUAD8_REG_CHAN_OP);
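The quad8 hunks above all apply one pattern: every read-modify-write of the cached channel state (`count_mode`, `index_polarity`, `quadrature_mode`, ...) and the `outb()` that depends on it is serialized under the new `priv->lock` mutex, so a concurrent update cannot produce a torn register value. A minimal user-space sketch of that pattern, with a pthread mutex standing in for the kernel mutex and a plain variable standing in for the I/O port (all names hypothetical):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-in for the driver's cached per-channel state. */
struct chan_state {
    pthread_mutex_t lock;   /* plays the role of priv->lock */
    unsigned int count_mode;
    unsigned int index_polarity;
    unsigned int last_reg;  /* stands in for the outb() target */
};

/* Compose the register value under the lock so a concurrent change of
 * count_mode cannot interleave between the read and the "outb", which
 * is what the quad8 diff guards against. */
void chan_set_polarity(struct chan_state *s, unsigned int polarity)
{
    pthread_mutex_lock(&s->lock);
    s->index_polarity = polarity;
    s->last_reg = (s->count_mode << 1) | polarity;  /* "outb" */
    pthread_mutex_unlock(&s->lock);
}

unsigned int chan_reg(struct chan_state *s)
{
    pthread_mutex_lock(&s->lock);
    unsigned int v = s->last_reg;
    pthread_mutex_unlock(&s->lock);
    return v;
}
```

Note how the early-error path in the synchronous-mode hunk must unlock before `return -EINVAL` — with one lock covering both the cache and the register write, every exit from the region owns an unlock.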
+1-1
drivers/cpufreq/intel_pstate.c
···1059105910601060 update_turbo_state();10611061 if (global.turbo_disabled) {10621062- pr_warn("Turbo disabled by BIOS or unavailable on processor\n");10621062+ pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");10631063 mutex_unlock(&intel_pstate_limits_lock);10641064 mutex_unlock(&intel_pstate_driver_lock);10651065 return -EPERM;
+7-3
drivers/crypto/caam/caamalg.c
···963963 struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);964964 struct aead_edesc *edesc;965965 int ecode = 0;966966+ bool has_bklog;966967967968 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);968969969970 edesc = rctx->edesc;971971+ has_bklog = edesc->bklog;970972971973 if (err)972974 ecode = caam_jr_strstatus(jrdev, err);···981979 * If no backlog flag, the completion of the request is done982980 * by CAAM, not crypto engine.983981 */984984- if (!edesc->bklog)982982+ if (!has_bklog)985983 aead_request_complete(req, ecode);986984 else987985 crypto_finalize_aead_request(jrp->engine, req, ecode);···997995 struct caam_drv_private_jr *jrp = dev_get_drvdata(jrdev);998996 int ivsize = crypto_skcipher_ivsize(skcipher);999997 int ecode = 0;998998+ bool has_bklog;100099910011000 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);1002100110031002 edesc = rctx->edesc;10031003+ has_bklog = edesc->bklog;10041004 if (err)10051005 ecode = caam_jr_strstatus(jrdev, err);10061006···10321028 * If no backlog flag, the completion of the request is done10331029 * by CAAM, not crypto engine.10341030 */10351035- if (!edesc->bklog)10311031+ if (!has_bklog)10361032 skcipher_request_complete(req, ecode);10371033 else10381034 crypto_finalize_skcipher_request(jrp->engine, req, ecode);···1715171117161712 if (ivsize || mapped_dst_nents > 1)17171713 sg_to_sec4_set_last(edesc->sec4_sg + dst_sg_idx +17181718- mapped_dst_nents);17141714+ mapped_dst_nents - 1 + !!ivsize);1719171517201716 if (sec4_sg_bytes) {17211717 edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
+6-2
drivers/crypto/caam/caamhash.c
···583583 struct caam_hash_state *state = ahash_request_ctx(req);584584 struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);585585 int ecode = 0;586586+ bool has_bklog;586587587588 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);588589589590 edesc = state->edesc;591591+ has_bklog = edesc->bklog;590592591593 if (err)592594 ecode = caam_jr_strstatus(jrdev, err);···605603 * If no backlog flag, the completion of the request is done606604 * by CAAM, not crypto engine.607605 */608608- if (!edesc->bklog)606606+ if (!has_bklog)609607 req->base.complete(&req->base, ecode);610608 else611609 crypto_finalize_hash_request(jrp->engine, req, ecode);···634632 struct caam_hash_state *state = ahash_request_ctx(req);635633 int digestsize = crypto_ahash_digestsize(ahash);636634 int ecode = 0;635635+ bool has_bklog;637636638637 dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);639638640639 edesc = state->edesc;640640+ has_bklog = edesc->bklog;641641 if (err)642642 ecode = caam_jr_strstatus(jrdev, err);643643···667663 * If no backlog flag, the completion of the request is done668664 * by CAAM, not crypto engine.669665 */670670- if (!edesc->bklog)666666+ if (!has_bklog)671667 req->base.complete(&req->base, ecode);672668 else673669 crypto_finalize_hash_request(jrp->engine, req, ecode);
+6-2
drivers/crypto/caam/caampkc.c
···121121 struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);122122 struct rsa_edesc *edesc;123123 int ecode = 0;124124+ bool has_bklog;124125125126 if (err)126127 ecode = caam_jr_strstatus(dev, err);127128128129 edesc = req_ctx->edesc;130130+ has_bklog = edesc->bklog;129131130132 rsa_pub_unmap(dev, edesc, req);131133 rsa_io_unmap(dev, edesc, req);···137135 * If no backlog flag, the completion of the request is done138136 * by CAAM, not crypto engine.139137 */140140- if (!edesc->bklog)138138+ if (!has_bklog)141139 akcipher_request_complete(req, ecode);142140 else143141 crypto_finalize_akcipher_request(jrp->engine, req, ecode);···154152 struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);155153 struct rsa_edesc *edesc;156154 int ecode = 0;155155+ bool has_bklog;157156158157 if (err)159158 ecode = caam_jr_strstatus(dev, err);160159161160 edesc = req_ctx->edesc;161161+ has_bklog = edesc->bklog;162162163163 switch (key->priv_form) {164164 case FORM1:···180176 * If no backlog flag, the completion of the request is done181177 * by CAAM, not crypto engine.182178 */183183- if (!edesc->bklog)179179+ if (!has_bklog)184180 akcipher_request_complete(req, ecode);185181 else186182 crypto_finalize_akcipher_request(jrp->engine, req, ecode);
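The three caam fixes above (caamalg, caamhash, caampkc) are the same use-after-free repair: `edesc->bklog` was tested *after* a completion path that may free `edesc`, so each done-callback now snapshots the flag into a local `has_bklog` first. A self-contained sketch of the idiom, with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical descriptor whose flag must outlive its own free. */
struct edesc {
    bool bklog;
};

int completions;

/* The completion path frees the descriptor, as the crypto-engine
 * completion can in the caam drivers. */
void complete_request(struct edesc *e)
{
    free(e);
    completions++;
}

/* Mirrors the fix: copy e->bklog while e is still valid, then only
 * ever test the local snapshot after the descriptor may be gone. */
bool finish(struct edesc *e)
{
    bool has_bklog = e->bklog;  /* snapshot before any free */

    complete_request(e);        /* e must not be dereferenced below */

    return has_bklog;           /* safe: reads the snapshot */
}
```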
+46-37
drivers/crypto/chelsio/chcr_ktls.c
···673673 return 0;674674}675675676676-/*677677- * chcr_write_cpl_set_tcb_ulp: update tcb values.678678- * TCB is responsible to create tcp headers, so all the related values679679- * should be correctly updated.680680- * @tx_info - driver specific tls info.681681- * @q - tx queue on which packet is going out.682682- * @tid - TCB identifier.683683- * @pos - current index where should we start writing.684684- * @word - TCB word.685685- * @mask - TCB word related mask.686686- * @val - TCB word related value.687687- * @reply - set 1 if looking for TP response.688688- * return - next position to write.689689- */690690-static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,691691- struct sge_eth_txq *q, u32 tid,692692- void *pos, u16 word, u64 mask,676676+static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,677677+ u32 tid, void *pos, u16 word, u64 mask,693678 u64 val, u32 reply)694679{695680 struct cpl_set_tcb_field_core *cpl;696681 struct ulptx_idata *idata;697682 struct ulp_txpkt *txpkt;698698- void *save_pos = NULL;699699- u8 buf[48] = {0};700700- int left;701683702702- left = (void *)q->q.stat - pos;703703- if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {704704- if (!left) {705705- pos = q->q.desc;706706- } else {707707- save_pos = pos;708708- pos = buf;709709- }710710- }711684 /* ULP_TXPKT */712685 txpkt = pos;713686 txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0));···705732 idata = (struct ulptx_idata *)(cpl + 1);706733 idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP));707734 idata->len = htonl(0);735735+ pos = idata + 1;708736709709- if (save_pos) {710710- pos = chcr_copy_to_txd(buf, &q->q, save_pos,711711- CHCR_SET_TCB_FIELD_LEN);712712- } else {713713- /* check again if we are at the end of the queue */714714- if (left == CHCR_SET_TCB_FIELD_LEN)737737+ return pos;738738+}739739+740740+741741+/*742742+ * chcr_write_cpl_set_tcb_ulp: update tcb values.743743+ * TCB is responsible to create tcp headers, so 
all the related values744744+ * should be correctly updated.745745+ * @tx_info - driver specific tls info.746746+ * @q - tx queue on which packet is going out.747747+ * @tid - TCB identifier.748748+ * @pos - current index where should we start writing.749749+ * @word - TCB word.750750+ * @mask - TCB word related mask.751751+ * @val - TCB word related value.752752+ * @reply - set 1 if looking for TP response.753753+ * return - next position to write.754754+ */755755+static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,756756+ struct sge_eth_txq *q, u32 tid,757757+ void *pos, u16 word, u64 mask,758758+ u64 val, u32 reply)759759+{760760+ int left = (void *)q->q.stat - pos;761761+762762+ if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {763763+ if (!left) {715764 pos = q->q.desc;716716- else717717- pos = idata + 1;765765+ } else {766766+ u8 buf[48] = {0};767767+768768+ __chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word,769769+ mask, val, reply);770770+771771+ return chcr_copy_to_txd(buf, &q->q, pos,772772+ CHCR_SET_TCB_FIELD_LEN);773773+ }718774 }775775+776776+ pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word,777777+ mask, val, reply);778778+779779+ /* check again if we are at the end of the queue */780780+ if (left == CHCR_SET_TCB_FIELD_LEN)781781+ pos = q->q.desc;719782720783 return pos;721784}
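The chcr_ktls refactor above splits the CPL write into a helper so the "record straddles the end of the descriptor queue" case can stage the record in a local buffer and copy it across the wrap. A toy ring-buffer sketch of that control flow (sizes and names hypothetical; the kernel copies with `chcr_copy_to_txd()` rather than two `memcpy`s):

```c
#include <assert.h>
#include <string.h>

#define RING_SIZE 16
#define REC_LEN 6

/* Hypothetical descriptor ring; buf + RING_SIZE plays the role of
 * q->q.stat (one past the last usable byte). */
struct ring {
    unsigned char buf[RING_SIZE];
    unsigned char *pos;
};

/* Emit a fixed-size record. If it would straddle the end of the ring,
 * stage it in a bounce buffer and copy it in two pieces; if the write
 * position sits exactly on the end, wrap to the start first. */
unsigned char *emit_record(struct ring *r, const unsigned char *rec)
{
    long left = (r->buf + RING_SIZE) - r->pos;

    if (left < REC_LEN) {
        if (left == 0) {
            r->pos = r->buf;              /* exactly at the end: wrap */
        } else {
            unsigned char bounce[REC_LEN];

            memcpy(bounce, rec, REC_LEN); /* stage the whole record */
            memcpy(r->pos, bounce, left); /* tail of the ring */
            memcpy(r->buf, bounce + left, REC_LEN - left);
            r->pos = r->buf + (REC_LEN - left);
            return r->pos;
        }
    }
    memcpy(r->pos, rec, REC_LEN);
    r->pos += REC_LEN;
    if (r->pos == r->buf + RING_SIZE)     /* landed exactly on the end */
        r->pos = r->buf;
    return r->pos;
}
```

Extracting the body into `__chcr_write_cpl_set_tcb_ulp()` lets the same formatting code target either the real queue position or the bounce buffer, which is the point of the refactor.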
+4-3
drivers/dma-buf/dma-buf.c
···388388389389 return ret;390390391391- case DMA_BUF_SET_NAME:391391+ case DMA_BUF_SET_NAME_A:392392+ case DMA_BUF_SET_NAME_B:392393 return dma_buf_set_name(dmabuf, (const char __user *)arg);393394394395 default:···656655 * calls attach() of dma_buf_ops to allow device-specific attach functionality657656 * @dmabuf: [in] buffer to attach device to.658657 * @dev: [in] device to be attached.659659- * @importer_ops [in] importer operations for the attachment660660- * @importer_priv [in] importer private pointer for the attachment658658+ * @importer_ops: [in] importer operations for the attachment659659+ * @importer_priv: [in] importer private pointer for the attachment661660 *662661 * Returns struct dma_buf_attachment pointer for this attachment. Attachments663662 * must be cleaned up by calling dma_buf_detach().
+2-1
drivers/dma/Kconfig
···241241242242config HISI_DMA243243 tristate "HiSilicon DMA Engine support"244244- depends on ARM64 || (COMPILE_TEST && PCI_MSI)244244+ depends on ARM64 || COMPILE_TEST245245+ depends on PCI_MSI245246 select DMA_ENGINE246247 select DMA_VIRTUAL_CHANNELS247248 help
+26-34
drivers/dma/dmaengine.c
···232232 struct dma_chan_dev *chan_dev;233233234234 chan_dev = container_of(dev, typeof(*chan_dev), device);235235- if (atomic_dec_and_test(chan_dev->idr_ref)) {236236- ida_free(&dma_ida, chan_dev->dev_id);237237- kfree(chan_dev->idr_ref);238238- }239235 kfree(chan_dev);240236}241237···10391043}1040104410411045static int __dma_async_device_channel_register(struct dma_device *device,10421042- struct dma_chan *chan,10431043- int chan_id)10461046+ struct dma_chan *chan)10441047{10451048 int rc = 0;10461046- int chancnt = device->chancnt;10471047- atomic_t *idr_ref;10481048- struct dma_chan *tchan;10491049-10501050- tchan = list_first_entry_or_null(&device->channels,10511051- struct dma_chan, device_node);10521052- if (!tchan)10531053- return -ENODEV;10541054-10551055- if (tchan->dev) {10561056- idr_ref = tchan->dev->idr_ref;10571057- } else {10581058- idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL);10591059- if (!idr_ref)10601060- return -ENOMEM;10611061- atomic_set(idr_ref, 0);10621062- }1063104910641050 chan->local = alloc_percpu(typeof(*chan->local));10651051 if (!chan->local)···10571079 * When the chan_id is a negative value, we are dynamically adding10581080 * the channel. Otherwise we are static enumerating.10591081 */10601060- chan->chan_id = chan_id < 0 ? 
chancnt : chan_id;10821082+ mutex_lock(&device->chan_mutex);10831083+ chan->chan_id = ida_alloc(&device->chan_ida, GFP_KERNEL);10841084+ mutex_unlock(&device->chan_mutex);10851085+ if (chan->chan_id < 0) {10861086+ pr_err("%s: unable to alloc ida for chan: %d\n",10871087+ __func__, chan->chan_id);10881088+ goto err_out;10891089+ }10901090+10611091 chan->dev->device.class = &dma_devclass;10621092 chan->dev->device.parent = device->dev;10631093 chan->dev->chan = chan;10641064- chan->dev->idr_ref = idr_ref;10651094 chan->dev->dev_id = device->dev_id;10661066- atomic_inc(idr_ref);10671095 dev_set_name(&chan->dev->device, "dma%dchan%d",10681096 device->dev_id, chan->chan_id);10691069-10701097 rc = device_register(&chan->dev->device);10711098 if (rc)10721072- goto err_out;10991099+ goto err_out_ida;10731100 chan->client_count = 0;10741074- device->chancnt = chan->chan_id + 1;11011101+ device->chancnt++;1075110210761103 return 0;1077110411051105+ err_out_ida:11061106+ mutex_lock(&device->chan_mutex);11071107+ ida_free(&device->chan_ida, chan->chan_id);11081108+ mutex_unlock(&device->chan_mutex);10781109 err_out:10791110 free_percpu(chan->local);10801111 kfree(chan->dev);10811081- if (atomic_dec_return(idr_ref) == 0)10821082- kfree(idr_ref);10831112 return rc;10841113}10851114···10951110{10961111 int rc;1097111210981098- rc = __dma_async_device_channel_register(device, chan, -1);11131113+ rc = __dma_async_device_channel_register(device, chan);10991114 if (rc < 0)11001115 return rc;11011116···11151130 device->chancnt--;11161131 chan->dev->chan = NULL;11171132 mutex_unlock(&dma_list_mutex);11331133+ mutex_lock(&device->chan_mutex);11341134+ ida_free(&device->chan_ida, chan->chan_id);11351135+ mutex_unlock(&device->chan_mutex);11181136 device_unregister(&chan->dev->device);11191137 free_percpu(chan->local);11201138}···11401152 */11411153int dma_async_device_register(struct dma_device *device)11421154{11431143- int rc, i = 0;11551155+ int rc;11441156 struct dma_chan* 
chan;1145115711461158 if (!device)···12451257 if (rc != 0)12461258 return rc;1247125912601260+ mutex_init(&device->chan_mutex);12611261+ ida_init(&device->chan_ida);12621262+12481263 /* represent channels in sysfs. Probably want devs too */12491264 list_for_each_entry(chan, &device->channels, device_node) {12501250- rc = __dma_async_device_channel_register(device, chan, i++);12651265+ rc = __dma_async_device_channel_register(device, chan);12511266 if (rc < 0)12521267 goto err_out;12531268 }···13251334 */13261335 dma_cap_set(DMA_PRIVATE, device->cap_mask);13271336 dma_channel_rebalance();13371337+ ida_free(&dma_ida, device->dev_id);13281338 dma_device_put(device);13291339 mutex_unlock(&dma_list_mutex);13301340}
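The dmaengine change above replaces `chancnt`-derived channel numbering with a per-device IDA, so a dynamically unregistered channel's ID is recycled instead of leaving holes. A toy single-word bitmap model of the IDA semantics the patch relies on (the real `ida_alloc()` takes gfp flags and scales far beyond one word):

```c
#include <assert.h>

#define MAX_IDS 32

/* Minimal stand-in for the kernel IDA: lowest free ID wins, and a
 * freed ID becomes allocatable again. */
struct toy_ida {
    unsigned int used;  /* bit n set => ID n allocated */
};

int toy_ida_alloc(struct toy_ida *ida)
{
    for (int id = 0; id < MAX_IDS; id++) {
        if (!(ida->used & (1u << id))) {
            ida->used |= 1u << id;
            return id;
        }
    }
    return -1;  /* the kernel returns -ENOSPC here */
}

void toy_ida_free(struct toy_ida *ida, int id)
{
    ida->used &= ~(1u << id);
}
```

This is why `__dma_async_device_channel_unregister()` now calls `ida_free()`: with plain `chancnt++` numbering, re-registering a channel after a dynamic unregister would mint an ever-growing `dma%dchan%d` name.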
···816816static void tegra_dma_synchronize(struct dma_chan *dc)817817{818818 struct tegra_dma_channel *tdc = to_tegra_dma_chan(dc);819819+ int err;820820+821821+ err = pm_runtime_get_sync(tdc->tdma->dev);822822+ if (err < 0) {823823+ dev_err(tdc2dev(tdc), "Failed to synchronize DMA: %d\n", err);824824+ return;825825+ }819826820827 /*821828 * CPU, which handles interrupt, could be busy in···832825 wait_event(tdc->wq, tegra_dma_eoc_interrupt_deasserted(tdc));833826834827 tasklet_kill(&tdc->tasklet);828828+829829+ pm_runtime_put(tdc->tdma->dev);835830}836831837832static unsigned int tegra_dma_sg_bytes_xferred(struct tegra_dma_channel *tdc,
+1
drivers/dma/ti/k3-psil.c
···2727 soc_ep_map = &j721e_ep_map;2828 } else {2929 pr_err("PSIL: No compatible machine found for map\n");3030+ mutex_unlock(&ep_map_mutex);3031 return ERR_PTR(-ENOTSUPP);3132 }3233 pr_debug("%s: Using map for %s\n", __func__, soc_ep_map->name);
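The one-line k3-psil fix above is the classic "unlock on every exit" rule: the error return inside the locked region leaked `ep_map_mutex`, deadlocking the next caller. A minimal pthread sketch of the fixed shape (names hypothetical):

```c
#include <assert.h>
#include <pthread.h>

pthread_mutex_t ep_map_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Every return path out of the locked region drops the mutex,
 * including the error path -- the line the patch adds. */
int lookup(int have_map)
{
    pthread_mutex_lock(&ep_map_mutex);
    if (!have_map) {
        pthread_mutex_unlock(&ep_map_mutex);  /* the added unlock */
        return -1;
    }
    pthread_mutex_unlock(&ep_map_mutex);
    return 0;
}
```

Without the added unlock, the second call below would block forever on a mutex the failed first call still holds.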
+10-10
drivers/dma/xilinx/xilinx_dma.c
···12301230 return ret;1231123112321232 spin_lock_irqsave(&chan->lock, flags);12331233-12341234- desc = list_last_entry(&chan->active_list,12351235- struct xilinx_dma_tx_descriptor, node);12361236- /*12371237- * VDMA and simple mode do not support residue reporting, so the12381238- * residue field will always be 0.12391239- */12401240- if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)12411241- residue = xilinx_dma_get_residue(chan, desc);12421242-12331233+ if (!list_empty(&chan->active_list)) {12341234+ desc = list_last_entry(&chan->active_list,12351235+ struct xilinx_dma_tx_descriptor, node);12361236+ /*12371237+ * VDMA and simple mode do not support residue reporting, so the12381238+ * residue field will always be 0.12391239+ */12401240+ if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)12411241+ residue = xilinx_dma_get_residue(chan, desc);12421242+ }12431243 spin_unlock_irqrestore(&chan->lock, flags);1244124412451245 dma_set_residue(txstate, residue);
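The xilinx_dma hunk above guards `list_last_entry()` with `list_empty()`: on the kernel's circular lists, the "last entry" of an empty list is the head itself, and computing residue from it reads garbage. A self-contained sketch with a minimal `list_head` (macros modelled on the kernel's, names otherwise hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal circular doubly-linked list, after the kernel's list_head. */
struct list_head { struct list_head *prev, *next; };

void list_init(struct list_head *h) { h->prev = h->next = h; }
int list_empty(const struct list_head *h) { return h->next == h; }
void list_add_tail(struct list_head *n, struct list_head *h)
{
    n->prev = h->prev; n->next = h;
    h->prev->next = n; h->prev = n;
}

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct desc {
    int residue;
    struct list_head node;
};

/* Mirrors the fix: an empty active list means there is nothing to
 * measure, so report residue 0 instead of dereferencing the head. */
int last_residue(struct list_head *active)
{
    if (list_empty(active))
        return 0;
    return container_of(active->prev, struct desc, node)->residue;
}
```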
+2-2
drivers/firmware/imx/Kconfig
···12121313config IMX_SCU1414 bool "IMX SCU Protocol driver"1515- depends on IMX_MBOX || COMPILE_TEST1515+ depends on IMX_MBOX1616 help1717 The System Controller Firmware (SCFW) is a low-level system function1818 which runs on a dedicated Cortex-M core to provide power, clock, and···24242525config IMX_SCU_PD2626 bool "IMX SCU Power Domain driver"2727- depends on IMX_SCU || COMPILE_TEST2727+ depends on IMX_SCU2828 help2929 The System Controller Firmware (SCFW) based power domain driver.
+4-2
drivers/fpga/dfl-pci.c
···248248 return ret;249249250250 ret = pci_enable_sriov(pcidev, num_vfs);251251- if (ret)251251+ if (ret) {252252 dfl_fpga_cdev_config_ports_pf(cdev);253253+ return ret;254254+ }253255 }254256255255- return ret;257257+ return num_vfs;256258}257259258260static void cci_pci_remove(struct pci_dev *pcidev)
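The dfl-pci fix above repairs two things at once: on failure the ports are rolled back to the PF *and* the error is propagated, while on success the function returns `num_vfs`, which is what the PCI core expects from a `sriov_configure` callback. A sketch of that contract with stub helpers (all names and the error value are hypothetical stand-ins, not the real PCI API):

```c
#include <assert.h>

/* Test knobs standing in for hardware outcome and port state. */
int sriov_enable_ok;
int ports_released;

int pci_enable_sriov_stub(int num_vfs)
{
    (void)num_vfs;
    return sriov_enable_ok ? 0 : -5;  /* stand-in errno */
}

void config_ports_pf_stub(void)
{
    ports_released = 0;               /* ports back to the PF */
}

/* Mirrors the fixed flow: release ports to the VFs, try to enable
 * SR-IOV, roll back and propagate on failure, return num_vfs on
 * success. */
int sriov_configure(int num_vfs)
{
    ports_released = 1;
    int ret = pci_enable_sriov_stub(num_vfs);
    if (ret) {
        config_ports_pf_stub();       /* undo before returning */
        return ret;
    }
    return num_vfs;
}
```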
+2-1
drivers/fpga/zynq-fpga.c
···583583584584 priv->clk = devm_clk_get(dev, "ref_clk");585585 if (IS_ERR(priv->clk)) {586586- dev_err(dev, "input clock not found\n");586586+ if (PTR_ERR(priv->clk) != -EPROBE_DEFER)587587+ dev_err(dev, "input clock not found\n");587588 return PTR_ERR(priv->clk);588589 }589590
+2-1
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···8585 * - 3.34.0 - Non-DC can flip correctly between buffers with different pitches8686 * - 3.35.0 - Add drm_amdgpu_info_device::tcc_disabled_mask8787 * - 3.36.0 - Allow reading more status registers on si/cik8888+ * - 3.37.0 - L2 is invalidated before SDMA IBs, needed for correctness8889 */8990#define KMS_DRIVER_MAJOR 39090-#define KMS_DRIVER_MINOR 369191+#define KMS_DRIVER_MINOR 379192#define KMS_DRIVER_PATCHLEVEL 092939394int amdgpu_vram_limit = 0;
···16251625 hws->funcs.verify_allow_pstate_change_high(dc);16261626}1627162716281628+void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)16291629+{16301630+ /* cursor lock is per MPCC tree, so only need to lock one pipe per stream */16311631+ if (!pipe || pipe->top_pipe)16321632+ return;16331633+16341634+ dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc,16351635+ pipe->stream_res.opp->inst, lock);16361636+}16371637+16281638static bool wait_for_reset_trigger_to_occur(16291639 struct dc_context *dc_ctx,16301640 struct timing_generator *tg)
···284284 .dram_channel_width_bytes = 4,285285 .fabric_datapath_to_dcn_data_return_bytes = 32,286286 .dcn_downspread_percent = 0.5,287287- .downspread_percent = 0.5,287287+ .downspread_percent = 0.38,288288 .dram_page_open_time_ns = 50.0,289289 .dram_rw_turnaround_time_ns = 17.5,290290 .dram_return_buffer_per_channel_bytes = 8192,···339339#define DCCG_SRII(reg_name, block, id)\340340 .block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \341341 mm ## block ## id ## _ ## reg_name342342+343343+#define VUPDATE_SRII(reg_name, block, id)\344344+ .reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \345345+ mm ## reg_name ## _ ## block ## id342346343347/* NBIO */344348#define NBIO_BASE_INNER(seg) \···13781374{13791375 struct dcn21_resource_pool *pool = TO_DCN21_RES_POOL(dc->res_pool);13801376 struct clk_limit_table *clk_table = &bw_params->clk_table;13811381- unsigned int i, j, k;13821382- int closest_clk_lvl;13771377+ struct _vcs_dpi_voltage_scaling_st clock_limits[DC__VOLTAGE_STATES];13781378+ unsigned int i, j, closest_clk_lvl;1383137913841380 // Default clock levels are used for diags, which may lead to overclocking.13851385- if (!IS_FPGA_MAXIMUS_DC(dc->ctx->dce_environment) && !IS_DIAG_DC(dc->ctx->dce_environment)) {13811381+ if (!IS_DIAG_DC(dc->ctx->dce_environment)) {13861382 dcn2_1_ip.max_num_otg = pool->base.res_cap->num_timing_generator;13871383 dcn2_1_ip.max_num_dpp = pool->base.pipe_count;13881384 dcn2_1_soc.num_chans = bw_params->num_channels;1389138513901390- /* Vmin: leave lowest DCN clocks, override with dcfclk, fclk, memclk from fuse */13911391- dcn2_1_soc.clock_limits[0].state = 0;13921392- dcn2_1_soc.clock_limits[0].dcfclk_mhz = clk_table->entries[0].dcfclk_mhz;13931393- dcn2_1_soc.clock_limits[0].fabricclk_mhz = clk_table->entries[0].fclk_mhz;13941394- dcn2_1_soc.clock_limits[0].socclk_mhz = clk_table->entries[0].socclk_mhz;13951395- dcn2_1_soc.clock_limits[0].dram_speed_mts = 
clk_table->entries[0].memclk_mhz * 2;13961396-13971397- /*13981398- * Other levels: find closest DCN clocks that fit the given clock limit using dcfclk13991399- * as indicator14001400- */14011401-14021402- closest_clk_lvl = -1;14031403- /* index currently being filled */14041404- k = 1;14051405- for (i = 1; i < clk_table->num_entries; i++) {14061406- /* loop backwards, skip duplicate state*/14071407- for (j = dcn2_1_soc.num_states - 1; j >= k; j--) {13861386+ ASSERT(clk_table->num_entries);13871387+ for (i = 0; i < clk_table->num_entries; i++) {13881388+ /* loop backwards*/13891389+ for (closest_clk_lvl = 0, j = dcn2_1_soc.num_states - 1; j >= 0; j--) {14081390 if ((unsigned int) dcn2_1_soc.clock_limits[j].dcfclk_mhz <= clk_table->entries[i].dcfclk_mhz) {14091391 closest_clk_lvl = j;14101392 break;14111393 }14121394 }1413139514141414- /* if found a lvl that fits, use the DCN clks from it, if not, go to next clk limit*/14151415- if (closest_clk_lvl != -1) {14161416- dcn2_1_soc.clock_limits[k].state = i;14171417- dcn2_1_soc.clock_limits[k].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;14181418- dcn2_1_soc.clock_limits[k].fabricclk_mhz = clk_table->entries[i].fclk_mhz;14191419- dcn2_1_soc.clock_limits[k].socclk_mhz = clk_table->entries[i].socclk_mhz;14201420- dcn2_1_soc.clock_limits[k].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2;13961396+ clock_limits[i].state = i;13971397+ clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;13981398+ clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;13991399+ clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;14001400+ clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2;1421140114221422- dcn2_1_soc.clock_limits[k].dispclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dispclk_mhz;14231423- dcn2_1_soc.clock_limits[k].dppclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dppclk_mhz;14241424- dcn2_1_soc.clock_limits[k].dram_bw_per_chan_gbps = 
dcn2_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;14251425- dcn2_1_soc.clock_limits[k].dscclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz;14261426- dcn2_1_soc.clock_limits[k].dtbclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;14271427- dcn2_1_soc.clock_limits[k].phyclk_d18_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;14281428- dcn2_1_soc.clock_limits[k].phyclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz;14291429- k++;14301430- }14021402+ clock_limits[i].dispclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dispclk_mhz;14031403+ clock_limits[i].dppclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dppclk_mhz;14041404+ clock_limits[i].dram_bw_per_chan_gbps = dcn2_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;14051405+ clock_limits[i].dscclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz;14061406+ clock_limits[i].dtbclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;14071407+ clock_limits[i].phyclk_d18_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;14081408+ clock_limits[i].phyclk_mhz = dcn2_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz;14311409 }14321432- dcn2_1_soc.num_states = k;14101410+ for (i = 0; i < clk_table->num_entries; i++)14111411+ dcn2_1_soc.clock_limits[i] = clock_limits[i];14121412+ if (clk_table->num_entries) {14131413+ dcn2_1_soc.num_states = clk_table->num_entries;14141414+ /* duplicate last level */14151415+ dcn2_1_soc.clock_limits[dcn2_1_soc.num_states] = dcn2_1_soc.clock_limits[dcn2_1_soc.num_states - 1];14161416+ dcn2_1_soc.clock_limits[dcn2_1_soc.num_states].state = dcn2_1_soc.num_states;14171417+ }14331418 }14341434-14351435- /* duplicate last level */14361436- dcn2_1_soc.clock_limits[dcn2_1_soc.num_states] = dcn2_1_soc.clock_limits[dcn2_1_soc.num_states - 1];14371437- dcn2_1_soc.clock_limits[dcn2_1_soc.num_states].state = dcn2_1_soc.num_states;1438141914391420 dml_init_instance(&dc->dml, &dcn2_1_soc, &dcn2_1_ip, 
DML_PROJECT_DCN21);14401421}
+16
drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
···210210 struct mpcc_blnd_cfg *blnd_cfg,211211 int mpcc_id);212212213213+ /*214214+ * Lock cursor updates for the specified OPP.215215+ * OPP defines the set of MPCC that are locked together for cursor.216216+ *217217+ * Parameters:218218+ * [in] mpc - MPC context.219219+ * [in] opp_id - The OPP to lock cursor updates on220220+ * [in] lock - lock/unlock the OPP221221+ *222222+ * Return: void223223+ */224224+ void (*cursor_lock)(225225+ struct mpc *mpc,226226+ int opp_id,227227+ bool lock);228228+213229 struct mpcc* (*get_mpcc_for_dpp)(214230 struct mpc_tree *tree,215231 int dpp_id);
···480480 return ret;481481482482 ret = qxl_release_reserve_list(release, true);483483- if (ret)483483+ if (ret) {484484+ qxl_release_free(qdev, release);484485 return ret;485485-486486+ }486487 cmd = (struct qxl_surface_cmd *)qxl_release_map(qdev, release);487488 cmd->type = QXL_SURFACE_CMD_CREATE;488489 cmd->flags = QXL_SURF_FLAG_KEEP_DATA;···500499 /* no need to add a release to the fence for this surface bo,501500 since it is only released when we ask to destroy the surface502501 and it would never signal otherwise */503503- qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);504502 qxl_release_fence_buffer_objects(release);503503+ qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);505504506505 surf->hw_surf_alloc = true;507506 spin_lock(&qdev->surf_id_idr_lock);···543542 cmd->surface_id = id;544543 qxl_release_unmap(qdev, release, &cmd->release_info);545544546546- qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);547547-548545 qxl_release_fence_buffer_objects(release);546546+ qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);549547550548 return 0;551549}
···11551155config HID_MCP222111561156 tristate "Microchip MCP2221 HID USB-to-I2C/SMbus host support"11571157 depends on USB_HID && I2C11581158+ depends on GPIOLIB11581159 ---help---11591160 Provides I2C and SMBUS host adapter functionality over USB-HID11601161 through MCP2221 device.
+1
drivers/hid/hid-alps.c
···802802 break;803803 case HID_DEVICE_ID_ALPS_U1_DUAL:804804 case HID_DEVICE_ID_ALPS_U1:805805+ case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY:805806 data->dev_type = U1;806807 break;807808 default:
···872872}873873874874static const struct hid_device_id lg_g15_devices[] = {875875+ /* The G11 is a G15 without the LCD, treat it as a G15 */876876+ { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,877877+ USB_DEVICE_ID_LOGITECH_G11),878878+ .driver_data = LG_G15 },875879 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,876880 USB_DEVICE_ID_LOGITECH_G15_LCD),877881 .driver_data = LG_G15 },
···978978979979	return drv->resume(dev);980980}981981+#else982982+#define vmbus_suspend	NULL983983+#define vmbus_resume	NULL981984#endif /* CONFIG_PM_SLEEP */982985983986/*···1000997}10019981002999/*10031003- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than10041004- * SET_SYSTEM_SLEEP_PM_OPS: see the comment before vmbus_bus_pm.10001000+ * Note: we must use the "noirq" ops: see the comment before vmbus_bus_pm.10011001+ *10021002+ * suspend_noirq/resume_noirq are set to NULL to support Suspend-to-Idle: we10031003+ * shouldn't suspend the vmbus devices upon Suspend-to-Idle, otherwise there10041004+ * is no way to wake up a Generation-2 VM.10051005+ *10061006+ * The other 4 ops are for hibernation.10051007 */10081008+10061009static const struct dev_pm_ops vmbus_pm = {10071007-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_suspend, vmbus_resume)10101010+	.suspend_noirq	= NULL,10111011+	.resume_noirq	= NULL,10121012+	.freeze_noirq	= vmbus_suspend,10131013+	.thaw_noirq	= vmbus_resume,10141014+	.poweroff_noirq	= vmbus_suspend,10151015+	.restore_noirq	= vmbus_resume,10081016};1009101710101018/* The one and only one */···2295228122962282	return 0;22972283}22842284+#else22852285+#define vmbus_bus_suspend	NULL22862286+#define vmbus_bus_resume	NULL22982287#endif /* CONFIG_PM_SLEEP */2299228823002289static const struct acpi_device_id vmbus_acpi_device_ids[] = {···23082291MODULE_DEVICE_TABLE(acpi, vmbus_acpi_device_ids);2309229223102293/*23112311- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than23122312- * SET_SYSTEM_SLEEP_PM_OPS, otherwise NIC SR-IOV can not work, because the23132313- * "pci_dev_pm_ops" uses the "noirq" callbacks: in the resume path, the23142314- * pci "noirq" restore callback runs before "non-noirq" callbacks (see22942294+ * Note: we must use the "noirq" ops, otherwise hibernation can not work with22952295+ * PCI device assignment, because "pci_dev_pm_ops" uses the "noirq" ops: in22962296+ * the resume path, the pci "noirq" restore op runs before "non-noirq" op (see23152297 * resume_target_kernel() -> dpm_resume_start(), and hibernation_restore() ->23162298 * dpm_resume_end()). This means vmbus_bus_resume() and the pci-hyperv's23172317- * resume callback must also run via the "noirq" callbacks.22992299+ * resume callback must also run via the "noirq" ops.23002300+ *23012301+ * Set suspend_noirq/resume_noirq to NULL for Suspend-to-Idle: see the comment23022302+ * earlier in this file before vmbus_pm.23182303 */23042304+23192305static const struct dev_pm_ops vmbus_bus_pm = {23202320-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_bus_suspend, vmbus_bus_resume)23062306+	.suspend_noirq	= NULL,23072307+	.resume_noirq	= NULL,23082308+	.freeze_noirq	= vmbus_bus_suspend,23092309+	.thaw_noirq	= vmbus_bus_resume,23102310+	.poweroff_noirq	= vmbus_bus_suspend,23112311+	.restore_noirq	= vmbus_bus_resume23212312};2322231323232314static struct acpi_driver vmbus_acpi_driver = {
···360360 value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);361361 i2c_slave_event(iproc_i2c->slave,362362 I2C_SLAVE_WRITE_RECEIVED, &value);363363+ if (rx_status == I2C_SLAVE_RX_END)364364+ i2c_slave_event(iproc_i2c->slave,365365+ I2C_SLAVE_STOP, &value);363366 }364367 } else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) {365368 /* Master read other than start */
+12-24
drivers/i2c/busses/i2c-tegra.c
···996996	do {997997		u32 status = i2c_readl(i2c_dev, I2C_INT_STATUS);998998999999-		if (status)999999+		if (status) {10001000			tegra_i2c_isr(i2c_dev->irq, i2c_dev);1001100110021002-		if (completion_done(complete)) {10031003-			s64 delta = ktime_ms_delta(ktimeout, ktime);10021002+			if (completion_done(complete)) {10031003+				s64 delta = ktime_ms_delta(ktimeout, ktime);1004100410051005-			return msecs_to_jiffies(delta) ?: 1;10051005+				return msecs_to_jiffies(delta) ?: 1;10061006+			}10061007		}1007100810081009		ktime = ktime_get();···10301029		disable_irq(i2c_dev->irq);1031103010321031		/*10331033-		 * Under some rare circumstances (like running KASAN +10341034-		 * NFS root) CPU, which handles interrupt, may stuck in10351035-		 * uninterruptible state for a significant time.  In this10361036-		 * case we will get timeout if I2C transfer is running on10371037-		 * a sibling CPU, despite of IRQ being raised.10381038-		 *10391039-		 * In order to handle this rare condition, the IRQ status10401040-		 * needs to be checked after timeout.10321032+		 * There is a chance that completion may happen after IRQ10331033+		 * synchronization, which is done by disable_irq().10411034		 */10421042-		if (ret == 0)10431043-			ret = tegra_i2c_poll_completion_timeout(i2c_dev,10441044-							complete, 0);10351035+		if (ret == 0 && completion_done(complete)) {10361036+			dev_warn(i2c_dev->dev,10371037+				 "completion done after timeout\n");10381038+			ret = 1;10391039+		}10451040	}1046104110471042	return ret;···12151218	if (dma) {12161219		time_left = tegra_i2c_wait_completion_timeout(12171220			i2c_dev, &i2c_dev->dma_complete, xfer_time);12181218-12191219-		/*12201220-		 * Synchronize DMA first, since dmaengine_terminate_sync()12211221-		 * performs synchronization after the transfer's termination12221222-		 * and we want to get a completion if transfer succeeded.12231223-		 */12241224-		dmaengine_synchronize(i2c_dev->msg_read ?12251225-				      i2c_dev->rx_dma_chan :12261226-				      i2c_dev->tx_dma_chan);1227122112281222		dmaengine_terminate_sync(i2c_dev->msg_read ?12291223					 i2c_dev->rx_dma_chan :
···102102103103#define XADC_FLAGS_BUFFERED BIT(0)104104105105+/*106106+ * The XADC hardware supports a samplerate of up to 1MSPS. Unfortunately it does107107+ * not have a hardware FIFO, which means an interrupt is generated for each108108+ * conversion sequence. At 1MSPS sample rate the CPU in ZYNQ7000 is completely109109+ * overloaded by the interrupts that it soft-locks up. For this reason the driver110110+ * limits the maximum samplerate to 150kSPS. At this rate the CPU is fairly busy,111111+ * but still responsive.112112+ */113113+#define XADC_MAX_SAMPLERATE 150000114114+105115static void xadc_write_reg(struct xadc *xadc, unsigned int reg,106116	uint32_t val)107117{···684674685675	spin_lock_irqsave(&xadc->lock, flags);686676	xadc_read_reg(xadc, XADC_AXI_REG_IPIER, &val);687687-	xadc_write_reg(xadc, XADC_AXI_REG_IPISR, val & XADC_AXI_INT_EOS);677677+	xadc_write_reg(xadc, XADC_AXI_REG_IPISR, XADC_AXI_INT_EOS);688678	if (state)689679		val |= XADC_AXI_INT_EOS;690680	else···732722{733723	uint16_t val;734724725725+	/* Powerdown the ADC-B when it is not needed. */735726	switch (seq_mode) {736727	case XADC_CONF1_SEQ_SIMULTANEOUS:737728	case XADC_CONF1_SEQ_INDEPENDENT:738738-		val = XADC_CONF2_PD_ADC_B;729729+		val = 0;739730		break;740731	default:741741-		val = 0;732732+		val = XADC_CONF2_PD_ADC_B;742733		break;743734	}744735···808797	if (ret)809798		goto err;810799800800+	/*801801+	 * In simultaneous mode the upper and lower aux channels are sampled at802802+	 * the same time. In this mode the upper 8 bits in the sequencer803803+	 * register are don't care and the lower 8 bits control two channels804804+	 * each. As such we must set the bit if either the channel in the lower805805+	 * group or the upper group is enabled.806806+	 */807807+	if (seq_mode == XADC_CONF1_SEQ_SIMULTANEOUS)808808+		scan_mask = ((scan_mask >> 8) | scan_mask) & 0xff0000;809809+811810	ret = xadc_write_adc_reg(xadc, XADC_REG_SEQ(1), scan_mask >> 16);812811	if (ret)813812		goto err;···844823	.postdisable = &xadc_postdisable,845824};846825826826+static int xadc_read_samplerate(struct xadc *xadc)827827+{828828+	unsigned int div;829829+	uint16_t val16;830830+	int ret;831831+832832+	ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);833833+	if (ret)834834+		return ret;835835+836836+	div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;837837+	if (div < 2)838838+		div = 2;839839+840840+	return xadc_get_dclk_rate(xadc) / div / 26;841841+}842842+847843static int xadc_read_raw(struct iio_dev *indio_dev,848844	struct iio_chan_spec const *chan, int *val, int *val2, long info)849845{850846	struct xadc *xadc = iio_priv(indio_dev);851851-	unsigned int div;852847	uint16_t val16;853848	int ret;854849···917880		*val = -((273150 << 12) / 503975);918881		return IIO_VAL_INT;919882	case IIO_CHAN_INFO_SAMP_FREQ:920920-		ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);921921-		if (ret)883883+		ret = xadc_read_samplerate(xadc);884884+		if (ret < 0)922885			return ret;923886924924-		div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;925925-		if (div < 2)926926-			div = 2;927927-928928-		*val = xadc_get_dclk_rate(xadc) / div / 26;929929-887887+		*val = ret;930888		return IIO_VAL_INT;931889	default:932890		return -EINVAL;933891	}934892}935893936936-static int xadc_write_raw(struct iio_dev *indio_dev,937937-	struct iio_chan_spec const *chan, int val, int val2, long info)894894+static int xadc_write_samplerate(struct xadc *xadc, int val)938895{939939-	struct xadc *xadc = iio_priv(indio_dev);940896	unsigned long clk_rate = xadc_get_dclk_rate(xadc);941897	unsigned int div;942898943899	if (!clk_rate)944900		return -EINVAL;945901946946-	if (info != IIO_CHAN_INFO_SAMP_FREQ)947947-		return -EINVAL;948948-949902	if (val <= 0)950903		return -EINVAL;951904952905	/* Max. 150 kSPS */953953-	if (val > 150000)954954-		val = 150000;906906+	if (val > XADC_MAX_SAMPLERATE)907907+		val = XADC_MAX_SAMPLERATE;955908956909	val *= 26;957910···954927	 * limit.955928	 */956929	div = clk_rate / val;957957-	if (clk_rate / div / 26 > 150000)930930+	if (clk_rate / div / 26 > XADC_MAX_SAMPLERATE)958931		div++;959932	if (div < 2)960933		div = 2;···963936964937	return xadc_update_adc_reg(xadc, XADC_REG_CONF2, XADC_CONF2_DIV_MASK,965938		div << XADC_CONF2_DIV_OFFSET);939939+}940940+941941+static int xadc_write_raw(struct iio_dev *indio_dev,942942+	struct iio_chan_spec const *chan, int val, int val2, long info)943943+{944944+	struct xadc *xadc = iio_priv(indio_dev);945945+946946+	if (info != IIO_CHAN_INFO_SAMP_FREQ)947947+		return -EINVAL;948948+949949+	return xadc_write_samplerate(xadc, val);966950}967951968952static const struct iio_event_spec xadc_temp_events[] = {···12601222	ret = clk_prepare_enable(xadc->clk);12611223	if (ret)12621224		goto err_free_samplerate_trigger;12251225+12261226+	/*12271227+	 * Make sure not to exceed the maximum samplerate since otherwise the12281228+	 * resulting interrupt storm will soft-lock the system.12291229+	 */12301230+	if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {12311231+		ret = xadc_read_samplerate(xadc);12321232+		if (ret < 0)12331233+			goto err_free_samplerate_trigger;12341234+		if (ret > XADC_MAX_SAMPLERATE) {12351235+			ret = xadc_write_samplerate(xadc, XADC_MAX_SAMPLERATE);12361236+			if (ret < 0)12371237+				goto err_free_samplerate_trigger;12381238+		}12391239+	}1263124012641241	ret = request_irq(xadc->irq, xadc->ops->interrupt_handler, 0,12651242			dev_name(&pdev->dev), indio_dev);
···525525 ret = fwnode_property_read_u32(child, "num", &num);526526 if (ret)527527 return ret;528528- if (num > AD5770R_MAX_CHANNELS)528528+ if (num >= AD5770R_MAX_CHANNELS)529529 return -EINVAL;530530531531 ret = fwnode_property_read_u32_array(child,
+10-1
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···16171617 if (result)16181618 goto out_unlock;1619161916201620+ pm_runtime_disable(dev);16211621+ pm_runtime_set_active(dev);16221622+ pm_runtime_enable(dev);16231623+16201624 result = inv_mpu6050_switch_engine(st, true, st->suspended_sensors);16211625 if (result)16221626 goto out_unlock;···1642163816431639 mutex_lock(&st->lock);1644164016411641+ st->suspended_sensors = 0;16421642+ if (pm_runtime_suspended(dev)) {16431643+ result = 0;16441644+ goto out_unlock;16451645+ }16461646+16451647 if (iio_buffer_enabled(indio_dev)) {16461648 result = inv_mpu6050_prepare_fifo(st, false);16471649 if (result)16481650 goto out_unlock;16491651 }1650165216511651- st->suspended_sensors = 0;16521653 if (st->chip_config.accl_en)16531654 st->suspended_sensors |= INV_MPU6050_SENSOR_ACCL;16541655 if (st->chip_config.gyro_en)
+3
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
···337337 * @gain: Configured sensor sensitivity.338338 * @odr: Output data rate of the sensor [Hz].339339 * @watermark: Sensor watermark level.340340+ * @decimator: Sensor decimation factor.340341 * @sip: Number of samples in a given pattern.341342 * @ts_ref: Sensor timestamp reference for hw one.342343 * @ext_info: Sensor settings if it is connected to i2c controller···351350 u32 odr;352351353352 u16 watermark;353353+ u8 decimator;354354 u8 sip;355355 s64 ts_ref;356356357357 struct {358358 const struct st_lsm6dsx_ext_dev_settings *settings;359359+ u32 slv_odr;359360 u8 addr;360361 } ext_info;361362};
···20362036 return 0;20372037}2038203820392039-static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw)20392039+static int st_lsm6dsx_reset_device(struct st_lsm6dsx_hw *hw)20402040{20412041 const struct st_lsm6dsx_reg *reg;20422042 int err;20432043+20442044+ /*20452045+ * flush hw FIFO before device reset in order to avoid20462046+ * possible races on interrupt line 1. If the first interrupt20472047+ * line is asserted during hw reset the device will work in20482048+ * I3C-only mode (if it is supported)20492049+ */20502050+ err = st_lsm6dsx_flush_fifo(hw);20512051+ if (err < 0 && err != -ENOTSUPP)20522052+ return err;2043205320442054 /* device sw reset */20452055 reg = &hw->settings->reset;···20682058 return err;2069205920702060 msleep(50);20612061+20622062+ return 0;20632063+}20642064+20652065+static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw)20662066+{20672067+ const struct st_lsm6dsx_reg *reg;20682068+ int err;20692069+20702070+ err = st_lsm6dsx_reset_device(hw);20712071+ if (err < 0)20722072+ return err;2071207320722074 /* enable Block Data Update */20732075 reg = &hw->settings->bdu;
···360360 * uverbs_uobject_fd_release(), and the caller is expected to ensure361361 * that release is never done while a call to lookup is possible.362362 */363363- if (f->f_op != fd_type->fops) {363363+ if (f->f_op != fd_type->fops || uobject->ufile != ufile) {364364 fput(f);365365 return ERR_PTR(-EBADF);366366 }···474474 filp = anon_inode_getfile(fd_type->name, fd_type->fops, NULL,475475 fd_type->flags);476476 if (IS_ERR(filp)) {477477+ uverbs_uobject_put(uobj);477478 uobj = ERR_CAST(filp);478478- goto err_uobj;479479+ goto err_fd;479480 }480481 uobj->object = filp;481482482483 uobj->id = new_fd;483484 return uobj;484485485485-err_uobj:486486- uverbs_uobject_put(uobj);487486err_fd:488487 put_unused_fd(new_fd);489488 return uobj;···678679 enum rdma_lookup_mode mode)679680{680681 assert_uverbs_usecnt(uobj, mode);681681- uobj->uapi_object->type_class->lookup_put(uobj, mode);682682 /*683683 * In order to unlock an object, either decrease its usecnt for684684 * read access or zero it in case of exclusive access. See···694696 break;695697 }696698699699+ uobj->uapi_object->type_class->lookup_put(uobj, mode);697700 /* Pairs with the kref obtained by type->lookup_get */698701 uverbs_uobject_put(uobj);699702}
+4
drivers/infiniband/core/uverbs_main.c
···820820 ret = mmget_not_zero(mm);821821 if (!ret) {822822 list_del_init(&priv->list);823823+ if (priv->entry) {824824+ rdma_user_mmap_entry_put(priv->entry);825825+ priv->entry = NULL;826826+ }823827 mm = NULL;824828 continue;825829 }
···14991499 int i;1500150015011501 for (i = 0; i < ARRAY_SIZE(pdefault_rules->rules_create_list); i++) {15021502+ union ib_flow_spec ib_spec = {};15021503 int ret;15031503- union ib_flow_spec ib_spec;15041504+15041505 switch (pdefault_rules->rules_create_list[i]) {15051506 case 0:15061507 /* no rule */
···9696 if (!cmd)9797 return;98989999+ memset(cmd, 0, sizeof(*cmd));100100+99101 if (vote_x == 0 && vote_y == 0)100102 valid = false;101103···114112 * Set the wait for completion flag on command that need to be completed115113 * before the next command.116114 */117117- if (commit)118118- cmd->wait = true;115115+ cmd->wait = commit;119116}120117121118static void tcs_list_gen(struct list_head *bcm_list, int bucket,
+2-2
drivers/iommu/Kconfig
···362362363363config SPAPR_TCE_IOMMU364364 bool "sPAPR TCE IOMMU Support"365365- depends on PPC_POWERNV || PPC_PSERIES || (PPC && COMPILE_TEST)365365+ depends on PPC_POWERNV || PPC_PSERIES366366 select IOMMU_API367367 help368368 Enables bits of IOMMU API required by VFIO. The iommu_ops···457457458458config MTK_IOMMU459459 bool "MTK IOMMU Support"460460- depends on ARM || ARM64 || COMPILE_TEST460460+ depends on HAS_DMA461461 depends on ARCH_MEDIATEK || COMPILE_TEST462462 select ARM_DMA_USE_IOMMU463463 select IOMMU_API
+1-1
drivers/iommu/amd_iommu_init.c
···29362936{29372937 for (; *str; ++str) {29382938 if (strncmp(str, "legacy", 6) == 0) {29392939- amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY;29392939+ amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;29402940 break;29412941 }29422942 if (strncmp(str, "vapic", 5) == 0) {
···824824 qcom_iommu->dev = dev;825825826826 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);827827- if (res)827827+ if (res) {828828 qcom_iommu->local_base = devm_ioremap_resource(dev, res);829829+ if (IS_ERR(qcom_iommu->local_base))830830+ return PTR_ERR(qcom_iommu->local_base);831831+ }829832830833 qcom_iommu->iface_clk = devm_clk_get(dev, "iface");831834 if (IS_ERR(qcom_iommu->iface_clk)) {
+4-2
drivers/md/dm-mpath.c
···585585586586 /* Do we need to select a new pgpath? */587587 pgpath = READ_ONCE(m->current_pgpath);588588- queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);589589- if (!pgpath || !queue_io)588588+ if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))590589 pgpath = choose_pgpath(m, bio->bi_iter.bi_size);590590+591591+ /* MPATHF_QUEUE_IO might have been cleared by choose_pgpath. */592592+ queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);591593592594 if ((pgpath && queue_io) ||593595 (!pgpath && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))) {
+1-1
drivers/md/dm-verity-fec.c
···435435 fio->level++;436436437437 if (type == DM_VERITY_BLOCK_TYPE_METADATA)438438- block += v->data_blocks;438438+ block = block - v->hash_start + v->data_blocks;439439440440 /*441441 * For RS(M, N), the continuous FEC data is divided into blocks of N
···878878 * Issued High Priority Interrupt, and check for card status879879 * until out-of prg-state.880880 */881881-int mmc_interrupt_hpi(struct mmc_card *card)881881+static int mmc_interrupt_hpi(struct mmc_card *card)882882{883883 int err;884884 u32 status;
+10-11
drivers/mmc/host/cqhci.c
···55#include <linux/delay.h>66#include <linux/highmem.h>77#include <linux/io.h>88+#include <linux/iopoll.h>89#include <linux/module.h>910#include <linux/dma-mapping.h>1011#include <linux/slab.h>···350349/* CQHCI is idle and should halt immediately, so set a small timeout */351350#define CQHCI_OFF_TIMEOUT 100352351352352+static u32 cqhci_read_ctl(struct cqhci_host *cq_host)353353+{354354+ return cqhci_readl(cq_host, CQHCI_CTL);355355+}356356+353357static void cqhci_off(struct mmc_host *mmc)354358{355359 struct cqhci_host *cq_host = mmc->cqe_private;356356- ktime_t timeout;357357- bool timed_out;358360 u32 reg;361361+ int err;359362360363 if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt)361364 return;···369364370365 cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL);371366372372- timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT);373373- while (1) {374374- timed_out = ktime_compare(ktime_get(), timeout) > 0;375375- reg = cqhci_readl(cq_host, CQHCI_CTL);376376- if ((reg & CQHCI_HALT) || timed_out)377377- break;378378- }379379-380380- if (timed_out)367367+ err = readx_poll_timeout(cqhci_read_ctl, cq_host, reg,368368+ reg & CQHCI_HALT, 0, CQHCI_OFF_TIMEOUT);369369+ if (err < 0)381370 pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc));382371 else383372 pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
···235235{236236 /* Wait for 5ms after set 1.8V signal enable bit */237237 usleep_range(5000, 5500);238238+239239+ /*240240+ * For some reason the controller's Host Control2 register reports241241+ * the bit representing 1.8V signaling as 0 when read after it was242242+ * written as 1. Subsequent read reports 1.243243+ *244244+ * Since this may cause some issues, do an empty read of the Host245245+ * Control2 register here to circumvent this.246246+ */247247+ sdhci_readw(host, SDHCI_HOST_CONTROL2);238248}239249240250static const struct sdhci_ops sdhci_xenon_ops = {
+1-1
drivers/net/dsa/mv88e6xxx/Kconfig
···2424 bool "PTP support for Marvell 88E6xxx"2525 default n2626 depends on NET_DSA_MV88E6XXX_GLOBAL22727+ depends on PTP_1588_CLOCK2728 imply NETWORK_PHY_TIMESTAMPING2828- imply PTP_1588_CLOCK2929 help3030 Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch3131 chips that support it.
···2020config NET_DSA_SJA1105_PTP2121 bool "Support for the PTP clock on the NXP SJA1105 Ethernet switch"2222 depends on NET_DSA_SJA11052323+ depends on PTP_1588_CLOCK2324 help2425 This enables support for timestamping and PTP clock manipulations in2526 the SJA1105 DSA driver.
+18-8
drivers/net/dsa/sja1105/sja1105_ptp.c
···16161717/* PTPSYNCTS has no interrupt or update mechanism, because the intended1818 * hardware use case is for the timestamp to be collected synchronously,1919- * immediately after the CAS_MASTER SJA1105 switch has triggered a CASSYNC2020- * pulse on the PTP_CLK pin. When used as a generic extts source, it needs2121- * polling and a comparison with the old value. The polling interval is just2222- * the Nyquist rate of a canonical PPS input (e.g. from a GPS module).2323- * Anything of higher frequency than 1 Hz will be lost, since there is no2424- * timestamp FIFO.1919+ * immediately after the CAS_MASTER SJA1105 switch has performed a CASSYNC2020+ * one-shot toggle (no return to level) on the PTP_CLK pin. When used as a2121+ * generic extts source, the PTPSYNCTS register needs polling and a comparison2222+ * with the old value. The polling interval is configured as the Nyquist rate2323+ * of a signal with 50% duty cycle and 1Hz frequency, which is sadly all that2424+ * this hardware can do (but may be enough for some setups). Anything of higher2525+ * frequency than 1 Hz will be lost, since there is no timestamp FIFO.2526 */2626-#define SJA1105_EXTTS_INTERVAL (HZ / 2)2727+#define SJA1105_EXTTS_INTERVAL (HZ / 4)27282829/* This range is actually +/- SJA1105_MAX_ADJ_PPB2930 * divided by 1000 (ppb -> ppm) and with a 16-bit···755754 return -EOPNOTSUPP;756755757756 /* Reject requests with unsupported flags */758758- if (extts->flags)757757+ if (extts->flags & ~(PTP_ENABLE_FEATURE |758758+ PTP_RISING_EDGE |759759+ PTP_FALLING_EDGE |760760+ PTP_STRICT_FLAGS))761761+ return -EOPNOTSUPP;762762+763763+ /* We can only enable time stamping on both edges, sadly. */764764+ if ((extts->flags & PTP_STRICT_FLAGS) &&765765+ (extts->flags & PTP_ENABLE_FEATURE) &&766766+ (extts->flags & PTP_EXTTS_EDGES) != PTP_EXTTS_EDGES)759767 return -EOPNOTSUPP;760768761769 rc = sja1105_change_ptp_clk_pin_func(priv, PTP_PF_EXTTS);
···3535config MACB_USE_HWSTAMP3636 bool "Use IEEE 1588 hwstamp"3737 depends on MACB3838+ depends on PTP_1588_CLOCK3839 default y3939- imply PTP_1588_CLOCK4040 ---help---4141 Enable IEEE 1588 Precision Time Protocol (PTP) support for MACB.4242
+12-12
drivers/net/ethernet/cadence/macb_main.c
···334334 int status;335335336336 status = pm_runtime_get_sync(&bp->pdev->dev);337337- if (status < 0)337337+ if (status < 0) {338338+ pm_runtime_put_noidle(&bp->pdev->dev);338339 goto mdio_pm_exit;340340+ }339341340342 status = macb_mdio_wait_for_idle(bp);341343 if (status < 0)···388386 int status;389387390388 status = pm_runtime_get_sync(&bp->pdev->dev);391391- if (status < 0)389389+ if (status < 0) {390390+ pm_runtime_put_noidle(&bp->pdev->dev);392391 goto mdio_pm_exit;392392+ }393393394394 status = macb_mdio_wait_for_idle(bp);395395 if (status < 0)···38203816 int ret;3821381738223818 ret = pm_runtime_get_sync(&lp->pdev->dev);38233823- if (ret < 0)38193819+ if (ret < 0) {38203820+ pm_runtime_put_noidle(&lp->pdev->dev);38243821 return ret;38223822+ }3825382338263824 /* Clear internal statistics */38273825 ctl = macb_readl(lp, NCR);···4178417241794173static int fu540_c000_init(struct platform_device *pdev)41804174{41814181- struct resource *res;41824182-41834183- res = platform_get_resource(pdev, IORESOURCE_MEM, 1);41844184- if (!res)41854185- return -ENODEV;41864186-41874187- mgmt->reg = ioremap(res->start, resource_size(res));41884188- if (!mgmt->reg)41894189- return -ENOMEM;41754175+ mgmt->reg = devm_platform_ioremap_resource(pdev, 1);41764176+ if (IS_ERR(mgmt->reg))41774177+ return PTR_ERR(mgmt->reg);4190417841914179 return macb_init(pdev);41924180}
+1-1
drivers/net/ethernet/cavium/Kconfig
···5454config CAVIUM_PTP5555 tristate "Cavium PTP coprocessor as PTP clock"5656 depends on 64BIT && PCI5757- imply PTP_1588_CLOCK5757+ depends on PTP_1588_CLOCK5858 ---help---5959 This driver adds support for the Precision Time Protocol Clocks and6060 Timestamping coprocessor (PTP) found on Cavium processors.
+37-3
drivers/net/ethernet/chelsio/cxgb4/sge.c
···22072207 if (unlikely(skip_eotx_wr)) {22082208 start = (u64 *)wr;22092209 eosw_txq->state = next_state;22102210+ eosw_txq->cred -= wrlen16;22112211+ eosw_txq->ncompl++;22122212+ eosw_txq->last_compl = 0;22102213 goto write_wr_headers;22112214 }22122215···23682365 return cxgb4_eth_xmit(skb, dev);23692366}2370236723682368+static void eosw_txq_flush_pending_skbs(struct sge_eosw_txq *eosw_txq)23692369+{23702370+ int pktcount = eosw_txq->pidx - eosw_txq->last_pidx;23712371+ int pidx = eosw_txq->pidx;23722372+ struct sk_buff *skb;23732373+23742374+ if (!pktcount)23752375+ return;23762376+23772377+ if (pktcount < 0)23782378+ pktcount += eosw_txq->ndesc;23792379+23802380+ while (pktcount--) {23812381+ pidx--;23822382+ if (pidx < 0)23832383+ pidx += eosw_txq->ndesc;23842384+23852385+ skb = eosw_txq->desc[pidx].skb;23862386+ if (skb) {23872387+ dev_consume_skb_any(skb);23882388+ eosw_txq->desc[pidx].skb = NULL;23892389+ eosw_txq->inuse--;23902390+ }23912391+ }23922392+23932393+ eosw_txq->pidx = eosw_txq->last_pidx + 1;23942394+}23952395+23712396/**23722397 * cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc.23732398 * @dev - netdevice···24712440 FW_FLOWC_MNEM_EOSTATE_CLOSING :24722441 FW_FLOWC_MNEM_EOSTATE_ESTABLISHED);2473244224742474- eosw_txq->cred -= len16;24752475- eosw_txq->ncompl++;24762476- eosw_txq->last_compl = 0;24432443+ /* Free up any pending skbs to ensure there's room for24442444+ * termination FLOWC.24452445+ */24462446+ if (tc == FW_SCHED_CLS_NONE)24472447+ eosw_txq_flush_pending_skbs(eosw_txq);2477244824782449 ret = eosw_txq_enqueue(eosw_txq, skb);24792450 if (ret) {···27282695 * is ever running at a time ...27292696 */27302697static void service_ofldq(struct sge_uld_txq *q)26982698+ __must_hold(&q->sendq.lock)27312699{27322700 u64 *pos, *before, *end;27332701 int credits;
···10301030{10311031 int i, j;1032103210331033- /* Loop through all the mac tables entries. There are 1024 rows of 410341034- * entries.10351035- */10361036- for (i = 0; i < 1024; i++) {10331033+ /* Loop through all the mac tables entries. */10341034+ for (i = 0; i < ocelot->num_mact_rows; i++) {10371035 for (j = 0; j < 4; j++) {10381036 struct ocelot_mact_entry entry;10391037 bool is_static;···1456145814571459void ocelot_set_ageing_time(struct ocelot *ocelot, unsigned int msecs)14581460{14591459- ocelot_write(ocelot, ANA_AUTOAGE_AGE_PERIOD(msecs / 2),14601460- ANA_AUTOAGE);14611461+ unsigned int age_period = ANA_AUTOAGE_AGE_PERIOD(msecs / 2000);14621462+14631463+ /* Setting AGE_PERIOD to zero effectively disables automatic aging,14641464+ * which is clearly not what our intention is. So avoid that.14651465+ */14661466+ if (!age_period)14671467+ age_period = 1;14681468+14691469+ ocelot_rmw(ocelot, age_period, ANA_AUTOAGE_AGE_PERIOD_M, ANA_AUTOAGE);14611470}14621471EXPORT_SYMBOL(ocelot_set_ageing_time);14631472
···283283 if (!nfp_nsp_has_hwinfo_lookup(nsp)) {284284 nfp_warn(pf->cpp, "NSP doesn't support PF MAC generation\n");285285 eth_hw_addr_random(nn->dp.netdev);286286+ nfp_nsp_close(nsp);286287 return;287288 }288289
···40514051/**40524052 * stmmac_interrupt - main ISR40534053 * @irq: interrupt number.40544054- * @dev_id: to pass the net device pointer.40544054+ * @dev_id: to pass the net device pointer (must be valid).40554055 * Description: this is the main driver interrupt service routine.40564056 * It can call:40574057 * o DMA service routine (to manage incoming frame reception and transmission···4074407440754075 if (priv->irq_wake)40764076 pm_wakeup_event(priv->device, 0);40774077-40784078- if (unlikely(!dev)) {40794079- netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);40804080- return IRQ_NONE;40814081- }4082407740834078 /* Check if adapter is up */40844079 if (test_bit(STMMAC_DOWN, &priv->state))
+1-2
drivers/net/ethernet/ti/Kconfig
···9090config TI_CPTS_MOD9191 tristate9292 depends on TI_CPTS9393+ depends on PTP_1588_CLOCK9394 default y if TI_CPSW=y || TI_KEYSTONE_NETCP=y || TI_CPSW_SWITCHDEV=y9494- select NET_PTP_CLASSIFY9595- imply PTP_1588_CLOCK9695 default m97969897config TI_K3_AM65_CPSW_NUSS
···1063106310641064 complete(&gsi->completion);10651065}10661066+10661067/* Inter-EE interrupt handler */10671068static void gsi_isr_glob_ee(struct gsi *gsi)10681069{···15161515 struct completion *completion = &gsi->completion;15171516 u32 val;1518151715181518+ /* First zero the result code field */15191519+ val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);15201520+ val &= ~GENERIC_EE_RESULT_FMASK;15211521+ iowrite32(val, gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);15221522+15231523+ /* Now issue the command */15191524 val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK);15201525 val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK);15211526 val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK);···1827182018281821 /* Worst case we need an event for every outstanding TRE */18291822 if (data->channel.tre_count > data->channel.event_count) {18301830- dev_warn(gsi->dev, "channel %u limited to %u TREs\n",18311831- data->channel_id, data->channel.tre_count);18321823 tre_count = data->channel.event_count;18241824+ dev_warn(gsi->dev, "channel %u limited to %u TREs\n",18251825+ data->channel_id, tre_count);18331826 } else {18341827 tre_count = data->channel.tre_count;18351828 }
···12691269			ret, endpoint->channel_id, endpoint->endpoint_id);12701270}1271127112721272+static int ipa_endpoint_stop_rx_dma(struct ipa *ipa)12731273+{12741274+	u16 size = IPA_ENDPOINT_STOP_RX_SIZE;12751275+	struct gsi_trans *trans;12761276+	dma_addr_t addr;12771277+	int ret;12781278+12791279+	trans = ipa_cmd_trans_alloc(ipa, 1);12801280+	if (!trans) {12811281+		dev_err(&ipa->pdev->dev,12821282+			"no transaction for RX endpoint STOP workaround\n");12831283+		return -EBUSY;12841284+	}12851285+12861286+	/* Read into the highest part of the zero memory area */12871287+	addr = ipa->zero_addr + ipa->zero_size - size;12881288+12891289+	ipa_cmd_dma_task_32b_addr_add(trans, size, addr, false);12901290+12911291+	ret = gsi_trans_commit_wait_timeout(trans, ENDPOINT_STOP_DMA_TIMEOUT);12921292+	if (ret)12931293+		gsi_trans_free(trans);12941294+12951295+	return ret;12961296+}12971297+12981298+/**12991299+ * ipa_endpoint_stop() - Stops a GSI channel in IPA13001300+ * @client:	Client whose endpoint should be stopped13011301+ *13021302+ * This function implements the sequence to stop a GSI channel13031303+ * in IPA. This function returns when the channel is in STOP state.13041304+ *13051305+ * Return value: 0 on success, negative otherwise13061306+ */13071307+int ipa_endpoint_stop(struct ipa_endpoint *endpoint)13081308+{13091309+	u32 retries = IPA_ENDPOINT_STOP_RX_RETRIES;13101310+	int ret;13111311+13121312+	do {13131313+		struct ipa *ipa = endpoint->ipa;13141314+		struct gsi *gsi = &ipa->gsi;13151315+13161316+		ret = gsi_channel_stop(gsi, endpoint->channel_id);13171317+		if (ret != -EAGAIN || endpoint->toward_ipa)13181318+			break;13191319+13201320+		/* For IPA v3.5.1, send a DMA read task and check again */13211321+		if (ipa->version == IPA_VERSION_3_5_1) {13221322+			ret = ipa_endpoint_stop_rx_dma(ipa);13231323+			if (ret)13241324+				break;13251325+		}13261326+13271327+		msleep(1);13281328+	} while (retries--);13291329+13301330+	return retries ? ret : -EIO;13311331+}13321332+12721333static void ipa_endpoint_program(struct ipa_endpoint *endpoint)12731334{12741335	if (endpoint->toward_ipa) {
+4-2
drivers/net/macsec.c
···13051305 struct crypto_aead *tfm;13061306 int ret;1307130713081308- tfm = crypto_alloc_aead("gcm(aes)", 0, 0);13081308+ /* Pick a sync gcm(aes) cipher to ensure order is preserved. */13091309+ tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);1309131013101311 if (IS_ERR(tfm))13111312 return tfm;···26412640 if (ret)26422641 goto rollback;2643264226442644- rtnl_unlock();26452643 /* Force features update, since they are different for SW MACSec and26462644 * HW offloading cases.26472645 */26482646 netdev_update_features(dev);26472647+26482648+ rtnl_unlock();26492649 return 0;2650265026512651rollback:
···11# SPDX-License-Identifier: GPL-2.0-only22config PHY_TEGRA_XUSB33 tristate "NVIDIA Tegra XUSB pad controller driver"44- depends on ARCH_TEGRA44+ depends on ARCH_TEGRA && USB_SUPPORT55+ select USB_COMMON56 select USB_CONN_GPIO67 select USB_PHY78 help
+46-34
drivers/platform/chrome/cros_ec_sensorhub.c
···5252 int sensor_type[MOTIONSENSE_TYPE_MAX] = { 0 };5353 struct cros_ec_command *msg = sensorhub->msg;5454 struct cros_ec_dev *ec = sensorhub->ec;5555- int ret, i, sensor_num;5555+ int ret, i;5656 char *name;57575858- sensor_num = cros_ec_get_sensor_count(ec);5959- if (sensor_num < 0) {6060- dev_err(dev,6161- "Unable to retrieve sensor information (err:%d)\n",6262- sensor_num);6363- return sensor_num;6464- }6565-6666- sensorhub->sensor_num = sensor_num;6767- if (sensor_num == 0) {6868- dev_err(dev, "Zero sensors reported.\n");6969- return -EINVAL;7070- }71587259 msg->version = 1;7360 msg->insize = sizeof(struct ec_response_motion_sense);7461 msg->outsize = sizeof(struct ec_params_motion_sense);75627676- for (i = 0; i < sensor_num; i++) {6363+ for (i = 0; i < sensorhub->sensor_num; i++) {7764 sensorhub->params->cmd = MOTIONSENSE_CMD_INFO;7865 sensorhub->params->info.sensor_num = i;7966···127140 struct cros_ec_dev *ec = dev_get_drvdata(dev->parent);128141 struct cros_ec_sensorhub *data;129142 struct cros_ec_command *msg;130130- int ret;131131- int i;143143+ int ret, i, sensor_num;132144133145 msg = devm_kzalloc(dev, sizeof(struct cros_ec_command) +134146 max((u16)sizeof(struct ec_params_motion_sense),···152166 dev_set_drvdata(dev, data);153167154168 /* Check whether this EC is a sensor hub. 
*/155155- if (cros_ec_check_features(data->ec, EC_FEATURE_MOTION_SENSE)) {169169+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE)) {170170+ sensor_num = cros_ec_get_sensor_count(ec);171171+ if (sensor_num < 0) {172172+ dev_err(dev,173173+ "Unable to retrieve sensor information (err:%d)\n",174174+ sensor_num);175175+ return sensor_num;176176+ }177177+ if (sensor_num == 0) {178178+ dev_err(dev, "Zero sensors reported.\n");179179+ return -EINVAL;180180+ }181181+ data->sensor_num = sensor_num;182182+183183+ /*184184+ * Prepare the ring handler before enumerating the185185+ * sensors.186186+ */187187+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) {188188+ ret = cros_ec_sensorhub_ring_allocate(data);189189+ if (ret)190190+ return ret;191191+ }192192+193193+ /* Enumerate the sensors. */156194 ret = cros_ec_sensorhub_register(dev, data);157195 if (ret)158196 return ret;197197+198198+ /*199199+ * When the EC does not have a FIFO, the sensors will query200200+ * their data themselves via sysfs or a software trigger.201201+ */202202+ if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) {203203+ ret = cros_ec_sensorhub_ring_add(data);204204+ if (ret)205205+ return ret;206206+ /*207207+ * The msg and its data are not under the control of the208208+ * ring handler.209209+ */210210+ return devm_add_action_or_reset(dev,211211+ cros_ec_sensorhub_ring_remove,212212+ data);213213+ }214214+159215 } else {160216 /*161217 * If the device has sensors but does not claim to···212184 }213185 }214186215215- /*216216- * If the EC does not have a FIFO, the sensors will query their data217217- * themselves via sysfs or a software trigger.218218- */219219- if (cros_ec_check_features(ec, EC_FEATURE_MOTION_SENSE_FIFO)) {220220- ret = cros_ec_sensorhub_ring_add(data);221221- if (ret)222222- return ret;223223- /*224224- * The msg and its data is not under the control of the ring225225- * handler.226226- */227227- return devm_add_action_or_reset(dev,228228- 
cros_ec_sensorhub_ring_remove,229229- data);230230- }231187232188 return 0;233189}
+47-28
drivers/platform/chrome/cros_ec_sensorhub_ring.c
···957957}958958959959/**960960+ * cros_ec_sensorhub_ring_allocate() - Prepare the FIFO functionality if the EC961961+ * supports it.962962+ *963963+ * @sensorhub : Sensor Hub object.964964+ *965965+ * Return: 0 on success.966966+ */967967+int cros_ec_sensorhub_ring_allocate(struct cros_ec_sensorhub *sensorhub)968968+{969969+ int fifo_info_length =970970+ sizeof(struct ec_response_motion_sense_fifo_info) +971971+ sizeof(u16) * sensorhub->sensor_num;972972+973973+ /* Allocate the array for lost events. */974974+ sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length,975975+ GFP_KERNEL);976976+ if (!sensorhub->fifo_info)977977+ return -ENOMEM;978978+979979+ /*980980+ * Allocate the callback area based on the number of sensors.981981+ * Add one for the sensor ring.982982+ */983983+ sensorhub->push_data = devm_kcalloc(sensorhub->dev,984984+ sensorhub->sensor_num,985985+ sizeof(*sensorhub->push_data),986986+ GFP_KERNEL);987987+ if (!sensorhub->push_data)988988+ return -ENOMEM;989989+990990+ sensorhub->tight_timestamps = cros_ec_check_features(991991+ sensorhub->ec,992992+ EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS);993993+994994+ if (sensorhub->tight_timestamps) {995995+ sensorhub->batch_state = devm_kcalloc(sensorhub->dev,996996+ sensorhub->sensor_num,997997+ sizeof(*sensorhub->batch_state),998998+ GFP_KERNEL);999999+ if (!sensorhub->batch_state)10001000+ return -ENOMEM;10011001+ }10021002+10031003+ return 0;10041004+}10051005+10061006+/**9601007 * cros_ec_sensorhub_ring_add() - Add the FIFO functionality if the EC9611008 * supports it.9621009 *···1018971 int fifo_info_length =1019972 sizeof(struct ec_response_motion_sense_fifo_info) +1020973 sizeof(u16) * sensorhub->sensor_num;10211021-10221022- /* Allocate the array for lost events. 
*/10231023- sensorhub->fifo_info = devm_kzalloc(sensorhub->dev, fifo_info_length,10241024- GFP_KERNEL);10251025- if (!sensorhub->fifo_info)10261026- return -ENOMEM;10279741028975 /* Retrieve FIFO information */1029976 sensorhub->msg->version = 2;···1039998 if (!sensorhub->ring)1040999 return -ENOMEM;1041100010421042- /*10431043- * Allocate the callback area based on the number of sensors.10441044- */10451045- sensorhub->push_data = devm_kcalloc(10461046- sensorhub->dev, sensorhub->sensor_num,10471047- sizeof(*sensorhub->push_data),10481048- GFP_KERNEL);10491049- if (!sensorhub->push_data)10501050- return -ENOMEM;10511051-10521001 sensorhub->fifo_timestamp[CROS_EC_SENSOR_LAST_TS] =10531002 cros_ec_get_time_ns();10541054-10551055- sensorhub->tight_timestamps = cros_ec_check_features(10561056- ec, EC_FEATURE_MOTION_SENSE_TIGHT_TIMESTAMPS);10571057-10581058- if (sensorhub->tight_timestamps) {10591059- sensorhub->batch_state = devm_kcalloc(sensorhub->dev,10601060- sensorhub->sensor_num,10611061- sizeof(*sensorhub->batch_state),10621062- GFP_KERNEL);10631063- if (!sensorhub->batch_state)10641064- return -ENOMEM;10651065- }1066100310671004 /* Register the notifier that will act as a top half interrupt. */10681005 sensorhub->notifier.notifier_call = cros_ec_sensorhub_event;
+24
drivers/platform/x86/asus-nb-wmi.c
···515515 .detect_quirks = asus_nb_wmi_quirks,516516};517517518518+static const struct dmi_system_id asus_nb_wmi_blacklist[] __initconst = {519519+ {520520+ /*521521+ * asus-nb-wmi adds no functionality. The T100TA has a detachable522522+ * USB kbd, so no hotkeys and it has no WMI rfkill; and loading523523+ * asus-nb-wmi causes the camera LED to turn on and _stay_ on.524524+ */525525+ .matches = {526526+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),527527+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),528528+ },529529+ },530530+ {531531+ /* The Asus T200TA has the same issue as the T100TA */532532+ .matches = {533533+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),534534+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T200TA"),535535+ },536536+ },537537+ {} /* Terminating entry */538538+};518539519540static int __init asus_nb_wmi_init(void)520541{542542+ if (dmi_check_system(asus_nb_wmi_blacklist))543543+ return -ENODEV;544544+521545 return asus_wmi_register_driver(&asus_nb_wmi_driver);522546}523547
+1-1
drivers/platform/x86/intel-uncore-frequency.c
···5353/* Storage for uncore data for all instances */5454static struct uncore_data *uncore_instances;5555/* Root of the all uncore sysfs kobjs */5656-struct kobject *uncore_root_kobj;5656+static struct kobject *uncore_root_kobj;5757/* Stores the CPU mask of the target CPUs to use during uncore read/write */5858static cpumask_t uncore_cpu_mask;5959/* CPU online callback register instance */
+5-19
drivers/platform/x86/intel_pmc_core.c
···255255};256256257257static const struct pmc_bit_map icl_pfear_map[] = {258258- /* Ice Lake generation onwards only */258258+ /* Ice Lake and Jasper Lake generation onwards only */259259 {"RES_65", BIT(0)},260260 {"RES_66", BIT(1)},261261 {"RES_67", BIT(2)},···274274};275275276276static const struct pmc_bit_map tgl_pfear_map[] = {277277- /* Tiger Lake, Elkhart Lake and Jasper Lake generation onwards only */277277+ /* Tiger Lake and Elkhart Lake generation onwards only */278278 {"PSF9", BIT(0)},279279 {"RES_66", BIT(1)},280280 {"RES_67", BIT(2)},···692692 kfree(lpm_regs);693693}694694695695-#if IS_ENABLED(CONFIG_DEBUG_FS)696695static bool slps0_dbg_latch;697696698697static inline u8 pmc_core_reg_read_byte(struct pmc_dev *pmcdev, int offset)···11321133 &pmc_core_substate_l_sts_regs_fops);11331134 }11341135}11351135-#else11361136-static inline void pmc_core_dbgfs_register(struct pmc_dev *pmcdev)11371137-{11381138-}11391139-11401140-static inline void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)11411141-{11421142-}11431143-#endif /* CONFIG_DEBUG_FS */1144113611451137static const struct x86_cpu_id intel_pmc_core_ids[] = {11461138 X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &spt_reg_map),···11461156 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &tgl_reg_map),11471157 X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &tgl_reg_map),11481158 X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, &tgl_reg_map),11491149- X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &tgl_reg_map),11591159+ X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &icl_reg_map),11501160 {}11511161};11521162···12501260 return 0;12511261}1252126212531253-#ifdef CONFIG_PM_SLEEP12541254-12551263static bool warn_on_s0ix_failures;12561264module_param(warn_on_s0ix_failures, bool, 0644);12571265MODULE_PARM_DESC(warn_on_s0ix_failures, "Check and warn for S0ix failures");1258126612591259-static int pmc_core_suspend(struct device *dev)12671267+static __maybe_unused int pmc_core_suspend(struct device *dev)12601268{12611269 struct pmc_dev *pmcdev = 
dev_get_drvdata(dev);12621270···13061318 return false;13071319}1308132013091309-static int pmc_core_resume(struct device *dev)13211321+static __maybe_unused int pmc_core_resume(struct device *dev)13101322{13111323 struct pmc_dev *pmcdev = dev_get_drvdata(dev);13121324 const struct pmc_bit_map **maps = pmcdev->map->lpm_sts;···1335134713361348 return 0;13371349}13381338-13391339-#endif1340135013411351static const struct dev_pm_ops pmc_core_pm_ops = {13421352 SET_LATE_SYSTEM_SLEEP_PM_OPS(pmc_core_suspend, pmc_core_resume)
···95489548 if (!battery_info.batteries[battery].start_support)95499549 return -ENODEV;95509550 /* valid values are [0, 99] */95519551- if (value < 0 || value > 99)95519551+ if (value > 99)95529552 return -EINVAL;95539553 if (value > battery_info.batteries[battery].charge_stop)95549554 return -EINVAL;
+2-2
drivers/platform/x86/xiaomi-wmi.c
···2323 unsigned int key_code;2424};25252626-int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context)2626+static int xiaomi_wmi_probe(struct wmi_device *wdev, const void *context)2727{2828 struct xiaomi_wmi *data;2929···4848 return input_register_device(data->input_dev);4949}50505151-void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy)5151+static void xiaomi_wmi_notify(struct wmi_device *wdev, union acpi_object *dummy)5252{5353 struct xiaomi_wmi *data;5454
+5-5
drivers/s390/net/qeth_core_main.c
···70487048 unsigned int i;7049704970507050 /* Quiesce the NAPI instances: */70517051- qeth_for_each_output_queue(card, queue, i) {70517051+ qeth_for_each_output_queue(card, queue, i)70527052 napi_disable(&queue->napi);70537053- del_timer_sync(&queue->timer);70547054- }7055705370567054 /* Stop .ndo_start_xmit, might still access queue->napi. */70577055 netif_tx_disable(dev);7058705670597059- /* Queues may get re-allocated, so remove the NAPIs here. */70607060- qeth_for_each_output_queue(card, queue, i)70577057+ qeth_for_each_output_queue(card, queue, i) {70587058+ del_timer_sync(&queue->timer);70597059+ /* Queues may get re-allocated, so remove the NAPIs. */70617060 netif_napi_del(&queue->napi);70617061+ }70627062 } else {70637063 netif_tx_disable(dev);70647064 }
+17-18
drivers/scsi/qla2xxx/qla_os.c
···37323732 }37333733 qla2x00_wait_for_hba_ready(base_vha);3734373437353735+ /*37363736+ * if UNLOADING flag is already set, then continue unload,37373737+ * where it was set first.37383738+ */37393739+ if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))37403740+ return;37413741+37353742 if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) ||37363743 IS_QLA28XX(ha)) {37373744 if (ha->flags.fw_started)···37563749 }3757375037583751 qla2x00_wait_for_sess_deletion(base_vha);37593759-37603760- /*37613761- * if UNLOAD flag is already set, then continue unload,37623762- * where it was set first.37633763- */37643764- if (test_bit(UNLOADING, &base_vha->dpc_flags))37653765- return;37663766-37673767- set_bit(UNLOADING, &base_vha->dpc_flags);3768375237693753 qla_nvme_delete(base_vha);37703754···48614863{48624864 struct qla_work_evt *e;48634865 uint8_t bail;48664866+48674867+ if (test_bit(UNLOADING, &vha->dpc_flags))48684868+ return NULL;4864486948654870 QLA_VHA_MARK_BUSY(vha, bail);48664871 if (bail)···66296628 struct pci_dev *pdev = ha->pdev;66306629 scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);6631663066326632- /*66336633- * if UNLOAD flag is already set, then continue unload,66346634- * where it was set first.66356635- */66366636- if (test_bit(UNLOADING, &base_vha->dpc_flags))66376637- return;66386638-66396631 ql_log(ql_log_warn, base_vha, 0x015b,66406632 "Disabling adapter.\n");66416633···66396645 return;66406646 }6641664766426642- qla2x00_wait_for_sess_deletion(base_vha);66486648+ /*66496649+ * if UNLOADING flag is already set, then continue unload,66506650+ * where it was set first.66516651+ */66526652+ if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))66536653+ return;6643665466446644- set_bit(UNLOADING, &base_vha->dpc_flags);66556655+ qla2x00_wait_for_sess_deletion(base_vha);6645665666466657 qla2x00_delete_all_vps(ha, base_vha);66476658
···9292 int ret;93939494 for (i = 0; i < insn->n; i++) {9595+ /* FIXME: lo bit 0 chooses voltage output or current output */9596 lo = ((data[i] & 0x0f) << 4) | (chan << 1) | 0x01;9697 hi = (data[i] & 0xff0) >> 4;9798···105104 ret = comedi_timeout(dev, s, insn, dt2815_ao_status, 0x10);106105 if (ret)107106 return ret;107107+108108+ outb(hi, dev->iobase + DT2815_DATA);108109109110 devpriv->ao_readback[chan] = data[i];110111 }
+1-2
drivers/staging/gasket/gasket_sysfs.c
···228228 }229229230230 mutex_lock(&mapping->mutex);231231- for (i = 0; strcmp(attrs[i].attr.attr.name, GASKET_ARRAY_END_MARKER);232232- i++) {231231+ for (i = 0; attrs[i].attr.attr.name != NULL; i++) {233232 if (mapping->attribute_count == GASKET_SYSFS_MAX_NODES) {234233 dev_err(device,235234 "Maximum number of sysfs nodes reached for device\n");
-4
drivers/staging/gasket/gasket_sysfs.h
···3030 */3131#define GASKET_SYSFS_MAX_NODES 19632323333-/* End markers for sysfs struct arrays. */3434-#define GASKET_ARRAY_END_TOKEN GASKET_RESERVED_ARRAY_END3535-#define GASKET_ARRAY_END_MARKER __stringify(GASKET_ARRAY_END_TOKEN)3636-3733/*3834 * Terminator struct for a gasket_sysfs_attr array. Must be at the end of3935 * all gasket_sysfs_attribute arrays.
···207207 priv->wake_up_count =208208 priv->hw->conf.listen_interval;209209210210- --priv->wake_up_count;210210+ if (priv->wake_up_count)211211+ --priv->wake_up_count;211212212213 /* Turn on wake up to listen next beacon */213214 if (priv->wake_up_count == 1)
···88888989config HVC_RISCV_SBI9090 bool "RISC-V SBI console support"9191- depends on RISCV_SBI9191+ depends on RISCV_SBI_V019292 select HVC_DRIVER9393 help9494 This enables support for console output via RISC-V SBI calls, which
+14-9
drivers/tty/hvc/hvc_console.c
···302302 vtermnos[index] = vtermno;303303 cons_ops[index] = ops;304304305305- /* reserve all indices up to and including this index */306306- if (last_hvc < index)307307- last_hvc = index;308308-309305 /* check if we need to re-register the kernel console */310306 hvc_check_console(index);311307···956960 cons_ops[i] == hp->ops)957961 break;958962959959- /* no matching slot, just use a counter */960960- if (i >= MAX_NR_HVC_CONSOLES)961961- i = ++last_hvc;963963+ if (i >= MAX_NR_HVC_CONSOLES) {964964+965965+ /* find 'empty' slot for console */966966+ for (i = 0; i < MAX_NR_HVC_CONSOLES && vtermnos[i] != -1; i++) {967967+ }968968+969969+ /* no matching slot, just use a counter */970970+ if (i == MAX_NR_HVC_CONSOLES)971971+ i = ++last_hvc + MAX_NR_HVC_CONSOLES;972972+ }962973963974 hp->index = i;964964- cons_ops[i] = ops;965965- vtermnos[i] = vtermno;975975+ if (i < MAX_NR_HVC_CONSOLES) {976976+ cons_ops[i] = ops;977977+ vtermnos[i] = vtermno;978978+ }966979967980 list_add_tail(&(hp->next), &hvc_structs);968981 mutex_unlock(&hvc_structs_mutex);
+14-11
drivers/tty/rocket.c
···632632 tty_port_init(&info->port);633633 info->port.ops = &rocket_port_ops;634634 info->flags &= ~ROCKET_MODE_MASK;635635- switch (pc104[board][line]) {636636- case 422:637637- info->flags |= ROCKET_MODE_RS422;638638- break;639639- case 485:640640- info->flags |= ROCKET_MODE_RS485;641641- break;642642- case 232:643643- default:635635+ if (board < ARRAY_SIZE(pc104) && line < ARRAY_SIZE(pc104_1))636636+ switch (pc104[board][line]) {637637+ case 422:638638+ info->flags |= ROCKET_MODE_RS422;639639+ break;640640+ case 485:641641+ info->flags |= ROCKET_MODE_RS485;642642+ break;643643+ case 232:644644+ default:645645+ info->flags |= ROCKET_MODE_RS232;646646+ break;647647+ }648648+ else644649 info->flags |= ROCKET_MODE_RS232;645645- break;646646- }647650648651 info->intmask = RXF_TRIG | TXFIFO_MT | SRC_INT | DELTA_CD | DELTA_CTS | DELTA_DSR;649652 if (sInitChan(ctlp, &info->channel, aiop, chan) == 0) {
+1-1
drivers/tty/serial/Kconfig
···86868787config SERIAL_EARLYCON_RISCV_SBI8888 bool "Early console using RISC-V SBI"8989- depends on RISCV_SBI8989+ depends on RISCV_SBI_V019090 select SERIAL_CORE9191 select SERIAL_CORE_CONSOLE9292 select SERIAL_EARLYCON
+3-1
drivers/tty/serial/bcm63xx_uart.c
···843843 if (IS_ERR(clk) && pdev->dev.of_node)844844 clk = of_clk_get(pdev->dev.of_node, 0);845845846846- if (IS_ERR(clk))846846+ if (IS_ERR(clk)) {847847+ clk_put(clk);847848 return -ENODEV;849849+ }848850849851 port->iotype = UPIO_MEM;850852 port->irq = res_irq->start;
···870870 tty_insert_flip_char(tport, c, TTY_NORMAL);871871 } else {872872 for (i = 0; i < count; i++) {873873- char c = serial_port_in(port, SCxRDR);873873+ char c;874874875875- status = serial_port_in(port, SCxSR);875875+ if (port->type == PORT_SCIF ||876876+ port->type == PORT_HSCIF) {877877+ status = serial_port_in(port, SCxSR);878878+ c = serial_port_in(port, SCxRDR);879879+ } else {880880+ c = serial_port_in(port, SCxRDR);881881+ status = serial_port_in(port, SCxSR);882882+ }876883 if (uart_handle_sysrq_char(port, c)) {877884 count--; i--;878885 continue;
+3
drivers/tty/serial/sunhv.c
···567567 sunserial_console_match(&sunhv_console, op->dev.of_node,568568 &sunhv_reg, port->line, false);569569570570+ /* We need to initialize the lock even for a non-registered console */571571+ spin_lock_init(&port->lock);572572+570573 err = uart_add_one_port(&sunhv_reg, port);571574 if (err)572575 goto out_unregister_driver;
+49-162
drivers/tty/serial/xilinx_uartps.c
···26262727#define CDNS_UART_TTY_NAME "ttyPS"2828#define CDNS_UART_NAME "xuartps"2929+#define CDNS_UART_MAJOR 0 /* use dynamic node allocation */3030+#define CDNS_UART_MINOR 0 /* works best with devtmpfs */3131+#define CDNS_UART_NR_PORTS 162932#define CDNS_UART_FIFO_SIZE 64 /* FIFO size */3033#define CDNS_UART_REGISTER_SPACE 0x10003134#define TX_TIMEOUT 50000032353336/* Rx Trigger level */3437static int rx_trigger_level = 56;3535-static int uartps_major;3638module_param(rx_trigger_level, uint, 0444);3739MODULE_PARM_DESC(rx_trigger_level, "Rx trigger level, 1-63 bytes");3840···190188 * @pclk: APB clock191189 * @cdns_uart_driver: Pointer to UART driver192190 * @baud: Current baud rate193193- * @id: Port ID194191 * @clk_rate_change_nb: Notifier block for clock changes195192 * @quirks: Flags for RXBS support.196193 */···199198 struct clk *pclk;200199 struct uart_driver *cdns_uart_driver;201200 unsigned int baud;202202- int id;203201 struct notifier_block clk_rate_change_nb;204202 u32 quirks;205203 bool cts_override;···11331133#endif11341134};1135113511361136+static struct uart_driver cdns_uart_uart_driver;11371137+11361138#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE11371139/**11381140 * cdns_uart_console_putchar - write the character to the FIFO buffer···1274127212751273 return uart_set_options(port, co, baud, parity, bits, flow);12761274}12751275+12761276+static struct console cdns_uart_console = {12771277+ .name = CDNS_UART_TTY_NAME,12781278+ .write = cdns_uart_console_write,12791279+ .device = uart_console_device,12801280+ .setup = cdns_uart_console_setup,12811281+ .flags = CON_PRINTBUFFER,12821282+ .index = -1, /* Specified on the cmdline (e.g. 
console=ttyPS ) */12831283+ .data = &cdns_uart_uart_driver,12841284+};12771285#endif /* CONFIG_SERIAL_XILINX_PS_UART_CONSOLE */1278128612791287#ifdef CONFIG_PM_SLEEP···14151403};14161404MODULE_DEVICE_TABLE(of, cdns_uart_of_match);1417140514181418-/*14191419- * Maximum number of instances without alias IDs but if there is alias14201420- * which target "< MAX_UART_INSTANCES" range this ID can't be used.14211421- */14221422-#define MAX_UART_INSTANCES 3214231423-14241424-/* Stores static aliases list */14251425-static DECLARE_BITMAP(alias_bitmap, MAX_UART_INSTANCES);14261426-static int alias_bitmap_initialized;14271427-14281428-/* Stores actual bitmap of allocated IDs with alias IDs together */14291429-static DECLARE_BITMAP(bitmap, MAX_UART_INSTANCES);14301430-/* Protect bitmap operations to have unique IDs */14311431-static DEFINE_MUTEX(bitmap_lock);14321432-14331433-static int cdns_get_id(struct platform_device *pdev)14341434-{14351435- int id, ret;14361436-14371437- mutex_lock(&bitmap_lock);14381438-14391439- /* Alias list is stable that's why get alias bitmap only once */14401440- if (!alias_bitmap_initialized) {14411441- ret = of_alias_get_alias_list(cdns_uart_of_match, "serial",14421442- alias_bitmap, MAX_UART_INSTANCES);14431443- if (ret && ret != -EOVERFLOW) {14441444- mutex_unlock(&bitmap_lock);14451445- return ret;14461446- }14471447-14481448- alias_bitmap_initialized++;14491449- }14501450-14511451- /* Make sure that alias ID is not taken by instance without alias */14521452- bitmap_or(bitmap, bitmap, alias_bitmap, MAX_UART_INSTANCES);14531453-14541454- dev_dbg(&pdev->dev, "Alias bitmap: %*pb\n",14551455- MAX_UART_INSTANCES, bitmap);14561456-14571457- /* Look for a serialN alias */14581458- id = of_alias_get_id(pdev->dev.of_node, "serial");14591459- if (id < 0) {14601460- dev_warn(&pdev->dev,14611461- "No serial alias passed. 
Using the first free id\n");14621462-14631463- /*14641464- * Start with id 0 and check if there is no serial0 alias14651465- * which points to device which is compatible with this driver.14661466- * If alias exists then try next free position.14671467- */14681468- id = 0;14691469-14701470- for (;;) {14711471- dev_info(&pdev->dev, "Checking id %d\n", id);14721472- id = find_next_zero_bit(bitmap, MAX_UART_INSTANCES, id);14731473-14741474- /* No free empty instance */14751475- if (id == MAX_UART_INSTANCES) {14761476- dev_err(&pdev->dev, "No free ID\n");14771477- mutex_unlock(&bitmap_lock);14781478- return -EINVAL;14791479- }14801480-14811481- dev_dbg(&pdev->dev, "The empty id is %d\n", id);14821482- /* Check if ID is empty */14831483- if (!test_and_set_bit(id, bitmap)) {14841484- /* Break the loop if bit is taken */14851485- dev_dbg(&pdev->dev,14861486- "Selected ID %d allocation passed\n",14871487- id);14881488- break;14891489- }14901490- dev_dbg(&pdev->dev,14911491- "Selected ID %d allocation failed\n", id);14921492- /* if taking bit fails then try next one */14931493- id++;14941494- }14951495- }14961496-14971497- mutex_unlock(&bitmap_lock);14981498-14991499- return id;15001500-}14061406+/* Temporary variable for storing number of instances */14071407+static int instances;1501140815021409/**15031410 * cdns_uart_probe - Platform driver probe···14261495 */14271496static int cdns_uart_probe(struct platform_device *pdev)14281497{14291429- int rc, irq;14981498+ int rc, id, irq;14301499 struct uart_port *port;14311500 struct resource *res;14321501 struct cdns_uart *cdns_uart_data;14331502 const struct of_device_id *match;14341434- struct uart_driver *cdns_uart_uart_driver;14351435- char *driver_name;14361436-#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE14371437- struct console *cdns_uart_console;14381438-#endif1439150314401504 cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data),14411505 GFP_KERNEL);···14401514 if (!port)14411515 return 
-ENOMEM;1442151614431443- cdns_uart_uart_driver = devm_kzalloc(&pdev->dev,14441444- sizeof(*cdns_uart_uart_driver),14451445- GFP_KERNEL);14461446- if (!cdns_uart_uart_driver)14471447- return -ENOMEM;15171517+ /* Look for a serialN alias */15181518+ id = of_alias_get_id(pdev->dev.of_node, "serial");15191519+ if (id < 0)15201520+ id = 0;1448152114491449- cdns_uart_data->id = cdns_get_id(pdev);14501450- if (cdns_uart_data->id < 0)14511451- return cdns_uart_data->id;14521452-14531453- /* There is a need to use unique driver name */14541454- driver_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%d",14551455- CDNS_UART_NAME, cdns_uart_data->id);14561456- if (!driver_name) {14571457- rc = -ENOMEM;14581458- goto err_out_id;15221522+ if (id >= CDNS_UART_NR_PORTS) {15231523+ dev_err(&pdev->dev, "Cannot get uart_port structure\n");15241524+ return -ENODEV;14591525 }1460152614611461- cdns_uart_uart_driver->owner = THIS_MODULE;14621462- cdns_uart_uart_driver->driver_name = driver_name;14631463- cdns_uart_uart_driver->dev_name = CDNS_UART_TTY_NAME;14641464- cdns_uart_uart_driver->major = uartps_major;14651465- cdns_uart_uart_driver->minor = cdns_uart_data->id;14661466- cdns_uart_uart_driver->nr = 1;14671467-15271527+ if (!cdns_uart_uart_driver.state) {15281528+ cdns_uart_uart_driver.owner = THIS_MODULE;15291529+ cdns_uart_uart_driver.driver_name = CDNS_UART_NAME;15301530+ cdns_uart_uart_driver.dev_name = CDNS_UART_TTY_NAME;15311531+ cdns_uart_uart_driver.major = CDNS_UART_MAJOR;15321532+ cdns_uart_uart_driver.minor = CDNS_UART_MINOR;15331533+ cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;14681534#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE14691469- cdns_uart_console = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_console),14701470- GFP_KERNEL);14711471- if (!cdns_uart_console) {14721472- rc = -ENOMEM;14731473- goto err_out_id;14741474- }14751475-14761476- strncpy(cdns_uart_console->name, CDNS_UART_TTY_NAME,14771477- sizeof(cdns_uart_console->name));14781478- 
cdns_uart_console->index = cdns_uart_data->id;14791479- cdns_uart_console->write = cdns_uart_console_write;14801480- cdns_uart_console->device = uart_console_device;14811481- cdns_uart_console->setup = cdns_uart_console_setup;14821482- cdns_uart_console->flags = CON_PRINTBUFFER;14831483- cdns_uart_console->data = cdns_uart_uart_driver;14841484- cdns_uart_uart_driver->cons = cdns_uart_console;15351535+ cdns_uart_uart_driver.cons = &cdns_uart_console;14851536#endif1486153714871487- rc = uart_register_driver(cdns_uart_uart_driver);14881488- if (rc < 0) {14891489- dev_err(&pdev->dev, "Failed to register driver\n");14901490- goto err_out_id;15381538+ rc = uart_register_driver(&cdns_uart_uart_driver);15391539+ if (rc < 0) {15401540+ dev_err(&pdev->dev, "Failed to register driver\n");15411541+ return rc;15421542+ }14911543 }1492154414931493- cdns_uart_data->cdns_uart_driver = cdns_uart_uart_driver;14941494-14951495- /*14961496- * Setting up proper name_base needs to be done after uart14971497- * registration because tty_driver structure is not filled.14981498- * name_base is 0 by default.14991499- */15001500- cdns_uart_uart_driver->tty_driver->name_base = cdns_uart_data->id;15451545+ cdns_uart_data->cdns_uart_driver = &cdns_uart_uart_driver;1501154615021547 match = of_match_node(cdns_uart_of_match, pdev->dev.of_node);15031548 if (match && match->data) {···15461649 port->ops = &cdns_uart_ops;15471650 port->fifosize = CDNS_UART_FIFO_SIZE;15481651 port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_XILINX_PS_UART_CONSOLE);16521652+ port->line = id;1549165315501654 /*15511655 * Register the port.···15781680 console_port = port;15791681#endif1580168215811581- rc = uart_add_one_port(cdns_uart_uart_driver, port);16831683+ rc = uart_add_one_port(&cdns_uart_uart_driver, port);15821684 if (rc) {15831685 dev_err(&pdev->dev,15841686 "uart_add_one_port() failed; err=%i\n", rc);···15881690#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE15891691 /* This is not port which is used for console that's 
why clean it up */15901692 if (console_port == port &&15911591- !(cdns_uart_uart_driver->cons->flags & CON_ENABLED))16931693+ !(cdns_uart_uart_driver.cons->flags & CON_ENABLED))15921694 console_port = NULL;15931695#endif1594169615951595- uartps_major = cdns_uart_uart_driver->tty_driver->major;15961697 cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,15971698 "cts-override");16991699+17001700+ instances++;17011701+15981702 return 0;1599170316001704err_out_pm_disable:···16121712err_out_clk_dis_pclk:16131713 clk_disable_unprepare(cdns_uart_data->pclk);16141714err_out_unregister_driver:16151615- uart_unregister_driver(cdns_uart_data->cdns_uart_driver);16161616-err_out_id:16171617- mutex_lock(&bitmap_lock);16181618- if (cdns_uart_data->id < MAX_UART_INSTANCES)16191619- clear_bit(cdns_uart_data->id, bitmap);16201620- mutex_unlock(&bitmap_lock);17151715+ if (!instances)17161716+ uart_unregister_driver(cdns_uart_data->cdns_uart_driver);16211717 return rc;16221718}16231719···16361740#endif16371741 rc = uart_remove_one_port(cdns_uart_data->cdns_uart_driver, port);16381742 port->mapbase = 0;16391639- mutex_lock(&bitmap_lock);16401640- if (cdns_uart_data->id < MAX_UART_INSTANCES)16411641- clear_bit(cdns_uart_data->id, bitmap);16421642- mutex_unlock(&bitmap_lock);16431743 clk_disable_unprepare(cdns_uart_data->uartclk);16441744 clk_disable_unprepare(cdns_uart_data->pclk);16451745 pm_runtime_disable(&pdev->dev);···16481756 console_port = NULL;16491757#endif1650175816511651- /* If this is last instance major number should be initialized */16521652- mutex_lock(&bitmap_lock);16531653- if (bitmap_empty(bitmap, MAX_UART_INSTANCES))16541654- uartps_major = 0;16551655- mutex_unlock(&bitmap_lock);16561656-16571657- uart_unregister_driver(cdns_uart_data->cdns_uart_driver);17591759+ if (!--instances)17601760+ uart_unregister_driver(cdns_uart_data->cdns_uart_driver);16581761 return rc;16591762}16601763
+2
drivers/tty/sysrq.c
···7474 return 1;7575 return sysrq_enabled;7676}7777+EXPORT_SYMBOL_GPL(sysrq_mask);77787879/*7980 * A value of 1 means 'all', other nonzero values are an op mask:···1059105810601059 return 0;10611060}10611061+EXPORT_SYMBOL_GPL(sysrq_toggle_support);1062106210631063static int __sysrq_swap_key_ops(int key, struct sysrq_key_op *insert_op_p,10641064 struct sysrq_key_op *remove_op_p)
+4-3
drivers/tty/vt/vt.c
···8181#include <linux/errno.h>8282#include <linux/kd.h>8383#include <linux/slab.h>8484+#include <linux/vmalloc.h>8485#include <linux/major.h>8586#include <linux/mm.h>8687#include <linux/console.h>···351350 /* allocate everything in one go */352351 memsize = cols * rows * sizeof(char32_t);353352 memsize += rows * sizeof(char32_t *);354354- p = kmalloc(memsize, GFP_KERNEL);353353+ p = vmalloc(memsize);355354 if (!p)356355 return NULL;357356···367366368367static void vc_uniscr_set(struct vc_data *vc, struct uni_screen *new_uniscr)369368{370370- kfree(vc->vc_uni_screen);369369+ vfree(vc->vc_uni_screen);371370 vc->vc_uni_screen = new_uniscr;372371}373372···12071206 if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)12081207 return 0;1209120812101210- if (new_screen_size > (4 << 20))12091209+ if (new_screen_size > KMALLOC_MAX_SIZE)12111210 return -EINVAL;12121211 newscreen = kzalloc(new_screen_size, GFP_USER);12131212 if (!newscreen)
+31-5
drivers/usb/class/cdc-acm.c
···412412413413exit:414414 retval = usb_submit_urb(urb, GFP_ATOMIC);415415- if (retval && retval != -EPERM)415415+ if (retval && retval != -EPERM && retval != -ENODEV)416416 dev_err(&acm->control->dev,417417 "%s - usb_submit_urb failed: %d\n", __func__, retval);418418+ else419419+ dev_vdbg(&acm->control->dev,420420+ "control resubmission terminated %d\n", retval);418421}419422420423static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)···433430 dev_err(&acm->data->dev,434431 "urb %d failed submission with %d\n",435432 index, res);433433+ } else {434434+ dev_vdbg(&acm->data->dev, "intended failure %d\n", res);436435 }437436 set_bit(index, &acm->read_urbs_free);438437 return res;···476471 int status = urb->status;477472 bool stopped = false;478473 bool stalled = false;474474+ bool cooldown = false;479475480476 dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",481477 rb->index, urb->actual_length, status);···503497 __func__, status);504498 stopped = true;505499 break;500500+ case -EOVERFLOW:501501+ case -EPROTO:502502+ dev_dbg(&acm->data->dev,503503+ "%s - cooling babbling device\n", __func__);504504+ usb_mark_last_busy(acm->dev);505505+ set_bit(rb->index, &acm->urbs_in_error_delay);506506+ cooldown = true;507507+ break;506508 default:507509 dev_dbg(&acm->data->dev,508510 "%s - nonzero urb status received: %d\n",···532518 */533519 smp_mb__after_atomic();534520535535- if (stopped || stalled) {521521+ if (stopped || stalled || cooldown) {536522 if (stalled)537523 schedule_work(&acm->work);524524+ else if (cooldown)525525+ schedule_delayed_work(&acm->dwork, HZ / 2);538526 return;539527 }540528···573557 struct acm *acm = container_of(work, struct acm, work);574558575559 if (test_bit(EVENT_RX_STALL, &acm->flags)) {576576- if (!(usb_autopm_get_interface(acm->data))) {560560+ smp_mb(); /* against acm_suspend() */561561+ if (!acm->susp_count) {577562 for (i = 0; i < acm->rx_buflimit; i++)578563 usb_kill_urb(acm->read_urbs[i]);579564 
usb_clear_halt(acm->dev, acm->in);580565 acm_submit_read_urbs(acm, GFP_KERNEL);581581- usb_autopm_put_interface(acm->data);566566+ clear_bit(EVENT_RX_STALL, &acm->flags);582567 }583583- clear_bit(EVENT_RX_STALL, &acm->flags);568568+ }569569+570570+ if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {571571+ for (i = 0; i < ACM_NR; i++) 572572+ if (test_and_clear_bit(i, &acm->urbs_in_error_delay))573573+ acm_submit_read_urb(acm, i, GFP_NOIO);584574 }585575586576 if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))···13551333 acm->readsize = readsize;13561334 acm->rx_buflimit = num_rx_buf;13571335 INIT_WORK(&acm->work, acm_softint);13361336+ INIT_DELAYED_WORK(&acm->dwork, acm_softint);13581337 init_waitqueue_head(&acm->wioctl);13591338 spin_lock_init(&acm->write_lock);13601339 spin_lock_init(&acm->read_lock);···1565154215661543 acm_kill_urbs(acm);15671544 cancel_work_sync(&acm->work);15451545+ cancel_delayed_work_sync(&acm->dwork);1568154615691547 tty_unregister_device(acm_tty_driver, acm->minor);15701548···1608158416091585 acm_kill_urbs(acm);16101586 cancel_work_sync(&acm->work);15871587+ cancel_delayed_work_sync(&acm->dwork);15881588+ acm->urbs_in_error_delay = 0;1611158916121590 return 0;16131591}
+4-1
drivers/usb/class/cdc-acm.h
···109109# define EVENT_TTY_WAKEUP 0110110# define EVENT_RX_STALL 1111111# define ACM_THROTTLED 2112112+# define ACM_ERROR_DELAY 3113113+ unsigned long urbs_in_error_delay; /* these need to be restarted after a delay */112114 struct usb_cdc_line_coding line; /* bits, stop, parity */113113- struct work_struct work; /* work queue entry for line discipline waking up */115115+ struct work_struct work; /* work queue entry for various purposes*/116116+ struct delayed_work dwork; /* for cool downs needed in error recovery */114117 unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */115118 unsigned int ctrlout; /* output control lines (DTR, RTS) */116119 struct async_icount iocount; /* counters for control line changes */
+15-3
drivers/usb/core/hub.c
···12231223#ifdef CONFIG_PM12241224 udev->reset_resume = 1;12251225#endif12261226+ /* Don't set the change_bits when the device12271227+ * was powered off.12281228+ */12291229+ if (test_bit(port1, hub->power_bits))12301230+ set_bit(port1, hub->change_bits);1226123112271232 } else {12281233 /* The power session is gone; tell hub_wq */···27282723{27292724 int old_scheme_first_port =27302725 port_dev->quirks & USB_PORT_QUIRK_OLD_SCHEME;27312731- int quick_enumeration = (udev->speed == USB_SPEED_HIGH);2732272627332727 if (udev->speed >= USB_SPEED_SUPER)27342728 return false;2735272927362736- return USE_NEW_SCHEME(retry, old_scheme_first_port || old_scheme_first27372737- || quick_enumeration);27302730+ return USE_NEW_SCHEME(retry, old_scheme_first_port || old_scheme_first);27382731}2739273227402733/* Is a USB 3.0 port in the Inactive or Compliance Mode state?···30913088 if (portchange & USB_PORT_STAT_C_ENABLE)30923089 usb_clear_port_feature(hub->hdev, port1,30933090 USB_PORT_FEAT_C_ENABLE);30913091+30923092+ /*30933093+ * Whatever made this reset-resume necessary may have30943094+ * turned on the port1 bit in hub->change_bits. But after30953095+ * a successful reset-resume we want the bit to be clear;30963096+ * if it was on it would indicate that something happened30973097+ * following the reset-resume.30983098+ */30993099+ clear_bit(port1, hub->change_bits);30943100 }3095310130963102 return status;
+8-1
drivers/usb/core/message.c
···589589 int i, retval;590590591591 spin_lock_irqsave(&io->lock, flags);592592- if (io->status) {592592+ if (io->status || io->count == 0) {593593 spin_unlock_irqrestore(&io->lock, flags);594594 return;595595 }596596 /* shut everything down */597597 io->status = -ECONNRESET;598598+ io->count++; /* Keep the request alive until we're done */598599 spin_unlock_irqrestore(&io->lock, flags);599600600601 for (i = io->entries - 1; i >= 0; --i) {···609608 dev_warn(&io->dev->dev, "%s, unlink --> %d\n",610609 __func__, retval);611610 }611611+612612+ spin_lock_irqsave(&io->lock, flags);613613+ io->count--;614614+ if (!io->count)615615+ complete(&io->complete);616616+ spin_unlock_irqrestore(&io->lock, flags);612617}613618EXPORT_SYMBOL_GPL(usb_sg_cancel);614619
drivers/usb/dwc3/core.h
···307307308308/* Global TX Fifo Size Register */309309#define DWC31_GTXFIFOSIZ_TXFRAMNUM BIT(15) /* DWC_usb31 only */310310-#define DWC31_GTXFIFOSIZ_TXFDEF(n) ((n) & 0x7fff) /* DWC_usb31 only */311311-#define DWC3_GTXFIFOSIZ_TXFDEF(n) ((n) & 0xffff)310310+#define DWC31_GTXFIFOSIZ_TXFDEP(n) ((n) & 0x7fff) /* DWC_usb31 only */311311+#define DWC3_GTXFIFOSIZ_TXFDEP(n) ((n) & 0xffff)312312#define DWC3_GTXFIFOSIZ_TXFSTADDR(n) ((n) & 0xffff0000)313313+314314+/* Global RX Fifo Size Register */315315+#define DWC31_GRXFIFOSIZ_RXFDEP(n) ((n) & 0x7fff) /* DWC_usb31 only */316316+#define DWC3_GRXFIFOSIZ_RXFDEP(n) ((n) & 0xffff)313317314318/* Global Event Size Registers */315319#define DWC3_GEVNTSIZ_INTMASK BIT(31)
+47-29
drivers/usb/dwc3/gadget.c
···17281728 u32 reg;1729172917301730 u8 link_state;17311731- u8 speed;1732173117331732 /*17341733 * According to the Databook Remote wakeup request should···17371738 */17381739 reg = dwc3_readl(dwc->regs, DWC3_DSTS);1739174017401740- speed = reg & DWC3_DSTS_CONNECTSPD;17411741- if ((speed == DWC3_DSTS_SUPERSPEED) ||17421742- (speed == DWC3_DSTS_SUPERSPEED_PLUS))17431743- return 0;17441744-17451741 link_state = DWC3_DSTS_USBLNKST(reg);1746174217471743 switch (link_state) {17441744+ case DWC3_LINK_STATE_RESET:17481745 case DWC3_LINK_STATE_RX_DET: /* in HS, means Early Suspend */17491746 case DWC3_LINK_STATE_U3: /* in HS, means SUSPEND */17471747+ case DWC3_LINK_STATE_RESUME:17501748 break;17511749 default:17521750 return -EINVAL;···22232227{22242228 struct dwc3 *dwc = dep->dwc;22252229 int mdwidth;22262226- int kbytes;22272230 int size;2228223122292232 mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);···2231223622322237 size = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(dep->number >> 1));22332238 if (dwc3_is_usb31(dwc))22342234- size = DWC31_GTXFIFOSIZ_TXFDEF(size);22392239+ size = DWC31_GTXFIFOSIZ_TXFDEP(size);22352240 else22362236- size = DWC3_GTXFIFOSIZ_TXFDEF(size);22412241+ size = DWC3_GTXFIFOSIZ_TXFDEP(size);2237224222382243 /* FIFO Depth is in MDWDITH bytes. Multiply */22392244 size *= mdwidth;2240224522412241- kbytes = size / 1024;22422242- if (kbytes == 0)22432243- kbytes = 1;22442244-22452246 /*22462246- * FIFO sizes account an extra MDWIDTH * (kbytes + 1) bytes for22472247- * internal overhead. We don't really know how these are used,22482248- * but documentation say it exists.22472247+ * To meet performance requirement, a minimum TxFIFO size of 3x22482248+ * MaxPacketSize is recommended for endpoints that support burst and a22492249+ * minimum TxFIFO size of 2x MaxPacketSize for endpoints that don't22502250+ * support burst. 
Use those numbers and we can calculate the max packet22512251+ * limit as below.22492252 */22502250- size -= mdwidth * (kbytes + 1);22512251- size /= kbytes;22532253+ if (dwc->maximum_speed >= USB_SPEED_SUPER)22542254+ size /= 3;22552255+ else22562256+ size /= 2;2252225722532258 usb_ep_set_maxpacket_limit(&dep->endpoint, size);22542259···22662271static int dwc3_gadget_init_out_endpoint(struct dwc3_ep *dep)22672272{22682273 struct dwc3 *dwc = dep->dwc;22742274+ int mdwidth;22752275+ int size;2269227622702270- usb_ep_set_maxpacket_limit(&dep->endpoint, 1024);22772277+ mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);22782278+22792279+ /* MDWIDTH is represented in bits, convert to bytes */22802280+ mdwidth /= 8;22812281+22822282+ /* All OUT endpoints share a single RxFIFO space */22832283+ size = dwc3_readl(dwc->regs, DWC3_GRXFIFOSIZ(0));22842284+ if (dwc3_is_usb31(dwc))22852285+ size = DWC31_GRXFIFOSIZ_RXFDEP(size);22862286+ else22872287+ size = DWC3_GRXFIFOSIZ_RXFDEP(size);22882288+22892289+ /* FIFO depth is in MDWDITH bytes */22902290+ size *= mdwidth;22912291+22922292+ /*22932293+ * To meet performance requirement, a minimum recommended RxFIFO size22942294+ * is defined as follow:22952295+ * RxFIFO size >= (3 x MaxPacketSize) +22962296+ * (3 x 8 bytes setup packets size) + (16 bytes clock crossing margin)22972297+ *22982298+ * Then calculate the max packet limit as below.22992299+ */23002300+ size -= (3 * 8) + 16;23012301+ if (size < 0)23022302+ size = 0;23032303+ else23042304+ size /= 3;23052305+23062306+ usb_ep_set_maxpacket_limit(&dep->endpoint, size);22712307 dep->endpoint.max_streams = 15;22722308 dep->endpoint.ops = &dwc3_gadget_ep_ops;22732309 list_add_tail(&dep->endpoint.ep_list,···2510248425112485static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)25122486{25132513- /*25142514- * For OUT direction, host may send less than the setup25152515- * length. 
Return true for all OUT requests.25162516- */25172517- if (!req->direction)25182518- return true;25192519-25202520- return req->request.actual == req->request.length;24872487+ return req->num_pending_sgs == 0;25212488}2522248925232490static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,···2534251525352516 req->request.actual = req->request.length - req->remaining;2536251725372537- if (!dwc3_gadget_ep_request_completed(req) ||25382538- req->num_pending_sgs) {25182518+ if (!dwc3_gadget_ep_request_completed(req)) {25392519 __dwc3_gadget_kick_transfer(dep);25402520 goto out;25412521 }
+4-4
drivers/usb/early/xhci-dbc.c
···728728 case COMP_USB_TRANSACTION_ERROR:729729 case COMP_STALL_ERROR:730730 default:731731- if (ep_id == XDBC_EPID_OUT)731731+ if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL)732732 xdbc.flags |= XDBC_FLAGS_OUT_STALL;733733- if (ep_id == XDBC_EPID_IN)733733+ if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL)734734 xdbc.flags |= XDBC_FLAGS_IN_STALL;735735736736 xdbc_trace("endpoint %d stalled\n", ep_id);737737 break;738738 }739739740740- if (ep_id == XDBC_EPID_IN) {740740+ if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL) {741741 xdbc.flags &= ~XDBC_FLAGS_IN_PROCESS;742742 xdbc_bulk_transfer(NULL, XDBC_MAX_PACKET, true);743743- } else if (ep_id == XDBC_EPID_OUT) {743743+ } else if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL) {744744 xdbc.flags &= ~XDBC_FLAGS_OUT_PROCESS;745745 } else {746746 xdbc_trace("invalid endpoint id %d\n", ep_id);
+16-2
drivers/usb/early/xhci-dbc.h
···120120 u32 cycle_state;121121};122122123123-#define XDBC_EPID_OUT 2124124-#define XDBC_EPID_IN 3123123+/*124124+ * These are the "Endpoint ID" (also known as "Context Index") values for the125125+ * OUT Transfer Ring and the IN Transfer Ring of a Debug Capability Context data126126+ * structure.127127+ * According to the "eXtensible Host Controller Interface for Universal Serial128128+ * Bus (xHCI)" specification, section "7.6.3.2 Endpoint Contexts and Transfer129129+ * Rings", these should be 0 and 1, and those are the values AMD machines give130130+ * you; but Intel machines seem to use the formula from section "4.5.1 Device131131+ * Context Index", which is supposed to be used for the Device Context only.132132+ * Luckily the values from Intel don't overlap with those from AMD, so we can133133+ * just test for both.134134+ */135135+#define XDBC_EPID_OUT 0136136+#define XDBC_EPID_IN 1137137+#define XDBC_EPID_OUT_INTEL 2138138+#define XDBC_EPID_IN_INTEL 3125139126140struct xdbc_state {127141 u16 vendor;
···15711571 }15721572 if ((temp & PORT_RC))15731573 reset_change = true;15741574+ if (temp & PORT_OC)15751575+ status = 1;15741576 }15751577 if (!status && !reset_change) {15761578 xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);···16371635 xhci_dbg(xhci, "port %d polling in bus suspend, waiting\n",16381636 port_index);16391637 goto retry;16381638+ }16391639+ /* bail out if port detected a over-current condition */16401640+ if (t1 & PORT_OC) {16411641+ bus_state->bus_suspended = 0;16421642+ spin_unlock_irqrestore(&xhci->lock, flags);16431643+ xhci_dbg(xhci, "Bus suspend bailout, port over-current detected\n");16441644+ return -EBUSY;16401645 }16411646 /* suspend ports in U0, or bail out for new connect changes */16421647 if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+40-6
drivers/usb/host/xhci-ring.c
···547547 stream_id);548548 return;549549 }550550+ /*551551+ * A cancelled TD can complete with a stall if HW cached the trb.552552+ * In this case driver can't find cur_td, but if the ring is empty we553553+ * can move the dequeue pointer to the current enqueue position.554554+ */555555+ if (!cur_td) {556556+ if (list_empty(&ep_ring->td_list)) {557557+ state->new_deq_seg = ep_ring->enq_seg;558558+ state->new_deq_ptr = ep_ring->enqueue;559559+ state->new_cycle_state = ep_ring->cycle_state;560560+ goto done;561561+ } else {562562+ xhci_warn(xhci, "Can't find new dequeue state, missing cur_td\n");563563+ return;564564+ }565565+ }566566+550567 /* Dig out the cycle state saved by the xHC during the stop ep cmd */551568 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,552569 "Finding endpoint context");···609592 state->new_deq_seg = new_seg;610593 state->new_deq_ptr = new_deq;611594595595+done:612596 /* Don't update the ring cycle state for the producer (us). */613597 xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,614598 "Cycle state = 0x%x", state->new_cycle_state);···1874185618751857 if (reset_type == EP_HARD_RESET) {18761858 ep->ep_state |= EP_HARD_CLEAR_TOGGLE;18771877- xhci_cleanup_stalled_ring(xhci, ep_index, stream_id, td);18781878- xhci_clear_hub_tt_buffer(xhci, td, ep);18591859+ xhci_cleanup_stalled_ring(xhci, slot_id, ep_index, stream_id,18601860+ td);18791861 }18801862 xhci_ring_cmd_db(xhci);18811863}···19961978 if (trb_comp_code == COMP_STALL_ERROR ||19971979 xhci_requires_manual_halt_cleanup(xhci, ep_ctx,19981980 trb_comp_code)) {19991999- /* Issue a reset endpoint command to clear the host side20002000- * halt, followed by a set dequeue command to move the20012001- * dequeue pointer past the TD.20022002- * The class driver clears the device side halt later.19811981+ /*19821982+ * xhci internal endpoint state will go to a "halt" state for19831983+ * any stall, including default control pipe protocol stall.19841984+ * To clear the host side halt we need to 
issue a reset endpoint19851985+ * command, followed by a set dequeue command to move past the19861986+ * TD.19871987+ * Class drivers clear the device side halt from a functional19881988+ * stall later. Hub TT buffer should only be cleared for FS/LS19891989+ * devices behind HS hubs for functional stalls.20031990 */19911991+ if ((ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR))19921992+ xhci_clear_hub_tt_buffer(xhci, td, ep);20041993 xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index,20051994 ep_ring->stream_id, td, EP_HARD_RESET);20061995 } else {···25632538 ep->skip = false;25642539 xhci_dbg(xhci, "td_list is empty while skip flag set. Clear skip flag for slot %u ep %u.\n",25652540 slot_id, ep_index);25412541+ }25422542+ if (trb_comp_code == COMP_STALL_ERROR ||25432543+ xhci_requires_manual_halt_cleanup(xhci, ep_ctx,25442544+ trb_comp_code)) {25452545+ xhci_cleanup_halted_endpoint(xhci, slot_id,25462546+ ep_index,25472547+ ep_ring->stream_id,25482548+ NULL,25492549+ EP_HARD_RESET);25662550 }25672551 goto cleanup;25682552 }
+7-7
drivers/usb/host/xhci.c
···30313031 added_ctxs, added_ctxs);30323032}3033303330343034-void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,30353035- unsigned int stream_id, struct xhci_td *td)30343034+void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int slot_id,30353035+ unsigned int ep_index, unsigned int stream_id,30363036+ struct xhci_td *td)30363037{30373038 struct xhci_dequeue_state deq_state;30383038- struct usb_device *udev = td->urb->dev;3039303930403040 xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep,30413041 "Cleaning up stalled endpoint ring");30423042 /* We need to move the HW's dequeue pointer past this TD,30433043 * or it will attempt to resend it on the next doorbell ring.30443044 */30453045- xhci_find_new_dequeue_state(xhci, udev->slot_id,30463046- ep_index, stream_id, td, &deq_state);30453045+ xhci_find_new_dequeue_state(xhci, slot_id, ep_index, stream_id, td,30463046+ &deq_state);3047304730483048 if (!deq_state.new_deq_ptr || !deq_state.new_deq_seg)30493049 return;···30543054 if (!(xhci->quirks & XHCI_RESET_EP_QUIRK)) {30553055 xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep,30563056 "Queueing new dequeue state");30573057- xhci_queue_new_dequeue_state(xhci, udev->slot_id,30573057+ xhci_queue_new_dequeue_state(xhci, slot_id,30583058 ep_index, &deq_state);30593059 } else {30603060 /* Better hope no one uses the input context between now and the···30653065 xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,30663066 "Setting up input context for "30673067 "configure endpoint command");30683068- xhci_setup_input_ctx_for_quirk(xhci, udev->slot_id,30683068+ xhci_setup_input_ctx_for_quirk(xhci, slot_id,30693069 ep_index, &deq_state);30703070 }30713071}
+3-2
drivers/usb/host/xhci.h
···21162116void xhci_queue_new_dequeue_state(struct xhci_hcd *xhci,21172117 unsigned int slot_id, unsigned int ep_index,21182118 struct xhci_dequeue_state *deq_state);21192119-void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,21202120- unsigned int stream_id, struct xhci_td *td);21192119+void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int slot_id,21202120+ unsigned int ep_index, unsigned int stream_id,21212121+ struct xhci_td *td);21212122void xhci_stop_endpoint_command_watchdog(struct timer_list *t);21222123void xhci_handle_command_timeout(struct work_struct *work);21232124
+10-10
drivers/usb/misc/sisusbvga/sisusb.c
···11991199/* High level: Gfx (indexed) register access */1200120012011201#ifdef CONFIG_USB_SISUSBVGA_CON12021202-int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data)12021202+int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data)12031203{12041204 return sisusb_write_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);12051205}1206120612071207-int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 *data)12071207+int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 *data)12081208{12091209 return sisusb_read_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);12101210}12111211#endif1212121212131213-int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,12131213+int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,12141214 u8 index, u8 data)12151215{12161216 int ret;···12201220 return ret;12211221}1222122212231223-int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,12231223+int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,12241224 u8 index, u8 *data)12251225{12261226 int ret;···12301230 return ret;12311231}1232123212331233-int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port, u8 idx,12331233+int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port, u8 idx,12341234 u8 myand, u8 myor)12351235{12361236 int ret;···12451245}1246124612471247static int sisusb_setidxregmask(struct sisusb_usb_data *sisusb,12481248- int port, u8 idx, u8 data, u8 mask)12481248+ u32 port, u8 idx, u8 data, u8 mask)12491249{12501250 int ret;12511251 u8 tmp;···12581258 return ret;12591259}1260126012611261-int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,12611261+int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,12621262 u8 index, u8 myor)12631263{12641264 return sisusb_setidxregandor(sisusb, port, index, 0xff, myor);12651265}1266126612671267-int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,12671267+int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 
port,12681268 u8 idx, u8 myand)12691269{12701270 return sisusb_setidxregandor(sisusb, port, idx, myand, 0x00);···27852785static int sisusb_handle_command(struct sisusb_usb_data *sisusb,27862786 struct sisusb_command *y, unsigned long arg)27872787{27882788- int retval, port, length;27892789- u32 address;27882788+ int retval, length;27892789+ u32 port, address;2790279027912791 /* All our commands require the device27922792 * to be initialized.
+7-7
drivers/usb/misc/sisusbvga/sisusb_init.h
···812812int SiSUSBSetMode(struct SiS_Private *SiS_Pr, unsigned short ModeNo);813813int SiSUSBSetVESAMode(struct SiS_Private *SiS_Pr, unsigned short VModeNo);814814815815-extern int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data);816816-extern int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 * data);817817-extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,815815+extern int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data);816816+extern int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 * data);817817+extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,818818 u8 index, u8 data);819819-extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,819819+extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,820820 u8 index, u8 * data);821821-extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port,821821+extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port,822822 u8 idx, u8 myand, u8 myor);823823-extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,823823+extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,824824 u8 index, u8 myor);825825-extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,825825+extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 port,826826 u8 idx, u8 myand);827827828828void sisusb_delete(struct kref *kref);
+43-3
drivers/usb/storage/uas.c
···8181static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *prefix,8282 int status);83838484+/*8585+ * This driver needs its own workqueue, as we need to control memory allocation.8686+ *8787+ * In the course of error handling and power management uas_wait_for_pending_cmnds()8888+ * needs to flush pending work items. In these contexts we cannot allocate memory8989+ * by doing block IO as we would deadlock. For the same reason we cannot wait9090+ * for anything allocating memory not heeding these constraints.9191+ *9292+ * So we have to control all work items that can be on the workqueue we flush.9393+ * Hence we cannot share a queue and need our own.9494+ */9595+static struct workqueue_struct *workqueue;9696+8497static void uas_do_work(struct work_struct *work)8598{8699 struct uas_dev_info *devinfo =···122109 if (!err)123110 cmdinfo->state &= ~IS_IN_WORK_LIST;124111 else125125- schedule_work(&devinfo->work);112112+ queue_work(workqueue, &devinfo->work);126113 }127114out:128115 spin_unlock_irqrestore(&devinfo->lock, flags);···147134148135 lockdep_assert_held(&devinfo->lock);149136 cmdinfo->state |= IS_IN_WORK_LIST;150150- schedule_work(&devinfo->work);137137+ queue_work(workqueue, &devinfo->work);151138}152139153140static void uas_zap_pending(struct uas_dev_info *devinfo, int result)···202189{203190 struct uas_cmd_info *ci = (void *)&cmnd->SCp;204191 struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp;192192+193193+ if (status == -ENODEV) /* too late */194194+ return;205195206196 scmd_printk(KERN_INFO, cmnd,207197 "%s %d uas-tag %d inflight:%s%s%s%s%s%s%s%s%s%s%s%s ",···12421226 .id_table = uas_usb_ids,12431227};1244122812451245-module_usb_driver(uas_driver);12291229+static int __init uas_init(void)12301230+{12311231+ int rv;12321232+12331233+ workqueue = alloc_workqueue("uas", WQ_MEM_RECLAIM, 0);12341234+ if (!workqueue)12351235+ return -ENOMEM;12361236+12371237+ rv = usb_register(&uas_driver);12381238+ if (rv) {12391239+ 
destroy_workqueue(workqueue);12401240+ return -ENOMEM;12411241+ }12421242+12431243+ return 0;12441244+}12451245+12461246+static void __exit uas_exit(void)12471247+{12481248+ usb_deregister(&uas_driver);12491249+ destroy_workqueue(workqueue);12501250+}12511251+12521252+module_init(uas_init);12531253+module_exit(uas_exit);1246125412471255MODULE_LICENSE("GPL");12481256MODULE_IMPORT_NS(USB_STORAGE);
drivers/usb/typec/tcpm/tcpm.c
···37943794 */37953795 break;3796379637973797+ case PORT_RESET:37983798+ case PORT_RESET_WAIT_OFF:37993799+ /*38003800+ * State set back to default mode once the timer completes.38013801+ * Ignore CC changes here.38023802+ */38033803+ break;38043804+37973805 default:37983806 if (tcpm_port_is_disconnected(port))37993807 tcpm_set_state(port, unattached_state(port), 0);···38633855 case SRC_TRY_DEBOUNCE:38643856 /* Do nothing, waiting for sink detection */38653857 break;38583858+38593859+ case PORT_RESET:38603860+ case PORT_RESET_WAIT_OFF:38613861+ /*38623862+ * State set back to default mode once the timer completes.38633863+ * Ignore vbus changes here.38643864+ */38653865+ break;38663866+38663867 default:38673868 break;38683869 }···39253908 case PORT_RESET_WAIT_OFF:39263909 tcpm_set_state(port, tcpm_default_state(port), 0);39273910 break;39113911+39283912 case SRC_TRY_WAIT:39293913 case SRC_TRY_DEBOUNCE:39303914 /* Do nothing, waiting for sink detection */39313915 break;39163916+39173917+ case PORT_RESET:39183918+ /*39193919+ * State set back to default mode once the timer completes.39203920+ * Ignore vbus changes here.39213921+ */39223922+ break;39233923+39323924 default:39333925 if (port->pwr_role == TYPEC_SINK &&39343926 port->attached)
drivers/vhost/vsock.c
···181181 break;182182 }183183184184- vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);185185- added = true;186186-187187- /* Deliver to monitoring devices all correctly transmitted188188- * packets.184184+ /* Deliver to monitoring devices all packets that we185185+ * will transmit.189186 */190187 virtio_transport_deliver_tap_pkt(pkt);188188+189189+ vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);190190+ added = true;191191192192 pkt->off += payload_len;193193 total_len += payload_len;···196196 * to send it with the next available buffer.197197 */198198 if (pkt->off < pkt->len) {199199+ /* We are queueing the same virtio_vsock_pkt to handle200200+ * the remaining bytes, and we want to deliver it201201+ * to monitoring devices in the next iteration.202202+ */203203+ pkt->tap_delivered = false;204204+199205 spin_lock_bh(&vsock->send_pkt_list_lock);200206 list_add(&pkt->list, &vsock->send_pkt_list);201207 spin_unlock_bh(&vsock->send_pkt_list_lock);···548542549543 mutex_unlock(&vq->mutex);550544 }545545+546546+ /* Some packets may have been queued before the device was started,547547+ * let's kick the send worker to send them.548548+ */549549+ vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);551550552551 mutex_unlock(&vsock->dev.mutex);553552 return 0;
fs/btrfs/block-group.c
···916916 path = btrfs_alloc_path();917917 if (!path) {918918 ret = -ENOMEM;919919- goto out;919919+ goto out_put_group;920920 }921921922922 /*···954954 ret = btrfs_orphan_add(trans, BTRFS_I(inode));955955 if (ret) {956956 btrfs_add_delayed_iput(inode);957957- goto out;957957+ goto out_put_group;958958 }959959 clear_nlink(inode);960960 /* One for the block groups ref */···977977978978 ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);979979 if (ret < 0)980980- goto out;980980+ goto out_put_group;981981 if (ret > 0)982982 btrfs_release_path(path);983983 if (ret == 0) {984984 ret = btrfs_del_item(trans, tree_root, path);985985 if (ret)986986- goto out;986986+ goto out_put_group;987987 btrfs_release_path(path);988988 }989989···1102110211031103 ret = remove_block_group_free_space(trans, block_group);11041104 if (ret)11051105- goto out;11051105+ goto out_put_group;1106110611071107- btrfs_put_block_group(block_group);11071107+ /* Once for the block groups rbtree */11081108 btrfs_put_block_group(block_group);1109110911101110 ret = btrfs_search_slot(trans, root, &key, path, -1, 1);···11271127 /* once for the tree */11281128 free_extent_map(em);11291129 }11301130+11311131+out_put_group:11321132+ /* Once for the lookup reference */11331133+ btrfs_put_block_group(block_group);11301134out:11311135 if (remove_rsv)11321136 btrfs_delayed_refs_rsv_release(fs_info, 1);···12921288 if (ret)12931289 goto err;12941290 mutex_unlock(&fs_info->unused_bg_unpin_mutex);12911291+ if (prev_trans)12921292+ btrfs_put_transaction(prev_trans);1295129312961294 return true;1297129512981296err:12991297 mutex_unlock(&fs_info->unused_bg_unpin_mutex);12981298+ if (prev_trans)12991299+ btrfs_put_transaction(prev_trans);13001300 btrfs_dec_block_group_ro(bg);13011301 return false;13021302}
fs/btrfs/transaction.c
···662662 }663663664664got_it:665665- btrfs_record_root_in_trans(h, root);666666-667665 if (!current->journal_info)668666 current->journal_info = h;667667+668668+ /*669669+ * btrfs_record_root_in_trans() needs to alloc new extents, and may670670+ * call btrfs_join_transaction() while we're also starting a671671+ * transaction.672672+ *673673+ * Thus it need to be called after current->journal_info initialized,674674+ * or we can deadlock.675675+ */676676+ btrfs_record_root_in_trans(h, root);677677+669678 return h;670679671680join_fail:
+40-3
fs/btrfs/tree-log.c
···42264226 const u64 ino = btrfs_ino(inode);42274227 struct btrfs_path *dst_path = NULL;42284228 bool dropped_extents = false;42294229+ u64 truncate_offset = i_size;42304230+ struct extent_buffer *leaf;42314231+ int slot;42294232 int ins_nr = 0;42304233 int start_slot;42314234 int ret;···42434240 if (ret < 0)42444241 goto out;4245424242434243+ /*42444244+ * We must check if there is a prealloc extent that starts before the42454245+ * i_size and crosses the i_size boundary. This is to ensure later we42464246+ * truncate down to the end of that extent and not to the i_size, as42474247+ * otherwise we end up losing part of the prealloc extent after a log42484248+ * replay and with an implicit hole if there is another prealloc extent42494249+ * that starts at an offset beyond i_size.42504250+ */42514251+ ret = btrfs_previous_item(root, path, ino, BTRFS_EXTENT_DATA_KEY);42524252+ if (ret < 0)42534253+ goto out;42544254+42554255+ if (ret == 0) {42564256+ struct btrfs_file_extent_item *ei;42574257+42584258+ leaf = path->nodes[0];42594259+ slot = path->slots[0];42604260+ ei = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);42614261+42624262+ if (btrfs_file_extent_type(leaf, ei) ==42634263+ BTRFS_FILE_EXTENT_PREALLOC) {42644264+ u64 extent_end;42654265+42664266+ btrfs_item_key_to_cpu(leaf, &key, slot);42674267+ extent_end = key.offset +42684268+ btrfs_file_extent_num_bytes(leaf, ei);42694269+42704270+ if (extent_end > i_size)42714271+ truncate_offset = extent_end;42724272+ }42734273+ } else {42744274+ ret = 0;42754275+ }42764276+42464277 while (true) {42474247- struct extent_buffer *leaf = path->nodes[0];42484248- int slot = path->slots[0];42784278+ leaf = path->nodes[0];42794279+ slot = path->slots[0];4249428042504281 if (slot >= btrfs_header_nritems(leaf)) {42514282 if (ins_nr > 0) {···43174280 ret = btrfs_truncate_inode_items(trans,43184281 root->log_root,43194282 &inode->vfs_inode,43204320- i_size,42834283+ truncate_offset,43214284 
BTRFS_EXTENT_DATA_KEY);43224285 } while (ret == -EAGAIN);43234286 if (ret)
+2-1
fs/cifs/cifsglob.h
···18911891/*18921892 * This lock protects the cifs_tcp_ses_list, the list of smb sessions per18931893 * tcp session, and the list of tcon's per smb session. It also protects18941894- * the reference counters for the server, smb session, and tcon. Finally,18941894+ * the reference counters for the server, smb session, and tcon. It also18951895+ * protects some fields in the TCP_Server_Info struct such as dstaddr. Finally,18951896 * changes to the tcon->tidStatus should be done while holding this lock.18961897 * generally the locks should be taken in order tcp_ses_lock before18971898 * tcon->open_file_lock and that before file->file_info_lock since the
fs/debugfs/file.c
···506506 * This function creates a file in debugfs with the given name that507507 * contains the value of the variable @value. If the @mode variable is so508508 * set, it can be read from, and written to.509509- *510510- * This function will return a pointer to a dentry if it succeeds. This511511- * pointer must be passed to the debugfs_remove() function when the file is512512- * to be removed (no automatic cleanup happens if your module is unloaded,513513- * you are responsible here.) If an error occurs, ERR_PTR(-ERROR) will be514514- * returned.515515- *516516- * If debugfs is not enabled in the kernel, the value ERR_PTR(-ENODEV) will517517- * be returned.518509 */519519-struct dentry *debugfs_create_u32(const char *name, umode_t mode,520520- struct dentry *parent, u32 *value)510510+void debugfs_create_u32(const char *name, umode_t mode, struct dentry *parent,511511+ u32 *value)521512{522522- return debugfs_create_mode_unsafe(name, mode, parent, value, &fops_u32,513513+ debugfs_create_mode_unsafe(name, mode, parent, value, &fops_u32,523514 &fops_u32_ro, &fops_u32_wo);524515}525516EXPORT_SYMBOL_GPL(debugfs_create_u32);
+31-27
fs/io_uring.c
···524524 REQ_F_OVERFLOW_BIT,525525 REQ_F_POLLED_BIT,526526 REQ_F_BUFFER_SELECTED_BIT,527527+ REQ_F_NO_FILE_TABLE_BIT,527528528529 /* not a real bit, just to check we're not overflowing the space */529530 __REQ_F_LAST_BIT,···578577 REQ_F_POLLED = BIT(REQ_F_POLLED_BIT),579578 /* buffer already selected */580579 REQ_F_BUFFER_SELECTED = BIT(REQ_F_BUFFER_SELECTED_BIT),580580+ /* doesn't need file table for this request */581581+ REQ_F_NO_FILE_TABLE = BIT(REQ_F_NO_FILE_TABLE_BIT),581582};582583583584struct async_poll {···802799 .needs_file = 1,803800 .fd_non_neg = 1,804801 .needs_fs = 1,802802+ .file_table = 1,805803 },806804 [IORING_OP_READ] = {807805 .needs_mm = 1,···12951291 struct io_kiocb *req;1296129212971293 req = ctx->fallback_req;12981298- if (!test_and_set_bit_lock(0, (unsigned long *) ctx->fallback_req))12941294+ if (!test_and_set_bit_lock(0, (unsigned long *) &ctx->fallback_req))12991295 return req;1300129613011297 return NULL;···13821378 if (likely(!io_is_fallback_req(req)))13831379 kmem_cache_free(req_cachep, req);13841380 else13851385- clear_bit_unlock(0, (unsigned long *) req->ctx->fallback_req);13811381+ clear_bit_unlock(0, (unsigned long *) &req->ctx->fallback_req);13861382}1387138313881384struct req_batch {···20382034 * any file. 
For now, just ensure that anything potentially problematic is done20392035 * inline.20402036 */20412041-static bool io_file_supports_async(struct file *file)20372037+static bool io_file_supports_async(struct file *file, int rw)20422038{20432039 umode_t mode = file_inode(file)->i_mode;20442040···20472043 if (S_ISREG(mode) && file->f_op != &io_uring_fops)20482044 return true;2049204520502050- return false;20462046+ if (!(file->f_mode & FMODE_NOWAIT))20472047+ return false;20482048+20492049+ if (rw == READ)20502050+ return file->f_op->read_iter != NULL;20512051+20522052+ return file->f_op->write_iter != NULL;20512053}2052205420532055static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,···25812571 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so25822572 * we know to async punt it even if it was opened O_NONBLOCK25832573 */25842584- if (force_nonblock && !io_file_supports_async(req->file))25742574+ if (force_nonblock && !io_file_supports_async(req->file, READ))25852575 goto copy_iov;2586257625872577 iov_count = iov_iter_count(&iter);···26042594 if (ret)26052595 goto out_free;26062596 /* any defer here is final, must blocking retry */26072607- if (!(req->flags & REQ_F_NOWAIT))25972597+ if (!(req->flags & REQ_F_NOWAIT) &&25982598+ !file_can_poll(req->file))26082599 req->flags |= REQ_F_MUST_PUNT;26092600 return -EAGAIN;26102601 }···26732662 * If the file doesn't support async, mark it as REQ_F_MUST_PUNT so26742663 * we know to async punt it even if it was opened O_NONBLOCK26752664 */26762676- if (force_nonblock && !io_file_supports_async(req->file))26652665+ if (force_nonblock && !io_file_supports_async(req->file, WRITE))26772666 goto copy_iov;2678266726792668 /* file path doesn't support NOWAIT for non-direct_IO */···27272716 if (ret)27282717 goto out_free;27292718 /* any defer here is final, must blocking retry */27302730- req->flags |= REQ_F_MUST_PUNT;27192719+ if (!file_can_poll(req->file))27202720+ req->flags |= 
REQ_F_MUST_PUNT;27312721 return -EAGAIN;27322722 }27332723 }···27682756 return 0;27692757}2770275827712771-static bool io_splice_punt(struct file *file)27722772-{27732773- if (get_pipe_info(file))27742774- return false;27752775- if (!io_file_supports_async(file))27762776- return true;27772777- return !(file->f_flags & O_NONBLOCK);27782778-}27792779-27802759static int io_splice(struct io_kiocb *req, bool force_nonblock)27812760{27822761 struct io_splice *sp = &req->splice;···27772774 loff_t *poff_in, *poff_out;27782775 long ret;2779277627802780- if (force_nonblock) {27812781- if (io_splice_punt(in) || io_splice_punt(out))27822782- return -EAGAIN;27832783- flags |= SPLICE_F_NONBLOCK;27842784- }27772777+ if (force_nonblock)27782778+ return -EAGAIN;2785277927862780 poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;27872781 poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;···33553355 struct kstat stat;33563356 int ret;3357335733583358- if (force_nonblock)33583358+ if (force_nonblock) {33593359+ /* only need file table for an actual valid fd */33603360+ if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)33613361+ req->flags |= REQ_F_NO_FILE_TABLE;33593362 return -EAGAIN;33633363+ }3360336433613365 if (vfs_stat_set_lookup_flags(&lookup_flags, ctx->how.flags))33623366 return -EINVAL;···35063502 if (io_req_cancelled(req))35073503 return;35083504 __io_sync_file_range(req);35093509- io_put_req(req); /* put submission ref */35053505+ io_steal_work(req, workptr);35103506}3511350735123508static int io_sync_file_range(struct io_kiocb *req, bool force_nonblock)···50195015 int ret;5020501650215017 /* Still need defer if there is pending req in defer list. 
*/50225022- if (!req_need_defer(req) && list_empty(&ctx->defer_list))50185018+ if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))50235019 return 0;5024502050255021 if (!req->io && io_alloc_async_ctx(req))···54335429 int ret = -EBADF;54345430 struct io_ring_ctx *ctx = req->ctx;5435543154365436- if (req->work.files)54325432+ if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))54375433 return 0;54385434 if (!ctx->ring_file)54395435 return -EBADF;···73317327 * it could cause shutdown to hang.73327328 */73337329 while (ctx->sqo_thread && !wq_has_sleeper(&ctx->sqo_wait))73347334- cpu_relax();73307330+ cond_resched();7335733173367332 io_kill_timeouts(ctx);73377333 io_poll_remove_all(ctx);
+8
fs/ioctl.c
···5555static int ioctl_fibmap(struct file *filp, int __user *p)5656{5757 struct inode *inode = file_inode(filp);5858+ struct super_block *sb = inode->i_sb;5859 int error, ur_block;5960 sector_t block;6061···71707271 block = ur_block;7372 error = bmap(inode, &block);7373+7474+ if (block > INT_MAX) {7575+ error = -ERANGE;7676+ pr_warn_ratelimited("[%s/%d] FS: %s File: %pD4 would truncate fibmap result\n",7777+ current->comm, task_pid_nr(current),7878+ sb->s_id, filp);7979+ }74807581 if (error)7682 ur_block = 0;
···329329330330/**331331 * struct dma_buf_attach_ops - importer operations for an attachment332332- * @move_notify: [optional] notification that the DMA-buf is moving333332 *334333 * Attachment operations implemented by the importer.335334 */336335struct dma_buf_attach_ops {337336 /**338338- * @move_notify337337+ * @move_notify: [optional] notification that the DMA-buf is moving339338 *340339 * If this callback is provided the framework can avoid pinning the341340 * backing store while mappings exists.
+6-6
include/linux/dmaengine.h
···8383/**8484 * Interleaved Transfer Request8585 * ----------------------------8686- * A chunk is collection of contiguous bytes to be transfered.8686+ * A chunk is collection of contiguous bytes to be transferred.8787 * The gap(in bytes) between two chunks is called inter-chunk-gap(ICG).8888- * ICGs may or maynot change between chunks.8888+ * ICGs may or may not change between chunks.8989 * A FRAME is the smallest series of contiguous {chunk,icg} pairs,9090 * that when repeated an integral number of times, specifies the transfer.9191 * A transfer template is specification of a Frame, the number of times···341341 * @chan: driver channel device342342 * @device: sysfs device343343 * @dev_id: parent dma_device dev_id344344- * @idr_ref: reference count to gate release of dma_device dev_id345344 */346345struct dma_chan_dev {347346 struct dma_chan *chan;348347 struct device device;349348 int dev_id;350350- atomic_t *idr_ref;351349};352350353351/**···833835 int dev_id;834836 struct device *dev;835837 struct module *owner;838838+ struct ida chan_ida;839839+ struct mutex chan_mutex; /* to protect chan_ida */836840837841 u32 src_addr_widths;838842 u32 dst_addr_widths;···10691069 * dmaengine_synchronize() needs to be called before it is safe to free10701070 * any memory that is accessed by previously submitted descriptors or before10711071 * freeing any resources accessed from within the completion callback of any10721072- * perviously submitted descriptors.10721072+ * previously submitted descriptors.10731073 *10741074 * This function can be called from atomic context as well as from within a10751075 * complete callback of a descriptor submitted on the same channel.···10911091 *10921092 * Synchronizes to the DMA channel termination to the current context. 
When this10931093 * function returns it is guaranteed that all transfers for previously issued10941094- * descriptors have stopped and and it is safe to free the memory assoicated10941094+ * descriptors have stopped and it is safe to free the memory associated10951095 * with them. Furthermore it is guaranteed that all complete callback functions10961096 * for a previously submitted descriptor have finished running and it is safe to10971097 * free resources accessed from within the complete callbacks.
···7171#if IS_ENABLED(CONFIG_SUNRPC_DEBUG)7272 struct dentry *cl_debugfs; /* debugfs directory */7373#endif7474- struct rpc_xprt_iter cl_xpi;7474+ /* cl_work is only needed after cl_xpi is no longer used,7575+ * and that are of similar size7676+ */7777+ union {7878+ struct rpc_xprt_iter cl_xpi;7979+ struct work_struct cl_work;8080+ };7581 const struct cred *cl_cred;7682};7783···242236 (task->tk_msg.rpc_proc->p_decode != NULL);243237}244238239239+static inline void rpc_task_close_connection(struct rpc_task *task)240240+{241241+ if (task->tk_xprt)242242+ xprt_force_disconnect(task->tk_xprt);243243+}245244#endif /* _LINUX_SUNRPC_CLNT_H */
···6666 int read;6767 int flags;6868 /* Data points here */6969- unsigned long data[0];6969+ unsigned long data[];7070};71717272/* Values for .flags field of tty_buffer */
+24-2
include/linux/virtio_net.h
···33#define _LINUX_VIRTIO_NET_H4455#include <linux/if_vlan.h>66+#include <uapi/linux/tcp.h>77+#include <uapi/linux/udp.h>68#include <uapi/linux/virtio_net.h>79810static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,···3028 bool little_endian)3129{3230 unsigned int gso_type = 0;3131+ unsigned int thlen = 0;3232+ unsigned int ip_proto;33333434 if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {3535 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {3636 case VIRTIO_NET_HDR_GSO_TCPV4:3737 gso_type = SKB_GSO_TCPV4;3838+ ip_proto = IPPROTO_TCP;3939+ thlen = sizeof(struct tcphdr);3840 break;3941 case VIRTIO_NET_HDR_GSO_TCPV6:4042 gso_type = SKB_GSO_TCPV6;4343+ ip_proto = IPPROTO_TCP;4444+ thlen = sizeof(struct tcphdr);4145 break;4246 case VIRTIO_NET_HDR_GSO_UDP:4347 gso_type = SKB_GSO_UDP;4848+ ip_proto = IPPROTO_UDP;4949+ thlen = sizeof(struct udphdr);4450 break;4551 default:4652 return -EINVAL;···67576858 if (!skb_partial_csum_set(skb, start, off))6959 return -EINVAL;6060+6161+ if (skb_transport_offset(skb) + thlen > skb_headlen(skb))6262+ return -EINVAL;7063 } else {7164 /* gso packets without NEEDS_CSUM do not set transport_offset.7265 * probe and drop if does not match one of the above types.7366 */7467 if (gso_type && skb->network_header) {6868+ struct flow_keys_basic keys;6969+7570 if (!skb->protocol)7671 virtio_net_hdr_set_proto(skb, hdr);7772retry:7878- skb_probe_transport_header(skb);7979- if (!skb_transport_header_was_set(skb)) {7373+ if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,7474+ NULL, 0, 0, 0,7575+ 0)) {8076 /* UFO does not specify ipv4 or 6: try both */8177 if (gso_type & SKB_GSO_UDP &&8278 skb->protocol == htons(ETH_P_IP)) {···9175 }9276 return -EINVAL;9377 }7878+7979+ if (keys.control.thoff + thlen > skb_headlen(skb) ||8080+ keys.basic.ip_proto != ip_proto)8181+ return -EINVAL;8282+8383+ skb_set_transport_header(skb, keys.control.thoff);9484 }9585 }9686
···437437 return atomic_read(&net->ipv4.rt_genid);438438}439439440440+#if IS_ENABLED(CONFIG_IPV6)441441+static inline int rt_genid_ipv6(const struct net *net)442442+{443443+ return atomic_read(&net->ipv6.fib6_sernum);444444+}445445+#endif446446+440447static inline void rt_genid_bump_ipv4(struct net *net)441448{442449 atomic_inc(&net->ipv4.rt_genid);
+1
include/net/sch_generic.h
···407407 struct mutex lock;408408 struct list_head chain_list;409409 u32 index; /* block index for shared blocks */410410+ u32 classid; /* which class this block belongs to */410411 refcount_t refcnt;411412 struct net *net;412413 struct Qdisc *q;
+1
include/soc/mscc/ocelot.h
···507507 unsigned int num_stats;508508509509 int shared_queue_sz;510510+ int num_mact_rows;510511511512 struct net_device *hw_bridge_dev;512513 u16 bridge_mask;
···39394040#define DMA_BUF_BASE 'b'4141#define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)4242+4343+/* 32/64bitness of this uapi was botched in android, there's no difference4444+ * between them in actual uapi, they're just different numbers.4545+ */4246#define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *)4747+#define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32)4848+#define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64)43494450#endif
+1-1
include/uapi/linux/fiemap.h
···3434 __u32 fm_mapped_extents;/* number of extents that were mapped (out) */3535 __u32 fm_extent_count; /* size of fm_extents array (in) */3636 __u32 fm_reserved;3737- struct fiemap_extent fm_extents[]; /* array of mapped extents (out) */3737+ struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */3838};39394040#define FIEMAP_MAX_OFFSET (~0ULL)
+2-2
include/uapi/linux/hyperv.h
···119119120120struct hv_fcopy_hdr {121121 __u32 operation;122122- uuid_le service_id0; /* currently unused */123123- uuid_le service_id1; /* currently unused */122122+ __u8 service_id0[16]; /* currently unused */123123+ __u8 service_id1[16]; /* currently unused */124124} __attribute__((packed));125125126126#define OVER_WRITE 0x1
+3-3
include/uapi/linux/if_arcnet.h
···6060 __u8 proto; /* protocol ID field - varies */6161 __u8 split_flag; /* for use with split packets */6262 __be16 sequence; /* sequence number */6363- __u8 payload[]; /* space remaining in packet (504 bytes)*/6363+ __u8 payload[0]; /* space remaining in packet (504 bytes)*/6464};6565#define RFC1201_HDR_SIZE 46666···6969 */7070struct arc_rfc1051 {7171 __u8 proto; /* ARC_P_RFC1051_ARP/RFC1051_IP */7272- __u8 payload[]; /* 507 bytes */7272+ __u8 payload[0]; /* 507 bytes */7373};7474#define RFC1051_HDR_SIZE 17575···8080struct arc_eth_encap {8181 __u8 proto; /* Always ARC_P_ETHER */8282 struct ethhdr eth; /* standard ethernet header (yuck!) */8383- __u8 payload[]; /* 493 bytes */8383+ __u8 payload[0]; /* 493 bytes */8484};8585#define ETH_ENCAP_HDR_SIZE 148686
···615615 v - 1, rtm_cmd);616616 v_change_start = 0;617617 }618618+ cond_resched();618619 }619620 /* v_change_start is set only if the last/whole range changed */620621 if (v_change_start)
+10-2
net/core/devlink.c
···43314331 end_offset = nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]);43324332 end_offset += nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]);43334333 dump = false;43344334+43354335+ if (start_offset == end_offset) {43364336+ err = 0;43374337+ goto nla_put_failure;43384338+ }43344339 }4335434043364341 err = devlink_nl_region_read_snapshot_fill(skb, devlink,···54165411{54175412 enum devlink_health_reporter_state prev_health_state;54185413 struct devlink *devlink = reporter->devlink;54145414+ unsigned long recover_ts_threshold;5419541554205416 /* write a log message of the current error */54215417 WARN_ON(!msg);···54275421 devlink_recover_notify(reporter, DEVLINK_CMD_HEALTH_REPORTER_RECOVER);5428542254295423 /* abort if the previous error wasn't recovered */54245424+ recover_ts_threshold = reporter->last_recovery_ts +54255425+ msecs_to_jiffies(reporter->graceful_period);54305426 if (reporter->auto_recover &&54315427 (prev_health_state != DEVLINK_HEALTH_REPORTER_STATE_HEALTHY ||54325432- jiffies - reporter->last_recovery_ts <54335433- msecs_to_jiffies(reporter->graceful_period))) {54285428+ (reporter->last_recovery_ts && reporter->recovery_count &&54295429+ time_is_after_jiffies(recover_ts_threshold)))) {54345430 trace_devlink_health_recover_aborted(devlink,54355431 reporter->ops->name,54365432 reporter->health_state,
···23642364 }23652365}2366236623672367-/* On 32bit arches, an skb frag is limited to 2^15 */23682367#define SKB_FRAG_PAGE_ORDER get_order(32768)23692368DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);23702369
···39263926 */39273927 break;39283928#endif39293929- case TCPOPT_MPTCP:39303930- mptcp_parse_option(skb, ptr, opsize, opt_rx);39313931- break;39323932-39333929 case TCPOPT_FASTOPEN:39343930 tcp_parse_fastopen_option(39353931 opsize - TCPOLEN_FASTOPEN_BASE,···6014601860156019 tcp_sync_mss(sk, icsk->icsk_pmtu_cookie);60166020 tcp_initialize_rcv_mss(sk);60176017-60186018- if (sk_is_mptcp(sk))60196019- mptcp_rcv_synsent(sk);6020602160216022 /* Remember, tcp_poll() does not lock socket!60226023 * Change state from SYN-SENT only after copied_seq
+25
net/ipv6/route.c
···13851385 }13861386 ip6_rt_copy_init(pcpu_rt, res);13871387 pcpu_rt->rt6i_flags |= RTF_PCPU;13881388+13891389+ if (f6i->nh)13901390+ pcpu_rt->sernum = rt_genid_ipv6(dev_net(dev));13911391+13881392 return pcpu_rt;13931393+}13941394+13951395+static bool rt6_is_valid(const struct rt6_info *rt6)13961396+{13971397+ return rt6->sernum == rt_genid_ipv6(dev_net(rt6->dst.dev));13891398}1390139913911400/* It should be called with rcu_read_lock() acquired */···14031394 struct rt6_info *pcpu_rt;1404139514051396 pcpu_rt = this_cpu_read(*res->nh->rt6i_pcpu);13971397+13981398+ if (pcpu_rt && pcpu_rt->sernum && !rt6_is_valid(pcpu_rt)) {13991399+ struct rt6_info *prev, **p;14001400+14011401+ p = this_cpu_ptr(res->nh->rt6i_pcpu);14021402+ prev = xchg(p, NULL);14031403+ if (prev) {14041404+ dst_dev_put(&prev->dst);14051405+ dst_release(&prev->dst);14061406+ }14071407+14081408+ pcpu_rt = NULL;14091409+ }1406141014071411 return pcpu_rt;14081412}···26142592 struct rt6_info *rt;2615259326162594 rt = container_of(dst, struct rt6_info, dst);25952595+25962596+ if (rt->sernum)25972597+ return rt6_is_valid(rt) ? dst : NULL;2617259826182599 rcu_read_lock();26192600
+8-2
net/ipv6/seg6.c
···27272828bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len)2929{3030- int trailing;3130 unsigned int tlv_offset;3131+ int max_last_entry;3232+ int trailing;32333334 if (srh->type != IPV6_SRCRT_TYPE_4)3435 return false;···3736 if (((srh->hdrlen + 1) << 3) != len)3837 return false;39384040- if (srh->segments_left > srh->first_segment)3939+ max_last_entry = (srh->hdrlen / 2) - 1;4040+4141+ if (srh->first_segment > max_last_entry)4242+ return false;4343+4444+ if (srh->segments_left > srh->first_segment + 1)4145 return false;42464347 tlv_offset = sizeof(*srh) + ((srh->first_segment + 1) << 4);
+41-54
net/mptcp/options.c
···1616 return (flags & MPTCP_CAP_FLAG_MASK) == MPTCP_CAP_HMAC_SHA256;1717}18181919-void mptcp_parse_option(const struct sk_buff *skb, const unsigned char *ptr,2020- int opsize, struct tcp_options_received *opt_rx)1919+static void mptcp_parse_option(const struct sk_buff *skb,2020+ const unsigned char *ptr, int opsize,2121+ struct mptcp_options_received *mp_opt)2122{2222- struct mptcp_options_received *mp_opt = &opt_rx->mptcp;2323 u8 subtype = *ptr >> 4;2424 int expected_opsize;2525 u8 version;···283283}284284285285void mptcp_get_options(const struct sk_buff *skb,286286- struct tcp_options_received *opt_rx)286286+ struct mptcp_options_received *mp_opt)287287{288288- const unsigned char *ptr;289288 const struct tcphdr *th = tcp_hdr(skb);290290- int length = (th->doff * 4) - sizeof(struct tcphdr);289289+ const unsigned char *ptr;290290+ int length;291291292292+ /* initialize option status */293293+ mp_opt->mp_capable = 0;294294+ mp_opt->mp_join = 0;295295+ mp_opt->add_addr = 0;296296+ mp_opt->rm_addr = 0;297297+ mp_opt->dss = 0;298298+299299+ length = (th->doff * 4) - sizeof(struct tcphdr);292300 ptr = (const unsigned char *)(th + 1);293301294302 while (length > 0) {···316308 if (opsize > length)317309 return; /* don't parse partial options */318310 if (opcode == TCPOPT_MPTCP)319319- mptcp_parse_option(skb, ptr, opsize, opt_rx);311311+ mptcp_parse_option(skb, ptr, opsize, mp_opt);320312 ptr += opsize - 2;321313 length -= opsize;322314 }···350342 return true;351343 }352344 return false;353353-}354354-355355-void mptcp_rcv_synsent(struct sock *sk)356356-{357357- struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);358358- struct tcp_sock *tp = tcp_sk(sk);359359-360360- if (subflow->request_mptcp && tp->rx_opt.mptcp.mp_capable) {361361- subflow->mp_capable = 1;362362- subflow->can_ack = 1;363363- subflow->remote_key = tp->rx_opt.mptcp.sndr_key;364364- pr_debug("subflow=%p, remote_key=%llu", subflow,365365- subflow->remote_key);366366- } else if 
(subflow->request_join && tp->rx_opt.mptcp.mp_join) {367367- subflow->mp_join = 1;368368- subflow->thmac = tp->rx_opt.mptcp.thmac;369369- subflow->remote_nonce = tp->rx_opt.mptcp.nonce;370370- pr_debug("subflow=%p, thmac=%llu, remote_nonce=%u", subflow,371371- subflow->thmac, subflow->remote_nonce);372372- } else if (subflow->request_mptcp) {373373- tcp_sk(sk)->is_mptcp = 0;374374- }375345}376346377347/* MP_JOIN client subflow must wait for 4th ack before sending any data:···695709 if (TCP_SKB_CB(skb)->seq != subflow->ssn_offset + 1)696710 return subflow->mp_capable;697711698698- if (mp_opt->use_ack) {712712+ if (mp_opt->dss && mp_opt->use_ack) {699713 /* subflows are fully established as soon as we get any700714 * additional ack.701715 */702716 subflow->fully_established = 1;703717 goto fully_established;704718 }705705-706706- WARN_ON_ONCE(subflow->can_ack);707719708720 /* If the first established packet does not contain MP_CAPABLE + data709721 * then fallback to TCP···712728 return false;713729 }714730731731+ if (unlikely(!READ_ONCE(msk->pm.server_side)))732732+ pr_warn_once("bogus mpc option on established client sk");715733 subflow->fully_established = 1;716734 subflow->remote_key = mp_opt->sndr_key;717735 subflow->can_ack = 1;···805819{806820 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);807821 struct mptcp_sock *msk = mptcp_sk(subflow->conn);808808- struct mptcp_options_received *mp_opt;822822+ struct mptcp_options_received mp_opt;809823 struct mptcp_ext *mpext;810824811811- mp_opt = &opt_rx->mptcp;812812- if (!check_fully_established(msk, sk, subflow, skb, mp_opt))825825+ mptcp_get_options(skb, &mp_opt);826826+ if (!check_fully_established(msk, sk, subflow, skb, &mp_opt))813827 return;814828815815- if (mp_opt->add_addr && add_addr_hmac_valid(msk, mp_opt)) {829829+ if (mp_opt.add_addr && add_addr_hmac_valid(msk, &mp_opt)) {816830 struct mptcp_addr_info addr;817831818818- addr.port = htons(mp_opt->port);819819- addr.id = mp_opt->addr_id;820820- 
if (mp_opt->family == MPTCP_ADDR_IPVERSION_4) {832832+ addr.port = htons(mp_opt.port);833833+ addr.id = mp_opt.addr_id;834834+ if (mp_opt.family == MPTCP_ADDR_IPVERSION_4) {821835 addr.family = AF_INET;822822- addr.addr = mp_opt->addr;836836+ addr.addr = mp_opt.addr;823837 }824838#if IS_ENABLED(CONFIG_MPTCP_IPV6)825825- else if (mp_opt->family == MPTCP_ADDR_IPVERSION_6) {839839+ else if (mp_opt.family == MPTCP_ADDR_IPVERSION_6) {826840 addr.family = AF_INET6;827827- addr.addr6 = mp_opt->addr6;841841+ addr.addr6 = mp_opt.addr6;828842 }829843#endif830830- if (!mp_opt->echo)844844+ if (!mp_opt.echo)831845 mptcp_pm_add_addr_received(msk, &addr);832832- mp_opt->add_addr = 0;846846+ mp_opt.add_addr = 0;833847 }834848835835- if (!mp_opt->dss)849849+ if (!mp_opt.dss)836850 return;837851838852 /* we can't wait for recvmsg() to update the ack_seq, otherwise839853 * monodirectional flows will stuck840854 */841841- if (mp_opt->use_ack)842842- update_una(msk, mp_opt);855855+ if (mp_opt.use_ack)856856+ update_una(msk, &mp_opt);843857844858 mpext = skb_ext_add(skb, SKB_EXT_MPTCP);845859 if (!mpext)···847861848862 memset(mpext, 0, sizeof(*mpext));849863850850- if (mp_opt->use_map) {851851- if (mp_opt->mpc_map) {864864+ if (mp_opt.use_map) {865865+ if (mp_opt.mpc_map) {852866 /* this is an MP_CAPABLE carrying MPTCP data853867 * we know this map the first chunk of data854868 */···858872 mpext->subflow_seq = 1;859873 mpext->dsn64 = 1;860874 mpext->mpc_map = 1;875875+ mpext->data_fin = 0;861876 } else {862862- mpext->data_seq = mp_opt->data_seq;863863- mpext->subflow_seq = mp_opt->subflow_seq;864864- mpext->dsn64 = mp_opt->dsn64;865865- mpext->data_fin = mp_opt->data_fin;877877+ mpext->data_seq = mp_opt.data_seq;878878+ mpext->subflow_seq = mp_opt.subflow_seq;879879+ mpext->dsn64 = mp_opt.dsn64;880880+ mpext->data_fin = mp_opt.data_fin;866881 }867867- mpext->data_len = mp_opt->data_len;882882+ mpext->data_len = mp_opt.data_len;868883 mpext->use_map = 1;869884 }870885}
+9-8
net/mptcp/protocol.c
···1316131613171317static int mptcp_disconnect(struct sock *sk, int flags)13181318{13191319- lock_sock(sk);13201320- __mptcp_clear_xmit(sk);13211321- release_sock(sk);13221322- mptcp_cancel_work(sk);13231323- return tcp_disconnect(sk, flags);13191319+ /* Should never be called.13201320+ * inet_stream_connect() calls ->disconnect, but that13211321+ * refers to the subflow socket, not the mptcp one.13221322+ */13231323+ WARN_ON_ONCE(1);13241324+ return 0;13241325}1325132613261327#if IS_ENABLED(CONFIG_MPTCP_IPV6)···13341333#endif1335133413361335struct sock *mptcp_sk_clone(const struct sock *sk,13371337- const struct tcp_options_received *opt_rx,13361336+ const struct mptcp_options_received *mp_opt,13381337 struct request_sock *req)13391338{13401339 struct mptcp_subflow_request_sock *subflow_req = mptcp_subflow_rsk(req);···1373137213741373 msk->write_seq = subflow_req->idsn + 1;13751374 atomic64_set(&msk->snd_una, msk->write_seq);13761376- if (opt_rx->mptcp.mp_capable) {13751375+ if (mp_opt->mp_capable) {13771376 msk->can_ack = true;13781378- msk->remote_key = opt_rx->mptcp.sndr_key;13771377+ msk->remote_key = mp_opt->sndr_key;13791378 mptcp_crypto_key_sha(msk->remote_key, NULL, &ack_seq);13801379 ack_seq++;13811380 msk->ack_seq = ack_seq;
···637637 if (ctl->divisor &&638638 (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536))639639 return -EINVAL;640640+641641+ /* slot->allot is a short, make sure quantum is not too big. */642642+ if (ctl->quantum) {643643+ unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum);644644+645645+ if (scaled <= 0 || scaled > SHRT_MAX)646646+ return -EINVAL;647647+ }648648+640649 if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,641650 ctl_v1->Wlog))642651 return -EINVAL;
···1687168716881688 case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */16891689 case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */16901690- case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */16901690+ case USB_ID(0x16d0, 0x06b2): /* NuPrime DAC-10 */16911691 case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */16921692 case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */16931693 case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */
···1515 exit_unsupported1616fi17171818-if [ ! -f set_ftrace_filter ]; then1919- echo "set_ftrace_filter not found? Is function tracer not set?"2020- exit_unsupported2121-fi1818+check_filter_file set_ftrace_filter22192320do_function_fork=12421
···1616 exit_unsupported1717fi18181919-if [ ! -f set_ftrace_filter ]; then2020- echo "set_ftrace_filter not found? Is function tracer not set?"2121- exit_unsupported2222-fi1919+check_filter_file set_ftrace_filter23202421do_function_fork=12522
···1111#12121313# The triggers are set within the set_ftrace_filter file1414-if [ ! -f set_ftrace_filter ]; then1515- echo "set_ftrace_filter not found? Is dynamic ftrace not set?"1616- exit_unsupported1717-fi1414+check_filter_file set_ftrace_filter18151916do_reset() {2017 reset_ftrace_filter
···1010#11111212# The triggers are set within the set_ftrace_filter file1313-if [ ! -f set_ftrace_filter ]; then1414- echo "set_ftrace_filter not found? Is dynamic ftrace not set?"1515- exit_unsupported1616-fi1313+check_filter_file set_ftrace_filter17141815fail() { # mesg1916 echo $1
···1111#12121313# The triggers are set within the set_ftrace_filter file1414-if [ ! -f set_ftrace_filter ]; then1515- echo "set_ftrace_filter not found? Is dynamic ftrace not set?"1616- exit_unsupported1717-fi1414+check_filter_file set_ftrace_filter18151916fail() { # mesg2017 echo $1
+6
tools/testing/selftests/ftrace/test.d/functions
···11+check_filter_file() { # check filter file introduced by dynamic ftrace22+ if [ ! -f "$1" ]; then33+ echo "$1 not found? Is dynamic ftrace not set?"44+ exit_unsupported55+ fi66+}1728clear_trace() { # reset trace output39 echo > trace
···4848 exec 2>/dev/null4949 printf "$orig_message_cost" > /proc/sys/net/core/message_cost5050 ip0 link del dev wg05151+ ip0 link del dev wg15152 ip1 link del dev wg05353+ ip1 link del dev wg15254 ip2 link del dev wg05555+ ip2 link del dev wg15356 local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)"5457 [[ -n $to_kill ]] && kill $to_kill5558 pp ip netns del $netns1···8077key1="$(pp wg genkey)"8178key2="$(pp wg genkey)"8279key3="$(pp wg genkey)"8080+key4="$(pp wg genkey)"8381pub1="$(pp wg pubkey <<<"$key1")"8482pub2="$(pp wg pubkey <<<"$key2")"8583pub3="$(pp wg pubkey <<<"$key3")"8484+pub4="$(pp wg pubkey <<<"$key4")"8685psk="$(pp wg genpsk)"8786[[ -n $key1 && -n $key2 && -n $psk ]]88878988configure_peers() {9089 ip1 addr add 192.168.241.1/24 dev wg09191- ip1 addr add fd00::1/24 dev wg09090+ ip1 addr add fd00::1/112 dev wg092919392 ip2 addr add 192.168.241.2/24 dev wg09494- ip2 addr add fd00::2/24 dev wg09393+ ip2 addr add fd00::2/112 dev wg095949695 n1 wg set wg0 \9796 private-key <(echo "$key1") \···235230n1 wg set wg0 private-key <(echo "$key3")236231n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove237232n1 ping -W 1 -c 1 192.168.241.2233233+n2 wg set wg0 peer "$pub3" remove238234239239-ip1 link del wg0235235+# Test that we can route wg through wg236236+ip1 addr flush dev wg0237237+ip2 addr flush dev wg0238238+ip1 addr add fd00::5:1/112 dev wg0239239+ip2 addr add fd00::5:2/112 dev wg0240240+n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips fd00::5:2/128 endpoint 127.0.0.1:2241241+n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips fd00::5:1/128 endpoint 127.212.121.99:9998242242+ip1 link add wg1 type wireguard243243+ip2 link add wg1 type wireguard244244+ip1 addr add 192.168.241.1/24 dev wg1245245+ip1 addr add fd00::1/112 dev wg1246246+ip2 addr add 192.168.241.2/24 dev 
wg1247247+ip2 addr add fd00::2/112 dev wg1248248+ip1 link set mtu 1340 up dev wg1249249+ip2 link set mtu 1340 up dev wg1250250+n1 wg set wg1 listen-port 5 private-key <(echo "$key3") peer "$pub4" allowed-ips 192.168.241.2/32,fd00::2/128 endpoint [fd00::5:2]:5251251+n2 wg set wg1 listen-port 5 private-key <(echo "$key4") peer "$pub3" allowed-ips 192.168.241.1/32,fd00::1/128 endpoint [fd00::5:1]:5252252+tests253253+# Try to set up a routing loop between the two namespaces254254+ip1 link set netns $netns0 dev wg1255255+ip0 addr add 192.168.241.1/24 dev wg1256256+ip0 link set up dev wg1257257+n0 ping -W 1 -c 1 192.168.241.2258258+n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7240259ip2 link del wg0260260+ip2 link del wg1261261+! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel262262+263263+ip0 link del wg1264264+ip1 link del wg0241265242266# Test using NAT. We now change the topology to this:243267# ┌────────────────────────────────────────┐ ┌────────────────────────────────────────────────┐ ┌────────────────────────────────────────┐···315281pp sleep 3316282n2 ping -W 1 -c 1 192.168.241.1317283n1 wg set wg0 peer "$pub2" persistent-keepalive 0284284+285285+# Test that onion routing works, even when it loops286286+n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5287287+ip1 addr add 192.168.242.1/24 dev wg0288288+ip2 link add wg1 type wireguard289289+ip2 addr add 192.168.242.2/24 dev wg1290290+n2 wg set wg1 private-key <(echo "$key3") listen-port 5 peer "$pub1" allowed-ips 192.168.242.1/32291291+ip2 link set wg1 up292292+n1 ping -W 1 -c 1 192.168.242.2293293+ip2 link del wg1294294+n1 wg set wg0 peer "$pub3" endpoint 192.168.242.2:5295295+! 
n1 ping -W 1 -c 1 192.168.242.2 || false # Should not crash kernel296296+n1 wg set wg0 peer "$pub3" remove297297+ip1 addr del 192.168.242.1/24 dev wg0318298319299# Do a wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs.320300ip1 -6 addr add fc00::9/96 dev vethc