Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.8-rc7 into staging-next

We need the staging fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3462 -3170
+3
.mailmap
···
 Mayuresh Janorkar <mayur@ti.com>
 Michael Buesch <m@bues.ch>
 Michel Dänzer <michel@tungstengraphics.com>
+Mike Rapoport <rppt@kernel.org> <mike@compulab.co.il>
+Mike Rapoport <rppt@kernel.org> <mike.rapoport@gmail.com>
+Mike Rapoport <rppt@kernel.org> <rppt@linux.ibm.com>
 Miodrag Dinic <miodrag.dinic@mips.com> <miodrag.dinic@imgtec.com>
 Miquel Raynal <miquel.raynal@bootlin.com> <miquel.raynal@free-electrons.com>
 Mitesh shah <mshah@teja.com>
+10 -1
Documentation/ABI/testing/debugfs-driver-habanalabs
···
         gating mechanism in Gaudi. Due to how Gaudi is built, the
         clock gating needs to be disabled in order to access the
         registers of the TPC and MME engines. This is sometimes needed
-        during debug by the user and hence the user needs this option
+        during debug by the user and hence the user needs this option.
+        The user can supply a bitmask value, each bit represents
+        a different engine to disable/enable its clock gating feature.
+        The bitmask is composed of 20 bits:
+        0  -  7 : DMA channels
+        8  - 11 : MME engines
+        12 - 19 : TPC engines
+        The bit's location of a specific engine can be determined
+        using (1 << GAUDI_ENGINE_ID_*). GAUDI_ENGINE_ID_* values
+        are defined in uapi habanalabs.h file in enum gaudi_engine_id
 
 What:           /sys/kernel/debug/habanalabs/hl<n>/command_buffers
 Date:           Jan 2019
+13 -4
Documentation/devicetree/bindings/sound/simple-card.yaml
···
  - |
    sound {
        compatible = "simple-audio-card";
+       #address-cells = <1>;
+       #size-cells = <0>;
 
        simple-audio-card,name = "rsnd-ak4643";
        simple-audio-card,format = "left_j";
···
            "ak4642 Playback", "DAI1 Playback";
 
        dpcmcpu: simple-audio-card,cpu@0 {
+           reg = <0>;
            sound-dai = <&rcar_sound 0>;
        };
 
        simple-audio-card,cpu@1 {
+           reg = <1>;
            sound-dai = <&rcar_sound 1>;
        };
 
···
  - |
    sound {
        compatible = "simple-audio-card";
+       #address-cells = <1>;
+       #size-cells = <0>;
 
        simple-audio-card,routing =
            "pcm3168a Playback", "DAI1 Playback",
···
            "pcm3168a Playback", "DAI4 Playback";
 
        simple-audio-card,dai-link@0 {
+           reg = <0>;
            format = "left_j";
            bitclock-master = <&sndcpu0>;
            frame-master = <&sndcpu0>;
···
        };
 
        simple-audio-card,dai-link@1 {
+           reg = <1>;
            format = "i2s";
            bitclock-master = <&sndcpu1>;
            frame-master = <&sndcpu1>;
 
            convert-channels = <8>; /* TDM Split */
 
-           sndcpu1: cpu@0 {
+           sndcpu1: cpu0 {
                sound-dai = <&rcar_sound 1>;
            };
-           cpu@1 {
+           cpu1 {
                sound-dai = <&rcar_sound 2>;
            };
-           cpu@2 {
+           cpu2 {
                sound-dai = <&rcar_sound 3>;
            };
-           cpu@3 {
+           cpu3 {
                sound-dai = <&rcar_sound 4>;
            };
            codec {
···
        };
 
        simple-audio-card,dai-link@2 {
+           reg = <2>;
            format = "i2s";
            bitclock-master = <&sndcpu2>;
            frame-master = <&sndcpu2>;
+12
Documentation/driver-api/ptp.rst
···
 + Ancillary clock features
   - Time stamp external events
   - Period output signals configurable from user space
+  - Low Pass Filter (LPF) access from user space
   - Synchronization of the Linux system time via the PPS subsystem
 
 PTP hardware clock kernel API
···
 
   - Auxiliary Slave/Master Mode Snapshot (optional interrupt)
   - Target Time (optional interrupt)
+
+* Renesas (IDT) ClockMatrix™
+
+  - Up to 4 independent PHC channels
+  - Integrated low pass filter (LPF), access via .adjPhase (compliant to ITU-T G.8273.2)
+  - Programmable output periodic signals
+  - Programmable inputs can time stamp external triggers
+  - Driver and/or hardware configuration through firmware (idtcm.bin)
+  - LPF settings (bandwidth, phase limiting, automatic holdover, physical layer assist (per ITU-T G.8273.2))
+  - Programmable output PTP clocks, any frequency up to 1GHz (to other PHY/MAC time stampers, refclk to ASSPs/SoCs/FPGAs)
+  - Lock to GNSS input, automatic switching between GNSS and user-space PHC control (optional)
+13 -6
Documentation/networking/bareudp.rst
···
 
 1) Device creation & deletion
 
-    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype 0x8847.
+    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc
 
    This creates a bareudp tunnel device which tunnels L3 traffic with ethertype
    0x8847 (MPLS traffic). The destination port of the UDP header will be set to
···
 
    b) ip link delete bareudp0
 
-2) Device creation with multiple proto mode enabled
+2) Device creation with multiproto mode enabled
 
-There are two ways to create a bareudp device for MPLS & IP with multiproto mode
-enabled.
+   The multiproto mode allows bareudp tunnels to handle several protocols of the
+   same family. It is currently only available for IP and MPLS. This mode has to
+   be enabled explicitly with the "multiproto" flag.
 
-    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype 0x8847 multiproto
+    a) ip link add dev bareudp0 type bareudp dstport 6635 ethertype ipv4 multiproto
+
+       For an IPv4 tunnel the multiproto mode allows the tunnel to also handle
+       IPv6.
 
-    b) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls
+    b) ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc multiproto
+
+       For MPLS, the multiproto mode allows the tunnel to handle both unicast
+       and multicast MPLS packets.
 
 3) Device Usage
 
+17 -4
MAINTAINERS
···
 M:	Nicolin Chen <nicoleotsuka@gmail.com>
 M:	Xiubo Li <Xiubo.Lee@gmail.com>
 R:	Fabio Estevam <festevam@gmail.com>
+R:	Shengjiu Wang <shengjiu.wang@gmail.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Maintained
···
 F:	scripts/Kconfig.include
 F:	scripts/kconfig/
 
+KCOV
+R:	Dmitry Vyukov <dvyukov@google.com>
+R:	Andrey Konovalov <andreyknvl@google.com>
+L:	kasan-dev@googlegroups.com
+S:	Maintained
+F:	Documentation/dev-tools/kcov.rst
+F:	include/linux/kcov.h
+F:	include/uapi/linux/kcov.h
+F:	kernel/kcov.c
+F:	scripts/Makefile.kcov
+
 KCSAN
 M:	Marco Elver <elver@google.com>
 R:	Dmitry Vyukov <dvyukov@google.com>
···
 F:	drivers/crypto/atmel-ecc.*
 
 MICROCHIP I2C DRIVER
-M:	Ludovic Desroches <ludovic.desroches@microchip.com>
+M:	Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
 L:	linux-i2c@vger.kernel.org
 S:	Supported
 F:	drivers/i2c/busses/i2c-at91-*.c
···
 F:	include/dt-bindings/iio/adc/at91-sama5d2_adc.h
 
 MICROCHIP SAMA5D2-COMPATIBLE SHUTDOWN CONTROLLER
-M:	Nicolas Ferre <nicolas.ferre@microchip.com>
+M:	Claudiu Beznea <claudiu.beznea@microchip.com>
 S:	Supported
 F:	drivers/power/reset/at91-sama5d2_shdwc.c
 
 MICROCHIP SPI DRIVER
-M:	Nicolas Ferre <nicolas.ferre@microchip.com>
+M:	Tudor Ambarus <tudor.ambarus@microchip.com>
 S:	Supported
 F:	drivers/spi/spi-atmel.*
 
 MICROCHIP SSC DRIVER
-M:	Nicolas Ferre <nicolas.ferre@microchip.com>
+M:	Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	drivers/misc/atmel-ssc.c
···
 F:	include/linux/dasd_mod.h
 
 S390 IOMMU (PCI)
+M:	Matthew Rosato <mjrosato@linux.ibm.com>
 M:	Gerald Schaefer <gerald.schaefer@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
+3 -3
Makefile
···
 VERSION = 5
 PATCHLEVEL = 8
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Kleptomaniac Octopus
 
 # *DOCUMENTATION*
···
 ifneq ($(CROSS_COMPILE),)
 CLANG_FLAGS	+= --target=$(notdir $(CROSS_COMPILE:%-=%))
 GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
-CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)
+CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE))
 GCC_TOOLCHAIN	:= $(realpath $(GCC_TOOLCHAIN_DIR)/..)
 endif
 ifneq ($(GCC_TOOLCHAIN),)
···
 descend: $(build-dirs)
 $(build-dirs): prepare
 	$(Q)$(MAKE) $(build)=$@ \
-	single-build=$(if $(filter-out $@/, $(filter $@/%, $(single-no-ko))),1) \
+	single-build=$(if $(filter-out $@/, $(filter $@/%, $(KBUILD_SINGLE_TARGETS))),1) \
 	need-builtin=1 need-modorder=1
 
 clean-dirs := $(addprefix _clean_, $(clean-dirs))
+1 -1
arch/arm64/Makefile
···
 
 core-y		+= arch/arm64/
 libs-y		:= arch/arm64/lib/ $(libs-y)
-core-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
+libs-$(CONFIG_EFI_STUB) += $(objtree)/drivers/firmware/efi/libstub/lib.a
 
 # Default target when executing plain make
 boot		:= arch/arm64/boot
+1 -4
arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
···
 	status = "okay";
 	phy-mode = "2500base-x";
 	phys = <&cp1_comphy5 2>;
-	fixed-link {
-		speed = <2500>;
-		full-duplex;
-	};
+	managed = "in-band-status";
 };
 
 &cp1_spi1 {
+1 -1
arch/arm64/kernel/vdso32/Makefile
···
 COMPAT_GCC_TOOLCHAIN := $(realpath $(COMPAT_GCC_TOOLCHAIN_DIR)/..)
 
 CC_COMPAT_CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE_COMPAT:%-=%))
-CC_COMPAT_CLANG_FLAGS += --prefix=$(COMPAT_GCC_TOOLCHAIN_DIR)
+CC_COMPAT_CLANG_FLAGS += --prefix=$(COMPAT_GCC_TOOLCHAIN_DIR)$(notdir $(CROSS_COMPILE_COMPAT))
 CC_COMPAT_CLANG_FLAGS += -no-integrated-as -Qunused-arguments
 ifneq ($(COMPAT_GCC_TOOLCHAIN),)
 CC_COMPAT_CLANG_FLAGS += --gcc-toolchain=$(COMPAT_GCC_TOOLCHAIN)
+2
arch/parisc/include/asm/atomic.h
···
 	_atomic_spin_unlock_irqrestore(v, flags);
 }
 
+#define atomic64_set_release(v, i)	atomic64_set((v), (i))
+
 static __inline__ s64
 atomic64_read(const atomic64_t *v)
 {
+2
arch/parisc/include/asm/cmpxchg.h
···
 extern unsigned long __cmpxchg_u32(volatile unsigned int *m, unsigned int old,
 				   unsigned int new_);
 extern u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new_);
+extern u8 __cmpxchg_u8(volatile u8 *ptr, u8 old, u8 new_);
 
 /* don't worry...optimizer will get rid of most of this */
 static inline unsigned long
···
 #endif
 	case 4: return __cmpxchg_u32((unsigned int *)ptr,
 				     (unsigned int)old, (unsigned int)new_);
+	case 1: return __cmpxchg_u8((u8 *)ptr, (u8)old, (u8)new_);
 	}
 	__cmpxchg_called_with_bad_pointer();
 	return old;
+12
arch/parisc/lib/bitops.c
···
 	_atomic_spin_unlock_irqrestore(ptr, flags);
 	return (unsigned long)prev;
 }
+
+u8 __cmpxchg_u8(volatile u8 *ptr, u8 old, u8 new)
+{
+	unsigned long flags;
+	u8 prev;
+
+	_atomic_spin_lock_irqsave(ptr, flags);
+	if ((prev = *ptr) == old)
+		*ptr = new;
+	_atomic_spin_unlock_irqrestore(ptr, flags);
+	return prev;
+}
+47 -23
arch/riscv/mm/init.c
···
 #ifdef CONFIG_BLK_DEV_INITRD
 static void __init setup_initrd(void)
 {
+	phys_addr_t start;
 	unsigned long size;
 
-	if (initrd_start >= initrd_end) {
-		pr_info("initrd not found or empty");
-		goto disable;
-	}
-	if (__pa_symbol(initrd_end) > PFN_PHYS(max_low_pfn)) {
-		pr_err("initrd extends beyond end of memory");
+	/* Ignore the virtul address computed during device tree parsing */
+	initrd_start = initrd_end = 0;
+
+	if (!phys_initrd_size)
+		return;
+	/*
+	 * Round the memory region to page boundaries as per free_initrd_mem()
+	 * This allows us to detect whether the pages overlapping the initrd
+	 * are in use, but more importantly, reserves the entire set of pages
+	 * as we don't want these pages allocated for other purposes.
+	 */
+	start = round_down(phys_initrd_start, PAGE_SIZE);
+	size = phys_initrd_size + (phys_initrd_start - start);
+	size = round_up(size, PAGE_SIZE);
+
+	if (!memblock_is_region_memory(start, size)) {
+		pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region",
+		       (u64)start, size);
 		goto disable;
 	}
 
-	size = initrd_end - initrd_start;
-	memblock_reserve(__pa_symbol(initrd_start), size);
+	if (memblock_is_region_reserved(start, size)) {
+		pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region\n",
+		       (u64)start, size);
+		goto disable;
+	}
+
+	memblock_reserve(start, size);
+	/* Now convert initrd to virtual addresses */
+	initrd_start = (unsigned long)__va(phys_initrd_start);
+	initrd_end = initrd_start + phys_initrd_size;
 	initrd_below_start_ok = 1;
 
 	pr_info("Initial ramdisk at: 0x%p (%lu bytes)\n",
···
 {
 	struct memblock_region *reg;
 	phys_addr_t mem_size = 0;
+	phys_addr_t total_mem = 0;
+	phys_addr_t mem_start, end = 0;
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
 
 	/* Find the memory region containing the kernel */
 	for_each_memblock(memory, reg) {
-		phys_addr_t end = reg->base + reg->size;
-
-		if (reg->base <= vmlinux_start && vmlinux_end <= end) {
-			mem_size = min(reg->size, (phys_addr_t)-PAGE_OFFSET);
-
-			/*
-			 * Remove memblock from the end of usable area to the
-			 * end of region
-			 */
-			if (reg->base + mem_size < end)
-				memblock_remove(reg->base + mem_size,
-						end - reg->base - mem_size);
-		}
+		end = reg->base + reg->size;
+		if (!total_mem)
+			mem_start = reg->base;
+		if (reg->base <= vmlinux_start && vmlinux_end <= end)
+			BUG_ON(reg->size == 0);
+		total_mem = total_mem + reg->size;
 	}
-	BUG_ON(mem_size == 0);
+
+	/*
+	 * Remove memblock from the end of usable area to the
+	 * end of region
+	 */
+	mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET);
+	if (mem_start + mem_size < end)
+		memblock_remove(mem_start + mem_size,
+				end - mem_start - mem_size);
 
 	/* Reserve from the start of the kernel to the end of the kernel */
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
 
-	set_max_mapnr(PFN_DOWN(mem_size));
 	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
 	max_low_pfn = max_pfn;
+	set_max_mapnr(max_low_pfn);
 
 #ifdef CONFIG_BLK_DEV_INITRD
 	setup_initrd();
+2 -2
arch/riscv/mm/kasan_init.c
···
 			   (__pa(((uintptr_t) kasan_early_shadow_pmd))),
 			   __pgprot(_PAGE_TABLE)));
 
-	flush_tlb_all();
+	local_flush_tlb_all();
 }
 
 static void __init populate(void *start, void *end)
···
 			pfn_pgd(PFN_DOWN(__pa(&pmd[offset])),
 				__pgprot(_PAGE_TABLE)));
 
-	flush_tlb_all();
+	local_flush_tlb_all();
 	memset(start, 0, end - start);
 }
 
+2 -2
arch/s390/kernel/perf_cpum_cf_events.c
···
 CPUMF_EVENT_ATTR(cf_z15, DFLT_ACCESS, 0x00f7);
 CPUMF_EVENT_ATTR(cf_z15, DFLT_CYCLES, 0x00fc);
 CPUMF_EVENT_ATTR(cf_z15, DFLT_CC, 0x00108);
-CPUMF_EVENT_ATTR(cf_z15, DFLT_CCERROR, 0x00109);
+CPUMF_EVENT_ATTR(cf_z15, DFLT_CCFINISH, 0x00109);
 CPUMF_EVENT_ATTR(cf_z15, MT_DIAG_CYCLES_ONE_THR_ACTIVE, 0x01c0);
 CPUMF_EVENT_ATTR(cf_z15, MT_DIAG_CYCLES_TWO_THR_ACTIVE, 0x01c1);
 
···
 	CPUMF_EVENT_PTR(cf_z15, DFLT_ACCESS),
 	CPUMF_EVENT_PTR(cf_z15, DFLT_CYCLES),
 	CPUMF_EVENT_PTR(cf_z15, DFLT_CC),
-	CPUMF_EVENT_PTR(cf_z15, DFLT_CCERROR),
+	CPUMF_EVENT_PTR(cf_z15, DFLT_CCFINISH),
 	CPUMF_EVENT_PTR(cf_z15, MT_DIAG_CYCLES_ONE_THR_ACTIVE),
 	CPUMF_EVENT_PTR(cf_z15, MT_DIAG_CYCLES_TWO_THR_ACTIVE),
 	NULL,
+1
arch/x86/include/asm/iosf_mbi.h
···
 #define BT_MBI_UNIT_PMC		0x04
 #define BT_MBI_UNIT_GFX		0x06
 #define BT_MBI_UNIT_SMI		0x0C
+#define BT_MBI_UNIT_CCK		0x14
 #define BT_MBI_UNIT_USB		0x43
 #define BT_MBI_UNIT_SATA	0xA3
 #define BT_MBI_UNIT_PCIE	0xA6
+17 -10
arch/x86/kernel/dumpstack.c
···
 	printk("%s %s%pB\n", log_lvl, reliable ? "" : "? ", (void *)address);
 }
 
+static int copy_code(struct pt_regs *regs, u8 *buf, unsigned long src,
+		     unsigned int nbytes)
+{
+	if (!user_mode(regs))
+		return copy_from_kernel_nofault(buf, (u8 *)src, nbytes);
+
+	/*
+	 * Make sure userspace isn't trying to trick us into dumping kernel
+	 * memory by pointing the userspace instruction pointer at it.
+	 */
+	if (__chk_range_not_ok(src, nbytes, TASK_SIZE_MAX))
+		return -EINVAL;
+
+	return copy_from_user_nmi(buf, (void __user *)src, nbytes);
+}
+
 /*
  * There are a couple of reasons for the 2/3rd prologue, courtesy of Linus:
  *
···
 #define OPCODE_BUFSIZE (PROLOGUE_SIZE + 1 + EPILOGUE_SIZE)
 	u8 opcodes[OPCODE_BUFSIZE];
 	unsigned long prologue = regs->ip - PROLOGUE_SIZE;
-	bool bad_ip;
 
-	/*
-	 * Make sure userspace isn't trying to trick us into dumping kernel
-	 * memory by pointing the userspace instruction pointer at it.
-	 */
-	bad_ip = user_mode(regs) &&
-		__chk_range_not_ok(prologue, OPCODE_BUFSIZE, TASK_SIZE_MAX);
-
-	if (bad_ip || copy_from_kernel_nofault(opcodes, (u8 *)prologue,
-					OPCODE_BUFSIZE)) {
+	if (copy_code(regs, opcodes, prologue, sizeof(opcodes))) {
 		printk("%sCode: Bad RIP value.\n", loglvl);
 	} else {
 		printk("%sCode: %" __stringify(PROLOGUE_SIZE) "ph <%02x> %"
+1 -1
arch/x86/kernel/fpu/xstate.c
···
 	copy_part(offsetof(struct fxregs_state, st_space), 128,
 		  &xsave->i387.st_space, &kbuf, &offset_start, &count);
 	if (header.xfeatures & XFEATURE_MASK_SSE)
-		copy_part(xstate_offsets[XFEATURE_MASK_SSE], 256,
+		copy_part(xstate_offsets[XFEATURE_SSE], 256,
 			  &xsave->i387.xmm_space, &kbuf, &offset_start, &count);
 	/*
 	 * Fill xsave->i387.sw_reserved value for ptrace frame:
-5
arch/x86/kernel/stacktrace.c
···
 		 * or a page fault), which can make frame pointers
 		 * unreliable.
 		 */
-
 		if (IS_ENABLED(CONFIG_FRAME_POINTER))
 			return -EINVAL;
 	}
···
 
 	/* Check for stack corruption */
 	if (unwind_error(&state))
-		return -EINVAL;
-
-	/* Success path for non-user tasks, i.e. kthreads and idle tasks */
-	if (!(task->flags & (PF_KTHREAD | PF_IDLE)))
 		return -EINVAL;
 
 	return 0;
+6 -2
arch/x86/kernel/unwind_orc.c
···
 	/*
 	 * Find the orc_entry associated with the text address.
 	 *
-	 * Decrement call return addresses by one so they work for sibling
-	 * calls and calls to noreturn functions.
+	 * For a call frame (as opposed to a signal frame), state->ip points to
+	 * the instruction after the call.  That instruction's stack layout
+	 * could be different from the call instruction's layout, for example
+	 * if the call was to a noreturn function.  So get the ORC data for the
+	 * call instruction itself.
 	 */
 	orc = orc_find(state->signal ? state->ip : state->ip - 1);
 	if (!orc) {
···
 		state->sp = task->thread.sp;
 		state->bp = READ_ONCE_NOCHECK(frame->bp);
 		state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+		state->signal = (void *)state->ip == ret_from_fork;
 	}
 
 	if (get_stack_info((unsigned long *)state->sp, state->task,
+1
arch/x86/kernel/vmlinux.lds.S
···
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
 		__bss_start = .;
 		*(.bss..page_aligned)
+		. = ALIGN(PAGE_SIZE);
 		*(BSS_MAIN)
 		BSS_DECRYPTED
 		. = ALIGN(PAGE_SIZE);
+1 -1
arch/xtensa/include/asm/checksum.h
···
 __wsum csum_and_copy_from_user(const void __user *src, void *dst,
 				   int len, __wsum sum, int *err_ptr)
 {
-	if (access_ok(dst, len))
+	if (access_ok(src, len))
 		return csum_partial_copy_generic((__force const void *)src, dst,
 						 len, sum, err_ptr, NULL);
 	if (len)
+1 -1
drivers/android/binder_alloc.c
···
 		trace_binder_unmap_user_end(alloc, index);
 	}
 	mmap_read_unlock(mm);
-	mmput(mm);
+	mmput_async(mm);
 
 	trace_binder_unmap_kernel_start(alloc, index);
 
+1 -1
drivers/base/property.c
···
 		return next;
 
 	/* When no more children in primary, continue with secondary */
-	if (!IS_ERR_OR_NULL(fwnode->secondary))
+	if (fwnode && !IS_ERR_OR_NULL(fwnode->secondary))
 		next = fwnode_get_next_child_node(fwnode->secondary, child);
 
 	return next;
+22
drivers/bus/ti-sysc.c
···
 	return error;
 }
 
+/*
+ * Ignore timers tagged with no-reset and no-idle. These are likely in use,
+ * for example by drivers/clocksource/timer-ti-dm-systimer.c. If more checks
+ * are needed, we could also look at the timer register configuration.
+ */
+static int sysc_check_active_timer(struct sysc *ddata)
+{
+	if (ddata->cap->type != TI_SYSC_OMAP2_TIMER &&
+	    ddata->cap->type != TI_SYSC_OMAP4_TIMER)
+		return 0;
+
+	if ((ddata->cfg.quirks & SYSC_QUIRK_NO_RESET_ON_INIT) &&
+	    (ddata->cfg.quirks & SYSC_QUIRK_NO_IDLE))
+		return -EBUSY;
+
+	return 0;
+}
+
 static const struct of_device_id sysc_match_table[] = {
 	{ .compatible = "simple-bus", },
 	{ /* sentinel */ },
···
 	sysc_init_early_quirks(ddata);
 
 	error = sysc_check_disabled_devices(ddata);
+	if (error)
+		return error;
+
+	error = sysc_check_active_timer(ddata);
 	if (error)
 		return error;
 
+7 -3
drivers/char/mem.c
···
 #ifdef CONFIG_IO_STRICT_DEVMEM
 void revoke_devmem(struct resource *res)
 {
-	struct inode *inode = READ_ONCE(devmem_inode);
+	/* pairs with smp_store_release() in devmem_init_inode() */
+	struct inode *inode = smp_load_acquire(&devmem_inode);
 
 	/*
 	 * Check that the initialization has completed. Losing the race
···
 		return rc;
 	}
 
-	/* publish /dev/mem initialized */
-	WRITE_ONCE(devmem_inode, inode);
+	/*
+	 * Publish /dev/mem initialized.
+	 * Pairs with smp_load_acquire() in revoke_devmem().
+	 */
+	smp_store_release(&devmem_inode, inode);
 
 	return 0;
 }
+36 -10
drivers/clocksource/timer-ti-dm-systimer.c
···
 /* For type1, set SYSC_OMAP2_CLOCKACTIVITY for fck off on idle, l4 clock on */
 #define DMTIMER_TYPE1_ENABLE	((1 << 9) | (SYSC_IDLE_SMART << 3) | \
 				 SYSC_OMAP2_ENAWAKEUP | SYSC_OMAP2_AUTOIDLE)
-
+#define DMTIMER_TYPE1_DISABLE	(SYSC_OMAP2_SOFTRESET | SYSC_OMAP2_AUTOIDLE)
 #define DMTIMER_TYPE2_ENABLE	(SYSC_IDLE_SMART_WKUP << 2)
 #define DMTIMER_RESET_WAIT	100000
 
···
 	u8 ctrl;
 	u8 wakeup;
 	u8 ifctrl;
+	struct clk *fck;
+	struct clk *ick;
 	unsigned long rate;
 };
 
···
 }
 
 /* Interface clocks are only available on some SoCs variants */
-static int __init dmtimer_systimer_init_clock(struct device_node *np,
+static int __init dmtimer_systimer_init_clock(struct dmtimer_systimer *t,
+					      struct device_node *np,
 					      const char *name,
 					      unsigned long *rate)
 {
 	struct clk *clock;
 	unsigned long r;
+	bool is_ick = false;
 	int error;
 
+	is_ick = !strncmp(name, "ick", 3);
+
 	clock = of_clk_get_by_name(np, name);
-	if ((PTR_ERR(clock) == -EINVAL) && !strncmp(name, "ick", 3))
+	if ((PTR_ERR(clock) == -EINVAL) && is_ick)
 		return 0;
 	else if (IS_ERR(clock))
 		return PTR_ERR(clock);
···
 	r = clk_get_rate(clock);
 	if (!r)
 		return -ENODEV;
+
+	if (is_ick)
+		t->ick = clock;
+	else
+		t->fck = clock;
 
 	*rate = r;
 
···
 
 static void dmtimer_systimer_disable(struct dmtimer_systimer *t)
 {
-	writel_relaxed(0, t->base + t->sysc);
+	if (!dmtimer_systimer_revision1(t))
+		return;
+
+	writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc);
 }
 
 static int __init dmtimer_systimer_setup(struct device_node *np,
···
 		pr_err("%s: clock source init failed: %i\n", __func__, error);
 
 	/* For ti-sysc, we have timer clocks at the parent module level */
-	error = dmtimer_systimer_init_clock(np->parent, "fck", &rate);
+	error = dmtimer_systimer_init_clock(t, np->parent, "fck", &rate);
 	if (error)
 		goto err_unmap;
 
 	t->rate = rate;
 
-	error = dmtimer_systimer_init_clock(np->parent, "ick", &rate);
+	error = dmtimer_systimer_init_clock(t, np->parent, "ick", &rate);
 	if (error)
 		goto err_unmap;
 
···
 	struct dmtimer_systimer *t = &clkevt->t;
 
 	dmtimer_systimer_disable(t);
+	clk_disable(t->fck);
 }
 
 static void omap_clockevent_unidle(struct clock_event_device *evt)
 {
 	struct dmtimer_clockevent *clkevt = to_dmtimer_clockevent(evt);
 	struct dmtimer_systimer *t = &clkevt->t;
+	int error;
+
+	error = clk_enable(t->fck);
+	if (error)
+		pr_err("could not enable timer fck on resume: %i\n", error);
 
 	dmtimer_systimer_enable(t);
 	writel_relaxed(OMAP_TIMER_INT_OVERFLOW, t->base + t->irq_ena);
···
 			3, /* Timer internal resynch latency */
 			0xffffffff);
 
-	if (of_device_is_compatible(np, "ti,am33xx") ||
-	    of_device_is_compatible(np, "ti,am43")) {
+	if (of_machine_is_compatible("ti,am33xx") ||
+	    of_machine_is_compatible("ti,am43")) {
 		dev->suspend = omap_clockevent_idle;
 		dev->resume = omap_clockevent_unidle;
 	}
···
 
 	clksrc->loadval = readl_relaxed(t->base + t->counter);
 	dmtimer_systimer_disable(t);
+	clk_disable(t->fck);
 }
 
 static void dmtimer_clocksource_resume(struct clocksource *cs)
 {
 	struct dmtimer_clocksource *clksrc = to_dmtimer_clocksource(cs);
 	struct dmtimer_systimer *t = &clksrc->t;
+	int error;
+
+	error = clk_enable(t->fck);
+	if (error)
+		pr_err("could not enable timer fck on resume: %i\n", error);
 
 	dmtimer_systimer_enable(t);
 	writel_relaxed(clksrc->loadval, t->base + t->counter);
···
 	dev->mask = CLOCKSOURCE_MASK(32);
 	dev->flags = CLOCK_SOURCE_IS_CONTINUOUS;
 
-	if (of_device_is_compatible(np, "ti,am33xx") ||
-	    of_device_is_compatible(np, "ti,am43")) {
+	/* Unlike for clockevent, legacy code sets suspend only for am4 */
+	if (of_machine_is_compatible("ti,am43")) {
 		dev->suspend = dmtimer_clocksource_suspend;
 		dev->resume = dmtimer_clocksource_resume;
 	}
+1 -1
drivers/crypto/chelsio/chtls/chtls_cm.c
···
 	case PF_INET:
 		if (likely(!inet_sk(sk)->inet_rcv_saddr))
 			return ndev;
-		ndev = ip_dev_find(&init_net, inet_sk(sk)->inet_rcv_saddr);
+		ndev = __ip_dev_find(&init_net, inet_sk(sk)->inet_rcv_saddr, false);
 		break;
 #if IS_ENABLED(CONFIG_IPV6)
 	case PF_INET6:
+4 -3
drivers/crypto/chelsio/chtls/chtls_io.c
···
 						  &record_type);
 			if (err)
 				goto out_err;
+
+			/* Avoid appending tls handshake, alert to tls data */
+			if (skb)
+				tx_skb_finalize(skb);
 		}
 
 		recordsz = size;
 		csk->tlshws.txleft = recordsz;
 		csk->tlshws.type = record_type;
-
-		if (skb)
-			ULP_SKB_CB(skb)->ulp.tls.type = record_type;
 	}
 
 	if (!skb || (ULP_SKB_CB(skb)->flags & ULPCB_FLAG_NO_APPEND) ||
+1 -4
drivers/firmware/efi/efi-pstore.c
···
 
 static __init int efivars_pstore_init(void)
 {
-	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
-		return 0;
-
-	if (!efivars_kobject())
+	if (!efivars_kobject() || !efivar_supports_writes())
 		return 0;
 
 	if (efivars_pstore_disable)
+8 -4
drivers/firmware/efi/efi.c
···
 static int generic_ops_register(void)
 {
 	generic_ops.get_variable = efi.get_variable;
-	generic_ops.set_variable = efi.set_variable;
-	generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
 	generic_ops.get_next_variable = efi.get_next_variable;
 	generic_ops.query_variable_store = efi_query_variable_store;
 
+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_SET_VARIABLE)) {
+		generic_ops.set_variable = efi.set_variable;
+		generic_ops.set_variable_nonblocking = efi.set_variable_nonblocking;
+	}
 	return efivars_register(&generic_efivars, &generic_ops, efi_kobj);
 }
 
···
 		return -ENOMEM;
 	}
 
-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES)) {
+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
+				      EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME)) {
 		efivar_ssdt_load();
 		error = generic_ops_register();
 		if (error)
···
 err_remove_group:
 	sysfs_remove_group(efi_kobj, &efi_subsys_attr_group);
 err_unregister:
-	if (efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
+	if (efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE |
+				      EFI_RT_SUPPORTED_GET_NEXT_VARIABLE_NAME))
 		generic_ops_unregister();
 err_put:
 	kobject_put(efi_kobj);
+1 -4
drivers/firmware/efi/efivars.c
···
 	struct kobject *parent_kobj = efivars_kobject();
 	int error = 0;
 
-	if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES))
-		return -ENODEV;
-
 	/* No efivars has been registered yet */
-	if (!parent_kobj)
+	if (!parent_kobj || !efivar_supports_writes())
 		return 0;
 
 	printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
+1 -2
drivers/firmware/efi/libstub/Makefile
···
 # enabled, even if doing so doesn't break the build.
 #
 cflags-$(CONFIG_X86_32)		:= -march=i386
-cflags-$(CONFIG_X86_64)		:= -mcmodel=small \
-				   $(call cc-option,-maccumulate-outgoing-args)
+cflags-$(CONFIG_X86_64)		:= -mcmodel=small
 cflags-$(CONFIG_X86)		+= -m$(BITS) -D__KERNEL__ \
 				   -fPIC -fno-strict-aliasing -mno-red-zone \
 				   -mno-mmx -mno-sse -fshort-wchar \
+1 -1
drivers/firmware/efi/libstub/alignedmem.c
···
 	*addr = ALIGN((unsigned long)alloc_addr, align);
 
 	if (slack > 0) {
-		int l = (alloc_addr % align) / EFI_PAGE_SIZE;
+		int l = (alloc_addr & (align - 1)) / EFI_PAGE_SIZE;
 
 		if (l) {
 			efi_bs_call(free_pages, alloc_addr, slack - l + 1);
-17
drivers/firmware/efi/libstub/efi-stub.c
···
 }
 
 /*
- * This function handles the architcture specific differences between arm and
- * arm64 regarding where the kernel image must be loaded and any memory that
- * must be reserved. On failure it is required to free all
- * all allocations it has made.
- */
-efi_status_t handle_kernel_image(unsigned long *image_addr,
-				 unsigned long *image_size,
-				 unsigned long *reserve_addr,
-				 unsigned long *reserve_size,
-				 unsigned long dram_base,
-				 efi_loaded_image_t *image);
-
-asmlinkage void __noreturn efi_enter_kernel(unsigned long entrypoint,
-					    unsigned long fdt_addr,
-					    unsigned long fdt_size);
-
-/*
  * EFI entry point for the arm/arm64 EFI stubs. This is the entrypoint
  * that is described in the PE/COFF header. Most of the code is the same
  * for both archictectures, with the arch-specific code provided in the
+16
drivers/firmware/efi/libstub/efistub.h
··· 776 776 unsigned long *load_size, 777 777 unsigned long soft_limit, 778 778 unsigned long hard_limit); 779 + /* 780 + * This function handles the architcture specific differences between arm and 781 + * arm64 regarding where the kernel image must be loaded and any memory that 782 + * must be reserved. On failure it is required to free all 783 + * all allocations it has made. 784 + */ 785 + efi_status_t handle_kernel_image(unsigned long *image_addr, 786 + unsigned long *image_size, 787 + unsigned long *reserve_addr, 788 + unsigned long *reserve_size, 789 + unsigned long dram_base, 790 + efi_loaded_image_t *image); 791 + 792 + asmlinkage void __noreturn efi_enter_kernel(unsigned long entrypoint, 793 + unsigned long fdt_addr, 794 + unsigned long fdt_size); 779 795 780 796 void efi_handle_post_ebs_state(void); 781 797
+4 -4
drivers/firmware/efi/libstub/x86-stub.c
··· 8 8 9 9 #include <linux/efi.h> 10 10 #include <linux/pci.h> 11 + #include <linux/stddef.h> 11 12 12 13 #include <asm/efi.h> 13 14 #include <asm/e820/types.h> ··· 362 361 int options_size = 0; 363 362 efi_status_t status; 364 363 char *cmdline_ptr; 365 - unsigned long ramdisk_addr; 366 - unsigned long ramdisk_size; 367 364 368 365 efi_system_table = sys_table_arg; 369 366 ··· 389 390 390 391 hdr = &boot_params->hdr; 391 392 392 - /* Copy the second sector to boot_params */ 393 - memcpy(&hdr->jump, image_base + 512, 512); 393 + /* Copy the setup header from the second sector to boot_params */ 394 + memcpy(&hdr->jump, image_base + 512, 395 + sizeof(struct setup_header) - offsetof(struct setup_header, jump)); 394 396 395 397 /* 396 398 * Fill out some of the header fields ourselves because the
+6
drivers/firmware/efi/vars.c
··· 1229 1229 return rv; 1230 1230 } 1231 1231 EXPORT_SYMBOL_GPL(efivars_unregister); 1232 + 1233 + int efivar_supports_writes(void) 1234 + { 1235 + return __efivars && __efivars->ops->set_variable; 1236 + } 1237 + EXPORT_SYMBOL_GPL(efivar_supports_writes);
+2 -1
drivers/fpga/dfl-afu-main.c
··· 83 83 * on this port and minimum soft reset pulse width has elapsed. 84 84 * Driver polls port_soft_reset_ack to determine if reset done by HW. 85 85 */ 86 - if (readq_poll_timeout(base + PORT_HDR_CTRL, v, v & PORT_CTRL_SFTRST, 86 + if (readq_poll_timeout(base + PORT_HDR_CTRL, v, 87 + v & PORT_CTRL_SFTRST_ACK, 87 88 RST_POLL_INVL, RST_POLL_TIMEOUT)) { 88 89 dev_err(&pdev->dev, "timeout, fail to reset device\n"); 89 90 return -ETIMEDOUT;
+2 -1
drivers/fpga/dfl-pci.c
··· 227 227 { 228 228 struct cci_drvdata *drvdata = pci_get_drvdata(pcidev); 229 229 struct dfl_fpga_cdev *cdev = drvdata->cdev; 230 - int ret = 0; 231 230 232 231 if (!num_vfs) { 233 232 /* ··· 238 239 dfl_fpga_cdev_config_ports_pf(cdev); 239 240 240 241 } else { 242 + int ret; 243 + 241 244 /* 242 245 * before enable SRIOV, put released ports into VF access mode 243 246 * first of all.
+3 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
··· 778 778 tmp_str++; 779 779 while (isspace(*++tmp_str)); 780 780 781 - while (tmp_str[0]) { 782 - sub_str = strsep(&tmp_str, delimiter); 781 + while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) { 783 782 ret = kstrtol(sub_str, 0, &parameter[parameter_size]); 784 783 if (ret) 785 784 return -EINVAL; ··· 1038 1039 memcpy(buf_cpy, buf, bytes); 1039 1040 buf_cpy[bytes] = '\0'; 1040 1041 tmp = buf_cpy; 1041 - while (tmp[0]) { 1042 - sub_str = strsep(&tmp, delimiter); 1042 + while ((sub_str = strsep(&tmp, delimiter)) != NULL) { 1043 1043 if (strlen(sub_str)) { 1044 1044 ret = kstrtol(sub_str, 0, &level); 1045 1045 if (ret) ··· 1635 1637 i++; 1636 1638 memcpy(buf_cpy, buf, count-i); 1637 1639 tmp_str = buf_cpy; 1638 - while (tmp_str[0]) { 1639 - sub_str = strsep(&tmp_str, delimiter); 1640 + while ((sub_str = strsep(&tmp_str, delimiter)) != NULL) { 1640 1641 ret = kstrtol(sub_str, 0, &parameter[parameter_size]); 1641 1642 if (ret) 1642 1643 return -EINVAL;
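The rewritten loops above let `strsep()` drive the iteration instead of testing `tmp_str[0]`, so a trailing delimiter (which yields an empty final token) terminates cleanly rather than producing an extra parse attempt. A standalone sketch of the pattern (the helper name and inputs are illustrative, not the driver's):

```c
#define _GNU_SOURCE /* strsep() and strdup() on glibc */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Count non-empty tokens the way the fixed amdgpu loops do: loop on
 * strsep() returning NULL, and skip empty tokens explicitly. */
static int count_tokens(const char *s, const char *delim)
{
	char *dup = strdup(s); /* strsep() consumes its buffer */
	char *cur = dup, *sub_str;
	int n = 0;

	while ((sub_str = strsep(&cur, delim)) != NULL)
		if (strlen(sub_str))
			n++;
	free(dup);
	return n;
}
```

With the old `while (tmp_str[0])` form, an input ending in the delimiter would enter the loop body once more with an empty token; the `strsep()`-driven form handles both shapes identically.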
+6 -4
drivers/gpu/drm/amd/powerplay/smumgr/vegam_smumgr.c
··· 644 644 645 645 /* sclk is bigger than max sclk in the dependence table */ 646 646 *voltage |= (dep_table->entries[i - 1].vddc * VOLTAGE_SCALE) << VDDC_SHIFT; 647 - vddci = phm_find_closest_vddci(&(data->vddci_voltage_table), 648 - (dep_table->entries[i - 1].vddc - 649 - (uint16_t)VDDC_VDDCI_DELTA)); 650 647 651 648 if (SMU7_VOLTAGE_CONTROL_NONE == data->vddci_control) 652 649 *voltage |= (data->vbios_boot_state.vddci_bootup_value * ··· 651 654 else if (dep_table->entries[i - 1].vddci) 652 655 *voltage |= (dep_table->entries[i - 1].vddci * 653 656 VOLTAGE_SCALE) << VDDC_SHIFT; 654 - else 657 + else { 658 + vddci = phm_find_closest_vddci(&(data->vddci_voltage_table), 659 + (dep_table->entries[i - 1].vddc - 660 + (uint16_t)VDDC_VDDCI_DELTA)); 661 + 655 662 *voltage |= (vddci * VOLTAGE_SCALE) << VDDCI_SHIFT; 663 + } 656 664 657 665 if (SMU7_VOLTAGE_CONTROL_NONE == data->mvdd_control) 658 666 *mvdd = data->vbios_boot_state.mvdd_bootup_value * VOLTAGE_SCALE;
+2
drivers/gpu/drm/lima/lima_pp.c
··· 271 271 272 272 int lima_pp_bcast_resume(struct lima_ip *ip) 273 273 { 274 + /* PP has been reset by individual PP resume */ 275 + ip->data.async_reset = false; 274 276 return 0; 275 277 } 276 278
+1 -1
drivers/gpu/drm/sun4i/sun4i_hdmi_enc.c
··· 260 260 unsigned long reg; 261 261 262 262 reg = readl(hdmi->base + SUN4I_HDMI_HPD_REG); 263 - if (reg & SUN4I_HDMI_HPD_HIGH) { 263 + if (!(reg & SUN4I_HDMI_HPD_HIGH)) { 264 264 cec_phys_addr_invalidate(hdmi->cec_adap); 265 265 return connector_status_disconnected; 266 266 }
+12 -16
drivers/i2c/busses/i2c-cadence.c
··· 421 421 /* Read data if receive data valid is set */ 422 422 while (cdns_i2c_readreg(CDNS_I2C_SR_OFFSET) & 423 423 CDNS_I2C_SR_RXDV) { 424 - /* 425 - * Clear hold bit that was set for FIFO control if 426 - * RX data left is less than FIFO depth, unless 427 - * repeated start is selected. 428 - */ 429 - if ((id->recv_count < CDNS_I2C_FIFO_DEPTH) && 430 - !id->bus_hold_flag) 431 - cdns_i2c_clear_bus_hold(id); 432 - 433 424 if (id->recv_count > 0) { 434 425 *(id->p_recv_buf)++ = 435 426 cdns_i2c_readreg(CDNS_I2C_DATA_OFFSET); 436 427 id->recv_count--; 437 428 id->curr_recv_count--; 429 + 430 + /* 431 + * Clear hold bit that was set for FIFO control 432 + * if RX data left is less than or equal to 433 + * FIFO DEPTH unless repeated start is selected 434 + */ 435 + if (id->recv_count <= CDNS_I2C_FIFO_DEPTH && 436 + !id->bus_hold_flag) 437 + cdns_i2c_clear_bus_hold(id); 438 + 438 439 } else { 439 440 dev_err(id->adap.dev.parent, 440 441 "xfer_size reg rollover. xfer aborted!\n"); ··· 595 594 * Check for the message size against FIFO depth and set the 596 595 * 'hold bus' bit if it is greater than FIFO depth. 597 596 */ 598 - if ((id->recv_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag) 597 + if (id->recv_count > CDNS_I2C_FIFO_DEPTH) 599 598 ctrl_reg |= CDNS_I2C_CR_HOLD; 600 - else 601 - ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD; 602 599 603 600 cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET); 604 601 ··· 653 654 * Check for the message size against FIFO depth and set the 654 655 * 'hold bus' bit if it is greater than FIFO depth. 655 656 */ 656 - if ((id->send_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag) 657 + if (id->send_count > CDNS_I2C_FIFO_DEPTH) 657 658 ctrl_reg |= CDNS_I2C_CR_HOLD; 658 - else 659 - ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD; 660 - 661 659 cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET); 662 660 663 661 /* Clear the interrupts in interrupt status register. */
+4 -2
drivers/i2c/busses/i2c-qcom-geni.c
··· 367 367 geni_se_select_mode(se, GENI_SE_FIFO); 368 368 369 369 writel_relaxed(len, se->base + SE_I2C_RX_TRANS_LEN); 370 - geni_se_setup_m_cmd(se, I2C_READ, m_param); 371 370 372 371 if (dma_buf && geni_se_rx_dma_prep(se, dma_buf, len, &rx_dma)) { 373 372 geni_se_select_mode(se, GENI_SE_FIFO); 374 373 i2c_put_dma_safe_msg_buf(dma_buf, msg, false); 375 374 dma_buf = NULL; 376 375 } 376 + 377 + geni_se_setup_m_cmd(se, I2C_READ, m_param); 377 378 378 379 time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT); 379 380 if (!time_left) ··· 409 408 geni_se_select_mode(se, GENI_SE_FIFO); 410 409 411 410 writel_relaxed(len, se->base + SE_I2C_TX_TRANS_LEN); 412 - geni_se_setup_m_cmd(se, I2C_WRITE, m_param); 413 411 414 412 if (dma_buf && geni_se_tx_dma_prep(se, dma_buf, len, &tx_dma)) { 415 413 geni_se_select_mode(se, GENI_SE_FIFO); 416 414 i2c_put_dma_safe_msg_buf(dma_buf, msg, false); 417 415 dma_buf = NULL; 418 416 } 417 + 418 + geni_se_setup_m_cmd(se, I2C_WRITE, m_param); 419 419 420 420 if (!dma_buf) /* Get FIFO IRQ */ 421 421 writel_relaxed(1, se->base + SE_GENI_TX_WATERMARK_REG);
+3
drivers/i2c/busses/i2c-rcar.c
··· 868 868 /* disable irqs and ensure none is running before clearing ptr */ 869 869 rcar_i2c_write(priv, ICSIER, 0); 870 870 rcar_i2c_write(priv, ICSCR, 0); 871 + rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */ 871 872 872 873 synchronize_irq(priv->irq); 873 874 priv->slave = NULL; ··· 969 968 ret = rcar_i2c_clock_calculate(priv); 970 969 if (ret < 0) 971 970 goto out_pm_put; 971 + 972 + rcar_i2c_write(priv, ICSAR, 0); /* Gen2: must be 0 if not using slave */ 972 973 973 974 if (priv->devtype == I2C_RCAR_GEN3) { 974 975 priv->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+2
drivers/infiniband/core/cm.c
··· 3676 3676 return ret; 3677 3677 } 3678 3678 cm_id_priv->id.state = IB_CM_IDLE; 3679 + spin_lock_irq(&cm.lock); 3679 3680 if (!RB_EMPTY_NODE(&cm_id_priv->sidr_id_node)) { 3680 3681 rb_erase(&cm_id_priv->sidr_id_node, &cm.remote_sidr_table); 3681 3682 RB_CLEAR_NODE(&cm_id_priv->sidr_id_node); 3682 3683 } 3684 + spin_unlock_irq(&cm.lock); 3683 3685 return 0; 3684 3686 } 3685 3687
+3 -3
drivers/infiniband/core/rdma_core.c
··· 649 649 { 650 650 struct ib_uverbs_file *ufile = attrs->ufile; 651 651 652 - /* alloc_commit consumes the uobj kref */ 653 - uobj->uapi_object->type_class->alloc_commit(uobj); 654 - 655 652 /* kref is held so long as the uobj is on the uobj list. */ 656 653 uverbs_uobject_get(uobj); 657 654 spin_lock_irq(&ufile->uobjects_lock); ··· 657 660 658 661 /* matches atomic_set(-1) in alloc_uobj */ 659 662 atomic_set(&uobj->usecnt, 0); 663 + 664 + /* alloc_commit consumes the uobj kref */ 665 + uobj->uapi_object->type_class->alloc_commit(uobj); 660 666 661 667 /* Matches the down_read in rdma_alloc_begin_uobject */ 662 668 up_read(&ufile->hw_destroy_rwsem);
+22 -12
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 3954 3954 return 0; 3955 3955 } 3956 3956 3957 + static inline enum ib_mtu get_mtu(struct ib_qp *ibqp, 3958 + const struct ib_qp_attr *attr) 3959 + { 3960 + if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD) 3961 + return IB_MTU_4096; 3962 + 3963 + return attr->path_mtu; 3964 + } 3965 + 3957 3966 static int modify_qp_init_to_rtr(struct ib_qp *ibqp, 3958 3967 const struct ib_qp_attr *attr, int attr_mask, 3959 3968 struct hns_roce_v2_qp_context *context, ··· 3974 3965 struct ib_device *ibdev = &hr_dev->ib_dev; 3975 3966 dma_addr_t trrl_ba; 3976 3967 dma_addr_t irrl_ba; 3968 + enum ib_mtu mtu; 3977 3969 u8 port_num; 3978 3970 u64 *mtts; 3979 3971 u8 *dmac; ··· 4072 4062 roce_set_field(qpc_mask->byte_52_udpspn_dmac, V2_QPC_BYTE_52_DMAC_M, 4073 4063 V2_QPC_BYTE_52_DMAC_S, 0); 4074 4064 4075 - /* mtu*(2^LP_PKTN_INI) should not bigger than 1 message length 64kb */ 4065 + mtu = get_mtu(ibqp, attr); 4066 + 4067 + if (attr_mask & IB_QP_PATH_MTU) { 4068 + roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 4069 + V2_QPC_BYTE_24_MTU_S, mtu); 4070 + roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 4071 + V2_QPC_BYTE_24_MTU_S, 0); 4072 + } 4073 + 4074 + #define MAX_LP_MSG_LEN 65536 4075 + /* MTU*(2^LP_PKTN_INI) shouldn't be bigger than 64kb */ 4076 4076 roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M, 4077 4077 V2_QPC_BYTE_56_LP_PKTN_INI_S, 4078 - ilog2(hr_dev->caps.max_sq_inline / IB_MTU_4096)); 4078 + ilog2(MAX_LP_MSG_LEN / ib_mtu_enum_to_int(mtu))); 4079 4079 roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_LP_PKTN_INI_M, 4080 4080 V2_QPC_BYTE_56_LP_PKTN_INI_S, 0); 4081 - 4082 - if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD) 4083 - roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 4084 - V2_QPC_BYTE_24_MTU_S, attr->path_mtu); 4088 - 4089 - roce_set_field(qpc_mask->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 4090 - V2_QPC_BYTE_24_MTU_S, 0); 4091 4081 4092 4082 roce_set_bit(qpc_mask->byte_108_rx_reqepsn, 4093 4083 V2_QPC_BYTE_108_RX_REQ_PSN_ERR_S, 0);
+1 -1
drivers/infiniband/hw/hns/hns_roce_mr.c
··· 120 120 121 121 mr->pbl_hop_num = is_fast ? 1 : hr_dev->caps.pbl_hop_num; 122 122 buf_attr.page_shift = is_fast ? PAGE_SHIFT : 123 - hr_dev->caps.pbl_buf_pg_sz + HNS_HW_PAGE_SHIFT; 123 + hr_dev->caps.pbl_buf_pg_sz + PAGE_SHIFT; 124 124 buf_attr.region[0].size = length; 125 125 buf_attr.region[0].hopnum = mr->pbl_hop_num; 126 126 buf_attr.region_count = 1;
+19 -3
drivers/infiniband/hw/mlx5/odp.c
··· 601 601 */ 602 602 synchronize_srcu(&dev->odp_srcu); 603 603 604 + /* 605 + * All work on the prefetch list must be completed, xa_erase() prevented 606 + * new work from being created. 607 + */ 608 + wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work)); 609 + 610 + /* 611 + * At this point it is forbidden for any other thread to enter 612 + * pagefault_mr() on this imr. It is already forbidden to call 613 + * pagefault_mr() on an implicit child. Due to this additions to 614 + * implicit_children are prevented. 615 + */ 616 + 617 + /* 618 + * Block destroy_unused_implicit_child_mr() from incrementing 619 + * num_deferred_work. 620 + */ 604 621 xa_lock(&imr->implicit_children); 605 622 xa_for_each (&imr->implicit_children, idx, mtt) { 606 623 __xa_erase(&imr->implicit_children, idx); ··· 626 609 xa_unlock(&imr->implicit_children); 627 610 628 611 /* 629 - * num_deferred_work can only be incremented inside the odp_srcu, or 630 - * under xa_lock while the child is in the xarray. Thus at this point 631 - * it is only decreasing, and all work holding it is now on the wq. 612 + * Wait for any concurrent destroy_unused_implicit_child_mr() to 613 + * complete. 632 614 */ 633 615 wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work)); 634 616
+2 -2
drivers/infiniband/hw/mlx5/srq_cmd.c
··· 83 83 struct mlx5_srq_table *table = &dev->srq_table; 84 84 struct mlx5_core_srq *srq; 85 85 86 - xa_lock(&table->array); 86 + xa_lock_irq(&table->array); 87 87 srq = xa_load(&table->array, srqn); 88 88 if (srq) 89 89 refcount_inc(&srq->common.refcount); 90 - xa_unlock(&table->array); 90 + xa_unlock_irq(&table->array); 91 91 92 92 return srq; 93 93 }
+9 -3
drivers/interconnect/core.c
··· 243 243 { 244 244 struct icc_provider *p = node->provider; 245 245 struct icc_req *r; 246 + u32 avg_bw, peak_bw; 246 247 247 248 node->avg_bw = 0; 248 249 node->peak_bw = 0; ··· 252 251 p->pre_aggregate(node); 253 252 254 253 hlist_for_each_entry(r, &node->req_list, req_node) { 255 - if (!r->enabled) 256 - continue; 257 - p->aggregate(node, r->tag, r->avg_bw, r->peak_bw, 254 + if (r->enabled) { 255 + avg_bw = r->avg_bw; 256 + peak_bw = r->peak_bw; 257 + } else { 258 + avg_bw = 0; 259 + peak_bw = 0; 260 + } 261 + p->aggregate(node, r->tag, avg_bw, peak_bw, 258 262 &node->avg_bw, &node->peak_bw); 259 263 } 260 264
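The hunk above stops skipping disabled requests and instead passes them to the provider's `aggregate` callback with zero bandwidth. That distinction matters whenever the callback does more than summing; a hypothetical aggregator (types and names invented for illustration, not the kernel's `icc_node` API) shows per-request bookkeeping that skipping would lose:

```c
#include <assert.h>

/* Hypothetical provider state, not the kernel's structures. */
struct node_totals {
	unsigned int avg;
	unsigned int peak;
	int contributors; /* side effect a skipped request would miss */
};

/* An aggregate hook in the spirit of the fix: a disabled request still
 * passes through, contributing zero bandwidth but still being counted. */
static void provider_aggregate(struct node_totals *t,
			       unsigned int avg_bw, unsigned int peak_bw)
{
	t->avg += avg_bw;
	if (peak_bw > t->peak)
		t->peak = peak_bw;
	t->contributors++;
}

/* Two requests, the second disabled (zero bandwidth): both reach the
 * callback, so the contributor count is 2 while totals are unchanged. */
static int run_demo(void)
{
	struct node_totals t = {0, 0, 0};

	provider_aggregate(&t, 100, 200);
	provider_aggregate(&t, 0, 0);
	return t.contributors;
}
```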
+7 -7
drivers/interconnect/qcom/msm8916.c
··· 197 197 DEFINE_QNODE(pcnoc_int_1, MSM8916_PNOC_INT_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS); 198 198 DEFINE_QNODE(pcnoc_m_0, MSM8916_PNOC_MAS_0, 8, -1, -1, MSM8916_PNOC_INT_0); 199 199 DEFINE_QNODE(pcnoc_m_1, MSM8916_PNOC_MAS_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS); 200 - DEFINE_QNODE(pcnoc_s_0, MSM8916_PNOC_SLV_0, 8, -1, -1, MSM8916_SLAVE_CLK_CTL, MSM8916_SLAVE_TLMM, MSM8916_SLAVE_TCSR, MSM8916_SLAVE_SECURITY, MSM8916_SLAVE_MSS); 201 - DEFINE_QNODE(pcnoc_s_1, MSM8916_PNOC_SLV_1, 8, -1, -1, MSM8916_SLAVE_IMEM_CFG, MSM8916_SLAVE_CRYPTO_0_CFG, MSM8916_SLAVE_MSG_RAM, MSM8916_SLAVE_PDM, MSM8916_SLAVE_PRNG); 202 - DEFINE_QNODE(pcnoc_s_2, MSM8916_PNOC_SLV_2, 8, -1, -1, MSM8916_SLAVE_SPDM, MSM8916_SLAVE_BOOT_ROM, MSM8916_SLAVE_BIMC_CFG, MSM8916_SLAVE_PNOC_CFG, MSM8916_SLAVE_PMIC_ARB); 203 - DEFINE_QNODE(pcnoc_s_3, MSM8916_PNOC_SLV_3, 8, -1, -1, MSM8916_SLAVE_MPM, MSM8916_SLAVE_SNOC_CFG, MSM8916_SLAVE_RBCPR_CFG, MSM8916_SLAVE_QDSS_CFG, MSM8916_SLAVE_DEHR_CFG); 204 - DEFINE_QNODE(pcnoc_s_4, MSM8916_PNOC_SLV_4, 8, -1, -1, MSM8916_SLAVE_VENUS_CFG, MSM8916_SLAVE_CAMERA_CFG, MSM8916_SLAVE_DISPLAY_CFG); 205 - DEFINE_QNODE(pcnoc_s_8, MSM8916_PNOC_SLV_8, 8, -1, -1, MSM8916_SLAVE_USB_HS, MSM8916_SLAVE_SDCC_1, MSM8916_SLAVE_BLSP_1); 206 - DEFINE_QNODE(pcnoc_s_9, MSM8916_PNOC_SLV_9, 8, -1, -1, MSM8916_SLAVE_SDCC_2, MSM8916_SLAVE_LPASS, MSM8916_SLAVE_GRAPHICS_3D_CFG); 200 + DEFINE_QNODE(pcnoc_s_0, MSM8916_PNOC_SLV_0, 4, -1, -1, MSM8916_SLAVE_CLK_CTL, MSM8916_SLAVE_TLMM, MSM8916_SLAVE_TCSR, MSM8916_SLAVE_SECURITY, MSM8916_SLAVE_MSS); 201 + DEFINE_QNODE(pcnoc_s_1, MSM8916_PNOC_SLV_1, 4, -1, -1, MSM8916_SLAVE_IMEM_CFG, MSM8916_SLAVE_CRYPTO_0_CFG, MSM8916_SLAVE_MSG_RAM, MSM8916_SLAVE_PDM, MSM8916_SLAVE_PRNG); 202 + DEFINE_QNODE(pcnoc_s_2, MSM8916_PNOC_SLV_2, 4, -1, -1, MSM8916_SLAVE_SPDM, MSM8916_SLAVE_BOOT_ROM, MSM8916_SLAVE_BIMC_CFG, MSM8916_SLAVE_PNOC_CFG, MSM8916_SLAVE_PMIC_ARB); 203 + DEFINE_QNODE(pcnoc_s_3, MSM8916_PNOC_SLV_3, 4, -1, -1, MSM8916_SLAVE_MPM, MSM8916_SLAVE_SNOC_CFG, MSM8916_SLAVE_RBCPR_CFG, MSM8916_SLAVE_QDSS_CFG, MSM8916_SLAVE_DEHR_CFG); 204 + DEFINE_QNODE(pcnoc_s_4, MSM8916_PNOC_SLV_4, 4, -1, -1, MSM8916_SLAVE_VENUS_CFG, MSM8916_SLAVE_CAMERA_CFG, MSM8916_SLAVE_DISPLAY_CFG); 205 + DEFINE_QNODE(pcnoc_s_8, MSM8916_PNOC_SLV_8, 4, -1, -1, MSM8916_SLAVE_USB_HS, MSM8916_SLAVE_SDCC_1, MSM8916_SLAVE_BLSP_1); 206 + DEFINE_QNODE(pcnoc_s_9, MSM8916_PNOC_SLV_9, 4, -1, -1, MSM8916_SLAVE_SDCC_2, MSM8916_SLAVE_LPASS, MSM8916_SLAVE_GRAPHICS_3D_CFG); 207 207 DEFINE_QNODE(pcnoc_snoc_mas, MSM8916_PNOC_SNOC_MAS, 8, 29, -1, MSM8916_PNOC_SNOC_SLV); 208 208 DEFINE_QNODE(pcnoc_snoc_slv, MSM8916_PNOC_SNOC_SLV, 8, -1, 45, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC, MSM8916_SNOC_INT_1); 209 209 DEFINE_QNODE(qdss_int, MSM8916_SNOC_QDSS_INT, 8, -1, -1, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC);
+17 -20
drivers/iommu/qcom_iommu.c
··· 65 65 struct mutex init_mutex; /* Protects iommu pointer */ 66 66 struct iommu_domain domain; 67 67 struct qcom_iommu_dev *iommu; 68 + struct iommu_fwspec *fwspec; 68 69 }; 69 70 70 71 static struct qcom_iommu_domain *to_qcom_iommu_domain(struct iommu_domain *dom) ··· 85 84 return dev_iommu_priv_get(dev); 86 85 } 87 86 88 - static struct qcom_iommu_ctx * to_ctx(struct device *dev, unsigned asid) 87 + static struct qcom_iommu_ctx * to_ctx(struct qcom_iommu_domain *d, unsigned asid) 89 88 { 90 - struct qcom_iommu_dev *qcom_iommu = to_iommu(dev); 89 + struct qcom_iommu_dev *qcom_iommu = d->iommu; 91 90 if (!qcom_iommu) 92 91 return NULL; 93 92 return qcom_iommu->ctxs[asid - 1]; ··· 119 118 120 119 static void qcom_iommu_tlb_sync(void *cookie) 121 120 { 122 - struct iommu_fwspec *fwspec; 123 - struct device *dev = cookie; 121 + struct qcom_iommu_domain *qcom_domain = cookie; 122 + struct iommu_fwspec *fwspec = qcom_domain->fwspec; 124 123 unsigned i; 125 124 126 - fwspec = dev_iommu_fwspec_get(dev); 127 - 128 125 for (i = 0; i < fwspec->num_ids; i++) { 129 - struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]); 126 + struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]); 130 127 unsigned int val, ret; 131 128 132 129 iommu_writel(ctx, ARM_SMMU_CB_TLBSYNC, 0); ··· 138 139 139 140 static void qcom_iommu_tlb_inv_context(void *cookie) 140 141 { 141 - struct device *dev = cookie; 142 - struct iommu_fwspec *fwspec; 142 + struct qcom_iommu_domain *qcom_domain = cookie; 143 + struct iommu_fwspec *fwspec = qcom_domain->fwspec; 143 144 unsigned i; 144 145 145 - fwspec = dev_iommu_fwspec_get(dev); 146 - 147 146 for (i = 0; i < fwspec->num_ids; i++) { 148 - struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]); 147 + struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]); 149 148 iommu_writel(ctx, ARM_SMMU_CB_S1_TLBIASID, ctx->asid); 150 149 } 151 150 ··· 153 156 static void qcom_iommu_tlb_inv_range_nosync(unsigned long iova, size_t size, size_t granule, bool leaf, void *cookie) 155 158 { 156 - struct device *dev = cookie; 157 - struct iommu_fwspec *fwspec; 159 + struct qcom_iommu_domain *qcom_domain = cookie; 160 + struct iommu_fwspec *fwspec = qcom_domain->fwspec; 158 161 unsigned i, reg; 159 162 160 163 reg = leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA; 161 164 162 - fwspec = dev_iommu_fwspec_get(dev); 163 - 164 165 for (i = 0; i < fwspec->num_ids; i++) { 165 - struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]); 166 + struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]); 166 167 size_t s = size; 167 168 168 169 iova = (iova >> 12) << 12; ··· 251 256 }; 252 257 253 258 qcom_domain->iommu = qcom_iommu; 254 - pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, dev); 259 + qcom_domain->fwspec = fwspec; 260 + 261 + pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, qcom_domain); 255 262 if (!pgtbl_ops) { 256 263 dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n"); 257 264 ret = -ENOMEM; ··· 266 269 domain->geometry.force_aperture = true; 267 270 268 271 for (i = 0; i < fwspec->num_ids; i++) { 269 - struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]); 272 + struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]); 270 273 271 274 if (!ctx->secure_init) { 272 275 ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid); ··· 416 419 417 420 pm_runtime_get_sync(qcom_iommu->dev); 418 421 for (i = 0; i < fwspec->num_ids; i++) { 419 - struct qcom_iommu_ctx *ctx = to_ctx(dev, fwspec->ids[i]); 422 + struct qcom_iommu_ctx *ctx = to_ctx(qcom_domain, fwspec->ids[i]); 420 423 421 424 /* Disable the context bank: */ 422 425 iommu_writel(ctx, ARM_SMMU_CB_SCTLR, 0);
+2 -2
drivers/md/dm-integrity.c
··· 2420 2420 unsigned prev_free_sectors; 2421 2421 2422 2422 /* the following test is not needed, but it tests the replay code */ 2423 - if (unlikely(dm_suspended(ic->ti)) && !ic->meta_dev) 2423 + if (unlikely(dm_post_suspending(ic->ti)) && !ic->meta_dev) 2424 2424 return; 2425 2425 2426 2426 spin_lock_irq(&ic->endio_wait.lock); ··· 2481 2481 2482 2482 next_chunk: 2483 2483 2484 - if (unlikely(dm_suspended(ic->ti))) 2484 + if (unlikely(dm_post_suspending(ic->ti))) 2485 2485 goto unlock_ret; 2486 2486 2487 2487 range.logical_sector = le64_to_cpu(ic->sb->recalc_sector);
+17
drivers/md/dm.c
··· 143 143 #define DMF_NOFLUSH_SUSPENDING 5 144 144 #define DMF_DEFERRED_REMOVE 6 145 145 #define DMF_SUSPENDED_INTERNALLY 7 146 + #define DMF_POST_SUSPENDING 8 146 147 147 148 #define DM_NUMA_NODE NUMA_NO_NODE 148 149 static int dm_numa_node = DM_NUMA_NODE; ··· 2409 2408 if (!dm_suspended_md(md)) { 2410 2409 dm_table_presuspend_targets(map); 2411 2410 set_bit(DMF_SUSPENDED, &md->flags); 2411 + set_bit(DMF_POST_SUSPENDING, &md->flags); 2412 2412 dm_table_postsuspend_targets(map); 2413 2413 } 2414 2414 /* dm_put_live_table must be before msleep, otherwise deadlock is possible */ ··· 2768 2766 if (r) 2769 2767 goto out_unlock; 2770 2768 2769 + set_bit(DMF_POST_SUSPENDING, &md->flags); 2771 2770 dm_table_postsuspend_targets(map); 2771 + clear_bit(DMF_POST_SUSPENDING, &md->flags); 2772 2772 2773 2773 out_unlock: 2774 2774 mutex_unlock(&md->suspend_lock); ··· 2867 2863 (void) __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE, 2868 2864 DMF_SUSPENDED_INTERNALLY); 2869 2865 2866 + set_bit(DMF_POST_SUSPENDING, &md->flags); 2870 2867 dm_table_postsuspend_targets(map); 2868 + clear_bit(DMF_POST_SUSPENDING, &md->flags); 2871 2869 } 2872 2870 2873 2871 static void __dm_internal_resume(struct mapped_device *md) ··· 3030 3024 return test_bit(DMF_SUSPENDED, &md->flags); 3031 3025 } 3032 3026 3027 + static int dm_post_suspending_md(struct mapped_device *md) 3028 + { 3029 + return test_bit(DMF_POST_SUSPENDING, &md->flags); 3030 + } 3031 + 3033 3032 int dm_suspended_internally_md(struct mapped_device *md) 3034 3033 { 3035 3034 return test_bit(DMF_SUSPENDED_INTERNALLY, &md->flags); ··· 3050 3039 return dm_suspended_md(dm_table_get_md(ti->table)); 3051 3040 } 3052 3041 EXPORT_SYMBOL_GPL(dm_suspended); 3042 + 3043 + int dm_post_suspending(struct dm_target *ti) 3044 + { 3045 + return dm_post_suspending_md(dm_table_get_md(ti->table)); 3046 + } 3047 + EXPORT_SYMBOL_GPL(dm_post_suspending); 3053 3048 3054 3049 int dm_noflush_suspending(struct dm_target *ti) 3055 3050 {
+11 -3
drivers/misc/habanalabs/command_submission.c
··· 499 499 struct asic_fixed_properties *asic = &hdev->asic_prop; 500 500 struct hw_queue_properties *hw_queue_prop; 501 501 502 + /* This must be checked here to prevent out-of-bounds access to 503 + * hw_queues_props array 504 + */ 505 + if (chunk->queue_index >= HL_MAX_QUEUES) { 506 + dev_err(hdev->dev, "Queue index %d is invalid\n", 507 + chunk->queue_index); 508 + return -EINVAL; 509 + } 510 + 502 511 hw_queue_prop = &asic->hw_queues_props[chunk->queue_index]; 503 512 504 - if ((chunk->queue_index >= HL_MAX_QUEUES) || 505 - (hw_queue_prop->type == QUEUE_TYPE_NA)) { 506 - dev_err(hdev->dev, "Queue index %d is invalid\n", 513 + if (hw_queue_prop->type == QUEUE_TYPE_NA) { 514 + dev_err(hdev->dev, "Queue index %d is not applicable\n", 507 515 chunk->queue_index); 508 516 return -EINVAL; 509 517 }
+8 -15
drivers/misc/habanalabs/debugfs.c
··· 36 36 pkt.i2c_reg = i2c_reg; 37 37 38 38 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 39 - HL_DEVICE_TIMEOUT_USEC, (long *) val); 39 + 0, (long *) val); 40 40 41 41 if (rc) 42 42 dev_err(hdev->dev, "Failed to read from I2C, error %d\n", rc); ··· 63 63 pkt.value = cpu_to_le64(val); 64 64 65 65 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 66 - HL_DEVICE_TIMEOUT_USEC, NULL); 66 + 0, NULL); 67 67 68 68 if (rc) 69 69 dev_err(hdev->dev, "Failed to write to I2C, error %d\n", rc); ··· 87 87 pkt.value = cpu_to_le64(state); 88 88 89 89 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 90 - HL_DEVICE_TIMEOUT_USEC, NULL); 90 + 0, NULL); 91 91 92 92 if (rc) 93 93 dev_err(hdev->dev, "Failed to set LED %d, error %d\n", led, rc); ··· 981 981 if (*ppos) 982 982 return 0; 983 983 984 - sprintf(tmp_buf, "%d\n", hdev->clock_gating); 984 + sprintf(tmp_buf, "0x%llx\n", hdev->clock_gating_mask); 985 985 rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf, 986 986 strlen(tmp_buf) + 1); 987 987 ··· 993 993 { 994 994 struct hl_dbg_device_entry *entry = file_inode(f)->i_private; 995 995 struct hl_device *hdev = entry->hdev; 996 - u32 value; 996 + u64 value; 997 997 ssize_t rc; 998 998 999 999 if (atomic_read(&hdev->in_reset)) { ··· 1002 1002 return 0; 1003 1003 } 1004 1004 1005 - rc = kstrtouint_from_user(buf, count, 10, &value); 1005 + rc = kstrtoull_from_user(buf, count, 16, &value); 1006 1006 if (rc) 1007 1007 return rc; 1008 1008 1009 - if (value) { 1010 - hdev->clock_gating = 1; 1011 - if (hdev->asic_funcs->enable_clock_gating) 1012 - hdev->asic_funcs->enable_clock_gating(hdev); 1013 - } else { 1014 - if (hdev->asic_funcs->disable_clock_gating) 1015 - hdev->asic_funcs->disable_clock_gating(hdev); 1016 - hdev->clock_gating = 0; 1017 - } 1009 + hdev->clock_gating_mask = value; 1010 + hdev->asic_funcs->set_clock_gating(hdev); 1018 1011 1019 1012 return count; 1020 1013 }
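The debugfs change above turns the clock-gating control from a boolean into a 64-bit hex bitmask, matching the ABI description added in this merge (bits 0-7 DMA channels, 8-11 MME engines, 12-19 TPC engines, with a bit position of `1 << GAUDI_ENGINE_ID_*`). A sketch of composing such a mask; the enum values mirror the documented layout but are assumptions here, not copied from the uapi `habanalabs.h` `gaudi_engine_id` enum:

```c
#include <assert.h>

/* Assumed bit layout per the ABI text: 0-7 DMA, 8-11 MME, 12-19 TPC. */
enum {
	ENGINE_ID_DMA_0 = 0,
	ENGINE_ID_MME_0 = 8,
	ENGINE_ID_TPC_0 = 12,
};

/* One engine's bit in the clock-gating mask written to debugfs. */
static unsigned long long engine_gate_bit(int engine_id)
{
	return 1ULL << engine_id;
}
```

Writing `0x1000` to the file would thus enable clock gating for the first TPC engine only.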
+1 -1
drivers/misc/habanalabs/device.c
··· 608 608 hdev->in_debug = 0; 609 609 610 610 if (!hdev->hard_reset_pending) 611 - hdev->asic_funcs->enable_clock_gating(hdev); 611 + hdev->asic_funcs->set_clock_gating(hdev); 612 612 613 613 goto out; 614 614 }
+5 -5
drivers/misc/habanalabs/firmware_if.c
··· 61 61 pkt.ctl = cpu_to_le32(opcode << ARMCP_PKT_CTL_OPCODE_SHIFT); 62 62 63 63 return hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, 64 - sizeof(pkt), HL_DEVICE_TIMEOUT_USEC, NULL); 64 + sizeof(pkt), 0, NULL); 65 65 } 66 66 67 67 int hl_fw_send_cpu_message(struct hl_device *hdev, u32 hw_queue_id, u32 *msg, ··· 144 144 pkt.value = cpu_to_le64(event_type); 145 145 146 146 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 147 - HL_DEVICE_TIMEOUT_USEC, &result); 147 + 0, &result); 148 148 149 149 if (rc) 150 150 dev_err(hdev->dev, "failed to unmask RAZWI IRQ %d", event_type); ··· 183 183 ARMCP_PKT_CTL_OPCODE_SHIFT); 184 184 185 185 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) pkt, 186 - total_pkt_size, HL_DEVICE_TIMEOUT_USEC, &result); 186 + total_pkt_size, 0, &result); 187 187 188 188 if (rc) 189 189 dev_err(hdev->dev, "failed to unmask IRQ array\n"); ··· 204 204 test_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); 205 205 206 206 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &test_pkt, 207 - sizeof(test_pkt), HL_DEVICE_TIMEOUT_USEC, &result); 207 + sizeof(test_pkt), 0, &result); 208 208 209 209 if (!rc) { 210 210 if (result != ARMCP_PACKET_FENCE_VAL) ··· 248 248 hb_pkt.value = cpu_to_le64(ARMCP_PACKET_FENCE_VAL); 249 249 250 250 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &hb_pkt, 251 - sizeof(hb_pkt), HL_DEVICE_TIMEOUT_USEC, &result); 251 + sizeof(hb_pkt), 0, &result); 252 252 253 253 if ((rc) || (result != ARMCP_PACKET_FENCE_VAL)) 254 254 rc = -EIO;
+84 -39
drivers/misc/habanalabs/gaudi/gaudi.c
··· 80 80 #define GAUDI_PLDM_QMAN0_TIMEOUT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) 81 81 #define GAUDI_PLDM_TPC_KERNEL_WAIT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) 82 82 #define GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC 1000000 /* 1s */ 83 + #define GAUDI_MSG_TO_CPU_TIMEOUT_USEC 4000000 /* 4s */ 83 84 84 85 #define GAUDI_QMAN0_FENCE_VAL 0x72E91AB9 85 86 ··· 99 98 100 99 #define GAUDI_ARB_WDT_TIMEOUT 0x1000000 101 100 101 + #define GAUDI_CLK_GATE_DEBUGFS_MASK (\ 102 + BIT(GAUDI_ENGINE_ID_MME_0) |\ 103 + BIT(GAUDI_ENGINE_ID_MME_2) |\ 104 + GENMASK_ULL(GAUDI_ENGINE_ID_TPC_7, GAUDI_ENGINE_ID_TPC_0)) 105 + 102 106 static const char gaudi_irq_name[GAUDI_MSI_ENTRIES][GAUDI_MAX_STRING_LEN] = { 103 107 "gaudi cq 0_0", "gaudi cq 0_1", "gaudi cq 0_2", "gaudi cq 0_3", 104 108 "gaudi cq 1_0", "gaudi cq 1_1", "gaudi cq 1_2", "gaudi cq 1_3", ··· 112 106 }; 113 107 114 108 static const u8 gaudi_dma_assignment[GAUDI_DMA_MAX] = { 115 - [GAUDI_PCI_DMA_1] = 0, 116 - [GAUDI_PCI_DMA_2] = 1, 117 - [GAUDI_PCI_DMA_3] = 5, 118 - [GAUDI_HBM_DMA_1] = 2, 119 - [GAUDI_HBM_DMA_2] = 3, 120 - [GAUDI_HBM_DMA_3] = 4, 121 - [GAUDI_HBM_DMA_4] = 6, 122 - [GAUDI_HBM_DMA_5] = 7 109 + [GAUDI_PCI_DMA_1] = GAUDI_ENGINE_ID_DMA_0, 110 + [GAUDI_PCI_DMA_2] = GAUDI_ENGINE_ID_DMA_1, 111 + [GAUDI_PCI_DMA_3] = GAUDI_ENGINE_ID_DMA_5, 112 + [GAUDI_HBM_DMA_1] = GAUDI_ENGINE_ID_DMA_2, 113 + [GAUDI_HBM_DMA_2] = GAUDI_ENGINE_ID_DMA_3, 114 + [GAUDI_HBM_DMA_3] = GAUDI_ENGINE_ID_DMA_4, 115 + [GAUDI_HBM_DMA_4] = GAUDI_ENGINE_ID_DMA_6, 116 + [GAUDI_HBM_DMA_5] = GAUDI_ENGINE_ID_DMA_7 123 117 }; 124 118 125 119 static const u8 gaudi_cq_assignment[NUMBER_OF_CMPLT_QUEUES] = { ··· 1825 1819 1826 1820 gaudi_init_rate_limiter(hdev); 1827 1821 1828 - gaudi_disable_clock_gating(hdev); 1822 + hdev->asic_funcs->disable_clock_gating(hdev); 1829 1823 1830 1824 for (tpc_id = 0, tpc_offset = 0; 1831 1825 tpc_id < TPC_NUMBER_OF_ENGINES; ··· 2537 2531 WREG32(mmTPC7_CFG_TPC_STALL, 1 << TPC0_CFG_TPC_STALL_V_SHIFT); 2538 2532 } 2539 2533 2540 - static void gaudi_enable_clock_gating(struct hl_device *hdev) 2534 + static void gaudi_set_clock_gating(struct hl_device *hdev) 2541 2535 { 2542 2536 struct gaudi_device *gaudi = hdev->asic_specific; 2543 2537 u32 qman_offset; 2544 2538 int i; 2545 - 2546 - if (!hdev->clock_gating) 2547 - return; 2548 - 2549 - if (gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) 2550 - return; 2551 2539 2552 2540 /* In case we are during debug session, don't enable the clock gate 2553 2541 * as it may interfere ··· 2549 2549 if (hdev->in_debug) 2550 2550 return; 2551 2551 2552 - for (i = 0, qman_offset = 0 ; i < PCI_DMA_NUMBER_OF_CHNLS ; i++) { 2552 + for (i = GAUDI_PCI_DMA_1, qman_offset = 0 ; i < GAUDI_HBM_DMA_1 ; i++) { 2553 + if (!(hdev->clock_gating_mask & 2554 + (BIT_ULL(gaudi_dma_assignment[i])))) 2555 + continue; 2556 + 2553 2557 qman_offset = gaudi_dma_assignment[i] * DMA_QMAN_OFFSET; 2554 2558 WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset, QMAN_CGM1_PWR_GATE_EN); 2555 2559 WREG32(mmDMA0_QM_CGM_CFG + qman_offset, 2556 2560 QMAN_UPPER_CP_CGM_PWR_GATE_EN); 2557 2561 } 2558 2562 2559 - for (; i < HBM_DMA_NUMBER_OF_CHNLS ; i++) { 2563 + for (i = GAUDI_HBM_DMA_1 ; i < GAUDI_DMA_MAX ; i++) { 2564 + if (!(hdev->clock_gating_mask & 2565 + (BIT_ULL(gaudi_dma_assignment[i])))) 2566 + continue; 2567 + 2560 2568 qman_offset = gaudi_dma_assignment[i] * DMA_QMAN_OFFSET; 2561 2569 WREG32(mmDMA0_QM_CGM_CFG1 + qman_offset, QMAN_CGM1_PWR_GATE_EN); 2562 2570 WREG32(mmDMA0_QM_CGM_CFG + qman_offset, 2563 2571 QMAN_COMMON_CP_CGM_PWR_GATE_EN); 2564 2572 } 2565 2573 2566 - WREG32(mmMME0_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN); 2567 - WREG32(mmMME0_QM_CGM_CFG, 2568 - QMAN_COMMON_CP_CGM_PWR_GATE_EN); 2569 - WREG32(mmMME2_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN); 2570 - WREG32(mmMME2_QM_CGM_CFG, 2571 - QMAN_COMMON_CP_CGM_PWR_GATE_EN); 2574 + if (hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_0))) { 2575 + WREG32(mmMME0_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN); 2576 + WREG32(mmMME0_QM_CGM_CFG, QMAN_COMMON_CP_CGM_PWR_GATE_EN); 2577 + } 2578 + 2579 + if (hdev->clock_gating_mask & (BIT_ULL(GAUDI_ENGINE_ID_MME_2))) { 2580 + WREG32(mmMME2_QM_CGM_CFG1, QMAN_CGM1_PWR_GATE_EN); 2581 + WREG32(mmMME2_QM_CGM_CFG, QMAN_COMMON_CP_CGM_PWR_GATE_EN); 2582 + } 2572 2583 2573 2584 for (i = 0, qman_offset = 0 ; i < TPC_NUMBER_OF_ENGINES ; i++) { 2585 + if (!(hdev->clock_gating_mask & 2586 + (BIT_ULL(GAUDI_ENGINE_ID_TPC_0 + i)))) 2587 + continue; 2588 + 2574 2589 WREG32(mmTPC0_QM_CGM_CFG1 + qman_offset, 2575 2590 QMAN_CGM1_PWR_GATE_EN); 2576 2591 WREG32(mmTPC0_QM_CGM_CFG + qman_offset, ··· 2678 2663 gaudi_stop_hbm_dma_qmans(hdev); 2679 2664 gaudi_stop_pci_dma_qmans(hdev); 2680 2665 2681 - gaudi_disable_clock_gating(hdev); 2666 + hdev->asic_funcs->disable_clock_gating(hdev); 2682 2667 2683 2668 msleep(wait_timeout_ms); 2684 2669 ··· 3018 3003 3019 3004 gaudi_init_tpc_qmans(hdev); 3020 3005 3021 - gaudi_enable_clock_gating(hdev); 3006 + hdev->asic_funcs->set_clock_gating(hdev); 3022 3007 3023 3008 gaudi_enable_timestamp(hdev); 3024 3009 ··· 3127 3112 HW_CAP_HBM_DMA | HW_CAP_PLL | 3128 3113 HW_CAP_MMU | 3129 3114 HW_CAP_SRAM_SCRAMBLER | 3130 - HW_CAP_HBM_SCRAMBLER); 3115 + HW_CAP_HBM_SCRAMBLER | 3116 + HW_CAP_CLK_GATE); 3117 + 3131 3118 memset(gaudi->events_stat, 0, sizeof(gaudi->events_stat)); 3132 3119 } 3133 3120 ··· 3479 3462 *result = 0; 3480 3463 return 0; 3481 3464 } 3465 + 3466 + if (!timeout) 3467 + timeout = GAUDI_MSG_TO_CPU_TIMEOUT_USEC; 3482 3468 3483 3469 return hl_fw_send_cpu_message(hdev, GAUDI_QUEUE_ID_CPU_PQ, msg, len, 3484 3470 timeout, result); ··· 3885 3865 rc = -EPERM; 3886 3866 break; 3887 3867 3868 + case PACKET_WREG_BULK: 3869 + dev_err(hdev->dev, 3870 + "User not allowed to use WREG_BULK\n"); 3871 + rc = -EPERM; 3872 + break; 3873 + 3888 3874 case PACKET_LOAD_AND_EXE: 3889 3875 rc = gaudi_validate_load_and_exe_pkt(hdev, parser, 3890 3876 (struct packet_load_and_exe *) user_pkt); ··· 3906 3880 break; 3907 3881 3908 3882 case PACKET_WREG_32: 3909 - case PACKET_WREG_BULK: 3910 - case 
PACKET_MSG_LONG: 3911 3884 case PACKET_MSG_SHORT: 3912 3885 case PACKET_REPEAT: ··· 4546 4521 int rc = 0; 4547 4522 4548 4523 if ((addr >= CFG_BASE) && (addr < CFG_BASE + CFG_SIZE)) { 4549 - if (gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) { 4524 + 4525 + if ((gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) && 4526 + (hdev->clock_gating_mask & 4527 + GAUDI_CLK_GATE_DEBUGFS_MASK)) { 4528 + 4550 4529 dev_err_ratelimited(hdev->dev, 4551 4530 "Can't read register - clock gating is enabled!\n"); 4552 4531 rc = -EFAULT; 4553 4532 } else { 4554 4533 *val = RREG32(addr - CFG_BASE); 4555 4534 } 4535 + 4556 4536 } else if ((addr >= SRAM_BASE_ADDR) && 4557 4537 (addr < SRAM_BASE_ADDR + SRAM_BAR_SIZE)) { 4558 4538 *val = readl(hdev->pcie_bar[SRAM_BAR_ID] + ··· 4593 4563 int rc = 0; 4594 4564 4595 4565 if ((addr >= CFG_BASE) && (addr < CFG_BASE + CFG_SIZE)) { 4596 - if (gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) { 4566 + 4567 + if ((gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) && 4568 + (hdev->clock_gating_mask & 4569 + GAUDI_CLK_GATE_DEBUGFS_MASK)) { 4570 + 4597 4571 dev_err_ratelimited(hdev->dev, 4598 4572 "Can't write register - clock gating is enabled!\n"); 4599 4573 rc = -EFAULT; 4600 4574 } else { 4601 4575 WREG32(addr - CFG_BASE, val); 4602 4576 } 4577 + 4603 4578 } else if ((addr >= SRAM_BASE_ADDR) && 4604 4579 (addr < SRAM_BASE_ADDR + SRAM_BAR_SIZE)) { 4605 4580 writel(val, hdev->pcie_bar[SRAM_BAR_ID] + ··· 4640 4605 int rc = 0; 4641 4606 4642 4607 if ((addr >= CFG_BASE) && (addr <= CFG_BASE + CFG_SIZE - sizeof(u64))) { 4643 - if (gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) { 4608 + 4609 + if ((gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) && 4610 + (hdev->clock_gating_mask & 4611 + GAUDI_CLK_GATE_DEBUGFS_MASK)) { 4612 + 4644 4613 dev_err_ratelimited(hdev->dev, 4645 4614 "Can't read register - clock gating is enabled!\n"); 4646 4615 rc = -EFAULT; ··· 4654 4615 4655 4616 *val = (((u64) val_h) << 32) | val_l; 4656 4617 } 4618 + 4657 4619 } else if ((addr >= 
SRAM_BASE_ADDR) && 4658 4620 (addr <= SRAM_BASE_ADDR + SRAM_BAR_SIZE - sizeof(u64))) { 4659 4621 *val = readq(hdev->pcie_bar[SRAM_BAR_ID] + ··· 4691 4651 int rc = 0; 4692 4652 4693 4653 if ((addr >= CFG_BASE) && (addr <= CFG_BASE + CFG_SIZE - sizeof(u64))) { 4694 - if (gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) { 4654 + 4655 + if ((gaudi->hw_cap_initialized & HW_CAP_CLK_GATE) && 4656 + (hdev->clock_gating_mask & 4657 + GAUDI_CLK_GATE_DEBUGFS_MASK)) { 4658 + 4695 4659 dev_err_ratelimited(hdev->dev, 4696 4660 "Can't write register - clock gating is enabled!\n"); 4697 4661 rc = -EFAULT; ··· 4704 4660 WREG32(addr + sizeof(u32) - CFG_BASE, 4705 4661 upper_32_bits(val)); 4706 4662 } 4663 + 4707 4664 } else if ((addr >= SRAM_BASE_ADDR) && 4708 4665 (addr <= SRAM_BASE_ADDR + SRAM_BAR_SIZE - sizeof(u64))) { 4709 4666 writeq(val, hdev->pcie_bar[SRAM_BAR_ID] + ··· 4926 4881 gaudi_mmu_prepare_reg(hdev, mmPSOC_GLOBAL_CONF_TRACE_ARUSER, asid); 4927 4882 gaudi_mmu_prepare_reg(hdev, mmPSOC_GLOBAL_CONF_TRACE_AWUSER, asid); 4928 4883 4929 - hdev->asic_funcs->enable_clock_gating(hdev); 4884 + hdev->asic_funcs->set_clock_gating(hdev); 4930 4885 4931 4886 mutex_unlock(&gaudi->clk_gate_mutex); 4932 4887 } ··· 5307 5262 } 5308 5263 5309 5264 if (disable_clock_gating) { 5310 - hdev->asic_funcs->enable_clock_gating(hdev); 5265 + hdev->asic_funcs->set_clock_gating(hdev); 5311 5266 mutex_unlock(&gaudi->clk_gate_mutex); 5312 5267 } 5313 5268 } ··· 5794 5749 /* Clear interrupts */ 5795 5750 WREG32(mmTPC0_CFG_TPC_INTR_CAUSE + tpc_offset, 0); 5796 5751 5797 - hdev->asic_funcs->enable_clock_gating(hdev); 5752 + hdev->asic_funcs->set_clock_gating(hdev); 5798 5753 5799 5754 mutex_unlock(&gaudi->clk_gate_mutex); 5800 5755 ··· 6310 6265 if (s) 6311 6266 seq_puts(s, "\n"); 6312 6267 6313 - hdev->asic_funcs->enable_clock_gating(hdev); 6268 + hdev->asic_funcs->set_clock_gating(hdev); 6314 6269 6315 6270 mutex_unlock(&gaudi->clk_gate_mutex); 6316 6271 ··· 6411 6366 dev_err(hdev->dev, 6412 6367 "Timeout 
while waiting for TPC%d icache prefetch\n", 6413 6368 tpc_id); 6414 - hdev->asic_funcs->enable_clock_gating(hdev); 6369 + hdev->asic_funcs->set_clock_gating(hdev); 6415 6370 mutex_unlock(&gaudi->clk_gate_mutex); 6416 6371 return -EIO; 6417 6372 } ··· 6440 6395 1000, 6441 6396 kernel_timeout); 6442 6397 6443 - hdev->asic_funcs->enable_clock_gating(hdev); 6398 + hdev->asic_funcs->set_clock_gating(hdev); 6444 6399 mutex_unlock(&gaudi->clk_gate_mutex); 6445 6400 6446 6401 if (rc) { ··· 6781 6736 .mmu_invalidate_cache = gaudi_mmu_invalidate_cache, 6782 6737 .mmu_invalidate_cache_range = gaudi_mmu_invalidate_cache_range, 6783 6738 .send_heartbeat = gaudi_send_heartbeat, 6784 - .enable_clock_gating = gaudi_enable_clock_gating, 6739 + .set_clock_gating = gaudi_set_clock_gating, 6785 6740 .disable_clock_gating = gaudi_disable_clock_gating, 6786 6741 .debug_coresight = gaudi_debug_coresight, 6787 6742 .is_device_idle = gaudi_is_device_idle,
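The gaudi.c hunks above key every clock-gating decision off a single per-engine bitmask (`hdev->clock_gating_mask`, tested with `BIT_ULL(engine_id)` and composed with `GENMASK_ULL` for the TPC range). A standalone sketch of that mask arithmetic, with the engine-ID layout taken from the debugfs ABI text (bits 0-7 DMA, 8-11 MME, 12-19 TPC) and local stand-ins for the kernel's `BIT_ULL`/`GENMASK_ULL` macros:

```c
#include <assert.h>
#include <stdint.h>

/* Engine IDs mirror the layout documented in debugfs-driver-habanalabs:
 * bits 0-7 DMA channels, 8-11 MME engines, 12-19 TPC engines. Only the
 * IDs the mask needs are spelled out here. */
enum gaudi_engine_id {
	GAUDI_ENGINE_ID_DMA_0 = 0,	/* DMA_1..DMA_7 follow */
	GAUDI_ENGINE_ID_MME_0 = 8,
	GAUDI_ENGINE_ID_MME_2 = 10,
	GAUDI_ENGINE_ID_TPC_0 = 12,
	GAUDI_ENGINE_ID_TPC_7 = 19,
};

#define BIT_ULL(n)	(1ULL << (n))
/* GENMASK_ULL(h, l): bits l..h set, as in include/linux/bits.h */
#define GENMASK_ULL(h, l) \
	((~0ULL << (l)) & (~0ULL >> (63 - (h))))

/* Same composition as GAUDI_CLK_GATE_DEBUGFS_MASK in the hunk above. */
static const uint64_t clk_gate_debugfs_mask =
	BIT_ULL(GAUDI_ENGINE_ID_MME_0) |
	BIT_ULL(GAUDI_ENGINE_ID_MME_2) |
	GENMASK_ULL(GAUDI_ENGINE_ID_TPC_7, GAUDI_ENGINE_ID_TPC_0);

/* Is clock gating requested for a given engine? This is the test
 * gaudi_set_clock_gating() applies per DMA/MME/TPC engine. */
static int engine_gated(uint64_t mask, unsigned int engine_id)
{
	return !!(mask & BIT_ULL(engine_id));
}
```

The same shape explains the debugfs read/write guards: register access through `CFG_BASE` is refused only while `hw_cap_initialized & HW_CAP_CLK_GATE` is set *and* the user's mask still gates one of the MME/TPC engines covered by `GAUDI_CLK_GATE_DEBUGFS_MASK`.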
+12 -8
drivers/misc/habanalabs/goya/goya.c
··· 88 88 #define GOYA_PLDM_MMU_TIMEOUT_USEC (MMU_CONFIG_TIMEOUT_USEC * 100) 89 89 #define GOYA_PLDM_QMAN0_TIMEOUT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) 90 90 #define GOYA_BOOT_FIT_REQ_TIMEOUT_USEC 1000000 /* 1s */ 91 + #define GOYA_MSG_TO_CPU_TIMEOUT_USEC 4000000 /* 4s */ 91 92 92 93 #define GOYA_QMAN0_FENCE_VAL 0xD169B243 93 94 ··· 2831 2830 return 0; 2832 2831 } 2833 2832 2833 + if (!timeout) 2834 + timeout = GOYA_MSG_TO_CPU_TIMEOUT_USEC; 2835 + 2834 2836 return hl_fw_send_cpu_message(hdev, GOYA_QUEUE_ID_CPU_PQ, msg, len, 2835 2837 timeout, result); 2836 2838 } ··· 4435 4431 pkt->armcp_pkt.ctl = cpu_to_le32(ARMCP_PACKET_UNMASK_RAZWI_IRQ_ARRAY << 4436 4432 ARMCP_PKT_CTL_OPCODE_SHIFT); 4437 4433 4438 - rc = goya_send_cpu_message(hdev, (u32 *) pkt, total_pkt_size, 4439 - HL_DEVICE_TIMEOUT_USEC, &result); 4434 + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) pkt, 4435 + total_pkt_size, 0, &result); 4440 4436 4441 4437 if (rc) 4442 4438 dev_err(hdev->dev, "failed to unmask IRQ array\n"); ··· 4468 4464 ARMCP_PKT_CTL_OPCODE_SHIFT); 4469 4465 pkt.value = cpu_to_le64(event_type); 4470 4466 4471 - rc = goya_send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 4472 - HL_DEVICE_TIMEOUT_USEC, &result); 4467 + rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 4468 + 0, &result); 4473 4469 4474 4470 if (rc) 4475 4471 dev_err(hdev->dev, "failed to unmask RAZWI IRQ %d", event_type); ··· 5032 5028 return 0; 5033 5029 } 5034 5030 5035 - static void goya_enable_clock_gating(struct hl_device *hdev) 5031 + static void goya_set_clock_gating(struct hl_device *hdev) 5036 5032 { 5037 - 5033 + /* clock gating not supported in Goya */ 5038 5034 } 5039 5035 5040 5036 static void goya_disable_clock_gating(struct hl_device *hdev) 5041 5037 { 5042 - 5038 + /* clock gating not supported in Goya */ 5043 5039 } 5044 5040 5045 5041 static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask, ··· 5263 5259 .mmu_invalidate_cache = goya_mmu_invalidate_cache, 5264 5260 
.mmu_invalidate_cache_range = goya_mmu_invalidate_cache_range, 5265 5261 .send_heartbeat = goya_send_heartbeat, 5266 - .enable_clock_gating = goya_enable_clock_gating, 5262 + .set_clock_gating = goya_set_clock_gating, 5267 5263 .disable_clock_gating = goya_disable_clock_gating, 5268 5264 .debug_coresight = goya_debug_coresight, 5269 5265 .is_device_idle = goya_is_device_idle,
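Both the Gaudi and Goya `send_cpu_message()` hunks adopt the same calling convention: a caller passes `timeout == 0` to request the ASIC's default timeout, which is why the hwmon/sysfs call sites below can drop their local `*_PKT_TIMEOUT` defines and pass `0`. A minimal sketch of that convention (the constant matches `GOYA_MSG_TO_CPU_TIMEOUT_USEC` above; the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define GOYA_MSG_TO_CPU_TIMEOUT_USEC	4000000	/* 4s, per-ASIC default */

/* Resolve the timeout the way the reworked send_cpu_message() does:
 * 0 means "use the ASIC default", anything else is taken verbatim. */
static uint32_t effective_timeout_usec(uint32_t requested)
{
	return requested ? requested : GOYA_MSG_TO_CPU_TIMEOUT_USEC;
}
```

Centralizing the default in the ASIC callback is what lets common code (hwmon.c, sysfs.c) stay ASIC-agnostic instead of hard-coding one-second timeouts.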
+13 -6
drivers/misc/habanalabs/habanalabs.h
··· 578 578 * @mmu_invalidate_cache_range: flush specific MMU STLB cache lines with 579 579 * ASID-VA-size mask. 580 580 * @send_heartbeat: send is-alive packet to ArmCP and verify response. 581 - * @enable_clock_gating: enable clock gating for reducing power consumption. 582 - * @disable_clock_gating: disable clock for accessing registers on HBW. 581 + * @set_clock_gating: enable/disable clock gating per engine according to 582 + * clock gating mask in hdev 583 + * @disable_clock_gating: disable clock gating completely 583 584 * @debug_coresight: perform certain actions on Coresight for debugging. 584 585 * @is_device_idle: return true if device is idle, false otherwise. 585 586 * @soft_reset_late_init: perform certain actions needed after soft reset. ··· 588 587 * @hw_queues_unlock: release H/W queues lock. 589 588 * @get_pci_id: retrieve PCI ID. 590 589 * @get_eeprom_data: retrieve EEPROM data from F/W. 591 - * @send_cpu_message: send buffer to ArmCP. 590 + * @send_cpu_message: send message to F/W. If the message is timedout, the 591 + * driver will eventually reset the device. The timeout can 592 + * be determined by the calling function or it can be 0 and 593 + * then the timeout is the default timeout for the specific 594 + * ASIC 592 595 * @get_hw_state: retrieve the H/W state 593 596 * @pci_bars_map: Map PCI BARs. 594 597 * @set_dram_bar_base: Set DRAM BAR to map specific device address. 
Returns ··· 685 680 int (*mmu_invalidate_cache_range)(struct hl_device *hdev, bool is_hard, 686 681 u32 asid, u64 va, u64 size); 687 682 int (*send_heartbeat)(struct hl_device *hdev); 688 - void (*enable_clock_gating)(struct hl_device *hdev); 683 + void (*set_clock_gating)(struct hl_device *hdev); 689 684 void (*disable_clock_gating)(struct hl_device *hdev); 690 685 int (*debug_coresight)(struct hl_device *hdev, void *data); 691 686 bool (*is_device_idle)(struct hl_device *hdev, u32 *mask, ··· 1403 1398 * @max_power: the max power of the device, as configured by the sysadmin. This 1404 1399 * value is saved so in case of hard-reset, the driver will restore 1405 1400 * this value and update the F/W after the re-initialization 1401 + * @clock_gating_mask: is clock gating enabled. bitmask that represents the 1402 + * different engines. See debugfs-driver-habanalabs for 1403 + * details. 1406 1404 * @in_reset: is device in reset flow. 1407 1405 * @curr_pll_profile: current PLL profile. 1408 1406 * @cs_active_cnt: number of active command submissions on this device (active ··· 1433 1425 * @init_done: is the initialization of the device done. 1434 1426 * @mmu_enable: is MMU enabled. 1435 1427 * @mmu_huge_page_opt: is MMU huge pages optimization enabled. 1436 - * @clock_gating: is clock gating enabled. 1437 1428 * @device_cpu_disabled: is the device CPU disabled (due to timeouts) 1438 1429 * @dma_mask: the dma mask that was set for this device 1439 1430 * @in_debug: is device under debug. This, together with fpriv_list, enforces ··· 1500 1493 atomic64_t dram_used_mem; 1501 1494 u64 timeout_jiffies; 1502 1495 u64 max_power; 1496 + u64 clock_gating_mask; 1503 1497 atomic_t in_reset; 1504 1498 enum hl_pll_frequency curr_pll_profile; 1505 1499 int cs_active_cnt; ··· 1522 1514 u8 dram_default_page_mapping; 1523 1515 u8 pmmu_huge_range; 1524 1516 u8 init_done; 1525 - u8 clock_gating; 1526 1517 u8 device_cpu_disabled; 1527 1518 u8 dma_mask; 1528 1519 u8 in_debug;
+1 -1
drivers/misc/habanalabs/habanalabs_drv.c
··· 232 232 hdev->fw_loading = 1; 233 233 hdev->cpu_queues_enable = 1; 234 234 hdev->heartbeat = 1; 235 - hdev->clock_gating = 1; 235 + hdev->clock_gating_mask = ULONG_MAX; 236 236 237 237 hdev->reset_pcilink = 0; 238 238 hdev->axi_drain = 0;
+9 -10
drivers/misc/habanalabs/hwmon.c
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/hwmon.h> 12 12 13 - #define SENSORS_PKT_TIMEOUT 1000000 /* 1s */ 14 13 #define HWMON_NR_SENSOR_TYPES (hwmon_pwm + 1) 15 14 16 15 int hl_build_hwmon_channel_info(struct hl_device *hdev, ··· 322 323 pkt.type = __cpu_to_le16(attr); 323 324 324 325 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 325 - SENSORS_PKT_TIMEOUT, value); 326 + 0, value); 326 327 327 328 if (rc) { 328 329 dev_err(hdev->dev, ··· 349 350 pkt.value = __cpu_to_le64(value); 350 351 351 352 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 352 - SENSORS_PKT_TIMEOUT, NULL); 353 + 0, NULL); 353 354 354 355 if (rc) 355 356 dev_err(hdev->dev, ··· 373 374 pkt.type = __cpu_to_le16(attr); 374 375 375 376 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 376 - SENSORS_PKT_TIMEOUT, value); 377 + 0, value); 377 378 378 379 if (rc) { 379 380 dev_err(hdev->dev, ··· 399 400 pkt.type = __cpu_to_le16(attr); 400 401 401 402 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 402 - SENSORS_PKT_TIMEOUT, value); 403 + 0, value); 403 404 404 405 if (rc) { 405 406 dev_err(hdev->dev, ··· 425 426 pkt.type = __cpu_to_le16(attr); 426 427 427 428 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 428 - SENSORS_PKT_TIMEOUT, value); 429 + 0, value); 429 430 430 431 if (rc) { 431 432 dev_err(hdev->dev, ··· 451 452 pkt.type = __cpu_to_le16(attr); 452 453 453 454 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 454 - SENSORS_PKT_TIMEOUT, value); 455 + 0, value); 455 456 456 457 if (rc) { 457 458 dev_err(hdev->dev, ··· 478 479 pkt.value = cpu_to_le64(value); 479 480 480 481 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 481 - SENSORS_PKT_TIMEOUT, NULL); 482 + 0, NULL); 482 483 483 484 if (rc) 484 485 dev_err(hdev->dev, ··· 501 502 pkt.value = __cpu_to_le64(value); 502 503 503 504 rc = 
hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 504 - SENSORS_PKT_TIMEOUT, NULL); 505 + 0, NULL); 505 506 506 507 if (rc) 507 508 dev_err(hdev->dev, ··· 526 527 pkt.value = __cpu_to_le64(value); 527 528 528 529 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 529 - SENSORS_PKT_TIMEOUT, NULL); 530 + 0, NULL); 530 531 531 532 if (rc) 532 533 dev_err(hdev->dev,
+4 -7
drivers/misc/habanalabs/sysfs.c
··· 9 9 10 10 #include <linux/pci.h> 11 11 12 - #define SET_CLK_PKT_TIMEOUT 1000000 /* 1s */ 13 - #define SET_PWR_PKT_TIMEOUT 1000000 /* 1s */ 14 - 15 12 long hl_get_frequency(struct hl_device *hdev, u32 pll_index, bool curr) 16 13 { 17 14 struct armcp_packet pkt; ··· 26 29 pkt.pll_index = cpu_to_le32(pll_index); 27 30 28 31 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 29 - SET_CLK_PKT_TIMEOUT, &result); 32 + 0, &result); 30 33 31 34 if (rc) { 32 35 dev_err(hdev->dev, ··· 51 54 pkt.value = cpu_to_le64(freq); 52 55 53 56 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 54 - SET_CLK_PKT_TIMEOUT, NULL); 57 + 0, NULL); 55 58 56 59 if (rc) 57 60 dev_err(hdev->dev, ··· 71 74 ARMCP_PKT_CTL_OPCODE_SHIFT); 72 75 73 76 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 74 - SET_PWR_PKT_TIMEOUT, &result); 77 + 0, &result); 75 78 76 79 if (rc) { 77 80 dev_err(hdev->dev, "Failed to get max power, error %d\n", rc); ··· 93 96 pkt.value = cpu_to_le64(value); 94 97 95 98 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 96 - SET_PWR_PKT_TIMEOUT, NULL); 99 + 0, NULL); 97 100 98 101 if (rc) 99 102 dev_err(hdev->dev, "Failed to set max power, error %d\n", rc);
+1 -1
drivers/mmc/host/sdhci-of-aspeed.c
··· 68 68 if (WARN_ON(clock > host->max_clk)) 69 69 clock = host->max_clk; 70 70 71 - for (div = 1; div < 256; div *= 2) { 71 + for (div = 2; div < 256; div *= 2) { 72 72 if ((parent / div) <= clock) 73 73 break; 74 74 }
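The one-character sdhci-of-aspeed change starts the divider scan at 2 instead of 1: the hardware cannot pass the parent clock through undivided, so the first candidate must already divide by two. A standalone sketch of the search (the real function also clamps against `host->max_clk` and programs the divider register; that part is omitted here):

```c
#include <assert.h>

/* Pick the smallest power-of-two divisor (starting at 2, as in the
 * fix above) that brings parent/div at or below the requested rate.
 * Falls out of the loop at 256 if even the largest divisor is too
 * fast, matching the driver's loop bound. */
static unsigned int aspeed_pick_div(unsigned long parent,
				    unsigned long clock)
{
	unsigned int div;

	for (div = 2; div < 256; div *= 2) {
		if (parent / div <= clock)
			break;
	}
	return div;
}
```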
+7 -3
drivers/net/bonding/bond_main.c
··· 5053 5053 bond_dev->rtnl_link_ops = &bond_link_ops; 5054 5054 5055 5055 res = register_netdevice(bond_dev); 5056 + if (res < 0) { 5057 + free_netdev(bond_dev); 5058 + rtnl_unlock(); 5059 + 5060 + return res; 5061 + } 5056 5062 5057 5063 netif_carrier_off(bond_dev); 5058 5064 5059 5065 bond_work_init_all(bond); 5060 5066 5061 5067 rtnl_unlock(); 5062 - if (res < 0) 5063 - free_netdev(bond_dev); 5064 - return res; 5068 + return 0; 5065 5069 } 5066 5070 5067 5071 static int __net_init bond_net_init(struct net *net)
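The bond_main.c hunk reorders the error path so a failed `register_netdevice()` frees the device and drops the rtnl lock before returning, and the carrier/work setup only runs on success. A toy sketch of that control flow with fake primitives standing in for the netdev API (all names here are illustrative, not the bonding driver's):

```c
#include <assert.h>

struct fake_dev {
	int freed;
	int carrier_off;
	int work_init;
};

static int fake_register(int should_fail)
{
	return should_fail ? -22 /* -EINVAL */ : 0;
}

/* Mirrors the corrected ordering in bond_create(): on registration
 * failure, free the device and release the lock before returning;
 * only touch carrier state and workqueues after success. */
static int bond_create_sketch(struct fake_dev *dev, int register_fails,
			      int *rtnl_held)
{
	int res;

	*rtnl_held = 1;			/* rtnl_lock() */
	res = fake_register(register_fails);
	if (res < 0) {
		dev->freed = 1;		/* free_netdev() */
		*rtnl_held = 0;		/* rtnl_unlock() */
		return res;
	}
	dev->carrier_off = 1;		/* netif_carrier_off() */
	dev->work_init = 1;		/* bond_work_init_all() */
	*rtnl_held = 0;			/* rtnl_unlock() */
	return 0;
}
```

The old code called `netif_carrier_off()` and `bond_work_init_all()` even when registration had failed; the adjacent bond_netlink.c hunk makes the same "only on success" fix for `netif_carrier_off()`.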
+1 -2
drivers/net/bonding/bond_netlink.c
··· 456 456 return err; 457 457 458 458 err = register_netdevice(bond_dev); 459 - 460 - netif_carrier_off(bond_dev); 461 459 if (!err) { 462 460 struct bonding *bond = netdev_priv(bond_dev); 463 461 462 + netif_carrier_off(bond_dev); 464 463 bond_work_init_all(bond); 465 464 } 466 465
+23 -19
drivers/net/dsa/microchip/ksz9477.c
··· 974 974 PORT_MIRROR_SNIFFER, false); 975 975 } 976 976 977 - static void ksz9477_phy_setup(struct ksz_device *dev, int port, 978 - struct phy_device *phy) 979 - { 980 - /* Only apply to port with PHY. */ 981 - if (port >= dev->phy_port_cnt) 982 - return; 983 - 984 - /* The MAC actually cannot run in 1000 half-duplex mode. */ 985 - phy_remove_link_mode(phy, 986 - ETHTOOL_LINK_MODE_1000baseT_Half_BIT); 987 - 988 - /* PHY does not support gigabit. */ 989 - if (!(dev->features & GBIT_SUPPORT)) 990 - phy_remove_link_mode(phy, 991 - ETHTOOL_LINK_MODE_1000baseT_Full_BIT); 992 - } 993 - 994 977 static bool ksz9477_get_gbit(struct ksz_device *dev, u8 data) 995 978 { 996 979 bool gbit; ··· 1586 1603 .get_port_addr = ksz9477_get_port_addr, 1587 1604 .cfg_port_member = ksz9477_cfg_port_member, 1588 1605 .flush_dyn_mac_table = ksz9477_flush_dyn_mac_table, 1589 - .phy_setup = ksz9477_phy_setup, 1590 1606 .port_setup = ksz9477_port_setup, 1591 1607 .r_mib_cnt = ksz9477_r_mib_cnt, 1592 1608 .r_mib_pkt = ksz9477_r_mib_pkt, ··· 1599 1617 1600 1618 int ksz9477_switch_register(struct ksz_device *dev) 1601 1619 { 1602 - return ksz_switch_register(dev, &ksz9477_dev_ops); 1620 + int ret, i; 1621 + struct phy_device *phydev; 1622 + 1623 + ret = ksz_switch_register(dev, &ksz9477_dev_ops); 1624 + if (ret) 1625 + return ret; 1626 + 1627 + for (i = 0; i < dev->phy_port_cnt; ++i) { 1628 + if (!dsa_is_user_port(dev->ds, i)) 1629 + continue; 1630 + 1631 + phydev = dsa_to_port(dev->ds, i)->slave->phydev; 1632 + 1633 + /* The MAC actually cannot run in 1000 half-duplex mode. */ 1634 + phy_remove_link_mode(phydev, 1635 + ETHTOOL_LINK_MODE_1000baseT_Half_BIT); 1636 + 1637 + /* PHY does not support gigabit. */ 1638 + if (!(dev->features & GBIT_SUPPORT)) 1639 + phy_remove_link_mode(phydev, 1640 + ETHTOOL_LINK_MODE_1000baseT_Full_BIT); 1641 + } 1642 + return ret; 1603 1643 } 1604 1644 EXPORT_SYMBOL(ksz9477_switch_register); 1605 1645
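The ksz9477 hunk moves the link-mode trimming out of the per-port `phy_setup` hook and into `ksz9477_switch_register()`, after the PHYs exist: 1000BASE-T half-duplex is always removed (the MAC can't do it), and 1000BASE-T full-duplex is also removed on non-gigabit parts. A sketch of that filtering over a plain bitmask (the bit positions are illustrative stand-ins for the `ETHTOOL_LINK_MODE_*` bits, which in the kernel live in a long bitmap, not a `u32`):

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)			(1U << (n))
#define MODE_1000BASET_HALF	BIT(0)
#define MODE_1000BASET_FULL	BIT(1)
#define MODE_100BASET_FULL	BIT(2)

/* Trim a PHY's supported-modes mask the way the loop in
 * ksz9477_switch_register() does with phy_remove_link_mode(). */
static uint32_t trim_modes(uint32_t supported, int gbit_capable)
{
	/* The MAC cannot run in 1000 half-duplex mode at all. */
	supported &= ~MODE_1000BASET_HALF;
	/* Non-gigabit variants lose 1000 full-duplex as well. */
	if (!gbit_capable)
		supported &= ~MODE_1000BASET_FULL;
	return supported;
}
```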
-2
drivers/net/dsa/microchip/ksz_common.c
··· 358 358 359 359 /* setup slave port */ 360 360 dev->dev_ops->port_setup(dev, port, false); 361 - if (dev->dev_ops->phy_setup) 362 - dev->dev_ops->phy_setup(dev, port, phy); 363 361 364 362 /* port_stp_state_set() will be called after to enable the port so 365 363 * there is no need to do anything.
-2
drivers/net/dsa/microchip/ksz_common.h
··· 119 119 u32 (*get_port_addr)(int port, int offset); 120 120 void (*cfg_port_member)(struct ksz_device *dev, int port, u8 member); 121 121 void (*flush_dyn_mac_table)(struct ksz_device *dev, int port); 122 - void (*phy_setup)(struct ksz_device *dev, int port, 123 - struct phy_device *phy); 124 122 void (*port_cleanup)(struct ksz_device *dev, int port); 125 123 void (*port_setup)(struct ksz_device *dev, int port, bool cpu_port); 126 124 void (*r_phy)(struct ksz_device *dev, u16 phy, u16 reg, u16 *val);
+19 -3
drivers/net/dsa/mv88e6xxx/chip.c
··· 664 664 const struct phylink_link_state *state) 665 665 { 666 666 struct mv88e6xxx_chip *chip = ds->priv; 667 + struct mv88e6xxx_port *p; 667 668 int err; 669 + 670 + p = &chip->ports[port]; 668 671 669 672 /* FIXME: is this the correct test? If we're in fixed mode on an 670 673 * internal port, why should we process this any different from ··· 678 675 return; 679 676 680 677 mv88e6xxx_reg_lock(chip); 681 - /* FIXME: should we force the link down here - but if we do, how 682 - * do we restore the link force/unforce state? The driver layering 683 - * gets in the way. 678 + /* In inband mode, the link may come up at any time while the link 679 + * is not forced down. Force the link down while we reconfigure the 680 + * interface mode. 684 681 */ 682 + if (mode == MLO_AN_INBAND && p->interface != state->interface && 683 + chip->info->ops->port_set_link) 684 + chip->info->ops->port_set_link(chip, port, LINK_FORCED_DOWN); 685 + 685 686 err = mv88e6xxx_port_config_interface(chip, port, state->interface); 686 687 if (err && err != -EOPNOTSUPP) 687 688 goto err_unlock; ··· 697 690 */ 698 691 if (err > 0) 699 692 err = 0; 693 + 694 + /* Undo the forced down state above after completing configuration 695 + * irrespective of its state on entry, which allows the link to come up. 696 + */ 697 + if (mode == MLO_AN_INBAND && p->interface != state->interface && 698 + chip->info->ops->port_set_link) 699 + chip->info->ops->port_set_link(chip, port, LINK_UNFORCED); 700 + 701 + p->interface = state->interface; 700 702 701 703 err_unlock: 702 704 mv88e6xxx_reg_unlock(chip);
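The mv88e6xxx change answers the old FIXME by remembering the last configured interface per port (`p->interface` in chip.h below) and forcing the link down only around an actual inband-mode interface change, then unforcing it once reconfiguration completes. The decision itself reduces to a small predicate (the mode/interface constants are illustrative stand-ins for phylink's definitions):

```c
#include <assert.h>

enum { MLO_AN_PHY = 0, MLO_AN_FIXED = 1, MLO_AN_INBAND = 2 };

/* In inband mode the link may come up at any time while it is not
 * forced down, so force it down only when the interface mode is
 * really changing - the condition guarding port_set_link() above. */
static int must_force_link_down(int mode, int old_iface, int new_iface)
{
	return mode == MLO_AN_INBAND && old_iface != new_iface;
}
```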
+1
drivers/net/dsa/mv88e6xxx/chip.h
··· 232 232 u64 atu_full_violation; 233 233 u64 vtu_member_violation; 234 234 u64 vtu_miss_violation; 235 + phy_interface_t interface; 235 236 u8 cmode; 236 237 bool mirror_ingress; 237 238 bool mirror_egress;
+1
drivers/net/ethernet/aquantia/atlantic/aq_hw.h
··· 64 64 u8 rx_rings; 65 65 bool flow_control; 66 66 bool is_64_dma; 67 + u32 quirks; 67 68 u32 priv_data_len; 68 69 }; 69 70
+9
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 415 415 self->aq_nic_cfg.aq_hw_caps->media_type == AQ_HW_MEDIA_TYPE_TP) { 416 416 self->aq_hw->phy_id = HW_ATL_PHY_ID_MAX; 417 417 err = aq_phy_init(self->aq_hw); 418 + 419 + /* Disable the PTP on NICs where it's known to cause datapath 420 + * problems. 421 + * Ideally this should have been done by PHY provisioning, but 422 + * many units have been shipped with enabled PTP block already. 423 + */ 424 + if (self->aq_nic_cfg.aq_hw_caps->quirks & AQ_NIC_QUIRK_BAD_PTP) 425 + if (self->aq_hw->phy_id != HW_ATL_PHY_ID_MAX) 426 + aq_phy_disable_ptp(self->aq_hw); 418 427 } 419 428 420 429 for (i = 0U; i < self->aq_vecs; i++) {
+2
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
··· 81 81 #define AQ_NIC_FLAG_ERR_UNPLUG 0x40000000U 82 82 #define AQ_NIC_FLAG_ERR_HW 0x80000000U 83 83 84 + #define AQ_NIC_QUIRK_BAD_PTP BIT(0) 85 + 84 86 #define AQ_NIC_WOL_MODES (WAKE_MAGIC |\ 85 87 WAKE_PHY) 86 88
+27 -2
drivers/net/ethernet/aquantia/atlantic/aq_phy.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /* aQuantia Corporation Network Driver 3 - * Copyright (C) 2018-2019 aQuantia Corporation. All rights reserved 2 + /* Atlantic Network Driver 3 + * 4 + * Copyright (C) 2018-2019 aQuantia Corporation 5 + * Copyright (C) 2019-2020 Marvell International Ltd. 4 6 */ 5 7 6 8 #include "aq_phy.h" 9 + 10 + #define HW_ATL_PTP_DISABLE_MSK BIT(10) 7 11 8 12 bool aq_mdio_busy_wait(struct aq_hw_s *aq_hw) 9 13 { ··· 148 144 } 149 145 150 146 return true; 147 + } 148 + 149 + void aq_phy_disable_ptp(struct aq_hw_s *aq_hw) 150 + { 151 + static const u16 ptp_registers[] = { 152 + 0x031e, 153 + 0x031d, 154 + 0x031c, 155 + 0x031b, 156 + }; 157 + u16 val; 158 + int i; 159 + 160 + for (i = 0; i < ARRAY_SIZE(ptp_registers); i++) { 161 + val = aq_phy_read_reg(aq_hw, MDIO_MMD_VEND1, 162 + ptp_registers[i]); 163 + 164 + aq_phy_write_reg(aq_hw, MDIO_MMD_VEND1, 165 + ptp_registers[i], 166 + val & ~HW_ATL_PTP_DISABLE_MSK); 167 + } 151 168 }
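`aq_phy_disable_ptp()` above is a read-modify-write loop over four vendor MDIO registers, clearing `HW_ATL_PTP_DISABLE_MSK` (bit 10) in each. The per-register transform is just a mask clear, sketched here against plain values instead of MDIO accesses:

```c
#include <assert.h>
#include <stdint.h>

#define PTP_DISABLE_MSK	(1U << 10)	/* mirrors HW_ATL_PTP_DISABLE_MSK */

/* What each iteration of the loop writes back: the register value
 * with the PTP control bit cleared, all other bits preserved. */
static uint16_t ptp_disabled(uint16_t val)
{
	return (uint16_t)(val & ~PTP_DISABLE_MSK);
}
```

The transform is idempotent, which matters here: the commit notes many units shipped with the PTP block already enabled, so the driver applies it unconditionally on affected (AQC111/AQC112, per the `AQ_NIC_QUIRK_BAD_PTP` caps below) parts at init time.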
+6 -2
drivers/net/ethernet/aquantia/atlantic/aq_phy.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* aQuantia Corporation Network Driver 3 - * Copyright (C) 2018-2019 aQuantia Corporation. All rights reserved 2 + /* Atlantic Network Driver 3 + * 4 + * Copyright (C) 2018-2019 aQuantia Corporation 5 + * Copyright (C) 2019-2020 Marvell International Ltd. 4 6 */ 5 7 6 8 #ifndef AQ_PHY_H ··· 30 28 bool aq_phy_init_phy_id(struct aq_hw_s *aq_hw); 31 29 32 30 bool aq_phy_init(struct aq_hw_s *aq_hw); 31 + 32 + void aq_phy_disable_ptp(struct aq_hw_s *aq_hw); 33 33 34 34 #endif /* AQ_PHY_H */
+25 -1
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c
··· 93 93 AQ_NIC_RATE_100M, 94 94 }; 95 95 96 + const struct aq_hw_caps_s hw_atl_b0_caps_aqc111 = { 97 + DEFAULT_B0_BOARD_BASIC_CAPABILITIES, 98 + .media_type = AQ_HW_MEDIA_TYPE_TP, 99 + .link_speed_msk = AQ_NIC_RATE_5G | 100 + AQ_NIC_RATE_2G5 | 101 + AQ_NIC_RATE_1G | 102 + AQ_NIC_RATE_100M, 103 + .quirks = AQ_NIC_QUIRK_BAD_PTP, 104 + }; 105 + 106 + const struct aq_hw_caps_s hw_atl_b0_caps_aqc112 = { 107 + DEFAULT_B0_BOARD_BASIC_CAPABILITIES, 108 + .media_type = AQ_HW_MEDIA_TYPE_TP, 109 + .link_speed_msk = AQ_NIC_RATE_2G5 | 110 + AQ_NIC_RATE_1G | 111 + AQ_NIC_RATE_100M, 112 + .quirks = AQ_NIC_QUIRK_BAD_PTP, 113 + }; 114 + 96 115 static int hw_atl_b0_hw_reset(struct aq_hw_s *self) 97 116 { 98 117 int err = 0; ··· 373 354 374 355 /* WSP, if min_rate is set for at least one TC. 375 356 * RR otherwise. 357 + * 358 + * NB! MAC FW sets arb mode itself if PTP is enabled. We shouldn't 359 + * overwrite it here in that case. 376 360 */ 377 - hw_atl_tps_tx_pkt_shed_data_arb_mode_set(self, min_rate_msk ? 1U : 0U); 361 + if (!nic_cfg->is_ptp) 362 + hw_atl_tps_tx_pkt_shed_data_arb_mode_set(self, min_rate_msk ? 1U : 0U); 363 + 378 364 /* Data TC Arbiter takes precedence over Descriptor TC Arbiter, 379 365 * leave Descriptor TC Arbiter as RR. 380 366 */
+4 -6
drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.h
··· 18 18 extern const struct aq_hw_caps_s hw_atl_b0_caps_aqc107; 19 19 extern const struct aq_hw_caps_s hw_atl_b0_caps_aqc108; 20 20 extern const struct aq_hw_caps_s hw_atl_b0_caps_aqc109; 21 - 22 - #define hw_atl_b0_caps_aqc111 hw_atl_b0_caps_aqc108 23 - #define hw_atl_b0_caps_aqc112 hw_atl_b0_caps_aqc109 21 + extern const struct aq_hw_caps_s hw_atl_b0_caps_aqc111; 22 + extern const struct aq_hw_caps_s hw_atl_b0_caps_aqc112; 24 23 25 24 #define hw_atl_b0_caps_aqc100s hw_atl_b0_caps_aqc100 26 25 #define hw_atl_b0_caps_aqc107s hw_atl_b0_caps_aqc107 27 26 #define hw_atl_b0_caps_aqc108s hw_atl_b0_caps_aqc108 28 27 #define hw_atl_b0_caps_aqc109s hw_atl_b0_caps_aqc109 29 - 30 - #define hw_atl_b0_caps_aqc111s hw_atl_b0_caps_aqc108 31 - #define hw_atl_b0_caps_aqc112s hw_atl_b0_caps_aqc109 28 + #define hw_atl_b0_caps_aqc111s hw_atl_b0_caps_aqc111 29 + #define hw_atl_b0_caps_aqc112s hw_atl_b0_caps_aqc112 32 30 33 31 extern const struct aq_hw_ops hw_atl_ops_b0; 34 32
+2 -1
drivers/net/ethernet/atheros/ag71xx.c
··· 556 556 ag->mdio_reset = of_reset_control_get_exclusive(np, "mdio"); 557 557 if (IS_ERR(ag->mdio_reset)) { 558 558 netif_err(ag, probe, ndev, "Failed to get reset mdio.\n"); 559 - return PTR_ERR(ag->mdio_reset); 559 + err = PTR_ERR(ag->mdio_reset); 560 + goto mdio_err_put_clk; 560 561 } 561 562 562 563 mii_bus->name = "ag71xx_mdio";
+15 -7
drivers/net/ethernet/broadcom/bnxt/bnxt.c
··· 3418 3418 */ 3419 3419 void bnxt_set_ring_params(struct bnxt *bp) 3420 3420 { 3421 - u32 ring_size, rx_size, rx_space; 3421 + u32 ring_size, rx_size, rx_space, max_rx_cmpl; 3422 3422 u32 agg_factor = 0, agg_ring_size = 0; 3423 3423 3424 3424 /* 8 for CRC and VLAN */ ··· 3474 3474 bp->tx_nr_pages = bnxt_calc_nr_ring_pages(ring_size, TX_DESC_CNT); 3475 3475 bp->tx_ring_mask = (bp->tx_nr_pages * TX_DESC_CNT) - 1; 3476 3476 3477 - ring_size = bp->rx_ring_size * (2 + agg_factor) + bp->tx_ring_size; 3477 + max_rx_cmpl = bp->rx_ring_size; 3478 + /* MAX TPA needs to be added because TPA_START completions are 3479 + * immediately recycled, so the TPA completions are not bound by 3480 + * the RX ring size. 3481 + */ 3482 + if (bp->flags & BNXT_FLAG_TPA) 3483 + max_rx_cmpl += bp->max_tpa; 3484 + /* RX and TPA completions are 32-byte, all others are 16-byte */ 3485 + ring_size = max_rx_cmpl * 2 + agg_ring_size + bp->tx_ring_size; 3478 3486 bp->cp_ring_size = ring_size; 3479 3487 3480 3488 bp->cp_nr_pages = bnxt_calc_nr_ring_pages(ring_size, CP_DESC_CNT); ··· 10393 10385 &bp->sp_event)) 10394 10386 bnxt_hwrm_phy_qcaps(bp); 10395 10387 10396 - if (test_and_clear_bit(BNXT_LINK_CFG_CHANGE_SP_EVENT, 10397 - &bp->sp_event)) 10398 - bnxt_init_ethtool_link_settings(bp); 10399 - 10400 10388 rc = bnxt_update_link(bp, true); 10401 - mutex_unlock(&bp->link_lock); 10402 10389 if (rc) 10403 10390 netdev_err(bp->dev, "SP task can't update link (rc: %x)\n", 10404 10391 rc); 10392 + 10393 + if (test_and_clear_bit(BNXT_LINK_CFG_CHANGE_SP_EVENT, 10394 + &bp->sp_event)) 10395 + bnxt_init_ethtool_link_settings(bp); 10396 + mutex_unlock(&bp->link_lock); 10405 10397 } 10406 10398 if (test_and_clear_bit(BNXT_UPDATE_PHY_SP_EVENT, &bp->sp_event)) { 10407 10399 int rc;
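The bnxt_set_ring_params() hunk changes how the completion ring is sized: RX and TPA completions occupy two 16-byte slots each, and TPA_START completions are not bounded by the RX ring size (their buffers are recycled immediately), so `max_tpa` must be added when TPA is on. The arithmetic, as a pure function with illustrative values:

```c
#include <assert.h>
#include <stdint.h>

/* Completion ring sizing per the fix above: RX/TPA completions are
 * 32-byte (count double), aggregation and TX completions are 16-byte
 * (count once), and TPA adds max_tpa possible extra RX completions. */
static uint32_t cp_ring_size(uint32_t rx_ring_size, uint32_t tx_ring_size,
			     uint32_t agg_ring_size, uint32_t max_tpa,
			     int tpa_enabled)
{
	uint32_t max_rx_cmpl = rx_ring_size;

	if (tpa_enabled)
		max_rx_cmpl += max_tpa;

	return max_rx_cmpl * 2 + agg_ring_size + tx_ring_size;
}
```

The old formula, `rx_ring_size * (2 + agg_factor) + tx_ring_size`, undercounted when TPA generated more completions than the RX ring could hold, which is the overflow this commit avoids.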
+4 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
··· 1765 1765 if (epause->tx_pause) 1766 1766 link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX; 1767 1767 1768 - if (netif_running(dev)) 1768 + if (netif_running(dev)) { 1769 + mutex_lock(&bp->link_lock); 1769 1770 rc = bnxt_hwrm_set_pause(bp); 1771 + mutex_unlock(&bp->link_lock); 1772 + } 1770 1773 return rc; 1771 1774 } 1772 1775
+65 -79
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 543 543 #define VALIDATE_MASK(x) \ 544 544 bcmgenet_hfb_validate_mask(&(x), sizeof(x)) 545 545 546 - static int bcmgenet_hfb_insert_data(u32 *f, int offset, 547 - void *val, void *mask, size_t size) 546 + static int bcmgenet_hfb_insert_data(struct bcmgenet_priv *priv, u32 f_index, 547 + u32 offset, void *val, void *mask, 548 + size_t size) 548 549 { 549 - int index; 550 - u32 tmp; 550 + u32 index, tmp; 551 551 552 - index = offset / 2; 553 - tmp = f[index]; 552 + index = f_index * priv->hw_params->hfb_filter_size + offset / 2; 553 + tmp = bcmgenet_hfb_readl(priv, index * sizeof(u32)); 554 554 555 555 while (size--) { 556 556 if (offset++ & 1) { ··· 567 567 tmp |= 0x10000; 568 568 break; 569 569 } 570 - f[index++] = tmp; 570 + bcmgenet_hfb_writel(priv, tmp, index++ * sizeof(u32)); 571 571 if (size) 572 - tmp = f[index]; 572 + tmp = bcmgenet_hfb_readl(priv, 573 + index * sizeof(u32)); 573 574 } else { 574 575 tmp &= ~0xCFF00; 575 576 tmp |= (*(unsigned char *)val++) << 8; ··· 586 585 break; 587 586 } 588 587 if (!size) 589 - f[index] = tmp; 588 + bcmgenet_hfb_writel(priv, tmp, index * sizeof(u32)); 590 589 } 591 590 } 592 591 593 592 return 0; 594 593 } 595 594 596 - static void bcmgenet_hfb_set_filter(struct bcmgenet_priv *priv, u32 *f_data, 597 - u32 f_length, u32 rx_queue, int f_index) 598 - { 599 - u32 base = f_index * priv->hw_params->hfb_filter_size; 600 - int i; 601 - 602 - for (i = 0; i < f_length; i++) 603 - bcmgenet_hfb_writel(priv, f_data[i], (base + i) * sizeof(u32)); 604 - 605 - bcmgenet_hfb_set_filter_length(priv, f_index, 2 * f_length); 606 - bcmgenet_hfb_set_filter_rx_queue_mapping(priv, f_index, rx_queue); 607 - } 608 - 609 - static int bcmgenet_hfb_create_rxnfc_filter(struct bcmgenet_priv *priv, 610 - struct bcmgenet_rxnfc_rule *rule) 595 + static void bcmgenet_hfb_create_rxnfc_filter(struct bcmgenet_priv *priv, 596 + struct bcmgenet_rxnfc_rule *rule) 611 597 { 612 598 struct ethtool_rx_flow_spec *fs = &rule->fs; 613 - int err = 0, offset = 0, 
f_length = 0; 599 + u32 offset = 0, f_length = 0, f; 614 600 u8 val_8, mask_8; 615 601 __be16 val_16; 616 602 u16 mask_16; 617 603 size_t size; 618 - u32 *f_data; 619 604 620 - f_data = kcalloc(priv->hw_params->hfb_filter_size, sizeof(u32), 621 - GFP_KERNEL); 622 - if (!f_data) 623 - return -ENOMEM; 624 - 605 + f = fs->location; 625 606 if (fs->flow_type & FLOW_MAC_EXT) { 626 - bcmgenet_hfb_insert_data(f_data, 0, 607 + bcmgenet_hfb_insert_data(priv, f, 0, 627 608 &fs->h_ext.h_dest, &fs->m_ext.h_dest, 628 609 sizeof(fs->h_ext.h_dest)); 629 610 } ··· 613 630 if (fs->flow_type & FLOW_EXT) { 614 631 if (fs->m_ext.vlan_etype || 615 632 fs->m_ext.vlan_tci) { 616 - bcmgenet_hfb_insert_data(f_data, 12, 633 + bcmgenet_hfb_insert_data(priv, f, 12, 617 634 &fs->h_ext.vlan_etype, 618 635 &fs->m_ext.vlan_etype, 619 636 sizeof(fs->h_ext.vlan_etype)); 620 - bcmgenet_hfb_insert_data(f_data, 14, 637 + bcmgenet_hfb_insert_data(priv, f, 14, 621 638 &fs->h_ext.vlan_tci, 622 639 &fs->m_ext.vlan_tci, 623 640 sizeof(fs->h_ext.vlan_tci)); ··· 629 646 switch (fs->flow_type & ~(FLOW_EXT | FLOW_MAC_EXT)) { 630 647 case ETHER_FLOW: 631 648 f_length += DIV_ROUND_UP(ETH_HLEN, 2); 632 - bcmgenet_hfb_insert_data(f_data, 0, 649 + bcmgenet_hfb_insert_data(priv, f, 0, 633 650 &fs->h_u.ether_spec.h_dest, 634 651 &fs->m_u.ether_spec.h_dest, 635 652 sizeof(fs->h_u.ether_spec.h_dest)); 636 - bcmgenet_hfb_insert_data(f_data, ETH_ALEN, 653 + bcmgenet_hfb_insert_data(priv, f, ETH_ALEN, 637 654 &fs->h_u.ether_spec.h_source, 638 655 &fs->m_u.ether_spec.h_source, 639 656 sizeof(fs->h_u.ether_spec.h_source)); 640 - bcmgenet_hfb_insert_data(f_data, (2 * ETH_ALEN) + offset, 657 + bcmgenet_hfb_insert_data(priv, f, (2 * ETH_ALEN) + offset, 641 658 &fs->h_u.ether_spec.h_proto, 642 659 &fs->m_u.ether_spec.h_proto, 643 660 sizeof(fs->h_u.ether_spec.h_proto)); ··· 647 664 /* Specify IP Ether Type */ 648 665 val_16 = htons(ETH_P_IP); 649 666 mask_16 = 0xFFFF; 650 - bcmgenet_hfb_insert_data(f_data, (2 * ETH_ALEN) + 
offset, 667 + bcmgenet_hfb_insert_data(priv, f, (2 * ETH_ALEN) + offset, 651 668 &val_16, &mask_16, sizeof(val_16)); 652 - bcmgenet_hfb_insert_data(f_data, 15 + offset, 669 + bcmgenet_hfb_insert_data(priv, f, 15 + offset, 653 670 &fs->h_u.usr_ip4_spec.tos, 654 671 &fs->m_u.usr_ip4_spec.tos, 655 672 sizeof(fs->h_u.usr_ip4_spec.tos)); 656 - bcmgenet_hfb_insert_data(f_data, 23 + offset, 673 + bcmgenet_hfb_insert_data(priv, f, 23 + offset, 657 674 &fs->h_u.usr_ip4_spec.proto, 658 675 &fs->m_u.usr_ip4_spec.proto, 659 676 sizeof(fs->h_u.usr_ip4_spec.proto)); 660 - bcmgenet_hfb_insert_data(f_data, 26 + offset, 677 + bcmgenet_hfb_insert_data(priv, f, 26 + offset, 661 678 &fs->h_u.usr_ip4_spec.ip4src, 662 679 &fs->m_u.usr_ip4_spec.ip4src, 663 680 sizeof(fs->h_u.usr_ip4_spec.ip4src)); 664 - bcmgenet_hfb_insert_data(f_data, 30 + offset, 681 + bcmgenet_hfb_insert_data(priv, f, 30 + offset, 665 682 &fs->h_u.usr_ip4_spec.ip4dst, 666 683 &fs->m_u.usr_ip4_spec.ip4dst, 667 684 sizeof(fs->h_u.usr_ip4_spec.ip4dst)); ··· 671 688 /* Only supports 20 byte IPv4 header */ 672 689 val_8 = 0x45; 673 690 mask_8 = 0xFF; 674 - bcmgenet_hfb_insert_data(f_data, ETH_HLEN + offset, 691 + bcmgenet_hfb_insert_data(priv, f, ETH_HLEN + offset, 675 692 &val_8, &mask_8, 676 693 sizeof(val_8)); 677 694 size = sizeof(fs->h_u.usr_ip4_spec.l4_4_bytes); 678 - bcmgenet_hfb_insert_data(f_data, 695 + bcmgenet_hfb_insert_data(priv, f, 679 696 ETH_HLEN + 20 + offset, 680 697 &fs->h_u.usr_ip4_spec.l4_4_bytes, 681 698 &fs->m_u.usr_ip4_spec.l4_4_bytes, ··· 684 701 break; 685 702 } 686 703 704 + bcmgenet_hfb_set_filter_length(priv, f, 2 * f_length); 687 705 if (!fs->ring_cookie || fs->ring_cookie == RX_CLS_FLOW_WAKE) { 688 706 /* Ring 0 flows can be handled by the default Descriptor Ring 689 707 * We'll map them to ring 0, but don't enable the filter 690 708 */ 691 - bcmgenet_hfb_set_filter(priv, f_data, f_length, 0, 692 - fs->location); 709 + bcmgenet_hfb_set_filter_rx_queue_mapping(priv, f, 0); 693 710 rule->state 
= BCMGENET_RXNFC_STATE_DISABLED; 694 711 } else { 695 712 /* Other Rx rings are direct mapped here */ 696 - bcmgenet_hfb_set_filter(priv, f_data, f_length, 697 - fs->ring_cookie, fs->location); 698 - bcmgenet_hfb_enable_filter(priv, fs->location); 713 + bcmgenet_hfb_set_filter_rx_queue_mapping(priv, f, 714 + fs->ring_cookie); 715 + bcmgenet_hfb_enable_filter(priv, f); 699 716 rule->state = BCMGENET_RXNFC_STATE_ENABLED; 700 717 } 701 - 702 - kfree(f_data); 703 - 704 - return err; 705 718 } 706 719 707 720 /* bcmgenet_hfb_clear 708 721 * 709 722 * Clear Hardware Filter Block and disable all filtering. 710 723 */ 724 + static void bcmgenet_hfb_clear_filter(struct bcmgenet_priv *priv, u32 f_index) 725 + { 726 + u32 base, i; 727 + 728 + base = f_index * priv->hw_params->hfb_filter_size; 729 + for (i = 0; i < priv->hw_params->hfb_filter_size; i++) 730 + bcmgenet_hfb_writel(priv, 0x0, (base + i) * sizeof(u32)); 731 + } 732 + 711 733 static void bcmgenet_hfb_clear(struct bcmgenet_priv *priv) 712 734 { 713 735 u32 i; 736 + 737 + if (GENET_IS_V1(priv) || GENET_IS_V2(priv)) 738 + return; 714 739 715 740 bcmgenet_hfb_reg_writel(priv, 0x0, HFB_CTRL); 716 741 bcmgenet_hfb_reg_writel(priv, 0x0, HFB_FLT_ENABLE_V3PLUS); ··· 731 740 bcmgenet_hfb_reg_writel(priv, 0x0, 732 741 HFB_FLT_LEN_V3PLUS + i * sizeof(u32)); 733 742 734 - for (i = 0; i < priv->hw_params->hfb_filter_cnt * 735 - priv->hw_params->hfb_filter_size; i++) 736 - bcmgenet_hfb_writel(priv, 0x0, i * sizeof(u32)); 743 + for (i = 0; i < priv->hw_params->hfb_filter_cnt; i++) 744 + bcmgenet_hfb_clear_filter(priv, i); 737 745 } 738 746 739 747 static void bcmgenet_hfb_init(struct bcmgenet_priv *priv) 740 748 { 741 749 int i; 742 750 751 + INIT_LIST_HEAD(&priv->rxnfc_list); 743 752 if (GENET_IS_V1(priv) || GENET_IS_V2(priv)) 744 753 return; 745 754 746 - INIT_LIST_HEAD(&priv->rxnfc_list); 747 755 for (i = 0; i < MAX_NUM_OF_FS_RULES; i++) { 748 756 INIT_LIST_HEAD(&priv->rxnfc_rules[i].list); 749 757 priv->rxnfc_rules[i].state = 
BCMGENET_RXNFC_STATE_UNUSED; ··· 1427 1437 loc_rule = &priv->rxnfc_rules[cmd->fs.location]; 1428 1438 if (loc_rule->state == BCMGENET_RXNFC_STATE_ENABLED) 1429 1439 bcmgenet_hfb_disable_filter(priv, cmd->fs.location); 1430 - if (loc_rule->state != BCMGENET_RXNFC_STATE_UNUSED) 1440 + if (loc_rule->state != BCMGENET_RXNFC_STATE_UNUSED) { 1431 1441 list_del(&loc_rule->list); 1442 + bcmgenet_hfb_clear_filter(priv, cmd->fs.location); 1443 + } 1432 1444 loc_rule->state = BCMGENET_RXNFC_STATE_UNUSED; 1433 1445 memcpy(&loc_rule->fs, &cmd->fs, 1434 1446 sizeof(struct ethtool_rx_flow_spec)); 1435 1447 1436 - err = bcmgenet_hfb_create_rxnfc_filter(priv, loc_rule); 1437 - if (err) { 1438 - netdev_err(dev, "rxnfc: Could not install rule (%d)\n", 1439 - err); 1440 - return err; 1441 - } 1448 + bcmgenet_hfb_create_rxnfc_filter(priv, loc_rule); 1442 1449 1443 1450 list_add_tail(&loc_rule->list, &priv->rxnfc_list); 1444 1451 ··· 1460 1473 1461 1474 if (rule->state == BCMGENET_RXNFC_STATE_ENABLED) 1462 1475 bcmgenet_hfb_disable_filter(priv, cmd->fs.location); 1463 - if (rule->state != BCMGENET_RXNFC_STATE_UNUSED) 1476 + if (rule->state != BCMGENET_RXNFC_STATE_UNUSED) { 1464 1477 list_del(&rule->list); 1478 + bcmgenet_hfb_clear_filter(priv, cmd->fs.location); 1479 + } 1465 1480 rule->state = BCMGENET_RXNFC_STATE_UNUSED; 1466 1481 memset(&rule->fs, 0, sizeof(struct ethtool_rx_flow_spec)); 1467 1482 ··· 3988 3999 if (err) 3989 4000 err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 3990 4001 if (err) 3991 - goto err; 4002 + goto err_clk_disable; 3992 4003 3993 4004 /* Mii wait queue */ 3994 4005 init_waitqueue_head(&priv->wq); ··· 4000 4011 if (IS_ERR(priv->clk_wol)) { 4001 4012 dev_dbg(&priv->pdev->dev, "failed to get enet-wol clock\n"); 4002 4013 err = PTR_ERR(priv->clk_wol); 4003 - goto err; 4014 + goto err_clk_disable; 4004 4015 } 4005 4016 4006 4017 priv->clk_eee = devm_clk_get_optional(&priv->pdev->dev, "enet-eee"); 4007 4018 if (IS_ERR(priv->clk_eee)) { 4008 4019 
dev_dbg(&priv->pdev->dev, "failed to get enet-eee clock\n"); 4009 4020 err = PTR_ERR(priv->clk_eee); 4010 - goto err; 4021 + goto err_clk_disable; 4011 4022 } 4012 4023 4013 4024 /* If this is an internal GPHY, power it on now, before UniMAC is ··· 4118 4129 { 4119 4130 struct net_device *dev = dev_get_drvdata(d); 4120 4131 struct bcmgenet_priv *priv = netdev_priv(dev); 4132 + struct bcmgenet_rxnfc_rule *rule; 4121 4133 unsigned long dma_ctrl; 4122 - u32 offset, reg; 4134 + u32 reg; 4123 4135 int ret; 4124 4136 4125 4137 if (!netif_running(dev)) ··· 4151 4161 4152 4162 bcmgenet_set_hw_addr(priv, dev->dev_addr); 4153 4163 4154 - offset = HFB_FLT_ENABLE_V3PLUS; 4155 - bcmgenet_hfb_reg_writel(priv, priv->hfb_en[1], offset); 4156 - bcmgenet_hfb_reg_writel(priv, priv->hfb_en[2], offset + sizeof(u32)); 4157 - bcmgenet_hfb_reg_writel(priv, priv->hfb_en[0], HFB_CTRL); 4164 + /* Restore hardware filters */ 4165 + bcmgenet_hfb_clear(priv); 4166 + list_for_each_entry(rule, &priv->rxnfc_list, list) 4167 + if (rule->state != BCMGENET_RXNFC_STATE_UNUSED) 4168 + bcmgenet_hfb_create_rxnfc_filter(priv, rule); 4158 4169 4159 4170 if (priv->internal_phy) { 4160 4171 reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT); ··· 4199 4208 { 4200 4209 struct net_device *dev = dev_get_drvdata(d); 4201 4210 struct bcmgenet_priv *priv = netdev_priv(dev); 4202 - u32 offset; 4203 4211 4204 4212 if (!netif_running(dev)) 4205 4213 return 0; ··· 4210 4220 if (!device_may_wakeup(d)) 4211 4221 phy_suspend(dev->phydev); 4212 4222 4213 - /* Preserve filter state and disable filtering */ 4214 - priv->hfb_en[0] = bcmgenet_hfb_reg_readl(priv, HFB_CTRL); 4215 - offset = HFB_FLT_ENABLE_V3PLUS; 4216 - priv->hfb_en[1] = bcmgenet_hfb_reg_readl(priv, offset); 4217 - priv->hfb_en[2] = bcmgenet_hfb_reg_readl(priv, offset + sizeof(u32)); 4223 + /* Disable filtering */ 4218 4224 bcmgenet_hfb_reg_writel(priv, 0, HFB_CTRL); 4219 4225 4220 4226 return 0;
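The reworked bcmgenet_hfb_insert_data() above writes filter bytes straight into the HFB registers instead of staging them in a kcalloc'd buffer: byte `offset` lands in word `offset / 2`, even offsets in bits 8-15 and odd offsets in bits 0-7 (the remaining bits carry per-byte mask flags). A minimal userspace sketch of just that packing, ignoring the mask bits and the register I/O:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of the HFB byte layout: two filter bytes per 32-bit word,
 * even offsets in bits 8-15, odd offsets in bits 0-7.  The real driver
 * also maintains per-byte mask/valid bits in the same word and goes
 * through bcmgenet_hfb_readl()/writel(); this sketch skips both. */
static void toy_hfb_insert(uint32_t *words, size_t offset,
			   const uint8_t *val, size_t size)
{
	while (size--) {
		size_t index = offset / 2;

		if (offset++ & 1) {
			words[index] &= ~0xFFu;
			words[index] |= *val++;
		} else {
			words[index] &= ~0xFF00u;
			words[index] |= (uint32_t)*val++ << 8;
		}
	}
}
```

With this layout a run of three bytes starting at offset 0 fills word 0 completely and the high byte of word 1, which is why the patch can index the hardware directly from `f_index * hfb_filter_size + offset / 2`.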
-1
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 696 696 u32 wolopts; 697 697 u8 sopass[SOPASS_MAX]; 698 698 bool wol_active; 699 - u32 hfb_en[3]; 700 699 701 700 struct bcmgenet_mib_counters mib; 702 701
+15 -7
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 217 217 218 218 priv->wol_active = 0; 219 219 clk_disable_unprepare(priv->clk_wol); 220 + priv->crc_fwd_en = 0; 220 221 221 222 /* Disable Magic Packet Detection */ 222 - reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 223 - reg &= ~(MPD_EN | MPD_PW_EN); 224 - bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 223 + if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE)) { 224 + reg = bcmgenet_umac_readl(priv, UMAC_MPD_CTRL); 225 + if (!(reg & MPD_EN)) 226 + return; /* already reset so skip the rest */ 227 + reg &= ~(MPD_EN | MPD_PW_EN); 228 + bcmgenet_umac_writel(priv, reg, UMAC_MPD_CTRL); 229 + } 225 230 226 231 /* Disable WAKE_FILTER Detection */ 227 - reg = bcmgenet_hfb_reg_readl(priv, HFB_CTRL); 228 - reg &= ~(RBUF_HFB_EN | RBUF_ACPI_EN); 229 - bcmgenet_hfb_reg_writel(priv, reg, HFB_CTRL); 232 + if (priv->wolopts & WAKE_FILTER) { 233 + reg = bcmgenet_hfb_reg_readl(priv, HFB_CTRL); 234 + if (!(reg & RBUF_ACPI_EN)) 235 + return; /* already reset so skip the rest */ 236 + reg &= ~(RBUF_HFB_EN | RBUF_ACPI_EN); 237 + bcmgenet_hfb_reg_writel(priv, reg, HFB_CTRL); 238 + } 230 239 231 240 /* Disable CRC Forward */ 232 241 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 233 242 reg &= ~CMD_CRC_FWD; 234 243 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 235 - priv->crc_fwd_en = 0; 236 244 }
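The reworked WoL teardown above only touches the wake-source registers that were actually armed in `wolopts`, and returns early when the enable bit is already clear because a reset got there first. A rough userspace model of that control flow, with a plain integer standing in for the MPD_CTRL register and invented bit values:

```c
#include <stdbool.h>

#define TOY_WAKE_MAGIC 0x1	/* stands in for WAKE_MAGIC/WAKE_MAGICSECURE */
#define TOY_MPD_EN     0x1	/* stands in for the MPD_EN register bit */

/* Returns true when the function bailed out early because the hardware
 * had already been reset, mirroring the new "skip the rest" returns. */
static bool toy_wol_disable_mpd(unsigned int wolopts, unsigned int *mpd_reg)
{
	if (wolopts & TOY_WAKE_MAGIC) {
		if (!(*mpd_reg & TOY_MPD_EN))
			return true;	/* already reset: skip the rest */
		*mpd_reg &= ~TOY_MPD_EN;
	}
	return false;
}
```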
+1 -1
drivers/net/ethernet/cadence/macb_main.c
··· 3736 3736 3737 3737 if (!(bp->caps & MACB_CAPS_USRIO_DISABLED)) { 3738 3738 val = 0; 3739 - if (bp->phy_interface == PHY_INTERFACE_MODE_RGMII) 3739 + if (phy_interface_mode_is_rgmii(bp->phy_interface)) 3740 3740 val = GEM_BIT(RGMII); 3741 3741 else if (bp->phy_interface == PHY_INTERFACE_MODE_RMII && 3742 3742 (bp->caps & MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII))
+1
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2938 2938 txq_info = adap->sge.uld_txq_info[tx_uld_type]; 2939 2939 if (unlikely(!txq_info)) { 2940 2940 WARN_ON(true); 2941 + kfree_skb(skb); 2941 2942 return NET_XMIT_DROP; 2942 2943 } 2943 2944
+1 -1
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
··· 2938 2938 DMA_BIT_MASK(40)); 2939 2939 if (err) { 2940 2940 netdev_err(net_dev, "dma_coerce_mask_and_coherent() failed\n"); 2941 - return err; 2941 + goto free_netdev; 2942 2942 } 2943 2943 2944 2944 /* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
+1 -1
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
··· 3632 3632 3633 3633 dpni_dev = to_fsl_mc_device(priv->net_dev->dev.parent); 3634 3634 dpmac_dev = fsl_mc_get_endpoint(dpni_dev); 3635 - if (IS_ERR(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) 3635 + if (IS_ERR_OR_NULL(dpmac_dev) || dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type) 3636 3636 return 0; 3637 3637 3638 3638 if (dpaa2_mac_is_type_fixed(dpmac_dev, priv->mc_io))
+1
drivers/net/ethernet/freescale/enetc/enetc_pf.c
··· 906 906 return 0; 907 907 908 908 err_reg_netdev: 909 + enetc_mdio_remove(pf); 909 910 enetc_of_put_phy(priv); 910 911 enetc_free_msix(priv); 911 912 err_alloc_msix:
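The one-line enetc fix above (adding `enetc_mdio_remove(pf)` to the `err_reg_netdev` path) restores the kernel's usual goto-unwind invariant, which several other hunks in this merge (mlxsw core.c, dpaa, genet probe) also repair: each failure label undoes, in reverse order, exactly the steps that succeeded before the jump. A toy probe() illustrating the pattern; all names here are invented for the sketch:

```c
/* Records which undo steps ran, and in what order. */
static char undo_log[4];
static int undo_cnt;

static void undo_b(void) { undo_log[undo_cnt++] = 'b'; }
static void undo_a(void) { undo_log[undo_cnt++] = 'a'; }

/* fail_at: 0 = success, N = step N fails after steps 1..N-1 succeeded. */
static int toy_probe(int fail_at)
{
	/* step a */
	if (fail_at == 1)
		goto err_a;
	/* step b */
	if (fail_at == 2)
		goto err_b;
	/* step c */
	if (fail_at == 3)
		goto err_c;
	return 0;

err_c:
	undo_b();	/* the kind of line the enetc fix added */
err_b:
	undo_a();
err_a:
	return -1;
}
```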
+1
drivers/net/ethernet/freescale/fec.h
··· 590 590 void fec_ptp_init(struct platform_device *pdev, int irq_idx); 591 591 void fec_ptp_stop(struct platform_device *pdev); 592 592 void fec_ptp_start_cyclecounter(struct net_device *ndev); 593 + void fec_ptp_disable_hwts(struct net_device *ndev); 593 594 int fec_ptp_set(struct net_device *ndev, struct ifreq *ifr); 594 595 int fec_ptp_get(struct net_device *ndev, struct ifreq *ifr); 595 596
+17 -6
drivers/net/ethernet/freescale/fec_main.c
··· 1294 1294 ndev->stats.tx_bytes += skb->len; 1295 1295 } 1296 1296 1297 - if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) && 1298 - fep->bufdesc_ex) { 1297 + /* NOTE: SKBTX_IN_PROGRESS being set does not imply it's we who 1298 + * are to time stamp the packet, so we still need to check time 1299 + * stamping enabled flag. 1300 + */ 1301 + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS && 1302 + fep->hwts_tx_en) && 1303 + fep->bufdesc_ex) { 1299 1304 struct skb_shared_hwtstamps shhwtstamps; 1300 1305 struct bufdesc_ex *ebdp = (struct bufdesc_ex *)bdp; 1301 1306 ··· 2728 2723 return -ENODEV; 2729 2724 2730 2725 if (fep->bufdesc_ex) { 2731 - if (cmd == SIOCSHWTSTAMP) 2732 - return fec_ptp_set(ndev, rq); 2733 - if (cmd == SIOCGHWTSTAMP) 2734 - return fec_ptp_get(ndev, rq); 2726 + bool use_fec_hwts = !phy_has_hwtstamp(phydev); 2727 + 2728 + if (cmd == SIOCSHWTSTAMP) { 2729 + if (use_fec_hwts) 2730 + return fec_ptp_set(ndev, rq); 2731 + fec_ptp_disable_hwts(ndev); 2732 + } else if (cmd == SIOCGHWTSTAMP) { 2733 + if (use_fec_hwts) 2734 + return fec_ptp_get(ndev, rq); 2735 + } 2735 2736 } 2736 2737 2737 2738 return phy_mii_ioctl(phydev, rq, cmd);
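The fec change above makes MAC time stamping conditional in two places: tx completion only stamps when the FEC's own `hwts_tx_en` flag is set (SKBTX_IN_PROGRESS may have been set on behalf of a time stamping PHY), and the hwtstamp ioctls are routed to the PHY when `phy_has_hwtstamp()` is true, with `fec_ptp_disable_hwts()` switching the MAC engine off. A compact model of both decisions, assuming nothing beyond what the hunk shows:

```c
#include <stdbool.h>

/* SKBTX_IN_PROGRESS alone is not proof the MAC should stamp this
 * packet; the PHY may have set it.  The MAC stamps only when its own
 * enable flag is also set. */
static bool mac_should_stamp_tx(bool skbtx_in_progress, bool hwts_tx_en)
{
	return skbtx_in_progress && hwts_tx_en;
}

/* SIOCSHWTSTAMP/SIOCGHWTSTAMP ownership: a PHY with hardware time
 * stamping takes precedence over the FEC. */
static bool fec_owns_hwtstamp(bool phy_has_hwtstamp)
{
	return !phy_has_hwtstamp;
}
```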
+12
drivers/net/ethernet/freescale/fec_ptp.c
··· 452 452 return -EOPNOTSUPP; 453 453 } 454 454 455 + /** 456 + * fec_ptp_disable_hwts - disable hardware time stamping 457 + * @ndev: pointer to net_device 458 + */ 459 + void fec_ptp_disable_hwts(struct net_device *ndev) 460 + { 461 + struct fec_enet_private *fep = netdev_priv(ndev); 462 + 463 + fep->hwts_tx_en = 0; 464 + fep->hwts_rx_en = 0; 465 + } 466 + 455 467 int fec_ptp_set(struct net_device *ndev, struct ifreq *ifr) 456 468 { 457 469 struct fec_enet_private *fep = netdev_priv(ndev);
+5 -1
drivers/net/ethernet/freescale/gianfar.c
··· 779 779 780 780 mac_addr = of_get_mac_address(np); 781 781 782 - if (!IS_ERR(mac_addr)) 782 + if (!IS_ERR(mac_addr)) { 783 783 ether_addr_copy(dev->dev_addr, mac_addr); 784 + } else { 785 + eth_hw_addr_random(dev); 786 + dev_info(&ofdev->dev, "Using random MAC address: %pM\n", dev->dev_addr); 787 + } 784 788 785 789 if (model && !strcasecmp(model, "TSEC")) 786 790 priv->device_flags |= FSL_GIANFAR_DEV_HAS_GIGABIT |
+1
drivers/net/ethernet/hisilicon/hns3/hnae3.h
··· 77 77 ((ring)->p = ((ring)->p - 1 + (ring)->desc_num) % (ring)->desc_num) 78 78 79 79 enum hns_desc_type { 80 + DESC_TYPE_UNKNOWN, 80 81 DESC_TYPE_SKB, 81 82 DESC_TYPE_FRAGLIST_SKB, 82 83 DESC_TYPE_PAGE,
+12 -12
drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
··· 1118 1118 return -ENOMEM; 1119 1119 } 1120 1120 1121 + desc_cb->priv = priv; 1121 1122 desc_cb->length = size; 1123 + desc_cb->dma = dma; 1124 + desc_cb->type = type; 1122 1125 1123 1126 if (likely(size <= HNS3_MAX_BD_SIZE)) { 1124 - desc_cb->priv = priv; 1125 - desc_cb->dma = dma; 1126 - desc_cb->type = type; 1127 1127 desc->addr = cpu_to_le64(dma); 1128 1128 desc->tx.send_size = cpu_to_le16(size); 1129 1129 desc->tx.bdtp_fe_sc_vld_ra_ri = ··· 1135 1135 } 1136 1136 1137 1137 frag_buf_num = hns3_tx_bd_count(size); 1138 - sizeoflast = size & HNS3_TX_LAST_SIZE_M; 1138 + sizeoflast = size % HNS3_MAX_BD_SIZE; 1139 1139 sizeoflast = sizeoflast ? sizeoflast : HNS3_MAX_BD_SIZE; 1140 1140 1141 1141 /* When frag size is bigger than hardware limit, split this frag */ 1142 1142 for (k = 0; k < frag_buf_num; k++) { 1143 - /* The txbd's baseinfo of DESC_TYPE_PAGE & DESC_TYPE_SKB */ 1144 - desc_cb->priv = priv; 1145 - desc_cb->dma = dma + HNS3_MAX_BD_SIZE * k; 1146 - desc_cb->type = ((type == DESC_TYPE_FRAGLIST_SKB || 1147 - type == DESC_TYPE_SKB) && !k) ? 1148 - type : DESC_TYPE_PAGE; 1149 - 1150 1143 /* now, fill the descriptor */ 1151 1144 desc->addr = cpu_to_le64(dma + HNS3_MAX_BD_SIZE * k); 1152 1145 desc->tx.send_size = cpu_to_le16((k == frag_buf_num - 1) ? 
··· 1151 1158 /* move ring pointer to next */ 1152 1159 ring_ptr_move_fw(ring, next_to_use); 1153 1160 1154 - desc_cb = &ring->desc_cb[ring->next_to_use]; 1155 1161 desc = &ring->desc[ring->next_to_use]; 1156 1162 } 1157 1163 ··· 1338 1346 unsigned int i; 1339 1347 1340 1348 for (i = 0; i < ring->desc_num; i++) { 1349 + struct hns3_desc *desc = &ring->desc[ring->next_to_use]; 1350 + 1351 + memset(desc, 0, sizeof(*desc)); 1352 + 1341 1353 /* check if this is where we started */ 1342 1354 if (ring->next_to_use == next_to_use_orig) 1343 1355 break; 1344 1356 1345 1357 /* rollback one */ 1346 1358 ring_ptr_move_bw(ring, next_to_use); 1359 + 1360 + if (!ring->desc_cb[ring->next_to_use].dma) 1361 + continue; 1347 1362 1348 1363 /* unmap the descriptor dma address */ 1349 1364 if (ring->desc_cb[ring->next_to_use].type == DESC_TYPE_SKB || ··· 1368 1369 1369 1370 ring->desc_cb[ring->next_to_use].length = 0; 1370 1371 ring->desc_cb[ring->next_to_use].dma = 0; 1372 + ring->desc_cb[ring->next_to_use].type = DESC_TYPE_UNKNOWN; 1371 1373 } 1372 1374 } 1373 1375
-2
drivers/net/ethernet/hisilicon/hns3/hns3_enet.h
··· 165 165 #define HNS3_TXD_MSS_S 0 166 166 #define HNS3_TXD_MSS_M (0x3fff << HNS3_TXD_MSS_S) 167 167 168 - #define HNS3_TX_LAST_SIZE_M 0xffff 169 - 170 168 #define HNS3_VECTOR_TX_IRQ BIT_ULL(0) 171 169 #define HNS3_VECTOR_RX_IRQ BIT_ULL(1) 172 170
+22 -27
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
··· 2673 2673 delay_time); 2674 2674 } 2675 2675 2676 - static int hclge_get_mac_link_status(struct hclge_dev *hdev) 2676 + static int hclge_get_mac_link_status(struct hclge_dev *hdev, int *link_status) 2677 2677 { 2678 2678 struct hclge_link_status_cmd *req; 2679 2679 struct hclge_desc desc; 2680 - int link_status; 2681 2680 int ret; 2682 2681 2683 2682 hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_QUERY_LINK_STATUS, true); ··· 2688 2689 } 2689 2690 2690 2691 req = (struct hclge_link_status_cmd *)desc.data; 2691 - link_status = req->status & HCLGE_LINK_STATUS_UP_M; 2692 + *link_status = (req->status & HCLGE_LINK_STATUS_UP_M) > 0 ? 2693 + HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN; 2692 2694 2693 - return !!link_status; 2695 + return 0; 2694 2696 } 2695 2697 2696 - static int hclge_get_mac_phy_link(struct hclge_dev *hdev) 2698 + static int hclge_get_mac_phy_link(struct hclge_dev *hdev, int *link_status) 2697 2699 { 2698 - unsigned int mac_state; 2699 - int link_stat; 2700 + struct phy_device *phydev = hdev->hw.mac.phydev; 2701 + 2702 + *link_status = HCLGE_LINK_STATUS_DOWN; 2700 2703 2701 2704 if (test_bit(HCLGE_STATE_DOWN, &hdev->state)) 2702 2705 return 0; 2703 2706 2704 - mac_state = hclge_get_mac_link_status(hdev); 2707 + if (phydev && (phydev->state != PHY_RUNNING || !phydev->link)) 2708 + return 0; 2705 2709 2706 - if (hdev->hw.mac.phydev) { 2707 - if (hdev->hw.mac.phydev->state == PHY_RUNNING) 2708 - link_stat = mac_state & 2709 - hdev->hw.mac.phydev->link; 2710 - else 2711 - link_stat = 0; 2712 - 2713 - } else { 2714 - link_stat = mac_state; 2715 - } 2716 - 2717 - return !!link_stat; 2710 + return hclge_get_mac_link_status(hdev, link_status); 2718 2711 } 2719 2712 2720 2713 static void hclge_update_link_status(struct hclge_dev *hdev) ··· 2716 2725 struct hnae3_handle *rhandle; 2717 2726 struct hnae3_handle *handle; 2718 2727 int state; 2728 + int ret; 2719 2729 int i; 2720 2730 2721 2731 if (!client) ··· 2725 2733 if 
(test_and_set_bit(HCLGE_STATE_LINK_UPDATING, &hdev->state)) 2726 2734 return; 2727 2735 2728 - state = hclge_get_mac_phy_link(hdev); 2736 + ret = hclge_get_mac_phy_link(hdev, &state); 2737 + if (ret) { 2738 + clear_bit(HCLGE_STATE_LINK_UPDATING, &hdev->state); 2739 + return; 2740 + } 2741 + 2729 2742 if (state != hdev->hw.mac.link) { 2730 2743 for (i = 0; i < hdev->num_vmdq_vport + 1; i++) { 2731 2744 handle = &hdev->vport[i].nic; ··· 6521 6524 { 6522 6525 #define HCLGE_MAC_LINK_STATUS_NUM 100 6523 6526 6527 + int link_status; 6524 6528 int i = 0; 6525 6529 int ret; 6526 6530 6527 6531 do { 6528 - ret = hclge_get_mac_link_status(hdev); 6529 - if (ret < 0) 6532 + ret = hclge_get_mac_link_status(hdev, &link_status); 6533 + if (ret) 6530 6534 return ret; 6531 - else if (ret == link_ret) 6535 + if (link_status == link_ret) 6532 6536 return 0; 6533 6537 6534 6538 msleep(HCLGE_LINK_STATUS_MS); ··· 6540 6542 static int hclge_mac_phy_link_status_wait(struct hclge_dev *hdev, bool en, 6541 6543 bool is_phy) 6542 6544 { 6543 - #define HCLGE_LINK_STATUS_DOWN 0 6544 - #define HCLGE_LINK_STATUS_UP 1 6545 - 6546 6545 int link_ret; 6547 6546 6548 6547 link_ret = en ? HCLGE_LINK_STATUS_UP : HCLGE_LINK_STATUS_DOWN;
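The hclge rework above untangles an overloaded return value: `hclge_get_mac_link_status()` used to return either a negative errno or a boolean link state from the same int, which made the `ret < 0` / `ret == link_ret` comparisons fragile. The fixed version returns only 0 or an error and reports the state through an out-parameter. A minimal sketch of the pattern, with `hw_reg` standing in for the queried register value:

```c
#include <errno.h>

#define TOY_LINK_STATUS_DOWN 0
#define TOY_LINK_STATUS_UP   1

/* The return value carries only success/failure; the link state
 * travels through *link_status, so callers can no longer mistake an
 * errno for a state. */
static int toy_get_link_status(int hw_reg, int *link_status)
{
	if (hw_reg < 0)
		return -EIO;	/* query failed; *link_status untouched */

	*link_status = (hw_reg & 0x1) ? TOY_LINK_STATUS_UP
				      : TOY_LINK_STATUS_DOWN;
	return 0;
}
```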
+3
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
··· 317 317 HCLGE_LF_XSFP_ABSENT, 318 318 }; 319 319 320 + #define HCLGE_LINK_STATUS_DOWN 0 321 + #define HCLGE_LINK_STATUS_UP 1 322 + 320 323 #define HCLGE_PG_NUM 4 321 324 #define HCLGE_SCH_MODE_SP 0 322 325 #define HCLGE_SCH_MODE_DWRR 1
+2 -1
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 710 710 err = mlxsw_core_trap_register(mlxsw_core, &mlxsw_emad_rx_listener, 711 711 mlxsw_core); 712 712 if (err) 713 - return err; 713 + goto err_trap_register; 714 714 715 715 err = mlxsw_core->driver->basic_trap_groups_set(mlxsw_core); 716 716 if (err) ··· 722 722 err_emad_trap_set: 723 723 mlxsw_core_trap_unregister(mlxsw_core, &mlxsw_emad_rx_listener, 724 724 mlxsw_core); 725 + err_trap_register: 725 726 destroy_workqueue(mlxsw_core->emad_wq); 726 727 return err; 727 728 }
+32 -16
drivers/net/ethernet/mellanox/mlxsw/core_env.c
··· 45 45 static int 46 46 mlxsw_env_query_module_eeprom(struct mlxsw_core *mlxsw_core, int module, 47 47 u16 offset, u16 size, void *data, 48 - unsigned int *p_read_size) 48 + bool qsfp, unsigned int *p_read_size) 49 49 { 50 50 char eeprom_tmp[MLXSW_REG_MCIA_EEPROM_SIZE]; 51 51 char mcia_pl[MLXSW_REG_MCIA_LEN]; ··· 54 54 int status; 55 55 int err; 56 56 57 + /* MCIA register accepts buffer size <= 48. Page of size 128 should be 58 + * read by chunks of size 48, 48, 32. Align the size of the last chunk 59 + * to avoid reading after the end of the page. 60 + */ 57 61 size = min_t(u16, size, MLXSW_REG_MCIA_EEPROM_SIZE); 58 62 59 63 if (offset < MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH && ··· 67 63 68 64 i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_LOW; 69 65 if (offset >= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) { 70 - page = MLXSW_REG_MCIA_PAGE_GET(offset); 71 - offset -= MLXSW_REG_MCIA_EEPROM_UP_PAGE_LENGTH * page; 72 - /* When reading upper pages 1, 2 and 3 the offset starts at 73 - * 128. Please refer to "QSFP+ Memory Map" figure in SFF-8436 74 - * specification for graphical depiction. 75 - * MCIA register accepts buffer size <= 48. Page of size 128 76 - * should be read by chunks of size 48, 48, 32. Align the size 77 - * of the last chunk to avoid reading after the end of the 78 - * page. 79 - */ 80 - if (offset + size > MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) 81 - size = MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH - offset; 66 + if (qsfp) { 67 + /* When reading upper pages 1, 2 and 3 the offset 68 + * starts at 128. Please refer to "QSFP+ Memory Map" 69 + * figure in SFF-8436 specification for graphical 70 + * depiction. 71 + */ 72 + page = MLXSW_REG_MCIA_PAGE_GET(offset); 73 + offset -= MLXSW_REG_MCIA_EEPROM_UP_PAGE_LENGTH * page; 74 + if (offset + size > MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH) 75 + size = MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH - offset; 76 + } else { 77 + /* When reading upper pages 1, 2 and 3 the offset 78 + * starts at 0 and I2C high address is used. 
Please refer 79 + * to "Memory Organization" figure in SFF-8472 80 + * specification for graphical depiction. 81 + */ 82 + i2c_addr = MLXSW_REG_MCIA_I2C_ADDR_HIGH; 83 + offset -= MLXSW_REG_MCIA_EEPROM_PAGE_LENGTH; 84 + } 82 85 } 83 86 84 87 mlxsw_reg_mcia_pack(mcia_pl, module, 0, page, offset, size, i2c_addr); ··· 177 166 int err; 178 167 179 168 err = mlxsw_env_query_module_eeprom(mlxsw_core, module, 0, offset, 180 - module_info, &read_size); 169 + module_info, false, &read_size); 181 170 if (err) 182 171 return err; 183 172 ··· 208 197 /* Verify if transceiver provides diagnostic monitoring page */ 209 198 err = mlxsw_env_query_module_eeprom(mlxsw_core, module, 210 199 SFP_DIAGMON, 1, &diag_mon, 211 - &read_size); 200 + false, &read_size); 212 201 if (err) 213 202 return err; 214 203 ··· 236 225 int offset = ee->offset; 237 226 unsigned int read_size; 238 227 int i = 0; 228 + bool qsfp; 239 229 int err; 240 230 241 231 if (!ee->len) 242 232 return -EINVAL; 243 233 244 234 memset(data, 0, ee->len); 235 + /* Validate module identifier value. */ 236 + err = mlxsw_env_validate_cable_ident(mlxsw_core, module, &qsfp); 237 + if (err) 238 + return err; 245 239 246 240 while (i < ee->len) { 247 241 err = mlxsw_env_query_module_eeprom(mlxsw_core, module, offset, 248 242 ee->len - i, data + i, 249 - &read_size); 243 + qsfp, &read_size); 250 244 if (err) { 251 245 netdev_err(netdev, "Eeprom query failed\n"); 252 246 return err;
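The mlxsw fix above exists because QSFP (SFF-8436) and SFP (SFF-8472) modules map upper EEPROM pages differently: QSFP keeps one I2C address and windows page N at byte 128, while SFP restarts the offset at 0 behind a second I2C address. A standalone sketch of the offset translation; the constants mirror the MLXSW_REG_MCIA_* values but are assumptions of this sketch:

```c
#include <stdbool.h>

#define TOY_PAGE_LEN	  256	/* lower memory area */
#define TOY_UP_PAGE_LEN	  128	/* window size of an upper page */
#define TOY_I2C_ADDR_LOW  0x50
#define TOY_I2C_ADDR_HIGH 0x51

struct toy_eeprom_access {
	int i2c_addr;
	int page;
	int offset;
};

static struct toy_eeprom_access toy_map_offset(int offset, bool qsfp)
{
	struct toy_eeprom_access a = { TOY_I2C_ADDR_LOW, 0, offset };

	if (offset < TOY_PAGE_LEN)
		return a;

	if (qsfp) {
		/* SFF-8436: upper page N occupies bytes 128..255. */
		a.page = (offset - TOY_PAGE_LEN) / TOY_UP_PAGE_LEN + 1;
		a.offset = offset - TOY_UP_PAGE_LEN * a.page;
	} else {
		/* SFF-8472: second I2C address, offset restarts at 0. */
		a.i2c_addr = TOY_I2C_ADDR_HIGH;
		a.offset = offset - TOY_PAGE_LEN;
	}
	return a;
}
```

This is why the patch first calls `mlxsw_env_validate_cable_ident()` to learn whether the module is QSFP before issuing any upper-page read.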
+1 -1
drivers/net/ethernet/neterion/vxge/vxge-main.c
··· 98 98 { 99 99 struct sk_buff **skb_ptr = NULL; 100 100 struct sk_buff **temp; 101 - #define NR_SKB_COMPLETED 128 101 + #define NR_SKB_COMPLETED 16 102 102 struct sk_buff *completed[NR_SKB_COMPLETED]; 103 103 int more; 104 104
+5 -2
drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
··· 103 103 void *p) 104 104 { 105 105 struct ionic_lif *lif = netdev_priv(netdev); 106 + unsigned int offset; 106 107 unsigned int size; 107 108 108 109 regs->version = IONIC_DEV_CMD_REG_VERSION; 109 110 111 + offset = 0; 110 112 size = IONIC_DEV_INFO_REG_COUNT * sizeof(u32); 111 - memcpy_fromio(p, lif->ionic->idev.dev_info_regs->words, size); 113 + memcpy_fromio(p + offset, lif->ionic->idev.dev_info_regs->words, size); 112 114 115 + offset += size; 113 116 size = IONIC_DEV_CMD_REG_COUNT * sizeof(u32); 114 - memcpy_fromio(p, lif->ionic->idev.dev_cmd_regs->words, size); 117 + memcpy_fromio(p + offset, lif->ionic->idev.dev_cmd_regs->words, size); 115 118 } 116 119 117 120 static int ionic_get_link_ksettings(struct net_device *netdev,
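The ionic_get_regs() bug fixed above is easy to miss in review: both `memcpy_fromio()` calls targeted `p`, so the second register window overwrote the first. The fix threads a running offset through the copies. The same pattern with plain `memcpy()`:

```c
#include <string.h>

/* Copy two register windows back-to-back into one dump buffer.  The
 * offset must advance past window 1 before window 2 is copied; without
 * that, both copies land at dst[0]. */
static void toy_dump_regs(unsigned char *dst,
			  const unsigned char *win1, size_t n1,
			  const unsigned char *win2, size_t n2)
{
	size_t offset = 0;

	memcpy(dst + offset, win1, n1);
	offset += n1;			/* the line the fix adds */
	memcpy(dst + offset, win2, n2);
}
```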
+25 -25
drivers/net/ethernet/pensando/ionic/ionic_lif.c
··· 96 96 u16 link_status; 97 97 bool link_up; 98 98 99 - if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state) || 100 - test_bit(IONIC_LIF_F_QUEUE_RESET, lif->state)) 99 + if (!test_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state)) 101 100 return; 102 101 103 102 link_status = le16_to_cpu(lif->info->status.link_status); ··· 113 114 netif_carrier_on(netdev); 114 115 } 115 116 116 - if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) 117 + if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) { 118 + mutex_lock(&lif->queue_lock); 117 119 ionic_start_queues(lif); 120 + mutex_unlock(&lif->queue_lock); 121 + } 118 122 } else { 119 123 if (netif_carrier_ok(netdev)) { 120 124 netdev_info(netdev, "Link down\n"); 121 125 netif_carrier_off(netdev); 122 126 } 123 127 124 - if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) 128 + if (lif->netdev->flags & IFF_UP && netif_running(lif->netdev)) { 129 + mutex_lock(&lif->queue_lock); 125 130 ionic_stop_queues(lif); 131 + mutex_unlock(&lif->queue_lock); 132 + } 126 133 } 127 134 128 135 clear_bit(IONIC_LIF_F_LINK_CHECK_REQUESTED, lif->state); ··· 868 863 if (f) 869 864 return 0; 870 865 871 - netdev_dbg(lif->netdev, "rx_filter add ADDR %pM (id %d)\n", addr, 872 - ctx.comp.rx_filter_add.filter_id); 866 + netdev_dbg(lif->netdev, "rx_filter add ADDR %pM\n", addr); 873 867 874 868 memcpy(ctx.cmd.rx_filter_add.mac.addr, addr, ETH_ALEN); 875 869 err = ionic_adminq_post_wait(lif, &ctx); ··· 897 893 return -ENOENT; 898 894 } 899 895 896 + netdev_dbg(lif->netdev, "rx_filter del ADDR %pM (id %d)\n", 897 + addr, f->filter_id); 898 + 900 899 ctx.cmd.rx_filter_del.filter_id = cpu_to_le32(f->filter_id); 901 900 ionic_rx_filter_free(lif, f); 902 901 spin_unlock_bh(&lif->rx_filters.lock); ··· 907 900 err = ionic_adminq_post_wait(lif, &ctx); 908 901 if (err && err != -EEXIST) 909 902 return err; 910 - 911 - netdev_dbg(lif->netdev, "rx_filter del ADDR %pM (id %d)\n", addr, 912 - 
ctx.cmd.rx_filter_del.filter_id); 913 903 914 904 return 0; 915 905 } ··· 1355 1351 }; 1356 1352 int err; 1357 1353 1354 + netdev_dbg(netdev, "rx_filter add VLAN %d\n", vid); 1358 1355 err = ionic_adminq_post_wait(lif, &ctx); 1359 1356 if (err) 1360 1357 return err; 1361 - 1362 - netdev_dbg(netdev, "rx_filter add VLAN %d (id %d)\n", vid, 1363 - ctx.comp.rx_filter_add.filter_id); 1364 1358 1365 1359 return ionic_rx_filter_save(lif, 0, IONIC_RXQ_INDEX_ANY, 0, &ctx); 1366 1360 } ··· 1384 1382 return -ENOENT; 1385 1383 } 1386 1384 1387 - netdev_dbg(netdev, "rx_filter del VLAN %d (id %d)\n", vid, 1388 - le32_to_cpu(ctx.cmd.rx_filter_del.filter_id)); 1385 + netdev_dbg(netdev, "rx_filter del VLAN %d (id %d)\n", 1386 + vid, f->filter_id); 1389 1387 1390 1388 ctx.cmd.rx_filter_del.filter_id = cpu_to_le32(f->filter_id); 1391 1389 ionic_rx_filter_free(lif, f); ··· 1995 1993 bool running; 1996 1994 int err = 0; 1997 1995 1998 - err = ionic_wait_for_bit(lif, IONIC_LIF_F_QUEUE_RESET); 1999 - if (err) 2000 - return err; 2001 - 1996 + mutex_lock(&lif->queue_lock); 2002 1997 running = netif_running(lif->netdev); 2003 1998 if (running) { 2004 1999 netif_device_detach(lif->netdev); 2005 2000 err = ionic_stop(lif->netdev); 2006 2001 if (err) 2007 - goto reset_out; 2002 + return err; 2008 2003 } 2009 2004 2010 2005 if (cb) ··· 2011 2012 err = ionic_open(lif->netdev); 2012 2013 netif_device_attach(lif->netdev); 2013 2014 } 2014 - 2015 - reset_out: 2016 - clear_bit(IONIC_LIF_F_QUEUE_RESET, lif->state); 2015 + mutex_unlock(&lif->queue_lock); 2017 2016 2018 2017 return err; 2019 2018 } ··· 2158 2161 2159 2162 if (test_bit(IONIC_LIF_F_UP, lif->state)) { 2160 2163 dev_info(ionic->dev, "Surprise FW stop, stopping queues\n"); 2164 + mutex_lock(&lif->queue_lock); 2161 2165 ionic_stop_queues(lif); 2166 + mutex_unlock(&lif->queue_lock); 2162 2167 } 2163 2168 2164 2169 if (netif_running(lif->netdev)) { ··· 2279 2280 cancel_work_sync(&lif->deferred.work); 2280 2281 
cancel_work_sync(&lif->tx_timeout_work); 2281 2282 ionic_rx_filters_deinit(lif); 2283 + if (lif->netdev->features & NETIF_F_RXHASH) 2284 + ionic_lif_rss_deinit(lif); 2282 2285 } 2283 - 2284 - if (lif->netdev->features & NETIF_F_RXHASH) 2285 - ionic_lif_rss_deinit(lif); 2286 2286 2287 2287 napi_disable(&lif->adminqcq->napi); 2288 2288 ionic_lif_qcq_deinit(lif, lif->notifyqcq); 2289 2289 ionic_lif_qcq_deinit(lif, lif->adminqcq); 2290 2290 2291 + mutex_destroy(&lif->queue_lock); 2291 2292 ionic_lif_reset(lif); 2292 2293 } 2293 2294 ··· 2464 2465 return err; 2465 2466 2466 2467 lif->hw_index = le16_to_cpu(comp.hw_index); 2468 + mutex_init(&lif->queue_lock); 2467 2469 2468 2470 /* now that we have the hw_index we can figure out our doorbell page */ 2469 2471 lif->dbid_count = le32_to_cpu(lif->ionic->ident.dev.ndbpgs_per_lif);
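The ionic changes above replace the `IONIC_LIF_F_QUEUE_RESET` bit (taken with `wait_on_bit_lock()`) with a real `queue_lock` mutex held around every start/stop/reconfigure path, including the link-check and fw-down paths. A pthread-based sketch of the reconfigure pattern the driver now serializes; the names are invented for the sketch:

```c
#include <pthread.h>

struct toy_lif {
	pthread_mutex_t queue_lock;	/* stands in for struct mutex */
	int running;
};

/* Stop the queues if they run, apply a config change, restart them --
 * all under queue_lock so concurrent link events cannot interleave. */
static int toy_queues_reconfig(struct toy_lif *lif,
			       void (*cb)(struct toy_lif *))
{
	int was_running;

	pthread_mutex_lock(&lif->queue_lock);
	was_running = lif->running;
	if (was_running)
		lif->running = 0;	/* ionic_stop() */
	if (cb)
		cb(lif);		/* the config change */
	if (was_running)
		lif->running = 1;	/* ionic_open() */
	pthread_mutex_unlock(&lif->queue_lock);

	return 0;
}
```

Unlike the old bit-lock, a sleeping mutex gives fair queuing of waiters and lockdep coverage, which is presumably why the driver moved to it.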
+1 -7
drivers/net/ethernet/pensando/ionic/ionic_lif.h
··· 135 135 IONIC_LIF_F_SW_DEBUG_STATS, 136 136 IONIC_LIF_F_UP, 137 137 IONIC_LIF_F_LINK_CHECK_REQUESTED, 138 - IONIC_LIF_F_QUEUE_RESET, 139 138 IONIC_LIF_F_FW_RESET, 140 139 141 140 /* leave this as last */ ··· 164 165 unsigned int hw_index; 165 166 unsigned int kern_pid; 166 167 u64 __iomem *kern_dbpage; 168 + struct mutex queue_lock; /* lock for queue structures */ 167 169 spinlock_t adminq_lock; /* lock for AdminQ operations */ 168 170 struct ionic_qcq *adminqcq; 169 171 struct ionic_qcq *notifyqcq; ··· 212 212 #define lif_to_rxstats(lif, i) ((lif)->rxqcqs[i].stats->rx) 213 213 #define lif_to_txq(lif, i) (&lif_to_txqcq((lif), i)->q) 214 214 #define lif_to_rxq(lif, i) (&lif_to_txqcq((lif), i)->q) 215 - 216 - /* return 0 if successfully set the bit, else non-zero */ 217 - static inline int ionic_wait_for_bit(struct ionic_lif *lif, int bitname) 218 - { 219 - return wait_on_bit_lock(lif->state, bitname, TASK_INTERRUPTIBLE); 220 - } 221 215 222 216 static inline u32 ionic_coal_usec_to_hw(struct ionic *ionic, u32 usecs) 223 217 {
+29
drivers/net/ethernet/pensando/ionic/ionic_rx_filter.c
··· 21 21 void ionic_rx_filter_replay(struct ionic_lif *lif) 22 22 { 23 23 struct ionic_rx_filter_add_cmd *ac; 24 + struct hlist_head new_id_list; 24 25 struct ionic_admin_ctx ctx; 25 26 struct ionic_rx_filter *f; 26 27 struct hlist_head *head; 27 28 struct hlist_node *tmp; 29 + unsigned int key; 28 30 unsigned int i; 29 31 int err; 30 32 33 + INIT_HLIST_HEAD(&new_id_list); 31 34 ac = &ctx.cmd.rx_filter_add; 32 35 33 36 for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) { ··· 61 58 ac->mac.addr); 62 59 break; 63 60 } 61 + spin_lock_bh(&lif->rx_filters.lock); 62 + ionic_rx_filter_free(lif, f); 63 + spin_unlock_bh(&lif->rx_filters.lock); 64 + 65 + continue; 64 66 } 67 + 68 + /* remove from old id list, save new id in tmp list */ 69 + spin_lock_bh(&lif->rx_filters.lock); 70 + hlist_del(&f->by_id); 71 + spin_unlock_bh(&lif->rx_filters.lock); 72 + f->filter_id = le32_to_cpu(ctx.comp.rx_filter_add.filter_id); 73 + hlist_add_head(&f->by_id, &new_id_list); 65 74 } 66 75 } 76 + 77 + /* rebuild the by_id hash lists with the new filter ids */ 78 + spin_lock_bh(&lif->rx_filters.lock); 79 + hlist_for_each_entry_safe(f, tmp, &new_id_list, by_id) { 80 + key = f->filter_id & IONIC_RX_FILTER_HLISTS_MASK; 81 + head = &lif->rx_filters.by_id[key]; 82 + hlist_add_head(&f->by_id, head); 83 + } 84 + spin_unlock_bh(&lif->rx_filters.lock); 67 85 } 68 86 69 87 int ionic_rx_filters_init(struct ionic_lif *lif) ··· 93 69 94 70 spin_lock_init(&lif->rx_filters.lock); 95 71 72 + spin_lock_bh(&lif->rx_filters.lock); 96 73 for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) { 97 74 INIT_HLIST_HEAD(&lif->rx_filters.by_hash[i]); 98 75 INIT_HLIST_HEAD(&lif->rx_filters.by_id[i]); 99 76 } 77 + spin_unlock_bh(&lif->rx_filters.lock); 100 78 101 79 return 0; 102 80 } ··· 110 84 struct hlist_node *tmp; 111 85 unsigned int i; 112 86 87 + spin_lock_bh(&lif->rx_filters.lock); 113 88 for (i = 0; i < IONIC_RX_FILTER_HLISTS; i++) { 114 89 head = &lif->rx_filters.by_id[i]; 115 90 hlist_for_each_entry_safe(f, tmp, head, by_id) 
116 91 ionic_rx_filter_free(lif, f); 117 92 } 93 + spin_unlock_bh(&lif->rx_filters.lock); 118 94 } 119 95 120 96 int ionic_rx_filter_save(struct ionic_lif *lif, u32 flow_id, u16 rxq_index, ··· 152 124 f->filter_id = le32_to_cpu(ctx->comp.rx_filter_add.filter_id); 153 125 f->rxq_index = rxq_index; 154 126 memcpy(&f->cmd, ac, sizeof(f->cmd)); 127 + netdev_dbg(lif->netdev, "rx_filter add filter_id %d\n", f->filter_id); 155 128 156 129 INIT_HLIST_NODE(&f->by_hash); 157 130 INIT_HLIST_NODE(&f->by_id);
-6
drivers/net/ethernet/pensando/ionic/ionic_txrx.c
··· 161 161 return; 162 162 } 163 163 164 - /* no packet processing while resetting */ 165 - if (unlikely(test_bit(IONIC_LIF_F_QUEUE_RESET, q->lif->state))) { 166 - stats->dropped++; 167 - return; 168 - } 169 - 170 164 stats->pkts++; 171 165 stats->bytes += le16_to_cpu(comp->len); 172 166
+2 -2
drivers/net/ethernet/qlogic/qed/qed_cxt.c
··· 2008 2008 enum protocol_type proto; 2009 2009 2010 2010 if (p_hwfn->mcp_info->func_info.protocol == QED_PCI_ETH_RDMA) { 2011 - DP_NOTICE(p_hwfn, 2012 - "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n"); 2011 + DP_VERBOSE(p_hwfn, QED_MSG_SP, 2012 + "Current day drivers don't support RoCE & iWARP simultaneously on the same PF. Default to RoCE-only\n"); 2013 2013 p_hwfn->hw_info.personality = QED_PCI_ETH_ROCE; 2014 2014 } 2015 2015
+1 -1
drivers/net/ethernet/qlogic/qed/qed_dev.c
··· 3102 3102 } 3103 3103 3104 3104 /* Log and clear previous pglue_b errors if such exist */ 3105 - qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt); 3105 + qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_main_ptt, true); 3106 3106 3107 3107 /* Enable the PF's internal FID_enable in the PXP */ 3108 3108 rc = qed_pglueb_set_pfid_enable(p_hwfn, p_hwfn->p_main_ptt,
+31 -22
drivers/net/ethernet/qlogic/qed/qed_int.c
··· 257 257 #define PGLUE_ATTENTION_ZLR_VALID (1 << 25) 258 258 #define PGLUE_ATTENTION_ILT_VALID (1 << 23) 259 259 260 - int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, 261 - struct qed_ptt *p_ptt) 260 + int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 261 + bool hw_init) 262 262 { 263 + char msg[256]; 263 264 u32 tmp; 264 265 265 266 tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS2); ··· 274 273 details = qed_rd(p_hwfn, p_ptt, 275 274 PGLUE_B_REG_TX_ERR_WR_DETAILS); 276 275 277 - DP_NOTICE(p_hwfn, 278 - "Illegal write by chip to [%08x:%08x] blocked.\n" 279 - "Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]\n" 280 - "Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]\n", 281 - addr_hi, addr_lo, details, 282 - (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_PFID), 283 - (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VFID), 284 - GET_FIELD(details, 285 - PGLUE_ATTENTION_DETAILS_VF_VALID) ? 1 : 0, 286 - tmp, 287 - GET_FIELD(tmp, 288 - PGLUE_ATTENTION_DETAILS2_WAS_ERR) ? 1 : 0, 289 - GET_FIELD(tmp, 290 - PGLUE_ATTENTION_DETAILS2_BME) ? 1 : 0, 291 - GET_FIELD(tmp, 292 - PGLUE_ATTENTION_DETAILS2_FID_EN) ? 
1 : 0); 276 + snprintf(msg, sizeof(msg), 277 + "Illegal write by chip to [%08x:%08x] blocked.\n" 278 + "Details: %08x [PFID %02x, VFID %02x, VF_VALID %02x]\n" 279 + "Details2 %08x [Was_error %02x BME deassert %02x FID_enable deassert %02x]", 280 + addr_hi, addr_lo, details, 281 + (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_PFID), 282 + (u8)GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VFID), 283 + !!GET_FIELD(details, PGLUE_ATTENTION_DETAILS_VF_VALID), 284 + tmp, 285 + !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_WAS_ERR), 286 + !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_BME), 287 + !!GET_FIELD(tmp, PGLUE_ATTENTION_DETAILS2_FID_EN)); 288 + 289 + if (hw_init) 290 + DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, "%s\n", msg); 291 + else 292 + DP_NOTICE(p_hwfn, "%s\n", msg); 293 293 } 294 294 295 295 tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_RD_DETAILS2); ··· 323 321 } 324 322 325 323 tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_TX_ERR_WR_DETAILS_ICPL); 326 - if (tmp & PGLUE_ATTENTION_ICPL_VALID) 327 - DP_NOTICE(p_hwfn, "ICPL error - %08x\n", tmp); 324 + if (tmp & PGLUE_ATTENTION_ICPL_VALID) { 325 + snprintf(msg, sizeof(msg), "ICPL error - %08x", tmp); 326 + 327 + if (hw_init) 328 + DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, "%s\n", msg); 329 + else 330 + DP_NOTICE(p_hwfn, "%s\n", msg); 331 + } 328 332 329 333 tmp = qed_rd(p_hwfn, p_ptt, PGLUE_B_REG_MASTER_ZLR_ERR_DETAILS); 330 334 if (tmp & PGLUE_ATTENTION_ZLR_VALID) { ··· 369 361 370 362 static int qed_pglueb_rbc_attn_cb(struct qed_hwfn *p_hwfn) 371 363 { 372 - return qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_dpc_ptt); 364 + return qed_pglueb_rbc_attn_handler(p_hwfn, p_hwfn->p_dpc_ptt, false); 373 365 } 374 366 375 367 static int qed_fw_assertion(struct qed_hwfn *p_hwfn) ··· 1201 1193 index, attn_bits, attn_acks, asserted_bits, 1202 1194 deasserted_bits, p_sb_attn_sw->known_attn); 1203 1195 } else if (asserted_bits == 0x100) { 1204 - DP_INFO(p_hwfn, "MFW indication via attention\n"); 1196 + DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, 1197 + 
"MFW indication via attention\n"); 1205 1198 } else { 1206 1199 DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, 1207 1200 "MFW indication [deassertion]\n");
+2 -2
drivers/net/ethernet/qlogic/qed/qed_int.h
··· 442 442 443 443 #define QED_MAPPING_MEMORY_SIZE(dev) (NUM_OF_SBS(dev)) 444 444 445 - int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, 446 - struct qed_ptt *p_ptt); 445 + int qed_pglueb_rbc_attn_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, 446 + bool hw_init); 447 447 448 448 #endif
+24 -2
drivers/net/ethernet/renesas/ravb_main.c
··· 1450 1450 struct ravb_private *priv = container_of(work, struct ravb_private, 1451 1451 work); 1452 1452 struct net_device *ndev = priv->ndev; 1453 + int error; 1453 1454 1454 1455 netif_tx_stop_all_queues(ndev); 1455 1456 ··· 1459 1458 ravb_ptp_stop(ndev); 1460 1459 1461 1460 /* Wait for DMA stopping */ 1462 - ravb_stop_dma(ndev); 1461 + if (ravb_stop_dma(ndev)) { 1462 + /* If ravb_stop_dma() fails, the hardware is still operating 1463 + * for TX and/or RX. So, this should not call the following 1464 + * functions because ravb_dmac_init() is possible to fail too. 1465 + * Also, this should not retry ravb_stop_dma() again and again 1466 + * here because it's possible to wait forever. So, this just 1467 + * re-enables the TX and RX and skip the following 1468 + * re-initialization procedure. 1469 + */ 1470 + ravb_rcv_snd_enable(ndev); 1471 + goto out; 1472 + } 1463 1473 1464 1474 ravb_ring_free(ndev, RAVB_BE); 1465 1475 ravb_ring_free(ndev, RAVB_NC); 1466 1476 1467 1477 /* Device init */ 1468 - ravb_dmac_init(ndev); 1478 + error = ravb_dmac_init(ndev); 1479 + if (error) { 1480 + /* If ravb_dmac_init() fails, descriptors are freed. So, this 1481 + * should return here to avoid re-enabling the TX and RX in 1482 + * ravb_emac_init(). 1483 + */ 1484 + netdev_err(ndev, "%s: ravb_dmac_init() failed, error %d\n", 1485 + __func__, error); 1486 + return; 1487 + } 1469 1488 ravb_emac_init(ndev); 1470 1489 1490 + out: 1471 1491 /* Initialise PTP Clock driver */ 1472 1492 if (priv->chip_id == RCAR_GEN2) 1473 1493 ravb_ptp_init(ndev, priv->pdev);
+2 -2
drivers/net/ethernet/smsc/smc91x.c
··· 2274 2274 ret = try_toggle_control_gpio(&pdev->dev, &lp->power_gpio, 2275 2275 "power", 0, 0, 100); 2276 2276 if (ret) 2277 - return ret; 2277 + goto out_free_netdev; 2278 2278 2279 2279 /* 2280 2280 * Optional reset GPIO configured? Minimum 100 ns reset needed ··· 2283 2283 ret = try_toggle_control_gpio(&pdev->dev, &lp->reset_gpio, 2284 2284 "reset", 0, 0, 100); 2285 2285 if (ret) 2286 - return ret; 2286 + goto out_free_netdev; 2287 2287 2288 2288 /* 2289 2289 * Need to wait for optional EEPROM to load, max 750 us according
+1 -1
drivers/net/ethernet/socionext/sni_ave.c
··· 1191 1191 ret = regmap_update_bits(priv->regmap, SG_ETPINMODE, 1192 1192 priv->pinmode_mask, priv->pinmode_val); 1193 1193 if (ret) 1194 - return ret; 1194 + goto out_reset_assert; 1195 1195 1196 1196 ave_global_reset(ndev); 1197 1197
+2 -1
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 1850 1850 port->ndev->max_mtu = AM65_CPSW_MAX_PACKET_SIZE; 1851 1851 port->ndev->hw_features = NETIF_F_SG | 1852 1852 NETIF_F_RXCSUM | 1853 - NETIF_F_HW_CSUM; 1853 + NETIF_F_HW_CSUM | 1854 + NETIF_F_HW_TC; 1854 1855 port->ndev->features = port->ndev->hw_features | 1855 1856 NETIF_F_HW_VLAN_CTAG_FILTER; 1856 1857 port->ndev->vlan_features |= NETIF_F_SG;
+1 -1
drivers/net/geneve.c
··· 1615 1615 struct netlink_ext_ack *extack) 1616 1616 { 1617 1617 struct geneve_dev *geneve = netdev_priv(dev); 1618 + enum ifla_geneve_df df = geneve->df; 1618 1619 struct geneve_sock *gs4, *gs6; 1619 1620 struct ip_tunnel_info info; 1620 1621 bool metadata; 1621 1622 bool use_udp6_rx_checksums; 1622 - enum ifla_geneve_df df; 1623 1623 bool ttl_inherit; 1624 1624 int err; 1625 1625
+1 -1
drivers/net/hippi/rrunner.c
··· 1242 1242 rrpriv->info = NULL; 1243 1243 } 1244 1244 if (rrpriv->rx_ctrl) { 1245 - pci_free_consistent(pdev, sizeof(struct ring_ctrl), 1245 + pci_free_consistent(pdev, 256 * sizeof(struct ring_ctrl), 1246 1246 rrpriv->rx_ctrl, rrpriv->rx_ctrl_dma); 1247 1247 rrpriv->rx_ctrl = NULL; 1248 1248 }
+4 -2
drivers/net/ieee802154/adf7242.c
··· 4 4 * 5 5 * Copyright 2009-2017 Analog Devices Inc. 6 6 * 7 - * http://www.analog.com/ADF7242 7 + * https://www.analog.com/ADF7242 8 8 */ 9 9 10 10 #include <linux/kernel.h> ··· 1262 1262 WQ_MEM_RECLAIM); 1263 1263 if (unlikely(!lp->wqueue)) { 1264 1264 ret = -ENOMEM; 1265 - goto err_hw_init; 1265 + goto err_alloc_wq; 1266 1266 } 1267 1267 1268 1268 ret = adf7242_hw_init(lp); ··· 1294 1294 return ret; 1295 1295 1296 1296 err_hw_init: 1297 + destroy_workqueue(lp->wqueue); 1298 + err_alloc_wq: 1297 1299 mutex_destroy(&lp->bmux); 1298 1300 ieee802154_free_hw(lp->hw); 1299 1301
+2 -2
drivers/net/netdevsim/netdev.c
··· 302 302 rtnl_lock(); 303 303 err = nsim_bpf_init(ns); 304 304 if (err) 305 - goto err_free_netdev; 305 + goto err_rtnl_unlock; 306 306 307 307 nsim_ipsec_init(ns); 308 308 ··· 316 316 err_ipsec_teardown: 317 317 nsim_ipsec_teardown(ns); 318 318 nsim_bpf_uninit(ns); 319 + err_rtnl_unlock: 319 320 rtnl_unlock(); 320 - err_free_netdev: 321 321 free_netdev(dev); 322 322 return ERR_PTR(err); 323 323 }
+4
drivers/net/phy/dp83640.c
··· 1260 1260 dp83640->hwts_rx_en = 1; 1261 1261 dp83640->layer = PTP_CLASS_L4; 1262 1262 dp83640->version = PTP_CLASS_V1; 1263 + cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_EVENT; 1263 1264 break; 1264 1265 case HWTSTAMP_FILTER_PTP_V2_L4_EVENT: 1265 1266 case HWTSTAMP_FILTER_PTP_V2_L4_SYNC: ··· 1268 1267 dp83640->hwts_rx_en = 1; 1269 1268 dp83640->layer = PTP_CLASS_L4; 1270 1269 dp83640->version = PTP_CLASS_V2; 1270 + cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L4_EVENT; 1271 1271 break; 1272 1272 case HWTSTAMP_FILTER_PTP_V2_L2_EVENT: 1273 1273 case HWTSTAMP_FILTER_PTP_V2_L2_SYNC: ··· 1276 1274 dp83640->hwts_rx_en = 1; 1277 1275 dp83640->layer = PTP_CLASS_L2; 1278 1276 dp83640->version = PTP_CLASS_V2; 1277 + cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_L2_EVENT; 1279 1278 break; 1280 1279 case HWTSTAMP_FILTER_PTP_V2_EVENT: 1281 1280 case HWTSTAMP_FILTER_PTP_V2_SYNC: ··· 1284 1281 dp83640->hwts_rx_en = 1; 1285 1282 dp83640->layer = PTP_CLASS_L4 | PTP_CLASS_L2; 1286 1283 dp83640->version = PTP_CLASS_V2; 1284 + cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT; 1287 1285 break; 1288 1286 default: 1289 1287 return -ERANGE;
+1
drivers/net/usb/ax88172a.c
··· 187 187 ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, buf, 0); 188 188 if (ret < ETH_ALEN) { 189 189 netdev_err(dev->net, "Failed to read MAC address: %d\n", ret); 190 + ret = -EIO; 190 191 goto free; 191 192 } 192 193 memcpy(dev->net->dev_addr, buf, ETH_ALEN);
+3 -2
drivers/net/usb/hso.c
··· 1390 1390 unsigned long flags; 1391 1391 1392 1392 if (old) 1393 - hso_dbg(0x16, "Termios called with: cflags new[%d] - old[%d]\n", 1394 - tty->termios.c_cflag, old->c_cflag); 1393 + hso_dbg(0x16, "Termios called with: cflags new[%u] - old[%u]\n", 1394 + (unsigned int)tty->termios.c_cflag, 1395 + (unsigned int)old->c_cflag); 1395 1396 1396 1397 /* the actual setup */ 1397 1398 spin_lock_irqsave(&serial->serial_lock, flags);
+3 -1
drivers/net/wan/hdlc_x25.c
··· 71 71 { 72 72 unsigned char *ptr; 73 73 74 - if (skb_cow(skb, 1)) 74 + if (skb_cow(skb, 1)) { 75 + kfree_skb(skb); 75 76 return NET_RX_DROP; 77 + } 76 78 77 79 skb_push(skb, 1); 78 80 skb_reset_network_header(skb);
+5 -3
drivers/net/wan/lapbether.c
··· 128 128 { 129 129 unsigned char *ptr; 130 130 131 - skb_push(skb, 1); 132 - 133 - if (skb_cow(skb, 1)) 131 + if (skb_cow(skb, 1)) { 132 + kfree_skb(skb); 134 133 return NET_RX_DROP; 134 + } 135 + 136 + skb_push(skb, 1); 135 137 136 138 ptr = skb->data; 137 139 *ptr = X25_IFACE_DATA;
+14 -7
drivers/net/wan/x25_asy.c
··· 183 183 netif_wake_queue(sl->dev); 184 184 } 185 185 186 - /* Send one completely decapsulated IP datagram to the IP layer. */ 186 + /* Send an LAPB frame to the LAPB module to process. */ 187 187 188 188 static void x25_asy_bump(struct x25_asy *sl) 189 189 { ··· 195 195 count = sl->rcount; 196 196 dev->stats.rx_bytes += count; 197 197 198 - skb = dev_alloc_skb(count+1); 198 + skb = dev_alloc_skb(count); 199 199 if (skb == NULL) { 200 200 netdev_warn(sl->dev, "memory squeeze, dropping packet\n"); 201 201 dev->stats.rx_dropped++; 202 202 return; 203 203 } 204 - skb_push(skb, 1); /* LAPB internal control */ 205 204 skb_put_data(skb, sl->rbuff, count); 206 205 skb->protocol = x25_type_trans(skb, sl->dev); 207 206 err = lapb_data_received(skb->dev, skb); ··· 208 209 kfree_skb(skb); 209 210 printk(KERN_DEBUG "x25_asy: data received err - %d\n", err); 210 211 } else { 211 - netif_rx(skb); 212 212 dev->stats.rx_packets++; 213 213 } 214 214 } ··· 354 356 */ 355 357 356 358 /* 357 - * Called when I frame data arrives. We did the work above - throw it 358 - * at the net layer. 359 + * Called when I frame data arrive. We add a pseudo header for upper 360 + * layers and pass it to upper layers. 359 361 */ 360 362 361 363 static int x25_asy_data_indication(struct net_device *dev, struct sk_buff *skb) 362 364 { 365 + if (skb_cow(skb, 1)) { 366 + kfree_skb(skb); 367 + return NET_RX_DROP; 368 + } 369 + skb_push(skb, 1); 370 + skb->data[0] = X25_IFACE_DATA; 371 + 372 + skb->protocol = x25_type_trans(skb, dev); 373 + 363 374 return netif_rx(skb); 364 375 } 365 376 ··· 664 657 switch (s) { 665 658 case X25_END: 666 659 if (!test_and_clear_bit(SLF_ERROR, &sl->flags) && 667 - sl->rcount > 2) 660 + sl->rcount >= 2) 668 661 x25_asy_bump(sl); 669 662 clear_bit(SLF_ESCAPE, &sl->flags); 670 663 sl->rcount = 0;
+1 -1
drivers/net/wireless/ath/ath10k/ahb.c
··· 820 820 ath10k_ahb_release_irq_legacy(ar); 821 821 822 822 err_free_pipes: 823 - ath10k_pci_free_pipes(ar); 823 + ath10k_pci_release_resource(ar); 824 824 825 825 err_resource_deinit: 826 826 ath10k_ahb_resource_deinit(ar);
+37 -41
drivers/net/wireless/ath/ath10k/pci.c
··· 3473 3473 3474 3474 timer_setup(&ar_pci->rx_post_retry, ath10k_pci_rx_replenish_retry, 0); 3475 3475 3476 + ar_pci->attr = kmemdup(pci_host_ce_config_wlan, 3477 + sizeof(pci_host_ce_config_wlan), 3478 + GFP_KERNEL); 3479 + if (!ar_pci->attr) 3480 + return -ENOMEM; 3481 + 3482 + ar_pci->pipe_config = kmemdup(pci_target_ce_config_wlan, 3483 + sizeof(pci_target_ce_config_wlan), 3484 + GFP_KERNEL); 3485 + if (!ar_pci->pipe_config) { 3486 + ret = -ENOMEM; 3487 + goto err_free_attr; 3488 + } 3489 + 3490 + ar_pci->serv_to_pipe = kmemdup(pci_target_service_to_ce_map_wlan, 3491 + sizeof(pci_target_service_to_ce_map_wlan), 3492 + GFP_KERNEL); 3493 + if (!ar_pci->serv_to_pipe) { 3494 + ret = -ENOMEM; 3495 + goto err_free_pipe_config; 3496 + } 3497 + 3476 3498 if (QCA_REV_6174(ar) || QCA_REV_9377(ar)) 3477 3499 ath10k_pci_override_ce_config(ar); 3478 3500 ··· 3502 3480 if (ret) { 3503 3481 ath10k_err(ar, "failed to allocate copy engine pipes: %d\n", 3504 3482 ret); 3505 - return ret; 3483 + goto err_free_serv_to_pipe; 3506 3484 } 3507 3485 3508 3486 return 0; 3487 + 3488 + err_free_serv_to_pipe: 3489 + kfree(ar_pci->serv_to_pipe); 3490 + err_free_pipe_config: 3491 + kfree(ar_pci->pipe_config); 3492 + err_free_attr: 3493 + kfree(ar_pci->attr); 3494 + return ret; 3509 3495 } 3510 3496 3511 3497 void ath10k_pci_release_resource(struct ath10k *ar) 3512 3498 { 3499 + struct ath10k_pci *ar_pci = ath10k_pci_priv(ar); 3500 + 3513 3501 ath10k_pci_rx_retry_sync(ar); 3514 3502 netif_napi_del(&ar->napi); 3515 3503 ath10k_pci_ce_deinit(ar); 3516 3504 ath10k_pci_free_pipes(ar); 3505 + kfree(ar_pci->attr); 3506 + kfree(ar_pci->pipe_config); 3507 + kfree(ar_pci->serv_to_pipe); 3517 3508 } 3518 3509 3519 3510 static const struct ath10k_bus_ops ath10k_pci_bus_ops = { ··· 3636 3601 3637 3602 timer_setup(&ar_pci->ps_timer, ath10k_pci_ps_timer, 0); 3638 3603 3639 - ar_pci->attr = kmemdup(pci_host_ce_config_wlan, 3640 - sizeof(pci_host_ce_config_wlan), 3641 - GFP_KERNEL); 3642 - if 
(!ar_pci->attr) { 3643 - ret = -ENOMEM; 3644 - goto err_free; 3645 - } 3646 - 3647 - ar_pci->pipe_config = kmemdup(pci_target_ce_config_wlan, 3648 - sizeof(pci_target_ce_config_wlan), 3649 - GFP_KERNEL); 3650 - if (!ar_pci->pipe_config) { 3651 - ret = -ENOMEM; 3652 - goto err_free; 3653 - } 3654 - 3655 - ar_pci->serv_to_pipe = kmemdup(pci_target_service_to_ce_map_wlan, 3656 - sizeof(pci_target_service_to_ce_map_wlan), 3657 - GFP_KERNEL); 3658 - if (!ar_pci->serv_to_pipe) { 3659 - ret = -ENOMEM; 3660 - goto err_free; 3661 - } 3662 - 3663 3604 ret = ath10k_pci_setup_resource(ar); 3664 3605 if (ret) { 3665 3606 ath10k_err(ar, "failed to setup resource: %d\n", ret); ··· 3716 3705 3717 3706 err_free_irq: 3718 3707 ath10k_pci_free_irq(ar); 3719 - ath10k_pci_rx_retry_sync(ar); 3720 3708 3721 3709 err_deinit_irq: 3722 - ath10k_pci_deinit_irq(ar); 3710 + ath10k_pci_release_resource(ar); 3723 3711 3724 3712 err_sleep: 3725 3713 ath10k_pci_sleep_sync(ar); ··· 3730 3720 err_core_destroy: 3731 3721 ath10k_core_destroy(ar); 3732 3722 3733 - err_free: 3734 - kfree(ar_pci->attr); 3735 - kfree(ar_pci->pipe_config); 3736 - kfree(ar_pci->serv_to_pipe); 3737 - 3738 3723 return ret; 3739 3724 } 3740 3725 3741 3726 static void ath10k_pci_remove(struct pci_dev *pdev) 3742 3727 { 3743 3728 struct ath10k *ar = pci_get_drvdata(pdev); 3744 - struct ath10k_pci *ar_pci; 3745 3729 3746 3730 ath10k_dbg(ar, ATH10K_DBG_PCI, "pci remove\n"); 3747 3731 3748 3732 if (!ar) 3749 - return; 3750 - 3751 - ar_pci = ath10k_pci_priv(ar); 3752 - 3753 - if (!ar_pci) 3754 3733 return; 3755 3734 3756 3735 ath10k_core_unregister(ar); ··· 3749 3750 ath10k_pci_sleep_sync(ar); 3750 3751 ath10k_pci_release(ar); 3751 3752 ath10k_core_destroy(ar); 3752 - kfree(ar_pci->attr); 3753 - kfree(ar_pci->pipe_config); 3754 - kfree(ar_pci->serv_to_pipe); 3755 3753 } 3756 3754 3757 3755 MODULE_DEVICE_TABLE(pci, ath10k_pci_id_table);
+3 -1
drivers/net/wireless/ath/ath9k/hif_usb.c
··· 733 733 return; 734 734 } 735 735 736 + rx_buf->skb = nskb; 737 + 736 738 usb_fill_int_urb(urb, hif_dev->udev, 737 739 usb_rcvintpipe(hif_dev->udev, 738 740 USB_REG_IN_PIPE), 739 741 nskb->data, MAX_REG_IN_BUF_SIZE, 740 - ath9k_hif_usb_reg_in_cb, nskb, 1); 742 + ath9k_hif_usb_reg_in_cb, rx_buf, 1); 741 743 } 742 744 743 745 resubmit:
+14 -2
drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
··· 271 271 { 272 272 struct iwl_fw_ini_trigger_tlv *trig = (void *)tlv->data; 273 273 u32 tp = le32_to_cpu(trig->time_point); 274 + struct iwl_ucode_tlv *dup = NULL; 275 + int ret; 274 276 275 277 if (le32_to_cpu(tlv->length) < sizeof(*trig)) 276 278 return -EINVAL; ··· 285 283 return -EINVAL; 286 284 } 287 285 288 - if (!le32_to_cpu(trig->occurrences)) 286 + if (!le32_to_cpu(trig->occurrences)) { 287 + dup = kmemdup(tlv, sizeof(*tlv) + le32_to_cpu(tlv->length), 288 + GFP_KERNEL); 289 + if (!dup) 290 + return -ENOMEM; 291 + trig = (void *)dup->data; 289 292 trig->occurrences = cpu_to_le32(-1); 293 + tlv = dup; 294 + } 290 295 291 - return iwl_dbg_tlv_add(tlv, &trans->dbg.time_point[tp].trig_list); 296 + ret = iwl_dbg_tlv_add(tlv, &trans->dbg.time_point[tp].trig_list); 297 + kfree(dup); 298 + 299 + return ret; 292 300 } 293 301 294 302 static int (*dbg_tlv_alloc[])(struct iwl_trans *trans,
+3 -5
drivers/net/wireless/intel/iwlwifi/mvm/sta.c
··· 1189 1189 for_each_set_bit(i, &changetid_queues, IWL_MAX_HW_QUEUES) 1190 1190 iwl_mvm_change_queue_tid(mvm, i); 1191 1191 1192 + rcu_read_unlock(); 1193 + 1192 1194 if (free_queue >= 0 && alloc_for_sta != IWL_MVM_INVALID_STA) { 1193 1195 ret = iwl_mvm_free_inactive_queue(mvm, free_queue, queue_owner, 1194 1196 alloc_for_sta); 1195 - if (ret) { 1196 - rcu_read_unlock(); 1197 + if (ret) 1197 1198 return ret; 1198 - } 1199 1199 } 1200 - 1201 - rcu_read_unlock(); 1202 1200 1203 1201 return free_queue; 1204 1202 }
+2
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
··· 582 582 IWL_DEV_INFO(0x30DC, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name), 583 583 IWL_DEV_INFO(0x31DC, 0x1551, iwl9560_2ac_cfg_soc, iwl9560_killer_1550s_name), 584 584 IWL_DEV_INFO(0x31DC, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name), 585 + IWL_DEV_INFO(0xA370, 0x1551, iwl9560_2ac_cfg_soc, iwl9560_killer_1550s_name), 586 + IWL_DEV_INFO(0xA370, 0x1552, iwl9560_2ac_cfg_soc, iwl9560_killer_1550i_name), 585 587 586 588 IWL_DEV_INFO(0x271C, 0x0214, iwl9260_2ac_cfg, iwl9260_1_name), 587 589
+1
drivers/net/wireless/mediatek/mt76/mt76.h
··· 301 301 #define MT_DRV_TX_ALIGNED4_SKBS BIT(1) 302 302 #define MT_DRV_SW_RX_AIRTIME BIT(2) 303 303 #define MT_DRV_RX_DMA_HDR BIT(3) 304 + #define MT_DRV_HW_MGMT_TXQ BIT(4) 304 305 305 306 struct mt76_driver_ops { 306 307 u32 drv_flags;
+2
drivers/net/wireless/mediatek/mt76/mt7603/main.c
··· 642 642 { 643 643 struct mt7603_dev *dev = hw->priv; 644 644 645 + mutex_lock(&dev->mt76.mutex); 645 646 dev->coverage_class = max_t(s16, coverage_class, 0); 646 647 mt7603_mac_set_timing(dev); 648 + mutex_unlock(&dev->mt76.mutex); 647 649 } 648 650 649 651 static void mt7603_tx(struct ieee80211_hw *hw,
+5 -4
drivers/net/wireless/mediatek/mt76/mt7615/debugfs.c
··· 234 234 int i; 235 235 236 236 for (i = 0; i < 16; i++) { 237 - int j, acs = i / 4, index = i % 4; 237 + int j, wmm_idx = i % MT7615_MAX_WMM_SETS; 238 + int acs = i / MT7615_MAX_WMM_SETS; 238 239 u32 ctrl, val, qlen = 0; 239 240 240 - val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, index)); 241 + val = mt76_rr(dev, MT_PLE_AC_QEMPTY(acs, wmm_idx)); 241 242 ctrl = BIT(31) | BIT(15) | (acs << 8); 242 243 243 244 for (j = 0; j < 32; j++) { ··· 246 245 continue; 247 246 248 247 mt76_wr(dev, MT_PLE_FL_Q0_CTRL, 249 - ctrl | (j + (index << 5))); 248 + ctrl | (j + (wmm_idx << 5))); 250 249 qlen += mt76_get_field(dev, MT_PLE_FL_Q3_CTRL, 251 250 GENMASK(11, 0)); 252 251 } 253 - seq_printf(s, "AC%d%d: queued=%d\n", acs, index, qlen); 252 + seq_printf(s, "AC%d%d: queued=%d\n", wmm_idx, acs, qlen); 254 253 } 255 254 256 255 return 0;
+5 -4
drivers/net/wireless/mediatek/mt76/mt7615/dma.c
··· 36 36 mt7622_init_tx_queues_multi(struct mt7615_dev *dev) 37 37 { 38 38 static const u8 wmm_queue_map[] = { 39 - MT7622_TXQ_AC0, 40 - MT7622_TXQ_AC1, 41 - MT7622_TXQ_AC2, 42 - MT7622_TXQ_AC3, 39 + [IEEE80211_AC_BK] = MT7622_TXQ_AC0, 40 + [IEEE80211_AC_BE] = MT7622_TXQ_AC1, 41 + [IEEE80211_AC_VI] = MT7622_TXQ_AC2, 42 + [IEEE80211_AC_VO] = MT7622_TXQ_AC3, 43 43 }; 44 44 int ret; 45 45 int i; ··· 100 100 int i; 101 101 102 102 mt76_queue_tx_cleanup(dev, MT_TXQ_MCU, false); 103 + mt76_queue_tx_cleanup(dev, MT_TXQ_PSD, false); 103 104 if (is_mt7615(&dev->mt76)) { 104 105 mt76_queue_tx_cleanup(dev, MT_TXQ_BE, false); 105 106 } else {
+1 -2
drivers/net/wireless/mediatek/mt76/mt7615/eeprom.c
··· 72 72 { 73 73 int ret; 74 74 75 - ret = mt76_eeprom_init(&dev->mt76, MT7615_EEPROM_SIZE + 76 - MT7615_EEPROM_EXTRA_DATA); 75 + ret = mt76_eeprom_init(&dev->mt76, MT7615_EEPROM_FULL_SIZE); 77 76 if (ret < 0) 78 77 return ret; 79 78
+1 -1
drivers/net/wireless/mediatek/mt76/mt7615/eeprom.h
··· 17 17 #define MT7615_EEPROM_TXDPD_SIZE 216 18 18 #define MT7615_EEPROM_TXDPD_COUNT (44 + 3) 19 19 20 - #define MT7615_EEPROM_EXTRA_DATA (MT7615_EEPROM_TXDPD_OFFSET + \ 20 + #define MT7615_EEPROM_FULL_SIZE (MT7615_EEPROM_TXDPD_OFFSET + \ 21 21 MT7615_EEPROM_TXDPD_COUNT * \ 22 22 MT7615_EEPROM_TXDPD_SIZE) 23 23
+8 -14
drivers/net/wireless/mediatek/mt76/mt7615/mac.c
··· 526 526 fc_type = (le16_to_cpu(fc) & IEEE80211_FCTL_FTYPE) >> 2; 527 527 fc_stype = (le16_to_cpu(fc) & IEEE80211_FCTL_STYPE) >> 4; 528 528 529 - if (ieee80211_is_data(fc) || ieee80211_is_bufferable_mmpdu(fc)) { 530 - q_idx = wmm_idx * MT7615_MAX_WMM_SETS + 531 - skb_get_queue_mapping(skb); 532 - p_fmt = is_usb ? MT_TX_TYPE_SF : MT_TX_TYPE_CT; 533 - } else if (beacon) { 534 - if (ext_phy) 535 - q_idx = MT_LMAC_BCN1; 536 - else 537 - q_idx = MT_LMAC_BCN0; 529 + if (beacon) { 538 530 p_fmt = MT_TX_TYPE_FW; 539 - } else { 540 - if (ext_phy) 541 - q_idx = MT_LMAC_ALTX1; 542 - else 543 - q_idx = MT_LMAC_ALTX0; 531 + q_idx = ext_phy ? MT_LMAC_BCN1 : MT_LMAC_BCN0; 532 + } else if (skb_get_queue_mapping(skb) >= MT_TXQ_PSD) { 544 533 p_fmt = is_usb ? MT_TX_TYPE_SF : MT_TX_TYPE_CT; 534 + q_idx = ext_phy ? MT_LMAC_ALTX1 : MT_LMAC_ALTX0; 535 + } else { 536 + p_fmt = is_usb ? MT_TX_TYPE_SF : MT_TX_TYPE_CT; 537 + q_idx = wmm_idx * MT7615_MAX_WMM_SETS + 538 + mt7615_lmac_mapping(dev, skb_get_queue_mapping(skb)); 545 539 } 546 540 547 541 val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + sz_txd) |
-15
drivers/net/wireless/mediatek/mt76/mt7615/mac.h
··· 124 124 MT_TX_TYPE_FW, 125 125 }; 126 126 127 - enum tx_pkt_queue_idx { 128 - MT_LMAC_AC00, 129 - MT_LMAC_AC01, 130 - MT_LMAC_AC02, 131 - MT_LMAC_AC03, 132 - MT_LMAC_ALTX0 = 0x10, 133 - MT_LMAC_BMC0, 134 - MT_LMAC_BCN0, 135 - MT_LMAC_PSMP0, 136 - MT_LMAC_ALTX1, 137 - MT_LMAC_BMC1, 138 - MT_LMAC_BCN1, 139 - MT_LMAC_PSMP1, 140 - }; 141 - 142 127 enum tx_port_idx { 143 128 MT_TX_PORT_IDX_LMAC, 144 129 MT_TX_PORT_IDX_MCU
+4
drivers/net/wireless/mediatek/mt76/mt7615/main.c
··· 397 397 struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv; 398 398 struct mt7615_dev *dev = mt7615_hw_dev(hw); 399 399 400 + queue = mt7615_lmac_mapping(dev, queue); 400 401 queue += mvif->wmm_idx * MT7615_MAX_WMM_SETS; 401 402 402 403 return mt7615_mcu_set_wmm(dev, queue, params); ··· 736 735 mt7615_set_coverage_class(struct ieee80211_hw *hw, s16 coverage_class) 737 736 { 738 737 struct mt7615_phy *phy = mt7615_hw_phy(hw); 738 + struct mt7615_dev *dev = phy->dev; 739 739 740 + mutex_lock(&dev->mt76.mutex); 740 741 phy->coverage_class = max_t(s16, coverage_class, 0); 741 742 mt7615_mac_set_timing(phy); 743 + mutex_unlock(&dev->mt76.mutex); 742 744 } 743 745 744 746 static int
+1 -1
drivers/net/wireless/mediatek/mt76/mt7615/mmio.c
··· 146 146 static const struct mt76_driver_ops drv_ops = { 147 147 /* txwi_size = txd size + txp size */ 148 148 .txwi_size = MT_TXD_SIZE + sizeof(struct mt7615_txp_common), 149 - .drv_flags = MT_DRV_TXWI_NO_FREE, 149 + .drv_flags = MT_DRV_TXWI_NO_FREE | MT_DRV_HW_MGMT_TXQ, 150 150 .survey_flags = SURVEY_INFO_TIME_TX | 151 151 SURVEY_INFO_TIME_RX | 152 152 SURVEY_INFO_TIME_BSS_RX,
+30
drivers/net/wireless/mediatek/mt76/mt7615/mt7615.h
··· 282 282 struct list_head wd_head; 283 283 }; 284 284 285 + enum tx_pkt_queue_idx { 286 + MT_LMAC_AC00, 287 + MT_LMAC_AC01, 288 + MT_LMAC_AC02, 289 + MT_LMAC_AC03, 290 + MT_LMAC_ALTX0 = 0x10, 291 + MT_LMAC_BMC0, 292 + MT_LMAC_BCN0, 293 + MT_LMAC_PSMP0, 294 + MT_LMAC_ALTX1, 295 + MT_LMAC_BMC1, 296 + MT_LMAC_BCN1, 297 + MT_LMAC_PSMP1, 298 + }; 299 + 285 300 enum { 286 301 HW_BSSID_0 = 0x0, 287 302 HW_BSSID_1, ··· 460 445 return MT7663_WTBL_SIZE; 461 446 else 462 447 return MT7615_WTBL_SIZE; 448 + } 449 + 450 + static inline u8 mt7615_lmac_mapping(struct mt7615_dev *dev, u8 ac) 451 + { 452 + static const u8 lmac_queue_map[] = { 453 + [IEEE80211_AC_BK] = MT_LMAC_AC00, 454 + [IEEE80211_AC_BE] = MT_LMAC_AC01, 455 + [IEEE80211_AC_VI] = MT_LMAC_AC02, 456 + [IEEE80211_AC_VO] = MT_LMAC_AC03, 457 + }; 458 + 459 + if (WARN_ON_ONCE(ac >= ARRAY_SIZE(lmac_queue_map))) 460 + return MT_LMAC_AC01; /* BE */ 461 + 462 + return lmac_queue_map[ac]; 463 463 } 464 464 465 465 void mt7615_dma_reset(struct mt7615_dev *dev);
+7 -6
drivers/net/wireless/mediatek/mt76/mt7615/usb.c
··· 270 270 { 271 271 static const struct mt76_driver_ops drv_ops = { 272 272 .txwi_size = MT_USB_TXD_SIZE, 273 - .drv_flags = MT_DRV_RX_DMA_HDR, 273 + .drv_flags = MT_DRV_RX_DMA_HDR | MT_DRV_HW_MGMT_TXQ, 274 274 .tx_prepare_skb = mt7663u_tx_prepare_skb, 275 275 .tx_complete_skb = mt7663u_tx_complete_skb, 276 276 .tx_status_data = mt7663u_tx_status_data, ··· 329 329 if (!mt76_poll_msec(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_PWR_ON, 330 330 FW_STATE_PWR_ON << 1, 500)) { 331 331 dev_err(dev->mt76.dev, "Timeout for power on\n"); 332 - return -EIO; 332 + ret = -EIO; 333 + goto error; 333 334 } 334 335 335 336 alloc_queues: 336 337 ret = mt76u_alloc_mcu_queue(&dev->mt76); 337 338 if (ret) 338 - goto error; 339 + goto error_free_q; 339 340 340 341 ret = mt76u_alloc_queues(&dev->mt76); 341 342 if (ret) 342 - goto error; 343 + goto error_free_q; 343 344 344 345 ret = mt7663u_register_device(dev); 345 346 if (ret) 346 - goto error_freeq; 347 + goto error_free_q; 347 348 348 349 return 0; 349 350 350 - error_freeq: 351 + error_free_q: 351 352 mt76u_queues_deinit(&dev->mt76); 352 353 error: 353 354 mt76u_deinit(&dev->mt76);
+3 -2
drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
··· 456 456 tasklet_disable(&dev->mt76.tx_tasklet); 457 457 napi_disable(&dev->mt76.tx_napi); 458 458 459 - for (i = 0; i < ARRAY_SIZE(dev->mt76.napi); i++) 459 + mt76_for_each_q_rx(&dev->mt76, i) { 460 460 napi_disable(&dev->mt76.napi[i]); 461 + } 461 462 462 463 mutex_lock(&dev->mt76.mutex); 463 464 ··· 516 515 517 516 tasklet_enable(&dev->mt76.pre_tbtt_tasklet); 518 517 519 - for (i = 0; i < ARRAY_SIZE(dev->mt76.napi); i++) { 518 + mt76_for_each_q_rx(&dev->mt76, i) { 520 519 napi_enable(&dev->mt76.napi[i]); 521 520 napi_schedule(&dev->mt76.napi[i]); 522 521 }
+3
drivers/net/wireless/mediatek/mt76/mt7915/main.c
··· 716 716 mt7915_set_coverage_class(struct ieee80211_hw *hw, s16 coverage_class) 717 717 { 718 718 struct mt7915_phy *phy = mt7915_hw_phy(hw); 719 + struct mt7915_dev *dev = phy->dev; 719 720 721 + mutex_lock(&dev->mt76.mutex); 720 722 phy->coverage_class = max_t(s16, coverage_class, 0); 721 723 mt7915_mac_set_timing(phy); 724 + mutex_unlock(&dev->mt76.mutex); 722 725 } 723 726 724 727 static int
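The mt7915 hunk above wraps the coverage-class update in the device mutex so the stored class and the timing derived from it cannot be observed half-updated. A hypothetical userspace analog of that setter pattern (struct phy_state and apply_timing() are illustrative stand-ins, not the driver API):

```c
#include <pthread.h>

/* Userspace sketch: the setter takes the lock around both the store
 * and the recomputation of the derived timing value, mirroring the
 * mutex added around mt7915_mac_set_timing() above. */
struct phy_state {
	pthread_mutex_t lock;
	int coverage_class;
	int timing;		/* stands in for the hardware timing update */
};

static void apply_timing(struct phy_state *phy)
{
	/* arbitrary derived value, recomputed under the same lock */
	phy->timing = phy->coverage_class * 2;
}

void set_coverage_class(struct phy_state *phy, int coverage_class)
{
	pthread_mutex_lock(&phy->lock);
	/* mirrors max_t(s16, coverage_class, 0) in the hunk above */
	phy->coverage_class = coverage_class < 0 ? 0 : coverage_class;
	apply_timing(phy);
	pthread_mutex_unlock(&phy->lock);
}
```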
+7
drivers/net/wireless/mediatek/mt76/tx.c
··· 264 264 skb_set_queue_mapping(skb, qid); 265 265 } 266 266 267 + if ((dev->drv->drv_flags & MT_DRV_HW_MGMT_TXQ) && 268 + !ieee80211_is_data(hdr->frame_control) && 269 + !ieee80211_is_bufferable_mmpdu(hdr->frame_control)) { 270 + qid = MT_TXQ_PSD; 271 + skb_set_queue_mapping(skb, qid); 272 + } 273 + 267 274 if (!(wcid->tx_info & MT_WCID_TX_INFO_SET)) 268 275 ieee80211_get_tx_rates(info->control.vif, sta, skb, 269 276 info->control.rates, 1);
+26 -13
drivers/net/wireless/mediatek/mt76/usb.c
··· 1010 1010 static u8 mt76u_ac_to_hwq(struct mt76_dev *dev, u8 ac) 1011 1011 { 1012 1012 if (mt76_chip(dev) == 0x7663) { 1013 - static const u8 wmm_queue_map[] = { 1014 - [IEEE80211_AC_VO] = 0, 1015 - [IEEE80211_AC_VI] = 1, 1016 - [IEEE80211_AC_BE] = 2, 1017 - [IEEE80211_AC_BK] = 4, 1013 + static const u8 lmac_queue_map[] = { 1014 + /* ac to lmac mapping */ 1015 + [IEEE80211_AC_BK] = 0, 1016 + [IEEE80211_AC_BE] = 1, 1017 + [IEEE80211_AC_VI] = 2, 1018 + [IEEE80211_AC_VO] = 4, 1018 1019 }; 1019 1020 1020 - if (WARN_ON(ac >= ARRAY_SIZE(wmm_queue_map))) 1021 + if (WARN_ON(ac >= ARRAY_SIZE(lmac_queue_map))) 1021 - return 2; /* BE */ 1022 + return 1; /* BE */ 1022 1023 1023 - return wmm_queue_map[ac]; 1024 + return lmac_queue_map[ac]; 1024 1025 } 1025 1026 1026 1027 return mt76_ac_to_hwq(ac); ··· 1067 1066 1068 1067 static void mt76u_free_tx(struct mt76_dev *dev) 1069 1068 { 1070 - struct mt76_queue *q; 1071 - int i, j; 1069 + int i; 1072 1070 1073 1071 for (i = 0; i < IEEE80211_NUM_ACS; i++) { 1072 + struct mt76_queue *q; 1073 + int j; 1074 + 1074 1075 q = dev->q_tx[i].q; 1076 + if (!q) 1077 + continue; 1078 + 1075 1079 for (j = 0; j < q->ndesc; j++) 1076 1080 usb_free_urb(q->entry[j].urb); 1077 1081 } ··· 1084 1078 1085 1079 void mt76u_stop_tx(struct mt76_dev *dev) 1086 1080 { 1087 - struct mt76_queue_entry entry; 1088 - struct mt76_queue *q; 1089 - int i, j, ret; 1081 + int ret; 1090 1082 1091 1083 ret = wait_event_timeout(dev->tx_wait, !mt76_has_tx_pending(&dev->phy), 1092 1084 HZ / 5); 1093 1085 if (!ret) { 1086 + struct mt76_queue_entry entry; 1087 + struct mt76_queue *q; 1088 + int i, j; 1089 + 1094 1090 dev_err(dev->dev, "timed out waiting for pending tx\n"); 1095 1091 1096 1092 for (i = 0; i < IEEE80211_NUM_ACS; i++) { 1097 1093 q = dev->q_tx[i].q; 1094 + if (!q) 1095 + continue; 1096 + 1098 1097 for (j = 0; j < q->ndesc; j++) 1099 1098 usb_kill_urb(q->entry[j].urb); 1100 1099 } ··· 1111 1100 */ 1112 1101 for (i = 0; i < IEEE80211_NUM_ACS; i++) { 1113 1102 q = dev->q_tx[i].q; 1103 + if (!q) 1104 + continue; 1114 1105 1115 1106 /* Assure we are in sync with killed tasklet. */ 1116 1107 spin_lock_bh(&q->lock);
+44 -24
drivers/net/xen-netfront.c
··· 63 63 MODULE_PARM_DESC(max_queues, 64 64 "Maximum number of queues per virtual interface"); 65 65 66 + #define XENNET_TIMEOUT (5 * HZ) 67 + 66 68 static const struct ethtool_ops xennet_ethtool_ops; 67 69 68 70 struct netfront_cb { ··· 1336 1334 1337 1335 netif_carrier_off(netdev); 1338 1336 1339 - xenbus_switch_state(dev, XenbusStateInitialising); 1340 - wait_event(module_wq, 1341 - xenbus_read_driver_state(dev->otherend) != 1342 - XenbusStateClosed && 1343 - xenbus_read_driver_state(dev->otherend) != 1344 - XenbusStateUnknown); 1337 + do { 1338 + xenbus_switch_state(dev, XenbusStateInitialising); 1339 + err = wait_event_timeout(module_wq, 1340 + xenbus_read_driver_state(dev->otherend) != 1341 + XenbusStateClosed && 1342 + xenbus_read_driver_state(dev->otherend) != 1343 + XenbusStateUnknown, XENNET_TIMEOUT); 1344 + } while (!err); 1345 + 1345 1346 return netdev; 1346 1347 1347 1348 exit: ··· 2144 2139 }; 2145 2140 #endif /* CONFIG_SYSFS */ 2146 2141 2142 + static void xennet_bus_close(struct xenbus_device *dev) 2143 + { 2144 + int ret; 2145 + 2146 + if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed) 2147 + return; 2148 + do { 2149 + xenbus_switch_state(dev, XenbusStateClosing); 2150 + ret = wait_event_timeout(module_wq, 2151 + xenbus_read_driver_state(dev->otherend) == 2152 + XenbusStateClosing || 2153 + xenbus_read_driver_state(dev->otherend) == 2154 + XenbusStateClosed || 2155 + xenbus_read_driver_state(dev->otherend) == 2156 + XenbusStateUnknown, 2157 + XENNET_TIMEOUT); 2158 + } while (!ret); 2159 + 2160 + if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed) 2161 + return; 2162 + 2163 + do { 2164 + xenbus_switch_state(dev, XenbusStateClosed); 2165 + ret = wait_event_timeout(module_wq, 2166 + xenbus_read_driver_state(dev->otherend) == 2167 + XenbusStateClosed || 2168 + xenbus_read_driver_state(dev->otherend) == 2169 + XenbusStateUnknown, 2170 + XENNET_TIMEOUT); 2171 + } while (!ret); 2172 + } 2173 + 2147 2174 static int xennet_remove(struct xenbus_device *dev) 2148 2175 { 2149 2176 struct netfront_info *info = dev_get_drvdata(&dev->dev); 2150 2177 2151 - dev_dbg(&dev->dev, "%s\n", dev->nodename); 2152 - 2153 - if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) { 2154 - xenbus_switch_state(dev, XenbusStateClosing); 2155 - wait_event(module_wq, 2156 - xenbus_read_driver_state(dev->otherend) == 2157 - XenbusStateClosing || 2158 - xenbus_read_driver_state(dev->otherend) == 2159 - XenbusStateUnknown); 2160 - 2161 - xenbus_switch_state(dev, XenbusStateClosed); 2162 - wait_event(module_wq, 2163 - xenbus_read_driver_state(dev->otherend) == 2164 - XenbusStateClosed || 2165 - xenbus_read_driver_state(dev->otherend) == 2166 - XenbusStateUnknown); 2167 - } 2168 - 2178 + xennet_bus_close(dev); 2169 2179 xennet_disconnect_backend(info); 2170 2180 2171 2181 if (info->netdev->reg_state == NETREG_REGISTERED)
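The xen-netfront hunks above replace unbounded wait_event() calls with a retry loop around wait_event_timeout(), which returns 0 on timeout, so a missed wakeup re-arms the wait instead of hanging forever. A self-contained sketch of that shape (the simulated wait pretends the peer state only becomes visible on the third read; it is an illustrative stand-in, not the xenbus API):

```c
#include <stdbool.h>

static int state_reads;

static bool peer_closed(void)
{
	return state_reads >= 3;
}

/* Models wait_event_timeout(): nonzero when the condition was met,
 * 0 when the timeout elapsed first. */
static int wait_event_timeout_sim(bool (*cond)(void), int timeout)
{
	state_reads++;
	return cond() ? timeout : 0;
}

int wait_until_peer_closed(void)
{
	int ret;

	do {
		ret = wait_event_timeout_sim(peer_closed, 5000);
	} while (!ret);	/* a timeout just retries the bounded wait */

	return state_reads;
}
```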
+1
drivers/nfc/s3fwrn5/core.c
··· 198 198 case S3FWRN5_MODE_FW: 199 199 return s3fwrn5_fw_recv_frame(ndev, skb); 200 200 default: 201 + kfree_skb(skb); 201 202 return -ENODEV; 202 203 } 203 204 }
+9 -21
drivers/pci/pci.c
··· 4638 4638 * pcie_wait_for_link_delay - Wait until link is active or inactive 4639 4639 * @pdev: Bridge device 4640 4640 * @active: waiting for active or inactive? 4641 - * @delay: Delay to wait after link has become active (in ms). Specify %0 4642 - * for no delay. 4641 + * @delay: Delay to wait after link has become active (in ms) 4643 4642 * 4644 4643 * Use this to wait till link becomes active or inactive. 4645 4644 */ ··· 4679 4680 msleep(10); 4680 4681 timeout -= 10; 4681 4682 } 4682 - if (active && ret && delay) 4683 + if (active && ret) 4683 4684 msleep(delay); 4684 4685 else if (ret != active) 4685 4686 pci_info(pdev, "Data Link Layer Link Active not %s in 1000 msec\n", ··· 4800 4801 if (!pcie_downstream_port(dev)) 4801 4802 return; 4802 4803 4803 - /* 4804 - * Per PCIe r5.0, sec 6.6.1, for downstream ports that support 4805 - * speeds > 5 GT/s, we must wait for link training to complete 4806 - * before the mandatory delay. 4807 - * 4808 - * We can only tell when link training completes via DLL Link 4809 - * Active, which is required for downstream ports that support 4810 - * speeds > 5 GT/s (sec 7.5.3.6). Unfortunately some common 4811 - * devices do not implement Link Active reporting even when it's 4812 - * required, so we'll check for that directly instead of checking 4813 - * the supported link speed. We assume devices without Link Active 4814 - * reporting can train in 100 ms regardless of speed. 
4815 - */ 4816 - if (dev->link_active_reporting) { 4817 - pci_dbg(dev, "waiting for link to train\n"); 4818 - if (!pcie_wait_for_link_delay(dev, true, 0)) { 4804 + if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) { 4805 + pci_dbg(dev, "waiting %d ms for downstream link\n", delay); 4806 + msleep(delay); 4807 + } else { 4808 + pci_dbg(dev, "waiting %d ms for downstream link, after activation\n", 4809 + delay); 4810 + if (!pcie_wait_for_link_delay(dev, true, delay)) { 4819 4811 /* Did not train, no need to wait any further */ 4820 4812 return; 4821 4813 } 4822 4814 } 4823 - pci_dbg(child, "waiting %d ms to become accessible\n", delay); 4824 - msleep(delay); 4825 4815 4826 4816 if (!pci_device_is_present(child)) { 4827 4817 pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
+11 -5
drivers/scsi/scsi_lib.c
··· 547 547 scsi_uninit_cmd(cmd); 548 548 } 549 549 550 + static void scsi_run_queue_async(struct scsi_device *sdev) 551 + { 552 + if (scsi_target(sdev)->single_lun || 553 + !list_empty(&sdev->host->starved_list)) 554 + kblockd_schedule_work(&sdev->requeue_work); 555 + else 556 + blk_mq_run_hw_queues(sdev->request_queue, true); 557 + } 558 + 550 559 /* Returns false when no more bytes to process, true if there are more */ 551 560 static bool scsi_end_request(struct request *req, blk_status_t error, 552 561 unsigned int bytes) ··· 600 591 601 592 __blk_mq_end_request(req, error); 602 593 603 - if (scsi_target(sdev)->single_lun || 604 - !list_empty(&sdev->host->starved_list)) 605 - kblockd_schedule_work(&sdev->requeue_work); 606 - else 607 - blk_mq_run_hw_queues(q, true); 594 + scsi_run_queue_async(sdev); 608 595 609 596 percpu_ref_put(&q->q_usage_counter); 610 597 return false; ··· 1707 1702 */ 1708 1703 if (req->rq_flags & RQF_DONTPREP) 1709 1704 scsi_mq_uninit_cmd(cmd); 1705 + scsi_run_queue_async(sdev); 1710 1706 break; 1711 1707 } 1712 1708 return ret;
+14 -6
drivers/staging/comedi/drivers/addi_apci_1032.c
··· 106 106 unsigned int *data) 107 107 { 108 108 struct apci1032_private *devpriv = dev->private; 109 - unsigned int shift, oldmask; 109 + unsigned int shift, oldmask, himask, lomask; 110 110 111 111 switch (data[0]) { 112 112 case INSN_CONFIG_DIGITAL_TRIG: 113 113 if (data[1] != 0) 114 114 return -EINVAL; 115 115 shift = data[3]; 116 - oldmask = (1U << shift) - 1; 116 + if (shift < 32) { 117 + oldmask = (1U << shift) - 1; 118 + himask = data[4] << shift; 119 + lomask = data[5] << shift; 120 + } else { 121 + oldmask = 0xffffffffu; 122 + himask = 0; 123 + lomask = 0; 124 + } 117 125 switch (data[2]) { 118 126 case COMEDI_DIGITAL_TRIG_DISABLE: 119 127 devpriv->ctrl = 0; ··· 144 136 devpriv->mode2 &= oldmask; 145 137 } 146 138 /* configure specified channels */ 147 - devpriv->mode1 |= data[4] << shift; 148 - devpriv->mode2 |= data[5] << shift; 139 + devpriv->mode1 |= himask; 140 + devpriv->mode2 |= lomask; 149 141 break; 150 142 case COMEDI_DIGITAL_TRIG_ENABLE_LEVELS: 151 143 if (devpriv->ctrl != (APCI1032_CTRL_INT_ENA | ··· 162 154 devpriv->mode2 &= oldmask; 163 155 } 164 156 /* configure specified channels */ 165 - devpriv->mode1 |= data[4] << shift; 166 - devpriv->mode2 |= data[5] << shift; 157 + devpriv->mode1 |= himask; 158 + devpriv->mode2 |= lomask; 167 159 break; 168 160 default: 169 161 return -EINVAL;
+19 -5
drivers/staging/comedi/drivers/addi_apci_1500.c
··· 452 452 struct apci1500_private *devpriv = dev->private; 453 453 unsigned int trig = data[1]; 454 454 unsigned int shift = data[3]; 455 - unsigned int hi_mask = data[4] << shift; 456 - unsigned int lo_mask = data[5] << shift; 457 - unsigned int chan_mask = hi_mask | lo_mask; 458 - unsigned int old_mask = (1 << shift) - 1; 455 + unsigned int hi_mask; 456 + unsigned int lo_mask; 457 + unsigned int chan_mask; 458 + unsigned int old_mask; 459 459 unsigned int pm; 460 460 unsigned int pt; 461 461 unsigned int pp; 462 + unsigned int invalid_chan; 462 463 463 464 if (trig > 1) { 464 465 dev_dbg(dev->class_dev, ··· 467 466 return -EINVAL; 468 467 } 469 468 470 - if (chan_mask > 0xffff) { 469 + if (shift <= 16) { 470 + hi_mask = data[4] << shift; 471 + lo_mask = data[5] << shift; 472 + old_mask = (1U << shift) - 1; 473 + invalid_chan = (data[4] | data[5]) >> (16 - shift); 474 + } else { 475 + hi_mask = 0; 476 + lo_mask = 0; 477 + old_mask = 0xffff; 478 + invalid_chan = data[4] | data[5]; 479 + } 480 + chan_mask = hi_mask | lo_mask; 481 + 482 + if (invalid_chan) { 471 483 dev_dbg(dev->class_dev, "invalid digital trigger channel\n"); 472 484 return -EINVAL; 473 485 }
+14 -6
drivers/staging/comedi/drivers/addi_apci_1564.c
··· 331 331 unsigned int *data) 332 332 { 333 333 struct apci1564_private *devpriv = dev->private; 334 - unsigned int shift, oldmask; 334 + unsigned int shift, oldmask, himask, lomask; 335 335 336 336 switch (data[0]) { 337 337 case INSN_CONFIG_DIGITAL_TRIG: 338 338 if (data[1] != 0) 339 339 return -EINVAL; 340 340 shift = data[3]; 341 - oldmask = (1U << shift) - 1; 341 + if (shift < 32) { 342 + oldmask = (1U << shift) - 1; 343 + himask = data[4] << shift; 344 + lomask = data[5] << shift; 345 + } else { 346 + oldmask = 0xffffffffu; 347 + himask = 0; 348 + lomask = 0; 349 + } 342 350 switch (data[2]) { 343 351 case COMEDI_DIGITAL_TRIG_DISABLE: 344 352 devpriv->ctrl = 0; ··· 370 362 devpriv->mode2 &= oldmask; 371 363 } 372 364 /* configure specified channels */ 373 - devpriv->mode1 |= data[4] << shift; 374 - devpriv->mode2 |= data[5] << shift; 365 + devpriv->mode1 |= himask; 366 + devpriv->mode2 |= lomask; 375 367 break; 376 368 case COMEDI_DIGITAL_TRIG_ENABLE_LEVELS: 377 369 if (devpriv->ctrl != (APCI1564_DI_IRQ_ENA | ··· 388 380 devpriv->mode2 &= oldmask; 389 381 } 390 382 /* configure specified channels */ 391 - devpriv->mode1 |= data[4] << shift; 392 - devpriv->mode2 |= data[5] << shift; 383 + devpriv->mode1 |= himask; 384 + devpriv->mode2 |= lomask; 393 385 break; 394 386 default: 395 387 return -EINVAL;
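The guard added in the addi_apci hunks above exists because shifting a 32-bit value by 32 or more positions is undefined behaviour in C, so `(1U << shift) - 1` may only be computed for shift amounts below 32. A condensed sketch of the pattern (build_trig_masks() is an illustrative helper, not the comedi API):

```c
/* Compute the "keep old channels" mask and the shifted channel masks,
 * treating shift >= 32 as a window that has moved entirely out of the
 * 32-bit register instead of invoking undefined behaviour. */
void build_trig_masks(unsigned int shift,
		      unsigned int hi_chans, unsigned int lo_chans,
		      unsigned int *oldmask,
		      unsigned int *himask, unsigned int *lomask)
{
	if (shift < 32) {
		*oldmask = (1U << shift) - 1;	/* channels below the window */
		*himask = hi_chans << shift;
		*lomask = lo_chans << shift;
	} else {
		/* window shifted entirely out of the register */
		*oldmask = 0xffffffffu;
		*himask = 0;
		*lomask = 0;
	}
}
```

The apci_1500 hunk applies the same idea with a 16-bit register width, which is why it also folds the old `chan_mask > 0xffff` range check into the shift guard.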
+1 -1
drivers/staging/comedi/drivers/ni_6527.c
··· 332 332 case COMEDI_DIGITAL_TRIG_ENABLE_EDGES: 333 333 /* check shift amount */ 334 334 shift = data[3]; 335 - if (shift >= s->n_chan) { 335 + if (shift >= 32) { 336 336 mask = 0; 337 337 rising = 0; 338 338 falling = 0;
+1 -1
drivers/staging/media/atomisp/Kconfig
··· 22 22 module will be called atomisp 23 23 24 24 config VIDEO_ATOMISP_ISP2401 25 - bool "VIDEO_ATOMISP_ISP2401" 25 + bool "Use Intel Atom ISP on Cherrytail/Anniedale (ISP2401)" 26 26 depends on VIDEO_ATOMISP 27 27 help 28 28 Enable support for Atom ISP2401-based boards.
+1 -5
drivers/staging/media/atomisp/Makefile
··· 156 156 pci/hive_isp_css_common/host/timed_ctrl.o \ 157 157 pci/hive_isp_css_common/host/vmem.o \ 158 158 pci/hive_isp_css_shared/host/tag.o \ 159 + pci/system_local.o \ 159 160 160 161 obj-byt = \ 161 162 pci/css_2400_system/hive/ia_css_isp_configs.o \ ··· 183 182 -I$(atomisp)/include/hmm/ \ 184 183 -I$(atomisp)/include/mmu/ \ 185 184 -I$(atomisp)/pci/ \ 186 - -I$(atomisp)/pci/hrt/ \ 187 185 -I$(atomisp)/pci/base/circbuf/interface/ \ 188 186 -I$(atomisp)/pci/base/refcount/interface/ \ 189 187 -I$(atomisp)/pci/camera/pipe/interface/ \ ··· 192 192 -I$(atomisp)/pci/hive_isp_css_include/ \ 193 193 -I$(atomisp)/pci/hive_isp_css_include/device_access/ \ 194 194 -I$(atomisp)/pci/hive_isp_css_include/host/ \ 195 - -I$(atomisp)/pci/hive_isp_css_include/memory_access/ \ 196 195 -I$(atomisp)/pci/hive_isp_css_shared/ \ 197 196 -I$(atomisp)/pci/hive_isp_css_shared/host/ \ 198 197 -I$(atomisp)/pci/isp/kernels/ \ ··· 310 311 -I$(atomisp)/pci/runtime/tagger/interface/ 311 312 312 313 INCLUDES_byt += \ 313 - -I$(atomisp)/pci/css_2400_system/ \ 314 314 -I$(atomisp)/pci/css_2400_system/hive/ \ 315 - -I$(atomisp)/pci/css_2400_system/hrt/ \ 316 315 317 316 INCLUDES_cht += \ 318 317 -I$(atomisp)/pci/css_2401_system/ \ ··· 318 321 -I$(atomisp)/pci/css_2401_system/hive/ \ 319 322 -I$(atomisp)/pci/css_2401_system/hrt/ \ 320 323 321 - # -I$(atomisp)/pci/css_2401_system/hrt/ \ 322 324 # -I$(atomisp)/pci/css_2401_system/hive_isp_css_2401_system_generated/ \ 323 325 324 326 DEFINES := -DHRT_HW -DHRT_ISP_CSS_CUSTOM_HOST -DHRT_USE_VIR_ADDRS -D__HOST__
+3 -3
drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
··· 495 495 ret = ov2680_read_reg(client, 1, OV2680_MIRROR_REG, &val); 496 496 if (ret) 497 497 return ret; 498 - if (value) { 498 + if (value) 499 499 val |= OV2680_FLIP_MIRROR_BIT_ENABLE; 500 - } else { 500 + else 501 501 val &= ~OV2680_FLIP_MIRROR_BIT_ENABLE; 502 - } 502 + 503 503 ret = ov2680_write_reg(client, 1, 504 504 OV2680_MIRROR_REG, val); 505 505 if (ret)
+4 -2
drivers/staging/media/atomisp/i2c/ov5693/atomisp-ov5693.c
··· 1899 1899 { 1900 1900 struct ov5693_device *dev; 1901 1901 int i2c; 1902 - int ret = 0; 1902 + int ret; 1903 1903 void *pdata; 1904 1904 unsigned int i; 1905 1905 ··· 1929 1929 pdata = gmin_camera_platform_data(&dev->sd, 1930 1930 ATOMISP_INPUT_FORMAT_RAW_10, 1931 1931 atomisp_bayer_order_bggr); 1932 - if (!pdata) 1932 + if (!pdata) { 1933 + ret = -EINVAL; 1933 1934 goto out_free; 1935 + } 1934 1936 1935 1937 ret = ov5693_s_config(&dev->sd, client->irq, pdata); 1936 1938 if (ret)
+1
drivers/staging/media/atomisp/include/linux/atomisp_platform.h
··· 250 250 #define IS_MFLD __IS_SOC(INTEL_FAM6_ATOM_SALTWELL_MID) 251 251 #define IS_BYT __IS_SOC(INTEL_FAM6_ATOM_SILVERMONT) 252 252 #define IS_CHT __IS_SOC(INTEL_FAM6_ATOM_AIRMONT) 253 + #define IS_MRFD __IS_SOC(INTEL_FAM6_ATOM_SILVERMONT_MID) 253 254 #define IS_MOFD __IS_SOC(INTEL_FAM6_ATOM_AIRMONT_MID) 254 255 255 256 /* Both CHT and MOFD come with ISP2401 */
-3
drivers/staging/media/atomisp/pci/atomisp-regs.h
··· 20 20 #define ATOMISP_REGS_H 21 21 22 22 /* common register definitions */ 23 - #define PUNIT_PORT 0x04 24 - #define CCK_PORT 0x14 25 - 26 23 #define PCICMDSTS 0x01 27 24 #define INTR 0x0f 28 25 #define MSI_CAPID 0x24
+2 -2
drivers/staging/media/atomisp/pci/atomisp_acc.c
··· 355 355 356 356 pgnr = DIV_ROUND_UP(map->length, PAGE_SIZE); 357 357 if (pgnr < ((PAGE_ALIGN(map->length)) >> PAGE_SHIFT)) { 358 - dev_err(atomisp_dev, 358 + dev_err(asd->isp->dev, 359 359 "user space memory size is less than the expected size..\n"); 360 360 return -ENOMEM; 361 361 } else if (pgnr > ((PAGE_ALIGN(map->length)) >> PAGE_SHIFT)) { 362 - dev_err(atomisp_dev, 362 + dev_err(asd->isp->dev, 363 363 "user space memory size is large than the expected size..\n"); 364 364 return -ENOMEM; 365 365 }
+32 -27
drivers/staging/media/atomisp/pci/atomisp_cmd.c
··· 21 21 #include <linux/firmware.h> 22 22 #include <linux/pci.h> 23 23 #include <linux/interrupt.h> 24 + #include <linux/io.h> 24 25 #include <linux/kernel.h> 25 26 #include <linux/kfifo.h> 26 27 #include <linux/pm_runtime.h> ··· 110 109 111 110 static unsigned short atomisp_get_sensor_fps(struct atomisp_sub_device *asd) 112 111 { 113 - struct v4l2_subdev_frame_interval fi; 112 + struct v4l2_subdev_frame_interval fi = { 0 }; 114 113 struct atomisp_device *isp = asd->isp; 115 114 116 115 unsigned short fps = 0; ··· 207 206 enum atomisp_dfs_mode mode, 208 207 bool force) 209 208 { 209 + struct pci_dev *pdev = to_pci_dev(isp->dev); 210 210 /* FIXME! Only use subdev[0] status yet */ 211 211 struct atomisp_sub_device *asd = &isp->asd[0]; 212 212 const struct atomisp_dfs_config *dfs; ··· 221 219 return -EINVAL; 222 220 } 223 221 224 - if ((isp->pdev->device & ATOMISP_PCI_DEVICE_SOC_MASK) == 222 + if ((pdev->device & ATOMISP_PCI_DEVICE_SOC_MASK) == 225 223 ATOMISP_PCI_DEVICE_SOC_CHT && ATOMISP_USE_YUVPP(asd)) 226 224 isp->dfs = &dfs_config_cht_soc; 227 225 ··· 359 357 irq_clear_all(IRQ0_ID); 360 358 } 361 359 362 - void atomisp_msi_irq_init(struct atomisp_device *isp, struct pci_dev *dev) 360 + void atomisp_msi_irq_init(struct atomisp_device *isp) 363 361 { 362 + struct pci_dev *pdev = to_pci_dev(isp->dev); 364 363 u32 msg32; 365 364 u16 msg16; 366 365 367 - pci_read_config_dword(dev, PCI_MSI_CAPID, &msg32); 366 + pci_read_config_dword(pdev, PCI_MSI_CAPID, &msg32); 368 367 msg32 |= 1 << MSI_ENABLE_BIT; 369 - pci_write_config_dword(dev, PCI_MSI_CAPID, msg32); 368 + pci_write_config_dword(pdev, PCI_MSI_CAPID, msg32); 370 369 371 370 msg32 = (1 << INTR_IER) | (1 << INTR_IIR); 372 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, msg32); 371 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, msg32); 373 372 374 - pci_read_config_word(dev, PCI_COMMAND, &msg16); 373 + pci_read_config_word(pdev, PCI_COMMAND, &msg16); 375 374 msg16 |= (PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | 
376 375 PCI_COMMAND_INTX_DISABLE); 377 - pci_write_config_word(dev, PCI_COMMAND, msg16); 376 + pci_write_config_word(pdev, PCI_COMMAND, msg16); 378 377 } 379 378 380 - void atomisp_msi_irq_uninit(struct atomisp_device *isp, struct pci_dev *dev) 379 + void atomisp_msi_irq_uninit(struct atomisp_device *isp) 381 380 { 381 + struct pci_dev *pdev = to_pci_dev(isp->dev); 382 382 u32 msg32; 383 383 u16 msg16; 384 384 385 - pci_read_config_dword(dev, PCI_MSI_CAPID, &msg32); 385 + pci_read_config_dword(pdev, PCI_MSI_CAPID, &msg32); 386 386 msg32 &= ~(1 << MSI_ENABLE_BIT); 387 - pci_write_config_dword(dev, PCI_MSI_CAPID, msg32); 387 + pci_write_config_dword(pdev, PCI_MSI_CAPID, msg32); 388 388 389 389 msg32 = 0x0; 390 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, msg32); 390 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, msg32); 391 391 392 - pci_read_config_word(dev, PCI_COMMAND, &msg16); 392 + pci_read_config_word(pdev, PCI_COMMAND, &msg16); 393 393 msg16 &= ~(PCI_COMMAND_MASTER); 394 - pci_write_config_word(dev, PCI_COMMAND, msg16); 394 + pci_write_config_word(pdev, PCI_COMMAND, msg16); 395 395 } 396 396 397 397 static void atomisp_sof_event(struct atomisp_sub_device *asd) ··· 484 480 /* Clear irq reg */ 485 481 static void clear_irq_reg(struct atomisp_device *isp) 486 482 { 483 + struct pci_dev *pdev = to_pci_dev(isp->dev); 487 484 u32 msg_ret; 488 485 489 - pci_read_config_dword(isp->pdev, PCI_INTERRUPT_CTRL, &msg_ret); 486 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &msg_ret); 490 487 msg_ret |= 1 << INTR_IIR; 491 - pci_write_config_dword(isp->pdev, PCI_INTERRUPT_CTRL, msg_ret); 488 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, msg_ret); 492 489 } 493 490 494 491 static struct atomisp_sub_device * ··· 670 665 void dump_sp_dmem(struct atomisp_device *isp, unsigned int addr, 671 666 unsigned int size) 672 667 { 673 - u32 __iomem *io_virt_addr; 674 668 unsigned int data = 0; 675 669 unsigned int size32 = DIV_ROUND_UP(size, sizeof(u32)); 676 670 677 - dev_dbg(isp->dev, "atomisp_io_base:%p\n", atomisp_io_base); 671 + dev_dbg(isp->dev, "atomisp mmio base: %p\n", isp->base); 678 672 dev_dbg(isp->dev, "%s, addr:0x%x, size: %d, size32: %d\n", __func__, 679 673 addr, size, size32); 680 674 if (size32 * 4 + addr > 0x4000) { ··· 682 678 return; 683 679 } 684 680 addr += SP_DMEM_BASE; 685 - io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 681 + addr &= 0x003FFFFF; 686 682 do { 687 - data = *io_virt_addr; 683 + data = readl(isp->base + addr); 688 684 dev_dbg(isp->dev, "%s, \t [0x%x]:0x%x\n", __func__, addr, data); 689 - io_virt_addr += sizeof(u32); 690 - size32 -= 1; 691 - } while (size32 > 0); 685 + addr += sizeof(u32); 686 + } while (--size32); 692 687 } 693 688 694 689 static struct videobuf_buffer *atomisp_css_frame_to_vbuf( ··· 1292 1289 1293 1290 static void __atomisp_css_recover(struct atomisp_device *isp, bool isp_timeout) 1294 1291 { 1292 + struct pci_dev *pdev = to_pci_dev(isp->dev); 1295 1293 enum ia_css_pipe_id css_pipe_id; 1296 1294 bool stream_restart[MAX_STREAM_NUM] = {0}; 1297 1295 bool depth_mode = false; ··· 1376 1372 clear_isp_irq(hrt_isp_css_irq_sp); 1377 1373 1378 1374 /* Set the SRSE to 3 before resetting */ 1379 - pci_write_config_dword(isp->pdev, PCI_I_CONTROL, isp->saved_regs.i_control | 1380 - MRFLD_PCI_I_CONTROL_SRSE_RESET_MASK); 1375 + pci_write_config_dword(pdev, PCI_I_CONTROL, 1376 + isp->saved_regs.i_control | MRFLD_PCI_I_CONTROL_SRSE_RESET_MASK); 1381 1377 1382 1378 /* reset ISP and restore its state */ 1383 1379 isp->isp_timeout = true; ··· 6162 6158 /*Turn off ISP dphy */ 6163 6159 int atomisp_ospm_dphy_down(struct atomisp_device *isp) 6164 6160 { 6161 + struct pci_dev *pdev = to_pci_dev(isp->dev); 6165 6162 unsigned long flags; 6166 6163 u32 reg; ··· 6184 6179 * MRFLD HW design need all CSI ports are disabled before 6185 6180 * powering down the IUNIT.
6186 6181 */ 6187 - pci_read_config_dword(isp->pdev, MRFLD_PCI_CSI_CONTROL, &reg); 6182 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, &reg); 6188 6183 reg |= MRFLD_ALL_CSI_PORTS_OFF_MASK; 6189 - pci_write_config_dword(isp->pdev, MRFLD_PCI_CSI_CONTROL, reg); 6184 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, reg); 6190 6185 return 0; 6191 6186 } 6192 6187
+2 -2
drivers/staging/media/atomisp/pci/atomisp_cmd.h
··· 68 68 /* 69 69 * Interrupt functions 70 70 */ 71 - void atomisp_msi_irq_init(struct atomisp_device *isp, struct pci_dev *dev); 72 - void atomisp_msi_irq_uninit(struct atomisp_device *isp, struct pci_dev *dev); 71 + void atomisp_msi_irq_init(struct atomisp_device *isp); 72 + void atomisp_msi_irq_uninit(struct atomisp_device *isp); 73 73 void atomisp_wdt_work(struct work_struct *work); 74 74 void atomisp_wdt(struct timer_list *t); 75 75 void atomisp_setup_flash(struct atomisp_sub_device *asd);
-2
drivers/staging/media/atomisp/pci/atomisp_compat.h
··· 29 29 struct video_device; 30 30 enum atomisp_input_stream_id; 31 31 32 - extern void __iomem *atomisp_io_base; 33 - 34 32 struct atomisp_metadata_buf { 35 33 struct ia_css_metadata *metadata; 36 34 void *md_vptr;
+34 -36
drivers/staging/media/atomisp/pci/atomisp_compat_css20.c
··· 33 33 #include "atomisp_ioctl.h" 34 34 #include "atomisp_acc.h" 35 35 36 - #include <asm/intel-mid.h> 37 - 38 36 #include "ia_css_debug.h" 39 37 #include "ia_css_isp_param.h" 40 38 #include "sh_css_hrt.h" 41 39 #include "ia_css_isys.h" 42 40 41 + #include <linux/io.h> 43 42 #include <linux/pm_runtime.h> 44 43 45 44 /* Assume max number of ACC stages */ ··· 68 69 69 70 static void atomisp_css2_hw_store_8(hrt_address addr, uint8_t data) 70 71 { 71 - s8 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 72 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 72 73 unsigned long flags; 73 74 74 75 spin_lock_irqsave(&mmio_lock, flags); 75 - *io_virt_addr = data; 76 + writeb(data, isp->base + (addr & 0x003FFFFF)); 76 77 spin_unlock_irqrestore(&mmio_lock, flags); 77 78 } 78 79 79 80 static void atomisp_css2_hw_store_16(hrt_address addr, uint16_t data) 80 81 { 81 - s16 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 82 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 82 83 unsigned long flags; 83 84 84 85 spin_lock_irqsave(&mmio_lock, flags); 85 - *io_virt_addr = data; 86 + writew(data, isp->base + (addr & 0x003FFFFF)); 86 87 spin_unlock_irqrestore(&mmio_lock, flags); 87 88 } 88 89 89 90 void atomisp_css2_hw_store_32(hrt_address addr, uint32_t data) 90 91 { 91 - s32 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 92 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 92 93 unsigned long flags; 93 94 94 95 spin_lock_irqsave(&mmio_lock, flags); 95 - *io_virt_addr = data; 96 + writel(data, isp->base + (addr & 0x003FFFFF)); 96 97 spin_unlock_irqrestore(&mmio_lock, flags); 97 98 } 98 99 99 100 static uint8_t atomisp_css2_hw_load_8(hrt_address addr) 100 101 { 101 - s8 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 102 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 102 103 unsigned long flags; 103 104 u8 ret; 104 105 105 106 spin_lock_irqsave(&mmio_lock, flags); 106 - ret = *io_virt_addr; 107 + ret = readb(isp->base + (addr & 0x003FFFFF)); 107 108 spin_unlock_irqrestore(&mmio_lock, flags); 108 109 return ret; 109 110 } 110 111 111 112 static uint16_t atomisp_css2_hw_load_16(hrt_address addr) 112 113 { 113 - s16 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 114 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 114 115 unsigned long flags; 115 116 u16 ret; 116 117 117 118 spin_lock_irqsave(&mmio_lock, flags); 118 - ret = *io_virt_addr; 119 + ret = readw(isp->base + (addr & 0x003FFFFF)); 119 120 spin_unlock_irqrestore(&mmio_lock, flags); 120 121 return ret; 121 122 } 122 123 123 124 static uint32_t atomisp_css2_hw_load_32(hrt_address addr) 124 125 { 125 - s32 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 126 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 126 127 unsigned long flags; 127 128 u32 ret; 128 129 129 130 spin_lock_irqsave(&mmio_lock, flags); 130 - ret = *io_virt_addr; 131 + ret = readl(isp->base + (addr & 0x003FFFFF)); 131 132 spin_unlock_irqrestore(&mmio_lock, flags); 132 133 return ret; 133 134 } 134 135 135 - static void atomisp_css2_hw_store(hrt_address addr, 136 - const void *from, uint32_t n) 136 + static void atomisp_css2_hw_store(hrt_address addr, const void *from, uint32_t n) 137 137 { 138 - s8 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 138 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 139 139 unsigned long flags; 140 140 unsigned int i; 141 141 142 + addr &= 0x003FFFFF; 142 143 spin_lock_irqsave(&mmio_lock, flags); 143 - for (i = 0; i < n; i++, io_virt_addr++, from++) 144 - *io_virt_addr = *(s8 *)from; 144 + for (i = 0; i < n; i++, from++) 145 + writeb(*(s8 *)from, isp->base + addr + i); 146 + 145 147 spin_unlock_irqrestore(&mmio_lock, flags); 146 148 } 147 149 148 150 static void atomisp_css2_hw_load(hrt_address addr, void *to, uint32_t n) 149 151 { 150 - s8 __iomem *io_virt_addr = atomisp_io_base + (addr & 0x003FFFFF); 152 + struct atomisp_device *isp = dev_get_drvdata(atomisp_dev); 151 153 unsigned long flags; 152 154 unsigned int i; 153 155 156 + addr &= 0x003FFFFF; 154 157 spin_lock_irqsave(&mmio_lock, flags); 155 - for (i = 0; i < n; i++, to++, io_virt_addr++) 156 - *(s8 *)to = *io_virt_addr; 158 + for (i = 0; i < n; i++, to++) 159 + *(s8 *)to = readb(isp->base + addr + i); 157 160 spin_unlock_irqrestore(&mmio_lock, flags); 158 161 } 159 162 ··· 182 181 *data = atomisp_css2_hw_load_32(addr); 183 182 } 184 183 185 - static int hmm_get_mmu_base_addr(unsigned int *mmu_base_addr) 184 + static int hmm_get_mmu_base_addr(struct device *dev, unsigned int *mmu_base_addr) 186 185 { 187 186 if (!sh_mmu_mrfld.get_pd_base) { 188 - dev_err(atomisp_dev, "get mmu base address failed.\n"); 187 + dev_err(dev, "get mmu base address failed.\n"); 189 188 return -EINVAL; 190 189 } 191 190 ··· 840 839 int ret; 841 840 int err; 842 841 843 - ret = hmm_get_mmu_base_addr(&mmu_base_addr); 842 + ret = hmm_get_mmu_base_addr(isp->dev, &mmu_base_addr); 844 843 if (ret) 845 844 return ret; 846 845 ··· 942 941 unsigned int mmu_base_addr; 943 942 int ret; 944 943 945 - ret = hmm_get_mmu_base_addr(&mmu_base_addr); 944 + ret = hmm_get_mmu_base_addr(isp->dev, &mmu_base_addr); 946 945 if (ret) { 947 946 dev_err(isp->dev, "get base address error.\n"); 948 947 return -EINVAL; ··· 1967 1966 true, 1968 1967 0x13000, 1969 1968 &size_mem_words) != 0) { 1970 - if (intel_mid_identify_cpu() == 1971 - INTEL_MID_CPU_CHIP_TANGIER) 1969 + if (IS_MRFD) 1972 1970 size_mem_words = CSS_MIPI_FRAME_BUFFER_SIZE_2; 1973 1971 else 1974 1972 size_mem_words = CSS_MIPI_FRAME_BUFFER_SIZE_1; ··· 2414 2414 struct ia_css_resolution *effective_res = 2415 2415 &stream_config->input_config.effective_res; 2416 2416 2417 - const struct bayer_ds_factor bds_fct[] = {{2, 1}, {3, 2}, {5, 4} }; 2417 + static const struct bayer_ds_factor bds_fct[] = {{2, 1}, {3, 2}, {5, 4} }; 2418 2418 /* 2419 2419 * BZ201033: YUV decimation factor of 4 causes couple of rightmost 2420 2420 * columns to be shaded. Remove this factor to work around the CSS bug. 2421 2421 * const unsigned int yuv_dec_fct[] = {4, 2}; 2422 2422 */ 2423 - const unsigned int yuv_dec_fct[] = { 2 }; 2423 + static const unsigned int yuv_dec_fct[] = { 2 }; 2424 2424 unsigned int i; 2425 2425 2426 2426 if (width == 0 && height == 0) ··· 2540 2540 struct ia_css_resolution *effective_res = 2541 2541 &stream_config->input_config.effective_res; 2542 2542 2543 - const struct bayer_ds_factor bds_factors[] = { 2543 + static const struct bayer_ds_factor bds_factors[] = { 2544 2544 {8, 1}, {6, 1}, {4, 1}, {3, 1}, {2, 1}, {3, 2} 2545 2545 }; 2546 2546 unsigned int i; ··· 4337 4337 [IA_CSS_ACC_STANDALONE] = "Stand-alone acceleration", 4338 4338 }; 4339 4339 4340 - int atomisp_css_dump_blob_infor(void) 4340 + int atomisp_css_dump_blob_infor(struct atomisp_device *isp) 4341 4341 { 4342 4342 struct ia_css_blob_descr *bd = sh_css_blob_info; 4343 4343 unsigned int i, nm = sh_css_num_binaries; ··· 4354 4354 for (i = 0; i < sh_css_num_binaries - NUM_OF_SPS; i++) { 4355 4355 switch (bd[i].header.type) { 4356 4356 case ia_css_isp_firmware: 4357 - dev_dbg(atomisp_dev, 4358 - "Num%2d type %s (%s), binary id is %2d, name is %s\n", 4357 + dev_dbg(isp->dev, "Num%2d type %s (%s), binary id is %2d, name is %s\n", 4359 4358 i + NUM_OF_SPS, 4360 4359 fw_type_name[bd[i].header.type], 4361 4360 fw_acc_type_name[bd[i].header.info.isp.type], 4362 4363 bd[i].name); 4363 4364 break; 4364 4365 default: 4365 - dev_dbg(atomisp_dev, 4366 - "Num%2d type %s, name is %s\n", 4366 + dev_dbg(isp->dev, "Num%2d type %s, name is %s\n", 4367 4367 i + NUM_OF_SPS, fw_type_name[bd[i].header.type], 4368 4368 bd[i].name); 4369 4369 }
+1 -1
drivers/staging/media/atomisp/pci/atomisp_compat_css20.h
··· 153 153 154 154 int atomisp_css_dump_sp_raw_copy_linecount(bool reduced); 155 155 156 - int atomisp_css_dump_blob_infor(void); 156 + int atomisp_css_dump_blob_infor(struct atomisp_device *isp); 157 157 158 158 void atomisp_css_set_isp_config_id(struct atomisp_sub_device *asd, 159 159 uint32_t isp_config_id);
+7 -7
drivers/staging/media/atomisp/pci/atomisp_drvfs.c
··· 62 62 63 63 if (opt & OPTION_VALID) { 64 64 if (opt & OPTION_BIN_LIST) { 65 - ret = atomisp_css_dump_blob_infor(); 65 + ret = atomisp_css_dump_blob_infor(isp); 66 66 if (ret) { 67 - dev_err(atomisp_dev, "%s dump blob infor err[ret:%d]\n", 67 + dev_err(isp->dev, "%s dump blob infor err[ret:%d]\n", 68 68 __func__, ret); 69 69 goto opt_err; 70 70 } ··· 76 76 atomisp_css_debug_dump_isp_binary(); 77 77 } else { 78 78 ret = -EPERM; 79 - dev_err(atomisp_dev, "%s dump running bin err[ret:%d]\n", 79 + dev_err(isp->dev, "%s dump running bin err[ret:%d]\n", 80 80 __func__, ret); 81 81 goto opt_err; 82 82 } ··· 86 86 hmm_show_mem_stat(__func__, __LINE__); 87 87 } else { 88 88 ret = -EINVAL; 89 - dev_err(atomisp_dev, "%s dump nothing[ret=%d]\n", __func__, 90 - ret); 89 + dev_err(isp->dev, "%s dump nothing[ret=%d]\n", __func__, ret); 91 90 } 92 91 93 92 opt_err: ··· 184 185 driver_remove_file(drv, &iunit_drvfs_attrs[i]); 185 186 } 186 187 187 - int atomisp_drvfs_init(struct device_driver *drv, struct atomisp_device *isp) 188 + int atomisp_drvfs_init(struct atomisp_device *isp) 188 189 { 190 + struct device_driver *drv = isp->dev->driver; 189 191 int ret; 190 192 191 193 iunit_debug.isp = isp; ··· 194 194 195 195 ret = iunit_drvfs_create_files(iunit_debug.drv); 196 196 if (ret) { 197 - dev_err(atomisp_dev, "drvfs_create_files error: %d\n", ret); 197 + dev_err(isp->dev, "drvfs_create_files error: %d\n", ret); 198 198 iunit_drvfs_remove_files(iunit_debug.drv); 199 199 } 200 200
+1 -1
drivers/staging/media/atomisp/pci/atomisp_drvfs.h
··· 19 19 #ifndef __ATOMISP_DRVFS_H__ 20 20 #define __ATOMISP_DRVFS_H__ 21 21 22 - int atomisp_drvfs_init(struct device_driver *drv, struct atomisp_device *isp); 22 + int atomisp_drvfs_init(struct atomisp_device *isp); 23 23 void atomisp_drvfs_exit(void); 24 24 25 25 #endif /* __ATOMISP_DRVFS_H__ */
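The signature change above drops the `struct device_driver *` parameter because it is derivable from the device the `atomisp_device` already carries (`isp->dev->driver` in the .c hunk). A standalone sketch of that pattern, with mock stand-ins for the kernel structs (the real definitions live in `<linux/device.h>`; `drvfs_driver_of()` is a hypothetical name):

```c
#include <assert.h>
#include <string.h>

/* Mock stand-ins; only the fields this sketch touches are present. */
struct device_driver { const char *name; };
struct device { struct device_driver *driver; };
struct atomisp_device { struct device *dev; };

/* The one-argument form recovers the driver from the embedded device,
 * mirroring the patch's `struct device_driver *drv = isp->dev->driver;`,
 * so callers no longer pass redundant state. */
static struct device_driver *drvfs_driver_of(struct atomisp_device *isp)
{
	return isp->dev->driver;
}
```

Removing parameters that are reachable from an existing argument keeps call sites shorter and makes them impossible to pass inconsistently.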
+352 -187
drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
··· 26 26 #define CLK_RATE_19_2MHZ 19200000 27 27 #define CLK_RATE_25_0MHZ 25000000 28 28 29 + /* Valid clock number range from 0 to 5 */ 30 + #define MAX_CLK_COUNT 5 31 + 29 32 /* X-Powers AXP288 register set */ 30 33 #define ALDO1_SEL_REG 0x28 31 34 #define ALDO1_CTRL3_REG 0x13 ··· 64 61 65 62 struct gmin_subdev { 66 63 struct v4l2_subdev *subdev; 67 - int clock_num; 68 64 enum clock_rate clock_src; 69 - bool clock_on; 70 65 struct clk *pmc_clk; 71 66 struct gpio_desc *gpio0; 72 67 struct gpio_desc *gpio1; ··· 76 75 unsigned int csi_lanes; 77 76 enum atomisp_input_format csi_fmt; 78 77 enum atomisp_bayer_order csi_bayer; 78 + 79 + bool clock_on; 79 80 bool v1p8_on; 80 81 bool v2p8_on; 81 82 bool v1p2_on; 82 83 bool v2p8_vcm_on; 84 + 85 + int v1p8_gpio; 86 + int v2p8_gpio; 83 87 84 88 u8 pwm_i2c_addr; 85 89 ··· 96 90 static struct gmin_subdev gmin_subdevs[MAX_SUBDEVS]; 97 91 98 92 /* ACPI HIDs for the PMICs that could be used by this driver */ 99 - #define PMIC_ACPI_AXP "INT33F4:00" /* XPower AXP288 PMIC */ 100 - #define PMIC_ACPI_TI "INT33F5:00" /* Dollar Cove TI PMIC */ 101 - #define PMIC_ACPI_CRYSTALCOVE "INT33FD:00" /* Crystal Cove PMIC */ 93 + #define PMIC_ACPI_AXP "INT33F4" /* XPower AXP288 PMIC */ 94 + #define PMIC_ACPI_TI "INT33F5" /* Dollar Cove TI PMIC */ 95 + #define PMIC_ACPI_CRYSTALCOVE "INT33FD" /* Crystal Cove PMIC */ 102 96 103 97 #define PMIC_PLATFORM_TI "intel_soc_pmic_chtdc_ti" 104 98 ··· 111 105 } pmic_id; 112 106 113 107 static const char *pmic_name[] = { 114 - [PMIC_UNSET] = "unset", 108 + [PMIC_UNSET] = "ACPI device PM", 115 109 [PMIC_REGULATOR] = "regulator driver", 116 110 [PMIC_AXP] = "XPower AXP288 PMIC", 117 111 [PMIC_TI] = "Dollar Cove TI PMIC", ··· 124 118 static const struct atomisp_platform_data pdata = { 125 119 .subdevs = pdata_subdevs, 126 120 }; 127 - 128 - /* 129 - * Something of a hack. The ECS E7 board drives camera 2.8v from an 130 - * external regulator instead of the PMIC. 
There's a gmin_CamV2P8 131 - * config variable that specifies the GPIO to handle this particular 132 - * case, but this needs a broader architecture for handling camera 133 - * power. 134 - */ 135 - enum { V2P8_GPIO_UNSET = -2, V2P8_GPIO_NONE = -1 }; 136 - static int v2p8_gpio = V2P8_GPIO_UNSET; 137 - 138 - /* 139 - * Something of a hack. The CHT RVP board drives camera 1.8v from an 140 - * external regulator instead of the PMIC just like ECS E7 board, see the 141 - * comments above. 142 - */ 143 - enum { V1P8_GPIO_UNSET = -2, V1P8_GPIO_NONE = -1 }; 144 - static int v1p8_gpio = V1P8_GPIO_UNSET; 145 121 146 122 static LIST_HEAD(vcm_devices); 147 123 static DEFINE_MUTEX(vcm_lock); ··· 187 199 * gmin_subdev struct is already initialized for us. 188 200 */ 189 201 gs = find_gmin_subdev(subdev); 202 + if (!gs) 203 + return -ENODEV; 190 204 191 205 pdata.subdevs[i].type = type; 192 206 pdata.subdevs[i].port = gs->csi_port; ··· 284 294 {"INT33F8:00_CsiFmt", "13"}, 285 295 {"INT33F8:00_CsiBayer", "0"}, 286 296 {"INT33F8:00_CamClk", "0"}, 297 + 287 298 {"INT33F9:00_CamType", "1"}, 288 299 {"INT33F9:00_CsiPort", "0"}, 289 300 {"INT33F9:00_CsiLanes", "1"}, ··· 300 309 {"INT33BE:00_CsiFmt", "13"}, 301 310 {"INT33BE:00_CsiBayer", "2"}, 302 311 {"INT33BE:00_CamClk", "0"}, 312 + 303 313 {"INT33F0:00_CsiPort", "0"}, 304 314 {"INT33F0:00_CsiLanes", "1"}, 305 315 {"INT33F0:00_CsiFmt", "13"}, ··· 314 322 {"XXOV2680:00_CsiPort", "1"}, 315 323 {"XXOV2680:00_CsiLanes", "1"}, 316 324 {"XXOV2680:00_CamClk", "0"}, 325 + 317 326 {"XXGC0310:00_CsiPort", "0"}, 318 327 {"XXGC0310:00_CsiLanes", "1"}, 319 328 {"XXGC0310:00_CamClk", "1"}, ··· 374 381 #define GMIN_PMC_CLK_NAME 14 /* "pmc_plt_clk_[0..5]" */ 375 382 static char gmin_pmc_clk_name[GMIN_PMC_CLK_NAME]; 376 383 377 - static int gmin_i2c_match_one(struct device *dev, const void *data) 378 - { 379 - const char *name = data; 380 - struct i2c_client *client; 381 - 382 - if (dev->type != &i2c_client_type) 383 - return 0; 384 - 385 - client = 
to_i2c_client(dev); 386 - 387 - return (!strcmp(name, client->name)); 388 - } 389 - 390 384 static struct i2c_client *gmin_i2c_dev_exists(struct device *dev, char *name, 391 385 struct i2c_client **client) 392 386 { 387 + struct acpi_device *adev; 393 388 struct device *d; 394 389 395 - while ((d = bus_find_device(&i2c_bus_type, NULL, name, 396 - gmin_i2c_match_one))) { 397 - *client = to_i2c_client(d); 398 - dev_dbg(dev, "found '%s' at address 0x%02x, adapter %d\n", 399 - (*client)->name, (*client)->addr, 400 - (*client)->adapter->nr); 401 - return *client; 402 - } 390 + adev = acpi_dev_get_first_match_dev(name, NULL, -1); 391 + if (!adev) 392 + return NULL; 403 393 404 - return NULL; 394 + d = bus_find_device_by_acpi_dev(&i2c_bus_type, adev); 395 + acpi_dev_put(adev); 396 + if (!d) 397 + return NULL; 398 + 399 + *client = i2c_verify_client(d); 400 + put_device(d); 401 + 402 + dev_dbg(dev, "found '%s' at address 0x%02x, adapter %d\n", 403 + (*client)->name, (*client)->addr, (*client)->adapter->nr); 404 + return *client; 405 405 } 406 406 407 407 static int gmin_i2c_write(struct device *dev, u16 i2c_addr, u8 reg, ··· 413 427 "I2C write, addr: 0x%02x, reg: 0x%02x, value: 0x%02x, mask: 0x%02x\n", 414 428 i2c_addr, reg, value, mask); 415 429 416 - ret = intel_soc_pmic_exec_mipi_pmic_seq_element(i2c_addr, reg, 417 - value, mask); 418 - 419 - if (ret == -EOPNOTSUPP) { 430 + ret = intel_soc_pmic_exec_mipi_pmic_seq_element(i2c_addr, reg, value, mask); 431 + if (ret == -EOPNOTSUPP) 420 432 dev_err(dev, 421 433 "ACPI didn't mapped the OpRegion needed to access I2C address 0x%02x.\n" 422 - "Need to compile the Kernel using CONFIG_*_PMIC_OPREGION settings\n", 434 + "Need to compile the kernel using CONFIG_*_PMIC_OPREGION settings\n", 423 435 i2c_addr); 424 - return ret; 425 - } 426 436 427 437 return ret; 428 438 } 429 439 430 - static struct gmin_subdev *gmin_subdev_add(struct v4l2_subdev *subdev) 440 + static int atomisp_get_acpi_power(struct device *dev) 431 441 { 432 - 
struct i2c_client *power = NULL, *client = v4l2_get_subdevdata(subdev); 433 - struct acpi_device *adev; 434 - acpi_handle handle; 435 - struct device *dev; 436 - int i, ret; 442 + char name[5]; 443 + struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 444 + struct acpi_buffer b_name = { sizeof(name), name }; 445 + union acpi_object *package, *element; 446 + acpi_handle handle = ACPI_HANDLE(dev); 447 + acpi_handle rhandle; 448 + acpi_status status; 449 + int clock_num = -1; 450 + int i; 437 451 438 - if (!client) 439 - return NULL; 452 + status = acpi_evaluate_object(handle, "_PR0", NULL, &buffer); 453 + if (!ACPI_SUCCESS(status)) 454 + return -1; 440 455 441 - dev = &client->dev; 456 + package = buffer.pointer; 442 457 443 - handle = ACPI_HANDLE(dev); 458 + if (!buffer.length || !package 459 + || package->type != ACPI_TYPE_PACKAGE 460 + || !package->package.count) 461 + goto fail; 444 462 445 - // FIXME: may need to release resources allocated by acpi_bus_get_device() 446 - if (!handle || acpi_bus_get_device(handle, &adev)) { 447 - dev_err(dev, "Error could not get ACPI device\n"); 448 - return NULL; 463 + for (i = 0; i < package->package.count; i++) { 464 + element = &package->package.elements[i]; 465 + 466 + if (element->type != ACPI_TYPE_LOCAL_REFERENCE) 467 + continue; 468 + 469 + rhandle = element->reference.handle; 470 + if (!rhandle) 471 + goto fail; 472 + 473 + acpi_get_name(rhandle, ACPI_SINGLE_NAME, &b_name); 474 + 475 + dev_dbg(dev, "Found PM resource '%s'\n", name); 476 + if (strlen(name) == 4 && !strncmp(name, "CLK", 3)) { 477 + if (name[3] >= '0' && name[3] <= '4') 478 + clock_num = name[3] - '0'; 479 + #if 0 480 + /* 481 + * We could abort here, but let's parse all resources, 482 + * as this is helpful for debugging purposes 483 + */ 484 + if (clock_num >= 0) 485 + break; 486 + #endif 487 + } 449 488 } 450 489 451 - dev_info(&client->dev, "%s: ACPI detected it on bus ID=%s, HID=%s\n", 452 - __func__, acpi_device_bid(adev), 
acpi_device_hid(adev)); 490 + fail: 491 + ACPI_FREE(buffer.pointer); 453 492 454 - if (!pmic_id) { 455 - if (gmin_i2c_dev_exists(dev, PMIC_ACPI_TI, &power)) 456 - pmic_id = PMIC_TI; 457 - else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_AXP, &power)) 458 - pmic_id = PMIC_AXP; 459 - else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_CRYSTALCOVE, &power)) 460 - pmic_id = PMIC_CRYSTALCOVE; 461 - else 462 - pmic_id = PMIC_REGULATOR; 463 - } 493 + return clock_num; 494 + } 464 495 465 - for (i = 0; i < MAX_SUBDEVS && gmin_subdevs[i].subdev; i++) 466 - ; 467 - if (i >= MAX_SUBDEVS) 468 - return NULL; 496 + static u8 gmin_get_pmic_id_and_addr(struct device *dev) 497 + { 498 + struct i2c_client *power; 499 + static u8 pmic_i2c_addr; 469 500 470 - if (power) { 471 - gmin_subdevs[i].pwm_i2c_addr = power->addr; 472 - dev_info(dev, 473 - "gmin: power management provided via %s (i2c addr 0x%02x)\n", 474 - pmic_name[pmic_id], power->addr); 475 - } else { 476 - dev_info(dev, "gmin: power management provided via %s\n", 477 - pmic_name[pmic_id]); 478 - } 501 + if (pmic_id) 502 + return pmic_i2c_addr; 479 503 480 - gmin_subdevs[i].subdev = subdev; 481 - gmin_subdevs[i].clock_num = gmin_get_var_int(dev, false, "CamClk", 0); 504 + if (gmin_i2c_dev_exists(dev, PMIC_ACPI_TI, &power)) 505 + pmic_id = PMIC_TI; 506 + else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_AXP, &power)) 507 + pmic_id = PMIC_AXP; 508 + else if (gmin_i2c_dev_exists(dev, PMIC_ACPI_CRYSTALCOVE, &power)) 509 + pmic_id = PMIC_CRYSTALCOVE; 510 + else 511 + pmic_id = PMIC_REGULATOR; 512 + 513 + pmic_i2c_addr = power ? 
power->addr : 0; 514 + return pmic_i2c_addr; 515 + } 516 + 517 + static int gmin_detect_pmic(struct v4l2_subdev *subdev) 518 + { 519 + struct i2c_client *client = v4l2_get_subdevdata(subdev); 520 + struct device *dev = &client->dev; 521 + u8 pmic_i2c_addr; 522 + 523 + pmic_i2c_addr = gmin_get_pmic_id_and_addr(dev); 524 + dev_info(dev, "gmin: power management provided via %s (i2c addr 0x%02x)\n", 525 + pmic_name[pmic_id], pmic_i2c_addr); 526 + return pmic_i2c_addr; 527 + } 528 + 529 + static int gmin_subdev_add(struct gmin_subdev *gs) 530 + { 531 + struct i2c_client *client = v4l2_get_subdevdata(gs->subdev); 532 + struct device *dev = &client->dev; 533 + struct acpi_device *adev = ACPI_COMPANION(dev); 534 + int ret, clock_num = -1; 535 + 536 + dev_info(dev, "%s: ACPI path is %pfw\n", __func__, dev_fwnode(dev)); 537 + 482 538 /*WA:CHT requires XTAL clock as PLL is not stable.*/ 483 - gmin_subdevs[i].clock_src = gmin_get_var_int(dev, false, "ClkSrc", 484 - VLV2_CLK_PLL_19P2MHZ); 485 - gmin_subdevs[i].csi_port = gmin_get_var_int(dev, false, "CsiPort", 0); 486 - gmin_subdevs[i].csi_lanes = gmin_get_var_int(dev, false, "CsiLanes", 1); 539 + gs->clock_src = gmin_get_var_int(dev, false, "ClkSrc", 540 + VLV2_CLK_PLL_19P2MHZ); 487 541 488 - /* get PMC clock with clock framework */ 489 - snprintf(gmin_pmc_clk_name, 490 - sizeof(gmin_pmc_clk_name), 491 - "%s_%d", "pmc_plt_clk", gmin_subdevs[i].clock_num); 542 + gs->csi_port = gmin_get_var_int(dev, false, "CsiPort", 0); 543 + gs->csi_lanes = gmin_get_var_int(dev, false, "CsiLanes", 1); 492 544 493 - gmin_subdevs[i].pmc_clk = devm_clk_get(dev, gmin_pmc_clk_name); 494 - if (IS_ERR(gmin_subdevs[i].pmc_clk)) { 495 - ret = PTR_ERR(gmin_subdevs[i].pmc_clk); 545 + gs->gpio0 = gpiod_get_index(dev, NULL, 0, GPIOD_OUT_LOW); 546 + if (IS_ERR(gs->gpio0)) 547 + gs->gpio0 = NULL; 548 + else 549 + dev_info(dev, "will handle gpio0 via ACPI\n"); 496 550 497 - dev_err(dev, 498 - "Failed to get clk from %s : %d\n", 499 - gmin_pmc_clk_name, 500 - 
ret); 551 + gs->gpio1 = gpiod_get_index(dev, NULL, 1, GPIOD_OUT_LOW); 552 + if (IS_ERR(gs->gpio1)) 553 + gs->gpio1 = NULL; 554 + else 555 + dev_info(dev, "will handle gpio1 via ACPI\n"); 501 556 502 - return NULL; 557 + /* 558 + * Those are used only when there is an external regulator apart 559 + * from the PMIC that would be providing power supply, like on the 560 + * two cases below: 561 + * 562 + * The ECS E7 board drives camera 2.8v from an external regulator 563 + * instead of the PMIC. There's a gmin_CamV2P8 config variable 564 + * that specifies the GPIO to handle this particular case, 565 + * but this needs a broader architecture for handling camera power. 566 + * 567 + * The CHT RVP board drives camera 1.8v from an* external regulator 568 + * instead of the PMIC just like ECS E7 board. 569 + */ 570 + 571 + gs->v1p8_gpio = gmin_get_var_int(dev, true, "V1P8GPIO", -1); 572 + gs->v2p8_gpio = gmin_get_var_int(dev, true, "V2P8GPIO", -1); 573 + 574 + /* 575 + * FIXME: 576 + * 577 + * The ACPI handling code checks for the _PR? tables in order to 578 + * know what is required to switch the device from power state 579 + * D0 (_PR0) up to D3COLD (_PR3). 580 + * 581 + * The adev->flags.power_manageable is set to true if the device 582 + * has a _PR0 table, which can be checked by calling 583 + * acpi_device_power_manageable(adev). 584 + * 585 + * However, this only says that the device can be set to power off 586 + * mode. 587 + * 588 + * At least on the DSDT tables we've seen so far, there's no _PR3, 589 + * nor _PS3 (which would have a somewhat similar effect). 590 + * So, using ACPI for power management won't work, except if adding 591 + * an ACPI override logic somewhere. 592 + * 593 + * So, at least for the existing devices we know, the check below 594 + * will always be false. 
595 + */ 596 + if (acpi_device_can_wakeup(adev) && 597 + acpi_device_can_poweroff(adev)) { 598 + dev_info(dev, 599 + "gmin: power management provided via device PM\n"); 600 + return 0; 503 601 } 602 + 603 + /* 604 + * The code below is here due to backward compatibility with devices 605 + * whose ACPI BIOS may not contain everything that would be needed 606 + * in order to set clocks and do power management. 607 + */ 608 + 609 + /* 610 + * According with : 611 + * https://github.com/projectceladon/hardware-intel-kernelflinger/blob/master/doc/fastboot.md 612 + * 613 + * The "CamClk" EFI var is set via fastboot on some Android devices, 614 + * and seems to contain the number of the clock used to feed the 615 + * sensor. 616 + * 617 + * On systems with a proper ACPI table, this is given via the _PR0 618 + * power resource table. The logic below should first check if there 619 + * is a power resource already, falling back to the EFI vars detection 620 + * otherwise. 621 + */ 622 + 623 + /* Try first to use ACPI to get the clock resource */ 624 + if (acpi_device_power_manageable(adev)) 625 + clock_num = atomisp_get_acpi_power(dev); 626 + 627 + /* Fall-back use EFI and/or DMI match */ 628 + if (clock_num < 0) 629 + clock_num = gmin_get_var_int(dev, false, "CamClk", 0); 630 + 631 + if (clock_num < 0 || clock_num > MAX_CLK_COUNT) { 632 + dev_err(dev, "Invalid clock number\n"); 633 + return -EINVAL; 634 + } 635 + 636 + snprintf(gmin_pmc_clk_name, sizeof(gmin_pmc_clk_name), 637 + "%s_%d", "pmc_plt_clk", clock_num); 638 + 639 + gs->pmc_clk = devm_clk_get(dev, gmin_pmc_clk_name); 640 + if (IS_ERR(gs->pmc_clk)) { 641 + ret = PTR_ERR(gs->pmc_clk); 642 + dev_err(dev, "Failed to get clk from %s: %d\n", gmin_pmc_clk_name, ret); 643 + return ret; 644 + } 645 + dev_info(dev, "Will use CLK%d (%s)\n", clock_num, gmin_pmc_clk_name); 504 646 505 647 /* 506 648 * The firmware might enable the clock at ··· 640 526 * to disable a clock that has not been enabled, 641 527 * we need to enable 
the clock first. 642 528 */ 643 - ret = clk_prepare_enable(gmin_subdevs[i].pmc_clk); 529 + ret = clk_prepare_enable(gs->pmc_clk); 644 530 if (!ret) 645 - clk_disable_unprepare(gmin_subdevs[i].pmc_clk); 646 - 647 - gmin_subdevs[i].gpio0 = gpiod_get_index(dev, NULL, 0, GPIOD_OUT_LOW); 648 - if (IS_ERR(gmin_subdevs[i].gpio0)) 649 - gmin_subdevs[i].gpio0 = NULL; 650 - 651 - gmin_subdevs[i].gpio1 = gpiod_get_index(dev, NULL, 1, GPIOD_OUT_LOW); 652 - if (IS_ERR(gmin_subdevs[i].gpio1)) 653 - gmin_subdevs[i].gpio1 = NULL; 531 + clk_disable_unprepare(gs->pmc_clk); 654 532 655 533 switch (pmic_id) { 656 534 case PMIC_REGULATOR: 657 - gmin_subdevs[i].v1p8_reg = regulator_get(dev, "V1P8SX"); 658 - gmin_subdevs[i].v2p8_reg = regulator_get(dev, "V2P8SX"); 535 + gs->v1p8_reg = regulator_get(dev, "V1P8SX"); 536 + gs->v2p8_reg = regulator_get(dev, "V2P8SX"); 659 537 660 - gmin_subdevs[i].v1p2_reg = regulator_get(dev, "V1P2A"); 661 - gmin_subdevs[i].v2p8_vcm_reg = regulator_get(dev, "VPROG4B"); 538 + gs->v1p2_reg = regulator_get(dev, "V1P2A"); 539 + gs->v2p8_vcm_reg = regulator_get(dev, "VPROG4B"); 662 540 663 541 /* Note: ideally we would initialize v[12]p8_on to the 664 542 * output of regulator_is_enabled(), but sadly that ··· 662 556 break; 663 557 664 558 case PMIC_AXP: 665 - gmin_subdevs[i].eldo1_1p8v = gmin_get_var_int(dev, false, 666 - "eldo1_1p8v", 667 - ELDO1_1P8V); 668 - gmin_subdevs[i].eldo1_sel_reg = gmin_get_var_int(dev, false, 669 - "eldo1_sel_reg", 670 - ELDO1_SEL_REG); 671 - gmin_subdevs[i].eldo1_ctrl_shift = gmin_get_var_int(dev, false, 672 - "eldo1_ctrl_shift", 673 - ELDO1_CTRL_SHIFT); 674 - gmin_subdevs[i].eldo2_1p8v = gmin_get_var_int(dev, false, 675 - "eldo2_1p8v", 676 - ELDO2_1P8V); 677 - gmin_subdevs[i].eldo2_sel_reg = gmin_get_var_int(dev, false, 678 - "eldo2_sel_reg", 679 - ELDO2_SEL_REG); 680 - gmin_subdevs[i].eldo2_ctrl_shift = gmin_get_var_int(dev, false, 681 - "eldo2_ctrl_shift", 682 - ELDO2_CTRL_SHIFT); 683 - gmin_subdevs[i].pwm_i2c_addr = power->addr; 
559 + gs->eldo1_1p8v = gmin_get_var_int(dev, false, 560 + "eldo1_1p8v", 561 + ELDO1_1P8V); 562 + gs->eldo1_sel_reg = gmin_get_var_int(dev, false, 563 + "eldo1_sel_reg", 564 + ELDO1_SEL_REG); 565 + gs->eldo1_ctrl_shift = gmin_get_var_int(dev, false, 566 + "eldo1_ctrl_shift", 567 + ELDO1_CTRL_SHIFT); 568 + gs->eldo2_1p8v = gmin_get_var_int(dev, false, 569 + "eldo2_1p8v", 570 + ELDO2_1P8V); 571 + gs->eldo2_sel_reg = gmin_get_var_int(dev, false, 572 + "eldo2_sel_reg", 573 + ELDO2_SEL_REG); 574 + gs->eldo2_ctrl_shift = gmin_get_var_int(dev, false, 575 + "eldo2_ctrl_shift", 576 + ELDO2_CTRL_SHIFT); 684 577 break; 685 578 686 579 default: 687 580 break; 688 581 } 689 582 690 - return &gmin_subdevs[i]; 583 + return 0; 691 584 } 692 585 693 586 static struct gmin_subdev *find_gmin_subdev(struct v4l2_subdev *subdev) ··· 696 591 for (i = 0; i < MAX_SUBDEVS; i++) 697 592 if (gmin_subdevs[i].subdev == subdev) 698 593 return &gmin_subdevs[i]; 699 - return gmin_subdev_add(subdev); 594 + return NULL; 595 + } 596 + 597 + static struct gmin_subdev *find_free_gmin_subdev_slot(void) 598 + { 599 + unsigned int i; 600 + 601 + for (i = 0; i < MAX_SUBDEVS; i++) 602 + if (gmin_subdevs[i].subdev == NULL) 603 + return &gmin_subdevs[i]; 604 + return NULL; 700 605 } 701 606 702 607 static int axp_regulator_set(struct device *dev, struct gmin_subdev *gs, ··· 815 700 { 816 701 struct gmin_subdev *gs = find_gmin_subdev(subdev); 817 702 int ret; 818 - struct device *dev; 819 - struct i2c_client *client = v4l2_get_subdevdata(subdev); 820 703 int value; 821 704 822 - dev = &client->dev; 823 - 824 - if (v1p8_gpio == V1P8_GPIO_UNSET) { 825 - v1p8_gpio = gmin_get_var_int(dev, true, 826 - "V1P8GPIO", V1P8_GPIO_NONE); 827 - if (v1p8_gpio != V1P8_GPIO_NONE) { 828 - pr_info("atomisp_gmin_platform: 1.8v power on GPIO %d\n", 829 - v1p8_gpio); 830 - ret = gpio_request(v1p8_gpio, "camera_v1p8_en"); 831 - if (!ret) 832 - ret = gpio_direction_output(v1p8_gpio, 0); 833 - if (ret) 834 - pr_err("V1P8 GPIO 
initialization failed\n"); 835 - } 705 + if (gs->v1p8_gpio >= 0) { 706 + pr_info("atomisp_gmin_platform: 1.8v power on GPIO %d\n", 707 + gs->v1p8_gpio); 708 + ret = gpio_request(gs->v1p8_gpio, "camera_v1p8_en"); 709 + if (!ret) 710 + ret = gpio_direction_output(gs->v1p8_gpio, 0); 711 + if (ret) 712 + pr_err("V1P8 GPIO initialization failed\n"); 836 713 } 837 714 838 715 if (!gs || gs->v1p8_on == on) 839 716 return 0; 840 717 gs->v1p8_on = on; 841 718 842 - if (v1p8_gpio >= 0) 843 - gpio_set_value(v1p8_gpio, on); 719 + if (gs->v1p8_gpio >= 0) 720 + gpio_set_value(gs->v1p8_gpio, on); 844 721 845 722 if (gs->v1p8_reg) { 846 723 regulator_set_voltage(gs->v1p8_reg, 1800000, 1800000); ··· 869 762 { 870 763 struct gmin_subdev *gs = find_gmin_subdev(subdev); 871 764 int ret; 872 - struct device *dev; 873 - struct i2c_client *client = v4l2_get_subdevdata(subdev); 874 765 int value; 875 766 876 - dev = &client->dev; 877 - 878 - if (v2p8_gpio == V2P8_GPIO_UNSET) { 879 - v2p8_gpio = gmin_get_var_int(dev, true, 880 - "V2P8GPIO", V2P8_GPIO_NONE); 881 - if (v2p8_gpio != V2P8_GPIO_NONE) { 882 - pr_info("atomisp_gmin_platform: 2.8v power on GPIO %d\n", 883 - v2p8_gpio); 884 - ret = gpio_request(v2p8_gpio, "camera_v2p8"); 885 - if (!ret) 886 - ret = gpio_direction_output(v2p8_gpio, 0); 887 - if (ret) 888 - pr_err("V2P8 GPIO initialization failed\n"); 889 - } 767 + if (gs->v2p8_gpio >= 0) { 768 + pr_info("atomisp_gmin_platform: 2.8v power on GPIO %d\n", 769 + gs->v2p8_gpio); 770 + ret = gpio_request(gs->v2p8_gpio, "camera_v2p8"); 771 + if (!ret) 772 + ret = gpio_direction_output(gs->v2p8_gpio, 0); 773 + if (ret) 774 + pr_err("V2P8 GPIO initialization failed\n"); 890 775 } 891 776 892 777 if (!gs || gs->v2p8_on == on) 893 778 return 0; 894 779 gs->v2p8_on = on; 895 780 896 - if (v2p8_gpio >= 0) 897 - gpio_set_value(v2p8_gpio, on); 781 + if (gs->v2p8_gpio >= 0) 782 + gpio_set_value(gs->v2p8_gpio, on); 898 783 899 784 if (gs->v2p8_reg) { 900 785 regulator_set_voltage(gs->v2p8_reg, 
2900000, 2900000); ··· 916 817 } 917 818 918 819 return -EINVAL; 820 + } 821 + 822 + static int gmin_acpi_pm_ctrl(struct v4l2_subdev *subdev, int on) 823 + { 824 + int ret = 0; 825 + struct gmin_subdev *gs = find_gmin_subdev(subdev); 826 + struct i2c_client *client = v4l2_get_subdevdata(subdev); 827 + struct acpi_device *adev = ACPI_COMPANION(&client->dev); 828 + 829 + /* Use the ACPI power management to control it */ 830 + on = !!on; 831 + if (gs->clock_on == on) 832 + return 0; 833 + 834 + dev_dbg(subdev->dev, "Setting power state to %s\n", 835 + on ? "on" : "off"); 836 + 837 + if (on) 838 + ret = acpi_device_set_power(adev, 839 + ACPI_STATE_D0); 840 + else 841 + ret = acpi_device_set_power(adev, 842 + ACPI_STATE_D3_COLD); 843 + 844 + if (!ret) 845 + gs->clock_on = on; 846 + else 847 + dev_err(subdev->dev, "Couldn't set power state to %s\n", 848 + on ? "on" : "off"); 849 + 850 + return ret; 919 851 } 920 852 921 853 static int gmin_flisclk_ctrl(struct v4l2_subdev *subdev, int on) ··· 1014 884 return NULL; 1015 885 } 1016 886 1017 - static struct camera_sensor_platform_data gmin_plat = { 887 + static struct camera_sensor_platform_data pmic_gmin_plat = { 1018 888 .gpio0_ctrl = gmin_gpio0_ctrl, 1019 889 .gpio1_ctrl = gmin_gpio1_ctrl, 1020 890 .v1p8_ctrl = gmin_v1p8_ctrl, ··· 1025 895 .get_vcm_ctrl = gmin_get_vcm_ctrl, 1026 896 }; 1027 897 898 + static struct camera_sensor_platform_data acpi_gmin_plat = { 899 + .gpio0_ctrl = gmin_gpio0_ctrl, 900 + .gpio1_ctrl = gmin_gpio1_ctrl, 901 + .v1p8_ctrl = gmin_acpi_pm_ctrl, 902 + .v2p8_ctrl = gmin_acpi_pm_ctrl, 903 + .v1p2_ctrl = gmin_acpi_pm_ctrl, 904 + .flisclk_ctrl = gmin_acpi_pm_ctrl, 905 + .csi_cfg = gmin_csi_cfg, 906 + .get_vcm_ctrl = gmin_get_vcm_ctrl, 907 + }; 908 + 1028 909 struct camera_sensor_platform_data *gmin_camera_platform_data( 1029 910 struct v4l2_subdev *subdev, 1030 911 enum atomisp_input_format csi_format, 1031 912 enum atomisp_bayer_order csi_bayer) 1032 913 { 1033 - struct gmin_subdev *gs = 
find_gmin_subdev(subdev); 914 + u8 pmic_i2c_addr = gmin_detect_pmic(subdev); 915 + struct gmin_subdev *gs; 1034 916 917 + gs = find_free_gmin_subdev_slot(); 918 + gs->subdev = subdev; 1035 919 gs->csi_fmt = csi_format; 1036 920 gs->csi_bayer = csi_bayer; 921 + gs->pwm_i2c_addr = pmic_i2c_addr; 1037 922 1038 - return &gmin_plat; 923 + gmin_subdev_add(gs); 924 + if (gs->pmc_clk) 925 + return &pmic_gmin_plat; 926 + else 927 + return &acpi_gmin_plat; 1039 928 } 1040 929 EXPORT_SYMBOL_GPL(gmin_camera_platform_data); 1041 930 ··· 1106 957 union acpi_object *obj, *cur = NULL; 1107 958 int i; 1108 959 960 + /* 961 + * The data reported by "CamClk" seems to be either 0 or 1 at the 962 + * _DSM table. 963 + * 964 + * At the ACPI tables we looked so far, this is not related to the 965 + * actual clock source for the sensor, which is given by the 966 + * _PR0 ACPI table. So, ignore it, as otherwise this will be 967 + * set to a wrong value. 968 + */ 969 + if (!strcmp(var, "CamClk")) 970 + return -EINVAL; 971 + 1109 972 obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL); 1110 973 if (!obj) { 1111 974 dev_info_once(dev, "Didn't find ACPI _DSM table.\n"); 1112 975 return -EINVAL; 1113 976 } 977 + 978 + /* Return on unexpected object type */ 979 + if (obj->type != ACPI_TYPE_PACKAGE) 980 + return -EINVAL; 1114 981 1115 982 #if 0 /* Just for debugging purposes */ 1116 983 for (i = 0; i < obj->package.count; i++) { ··· 1320 1155 * trying. The driver itself does direct calls to the PUNIT to manage 1321 1156 * ISP power. 1322 1157 */ 1323 - static void isp_pm_cap_fixup(struct pci_dev *dev) 1158 + static void isp_pm_cap_fixup(struct pci_dev *pdev) 1324 1159 { 1325 - dev_info(&dev->dev, "Disabling PCI power management on camera ISP\n"); 1326 - dev->pm_cap = 0; 1160 + dev_info(&pdev->dev, "Disabling PCI power management on camera ISP\n"); 1161 + pdev->pm_cap = 0; 1327 1162 } 1328 1163 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0f38, isp_pm_cap_fixup); 1329 1164
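The new `atomisp_get_acpi_power()` walks the `_PR0` package and accepts only power resources named `CLK0`..`CLK4`. The name check can be restated as a standalone helper (hypothetical `parse_clk_resource()` name; the comparison logic is taken directly from the hunk):

```c
#include <assert.h>
#include <string.h>

/* A _PR0 power resource named "CLK0".."CLK4" maps to its clock number;
 * any other name is rejected with -1, matching the
 * `strlen(name) == 4 && !strncmp(name, "CLK", 3)` test in the patch. */
static int parse_clk_resource(const char *name)
{
	if (strlen(name) == 4 && !strncmp(name, "CLK", 3) &&
	    name[3] >= '0' && name[3] <= '4')
		return name[3] - '0';
	return -1;
}
```

Note the accepted digits are `0`..`4` even though `MAX_CLK_COUNT` is 5; the ACPI path and the EFI fallback are bounds-checked separately in the patch.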
+1 -1
drivers/staging/media/atomisp/pci/atomisp_internal.h
··· 216 216 * ci device struct 217 217 */ 218 218 struct atomisp_device { 219 - struct pci_dev *pdev; 220 219 struct device *dev; 221 220 struct v4l2_device v4l2_dev; 222 221 struct media_device media_dev; 223 222 struct atomisp_platform_data *pdata; 224 223 void *mmu_l1_base; 224 + void __iomem *base; 225 225 const struct firmware *firmware; 226 226 227 227 struct pm_qos_request pm_qos;
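With the cached `struct pci_dev *pdev` removed from `struct atomisp_device`, the later hunks recover it on demand via `to_pci_dev(isp->dev)`, which is `container_of()` underneath. A simplified standalone sketch with mock structs (the kernel's `container_of` adds type checking; the real layouts are in `<linux/pci.h>`):

```c
#include <assert.h>
#include <stddef.h>

/* pci_dev embeds a struct device by value, so pointer arithmetic can
 * recover the enclosing structure from the embedded member's address. */
struct device { int dummy; };
struct pci_dev {
	unsigned short vendor;
	struct device dev;	/* embedded, not a pointer */
};

/* Classic container_of: subtract the member's offset from its address. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define to_pci_dev(d) container_of(d, struct pci_dev, dev)
```

This is why dropping the `pdev` field costs nothing: the generic `struct device *dev` pointer already identifies the PCI device uniquely.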
+9 -10
drivers/staging/media/atomisp/pci/atomisp_ioctl.c
··· 549 549 550 550 strscpy(cap->driver, DRIVER, sizeof(cap->driver)); 551 551 strscpy(cap->card, CARD, sizeof(cap->card)); 552 - snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", 553 - pci_name(isp->pdev)); 552 + snprintf(cap->bus_info, sizeof(cap->bus_info), "PCI:%s", dev_name(isp->dev)); 554 553 555 554 return 0; 556 555 } ··· 1634 1635 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1635 1636 struct atomisp_sub_device *asd = pipe->asd; 1636 1637 struct atomisp_device *isp = video_get_drvdata(vdev); 1638 + struct pci_dev *pdev = to_pci_dev(isp->dev); 1637 1639 enum ia_css_pipe_id css_pipe_id; 1638 1640 unsigned int sensor_start_stream; 1639 1641 unsigned int wdt_duration = ATOMISP_ISP_TIMEOUT_DURATION; ··· 1844 1844 /* Enable the CSI interface on ANN B0/K0 */ 1845 1845 if (isp->media_dev.hw_revision >= ((ATOMISP_HW_REVISION_ISP2401 << 1846 1846 ATOMISP_HW_REVISION_SHIFT) | ATOMISP_HW_STEPPING_B0)) { 1847 - pci_write_config_word(isp->pdev, MRFLD_PCI_CSI_CONTROL, 1848 - isp->saved_regs.csi_control | 1849 - MRFLD_PCI_CSI_CONTROL_CSI_READY); 1847 + pci_write_config_word(pdev, MRFLD_PCI_CSI_CONTROL, 1848 + isp->saved_regs.csi_control | MRFLD_PCI_CSI_CONTROL_CSI_READY); 1850 1849 } 1851 1850 1852 1851 /* stream on the sensor */ ··· 1890 1891 { 1891 1892 struct video_device *vdev = video_devdata(file); 1892 1893 struct atomisp_device *isp = video_get_drvdata(vdev); 1894 + struct pci_dev *pdev = to_pci_dev(isp->dev); 1893 1895 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1894 1896 struct atomisp_sub_device *asd = pipe->asd; 1895 1897 struct atomisp_video_pipe *capture_pipe = NULL; ··· 2076 2076 /* Disable the CSI interface on ANN B0/K0 */ 2077 2077 if (isp->media_dev.hw_revision >= ((ATOMISP_HW_REVISION_ISP2401 << 2078 2078 ATOMISP_HW_REVISION_SHIFT) | ATOMISP_HW_STEPPING_B0)) { 2079 - pci_write_config_word(isp->pdev, MRFLD_PCI_CSI_CONTROL, 2080 - isp->saved_regs.csi_control & 2081 - ~MRFLD_PCI_CSI_CONTROL_CSI_READY); 2079 + 
pci_write_config_word(pdev, MRFLD_PCI_CSI_CONTROL, 2080 + isp->saved_regs.csi_control & ~MRFLD_PCI_CSI_CONTROL_CSI_READY); 2082 2081 } 2083 2082 2084 2083 if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_LOW, false)) ··· 2110 2111 } 2111 2112 2112 2113 /* disable PUNIT/ISP acknowlede/handshake - SRSE=3 */ 2113 - pci_write_config_dword(isp->pdev, PCI_I_CONTROL, isp->saved_regs.i_control | 2114 - MRFLD_PCI_I_CONTROL_SRSE_RESET_MASK); 2114 + pci_write_config_dword(pdev, PCI_I_CONTROL, 2115 + isp->saved_regs.i_control | MRFLD_PCI_I_CONTROL_SRSE_RESET_MASK); 2115 2116 dev_err(isp->dev, "atomisp_reset"); 2116 2117 atomisp_reset(isp); 2117 2118 for (i = 0; i < isp->num_of_streams; i++) {
+125 -156
drivers/staging/media/atomisp/pci/atomisp_v4l2.c
··· 127 127 128 128 struct device *atomisp_dev; 129 129 130 - void __iomem *atomisp_io_base; 131 - 132 130 static const struct atomisp_freq_scaling_rule dfs_rules_merr[] = { 133 131 { 134 132 .width = ISP_FREQ_RULE_ANY, ··· 510 512 511 513 static int atomisp_save_iunit_reg(struct atomisp_device *isp) 512 514 { 513 - struct pci_dev *dev = isp->pdev; 515 + struct pci_dev *pdev = to_pci_dev(isp->dev); 514 516 515 517 dev_dbg(isp->dev, "%s\n", __func__); 516 518 517 - pci_read_config_word(dev, PCI_COMMAND, &isp->saved_regs.pcicmdsts); 519 + pci_read_config_word(pdev, PCI_COMMAND, &isp->saved_regs.pcicmdsts); 518 520 /* isp->saved_regs.ispmmadr is set from the atomisp_pci_probe() */ 519 - pci_read_config_dword(dev, PCI_MSI_CAPID, &isp->saved_regs.msicap); 520 - pci_read_config_dword(dev, PCI_MSI_ADDR, &isp->saved_regs.msi_addr); 521 - pci_read_config_word(dev, PCI_MSI_DATA, &isp->saved_regs.msi_data); 522 - pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &isp->saved_regs.intr); 523 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, 524 - &isp->saved_regs.interrupt_control); 521 + pci_read_config_dword(pdev, PCI_MSI_CAPID, &isp->saved_regs.msicap); 522 + pci_read_config_dword(pdev, PCI_MSI_ADDR, &isp->saved_regs.msi_addr); 523 + pci_read_config_word(pdev, PCI_MSI_DATA, &isp->saved_regs.msi_data); 524 + pci_read_config_byte(pdev, PCI_INTERRUPT_LINE, &isp->saved_regs.intr); 525 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &isp->saved_regs.interrupt_control); 525 526 526 - pci_read_config_dword(dev, MRFLD_PCI_PMCS, 527 - &isp->saved_regs.pmcs); 527 + pci_read_config_dword(pdev, MRFLD_PCI_PMCS, &isp->saved_regs.pmcs); 528 528 /* Ensure read/write combining is enabled. 
*/ 529 - pci_read_config_dword(dev, PCI_I_CONTROL, 530 - &isp->saved_regs.i_control); 529 + pci_read_config_dword(pdev, PCI_I_CONTROL, &isp->saved_regs.i_control); 531 530 isp->saved_regs.i_control |= 532 531 MRFLD_PCI_I_CONTROL_ENABLE_READ_COMBINING | 533 532 MRFLD_PCI_I_CONTROL_ENABLE_WRITE_COMBINING; 534 - pci_read_config_dword(dev, MRFLD_PCI_CSI_ACCESS_CTRL_VIOL, 533 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_ACCESS_CTRL_VIOL, 535 534 &isp->saved_regs.csi_access_viol); 536 - pci_read_config_dword(dev, MRFLD_PCI_CSI_RCOMP_CONTROL, 535 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_RCOMP_CONTROL, 537 536 &isp->saved_regs.csi_rcomp_config); 538 537 /* 539 538 * Hardware bugs require setting CSI_HS_OVR_CLK_GATE_ON_UPDATE. ··· 540 545 * is missed, and IUNIT can hang. 541 546 * For both issues, setting this bit is a workaround. 542 547 */ 543 - isp->saved_regs.csi_rcomp_config |= 544 - MRFLD_PCI_CSI_HS_OVR_CLK_GATE_ON_UPDATE; 545 - pci_read_config_dword(dev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 548 + isp->saved_regs.csi_rcomp_config |= MRFLD_PCI_CSI_HS_OVR_CLK_GATE_ON_UPDATE; 549 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 546 550 &isp->saved_regs.csi_afe_dly); 547 - pci_read_config_dword(dev, MRFLD_PCI_CSI_CONTROL, 551 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, 548 552 &isp->saved_regs.csi_control); 549 553 if (isp->media_dev.hw_revision >= 550 554 (ATOMISP_HW_REVISION_ISP2401 << ATOMISP_HW_REVISION_SHIFT)) 551 - isp->saved_regs.csi_control |= 552 - MRFLD_PCI_CSI_CONTROL_PARPATHEN; 555 + isp->saved_regs.csi_control |= MRFLD_PCI_CSI_CONTROL_PARPATHEN; 553 556 /* 554 557 * On CHT CSI_READY bit should be enabled before stream on 555 558 */ 556 559 if (IS_CHT && (isp->media_dev.hw_revision >= ((ATOMISP_HW_REVISION_ISP2401 << 557 560 ATOMISP_HW_REVISION_SHIFT) | ATOMISP_HW_STEPPING_B0))) 558 - isp->saved_regs.csi_control |= 559 - MRFLD_PCI_CSI_CONTROL_CSI_READY; 560 - pci_read_config_dword(dev, MRFLD_PCI_CSI_AFE_RCOMP_CONTROL, 561 + 
isp->saved_regs.csi_control |= MRFLD_PCI_CSI_CONTROL_CSI_READY; 562 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_AFE_RCOMP_CONTROL, 561 563 &isp->saved_regs.csi_afe_rcomp_config); 562 - pci_read_config_dword(dev, MRFLD_PCI_CSI_AFE_HS_CONTROL, 564 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_AFE_HS_CONTROL, 563 565 &isp->saved_regs.csi_afe_hs_control); 564 - pci_read_config_dword(dev, MRFLD_PCI_CSI_DEADLINE_CONTROL, 566 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_DEADLINE_CONTROL, 565 567 &isp->saved_regs.csi_deadline_control); 566 568 return 0; 567 569 } 568 570 569 571 static int __maybe_unused atomisp_restore_iunit_reg(struct atomisp_device *isp) 570 572 { 571 - struct pci_dev *dev = isp->pdev; 573 + struct pci_dev *pdev = to_pci_dev(isp->dev); 572 574 573 575 dev_dbg(isp->dev, "%s\n", __func__); 574 576 575 - pci_write_config_word(dev, PCI_COMMAND, isp->saved_regs.pcicmdsts); 576 - pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 577 - isp->saved_regs.ispmmadr); 578 - pci_write_config_dword(dev, PCI_MSI_CAPID, isp->saved_regs.msicap); 579 - pci_write_config_dword(dev, PCI_MSI_ADDR, isp->saved_regs.msi_addr); 580 - pci_write_config_word(dev, PCI_MSI_DATA, isp->saved_regs.msi_data); 581 - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, isp->saved_regs.intr); 582 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, 583 - isp->saved_regs.interrupt_control); 584 - pci_write_config_dword(dev, PCI_I_CONTROL, 585 - isp->saved_regs.i_control); 577 + pci_write_config_word(pdev, PCI_COMMAND, isp->saved_regs.pcicmdsts); 578 + pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, isp->saved_regs.ispmmadr); 579 + pci_write_config_dword(pdev, PCI_MSI_CAPID, isp->saved_regs.msicap); 580 + pci_write_config_dword(pdev, PCI_MSI_ADDR, isp->saved_regs.msi_addr); 581 + pci_write_config_word(pdev, PCI_MSI_DATA, isp->saved_regs.msi_data); 582 + pci_write_config_byte(pdev, PCI_INTERRUPT_LINE, isp->saved_regs.intr); 583 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, 
isp->saved_regs.interrupt_control); 584 + pci_write_config_dword(pdev, PCI_I_CONTROL, isp->saved_regs.i_control); 586 585 587 - pci_write_config_dword(dev, MRFLD_PCI_PMCS, 588 - isp->saved_regs.pmcs); 589 - pci_write_config_dword(dev, MRFLD_PCI_CSI_ACCESS_CTRL_VIOL, 586 + pci_write_config_dword(pdev, MRFLD_PCI_PMCS, isp->saved_regs.pmcs); 587 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_ACCESS_CTRL_VIOL, 590 588 isp->saved_regs.csi_access_viol); 591 - pci_write_config_dword(dev, MRFLD_PCI_CSI_RCOMP_CONTROL, 589 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_RCOMP_CONTROL, 592 590 isp->saved_regs.csi_rcomp_config); 593 - pci_write_config_dword(dev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 591 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 594 592 isp->saved_regs.csi_afe_dly); 595 - pci_write_config_dword(dev, MRFLD_PCI_CSI_CONTROL, 593 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, 596 594 isp->saved_regs.csi_control); 597 - pci_write_config_dword(dev, MRFLD_PCI_CSI_AFE_RCOMP_CONTROL, 595 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_RCOMP_CONTROL, 598 596 isp->saved_regs.csi_afe_rcomp_config); 599 - pci_write_config_dword(dev, MRFLD_PCI_CSI_AFE_HS_CONTROL, 597 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_HS_CONTROL, 600 598 isp->saved_regs.csi_afe_hs_control); 601 - pci_write_config_dword(dev, MRFLD_PCI_CSI_DEADLINE_CONTROL, 599 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_DEADLINE_CONTROL, 602 600 isp->saved_regs.csi_deadline_control); 603 601 604 602 /* ··· 607 619 608 620 static int atomisp_mrfld_pre_power_down(struct atomisp_device *isp) 609 621 { 610 - struct pci_dev *dev = isp->pdev; 622 + struct pci_dev *pdev = to_pci_dev(isp->dev); 611 623 u32 irq; 612 624 unsigned long flags; 613 625 ··· 623 635 * So, here we need to check if there is any pending 624 636 * IRQ, if so, waiting for it to be served 625 637 */ 626 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 638 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 627 639 irq 
= irq & 1 << INTR_IIR; 628 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, irq); 640 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, irq); 629 641 630 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 642 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 631 643 if (!(irq & (1 << INTR_IIR))) 632 644 goto done; 633 645 ··· 640 652 spin_unlock_irqrestore(&isp->lock, flags); 641 653 return -EAGAIN; 642 654 } else { 643 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 655 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 644 656 irq = irq & 1 << INTR_IIR; 645 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, irq); 657 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, irq); 646 658 647 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 659 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 648 660 if (!(irq & (1 << INTR_IIR))) { 649 661 atomisp_css2_hw_store_32(MRFLD_INTR_ENABLE_REG, 0x0); 650 662 goto done; ··· 663 675 * to IIR. It could block subsequent interrupt messages. 664 676 * HW sighting:4568410. 
665 677 */ 666 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 678 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 667 679 irq &= ~(1 << INTR_IER); 668 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, irq); 680 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, irq); 669 681 670 - atomisp_msi_irq_uninit(isp, dev); 682 + atomisp_msi_irq_uninit(isp); 671 683 atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_LOW, true); 672 684 spin_unlock_irqrestore(&isp->lock, flags); 673 685 ··· 743 755 744 756 /* Wait until ISPSSPM0 bit[25:24] shows the right value */ 745 757 iosf_mbi_read(BT_MBI_UNIT_PMC, MBI_REG_READ, MRFLD_ISPSSPM0, &tmp); 746 - tmp = (tmp & MRFLD_ISPSSPM0_ISPSSC_MASK) >> MRFLD_ISPSSPM0_ISPSSS_OFFSET; 758 + tmp = (tmp >> MRFLD_ISPSSPM0_ISPSSS_OFFSET) & MRFLD_ISPSSPM0_ISPSSC_MASK; 747 759 if (tmp == val) { 748 760 trace_ipu_cstate(enable); 749 761 return 0; ··· 766 778 /* Workaround for pmu_nc_set_power_state not ready in MRFLD */ 767 779 int atomisp_mrfld_power_down(struct atomisp_device *isp) 768 780 { 769 - // FIXME: at least with ISP2401, enabling this code causes the driver to break 770 - return 0 && atomisp_mrfld_power(isp, false); 781 + return atomisp_mrfld_power(isp, false); 771 782 } 772 783 773 784 /* Workaround for pmu_nc_set_power_state not ready in MRFLD */ 774 785 int atomisp_mrfld_power_up(struct atomisp_device *isp) 775 786 { 776 - // FIXME: at least with ISP2401, enabling this code causes the driver to break 777 - return 0 && atomisp_mrfld_power(isp, true); 787 + return atomisp_mrfld_power(isp, true); 778 788 } 779 789 780 790 int atomisp_runtime_suspend(struct device *dev) ··· 888 902 889 903 int atomisp_csi_lane_config(struct atomisp_device *isp) 890 904 { 905 + struct pci_dev *pdev = to_pci_dev(isp->dev); 891 906 static const struct { 892 907 u8 code; 893 908 u8 lanes[MRFLD_PORT_NUM]; ··· 990 1003 return -EINVAL; 991 1004 } 992 1005 993 - pci_read_config_dword(isp->pdev, MRFLD_PCI_CSI_CONTROL, &csi_control); 1006 + 
pci_read_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, &csi_control); 994 1007 csi_control &= ~port_config_mask; 995 1008 csi_control |= (portconfigs[i].code << MRFLD_PORT_CONFIGCODE_SHIFT) 996 1009 | (portconfigs[i].lanes[0] ? 0 : (1 << MRFLD_PORT1_ENABLE_SHIFT)) ··· 1000 1013 | (((1 << portconfigs[i].lanes[1]) - 1) << MRFLD_PORT2_LANES_SHIFT) 1001 1014 | (((1 << portconfigs[i].lanes[2]) - 1) << port3_lanes_shift); 1002 1015 1003 - pci_write_config_dword(isp->pdev, MRFLD_PCI_CSI_CONTROL, csi_control); 1016 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_CONTROL, csi_control); 1004 1017 1005 1018 dev_dbg(isp->dev, 1006 1019 "%s: the portconfig is %d-%d-%d, CSI_CONTROL is 0x%08X\n", ··· 1427 1440 * Check for flags the driver was compiled with against the PCI 1428 1441 * device. Always returns true on other than ISP 2400. 1429 1442 */ 1430 - static bool is_valid_device(struct pci_dev *dev, 1431 - const struct pci_device_id *id) 1443 + static bool is_valid_device(struct pci_dev *pdev, const struct pci_device_id *id) 1432 1444 { 1433 1445 unsigned int a0_max_id = 0; 1434 1446 const char *name; ··· 1451 1465 name = "Cherrytrail"; 1452 1466 break; 1453 1467 default: 1454 - dev_err(&dev->dev, "%s: unknown device ID %x04:%x04\n", 1468 + dev_err(&pdev->dev, "%s: unknown device ID %x04:%x04\n", 1455 1469 product, id->vendor, id->device); 1456 1470 return false; 1457 1471 } 1458 1472 1459 - if (dev->revision <= ATOMISP_PCI_REV_BYT_A0_MAX) { 1460 - dev_err(&dev->dev, "%s revision %d is not unsupported\n", 1461 - name, dev->revision); 1473 + if (pdev->revision <= ATOMISP_PCI_REV_BYT_A0_MAX) { 1474 + dev_err(&pdev->dev, "%s revision %d is not unsupported\n", 1475 + name, pdev->revision); 1462 1476 return false; 1463 1477 } 1464 1478 ··· 1469 1483 1470 1484 #if defined(ISP2400) 1471 1485 if (IS_ISP2401) { 1472 - dev_err(&dev->dev, "Support for %s (ISP2401) was disabled at compile time\n", 1486 + dev_err(&pdev->dev, "Support for %s (ISP2401) was disabled at compile time\n", 1473 1487 
name); 1474 1488 return false; 1475 1489 } 1476 1490 #else 1477 1491 if (!IS_ISP2401) { 1478 - dev_err(&dev->dev, "Support for %s (ISP2400) was disabled at compile time\n", 1492 + dev_err(&pdev->dev, "Support for %s (ISP2400) was disabled at compile time\n", 1479 1493 name); 1480 1494 return false; 1481 1495 } 1482 1496 #endif 1483 1497 1484 - dev_info(&dev->dev, "Detected %s version %d (ISP240%c) on %s\n", 1485 - name, dev->revision, 1486 - IS_ISP2401 ? '1' : '0', 1487 - product); 1498 + dev_info(&pdev->dev, "Detected %s version %d (ISP240%c) on %s\n", 1499 + name, pdev->revision, IS_ISP2401 ? '1' : '0', product); 1488 1500 1489 1501 return true; 1490 1502 } ··· 1522 1538 1523 1539 #define ATOM_ISP_PCI_BAR 0 1524 1540 1525 - static int atomisp_pci_probe(struct pci_dev *dev, 1526 - const struct pci_device_id *id) 1541 + static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1527 1542 { 1528 1543 const struct atomisp_platform_data *pdata; 1529 1544 struct atomisp_device *isp; 1530 1545 unsigned int start; 1531 - void __iomem *base; 1532 1546 int err, val; 1533 1547 u32 irq; 1534 1548 1535 - if (!is_valid_device(dev, id)) 1549 + if (!is_valid_device(pdev, id)) 1536 1550 return -ENODEV; 1537 1551 1538 1552 /* Pointer to struct device. 
*/ 1539 - atomisp_dev = &dev->dev; 1553 + atomisp_dev = &pdev->dev; 1540 1554 1541 1555 pdata = atomisp_get_platform_data(); 1542 1556 if (!pdata) 1543 - dev_warn(&dev->dev, "no platform data available\n"); 1557 + dev_warn(&pdev->dev, "no platform data available\n"); 1544 1558 1545 - err = pcim_enable_device(dev); 1559 + err = pcim_enable_device(pdev); 1546 1560 if (err) { 1547 - dev_err(&dev->dev, "Failed to enable CI ISP device (%d)\n", 1548 - err); 1561 + dev_err(&pdev->dev, "Failed to enable CI ISP device (%d)\n", err); 1549 1562 return err; 1550 1563 } 1551 1564 1552 - start = pci_resource_start(dev, ATOM_ISP_PCI_BAR); 1553 - dev_dbg(&dev->dev, "start: 0x%x\n", start); 1565 + start = pci_resource_start(pdev, ATOM_ISP_PCI_BAR); 1566 + dev_dbg(&pdev->dev, "start: 0x%x\n", start); 1554 1567 1555 - err = pcim_iomap_regions(dev, 1 << ATOM_ISP_PCI_BAR, pci_name(dev)); 1568 + err = pcim_iomap_regions(pdev, 1 << ATOM_ISP_PCI_BAR, pci_name(pdev)); 1556 1569 if (err) { 1557 - dev_err(&dev->dev, "Failed to I/O memory remapping (%d)\n", 1558 - err); 1570 + dev_err(&pdev->dev, "Failed to I/O memory remapping (%d)\n", err); 1559 1571 goto ioremap_fail; 1560 1572 } 1561 1573 1562 - base = pcim_iomap_table(dev)[ATOM_ISP_PCI_BAR]; 1563 - dev_dbg(&dev->dev, "base: %p\n", base); 1564 - 1565 - atomisp_io_base = base; 1566 - 1567 - dev_dbg(&dev->dev, "atomisp_io_base: %p\n", atomisp_io_base); 1568 - 1569 - isp = devm_kzalloc(&dev->dev, sizeof(struct atomisp_device), GFP_KERNEL); 1574 + isp = devm_kzalloc(&pdev->dev, sizeof(*isp), GFP_KERNEL); 1570 1575 if (!isp) { 1571 1576 err = -ENOMEM; 1572 1577 goto atomisp_dev_alloc_fail; 1573 1578 } 1574 - isp->pdev = dev; 1575 - isp->dev = &dev->dev; 1579 + 1580 + isp->dev = &pdev->dev; 1581 + isp->base = pcim_iomap_table(pdev)[ATOM_ISP_PCI_BAR]; 1576 1582 isp->sw_contex.power_state = ATOM_ISP_POWER_UP; 1577 1583 isp->saved_regs.ispmmadr = start; 1584 + 1585 + dev_dbg(&pdev->dev, "atomisp mmio base: %p\n", isp->base); 1578 1586 1579 1587 
rt_mutex_init(&isp->mutex); 1580 1588 mutex_init(&isp->streamoff_mutex); 1581 1589 spin_lock_init(&isp->lock); 1582 1590 1583 1591 /* This is not a true PCI device on SoC, so the delay is not needed. */ 1584 - isp->pdev->d3_delay = 0; 1592 + pdev->d3_delay = 0; 1593 + 1594 + pci_set_drvdata(pdev, isp); 1585 1595 1586 1596 switch (id->device & ATOMISP_PCI_DEVICE_SOC_MASK) { 1587 1597 case ATOMISP_PCI_DEVICE_SOC_MRFLD: ··· 1626 1648 * have specs yet for exactly how it varies. Default to 1627 1649 * BYT-CR but let provisioning set it via EFI variable 1628 1650 */ 1629 - isp->hpll_freq = gmin_get_var_int(&dev->dev, false, "HpllFreq", 1630 - HPLL_FREQ_2000MHZ); 1651 + isp->hpll_freq = gmin_get_var_int(&pdev->dev, false, "HpllFreq", HPLL_FREQ_2000MHZ); 1631 1652 1632 1653 /* 1633 1654 * for BYT/CHT we are put isp into D3cold to avoid pci registers access 1634 1655 * in power off. Set d3cold_delay to 0 since default 100ms is not 1635 1656 * necessary. 1636 1657 */ 1637 - isp->pdev->d3cold_delay = 0; 1658 + pdev->d3cold_delay = 0; 1638 1659 break; 1639 1660 case ATOMISP_PCI_DEVICE_SOC_ANN: 1640 1661 isp->media_dev.hw_revision = ( ··· 1643 1666 ATOMISP_HW_REVISION_ISP2401_LEGACY 1644 1667 #endif 1645 1668 << ATOMISP_HW_REVISION_SHIFT); 1646 - isp->media_dev.hw_revision |= isp->pdev->revision < 2 ? 1669 + isp->media_dev.hw_revision |= pdev->revision < 2 ? 1647 1670 ATOMISP_HW_STEPPING_A0 : ATOMISP_HW_STEPPING_B0; 1648 1671 isp->dfs = &dfs_config_merr; 1649 1672 isp->hpll_freq = HPLL_FREQ_1600MHZ; ··· 1656 1679 ATOMISP_HW_REVISION_ISP2401_LEGACY 1657 1680 #endif 1658 1681 << ATOMISP_HW_REVISION_SHIFT); 1659 - isp->media_dev.hw_revision |= isp->pdev->revision < 2 ? 1682 + isp->media_dev.hw_revision |= pdev->revision < 2 ? 
1660 1683 ATOMISP_HW_STEPPING_A0 : ATOMISP_HW_STEPPING_B0; 1661 1684 1662 1685 isp->dfs = &dfs_config_cht; 1663 - isp->pdev->d3cold_delay = 0; 1686 + pdev->d3cold_delay = 0; 1664 1687 1665 - iosf_mbi_read(CCK_PORT, MBI_REG_READ, CCK_FUSE_REG_0, &val); 1688 + iosf_mbi_read(BT_MBI_UNIT_CCK, MBI_REG_READ, CCK_FUSE_REG_0, &val); 1666 1689 switch (val & CCK_FUSE_HPLL_FREQ_MASK) { 1667 1690 case 0x00: 1668 1691 isp->hpll_freq = HPLL_FREQ_800MHZ; ··· 1675 1698 break; 1676 1699 default: 1677 1700 isp->hpll_freq = HPLL_FREQ_1600MHZ; 1678 - dev_warn(isp->dev, 1679 - "read HPLL from cck failed. Default to 1600 MHz.\n"); 1701 + dev_warn(&pdev->dev, "read HPLL from cck failed. Default to 1600 MHz.\n"); 1680 1702 } 1681 1703 break; 1682 1704 default: 1683 - dev_err(&dev->dev, "un-supported IUNIT device\n"); 1705 + dev_err(&pdev->dev, "un-supported IUNIT device\n"); 1684 1706 err = -ENODEV; 1685 1707 goto atomisp_dev_alloc_fail; 1686 1708 } 1687 1709 1688 - dev_info(&dev->dev, "ISP HPLL frequency base = %d MHz\n", 1689 - isp->hpll_freq); 1710 + dev_info(&pdev->dev, "ISP HPLL frequency base = %d MHz\n", isp->hpll_freq); 1690 1711 1691 1712 isp->max_isr_latency = ATOMISP_MAX_ISR_LATENCY; 1692 1713 ··· 1693 1718 isp->firmware = atomisp_load_firmware(isp); 1694 1719 if (!isp->firmware) { 1695 1720 err = -ENOENT; 1696 - dev_dbg(&dev->dev, "Firmware load failed\n"); 1721 + dev_dbg(&pdev->dev, "Firmware load failed\n"); 1697 1722 goto load_fw_fail; 1698 1723 } 1699 1724 1700 - err = sh_css_check_firmware_version(isp->dev, 1701 - isp->firmware->data); 1725 + err = sh_css_check_firmware_version(isp->dev, isp->firmware->data); 1702 1726 if (err) { 1703 - dev_dbg(&dev->dev, "Firmware version check failed\n"); 1727 + dev_dbg(&pdev->dev, "Firmware version check failed\n"); 1704 1728 goto fw_validation_fail; 1705 1729 } 1706 1730 } else { 1707 - dev_info(&dev->dev, "Firmware load will be deferred\n"); 1731 + dev_info(&pdev->dev, "Firmware load will be deferred\n"); 1708 1732 } 1709 1733 1710 - 
pci_set_master(dev); 1711 - pci_set_drvdata(dev, isp); 1734 + pci_set_master(pdev); 1712 1735 1713 - err = pci_enable_msi(dev); 1736 + err = pci_enable_msi(pdev); 1714 1737 if (err) { 1715 - dev_err(&dev->dev, "Failed to enable msi (%d)\n", err); 1738 + dev_err(&pdev->dev, "Failed to enable msi (%d)\n", err); 1716 1739 goto enable_msi_fail; 1717 1740 } 1718 1741 1719 - atomisp_msi_irq_init(isp, dev); 1742 + atomisp_msi_irq_init(isp); 1720 1743 1721 1744 cpu_latency_qos_add_request(&isp->pm_qos, PM_QOS_DEFAULT_VALUE); 1722 1745 ··· 1735 1762 * Workaround for imbalance data eye issue which is observed 1736 1763 * on TNG B0. 1737 1764 */ 1738 - pci_read_config_dword(dev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 1739 - &csi_afe_trim); 1765 + pci_read_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, &csi_afe_trim); 1740 1766 csi_afe_trim &= ~((MRFLD_PCI_CSI_HSRXCLKTRIM_MASK << 1741 1767 MRFLD_PCI_CSI1_HSRXCLKTRIM_SHIFT) | 1742 1768 (MRFLD_PCI_CSI_HSRXCLKTRIM_MASK << ··· 1748 1776 MRFLD_PCI_CSI2_HSRXCLKTRIM_SHIFT) | 1749 1777 (MRFLD_PCI_CSI3_HSRXCLKTRIM << 1750 1778 MRFLD_PCI_CSI3_HSRXCLKTRIM_SHIFT); 1751 - pci_write_config_dword(dev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, 1752 - csi_afe_trim); 1779 + pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, csi_afe_trim); 1753 1780 } 1754 1781 1755 1782 err = atomisp_initialize_modules(isp); 1756 1783 if (err < 0) { 1757 - dev_err(&dev->dev, "atomisp_initialize_modules (%d)\n", err); 1784 + dev_err(&pdev->dev, "atomisp_initialize_modules (%d)\n", err); 1758 1785 goto initialize_modules_fail; 1759 1786 } 1760 1787 1761 1788 err = atomisp_register_entities(isp); 1762 1789 if (err < 0) { 1763 - dev_err(&dev->dev, "atomisp_register_entities failed (%d)\n", 1764 - err); 1790 + dev_err(&pdev->dev, "atomisp_register_entities failed (%d)\n", err); 1765 1791 goto register_entities_fail; 1766 1792 } 1767 1793 err = atomisp_create_pads_links(isp); ··· 1772 1802 /* save the iunit context only once after all the values are init'ed. 
*/ 1773 1803 atomisp_save_iunit_reg(isp); 1774 1804 1775 - pm_runtime_put_noidle(&dev->dev); 1776 - pm_runtime_allow(&dev->dev); 1805 + pm_runtime_put_noidle(&pdev->dev); 1806 + pm_runtime_allow(&pdev->dev); 1777 1807 1778 1808 hmm_init_mem_stat(repool_pgnr, dypool_enable, dypool_pgnr); 1779 1809 err = hmm_pool_register(repool_pgnr, HMM_POOL_TYPE_RESERVED); 1780 1810 if (err) { 1781 - dev_err(&dev->dev, "Failed to register reserved memory pool.\n"); 1811 + dev_err(&pdev->dev, "Failed to register reserved memory pool.\n"); 1782 1812 goto hmm_pool_fail; 1783 1813 } 1784 1814 1785 1815 /* Init ISP memory management */ 1786 1816 hmm_init(); 1787 1817 1788 - err = devm_request_threaded_irq(&dev->dev, dev->irq, 1818 + err = devm_request_threaded_irq(&pdev->dev, pdev->irq, 1789 1819 atomisp_isr, atomisp_isr_thread, 1790 1820 IRQF_SHARED, "isp_irq", isp); 1791 1821 if (err) { 1792 - dev_err(&dev->dev, "Failed to request irq (%d)\n", err); 1822 + dev_err(&pdev->dev, "Failed to request irq (%d)\n", err); 1793 1823 goto request_irq_fail; 1794 1824 } 1795 1825 ··· 1797 1827 if (!defer_fw_load) { 1798 1828 err = atomisp_css_load_firmware(isp); 1799 1829 if (err) { 1800 - dev_err(&dev->dev, "Failed to init css.\n"); 1830 + dev_err(&pdev->dev, "Failed to init css.\n"); 1801 1831 goto css_init_fail; 1802 1832 } 1803 1833 } else { 1804 - dev_dbg(&dev->dev, "Skip css init.\n"); 1834 + dev_dbg(&pdev->dev, "Skip css init.\n"); 1805 1835 } 1806 1836 /* Clear FW image from memory */ 1807 1837 release_firmware(isp->firmware); 1808 1838 isp->firmware = NULL; 1809 1839 isp->css_env.isp_css_fw.data = NULL; 1810 1840 1811 - atomisp_drvfs_init(&dev->driver->driver, isp); 1841 + atomisp_drvfs_init(isp); 1812 1842 1813 1843 return 0; 1814 1844 1815 1845 css_init_fail: 1816 - devm_free_irq(&dev->dev, dev->irq, isp); 1846 + devm_free_irq(&pdev->dev, pdev->irq, isp); 1817 1847 request_irq_fail: 1818 1848 hmm_cleanup(); 1819 1849 hmm_pool_unregister(HMM_POOL_TYPE_RESERVED); ··· 1826 1856 
atomisp_uninitialize_modules(isp); 1827 1857 initialize_modules_fail: 1828 1858 cpu_latency_qos_remove_request(&isp->pm_qos); 1829 - atomisp_msi_irq_uninit(isp, dev); 1830 - pci_disable_msi(dev); 1859 + atomisp_msi_irq_uninit(isp); 1860 + pci_disable_msi(pdev); 1831 1861 enable_msi_fail: 1832 1862 fw_validation_fail: 1833 1863 release_firmware(isp->firmware); ··· 1839 1869 * The following lines have been copied from atomisp suspend path 1840 1870 */ 1841 1871 1842 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 1872 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 1843 1873 irq = irq & 1 << INTR_IIR; 1844 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, irq); 1874 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, irq); 1845 1875 1846 - pci_read_config_dword(dev, PCI_INTERRUPT_CTRL, &irq); 1876 + pci_read_config_dword(pdev, PCI_INTERRUPT_CTRL, &irq); 1847 1877 irq &= ~(1 << INTR_IER); 1848 - pci_write_config_dword(dev, PCI_INTERRUPT_CTRL, irq); 1878 + pci_write_config_dword(pdev, PCI_INTERRUPT_CTRL, irq); 1849 1879 1850 - atomisp_msi_irq_uninit(isp, dev); 1880 + atomisp_msi_irq_uninit(isp); 1851 1881 1852 1882 atomisp_ospm_dphy_down(isp); 1853 1883 1854 1884 /* Address later when we worry about the ...field chips */ 1855 1885 if (IS_ENABLED(CONFIG_PM) && atomisp_mrfld_power_down(isp)) 1856 - dev_err(&dev->dev, "Failed to switch off ISP\n"); 1886 + dev_err(&pdev->dev, "Failed to switch off ISP\n"); 1857 1887 1858 1888 atomisp_dev_alloc_fail: 1859 - pcim_iounmap_regions(dev, 1 << ATOM_ISP_PCI_BAR); 1889 + pcim_iounmap_regions(pdev, 1 << ATOM_ISP_PCI_BAR); 1860 1890 1861 1891 ioremap_fail: 1862 1892 return err; 1863 1893 } 1864 1894 1865 - static void atomisp_pci_remove(struct pci_dev *dev) 1895 + static void atomisp_pci_remove(struct pci_dev *pdev) 1866 1896 { 1867 - struct atomisp_device *isp = (struct atomisp_device *) 1868 - pci_get_drvdata(dev); 1897 + struct atomisp_device *isp = pci_get_drvdata(pdev); 1869 1898 1870 - dev_info(&dev->dev, 
"Removing atomisp driver\n"); 1899 + dev_info(&pdev->dev, "Removing atomisp driver\n"); 1871 1900 1872 1901 atomisp_drvfs_exit(); 1873 1902 ··· 1875 1906 ia_css_unload_firmware(); 1876 1907 hmm_cleanup(); 1877 1908 1878 - pm_runtime_forbid(&dev->dev); 1879 - pm_runtime_get_noresume(&dev->dev); 1909 + pm_runtime_forbid(&pdev->dev); 1910 + pm_runtime_get_noresume(&pdev->dev); 1880 1911 cpu_latency_qos_remove_request(&isp->pm_qos); 1881 1912 1882 - atomisp_msi_irq_uninit(isp, dev); 1913 + atomisp_msi_irq_uninit(isp); 1883 1914 atomisp_unregister_entities(isp); 1884 1915 1885 1916 destroy_workqueue(isp->wdt_work_queue);
+14 -14
drivers/staging/media/atomisp/pci/base/refcount/src/refcount.c
··· 48 48 return NULL; 49 49 if (!myrefcount.items) { 50 50 ia_css_debug_dtrace(IA_CSS_DEBUG_ERROR, 51 - "refcount_find_entry(): Ref count not initialized!\n"); 51 + "%s(): Ref count not initialized!\n", __func__); 52 52 return NULL; 53 53 } 54 54 ··· 73 73 74 74 if (size == 0) { 75 75 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 76 - "ia_css_refcount_init(): Size of 0 for Ref count init!\n"); 76 + "%s(): Size of 0 for Ref count init!\n", __func__); 77 77 return -EINVAL; 78 78 } 79 79 if (myrefcount.items) { 80 80 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 81 - "ia_css_refcount_init(): Ref count is already initialized\n"); 81 + "%s(): Ref count is already initialized\n", __func__); 82 82 return -EINVAL; 83 83 } 84 84 myrefcount.items = ··· 99 99 u32 i; 100 100 101 101 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 102 - "ia_css_refcount_uninit() entry\n"); 102 + "%s() entry\n", __func__); 103 103 for (i = 0; i < myrefcount.size; i++) { 104 104 /* driver verifier tool has issues with &arr[i] 105 105 and prefers arr + i; as these are actually equivalent ··· 120 120 myrefcount.items = NULL; 121 121 myrefcount.size = 0; 122 122 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 123 - "ia_css_refcount_uninit() leave\n"); 123 + "%s() leave\n", __func__); 124 124 } 125 125 126 126 ia_css_ptr ia_css_refcount_increment(s32 id, ia_css_ptr ptr) ··· 133 133 entry = refcount_find_entry(ptr, false); 134 134 135 135 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 136 - "ia_css_refcount_increment(%x) 0x%x\n", id, ptr); 136 + "%s(%x) 0x%x\n", __func__, id, ptr); 137 137 138 138 if (!entry) { 139 139 entry = refcount_find_entry(ptr, true); ··· 145 145 146 146 if (entry->id != id) { 147 147 ia_css_debug_dtrace(IA_CSS_DEBUG_ERROR, 148 - "ia_css_refcount_increment(): Ref count IDS do not match!\n"); 148 + "%s(): Ref count IDS do not match!\n", __func__); 149 149 return mmgr_NULL; 150 150 } 151 151 ··· 165 165 struct ia_css_refcount_entry *entry; 166 166 167 167 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 168 - 
"ia_css_refcount_decrement(%x) 0x%x\n", id, ptr); 168 + "%s(%x) 0x%x\n", __func__, id, ptr); 169 169 170 170 if (ptr == mmgr_NULL) 171 171 return false; ··· 175 175 if (entry) { 176 176 if (entry->id != id) { 177 177 ia_css_debug_dtrace(IA_CSS_DEBUG_ERROR, 178 - "ia_css_refcount_decrement(): Ref count IDS do not match!\n"); 178 + "%s(): Ref count IDS do not match!\n", __func__); 179 179 return false; 180 180 } 181 181 if (entry->count > 0) { ··· 225 225 u32 count = 0; 226 226 227 227 assert(clear_func_ptr); 228 - ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, "ia_css_refcount_clear(%x)\n", 229 - id); 228 + ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, "%s(%x)\n", 229 + __func__, id); 230 230 231 231 for (i = 0; i < myrefcount.size; i++) { 232 232 /* driver verifier tool has issues with &arr[i] ··· 236 236 entry = myrefcount.items + i; 237 237 if ((entry->data != mmgr_NULL) && (entry->id == id)) { 238 238 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 239 - "ia_css_refcount_clear: %x: 0x%x\n", 239 + "%s: %x: 0x%x\n", __func__, 240 240 id, entry->data); 241 241 if (clear_func_ptr) { 242 242 /* clear using provided function */ 243 243 clear_func_ptr(entry->data); 244 244 } else { 245 245 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 246 - "ia_css_refcount_clear: using hmm_free: no clear_func\n"); 246 + "%s: using hmm_free: no clear_func\n", __func__); 247 247 hmm_free(entry->data); 248 248 } 249 249 ··· 260 260 } 261 261 } 262 262 ia_css_debug_dtrace(IA_CSS_DEBUG_TRACE, 263 - "ia_css_refcount_clear(%x): cleared %d\n", id, 263 + "%s(%x): cleared %d\n", __func__, id, 264 264 count); 265 265 } 266 266
+3 -21
drivers/staging/media/atomisp/pci/hive_types.h
··· 52 52 typedef unsigned int hive_uint32; 53 53 typedef unsigned long long hive_uint64; 54 54 55 - /* by default assume 32 bit master port (both data and address) */ 56 - #ifndef HRT_DATA_WIDTH 57 - #define HRT_DATA_WIDTH 32 58 - #endif 59 - #ifndef HRT_ADDRESS_WIDTH 60 - #define HRT_ADDRESS_WIDTH 32 61 - #endif 62 - 55 + #define HRT_DATA_WIDTH 32 56 + #define HRT_ADDRESS_WIDTH 64 63 57 #define HRT_DATA_BYTES (HRT_DATA_WIDTH / 8) 64 58 #define HRT_ADDRESS_BYTES (HRT_ADDRESS_WIDTH / 8) 59 + #define SIZEOF_HRT_REG (HRT_DATA_WIDTH >> 3) 65 60 66 - #if HRT_DATA_WIDTH == 64 67 - typedef hive_uint64 hrt_data; 68 - #elif HRT_DATA_WIDTH == 32 69 61 typedef hive_uint32 hrt_data; 70 - #else 71 - #error data width not supported 72 - #endif 73 - 74 - #if HRT_ADDRESS_WIDTH == 64 75 62 typedef hive_uint64 hrt_address; 76 - #elif HRT_ADDRESS_WIDTH == 32 77 - typedef hive_uint32 hrt_address; 78 - #else 79 - #error adddres width not supported 80 - #endif 81 63 82 64 /* use 64 bit addresses in simulation, where possible */ 83 65 typedef hive_uint64 hive_sim_address;
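The hive_types.h change above drops the configurable-width `#if`/`#error` ladder and pins `HRT_DATA_WIDTH` to 32 and `HRT_ADDRESS_WIDTH` to 64 with direct typedefs. Once the widths are fixed like this, a compile-time check keeps the typedefs and the width macros from silently drifting apart; a sketch using C11 `_Static_assert` and `<stdint.h>` exact-width types (the kernel header itself uses its own `hive_uint*` typedefs):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the now-fixed widths from the patched header. */
#define HRT_DATA_WIDTH    32
#define HRT_ADDRESS_WIDTH 64
#define HRT_DATA_BYTES    (HRT_DATA_WIDTH / 8)
#define HRT_ADDRESS_BYTES (HRT_ADDRESS_WIDTH / 8)

typedef uint32_t hrt_data;
typedef uint64_t hrt_address;

/* Fail the build, not the runtime, if a typedef and its macro disagree. */
_Static_assert(sizeof(hrt_data) * 8 == HRT_DATA_WIDTH,
	       "hrt_data width mismatch");
_Static_assert(sizeof(hrt_address) * 8 == HRT_ADDRESS_WIDTH,
	       "hrt_address width mismatch");
```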
+5 -5
drivers/staging/media/atomisp/pci/hmm/hmm.c
··· 735 735 736 736 void hmm_show_mem_stat(const char *func, const int line) 737 737 { 738 - trace_printk("tol_cnt=%d usr_size=%d res_size=%d res_cnt=%d sys_size=%d dyc_thr=%d dyc_size=%d.\n", 739 - hmm_mem_stat.tol_cnt, 740 - hmm_mem_stat.usr_size, hmm_mem_stat.res_size, 741 - hmm_mem_stat.res_cnt, hmm_mem_stat.sys_size, 742 - hmm_mem_stat.dyc_thr, hmm_mem_stat.dyc_size); 738 + pr_info("tol_cnt=%d usr_size=%d res_size=%d res_cnt=%d sys_size=%d dyc_thr=%d dyc_size=%d.\n", 739 + hmm_mem_stat.tol_cnt, 740 + hmm_mem_stat.usr_size, hmm_mem_stat.res_size, 741 + hmm_mem_stat.res_cnt, hmm_mem_stat.sys_size, 742 + hmm_mem_stat.dyc_thr, hmm_mem_stat.dyc_size); 743 743 } 744 744 745 745 void hmm_init_mem_stat(int res_pgnr, int dyc_en, int dyc_pgnr)
-302
drivers/staging/media/atomisp/pci/isp2400_system_global.h
··· 13 13 * more details. 14 14 */ 15 15 16 - #ifndef __SYSTEM_GLOBAL_H_INCLUDED__ 17 - #define __SYSTEM_GLOBAL_H_INCLUDED__ 18 - 19 - #include <hive_isp_css_defs.h> 20 - #include <type_support.h> 21 - 22 - /* 23 - * The longest allowed (uninteruptible) bus transfer, does not 24 - * take stalling into account 25 - */ 26 - #define HIVE_ISP_MAX_BURST_LENGTH 1024 27 - 28 - /* 29 - * Maximum allowed burst length in words for the ISP DMA 30 - */ 31 - #define ISP_DMA_MAX_BURST_LENGTH 128 32 - 33 - /* 34 - * Create a list of HAS and IS properties that defines the system 35 - * 36 - * The configuration assumes the following 37 - * - The system is hetereogeneous; Multiple cells and devices classes 38 - * - The cell and device instances are homogeneous, each device type 39 - * belongs to the same class 40 - * - Device instances supporting a subset of the class capabilities are 41 - * allowed 42 - * 43 - * We could manage different device classes through the enumerated 44 - * lists (C) or the use of classes (C++), but that is presently not 45 - * fully supported 46 - * 47 - * N.B. the 3 input formatters are of 2 different classess 48 - */ 49 - 50 16 #define USE_INPUT_SYSTEM_VERSION_2 51 - 52 - #define HAS_MMU_VERSION_2 53 - #define HAS_DMA_VERSION_2 54 - #define HAS_GDC_VERSION_2 55 - #define HAS_VAMEM_VERSION_2 56 - #define HAS_HMEM_VERSION_1 57 - #define HAS_BAMEM_VERSION_2 58 - #define HAS_IRQ_VERSION_2 59 - #define HAS_IRQ_MAP_VERSION_2 60 - #define HAS_INPUT_FORMATTER_VERSION_2 61 - /* 2401: HAS_INPUT_SYSTEM_VERSION_2401 */ 62 - #define HAS_INPUT_SYSTEM_VERSION_2 63 - #define HAS_BUFFERED_SENSOR 64 - #define HAS_FIFO_MONITORS_VERSION_2 65 - /* #define HAS_GP_REGS_VERSION_2 */ 66 - #define HAS_GP_DEVICE_VERSION_2 67 - #define HAS_GPIO_VERSION_1 68 - #define HAS_TIMED_CTRL_VERSION_1 69 - #define HAS_RX_VERSION_2 70 - 71 - #define DMA_DDR_TO_VAMEM_WORKAROUND 72 - #define DMA_DDR_TO_HMEM_WORKAROUND 73 - 74 - /* 75 - * Semi global. 
"HRT" is accessible from SP, but the HRT types do not fully apply 76 - */ 77 - #define HRT_VADDRESS_WIDTH 32 78 - //#define HRT_ADDRESS_WIDTH 64 /* Surprise, this is a local property*/ 79 - #define HRT_DATA_WIDTH 32 80 - 81 - #define SIZEOF_HRT_REG (HRT_DATA_WIDTH >> 3) 82 - #define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8) 83 - 84 - /* The main bus connecting all devices */ 85 - #define HRT_BUS_WIDTH HIVE_ISP_CTRL_DATA_WIDTH 86 - #define HRT_BUS_BYTES HIVE_ISP_CTRL_DATA_BYTES 87 - 88 - /* per-frame parameter handling support */ 89 - #define SH_CSS_ENABLE_PER_FRAME_PARAMS 90 - 91 - typedef u32 hrt_bus_align_t; 92 - 93 - /* 94 - * Enumerate the devices, device access through the API is by ID, through the DLI by address 95 - * The enumerator terminators are used to size the wiring arrays and as an exception value. 96 - */ 97 - typedef enum { 98 - DDR0_ID = 0, 99 - N_DDR_ID 100 - } ddr_ID_t; 101 - 102 - typedef enum { 103 - ISP0_ID = 0, 104 - N_ISP_ID 105 - } isp_ID_t; 106 - 107 - typedef enum { 108 - SP0_ID = 0, 109 - N_SP_ID 110 - } sp_ID_t; 111 - 112 - typedef enum { 113 - MMU0_ID = 0, 114 - MMU1_ID, 115 - N_MMU_ID 116 - } mmu_ID_t; 117 - 118 - typedef enum { 119 - DMA0_ID = 0, 120 - N_DMA_ID 121 - } dma_ID_t; 122 - 123 - typedef enum { 124 - GDC0_ID = 0, 125 - GDC1_ID, 126 - N_GDC_ID 127 - } gdc_ID_t; 128 - 129 - #define N_GDC_ID_CPP 2 // this extra define is needed because we want to use it also in the preprocessor, and that doesn't work with enums. 
130 - 131 - typedef enum { 132 - VAMEM0_ID = 0, 133 - VAMEM1_ID, 134 - VAMEM2_ID, 135 - N_VAMEM_ID 136 - } vamem_ID_t; 137 - 138 - typedef enum { 139 - BAMEM0_ID = 0, 140 - N_BAMEM_ID 141 - } bamem_ID_t; 142 - 143 - typedef enum { 144 - HMEM0_ID = 0, 145 - N_HMEM_ID 146 - } hmem_ID_t; 147 - 148 - /* 149 - typedef enum { 150 - IRQ0_ID = 0, 151 - N_IRQ_ID 152 - } irq_ID_t; 153 - */ 154 - 155 - typedef enum { 156 - IRQ0_ID = 0, // GP IRQ block 157 - IRQ1_ID, // Input formatter 158 - IRQ2_ID, // input system 159 - IRQ3_ID, // input selector 160 - N_IRQ_ID 161 - } irq_ID_t; 162 - 163 - typedef enum { 164 - FIFO_MONITOR0_ID = 0, 165 - N_FIFO_MONITOR_ID 166 - } fifo_monitor_ID_t; 167 - 168 - /* 169 - * Deprecated: Since all gp_reg instances are different 170 - * and put in the address maps of other devices we cannot 171 - * enumerate them as that assumes the instrances are the 172 - * same. 173 - * 174 - * We define a single GP_DEVICE containing all gp_regs 175 - * w.r.t. a single base address 176 - * 177 - typedef enum { 178 - GP_REGS0_ID = 0, 179 - N_GP_REGS_ID 180 - } gp_regs_ID_t; 181 - */ 182 - typedef enum { 183 - GP_DEVICE0_ID = 0, 184 - N_GP_DEVICE_ID 185 - } gp_device_ID_t; 186 - 187 - typedef enum { 188 - GP_TIMER0_ID = 0, 189 - GP_TIMER1_ID, 190 - GP_TIMER2_ID, 191 - GP_TIMER3_ID, 192 - GP_TIMER4_ID, 193 - GP_TIMER5_ID, 194 - GP_TIMER6_ID, 195 - GP_TIMER7_ID, 196 - N_GP_TIMER_ID 197 - } gp_timer_ID_t; 198 - 199 - typedef enum { 200 - GPIO0_ID = 0, 201 - N_GPIO_ID 202 - } gpio_ID_t; 203 - 204 - typedef enum { 205 - TIMED_CTRL0_ID = 0, 206 - N_TIMED_CTRL_ID 207 - } timed_ctrl_ID_t; 208 - 209 - typedef enum { 210 - INPUT_FORMATTER0_ID = 0, 211 - INPUT_FORMATTER1_ID, 212 - INPUT_FORMATTER2_ID, 213 - INPUT_FORMATTER3_ID, 214 - N_INPUT_FORMATTER_ID 215 - } input_formatter_ID_t; 216 - 217 - /* The IF RST is outside the IF */ 218 - #define INPUT_FORMATTER0_SRST_OFFSET 0x0824 219 - #define INPUT_FORMATTER1_SRST_OFFSET 0x0624 220 - #define INPUT_FORMATTER2_SRST_OFFSET 
0x0424 221 - #define INPUT_FORMATTER3_SRST_OFFSET 0x0224 222 - 223 - #define INPUT_FORMATTER0_SRST_MASK 0x0001 224 - #define INPUT_FORMATTER1_SRST_MASK 0x0002 225 - #define INPUT_FORMATTER2_SRST_MASK 0x0004 226 - #define INPUT_FORMATTER3_SRST_MASK 0x0008 227 - 228 - typedef enum { 229 - INPUT_SYSTEM0_ID = 0, 230 - N_INPUT_SYSTEM_ID 231 - } input_system_ID_t; 232 - 233 - typedef enum { 234 - RX0_ID = 0, 235 - N_RX_ID 236 - } rx_ID_t; 237 - 238 - enum mipi_port_id { 239 - MIPI_PORT0_ID = 0, 240 - MIPI_PORT1_ID, 241 - MIPI_PORT2_ID, 242 - N_MIPI_PORT_ID 243 - }; 244 - 245 - #define N_RX_CHANNEL_ID 4 246 - 247 - /* Generic port enumeration with an internal port type ID */ 248 - typedef enum { 249 - CSI_PORT0_ID = 0, 250 - CSI_PORT1_ID, 251 - CSI_PORT2_ID, 252 - TPG_PORT0_ID, 253 - PRBS_PORT0_ID, 254 - FIFO_PORT0_ID, 255 - MEMORY_PORT0_ID, 256 - N_INPUT_PORT_ID 257 - } input_port_ID_t; 258 - 259 - typedef enum { 260 - CAPTURE_UNIT0_ID = 0, 261 - CAPTURE_UNIT1_ID, 262 - CAPTURE_UNIT2_ID, 263 - ACQUISITION_UNIT0_ID, 264 - DMA_UNIT0_ID, 265 - CTRL_UNIT0_ID, 266 - GPREGS_UNIT0_ID, 267 - FIFO_UNIT0_ID, 268 - IRQ_UNIT0_ID, 269 - N_SUB_SYSTEM_ID 270 - } sub_system_ID_t; 271 - 272 - #define N_CAPTURE_UNIT_ID 3 273 - #define N_ACQUISITION_UNIT_ID 1 274 - #define N_CTRL_UNIT_ID 1 275 - 276 - enum ia_css_isp_memories { 277 - IA_CSS_ISP_PMEM0 = 0, 278 - IA_CSS_ISP_DMEM0, 279 - IA_CSS_ISP_VMEM0, 280 - IA_CSS_ISP_VAMEM0, 281 - IA_CSS_ISP_VAMEM1, 282 - IA_CSS_ISP_VAMEM2, 283 - IA_CSS_ISP_HMEM0, 284 - IA_CSS_SP_DMEM0, 285 - IA_CSS_DDR, 286 - N_IA_CSS_MEMORIES 287 - }; 288 - 289 - #define IA_CSS_NUM_MEMORIES 9 290 - /* For driver compatibility */ 291 - #define N_IA_CSS_ISP_MEMORIES IA_CSS_NUM_MEMORIES 292 - #define IA_CSS_NUM_ISP_MEMORIES IA_CSS_NUM_MEMORIES 293 - 294 - #if 0 295 - typedef enum { 296 - dev_chn, /* device channels, external resource */ 297 - ext_mem, /* external memories */ 298 - int_mem, /* internal memories */ 299 - int_chn /* internal channels, user defined */ 300 - } 
resource_type_t; 301 - 302 - /* if this enum is extended with other memory resources, pls also extend the function resource_to_memptr() */ 303 - typedef enum { 304 - vied_nci_dev_chn_dma_ext0, 305 - int_mem_vmem0, 306 - int_mem_dmem0 307 - } resource_id_t; 308 - 309 - /* enum listing the different memories within a program group. 310 - This enum is used in the mem_ptr_t type */ 311 - typedef enum { 312 - buf_mem_invalid = 0, 313 - buf_mem_vmem_prog0, 314 - buf_mem_dmem_prog0 315 - } buf_mem_t; 316 - 317 - #endif 318 - #endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
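The removed header above keeps a curious pair: `N_GDC_ID` as an enum terminator and a parallel `#define N_GDC_ID_CPP 2`, with a comment noting that the define is needed "because we want to use it also in the preprocessor, and that doesn't work with enums." A minimal sketch of that pattern, not part of this patch (the `gdc_count()` helper is purely illustrative):

```c
#include <assert.h>

/* Mirrors the N_GDC_ID / N_GDC_ID_CPP pattern from system_global.h:
 * in an #if directive an unknown identifier (including an enumerator)
 * silently evaluates to 0, so a parallel #define is kept for the
 * preprocessor while the enum terminator sizes the wiring arrays. */
typedef enum {
	GDC0_ID = 0,
	GDC1_ID,
	N_GDC_ID		/* terminator: number of GDC instances */
} gdc_ID_t;

#define N_GDC_ID_CPP 2		/* usable in #if, unlike N_GDC_ID */

#if N_GDC_ID_CPP != 2		/* "#if N_GDC_ID != 2" would test 0 != 2 */
#error "GDC count mismatch"
#endif

/* One base address per GDC instance, sized by the enum terminator
 * (values taken from the ISP_2400 64-bit map in system_local.h). */
static const unsigned long GDC_BASE[N_GDC_ID] = {
	0x00050000UL,
	0x00060000UL,
};

static int gdc_count(void)
{
	return (int)N_GDC_ID;
}
```

The risk the pattern carries is exactly the one the comment hints at: the enum and the define must be kept in sync by hand whenever a GDC instance is added.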
-321
drivers/staging/media/atomisp/pci/isp2400_system_local.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Support for Intel Camera Imaging ISP subsystem. 4 - * Copyright (c) 2010-2015, Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - */ 15 - 16 - #ifndef __SYSTEM_LOCAL_H_INCLUDED__ 17 - #define __SYSTEM_LOCAL_H_INCLUDED__ 18 - 19 - #ifdef HRT_ISP_CSS_CUSTOM_HOST 20 - #ifndef HRT_USE_VIR_ADDRS 21 - #define HRT_USE_VIR_ADDRS 22 - #endif 23 - #endif 24 - 25 - #include "system_global.h" 26 - 27 - /* HRT assumes 32 by default (see Linux/include/hive_types.h), overrule it in case it is different */ 28 - #undef HRT_ADDRESS_WIDTH 29 - #define HRT_ADDRESS_WIDTH 64 /* Surprise, this is a local property */ 30 - 31 - /* This interface is deprecated */ 32 - #include "hive_types.h" 33 - 34 - /* 35 - * Cell specific address maps 36 - */ 37 - #if HRT_ADDRESS_WIDTH == 64 38 - 39 - #define GP_FIFO_BASE ((hrt_address)0x0000000000090104) /* This is NOT a base address */ 40 - 41 - /* DDR */ 42 - static const hrt_address DDR_BASE[N_DDR_ID] = { 43 - (hrt_address)0x0000000120000000ULL 44 - }; 45 - 46 - /* ISP */ 47 - static const hrt_address ISP_CTRL_BASE[N_ISP_ID] = { 48 - (hrt_address)0x0000000000020000ULL 49 - }; 50 - 51 - static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = { 52 - (hrt_address)0x0000000000200000ULL 53 - }; 54 - 55 - static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = { 56 - (hrt_address)0x0000000000100000ULL 57 - }; 58 - 59 - static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = { 60 - (hrt_address)0x00000000001C0000ULL, 61 - (hrt_address)0x00000000001D0000ULL, 62 - 
(hrt_address)0x00000000001E0000ULL 63 - }; 64 - 65 - static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = { 66 - (hrt_address)0x00000000001F0000ULL 67 - }; 68 - 69 - /* SP */ 70 - static const hrt_address SP_CTRL_BASE[N_SP_ID] = { 71 - (hrt_address)0x0000000000010000ULL 72 - }; 73 - 74 - static const hrt_address SP_DMEM_BASE[N_SP_ID] = { 75 - (hrt_address)0x0000000000300000ULL 76 - }; 77 - 78 - static const hrt_address SP_PMEM_BASE[N_SP_ID] = { 79 - (hrt_address)0x00000000000B0000ULL 80 - }; 81 - 82 - /* MMU */ 83 - /* 84 - * MMU0_ID: The data MMU 85 - * MMU1_ID: The icache MMU 86 - */ 87 - static const hrt_address MMU_BASE[N_MMU_ID] = { 88 - (hrt_address)0x0000000000070000ULL, 89 - (hrt_address)0x00000000000A0000ULL 90 - }; 91 - 92 - /* DMA */ 93 - static const hrt_address DMA_BASE[N_DMA_ID] = { 94 - (hrt_address)0x0000000000040000ULL 95 - }; 96 - 97 - /* IRQ */ 98 - static const hrt_address IRQ_BASE[N_IRQ_ID] = { 99 - (hrt_address)0x0000000000000500ULL, 100 - (hrt_address)0x0000000000030A00ULL, 101 - (hrt_address)0x000000000008C000ULL, 102 - (hrt_address)0x0000000000090200ULL 103 - }; 104 - 105 - /* 106 - (hrt_address)0x0000000000000500ULL}; 107 - */ 108 - 109 - /* GDC */ 110 - static const hrt_address GDC_BASE[N_GDC_ID] = { 111 - (hrt_address)0x0000000000050000ULL, 112 - (hrt_address)0x0000000000060000ULL 113 - }; 114 - 115 - /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 116 - static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = { 117 - (hrt_address)0x0000000000000000ULL 118 - }; 119 - 120 - /* 121 - static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = { 122 - (hrt_address)0x0000000000000000ULL}; 123 - 124 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 125 - (hrt_address)0x0000000000090000ULL}; 126 - */ 127 - 128 - /* GP_DEVICE (single base for all separate GP_REG instances) */ 129 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 130 - (hrt_address)0x0000000000000000ULL 131 - }; 132 - 133 - /*GP TIMER , all timer registers 
are inter-twined, 134 - * so, having multiple base addresses for 135 - * different timers does not help*/ 136 - static const hrt_address GP_TIMER_BASE = 137 - (hrt_address)0x0000000000000600ULL; 138 - /* GPIO */ 139 - static const hrt_address GPIO_BASE[N_GPIO_ID] = { 140 - (hrt_address)0x0000000000000400ULL 141 - }; 142 - 143 - /* TIMED_CTRL */ 144 - static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = { 145 - (hrt_address)0x0000000000000100ULL 146 - }; 147 - 148 - /* INPUT_FORMATTER */ 149 - static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = { 150 - (hrt_address)0x0000000000030000ULL, 151 - (hrt_address)0x0000000000030200ULL, 152 - (hrt_address)0x0000000000030400ULL, 153 - (hrt_address)0x0000000000030600ULL 154 - }; /* memcpy() */ 155 - 156 - /* INPUT_SYSTEM */ 157 - static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = { 158 - (hrt_address)0x0000000000080000ULL 159 - }; 160 - 161 - /* (hrt_address)0x0000000000081000ULL, */ /* capture A */ 162 - /* (hrt_address)0x0000000000082000ULL, */ /* capture B */ 163 - /* (hrt_address)0x0000000000083000ULL, */ /* capture C */ 164 - /* (hrt_address)0x0000000000084000ULL, */ /* Acquisition */ 165 - /* (hrt_address)0x0000000000085000ULL, */ /* DMA */ 166 - /* (hrt_address)0x0000000000089000ULL, */ /* ctrl */ 167 - /* (hrt_address)0x000000000008A000ULL, */ /* GP regs */ 168 - /* (hrt_address)0x000000000008B000ULL, */ /* FIFO */ 169 - /* (hrt_address)0x000000000008C000ULL, */ /* IRQ */ 170 - 171 - /* RX, the MIPI lane control regs start at offset 0 */ 172 - static const hrt_address RX_BASE[N_RX_ID] = { 173 - (hrt_address)0x0000000000080100ULL 174 - }; 175 - 176 - #elif HRT_ADDRESS_WIDTH == 32 177 - 178 - #define GP_FIFO_BASE ((hrt_address)0x00090104) /* This is NOT a base address */ 179 - 180 - /* DDR : Attention, this value not defined in 32-bit */ 181 - static const hrt_address DDR_BASE[N_DDR_ID] = { 182 - (hrt_address)0x00000000UL 183 - }; 184 - 185 - /* ISP */ 186 - static const hrt_address 
ISP_CTRL_BASE[N_ISP_ID] = { 187 - (hrt_address)0x00020000UL 188 - }; 189 - 190 - static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = { 191 - (hrt_address)0x00200000UL 192 - }; 193 - 194 - static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = { 195 - (hrt_address)0x100000UL 196 - }; 197 - 198 - static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = { 199 - (hrt_address)0xffffffffUL, 200 - (hrt_address)0xffffffffUL, 201 - (hrt_address)0xffffffffUL 202 - }; 203 - 204 - static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = { 205 - (hrt_address)0xffffffffUL 206 - }; 207 - 208 - /* SP */ 209 - static const hrt_address SP_CTRL_BASE[N_SP_ID] = { 210 - (hrt_address)0x00010000UL 211 - }; 212 - 213 - static const hrt_address SP_DMEM_BASE[N_SP_ID] = { 214 - (hrt_address)0x00300000UL 215 - }; 216 - 217 - static const hrt_address SP_PMEM_BASE[N_SP_ID] = { 218 - (hrt_address)0x000B0000UL 219 - }; 220 - 221 - /* MMU */ 222 - /* 223 - * MMU0_ID: The data MMU 224 - * MMU1_ID: The icache MMU 225 - */ 226 - static const hrt_address MMU_BASE[N_MMU_ID] = { 227 - (hrt_address)0x00070000UL, 228 - (hrt_address)0x000A0000UL 229 - }; 230 - 231 - /* DMA */ 232 - static const hrt_address DMA_BASE[N_DMA_ID] = { 233 - (hrt_address)0x00040000UL 234 - }; 235 - 236 - /* IRQ */ 237 - static const hrt_address IRQ_BASE[N_IRQ_ID] = { 238 - (hrt_address)0x00000500UL, 239 - (hrt_address)0x00030A00UL, 240 - (hrt_address)0x0008C000UL, 241 - (hrt_address)0x00090200UL 242 - }; 243 - 244 - /* 245 - (hrt_address)0x00000500UL}; 246 - */ 247 - 248 - /* GDC */ 249 - static const hrt_address GDC_BASE[N_GDC_ID] = { 250 - (hrt_address)0x00050000UL, 251 - (hrt_address)0x00060000UL 252 - }; 253 - 254 - /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 255 - static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = { 256 - (hrt_address)0x00000000UL 257 - }; 258 - 259 - /* 260 - static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = { 261 - (hrt_address)0x00000000UL}; 262 - 263 - static const hrt_address 
GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 264 - (hrt_address)0x00090000UL}; 265 - */ 266 - 267 - /* GP_DEVICE (single base for all separate GP_REG instances) */ 268 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 269 - (hrt_address)0x00000000UL 270 - }; 271 - 272 - /*GP TIMER , all timer registers are inter-twined, 273 - * so, having multiple base addresses for 274 - * different timers does not help*/ 275 - static const hrt_address GP_TIMER_BASE = 276 - (hrt_address)0x00000600UL; 277 - 278 - /* GPIO */ 279 - static const hrt_address GPIO_BASE[N_GPIO_ID] = { 280 - (hrt_address)0x00000400UL 281 - }; 282 - 283 - /* TIMED_CTRL */ 284 - static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = { 285 - (hrt_address)0x00000100UL 286 - }; 287 - 288 - /* INPUT_FORMATTER */ 289 - static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = { 290 - (hrt_address)0x00030000UL, 291 - (hrt_address)0x00030200UL, 292 - (hrt_address)0x00030400UL 293 - }; 294 - 295 - /* (hrt_address)0x00030600UL, */ /* memcpy() */ 296 - 297 - /* INPUT_SYSTEM */ 298 - static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = { 299 - (hrt_address)0x00080000UL 300 - }; 301 - 302 - /* (hrt_address)0x00081000UL, */ /* capture A */ 303 - /* (hrt_address)0x00082000UL, */ /* capture B */ 304 - /* (hrt_address)0x00083000UL, */ /* capture C */ 305 - /* (hrt_address)0x00084000UL, */ /* Acquisition */ 306 - /* (hrt_address)0x00085000UL, */ /* DMA */ 307 - /* (hrt_address)0x00089000UL, */ /* ctrl */ 308 - /* (hrt_address)0x0008A000UL, */ /* GP regs */ 309 - /* (hrt_address)0x0008B000UL, */ /* FIFO */ 310 - /* (hrt_address)0x0008C000UL, */ /* IRQ */ 311 - 312 - /* RX, the MIPI lane control regs start at offset 0 */ 313 - static const hrt_address RX_BASE[N_RX_ID] = { 314 - (hrt_address)0x00080100UL 315 - }; 316 - 317 - #else 318 - #error "system_local.h: HRT_ADDRESS_WIDTH must be one of {32,64}" 319 - #endif 320 - 321 - #endif /* __SYSTEM_LOCAL_H_INCLUDED__ */
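The `system_local.h` variants removed above select an entire register address map at compile time from `HRT_ADDRESS_WIDTH`, falling through to `#error` for any other width. A self-contained sketch of that selection mechanism, with `hrt_address` and the MMU bases modeled on the 2400 header (only the two-entry MMU map is reproduced here):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the system_local.h pattern: a locally (re)defined
 * HRT_ADDRESS_WIDTH picks between the 64-bit and 32-bit maps. */
#define HRT_ADDRESS_WIDTH 64

#if HRT_ADDRESS_WIDTH == 64
typedef uint64_t hrt_address;
static const hrt_address MMU_BASE[2] = {
	0x0000000000070000ULL,	/* MMU0_ID: the data MMU */
	0x00000000000A0000ULL,	/* MMU1_ID: the icache MMU */
};
#elif HRT_ADDRESS_WIDTH == 32
typedef uint32_t hrt_address;
static const hrt_address MMU_BASE[2] = {
	0x00070000UL,
	0x000A0000UL,
};
#else
#error "system_local.h: HRT_ADDRESS_WIDTH must be one of {32,64}"
#endif

static hrt_address mmu_base(int id)
{
	return MMU_BASE[id];
}
```

Note the header's own "Surprise, this is a local property" comment: the width is deliberately `#undef`'d and redefined per inclusion site, which is why both maps must stay complete and consistent.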
+2 -410
drivers/staging/media/atomisp/pci/isp2401_system_global.h
··· 13 13 * more details. 14 14 */ 15 15 16 - #ifndef __SYSTEM_GLOBAL_H_INCLUDED__ 17 - #define __SYSTEM_GLOBAL_H_INCLUDED__ 18 - 19 - #include <hive_isp_css_defs.h> 20 - #include <type_support.h> 21 - 22 - /* 23 - * The longest allowed (uninteruptible) bus transfer, does not 24 - * take stalling into account 25 - */ 26 - #define HIVE_ISP_MAX_BURST_LENGTH 1024 27 - 28 - /* 29 - * Maximum allowed burst length in words for the ISP DMA 30 - * This value is set to 2 to prevent the ISP DMA from blocking 31 - * the bus for too long; as the input system can only buffer 32 - * 2 lines on Moorefield and Cherrytrail, the input system buffers 33 - * may overflow if blocked for too long (BZ 2726). 34 - */ 35 - #define ISP_DMA_MAX_BURST_LENGTH 2 36 - 37 - /* 38 - * Create a list of HAS and IS properties that defines the system 39 - * 40 - * The configuration assumes the following 41 - * - The system is hetereogeneous; Multiple cells and devices classes 42 - * - The cell and device instances are homogeneous, each device type 43 - * belongs to the same class 44 - * - Device instances supporting a subset of the class capabilities are 45 - * allowed 46 - * 47 - * We could manage different device classes through the enumerated 48 - * lists (C) or the use of classes (C++), but that is presently not 49 - * fully supported 50 - * 51 - * N.B. 
the 3 input formatters are of 2 different classess 52 - */ 53 - 54 - #define USE_INPUT_SYSTEM_VERSION_2401 55 - 56 - #define HAS_MMU_VERSION_2 57 - #define HAS_DMA_VERSION_2 58 - #define HAS_GDC_VERSION_2 59 - #define HAS_VAMEM_VERSION_2 60 - #define HAS_HMEM_VERSION_1 61 - #define HAS_BAMEM_VERSION_2 62 - #define HAS_IRQ_VERSION_2 63 - #define HAS_IRQ_MAP_VERSION_2 64 - #define HAS_INPUT_FORMATTER_VERSION_2 65 - /* 2401: HAS_INPUT_SYSTEM_VERSION_3 */ 66 - /* 2400: HAS_INPUT_SYSTEM_VERSION_2 */ 67 - #define HAS_INPUT_SYSTEM_VERSION_2 68 - #define HAS_INPUT_SYSTEM_VERSION_2401 69 - #define HAS_BUFFERED_SENSOR 70 - #define HAS_FIFO_MONITORS_VERSION_2 71 - /* #define HAS_GP_REGS_VERSION_2 */ 72 - #define HAS_GP_DEVICE_VERSION_2 73 - #define HAS_GPIO_VERSION_1 74 - #define HAS_TIMED_CTRL_VERSION_1 75 - #define HAS_RX_VERSION_2 76 16 #define HAS_NO_INPUT_FORMATTER 77 - /*#define HAS_NO_PACKED_RAW_PIXELS*/ 78 - /*#define HAS_NO_DVS_6AXIS_CONFIG_UPDATE*/ 79 - 80 - #define DMA_DDR_TO_VAMEM_WORKAROUND 81 - #define DMA_DDR_TO_HMEM_WORKAROUND 82 - 83 - /* 84 - * Semi global. "HRT" is accessible from SP, but 85 - * the HRT types do not fully apply 86 - */ 87 - #define HRT_VADDRESS_WIDTH 32 88 - /* Surprise, this is a local property*/ 89 - /*#define HRT_ADDRESS_WIDTH 64 */ 90 - #define HRT_DATA_WIDTH 32 91 - 92 - #define SIZEOF_HRT_REG (HRT_DATA_WIDTH >> 3) 93 - #define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8) 94 - 95 - /* The main bus connecting all devices */ 96 - #define HRT_BUS_WIDTH HIVE_ISP_CTRL_DATA_WIDTH 97 - #define HRT_BUS_BYTES HIVE_ISP_CTRL_DATA_BYTES 98 - 17 + #define USE_INPUT_SYSTEM_VERSION_2401 18 + #define HAS_INPUT_SYSTEM_VERSION_2401 99 19 #define CSI2P_DISABLE_ISYS2401_ONLINE_MODE 100 - 101 - /* per-frame parameter handling support */ 102 - #define SH_CSS_ENABLE_PER_FRAME_PARAMS 103 - 104 - typedef u32 hrt_bus_align_t; 105 - 106 - /* 107 - * Enumerate the devices, device access through the API is by ID, 108 - * through the DLI by address. 
The enumerator terminators are used 109 - * to size the wiring arrays and as an exception value. 110 - */ 111 - typedef enum { 112 - DDR0_ID = 0, 113 - N_DDR_ID 114 - } ddr_ID_t; 115 - 116 - typedef enum { 117 - ISP0_ID = 0, 118 - N_ISP_ID 119 - } isp_ID_t; 120 - 121 - typedef enum { 122 - SP0_ID = 0, 123 - N_SP_ID 124 - } sp_ID_t; 125 - 126 - typedef enum { 127 - MMU0_ID = 0, 128 - MMU1_ID, 129 - N_MMU_ID 130 - } mmu_ID_t; 131 - 132 - typedef enum { 133 - DMA0_ID = 0, 134 - N_DMA_ID 135 - } dma_ID_t; 136 - 137 - typedef enum { 138 - GDC0_ID = 0, 139 - GDC1_ID, 140 - N_GDC_ID 141 - } gdc_ID_t; 142 - 143 - /* this extra define is needed because we want to use it also 144 - in the preprocessor, and that doesn't work with enums. 145 - */ 146 - #define N_GDC_ID_CPP 2 147 - 148 - typedef enum { 149 - VAMEM0_ID = 0, 150 - VAMEM1_ID, 151 - VAMEM2_ID, 152 - N_VAMEM_ID 153 - } vamem_ID_t; 154 - 155 - typedef enum { 156 - BAMEM0_ID = 0, 157 - N_BAMEM_ID 158 - } bamem_ID_t; 159 - 160 - typedef enum { 161 - HMEM0_ID = 0, 162 - N_HMEM_ID 163 - } hmem_ID_t; 164 - 165 - typedef enum { 166 - ISYS_IRQ0_ID = 0, /* port a */ 167 - ISYS_IRQ1_ID, /* port b */ 168 - ISYS_IRQ2_ID, /* port c */ 169 - N_ISYS_IRQ_ID 170 - } isys_irq_ID_t; 171 - 172 - typedef enum { 173 - IRQ0_ID = 0, /* GP IRQ block */ 174 - IRQ1_ID, /* Input formatter */ 175 - IRQ2_ID, /* input system */ 176 - IRQ3_ID, /* input selector */ 177 - N_IRQ_ID 178 - } irq_ID_t; 179 - 180 - typedef enum { 181 - FIFO_MONITOR0_ID = 0, 182 - N_FIFO_MONITOR_ID 183 - } fifo_monitor_ID_t; 184 - 185 - /* 186 - * Deprecated: Since all gp_reg instances are different 187 - * and put in the address maps of other devices we cannot 188 - * enumerate them as that assumes the instrances are the 189 - * same. 190 - * 191 - * We define a single GP_DEVICE containing all gp_regs 192 - * w.r.t. 
a single base address 193 - * 194 - typedef enum { 195 - GP_REGS0_ID = 0, 196 - N_GP_REGS_ID 197 - } gp_regs_ID_t; 198 - */ 199 - typedef enum { 200 - GP_DEVICE0_ID = 0, 201 - N_GP_DEVICE_ID 202 - } gp_device_ID_t; 203 - 204 - typedef enum { 205 - GP_TIMER0_ID = 0, 206 - GP_TIMER1_ID, 207 - GP_TIMER2_ID, 208 - GP_TIMER3_ID, 209 - GP_TIMER4_ID, 210 - GP_TIMER5_ID, 211 - GP_TIMER6_ID, 212 - GP_TIMER7_ID, 213 - N_GP_TIMER_ID 214 - } gp_timer_ID_t; 215 - 216 - typedef enum { 217 - GPIO0_ID = 0, 218 - N_GPIO_ID 219 - } gpio_ID_t; 220 - 221 - typedef enum { 222 - TIMED_CTRL0_ID = 0, 223 - N_TIMED_CTRL_ID 224 - } timed_ctrl_ID_t; 225 - 226 - typedef enum { 227 - INPUT_FORMATTER0_ID = 0, 228 - INPUT_FORMATTER1_ID, 229 - INPUT_FORMATTER2_ID, 230 - INPUT_FORMATTER3_ID, 231 - N_INPUT_FORMATTER_ID 232 - } input_formatter_ID_t; 233 - 234 - /* The IF RST is outside the IF */ 235 - #define INPUT_FORMATTER0_SRST_OFFSET 0x0824 236 - #define INPUT_FORMATTER1_SRST_OFFSET 0x0624 237 - #define INPUT_FORMATTER2_SRST_OFFSET 0x0424 238 - #define INPUT_FORMATTER3_SRST_OFFSET 0x0224 239 - 240 - #define INPUT_FORMATTER0_SRST_MASK 0x0001 241 - #define INPUT_FORMATTER1_SRST_MASK 0x0002 242 - #define INPUT_FORMATTER2_SRST_MASK 0x0004 243 - #define INPUT_FORMATTER3_SRST_MASK 0x0008 244 - 245 - typedef enum { 246 - INPUT_SYSTEM0_ID = 0, 247 - N_INPUT_SYSTEM_ID 248 - } input_system_ID_t; 249 - 250 - typedef enum { 251 - RX0_ID = 0, 252 - N_RX_ID 253 - } rx_ID_t; 254 - 255 - enum mipi_port_id { 256 - MIPI_PORT0_ID = 0, 257 - MIPI_PORT1_ID, 258 - MIPI_PORT2_ID, 259 - N_MIPI_PORT_ID 260 - }; 261 - 262 - #define N_RX_CHANNEL_ID 4 263 - 264 - /* Generic port enumeration with an internal port type ID */ 265 - typedef enum { 266 - CSI_PORT0_ID = 0, 267 - CSI_PORT1_ID, 268 - CSI_PORT2_ID, 269 - TPG_PORT0_ID, 270 - PRBS_PORT0_ID, 271 - FIFO_PORT0_ID, 272 - MEMORY_PORT0_ID, 273 - N_INPUT_PORT_ID 274 - } input_port_ID_t; 275 - 276 - typedef enum { 277 - CAPTURE_UNIT0_ID = 0, 278 - CAPTURE_UNIT1_ID, 279 - 
CAPTURE_UNIT2_ID, 280 - ACQUISITION_UNIT0_ID, 281 - DMA_UNIT0_ID, 282 - CTRL_UNIT0_ID, 283 - GPREGS_UNIT0_ID, 284 - FIFO_UNIT0_ID, 285 - IRQ_UNIT0_ID, 286 - N_SUB_SYSTEM_ID 287 - } sub_system_ID_t; 288 - 289 - #define N_CAPTURE_UNIT_ID 3 290 - #define N_ACQUISITION_UNIT_ID 1 291 - #define N_CTRL_UNIT_ID 1 292 - 293 - /* 294 - * Input-buffer Controller. 295 - */ 296 - typedef enum { 297 - IBUF_CTRL0_ID = 0, /* map to ISYS2401_IBUF_CNTRL_A */ 298 - IBUF_CTRL1_ID, /* map to ISYS2401_IBUF_CNTRL_B */ 299 - IBUF_CTRL2_ID, /* map ISYS2401_IBUF_CNTRL_C */ 300 - N_IBUF_CTRL_ID 301 - } ibuf_ctrl_ID_t; 302 - /* end of Input-buffer Controller */ 303 - 304 - /* 305 - * Stream2MMIO. 306 - */ 307 - typedef enum { 308 - STREAM2MMIO0_ID = 0, /* map to ISYS2401_S2M_A */ 309 - STREAM2MMIO1_ID, /* map to ISYS2401_S2M_B */ 310 - STREAM2MMIO2_ID, /* map to ISYS2401_S2M_C */ 311 - N_STREAM2MMIO_ID 312 - } stream2mmio_ID_t; 313 - 314 - typedef enum { 315 - /* 316 - * Stream2MMIO 0 has 8 SIDs that are indexed by 317 - * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID7_ID]. 318 - * 319 - * Stream2MMIO 1 has 4 SIDs that are indexed by 320 - * [STREAM2MMIO_SID0_ID...TREAM2MMIO_SID3_ID]. 321 - * 322 - * Stream2MMIO 2 has 4 SIDs that are indexed by 323 - * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID]. 324 - */ 325 - STREAM2MMIO_SID0_ID = 0, 326 - STREAM2MMIO_SID1_ID, 327 - STREAM2MMIO_SID2_ID, 328 - STREAM2MMIO_SID3_ID, 329 - STREAM2MMIO_SID4_ID, 330 - STREAM2MMIO_SID5_ID, 331 - STREAM2MMIO_SID6_ID, 332 - STREAM2MMIO_SID7_ID, 333 - N_STREAM2MMIO_SID_ID 334 - } stream2mmio_sid_ID_t; 335 - /* end of Stream2MMIO */ 336 - 337 - /** 338 - * Input System 2401: CSI-MIPI recevier. 
339 - */ 340 - typedef enum { 341 - CSI_RX_BACKEND0_ID = 0, /* map to ISYS2401_MIPI_BE_A */ 342 - CSI_RX_BACKEND1_ID, /* map to ISYS2401_MIPI_BE_B */ 343 - CSI_RX_BACKEND2_ID, /* map to ISYS2401_MIPI_BE_C */ 344 - N_CSI_RX_BACKEND_ID 345 - } csi_rx_backend_ID_t; 346 - 347 - typedef enum { 348 - CSI_RX_FRONTEND0_ID = 0, /* map to ISYS2401_CSI_RX_A */ 349 - CSI_RX_FRONTEND1_ID, /* map to ISYS2401_CSI_RX_B */ 350 - CSI_RX_FRONTEND2_ID, /* map to ISYS2401_CSI_RX_C */ 351 - #define N_CSI_RX_FRONTEND_ID (CSI_RX_FRONTEND2_ID + 1) 352 - } csi_rx_frontend_ID_t; 353 - 354 - typedef enum { 355 - CSI_RX_DLANE0_ID = 0, /* map to DLANE0 in CSI RX */ 356 - CSI_RX_DLANE1_ID, /* map to DLANE1 in CSI RX */ 357 - CSI_RX_DLANE2_ID, /* map to DLANE2 in CSI RX */ 358 - CSI_RX_DLANE3_ID, /* map to DLANE3 in CSI RX */ 359 - N_CSI_RX_DLANE_ID 360 - } csi_rx_fe_dlane_ID_t; 361 - /* end of CSI-MIPI receiver */ 362 - 363 - typedef enum { 364 - ISYS2401_DMA0_ID = 0, 365 - N_ISYS2401_DMA_ID 366 - } isys2401_dma_ID_t; 367 - 368 - /** 369 - * Pixel-generator. ("system_global.h") 370 - */ 371 - typedef enum { 372 - PIXELGEN0_ID = 0, 373 - PIXELGEN1_ID, 374 - PIXELGEN2_ID, 375 - N_PIXELGEN_ID 376 - } pixelgen_ID_t; 377 - /* end of pixel-generator. 
("system_global.h") */ 378 - 379 - typedef enum { 380 - INPUT_SYSTEM_CSI_PORT0_ID = 0, 381 - INPUT_SYSTEM_CSI_PORT1_ID, 382 - INPUT_SYSTEM_CSI_PORT2_ID, 383 - 384 - INPUT_SYSTEM_PIXELGEN_PORT0_ID, 385 - INPUT_SYSTEM_PIXELGEN_PORT1_ID, 386 - INPUT_SYSTEM_PIXELGEN_PORT2_ID, 387 - 388 - N_INPUT_SYSTEM_INPUT_PORT_ID 389 - } input_system_input_port_ID_t; 390 - 391 - #define N_INPUT_SYSTEM_CSI_PORT 3 392 - 393 - typedef enum { 394 - ISYS2401_DMA_CHANNEL_0 = 0, 395 - ISYS2401_DMA_CHANNEL_1, 396 - ISYS2401_DMA_CHANNEL_2, 397 - ISYS2401_DMA_CHANNEL_3, 398 - ISYS2401_DMA_CHANNEL_4, 399 - ISYS2401_DMA_CHANNEL_5, 400 - ISYS2401_DMA_CHANNEL_6, 401 - ISYS2401_DMA_CHANNEL_7, 402 - ISYS2401_DMA_CHANNEL_8, 403 - ISYS2401_DMA_CHANNEL_9, 404 - ISYS2401_DMA_CHANNEL_10, 405 - ISYS2401_DMA_CHANNEL_11, 406 - N_ISYS2401_DMA_CHANNEL 407 - } isys2401_dma_channel; 408 - 409 - enum ia_css_isp_memories { 410 - IA_CSS_ISP_PMEM0 = 0, 411 - IA_CSS_ISP_DMEM0, 412 - IA_CSS_ISP_VMEM0, 413 - IA_CSS_ISP_VAMEM0, 414 - IA_CSS_ISP_VAMEM1, 415 - IA_CSS_ISP_VAMEM2, 416 - IA_CSS_ISP_HMEM0, 417 - IA_CSS_SP_DMEM0, 418 - IA_CSS_DDR, 419 - N_IA_CSS_MEMORIES 420 - }; 421 - 422 - #define IA_CSS_NUM_MEMORIES 9 423 - /* For driver compatibility */ 424 - #define N_IA_CSS_ISP_MEMORIES IA_CSS_NUM_MEMORIES 425 - #define IA_CSS_NUM_ISP_MEMORIES IA_CSS_NUM_MEMORIES 426 - 427 - #endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
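One detail in the removed 2401 header differs from the other enums: `csi_rx_frontend_ID_t` has no `N_*` terminator enumerator; instead `N_CSI_RX_FRONTEND_ID` is a `#define` placed inside the enum body, derived from the last enumerator. A sketch of that trick (base addresses copied from the 2401 32-bit-style values in this diff; the array is illustrative):

```c
#include <assert.h>

/* The count is a #define derived from the last enumerator rather than
 * an extra terminator enumerator, so it tracks the enum automatically
 * without enlarging the type's value range. (It still expands to an
 * enumerator, so it remains a compile-time constant usable as an
 * array bound, though not meaningfully testable in #if.) */
typedef enum {
	CSI_RX_FRONTEND0_ID = 0,	/* map to ISYS2401_CSI_RX_A */
	CSI_RX_FRONTEND1_ID,		/* map to ISYS2401_CSI_RX_B */
	CSI_RX_FRONTEND2_ID,		/* map to ISYS2401_CSI_RX_C */
#define N_CSI_RX_FRONTEND_ID (CSI_RX_FRONTEND2_ID + 1)
} csi_rx_frontend_ID_t;

/* CSI FE, part of the Input System 2401 */
static const unsigned long CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = {
	0x000C0400UL,	/* csi fe controller A */
	0x000C2400UL,	/* csi fe controller B */
	0x000C4400UL,	/* csi fe controller C */
};
```

A preprocessor directive is legal inside an enum's braces as long as it sits on its own line, which is what makes this (somewhat unusual) arrangement compile.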
-402
drivers/staging/media/atomisp/pci/isp2401_system_local.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Support for Intel Camera Imaging ISP subsystem. 4 - * Copyright (c) 2015, Intel Corporation. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - */ 15 - 16 - #ifndef __SYSTEM_LOCAL_H_INCLUDED__ 17 - #define __SYSTEM_LOCAL_H_INCLUDED__ 18 - 19 - #ifdef HRT_ISP_CSS_CUSTOM_HOST 20 - #ifndef HRT_USE_VIR_ADDRS 21 - #define HRT_USE_VIR_ADDRS 22 - #endif 23 - #endif 24 - 25 - #include "system_global.h" 26 - 27 - #define HRT_ADDRESS_WIDTH 64 /* Surprise, this is a local property */ 28 - 29 - /* This interface is deprecated */ 30 - #include "hive_types.h" 31 - 32 - /* 33 - * Cell specific address maps 34 - */ 35 - #if HRT_ADDRESS_WIDTH == 64 36 - 37 - #define GP_FIFO_BASE ((hrt_address)0x0000000000090104) /* This is NOT a base address */ 38 - 39 - /* DDR */ 40 - static const hrt_address DDR_BASE[N_DDR_ID] = { 41 - 0x0000000120000000ULL 42 - }; 43 - 44 - /* ISP */ 45 - static const hrt_address ISP_CTRL_BASE[N_ISP_ID] = { 46 - 0x0000000000020000ULL 47 - }; 48 - 49 - static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = { 50 - 0x0000000000200000ULL 51 - }; 52 - 53 - static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = { 54 - 0x0000000000100000ULL 55 - }; 56 - 57 - static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = { 58 - 0x00000000001C0000ULL, 59 - 0x00000000001D0000ULL, 60 - 0x00000000001E0000ULL 61 - }; 62 - 63 - static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = { 64 - 0x00000000001F0000ULL 65 - }; 66 - 67 - /* SP */ 68 - static const hrt_address SP_CTRL_BASE[N_SP_ID] = { 69 - 
0x0000000000010000ULL 70 - }; 71 - 72 - static const hrt_address SP_DMEM_BASE[N_SP_ID] = { 73 - 0x0000000000300000ULL 74 - }; 75 - 76 - /* MMU */ 77 - /* 78 - * MMU0_ID: The data MMU 79 - * MMU1_ID: The icache MMU 80 - */ 81 - static const hrt_address MMU_BASE[N_MMU_ID] = { 82 - 0x0000000000070000ULL, 83 - 0x00000000000A0000ULL 84 - }; 85 - 86 - /* DMA */ 87 - static const hrt_address DMA_BASE[N_DMA_ID] = { 88 - 0x0000000000040000ULL 89 - }; 90 - 91 - static const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = { 92 - 0x00000000000CA000ULL 93 - }; 94 - 95 - /* IRQ */ 96 - static const hrt_address IRQ_BASE[N_IRQ_ID] = { 97 - 0x0000000000000500ULL, 98 - 0x0000000000030A00ULL, 99 - 0x000000000008C000ULL, 100 - 0x0000000000090200ULL 101 - }; 102 - 103 - /* 104 - 0x0000000000000500ULL}; 105 - */ 106 - 107 - /* GDC */ 108 - static const hrt_address GDC_BASE[N_GDC_ID] = { 109 - 0x0000000000050000ULL, 110 - 0x0000000000060000ULL 111 - }; 112 - 113 - /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 114 - static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = { 115 - 0x0000000000000000ULL 116 - }; 117 - 118 - /* 119 - static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = { 120 - 0x0000000000000000ULL}; 121 - 122 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 123 - 0x0000000000090000ULL}; 124 - */ 125 - 126 - /* GP_DEVICE (single base for all separate GP_REG instances) */ 127 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 128 - 0x0000000000000000ULL 129 - }; 130 - 131 - /*GP TIMER , all timer registers are inter-twined, 132 - * so, having multiple base addresses for 133 - * different timers does not help*/ 134 - static const hrt_address GP_TIMER_BASE = 135 - (hrt_address)0x0000000000000600ULL; 136 - 137 - /* GPIO */ 138 - static const hrt_address GPIO_BASE[N_GPIO_ID] = { 139 - 0x0000000000000400ULL 140 - }; 141 - 142 - /* TIMED_CTRL */ 143 - static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = { 144 - 0x0000000000000100ULL 145 - 
}; 146 - 147 - /* INPUT_FORMATTER */ 148 - static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = { 149 - 0x0000000000030000ULL, 150 - 0x0000000000030200ULL, 151 - 0x0000000000030400ULL, 152 - 0x0000000000030600ULL 153 - }; /* memcpy() */ 154 - 155 - /* INPUT_SYSTEM */ 156 - static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = { 157 - 0x0000000000080000ULL 158 - }; 159 - 160 - /* 0x0000000000081000ULL, */ /* capture A */ 161 - /* 0x0000000000082000ULL, */ /* capture B */ 162 - /* 0x0000000000083000ULL, */ /* capture C */ 163 - /* 0x0000000000084000ULL, */ /* Acquisition */ 164 - /* 0x0000000000085000ULL, */ /* DMA */ 165 - /* 0x0000000000089000ULL, */ /* ctrl */ 166 - /* 0x000000000008A000ULL, */ /* GP regs */ 167 - /* 0x000000000008B000ULL, */ /* FIFO */ 168 - /* 0x000000000008C000ULL, */ /* IRQ */ 169 - 170 - /* RX, the MIPI lane control regs start at offset 0 */ 171 - static const hrt_address RX_BASE[N_RX_ID] = { 172 - 0x0000000000080100ULL 173 - }; 174 - 175 - /* IBUF_CTRL, part of the Input System 2401 */ 176 - static const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = { 177 - 0x00000000000C1800ULL, /* ibuf controller A */ 178 - 0x00000000000C3800ULL, /* ibuf controller B */ 179 - 0x00000000000C5800ULL /* ibuf controller C */ 180 - }; 181 - 182 - /* ISYS IRQ Controllers, part of the Input System 2401 */ 183 - static const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = { 184 - 0x00000000000C1400ULL, /* port a */ 185 - 0x00000000000C3400ULL, /* port b */ 186 - 0x00000000000C5400ULL /* port c */ 187 - }; 188 - 189 - /* CSI FE, part of the Input System 2401 */ 190 - static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = { 191 - 0x00000000000C0400ULL, /* csi fe controller A */ 192 - 0x00000000000C2400ULL, /* csi fe controller B */ 193 - 0x00000000000C4400ULL /* csi fe controller C */ 194 - }; 195 - 196 - /* CSI BE, part of the Input System 2401 */ 197 - static const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = { 198 - 
0x00000000000C0800ULL, /* csi be controller A */ 199 - 0x00000000000C2800ULL, /* csi be controller B */ 200 - 0x00000000000C4800ULL /* csi be controller C */ 201 - }; 202 - 203 - /* PIXEL Generator, part of the Input System 2401 */ 204 - static const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = { 205 - 0x00000000000C1000ULL, /* pixel gen controller A */ 206 - 0x00000000000C3000ULL, /* pixel gen controller B */ 207 - 0x00000000000C5000ULL /* pixel gen controller C */ 208 - }; 209 - 210 - /* Stream2MMIO, part of the Input System 2401 */ 211 - static const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = { 212 - 0x00000000000C0C00ULL, /* stream2mmio controller A */ 213 - 0x00000000000C2C00ULL, /* stream2mmio controller B */ 214 - 0x00000000000C4C00ULL /* stream2mmio controller C */ 215 - }; 216 - #elif HRT_ADDRESS_WIDTH == 32 217 - 218 - #define GP_FIFO_BASE ((hrt_address)0x00090104) /* This is NOT a base address */ 219 - 220 - /* DDR : Attention, this value not defined in 32-bit */ 221 - static const hrt_address DDR_BASE[N_DDR_ID] = { 222 - 0x00000000UL 223 - }; 224 - 225 - /* ISP */ 226 - static const hrt_address ISP_CTRL_BASE[N_ISP_ID] = { 227 - 0x00020000UL 228 - }; 229 - 230 - static const hrt_address ISP_DMEM_BASE[N_ISP_ID] = { 231 - 0xffffffffUL 232 - }; 233 - 234 - static const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = { 235 - 0xffffffffUL 236 - }; 237 - 238 - static const hrt_address ISP_VAMEM_BASE[N_VAMEM_ID] = { 239 - 0xffffffffUL, 240 - 0xffffffffUL, 241 - 0xffffffffUL 242 - }; 243 - 244 - static const hrt_address ISP_HMEM_BASE[N_HMEM_ID] = { 245 - 0xffffffffUL 246 - }; 247 - 248 - /* SP */ 249 - static const hrt_address SP_CTRL_BASE[N_SP_ID] = { 250 - 0x00010000UL 251 - }; 252 - 253 - static const hrt_address SP_DMEM_BASE[N_SP_ID] = { 254 - 0x00300000UL 255 - }; 256 - 257 - /* MMU */ 258 - /* 259 - * MMU0_ID: The data MMU 260 - * MMU1_ID: The icache MMU 261 - */ 262 - static const hrt_address MMU_BASE[N_MMU_ID] = { 263 - 0x00070000UL, 264 - 
0x000A0000UL 265 - }; 266 - 267 - /* DMA */ 268 - static const hrt_address DMA_BASE[N_DMA_ID] = { 269 - 0x00040000UL 270 - }; 271 - 272 - static const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = { 273 - 0x000CA000UL 274 - }; 275 - 276 - /* IRQ */ 277 - static const hrt_address IRQ_BASE[N_IRQ_ID] = { 278 - 0x00000500UL, 279 - 0x00030A00UL, 280 - 0x0008C000UL, 281 - 0x00090200UL 282 - }; 283 - 284 - /* 285 - 0x00000500UL}; 286 - */ 287 - 288 - /* GDC */ 289 - static const hrt_address GDC_BASE[N_GDC_ID] = { 290 - 0x00050000UL, 291 - 0x00060000UL 292 - }; 293 - 294 - /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 295 - static const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = { 296 - 0x00000000UL 297 - }; 298 - 299 - /* 300 - static const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = { 301 - 0x00000000UL}; 302 - 303 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 304 - 0x00090000UL}; 305 - */ 306 - 307 - /* GP_DEVICE (single base for all separate GP_REG instances) */ 308 - static const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 309 - 0x00000000UL 310 - }; 311 - 312 - /*GP TIMER , all timer registers are inter-twined, 313 - * so, having multiple base addresses for 314 - * different timers does not help*/ 315 - static const hrt_address GP_TIMER_BASE = 316 - (hrt_address)0x00000600UL; 317 - /* GPIO */ 318 - static const hrt_address GPIO_BASE[N_GPIO_ID] = { 319 - 0x00000400UL 320 - }; 321 - 322 - /* TIMED_CTRL */ 323 - static const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = { 324 - 0x00000100UL 325 - }; 326 - 327 - /* INPUT_FORMATTER */ 328 - static const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = { 329 - 0x00030000UL, 330 - 0x00030200UL, 331 - 0x00030400UL 332 - }; 333 - 334 - /* 0x00030600UL, */ /* memcpy() */ 335 - 336 - /* INPUT_SYSTEM */ 337 - static const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = { 338 - 0x00080000UL 339 - }; 340 - 341 - /* 0x00081000UL, */ /* capture A */ 342 - /* 0x00082000UL, */ /* capture B */ 
343 - /* 0x00083000UL, */ /* capture C */ 344 - /* 0x00084000UL, */ /* Acquisition */ 345 - /* 0x00085000UL, */ /* DMA */ 346 - /* 0x00089000UL, */ /* ctrl */ 347 - /* 0x0008A000UL, */ /* GP regs */ 348 - /* 0x0008B000UL, */ /* FIFO */ 349 - /* 0x0008C000UL, */ /* IRQ */ 350 - 351 - /* RX, the MIPI lane control regs start at offset 0 */ 352 - static const hrt_address RX_BASE[N_RX_ID] = { 353 - 0x00080100UL 354 - }; 355 - 356 - /* IBUF_CTRL, part of the Input System 2401 */ 357 - static const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = { 358 - 0x000C1800UL, /* ibuf controller A */ 359 - 0x000C3800UL, /* ibuf controller B */ 360 - 0x000C5800UL /* ibuf controller C */ 361 - }; 362 - 363 - /* ISYS IRQ Controllers, part of the Input System 2401 */ 364 - static const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = { 365 - 0x000C1400ULL, /* port a */ 366 - 0x000C3400ULL, /* port b */ 367 - 0x000C5400ULL /* port c */ 368 - }; 369 - 370 - /* CSI FE, part of the Input System 2401 */ 371 - static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = { 372 - 0x000C0400UL, /* csi fe controller A */ 373 - 0x000C2400UL, /* csi fe controller B */ 374 - 0x000C4400UL /* csi fe controller C */ 375 - }; 376 - 377 - /* CSI BE, part of the Input System 2401 */ 378 - static const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = { 379 - 0x000C0800UL, /* csi be controller A */ 380 - 0x000C2800UL, /* csi be controller B */ 381 - 0x000C4800UL /* csi be controller C */ 382 - }; 383 - 384 - /* PIXEL Generator, part of the Input System 2401 */ 385 - static const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = { 386 - 0x000C1000UL, /* pixel gen controller A */ 387 - 0x000C3000UL, /* pixel gen controller B */ 388 - 0x000C5000UL /* pixel gen controller C */ 389 - }; 390 - 391 - /* Stream2MMIO, part of the Input System 2401 */ 392 - static const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = { 393 - 0x000C0C00UL, /* stream2mmio controller A */ 394 - 0x000C2C00UL, /* stream2mmio 
controller B */ 395 - 0x000C4C00UL /* stream2mmio controller C */ 396 - }; 397 - 398 - #else 399 - #error "system_local.h: HRT_ADDRESS_WIDTH must be one of {32,64}" 400 - #endif 401 - 402 - #endif /* __SYSTEM_LOCAL_H_INCLUDED__ */
+7 -2
drivers/staging/media/atomisp/pci/sh_css.c
··· 1841 1841 #endif 1842 1842 1843 1843 #if !defined(HAS_NO_INPUT_SYSTEM) 1844 - dma_set_max_burst_size(DMA0_ID, HIVE_DMA_BUS_DDR_CONN, 1845 - ISP_DMA_MAX_BURST_LENGTH); 1844 + 1845 + if (!IS_ISP2401) 1846 + dma_set_max_burst_size(DMA0_ID, HIVE_DMA_BUS_DDR_CONN, 1847 + ISP2400_DMA_MAX_BURST_LENGTH); 1848 + else 1849 + dma_set_max_burst_size(DMA0_ID, HIVE_DMA_BUS_DDR_CONN, 1850 + ISP2401_DMA_MAX_BURST_LENGTH); 1846 1851 1847 1852 if (ia_css_isys_init() != INPUT_SYSTEM_ERR_NO_ERROR) 1848 1853 err = -EINVAL;
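The branch added above picks one of two DMA burst-length caps at runtime. A minimal standalone sketch of that selection (the constants mirror the ISP2400/ISP2401 values introduced in system_global.h by this same series; the helper name is illustrative, not a kernel API):

```c
#include <assert.h>

/* Burst-length caps as introduced in system_global.h by this series:
 * ISP2401 is limited to 2 words because its input system can only
 * buffer 2 lines; ISP2400 tolerates much longer bursts. */
#define ISP2400_DMA_MAX_BURST_LENGTH 128
#define ISP2401_DMA_MAX_BURST_LENGTH 2

/* Illustrative helper mirroring the IS_ISP2401 branch in sh_css.c. */
unsigned int dma_max_burst_length(int is_isp2401)
{
	return is_isp2401 ? ISP2401_DMA_MAX_BURST_LENGTH
			  : ISP2400_DMA_MAX_BURST_LENGTH;
}
```

Keeping both limits as named constants and branching once at init leaves the ISP2400 behaviour unchanged while honouring the 2-line input-system buffer on ISP2401.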
+395
drivers/staging/media/atomisp/pci/system_global.h
··· 4 4 * (c) 2020 Mauro Carvalho Chehab <mchehab+huawei@kernel.org> 5 5 */ 6 6 7 + #ifndef __SYSTEM_GLOBAL_H_INCLUDED__ 8 + #define __SYSTEM_GLOBAL_H_INCLUDED__ 9 + 10 + /* 11 + * Create a list of HAS and IS properties that defines the system 12 + * These are common for both ISP2400 and ISP2401 13 + * 14 + * The configuration assumes the following 15 + * - The system is heterogeneous; multiple cells and device classes 16 + * - The cell and device instances are homogeneous, each device type 17 + * belongs to the same class 18 + * - Device instances supporting a subset of the class capabilities are 19 + * allowed 20 + * 21 + * We could manage different device classes through the enumerated 22 + * lists (C) or the use of classes (C++), but that is presently not 23 + * fully supported 24 + * 25 + * N.B. the 3 input formatters are of 2 different classes 26 + */ 27 + 28 + #define HAS_MMU_VERSION_2 29 + #define HAS_DMA_VERSION_2 30 + #define HAS_GDC_VERSION_2 31 + #define HAS_VAMEM_VERSION_2 32 + #define HAS_HMEM_VERSION_1 33 + #define HAS_BAMEM_VERSION_2 34 + #define HAS_IRQ_VERSION_2 35 + #define HAS_IRQ_MAP_VERSION_2 36 + #define HAS_INPUT_FORMATTER_VERSION_2 37 + #define HAS_INPUT_SYSTEM_VERSION_2 38 + #define HAS_BUFFERED_SENSOR 39 + #define HAS_FIFO_MONITORS_VERSION_2 40 + #define HAS_GP_DEVICE_VERSION_2 41 + #define HAS_GPIO_VERSION_1 42 + #define HAS_TIMED_CTRL_VERSION_1 43 + #define HAS_RX_VERSION_2 44 + 45 + /* per-frame parameter handling support */ 46 + #define SH_CSS_ENABLE_PER_FRAME_PARAMS 47 + 48 + #define DMA_DDR_TO_VAMEM_WORKAROUND 49 + #define DMA_DDR_TO_HMEM_WORKAROUND 50 + 51 + /* 52 + * The longest allowed (uninterruptible) bus transfer; it does not 53 + * take stalling into account 54 + */ 55 + #define HIVE_ISP_MAX_BURST_LENGTH 1024 56 + 57 + /* 58 + * Maximum allowed burst length in words for the ISP DMA 59 + * This value is set to 2 to prevent the ISP DMA from blocking 60 + * the bus for too long; as the input system can only buffer 61 + * 2 lines 
on Moorefield and Cherrytrail, the input system buffers 62 + * may overflow if blocked for too long (BZ 2726). 63 + */ 64 + #define ISP2400_DMA_MAX_BURST_LENGTH 128 65 + #define ISP2401_DMA_MAX_BURST_LENGTH 2 66 + 7 67 #ifdef ISP2401 8 68 # include "isp2401_system_global.h" 9 69 #else 10 70 # include "isp2400_system_global.h" 11 71 #endif 72 + 73 + #include <hive_isp_css_defs.h> 74 + #include <type_support.h> 75 + 76 + /* This interface is deprecated */ 77 + #include "hive_types.h" 78 + 79 + /* 80 + * Semi global. "HRT" is accessible from SP, but the HRT types do not fully apply 81 + */ 82 + #define HRT_VADDRESS_WIDTH 32 83 + 84 + #define SIZEOF_HRT_REG (HRT_DATA_WIDTH >> 3) 85 + #define HIVE_ISP_CTRL_DATA_BYTES (HIVE_ISP_CTRL_DATA_WIDTH / 8) 86 + 87 + /* The main bus connecting all devices */ 88 + #define HRT_BUS_WIDTH HIVE_ISP_CTRL_DATA_WIDTH 89 + #define HRT_BUS_BYTES HIVE_ISP_CTRL_DATA_BYTES 90 + 91 + typedef u32 hrt_bus_align_t; 92 + 93 + /* 94 + * Enumerate the devices, device access through the API is by ID, 95 + * through the DLI by address. The enumerator terminators are used 96 + * to size the wiring arrays and as an exception value. 97 + */ 98 + typedef enum { 99 + DDR0_ID = 0, 100 + N_DDR_ID 101 + } ddr_ID_t; 102 + 103 + typedef enum { 104 + ISP0_ID = 0, 105 + N_ISP_ID 106 + } isp_ID_t; 107 + 108 + typedef enum { 109 + SP0_ID = 0, 110 + N_SP_ID 111 + } sp_ID_t; 112 + 113 + typedef enum { 114 + MMU0_ID = 0, 115 + MMU1_ID, 116 + N_MMU_ID 117 + } mmu_ID_t; 118 + 119 + typedef enum { 120 + DMA0_ID = 0, 121 + N_DMA_ID 122 + } dma_ID_t; 123 + 124 + typedef enum { 125 + GDC0_ID = 0, 126 + GDC1_ID, 127 + N_GDC_ID 128 + } gdc_ID_t; 129 + 130 + /* this extra define is needed because we want to use it also 131 + in the preprocessor, and that doesn't work with enums. 
132 + */ 133 + #define N_GDC_ID_CPP 2 134 + 135 + typedef enum { 136 + VAMEM0_ID = 0, 137 + VAMEM1_ID, 138 + VAMEM2_ID, 139 + N_VAMEM_ID 140 + } vamem_ID_t; 141 + 142 + typedef enum { 143 + BAMEM0_ID = 0, 144 + N_BAMEM_ID 145 + } bamem_ID_t; 146 + 147 + typedef enum { 148 + HMEM0_ID = 0, 149 + N_HMEM_ID 150 + } hmem_ID_t; 151 + 152 + typedef enum { 153 + IRQ0_ID = 0, /* GP IRQ block */ 154 + IRQ1_ID, /* Input formatter */ 155 + IRQ2_ID, /* input system */ 156 + IRQ3_ID, /* input selector */ 157 + N_IRQ_ID 158 + } irq_ID_t; 159 + 160 + typedef enum { 161 + FIFO_MONITOR0_ID = 0, 162 + N_FIFO_MONITOR_ID 163 + } fifo_monitor_ID_t; 164 + 165 + typedef enum { 166 + GP_DEVICE0_ID = 0, 167 + N_GP_DEVICE_ID 168 + } gp_device_ID_t; 169 + 170 + typedef enum { 171 + GP_TIMER0_ID = 0, 172 + GP_TIMER1_ID, 173 + GP_TIMER2_ID, 174 + GP_TIMER3_ID, 175 + GP_TIMER4_ID, 176 + GP_TIMER5_ID, 177 + GP_TIMER6_ID, 178 + GP_TIMER7_ID, 179 + N_GP_TIMER_ID 180 + } gp_timer_ID_t; 181 + 182 + typedef enum { 183 + GPIO0_ID = 0, 184 + N_GPIO_ID 185 + } gpio_ID_t; 186 + 187 + typedef enum { 188 + TIMED_CTRL0_ID = 0, 189 + N_TIMED_CTRL_ID 190 + } timed_ctrl_ID_t; 191 + 192 + typedef enum { 193 + INPUT_FORMATTER0_ID = 0, 194 + INPUT_FORMATTER1_ID, 195 + INPUT_FORMATTER2_ID, 196 + INPUT_FORMATTER3_ID, 197 + N_INPUT_FORMATTER_ID 198 + } input_formatter_ID_t; 199 + 200 + /* The IF RST is outside the IF */ 201 + #define INPUT_FORMATTER0_SRST_OFFSET 0x0824 202 + #define INPUT_FORMATTER1_SRST_OFFSET 0x0624 203 + #define INPUT_FORMATTER2_SRST_OFFSET 0x0424 204 + #define INPUT_FORMATTER3_SRST_OFFSET 0x0224 205 + 206 + #define INPUT_FORMATTER0_SRST_MASK 0x0001 207 + #define INPUT_FORMATTER1_SRST_MASK 0x0002 208 + #define INPUT_FORMATTER2_SRST_MASK 0x0004 209 + #define INPUT_FORMATTER3_SRST_MASK 0x0008 210 + 211 + typedef enum { 212 + INPUT_SYSTEM0_ID = 0, 213 + N_INPUT_SYSTEM_ID 214 + } input_system_ID_t; 215 + 216 + typedef enum { 217 + RX0_ID = 0, 218 + N_RX_ID 219 + } rx_ID_t; 220 + 221 + enum 
mipi_port_id { 222 + MIPI_PORT0_ID = 0, 223 + MIPI_PORT1_ID, 224 + MIPI_PORT2_ID, 225 + N_MIPI_PORT_ID 226 + }; 227 + 228 + #define N_RX_CHANNEL_ID 4 229 + 230 + /* Generic port enumeration with an internal port type ID */ 231 + typedef enum { 232 + CSI_PORT0_ID = 0, 233 + CSI_PORT1_ID, 234 + CSI_PORT2_ID, 235 + TPG_PORT0_ID, 236 + PRBS_PORT0_ID, 237 + FIFO_PORT0_ID, 238 + MEMORY_PORT0_ID, 239 + N_INPUT_PORT_ID 240 + } input_port_ID_t; 241 + 242 + typedef enum { 243 + CAPTURE_UNIT0_ID = 0, 244 + CAPTURE_UNIT1_ID, 245 + CAPTURE_UNIT2_ID, 246 + ACQUISITION_UNIT0_ID, 247 + DMA_UNIT0_ID, 248 + CTRL_UNIT0_ID, 249 + GPREGS_UNIT0_ID, 250 + FIFO_UNIT0_ID, 251 + IRQ_UNIT0_ID, 252 + N_SUB_SYSTEM_ID 253 + } sub_system_ID_t; 254 + 255 + #define N_CAPTURE_UNIT_ID 3 256 + #define N_ACQUISITION_UNIT_ID 1 257 + #define N_CTRL_UNIT_ID 1 258 + 259 + 260 + enum ia_css_isp_memories { 261 + IA_CSS_ISP_PMEM0 = 0, 262 + IA_CSS_ISP_DMEM0, 263 + IA_CSS_ISP_VMEM0, 264 + IA_CSS_ISP_VAMEM0, 265 + IA_CSS_ISP_VAMEM1, 266 + IA_CSS_ISP_VAMEM2, 267 + IA_CSS_ISP_HMEM0, 268 + IA_CSS_SP_DMEM0, 269 + IA_CSS_DDR, 270 + N_IA_CSS_MEMORIES 271 + }; 272 + 273 + #define IA_CSS_NUM_MEMORIES 9 274 + /* For driver compatibility */ 275 + #define N_IA_CSS_ISP_MEMORIES IA_CSS_NUM_MEMORIES 276 + #define IA_CSS_NUM_ISP_MEMORIES IA_CSS_NUM_MEMORIES 277 + 278 + /* 279 + * ISP2401 specific enums 280 + */ 281 + 282 + typedef enum { 283 + ISYS_IRQ0_ID = 0, /* port a */ 284 + ISYS_IRQ1_ID, /* port b */ 285 + ISYS_IRQ2_ID, /* port c */ 286 + N_ISYS_IRQ_ID 287 + } isys_irq_ID_t; 288 + 289 + 290 + /* 291 + * Input-buffer Controller. 292 + */ 293 + typedef enum { 294 + IBUF_CTRL0_ID = 0, /* map to ISYS2401_IBUF_CNTRL_A */ 295 + IBUF_CTRL1_ID, /* map to ISYS2401_IBUF_CNTRL_B */ 296 + IBUF_CTRL2_ID, /* map ISYS2401_IBUF_CNTRL_C */ 297 + N_IBUF_CTRL_ID 298 + } ibuf_ctrl_ID_t; 299 + /* end of Input-buffer Controller */ 300 + 301 + /* 302 + * Stream2MMIO. 
303 + */ 304 + typedef enum { 305 + STREAM2MMIO0_ID = 0, /* map to ISYS2401_S2M_A */ 306 + STREAM2MMIO1_ID, /* map to ISYS2401_S2M_B */ 307 + STREAM2MMIO2_ID, /* map to ISYS2401_S2M_C */ 308 + N_STREAM2MMIO_ID 309 + } stream2mmio_ID_t; 310 + 311 + typedef enum { 312 + /* 313 + * Stream2MMIO 0 has 8 SIDs that are indexed by 314 + * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID7_ID]. 315 + * 316 + * Stream2MMIO 1 has 4 SIDs that are indexed by 317 + * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID]. 318 + * 319 + * Stream2MMIO 2 has 4 SIDs that are indexed by 320 + * [STREAM2MMIO_SID0_ID...STREAM2MMIO_SID3_ID]. 321 + */ 322 + STREAM2MMIO_SID0_ID = 0, 323 + STREAM2MMIO_SID1_ID, 324 + STREAM2MMIO_SID2_ID, 325 + STREAM2MMIO_SID3_ID, 326 + STREAM2MMIO_SID4_ID, 327 + STREAM2MMIO_SID5_ID, 328 + STREAM2MMIO_SID6_ID, 329 + STREAM2MMIO_SID7_ID, 330 + N_STREAM2MMIO_SID_ID 331 + } stream2mmio_sid_ID_t; 332 + /* end of Stream2MMIO */ 333 + 334 + /** 335 + * Input System 2401: CSI-MIPI receiver. 336 + */ 337 + typedef enum { 338 + CSI_RX_BACKEND0_ID = 0, /* map to ISYS2401_MIPI_BE_A */ 339 + CSI_RX_BACKEND1_ID, /* map to ISYS2401_MIPI_BE_B */ 340 + CSI_RX_BACKEND2_ID, /* map to ISYS2401_MIPI_BE_C */ 341 + N_CSI_RX_BACKEND_ID 342 + } csi_rx_backend_ID_t; 343 + 344 + typedef enum { 345 + CSI_RX_FRONTEND0_ID = 0, /* map to ISYS2401_CSI_RX_A */ 346 + CSI_RX_FRONTEND1_ID, /* map to ISYS2401_CSI_RX_B */ 347 + CSI_RX_FRONTEND2_ID, /* map to ISYS2401_CSI_RX_C */ 348 + #define N_CSI_RX_FRONTEND_ID (CSI_RX_FRONTEND2_ID + 1) 349 + } csi_rx_frontend_ID_t; 350 + 351 + typedef enum { 352 + CSI_RX_DLANE0_ID = 0, /* map to DLANE0 in CSI RX */ 353 + CSI_RX_DLANE1_ID, /* map to DLANE1 in CSI RX */ 354 + CSI_RX_DLANE2_ID, /* map to DLANE2 in CSI RX */ 355 + CSI_RX_DLANE3_ID, /* map to DLANE3 in CSI RX */ 356 + N_CSI_RX_DLANE_ID 357 + } csi_rx_fe_dlane_ID_t; 358 + /* end of CSI-MIPI receiver */ 359 + 360 + typedef enum { 361 + ISYS2401_DMA0_ID = 0, 362 + N_ISYS2401_DMA_ID 363 + } isys2401_dma_ID_t; 364 + 365 + 
/** 366 + * Pixel-generator. ("system_global.h") 367 + */ 368 + typedef enum { 369 + PIXELGEN0_ID = 0, 370 + PIXELGEN1_ID, 371 + PIXELGEN2_ID, 372 + N_PIXELGEN_ID 373 + } pixelgen_ID_t; 374 + /* end of pixel-generator. ("system_global.h") */ 375 + 376 + typedef enum { 377 + INPUT_SYSTEM_CSI_PORT0_ID = 0, 378 + INPUT_SYSTEM_CSI_PORT1_ID, 379 + INPUT_SYSTEM_CSI_PORT2_ID, 380 + 381 + INPUT_SYSTEM_PIXELGEN_PORT0_ID, 382 + INPUT_SYSTEM_PIXELGEN_PORT1_ID, 383 + INPUT_SYSTEM_PIXELGEN_PORT2_ID, 384 + 385 + N_INPUT_SYSTEM_INPUT_PORT_ID 386 + } input_system_input_port_ID_t; 387 + 388 + #define N_INPUT_SYSTEM_CSI_PORT 3 389 + 390 + typedef enum { 391 + ISYS2401_DMA_CHANNEL_0 = 0, 392 + ISYS2401_DMA_CHANNEL_1, 393 + ISYS2401_DMA_CHANNEL_2, 394 + ISYS2401_DMA_CHANNEL_3, 395 + ISYS2401_DMA_CHANNEL_4, 396 + ISYS2401_DMA_CHANNEL_5, 397 + ISYS2401_DMA_CHANNEL_6, 398 + ISYS2401_DMA_CHANNEL_7, 399 + ISYS2401_DMA_CHANNEL_8, 400 + ISYS2401_DMA_CHANNEL_9, 401 + ISYS2401_DMA_CHANNEL_10, 402 + ISYS2401_DMA_CHANNEL_11, 403 + N_ISYS2401_DMA_CHANNEL 404 + } isys2401_dma_channel; 405 + 406 + #endif /* __SYSTEM_GLOBAL_H_INCLUDED__ */
+179
drivers/staging/media/atomisp/pci/system_local.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Support for Intel Camera Imaging ISP subsystem. 4 + * Copyright (c) 2015, Intel Corporation. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 14 + */ 15 + 16 + #include "system_local.h" 17 + 18 + /* ISP */ 19 + const hrt_address ISP_CTRL_BASE[N_ISP_ID] = { 20 + 0x0000000000020000ULL 21 + }; 22 + 23 + const hrt_address ISP_DMEM_BASE[N_ISP_ID] = { 24 + 0x0000000000200000ULL 25 + }; 26 + 27 + const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID] = { 28 + 0x0000000000100000ULL 29 + }; 30 + 31 + /* SP */ 32 + const hrt_address SP_CTRL_BASE[N_SP_ID] = { 33 + 0x0000000000010000ULL 34 + }; 35 + 36 + const hrt_address SP_DMEM_BASE[N_SP_ID] = { 37 + 0x0000000000300000ULL 38 + }; 39 + 40 + /* MMU */ 41 + /* 42 + * MMU0_ID: The data MMU 43 + * MMU1_ID: The icache MMU 44 + */ 45 + const hrt_address MMU_BASE[N_MMU_ID] = { 46 + 0x0000000000070000ULL, 47 + 0x00000000000A0000ULL 48 + }; 49 + 50 + /* DMA */ 51 + const hrt_address DMA_BASE[N_DMA_ID] = { 52 + 0x0000000000040000ULL 53 + }; 54 + 55 + const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID] = { 56 + 0x00000000000CA000ULL 57 + }; 58 + 59 + /* IRQ */ 60 + const hrt_address IRQ_BASE[N_IRQ_ID] = { 61 + 0x0000000000000500ULL, 62 + 0x0000000000030A00ULL, 63 + 0x000000000008C000ULL, 64 + 0x0000000000090200ULL 65 + }; 66 + 67 + /* 68 + 0x0000000000000500ULL}; 69 + */ 70 + 71 + /* GDC */ 72 + const hrt_address GDC_BASE[N_GDC_ID] = { 73 + 0x0000000000050000ULL, 74 + 0x0000000000060000ULL 75 + }; 76 + 77 + /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 78 + const 
hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID] = { 79 + 0x0000000000000000ULL 80 + }; 81 + 82 + /* 83 + const hrt_address GP_REGS_BASE[N_GP_REGS_ID] = { 84 + 0x0000000000000000ULL}; 85 + 86 + const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 87 + 0x0000000000090000ULL}; 88 + */ 89 + 90 + /* GP_DEVICE (single base for all separate GP_REG instances) */ 91 + const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID] = { 92 + 0x0000000000000000ULL 93 + }; 94 + 95 + /*GP TIMER , all timer registers are inter-twined, 96 + * so, having multiple base addresses for 97 + * different timers does not help*/ 98 + const hrt_address GP_TIMER_BASE = 99 + (hrt_address)0x0000000000000600ULL; 100 + 101 + /* GPIO */ 102 + const hrt_address GPIO_BASE[N_GPIO_ID] = { 103 + 0x0000000000000400ULL 104 + }; 105 + 106 + /* TIMED_CTRL */ 107 + const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID] = { 108 + 0x0000000000000100ULL 109 + }; 110 + 111 + /* INPUT_FORMATTER */ 112 + const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID] = { 113 + 0x0000000000030000ULL, 114 + 0x0000000000030200ULL, 115 + 0x0000000000030400ULL, 116 + 0x0000000000030600ULL 117 + }; /* memcpy() */ 118 + 119 + /* INPUT_SYSTEM */ 120 + const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID] = { 121 + 0x0000000000080000ULL 122 + }; 123 + 124 + /* 0x0000000000081000ULL, */ /* capture A */ 125 + /* 0x0000000000082000ULL, */ /* capture B */ 126 + /* 0x0000000000083000ULL, */ /* capture C */ 127 + /* 0x0000000000084000ULL, */ /* Acquisition */ 128 + /* 0x0000000000085000ULL, */ /* DMA */ 129 + /* 0x0000000000089000ULL, */ /* ctrl */ 130 + /* 0x000000000008A000ULL, */ /* GP regs */ 131 + /* 0x000000000008B000ULL, */ /* FIFO */ 132 + /* 0x000000000008C000ULL, */ /* IRQ */ 133 + 134 + /* RX, the MIPI lane control regs start at offset 0 */ 135 + const hrt_address RX_BASE[N_RX_ID] = { 136 + 0x0000000000080100ULL 137 + }; 138 + 139 + /* IBUF_CTRL, part of the Input System 2401 */ 140 + const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID] = 
{ 141 + 0x00000000000C1800ULL, /* ibuf controller A */ 142 + 0x00000000000C3800ULL, /* ibuf controller B */ 143 + 0x00000000000C5800ULL /* ibuf controller C */ 144 + }; 145 + 146 + /* ISYS IRQ Controllers, part of the Input System 2401 */ 147 + const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID] = { 148 + 0x00000000000C1400ULL, /* port a */ 149 + 0x00000000000C3400ULL, /* port b */ 150 + 0x00000000000C5400ULL /* port c */ 151 + }; 152 + 153 + /* CSI FE, part of the Input System 2401 */ 154 + const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID] = { 155 + 0x00000000000C0400ULL, /* csi fe controller A */ 156 + 0x00000000000C2400ULL, /* csi fe controller B */ 157 + 0x00000000000C4400ULL /* csi fe controller C */ 158 + }; 159 + 160 + /* CSI BE, part of the Input System 2401 */ 161 + const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID] = { 162 + 0x00000000000C0800ULL, /* csi be controller A */ 163 + 0x00000000000C2800ULL, /* csi be controller B */ 164 + 0x00000000000C4800ULL /* csi be controller C */ 165 + }; 166 + 167 + /* PIXEL Generator, part of the Input System 2401 */ 168 + const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID] = { 169 + 0x00000000000C1000ULL, /* pixel gen controller A */ 170 + 0x00000000000C3000ULL, /* pixel gen controller B */ 171 + 0x00000000000C5000ULL /* pixel gen controller C */ 172 + }; 173 + 174 + /* Stream2MMIO, part of the Input System 2401 */ 175 + const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID] = { 176 + 0x00000000000C0C00ULL, /* stream2mmio controller A */ 177 + 0x00000000000C2C00ULL, /* stream2mmio controller B */ 178 + 0x00000000000C4C00ULL /* stream2mmio controller C */ 179 + };
+98 -6
drivers/staging/media/atomisp/pci/system_local.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 - // SPDX-License-Identifier: GPL-2.0-or-later 3 2 /* 4 - * (c) 2020 Mauro Carvalho Chehab <mchehab+huawei@kernel.org> 3 + * Support for Intel Camera Imaging ISP subsystem. 4 + * Copyright (c) 2015, Intel Corporation. 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms and conditions of the GNU General Public License, 8 + * version 2, as published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope it will be useful, but WITHOUT 11 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 + * more details. 5 14 */ 6 15 7 - #ifdef ISP2401 8 - # include "isp2401_system_local.h" 9 - #else 10 - # include "isp2400_system_local.h" 16 + #ifndef __SYSTEM_LOCAL_H_INCLUDED__ 17 + #define __SYSTEM_LOCAL_H_INCLUDED__ 18 + 19 + #ifdef HRT_ISP_CSS_CUSTOM_HOST 20 + #ifndef HRT_USE_VIR_ADDRS 21 + #define HRT_USE_VIR_ADDRS 11 22 #endif 23 + #endif 24 + 25 + #include "system_global.h" 26 + 27 + /* This interface is deprecated */ 28 + #include "hive_types.h" 29 + 30 + /* 31 + * Cell specific address maps 32 + */ 33 + 34 + #define GP_FIFO_BASE ((hrt_address)0x0000000000090104) /* This is NOT a base address */ 35 + 36 + /* ISP */ 37 + extern const hrt_address ISP_CTRL_BASE[N_ISP_ID]; 38 + extern const hrt_address ISP_DMEM_BASE[N_ISP_ID]; 39 + extern const hrt_address ISP_BAMEM_BASE[N_BAMEM_ID]; 40 + 41 + /* SP */ 42 + extern const hrt_address SP_CTRL_BASE[N_SP_ID]; 43 + extern const hrt_address SP_DMEM_BASE[N_SP_ID]; 44 + 45 + /* MMU */ 46 + 47 + extern const hrt_address MMU_BASE[N_MMU_ID]; 48 + 49 + /* DMA */ 50 + extern const hrt_address DMA_BASE[N_DMA_ID]; 51 + extern const hrt_address ISYS2401_DMA_BASE[N_ISYS2401_DMA_ID]; 52 + 53 + /* IRQ */ 54 + extern const hrt_address IRQ_BASE[N_IRQ_ID]; 55 + 56 + /* GDC */ 57 + extern const hrt_address 
GDC_BASE[N_GDC_ID]; 58 + 59 + /* FIFO_MONITOR (not a subset of GP_DEVICE) */ 60 + extern const hrt_address FIFO_MONITOR_BASE[N_FIFO_MONITOR_ID]; 61 + 62 + /* GP_DEVICE (single base for all separate GP_REG instances) */ 63 + extern const hrt_address GP_DEVICE_BASE[N_GP_DEVICE_ID]; 64 + 65 + /*GP TIMER , all timer registers are inter-twined, 66 + * so, having multiple base addresses for 67 + * different timers does not help*/ 68 + extern const hrt_address GP_TIMER_BASE; 69 + 70 + /* GPIO */ 71 + extern const hrt_address GPIO_BASE[N_GPIO_ID]; 72 + 73 + /* TIMED_CTRL */ 74 + extern const hrt_address TIMED_CTRL_BASE[N_TIMED_CTRL_ID]; 75 + 76 + /* INPUT_FORMATTER */ 77 + extern const hrt_address INPUT_FORMATTER_BASE[N_INPUT_FORMATTER_ID]; 78 + 79 + /* INPUT_SYSTEM */ 80 + extern const hrt_address INPUT_SYSTEM_BASE[N_INPUT_SYSTEM_ID]; 81 + 82 + /* RX, the MIPI lane control regs start at offset 0 */ 83 + extern const hrt_address RX_BASE[N_RX_ID]; 84 + 85 + /* IBUF_CTRL, part of the Input System 2401 */ 86 + extern const hrt_address IBUF_CTRL_BASE[N_IBUF_CTRL_ID]; 87 + 88 + /* ISYS IRQ Controllers, part of the Input System 2401 */ 89 + extern const hrt_address ISYS_IRQ_BASE[N_ISYS_IRQ_ID]; 90 + 91 + /* CSI FE, part of the Input System 2401 */ 92 + extern const hrt_address CSI_RX_FE_CTRL_BASE[N_CSI_RX_FRONTEND_ID]; 93 + 94 + /* CSI BE, part of the Input System 2401 */ 95 + extern const hrt_address CSI_RX_BE_CTRL_BASE[N_CSI_RX_BACKEND_ID]; 96 + 97 + /* PIXEL Generator, part of the Input System 2401 */ 98 + extern const hrt_address PIXELGEN_CTRL_BASE[N_PIXELGEN_ID]; 99 + 100 + /* Stream2MMIO, part of the Input System 2401 */ 101 + extern const hrt_address STREAM2MMIO_CTRL_BASE[N_STREAM2MMIO_ID]; 102 + 103 + #endif /* __SYSTEM_LOCAL_H_INCLUDED__ */
+15 -1
drivers/staging/wlan-ng/prism2usb.c
··· 61 61 const struct usb_device_id *id) 62 62 { 63 63 struct usb_device *dev; 64 - 64 + const struct usb_endpoint_descriptor *epd; 65 + const struct usb_host_interface *iface_desc = interface->cur_altsetting; 65 66 struct wlandevice *wlandev = NULL; 66 67 struct hfa384x *hw = NULL; 67 68 int result = 0; 69 + 70 + if (iface_desc->desc.bNumEndpoints != 2) { 71 + result = -ENODEV; 72 + goto failed; 73 + } 74 + 75 + result = -EINVAL; 76 + epd = &iface_desc->endpoint[0].desc; 77 + if (!usb_endpoint_is_bulk_in(epd)) 78 + goto failed; 79 + epd = &iface_desc->endpoint[1].desc; 80 + if (!usb_endpoint_is_bulk_out(epd)) 81 + goto failed; 68 82 69 83 dev = interface_to_usbdev(interface); 70 84 wlandev = create_wlan()
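The probe now rejects interfaces without the expected bulk-in/bulk-out endpoint pair. A rough standalone sketch of the same direction/type test (the mask values mirror the USB ch9 descriptor layout; `is_bulk_in`/`is_bulk_out` are illustrative stand-ins for the kernel's `usb_endpoint_is_bulk_in()`/`usb_endpoint_is_bulk_out()` helpers, not the real API):

```c
#include <assert.h>
#include <stdint.h>

/* Mask/bit values mirroring include/uapi/linux/usb/ch9.h. */
#define USB_DIR_IN                  0x80
#define USB_ENDPOINT_XFERTYPE_MASK  0x03
#define USB_ENDPOINT_XFER_BULK      0x02

/* Illustrative stand-in for usb_endpoint_is_bulk_in(): bulk transfer
 * type plus the IN direction bit set in bEndpointAddress. */
int is_bulk_in(uint8_t bEndpointAddress, uint8_t bmAttributes)
{
	return (bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) == USB_ENDPOINT_XFER_BULK &&
	       (bEndpointAddress & USB_DIR_IN) != 0;
}

/* Illustrative stand-in for usb_endpoint_is_bulk_out(): bulk transfer
 * type with the IN direction bit clear. */
int is_bulk_out(uint8_t bEndpointAddress, uint8_t bmAttributes)
{
	return (bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) == USB_ENDPOINT_XFER_BULK &&
	       (bEndpointAddress & USB_DIR_IN) == 0;
}
```

Checking type and direction together is what keeps a malicious or malformed device from getting past probe with, say, an interrupt endpoint where a bulk pipe is expected.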
+1 -1
drivers/tty/serial/8250/8250_core.c
··· 524 524 */ 525 525 up->mcr_mask = ~ALPHA_KLUDGE_MCR; 526 526 up->mcr_force = ALPHA_KLUDGE_MCR; 527 + serial8250_set_defaults(up); 527 528 } 528 529 529 530 /* chain base port ops to support Remote Supervisor Adapter */ ··· 548 547 port->membase = old_serial_port[i].iomem_base; 549 548 port->iotype = old_serial_port[i].io_type; 550 549 port->regshift = old_serial_port[i].iomem_reg_shift; 551 - serial8250_set_defaults(up); 552 550 553 551 port->irqflags |= irqflag; 554 552 if (serial8250_isa_config != NULL)
+11 -1
drivers/tty/serial/8250/8250_exar.c
··· 326 326 * devices will export them as GPIOs, so we pre-configure them safely 327 327 * as inputs. 328 328 */ 329 - u8 dir = pcidev->vendor == PCI_VENDOR_ID_EXAR ? 0xff : 0x00; 329 + 330 + u8 dir = 0x00; 331 + 332 + if ((pcidev->vendor == PCI_VENDOR_ID_EXAR) && 333 + (pcidev->subsystem_vendor != PCI_VENDOR_ID_SEALEVEL)) { 334 + // Configure GPIO as inputs for Commtech adapters 335 + dir = 0xff; 336 + } else { 337 + // Configure GPIO as outputs for SeaLevel adapters 338 + dir = 0x00; 339 + } 330 340 331 341 writeb(0x00, p + UART_EXAR_MPIOINT_7_0); 332 342 writeb(0x00, p + UART_EXAR_MPIOLVL_7_0);
+18
drivers/tty/serial/8250/8250_mtk.c
··· 306 306 } 307 307 #endif 308 308 309 + /* 310 + * Store the requested baud rate before calling the generic 8250 311 + * set_termios method. The standard 8250 port expects bauds no 312 + * higher than (uartclk / 16), so the baud will be clamped if it 313 + * exceeds that bound. The Mediatek 8250 port supports speeds 314 + * higher than that, so we'll get the original baud rate back 315 + * after calling the generic set_termios method and recalculate 316 + * the speed later in this method. 317 + */ 318 + baud = tty_termios_baud_rate(termios); 319 + 309 320 serial8250_do_set_termios(port, termios, old); 321 + 322 + tty_termios_encode_baud_rate(termios, baud, baud); 310 323 311 324 /* 312 325 * Mediatek UARTs use an extra highspeed register (MTK_UART_HIGHS) ··· 351 338 * interrupts disabled. 352 339 */ 353 340 spin_lock_irqsave(&port->lock, flags); 341 + 342 + /* 343 + * Update the per-port timeout. 344 + */ 345 + uart_update_timeout(port, termios->c_cflag, baud); 354 346 355 347 /* set DLAB we have cval saved in up->lcr from the call to the core */ 356 348 serial_port_out(port, UART_LCR, up->lcr | UART_LCR_DLAB);
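The save/restore dance above can be sketched in isolation. Everything below is hypothetical scaffolding (`fake_termios` and both helper names are illustrative, not kernel APIs); it only demonstrates the pattern of stashing the requested rate around a generic layer that clamps it:

```c
#include <assert.h>

/* Hypothetical scaffolding: a stripped-down termios carrying only the
 * baud rate, and a "generic" layer that clamps it to clk/16 the way
 * the comment above describes serial8250_do_set_termios() behaving. */
struct fake_termios {
	unsigned int baud;
};

void generic_set_termios(struct fake_termios *t, unsigned int uartclk)
{
	if (t->baud > uartclk / 16)
		t->baud = uartclk / 16;	/* the clamp the comment describes */
}

/* MTK-style wrapper: stash the requested rate, let the generic path
 * run (possibly clamping it), then write the original value back so
 * a highspeed divisor can later be computed from it. */
unsigned int set_termios_highspeed(struct fake_termios *t, unsigned int uartclk)
{
	unsigned int baud = t->baud;		/* save requested rate */

	generic_set_termios(t, uartclk);	/* may clamp t->baud */

	t->baud = baud;		/* restore, like tty_termios_encode_baud_rate() */
	return baud;
}
```

With a 1.8432 MHz clock, a 921600 baud request survives the round trip even though the generic layer alone would have clamped it to 115200.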
+7 -9
drivers/tty/serial/serial-tegra.c
··· 635 635 } 636 636 637 637 static void tegra_uart_handle_rx_pio(struct tegra_uart_port *tup, 638 - struct tty_port *tty) 638 + struct tty_port *port) 639 639 { 640 640 do { 641 641 char flag = TTY_NORMAL; ··· 653 653 ch = (unsigned char) tegra_uart_read(tup, UART_RX); 654 654 tup->uport.icount.rx++; 655 655 656 - if (!uart_handle_sysrq_char(&tup->uport, ch) && tty) 657 - tty_insert_flip_char(tty, ch, flag); 656 + if (uart_handle_sysrq_char(&tup->uport, ch)) 657 + continue; 658 658 659 659 if (tup->uport.ignore_status_mask & UART_LSR_DR) 660 660 continue; 661 + 662 + tty_insert_flip_char(port, ch, flag); 661 663 } while (1); 662 664 } 663 665 664 666 static void tegra_uart_copy_rx_to_tty(struct tegra_uart_port *tup, 665 - struct tty_port *tty, 667 + struct tty_port *port, 666 668 unsigned int count) 667 669 { 668 670 int copied; ··· 674 672 return; 675 673 676 674 tup->uport.icount.rx += count; 677 - if (!tty) { 678 - dev_err(tup->uport.dev, "No tty port\n"); 679 - return; 680 - } 681 675 682 676 if (tup->uport.ignore_status_mask & UART_LSR_DR) 683 677 return; 684 678 685 679 dma_sync_single_for_cpu(tup->uport.dev, tup->rx_dma_buf_phys, 686 680 count, DMA_FROM_DEVICE); 687 - copied = tty_insert_flip_string(tty, 681 + copied = tty_insert_flip_string(port, 688 682 ((unsigned char *)(tup->rx_dma_buf_virt)), count); 689 683 if (copied != count) { 690 684 WARN_ON(1);
+6 -2
drivers/tty/serial/xilinx_uartps.c
··· 1580 1580 * If register_console() don't assign value, then console_port pointer 1581 1581 * is cleanup. 1582 1582 */ 1583 - if (!console_port) 1583 + if (!console_port) { 1584 + cdns_uart_console.index = id; 1584 1585 console_port = port; 1586 + } 1585 1587 #endif 1586 1588 1587 1589 rc = uart_add_one_port(&cdns_uart_uart_driver, port); ··· 1596 1594 #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE 1597 1595 /* This is not port which is used for console that's why clean it up */ 1598 1596 if (console_port == port && 1599 - !(cdns_uart_uart_driver.cons->flags & CON_ENABLED)) 1597 + !(cdns_uart_uart_driver.cons->flags & CON_ENABLED)) { 1600 1598 console_port = NULL; 1599 + cdns_uart_console.index = -1; 1600 + } 1601 1601 #endif 1602 1602 1603 1603 cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,
+18 -11
drivers/tty/vt/vt.c
··· 1092 1092 .destruct = vc_port_destruct, 1093 1093 }; 1094 1094 1095 + /* 1096 + * Change # of rows and columns (0 means unchanged/the size of fg_console) 1097 + * [this is to be used together with some user program 1098 + * like resize that changes the hardware videomode] 1099 + */ 1100 + #define VC_MAXCOL (32767) 1101 + #define VC_MAXROW (32767) 1102 + 1095 1103 int vc_allocate(unsigned int currcons) /* return 0 on success */ 1096 1104 { 1097 1105 struct vt_notifier_param param; 1098 1106 struct vc_data *vc; 1107 + int err; 1099 1108 1100 1109 WARN_CONSOLE_UNLOCKED(); 1101 1110 ··· 1134 1125 if (!*vc->vc_uni_pagedir_loc) 1135 1126 con_set_default_unimap(vc); 1136 1127 1128 + err = -EINVAL; 1129 + if (vc->vc_cols > VC_MAXCOL || vc->vc_rows > VC_MAXROW || 1130 + vc->vc_screenbuf_size > KMALLOC_MAX_SIZE || !vc->vc_screenbuf_size) 1131 + goto err_free; 1132 + err = -ENOMEM; 1137 1133 vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_KERNEL); 1138 1134 if (!vc->vc_screenbuf) 1139 1135 goto err_free; ··· 1157 1143 visual_deinit(vc); 1158 1144 kfree(vc); 1159 1145 vc_cons[currcons].d = NULL; 1160 - return -ENOMEM; 1146 + return err; 1161 1147 } 1162 1148 1163 1149 static inline int resize_screen(struct vc_data *vc, int width, int height, ··· 1171 1157 1172 1158 return err; 1173 1159 } 1174 - 1175 - /* 1176 - * Change # of rows and columns (0 means unchanged/the size of fg_console) 1177 - * [this is to be used together with some user program 1178 - * like resize that changes the hardware videomode] 1179 - */ 1180 - #define VC_RESIZE_MAXCOL (32767) 1181 - #define VC_RESIZE_MAXROW (32767) 1182 1160 1183 1161 /** 1184 1162 * vc_do_resize - resizing method for the tty ··· 1207 1201 user = vc->vc_resize_user; 1208 1202 vc->vc_resize_user = 0; 1209 1203 1210 - if (cols > VC_RESIZE_MAXCOL || lines > VC_RESIZE_MAXROW) 1204 + if (cols > VC_MAXCOL || lines > VC_MAXROW) 1211 1205 return -EINVAL; 1212 1206 1213 1207 new_cols = (cols ? cols : vc->vc_cols); ··· 1218 1212 if (new_cols == vc->vc_cols && new_rows == vc->vc_rows) 1219 1213 return 0; 1220 1214 1221 - if (new_screen_size > KMALLOC_MAX_SIZE) 1215 + if (new_screen_size > KMALLOC_MAX_SIZE || !new_screen_size) 1222 1216 return -EINVAL; 1223 1217 newscreen = kzalloc(new_screen_size, GFP_USER); 1224 1218 if (!newscreen) ··· 3399 3393 INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK); 3400 3394 tty_port_init(&vc->port); 3401 3395 visual_init(vc, currcons, 1); 3396 + /* Assuming vc->vc_{cols,rows,screenbuf_size} are sane here. */ 3402 3397 vc->vc_screenbuf = kzalloc(vc->vc_screenbuf_size, GFP_NOWAIT); 3403 3398 vc_init(vc, vc->vc_rows, vc->vc_cols, 3404 3399 currcons || !vc->vc_sw->con_save_screen);
+4
drivers/usb/host/xhci-mtk-sch.c
··· 557 557 if (is_fs_or_ls(speed) && !has_tt) 558 558 return false; 559 559 560 + /* skip endpoint with zero maxpkt */ 561 + if (usb_endpoint_maxp(&ep->desc) == 0) 562 + return false; 563 + 560 564 return true; 561 565 } 562 566
+3
drivers/usb/host/xhci-pci.c
··· 265 265 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 266 266 pdev->device == 0x1142) 267 267 xhci->quirks |= XHCI_TRUST_TX_LENGTH; 268 + if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 269 + pdev->device == 0x2142) 270 + xhci->quirks |= XHCI_NO_64BIT_SUPPORT; 268 271 269 272 if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 270 273 pdev->device == PCI_DEVICE_ID_ASMEDIA_1042A_XHCI)
+1 -1
drivers/usb/host/xhci-tegra.c
··· 856 856 if (!tegra->context.ipfs) 857 857 return -ENOMEM; 858 858 859 - tegra->context.fpci = devm_kcalloc(tegra->dev, soc->ipfs.num_offsets, 859 + tegra->context.fpci = devm_kcalloc(tegra->dev, soc->fpci.num_offsets, 860 860 sizeof(u32), GFP_KERNEL); 861 861 if (!tegra->context.fpci) 862 862 return -ENOMEM;
+5
drivers/vfio/pci/vfio_pci.c
··· 521 521 vfio_pci_vf_token_user_add(vdev, -1); 522 522 vfio_spapr_pci_eeh_release(vdev->pdev); 523 523 vfio_pci_disable(vdev); 524 + mutex_lock(&vdev->igate); 524 525 if (vdev->err_trigger) { 525 526 eventfd_ctx_put(vdev->err_trigger); 526 527 vdev->err_trigger = NULL; 527 528 } 529 + mutex_unlock(&vdev->igate); 530 + 531 + mutex_lock(&vdev->igate); 528 532 if (vdev->req_trigger) { 529 533 eventfd_ctx_put(vdev->req_trigger); 530 534 vdev->req_trigger = NULL; 531 535 } 536 + mutex_unlock(&vdev->igate); 532 537 } 533 538 534 539 mutex_unlock(&vdev->reflck->lock);
+2 -2
drivers/video/fbdev/core/bitblit.c
··· 216 216 region.color = color; 217 217 region.rop = ROP_COPY; 218 218 219 - if (rw && !bottom_only) { 219 + if ((int) rw > 0 && !bottom_only) { 220 220 region.dx = info->var.xoffset + rs; 221 221 region.dy = 0; 222 222 region.width = rw; ··· 224 224 info->fbops->fb_fillrect(info, &region); 225 225 } 226 226 227 - if (bh) { 227 + if ((int) bh > 0) { 228 228 region.dx = info->var.xoffset; 229 229 region.dy = info->var.yoffset + bs; 230 230 region.width = rs;
+2 -2
drivers/video/fbdev/core/fbcon_ccw.c
··· 201 201 region.color = color; 202 202 region.rop = ROP_COPY; 203 203 204 - if (rw && !bottom_only) { 204 + if ((int) rw > 0 && !bottom_only) { 205 205 region.dx = 0; 206 206 region.dy = info->var.yoffset; 207 207 region.height = rw; ··· 209 209 info->fbops->fb_fillrect(info, &region); 210 210 } 211 211 212 - if (bh) { 212 + if ((int) bh > 0) { 213 213 region.dx = info->var.xoffset + bs; 214 214 region.dy = 0; 215 215 region.height = info->var.yres_virtual;
+2 -2
drivers/video/fbdev/core/fbcon_cw.c
··· 184 184 region.color = color; 185 185 region.rop = ROP_COPY; 186 186 187 - if (rw && !bottom_only) { 187 + if ((int) rw > 0 && !bottom_only) { 188 188 region.dx = 0; 189 189 region.dy = info->var.yoffset + rs; 190 190 region.height = rw; ··· 192 192 info->fbops->fb_fillrect(info, &region); 193 193 } 194 194 195 - if (bh) { 195 + if ((int) bh > 0) { 196 196 region.dx = info->var.xoffset; 197 197 region.dy = info->var.yoffset; 198 198 region.height = info->var.yres;
+2 -2
drivers/video/fbdev/core/fbcon_ud.c
··· 231 231 region.color = color; 232 232 region.rop = ROP_COPY; 233 233 234 - if (rw && !bottom_only) { 234 + if ((int) rw > 0 && !bottom_only) { 235 235 region.dy = 0; 236 236 region.dx = info->var.xoffset; 237 237 region.width = rw; ··· 239 239 info->fbops->fb_fillrect(info, &region); 240 240 } 241 241 242 - if (bh) { 242 + if ((int) bh > 0) { 243 243 region.dy = info->var.yoffset; 244 244 region.dx = info->var.xoffset; 245 245 region.height = bh;
+2 -2
drivers/virtio/virtio_mmio.c
··· 641 641 &vm_cmdline_id, &consumed); 642 642 643 643 /* 644 - * sscanf() must processes at least 2 chunks; also there 644 + * sscanf() must process at least 2 chunks; also there 645 645 * must be no extra characters after the last chunk, so 646 646 * str[consumed] must be '\0' 647 647 */ 648 - if (processed < 2 || str[consumed]) 648 + if (processed < 2 || str[consumed] || irq == 0) 649 649 return -EINVAL; 650 650 651 651 resources[0].flags = IORESOURCE_MEM;
+1
fs/btrfs/backref.c
··· 1461 1461 if (ret < 0 && ret != -ENOENT) { 1462 1462 ulist_free(tmp); 1463 1463 ulist_free(*roots); 1464 + *roots = NULL; 1464 1465 return ret; 1465 1466 } 1466 1467 node = ulist_next(tmp, &uiter);
+2 -1
fs/btrfs/extent_io.c
··· 1999 1999 if (!PageDirty(pages[i]) || 2000 2000 pages[i]->mapping != mapping) { 2001 2001 unlock_page(pages[i]); 2002 - put_page(pages[i]); 2002 + for (; i < ret; i++) 2003 + put_page(pages[i]); 2003 2004 err = -EAGAIN; 2004 2005 goto out; 2005 2006 }
+10 -13
fs/btrfs/inode.c
··· 8123 8123 /* 8124 8124 * Qgroup reserved space handler 8125 8125 * Page here will be either 8126 - * 1) Already written to disk 8127 - * In this case, its reserved space is released from data rsv map 8128 - * and will be freed by delayed_ref handler finally. 8129 - * So even we call qgroup_free_data(), it won't decrease reserved 8130 - * space. 8131 - * 2) Not written to disk 8132 - * This means the reserved space should be freed here. However, 8133 - * if a truncate invalidates the page (by clearing PageDirty) 8134 - * and the page is accounted for while allocating extent 8135 - * in btrfs_check_data_free_space() we let delayed_ref to 8136 - * free the entire extent. 8126 + * 1) Already written to disk or ordered extent already submitted 8127 + * Then its QGROUP_RESERVED bit in io_tree is already cleaned. 8128 + * Qgroup will be handled by its qgroup_record then. 8129 + * btrfs_qgroup_free_data() call will do nothing here. 8130 + * 8131 + * 2) Not written to disk yet 8132 + * Then btrfs_qgroup_free_data() call will clear the QGROUP_RESERVED 8133 + * bit of its io_tree, and free the qgroup reserved data space. 8134 + * Since the IO will never happen for this page. 8137 8135 */ 8138 - if (PageDirty(page)) 8139 - btrfs_qgroup_free_data(inode, NULL, page_start, PAGE_SIZE); 8136 + btrfs_qgroup_free_data(inode, NULL, page_start, PAGE_SIZE); 8140 8137 if (!inode_evicting) { 8141 8138 clear_extent_bit(tree, page_start, page_end, EXTENT_LOCKED | 8142 8139 EXTENT_DELALLOC | EXTENT_DELALLOC_NEW |
+8
fs/btrfs/volumes.c
··· 7052 7052 mutex_lock(&fs_info->chunk_mutex); 7053 7053 7054 7054 /* 7055 + * It is possible for mount and umount to race in such a way that 7056 + * we execute this code path, but open_fs_devices failed to clear 7057 + * total_rw_bytes. We certainly want it cleared before reading the 7058 + * device items, so clear it here. 7059 + */ 7060 + fs_info->fs_devices->total_rw_bytes = 0; 7061 + 7062 + /* 7055 7063 * Read all device items, and then all the chunk items. All 7056 7064 * device items are found before any chunk item (their object id 7057 7065 * is smaller than the lowest possible object id for a chunk
+2 -8
fs/cifs/inode.c
··· 2044 2044 FILE_UNIX_BASIC_INFO *info_buf_target; 2045 2045 unsigned int xid; 2046 2046 int rc, tmprc; 2047 - bool new_target = d_really_is_negative(target_dentry); 2048 2047 2049 2048 if (flags & ~RENAME_NOREPLACE) 2050 2049 return -EINVAL; ··· 2120 2121 */ 2121 2122 2122 2123 unlink_target: 2123 - /* 2124 - * If the target dentry was created during the rename, try 2125 - * unlinking it if it's not negative 2126 - */ 2127 - if (new_target && 2128 - d_really_is_positive(target_dentry) && 2129 - (rc == -EACCES || rc == -EEXIST)) { 2124 + /* Try unlinking the target dentry if it's not negative */ 2125 + if (d_really_is_positive(target_dentry) && (rc == -EACCES || rc == -EEXIST)) { 2130 2126 if (d_is_dir(target_dentry)) 2131 2127 tmprc = cifs_rmdir(target_dir, target_dentry); 2132 2128 else
+3 -3
fs/efivarfs/super.c
··· 201 201 sb->s_d_op = &efivarfs_d_ops; 202 202 sb->s_time_gran = 1; 203 203 204 + if (!efivar_supports_writes()) 205 + sb->s_flags |= SB_RDONLY; 206 + 204 207 inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0, true); 205 208 if (!inode) 206 209 return -ENOMEM; ··· 255 252 256 253 static __init int efivarfs_init(void) 257 254 { 258 - if (!efi_rt_services_supported(EFI_RT_SUPPORTED_VARIABLE_SERVICES)) 259 - return -ENODEV; 260 - 261 255 if (!efivars_kobject()) 262 256 return -ENODEV; 263 257
+1 -1
fs/exfat/dir.c
··· 1112 1112 ret = exfat_get_next_cluster(sb, &clu.dir); 1113 1113 } 1114 1114 1115 - if (ret || clu.dir != EXFAT_EOF_CLUSTER) { 1115 + if (ret || clu.dir == EXFAT_EOF_CLUSTER) { 1116 1116 /* just initialized hint_stat */ 1117 1117 hint_stat->clu = p_dir->dir; 1118 1118 hint_stat->eidx = 0;
+1 -1
fs/exfat/exfat_fs.h
··· 371 371 static inline sector_t exfat_cluster_to_sector(struct exfat_sb_info *sbi, 372 372 unsigned int clus) 373 373 { 374 - return ((clus - EXFAT_RESERVED_CLUSTERS) << sbi->sect_per_clus_bits) + 374 + return ((sector_t)(clus - EXFAT_RESERVED_CLUSTERS) << sbi->sect_per_clus_bits) + 375 375 sbi->data_start_sector; 376 376 } 377 377
+1 -1
fs/exfat/file.c
··· 176 176 ep2->dentry.stream.size = 0; 177 177 } else { 178 178 ep2->dentry.stream.valid_size = cpu_to_le64(new_size); 179 - ep2->dentry.stream.size = ep->dentry.stream.valid_size; 179 + ep2->dentry.stream.size = ep2->dentry.stream.valid_size; 180 180 } 181 181 182 182 if (new_size == 0) {
+4 -4
fs/exfat/nls.c
··· 495 495 struct exfat_uni_name *p_uniname, int *p_lossy) 496 496 { 497 497 int i, unilen, lossy = NLS_NAME_NO_LOSSY; 498 - unsigned short upname[MAX_NAME_LENGTH + 1]; 498 + __le16 upname[MAX_NAME_LENGTH + 1]; 499 499 unsigned short *uniname = p_uniname->name; 500 500 501 501 WARN_ON(!len); ··· 519 519 exfat_wstrchr(bad_uni_chars, *uniname)) 520 520 lossy |= NLS_NAME_LOSSY; 521 521 522 - upname[i] = exfat_toupper(sb, *uniname); 522 + upname[i] = cpu_to_le16(exfat_toupper(sb, *uniname)); 523 523 uniname++; 524 524 } 525 525 ··· 597 597 struct exfat_uni_name *p_uniname, int *p_lossy) 598 598 { 599 599 int i = 0, unilen = 0, lossy = NLS_NAME_NO_LOSSY; 600 - unsigned short upname[MAX_NAME_LENGTH + 1]; 600 + __le16 upname[MAX_NAME_LENGTH + 1]; 601 601 unsigned short *uniname = p_uniname->name; 602 602 struct nls_table *nls = EXFAT_SB(sb)->nls_io; 603 603 ··· 611 611 exfat_wstrchr(bad_uni_chars, *uniname)) 612 612 lossy |= NLS_NAME_LOSSY; 613 613 614 - upname[unilen] = exfat_toupper(sb, *uniname); 614 + upname[unilen] = cpu_to_le16(exfat_toupper(sb, *uniname)); 615 615 uniname++; 616 616 unilen++; 617 617 }
+36 -25
fs/io_uring.c
··· 605 605 606 606 struct async_poll { 607 607 struct io_poll_iocb poll; 608 + struct io_poll_iocb *double_poll; 608 609 struct io_wq_work work; 609 610 }; 610 611 ··· 4160 4159 return false; 4161 4160 } 4162 4161 4163 - static void io_poll_remove_double(struct io_kiocb *req) 4162 + static void io_poll_remove_double(struct io_kiocb *req, void *data) 4164 4163 { 4165 - struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io; 4164 + struct io_poll_iocb *poll = data; 4166 4165 4167 4166 lockdep_assert_held(&req->ctx->completion_lock); 4168 4167 ··· 4182 4181 { 4183 4182 struct io_ring_ctx *ctx = req->ctx; 4184 4183 4185 - io_poll_remove_double(req); 4184 + io_poll_remove_double(req, req->io); 4186 4185 req->poll.done = true; 4187 4186 io_cqring_fill_event(req, error ? error : mangle_poll(mask)); 4188 4187 io_commit_cqring(ctx); ··· 4225 4224 int sync, void *key) 4226 4225 { 4227 4226 struct io_kiocb *req = wait->private; 4228 - struct io_poll_iocb *poll = (struct io_poll_iocb *) req->io; 4227 + struct io_poll_iocb *poll = req->apoll->double_poll; 4229 4228 __poll_t mask = key_to_poll(key); 4230 4229 4231 4230 /* for instances that support it check for an event match first: */ 4232 4231 if (mask && !(mask & poll->events)) 4233 4232 return 0; 4234 4233 4235 - if (req->poll.head) { 4234 + if (poll && poll->head) { 4236 4235 bool done; 4237 4236 4238 - spin_lock(&req->poll.head->lock); 4239 - done = list_empty(&req->poll.wait.entry); 4237 + spin_lock(&poll->head->lock); 4238 + done = list_empty(&poll->wait.entry); 4240 4239 if (!done) 4241 - list_del_init(&req->poll.wait.entry); 4242 - spin_unlock(&req->poll.head->lock); 4240 + list_del_init(&poll->wait.entry); 4241 + spin_unlock(&poll->head->lock); 4243 4242 if (!done) 4244 4243 __io_async_wake(req, poll, mask, io_poll_task_func); 4245 4244 } ··· 4259 4258 } 4260 4259 4261 4260 static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt, 4262 - struct wait_queue_head *head) 4261 + struct 
wait_queue_head *head, 4262 + struct io_poll_iocb **poll_ptr) 4263 4263 { 4264 4264 struct io_kiocb *req = pt->req; 4265 4265 ··· 4271 4269 */ 4272 4270 if (unlikely(poll->head)) { 4273 4271 /* already have a 2nd entry, fail a third attempt */ 4274 - if (req->io) { 4272 + if (*poll_ptr) { 4275 4273 pt->error = -EINVAL; 4276 4274 return; 4277 4275 } ··· 4283 4281 io_init_poll_iocb(poll, req->poll.events, io_poll_double_wake); 4284 4282 refcount_inc(&req->refs); 4285 4283 poll->wait.private = req; 4286 - req->io = (void *) poll; 4284 + *poll_ptr = poll; 4287 4285 } 4288 4286 4289 4287 pt->error = 0; ··· 4295 4293 struct poll_table_struct *p) 4296 4294 { 4297 4295 struct io_poll_table *pt = container_of(p, struct io_poll_table, pt); 4296 + struct async_poll *apoll = pt->req->apoll; 4298 4297 4299 - __io_queue_proc(&pt->req->apoll->poll, pt, head); 4298 + __io_queue_proc(&apoll->poll, pt, head, &apoll->double_poll); 4300 4299 } 4301 4300 4302 4301 static void io_sq_thread_drop_mm(struct io_ring_ctx *ctx) ··· 4347 4344 } 4348 4345 } 4349 4346 4347 + io_poll_remove_double(req, apoll->double_poll); 4350 4348 spin_unlock_irq(&ctx->completion_lock); 4351 4349 4352 4350 /* restore ->work in case we need to retry again */ 4353 4351 if (req->flags & REQ_F_WORK_INITIALIZED) 4354 4352 memcpy(&req->work, &apoll->work, sizeof(req->work)); 4353 + kfree(apoll->double_poll); 4355 4354 kfree(apoll); 4356 4355 4357 4356 if (!canceled) { ··· 4441 4436 struct async_poll *apoll; 4442 4437 struct io_poll_table ipt; 4443 4438 __poll_t mask, ret; 4444 - bool had_io; 4445 4439 4446 4440 if (!req->file || !file_can_poll(req->file)) 4447 4441 return false; ··· 4452 4448 apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC); 4453 4449 if (unlikely(!apoll)) 4454 4450 return false; 4451 + apoll->double_poll = NULL; 4455 4452 4456 4453 req->flags |= REQ_F_POLLED; 4457 4454 if (req->flags & REQ_F_WORK_INITIALIZED) 4458 4455 memcpy(&apoll->work, &req->work, sizeof(req->work)); 4459 - had_io = req->io != NULL; 
4460 4456 4461 4457 io_get_req_task(req); 4462 4458 req->apoll = apoll; ··· 4474 4470 ret = __io_arm_poll_handler(req, &apoll->poll, &ipt, mask, 4475 4471 io_async_wake); 4476 4472 if (ret) { 4477 - ipt.error = 0; 4478 - /* only remove double add if we did it here */ 4479 - if (!had_io) 4480 - io_poll_remove_double(req); 4473 + io_poll_remove_double(req, apoll->double_poll); 4481 4474 spin_unlock_irq(&ctx->completion_lock); 4482 4475 if (req->flags & REQ_F_WORK_INITIALIZED) 4483 4476 memcpy(&req->work, &apoll->work, sizeof(req->work)); 4477 + kfree(apoll->double_poll); 4484 4478 kfree(apoll); 4485 4479 return false; 4486 4480 } ··· 4509 4507 bool do_complete; 4510 4508 4511 4509 if (req->opcode == IORING_OP_POLL_ADD) { 4512 - io_poll_remove_double(req); 4510 + io_poll_remove_double(req, req->io); 4513 4511 do_complete = __io_poll_remove_one(req, &req->poll); 4514 4512 } else { 4515 4513 struct async_poll *apoll = req->apoll; 4514 + 4515 + io_poll_remove_double(req, apoll->double_poll); 4516 4516 4517 4517 /* non-poll requests have submit ref still */ 4518 4518 do_complete = __io_poll_remove_one(req, &apoll->poll); ··· 4528 4524 if (req->flags & REQ_F_WORK_INITIALIZED) 4529 4525 memcpy(&req->work, &apoll->work, 4530 4526 sizeof(req->work)); 4527 + kfree(apoll->double_poll); 4531 4528 kfree(apoll); 4532 4529 } 4533 4530 } ··· 4629 4624 { 4630 4625 struct io_poll_table *pt = container_of(p, struct io_poll_table, pt); 4631 4626 4632 - __io_queue_proc(&pt->req->poll, pt, head); 4627 + __io_queue_proc(&pt->req->poll, pt, head, (struct io_poll_iocb **) &pt->req->io); 4633 4628 } 4634 4629 4635 4630 static int io_poll_add_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) ··· 4737 4732 { 4738 4733 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 4739 4734 return -EINVAL; 4740 - if (sqe->flags || sqe->ioprio || sqe->buf_index || sqe->len) 4735 + if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT))) 4736 + return -EINVAL; 4737 + if (sqe->ioprio || sqe->buf_index || sqe->len) 4741 4738 return -EINVAL; 4742 4739 4743 4740 req->timeout.addr = READ_ONCE(sqe->addr); ··· 4917 4910 { 4918 4911 if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL)) 4919 4912 return -EINVAL; 4920 - if (sqe->flags || sqe->ioprio || sqe->off || sqe->len || 4921 - sqe->cancel_flags) 4913 + if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT))) 4914 + return -EINVAL; 4915 + if (sqe->ioprio || sqe->off || sqe->len || sqe->cancel_flags) 4922 4916 return -EINVAL; 4923 4917 4924 4918 req->cancel.addr = READ_ONCE(sqe->addr); ··· 4937 4929 static int io_files_update_prep(struct io_kiocb *req, 4938 4930 const struct io_uring_sqe *sqe) 4939 4931 { 4940 - if (sqe->flags || sqe->ioprio || sqe->rw_flags) 4932 + if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT))) 4933 + return -EINVAL; 4934 + if (sqe->ioprio || sqe->rw_flags) 4941 4935 return -EINVAL; 4942 4936 4943 4937 req->files_update.offset = READ_ONCE(sqe->off); ··· 5730 5720 * Never try inline submit of IOSQE_ASYNC is set, go straight 5731 5721 * to async execution. 5732 5722 */ 5723 + io_req_init_async(req); 5733 5724 req->work.flags |= IO_WQ_WORK_CONCURRENT; 5734 5725 io_queue_async_work(req); 5735 5726 } else {
+19 -1
fs/nfsd/nfs4state.c
··· 507 507 return ret; 508 508 } 509 509 510 + static struct nfsd_file *find_deleg_file(struct nfs4_file *f) 511 + { 512 + struct nfsd_file *ret = NULL; 513 + 514 + spin_lock(&f->fi_lock); 515 + if (f->fi_deleg_file) 516 + ret = nfsd_file_get(f->fi_deleg_file); 517 + spin_unlock(&f->fi_lock); 518 + return ret; 519 + } 520 + 510 521 static atomic_long_t num_delegations; 511 522 unsigned long max_delegations; 512 523 ··· 2455 2444 oo = ols->st_stateowner; 2456 2445 nf = st->sc_file; 2457 2446 file = find_any_file(nf); 2447 + if (!file) 2448 + return 0; 2458 2449 2459 2450 seq_printf(s, "- "); 2460 2451 nfs4_show_stateid(s, &st->sc_stateid); ··· 2494 2481 oo = ols->st_stateowner; 2495 2482 nf = st->sc_file; 2496 2483 file = find_any_file(nf); 2484 + if (!file) 2485 + return 0; 2497 2486 2498 2487 seq_printf(s, "- "); 2499 2488 nfs4_show_stateid(s, &st->sc_stateid); ··· 2528 2513 2529 2514 ds = delegstateid(st); 2530 2515 nf = st->sc_file; 2531 - file = nf->fi_deleg_file; 2516 + file = find_deleg_file(nf); 2517 + if (!file) 2518 + return 0; 2532 2519 2533 2520 seq_printf(s, "- "); 2534 2521 nfs4_show_stateid(s, &st->sc_stateid); ··· 2546 2529 seq_printf(s, ", "); 2547 2530 nfs4_show_fname(s, file); 2548 2531 seq_printf(s, " }\n"); 2532 + nfsd_file_put(file); 2549 2533 2550 2534 return 0; 2551 2535 }
+1 -1
fs/squashfs/block.c
··· 175 175 /* Extract the length of the metadata block */ 176 176 data = page_address(bvec->bv_page) + bvec->bv_offset; 177 177 length = data[offset]; 178 - if (offset <= bvec->bv_len - 1) { 178 + if (offset < bvec->bv_len - 1) { 179 179 length |= data[offset + 1] << 8; 180 180 } else { 181 181 if (WARN_ON_ONCE(!bio_next_segment(bio, &iter_all))) {
+11 -7
fs/zonefs/super.c
··· 607 607 int nr_pages; 608 608 ssize_t ret; 609 609 610 - nr_pages = iov_iter_npages(from, BIO_MAX_PAGES); 611 - if (!nr_pages) 612 - return 0; 613 - 614 610 max = queue_max_zone_append_sectors(bdev_get_queue(bdev)); 615 611 max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize); 616 612 iov_iter_truncate(from, max); 613 + 614 + nr_pages = iov_iter_npages(from, BIO_MAX_PAGES); 615 + if (!nr_pages) 616 + return 0; 617 617 618 618 bio = bio_alloc_bioset(GFP_NOFS, nr_pages, &fs_bio_set); 619 619 if (!bio) ··· 1119 1119 char *file_name; 1120 1120 struct dentry *dir; 1121 1121 unsigned int n = 0; 1122 - int ret = -ENOMEM; 1122 + int ret; 1123 1123 1124 1124 /* If the group is empty, there is nothing to do */ 1125 1125 if (!zd->nr_zones[type]) ··· 1135 1135 zgroup_name = "seq"; 1136 1136 1137 1137 dir = zonefs_create_inode(sb->s_root, zgroup_name, NULL, type); 1138 - if (!dir) 1138 + if (!dir) { 1139 + ret = -ENOMEM; 1139 1140 goto free; 1141 + } 1140 1142 1141 1143 /* 1142 1144 * The first zone contains the super block: skip it. ··· 1176 1174 * Use the file number within its group as file name. 1177 1175 */ 1178 1176 snprintf(file_name, ZONEFS_NAME_MAX - 1, "%u", n); 1179 - if (!zonefs_create_inode(dir, file_name, zone, type)) 1177 + if (!zonefs_create_inode(dir, file_name, zone, type)) { 1178 + ret = -ENOMEM; 1180 1179 goto free; 1180 + } 1181 1181 1182 1182 n++; 1183 1183 }
+4 -1
include/asm-generic/vmlinux.lds.h
··· 341 341 342 342 #define PAGE_ALIGNED_DATA(page_align) \ 343 343 . = ALIGN(page_align); \ 344 - *(.data..page_aligned) 344 + *(.data..page_aligned) \ 345 + . = ALIGN(page_align); 345 346 346 347 #define READ_MOSTLY_DATA(align) \ 347 348 . = ALIGN(align); \ ··· 738 737 . = ALIGN(bss_align); \ 739 738 .bss : AT(ADDR(.bss) - LOAD_OFFSET) { \ 740 739 BSS_FIRST_SECTIONS \ 740 + . = ALIGN(PAGE_SIZE); \ 741 741 *(.bss..page_aligned) \ 742 + . = ALIGN(PAGE_SIZE); \ 742 743 *(.dynbss) \ 743 744 *(BSS_MAIN) \ 744 745 *(COMMON) \
+1
include/linux/device-mapper.h
··· 426 426 int dm_copy_name_and_uuid(struct mapped_device *md, char *name, char *uuid); 427 427 struct gendisk *dm_disk(struct mapped_device *md); 428 428 int dm_suspended(struct dm_target *ti); 429 + int dm_post_suspending(struct dm_target *ti); 429 430 int dm_noflush_suspending(struct dm_target *ti); 430 431 void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors); 431 432 union map_info *dm_get_rq_mapinfo(struct request *rq);
+1
include/linux/efi.h
··· 994 994 int efivars_unregister(struct efivars *efivars); 995 995 struct kobject *efivars_kobject(void); 996 996 997 + int efivar_supports_writes(void); 997 998 int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *), 998 999 void *data, bool duplicates, struct list_head *head); 999 1000
+1 -1
include/linux/i2c.h
··· 56 56 * on a bus (or read from them). Apart from two basic transfer functions to 57 57 * transmit one message at a time, a more complex version can be used to 58 58 * transmit an arbitrary number of messages without interruption. 59 - * @count must be be less than 64k since msg.len is u16. 59 + * @count must be less than 64k since msg.len is u16. 60 60 */ 61 61 int i2c_transfer_buffer_flags(const struct i2c_client *client, 62 62 char *buf, int count, u16 flags);
+4 -1
include/linux/io-mapping.h
··· 107 107 resource_size_t base, 108 108 unsigned long size) 109 109 { 110 + iomap->iomem = ioremap_wc(base, size); 111 + if (!iomap->iomem) 112 + return NULL; 113 + 110 114 iomap->base = base; 111 115 iomap->size = size; 112 - iomap->iomem = ioremap_wc(base, size); 113 116 #if defined(pgprot_noncached_wc) /* archs can't agree on a name ... */ 114 117 iomap->prot = pgprot_noncached_wc(PAGE_KERNEL); 115 118 #elif defined(pgprot_writecombine)
+1 -1
include/linux/rhashtable.h
··· 33 33 * of two or more hash tables when the rhashtable is being resized. 34 34 * The end of the chain is marked with a special nulls marks which has 35 35 * the least significant bit set but otherwise stores the address of 36 - * the hash bucket. This allows us to be be sure we've found the end 36 + * the hash bucket. This allows us to be sure we've found the end 37 37 * of the right list. 38 38 * The value stored in the hash bucket has BIT(0) used as a lock bit. 39 39 * This bit must be atomically set before any changes are made to
+4 -2
include/linux/tcp.h
··· 220 220 } rack; 221 221 u16 advmss; /* Advertised MSS */ 222 222 u8 compressed_ack; 223 - u8 dup_ack_counter; 223 + u8 dup_ack_counter:2, 224 + tlp_retrans:1, /* TLP is a retransmission */ 225 + unused:5; 224 226 u32 chrono_start; /* Start time in jiffies of a TCP chrono */ 225 227 u32 chrono_stat[3]; /* Time in jiffies for chrono_stat stats */ 226 228 u8 chrono_type:2, /* current chronograph type */ ··· 245 243 save_syn:1, /* Save headers of SYN packet */ 246 244 is_cwnd_limited:1,/* forward progress limited by snd_cwnd? */ 247 245 syn_smc:1; /* SYN includes SMC */ 248 - u32 tlp_high_seq; /* snd_nxt at the time of TLP retransmit. */ 246 + u32 tlp_high_seq; /* snd_nxt at the time of TLP */ 249 247 250 248 u32 tcp_tx_delay; /* delay (in usec) added to TX packets */ 251 249 u64 tcp_wstamp_ns; /* departure time for next sent data packet */
+2 -1
include/linux/xattr.h
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/types.h> 17 17 #include <linux/spinlock.h> 18 + #include <linux/mm.h> 18 19 #include <uapi/linux/xattr.h> 19 20 20 21 struct inode; ··· 95 94 96 95 list_for_each_entry_safe(xattr, node, &xattrs->head, list) { 97 96 kfree(xattr->name); 98 - kfree(xattr); 97 + kvfree(xattr); 99 98 } 100 99 } 101 100
-1
include/net/flow_offload.h
··· 5 5 #include <linux/list.h> 6 6 #include <linux/netlink.h> 7 7 #include <net/flow_dissector.h> 8 - #include <linux/rhashtable.h> 9 8 10 9 struct flow_match { 11 10 struct flow_dissector *dissector;
+1
include/sound/rt5670.h
··· 12 12 int jd_mode; 13 13 bool in2_diff; 14 14 bool dev_gpio; 15 + bool gpio1_is_ext_spk_en; 15 16 16 17 bool dmic_en; 17 18 unsigned int dmic1_data_pin;
+1
include/sound/soc-dai.h
··· 161 161 int snd_soc_dai_compress_new(struct snd_soc_dai *dai, 162 162 struct snd_soc_pcm_runtime *rtd, int num); 163 163 bool snd_soc_dai_stream_valid(struct snd_soc_dai *dai, int stream); 164 + void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link); 164 165 void snd_soc_dai_action(struct snd_soc_dai *dai, 165 166 int stream, int action); 166 167 static inline void snd_soc_dai_activate(struct snd_soc_dai *dai,
+2
include/sound/soc.h
··· 444 444 const struct snd_soc_component_driver *component_driver, 445 445 struct snd_soc_dai_driver *dai_drv, int num_dai); 446 446 void snd_soc_unregister_component(struct device *dev); 447 + void snd_soc_unregister_component_by_driver(struct device *dev, 448 + const struct snd_soc_component_driver *component_driver); 447 449 struct snd_soc_component *snd_soc_lookup_component_nolocked(struct device *dev, 448 450 const char *driver_name); 449 451 struct snd_soc_component *snd_soc_lookup_component(struct device *dev,
+1 -1
kernel/events/uprobes.c
··· 2199 2199 if (!uprobe) { 2200 2200 if (is_swbp > 0) { 2201 2201 /* No matching uprobe; signal SIGTRAP. */ 2202 - send_sig(SIGTRAP, current, 0); 2202 + force_sig(SIGTRAP); 2203 2203 } else { 2204 2204 /* 2205 2205 * Either we raced with uprobe_unregister() or we can't
+15 -10
kernel/sched/core.c
··· 4119 4119 local_irq_disable(); 4120 4120 rcu_note_context_switch(preempt); 4121 4121 4122 - /* See deactivate_task() below. */ 4123 - prev_state = prev->state; 4124 - 4125 4122 /* 4126 4123 * Make sure that signal_pending_state()->signal_pending() below 4127 4124 * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE) ··· 4142 4145 update_rq_clock(rq); 4143 4146 4144 4147 switch_count = &prev->nivcsw; 4148 + 4145 4149 /* 4146 - * We must re-load prev->state in case ttwu_remote() changed it 4147 - * before we acquired rq->lock. 4150 + * We must load prev->state once (task_struct::state is volatile), such 4151 + * that: 4152 + * 4153 + * - we form a control dependency vs deactivate_task() below. 4154 + * - ptrace_{,un}freeze_traced() can change ->state underneath us. 4148 4155 */ 4149 - if (!preempt && prev_state && prev_state == prev->state) { 4156 + prev_state = prev->state; 4157 + if (!preempt && prev_state) { 4150 4158 if (signal_pending_state(prev_state, prev)) { 4151 4159 prev->state = TASK_RUNNING; 4152 4160 } else { ··· 4165 4163 4166 4164 /* 4167 4165 * __schedule() ttwu() 4168 - * prev_state = prev->state; if (READ_ONCE(p->on_rq) && ...) 4169 - * LOCK rq->lock goto out; 4170 - * smp_mb__after_spinlock(); smp_acquire__after_ctrl_dep(); 4171 - * p->on_rq = 0; p->state = TASK_WAKING; 4166 + * prev_state = prev->state; if (p->on_rq && ...) 4167 + * if (prev_state) goto out; 4168 + * p->on_rq = 0; smp_acquire__after_ctrl_dep(); 4169 + * p->state = TASK_WAKING 4170 + * 4171 + * Where __schedule() and ttwu() have matching control dependencies. 4172 4172 * 4173 4173 * After this, schedule() must not care about p->state any more. 4174 4174 */ ··· 4485 4481 int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags, 4486 4482 void *key) 4487 4483 { 4484 + WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~WF_SYNC); 4488 4485 return try_to_wake_up(curr->private, mode, wake_flags); 4489 4486 } 4490 4487 EXPORT_SYMBOL(default_wake_function);
+10 -5
mm/hugetlb.c
··· 45 45 unsigned int default_hstate_idx; 46 46 struct hstate hstates[HUGE_MAX_HSTATE]; 47 47 48 + #ifdef CONFIG_CMA 48 49 static struct cma *hugetlb_cma[MAX_NUMNODES]; 50 + #endif 51 + static unsigned long hugetlb_cma_size __initdata; 49 52 50 53 /* 51 54 * Minimum page order among possible hugepage sizes, set to a proper value ··· 1238 1235 * If the page isn't allocated using the cma allocator, 1239 1236 * cma_release() returns false. 1240 1237 */ 1241 - if (IS_ENABLED(CONFIG_CMA) && 1242 - cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) 1238 + #ifdef CONFIG_CMA 1239 + if (cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)) 1243 1240 return; 1241 + #endif 1244 1242 1245 1243 free_contig_range(page_to_pfn(page), 1 << order); 1246 1244 } ··· 1252 1248 { 1253 1249 unsigned long nr_pages = 1UL << huge_page_order(h); 1254 1250 1255 - if (IS_ENABLED(CONFIG_CMA)) { 1251 + #ifdef CONFIG_CMA 1252 + { 1256 1253 struct page *page; 1257 1254 int node; 1258 1255 ··· 1267 1262 return page; 1268 1263 } 1269 1264 } 1265 + #endif 1270 1266 1271 1267 return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask); 1272 1268 } ··· 2577 2571 2578 2572 for (i = 0; i < h->max_huge_pages; ++i) { 2579 2573 if (hstate_is_gigantic(h)) { 2580 - if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) { 2574 + if (hugetlb_cma_size) { 2581 2575 pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n"); 2582 2576 break; 2583 2577 } ··· 5660 5654 } 5661 5655 5662 5656 #ifdef CONFIG_CMA 5663 - static unsigned long hugetlb_cma_size __initdata; 5664 5657 static bool cma_reserve_called __initdata; 5665 5658 5666 5659 static int __init cmdline_parse_hugetlb_cma(char *p)
+3
mm/khugepaged.c
··· 958 958 return SCAN_ADDRESS_RANGE; 959 959 if (!hugepage_vma_check(vma, vma->vm_flags)) 960 960 return SCAN_VMA_CHECK; 961 + /* Anon VMA expected */ 962 + if (!vma->anon_vma || vma->vm_ops) 963 + return SCAN_VMA_CHECK; 961 964 return 0; 962 965 } 963 966
+10 -3
mm/memcontrol.c
··· 5669 5669 if (!mem_cgroup_is_root(mc.to)) 5670 5670 page_counter_uncharge(&mc.to->memory, mc.moved_swap); 5671 5671 5672 - mem_cgroup_id_get_many(mc.to, mc.moved_swap); 5673 5672 css_put_many(&mc.to->css, mc.moved_swap); 5674 5673 5675 5674 mc.moved_swap = 0; ··· 5859 5860 ent = target.ent; 5860 5861 if (!mem_cgroup_move_swap_account(ent, mc.from, mc.to)) { 5861 5862 mc.precharge--; 5862 - /* we fixup refcnts and charges later. */ 5863 + mem_cgroup_id_get_many(mc.to, 1); 5864 + /* we fixup other refcnts and charges later. */ 5863 5865 mc.moved_swap++; 5864 5866 } 5865 5867 break; ··· 7186 7186 { }, /* terminate */ 7187 7187 }; 7188 7188 7189 + /* 7190 + * If mem_cgroup_swap_init() is implemented as a subsys_initcall() 7191 + * instead of a core_initcall(), this could mean cgroup_memory_noswap still 7192 + * remains set to false even when memcg is disabled via "cgroup_disable=memory" 7193 + * boot parameter. This may result in premature OOPS inside 7194 + * mem_cgroup_get_nr_swap_pages() function in corner cases. 7195 + */ 7189 7196 static int __init mem_cgroup_swap_init(void) 7190 7197 { 7191 7198 /* No memory control -> no swap control */ ··· 7207 7200 7208 7201 return 0; 7209 7202 } 7210 - subsys_initcall(mem_cgroup_swap_init); 7203 + core_initcall(mem_cgroup_swap_init); 7211 7204 7212 7205 #endif /* CONFIG_MEMCG_SWAP */
+1 -1
mm/memory.c
··· 1601 1601 return insert_pages(vma, addr, pages, num, vma->vm_page_prot); 1602 1602 #else 1603 1603 unsigned long idx = 0, pgcount = *num; 1604 - int err; 1604 + int err = -EINVAL; 1605 1605 1606 1606 for (; idx < pgcount; ++idx) { 1607 1607 err = vm_insert_page(vma, addr + (PAGE_SIZE * idx), pages[idx]);
+14 -2
mm/mmap.c
··· 2620 2620 * Create a list of vma's touched by the unmap, removing them from the mm's 2621 2621 * vma list as we go.. 2622 2622 */ 2623 - static void 2623 + static bool 2624 2624 detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma, 2625 2625 struct vm_area_struct *prev, unsigned long end) 2626 2626 { ··· 2645 2645 2646 2646 /* Kill the cache */ 2647 2647 vmacache_invalidate(mm); 2648 + 2649 + /* 2650 + * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or 2651 + * VM_GROWSUP VMA. Such VMAs can change their size under 2652 + * down_read(mmap_lock) and collide with the VMA we are about to unmap. 2653 + */ 2654 + if (vma && (vma->vm_flags & VM_GROWSDOWN)) 2655 + return false; 2656 + if (prev && (prev->vm_flags & VM_GROWSUP)) 2657 + return false; 2658 + return true; 2648 2659 } 2649 2660 2650 2661 /* ··· 2836 2825 } 2837 2826 2838 2827 /* Detach vmas from rbtree */ 2839 - detach_vmas_to_be_unmapped(mm, vma, prev, end); 2828 + if (!detach_vmas_to_be_unmapped(mm, vma, prev, end)) 2829 + downgrade = false; 2840 2830 2841 2831 if (downgrade) 2842 2832 mmap_write_downgrade(mm);
+1 -1
mm/shmem.c
··· 3178 3178 new_xattr->name = kmalloc(XATTR_SECURITY_PREFIX_LEN + len, 3179 3179 GFP_KERNEL); 3180 3180 if (!new_xattr->name) { 3181 - kfree(new_xattr); 3181 + kvfree(new_xattr); 3182 3182 return -ENOMEM; 3183 3183 } 3184 3184
+28 -7
mm/slab_common.c
··· 326 326 if (s->refcount < 0) 327 327 return 1; 328 328 329 + #ifdef CONFIG_MEMCG_KMEM 330 + /* 331 + * Skip the dying kmem_cache. 332 + */ 333 + if (s->memcg_params.dying) 334 + return 1; 335 + #endif 336 + 329 337 return 0; 330 338 } 331 339 ··· 894 886 return 0; 895 887 } 896 888 897 - static void flush_memcg_workqueue(struct kmem_cache *s) 889 + static void memcg_set_kmem_cache_dying(struct kmem_cache *s) 898 890 { 899 891 spin_lock_irq(&memcg_kmem_wq_lock); 900 892 s->memcg_params.dying = true; 901 893 spin_unlock_irq(&memcg_kmem_wq_lock); 894 + } 902 895 896 + static void flush_memcg_workqueue(struct kmem_cache *s) 897 + { 903 898 /* 904 899 * SLAB and SLUB deactivate the kmem_caches through call_rcu. Make 905 900 * sure all registered rcu callbacks have been invoked. ··· 934 923 { 935 924 return 0; 936 925 } 937 - 938 - static inline void flush_memcg_workqueue(struct kmem_cache *s) 939 - { 940 - } 941 926 #endif /* CONFIG_MEMCG_KMEM */ 942 927 943 928 void slab_kmem_cache_release(struct kmem_cache *s) ··· 951 944 if (unlikely(!s)) 952 945 return; 953 946 954 - flush_memcg_workqueue(s); 955 - 956 947 get_online_cpus(); 957 948 get_online_mems(); 958 949 ··· 959 954 s->refcount--; 960 955 if (s->refcount) 961 956 goto out_unlock; 957 + 958 + #ifdef CONFIG_MEMCG_KMEM 959 + memcg_set_kmem_cache_dying(s); 960 + 961 + mutex_unlock(&slab_mutex); 962 + 963 + put_online_mems(); 964 + put_online_cpus(); 965 + 966 + flush_memcg_workqueue(s); 967 + 968 + get_online_cpus(); 969 + get_online_mems(); 970 + 971 + mutex_lock(&slab_mutex); 972 + #endif 962 973 963 974 err = shutdown_memcg_caches(s); 964 975 if (!err)
+8 -2
net/ax25/af_ax25.c
··· 1187 1187 if (addr_len > sizeof(struct sockaddr_ax25) && 1188 1188 fsa->fsa_ax25.sax25_ndigis != 0) { 1189 1189 /* Valid number of digipeaters ? */ 1190 - if (fsa->fsa_ax25.sax25_ndigis < 1 || fsa->fsa_ax25.sax25_ndigis > AX25_MAX_DIGIS) { 1190 + if (fsa->fsa_ax25.sax25_ndigis < 1 || 1191 + fsa->fsa_ax25.sax25_ndigis > AX25_MAX_DIGIS || 1192 + addr_len < sizeof(struct sockaddr_ax25) + 1193 + sizeof(ax25_address) * fsa->fsa_ax25.sax25_ndigis) { 1191 1194 err = -EINVAL; 1192 1195 goto out_release; 1193 1196 } ··· 1510 1507 struct full_sockaddr_ax25 *fsa = (struct full_sockaddr_ax25 *)usax; 1511 1508 1512 1509 /* Valid number of digipeaters ? */ 1513 - if (usax->sax25_ndigis < 1 || usax->sax25_ndigis > AX25_MAX_DIGIS) { 1510 + if (usax->sax25_ndigis < 1 || 1511 + usax->sax25_ndigis > AX25_MAX_DIGIS || 1512 + addr_len < sizeof(struct sockaddr_ax25) + 1513 + sizeof(ax25_address) * usax->sax25_ndigis) { 1514 1514 err = -EINVAL; 1515 1515 goto out; 1516 1516 }
+1 -1
net/core/dev.c
··· 5601 5601 skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) { 5602 5602 if (skb->dev->reg_state == NETREG_UNREGISTERING) { 5603 5603 __skb_unlink(skb, &sd->input_pkt_queue); 5604 - kfree_skb(skb); 5604 + dev_kfree_skb_irq(skb); 5605 5605 input_queue_head_incr(sd); 5606 5606 } 5607 5607 }
+1
net/core/flow_offload.c
··· 4 4 #include <net/flow_offload.h> 5 5 #include <linux/rtnetlink.h> 6 6 #include <linux/mutex.h> 7 + #include <linux/rhashtable.h> 7 8 8 9 struct flow_rule *flow_rule_alloc(unsigned int num_actions) 9 10 {
+1 -1
net/core/net-sysfs.c
··· 1108 1108 trans_timeout = queue->trans_timeout; 1109 1109 spin_unlock_irq(&queue->_xmit_lock); 1110 1110 1111 - return sprintf(buf, "%lu", trans_timeout); 1111 + return sprintf(buf, fmt_ulong, trans_timeout); 1112 1112 } 1113 1113 1114 1114 static unsigned int get_netdev_queue_index(struct netdev_queue *queue)
+2 -1
net/core/rtnetlink.c
··· 3343 3343 */ 3344 3344 if (err < 0) { 3345 3345 /* If device is not registered at all, free it now */ 3346 - if (dev->reg_state == NETREG_UNINITIALIZED) 3346 + if (dev->reg_state == NETREG_UNINITIALIZED || 3347 + dev->reg_state == NETREG_UNREGISTERED) 3347 3348 free_netdev(dev); 3348 3349 goto out; 3349 3350 }
+1
net/core/sock_reuseport.c
··· 101 101 more_reuse->prog = reuse->prog; 102 102 more_reuse->reuseport_id = reuse->reuseport_id; 103 103 more_reuse->bind_inany = reuse->bind_inany; 104 + more_reuse->has_conns = reuse->has_conns; 104 105 105 106 memcpy(more_reuse->socks, reuse->socks, 106 107 reuse->num_socks * sizeof(struct sock *));
+13 -5
net/hsr/hsr_forward.c
··· 120 120 return skb_clone(frame->skb_std, GFP_ATOMIC); 121 121 } 122 122 123 - static void hsr_fill_tag(struct sk_buff *skb, struct hsr_frame_info *frame, 124 - struct hsr_port *port, u8 proto_version) 123 + static struct sk_buff *hsr_fill_tag(struct sk_buff *skb, 124 + struct hsr_frame_info *frame, 125 + struct hsr_port *port, u8 proto_version) 125 126 { 126 127 struct hsr_ethhdr *hsr_ethhdr; 127 128 int lane_id; 128 129 int lsdu_size; 130 + 131 + /* pad to minimum packet size which is 60 + 6 (HSR tag) */ 132 + if (skb_put_padto(skb, ETH_ZLEN + HSR_HLEN)) 133 + return NULL; 129 134 130 135 if (port->type == HSR_PT_SLAVE_A) 131 136 lane_id = 0; ··· 149 144 hsr_ethhdr->hsr_tag.encap_proto = hsr_ethhdr->ethhdr.h_proto; 150 145 hsr_ethhdr->ethhdr.h_proto = htons(proto_version ? 151 146 ETH_P_HSR : ETH_P_PRP); 147 + 148 + return skb; 152 149 } 153 150 154 151 static struct sk_buff *create_tagged_skb(struct sk_buff *skb_o, ··· 179 172 memmove(dst, src, movelen); 180 173 skb_reset_mac_header(skb); 181 174 182 - hsr_fill_tag(skb, frame, port, port->hsr->prot_version); 183 - 184 - return skb; 175 + /* skb_put_padto free skb on error and hsr_fill_tag returns NULL in 176 + * that case 177 + */ 178 + return hsr_fill_tag(skb, frame, port, port->hsr->prot_version); 185 179 } 186 180 187 181 /* If the original frame was an HSR tagged frame, just clone it to be sent
+2 -1
net/hsr/hsr_framereg.c
··· 325 325 if (port->type != node_dst->addr_B_port) 326 326 return; 327 327 328 - ether_addr_copy(eth_hdr(skb)->h_dest, node_dst->macaddress_B); 328 + if (is_valid_ether_addr(node_dst->macaddress_B)) 329 + ether_addr_copy(eth_hdr(skb)->h_dest, node_dst->macaddress_B); 329 330 } 330 331 331 332 void hsr_register_frame_in(struct hsr_node *node, struct hsr_port *port,
+6 -5
net/ipv4/tcp_input.c
··· 3488 3488 } 3489 3489 } 3490 3490 3491 - /* This routine deals with acks during a TLP episode. 3492 - * We mark the end of a TLP episode on receiving TLP dupack or when 3493 - * ack is after tlp_high_seq. 3494 - * Ref: loss detection algorithm in draft-dukkipati-tcpm-tcp-loss-probe. 3491 + /* This routine deals with acks during a TLP episode and ends an episode by 3492 + * resetting tlp_high_seq. Ref: TLP algorithm in draft-ietf-tcpm-rack 3495 3493 */ 3496 3494 static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag) 3497 3495 { ··· 3498 3500 if (before(ack, tp->tlp_high_seq)) 3499 3501 return; 3500 3502 3501 - if (flag & FLAG_DSACKING_ACK) { 3503 + if (!tp->tlp_retrans) { 3504 + /* TLP of new data has been acknowledged */ 3505 + tp->tlp_high_seq = 0; 3506 + } else if (flag & FLAG_DSACKING_ACK) { 3502 3507 /* This DSACK means original and TLP probe arrived; no loss */ 3503 3508 tp->tlp_high_seq = 0; 3504 3509 } else if (after(ack, tp->tlp_high_seq)) {
+8 -5
net/ipv4/tcp_output.c
··· 2624 2624 int pcount; 2625 2625 int mss = tcp_current_mss(sk); 2626 2626 2627 + /* At most one outstanding TLP */ 2628 + if (tp->tlp_high_seq) 2629 + goto rearm_timer; 2630 + 2631 + tp->tlp_retrans = 0; 2627 2632 skb = tcp_send_head(sk); 2628 2633 if (skb && tcp_snd_wnd_test(tp, skb, mss)) { 2629 2634 pcount = tp->packets_out; ··· 2645 2640 inet_csk(sk)->icsk_pending = 0; 2646 2641 return; 2647 2642 } 2648 - 2649 - /* At most one outstanding TLP retransmission. */ 2650 - if (tp->tlp_high_seq) 2651 - goto rearm_timer; 2652 2643 2653 2644 if (skb_still_in_host_queue(sk, skb)) 2654 2645 goto rearm_timer; ··· 2667 2666 if (__tcp_retransmit_skb(sk, skb, 1)) 2668 2667 goto rearm_timer; 2669 2668 2669 + tp->tlp_retrans = 1; 2670 + 2671 + probe_sent: 2670 2672 /* Record snd_nxt for loss detection. */ 2671 2673 tp->tlp_high_seq = tp->snd_nxt; 2672 2674 2673 - probe_sent: 2674 2675 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPLOSSPROBES); 2675 2676 /* Reset s.t. tcp_rearm_rto will restart timer from now */ 2676 2677 inet_csk(sk)->icsk_pending = 0;
+10 -7
net/ipv4/udp.c
··· 416 416 struct udp_hslot *hslot2, 417 417 struct sk_buff *skb) 418 418 { 419 - struct sock *sk, *result; 419 + struct sock *sk, *result, *reuseport_result; 420 420 int score, badness; 421 421 u32 hash = 0; 422 422 ··· 426 426 score = compute_score(sk, net, saddr, sport, 427 427 daddr, hnum, dif, sdif); 428 428 if (score > badness) { 429 + reuseport_result = NULL; 430 + 429 431 if (sk->sk_reuseport && 430 432 sk->sk_state != TCP_ESTABLISHED) { 431 433 hash = udp_ehashfn(net, daddr, hnum, 432 434 saddr, sport); 433 - result = reuseport_select_sock(sk, hash, skb, 434 - sizeof(struct udphdr)); 435 - if (result && !reuseport_has_conns(sk, false)) 436 - return result; 435 + reuseport_result = reuseport_select_sock(sk, hash, skb, 436 + sizeof(struct udphdr)); 437 + if (reuseport_result && !reuseport_has_conns(sk, false)) 438 + return reuseport_result; 437 439 } 440 + 441 + result = reuseport_result ? : sk; 438 442 badness = score; 439 - result = sk; 440 443 } 441 444 } 442 445 return result; ··· 2054 2051 /* 2055 2052 * UDP-Lite specific tests, ignored on UDP sockets 2056 2053 */ 2057 - if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 2054 + if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 2058 2055 2059 2056 /* 2060 2057 * MIB statistics other than incrementing the error count are
+6 -5
net/ipv6/ip6_gre.c
··· 1562 1562 static int __net_init ip6gre_init_net(struct net *net) 1563 1563 { 1564 1564 struct ip6gre_net *ign = net_generic(net, ip6gre_net_id); 1565 + struct net_device *ndev; 1565 1566 int err; 1566 1567 1567 1568 if (!net_has_fallback_tunnels(net)) 1568 1569 return 0; 1569 - ign->fb_tunnel_dev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0", 1570 - NET_NAME_UNKNOWN, 1571 - ip6gre_tunnel_setup); 1572 - if (!ign->fb_tunnel_dev) { 1570 + ndev = alloc_netdev(sizeof(struct ip6_tnl), "ip6gre0", 1571 + NET_NAME_UNKNOWN, ip6gre_tunnel_setup); 1572 + if (!ndev) { 1573 1573 err = -ENOMEM; 1574 1574 goto err_alloc_dev; 1575 1575 } 1576 + ign->fb_tunnel_dev = ndev; 1576 1577 dev_net_set(ign->fb_tunnel_dev, net); 1577 1578 /* FB netdevice is special: we have one, and only one per netns. 1578 1579 * Allowing to move it to another netns is clearly unsafe. ··· 1593 1592 return 0; 1594 1593 1595 1594 err_reg_dev: 1596 - free_netdev(ign->fb_tunnel_dev); 1595 + free_netdev(ndev); 1597 1596 err_alloc_dev: 1598 1597 return err; 1599 1598 }
+10 -7
net/ipv6/udp.c
··· 148 148 int dif, int sdif, struct udp_hslot *hslot2, 149 149 struct sk_buff *skb) 150 150 { 151 - struct sock *sk, *result; 151 + struct sock *sk, *result, *reuseport_result; 152 152 int score, badness; 153 153 u32 hash = 0; 154 154 ··· 158 158 score = compute_score(sk, net, saddr, sport, 159 159 daddr, hnum, dif, sdif); 160 160 if (score > badness) { 161 + reuseport_result = NULL; 162 + 161 163 if (sk->sk_reuseport && 162 164 sk->sk_state != TCP_ESTABLISHED) { 163 165 hash = udp6_ehashfn(net, daddr, hnum, 164 166 saddr, sport); 165 167 166 - result = reuseport_select_sock(sk, hash, skb, 167 - sizeof(struct udphdr)); 168 - if (result && !reuseport_has_conns(sk, false)) 169 - return result; 168 + reuseport_result = reuseport_select_sock(sk, hash, skb, 169 + sizeof(struct udphdr)); 170 + if (reuseport_result && !reuseport_has_conns(sk, false)) 171 + return reuseport_result; 170 172 } 171 - result = sk; 173 + 174 + result = reuseport_result ? : sk; 172 175 badness = score; 173 176 } 174 177 } ··· 646 643 /* 647 644 * UDP-Lite specific tests, ignored on UDP sockets (see net/ipv4/udp.c). 648 645 */ 649 - if ((is_udplite & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 646 + if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) { 650 647 651 648 if (up->pcrlen == 0) { /* full coverage was set */ 652 649 net_dbg_ratelimited("UDPLITE6: partial coverage %d while full coverage %d requested\n",
+8 -4
net/netfilter/ipvs/ip_vs_sync.c
··· 1717 1717 { 1718 1718 struct ip_vs_sync_thread_data *tinfo = data; 1719 1719 struct netns_ipvs *ipvs = tinfo->ipvs; 1720 + struct sock *sk = tinfo->sock->sk; 1721 + struct udp_sock *up = udp_sk(sk); 1720 1722 int len; 1721 1723 1722 1724 pr_info("sync thread started: state = BACKUP, mcast_ifn = %s, " ··· 1726 1724 ipvs->bcfg.mcast_ifn, ipvs->bcfg.syncid, tinfo->id); 1727 1725 1728 1726 while (!kthread_should_stop()) { 1729 - wait_event_interruptible(*sk_sleep(tinfo->sock->sk), 1730 - !skb_queue_empty(&tinfo->sock->sk->sk_receive_queue) 1731 - || kthread_should_stop()); 1727 + wait_event_interruptible(*sk_sleep(sk), 1728 + !skb_queue_empty_lockless(&sk->sk_receive_queue) || 1729 + !skb_queue_empty_lockless(&up->reader_queue) || 1730 + kthread_should_stop()); 1732 1731 1733 1732 /* do we have data now? */ 1734 - while (!skb_queue_empty(&(tinfo->sock->sk->sk_receive_queue))) { 1733 + while (!skb_queue_empty_lockless(&sk->sk_receive_queue) || 1734 + !skb_queue_empty_lockless(&up->reader_queue)) { 1735 1735 len = ip_vs_receive(tinfo->sock, tinfo->buf, 1736 1736 ipvs->bcfg.sync_maxlen); 1737 1737 if (len <= 0) {
+14 -27
net/netfilter/nf_tables_api.c
··· 188 188 nf_unregister_net_hook(net, &hook->ops); 189 189 } 190 190 191 - static int nft_register_basechain_hooks(struct net *net, int family, 192 - struct nft_base_chain *basechain) 193 - { 194 - if (family == NFPROTO_NETDEV) 195 - return nft_netdev_register_hooks(net, &basechain->hook_list); 196 - 197 - return nf_register_net_hook(net, &basechain->ops); 198 - } 199 - 200 - static void nft_unregister_basechain_hooks(struct net *net, int family, 201 - struct nft_base_chain *basechain) 202 - { 203 - if (family == NFPROTO_NETDEV) 204 - nft_netdev_unregister_hooks(net, &basechain->hook_list); 205 - else 206 - nf_unregister_net_hook(net, &basechain->ops); 207 - } 208 - 209 191 static int nf_tables_register_hook(struct net *net, 210 192 const struct nft_table *table, 211 193 struct nft_chain *chain)
··· 205 223 if (basechain->type->ops_register) 206 224 return basechain->type->ops_register(net, ops); 207 225 208 + if (table->family == NFPROTO_NETDEV) 227 + return nft_netdev_register_hooks(net, &basechain->hook_list); 228 + 229 + return nf_register_net_hook(net, &basechain->ops); 209 230 } 210 231 211 232 static void nf_tables_unregister_hook(struct net *net,
··· 227 242 if (basechain->type->ops_unregister) 228 243 return basechain->type->ops_unregister(net, ops); 229 244 230 + if (table->family == NFPROTO_NETDEV) 246 + nft_netdev_unregister_hooks(net, &basechain->hook_list); 247 + else 248 + nf_unregister_net_hook(net, &basechain->ops); 231 249 } 232 250 233 251 static int nft_trans_table_add(struct nft_ctx *ctx, int msg_type)
··· 820 832 if (cnt && i++ == cnt) 821 833 break; 822 834 823 - nft_unregister_basechain_hooks(net, table->family, 824 - nft_base_chain(chain)); 835 + nf_tables_unregister_hook(net, table, chain); 825 836 } 826 837 }
··· 835 848 if (!nft_is_base_chain(chain)) 836 849 continue; 837 850 838 - err = nft_register_basechain_hooks(net, table->family, 839 - nft_base_chain(chain)); 851 + err = nf_tables_register_hook(net, table, chain); 840 852 if (err < 0) 841 853 goto err_register_hooks; 842 854
··· 880 894 nft_trans_table_enable(trans) = false; 881 895 } else if (!(flags & NFT_TABLE_F_DORMANT) && 882 896 ctx->table->flags & NFT_TABLE_F_DORMANT) { 897 + ctx->table->flags &= ~NFT_TABLE_F_DORMANT; 883 898 ret = nf_tables_table_enable(ctx->net, ctx->table); 884 - if (ret >= 0) { 885 - ctx->table->flags &= ~NFT_TABLE_F_DORMANT; 899 + if (ret >= 0) 886 900 nft_trans_table_enable(trans) = true; 887 - } 901 + else 902 + ctx->table->flags |= NFT_TABLE_F_DORMANT; 888 903 if (ret < 0) 889 904 goto err;
+4 -1
net/nfc/nci/core.c
··· 1228 1228 1229 1229 rc = nfc_register_device(ndev->nfc_dev); 1230 1230 if (rc) 1231 - goto destroy_rx_wq_exit; 1231 + goto destroy_tx_wq_exit; 1232 1232 1233 1233 goto exit; 1234 + 1235 + destroy_tx_wq_exit: 1236 + destroy_workqueue(ndev->tx_wq); 1234 1237 1235 1238 destroy_rx_wq_exit: 1236 1239 destroy_workqueue(ndev->rx_wq);
+1
net/qrtr/qrtr.c
··· 1180 1180 sk->sk_state_change(sk); 1181 1181 1182 1182 sock_set_flag(sk, SOCK_DEAD); 1183 + sock_orphan(sk); 1183 1184 sock->sk = NULL; 1184 1185 1185 1186 if (!sock_flag(sk, SOCK_ZAPPED))
+1 -1
net/rxrpc/recvmsg.c
··· 543 543 list_empty(&rx->recvmsg_q) && 544 544 rx->sk.sk_state != RXRPC_SERVER_LISTENING) { 545 545 release_sock(&rx->sk); 546 - return -ENODATA; 546 + return -EAGAIN; 547 547 } 548 548 549 549 if (list_empty(&rx->recvmsg_q)) {
+1 -1
net/rxrpc/sendmsg.c
··· 304 304 /* this should be in poll */ 305 305 sk_clear_bit(SOCKWQ_ASYNC_NOSPACE, sk); 306 306 307 - if (sk->sk_err || (sk->sk_shutdown & SEND_SHUTDOWN)) 307 + if (sk->sk_shutdown & SEND_SHUTDOWN) 308 308 return -EPIPE; 309 309 310 310 more = msg->msg_flags & MSG_MORE;
+14 -2
net/sched/act_ct.c
··· 673 673 } 674 674 675 675 static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb, 676 - u8 family, u16 zone) 676 + u8 family, u16 zone, bool *defrag) 677 677 { 678 678 enum ip_conntrack_info ctinfo; 679 + struct qdisc_skb_cb cb; 679 680 struct nf_conn *ct; 680 681 int err = 0; 681 682 bool frag; ··· 694 693 return err; 695 694 696 695 skb_get(skb); 696 + cb = *qdisc_skb_cb(skb); 697 697 698 698 if (family == NFPROTO_IPV4) { 699 699 enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; ··· 705 703 local_bh_enable(); 706 704 if (err && err != -EINPROGRESS) 707 705 goto out_free; 706 + 707 + if (!err) 708 + *defrag = true; 708 709 } else { /* NFPROTO_IPV6 */ 709 710 #if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) 710 711 enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone; ··· 716 711 err = nf_ct_frag6_gather(net, skb, user); 717 712 if (err && err != -EINPROGRESS) 718 713 goto out_free; 714 + 715 + if (!err) 716 + *defrag = true; 719 717 #else 720 718 err = -EOPNOTSUPP; 721 719 goto out_free; 722 720 #endif 723 721 } 724 722 723 + *qdisc_skb_cb(skb) = cb; 725 724 skb_clear_hash(skb); 726 725 skb->ignore_df = 1; 727 726 return err; ··· 923 914 int nh_ofs, err, retval; 924 915 struct tcf_ct_params *p; 925 916 bool skip_add = false; 917 + bool defrag = false; 926 918 struct nf_conn *ct; 927 919 u8 family; 928 920 ··· 956 946 */ 957 947 nh_ofs = skb_network_offset(skb); 958 948 skb_pull_rcsum(skb, nh_ofs); 959 - err = tcf_ct_handle_fragments(net, skb, family, p->zone); 949 + err = tcf_ct_handle_fragments(net, skb, family, p->zone, &defrag); 960 950 if (err == -EINPROGRESS) { 961 951 retval = TC_ACT_STOLEN; 962 952 goto out; ··· 1024 1014 1025 1015 out: 1026 1016 tcf_action_update_bstats(&c->common, skb); 1017 + if (defrag) 1018 + qdisc_skb_cb(skb)->pkt_len = skb->len; 1027 1019 return retval; 1028 1020 1029 1021 drop:
-1
net/sched/cls_api.c
··· 20 20 #include <linux/kmod.h> 21 21 #include <linux/slab.h> 22 22 #include <linux/idr.h> 23 - #include <linux/rhashtable.h> 24 23 #include <linux/jhash.h> 25 24 #include <linux/rculist.h> 26 25 #include <net/net_namespace.h>
+18 -9
net/sctp/stream.c
··· 22 22 #include <net/sctp/sm.h> 23 23 #include <net/sctp/stream_sched.h> 24 24 25 - /* Migrates chunks from stream queues to new stream queues if needed, 26 - * but not across associations. Also, removes those chunks to streams 27 - * higher than the new max. 28 - */ 29 - static void sctp_stream_outq_migrate(struct sctp_stream *stream, 30 - struct sctp_stream *new, __u16 outcnt) 25 + static void sctp_stream_shrink_out(struct sctp_stream *stream, __u16 outcnt) 31 26 { 32 27 struct sctp_association *asoc; 33 28 struct sctp_chunk *ch, *temp; 34 29 struct sctp_outq *outq; 35 - int i; 36 30 37 31 asoc = container_of(stream, struct sctp_association, stream); 38 32 outq = &asoc->outqueue; ··· 50 56 51 57 sctp_chunk_free(ch); 52 58 } 59 + } 60 + 61 + /* Migrates chunks from stream queues to new stream queues if needed, 62 + * but not across associations. Also, removes those chunks to streams 63 + * higher than the new max. 64 + */ 65 + static void sctp_stream_outq_migrate(struct sctp_stream *stream, 66 + struct sctp_stream *new, __u16 outcnt) 67 + { 68 + int i; 69 + 70 + if (stream->outcnt > outcnt) 71 + sctp_stream_shrink_out(stream, outcnt); 53 72 54 73 if (new) { 55 74 /* Here we actually move the old ext stuff into the new ··· 1044 1037 nums = ntohs(addstrm->number_of_streams); 1045 1038 number = stream->outcnt - nums; 1046 1039 1047 - if (result == SCTP_STRRESET_PERFORMED) 1040 + if (result == SCTP_STRRESET_PERFORMED) { 1048 1041 for (i = number; i < stream->outcnt; i++) 1049 1042 SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN; 1050 - else 1043 + } else { 1044 + sctp_stream_shrink_out(stream, number); 1051 1045 stream->outcnt = number; 1046 + } 1052 1047 1053 1048 *evp = sctp_ulpevent_make_stream_change_event(asoc, flags, 1054 1049 0, nums, GFP_ATOMIC);
+8 -4
net/smc/af_smc.c
··· 126 126 127 127 static void smc_restore_fallback_changes(struct smc_sock *smc) 128 128 { 129 - smc->clcsock->file->private_data = smc->sk.sk_socket; 130 - smc->clcsock->file = NULL; 129 + if (smc->clcsock->file) { /* non-accepted sockets have no file yet */ 130 + smc->clcsock->file->private_data = smc->sk.sk_socket; 131 + smc->clcsock->file = NULL; 132 + } 131 133 } 132 134 133 135 static int __smc_release(struct smc_sock *smc) ··· 354 352 */ 355 353 mutex_lock(&lgr->llc_conf_mutex); 356 354 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 357 - if (lgr->lnk[i].state != SMC_LNK_ACTIVE) 355 + if (!smc_link_active(&lgr->lnk[i])) 358 356 continue; 359 357 rc = smcr_link_reg_rmb(&lgr->lnk[i], rmb_desc); 360 358 if (rc) ··· 634 632 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 635 633 struct smc_link *l = &smc->conn.lgr->lnk[i]; 636 634 637 - if (l->peer_qpn == ntoh24(aclc->qpn)) { 635 + if (l->peer_qpn == ntoh24(aclc->qpn) && 636 + !memcmp(l->peer_gid, &aclc->lcl.gid, SMC_GID_SIZE) && 637 + !memcmp(l->peer_mac, &aclc->lcl.mac, sizeof(l->peer_mac))) { 638 638 link = l; 639 639 break; 640 640 }
+5 -1
net/smc/smc_cdc.c
··· 66 66 rc = smc_wr_tx_get_free_slot(link, smc_cdc_tx_handler, wr_buf, 67 67 wr_rdma_buf, 68 68 (struct smc_wr_tx_pend_priv **)pend); 69 - if (conn->killed) 69 + if (conn->killed) { 70 70 /* abnormal termination */ 71 + if (!rc) 72 + smc_wr_tx_put_slot(link, 73 + (struct smc_wr_tx_pend_priv *)pend); 71 74 rc = -EPIPE; 75 + } 72 76 return rc; 73 77 } 74 78
+24 -85
net/smc/smc_core.c
··· 45 45 static atomic_t lgr_cnt = ATOMIC_INIT(0); /* number of existing link groups */ 46 46 static DECLARE_WAIT_QUEUE_HEAD(lgrs_deleted); 47 47 48 - struct smc_ib_up_work { 49 - struct work_struct work; 50 - struct smc_link_group *lgr; 51 - struct smc_ib_device *smcibdev; 52 - u8 ibport; 53 - }; 54 - 55 48 static void smc_buf_free(struct smc_link_group *lgr, bool is_rmb, 56 49 struct smc_buf_desc *buf_desc); 57 50 static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft); 58 51 59 - static void smc_link_up_work(struct work_struct *work); 60 52 static void smc_link_down_work(struct work_struct *work); 61 53 62 54 /* return head of link group list and its lock for a given link group */
··· 318 326 319 327 get_device(&ini->ib_dev->ibdev->dev); 320 328 atomic_inc(&ini->ib_dev->lnk_cnt); 321 - lnk->state = SMC_LNK_ACTIVATING; 322 329 lnk->link_id = smcr_next_link_id(lgr); 323 330 lnk->lgr = lgr; 324 331 lnk->link_idx = link_idx;
··· 353 362 rc = smc_wr_create_link(lnk); 354 363 if (rc) 355 364 goto destroy_qp; 365 + lnk->state = SMC_LNK_ACTIVATING; 356 366 return 0; 357 367 358 368 destroy_qp:
··· 444 452 } 445 453 smc->conn.lgr = lgr; 446 454 spin_lock_bh(lgr_lock); 447 - list_add(&lgr->list, lgr_list); 455 + list_add_tail(&lgr->list, lgr_list); 448 456 spin_unlock_bh(lgr_lock); 449 457 return 0; 450 458
··· 542 550 smc_wr_wakeup_tx_wait(from_lnk); 543 551 544 552 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 545 - if (lgr->lnk[i].state != SMC_LNK_ACTIVE || 546 - i == from_lnk->link_idx) 553 + if (!smc_link_active(&lgr->lnk[i]) || i == from_lnk->link_idx) 547 554 continue; 548 555 if (is_dev_err && from_lnk->smcibdev == lgr->lnk[i].smcibdev && 549 556 from_lnk->ibport == lgr->lnk[i].ibport) {
··· 1097 1106 sock_put(&smc->sk); /* sock_hold done by schedulers of abort_work */ 1098 1107 } 1099 1108 1100 - /* link is up - establish alternate link if applicable */ 1101 - static void smcr_link_up(struct smc_link_group *lgr, 1102 - struct smc_ib_device *smcibdev, u8 ibport) 1103 - { 1104 - struct smc_link *link = NULL; 1105 - 1106 - if (list_empty(&lgr->list) || 1107 - lgr->type == SMC_LGR_SYMMETRIC || 1108 - lgr->type == SMC_LGR_ASYMMETRIC_PEER) 1109 - return; 1110 - 1111 - if (lgr->role == SMC_SERV) { 1112 - /* trigger local add link processing */ 1113 - link = smc_llc_usable_link(lgr); 1114 - if (!link) 1115 - return; 1116 - smc_llc_srv_add_link_local(link); 1117 - } else { 1118 - /* invite server to start add link processing */ 1119 - u8 gid[SMC_GID_SIZE]; 1120 - 1121 - if (smc_ib_determine_gid(smcibdev, ibport, lgr->vlan_id, gid, 1122 - NULL)) 1123 - return; 1124 - if (lgr->llc_flow_lcl.type != SMC_LLC_FLOW_NONE) { 1125 - /* some other llc task is ongoing */ 1126 - wait_event_timeout(lgr->llc_flow_waiter, 1127 - (list_empty(&lgr->list) || 1128 - lgr->llc_flow_lcl.type == SMC_LLC_FLOW_NONE), 1129 - SMC_LLC_WAIT_TIME); 1130 - } 1131 - /* lgr or device no longer active? */ 1132 - if (!list_empty(&lgr->list) && 1133 - smc_ib_port_active(smcibdev, ibport)) 1134 - link = smc_llc_usable_link(lgr); 1135 - if (link) 1136 - smc_llc_send_add_link(link, smcibdev->mac[ibport - 1], 1137 - gid, NULL, SMC_LLC_REQ); 1138 - wake_up(&lgr->llc_flow_waiter); /* wake up next waiter */ 1139 - } 1140 - } 1141 - 1142 1109 void smcr_port_add(struct smc_ib_device *smcibdev, u8 ibport) 1143 1110 { 1144 - struct smc_ib_up_work *ib_work; 1145 1111 struct smc_link_group *lgr, *n; 1146 1112 1147 1113 list_for_each_entry_safe(lgr, n, &smc_lgr_list.list, list) { 1114 + struct smc_link *link; 1115 + 1148 1116 if (strncmp(smcibdev->pnetid[ibport - 1], lgr->pnet_id, 1149 1117 SMC_MAX_PNETID_LEN) || 1150 1118 lgr->type == SMC_LGR_SYMMETRIC || 1151 1119 lgr->type == SMC_LGR_ASYMMETRIC_PEER) 1152 1120 continue; 1153 - ib_work = kmalloc(sizeof(*ib_work), GFP_KERNEL); 1154 - if (!ib_work) 1155 - continue; 1156 - INIT_WORK(&ib_work->work, smc_link_up_work); 1157 - ib_work->lgr = lgr; 1158 - ib_work->smcibdev = smcibdev; 1159 - ib_work->ibport = ibport; 1160 - schedule_work(&ib_work->work); 1121 + 1122 + /* trigger local add link processing */ 1123 + link = smc_llc_usable_link(lgr); 1124 + if (link) 1125 + smc_llc_add_link_local(link); 1161 1126 } 1162 1127 }
··· 1151 1204 SMC_LLC_WAIT_TIME); 1152 1205 mutex_lock(&lgr->llc_conf_mutex); 1153 1206 } 1154 - if (!list_empty(&lgr->list)) 1207 + if (!list_empty(&lgr->list)) { 1155 1208 smc_llc_send_delete_link(to_lnk, del_link_id, 1156 1209 SMC_LLC_REQ, true, 1157 1210 SMC_LLC_DEL_LOST_PATH); 1211 + smcr_link_clear(lnk, true); 1212 + } 1158 1213 wake_up(&lgr->llc_flow_waiter); /* wake up next waiter */ 1159 1214 } 1160 1215 }
··· 1194 1245 smcr_link_down_cond_sched(lnk); 1195 1246 } 1196 1247 } 1197 - } 1198 - 1199 - static void smc_link_up_work(struct work_struct *work) 1200 - { 1201 - struct smc_ib_up_work *ib_work = container_of(work, 1202 - struct smc_ib_up_work, 1203 - work); 1204 - struct smc_link_group *lgr = ib_work->lgr; 1205 - 1206 - if (list_empty(&lgr->list)) 1207 - goto out; 1208 - smcr_link_up(lgr, ib_work->smcibdev, ib_work->ibport); 1209 - out: 1210 - kfree(ib_work); 1211 1248 } 1212 1249 1213 1250 static void smc_link_down_work(struct work_struct *work)
··· 1268 1333 return false; 1269 1334 1270 1335 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1271 - if (lgr->lnk[i].state != SMC_LNK_ACTIVE) 1336 + if (!smc_link_active(&lgr->lnk[i])) 1272 1337 continue; 1273 1338 if ((lgr->role == SMC_SERV || lgr->lnk[i].peer_qpn == clcqpn) && 1274 1339 !memcmp(lgr->lnk[i].peer_gid, &lcl->gid, SMC_GID_SIZE) &&
··· 1311 1376 smcr_lgr_match(lgr, ini->ib_lcl, role, ini->ib_clcqpn)) && 1312 1377 !lgr->sync_err && 1313 1378 lgr->vlan_id == ini->vlan_id && 1314 - (role == SMC_CLNT || 1379 + (role == SMC_CLNT || ini->is_smcd || 1315 1380 lgr->conns_num < SMC_RMBS_PER_LGR_MAX)) { 1316 1381 /* link group found */ 1317 1382 ini->cln_first_contact = SMC_REUSE_CONTACT;
··· 1716 1781 1717 1782 void smc_sndbuf_sync_sg_for_cpu(struct smc_connection *conn) 1718 1783 { 1719 - if (!conn->lgr || conn->lgr->is_smcd || !smc_link_usable(conn->lnk)) 1784 + if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk)) 1720 1785 return; 1721 1786 smc_ib_sync_sg_for_cpu(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE); 1722 1787 } 1723 1788 1724 1789 void smc_sndbuf_sync_sg_for_device(struct smc_connection *conn) 1725 1790 { 1726 - if (!conn->lgr || conn->lgr->is_smcd || !smc_link_usable(conn->lnk)) 1791 + if (!conn->lgr || conn->lgr->is_smcd || !smc_link_active(conn->lnk)) 1727 1792 return; 1728 1793 smc_ib_sync_sg_for_device(conn->lnk, conn->sndbuf_desc, DMA_TO_DEVICE); 1729 1794 }
··· 1735 1800 if (!conn->lgr || conn->lgr->is_smcd) 1736 1801 return; 1737 1802 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1738 - if (!smc_link_usable(&conn->lgr->lnk[i])) 1803 + if (!smc_link_active(&conn->lgr->lnk[i])) 1739 1804 continue; 1740 1805 smc_ib_sync_sg_for_cpu(&conn->lgr->lnk[i], conn->rmb_desc, 1741 1806 DMA_FROM_DEVICE);
··· 1749 1814 if (!conn->lgr || conn->lgr->is_smcd) 1750 1815 return; 1751 1816 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 1752 - if (!smc_link_usable(&conn->lgr->lnk[i])) 1817 + if (!smc_link_active(&conn->lgr->lnk[i])) 1753 1818 continue; 1754 1819 smc_ib_sync_sg_for_device(&conn->lgr->lnk[i], conn->rmb_desc, 1755 1820 DMA_FROM_DEVICE);
··· 1772 1837 return rc; 1773 1838 /* create rmb */ 1774 1839 rc = __smc_buf_create(smc, is_smcd, true); 1775 - if (rc) 1840 + if (rc) { 1841 + mutex_lock(&smc->conn.lgr->sndbufs_lock); 1842 + list_del(&smc->conn.sndbuf_desc->list); 1843 + mutex_unlock(&smc->conn.lgr->sndbufs_lock); 1776 1844 smc_buf_free(smc->conn.lgr, false, smc->conn.sndbuf_desc); 1845 + } 1777 1846 return rc; 1778 1847 }
+5
net/smc/smc_core.h
··· 349 349 return true; 350 350 } 351 351 352 + static inline bool smc_link_active(struct smc_link *lnk) 353 + { 354 + return lnk->state == SMC_LNK_ACTIVE; 355 + } 356 + 352 357 struct smc_sock; 353 358 struct smc_clc_msg_accept_confirm; 354 359 struct smc_clc_msg_local;
+13 -3
net/smc/smc_ib.c
··· 506 506 int cqe_size_order, smc_order; 507 507 long rc; 508 508 509 + mutex_lock(&smcibdev->mutex); 510 + rc = 0; 511 + if (smcibdev->initialized) 512 + goto out; 509 513 /* the calculated number of cq entries fits to mlx5 cq allocation */ 510 514 cqe_size_order = cache_line_size() == 128 ? 7 : 6; 511 515 smc_order = MAX_ORDER - cqe_size_order - 1; ··· 521 517 rc = PTR_ERR_OR_ZERO(smcibdev->roce_cq_send); 522 518 if (IS_ERR(smcibdev->roce_cq_send)) { 523 519 smcibdev->roce_cq_send = NULL; 524 - return rc; 520 + goto out; 525 521 } 526 522 smcibdev->roce_cq_recv = ib_create_cq(smcibdev->ibdev, 527 523 smc_wr_rx_cq_handler, NULL, ··· 533 529 } 534 530 smc_wr_add_dev(smcibdev); 535 531 smcibdev->initialized = 1; 536 - return rc; 532 + goto out; 537 533 538 534 err: 539 535 ib_destroy_cq(smcibdev->roce_cq_send); 536 + out: 537 + mutex_unlock(&smcibdev->mutex); 540 538 return rc; 541 539 } 542 540 543 541 static void smc_ib_cleanup_per_ibdev(struct smc_ib_device *smcibdev) 544 542 { 543 + mutex_lock(&smcibdev->mutex); 545 544 if (!smcibdev->initialized) 546 - return; 545 + goto out; 547 546 smcibdev->initialized = 0; 548 547 ib_destroy_cq(smcibdev->roce_cq_recv); 549 548 ib_destroy_cq(smcibdev->roce_cq_send); 550 549 smc_wr_remove_dev(smcibdev); 550 + out: 551 + mutex_unlock(&smcibdev->mutex); 551 552 } 552 553 553 554 static struct ib_client smc_ib_client; ··· 575 566 INIT_WORK(&smcibdev->port_event_work, smc_ib_port_event_work); 576 567 atomic_set(&smcibdev->lnk_cnt, 0); 577 568 init_waitqueue_head(&smcibdev->lnks_deleted); 569 + mutex_init(&smcibdev->mutex); 578 570 mutex_lock(&smc_ib_devices.mutex); 579 571 list_add_tail(&smcibdev->list, &smc_ib_devices.list); 580 572 mutex_unlock(&smc_ib_devices.mutex);
+1
net/smc/smc_ib.h
··· 52 52 DECLARE_BITMAP(ports_going_away, SMC_MAX_PORTS); 53 53 atomic_t lnk_cnt; /* number of links on ibdev */ 54 54 wait_queue_head_t lnks_deleted; /* wait 4 removal of all links*/ 55 + struct mutex mutex; /* protect dev setup+cleanup */ 55 56 }; 56 57 57 58 struct smc_buf_desc;
+85 -42
net/smc/smc_llc.c
··· 428 428 rtok_ix = 1; 429 429 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 430 430 link = &send_link->lgr->lnk[i]; 431 - if (link->state == SMC_LNK_ACTIVE && link != send_link) { 431 + if (smc_link_active(link) && link != send_link) { 432 432 rkeyllc->rtoken[rtok_ix].link_id = link->link_id; 433 433 rkeyllc->rtoken[rtok_ix].rmb_key = 434 434 htonl(rmb_desc->mr_rx[link->link_idx]->rkey); ··· 895 895 return rc; 896 896 } 897 897 898 + /* as an SMC client, invite server to start the add_link processing */ 899 + static void smc_llc_cli_add_link_invite(struct smc_link *link, 900 + struct smc_llc_qentry *qentry) 901 + { 902 + struct smc_link_group *lgr = smc_get_lgr(link); 903 + struct smc_init_info ini; 904 + 905 + if (lgr->type == SMC_LGR_SYMMETRIC || 906 + lgr->type == SMC_LGR_ASYMMETRIC_PEER) 907 + goto out; 908 + 909 + ini.vlan_id = lgr->vlan_id; 910 + smc_pnet_find_alt_roce(lgr, &ini, link->smcibdev); 911 + if (!ini.ib_dev) 912 + goto out; 913 + 914 + smc_llc_send_add_link(link, ini.ib_dev->mac[ini.ib_port - 1], 915 + ini.ib_gid, NULL, SMC_LLC_REQ); 916 + out: 917 + kfree(qentry); 918 + } 919 + 920 + static bool smc_llc_is_local_add_link(union smc_llc_msg *llc) 921 + { 922 + if (llc->raw.hdr.common.type == SMC_LLC_ADD_LINK && 923 + !llc->add_link.qp_mtu && !llc->add_link.link_num) 924 + return true; 925 + return false; 926 + } 927 + 898 928 static void smc_llc_process_cli_add_link(struct smc_link_group *lgr) 899 929 { 900 930 struct smc_llc_qentry *qentry; ··· 932 902 qentry = smc_llc_flow_qentry_clr(&lgr->llc_flow_lcl); 933 903 934 904 mutex_lock(&lgr->llc_conf_mutex); 935 - smc_llc_cli_add_link(qentry->link, qentry); 905 + if (smc_llc_is_local_add_link(&qentry->msg)) 906 + smc_llc_cli_add_link_invite(qentry->link, qentry); 907 + else 908 + smc_llc_cli_add_link(qentry->link, qentry); 936 909 mutex_unlock(&lgr->llc_conf_mutex); 937 910 } 938 911 ··· 944 911 int i, link_count = 0; 945 912 946 913 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) { 947 - if (!smc_link_usable(&lgr->lnk[i])) 914 + if (!smc_link_active(&lgr->lnk[i])) 948 915 continue; 949 916 link_count++; 950 917 } ··· 1084 1051 if (rc) 1085 1052 return -ENOLINK; 1086 1053 /* receive CONFIRM LINK response over the RoCE fabric */ 1087 - qentry = smc_llc_wait(lgr, link, SMC_LLC_WAIT_FIRST_TIME, 1088 - SMC_LLC_CONFIRM_LINK); 1089 - if (!qentry) { 1054 + qentry = smc_llc_wait(lgr, link, SMC_LLC_WAIT_FIRST_TIME, 0); 1055 + if (!qentry || 1056 + qentry->msg.raw.hdr.common.type != SMC_LLC_CONFIRM_LINK) { 1090 1057 /* send DELETE LINK */ 1091 1058 smc_llc_send_delete_link(link, link_new->link_id, SMC_LLC_REQ, 1092 1059 false, SMC_LLC_DEL_LOST_PATH); 1060 + if (qentry) 1061 + smc_llc_flow_qentry_del(&lgr->llc_flow_lcl); 1093 1062 return -ENOLINK; 1094 1063 } 1095 1064 smc_llc_save_peer_uid(qentry); ··· 1193 1158 mutex_unlock(&lgr->llc_conf_mutex); 1194 1159 } 1195 1160 1196 - /* enqueue a local add_link req to trigger a new add_link flow, only as SERV */ 1197 - void smc_llc_srv_add_link_local(struct smc_link *link) 1161 + /* enqueue a local add_link req to trigger a new add_link flow */ 1162 + void smc_llc_add_link_local(struct smc_link *link) 1198 1163 { 1199 1164 struct smc_llc_msg_add_link add_llc = {0}; 1200 1165 1201 1166 add_llc.hd.length = sizeof(add_llc); 1202 1167 add_llc.hd.common.type = SMC_LLC_ADD_LINK; 1203 - /* no dev and port needed, we as server ignore client data anyway */ 1168 + /* no dev and port needed */ 1204 1169 smc_llc_enqueue(link, (union smc_llc_msg *)&add_llc); 1205 1170 } 1206 1171 ··· 1380 1345 1381 1346 if (lgr->type == SMC_LGR_SINGLE && !list_empty(&lgr->list)) { 1382 1347 /* trigger setup of asymm alt link */ 1383 - smc_llc_srv_add_link_local(lnk); 1348 + smc_llc_add_link_local(lnk); 1384 1349 } 1385 1350 out: 1386 1351 mutex_unlock(&lgr->llc_conf_mutex); ··· 1509 1474 if (list_empty(&lgr->list)) 1510 1475 goto out; /* lgr is terminating */ 1511 1476 if (lgr->role == SMC_CLNT) { 1512 - if (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_ADD_LINK) { 1477 + if (smc_llc_is_local_add_link(llc)) { 1478 + if (lgr->llc_flow_lcl.type == 1479 + SMC_LLC_FLOW_ADD_LINK) 1480 + break; /* add_link in progress */ 1481 + if (smc_llc_flow_start(&lgr->llc_flow_lcl, 1482 + qentry)) { 1483 + schedule_work(&lgr->llc_add_link_work); 1484 + } 1485 + return; 1486 + } 1487 + if (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_ADD_LINK && 1488 + !lgr->llc_flow_lcl.qentry) { 1513 1489 /* a flow is waiting for this message */ 1514 1490 smc_llc_flow_qentry_set(&lgr->llc_flow_lcl, 1515 1491 qentry); ··· 1544 1498 } 1545 1499 break; 1546 1500 case SMC_LLC_DELETE_LINK: 1547 - if (lgr->role == SMC_CLNT) { 1548 - /* server requests to delete this link, send response */ 1549 - if (lgr->llc_flow_lcl.type != SMC_LLC_FLOW_NONE) { 1550 - /* DEL LINK REQ during ADD LINK SEQ */ 1551 - smc_llc_flow_qentry_set(&lgr->llc_flow_lcl, 1552 - qentry); 1553 - wake_up(&lgr->llc_msg_waiter); 1554 - } else if (smc_llc_flow_start(&lgr->llc_flow_lcl, 1555 - qentry)) { 1556 - schedule_work(&lgr->llc_del_link_work); 1557 - } 1558 - } else { 1559 - if (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_ADD_LINK && 1560 - !lgr->llc_flow_lcl.qentry) { 1561 - /* DEL LINK REQ during ADD LINK SEQ */ 1562 - smc_llc_flow_qentry_set(&lgr->llc_flow_lcl, 1563 - qentry); 1564 - wake_up(&lgr->llc_msg_waiter); 1565 - } else if (smc_llc_flow_start(&lgr->llc_flow_lcl, 1566 - qentry)) { 1567 - schedule_work(&lgr->llc_del_link_work); 1568 - } 1501 + if (lgr->llc_flow_lcl.type == SMC_LLC_FLOW_ADD_LINK && 1502 + !lgr->llc_flow_lcl.qentry) { 1503 + /* DEL LINK REQ during ADD LINK SEQ */ 1504 + smc_llc_flow_qentry_set(&lgr->llc_flow_lcl, qentry); 1505 + wake_up(&lgr->llc_msg_waiter); 1506 + } else if (smc_llc_flow_start(&lgr->llc_flow_lcl, qentry)) { 1507 + schedule_work(&lgr->llc_del_link_work); 1508 + } 1569 1509 } 1570 1510 return; 1571 1511 case SMC_LLC_CONFIRM_RKEY: ··· 1616 1585 static void smc_llc_rx_response(struct smc_link *link, 1617 1586 struct smc_llc_qentry *qentry) 1618 1587 { 1588 + enum smc_llc_flowtype flowtype = link->lgr->llc_flow_lcl.type; 1589 + struct smc_llc_flow *flow = &link->lgr->llc_flow_lcl; 1619 1590 u8 llc_type = qentry->msg.raw.hdr.common.type; 1620 1591 1621 1592 switch (llc_type) { 1622 1593 case SMC_LLC_TEST_LINK: 1623 - if (link->state == SMC_LNK_ACTIVE) 1594 + if (smc_link_active(link)) 1624 1595 complete(&link->llc_testlink_resp); 1625 1596 break; 1626 1597 case SMC_LLC_ADD_LINK: 1627 - case SMC_LLC_DELETE_LINK: 1628 - case SMC_LLC_CONFIRM_LINK: 1629 1598 case SMC_LLC_ADD_LINK_CONT: 1599 + case SMC_LLC_CONFIRM_LINK: 1600 + if (flowtype != SMC_LLC_FLOW_ADD_LINK || flow->qentry) 1601 + break; /* drop out-of-flow response */ 1602 + goto assign; 1603 + case SMC_LLC_DELETE_LINK: 1604 + if (flowtype != SMC_LLC_FLOW_DEL_LINK || flow->qentry) 1605 + break; /* drop out-of-flow response */ 1606 + goto assign; 1630 1607 case SMC_LLC_CONFIRM_RKEY: 1631 1608 case SMC_LLC_DELETE_RKEY: 1632 - /* assign responses to the local flow, we requested them */ 1633 - smc_llc_flow_qentry_set(&link->lgr->llc_flow_lcl, qentry); 1634 - wake_up(&link->lgr->llc_msg_waiter); 1635 - return; 1609 + if (flowtype != SMC_LLC_FLOW_RKEY || flow->qentry) 1610 + break; /* drop out-of-flow response */ 1611 + goto assign; 1636 1612 case SMC_LLC_CONFIRM_RKEY_CONT: 1637 1613 /* not used because max links is 3 */ 1638 1614 break; ··· 1648 1610 break; 1649 1611 } 1650 1612 kfree(qentry); 1613 + return; 1614 + assign: 1615 + /* assign responses to the local flow, we requested them */ 1616 + smc_llc_flow_qentry_set(&link->lgr->llc_flow_lcl, qentry); 1617 + wake_up(&link->lgr->llc_msg_waiter); 1651 1618 } 1652 1619 1653 1620 static void smc_llc_enqueue(struct smc_link *link, union smc_llc_msg *llc) ··· 1706 1663 u8 user_data[16] = { 0 }; 1707 1664 int rc; 1708 1665 1709 - if (link->state != SMC_LNK_ACTIVE) 1666 + if (!smc_link_active(link)) 1710 1667 return; /* don't reschedule worker */ 1711 1668 expire_time = link->wr_rx_tstamp + link->llc_testlink_time; 1712 1669 if (time_is_after_jiffies(expire_time)) { ··· 1718 1675 /* receive TEST LINK response over RoCE fabric */ 1719 1676 rc = wait_for_completion_interruptible_timeout(&link->llc_testlink_resp, 1720 1677 SMC_LLC_WAIT_TIME); 1721 - if (link->state != SMC_LNK_ACTIVE) 1678 + if (!smc_link_active(link)) 1722 1679 return; /* link state changed */ 1723 1680 if (rc <= 0) { 1724 1681 smcr_link_down_cond_sched(link);
+1 -1
net/smc/smc_llc.h
··· 103 103 u32 rsn); 104 104 int smc_llc_cli_add_link(struct smc_link *link, struct smc_llc_qentry *qentry); 105 105 int smc_llc_srv_add_link(struct smc_link *link); 106 - void smc_llc_srv_add_link_local(struct smc_link *link); 106 + void smc_llc_add_link_local(struct smc_link *link); 107 107 int smc_llc_init(void) __init; 108 108 109 109 #endif /* SMC_LLC_H */
+1 -1
net/tipc/link.c
··· 827 827 state |= l->bc_rcvlink->rcv_unacked; 828 828 state |= l->rcv_unacked; 829 829 state |= !skb_queue_empty(&l->transmq); 830 - state |= !skb_queue_empty(&l->deferdq); 831 830 probe = mstate->probing; 832 831 probe |= l->silent_intv_cnt; 833 832 if (probe || mstate->monitoring) 834 833 l->silent_intv_cnt++; 834 + probe |= !skb_queue_empty(&l->deferdq); 835 835 if (l->snd_nxt == l->checkpoint) { 836 836 tipc_link_update_cwin(l, 0, 0); 837 837 probe = true;
+1 -1
net/vmw_vsock/virtio_transport.c
··· 22 22 #include <net/af_vsock.h> 23 23 24 24 static struct workqueue_struct *virtio_vsock_workqueue; 25 - static struct virtio_vsock *the_virtio_vsock; 25 + static struct virtio_vsock __rcu *the_virtio_vsock; 26 26 static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */ 27 27 28 28 struct virtio_vsock {
+2 -2
scripts/decode_stacktrace.sh
··· 87 87 return 88 88 fi 89 89 90 - # Strip out the base of the path 91 - code=${code#$basepath/} 90 + # Strip out the base of the path on each line 91 + code=$(while read -r line; do echo "${line#$basepath/}"; done <<< "$code") 92 92 93 93 # In the case of inlines, move everything to same line 94 94 code=${code//$'\n'/' '}
+1 -1
scripts/gdb/linux/symbols.py
··· 96 96 return "" 97 97 attrs = sect_attrs['attrs'] 98 98 section_name_to_address = { 99 - attrs[n]['name'].string(): attrs[n]['address'] 99 + attrs[n]['battr']['attr']['name'].string(): attrs[n]['address'] 100 100 for n in range(int(sect_attrs['nsections']))} 101 101 args = [] 102 102 for section_name in [".data", ".data..read_mostly", ".rodata", ".bss",
+10 -2
scripts/mod/modpost.c
··· 138 138 139 139 char *get_line(char **stringp) 140 140 { 141 + char *orig = *stringp, *next; 142 + 141 143 /* do not return the unwanted extra line at EOF */ 142 - if (*stringp && **stringp == '\0') 144 + if (!orig || *orig == '\0') 143 145 return NULL; 144 146 145 - return strsep(stringp, "\n"); 147 + next = strchr(orig, '\n'); 148 + if (next) 149 + *next++ = '\0'; 150 + 151 + *stringp = next; 152 + 153 + return orig; 146 154 } 147 155 148 156 /* A list of all modules we processed */
+3 -1
sound/core/info.c
··· 606 606 { 607 607 int c; 608 608 609 - if (snd_BUG_ON(!buffer || !buffer->buffer)) 609 + if (snd_BUG_ON(!buffer)) 610 + return 1; 611 + if (!buffer->buffer) 610 612 return 1; 611 613 if (len <= 0 || buffer->stop || buffer->error) 612 614 return 1;
+1
sound/pci/hda/patch_realtek.c
··· 7587 7587 SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7588 7588 SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7589 7589 SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8), 7590 + SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET), 7590 7591 SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC), 7591 7592 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC), 7592 7593 SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+1 -3
sound/soc/amd/raven/pci-acp3x.c
··· 232 232 } 233 233 pm_runtime_set_autosuspend_delay(&pci->dev, 2000); 234 234 pm_runtime_use_autosuspend(&pci->dev); 235 - pm_runtime_set_active(&pci->dev); 236 235 pm_runtime_put_noidle(&pci->dev); 237 - pm_runtime_enable(&pci->dev); 238 236 pm_runtime_allow(&pci->dev); 239 237 return 0; 240 238 ··· 301 303 ret = acp3x_deinit(adata->acp3x_base); 302 304 if (ret) 303 305 dev_err(&pci->dev, "ACP de-init failed\n"); 304 - pm_runtime_disable(&pci->dev); 306 + pm_runtime_forbid(&pci->dev); 305 307 pm_runtime_get_noresume(&pci->dev); 306 308 pci_disable_msi(pci); 307 309 pci_release_regions(pci);
-8
sound/soc/codecs/max98373.c
··· 779 779 regmap_write(max98373->regmap, 780 780 MAX98373_R202A_PCM_TO_SPK_MONO_MIX_2, 781 781 0x1); 782 - /* Set inital volume (0dB) */ 783 - regmap_write(max98373->regmap, 784 - MAX98373_R203D_AMP_DIG_VOL_CTRL, 785 - 0x00); 786 - regmap_write(max98373->regmap, 787 - MAX98373_R203E_AMP_PATH_GAIN, 788 - 0x00); 789 782 /* Enable DC blocker */ 790 783 regmap_write(max98373->regmap, 791 784 MAX98373_R203F_AMP_DSP_CFG, ··· 862 869 .num_dapm_widgets = ARRAY_SIZE(max98373_dapm_widgets), 863 870 .dapm_routes = max98373_audio_map, 864 871 .num_dapm_routes = ARRAY_SIZE(max98373_audio_map), 865 - .idle_bias_on = 1, 866 872 .use_pmdown_time = 1, 867 873 .endianness = 1, 868 874 .non_legacy_dai_naming = 1,
+4 -4
sound/soc/codecs/rt286.c
··· 272 272 regmap_read(rt286->regmap, RT286_GET_MIC1_SENSE, &buf); 273 273 *mic = buf & 0x80000000; 274 274 } 275 - if (!*mic) { 275 + 276 + if (!*hp) { 276 277 snd_soc_dapm_disable_pin(dapm, "HV"); 277 278 snd_soc_dapm_disable_pin(dapm, "VREF"); 278 - } 279 - if (!*hp) 280 279 snd_soc_dapm_disable_pin(dapm, "LDO1"); 281 - snd_soc_dapm_sync(dapm); 280 + snd_soc_dapm_sync(dapm); 281 + } 282 282 283 283 return 0; 284 284 }
+58 -17
sound/soc/codecs/rt5670.c
··· 31 31 #include "rt5670.h" 32 32 #include "rt5670-dsp.h" 33 33 34 - #define RT5670_DEV_GPIO BIT(0) 35 - #define RT5670_IN2_DIFF BIT(1) 36 - #define RT5670_DMIC_EN BIT(2) 37 - #define RT5670_DMIC1_IN2P BIT(3) 38 - #define RT5670_DMIC1_GPIO6 BIT(4) 39 - #define RT5670_DMIC1_GPIO7 BIT(5) 40 - #define RT5670_DMIC2_INR BIT(6) 41 - #define RT5670_DMIC2_GPIO8 BIT(7) 42 - #define RT5670_DMIC3_GPIO5 BIT(8) 43 - #define RT5670_JD_MODE1 BIT(9) 44 - #define RT5670_JD_MODE2 BIT(10) 45 - #define RT5670_JD_MODE3 BIT(11) 34 + #define RT5670_DEV_GPIO BIT(0) 35 + #define RT5670_IN2_DIFF BIT(1) 36 + #define RT5670_DMIC_EN BIT(2) 37 + #define RT5670_DMIC1_IN2P BIT(3) 38 + #define RT5670_DMIC1_GPIO6 BIT(4) 39 + #define RT5670_DMIC1_GPIO7 BIT(5) 40 + #define RT5670_DMIC2_INR BIT(6) 41 + #define RT5670_DMIC2_GPIO8 BIT(7) 42 + #define RT5670_DMIC3_GPIO5 BIT(8) 43 + #define RT5670_JD_MODE1 BIT(9) 44 + #define RT5670_JD_MODE2 BIT(10) 45 + #define RT5670_JD_MODE3 BIT(11) 46 + #define RT5670_GPIO1_IS_EXT_SPK_EN BIT(12) 46 47 47 48 static unsigned long rt5670_quirk; 48 49 static unsigned int quirk_override; ··· 603 602 EXPORT_SYMBOL_GPL(rt5670_set_jack_detect); 604 603 605 604 static const DECLARE_TLV_DB_SCALE(out_vol_tlv, -4650, 150, 0); 606 - static const DECLARE_TLV_DB_SCALE(dac_vol_tlv, -65625, 375, 0); 605 + static const DECLARE_TLV_DB_MINMAX(dac_vol_tlv, -6562, 0); 607 606 static const DECLARE_TLV_DB_SCALE(in_vol_tlv, -3450, 150, 0); 608 - static const DECLARE_TLV_DB_SCALE(adc_vol_tlv, -17625, 375, 0); 607 + static const DECLARE_TLV_DB_MINMAX(adc_vol_tlv, -1762, 3000); 609 608 static const DECLARE_TLV_DB_SCALE(adc_bst_tlv, 0, 1200, 0); 610 609 611 610 /* {0, +20, +24, +30, +35, +40, +44, +50, +52} dB */ ··· 1448 1447 return 0; 1449 1448 } 1450 1449 1450 + static int rt5670_spk_event(struct snd_soc_dapm_widget *w, 1451 + struct snd_kcontrol *kcontrol, int event) 1452 + { 1453 + struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm); 1454 + struct rt5670_priv *rt5670 = snd_soc_component_get_drvdata(component); 1455 + 1456 + if (!rt5670->pdata.gpio1_is_ext_spk_en) 1457 + return 0; 1458 + 1459 + switch (event) { 1460 + case SND_SOC_DAPM_POST_PMU: 1461 + regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2, 1462 + RT5670_GP1_OUT_MASK, RT5670_GP1_OUT_HI); 1463 + break; 1464 + 1465 + case SND_SOC_DAPM_PRE_PMD: 1466 + regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2, 1467 + RT5670_GP1_OUT_MASK, RT5670_GP1_OUT_LO); 1468 + break; 1469 + 1470 + default: 1471 + return 0; 1472 + } 1473 + 1474 + return 0; 1475 + } 1476 + 1451 1477 static int rt5670_bst1_event(struct snd_soc_dapm_widget *w, 1452 1478 struct snd_kcontrol *kcontrol, int event) 1453 1479 { ··· 1888 1860 }; 1889 1861 1890 1862 static const struct snd_soc_dapm_widget rt5672_specific_dapm_widgets[] = { 1891 - SND_SOC_DAPM_PGA("SPO Amp", SND_SOC_NOPM, 0, 0, NULL, 0), 1863 + SND_SOC_DAPM_PGA_E("SPO Amp", SND_SOC_NOPM, 0, 0, NULL, 0, 1864 + rt5670_spk_event, SND_SOC_DAPM_PRE_PMD | 1865 + SND_SOC_DAPM_POST_PMU), 1892 1866 SND_SOC_DAPM_OUTPUT("SPOLP"), 1893 1867 SND_SOC_DAPM_OUTPUT("SPOLN"), 1894 1868 SND_SOC_DAPM_OUTPUT("SPORP"), ··· 2887 2857 }, 2888 2858 { 2889 2859 .callback = rt5670_quirk_cb, 2890 - .ident = "Lenovo Thinkpad Tablet 10", 2860 + .ident = "Lenovo Miix 2 10", 2891 2861 .matches = { 2892 2862 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 2893 2863 DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Miix 2 10"), 2894 2864 }, 2895 2865 .driver_data = (unsigned long *)(RT5670_DMIC_EN | 2896 2866 RT5670_DMIC1_IN2P | 2897 2867 RT5670_GPIO1_IS_EXT_SPK_EN | 2898 2868 RT5670_JD_MODE2), 2899 2869 }, { ··· 2953 2923 if (rt5670_quirk & RT5670_DEV_GPIO) { 2954 2924 rt5670->pdata.dev_gpio = true; 2955 2925 dev_info(&i2c->dev, "quirk dev_gpio\n"); 2956 2926 } 2927 + if (rt5670_quirk & RT5670_GPIO1_IS_EXT_SPK_EN) { 2928 + rt5670->pdata.gpio1_is_ext_spk_en = true; 2929 + dev_info(&i2c->dev, "quirk GPIO1 is external speaker enable\n"); 2930 + } 2957 2931 if (rt5670_quirk & RT5670_IN2_DIFF) { 2958 2932 rt5670->pdata.in2_diff = true; ··· 3053 3019 /* for irq */ 3054 3020 regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL1, 3055 3021 RT5670_GP1_PIN_MASK, RT5670_GP1_PIN_IRQ); 3022 + regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2, 3023 + RT5670_GP1_PF_MASK, RT5670_GP1_PF_OUT); 3024 + } 3025 + 3026 + if (rt5670->pdata.gpio1_is_ext_spk_en) { 3027 + regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL1, 3028 + RT5670_GP1_PIN_MASK, RT5670_GP1_PIN_GPIO1); 3056 3029 regmap_update_bits(rt5670->regmap, RT5670_GPIO_CTRL2, 3057 3030 RT5670_GP1_PF_MASK, RT5670_GP1_PF_OUT); 3058 3031 }
+1 -1
sound/soc/codecs/rt5670.h
··· 757 757 #define RT5670_PWR_VREF2_BIT 4 758 758 #define RT5670_PWR_FV2 (0x1 << 3) 759 759 #define RT5670_PWR_FV2_BIT 3 760 - #define RT5670_LDO_SEL_MASK (0x3) 760 + #define RT5670_LDO_SEL_MASK (0x7) 761 761 #define RT5670_LDO_SEL_SFT 0 762 762 763 763 /* Power Management for Analog 2 (0x64) */
+29 -17
sound/soc/codecs/rt5682.c
··· 967 967 rt5682_enable_push_button_irq(component, false); 968 968 snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1, 969 969 RT5682_TRIG_JD_MASK, RT5682_TRIG_JD_LOW); 970 - if (snd_soc_dapm_get_pin_status(dapm, "MICBIAS")) 970 + if (!snd_soc_dapm_get_pin_status(dapm, "MICBIAS")) 971 + snd_soc_component_update_bits(component, 972 + RT5682_PWR_ANLG_1, RT5682_PWR_MB, 0); 973 + if (!snd_soc_dapm_get_pin_status(dapm, "Vref2")) 971 974 snd_soc_component_update_bits(component, 972 975 RT5682_PWR_ANLG_1, RT5682_PWR_VREF2, 0); 973 - else 974 - snd_soc_component_update_bits(component, 975 - RT5682_PWR_ANLG_1, 976 - RT5682_PWR_VREF2 | RT5682_PWR_MB, 0); 977 976 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_3, 978 977 RT5682_PWR_CBJ, 0); ··· 991 992 992 993 rt5682->hs_jack = hs_jack; 993 994 994 - if (!rt5682->is_sdw) { 995 - if (!hs_jack) { 996 - regmap_update_bits(rt5682->regmap, RT5682_IRQ_CTRL_2, 997 - RT5682_JD1_EN_MASK, RT5682_JD1_DIS); 998 - regmap_update_bits(rt5682->regmap, RT5682_RC_CLK_CTRL, 999 - RT5682_POW_JDH | RT5682_POW_JDL, 0); 1000 - cancel_delayed_work_sync(&rt5682->jack_detect_work); 1001 - return 0; 1002 - } 995 + if (!hs_jack) { 996 + regmap_update_bits(rt5682->regmap, RT5682_IRQ_CTRL_2, 997 + RT5682_JD1_EN_MASK, RT5682_JD1_DIS); 998 + regmap_update_bits(rt5682->regmap, RT5682_RC_CLK_CTRL, 999 + RT5682_POW_JDH | RT5682_POW_JDL, 0); 1000 + cancel_delayed_work_sync(&rt5682->jack_detect_work); 1001 + 1002 + return 0; 1003 + } 1004 + 1005 + if (!rt5682->is_sdw) { 1004 1006 switch (rt5682->pdata.jd_src) { 1005 1007 case RT5682_JD1: 1006 1008 snd_soc_component_update_bits(component, ··· 1082 1082 /* jack was out, report jack type */ 1083 1083 rt5682->jack_type = 1084 1084 rt5682_headset_detect(rt5682->component, 1); 1085 - } else { 1085 + } else if ((rt5682->jack_type & SND_JACK_HEADSET) == 1086 + SND_JACK_HEADSET) { 1086 1087 /* jack is already in, report button event */ 1087 1088 rt5682->jack_type = SND_JACK_HEADSET; 1088 1089 btn_type = rt5682_button_detect(rt5682->component); ··· 1609 1608 0, set_filter_clk, SND_SOC_DAPM_PRE_PMU), 1610 1609 SND_SOC_DAPM_SUPPLY("Vref1", RT5682_PWR_ANLG_1, RT5682_PWR_VREF1_BIT, 0, 1611 1610 rt5682_set_verf, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU), 1612 - SND_SOC_DAPM_SUPPLY("Vref2", RT5682_PWR_ANLG_1, RT5682_PWR_VREF2_BIT, 0, 1613 - NULL, 0), 1611 + SND_SOC_DAPM_SUPPLY("Vref2", SND_SOC_NOPM, 0, 0, NULL, 0), 1614 1612 SND_SOC_DAPM_SUPPLY("MICBIAS", SND_SOC_NOPM, 0, 0, NULL, 0), 1615 1613 1616 1614 /* ASRC */ ··· 2492 2492 snd_soc_dapm_force_enable_pin_unlocked(dapm, "MICBIAS"); 2493 2493 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 2494 2494 RT5682_PWR_MB, RT5682_PWR_MB); 2495 + 2496 + snd_soc_dapm_force_enable_pin_unlocked(dapm, "Vref2"); 2497 + snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 2498 + RT5682_PWR_VREF2 | RT5682_PWR_FV2, 2499 + RT5682_PWR_VREF2); 2500 + usleep_range(55000, 60000); 2501 + snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 2502 + RT5682_PWR_FV2, RT5682_PWR_FV2); 2503 + 2495 2504 snd_soc_dapm_force_enable_pin_unlocked(dapm, "I2S1"); 2496 2505 snd_soc_dapm_force_enable_pin_unlocked(dapm, "PLL2F"); 2497 2506 snd_soc_dapm_force_enable_pin_unlocked(dapm, "PLL2B"); ··· 2526 2517 snd_soc_dapm_mutex_lock(dapm); 2527 2518 2528 2519 snd_soc_dapm_disable_pin_unlocked(dapm, "MICBIAS"); 2520 + snd_soc_dapm_disable_pin_unlocked(dapm, "Vref2"); 2529 2521 if (!rt5682->jack_type) 2530 2522 snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1, 2523 + RT5682_PWR_VREF2 | RT5682_PWR_FV2 | 2531 2524 RT5682_PWR_MB, 0); 2525 + 2532 2526 snd_soc_dapm_disable_pin_unlocked(dapm, "I2S1"); 2533 2527 snd_soc_dapm_disable_pin_unlocked(dapm, "PLL2F"); 2534 2528 snd_soc_dapm_disable_pin_unlocked(dapm, "PLL2B");
+5 -1
sound/soc/codecs/wm8974.c
··· 186 186 187 187 /* Boost mixer */ 188 188 static const struct snd_kcontrol_new wm8974_boost_mixer[] = { 189 - SOC_DAPM_SINGLE("Aux Switch", WM8974_INPPGA, 6, 1, 0), 189 + SOC_DAPM_SINGLE("Aux Switch", WM8974_INPPGA, 6, 1, 1), 190 190 }; 191 191 192 192 /* Input PGA */ ··· 474 474 iface |= 0x0008; 475 475 break; 476 476 case SND_SOC_DAIFMT_DSP_A: 477 + if ((fmt & SND_SOC_DAIFMT_INV_MASK) == SND_SOC_DAIFMT_IB_IF || 478 + (fmt & SND_SOC_DAIFMT_INV_MASK) == SND_SOC_DAIFMT_NB_IF) { 479 + return -EINVAL; 480 + } 477 481 iface |= 0x00018; 478 482 break; 479 483 default:
+2 -2
sound/soc/generic/audio-graph-card.c
··· 317 317 if (ret < 0) 318 318 goto out_put_node; 319 319 320 - dai_link->dpcm_playback = 1; 321 - dai_link->dpcm_capture = 1; 320 + snd_soc_dai_link_set_capabilities(dai_link); 321 + 322 322 dai_link->ops = &graph_ops; 323 323 dai_link->init = asoc_simple_dai_init; 324 324
+2 -2
sound/soc/generic/simple-card.c
··· 231 231 if (ret < 0) 232 232 goto out_put_node; 233 233 234 - dai_link->dpcm_playback = 1; 235 - dai_link->dpcm_capture = 1; 234 + snd_soc_dai_link_set_capabilities(dai_link); 235 + 236 236 dai_link->ops = &simple_ops; 237 237 dai_link->init = asoc_simple_dai_init; 238 238
+1
sound/soc/intel/boards/bdw-rt5677.c
··· 354 354 { 355 355 .name = "Codec DSP", 356 356 .stream_name = "Wake on Voice", 357 + .capture_only = 1, 357 358 .ops = &bdw_rt5677_dsp_ops, 358 359 SND_SOC_DAILINK_REG(dsp), 359 360 },
+3 -1
sound/soc/intel/boards/bytcht_es8316.c
··· 543 543 544 544 if (cnt) { 545 545 ret = device_add_properties(codec_dev, props); 546 - if (ret) 546 + if (ret) { 547 + put_device(codec_dev); 547 548 return ret; 549 + } 548 550 } 549 551 550 552 devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
+11 -12
sound/soc/intel/boards/cht_bsw_rt5672.c
··· 253 253 params_set_format(params, SNDRV_PCM_FORMAT_S24_LE); 254 254 255 255 /* 256 - * Default mode for SSP configuration is TDM 4 slot 256 + * Default mode for SSP configuration is TDM 4 slot. One board/design, 257 + * the Lenovo Miix 2 10 uses not 1 but 2 codecs connected to SSP2. The 258 + * second piggy-backed, output-only codec is inside the keyboard-dock 259 + * (which has extra speakers). Unlike the main rt5672 codec, we cannot 260 + * configure this codec, it is hard coded to use 2 channel 24 bit I2S. 261 + * Since we only support 2 channels anyways, there is no need for TDM 262 + * on any cht-bsw-rt5672 designs. So we simply use I2S 2ch everywhere. 257 263 */ 258 - ret = snd_soc_dai_set_fmt(asoc_rtd_to_codec(rtd, 0), 259 - SND_SOC_DAIFMT_DSP_B | 260 - SND_SOC_DAIFMT_IB_NF | 264 + ret = snd_soc_dai_set_fmt(asoc_rtd_to_cpu(rtd, 0), 265 + SND_SOC_DAIFMT_I2S | 266 + SND_SOC_DAIFMT_NB_NF | 261 267 SND_SOC_DAIFMT_CBS_CFS); 262 268 if (ret < 0) { 263 - dev_err(rtd->dev, "can't set format to TDM %d\n", ret); 264 - return ret; 265 - } 266 - 267 - /* TDM 4 slots 24 bit, set Rx & Tx bitmask to 4 active slots */ 268 - ret = snd_soc_dai_set_tdm_slot(asoc_rtd_to_codec(rtd, 0), 0xF, 0xF, 4, 24); 269 - if (ret < 0) { 270 - dev_err(rtd->dev, "can't set codec TDM slot %d\n", ret); 269 + dev_err(rtd->dev, "can't set format to I2S, err %d\n", ret); 271 270 return ret; 272 271 } 273 272
+1 -1
sound/soc/qcom/Kconfig
··· 72 72 73 73 config SND_SOC_QDSP6 74 74 tristate "SoC ALSA audio driver for QDSP6" 75 - depends on QCOM_APR && HAS_DMA 75 + depends on QCOM_APR 76 76 select SND_SOC_QDSP6_COMMON 77 77 select SND_SOC_QDSP6_CORE 78 78 select SND_SOC_QDSP6_AFE
+13
sound/soc/rockchip/rk3399_gru_sound.c
··· 219 219 return 0; 220 220 } 221 221 222 + static int rockchip_sound_startup(struct snd_pcm_substream *substream) 223 + { 224 + struct snd_pcm_runtime *runtime = substream->runtime; 225 + 226 + runtime->hw.formats = SNDRV_PCM_FMTBIT_S16_LE; 227 + return snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_RATE, 228 + 8000, 96000); 229 + } 230 + 222 231 static const struct snd_soc_ops rockchip_sound_max98357a_ops = { 232 + .startup = rockchip_sound_startup, 223 233 .hw_params = rockchip_sound_max98357a_hw_params, 224 234 }; 225 235 226 236 static const struct snd_soc_ops rockchip_sound_rt5514_ops = { 237 + .startup = rockchip_sound_startup, 227 238 .hw_params = rockchip_sound_rt5514_hw_params, 228 239 }; 229 240 230 241 static const struct snd_soc_ops rockchip_sound_da7219_ops = { 242 + .startup = rockchip_sound_startup, 231 243 .hw_params = rockchip_sound_da7219_hw_params, 232 244 }; 233 245 234 246 static const struct snd_soc_ops rockchip_sound_dmic_ops = { 247 + .startup = rockchip_sound_startup, 235 248 .hw_params = rockchip_sound_dmic_hw_params, 236 249 }; 237 250
+27
sound/soc/soc-core.c
··· 2573 2573 EXPORT_SYMBOL_GPL(snd_soc_register_component); 2574 2574 2575 2575 /** 2576 + * snd_soc_unregister_component_by_driver - Unregister component using a given driver 2577 + * from the ASoC core 2578 + * 2579 + * @dev: The device to unregister 2580 + * @component_driver: The component driver to unregister 2581 + */ 2582 + void snd_soc_unregister_component_by_driver(struct device *dev, 2583 + const struct snd_soc_component_driver *component_driver) 2584 + { 2585 + struct snd_soc_component *component; 2586 + 2587 + if (!component_driver) 2588 + return; 2589 + 2590 + mutex_lock(&client_mutex); 2591 + component = snd_soc_lookup_component_nolocked(dev, component_driver->name); 2592 + if (!component) 2593 + goto out; 2594 + 2595 + snd_soc_del_component_unlocked(component); 2596 + 2597 + out: 2598 + mutex_unlock(&client_mutex); 2599 + } 2600 + EXPORT_SYMBOL_GPL(snd_soc_unregister_component_by_driver); 2601 + 2602 + /** 2576 2603 * snd_soc_unregister_component - Unregister all related component 2577 2604 * from the ASoC core 2578 2605 *
+38
sound/soc/soc-dai.c
··· 391 391 return stream->channels_min; 392 392 } 393 393 394 + /* 395 + * snd_soc_dai_link_set_capabilities() - set dai_link properties based on its DAIs 396 + */ 397 + void snd_soc_dai_link_set_capabilities(struct snd_soc_dai_link *dai_link) 398 + { 399 + struct snd_soc_dai_link_component *cpu; 400 + struct snd_soc_dai_link_component *codec; 401 + struct snd_soc_dai *dai; 402 + bool supported[SNDRV_PCM_STREAM_LAST + 1]; 403 + int direction; 404 + int i; 405 + 406 + for_each_pcm_streams(direction) { 407 + supported[direction] = true; 408 + 409 + for_each_link_cpus(dai_link, i, cpu) { 410 + dai = snd_soc_find_dai(cpu); 411 + if (!dai || !snd_soc_dai_stream_valid(dai, direction)) { 412 + supported[direction] = false; 413 + break; 414 + } 415 + } 416 + if (!supported[direction]) 417 + continue; 418 + for_each_link_codecs(dai_link, i, codec) { 419 + dai = snd_soc_find_dai(codec); 420 + if (!dai || !snd_soc_dai_stream_valid(dai, direction)) { 421 + supported[direction] = false; 422 + break; 423 + } 424 + } 425 + } 426 + 427 + dai_link->dpcm_playback = supported[SNDRV_PCM_STREAM_PLAYBACK]; 428 + dai_link->dpcm_capture = supported[SNDRV_PCM_STREAM_CAPTURE]; 429 + } 430 + EXPORT_SYMBOL_GPL(snd_soc_dai_link_set_capabilities); 431 + 394 432 void snd_soc_dai_action(struct snd_soc_dai *dai, 395 433 int stream, int action) 396 434 {
+5 -3
sound/soc/soc-devres.c
··· 48 48 49 49 static void devm_component_release(struct device *dev, void *res) 50 50 { 51 - snd_soc_unregister_component(*(struct device **)res); 51 + const struct snd_soc_component_driver **cmpnt_drv = res; 52 + 53 + snd_soc_unregister_component_by_driver(dev, *cmpnt_drv); 52 54 } 53 55 54 56 /** ··· 67 65 const struct snd_soc_component_driver *cmpnt_drv, 68 66 struct snd_soc_dai_driver *dai_drv, int num_dai) 69 67 { 70 - struct device **ptr; 68 + const struct snd_soc_component_driver **ptr; 71 69 int ret; 72 70 73 71 ptr = devres_alloc(devm_component_release, sizeof(*ptr), GFP_KERNEL); ··· 76 74 77 75 ret = snd_soc_register_component(dev, cmpnt_drv, dai_drv, num_dai); 78 76 if (ret == 0) { 79 - *ptr = dev; 77 + *ptr = cmpnt_drv; 80 78 devres_add(dev, ptr); 81 79 } else { 82 80 devres_free(ptr);
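The soc-devres.c change follows from the new unregister helper: the release callback already receives the `struct device`, so the devres payload should carry the other half of the lookup key, the component driver pointer, instead of redundantly storing the device. A simplified, userspace sketch of that devres pattern (names here are illustrative, not the kernel devres API):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal devres-like sketch: the resource blob stores exactly the data
 * the release callback cannot get elsewhere. The fix stores the driver
 * identity (here an int key) rather than the device, which the callback
 * is handed anyway. */
struct devres {
    void (*release)(void *dev, void *res);
    int *payload;
};

static int released_with_key; /* records what release saw */

static void component_release(void *dev, void *res)
{
    (void)dev; /* device comes in via the callback argument */
    released_with_key = *(int *)res;
}

static struct devres *devres_alloc_sketch(void (*release)(void *, void *),
                                          int key)
{
    struct devres *dr = malloc(sizeof(*dr));

    dr->payload = malloc(sizeof(*dr->payload));
    *dr->payload = key;
    dr->release = release;
    return dr;
}

static void devres_release_sketch(struct devres *dr, void *dev)
{
    dr->release(dev, dr->payload);
    free(dr->payload);
    free(dr);
}
```

With the old payload (the device pointer), the release path could only call the unregister-everything variant; carrying the driver pointer lets it unregister just the component it registered.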
+1 -1
sound/soc/soc-generic-dmaengine-pcm.c
··· 478 478 479 479 pcm = soc_component_to_pcm(component); 480 480 481 - snd_soc_unregister_component(dev); 481 + snd_soc_unregister_component_by_driver(dev, component->driver); 482 482 dmaengine_pcm_release_chan(pcm); 483 483 kfree(pcm); 484 484 }
+18 -6
sound/soc/soc-topology.c
··· 1261 1261 list_add(&routes[i]->dobj.list, &tplg->comp->dobj_list); 1262 1262 1263 1263 ret = soc_tplg_add_route(tplg, routes[i]); 1264 - if (ret < 0) 1264 + if (ret < 0) { 1265 + /* 1266 + * this route was added to the list, it will 1267 + * be freed in remove_route() so increment the 1268 + * counter to skip it in the error handling 1269 + * below. 1270 + */ 1271 + i++; 1265 1272 break; 1273 + } 1266 1274 1267 1275 /* add route, but keep going if some fail */ 1268 1276 snd_soc_dapm_add_routes(dapm, routes[i], 1); 1269 1277 } 1270 1278 1271 - /* free memory allocated for all dapm routes in case of error */ 1272 - if (ret < 0) 1273 - for (i = 0; i < count ; i++) 1274 - kfree(routes[i]); 1279 + /* 1280 + * free memory allocated for all dapm routes not added to the 1281 + * list in case of error 1282 + */ 1283 + if (ret < 0) { 1284 + while (i < count) 1285 + kfree(routes[i++]); 1286 + } 1275 1287 1276 1288 /* 1277 1289 * free pointer to array of dapm routes as this is no longer needed. ··· 1371 1359 if (err < 0) { 1372 1360 dev_err(tplg->dev, "ASoC: failed to init %s\n", 1373 1361 mc->hdr.name); 1374 - soc_tplg_free_tlv(tplg, &kc[i]); 1375 1362 goto err_sm; 1376 1363 } 1377 1364 } ··· 1378 1367 1379 1368 err_sm: 1380 1369 for (; i >= 0; i--) { 1370 + soc_tplg_free_tlv(tplg, &kc[i]); 1381 1371 sm = (struct soc_mixer_control *)kc[i].private_value; 1382 1372 kfree(sm); 1383 1373 kfree(kc[i].name);
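The first soc-topology.c hunk fixes an ownership bug in the error path: each route is added to the component's `dobj_list` before `soc_tplg_add_route()` runs, so on failure that entry now belongs to the list owner and must not be freed locally; only the not-yet-listed tail may be. The `i++` before the `break` encodes exactly that. A standalone sketch of the rule (the helper names and indices are hypothetical):

```c
#include <assert.h>

static int freed; /* counts entries freed by the local error path */

/* Simulates soc_tplg_add_route() failing at one index. */
static int add_route(int idx, int fail_at)
{
    return idx == fail_at ? -1 : 0;
}

/* Returns how many entries the error path freed locally. */
static int load_routes(int count, int fail_at)
{
    int i, ret = 0;

    freed = 0;
    for (i = 0; i < count; i++) {
        /* entry i is put on the owner's list here, before add_route(),
         * so from now on the owner frees it, not us */
        ret = add_route(i, fail_at);
        if (ret < 0) {
            i++; /* skip the entry that was already listed */
            break;
        }
    }

    if (ret < 0) {
        while (i < count) {
            freed++; /* kfree(routes[i]) in the kernel code */
            i++;
        }
    }
    return freed;
}
```

The pre-fix code freed all `count` entries on error, double-freeing the ones already on the list when `remove_route()` later walked it.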
+5 -5
sound/soc/sof/core.c
··· 345 345 struct snd_sof_pdata *pdata = sdev->pdata; 346 346 int ret; 347 347 348 - ret = snd_sof_dsp_power_down_notify(sdev); 349 - if (ret < 0) 350 - dev_warn(dev, "error: %d failed to prepare DSP for device removal", 351 - ret); 352 - 353 348 if (IS_ENABLED(CONFIG_SND_SOC_SOF_PROBE_WORK_QUEUE)) 354 349 cancel_work_sync(&sdev->probe_work); 355 350 356 351 if (sdev->fw_state > SOF_FW_BOOT_NOT_STARTED) { 352 + ret = snd_sof_dsp_power_down_notify(sdev); 353 + if (ret < 0) 354 + dev_warn(dev, "error: %d failed to prepare DSP for device removal", 355 + ret); 356 + 357 357 snd_sof_fw_unload(sdev); 358 358 snd_sof_ipc_free(sdev); 359 359 snd_sof_free_debug(sdev);
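The sof/core.c hunk moves the power-down notification inside the `fw_state` check, so a device whose firmware never started booting is not sent a power-down IPC it cannot answer. The guard can be sketched as follows; the state names mirror the kernel's but the function is a simplified stand-in:

```c
#include <assert.h>
#include <stdbool.h>

enum fw_state {
    FW_BOOT_NOT_STARTED,
    FW_BOOT_IN_PROGRESS,
    FW_BOOT_COMPLETE,
};

static bool notified; /* did remove send the power-down IPC? */

static void sof_remove_sketch(enum fw_state state)
{
    notified = false;
    if (state > FW_BOOT_NOT_STARTED) {
        notified = true; /* snd_sof_dsp_power_down_notify() */
        /* ...then unload firmware, free IPC, free debugfs... */
    }
}
```

Before the fix the notify call ran unconditionally, before the state check, which meant a failed probe could still try to talk to a DSP that was never brought up.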
+8
sound/soc/sof/imx/imx8.c
··· 375 375 static struct snd_soc_dai_driver imx8_dai[] = { 376 376 { 377 377 .name = "esai-port", 378 + .playback = { 379 + .channels_min = 1, 380 + .channels_max = 8, 381 + }, 382 + .capture = { 383 + .channels_min = 1, 384 + .channels_max = 8, 385 + }, 378 386 }, 379 387 }; 380 388
+8
sound/soc/sof/imx/imx8m.c
··· 240 240 static struct snd_soc_dai_driver imx8m_dai[] = { 241 241 { 242 242 .name = "sai-port", 243 + .playback = { 244 + .channels_min = 1, 245 + .channels_max = 32, 246 + }, 247 + .capture = { 248 + .channels_min = 1, 249 + .channels_max = 32, 250 + }, 243 251 }, 244 252 }; 245 253
+1 -1
tools/perf/pmu-events/arch/s390/cf_z15/extended.json
··· 380 380 { 381 381 "Unit": "CPU-M-CF", 382 382 "EventCode": "265", 383 - "EventName": "DFLT_CCERROR", 383 + "EventName": "DFLT_CCFINISH", 384 384 "BriefDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2", 385 385 "PublicDescription": "Increments by one for every DEFLATE CONVERSION CALL instruction executed that ended in Condition Codes 0, 1 or 2" 386 386 },
+1 -1
tools/testing/selftests/net/fib_nexthop_multiprefix.sh
··· 144 144 145 145 cleanup() 146 146 { 147 - for n in h1 r1 h2 h3 h4 147 + for n in h0 r1 h1 h2 h3 148 148 do 149 149 ip netns del ${n} 2>/dev/null 150 150 done
+2
tools/testing/selftests/net/ip_defrag.sh
··· 6 6 set +x 7 7 set -e 8 8 9 + modprobe -q nf_defrag_ipv6 10 + 9 11 readonly NETNS="ns-$(mktemp -u XXXXXX)" 10 12 11 13 setup() {
+1 -1
tools/testing/selftests/net/txtimestamp.sh
··· 75 75 fi 76 76 } 77 77 78 - if [[ "$(ip netns identify)" == "root" ]]; then 78 + if [[ -z "$(ip netns identify)" ]]; then 79 79 ./in_netns.sh $0 $@ 80 80 else 81 81 main $@