Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

include/linux/net.h
a5ef058dc4d9 ("net: introduce and use custom sockopt socket flag")
e993ffe3da4b ("net: flag sockets supporting msghdr originated zerocopy")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+5140 -4892
+3 -1
.mailmap
···
 Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
 Corey Minyard <minyard@acm.org>
 Damian Hobson-Garcia <dhobsong@igel.co.jp>
+Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
 Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
 Daniel Borkmann <daniel@iogearbox.net> <danborkmann@iogearbox.net>
 Daniel Borkmann <daniel@iogearbox.net> <daniel.borkmann@tik.ee.ethz.ch>
···
 Pratyush Anand <pratyush.anand@gmail.com> <pratyush.anand@st.com>
 Praveen BP <praveenbp@ti.com>
 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
-Qais Yousef <qsyousef@gmail.com> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
+Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
 Quentin Monnet <quentin@isovalent.com> <quentin.monnet@netronome.com>
 Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>
 Rafael J. Wysocki <rjw@rjwysocki.net> <rjw@sisk.pl>
-1
Documentation/admin-guide/acpi/index.rst
···
    :maxdepth: 1

    initrd_table_override
-   dsdt-override
    ssdt-overlays
    cppc_sysfs
    fan_performance_states
+36
Documentation/block/ublk.rst
···
 For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
 responsibility to save IO target specific info in userspace.

+- ``UBLK_CMD_START_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. It
+  is accepted after the old process has exited, the ublk device is quiesced
+  and ``/dev/ublkc*`` is released. The user should send this command before
+  starting a new process which re-opens ``/dev/ublkc*``. When this command
+  returns, the ublk device is ready for the new process.
+
+- ``UBLK_CMD_END_USER_RECOVERY``
+
+  This command is valid if the ``UBLK_F_USER_RECOVERY`` feature is enabled. It
+  is accepted after the ublk device is quiesced and a new process has opened
+  ``/dev/ublkc*`` and got all ublk queues ready. When this command returns,
+  the ublk device is unquiesced and new I/O requests are passed to the new
+  process.
+
+- user recovery feature description
+
+  Two new features are added for user recovery: ``UBLK_F_USER_RECOVERY`` and
+  ``UBLK_F_USER_RECOVERY_REISSUE``.
+
+  With ``UBLK_F_USER_RECOVERY`` set, after one ubq_daemon (the ublk server's
+  io handler) is dying, ublk does not delete ``/dev/ublkb*`` during the whole
+  recovery stage and the ublk device ID is kept. It is the ublk server's
+  responsibility to recover the device context by its own knowledge.
+  Requests which have not been issued to userspace are requeued. Requests
+  which have been issued to userspace are aborted.
+
+  With ``UBLK_F_USER_RECOVERY_REISSUE`` set, after one ubq_daemon (the ublk
+  server's io handler) is dying, contrary to ``UBLK_F_USER_RECOVERY``,
+  requests which have been issued to userspace are requeued and will be
+  re-issued to the new process after handling ``UBLK_CMD_END_USER_RECOVERY``.
+  ``UBLK_F_USER_RECOVERY_REISSUE`` is designed for backends which tolerate
+  double writes, since the driver may issue the same I/O request twice. It
+  might be useful to a read-only FS or a VM backend.
+
 Data plane
 ----------
-9
Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
···
-Dongwoon Anatech DW9714 camera voice coil lens driver
-
-DW9174 is a 10-bit DAC with current sink capability. It is intended
-for driving voice coil lenses in camera modules.
-
-Mandatory properties:
-
-- compatible: "dongwoon,dw9714"
-- reg: I²C slave address
+47
Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
···
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/i2c/dongwoon,dw9714.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Dongwoon Anatech DW9714 camera voice coil lens driver
+
+maintainers:
+  - Krzysztof Kozlowski <krzk@kernel.org>
+
+description:
+  DW9174 is a 10-bit DAC with current sink capability. It is intended for
+  driving voice coil lenses in camera modules.
+
+properties:
+  compatible:
+    const: dongwoon,dw9714
+
+  reg:
+    maxItems: 1
+
+  powerdown-gpios:
+    description:
+      XSD pin for shutdown (active low)
+
+  vcc-supply:
+    description: VDD power supply
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    i2c {
+        #address-cells = <1>;
+        #size-cells = <0>;
+
+        camera-lens@c {
+            compatible = "dongwoon,dw9714";
+            reg = <0x0c>;
+            vcc-supply = <&reg_csi_1v8>;
+        };
+    };
-4
Documentation/devicetree/bindings/pinctrl/xlnx,zynqmp-pinctrl.yaml
···
   slew-rate:
     enum: [0, 1]

-  output-enable:
-    description:
-      This will internally disable the tri-state for MIO pins.
-
   drive-strength:
     description:
       Selects the drive strength for MIO pins, in mA.
+15 -4
Documentation/driver-api/media/mc-core.rst
···
 Pipelines and media streams
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

+A media stream is a stream of pixels or metadata originating from one or more
+source devices (such as sensors) and flowing through media entity pads
+towards the final sinks. The stream can be modified along the route by the
+devices (e.g. scaling or pixel format conversions), or it can be split into
+multiple branches, or multiple branches can be merged.
+
+A media pipeline is a set of media streams which are interdependent. This
+interdependency can be caused by the hardware (e.g. the configuration of a
+second stream cannot be changed if the first stream has been enabled) or by
+the driver due to the software design. Most commonly a media pipeline
+consists of a single stream which does not branch.
+
 When starting streaming, drivers must notify all entities in the pipeline to
 prevent link states from being modified during streaming by calling
 :c:func:`media_pipeline_start()`.

-The function will mark all entities connected to the given entity through
-enabled links, either directly or indirectly, as streaming.
+The function will mark all the pads which are part of the pipeline as
+streaming.

 The struct media_pipeline instance pointed to by
-the pipe argument will be stored in every entity in the pipeline.
+the pipe argument will be stored in every pad in the pipeline.
 Drivers should embed the struct media_pipeline
 in higher-level pipeline structures and can then access the
-pipeline through the struct media_entity
+pipeline through the struct media_pad
 pipe field.

 Calls to :c:func:`media_pipeline_start()` can be nested.
+2
Documentation/hwmon/corsair-psu.rst
···

 Corsair HX1200i

+Corsair HX1500i
+
 Corsair RM550i

 Corsair RM650i
+10
Documentation/process/maintainer-netdev.rst
···
 Finally, go back and read
 :ref:`Documentation/process/submitting-patches.rst <submittingpatches>`
 to be sure you are not repeating some common mistake documented there.
+
+My company uses peer feedback in employee performance reviews. Can I ask netdev maintainers for feedback?
+---------------------------------------------------------------------------------------------------------
+
+Yes, especially if you spend a significant amount of time reviewing code
+and go out of your way to improve shared infrastructure.
+
+The feedback must be requested by you, the contributor, and will always
+be shared with you (even if you request for it to be submitted to your
+manager).
+2
Documentation/userspace-api/media/cec.h.rst.exceptions
···
 ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE
 ignore define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX
 ignore define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX
+ignore define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL

 ignore define CEC_MSG_GIVE_FEATURES
···

 ignore define CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST
 ignore define CEC_MSG_SYSTEM_AUDIO_MODE_STATUS
+ignore define CEC_MSG_SET_AUDIO_VOLUME_LEVEL

 ignore define CEC_OP_AUD_FMT_ID_CEA861
 ignore define CEC_OP_AUD_FMT_ID_CEA861_CXT
+2 -2
Documentation/userspace-api/media/v4l/libv4l-introduction.rst
···

 operates like the :c:func:`read()` function.

-.. c:function:: void v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);
+.. c:function:: void *v4l2_mmap(void *start, size_t length, int prot, int flags, int fd, int64_t offset);

-operates like the :c:func:`munmap()` function.
+operates like the :c:func:`mmap()` function.

 .. c:function:: int v4l2_munmap(void *_start, size_t length);
+12 -5
MAINTAINERS
···
 L: linux-media@vger.kernel.org
 S: Maintained
 T: git git://linuxtv.org/media_tree.git
-F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
+F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
 F: drivers/media/i2c/dw9714.c

 DONGWOON DW9768 LENS VOICE COIL DRIVER
···
 F: drivers/nvme/target/fabrics-cmd-auth.c
 F: include/linux/nvme-auth.h

+NVM EXPRESS HARDWARE MONITORING SUPPORT
+M: Guenter Roeck <linux@roeck-us.net>
+L: linux-nvme@lists.infradead.org
+S: Supported
+F: drivers/nvme/host/hwmon.c
+
 NVM EXPRESS FC TRANSPORT DRIVERS
 M: James Smart <james.smart@broadcom.com>
 L: linux-nvme@lists.infradead.org
···
 F: drivers/pci/controller/dwc/*designware*

 PCI DRIVER FOR TI DRA7XX/J721E
-M: Kishon Vijay Abraham I <kishon@ti.com>
+M: Vignesh Raghavendra <vigneshr@ti.com>
 L: linux-omap@vger.kernel.org
 L: linux-pci@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F: drivers/pci/controller/pci-v3-semi.c

 PCI ENDPOINT SUBSYSTEM
-M: Kishon Vijay Abraham I <kishon@ti.com>
 M: Lorenzo Pieralisi <lpieralisi@kernel.org>
 R: Krzysztof Wilczyński <kw@linux.com>
 R: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+R: Kishon Vijay Abraham I <kishon@kernel.org>
 L: linux-pci@vger.kernel.org
 S: Supported
 Q: https://patchwork.kernel.org/project/linux-pci/list/
···
 F: drivers/net/phy/dp83640*
 F: drivers/ptp/*
 F: include/linux/ptp_cl*
+K: (?:\b|_)ptp(?:\b|_)

 PTP VIRTUAL CLOCK SUPPORT
 M: Yangbo Lu <yangbo.lu@nxp.com>
···
 S: Maintained
 T: git git://linuxtv.org/media_tree.git
 F: drivers/staging/media/deprecated/saa7146/
-F: include/media/drv-intf/saa7146*

 SAFESETID SECURITY MODULE
 M: Micah Morton <mortonm@chromium.org>
···
 F: drivers/watchdog/
 F: include/linux/watchdog.h
 F: include/uapi/linux/watchdog.h
+F: include/trace/events/watchdog.h

 WHISKEYCOVE PMIC GPIO DRIVER
 M: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
···
 W: http://mjpeg.sourceforge.net/driver-zoran/
 Q: https://patchwork.linuxtv.org/project/linux-media/list/
 F: Documentation/driver-api/media/drivers/zoran.rst
-F: drivers/staging/media/zoran/
+F: drivers/media/pci/zoran/

 ZRAM COMPRESSED RAM BLOCK DEVICE DRVIER
 M: Minchan Kim <minchan@kernel.org>
+1 -1
Makefile
···
 VERSION = 6
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
 NAME = Hurr durr I'ma ninja sloth

 # *DOCUMENTATION*
+13 -5
arch/arm64/include/asm/kvm_pgtable.h
···

 #define KVM_PGTABLE_MAX_LEVELS		4U

+/*
+ * The largest supported block sizes for KVM (no 52-bit PA support):
+ *  - 4K (level 1):	1GB
+ *  - 16K (level 2):	32MB
+ *  - 64K (level 2):	512MB
+ */
+#ifdef CONFIG_ARM64_4K_PAGES
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL	1U
+#else
+#define KVM_PGTABLE_MIN_BLOCK_LEVEL	2U
+#endif
+
 static inline u64 kvm_get_parange(u64 mmfr0)
 {
 	u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
···

 static inline bool kvm_level_supports_block_mapping(u32 level)
 {
-	/*
-	 * Reject invalid block mappings and don't bother with 4TB mappings for
-	 * 52-bit PAs.
-	 */
-	return !(level == 0 || (PAGE_SIZE != SZ_4K && level == 1));
+	return level >= KVM_PGTABLE_MIN_BLOCK_LEVEL;
 }

 /**
-20
arch/arm64/include/asm/stage2_pgtable.h
···
 #include <linux/pgtable.h>

 /*
- * PGDIR_SHIFT determines the size a top-level page table entry can map
- * and depends on the number of levels in the page table. Compute the
- * PGDIR_SHIFT for a given number of levels.
- */
-#define pt_levels_pgdir_shift(lvls)	ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - (lvls))
-
-/*
  * The hardware supports concatenation of up to 16 tables at stage2 entry
  * level and we use the feature whenever possible, which means we resolve 4
  * additional bits of address at the entry level.
···
 #define stage2_pgtable_levels(ipa)	ARM64_HW_PGTABLE_LEVELS((ipa) - 4)
 #define kvm_stage2_levels(kvm)		VTCR_EL2_LVLS(kvm->arch.vtcr)

-/* stage2_pgdir_shift() is the size mapped by top-level stage2 entry for the VM */
-#define stage2_pgdir_shift(kvm)		pt_levels_pgdir_shift(kvm_stage2_levels(kvm))
-#define stage2_pgdir_size(kvm)		(1ULL << stage2_pgdir_shift(kvm))
-#define stage2_pgdir_mask(kvm)		~(stage2_pgdir_size(kvm) - 1)
-
 /*
  * kvm_mmmu_cache_min_pages() is the number of pages required to install
  * a stage-2 translation. We pre-allocate the entry level page table at
  * the VM creation.
  */
 #define kvm_mmu_cache_min_pages(kvm)	(kvm_stage2_levels(kvm) - 1)
-
-static inline phys_addr_t
-stage2_pgd_addr_end(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
-{
-	phys_addr_t boundary = (addr + stage2_pgdir_size(kvm)) & stage2_pgdir_mask(kvm);
-
-	return (boundary - 1 < end - 1) ? boundary : end;
-}

 #endif	/* __ARM64_S2_PGTABLE_H_ */
+6 -1
arch/arm64/kernel/entry-ftrace.S
···
 */

 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
···
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */

-SYM_FUNC_START(ftrace_stub)
+SYM_TYPED_FUNC_START(ftrace_stub)
 	ret
 SYM_FUNC_END(ftrace_stub)
+
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+	ret
+SYM_FUNC_END(ftrace_stub_graph)

 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
+1 -4
arch/arm64/kvm/hyp/Makefile
···

 incdir := $(srctree)/$(src)/include
 subdir-asflags-y := -I$(incdir)
-subdir-ccflags-y := -I$(incdir)			\
-		    -fno-stack-protector	\
-		    -DDISABLE_BRANCH_PROFILING	\
-		    $(DISABLE_STACKLEAK_PLUGIN)
+subdir-ccflags-y := -I$(incdir)

 obj-$(CONFIG_KVM) += vhe/ nvhe/ pgtable.o
+7
arch/arm64/kvm/hyp/nvhe/Makefile
···
 # will explode instantly (Words of Marc Zyngier). So introduce a generic flag
 # __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
 ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
+ccflags-y += -fno-stack-protector	\
+	     -DDISABLE_BRANCH_PROFILING	\
+	     $(DISABLE_STACKLEAK_PLUGIN)

 hostprogs := gen-hyprel
 HOST_EXTRACFLAGS += -I$(objtree)/include
···
 # Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
 # This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
 KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
+# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
+# when profile optimization is applied. gen-hyprel does not support SHT_REL and
+# causes a build failure. Remove profile optimization flags.
+KBUILD_CFLAGS := $(filter-out -fprofile-sample-use=% -fprofile-use=%, $(KBUILD_CFLAGS))

 # KVM nVHE code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
+8 -1
arch/arm64/kvm/mmu.c
···

 static unsigned long io_map_base;

+static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+	phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
+	phys_addr_t boundary = ALIGN_DOWN(addr + size, size);
+
+	return (boundary - 1 < end - 1) ? boundary : end;
+}

 /*
  * Release kvm_mmu_lock periodically if the memory region is large. Otherwise,
···
 	if (!pgt)
 		return -EINVAL;

-	next = stage2_pgd_addr_end(kvm, addr, end);
+	next = stage2_range_addr_end(addr, end);
 	ret = fn(pgt, addr, next - addr);
 	if (ret)
 		break;
+4 -1
arch/arm64/kvm/vgic/vgic-its.c
···

 	memset(entry, 0, esz);

-	while (len > 0) {
+	while (true) {
 		int next_offset;
 		size_t byte_offset;
···
 			return next_offset;

 		byte_offset = next_offset * esz;
+		if (byte_offset >= len)
+			break;
+
 		id += next_offset;
 		gpa += byte_offset;
 		len -= byte_offset;
-8
arch/riscv/include/asm/cacheflush.h
···

 #endif /* CONFIG_SMP */

-/*
- * The T-Head CMO errata internally probe the CBOM block size, but otherwise
- * don't depend on Zicbom.
- */
 extern unsigned int riscv_cbom_block_size;
-#ifdef CONFIG_RISCV_ISA_ZICBOM
 void riscv_init_cbom_blocksize(void);
-#else
-static inline void riscv_init_cbom_blocksize(void) { }
-#endif

 #ifdef CONFIG_RISCV_DMA_NONCOHERENT
 void riscv_noncoherent_supported(void);
+1
arch/riscv/include/asm/kvm_vcpu_timer.h
···
 int kvm_riscv_vcpu_timer_reset(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_restore(struct kvm_vcpu *vcpu);
 void kvm_riscv_guest_timer_init(struct kvm *kvm);
+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu);
 bool kvm_riscv_vcpu_timer_pending(struct kvm_vcpu *vcpu);
+3
arch/riscv/kvm/vcpu.c
···
 			clear_bit(IRQ_VS_SOFT, &v->irqs_pending);
 		}
 	}
+
+	/* Sync-up timer CSRs */
+	kvm_riscv_vcpu_timer_sync(vcpu);
 }

 int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
+20 -7
arch/riscv/kvm/vcpu_timer.c
···
 	kvm_riscv_vcpu_timer_unblocking(vcpu);
 }

+void kvm_riscv_vcpu_timer_sync(struct kvm_vcpu *vcpu)
+{
+	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
+
+	if (!t->sstc_enabled)
+		return;
+
+#if defined(CONFIG_32BIT)
+	t->next_cycles = csr_read(CSR_VSTIMECMP);
+	t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
+#else
+	t->next_cycles = csr_read(CSR_VSTIMECMP);
+#endif
+}
+
 void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_timer *t = &vcpu->arch.timer;
···
 	if (!t->sstc_enabled)
 		return;

-	t = &vcpu->arch.timer;
-#if defined(CONFIG_32BIT)
-	t->next_cycles = csr_read(CSR_VSTIMECMP);
-	t->next_cycles |= (u64)csr_read(CSR_VSTIMECMPH) << 32;
-#else
-	t->next_cycles = csr_read(CSR_VSTIMECMP);
-#endif
+	/*
+	 * The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
+	 * upon every VM exit so no need to save here.
+	 */
+
 	/* timer should be enabled for the remaining operations */
 	if (unlikely(!t->init_done))
 		return;
+38
arch/riscv/mm/cacheflush.c
···
  * Copyright (C) 2017 SiFive
  */

+#include <linux/of.h>
 #include <asm/cacheflush.h>

 #ifdef CONFIG_SMP
···
 	flush_icache_all();
 }
 #endif /* CONFIG_MMU */
+
+unsigned int riscv_cbom_block_size;
+EXPORT_SYMBOL_GPL(riscv_cbom_block_size);
+
+void riscv_init_cbom_blocksize(void)
+{
+	struct device_node *node;
+	unsigned long cbom_hartid;
+	u32 val, probed_block_size;
+	int ret;
+
+	probed_block_size = 0;
+	for_each_of_cpu_node(node) {
+		unsigned long hartid;
+
+		ret = riscv_of_processor_hartid(node, &hartid);
+		if (ret)
+			continue;
+
+		/* set block-size for cbom extension if available */
+		ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
+		if (ret)
+			continue;
+
+		if (!probed_block_size) {
+			probed_block_size = val;
+			cbom_hartid = hartid;
+		} else {
+			if (probed_block_size != val)
+				pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
+					cbom_hartid, hartid);
+		}
+	}
+
+	if (probed_block_size)
+		riscv_cbom_block_size = probed_block_size;
+}
-41
arch/riscv/mm/dma-noncoherent.c
···
 #include <linux/dma-direct.h>
 #include <linux/dma-map-ops.h>
 #include <linux/mm.h>
-#include <linux/of.h>
-#include <linux/of_device.h>
 #include <asm/cacheflush.h>
-
-unsigned int riscv_cbom_block_size;
-EXPORT_SYMBOL_GPL(riscv_cbom_block_size);

 static bool noncoherent_supported;
···

 	dev->dma_coherent = coherent;
 }
-
-#ifdef CONFIG_RISCV_ISA_ZICBOM
-void riscv_init_cbom_blocksize(void)
-{
-	struct device_node *node;
-	unsigned long cbom_hartid;
-	u32 val, probed_block_size;
-	int ret;
-
-	probed_block_size = 0;
-	for_each_of_cpu_node(node) {
-		unsigned long hartid;
-
-		ret = riscv_of_processor_hartid(node, &hartid);
-		if (ret)
-			continue;
-
-		/* set block-size for cbom extension if available */
-		ret = of_property_read_u32(node, "riscv,cbom-block-size", &val);
-		if (ret)
-			continue;
-
-		if (!probed_block_size) {
-			probed_block_size = val;
-			cbom_hartid = hartid;
-		} else {
-			if (probed_block_size != val)
-				pr_warn("cbom-block-size mismatched between harts %lu and %lu\n",
-					cbom_hartid, hartid);
-		}
-	}
-
-	if (probed_block_size)
-		riscv_cbom_block_size = probed_block_size;
-}
-#endif

 void riscv_noncoherent_supported(void)
 {
-1
arch/x86/Kconfig
···
 config EFI_STUB
 	bool "EFI stub support"
 	depends on EFI
-	depends on $(cc-option,-mabi=ms) || X86_32
 	select RELOCATABLE
 	help
 	  This kernel feature allows a bzImage to be loaded directly
+1 -1
arch/x86/events/intel/lbr.c
···
 		return;

 clear_arch_lbr:
-	clear_cpu_cap(&boot_cpu_data, X86_FEATURE_ARCH_LBR);
+	setup_clear_cpu_cap(X86_FEATURE_ARCH_LBR);
 }

 /**
+3 -1
arch/x86/include/asm/iommu.h
···
 {
 	u64 start = rmrr->base_address;
 	u64 end = rmrr->end_address + 1;
+	int entry_type;

-	if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
+	entry_type = e820__get_entry_type(start, end);
+	if (entry_type == E820_TYPE_RESERVED || entry_type == E820_TYPE_NVS)
 		return 0;

 	pr_err(FW_BUG "No firmware reserved region can cover this RMRR [%#018Lx-%#018Lx], contact BIOS vendor for fixes\n",
+13 -3
arch/x86/kernel/cpu/microcode/amd.c
···
 		return ret;

 	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
-	if (rev >= mc->hdr.patch_id)
+
+	/*
+	 * Allow application of the same revision to pick up SMT-specific
+	 * changes even if the revision of the other SMT thread is already
+	 * up-to-date.
+	 */
+	if (rev > mc->hdr.patch_id)
 		return ret;

 	if (!__apply_microcode_amd(mc)) {
···

 	native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);

-	/* Check whether we have saved a new patch already: */
-	if (*new_rev && rev < mc->hdr.patch_id) {
+	/*
+	 * Check whether a new patch has been saved already. Also, allow application of
+	 * the same revision in order to pick up SMT-thread-specific configuration even
+	 * if the sibling SMT thread already has an up-to-date revision.
+	 */
+	if (*new_rev && rev <= mc->hdr.patch_id) {
 		if (!__apply_microcode_amd(mc)) {
 			*new_rev = mc->hdr.patch_id;
 			return;
+2 -6
arch/x86/kernel/cpu/resctrl/core.c
···
 		.rid			= RDT_RESOURCE_L3,
 		.name			= "L3",
 		.cache_level		= 3,
-		.cache = {
-			.min_cbm_bits	= 1,
-		},
 		.domains		= domain_init(RDT_RESOURCE_L3),
 		.parse_ctrlval		= parse_cbm,
 		.format_str		= "%d=%0*x",
···
 		.rid			= RDT_RESOURCE_L2,
 		.name			= "L2",
 		.cache_level		= 2,
-		.cache = {
-			.min_cbm_bits	= 1,
-		},
 		.domains		= domain_init(RDT_RESOURCE_L2),
 		.parse_ctrlval		= parse_cbm,
 		.format_str		= "%d=%0*x",
···
 		r->cache.arch_has_sparse_bitmaps = false;
 		r->cache.arch_has_empty_bitmaps = false;
 		r->cache.arch_has_per_cpu_cfg = false;
+		r->cache.min_cbm_bits = 1;
 	} else if (r->rid == RDT_RESOURCE_MBA) {
 		hw_res->msr_base = MSR_IA32_MBA_THRTL_BASE;
 		hw_res->msr_update = mba_wrmsr_intel;
···
 		r->cache.arch_has_sparse_bitmaps = true;
 		r->cache.arch_has_empty_bitmaps = true;
 		r->cache.arch_has_per_cpu_cfg = true;
+		r->cache.min_cbm_bits = 0;
 	} else if (r->rid == RDT_RESOURCE_MBA) {
 		hw_res->msr_base = MSR_IA32_MBA_BW_BASE;
 		hw_res->msr_update = mba_wrmsr_amd;
+12 -6
arch/x86/kernel/cpu/topology.c
···
 	unsigned int ht_mask_width, core_plus_mask_width, die_plus_mask_width;
 	unsigned int core_select_mask, core_level_siblings;
 	unsigned int die_select_mask, die_level_siblings;
+	unsigned int pkg_mask_width;
 	bool die_level_present = false;
 	int leaf;
···
 	core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
 	core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
 	die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
-	die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+	pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);

 	sub_index = 1;
-	do {
+	while (true) {
 		cpuid_count(leaf, sub_index, &eax, &ebx, &ecx, &edx);

 		/*
···
 			die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
 		}

-		sub_index++;
-	} while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
+		if (LEAFB_SUBTYPE(ecx) != INVALID_TYPE)
+			pkg_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
+		else
+			break;

-	core_select_mask = (~(-1 << core_plus_mask_width)) >> ht_mask_width;
+		sub_index++;
+	}
+
+	core_select_mask = (~(-1 << pkg_mask_width)) >> ht_mask_width;
 	die_select_mask = (~(-1 << die_plus_mask_width)) >>
 				core_plus_mask_width;
···
 	}

 	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
-				pkg_mask_width);
 	/*
 	 * Reinit the apicid, now that we have extended initial_apicid.
 	 */
-8
arch/x86/kernel/fpu/init.c
···
 	fpstate_reset(&current->thread.fpu);
 }

-static void __init fpu__init_init_fpstate(void)
-{
-	/* Bring init_fpstate size and features up to date */
-	init_fpstate.size		= fpu_kernel_cfg.max_size;
-	init_fpstate.xfeatures		= fpu_kernel_cfg.max_features;
-}
-
 /*
  * Called on the boot CPU once per system bootup, to set up the initial
  * FPU state that is later cloned into all processes:
···
 	fpu__init_system_xstate_size_legacy();
 	fpu__init_system_xstate(fpu_kernel_cfg.max_size);
 	fpu__init_task_struct_size();
-	fpu__init_init_fpstate();
 }
+23 -19
arch/x86/kernel/fpu/xstate.c
···

 	print_xstate_features();

-	xstate_init_xcomp_bv(&init_fpstate.regs.xsave, fpu_kernel_cfg.max_features);
+	xstate_init_xcomp_bv(&init_fpstate.regs.xsave, init_fpstate.xfeatures);

 	/*
 	 * Init all the features state with header.xfeatures being 0x0
···
 	return ebx;
 }

-/*
- * Will the runtime-enumerated 'xstate_size' fit in the init
- * task's statically-allocated buffer?
- */
-static bool __init is_supported_xstate_size(unsigned int test_xstate_size)
-{
-	if (test_xstate_size <= sizeof(init_fpstate.regs))
-		return true;
-
-	pr_warn("x86/fpu: xstate buffer too small (%zu < %d), disabling xsave\n",
-		sizeof(init_fpstate.regs), test_xstate_size);
-	return false;
-}
-
 static int __init init_xstate_size(void)
 {
 	/* Recompute the context size for enabled features: */
···

 	kernel_default_size =
 		xstate_calculate_size(fpu_kernel_cfg.default_features, compacted);
-
-	/* Ensure we have the space to store all default enabled features. */
-	if (!is_supported_xstate_size(kernel_default_size))
-		return -EINVAL;

 	if (!paranoid_xstate_size_valid(kernel_size))
 		return -EINVAL;
···
 	 */
 	update_regset_xstate_info(fpu_user_cfg.max_size,
 				  fpu_user_cfg.max_features);
+
+	/*
+	 * init_fpstate excludes dynamic states as they are large but init
+	 * state is zero.
+	 */
+	init_fpstate.size = fpu_kernel_cfg.default_size;
+	init_fpstate.xfeatures = fpu_kernel_cfg.default_features;
+
+	if (init_fpstate.size > sizeof(init_fpstate.regs)) {
+		pr_warn("x86/fpu: init_fpstate buffer too small (%zu < %d), disabling XSAVE\n",
+			sizeof(init_fpstate.regs), init_fpstate.size);
+		goto out_disable;
+	}

 	setup_init_fpu_buf();
···
 	 * init_fpstate. The gap tracking will zero these states.
 	 */
 	mask = fpstate->user_xfeatures;
+
+	/*
+	 * Dynamic features are not present in init_fpstate. When they are
+	 * in an all zeros init state, remove those from 'mask' to zero
+	 * those features in the user buffer instead of retrieving them
+	 * from init_fpstate.
+	 */
+	if (fpu_state_size_dynamic())
+		mask &= (header.xfeatures | xinit->header.xcomp_bv);

 	for_each_extended_xfeature(i, mask) {
 		/*
+13 -21
arch/x86/kernel/ftrace_64.S
···
 */

 #include <linux/linkage.h>
+#include <linux/cfi_types.h>
 #include <asm/ptrace.h>
 #include <asm/ftrace.h>
 #include <asm/export.h>
···

 .endm

+SYM_TYPED_FUNC_START(ftrace_stub)
+	RET
+SYM_FUNC_END(ftrace_stub)
+
+SYM_TYPED_FUNC_START(ftrace_stub_graph)
+	RET
+SYM_FUNC_END(ftrace_stub_graph)
+
 #ifdef CONFIG_DYNAMIC_FTRACE

 SYM_FUNC_START(__fentry__)
···
 	 */
 SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
-
-	jmp ftrace_epilogue
+	RET
 SYM_FUNC_END(ftrace_caller);
 STACK_FRAME_NON_STANDARD_FP(ftrace_caller)
-
-SYM_FUNC_START(ftrace_epilogue)
-/*
- * This is weak to keep gas from relaxing the jumps.
- */
-SYM_INNER_LABEL_ALIGN(ftrace_stub, SYM_L_WEAK)
-	UNWIND_HINT_FUNC
-	ENDBR
-	RET
-SYM_FUNC_END(ftrace_epilogue)

 SYM_FUNC_START(ftrace_regs_caller)
 	/* Save the current flags before any operations that can change them */
···
 	popfq

 	/*
-	 * As this jmp to ftrace_epilogue can be a short jump
-	 * it must not be copied into the trampoline.
-	 * The trampoline will add the code to jump
-	 * to the return.
+	 * The trampoline will add the return.
 	 */
 SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
 	ANNOTATE_NOENDBR
-	jmp ftrace_epilogue
+	RET

 	/* Swap the flags with orig_rax */
 1:	movq MCOUNT_REG_SIZE(%rsp), %rdi
···
 	/* Restore flags */
 	popfq
 	UNWIND_HINT_FUNC
-	jmp ftrace_epilogue
+	RET

 SYM_FUNC_END(ftrace_regs_caller)
 STACK_FRAME_NON_STANDARD_FP(ftrace_regs_caller)
···
 SYM_FUNC_START(__fentry__)
 	cmpq $ftrace_stub, ftrace_trace_function
 	jnz trace
-
-SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
-	ENDBR
 	RET

 trace:
+1 -1
arch/x86/kernel/unwind_orc.c
··· 713 713 /* Otherwise, skip ahead to the user-specified starting frame: */ 714 714 while (!unwind_done(state) && 715 715 (!on_stack(&state->stack_info, first_frame, sizeof(long)) || 716 - state->sp < (unsigned long)first_frame)) 716 + state->sp <= (unsigned long)first_frame)) 717 717 unwind_next_frame(state); 718 718 719 719 return;
+73 -14
arch/x86/kvm/x86.c
··· 6442 6442 return 0; 6443 6443 } 6444 6444 6445 - static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, void __user *argp) 6445 + static int kvm_vm_ioctl_set_msr_filter(struct kvm *kvm, 6446 + struct kvm_msr_filter *filter) 6446 6447 { 6447 - struct kvm_msr_filter __user *user_msr_filter = argp; 6448 6448 struct kvm_x86_msr_filter *new_filter, *old_filter; 6449 - struct kvm_msr_filter filter; 6450 6449 bool default_allow; 6451 6450 bool empty = true; 6452 6451 int r = 0; 6453 6452 u32 i; 6454 6453 6455 - if (copy_from_user(&filter, user_msr_filter, sizeof(filter))) 6456 - return -EFAULT; 6457 - 6458 - if (filter.flags & ~KVM_MSR_FILTER_DEFAULT_DENY) 6454 + if (filter->flags & ~KVM_MSR_FILTER_DEFAULT_DENY) 6459 6455 return -EINVAL; 6460 6456 6461 - for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) 6462 - empty &= !filter.ranges[i].nmsrs; 6457 + for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) 6458 + empty &= !filter->ranges[i].nmsrs; 6463 6459 6464 - default_allow = !(filter.flags & KVM_MSR_FILTER_DEFAULT_DENY); 6460 + default_allow = !(filter->flags & KVM_MSR_FILTER_DEFAULT_DENY); 6465 6461 if (empty && !default_allow) 6466 6462 return -EINVAL; 6467 6463 ··· 6465 6469 if (!new_filter) 6466 6470 return -ENOMEM; 6467 6471 6468 - for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) { 6469 - r = kvm_add_msr_filter(new_filter, &filter.ranges[i]); 6472 + for (i = 0; i < ARRAY_SIZE(filter->ranges); i++) { 6473 + r = kvm_add_msr_filter(new_filter, &filter->ranges[i]); 6470 6474 if (r) { 6471 6475 kvm_free_msr_filter(new_filter); 6472 6476 return r; ··· 6488 6492 6489 6493 return 0; 6490 6494 } 6495 + 6496 + #ifdef CONFIG_KVM_COMPAT 6497 + /* for KVM_X86_SET_MSR_FILTER */ 6498 + struct kvm_msr_filter_range_compat { 6499 + __u32 flags; 6500 + __u32 nmsrs; 6501 + __u32 base; 6502 + __u32 bitmap; 6503 + }; 6504 + 6505 + struct kvm_msr_filter_compat { 6506 + __u32 flags; 6507 + struct kvm_msr_filter_range_compat ranges[KVM_MSR_FILTER_MAX_RANGES]; 6508 + }; 6509 + 6510 + #define KVM_X86_SET_MSR_FILTER_COMPAT _IOW(KVMIO, 0xc6, struct kvm_msr_filter_compat) 6511 + 6512 + long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl, 6513 + unsigned long arg) 6514 + { 6515 + void __user *argp = (void __user *)arg; 6516 + struct kvm *kvm = filp->private_data; 6517 + long r = -ENOTTY; 6518 + 6519 + switch (ioctl) { 6520 + case KVM_X86_SET_MSR_FILTER_COMPAT: { 6521 + struct kvm_msr_filter __user *user_msr_filter = argp; 6522 + struct kvm_msr_filter_compat filter_compat; 6523 + struct kvm_msr_filter filter; 6524 + int i; 6525 + 6526 + if (copy_from_user(&filter_compat, user_msr_filter, 6527 + sizeof(filter_compat))) 6528 + return -EFAULT; 6529 + 6530 + filter.flags = filter_compat.flags; 6531 + for (i = 0; i < ARRAY_SIZE(filter.ranges); i++) { 6532 + struct kvm_msr_filter_range_compat *cr; 6533 + 6534 + cr = &filter_compat.ranges[i]; 6535 + filter.ranges[i] = (struct kvm_msr_filter_range) { 6536 + .flags = cr->flags, 6537 + .nmsrs = cr->nmsrs, 6538 + .base = cr->base, 6539 + .bitmap = (__u8 *)(ulong)cr->bitmap, 6540 + }; 6541 + } 6542 + 6543 + r = kvm_vm_ioctl_set_msr_filter(kvm, &filter); 6544 + break; 6545 + } 6546 + } 6547 + 6548 + return r; 6549 + } 6550 + #endif 6491 6551 6492 6552 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER 6493 6553 static int kvm_arch_suspend_notifier(struct kvm *kvm) ··· 6967 6915 case KVM_SET_PMU_EVENT_FILTER: 6968 6916 r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp); 6969 6917 break; 6970 - case KVM_X86_SET_MSR_FILTER: 6971 - r = kvm_vm_ioctl_set_msr_filter(kvm, argp); 6918 + case KVM_X86_SET_MSR_FILTER: { 6919 + struct kvm_msr_filter __user *user_msr_filter = argp; 6920 + struct kvm_msr_filter filter; 6921 + 6922 + if (copy_from_user(&filter, user_msr_filter, sizeof(filter))) 6923 + return -EFAULT; 6924 + 6925 + r = kvm_vm_ioctl_set_msr_filter(kvm, &filter); 6972 6926 break; 6927 + } 6973 6928 default: 6974 6929 r = -ENOTTY; 6975 6930 }
+13
arch/x86/net/bpf_jit_comp.c
··· 11 11 #include <linux/bpf.h> 12 12 #include <linux/memory.h> 13 13 #include <linux/sort.h> 14 + #include <linux/init.h> 14 15 #include <asm/extable.h> 15 16 #include <asm/set_memory.h> 16 17 #include <asm/nospec-branch.h> ··· 387 386 out: 388 387 mutex_unlock(&text_mutex); 389 388 return ret; 389 + } 390 + 391 + int __init bpf_arch_init_dispatcher_early(void *ip) 392 + { 393 + const u8 *nop_insn = x86_nops[5]; 394 + 395 + if (is_endbr(*(u32 *)ip)) 396 + ip += ENDBR_INSN_SIZE; 397 + 398 + if (memcmp(ip, nop_insn, X86_PATCH_SIZE)) 399 + text_poke_early(ip, nop_insn, X86_PATCH_SIZE); 400 + return 0; 390 401 } 391 402 392 403 int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
-4
block/bfq-iosched.h
··· 369 369 unsigned long split_time; /* time of last split */ 370 370 371 371 unsigned long first_IO_time; /* time of first I/O for this queue */ 372 - 373 372 unsigned long creation_time; /* when this queue is created */ 374 - 375 - /* max service rate measured so far */ 376 - u32 max_service_rate; 377 373 378 374 /* 379 375 * Pointer to the waker queue for this queue, i.e., to the
+1 -1
block/bio.c
··· 741 741 return; 742 742 } 743 743 744 - if (bio->bi_opf & REQ_ALLOC_CACHE) { 744 + if ((bio->bi_opf & REQ_ALLOC_CACHE) && !WARN_ON_ONCE(in_interrupt())) { 745 745 struct bio_alloc_cache *cache; 746 746 747 747 bio_uninit(bio);
+5 -2
block/blk-mq.c
··· 3112 3112 struct page *page; 3113 3113 unsigned long flags; 3114 3114 3115 - /* There is no need to clear a driver tags own mapping */ 3116 - if (drv_tags == tags) 3115 + /* 3116 + * There is no need to clear mapping if driver tags is not initialized 3117 + * or the mapping belongs to the driver tags. 3118 + */ 3119 + if (!drv_tags || drv_tags == tags) 3117 3120 return; 3118 3121 3119 3122 list_for_each_entry(page, &tags->page_list, lru) {
+20 -13
drivers/acpi/acpi_extlog.c
··· 12 12 #include <linux/ratelimit.h> 13 13 #include <linux/edac.h> 14 14 #include <linux/ras.h> 15 + #include <acpi/ghes.h> 15 16 #include <asm/cpu.h> 16 17 #include <asm/mce.h> 17 18 ··· 139 138 int cpu = mce->extcpu; 140 139 struct acpi_hest_generic_status *estatus, *tmp; 141 140 struct acpi_hest_generic_data *gdata; 142 - const guid_t *fru_id = &guid_null; 143 - char *fru_text = ""; 141 + const guid_t *fru_id; 142 + char *fru_text; 144 143 guid_t *sec_type; 145 144 static u32 err_seq; 146 145 ··· 161 160 162 161 /* log event via trace */ 163 162 err_seq++; 164 - gdata = (struct acpi_hest_generic_data *)(tmp + 1); 165 - if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID) 166 - fru_id = (guid_t *)gdata->fru_id; 167 - if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT) 168 - fru_text = gdata->fru_text; 169 - sec_type = (guid_t *)gdata->section_type; 170 - if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) { 171 - struct cper_sec_mem_err *mem = (void *)(gdata + 1); 172 - if (gdata->error_data_length >= sizeof(*mem)) 173 - trace_extlog_mem_event(mem, err_seq, fru_id, fru_text, 174 - (u8)gdata->error_severity); 163 + apei_estatus_for_each_section(tmp, gdata) { 164 + if (gdata->validation_bits & CPER_SEC_VALID_FRU_ID) 165 + fru_id = (guid_t *)gdata->fru_id; 166 + else 167 + fru_id = &guid_null; 168 + if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT) 169 + fru_text = gdata->fru_text; 170 + else 171 + fru_text = ""; 172 + sec_type = (guid_t *)gdata->section_type; 173 + if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) { 174 + struct cper_sec_mem_err *mem = (void *)(gdata + 1); 175 + 176 + if (gdata->error_data_length >= sizeof(*mem)) 177 + trace_extlog_mem_event(mem, err_seq, fru_id, fru_text, 178 + (u8)gdata->error_severity); 179 + } 175 180 } 176 181 177 182 out:
+1 -1
drivers/acpi/apei/ghes.c
··· 163 163 clear_fixmap(fixmap_idx); 164 164 } 165 165 166 - int ghes_estatus_pool_init(int num_ghes) 166 + int ghes_estatus_pool_init(unsigned int num_ghes) 167 167 { 168 168 unsigned long addr, len; 169 169 int rc;
+2 -1
drivers/acpi/arm64/iort.c
··· 1142 1142 struct iommu_resv_region *region; 1143 1143 1144 1144 region = iommu_alloc_resv_region(base + SZ_64K, SZ_64K, 1145 - prot, IOMMU_RESV_MSI); 1145 + prot, IOMMU_RESV_MSI, 1146 + GFP_KERNEL); 1146 1147 if (region) 1147 1148 list_add_tail(&region->list, head); 1148 1149 }
+1
drivers/acpi/pci_root.c
··· 323 323 324 324 list_for_each_entry(pn, &adev->physical_node_list, node) { 325 325 if (dev_is_pci(pn->dev)) { 326 + get_device(pn->dev); 326 327 pci_dev = to_pci_dev(pn->dev); 327 328 break; 328 329 }
+33 -16
drivers/acpi/resource.c
··· 428 428 { } 429 429 }; 430 430 431 + static const struct dmi_system_id lenovo_82ra[] = { 432 + { 433 + .ident = "LENOVO IdeaPad Flex 5 16ALC7", 434 + .matches = { 435 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 436 + DMI_MATCH(DMI_PRODUCT_NAME, "82RA"), 437 + }, 438 + }, 439 + { } 440 + }; 441 + 431 442 struct irq_override_cmp { 432 443 const struct dmi_system_id *system; 433 444 unsigned char irq; 434 445 unsigned char triggering; 435 446 unsigned char polarity; 436 447 unsigned char shareable; 448 + bool override; 437 449 }; 438 450 439 - static const struct irq_override_cmp skip_override_table[] = { 440 - { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 }, 441 - { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0 }, 451 + static const struct irq_override_cmp override_table[] = { 452 + { medion_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false }, 453 + { asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false }, 454 + { lenovo_82ra, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true }, 455 + { lenovo_82ra, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true }, 442 456 }; 443 457 444 458 static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity, 445 459 u8 shareable) 446 460 { 447 461 int i; 462 + 463 + for (i = 0; i < ARRAY_SIZE(override_table); i++) { 464 + const struct irq_override_cmp *entry = &override_table[i]; 465 + 466 + if (dmi_check_system(entry->system) && 467 + entry->irq == gsi && 468 + entry->triggering == triggering && 469 + entry->polarity == polarity && 470 + entry->shareable == shareable) 471 + return entry->override; 472 + } 448 473 449 474 #ifdef CONFIG_X86 450 475 /* ··· 480 455 if (boot_cpu_has(X86_FEATURE_ZEN)) 481 456 return false; 482 457 #endif 483 - 484 - for (i = 0; i < ARRAY_SIZE(skip_override_table); i++) { 485 - const struct irq_override_cmp *entry = &skip_override_table[i]; 486 - 487 - if (dmi_check_system(entry->system) && 488 - entry->irq == gsi && 489 - entry->triggering == triggering && 490 - entry->polarity == polarity && 491 - entry->shareable == shareable) 492 - return false; 493 - } 494 458 495 459 return true; 496 460 } ··· 512 498 u8 pol = p ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 513 499 514 500 if (triggering != trig || polarity != pol) { 515 - pr_warn("ACPI: IRQ %d override to %s, %s\n", gsi, 516 - t ? "level" : "edge", p ? "low" : "high"); 501 + pr_warn("ACPI: IRQ %d override to %s%s, %s%s\n", gsi, 502 + t ? "level" : "edge", 503 + trig == triggering ? "" : "(!)", 504 + p ? "low" : "high", 505 + pol == polarity ? "" : "(!)"); 517 506 triggering = trig; 518 507 polarity = pol; 519 508 }
+4 -3
drivers/acpi/scan.c
··· 1509 1509 goto out; 1510 1510 } 1511 1511 1512 + *map = r; 1513 + 1512 1514 list_for_each_entry(rentry, &list, node) { 1513 1515 if (rentry->res->start >= rentry->res->end) { 1514 - kfree(r); 1516 + kfree(*map); 1517 + *map = NULL; 1515 1518 ret = -EINVAL; 1516 1519 dev_dbg(dma_dev, "Invalid DMA regions configuration\n"); 1517 1520 goto out; ··· 1526 1523 r->offset = rentry->offset; 1527 1524 r++; 1528 1525 } 1529 - 1530 - *map = r; 1531 1526 } 1532 1527 out: 1533 1528 acpi_dev_free_resource_list(&list);
+6 -8
drivers/block/drbd/drbd_req.c
··· 30 30 return NULL; 31 31 memset(req, 0, sizeof(*req)); 32 32 33 - req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, bio_src, 34 - GFP_NOIO, &drbd_io_bio_set); 35 - req->private_bio->bi_private = req; 36 - req->private_bio->bi_end_io = drbd_request_endio; 37 - 38 33 req->rq_state = (bio_data_dir(bio_src) == WRITE ? RQ_WRITE : 0) 39 34 | (bio_op(bio_src) == REQ_OP_WRITE_ZEROES ? RQ_ZEROES : 0) 40 35 | (bio_op(bio_src) == REQ_OP_DISCARD ? RQ_UNMAP : 0); ··· 1214 1219 /* Update disk stats */ 1215 1220 req->start_jif = bio_start_io_acct(req->master_bio); 1216 1221 1217 - if (!get_ldev(device)) { 1218 - bio_put(req->private_bio); 1219 - req->private_bio = NULL; 1222 + if (get_ldev(device)) { 1223 + req->private_bio = bio_alloc_clone(device->ldev->backing_bdev, 1224 + bio, GFP_NOIO, 1225 + &drbd_io_bio_set); 1226 + req->private_bio->bi_private = req; 1227 + req->private_bio->bi_end_io = drbd_request_endio; 1220 1228 } 1221 1229 1222 1230 /* process discards always from our submitter thread */
+1 -1
drivers/block/ublk_drv.c
··· 124 124 bool force_abort; 125 125 unsigned short nr_io_ready; /* how many ios setup */ 126 126 struct ublk_device *dev; 127 - struct ublk_io ios[0]; 127 + struct ublk_io ios[]; 128 128 }; 129 129 130 130 #define UBLK_DAEMON_MONITOR_PERIOD (5 * HZ)
+2 -4
drivers/cpufreq/cpufreq-dt.c
··· 222 222 if (reg_name[0]) { 223 223 priv->opp_token = dev_pm_opp_set_regulators(cpu_dev, reg_name); 224 224 if (priv->opp_token < 0) { 225 - ret = priv->opp_token; 226 - if (ret != -EPROBE_DEFER) 227 - dev_err(cpu_dev, "failed to set regulators: %d\n", 228 - ret); 225 + ret = dev_err_probe(cpu_dev, priv->opp_token, 226 + "failed to set regulators\n"); 229 227 goto free_cpumask; 230 228 } 231 229 }
+1 -3
drivers/cpufreq/imx6q-cpufreq.c
··· 396 396 ret = imx6q_opp_check_speed_grading(cpu_dev); 397 397 } 398 398 if (ret) { 399 - if (ret != -EPROBE_DEFER) 400 - dev_err(cpu_dev, "failed to read ocotp: %d\n", 401 - ret); 399 + dev_err_probe(cpu_dev, ret, "failed to read ocotp\n"); 402 400 goto out_free_opp; 403 401 } 404 402
+13 -12
drivers/cpufreq/qcom-cpufreq-nvmem.c
··· 64 64 65 65 static void get_krait_bin_format_a(struct device *cpu_dev, 66 66 int *speed, int *pvs, int *pvs_ver, 67 - struct nvmem_cell *pvs_nvmem, u8 *buf) 67 + u8 *buf) 68 68 { 69 69 u32 pte_efuse; 70 70 ··· 95 95 96 96 static void get_krait_bin_format_b(struct device *cpu_dev, 97 97 int *speed, int *pvs, int *pvs_ver, 98 - struct nvmem_cell *pvs_nvmem, u8 *buf) 98 + u8 *buf) 99 99 { 100 100 u32 pte_efuse, redundant_sel; 101 101 ··· 213 213 int speed = 0, pvs = 0, pvs_ver = 0; 214 214 u8 *speedbin; 215 215 size_t len; 216 + int ret = 0; 216 217 217 218 speedbin = nvmem_cell_read(speedbin_nvmem, &len); 218 219 ··· 223 222 switch (len) { 224 223 case 4: 225 224 get_krait_bin_format_a(cpu_dev, &speed, &pvs, &pvs_ver, 226 - speedbin_nvmem, speedbin); 225 + speedbin); 227 226 break; 228 227 case 8: 229 228 get_krait_bin_format_b(cpu_dev, &speed, &pvs, &pvs_ver, 230 - speedbin_nvmem, speedbin); 229 + speedbin); 231 230 break; 232 231 default: 233 232 dev_err(cpu_dev, "Unable to read nvmem data. Defaulting to 0!\n"); 234 - return -ENODEV; 233 + ret = -ENODEV; 234 + goto len_error; 235 235 } 236 236 237 237 snprintf(*pvs_name, sizeof("speedXX-pvsXX-vXX"), "speed%d-pvs%d-v%d", ··· 240 238 241 239 drv->versions = (1 << speed); 242 240 241 + len_error: 243 242 kfree(speedbin); 244 - return 0; 243 + return ret; 245 244 } 246 245 247 246 static const struct qcom_cpufreq_match_data match_data_kryo = { ··· 265 262 struct nvmem_cell *speedbin_nvmem; 266 263 struct device_node *np; 267 264 struct device *cpu_dev; 268 - char *pvs_name = "speedXX-pvsXX-vXX"; 265 + char pvs_name_buffer[] = "speedXX-pvsXX-vXX"; 266 + char *pvs_name = pvs_name_buffer; 269 267 unsigned cpu; 270 268 const struct of_device_id *match; 271 269 int ret; ··· 299 295 if (drv->data->get_version) { 300 296 speedbin_nvmem = of_nvmem_cell_get(np, NULL); 301 297 if (IS_ERR(speedbin_nvmem)) { 302 - if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER) 303 - dev_err(cpu_dev, 304 - "Could not get nvmem cell: %ld\n", 305 - PTR_ERR(speedbin_nvmem)); 306 - ret = PTR_ERR(speedbin_nvmem); 298 + ret = dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem), 299 + "Could not get nvmem cell\n"); 307 300 goto free_drv; 308 301 } 309 302
+3 -6
drivers/cpufreq/sun50i-cpufreq-nvmem.c
··· 56 56 57 57 speedbin_nvmem = of_nvmem_cell_get(np, NULL); 58 58 of_node_put(np); 59 - if (IS_ERR(speedbin_nvmem)) { 60 - if (PTR_ERR(speedbin_nvmem) != -EPROBE_DEFER) 61 - pr_err("Could not get nvmem cell: %ld\n", 62 - PTR_ERR(speedbin_nvmem)); 63 - return PTR_ERR(speedbin_nvmem); 64 - } 59 + if (IS_ERR(speedbin_nvmem)) 60 + return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem), 61 + "Could not get nvmem cell\n"); 65 62 66 63 speedbin = nvmem_cell_read(speedbin_nvmem, &len); 67 64 nvmem_cell_put(speedbin_nvmem);
+1
drivers/cpufreq/tegra194-cpufreq.c
··· 589 589 { .compatible = "nvidia,tegra239-ccplex-cluster", .data = &tegra239_cpufreq_soc }, 590 590 { /* sentinel */ } 591 591 }; 592 + MODULE_DEVICE_TABLE(of, tegra194_cpufreq_of_match); 592 593 593 594 static struct platform_driver tegra194_ccplex_driver = { 594 595 .driver = {
-22
drivers/firmware/efi/Kconfig
··· 124 124 is supported by the encapsulated image. (The compression algorithm 125 125 used is described in the zboot image header) 126 126 127 - config EFI_ZBOOT_SIGNED 128 - def_bool y 129 - depends on EFI_ZBOOT_SIGNING_CERT != "" 130 - depends on EFI_ZBOOT_SIGNING_KEY != "" 131 - 132 - config EFI_ZBOOT_SIGNING 133 - bool "Sign the EFI decompressor for UEFI secure boot" 134 - depends on EFI_ZBOOT 135 - help 136 - Use the 'sbsign' command line tool (which must exist on the host 137 - path) to sign both the EFI decompressor PE/COFF image, as well as the 138 - encapsulated PE/COFF image, which is subsequently compressed and 139 - wrapped by the former image. 140 - 141 - config EFI_ZBOOT_SIGNING_CERT 142 - string "Certificate to use for signing the compressed EFI boot image" 143 - depends on EFI_ZBOOT_SIGNING 144 - 145 - config EFI_ZBOOT_SIGNING_KEY 146 - string "Private key to use for signing the compressed EFI boot image" 147 - depends on EFI_ZBOOT_SIGNING 148 - 149 127 config EFI_ARMSTUB_DTB_LOADER 150 128 bool "Enable the DTB loader" 151 129 depends on EFI_GENERIC_STUB && !RISCV && !LOONGARCH
+1 -1
drivers/firmware/efi/arm-runtime.c
··· 63 63 64 64 if (!(md->attribute & EFI_MEMORY_RUNTIME)) 65 65 continue; 66 - if (md->virt_addr == 0) 66 + if (md->virt_addr == U64_MAX) 67 67 return false; 68 68 69 69 ret = efi_create_mapping(&efi_mm, md);
+2
drivers/firmware/efi/efi.c
··· 271 271 acpi_status ret = acpi_load_table(data, NULL); 272 272 if (ret) 273 273 pr_err("failed to load table: %u\n", ret); 274 + else 275 + continue; 274 276 } else { 275 277 pr_err("failed to get var data: 0x%lx\n", status); 276 278 }
+4 -25
drivers/firmware/efi/libstub/Makefile.zboot
··· 20 20 zboot-method-$(CONFIG_KERNEL_GZIP) := gzip 21 21 zboot-size-len-$(CONFIG_KERNEL_GZIP) := 0 22 22 23 - quiet_cmd_sbsign = SBSIGN $@ 24 - cmd_sbsign = sbsign --out $@ $< \ 25 - --key $(CONFIG_EFI_ZBOOT_SIGNING_KEY) \ 26 - --cert $(CONFIG_EFI_ZBOOT_SIGNING_CERT) 27 - 28 - $(obj)/$(EFI_ZBOOT_PAYLOAD).signed: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE 29 - $(call if_changed,sbsign) 30 - 31 - ZBOOT_PAYLOAD-y := $(EFI_ZBOOT_PAYLOAD) 32 - ZBOOT_PAYLOAD-$(CONFIG_EFI_ZBOOT_SIGNED) := $(EFI_ZBOOT_PAYLOAD).signed 33 - 34 - $(obj)/vmlinuz: $(obj)/$(ZBOOT_PAYLOAD-y) FORCE 23 + $(obj)/vmlinuz: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE 35 24 $(call if_changed,$(zboot-method-y)) 36 25 37 26 OBJCOPYFLAGS_vmlinuz.o := -I binary -O $(EFI_ZBOOT_BFD_TARGET) \ 38 - --rename-section .data=.gzdata,load,alloc,readonly,contents 27 + --rename-section .data=.gzdata,load,alloc,readonly,contents 39 28 $(obj)/vmlinuz.o: $(obj)/vmlinuz FORCE 40 29 $(call if_changed,objcopy) 41 30 ··· 42 53 $(obj)/vmlinuz.efi.elf: $(obj)/vmlinuz.o $(ZBOOT_DEPS) FORCE 43 54 $(call if_changed,ld) 44 55 45 - ZBOOT_EFI-y := vmlinuz.efi 46 - ZBOOT_EFI-$(CONFIG_EFI_ZBOOT_SIGNED) := vmlinuz.efi.unsigned 47 - 48 - OBJCOPYFLAGS_$(ZBOOT_EFI-y) := -O binary 49 - $(obj)/$(ZBOOT_EFI-y): $(obj)/vmlinuz.efi.elf FORCE 56 + OBJCOPYFLAGS_vmlinuz.efi := -O binary 57 + $(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.elf FORCE 50 58 $(call if_changed,objcopy) 51 59 52 60 targets += zboot-header.o vmlinuz vmlinuz.o vmlinuz.efi.elf vmlinuz.efi 53 - 54 - ifneq ($(CONFIG_EFI_ZBOOT_SIGNED),) 55 - $(obj)/vmlinuz.efi: $(obj)/vmlinuz.efi.unsigned FORCE 56 - $(call if_changed,sbsign) 57 - endif 58 - 59 - targets += $(EFI_ZBOOT_PAYLOAD).signed vmlinuz.efi.unsigned
+4 -4
drivers/firmware/efi/libstub/fdt.c
··· 313 313 314 314 /* 315 315 * Set the virtual address field of all 316 - * EFI_MEMORY_RUNTIME entries to 0. This will signal 317 - * the incoming kernel that no virtual translation has 318 - * been installed. 316 + * EFI_MEMORY_RUNTIME entries to U64_MAX. This will 317 + * signal the incoming kernel that no virtual 318 + * translation has been installed. 319 319 */ 320 320 for (l = 0; l < priv.boot_memmap->map_size; 321 321 l += priv.boot_memmap->desc_size) { 322 322 p = (void *)priv.boot_memmap->map + l; 323 323 324 324 if (p->attribute & EFI_MEMORY_RUNTIME) 325 - p->virt_addr = 0; 325 + p->virt_addr = U64_MAX; 326 326 } 327 327 } 328 328 return EFI_SUCCESS;
+3 -3
drivers/firmware/efi/libstub/x86-stub.c
··· 765 765 * relocated by efi_relocate_kernel. 766 766 * On failure, we exit to the firmware via efi_exit instead of returning. 767 767 */ 768 - unsigned long efi_main(efi_handle_t handle, 769 - efi_system_table_t *sys_table_arg, 770 - struct boot_params *boot_params) 768 + asmlinkage unsigned long efi_main(efi_handle_t handle, 769 + efi_system_table_t *sys_table_arg, 770 + struct boot_params *boot_params) 771 771 { 772 772 unsigned long bzimage_addr = (unsigned long)startup_32; 773 773 unsigned long buffer_start, buffer_end;
+2 -1
drivers/firmware/efi/libstub/zboot.lds
··· 38 38 } 39 39 } 40 40 41 - PROVIDE(__efistub__gzdata_size = ABSOLUTE(. - __efistub__gzdata_start)); 41 + PROVIDE(__efistub__gzdata_size = 42 + ABSOLUTE(__efistub__gzdata_end - __efistub__gzdata_start)); 42 43 43 44 PROVIDE(__data_rawsize = ABSOLUTE(_edata - _etext)); 44 45 PROVIDE(__data_size = ABSOLUTE(_end - _etext));
+1 -1
drivers/firmware/efi/riscv-runtime.c
··· 41 41 42 42 if (!(md->attribute & EFI_MEMORY_RUNTIME)) 43 43 continue; 44 - if (md->virt_addr == 0) 44 + if (md->virt_addr == U64_MAX) 45 45 return false; 46 46 47 47 ret = efi_create_mapping(&efi_mm, md);
+5 -5
drivers/firmware/efi/vars.c
··· 7 7 */ 8 8 9 9 #include <linux/types.h> 10 + #include <linux/sizes.h> 10 11 #include <linux/errno.h> 11 12 #include <linux/init.h> 12 13 #include <linux/module.h> ··· 21 20 22 21 static DEFINE_SEMAPHORE(efivars_lock); 23 22 24 - efi_status_t check_var_size(u32 attributes, unsigned long size) 23 + static efi_status_t check_var_size(u32 attributes, unsigned long size) 25 24 { 26 25 const struct efivar_operations *fops; 27 26 28 27 fops = __efivars->ops; 29 28 30 29 if (!fops->query_variable_store) 31 - return EFI_UNSUPPORTED; 30 + return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES; 32 31 33 32 return fops->query_variable_store(attributes, size, false); 34 33 } 35 - EXPORT_SYMBOL_NS_GPL(check_var_size, EFIVAR); 36 34 35 + static 37 36 efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size) 38 37 { 39 38 const struct efivar_operations *fops; ··· 41 40 fops = __efivars->ops; 42 41 43 42 if (!fops->query_variable_store) 44 - return EFI_UNSUPPORTED; 43 + return (size <= SZ_64K) ? EFI_SUCCESS : EFI_OUT_OF_RESOURCES; 45 44 46 45 return fops->query_variable_store(attributes, size, true); 47 46 } 48 - EXPORT_SYMBOL_NS_GPL(check_var_size_nonblocking, EFIVAR); 49 47 50 48 /** 51 49 * efivars_kobject - get the kobject for the registered efivars
-4
drivers/gpu/drm/amd/amdgpu/amdgpu.h
··· 274 274 #define AMDGPU_RESET_VCE (1 << 13) 275 275 #define AMDGPU_RESET_VCE1 (1 << 14) 276 276 277 - #define AMDGPU_RESET_LEVEL_SOFT_RECOVERY (1 << 0) 278 - #define AMDGPU_RESET_LEVEL_MODE2 (1 << 1) 279 - 280 277 /* max cursor sizes (in pixels) */ 281 278 #define CIK_CURSOR_WIDTH 128 282 279 #define CIK_CURSOR_HEIGHT 128 ··· 1062 1065 1063 1066 struct work_struct reset_work; 1064 1067 1065 - uint32_t amdgpu_reset_level_mask; 1066 1068 bool job_hang; 1067 1069 }; 1068 1070
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 134 134 reset_context.method = AMD_RESET_METHOD_NONE; 135 135 reset_context.reset_req_dev = adev; 136 136 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 137 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 138 137 139 138 amdgpu_device_gpu_recover(adev, NULL, &reset_context); 140 139 }
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
··· 111 111 112 112 lock_srbm(adev, mec, pipe, 0, 0); 113 113 114 - WREG32(SOC15_REG_OFFSET(GC, 0, regCPC_INT_CNTL), 114 + WREG32_SOC15(GC, 0, regCPC_INT_CNTL, 115 115 CP_INT_CNTL_RING0__TIME_STAMP_INT_ENABLE_MASK | 116 116 CP_INT_CNTL_RING0__OPCODE_ERROR_INT_ENABLE_MASK); 117 117
-2
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
··· 1954 1954 return PTR_ERR(ent); 1955 1955 } 1956 1956 1957 - debugfs_create_u32("amdgpu_reset_level", 0600, root, &adev->amdgpu_reset_level_mask); 1958 - 1959 1957 /* Register debugfs entries for amdgpu_ttm */ 1960 1958 amdgpu_ttm_debugfs_init(adev); 1961 1959 amdgpu_debugfs_pm_init(adev);
+9 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 2928 2928 amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE); 2929 2929 amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE); 2930 2930 2931 + /* 2932 + * Per PMFW team's suggestion, driver needs to handle gfxoff 2933 + * and df cstate features disablement for gpu reset(e.g. Mode1Reset) 2934 + * scenario. Add the missing df cstate disablement here. 2935 + */ 2936 + if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW)) 2937 + dev_warn(adev->dev, "Failed to disallow df cstate"); 2938 + 2931 2939 for (i = adev->num_ip_blocks - 1; i >= 0; i--) { 2932 2940 if (!adev->ip_blocks[i].status.valid) 2933 2941 continue; ··· 5218 5210 5219 5211 reset_context->job = job; 5220 5212 reset_context->hive = hive; 5221 - 5222 5213 /* 5223 5214 * Build list of devices to reset. 5224 5215 * In case we are in XGMI hive mode, resort the device list ··· 5344 5337 amdgpu_ras_resume(adev); 5345 5338 } else { 5346 5339 r = amdgpu_do_asic_reset(device_list_handle, reset_context); 5347 - if (r && r == -EAGAIN) { 5348 - set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags); 5349 - adev->asic_reset_res = 0; 5340 + if (r && r == -EAGAIN) 5350 5341 goto retry; 5351 - } 5352 5342 5353 5343 if (!r && gpu_reset_for_dev_remove) 5354 5344 goto recover_end; ··· 5781 5777 reset_context.reset_req_dev = adev; 5782 5778 set_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 5783 5779 set_bit(AMDGPU_SKIP_HW_RESET, &reset_context.flags); 5784 - set_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 5785 5780 5786 5781 adev->no_hw_access = true; 5787 5782 r = amdgpu_device_pre_asic_reset(adev, &reset_context);
-1
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
··· 72 72 reset_context.method = AMD_RESET_METHOD_NONE; 73 73 reset_context.reset_req_dev = adev; 74 74 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 75 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 76 75 77 76 r = amdgpu_device_gpu_recover(ring->adev, job, &reset_context); 78 77 if (r)
+19 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
··· 1950 1950 reset_context.method = AMD_RESET_METHOD_NONE; 1951 1951 reset_context.reset_req_dev = adev; 1952 1952 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 1953 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 1954 1953 1955 1954 amdgpu_device_gpu_recover(ras->adev, NULL, &reset_context); 1956 1955 } ··· 2267 2268 2268 2269 static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev) 2269 2270 { 2271 + if (amdgpu_sriov_vf(adev)) { 2272 + switch (adev->ip_versions[MP0_HWIP][0]) { 2273 + case IP_VERSION(13, 0, 2): 2274 + return true; 2275 + default: 2276 + return false; 2277 + } 2278 + } 2279 + 2280 + if (adev->asic_type == CHIP_IP_DISCOVERY) { 2281 + switch (adev->ip_versions[MP0_HWIP][0]) { 2282 + case IP_VERSION(13, 0, 0): 2283 + case IP_VERSION(13, 0, 10): 2284 + return true; 2285 + default: 2286 + return false; 2287 + } 2288 + } 2289 + 2270 2290 return adev->asic_type == CHIP_VEGA10 || 2271 2291 adev->asic_type == CHIP_VEGA20 || 2272 2292 adev->asic_type == CHIP_ARCTURUS || ··· 2327 2309 2328 2310 if (!adev->is_atom_fw || 2329 2311 !amdgpu_ras_asic_supported(adev)) 2330 - return; 2331 - 2332 - /* If driver run on sriov guest side, only enable ras for aldebaran */ 2333 - if (amdgpu_sriov_vf(adev) && 2334 - adev->ip_versions[MP1_HWIP][0] != IP_VERSION(13, 0, 2)) 2335 2312 return; 2336 2313 2337 2314 if (!adev->gmc.xgmi.connected_to_cpu) {
-14
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c
··· 37 37 { 38 38 int ret = 0; 39 39 40 - adev->amdgpu_reset_level_mask = 0x1; 41 - 42 40 switch (adev->ip_versions[MP1_HWIP][0]) { 43 41 case IP_VERSION(13, 0, 2): 44 42 ret = aldebaran_reset_init(adev); ··· 74 76 { 75 77 struct amdgpu_reset_handler *reset_handler = NULL; 76 78 77 - if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2)) 78 - return -ENOSYS; 79 - 80 - if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags)) 81 - return -ENOSYS; 82 - 83 79 if (adev->reset_cntl && adev->reset_cntl->get_reset_handler) 84 80 reset_handler = adev->reset_cntl->get_reset_handler( 85 81 adev->reset_cntl, reset_context); ··· 89 97 { 90 98 int ret; 91 99 struct amdgpu_reset_handler *reset_handler = NULL; 92 - 93 - if (!(adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_MODE2)) 94 - return -ENOSYS; 95 - 96 - if (test_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context->flags)) 97 - return -ENOSYS; 98 100 99 101 if (adev->reset_cntl) 100 102 reset_handler = adev->reset_cntl->get_reset_handler(
+1 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_reset.h
··· 30 30 31 31 AMDGPU_NEED_FULL_RESET = 0, 32 32 AMDGPU_SKIP_HW_RESET = 1, 33 - AMDGPU_SKIP_MODE2_RESET = 2, 34 - AMDGPU_RESET_FOR_DEVICE_REMOVE = 3, 33 + AMDGPU_RESET_FOR_DEVICE_REMOVE = 2, 35 34 }; 36 35 37 36 struct amdgpu_reset_context {
-3
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
··· 405 405 { 406 406 ktime_t deadline = ktime_add_us(ktime_get(), 10000); 407 407 408 - if (!(ring->adev->amdgpu_reset_level_mask & AMDGPU_RESET_LEVEL_SOFT_RECOVERY)) 409 - return false; 410 - 411 408 if (amdgpu_sriov_vf(ring->adev) || !ring->funcs->soft_recovery || !fence) 412 409 return false; 413 410
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 439 439 while (cursor.remaining) { 440 440 amdgpu_res_next(&cursor, cursor.size); 441 441 442 + if (!cursor.remaining) 443 + break; 444 + 442 445 /* ttm_resource_ioremap only supports contiguous memory */ 443 446 if (end != cursor.start) 444 447 return false;
+6
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
··· 726 726 adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE; 727 727 } 728 728 729 + if (amdgpu_sriov_vf(adev) && adev->asic_type == CHIP_SIENNA_CICHLID) 730 + /* VF MMIO access (except mailbox range) from CPU 731 + * will be blocked during sriov runtime 732 + */ 733 + adev->virt.caps |= AMDGPU_VF_MMIO_ACCESS_PROTECT; 734 + 729 735 /* we have the ability to check now */ 730 736 if (amdgpu_sriov_vf(adev)) { 731 737 switch (adev->asic_type) {
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
··· 31 31 #define AMDGPU_SRIOV_CAPS_IS_VF (1 << 2) /* this GPU is a virtual function */ 32 32 #define AMDGPU_PASSTHROUGH_MODE (1 << 3) /* thw whole GPU is pass through for VM */ 33 33 #define AMDGPU_SRIOV_CAPS_RUNTIME (1 << 4) /* is out of full access mode */ 34 + #define AMDGPU_VF_MMIO_ACCESS_PROTECT (1 << 5) /* MMIO write access is not allowed in sriov runtime */ 34 35 35 36 /* flags for indirect register access path supported by rlcg for sriov */ 36 37 #define AMDGPU_RLCG_GC_WRITE_LEGACY (0x8 << 28) ··· 297 296 298 297 #define amdgpu_passthrough(adev) \ 299 298 ((adev)->virt.caps & AMDGPU_PASSTHROUGH_MODE) 299 + 300 + #define amdgpu_sriov_vf_mmio_access_protection(adev) \ 301 + ((adev)->virt.caps & AMDGPU_VF_MMIO_ACCESS_PROTECT) 300 302 301 303 static inline bool is_virtual_machine(void) 302 304 {
+5 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2338 2338 */ 2339 2339 #ifdef CONFIG_X86_64 2340 2340 if (amdgpu_vm_update_mode == -1) { 2341 - if (amdgpu_gmc_vram_full_visible(&adev->gmc)) 2341 + /* For asic with VF MMIO access protection 2342 + * avoid using CPU for VM table updates 2343 + */ 2344 + if (amdgpu_gmc_vram_full_visible(&adev->gmc) && 2345 + !amdgpu_sriov_vf_mmio_access_protection(adev)) 2342 2346 adev->vm_manager.vm_update_mode = 2343 2347 AMDGPU_VM_USE_CPU_FOR_COMPUTE; 2344 2348 else
+8 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
··· 116 116 DMA_RESV_USAGE_BOOKKEEP); 117 117 } 118 118 119 - if (fence && !p->immediate) 119 + if (fence && !p->immediate) { 120 + /* 121 + * Most hw generations now have a separate queue for page table 122 + * updates, but when the queue is shared with userspace we need 123 + * the extra CPU round trip to correctly flush the TLB. 124 + */ 125 + set_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &f->flags); 120 126 swap(*fence, f); 127 + } 121 128 dma_fence_put(f); 122 129 return 0; 123 130
+2 -1
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 1571 1571 WREG32_SOC15(GC, 0, regSH_MEM_BASES, sh_mem_bases); 1572 1572 1573 1573 /* Enable trap for each kfd vmid. */ 1574 - data = RREG32(SOC15_REG_OFFSET(GC, 0, regSPI_GDBG_PER_VMID_CNTL)); 1574 + data = RREG32_SOC15(GC, 0, regSPI_GDBG_PER_VMID_CNTL); 1575 1575 data = REG_SET_FIELD(data, SPI_GDBG_PER_VMID_CNTL, TRAP_EN, 1); 1576 1576 } 1577 1577 soc21_grbm_select(adev, 0, 0, 0, 0); ··· 5076 5076 case IP_VERSION(11, 0, 0): 5077 5077 case IP_VERSION(11, 0, 1): 5078 5078 case IP_VERSION(11, 0, 2): 5079 + case IP_VERSION(11, 0, 3): 5079 5080 gfx_v11_0_update_gfx_clock_gating(adev, 5080 5081 state == AMD_CG_STATE_GATE); 5081 5082 break;
+11 -7
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
··· 186 186 /* Use register 17 for GART */ 187 187 const unsigned eng = 17; 188 188 unsigned int i; 189 + unsigned char hub_ip = 0; 190 + 191 + hub_ip = (vmhub == AMDGPU_GFXHUB_0) ? 192 + GC_HWIP : MMHUB_HWIP; 189 193 190 194 spin_lock(&adev->gmc.invalidate_lock); 191 195 /* ··· 203 199 if (use_semaphore) { 204 200 for (i = 0; i < adev->usec_timeout; i++) { 205 201 /* a read return value of 1 means semaphore acuqire */ 206 - tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_sem + 207 - hub->eng_distance * eng); 202 + tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem + 203 + hub->eng_distance * eng, hub_ip); 208 204 if (tmp & 0x1) 209 205 break; 210 206 udelay(1); ··· 214 210 DRM_ERROR("Timeout waiting for sem acquire in VM flush!\n"); 215 211 } 216 212 217 - WREG32_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req); 213 + WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_req + hub->eng_distance * eng, inv_req, hub_ip); 218 214 219 215 /* Wait for ACK with a delay.*/ 220 216 for (i = 0; i < adev->usec_timeout; i++) { 221 - tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack + 222 - hub->eng_distance * eng); 217 + tmp = RREG32_RLC_NO_KIQ(hub->vm_inv_eng0_ack + 218 + hub->eng_distance * eng, hub_ip); 223 219 tmp &= 1 << vmid; 224 220 if (tmp) 225 221 break; ··· 233 229 * add semaphore release after invalidation, 234 230 * write with 0 means semaphore release 235 231 */ 236 - WREG32_NO_KIQ(hub->vm_inv_eng0_sem + 237 - hub->eng_distance * eng, 0); 232 + WREG32_RLC_NO_KIQ(hub->vm_inv_eng0_sem + 233 + hub->eng_distance * eng, 0, hub_ip); 238 234 239 235 /* Issue additional private vm invalidation to MMHUB */ 240 236 if ((vmhub != AMDGPU_GFXHUB_0) &&
+41 -4
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 1156 1156 return 0; 1157 1157 } 1158 1158 1159 + static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev) 1160 + { 1161 + uint32_t data; 1162 + int i; 1163 + 1164 + mutex_lock(&adev->srbm_mutex); 1165 + soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0); 1166 + 1167 + /* disable the queue if it's active */ 1168 + if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) { 1169 + WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1); 1170 + for (i = 0; i < adev->usec_timeout; i++) { 1171 + if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1)) 1172 + break; 1173 + udelay(1); 1174 + } 1175 + } 1176 + data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL); 1177 + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL, 1178 + DOORBELL_EN, 0); 1179 + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL, 1180 + DOORBELL_HIT, 1); 1181 + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data); 1182 + 1183 + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0); 1184 + 1185 + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0); 1186 + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0); 1187 + WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0); 1188 + 1189 + soc21_grbm_select(adev, 0, 0, 0, 0); 1190 + mutex_unlock(&adev->srbm_mutex); 1191 + 1192 + adev->mes.ring.sched.ready = false; 1193 + } 1194 + 1159 1195 static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring) 1160 1196 { 1161 1197 uint32_t tmp; ··· 1243 1207 1244 1208 static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev) 1245 1209 { 1210 + if (adev->mes.ring.sched.ready) 1211 + mes_v11_0_kiq_dequeue_sched(adev); 1212 + 1246 1213 mes_v11_0_enable(adev, false); 1247 1214 return 0; 1248 1215 } ··· 1301 1262 1302 1263 static int mes_v11_0_hw_fini(void *handle) 1303 1264 { 1304 - struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1305 - 1306 - adev->mes.ring.sched.ready = false; 1307 1265 return 0; 1308 1266 } ··· 1332 1296 { 1333 1297 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 1334 1298 1335 - if (!amdgpu_in_reset(adev)) 1299 + if (!amdgpu_in_reset(adev) && 1300 + (adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3))) 1336 1301 amdgpu_mes_self_test(adev); 1337 1302 1338 1303 return 0;
-1
drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
··· 290 290 reset_context.method = AMD_RESET_METHOD_NONE; 291 291 reset_context.reset_req_dev = adev; 292 292 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 293 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 294 293 295 294 amdgpu_device_gpu_recover(adev, NULL, &reset_context); 296 295 }
-1
drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c
··· 317 317 reset_context.method = AMD_RESET_METHOD_NONE; 318 318 reset_context.reset_req_dev = adev; 319 319 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 320 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 321 320 322 321 amdgpu_device_gpu_recover(adev, NULL, &reset_context); 323 322 }
-1
drivers/gpu/drm/amd/amdgpu/mxgpu_vi.c
··· 529 529 reset_context.method = AMD_RESET_METHOD_NONE; 530 530 reset_context.reset_req_dev = adev; 531 531 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags); 532 - clear_bit(AMDGPU_SKIP_MODE2_RESET, &reset_context.flags); 533 532 534 533 amdgpu_device_gpu_recover(adev, NULL, &reset_context); 535 534 }
-5
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
··· 1417 1417 WREG32_SDMA(i, mmSDMA0_CNTL, temp); 1418 1418 1419 1419 if (!amdgpu_sriov_vf(adev)) { 1420 - ring = &adev->sdma.instance[i].ring; 1421 - adev->nbio.funcs->sdma_doorbell_range(adev, i, 1422 - ring->use_doorbell, ring->doorbell_index, 1423 - adev->doorbell_index.sdma_doorbell_range); 1424 - 1425 1420 /* unhalt engine */ 1426 1421 temp = RREG32_SDMA(i, mmSDMA0_F32_CNTL); 1427 1422 temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+17 -8
drivers/gpu/drm/amd/amdgpu/sienna_cichlid.c
··· 31 31 #include "amdgpu_psp.h" 32 32 #include "amdgpu_xgmi.h" 33 33 34 + static bool sienna_cichlid_is_mode2_default(struct amdgpu_reset_control *reset_ctl) 35 + { 36 + #if 0 37 + struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle; 38 + 39 + if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(11, 0, 7) && 40 + adev->pm.fw_version >= 0x3a5500 && !amdgpu_sriov_vf(adev)) 41 + return true; 42 + #endif 43 + return false; 44 + } 45 + 34 46 static struct amdgpu_reset_handler * 35 47 sienna_cichlid_get_reset_handler(struct amdgpu_reset_control *reset_ctl, 36 48 struct amdgpu_reset_context *reset_context) 37 49 { 38 50 struct amdgpu_reset_handler *handler; 39 - struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle; 40 51 41 52 if (reset_context->method != AMD_RESET_METHOD_NONE) { 42 53 list_for_each_entry(handler, &reset_ctl->reset_handlers, ··· 55 44 if (handler->reset_method == reset_context->method) 56 45 return handler; 57 46 } 58 - } else { 59 - list_for_each_entry(handler, &reset_ctl->reset_handlers, 47 + } 48 + 49 + if (sienna_cichlid_is_mode2_default(reset_ctl)) { 50 + list_for_each_entry (handler, &reset_ctl->reset_handlers, 60 51 handler_list) { 61 - if (handler->reset_method == AMD_RESET_METHOD_MODE2 && 62 - adev->pm.fw_version >= 0x3a5500 && 63 - !amdgpu_sriov_vf(adev)) { 64 - reset_context->method = AMD_RESET_METHOD_MODE2; 52 + if (handler->reset_method == AMD_RESET_METHOD_MODE2) 65 53 return handler; 66 - } 67 54 } 68 55 } 69 56
+21
drivers/gpu/drm/amd/amdgpu/soc15.c
··· 1211 1211 return 0; 1212 1212 } 1213 1213 1214 + static void soc15_sdma_doorbell_range_init(struct amdgpu_device *adev) 1215 + { 1216 + int i; 1217 + 1218 + /* sdma doorbell range is programed by hypervisor */ 1219 + if (!amdgpu_sriov_vf(adev)) { 1220 + for (i = 0; i < adev->sdma.num_instances; i++) { 1221 + adev->nbio.funcs->sdma_doorbell_range(adev, i, 1222 + true, adev->doorbell_index.sdma_engine[i] << 1, 1223 + adev->doorbell_index.sdma_doorbell_range); 1224 + } 1225 + } 1226 + } 1227 + 1214 1228 static int soc15_common_hw_init(void *handle) 1215 1229 { 1216 1230 struct amdgpu_device *adev = (struct amdgpu_device *)handle; ··· 1244 1230 1245 1231 /* enable the doorbell aperture */ 1246 1232 soc15_enable_doorbell_aperture(adev, true); 1233 + /* HW doorbell routing policy: doorbell writing not 1234 + * in SDMA/IH/MM/ACV range will be routed to CP. So 1235 + * we need to init SDMA doorbell range prior 1236 + * to CP ip block init and ring test. IH already 1237 + * happens before CP. 1238 + */ 1239 + soc15_sdma_doorbell_range_init(adev); 1247 1240 1248 1241 return 0; 1249 1242 }
+6 -1
drivers/gpu/drm/amd/amdgpu/soc21.c
··· 423 423 case IP_VERSION(11, 0, 0): 424 424 return amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__UMC); 425 425 case IP_VERSION(11, 0, 2): 426 + case IP_VERSION(11, 0, 3): 426 427 return false; 427 428 default: 428 429 return true; ··· 637 636 break; 638 637 case IP_VERSION(11, 0, 3): 639 638 adev->cg_flags = AMD_CG_SUPPORT_VCN_MGCG | 640 - AMD_CG_SUPPORT_JPEG_MGCG; 639 + AMD_CG_SUPPORT_JPEG_MGCG | 640 + AMD_CG_SUPPORT_GFX_CGCG | 641 + AMD_CG_SUPPORT_GFX_CGLS | 642 + AMD_CG_SUPPORT_REPEATER_FGCG | 643 + AMD_CG_SUPPORT_GFX_MGCG; 641 644 adev->pg_flags = AMD_PG_SUPPORT_VCN | 642 645 AMD_PG_SUPPORT_VCN_DPG | 643 646 AMD_PG_SUPPORT_JPEG;
+1 -1
drivers/gpu/drm/amd/display/dc/dml/Makefile
··· 77 77 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/dcn32_fpu.o := $(dml_ccflags) 78 78 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_32.o := $(dml_ccflags) $(frame_warn_flag) 79 79 CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_rq_dlg_calc_32.o := $(dml_ccflags) 80 - CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags) 80 + CFLAGS_$(AMDDALPATH)/dc/dml/dcn32/display_mode_vba_util_32.o := $(dml_ccflags) $(frame_warn_flag) 81 81 CFLAGS_$(AMDDALPATH)/dc/dml/dcn321/dcn321_fpu.o := $(dml_ccflags) 82 82 CFLAGS_$(AMDDALPATH)/dc/dml/dcn31/dcn31_fpu.o := $(dml_ccflags) 83 83 CFLAGS_$(AMDDALPATH)/dc/dml/dcn301/dcn301_fpu.o := $(dml_ccflags)
+3 -2
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
··· 262 262 uint32_t queue_id); 263 263 264 264 int (*hqd_destroy)(struct amdgpu_device *adev, void *mqd, 265 - uint32_t reset_type, unsigned int timeout, 266 - uint32_t pipe_id, uint32_t queue_id); 265 + enum kfd_preempt_type reset_type, 266 + unsigned int timeout, uint32_t pipe_id, 267 + uint32_t queue_id); 267 268 268 269 bool (*hqd_sdma_is_occupied)(struct amdgpu_device *adev, void *mqd); 269 270
+2 -2
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 3362 3362 if (adev->pm.sysfs_initialized) 3363 3363 return 0; 3364 3364 3365 + INIT_LIST_HEAD(&adev->pm.pm_attr_list); 3366 + 3365 3367 if (adev->pm.dpm_enabled == 0) 3366 3368 return 0; 3367 - 3368 - INIT_LIST_HEAD(&adev->pm.pm_attr_list); 3369 3369 3370 3370 adev->pm.int_hwmon_dev = hwmon_device_register_with_groups(adev->dev, 3371 3371 DRIVER_NAME, adev,
+13 -12
drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
··· 67 67 int vega10_fan_ctrl_get_fan_speed_pwm(struct pp_hwmgr *hwmgr, 68 68 uint32_t *speed) 69 69 { 70 - struct amdgpu_device *adev = hwmgr->adev; 71 - uint32_t duty100, duty; 72 - uint64_t tmp64; 70 + uint32_t current_rpm; 71 + uint32_t percent = 0; 73 72 74 - duty100 = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_FDO_CTRL1), 75 - CG_FDO_CTRL1, FMAX_DUTY100); 76 - duty = REG_GET_FIELD(RREG32_SOC15(THM, 0, mmCG_THERMAL_STATUS), 77 - CG_THERMAL_STATUS, FDO_PWM_DUTY); 73 + if (hwmgr->thermal_controller.fanInfo.bNoFan) 74 + return 0; 78 75 79 - if (!duty100) 80 - return -EINVAL; 76 + if (vega10_get_current_rpm(hwmgr, &current_rpm)) 77 + return -1; 81 78 82 - tmp64 = (uint64_t)duty * 255; 83 - do_div(tmp64, duty100); 84 - *speed = MIN((uint32_t)tmp64, 255); 79 + if (hwmgr->thermal_controller. 80 + advanceFanControlParameters.usMaxFanRPM != 0) 81 + percent = current_rpm * 255 / 82 + hwmgr->thermal_controller. 83 + advanceFanControlParameters.usMaxFanRPM; 84 + 85 + *speed = MIN(percent, 255); 85 86 86 87 return 0; 87 88 }
+2 -2
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 1314 1314 1315 1315 ret = smu_enable_thermal_alert(smu); 1316 1316 if (ret) { 1317 - dev_err(adev->dev, "Failed to enable thermal alert!\n"); 1318 - return ret; 1317 + dev_err(adev->dev, "Failed to enable thermal alert!\n"); 1318 + return ret; 1319 1319 } 1320 1320 1321 1321 ret = smu_notify_display_change(smu);
+15 -2
drivers/gpu/drm/amd/pm/swsmu/inc/pmfw_if/smu13_driver_if_v13_0_4.h
··· 27 27 // *** IMPORTANT *** 28 28 // SMU TEAM: Always increment the interface version if 29 29 // any structure is changed in this file 30 - #define PMFW_DRIVER_IF_VERSION 5 30 + #define PMFW_DRIVER_IF_VERSION 7 31 31 32 32 typedef struct { 33 33 int32_t value; ··· 163 163 uint16_t DclkFrequency; //[MHz] 164 164 uint16_t MemclkFrequency; //[MHz] 165 165 uint16_t spare; //[centi] 166 - uint16_t UvdActivity; //[centi] 167 166 uint16_t GfxActivity; //[centi] 167 + uint16_t UvdActivity; //[centi] 168 168 169 169 uint16_t Voltage[2]; //[mV] indices: VDDCR_VDD, VDDCR_SOC 170 170 uint16_t Current[2]; //[mA] indices: VDDCR_VDD, VDDCR_SOC ··· 199 199 uint16_t DeviceState; 200 200 uint16_t CurTemp; //[centi-Celsius] 201 201 uint16_t spare2; 202 + 203 + uint16_t AverageGfxclkFrequency; 204 + uint16_t AverageFclkFrequency; 205 + uint16_t AverageGfxActivity; 206 + uint16_t AverageSocclkFrequency; 207 + uint16_t AverageVclkFrequency; 208 + uint16_t AverageVcnActivity; 209 + uint16_t AverageDRAMReads; //Filtered DF Bandwidth::DRAM Reads 210 + uint16_t AverageDRAMWrites; //Filtered DF Bandwidth::DRAM Writes 211 + uint16_t AverageSocketPower; //Filtered value of CurrentSocketPower 212 + uint16_t AverageCorePower; //Filtered of [sum of CorePower[8]]) 213 + uint16_t AverageCoreC0Residency[8]; //Filtered of [average C0 residency % per core] 214 + uint32_t MetricsCounter; //Counts the # of metrics table parameter reads per update to the metrics table, i.e. if the metrics table update happens every 1 second, this value could be up to 1000 if the smu collected metrics data every cycle, or as low as 0 if the smu was asleep the whole time. Reset to 0 after writing. 202 215 } SmuMetrics_t; 203 216 204 217 typedef struct {
+1 -1
drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
··· 28 28 #define SMU13_DRIVER_IF_VERSION_INV 0xFFFFFFFF 29 29 #define SMU13_DRIVER_IF_VERSION_YELLOW_CARP 0x04 30 30 #define SMU13_DRIVER_IF_VERSION_ALDE 0x08 31 - #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x05 31 + #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_4 0x07 32 32 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_5 0x04 33 33 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_0 0x30 34 34 #define SMU13_DRIVER_IF_VERSION_SMU_V13_0_7 0x2C
+8
drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
··· 2242 2242 static int arcturus_set_df_cstate(struct smu_context *smu, 2243 2243 enum pp_df_cstate state) 2244 2244 { 2245 + struct amdgpu_device *adev = smu->adev; 2245 2246 uint32_t smu_version; 2246 2247 int ret; 2248 + 2249 + /* 2250 + * Arcturus does not need the cstate disablement 2251 + * prerequisite for gpu reset. 2252 + */ 2253 + if (amdgpu_in_reset(adev) || adev->in_suspend) 2254 + return 0; 2247 2255 2248 2256 ret = smu_cmn_get_smc_version(smu, NULL, &smu_version); 2249 2257 if (ret) {
+9
drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
··· 1640 1640 static int aldebaran_set_df_cstate(struct smu_context *smu, 1641 1641 enum pp_df_cstate state) 1642 1642 { 1643 + struct amdgpu_device *adev = smu->adev; 1644 + 1645 + /* 1646 + * Aldebaran does not need the cstate disablement 1647 + * prerequisite for gpu reset. 1648 + */ 1649 + if (amdgpu_in_reset(adev) || adev->in_suspend) 1650 + return 0; 1651 + 1643 1652 return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_DFCstateControl, state, NULL); 1644 1653 } 1645 1654
+2 -4
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 211 211 return 0; 212 212 213 213 if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 7)) || 214 - (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0))) 214 + (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 0)) || 215 + (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10))) 215 216 return 0; 216 217 217 218 /* override pptable_id from driver parameter */ ··· 455 454 dev_info(adev->dev, "override pptable id %d\n", pptable_id); 456 455 } else { 457 456 pptable_id = smu->smu_table.boot_values.pp_table_id; 458 - 459 - if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 10)) 460 - pptable_id = 6666; 461 457 } 462 458 463 459 /* force using vbios pptable in sriov mode */
+11
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 119 119 MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0), 120 120 MSG_MAP(Mode1Reset, PPSMC_MSG_Mode1Reset, 0), 121 121 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0), 122 + MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0), 122 123 }; 123 124 124 125 static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = { ··· 1754 1753 return ret; 1755 1754 } 1756 1755 1756 + static int smu_v13_0_0_set_df_cstate(struct smu_context *smu, 1757 + enum pp_df_cstate state) 1758 + { 1759 + return smu_cmn_send_smc_msg_with_param(smu, 1760 + SMU_MSG_DFCstateControl, 1761 + state, 1762 + NULL); 1763 + } 1764 + 1757 1765 static const struct pptable_funcs smu_v13_0_0_ppt_funcs = { 1758 1766 .get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask, 1759 1767 .set_default_dpm_table = smu_v13_0_0_set_default_dpm_table, ··· 1832 1822 .mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported, 1833 1823 .mode1_reset = smu_v13_0_mode1_reset, 1834 1824 .set_mp1_state = smu_v13_0_0_set_mp1_state, 1825 + .set_df_cstate = smu_v13_0_0_set_df_cstate, 1835 1826 }; 1836 1827 1837 1828 void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
+12
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 121 121 MSG_MAP(Mode1Reset, PPSMC_MSG_Mode1Reset, 0), 122 122 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareMp1ForUnload, 0), 123 123 MSG_MAP(SetMGpuFanBoostLimitRpm, PPSMC_MSG_SetMGpuFanBoostLimitRpm, 0), 124 + MSG_MAP(DFCstateControl, PPSMC_MSG_SetExternalClientDfCstateAllow, 0), 124 125 }; 125 126 126 127 static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = { ··· 1588 1587 1589 1588 return true; 1590 1589 } 1590 + 1591 + static int smu_v13_0_7_set_df_cstate(struct smu_context *smu, 1592 + enum pp_df_cstate state) 1593 + { 1594 + return smu_cmn_send_smc_msg_with_param(smu, 1595 + SMU_MSG_DFCstateControl, 1596 + state, 1597 + NULL); 1598 + } 1599 + 1591 1600 static const struct pptable_funcs smu_v13_0_7_ppt_funcs = { 1592 1601 .get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask, 1593 1602 .set_default_dpm_table = smu_v13_0_7_set_default_dpm_table, ··· 1660 1649 .mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported, 1661 1650 .mode1_reset = smu_v13_0_mode1_reset, 1662 1651 .set_mp1_state = smu_v13_0_7_set_mp1_state, 1652 + .set_df_cstate = smu_v13_0_7_set_df_cstate, 1663 1653 }; 1664 1654 1665 1655 void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+1 -1
drivers/gpu/drm/drm_connector.c
··· 435 435 if (drm_WARN_ON(dev, funcs && funcs->destroy)) 436 436 return -EINVAL; 437 437 438 - ret = __drm_connector_init(dev, connector, funcs, connector_type, NULL); 438 + ret = __drm_connector_init(dev, connector, funcs, connector_type, ddc); 439 439 if (ret) 440 440 return ret; 441 441
+1
drivers/gpu/drm/nouveau/nouveau_dmem.c
··· 176 176 .src = &src, 177 177 .dst = &dst, 178 178 .pgmap_owner = drm->dev, 179 + .fault_page = vmf->page, 179 180 .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE, 180 181 }; 181 182
+18 -18
drivers/gpu/drm/panfrost/panfrost_dump.c
··· 63 63 { 64 64 struct panfrost_dump_object_header *hdr = iter->hdr; 65 65 66 - hdr->magic = cpu_to_le32(PANFROSTDUMP_MAGIC); 67 - hdr->type = cpu_to_le32(type); 68 - hdr->file_offset = cpu_to_le32(iter->data - iter->start); 69 - hdr->file_size = cpu_to_le32(data_end - iter->data); 66 + hdr->magic = PANFROSTDUMP_MAGIC; 67 + hdr->type = type; 68 + hdr->file_offset = iter->data - iter->start; 69 + hdr->file_size = data_end - iter->data; 70 70 71 71 iter->hdr++; 72 - iter->data += le32_to_cpu(hdr->file_size); 72 + iter->data += hdr->file_size; 73 73 } 74 74 75 75 static void ··· 93 93 94 94 reg = panfrost_dump_registers[i] + js_as_offset; 95 95 96 - dumpreg->reg = cpu_to_le32(reg); 97 - dumpreg->value = cpu_to_le32(gpu_read(pfdev, reg)); 96 + dumpreg->reg = reg; 97 + dumpreg->value = gpu_read(pfdev, reg); 98 98 } 99 99 100 100 panfrost_core_dump_header(iter, PANFROSTDUMP_BUF_REG, dumpreg); ··· 106 106 struct panfrost_dump_iterator iter; 107 107 struct drm_gem_object *dbo; 108 108 unsigned int n_obj, n_bomap_pages; 109 - __le64 *bomap, *bomap_start; 109 + u64 *bomap, *bomap_start; 110 110 size_t file_size; 111 111 u32 as_nr; 112 112 int slot; ··· 177 177 * For now, we write the job identifier in the register dump header, 178 178 * so that we can decode the entire dump later with pandecode 179 179 */ 180 - iter.hdr->reghdr.jc = cpu_to_le64(job->jc); 181 - iter.hdr->reghdr.major = cpu_to_le32(PANFROSTDUMP_MAJOR); 182 - iter.hdr->reghdr.minor = cpu_to_le32(PANFROSTDUMP_MINOR); 183 - iter.hdr->reghdr.gpu_id = cpu_to_le32(pfdev->features.id); 184 - iter.hdr->reghdr.nbos = cpu_to_le64(job->bo_count); 180 + iter.hdr->reghdr.jc = job->jc; 181 + iter.hdr->reghdr.major = PANFROSTDUMP_MAJOR; 182 + iter.hdr->reghdr.minor = PANFROSTDUMP_MINOR; 183 + iter.hdr->reghdr.gpu_id = pfdev->features.id; 184 + iter.hdr->reghdr.nbos = job->bo_count; 185 185 186 186 panfrost_core_dump_registers(&iter, pfdev, as_nr, slot); 187 187 ··· 218 218 219 219 WARN_ON(!mapping->active); 220 220 221 - iter.hdr->bomap.data[0] = cpu_to_le32((bomap - bomap_start)); 221 + iter.hdr->bomap.data[0] = bomap - bomap_start; 222 222 223 223 for_each_sgtable_page(bo->base.sgt, &page_iter, 0) { 224 224 struct page *page = sg_page_iter_page(&page_iter); 225 225 226 226 if (!IS_ERR(page)) { 227 - *bomap++ = cpu_to_le64(page_to_phys(page)); 227 + *bomap++ = page_to_phys(page); 228 228 } else { 229 229 dev_err(pfdev->dev, "Panfrost Dump: wrong page\n"); 230 - *bomap++ = ~cpu_to_le64(0); 230 + *bomap++ = 0; 231 231 } 232 232 } 233 233 234 - iter.hdr->bomap.iova = cpu_to_le64(mapping->mmnode.start << PAGE_SHIFT); 234 + iter.hdr->bomap.iova = mapping->mmnode.start << PAGE_SHIFT; 235 235 236 236 vaddr = map.vaddr; 237 237 memcpy(iter.data, vaddr, bo->base.base.size); 238 238 239 239 drm_gem_shmem_vunmap(&bo->base, &map); 240 240 241 241 iter.hdr->bomap.valid = 1; 242 242 243 243 dump_header: panfrost_core_dump_header(&iter, PANFROSTDUMP_BUF_BO, iter.data + 244 244 bo->base.base.size);
+2 -1
drivers/gpu/drm/scheduler/sched_entity.c
··· 385 385 } 386 386 387 387 s_fence = to_drm_sched_fence(fence); 388 - if (s_fence && s_fence->sched == sched) { 388 + if (s_fence && s_fence->sched == sched && 389 + !test_bit(DRM_SCHED_FENCE_DONT_PIPELINE, &fence->flags)) { 389 390 390 391 /* 391 392 * Fence is from the same scheduler, only need to wait for
+1 -1
drivers/gpu/drm/tests/drm_format_helper_test.c
··· 438 438 iosys_map_set_vaddr(&src, xrgb8888); 439 439 440 440 drm_fb_xrgb8888_to_xrgb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip); 441 - buf = le32buf_to_cpu(test, buf, TEST_BUF_SIZE); 441 + buf = le32buf_to_cpu(test, buf, dst_size / sizeof(u32)); 442 442 KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0); 443 443 } 444 444
+1
drivers/gpu/drm/vc4/vc4_drv.c
··· 490 490 module_exit(vc4_drm_unregister); 491 491 492 492 MODULE_ALIAS("platform:vc4-drm"); 493 + MODULE_SOFTDEP("pre: snd-soc-hdmi-codec"); 493 494 MODULE_DESCRIPTION("Broadcom VC4 DRM Driver"); 494 495 MODULE_AUTHOR("Eric Anholt <eric@anholt.net>"); 495 496 MODULE_LICENSE("GPL v2");
+29
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 3318 3318 struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev); 3319 3319 unsigned long __maybe_unused flags; 3320 3320 u32 __maybe_unused value; 3321 + unsigned long rate; 3321 3322 int ret; 3323 + 3324 + /* 3325 + * The HSM clock is in the HDMI power domain, so we need to set 3326 + * its frequency while the power domain is active so that it 3327 + * keeps its rate. 3328 + */ 3329 + ret = clk_set_min_rate(vc4_hdmi->hsm_clock, HSM_MIN_CLOCK_FREQ); 3330 + if (ret) 3331 + return ret; 3322 3332 3323 3333 ret = clk_prepare_enable(vc4_hdmi->hsm_clock); 3324 3334 if (ret) 3325 3335 return ret; 3336 + 3337 + /* 3338 + * Whenever the RaspberryPi boots without an HDMI monitor 3339 + * plugged in, the firmware won't have initialized the HSM clock 3340 + * rate and it will be reported as 0. 3341 + * 3342 + * If we try to access a register of the controller in such a 3343 + * case, it will lead to a silent CPU stall. Let's make sure we 3344 + * prevent such a case. 3345 + */ 3346 + rate = clk_get_rate(vc4_hdmi->hsm_clock); 3347 + if (!rate) { 3348 + ret = -EINVAL; 3349 + goto err_disable_clk; 3350 + } 3326 3351 3327 3352 if (vc4_hdmi->variant->reset) 3328 3353 vc4_hdmi->variant->reset(vc4_hdmi); ··· 3370 3345 #endif 3371 3346 3372 3347 return 0; 3348 + 3349 + err_disable_clk: 3350 + clk_disable_unprepare(vc4_hdmi->hsm_clock); 3351 + return ret; 3373 3352 } 3374 3353 3375 3354 static void vc4_hdmi_put_ddc_device(void *ptr)
+2
drivers/hid/hid-ids.h
··· 867 867 #define USB_DEVICE_ID_MADCATZ_BEATPAD 0x4540 868 868 #define USB_DEVICE_ID_MADCATZ_RAT5 0x1705 869 869 #define USB_DEVICE_ID_MADCATZ_RAT9 0x1709 870 + #define USB_DEVICE_ID_MADCATZ_MMO7 0x1713 870 871 871 872 #define USB_VENDOR_ID_MCC 0x09db 872 873 #define USB_DEVICE_ID_MCC_PMD1024LS 0x0076 ··· 1143 1142 #define USB_DEVICE_ID_SONY_PS4_CONTROLLER_2 0x09cc 1144 1143 #define USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE 0x0ba0 1145 1144 #define USB_DEVICE_ID_SONY_PS5_CONTROLLER 0x0ce6 1145 + #define USB_DEVICE_ID_SONY_PS5_CONTROLLER_2 0x0df2 1146 1146 #define USB_DEVICE_ID_SONY_MOTION_CONTROLLER 0x03d5 1147 1147 #define USB_DEVICE_ID_SONY_NAVIGATION_CONTROLLER 0x042f 1148 1148 #define USB_DEVICE_ID_SONY_BUZZ_CONTROLLER 0x0002
+1 -1
drivers/hid/hid-lenovo.c
··· 985 985 struct device *dev = led_cdev->dev->parent; 986 986 struct hid_device *hdev = to_hid_device(dev); 987 987 struct lenovo_drvdata *data_pointer = hid_get_drvdata(hdev); 988 - u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED }; 988 + static const u8 tp10ubkbd_led[] = { TP10UBKBD_MUTE_LED, TP10UBKBD_MICMUTE_LED }; 989 989 int led_nr = 0; 990 990 int ret = 0; 991 991
+1 -1
drivers/hid/hid-magicmouse.c
··· 480 480 magicmouse_raw_event(hdev, report, data + 2, data[1]); 481 481 magicmouse_raw_event(hdev, report, data + 2 + data[1], 482 482 size - 2 - data[1]); 483 - break; 483 + return 0; 484 484 default: 485 485 return 0; 486 486 }
+76 -7
drivers/hid/hid-playstation.c
··· 46 46 uint32_t fw_version; 47 47 48 48 int (*parse_report)(struct ps_device *dev, struct hid_report *report, u8 *data, int size); 49 + void (*remove)(struct ps_device *dev); 49 50 50 51 /* Calibration data for playstation motion sensors. */ ··· 108 107 #define DS_STATUS_CHARGING GENMASK(7, 4) 109 108 #define DS_STATUS_CHARGING_SHIFT 4 110 109 110 + /* Feature version from DualSense Firmware Info report. */ 111 + #define DS_FEATURE_VERSION(major, minor) ((major & 0xff) << 8 | (minor & 0xff)) 112 + 111 113 /* 112 114 * Status of a DualSense touch point contact. 113 115 * Contact IDs, with highest bit set are 'inactive' ··· 129 125 #define DS_OUTPUT_VALID_FLAG1_RELEASE_LEDS BIT(3) 130 126 #define DS_OUTPUT_VALID_FLAG1_PLAYER_INDICATOR_CONTROL_ENABLE BIT(4) 131 127 #define DS_OUTPUT_VALID_FLAG2_LIGHTBAR_SETUP_CONTROL_ENABLE BIT(1) 128 + #define DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2 BIT(2) 132 129 #define DS_OUTPUT_POWER_SAVE_CONTROL_MIC_MUTE BIT(4) 133 130 #define DS_OUTPUT_LIGHTBAR_SETUP_LIGHT_OUT BIT(1) ··· 147 142 struct input_dev *sensors; 148 143 struct input_dev *touchpad; 149 144 145 + /* Update version is used as a feature/capability version. */ 146 + uint16_t update_version; 147 + 150 148 /* Calibration data for accelerometer and gyroscope. */ 151 149 struct ps_calibration_data accel_calib_data[3]; 152 150 struct ps_calibration_data gyro_calib_data[3]; ··· 160 152 uint32_t sensor_timestamp_us; 161 153 162 154 /* Compatible rumble state */ 155 + bool use_vibration_v2; 163 156 bool update_rumble; 164 157 uint8_t motor_left; 165 158 uint8_t motor_right; ··· 183 174 struct led_classdev player_leds[5]; 184 175 185 176 struct work_struct output_worker; 177 + bool output_worker_initialized; 186 178 void *output_report_dmabuf; 187 179 uint8_t output_seq; /* Sequence number for output report. */ 188 180 }; ··· 309 299 {0, 0}, 310 300 }; 311 301 302 + static inline void dualsense_schedule_work(struct dualsense *ds); 312 303 static void dualsense_set_lightbar(struct dualsense *ds, uint8_t red, uint8_t green, uint8_t blue); 313 304 314 305 /* ··· 800 789 return ret; 801 790 } 802 791 792 + 803 793 static int dualsense_get_firmware_info(struct dualsense *ds) 804 794 { 805 795 uint8_t *buf; ··· 819 807 820 808 ds->base.hw_version = get_unaligned_le32(&buf[24]); 821 809 ds->base.fw_version = get_unaligned_le32(&buf[28]); 810 + 811 + /* Update version is some kind of feature version. It is distinct from 812 + * the firmware version as there can be many different variations of a 813 + * controller over time with the same physical shell, but with different 814 + * PCBs and other internal changes. The update version (internal name) is 815 + * used as a means to detect what features are available and change behavior. 816 + * Note: the version is different between DualSense and DualSense Edge. 817 + */ 818 + ds->update_version = get_unaligned_le16(&buf[44]); 822 819 823 820 err_free: 824 821 kfree(buf); ··· 899 878 ds->update_player_leds = true; 900 879 spin_unlock_irqrestore(&ds->base.lock, flags); 901 880 902 - schedule_work(&ds->output_worker); 881 + dualsense_schedule_work(ds); 903 882 904 883 return 0; 905 884 } ··· 943 922 } 944 923 } 945 924 925 + static inline void dualsense_schedule_work(struct dualsense *ds) 926 + { 927 + unsigned long flags; 928 + 929 + spin_lock_irqsave(&ds->base.lock, flags); 930 + if (ds->output_worker_initialized) 931 + schedule_work(&ds->output_worker); 932 + spin_unlock_irqrestore(&ds->base.lock, flags); 933 + } 934 + 946 935 /* 947 936 * Helper function to send DualSense output reports. Applies a CRC at the end of a report 948 937 * for Bluetooth reports. ··· 991 960 if (ds->update_rumble) { 992 961 /* Select classic rumble style haptics and enable it. */ 993 962 common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_HAPTICS_SELECT; 994 - common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION; 963 + if (ds->use_vibration_v2) 964 + common->valid_flag2 |= DS_OUTPUT_VALID_FLAG2_COMPATIBLE_VIBRATION2; 965 + else 966 + common->valid_flag0 |= DS_OUTPUT_VALID_FLAG0_COMPATIBLE_VIBRATION; 995 967 common->motor_left = ds->motor_left; 996 968 common->motor_right = ds->motor_right; 997 969 ds->update_rumble = false; ··· 1116 1082 spin_unlock_irqrestore(&ps_dev->lock, flags); 1117 1083 1118 1084 /* Schedule updating of microphone state at hardware level. */ 1119 - schedule_work(&ds->output_worker); 1085 + dualsense_schedule_work(ds); 1120 1086 } 1121 1087 ds->last_btn_mic_state = btn_mic_state; 1122 1088 ··· 1231 1197 ds->motor_right = effect->u.rumble.weak_magnitude / 256; 1232 1198 spin_unlock_irqrestore(&ds->base.lock, flags); 1233 1199 1234 - schedule_work(&ds->output_worker); 1200 + dualsense_schedule_work(ds); 1235 1201 return 0; 1202 + } 1203 + 1204 + static void dualsense_remove(struct ps_device *ps_dev) 1205 + { 1206 + struct dualsense *ds = container_of(ps_dev, struct dualsense, base); 1207 + unsigned long flags; 1208 + 1209 + spin_lock_irqsave(&ds->base.lock, flags); 1210 + ds->output_worker_initialized = false; 1211 + spin_unlock_irqrestore(&ds->base.lock, flags); 1212 + 1213 + cancel_work_sync(&ds->output_worker); 1236 1214 } 1237 1215 1238 1216 static int dualsense_reset_leds(struct dualsense *ds) ··· 1283 1237 ds->lightbar_blue = blue; 1284 1238 spin_unlock_irqrestore(&ds->base.lock, flags); 1285 1239 1286 - schedule_work(&ds->output_worker); 1240 + dualsense_schedule_work(ds); 1287 1241 } 1288 1242 1289 1243 static void dualsense_set_player_leds(struct dualsense *ds) ··· 1306 1260 1307 1261 ds->update_player_leds = true; 1308 1262 ds->player_leds_state = player_ids[player_id]; 1309 - schedule_work(&ds->output_worker); 1263 + dualsense_schedule_work(ds); 1310 1264 } 1311 1265 1312 1266 static struct ps_device *dualsense_create(struct hid_device *hdev) ··· 1345 1299 ps_dev->battery_capacity = 100; /* initial value until parse_report. */ 1346 1300 ps_dev->battery_status = POWER_SUPPLY_STATUS_UNKNOWN; 1347 1301 ps_dev->parse_report = dualsense_parse_report; 1302 + ps_dev->remove = dualsense_remove; 1348 1303 INIT_WORK(&ds->output_worker, dualsense_output_worker); 1304 + ds->output_worker_initialized = true; 1349 1305 hid_set_drvdata(hdev, ds); 1350 1306 1351 1307 max_output_report_size = sizeof(struct dualsense_output_report_bt); ··· 1366 1318 if (ret) { 1367 1319 hid_err(hdev, "Failed to get firmware info from DualSense\n"); 1368 1320 return ERR_PTR(ret); 1321 + } 1322 + 1323 + /* Original DualSense firmware simulated classic controller rumble through 1324 + * its new haptics hardware. It felt different from classic rumble users 1325 + * were used to. Since then new firmwares were introduced to change behavior 1326 + * and make this new 'v2' behavior default on PlayStation and other platforms. 1327 + * The original DualSense requires a new enough firmware as bundled with PS5 1328 + * software released in 2021. DualSense edge supports it out of the box. 1329 + * Both devices also support the old mode, but it is not really used. 1330 + */ 1331 + if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) { 1332 + /* Feature version 2.21 introduced new vibration method. */ 1333 + ds->use_vibration_v2 = ds->update_version >= DS_FEATURE_VERSION(2, 21); 1334 + } else if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) { 1335 + ds->use_vibration_v2 = true; 1369 1336 } 1370 1337 1371 1338 ret = ps_devices_list_add(ps_dev); ··· 1499 1436 goto err_stop; 1500 1437 } 1501 1438 1502 - if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER) { 1439 + if (hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER || 1440 + hdev->product == USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) { 1503 1441 dev = dualsense_create(hdev); 1504 1442 if (IS_ERR(dev)) { 1505 1443 hid_err(hdev, "Failed to create dualsense.\n"); ··· 1525 1461 ps_devices_list_remove(dev); 1526 1462 ps_device_release_player_id(dev); 1527 1463 1464 + if (dev->remove) 1465 + dev->remove(dev); 1466 + 1528 1467 hid_hw_close(hdev); 1529 1468 hid_hw_stop(hdev); 1530 1469 } ··· 1535 1468 static const struct hid_device_id ps_devices[] = { 1536 1469 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) }, 1537 1470 { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER) }, 1471 + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) }, 1472 + { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS5_CONTROLLER_2) }, 1538 1473 { } 1539 1474 }; 1540 1475 MODULE_DEVICE_TABLE(hid, ps_devices);
+1
drivers/hid/hid-quirks.c
··· 620 620 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7) }, 621 621 { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT5) }, 622 622 { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9) }, 623 + { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7) }, 623 624 #endif 624 625 #if IS_ENABLED(CONFIG_HID_SAMSUNG) 625 626 { HID_USB_DEVICE(USB_VENDOR_ID_SAMSUNG, USB_DEVICE_ID_SAMSUNG_IR_REMOTE) },
+2
drivers/hid/hid-saitek.c
··· 187 187 .driver_data = SAITEK_RELEASE_MODE_RAT7 }, 188 188 { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_MMO7), 189 189 .driver_data = SAITEK_RELEASE_MODE_MMO7 }, 190 + { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_MMO7), 191 + .driver_data = SAITEK_RELEASE_MODE_MMO7 }, 190 192 { } 191 193 }; 192 194
+41 -15
drivers/hwmon/coretemp.c
··· 46 46 #define TOTAL_ATTRS (MAX_CORE_ATTRS + 1) 47 47 #define MAX_CORE_DATA (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO) 48 48 49 - #define TO_CORE_ID(cpu) (cpu_data(cpu).cpu_core_id) 50 - #define TO_ATTR_NO(cpu) (TO_CORE_ID(cpu) + BASE_SYSFS_ATTR_NO) 51 - 52 49 #ifdef CONFIG_SMP 53 50 #define for_each_sibling(i, cpu) \ 54 51 for_each_cpu(i, topology_sibling_cpumask(cpu)) ··· 88 91 struct platform_data { 89 92 struct device *hwmon_dev; 90 93 u16 pkg_id; 94 + u16 cpu_map[NUM_REAL_CORES]; 95 + struct ida ida; 91 96 struct cpumask cpumask; 92 97 struct temp_data *core_data[MAX_CORE_DATA]; 93 98 struct device_attribute name_attr; ··· 440 441 MSR_IA32_THERM_STATUS; 441 442 tdata->is_pkg_data = pkg_flag; 442 443 tdata->cpu = cpu; 443 - tdata->cpu_core_id = TO_CORE_ID(cpu); 444 + tdata->cpu_core_id = topology_core_id(cpu); 444 445 tdata->attr_size = MAX_CORE_ATTRS; 445 446 mutex_init(&tdata->update_lock); 446 447 return tdata; ··· 453 454 struct platform_data *pdata = platform_get_drvdata(pdev); 454 455 struct cpuinfo_x86 *c = &cpu_data(cpu); 455 456 u32 eax, edx; 456 - int err, attr_no; 457 + int err, index, attr_no; 457 458 458 459 /* 459 460 * Find attr number for sysfs: ··· 461 462 * The attr number is always core id + 2 462 463 * The Pkgtemp will always show up as temp1_*, if available 463 464 */ 464 - attr_no = pkg_flag ? 
PKG_SYSFS_ATTR_NO : TO_ATTR_NO(cpu); 465 + if (pkg_flag) { 466 + attr_no = PKG_SYSFS_ATTR_NO; 467 + } else { 468 + index = ida_alloc(&pdata->ida, GFP_KERNEL); 469 + if (index < 0) 470 + return index; 471 + pdata->cpu_map[index] = topology_core_id(cpu); 472 + attr_no = index + BASE_SYSFS_ATTR_NO; 473 + } 465 474 466 - if (attr_no > MAX_CORE_DATA - 1) 467 - return -ERANGE; 475 + if (attr_no > MAX_CORE_DATA - 1) { 476 + err = -ERANGE; 477 + goto ida_free; 478 + } 468 479 469 480 tdata = init_temp_data(cpu, pkg_flag); 470 - if (!tdata) 471 - return -ENOMEM; 481 + if (!tdata) { 482 + err = -ENOMEM; 483 + goto ida_free; 484 + } 472 485 473 486 /* Test if we can access the status register */ 474 487 err = rdmsr_safe_on_cpu(cpu, tdata->status_reg, &eax, &edx); ··· 516 505 exit_free: 517 506 pdata->core_data[attr_no] = NULL; 518 507 kfree(tdata); 508 + ida_free: 509 + if (!pkg_flag) 510 + ida_free(&pdata->ida, index); 519 511 return err; 520 512 } 521 513 ··· 538 524 539 525 kfree(pdata->core_data[indx]); 540 526 pdata->core_data[indx] = NULL; 527 + 528 + if (indx >= BASE_SYSFS_ATTR_NO) 529 + ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO); 541 530 } 542 531 543 532 static int coretemp_probe(struct platform_device *pdev) ··· 554 537 return -ENOMEM; 555 538 556 539 pdata->pkg_id = pdev->id; 540 + ida_init(&pdata->ida); 557 541 platform_set_drvdata(pdev, pdata); 558 542 559 543 pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME, ··· 571 553 if (pdata->core_data[i]) 572 554 coretemp_remove_core(pdata, i); 573 555 556 + ida_destroy(&pdata->ida); 574 557 return 0; 575 558 } 576 559 ··· 666 647 struct platform_device *pdev = coretemp_get_pdev(cpu); 667 648 struct platform_data *pd; 668 649 struct temp_data *tdata; 669 - int indx, target; 650 + int i, indx = -1, target; 670 651 671 652 /* 672 653 * Don't execute this on suspend as the device remove locks ··· 679 660 if (!pdev) 680 661 return 0; 681 662 682 - /* The core id is too big, just return */ 683 - indx 
= TO_ATTR_NO(cpu); 684 - if (indx > MAX_CORE_DATA - 1) 663 + pd = platform_get_drvdata(pdev); 664 + 665 + for (i = 0; i < NUM_REAL_CORES; i++) { 666 + if (pd->cpu_map[i] == topology_core_id(cpu)) { 667 + indx = i + BASE_SYSFS_ATTR_NO; 668 + break; 669 + } 670 + } 671 + 672 + /* Too many cores and this core is not populated, just return */ 673 + if (indx < 0) 685 674 return 0; 686 675 687 - pd = platform_get_drvdata(pdev); 688 676 tdata = pd->core_data[indx]; 689 677 690 678 cpumask_clear_cpu(cpu, &pd->cpumask);
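The coretemp hunk above replaces the fixed `core_id + BASE_SYSFS_ATTR_NO` mapping with an IDA-allocated slot plus a `cpu_map[]` reverse lookup, so large or sparse core IDs no longer overflow `MAX_CORE_DATA`. A minimal userspace sketch of the same allocate-smallest-free-slot pattern (all names here are illustrative stand-ins for `ida_alloc()`/`ida_free()`, not kernel API):

```c
#include <assert.h>
#include <string.h>

#define NUM_SLOTS 128

/* Each online core gets the smallest free index, and cpu_map[]
 * remembers which core id owns the slot so the offline path can
 * find it again by searching, as the patch does. */
struct slot_map {
	unsigned char used[NUM_SLOTS];
	int cpu_map[NUM_SLOTS];		/* slot -> core id */
};

static int slot_alloc(struct slot_map *m, int core_id)
{
	for (int i = 0; i < NUM_SLOTS; i++) {
		if (!m->used[i]) {
			m->used[i] = 1;
			m->cpu_map[i] = core_id;
			return i;
		}
	}
	return -1;	/* ida_alloc() would return -ENOSPC */
}

static int slot_find(const struct slot_map *m, int core_id)
{
	for (int i = 0; i < NUM_SLOTS; i++)
		if (m->used[i] && m->cpu_map[i] == core_id)
			return i;
	return -1;	/* core not populated, caller just returns */
}

static void slot_free(struct slot_map *m, int idx)
{
	m->used[idx] = 0;
}
```

Note how a core id of 40 lands in slot 0 rather than requiring index 40, which is exactly why the `indx > MAX_CORE_DATA - 1` overflow check could be relaxed.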
+2 -1
drivers/hwmon/corsair-psu.c
··· 820 820 { HID_USB_DEVICE(0x1b1c, 0x1c0b) }, /* Corsair RM750i */ 821 821 { HID_USB_DEVICE(0x1b1c, 0x1c0c) }, /* Corsair RM850i */ 822 822 { HID_USB_DEVICE(0x1b1c, 0x1c0d) }, /* Corsair RM1000i */ 823 - { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsaur HX1000i revision 2 */ 823 + { HID_USB_DEVICE(0x1b1c, 0x1c1e) }, /* Corsair HX1000i revision 2 */ 824 + { HID_USB_DEVICE(0x1b1c, 0x1c1f) }, /* Corsair HX1500i */ 824 825 { }, 825 826 }; 826 827 MODULE_DEVICE_TABLE(hid, corsairpsu_idtable);
+4 -1
drivers/hwmon/pwm-fan.c
··· 257 257 258 258 if (val == 0) { 259 259 /* Disable pwm-fan unconditionally */ 260 - ret = __set_pwm(ctx, 0); 260 + if (ctx->enabled) 261 + ret = __set_pwm(ctx, 0); 262 + else 263 + ret = pwm_fan_switch_power(ctx, false); 261 264 if (ret) 262 265 ctx->enable_mode = old_val; 263 266 pwm_fan_update_state(ctx, 0);
+1
drivers/i2c/busses/Kconfig
··· 764 764 config I2C_MLXBF 765 765 tristate "Mellanox BlueField I2C controller" 766 766 depends on MELLANOX_PLATFORM && ARM64 767 + depends on ACPI 767 768 select I2C_SLAVE 768 769 help 769 770 Enabling this option will add I2C SMBus support for Mellanox BlueField
-9
drivers/i2c/busses/i2c-mlxbf.c
··· 2247 2247 .max_write_len = MLXBF_I2C_MASTER_DATA_W_LENGTH, 2248 2248 }; 2249 2249 2250 - #ifdef CONFIG_ACPI 2251 2250 static const struct acpi_device_id mlxbf_i2c_acpi_ids[] = { 2252 2251 { "MLNXBF03", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_1] }, 2253 2252 { "MLNXBF23", (kernel_ulong_t)&mlxbf_i2c_chip[MLXBF_I2C_CHIP_TYPE_2] }, ··· 2281 2282 2282 2283 return 0; 2283 2284 } 2284 - #else 2285 - static int mlxbf_i2c_acpi_probe(struct device *dev, struct mlxbf_i2c_priv *priv) 2286 - { 2287 - return -ENOENT; 2288 - } 2289 - #endif /* CONFIG_ACPI */ 2290 2285 2291 2286 static int mlxbf_i2c_probe(struct platform_device *pdev) 2292 2287 { ··· 2483 2490 .remove = mlxbf_i2c_remove, 2484 2491 .driver = { 2485 2492 .name = "i2c-mlxbf", 2486 - #ifdef CONFIG_ACPI 2487 2493 .acpi_match_table = ACPI_PTR(mlxbf_i2c_acpi_ids), 2488 - #endif /* CONFIG_ACPI */ 2489 2494 }, 2490 2495 }; 2491 2496
+1 -1
drivers/i2c/busses/i2c-mlxcpld.c
··· 40 40 #define MLXCPLD_LPCI2C_STATUS_REG 0x9 41 41 #define MLXCPLD_LPCI2C_DATA_REG 0xa 42 42 43 - /* LPC I2C masks and parametres */ 43 + /* LPC I2C masks and parameters */ 44 44 #define MLXCPLD_LPCI2C_RST_SEL_MASK 0x1 45 45 #define MLXCPLD_LPCI2C_TRANS_END 0x1 46 46 #define MLXCPLD_LPCI2C_STATUS_NACK 0x10
+8 -5
drivers/i2c/busses/i2c-qcom-cci.c
··· 639 639 if (ret < 0) 640 640 goto error; 641 641 642 + pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); 643 + pm_runtime_use_autosuspend(dev); 644 + pm_runtime_set_active(dev); 645 + pm_runtime_enable(dev); 646 + 642 647 for (i = 0; i < cci->data->num_masters; i++) { 643 648 if (!cci->master[i].cci) 644 649 continue; ··· 655 650 } 656 651 } 657 652 658 - pm_runtime_set_autosuspend_delay(dev, MSEC_PER_SEC); 659 - pm_runtime_use_autosuspend(dev); 660 - pm_runtime_set_active(dev); 661 - pm_runtime_enable(dev); 662 - 663 653 return 0; 664 654 665 655 error_i2c: 656 + pm_runtime_disable(dev); 657 + pm_runtime_dont_use_autosuspend(dev); 658 + 666 659 for (--i ; i >= 0; i--) { 667 660 if (cci->master[i].cci) { 668 661 i2c_del_adapter(&cci->master[i].adap);
+1 -1
drivers/i2c/busses/i2c-sis630.c
··· 97 97 module_param(force, bool, 0); 98 98 MODULE_PARM_DESC(force, "Forcibly enable the SIS630. DANGEROUS!"); 99 99 100 - /* SMBus base adress */ 100 + /* SMBus base address */ 101 101 static unsigned short smbus_base; 102 102 103 103 /* supported chips */
+1
drivers/i2c/busses/i2c-xiic.c
··· 920 920 921 921 module_platform_driver(xiic_i2c_driver); 922 922 923 + MODULE_ALIAS("platform:" DRIVER_NAME); 923 924 MODULE_AUTHOR("info@mocean-labs.com"); 924 925 MODULE_DESCRIPTION("Xilinx I2C bus driver"); 925 926 MODULE_LICENSE("GPL v2");
+4 -3
drivers/iommu/amd/iommu.c
··· 2330 2330 type = IOMMU_RESV_RESERVED; 2331 2331 2332 2332 region = iommu_alloc_resv_region(entry->address_start, 2333 - length, prot, type); 2333 + length, prot, type, 2334 + GFP_KERNEL); 2334 2335 if (!region) { 2335 2336 dev_err(dev, "Out of memory allocating dm-regions\n"); 2336 2337 return; ··· 2341 2340 2342 2341 region = iommu_alloc_resv_region(MSI_RANGE_START, 2343 2342 MSI_RANGE_END - MSI_RANGE_START + 1, 2344 - 0, IOMMU_RESV_MSI); 2343 + 0, IOMMU_RESV_MSI, GFP_KERNEL); 2345 2344 if (!region) 2346 2345 return; 2347 2346 list_add_tail(&region->list, head); 2348 2347 2349 2348 region = iommu_alloc_resv_region(HT_RANGE_START, 2350 2349 HT_RANGE_END - HT_RANGE_START + 1, 2351 - 0, IOMMU_RESV_RESERVED); 2350 + 0, IOMMU_RESV_RESERVED, GFP_KERNEL); 2352 2351 if (!region) 2353 2352 return; 2354 2353 list_add_tail(&region->list, head);
+1 -1
drivers/iommu/apple-dart.c
··· 758 758 759 759 region = iommu_alloc_resv_region(DOORBELL_ADDR, 760 760 PAGE_SIZE, prot, 761 - IOMMU_RESV_MSI); 761 + IOMMU_RESV_MSI, GFP_KERNEL); 762 762 if (!region) 763 763 return; 764 764
+1 -1
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
··· 2757 2757 int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; 2758 2758 2759 2759 region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, 2760 - prot, IOMMU_RESV_SW_MSI); 2760 + prot, IOMMU_RESV_SW_MSI, GFP_KERNEL); 2761 2761 if (!region) 2762 2762 return; 2763 2763
+1 -1
drivers/iommu/arm/arm-smmu/arm-smmu.c
··· 1534 1534 int prot = IOMMU_WRITE | IOMMU_NOEXEC | IOMMU_MMIO; 1535 1535 1536 1536 region = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, 1537 - prot, IOMMU_RESV_SW_MSI); 1537 + prot, IOMMU_RESV_SW_MSI, GFP_KERNEL); 1538 1538 if (!region) 1539 1539 return; 1540 1540
+12 -5
drivers/iommu/intel/iommu.c
··· 2410 2410 2411 2411 if (md_domain_init(si_domain, DEFAULT_DOMAIN_ADDRESS_WIDTH)) { 2412 2412 domain_exit(si_domain); 2413 + si_domain = NULL; 2413 2414 return -EFAULT; 2414 2415 } 2415 2416 ··· 3052 3051 for_each_active_iommu(iommu, drhd) { 3053 3052 disable_dmar_iommu(iommu); 3054 3053 free_dmar_iommu(iommu); 3054 + } 3055 + if (si_domain) { 3056 + domain_exit(si_domain); 3057 + si_domain = NULL; 3055 3058 } 3056 3059 3057 3060 return ret; ··· 4539 4534 struct device *i_dev; 4540 4535 int i; 4541 4536 4542 - down_read(&dmar_global_lock); 4537 + rcu_read_lock(); 4543 4538 for_each_rmrr_units(rmrr) { 4544 4539 for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt, 4545 4540 i, i_dev) { ··· 4557 4552 IOMMU_RESV_DIRECT_RELAXABLE : IOMMU_RESV_DIRECT; 4558 4553 4559 4554 resv = iommu_alloc_resv_region(rmrr->base_address, 4560 - length, prot, type); 4555 + length, prot, type, 4556 + GFP_ATOMIC); 4561 4557 if (!resv) 4562 4558 break; 4563 4559 4564 4560 list_add_tail(&resv->list, head); 4565 4561 } 4566 4562 } 4567 - up_read(&dmar_global_lock); 4563 + rcu_read_unlock(); 4568 4564 4569 4565 #ifdef CONFIG_INTEL_IOMMU_FLOPPY_WA 4570 4566 if (dev_is_pci(device)) { ··· 4573 4567 4574 4568 if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) { 4575 4569 reg = iommu_alloc_resv_region(0, 1UL << 24, prot, 4576 - IOMMU_RESV_DIRECT_RELAXABLE); 4570 + IOMMU_RESV_DIRECT_RELAXABLE, 4571 + GFP_KERNEL); 4577 4572 if (reg) 4578 4573 list_add_tail(&reg->list, head); 4579 4574 } ··· 4583 4576 4584 4577 reg = iommu_alloc_resv_region(IOAPIC_RANGE_START, 4585 4578 IOAPIC_RANGE_END - IOAPIC_RANGE_START + 1, 4586 - 0, IOMMU_RESV_MSI); 4579 + 0, IOMMU_RESV_MSI, GFP_KERNEL); 4587 4580 if (!reg) 4588 4581 return; 4589 4582 list_add_tail(&reg->list, head);
+4 -3
drivers/iommu/iommu.c
··· 504 504 LIST_HEAD(stack); 505 505 506 506 nr = iommu_alloc_resv_region(new->start, new->length, 507 - new->prot, new->type); 507 + new->prot, new->type, GFP_KERNEL); 508 508 if (!nr) 509 509 return -ENOMEM; 510 510 ··· 2579 2579 2580 2580 struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start, 2581 2581 size_t length, int prot, 2582 - enum iommu_resv_type type) 2582 + enum iommu_resv_type type, 2583 + gfp_t gfp) 2583 2584 { 2584 2585 struct iommu_resv_region *region; 2585 2586 2586 - region = kzalloc(sizeof(*region), GFP_KERNEL); 2587 + region = kzalloc(sizeof(*region), gfp); 2587 2588 if (!region) 2588 2589 return NULL; 2589 2590
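The iommu.c hunk above is the core of this series: `iommu_alloc_resv_region()` stops hardcoding `GFP_KERNEL` and instead forwards a caller-supplied `gfp_t`, which lets the Intel driver pass `GFP_ATOMIC` now that it walks RMRRs under `rcu_read_lock()` instead of the sleeping `dmar_global_lock`. A userspace sketch of the signature change (flags modeled as plain ints; `calloc` stands in for `kzalloc`):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for gfp_t contexts; the real flags live in
 * <linux/gfp.h>. */
#define FAKE_GFP_KERNEL 0
#define FAKE_GFP_ATOMIC 1

struct resv_region {
	unsigned long start;
	size_t length;
	int prot;
	int type;
};

/* The helper now takes the caller's allocation context instead of
 * assuming a sleepable one: kzalloc(sizeof(*region), gfp) upstream. */
static struct resv_region *alloc_resv_region(unsigned long start,
					     size_t length, int prot,
					     int type, int gfp)
{
	struct resv_region *r = calloc(1, sizeof(*r));

	(void)gfp;	/* userspace stand-in: calloc has no context flag */
	if (!r)
		return NULL;
	r->start = start;
	r->length = length;
	r->prot = prot;
	r->type = type;
	return r;
}
```

Every other driver hunk in this group (amd, apple-dart, smmu, mtk, virtio) is the mechanical fallout of threading the new parameter through with `GFP_KERNEL`.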
+2 -1
drivers/iommu/mtk_iommu.c
··· 917 917 continue; 918 918 919 919 region = iommu_alloc_resv_region(resv->iova_base, resv->size, 920 - prot, IOMMU_RESV_RESERVED); 920 + prot, IOMMU_RESV_RESERVED, 921 + GFP_KERNEL); 921 922 if (!region) 922 923 return; 923 924
+6 -3
drivers/iommu/virtio-iommu.c
··· 490 490 fallthrough; 491 491 case VIRTIO_IOMMU_RESV_MEM_T_RESERVED: 492 492 region = iommu_alloc_resv_region(start, size, 0, 493 - IOMMU_RESV_RESERVED); 493 + IOMMU_RESV_RESERVED, 494 + GFP_KERNEL); 494 495 break; 495 496 case VIRTIO_IOMMU_RESV_MEM_T_MSI: 496 497 region = iommu_alloc_resv_region(start, size, prot, 497 - IOMMU_RESV_MSI); 498 + IOMMU_RESV_MSI, 499 + GFP_KERNEL); 498 500 break; 499 501 } 500 502 if (!region) ··· 911 909 */ 912 910 if (!msi) { 913 911 msi = iommu_alloc_resv_region(MSI_IOVA_BASE, MSI_IOVA_LENGTH, 914 - prot, IOMMU_RESV_SW_MSI); 912 + prot, IOMMU_RESV_SW_MSI, 913 + GFP_KERNEL); 915 914 if (!msi) 916 915 return; 917 916
+1 -1
drivers/media/Kconfig
··· 24 24 25 25 config MEDIA_SUPPORT_FILTER 26 26 bool "Filter media drivers" 27 - default y if !EMBEDDED && !EXPERT 27 + default y if !EXPERT 28 28 help 29 29 Configuring the media subsystem can be complex, as there are 30 30 hundreds of drivers and other config options.
+1
drivers/media/cec/core/cec-adap.c
··· 1027 1027 [CEC_MSG_REPORT_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED, 1028 1028 [CEC_MSG_REQUEST_SHORT_AUDIO_DESCRIPTOR] = 2 | DIRECTED, 1029 1029 [CEC_MSG_SET_SYSTEM_AUDIO_MODE] = 3 | BOTH, 1030 + [CEC_MSG_SET_AUDIO_VOLUME_LEVEL] = 3 | DIRECTED, 1030 1031 [CEC_MSG_SYSTEM_AUDIO_MODE_REQUEST] = 2 | DIRECTED, 1031 1032 [CEC_MSG_SYSTEM_AUDIO_MODE_STATUS] = 3 | DIRECTED, 1032 1033 [CEC_MSG_SET_AUDIO_RATE] = 3 | DIRECTED,
+4
drivers/media/cec/platform/cros-ec/cros-ec-cec.c
··· 44 44 uint8_t *cec_message = cros_ec->event_data.data.cec_message; 45 45 unsigned int len = cros_ec->event_size; 46 46 47 + if (len > CEC_MAX_MSG_SIZE) 48 + len = CEC_MAX_MSG_SIZE; 47 49 cros_ec_cec->rx_msg.len = len; 48 50 memcpy(cros_ec_cec->rx_msg.msg, cec_message, len); 49 51 ··· 223 221 { "Google", "Moli", "0000:00:02.0", "Port B" }, 224 222 /* Google Kinox */ 225 223 { "Google", "Kinox", "0000:00:02.0", "Port B" }, 224 + /* Google Kuldax */ 225 + { "Google", "Kuldax", "0000:00:02.0", "Port B" }, 226 226 }; 227 227 228 228 static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
+2
drivers/media/cec/platform/s5p/s5p_cec.c
··· 115 115 dev_dbg(cec->dev, "Buffer overrun (worker did not process previous message)\n"); 116 116 cec->rx = STATE_BUSY; 117 117 cec->msg.len = status >> 24; 118 + if (cec->msg.len > CEC_MAX_MSG_SIZE) 119 + cec->msg.len = CEC_MAX_MSG_SIZE; 118 120 cec->msg.rx_status = CEC_RX_STATUS_OK; 119 121 s5p_cec_get_rx_buf(cec, cec->msg.len, 120 122 cec->msg.msg);
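Both CEC hunks above (cros-ec and s5p) apply the same hardening: the receive length is reported by hardware or firmware and therefore untrusted, so it must be clamped to the fixed 16-byte CEC buffer before the `memcpy`. A small sketch of the pattern, using the spec's real `CEC_MAX_MSG_SIZE` value but otherwise illustrative names:

```c
#include <assert.h>
#include <string.h>

#define CEC_MAX_MSG_SIZE 16	/* CEC frames are at most 16 bytes */

struct cec_msg {
	unsigned char len;
	unsigned char msg[CEC_MAX_MSG_SIZE];
};

/* Clamp the device-reported length to the destination buffer size
 * before copying, so a buggy or malicious report cannot overflow
 * msg[]. Returns the number of bytes actually stored. */
static unsigned int cec_store_rx(struct cec_msg *dst,
				 const unsigned char *src,
				 unsigned int reported_len)
{
	if (reported_len > CEC_MAX_MSG_SIZE)
		reported_len = CEC_MAX_MSG_SIZE;
	dst->len = reported_len;
	memcpy(dst->msg, src, reported_len);
	return reported_len;
}
```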
+1 -1
drivers/media/dvb-frontends/drxk_hard.c
··· 6660 6660 static int drxk_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks) 6661 6661 { 6662 6662 struct drxk_state *state = fe->demodulator_priv; 6663 - u16 err; 6663 + u16 err = 0; 6664 6664 6665 6665 dprintk(1, "\n"); 6666 6666
+6 -5
drivers/media/i2c/ar0521.c
··· 406 406 struct v4l2_subdev_format *format) 407 407 { 408 408 struct ar0521_dev *sensor = to_ar0521_dev(sd); 409 - int ret = 0; 410 409 411 410 ar0521_adj_fmt(&format->format); 412 411 ··· 422 423 } 423 424 424 425 mutex_unlock(&sensor->lock); 425 - return ret; 426 + return 0; 426 427 } 427 428 428 429 static int ar0521_s_ctrl(struct v4l2_ctrl *ctrl) ··· 755 756 gpiod_set_value(sensor->reset_gpio, 0); 756 757 usleep_range(4500, 5000); /* min 45000 clocks */ 757 758 758 - for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) 759 - if (ar0521_write_regs(sensor, initial_regs[cnt].data, 760 - initial_regs[cnt].count)) 759 + for (cnt = 0; cnt < ARRAY_SIZE(initial_regs); cnt++) { 760 + ret = ar0521_write_regs(sensor, initial_regs[cnt].data, 761 + initial_regs[cnt].count); 762 + if (ret) 761 763 goto off; 764 + } 762 765 763 766 ret = ar0521_write_reg(sensor, AR0521_REG_SERIAL_FORMAT, 764 767 AR0521_REG_SERIAL_FORMAT_MIPI |
+47
drivers/media/i2c/ir-kbd-i2c.c
··· 238 238 return 1; 239 239 } 240 240 241 + static int get_key_geniatech(struct IR_i2c *ir, enum rc_proto *protocol, 242 + u32 *scancode, u8 *toggle) 243 + { 244 + int i, rc; 245 + unsigned char b; 246 + 247 + /* poll IR chip */ 248 + for (i = 0; i < 4; i++) { 249 + rc = i2c_master_recv(ir->c, &b, 1); 250 + if (rc == 1) 251 + break; 252 + msleep(20); 253 + } 254 + if (rc != 1) { 255 + dev_dbg(&ir->rc->dev, "read error\n"); 256 + if (rc < 0) 257 + return rc; 258 + return -EIO; 259 + } 260 + 261 + /* don't repeat the key */ 262 + if (ir->old == b) 263 + return 0; 264 + ir->old = b; 265 + 266 + /* decode to RC5 */ 267 + b &= 0x7f; 268 + b = (b - 1) / 2; 269 + 270 + dev_dbg(&ir->rc->dev, "key %02x\n", b); 271 + 272 + *protocol = RC_PROTO_RC5; 273 + *scancode = b; 274 + *toggle = ir->old >> 7; 275 + return 1; 276 + } 277 + 241 278 static int get_key_avermedia_cardbus(struct IR_i2c *ir, enum rc_proto *protocol, 242 279 u32 *scancode, u8 *toggle) 243 280 { ··· 803 766 rc_proto = RC_PROTO_BIT_OTHER; 804 767 ir_codes = RC_MAP_EMPTY; 805 768 break; 769 + case 0x33: 770 + name = "Geniatech"; 771 + ir->get_key = get_key_geniatech; 772 + rc_proto = RC_PROTO_BIT_RC5; 773 + ir_codes = RC_MAP_TOTAL_MEDIA_IN_HAND_02; 774 + ir->old = 0xfc; 775 + break; 806 776 case 0x6b: 807 777 name = "FusionHDTV"; 808 778 ir->get_key = get_key_fusionhdtv; ··· 868 824 break; 869 825 case IR_KBD_GET_KEY_KNC1: 870 826 ir->get_key = get_key_knc1; 827 + break; 828 + case IR_KBD_GET_KEY_GENIATECH: 829 + ir->get_key = get_key_geniatech; 871 830 break; 872 831 case IR_KBD_GET_KEY_FUSIONHDTV: 873 832 ir->get_key = get_key_fusionhdtv;
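The new `get_key_geniatech()` above maps the raw byte from the IR chip to an RC5 scancode: the top bit is the toggle, and the remaining seven bits are decoded with `(b - 1) / 2`. The arithmetic in isolation, as a testable sketch:

```c
#include <assert.h>

/* Decode one raw Geniatech IR byte the way the new handler does:
 * bit 7 is the RC5 toggle, the low 7 bits map to the scancode. */
static void geniatech_decode(unsigned char raw, unsigned int *scancode,
			     unsigned char *toggle)
{
	*toggle = raw >> 7;
	raw &= 0x7f;
	*scancode = (raw - 1) / 2;
}
```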
+1 -1
drivers/media/i2c/isl7998x.c
··· 8 8 9 9 #include <linux/bitfield.h> 10 10 #include <linux/delay.h> 11 - #include <linux/gpio.h> 11 + #include <linux/gpio/consumer.h> 12 12 #include <linux/i2c.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of_graph.h>
+1 -1
drivers/media/i2c/mt9v111.c
··· 633 633 634 634 /* 635 635 * Set pixel integration time to the whole frame time. 636 - * This value controls the the shutter delay when running with AE 636 + * This value controls the shutter delay when running with AE 637 637 * disabled. If longer than frame time, it affects the output 638 638 * frame rate. 639 639 */
+79 -50
drivers/media/i2c/ov5640.c
··· 15 15 #include <linux/init.h> 16 16 #include <linux/module.h> 17 17 #include <linux/of_device.h> 18 + #include <linux/pm_runtime.h> 18 19 #include <linux/regulator/consumer.h> 19 20 #include <linux/slab.h> 20 21 #include <linux/types.h> ··· 447 446 448 447 /* lock to protect all members below */ 449 448 struct mutex lock; 450 - 451 - int power_count; 452 449 453 450 struct v4l2_mbus_framefmt fmt; 454 451 bool pending_fmt_change; ··· 2695 2696 return ret; 2696 2697 } 2697 2698 2698 - /* --------------- Subdev Operations --------------- */ 2699 - 2700 - static int ov5640_s_power(struct v4l2_subdev *sd, int on) 2699 + static int ov5640_sensor_suspend(struct device *dev) 2701 2700 { 2702 - struct ov5640_dev *sensor = to_ov5640_dev(sd); 2703 - int ret = 0; 2701 + struct v4l2_subdev *sd = dev_get_drvdata(dev); 2702 + struct ov5640_dev *ov5640 = to_ov5640_dev(sd); 2704 2703 2705 - mutex_lock(&sensor->lock); 2706 - 2707 - /* 2708 - * If the power count is modified from 0 to != 0 or from != 0 to 0, 2709 - * update the power state. 2710 - */ 2711 - if (sensor->power_count == !on) { 2712 - ret = ov5640_set_power(sensor, !!on); 2713 - if (ret) 2714 - goto out; 2715 - } 2716 - 2717 - /* Update the power count. */ 2718 - sensor->power_count += on ? 
1 : -1; 2719 - WARN_ON(sensor->power_count < 0); 2720 - out: 2721 - mutex_unlock(&sensor->lock); 2722 - 2723 - if (on && !ret && sensor->power_count == 1) { 2724 - /* restore controls */ 2725 - ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler); 2726 - } 2727 - 2728 - return ret; 2704 + return ov5640_set_power(ov5640, false); 2729 2705 } 2706 + 2707 + static int ov5640_sensor_resume(struct device *dev) 2708 + { 2709 + struct v4l2_subdev *sd = dev_get_drvdata(dev); 2710 + struct ov5640_dev *ov5640 = to_ov5640_dev(sd); 2711 + 2712 + return ov5640_set_power(ov5640, true); 2713 + } 2714 + 2715 + /* --------------- Subdev Operations --------------- */ 2730 2716 2731 2717 static int ov5640_try_frame_interval(struct ov5640_dev *sensor, 2732 2718 struct v4l2_fract *fi, ··· 3298 3314 3299 3315 /* v4l2_ctrl_lock() locks our own mutex */ 3300 3316 3317 + if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev)) 3318 + return 0; 3319 + 3301 3320 switch (ctrl->id) { 3302 3321 case V4L2_CID_AUTOGAIN: 3303 3322 val = ov5640_get_gain(sensor); ··· 3315 3328 sensor->ctrls.exposure->val = val; 3316 3329 break; 3317 3330 } 3331 + 3332 + pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3318 3333 3319 3334 return 0; 3320 3335 } ··· 3347 3358 /* 3348 3359 * If the device is not powered up by the host driver do 3349 3360 * not apply any controls to H/W at this time. Instead 3350 - * the controls will be restored right after power-up. 3361 + * the controls will be restored at start streaming time. 
3351 3362 */ 3352 - if (sensor->power_count == 0) 3363 + if (!pm_runtime_get_if_in_use(&sensor->i2c_client->dev)) 3353 3364 return 0; 3354 3365 3355 3366 switch (ctrl->id) { ··· 3390 3401 ret = -EINVAL; 3391 3402 break; 3392 3403 } 3404 + 3405 + pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3393 3406 3394 3407 return ret; 3395 3408 } ··· 3668 3677 struct ov5640_dev *sensor = to_ov5640_dev(sd); 3669 3678 int ret = 0; 3670 3679 3680 + if (enable) { 3681 + ret = pm_runtime_resume_and_get(&sensor->i2c_client->dev); 3682 + if (ret < 0) 3683 + return ret; 3684 + 3685 + ret = v4l2_ctrl_handler_setup(&sensor->ctrls.handler); 3686 + if (ret) { 3687 + pm_runtime_put(&sensor->i2c_client->dev); 3688 + return ret; 3689 + } 3690 + } 3691 + 3671 3692 mutex_lock(&sensor->lock); 3672 3693 3673 3694 if (sensor->streaming == !enable) { ··· 3704 3701 if (!ret) 3705 3702 sensor->streaming = enable; 3706 3703 } 3704 + 3707 3705 out: 3708 3706 mutex_unlock(&sensor->lock); 3707 + 3708 + if (!enable || ret) 3709 + pm_runtime_put_autosuspend(&sensor->i2c_client->dev); 3710 + 3709 3711 return ret; 3710 3712 } 3711 3713 ··· 3732 3724 } 3733 3725 3734 3726 static const struct v4l2_subdev_core_ops ov5640_core_ops = { 3735 - .s_power = ov5640_s_power, 3736 3727 .log_status = v4l2_ctrl_subdev_log_status, 3737 3728 .subscribe_event = v4l2_ctrl_subdev_subscribe_event, 3738 3729 .unsubscribe_event = v4l2_event_subdev_unsubscribe, ··· 3777 3770 int ret = 0; 3778 3771 u16 chip_id; 3779 3772 3780 - ret = ov5640_set_power_on(sensor); 3781 - if (ret) 3782 - return ret; 3783 - 3784 3773 ret = ov5640_read_reg16(sensor, OV5640_REG_CHIP_ID, &chip_id); 3785 3774 if (ret) { 3786 3775 dev_err(&client->dev, "%s: failed to read chip identifier\n", 3787 3776 __func__); 3788 - goto power_off; 3777 + return ret; 3789 3778 } 3790 3779 3791 3780 if (chip_id != 0x5640) { 3792 3781 dev_err(&client->dev, "%s: wrong chip identifier, expected 0x5640, got 0x%x\n", 3793 3782 __func__, chip_id); 3794 - ret = -ENXIO; 
3783 + return -ENXIO; 3795 3784 } 3796 3785 3797 - power_off: 3798 - ov5640_set_power_off(sensor); 3799 - return ret; 3786 + return 0; 3800 3787 } 3801 3788 3802 3789 static int ov5640_probe(struct i2c_client *client) ··· 3881 3880 3882 3881 ret = ov5640_get_regulators(sensor); 3883 3882 if (ret) 3884 - return ret; 3883 + goto entity_cleanup; 3885 3884 3886 3885 mutex_init(&sensor->lock); 3887 - 3888 - ret = ov5640_check_chip_id(sensor); 3889 - if (ret) 3890 - goto entity_cleanup; 3891 3886 3892 3887 ret = ov5640_init_controls(sensor); 3893 3888 if (ret) 3894 3889 goto entity_cleanup; 3895 3890 3891 + ret = ov5640_sensor_resume(dev); 3892 + if (ret) { 3893 + dev_err(dev, "failed to power on\n"); 3894 + goto entity_cleanup; 3895 + } 3896 + 3897 + pm_runtime_set_active(dev); 3898 + pm_runtime_get_noresume(dev); 3899 + pm_runtime_enable(dev); 3900 + 3901 + ret = ov5640_check_chip_id(sensor); 3902 + if (ret) 3903 + goto err_pm_runtime; 3904 + 3896 3905 ret = v4l2_async_register_subdev_sensor(&sensor->sd); 3897 3906 if (ret) 3898 - goto free_ctrls; 3907 + goto err_pm_runtime; 3908 + 3909 + pm_runtime_set_autosuspend_delay(dev, 1000); 3910 + pm_runtime_use_autosuspend(dev); 3911 + pm_runtime_put_autosuspend(dev); 3899 3912 3900 3913 return 0; 3901 3914 3902 - free_ctrls: 3915 + err_pm_runtime: 3916 + pm_runtime_put_noidle(dev); 3917 + pm_runtime_disable(dev); 3903 3918 v4l2_ctrl_handler_free(&sensor->ctrls.handler); 3919 + ov5640_sensor_suspend(dev); 3904 3920 entity_cleanup: 3905 3921 media_entity_cleanup(&sensor->sd.entity); 3906 3922 mutex_destroy(&sensor->lock); ··· 3928 3910 { 3929 3911 struct v4l2_subdev *sd = i2c_get_clientdata(client); 3930 3912 struct ov5640_dev *sensor = to_ov5640_dev(sd); 3913 + struct device *dev = &client->dev; 3914 + 3915 + pm_runtime_disable(dev); 3916 + if (!pm_runtime_status_suspended(dev)) 3917 + ov5640_sensor_suspend(dev); 3918 + pm_runtime_set_suspended(dev); 3931 3919 3932 3920 v4l2_async_unregister_subdev(&sensor->sd); 3933 3921 
media_entity_cleanup(&sensor->sd.entity); 3934 3922 v4l2_ctrl_handler_free(&sensor->ctrls.handler); 3935 3923 mutex_destroy(&sensor->lock); 3936 3924 } 3925 + 3926 + static const struct dev_pm_ops ov5640_pm_ops = { 3927 + SET_RUNTIME_PM_OPS(ov5640_sensor_suspend, ov5640_sensor_resume, NULL) 3928 + }; 3937 3929 3938 3930 static const struct i2c_device_id ov5640_id[] = { 3939 3931 {"ov5640", 0}, ··· 3961 3933 .driver = { 3962 3934 .name = "ov5640", 3963 3935 .of_match_table = ov5640_dt_ids, 3936 + .pm = &ov5640_pm_ops, 3964 3937 }, 3965 3938 .id_table = ov5640_id, 3966 3939 .probe_new = ov5640_probe,
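The ov5640 conversion above deletes the driver's hand-rolled `power_count` bookkeeping in `s_power()` and delegates it to runtime PM: the first `pm_runtime_resume_and_get()` powers the sensor up, the last put (after the autosuspend delay) powers it down. A userspace sketch of the use-count semantics being delegated (autosuspend delay omitted; names are illustrative):

```c
#include <assert.h>

struct sensor_pm {
	int usecount;
	int powered;
};

/* First get powers the device up, mirroring what runtime PM does by
 * calling the driver's .runtime_resume (ov5640_sensor_resume). */
static int pm_get(struct sensor_pm *pm)
{
	if (pm->usecount++ == 0)
		pm->powered = 1;
	return 0;
}

/* Last put powers it down via .runtime_suspend
 * (ov5640_sensor_suspend); the real core defers this by the
 * autosuspend delay set in probe. */
static void pm_put(struct sensor_pm *pm)
{
	if (--pm->usecount == 0)
		pm->powered = 0;
}
```

This is also why the control callbacks now use `pm_runtime_get_if_in_use()`: they only touch the hardware if someone else already holds a reference, otherwise the values are applied later at stream-on.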
+6 -4
drivers/media/i2c/ov8865.c
··· 3034 3034 &rate); 3035 3035 if (!ret && sensor->extclk) { 3036 3036 ret = clk_set_rate(sensor->extclk, rate); 3037 - if (ret) 3038 - return dev_err_probe(dev, ret, 3039 - "failed to set clock rate\n"); 3037 + if (ret) { 3038 + dev_err_probe(dev, ret, "failed to set clock rate\n"); 3039 + goto error_endpoint; 3040 + } 3040 3041 } else if (ret && !sensor->extclk) { 3041 - return dev_err_probe(dev, ret, "invalid clock config\n"); 3042 + dev_err_probe(dev, ret, "invalid clock config\n"); 3043 + goto error_endpoint; 3042 3044 } 3043 3045 3044 3046 sensor->extclk_rate = rate ? rate : clk_get_rate(sensor->extclk);
+6 -7
drivers/media/mc/mc-device.c
··· 581 581 struct media_device *mdev = entity->graph_obj.mdev; 582 582 struct media_link *link, *tmp; 583 583 struct media_interface *intf; 584 - unsigned int i; 584 + struct media_pad *iter; 585 585 586 586 ida_free(&mdev->entity_internal_idx, entity->internal_idx); 587 587 ··· 597 597 __media_entity_remove_links(entity); 598 598 599 599 /* Remove all pads that belong to this entity */ 600 - for (i = 0; i < entity->num_pads; i++) 601 - media_gobj_destroy(&entity->pads[i].graph_obj); 600 + media_entity_for_each_pad(entity, iter) 601 + media_gobj_destroy(&iter->graph_obj); 602 602 603 603 /* Remove the entity */ 604 604 media_gobj_destroy(&entity->graph_obj); ··· 610 610 struct media_entity *entity) 611 611 { 612 612 struct media_entity_notify *notify, *next; 613 - unsigned int i; 613 + struct media_pad *iter; 614 614 int ret; 615 615 616 616 if (entity->function == MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN || ··· 639 639 media_gobj_create(mdev, MEDIA_GRAPH_ENTITY, &entity->graph_obj); 640 640 641 641 /* Initialize objects at the pads */ 642 - for (i = 0; i < entity->num_pads; i++) 643 - media_gobj_create(mdev, MEDIA_GRAPH_PAD, 644 - &entity->pads[i].graph_obj); 642 + media_entity_for_each_pad(entity, iter) 643 + media_gobj_create(mdev, MEDIA_GRAPH_PAD, &iter->graph_obj); 645 644 646 645 /* invoke entity_notify callbacks */ 647 646 list_for_each_entry_safe(notify, next, &mdev->entity_notify, list)
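The mc-device.c hunks above swap open-coded `for (i = 0; i < entity->num_pads; i++)` loops for the `media_entity_for_each_pad()` iterator. A simplified sketch of that macro pattern — a pointer-walking `for_each` over an embedded array (struct and macro names mirror the kernel's but are trimmed down here):

```c
#include <assert.h>

struct pad {
	int index;
};

struct entity {
	unsigned int num_pads;
	struct pad *pads;
};

/* Iterate pads by pointer rather than by index, like the kernel's
 * media_entity_for_each_pad(): the loop variable is the pad itself,
 * so callers no longer juggle entity->pads[i]. */
#define entity_for_each_pad(ent, iter)				\
	for ((iter) = (ent)->pads;				\
	     (iter) < &(ent)->pads[(ent)->num_pads];		\
	     ++(iter))

static int sum_pad_indexes(struct entity *ent)
{
	struct pad *iter;
	int sum = 0;

	entity_for_each_pad(ent, iter)
		sum += iter->index;
	return sum;
}
```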
+544 -134
drivers/media/mc/mc-entity.c
··· 59 59 } 60 60 } 61 61 62 - __must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum, 63 - int idx_max) 62 + __must_check int media_entity_enum_init(struct media_entity_enum *ent_enum, 63 + struct media_device *mdev) 64 64 { 65 - idx_max = ALIGN(idx_max, BITS_PER_LONG); 65 + int idx_max; 66 + 67 + idx_max = ALIGN(mdev->entity_internal_idx_max + 1, BITS_PER_LONG); 66 68 ent_enum->bmap = bitmap_zalloc(idx_max, GFP_KERNEL); 67 69 if (!ent_enum->bmap) 68 70 return -ENOMEM; ··· 73 71 74 72 return 0; 75 73 } 76 - EXPORT_SYMBOL_GPL(__media_entity_enum_init); 74 + EXPORT_SYMBOL_GPL(media_entity_enum_init); 77 75 78 76 void media_entity_enum_cleanup(struct media_entity_enum *ent_enum) 79 77 { ··· 195 193 struct media_pad *pads) 196 194 { 197 195 struct media_device *mdev = entity->graph_obj.mdev; 198 - unsigned int i; 196 + struct media_pad *iter; 197 + unsigned int i = 0; 199 198 200 199 if (num_pads >= MEDIA_ENTITY_MAX_PADS) 201 200 return -E2BIG; ··· 207 204 if (mdev) 208 205 mutex_lock(&mdev->graph_mutex); 209 206 210 - for (i = 0; i < num_pads; i++) { 211 - pads[i].entity = entity; 212 - pads[i].index = i; 207 + media_entity_for_each_pad(entity, iter) { 208 + iter->entity = entity; 209 + iter->index = i++; 213 210 if (mdev) 214 211 media_gobj_create(mdev, MEDIA_GRAPH_PAD, 215 - &entity->pads[i].graph_obj); 212 + &iter->graph_obj); 216 213 } 217 214 218 215 if (mdev) ··· 225 222 /* ----------------------------------------------------------------------------- 226 223 * Graph traversal 227 224 */ 225 + 226 + /* 227 + * This function checks the interdependency inside the entity between @pad0 228 + * and @pad1. If two pads are interdependent they are part of the same pipeline 229 + * and enabling one of the pads means that the other pad will become "locked" 230 + * and doesn't allow configuration changes. 
231 + * 232 + * This function uses the &media_entity_operations.has_pad_interdep() operation 233 + * to check the dependency inside the entity between @pad0 and @pad1. If the 234 + * has_pad_interdep operation is not implemented, all pads of the entity are 235 + * considered to be interdependent. 236 + */ 237 + static bool media_entity_has_pad_interdep(struct media_entity *entity, 238 + unsigned int pad0, unsigned int pad1) 239 + { 240 + if (pad0 >= entity->num_pads || pad1 >= entity->num_pads) 241 + return false; 242 + 243 + if (entity->pads[pad0].flags & entity->pads[pad1].flags & 244 + (MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_SOURCE)) 245 + return false; 246 + 247 + if (!entity->ops || !entity->ops->has_pad_interdep) 248 + return true; 249 + 250 + return entity->ops->has_pad_interdep(entity, pad0, pad1); 251 + } 228 252 229 253 static struct media_entity * 230 254 media_entity_other(struct media_entity *entity, struct media_link *link) ··· 397 367 } 398 368 EXPORT_SYMBOL_GPL(media_graph_walk_next); 399 369 400 - int media_entity_get_fwnode_pad(struct media_entity *entity, 401 - struct fwnode_handle *fwnode, 402 - unsigned long direction_flags) 403 - { 404 - struct fwnode_endpoint endpoint; 405 - unsigned int i; 406 - int ret; 407 - 408 - if (!entity->ops || !entity->ops->get_fwnode_pad) { 409 - for (i = 0; i < entity->num_pads; i++) { 410 - if (entity->pads[i].flags & direction_flags) 411 - return i; 412 - } 413 - 414 - return -ENXIO; 415 - } 416 - 417 - ret = fwnode_graph_parse_endpoint(fwnode, &endpoint); 418 - if (ret) 419 - return ret; 420 - 421 - ret = entity->ops->get_fwnode_pad(entity, &endpoint); 422 - if (ret < 0) 423 - return ret; 424 - 425 - if (ret >= entity->num_pads) 426 - return -ENXIO; 427 - 428 - if (!(entity->pads[ret].flags & direction_flags)) 429 - return -ENXIO; 430 - 431 - return ret; 432 - } 433 - EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad); 434 - 435 370 /* ----------------------------------------------------------------------------- 436 371 
* Pipeline management 437 372 */ 438 373 439 - __must_check int __media_pipeline_start(struct media_entity *entity, 440 - struct media_pipeline *pipe) 374 + /* 375 + * The pipeline traversal stack stores pads that are reached during graph 376 + * traversal, with a list of links to be visited to continue the traversal. 377 + * When a new pad is reached, an entry is pushed on the top of the stack and 378 + * points to the incoming pad and the first link of the entity. 379 + * 380 + * To find further pads in the pipeline, the traversal algorithm follows 381 + * internal pad dependencies in the entity, and then links in the graph. It 382 + * does so by iterating over all links of the entity, and following enabled 383 + * links that originate from a pad that is internally connected to the incoming 384 + * pad, as reported by the media_entity_has_pad_interdep() function. 385 + */ 386 + 387 + /** 388 + * struct media_pipeline_walk_entry - Entry in the pipeline traversal stack 389 + * 390 + * @pad: The media pad being visited 391 + * @links: Links left to be visited 392 + */ 393 + struct media_pipeline_walk_entry { 394 + struct media_pad *pad; 395 + struct list_head *links; 396 + }; 397 + 398 + /** 399 + * struct media_pipeline_walk - State used by the media pipeline traversal 400 + * algorithm 401 + * 402 + * @mdev: The media device 403 + * @stack: Depth-first search stack 404 + * @stack.size: Number of allocated entries in @stack.entries 405 + * @stack.top: Index of the top stack entry (-1 if the stack is empty) 406 + * @stack.entries: Stack entries 407 + */ 408 + struct media_pipeline_walk { 409 + struct media_device *mdev; 410 + 411 + struct { 412 + unsigned int size; 413 + int top; 414 + struct media_pipeline_walk_entry *entries; 415 + } stack; 416 + }; 417 + 418 + #define MEDIA_PIPELINE_STACK_GROW_STEP 16 419 + 420 + static struct media_pipeline_walk_entry * 421 + media_pipeline_walk_top(struct media_pipeline_walk *walk) 441 422 { 442 - struct media_device *mdev = 
entity->graph_obj.mdev; 443 - struct media_graph *graph = &pipe->graph; 444 - struct media_entity *entity_err = entity; 445 - struct media_link *link; 423 + return &walk->stack.entries[walk->stack.top]; 424 + } 425 + 426 + static bool media_pipeline_walk_empty(struct media_pipeline_walk *walk) 427 + { 428 + return walk->stack.top == -1; 429 + } 430 + 431 + /* Increase the stack size by MEDIA_PIPELINE_STACK_GROW_STEP elements. */ 432 + static int media_pipeline_walk_resize(struct media_pipeline_walk *walk) 433 + { 434 + struct media_pipeline_walk_entry *entries; 435 + unsigned int new_size; 436 + 437 + /* Safety check, to avoid stack overflows in case of bugs. */ 438 + if (walk->stack.size >= 256) 439 + return -E2BIG; 440 + 441 + new_size = walk->stack.size + MEDIA_PIPELINE_STACK_GROW_STEP; 442 + 443 + entries = krealloc(walk->stack.entries, 444 + new_size * sizeof(*walk->stack.entries), 445 + GFP_KERNEL); 446 + if (!entries) 447 + return -ENOMEM; 448 + 449 + walk->stack.entries = entries; 450 + walk->stack.size = new_size; 451 + 452 + return 0; 453 + } 454 + 455 + /* Push a new entry on the stack. */ 456 + static int media_pipeline_walk_push(struct media_pipeline_walk *walk, 457 + struct media_pad *pad) 458 + { 459 + struct media_pipeline_walk_entry *entry; 446 460 int ret; 447 461 448 - if (pipe->streaming_count) { 449 - pipe->streaming_count++; 462 + if (walk->stack.top + 1 >= walk->stack.size) { 463 + ret = media_pipeline_walk_resize(walk); 464 + if (ret) 465 + return ret; 466 + } 467 + 468 + walk->stack.top++; 469 + entry = media_pipeline_walk_top(walk); 470 + entry->pad = pad; 471 + entry->links = pad->entity->links.next; 472 + 473 + dev_dbg(walk->mdev->dev, 474 + "media pipeline: pushed entry %u: '%s':%u\n", 475 + walk->stack.top, pad->entity->name, pad->index); 476 + 477 + return 0; 478 + } 479 + 480 + /* 481 + * Move the top entry link cursor to the next link. If all links of the entry 482 + * have been visited, pop the entry itself. 
483 + */ 484 + static void media_pipeline_walk_pop(struct media_pipeline_walk *walk) 485 + { 486 + struct media_pipeline_walk_entry *entry; 487 + 488 + if (WARN_ON(walk->stack.top < 0)) 489 + return; 490 + 491 + entry = media_pipeline_walk_top(walk); 492 + 493 + if (entry->links->next == &entry->pad->entity->links) { 494 + dev_dbg(walk->mdev->dev, 495 + "media pipeline: entry %u has no more links, popping\n", 496 + walk->stack.top); 497 + 498 + walk->stack.top--; 499 + return; 500 + } 501 + 502 + entry->links = entry->links->next; 503 + 504 + dev_dbg(walk->mdev->dev, 505 + "media pipeline: moved entry %u to next link\n", 506 + walk->stack.top); 507 + } 508 + 509 + /* Free all memory allocated while walking the pipeline. */ 510 + static void media_pipeline_walk_destroy(struct media_pipeline_walk *walk) 511 + { 512 + kfree(walk->stack.entries); 513 + } 514 + 515 + /* Add a pad to the pipeline and push it to the stack. */ 516 + static int media_pipeline_add_pad(struct media_pipeline *pipe, 517 + struct media_pipeline_walk *walk, 518 + struct media_pad *pad) 519 + { 520 + struct media_pipeline_pad *ppad; 521 + 522 + list_for_each_entry(ppad, &pipe->pads, list) { 523 + if (ppad->pad == pad) { 524 + dev_dbg(pad->graph_obj.mdev->dev, 525 + "media pipeline: already contains pad '%s':%u\n", 526 + pad->entity->name, pad->index); 527 + return 0; 528 + } 529 + } 530 + 531 + ppad = kzalloc(sizeof(*ppad), GFP_KERNEL); 532 + if (!ppad) 533 + return -ENOMEM; 534 + 535 + ppad->pipe = pipe; 536 + ppad->pad = pad; 537 + 538 + list_add_tail(&ppad->list, &pipe->pads); 539 + 540 + dev_dbg(pad->graph_obj.mdev->dev, 541 + "media pipeline: added pad '%s':%u\n", 542 + pad->entity->name, pad->index); 543 + 544 + return media_pipeline_walk_push(walk, pad); 545 + } 546 + 547 + /* Explore the next link of the entity at the top of the stack. 
*/ 548 + static int media_pipeline_explore_next_link(struct media_pipeline *pipe, 549 + struct media_pipeline_walk *walk) 550 + { 551 + struct media_pipeline_walk_entry *entry = media_pipeline_walk_top(walk); 552 + struct media_pad *pad; 553 + struct media_link *link; 554 + struct media_pad *local; 555 + struct media_pad *remote; 556 + int ret; 557 + 558 + pad = entry->pad; 559 + link = list_entry(entry->links, typeof(*link), list); 560 + media_pipeline_walk_pop(walk); 561 + 562 + dev_dbg(walk->mdev->dev, 563 + "media pipeline: exploring link '%s':%u -> '%s':%u\n", 564 + link->source->entity->name, link->source->index, 565 + link->sink->entity->name, link->sink->index); 566 + 567 + /* Skip links that are not enabled. */ 568 + if (!(link->flags & MEDIA_LNK_FL_ENABLED)) { 569 + dev_dbg(walk->mdev->dev, 570 + "media pipeline: skipping link (disabled)\n"); 450 571 return 0; 451 572 } 452 573 453 - ret = media_graph_walk_init(&pipe->graph, mdev); 574 + /* Get the local pad and remote pad. */ 575 + if (link->source->entity == pad->entity) { 576 + local = link->source; 577 + remote = link->sink; 578 + } else { 579 + local = link->sink; 580 + remote = link->source; 581 + } 582 + 583 + /* 584 + * Skip links that originate from a different pad than the incoming pad 585 + * that is not connected internally in the entity to the incoming pad. 586 + */ 587 + if (pad != local && 588 + !media_entity_has_pad_interdep(pad->entity, pad->index, local->index)) { 589 + dev_dbg(walk->mdev->dev, 590 + "media pipeline: skipping link (no route)\n"); 591 + return 0; 592 + } 593 + 594 + /* 595 + * Add the local and remote pads of the link to the pipeline and push 596 + * them to the stack, if they're not already present. 
597 + */ 598 + ret = media_pipeline_add_pad(pipe, walk, local); 454 599 if (ret) 455 600 return ret; 456 601 457 - media_graph_walk_start(&pipe->graph, entity); 602 + ret = media_pipeline_add_pad(pipe, walk, remote); 603 + if (ret) 604 + return ret; 458 605 459 - while ((entity = media_graph_walk_next(graph))) { 460 - DECLARE_BITMAP(active, MEDIA_ENTITY_MAX_PADS); 461 - DECLARE_BITMAP(has_no_links, MEDIA_ENTITY_MAX_PADS); 606 + return 0; 607 + } 462 608 463 - if (entity->pipe && entity->pipe != pipe) { 464 - pr_err("Pipe active for %s. Can't start for %s\n", 465 - entity->name, 466 - entity_err->name); 609 + static void media_pipeline_cleanup(struct media_pipeline *pipe) 610 + { 611 + while (!list_empty(&pipe->pads)) { 612 + struct media_pipeline_pad *ppad; 613 + 614 + ppad = list_first_entry(&pipe->pads, typeof(*ppad), list); 615 + list_del(&ppad->list); 616 + kfree(ppad); 617 + } 618 + } 619 + 620 + static int media_pipeline_populate(struct media_pipeline *pipe, 621 + struct media_pad *pad) 622 + { 623 + struct media_pipeline_walk walk = { }; 624 + struct media_pipeline_pad *ppad; 625 + int ret; 626 + 627 + /* 628 + * Populate the media pipeline by walking the media graph, starting 629 + * from @pad. 630 + */ 631 + INIT_LIST_HEAD(&pipe->pads); 632 + pipe->mdev = pad->graph_obj.mdev; 633 + 634 + walk.mdev = pipe->mdev; 635 + walk.stack.top = -1; 636 + ret = media_pipeline_add_pad(pipe, &walk, pad); 637 + if (ret) 638 + goto done; 639 + 640 + /* 641 + * Use a depth-first search algorithm: as long as the stack is not 642 + * empty, explore the next link of the top entry. The 643 + * media_pipeline_explore_next_link() function will either move to the 644 + * next link, pop the entry if fully visited, or add new entries on 645 + * top. 
646 + */ 647 + while (!media_pipeline_walk_empty(&walk)) { 648 + ret = media_pipeline_explore_next_link(pipe, &walk); 649 + if (ret) 650 + goto done; 651 + } 652 + 653 + dev_dbg(pad->graph_obj.mdev->dev, 654 + "media pipeline populated, found pads:\n"); 655 + 656 + list_for_each_entry(ppad, &pipe->pads, list) 657 + dev_dbg(pad->graph_obj.mdev->dev, "- '%s':%u\n", 658 + ppad->pad->entity->name, ppad->pad->index); 659 + 660 + WARN_ON(walk.stack.top != -1); 661 + 662 + ret = 0; 663 + 664 + done: 665 + media_pipeline_walk_destroy(&walk); 666 + 667 + if (ret) 668 + media_pipeline_cleanup(pipe); 669 + 670 + return ret; 671 + } 672 + 673 + __must_check int __media_pipeline_start(struct media_pad *pad, 674 + struct media_pipeline *pipe) 675 + { 676 + struct media_device *mdev = pad->entity->graph_obj.mdev; 677 + struct media_pipeline_pad *err_ppad; 678 + struct media_pipeline_pad *ppad; 679 + int ret; 680 + 681 + lockdep_assert_held(&mdev->graph_mutex); 682 + 683 + /* 684 + * If the entity is already part of a pipeline, that pipeline must 685 + * be the same as the pipe given to media_pipeline_start(). 686 + */ 687 + if (WARN_ON(pad->pipe && pad->pipe != pipe)) 688 + return -EINVAL; 689 + 690 + /* 691 + * If the pipeline has already been started, it is guaranteed to be 692 + * valid, so just increase the start count. 693 + */ 694 + if (pipe->start_count) { 695 + pipe->start_count++; 696 + return 0; 697 + } 698 + 699 + /* 700 + * Populate the pipeline. This populates the media_pipeline pads list 701 + * with media_pipeline_pad instances for each pad found during graph 702 + * walk. 703 + */ 704 + ret = media_pipeline_populate(pipe, pad); 705 + if (ret) 706 + return ret; 707 + 708 + /* 709 + * Now that all the pads in the pipeline have been gathered, perform 710 + * the validation steps. 
711 + */ 712 + 713 + list_for_each_entry(ppad, &pipe->pads, list) { 714 + struct media_pad *pad = ppad->pad; 715 + struct media_entity *entity = pad->entity; 716 + bool has_enabled_link = false; 717 + bool has_link = false; 718 + struct media_link *link; 719 + 720 + dev_dbg(mdev->dev, "Validating pad '%s':%u\n", pad->entity->name, 721 + pad->index); 722 + 723 + /* 724 + * 1. Ensure that the pad doesn't already belong to a different 725 + * pipeline. 726 + */ 727 + if (pad->pipe) { 728 + dev_dbg(mdev->dev, "Failed to start pipeline: pad '%s':%u busy\n", 729 + pad->entity->name, pad->index); 467 730 ret = -EBUSY; 468 731 goto error; 469 732 } 470 733 471 - /* Already streaming --- no need to check. */ 472 - if (entity->pipe) 473 - continue; 474 - 475 - entity->pipe = pipe; 476 - 477 - if (!entity->ops || !entity->ops->link_validate) 478 - continue; 479 - 480 - bitmap_zero(active, entity->num_pads); 481 - bitmap_fill(has_no_links, entity->num_pads); 482 - 734 + /* 735 + * 2. Validate all active links whose sink is the current pad. 736 + * Validation of the source pads is performed in the context of 737 + * the connected sink pad to avoid duplicating checks. 738 + */ 483 739 for_each_media_entity_data_link(entity, link) { 484 - struct media_pad *pad = link->sink->entity == entity 485 - ? link->sink : link->source; 740 + /* Skip links unrelated to the current pad. */ 741 + if (link->sink != pad && link->source != pad) 742 + continue; 486 743 487 - /* Mark that a pad is connected by a link. */ 488 - bitmap_clear(has_no_links, pad->index, 1); 489 - 490 - /* 491 - * Pads that either do not need to connect or 492 - * are connected through an enabled link are 493 - * fine. 494 - */ 495 - if (!(pad->flags & MEDIA_PAD_FL_MUST_CONNECT) || 496 - link->flags & MEDIA_LNK_FL_ENABLED) 497 - bitmap_set(active, pad->index, 1); 744 + /* Record if the pad has links and enabled links. 
*/ 745 + if (link->flags & MEDIA_LNK_FL_ENABLED) 746 + has_enabled_link = true; 747 + has_link = true; 498 748 499 749 /* 500 - * Link validation will only take place for 501 - * sink ends of the link that are enabled. 750 + * Validate the link if it's enabled and has the 751 + * current pad as its sink. 502 752 */ 503 - if (link->sink != pad || 504 - !(link->flags & MEDIA_LNK_FL_ENABLED)) 753 + if (!(link->flags & MEDIA_LNK_FL_ENABLED)) 754 + continue; 755 + 756 + if (link->sink != pad) 757 + continue; 758 + 759 + if (!entity->ops || !entity->ops->link_validate) 505 760 continue; 506 761 507 762 ret = entity->ops->link_validate(link); 508 - if (ret < 0 && ret != -ENOIOCTLCMD) { 509 - dev_dbg(entity->graph_obj.mdev->dev, 510 - "link validation failed for '%s':%u -> '%s':%u, error %d\n", 763 + if (ret) { 764 + dev_dbg(mdev->dev, 765 + "Link '%s':%u -> '%s':%u failed validation: %d\n", 511 766 link->source->entity->name, 512 767 link->source->index, 513 - entity->name, link->sink->index, ret); 768 + link->sink->entity->name, 769 + link->sink->index, ret); 514 770 goto error; 515 771 } 772 + 773 + dev_dbg(mdev->dev, 774 + "Link '%s':%u -> '%s':%u is valid\n", 775 + link->source->entity->name, 776 + link->source->index, 777 + link->sink->entity->name, 778 + link->sink->index); 516 779 } 517 780 518 - /* Either no links or validated links are fine. */ 519 - bitmap_or(active, active, has_no_links, entity->num_pads); 520 - 521 - if (!bitmap_full(active, entity->num_pads)) { 781 + /* 782 + * 3. If the pad has the MEDIA_PAD_FL_MUST_CONNECT flag set, 783 + * ensure that it has either no link or an enabled link. 
784 + */ 785 + if ((pad->flags & MEDIA_PAD_FL_MUST_CONNECT) && has_link && 786 + !has_enabled_link) { 787 + dev_dbg(mdev->dev, 788 + "Pad '%s':%u must be connected by an enabled link\n", 789 + pad->entity->name, pad->index); 522 790 ret = -ENOLINK; 523 - dev_dbg(entity->graph_obj.mdev->dev, 524 - "'%s':%u must be connected by an enabled link\n", 525 - entity->name, 526 - (unsigned)find_first_zero_bit( 527 - active, entity->num_pads)); 528 791 goto error; 529 792 } 793 + 794 + /* Validation passed, store the pipe pointer in the pad. */ 795 + pad->pipe = pipe; 530 796 } 531 797 532 - pipe->streaming_count++; 798 + pipe->start_count++; 533 799 534 800 return 0; 535 801 ··· 834 508 * Link validation on graph failed. We revert what we did and 835 509 * return the error. 836 510 */ 837 - media_graph_walk_start(graph, entity_err); 838 511 839 - while ((entity_err = media_graph_walk_next(graph))) { 840 - entity_err->pipe = NULL; 841 - 842 - /* 843 - * We haven't started entities further than this so we quit 844 - * here. 
845 - */ 846 - if (entity_err == entity) 512 + list_for_each_entry(err_ppad, &pipe->pads, list) { 513 + if (err_ppad == ppad) 847 514 break; 515 + 516 + err_ppad->pad->pipe = NULL; 848 517 } 849 518 850 - media_graph_walk_cleanup(graph); 519 + media_pipeline_cleanup(pipe); 851 520 852 521 return ret; 853 522 } 854 523 EXPORT_SYMBOL_GPL(__media_pipeline_start); 855 524 856 - __must_check int media_pipeline_start(struct media_entity *entity, 525 + __must_check int media_pipeline_start(struct media_pad *pad, 857 526 struct media_pipeline *pipe) 858 527 { 859 - struct media_device *mdev = entity->graph_obj.mdev; 528 + struct media_device *mdev = pad->entity->graph_obj.mdev; 860 529 int ret; 861 530 862 531 mutex_lock(&mdev->graph_mutex); 863 - ret = __media_pipeline_start(entity, pipe); 532 + ret = __media_pipeline_start(pad, pipe); 864 533 mutex_unlock(&mdev->graph_mutex); 865 534 return ret; 866 535 } 867 536 EXPORT_SYMBOL_GPL(media_pipeline_start); 868 537 869 - void __media_pipeline_stop(struct media_entity *entity) 538 + void __media_pipeline_stop(struct media_pad *pad) 870 539 { 871 - struct media_graph *graph = &entity->pipe->graph; 872 - struct media_pipeline *pipe = entity->pipe; 540 + struct media_pipeline *pipe = pad->pipe; 541 + struct media_pipeline_pad *ppad; 873 542 874 543 /* 875 544 * If the following check fails, the driver has performed an ··· 873 552 if (WARN_ON(!pipe)) 874 553 return; 875 554 876 - if (--pipe->streaming_count) 555 + if (--pipe->start_count) 877 556 return; 878 557 879 - media_graph_walk_start(graph, entity); 558 + list_for_each_entry(ppad, &pipe->pads, list) 559 + ppad->pad->pipe = NULL; 880 560 881 - while ((entity = media_graph_walk_next(graph))) 882 - entity->pipe = NULL; 561 + media_pipeline_cleanup(pipe); 883 562 884 - media_graph_walk_cleanup(graph); 885 - 563 + if (pipe->allocated) 564 + kfree(pipe); 886 565 } 887 566 EXPORT_SYMBOL_GPL(__media_pipeline_stop); 888 567 889 - void media_pipeline_stop(struct media_entity 
*entity) 568 + void media_pipeline_stop(struct media_pad *pad) 890 569 { 891 - struct media_device *mdev = entity->graph_obj.mdev; 570 + struct media_device *mdev = pad->entity->graph_obj.mdev; 892 571 893 572 mutex_lock(&mdev->graph_mutex); 894 - __media_pipeline_stop(entity); 573 + __media_pipeline_stop(pad); 895 574 mutex_unlock(&mdev->graph_mutex); 896 575 } 897 576 EXPORT_SYMBOL_GPL(media_pipeline_stop); 577 + 578 + __must_check int media_pipeline_alloc_start(struct media_pad *pad) 579 + { 580 + struct media_device *mdev = pad->entity->graph_obj.mdev; 581 + struct media_pipeline *new_pipe = NULL; 582 + struct media_pipeline *pipe; 583 + int ret; 584 + 585 + mutex_lock(&mdev->graph_mutex); 586 + 587 + /* 588 + * Is the entity already part of a pipeline? If not, we need to allocate 589 + * a pipe. 590 + */ 591 + pipe = media_pad_pipeline(pad); 592 + if (!pipe) { 593 + new_pipe = kzalloc(sizeof(*new_pipe), GFP_KERNEL); 594 + if (!new_pipe) { 595 + ret = -ENOMEM; 596 + goto out; 597 + } 598 + 599 + pipe = new_pipe; 600 + pipe->allocated = true; 601 + } 602 + 603 + ret = __media_pipeline_start(pad, pipe); 604 + if (ret) 605 + kfree(new_pipe); 606 + 607 + out: 608 + mutex_unlock(&mdev->graph_mutex); 609 + 610 + return ret; 611 + } 612 + EXPORT_SYMBOL_GPL(media_pipeline_alloc_start); 898 613 899 614 /* ----------------------------------------------------------------------------- 900 615 * Links management ··· 1186 829 { 1187 830 const u32 mask = MEDIA_LNK_FL_ENABLED; 1188 831 struct media_device *mdev; 1189 - struct media_entity *source, *sink; 832 + struct media_pad *source, *sink; 1190 833 int ret = -EBUSY; 1191 834 1192 835 if (link == NULL) ··· 1202 845 if (link->flags == flags) 1203 846 return 0; 1204 847 1205 - source = link->source->entity; 1206 - sink = link->sink->entity; 848 + source = link->source; 849 + sink = link->sink; 1207 850 1208 851 if (!(link->flags & MEDIA_LNK_FL_DYNAMIC) && 1209 - (media_entity_is_streaming(source) || 1210 - 
media_entity_is_streaming(sink))) 852 + (media_pad_is_streaming(source) || media_pad_is_streaming(sink))) 1211 853 return -EBUSY; 1212 854 1213 855 mdev = source->graph_obj.mdev; ··· 1346 990 return found_pad; 1347 991 } 1348 992 EXPORT_SYMBOL_GPL(media_pad_remote_pad_unique); 993 + 994 + int media_entity_get_fwnode_pad(struct media_entity *entity, 995 + struct fwnode_handle *fwnode, 996 + unsigned long direction_flags) 997 + { 998 + struct fwnode_endpoint endpoint; 999 + unsigned int i; 1000 + int ret; 1001 + 1002 + if (!entity->ops || !entity->ops->get_fwnode_pad) { 1003 + for (i = 0; i < entity->num_pads; i++) { 1004 + if (entity->pads[i].flags & direction_flags) 1005 + return i; 1006 + } 1007 + 1008 + return -ENXIO; 1009 + } 1010 + 1011 + ret = fwnode_graph_parse_endpoint(fwnode, &endpoint); 1012 + if (ret) 1013 + return ret; 1014 + 1015 + ret = entity->ops->get_fwnode_pad(entity, &endpoint); 1016 + if (ret < 0) 1017 + return ret; 1018 + 1019 + if (ret >= entity->num_pads) 1020 + return -ENXIO; 1021 + 1022 + if (!(entity->pads[ret].flags & direction_flags)) 1023 + return -ENXIO; 1024 + 1025 + return ret; 1026 + } 1027 + EXPORT_SYMBOL_GPL(media_entity_get_fwnode_pad); 1028 + 1029 + struct media_pipeline *media_entity_pipeline(struct media_entity *entity) 1030 + { 1031 + struct media_pad *pad; 1032 + 1033 + media_entity_for_each_pad(entity, pad) { 1034 + if (pad->pipe) 1035 + return pad->pipe; 1036 + } 1037 + 1038 + return NULL; 1039 + } 1040 + EXPORT_SYMBOL_GPL(media_entity_pipeline); 1041 + 1042 + struct media_pipeline *media_pad_pipeline(struct media_pad *pad) 1043 + { 1044 + return pad->pipe; 1045 + } 1046 + EXPORT_SYMBOL_GPL(media_pad_pipeline); 1349 1047 1350 1048 static void media_interface_init(struct media_device *mdev, 1351 1049 struct media_interface *intf,
+2 -2
drivers/media/pci/cx18/cx18-av-core.c
··· 339 339 340 340 /* 341 341 * For a 13.5 Mpps clock and 15,625 Hz line rate, a line is 342 - * is 864 pixels = 720 active + 144 blanking. ITU-R BT.601 342 + * 864 pixels = 720 active + 144 blanking. ITU-R BT.601 343 343 * specifies 12 luma clock periods or ~ 0.9 * 13.5 Mpps after 344 344 * the end of active video to start a horizontal line, so that 345 345 * leaves 132 pixels of hblank to ignore. ··· 399 399 400 400 /* 401 401 * For a 13.5 Mpps clock and 15,734.26 Hz line rate, a line is 402 - * is 858 pixels = 720 active + 138 blanking. The Hsync leading 402 + * 858 pixels = 720 active + 138 blanking. The Hsync leading 403 403 * edge should happen 1.2 us * 13.5 Mpps ~= 16 pixels after the 404 404 * end of active video, leaving 122 pixels of hblank to ignore 405 405 * before active video starts.
+1 -1
drivers/media/pci/cx88/cx88-input.c
··· 586 586 { 587 587 struct i2c_board_info info; 588 588 static const unsigned short default_addr_list[] = { 589 - 0x18, 0x6b, 0x71, 589 + 0x18, 0x33, 0x6b, 0x71, 590 590 I2C_CLIENT_END 591 591 }; 592 592 static const unsigned short pvr2000_addr_list[] = {
+1
drivers/media/pci/cx88/cx88-video.c
··· 1388 1388 } 1389 1389 fallthrough; 1390 1390 case CX88_BOARD_DVICO_FUSIONHDTV_5_PCI_NANO: 1391 + case CX88_BOARD_NOTONLYTV_LV3H: 1391 1392 request_module("ir-kbd-i2c"); 1392 1393 } 1393 1394
+3 -3
drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
··· 989 989 return r; 990 990 } 991 991 992 - r = media_pipeline_start(&q->vdev.entity, &q->pipe); 992 + r = video_device_pipeline_start(&q->vdev, &q->pipe); 993 993 if (r) 994 994 goto fail_pipeline; 995 995 ··· 1009 1009 fail_csi2_subdev: 1010 1010 cio2_hw_exit(cio2, q); 1011 1011 fail_hw: 1012 - media_pipeline_stop(&q->vdev.entity); 1012 + video_device_pipeline_stop(&q->vdev); 1013 1013 fail_pipeline: 1014 1014 dev_dbg(dev, "failed to start streaming (%d)\n", r); 1015 1015 cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_QUEUED); ··· 1030 1030 cio2_hw_exit(cio2, q); 1031 1031 synchronize_irq(cio2->pci_dev->irq); 1032 1032 cio2_vb2_return_all_buffers(q, VB2_BUF_STATE_ERROR); 1033 - media_pipeline_stop(&q->vdev.entity); 1033 + video_device_pipeline_stop(&q->vdev); 1034 1034 pm_runtime_put(dev); 1035 1035 cio2->streaming = false; 1036 1036 }
+4 -7
drivers/media/platform/amphion/vpu_v4l2.c
··· 603 603 inst->workqueue = NULL; 604 604 } 605 605 606 + if (inst->fh.m2m_ctx) { 607 + v4l2_m2m_ctx_release(inst->fh.m2m_ctx); 608 + inst->fh.m2m_ctx = NULL; 609 + } 606 610 v4l2_ctrl_handler_free(&inst->ctrl_handler); 607 611 mutex_destroy(&inst->lock); 608 612 v4l2_fh_del(&inst->fh); ··· 688 684 struct vpu_inst *inst = to_inst(file); 689 685 690 686 vpu_trace(vpu->dev, "tgid = %d, pid = %d, inst = %p\n", inst->tgid, inst->pid, inst); 691 - 692 - vpu_inst_lock(inst); 693 - if (inst->fh.m2m_ctx) { 694 - v4l2_m2m_ctx_release(inst->fh.m2m_ctx); 695 - inst->fh.m2m_ctx = NULL; 696 - } 697 - vpu_inst_unlock(inst); 698 687 699 688 call_void_vop(inst, release); 700 689 vpu_inst_unregister(inst);
+3 -10
drivers/media/platform/chips-media/coda-jpeg.c
··· 421 421 coda_write(dev, (s32)values[i], CODA9_REG_JPEG_HUFF_DATA); 422 422 } 423 423 424 - static int coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx) 424 + static void coda9_jpeg_dec_huff_setup(struct coda_ctx *ctx) 425 425 { 426 426 struct coda_huff_tab *huff_tab = ctx->params.jpeg_huff_tab; 427 427 struct coda_dev *dev = ctx->dev; ··· 455 455 coda9_jpeg_write_huff_values(dev, huff_tab->luma_ac, 162); 456 456 coda9_jpeg_write_huff_values(dev, huff_tab->chroma_ac, 162); 457 457 coda_write(dev, 0x000, CODA9_REG_JPEG_HUFF_CTRL); 458 - return 0; 459 458 } 460 459 461 460 static inline void coda9_jpeg_write_qmat_tab(struct coda_dev *dev, ··· 1393 1394 coda_write(dev, ctx->params.jpeg_restart_interval, 1394 1395 CODA9_REG_JPEG_RST_INTVAL); 1395 1396 1396 - if (ctx->params.jpeg_huff_tab) { 1397 - ret = coda9_jpeg_dec_huff_setup(ctx); 1398 - if (ret < 0) { 1399 - v4l2_err(&dev->v4l2_dev, 1400 - "failed to set up Huffman tables: %d\n", ret); 1401 - return ret; 1402 - } 1403 - } 1397 + if (ctx->params.jpeg_huff_tab) 1398 + coda9_jpeg_dec_huff_setup(ctx); 1404 1399 1405 1400 coda9_jpeg_qmat_setup(ctx); 1406 1401
+1 -1
drivers/media/platform/mediatek/mdp3/mtk-mdp3-cmdq.c
··· 457 457 kfree(path); 458 458 atomic_dec(&mdp->job_count); 459 459 wake_up(&mdp->callback_wq); 460 - if (cmd->pkt.buf_size > 0) 460 + if (cmd && cmd->pkt.buf_size > 0) 461 461 mdp_cmdq_pkt_destroy(&cmd->pkt); 462 462 kfree(comps); 463 463 kfree(cmd);
+4 -3
drivers/media/platform/mediatek/mdp3/mtk-mdp3-comp.c
··· 682 682 int i, ret; 683 683 684 684 if (comp->comp_dev) { 685 - ret = pm_runtime_get_sync(comp->comp_dev); 685 + ret = pm_runtime_resume_and_get(comp->comp_dev); 686 686 if (ret < 0) { 687 687 dev_err(dev, 688 688 "Failed to get power, err %d. type:%d id:%d\n", ··· 699 699 dev_err(dev, 700 700 "Failed to enable clk %d. type:%d id:%d\n", 701 701 i, comp->type, comp->id); 702 + pm_runtime_put(comp->comp_dev); 702 703 return ret; 703 704 } 704 705 } ··· 870 869 871 870 ret = mdp_comp_init(mdp, node, comp, id); 872 871 if (ret) { 873 - kfree(comp); 872 + devm_kfree(dev, comp); 874 873 return ERR_PTR(ret); 875 874 } 876 875 mdp->comp[id] = comp; ··· 931 930 if (mdp->comp[i]) { 932 931 pm_runtime_disable(mdp->comp[i]->comp_dev); 933 932 mdp_comp_deinit(mdp->comp[i]); 934 - kfree(mdp->comp[i]); 933 + devm_kfree(mdp->comp[i]->comp_dev, mdp->comp[i]); 935 934 mdp->comp[i] = NULL; 936 935 } 937 936 }
+2 -1
drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c
··· 289 289 mdp_comp_destroy(mdp); 290 290 err_return: 291 291 for (i = 0; i < MDP_PIPE_MAX; i++) 292 - mtk_mutex_put(mdp->mdp_mutex[i]); 292 + if (mdp) 293 + mtk_mutex_put(mdp->mdp_mutex[i]); 293 294 kfree(mdp); 294 295 dev_dbg(dev, "Errno %d\n", ret); 295 296 return ret;
+2 -1
drivers/media/platform/mediatek/mdp3/mtk-mdp3-vpu.c
··· 173 173 /* vpu work_size was set in mdp_vpu_ipi_handle_init_ack */ 174 174 175 175 mem_size = vpu_alloc_size; 176 - if (mdp_vpu_shared_mem_alloc(vpu)) { 176 + err = mdp_vpu_shared_mem_alloc(vpu); 177 + if (err) { 177 178 dev_err(&mdp->pdev->dev, "VPU memory alloc fail!"); 178 179 goto err_mem_alloc; 179 180 }
+2 -2
drivers/media/platform/nxp/dw100/dw100.c
··· 373 373 * The coordinates are saved in UQ12.4 fixed point format. 374 374 */ 375 375 static void dw100_ctrl_dewarping_map_init(const struct v4l2_ctrl *ctrl, 376 - u32 from_idx, u32 elems, 376 + u32 from_idx, 377 377 union v4l2_ctrl_ptr ptr) 378 378 { 379 379 struct dw100_ctx *ctx = ··· 398 398 ctx->map_height = mh; 399 399 ctx->map_size = mh * mw * sizeof(u32); 400 400 401 - for (idx = from_idx; idx < elems; idx++) { 401 + for (idx = from_idx; idx < ctrl->elems; idx++) { 402 402 qy = min_t(u32, (idx / mw) * qdy, qsh); 403 403 qx = min_t(u32, (idx % mw) * qdx, qsw); 404 404 map[idx] = dw100_map_format_coordinates(qx, qy);
+3 -3
drivers/media/platform/qcom/camss/camss-video.c
··· 493 493 struct v4l2_subdev *subdev; 494 494 int ret; 495 495 496 - ret = media_pipeline_start(&vdev->entity, &video->pipe); 496 + ret = video_device_pipeline_start(vdev, &video->pipe); 497 497 if (ret < 0) 498 498 return ret; 499 499 ··· 522 522 return 0; 523 523 524 524 error: 525 - media_pipeline_stop(&vdev->entity); 525 + video_device_pipeline_stop(vdev); 526 526 527 527 video->ops->flush_buffers(video, VB2_BUF_STATE_QUEUED); 528 528 ··· 553 553 v4l2_subdev_call(subdev, video, s_stream, 0); 554 554 } 555 555 556 - media_pipeline_stop(&vdev->entity); 556 + video_device_pipeline_stop(vdev); 557 557 558 558 video->ops->flush_buffers(video, VB2_BUF_STATE_ERROR); 559 559 }
+7 -6
drivers/media/platform/qcom/venus/helpers.c
··· 1800 1800 struct venus_core *core = inst->core; 1801 1801 u32 fmt = to_hfi_raw_fmt(v4l2_pixfmt); 1802 1802 struct hfi_plat_caps *caps; 1803 - u32 buftype; 1803 + bool found; 1804 1804 1805 1805 if (!fmt) 1806 1806 return false; ··· 1809 1809 if (!caps) 1810 1810 return false; 1811 1811 1812 - if (inst->session_type == VIDC_SESSION_TYPE_DEC) 1813 - buftype = HFI_BUFFER_OUTPUT2; 1814 - else 1815 - buftype = HFI_BUFFER_OUTPUT; 1812 + found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT, fmt); 1813 + if (found) 1814 + goto done; 1816 1815 1817 - return find_fmt_from_caps(caps, buftype, fmt); 1816 + found = find_fmt_from_caps(caps, HFI_BUFFER_OUTPUT2, fmt); 1817 + done: 1818 + return found; 1818 1819 } 1819 1820 EXPORT_SYMBOL_GPL(venus_helper_check_format); 1820 1821
+1 -4
drivers/media/platform/qcom/venus/hfi.c
··· 569 569 570 570 int hfi_create(struct venus_core *core, const struct hfi_core_ops *ops) 571 571 { 572 - int ret; 573 - 574 572 if (!ops) 575 573 return -EINVAL; 576 574 ··· 577 579 core->state = CORE_UNINIT; 578 580 init_completion(&core->done); 579 581 pkt_set_version(core->res->hfi_version); 580 - ret = venus_hfi_create(core); 581 582 582 - return ret; 583 + return venus_hfi_create(core); 583 584 } 584 585 585 586 void hfi_destroy(struct venus_core *core)
+2
drivers/media/platform/qcom/venus/vdec.c
··· 183 183 else 184 184 return NULL; 185 185 fmt = find_format(inst, pixmp->pixelformat, f->type); 186 + if (!fmt) 187 + return NULL; 186 188 } 187 189 188 190 pixmp->width = clamp(pixmp->width, frame_width_min(inst),
+21 -8
drivers/media/platform/qcom/venus/venc.c
··· 192 192 pixmp->height = clamp(pixmp->height, frame_height_min(inst), 193 193 frame_height_max(inst)); 194 194 195 - if (f->type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) { 196 - pixmp->width = ALIGN(pixmp->width, 128); 197 - pixmp->height = ALIGN(pixmp->height, 32); 198 - } 195 + pixmp->width = ALIGN(pixmp->width, 128); 196 + pixmp->height = ALIGN(pixmp->height, 32); 199 197 200 198 pixmp->width = ALIGN(pixmp->width, 2); 201 199 pixmp->height = ALIGN(pixmp->height, 2); ··· 390 392 struct v4l2_fract *timeperframe = &out->timeperframe; 391 393 u64 us_per_frame, fps; 392 394 393 - if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && 395 + if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && 394 396 a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) 395 397 return -EINVAL; 396 398 ··· 422 424 { 423 425 struct venus_inst *inst = to_inst(file); 424 426 425 - if (a->type != V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE && 427 + if (a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT && 426 428 a->type != V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE) 427 429 return -EINVAL; 428 430 ··· 507 509 return 0; 508 510 } 509 511 512 + static int venc_subscribe_event(struct v4l2_fh *fh, 513 + const struct v4l2_event_subscription *sub) 514 + { 515 + switch (sub->type) { 516 + case V4L2_EVENT_EOS: 517 + return v4l2_event_subscribe(fh, sub, 2, NULL); 518 + case V4L2_EVENT_CTRL: 519 + return v4l2_ctrl_subscribe_event(fh, sub); 520 + default: 521 + return -EINVAL; 522 + } 523 + } 524 + 510 525 static const struct v4l2_ioctl_ops venc_ioctl_ops = { 511 526 .vidioc_querycap = venc_querycap, 512 527 .vidioc_enum_fmt_vid_cap = venc_enum_fmt, ··· 545 534 .vidioc_g_parm = venc_g_parm, 546 535 .vidioc_enum_framesizes = venc_enum_framesizes, 547 536 .vidioc_enum_frameintervals = venc_enum_frameintervals, 548 - .vidioc_subscribe_event = v4l2_ctrl_subscribe_event, 537 + .vidioc_subscribe_event = venc_subscribe_event, 549 538 .vidioc_unsubscribe_event = v4l2_event_unsubscribe, 539 + .vidioc_try_encoder_cmd = v4l2_m2m_ioctl_try_encoder_cmd, 550 
540 }; 551 541 552 542 static int venc_pm_get(struct venus_inst *inst) ··· 698 686 return ret; 699 687 } 700 688 701 - if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC) { 689 + if (inst->fmt_cap->pixfmt == V4L2_PIX_FMT_HEVC && 690 + ctr->profile.hevc == V4L2_MPEG_VIDEO_HEVC_PROFILE_MAIN_10) { 702 691 struct hfi_hdr10_pq_sei hdr10; 703 692 unsigned int c; 704 693
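The new venc_subscribe_event() above routes each event type to its own subscription helper and rejects everything else. A self-contained sketch of that dispatch shape (the event constants and helpers here are illustrative, not the V4L2 API):

```c
#include <assert.h>
#include <errno.h>

/* Simplified stand-ins for the two event types the dispatcher accepts. */
#define EVT_EOS  1
#define EVT_CTRL 2

static int subscribe_eos(void)  { return 0; }
static int subscribe_ctrl(void) { return 0; }

/* Known events go to their helper; unknown types get -EINVAL, matching
 * the default: arm of the hunk's switch. */
static int subscribe_event(unsigned int type)
{
	switch (type) {
	case EVT_EOS:
		return subscribe_eos();
	case EVT_CTRL:
		return subscribe_ctrl();
	default:
		return -EINVAL;
	}
}
```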
+33 -5
drivers/media/platform/qcom/venus/venc_ctrls.c
··· 8 8 9 9 #include "core.h" 10 10 #include "venc.h" 11 + #include "helpers.h" 11 12 12 13 #define BITRATE_MIN 32000 13 14 #define BITRATE_MAX 160000000 ··· 337 336 * if we disable 8x8 transform for HP. 338 337 */ 339 338 340 - if (ctrl->val == 0) 341 - return -EINVAL; 342 339 343 340 ctr->h264_8x8_transform = ctrl->val; 344 341 break; ··· 347 348 return 0; 348 349 } 349 350 351 + static int venc_op_g_volatile_ctrl(struct v4l2_ctrl *ctrl) 352 + { 353 + struct venus_inst *inst = ctrl_to_inst(ctrl); 354 + struct hfi_buffer_requirements bufreq; 355 + enum hfi_version ver = inst->core->res->hfi_version; 356 + int ret; 357 + 358 + switch (ctrl->id) { 359 + case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT: 360 + ret = venus_helper_get_bufreq(inst, HFI_BUFFER_INPUT, &bufreq); 361 + if (!ret) 362 + ctrl->val = HFI_BUFREQ_COUNT_MIN(&bufreq, ver); 363 + break; 364 + default: 365 + return -EINVAL; 366 + } 367 + 368 + return 0; 369 + } 370 + 350 371 static const struct v4l2_ctrl_ops venc_ctrl_ops = { 351 372 .s_ctrl = venc_op_s_ctrl, 373 + .g_volatile_ctrl = venc_op_g_volatile_ctrl, 352 374 }; 353 375 354 376 int venc_ctrl_init(struct venus_inst *inst) 355 377 { 356 378 int ret; 379 + struct v4l2_ctrl_hdr10_mastering_display p_hdr10_mastering = { 380 + { 34000, 13250, 7500 }, 381 + { 16000, 34500, 3000 }, 15635, 16450, 10000000, 500, 382 + }; 383 + struct v4l2_ctrl_hdr10_cll_info p_hdr10_cll = { 1000, 400 }; 357 384 358 - ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 58); 385 + ret = v4l2_ctrl_handler_init(&inst->ctrl_handler, 59); 359 386 if (ret) 360 387 return ret; 361 388 ··· 460 435 V4L2_CID_MPEG_VIDEO_VP8_PROFILE, 461 436 V4L2_MPEG_VIDEO_VP8_PROFILE_3, 462 437 0, V4L2_MPEG_VIDEO_VP8_PROFILE_0); 438 + 439 + v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, 440 + V4L2_CID_MIN_BUFFERS_FOR_OUTPUT, 4, 11, 1, 4); 463 441 464 442 v4l2_ctrl_new_std(&inst->ctrl_handler, &venc_ctrl_ops, 465 443 V4L2_CID_MPEG_VIDEO_BITRATE, BITRATE_MIN, BITRATE_MAX, ··· 607 579 608 580 
v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops, 609 581 V4L2_CID_COLORIMETRY_HDR10_CLL_INFO, 610 - v4l2_ctrl_ptr_create(NULL)); 582 + v4l2_ctrl_ptr_create(&p_hdr10_cll)); 611 583 612 584 v4l2_ctrl_new_std_compound(&inst->ctrl_handler, &venc_ctrl_ops, 613 585 V4L2_CID_COLORIMETRY_HDR10_MASTERING_DISPLAY, 614 - v4l2_ctrl_ptr_create(NULL)); 586 + v4l2_ctrl_ptr_create((void *)&p_hdr10_mastering)); 615 587 616 588 v4l2_ctrl_new_std_menu(&inst->ctrl_handler, &venc_ctrl_ops, 617 589 V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD_TYPE,
+2 -3
drivers/media/platform/renesas/rcar-vin/rcar-core.c
··· 786 786 return 0; 787 787 788 788 /* 789 - * Don't allow link changes if any entity in the graph is 790 - * streaming, modifying the CHSEL register fields can disrupt 791 - * running streams. 789 + * Don't allow link changes if any stream in the graph is active as 790 + * modifying the CHSEL register fields can disrupt running streams. 792 791 */ 793 792 media_device_for_each_entity(entity, &group->mdev) 794 793 if (media_entity_is_streaming(entity))
+3 -15
drivers/media/platform/renesas/rcar-vin/rcar-dma.c
··· 1244 1244 1245 1245 static int rvin_set_stream(struct rvin_dev *vin, int on) 1246 1246 { 1247 - struct media_pipeline *pipe; 1248 - struct media_device *mdev; 1249 1247 struct v4l2_subdev *sd; 1250 1248 struct media_pad *pad; 1251 1249 int ret; ··· 1263 1265 sd = media_entity_to_v4l2_subdev(pad->entity); 1264 1266 1265 1267 if (!on) { 1266 - media_pipeline_stop(&vin->vdev.entity); 1268 + video_device_pipeline_stop(&vin->vdev); 1267 1269 return v4l2_subdev_call(sd, video, s_stream, 0); 1268 1270 } 1269 1271 ··· 1271 1273 if (ret) 1272 1274 return ret; 1273 1275 1274 - /* 1275 - * The graph lock needs to be taken to protect concurrent 1276 - * starts of multiple VIN instances as they might share 1277 - * a common subdevice down the line and then should use 1278 - * the same pipe. 1279 - */ 1280 - mdev = vin->vdev.entity.graph_obj.mdev; 1281 - mutex_lock(&mdev->graph_mutex); 1282 - pipe = sd->entity.pipe ? sd->entity.pipe : &vin->vdev.pipe; 1283 - ret = __media_pipeline_start(&vin->vdev.entity, pipe); 1284 - mutex_unlock(&mdev->graph_mutex); 1276 + ret = video_device_pipeline_alloc_start(&vin->vdev); 1285 1277 if (ret) 1286 1278 return ret; 1287 1279 ··· 1279 1291 if (ret == -ENOIOCTLCMD) 1280 1292 ret = 0; 1281 1293 if (ret) 1282 - media_pipeline_stop(&vin->vdev.entity); 1294 + video_device_pipeline_stop(&vin->vdev); 1283 1295 1284 1296 return ret; 1285 1297 }
+3 -3
drivers/media/platform/renesas/vsp1/vsp1_video.c
··· 927 927 } 928 928 mutex_unlock(&pipe->lock); 929 929 930 - media_pipeline_stop(&video->video.entity); 930 + video_device_pipeline_stop(&video->video); 931 931 vsp1_video_release_buffers(video); 932 932 vsp1_video_pipeline_put(pipe); 933 933 } ··· 1046 1046 return PTR_ERR(pipe); 1047 1047 } 1048 1048 1049 - ret = __media_pipeline_start(&video->video.entity, &pipe->pipe); 1049 + ret = __video_device_pipeline_start(&video->video, &pipe->pipe); 1050 1050 if (ret < 0) { 1051 1051 mutex_unlock(&mdev->graph_mutex); 1052 1052 goto err_pipe; ··· 1070 1070 return 0; 1071 1071 1072 1072 err_stop: 1073 - media_pipeline_stop(&video->video.entity); 1073 + video_device_pipeline_stop(&video->video); 1074 1074 err_pipe: 1075 1075 vsp1_video_pipeline_put(pipe); 1076 1076 return ret;
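Several hunks in this series (omap3isp, rcar-vin, vsp1, rkisp1) replace `media_pipeline_stop(&vdev->entity)` calls with `video_device_pipeline_stop(vdev)` and friends. The idea is a thin video-device-level wrapper that hides the `&vdev->entity` access from every driver. A toy model of that wrapper pattern, with stub names that are not the real media-controller API:

```c
#include <assert.h>

/* Illustrative stand-ins for the media entity and video device. */
struct entity { int started; };
struct video_device { struct entity entity; };

/* Entity-level primitive (stub). */
static int media_pipeline_start_stub(struct entity *e)
{
	e->started = 1;
	return 0;
}

/* Video-device helper: just forwards to the entity-level call, so
 * callers no longer reach into vdev->entity themselves. */
static int video_device_pipeline_start_stub(struct video_device *vdev)
{
	return media_pipeline_start_stub(&vdev->entity);
}
```

Centralizing the entity access is what allows the pipeline bookkeeping to later move without touching every caller.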
+11 -10
drivers/media/platform/rockchip/rkisp1/rkisp1-capture.c
··· 913 913 * 914 914 * Call s_stream(false) in the reverse order from 915 915 * rkisp1_pipeline_stream_enable() and disable the DMA engine. 916 - * Should be called before media_pipeline_stop() 916 + * Should be called before video_device_pipeline_stop() 917 917 */ 918 918 static void rkisp1_pipeline_stream_disable(struct rkisp1_capture *cap) 919 919 __must_hold(&cap->rkisp1->stream_lock) ··· 926 926 * If the other capture is streaming, isp and sensor nodes shouldn't 927 927 * be disabled, skip them. 928 928 */ 929 - if (rkisp1->pipe.streaming_count < 2) 929 + if (rkisp1->pipe.start_count < 2) 930 930 v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, false); 931 931 932 932 v4l2_subdev_call(&rkisp1->resizer_devs[cap->id].sd, video, s_stream, ··· 937 937 * rkisp1_pipeline_stream_enable - enable nodes in the pipeline 938 938 * 939 939 * Enable the DMA Engine and call s_stream(true) through the pipeline. 940 - * Should be called after media_pipeline_start() 940 + * Should be called after video_device_pipeline_start() 941 941 */ 942 942 static int rkisp1_pipeline_stream_enable(struct rkisp1_capture *cap) 943 943 __must_hold(&cap->rkisp1->stream_lock) ··· 956 956 * If the other capture is streaming, isp and sensor nodes are already 957 957 * enabled, skip them. 
958 958 */ 959 - if (rkisp1->pipe.streaming_count > 1) 959 + if (rkisp1->pipe.start_count > 1) 960 960 return 0; 961 961 962 962 ret = v4l2_subdev_call(&rkisp1->isp.sd, video, s_stream, true); ··· 994 994 995 995 rkisp1_dummy_buf_destroy(cap); 996 996 997 - media_pipeline_stop(&node->vdev.entity); 997 + video_device_pipeline_stop(&node->vdev); 998 998 999 999 mutex_unlock(&cap->rkisp1->stream_lock); 1000 1000 } ··· 1008 1008 1009 1009 mutex_lock(&cap->rkisp1->stream_lock); 1010 1010 1011 - ret = media_pipeline_start(entity, &cap->rkisp1->pipe); 1011 + ret = video_device_pipeline_start(&cap->vnode.vdev, &cap->rkisp1->pipe); 1012 1012 if (ret) { 1013 1013 dev_err(cap->rkisp1->dev, "start pipeline failed %d\n", ret); 1014 1014 goto err_ret_buffers; ··· 1044 1044 err_destroy_dummy: 1045 1045 rkisp1_dummy_buf_destroy(cap); 1046 1046 err_pipeline_stop: 1047 - media_pipeline_stop(entity); 1047 + video_device_pipeline_stop(&cap->vnode.vdev); 1048 1048 err_ret_buffers: 1049 1049 rkisp1_return_all_buffers(cap, VB2_BUF_STATE_QUEUED); 1050 1050 mutex_unlock(&cap->rkisp1->stream_lock); ··· 1273 1273 struct rkisp1_capture *cap = video_get_drvdata(vdev); 1274 1274 const struct rkisp1_capture_fmt_cfg *fmt = 1275 1275 rkisp1_find_fmt_cfg(cap, cap->pix.fmt.pixelformat); 1276 - struct v4l2_subdev_format sd_fmt; 1276 + struct v4l2_subdev_format sd_fmt = { 1277 + .which = V4L2_SUBDEV_FORMAT_ACTIVE, 1278 + .pad = link->source->index, 1279 + }; 1277 1280 int ret; 1278 1281 1279 - sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE; 1280 - sd_fmt.pad = link->source->index; 1281 1282 ret = v4l2_subdev_call(sd, pad, get_fmt, NULL, &sd_fmt); 1282 1283 if (ret) 1283 1284 return ret;
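The last rkisp1-capture.c hunk above converts `sd_fmt` from piecewise field assignment to a designated initializer. The practical difference: with a designated initializer, every member not named is implicitly zero-initialized, whereas assigning two fields of an uninitialized stack struct leaves the rest indeterminate. A minimal demonstration (the struct is illustrative, not the real `v4l2_subdev_format`):

```c
#include <assert.h>

struct fmt {
	int which;
	int pad;
	int width;	/* deliberately not named in the initializer */
};

static struct fmt make_fmt(int pad)
{
	/* Designated initializer: width (and any other unnamed member)
	 * is guaranteed to be zero, per C99 6.7.8p21. */
	struct fmt f = {
		.which = 1,
		.pad = pad,
	};
	return f;
}
```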
+23 -7
drivers/media/platform/rockchip/rkisp1/rkisp1-common.h
··· 378 378 struct v4l2_format vdev_fmt; 379 379 380 380 enum v4l2_quantization quantization; 381 + enum v4l2_ycbcr_encoding ycbcr_encoding; 381 382 enum rkisp1_fmt_raw_pat_type raw_type; 382 383 }; 383 384 ··· 557 556 */ 558 557 const struct rkisp1_mbus_info *rkisp1_mbus_info_get_by_code(u32 mbus_code); 559 558 560 - /* rkisp1_params_configure - configure the params when stream starts. 561 - * This function is called by the isp entity upon stream starts. 562 - * The function applies the initial configuration of the parameters. 559 + /* 560 + * rkisp1_params_pre_configure - Configure the params before stream start 563 561 * 564 - * @params: pointer to rkisp1_params. 562 + * @params: pointer to rkisp1_params 565 563 * @bayer_pat: the bayer pattern on the isp video sink pad 566 564 * @quantization: the quantization configured on the isp's src pad 565 + * @ycbcr_encoding: the ycbcr_encoding configured on the isp's src pad 566 + * 567 + * This function is called by the ISP entity just before the ISP gets started. 568 + * It applies the initial ISP parameters from the first params buffer, but 569 + * skips LSC as it needs to be configured after the ISP is started. 567 570 */ 568 - void rkisp1_params_configure(struct rkisp1_params *params, 569 - enum rkisp1_fmt_raw_pat_type bayer_pat, 570 - enum v4l2_quantization quantization); 571 + void rkisp1_params_pre_configure(struct rkisp1_params *params, 572 + enum rkisp1_fmt_raw_pat_type bayer_pat, 573 + enum v4l2_quantization quantization, 574 + enum v4l2_ycbcr_encoding ycbcr_encoding); 575 + 576 + /* 577 + * rkisp1_params_post_configure - Configure the params after stream start 578 + * 579 + * @params: pointer to rkisp1_params 580 + * 581 + * This function is called by the ISP entity just after the ISP gets started. 582 + * It applies the initial ISP LSC parameters from the first params buffer. 
583 + */ 584 + void rkisp1_params_post_configure(struct rkisp1_params *params); 571 585 572 586 /* rkisp1_params_disable - disable all parameters. 573 587 * This function is called by the isp entity upon stream start
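The rkisp1-common.h hunk above splits rkisp1_params_configure() into a pre-start and a post-start phase, because LSC can only be programmed once the ISP is running. A generic sketch of that two-phase configuration pattern (all names and state flags here are illustrative):

```c
#include <assert.h>

/* Toy device: tracks what has been applied and whether it is running. */
struct dev { int running; int params_applied; int lsc_applied; };

/* Phase 1: safe before start -- plain register programming. */
static void params_pre_configure(struct dev *d)
{
	d->params_applied = 1;
}

static void device_start(struct dev *d)
{
	d->running = 1;
}

/* Phase 2: deferred work that requires the device to be running. */
static void params_post_configure(struct dev *d)
{
	if (d->running)
		d->lsc_applied = 1;
}
```

Splitting the API this way makes the ordering constraint explicit in the interface instead of burying it in one monolithic configure call.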
+127 -17
drivers/media/platform/rockchip/rkisp1/rkisp1-isp.c
··· 231 231 struct v4l2_mbus_framefmt *src_frm; 232 232 233 233 src_frm = rkisp1_isp_get_pad_fmt(isp, NULL, 234 - RKISP1_ISP_PAD_SINK_VIDEO, 234 + RKISP1_ISP_PAD_SOURCE_VIDEO, 235 235 V4L2_SUBDEV_FORMAT_ACTIVE); 236 - rkisp1_params_configure(&rkisp1->params, sink_fmt->bayer_pat, 237 - src_frm->quantization); 236 + rkisp1_params_pre_configure(&rkisp1->params, sink_fmt->bayer_pat, 237 + src_frm->quantization, 238 + src_frm->ycbcr_enc); 238 239 } 239 240 240 241 return 0; ··· 341 340 RKISP1_CIF_ISP_CTRL_ISP_ENABLE | 342 341 RKISP1_CIF_ISP_CTRL_ISP_INFORM_ENABLE; 343 342 rkisp1_write(rkisp1, RKISP1_CIF_ISP_CTRL, val); 343 + 344 + if (isp->src_fmt->pixel_enc != V4L2_PIXEL_ENC_BAYER) 345 + rkisp1_params_post_configure(&rkisp1->params); 344 346 } 345 347 346 348 /* ---------------------------------------------------------------------------- ··· 435 431 struct v4l2_mbus_framefmt *sink_fmt, *src_fmt; 436 432 struct v4l2_rect *sink_crop, *src_crop; 437 433 434 + /* Video. */ 438 435 sink_fmt = v4l2_subdev_get_try_format(sd, sd_state, 439 436 RKISP1_ISP_PAD_SINK_VIDEO); 440 437 sink_fmt->width = RKISP1_DEFAULT_WIDTH; 441 438 sink_fmt->height = RKISP1_DEFAULT_HEIGHT; 442 439 sink_fmt->field = V4L2_FIELD_NONE; 443 440 sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT; 441 + sink_fmt->colorspace = V4L2_COLORSPACE_RAW; 442 + sink_fmt->xfer_func = V4L2_XFER_FUNC_NONE; 443 + sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601; 444 + sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE; 444 445 445 446 sink_crop = v4l2_subdev_get_try_crop(sd, sd_state, 446 447 RKISP1_ISP_PAD_SINK_VIDEO); ··· 458 449 RKISP1_ISP_PAD_SOURCE_VIDEO); 459 450 *src_fmt = *sink_fmt; 460 451 src_fmt->code = RKISP1_DEF_SRC_PAD_FMT; 452 + src_fmt->colorspace = V4L2_COLORSPACE_SRGB; 453 + src_fmt->xfer_func = V4L2_XFER_FUNC_SRGB; 454 + src_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601; 455 + src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE; 461 456 462 457 src_crop = v4l2_subdev_get_try_crop(sd, sd_state, 463 458 
RKISP1_ISP_PAD_SOURCE_VIDEO); 464 459 *src_crop = *sink_crop; 465 460 461 + /* Parameters and statistics. */ 466 462 sink_fmt = v4l2_subdev_get_try_format(sd, sd_state, 467 463 RKISP1_ISP_PAD_SINK_PARAMS); 468 464 src_fmt = v4l2_subdev_get_try_format(sd, sd_state, ··· 486 472 struct v4l2_mbus_framefmt *format, 487 473 unsigned int which) 488 474 { 489 - const struct rkisp1_mbus_info *mbus_info; 475 + const struct rkisp1_mbus_info *sink_info; 476 + const struct rkisp1_mbus_info *src_info; 477 + struct v4l2_mbus_framefmt *sink_fmt; 490 478 struct v4l2_mbus_framefmt *src_fmt; 491 479 const struct v4l2_rect *src_crop; 480 + bool set_csc; 492 481 482 + sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state, 483 + RKISP1_ISP_PAD_SINK_VIDEO, which); 493 484 src_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state, 494 485 RKISP1_ISP_PAD_SOURCE_VIDEO, which); 495 486 src_crop = rkisp1_isp_get_pad_crop(isp, sd_state, 496 487 RKISP1_ISP_PAD_SOURCE_VIDEO, which); 497 488 489 + /* 490 + * Media bus code. The ISP can operate in pass-through mode (Bayer in, 491 + * Bayer out or YUV in, YUV out) or process Bayer data to YUV, but 492 + * can't convert from YUV to Bayer. 
493 + */ 494 + sink_info = rkisp1_mbus_info_get_by_code(sink_fmt->code); 495 + 498 496 src_fmt->code = format->code; 499 - mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code); 500 - if (!mbus_info || !(mbus_info->direction & RKISP1_ISP_SD_SRC)) { 497 + src_info = rkisp1_mbus_info_get_by_code(src_fmt->code); 498 + if (!src_info || !(src_info->direction & RKISP1_ISP_SD_SRC)) { 501 499 src_fmt->code = RKISP1_DEF_SRC_PAD_FMT; 502 - mbus_info = rkisp1_mbus_info_get_by_code(src_fmt->code); 500 + src_info = rkisp1_mbus_info_get_by_code(src_fmt->code); 503 501 } 504 - if (which == V4L2_SUBDEV_FORMAT_ACTIVE) 505 - isp->src_fmt = mbus_info; 502 + 503 + if (sink_info->pixel_enc == V4L2_PIXEL_ENC_YUV && 504 + src_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) { 505 + src_fmt->code = sink_fmt->code; 506 + src_info = sink_info; 507 + } 508 + 509 + /* 510 + * The source width and height must be identical to the source crop 511 + * size. 512 + */ 506 513 src_fmt->width = src_crop->width; 507 514 src_fmt->height = src_crop->height; 508 515 509 516 /* 510 - * The CSC API is used to allow userspace to force full 511 - * quantization on YUV formats. 517 + * Copy the color space for the sink pad. When converting from Bayer to 518 + * YUV, default to a limited quantization range. 
512 519 */ 513 - if (format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC && 514 - format->quantization == V4L2_QUANTIZATION_FULL_RANGE && 515 - mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV) 516 - src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE; 517 - else if (mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV) 520 + src_fmt->colorspace = sink_fmt->colorspace; 521 + src_fmt->xfer_func = sink_fmt->xfer_func; 522 + src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc; 523 + 524 + if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER && 525 + src_info->pixel_enc == V4L2_PIXEL_ENC_YUV) 518 526 src_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE; 519 527 else 520 - src_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE; 528 + src_fmt->quantization = sink_fmt->quantization; 529 + 530 + /* 531 + * Allow setting the source color space fields when the SET_CSC flag is 532 + * set and the source format is YUV. If the sink format is YUV, don't 533 + * set the color primaries, transfer function or YCbCr encoding as the 534 + * ISP is bypassed in that case and passes YUV data through without 535 + * modifications. 536 + * 537 + * The color primaries and transfer function are configured through the 538 + * cross-talk matrix and tone curve respectively. Settings for those 539 + * hardware blocks are conveyed through the ISP parameters buffer, as 540 + * they need to combine color space information with other image tuning 541 + * characteristics and can't thus be computed by the kernel based on the 542 + * color space. The source pad colorspace and xfer_func fields are thus 543 + * ignored by the driver, but can be set by userspace to propagate 544 + * accurate color space information down the pipeline. 
545 + */ 546 + set_csc = format->flags & V4L2_MBUS_FRAMEFMT_SET_CSC; 547 + 548 + if (set_csc && src_info->pixel_enc == V4L2_PIXEL_ENC_YUV) { 549 + if (sink_info->pixel_enc == V4L2_PIXEL_ENC_BAYER) { 550 + if (format->colorspace != V4L2_COLORSPACE_DEFAULT) 551 + src_fmt->colorspace = format->colorspace; 552 + if (format->xfer_func != V4L2_XFER_FUNC_DEFAULT) 553 + src_fmt->xfer_func = format->xfer_func; 554 + if (format->ycbcr_enc != V4L2_YCBCR_ENC_DEFAULT) 555 + src_fmt->ycbcr_enc = format->ycbcr_enc; 556 + } 557 + 558 + if (format->quantization != V4L2_QUANTIZATION_DEFAULT) 559 + src_fmt->quantization = format->quantization; 560 + } 521 561 522 562 *format = *src_fmt; 563 + 564 + /* 565 + * Restore the SET_CSC flag if it was set to indicate support for the 566 + * CSC setting API. 567 + */ 568 + if (set_csc) 569 + format->flags |= V4L2_MBUS_FRAMEFMT_SET_CSC; 570 + 571 + /* Store the source format info when setting the active format. */ 572 + if (which == V4L2_SUBDEV_FORMAT_ACTIVE) 573 + isp->src_fmt = src_info; 523 574 } 524 575 525 576 static void rkisp1_isp_set_src_crop(struct rkisp1_isp *isp, ··· 652 573 const struct rkisp1_mbus_info *mbus_info; 653 574 struct v4l2_mbus_framefmt *sink_fmt; 654 575 struct v4l2_rect *sink_crop; 576 + bool is_yuv; 655 577 656 578 sink_fmt = rkisp1_isp_get_pad_fmt(isp, sd_state, 657 579 RKISP1_ISP_PAD_SINK_VIDEO, ··· 672 592 sink_fmt->height = clamp_t(u32, format->height, 673 593 RKISP1_ISP_MIN_HEIGHT, 674 594 RKISP1_ISP_MAX_HEIGHT); 595 + 596 + /* 597 + * Adjust the color space fields. Accept any color primaries and 598 + * transfer function for both YUV and Bayer. For YUV any YCbCr encoding 599 + * and quantization range is also accepted. For Bayer formats, the YCbCr 600 + * encoding isn't applicable, and the quantization range can only be 601 + * full. 602 + */ 603 + is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV; 604 + 605 + sink_fmt->colorspace = format->colorspace ? : 606 + (is_yuv ? 
V4L2_COLORSPACE_SRGB : 607 + V4L2_COLORSPACE_RAW); 608 + sink_fmt->xfer_func = format->xfer_func ? : 609 + V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace); 610 + if (is_yuv) { 611 + sink_fmt->ycbcr_enc = format->ycbcr_enc ? : 612 + V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace); 613 + sink_fmt->quantization = format->quantization ? : 614 + V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace, 615 + sink_fmt->ycbcr_enc); 616 + } else { 617 + /* 618 + * The YCbCr encoding isn't applicable for non-YUV formats, but 619 + * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it 620 + * should be ignored by userspace. 621 + */ 622 + sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601; 623 + sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE; 624 + } 675 625 676 626 *format = *sink_fmt; 677 627
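The color-space defaulting added above uses GNU C's binary conditional, `a ? : b`, which evaluates to `a` when `a` is non-zero and to `b` otherwise (with `a` evaluated only once). The kernel relies on this extension; in ISO C it is spelled out explicitly, as in this equivalent helper:

```c
#include <assert.h>

/* ISO-C equivalent of the GNU `value ? : fallback` idiom used in the
 * rkisp1 hunk: keep value if it is non-zero, otherwise use fallback. */
static int value_or_default(int value, int fallback)
{
	return value ? value : fallback;
}
```

In the hunk, this is what maps a zero (i.e. `V4L2_COLORSPACE_DEFAULT`-style) field from userspace to a driver-chosen default.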
+318 -215
drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
··· 18 18 #define RKISP1_ISP_PARAMS_REQ_BUFS_MIN 2 19 19 #define RKISP1_ISP_PARAMS_REQ_BUFS_MAX 8 20 20 21 + #define RKISP1_ISP_DPCC_METHODS_SET(n) \ 22 + (RKISP1_CIF_ISP_DPCC_METHODS_SET_1 + 0x4 * (n)) 21 23 #define RKISP1_ISP_DPCC_LINE_THRESH(n) \ 22 24 (RKISP1_CIF_ISP_DPCC_LINE_THRESH_1 + 0x14 * (n)) 23 25 #define RKISP1_ISP_DPCC_LINE_MAD_FAC(n) \ ··· 58 56 unsigned int i; 59 57 u32 mode; 60 58 61 - /* avoid to override the old enable value */ 59 + /* 60 + * The enable bit is controlled in rkisp1_isp_isr_other_config() and 61 + * must be preserved. The grayscale mode should be configured 62 + * automatically based on the media bus code on the ISP sink pad, so 63 + * only the STAGE1_ENABLE bit can be set by userspace. 64 + */ 62 65 mode = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE); 63 - mode &= RKISP1_CIF_ISP_DPCC_ENA; 64 - mode |= arg->mode & ~RKISP1_CIF_ISP_DPCC_ENA; 66 + mode &= RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE; 67 + mode |= arg->mode & RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE; 65 68 rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_MODE, mode); 66 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_OUTPUT_MODE, 67 - arg->output_mode); 68 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_SET_USE, 69 - arg->set_use); 70 69 71 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_1, 72 - arg->methods[0].method); 73 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_2, 74 - arg->methods[1].method); 75 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_METHODS_SET_3, 76 - arg->methods[2].method); 70 + rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_OUTPUT_MODE, 71 + arg->output_mode & RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK); 72 + rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_SET_USE, 73 + arg->set_use & RKISP1_CIF_ISP_DPCC_SET_USE_MASK); 74 + 77 75 for (i = 0; i < RKISP1_CIF_ISP_DPCC_METHODS_MAX; i++) { 76 + rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_METHODS_SET(i), 77 + arg->methods[i].method & 78 + 
RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK); 78 79 rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_THRESH(i), 79 - arg->methods[i].line_thresh); 80 + arg->methods[i].line_thresh & 81 + RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK); 80 82 rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_LINE_MAD_FAC(i), 81 - arg->methods[i].line_mad_fac); 83 + arg->methods[i].line_mad_fac & 84 + RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK); 82 85 rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_PG_FAC(i), 83 - arg->methods[i].pg_fac); 86 + arg->methods[i].pg_fac & 87 + RKISP1_CIF_ISP_DPCC_PG_FAC_MASK); 84 88 rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RND_THRESH(i), 85 - arg->methods[i].rnd_thresh); 89 + arg->methods[i].rnd_thresh & 90 + RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK); 86 91 rkisp1_write(params->rkisp1, RKISP1_ISP_DPCC_RG_FAC(i), 87 - arg->methods[i].rg_fac); 92 + arg->methods[i].rg_fac & 93 + RKISP1_CIF_ISP_DPCC_RG_FAC_MASK); 88 94 } 89 95 90 96 rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RND_OFFS, 91 - arg->rnd_offs); 97 + arg->rnd_offs & RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK); 92 98 rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_DPCC_RO_LIMITS, 93 - arg->ro_limits); 99 + arg->ro_limits & RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK); 94 100 } 95 101 96 102 /* ISP black level subtraction interface function */ ··· 198 188 rkisp1_lsc_matrix_config_v10(struct rkisp1_params *params, 199 189 const struct rkisp1_cif_isp_lsc_config *pconfig) 200 190 { 201 - unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data; 191 + struct rkisp1_device *rkisp1 = params->rkisp1; 192 + u32 lsc_status, sram_addr, lsc_table_sel; 193 + unsigned int i, j; 202 194 203 - isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS); 195 + lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS); 204 196 205 197 /* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */ 206 - sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ? 198 + sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ? 
207 199 RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 : 208 200 RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153; 209 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr); 210 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr); 211 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr); 212 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr); 201 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr); 202 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr); 203 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr); 204 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr); 213 205 214 206 /* program data tables (table size is 9 * 17 = 153) */ 215 207 for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) { 208 + const __u16 *r_tbl = pconfig->r_data_tbl[i]; 209 + const __u16 *gr_tbl = pconfig->gr_data_tbl[i]; 210 + const __u16 *gb_tbl = pconfig->gb_data_tbl[i]; 211 + const __u16 *b_tbl = pconfig->b_data_tbl[i]; 212 + 216 213 /* 217 214 * 17 sectors with 2 values in one DWORD = 9 218 215 * DWORDs (2nd value of last DWORD unused) 219 216 */ 220 217 for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) { 221 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j], 222 - pconfig->r_data_tbl[i][j + 1]); 223 - rkisp1_write(params->rkisp1, 224 - RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data); 225 - 226 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j], 227 - pconfig->gr_data_tbl[i][j + 1]); 228 - rkisp1_write(params->rkisp1, 229 - RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data); 230 - 231 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j], 232 - pconfig->gb_data_tbl[i][j + 1]); 233 - rkisp1_write(params->rkisp1, 234 - RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data); 235 - 236 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j], 237 - pconfig->b_data_tbl[i][j + 1]); 238 - rkisp1_write(params->rkisp1, 239 
- RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data); 218 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 219 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10( 220 + r_tbl[j], r_tbl[j + 1])); 221 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 222 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10( 223 + gr_tbl[j], gr_tbl[j + 1])); 224 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 225 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10( 226 + gb_tbl[j], gb_tbl[j + 1])); 227 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 228 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10( 229 + b_tbl[j], b_tbl[j + 1])); 240 230 } 241 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->r_data_tbl[i][j], 0); 242 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 243 - data); 244 231 245 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gr_data_tbl[i][j], 0); 246 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 247 - data); 248 - 249 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->gb_data_tbl[i][j], 0); 250 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 251 - data); 252 - 253 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(pconfig->b_data_tbl[i][j], 0); 254 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 255 - data); 232 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 233 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(r_tbl[j], 0)); 234 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 235 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gr_tbl[j], 0)); 236 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 237 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(gb_tbl[j], 0)); 238 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 239 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V10(b_tbl[j], 0)); 256 240 } 257 - isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ? 
258 - RKISP1_CIF_ISP_LSC_TABLE_0 : 259 - RKISP1_CIF_ISP_LSC_TABLE_1; 260 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, 261 - isp_lsc_table_sel); 241 + 242 + lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ? 243 + RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1; 244 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel); 262 245 } 263 246 264 247 static void 265 248 rkisp1_lsc_matrix_config_v12(struct rkisp1_params *params, 266 249 const struct rkisp1_cif_isp_lsc_config *pconfig) 267 250 { 268 - unsigned int isp_lsc_status, sram_addr, isp_lsc_table_sel, i, j, data; 251 + struct rkisp1_device *rkisp1 = params->rkisp1; 252 + u32 lsc_status, sram_addr, lsc_table_sel; 253 + unsigned int i, j; 269 254 270 - isp_lsc_status = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_STATUS); 255 + lsc_status = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_STATUS); 271 256 272 257 /* RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153 = ( 17 * 18 ) >> 1 */ 273 - sram_addr = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ? 274 - RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 : 275 - RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153; 276 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr); 277 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr); 278 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr); 279 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr); 258 + sram_addr = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ? 
259 + RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_0 : 260 + RKISP1_CIF_ISP_LSC_TABLE_ADDRESS_153; 261 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_ADDR, sram_addr); 262 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_ADDR, sram_addr); 263 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_ADDR, sram_addr); 264 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_ADDR, sram_addr); 280 265 281 266 /* program data tables (table size is 9 * 17 = 153) */ 282 267 for (i = 0; i < RKISP1_CIF_ISP_LSC_SAMPLES_MAX; i++) { 268 + const __u16 *r_tbl = pconfig->r_data_tbl[i]; 269 + const __u16 *gr_tbl = pconfig->gr_data_tbl[i]; 270 + const __u16 *gb_tbl = pconfig->gb_data_tbl[i]; 271 + const __u16 *b_tbl = pconfig->b_data_tbl[i]; 272 + 283 273 /* 284 274 * 17 sectors with 2 values in one DWORD = 9 285 275 * DWORDs (2nd value of last DWORD unused) 286 276 */ 287 277 for (j = 0; j < RKISP1_CIF_ISP_LSC_SAMPLES_MAX - 1; j += 2) { 288 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 289 - pconfig->r_data_tbl[i][j], 290 - pconfig->r_data_tbl[i][j + 1]); 291 - rkisp1_write(params->rkisp1, 292 - RKISP1_CIF_ISP_LSC_R_TABLE_DATA, data); 293 - 294 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 295 - pconfig->gr_data_tbl[i][j], 296 - pconfig->gr_data_tbl[i][j + 1]); 297 - rkisp1_write(params->rkisp1, 298 - RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, data); 299 - 300 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 301 - pconfig->gb_data_tbl[i][j], 302 - pconfig->gb_data_tbl[i][j + 1]); 303 - rkisp1_write(params->rkisp1, 304 - RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, data); 305 - 306 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 307 - pconfig->b_data_tbl[i][j], 308 - pconfig->b_data_tbl[i][j + 1]); 309 - rkisp1_write(params->rkisp1, 310 - RKISP1_CIF_ISP_LSC_B_TABLE_DATA, data); 278 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 279 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 280 + r_tbl[j], r_tbl[j + 1])); 281 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 282 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 283 + gr_tbl[j], 
gr_tbl[j + 1])); 284 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 285 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 286 + gb_tbl[j], gb_tbl[j + 1])); 287 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 288 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12( 289 + b_tbl[j], b_tbl[j + 1])); 311 290 } 312 291 313 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->r_data_tbl[i][j], 0); 314 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 315 - data); 316 - 317 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gr_data_tbl[i][j], 0); 318 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 319 - data); 320 - 321 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->gb_data_tbl[i][j], 0); 322 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 323 - data); 324 - 325 - data = RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(pconfig->b_data_tbl[i][j], 0); 326 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 327 - data); 292 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_R_TABLE_DATA, 293 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(r_tbl[j], 0)); 294 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GR_TABLE_DATA, 295 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gr_tbl[j], 0)); 296 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_GB_TABLE_DATA, 297 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(gb_tbl[j], 0)); 298 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_B_TABLE_DATA, 299 + RKISP1_CIF_ISP_LSC_TABLE_DATA_V12(b_tbl[j], 0)); 328 300 } 329 - isp_lsc_table_sel = (isp_lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE) ? 330 - RKISP1_CIF_ISP_LSC_TABLE_0 : 331 - RKISP1_CIF_ISP_LSC_TABLE_1; 332 - rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, 333 - isp_lsc_table_sel); 301 + 302 + lsc_table_sel = lsc_status & RKISP1_CIF_ISP_LSC_ACTIVE_TABLE ? 
303 + RKISP1_CIF_ISP_LSC_TABLE_0 : RKISP1_CIF_ISP_LSC_TABLE_1; 304 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_TABLE_SEL, lsc_table_sel); 334 305 } 335 306 336 307 static void rkisp1_lsc_config(struct rkisp1_params *params, 337 308 const struct rkisp1_cif_isp_lsc_config *arg) 338 309 { 339 - unsigned int i, data; 340 - u32 lsc_ctrl; 310 + struct rkisp1_device *rkisp1 = params->rkisp1; 311 + u32 lsc_ctrl, data; 312 + unsigned int i; 341 313 342 314 /* To config must be off , store the current status firstly */ 343 - lsc_ctrl = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_LSC_CTRL); 315 + lsc_ctrl = rkisp1_read(rkisp1, RKISP1_CIF_ISP_LSC_CTRL); 344 316 rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL, 345 317 RKISP1_CIF_ISP_LSC_CTRL_ENA); 346 318 params->ops->lsc_matrix_config(params, arg); ··· 331 339 /* program x size tables */ 332 340 data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_size_tbl[i * 2], 333 341 arg->x_size_tbl[i * 2 + 1]); 334 - rkisp1_write(params->rkisp1, 335 - RKISP1_CIF_ISP_LSC_XSIZE_01 + i * 4, data); 342 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XSIZE(i), data); 336 343 337 344 /* program x grad tables */ 338 - data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->x_grad_tbl[i * 2], 345 + data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->x_grad_tbl[i * 2], 339 346 arg->x_grad_tbl[i * 2 + 1]); 340 - rkisp1_write(params->rkisp1, 341 - RKISP1_CIF_ISP_LSC_XGRAD_01 + i * 4, data); 347 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_XGRAD(i), data); 342 348 343 349 /* program y size tables */ 344 350 data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_size_tbl[i * 2], 345 351 arg->y_size_tbl[i * 2 + 1]); 346 - rkisp1_write(params->rkisp1, 347 - RKISP1_CIF_ISP_LSC_YSIZE_01 + i * 4, data); 352 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YSIZE(i), data); 348 353 349 354 /* program y grad tables */ 350 - data = RKISP1_CIF_ISP_LSC_SECT_SIZE(arg->y_grad_tbl[i * 2], 355 + data = RKISP1_CIF_ISP_LSC_SECT_GRAD(arg->y_grad_tbl[i * 2], 351 356 arg->y_grad_tbl[i * 2 + 1]); 352 - 
rkisp1_write(params->rkisp1, 353 - RKISP1_CIF_ISP_LSC_YGRAD_01 + i * 4, data); 357 + rkisp1_write(rkisp1, RKISP1_CIF_ISP_LSC_YGRAD(i), data); 354 358 } 355 359 356 360 /* restore the lsc ctrl status */ 357 - if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA) { 358 - rkisp1_param_set_bits(params, 359 - RKISP1_CIF_ISP_LSC_CTRL, 361 + if (lsc_ctrl & RKISP1_CIF_ISP_LSC_CTRL_ENA) 362 + rkisp1_param_set_bits(params, RKISP1_CIF_ISP_LSC_CTRL, 360 363 RKISP1_CIF_ISP_LSC_CTRL_ENA); 361 - } else { 362 - rkisp1_param_clear_bits(params, 363 - RKISP1_CIF_ISP_LSC_CTRL, 364 + else 365 + rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL, 364 366 RKISP1_CIF_ISP_LSC_CTRL_ENA); 365 - } 366 367 } 367 368 368 369 /* ISP Filtering function */ ··· 1051 1066 } 1052 1067 } 1053 1068 1054 - static void rkisp1_csm_config(struct rkisp1_params *params, bool full_range) 1069 + static void rkisp1_csm_config(struct rkisp1_params *params) 1055 1070 { 1056 - static const u16 full_range_coeff[] = { 1057 - 0x0026, 0x004b, 0x000f, 1058 - 0x01ea, 0x01d6, 0x0040, 1059 - 0x0040, 0x01ca, 0x01f6 1071 + struct csm_coeffs { 1072 + u16 limited[9]; 1073 + u16 full[9]; 1060 1074 }; 1061 - static const u16 limited_range_coeff[] = { 1062 - 0x0021, 0x0040, 0x000d, 1063 - 0x01ed, 0x01db, 0x0038, 1064 - 0x0038, 0x01d1, 0x01f7, 1075 + static const struct csm_coeffs rec601_coeffs = { 1076 + .limited = { 1077 + 0x0021, 0x0042, 0x000d, 1078 + 0x01ed, 0x01db, 0x0038, 1079 + 0x0038, 0x01d1, 0x01f7, 1080 + }, 1081 + .full = { 1082 + 0x0026, 0x004b, 0x000f, 1083 + 0x01ea, 0x01d6, 0x0040, 1084 + 0x0040, 0x01ca, 0x01f6, 1085 + }, 1065 1086 }; 1087 + static const struct csm_coeffs rec709_coeffs = { 1088 + .limited = { 1089 + 0x0018, 0x0050, 0x0008, 1090 + 0x01f3, 0x01d5, 0x0038, 1091 + 0x0038, 0x01cd, 0x01fb, 1092 + }, 1093 + .full = { 1094 + 0x001b, 0x005c, 0x0009, 1095 + 0x01f1, 0x01cf, 0x0040, 1096 + 0x0040, 0x01c6, 0x01fa, 1097 + }, 1098 + }; 1099 + static const struct csm_coeffs rec2020_coeffs = { 1100 + .limited = { 1101 + 
0x001d, 0x004c, 0x0007, 1102 + 0x01f0, 0x01d8, 0x0038, 1103 + 0x0038, 0x01cd, 0x01fb, 1104 + }, 1105 + .full = { 1106 + 0x0022, 0x0057, 0x0008, 1107 + 0x01ee, 0x01d2, 0x0040, 1108 + 0x0040, 0x01c5, 0x01fb, 1109 + }, 1110 + }; 1111 + static const struct csm_coeffs smpte240m_coeffs = { 1112 + .limited = { 1113 + 0x0018, 0x004f, 0x000a, 1114 + 0x01f3, 0x01d5, 0x0038, 1115 + 0x0038, 0x01ce, 0x01fa, 1116 + }, 1117 + .full = { 1118 + 0x001b, 0x005a, 0x000b, 1119 + 0x01f1, 0x01cf, 0x0040, 1120 + 0x0040, 0x01c7, 0x01f9, 1121 + }, 1122 + }; 1123 + 1124 + const struct csm_coeffs *coeffs; 1125 + const u16 *csm; 1066 1126 unsigned int i; 1067 1127 1068 - if (full_range) { 1069 - for (i = 0; i < ARRAY_SIZE(full_range_coeff); i++) 1070 - rkisp1_write(params->rkisp1, 1071 - RKISP1_CIF_ISP_CC_COEFF_0 + i * 4, 1072 - full_range_coeff[i]); 1128 + switch (params->ycbcr_encoding) { 1129 + case V4L2_YCBCR_ENC_601: 1130 + default: 1131 + coeffs = &rec601_coeffs; 1132 + break; 1133 + case V4L2_YCBCR_ENC_709: 1134 + coeffs = &rec709_coeffs; 1135 + break; 1136 + case V4L2_YCBCR_ENC_BT2020: 1137 + coeffs = &rec2020_coeffs; 1138 + break; 1139 + case V4L2_YCBCR_ENC_SMPTE240M: 1140 + coeffs = &smpte240m_coeffs; 1141 + break; 1142 + } 1073 1143 1144 + if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE) { 1145 + csm = coeffs->full; 1074 1146 rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, 1075 1147 RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA | 1076 1148 RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA); 1077 1149 } else { 1078 - for (i = 0; i < ARRAY_SIZE(limited_range_coeff); i++) 1079 - rkisp1_write(params->rkisp1, 1080 - RKISP1_CIF_ISP_CC_COEFF_0 + i * 4, 1081 - limited_range_coeff[i]); 1082 - 1150 + csm = coeffs->limited; 1083 1151 rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_CTRL, 1084 1152 RKISP1_CIF_ISP_CTRL_ISP_CSM_Y_FULL_ENA | 1085 1153 RKISP1_CIF_ISP_CTRL_ISP_CSM_C_FULL_ENA); 1086 1154 } 1155 + 1156 + for (i = 0; i < 9; i++) 1157 + rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_CC_COEFF_0 + i 
* 4, 1158 + csm[i]); 1087 1159 } 1088 1160 1089 1161 /* ISP De-noise Pre-Filter(DPF) function */ ··· 1258 1216 if (module_ens & RKISP1_CIF_ISP_MODULE_DPCC) 1259 1217 rkisp1_param_set_bits(params, 1260 1218 RKISP1_CIF_ISP_DPCC_MODE, 1261 - RKISP1_CIF_ISP_DPCC_ENA); 1219 + RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE); 1262 1220 else 1263 1221 rkisp1_param_clear_bits(params, 1264 1222 RKISP1_CIF_ISP_DPCC_MODE, 1265 - RKISP1_CIF_ISP_DPCC_ENA); 1223 + RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE); 1266 1224 } 1267 1225 1268 1226 /* update bls config */ ··· 1295 1253 rkisp1_param_clear_bits(params, 1296 1254 RKISP1_CIF_ISP_CTRL, 1297 1255 RKISP1_CIF_ISP_CTRL_ISP_GAMMA_IN_ENA); 1298 - } 1299 - 1300 - /* update lsc config */ 1301 - if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC) 1302 - rkisp1_lsc_config(params, 1303 - &new_params->others.lsc_config); 1304 - 1305 - if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) { 1306 - if (module_ens & RKISP1_CIF_ISP_MODULE_LSC) 1307 - rkisp1_param_set_bits(params, 1308 - RKISP1_CIF_ISP_LSC_CTRL, 1309 - RKISP1_CIF_ISP_LSC_CTRL_ENA); 1310 - else 1311 - rkisp1_param_clear_bits(params, 1312 - RKISP1_CIF_ISP_LSC_CTRL, 1313 - RKISP1_CIF_ISP_LSC_CTRL_ENA); 1314 1256 } 1315 1257 1316 1258 /* update awb gains */ ··· 1413 1387 } 1414 1388 } 1415 1389 1390 + static void 1391 + rkisp1_isp_isr_lsc_config(struct rkisp1_params *params, 1392 + const struct rkisp1_params_cfg *new_params) 1393 + { 1394 + unsigned int module_en_update, module_cfg_update, module_ens; 1395 + 1396 + module_en_update = new_params->module_en_update; 1397 + module_cfg_update = new_params->module_cfg_update; 1398 + module_ens = new_params->module_ens; 1399 + 1400 + /* update lsc config */ 1401 + if (module_cfg_update & RKISP1_CIF_ISP_MODULE_LSC) 1402 + rkisp1_lsc_config(params, 1403 + &new_params->others.lsc_config); 1404 + 1405 + if (module_en_update & RKISP1_CIF_ISP_MODULE_LSC) { 1406 + if (module_ens & RKISP1_CIF_ISP_MODULE_LSC) 1407 + rkisp1_param_set_bits(params, 1408 + 
RKISP1_CIF_ISP_LSC_CTRL, 1409 + RKISP1_CIF_ISP_LSC_CTRL_ENA); 1410 + else 1411 + rkisp1_param_clear_bits(params, 1412 + RKISP1_CIF_ISP_LSC_CTRL, 1413 + RKISP1_CIF_ISP_LSC_CTRL_ENA); 1414 + } 1415 + } 1416 + 1416 1417 static void rkisp1_isp_isr_meas_config(struct rkisp1_params *params, 1417 1418 struct rkisp1_params_cfg *new_params) 1418 1419 { ··· 1501 1448 } 1502 1449 } 1503 1450 1504 - static void rkisp1_params_apply_params_cfg(struct rkisp1_params *params, 1505 - unsigned int frame_sequence) 1451 + static bool rkisp1_params_get_buffer(struct rkisp1_params *params, 1452 + struct rkisp1_buffer **buf, 1453 + struct rkisp1_params_cfg **cfg) 1506 1454 { 1507 - struct rkisp1_params_cfg *new_params; 1508 - struct rkisp1_buffer *cur_buf = NULL; 1509 - 1510 1455 if (list_empty(&params->params)) 1511 - return; 1456 + return false; 1512 1457 1513 - cur_buf = list_first_entry(&params->params, 1514 - struct rkisp1_buffer, queue); 1458 + *buf = list_first_entry(&params->params, struct rkisp1_buffer, queue); 1459 + *cfg = vb2_plane_vaddr(&(*buf)->vb.vb2_buf, 0); 1515 1460 1516 - new_params = (struct rkisp1_params_cfg *)vb2_plane_vaddr(&cur_buf->vb.vb2_buf, 0); 1461 + return true; 1462 + } 1517 1463 1518 - rkisp1_isp_isr_other_config(params, new_params); 1519 - rkisp1_isp_isr_meas_config(params, new_params); 1464 + static void rkisp1_params_complete_buffer(struct rkisp1_params *params, 1465 + struct rkisp1_buffer *buf, 1466 + unsigned int frame_sequence) 1467 + { 1468 + list_del(&buf->queue); 1520 1469 1521 - /* update shadow register immediately */ 1522 - rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD); 1523 - 1524 - list_del(&cur_buf->queue); 1525 - 1526 - cur_buf->vb.sequence = frame_sequence; 1527 - vb2_buffer_done(&cur_buf->vb.vb2_buf, VB2_BUF_STATE_DONE); 1470 + buf->vb.sequence = frame_sequence; 1471 + vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_DONE); 1528 1472 } 1529 1473 1530 1474 void rkisp1_params_isr(struct rkisp1_device 
*rkisp1) 1531 1475 { 1532 - /* 1533 - * This isr is called when the ISR finishes processing a frame (RKISP1_CIF_ISP_FRAME). 1534 - * Configurations performed here will be applied on the next frame. 1535 - * Since frame_sequence is updated on the vertical sync signal, we should use 1536 - * frame_sequence + 1 here to indicate to userspace on which frame these parameters 1537 - * are being applied. 1538 - */ 1539 - unsigned int frame_sequence = rkisp1->isp.frame_sequence + 1; 1540 1476 struct rkisp1_params *params = &rkisp1->params; 1477 + struct rkisp1_params_cfg *new_params; 1478 + struct rkisp1_buffer *cur_buf; 1541 1479 1542 1480 spin_lock(&params->config_lock); 1543 - rkisp1_params_apply_params_cfg(params, frame_sequence); 1544 1481 1482 + if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params)) 1483 + goto unlock; 1484 + 1485 + rkisp1_isp_isr_other_config(params, new_params); 1486 + rkisp1_isp_isr_lsc_config(params, new_params); 1487 + rkisp1_isp_isr_meas_config(params, new_params); 1488 + 1489 + /* update shadow register immediately */ 1490 + rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, 1491 + RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD); 1492 + 1493 + /* 1494 + * This isr is called when the ISR finishes processing a frame 1495 + * (RKISP1_CIF_ISP_FRAME). Configurations performed here will be 1496 + * applied on the next frame. Since frame_sequence is updated on the 1497 + * vertical sync signal, we should use frame_sequence + 1 here to 1498 + * indicate to userspace on which frame these parameters are being 1499 + * applied. 
1500 + */ 1501 + rkisp1_params_complete_buffer(params, cur_buf, 1502 + rkisp1->isp.frame_sequence + 1); 1503 + 1504 + unlock: 1545 1505 spin_unlock(&params->config_lock); 1546 1506 } 1547 1507 ··· 1597 1531 14 1598 1532 }; 1599 1533 1600 - static void rkisp1_params_config_parameter(struct rkisp1_params *params) 1534 + void rkisp1_params_pre_configure(struct rkisp1_params *params, 1535 + enum rkisp1_fmt_raw_pat_type bayer_pat, 1536 + enum v4l2_quantization quantization, 1537 + enum v4l2_ycbcr_encoding ycbcr_encoding) 1601 1538 { 1602 1539 struct rkisp1_cif_isp_hst_config hst = rkisp1_hst_params_default_config; 1540 + struct rkisp1_params_cfg *new_params; 1541 + struct rkisp1_buffer *cur_buf; 1542 + 1543 + params->quantization = quantization; 1544 + params->ycbcr_encoding = ycbcr_encoding; 1545 + params->raw_type = bayer_pat; 1603 1546 1604 1547 params->ops->awb_meas_config(params, &rkisp1_awb_params_default_config); 1605 1548 params->ops->awb_meas_enable(params, &rkisp1_awb_params_default_config, ··· 1627 1552 rkisp1_param_set_bits(params, RKISP1_CIF_ISP_HIST_PROP_V10, 1628 1553 rkisp1_hst_params_default_config.mode); 1629 1554 1630 - /* set the range */ 1631 - if (params->quantization == V4L2_QUANTIZATION_FULL_RANGE) 1632 - rkisp1_csm_config(params, true); 1633 - else 1634 - rkisp1_csm_config(params, false); 1555 + rkisp1_csm_config(params); 1635 1556 1636 1557 spin_lock_irq(&params->config_lock); 1637 1558 1638 1559 /* apply the first buffer if there is one already */ 1639 - rkisp1_params_apply_params_cfg(params, 0); 1640 1560 1561 + if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params)) 1562 + goto unlock; 1563 + 1564 + rkisp1_isp_isr_other_config(params, new_params); 1565 + rkisp1_isp_isr_meas_config(params, new_params); 1566 + 1567 + /* update shadow register immediately */ 1568 + rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, 1569 + RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD); 1570 + 1571 + unlock: 1641 1572 spin_unlock_irq(&params->config_lock); 1642 1573 } 
1643 1574 1644 - void rkisp1_params_configure(struct rkisp1_params *params, 1645 - enum rkisp1_fmt_raw_pat_type bayer_pat, 1646 - enum v4l2_quantization quantization) 1575 + void rkisp1_params_post_configure(struct rkisp1_params *params) 1647 1576 { 1648 - params->quantization = quantization; 1649 - params->raw_type = bayer_pat; 1650 - rkisp1_params_config_parameter(params); 1577 + struct rkisp1_params_cfg *new_params; 1578 + struct rkisp1_buffer *cur_buf; 1579 + 1580 + spin_lock_irq(&params->config_lock); 1581 + 1582 + /* 1583 + * Apply LSC parameters from the first buffer (if any is already 1584 + * available. This must be done after the ISP gets started in the 1585 + * ISP8000Nano v18.02 (found in the i.MX8MP) as access to the LSC RAM 1586 + * is gated by the ISP_CTRL.ISP_ENABLE bit. As this initialization 1587 + * ordering doesn't affect other ISP versions negatively, do so 1588 + * unconditionally. 1589 + */ 1590 + 1591 + if (!rkisp1_params_get_buffer(params, &cur_buf, &new_params)) 1592 + goto unlock; 1593 + 1594 + rkisp1_isp_isr_lsc_config(params, new_params); 1595 + 1596 + /* update shadow register immediately */ 1597 + rkisp1_param_set_bits(params, RKISP1_CIF_ISP_CTRL, 1598 + RKISP1_CIF_ISP_CTRL_ISP_CFG_UPD); 1599 + 1600 + rkisp1_params_complete_buffer(params, cur_buf, 0); 1601 + 1602 + unlock: 1603 + spin_unlock_irq(&params->config_lock); 1651 1604 } 1652 1605 1653 1606 /* ··· 1685 1582 void rkisp1_params_disable(struct rkisp1_params *params) 1686 1583 { 1687 1584 rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_DPCC_MODE, 1688 - RKISP1_CIF_ISP_DPCC_ENA); 1585 + RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE); 1689 1586 rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_LSC_CTRL, 1690 1587 RKISP1_CIF_ISP_LSC_CTRL_ENA); 1691 1588 rkisp1_param_clear_bits(params, RKISP1_CIF_ISP_BLS_CTRL,
+17 -30
drivers/media/platform/rockchip/rkisp1/rkisp1-regs.h
··· 576 576 (((v0) & 0x1FFF) | (((v1) & 0x1FFF) << 13)) 577 577 #define RKISP1_CIF_ISP_LSC_SECT_SIZE(v0, v1) \ 578 578 (((v0) & 0xFFF) | (((v1) & 0xFFF) << 16)) 579 - #define RKISP1_CIF_ISP_LSC_GRAD_SIZE(v0, v1) \ 579 + #define RKISP1_CIF_ISP_LSC_SECT_GRAD(v0, v1) \ 580 580 (((v0) & 0xFFF) | (((v1) & 0xFFF) << 16)) 581 581 582 582 /* LSC: ISP_LSC_TABLE_SEL */ ··· 618 618 #define RKISP1_CIF_ISP_CTRL_ISP_GAMMA_OUT_ENA_READ(x) (((x) >> 11) & 1) 619 619 620 620 /* DPCC */ 621 - /* ISP_DPCC_MODE */ 622 - #define RKISP1_CIF_ISP_DPCC_ENA BIT(0) 623 - #define RKISP1_CIF_ISP_DPCC_MODE_MAX 0x07 624 - #define RKISP1_CIF_ISP_DPCC_OUTPUTMODE_MAX 0x0F 625 - #define RKISP1_CIF_ISP_DPCC_SETUSE_MAX 0x0F 626 - #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RESERVED 0xFFFFE000 627 - #define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RESERVED 0xFFFF0000 628 - #define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RESERVED 0xFFFFC0C0 629 - #define RKISP1_CIF_ISP_DPCC_PG_FAC_RESERVED 0xFFFFC0C0 630 - #define RKISP1_CIF_ISP_DPCC_RND_THRESH_RESERVED 0xFFFF0000 631 - #define RKISP1_CIF_ISP_DPCC_RG_FAC_RESERVED 0xFFFFC0C0 632 - #define RKISP1_CIF_ISP_DPCC_RO_LIMIT_RESERVED 0xFFFFF000 633 - #define RKISP1_CIF_ISP_DPCC_RND_OFFS_RESERVED 0xFFFFF000 621 + #define RKISP1_CIF_ISP_DPCC_MODE_DPCC_ENABLE BIT(0) 622 + #define RKISP1_CIF_ISP_DPCC_MODE_GRAYSCALE_MODE BIT(1) 623 + #define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_MASK GENMASK(3, 0) 624 + #define RKISP1_CIF_ISP_DPCC_SET_USE_MASK GENMASK(3, 0) 625 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_MASK 0x00001f1f 626 + #define RKISP1_CIF_ISP_DPCC_LINE_THRESH_MASK 0x0000ffff 627 + #define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_MASK 0x00003f3f 628 + #define RKISP1_CIF_ISP_DPCC_PG_FAC_MASK 0x00003f3f 629 + #define RKISP1_CIF_ISP_DPCC_RND_THRESH_MASK 0x0000ffff 630 + #define RKISP1_CIF_ISP_DPCC_RG_FAC_MASK 0x00003f3f 631 + #define RKISP1_CIF_ISP_DPCC_RO_LIMIT_MASK 0x00000fff 632 + #define RKISP1_CIF_ISP_DPCC_RND_OFFS_MASK 0x00000fff 634 633 635 634 /* BLS */ 636 635 /* ISP_BLS_CTRL */ ··· 1072 1073 
#define RKISP1_CIF_ISP_LSC_GR_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x00000018) 1073 1074 #define RKISP1_CIF_ISP_LSC_B_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x0000001C) 1074 1075 #define RKISP1_CIF_ISP_LSC_GB_TABLE_DATA (RKISP1_CIF_ISP_LSC_BASE + 0x00000020) 1075 - #define RKISP1_CIF_ISP_LSC_XGRAD_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000024) 1076 - #define RKISP1_CIF_ISP_LSC_XGRAD_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000028) 1077 - #define RKISP1_CIF_ISP_LSC_XGRAD_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000002C) 1078 - #define RKISP1_CIF_ISP_LSC_XGRAD_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000030) 1079 - #define RKISP1_CIF_ISP_LSC_YGRAD_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000034) 1080 - #define RKISP1_CIF_ISP_LSC_YGRAD_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000038) 1081 - #define RKISP1_CIF_ISP_LSC_YGRAD_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000003C) 1082 - #define RKISP1_CIF_ISP_LSC_YGRAD_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000040) 1083 - #define RKISP1_CIF_ISP_LSC_XSIZE_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000044) 1084 - #define RKISP1_CIF_ISP_LSC_XSIZE_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000048) 1085 - #define RKISP1_CIF_ISP_LSC_XSIZE_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000004C) 1086 - #define RKISP1_CIF_ISP_LSC_XSIZE_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000050) 1087 - #define RKISP1_CIF_ISP_LSC_YSIZE_01 (RKISP1_CIF_ISP_LSC_BASE + 0x00000054) 1088 - #define RKISP1_CIF_ISP_LSC_YSIZE_23 (RKISP1_CIF_ISP_LSC_BASE + 0x00000058) 1089 - #define RKISP1_CIF_ISP_LSC_YSIZE_45 (RKISP1_CIF_ISP_LSC_BASE + 0x0000005C) 1090 - #define RKISP1_CIF_ISP_LSC_YSIZE_67 (RKISP1_CIF_ISP_LSC_BASE + 0x00000060) 1076 + #define RKISP1_CIF_ISP_LSC_XGRAD(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000024 + (n) * 4) 1077 + #define RKISP1_CIF_ISP_LSC_YGRAD(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000034 + (n) * 4) 1078 + #define RKISP1_CIF_ISP_LSC_XSIZE(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000044 + (n) * 4) 1079 + #define RKISP1_CIF_ISP_LSC_YSIZE(n) (RKISP1_CIF_ISP_LSC_BASE + 0x00000054 + (n) * 4) 1091 1080 #define RKISP1_CIF_ISP_LSC_TABLE_SEL 
(RKISP1_CIF_ISP_LSC_BASE + 0x00000064) 1092 1081 #define RKISP1_CIF_ISP_LSC_STATUS (RKISP1_CIF_ISP_LSC_BASE + 0x00000068) 1093 1082
+42 -3
drivers/media/platform/rockchip/rkisp1/rkisp1-resizer.c
··· 411 411 sink_fmt->height = RKISP1_DEFAULT_HEIGHT; 412 412 sink_fmt->field = V4L2_FIELD_NONE; 413 413 sink_fmt->code = RKISP1_DEF_FMT; 414 + sink_fmt->colorspace = V4L2_COLORSPACE_SRGB; 415 + sink_fmt->xfer_func = V4L2_XFER_FUNC_SRGB; 416 + sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601; 417 + sink_fmt->quantization = V4L2_QUANTIZATION_LIM_RANGE; 414 418 415 419 sink_crop = v4l2_subdev_get_try_crop(sd, sd_state, 416 420 RKISP1_RSZ_PAD_SINK); ··· 507 503 const struct rkisp1_mbus_info *mbus_info; 508 504 struct v4l2_mbus_framefmt *sink_fmt, *src_fmt; 509 505 struct v4l2_rect *sink_crop; 506 + bool is_yuv; 510 507 511 508 sink_fmt = rkisp1_rsz_get_pad_fmt(rsz, sd_state, RKISP1_RSZ_PAD_SINK, 512 509 which); ··· 529 524 if (which == V4L2_SUBDEV_FORMAT_ACTIVE) 530 525 rsz->pixel_enc = mbus_info->pixel_enc; 531 526 532 - /* Propagete to source pad */ 533 - src_fmt->code = sink_fmt->code; 534 - 535 527 sink_fmt->width = clamp_t(u32, format->width, 536 528 RKISP1_ISP_MIN_WIDTH, 537 529 RKISP1_ISP_MAX_WIDTH); ··· 536 534 RKISP1_ISP_MIN_HEIGHT, 537 535 RKISP1_ISP_MAX_HEIGHT); 538 536 537 + /* 538 + * Adjust the color space fields. Accept any color primaries and 539 + * transfer function for both YUV and Bayer. For YUV any YCbCr encoding 540 + * and quantization range is also accepted. For Bayer formats, the YCbCr 541 + * encoding isn't applicable, and the quantization range can only be 542 + * full. 543 + */ 544 + is_yuv = mbus_info->pixel_enc == V4L2_PIXEL_ENC_YUV; 545 + 546 + sink_fmt->colorspace = format->colorspace ? : 547 + (is_yuv ? V4L2_COLORSPACE_SRGB : 548 + V4L2_COLORSPACE_RAW); 549 + sink_fmt->xfer_func = format->xfer_func ? : 550 + V4L2_MAP_XFER_FUNC_DEFAULT(sink_fmt->colorspace); 551 + if (is_yuv) { 552 + sink_fmt->ycbcr_enc = format->ycbcr_enc ? : 553 + V4L2_MAP_YCBCR_ENC_DEFAULT(sink_fmt->colorspace); 554 + sink_fmt->quantization = format->quantization ? 
: 555 + V4L2_MAP_QUANTIZATION_DEFAULT(false, sink_fmt->colorspace, 556 + sink_fmt->ycbcr_enc); 557 + } else { 558 + /* 559 + * The YCbCr encoding isn't applicable for non-YUV formats, but 560 + * V4L2 has no "no encoding" value. Hardcode it to Rec. 601, it 561 + * should be ignored by userspace. 562 + */ 563 + sink_fmt->ycbcr_enc = V4L2_YCBCR_ENC_601; 564 + sink_fmt->quantization = V4L2_QUANTIZATION_FULL_RANGE; 565 + } 566 + 539 567 *format = *sink_fmt; 568 + 569 + /* Propagate the media bus code and color space to the source pad. */ 570 + src_fmt->code = sink_fmt->code; 571 + src_fmt->colorspace = sink_fmt->colorspace; 572 + src_fmt->xfer_func = sink_fmt->xfer_func; 573 + src_fmt->ycbcr_enc = sink_fmt->ycbcr_enc; 574 + src_fmt->quantization = sink_fmt->quantization; 540 575 541 576 /* Update sink crop */ 542 577 rkisp1_rsz_set_sink_crop(rsz, sd_state, sink_crop, which);
+4 -5
drivers/media/platform/samsung/exynos4-is/fimc-capture.c
··· 524 524 mutex_lock(&fimc->lock); 525 525 526 526 if (close && vc->streaming) { 527 - media_pipeline_stop(&vc->ve.vdev.entity); 527 + video_device_pipeline_stop(&vc->ve.vdev); 528 528 vc->streaming = false; 529 529 } 530 530 ··· 1176 1176 { 1177 1177 struct fimc_dev *fimc = video_drvdata(file); 1178 1178 struct fimc_vid_cap *vc = &fimc->vid_cap; 1179 - struct media_entity *entity = &vc->ve.vdev.entity; 1180 1179 struct fimc_source_info *si = NULL; 1181 1180 struct v4l2_subdev *sd; 1182 1181 int ret; ··· 1183 1184 if (fimc_capture_active(fimc)) 1184 1185 return -EBUSY; 1185 1186 1186 - ret = media_pipeline_start(entity, &vc->ve.pipe->mp); 1187 + ret = video_device_pipeline_start(&vc->ve.vdev, &vc->ve.pipe->mp); 1187 1188 if (ret < 0) 1188 1189 return ret; 1189 1190 ··· 1217 1218 } 1218 1219 1219 1220 err_p_stop: 1220 - media_pipeline_stop(entity); 1221 + video_device_pipeline_stop(&vc->ve.vdev); 1221 1222 return ret; 1222 1223 } 1223 1224 ··· 1233 1234 return ret; 1234 1235 1235 1236 if (vc->streaming) { 1236 - media_pipeline_stop(&vc->ve.vdev.entity); 1237 + video_device_pipeline_stop(&vc->ve.vdev); 1237 1238 vc->streaming = false; 1238 1239 } 1239 1240
+4 -5
drivers/media/platform/samsung/exynos4-is/fimc-isp-video.c
··· 312 312 is_singular_file = v4l2_fh_is_singular_file(file); 313 313 314 314 if (is_singular_file && ivc->streaming) { 315 - media_pipeline_stop(entity); 315 + video_device_pipeline_stop(&ivc->ve.vdev); 316 316 ivc->streaming = 0; 317 317 } 318 318 ··· 490 490 { 491 491 struct fimc_isp *isp = video_drvdata(file); 492 492 struct exynos_video_entity *ve = &isp->video_capture.ve; 493 - struct media_entity *me = &ve->vdev.entity; 494 493 int ret; 495 494 496 - ret = media_pipeline_start(me, &ve->pipe->mp); 495 + ret = video_device_pipeline_start(&ve->vdev, &ve->pipe->mp); 497 496 if (ret < 0) 498 497 return ret; 499 498 ··· 507 508 isp->video_capture.streaming = 1; 508 509 return 0; 509 510 p_stop: 510 - media_pipeline_stop(me); 511 + video_device_pipeline_stop(&ve->vdev); 511 512 return ret; 512 513 } 513 514 ··· 522 523 if (ret < 0) 523 524 return ret; 524 525 525 - media_pipeline_stop(&video->ve.vdev.entity); 526 + video_device_pipeline_stop(&video->ve.vdev); 526 527 video->streaming = 0; 527 528 return 0; 528 529 }
+4 -5
drivers/media/platform/samsung/exynos4-is/fimc-lite.c
··· 516 516 if (v4l2_fh_is_singular_file(file) && 517 517 atomic_read(&fimc->out_path) == FIMC_IO_DMA) { 518 518 if (fimc->streaming) { 519 - media_pipeline_stop(entity); 519 + video_device_pipeline_stop(&fimc->ve.vdev); 520 520 fimc->streaming = false; 521 521 } 522 522 fimc_lite_stop_capture(fimc, false); ··· 812 812 enum v4l2_buf_type type) 813 813 { 814 814 struct fimc_lite *fimc = video_drvdata(file); 815 - struct media_entity *entity = &fimc->ve.vdev.entity; 816 815 int ret; 817 816 818 817 if (fimc_lite_active(fimc)) 819 818 return -EBUSY; 820 819 821 - ret = media_pipeline_start(entity, &fimc->ve.pipe->mp); 820 + ret = video_device_pipeline_start(&fimc->ve.vdev, &fimc->ve.pipe->mp); 822 821 if (ret < 0) 823 822 return ret; 824 823 ··· 834 835 } 835 836 836 837 err_p_stop: 837 - media_pipeline_stop(entity); 838 + video_device_pipeline_stop(&fimc->ve.vdev); 838 839 return 0; 839 840 } 840 841 ··· 848 849 if (ret < 0) 849 850 return ret; 850 851 851 - media_pipeline_stop(&fimc->ve.vdev.entity); 852 + video_device_pipeline_stop(&fimc->ve.vdev); 852 853 fimc->streaming = false; 853 854 return 0; 854 855 }
+3 -3
drivers/media/platform/samsung/s3c-camif/camif-capture.c
··· 848 848 if (s3c_vp_active(vp)) 849 849 return 0; 850 850 851 - ret = media_pipeline_start(sensor, camif->m_pipeline); 851 + ret = media_pipeline_start(sensor->pads, camif->m_pipeline); 852 852 if (ret < 0) 853 853 return ret; 854 854 855 855 ret = camif_pipeline_validate(camif); 856 856 if (ret < 0) { 857 - media_pipeline_stop(sensor); 857 + media_pipeline_stop(sensor->pads); 858 858 return ret; 859 859 } 860 860 ··· 878 878 879 879 ret = vb2_streamoff(&vp->vb_queue, type); 880 880 if (ret == 0) 881 - media_pipeline_stop(&camif->sensor.sd->entity); 881 + media_pipeline_stop(camif->sensor.sd->entity.pads); 882 882 return ret; 883 883 } 884 884
+3 -3
drivers/media/platform/st/stm32/stm32-dcmi.c
··· 751 751 goto err_unlocked; 752 752 } 753 753 754 - ret = media_pipeline_start(&dcmi->vdev->entity, &dcmi->pipeline); 754 + ret = video_device_pipeline_start(dcmi->vdev, &dcmi->pipeline); 755 755 if (ret < 0) { 756 756 dev_err(dcmi->dev, "%s: Failed to start streaming, media pipeline start error (%d)\n", 757 757 __func__, ret); ··· 865 865 dcmi_pipeline_stop(dcmi); 866 866 867 867 err_media_pipeline_stop: 868 - media_pipeline_stop(&dcmi->vdev->entity); 868 + video_device_pipeline_stop(dcmi->vdev); 869 869 870 870 err_pm_put: 871 871 pm_runtime_put(dcmi->dev); ··· 892 892 893 893 dcmi_pipeline_stop(dcmi); 894 894 895 - media_pipeline_stop(&dcmi->vdev->entity); 895 + video_device_pipeline_stop(dcmi->vdev); 896 896 897 897 spin_lock_irq(&dcmi->irqlock); 898 898
+1 -1
drivers/media/platform/sunxi/sun4i-csi/Kconfig
··· 3 3 config VIDEO_SUN4I_CSI 4 4 tristate "Allwinner A10 CMOS Sensor Interface Support" 5 5 depends on V4L_PLATFORM_DRIVERS 6 - depends on VIDEO_DEV && COMMON_CLK && HAS_DMA 6 + depends on VIDEO_DEV && COMMON_CLK && RESET_CONTROLLER && HAS_DMA 7 7 depends on ARCH_SUNXI || COMPILE_TEST 8 8 select MEDIA_CONTROLLER 9 9 select VIDEO_V4L2_SUBDEV_API
+3 -3
drivers/media/platform/sunxi/sun4i-csi/sun4i_dma.c
··· 266 266 goto err_clear_dma_queue; 267 267 } 268 268 269 - ret = media_pipeline_start(&csi->vdev.entity, &csi->vdev.pipe); 269 + ret = video_device_pipeline_alloc_start(&csi->vdev); 270 270 if (ret < 0) 271 271 goto err_free_scratch_buffer; 272 272 ··· 330 330 sun4i_csi_capture_stop(csi); 331 331 332 332 err_disable_pipeline: 333 - media_pipeline_stop(&csi->vdev.entity); 333 + video_device_pipeline_stop(&csi->vdev); 334 334 335 335 err_free_scratch_buffer: 336 336 dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr, ··· 359 359 return_all_buffers(csi, VB2_BUF_STATE_ERROR); 360 360 spin_unlock_irqrestore(&csi->qlock, flags); 361 361 362 - media_pipeline_stop(&csi->vdev.entity); 362 + video_device_pipeline_stop(&csi->vdev); 363 363 364 364 dma_free_coherent(csi->dev, csi->scratch.size, csi->scratch.vaddr, 365 365 csi->scratch.paddr);
+7 -5
drivers/media/platform/sunxi/sun6i-csi/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 config VIDEO_SUN6I_CSI 3 - tristate "Allwinner V3s Camera Sensor Interface driver" 4 - depends on V4L_PLATFORM_DRIVERS 5 - depends on VIDEO_DEV && COMMON_CLK && HAS_DMA 3 + tristate "Allwinner A31 Camera Sensor Interface (CSI) Driver" 4 + depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV 6 5 depends on ARCH_SUNXI || COMPILE_TEST 6 + depends on PM && COMMON_CLK && RESET_CONTROLLER && HAS_DMA 7 7 select MEDIA_CONTROLLER 8 8 select VIDEO_V4L2_SUBDEV_API 9 9 select VIDEOBUF2_DMA_CONTIG 10 - select REGMAP_MMIO 11 10 select V4L2_FWNODE 11 + select REGMAP_MMIO 12 12 help 13 - Support for the Allwinner Camera Sensor Interface Controller on V3s. 13 + Support for the Allwinner A31 Camera Sensor Interface (CSI) 14 + controller, also found on other platforms such as the A83T, H3, 15 + V3/V3s or A64.
+348 -248
drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
··· 23 23 #include <linux/sched.h> 24 24 #include <linux/sizes.h> 25 25 #include <linux/slab.h> 26 + #include <media/v4l2-mc.h> 26 27 27 28 #include "sun6i_csi.h" 28 29 #include "sun6i_csi_reg.h" 29 30 30 - #define MODULE_NAME "sun6i-csi" 31 - 32 - struct sun6i_csi_dev { 33 - struct sun6i_csi csi; 34 - struct device *dev; 35 - 36 - struct regmap *regmap; 37 - struct clk *clk_mod; 38 - struct clk *clk_ram; 39 - struct reset_control *rstc_bus; 40 - 41 - int planar_offset[3]; 42 - }; 43 - 44 - static inline struct sun6i_csi_dev *sun6i_csi_to_dev(struct sun6i_csi *csi) 45 - { 46 - return container_of(csi, struct sun6i_csi_dev, csi); 47 - } 31 + /* Helpers */ 48 32 49 33 /* TODO add 10&12 bit YUV, RGB support */ 50 - bool sun6i_csi_is_format_supported(struct sun6i_csi *csi, 34 + bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev, 51 35 u32 pixformat, u32 mbus_code) 52 36 { 53 - struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi); 37 + struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2; 54 38 55 39 /* 56 40 * Some video receivers have the ability to be compatible with 57 41 * 8bit and 16bit bus width. 58 42 * Identify the media bus format from device tree. 
59 43 */ 60 - if ((sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_PARALLEL 61 - || sdev->csi.v4l2_ep.bus_type == V4L2_MBUS_BT656) 62 - && sdev->csi.v4l2_ep.bus.parallel.bus_width == 16) { 44 + if ((v4l2->v4l2_ep.bus_type == V4L2_MBUS_PARALLEL 45 + || v4l2->v4l2_ep.bus_type == V4L2_MBUS_BT656) 46 + && v4l2->v4l2_ep.bus.parallel.bus_width == 16) { 63 47 switch (pixformat) { 64 48 case V4L2_PIX_FMT_NV12_16L16: 65 49 case V4L2_PIX_FMT_NV12: ··· 60 76 case MEDIA_BUS_FMT_YVYU8_1X16: 61 77 return true; 62 78 default: 63 - dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n", 79 + dev_dbg(csi_dev->dev, 80 + "Unsupported mbus code: 0x%x\n", 64 81 mbus_code); 65 82 break; 66 83 } 67 84 break; 68 85 default: 69 - dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n", 86 + dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n", 70 87 pixformat); 71 88 break; 72 89 } ··· 124 139 case MEDIA_BUS_FMT_YVYU8_2X8: 125 140 return true; 126 141 default: 127 - dev_dbg(sdev->dev, "Unsupported mbus code: 0x%x\n", 142 + dev_dbg(csi_dev->dev, "Unsupported mbus code: 0x%x\n", 128 143 mbus_code); 129 144 break; 130 145 } ··· 139 154 return (mbus_code == MEDIA_BUS_FMT_JPEG_1X8); 140 155 141 156 default: 142 - dev_dbg(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat); 157 + dev_dbg(csi_dev->dev, "Unsupported pixformat: 0x%x\n", 158 + pixformat); 143 159 break; 144 160 } 145 161 146 162 return false; 147 163 } 148 164 149 - int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable) 165 + int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable) 150 166 { 151 - struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi); 152 - struct device *dev = sdev->dev; 153 - struct regmap *regmap = sdev->regmap; 167 + struct device *dev = csi_dev->dev; 168 + struct regmap *regmap = csi_dev->regmap; 154 169 int ret; 155 170 156 171 if (!enable) { 157 172 regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, 0); 173 + pm_runtime_put(dev); 158 174 159 - clk_disable_unprepare(sdev->clk_ram); 160 - if 
(of_device_is_compatible(dev->of_node, 161 - "allwinner,sun50i-a64-csi")) 162 - clk_rate_exclusive_put(sdev->clk_mod); 163 - clk_disable_unprepare(sdev->clk_mod); 164 - reset_control_assert(sdev->rstc_bus); 165 175 return 0; 166 176 } 167 177 168 - ret = clk_prepare_enable(sdev->clk_mod); 169 - if (ret) { 170 - dev_err(sdev->dev, "Enable csi clk err %d\n", ret); 178 + ret = pm_runtime_resume_and_get(dev); 179 + if (ret < 0) 171 180 return ret; 172 - } 173 - 174 - if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi")) 175 - clk_set_rate_exclusive(sdev->clk_mod, 300000000); 176 - 177 - ret = clk_prepare_enable(sdev->clk_ram); 178 - if (ret) { 179 - dev_err(sdev->dev, "Enable clk_dram_csi clk err %d\n", ret); 180 - goto clk_mod_disable; 181 - } 182 - 183 - ret = reset_control_deassert(sdev->rstc_bus); 184 - if (ret) { 185 - dev_err(sdev->dev, "reset err %d\n", ret); 186 - goto clk_ram_disable; 187 - } 188 181 189 182 regmap_update_bits(regmap, CSI_EN_REG, CSI_EN_CSI_EN, CSI_EN_CSI_EN); 190 183 191 184 return 0; 192 - 193 - clk_ram_disable: 194 - clk_disable_unprepare(sdev->clk_ram); 195 - clk_mod_disable: 196 - if (of_device_is_compatible(dev->of_node, "allwinner,sun50i-a64-csi")) 197 - clk_rate_exclusive_put(sdev->clk_mod); 198 - clk_disable_unprepare(sdev->clk_mod); 199 - return ret; 200 185 } 201 186 202 - static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_dev *sdev, 187 + static enum csi_input_fmt get_csi_input_format(struct sun6i_csi_device *csi_dev, 203 188 u32 mbus_code, u32 pixformat) 204 189 { 205 190 /* non-YUV */ ··· 187 232 } 188 233 189 234 /* not support YUV420 input format yet */ 190 - dev_dbg(sdev->dev, "Select YUV422 as default input format of CSI.\n"); 235 + dev_dbg(csi_dev->dev, "Select YUV422 as default input format of CSI.\n"); 191 236 return CSI_INPUT_FORMAT_YUV422; 192 237 } 193 238 194 - static enum csi_output_fmt get_csi_output_format(struct sun6i_csi_dev *sdev, 195 - u32 pixformat, u32 field) 239 + static enum 
csi_output_fmt 240 + get_csi_output_format(struct sun6i_csi_device *csi_dev, u32 pixformat, 241 + u32 field) 196 242 { 197 243 bool buf_interlaced = false; 198 244 ··· 252 296 return buf_interlaced ? CSI_FRAME_RAW_8 : CSI_FIELD_RAW_8; 253 297 254 298 default: 255 - dev_warn(sdev->dev, "Unsupported pixformat: 0x%x\n", pixformat); 299 + dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x\n", pixformat); 256 300 break; 257 301 } 258 302 259 303 return CSI_FIELD_RAW_8; 260 304 } 261 305 262 - static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_dev *sdev, 306 + static enum csi_input_seq get_csi_input_seq(struct sun6i_csi_device *csi_dev, 263 307 u32 mbus_code, u32 pixformat) 264 308 { 265 309 /* Input sequence does not apply to non-YUV formats */ ··· 286 330 case MEDIA_BUS_FMT_YVYU8_2X8: 287 331 return CSI_INPUT_SEQ_YVYU; 288 332 default: 289 - dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n", 333 + dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n", 290 334 mbus_code); 291 335 break; 292 336 } ··· 308 352 case MEDIA_BUS_FMT_YVYU8_2X8: 309 353 return CSI_INPUT_SEQ_YUYV; 310 354 default: 311 - dev_warn(sdev->dev, "Unsupported mbus code: 0x%x\n", 355 + dev_warn(csi_dev->dev, "Unsupported mbus code: 0x%x\n", 312 356 mbus_code); 313 357 break; 314 358 } ··· 318 362 return CSI_INPUT_SEQ_YUYV; 319 363 320 364 default: 321 - dev_warn(sdev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n", 365 + dev_warn(csi_dev->dev, "Unsupported pixformat: 0x%x, defaulting to YUYV\n", 322 366 pixformat); 323 367 break; 324 368 } ··· 326 370 return CSI_INPUT_SEQ_YUYV; 327 371 } 328 372 329 - static void sun6i_csi_setup_bus(struct sun6i_csi_dev *sdev) 373 + static void sun6i_csi_setup_bus(struct sun6i_csi_device *csi_dev) 330 374 { 331 - struct v4l2_fwnode_endpoint *endpoint = &sdev->csi.v4l2_ep; 332 - struct sun6i_csi *csi = &sdev->csi; 375 + struct v4l2_fwnode_endpoint *endpoint = &csi_dev->v4l2.v4l2_ep; 376 + struct sun6i_csi_config *config = &csi_dev->config; 333 377 
unsigned char bus_width; 334 378 u32 flags; 335 379 u32 cfg; 336 380 bool input_interlaced = false; 337 381 338 - if (csi->config.field == V4L2_FIELD_INTERLACED 339 - || csi->config.field == V4L2_FIELD_INTERLACED_TB 340 - || csi->config.field == V4L2_FIELD_INTERLACED_BT) 382 + if (config->field == V4L2_FIELD_INTERLACED 383 + || config->field == V4L2_FIELD_INTERLACED_TB 384 + || config->field == V4L2_FIELD_INTERLACED_BT) 341 385 input_interlaced = true; 342 386 343 387 bus_width = endpoint->bus.parallel.bus_width; 344 388 345 - regmap_read(sdev->regmap, CSI_IF_CFG_REG, &cfg); 389 + regmap_read(csi_dev->regmap, CSI_IF_CFG_REG, &cfg); 346 390 347 391 cfg &= ~(CSI_IF_CFG_CSI_IF_MASK | CSI_IF_CFG_MIPI_IF_MASK | 348 392 CSI_IF_CFG_IF_DATA_WIDTH_MASK | ··· 390 434 cfg |= CSI_IF_CFG_CLK_POL_FALLING_EDGE; 391 435 break; 392 436 default: 393 - dev_warn(sdev->dev, "Unsupported bus type: %d\n", 437 + dev_warn(csi_dev->dev, "Unsupported bus type: %d\n", 394 438 endpoint->bus_type); 395 439 break; 396 440 } ··· 408 452 case 16: /* No need to configure DATA_WIDTH for 16bit */ 409 453 break; 410 454 default: 411 - dev_warn(sdev->dev, "Unsupported bus width: %u\n", bus_width); 455 + dev_warn(csi_dev->dev, "Unsupported bus width: %u\n", bus_width); 412 456 break; 413 457 } 414 458 415 - regmap_write(sdev->regmap, CSI_IF_CFG_REG, cfg); 459 + regmap_write(csi_dev->regmap, CSI_IF_CFG_REG, cfg); 416 460 } 417 461 418 - static void sun6i_csi_set_format(struct sun6i_csi_dev *sdev) 462 + static void sun6i_csi_set_format(struct sun6i_csi_device *csi_dev) 419 463 { 420 - struct sun6i_csi *csi = &sdev->csi; 464 + struct sun6i_csi_config *config = &csi_dev->config; 421 465 u32 cfg; 422 466 u32 val; 423 467 424 - regmap_read(sdev->regmap, CSI_CH_CFG_REG, &cfg); 468 + regmap_read(csi_dev->regmap, CSI_CH_CFG_REG, &cfg); 425 469 426 470 cfg &= ~(CSI_CH_CFG_INPUT_FMT_MASK | 427 471 CSI_CH_CFG_OUTPUT_FMT_MASK | CSI_CH_CFG_VFLIP_EN | 428 472 CSI_CH_CFG_HFLIP_EN | CSI_CH_CFG_FIELD_SEL_MASK | 429 473 
CSI_CH_CFG_INPUT_SEQ_MASK); 430 474 431 - val = get_csi_input_format(sdev, csi->config.code, 432 - csi->config.pixelformat); 475 + val = get_csi_input_format(csi_dev, config->code, 476 + config->pixelformat); 433 477 cfg |= CSI_CH_CFG_INPUT_FMT(val); 434 478 435 - val = get_csi_output_format(sdev, csi->config.pixelformat, 436 - csi->config.field); 479 + val = get_csi_output_format(csi_dev, config->pixelformat, 480 + config->field); 437 481 cfg |= CSI_CH_CFG_OUTPUT_FMT(val); 438 482 439 - val = get_csi_input_seq(sdev, csi->config.code, 440 - csi->config.pixelformat); 483 + val = get_csi_input_seq(csi_dev, config->code, 484 + config->pixelformat); 441 485 cfg |= CSI_CH_CFG_INPUT_SEQ(val); 442 486 443 - if (csi->config.field == V4L2_FIELD_TOP) 487 + if (config->field == V4L2_FIELD_TOP) 444 488 cfg |= CSI_CH_CFG_FIELD_SEL_FIELD0; 445 - else if (csi->config.field == V4L2_FIELD_BOTTOM) 489 + else if (config->field == V4L2_FIELD_BOTTOM) 446 490 cfg |= CSI_CH_CFG_FIELD_SEL_FIELD1; 447 491 else 448 492 cfg |= CSI_CH_CFG_FIELD_SEL_BOTH; 449 493 450 - regmap_write(sdev->regmap, CSI_CH_CFG_REG, cfg); 494 + regmap_write(csi_dev->regmap, CSI_CH_CFG_REG, cfg); 451 495 } 452 496 453 - static void sun6i_csi_set_window(struct sun6i_csi_dev *sdev) 497 + static void sun6i_csi_set_window(struct sun6i_csi_device *csi_dev) 454 498 { 455 - struct sun6i_csi_config *config = &sdev->csi.config; 499 + struct sun6i_csi_config *config = &csi_dev->config; 456 500 u32 bytesperline_y; 457 501 u32 bytesperline_c; 458 - int *planar_offset = sdev->planar_offset; 502 + int *planar_offset = csi_dev->planar_offset; 459 503 u32 width = config->width; 460 504 u32 height = config->height; 461 505 u32 hor_len = width; ··· 465 509 case V4L2_PIX_FMT_YVYU: 466 510 case V4L2_PIX_FMT_UYVY: 467 511 case V4L2_PIX_FMT_VYUY: 468 - dev_dbg(sdev->dev, 512 + dev_dbg(csi_dev->dev, 469 513 "Horizontal length should be 2 times of width for packed YUV formats!\n"); 470 514 hor_len = width * 2; 471 515 break; ··· 473 517 
break; 474 518 } 475 519 476 - regmap_write(sdev->regmap, CSI_CH_HSIZE_REG, 520 + regmap_write(csi_dev->regmap, CSI_CH_HSIZE_REG, 477 521 CSI_CH_HSIZE_HOR_LEN(hor_len) | 478 522 CSI_CH_HSIZE_HOR_START(0)); 479 - regmap_write(sdev->regmap, CSI_CH_VSIZE_REG, 523 + regmap_write(csi_dev->regmap, CSI_CH_VSIZE_REG, 480 524 CSI_CH_VSIZE_VER_LEN(height) | 481 525 CSI_CH_VSIZE_VER_START(0)); 482 526 ··· 508 552 bytesperline_c * height; 509 553 break; 510 554 default: /* raw */ 511 - dev_dbg(sdev->dev, 555 + dev_dbg(csi_dev->dev, 512 556 "Calculating pixelformat(0x%x)'s bytesperline as a packed format\n", 513 557 config->pixelformat); 514 558 bytesperline_y = (sun6i_csi_get_bpp(config->pixelformat) * ··· 519 563 break; 520 564 } 521 565 522 - regmap_write(sdev->regmap, CSI_CH_BUF_LEN_REG, 566 + regmap_write(csi_dev->regmap, CSI_CH_BUF_LEN_REG, 523 567 CSI_CH_BUF_LEN_BUF_LEN_C(bytesperline_c) | 524 568 CSI_CH_BUF_LEN_BUF_LEN_Y(bytesperline_y)); 525 569 } 526 570 527 - int sun6i_csi_update_config(struct sun6i_csi *csi, 571 + int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev, 528 572 struct sun6i_csi_config *config) 529 573 { 530 - struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi); 531 - 532 574 if (!config) 533 575 return -EINVAL; 534 576 535 - memcpy(&csi->config, config, sizeof(csi->config)); 577 + memcpy(&csi_dev->config, config, sizeof(csi_dev->config)); 536 578 537 - sun6i_csi_setup_bus(sdev); 538 - sun6i_csi_set_format(sdev); 539 - sun6i_csi_set_window(sdev); 579 + sun6i_csi_setup_bus(csi_dev); 580 + sun6i_csi_set_format(csi_dev); 581 + sun6i_csi_set_window(csi_dev); 540 582 541 583 return 0; 542 584 } 543 585 544 - void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr) 586 + void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev, 587 + dma_addr_t addr) 545 588 { 546 - struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi); 547 - 548 - regmap_write(sdev->regmap, CSI_CH_F0_BUFA_REG, 549 - (addr + sdev->planar_offset[0]) >> 2); 550 - if 
(sdev->planar_offset[1] != -1) 551 - regmap_write(sdev->regmap, CSI_CH_F1_BUFA_REG, 552 - (addr + sdev->planar_offset[1]) >> 2); 553 - if (sdev->planar_offset[2] != -1) 554 - regmap_write(sdev->regmap, CSI_CH_F2_BUFA_REG, 555 - (addr + sdev->planar_offset[2]) >> 2); 589 + regmap_write(csi_dev->regmap, CSI_CH_F0_BUFA_REG, 590 + (addr + csi_dev->planar_offset[0]) >> 2); 591 + if (csi_dev->planar_offset[1] != -1) 592 + regmap_write(csi_dev->regmap, CSI_CH_F1_BUFA_REG, 593 + (addr + csi_dev->planar_offset[1]) >> 2); 594 + if (csi_dev->planar_offset[2] != -1) 595 + regmap_write(csi_dev->regmap, CSI_CH_F2_BUFA_REG, 596 + (addr + csi_dev->planar_offset[2]) >> 2); 556 597 } 557 598 558 - void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable) 599 + void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable) 559 600 { 560 - struct sun6i_csi_dev *sdev = sun6i_csi_to_dev(csi); 561 - struct regmap *regmap = sdev->regmap; 601 + struct regmap *regmap = csi_dev->regmap; 562 602 563 603 if (!enable) { 564 604 regmap_update_bits(regmap, CSI_CAP_REG, CSI_CAP_CH0_VCAP_ON, 0); ··· 575 623 CSI_CAP_CH0_VCAP_ON); 576 624 } 577 625 578 - /* ----------------------------------------------------------------------------- 579 - * Media Controller and V4L2 580 - */ 581 - static int sun6i_csi_link_entity(struct sun6i_csi *csi, 626 + /* Media */ 627 + 628 + static const struct media_device_ops sun6i_csi_media_ops = { 629 + .link_notify = v4l2_pipeline_link_notify, 630 + }; 631 + 632 + /* V4L2 */ 633 + 634 + static int sun6i_csi_link_entity(struct sun6i_csi_device *csi_dev, 582 635 struct media_entity *entity, 583 636 struct fwnode_handle *fwnode) 584 637 { ··· 594 637 595 638 ret = media_entity_get_fwnode_pad(entity, fwnode, MEDIA_PAD_FL_SOURCE); 596 639 if (ret < 0) { 597 - dev_err(csi->dev, "%s: no source pad in external entity %s\n", 598 - __func__, entity->name); 640 + dev_err(csi_dev->dev, 641 + "%s: no source pad in external entity %s\n", __func__, 642 + entity->name); 
599 643 return -EINVAL; 600 644 } 601 645 602 646 src_pad_index = ret; 603 647 604 - sink = &csi->video.vdev.entity; 605 - sink_pad = &csi->video.pad; 648 + sink = &csi_dev->video.video_dev.entity; 649 + sink_pad = &csi_dev->video.pad; 606 650 607 - dev_dbg(csi->dev, "creating %s:%u -> %s:%u link\n", 651 + dev_dbg(csi_dev->dev, "creating %s:%u -> %s:%u link\n", 608 652 entity->name, src_pad_index, sink->name, sink_pad->index); 609 653 ret = media_create_pad_link(entity, src_pad_index, sink, 610 654 sink_pad->index, 611 655 MEDIA_LNK_FL_ENABLED | 612 656 MEDIA_LNK_FL_IMMUTABLE); 613 657 if (ret < 0) { 614 - dev_err(csi->dev, "failed to create %s:%u -> %s:%u link\n", 658 + dev_err(csi_dev->dev, "failed to create %s:%u -> %s:%u link\n", 615 659 entity->name, src_pad_index, 616 660 sink->name, sink_pad->index); 617 661 return ret; ··· 623 665 624 666 static int sun6i_subdev_notify_complete(struct v4l2_async_notifier *notifier) 625 667 { 626 - struct sun6i_csi *csi = container_of(notifier, struct sun6i_csi, 627 - notifier); 628 - struct v4l2_device *v4l2_dev = &csi->v4l2_dev; 668 + struct sun6i_csi_device *csi_dev = 669 + container_of(notifier, struct sun6i_csi_device, 670 + v4l2.notifier); 671 + struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2; 672 + struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev; 629 673 struct v4l2_subdev *sd; 630 674 int ret; 631 675 632 - dev_dbg(csi->dev, "notify complete, all subdevs registered\n"); 676 + dev_dbg(csi_dev->dev, "notify complete, all subdevs registered\n"); 633 677 634 678 sd = list_first_entry(&v4l2_dev->subdevs, struct v4l2_subdev, list); 635 679 if (!sd) 636 680 return -EINVAL; 637 681 638 - ret = sun6i_csi_link_entity(csi, &sd->entity, sd->fwnode); 682 + ret = sun6i_csi_link_entity(csi_dev, &sd->entity, sd->fwnode); 639 683 if (ret < 0) 640 684 return ret; 641 685 642 - ret = v4l2_device_register_subdev_nodes(&csi->v4l2_dev); 686 + ret = v4l2_device_register_subdev_nodes(v4l2_dev); 643 687 if (ret < 0) 644 688 return ret; 645 689 
646 - return media_device_register(&csi->media_dev); 690 + return 0; 647 691 } 648 692 649 693 static const struct v4l2_async_notifier_operations sun6i_csi_async_ops = { ··· 656 696 struct v4l2_fwnode_endpoint *vep, 657 697 struct v4l2_async_subdev *asd) 658 698 { 659 - struct sun6i_csi *csi = dev_get_drvdata(dev); 699 + struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev); 660 700 661 701 if (vep->base.port || vep->base.id) { 662 702 dev_warn(dev, "Only support a single port with one endpoint\n"); ··· 666 706 switch (vep->bus_type) { 667 707 case V4L2_MBUS_PARALLEL: 668 708 case V4L2_MBUS_BT656: 669 - csi->v4l2_ep = *vep; 709 + csi_dev->v4l2.v4l2_ep = *vep; 670 710 return 0; 671 711 default: 672 712 dev_err(dev, "Unsupported media bus type\n"); ··· 674 714 } 675 715 } 676 716 677 - static void sun6i_csi_v4l2_cleanup(struct sun6i_csi *csi) 717 + static int sun6i_csi_v4l2_setup(struct sun6i_csi_device *csi_dev) 678 718 { 679 - media_device_unregister(&csi->media_dev); 680 - v4l2_async_nf_unregister(&csi->notifier); 681 - v4l2_async_nf_cleanup(&csi->notifier); 682 - sun6i_video_cleanup(&csi->video); 683 - v4l2_device_unregister(&csi->v4l2_dev); 684 - v4l2_ctrl_handler_free(&csi->ctrl_handler); 685 - media_device_cleanup(&csi->media_dev); 686 - } 687 - 688 - static int sun6i_csi_v4l2_init(struct sun6i_csi *csi) 689 - { 719 + struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2; 720 + struct media_device *media_dev = &v4l2->media_dev; 721 + struct v4l2_device *v4l2_dev = &v4l2->v4l2_dev; 722 + struct v4l2_async_notifier *notifier = &v4l2->notifier; 723 + struct device *dev = csi_dev->dev; 690 724 int ret; 691 725 692 - csi->media_dev.dev = csi->dev; 693 - strscpy(csi->media_dev.model, "Allwinner Video Capture Device", 694 - sizeof(csi->media_dev.model)); 695 - csi->media_dev.hw_revision = 0; 726 + /* Media Device */ 696 727 697 - media_device_init(&csi->media_dev); 698 - v4l2_async_nf_init(&csi->notifier); 728 + strscpy(media_dev->model, SUN6I_CSI_DESCRIPTION, 729 + 
sizeof(media_dev->model)); 730 + media_dev->hw_revision = 0; 731 + media_dev->ops = &sun6i_csi_media_ops; 732 + media_dev->dev = dev; 699 733 700 - ret = v4l2_ctrl_handler_init(&csi->ctrl_handler, 0); 734 + media_device_init(media_dev); 735 + 736 + ret = media_device_register(media_dev); 701 737 if (ret) { 702 - dev_err(csi->dev, "V4L2 controls handler init failed (%d)\n", 703 - ret); 704 - goto clean_media; 738 + dev_err(dev, "failed to register media device: %d\n", ret); 739 + goto error_media; 705 740 } 706 741 707 - csi->v4l2_dev.mdev = &csi->media_dev; 708 - csi->v4l2_dev.ctrl_handler = &csi->ctrl_handler; 709 - ret = v4l2_device_register(csi->dev, &csi->v4l2_dev); 742 + /* V4L2 Device */ 743 + 744 + v4l2_dev->mdev = media_dev; 745 + 746 + ret = v4l2_device_register(dev, v4l2_dev); 710 747 if (ret) { 711 - dev_err(csi->dev, "V4L2 device registration failed (%d)\n", 712 - ret); 713 - goto free_ctrl; 748 + dev_err(dev, "failed to register v4l2 device: %d\n", ret); 749 + goto error_media; 714 750 } 715 751 716 - ret = sun6i_video_init(&csi->video, csi, "sun6i-csi"); 752 + /* Video */ 753 + 754 + ret = sun6i_video_setup(csi_dev); 717 755 if (ret) 718 - goto unreg_v4l2; 756 + goto error_v4l2_device; 719 757 720 - ret = v4l2_async_nf_parse_fwnode_endpoints(csi->dev, 721 - &csi->notifier, 758 + /* V4L2 Async */ 759 + 760 + v4l2_async_nf_init(notifier); 761 + notifier->ops = &sun6i_csi_async_ops; 762 + 763 + ret = v4l2_async_nf_parse_fwnode_endpoints(dev, notifier, 722 764 sizeof(struct 723 765 v4l2_async_subdev), 724 766 sun6i_csi_fwnode_parse); 725 767 if (ret) 726 - goto clean_video; 768 + goto error_video; 727 769 728 - csi->notifier.ops = &sun6i_csi_async_ops; 729 - 730 - ret = v4l2_async_nf_register(&csi->v4l2_dev, &csi->notifier); 770 + ret = v4l2_async_nf_register(v4l2_dev, notifier); 731 771 if (ret) { 732 - dev_err(csi->dev, "notifier registration failed\n"); 733 - goto clean_video; 772 + dev_err(dev, "failed to register v4l2 async notifier: %d\n", 773 + 
ret); 774 + goto error_v4l2_async_notifier; 734 775 } 735 776 736 777 return 0; 737 778 738 - clean_video: 739 - sun6i_video_cleanup(&csi->video); 740 - unreg_v4l2: 741 - v4l2_device_unregister(&csi->v4l2_dev); 742 - free_ctrl: 743 - v4l2_ctrl_handler_free(&csi->ctrl_handler); 744 - clean_media: 745 - v4l2_async_nf_cleanup(&csi->notifier); 746 - media_device_cleanup(&csi->media_dev); 779 + error_v4l2_async_notifier: 780 + v4l2_async_nf_cleanup(notifier); 781 + 782 + error_video: 783 + sun6i_video_cleanup(csi_dev); 784 + 785 + error_v4l2_device: 786 + v4l2_device_unregister(&v4l2->v4l2_dev); 787 + 788 + error_media: 789 + media_device_unregister(media_dev); 790 + media_device_cleanup(media_dev); 747 791 748 792 return ret; 749 793 } 750 794 751 - /* ----------------------------------------------------------------------------- 752 - * Resources and IRQ 753 - */ 754 - static irqreturn_t sun6i_csi_isr(int irq, void *dev_id) 795 + static void sun6i_csi_v4l2_cleanup(struct sun6i_csi_device *csi_dev) 755 796 { 756 - struct sun6i_csi_dev *sdev = (struct sun6i_csi_dev *)dev_id; 757 - struct regmap *regmap = sdev->regmap; 797 + struct sun6i_csi_v4l2 *v4l2 = &csi_dev->v4l2; 798 + 799 + media_device_unregister(&v4l2->media_dev); 800 + v4l2_async_nf_unregister(&v4l2->notifier); 801 + v4l2_async_nf_cleanup(&v4l2->notifier); 802 + sun6i_video_cleanup(csi_dev); 803 + v4l2_device_unregister(&v4l2->v4l2_dev); 804 + media_device_cleanup(&v4l2->media_dev); 805 + } 806 + 807 + /* Platform */ 808 + 809 + static irqreturn_t sun6i_csi_interrupt(int irq, void *private) 810 + { 811 + struct sun6i_csi_device *csi_dev = private; 812 + struct regmap *regmap = csi_dev->regmap; 758 813 u32 status; 759 814 760 815 regmap_read(regmap, CSI_CH_INT_STA_REG, &status); ··· 789 814 } 790 815 791 816 if (status & CSI_CH_INT_STA_FD_PD) 792 - sun6i_video_frame_done(&sdev->csi.video); 817 + sun6i_video_frame_done(csi_dev); 793 818 794 819 regmap_write(regmap, CSI_CH_INT_STA_REG, status); 795 820 796 821 
return IRQ_HANDLED; 797 822 } 823 + 824 + static int sun6i_csi_suspend(struct device *dev) 825 + { 826 + struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev); 827 + 828 + reset_control_assert(csi_dev->reset); 829 + clk_disable_unprepare(csi_dev->clock_ram); 830 + clk_disable_unprepare(csi_dev->clock_mod); 831 + 832 + return 0; 833 + } 834 + 835 + static int sun6i_csi_resume(struct device *dev) 836 + { 837 + struct sun6i_csi_device *csi_dev = dev_get_drvdata(dev); 838 + int ret; 839 + 840 + ret = reset_control_deassert(csi_dev->reset); 841 + if (ret) { 842 + dev_err(dev, "failed to deassert reset\n"); 843 + return ret; 844 + } 845 + 846 + ret = clk_prepare_enable(csi_dev->clock_mod); 847 + if (ret) { 848 + dev_err(dev, "failed to enable module clock\n"); 849 + goto error_reset; 850 + } 851 + 852 + ret = clk_prepare_enable(csi_dev->clock_ram); 853 + if (ret) { 854 + dev_err(dev, "failed to enable ram clock\n"); 855 + goto error_clock_mod; 856 + } 857 + 858 + return 0; 859 + 860 + error_clock_mod: 861 + clk_disable_unprepare(csi_dev->clock_mod); 862 + 863 + error_reset: 864 + reset_control_assert(csi_dev->reset); 865 + 866 + return ret; 867 + } 868 + 869 + static const struct dev_pm_ops sun6i_csi_pm_ops = { 870 + .runtime_suspend = sun6i_csi_suspend, 871 + .runtime_resume = sun6i_csi_resume, 872 + }; 798 873 799 874 static const struct regmap_config sun6i_csi_regmap_config = { 800 875 .reg_bits = 32, ··· 853 828 .max_register = 0x9c, 854 829 }; 855 830 856 - static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev, 857 - struct platform_device *pdev) 831 + static int sun6i_csi_resources_setup(struct sun6i_csi_device *csi_dev, 832 + struct platform_device *platform_dev) 858 833 { 834 + struct device *dev = csi_dev->dev; 835 + const struct sun6i_csi_variant *variant; 859 836 void __iomem *io_base; 860 837 int ret; 861 838 int irq; 862 839 863 - io_base = devm_platform_ioremap_resource(pdev, 0); 840 + variant = of_device_get_match_data(dev); 841 + if 
(!variant) 842 + return -EINVAL; 843 + 844 + /* Registers */ 845 + 846 + io_base = devm_platform_ioremap_resource(platform_dev, 0); 864 847 if (IS_ERR(io_base)) 865 848 return PTR_ERR(io_base); 866 849 867 - sdev->regmap = devm_regmap_init_mmio_clk(&pdev->dev, "bus", io_base, 868 - &sun6i_csi_regmap_config); 869 - if (IS_ERR(sdev->regmap)) { 870 - dev_err(&pdev->dev, "Failed to init register map\n"); 871 - return PTR_ERR(sdev->regmap); 850 + csi_dev->regmap = devm_regmap_init_mmio_clk(dev, "bus", io_base, 851 + &sun6i_csi_regmap_config); 852 + if (IS_ERR(csi_dev->regmap)) { 853 + dev_err(dev, "failed to init register map\n"); 854 + return PTR_ERR(csi_dev->regmap); 872 855 } 873 856 874 - sdev->clk_mod = devm_clk_get(&pdev->dev, "mod"); 875 - if (IS_ERR(sdev->clk_mod)) { 876 - dev_err(&pdev->dev, "Unable to acquire csi clock\n"); 877 - return PTR_ERR(sdev->clk_mod); 857 + /* Clocks */ 858 + 859 + csi_dev->clock_mod = devm_clk_get(dev, "mod"); 860 + if (IS_ERR(csi_dev->clock_mod)) { 861 + dev_err(dev, "failed to acquire module clock\n"); 862 + return PTR_ERR(csi_dev->clock_mod); 878 863 } 879 864 880 - sdev->clk_ram = devm_clk_get(&pdev->dev, "ram"); 881 - if (IS_ERR(sdev->clk_ram)) { 882 - dev_err(&pdev->dev, "Unable to acquire dram-csi clock\n"); 883 - return PTR_ERR(sdev->clk_ram); 865 + csi_dev->clock_ram = devm_clk_get(dev, "ram"); 866 + if (IS_ERR(csi_dev->clock_ram)) { 867 + dev_err(dev, "failed to acquire ram clock\n"); 868 + return PTR_ERR(csi_dev->clock_ram); 884 869 } 885 870 886 - sdev->rstc_bus = devm_reset_control_get_shared(&pdev->dev, NULL); 887 - if (IS_ERR(sdev->rstc_bus)) { 888 - dev_err(&pdev->dev, "Cannot get reset controller\n"); 889 - return PTR_ERR(sdev->rstc_bus); 890 - } 891 - 892 - irq = platform_get_irq(pdev, 0); 893 - if (irq < 0) 894 - return -ENXIO; 895 - 896 - ret = devm_request_irq(&pdev->dev, irq, sun6i_csi_isr, 0, MODULE_NAME, 897 - sdev); 871 + ret = clk_set_rate_exclusive(csi_dev->clock_mod, 872 + variant->clock_mod_rate); 898 873 
if (ret) { 899 - dev_err(&pdev->dev, "Cannot request csi IRQ\n"); 874 + dev_err(dev, "failed to set mod clock rate\n"); 900 875 return ret; 901 876 } 902 877 878 + /* Reset */ 879 + 880 + csi_dev->reset = devm_reset_control_get_shared(dev, NULL); 881 + if (IS_ERR(csi_dev->reset)) { 882 + dev_err(dev, "failed to acquire reset\n"); 883 + ret = PTR_ERR(csi_dev->reset); 884 + goto error_clock_rate_exclusive; 885 + } 886 + 887 + /* Interrupt */ 888 + 889 + irq = platform_get_irq(platform_dev, 0); 890 + if (irq < 0) { 891 + dev_err(dev, "failed to get interrupt\n"); 892 + ret = -ENXIO; 893 + goto error_clock_rate_exclusive; 894 + } 895 + 896 + ret = devm_request_irq(dev, irq, sun6i_csi_interrupt, 0, SUN6I_CSI_NAME, 897 + csi_dev); 898 + if (ret) { 899 + dev_err(dev, "failed to request interrupt\n"); 900 + goto error_clock_rate_exclusive; 901 + } 902 + 903 + /* Runtime PM */ 904 + 905 + pm_runtime_enable(dev); 906 + 903 907 return 0; 908 + 909 + error_clock_rate_exclusive: 910 + clk_rate_exclusive_put(csi_dev->clock_mod); 911 + 912 + return ret; 904 913 } 905 914 906 - static int sun6i_csi_probe(struct platform_device *pdev) 915 + static void sun6i_csi_resources_cleanup(struct sun6i_csi_device *csi_dev) 907 916 { 908 - struct sun6i_csi_dev *sdev; 917 + pm_runtime_disable(csi_dev->dev); 918 + clk_rate_exclusive_put(csi_dev->clock_mod); 919 + } 920 + 921 + static int sun6i_csi_probe(struct platform_device *platform_dev) 922 + { 923 + struct sun6i_csi_device *csi_dev; 924 + struct device *dev = &platform_dev->dev; 909 925 int ret; 910 926 911 - sdev = devm_kzalloc(&pdev->dev, sizeof(*sdev), GFP_KERNEL); 912 - if (!sdev) 927 + csi_dev = devm_kzalloc(dev, sizeof(*csi_dev), GFP_KERNEL); 928 + if (!csi_dev) 913 929 return -ENOMEM; 914 930 915 - sdev->dev = &pdev->dev; 931 + csi_dev->dev = &platform_dev->dev; 932 + platform_set_drvdata(platform_dev, csi_dev); 916 933 917 - ret = sun6i_csi_resource_request(sdev, pdev); 934 + ret = sun6i_csi_resources_setup(csi_dev, platform_dev); 
918 935 if (ret) 919 936 return ret; 920 937 921 - platform_set_drvdata(pdev, sdev); 938 + ret = sun6i_csi_v4l2_setup(csi_dev); 939 + if (ret) 940 + goto error_resources; 922 941 923 - sdev->csi.dev = &pdev->dev; 924 - return sun6i_csi_v4l2_init(&sdev->csi); 942 + return 0; 943 + 944 + error_resources: 945 + sun6i_csi_resources_cleanup(csi_dev); 946 + 947 + return ret; 925 948 } 926 949 927 950 static int sun6i_csi_remove(struct platform_device *pdev) 928 951 { 929 - struct sun6i_csi_dev *sdev = platform_get_drvdata(pdev); 952 + struct sun6i_csi_device *csi_dev = platform_get_drvdata(pdev); 930 953 931 - sun6i_csi_v4l2_cleanup(&sdev->csi); 954 + sun6i_csi_v4l2_cleanup(csi_dev); 955 + sun6i_csi_resources_cleanup(csi_dev); 932 956 933 957 return 0; 934 958 } 935 959 960 + static const struct sun6i_csi_variant sun6i_a31_csi_variant = { 961 + .clock_mod_rate = 297000000, 962 + }; 963 + 964 + static const struct sun6i_csi_variant sun50i_a64_csi_variant = { 965 + .clock_mod_rate = 300000000, 966 + }; 967 + 936 968 static const struct of_device_id sun6i_csi_of_match[] = { 937 - { .compatible = "allwinner,sun6i-a31-csi", }, 938 - { .compatible = "allwinner,sun8i-a83t-csi", }, 939 - { .compatible = "allwinner,sun8i-h3-csi", }, 940 - { .compatible = "allwinner,sun8i-v3s-csi", }, 941 - { .compatible = "allwinner,sun50i-a64-csi", }, 969 + { 970 + .compatible = "allwinner,sun6i-a31-csi", 971 + .data = &sun6i_a31_csi_variant, 972 + }, 973 + { 974 + .compatible = "allwinner,sun8i-a83t-csi", 975 + .data = &sun6i_a31_csi_variant, 976 + }, 977 + { 978 + .compatible = "allwinner,sun8i-h3-csi", 979 + .data = &sun6i_a31_csi_variant, 980 + }, 981 + { 982 + .compatible = "allwinner,sun8i-v3s-csi", 983 + .data = &sun6i_a31_csi_variant, 984 + }, 985 + { 986 + .compatible = "allwinner,sun50i-a64-csi", 987 + .data = &sun50i_a64_csi_variant, 988 + }, 942 989 {}, 943 990 }; 991 + 944 992 MODULE_DEVICE_TABLE(of, sun6i_csi_of_match); 945 993 946 994 static struct platform_driver 
sun6i_csi_platform_driver = { 947 - .probe = sun6i_csi_probe, 948 - .remove = sun6i_csi_remove, 949 - .driver = { 950 - .name = MODULE_NAME, 951 - .of_match_table = of_match_ptr(sun6i_csi_of_match), 995 + .probe = sun6i_csi_probe, 996 + .remove = sun6i_csi_remove, 997 + .driver = { 998 + .name = SUN6I_CSI_NAME, 999 + .of_match_table = of_match_ptr(sun6i_csi_of_match), 1000 + .pm = &sun6i_csi_pm_ops, 952 1001 }, 953 1002 }; 1003 + 954 1004 module_platform_driver(sun6i_csi_platform_driver); 955 1005 956 - MODULE_DESCRIPTION("Allwinner V3s Camera Sensor Interface driver"); 1006 + MODULE_DESCRIPTION("Allwinner A31 Camera Sensor Interface driver"); 957 1007 MODULE_AUTHOR("Yong Deng <yong.deng@magewell.com>"); 958 1008 MODULE_LICENSE("GPL");
+46 -18
drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.h
···
8 8 #ifndef __SUN6I_CSI_H__
9 9 #define __SUN6I_CSI_H__
10 10
11 - #include <media/v4l2-ctrls.h>
12 11 #include <media/v4l2-device.h>
13 12 #include <media/v4l2-fwnode.h>
13 + #include <media/videobuf2-v4l2.h>
14 14
15 15 #include "sun6i_video.h"
16 16
17 - struct sun6i_csi;
17 + #define SUN6I_CSI_NAME "sun6i-csi"
18 + #define SUN6I_CSI_DESCRIPTION "Allwinner A31 CSI Device"
19 +
20 + struct sun6i_csi_buffer {
21 +	struct vb2_v4l2_buffer v4l2_buffer;
22 +	struct list_head list;
23 +
24 +	dma_addr_t dma_addr;
25 +	bool queued_to_csi;
26 + };
18 27
19 28 /**
20 29  * struct sun6i_csi_config - configs for sun6i csi
···
41 32	u32 height;
42 33 };
43 34
44 - struct sun6i_csi {
45 -	struct device *dev;
46 -	struct v4l2_ctrl_handler ctrl_handler;
35 + struct sun6i_csi_v4l2 {
47 36	struct v4l2_device v4l2_dev;
48 37	struct media_device media_dev;
49 38
50 39	struct v4l2_async_notifier notifier;
51 -
52 40	/* video port settings */
53 41	struct v4l2_fwnode_endpoint v4l2_ep;
42 + };
43 +
44 + struct sun6i_csi_device {
45 +	struct device *dev;
54 46
55 47	struct sun6i_csi_config config;
56 -
48 +	struct sun6i_csi_v4l2 v4l2;
57 49	struct sun6i_video video;
50 +
51 +	struct regmap *regmap;
52 +	struct clk *clock_mod;
53 +	struct clk *clock_ram;
54 +	struct reset_control *reset;
55 +
56 +	int planar_offset[3];
57 + };
58 +
59 + struct sun6i_csi_variant {
60 +	unsigned long clock_mod_rate;
58 61 };
59 62
60 63 /**
61 64  * sun6i_csi_is_format_supported() - check if the format supported by csi
62 -  * @csi: pointer to the csi
65 +  * @csi_dev: pointer to the csi device
63 66  * @pixformat: v4l2 pixel format (V4L2_PIX_FMT_*)
64 67  * @mbus_code: media bus format code (MEDIA_BUS_FMT_*)
68 +  *
69 +  * Return: true if format is supported, false otherwise.
65 70  */
66 - bool sun6i_csi_is_format_supported(struct sun6i_csi *csi, u32 pixformat,
67 -				u32 mbus_code);
71 + bool sun6i_csi_is_format_supported(struct sun6i_csi_device *csi_dev,
72 +				u32 pixformat, u32 mbus_code);
68 73
69 74 /**
70 75  * sun6i_csi_set_power() - power on/off the csi
71 -  * @csi: pointer to the csi
76 +  * @csi_dev: pointer to the csi device
72 77  * @enable: on/off
78 +  *
79 +  * Return: 0 if successful, error code otherwise.
73 80  */
74 - int sun6i_csi_set_power(struct sun6i_csi *csi, bool enable);
81 + int sun6i_csi_set_power(struct sun6i_csi_device *csi_dev, bool enable);
75 82
76 83 /**
77 84  * sun6i_csi_update_config() - update the csi register settings
78 -  * @csi: pointer to the csi
85 +  * @csi_dev: pointer to the csi device
79 86  * @config: see struct sun6i_csi_config
87 +  *
88 +  * Return: 0 if successful, error code otherwise.
80 89  */
81 - int sun6i_csi_update_config(struct sun6i_csi *csi,
90 + int sun6i_csi_update_config(struct sun6i_csi_device *csi_dev,
82 91			struct sun6i_csi_config *config);
83 92
84 93 /**
85 94  * sun6i_csi_update_buf_addr() - update the csi frame buffer address
86 -  * @csi: pointer to the csi
95 +  * @csi_dev: pointer to the csi device
87 96  * @addr: frame buffer's physical address
88 97  */
89 - void sun6i_csi_update_buf_addr(struct sun6i_csi *csi, dma_addr_t addr);
98 + void sun6i_csi_update_buf_addr(struct sun6i_csi_device *csi_dev,
99 +			dma_addr_t addr);
90 100
91 101 /**
92 102  * sun6i_csi_set_stream() - start/stop csi streaming
93 -  * @csi: pointer to the csi
103 +  * @csi_dev: pointer to the csi device
94 104  * @enable: start/stop
95 105  */
96 - void sun6i_csi_set_stream(struct sun6i_csi *csi, bool enable);
106 + void sun6i_csi_set_stream(struct sun6i_csi_device *csi_dev, bool enable);
97 107
98 108 /* get bpp form v4l2 pixformat */
99 109 static inline int sun6i_csi_get_bpp(unsigned int pixformat)
+322 -274
drivers/media/platform/sunxi/sun6i-csi/sun6i_video.c
···
23 23 #define MAX_WIDTH (4800)
24 24 #define MAX_HEIGHT (4800)
25 25
26 - struct sun6i_csi_buffer {
27 -	struct vb2_v4l2_buffer vb;
28 -	struct list_head list;
26 + /* Helpers */
29 27
30 -	dma_addr_t dma_addr;
31 -	bool queued_to_csi;
32 - };
28 + static struct v4l2_subdev *
29 + sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
30 + {
31 +	struct media_pad *remote;
33 32
34 - static const u32 supported_pixformats[] = {
33 +	remote = media_pad_remote_pad_first(&video->pad);
34 +
35 +	if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
36 +		return NULL;
37 +
38 +	if (pad)
39 +		*pad = remote->index;
40 +
41 +	return media_entity_to_v4l2_subdev(remote->entity);
42 + }
43 +
44 + /* Format */
45 +
46 + static const u32 sun6i_video_formats[] = {
35 47	V4L2_PIX_FMT_SBGGR8,
36 48	V4L2_PIX_FMT_SGBRG8,
37 49	V4L2_PIX_FMT_SGRBG8,
···
73 61	V4L2_PIX_FMT_JPEG,
74 62 };
75 63
76 - static bool is_pixformat_valid(unsigned int pixformat)
64 + static bool sun6i_video_format_check(u32 format)
77 65 {
78 66	unsigned int i;
79 67
80 -	for (i = 0; i < ARRAY_SIZE(supported_pixformats); i++)
81 -		if (supported_pixformats[i] == pixformat)
68 +	for (i = 0; i < ARRAY_SIZE(sun6i_video_formats); i++)
69 +		if (sun6i_video_formats[i] == format)
82 70			return true;
83 71
84 72	return false;
85 73 }
86 74
87 - static struct v4l2_subdev *
88 - sun6i_video_remote_subdev(struct sun6i_video *video, u32 *pad)
75 + /* Video */
76 +
77 + static void sun6i_video_buffer_configure(struct sun6i_csi_device *csi_dev,
78 +				struct sun6i_csi_buffer *csi_buffer)
89 79 {
90 -	struct media_pad *remote;
91 -
92 -	remote = media_pad_remote_pad_first(&video->pad);
93 -
94 -	if (!remote || !is_media_entity_v4l2_subdev(remote->entity))
95 -		return NULL;
96 -
97 -	if (pad)
98 -		*pad = remote->index;
99 -
100 -	return media_entity_to_v4l2_subdev(remote->entity);
80 +	csi_buffer->queued_to_csi = true;
81 +	sun6i_csi_update_buf_addr(csi_dev, csi_buffer->dma_addr);
101 82 }
102 83
103 - static int sun6i_video_queue_setup(struct vb2_queue *vq,
104 -				unsigned int *nbuffers,
105 -				unsigned int *nplanes,
84 + static void sun6i_video_configure(struct sun6i_csi_device *csi_dev)
85 + {
86 +	struct sun6i_video *video = &csi_dev->video;
87 +	struct sun6i_csi_config config = { 0 };
88 +
89 +	config.pixelformat = video->format.fmt.pix.pixelformat;
90 +	config.code = video->mbus_code;
91 +	config.field = video->format.fmt.pix.field;
92 +	config.width = video->format.fmt.pix.width;
93 +	config.height = video->format.fmt.pix.height;
94 +
95 +	sun6i_csi_update_config(csi_dev, &config);
96 + }
97 +
98 + /* Queue */
99 +
100 + static int sun6i_video_queue_setup(struct vb2_queue *queue,
101 +				unsigned int *buffers_count,
102 +				unsigned int *planes_count,
106 103				unsigned int sizes[],
107 104				struct device *alloc_devs[])
108 105 {
109 -	struct sun6i_video *video = vb2_get_drv_priv(vq);
110 -	unsigned int size = video->fmt.fmt.pix.sizeimage;
106 +	struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
107 +	struct sun6i_video *video = &csi_dev->video;
108 +	unsigned int size = video->format.fmt.pix.sizeimage;
111 109
112 -	if (*nplanes)
110 +	if (*planes_count)
113 111		return sizes[0] < size ? -EINVAL : 0;
114 112
115 -	*nplanes = 1;
113 +	*planes_count = 1;
116 114	sizes[0] = size;
117 115
118 116	return 0;
119 117 }
120 118
121 - static int sun6i_video_buffer_prepare(struct vb2_buffer *vb)
119 + static int sun6i_video_buffer_prepare(struct vb2_buffer *buffer)
122 120 {
123 -	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
124 -	struct sun6i_csi_buffer *buf =
125 -		container_of(vbuf, struct sun6i_csi_buffer, vb);
126 -	struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
127 -	unsigned long size = video->fmt.fmt.pix.sizeimage;
121 +	struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
122 +	struct sun6i_video *video = &csi_dev->video;
123 +	struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
124 +	struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
125 +	struct sun6i_csi_buffer *csi_buffer =
126 +		container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
127 +	unsigned long size = video->format.fmt.pix.sizeimage;
128 128
129 -	if (vb2_plane_size(vb, 0) < size) {
130 -		v4l2_err(video->vdev.v4l2_dev, "buffer too small (%lu < %lu)\n",
131 -			 vb2_plane_size(vb, 0), size);
129 +	if (vb2_plane_size(buffer, 0) < size) {
130 +		v4l2_err(v4l2_dev, "buffer too small (%lu < %lu)\n",
131 +			 vb2_plane_size(buffer, 0), size);
132 132		return -EINVAL;
133 133	}
134 134
135 -	vb2_set_plane_payload(vb, 0, size);
135 +	vb2_set_plane_payload(buffer, 0, size);
136 136
137 -	buf->dma_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
138 -
139 -	vbuf->field = video->fmt.fmt.pix.field;
137 +	csi_buffer->dma_addr = vb2_dma_contig_plane_dma_addr(buffer, 0);
138 +	v4l2_buffer->field = video->format.fmt.pix.field;
140 139
141 140	return 0;
142 141 }
143 142
144 - static int sun6i_video_start_streaming(struct vb2_queue *vq, unsigned int count)
143 + static void sun6i_video_buffer_queue(struct vb2_buffer *buffer)
145 144 {
146 -	struct sun6i_video *video = vb2_get_drv_priv(vq);
145 +	struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(buffer->vb2_queue);
146 +	struct sun6i_video *video = &csi_dev->video;
147 +	struct vb2_v4l2_buffer *v4l2_buffer = to_vb2_v4l2_buffer(buffer);
148 +	struct sun6i_csi_buffer *csi_buffer =
149 +		container_of(v4l2_buffer, struct sun6i_csi_buffer, v4l2_buffer);
150 +	unsigned long flags;
151 +
152 +	spin_lock_irqsave(&video->dma_queue_lock, flags);
153 +	csi_buffer->queued_to_csi = false;
154 +	list_add_tail(&csi_buffer->list, &video->dma_queue);
155 +	spin_unlock_irqrestore(&video->dma_queue_lock, flags);
156 + }
157 +
158 + static int sun6i_video_start_streaming(struct vb2_queue *queue,
159 +				unsigned int count)
160 + {
161 +	struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
162 +	struct sun6i_video *video = &csi_dev->video;
163 +	struct video_device *video_dev = &video->video_dev;
147 164	struct sun6i_csi_buffer *buf;
148 165	struct sun6i_csi_buffer *next_buf;
149 -	struct sun6i_csi_config config;
150 166	struct v4l2_subdev *subdev;
151 167	unsigned long flags;
152 168	int ret;
153 169
154 170	video->sequence = 0;
155 171
156 -	ret = media_pipeline_start(&video->vdev.entity, &video->vdev.pipe);
172 +	ret = video_device_pipeline_alloc_start(video_dev);
157 173	if (ret < 0)
158 -		goto clear_dma_queue;
174 +		goto error_dma_queue_flush;
159 175
160 176	if (video->mbus_code == 0) {
161 177		ret = -EINVAL;
162 -		goto stop_media_pipeline;
178 +		goto error_media_pipeline;
163 179	}
164 180
165 181	subdev = sun6i_video_remote_subdev(video, NULL);
166 182	if (!subdev) {
167 183		ret = -EINVAL;
168 -		goto stop_media_pipeline;
184 +		goto error_media_pipeline;
169 185	}
170 186
171 -	config.pixelformat = video->fmt.fmt.pix.pixelformat;
172 -	config.code = video->mbus_code;
173 -	config.field = video->fmt.fmt.pix.field;
174 -	config.width = video->fmt.fmt.pix.width;
175 -	config.height = video->fmt.fmt.pix.height;
176 -
177 -	ret = sun6i_csi_update_config(video->csi, &config);
178 -	if (ret < 0)
179 -		goto stop_media_pipeline;
187 +	sun6i_video_configure(csi_dev);
180 188
181 189	spin_lock_irqsave(&video->dma_queue_lock, flags);
182 190
183 191	buf = list_first_entry(&video->dma_queue,
184 192			       struct sun6i_csi_buffer, list);
185 -	buf->queued_to_csi = true;
186 -	sun6i_csi_update_buf_addr(video->csi, buf->dma_addr);
193 +	sun6i_video_buffer_configure(csi_dev, buf);
187 194
188 -	sun6i_csi_set_stream(video->csi, true);
195 +	sun6i_csi_set_stream(csi_dev, true);
189 196
190 197	/*
191 198	 * CSI will lookup the next dma buffer for next frame before the
···
224 193	 * would also drop frame when lacking of queued buffer.
225 194	 */
226 195	next_buf = list_next_entry(buf, list);
227 -	next_buf->queued_to_csi = true;
228 -	sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
196 +	sun6i_video_buffer_configure(csi_dev, next_buf);
229 197
230 198	spin_unlock_irqrestore(&video->dma_queue_lock, flags);
231 199
232 200	ret = v4l2_subdev_call(subdev, video, s_stream, 1);
233 201	if (ret && ret != -ENOIOCTLCMD)
234 -		goto stop_csi_stream;
202 +		goto error_stream;
235 203
236 204	return 0;
237 205
238 - stop_csi_stream:
239 -	sun6i_csi_set_stream(video->csi, false);
240 - stop_media_pipeline:
241 -	media_pipeline_stop(&video->vdev.entity);
242 - clear_dma_queue:
206 + error_stream:
207 +	sun6i_csi_set_stream(csi_dev, false);
208 +
209 + error_media_pipeline:
210 +	video_device_pipeline_stop(video_dev);
211 +
212 + error_dma_queue_flush:
243 213	spin_lock_irqsave(&video->dma_queue_lock, flags);
244 214	list_for_each_entry(buf, &video->dma_queue, list)
245 -		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_QUEUED);
215 +		vb2_buffer_done(&buf->v4l2_buffer.vb2_buf,
216 +				VB2_BUF_STATE_QUEUED);
246 217	INIT_LIST_HEAD(&video->dma_queue);
247 218	spin_unlock_irqrestore(&video->dma_queue_lock, flags);
248 219
249 220	return ret;
250 221 }
251 222
252 - static void sun6i_video_stop_streaming(struct vb2_queue *vq)
223 + static void sun6i_video_stop_streaming(struct vb2_queue *queue)
253 224 {
254 -	struct sun6i_video *video = vb2_get_drv_priv(vq);
225 +	struct sun6i_csi_device *csi_dev = vb2_get_drv_priv(queue);
226 +	struct sun6i_video *video = &csi_dev->video;
255 227	struct v4l2_subdev *subdev;
256 228	unsigned long flags;
257 229	struct sun6i_csi_buffer *buf;
···
263 229	if (subdev)
264 230		v4l2_subdev_call(subdev, video, s_stream, 0);
265 231
266 -	sun6i_csi_set_stream(video->csi, false);
232 +	sun6i_csi_set_stream(csi_dev, false);
267 233
268 -	media_pipeline_stop(&video->vdev.entity);
234 +	video_device_pipeline_stop(&video->video_dev);
269 235
270 236	/* Release all active buffers */
271 237	spin_lock_irqsave(&video->dma_queue_lock, flags);
272 238	list_for_each_entry(buf, &video->dma_queue, list)
273 -		vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
239 +		vb2_buffer_done(&buf->v4l2_buffer.vb2_buf, VB2_BUF_STATE_ERROR);
274 240	INIT_LIST_HEAD(&video->dma_queue);
275 241	spin_unlock_irqrestore(&video->dma_queue_lock, flags);
276 242 }
277 243
278 - static void sun6i_video_buffer_queue(struct vb2_buffer *vb)
244 + void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev)
279 245 {
280 -	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
281 -	struct sun6i_csi_buffer *buf =
282 -		container_of(vbuf, struct sun6i_csi_buffer, vb);
283 -	struct sun6i_video *video = vb2_get_drv_priv(vb->vb2_queue);
284 -	unsigned long flags;
285 -
286 -	spin_lock_irqsave(&video->dma_queue_lock, flags);
287 -	buf->queued_to_csi = false;
288 -	list_add_tail(&buf->list, &video->dma_queue);
289 -	spin_unlock_irqrestore(&video->dma_queue_lock, flags);
290 - }
291 -
292 - void sun6i_video_frame_done(struct sun6i_video *video)
293 - {
246 +	struct sun6i_video *video = &csi_dev->video;
294 247	struct sun6i_csi_buffer *buf;
295 248	struct sun6i_csi_buffer *next_buf;
296 -	struct vb2_v4l2_buffer *vbuf;
249 +	struct vb2_v4l2_buffer *v4l2_buffer;
297 250
298 251	spin_lock(&video->dma_queue_lock);
299 252
300 253	buf = list_first_entry(&video->dma_queue,
301 254			       struct sun6i_csi_buffer, list);
302 255	if (list_is_last(&buf->list, &video->dma_queue)) {
303 -		dev_dbg(video->csi->dev, "Frame dropped!\n");
304 -		goto unlock;
256 +		dev_dbg(csi_dev->dev, "Frame dropped!\n");
257 +		goto complete;
305 258	}
306 259
307 260	next_buf = list_next_entry(buf, list);
···
298 277	 * for next ISR call.
299 278	 */
300 279	if (!next_buf->queued_to_csi) {
301 -		next_buf->queued_to_csi = true;
302 -		sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
303 -		dev_dbg(video->csi->dev, "Frame dropped!\n");
304 -		goto unlock;
280 +		sun6i_video_buffer_configure(csi_dev, next_buf);
281 +		dev_dbg(csi_dev->dev, "Frame dropped!\n");
282 +		goto complete;
305 283	}
306 284
307 285	list_del(&buf->list);
308 -	vbuf = &buf->vb;
309 -	vbuf->vb2_buf.timestamp = ktime_get_ns();
310 -	vbuf->sequence = video->sequence;
311 -	vb2_buffer_done(&vbuf->vb2_buf, VB2_BUF_STATE_DONE);
286 +	v4l2_buffer = &buf->v4l2_buffer;
287 +	v4l2_buffer->vb2_buf.timestamp = ktime_get_ns();
288 +	v4l2_buffer->sequence = video->sequence;
289 +	vb2_buffer_done(&v4l2_buffer->vb2_buf, VB2_BUF_STATE_DONE);
312 290
313 291	/* Prepare buffer for next frame but one. */
314 292	if (!list_is_last(&next_buf->list, &video->dma_queue)) {
315 293		next_buf = list_next_entry(next_buf, list);
316 -		next_buf->queued_to_csi = true;
317 -		sun6i_csi_update_buf_addr(video->csi, next_buf->dma_addr);
294 +		sun6i_video_buffer_configure(csi_dev, next_buf);
318 295	} else {
319 -		dev_dbg(video->csi->dev, "Next frame will be dropped!\n");
296 +		dev_dbg(csi_dev->dev, "Next frame will be dropped!\n");
320 297	}
321 298
322 - unlock:
299 + complete:
323 300	video->sequence++;
324 301	spin_unlock(&video->dma_queue_lock);
325 302 }
326 303
327 - static const struct vb2_ops sun6i_csi_vb2_ops = {
304 + static const struct vb2_ops sun6i_video_queue_ops = {
328 305	.queue_setup = sun6i_video_queue_setup,
329 -	.wait_prepare = vb2_ops_wait_prepare,
330 -	.wait_finish = vb2_ops_wait_finish,
331 306	.buf_prepare = sun6i_video_buffer_prepare,
307 +	.buf_queue = sun6i_video_buffer_queue,
332 308	.start_streaming = sun6i_video_start_streaming,
333 309	.stop_streaming = sun6i_video_stop_streaming,
334 -	.buf_queue = sun6i_video_buffer_queue,
310 +	.wait_prepare = vb2_ops_wait_prepare,
311 +	.wait_finish = vb2_ops_wait_finish,
335 312 };
336 313
337 - static int vidioc_querycap(struct file *file, void *priv,
338 -			struct v4l2_capability *cap)
339 - {
340 -	struct sun6i_video *video = video_drvdata(file);
314 + /* V4L2 Device */
341 315
342 -	strscpy(cap->driver, "sun6i-video", sizeof(cap->driver));
343 -	strscpy(cap->card, video->vdev.name, sizeof(cap->card));
344 -	snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s",
345 -		 video->csi->dev->of_node->name);
316 + static int sun6i_video_querycap(struct file *file, void *private,
317 +				struct v4l2_capability *capability)
318 + {
319 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
320 +	struct video_device *video_dev = &csi_dev->video.video_dev;
321 +
322 +	strscpy(capability->driver, SUN6I_CSI_NAME, sizeof(capability->driver));
323 +	strscpy(capability->card, video_dev->name, sizeof(capability->card));
324 +	snprintf(capability->bus_info, sizeof(capability->bus_info),
325 +		 "platform:%s", dev_name(csi_dev->dev));
346 326
347 327	return 0;
348 328 }
349 329
350 - static int vidioc_enum_fmt_vid_cap(struct file *file, void *priv,
351 -				struct v4l2_fmtdesc *f)
330 + static int sun6i_video_enum_fmt(struct file *file, void *private,
331 +				struct v4l2_fmtdesc *fmtdesc)
352 332 {
353 -	u32 index = f->index;
333 +	u32 index = fmtdesc->index;
354 334
355 -	if (index >= ARRAY_SIZE(supported_pixformats))
335 +	if (index >= ARRAY_SIZE(sun6i_video_formats))
356 336		return -EINVAL;
357 337
358 -	f->pixelformat = supported_pixformats[index];
338 +	fmtdesc->pixelformat = sun6i_video_formats[index];
359 339
360 340	return 0;
361 341 }
362 342
363 - static int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
364 -				struct v4l2_format *fmt)
343 + static int sun6i_video_g_fmt(struct file *file, void *private,
344 +			struct v4l2_format *format)
365 345 {
366 -	struct sun6i_video *video = video_drvdata(file);
346 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
347 +	struct sun6i_video *video = &csi_dev->video;
367 348
368 -	*fmt = video->fmt;
349 +	*format = video->format;
369 350
370 351	return 0;
371 352 }
372 353
373 - static int sun6i_video_try_fmt(struct sun6i_video *video,
374 -			struct v4l2_format *f)
354 + static int sun6i_video_format_try(struct sun6i_video *video,
355 +				struct v4l2_format *format)
375 356 {
376 -	struct v4l2_pix_format *pixfmt = &f->fmt.pix;
357 +	struct v4l2_pix_format *pix_format = &format->fmt.pix;
377 358	int bpp;
378 359
379 -	if (!is_pixformat_valid(pixfmt->pixelformat))
380 -		pixfmt->pixelformat = supported_pixformats[0];
360 +	if (!sun6i_video_format_check(pix_format->pixelformat))
361 +		pix_format->pixelformat = sun6i_video_formats[0];
381 362
382 -	v4l_bound_align_image(&pixfmt->width, MIN_WIDTH, MAX_WIDTH, 1,
383 -			      &pixfmt->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
363 +	v4l_bound_align_image(&pix_format->width, MIN_WIDTH, MAX_WIDTH, 1,
364 +			      &pix_format->height, MIN_HEIGHT, MAX_WIDTH, 1, 1);
384 365
385 -	bpp = sun6i_csi_get_bpp(pixfmt->pixelformat);
386 -	pixfmt->bytesperline = (pixfmt->width * bpp) >> 3;
387 -	pixfmt->sizeimage = pixfmt->bytesperline * pixfmt->height;
366 +	bpp = sun6i_csi_get_bpp(pix_format->pixelformat);
367 +	pix_format->bytesperline = (pix_format->width * bpp) >> 3;
368 +	pix_format->sizeimage = pix_format->bytesperline * pix_format->height;
388 369
389 -	if (pixfmt->field == V4L2_FIELD_ANY)
390 -		pixfmt->field = V4L2_FIELD_NONE;
370 +	if (pix_format->field == V4L2_FIELD_ANY)
371 +		pix_format->field = V4L2_FIELD_NONE;
391 372
392 -	if (pixfmt->pixelformat == V4L2_PIX_FMT_JPEG)
393 -		pixfmt->colorspace = V4L2_COLORSPACE_JPEG;
373 +	if (pix_format->pixelformat == V4L2_PIX_FMT_JPEG)
374 +		pix_format->colorspace = V4L2_COLORSPACE_JPEG;
394 375	else
395 -		pixfmt->colorspace = V4L2_COLORSPACE_SRGB;
376 +		pix_format->colorspace = V4L2_COLORSPACE_SRGB;
396 377
397 -	pixfmt->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
398 -	pixfmt->quantization = V4L2_QUANTIZATION_DEFAULT;
399 -	pixfmt->xfer_func = V4L2_XFER_FUNC_DEFAULT;
378 +	pix_format->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
379 +	pix_format->quantization = V4L2_QUANTIZATION_DEFAULT;
380 +	pix_format->xfer_func = V4L2_XFER_FUNC_DEFAULT;
400 381
401 382	return 0;
402 383 }
403 384
404 - static int sun6i_video_set_fmt(struct sun6i_video *video, struct v4l2_format *f)
385 + static int sun6i_video_format_set(struct sun6i_video *video,
386 +				struct v4l2_format *format)
405 387 {
406 388	int ret;
407 389
408 -	ret = sun6i_video_try_fmt(video, f);
390 +	ret = sun6i_video_format_try(video, format);
409 391	if (ret)
410 392		return ret;
411 393
412 -	video->fmt = *f;
394 +	video->format = *format;
413 395
414 396	return 0;
415 397 }
416 398
417 - static int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
418 -				struct v4l2_format *f)
399 + static int sun6i_video_s_fmt(struct file *file, void *private,
400 +			struct v4l2_format *format)
419 401 {
420 -	struct sun6i_video *video = video_drvdata(file);
402 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
403 +	struct sun6i_video *video = &csi_dev->video;
421 404
422 -	if (vb2_is_busy(&video->vb2_vidq))
405 +	if (vb2_is_busy(&video->queue))
423 406		return -EBUSY;
424 407
425 -	return sun6i_video_set_fmt(video, f);
408 +	return sun6i_video_format_set(video, format);
426 409 }
427 410
428 - static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
429 -				struct v4l2_format *f)
411 + static int sun6i_video_try_fmt(struct file *file, void *private,
412 +			struct v4l2_format *format)
430 413 {
431 -	struct sun6i_video *video = video_drvdata(file);
414 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
415 +	struct sun6i_video *video = &csi_dev->video;
432 416
433 -	return sun6i_video_try_fmt(video, f);
417 +	return sun6i_video_format_try(video, format);
434 418 }
435 419
436 - static int vidioc_enum_input(struct file *file, void *fh,
437 -			struct v4l2_input *inp)
420 + static int sun6i_video_enum_input(struct file *file, void *private,
421 +				struct v4l2_input *input)
438 422 {
439 -	if (inp->index != 0)
423 +	if (input->index != 0)
440 424		return -EINVAL;
441 425
442 -	strscpy(inp->name, "camera", sizeof(inp->name));
443 -	inp->type = V4L2_INPUT_TYPE_CAMERA;
426 +	input->type = V4L2_INPUT_TYPE_CAMERA;
427 +	strscpy(input->name, "Camera", sizeof(input->name));
444 428
445 429	return 0;
446 430 }
447 431
448 - static int vidioc_g_input(struct file *file, void *fh, unsigned int *i)
432 + static int sun6i_video_g_input(struct file *file, void *private,
433 +			unsigned int *index)
449 434 {
450 -	*i = 0;
435 +	*index = 0;
451 436
452 437	return 0;
453 438 }
454 439
455 - static int vidioc_s_input(struct file *file, void *fh, unsigned int i)
440 + static int sun6i_video_s_input(struct file *file, void *private,
441 +			unsigned int index)
456 442 {
457 -	if (i != 0)
443 +	if (index != 0)
458 444		return -EINVAL;
459 445
460 446	return 0;
461 447 }
462 448
463 449 static const struct v4l2_ioctl_ops sun6i_video_ioctl_ops = {
464 -	.vidioc_querycap = vidioc_querycap,
465 -	.vidioc_enum_fmt_vid_cap = vidioc_enum_fmt_vid_cap,
466 -	.vidioc_g_fmt_vid_cap = vidioc_g_fmt_vid_cap,
467 -	.vidioc_s_fmt_vid_cap = vidioc_s_fmt_vid_cap,
468 -	.vidioc_try_fmt_vid_cap = vidioc_try_fmt_vid_cap,
450 +	.vidioc_querycap = sun6i_video_querycap,
469 451
470 -	.vidioc_enum_input = vidioc_enum_input,
471 -	.vidioc_s_input = vidioc_s_input,
472 -	.vidioc_g_input = vidioc_g_input,
452 +	.vidioc_enum_fmt_vid_cap = sun6i_video_enum_fmt,
453 +	.vidioc_g_fmt_vid_cap = sun6i_video_g_fmt,
454 +	.vidioc_s_fmt_vid_cap = sun6i_video_s_fmt,
455 +	.vidioc_try_fmt_vid_cap = sun6i_video_try_fmt,
473 456
474 -	.vidioc_reqbufs = vb2_ioctl_reqbufs,
475 -	.vidioc_querybuf = vb2_ioctl_querybuf,
476 -	.vidioc_qbuf = vb2_ioctl_qbuf,
477 -	.vidioc_expbuf = vb2_ioctl_expbuf,
478 -	.vidioc_dqbuf = vb2_ioctl_dqbuf,
457 +	.vidioc_enum_input = sun6i_video_enum_input,
458 +	.vidioc_g_input = sun6i_video_g_input,
459 +	.vidioc_s_input = sun6i_video_s_input,
460 +
479 461	.vidioc_create_bufs = vb2_ioctl_create_bufs,
480 462	.vidioc_prepare_buf = vb2_ioctl_prepare_buf,
463 +	.vidioc_reqbufs = vb2_ioctl_reqbufs,
464 +	.vidioc_querybuf = vb2_ioctl_querybuf,
465 +	.vidioc_expbuf = vb2_ioctl_expbuf,
466 +	.vidioc_qbuf = vb2_ioctl_qbuf,
467 +	.vidioc_dqbuf = vb2_ioctl_dqbuf,
481 468	.vidioc_streamon = vb2_ioctl_streamon,
482 469	.vidioc_streamoff = vb2_ioctl_streamoff,
483 -
484 -	.vidioc_log_status = v4l2_ctrl_log_status,
485 -	.vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
486 -	.vidioc_unsubscribe_event = v4l2_event_unsubscribe,
487 470 };
488 471
489 - /* -----------------------------------------------------------------------------
490 -  * V4L2 file operations
491 -  */
472 + /* V4L2 File */
473 +
492 474 static int sun6i_video_open(struct file *file)
493 475 {
494 -	struct sun6i_video *video = video_drvdata(file);
476 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
477 +	struct sun6i_video *video = &csi_dev->video;
495 478	int ret = 0;
496 479
497 480	if (mutex_lock_interruptible(&video->lock))
···
503 478
504 479	ret = v4l2_fh_open(file);
505 480	if (ret < 0)
506 -		goto unlock;
481 +		goto error_lock;
507 482
508 -	ret = v4l2_pipeline_pm_get(&video->vdev.entity);
483 +	ret = v4l2_pipeline_pm_get(&video->video_dev.entity);
509 484	if (ret < 0)
510 -		goto fh_release;
485 +		goto error_v4l2_fh;
511 486
512 -	/* check if already powered */
513 -	if (!v4l2_fh_is_singular_file(file))
514 -		goto unlock;
515 -
516 -	ret = sun6i_csi_set_power(video->csi, true);
517 -	if (ret < 0)
518 -		goto fh_release;
487 +	/* Power on at first open. */
488 +	if (v4l2_fh_is_singular_file(file)) {
489 +		ret = sun6i_csi_set_power(csi_dev, true);
490 +		if (ret < 0)
491 +			goto error_v4l2_fh;
492 +	}
519 493
520 494	mutex_unlock(&video->lock);
495 +
521 496	return 0;
522 497
523 - fh_release:
498 + error_v4l2_fh:
524 499	v4l2_fh_release(file);
525 - unlock:
500 +
501 + error_lock:
526 502	mutex_unlock(&video->lock);
503 +
527 504	return ret;
528 505 }
529 506
530 507 static int sun6i_video_close(struct file *file)
531 508 {
532 -	struct sun6i_video *video = video_drvdata(file);
533 -	bool last_fh;
509 +	struct sun6i_csi_device *csi_dev = video_drvdata(file);
510 +	struct sun6i_video *video = &csi_dev->video;
511 +	bool last_close;
534 512
535 513	mutex_lock(&video->lock);
536 514
537 -	last_fh = v4l2_fh_is_singular_file(file);
515 +	last_close = v4l2_fh_is_singular_file(file);
538 516
539 517	_vb2_fop_release(file, NULL);
518 +	v4l2_pipeline_pm_put(&video->video_dev.entity);
540 519
541 -	v4l2_pipeline_pm_put(&video->vdev.entity);
542 -
543 -	if (last_fh)
544 -		sun6i_csi_set_power(video->csi, false);
520 +	/* Power off at last close. */
521 +	if (last_close)
522 +		sun6i_csi_set_power(csi_dev, false);
545 523
546 524	mutex_unlock(&video->lock);
547 525
···
560 532	.poll = vb2_fop_poll
561 533 };
562 534
563 - /* -----------------------------------------------------------------------------
564 -  * Media Operations
565 -  */
535 + /* Media Entity */
536 +
566 537 static int sun6i_video_link_validate_get_format(struct media_pad *pad,
567 538						struct v4l2_subdev_format *fmt)
568 539 {
···
581 554 {
582 555	struct video_device *vdev = container_of(link->sink->entity,
583 556						 struct video_device, entity);
584 -	struct sun6i_video *video = video_get_drvdata(vdev);
557 +	struct sun6i_csi_device *csi_dev = video_get_drvdata(vdev);
558 +	struct sun6i_video *video = &csi_dev->video;
585 559	struct v4l2_subdev_format source_fmt;
586 560	int ret;
587 561
588 562	video->mbus_code = 0;
589 563
590 564	if (!media_pad_remote_pad_first(link->sink->entity->pads)) {
591 -		dev_info(video->csi->dev,
592 -			 "video node %s pad not connected\n", vdev->name);
565 +		dev_info(csi_dev->dev, "video node %s pad not connected\n",
566 +			 vdev->name);
593 567		return -ENOLINK;
594 568	}
595 569
···
598 570	if (ret < 0)
599 571		return ret;
600 572
601 -	if (!sun6i_csi_is_format_supported(video->csi,
602 -					   video->fmt.fmt.pix.pixelformat,
573 +	if (!sun6i_csi_is_format_supported(csi_dev,
574 +					   video->format.fmt.pix.pixelformat,
603 575					   source_fmt.format.code)) {
604 -		dev_err(video->csi->dev,
576 +		dev_err(csi_dev->dev,
605 577			"Unsupported pixformat: 0x%x with mbus code: 0x%x!\n",
606 -			video->fmt.fmt.pix.pixelformat,
578 +			video->format.fmt.pix.pixelformat,
607 579			source_fmt.format.code);
608 580		return -EPIPE;
609 581	}
610 582
611 -	if (source_fmt.format.width != video->fmt.fmt.pix.width ||
612 -	    source_fmt.format.height != video->fmt.fmt.pix.height) {
613 -		dev_err(video->csi->dev,
583 +	if (source_fmt.format.width != video->format.fmt.pix.width ||
584 +	    source_fmt.format.height != video->format.fmt.pix.height) {
585 +		dev_err(csi_dev->dev,
614 586			"Wrong width or height %ux%u (%ux%u expected)\n",
615 -			video->fmt.fmt.pix.width, video->fmt.fmt.pix.height,
587 +			video->format.fmt.pix.width, video->format.fmt.pix.height,
616 588			source_fmt.format.width, source_fmt.format.height);
617 589		return -EPIPE;
618 590	}
···
626 598	.link_validate = sun6i_video_link_validate
627 599 };
628 600
629 - int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi,
630 -		     const char *name)
601 + /* Video */
602 +
603 + int sun6i_video_setup(struct sun6i_csi_device *csi_dev)
631 604 {
632 -	struct video_device *vdev = &video->vdev;
633 -	struct vb2_queue *vidq = &video->vb2_vidq;
634 -	struct v4l2_format fmt = { 0 };
605 +	struct sun6i_video *video = &csi_dev->video;
606 +	struct v4l2_device *v4l2_dev = &csi_dev->v4l2.v4l2_dev;
607 +	struct video_device *video_dev = &video->video_dev;
608 +	struct vb2_queue *queue = &video->queue;
609 +	struct media_pad *pad = &video->pad;
610 +	struct v4l2_format format = { 0 };
611 +	struct v4l2_pix_format *pix_format = &format.fmt.pix;
635 612	int ret;
636 613
637 -	video->csi = csi;
614 +	/* Media Entity */
638 615
639 -	/* Initialize the media entity... */
640 -	video->pad.flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
641 -	vdev->entity.ops = &sun6i_video_media_ops;
642 -	ret = media_entity_pads_init(&vdev->entity, 1, &video->pad);
616 +	video_dev->entity.ops = &sun6i_video_media_ops;
617 +
618 +	/* Media Pad */
619 +
620 +	pad->flags = MEDIA_PAD_FL_SINK | MEDIA_PAD_FL_MUST_CONNECT;
621 +
622 +	ret = media_entity_pads_init(&video_dev->entity, 1, pad);
643 623	if (ret < 0)
644 624		return ret;
645 625
646 -	mutex_init(&video->lock);
626 +	/* DMA queue */
647 627
648 628	INIT_LIST_HEAD(&video->dma_queue);
649 629	spin_lock_init(&video->dma_queue_lock);
650 630
651 631	video->sequence = 0;
652 632
653 -	/* Setup default format */
654 -	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
655 -	fmt.fmt.pix.pixelformat = supported_pixformats[0];
656 -	fmt.fmt.pix.width = 1280;
657 -	fmt.fmt.pix.height = 720;
658 -	fmt.fmt.pix.field = V4L2_FIELD_NONE;
659 -	sun6i_video_set_fmt(video, &fmt);
633 +	/* Queue */
660 634
661 -	/* Initialize videobuf2 queue */
662 -	vidq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
663 -	vidq->io_modes = VB2_MMAP | VB2_DMABUF;
664 -	vidq->drv_priv = video;
665 -	vidq->buf_struct_size = sizeof(struct sun6i_csi_buffer);
666 -	vidq->ops = &sun6i_csi_vb2_ops;
667 -	vidq->mem_ops = &vb2_dma_contig_memops;
668 -	vidq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
669 -	vidq->lock = &video->lock;
670 -	/* Make sure non-dropped frame */
671 -	vidq->min_buffers_needed = 3;
672 -	vidq->dev = csi->dev;
635 +	mutex_init(&video->lock);
673 636
674 -	ret = vb2_queue_init(vidq);
637 +	queue->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
638 +	queue->io_modes = VB2_MMAP | VB2_DMABUF;
639 +	queue->buf_struct_size = sizeof(struct sun6i_csi_buffer);
640 +	queue->ops = &sun6i_video_queue_ops;
641 +	queue->mem_ops = &vb2_dma_contig_memops;
642 +	queue->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
643 +	queue->lock = &video->lock;
644 +	queue->dev = csi_dev->dev;
645 +	queue->drv_priv = csi_dev;
646 +
647 +	/* Make sure non-dropped frame. */
648 +	queue->min_buffers_needed = 3;
649 +
650 +	ret = vb2_queue_init(queue);
675 651	if (ret) {
676 -		v4l2_err(&csi->v4l2_dev, "vb2_queue_init failed: %d\n", ret);
677 -		goto clean_entity;
652 +		v4l2_err(v4l2_dev, "failed to initialize vb2 queue: %d\n", ret);
653 +		goto error_media_entity;
678 654	}
679 655
680 -	/* Register video device */
681 -	strscpy(vdev->name, name, sizeof(vdev->name));
682 -	vdev->release = video_device_release_empty;
683 -	vdev->fops = &sun6i_video_fops;
684 -	vdev->ioctl_ops = &sun6i_video_ioctl_ops;
685 -	vdev->vfl_type = VFL_TYPE_VIDEO;
686 -	vdev->vfl_dir = VFL_DIR_RX;
687 -	vdev->v4l2_dev = &csi->v4l2_dev;
688 -	vdev->queue = vidq;
689 -	vdev->lock = &video->lock;
690 -	vdev->device_caps = V4L2_CAP_STREAMING | V4L2_CAP_VIDEO_CAPTURE;
691 -	video_set_drvdata(vdev, video);
656 +	/* V4L2 Format */
692 657
693 -	ret = video_register_device(vdev, VFL_TYPE_VIDEO, -1);
658 +	format.type = queue->type;
659 +	pix_format->pixelformat = sun6i_video_formats[0];
660 +	pix_format->width = 1280;
661 +	pix_format->height = 720;
662 +	pix_format->field = V4L2_FIELD_NONE;
663 +
664 +	sun6i_video_format_set(video, &format);
665 +
666 +	/* Video Device */
667 +
668 +	strscpy(video_dev->name, SUN6I_CSI_NAME, sizeof(video_dev->name));
669 +	video_dev->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
670 +	video_dev->vfl_dir = VFL_DIR_RX;
671 +	video_dev->release = video_device_release_empty;
672 +	video_dev->fops = &sun6i_video_fops;
673 +	video_dev->ioctl_ops = &sun6i_video_ioctl_ops;
674 +	video_dev->v4l2_dev = v4l2_dev;
675 +	video_dev->queue = queue;
676 +	video_dev->lock = &video->lock;
677 +
678 +	video_set_drvdata(video_dev, csi_dev);
679 +
680 +	ret = video_register_device(video_dev, VFL_TYPE_VIDEO, -1);
694 681	if (ret < 0) {
695 -		v4l2_err(&csi->v4l2_dev,
696 -			 "video_register_device failed: %d\n", ret);
697 -		goto clean_entity;
682 +		v4l2_err(v4l2_dev, "failed to register video device: %d\n",
683 +			 ret);
684 +		goto error_media_entity;
698 685	}
699 686
700 687	return 0;
701 688
702 - clean_entity:
703 -	media_entity_cleanup(&video->vdev.entity);
689 + error_media_entity:
690 +	media_entity_cleanup(&video_dev->entity);
691 +
704 692	mutex_destroy(&video->lock);
693 +
705 694	return ret;
706 695 }
707 696
708 - void sun6i_video_cleanup(struct sun6i_video *video)
697 + void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev)
709 698 {
710 -	vb2_video_unregister_device(&video->vdev);
711 -	media_entity_cleanup(&video->vdev.entity);
699 +	struct sun6i_video *video = &csi_dev->video;
700 +	struct video_device *video_dev = &video->video_dev;
701 +
702 +	vb2_video_unregister_device(video_dev);
703 +	media_entity_cleanup(&video_dev->entity);
712 704	mutex_destroy(&video->lock);
713 705 }
+10 -13
drivers/media/platform/sunxi/sun6i-csi/sun6i_video.h
··· 11 11 #include <media/v4l2-dev.h> 12 12 #include <media/videobuf2-core.h> 13 13 14 - struct sun6i_csi; 14 + struct sun6i_csi_device; 15 15 16 16 struct sun6i_video { 17 - struct video_device vdev; 17 + struct video_device video_dev; 18 + struct vb2_queue queue; 19 + struct mutex lock; /* Queue lock. */ 18 20 struct media_pad pad; 19 - struct sun6i_csi *csi; 20 21 21 - struct mutex lock; 22 - 23 - struct vb2_queue vb2_vidq; 24 - spinlock_t dma_queue_lock; 25 22 struct list_head dma_queue; 23 + spinlock_t dma_queue_lock; /* DMA queue lock. */ 26 24 27 - unsigned int sequence; 28 - struct v4l2_format fmt; 25 + struct v4l2_format format; 29 26 u32 mbus_code; 27 + unsigned int sequence; 30 28 }; 31 29 32 - int sun6i_video_init(struct sun6i_video *video, struct sun6i_csi *csi, 33 - const char *name); 34 - void sun6i_video_cleanup(struct sun6i_video *video); 30 + int sun6i_video_setup(struct sun6i_csi_device *csi_dev); 31 + void sun6i_video_cleanup(struct sun6i_csi_device *csi_dev); 35 32 36 - void sun6i_video_frame_done(struct sun6i_video *video); 33 + void sun6i_video_frame_done(struct sun6i_csi_device *csi_dev); 37 34 38 35 #endif /* __SUN6I_VIDEO_H__ */
+2 -2
drivers/media/platform/sunxi/sun6i-mipi-csi2/Kconfig
··· 3 3 tristate "Allwinner A31 MIPI CSI-2 Controller Driver" 4 4 depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV 5 5 depends on ARCH_SUNXI || COMPILE_TEST 6 - depends on PM && COMMON_CLK 6 + depends on PM && COMMON_CLK && RESET_CONTROLLER 7 + depends on PHY_SUN6I_MIPI_DPHY 7 8 select MEDIA_CONTROLLER 8 9 select VIDEO_V4L2_SUBDEV_API 9 10 select V4L2_FWNODE 10 - select PHY_SUN6I_MIPI_DPHY 11 11 select GENERIC_PHY_MIPI_DPHY 12 12 select REGMAP_MMIO 13 13 help
+16 -4
drivers/media/platform/sunxi/sun6i-mipi-csi2/sun6i_mipi_csi2.c
··· 661 661 csi2_dev->reset = devm_reset_control_get_shared(dev, NULL); 662 662 if (IS_ERR(csi2_dev->reset)) { 663 663 dev_err(dev, "failed to get reset controller\n"); 664 - return PTR_ERR(csi2_dev->reset); 664 + ret = PTR_ERR(csi2_dev->reset); 665 + goto error_clock_rate_exclusive; 665 666 } 666 667 667 668 /* D-PHY */ ··· 670 669 csi2_dev->dphy = devm_phy_get(dev, "dphy"); 671 670 if (IS_ERR(csi2_dev->dphy)) { 672 671 dev_err(dev, "failed to get MIPI D-PHY\n"); 673 - return PTR_ERR(csi2_dev->dphy); 672 + ret = PTR_ERR(csi2_dev->dphy); 673 + goto error_clock_rate_exclusive; 674 674 } 675 675 676 676 ret = phy_init(csi2_dev->dphy); 677 677 if (ret) { 678 678 dev_err(dev, "failed to initialize MIPI D-PHY\n"); 679 - return ret; 679 + goto error_clock_rate_exclusive; 680 680 } 681 681 682 682 /* Runtime PM */ ··· 685 683 pm_runtime_enable(dev); 686 684 687 685 return 0; 686 + 687 + error_clock_rate_exclusive: 688 + clk_rate_exclusive_put(csi2_dev->clock_mod); 689 + 690 + return ret; 688 691 } 689 692 690 693 static void ··· 719 712 720 713 ret = sun6i_mipi_csi2_bridge_setup(csi2_dev); 721 714 if (ret) 722 - return ret; 715 + goto error_resources; 723 716 724 717 return 0; 718 + 719 + error_resources: 720 + sun6i_mipi_csi2_resources_cleanup(csi2_dev); 721 + 722 + return ret; 725 723 } 726 724 727 725 static int sun6i_mipi_csi2_remove(struct platform_device *platform_dev)
+1 -1
drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/Kconfig
··· 3 3 tristate "Allwinner A83T MIPI CSI-2 Controller and D-PHY Driver" 4 4 depends on V4L_PLATFORM_DRIVERS && VIDEO_DEV 5 5 depends on ARCH_SUNXI || COMPILE_TEST 6 - depends on PM && COMMON_CLK 6 + depends on PM && COMMON_CLK && RESET_CONTROLLER 7 7 select MEDIA_CONTROLLER 8 8 select VIDEO_V4L2_SUBDEV_API 9 9 select V4L2_FWNODE
+18 -5
drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/sun8i_a83t_mipi_csi2.c
··· 719 719 csi2_dev->clock_mipi = devm_clk_get(dev, "mipi"); 720 720 if (IS_ERR(csi2_dev->clock_mipi)) { 721 721 dev_err(dev, "failed to acquire mipi clock\n"); 722 - return PTR_ERR(csi2_dev->clock_mipi); 722 + ret = PTR_ERR(csi2_dev->clock_mipi); 723 + goto error_clock_rate_exclusive; 723 724 } 724 725 725 726 csi2_dev->clock_misc = devm_clk_get(dev, "misc"); 726 727 if (IS_ERR(csi2_dev->clock_misc)) { 727 728 dev_err(dev, "failed to acquire misc clock\n"); 728 - return PTR_ERR(csi2_dev->clock_misc); 729 + ret = PTR_ERR(csi2_dev->clock_misc); 730 + goto error_clock_rate_exclusive; 729 731 } 730 732 731 733 /* Reset */ ··· 735 733 csi2_dev->reset = devm_reset_control_get_shared(dev, NULL); 736 734 if (IS_ERR(csi2_dev->reset)) { 737 735 dev_err(dev, "failed to get reset controller\n"); 738 - return PTR_ERR(csi2_dev->reset); 736 + ret = PTR_ERR(csi2_dev->reset); 737 + goto error_clock_rate_exclusive; 739 738 } 740 739 741 740 /* D-PHY */ ··· 744 741 ret = sun8i_a83t_dphy_register(csi2_dev); 745 742 if (ret) { 746 743 dev_err(dev, "failed to initialize MIPI D-PHY\n"); 747 - return ret; 744 + goto error_clock_rate_exclusive; 748 745 } 749 746 750 747 /* Runtime PM */ ··· 752 749 pm_runtime_enable(dev); 753 750 754 751 return 0; 752 + 753 + error_clock_rate_exclusive: 754 + clk_rate_exclusive_put(csi2_dev->clock_mod); 755 + 756 + return ret; 755 757 } 756 758 757 759 static void ··· 786 778 787 779 ret = sun8i_a83t_mipi_csi2_bridge_setup(csi2_dev); 788 780 if (ret) 789 - return ret; 781 + goto error_resources; 790 782 791 783 return 0; 784 + 785 + error_resources: 786 + sun8i_a83t_mipi_csi2_resources_cleanup(csi2_dev); 787 + 788 + return ret; 792 789 } 793 790 794 791 static int sun8i_a83t_mipi_csi2_remove(struct platform_device *platform_dev)
+1 -1
drivers/media/platform/sunxi/sun8i-di/Kconfig
··· 4 4 depends on V4L_MEM2MEM_DRIVERS 5 5 depends on VIDEO_DEV 6 6 depends on ARCH_SUNXI || COMPILE_TEST 7 - depends on COMMON_CLK && OF 7 + depends on COMMON_CLK && RESET_CONTROLLER && OF 8 8 depends on PM 9 9 select VIDEOBUF2_DMA_CONTIG 10 10 select V4L2_MEM2MEM_DEV
+1 -1
drivers/media/platform/sunxi/sun8i-rotate/Kconfig
··· 5 5 depends on V4L_MEM2MEM_DRIVERS 6 6 depends on VIDEO_DEV 7 7 depends on ARCH_SUNXI || COMPILE_TEST 8 - depends on COMMON_CLK && OF 8 + depends on COMMON_CLK && RESET_CONTROLLER && OF 9 9 depends on PM 10 10 select VIDEOBUF2_DMA_CONTIG 11 11 select V4L2_MEM2MEM_DEV
+3 -3
drivers/media/platform/ti/cal/cal-video.c
··· 708 708 dma_addr_t addr; 709 709 int ret; 710 710 711 - ret = media_pipeline_start(&ctx->vdev.entity, &ctx->phy->pipe); 711 + ret = video_device_pipeline_alloc_start(&ctx->vdev); 712 712 if (ret < 0) { 713 713 ctx_err(ctx, "Failed to start media pipeline: %d\n", ret); 714 714 goto error_release_buffers; ··· 761 761 cal_ctx_unprepare(ctx); 762 762 763 763 error_pipeline: 764 - media_pipeline_stop(&ctx->vdev.entity); 764 + video_device_pipeline_stop(&ctx->vdev); 765 765 error_release_buffers: 766 766 cal_release_buffers(ctx, VB2_BUF_STATE_QUEUED); 767 767 ··· 782 782 783 783 cal_release_buffers(ctx, VB2_BUF_STATE_ERROR); 784 784 785 - media_pipeline_stop(&ctx->vdev.entity); 785 + video_device_pipeline_stop(&ctx->vdev); 786 786 } 787 787 788 788 static const struct vb2_ops cal_video_qops = {
-1
drivers/media/platform/ti/cal/cal.h
··· 174 174 struct device_node *source_ep_node; 175 175 struct device_node *source_node; 176 176 struct v4l2_subdev *source; 177 - struct media_pipeline pipe; 178 177 179 178 struct v4l2_subdev subdev; 180 179 struct media_pad pads[CAL_CAMERARX_NUM_PADS];
+1 -3
drivers/media/platform/ti/omap3isp/isp.c
··· 937 937 struct isp_pipeline *pipe; 938 938 struct media_pad *pad; 939 939 940 - if (!me->pipe) 941 - return 0; 942 940 pipe = to_isp_pipeline(me); 943 - if (pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED) 941 + if (!pipe || pipe->stream_state == ISP_PIPELINE_STREAM_STOPPED) 944 942 return 0; 945 943 pad = media_pad_remote_pad_first(&pipe->output->pad); 946 944 return pad->entity == me;
+4 -5
drivers/media/platform/ti/omap3isp/ispvideo.c
··· 1093 1093 /* Start streaming on the pipeline. No link touching an entity in the 1094 1094 * pipeline can be activated or deactivated once streaming is started. 1095 1095 */ 1096 - pipe = video->video.entity.pipe 1097 - ? to_isp_pipeline(&video->video.entity) : &video->pipe; 1096 + pipe = to_isp_pipeline(&video->video.entity) ? : &video->pipe; 1098 1097 1099 1098 ret = media_entity_enum_init(&pipe->ent_enum, &video->isp->media_dev); 1100 1099 if (ret) ··· 1103 1104 pipe->l3_ick = clk_get_rate(video->isp->clock[ISP_CLK_L3_ICK]); 1104 1105 pipe->max_rate = pipe->l3_ick; 1105 1106 1106 - ret = media_pipeline_start(&video->video.entity, &pipe->pipe); 1107 + ret = video_device_pipeline_start(&video->video, &pipe->pipe); 1107 1108 if (ret < 0) 1108 1109 goto err_pipeline_start; 1109 1110 ··· 1160 1161 return 0; 1161 1162 1162 1163 err_check_format: 1163 - media_pipeline_stop(&video->video.entity); 1164 + video_device_pipeline_stop(&video->video); 1164 1165 err_pipeline_start: 1165 1166 /* TODO: Implement PM QoS */ 1166 1167 /* The DMA queue must be emptied here, otherwise CCDC interrupts that ··· 1227 1228 video->error = false; 1228 1229 1229 1230 /* TODO: Implement PM QoS */ 1230 - media_pipeline_stop(&video->video.entity); 1231 + video_device_pipeline_stop(&video->video); 1231 1232 1232 1233 media_entity_enum_cleanup(&pipe->ent_enum); 1233 1234
+9 -2
drivers/media/platform/ti/omap3isp/ispvideo.h
··· 99 99 unsigned int external_width; 100 100 }; 101 101 102 - #define to_isp_pipeline(__e) \ 103 - container_of((__e)->pipe, struct isp_pipeline, pipe) 102 + static inline struct isp_pipeline *to_isp_pipeline(struct media_entity *entity) 103 + { 104 + struct media_pipeline *pipe = media_entity_pipeline(entity); 105 + 106 + if (!pipe) 107 + return NULL; 108 + 109 + return container_of(pipe, struct isp_pipeline, pipe); 110 + } 104 111 105 112 static inline int isp_pipeline_ready(struct isp_pipeline *pipe) 106 113 {
+9 -5
drivers/media/platform/verisilicon/hantro_drv.c
··· 251 251 252 252 static int hantro_try_ctrl(struct v4l2_ctrl *ctrl) 253 253 { 254 + struct hantro_ctx *ctx; 255 + 256 + ctx = container_of(ctrl->handler, 257 + struct hantro_ctx, ctrl_handler); 258 + 254 259 if (ctrl->id == V4L2_CID_STATELESS_H264_SPS) { 255 260 const struct v4l2_ctrl_h264_sps *sps = ctrl->p_new.p_h264_sps; 256 261 ··· 271 266 } else if (ctrl->id == V4L2_CID_STATELESS_HEVC_SPS) { 272 267 const struct v4l2_ctrl_hevc_sps *sps = ctrl->p_new.p_hevc_sps; 273 268 274 - if (sps->bit_depth_luma_minus8 != sps->bit_depth_chroma_minus8) 275 - /* Luma and chroma bit depth mismatch */ 269 + if (sps->bit_depth_luma_minus8 != 0 && sps->bit_depth_luma_minus8 != 2) 270 + /* Only 8-bit and 10-bit are supported */ 276 271 return -EINVAL; 277 - if (sps->bit_depth_luma_minus8 != 0) 278 - /* Only 8-bit is supported */ 279 - return -EINVAL; 272 + 273 + ctx->bit_depth = sps->bit_depth_luma_minus8 + 8; 280 274 } else if (ctrl->id == V4L2_CID_STATELESS_VP9_FRAME) { 281 275 const struct v4l2_ctrl_vp9_frame *dec_params = ctrl->p_new.p_vp9_frame; 282 276
+1 -3
drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
··· 12 12 13 13 static size_t hantro_hevc_chroma_offset(struct hantro_ctx *ctx) 14 14 { 15 - return ctx->dst_fmt.width * ctx->dst_fmt.height; 15 + return ctx->dst_fmt.width * ctx->dst_fmt.height * ctx->bit_depth / 8; 16 16 } 17 17 18 18 static size_t hantro_hevc_motion_vectors_offset(struct hantro_ctx *ctx) ··· 166 166 167 167 hantro_reg_write(vpu, &g2_bit_depth_y_minus8, sps->bit_depth_luma_minus8); 168 168 hantro_reg_write(vpu, &g2_bit_depth_c_minus8, sps->bit_depth_chroma_minus8); 169 - 170 - hantro_reg_write(vpu, &g2_output_8_bits, 0); 171 169 172 170 hantro_reg_write(vpu, &g2_hdr_skip_length, compute_header_skip_length(ctx)); 173 171
+2 -2
drivers/media/platform/verisilicon/hantro_hevc.c
··· 104 104 hevc_dec->tile_bsd.cpu = NULL; 105 105 } 106 106 107 - size = VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1); 107 + size = (VERT_FILTER_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8; 108 108 hevc_dec->tile_filter.cpu = dma_alloc_coherent(vpu->dev, size, 109 109 &hevc_dec->tile_filter.dma, 110 110 GFP_KERNEL); ··· 112 112 goto err_free_tile_buffers; 113 113 hevc_dec->tile_filter.size = size; 114 114 115 - size = VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1); 115 + size = (VERT_SAO_RAM_SIZE * height64 * (num_tile_cols - 1) * ctx->bit_depth) / 8; 116 116 hevc_dec->tile_sao.cpu = dma_alloc_coherent(vpu->dev, size, 117 117 &hevc_dec->tile_sao.dma, 118 118 GFP_KERNEL);
+6 -1
drivers/media/platform/verisilicon/hantro_postproc.c
··· 114 114 struct hantro_dev *vpu = ctx->dev; 115 115 struct vb2_v4l2_buffer *dst_buf; 116 116 int down_scale = down_scale_factor(ctx); 117 + int out_depth; 117 118 size_t chroma_offset; 118 119 dma_addr_t dst_dma; 119 120 ··· 133 132 hantro_write_addr(vpu, G2_RS_OUT_LUMA_ADDR, dst_dma); 134 133 hantro_write_addr(vpu, G2_RS_OUT_CHROMA_ADDR, dst_dma + chroma_offset); 135 134 } 135 + 136 + out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat); 136 137 if (ctx->dev->variant->legacy_regs) { 137 - int out_depth = hantro_get_format_depth(ctx->dst_fmt.pixelformat); 138 138 u8 pp_shift = 0; 139 139 140 140 if (out_depth > 8) ··· 143 141 144 142 hantro_reg_write(ctx->dev, &g2_rs_out_bit_depth, out_depth); 145 143 hantro_reg_write(ctx->dev, &g2_pp_pix_shift, pp_shift); 144 + } else { 145 + hantro_reg_write(vpu, &g2_output_8_bits, out_depth > 8 ? 0 : 1); 146 + hantro_reg_write(vpu, &g2_output_format, out_depth > 8 ? 1 : 0); 146 147 } 147 148 hantro_reg_write(vpu, &g2_out_rs_e, 1); 148 149 }
+27
drivers/media/platform/verisilicon/imx8m_vpu_hw.c
··· 162 162 .step_height = MB_DIM, 163 163 }, 164 164 }, 165 + { 166 + .fourcc = V4L2_PIX_FMT_P010, 167 + .codec_mode = HANTRO_MODE_NONE, 168 + .postprocessed = true, 169 + .frmsize = { 170 + .min_width = FMT_MIN_WIDTH, 171 + .max_width = FMT_UHD_WIDTH, 172 + .step_width = MB_DIM, 173 + .min_height = FMT_MIN_HEIGHT, 174 + .max_height = FMT_UHD_HEIGHT, 175 + .step_height = MB_DIM, 176 + }, 177 + }, 165 178 }; 166 179 167 180 static const struct hantro_fmt imx8m_vpu_g2_dec_fmts[] = { 168 181 { 169 182 .fourcc = V4L2_PIX_FMT_NV12_4L4, 170 183 .codec_mode = HANTRO_MODE_NONE, 184 + .match_depth = true, 185 + .frmsize = { 186 + .min_width = FMT_MIN_WIDTH, 187 + .max_width = FMT_UHD_WIDTH, 188 + .step_width = TILE_MB_DIM, 189 + .min_height = FMT_MIN_HEIGHT, 190 + .max_height = FMT_UHD_HEIGHT, 191 + .step_height = TILE_MB_DIM, 192 + }, 193 + }, 194 + { 195 + .fourcc = V4L2_PIX_FMT_P010_4L4, 196 + .codec_mode = HANTRO_MODE_NONE, 197 + .match_depth = true, 171 198 .frmsize = { 172 199 .min_width = FMT_MIN_WIDTH, 173 200 .max_width = FMT_UHD_WIDTH,
+5 -6
drivers/media/platform/xilinx/xilinx-dma.c
··· 402 402 * Use the pipeline object embedded in the first DMA object that starts 403 403 * streaming. 404 404 */ 405 - pipe = dma->video.entity.pipe 406 - ? to_xvip_pipeline(&dma->video.entity) : &dma->pipe; 405 + pipe = to_xvip_pipeline(&dma->video) ? : &dma->pipe; 407 406 408 - ret = media_pipeline_start(&dma->video.entity, &pipe->pipe); 407 + ret = video_device_pipeline_start(&dma->video, &pipe->pipe); 409 408 if (ret < 0) 410 409 goto error; 411 410 ··· 430 431 return 0; 431 432 432 433 error_stop: 433 - media_pipeline_stop(&dma->video.entity); 434 + video_device_pipeline_stop(&dma->video); 434 435 435 436 error: 436 437 /* Give back all queued buffers to videobuf2. */ ··· 447 448 static void xvip_dma_stop_streaming(struct vb2_queue *vq) 448 449 { 449 450 struct xvip_dma *dma = vb2_get_drv_priv(vq); 450 - struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video.entity); 451 + struct xvip_pipeline *pipe = to_xvip_pipeline(&dma->video); 451 452 struct xvip_dma_buffer *buf, *nbuf; 452 453 453 454 /* Stop the pipeline. */ ··· 458 459 459 460 /* Cleanup the pipeline and mark it as being stopped. */ 460 461 xvip_pipeline_cleanup(pipe); 461 - media_pipeline_stop(&dma->video.entity); 462 + video_device_pipeline_stop(&dma->video); 462 463 463 464 /* Give back all queued buffers to videobuf2. */ 464 465 spin_lock_irq(&dma->queued_lock);
+7 -2
drivers/media/platform/xilinx/xilinx-dma.h
··· 45 45 struct xvip_dma *output; 46 46 }; 47 47 48 - static inline struct xvip_pipeline *to_xvip_pipeline(struct media_entity *e) 48 + static inline struct xvip_pipeline *to_xvip_pipeline(struct video_device *vdev) 49 49 { 50 - return container_of(e->pipe, struct xvip_pipeline, pipe); 50 + struct media_pipeline *pipe = video_device_pipeline(vdev); 51 + 52 + if (!pipe) 53 + return NULL; 54 + 55 + return container_of(pipe, struct xvip_pipeline, pipe); 51 56 } 52 57 53 58 /**
+1 -4
drivers/media/radio/radio-si476x.c
··· 1072 1072 1073 1073 static int si476x_radio_fops_release(struct file *file) 1074 1074 { 1075 - int err; 1076 1075 struct si476x_radio *radio = video_drvdata(file); 1077 1076 1078 1077 if (v4l2_fh_is_singular_file(file) && ··· 1079 1080 si476x_core_set_power_state(radio->core, 1080 1081 SI476X_POWER_DOWN); 1081 1082 1082 - err = v4l2_fh_release(file); 1083 - 1084 - return err; 1083 + return v4l2_fh_release(file); 1085 1084 } 1086 1085 1087 1086 static ssize_t si476x_radio_fops_read(struct file *file, char __user *buf,
+1 -1
drivers/media/radio/si4713/si4713.c
··· 14 14 #include <linux/interrupt.h> 15 15 #include <linux/i2c.h> 16 16 #include <linux/slab.h> 17 - #include <linux/gpio.h> 17 + #include <linux/gpio/consumer.h> 18 18 #include <linux/module.h> 19 19 #include <media/v4l2-device.h> 20 20 #include <media/v4l2-ioctl.h>
+1 -3
drivers/media/rc/imon.c
··· 684 684 */ 685 685 static int send_associate_24g(struct imon_context *ictx) 686 686 { 687 - int retval; 688 687 const unsigned char packet[8] = { 0x01, 0x00, 0x00, 0x00, 689 688 0x00, 0x00, 0x00, 0x20 }; 690 689 ··· 698 699 } 699 700 700 701 memcpy(ictx->usb_tx_buf, packet, sizeof(packet)); 701 - retval = send_packet(ictx); 702 702 703 - return retval; 703 + return send_packet(ictx); 704 704 } 705 705 706 706 /*
+1 -1
drivers/media/rc/mceusb.c
··· 1077 1077 struct mceusb_dev *ir = dev->priv; 1078 1078 unsigned int units; 1079 1079 1080 - units = DIV_ROUND_CLOSEST(timeout, MCE_TIME_UNIT); 1080 + units = DIV_ROUND_UP(timeout, MCE_TIME_UNIT); 1081 1081 1082 1082 cmdbuf[2] = units >> 8; 1083 1083 cmdbuf[3] = units;
+3 -4
drivers/media/test-drivers/vimc/vimc-capture.c
··· 241 241 static int vimc_capture_start_streaming(struct vb2_queue *vq, unsigned int count) 242 242 { 243 243 struct vimc_capture_device *vcapture = vb2_get_drv_priv(vq); 244 - struct media_entity *entity = &vcapture->vdev.entity; 245 244 int ret; 246 245 247 246 vcapture->sequence = 0; 248 247 249 248 /* Start the media pipeline */ 250 - ret = media_pipeline_start(entity, &vcapture->stream.pipe); 249 + ret = video_device_pipeline_start(&vcapture->vdev, &vcapture->stream.pipe); 251 250 if (ret) { 252 251 vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED); 253 252 return ret; ··· 254 255 255 256 ret = vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 1); 256 257 if (ret) { 257 - media_pipeline_stop(entity); 258 + video_device_pipeline_stop(&vcapture->vdev); 258 259 vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_QUEUED); 259 260 return ret; 260 261 } ··· 273 274 vimc_streamer_s_stream(&vcapture->stream, &vcapture->ved, 0); 274 275 275 276 /* Stop the media pipeline */ 276 - media_pipeline_stop(&vcapture->vdev.entity); 277 + video_device_pipeline_stop(&vcapture->vdev); 277 278 278 279 /* Release all active buffers */ 279 280 vimc_capture_return_all_buffers(vcapture, VB2_BUF_STATE_ERROR);
+1 -3
drivers/media/tuners/xc4000.c
··· 282 282 static int xc_write_reg(struct xc4000_priv *priv, u16 regAddr, u16 i2cData) 283 283 { 284 284 u8 buf[4]; 285 - int result; 286 285 287 286 buf[0] = (regAddr >> 8) & 0xFF; 288 287 buf[1] = regAddr & 0xFF; 289 288 buf[2] = (i2cData >> 8) & 0xFF; 290 289 buf[3] = i2cData & 0xFF; 291 - result = xc_send_i2c_data(priv, buf, 4); 292 290 293 - return result; 291 + return xc_send_i2c_data(priv, buf, 4); 294 292 } 295 293 296 294 static int xc_load_i2c_sequence(struct dvb_frontend *fe, const u8 *i2c_sequence)
+4 -4
drivers/media/usb/au0828/au0828-core.c
··· 410 410 goto end; 411 411 } 412 412 413 - ret = __media_pipeline_start(entity, pipe); 413 + ret = __media_pipeline_start(entity->pads, pipe); 414 414 if (ret) { 415 415 pr_err("Start Pipeline: %s->%s Error %d\n", 416 416 source->name, entity->name, ret); ··· 501 501 return; 502 502 503 503 /* stop pipeline */ 504 - __media_pipeline_stop(dev->active_link_owner); 504 + __media_pipeline_stop(dev->active_link_owner->pads); 505 505 pr_debug("Pipeline stop for %s\n", 506 506 dev->active_link_owner->name); 507 507 508 508 ret = __media_pipeline_start( 509 - dev->active_link_user, 509 + dev->active_link_user->pads, 510 510 dev->active_link_user_pipe); 511 511 if (ret) { 512 512 pr_err("Start Pipeline: %s->%s %d\n", ··· 532 532 return; 533 533 534 534 /* stop pipeline */ 535 - __media_pipeline_stop(dev->active_link_owner); 535 + __media_pipeline_stop(dev->active_link_owner->pads); 536 536 pr_debug("Pipeline stop for %s\n", 537 537 dev->active_link_owner->name); 538 538
+1 -1
drivers/media/usb/dvb-usb-v2/af9035.c
··· 1497 1497 /* 1498 1498 * AF9035 gpiot2 = FC0012 enable 1499 1499 * XXX: there seems to be something on gpioh8 too, but on my 1500 - * my test I didn't find any difference. 1500 + * test I didn't find any difference. 1501 1501 */ 1502 1502 1503 1503 if (adap->id == 0) {
+1 -1
drivers/media/usb/msi2500/msi2500.c
··· 209 209 * 210 210 * Control bits for previous samples is 32-bit field, containing 16 x 2-bit 211 211 * numbers. This results one 2-bit number for 8 samples. It is likely used for 212 - * for bit shifting sample by given bits, increasing actual sampling resolution. 212 + * bit shifting sample by given bits, increasing actual sampling resolution. 213 213 * Number 2 (0b10) was never seen. 214 214 * 215 215 * 6 * 16 * 2 * 4 = 768 samples. 768 * 4 = 3072 bytes
+4 -4
drivers/media/v4l2-core/v4l2-ctrls-api.c
··· 89 89 /* Helper function: copy the initial control value back to the caller */ 90 90 static int def_to_user(struct v4l2_ext_control *c, struct v4l2_ctrl *ctrl) 91 91 { 92 - ctrl->type_ops->init(ctrl, 0, ctrl->elems, ctrl->p_new); 92 + ctrl->type_ops->init(ctrl, 0, ctrl->p_new); 93 93 94 94 return ptr_to_user(c, ctrl, ctrl->p_new); 95 95 } ··· 126 126 if (ctrl->is_dyn_array) 127 127 ctrl->new_elems = elems; 128 128 else if (ctrl->is_array) 129 - ctrl->type_ops->init(ctrl, elems, ctrl->elems, ctrl->p_new); 129 + ctrl->type_ops->init(ctrl, elems, ctrl->p_new); 130 130 return 0; 131 131 } 132 132 ··· 494 494 /* Validate a new control */ 495 495 static int validate_new(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr p_new) 496 496 { 497 - return ctrl->type_ops->validate(ctrl, ctrl->new_elems, p_new); 497 + return ctrl->type_ops->validate(ctrl, p_new); 498 498 } 499 499 500 500 /* Validate controls. */ ··· 1007 1007 ctrl->p_cur.p = p_array + elems * ctrl->elem_size; 1008 1008 for (i = 0; i < ctrl->nr_of_dims; i++) 1009 1009 ctrl->dims[i] = dims[i]; 1010 - ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur); 1010 + ctrl->type_ops->init(ctrl, 0, ctrl->p_cur); 1011 1011 cur_to_new(ctrl); 1012 1012 send_event(NULL, ctrl, V4L2_EVENT_CTRL_CH_VALUE | 1013 1013 V4L2_EVENT_CTRL_CH_DIMENSIONS);
+10 -9
drivers/media/v4l2-core/v4l2-ctrls-core.c
··· 65 65 v4l2_event_queue_fh(sev->fh, &ev); 66 66 } 67 67 68 - bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems, 68 + bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, 69 69 union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2) 70 70 { 71 71 unsigned int i; ··· 74 74 case V4L2_CTRL_TYPE_BUTTON: 75 75 return false; 76 76 case V4L2_CTRL_TYPE_STRING: 77 - for (i = 0; i < elems; i++) { 77 + for (i = 0; i < ctrl->elems; i++) { 78 78 unsigned int idx = i * ctrl->elem_size; 79 79 80 80 /* strings are always 0-terminated */ ··· 84 84 return true; 85 85 default: 86 86 return !memcmp(ptr1.p_const, ptr2.p_const, 87 - elems * ctrl->elem_size); 87 + ctrl->elems * ctrl->elem_size); 88 88 } 89 89 } 90 90 EXPORT_SYMBOL(v4l2_ctrl_type_op_equal); ··· 178 178 } 179 179 180 180 void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx, 181 - u32 tot_elems, union v4l2_ctrl_ptr ptr) 181 + union v4l2_ctrl_ptr ptr) 182 182 { 183 183 unsigned int i; 184 + u32 tot_elems = ctrl->elems; 184 185 u32 elems = tot_elems - from_idx; 185 186 186 187 if (from_idx >= tot_elems) ··· 996 995 } 997 996 } 998 997 999 - int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems, 998 + int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, 1000 999 union v4l2_ctrl_ptr ptr) 1001 1000 { 1002 1001 unsigned int i; ··· 1018 1017 1019 1018 case V4L2_CTRL_TYPE_BUTTON: 1020 1019 case V4L2_CTRL_TYPE_CTRL_CLASS: 1021 - memset(ptr.p_s32, 0, elems * sizeof(s32)); 1020 + memset(ptr.p_s32, 0, ctrl->new_elems * sizeof(s32)); 1022 1021 return 0; 1023 1022 } 1024 1023 1025 - for (i = 0; !ret && i < elems; i++) 1024 + for (i = 0; !ret && i < ctrl->new_elems; i++) 1026 1025 ret = std_validate_elem(ctrl, i, ptr); 1027 1026 return ret; 1028 1027 } ··· 1725 1724 memcpy(ctrl->p_def.p, p_def.p_const, elem_size); 1726 1725 } 1727 1726 1728 - ctrl->type_ops->init(ctrl, 0, elems, ctrl->p_cur); 1727 + ctrl->type_ops->init(ctrl, 0, ctrl->p_cur); 1729 1728 cur_to_new(ctrl); 
1730 1729 1731 1730 if (handler_new_ref(hdl, ctrl, NULL, false, false)) { ··· 2070 2069 ctrl_changed = true; 2071 2070 if (!ctrl_changed) 2072 2071 ctrl_changed = !ctrl->type_ops->equal(ctrl, 2073 - ctrl->elems, ctrl->p_cur, ctrl->p_new); 2072 + ctrl->p_cur, ctrl->p_new); 2074 2073 ctrl->has_changed = ctrl_changed; 2075 2074 changed |= ctrl->has_changed; 2076 2075 }
+72
drivers/media/v4l2-core/v4l2-dev.c
··· 1095 1095 } 1096 1096 EXPORT_SYMBOL(video_unregister_device); 1097 1097 1098 + #if defined(CONFIG_MEDIA_CONTROLLER) 1099 + 1100 + __must_check int video_device_pipeline_start(struct video_device *vdev, 1101 + struct media_pipeline *pipe) 1102 + { 1103 + struct media_entity *entity = &vdev->entity; 1104 + 1105 + if (entity->num_pads != 1) 1106 + return -ENODEV; 1107 + 1108 + return media_pipeline_start(&entity->pads[0], pipe); 1109 + } 1110 + EXPORT_SYMBOL_GPL(video_device_pipeline_start); 1111 + 1112 + __must_check int __video_device_pipeline_start(struct video_device *vdev, 1113 + struct media_pipeline *pipe) 1114 + { 1115 + struct media_entity *entity = &vdev->entity; 1116 + 1117 + if (entity->num_pads != 1) 1118 + return -ENODEV; 1119 + 1120 + return __media_pipeline_start(&entity->pads[0], pipe); 1121 + } 1122 + EXPORT_SYMBOL_GPL(__video_device_pipeline_start); 1123 + 1124 + void video_device_pipeline_stop(struct video_device *vdev) 1125 + { 1126 + struct media_entity *entity = &vdev->entity; 1127 + 1128 + if (WARN_ON(entity->num_pads != 1)) 1129 + return; 1130 + 1131 + return media_pipeline_stop(&entity->pads[0]); 1132 + } 1133 + EXPORT_SYMBOL_GPL(video_device_pipeline_stop); 1134 + 1135 + void __video_device_pipeline_stop(struct video_device *vdev) 1136 + { 1137 + struct media_entity *entity = &vdev->entity; 1138 + 1139 + if (WARN_ON(entity->num_pads != 1)) 1140 + return; 1141 + 1142 + return __media_pipeline_stop(&entity->pads[0]); 1143 + } 1144 + EXPORT_SYMBOL_GPL(__video_device_pipeline_stop); 1145 + 1146 + __must_check int video_device_pipeline_alloc_start(struct video_device *vdev) 1147 + { 1148 + struct media_entity *entity = &vdev->entity; 1149 + 1150 + if (entity->num_pads != 1) 1151 + return -ENODEV; 1152 + 1153 + return media_pipeline_alloc_start(&entity->pads[0]); 1154 + } 1155 + EXPORT_SYMBOL_GPL(video_device_pipeline_alloc_start); 1156 + 1157 + struct media_pipeline *video_device_pipeline(struct video_device *vdev) 1158 + { 1159 + struct 
media_entity *entity = &vdev->entity; 1160 + 1161 + if (WARN_ON(entity->num_pads != 1)) 1162 + return NULL; 1163 + 1164 + return media_pad_pipeline(&entity->pads[0]); 1165 + } 1166 + EXPORT_SYMBOL_GPL(video_device_pipeline); 1167 + 1168 + #endif /* CONFIG_MEDIA_CONTROLLER */ 1169 + 1098 1170 /* 1099 1171 * Initialise video for linux 1100 1172 */
+8
drivers/mfd/syscon.c
··· 66 66 goto err_map; 67 67 } 68 68 69 + /* Parse the device's DT node for an endianness specification */ 70 + if (of_property_read_bool(np, "big-endian")) 71 + syscon_config.val_format_endian = REGMAP_ENDIAN_BIG; 72 + else if (of_property_read_bool(np, "little-endian")) 73 + syscon_config.val_format_endian = REGMAP_ENDIAN_LITTLE; 74 + else if (of_property_read_bool(np, "native-endian")) 75 + syscon_config.val_format_endian = REGMAP_ENDIAN_NATIVE; 76 + 69 77 /* 70 78 * search for reg-io-width property in DT. If it is not provided, 71 79 * default to 4 bytes. regmap_init_mmio will return an error if values
+5
drivers/net/ethernet/amd/xgbe/xgbe-pci.c
··· 285 285 286 286 /* Yellow Carp devices do not need cdr workaround */ 287 287 pdata->vdata->an_cdr_workaround = 0; 288 + 289 + /* Yellow Carp devices do not need rrc */ 290 + pdata->vdata->enable_rrc = 0; 288 291 } else { 289 292 pdata->xpcs_window_def_reg = PCS_V2_WINDOW_DEF; 290 293 pdata->xpcs_window_sel_reg = PCS_V2_WINDOW_SELECT; ··· 486 483 .tx_desc_prefetch = 5, 487 484 .rx_desc_prefetch = 5, 488 485 .an_cdr_workaround = 1, 486 + .enable_rrc = 1, 489 487 }; 490 488 491 489 static struct xgbe_version_data xgbe_v2b = { ··· 502 498 .tx_desc_prefetch = 5, 503 499 .rx_desc_prefetch = 5, 504 500 .an_cdr_workaround = 1, 501 + .enable_rrc = 1, 505 502 }; 506 503 507 504 static const struct pci_device_id xgbe_pci_table[] = {
+37 -21
drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
··· 239 239 #define XGBE_SFP_BASE_BR_1GBE_MAX 0x0d 240 240 #define XGBE_SFP_BASE_BR_10GBE_MIN 0x64 241 241 #define XGBE_SFP_BASE_BR_10GBE_MAX 0x68 242 + #define XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX 0x78 242 243 243 244 #define XGBE_SFP_BASE_CU_CABLE_LEN 18 244 245 ··· 284 283 285 284 #define XGBE_BEL_FUSE_VENDOR "BEL-FUSE " 286 285 #define XGBE_BEL_FUSE_PARTNO "1GBT-SFP06 " 286 + 287 + #define XGBE_MOLEX_VENDOR "Molex Inc. " 287 288 288 289 struct xgbe_sfp_ascii { 289 290 union { ··· 837 834 break; 838 835 case XGBE_SFP_SPEED_10000: 839 836 min = XGBE_SFP_BASE_BR_10GBE_MIN; 840 - max = XGBE_SFP_BASE_BR_10GBE_MAX; 837 + if (memcmp(&sfp_eeprom->base[XGBE_SFP_BASE_VENDOR_NAME], 838 + XGBE_MOLEX_VENDOR, XGBE_SFP_BASE_VENDOR_NAME_LEN) == 0) 839 + max = XGBE_MOLEX_SFP_BASE_BR_10GBE_MAX; 840 + else 841 + max = XGBE_SFP_BASE_BR_10GBE_MAX; 841 842 break; 842 843 default: 843 844 return false; ··· 1158 1151 } 1159 1152 1160 1153 /* Determine the type of SFP */ 1161 - if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR) 1154 + if (phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE && 1155 + xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000)) 1156 + phy_data->sfp_base = XGBE_SFP_BASE_10000_CR; 1157 + else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_SR) 1162 1158 phy_data->sfp_base = XGBE_SFP_BASE_10000_SR; 1163 1159 else if (sfp_base[XGBE_SFP_BASE_10GBE_CC] & XGBE_SFP_BASE_10GBE_CC_LR) 1164 1160 phy_data->sfp_base = XGBE_SFP_BASE_10000_LR; ··· 1177 1167 phy_data->sfp_base = XGBE_SFP_BASE_1000_CX; 1178 1168 else if (sfp_base[XGBE_SFP_BASE_1GBE_CC] & XGBE_SFP_BASE_1GBE_CC_T) 1179 1169 phy_data->sfp_base = XGBE_SFP_BASE_1000_T; 1180 - else if ((phy_data->sfp_cable == XGBE_SFP_CABLE_PASSIVE) && 1181 - xgbe_phy_sfp_bit_rate(sfp_eeprom, XGBE_SFP_SPEED_10000)) 1182 - phy_data->sfp_base = XGBE_SFP_BASE_10000_CR; 1183 1170 1184 1171 switch (phy_data->sfp_base) { 1185 1172 case XGBE_SFP_BASE_1000_T: ··· 1986 1979 1987 1980 static void xgbe_phy_pll_ctrl(struct 
xgbe_prv_data *pdata, bool enable) 1988 1981 { 1982 + /* PLL_CTRL feature needs to be enabled for fixed PHY modes (Non-Autoneg) only */ 1983 + if (pdata->phy.autoneg != AUTONEG_DISABLE) 1984 + return; 1985 + 1989 1986 XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_VEND2_PMA_MISC_CTRL0, 1990 1987 XGBE_PMA_PLL_CTRL_MASK, 1991 1988 enable ? XGBE_PMA_PLL_CTRL_ENABLE ··· 2000 1989 } 2001 1990 2002 1991 static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, 2003 - unsigned int cmd, unsigned int sub_cmd) 1992 + enum xgbe_mb_cmd cmd, enum xgbe_mb_subcmd sub_cmd) 2004 1993 { 2005 1994 unsigned int s0 = 0; 2006 1995 unsigned int wait; ··· 2040 2029 xgbe_phy_rx_reset(pdata); 2041 2030 2042 2031 reenable_pll: 2043 - /* Enable PLL re-initialization */ 2044 - xgbe_phy_pll_ctrl(pdata, true); 2032 + /* Enable PLL re-initialization, not needed for PHY Power Off and RRC cmds */ 2033 + if (cmd != XGBE_MB_CMD_POWER_OFF && 2034 + cmd != XGBE_MB_CMD_RRC) 2035 + xgbe_phy_pll_ctrl(pdata, true); 2045 2036 } 2046 2037 2047 2038 static void xgbe_phy_rrc(struct xgbe_prv_data *pdata) 2048 2039 { 2049 2040 /* Receiver Reset Cycle */ 2050 - xgbe_phy_perform_ratechange(pdata, 5, 0); 2041 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_RRC, XGBE_MB_SUBCMD_NONE); 2051 2042 2052 2043 netif_dbg(pdata, link, pdata->netdev, "receiver reset complete\n"); 2053 2044 } ··· 2059 2046 struct xgbe_phy_data *phy_data = pdata->phy_data; 2060 2047 2061 2048 /* Power off */ 2062 - xgbe_phy_perform_ratechange(pdata, 0, 0); 2049 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_POWER_OFF, XGBE_MB_SUBCMD_NONE); 2063 2050 2064 2051 phy_data->cur_mode = XGBE_MODE_UNKNOWN; 2065 2052 ··· 2074 2061 2075 2062 /* 10G/SFI */ 2076 2063 if (phy_data->sfp_cable != XGBE_SFP_CABLE_PASSIVE) { 2077 - xgbe_phy_perform_ratechange(pdata, 3, 0); 2064 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, XGBE_MB_SUBCMD_ACTIVE); 2078 2065 } else { 2079 2066 if (phy_data->sfp_cable_len <= 1) 2080 - 
xgbe_phy_perform_ratechange(pdata, 3, 1); 2067 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, 2068 + XGBE_MB_SUBCMD_PASSIVE_1M); 2081 2069 else if (phy_data->sfp_cable_len <= 3) 2082 - xgbe_phy_perform_ratechange(pdata, 3, 2); 2070 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, 2071 + XGBE_MB_SUBCMD_PASSIVE_3M); 2083 2072 else 2084 - xgbe_phy_perform_ratechange(pdata, 3, 3); 2073 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_SFI, 2074 + XGBE_MB_SUBCMD_PASSIVE_OTHER); 2085 2075 } 2086 2076 2087 2077 phy_data->cur_mode = XGBE_MODE_SFI; ··· 2099 2083 xgbe_phy_set_redrv_mode(pdata); 2100 2084 2101 2085 /* 1G/X */ 2102 - xgbe_phy_perform_ratechange(pdata, 1, 3); 2086 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX); 2103 2087 2104 2088 phy_data->cur_mode = XGBE_MODE_X; 2105 2089 ··· 2113 2097 xgbe_phy_set_redrv_mode(pdata); 2114 2098 2115 2099 /* 1G/SGMII */ 2116 - xgbe_phy_perform_ratechange(pdata, 1, 2); 2100 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_SGMII); 2117 2101 2118 2102 phy_data->cur_mode = XGBE_MODE_SGMII_1000; 2119 2103 ··· 2127 2111 xgbe_phy_set_redrv_mode(pdata); 2128 2112 2129 2113 /* 100M/SGMII */ 2130 - xgbe_phy_perform_ratechange(pdata, 1, 1); 2114 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_100MBITS); 2131 2115 2132 2116 phy_data->cur_mode = XGBE_MODE_SGMII_100; 2133 2117 ··· 2141 2125 xgbe_phy_set_redrv_mode(pdata); 2142 2126 2143 2127 /* 10G/KR */ 2144 - xgbe_phy_perform_ratechange(pdata, 4, 0); 2128 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_10G_KR, XGBE_MB_SUBCMD_NONE); 2145 2129 2146 2130 phy_data->cur_mode = XGBE_MODE_KR; 2147 2131 ··· 2155 2139 xgbe_phy_set_redrv_mode(pdata); 2156 2140 2157 2141 /* 2.5G/KX */ 2158 - xgbe_phy_perform_ratechange(pdata, 2, 0); 2142 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_2_5G, XGBE_MB_SUBCMD_NONE); 2159 2143 2160 2144 phy_data->cur_mode = XGBE_MODE_KX_2500; 2161 
2145 ··· 2169 2153 xgbe_phy_set_redrv_mode(pdata); 2170 2154 2171 2155 /* 1G/KX */ 2172 - xgbe_phy_perform_ratechange(pdata, 1, 3); 2156 + xgbe_phy_perform_ratechange(pdata, XGBE_MB_CMD_SET_1G, XGBE_MB_SUBCMD_1G_KX); 2173 2157 2174 2158 phy_data->cur_mode = XGBE_MODE_KX_1000; 2175 2159 ··· 2656 2640 } 2657 2641 2658 2642 /* No link, attempt a receiver reset cycle */ 2659 - if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) { 2643 + if (pdata->vdata->enable_rrc && phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) { 2660 2644 phy_data->rrc_count = 0; 2661 2645 xgbe_phy_rrc(pdata); 2662 2646 }
+26
drivers/net/ethernet/amd/xgbe/xgbe.h
··· 611 611 XGBE_MDIO_MODE_CL45, 612 612 }; 613 613 614 + enum xgbe_mb_cmd { 615 + XGBE_MB_CMD_POWER_OFF = 0, 616 + XGBE_MB_CMD_SET_1G, 617 + XGBE_MB_CMD_SET_2_5G, 618 + XGBE_MB_CMD_SET_10G_SFI, 619 + XGBE_MB_CMD_SET_10G_KR, 620 + XGBE_MB_CMD_RRC 621 + }; 622 + 623 + enum xgbe_mb_subcmd { 624 + XGBE_MB_SUBCMD_NONE = 0, 625 + 626 + /* 10GbE SFP subcommands */ 627 + XGBE_MB_SUBCMD_ACTIVE = 0, 628 + XGBE_MB_SUBCMD_PASSIVE_1M, 629 + XGBE_MB_SUBCMD_PASSIVE_3M, 630 + XGBE_MB_SUBCMD_PASSIVE_OTHER, 631 + 632 + /* 1GbE Mode subcommands */ 633 + XGBE_MB_SUBCMD_10MBITS = 0, 634 + XGBE_MB_SUBCMD_100MBITS, 635 + XGBE_MB_SUBCMD_1G_SGMII, 636 + XGBE_MB_SUBCMD_1G_KX 637 + }; 638 + 614 639 struct xgbe_phy { 615 640 struct ethtool_link_ksettings lks; 616 641 ··· 1038 1013 unsigned int tx_desc_prefetch; 1039 1014 unsigned int rx_desc_prefetch; 1040 1015 unsigned int an_cdr_workaround; 1016 + unsigned int enable_rrc; 1041 1017 }; 1042 1018 1043 1019 struct xgbe_prv_data {
+72 -24
drivers/net/ethernet/aquantia/atlantic/aq_macsec.c
··· 1394 1394 egress_sa_threshold_expired); 1395 1395 } 1396 1396 1397 + #define AQ_LOCKED_MDO_DEF(mdo) \ 1398 + static int aq_locked_mdo_##mdo(struct macsec_context *ctx) \ 1399 + { \ 1400 + struct aq_nic_s *nic = netdev_priv(ctx->netdev); \ 1401 + int ret; \ 1402 + mutex_lock(&nic->macsec_mutex); \ 1403 + ret = aq_mdo_##mdo(ctx); \ 1404 + mutex_unlock(&nic->macsec_mutex); \ 1405 + return ret; \ 1406 + } 1407 + 1408 + AQ_LOCKED_MDO_DEF(dev_open) 1409 + AQ_LOCKED_MDO_DEF(dev_stop) 1410 + AQ_LOCKED_MDO_DEF(add_secy) 1411 + AQ_LOCKED_MDO_DEF(upd_secy) 1412 + AQ_LOCKED_MDO_DEF(del_secy) 1413 + AQ_LOCKED_MDO_DEF(add_rxsc) 1414 + AQ_LOCKED_MDO_DEF(upd_rxsc) 1415 + AQ_LOCKED_MDO_DEF(del_rxsc) 1416 + AQ_LOCKED_MDO_DEF(add_rxsa) 1417 + AQ_LOCKED_MDO_DEF(upd_rxsa) 1418 + AQ_LOCKED_MDO_DEF(del_rxsa) 1419 + AQ_LOCKED_MDO_DEF(add_txsa) 1420 + AQ_LOCKED_MDO_DEF(upd_txsa) 1421 + AQ_LOCKED_MDO_DEF(del_txsa) 1422 + AQ_LOCKED_MDO_DEF(get_dev_stats) 1423 + AQ_LOCKED_MDO_DEF(get_tx_sc_stats) 1424 + AQ_LOCKED_MDO_DEF(get_tx_sa_stats) 1425 + AQ_LOCKED_MDO_DEF(get_rx_sc_stats) 1426 + AQ_LOCKED_MDO_DEF(get_rx_sa_stats) 1427 + 1397 1428 const struct macsec_ops aq_macsec_ops = { 1398 - .mdo_dev_open = aq_mdo_dev_open, 1399 - .mdo_dev_stop = aq_mdo_dev_stop, 1400 - .mdo_add_secy = aq_mdo_add_secy, 1401 - .mdo_upd_secy = aq_mdo_upd_secy, 1402 - .mdo_del_secy = aq_mdo_del_secy, 1403 - .mdo_add_rxsc = aq_mdo_add_rxsc, 1404 - .mdo_upd_rxsc = aq_mdo_upd_rxsc, 1405 - .mdo_del_rxsc = aq_mdo_del_rxsc, 1406 - .mdo_add_rxsa = aq_mdo_add_rxsa, 1407 - .mdo_upd_rxsa = aq_mdo_upd_rxsa, 1408 - .mdo_del_rxsa = aq_mdo_del_rxsa, 1409 - .mdo_add_txsa = aq_mdo_add_txsa, 1410 - .mdo_upd_txsa = aq_mdo_upd_txsa, 1411 - .mdo_del_txsa = aq_mdo_del_txsa, 1412 - .mdo_get_dev_stats = aq_mdo_get_dev_stats, 1413 - .mdo_get_tx_sc_stats = aq_mdo_get_tx_sc_stats, 1414 - .mdo_get_tx_sa_stats = aq_mdo_get_tx_sa_stats, 1415 - .mdo_get_rx_sc_stats = aq_mdo_get_rx_sc_stats, 1416 - .mdo_get_rx_sa_stats = aq_mdo_get_rx_sa_stats, 
1429 + .mdo_dev_open = aq_locked_mdo_dev_open, 1430 + .mdo_dev_stop = aq_locked_mdo_dev_stop, 1431 + .mdo_add_secy = aq_locked_mdo_add_secy, 1432 + .mdo_upd_secy = aq_locked_mdo_upd_secy, 1433 + .mdo_del_secy = aq_locked_mdo_del_secy, 1434 + .mdo_add_rxsc = aq_locked_mdo_add_rxsc, 1435 + .mdo_upd_rxsc = aq_locked_mdo_upd_rxsc, 1436 + .mdo_del_rxsc = aq_locked_mdo_del_rxsc, 1437 + .mdo_add_rxsa = aq_locked_mdo_add_rxsa, 1438 + .mdo_upd_rxsa = aq_locked_mdo_upd_rxsa, 1439 + .mdo_del_rxsa = aq_locked_mdo_del_rxsa, 1440 + .mdo_add_txsa = aq_locked_mdo_add_txsa, 1441 + .mdo_upd_txsa = aq_locked_mdo_upd_txsa, 1442 + .mdo_del_txsa = aq_locked_mdo_del_txsa, 1443 + .mdo_get_dev_stats = aq_locked_mdo_get_dev_stats, 1444 + .mdo_get_tx_sc_stats = aq_locked_mdo_get_tx_sc_stats, 1445 + .mdo_get_tx_sa_stats = aq_locked_mdo_get_tx_sa_stats, 1446 + .mdo_get_rx_sc_stats = aq_locked_mdo_get_rx_sc_stats, 1447 + .mdo_get_rx_sa_stats = aq_locked_mdo_get_rx_sa_stats, 1417 1448 }; 1418 1449 1419 1450 int aq_macsec_init(struct aq_nic_s *nic) ··· 1466 1435 1467 1436 nic->ndev->features |= NETIF_F_HW_MACSEC; 1468 1437 nic->ndev->macsec_ops = &aq_macsec_ops; 1438 + mutex_init(&nic->macsec_mutex); 1469 1439 1470 1440 return 0; 1471 1441 } ··· 1490 1458 if (!nic->macsec_cfg) 1491 1459 return 0; 1492 1460 1493 - rtnl_lock(); 1461 + mutex_lock(&nic->macsec_mutex); 1494 1462 1495 1463 if (nic->aq_fw_ops->send_macsec_req) { 1496 1464 struct macsec_cfg_request cfg = { 0 }; ··· 1539 1507 ret = aq_apply_macsec_cfg(nic); 1540 1508 1541 1509 unlock: 1542 - rtnl_unlock(); 1510 + mutex_unlock(&nic->macsec_mutex); 1543 1511 return ret; 1544 1512 } 1545 1513 ··· 1551 1519 if (!netif_carrier_ok(nic->ndev)) 1552 1520 return; 1553 1521 1554 - rtnl_lock(); 1522 + mutex_lock(&nic->macsec_mutex); 1555 1523 aq_check_txsa_expiration(nic); 1556 - rtnl_unlock(); 1524 + mutex_unlock(&nic->macsec_mutex); 1557 1525 } 1558 1526 1559 1527 int aq_macsec_rx_sa_cnt(struct aq_nic_s *nic) ··· 1564 1532 if (!cfg) 1565 1533 
return 0; 1566 1534 1535 + mutex_lock(&nic->macsec_mutex); 1536 + 1567 1537 for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { 1568 1538 if (!test_bit(i, &cfg->rxsc_idx_busy)) 1569 1539 continue; 1570 1540 cnt += hweight_long(cfg->aq_rxsc[i].rx_sa_idx_busy); 1571 1541 } 1572 1542 1543 + mutex_unlock(&nic->macsec_mutex); 1573 1544 return cnt; 1574 1545 } 1575 1546 1576 1547 int aq_macsec_tx_sc_cnt(struct aq_nic_s *nic) 1577 1548 { 1549 + int cnt; 1550 + 1578 1551 if (!nic->macsec_cfg) 1579 1552 return 0; 1580 1553 1581 - return hweight_long(nic->macsec_cfg->txsc_idx_busy); 1554 + mutex_lock(&nic->macsec_mutex); 1555 + cnt = hweight_long(nic->macsec_cfg->txsc_idx_busy); 1556 + mutex_unlock(&nic->macsec_mutex); 1557 + 1558 + return cnt; 1582 1559 } 1583 1560 1584 1561 int aq_macsec_tx_sa_cnt(struct aq_nic_s *nic) ··· 1598 1557 if (!cfg) 1599 1558 return 0; 1600 1559 1560 + mutex_lock(&nic->macsec_mutex); 1561 + 1601 1562 for (i = 0; i < AQ_MACSEC_MAX_SC; i++) { 1602 1563 if (!test_bit(i, &cfg->txsc_idx_busy)) 1603 1564 continue; 1604 1565 cnt += hweight_long(cfg->aq_txsc[i].tx_sa_idx_busy); 1605 1566 } 1606 1567 1568 + mutex_unlock(&nic->macsec_mutex); 1607 1569 return cnt; 1608 1570 } 1609 1571 ··· 1677 1633 1678 1634 if (!cfg) 1679 1635 return data; 1636 + 1637 + mutex_lock(&nic->macsec_mutex); 1680 1638 1681 1639 aq_macsec_update_stats(nic); 1682 1640 ··· 1761 1715 i++; 1762 1716 1763 1717 data += i; 1718 + 1719 + mutex_unlock(&nic->macsec_mutex); 1764 1720 1765 1721 return data; 1766 1722 }
+2
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
···
157 157 struct mutex fwreq_mutex;
158 158 #if IS_ENABLED(CONFIG_MACSEC)
159 159 struct aq_macsec_cfg *macsec_cfg;
160 + /* mutex to protect data in macsec_cfg */
161 + struct mutex macsec_mutex;
160 162 #endif
161 163 /* PTP support */
162 164 struct aq_ptp_s *aq_ptp;
+1
drivers/net/ethernet/cadence/macb_main.c
···
806 806
807 807 bp->phylink_config.dev = &dev->dev;
808 808 bp->phylink_config.type = PHYLINK_NETDEV;
809 + bp->phylink_config.mac_managed_pm = true;
809 810
810 811 if (bp->phy_interface == PHY_INTERFACE_MODE_SGMII) {
811 812 bp->phylink_config.poll_fixed_state = true;
+2 -2
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
···
221 221 net_dev->netdev_ops = dpaa_ops;
222 222 mac_addr = mac_dev->addr;
223 223
224 - net_dev->mem_start = (unsigned long)mac_dev->vaddr;
225 - net_dev->mem_end = (unsigned long)mac_dev->vaddr_end;
224 + net_dev->mem_start = (unsigned long)priv->mac_dev->res->start;
225 + net_dev->mem_end = (unsigned long)priv->mac_dev->res->end;
226 226
227 227 net_dev->min_mtu = ETH_MIN_MTU;
228 228 net_dev->max_mtu = dpaa_get_max_mtu();
+1 -1
drivers/net/ethernet/freescale/dpaa/dpaa_eth_sysfs.c
···
18 18
19 19 if (mac_dev)
20 20 return sprintf(buf, "%llx",
21 - (unsigned long long)mac_dev->vaddr);
21 + (unsigned long long)mac_dev->res->start);
22 22 else
23 23 return sprintf(buf, "none");
24 24 }
+6 -6
drivers/net/ethernet/freescale/fman/mac.c
··· 158 158 struct device_node *mac_node, *dev_node; 159 159 struct mac_device *mac_dev; 160 160 struct platform_device *of_dev; 161 - struct resource *res; 162 161 struct mac_priv_s *priv; 163 162 struct fman_mac_params params; 164 163 u32 val; ··· 217 218 of_node_put(dev_node); 218 219 219 220 /* Get the address of the memory mapped registers */ 220 - res = platform_get_mem_or_io(_of_dev, 0); 221 - if (!res) { 221 + mac_dev->res = platform_get_mem_or_io(_of_dev, 0); 222 + if (!mac_dev->res) { 222 223 dev_err(dev, "could not get registers\n"); 223 224 return -EINVAL; 224 225 } 225 226 226 - err = devm_request_resource(dev, fman_get_mem_region(priv->fman), res); 227 + err = devm_request_resource(dev, fman_get_mem_region(priv->fman), 228 + mac_dev->res); 227 229 if (err) { 228 230 dev_err_probe(dev, err, "could not request resource\n"); 229 231 return err; 230 232 } 231 233 232 - mac_dev->vaddr = devm_ioremap(dev, res->start, resource_size(res)); 234 + mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start, 235 + resource_size(mac_dev->res)); 233 236 if (!mac_dev->vaddr) { 234 237 dev_err(dev, "devm_ioremap() failed\n"); 235 238 return -EIO; 236 239 } 237 - mac_dev->vaddr_end = mac_dev->vaddr + resource_size(res); 238 240 239 241 if (!of_device_is_available(mac_node)) 240 242 return -ENODEV;
+1 -1
drivers/net/ethernet/freescale/fman/mac.h
··· 21 21 22 22 struct mac_device { 23 23 void __iomem *vaddr; 24 - void __iomem *vaddr_end; 25 24 struct device *dev; 25 + struct resource *res; 26 26 u8 addr[ETH_ALEN]; 27 27 struct fman_port *port[2]; 28 28 struct phylink *phylink;
+12 -6
drivers/net/ethernet/huawei/hinic/hinic_debugfs.c
··· 85 85 struct tag_sml_funcfg_tbl *funcfg_table_elem; 86 86 struct hinic_cmd_lt_rd *read_data; 87 87 u16 out_size = sizeof(*read_data); 88 + int ret = ~0; 88 89 int err; 89 90 90 91 read_data = kzalloc(sizeof(*read_data), GFP_KERNEL); ··· 112 111 113 112 switch (idx) { 114 113 case VALID: 115 - return funcfg_table_elem->dw0.bs.valid; 114 + ret = funcfg_table_elem->dw0.bs.valid; 115 + break; 116 116 case RX_MODE: 117 - return funcfg_table_elem->dw0.bs.nic_rx_mode; 117 + ret = funcfg_table_elem->dw0.bs.nic_rx_mode; 118 + break; 118 119 case MTU: 119 - return funcfg_table_elem->dw1.bs.mtu; 120 + ret = funcfg_table_elem->dw1.bs.mtu; 121 + break; 120 122 case RQ_DEPTH: 121 - return funcfg_table_elem->dw13.bs.cfg_rq_depth; 123 + ret = funcfg_table_elem->dw13.bs.cfg_rq_depth; 124 + break; 122 125 case QUEUE_NUM: 123 - return funcfg_table_elem->dw13.bs.cfg_q_num; 126 + ret = funcfg_table_elem->dw13.bs.cfg_q_num; 127 + break; 124 128 } 125 129 126 130 kfree(read_data); 127 131 128 - return ~0; 132 + return ret; 129 133 } 130 134 131 135 static ssize_t hinic_dbg_cmd_read(struct file *filp, char __user *buffer, size_t count,
+1 -1
drivers/net/ethernet/huawei/hinic/hinic_hw_cmdq.c
···
924 924
925 925 err_set_cmdq_depth:
926 926 hinic_ceq_unregister_cb(&func_to_io->ceqs, HINIC_CEQ_CMDQ);
927 -
927 + free_cmdq(&cmdqs->cmdq[HINIC_CMDQ_SYNC]);
928 928 err_cmdq_ctxt:
929 929 hinic_wqs_cmdq_free(&cmdqs->cmdq_pages, cmdqs->saved_wqs,
930 930 HINIC_MAX_CMDQ_TYPES);
+1 -1
drivers/net/ethernet/huawei/hinic/hinic_hw_dev.c
···
877 877 if (err)
878 878 return -EINVAL;
879 879
880 - interrupt_info->lli_credit_cnt = temp_info.lli_timer_cnt;
880 + interrupt_info->lli_credit_cnt = temp_info.lli_credit_cnt;
881 881 interrupt_info->lli_timer_cnt = temp_info.lli_timer_cnt;
882 882
883 883 err = hinic_msg_to_mgmt(&pfhwdev->pf_to_mgmt, HINIC_MOD_COMM,
-1
drivers/net/ethernet/huawei/hinic/hinic_sriov.c
···
1174 1174 dev_err(&hwdev->hwif->pdev->dev,
1175 1175 "Failed to register VF, err: %d, status: 0x%x, out size: 0x%x\n",
1176 1176 err, register_info.status, out_size);
1177 - hinic_unregister_vf_mbox_cb(hwdev, HINIC_MOD_L2NIC);
1178 1177 return -EIO;
1179 1178 }
1180 1179 } else {
-1
drivers/net/ethernet/lantiq_etop.c
···
485 485 len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
486 486
487 487 if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
488 - dev_kfree_skb_any(skb);
489 488 netdev_err(dev, "tx ring full\n");
490 489 netif_tx_stop_queue(txq);
491 490 return NETDEV_TX_BUSY;
+1 -10
drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
··· 1846 1846 void mlx5e_macsec_cleanup(struct mlx5e_priv *priv) 1847 1847 { 1848 1848 struct mlx5e_macsec *macsec = priv->macsec; 1849 - struct mlx5_core_dev *mdev = macsec->mdev; 1849 + struct mlx5_core_dev *mdev = priv->mdev; 1850 1850 1851 1851 if (!macsec) 1852 1852 return; 1853 1853 1854 1854 mlx5_notifier_unregister(mdev, &macsec->nb); 1855 - 1856 1855 mlx5e_macsec_fs_cleanup(macsec->macsec_fs); 1857 - 1858 - /* Cleanup workqueue */ 1859 1856 destroy_workqueue(macsec->wq); 1860 - 1861 1857 mlx5e_macsec_aso_cleanup(&macsec->aso, mdev); 1862 - 1863 - priv->macsec = NULL; 1864 - 1865 1858 rhashtable_destroy(&macsec->sci_hash); 1866 - 1867 1859 mutex_destroy(&macsec->lock); 1868 - 1869 1860 kfree(macsec); 1870 1861 }
+9 -1
drivers/net/ethernet/microchip/lan966x/lan966x_ethtool.c
··· 656 656 stats->rx_dropped = dev->stats.rx_dropped + 657 657 lan966x->stats[idx + SYS_COUNT_RX_LONG] + 658 658 lan966x->stats[idx + SYS_COUNT_DR_LOCAL] + 659 - lan966x->stats[idx + SYS_COUNT_DR_TAIL]; 659 + lan966x->stats[idx + SYS_COUNT_DR_TAIL] + 660 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_0] + 661 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_1] + 662 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_2] + 663 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_3] + 664 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_4] + 665 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_5] + 666 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_6] + 667 + lan966x->stats[idx + SYS_COUNT_RX_RED_PRIO_7]; 660 668 661 669 for (i = 0; i < LAN966X_NUM_TC; i++) { 662 670 stats->rx_dropped +=
+15 -23
drivers/net/ethernet/netronome/nfp/nfp_main.c
··· 716 716 return val; 717 717 } 718 718 719 - static int nfp_pf_cfg_hwinfo(struct nfp_pf *pf, bool sp_indiff) 719 + static void nfp_pf_cfg_hwinfo(struct nfp_pf *pf) 720 720 { 721 721 struct nfp_nsp *nsp; 722 722 char hwinfo[32]; 723 + bool sp_indiff; 723 724 int err; 724 725 725 726 nsp = nfp_nsp_open(pf->cpp); 726 727 if (IS_ERR(nsp)) 727 - return PTR_ERR(nsp); 728 + return; 728 729 730 + if (!nfp_nsp_has_hwinfo_set(nsp)) 731 + goto end; 732 + 733 + sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) || 734 + (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF); 735 + 736 + /* No need to clean `sp_indiff` in driver, management firmware 737 + * will do it when application firmware is unloaded. 738 + */ 729 739 snprintf(hwinfo, sizeof(hwinfo), "sp_indiff=%d", sp_indiff); 730 740 err = nfp_nsp_hwinfo_set(nsp, hwinfo, sizeof(hwinfo)); 731 741 /* Not a fatal error, no need to return error to stop driver from loading */ ··· 749 739 pf->eth_tbl = __nfp_eth_read_ports(pf->cpp, nsp); 750 740 } 751 741 742 + end: 752 743 nfp_nsp_close(nsp); 753 - return 0; 754 - } 755 - 756 - static int nfp_pf_nsp_cfg(struct nfp_pf *pf) 757 - { 758 - bool sp_indiff = (nfp_net_pf_get_app_id(pf) == NFP_APP_FLOWER_NIC) || 759 - (nfp_net_pf_get_app_cap(pf) & NFP_NET_APP_CAP_SP_INDIFF); 760 - 761 - return nfp_pf_cfg_hwinfo(pf, sp_indiff); 762 - } 763 - 764 - static void nfp_pf_nsp_clean(struct nfp_pf *pf) 765 - { 766 - nfp_pf_cfg_hwinfo(pf, false); 767 744 } 768 745 769 746 static int nfp_pci_probe(struct pci_dev *pdev, ··· 853 856 goto err_fw_unload; 854 857 } 855 858 856 - err = nfp_pf_nsp_cfg(pf); 857 - if (err) 858 - goto err_fw_unload; 859 + nfp_pf_cfg_hwinfo(pf); 859 860 860 861 err = nfp_net_pci_probe(pf); 861 862 if (err) 862 - goto err_nsp_clean; 863 + goto err_fw_unload; 863 864 864 865 err = nfp_hwmon_register(pf); 865 866 if (err) { ··· 869 874 870 875 err_net_remove: 871 876 nfp_net_pci_remove(pf); 872 - err_nsp_clean: 873 - nfp_pf_nsp_clean(pf); 874 877 
err_fw_unload: 875 878 kfree(pf->rtbl); 876 879 nfp_mip_close(pf->mip); ··· 908 915 909 916 nfp_net_pci_remove(pf); 910 917 911 - nfp_pf_nsp_clean(pf); 912 918 vfree(pf->dumpspec); 913 919 kfree(pf->rtbl); 914 920 nfp_mip_close(pf->mip);
+2
drivers/net/ethernet/socionext/netsec.c
··· 1961 1961 ret = PTR_ERR(priv->phydev); 1962 1962 dev_err(priv->dev, "get_phy_device err(%d)\n", ret); 1963 1963 priv->phydev = NULL; 1964 + mdiobus_unregister(bus); 1964 1965 return -ENODEV; 1965 1966 } 1966 1967 1967 1968 ret = phy_device_register(priv->phydev); 1968 1969 if (ret) { 1970 + phy_device_free(priv->phydev); 1969 1971 mdiobus_unregister(bus); 1970 1972 dev_err(priv->dev, 1971 1973 "phy_device_register err(%d)\n", ret);
+3
drivers/nfc/virtual_ncidev.c
···
54 54 mutex_lock(&nci_mutex);
55 55 if (state != virtual_ncidev_enabled) {
56 56 mutex_unlock(&nci_mutex);
57 + kfree_skb(skb);
57 58 return 0;
58 59 }
59 60
60 61 if (send_buff) {
61 62 mutex_unlock(&nci_mutex);
63 + kfree_skb(skb);
62 64 return -1;
63 65 }
64 66 send_buff = skb_copy(skb, GFP_KERNEL);
65 67 mutex_unlock(&nci_mutex);
66 68 wake_up_interruptible(&wq);
69 + consume_skb(skb);
67 70
68 71 return 0;
69 72 }
+2
drivers/nvme/host/apple.c
···
1039 1039 dma_max_mapping_size(anv->dev) >> 9);
1040 1040 anv->ctrl.max_segments = NVME_MAX_SEGS;
1041 1041
1042 + dma_set_max_seg_size(anv->dev, 0xffffffff);
1043 +
1042 1044 /*
1043 1045 * Enable NVMMU and linear submission queues.
1044 1046 * While we could keep those disabled and pretend this is slightly
+6 -2
drivers/nvme/host/core.c
··· 3262 3262 return ret; 3263 3263 3264 3264 if (!ctrl->identified && !nvme_discovery_ctrl(ctrl)) { 3265 + /* 3266 + * Do not return errors unless we are in a controller reset, 3267 + * the controller works perfectly fine without hwmon. 3268 + */ 3265 3269 ret = nvme_hwmon_init(ctrl); 3266 - if (ret < 0) 3270 + if (ret == -EINTR) 3267 3271 return ret; 3268 3272 } 3269 3273 ··· 4850 4846 return 0; 4851 4847 4852 4848 out_cleanup_admin_q: 4853 - blk_mq_destroy_queue(ctrl->fabrics_q); 4849 + blk_mq_destroy_queue(ctrl->admin_q); 4854 4850 out_free_tagset: 4855 4851 blk_mq_free_tag_set(ctrl->admin_tagset); 4856 4852 return ret;
+22 -10
drivers/nvme/host/hwmon.c
··· 12 12 13 13 struct nvme_hwmon_data { 14 14 struct nvme_ctrl *ctrl; 15 - struct nvme_smart_log log; 15 + struct nvme_smart_log *log; 16 16 struct mutex read_lock; 17 17 }; 18 18 ··· 60 60 static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data) 61 61 { 62 62 return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0, 63 - NVME_CSI_NVM, &data->log, sizeof(data->log), 0); 63 + NVME_CSI_NVM, data->log, sizeof(*data->log), 0); 64 64 } 65 65 66 66 static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type, 67 67 u32 attr, int channel, long *val) 68 68 { 69 69 struct nvme_hwmon_data *data = dev_get_drvdata(dev); 70 - struct nvme_smart_log *log = &data->log; 70 + struct nvme_smart_log *log = data->log; 71 71 int temp; 72 72 int err; 73 73 ··· 163 163 case hwmon_temp_max: 164 164 case hwmon_temp_min: 165 165 if ((!channel && data->ctrl->wctemp) || 166 - (channel && data->log.temp_sensor[channel - 1])) { 166 + (channel && data->log->temp_sensor[channel - 1])) { 167 167 if (data->ctrl->quirks & 168 168 NVME_QUIRK_NO_TEMP_THRESH_CHANGE) 169 169 return 0444; ··· 176 176 break; 177 177 case hwmon_temp_input: 178 178 case hwmon_temp_label: 179 - if (!channel || data->log.temp_sensor[channel - 1]) 179 + if (!channel || data->log->temp_sensor[channel - 1]) 180 180 return 0444; 181 181 break; 182 182 default: ··· 230 230 231 231 data = kzalloc(sizeof(*data), GFP_KERNEL); 232 232 if (!data) 233 - return 0; 233 + return -ENOMEM; 234 + 235 + data->log = kzalloc(sizeof(*data->log), GFP_KERNEL); 236 + if (!data->log) { 237 + err = -ENOMEM; 238 + goto err_free_data; 239 + } 234 240 235 241 data->ctrl = ctrl; 236 242 mutex_init(&data->read_lock); ··· 244 238 err = nvme_hwmon_get_smart_log(data); 245 239 if (err) { 246 240 dev_warn(dev, "Failed to read smart log (error %d)\n", err); 247 - kfree(data); 248 - return err; 241 + goto err_free_log; 249 242 } 250 243 251 244 hwmon = hwmon_device_register_with_info(dev, "nvme", ··· 252 247 NULL); 253 248 if 
(IS_ERR(hwmon)) { 254 249 dev_warn(dev, "Failed to instantiate hwmon device\n"); 255 - kfree(data); 256 - return PTR_ERR(hwmon); 250 + err = PTR_ERR(hwmon); 251 + goto err_free_log; 257 252 } 258 253 ctrl->hwmon_device = hwmon; 259 254 return 0; 255 + 256 + err_free_log: 257 + kfree(data->log); 258 + err_free_data: 259 + kfree(data); 260 + return err; 260 261 } 261 262 262 263 void nvme_hwmon_exit(struct nvme_ctrl *ctrl) ··· 273 262 274 263 hwmon_device_unregister(ctrl->hwmon_device); 275 264 ctrl->hwmon_device = NULL; 265 + kfree(data->log); 276 266 kfree(data); 277 267 } 278 268 }
+10
drivers/nvme/host/pci.c
··· 3511 3511 .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, 3512 3512 { PCI_DEVICE(0x2646, 0x2263), /* KINGSTON A2000 NVMe SSD */ 3513 3513 .driver_data = NVME_QUIRK_NO_DEEPEST_PS, }, 3514 + { PCI_DEVICE(0x2646, 0x5018), /* KINGSTON OM8SFP4xxxxP OS21012 NVMe SSD */ 3515 + .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3516 + { PCI_DEVICE(0x2646, 0x5016), /* KINGSTON OM3PGP4xxxxP OS21011 NVMe SSD */ 3517 + .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3518 + { PCI_DEVICE(0x2646, 0x501A), /* KINGSTON OM8PGP4xxxxP OS21005 NVMe SSD */ 3519 + .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3520 + { PCI_DEVICE(0x2646, 0x501B), /* KINGSTON OM8PGP4xxxxQ OS21005 NVMe SSD */ 3521 + .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3522 + { PCI_DEVICE(0x2646, 0x501E), /* KINGSTON OM3PGP4xxxxQ OS21011 NVMe SSD */ 3523 + .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3514 3524 { PCI_DEVICE(0x1e4B, 0x1001), /* MAXIO MAP1001 */ 3515 3525 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3516 3526 { PCI_DEVICE(0x1e4B, 0x1002), /* MAXIO MAP1002 */
-4
drivers/nvme/target/configfs.c
··· 1290 1290 static ssize_t nvmet_subsys_attr_qid_max_store(struct config_item *item, 1291 1291 const char *page, size_t cnt) 1292 1292 { 1293 - struct nvmet_port *port = to_nvmet_port(item); 1294 1293 u16 qid_max; 1295 - 1296 - if (nvmet_is_port_enabled(port, __func__)) 1297 - return -EACCES; 1298 1294 1299 1295 if (sscanf(page, "%hu\n", &qid_max) != 1) 1300 1296 return -EINVAL;
+1 -1
drivers/nvme/target/core.c
···
1176 1176 * reset the keep alive timer when the controller is enabled.
1177 1177 */
1178 1178 if (ctrl->kato)
1179 - mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
1179 + mod_delayed_work(nvmet_wq, &ctrl->ka_work, ctrl->kato * HZ);
1180 1180 }
1181 1181
1182 1182 static void nvmet_clear_ctrl(struct nvmet_ctrl *ctrl)
+8 -3
drivers/pci/controller/pci-tegra.c
··· 415 415 * address (access to which generates correct config transaction) falls in 416 416 * this 4 KiB region. 417 417 */ 418 + static unsigned int tegra_pcie_conf_offset(u8 bus, unsigned int devfn, 419 + unsigned int where) 420 + { 421 + return ((where & 0xf00) << 16) | (bus << 16) | (PCI_SLOT(devfn) << 11) | 422 + (PCI_FUNC(devfn) << 8) | (where & 0xff); 423 + } 424 + 418 425 static void __iomem *tegra_pcie_map_bus(struct pci_bus *bus, 419 426 unsigned int devfn, 420 427 int where) ··· 443 436 unsigned int offset; 444 437 u32 base; 445 438 446 - offset = PCI_CONF1_EXT_ADDRESS(bus->number, PCI_SLOT(devfn), 447 - PCI_FUNC(devfn), where) & 448 - ~PCI_CONF1_ENABLE; 439 + offset = tegra_pcie_conf_offset(bus->number, devfn, where); 449 440 450 441 /* move 4 KiB window to offset within the FPCI region */ 451 442 base = 0xfe100000 + ((offset & ~(SZ_4K - 1)) >> 8);
+2 -2
drivers/pinctrl/pinctrl-ingenic.c
··· 667 667 static const struct group_desc jz4755_groups[] = { 668 668 INGENIC_PIN_GROUP("uart0-data", jz4755_uart0_data, 0), 669 669 INGENIC_PIN_GROUP("uart0-hwflow", jz4755_uart0_hwflow, 0), 670 - INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 0), 670 + INGENIC_PIN_GROUP("uart1-data", jz4755_uart1_data, 1), 671 671 INGENIC_PIN_GROUP("uart2-data", jz4755_uart2_data, 1), 672 672 INGENIC_PIN_GROUP("ssi-dt-b", jz4755_ssi_dt_b, 0), 673 673 INGENIC_PIN_GROUP("ssi-dt-f", jz4755_ssi_dt_f, 0), ··· 721 721 "ssi-ce1-b", "ssi-ce1-f", 722 722 }; 723 723 static const char *jz4755_mmc0_groups[] = { "mmc0-1bit", "mmc0-4bit", }; 724 - static const char *jz4755_mmc1_groups[] = { "mmc0-1bit", "mmc0-4bit", }; 724 + static const char *jz4755_mmc1_groups[] = { "mmc1-1bit", "mmc1-4bit", }; 725 725 static const char *jz4755_i2c_groups[] = { "i2c-data", }; 726 726 static const char *jz4755_cim_groups[] = { "cim-data", }; 727 727 static const char *jz4755_lcd_groups[] = {
+13 -4
drivers/pinctrl/pinctrl-ocelot.c
··· 1864 1864 if (val & bit) 1865 1865 ack = true; 1866 1866 1867 + /* Try to clear any rising edges */ 1868 + if (!active && ack) 1869 + regmap_write_bits(info->map, REG(OCELOT_GPIO_INTR, info, gpio), 1870 + bit, bit); 1871 + 1867 1872 /* Enable the interrupt now */ 1868 1873 gpiochip_enable_irq(chip, gpio); 1869 1874 regmap_update_bits(info->map, REG(OCELOT_GPIO_INTR_ENA, info, gpio), 1870 1875 bit, bit); 1871 1876 1872 1877 /* 1873 - * In case the interrupt line is still active and the interrupt 1874 - * controller has not seen any changes in the interrupt line, then it 1875 - * means that there happen another interrupt while the line was active. 1878 + * In case the interrupt line is still active then it means that 1879 + * there happen another interrupt while the line was active. 1876 1880 * So we missed that one, so we need to kick the interrupt again 1877 1881 * handler. 1878 1882 */ 1879 - if (active && !ack) { 1883 + regmap_read(info->map, REG(OCELOT_GPIO_IN, info, gpio), &val); 1884 + if ((!(val & bit) && trigger_level == IRQ_TYPE_LEVEL_LOW) || 1885 + (val & bit && trigger_level == IRQ_TYPE_LEVEL_HIGH)) 1886 + active = true; 1887 + 1888 + if (active) { 1880 1889 struct ocelot_irq_work *work; 1881 1890 1882 1891 work = kmalloc(sizeof(*work), GFP_ATOMIC);
-9
drivers/pinctrl/pinctrl-zynqmp.c
··· 412 412 413 413 break; 414 414 case PIN_CONFIG_BIAS_HIGH_IMPEDANCE: 415 - param = PM_PINCTRL_CONFIG_TRI_STATE; 416 - arg = PM_PINCTRL_TRI_STATE_ENABLE; 417 - ret = zynqmp_pm_pinctrl_set_config(pin, param, arg); 418 - break; 419 415 case PIN_CONFIG_MODE_LOW_POWER: 420 416 /* 421 417 * These cases are mentioned in dts but configurable ··· 419 423 * boot time warnings as of now. 420 424 */ 421 425 ret = 0; 422 - break; 423 - case PIN_CONFIG_OUTPUT_ENABLE: 424 - param = PM_PINCTRL_CONFIG_TRI_STATE; 425 - arg = PM_PINCTRL_TRI_STATE_DISABLE; 426 - ret = zynqmp_pm_pinctrl_set_config(pin, param, arg); 427 426 break; 428 427 default: 429 428 dev_warn(pctldev->dev,
+21
drivers/pinctrl/qcom/pinctrl-msm.c
··· 51 51 * detection. 52 52 * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller 53 53 * @disabled_for_mux: These IRQs were disabled because we muxed away. 54 + * @ever_gpio: This bit is set the first time we mux a pin to gpio_func. 54 55 * @soc: Reference to soc_data of platform specific data. 55 56 * @regs: Base addresses for the TLMM tiles. 56 57 * @phys_base: Physical base address ··· 73 72 DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO); 74 73 DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO); 75 74 DECLARE_BITMAP(disabled_for_mux, MAX_NR_GPIO); 75 + DECLARE_BITMAP(ever_gpio, MAX_NR_GPIO); 76 76 77 77 const struct msm_pinctrl_soc_data *soc; 78 78 void __iomem *regs[MAX_NR_TILES]; ··· 219 217 raw_spin_lock_irqsave(&pctrl->lock, flags); 220 218 221 219 val = msm_readl_ctl(pctrl, g); 220 + 221 + /* 222 + * If this is the first time muxing to GPIO and the direction is 223 + * output, make sure that we're not going to be glitching the pin 224 + * by reading the current state of the pin and setting it as the 225 + * output. 226 + */ 227 + if (i == gpio_func && (val & BIT(g->oe_bit)) && 228 + !test_and_set_bit(group, pctrl->ever_gpio)) { 229 + u32 io_val = msm_readl_io(pctrl, g); 230 + 231 + if (io_val & BIT(g->in_bit)) { 232 + if (!(io_val & BIT(g->out_bit))) 233 + msm_writel_io(io_val | BIT(g->out_bit), pctrl, g); 234 + } else { 235 + if (io_val & BIT(g->out_bit)) 236 + msm_writel_io(io_val & ~BIT(g->out_bit), pctrl, g); 237 + } 238 + } 222 239 223 240 if (egpio_func && i == egpio_func) { 224 241 if (val & BIT(g->egpio_present))
+4 -3
drivers/scsi/lpfc/lpfc_init.c
··· 4812 4812 rc = lpfc_vmid_res_alloc(phba, vport); 4813 4813 4814 4814 if (rc) 4815 - goto out; 4815 + goto out_put_shost; 4816 4816 4817 4817 /* Initialize all internally managed lists. */ 4818 4818 INIT_LIST_HEAD(&vport->fc_nodes); ··· 4830 4830 4831 4831 error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev); 4832 4832 if (error) 4833 - goto out_put_shost; 4833 + goto out_free_vmid; 4834 4834 4835 4835 spin_lock_irq(&phba->port_list_lock); 4836 4836 list_add_tail(&vport->listentry, &phba->port_list); 4837 4837 spin_unlock_irq(&phba->port_list_lock); 4838 4838 return vport; 4839 4839 4840 - out_put_shost: 4840 + out_free_vmid: 4841 4841 kfree(vport->vmid); 4842 4842 bitmap_free(vport->vmid_priority_range); 4843 + out_put_shost: 4843 4844 scsi_host_put(shost); 4844 4845 out: 4845 4846 return NULL;
+8
drivers/scsi/scsi_sysfs.c
··· 828 828 } 829 829 830 830 mutex_lock(&sdev->state_mutex); 831 + switch (sdev->sdev_state) { 832 + case SDEV_RUNNING: 833 + case SDEV_OFFLINE: 834 + break; 835 + default: 836 + mutex_unlock(&sdev->state_mutex); 837 + return -EINVAL; 838 + } 831 839 if (sdev->sdev_state == SDEV_RUNNING && state == SDEV_RUNNING) { 832 840 ret = 0; 833 841 } else {
-1
drivers/staging/media/atomisp/Makefile
··· 17 17 pci/atomisp_compat_css20.o \ 18 18 pci/atomisp_csi2.o \ 19 19 pci/atomisp_drvfs.o \ 20 - pci/atomisp_file.o \ 21 20 pci/atomisp_fops.o \ 22 21 pci/atomisp_ioctl.o \ 23 22 pci/atomisp_subdev.o \
+9 -10
drivers/staging/media/atomisp/i2c/atomisp-ov2680.c
··· 841 841 if (!ov2680_info) 842 842 return -EINVAL; 843 843 844 - mutex_lock(&dev->input_lock); 845 - 846 844 res = v4l2_find_nearest_size(ov2680_res_preview, 847 845 ARRAY_SIZE(ov2680_res_preview), width, 848 846 height, fmt->width, fmt->height); ··· 853 855 fmt->code = MEDIA_BUS_FMT_SBGGR10_1X10; 854 856 if (format->which == V4L2_SUBDEV_FORMAT_TRY) { 855 857 sd_state->pads->try_fmt = *fmt; 856 - mutex_unlock(&dev->input_lock); 857 858 return 0; 858 859 } 859 860 860 861 dev_dbg(&client->dev, "%s: %dx%d\n", 861 862 __func__, fmt->width, fmt->height); 862 863 864 + mutex_lock(&dev->input_lock); 865 + 863 866 /* s_power has not been called yet for std v4l2 clients (camorama) */ 864 867 power_up(sd); 865 868 ret = ov2680_write_reg_array(client, dev->res->regs); 866 - if (ret) 869 + if (ret) { 867 870 dev_err(&client->dev, 868 871 "ov2680 write resolution register err: %d\n", ret); 872 + goto err; 873 + } 869 874 870 875 vts = dev->res->lines_per_frame; 871 876 ··· 877 876 vts = dev->exposure + OV2680_INTEGRATION_TIME_MARGIN; 878 877 879 878 ret = ov2680_write_reg(client, 2, OV2680_TIMING_VTS_H, vts); 880 - if (ret) 879 + if (ret) { 881 880 dev_err(&client->dev, "ov2680 write vts err: %d\n", ret); 881 + goto err; 882 + } 882 883 883 884 ret = ov2680_get_intg_factor(client, ov2680_info, res); 884 885 if (ret) { ··· 897 894 if (v_flag) 898 895 ov2680_v_flip(sd, v_flag); 899 896 900 - /* 901 - * ret = startup(sd); 902 - * if (ret) 903 - * dev_err(&client->dev, "ov2680 startup err\n"); 904 - */ 897 + dev->res = res; 905 898 err: 906 899 mutex_unlock(&dev->input_lock); 907 900 return ret;
-6
drivers/staging/media/atomisp/include/hmm/hmm_bo.h
··· 65 65 #define check_bo_null_return_void(bo) \ 66 66 check_null_return_void(bo, "NULL hmm buffer object.\n") 67 67 68 - #define HMM_MAX_ORDER 3 69 - #define HMM_MIN_ORDER 0 70 - 71 68 #define ISP_VM_START 0x0 72 69 #define ISP_VM_SIZE (0x7FFFFFFF) /* 2G address space */ 73 70 #define ISP_PTR_NULL NULL ··· 86 89 #define HMM_BO_VMAPED 0x10 87 90 #define HMM_BO_VMAPED_CACHED 0x20 88 91 #define HMM_BO_ACTIVE 0x1000 89 - #define HMM_BO_MEM_TYPE_USER 0x1 90 - #define HMM_BO_MEM_TYPE_PFN 0x2 91 92 92 93 struct hmm_bo_device { 93 94 struct isp_mmu mmu; ··· 121 126 enum hmm_bo_type type; 122 127 int mmap_count; 123 128 int status; 124 - int mem_type; 125 129 void *vmap_addr; /* kernel virtual address by vmap */ 126 130 127 131 struct rb_node node;
-14
drivers/staging/media/atomisp/include/linux/atomisp.h
··· 740 740 ATOMISP_FRAME_STATUS_FLASH_FAILED, 741 741 }; 742 742 743 - /* ISP memories, isp2400 */ 744 - enum atomisp_acc_memory { 745 - ATOMISP_ACC_MEMORY_PMEM0 = 0, 746 - ATOMISP_ACC_MEMORY_DMEM0, 747 - /* for backward compatibility */ 748 - ATOMISP_ACC_MEMORY_DMEM = ATOMISP_ACC_MEMORY_DMEM0, 749 - ATOMISP_ACC_MEMORY_VMEM0, 750 - ATOMISP_ACC_MEMORY_VAMEM0, 751 - ATOMISP_ACC_MEMORY_VAMEM1, 752 - ATOMISP_ACC_MEMORY_VAMEM2, 753 - ATOMISP_ACC_MEMORY_HMEM0, 754 - ATOMISP_ACC_NR_MEMORY 755 - }; 756 - 757 743 enum atomisp_ext_isp_id { 758 744 EXT_ISP_CID_ISO = 0, 759 745 EXT_ISP_CID_CAPTURE_HDR,
-2
drivers/staging/media/atomisp/include/linux/atomisp_gmin_platform.h
··· 26 26 int atomisp_gmin_remove_subdev(struct v4l2_subdev *sd); 27 27 int gmin_get_var_int(struct device *dev, bool is_gmin, 28 28 const char *var, int def); 29 - int camera_sensor_csi(struct v4l2_subdev *sd, u32 port, 30 - u32 lanes, u32 format, u32 bayer_order, int flag); 31 29 struct camera_sensor_platform_data * 32 30 gmin_camera_platform_data( 33 31 struct v4l2_subdev *subdev,
-18
drivers/staging/media/atomisp/include/linux/atomisp_platform.h
··· 141 141 struct intel_v4l2_subdev_table *subdevs; 142 142 }; 143 143 144 - /* Describe the capacities of one single sensor. */ 145 - struct atomisp_sensor_caps { 146 - /* The number of streams this sensor can output. */ 147 - int stream_num; 148 - bool is_slave; 149 - }; 150 - 151 - /* Describe the capacities of sensors connected to one camera port. */ 152 - struct atomisp_camera_caps { 153 - /* The number of sensors connected to this camera port. */ 154 - int sensor_num; 155 - /* The capacities of each sensor. */ 156 - struct atomisp_sensor_caps sensor[MAX_SENSORS_PER_PORT]; 157 - /* Define whether stream control is required for multiple streams. */ 158 - bool multi_stream_ctrl; 159 - }; 160 - 161 144 /* 162 145 * Sensor of external ISP can send multiple steams with different mipi data 163 146 * type in the same virtual channel. This information needs to come from the ··· 218 235 }; 219 236 220 237 const struct atomisp_platform_data *atomisp_get_platform_data(void); 221 - const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void); 222 238 223 239 /* API from old platform_camera.h, new CPUID implementation */ 224 240 #define __IS_SOC(x) (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && \
+19
drivers/staging/media/atomisp/notes.txt
··· 28 28 this means that unlike in fixed pipelines the soft pipelines 29 29 on the ISP can do multiple processing steps in a single pipeline 30 30 element (in a single binary). 31 + 32 + ### 33 + 34 + The sensor drivers make use of v4l2_get_subdev_hostdata(), which returns 35 + a camera_mipi_info struct. This struct is allocated/managed by 36 + the core atomisp code. The most important parts of the struct 37 + are filled by the atomisp core itself, e.g. the port number. 38 + 39 + The sensor drivers, on a set_fmt call, fill in camera_mipi_info.data, 40 + which is an atomisp_sensor_mode_data struct. This gets filled from 41 + a function called <sensor_name>_get_intg_factor(). This struct is not 42 + used by the atomisp code at all. It is returned to userspace by 43 + the ATOMISP_IOC_G_SENSOR_MODE_DATA ioctl, and the Android userspace does use it. 44 + 45 + Other members of camera_mipi_info which are set by some drivers are: 46 + -metadata_width, metadata_height, metadata_effective_width, set by 47 + the ov5693 driver (and used by the atomisp core) 48 + -raw_bayer_order, adjusted by the ov2680 driver when flipping, since 49 + flipping can change the bayer order
+67 -648
drivers/staging/media/atomisp/pci/atomisp_cmd.c
··· 80 80 } ptr; 81 81 }; 82 82 83 + static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id); 84 + 83 85 /* 84 86 * get sensor:dis71430/ov2720 related info from v4l2_subdev->priv data field. 85 87 * subdev->priv is set in mrst.c ··· 98 96 { 99 97 return (struct atomisp_video_pipe *) 100 98 container_of(dev, struct atomisp_video_pipe, vdev); 101 - } 102 - 103 - /* 104 - * get struct atomisp_acc_pipe from v4l2 video_device 105 - */ 106 - struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev) 107 - { 108 - return (struct atomisp_acc_pipe *) 109 - container_of(dev, struct atomisp_acc_pipe, vdev); 110 99 } 111 100 112 101 static unsigned short atomisp_get_sensor_fps(struct atomisp_sub_device *asd) ··· 770 777 enum ia_css_pipe_id css_pipe_id, 771 778 enum ia_css_buffer_type buf_type) 772 779 { 773 - struct atomisp_device *isp = asd->isp; 774 - 775 - if (css_pipe_id == IA_CSS_PIPE_ID_COPY && 776 - isp->inputs[asd->input_curr].camera_caps-> 777 - sensor[asd->sensor_curr].stream_num > 1) { 778 - switch (stream_id) { 779 - case ATOMISP_INPUT_STREAM_PREVIEW: 780 - return &asd->video_out_preview; 781 - case ATOMISP_INPUT_STREAM_POSTVIEW: 782 - return &asd->video_out_vf; 783 - case ATOMISP_INPUT_STREAM_VIDEO: 784 - return &asd->video_out_video_capture; 785 - case ATOMISP_INPUT_STREAM_CAPTURE: 786 - default: 787 - return &asd->video_out_capture; 788 - } 789 - } 790 - 791 780 /* video is same in online as in continuouscapture mode */ 792 781 if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_LOWLAT) { 793 782 /* ··· 881 906 enum atomisp_metadata_type md_type; 882 907 struct atomisp_device *isp = asd->isp; 883 908 struct v4l2_control ctrl; 884 - bool reset_wdt_timer = false; 909 + 910 + lockdep_assert_held(&isp->mutex); 885 911 886 912 if ( 887 913 buf_type != IA_CSS_BUFFER_TYPE_METADATA && ··· 989 1013 break; 990 1014 case IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME: 991 1015 case IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME: 992 - if (IS_ISP2401) 993 - 
reset_wdt_timer = true; 994 - 995 1016 pipe->buffers_in_css--; 996 1017 frame = buffer.css_buffer.data.frame; 997 1018 if (!frame) { ··· 1041 1068 break; 1042 1069 case IA_CSS_BUFFER_TYPE_OUTPUT_FRAME: 1043 1070 case IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME: 1044 - if (IS_ISP2401) 1045 - reset_wdt_timer = true; 1046 - 1047 1071 pipe->buffers_in_css--; 1048 1072 frame = buffer.css_buffer.data.frame; 1049 1073 if (!frame) { ··· 1208 1238 */ 1209 1239 wake_up(&vb->done); 1210 1240 } 1211 - if (IS_ISP2401) 1212 - atomic_set(&pipe->wdt_count, 0); 1213 1241 1214 1242 /* 1215 1243 * Requeue should only be done for 3a and dis buffers. ··· 1224 1256 } 1225 1257 if (!error && q_buffers) 1226 1258 atomisp_qbuffers_to_css(asd); 1227 - 1228 - if (IS_ISP2401) { 1229 - /* If there are no buffers queued then 1230 - * delete wdt timer. */ 1231 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 1232 - return; 1233 - if (!atomisp_buffers_queued_pipe(pipe)) 1234 - atomisp_wdt_stop_pipe(pipe, false); 1235 - else if (reset_wdt_timer) 1236 - /* SOF irq should not reset wdt timer. 
*/ 1237 - atomisp_wdt_refresh_pipe(pipe, 1238 - ATOMISP_WDT_KEEP_CURRENT_DELAY); 1239 - } 1240 1259 } 1241 1260 1242 1261 void atomisp_delayed_init_work(struct work_struct *work) ··· 1262 1307 bool stream_restart[MAX_STREAM_NUM] = {0}; 1263 1308 bool depth_mode = false; 1264 1309 int i, ret, depth_cnt = 0; 1310 + unsigned long flags; 1265 1311 1266 - if (!isp->sw_contex.file_input) 1267 - atomisp_css_irq_enable(isp, 1268 - IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false); 1312 + lockdep_assert_held(&isp->mutex); 1313 + 1314 + if (!atomisp_streaming_count(isp)) 1315 + return; 1316 + 1317 + atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false); 1269 1318 1270 1319 BUG_ON(isp->num_of_streams > MAX_STREAM_NUM); 1271 1320 ··· 1290 1331 1291 1332 stream_restart[asd->index] = true; 1292 1333 1334 + spin_lock_irqsave(&isp->lock, flags); 1293 1335 asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING; 1336 + spin_unlock_irqrestore(&isp->lock, flags); 1294 1337 1295 1338 /* stream off sensor */ 1296 1339 ret = v4l2_subdev_call( ··· 1307 1346 css_pipe_id = atomisp_get_css_pipe_id(asd); 1308 1347 atomisp_css_stop(asd, css_pipe_id, true); 1309 1348 1349 + spin_lock_irqsave(&isp->lock, flags); 1310 1350 asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED; 1351 + spin_unlock_irqrestore(&isp->lock, flags); 1311 1352 1312 1353 asd->preview_exp_id = 1; 1313 1354 asd->postview_exp_id = 1; ··· 1350 1387 IA_CSS_INPUT_MODE_BUFFERED_SENSOR); 1351 1388 1352 1389 css_pipe_id = atomisp_get_css_pipe_id(asd); 1353 - if (atomisp_css_start(asd, css_pipe_id, true)) 1390 + if (atomisp_css_start(asd, css_pipe_id, true)) { 1354 1391 dev_warn(isp->dev, 1355 1392 "start SP failed, so do not set streaming to be enable!\n"); 1356 - else 1393 + } else { 1394 + spin_lock_irqsave(&isp->lock, flags); 1357 1395 asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED; 1396 + spin_unlock_irqrestore(&isp->lock, flags); 1397 + } 1358 1398 1359 1399 atomisp_csi2_configure(asd); 1360 1400 } 1361 1401 1362 - if 
(!isp->sw_contex.file_input) { 1363 - atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, 1364 - atomisp_css_valid_sof(isp)); 1402 + atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, 1403 + atomisp_css_valid_sof(isp)); 1365 1404 1366 - if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0) 1367 - dev_dbg(isp->dev, "DFS auto failed while recovering!\n"); 1368 - } else { 1369 - if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, true) < 0) 1370 - dev_dbg(isp->dev, "DFS max failed while recovering!\n"); 1371 - } 1405 + if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_AUTO, true) < 0) 1406 + dev_dbg(isp->dev, "DFS auto failed while recovering!\n"); 1372 1407 1373 1408 for (i = 0; i < isp->num_of_streams; i++) { 1374 1409 struct atomisp_sub_device *asd; ··· 1415 1454 } 1416 1455 } 1417 1456 1418 - void atomisp_wdt_work(struct work_struct *work) 1457 + void atomisp_assert_recovery_work(struct work_struct *work) 1419 1458 { 1420 1459 struct atomisp_device *isp = container_of(work, struct atomisp_device, 1421 - wdt_work); 1422 - int i; 1423 - unsigned int pipe_wdt_cnt[MAX_STREAM_NUM][4] = { {0} }; 1424 - bool css_recover = true; 1460 + assert_recovery_work); 1425 1461 1426 - rt_mutex_lock(&isp->mutex); 1427 - if (!atomisp_streaming_count(isp)) { 1428 - atomic_set(&isp->wdt_work_queued, 0); 1429 - rt_mutex_unlock(&isp->mutex); 1430 - return; 1431 - } 1432 - 1433 - if (!IS_ISP2401) { 1434 - dev_err(isp->dev, "timeout %d of %d\n", 1435 - atomic_read(&isp->wdt_count) + 1, 1436 - ATOMISP_ISP_MAX_TIMEOUT_COUNT); 1437 - } else { 1438 - for (i = 0; i < isp->num_of_streams; i++) { 1439 - struct atomisp_sub_device *asd = &isp->asd[i]; 1440 - 1441 - pipe_wdt_cnt[i][0] += 1442 - atomic_read(&asd->video_out_capture.wdt_count); 1443 - pipe_wdt_cnt[i][1] += 1444 - atomic_read(&asd->video_out_vf.wdt_count); 1445 - pipe_wdt_cnt[i][2] += 1446 - atomic_read(&asd->video_out_preview.wdt_count); 1447 - pipe_wdt_cnt[i][3] += 1448 - 
atomic_read(&asd->video_out_video_capture.wdt_count); 1449 - css_recover = 1450 - (pipe_wdt_cnt[i][0] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT && 1451 - pipe_wdt_cnt[i][1] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT && 1452 - pipe_wdt_cnt[i][2] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT && 1453 - pipe_wdt_cnt[i][3] <= ATOMISP_ISP_MAX_TIMEOUT_COUNT) 1454 - ? true : false; 1455 - dev_err(isp->dev, 1456 - "pipe on asd%d timeout cnt: (%d, %d, %d, %d) of %d, recover = %d\n", 1457 - asd->index, pipe_wdt_cnt[i][0], pipe_wdt_cnt[i][1], 1458 - pipe_wdt_cnt[i][2], pipe_wdt_cnt[i][3], 1459 - ATOMISP_ISP_MAX_TIMEOUT_COUNT, css_recover); 1460 - } 1461 - } 1462 - 1463 - if (css_recover) { 1464 - ia_css_debug_dump_sp_sw_debug_info(); 1465 - ia_css_debug_dump_debug_info(__func__); 1466 - for (i = 0; i < isp->num_of_streams; i++) { 1467 - struct atomisp_sub_device *asd = &isp->asd[i]; 1468 - 1469 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 1470 - continue; 1471 - dev_err(isp->dev, "%s, vdev %s buffers in css: %d\n", 1472 - __func__, 1473 - asd->video_out_capture.vdev.name, 1474 - asd->video_out_capture. 1475 - buffers_in_css); 1476 - dev_err(isp->dev, 1477 - "%s, vdev %s buffers in css: %d\n", 1478 - __func__, 1479 - asd->video_out_vf.vdev.name, 1480 - asd->video_out_vf. 1481 - buffers_in_css); 1482 - dev_err(isp->dev, 1483 - "%s, vdev %s buffers in css: %d\n", 1484 - __func__, 1485 - asd->video_out_preview.vdev.name, 1486 - asd->video_out_preview. 1487 - buffers_in_css); 1488 - dev_err(isp->dev, 1489 - "%s, vdev %s buffers in css: %d\n", 1490 - __func__, 1491 - asd->video_out_video_capture.vdev.name, 1492 - asd->video_out_video_capture. 
1493 - buffers_in_css); 1494 - dev_err(isp->dev, 1495 - "%s, s3a buffers in css preview pipe:%d\n", 1496 - __func__, 1497 - asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_PREVIEW]); 1498 - dev_err(isp->dev, 1499 - "%s, s3a buffers in css capture pipe:%d\n", 1500 - __func__, 1501 - asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_CAPTURE]); 1502 - dev_err(isp->dev, 1503 - "%s, s3a buffers in css video pipe:%d\n", 1504 - __func__, 1505 - asd->s3a_bufs_in_css[IA_CSS_PIPE_ID_VIDEO]); 1506 - dev_err(isp->dev, 1507 - "%s, dis buffers in css: %d\n", 1508 - __func__, asd->dis_bufs_in_css); 1509 - dev_err(isp->dev, 1510 - "%s, metadata buffers in css preview pipe:%d\n", 1511 - __func__, 1512 - asd->metadata_bufs_in_css 1513 - [ATOMISP_INPUT_STREAM_GENERAL] 1514 - [IA_CSS_PIPE_ID_PREVIEW]); 1515 - dev_err(isp->dev, 1516 - "%s, metadata buffers in css capture pipe:%d\n", 1517 - __func__, 1518 - asd->metadata_bufs_in_css 1519 - [ATOMISP_INPUT_STREAM_GENERAL] 1520 - [IA_CSS_PIPE_ID_CAPTURE]); 1521 - dev_err(isp->dev, 1522 - "%s, metadata buffers in css video pipe:%d\n", 1523 - __func__, 1524 - asd->metadata_bufs_in_css 1525 - [ATOMISP_INPUT_STREAM_GENERAL] 1526 - [IA_CSS_PIPE_ID_VIDEO]); 1527 - if (asd->enable_raw_buffer_lock->val) { 1528 - unsigned int j; 1529 - 1530 - dev_err(isp->dev, "%s, raw_buffer_locked_count %d\n", 1531 - __func__, asd->raw_buffer_locked_count); 1532 - for (j = 0; j <= ATOMISP_MAX_EXP_ID / 32; j++) 1533 - dev_err(isp->dev, "%s, raw_buffer_bitmap[%d]: 0x%x\n", 1534 - __func__, j, 1535 - asd->raw_buffer_bitmap[j]); 1536 - } 1537 - } 1538 - 1539 - /*sh_css_dump_sp_state();*/ 1540 - /*sh_css_dump_isp_state();*/ 1541 - } else { 1542 - for (i = 0; i < isp->num_of_streams; i++) { 1543 - struct atomisp_sub_device *asd = &isp->asd[i]; 1544 - 1545 - if (asd->streaming == 1546 - ATOMISP_DEVICE_STREAMING_ENABLED) { 1547 - atomisp_clear_css_buffer_counters(asd); 1548 - atomisp_flush_bufs_and_wakeup(asd); 1549 - complete(&asd->init_done); 1550 - } 1551 - if (IS_ISP2401) 1552 - 
atomisp_wdt_stop(asd, false); 1553 - } 1554 - 1555 - if (!IS_ISP2401) { 1556 - atomic_set(&isp->wdt_count, 0); 1557 - } else { 1558 - isp->isp_fatal_error = true; 1559 - atomic_set(&isp->wdt_work_queued, 0); 1560 - 1561 - rt_mutex_unlock(&isp->mutex); 1562 - return; 1563 - } 1564 - } 1565 - 1462 + mutex_lock(&isp->mutex); 1566 1463 __atomisp_css_recover(isp, true); 1567 - if (IS_ISP2401) { 1568 - for (i = 0; i < isp->num_of_streams; i++) { 1569 - struct atomisp_sub_device *asd = &isp->asd[i]; 1570 - 1571 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 1572 - continue; 1573 - 1574 - atomisp_wdt_refresh(asd, 1575 - isp->sw_contex.file_input ? 1576 - ATOMISP_ISP_FILE_TIMEOUT_DURATION : 1577 - ATOMISP_ISP_TIMEOUT_DURATION); 1578 - } 1579 - } 1580 - 1581 - dev_err(isp->dev, "timeout recovery handling done\n"); 1582 - atomic_set(&isp->wdt_work_queued, 0); 1583 - 1584 - rt_mutex_unlock(&isp->mutex); 1464 + mutex_unlock(&isp->mutex); 1585 1465 } 1586 1466 1587 1467 void atomisp_css_flush(struct atomisp_device *isp) 1588 1468 { 1589 - int i; 1590 - 1591 - if (!atomisp_streaming_count(isp)) 1592 - return; 1593 - 1594 - /* Disable wdt */ 1595 - for (i = 0; i < isp->num_of_streams; i++) { 1596 - struct atomisp_sub_device *asd = &isp->asd[i]; 1597 - 1598 - atomisp_wdt_stop(asd, true); 1599 - } 1600 - 1601 1469 /* Start recover */ 1602 1470 __atomisp_css_recover(isp, false); 1603 - /* Restore wdt */ 1604 - for (i = 0; i < isp->num_of_streams; i++) { 1605 - struct atomisp_sub_device *asd = &isp->asd[i]; 1606 1471 1607 - if (asd->streaming != 1608 - ATOMISP_DEVICE_STREAMING_ENABLED) 1609 - continue; 1610 - 1611 - atomisp_wdt_refresh(asd, 1612 - isp->sw_contex.file_input ? 
1613 - ATOMISP_ISP_FILE_TIMEOUT_DURATION : 1614 - ATOMISP_ISP_TIMEOUT_DURATION); 1615 - } 1616 1472 dev_dbg(isp->dev, "atomisp css flush done\n"); 1617 - } 1618 - 1619 - void atomisp_wdt(struct timer_list *t) 1620 - { 1621 - struct atomisp_sub_device *asd; 1622 - struct atomisp_device *isp; 1623 - 1624 - if (!IS_ISP2401) { 1625 - asd = from_timer(asd, t, wdt); 1626 - isp = asd->isp; 1627 - } else { 1628 - struct atomisp_video_pipe *pipe = from_timer(pipe, t, wdt); 1629 - 1630 - asd = pipe->asd; 1631 - isp = asd->isp; 1632 - 1633 - atomic_inc(&pipe->wdt_count); 1634 - dev_warn(isp->dev, 1635 - "[WARNING]asd %d pipe %s ISP timeout %d!\n", 1636 - asd->index, pipe->vdev.name, 1637 - atomic_read(&pipe->wdt_count)); 1638 - } 1639 - 1640 - if (atomic_read(&isp->wdt_work_queued)) { 1641 - dev_dbg(isp->dev, "ISP watchdog was put into workqueue\n"); 1642 - return; 1643 - } 1644 - atomic_set(&isp->wdt_work_queued, 1); 1645 - queue_work(isp->wdt_work_queue, &isp->wdt_work); 1646 - } 1647 - 1648 - /* ISP2400 */ 1649 - void atomisp_wdt_start(struct atomisp_sub_device *asd) 1650 - { 1651 - atomisp_wdt_refresh(asd, ATOMISP_ISP_TIMEOUT_DURATION); 1652 - } 1653 - 1654 - /* ISP2401 */ 1655 - void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe, 1656 - unsigned int delay) 1657 - { 1658 - unsigned long next; 1659 - 1660 - if (!pipe->asd) { 1661 - dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", 1662 - __func__, pipe->vdev.name); 1663 - return; 1664 - } 1665 - 1666 - if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY) 1667 - pipe->wdt_duration = delay; 1668 - 1669 - next = jiffies + pipe->wdt_duration; 1670 - 1671 - /* Override next if it has been pushed beyon the "next" time */ 1672 - if (atomisp_is_wdt_running(pipe) && time_after(pipe->wdt_expires, next)) 1673 - next = pipe->wdt_expires; 1674 - 1675 - pipe->wdt_expires = next; 1676 - 1677 - if (atomisp_is_wdt_running(pipe)) 1678 - dev_dbg(pipe->asd->isp->dev, "WDT will hit after %d ms (%s)\n", 1679 - ((int)(next - 
jiffies) * 1000 / HZ), pipe->vdev.name); 1680 - else 1681 - dev_dbg(pipe->asd->isp->dev, "WDT starts with %d ms period (%s)\n", 1682 - ((int)(next - jiffies) * 1000 / HZ), pipe->vdev.name); 1683 - 1684 - mod_timer(&pipe->wdt, next); 1685 - } 1686 - 1687 - void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay) 1688 - { 1689 - if (!IS_ISP2401) { 1690 - unsigned long next; 1691 - 1692 - if (delay != ATOMISP_WDT_KEEP_CURRENT_DELAY) 1693 - asd->wdt_duration = delay; 1694 - 1695 - next = jiffies + asd->wdt_duration; 1696 - 1697 - /* Override next if it has been pushed beyon the "next" time */ 1698 - if (atomisp_is_wdt_running(asd) && time_after(asd->wdt_expires, next)) 1699 - next = asd->wdt_expires; 1700 - 1701 - asd->wdt_expires = next; 1702 - 1703 - if (atomisp_is_wdt_running(asd)) 1704 - dev_dbg(asd->isp->dev, "WDT will hit after %d ms\n", 1705 - ((int)(next - jiffies) * 1000 / HZ)); 1706 - else 1707 - dev_dbg(asd->isp->dev, "WDT starts with %d ms period\n", 1708 - ((int)(next - jiffies) * 1000 / HZ)); 1709 - 1710 - mod_timer(&asd->wdt, next); 1711 - atomic_set(&asd->isp->wdt_count, 0); 1712 - } else { 1713 - dev_dbg(asd->isp->dev, "WDT refresh all:\n"); 1714 - if (atomisp_is_wdt_running(&asd->video_out_capture)) 1715 - atomisp_wdt_refresh_pipe(&asd->video_out_capture, delay); 1716 - if (atomisp_is_wdt_running(&asd->video_out_preview)) 1717 - atomisp_wdt_refresh_pipe(&asd->video_out_preview, delay); 1718 - if (atomisp_is_wdt_running(&asd->video_out_vf)) 1719 - atomisp_wdt_refresh_pipe(&asd->video_out_vf, delay); 1720 - if (atomisp_is_wdt_running(&asd->video_out_video_capture)) 1721 - atomisp_wdt_refresh_pipe(&asd->video_out_video_capture, delay); 1722 - } 1723 - } 1724 - 1725 - /* ISP2401 */ 1726 - void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync) 1727 - { 1728 - if (!pipe->asd) { 1729 - dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", 1730 - __func__, pipe->vdev.name); 1731 - return; 1732 - } 1733 - 1734 - if 
(!atomisp_is_wdt_running(pipe)) 1735 - return; 1736 - 1737 - dev_dbg(pipe->asd->isp->dev, 1738 - "WDT stop asd %d (%s)\n", pipe->asd->index, pipe->vdev.name); 1739 - 1740 - if (sync) { 1741 - del_timer_sync(&pipe->wdt); 1742 - cancel_work_sync(&pipe->asd->isp->wdt_work); 1743 - } else { 1744 - del_timer(&pipe->wdt); 1745 - } 1746 - } 1747 - 1748 - /* ISP 2401 */ 1749 - void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe) 1750 - { 1751 - atomisp_wdt_refresh_pipe(pipe, ATOMISP_ISP_TIMEOUT_DURATION); 1752 - } 1753 - 1754 - void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync) 1755 - { 1756 - dev_dbg(asd->isp->dev, "WDT stop:\n"); 1757 - 1758 - if (!IS_ISP2401) { 1759 - if (sync) { 1760 - del_timer_sync(&asd->wdt); 1761 - cancel_work_sync(&asd->isp->wdt_work); 1762 - } else { 1763 - del_timer(&asd->wdt); 1764 - } 1765 - } else { 1766 - atomisp_wdt_stop_pipe(&asd->video_out_capture, sync); 1767 - atomisp_wdt_stop_pipe(&asd->video_out_preview, sync); 1768 - atomisp_wdt_stop_pipe(&asd->video_out_vf, sync); 1769 - atomisp_wdt_stop_pipe(&asd->video_out_video_capture, sync); 1770 - } 1771 1473 } 1772 1474 1773 1475 void atomisp_setup_flash(struct atomisp_sub_device *asd) ··· 1508 1884 * For CSS2.0: we change the way to not dequeue all the event at one 1509 1885 * time, instead, dequue one and process one, then another 1510 1886 */ 1511 - rt_mutex_lock(&isp->mutex); 1887 + mutex_lock(&isp->mutex); 1512 1888 if (atomisp_css_isr_thread(isp, frame_done_found, css_pipe_done)) 1513 1889 goto out; 1514 1890 ··· 1519 1895 atomisp_setup_flash(asd); 1520 1896 } 1521 1897 out: 1522 - rt_mutex_unlock(&isp->mutex); 1523 - for (i = 0; i < isp->num_of_streams; i++) { 1524 - asd = &isp->asd[i]; 1525 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED 1526 - && css_pipe_done[asd->index] 1527 - && isp->sw_contex.file_input) 1528 - v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 1529 - video, s_stream, 1); 1530 - } 1898 + mutex_unlock(&isp->mutex); 1531 1899 
dev_dbg(isp->dev, "<%s\n", __func__); 1532 1900 1533 1901 return IRQ_HANDLED; ··· 1938 2322 { 1939 2323 struct atomisp_device *isp = asd->isp; 1940 2324 int err; 1941 - u16 stream_id = atomisp_source_pad_to_stream_id(asd, source_pad); 1942 2325 1943 2326 if (atomisp_css_get_grid_info(asd, pipe_id, source_pad)) 1944 2327 return; ··· 1946 2331 the grid size. */ 1947 2332 atomisp_css_free_stat_buffers(asd); 1948 2333 1949 - err = atomisp_alloc_css_stat_bufs(asd, stream_id); 2334 + err = atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL); 1950 2335 if (err) { 1951 2336 dev_err(isp->dev, "stat_buf allocate error\n"); 1952 2337 goto err; ··· 3692 4077 unsigned long irqflags; 3693 4078 bool need_to_enqueue_buffer = false; 3694 4079 4080 + lockdep_assert_held(&asd->isp->mutex); 4081 + 3695 4082 if (!asd) { 3696 4083 dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", 3697 4084 __func__, pipe->vdev.name); ··· 3760 4143 return; 3761 4144 3762 4145 atomisp_qbuffers_to_css(asd); 3763 - 3764 - if (!IS_ISP2401) { 3765 - if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd)) 3766 - atomisp_wdt_start(asd); 3767 - } else { 3768 - if (atomisp_buffers_queued_pipe(pipe)) { 3769 - if (!atomisp_is_wdt_running(pipe)) 3770 - atomisp_wdt_start_pipe(pipe); 3771 - else 3772 - atomisp_wdt_refresh_pipe(pipe, 3773 - ATOMISP_WDT_KEEP_CURRENT_DELAY); 3774 - } 3775 - } 3776 4146 } 3777 4147 3778 4148 /* ··· 3773 4169 struct atomisp_css_params_with_list *param = NULL; 3774 4170 struct atomisp_css_params *css_param = &asd->params.css_param; 3775 4171 int ret; 4172 + 4173 + lockdep_assert_held(&asd->isp->mutex); 3776 4174 3777 4175 if (!asd) { 3778 4176 dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", ··· 4430 4824 const struct atomisp_format_bridge *fmt; 4431 4825 struct atomisp_input_stream_info *stream_info = 4432 4826 (struct atomisp_input_stream_info *)snr_mbus_fmt->reserved; 4433 - u16 stream_index; 4434 - int source_pad = 
atomisp_subdev_source_pad(vdev); 4435 4827 int ret; 4436 4828 4437 4829 if (!asd) { ··· 4441 4837 if (!isp->inputs[asd->input_curr].camera) 4442 4838 return -EINVAL; 4443 4839 4444 - stream_index = atomisp_source_pad_to_stream_id(asd, source_pad); 4445 4840 fmt = atomisp_get_format_bridge(f->pixelformat); 4446 4841 if (!fmt) { 4447 4842 dev_err(isp->dev, "unsupported pixelformat!\n"); ··· 4454 4851 snr_mbus_fmt->width = f->width; 4455 4852 snr_mbus_fmt->height = f->height; 4456 4853 4457 - __atomisp_init_stream_info(stream_index, stream_info); 4854 + __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info); 4458 4855 4459 4856 dev_dbg(isp->dev, "try_mbus_fmt: asking for %ux%u\n", 4460 4857 snr_mbus_fmt->width, snr_mbus_fmt->height); ··· 4489 4886 return 0; 4490 4887 } 4491 4888 4492 - if (snr_mbus_fmt->width < f->width 4493 - && snr_mbus_fmt->height < f->height) { 4889 + if (!res_overflow || (snr_mbus_fmt->width < f->width && 4890 + snr_mbus_fmt->height < f->height)) { 4494 4891 f->width = snr_mbus_fmt->width; 4495 4892 f->height = snr_mbus_fmt->height; 4496 4893 /* Set the flag when resolution requested is ··· 4505 4902 ATOM_ISP_MAX_WIDTH), ATOM_ISP_STEP_WIDTH); 4506 4903 f->height = rounddown(clamp_t(u32, f->height, ATOM_ISP_MIN_HEIGHT, 4507 4904 ATOM_ISP_MAX_HEIGHT), ATOM_ISP_STEP_HEIGHT); 4508 - 4509 - return 0; 4510 - } 4511 - 4512 - static int 4513 - atomisp_try_fmt_file(struct atomisp_device *isp, struct v4l2_format *f) 4514 - { 4515 - u32 width = f->fmt.pix.width; 4516 - u32 height = f->fmt.pix.height; 4517 - u32 pixelformat = f->fmt.pix.pixelformat; 4518 - enum v4l2_field field = f->fmt.pix.field; 4519 - u32 depth; 4520 - 4521 - if (!atomisp_get_format_bridge(pixelformat)) { 4522 - dev_err(isp->dev, "Wrong output pixelformat\n"); 4523 - return -EINVAL; 4524 - } 4525 - 4526 - depth = atomisp_get_pixel_depth(pixelformat); 4527 - 4528 - if (field == V4L2_FIELD_ANY) { 4529 - field = V4L2_FIELD_NONE; 4530 - } else if (field != V4L2_FIELD_NONE) { 
4531 - dev_err(isp->dev, "Wrong output field\n"); 4532 - return -EINVAL; 4533 - } 4534 - 4535 - f->fmt.pix.field = field; 4536 - f->fmt.pix.width = clamp_t(u32, 4537 - rounddown(width, (u32)ATOM_ISP_STEP_WIDTH), 4538 - ATOM_ISP_MIN_WIDTH, ATOM_ISP_MAX_WIDTH); 4539 - f->fmt.pix.height = clamp_t(u32, rounddown(height, 4540 - (u32)ATOM_ISP_STEP_HEIGHT), 4541 - ATOM_ISP_MIN_HEIGHT, ATOM_ISP_MAX_HEIGHT); 4542 - f->fmt.pix.bytesperline = (width * depth) >> 3; 4543 4905 4544 4906 return 0; 4545 4907 } ··· 4739 5171 int (*configure_pp_input)(struct atomisp_sub_device *asd, 4740 5172 unsigned int width, unsigned int height) = 4741 5173 configure_pp_input_nop; 4742 - u16 stream_index; 4743 5174 const struct atomisp_in_fmt_conv *fc; 4744 5175 int ret, i; 4745 5176 ··· 4747 5180 __func__, vdev->name); 4748 5181 return -EINVAL; 4749 5182 } 4750 - stream_index = atomisp_source_pad_to_stream_id(asd, source_pad); 4751 5183 4752 5184 v4l2_fh_init(&fh.vfh, vdev); 4753 5185 ··· 4766 5200 dev_err(isp->dev, "mipi_info is NULL\n"); 4767 5201 return -EINVAL; 4768 5202 } 4769 - if (atomisp_set_sensor_mipi_to_isp(asd, stream_index, 5203 + if (atomisp_set_sensor_mipi_to_isp(asd, ATOMISP_INPUT_STREAM_GENERAL, 4770 5204 mipi_info)) 4771 5205 return -EINVAL; 4772 5206 fc = atomisp_find_in_fmt_conv_by_atomisp_in_fmt( ··· 4850 5284 /* ISP2401 new input system need to use copy pipe */ 4851 5285 if (asd->copy_mode) { 4852 5286 pipe_id = IA_CSS_PIPE_ID_COPY; 4853 - atomisp_css_capture_enable_online(asd, stream_index, false); 5287 + atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL, false); 4854 5288 } else if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) { 4855 5289 /* video same in continuouscapture and online modes */ 4856 5290 configure_output = atomisp_css_video_configure_output; ··· 4882 5316 pipe_id = IA_CSS_PIPE_ID_CAPTURE; 4883 5317 4884 5318 atomisp_update_capture_mode(asd); 4885 - atomisp_css_capture_enable_online(asd, stream_index, false); 5319 + 
atomisp_css_capture_enable_online(asd, 5320 + ATOMISP_INPUT_STREAM_GENERAL, 5321 + false); 4886 5322 } 4887 5323 } 4888 5324 } else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) { ··· 4909 5341 4910 5342 if (!asd->continuous_mode->val) 4911 5343 /* in case of ANR, force capture pipe to offline mode */ 4912 - atomisp_css_capture_enable_online(asd, stream_index, 5344 + atomisp_css_capture_enable_online(asd, ATOMISP_INPUT_STREAM_GENERAL, 4913 5345 asd->params.low_light ? 4914 5346 false : asd->params.online_process); 4915 5347 ··· 4940 5372 pipe_id = IA_CSS_PIPE_ID_YUVPP; 4941 5373 4942 5374 if (asd->copy_mode) 4943 - ret = atomisp_css_copy_configure_output(asd, stream_index, 5375 + ret = atomisp_css_copy_configure_output(asd, ATOMISP_INPUT_STREAM_GENERAL, 4944 5376 pix->width, pix->height, 4945 5377 format->planar ? pix->bytesperline : 4946 5378 pix->bytesperline * 8 / format->depth, ··· 4964 5396 return -EINVAL; 4965 5397 } 4966 5398 if (asd->copy_mode) 4967 - ret = atomisp_css_copy_get_output_frame_info(asd, stream_index, 4968 - output_info); 5399 + ret = atomisp_css_copy_get_output_frame_info(asd, 5400 + ATOMISP_INPUT_STREAM_GENERAL, 5401 + output_info); 4969 5402 else 4970 5403 ret = get_frame_info(asd, output_info); 4971 5404 if (ret) { ··· 4981 5412 ia_css_frame_free(asd->raw_output_frame); 4982 5413 asd->raw_output_frame = NULL; 4983 5414 4984 - if (!asd->continuous_mode->val && 4985 - !asd->params.online_process && !isp->sw_contex.file_input && 5415 + if (!asd->continuous_mode->val && !asd->params.online_process && 4986 5416 ia_css_frame_allocate_from_info(&asd->raw_output_frame, 4987 5417 raw_output_info)) 4988 5418 return -ENOMEM; ··· 5030 5462 src = atomisp_subdev_get_ffmt(&asd->subdev, NULL, 5031 5463 V4L2_SUBDEV_FORMAT_ACTIVE, source_pad); 5032 5464 5033 - if ((sink->code == src->code && 5034 - sink->width == f->width && 5035 - sink->height == f->height) || 5036 - ((asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) && 5037 - 
(asd->isp->inputs[asd->input_curr].camera_caps-> 5038 - sensor[asd->sensor_curr].stream_num > 1))) 5465 + if (sink->code == src->code && sink->width == f->width && sink->height == f->height) 5039 5466 asd->copy_mode = true; 5040 5467 else 5041 5468 asd->copy_mode = false; ··· 5058 5495 struct atomisp_device *isp; 5059 5496 struct atomisp_input_stream_info *stream_info = 5060 5497 (struct atomisp_input_stream_info *)ffmt->reserved; 5061 - u16 stream_index = ATOMISP_INPUT_STREAM_GENERAL; 5062 5498 int source_pad = atomisp_subdev_source_pad(vdev); 5063 5499 struct v4l2_subdev_fh fh; 5064 5500 int ret; ··· 5072 5510 5073 5511 v4l2_fh_init(&fh.vfh, vdev); 5074 5512 5075 - stream_index = atomisp_source_pad_to_stream_id(asd, source_pad); 5076 - 5077 5513 format = atomisp_get_format_bridge(pixelformat); 5078 5514 if (!format) 5079 5515 return -EINVAL; ··· 5084 5524 ffmt->width, ffmt->height, padding_w, padding_h, 5085 5525 dvs_env_w, dvs_env_h); 5086 5526 5087 - __atomisp_init_stream_info(stream_index, stream_info); 5527 + __atomisp_init_stream_info(ATOMISP_INPUT_STREAM_GENERAL, stream_info); 5088 5528 5089 5529 req_ffmt = ffmt; 5090 5530 ··· 5116 5556 if (ret) 5117 5557 return ret; 5118 5558 5119 - __atomisp_update_stream_env(asd, stream_index, stream_info); 5559 + __atomisp_update_stream_env(asd, ATOMISP_INPUT_STREAM_GENERAL, stream_info); 5120 5560 5121 5561 dev_dbg(isp->dev, "sensor width: %d, height: %d\n", 5122 5562 ffmt->width, ffmt->height); ··· 5140 5580 return css_input_resolution_changed(asd, ffmt); 5141 5581 } 5142 5582 5143 - int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f) 5583 + int atomisp_set_fmt(struct file *file, void *unused, struct v4l2_format *f) 5144 5584 { 5585 + struct video_device *vdev = video_devdata(file); 5145 5586 struct atomisp_device *isp = video_get_drvdata(vdev); 5146 5587 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 5147 5588 struct atomisp_sub_device *asd = pipe->asd; ··· 5165 5604 struct 
v4l2_subdev_fh fh; 5166 5605 int ret; 5167 5606 5168 - if (!asd) { 5169 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 5170 - __func__, vdev->name); 5171 - return -EINVAL; 5172 - } 5607 + ret = atomisp_pipe_check(pipe, true); 5608 + if (ret) 5609 + return ret; 5173 5610 5174 5611 if (source_pad >= ATOMISP_SUBDEV_PADS_NUM) 5175 5612 return -EINVAL; 5176 - 5177 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) { 5178 - dev_warn(isp->dev, "ISP does not support set format while at streaming!\n"); 5179 - return -EBUSY; 5180 - } 5181 5613 5182 5614 dev_dbg(isp->dev, 5183 5615 "setting resolution %ux%u on pad %u for asd%d, bytesperline %u\n", ··· 5253 5699 f->fmt.pix.height = r.height; 5254 5700 } 5255 5701 5256 - if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW && 5257 - (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) && 5258 - (asd->isp->inputs[asd->input_curr].camera_caps-> 5259 - sensor[asd->sensor_curr].stream_num > 1)) { 5260 - /* For M10MO outputing YUV preview images. */ 5261 - u16 video_index = 5262 - atomisp_source_pad_to_stream_id(asd, 5263 - ATOMISP_SUBDEV_PAD_SOURCE_VIDEO); 5264 - 5265 - ret = atomisp_css_copy_get_output_frame_info(asd, 5266 - video_index, &output_info); 5267 - if (ret) { 5268 - dev_err(isp->dev, 5269 - "copy_get_output_frame_info ret %i", ret); 5270 - return -EINVAL; 5271 - } 5272 - if (!asd->yuvpp_mode) { 5273 - /* 5274 - * If viewfinder was configured into copy_mode, 5275 - * we switch to using yuvpp pipe instead. 
5276 - */ 5277 - asd->yuvpp_mode = true; 5278 - ret = atomisp_css_copy_configure_output( 5279 - asd, video_index, 0, 0, 0, 0); 5280 - if (ret) { 5281 - dev_err(isp->dev, 5282 - "failed to disable copy pipe"); 5283 - return -EINVAL; 5284 - } 5285 - ret = atomisp_css_yuvpp_configure_output( 5286 - asd, video_index, 5287 - output_info.res.width, 5288 - output_info.res.height, 5289 - output_info.padded_width, 5290 - output_info.format); 5291 - if (ret) { 5292 - dev_err(isp->dev, 5293 - "failed to set up yuvpp pipe\n"); 5294 - return -EINVAL; 5295 - } 5296 - atomisp_css_video_enable_online(asd, false); 5297 - atomisp_css_preview_enable_online(asd, 5298 - ATOMISP_INPUT_STREAM_GENERAL, false); 5299 - } 5300 - atomisp_css_yuvpp_configure_viewfinder(asd, video_index, 5301 - f->fmt.pix.width, f->fmt.pix.height, 5302 - format_bridge->planar ? f->fmt.pix.bytesperline 5303 - : f->fmt.pix.bytesperline * 8 5304 - / format_bridge->depth, format_bridge->sh_fmt); 5305 - atomisp_css_yuvpp_get_viewfinder_frame_info( 5306 - asd, video_index, &output_info); 5307 - } else if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) { 5702 + if (source_pad == ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW) { 5308 5703 atomisp_css_video_configure_viewfinder(asd, 5309 5704 f->fmt.pix.width, f->fmt.pix.height, 5310 5705 format_bridge->planar ? 
f->fmt.pix.bytesperline ··· 5581 6078 return 0; 5582 6079 } 5583 6080 5584 - int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f) 5585 - { 5586 - struct atomisp_device *isp = video_get_drvdata(vdev); 5587 - struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 5588 - struct atomisp_sub_device *asd = pipe->asd; 5589 - struct v4l2_mbus_framefmt ffmt = {0}; 5590 - const struct atomisp_format_bridge *format_bridge; 5591 - struct v4l2_subdev_fh fh; 5592 - int ret; 5593 - 5594 - if (!asd) { 5595 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 5596 - __func__, vdev->name); 5597 - return -EINVAL; 5598 - } 5599 - 5600 - v4l2_fh_init(&fh.vfh, vdev); 5601 - 5602 - dev_dbg(isp->dev, "setting fmt %ux%u 0x%x for file inject\n", 5603 - f->fmt.pix.width, f->fmt.pix.height, f->fmt.pix.pixelformat); 5604 - ret = atomisp_try_fmt_file(isp, f); 5605 - if (ret) { 5606 - dev_err(isp->dev, "atomisp_try_fmt_file err: %d\n", ret); 5607 - return ret; 5608 - } 5609 - 5610 - format_bridge = atomisp_get_format_bridge(f->fmt.pix.pixelformat); 5611 - if (!format_bridge) { 5612 - dev_dbg(isp->dev, "atomisp_get_format_bridge err! 
fmt:0x%x\n", 5613 - f->fmt.pix.pixelformat); 5614 - return -EINVAL; 5615 - } 5616 - 5617 - pipe->pix = f->fmt.pix; 5618 - atomisp_css_input_set_mode(asd, IA_CSS_INPUT_MODE_FIFO); 5619 - atomisp_css_input_configure_port(asd, 5620 - __get_mipi_port(isp, ATOMISP_CAMERA_PORT_PRIMARY), 2, 0xffff4, 5621 - 0, 0, 0, 0); 5622 - ffmt.width = f->fmt.pix.width; 5623 - ffmt.height = f->fmt.pix.height; 5624 - ffmt.code = format_bridge->mbus_code; 5625 - 5626 - atomisp_subdev_set_ffmt(&asd->subdev, fh.state, 5627 - V4L2_SUBDEV_FORMAT_ACTIVE, 5628 - ATOMISP_SUBDEV_PAD_SINK, &ffmt); 5629 - 5630 - return 0; 5631 - } 5632 - 5633 6081 int atomisp_set_shading_table(struct atomisp_sub_device *asd, 5634 6082 struct atomisp_shading_table *user_shading_table) 5635 6083 { ··· 5729 6275 { 5730 6276 struct v4l2_ctrl *c; 5731 6277 6278 + lockdep_assert_held(&asd->isp->mutex); 6279 + 5732 6280 /* 5733 6281 * In case of M10MO ZSL capture case, we need to issue a separate 5734 6282 * capture request to M10MO which will output captured jpeg image ··· 5835 6379 return 0; 5836 6380 } 5837 6381 5838 - int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd, 5839 - uint16_t source_pad) 5840 - { 5841 - int stream_id; 5842 - struct atomisp_device *isp = asd->isp; 5843 - 5844 - if (isp->inputs[asd->input_curr].camera_caps-> 5845 - sensor[asd->sensor_curr].stream_num == 1) 5846 - return ATOMISP_INPUT_STREAM_GENERAL; 5847 - 5848 - switch (source_pad) { 5849 - case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE: 5850 - stream_id = ATOMISP_INPUT_STREAM_CAPTURE; 5851 - break; 5852 - case ATOMISP_SUBDEV_PAD_SOURCE_VF: 5853 - stream_id = ATOMISP_INPUT_STREAM_POSTVIEW; 5854 - break; 5855 - case ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW: 5856 - stream_id = ATOMISP_INPUT_STREAM_PREVIEW; 5857 - break; 5858 - case ATOMISP_SUBDEV_PAD_SOURCE_VIDEO: 5859 - stream_id = ATOMISP_INPUT_STREAM_VIDEO; 5860 - break; 5861 - default: 5862 - stream_id = ATOMISP_INPUT_STREAM_GENERAL; 5863 - } 5864 - 5865 - return stream_id; 5866 - } 5867 - 
5868 6382 bool atomisp_is_vf_pipe(struct atomisp_video_pipe *pipe) 5869 6383 { 5870 6384 struct atomisp_sub_device *asd = pipe->asd; ··· 5885 6459 spin_unlock_irqrestore(&asd->raw_buffer_bitmap_lock, flags); 5886 6460 } 5887 6461 5888 - int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id) 6462 + static int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id) 5889 6463 { 5890 6464 int *bitmap, bit; 5891 6465 unsigned long flags; ··· 5975 6549 int value = *exp_id; 5976 6550 int ret; 5977 6551 6552 + lockdep_assert_held(&isp->mutex); 6553 + 5978 6554 ret = __is_raw_buffer_locked(asd, value); 5979 6555 if (ret) { 5980 6556 dev_err(isp->dev, "%s exp_id %d invalid %d.\n", __func__, value, ret); ··· 5997 6569 struct atomisp_device *isp = asd->isp; 5998 6570 int value = *exp_id; 5999 6571 int ret; 6572 + 6573 + lockdep_assert_held(&isp->mutex); 6000 6574 6001 6575 ret = __clear_raw_buffer_bitmap(asd, value); 6002 6576 if (ret) { ··· 6034 6604 { 6035 6605 if (!event || asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 6036 6606 return -EINVAL; 6607 + 6608 + lockdep_assert_held(&asd->isp->mutex); 6037 6609 6038 6610 dev_dbg(asd->isp->dev, "%s: trying to inject a fake event 0x%x\n", 6039 6611 __func__, *event); ··· 6106 6674 enum ia_css_pipe_id pipe_id; 6107 6675 struct ia_css_pipe_info p_info; 6108 6676 int ret; 6109 - 6110 - if (!asd) { 6111 - dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", 6112 - __func__, vdev->name); 6113 - return -EINVAL; 6114 - } 6115 - 6116 - if (asd->isp->inputs[asd->input_curr].camera_caps-> 6117 - sensor[asd->sensor_curr].stream_num > 1) { 6118 - /* External ISP */ 6119 - *invalid_frame_num = 0; 6120 - return 0; 6121 - } 6122 6677 6123 6678 pipe_id = atomisp_get_pipe_id(pipe); 6124 6679 if (!asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id]) {
+2 -9
drivers/staging/media/atomisp/pci/atomisp_cmd.h
··· 54 54 unsigned int size); 55 55 struct camera_mipi_info *atomisp_to_sensor_mipi_info(struct v4l2_subdev *sd); 56 56 struct atomisp_video_pipe *atomisp_to_video_pipe(struct video_device *dev); 57 - struct atomisp_acc_pipe *atomisp_to_acc_pipe(struct video_device *dev); 58 57 int atomisp_reset(struct atomisp_device *isp); 59 58 void atomisp_flush_bufs_and_wakeup(struct atomisp_sub_device *asd); 60 59 void atomisp_clear_css_buffer_counters(struct atomisp_sub_device *asd); ··· 65 66 /* Interrupt functions */ 66 67 void atomisp_msi_irq_init(struct atomisp_device *isp); 67 68 void atomisp_msi_irq_uninit(struct atomisp_device *isp); 68 - void atomisp_wdt_work(struct work_struct *work); 69 - void atomisp_wdt(struct timer_list *t); 69 + void atomisp_assert_recovery_work(struct work_struct *work); 70 70 void atomisp_setup_flash(struct atomisp_sub_device *asd); 71 71 irqreturn_t atomisp_isr(int irq, void *dev); 72 72 irqreturn_t atomisp_isr_thread(int irq, void *isp_ptr); ··· 266 268 int atomisp_try_fmt(struct video_device *vdev, struct v4l2_pix_format *f, 267 269 bool *res_overflow); 268 270 269 - int atomisp_set_fmt(struct video_device *vdev, struct v4l2_format *f); 270 - int atomisp_set_fmt_file(struct video_device *vdev, struct v4l2_format *f); 271 + int atomisp_set_fmt(struct file *file, void *fh, struct v4l2_format *f); 271 272 272 273 int atomisp_set_shading_table(struct atomisp_sub_device *asd, 273 274 struct atomisp_shading_table *shading_table); ··· 297 300 bool q_buffers, enum atomisp_input_stream_id stream_id); 298 301 299 302 void atomisp_css_flush(struct atomisp_device *isp); 300 - int atomisp_source_pad_to_stream_id(struct atomisp_sub_device *asd, 301 - uint16_t source_pad); 302 303 303 304 /* Events. Only one event has to be exported for now. 
*/ 304 305 void atomisp_eof_event(struct atomisp_sub_device *asd, uint8_t exp_id); ··· 319 324 int atomisp_exp_id_unlock(struct atomisp_sub_device *asd, int *exp_id); 320 325 int atomisp_exp_id_capture(struct atomisp_sub_device *asd, int *exp_id); 321 326 322 - /* Function to update Raw Buffer bitmap */ 323 - int atomisp_set_raw_buffer_bitmap(struct atomisp_sub_device *asd, int exp_id); 324 327 void atomisp_init_raw_buffer_bitmap(struct atomisp_sub_device *asd); 325 328 326 329 /* Function to enable/disable zoom for capture pipe */
-10
drivers/staging/media/atomisp/pci/atomisp_compat.h
··· 129 129 130 130 void atomisp_free_metadata_output_buf(struct atomisp_sub_device *asd); 131 131 132 - void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd, 133 - struct atomisp_css_buffer *isp_css_buffer, 134 - struct ia_css_isp_dvs_statistics_map *dvs_map); 135 - 136 132 void atomisp_css_temp_pipe_to_pipe_id(struct atomisp_sub_device *asd, 137 133 struct atomisp_css_event *current_event); 138 134 ··· 430 434 431 435 void atomisp_css_morph_table_free(struct ia_css_morph_table *table); 432 436 433 - void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp, 434 - unsigned int overlap); 435 - 436 437 int atomisp_css_get_dis_stat(struct atomisp_sub_device *asd, 437 438 struct atomisp_dis_statistics *stats); 438 439 439 440 int atomisp_css_update_stream(struct atomisp_sub_device *asd); 440 - 441 - struct atomisp_acc_fw; 442 - int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw); 443 441 444 442 int atomisp_css_isr_thread(struct atomisp_device *isp, 445 443 bool *frame_done_found,
+9 -91
drivers/staging/media/atomisp/pci/atomisp_compat_css20.c
··· 1427 1427 struct ia_css_pipe_info p_info; 1428 1428 struct ia_css_grid_info old_info; 1429 1429 struct atomisp_device *isp = asd->isp; 1430 - int stream_index = atomisp_source_pad_to_stream_id(asd, source_pad); 1431 1430 int md_width = asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL]. 1432 1431 stream_config.metadata_config.resolution.width; 1433 1432 ··· 1434 1435 memset(&old_info, 0, sizeof(struct ia_css_grid_info)); 1435 1436 1436 1437 if (ia_css_pipe_get_info( 1437 - asd->stream_env[stream_index].pipes[pipe_id], 1438 + asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].pipes[pipe_id], 1438 1439 &p_info) != 0) { 1439 1440 dev_err(isp->dev, "ia_css_pipe_get_info failed\n"); 1440 1441 return -EINVAL; ··· 1570 1571 kvfree(asd->params.metadata_user[i]); 1571 1572 asd->params.metadata_user[i] = NULL; 1572 1573 } 1573 - } 1574 - } 1575 - 1576 - void atomisp_css_get_dis_statistics(struct atomisp_sub_device *asd, 1577 - struct atomisp_css_buffer *isp_css_buffer, 1578 - struct ia_css_isp_dvs_statistics_map *dvs_map) 1579 - { 1580 - if (asd->params.dvs_stat) { 1581 - if (dvs_map) 1582 - ia_css_translate_dvs2_statistics( 1583 - asd->params.dvs_stat, dvs_map); 1584 - else 1585 - ia_css_get_dvs2_statistics(asd->params.dvs_stat, 1586 - isp_css_buffer->css_buffer.data.stats_dvs); 1587 1574 } 1588 1575 } 1589 1576 ··· 2679 2694 struct atomisp_device *isp = asd->isp; 2680 2695 2681 2696 if (ATOMISP_SOC_CAMERA(asd)) { 2682 - stream_index = atomisp_source_pad_to_stream_id(asd, source_pad); 2697 + stream_index = ATOMISP_INPUT_STREAM_GENERAL; 2683 2698 } else { 2684 2699 stream_index = (pipe_index == IA_CSS_PIPE_ID_YUVPP) ? 
2685 2700 ATOMISP_INPUT_STREAM_VIDEO : 2686 - atomisp_source_pad_to_stream_id(asd, source_pad); 2701 + ATOMISP_INPUT_STREAM_GENERAL; 2687 2702 } 2688 2703 2689 2704 if (0 != ia_css_pipe_get_info(asd->stream_env[stream_index] ··· 3611 3626 struct atomisp_dis_buf *dis_buf; 3612 3627 unsigned long flags; 3613 3628 3629 + lockdep_assert_held(&isp->mutex); 3630 + 3614 3631 if (!asd->params.dvs_stat->hor_prod.odd_real || 3615 3632 !asd->params.dvs_stat->hor_prod.odd_imag || 3616 3633 !asd->params.dvs_stat->hor_prod.even_real || ··· 3624 3637 return -EINVAL; 3625 3638 3626 3639 /* isp needs to be streaming to get DIS statistics */ 3627 - spin_lock_irqsave(&isp->lock, flags); 3628 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) { 3629 - spin_unlock_irqrestore(&isp->lock, flags); 3640 + if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 3630 3641 return -EINVAL; 3631 - } 3632 - spin_unlock_irqrestore(&isp->lock, flags); 3633 3642 3634 3643 if (atomisp_compare_dvs_grid(asd, &stats->dvs2_stat.grid_info) != 0) 3635 3644 /* If the grid info in the argument differs from the current ··· 3746 3763 ia_css_morph_table_free(table); 3747 3764 } 3748 3765 3749 - void atomisp_css_set_cont_prev_start_time(struct atomisp_device *isp, 3750 - unsigned int overlap) 3751 - { 3752 - /* CSS 2.0 doesn't support this API. 
*/ 3753 - dev_dbg(isp->dev, "set cont prev start time is not supported.\n"); 3754 - return; 3755 - } 3756 - 3757 - /* Set the ACC binary arguments */ 3758 - int atomisp_css_set_acc_parameters(struct atomisp_acc_fw *acc_fw) 3759 - { 3760 - unsigned int mem; 3761 - 3762 - for (mem = 0; mem < ATOMISP_ACC_NR_MEMORY; mem++) { 3763 - if (acc_fw->args[mem].length == 0) 3764 - continue; 3765 - 3766 - ia_css_isp_param_set_css_mem_init(&acc_fw->fw->mem_initializers, 3767 - IA_CSS_PARAM_CLASS_PARAM, mem, 3768 - acc_fw->args[mem].css_ptr, 3769 - acc_fw->args[mem].length); 3770 - } 3771 - 3772 - return 0; 3773 - } 3774 - 3775 3766 static struct atomisp_sub_device *__get_atomisp_subdev( 3776 3767 struct ia_css_pipe *css_pipe, 3777 3768 struct atomisp_device *isp, ··· 3781 3824 enum atomisp_input_stream_id stream_id = 0; 3782 3825 struct atomisp_css_event current_event; 3783 3826 struct atomisp_sub_device *asd; 3784 - bool reset_wdt_timer[MAX_STREAM_NUM] = {false}; 3785 - int i; 3827 + 3828 + lockdep_assert_held(&isp->mutex); 3786 3829 3787 3830 while (!ia_css_dequeue_psys_event(&current_event.event)) { 3788 3831 if (current_event.event.type == ··· 3796 3839 __func__, 3797 3840 current_event.event.fw_assert_module_id, 3798 3841 current_event.event.fw_assert_line_no); 3799 - for (i = 0; i < isp->num_of_streams; i++) 3800 - atomisp_wdt_stop(&isp->asd[i], 0); 3801 3842 3802 - if (!IS_ISP2401) 3803 - atomisp_wdt(&isp->asd[0].wdt); 3804 - else 3805 - queue_work(isp->wdt_work_queue, &isp->wdt_work); 3806 - 3843 + queue_work(system_long_wq, &isp->assert_recovery_work); 3807 3844 return -EINVAL; 3808 3845 } else if (current_event.event.type == IA_CSS_EVENT_TYPE_FW_WARNING) { 3809 3846 dev_warn(isp->dev, "%s: ISP reports warning, code is %d, exp_id %d\n", ··· 3826 3875 frame_done_found[asd->index] = true; 3827 3876 atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_OUTPUT_FRAME, 3828 3877 current_event.pipe, true, stream_id); 3829 - 3830 - if (!IS_ISP2401) 3831 - reset_wdt_timer[asd->index] = 
true; /* ISP running */ 3832 - 3833 3878 break; 3834 3879 case IA_CSS_EVENT_TYPE_SECOND_OUTPUT_FRAME_DONE: 3835 3880 dev_dbg(isp->dev, "event: Second output frame done"); 3836 3881 frame_done_found[asd->index] = true; 3837 3882 atomisp_buf_done(asd, 0, IA_CSS_BUFFER_TYPE_SEC_OUTPUT_FRAME, 3838 3883 current_event.pipe, true, stream_id); 3839 - 3840 - if (!IS_ISP2401) 3841 - reset_wdt_timer[asd->index] = true; /* ISP running */ 3842 - 3843 3884 break; 3844 3885 case IA_CSS_EVENT_TYPE_3A_STATISTICS_DONE: 3845 3886 dev_dbg(isp->dev, "event: 3A stats frame done"); ··· 3852 3909 atomisp_buf_done(asd, 0, 3853 3910 IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME, 3854 3911 current_event.pipe, true, stream_id); 3855 - 3856 - if (!IS_ISP2401) 3857 - reset_wdt_timer[asd->index] = true; /* ISP running */ 3858 - 3859 3912 break; 3860 3913 case IA_CSS_EVENT_TYPE_SECOND_VF_OUTPUT_FRAME_DONE: 3861 3914 dev_dbg(isp->dev, "event: second VF output frame done"); 3862 3915 atomisp_buf_done(asd, 0, 3863 3916 IA_CSS_BUFFER_TYPE_SEC_VF_OUTPUT_FRAME, 3864 3917 current_event.pipe, true, stream_id); 3865 - if (!IS_ISP2401) 3866 - reset_wdt_timer[asd->index] = true; /* ISP running */ 3867 - 3868 3918 break; 3869 3919 case IA_CSS_EVENT_TYPE_DIS_STATISTICS_DONE: 3870 3920 dev_dbg(isp->dev, "event: dis stats frame done"); ··· 3878 3942 current_event.event.type); 3879 3943 break; 3880 3944 } 3881 - } 3882 - 3883 - if (IS_ISP2401) 3884 - return 0; 3885 - 3886 - /* ISP2400: If there are no buffers queued then delete wdt timer. */ 3887 - for (i = 0; i < isp->num_of_streams; i++) { 3888 - asd = &isp->asd[i]; 3889 - if (!asd) 3890 - continue; 3891 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 3892 - continue; 3893 - if (!atomisp_buffers_queued(asd)) 3894 - atomisp_wdt_stop(asd, false); 3895 - else if (reset_wdt_timer[i]) 3896 - /* SOF irq should not reset wdt timer. */ 3897 - atomisp_wdt_refresh(asd, 3898 - ATOMISP_WDT_KEEP_CURRENT_DELAY); 3899 3945 } 3900 3946 3901 3947 return 0;
-229
drivers/staging/media/atomisp/pci/atomisp_file.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Support for Medifield PNW Camera Imaging ISP subsystem. 4 - * 5 - * Copyright (c) 2010 Intel Corporation. All Rights Reserved. 6 - * 7 - * Copyright (c) 2010 Silicon Hive www.siliconhive.com. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License version 11 - * 2 as published by the Free Software Foundation. 12 - * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - * GNU General Public License for more details. 17 - * 18 - * 19 - */ 20 - 21 - #include <media/v4l2-event.h> 22 - #include <media/v4l2-mediabus.h> 23 - 24 - #include <media/videobuf-vmalloc.h> 25 - #include <linux/delay.h> 26 - 27 - #include "ia_css.h" 28 - 29 - #include "atomisp_cmd.h" 30 - #include "atomisp_common.h" 31 - #include "atomisp_file.h" 32 - #include "atomisp_internal.h" 33 - #include "atomisp_ioctl.h" 34 - 35 - static void file_work(struct work_struct *work) 36 - { 37 - struct atomisp_file_device *file_dev = 38 - container_of(work, struct atomisp_file_device, work); 39 - struct atomisp_device *isp = file_dev->isp; 40 - /* only support file injection on subdev0 */ 41 - struct atomisp_sub_device *asd = &isp->asd[0]; 42 - struct atomisp_video_pipe *out_pipe = &asd->video_in; 43 - unsigned short *buf = videobuf_to_vmalloc(out_pipe->outq.bufs[0]); 44 - struct v4l2_mbus_framefmt isp_sink_fmt; 45 - 46 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 47 - return; 48 - 49 - dev_dbg(isp->dev, ">%s: ready to start streaming\n", __func__); 50 - isp_sink_fmt = *atomisp_subdev_get_ffmt(&asd->subdev, NULL, 51 - V4L2_SUBDEV_FORMAT_ACTIVE, 52 - ATOMISP_SUBDEV_PAD_SINK); 53 - 54 - while (!ia_css_isp_has_started()) 55 - usleep_range(1000, 1500); 56 - 57 - 
ia_css_stream_send_input_frame(asd->stream_env[ATOMISP_INPUT_STREAM_GENERAL].stream, 58 - buf, isp_sink_fmt.width, 59 - isp_sink_fmt.height); 60 - dev_dbg(isp->dev, "<%s: streaming done\n", __func__); 61 - } 62 - 63 - static int file_input_s_stream(struct v4l2_subdev *sd, int enable) 64 - { 65 - struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd); 66 - struct atomisp_device *isp = file_dev->isp; 67 - /* only support file injection on subdev0 */ 68 - struct atomisp_sub_device *asd = &isp->asd[0]; 69 - 70 - dev_dbg(isp->dev, "%s: enable %d\n", __func__, enable); 71 - if (enable) { 72 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_ENABLED) 73 - return 0; 74 - 75 - queue_work(file_dev->work_queue, &file_dev->work); 76 - return 0; 77 - } 78 - cancel_work_sync(&file_dev->work); 79 - return 0; 80 - } 81 - 82 - static int file_input_get_fmt(struct v4l2_subdev *sd, 83 - struct v4l2_subdev_state *sd_state, 84 - struct v4l2_subdev_format *format) 85 - { 86 - struct v4l2_mbus_framefmt *fmt = &format->format; 87 - struct atomisp_file_device *file_dev = v4l2_get_subdevdata(sd); 88 - struct atomisp_device *isp = file_dev->isp; 89 - /* only support file injection on subdev0 */ 90 - struct atomisp_sub_device *asd = &isp->asd[0]; 91 - struct v4l2_mbus_framefmt *isp_sink_fmt; 92 - 93 - if (format->pad) 94 - return -EINVAL; 95 - isp_sink_fmt = atomisp_subdev_get_ffmt(&asd->subdev, NULL, 96 - V4L2_SUBDEV_FORMAT_ACTIVE, 97 - ATOMISP_SUBDEV_PAD_SINK); 98 - 99 - fmt->width = isp_sink_fmt->width; 100 - fmt->height = isp_sink_fmt->height; 101 - fmt->code = isp_sink_fmt->code; 102 - 103 - return 0; 104 - } 105 - 106 - static int file_input_set_fmt(struct v4l2_subdev *sd, 107 - struct v4l2_subdev_state *sd_state, 108 - struct v4l2_subdev_format *format) 109 - { 110 - struct v4l2_mbus_framefmt *fmt = &format->format; 111 - 112 - if (format->pad) 113 - return -EINVAL; 114 - file_input_get_fmt(sd, sd_state, format); 115 - if (format->which == V4L2_SUBDEV_FORMAT_TRY) 116 - 
sd_state->pads->try_fmt = *fmt; 117 - return 0; 118 - } 119 - 120 - static int file_input_log_status(struct v4l2_subdev *sd) 121 - { 122 - /*to fake*/ 123 - return 0; 124 - } 125 - 126 - static int file_input_s_power(struct v4l2_subdev *sd, int on) 127 - { 128 - /* to fake */ 129 - return 0; 130 - } 131 - 132 - static int file_input_enum_mbus_code(struct v4l2_subdev *sd, 133 - struct v4l2_subdev_state *sd_state, 134 - struct v4l2_subdev_mbus_code_enum *code) 135 - { 136 - /*to fake*/ 137 - return 0; 138 - } 139 - 140 - static int file_input_enum_frame_size(struct v4l2_subdev *sd, 141 - struct v4l2_subdev_state *sd_state, 142 - struct v4l2_subdev_frame_size_enum *fse) 143 - { 144 - /*to fake*/ 145 - return 0; 146 - } 147 - 148 - static int file_input_enum_frame_ival(struct v4l2_subdev *sd, 149 - struct v4l2_subdev_state *sd_state, 150 - struct v4l2_subdev_frame_interval_enum 151 - *fie) 152 - { 153 - /*to fake*/ 154 - return 0; 155 - } 156 - 157 - static const struct v4l2_subdev_video_ops file_input_video_ops = { 158 - .s_stream = file_input_s_stream, 159 - }; 160 - 161 - static const struct v4l2_subdev_core_ops file_input_core_ops = { 162 - .log_status = file_input_log_status, 163 - .s_power = file_input_s_power, 164 - }; 165 - 166 - static const struct v4l2_subdev_pad_ops file_input_pad_ops = { 167 - .enum_mbus_code = file_input_enum_mbus_code, 168 - .enum_frame_size = file_input_enum_frame_size, 169 - .enum_frame_interval = file_input_enum_frame_ival, 170 - .get_fmt = file_input_get_fmt, 171 - .set_fmt = file_input_set_fmt, 172 - }; 173 - 174 - static const struct v4l2_subdev_ops file_input_ops = { 175 - .core = &file_input_core_ops, 176 - .video = &file_input_video_ops, 177 - .pad = &file_input_pad_ops, 178 - }; 179 - 180 - void 181 - atomisp_file_input_unregister_entities(struct atomisp_file_device *file_dev) 182 - { 183 - media_entity_cleanup(&file_dev->sd.entity); 184 - v4l2_device_unregister_subdev(&file_dev->sd); 185 - } 186 - 187 - int 
atomisp_file_input_register_entities(struct atomisp_file_device *file_dev, 188 - struct v4l2_device *vdev) 189 - { 190 - /* Register the subdev and video nodes. */ 191 - return v4l2_device_register_subdev(vdev, &file_dev->sd); 192 - } 193 - 194 - void atomisp_file_input_cleanup(struct atomisp_device *isp) 195 - { 196 - struct atomisp_file_device *file_dev = &isp->file_dev; 197 - 198 - if (file_dev->work_queue) { 199 - destroy_workqueue(file_dev->work_queue); 200 - file_dev->work_queue = NULL; 201 - } 202 - } 203 - 204 - int atomisp_file_input_init(struct atomisp_device *isp) 205 - { 206 - struct atomisp_file_device *file_dev = &isp->file_dev; 207 - struct v4l2_subdev *sd = &file_dev->sd; 208 - struct media_pad *pads = file_dev->pads; 209 - struct media_entity *me = &sd->entity; 210 - 211 - file_dev->isp = isp; 212 - file_dev->work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1); 213 - if (!file_dev->work_queue) { 214 - dev_err(isp->dev, "Failed to initialize file inject workq\n"); 215 - return -ENOMEM; 216 - } 217 - 218 - INIT_WORK(&file_dev->work, file_work); 219 - 220 - v4l2_subdev_init(sd, &file_input_ops); 221 - sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE; 222 - strscpy(sd->name, "file_input_subdev", sizeof(sd->name)); 223 - v4l2_set_subdevdata(sd, file_dev); 224 - 225 - pads[0].flags = MEDIA_PAD_FL_SINK; 226 - me->function = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN; 227 - 228 - return media_entity_pads_init(me, 1, pads); 229 - }
-44
drivers/staging/media/atomisp/pci/atomisp_file.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Support for Medifield PNW Camera Imaging ISP subsystem. 4 - * 5 - * Copyright (c) 2010 Intel Corporation. All Rights Reserved. 6 - * 7 - * Copyright (c) 2010 Silicon Hive www.siliconhive.com. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License version 11 - * 2 as published by the Free Software Foundation. 12 - * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - * GNU General Public License for more details. 17 - * 18 - * 19 - */ 20 - 21 - #ifndef __ATOMISP_FILE_H__ 22 - #define __ATOMISP_FILE_H__ 23 - 24 - #include <media/media-entity.h> 25 - #include <media/v4l2-subdev.h> 26 - 27 - struct atomisp_device; 28 - 29 - struct atomisp_file_device { 30 - struct v4l2_subdev sd; 31 - struct atomisp_device *isp; 32 - struct media_pad pads[1]; 33 - 34 - struct workqueue_struct *work_queue; 35 - struct work_struct work; 36 - }; 37 - 38 - void atomisp_file_input_cleanup(struct atomisp_device *isp); 39 - int atomisp_file_input_init(struct atomisp_device *isp); 40 - void atomisp_file_input_unregister_entities( 41 - struct atomisp_file_device *file_dev); 42 - int atomisp_file_input_register_entities(struct atomisp_file_device *file_dev, 43 - struct v4l2_device *vdev); 44 - #endif /* __ATOMISP_FILE_H__ */
+39 -235
drivers/staging/media/atomisp/pci/atomisp_fops.c
··· 369 369 return IA_CSS_BUFFER_TYPE_VF_OUTPUT_FRAME; 370 370 } 371 371 372 - static int atomisp_qbuffers_to_css_for_all_pipes(struct atomisp_sub_device *asd) 373 - { 374 - enum ia_css_buffer_type buf_type; 375 - enum ia_css_pipe_id css_capture_pipe_id = IA_CSS_PIPE_ID_COPY; 376 - enum ia_css_pipe_id css_preview_pipe_id = IA_CSS_PIPE_ID_COPY; 377 - enum ia_css_pipe_id css_video_pipe_id = IA_CSS_PIPE_ID_COPY; 378 - enum atomisp_input_stream_id input_stream_id; 379 - struct atomisp_video_pipe *capture_pipe; 380 - struct atomisp_video_pipe *preview_pipe; 381 - struct atomisp_video_pipe *video_pipe; 382 - 383 - capture_pipe = &asd->video_out_capture; 384 - preview_pipe = &asd->video_out_preview; 385 - video_pipe = &asd->video_out_video_capture; 386 - 387 - buf_type = atomisp_get_css_buf_type( 388 - asd, css_preview_pipe_id, 389 - atomisp_subdev_source_pad(&preview_pipe->vdev)); 390 - input_stream_id = ATOMISP_INPUT_STREAM_PREVIEW; 391 - atomisp_q_video_buffers_to_css(asd, preview_pipe, 392 - input_stream_id, 393 - buf_type, css_preview_pipe_id); 394 - 395 - buf_type = atomisp_get_css_buf_type(asd, css_capture_pipe_id, 396 - atomisp_subdev_source_pad(&capture_pipe->vdev)); 397 - input_stream_id = ATOMISP_INPUT_STREAM_GENERAL; 398 - atomisp_q_video_buffers_to_css(asd, capture_pipe, 399 - input_stream_id, 400 - buf_type, css_capture_pipe_id); 401 - 402 - buf_type = atomisp_get_css_buf_type(asd, css_video_pipe_id, 403 - atomisp_subdev_source_pad(&video_pipe->vdev)); 404 - input_stream_id = ATOMISP_INPUT_STREAM_VIDEO; 405 - atomisp_q_video_buffers_to_css(asd, video_pipe, 406 - input_stream_id, 407 - buf_type, css_video_pipe_id); 408 - return 0; 409 - } 410 - 411 372 /* queue all available buffers to css */ 412 373 int atomisp_qbuffers_to_css(struct atomisp_sub_device *asd) 413 374 { ··· 383 422 struct atomisp_video_pipe *video_pipe = NULL; 384 423 bool raw_mode = atomisp_is_mbuscode_raw( 385 424 asd->fmt[asd->capture_pad].fmt.code); 386 - 387 - if 
(asd->isp->inputs[asd->input_curr].camera_caps-> 388 - sensor[asd->sensor_curr].stream_num == 2 && 389 - !asd->yuvpp_mode) 390 - return atomisp_qbuffers_to_css_for_all_pipes(asd); 391 425 392 426 if (asd->vfpp->val == ATOMISP_VFPP_DISABLE_SCALER) { 393 427 video_pipe = &asd->video_out_video_capture; ··· 549 593 atomisp_videobuf_free_buf(vb); 550 594 } 551 595 552 - static int atomisp_buf_setup_output(struct videobuf_queue *vq, 553 - unsigned int *count, unsigned int *size) 554 - { 555 - struct atomisp_video_pipe *pipe = vq->priv_data; 556 - 557 - *size = pipe->pix.sizeimage; 558 - 559 - return 0; 560 - } 561 - 562 - static int atomisp_buf_prepare_output(struct videobuf_queue *vq, 563 - struct videobuf_buffer *vb, 564 - enum v4l2_field field) 565 - { 566 - struct atomisp_video_pipe *pipe = vq->priv_data; 567 - 568 - vb->size = pipe->pix.sizeimage; 569 - vb->width = pipe->pix.width; 570 - vb->height = pipe->pix.height; 571 - vb->field = field; 572 - vb->state = VIDEOBUF_PREPARED; 573 - 574 - return 0; 575 - } 576 - 577 - static void atomisp_buf_queue_output(struct videobuf_queue *vq, 578 - struct videobuf_buffer *vb) 579 - { 580 - struct atomisp_video_pipe *pipe = vq->priv_data; 581 - 582 - list_add_tail(&vb->queue, &pipe->activeq_out); 583 - vb->state = VIDEOBUF_QUEUED; 584 - } 585 - 586 - static void atomisp_buf_release_output(struct videobuf_queue *vq, 587 - struct videobuf_buffer *vb) 588 - { 589 - videobuf_vmalloc_free(vb); 590 - vb->state = VIDEOBUF_NEEDS_INIT; 591 - } 592 - 593 596 static const struct videobuf_queue_ops videobuf_qops = { 594 597 .buf_setup = atomisp_buf_setup, 595 598 .buf_prepare = atomisp_buf_prepare, 596 599 .buf_queue = atomisp_buf_queue, 597 600 .buf_release = atomisp_buf_release, 598 - }; 599 - 600 - static const struct videobuf_queue_ops videobuf_qops_output = { 601 - .buf_setup = atomisp_buf_setup_output, 602 - .buf_prepare = atomisp_buf_prepare_output, 603 - .buf_queue = atomisp_buf_queue_output, 604 - .buf_release = 
atomisp_buf_release_output, 605 601 }; 606 602 607 603 static int atomisp_init_pipe(struct atomisp_video_pipe *pipe) ··· 568 660 sizeof(struct atomisp_buffer), pipe, 569 661 NULL); /* ext_lock: NULL */ 570 662 571 - videobuf_queue_vmalloc_init(&pipe->outq, &videobuf_qops_output, NULL, 572 - &pipe->irq_lock, 573 - V4L2_BUF_TYPE_VIDEO_OUTPUT, 574 - V4L2_FIELD_NONE, 575 - sizeof(struct atomisp_buffer), pipe, 576 - NULL); /* ext_lock: NULL */ 577 - 578 663 INIT_LIST_HEAD(&pipe->activeq); 579 - INIT_LIST_HEAD(&pipe->activeq_out); 580 664 INIT_LIST_HEAD(&pipe->buffers_waiting_for_param); 581 665 INIT_LIST_HEAD(&pipe->per_frame_params); 582 666 memset(pipe->frame_request_config_id, 0, ··· 584 684 { 585 685 unsigned int i; 586 686 587 - isp->sw_contex.file_input = false; 588 687 isp->need_gfx_throttle = true; 589 688 isp->isp_fatal_error = false; 590 689 isp->mipi_frame_size = 0; ··· 640 741 return asd->video_out_preview.users + 641 742 asd->video_out_vf.users + 642 743 asd->video_out_capture.users + 643 - asd->video_out_video_capture.users + 644 - asd->video_acc.users + 645 - asd->video_in.users; 744 + asd->video_out_video_capture.users; 646 745 } 647 746 648 747 unsigned int atomisp_dev_users(struct atomisp_device *isp) ··· 657 760 { 658 761 struct video_device *vdev = video_devdata(file); 659 762 struct atomisp_device *isp = video_get_drvdata(vdev); 660 - struct atomisp_video_pipe *pipe = NULL; 661 - struct atomisp_acc_pipe *acc_pipe = NULL; 662 - struct atomisp_sub_device *asd; 663 - bool acc_node = false; 763 + struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 764 + struct atomisp_sub_device *asd = pipe->asd; 664 765 int ret; 665 766 666 767 dev_dbg(isp->dev, "open device %s\n", vdev->name); 667 768 668 - /* 669 - * Ensure that if we are still loading we block. Once the loading 670 - * is over we can proceed. 
We can't blindly hold the lock until 671 - * that occurs as if the load fails we'll deadlock the unload 672 - */ 673 - rt_mutex_lock(&isp->loading); 674 - /* 675 - * FIXME: revisit this with a better check once the code structure 676 - * is cleaned up a bit more 677 - */ 678 769 ret = v4l2_fh_open(file); 679 - if (ret) { 680 - dev_err(isp->dev, 681 - "%s: v4l2_fh_open() returned error %d\n", 682 - __func__, ret); 683 - rt_mutex_unlock(&isp->loading); 770 + if (ret) 684 771 return ret; 685 - } 686 - if (!isp->ready) { 687 - rt_mutex_unlock(&isp->loading); 688 - return -ENXIO; 689 - } 690 - rt_mutex_unlock(&isp->loading); 691 772 692 - rt_mutex_lock(&isp->mutex); 773 + mutex_lock(&isp->mutex); 693 774 694 - acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC"); 695 - if (acc_node) { 696 - acc_pipe = atomisp_to_acc_pipe(vdev); 697 - asd = acc_pipe->asd; 698 - } else { 699 - pipe = atomisp_to_video_pipe(vdev); 700 - asd = pipe->asd; 701 - } 702 775 asd->subdev.devnode = vdev; 703 776 /* Deferred firmware loading case. 
*/ 704 777 if (isp->css_env.isp_css_fw.bytes == 0) { ··· 690 823 isp->css_env.isp_css_fw.data = NULL; 691 824 } 692 825 693 - if (acc_node && acc_pipe->users) { 694 - dev_dbg(isp->dev, "acc node already opened\n"); 695 - rt_mutex_unlock(&isp->mutex); 696 - return -EBUSY; 697 - } else if (acc_node) { 698 - goto dev_init; 699 - } 700 - 701 826 if (!isp->input_cnt) { 702 827 dev_err(isp->dev, "no camera attached\n"); 703 828 ret = -EINVAL; ··· 701 842 */ 702 843 if (pipe->users) { 703 844 dev_dbg(isp->dev, "video node already opened\n"); 704 - rt_mutex_unlock(&isp->mutex); 845 + mutex_unlock(&isp->mutex); 705 846 return -EBUSY; 706 847 } 707 848 ··· 709 850 if (ret) 710 851 goto error; 711 852 712 - dev_init: 713 853 if (atomisp_dev_users(isp)) { 714 854 dev_dbg(isp->dev, "skip init isp in open\n"); 715 855 goto init_subdev; ··· 743 885 atomisp_subdev_init_struct(asd); 744 886 745 887 done: 746 - 747 - if (acc_node) 748 - acc_pipe->users++; 749 - else 750 - pipe->users++; 751 - rt_mutex_unlock(&isp->mutex); 888 + pipe->users++; 889 + mutex_unlock(&isp->mutex); 752 890 753 891 /* Ensure that a mode is set */ 754 - if (!acc_node) 755 - v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode); 892 + v4l2_ctrl_s_ctrl(asd->run_mode, pipe->default_run_mode); 756 893 757 894 return 0; 758 895 ··· 755 902 atomisp_css_uninit(isp); 756 903 pm_runtime_put(vdev->v4l2_dev->dev); 757 904 error: 758 - rt_mutex_unlock(&isp->mutex); 905 + mutex_unlock(&isp->mutex); 906 + v4l2_fh_release(file); 759 907 return ret; 760 908 } 761 909 ··· 764 910 { 765 911 struct video_device *vdev = video_devdata(file); 766 912 struct atomisp_device *isp = video_get_drvdata(vdev); 767 - struct atomisp_video_pipe *pipe; 768 - struct atomisp_acc_pipe *acc_pipe; 769 - struct atomisp_sub_device *asd; 770 - bool acc_node; 913 + struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 914 + struct atomisp_sub_device *asd = pipe->asd; 771 915 struct v4l2_requestbuffers req; 772 916 struct v4l2_subdev_fh 
fh; 773 917 struct v4l2_rect clear_compose = {0}; 918 + unsigned long flags; 774 919 int ret = 0; 775 920 776 921 v4l2_fh_init(&fh.vfh, vdev); ··· 778 925 if (!isp) 779 926 return -EBADF; 780 927 781 - mutex_lock(&isp->streamoff_mutex); 782 - rt_mutex_lock(&isp->mutex); 928 + mutex_lock(&isp->mutex); 783 929 784 930 dev_dbg(isp->dev, "release device %s\n", vdev->name); 785 - acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC"); 786 - if (acc_node) { 787 - acc_pipe = atomisp_to_acc_pipe(vdev); 788 - asd = acc_pipe->asd; 789 - } else { 790 - pipe = atomisp_to_video_pipe(vdev); 791 - asd = pipe->asd; 792 - } 931 + 793 932 asd->subdev.devnode = vdev; 794 - if (acc_node) { 795 - acc_pipe->users--; 796 - goto subdev_uninit; 797 - } 933 + 798 934 pipe->users--; 799 935 800 936 if (pipe->capq.streaming) ··· 792 950 __func__); 793 951 794 952 if (pipe->capq.streaming && 795 - __atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) { 796 - dev_err(isp->dev, 797 - "atomisp_streamoff failed on release, driver bug"); 953 + atomisp_streamoff(file, NULL, V4L2_BUF_TYPE_VIDEO_CAPTURE)) { 954 + dev_err(isp->dev, "atomisp_streamoff failed on release, driver bug"); 798 955 goto done; 799 956 } 800 957 801 958 if (pipe->users) 802 959 goto done; 803 960 804 - if (__atomisp_reqbufs(file, NULL, &req)) { 805 - dev_err(isp->dev, 806 - "atomisp_reqbufs failed on release, driver bug"); 961 + if (atomisp_reqbufs(file, NULL, &req)) { 962 + dev_err(isp->dev, "atomisp_reqbufs failed on release, driver bug"); 807 963 goto done; 808 - } 809 - 810 - if (pipe->outq.bufs[0]) { 811 - mutex_lock(&pipe->outq.vb_lock); 812 - videobuf_queue_cancel(&pipe->outq); 813 - mutex_unlock(&pipe->outq.vb_lock); 814 964 } 815 965 816 966 /* ··· 812 978 * The sink pad setting can only be cleared when all device nodes 813 979 * get released. 
814 980 */ 815 - if (!isp->sw_contex.file_input && asd->fmt_auto->val) { 981 + if (asd->fmt_auto->val) { 816 982 struct v4l2_mbus_framefmt isp_sink_fmt = { 0 }; 817 983 818 984 atomisp_subdev_set_ffmt(&asd->subdev, fh.state, 819 985 V4L2_SUBDEV_FORMAT_ACTIVE, 820 986 ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt); 821 987 } 822 - subdev_uninit: 988 + 823 989 if (atomisp_subdev_users(asd)) 824 990 goto done; 825 - 826 - /* clear the sink pad for file input */ 827 - if (isp->sw_contex.file_input && asd->fmt_auto->val) { 828 - struct v4l2_mbus_framefmt isp_sink_fmt = { 0 }; 829 - 830 - atomisp_subdev_set_ffmt(&asd->subdev, fh.state, 831 - V4L2_SUBDEV_FORMAT_ACTIVE, 832 - ATOMISP_SUBDEV_PAD_SINK, &isp_sink_fmt); 833 - } 834 991 835 992 atomisp_css_free_stat_buffers(asd); 836 993 atomisp_free_internal_buffers(asd); ··· 832 1007 833 1008 /* clear the asd field to show this camera is not used */ 834 1009 isp->inputs[asd->input_curr].asd = NULL; 1010 + spin_lock_irqsave(&isp->lock, flags); 835 1011 asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED; 1012 + spin_unlock_irqrestore(&isp->lock, flags); 836 1013 837 1014 if (atomisp_dev_users(isp)) 838 1015 goto done; ··· 856 1029 dev_err(isp->dev, "Failed to power off device\n"); 857 1030 858 1031 done: 859 - if (!acc_node) { 860 - atomisp_subdev_set_selection(&asd->subdev, fh.state, 861 - V4L2_SUBDEV_FORMAT_ACTIVE, 862 - atomisp_subdev_source_pad(vdev), 863 - V4L2_SEL_TGT_COMPOSE, 0, 864 - &clear_compose); 865 - } 866 - rt_mutex_unlock(&isp->mutex); 867 - mutex_unlock(&isp->streamoff_mutex); 1032 + atomisp_subdev_set_selection(&asd->subdev, fh.state, 1033 + V4L2_SUBDEV_FORMAT_ACTIVE, 1034 + atomisp_subdev_source_pad(vdev), 1035 + V4L2_SEL_TGT_COMPOSE, 0, 1036 + &clear_compose); 1037 + mutex_unlock(&isp->mutex); 868 1038 869 1039 return v4l2_fh_release(file); 870 1040 } ··· 1018 1194 if (!(vma->vm_flags & (VM_WRITE | VM_READ))) 1019 1195 return -EACCES; 1020 1196 1021 - rt_mutex_lock(&isp->mutex); 1197 + mutex_lock(&isp->mutex); 1022 
1198 1023 1199 if (!(vma->vm_flags & VM_SHARED)) { 1024 1200 /* Map private buffer. ··· 1029 1205 */ 1030 1206 vma->vm_flags |= VM_SHARED; 1031 1207 ret = hmm_mmap(vma, vma->vm_pgoff << PAGE_SHIFT); 1032 - rt_mutex_unlock(&isp->mutex); 1208 + mutex_unlock(&isp->mutex); 1033 1209 return ret; 1034 1210 } 1035 1211 ··· 1072 1248 } 1073 1249 raw_virt_addr->data_bytes = origin_size; 1074 1250 vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; 1075 - rt_mutex_unlock(&isp->mutex); 1251 + mutex_unlock(&isp->mutex); 1076 1252 return 0; 1077 1253 } 1078 1254 ··· 1084 1260 ret = -EINVAL; 1085 1261 goto error; 1086 1262 } 1087 - rt_mutex_unlock(&isp->mutex); 1263 + mutex_unlock(&isp->mutex); 1088 1264 1089 1265 return atomisp_videobuf_mmap_mapper(&pipe->capq, vma); 1090 1266 1091 1267 error: 1092 - rt_mutex_unlock(&isp->mutex); 1268 + mutex_unlock(&isp->mutex); 1093 1269 1094 1270 return ret; 1095 - } 1096 - 1097 - static int atomisp_file_mmap(struct file *file, struct vm_area_struct *vma) 1098 - { 1099 - struct video_device *vdev = video_devdata(file); 1100 - struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1101 - 1102 - return videobuf_mmap_mapper(&pipe->outq, vma); 1103 1271 } 1104 1272 1105 1273 static __poll_t atomisp_poll(struct file *file, ··· 1101 1285 struct atomisp_device *isp = video_get_drvdata(vdev); 1102 1286 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1103 1287 1104 - rt_mutex_lock(&isp->mutex); 1288 + mutex_lock(&isp->mutex); 1105 1289 if (pipe->capq.streaming != 1) { 1106 - rt_mutex_unlock(&isp->mutex); 1290 + mutex_unlock(&isp->mutex); 1107 1291 return EPOLLERR; 1108 1292 } 1109 - rt_mutex_unlock(&isp->mutex); 1293 + mutex_unlock(&isp->mutex); 1110 1294 1111 1295 return videobuf_poll_stream(file, &pipe->capq, pt); 1112 1296 } ··· 1123 1307 * needs to be made safe for compat tasks instead. 
1124 1308 .compat_ioctl32 = atomisp_compat_ioctl32, 1125 1309 */ 1126 - #endif 1127 - .poll = atomisp_poll, 1128 - }; 1129 - 1130 - const struct v4l2_file_operations atomisp_file_fops = { 1131 - .owner = THIS_MODULE, 1132 - .open = atomisp_open, 1133 - .release = atomisp_release, 1134 - .mmap = atomisp_file_mmap, 1135 - .unlocked_ioctl = video_ioctl2, 1136 - #ifdef CONFIG_COMPAT 1137 - /* .compat_ioctl32 = atomisp_compat_ioctl32, */ 1138 1310 #endif 1139 1311 .poll = atomisp_poll, 1140 1312 };
+40 -54
drivers/staging/media/atomisp/pci/atomisp_gmin_platform.c
··· 134 134 135 135 static struct gmin_subdev *find_gmin_subdev(struct v4l2_subdev *subdev); 136 136 137 - /* 138 - * Legacy/stub behavior copied from upstream platform_camera.c. The 139 - * atomisp driver relies on these values being non-NULL in a few 140 - * places, even though they are hard-coded in all current 141 - * implementations. 142 - */ 143 - const struct atomisp_camera_caps *atomisp_get_default_camera_caps(void) 144 - { 145 - static const struct atomisp_camera_caps caps = { 146 - .sensor_num = 1, 147 - .sensor = { 148 - { .stream_num = 1, }, 149 - }, 150 - }; 151 - return &caps; 152 - } 153 - EXPORT_SYMBOL_GPL(atomisp_get_default_camera_caps); 154 - 155 137 const struct atomisp_platform_data *atomisp_get_platform_data(void) 156 138 { 157 139 return &pdata; ··· 1048 1066 return ret; 1049 1067 } 1050 1068 1069 + static int camera_sensor_csi_alloc(struct v4l2_subdev *sd, u32 port, u32 lanes, 1070 + u32 format, u32 bayer_order) 1071 + { 1072 + struct i2c_client *client = v4l2_get_subdevdata(sd); 1073 + struct camera_mipi_info *csi; 1074 + 1075 + csi = kzalloc(sizeof(*csi), GFP_KERNEL); 1076 + if (!csi) 1077 + return -ENOMEM; 1078 + 1079 + csi->port = port; 1080 + csi->num_lanes = lanes; 1081 + csi->input_format = format; 1082 + csi->raw_bayer_order = bayer_order; 1083 + v4l2_set_subdev_hostdata(sd, csi); 1084 + csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED; 1085 + csi->metadata_effective_width = NULL; 1086 + dev_info(&client->dev, 1087 + "camera pdata: port: %d lanes: %d order: %8.8x\n", 1088 + port, lanes, bayer_order); 1089 + 1090 + return 0; 1091 + } 1092 + 1093 + static void camera_sensor_csi_free(struct v4l2_subdev *sd) 1094 + { 1095 + struct camera_mipi_info *csi; 1096 + 1097 + csi = v4l2_get_subdev_hostdata(sd); 1098 + kfree(csi); 1099 + } 1100 + 1051 1101 static int gmin_csi_cfg(struct v4l2_subdev *sd, int flag) 1052 1102 { 1053 1103 struct i2c_client *client = v4l2_get_subdevdata(sd); ··· 1088 1074 if (!client || !gs) 1089 1075 return 
-ENODEV; 1090 1076 1091 - return camera_sensor_csi(sd, gs->csi_port, gs->csi_lanes, 1092 - gs->csi_fmt, gs->csi_bayer, flag); 1077 + if (flag) 1078 + return camera_sensor_csi_alloc(sd, gs->csi_port, gs->csi_lanes, 1079 + gs->csi_fmt, gs->csi_bayer); 1080 + camera_sensor_csi_free(sd); 1081 + return 0; 1093 1082 } 1094 1083 1095 1084 static struct camera_vcm_control *gmin_get_vcm_ctrl(struct v4l2_subdev *subdev, ··· 1224 1207 if (!strcmp(var, "CamClk")) 1225 1208 return -EINVAL; 1226 1209 1227 - obj = acpi_evaluate_dsm(handle, &atomisp_dsm_guid, 0, 0, NULL); 1210 + /* Return on unexpected object type */ 1211 + obj = acpi_evaluate_dsm_typed(handle, &atomisp_dsm_guid, 0, 0, NULL, 1212 + ACPI_TYPE_PACKAGE); 1228 1213 if (!obj) { 1229 1214 dev_info_once(dev, "Didn't find ACPI _DSM table.\n"); 1230 1215 return -EINVAL; 1231 1216 } 1232 - 1233 - /* Return on unexpected object type */ 1234 - if (obj->type != ACPI_TYPE_PACKAGE) 1235 - return -EINVAL; 1236 1217 1237 1218 #if 0 /* Just for debugging purposes */ 1238 1219 for (i = 0; i < obj->package.count; i++) { ··· 1374 1359 return ret ? 
def : result; 1375 1360 } 1376 1361 EXPORT_SYMBOL_GPL(gmin_get_var_int); 1377 - 1378 - int camera_sensor_csi(struct v4l2_subdev *sd, u32 port, 1379 - u32 lanes, u32 format, u32 bayer_order, int flag) 1380 - { 1381 - struct i2c_client *client = v4l2_get_subdevdata(sd); 1382 - struct camera_mipi_info *csi = NULL; 1383 - 1384 - if (flag) { 1385 - csi = kzalloc(sizeof(*csi), GFP_KERNEL); 1386 - if (!csi) 1387 - return -ENOMEM; 1388 - csi->port = port; 1389 - csi->num_lanes = lanes; 1390 - csi->input_format = format; 1391 - csi->raw_bayer_order = bayer_order; 1392 - v4l2_set_subdev_hostdata(sd, (void *)csi); 1393 - csi->metadata_format = ATOMISP_INPUT_FORMAT_EMBEDDED; 1394 - csi->metadata_effective_width = NULL; 1395 - dev_info(&client->dev, 1396 - "camera pdata: port: %d lanes: %d order: %8.8x\n", 1397 - port, lanes, bayer_order); 1398 - } else { 1399 - csi = v4l2_get_subdev_hostdata(sd); 1400 - kfree(csi); 1401 - } 1402 - 1403 - return 0; 1404 - } 1405 - EXPORT_SYMBOL_GPL(camera_sensor_csi); 1406 1362 1407 1363 /* PCI quirk: The BYT ISP advertises PCI runtime PM but it doesn't 1408 1364 * work. Disable so the kernel framework doesn't hang the device
+5 -50
drivers/staging/media/atomisp/pci/atomisp_internal.h
··· 34 34 #include "sh_css_legacy.h" 35 35 36 36 #include "atomisp_csi2.h" 37 - #include "atomisp_file.h" 38 37 #include "atomisp_subdev.h" 39 38 #include "atomisp_tpg.h" 40 39 #include "atomisp_compat.h" ··· 85 86 #define ATOM_ISP_POWER_DOWN 0 86 87 #define ATOM_ISP_POWER_UP 1 87 88 88 - #define ATOM_ISP_MAX_INPUTS 4 89 + #define ATOM_ISP_MAX_INPUTS 3 89 90 90 91 #define ATOMISP_SC_TYPE_SIZE 2 91 92 92 93 #define ATOMISP_ISP_TIMEOUT_DURATION (2 * HZ) 93 94 #define ATOMISP_EXT_ISP_TIMEOUT_DURATION (6 * HZ) 94 - #define ATOMISP_ISP_FILE_TIMEOUT_DURATION (60 * HZ) 95 95 #define ATOMISP_WDT_KEEP_CURRENT_DELAY 0 96 96 #define ATOMISP_ISP_MAX_TIMEOUT_COUNT 2 97 97 #define ATOMISP_CSS_STOP_TIMEOUT_US 200000 ··· 104 106 #define ATOMISP_DELAYED_INIT_NOT_QUEUED 0 105 107 #define ATOMISP_DELAYED_INIT_QUEUED 1 106 108 #define ATOMISP_DELAYED_INIT_DONE 2 107 - 108 - #define ATOMISP_CALC_CSS_PREV_OVERLAP(lines) \ 109 - ((lines) * 38 / 100 & 0xfffffe) 110 109 111 110 /* 112 111 * Define how fast CPU should be able to serve ISP interrupts. ··· 127 132 * Moorefield/Baytrail platform. 
128 133 */ 129 134 #define ATOMISP_SOC_CAMERA(asd) \ 130 - (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA \ 131 - && asd->isp->inputs[asd->input_curr].camera_caps-> \ 132 - sensor[asd->sensor_curr].stream_num == 1) 135 + (asd->isp->inputs[asd->input_curr].type == SOC_CAMERA) 133 136 134 137 #define ATOMISP_USE_YUVPP(asd) \ 135 138 (ATOMISP_SOC_CAMERA(asd) && ATOMISP_CSS_SUPPORT_YUVPP && \ ··· 160 167 */ 161 168 struct atomisp_sub_device *asd; 162 169 163 - const struct atomisp_camera_caps *camera_caps; 164 170 int sensor_index; 165 171 }; 166 172 ··· 195 203 }; 196 204 197 205 struct atomisp_sw_contex { 198 - bool file_input; 199 206 int power_state; 200 207 int running_freq; 201 208 }; ··· 232 241 233 242 struct atomisp_mipi_csi2_device csi2_port[ATOMISP_CAMERA_NR_PORTS]; 234 243 struct atomisp_tpg_device tpg; 235 - struct atomisp_file_device file_dev; 236 244 237 245 /* Purpose of mutex is to protect and serialize use of isp data 238 246 * structures and css API calls. */ 239 - struct rt_mutex mutex; 240 - /* 241 - * This mutex ensures that we don't allow an open to succeed while 242 - * the initialization process is incomplete 243 - */ 244 - struct rt_mutex loading; 245 - /* Set once the ISP is ready to allow opens */ 246 - bool ready; 247 - /* 248 - * Serialise streamoff: mutex is dropped during streamoff to 249 - * cancel the watchdog queue. MUST be acquired BEFORE 250 - * "mutex". 
251 - */ 252 - struct mutex streamoff_mutex; 247 + struct mutex mutex; 253 248 254 249 unsigned int input_cnt; 255 250 struct atomisp_input_subdev inputs[ATOM_ISP_MAX_INPUTS]; ··· 249 272 /* isp timeout status flag */ 250 273 bool isp_timeout; 251 274 bool isp_fatal_error; 252 - struct workqueue_struct *wdt_work_queue; 253 - struct work_struct wdt_work; 275 + struct work_struct assert_recovery_work; 254 276 255 - /* ISP2400 */ 256 - atomic_t wdt_count; 257 - 258 - atomic_t wdt_work_queued; 259 - 260 - spinlock_t lock; /* Just for streaming below */ 277 + spinlock_t lock; /* Protects asd[i].streaming */ 261 278 262 279 bool need_gfx_throttle; 263 280 ··· 266 295 container_of(dev, struct atomisp_device, v4l2_dev) 267 296 268 297 extern struct device *atomisp_dev; 269 - 270 - #define atomisp_is_wdt_running(a) timer_pending(&(a)->wdt) 271 - 272 - /* ISP2401 */ 273 - void atomisp_wdt_refresh_pipe(struct atomisp_video_pipe *pipe, 274 - unsigned int delay); 275 - void atomisp_wdt_refresh(struct atomisp_sub_device *asd, unsigned int delay); 276 - 277 - /* ISP2400 */ 278 - void atomisp_wdt_start(struct atomisp_sub_device *asd); 279 - 280 - /* ISP2401 */ 281 - void atomisp_wdt_start_pipe(struct atomisp_video_pipe *pipe); 282 - void atomisp_wdt_stop_pipe(struct atomisp_video_pipe *pipe, bool sync); 283 - 284 - void atomisp_wdt_stop(struct atomisp_sub_device *asd, bool sync); 285 298 286 299 #endif /* __ATOMISP_INTERNAL_H__ */
+142 -634
drivers/staging/media/atomisp/pci/atomisp_ioctl.c
··· 535 535 return NULL; 536 536 } 537 537 538 + int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool settings_change) 539 + { 540 + lockdep_assert_held(&pipe->isp->mutex); 541 + 542 + if (pipe->isp->isp_fatal_error) 543 + return -EIO; 544 + 545 + switch (pipe->asd->streaming) { 546 + case ATOMISP_DEVICE_STREAMING_DISABLED: 547 + break; 548 + case ATOMISP_DEVICE_STREAMING_ENABLED: 549 + if (settings_change) { 550 + dev_err(pipe->isp->dev, "Set fmt/input IOCTL while streaming\n"); 551 + return -EBUSY; 552 + } 553 + break; 554 + case ATOMISP_DEVICE_STREAMING_STOPPING: 555 + dev_err(pipe->isp->dev, "IOCTL issued while stopping\n"); 556 + return -EBUSY; 557 + default: 558 + return -EINVAL; 559 + } 560 + 561 + return 0; 562 + } 563 + 538 564 /* 539 565 * v4l2 ioctls 540 566 * return ISP capabilities ··· 635 609 return asd->video_out_preview.capq.streaming 636 610 + asd->video_out_capture.capq.streaming 637 611 + asd->video_out_video_capture.capq.streaming 638 - + asd->video_out_vf.capq.streaming 639 - + asd->video_in.capq.streaming; 612 + + asd->video_out_vf.capq.streaming; 640 613 } 641 614 642 615 unsigned int atomisp_streaming_count(struct atomisp_device *isp) ··· 655 630 static int atomisp_g_input(struct file *file, void *fh, unsigned int *input) 656 631 { 657 632 struct video_device *vdev = video_devdata(file); 658 - struct atomisp_device *isp = video_get_drvdata(vdev); 659 633 struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 660 634 661 - if (!asd) { 662 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 663 - __func__, vdev->name); 664 - return -EINVAL; 665 - } 666 - 667 - rt_mutex_lock(&isp->mutex); 668 635 *input = asd->input_curr; 669 - rt_mutex_unlock(&isp->mutex); 670 - 671 636 return 0; 672 637 } 673 638 ··· 668 653 { 669 654 struct video_device *vdev = video_devdata(file); 670 655 struct atomisp_device *isp = video_get_drvdata(vdev); 671 - struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 656 + struct 
atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 657 + struct atomisp_sub_device *asd = pipe->asd; 672 658 struct v4l2_subdev *camera = NULL; 673 659 struct v4l2_subdev *motor; 674 660 int ret; 675 661 676 - if (!asd) { 677 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 678 - __func__, vdev->name); 679 - return -EINVAL; 680 - } 662 + ret = atomisp_pipe_check(pipe, true); 663 + if (ret) 664 + return ret; 681 665 682 - rt_mutex_lock(&isp->mutex); 683 666 if (input >= ATOM_ISP_MAX_INPUTS || input >= isp->input_cnt) { 684 667 dev_dbg(isp->dev, "input_cnt: %d\n", isp->input_cnt); 685 - ret = -EINVAL; 686 - goto error; 668 + return -EINVAL; 687 669 } 688 670 689 671 /* ··· 692 680 dev_err(isp->dev, 693 681 "%s, camera is already used by stream: %d\n", __func__, 694 682 isp->inputs[input].asd->index); 695 - ret = -EBUSY; 696 - goto error; 683 + return -EBUSY; 697 684 } 698 685 699 686 camera = isp->inputs[input].camera; 700 687 if (!camera) { 701 688 dev_err(isp->dev, "%s, no camera\n", __func__); 702 - ret = -EINVAL; 703 - goto error; 704 - } 705 - 706 - if (atomisp_subdev_streaming_count(asd)) { 707 - dev_err(isp->dev, 708 - "ISP is still streaming, stop first\n"); 709 - ret = -EINVAL; 710 - goto error; 689 + return -EINVAL; 711 690 } 712 691 713 692 /* power off the current owned sensor, as it is not used this time */ ··· 717 714 ret = v4l2_subdev_call(isp->inputs[input].camera, core, s_power, 1); 718 715 if (ret) { 719 716 dev_err(isp->dev, "Failed to power-on sensor\n"); 720 - goto error; 717 + return ret; 721 718 } 722 719 /* 723 720 * Some sensor driver resets the run mode during power-on, thus force ··· 730 727 0, isp->inputs[input].sensor_index, 0); 731 728 if (ret && (ret != -ENOIOCTLCMD)) { 732 729 dev_err(isp->dev, "Failed to select sensor\n"); 733 - goto error; 730 + return ret; 734 731 } 735 732 736 733 if (!IS_ISP2401) { ··· 741 738 ret = v4l2_subdev_call(motor, core, s_power, 1); 742 739 } 743 740 744 - if (!isp->sw_contex.file_input && 
motor) 741 + if (motor) 745 742 ret = v4l2_subdev_call(motor, core, init, 1); 746 743 747 744 asd->input_curr = input; 748 745 /* mark this camera is used by the current stream */ 749 746 isp->inputs[input].asd = asd; 750 - rt_mutex_unlock(&isp->mutex); 751 747 752 748 return 0; 753 - 754 - error: 755 - rt_mutex_unlock(&isp->mutex); 756 - 757 - return ret; 758 749 } 759 750 760 751 static int atomisp_enum_framesizes(struct file *file, void *priv, ··· 816 819 unsigned int i, fi = 0; 817 820 int rval; 818 821 819 - if (!asd) { 820 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 821 - __func__, vdev->name); 822 - return -EINVAL; 823 - } 824 - 825 822 camera = isp->inputs[asd->input_curr].camera; 826 823 if(!camera) { 827 824 dev_err(isp->dev, "%s(): camera is NULL, device is %s\n", ··· 823 832 return -EINVAL; 824 833 } 825 834 826 - rt_mutex_lock(&isp->mutex); 827 - 828 835 rval = v4l2_subdev_call(camera, pad, enum_mbus_code, NULL, &code); 829 836 if (rval == -ENOIOCTLCMD) { 830 837 dev_warn(isp->dev, 831 838 "enum_mbus_code pad op not supported by %s. 
Please fix your sensor driver!\n", 833 840 camera->name); 834 841 } 835 - rt_mutex_unlock(&isp->mutex); 836 842 837 843 if (rval) 838 844 return rval; ··· 858 870 } 859 871 860 872 return -EINVAL; 861 - } 862 - 863 - static int atomisp_g_fmt_file(struct file *file, void *fh, 864 - struct v4l2_format *f) 865 - { 866 - struct video_device *vdev = video_devdata(file); 867 - struct atomisp_device *isp = video_get_drvdata(vdev); 868 - struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 869 - 870 - rt_mutex_lock(&isp->mutex); 871 - f->fmt.pix = pipe->pix; 872 - rt_mutex_unlock(&isp->mutex); 873 - 874 - return 0; 875 873 } 876 874 877 875 static int atomisp_adjust_fmt(struct v4l2_format *f) ··· 931 957 struct v4l2_format *f) 932 958 { 933 959 struct video_device *vdev = video_devdata(file); 934 - struct atomisp_device *isp = video_get_drvdata(vdev); 935 960 int ret; 936 961 937 - rt_mutex_lock(&isp->mutex); 938 - ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL); 939 - rt_mutex_unlock(&isp->mutex); 962 + /* 963 + * atomisp_try_fmt() gives results with padding included, note 964 + * this gets removed again by the atomisp_adjust_fmt() call below.
965 + */ 966 + f->fmt.pix.width += pad_w; 967 + f->fmt.pix.height += pad_h; 940 968 969 + ret = atomisp_try_fmt(vdev, &f->fmt.pix, NULL); 941 970 if (ret) 942 971 return ret; 943 972 ··· 951 974 struct v4l2_format *f) 952 975 { 953 976 struct video_device *vdev = video_devdata(file); 954 - struct atomisp_device *isp = video_get_drvdata(vdev); 955 977 struct atomisp_video_pipe *pipe; 956 978 957 - rt_mutex_lock(&isp->mutex); 958 979 pipe = atomisp_to_video_pipe(vdev); 959 - rt_mutex_unlock(&isp->mutex); 960 980 961 981 f->fmt.pix = pipe->pix; 962 982 ··· 966 992 f->fmt.pix.height = 10000; 967 993 968 994 return atomisp_try_fmt_cap(file, fh, f); 969 - } 970 - 971 - static int atomisp_s_fmt_cap(struct file *file, void *fh, 972 - struct v4l2_format *f) 973 - { 974 - struct video_device *vdev = video_devdata(file); 975 - struct atomisp_device *isp = video_get_drvdata(vdev); 976 - int ret; 977 - 978 - rt_mutex_lock(&isp->mutex); 979 - if (isp->isp_fatal_error) { 980 - ret = -EIO; 981 - rt_mutex_unlock(&isp->mutex); 982 - return ret; 983 - } 984 - ret = atomisp_set_fmt(vdev, f); 985 - rt_mutex_unlock(&isp->mutex); 986 - return ret; 987 - } 988 - 989 - static int atomisp_s_fmt_file(struct file *file, void *fh, 990 - struct v4l2_format *f) 991 - { 992 - struct video_device *vdev = video_devdata(file); 993 - struct atomisp_device *isp = video_get_drvdata(vdev); 994 - int ret; 995 - 996 - rt_mutex_lock(&isp->mutex); 997 - ret = atomisp_set_fmt_file(vdev, f); 998 - rt_mutex_unlock(&isp->mutex); 999 - return ret; 1000 995 } 1001 996 1002 997 /* ··· 1103 1160 /* 1104 1161 * Initiate Memory Mapping or User Pointer I/O 1105 1162 */ 1106 - int __atomisp_reqbufs(struct file *file, void *fh, 1107 - struct v4l2_requestbuffers *req) 1163 + int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req) 1108 1164 { 1109 1165 struct video_device *vdev = video_devdata(file); 1110 1166 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); ··· 1112 1170 struct 
ia_css_frame *frame; 1113 1171 struct videobuf_vmalloc_memory *vm_mem; 1114 1172 u16 source_pad = atomisp_subdev_source_pad(vdev); 1115 - u16 stream_id; 1116 1173 int ret = 0, i = 0; 1117 - 1118 - if (!asd) { 1119 - dev_err(pipe->isp->dev, "%s(): asd is NULL, device is %s\n", 1120 - __func__, vdev->name); 1121 - return -EINVAL; 1122 - } 1123 - stream_id = atomisp_source_pad_to_stream_id(asd, source_pad); 1124 1174 1125 1175 if (req->count == 0) { 1126 1176 mutex_lock(&pipe->capq.vb_lock); ··· 1134 1200 if (ret) 1135 1201 return ret; 1136 1202 1137 - atomisp_alloc_css_stat_bufs(asd, stream_id); 1203 + atomisp_alloc_css_stat_bufs(asd, ATOMISP_INPUT_STREAM_GENERAL); 1138 1204 1139 1205 /* 1140 1206 * for user pointer type, buffers are not really allocated here, ··· 1172 1238 return -ENOMEM; 1173 1239 } 1174 1240 1175 - int atomisp_reqbufs(struct file *file, void *fh, 1176 - struct v4l2_requestbuffers *req) 1177 - { 1178 - struct video_device *vdev = video_devdata(file); 1179 - struct atomisp_device *isp = video_get_drvdata(vdev); 1180 - int ret; 1181 - 1182 - rt_mutex_lock(&isp->mutex); 1183 - ret = __atomisp_reqbufs(file, fh, req); 1184 - rt_mutex_unlock(&isp->mutex); 1185 - 1186 - return ret; 1187 - } 1188 - 1189 - static int atomisp_reqbufs_file(struct file *file, void *fh, 1190 - struct v4l2_requestbuffers *req) 1191 - { 1192 - struct video_device *vdev = video_devdata(file); 1193 - struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1194 - 1195 - if (req->count == 0) { 1196 - mutex_lock(&pipe->outq.vb_lock); 1197 - atomisp_videobuf_free_queue(&pipe->outq); 1198 - mutex_unlock(&pipe->outq.vb_lock); 1199 - return 0; 1200 - } 1201 - 1202 - return videobuf_reqbufs(&pipe->outq, req); 1203 - } 1204 - 1205 1241 /* application query the status of a buffer */ 1206 1242 static int atomisp_querybuf(struct file *file, void *fh, 1207 1243 struct v4l2_buffer *buf) ··· 1180 1276 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1181 1277 1182 1278 return 
videobuf_querybuf(&pipe->capq, buf); 1183 - } 1184 - 1185 - static int atomisp_querybuf_file(struct file *file, void *fh, 1186 - struct v4l2_buffer *buf) 1187 - { 1188 - struct video_device *vdev = video_devdata(file); 1189 - struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1190 - 1191 - return videobuf_querybuf(&pipe->outq, buf); 1192 1279 } 1193 1280 1194 1281 /* ··· 1200 1305 struct ia_css_frame *handle = NULL; 1201 1306 u32 length; 1202 1307 u32 pgnr; 1203 - int ret = 0; 1308 + int ret; 1204 1309 1205 - if (!asd) { 1206 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 1207 - __func__, vdev->name); 1208 - return -EINVAL; 1209 - } 1210 - 1211 - rt_mutex_lock(&isp->mutex); 1212 - if (isp->isp_fatal_error) { 1213 - ret = -EIO; 1214 - goto error; 1215 - } 1216 - 1217 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) { 1218 - dev_err(isp->dev, "%s: reject, as ISP at stopping.\n", 1219 - __func__); 1220 - ret = -EIO; 1221 - goto error; 1222 - } 1310 + ret = atomisp_pipe_check(pipe, false); 1311 + if (ret) 1312 + return ret; 1223 1313 1224 1314 if (!buf || buf->index >= VIDEO_MAX_FRAME || 1225 1315 !pipe->capq.bufs[buf->index]) { 1226 1316 dev_err(isp->dev, "Invalid index for qbuf.\n"); 1227 - ret = -EINVAL; 1228 - goto error; 1317 + return -EINVAL; 1229 1318 } 1230 1319 1231 1320 /* ··· 1217 1338 * address and reprograme out page table properly 1218 1339 */ 1219 1340 if (buf->memory == V4L2_MEMORY_USERPTR) { 1341 + if (offset_in_page(buf->m.userptr)) { 1342 + dev_err(isp->dev, "Error userptr is not page aligned.\n"); 1343 + return -EINVAL; 1344 + } 1345 + 1220 1346 vb = pipe->capq.bufs[buf->index]; 1221 1347 vm_mem = vb->priv; 1222 - if (!vm_mem) { 1223 - ret = -EINVAL; 1224 - goto error; 1225 - } 1348 + if (!vm_mem) 1349 + return -EINVAL; 1226 1350 1227 1351 length = vb->bsize; 1228 1352 pgnr = (length + (PAGE_SIZE - 1)) >> PAGE_SHIFT; ··· 1234 1352 goto done; 1235 1353 1236 1354 if (atomisp_get_css_frame_info(asd, 1237 - 
atomisp_subdev_source_pad(vdev), &frame_info)) { 1238 - ret = -EIO; 1239 - goto error; 1240 - } 1355 + atomisp_subdev_source_pad(vdev), &frame_info)) 1356 + return -EIO; 1241 1357 1242 1358 ret = ia_css_frame_map(&handle, &frame_info, 1243 1359 (void __user *)buf->m.userptr, 1244 1360 pgnr); 1245 1361 if (ret) { 1246 1362 dev_err(isp->dev, "Failed to map user buffer\n"); 1247 - goto error; 1363 + return ret; 1248 1364 } 1249 1365 1250 1366 if (vm_mem->vaddr) { ··· 1286 1406 1287 1407 pipe->frame_params[buf->index] = NULL; 1288 1408 1289 - rt_mutex_unlock(&isp->mutex); 1290 - 1409 + mutex_unlock(&isp->mutex); 1291 1410 ret = videobuf_qbuf(&pipe->capq, buf); 1292 - rt_mutex_lock(&isp->mutex); 1411 + mutex_lock(&isp->mutex); 1293 1412 if (ret) 1294 - goto error; 1413 + return ret; 1295 1414 1296 1415 /* TODO: do this better, not best way to queue to css */ 1297 1416 if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) { ··· 1298 1419 atomisp_handle_parameter_and_buffer(pipe); 1299 1420 } else { 1300 1421 atomisp_qbuffers_to_css(asd); 1301 - 1302 - if (!IS_ISP2401) { 1303 - if (!atomisp_is_wdt_running(asd) && atomisp_buffers_queued(asd)) 1304 - atomisp_wdt_start(asd); 1305 - } else { 1306 - if (!atomisp_is_wdt_running(pipe) && 1307 - atomisp_buffers_queued_pipe(pipe)) 1308 - atomisp_wdt_start_pipe(pipe); 1309 - } 1310 1422 } 1311 1423 } 1312 1424 ··· 1319 1449 asd->pending_capture_request++; 1320 1450 dev_dbg(isp->dev, "Add one pending capture request.\n"); 1321 1451 } 1322 - rt_mutex_unlock(&isp->mutex); 1323 1452 1324 1453 dev_dbg(isp->dev, "qbuf buffer %d (%s) for asd%d\n", buf->index, 1325 1454 vdev->name, asd->index); 1326 1455 1327 - return ret; 1328 - 1329 - error: 1330 - rt_mutex_unlock(&isp->mutex); 1331 - return ret; 1332 - } 1333 - 1334 - static int atomisp_qbuf_file(struct file *file, void *fh, 1335 - struct v4l2_buffer *buf) 1336 - { 1337 - struct video_device *vdev = video_devdata(file); 1338 - struct atomisp_device *isp = video_get_drvdata(vdev); 1339 
- struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1340 - int ret; 1341 - 1342 - rt_mutex_lock(&isp->mutex); 1343 - if (isp->isp_fatal_error) { 1344 - ret = -EIO; 1345 - goto error; 1346 - } 1347 - 1348 - if (!buf || buf->index >= VIDEO_MAX_FRAME || 1349 - !pipe->outq.bufs[buf->index]) { 1350 - dev_err(isp->dev, "Invalid index for qbuf.\n"); 1351 - ret = -EINVAL; 1352 - goto error; 1353 - } 1354 - 1355 - if (buf->memory != V4L2_MEMORY_MMAP) { 1356 - dev_err(isp->dev, "Unsupported memory method\n"); 1357 - ret = -EINVAL; 1358 - goto error; 1359 - } 1360 - 1361 - if (buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) { 1362 - dev_err(isp->dev, "Unsupported buffer type\n"); 1363 - ret = -EINVAL; 1364 - goto error; 1365 - } 1366 - rt_mutex_unlock(&isp->mutex); 1367 - 1368 - return videobuf_qbuf(&pipe->outq, buf); 1369 - 1370 - error: 1371 - rt_mutex_unlock(&isp->mutex); 1372 - 1373 - return ret; 1456 + return 0; 1374 1457 } 1375 1458 1376 1459 static int __get_frame_exp_id(struct atomisp_video_pipe *pipe, ··· 1352 1529 struct atomisp_video_pipe *pipe = atomisp_to_video_pipe(vdev); 1353 1530 struct atomisp_sub_device *asd = pipe->asd; 1354 1531 struct atomisp_device *isp = video_get_drvdata(vdev); 1355 - int ret = 0; 1532 + int ret; 1356 1533 1357 - if (!asd) { 1358 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 1359 - __func__, vdev->name); 1360 - return -EINVAL; 1361 - } 1534 + ret = atomisp_pipe_check(pipe, false); 1535 + if (ret) 1536 + return ret; 1362 1537 1363 - rt_mutex_lock(&isp->mutex); 1364 - 1365 - if (isp->isp_fatal_error) { 1366 - rt_mutex_unlock(&isp->mutex); 1367 - return -EIO; 1368 - } 1369 - 1370 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) { 1371 - rt_mutex_unlock(&isp->mutex); 1372 - dev_err(isp->dev, "%s: reject, as ISP at stopping.\n", 1373 - __func__); 1374 - return -EIO; 1375 - } 1376 - 1377 - rt_mutex_unlock(&isp->mutex); 1378 - 1538 + mutex_unlock(&isp->mutex); 1379 1539 ret = videobuf_dqbuf(&pipe->capq, buf, 
file->f_flags & O_NONBLOCK); 1540 + mutex_lock(&isp->mutex); 1380 1541 if (ret) { 1381 1542 if (ret != -EAGAIN) 1382 1543 dev_dbg(isp->dev, "<%s: %d\n", __func__, ret); 1383 1544 return ret; 1384 1545 } 1385 - rt_mutex_lock(&isp->mutex); 1546 + 1386 1547 buf->bytesused = pipe->pix.sizeimage; 1387 1548 buf->reserved = asd->frame_status[buf->index]; 1388 1549 ··· 1380 1573 if (!(buf->flags & V4L2_BUF_FLAG_ERROR)) 1381 1574 buf->reserved |= __get_frame_exp_id(pipe, buf) << 16; 1382 1575 buf->reserved2 = pipe->frame_config_id[buf->index]; 1383 - rt_mutex_unlock(&isp->mutex); 1384 1576 1385 1577 dev_dbg(isp->dev, 1386 1578 "dqbuf buffer %d (%s) for asd%d with exp_id %d, isp_config_id %d\n", ··· 1428 1622 1429 1623 static unsigned int atomisp_sensor_start_stream(struct atomisp_sub_device *asd) 1430 1624 { 1431 - struct atomisp_device *isp = asd->isp; 1432 - 1433 - if (isp->inputs[asd->input_curr].camera_caps-> 1434 - sensor[asd->sensor_curr].stream_num > 1) { 1435 - if (asd->high_speed_mode) 1436 - return 1; 1437 - else 1438 - return 2; 1439 - } 1440 - 1441 1625 if (asd->vfpp->val != ATOMISP_VFPP_ENABLE || 1442 1626 asd->copy_mode) 1443 1627 return 1; ··· 1446 1650 int atomisp_stream_on_master_slave_sensor(struct atomisp_device *isp, 1447 1651 bool isp_timeout) 1448 1652 { 1449 - unsigned int master = -1, slave = -1, delay_slave = 0; 1450 - int i, ret; 1653 + unsigned int master, slave, delay_slave = 0; 1654 + int ret; 1451 1655 1452 - /* 1453 - * ISP only support 2 streams now so ignore multiple master/slave 1454 - * case to reduce the delay between 2 stream_on calls. 
1455 - */ 1456 - for (i = 0; i < isp->num_of_streams; i++) { 1457 - int sensor_index = isp->asd[i].input_curr; 1458 - 1459 - if (isp->inputs[sensor_index].camera_caps-> 1460 - sensor[isp->asd[i].sensor_curr].is_slave) 1461 - slave = sensor_index; 1462 - else 1463 - master = sensor_index; 1464 - } 1465 - 1466 - if (master == -1 || slave == -1) { 1467 - master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR; 1468 - slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR; 1469 - dev_warn(isp->dev, 1470 - "depth mode use default master=%s.slave=%s.\n", 1471 - isp->inputs[master].camera->name, 1472 - isp->inputs[slave].camera->name); 1473 - } 1656 + master = ATOMISP_DEPTH_DEFAULT_MASTER_SENSOR; 1657 + slave = ATOMISP_DEPTH_DEFAULT_SLAVE_SENSOR; 1658 + dev_warn(isp->dev, 1659 + "depth mode use default master=%s.slave=%s.\n", 1660 + isp->inputs[master].camera->name, 1661 + isp->inputs[slave].camera->name); 1474 1662 1475 1663 ret = v4l2_subdev_call(isp->inputs[master].camera, core, 1476 1664 ioctl, ATOMISP_IOC_G_DEPTH_SYNC_COMP, ··· 1488 1708 return 0; 1489 1709 } 1490 1710 1491 - /* FIXME! ISP2400 */ 1492 - static void __wdt_on_master_slave_sensor(struct atomisp_device *isp, 1493 - unsigned int wdt_duration) 1494 - { 1495 - if (atomisp_buffers_queued(&isp->asd[0])) 1496 - atomisp_wdt_refresh(&isp->asd[0], wdt_duration); 1497 - if (atomisp_buffers_queued(&isp->asd[1])) 1498 - atomisp_wdt_refresh(&isp->asd[1], wdt_duration); 1499 - } 1500 - 1501 - /* FIXME! 
ISP2401 */ 1502 - static void __wdt_on_master_slave_sensor_pipe(struct atomisp_video_pipe *pipe, 1503 - unsigned int wdt_duration, 1504 - bool enable) 1505 - { 1506 - static struct atomisp_video_pipe *pipe0; 1507 - 1508 - if (enable) { 1509 - if (atomisp_buffers_queued_pipe(pipe0)) 1510 - atomisp_wdt_refresh_pipe(pipe0, wdt_duration); 1511 - if (atomisp_buffers_queued_pipe(pipe)) 1512 - atomisp_wdt_refresh_pipe(pipe, wdt_duration); 1513 - } else { 1514 - pipe0 = pipe; 1515 - } 1516 - } 1517 - 1518 - static void atomisp_pause_buffer_event(struct atomisp_device *isp) 1519 - { 1520 - struct v4l2_event event = {0}; 1521 - int i; 1522 - 1523 - event.type = V4L2_EVENT_ATOMISP_PAUSE_BUFFER; 1524 - 1525 - for (i = 0; i < isp->num_of_streams; i++) { 1526 - int sensor_index = isp->asd[i].input_curr; 1527 - 1528 - if (isp->inputs[sensor_index].camera_caps-> 1529 - sensor[isp->asd[i].sensor_curr].is_slave) { 1530 - v4l2_event_queue(isp->asd[i].subdev.devnode, &event); 1531 - break; 1532 - } 1533 - } 1534 - } 1535 - 1536 1711 /* Input system HW workaround */ 1537 1712 /* Input system address translation corrupts burst during */ 1538 1713 /* invalidate. 
SW workaround for this is to set burst length */ ··· 1519 1784 struct pci_dev *pdev = to_pci_dev(isp->dev); 1520 1785 enum ia_css_pipe_id css_pipe_id; 1521 1786 unsigned int sensor_start_stream; 1522 - unsigned int wdt_duration = ATOMISP_ISP_TIMEOUT_DURATION; 1523 - int ret = 0; 1524 1787 unsigned long irqflags; 1525 - 1526 - if (!asd) { 1527 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 1528 - __func__, vdev->name); 1529 - return -EINVAL; 1530 - } 1788 + int ret; 1531 1789 1532 1790 dev_dbg(isp->dev, "Start stream on pad %d for asd%d\n", 1533 1791 atomisp_subdev_source_pad(vdev), asd->index); ··· 1530 1802 return -EINVAL; 1531 1803 } 1532 1804 1533 - rt_mutex_lock(&isp->mutex); 1534 - if (isp->isp_fatal_error) { 1535 - ret = -EIO; 1536 - goto out; 1537 - } 1538 - 1539 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_STOPPING) { 1540 - ret = -EBUSY; 1541 - goto out; 1542 - } 1805 + ret = atomisp_pipe_check(pipe, false); 1806 + if (ret) 1807 + return ret; 1543 1808 1544 1809 if (pipe->capq.streaming) 1545 - goto out; 1810 + return 0; 1546 1811 1547 1812 /* Input system HW workaround */ 1548 1813 atomisp_dma_burst_len_cfg(asd); ··· 1550 1829 if (list_empty(&pipe->capq.stream)) { 1551 1830 spin_unlock_irqrestore(&pipe->irq_lock, irqflags); 1552 1831 dev_dbg(isp->dev, "no buffer in the queue\n"); 1553 - ret = -EINVAL; 1554 - goto out; 1832 + return -EINVAL; 1555 1833 } 1556 1834 spin_unlock_irqrestore(&pipe->irq_lock, irqflags); 1557 1835 1558 1836 ret = videobuf_streamon(&pipe->capq); 1559 1837 if (ret) 1560 - goto out; 1838 + return ret; 1561 1839 1562 1840 /* Reset pending capture request count. 
*/ 1563 1841 asd->pending_capture_request = 0; 1564 1842 1565 - if ((atomisp_subdev_streaming_count(asd) > sensor_start_stream) && 1566 - (!isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl)) { 1843 + if (atomisp_subdev_streaming_count(asd) > sensor_start_stream) { 1567 1844 /* trigger still capture */ 1568 1845 if (asd->continuous_mode->val && 1569 1846 atomisp_subdev_source_pad(vdev) ··· 1575 1856 1576 1857 if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) { 1577 1858 flush_work(&asd->delayed_init_work); 1578 - rt_mutex_unlock(&isp->mutex); 1579 - if (wait_for_completion_interruptible( 1580 - &asd->init_done) != 0) 1859 + mutex_unlock(&isp->mutex); 1860 + ret = wait_for_completion_interruptible(&asd->init_done); 1861 + mutex_lock(&isp->mutex); 1862 + if (ret != 0) 1581 1863 return -ERESTARTSYS; 1582 - rt_mutex_lock(&isp->mutex); 1583 1864 } 1584 1865 1585 1866 /* handle per_frame_setting parameter and buffers */ ··· 1601 1882 asd->params.offline_parm.num_captures, 1602 1883 asd->params.offline_parm.skip_frames, 1603 1884 asd->params.offline_parm.offset); 1604 - if (ret) { 1605 - ret = -EINVAL; 1606 - goto out; 1607 - } 1608 - if (asd->depth_mode->val) 1609 - atomisp_pause_buffer_event(isp); 1885 + if (ret) 1886 + return -EINVAL; 1610 1887 } 1611 1888 } 1612 1889 atomisp_qbuffers_to_css(asd); 1613 - goto out; 1890 + return 0; 1614 1891 } 1615 1892 1616 1893 if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) { ··· 1632 1917 1633 1918 ret = atomisp_css_start(asd, css_pipe_id, false); 1634 1919 if (ret) 1635 - goto out; 1920 + return ret; 1636 1921 1922 + spin_lock_irqsave(&isp->lock, irqflags); 1637 1923 asd->streaming = ATOMISP_DEVICE_STREAMING_ENABLED; 1924 + spin_unlock_irqrestore(&isp->lock, irqflags); 1638 1925 atomic_set(&asd->sof_count, -1); 1639 1926 atomic_set(&asd->sequence, -1); 1640 1927 atomic_set(&asd->sequence_temp, -1); 1641 - if (isp->sw_contex.file_input) 1642 - wdt_duration = ATOMISP_ISP_FILE_TIMEOUT_DURATION; 1643 1928 1644 
1929 asd->params.dis_proj_data_valid = false; 1645 1930 asd->latest_preview_exp_id = 0; ··· 1653 1938 1654 1939 /* Only start sensor when the last streaming instance started */ 1655 1940 if (atomisp_subdev_streaming_count(asd) < sensor_start_stream) 1656 - goto out; 1941 + return 0; 1657 1942 1658 1943 start_sensor: 1659 1944 if (isp->flash) { ··· 1662 1947 atomisp_setup_flash(asd); 1663 1948 } 1664 1949 1665 - if (!isp->sw_contex.file_input) { 1666 - atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, 1667 - atomisp_css_valid_sof(isp)); 1668 - atomisp_csi2_configure(asd); 1669 - /* 1670 - * set freq to max when streaming count > 1 which indicate 1671 - * dual camera would run 1672 - */ 1673 - if (atomisp_streaming_count(isp) > 1) { 1674 - if (atomisp_freq_scaling(isp, 1675 - ATOMISP_DFS_MODE_MAX, false) < 0) 1676 - dev_dbg(isp->dev, "DFS max mode failed!\n"); 1677 - } else { 1678 - if (atomisp_freq_scaling(isp, 1679 - ATOMISP_DFS_MODE_AUTO, false) < 0) 1680 - dev_dbg(isp->dev, "DFS auto mode failed!\n"); 1681 - } 1682 - } else { 1683 - if (atomisp_freq_scaling(isp, ATOMISP_DFS_MODE_MAX, false) < 0) 1950 + atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, 1951 + atomisp_css_valid_sof(isp)); 1952 + atomisp_csi2_configure(asd); 1953 + /* 1954 + * set freq to max when streaming count > 1 which indicate 1955 + * dual camera would run 1956 + */ 1957 + if (atomisp_streaming_count(isp) > 1) { 1958 + if (atomisp_freq_scaling(isp, 1959 + ATOMISP_DFS_MODE_MAX, false) < 0) 1684 1960 dev_dbg(isp->dev, "DFS max mode failed!\n"); 1961 + } else { 1962 + if (atomisp_freq_scaling(isp, 1963 + ATOMISP_DFS_MODE_AUTO, false) < 0) 1964 + dev_dbg(isp->dev, "DFS auto mode failed!\n"); 1685 1965 } 1686 1966 1687 1967 if (asd->depth_mode->val && atomisp_streaming_count(isp) == ··· 1684 1974 ret = atomisp_stream_on_master_slave_sensor(isp, false); 1685 1975 if (ret) { 1686 1976 dev_err(isp->dev, "master slave sensor stream on failed!\n"); 1687 - goto out; 1977 + return 
ret; 1688 1978 } 1689 - if (!IS_ISP2401) 1690 - __wdt_on_master_slave_sensor(isp, wdt_duration); 1691 - else 1692 - __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, true); 1693 1979 goto start_delay_wq; 1694 1980 } else if (asd->depth_mode->val && (atomisp_streaming_count(isp) < 1695 1981 ATOMISP_DEPTH_SENSOR_STREAMON_COUNT)) { 1696 - if (IS_ISP2401) 1697 - __wdt_on_master_slave_sensor_pipe(pipe, wdt_duration, false); 1698 1982 goto start_delay_wq; 1699 1983 } 1700 1984 ··· 1703 1999 ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 1704 2000 video, s_stream, 1); 1705 2001 if (ret) { 2002 + spin_lock_irqsave(&isp->lock, irqflags); 1706 2003 asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED; 1707 - ret = -EINVAL; 1708 - goto out; 1709 - } 1710 - 1711 - if (!IS_ISP2401) { 1712 - if (atomisp_buffers_queued(asd)) 1713 - atomisp_wdt_refresh(asd, wdt_duration); 1714 - } else { 1715 - if (atomisp_buffers_queued_pipe(pipe)) 1716 - atomisp_wdt_refresh_pipe(pipe, wdt_duration); 2004 + spin_unlock_irqrestore(&isp->lock, irqflags); 2005 + return -EINVAL; 1717 2006 } 1718 2007 1719 2008 start_delay_wq: 1720 2009 if (asd->continuous_mode->val) { 1721 - struct v4l2_mbus_framefmt *sink; 1722 - 1723 - sink = atomisp_subdev_get_ffmt(&asd->subdev, NULL, 1724 - V4L2_SUBDEV_FORMAT_ACTIVE, 1725 - ATOMISP_SUBDEV_PAD_SINK); 2010 + atomisp_subdev_get_ffmt(&asd->subdev, NULL, 2011 + V4L2_SUBDEV_FORMAT_ACTIVE, 2012 + ATOMISP_SUBDEV_PAD_SINK); 1726 2013 1727 2014 reinit_completion(&asd->init_done); 1728 2015 asd->delayed_init = ATOMISP_DELAYED_INIT_QUEUED; 1729 2016 queue_work(asd->delayed_init_workq, &asd->delayed_init_work); 1730 - atomisp_css_set_cont_prev_start_time(isp, 1731 - ATOMISP_CALC_CSS_PREV_OVERLAP(sink->height)); 1732 2017 } else { 1733 2018 asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED; 1734 2019 } 1735 - out: 1736 - rt_mutex_unlock(&isp->mutex); 1737 - return ret; 2020 + 2021 + return 0; 1738 2022 } 1739 2023 1740 - int __atomisp_streamoff(struct file 
*file, void *fh, enum v4l2_buf_type type) 2024 + int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type) 1741 2025 { 1742 2026 struct video_device *vdev = video_devdata(file); 1743 2027 struct atomisp_device *isp = video_get_drvdata(vdev); ··· 1742 2050 unsigned long flags; 1743 2051 bool first_streamoff = false; 1744 2052 1745 - if (!asd) { 1746 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 1747 - __func__, vdev->name); 1748 - return -EINVAL; 1749 - } 1750 - 1751 2053 dev_dbg(isp->dev, "Stop stream on pad %d for asd%d\n", 1752 2054 atomisp_subdev_source_pad(vdev), asd->index); 1753 2055 1754 2056 lockdep_assert_held(&isp->mutex); 1755 - lockdep_assert_held(&isp->streamoff_mutex); 1756 2057 1757 2058 if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE) { 1758 2059 dev_dbg(isp->dev, "unsupported v4l2 buf type\n"); ··· 1756 2071 * do only videobuf_streamoff for capture & vf pipes in 1757 2072 * case of continuous capture 1758 2073 */ 1759 - if ((asd->continuous_mode->val || 1760 - isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) && 1761 - atomisp_subdev_source_pad(vdev) != 1762 - ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW && 1763 - atomisp_subdev_source_pad(vdev) != 1764 - ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) { 1765 - if (isp->inputs[asd->input_curr].camera_caps->multi_stream_ctrl) { 1766 - v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 1767 - video, s_stream, 0); 1768 - } else if (atomisp_subdev_source_pad(vdev) 1769 - == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) { 2074 + if (asd->continuous_mode->val && 2075 + atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW && 2076 + atomisp_subdev_source_pad(vdev) != ATOMISP_SUBDEV_PAD_SOURCE_VIDEO) { 2077 + if (atomisp_subdev_source_pad(vdev) == ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE) { 1770 2078 /* stop continuous still capture if needed */ 1771 2079 if (asd->params.offline_parm.num_captures == -1) 1772 2080 atomisp_css_offline_capture_configure(asd, ··· 1796 2118 if (!pipe->capq.streaming) 1797 
2119 return 0; 1798 2120 1799 - spin_lock_irqsave(&isp->lock, flags); 1800 - if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) { 1801 - asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING; 2121 + if (asd->streaming == ATOMISP_DEVICE_STREAMING_ENABLED) 1802 2122 first_streamoff = true; 1803 - } 1804 - spin_unlock_irqrestore(&isp->lock, flags); 1805 - 1806 - if (first_streamoff) { 1807 - /* if other streams are running, should not disable watch dog */ 1808 - rt_mutex_unlock(&isp->mutex); 1809 - atomisp_wdt_stop(asd, true); 1810 - 1811 - /* 1812 - * must stop sending pixels into GP_FIFO before stop 1813 - * the pipeline. 1814 - */ 1815 - if (isp->sw_contex.file_input) 1816 - v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 1817 - video, s_stream, 0); 1818 - 1819 - rt_mutex_lock(&isp->mutex); 1820 - } 1821 2123 1822 2124 spin_lock_irqsave(&isp->lock, flags); 1823 2125 if (atomisp_subdev_streaming_count(asd) == 1) 1824 2126 asd->streaming = ATOMISP_DEVICE_STREAMING_DISABLED; 2127 + else 2128 + asd->streaming = ATOMISP_DEVICE_STREAMING_STOPPING; 1825 2129 spin_unlock_irqrestore(&isp->lock, flags); 1826 2130 1827 2131 if (!first_streamoff) { ··· 1814 2154 } 1815 2155 1816 2156 atomisp_clear_css_buffer_counters(asd); 1817 - 1818 - if (!isp->sw_contex.file_input) 1819 - atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, 1820 - false); 2157 + atomisp_css_irq_enable(isp, IA_CSS_IRQ_INFO_CSS_RECEIVER_SOF, false); 1821 2158 1822 2159 if (asd->delayed_init == ATOMISP_DELAYED_INIT_QUEUED) { 1823 2160 cancel_work_sync(&asd->delayed_init_work); 1824 2161 asd->delayed_init = ATOMISP_DELAYED_INIT_NOT_QUEUED; 1825 2162 } 1826 - if (first_streamoff) { 1827 - css_pipe_id = atomisp_get_css_pipe_id(asd); 1828 - atomisp_css_stop(asd, css_pipe_id, false); 1829 - } 2163 + 2164 + css_pipe_id = atomisp_get_css_pipe_id(asd); 2165 + atomisp_css_stop(asd, css_pipe_id, false); 2166 + 1830 2167 /* cancel work queue*/ 1831 2168 if (asd->video_out_capture.users) { 1832 2169 
capture_pipe = &asd->video_out_capture; ··· 1867 2210 != atomisp_sensor_start_stream(asd)) 1868 2211 return 0; 1869 2212 1870 - if (!isp->sw_contex.file_input) 1871 - ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 1872 - video, s_stream, 0); 2213 + ret = v4l2_subdev_call(isp->inputs[asd->input_curr].camera, 2214 + video, s_stream, 0); 1873 2215 1874 2216 if (isp->flash) { 1875 2217 asd->params.num_flash_frames = 0; ··· 1940 2284 return ret; 1941 2285 } 1942 2286 1943 - static int atomisp_streamoff(struct file *file, void *fh, 1944 - enum v4l2_buf_type type) 1945 - { 1946 - struct video_device *vdev = video_devdata(file); 1947 - struct atomisp_device *isp = video_get_drvdata(vdev); 1948 - int rval; 1949 - 1950 - mutex_lock(&isp->streamoff_mutex); 1951 - rt_mutex_lock(&isp->mutex); 1952 - rval = __atomisp_streamoff(file, fh, type); 1953 - rt_mutex_unlock(&isp->mutex); 1954 - mutex_unlock(&isp->streamoff_mutex); 1955 - 1956 - return rval; 1957 - } 1958 - 1959 2287 /* 1960 2288 * To get the current value of a control. 
1961 2289 * applications initialize the id field of a struct v4l2_control and ··· 1953 2313 struct atomisp_device *isp = video_get_drvdata(vdev); 1954 2314 int i, ret = -EINVAL; 1955 2315 1956 - if (!asd) { 1957 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 1958 - __func__, vdev->name); 1959 - return -EINVAL; 1960 - } 1961 - 1962 2316 for (i = 0; i < ctrls_num; i++) { 1963 2317 if (ci_v4l2_controls[i].id == control->id) { 1964 2318 ret = 0; ··· 1962 2328 1963 2329 if (ret) 1964 2330 return ret; 1965 - 1966 - rt_mutex_lock(&isp->mutex); 1967 2331 1968 2332 switch (control->id) { 1969 2333 case V4L2_CID_IRIS_ABSOLUTE: ··· 1984 2352 case V4L2_CID_TEST_PATTERN_COLOR_GR: 1985 2353 case V4L2_CID_TEST_PATTERN_COLOR_GB: 1986 2354 case V4L2_CID_TEST_PATTERN_COLOR_B: 1987 - rt_mutex_unlock(&isp->mutex); 1988 2355 return v4l2_g_ctrl(isp->inputs[asd->input_curr].camera-> 1989 2356 ctrl_handler, control); 1990 2357 case V4L2_CID_COLORFX: ··· 2012 2381 break; 2013 2382 } 2014 2383 2015 - rt_mutex_unlock(&isp->mutex); 2016 2384 return ret; 2017 2385 } 2018 2386 ··· 2028 2398 struct atomisp_device *isp = video_get_drvdata(vdev); 2029 2399 int i, ret = -EINVAL; 2030 2400 2031 - if (!asd) { 2032 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2033 - __func__, vdev->name); 2034 - return -EINVAL; 2035 - } 2036 - 2037 2401 for (i = 0; i < ctrls_num; i++) { 2038 2402 if (ci_v4l2_controls[i].id == control->id) { 2039 2403 ret = 0; ··· 2038 2414 if (ret) 2039 2415 return ret; 2040 2416 2041 - rt_mutex_lock(&isp->mutex); 2042 2417 switch (control->id) { 2043 2418 case V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE: 2044 2419 case V4L2_CID_EXPOSURE: ··· 2058 2435 case V4L2_CID_TEST_PATTERN_COLOR_GR: 2059 2436 case V4L2_CID_TEST_PATTERN_COLOR_GB: 2060 2437 case V4L2_CID_TEST_PATTERN_COLOR_B: 2061 - rt_mutex_unlock(&isp->mutex); 2062 2438 return v4l2_s_ctrl(NULL, 2063 2439 isp->inputs[asd->input_curr].camera-> 2064 2440 ctrl_handler, control); ··· 2089 2467 ret = -EINVAL; 2090 2468 
break; 2091 2469 } 2092 - rt_mutex_unlock(&isp->mutex); 2093 2470 return ret; 2094 2471 } 2095 2472 ··· 2105 2484 struct video_device *vdev = video_devdata(file); 2106 2485 struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 2107 2486 struct atomisp_device *isp = video_get_drvdata(vdev); 2108 - 2109 - if (!asd) { 2110 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2111 - __func__, vdev->name); 2112 - return -EINVAL; 2113 - } 2114 2487 2115 2488 switch (qc->id) { 2116 2489 case V4L2_CID_FOCUS_ABSOLUTE: ··· 2150 2535 struct v4l2_control ctrl; 2151 2536 int i; 2152 2537 int ret = 0; 2153 - 2154 - if (!asd) { 2155 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2156 - __func__, vdev->name); 2157 - return -EINVAL; 2158 - } 2159 2538 2160 2539 if (!IS_ISP2401) 2161 2540 motor = isp->inputs[asd->input_curr].motor; ··· 2201 2592 &ctrl); 2202 2593 break; 2203 2594 case V4L2_CID_ZOOM_ABSOLUTE: 2204 - rt_mutex_lock(&isp->mutex); 2205 2595 ret = atomisp_digital_zoom(asd, 0, &ctrl.value); 2206 - rt_mutex_unlock(&isp->mutex); 2207 2596 break; 2208 2597 case V4L2_CID_G_SKIP_FRAMES: 2209 2598 ret = v4l2_subdev_call( ··· 2260 2653 int i; 2261 2654 int ret = 0; 2262 2655 2263 - if (!asd) { 2264 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2265 - __func__, vdev->name); 2266 - return -EINVAL; 2267 - } 2268 - 2269 2656 if (!IS_ISP2401) 2270 2657 motor = isp->inputs[asd->input_curr].motor; 2271 2658 else ··· 2308 2707 case V4L2_CID_FLASH_STROBE: 2309 2708 case V4L2_CID_FLASH_MODE: 2310 2709 case V4L2_CID_FLASH_STATUS_REGISTER: 2311 - rt_mutex_lock(&isp->mutex); 2312 2710 if (isp->flash) { 2313 2711 ret = 2314 2712 v4l2_s_ctrl(NULL, isp->flash->ctrl_handler, ··· 2322 2722 asd->params.num_flash_frames = 0; 2323 2723 } 2324 2724 } 2325 - rt_mutex_unlock(&isp->mutex); 2326 2725 break; 2327 2726 case V4L2_CID_ZOOM_ABSOLUTE: 2328 - rt_mutex_lock(&isp->mutex); 2329 2727 ret = atomisp_digital_zoom(asd, 1, &ctrl.value); 2330 - 
rt_mutex_unlock(&isp->mutex); 2331 2728 break; 2332 2729 default: 2333 2730 ctr = v4l2_ctrl_find(&asd->ctrl_handler, ctrl.id); ··· 2381 2784 struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 2382 2785 struct atomisp_device *isp = video_get_drvdata(vdev); 2383 2786 2384 - if (!asd) { 2385 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2386 - __func__, vdev->name); 2387 - return -EINVAL; 2388 - } 2389 - 2390 2787 if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) { 2391 2788 dev_err(isp->dev, "unsupported v4l2 buf type\n"); 2392 2789 return -EINVAL; 2393 2790 } 2394 2791 2395 - rt_mutex_lock(&isp->mutex); 2396 2792 parm->parm.capture.capturemode = asd->run_mode->val; 2397 - rt_mutex_unlock(&isp->mutex); 2398 2793 2399 2794 return 0; 2400 2795 } ··· 2401 2812 int rval; 2402 2813 int fps; 2403 2814 2404 - if (!asd) { 2405 - dev_err(isp->dev, "%s(): asd is NULL, device is %s\n", 2406 - __func__, vdev->name); 2407 - return -EINVAL; 2408 - } 2409 - 2410 2815 if (parm->type != V4L2_BUF_TYPE_VIDEO_CAPTURE) { 2411 2816 dev_err(isp->dev, "unsupported v4l2 buf type\n"); 2412 2817 return -EINVAL; 2413 2818 } 2414 - 2415 - rt_mutex_lock(&isp->mutex); 2416 2819 2417 2820 asd->high_speed_mode = false; 2418 2821 switch (parm->parm.capture.capturemode) { ··· 2424 2843 asd->high_speed_mode = true; 2425 2844 } 2426 2845 2427 - goto out; 2846 + return rval == -ENOIOCTLCMD ? 0 : rval; 2428 2847 } 2429 2848 case CI_MODE_VIDEO: 2430 2849 mode = ATOMISP_RUN_MODE_VIDEO; ··· 2439 2858 mode = ATOMISP_RUN_MODE_PREVIEW; 2440 2859 break; 2441 2860 default: 2442 - rval = -EINVAL; 2443 - goto out; 2861 + return -EINVAL; 2444 2862 } 2445 2863 2446 2864 rval = v4l2_ctrl_s_ctrl(asd->run_mode, mode); 2447 2865 2448 - out: 2449 - rt_mutex_unlock(&isp->mutex); 2450 - 2451 2866 return rval == -ENOIOCTLCMD ? 
0 : rval; 2452 - } 2453 - 2454 - static int atomisp_s_parm_file(struct file *file, void *fh, 2455 - struct v4l2_streamparm *parm) 2456 - { 2457 - struct video_device *vdev = video_devdata(file); 2458 - struct atomisp_device *isp = video_get_drvdata(vdev); 2459 - 2460 - if (parm->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) { 2461 - dev_err(isp->dev, "unsupported v4l2 buf type for output\n"); 2462 - return -EINVAL; 2463 - } 2464 - 2465 - rt_mutex_lock(&isp->mutex); 2466 - isp->sw_contex.file_input = true; 2467 - rt_mutex_unlock(&isp->mutex); 2468 - 2469 - return 0; 2470 2867 } 2471 2868 2472 2869 static long atomisp_vidioc_default(struct file *file, void *fh, ··· 2452 2893 { 2453 2894 struct video_device *vdev = video_devdata(file); 2454 2895 struct atomisp_device *isp = video_get_drvdata(vdev); 2455 - struct atomisp_sub_device *asd; 2896 + struct atomisp_sub_device *asd = atomisp_to_video_pipe(vdev)->asd; 2456 2897 struct v4l2_subdev *motor; 2457 - bool acc_node; 2458 2898 int err; 2459 - 2460 - acc_node = !strcmp(vdev->name, "ATOMISP ISP ACC"); 2461 - if (acc_node) 2462 - asd = atomisp_to_acc_pipe(vdev)->asd; 2463 - else 2464 - asd = atomisp_to_video_pipe(vdev)->asd; 2465 2899 2466 2900 if (!IS_ISP2401) 2467 2901 motor = isp->inputs[asd->input_curr].motor; 2468 2902 else 2469 2903 motor = isp->motor; 2470 2904 2471 - switch (cmd) { 2472 - case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA: 2473 - case ATOMISP_IOC_S_EXPOSURE: 2474 - case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP: 2475 - case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA: 2476 - case ATOMISP_IOC_EXT_ISP_CTRL: 2477 - case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO: 2478 - case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE: 2479 - case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE: 2480 - case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT: 2481 - case ATOMISP_IOC_S_SENSOR_EE_CONFIG: 2482 - case ATOMISP_IOC_G_UPDATE_EXPOSURE: 2483 - /* we do not need take isp->mutex for these IOCTLs */ 2484 - break; 2485 - default: 2486 - rt_mutex_lock(&isp->mutex); 2487 - 
break; 2488 - } 2489 2905 switch (cmd) { 2490 2906 case ATOMISP_IOC_S_SENSOR_RUNMODE: 2491 2907 if (IS_ISP2401) ··· 2707 3173 break; 2708 3174 } 2709 3175 2710 - switch (cmd) { 2711 - case ATOMISP_IOC_G_MOTOR_PRIV_INT_DATA: 2712 - case ATOMISP_IOC_S_EXPOSURE: 2713 - case ATOMISP_IOC_G_SENSOR_CALIBRATION_GROUP: 2714 - case ATOMISP_IOC_G_SENSOR_PRIV_INT_DATA: 2715 - case ATOMISP_IOC_EXT_ISP_CTRL: 2716 - case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_INFO: 2717 - case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_MODE: 2718 - case ATOMISP_IOC_G_SENSOR_AE_BRACKETING_MODE: 2719 - case ATOMISP_IOC_S_SENSOR_AE_BRACKETING_LUT: 2720 - case ATOMISP_IOC_G_UPDATE_EXPOSURE: 2721 - break; 2722 - default: 2723 - rt_mutex_unlock(&isp->mutex); 2724 - break; 2725 - } 2726 3176 return err; 2727 3177 } 2728 3178 ··· 2725 3207 .vidioc_enum_fmt_vid_cap = atomisp_enum_fmt_cap, 2726 3208 .vidioc_try_fmt_vid_cap = atomisp_try_fmt_cap, 2727 3209 .vidioc_g_fmt_vid_cap = atomisp_g_fmt_cap, 2728 - .vidioc_s_fmt_vid_cap = atomisp_s_fmt_cap, 3210 + .vidioc_s_fmt_vid_cap = atomisp_set_fmt, 2729 3211 .vidioc_reqbufs = atomisp_reqbufs, 2730 3212 .vidioc_querybuf = atomisp_querybuf, 2731 3213 .vidioc_qbuf = atomisp_qbuf, ··· 2735 3217 .vidioc_default = atomisp_vidioc_default, 2736 3218 .vidioc_s_parm = atomisp_s_parm, 2737 3219 .vidioc_g_parm = atomisp_g_parm, 2738 - }; 2739 - 2740 - const struct v4l2_ioctl_ops atomisp_file_ioctl_ops = { 2741 - .vidioc_querycap = atomisp_querycap, 2742 - .vidioc_g_fmt_vid_out = atomisp_g_fmt_file, 2743 - .vidioc_s_fmt_vid_out = atomisp_s_fmt_file, 2744 - .vidioc_s_parm = atomisp_s_parm_file, 2745 - .vidioc_reqbufs = atomisp_reqbufs_file, 2746 - .vidioc_querybuf = atomisp_querybuf_file, 2747 - .vidioc_qbuf = atomisp_qbuf_file, 2748 3220 };
+4 -10
drivers/staging/media/atomisp/pci/atomisp_ioctl.h
··· 34 34 const struct 35 35 atomisp_format_bridge *atomisp_get_format_bridge_from_mbus(u32 mbus_code); 36 36 37 + int atomisp_pipe_check(struct atomisp_video_pipe *pipe, bool streaming_ok); 38 + 37 39 int atomisp_alloc_css_stat_bufs(struct atomisp_sub_device *asd, 38 40 uint16_t stream_id); 39 41 40 - int __atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type); 41 - int __atomisp_reqbufs(struct file *file, void *fh, 42 - struct v4l2_requestbuffers *req); 43 - 44 - int atomisp_reqbufs(struct file *file, void *fh, 45 - struct v4l2_requestbuffers *req); 42 + int atomisp_streamoff(struct file *file, void *fh, enum v4l2_buf_type type); 43 + int atomisp_reqbufs(struct file *file, void *fh, struct v4l2_requestbuffers *req); 46 44 47 45 enum ia_css_pipe_id atomisp_get_css_pipe_id(struct atomisp_sub_device 48 46 *asd); 49 47 50 48 void atomisp_videobuf_free_buf(struct videobuf_buffer *vb); 51 49 52 - extern const struct v4l2_file_operations atomisp_file_fops; 53 - 54 50 extern const struct v4l2_ioctl_ops atomisp_ioctl_ops; 55 - 56 - extern const struct v4l2_ioctl_ops atomisp_file_ioctl_ops; 57 51 58 52 unsigned int atomisp_streaming_count(struct atomisp_device *isp); 59 53
+36 -97
drivers/staging/media/atomisp/pci/atomisp_subdev.c
··· 373 373 struct atomisp_sub_device *isp_sd = v4l2_get_subdevdata(sd); 374 374 struct atomisp_device *isp = isp_sd->isp; 375 375 struct v4l2_mbus_framefmt *ffmt[ATOMISP_SUBDEV_PADS_NUM]; 376 - u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode); 377 376 struct v4l2_rect *crop[ATOMISP_SUBDEV_PADS_NUM], 378 377 *comp[ATOMISP_SUBDEV_PADS_NUM]; 379 - enum atomisp_input_stream_id stream_id; 380 378 unsigned int i; 381 379 unsigned int padding_w = pad_w; 382 380 unsigned int padding_h = pad_h; 383 - 384 - stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad); 385 381 386 382 isp_get_fmt_rect(sd, sd_state, which, ffmt, crop, comp); 387 383 ··· 474 478 dvs_w = dvs_h = 0; 475 479 } 476 480 atomisp_css_video_set_dis_envelope(isp_sd, dvs_w, dvs_h); 477 - atomisp_css_input_set_effective_resolution(isp_sd, stream_id, 478 - crop[pad]->width, crop[pad]->height); 479 - 481 + atomisp_css_input_set_effective_resolution(isp_sd, 482 + ATOMISP_INPUT_STREAM_GENERAL, 483 + crop[pad]->width, 484 + crop[pad]->height); 480 485 break; 481 486 } 482 487 case ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE: ··· 520 523 if (r->width * crop[ATOMISP_SUBDEV_PAD_SINK]->height < 521 524 crop[ATOMISP_SUBDEV_PAD_SINK]->width * r->height) 522 525 atomisp_css_input_set_effective_resolution(isp_sd, 523 - stream_id, 526 + ATOMISP_INPUT_STREAM_GENERAL, 524 527 rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]-> 525 528 height * r->width / r->height, 526 529 ATOM_ISP_STEP_WIDTH), 527 530 crop[ATOMISP_SUBDEV_PAD_SINK]->height); 528 531 else 529 532 atomisp_css_input_set_effective_resolution(isp_sd, 530 - stream_id, 533 + ATOMISP_INPUT_STREAM_GENERAL, 531 534 crop[ATOMISP_SUBDEV_PAD_SINK]->width, 532 535 rounddown(crop[ATOMISP_SUBDEV_PAD_SINK]-> 533 536 width * r->height / r->width, ··· 617 620 struct atomisp_device *isp = isp_sd->isp; 618 621 struct v4l2_mbus_framefmt *__ffmt = 619 622 atomisp_subdev_get_ffmt(sd, sd_state, which, pad); 620 - u16 vdev_pad = atomisp_subdev_source_pad(sd->devnode); 621 - enum 
atomisp_input_stream_id stream_id; 622 623 623 624 dev_dbg(isp->dev, "ffmt: pad %s w %d h %d code 0x%8.8x which %s\n", 624 625 atomisp_pad_str(pad), ffmt->width, ffmt->height, ffmt->code, 625 626 which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY" 626 627 : "V4L2_SUBDEV_FORMAT_ACTIVE"); 627 - 628 - stream_id = atomisp_source_pad_to_stream_id(isp_sd, vdev_pad); 629 628 630 629 switch (pad) { 631 630 case ATOMISP_SUBDEV_PAD_SINK: { ··· 642 649 643 650 if (which == V4L2_SUBDEV_FORMAT_ACTIVE) { 644 651 atomisp_css_input_set_resolution(isp_sd, 645 - stream_id, ffmt); 652 + ATOMISP_INPUT_STREAM_GENERAL, ffmt); 646 653 atomisp_css_input_set_binning_factor(isp_sd, 647 - stream_id, 654 + ATOMISP_INPUT_STREAM_GENERAL, 648 655 atomisp_get_sensor_bin_factor(isp_sd)); 649 - atomisp_css_input_set_bayer_order(isp_sd, stream_id, 656 + atomisp_css_input_set_bayer_order(isp_sd, ATOMISP_INPUT_STREAM_GENERAL, 650 657 fc->bayer_order); 651 - atomisp_css_input_set_format(isp_sd, stream_id, 658 + atomisp_css_input_set_format(isp_sd, ATOMISP_INPUT_STREAM_GENERAL, 652 659 fc->atomisp_in_fmt); 653 - atomisp_css_set_default_isys_config(isp_sd, stream_id, 660 + atomisp_css_set_default_isys_config(isp_sd, ATOMISP_INPUT_STREAM_GENERAL, 654 661 ffmt); 655 662 } 656 663 ··· 867 874 { 868 875 struct atomisp_sub_device *asd = container_of( 869 876 ctrl->handler, struct atomisp_sub_device, ctrl_handler); 877 + unsigned int streaming; 878 + unsigned long flags; 870 879 871 880 switch (ctrl->id) { 872 881 case V4L2_CID_RUN_MODE: 873 882 return __atomisp_update_run_mode(asd); 874 883 case V4L2_CID_DEPTH_MODE: 875 - if (asd->streaming != ATOMISP_DEVICE_STREAMING_DISABLED) { 884 + /* Use spinlock instead of mutex to avoid possible locking issues */ 885 + spin_lock_irqsave(&asd->isp->lock, flags); 886 + streaming = asd->streaming; 887 + spin_unlock_irqrestore(&asd->isp->lock, flags); 888 + if (streaming != ATOMISP_DEVICE_STREAMING_DISABLED) { 876 889 dev_err(asd->isp->dev, 877 890 "ISP is 
streaming, it is not supported to change the depth mode\n"); 878 891 return -EINVAL; ··· 1065 1066 pipe->isp = asd->isp; 1066 1067 spin_lock_init(&pipe->irq_lock); 1067 1068 INIT_LIST_HEAD(&pipe->activeq); 1068 - INIT_LIST_HEAD(&pipe->activeq_out); 1069 1069 INIT_LIST_HEAD(&pipe->buffers_waiting_for_param); 1070 1070 INIT_LIST_HEAD(&pipe->per_frame_params); 1071 1071 memset(pipe->frame_request_config_id, ··· 1072 1074 memset(pipe->frame_params, 1073 1075 0, VIDEO_MAX_FRAME * 1074 1076 sizeof(struct atomisp_css_params_with_list *)); 1075 - } 1076 - 1077 - static void atomisp_init_acc_pipe(struct atomisp_sub_device *asd, 1078 - struct atomisp_acc_pipe *pipe) 1079 - { 1080 - pipe->asd = asd; 1081 - pipe->isp = asd->isp; 1082 1077 } 1083 1078 1084 1079 /* ··· 1117 1126 if (ret < 0) 1118 1127 return ret; 1119 1128 1120 - atomisp_init_subdev_pipe(asd, &asd->video_in, 1121 - V4L2_BUF_TYPE_VIDEO_OUTPUT); 1122 - 1123 1129 atomisp_init_subdev_pipe(asd, &asd->video_out_preview, 1124 1130 V4L2_BUF_TYPE_VIDEO_CAPTURE); 1125 1131 ··· 1128 1140 1129 1141 atomisp_init_subdev_pipe(asd, &asd->video_out_video_capture, 1130 1142 V4L2_BUF_TYPE_VIDEO_CAPTURE); 1131 - 1132 - atomisp_init_acc_pipe(asd, &asd->video_acc); 1133 - 1134 - ret = atomisp_video_init(&asd->video_in, "MEMORY", 1135 - ATOMISP_RUN_MODE_SDV); 1136 - if (ret < 0) 1137 - return ret; 1138 1143 1139 1144 ret = atomisp_video_init(&asd->video_out_capture, "CAPTURE", 1140 1145 ATOMISP_RUN_MODE_STILL_CAPTURE); ··· 1148 1167 ATOMISP_RUN_MODE_VIDEO); 1149 1168 if (ret < 0) 1150 1169 return ret; 1151 - 1152 - atomisp_acc_init(&asd->video_acc, "ACC"); 1153 1170 1154 1171 ret = v4l2_ctrl_handler_init(&asd->ctrl_handler, 1); 1155 1172 if (ret) ··· 1205 1226 return ret; 1206 1227 } 1207 1228 } 1208 - for (i = 0; i < isp->input_cnt - 2; i++) { 1229 + for (i = 0; i < isp->input_cnt; i++) { 1230 + /* Don't create links for the test-pattern-generator */ 1231 + if (isp->inputs[i].type == TEST_PATTERN) 1232 + continue; 1233 + 1209 1234 
ret = media_create_pad_link(&isp->inputs[i].camera->entity, 0, 1210 1235 &isp->csi2_port[isp->inputs[i]. 1211 1236 port].subdev.entity, ··· 1245 1262 entity, 0, 0); 1246 1263 if (ret < 0) 1247 1264 return ret; 1248 - /* 1249 - * file input only supported on subdev0 1250 - * so do not create pad link for subdevs other then subdev0 1251 - */ 1252 - if (asd->index) 1253 - return 0; 1254 - ret = media_create_pad_link(&asd->video_in.vdev.entity, 1255 - 0, &asd->subdev.entity, 1256 - ATOMISP_SUBDEV_PAD_SINK, 0); 1257 - if (ret < 0) 1258 - return ret; 1259 1265 } 1260 1266 return 0; 1261 1267 } ··· 1274 1302 { 1275 1303 atomisp_subdev_cleanup_entities(asd); 1276 1304 v4l2_device_unregister_subdev(&asd->subdev); 1277 - atomisp_video_unregister(&asd->video_in); 1278 1305 atomisp_video_unregister(&asd->video_out_preview); 1279 1306 atomisp_video_unregister(&asd->video_out_vf); 1280 1307 atomisp_video_unregister(&asd->video_out_capture); 1281 1308 atomisp_video_unregister(&asd->video_out_video_capture); 1282 - atomisp_acc_unregister(&asd->video_acc); 1283 1309 } 1284 1310 1285 - int atomisp_subdev_register_entities(struct atomisp_sub_device *asd, 1286 - struct v4l2_device *vdev) 1311 + int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd, 1312 + struct v4l2_device *vdev) 1313 + { 1314 + return v4l2_device_register_subdev(vdev, &asd->subdev); 1315 + } 1316 + 1317 + int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd, 1318 + struct v4l2_device *vdev) 1287 1319 { 1288 1320 int ret; 1289 - u32 device_caps; 1290 1321 1291 1322 /* 1292 1323 * FIXME: check if all device caps are properly initialized. 1293 - * Should any of those use V4L2_CAP_META_OUTPUT? Probably yes. 1324 + * Should any of those use V4L2_CAP_META_CAPTURE? Probably yes. 1294 1325 */ 1295 1326 1296 - device_caps = V4L2_CAP_VIDEO_CAPTURE | 1297 - V4L2_CAP_STREAMING; 1298 - 1299 - /* Register the subdev and video node. 
*/ 1300 - 1301 - ret = v4l2_device_register_subdev(vdev, &asd->subdev); 1302 - if (ret < 0) 1303 - goto error; 1304 - 1305 1327 asd->video_out_preview.vdev.v4l2_dev = vdev; 1306 - asd->video_out_preview.vdev.device_caps = device_caps | 1307 - V4L2_CAP_VIDEO_OUTPUT; 1328 + asd->video_out_preview.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1308 1329 ret = video_register_device(&asd->video_out_preview.vdev, 1309 1330 VFL_TYPE_VIDEO, -1); 1310 1331 if (ret < 0) 1311 1332 goto error; 1312 1333 1313 1334 asd->video_out_capture.vdev.v4l2_dev = vdev; 1314 - asd->video_out_capture.vdev.device_caps = device_caps | 1315 - V4L2_CAP_VIDEO_OUTPUT; 1335 + asd->video_out_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1316 1336 ret = video_register_device(&asd->video_out_capture.vdev, 1317 1337 VFL_TYPE_VIDEO, -1); 1318 1338 if (ret < 0) 1319 1339 goto error; 1320 1340 1321 1341 asd->video_out_vf.vdev.v4l2_dev = vdev; 1322 - asd->video_out_vf.vdev.device_caps = device_caps | 1323 - V4L2_CAP_VIDEO_OUTPUT; 1342 + asd->video_out_vf.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1324 1343 ret = video_register_device(&asd->video_out_vf.vdev, 1325 1344 VFL_TYPE_VIDEO, -1); 1326 1345 if (ret < 0) 1327 1346 goto error; 1328 1347 1329 1348 asd->video_out_video_capture.vdev.v4l2_dev = vdev; 1330 - asd->video_out_video_capture.vdev.device_caps = device_caps | 1331 - V4L2_CAP_VIDEO_OUTPUT; 1349 + asd->video_out_video_capture.vdev.device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1332 1350 ret = video_register_device(&asd->video_out_video_capture.vdev, 1333 - VFL_TYPE_VIDEO, -1); 1334 - if (ret < 0) 1335 - goto error; 1336 - asd->video_acc.vdev.v4l2_dev = vdev; 1337 - asd->video_acc.vdev.device_caps = device_caps | 1338 - V4L2_CAP_VIDEO_OUTPUT; 1339 - ret = video_register_device(&asd->video_acc.vdev, 1340 - VFL_TYPE_VIDEO, -1); 1341 - if (ret < 0) 1342 - goto error; 1343 - 1344 - /* 1345 - * file input only supported on 
subdev0 1346 - * so do not create video node for subdevs other then subdev0 1347 - */ 1348 - if (asd->index) 1349 - return 0; 1350 - 1351 - asd->video_in.vdev.v4l2_dev = vdev; 1352 - asd->video_in.vdev.device_caps = device_caps | 1353 - V4L2_CAP_VIDEO_CAPTURE; 1354 - ret = video_register_device(&asd->video_in.vdev, 1355 1351 VFL_TYPE_VIDEO, -1); 1356 1352 if (ret < 0) 1357 1353 goto error; ··· 1355 1415 return -ENOMEM; 1356 1416 for (i = 0; i < isp->num_of_streams; i++) { 1357 1417 asd = &isp->asd[i]; 1358 - spin_lock_init(&asd->lock); 1359 1418 asd->isp = isp; 1360 1419 isp_subdev_init_params(asd); 1361 1420 asd->index = i;
+13 -58
drivers/staging/media/atomisp/pci/atomisp_subdev.h
··· 70 70 enum v4l2_buf_type type; 71 71 struct media_pad pad; 72 72 struct videobuf_queue capq; 73 - struct videobuf_queue outq; 74 73 struct list_head activeq; 75 - struct list_head activeq_out; 76 74 /* 77 75 * the buffers waiting for per-frame parameters, this is only valid 78 76 * in per-frame setting mode. ··· 84 86 85 87 unsigned int buffers_in_css; 86 88 87 - /* irq_lock is used to protect video buffer state change operations and 88 - * also to make activeq, activeq_out, capq and outq list 89 - * operations atomic. */ 89 + /* 90 + * irq_lock is used to protect video buffer state change operations and 91 + * also to make activeq and capq operations atomic. 92 + */ 90 93 spinlock_t irq_lock; 91 94 unsigned int users; 92 95 ··· 108 109 */ 109 110 unsigned int frame_request_config_id[VIDEO_MAX_FRAME]; 110 111 struct atomisp_css_params_with_list *frame_params[VIDEO_MAX_FRAME]; 111 - 112 - /* 113 - * move wdt from asd struct to create wdt for each pipe 114 - */ 115 - /* ISP2401 */ 116 - struct timer_list wdt; 117 - unsigned int wdt_duration; /* in jiffies */ 118 - unsigned long wdt_expires; 119 - atomic_t wdt_count; 120 - }; 121 - 122 - struct atomisp_acc_pipe { 123 - struct video_device vdev; 124 - unsigned int users; 125 - bool running; 126 - struct atomisp_sub_device *asd; 127 - struct atomisp_device *isp; 128 112 }; 129 113 130 114 struct atomisp_pad_format { ··· 249 267 struct list_head list; 250 268 }; 251 269 252 - struct atomisp_acc_fw { 253 - struct ia_css_fw_info *fw; 254 - unsigned int handle; 255 - unsigned int flags; 256 - unsigned int type; 257 - struct { 258 - size_t length; 259 - unsigned long css_ptr; 260 - } args[ATOMISP_ACC_NR_MEMORY]; 261 - struct list_head list; 262 - }; 263 - 264 - struct atomisp_map { 265 - ia_css_ptr ptr; 266 - size_t length; 267 - struct list_head list; 268 - /* FIXME: should keep book which maps are currently used 269 - * by binaries and not allow releasing those 270 - * which are in use. Implement by reference counting. 
271 - */ 272 - }; 273 - 274 270 struct atomisp_sub_device { 275 271 struct v4l2_subdev subdev; 276 272 struct media_pad pads[ATOMISP_SUBDEV_PADS_NUM]; ··· 257 297 258 298 enum atomisp_subdev_input_entity input; 259 299 unsigned int output; 260 - struct atomisp_video_pipe video_in; 261 300 struct atomisp_video_pipe video_out_capture; /* capture output */ 262 301 struct atomisp_video_pipe video_out_vf; /* viewfinder output */ 263 302 struct atomisp_video_pipe video_out_preview; /* preview output */ 264 - struct atomisp_acc_pipe video_acc; 265 303 /* video pipe main output */ 266 304 struct atomisp_video_pipe video_out_video_capture; 267 305 /* struct isp_subdev_params params; */ 268 - spinlock_t lock; 269 306 struct atomisp_device *isp; 270 307 struct v4l2_ctrl_handler ctrl_handler; 271 308 struct v4l2_ctrl *fmt_auto; ··· 313 356 314 357 /* This field specifies which camera (v4l2 input) is selected. */ 315 358 int input_curr; 316 - /* This field specifies which sensor is being selected when there 317 - are multiple sensors connected to the same MIPI port. */ 318 - int sensor_curr; 319 359 320 360 atomic_t sof_count; 321 361 atomic_t sequence; /* Sequence value that is assigned to buffer. */ 322 362 atomic_t sequence_temp; 323 363 324 - unsigned int streaming; /* Hold both mutex and lock to change this */ 364 + /* 365 + * Writers of streaming must hold both isp->mutex and isp->lock. 366 + * Readers of streaming need to hold only one of the two locks. 
367 + */ 368 + unsigned int streaming; 325 369 bool stream_prepared; /* whether css stream is created */ 326 370 327 371 /* subdev index: will be used to show which subdev is holding the ··· 347 389 1]; /* Record each Raw Buffer lock status */ 348 390 int raw_buffer_locked_count; 349 391 spinlock_t raw_buffer_bitmap_lock; 350 - 351 - /* ISP 2400 */ 352 - struct timer_list wdt; 353 - unsigned int wdt_duration; /* in jiffies */ 354 - unsigned long wdt_expires; 355 392 356 393 /* ISP2401 */ 357 394 bool re_trigger_capture; ··· 403 450 void atomisp_subdev_cleanup_pending_events(struct atomisp_sub_device *asd); 404 451 405 452 void atomisp_subdev_unregister_entities(struct atomisp_sub_device *asd); 406 - int atomisp_subdev_register_entities(struct atomisp_sub_device *asd, 407 - struct v4l2_device *vdev); 453 + int atomisp_subdev_register_subdev(struct atomisp_sub_device *asd, 454 + struct v4l2_device *vdev); 455 + int atomisp_subdev_register_video_nodes(struct atomisp_sub_device *asd, 456 + struct v4l2_device *vdev); 408 457 int atomisp_subdev_init(struct atomisp_device *isp); 409 458 void atomisp_subdev_cleanup(struct atomisp_device *isp); 410 459 int atomisp_create_pads_links(struct atomisp_device *isp);
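The new comment on `streaming` documents a classic dual-lock invariant: writers must hold both `isp->mutex` and `isp->lock`, so a reader holding either one of the two cannot race with an update. A minimal userspace sketch of that pattern, using pthread mutexes as stand-ins for the driver's mutex/spinlock pair (names and types here are illustrative, not the driver's):

```c
#include <pthread.h>

/* Stand-ins for isp->mutex and isp->lock (illustrative only). */
static pthread_mutex_t big_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t small_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int streaming_state;

/* Writer: takes BOTH locks, so a reader holding either one
 * is guaranteed to see a consistent value. */
static void set_streaming(unsigned int val)
{
	pthread_mutex_lock(&big_mutex);
	pthread_mutex_lock(&small_lock);
	streaming_state = val;
	pthread_mutex_unlock(&small_lock);
	pthread_mutex_unlock(&big_mutex);
}

/* Reader: either lock alone is sufficient; here the cheaper one. */
static unsigned int get_streaming(void)
{
	unsigned int val;

	pthread_mutex_lock(&small_lock);
	val = streaming_state;
	pthread_mutex_unlock(&small_lock);
	return val;
}
```

This is why the `V4L2_CID_DEPTH_MODE` handler earlier in the series can snapshot `streaming` under the spinlock alone instead of taking the heavier mutex.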
+32 -132
drivers/staging/media/atomisp/pci/atomisp_v4l2.c
··· 34 34 #include "atomisp_cmd.h" 35 35 #include "atomisp_common.h" 36 36 #include "atomisp_fops.h" 37 - #include "atomisp_file.h" 38 37 #include "atomisp_ioctl.h" 39 38 #include "atomisp_internal.h" 40 39 #include "atomisp-regs.h" ··· 441 442 video->pad.flags = MEDIA_PAD_FL_SINK; 442 443 video->vdev.fops = &atomisp_fops; 443 444 video->vdev.ioctl_ops = &atomisp_ioctl_ops; 444 - break; 445 - case V4L2_BUF_TYPE_VIDEO_OUTPUT: 446 - direction = "input"; 447 - video->pad.flags = MEDIA_PAD_FL_SOURCE; 448 - video->vdev.fops = &atomisp_file_fops; 449 - video->vdev.ioctl_ops = &atomisp_file_ioctl_ops; 445 + video->vdev.lock = &video->isp->mutex; 450 446 break; 451 447 default: 452 448 return -EINVAL; ··· 461 467 return 0; 462 468 } 463 469 464 - void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name) 465 - { 466 - video->vdev.fops = &atomisp_fops; 467 - video->vdev.ioctl_ops = &atomisp_ioctl_ops; 468 - 469 - /* Initialize the video device. */ 470 - snprintf(video->vdev.name, sizeof(video->vdev.name), 471 - "ATOMISP ISP %s", name); 472 - video->vdev.release = video_device_release_empty; 473 - video_set_drvdata(&video->vdev, video->isp); 474 - } 475 - 476 470 void atomisp_video_unregister(struct atomisp_video_pipe *video) 477 471 { 478 472 if (video_is_registered(&video->vdev)) { 479 473 media_entity_cleanup(&video->vdev.entity); 480 474 video_unregister_device(&video->vdev); 481 475 } 482 - } 483 - 484 - void atomisp_acc_unregister(struct atomisp_acc_pipe *video) 485 - { 486 - if (video_is_registered(&video->vdev)) 487 - video_unregister_device(&video->vdev); 488 476 } 489 477 490 478 static int atomisp_save_iunit_reg(struct atomisp_device *isp) ··· 1007 1031 &subdevs->v4l2_subdev.board_info; 1008 1032 struct i2c_adapter *adapter = 1009 1033 i2c_get_adapter(subdevs->v4l2_subdev.i2c_adapter_id); 1010 - int sensor_num, i; 1011 1034 1012 1035 dev_info(isp->dev, "Probing Subdev %s\n", board_info->type); 1013 1036 ··· 1065 1090 * pixel_format. 
1066 1091 */ 1067 1092 isp->inputs[isp->input_cnt].frame_size.pixel_format = 0; 1068 - isp->inputs[isp->input_cnt].camera_caps = 1069 - atomisp_get_default_camera_caps(); 1070 - sensor_num = isp->inputs[isp->input_cnt] 1071 - .camera_caps->sensor_num; 1072 1093 isp->input_cnt++; 1073 - for (i = 1; i < sensor_num; i++) { 1074 - if (isp->input_cnt >= ATOM_ISP_MAX_INPUTS) { 1075 - dev_warn(isp->dev, 1076 - "atomisp inputs out of range\n"); 1077 - break; 1078 - } 1079 - isp->inputs[isp->input_cnt] = 1080 - isp->inputs[isp->input_cnt - 1]; 1081 - isp->inputs[isp->input_cnt].sensor_index = i; 1082 - isp->input_cnt++; 1083 - } 1084 1094 break; 1085 1095 case CAMERA_MOTOR: 1086 1096 if (isp->motor) { ··· 1118 1158 for (i = 0; i < isp->num_of_streams; i++) 1119 1159 atomisp_subdev_unregister_entities(&isp->asd[i]); 1120 1160 atomisp_tpg_unregister_entities(&isp->tpg); 1121 - atomisp_file_input_unregister_entities(&isp->file_dev); 1122 1161 for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++) 1123 1162 atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]); 1124 1163 ··· 1169 1210 goto csi_and_subdev_probe_failed; 1170 1211 } 1171 1212 1172 - ret = 1173 - atomisp_file_input_register_entities(&isp->file_dev, &isp->v4l2_dev); 1174 - if (ret < 0) { 1175 - dev_err(isp->dev, "atomisp_file_input_register_entities\n"); 1176 - goto file_input_register_failed; 1177 - } 1178 - 1179 1213 ret = atomisp_tpg_register_entities(&isp->tpg, &isp->v4l2_dev); 1180 1214 if (ret < 0) { 1181 1215 dev_err(isp->dev, "atomisp_tpg_register_entities\n"); ··· 1178 1226 for (i = 0; i < isp->num_of_streams; i++) { 1179 1227 struct atomisp_sub_device *asd = &isp->asd[i]; 1180 1228 1181 - ret = atomisp_subdev_register_entities(asd, &isp->v4l2_dev); 1229 + ret = atomisp_subdev_register_subdev(asd, &isp->v4l2_dev); 1182 1230 if (ret < 0) { 1183 - dev_err(isp->dev, 1184 - "atomisp_subdev_register_entities fail\n"); 1231 + dev_err(isp->dev, "atomisp_subdev_register_subdev fail\n"); 1185 1232 for (; i > 0; i--) 1186 
1233 atomisp_subdev_unregister_entities( 1187 1234 &isp->asd[i - 1]); ··· 1218 1267 } 1219 1268 } 1220 1269 1221 - dev_dbg(isp->dev, 1222 - "FILE_INPUT enable, camera_cnt: %d\n", isp->input_cnt); 1223 - isp->inputs[isp->input_cnt].type = FILE_INPUT; 1224 - isp->inputs[isp->input_cnt].port = -1; 1225 - isp->inputs[isp->input_cnt].camera_caps = 1226 - atomisp_get_default_camera_caps(); 1227 - isp->inputs[isp->input_cnt++].camera = &isp->file_dev.sd; 1228 - 1229 1270 if (isp->input_cnt < ATOM_ISP_MAX_INPUTS) { 1230 1271 dev_dbg(isp->dev, 1231 1272 "TPG detected, camera_cnt: %d\n", isp->input_cnt); 1232 1273 isp->inputs[isp->input_cnt].type = TEST_PATTERN; 1233 1274 isp->inputs[isp->input_cnt].port = -1; 1234 - isp->inputs[isp->input_cnt].camera_caps = 1235 - atomisp_get_default_camera_caps(); 1236 1275 isp->inputs[isp->input_cnt++].camera = &isp->tpg.sd; 1237 1276 } else { 1238 1277 dev_warn(isp->dev, "too many atomisp inputs, TPG ignored.\n"); 1239 1278 } 1240 1279 1241 - ret = v4l2_device_register_subdev_nodes(&isp->v4l2_dev); 1242 - if (ret < 0) 1243 - goto link_failed; 1244 - 1245 - return media_device_register(&isp->media_dev); 1280 + return 0; 1246 1281 1247 1282 link_failed: 1248 1283 for (i = 0; i < isp->num_of_streams; i++) ··· 1241 1304 subdev_register_failed: 1242 1305 atomisp_tpg_unregister_entities(&isp->tpg); 1243 1306 tpg_register_failed: 1244 - atomisp_file_input_unregister_entities(&isp->file_dev); 1245 - file_input_register_failed: 1246 1307 for (i = 0; i < ATOMISP_CAMERA_NR_PORTS; i++) 1247 1308 atomisp_mipi_csi2_unregister_entities(&isp->csi2_port[i]); 1248 1309 csi_and_subdev_probe_failed: ··· 1251 1316 return ret; 1252 1317 } 1253 1318 1319 + static int atomisp_register_device_nodes(struct atomisp_device *isp) 1320 + { 1321 + int i, err; 1322 + 1323 + for (i = 0; i < isp->num_of_streams; i++) { 1324 + err = atomisp_subdev_register_video_nodes(&isp->asd[i], &isp->v4l2_dev); 1325 + if (err) 1326 + return err; 1327 + } 1328 + 1329 + err = 
atomisp_create_pads_links(isp); 1330 + if (err) 1331 + return err; 1332 + 1333 + err = v4l2_device_register_subdev_nodes(&isp->v4l2_dev); 1334 + if (err) 1335 + return err; 1336 + 1337 + return media_device_register(&isp->media_dev); 1338 + } 1339 + 1254 1340 static int atomisp_initialize_modules(struct atomisp_device *isp) 1255 1341 { 1256 1342 int ret; ··· 1280 1324 if (ret < 0) { 1281 1325 dev_err(isp->dev, "mipi csi2 initialization failed\n"); 1282 1326 goto error_mipi_csi2; 1283 - } 1284 - 1285 - ret = atomisp_file_input_init(isp); 1286 - if (ret < 0) { 1287 - dev_err(isp->dev, 1288 - "file input device initialization failed\n"); 1289 - goto error_file_input; 1290 1327 } 1291 1328 1292 1329 ret = atomisp_tpg_init(isp); ··· 1299 1350 error_isp_subdev: 1300 1351 error_tpg: 1301 1352 atomisp_tpg_cleanup(isp); 1302 - error_file_input: 1303 - atomisp_file_input_cleanup(isp); 1304 1353 error_mipi_csi2: 1305 1354 atomisp_mipi_csi2_cleanup(isp); 1306 1355 return ret; ··· 1307 1360 static void atomisp_uninitialize_modules(struct atomisp_device *isp) 1308 1361 { 1309 1362 atomisp_tpg_cleanup(isp); 1310 - atomisp_file_input_cleanup(isp); 1311 1363 atomisp_mipi_csi2_cleanup(isp); 1312 1364 } 1313 1365 ··· 1416 1470 return true; 1417 1471 } 1418 1472 1419 - static int init_atomisp_wdts(struct atomisp_device *isp) 1420 - { 1421 - int i, err; 1422 - 1423 - atomic_set(&isp->wdt_work_queued, 0); 1424 - isp->wdt_work_queue = alloc_workqueue(isp->v4l2_dev.name, 0, 1); 1425 - if (!isp->wdt_work_queue) { 1426 - dev_err(isp->dev, "Failed to initialize wdt work queue\n"); 1427 - err = -ENOMEM; 1428 - goto alloc_fail; 1429 - } 1430 - INIT_WORK(&isp->wdt_work, atomisp_wdt_work); 1431 - 1432 - for (i = 0; i < isp->num_of_streams; i++) { 1433 - struct atomisp_sub_device *asd = &isp->asd[i]; 1434 - 1435 - if (!IS_ISP2401) { 1436 - timer_setup(&asd->wdt, atomisp_wdt, 0); 1437 - } else { 1438 - timer_setup(&asd->video_out_capture.wdt, 1439 - atomisp_wdt, 0); 1440 - 
timer_setup(&asd->video_out_preview.wdt, 1441 - atomisp_wdt, 0); 1442 - timer_setup(&asd->video_out_vf.wdt, atomisp_wdt, 0); 1443 - timer_setup(&asd->video_out_video_capture.wdt, 1444 - atomisp_wdt, 0); 1445 - } 1446 - } 1447 - return 0; 1448 - alloc_fail: 1449 - return err; 1450 - } 1451 - 1452 1473 #define ATOM_ISP_PCI_BAR 0 1453 1474 1454 1475 static int atomisp_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) ··· 1464 1551 1465 1552 dev_dbg(&pdev->dev, "atomisp mmio base: %p\n", isp->base); 1466 1553 1467 - rt_mutex_init(&isp->mutex); 1468 - rt_mutex_init(&isp->loading); 1469 - mutex_init(&isp->streamoff_mutex); 1554 + mutex_init(&isp->mutex); 1470 1555 spin_lock_init(&isp->lock); 1471 1556 1472 1557 /* This is not a true PCI device on SoC, so the delay is not needed. */ ··· 1636 1725 pci_write_config_dword(pdev, MRFLD_PCI_CSI_AFE_TRIM_CONTROL, csi_afe_trim); 1637 1726 } 1638 1727 1639 - rt_mutex_lock(&isp->loading); 1640 - 1641 1728 err = atomisp_initialize_modules(isp); 1642 1729 if (err < 0) { 1643 1730 dev_err(&pdev->dev, "atomisp_initialize_modules (%d)\n", err); ··· 1647 1738 dev_err(&pdev->dev, "atomisp_register_entities failed (%d)\n", err); 1648 1739 goto register_entities_fail; 1649 1740 } 1650 - err = atomisp_create_pads_links(isp); 1651 - if (err < 0) 1652 - goto register_entities_fail; 1653 - /* init atomisp wdts */ 1654 - err = init_atomisp_wdts(isp); 1655 - if (err != 0) 1656 - goto wdt_work_queue_fail; 1741 + 1742 + INIT_WORK(&isp->assert_recovery_work, atomisp_assert_recovery_work); 1657 1743 1658 1744 /* save the iunit context only once after all the values are init'ed. 
*/ 1659 1745 atomisp_save_iunit_reg(isp); ··· 1681 1777 release_firmware(isp->firmware); 1682 1778 isp->firmware = NULL; 1683 1779 isp->css_env.isp_css_fw.data = NULL; 1684 - isp->ready = true; 1685 - rt_mutex_unlock(&isp->loading); 1780 + 1781 + err = atomisp_register_device_nodes(isp); 1782 + if (err) 1783 + goto css_init_fail; 1686 1784 1687 1785 atomisp_drvfs_init(isp); 1688 1786 ··· 1695 1789 request_irq_fail: 1696 1790 hmm_cleanup(); 1697 1791 pm_runtime_get_noresume(&pdev->dev); 1698 - destroy_workqueue(isp->wdt_work_queue); 1699 - wdt_work_queue_fail: 1700 1792 atomisp_unregister_entities(isp); 1701 1793 register_entities_fail: 1702 1794 atomisp_uninitialize_modules(isp); 1703 1795 initialize_modules_fail: 1704 - rt_mutex_unlock(&isp->loading); 1705 1796 cpu_latency_qos_remove_request(&isp->pm_qos); 1706 1797 atomisp_msi_irq_uninit(isp); 1707 1798 pci_free_irq_vectors(pdev); ··· 1753 1850 1754 1851 atomisp_msi_irq_uninit(isp); 1755 1852 atomisp_unregister_entities(isp); 1756 - 1757 - destroy_workqueue(isp->wdt_work_queue); 1758 - atomisp_file_input_cleanup(isp); 1759 1853 1760 1854 release_firmware(isp->firmware); 1761 1855 }
-3
drivers/staging/media/atomisp/pci/atomisp_v4l2.h
··· 22 22 #define __ATOMISP_V4L2_H__ 23 23 24 24 struct atomisp_video_pipe; 25 - struct atomisp_acc_pipe; 26 25 struct v4l2_device; 27 26 struct atomisp_device; 28 27 struct firmware; 29 28 30 29 int atomisp_video_init(struct atomisp_video_pipe *video, const char *name, 31 30 unsigned int run_mode); 32 - void atomisp_acc_init(struct atomisp_acc_pipe *video, const char *name); 33 31 void atomisp_video_unregister(struct atomisp_video_pipe *video); 34 - void atomisp_acc_unregister(struct atomisp_acc_pipe *video); 35 32 const struct firmware *atomisp_load_firmware(struct atomisp_device *isp); 36 33 int atomisp_csi_lane_config(struct atomisp_device *isp); 37 34
+29 -169
drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
··· 44 44 #include "hmm/hmm_common.h" 45 45 #include "hmm/hmm_bo.h" 46 46 47 - static unsigned int order_to_nr(unsigned int order) 48 - { 49 - return 1U << order; 50 - } 51 - 52 - static unsigned int nr_to_order_bottom(unsigned int nr) 53 - { 54 - return fls(nr) - 1; 55 - } 56 - 57 47 static int __bo_init(struct hmm_bo_device *bdev, struct hmm_buffer_object *bo, 58 48 unsigned int pgnr) 59 49 { ··· 615 625 return bo; 616 626 } 617 627 618 - static void free_private_bo_pages(struct hmm_buffer_object *bo, 619 - int free_pgnr) 628 + static void free_pages_bulk_array(unsigned long nr_pages, struct page **page_array) 620 629 { 621 - int i, ret; 630 + unsigned long i; 622 631 623 - for (i = 0; i < free_pgnr; i++) { 624 - ret = set_pages_wb(bo->pages[i], 1); 625 - if (ret) 626 - dev_err(atomisp_dev, 627 - "set page to WB err ...ret = %d\n", 628 - ret); 629 - /* 630 - W/A: set_pages_wb seldom return value = -EFAULT 631 - indicate that address of page is not in valid 632 - range(0xffff880000000000~0xffffc7ffffffffff) 633 - then, _free_pages would panic; Do not know why page 634 - address be valid,it maybe memory corruption by lowmemory 635 - */ 636 - if (!ret) { 637 - __free_pages(bo->pages[i], 0); 638 - } 639 - } 632 + for (i = 0; i < nr_pages; i++) 633 + __free_pages(page_array[i], 0); 634 + } 635 + 636 + static void free_private_bo_pages(struct hmm_buffer_object *bo) 637 + { 638 + set_pages_array_wb(bo->pages, bo->pgnr); 639 + free_pages_bulk_array(bo->pgnr, bo->pages); 640 640 } 641 641 642 642 /*Allocate pages which will be used only by ISP*/ 643 643 static int alloc_private_pages(struct hmm_buffer_object *bo) 644 644 { 645 + const gfp_t gfp = __GFP_NOWARN | __GFP_RECLAIM | __GFP_FS; 645 646 int ret; 646 - unsigned int pgnr, order, blk_pgnr, alloc_pgnr; 647 - struct page *pages; 648 - gfp_t gfp = GFP_NOWAIT | __GFP_NOWARN; /* REVISIT: need __GFP_FS too? 
*/ 649 - int i, j; 650 - int failure_number = 0; 651 - bool reduce_order = false; 652 - bool lack_mem = true; 653 647 654 - pgnr = bo->pgnr; 648 + ret = alloc_pages_bulk_array(gfp, bo->pgnr, bo->pages); 649 + if (ret != bo->pgnr) { 650 + free_pages_bulk_array(ret, bo->pages); 651 + return -ENOMEM; 652 + } 655 653 656 - i = 0; 657 - alloc_pgnr = 0; 658 - 659 - while (pgnr) { 660 - order = nr_to_order_bottom(pgnr); 661 - /* 662 - * if be short of memory, we will set order to 0 663 - * everytime. 664 - */ 665 - if (lack_mem) 666 - order = HMM_MIN_ORDER; 667 - else if (order > HMM_MAX_ORDER) 668 - order = HMM_MAX_ORDER; 669 - retry: 670 - /* 671 - * When order > HMM_MIN_ORDER, for performance reasons we don't 672 - * want alloc_pages() to sleep. In case it fails and fallbacks 673 - * to HMM_MIN_ORDER or in case the requested order is originally 674 - * the minimum value, we can allow alloc_pages() to sleep for 675 - * robustness purpose. 676 - * 677 - * REVISIT: why __GFP_FS is necessary? 678 - */ 679 - if (order == HMM_MIN_ORDER) { 680 - gfp &= ~GFP_NOWAIT; 681 - gfp |= __GFP_RECLAIM | __GFP_FS; 682 - } 683 - 684 - pages = alloc_pages(gfp, order); 685 - if (unlikely(!pages)) { 686 - /* 687 - * in low memory case, if allocation page fails, 688 - * we turn to try if order=0 allocation could 689 - * succeed. if order=0 fails too, that means there is 690 - * no memory left. 691 - */ 692 - if (order == HMM_MIN_ORDER) { 693 - dev_err(atomisp_dev, 694 - "%s: cannot allocate pages\n", 695 - __func__); 696 - goto cleanup; 697 - } 698 - order = HMM_MIN_ORDER; 699 - failure_number++; 700 - reduce_order = true; 701 - /* 702 - * if fail two times continuously, we think be short 703 - * of memory now. 
704 - */ 705 - if (failure_number == 2) { 706 - lack_mem = true; 707 - failure_number = 0; 708 - } 709 - goto retry; 710 - } else { 711 - blk_pgnr = order_to_nr(order); 712 - 713 - /* 714 - * set memory to uncacheable -- UC_MINUS 715 - */ 716 - ret = set_pages_uc(pages, blk_pgnr); 717 - if (ret) { 718 - dev_err(atomisp_dev, 719 - "set page uncacheablefailed.\n"); 720 - 721 - __free_pages(pages, order); 722 - 723 - goto cleanup; 724 - } 725 - 726 - for (j = 0; j < blk_pgnr; j++, i++) { 727 - bo->pages[i] = pages + j; 728 - } 729 - 730 - pgnr -= blk_pgnr; 731 - 732 - /* 733 - * if order is not reduced this time, clear 734 - * failure_number. 735 - */ 736 - if (reduce_order) 737 - reduce_order = false; 738 - else 739 - failure_number = 0; 740 - } 654 + ret = set_pages_array_uc(bo->pages, bo->pgnr); 655 + if (ret) { 656 + dev_err(atomisp_dev, "set pages uncacheable failed.\n"); 657 + free_pages_bulk_array(bo->pgnr, bo->pages); 658 + return ret; 741 659 } 742 660 743 661 return 0; 744 - cleanup: 745 - alloc_pgnr = i; 746 - free_private_bo_pages(bo, alloc_pgnr); 747 - return -ENOMEM; 748 662 } 749 663 750 664 static void free_user_pages(struct hmm_buffer_object *bo, ··· 656 762 { 657 763 int i; 658 764 659 - if (bo->mem_type == HMM_BO_MEM_TYPE_PFN) { 660 - unpin_user_pages(bo->pages, page_nr); 661 - } else { 662 - for (i = 0; i < page_nr; i++) 663 - put_page(bo->pages[i]); 664 - } 765 + for (i = 0; i < page_nr; i++) 766 + put_page(bo->pages[i]); 665 767 } 666 768 667 769 /* ··· 667 777 const void __user *userptr) 668 778 { 669 779 int page_nr; 670 - struct vm_area_struct *vma; 671 - 672 - mutex_unlock(&bo->mutex); 673 - mmap_read_lock(current->mm); 674 - vma = find_vma(current->mm, (unsigned long)userptr); 675 - mmap_read_unlock(current->mm); 676 - if (!vma) { 677 - dev_err(atomisp_dev, "find_vma failed\n"); 678 - mutex_lock(&bo->mutex); 679 - return -EFAULT; 680 - } 681 - mutex_lock(&bo->mutex); 682 - /* 683 - * Handle frame buffer allocated in other kerenl space driver 
684 - * and map to user space 685 - */ 686 780 687 781 userptr = untagged_addr(userptr); 688 782 689 - if (vma->vm_flags & (VM_IO | VM_PFNMAP)) { 690 - page_nr = pin_user_pages((unsigned long)userptr, bo->pgnr, 691 - FOLL_LONGTERM | FOLL_WRITE, 692 - bo->pages, NULL); 693 - bo->mem_type = HMM_BO_MEM_TYPE_PFN; 694 - } else { 695 - /*Handle frame buffer allocated in user space*/ 696 - mutex_unlock(&bo->mutex); 697 - page_nr = get_user_pages_fast((unsigned long)userptr, 698 - (int)(bo->pgnr), 1, bo->pages); 699 - mutex_lock(&bo->mutex); 700 - bo->mem_type = HMM_BO_MEM_TYPE_USER; 701 - } 702 - 703 - dev_dbg(atomisp_dev, "%s: %d %s pages were allocated as 0x%08x\n", 704 - __func__, 705 - bo->pgnr, 706 - bo->mem_type == HMM_BO_MEM_TYPE_USER ? "user" : "pfn", page_nr); 783 + /* Handle frame buffer allocated in user space */ 784 + mutex_unlock(&bo->mutex); 785 + page_nr = get_user_pages_fast((unsigned long)userptr, bo->pgnr, 1, bo->pages); 786 + mutex_lock(&bo->mutex); 707 787 708 788 /* can be written by caller, not forced */ 709 789 if (page_nr != bo->pgnr) { ··· 714 854 mutex_lock(&bo->mutex); 715 855 check_bo_status_no_goto(bo, HMM_BO_PAGE_ALLOCED, status_err); 716 856 717 - bo->pages = kmalloc_array(bo->pgnr, sizeof(struct page *), GFP_KERNEL); 857 + bo->pages = kcalloc(bo->pgnr, sizeof(struct page *), GFP_KERNEL); 718 858 if (unlikely(!bo->pages)) { 719 859 ret = -ENOMEM; 720 860 goto alloc_err; ··· 770 910 bo->status &= (~HMM_BO_PAGE_ALLOCED); 771 911 772 912 if (bo->type == HMM_BO_PRIVATE) 773 - free_private_bo_pages(bo, bo->pgnr); 913 + free_private_bo_pages(bo); 774 914 else if (bo->type == HMM_BO_USER) 775 915 free_user_pages(bo, bo->pgnr); 776 916 else
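The rewritten `alloc_private_pages()` leans on `alloc_pages_bulk_array()`, which may legitimately return fewer pages than requested; the caller is then responsible for freeing the partial batch and failing cleanly rather than proceeding with a short array. A userspace analogue of that all-or-nothing pattern, with `malloc()` standing in for page allocation (assumption: this mirrors the unwind logic only, not the kernel API):

```c
#include <stdlib.h>

/* Allocate nr buffers of size bytes each into array[]; on partial
 * failure, free what was allocated and report an error, mirroring
 * how the driver unwinds a short alloc_pages_bulk_array() result. */
static int alloc_all_or_nothing(void **array, size_t nr, size_t size)
{
	size_t got;

	for (got = 0; got < nr; got++) {
		array[got] = malloc(size);
		if (!array[got])
			break;
	}
	if (got != nr) {
		while (got--)
			free(array[got]);
		return -1;
	}
	return 0;
}

static void free_all(void **array, size_t nr)
{
	size_t i;

	for (i = 0; i < nr; i++)
		free(array[i]);
}
```

Compared with the deleted order-juggling retry loop, the bulk API keeps the failure handling in one obvious place.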
+2 -2
drivers/staging/media/atomisp/pci/sh_css_params.c
··· 950 950 params->fpn_config.data = NULL; 951 951 } 952 952 if (!params->fpn_config.data) { 953 - params->fpn_config.data = kvmalloc(height * width * 954 - sizeof(short), GFP_KERNEL); 953 + params->fpn_config.data = kvmalloc(array3_size(height, width, sizeof(short)), 954 + GFP_KERNEL); 955 955 if (!params->fpn_config.data) { 956 956 IA_CSS_ERROR("out of memory"); 957 957 IA_CSS_LEAVE_ERR_PRIVATE(-ENOMEM);
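The `kvmalloc()` call now sizes the buffer with `array3_size()`, which saturates to `SIZE_MAX` on multiplication overflow so the allocation fails instead of silently wrapping to a small size. A sketch of that saturating behavior (hypothetical helper, in the spirit of the kernel's `<linux/overflow.h>`, not its actual implementation):

```c
#include <stddef.h>
#include <stdint.h>

/* Multiply a * b * c, saturating to SIZE_MAX on overflow, so a
 * subsequent allocation of the result is guaranteed to fail rather
 * than succeed with a truncated size. */
static size_t sat_size3(size_t a, size_t b, size_t c)
{
	size_t ab, abc;

	if (__builtin_mul_overflow(a, b, &ab))
		return SIZE_MAX;
	if (__builtin_mul_overflow(ab, c, &abc))
		return SIZE_MAX;
	return abc;
}
```

With `height * width * sizeof(short)` written as a plain product, a hostile or buggy `height`/`width` pair could overflow and under-allocate; the saturating form turns that into a clean `-ENOMEM`.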
+4 -4
drivers/staging/media/imx/imx-media-utils.c
··· 863 863 mutex_lock(&imxmd->md.graph_mutex); 864 864 865 865 if (on) { 866 - ret = __media_pipeline_start(entity, &imxmd->pipe); 866 + ret = __media_pipeline_start(entity->pads, &imxmd->pipe); 867 867 if (ret) 868 868 goto out; 869 869 ret = v4l2_subdev_call(sd, video, s_stream, 1); 870 870 if (ret) 871 - __media_pipeline_stop(entity); 871 + __media_pipeline_stop(entity->pads); 872 872 } else { 873 873 v4l2_subdev_call(sd, video, s_stream, 0); 874 - if (entity->pipe) 875 - __media_pipeline_stop(entity); 874 + if (media_pad_pipeline(entity->pads)) 875 + __media_pipeline_stop(entity->pads); 876 876 } 877 877 878 878 out:
+3 -3
drivers/staging/media/imx/imx7-media-csi.c
··· 1360 1360 1361 1361 mutex_lock(&csi->mdev.graph_mutex); 1362 1362 1363 - ret = __media_pipeline_start(&csi->sd.entity, &csi->pipe); 1363 + ret = __video_device_pipeline_start(csi->vdev, &csi->pipe); 1364 1364 if (ret) 1365 1365 goto err_unlock; 1366 1366 ··· 1373 1373 return 0; 1374 1374 1375 1375 err_stop: 1376 - __media_pipeline_stop(&csi->sd.entity); 1376 + __video_device_pipeline_stop(csi->vdev); 1377 1377 err_unlock: 1378 1378 mutex_unlock(&csi->mdev.graph_mutex); 1379 1379 dev_err(csi->dev, "pipeline start failed with %d\n", ret); ··· 1396 1396 1397 1397 mutex_lock(&csi->mdev.graph_mutex); 1398 1398 v4l2_subdev_call(&csi->sd, video, s_stream, 0); 1399 - __media_pipeline_stop(&csi->sd.entity); 1399 + __video_device_pipeline_stop(csi->vdev); 1400 1400 mutex_unlock(&csi->mdev.graph_mutex); 1401 1401 1402 1402 /* release all active buffers */
+5 -2
drivers/staging/media/ipu3/include/uapi/intel-ipu3.h
··· 626 626 * @b: white balance gain for B channel. 627 627 * @gb: white balance gain for Gb channel. 628 628 * 629 - * Precision u3.13, range [0, 8). White balance correction is done by applying 630 - * a multiplicative gain to each color channels prior to BNR. 629 + * For BNR parameters, the WB gain factor for the four channels [Ggr, Ggb, Gb, Gr]. 630 + * Their precision is U3.13 and the range is (0, 8); the actual gain is 631 + * Gx + 1, typically Gx = 1. 632 + * 633 + * Pout = {Pin * (1 + Gx)}. 631 634 */ 632 635 struct ipu3_uapi_bnr_static_config_wb_gains_config { 633 636 __u16 gr;
+17 -20
drivers/staging/media/ipu3/ipu3-v4l2.c
··· 192 192 struct v4l2_subdev_state *sd_state, 193 193 struct v4l2_subdev_selection *sel) 194 194 { 195 - struct v4l2_rect *try_sel, *r; 196 - struct imgu_v4l2_subdev *imgu_sd = container_of(sd, 197 - struct imgu_v4l2_subdev, 198 - subdev); 195 + struct imgu_v4l2_subdev *imgu_sd = 196 + container_of(sd, struct imgu_v4l2_subdev, subdev); 199 197 200 198 if (sel->pad != IMGU_NODE_IN) 201 199 return -EINVAL; 202 200 203 201 switch (sel->target) { 204 202 case V4L2_SEL_TGT_CROP: 205 - try_sel = v4l2_subdev_get_try_crop(sd, sd_state, sel->pad); 206 - r = &imgu_sd->rect.eff; 207 - break; 203 + if (sel->which == V4L2_SUBDEV_FORMAT_TRY) 204 + sel->r = *v4l2_subdev_get_try_crop(sd, sd_state, 205 + sel->pad); 206 + else 207 + sel->r = imgu_sd->rect.eff; 208 + return 0; 208 209 case V4L2_SEL_TGT_COMPOSE: 209 - try_sel = v4l2_subdev_get_try_compose(sd, sd_state, sel->pad); 210 - r = &imgu_sd->rect.bds; 211 - break; 210 + if (sel->which == V4L2_SUBDEV_FORMAT_TRY) 211 + sel->r = *v4l2_subdev_get_try_compose(sd, sd_state, 212 + sel->pad); 213 + else 214 + sel->r = imgu_sd->rect.bds; 215 + return 0; 212 216 default: 213 217 return -EINVAL; 214 218 } 215 - 216 - if (sel->which == V4L2_SUBDEV_FORMAT_TRY) 217 - sel->r = *try_sel; 218 - else 219 - sel->r = *r; 220 - 221 - return 0; 222 219 } 223 220 224 221 static int imgu_subdev_set_selection(struct v4l2_subdev *sd, ··· 483 486 pipe = node->pipe; 484 487 imgu_pipe = &imgu->imgu_pipe[pipe]; 485 488 atomic_set(&node->sequence, 0); 486 - r = media_pipeline_start(&node->vdev.entity, &imgu_pipe->pipeline); 489 + r = video_device_pipeline_start(&node->vdev, &imgu_pipe->pipeline); 487 490 if (r < 0) 488 491 goto fail_return_bufs; 489 492 ··· 508 511 return 0; 509 512 510 513 fail_stop_pipeline: 511 - media_pipeline_stop(&node->vdev.entity); 514 + video_device_pipeline_stop(&node->vdev); 512 515 fail_return_bufs: 513 516 imgu_return_all_buffers(imgu, node, VB2_BUF_STATE_QUEUED); 514 517 ··· 548 551 imgu_return_all_buffers(imgu, node, 
VB2_BUF_STATE_ERROR); 549 552 mutex_unlock(&imgu->streaming_lock); 550 553 551 - media_pipeline_stop(&node->vdev.entity); 554 + video_device_pipeline_stop(&node->vdev); 552 555 } 553 556 554 557 /******************** v4l2_ioctl_ops ********************/
+2
drivers/staging/media/meson/vdec/vdec.c
··· 1102 1102 1103 1103 err_vdev_release: 1104 1104 video_device_release(vdev); 1105 + v4l2_device_unregister(&core->v4l2_dev); 1105 1106 return ret; 1106 1107 } 1107 1108 ··· 1111 1110 struct amvdec_core *core = platform_get_drvdata(pdev); 1112 1111 1113 1112 video_unregister_device(core->vdev_dec); 1113 + v4l2_device_unregister(&core->v4l2_dev); 1114 1114 1115 1115 return 0; 1116 1116 }
+1 -3
drivers/staging/media/omap4iss/iss.c
··· 548 548 struct iss_pipeline *pipe; 549 549 struct media_pad *pad; 550 550 551 - if (!me->pipe) 552 - return 0; 553 551 pipe = to_iss_pipeline(me); 554 - if (pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED) 552 + if (!pipe || pipe->stream_state == ISS_PIPELINE_STREAM_STOPPED) 555 553 return 0; 556 554 pad = media_pad_remote_pad_first(&pipe->output->pad); 557 555 return pad->entity == me;
+4 -5
drivers/staging/media/omap4iss/iss_video.c
··· 870 870 * Start streaming on the pipeline. No link touching an entity in the 871 871 * pipeline can be activated or deactivated once streaming is started. 872 872 */ 873 - pipe = entity->pipe 874 - ? to_iss_pipeline(entity) : &video->pipe; 873 + pipe = to_iss_pipeline(&video->video.entity) ? : &video->pipe; 875 874 pipe->external = NULL; 876 875 pipe->external_rate = 0; 877 876 pipe->external_bpp = 0; ··· 886 887 if (video->iss->pdata->set_constraints) 887 888 video->iss->pdata->set_constraints(video->iss, true); 888 889 889 - ret = media_pipeline_start(entity, &pipe->pipe); 890 + ret = video_device_pipeline_start(&video->video, &pipe->pipe); 890 891 if (ret < 0) 891 892 goto err_media_pipeline_start; 892 893 ··· 977 978 err_omap4iss_set_stream: 978 979 vb2_streamoff(&vfh->queue, type); 979 980 err_iss_video_check_format: 980 - media_pipeline_stop(&video->video.entity); 981 + video_device_pipeline_stop(&video->video); 981 982 err_media_pipeline_start: 982 983 if (video->iss->pdata->set_constraints) 983 984 video->iss->pdata->set_constraints(video->iss, false); ··· 1031 1032 1032 1033 if (video->iss->pdata->set_constraints) 1033 1034 video->iss->pdata->set_constraints(video->iss, false); 1034 - media_pipeline_stop(&video->video.entity); 1035 + video_device_pipeline_stop(&video->video); 1035 1036 1036 1037 done: 1037 1038 mutex_unlock(&video->stream_lock);
+9 -2
drivers/staging/media/omap4iss/iss_video.h
··· 90 90 int external_bpp; 91 91 }; 92 92 93 - #define to_iss_pipeline(__e) \ 94 - container_of((__e)->pipe, struct iss_pipeline, pipe) 93 + static inline struct iss_pipeline *to_iss_pipeline(struct media_entity *entity) 94 + { 95 + struct media_pipeline *pipe = media_entity_pipeline(entity); 96 + 97 + if (!pipe) 98 + return NULL; 99 + 100 + return container_of(pipe, struct iss_pipeline, pipe); 101 + } 95 102 96 103 static inline int iss_pipeline_ready(struct iss_pipeline *pipe) 97 104 {
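The omap4iss hunks above convert the `to_iss_pipeline()` macro into an inline function so callers get a NULL check *before* the `container_of()` translation: applying `container_of()` to a NULL member pointer produces a bogus non-NULL struct pointer. A self-contained userspace illustration of that pattern (names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal container_of(): map a member pointer back to its enclosing struct. */
#define container_of_sketch(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct pipeline { int id; };

struct iss_pipeline_sketch {
	int stream_state;
	struct pipeline pipe;	/* embedded member */
};

/*
 * NULL in, NULL out: check before container_of_sketch(), otherwise a NULL
 * 'pipe' would be translated into a garbage non-NULL struct pointer.
 */
static struct iss_pipeline_sketch *to_iss_pipeline_sketch(struct pipeline *pipe)
{
	if (!pipe)
		return NULL;
	return container_of_sketch(pipe, struct iss_pipeline_sketch, pipe);
}
```

This is why the caller in iss.c can collapse its two early-return checks into a single `if (!pipe || ...)` test.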
+1
drivers/staging/media/sunxi/cedrus/Kconfig
··· 2 2 config VIDEO_SUNXI_CEDRUS 3 3 tristate "Allwinner Cedrus VPU driver" 4 4 depends on VIDEO_DEV 5 + depends on RESET_CONTROLLER 5 6 depends on HAS_DMA 6 7 depends on OF 7 8 select MEDIA_CONTROLLER
+3 -3
drivers/staging/media/tegra-video/tegra210.c
··· 547 547 VI_INCR_SYNCPT_NO_STALL); 548 548 549 549 /* start the pipeline */ 550 - ret = media_pipeline_start(&chan->video.entity, pipe); 550 + ret = video_device_pipeline_start(&chan->video, pipe); 551 551 if (ret < 0) 552 552 goto error_pipeline_start; 553 553 ··· 595 595 error_kthread_start: 596 596 tegra_channel_set_stream(chan, false); 597 597 error_set_stream: 598 - media_pipeline_stop(&chan->video.entity); 598 + video_device_pipeline_stop(&chan->video); 599 599 error_pipeline_start: 600 600 tegra_channel_release_buffers(chan, VB2_BUF_STATE_QUEUED); 601 601 return ret; ··· 617 617 618 618 tegra_channel_release_buffers(chan, VB2_BUF_STATE_ERROR); 619 619 tegra_channel_set_stream(chan, false); 620 - media_pipeline_stop(&chan->video.entity); 620 + video_device_pipeline_stop(&chan->video); 621 621 } 622 622 623 623 /*
+1 -5
drivers/thermal/intel/intel_powerclamp.c
··· 516 516 cpus_read_lock(); 517 517 518 518 /* prefer BSP */ 519 - control_cpu = 0; 520 - if (!cpu_online(control_cpu)) { 521 - control_cpu = get_cpu(); 522 - put_cpu(); 523 - } 519 + control_cpu = cpumask_first(cpu_online_mask); 524 520 525 521 clamping = true; 526 522 schedule_delayed_work(&poll_pkg_cstate_work, 0);
+4
drivers/watchdog/watchdog_core.c
··· 38 38 39 39 #include "watchdog_core.h" /* For watchdog_dev_register/... */ 40 40 41 + #define CREATE_TRACE_POINTS 42 + #include <trace/events/watchdog.h> 43 + 41 44 static DEFINE_IDA(watchdog_ida); 42 45 43 46 static int stop_on_reboot = -1; ··· 166 163 int ret; 167 164 168 165 ret = wdd->ops->stop(wdd); 166 + trace_watchdog_stop(wdd, ret); 169 167 if (ret) 170 168 return NOTIFY_BAD; 171 169 }
+10 -2
drivers/watchdog/watchdog_dev.c
··· 47 47 #include "watchdog_core.h" 48 48 #include "watchdog_pretimeout.h" 49 49 50 + #include <trace/events/watchdog.h> 51 + 50 52 /* the dev_t structure to store the dynamically allocated watchdog devices */ 51 53 static dev_t watchdog_devt; 52 54 /* Reference to watchdog device behind /dev/watchdog */ ··· 159 157 160 158 wd_data->last_hw_keepalive = now; 161 159 162 - if (wdd->ops->ping) 160 + if (wdd->ops->ping) { 163 161 err = wdd->ops->ping(wdd); /* ping the watchdog */ 164 - else 162 + trace_watchdog_ping(wdd, err); 163 + } else { 165 164 err = wdd->ops->start(wdd); /* restart watchdog */ 165 + trace_watchdog_start(wdd, err); 166 + } 166 167 167 168 if (err == 0) 168 169 watchdog_hrtimer_pretimeout_start(wdd); ··· 264 259 } 265 260 } else { 266 261 err = wdd->ops->start(wdd); 262 + trace_watchdog_start(wdd, err); 267 263 if (err == 0) { 268 264 set_bit(WDOG_ACTIVE, &wdd->status); 269 265 wd_data->last_keepalive = started_at; ··· 303 297 if (wdd->ops->stop) { 304 298 clear_bit(WDOG_HW_RUNNING, &wdd->status); 305 299 err = wdd->ops->stop(wdd); 300 + trace_watchdog_stop(wdd, err); 306 301 } else { 307 302 set_bit(WDOG_HW_RUNNING, &wdd->status); 308 303 } ··· 376 369 377 370 if (wdd->ops->set_timeout) { 378 371 err = wdd->ops->set_timeout(wdd, timeout); 372 + trace_watchdog_set_timeout(wdd, timeout, err); 379 373 } else { 380 374 wdd->timeout = timeout; 381 375 /* Disable pretimeout if it doesn't fit the new timeout */
+15 -12
drivers/xen/grant-dma-ops.c
··· 31 31 32 32 static inline dma_addr_t grant_to_dma(grant_ref_t grant) 33 33 { 34 - return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT); 34 + return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT); 35 35 } 36 36 37 37 static inline grant_ref_t dma_to_grant(dma_addr_t dma) 38 38 { 39 - return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT); 39 + return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT); 40 40 } 41 41 42 42 static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev) ··· 79 79 unsigned long attrs) 80 80 { 81 81 struct xen_grant_dma_data *data; 82 - unsigned int i, n_pages = PFN_UP(size); 82 + unsigned int i, n_pages = XEN_PFN_UP(size); 83 83 unsigned long pfn; 84 84 grant_ref_t grant; 85 85 void *ret; ··· 91 91 if (unlikely(data->broken)) 92 92 return NULL; 93 93 94 - ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp); 94 + ret = alloc_pages_exact(n_pages * XEN_PAGE_SIZE, gfp); 95 95 if (!ret) 96 96 return NULL; 97 97 98 98 pfn = virt_to_pfn(ret); 99 99 100 100 if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) { 101 - free_pages_exact(ret, n_pages * PAGE_SIZE); 101 + free_pages_exact(ret, n_pages * XEN_PAGE_SIZE); 102 102 return NULL; 103 103 } 104 104 ··· 116 116 dma_addr_t dma_handle, unsigned long attrs) 117 117 { 118 118 struct xen_grant_dma_data *data; 119 - unsigned int i, n_pages = PFN_UP(size); 119 + unsigned int i, n_pages = XEN_PFN_UP(size); 120 120 grant_ref_t grant; 121 121 122 122 data = find_xen_grant_dma_data(dev); ··· 138 138 139 139 gnttab_free_grant_reference_seq(grant, n_pages); 140 140 141 - free_pages_exact(vaddr, n_pages * PAGE_SIZE); 141 + free_pages_exact(vaddr, n_pages * XEN_PAGE_SIZE); 142 142 } 143 143 144 144 static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size, ··· 168 168 unsigned long attrs) 169 169 { 170 170 struct xen_grant_dma_data *data; 171 - unsigned int i, n_pages = PFN_UP(offset + size); 171 + unsigned long 
dma_offset = xen_offset_in_page(offset), 172 + pfn_offset = XEN_PFN_DOWN(offset); 173 + unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size); 172 174 grant_ref_t grant; 173 175 dma_addr_t dma_handle; 174 176 ··· 189 187 190 188 for (i = 0; i < n_pages; i++) { 191 189 gnttab_grant_foreign_access_ref(grant + i, data->backend_domid, 192 - xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE); 190 + pfn_to_gfn(page_to_xen_pfn(page) + i + pfn_offset), 191 + dir == DMA_TO_DEVICE); 193 192 } 194 193 195 - dma_handle = grant_to_dma(grant) + offset; 194 + dma_handle = grant_to_dma(grant) + dma_offset; 196 195 197 196 return dma_handle; 198 197 } ··· 203 200 unsigned long attrs) 204 201 { 205 202 struct xen_grant_dma_data *data; 206 - unsigned long offset = dma_handle & (PAGE_SIZE - 1); 207 - unsigned int i, n_pages = PFN_UP(offset + size); 203 + unsigned long dma_offset = xen_offset_in_page(dma_handle); 204 + unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size); 208 205 grant_ref_t grant; 209 206 210 207 if (WARN_ON(dir == DMA_NONE))
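The grant-dma-ops.c changes above all make the same point: the grant-reference encoding inside a DMA address must use the fixed Xen page size (4 KiB, `XEN_PAGE_SHIFT == 12`), not the CPU's `PAGE_SHIFT`, which can be 16 KiB or 64 KiB on arm64. A userspace model of the two conversion helpers, with the constants mirroring the kernel's and the types simplified:

```c
#include <assert.h>
#include <stdint.h>

/* Xen grants always describe 4 KiB frames, independent of the CPU page size. */
#define XEN_PAGE_SHIFT		12
#define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)

typedef uint64_t dma_addr_t;
typedef uint32_t grant_ref_t;

/* Encode a grant reference as a tagged "DMA address". */
static dma_addr_t grant_to_dma(grant_ref_t grant)
{
	return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT);
}

/* Recover the grant reference from a tagged DMA address. */
static grant_ref_t dma_to_grant(dma_addr_t dma)
{
	return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT);
}
```

Had the shift been the CPU's `PAGE_SHIFT` on a 64 KiB-page arm64 kernel, the round trip would still work, but the grant numbers handed to the backend would address the wrong 4 KiB frames.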
+29 -10
fs/cifs/cached_dir.c
··· 253 253 dentry = dget(cifs_sb->root); 254 254 else { 255 255 dentry = path_to_dentry(cifs_sb, path); 256 - if (IS_ERR(dentry)) 256 + if (IS_ERR(dentry)) { 257 + rc = -ENOENT; 257 258 goto oshr_free; 259 + } 258 260 } 259 261 cfid->dentry = dentry; 260 262 cfid->tcon = tcon; ··· 340 338 free_cached_dir(cfid); 341 339 } 342 340 341 + void drop_cached_dir_by_name(const unsigned int xid, struct cifs_tcon *tcon, 342 + const char *name, struct cifs_sb_info *cifs_sb) 343 + { 344 + struct cached_fid *cfid = NULL; 345 + int rc; 346 + 347 + rc = open_cached_dir(xid, tcon, name, cifs_sb, true, &cfid); 348 + if (rc) { 349 + cifs_dbg(FYI, "no cached dir found for rmdir(%s)\n", name); 350 + return; 351 + } 352 + spin_lock(&cfid->cfids->cfid_list_lock); 353 + if (cfid->has_lease) { 354 + cfid->has_lease = false; 355 + kref_put(&cfid->refcount, smb2_close_cached_fid); 356 + } 357 + spin_unlock(&cfid->cfids->cfid_list_lock); 358 + close_cached_dir(cfid); 359 + } 360 + 361 + 343 362 void close_cached_dir(struct cached_fid *cfid) 344 363 { 345 364 kref_put(&cfid->refcount, smb2_close_cached_fid); ··· 401 378 { 402 379 struct cached_fids *cfids = tcon->cfids; 403 380 struct cached_fid *cfid, *q; 404 - struct list_head entry; 381 + LIST_HEAD(entry); 405 382 406 - INIT_LIST_HEAD(&entry); 407 383 spin_lock(&cfids->cfid_list_lock); 408 384 list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { 409 - list_del(&cfid->entry); 410 - list_add(&cfid->entry, &entry); 385 + list_move(&cfid->entry, &entry); 411 386 cfids->num_entries--; 412 387 cfid->is_open = false; 388 + cfid->on_list = false; 413 389 /* To prevent race with smb2_cached_lease_break() */ 414 390 kref_get(&cfid->refcount); 415 391 } 416 392 spin_unlock(&cfids->cfid_list_lock); 417 393 418 394 list_for_each_entry_safe(cfid, q, &entry, entry) { 419 - cfid->on_list = false; 420 395 list_del(&cfid->entry); 421 396 cancel_work_sync(&cfid->lease_break); 422 397 if (cfid->has_lease) { ··· 539 518 void free_cached_dirs(struct 
cached_fids *cfids) 540 519 { 541 520 struct cached_fid *cfid, *q; 542 - struct list_head entry; 521 + LIST_HEAD(entry); 543 522 544 - INIT_LIST_HEAD(&entry); 545 523 spin_lock(&cfids->cfid_list_lock); 546 524 list_for_each_entry_safe(cfid, q, &cfids->entries, entry) { 547 525 cfid->on_list = false; 548 526 cfid->is_open = false; 549 - list_del(&cfid->entry); 550 - list_add(&cfid->entry, &entry); 527 + list_move(&cfid->entry, &entry); 551 528 } 552 529 spin_unlock(&cfids->cfid_list_lock); 553 530
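The cached_dir.c cleanup above switches to two `<linux/list.h>` idioms: `LIST_HEAD()` declares and initialises an on-stack list head in one step (replacing a separate `struct list_head` plus `INIT_LIST_HEAD()`), and `list_move()` replaces the `list_del()`/`list_add()` pair. A minimal userspace re-implementation of just the primitives used, for illustration only:

```c
#include <assert.h>

/* Circular doubly-linked list node, kernel-style. */
struct list_head { struct list_head *next, *prev; };

/* Declare and initialise an empty list head in one statement. */
#define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* Insert 'e' right after 'head'. */
static void list_add(struct list_head *e, struct list_head *head)
{
	e->next = head->next;
	e->prev = head;
	head->next->prev = e;
	head->next = e;
}

/* Unlink from the current list and splice onto another: one call, one idiom. */
static void list_move(struct list_head *e, struct list_head *head)
{
	list_del(e);
	list_add(e, head);
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}
```

In the patch this lets `invalidate_all_cached_dirs()` drain `cfids->entries` onto a private on-stack list under the spinlock, then walk that private list after dropping the lock.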
+4
fs/cifs/cached_dir.h
··· 69 69 struct dentry *dentry, 70 70 struct cached_fid **cfid); 71 71 extern void close_cached_dir(struct cached_fid *cfid); 72 + extern void drop_cached_dir_by_name(const unsigned int xid, 73 + struct cifs_tcon *tcon, 74 + const char *name, 75 + struct cifs_sb_info *cifs_sb); 72 76 extern void close_all_cached_dirs(struct cifs_sb_info *cifs_sb); 73 77 extern void invalidate_all_cached_dirs(struct cifs_tcon *tcon); 74 78 extern int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]);
+5 -2
fs/cifs/cifsfs.c
··· 1302 1302 ssize_t rc; 1303 1303 struct cifsFileInfo *cfile = dst_file->private_data; 1304 1304 1305 - if (cfile->swapfile) 1306 - return -EOPNOTSUPP; 1305 + if (cfile->swapfile) { 1306 + rc = -EOPNOTSUPP; 1307 + free_xid(xid); 1308 + return rc; 1309 + } 1307 1310 1308 1311 rc = cifs_file_copychunk_range(xid, src_file, off, dst_file, destoff, 1309 1312 len, flags);
+2 -2
fs/cifs/cifsfs.h
··· 153 153 #endif /* CONFIG_CIFS_NFSD_EXPORT */ 154 154 155 155 /* when changing internal version - update following two lines at same time */ 156 - #define SMB3_PRODUCT_BUILD 39 157 - #define CIFS_VERSION "2.39" 156 + #define SMB3_PRODUCT_BUILD 40 157 + #define CIFS_VERSION "2.40" 158 158 #endif /* _CIFSFS_H */
+4 -2
fs/cifs/dir.c
··· 543 543 cifs_dbg(FYI, "cifs_create parent inode = 0x%p name is: %pd and dentry = 0x%p\n", 544 544 inode, direntry, direntry); 545 545 546 - if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb)))) 547 - return -EIO; 546 + if (unlikely(cifs_forced_shutdown(CIFS_SB(inode->i_sb)))) { 547 + rc = -EIO; 548 + goto out_free_xid; 549 + } 548 550 549 551 tlink = cifs_sb_tlink(CIFS_SB(inode->i_sb)); 550 552 rc = PTR_ERR(tlink);
+7 -4
fs/cifs/file.c
··· 1885 1885 struct cifsFileInfo *cfile; 1886 1886 __u32 type; 1887 1887 1888 - rc = -EACCES; 1889 1888 xid = get_xid(); 1890 1889 1891 - if (!(fl->fl_flags & FL_FLOCK)) 1892 - return -ENOLCK; 1890 + if (!(fl->fl_flags & FL_FLOCK)) { 1891 + rc = -ENOLCK; 1892 + free_xid(xid); 1893 + return rc; 1894 + } 1893 1895 1894 1896 cfile = (struct cifsFileInfo *)file->private_data; 1895 1897 tcon = tlink_tcon(cfile->tlink); ··· 1910 1908 * if no lock or unlock then nothing to do since we do not 1911 1909 * know what it is 1912 1910 */ 1911 + rc = -EOPNOTSUPP; 1913 1912 free_xid(xid); 1914 - return -EOPNOTSUPP; 1913 + return rc; 1915 1914 } 1916 1915 1917 1916 rc = cifs_setlk(file, fl, type, wait_flag, posix_lck, lock, unlock,
+4 -2
fs/cifs/inode.c
··· 368 368 369 369 if (cfile->symlink_target) { 370 370 fattr.cf_symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL); 371 - if (!fattr.cf_symlink_target) 372 - return -ENOMEM; 371 + if (!fattr.cf_symlink_target) { 372 + rc = -ENOMEM; 373 + goto cifs_gfiunix_out; 374 + } 373 375 } 374 376 375 377 rc = CIFSSMBUnixQFileInfo(xid, tcon, cfile->fid.netfid, &find_data);
+1
fs/cifs/sess.c
··· 496 496 cifs_put_tcp_session(chan->server, 0); 497 497 } 498 498 499 + free_xid(xid); 499 500 return rc; 500 501 } 501 502
+2
fs/cifs/smb2inode.c
··· 655 655 smb2_rmdir(const unsigned int xid, struct cifs_tcon *tcon, const char *name, 656 656 struct cifs_sb_info *cifs_sb) 657 657 { 658 + drop_cached_dir_by_name(xid, tcon, name, cifs_sb); 658 659 return smb2_compound_op(xid, tcon, cifs_sb, name, DELETE, FILE_OPEN, 659 660 CREATE_NOT_FILE, ACL_NO_MODE, 660 661 NULL, SMB2_OP_RMDIR, NULL, NULL, NULL); ··· 699 698 { 700 699 struct cifsFileInfo *cfile; 701 700 701 + drop_cached_dir_by_name(xid, tcon, from_name, cifs_sb); 702 702 cifs_get_writable_path(tcon, from_name, FIND_WR_WITH_DELETE, &cfile); 703 703 704 704 return smb2_set_path_attr(xid, tcon, from_name, to_name,
+2 -1
fs/cifs/smb2ops.c
··· 530 530 p = buf; 531 531 532 532 spin_lock(&ses->iface_lock); 533 + ses->iface_count = 0; 533 534 /* 534 535 * Go through iface_list and do kref_put to remove 535 536 * any unused ifaces. ifaces in use will be removed ··· 652 651 kref_put(&iface->refcount, release_iface); 653 652 } else 654 653 list_add_tail(&info->iface_head, &ses->iface_list); 655 - spin_unlock(&ses->iface_lock); 656 654 657 655 ses->iface_count++; 656 + spin_unlock(&ses->iface_lock); 658 657 ses->iface_last_update = jiffies; 659 658 next_iface: 660 659 nb_iface++;
+8 -9
fs/cifs/smb2pdu.c
··· 1341 1341 static void 1342 1342 SMB2_sess_free_buffer(struct SMB2_sess_data *sess_data) 1343 1343 { 1344 - int i; 1344 + struct kvec *iov = sess_data->iov; 1345 1345 1346 - /* zero the session data before freeing, as it might contain sensitive info (keys, etc) */ 1347 - for (i = 0; i < 2; i++) 1348 - if (sess_data->iov[i].iov_base) 1349 - memzero_explicit(sess_data->iov[i].iov_base, sess_data->iov[i].iov_len); 1346 + /* iov[1] is already freed by caller */ 1347 + if (sess_data->buf0_type != CIFS_NO_BUFFER && iov[0].iov_base) 1348 + memzero_explicit(iov[0].iov_base, iov[0].iov_len); 1350 1349 1351 - free_rsp_buf(sess_data->buf0_type, sess_data->iov[0].iov_base); 1350 + free_rsp_buf(sess_data->buf0_type, iov[0].iov_base); 1352 1351 sess_data->buf0_type = CIFS_NO_BUFFER; 1353 1352 } 1354 1353 ··· 1530 1531 &blob_length, ses, server, 1531 1532 sess_data->nls_cp); 1532 1533 if (rc) 1533 - goto out_err; 1534 + goto out; 1534 1535 1535 1536 if (use_spnego) { 1536 1537 /* BB eventually need to add this */ ··· 1577 1578 } 1578 1579 1579 1580 out: 1580 - memzero_explicit(ntlmssp_blob, blob_length); 1581 + kfree_sensitive(ntlmssp_blob); 1581 1582 SMB2_sess_free_buffer(sess_data); 1582 1583 if (!rc) { 1583 1584 sess_data->result = 0; ··· 1661 1662 } 1662 1663 #endif 1663 1664 out: 1664 - memzero_explicit(ntlmssp_blob, blob_length); 1665 + kfree_sensitive(ntlmssp_blob); 1665 1666 SMB2_sess_free_buffer(sess_data); 1666 1667 kfree_sensitive(ses->ntlmssp); 1667 1668 ses->ntlmssp = NULL;
-16
fs/efivarfs/vars.c
··· 651 651 if (err) 652 652 return err; 653 653 654 - /* 655 - * Ensure that the available space hasn't shrunk below the safe level 656 - */ 657 - status = check_var_size(attributes, *size + ucs2_strsize(name, 1024)); 658 - if (status != EFI_SUCCESS) { 659 - if (status != EFI_UNSUPPORTED) { 660 - err = efi_status_to_err(status); 661 - goto out; 662 - } 663 - 664 - if (*size > 65536) { 665 - err = -ENOSPC; 666 - goto out; 667 - } 668 - } 669 - 670 654 status = efivar_set_variable_locked(name, vendor, attributes, *size, 671 655 data, false); 672 656 if (status != EFI_SUCCESS) {
+3 -1
fs/nfsd/nfsctl.c
··· 1458 1458 goto out_drc_error; 1459 1459 retval = nfsd_reply_cache_init(nn); 1460 1460 if (retval) 1461 - goto out_drc_error; 1461 + goto out_cache_error; 1462 1462 get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key)); 1463 1463 seqlock_init(&nn->writeverf_lock); 1464 1464 1465 1465 return 0; 1466 1466 1467 + out_cache_error: 1468 + nfsd4_leases_net_shutdown(nn); 1467 1469 out_drc_error: 1468 1470 nfsd_idmap_shutdown(net); 1469 1471 out_idmap_error:
+1 -1
fs/nfsd/nfsfh.c
··· 392 392 skip_pseudoflavor_check: 393 393 /* Finally, check access permissions. */ 394 394 error = nfsd_permission(rqstp, exp, dentry, access); 395 - trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error); 396 395 out: 396 + trace_nfsd_fh_verify_err(rqstp, fhp, type, access, error); 397 397 if (error == nfserr_stale) 398 398 nfsd_stats_fh_stale_inc(exp); 399 399 return error;
+11 -12
fs/ocfs2/namei.c
··· 232 232 handle_t *handle = NULL; 233 233 struct ocfs2_super *osb; 234 234 struct ocfs2_dinode *dirfe; 235 + struct ocfs2_dinode *fe = NULL; 235 236 struct buffer_head *new_fe_bh = NULL; 236 237 struct inode *inode = NULL; 237 238 struct ocfs2_alloc_context *inode_ac = NULL; ··· 383 382 goto leave; 384 383 } 385 384 385 + fe = (struct ocfs2_dinode *) new_fe_bh->b_data; 386 386 if (S_ISDIR(mode)) { 387 387 status = ocfs2_fill_new_dir(osb, handle, dir, inode, 388 388 new_fe_bh, data_ac, meta_ac); ··· 456 454 leave: 457 455 if (status < 0 && did_quota_inode) 458 456 dquot_free_inode(inode); 459 - if (handle) 457 + if (handle) { 458 + if (status < 0 && fe) 459 + ocfs2_set_links_count(fe, 0); 460 460 ocfs2_commit_trans(osb, handle); 461 + } 461 462 462 463 ocfs2_inode_unlock(dir, 1); 463 464 if (did_block_signals) ··· 637 632 return status; 638 633 } 639 634 640 - status = __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh, 635 + return __ocfs2_mknod_locked(dir, inode, dev, new_fe_bh, 641 636 parent_fe_bh, handle, inode_ac, 642 637 fe_blkno, suballoc_loc, suballoc_bit); 643 - if (status < 0) { 644 - u64 bg_blkno = ocfs2_which_suballoc_group(fe_blkno, suballoc_bit); 645 - int tmp = ocfs2_free_suballoc_bits(handle, inode_ac->ac_inode, 646 - inode_ac->ac_bh, suballoc_bit, bg_blkno, 1); 647 - if (tmp) 648 - mlog_errno(tmp); 649 - } 650 - 651 - return status; 652 638 } 653 639 654 640 static int ocfs2_mkdir(struct user_namespace *mnt_userns, ··· 2024 2028 ocfs2_clusters_to_bytes(osb->sb, 1)); 2025 2029 if (status < 0 && did_quota_inode) 2026 2030 dquot_free_inode(inode); 2027 - if (handle) 2031 + if (handle) { 2032 + if (status < 0 && fe) 2033 + ocfs2_set_links_count(fe, 0); 2028 2034 ocfs2_commit_trans(osb, handle); 2035 + } 2029 2036 2030 2037 ocfs2_inode_unlock(dir, 1); 2031 2038 if (did_block_signals)
+1 -1
fs/proc/task_mmu.c
··· 902 902 goto out_put_mm; 903 903 904 904 hold_task_mempolicy(priv); 905 - vma = mas_find(&mas, 0); 905 + vma = mas_find(&mas, ULONG_MAX); 906 906 907 907 if (unlikely(!vma)) 908 908 goto empty_set;
+1 -1
include/acpi/ghes.h
··· 71 71 void ghes_unregister_vendor_record_notifier(struct notifier_block *nb); 72 72 #endif 73 73 74 - int ghes_estatus_pool_init(int num_ghes); 74 + int ghes_estatus_pool_init(unsigned int num_ghes); 75 75 76 76 /* From drivers/edac/ghes_edac.c */ 77 77
+12 -6
include/asm-generic/vmlinux.lds.h
··· 162 162 #define PATCHABLE_DISCARDS *(__patchable_function_entries) 163 163 #endif 164 164 165 + #ifndef CONFIG_ARCH_SUPPORTS_CFI_CLANG 166 + /* 167 + * Simply points to ftrace_stub, but with the proper protocol. 168 + * Defined by the linker script in linux/vmlinux.lds.h 169 + */ 170 + #define FTRACE_STUB_HACK ftrace_stub_graph = ftrace_stub; 171 + #else 172 + #define FTRACE_STUB_HACK 173 + #endif 174 + 165 175 #ifdef CONFIG_FTRACE_MCOUNT_RECORD 166 176 /* 167 177 * The ftrace call sites are logged to a section whose name depends on the 168 178 * compiler option used. A given kernel image will only use one, AKA 169 179 * FTRACE_CALLSITE_SECTION. We capture all of them here to avoid header 170 180 * dependencies for FTRACE_CALLSITE_SECTION's definition. 171 - * 172 - * Need to also make ftrace_stub_graph point to ftrace_stub 173 - * so that the same stub location may have different protocols 174 - * and not mess up with C verifiers. 175 181 * 176 182 * ftrace_ops_list_func will be defined as arch_ftrace_ops_list_func 177 183 * as some archs will have a different prototype for that function ··· 188 182 KEEP(*(__mcount_loc)) \ 189 183 KEEP_PATCHABLE \ 190 184 __stop_mcount_loc = .; \ 191 - ftrace_stub_graph = ftrace_stub; \ 185 + FTRACE_STUB_HACK \ 192 186 ftrace_ops_list_func = arch_ftrace_ops_list_func; 193 187 #else 194 188 # ifdef CONFIG_FUNCTION_TRACER 195 - # define MCOUNT_REC() ftrace_stub_graph = ftrace_stub; \ 189 + # define MCOUNT_REC() FTRACE_STUB_HACK \ 196 190 ftrace_ops_list_func = arch_ftrace_ops_list_func; 197 191 # else 198 192 # define MCOUNT_REC()
+9
include/drm/gpu_scheduler.h
··· 32 32 33 33 #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000) 34 34 35 + /** 36 + * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining 37 + * 38 + * Setting this flag on a scheduler fence prevents pipelining of jobs depending 39 + * on this fence. In other words we always insert a full CPU round trip before 40 + * dependent jobs are pushed to the hw queue. 41 + */ 42 + #define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS 43 + 35 44 struct drm_gem_object; 36 45 37 46 struct drm_gpu_scheduler;
+13 -1
include/linux/bpf.h
··· 27 27 #include <linux/bpfptr.h> 28 28 #include <linux/btf.h> 29 29 #include <linux/rcupdate_trace.h> 30 + #include <linux/init.h> 30 31 31 32 struct bpf_verifier_env; 32 33 struct bpf_verifier_log; ··· 971 970 struct bpf_attach_target_info *tgt_info); 972 971 void bpf_trampoline_put(struct bpf_trampoline *tr); 973 972 int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs); 973 + int __init bpf_arch_init_dispatcher_early(void *ip); 974 + 974 975 #define BPF_DISPATCHER_INIT(_name) { \ 975 976 .mutex = __MUTEX_INITIALIZER(_name.mutex), \ 976 977 .func = &_name##_func, \ ··· 985 982 .lnode = LIST_HEAD_INIT(_name.ksym.lnode), \ 986 983 }, \ 987 984 } 985 + 986 + #define BPF_DISPATCHER_INIT_CALL(_name) \ 987 + static int __init _name##_init(void) \ 988 + { \ 989 + return bpf_arch_init_dispatcher_early(_name##_func); \ 990 + } \ 991 + early_initcall(_name##_init) 988 992 989 993 #ifdef CONFIG_X86_64 990 994 #define BPF_DISPATCHER_ATTRIBUTES __attribute__((patchable_function_entry(5))) ··· 1010 1000 } \ 1011 1001 EXPORT_SYMBOL(bpf_dispatcher_##name##_func); \ 1012 1002 struct bpf_dispatcher bpf_dispatcher_##name = \ 1013 - BPF_DISPATCHER_INIT(bpf_dispatcher_##name); 1003 + BPF_DISPATCHER_INIT(bpf_dispatcher_##name); \ 1004 + BPF_DISPATCHER_INIT_CALL(bpf_dispatcher_##name); 1005 + 1014 1006 #define DECLARE_BPF_DISPATCHER(name) \ 1015 1007 unsigned int bpf_dispatcher_##name##_func( \ 1016 1008 const void *ctx, \
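The bpf.h hunk above extends the dispatcher definition macro so that each `DEFINE_BPF_DISPATCHER()` instance also emits a per-instance init function registered with `early_initcall()`. A userspace analogue of that macro pattern, using a GCC/Clang constructor in place of the kernel initcall machinery (all names here are illustrative):

```c
#include <assert.h>

static int init_count;

/* Stand-in for bpf_arch_init_dispatcher_early(): record each registration. */
static int dispatcher_init_early(void *ip)
{
	(void)ip;		/* kernel: prepare the trampoline at 'ip' */
	init_count++;
	return 0;
}

/*
 * Define the per-instance object *and* a companion init hook that runs
 * before main() (the kernel uses early_initcall() for the same effect).
 */
#define DEFINE_DISPATCHER(name)						\
	static int name##_func;						\
	static __attribute__((constructor)) void name##_init(void)	\
	{								\
		dispatcher_init_early(&name##_func);			\
	}

DEFINE_DISPATCHER(xdp)
DEFINE_DISPATCHER(sk_msg)
```

The point of the kernel change is the same: every dispatcher gets its early setup automatically at definition time, so no call site can forget it.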
-3
include/linux/efi.h
··· 1085 1085 efi_status_t efivar_set_variable(efi_char16_t *name, efi_guid_t *vendor, 1086 1086 u32 attr, unsigned long data_size, void *data); 1087 1087 1088 - efi_status_t check_var_size(u32 attributes, unsigned long size); 1089 - efi_status_t check_var_size_nonblocking(u32 attributes, unsigned long size); 1090 - 1091 1088 #if IS_ENABLED(CONFIG_EFI_CAPSULE_LOADER) 1092 1089 extern bool efi_capsule_pending(int *reset_type); 1093 1090
+1 -1
include/linux/iommu.h
··· 455 455 extern bool iommu_default_passthrough(void); 456 456 extern struct iommu_resv_region * 457 457 iommu_alloc_resv_region(phys_addr_t start, size_t length, int prot, 458 - enum iommu_resv_type type); 458 + enum iommu_resv_type type, gfp_t gfp); 459 459 extern int iommu_get_group_resv_regions(struct iommu_group *group, 460 460 struct list_head *head); 461 461
+2
include/linux/kvm_host.h
··· 1390 1390 struct kvm_enable_cap *cap); 1391 1391 long kvm_arch_vm_ioctl(struct file *filp, 1392 1392 unsigned int ioctl, unsigned long arg); 1393 + long kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl, 1394 + unsigned long arg); 1393 1395 1394 1396 int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu); 1395 1397 int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu);
+2 -1
include/linux/net.h
··· 41 41 #define SOCK_NOSPACE 2 42 42 #define SOCK_PASSCRED 3 43 43 #define SOCK_PASSSEC 4 44 - #define SOCK_CUSTOM_SOCKOPT 5 44 + #define SOCK_SUPPORT_ZC 5 45 + #define SOCK_CUSTOM_SOCKOPT 6 45 46 46 47 #ifndef ARCH_HAS_SOCKET_TYPES 47 48 /**
+15 -4
include/linux/perf_event.h
··· 756 756 struct fasync_struct *fasync; 757 757 758 758 /* delayed work for NMIs and such */ 759 - int pending_wakeup; 760 - int pending_kill; 761 - int pending_disable; 759 + unsigned int pending_wakeup; 760 + unsigned int pending_kill; 761 + unsigned int pending_disable; 762 + unsigned int pending_sigtrap; 762 763 unsigned long pending_addr; /* SIGTRAP */ 763 - struct irq_work pending; 764 + struct irq_work pending_irq; 765 + struct callback_head pending_task; 766 + unsigned int pending_work; 764 767 765 768 atomic_t event_limit; 766 769 ··· 880 877 #endif 881 878 void *task_ctx_data; /* pmu specific data */ 882 879 struct rcu_head rcu_head; 880 + 881 + /* 882 + * Sum (event->pending_sigtrap + event->pending_work) 883 + * 884 + * The SIGTRAP is targeted at ctx->task, as such it won't do changing 885 + * that until the signal is delivered. 886 + */ 887 + local_t nr_pending; 883 888 }; 884 889 885 890 /*
+1
include/linux/utsname.h
··· 10 10 #include <uapi/linux/utsname.h> 11 11 12 12 enum uts_proc { 13 + UTS_PROC_ARCH, 13 14 UTS_PROC_OSTYPE, 14 15 UTS_PROC_OSRELEASE, 15 16 UTS_PROC_VERSION,
+1
include/media/i2c/ir-kbd-i2c.h
··· 35 35 IR_KBD_GET_KEY_PIXELVIEW, 36 36 IR_KBD_GET_KEY_HAUP, 37 37 IR_KBD_GET_KEY_KNC1, 38 + IR_KBD_GET_KEY_GENIATECH, 38 39 IR_KBD_GET_KEY_FUSIONHDTV, 39 40 IR_KBD_GET_KEY_HAUP_XVR, 40 41 IR_KBD_GET_KEY_AVERMEDIA_CARDBUS,
-15
include/media/media-device.h
··· 192 192 #define MEDIA_DEV_NOTIFY_POST_LINK_CH 1 193 193 194 194 /** 195 - * media_entity_enum_init - Initialise an entity enumeration 196 - * 197 - * @ent_enum: Entity enumeration to be initialised 198 - * @mdev: The related media device 199 - * 200 - * Return: zero on success or a negative error code. 201 - */ 202 - static inline __must_check int media_entity_enum_init( 203 - struct media_entity_enum *ent_enum, struct media_device *mdev) 204 - { 205 - return __media_entity_enum_init(ent_enum, 206 - mdev->entity_internal_idx_max + 1); 207 - } 208 - 209 - /** 210 195 * media_device_init() - Initializes a media device element 211 196 * 212 197 * @mdev: pointer to struct &media_device
+142 -27
include/media/media-entity.h
··· 17 17 #include <linux/fwnode.h> 18 18 #include <linux/list.h> 19 19 #include <linux/media.h> 20 + #include <linux/minmax.h> 20 21 #include <linux/types.h> 21 22 22 23 /* Enums used internally at the media controller to represent graphs */ ··· 100 99 /** 101 100 * struct media_pipeline - Media pipeline related information 102 101 * 103 - * @streaming_count: Streaming start count - streaming stop count 104 - * @graph: Media graph walk during pipeline start / stop 102 + * @allocated: Media pipeline allocated and freed by the framework 103 + * @mdev: The media device the pipeline is part of 104 + * @pads: List of media_pipeline_pad 105 + * @start_count: Media pipeline start - stop count 105 106 */ 106 107 struct media_pipeline { 107 - int streaming_count; 108 - struct media_graph graph; 108 + bool allocated; 109 + struct media_device *mdev; 110 + struct list_head pads; 111 + int start_count; 112 + }; 113 + 114 + /** 115 + * struct media_pipeline_pad - A pad part of a media pipeline 116 + * 117 + * @list: Entry in the media_pad pads list 118 + * @pipe: The media_pipeline that the pad is part of 119 + * @pad: The media pad 120 + * 121 + * This structure associates a pad with a media pipeline. Instances of 122 + * media_pipeline_pad are created by media_pipeline_start() when it builds the 123 + * pipeline, and stored in the &media_pad.pads list. media_pipeline_stop() 124 + * removes the entries from the list and deletes them. 125 + */ 126 + struct media_pipeline_pad { 127 + struct list_head list; 128 + struct media_pipeline *pipe; 129 + struct media_pad *pad; 109 130 }; 110 131 111 132 /** ··· 209 186 * @flags: Pad flags, as defined in 210 187 * :ref:`include/uapi/linux/media.h <media_header>` 211 188 * (seek for ``MEDIA_PAD_FL_*``) 189 + * @pipe: Pipeline this pad belongs to. Use media_entity_pipeline() to 190 + * access this field. 
212 191 */ 213 192 struct media_pad { 214 193 struct media_gobj graph_obj; /* must be first field in struct */ ··· 218 193 u16 index; 219 194 enum media_pad_signal_type sig_type; 220 195 unsigned long flags; 196 + 197 + /* 198 + * The fields below are private, and should only be accessed via 199 + * appropriate functions. 200 + */ 201 + struct media_pipeline *pipe; 221 202 }; 222 203 223 204 /** ··· 237 206 * @link_validate: Return whether a link is valid from the entity point of 238 207 * view. The media_pipeline_start() function 239 208 * validates all links by calling this operation. Optional. 209 + * @has_pad_interdep: Return whether two pads inside the entity are 210 + * interdependent. If two pads are interdependent they are 211 + * part of the same pipeline and enabling one of the pads 212 + * means that the other pad will become "locked" and 213 + * doesn't allow configuration changes. pad0 and pad1 are 214 + * guaranteed to not both be sinks or sources. 215 + * Optional: If the operation isn't implemented all pads 216 + * will be considered as interdependent. 240 217 * 241 218 * .. note:: 242 219 * ··· 258 219 const struct media_pad *local, 259 220 const struct media_pad *remote, u32 flags); 260 221 int (*link_validate)(struct media_link *link); 222 + bool (*has_pad_interdep)(struct media_entity *entity, unsigned int pad0, 223 + unsigned int pad1); 261 224 }; 262 225 263 226 /** ··· 310 269 * @links: List of data links. 311 270 * @ops: Entity operations. 312 271 * @use_count: Use count for the entity. 313 - * @pipe: Pipeline this entity belongs to. 314 272 * @info: Union with devnode information. Kept just for backward 315 273 * compatibility. 316 274 * @info.dev: Contains device major and minor info. 
··· 345 305 346 306 int use_count; 347 307 348 - struct media_pipeline *pipe; 349 - 350 308 union { 351 309 struct { 352 310 u32 major; ··· 352 314 } dev; 353 315 } info; 354 316 }; 317 + 318 + /** 319 + * media_entity_for_each_pad - Iterate on all pads in an entity 320 + * @entity: The entity the pads belong to 321 + * @iter: The iterator pad 322 + * 323 + * Iterate on all pads in a media entity. 324 + */ 325 + #define media_entity_for_each_pad(entity, iter) \ 326 + for (iter = (entity)->pads; \ 327 + iter < &(entity)->pads[(entity)->num_pads]; \ 328 + ++iter) 355 329 356 330 /** 357 331 * struct media_interface - A media interface graph object. ··· 476 426 } 477 427 478 428 /** 479 - * __media_entity_enum_init - Initialise an entity enumeration 429 + * media_entity_enum_init - Initialise an entity enumeration 480 430 * 481 431 * @ent_enum: Entity enumeration to be initialised 482 - * @idx_max: Maximum number of entities in the enumeration 432 + * @mdev: The related media device 483 433 * 484 - * Return: Returns zero on success or a negative error code. 434 + * Return: zero on success or a negative error code. 485 435 */ 486 - __must_check int __media_entity_enum_init(struct media_entity_enum *ent_enum, 487 - int idx_max); 436 + __must_check int media_entity_enum_init(struct media_entity_enum *ent_enum, 437 + struct media_device *mdev); 488 438 489 439 /** 490 440 * media_entity_enum_cleanup - Release resources of an entity enumeration ··· 974 924 } 975 925 976 926 /** 927 + * media_pad_is_streaming - Test if a pad is part of a streaming pipeline 928 + * @pad: The pad 929 + * 930 + * Return: True if the pad is part of a pipeline started with the 931 + * media_pipeline_start() function, false otherwise. 
932 + */ 933 + static inline bool media_pad_is_streaming(const struct media_pad *pad) 934 + { 935 + return pad->pipe; 936 + } 937 + 938 + /** 977 939 * media_entity_is_streaming - Test if an entity is part of a streaming pipeline 978 940 * @entity: The entity 979 941 * ··· 994 932 */ 995 933 static inline bool media_entity_is_streaming(const struct media_entity *entity) 996 934 { 997 - return entity->pipe; 935 + struct media_pad *pad; 936 + 937 + media_entity_for_each_pad(entity, pad) { 938 + if (media_pad_is_streaming(pad)) 939 + return true; 940 + } 941 + 942 + return false; 998 943 } 944 + 945 + /** 946 + * media_entity_pipeline - Get the media pipeline an entity is part of 947 + * @entity: The entity 948 + * 949 + * DEPRECATED: use media_pad_pipeline() instead. 950 + * 951 + * This function returns the media pipeline that an entity has been associated 952 + * with when constructing the pipeline with media_pipeline_start(). The pointer 953 + * remains valid until media_pipeline_stop() is called. 954 + * 955 + * In general, entities can be part of multiple pipelines, when carrying 956 + * multiple streams (either on different pads, or on the same pad using 957 + * multiplexed streams). This function is to be used only for entities that 958 + * do not support multiple pipelines. 959 + * 960 + * Return: The media_pipeline the entity is part of, or NULL if the entity is 961 + * not part of any pipeline. 962 + */ 963 + struct media_pipeline *media_entity_pipeline(struct media_entity *entity); 964 + 965 + /** 966 + * media_pad_pipeline - Get the media pipeline a pad is part of 967 + * @pad: The pad 968 + * 969 + * This function returns the media pipeline that a pad has been associated 970 + * with when constructing the pipeline with media_pipeline_start(). The pointer 971 + * remains valid until media_pipeline_stop() is called. 972 + * 973 + * Return: The media_pipeline the pad is part of, or NULL if the pad is 974 + * not part of any pipeline. 
975 + */ 976 + struct media_pipeline *media_pad_pipeline(struct media_pad *pad); 999 977 1000 978 /** 1001 979 * media_entity_get_fwnode_pad - Get pad number from fwnode ··· 1115 1013 1116 1014 /** 1117 1015 * media_pipeline_start - Mark a pipeline as streaming 1118 - * @entity: Starting entity 1119 - * @pipe: Media pipeline to be assigned to all entities in the pipeline. 1016 + * @pad: Starting pad 1017 + * @pipe: Media pipeline to be assigned to all pads in the pipeline. 1120 1018 * 1121 - * Mark all entities connected to a given entity through enabled links, either 1019 + * Mark all pads connected to a given pad through enabled links, either 1122 1020 * directly or indirectly, as streaming. The given pipeline object is assigned 1123 - * to every entity in the pipeline and stored in the media_entity pipe field. 1021 + * to every pad in the pipeline and stored in the media_pad pipe field. 1124 1022 * 1125 1023 * Calls to this function can be nested, in which case the same number of 1126 1024 * media_pipeline_stop() calls will be required to stop streaming. The 1127 1025 * pipeline pointer must be identical for all nested calls to 1128 1026 * media_pipeline_start(). 1129 1027 */ 1130 - __must_check int media_pipeline_start(struct media_entity *entity, 1028 + __must_check int media_pipeline_start(struct media_pad *pad, 1131 1029 struct media_pipeline *pipe); 1132 1030 /** 1133 1031 * __media_pipeline_start - Mark a pipeline as streaming 1134 1032 * 1135 - * @entity: Starting entity 1136 - * @pipe: Media pipeline to be assigned to all entities in the pipeline. 1033 + * @pad: Starting pad 1034 + * @pipe: Media pipeline to be assigned to all pads in the pipeline. 
1137 1035 * 1138 1036 * ..note:: This is the non-locking version of media_pipeline_start() 1139 1037 */ 1140 - __must_check int __media_pipeline_start(struct media_entity *entity, 1038 + __must_check int __media_pipeline_start(struct media_pad *pad, 1141 1039 struct media_pipeline *pipe); 1142 1040 1143 1041 /** 1144 1042 * media_pipeline_stop - Mark a pipeline as not streaming 1145 - * @entity: Starting entity 1043 + * @pad: Starting pad 1146 1044 * 1147 - * Mark all entities connected to a given entity through enabled links, either 1148 - * directly or indirectly, as not streaming. The media_entity pipe field is 1045 + * Mark all pads connected to a given pads through enabled links, either 1046 + * directly or indirectly, as not streaming. The media_pad pipe field is 1149 1047 * reset to %NULL. 1150 1048 * 1151 1049 * If multiple calls to media_pipeline_start() have been made, the same 1152 1050 * number of calls to this function are required to mark the pipeline as not 1153 1051 * streaming. 1154 1052 */ 1155 - void media_pipeline_stop(struct media_entity *entity); 1053 + void media_pipeline_stop(struct media_pad *pad); 1156 1054 1157 1055 /** 1158 1056 * __media_pipeline_stop - Mark a pipeline as not streaming 1159 1057 * 1160 - * @entity: Starting entity 1058 + * @pad: Starting pad 1161 1059 * 1162 1060 * .. note:: This is the non-locking version of media_pipeline_stop() 1163 1061 */ 1164 - void __media_pipeline_stop(struct media_entity *entity); 1062 + void __media_pipeline_stop(struct media_pad *pad); 1063 + 1064 + /** 1065 + * media_pipeline_alloc_start - Mark a pipeline as streaming 1066 + * @pad: Starting pad 1067 + * 1068 + * media_pipeline_alloc_start() is similar to media_pipeline_start() but instead 1069 + * of working on a given pipeline the function will use an existing pipeline if 1070 + * the pad is already part of a pipeline, or allocate a new pipeline. 
1071 + * 1072 + * Calls to media_pipeline_alloc_start() must be matched with 1073 + * media_pipeline_stop(). 1074 + */ 1075 + __must_check int media_pipeline_alloc_start(struct media_pad *pad); 1165 1076 1166 1077 /** 1167 1078 * media_devnode_create() - creates and initializes a device node interface
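The reworked streaming helpers in this hunk reduce to a plain walk over the entity's pad array. A minimal userspace sketch of that logic, using local stub types rather than the real <media/media-entity.h> definitions (only the iterator macro is copied verbatim from the hunk):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stubs mirroring (not reproducing) the kernel structures. */
struct media_pipeline { int start_count; };
struct media_pad { struct media_pipeline *pipe; };
struct media_entity { struct media_pad *pads; unsigned int num_pads; };

/* Copied from the hunk above: iterate over all pads of an entity. */
#define media_entity_for_each_pad(entity, iter) \
	for (iter = (entity)->pads; \
	     iter < &(entity)->pads[(entity)->num_pads]; \
	     ++iter)

/* Mirrors media_pad_is_streaming(): a pad streams iff a pipe is set. */
static int pad_is_streaming(const struct media_pad *pad)
{
	return pad->pipe != NULL;
}

/* Mirrors the reworked media_entity_is_streaming(): an entity streams
 * as soon as any one of its pads is part of a pipeline. */
static int entity_is_streaming(const struct media_entity *entity)
{
	struct media_pad *pad;

	media_entity_for_each_pad(entity, pad) {
		if (pad_is_streaming(pad))
			return 1;
	}
	return 0;
}
```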
+2 -1
include/media/v4l2-common.h
··· 175 175 * 176 176 * @sd: pointer to &struct v4l2_subdev 177 177 * @client: pointer to struct i2c_client 178 - * @devname: the name of the device; if NULL, the I²C device's name will be used 178 + * @devname: the name of the device; if NULL, the I²C device driver's name 179 + * will be used 179 180 * @postfix: sub-device specific string to put right after the I²C device name; 180 181 * may be NULL 181 182 */
+11 -17
include/media/v4l2-ctrls.h
··· 121 121 * struct v4l2_ctrl_type_ops - The control type operations that the driver 122 122 * has to provide. 123 123 * 124 - * @equal: return true if both values are equal. 125 - * @init: initialize the value. 124 + * @equal: return true if all ctrl->elems array elements are equal. 125 + * @init: initialize the value for array elements from from_idx to ctrl->elems. 126 126 * @log: log the value. 127 - * @validate: validate the value. Return 0 on success and a negative value 128 - * otherwise. 127 + * @validate: validate the value for ctrl->new_elems array elements. 128 + * Return 0 on success and a negative value otherwise. 129 129 */ 130 130 struct v4l2_ctrl_type_ops { 131 - bool (*equal)(const struct v4l2_ctrl *ctrl, u32 elems, 132 - union v4l2_ctrl_ptr ptr1, 133 - union v4l2_ctrl_ptr ptr2); 134 - void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx, u32 tot_elems, 131 + bool (*equal)(const struct v4l2_ctrl *ctrl, 132 + union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2); 133 + void (*init)(const struct v4l2_ctrl *ctrl, u32 from_idx, 135 134 union v4l2_ctrl_ptr ptr); 136 135 void (*log)(const struct v4l2_ctrl *ctrl); 137 - int (*validate)(const struct v4l2_ctrl *ctrl, u32 elems, 138 - union v4l2_ctrl_ptr ptr); 136 + int (*validate)(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr); 139 137 }; 140 138 141 139 /** ··· 1541 1543 * v4l2_ctrl_type_op_equal - Default v4l2_ctrl_type_ops equal callback. 1542 1544 * 1543 1545 * @ctrl: The v4l2_ctrl pointer. 1544 - * @elems: The number of elements to compare. 1545 1546 * @ptr1: A v4l2 control value. 1546 1547 * @ptr2: A v4l2 control value. 1547 1548 * 1548 1549 * Return: true if values are equal, otherwise false. 1549 1550 */ 1550 - bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, u32 elems, 1551 + bool v4l2_ctrl_type_op_equal(const struct v4l2_ctrl *ctrl, 1551 1552 union v4l2_ctrl_ptr ptr1, union v4l2_ctrl_ptr ptr2); 1552 1553 1553 1554 /** ··· 1554 1557 * 1555 1558 * @ctrl: The v4l2_ctrl pointer. 
1556 1559 * @from_idx: Starting element index. 1557 - * @elems: The number of elements to initialize. 1558 1560 * @ptr: The v4l2 control value. 1559 1561 * 1560 1562 * Return: void 1561 1563 */ 1562 1564 void v4l2_ctrl_type_op_init(const struct v4l2_ctrl *ctrl, u32 from_idx, 1563 - u32 elems, union v4l2_ctrl_ptr ptr); 1565 + union v4l2_ctrl_ptr ptr); 1564 1566 1565 1567 /** 1566 1568 * v4l2_ctrl_type_op_log - Default v4l2_ctrl_type_ops log callback. ··· 1574 1578 * v4l2_ctrl_type_op_validate - Default v4l2_ctrl_type_ops validate callback. 1575 1579 * 1576 1580 * @ctrl: The v4l2_ctrl pointer. 1577 - * @elems: The number of elements in the control. 1578 1581 * @ptr: The v4l2 control value. 1579 1582 * 1580 1583 * Return: 0 on success, a negative error code on failure. 1581 1584 */ 1582 - int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, u32 elems, 1583 - union v4l2_ctrl_ptr ptr); 1585 + int v4l2_ctrl_type_op_validate(const struct v4l2_ctrl *ctrl, union v4l2_ctrl_ptr ptr); 1584 1586 1585 1587 #endif
+102
include/media/v4l2-dev.h
··· 539 539 return test_bit(V4L2_FL_REGISTERED, &vdev->flags); 540 540 } 541 541 542 + #if defined(CONFIG_MEDIA_CONTROLLER) 543 + 544 + /** 545 + * video_device_pipeline_start - Mark a pipeline as streaming 546 + * @vdev: Starting video device 547 + * @pipe: Media pipeline to be assigned to all entities in the pipeline. 548 + * 549 + * Mark all entities connected to a given video device through enabled links, 550 + * either directly or indirectly, as streaming. The given pipeline object is 551 + * assigned to every pad in the pipeline and stored in the media_pad pipe 552 + * field. 553 + * 554 + * Calls to this function can be nested, in which case the same number of 555 + * video_device_pipeline_stop() calls will be required to stop streaming. The 556 + * pipeline pointer must be identical for all nested calls to 557 + * video_device_pipeline_start(). 558 + * 559 + * The video device must contain a single pad. 560 + * 561 + * This is a convenience wrapper around media_pipeline_start(). 562 + */ 563 + __must_check int video_device_pipeline_start(struct video_device *vdev, 564 + struct media_pipeline *pipe); 565 + 566 + /** 567 + * __video_device_pipeline_start - Mark a pipeline as streaming 568 + * @vdev: Starting video device 569 + * @pipe: Media pipeline to be assigned to all entities in the pipeline. 570 + * 571 + * ..note:: This is the non-locking version of video_device_pipeline_start() 572 + * 573 + * The video device must contain a single pad. 574 + * 575 + * This is a convenience wrapper around __media_pipeline_start(). 576 + */ 577 + __must_check int __video_device_pipeline_start(struct video_device *vdev, 578 + struct media_pipeline *pipe); 579 + 580 + /** 581 + * video_device_pipeline_stop - Mark a pipeline as not streaming 582 + * @vdev: Starting video device 583 + * 584 + * Mark all entities connected to a given video device through enabled links, 585 + * either directly or indirectly, as not streaming. 
The media_pad pipe field 586 + * is reset to %NULL. 587 + * 588 + * If multiple calls to media_pipeline_start() have been made, the same 589 + * number of calls to this function are required to mark the pipeline as not 590 + * streaming. 591 + * 592 + * The video device must contain a single pad. 593 + * 594 + * This is a convenience wrapper around media_pipeline_stop(). 595 + */ 596 + void video_device_pipeline_stop(struct video_device *vdev); 597 + 598 + /** 599 + * __video_device_pipeline_stop - Mark a pipeline as not streaming 600 + * @vdev: Starting video device 601 + * 602 + * .. note:: This is the non-locking version of media_pipeline_stop() 603 + * 604 + * The video device must contain a single pad. 605 + * 606 + * This is a convenience wrapper around __media_pipeline_stop(). 607 + */ 608 + void __video_device_pipeline_stop(struct video_device *vdev); 609 + 610 + /** 611 + * video_device_pipeline_alloc_start - Mark a pipeline as streaming 612 + * @vdev: Starting video device 613 + * 614 + * video_device_pipeline_alloc_start() is similar to video_device_pipeline_start() 615 + * but instead of working on a given pipeline the function will use an 616 + * existing pipeline if the video device is already part of a pipeline, or 617 + * allocate a new pipeline. 618 + * 619 + * Calls to video_device_pipeline_alloc_start() must be matched with 620 + * video_device_pipeline_stop(). 621 + */ 622 + __must_check int video_device_pipeline_alloc_start(struct video_device *vdev); 623 + 624 + /** 625 + * video_device_pipeline - Get the media pipeline a video device is part of 626 + * @vdev: The video device 627 + * 628 + * This function returns the media pipeline that a video device has been 629 + * associated with when constructing the pipeline with 630 + * video_device_pipeline_start(). The pointer remains valid until 631 + * video_device_pipeline_stop() is called. 
632 + * 633 + * Return: The media_pipeline the video device is part of, or NULL if the video 634 + * device is not part of any pipeline. 635 + * 636 + * The video device must contain a single pad. 637 + * 638 + * This is a convenience wrapper around media_entity_pipeline(). 639 + */ 640 + struct media_pipeline *video_device_pipeline(struct video_device *vdev); 641 + 642 + #endif /* CONFIG_MEDIA_CONTROLLER */ 643 + 542 644 #endif /* _V4L2_DEV_H */
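Every video_device_pipeline_*() wrapper above states the same precondition: the video device must contain a single pad. A hedged userspace sketch of that contract, with stub types and a placeholder error value rather than the kernel implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not the kernel types. */
struct media_pipeline { int start_count; };
struct media_pad { struct media_pipeline *pipe; };
struct video_device { struct media_pad *pads; unsigned int num_pads; };

/* Stand-in for the pad-based media_pipeline_start(). */
static int pad_pipeline_start(struct media_pad *pad,
			      struct media_pipeline *pipe)
{
	pad->pipe = pipe;
	pipe->start_count++;
	return 0;
}

/* Sketch of the wrapper contract: refuse multi-pad devices, then
 * forward to the pad-based API on the device's only pad. */
static int vdev_pipeline_start(struct video_device *vdev,
			       struct media_pipeline *pipe)
{
	if (vdev->num_pads != 1)
		return -1;	/* placeholder error, not a real errno */
	return pad_pipeline_start(&vdev->pads[0], pipe);
}
```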
-4
include/media/v4l2-fwnode.h
··· 45 45 */ 46 46 struct v4l2_fwnode_endpoint { 47 47 struct fwnode_endpoint base; 48 - /* 49 - * Fields below this line will be zeroed by 50 - * v4l2_fwnode_endpoint_parse() 51 - */ 52 48 enum v4l2_mbus_type bus_type; 53 49 struct { 54 50 struct v4l2_mbus_config_parallel parallel;
+11 -1
include/media/v4l2-subdev.h
··· 358 358 } bus; 359 359 }; 360 360 361 - #define V4L2_FRAME_DESC_ENTRY_MAX 4 361 + /* 362 + * If this number is too small, it should be dropped altogether and the 363 + * API switched to a dynamic number of frame descriptor entries. 364 + */ 365 + #define V4L2_FRAME_DESC_ENTRY_MAX 8 362 366 363 367 /** 364 368 * enum v4l2_mbus_frame_desc_type - media bus frame description type ··· 1050 1046 struct v4l2_subdev_state *state, 1051 1047 unsigned int pad) 1052 1048 { 1049 + if (WARN_ON(!state)) 1050 + return NULL; 1053 1051 if (WARN_ON(pad >= sd->entity.num_pads)) 1054 1052 pad = 0; 1055 1053 return &state->pads[pad].try_fmt; ··· 1070 1064 struct v4l2_subdev_state *state, 1071 1065 unsigned int pad) 1072 1066 { 1067 + if (WARN_ON(!state)) 1068 + return NULL; 1073 1069 if (WARN_ON(pad >= sd->entity.num_pads)) 1074 1070 pad = 0; 1075 1071 return &state->pads[pad].try_crop; ··· 1090 1082 struct v4l2_subdev_state *state, 1091 1083 unsigned int pad) 1092 1084 { 1085 + if (WARN_ON(!state)) 1086 + return NULL; 1093 1087 if (WARN_ON(pad >= sd->entity.num_pads)) 1094 1088 pad = 0; 1095 1089 return &state->pads[pad].try_compose;
+1 -1
include/net/sock.h
··· 2585 2585 2586 2586 static inline gfp_t gfp_memcg_charge(void) 2587 2587 { 2588 - return in_softirq() ? GFP_NOWAIT : GFP_KERNEL; 2588 + return in_softirq() ? GFP_ATOMIC : GFP_KERNEL; 2589 2589 } 2590 2590 2591 2591 static inline long sock_rcvtimeo(const struct sock *sk, bool noblock)
+66
include/trace/events/watchdog.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + #undef TRACE_SYSTEM 3 + #define TRACE_SYSTEM watchdog 4 + 5 + #if !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ) 6 + #define _TRACE_WATCHDOG_H 7 + 8 + #include <linux/watchdog.h> 9 + #include <linux/tracepoint.h> 10 + 11 + DECLARE_EVENT_CLASS(watchdog_template, 12 + 13 + TP_PROTO(struct watchdog_device *wdd, int err), 14 + 15 + TP_ARGS(wdd, err), 16 + 17 + TP_STRUCT__entry( 18 + __field(int, id) 19 + __field(int, err) 20 + ), 21 + 22 + TP_fast_assign( 23 + __entry->id = wdd->id; 24 + __entry->err = err; 25 + ), 26 + 27 + TP_printk("watchdog%d err=%d", __entry->id, __entry->err) 28 + ); 29 + 30 + DEFINE_EVENT(watchdog_template, watchdog_start, 31 + TP_PROTO(struct watchdog_device *wdd, int err), 32 + TP_ARGS(wdd, err)); 33 + 34 + DEFINE_EVENT(watchdog_template, watchdog_ping, 35 + TP_PROTO(struct watchdog_device *wdd, int err), 36 + TP_ARGS(wdd, err)); 37 + 38 + DEFINE_EVENT(watchdog_template, watchdog_stop, 39 + TP_PROTO(struct watchdog_device *wdd, int err), 40 + TP_ARGS(wdd, err)); 41 + 42 + TRACE_EVENT(watchdog_set_timeout, 43 + 44 + TP_PROTO(struct watchdog_device *wdd, unsigned int timeout, int err), 45 + 46 + TP_ARGS(wdd, timeout, err), 47 + 48 + TP_STRUCT__entry( 49 + __field(int, id) 50 + __field(unsigned int, timeout) 51 + __field(int, err) 52 + ), 53 + 54 + TP_fast_assign( 55 + __entry->id = wdd->id; 56 + __entry->timeout = timeout; 57 + __entry->err = err; 58 + ), 59 + 60 + TP_printk("watchdog%d timeout=%u err=%d", __entry->id, __entry->timeout, __entry->err) 61 + ); 62 + 63 + #endif /* !defined(_TRACE_WATCHDOG_H) || defined(TRACE_HEADER_MULTI_READ) */ 64 + 65 + /* This part must be outside protection */ 66 + #include <trace/define_trace.h>
+20 -16
include/uapi/drm/panfrost_drm.h
··· 235 235 #define PANFROSTDUMP_BUF_BO (PANFROSTDUMP_BUF_BOMAP + 1) 236 236 #define PANFROSTDUMP_BUF_TRAILER (PANFROSTDUMP_BUF_BO + 1) 237 237 238 + /* 239 + * This structure is the native endianness of the dumping machine, tools can 240 + * detect the endianness by looking at the value in 'magic'. 241 + */ 238 242 struct panfrost_dump_object_header { 239 - __le32 magic; 240 - __le32 type; 241 - __le32 file_size; 242 - __le32 file_offset; 243 + __u32 magic; 244 + __u32 type; 245 + __u32 file_size; 246 + __u32 file_offset; 243 247 244 248 union { 245 - struct pan_reg_hdr { 246 - __le64 jc; 247 - __le32 gpu_id; 248 - __le32 major; 249 - __le32 minor; 250 - __le64 nbos; 249 + struct { 250 + __u64 jc; 251 + __u32 gpu_id; 252 + __u32 major; 253 + __u32 minor; 254 + __u64 nbos; 251 255 } reghdr; 252 256 253 257 struct pan_bomap_hdr { 254 - __le32 valid; 255 - __le64 iova; 256 - __le32 data[2]; 258 + __u32 valid; 259 + __u64 iova; 260 + __u32 data[2]; 257 261 } bomap; 258 262 259 263 /* ··· 265 261 * with new fields and also keep it 512-byte aligned 266 262 */ 267 263 268 - __le32 sizer[496]; 264 + __u32 sizer[496]; 269 265 }; 270 266 }; 271 267 272 268 /* Registers object, an array of these */ 273 269 struct panfrost_dump_registers { 274 - __le32 reg; 275 - __le32 value; 270 + __u32 reg; 271 + __u32 value; 276 272 }; 277 273 278 274 #if defined(__cplusplus)
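Since the dump header is now written in the dumping machine's native endianness, a decoding tool has to probe the 32-bit 'magic' field in both byte orders, as the new comment describes. A small self-contained sketch of that check; EXPECTED_MAGIC is a placeholder, not the actual Panfrost dump magic value:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder constant standing in for the real dump magic. */
#define EXPECTED_MAGIC 0x504e4652u

/* Portable 32-bit byte swap. */
static uint32_t bswap32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0xff00) |
	       ((v << 8) & 0xff0000) | (v << 24);
}

/* Returns 1 if the dump is in the reader's byte order, -1 if every
 * field must be swapped, 0 if the blob is not a dump at all. */
static int dump_byte_order(uint32_t magic)
{
	if (magic == EXPECTED_MAGIC)
		return 1;
	if (bswap32(magic) == EXPECTED_MAGIC)
		return -1;
	return 0;
}
```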
+14
include/uapi/linux/cec-funcs.h
··· 1568 1568 } 1569 1569 } 1570 1570 1571 + static inline void cec_msg_set_audio_volume_level(struct cec_msg *msg, 1572 + __u8 audio_volume_level) 1573 + { 1574 + msg->len = 3; 1575 + msg->msg[1] = CEC_MSG_SET_AUDIO_VOLUME_LEVEL; 1576 + msg->msg[2] = audio_volume_level; 1577 + } 1578 + 1579 + static inline void cec_ops_set_audio_volume_level(const struct cec_msg *msg, 1580 + __u8 *audio_volume_level) 1581 + { 1582 + *audio_volume_level = msg->msg[2]; 1583 + } 1584 + 1571 1585 1572 1586 /* Audio Rate Control Feature */ 1573 1587 static inline void cec_msg_set_audio_rate(struct cec_msg *msg,
+2
include/uapi/linux/cec.h
··· 768 768 #define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_RATE 0x08 769 769 #define CEC_OP_FEAT_DEV_SINK_HAS_ARC_TX 0x04 770 770 #define CEC_OP_FEAT_DEV_SOURCE_HAS_ARC_RX 0x02 771 + #define CEC_OP_FEAT_DEV_HAS_SET_AUDIO_VOLUME_LEVEL 0x01 771 772 772 773 #define CEC_MSG_GIVE_FEATURES 0xa5 /* HDMI 2.0 */ 773 774 ··· 1060 1059 #define CEC_OP_AUD_FMT_ID_CEA861 0 1061 1060 #define CEC_OP_AUD_FMT_ID_CEA861_CXT 1 1062 1061 1062 + #define CEC_MSG_SET_AUDIO_VOLUME_LEVEL 0x73 1063 1063 1064 1064 /* Audio Rate Control Feature */ 1065 1065 #define CEC_MSG_SET_AUDIO_RATE 0x9a
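The two CEC hunks above add a Set Audio Volume Level message with a simple byte layout: the 0x73 opcode in msg[1] and the level in msg[2]. A standalone userspace mirror of the new helpers (local stub struct, not the uapi header):

```c
#include <assert.h>

/* Minimal stand-in for the uapi struct cec_msg. */
struct cec_msg { unsigned char len; unsigned char msg[16]; };

/* Opcode value from the cec.h hunk above. */
#define CEC_MSG_SET_AUDIO_VOLUME_LEVEL 0x73

/* Mirrors cec_msg_set_audio_volume_level(): 3-byte message, opcode in
 * msg[1], volume level in msg[2]. */
static void cec_msg_set_audio_volume_level(struct cec_msg *msg,
					   unsigned char level)
{
	msg->len = 3;
	msg->msg[1] = CEC_MSG_SET_AUDIO_VOLUME_LEVEL;
	msg->msg[2] = level;
}

/* Mirrors cec_ops_set_audio_volume_level(): extract the level. */
static void cec_ops_set_audio_volume_level(const struct cec_msg *msg,
					   unsigned char *level)
{
	*level = msg->msg[2];
}
```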
+61 -16
include/uapi/linux/rkisp1-config.h
··· 117 117 /* 118 118 * Defect Pixel Cluster Correction 119 119 */ 120 - #define RKISP1_CIF_ISP_DPCC_METHODS_MAX 3 120 + #define RKISP1_CIF_ISP_DPCC_METHODS_MAX 3 121 + 122 + #define RKISP1_CIF_ISP_DPCC_MODE_STAGE1_ENABLE (1U << 2) 123 + 124 + #define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_G_CENTER (1U << 0) 125 + #define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_INCL_RB_CENTER (1U << 1) 126 + #define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_G_3X3 (1U << 2) 127 + #define RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_STAGE1_RB_3X3 (1U << 3) 128 + 129 + /* 0-2 for sets 1-3 */ 130 + #define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_SET(n) ((n) << 0) 131 + #define RKISP1_CIF_ISP_DPCC_SET_USE_STAGE1_USE_FIX_SET (1U << 3) 132 + 133 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_GREEN_ENABLE (1U << 0) 134 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_GREEN_ENABLE (1U << 1) 135 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_GREEN_ENABLE (1U << 2) 136 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_GREEN_ENABLE (1U << 3) 137 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_GREEN_ENABLE (1U << 4) 138 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_RED_BLUE_ENABLE (1U << 8) 139 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_LC_RED_BLUE_ENABLE (1U << 9) 140 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RO_RED_BLUE_ENABLE (1U << 10) 141 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RND_RED_BLUE_ENABLE (1U << 11) 142 + #define RKISP1_CIF_ISP_DPCC_METHODS_SET_RG_RED_BLUE_ENABLE (1U << 12) 143 + 144 + #define RKISP1_CIF_ISP_DPCC_LINE_THRESH_G(v) ((v) << 0) 145 + #define RKISP1_CIF_ISP_DPCC_LINE_THRESH_RB(v) ((v) << 8) 146 + #define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_G(v) ((v) << 0) 147 + #define RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_RB(v) ((v) << 8) 148 + #define RKISP1_CIF_ISP_DPCC_PG_FAC_G(v) ((v) << 0) 149 + #define RKISP1_CIF_ISP_DPCC_PG_FAC_RB(v) ((v) << 8) 150 + #define RKISP1_CIF_ISP_DPCC_RND_THRESH_G(v) ((v) << 0) 151 + #define RKISP1_CIF_ISP_DPCC_RND_THRESH_RB(v) ((v) << 8) 152 + #define RKISP1_CIF_ISP_DPCC_RG_FAC_G(v) 
((v) << 0) 153 + #define RKISP1_CIF_ISP_DPCC_RG_FAC_RB(v) ((v) << 8) 154 + 155 + #define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_G(n, v) ((v) << ((n) * 4)) 156 + #define RKISP1_CIF_ISP_DPCC_RO_LIMITS_n_RB(n, v) ((v) << ((n) * 4 + 2)) 157 + 158 + #define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_G(n, v) ((v) << ((n) * 4)) 159 + #define RKISP1_CIF_ISP_DPCC_RND_OFFS_n_RB(n, v) ((v) << ((n) * 4 + 2)) 121 160 122 161 /* 123 162 * Denoising pre filter ··· 288 249 }; 289 250 290 251 /** 291 - * struct rkisp1_cif_isp_dpcc_methods_config - Methods Configuration used by DPCC 252 + * struct rkisp1_cif_isp_dpcc_methods_config - DPCC methods set configuration 292 253 * 293 - * Methods Configuration used by Defect Pixel Cluster Correction 254 + * This structure stores the configuration of one set of methods for the DPCC 255 + * algorithm. Multiple methods can be selected in each set (independently for 256 + * the Green and Red/Blue components) through the @method field, the result is 257 + * the logical AND of all enabled methods. The remaining fields set thresholds 258 + * and factors for each method. 
294 259 * 295 - * @method: Method enable bits 296 - * @line_thresh: Line threshold 297 - * @line_mad_fac: Line MAD factor 298 - * @pg_fac: Peak gradient factor 299 - * @rnd_thresh: Rank Neighbor Difference threshold 300 - * @rg_fac: Rank gradient factor 260 + * @method: Method enable bits (RKISP1_CIF_ISP_DPCC_METHODS_SET_*) 261 + * @line_thresh: Line threshold (RKISP1_CIF_ISP_DPCC_LINE_THRESH_*) 262 + * @line_mad_fac: Line Mean Absolute Difference factor (RKISP1_CIF_ISP_DPCC_LINE_MAD_FAC_*) 263 + * @pg_fac: Peak gradient factor (RKISP1_CIF_ISP_DPCC_PG_FAC_*) 264 + * @rnd_thresh: Rank Neighbor Difference threshold (RKISP1_CIF_ISP_DPCC_RND_THRESH_*) 265 + * @rg_fac: Rank gradient factor (RKISP1_CIF_ISP_DPCC_RG_FAC_*) 301 266 */ 302 267 struct rkisp1_cif_isp_dpcc_methods_config { 303 268 __u32 method; ··· 315 272 /** 316 273 * struct rkisp1_cif_isp_dpcc_config - Configuration used by DPCC 317 274 * 318 - * Configuration used by Defect Pixel Cluster Correction 275 + * Configuration used by Defect Pixel Cluster Correction. Three sets of methods 276 + * can be configured and selected through the @set_use field. The result is the 277 + * logical OR of all enabled sets. 319 278 * 320 - * @mode: dpcc output mode 321 - * @output_mode: whether use hard coded methods 322 - * @set_use: stage1 methods set 323 - * @methods: methods config 324 - * @ro_limits: rank order limits 325 - * @rnd_offs: differential rank offsets for rank neighbor difference 279 + * @mode: DPCC mode (RKISP1_CIF_ISP_DPCC_MODE_*) 280 + * @output_mode: Interpolation output mode (RKISP1_CIF_ISP_DPCC_OUTPUT_MODE_*) 281 + * @set_use: Methods sets selection (RKISP1_CIF_ISP_DPCC_SET_USE_*) 282 + * @methods: Methods sets configuration 283 + * @ro_limits: Rank order limits (RKISP1_CIF_ISP_DPCC_RO_LIMITS_*) 284 + * @rnd_offs: Differential rank offsets for rank neighbor difference (RKISP1_CIF_ISP_DPCC_RND_OFFS_*) 326 285 */ 327 286 struct rkisp1_cif_isp_dpcc_config { 328 287 __u32 mode;
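The new RKISP1_CIF_ISP_DPCC_* macros let userspace compose each DPCC register word field by field. A sketch using a few macros copied verbatim from the hunk above; the enabled method and the factor values are illustrative, not tuned ISP settings:

```c
#include <assert.h>

/* Copied from the hunk above. */
#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_GREEN_ENABLE		(1U << 0)
#define RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_RED_BLUE_ENABLE	(1U << 8)
#define RKISP1_CIF_ISP_DPCC_PG_FAC_G(v)				((v) << 0)
#define RKISP1_CIF_ISP_DPCC_PG_FAC_RB(v)			((v) << 8)

/* Enable the peak-gradient method for both the green and the red/blue
 * components of one methods set. */
static unsigned int dpcc_pg_method_word(void)
{
	return RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_GREEN_ENABLE |
	       RKISP1_CIF_ISP_DPCC_METHODS_SET_PG_RED_BLUE_ENABLE;
}

/* Pack the green and red/blue peak-gradient factors into one word. */
static unsigned int dpcc_pg_fac_word(unsigned int g, unsigned int rb)
{
	return RKISP1_CIF_ISP_DPCC_PG_FAC_G(g) |
	       RKISP1_CIF_ISP_DPCC_PG_FAC_RB(rb);
}
```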
+1 -1
init/Kconfig
··· 66 66 This shows whether a suitable Rust toolchain is available (found). 67 67 68 68 Please see Documentation/rust/quick-start.rst for instructions on how 69 - to satify the build requirements of Rust support. 69 + to satisfy the build requirements of Rust support. 70 70 71 71 In particular, the Makefile target 'rustavailable' is useful to check 72 72 why the Rust toolchain is not being detected.
+2 -14
io_uring/filetable.h
··· 5 5 #include <linux/file.h> 6 6 #include <linux/io_uring_types.h> 7 7 8 - /* 9 - * FFS_SCM is only available on 64-bit archs, for 32-bit we just define it as 0 10 - * and define IO_URING_SCM_ALL. For this case, we use SCM for all files as we 11 - * can't safely always dereference the file when the task has exited and ring 12 - * cleanup is done. If a file is tracked and part of SCM, then unix gc on 13 - * process exit may reap it before __io_sqe_files_unregister() is run. 14 - */ 15 8 #define FFS_NOWAIT 0x1UL 16 9 #define FFS_ISREG 0x2UL 17 - #if defined(CONFIG_64BIT) 18 - #define FFS_SCM 0x4UL 19 - #else 20 - #define IO_URING_SCM_ALL 21 - #define FFS_SCM 0x0UL 22 - #endif 23 - #define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG|FFS_SCM) 10 + #define FFS_MASK ~(FFS_NOWAIT|FFS_ISREG) 24 11 25 12 bool io_alloc_file_tables(struct io_file_table *table, unsigned nr_files); 26 13 void io_free_file_tables(struct io_file_table *table); ··· 25 38 26 39 static inline void io_file_bitmap_clear(struct io_file_table *table, int bit) 27 40 { 41 + WARN_ON_ONCE(!test_bit(bit, table->bitmap)); 28 42 __clear_bit(bit, table->bitmap); 29 43 table->alloc_hint = bit; 30 44 }
+1 -1
io_uring/io-wq.c
··· 1164 1164 wqe = kzalloc_node(sizeof(struct io_wqe), GFP_KERNEL, alloc_node); 1165 1165 if (!wqe) 1166 1166 goto err; 1167 + wq->wqes[node] = wqe; 1167 1168 if (!alloc_cpumask_var(&wqe->cpu_mask, GFP_KERNEL)) 1168 1169 goto err; 1169 1170 cpumask_copy(wqe->cpu_mask, cpumask_of_node(node)); 1170 - wq->wqes[node] = wqe; 1171 1171 wqe->node = alloc_node; 1172 1172 wqe->acct[IO_WQ_ACCT_BOUND].max_workers = bounded; 1173 1173 wqe->acct[IO_WQ_ACCT_UNBOUND].max_workers =
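The one-line move in the io-wq.c hunk above matters for the error path: by publishing the node's wqe pointer in wq->wqes before the cpumask allocation can fail, the shared error label can find the half-built entry and free it. A userspace sketch of the pattern with illustrative names, not the io-wq code itself:

```c
#include <assert.h>
#include <stdlib.h>

#define NODES 4

/* Illustrative stand-ins for the io-wq structures. */
struct wqe { int node; };
struct wq { struct wqe *wqes[NODES]; };

/* Record each partially built object in the container *before* any
 * later step can fail, so one shared cleanup loop frees everything
 * that was created. fail_at simulates a later allocation failure on
 * that iteration; pass -1 for full success. */
static struct wq *wq_create(int fail_at)
{
	struct wq *wq = calloc(1, sizeof(*wq));
	int i;

	if (!wq)
		return NULL;
	for (i = 0; i < NODES; i++) {
		struct wqe *wqe = calloc(1, sizeof(*wqe));

		if (!wqe)
			goto err;
		wq->wqes[i] = wqe;	/* published before fallible steps */
		if (i == fail_at)	/* simulated later failure */
			goto err;
		wqe->node = i;
	}
	return wq;
err:
	for (i = 0; i < NODES; i++)
		free(wq->wqes[i]);	/* finds the half-built entry too */
	free(wq);
	return NULL;
}
```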
+7 -17
io_uring/io_uring.c
··· 1587 1587 res |= FFS_ISREG; 1588 1588 if (__io_file_supports_nowait(file, mode)) 1589 1589 res |= FFS_NOWAIT; 1590 - if (io_file_need_scm(file)) 1591 - res |= FFS_SCM; 1592 1590 return res; 1593 1591 } 1594 1592 ··· 1858 1860 /* mask in overlapping REQ_F and FFS bits */ 1859 1861 req->flags |= (file_ptr << REQ_F_SUPPORT_NOWAIT_BIT); 1860 1862 io_req_set_rsrc_node(req, ctx, 0); 1861 - WARN_ON_ONCE(file && !test_bit(fd, ctx->file_table.bitmap)); 1862 1863 out: 1863 1864 io_ring_submit_unlock(ctx, issue_flags); 1864 1865 return file; ··· 2560 2563 2561 2564 static void io_req_caches_free(struct io_ring_ctx *ctx) 2562 2565 { 2563 - struct io_submit_state *state = &ctx->submit_state; 2564 2566 int nr = 0; 2565 2567 2566 2568 mutex_lock(&ctx->uring_lock); 2567 - io_flush_cached_locked_reqs(ctx, state); 2569 + io_flush_cached_locked_reqs(ctx, &ctx->submit_state); 2568 2570 2569 2571 while (!io_req_cache_empty(ctx)) { 2570 - struct io_wq_work_node *node; 2571 - struct io_kiocb *req; 2572 + struct io_kiocb *req = io_alloc_req(ctx); 2572 2573 2573 - node = wq_stack_extract(&state->free_list); 2574 - req = container_of(node, struct io_kiocb, comp_list); 2575 2574 kmem_cache_free(req_cachep, req); 2576 2575 nr++; 2577 2576 } ··· 2804 2811 io_poll_remove_all(ctx, NULL, true); 2805 2812 mutex_unlock(&ctx->uring_lock); 2806 2813 2807 - /* failed during ring init, it couldn't have issued any requests */ 2808 - if (ctx->rings) { 2814 + /* 2815 + * If we failed setting up the ctx, we might not have any rings 2816 + * and therefore did not submit any requests 2817 + */ 2818 + if (ctx->rings) 2809 2819 io_kill_timeouts(ctx, NULL, true); 2810 - /* if we failed setting up the ctx, we might not have any rings */ 2811 - io_iopoll_try_reap_events(ctx); 2812 - /* drop cached put refs after potentially doing completions */ 2813 - if (current->io_uring) 2814 - io_uring_drop_tctx_refs(current); 2815 - } 2816 2820 2817 2821 INIT_WORK(&ctx->exit_work, io_ring_exit_work); 2818 2822 /*
+3
io_uring/msg_ring.c
··· 95 95 96 96 msg->src_fd = array_index_nospec(msg->src_fd, ctx->nr_user_files); 97 97 file_ptr = io_fixed_file_slot(&ctx->file_table, msg->src_fd)->file_ptr; 98 + if (!file_ptr) 99 + goto out_unlock; 100 + 98 101 src_file = (struct file *) (file_ptr & FFS_MASK); 99 102 get_file(src_file); 100 103
+4
io_uring/net.c
··· 1056 1056 sock = sock_from_file(req->file); 1057 1057 if (unlikely(!sock)) 1058 1058 return -ENOTSOCK; 1059 + if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags)) 1060 + return -EOPNOTSUPP; 1059 1061 1060 1062 msg.msg_name = NULL; 1061 1063 msg.msg_control = NULL; ··· 1153 1151 sock = sock_from_file(req->file); 1154 1152 if (unlikely(!sock)) 1155 1153 return -ENOTSOCK; 1154 + if (!test_bit(SOCK_SUPPORT_ZC, &sock->flags)) 1155 + return -EOPNOTSUPP; 1156 1156 1157 1157 if (req_has_async_data(req)) { 1158 1158 kmsg = req->async_data;
+2 -5
io_uring/rsrc.c
··· 757 757 758 758 void __io_sqe_files_unregister(struct io_ring_ctx *ctx) 759 759 { 760 - #if !defined(IO_URING_SCM_ALL) 761 760 int i; 762 761 763 762 for (i = 0; i < ctx->nr_user_files; i++) { 764 763 struct file *file = io_file_from_index(&ctx->file_table, i); 765 764 766 - if (!file) 767 - continue; 768 - if (io_fixed_file_slot(&ctx->file_table, i)->file_ptr & FFS_SCM) 765 + /* skip scm accounted files, they'll be freed by ->ring_sock */ 766 + if (!file || io_file_need_scm(file)) 769 767 continue; 770 768 io_file_bitmap_clear(&ctx->file_table, i); 771 769 fput(file); 772 770 } 773 - #endif 774 771 775 772 #if defined(CONFIG_UNIX) 776 773 if (ctx->ring_sock) {
-4
io_uring/rsrc.h
··· 82 82 #if defined(CONFIG_UNIX) 83 83 static inline bool io_file_need_scm(struct file *filp) 84 84 { 85 - #if defined(IO_URING_SCM_ALL) 86 - return true; 87 - #else 88 85 return !!unix_get_socket(filp); 89 - #endif 90 86 } 91 87 #else 92 88 static inline bool io_file_need_scm(struct file *filp)
-2
io_uring/rw.c
··· 242 242 { 243 243 struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw); 244 244 245 - WARN_ON(!in_task()); 246 - 247 245 if (rw->kiocb.ki_flags & IOCB_WRITE) { 248 246 kiocb_end_write(req); 249 247 fsnotify_modify(req->file);
+5
kernel/bpf/btf.c
··· 4436 4436 return -EINVAL; 4437 4437 } 4438 4438 4439 + if (btf_type_is_resolve_source_only(ret_type)) { 4440 + btf_verifier_log_type(env, t, "Invalid return type"); 4441 + return -EINVAL; 4442 + } 4443 + 4439 4444 if (btf_type_needs_resolve(ret_type) && 4440 4445 !env_type_is_resolved(env, ret_type_id)) { 4441 4446 err = btf_resolve(env, ret_type, ret_type_id);
+6
kernel/bpf/dispatcher.c
··· 4 4 #include <linux/hash.h> 5 5 #include <linux/bpf.h> 6 6 #include <linux/filter.h> 7 + #include <linux/init.h> 7 8 8 9 /* The BPF dispatcher is a multiway branch code generator. The 9 10 * dispatcher is a mechanism to avoid the performance penalty of an ··· 87 86 } 88 87 89 88 int __weak arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_funcs) 89 + { 90 + return -ENOTSUPP; 91 + } 92 + 93 + int __weak __init bpf_arch_init_dispatcher_early(void *ip) 90 94 { 91 95 return -ENOTSUPP; 92 96 }
+16 -2
kernel/bpf/memalloc.c
··· 423 423 /* No progs are using this bpf_mem_cache, but htab_map_free() called 424 424 * bpf_mem_cache_free() for all remaining elements and they can be in 425 425 * free_by_rcu or in waiting_for_gp lists, so drain those lists now. 426 + * 427 + * Except for waiting_for_gp list, there are no concurrent operations 428 + * on these lists, so it is safe to use __llist_del_all(). 426 429 */ 427 430 llist_for_each_safe(llnode, t, __llist_del_all(&c->free_by_rcu)) 428 431 free_one(c, llnode); 429 432 llist_for_each_safe(llnode, t, llist_del_all(&c->waiting_for_gp)) 430 433 free_one(c, llnode); 431 - llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist)) 434 + llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist)) 432 435 free_one(c, llnode); 433 - llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra)) 436 + llist_for_each_safe(llnode, t, __llist_del_all(&c->free_llist_extra)) 434 437 free_one(c, llnode); 435 438 } 436 439 ··· 501 498 rcu_in_progress = 0; 502 499 for_each_possible_cpu(cpu) { 503 500 c = per_cpu_ptr(ma->cache, cpu); 501 + /* 502 + * refill_work may be unfinished for PREEMPT_RT kernel 503 + * in which irq work is invoked in a per-CPU RT thread. 504 + * It is also possible for kernel with 505 + * arch_irq_work_has_interrupt() being false and irq 506 + * work is invoked in timer interrupt. So waiting for 507 + * the completion of irq work to ease the handling of 508 + * concurrency. 509 + */ 510 + irq_work_sync(&c->refill_work); 504 511 drain_mem_cache(c); 505 512 rcu_in_progress += atomic_read(&c->call_rcu_in_progress); 506 513 } ··· 525 512 cc = per_cpu_ptr(ma->caches, cpu); 526 513 for (i = 0; i < NUM_CACHES; i++) { 527 514 c = &cc->cache[i]; 515 + irq_work_sync(&c->refill_work); 528 516 drain_mem_cache(c); 529 517 rcu_in_progress += atomic_read(&c->call_rcu_in_progress); 530 518 }
+1
kernel/bpf/verifier.c
··· 6946 6946 __mark_reg_not_init(env, &callee->regs[BPF_REG_5]); 6947 6947 6948 6948 callee->in_callback_fn = true; 6949 + callee->callback_ret_range = tnum_range(0, 1); 6949 6950 return 0; 6950 6951 } 6951 6952
+113 -38
kernel/events/core.c
··· 54 54 #include <linux/highmem.h> 55 55 #include <linux/pgtable.h> 56 56 #include <linux/buildid.h> 57 + #include <linux/task_work.h> 57 58 58 59 #include "internal.h" 59 60 ··· 2277 2276 event->pmu->del(event, 0); 2278 2277 event->oncpu = -1; 2279 2278 2280 - if (READ_ONCE(event->pending_disable) >= 0) { 2281 - WRITE_ONCE(event->pending_disable, -1); 2279 + if (event->pending_disable) { 2280 + event->pending_disable = 0; 2282 2281 perf_cgroup_event_disable(event, ctx); 2283 2282 state = PERF_EVENT_STATE_OFF; 2284 2283 } 2284 + 2285 + if (event->pending_sigtrap) { 2286 + bool dec = true; 2287 + 2288 + event->pending_sigtrap = 0; 2289 + if (state != PERF_EVENT_STATE_OFF && 2290 + !event->pending_work) { 2291 + event->pending_work = 1; 2292 + dec = false; 2293 + task_work_add(current, &event->pending_task, TWA_RESUME); 2294 + } 2295 + if (dec) 2296 + local_dec(&event->ctx->nr_pending); 2297 + } 2298 + 2285 2299 perf_event_set_state(event, state); 2286 2300 2287 2301 if (!is_software_event(event)) ··· 2448 2432 * hold the top-level event's child_mutex, so any descendant that 2449 2433 * goes to exit will block in perf_event_exit_event(). 2450 2434 * 2451 - * When called from perf_pending_event it's OK because event->ctx 2435 + * When called from perf_pending_irq it's OK because event->ctx 2452 2436 * is the current context on this CPU and preemption is disabled, 2453 2437 * hence we can't get into perf_event_task_sched_out for this context. 
2454 2438 */ ··· 2487 2471 2488 2472 void perf_event_disable_inatomic(struct perf_event *event) 2489 2473 { 2490 - WRITE_ONCE(event->pending_disable, smp_processor_id()); 2491 - /* can fail, see perf_pending_event_disable() */ 2492 - irq_work_queue(&event->pending); 2474 + event->pending_disable = 1; 2475 + irq_work_queue(&event->pending_irq); 2493 2476 } 2494 2477 2495 2478 #define MAX_INTERRUPTS (~0ULL) ··· 3443 3428 raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING); 3444 3429 if (context_equiv(ctx, next_ctx)) { 3445 3430 3431 + perf_pmu_disable(pmu); 3432 + 3433 + /* PMIs are disabled; ctx->nr_pending is stable. */ 3434 + if (local_read(&ctx->nr_pending) || 3435 + local_read(&next_ctx->nr_pending)) { 3436 + /* 3437 + * Must not swap out ctx when there's pending 3438 + * events that rely on the ctx->task relation. 3439 + */ 3440 + raw_spin_unlock(&next_ctx->lock); 3441 + rcu_read_unlock(); 3442 + goto inside_switch; 3443 + } 3444 + 3446 3445 WRITE_ONCE(ctx->task, next); 3447 3446 WRITE_ONCE(next_ctx->task, task); 3448 - 3449 - perf_pmu_disable(pmu); 3450 3447 3451 3448 if (cpuctx->sched_cb_usage && pmu->sched_task) 3452 3449 pmu->sched_task(ctx, false); ··· 3500 3473 raw_spin_lock(&ctx->lock); 3501 3474 perf_pmu_disable(pmu); 3502 3475 3476 + inside_switch: 3503 3477 if (cpuctx->sched_cb_usage && pmu->sched_task) 3504 3478 pmu->sched_task(ctx, false); 3505 3479 task_ctx_sched_out(cpuctx, ctx, EVENT_ALL); ··· 4967 4939 4968 4940 static void _free_event(struct perf_event *event) 4969 4941 { 4970 - irq_work_sync(&event->pending); 4942 + irq_work_sync(&event->pending_irq); 4971 4943 4972 4944 unaccount_event(event); 4973 4945 ··· 6467 6439 return; 6468 6440 6469 6441 /* 6470 - * perf_pending_event() can race with the task exiting. 6442 + * Both perf_pending_task() and perf_pending_irq() can race with the 6443 + * task exiting. 
6471 6444 */ 6472 6445 if (current->flags & PF_EXITING) 6473 6446 return; ··· 6477 6448 event->attr.type, event->attr.sig_data); 6478 6449 } 6479 6450 6480 - static void perf_pending_event_disable(struct perf_event *event) 6451 + /* 6452 + * Deliver the pending work in-event-context or follow the context. 6453 + */ 6454 + static void __perf_pending_irq(struct perf_event *event) 6481 6455 { 6482 - int cpu = READ_ONCE(event->pending_disable); 6456 + int cpu = READ_ONCE(event->oncpu); 6483 6457 6458 + /* 6459 + * If the event isn't running; we done. event_sched_out() will have 6460 + * taken care of things. 6461 + */ 6484 6462 if (cpu < 0) 6485 6463 return; 6486 6464 6465 + /* 6466 + * Yay, we hit home and are in the context of the event. 6467 + */ 6487 6468 if (cpu == smp_processor_id()) { 6488 - WRITE_ONCE(event->pending_disable, -1); 6489 - 6490 - if (event->attr.sigtrap) { 6469 + if (event->pending_sigtrap) { 6470 + event->pending_sigtrap = 0; 6491 6471 perf_sigtrap(event); 6492 - atomic_set_release(&event->event_limit, 1); /* rearm event */ 6493 - return; 6472 + local_dec(&event->ctx->nr_pending); 6494 6473 } 6495 - 6496 - perf_event_disable_local(event); 6474 + if (event->pending_disable) { 6475 + event->pending_disable = 0; 6476 + perf_event_disable_local(event); 6477 + } 6497 6478 return; 6498 6479 } 6499 6480 ··· 6523 6484 * irq_work_queue(); // FAILS 6524 6485 * 6525 6486 * irq_work_run() 6526 - * perf_pending_event() 6487 + * perf_pending_irq() 6527 6488 * 6528 6489 * But the event runs on CPU-B and wants disabling there. 
6529 6490 */ 6530 - irq_work_queue_on(&event->pending, cpu); 6491 + irq_work_queue_on(&event->pending_irq, cpu); 6531 6492 } 6532 6493 6533 - static void perf_pending_event(struct irq_work *entry) 6494 + static void perf_pending_irq(struct irq_work *entry) 6534 6495 { 6535 - struct perf_event *event = container_of(entry, struct perf_event, pending); 6496 + struct perf_event *event = container_of(entry, struct perf_event, pending_irq); 6536 6497 int rctx; 6537 6498 6538 - rctx = perf_swevent_get_recursion_context(); 6539 6499 /* 6540 6500 * If we 'fail' here, that's OK, it means recursion is already disabled 6541 6501 * and we won't recurse 'further'. 6542 6502 */ 6503 + rctx = perf_swevent_get_recursion_context(); 6543 6504 6544 - perf_pending_event_disable(event); 6545 - 6505 + /* 6506 + * The wakeup isn't bound to the context of the event -- it can happen 6507 + * irrespective of where the event is. 6508 + */ 6546 6509 if (event->pending_wakeup) { 6547 6510 event->pending_wakeup = 0; 6548 6511 perf_event_wakeup(event); 6549 6512 } 6550 6513 6514 + __perf_pending_irq(event); 6515 + 6551 6516 if (rctx >= 0) 6552 6517 perf_swevent_put_recursion_context(rctx); 6518 + } 6519 + 6520 + static void perf_pending_task(struct callback_head *head) 6521 + { 6522 + struct perf_event *event = container_of(head, struct perf_event, pending_task); 6523 + int rctx; 6524 + 6525 + /* 6526 + * If we 'fail' here, that's OK, it means recursion is already disabled 6527 + * and we won't recurse 'further'. 
6528 + */ 6529 + preempt_disable_notrace(); 6530 + rctx = perf_swevent_get_recursion_context(); 6531 + 6532 + if (event->pending_work) { 6533 + event->pending_work = 0; 6534 + perf_sigtrap(event); 6535 + local_dec(&event->ctx->nr_pending); 6536 + } 6537 + 6538 + if (rctx >= 0) 6539 + perf_swevent_put_recursion_context(rctx); 6540 + preempt_enable_notrace(); 6553 6541 } 6554 6542 6555 6543 #ifdef CONFIG_GUEST_PERF_EVENTS ··· 9278 9212 */ 9279 9213 9280 9214 static int __perf_event_overflow(struct perf_event *event, 9281 - int throttle, struct perf_sample_data *data, 9282 - struct pt_regs *regs) 9215 + int throttle, struct perf_sample_data *data, 9216 + struct pt_regs *regs) 9283 9217 { 9284 9218 int events = atomic_read(&event->event_limit); 9285 9219 int ret = 0; ··· 9302 9236 if (events && atomic_dec_and_test(&event->event_limit)) { 9303 9237 ret = 1; 9304 9238 event->pending_kill = POLL_HUP; 9305 - event->pending_addr = data->addr; 9306 - 9307 9239 perf_event_disable_inatomic(event); 9240 + } 9241 + 9242 + if (event->attr.sigtrap) { 9243 + /* 9244 + * Should not be able to return to user space without processing 9245 + * pending_sigtrap (kernel events can overflow multiple times). 
9246 + */ 9247 + WARN_ON_ONCE(event->pending_sigtrap && event->attr.exclude_kernel); 9248 + if (!event->pending_sigtrap) { 9249 + event->pending_sigtrap = 1; 9250 + local_inc(&event->ctx->nr_pending); 9251 + } 9252 + event->pending_addr = data->addr; 9253 + irq_work_queue(&event->pending_irq); 9308 9254 } 9309 9255 9310 9256 READ_ONCE(event->overflow_handler)(event, data, regs); 9311 9257 9312 9258 if (*perf_event_fasync(event) && event->pending_kill) { 9313 9259 event->pending_wakeup = 1; 9314 - irq_work_queue(&event->pending); 9260 + irq_work_queue(&event->pending_irq); 9315 9261 } 9316 9262 9317 9263 return ret; 9318 9264 } 9319 9265 9320 9266 int perf_event_overflow(struct perf_event *event, 9321 - struct perf_sample_data *data, 9322 - struct pt_regs *regs) 9267 + struct perf_sample_data *data, 9268 + struct pt_regs *regs) 9323 9269 { 9324 9270 return __perf_event_overflow(event, 1, data, regs); 9325 9271 } ··· 11648 11570 11649 11571 11650 11572 init_waitqueue_head(&event->waitq); 11651 - event->pending_disable = -1; 11652 - init_irq_work(&event->pending, perf_pending_event); 11573 + init_irq_work(&event->pending_irq, perf_pending_irq); 11574 + init_task_work(&event->pending_task, perf_pending_task); 11653 11575 11654 11576 mutex_init(&event->mmap_mutex); 11655 11577 raw_spin_lock_init(&event->addr_filters.lock); ··· 11670 11592 11671 11593 if (parent_event) 11672 11594 event->event_caps = parent_event->event_caps; 11673 - 11674 - if (event->attr.sigtrap) 11675 - atomic_set(&event->event_limit, 1); 11676 11595 11677 11596 if (task) { 11678 11597 event->attach_state = PERF_ATTACH_TASK;
+1 -1
kernel/events/ring_buffer.c
··· 22 22 atomic_set(&handle->rb->poll, EPOLLIN); 23 23 24 24 handle->event->pending_wakeup = 1; 25 - irq_work_queue(&handle->event->pending); 25 + irq_work_queue(&handle->event->pending_irq); 26 26 } 27 27 28 28 /*
+16 -2
kernel/gcov/gcc_4_7.c
··· 30 30 31 31 #define GCOV_TAG_FUNCTION_LENGTH 3 32 32 33 + /* Since GCC 12.1 sizes are in BYTES and not in WORDS (4B). */ 34 + #if (__GNUC__ >= 12) 35 + #define GCOV_UNIT_SIZE 4 36 + #else 37 + #define GCOV_UNIT_SIZE 1 38 + #endif 39 + 33 40 static struct gcov_info *gcov_info_head; 34 41 35 42 /** ··· 390 383 pos += store_gcov_u32(buffer, pos, info->version); 391 384 pos += store_gcov_u32(buffer, pos, info->stamp); 392 385 386 + #if (__GNUC__ >= 12) 387 + /* Use zero as checksum of the compilation unit. */ 388 + pos += store_gcov_u32(buffer, pos, 0); 389 + #endif 390 + 393 391 for (fi_idx = 0; fi_idx < info->n_functions; fi_idx++) { 394 392 fi_ptr = info->functions[fi_idx]; 395 393 396 394 /* Function record. */ 397 395 pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION); 398 - pos += store_gcov_u32(buffer, pos, GCOV_TAG_FUNCTION_LENGTH); 396 + pos += store_gcov_u32(buffer, pos, 397 + GCOV_TAG_FUNCTION_LENGTH * GCOV_UNIT_SIZE); 399 398 pos += store_gcov_u32(buffer, pos, fi_ptr->ident); 400 399 pos += store_gcov_u32(buffer, pos, fi_ptr->lineno_checksum); 401 400 pos += store_gcov_u32(buffer, pos, fi_ptr->cfg_checksum); ··· 415 402 /* Counter record. */ 416 403 pos += store_gcov_u32(buffer, pos, 417 404 GCOV_TAG_FOR_COUNTER(ct_idx)); 418 - pos += store_gcov_u32(buffer, pos, ci_ptr->num * 2); 405 + pos += store_gcov_u32(buffer, pos, 406 + ci_ptr->num * 2 * GCOV_UNIT_SIZE); 419 407 420 408 for (cv_idx = 0; cv_idx < ci_ptr->num; cv_idx++) { 421 409 pos += store_gcov_u64(buffer, pos,
+6 -4
kernel/rcu/tree.c
··· 1403 1403 // where caller does not hold the root rcu_node structure's lock. 1404 1404 static void rcu_poll_gp_seq_start_unlocked(unsigned long *snap) 1405 1405 { 1406 + unsigned long flags; 1406 1407 struct rcu_node *rnp = rcu_get_root(); 1407 1408 1408 1409 if (rcu_init_invoked()) { 1409 1410 lockdep_assert_irqs_enabled(); 1410 - raw_spin_lock_irq_rcu_node(rnp); 1411 + raw_spin_lock_irqsave_rcu_node(rnp, flags); 1411 1412 } 1412 1413 rcu_poll_gp_seq_start(snap); 1413 1414 if (rcu_init_invoked()) 1414 - raw_spin_unlock_irq_rcu_node(rnp); 1415 + raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1415 1416 } 1416 1417 1417 1418 // Make the polled API aware of the end of a grace period, but where 1418 1419 // caller does not hold the root rcu_node structure's lock. 1419 1420 static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap) 1420 1421 { 1422 + unsigned long flags; 1421 1423 struct rcu_node *rnp = rcu_get_root(); 1422 1424 1423 1425 if (rcu_init_invoked()) { 1424 1426 lockdep_assert_irqs_enabled(); 1425 - raw_spin_lock_irq_rcu_node(rnp); 1427 + raw_spin_lock_irqsave_rcu_node(rnp, flags); 1426 1428 } 1427 1429 rcu_poll_gp_seq_end(snap); 1428 1430 if (rcu_init_invoked()) 1429 - raw_spin_unlock_irq_rcu_node(rnp); 1431 + raw_spin_unlock_irqrestore_rcu_node(rnp, flags); 1430 1432 } 1431 1433 1432 1434 /*
+12 -12
kernel/sched/core.c
··· 4823 4823 4824 4824 #ifdef CONFIG_SMP 4825 4825 4826 - static void do_balance_callbacks(struct rq *rq, struct callback_head *head) 4826 + static void do_balance_callbacks(struct rq *rq, struct balance_callback *head) 4827 4827 { 4828 4828 void (*func)(struct rq *rq); 4829 - struct callback_head *next; 4829 + struct balance_callback *next; 4830 4830 4831 4831 lockdep_assert_rq_held(rq); 4832 4832 ··· 4853 4853 * This abuse is tolerated because it places all the unlikely/odd cases behind 4854 4854 * a single test, namely: rq->balance_callback == NULL. 4855 4855 */ 4856 - struct callback_head balance_push_callback = { 4856 + struct balance_callback balance_push_callback = { 4857 4857 .next = NULL, 4858 - .func = (void (*)(struct callback_head *))balance_push, 4858 + .func = balance_push, 4859 4859 }; 4860 4860 4861 - static inline struct callback_head * 4861 + static inline struct balance_callback * 4862 4862 __splice_balance_callbacks(struct rq *rq, bool split) 4863 4863 { 4864 - struct callback_head *head = rq->balance_callback; 4864 + struct balance_callback *head = rq->balance_callback; 4865 4865 4866 4866 if (likely(!head)) 4867 4867 return NULL; ··· 4883 4883 return head; 4884 4884 } 4885 4885 4886 - static inline struct callback_head *splice_balance_callbacks(struct rq *rq) 4886 + static inline struct balance_callback *splice_balance_callbacks(struct rq *rq) 4887 4887 { 4888 4888 return __splice_balance_callbacks(rq, true); 4889 4889 } ··· 4893 4893 do_balance_callbacks(rq, __splice_balance_callbacks(rq, false)); 4894 4894 } 4895 4895 4896 - static inline void balance_callbacks(struct rq *rq, struct callback_head *head) 4896 + static inline void balance_callbacks(struct rq *rq, struct balance_callback *head) 4897 4897 { 4898 4898 unsigned long flags; 4899 4899 ··· 4910 4910 { 4911 4911 } 4912 4912 4913 - static inline struct callback_head *splice_balance_callbacks(struct rq *rq) 4913 + static inline struct balance_callback *splice_balance_callbacks(struct 
rq *rq) 4914 4914 { 4915 4915 return NULL; 4916 4916 } 4917 4917 4918 - static inline void balance_callbacks(struct rq *rq, struct callback_head *head) 4918 + static inline void balance_callbacks(struct rq *rq, struct balance_callback *head) 4919 4919 { 4920 4920 } 4921 4921 ··· 6188 6188 preempt_enable(); 6189 6189 } 6190 6190 6191 - static DEFINE_PER_CPU(struct callback_head, core_balance_head); 6191 + static DEFINE_PER_CPU(struct balance_callback, core_balance_head); 6192 6192 6193 6193 static void queue_core_balance(struct rq *rq) 6194 6194 { ··· 7419 7419 int oldpolicy = -1, policy = attr->sched_policy; 7420 7420 int retval, oldprio, newprio, queued, running; 7421 7421 const struct sched_class *prev_class; 7422 - struct callback_head *head; 7422 + struct balance_callback *head; 7423 7423 struct rq_flags rf; 7424 7424 int reset_on_fork; 7425 7425 int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+2 -2
kernel/sched/deadline.c
··· 644 644 return rq->online && dl_task(prev); 645 645 } 646 646 647 - static DEFINE_PER_CPU(struct callback_head, dl_push_head); 648 - static DEFINE_PER_CPU(struct callback_head, dl_pull_head); 647 + static DEFINE_PER_CPU(struct balance_callback, dl_push_head); 648 + static DEFINE_PER_CPU(struct balance_callback, dl_pull_head); 649 649 650 650 static void push_dl_tasks(struct rq *); 651 651 static void pull_dl_task(struct rq *);
+2 -2
kernel/sched/rt.c
··· 410 410 return !plist_head_empty(&rq->rt.pushable_tasks); 411 411 } 412 412 413 - static DEFINE_PER_CPU(struct callback_head, rt_push_head); 414 - static DEFINE_PER_CPU(struct callback_head, rt_pull_head); 413 + static DEFINE_PER_CPU(struct balance_callback, rt_push_head); 414 + static DEFINE_PER_CPU(struct balance_callback, rt_pull_head); 415 415 416 416 static void push_rt_tasks(struct rq *); 417 417 static void pull_rt_task(struct rq *);
+19 -13
kernel/sched/sched.h
··· 938 938 DECLARE_STATIC_KEY_FALSE(sched_uclamp_used); 939 939 #endif /* CONFIG_UCLAMP_TASK */ 940 940 941 + struct rq; 942 + struct balance_callback { 943 + struct balance_callback *next; 944 + void (*func)(struct rq *rq); 945 + }; 946 + 941 947 /* 942 948 * This is the main, per-CPU runqueue data structure. 943 949 * ··· 1042 1036 unsigned long cpu_capacity; 1043 1037 unsigned long cpu_capacity_orig; 1044 1038 1045 - struct callback_head *balance_callback; 1039 + struct balance_callback *balance_callback; 1046 1040 1047 1041 unsigned char nohz_idle_balance; 1048 1042 unsigned char idle_balance; ··· 1188 1182 #endif 1189 1183 } 1190 1184 1185 + DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); 1186 + 1187 + #define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) 1188 + #define this_rq() this_cpu_ptr(&runqueues) 1189 + #define task_rq(p) cpu_rq(task_cpu(p)) 1190 + #define cpu_curr(cpu) (cpu_rq(cpu)->curr) 1191 + #define raw_rq() raw_cpu_ptr(&runqueues) 1192 + 1191 1193 struct sched_group; 1192 1194 #ifdef CONFIG_SCHED_CORE 1193 1195 static inline struct cpumask *sched_group_span(struct sched_group *sg); ··· 1283 1269 return true; 1284 1270 1285 1271 for_each_cpu_and(cpu, sched_group_span(group), p->cpus_ptr) { 1286 - if (sched_core_cookie_match(rq, p)) 1272 + if (sched_core_cookie_match(cpu_rq(cpu), p)) 1287 1273 return true; 1288 1274 } 1289 1275 return false; ··· 1397 1383 #else 1398 1384 static inline void update_idle_core(struct rq *rq) { } 1399 1385 #endif 1400 - 1401 - DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); 1402 - 1403 - #define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) 1404 - #define this_rq() this_cpu_ptr(&runqueues) 1405 - #define task_rq(p) cpu_rq(task_cpu(p)) 1406 - #define cpu_curr(cpu) (cpu_rq(cpu)->curr) 1407 - #define raw_rq() raw_cpu_ptr(&runqueues) 1408 1386 1409 1387 #ifdef CONFIG_FAIR_GROUP_SCHED 1410 1388 static inline struct task_struct *task_of(struct sched_entity *se) ··· 1550 1544 #endif 1551 1545 }; 1552 1546 1553 - extern 
struct callback_head balance_push_callback; 1547 + extern struct balance_callback balance_push_callback; 1554 1548 1555 1549 /* 1556 1550 * Lockdep annotation that avoids accidental unlocks; it's like a ··· 1730 1724 1731 1725 static inline void 1732 1726 queue_balance_callback(struct rq *rq, 1733 - struct callback_head *head, 1727 + struct balance_callback *head, 1734 1728 void (*func)(struct rq *rq)) 1735 1729 { 1736 1730 lockdep_assert_rq_held(rq); ··· 1743 1737 if (unlikely(head->next || rq->balance_callback == &balance_push_callback)) 1744 1738 return; 1745 1739 1746 - head->func = (void (*)(struct callback_head *))func; 1740 + head->func = func; 1747 1741 head->next = rq->balance_callback; 1748 1742 rq->balance_callback = head; 1749 1743 }
+39 -43
kernel/trace/blktrace.c
··· 346 346 mutex_unlock(&blk_probe_mutex); 347 347 } 348 348 349 + static int blk_trace_start(struct blk_trace *bt) 350 + { 351 + if (bt->trace_state != Blktrace_setup && 352 + bt->trace_state != Blktrace_stopped) 353 + return -EINVAL; 354 + 355 + blktrace_seq++; 356 + smp_mb(); 357 + bt->trace_state = Blktrace_running; 358 + raw_spin_lock_irq(&running_trace_lock); 359 + list_add(&bt->running_list, &running_trace_list); 360 + raw_spin_unlock_irq(&running_trace_lock); 361 + trace_note_time(bt); 362 + 363 + return 0; 364 + } 365 + 366 + static int blk_trace_stop(struct blk_trace *bt) 367 + { 368 + if (bt->trace_state != Blktrace_running) 369 + return -EINVAL; 370 + 371 + bt->trace_state = Blktrace_stopped; 372 + raw_spin_lock_irq(&running_trace_lock); 373 + list_del_init(&bt->running_list); 374 + raw_spin_unlock_irq(&running_trace_lock); 375 + relay_flush(bt->rchan); 376 + 377 + return 0; 378 + } 379 + 349 380 static void blk_trace_cleanup(struct request_queue *q, struct blk_trace *bt) 350 381 { 382 + blk_trace_stop(bt); 351 383 synchronize_rcu(); 352 384 blk_trace_free(q, bt); 353 385 put_probe_ref(); ··· 394 362 if (!bt) 395 363 return -EINVAL; 396 364 397 - if (bt->trace_state != Blktrace_running) 398 - blk_trace_cleanup(q, bt); 365 + blk_trace_cleanup(q, bt); 399 366 400 367 return 0; 401 368 } ··· 689 658 690 659 static int __blk_trace_startstop(struct request_queue *q, int start) 691 660 { 692 - int ret; 693 661 struct blk_trace *bt; 694 662 695 663 bt = rcu_dereference_protected(q->blk_trace, ··· 696 666 if (bt == NULL) 697 667 return -EINVAL; 698 668 699 - /* 700 - * For starting a trace, we can transition from a setup or stopped 701 - * trace. 
For stopping a trace, the state must be running 702 - */ 703 - ret = -EINVAL; 704 - if (start) { 705 - if (bt->trace_state == Blktrace_setup || 706 - bt->trace_state == Blktrace_stopped) { 707 - blktrace_seq++; 708 - smp_mb(); 709 - bt->trace_state = Blktrace_running; 710 - raw_spin_lock_irq(&running_trace_lock); 711 - list_add(&bt->running_list, &running_trace_list); 712 - raw_spin_unlock_irq(&running_trace_lock); 713 - 714 - trace_note_time(bt); 715 - ret = 0; 716 - } 717 - } else { 718 - if (bt->trace_state == Blktrace_running) { 719 - bt->trace_state = Blktrace_stopped; 720 - raw_spin_lock_irq(&running_trace_lock); 721 - list_del_init(&bt->running_list); 722 - raw_spin_unlock_irq(&running_trace_lock); 723 - relay_flush(bt->rchan); 724 - ret = 0; 725 - } 726 - } 727 - 728 - return ret; 669 + if (start) 670 + return blk_trace_start(bt); 671 + else 672 + return blk_trace_stop(bt); 729 673 } 730 674 731 675 int blk_trace_startstop(struct request_queue *q, int start) ··· 776 772 void blk_trace_shutdown(struct request_queue *q) 777 773 { 778 774 if (rcu_dereference_protected(q->blk_trace, 779 - lockdep_is_held(&q->debugfs_mutex))) { 780 - __blk_trace_startstop(q, 0); 775 + lockdep_is_held(&q->debugfs_mutex))) 781 776 __blk_trace_remove(q); 782 - } 783 777 } 784 778 785 779 #ifdef CONFIG_BLK_CGROUP ··· 1616 1614 if (bt == NULL) 1617 1615 return -EINVAL; 1618 1616 1619 - if (bt->trace_state == Blktrace_running) { 1620 - bt->trace_state = Blktrace_stopped; 1621 - raw_spin_lock_irq(&running_trace_lock); 1622 - list_del_init(&bt->running_list); 1623 - raw_spin_unlock_irq(&running_trace_lock); 1624 - relay_flush(bt->rchan); 1625 - } 1617 + blk_trace_stop(bt); 1626 1618 1627 1619 put_probe_ref(); 1628 1620 synchronize_rcu();
+2
kernel/trace/bpf_trace.c
··· 687 687 688 688 perf_sample_data_init(sd, 0, 0); 689 689 sd->raw = &raw; 690 + sd->sample_flags |= PERF_SAMPLE_RAW; 690 691 691 692 err = __bpf_perf_event_output(regs, map, flags, sd); 692 693 ··· 746 745 perf_fetch_caller_regs(regs); 747 746 perf_sample_data_init(sd, 0, 0); 748 747 sd->raw = &raw; 748 + sd->sample_flags |= PERF_SAMPLE_RAW; 749 749 750 750 ret = __bpf_perf_event_output(regs, map, flags, sd); 751 751 out:
+1
kernel/utsname_sysctl.c
··· 74 74 static DEFINE_CTL_TABLE_POLL(hostname_poll); 75 75 static DEFINE_CTL_TABLE_POLL(domainname_poll); 76 76 77 + // Note: update 'enum uts_proc' to match any changes to this table 77 78 static struct ctl_table uts_kern_table[] = { 78 79 { 79 80 .procname = "arch",
+2 -2
lib/kunit/string-stream.c
··· 56 56 frag_container = alloc_string_stream_fragment(stream->test, 57 57 len, 58 58 stream->gfp); 59 - if (!frag_container) 60 - return -ENOMEM; 59 + if (IS_ERR(frag_container)) 60 + return PTR_ERR(frag_container); 61 61 62 62 len = vsnprintf(frag_container->fragment, len, fmt, args); 63 63 spin_lock(&stream->lock);
+1 -1
lib/kunit/test.c
··· 265 265 kunit_set_failure(test); 266 266 267 267 stream = alloc_string_stream(test, GFP_KERNEL); 268 - if (!stream) { 268 + if (IS_ERR(stream)) { 269 269 WARN(true, 270 270 "Could not allocate stream to print failed assertion in %s:%d\n", 271 271 loc->file,
+10 -1
mm/huge_memory.c
··· 2455 2455 page_tail); 2456 2456 page_tail->mapping = head->mapping; 2457 2457 page_tail->index = head->index + tail; 2458 - page_tail->private = 0; 2458 + 2459 + /* 2460 + * page->private should not be set in tail pages with the exception 2461 + * of swap cache pages that store the swp_entry_t in tail pages. 2462 + * Fix up and warn once if private is unexpectedly set. 2463 + */ 2464 + if (!folio_test_swapcache(page_folio(head))) { 2465 + VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, head); 2466 + page_tail->private = 0; 2467 + } 2459 2468 2460 2469 /* Page flags must be visible before we make the page non-compound. */ 2461 2470 smp_wmb();
+28 -9
mm/hugetlb.c
··· 1014 1014 VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma); 1015 1015 /* 1016 1016 * Clear vm_private_data 1017 + * - For shared mappings this is a per-vma semaphore that may be 1018 + * allocated in a subsequent call to hugetlb_vm_op_open. 1019 + * Before clearing, make sure pointer is not associated with vma 1020 + * as this will leak the structure. This is the case when called 1021 + * via clear_vma_resv_huge_pages() and hugetlb_vm_op_open has already 1022 + * been called to allocate a new structure. 1017 1023 * - For MAP_PRIVATE mappings, this is the reserve map which does 1018 1024 * not apply to children. Faults generated by the children are 1019 1025 * not guaranteed to succeed, even if read-only. 1020 - * - For shared mappings this is a per-vma semaphore that may be 1021 - * allocated in a subsequent call to hugetlb_vm_op_open. 1022 1026 */ 1023 - vma->vm_private_data = (void *)0; 1024 - if (!(vma->vm_flags & VM_MAYSHARE)) 1025 - return; 1027 + if (vma->vm_flags & VM_MAYSHARE) { 1028 + struct hugetlb_vma_lock *vma_lock = vma->vm_private_data; 1029 + 1030 + if (vma_lock && vma_lock->vma != vma) 1031 + vma->vm_private_data = NULL; 1032 + } else 1033 + vma->vm_private_data = NULL; 1026 1034 } 1027 1035 1028 1036 /* ··· 2932 2924 page = alloc_buddy_huge_page_with_mpol(h, vma, addr); 2933 2925 if (!page) 2934 2926 goto out_uncharge_cgroup; 2927 + spin_lock_irq(&hugetlb_lock); 2935 2928 if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) { 2936 2929 SetHPageRestoreReserve(page); 2937 2930 h->resv_huge_pages--; 2938 2931 } 2939 - spin_lock_irq(&hugetlb_lock); 2940 2932 list_add(&page->lru, &h->hugepage_activelist); 2941 2933 set_page_refcounted(page); 2942 2934 /* Fall through */ ··· 4609 4601 struct resv_map *resv = vma_resv_map(vma); 4610 4602 4611 4603 /* 4604 + * HPAGE_RESV_OWNER indicates a private mapping. 4612 4605 * This new VMA should share its siblings reservation map if present. 
4613 4606 * The VMA will only ever have a valid reservation map pointer where 4614 4607 * it is being copied for another still existing VMA. As that VMA ··· 4624 4615 4625 4616 /* 4626 4617 * vma_lock structure for sharable mappings is vma specific. 4627 - * Clear old pointer (if copied via vm_area_dup) and create new. 4618 + * Clear old pointer (if copied via vm_area_dup) and allocate 4619 + * new structure. Before clearing, make sure vma_lock is not 4620 + * for this vma. 4628 4621 */ 4629 4622 if (vma->vm_flags & VM_MAYSHARE) { 4630 - vma->vm_private_data = NULL; 4631 - hugetlb_vma_lock_alloc(vma); 4623 + struct hugetlb_vma_lock *vma_lock = vma->vm_private_data; 4624 + 4625 + if (vma_lock) { 4626 + if (vma_lock->vma != vma) { 4627 + vma->vm_private_data = NULL; 4628 + hugetlb_vma_lock_alloc(vma); 4629 + } else 4630 + pr_warn("HugeTLB: vma_lock already exists in %s.\n", __func__); 4631 + } else 4632 + hugetlb_vma_lock_alloc(vma); 4632 4633 } 4633 4634 } 4634 4635
+11 -6
mm/mempolicy.c
··· 787 787 static int mbind_range(struct mm_struct *mm, unsigned long start, 788 788 unsigned long end, struct mempolicy *new_pol) 789 789 { 790 - MA_STATE(mas, &mm->mm_mt, start - 1, start - 1); 790 + MA_STATE(mas, &mm->mm_mt, start, start); 791 791 struct vm_area_struct *prev; 792 792 struct vm_area_struct *vma; 793 793 int err = 0; 794 794 pgoff_t pgoff; 795 795 796 - prev = mas_find_rev(&mas, 0); 797 - if (prev && (start < prev->vm_end)) 798 - vma = prev; 799 - else 800 - vma = mas_next(&mas, end - 1); 796 + prev = mas_prev(&mas, 0); 797 + if (unlikely(!prev)) 798 + mas_set(&mas, start); 799 + 800 + vma = mas_find(&mas, end - 1); 801 + if (WARN_ON(!vma)) 802 + return 0; 803 + 804 + if (start > vma->vm_start) 805 + prev = vma; 801 806 802 807 for (; vma; vma = mas_next(&mas, end - 1)) { 803 808 unsigned long vmstart = max(start, vma->vm_start);
+10 -10
mm/mmap.c
··· 618 618 struct vm_area_struct *expand) 619 619 { 620 620 struct mm_struct *mm = vma->vm_mm; 621 - struct vm_area_struct *next_next, *next = find_vma(mm, vma->vm_end); 621 + struct vm_area_struct *next_next = NULL; /* uninit var warning */ 622 + struct vm_area_struct *next = find_vma(mm, vma->vm_end); 622 623 struct vm_area_struct *orig_vma = vma; 623 624 struct address_space *mapping = NULL; 624 625 struct rb_root_cached *root = NULL; ··· 2626 2625 if (error) 2627 2626 goto unmap_and_free_vma; 2628 2627 2629 - /* Can addr have changed?? 2630 - * 2631 - * Answer: Yes, several device drivers can do it in their 2632 - * f_op->mmap method. -DaveM 2628 + /* 2629 + * Expansion is handled above, merging is handled below. 2630 + * Drivers should not alter the address of the VMA. 2633 2631 */ 2634 - WARN_ON_ONCE(addr != vma->vm_start); 2635 - 2636 - addr = vma->vm_start; 2632 + if (WARN_ON((addr != vma->vm_start))) { 2633 + error = -EINVAL; 2634 + goto close_and_free_vma; 2635 + } 2637 2636 mas_reset(&mas); 2638 2637 2639 2638 /* ··· 2655 2654 vm_area_free(vma); 2656 2655 vma = merge; 2657 2656 /* Update vm_flags to pick up the change. */ 2658 - addr = vma->vm_start; 2659 2657 vm_flags = vma->vm_flags; 2660 2658 goto unmap_writable; 2661 2659 } ··· 2681 2681 if (mas_preallocate(&mas, vma, GFP_KERNEL)) { 2682 2682 error = -ENOMEM; 2683 2683 if (file) 2684 - goto unmap_and_free_vma; 2684 + goto close_and_free_vma; 2685 2685 else 2686 2686 goto free_vma; 2687 2687 }
+11 -7
mm/page_alloc.c
··· 5784 5784 size_t size) 5785 5785 { 5786 5786 if (addr) { 5787 - unsigned long alloc_end = addr + (PAGE_SIZE << order); 5788 - unsigned long used = addr + PAGE_ALIGN(size); 5787 + unsigned long nr = DIV_ROUND_UP(size, PAGE_SIZE); 5788 + struct page *page = virt_to_page((void *)addr); 5789 + struct page *last = page + nr; 5789 5790 5790 - split_page(virt_to_page((void *)addr), order); 5791 - while (used < alloc_end) { 5792 - free_page(used); 5793 - used += PAGE_SIZE; 5794 - } 5791 + split_page_owner(page, 1 << order); 5792 + split_page_memcg(page, 1 << order); 5793 + while (page < --last) 5794 + set_page_refcounted(last); 5795 + 5796 + last = page + (1UL << order); 5797 + for (page += nr; page < last; page++) 5798 + __free_pages_ok(page, 0, FPI_TO_TAIL); 5795 5799 } 5796 5800 return (void *)addr; 5797 5801 }
+3
mm/zsmalloc.c
··· 2311 2311 int fg; 2312 2312 struct size_class *class = pool->size_class[i]; 2313 2313 2314 + if (!class) 2315 + continue; 2316 + 2314 2317 if (class->index != i) 2315 2318 continue; 2316 2319
+7
net/core/net_namespace.c
··· 117 117 118 118 static int ops_init(const struct pernet_operations *ops, struct net *net) 119 119 { 120 + struct net_generic *ng; 120 121 int err = -ENOMEM; 121 122 void *data = NULL; 122 123 ··· 136 135 if (!err) 137 136 return 0; 138 137 138 + if (ops->id && ops->size) { 139 139 cleanup: 140 + ng = rcu_dereference_protected(net->gen, 141 + lockdep_is_held(&pernet_ops_rwsem)); 142 + ng->ptr[*ops->id] = NULL; 143 + } 144 + 140 145 kfree(data); 141 146 142 147 out:
+1 -1
net/ethtool/pse-pd.c
··· 64 64 if (ret < 0) 65 65 return ret; 66 66 67 - ret = pse_get_pse_attributes(dev, info->extack, data); 67 + ret = pse_get_pse_attributes(dev, info ? info->extack : NULL, data); 68 68 69 69 ethnl_ops_complete(dev); 70 70
+1
net/ipv4/tcp.c
··· 457 457 WRITE_ONCE(sk->sk_sndbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_wmem[1])); 458 458 WRITE_ONCE(sk->sk_rcvbuf, READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[1])); 459 459 460 + set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); 460 461 sk_sockets_allocated_inc(sk); 461 462 } 462 463 EXPORT_SYMBOL(tcp_init_sock);
+2 -1
net/ipv4/tcp_input.c
··· 2192 2192 */ 2193 2193 static bool tcp_check_sack_reneging(struct sock *sk, int flag) 2194 2194 { 2195 - if (flag & FLAG_SACK_RENEGING) { 2195 + if (flag & FLAG_SACK_RENEGING && 2196 + flag & FLAG_SND_UNA_ADVANCED) { 2196 2197 struct tcp_sock *tp = tcp_sk(sk); 2197 2198 unsigned long delay = max(usecs_to_jiffies(tp->srtt_us >> 4), 2198 2199 msecs_to_jiffies(10));
+3 -1
net/ipv4/tcp_ipv4.c
··· 1874 1874 __skb_push(skb, hdrlen); 1875 1875 1876 1876 no_coalesce: 1877 + limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1); 1878 + 1877 1879 /* Only socket owner can try to collapse/prune rx queues 1878 1880 * to reduce memory overhead, so add a little headroom here. 1879 1881 * Few sockets backlog are possibly concurrently non empty. 1880 1882 */ 1881 - limit = READ_ONCE(sk->sk_rcvbuf) + READ_ONCE(sk->sk_sndbuf) + 64*1024; 1883 + limit += 64 * 1024; 1882 1884 1883 1885 if (unlikely(sk_add_backlog(sk, skb, limit))) { 1884 1886 bh_unlock_sock(sk);
+1
net/ipv4/udp.c
··· 1624 1624 { 1625 1625 udp_lib_init_sock(sk); 1626 1626 sk->sk_destruct = udp_destruct_sock; 1627 + set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags); 1627 1628 return 0; 1628 1629 } 1629 1630
+15 -8
net/kcm/kcmsock.c
··· 162 162 /* Buffer limit is okay now, add to ready list */ 163 163 list_add_tail(&kcm->wait_rx_list, 164 164 &kcm->mux->kcm_rx_waiters); 165 - kcm->rx_wait = true; 165 + /* paired with lockless reads in kcm_rfree() */ 166 + WRITE_ONCE(kcm->rx_wait, true); 166 167 } 167 168 168 169 static void kcm_rfree(struct sk_buff *skb) ··· 179 178 /* For reading rx_wait and rx_psock without holding lock */ 180 179 smp_mb__after_atomic(); 181 180 182 - if (!kcm->rx_wait && !kcm->rx_psock && 181 + if (!READ_ONCE(kcm->rx_wait) && !READ_ONCE(kcm->rx_psock) && 183 182 sk_rmem_alloc_get(sk) < sk->sk_rcvlowat) { 184 183 spin_lock_bh(&mux->rx_lock); 185 184 kcm_rcv_ready(kcm); ··· 238 237 if (kcm_queue_rcv_skb(&kcm->sk, skb)) { 239 238 /* Should mean socket buffer full */ 240 239 list_del(&kcm->wait_rx_list); 241 - kcm->rx_wait = false; 240 + /* paired with lockless reads in kcm_rfree() */ 241 + WRITE_ONCE(kcm->rx_wait, false); 242 242 243 243 /* Commit rx_wait to read in kcm_free */ 244 244 smp_wmb(); ··· 282 280 kcm = list_first_entry(&mux->kcm_rx_waiters, 283 281 struct kcm_sock, wait_rx_list); 284 282 list_del(&kcm->wait_rx_list); 285 - kcm->rx_wait = false; 283 + /* paired with lockless reads in kcm_rfree() */ 284 + WRITE_ONCE(kcm->rx_wait, false); 286 285 287 286 psock->rx_kcm = kcm; 288 - kcm->rx_psock = psock; 287 + /* paired with lockless reads in kcm_rfree() */ 288 + WRITE_ONCE(kcm->rx_psock, psock); 289 289 290 290 spin_unlock_bh(&mux->rx_lock); 291 291 ··· 314 310 spin_lock_bh(&mux->rx_lock); 315 311 316 312 psock->rx_kcm = NULL; 317 - kcm->rx_psock = NULL; 313 + /* paired with lockless reads in kcm_rfree() */ 314 + WRITE_ONCE(kcm->rx_psock, NULL); 318 315 319 316 /* Commit kcm->rx_psock before sk_rmem_alloc_get to sync with 320 317 * kcm_rfree ··· 1245 1240 if (!kcm->rx_psock) { 1246 1241 if (kcm->rx_wait) { 1247 1242 list_del(&kcm->wait_rx_list); 1248 - kcm->rx_wait = false; 1243 + /* paired with lockless reads in kcm_rfree() */ 1244 + WRITE_ONCE(kcm->rx_wait, false); 
1249 1245 } 1250 1246 1251 1247 requeue_rx_msgs(mux, &kcm->sk.sk_receive_queue); ··· 1799 1793 1800 1794 if (kcm->rx_wait) { 1801 1795 list_del(&kcm->wait_rx_list); 1802 - kcm->rx_wait = false; 1796 + /* paired with lockless reads in kcm_rfree() */ 1797 + WRITE_ONCE(kcm->rx_wait, false); 1803 1798 } 1804 1799 /* Move any pending receive messages to other kcm sockets */ 1805 1800 requeue_rx_msgs(mux, &sk->sk_receive_queue);
+12 -4
net/tipc/topsrv.c
··· 450 450 static void tipc_topsrv_accept(struct work_struct *work) 451 451 { 452 452 struct tipc_topsrv *srv = container_of(work, struct tipc_topsrv, awork); 453 - struct socket *lsock = srv->listener; 454 - struct socket *newsock; 453 + struct socket *newsock, *lsock; 455 454 struct tipc_conn *con; 456 455 struct sock *newsk; 457 456 int ret; 457 + 458 + spin_lock_bh(&srv->idr_lock); 459 + if (!srv->listener) { 460 + spin_unlock_bh(&srv->idr_lock); 461 + return; 462 + } 463 + lsock = srv->listener; 464 + spin_unlock_bh(&srv->idr_lock); 458 465 459 466 while (1) { 460 467 ret = kernel_accept(lsock, &newsock, O_NONBLOCK); ··· 496 489 497 490 read_lock_bh(&sk->sk_callback_lock); 498 491 srv = sk->sk_user_data; 499 - if (srv->listener) 492 + if (srv) 500 493 queue_work(srv->rcv_wq, &srv->awork); 501 494 read_unlock_bh(&sk->sk_callback_lock); 502 495 } ··· 706 699 __module_get(lsock->sk->sk_prot_creator->owner); 707 700 srv->listener = NULL; 708 701 spin_unlock_bh(&srv->idr_lock); 709 - sock_release(lsock); 702 + 710 703 tipc_topsrv_work_stop(srv); 704 + sock_release(lsock); 711 705 idr_destroy(&srv->conn_idr); 712 706 kfree(srv); 713 707 }
+3 -2
security/selinux/ss/services.c
··· 2022 2022 * in `newc'. Verify that the context is valid 2023 2023 * under the new policy. 2024 2024 */ 2025 - static int convert_context(struct context *oldc, struct context *newc, void *p) 2025 + static int convert_context(struct context *oldc, struct context *newc, void *p, 2026 + gfp_t gfp_flags) 2026 2027 { 2027 2028 struct convert_context_args *args; 2028 2029 struct ocontext *oc; ··· 2037 2036 args = p; 2038 2037 2039 2038 if (oldc->str) { 2040 - s = kstrdup(oldc->str, GFP_KERNEL); 2039 + s = kstrdup(oldc->str, gfp_flags); 2041 2040 if (!s) 2042 2041 return -ENOMEM; 2043 2042
+2 -2
security/selinux/ss/sidtab.c
··· 325 325 } 326 326 327 327 rc = convert->func(context, &dst_convert->context, 328 - convert->args); 328 + convert->args, GFP_ATOMIC); 329 329 if (rc) { 330 330 context_destroy(&dst->context); 331 331 goto out_unlock; ··· 404 404 while (i < SIDTAB_LEAF_ENTRIES && *pos < count) { 405 405 rc = convert->func(&esrc->ptr_leaf->entries[i].context, 406 406 &edst->ptr_leaf->entries[i].context, 407 - convert->args); 407 + convert->args, GFP_KERNEL); 408 408 if (rc) 409 409 return rc; 410 410 (*pos)++;
+1 -1
security/selinux/ss/sidtab.h
··· 65 65 }; 66 66 67 67 struct sidtab_convert_params { 68 - int (*func)(struct context *oldc, struct context *newc, void *args); 68 + int (*func)(struct context *oldc, struct context *newc, void *args, gfp_t gfp_flags); 69 69 void *args; 70 70 struct sidtab *target; 71 71 };
+1
tools/include/uapi/linux/kvm.h
··· 1177 1177 #define KVM_CAP_VM_DISABLE_NX_HUGE_PAGES 220 1178 1178 #define KVM_CAP_S390_ZPCI_OP 221 1179 1179 #define KVM_CAP_S390_CPU_TOPOLOGY 222 1180 + #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 1180 1181 1181 1182 #ifdef KVM_CAP_IRQ_ROUTING 1182 1183
+13
tools/testing/selftests/bpf/prog_tests/btf.c
··· 3936 3936 .err_str = "Invalid type_id", 3937 3937 }, 3938 3938 { 3939 + .descr = "decl_tag test #16, func proto, return type", 3940 + .raw_types = { 3941 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 3942 + BTF_VAR_ENC(NAME_TBD, 1, 0), /* [2] */ 3943 + BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DECL_TAG, 0, 0), 2), (-1), /* [3] */ 3944 + BTF_FUNC_PROTO_ENC(3, 0), /* [4] */ 3945 + BTF_END_RAW, 3946 + }, 3947 + BTF_STR_SEC("\0local\0tag1"), 3948 + .btf_load_err = true, 3949 + .err_str = "Invalid return type", 3950 + }, 3951 + { 3939 3952 .descr = "type_tag test #1", 3940 3953 .raw_types = { 3941 3954 BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */
+2 -2
tools/testing/selftests/bpf/progs/user_ringbuf_success.c
··· 47 47 if (status) { 48 48 bpf_printk("bpf_dynptr_read() failed: %d\n", status); 49 49 err = 1; 50 - return 0; 50 + return 1; 51 51 } 52 52 } else { 53 53 sample = bpf_dynptr_data(dynptr, 0, sizeof(*sample)); 54 54 if (!sample) { 55 55 bpf_printk("Unexpectedly failed to get sample\n"); 56 56 err = 2; 57 - return 0; 57 + return 1; 58 58 } 59 59 stack_sample = *sample; 60 60 }
+3 -1
tools/testing/selftests/drivers/net/bonding/Makefile
··· 7 7 bond-lladdr-target.sh \ 8 8 dev_addr_lists.sh 9 9 10 - TEST_FILES := lag_lib.sh 10 + TEST_FILES := \ 11 + lag_lib.sh \ 12 + net_forwarding_lib.sh 11 13 12 14 include ../../../lib.mk
+1 -1
tools/testing/selftests/drivers/net/bonding/dev_addr_lists.sh
··· 14 14 REQUIRE_MZ=no 15 15 NUM_NETIFS=0 16 16 lib_dir=$(dirname "$0") 17 - source "$lib_dir"/../../../net/forwarding/lib.sh 17 + source "$lib_dir"/net_forwarding_lib.sh 18 18 19 19 source "$lib_dir"/lag_lib.sh 20 20
+2 -2
tools/testing/selftests/drivers/net/dsa/test_bridge_fdb_stress.sh
··· 18 18 REQUIRE_JQ="no" 19 19 REQUIRE_MZ="no" 20 20 NETIF_CREATE="no" 21 - lib_dir=$(dirname $0)/../../../net/forwarding 22 - source $lib_dir/lib.sh 21 + lib_dir=$(dirname "$0") 22 + source "$lib_dir"/lib.sh 23 23 24 24 cleanup() { 25 25 echo "Cleaning up"
+4
tools/testing/selftests/drivers/net/team/Makefile
··· 3 3 4 4 TEST_PROGS := dev_addr_lists.sh 5 5 6 + TEST_FILES := \ 7 + lag_lib.sh \ 8 + net_forwarding_lib.sh 9 + 6 10 include ../../../lib.mk
+3 -3
tools/testing/selftests/drivers/net/team/dev_addr_lists.sh
··· 11 11 REQUIRE_MZ=no 12 12 NUM_NETIFS=0 13 13 lib_dir=$(dirname "$0") 14 - source "$lib_dir"/../../../net/forwarding/lib.sh 14 + source "$lib_dir"/net_forwarding_lib.sh 15 15 16 - source "$lib_dir"/../bonding/lag_lib.sh 16 + source "$lib_dir"/lag_lib.sh 17 17 18 18 19 19 destroy() 20 20 { 21 - local ifnames=(dummy0 dummy1 team0 mv0) 21 + local ifnames=(dummy1 dummy2 team0 mv0) 22 22 local ifname 23 23 24 24 for ifname in "${ifnames[@]}"; do
+1 -1
tools/testing/selftests/ftrace/test.d/dynevent/test_duplicates.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: Generic dynamic event - check if duplicate events are caught 4 - # requires: dynamic_events "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README 4 + # requires: dynamic_events "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README 5 5 6 6 echo 0 > events/enable 7 7
+1 -1
tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-eprobe.tc
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # description: event trigger - test inter-event histogram trigger eprobe on synthetic event 4 - # requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/]<event>] <attached-group>.<attached-event> [<args>]":README 4 + # requires: dynamic_events synthetic_events events/syscalls/sys_enter_openat/hist "e[:[<group>/][<event>]] <attached-group>.<attached-event> [<args>]":README 5 5 6 6 echo 0 > events/enable 7 7
+2 -4
tools/testing/selftests/futex/functional/Makefile
··· 3 3 CFLAGS := $(CFLAGS) -g -O2 -Wall -D_GNU_SOURCE -pthread $(INCLUDES) $(KHDR_INCLUDES) 4 4 LDLIBS := -lpthread -lrt 5 5 6 - HEADERS := \ 6 + LOCAL_HDRS := \ 7 7 ../include/futextest.h \ 8 8 ../include/atomic.h \ 9 9 ../include/logging.h 10 - TEST_GEN_FILES := \ 10 + TEST_GEN_PROGS := \ 11 11 futex_wait_timeout \ 12 12 futex_wait_wouldblock \ 13 13 futex_requeue_pi \ ··· 24 24 top_srcdir = ../../../../.. 25 25 DEFAULT_INSTALL_HDR_PATH := 1 26 26 include ../../lib.mk 27 - 28 - $(TEST_GEN_FILES): $(HEADERS)
+3 -3
tools/testing/selftests/intel_pstate/Makefile
··· 2 2 CFLAGS := $(CFLAGS) -Wall -D_GNU_SOURCE 3 3 LDLIBS += -lm 4 4 5 - uname_M := $(shell uname -m 2>/dev/null || echo not) 6 - ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/) 5 + ARCH ?= $(shell uname -m 2>/dev/null || echo not) 6 + ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/) 7 7 8 - ifeq (x86,$(ARCH)) 8 + ifeq (x86,$(ARCH_PROCESSED)) 9 9 TEST_GEN_FILES := msr aperf 10 10 endif 11 11
+3 -3
tools/testing/selftests/kexec/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 # Makefile for kexec tests 3 3 4 - uname_M := $(shell uname -m 2>/dev/null || echo not) 5 - ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/) 4 + ARCH ?= $(shell uname -m 2>/dev/null || echo not) 5 + ARCH_PROCESSED := $(shell echo $(ARCH) | sed -e s/i.86/x86/ -e s/x86_64/x86/) 6 6 7 - ifeq ($(ARCH),$(filter $(ARCH),x86 ppc64le)) 7 + ifeq ($(ARCH_PROCESSED),$(filter $(ARCH_PROCESSED),x86 ppc64le)) 8 8 TEST_PROGS := test_kexec_load.sh test_kexec_file_load.sh 9 9 TEST_FILES := kexec_common_lib.sh 10 10
+2 -2
tools/testing/selftests/kvm/aarch64/vgic_init.c
··· 662 662 : KVM_DEV_TYPE_ARM_VGIC_V2; 663 663 664 664 if (!__kvm_test_create_device(v.vm, other)) { 665 - ret = __kvm_test_create_device(v.vm, other); 666 - TEST_ASSERT(ret && (errno == EINVAL || errno == EEXIST), 665 + ret = __kvm_create_device(v.vm, other); 666 + TEST_ASSERT(ret < 0 && (errno == EINVAL || errno == EEXIST), 667 667 "create GIC device while other version exists"); 668 668 } 669 669
+1 -1
tools/testing/selftests/kvm/memslot_modification_stress_test.c
··· 67 67 static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay, 68 68 uint64_t nr_modifications) 69 69 { 70 - const uint64_t pages = 1; 70 + uint64_t pages = max_t(int, vm->page_size, getpagesize()) / vm->page_size; 71 71 uint64_t gpa; 72 72 int i; 73 73
+2 -2
tools/testing/selftests/lib.mk
··· 70 70 run_tests: all 71 71 ifdef building_out_of_srctree 72 72 @if [ "X$(TEST_PROGS)$(TEST_PROGS_EXTENDED)$(TEST_FILES)" != "X" ]; then \ 73 - rsync -aq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \ 73 + rsync -aLq $(TEST_PROGS) $(TEST_PROGS_EXTENDED) $(TEST_FILES) $(OUTPUT); \ 74 74 fi 75 75 @if [ "X$(TEST_PROGS)" != "X" ]; then \ 76 76 $(call RUN_TESTS, $(TEST_GEN_PROGS) $(TEST_CUSTOM_PROGS) \ ··· 84 84 85 85 define INSTALL_SINGLE_RULE 86 86 $(if $(INSTALL_LIST),@mkdir -p $(INSTALL_PATH)) 87 - $(if $(INSTALL_LIST),rsync -a $(INSTALL_LIST) $(INSTALL_PATH)/) 87 + $(if $(INSTALL_LIST),rsync -aL $(INSTALL_LIST) $(INSTALL_PATH)/) 88 88 endef 89 89 90 90 define INSTALL_RULE
-1
tools/testing/selftests/memory-hotplug/mem-on-off-test.sh
··· 138 138 { 139 139 for memory in `hotpluggable_offline_memory`; do 140 140 if ! online_memory_expect_success $memory; then 141 - echo "$FUNCNAME $memory: unexpected fail" >&2 142 141 retval=1 143 142 fi 144 143 done
+32 -3
tools/testing/selftests/perf_events/sigtrap_threads.c
··· 62 62 .remove_on_exec = 1, /* Required by sigtrap. */ 63 63 .sigtrap = 1, /* Request synchronous SIGTRAP on event. */ 64 64 .sig_data = TEST_SIG_DATA(addr, id), 65 + .exclude_kernel = 1, /* To allow */ 66 + .exclude_hv = 1, /* running as !root */ 65 67 }; 66 68 return attr; 67 69 } ··· 95 93 96 94 __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED); 97 95 iter = ctx.iterate_on; /* read */ 98 - for (i = 0; i < iter - 1; i++) { 99 - __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED); 100 - ctx.iterate_on = iter; /* idempotent write */ 96 + if (iter >= 0) { 97 + for (i = 0; i < iter - 1; i++) { 98 + __atomic_fetch_add(&ctx.tids_want_signal, tid, __ATOMIC_RELAXED); 99 + ctx.iterate_on = iter; /* idempotent write */ 100 + } 101 + } else { 102 + while (ctx.iterate_on); 101 103 } 102 104 103 105 return NULL; ··· 209 203 210 204 EXPECT_EQ(ctx.signal_count, NUM_THREADS * ctx.iterate_on); 211 205 EXPECT_EQ(ctx.tids_want_signal, 0); 206 + EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on); 207 + EXPECT_EQ(ctx.first_siginfo.si_perf_type, PERF_TYPE_BREAKPOINT); 208 + EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0)); 209 + } 210 + 211 + TEST_F(sigtrap_threads, signal_stress_with_disable) 212 + { 213 + const int target_count = NUM_THREADS * 3000; 214 + int i; 215 + 216 + ctx.iterate_on = -1; 217 + 218 + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); 219 + pthread_barrier_wait(&self->barrier); 220 + while (__atomic_load_n(&ctx.signal_count, __ATOMIC_RELAXED) < target_count) { 221 + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0); 222 + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_ENABLE, 0), 0); 223 + } 224 + ctx.iterate_on = 0; 225 + for (i = 0; i < NUM_THREADS; i++) 226 + ASSERT_EQ(pthread_join(self->threads[i], NULL), 0); 227 + EXPECT_EQ(ioctl(self->fd, PERF_EVENT_IOC_DISABLE, 0), 0); 228 + 212 229 EXPECT_EQ(ctx.first_siginfo.si_addr, &ctx.iterate_on); 213 230 EXPECT_EQ(ctx.first_siginfo.si_perf_type, 
PERF_TYPE_BREAKPOINT); 214 231 EXPECT_EQ(ctx.first_siginfo.si_perf_data, TEST_SIG_DATA(&ctx.iterate_on, 0));
+1 -1
tools/verification/dot2/dot2c.py
··· 111 111 112 112 def format_aut_init_header(self): 113 113 buff = [] 114 - buff.append("struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def)) 114 + buff.append("static struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def)) 115 115 return buff 116 116 117 117 def __get_string_vector_per_line_content(self, buff):
+11
virt/kvm/kvm_main.c
··· 4839 4839 }; 4840 4840 }; 4841 4841 4842 + long __weak kvm_arch_vm_compat_ioctl(struct file *filp, unsigned int ioctl, 4843 + unsigned long arg) 4844 + { 4845 + return -ENOTTY; 4846 + } 4847 + 4842 4848 static long kvm_vm_compat_ioctl(struct file *filp, 4843 4849 unsigned int ioctl, unsigned long arg) 4844 4850 { ··· 4853 4847 4854 4848 if (kvm->mm != current->mm || kvm->vm_dead) 4855 4849 return -EIO; 4850 + 4851 + r = kvm_arch_vm_compat_ioctl(filp, ioctl, arg); 4852 + if (r != -ENOTTY) 4853 + return r; 4854 + 4856 4855 switch (ioctl) { 4857 4856 #ifdef CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT 4858 4857 case KVM_CLEAR_DIRTY_LOG: {